Reaction Ensemble Monte Carlo Simulation of Complex Molecular Systems.
Rosch, Thomas W; Maginn, Edward J
2011-02-08
Acceptance rules for reaction ensemble Monte Carlo (RxMC) simulations containing classically modeled atomistic degrees of freedom are derived for complex molecular systems where insertions and deletions are achieved gradually by utilizing the continuous fractional component (CFC) method. A self-consistent manner in which to utilize statistical mechanical data contained in ideal gas free energy parameters during RxMC moves is presented. The method is tested by applying it to two previously studied systems containing intramolecular degrees of freedom: the propene metathesis reaction and methyl-tert-butyl-ether (MTBE) synthesis. Quantitative agreement is found between the current results and those of Keil et al. (J. Chem. Phys. 2005, 122, 164705) for the propene metathesis reaction. Differences are observed between the equilibrium concentrations of the present study and those of Lísal et al. (AIChE J. 2000, 46, 866-875) for the MTBE reaction. It is shown that most of this difference can be attributed to an incorrect formulation of the Monte Carlo acceptance rule. Efficiency gains using CFC MC as opposed to single stage molecule insertions are presented.
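For orientation, a minimal Python sketch of the textbook single-step RxMC acceptance rule (the Smith-Triska form, not the CFC-staged rule derived in this paper), showing where the ideal-gas partition-function data and the configurational energy change enter; the function and its arguments are illustrative assumptions.

```python
import math
import random

def rxmc_forward_accept(delta_U, beta, V, nu, N, q):
    """Textbook single-step RxMC acceptance probability for a forward
    reaction step; not the CFC-staged rule derived in the paper.
    nu : stoichiometric coefficients (negative for reactants)
    N  : current molecule counts
    q  : ideal-gas molecular partition functions per unit volume
         (this is where the ideal-gas free-energy data enter)."""
    ratio = math.exp(-beta * delta_U)
    for sp, v in nu.items():
        if N[sp] + v < 0:
            return 0.0                      # cannot delete more molecules than present
        ratio *= (q[sp] * V) ** v
        ratio *= math.factorial(N[sp]) / math.factorial(N[sp] + v)
    return min(1.0, ratio)

# a trial reaction move is accepted if random.random() < rxmc_forward_accept(...)
```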
Iba, Yukito
2000-01-01
"Extended Ensemble Monte Carlo" is a generic term that indicates a set of algorithms which are now popular in a variety of fields in physics and statistical information processing. Exchange Monte Carlo (Metropolis-Coupled Chain, Parallel Tempering), Simulated Tempering (Expanded Ensemble Monte Carlo), and Multicanonical Monte Carlo (Adaptive Umbrella Sampling) are typical members of this family. Here we give a cross-disciplinary survey of these algorithms with special emphasis on the great f...
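As a concrete example of one member of this family, a minimal sketch (assuming standard Metropolis replica exchange; not taken from the survey itself) of the swap acceptance used in Exchange Monte Carlo / parallel tempering:

```python
import math
import random

def pt_swap_accept(beta_i, beta_j, E_i, E_j):
    """Metropolis acceptance probability for exchanging the configurations
    of two replicas held at inverse temperatures beta_i and beta_j."""
    return min(1.0, math.exp((beta_i - beta_j) * (E_i - E_j)))

# swap configurations of replicas i and j with this probability:
# if random.random() < pt_swap_accept(beta[i], beta[j], energy[i], energy[j]): ...
```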
Mullen, Ryan Gotchy; Maginn, Edward J
2017-08-17
The original reaction move for the reaction ensemble Monte Carlo (RxMC) method is adapted to align both the position and orientation of inserted product molecules and deleted reactant molecules. The accuracy and efficiency of this move are demonstrated for xylene isomerization in vapor, liquid, and supercritical phases. Classical RxMC requires the ideal gas free energy of reaction ΔGrxn(ideal) as an input. We compare three methods for computing ΔGrxn(ideal): using tabulated enthalpies and entropies of formation, using the harmonic oscillator and rigid rotor approximations, and using QM/MM alchemical transformation combined with the multistate Bennett acceptance ratio. We find that the tabulated free energies of reaction give the best agreement with experimental equilibrium compositions in bulk fluids. RxMC simulations in a carbon nanotube with an inner diameter of approximately 6 Å show that p-xylene becomes the dominant isomer under confinement, an effect consistent with the production of p-xylene in the zeolite ZSM-5. We also show that o-xylene becomes the dominant isomer in nanotubes with an inner diameter of 7-8 Å. We find that both m- and p-xylene exhibit a loss of rotational entropy in nanotubes of this diameter, effectively allowing o-xylene to fit into cavities inaccessible to the other isomers.
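A small illustrative sketch of the "tabulated" route to ΔGrxn(ideal) described above, i.e. ΔG = ΔH − TΔS from enthalpies of formation and absolute entropies; the gas-phase xylene numbers below are rough placeholders, not values taken from the paper.

```python
import math

R = 8.314462618  # J mol^-1 K^-1

def delta_G_rxn_ideal(T, table, stoich):
    """Ideal-gas free energy of reaction from tabulated data.
    table : name -> (dHf [J/mol], S [J/(mol K)])
    stoich: stoichiometric coefficients (negative for reactants)."""
    dH = sum(nu * table[s][0] for s, nu in stoich.items())
    dS = sum(nu * table[s][1] for s, nu in stoich.items())
    return dH - T * dS

# placeholder data for the m-xylene -> p-xylene step (illustration only)
table = {"m-xylene": (17_240.0, 357.8), "p-xylene": (17_950.0, 352.4)}
dG = delta_G_rxn_ideal(600.0, table, {"m-xylene": -1, "p-xylene": +1})
K_ideal = math.exp(-dG / (R * 600.0))   # ideal-gas equilibrium constant
```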
Quantifying Monte Carlo uncertainty in ensemble Kalman filter
Thulin, Kristian; Naevdal, Geir; Skaug, Hans Julius; Aanonsen, Sigurd Ivar
2009-01-15
This report presents results obtained during Kristian Thulin's PhD study, and is a slightly modified form of a paper submitted to SPE Journal. Kristian Thulin did most of his portion of the work while a PhD student at CIPR, University of Bergen. The ensemble Kalman filter (EnKF) is currently considered one of the most promising methods for conditioning reservoir simulation models to production data. The EnKF is a sequential Monte Carlo method based on a low-rank approximation of the system covariance matrix. The posterior probability distribution of model variables may be estimated from the updated ensemble, but because of the low-rank covariance approximation, the updated ensemble members become correlated samples from the posterior distribution. We suggest using multiple EnKF runs, each with a smaller ensemble size, to obtain truly independent samples from the posterior distribution. This allows a point-wise confidence interval for the posterior cumulative distribution function (CDF) to be constructed. We present a methodology for finding an optimal combination of ensemble batch size (n) and number of EnKF runs (m) while keeping the total number of ensemble members (m x n) constant. The optimal combination of n and m is found by minimizing the integrated mean square error (MSE) for the CDFs, and we choose to define an EnKF run with 10,000 ensemble members as having zero Monte Carlo error. The methodology is tested on a simplistic, synthetic 2D model, but should also be applicable to larger, more realistic models.
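A minimal numpy sketch of the batching idea (m independent runs of ensemble size n, point-wise confidence band for the posterior CDF); the Gaussian samples are synthetic and the helper function is hypothetical, not the authors' code.

```python
import numpy as np

def cdf_confidence(batch_samples, x_grid, z=1.96):
    """Point-wise confidence band for a posterior CDF estimated from m
    independent runs (rows), each of ensemble size n (columns)."""
    batch_samples = np.asarray(batch_samples)          # shape (m, n)
    x_grid = np.asarray(x_grid)
    # empirical CDF of each batch, evaluated on the grid -> shape (m, G)
    cdfs = (batch_samples[:, None, :] <= x_grid[None, :, None]).mean(axis=2)
    mean_cdf = cdfs.mean(axis=0)
    se = cdfs.std(axis=0, ddof=1) / np.sqrt(cdfs.shape[0])
    return mean_cdf, mean_cdf - z * se, mean_cdf + z * se

# synthetic example: m = 20 independent runs of n = 50 members each
samples = np.random.normal(size=(20, 50))
grid = np.linspace(-3.0, 3.0, 61)
cdf, lower, upper = cdf_confidence(samples, grid)
```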
Non-Boltzmann Ensembles and Monte Carlo Simulations
Murthy, K. P. N.
2016-10-01
Boltzmann sampling based on the Metropolis algorithm has been extensively used for simulating a canonical ensemble and for calculating macroscopic properties of a closed system at desired temperatures. An estimate of a mechanical property, like energy, of an equilibrium system is made by averaging over a large number of microstates generated by Boltzmann Monte Carlo methods. This is possible because we can assign a numerical value for energy to each microstate. However, a thermal property like entropy is not easily accessible to these methods. The reason is simple: we cannot assign a numerical value for entropy to a microstate. Entropy is not a property associated with any single microstate; it is a collective property of all the microstates. Toward calculating entropy and other thermal properties, a non-Boltzmann Monte Carlo technique called umbrella sampling was proposed some forty years ago. Umbrella sampling has since undergone several metamorphoses, and we now have multicanonical Monte Carlo, entropic sampling, flat histogram methods, the Wang-Landau algorithm, etc. This class of methods generates non-Boltzmann ensembles, which are unphysical. However, physical quantities can be calculated as follows: first un-weight a microstate of the entropic ensemble; then re-weight it to the desired physical ensemble; finally, carry out a weighted average over the entropic ensemble to estimate physical quantities. In this talk I shall describe the most recent non-Boltzmann Monte Carlo method and show how to calculate free energy for a few systems. We first consider estimation of free energy as a function of energy at different temperatures to characterize the phase transition in a hairpin DNA in the presence of an unzipping force. Next we consider free energy as a function of order parameter, and to this end we estimate the density of states g(E, M) as a function of both energy E and order parameter M. This is carried out in two stages. We estimate g(E) in the first stage. Employing g
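A minimal sketch of the flat-histogram (Wang-Landau) update at the heart of this family of methods; it assumes a user-supplied proposal and an energy function that returns one of the pre-defined bins, and it does not reproduce the speaker's two-stage g(E, M) scheme.

```python
import math
import random

def wang_landau(initial_state, energy_of, propose, E_bins,
                sweeps_per_check=10_000, flatness=0.8):
    """Build ln g(E) on a set of discrete energy bins by flat-histogram
    sampling (Wang-Landau).  Illustrative sketch only."""
    ln_g = {E: 0.0 for E in E_bins}
    hist = {E: 0 for E in E_bins}
    ln_f = 1.0
    state, E = initial_state, energy_of(initial_state)
    while ln_f > 1e-6:
        for _ in range(sweeps_per_check):
            trial = propose(state)
            E_trial = energy_of(trial)
            # accept with min(1, g(E)/g(E_trial)) so visits become flat in E
            if math.log(random.random()) < ln_g[E] - ln_g[E_trial]:
                state, E = trial, E_trial
            ln_g[E] += ln_f
            hist[E] += 1
        if min(hist.values()) > flatness * (sum(hist.values()) / len(hist)):
            ln_f *= 0.5                     # refine the modification factor
            hist = {E: 0 for E in E_bins}
    return ln_g                             # ln g(E) up to an additive constant
```

Physical averages at any temperature then follow by re-weighting with exp(-βE) g(E), as described in the abstract.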
Gibbs Ensemble Monte-Carlo Simulation and Its Application
Anonymous
2000-01-01
A recently developed and very practical Gibbs Ensemble Monte-Carlo simulation technique is introduced. The method has been applied to study the phase diagrams of Lennard-Jones mixtures and of the methane-tetrafluoromethane system; the simulated liquid-liquid phase diagram of the methane-tetrafluoromethane mixture and the phase diagrams of Lennard-Jones mixtures are presented and compared with experimentally obtained phase diagrams. The simulation results are in good agreement with experimental and previous results.
Grand Canonical Ensemble Monte Carlo Simulation of Depletion Interactions in Colloidal Suspensions
GUO Ji-Yuan; XIAO Chang-Ming
2008-01-01
Depletion interactions in colloidal suspensions confined between two parallel plates are investigated by using the acceptance ratio method with grand canonical ensemble Monte Carlo simulation. The numerical results show that both the depletion potential and the depletion force are affected by the confinement from the two parallel plates. Furthermore, it is found that in the grand canonical ensemble Monte Carlo simulation, the depletion interactions are strongly affected by the generalized chemical potential.
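For reference, the standard grand canonical insertion and deletion acceptance probabilities on which such a simulation is built (a generic textbook sketch, not the authors' code for the confined colloid-depletant system).

```python
import math

def gcmc_insert_accept(beta, mu, V, N, dU, Lambda3):
    """Acceptance probability for inserting one particle in the grand
    canonical ensemble; dU = U(N+1) - U(N), Lambda3 = thermal wavelength^3."""
    return min(1.0, V / (Lambda3 * (N + 1)) * math.exp(beta * mu - beta * dU))

def gcmc_delete_accept(beta, mu, V, N, dU, Lambda3):
    """Acceptance probability for deleting one particle; here dU = U(N-1) - U(N)."""
    return min(1.0, Lambda3 * N / V * math.exp(-beta * mu - beta * dU))
```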
Ensemble Bayesian model averaging using Markov Chain Monte Carlo sampling
Vrugt, J.A.; Diks, C.G.H.; Clark, M.
2008-01-01
Bayesian model averaging (BMA) has recently been proposed as a statistical method to calibrate forecast ensembles from numerical weather models. Successful implementation of BMA, however, requires accurate estimates of the weights and variances of the individual competing models in the ensemble. In t
Nonlinear reaction coordinate analysis in the reweighted path ensemble
Lechner, W.; Rogal, J.; Juraszek, J.; Ensing, B.; Bolhuis, P.G.
2010-01-01
We present a flexible nonlinear reaction coordinate analysis method for the transition path ensemble based on the likelihood maximization approach developed by Peters and Trout [J. Chem. Phys. 125, 054108 (2006)]. By parametrizing the reaction coordinate by a string of images in a collective variab
A benchmark for reaction coordinates in the transition path ensemble.
Li, Wenjin; Ma, Ao
2016-04-01
The molecular mechanism of a reaction is embedded in its transition path ensemble, the complete collection of reactive trajectories. Utilizing the information in the transition path ensemble alone, we developed a novel metric, which we termed the emergent potential energy, for distinguishing reaction coordinates from the bath modes. The emergent potential energy can be understood as the average energy cost for making a displacement of a coordinate in the transition path ensemble. Whereas displacing a bath mode incurs essentially no cost, it costs significantly to move the reaction coordinate. Based on some general assumptions of the behaviors of reaction and bath coordinates in the transition path ensemble, we proved theoretically with statistical mechanics that the emergent potential energy could serve as a benchmark of reaction coordinates and demonstrated its effectiveness by applying it to a prototypical system of biomolecular dynamics. Using the emergent potential energy as guidance, we developed a committor-free and intuition-independent method for identifying reaction coordinates in complex systems. We expect this method to be applicable to a wide range of reaction processes in complex biomolecular systems.
Hard, charged spheres in spherical pores. Grand canonical ensemble Monte Carlo calculations
Sloth, Peter; Sørensen, T. S.
1992-01-01
A model consisting of hard charged spheres inside hard spherical pores is investigated by grand canonical ensemble Monte Carlo calculations. It is found that the mean ionic density profiles in the pores are almost the same when the wall of the pore is moderately charged as when it is uncharged...
Hartnett, Michael; Ren, Lei
2013-04-01
This paper describes the application of Ensemble Optimal Interpolation (EnOI) with Monte Carlo (MC) simulation for surface current forecasting. The Environmental Fluid Dynamics Code (EFDC) is run for 7 days with initial and boundary conditions. For the assimilation process, Direct Insertion (DI), Optimal Interpolation (OI), and Ensemble Optimal Interpolation (EnOI) approaches are applied from t = 5.0 d, and wind forcing is switched off during the updating process. For Optimal Interpolation, the background error covariance is estimated from the first run combined with an empirical correlation function, while for Ensemble Optimal Interpolation, the background error covariance is calculated from the ensemble of the first run, and the optimal ensemble size is obtained by comparing different assimilation runs. Different strategies are proposed to obtain the measurement error covariance; the optimal measurement error covariance gives the least forecast error. Different kinds of pseudo-measurements are produced from Monte Carlo simulation by adding different types of perturbations that obey specified distributions. A series of experiments with distinct perturbations is carried out to show the improvement gained by simulating the stochastic process. Three types of reference points (inside the assimilation area, outside the assimilation area, and on the boundary) are analyzed to show the improvement of the assimilation process and its influence after assimilation. This study also investigates the impact of the updating interval on the assimilation process; a suitable updating interval is chosen by comparison. To compare the improvement of Ensemble Optimal Interpolation over Direct Insertion and Optimal Interpolation, the RMS error and data assimilation skill are calculated.
Monte Carlo Molecular Simulation with Isobaric-Isothermal and Gibbs-NPT Ensembles
Du, Shouhong
2012-05-01
This thesis presents Monte Carlo methods for simulations of the phase behavior of Lennard-Jones fluids. The isobaric-isothermal (NPT) ensemble and the Gibbs-NPT ensemble are introduced in detail. The NPT ensemble is employed to determine the phase diagram of a pure component. The reduced simulation results are verified by comparison with the equation of state of Johnson et al., and results with L-J parameters of methane agree well with the experimental measurements. We adopt the blocking method for variance estimation and error analysis of the simulation results. The relationship between variance and number of Monte Carlo cycles, error propagation, and random number generator performance are also investigated. We review the Gibbs-NPT ensemble employed for the phase equilibrium of a binary mixture. The phase equilibrium is achieved by performing three types of trial move: particle displacement, volume rearrangement, and particle transfer. The simulation models and simulation details are introduced. The simulation results of phase coexistence for methane and ethane are reported with comparison to the experimental data. Good agreement is found for a wide range of pressures. The contribution of this thesis lies in the study of the error analysis with respect to the number of Monte Carlo cycles and the number of particles in some interesting aspects.
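A short sketch of the Flyvbjerg-Petersen blocking analysis adopted here for variance estimation; a generic implementation assuming an input time series of scalar samples, not the thesis code.

```python
import numpy as np

def blocking_error(samples):
    """Repeatedly average neighbouring samples and record the estimated
    standard error of the mean at each block level; the error estimate
    plateaus once blocks are longer than the correlation time."""
    x = np.asarray(samples, dtype=float)
    errors = []
    while len(x) >= 4:
        n = len(x)
        errors.append(np.sqrt(np.var(x, ddof=1) / n))
        # pairwise block averaging (drop a trailing odd sample if present)
        x = 0.5 * (x[: 2 * (n // 2) : 2] + x[1 : 2 * (n // 2) : 2])
    return errors

# errs = blocking_error(pressure_trace); plot errs vs block level and read off the plateau
```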
Nuclear reactions in Monte Carlo codes.
Ferrari, A; Sala, P R
2002-01-01
The physics foundations of hadronic interactions as implemented in most Monte Carlo codes are presented together with a few practical examples. The description of the relevant physics is presented schematically, split into the major steps, in order to stress the different approaches required for a full understanding of nuclear reactions at intermediate and high energies. Due to the complexity of the problem, only a few semi-qualitative arguments are developed in this paper. The description is necessarily schematic and somewhat incomplete, but hopefully it will be useful as a first introduction to this topic. Examples are shown mostly for the high-energy regime, where all mechanisms mentioned in the paper are at work and to which perhaps most readers are less accustomed. Examples for lower energies can be found in the references.
Lísal, Martin; Brennan, John K.; Smith, William R.; Siperstein, Flor R.
2004-09-01
We present a simulation tool to study fluid mixtures that are simultaneously chemically reacting and adsorbing in a porous material. The method is a combination of the reaction ensemble Monte Carlo method and the dual control volume grand canonical molecular dynamics technique. The method, termed the dual control cell reaction ensemble molecular dynamics method, allows for the calculation of both equilibrium and nonequilibrium transport properties in porous materials such as diffusion coefficients, permeability, and mass flux. Control cells, which are in direct physical contact with the porous solid, are used to maintain the desired reaction and flow conditions for the system. The simulation setup closely mimics an actual experimental system in which the thermodynamic and flow parameters are precisely controlled. We present an application of the method to the dry reforming of methane reaction within a nanoscale reactor model in the presence of a semipermeable membrane that was modeled as a porous material similar to silicalite. We studied the effects of the membrane structure and porosity on the reaction species permeability by considering three different membrane models. We also studied the effects of an imposed pressure gradient across the membrane on the mass flux of the reaction species. Conversion to syngas (H2/CO) increased significantly in all the nanoscale membrane reactor models considered. A brief discussion of further potential applications is also presented.
Generalized Ensemble Sampling of Enzyme Reaction Free Energy Pathways
Wu, Dongsheng; Fajer, Mikolai I.; Cao, Liaoran; Cheng, Xiaolin; Yang, Wei
2016-01-01
Free energy path sampling plays an essential role in computational understanding of chemical reactions, particularly those occurring in enzymatic environments. Among a variety of molecular dynamics simulation approaches, the generalized ensemble sampling strategy is uniquely attractive for the fact that it not only can enhance the sampling of rare chemical events but also can naturally ensure consistent exploration of environmental degrees of freedom. In this review, we provide a tutorial-like tour of an emerging topic: generalized ensemble sampling of enzyme reaction free energy paths. The discussion is largely focused on our own studies, particularly ones based on the metadynamics free energy sampling method and the on-the-path random walk path sampling method. We hope that this mini presentation will provide interested practitioners some meaningful guidance for future algorithm formulation and application studies.
Puibasset, Joël
2005-04-01
The effect of confinement on the phase behavior of simple fluids is still an area of intensive research. In between experiment and theory, molecular simulation is a powerful tool to study the effect of confinement in realistic porous materials containing some disorder. Previous simulation works aiming at establishing the phase diagram of a confined Lennard-Jones-type fluid concentrated on simple pore geometries (slits or cylinders). The development of the Gibbs ensemble Monte Carlo technique by Panagiotopoulos [Mol. Phys. 61, 813 (1987)] greatly favored the study of such simple geometries for two reasons. First, the technique is very efficient for calculating the phase diagram, since each run (at a given temperature) converges directly to an equilibrium between a gaslike and a liquidlike phase. Second, due to the volume exchange procedure between the two phases, at least one invariant direction of space is required for applicability of this method, which is the case for slits or cylinders. Generally, the introduction of some disorder in such simple pores breaks the initial invariance in one of the space directions and prevents working in the Gibbs ensemble. The simulation techniques for such disordered systems are numerous (grand canonical Monte Carlo, molecular dynamics, histogram reweighting, N-P-T+test method, Gibbs-Duhem integration procedure, etc.). However, the Gibbs ensemble technique, which directly yields the coexistence between phases, was never generalized to such systems. In this work, we focus on two weakly disordered pores for which a modified Gibbs ensemble Monte Carlo technique can be applied. One of the pores is geometrically undulated, whereas the second is cylindrical but presents a chemical variation which gives rise to a modulation of the wall potential. In the first case almost no change in the phase diagram is observed, whereas in the second strong modifications are reported.
Transition state ensemble optimization for reactions of arbitrary complexity
Zinovjev, Kirill; Tuñón, Iñaki
2015-10-01
In the present work, we use Variational Transition State Theory (VTST) to develop a practical method for transition state ensemble optimization by looking for an optimal hyperplanar dividing surface in a space of meaningful trial collective variables. These might be interatomic distances, angles, electrostatic potentials, etc. Restrained molecular dynamics simulations are used to obtain on-the-fly estimates of ensemble averages that guide the variations of the hyperplane maximizing the transmission coefficient. A central result of our work is an expression that quantitatively estimates the importance of the coordinates used for the localization of the transition state ensemble. Starting from an arbitrarily large set of trial coordinates, one can distinguish those that are indeed essential for the advance of the reaction. This facilitates the use of VTST as a practical theory to study reaction mechanisms of complex processes. The technique was applied to the reaction catalyzed by an isochorismate pyruvate lyase. This reaction involves two simultaneous chemical steps and has a shallow transition state region, making it challenging to define a good reaction coordinate. Nevertheless, the hyperplanar transition state optimized in the space of 18 geometrical coordinates provides a transmission coefficient of 0.8 and a committor histogram well-peaked about 0.5, proving the strength of the method. We have also tested the approach with the study of the NaCl dissociation in aqueous solution, a stringent test for a method based on transition state theory. We were able to find essential degrees of freedom consistent with the previous studies and to improve the transmission coefficient with respect to the value obtained using solely the NaCl distance as the reaction coordinate.
Efendiev, Yalchin R.
2013-08-21
In this paper, we propose multilevel Monte Carlo (MLMC) methods that use ensemble level mixed multiscale methods in the simulations of multiphase flow and transport. The contribution of this paper is twofold: (1) a design of ensemble level mixed multiscale finite element methods and (2) a novel use of mixed multiscale finite element methods within multilevel Monte Carlo techniques to speed up the computations. The main idea of ensemble level multiscale methods is to construct local multiscale basis functions that can be used for any member of the ensemble. In this paper, we consider two ensemble level mixed multiscale finite element methods: (1) the no-local-solve-online ensemble level method (NLSO); and (2) the local-solve-online ensemble level method (LSO). The first approach was proposed in Aarnes and Efendiev (SIAM J. Sci. Comput. 30(5):2319-2339, 2008) while the second approach is new. Both mixed multiscale methods use a number of snapshots of the permeability media in generating multiscale basis functions. As a result, in the off-line stage, we construct multiple basis functions for each coarse region where basis functions correspond to different realizations. In the no-local-solve-online ensemble level method, one uses the whole set of precomputed basis functions to approximate the solution for an arbitrary realization. In the local-solve-online ensemble level method, one uses the precomputed functions to construct a multiscale basis for a particular realization. With this basis, the solution corresponding to this particular realization is approximated in LSO mixed multiscale finite element method (MsFEM). In both approaches, the accuracy of the method is related to the number of snapshots computed based on different realizations that one uses to precompute a multiscale basis. In this paper, ensemble level multiscale methods are used in multilevel Monte Carlo methods (Giles 2008a, Oper.Res. 56(3):607-617, b). In multilevel Monte Carlo methods, more accurate
Simulating the Generalized Gibbs Ensemble (GGE): A Hilbert space Monte Carlo approach
Alba, Vincenzo
By combining classical Monte Carlo and Bethe ansatz techniques we devise a numerical method to construct the Truncated Generalized Gibbs Ensemble (TGGE) for the spin-1/2 isotropic Heisenberg (XXX) chain. The key idea is to sample the Hilbert space of the model with the appropriate GGE probability measure. The method can be extended to other integrable systems, such as the Lieb-Liniger model. We benchmark the approach focusing on GGE expectation values of several local observables. As finite-size effects decay exponentially with system size, moderately large chains are sufficient to extract thermodynamic quantities. The Monte Carlo results are in agreement with both the Thermodynamic Bethe Ansatz (TBA) and the Quantum Transfer Matrix approach (QTM). Remarkably, it is possible to extract in a simple way the steady-state Bethe-Gaudin-Takahashi (BGT) roots distributions, which encode complete information about the GGE expectation values in the thermodynamic limit. Finally, it is straightforward to simulate extensions of the GGE, in which, besides the local integrals of motion (local charges), one includes arbitrary functions of the BGT roots. As an example, we include in the GGE the first non-trivial quasi-local integral of motion.
Dinpajooh, Mohammadhasan [Department of Chemistry and Chemical Theory Center, University of Minnesota, 207 Pleasant Street SE, Minneapolis, Minnesota 55455 (United States); Bai, Peng; Allan, Douglas A. [Department of Chemical Engineering and Materials Science, University of Minnesota, 421 Washington Avenue SE, Minneapolis, Minnesota 55455 (United States); Siepmann, J. Ilja, E-mail: siepmann@umn.edu [Department of Chemistry and Chemical Theory Center, University of Minnesota, 207 Pleasant Street SE, Minneapolis, Minnesota 55455 (United States); Department of Chemical Engineering and Materials Science, University of Minnesota, 421 Washington Avenue SE, Minneapolis, Minnesota 55455 (United States)
2015-09-21
Since the seminal paper by Panagiotopoulos [Mol. Phys. 61, 813 (1987)], the Gibbs ensemble Monte Carlo (GEMC) method has been the most popular particle-based simulation approach for the computation of vapor–liquid phase equilibria. However, the validity of GEMC simulations in the near-critical region has been questioned because rigorous finite-size scaling approaches cannot be applied to simulations with fluctuating volume. Valleau [Mol. Simul. 29, 627 (2003)] has argued that GEMC simulations would lead to a spurious overestimation of the critical temperature. More recently, Patel et al. [J. Chem. Phys. 134, 024101 (2011)] opined that the use of analytical tail corrections would be problematic in the near-critical region. To address these issues, we perform extensive GEMC simulations for Lennard-Jones particles in the near-critical region, varying the system size, the overall system density, and the cutoff distance. For a system with N = 5500 particles, potential truncation at 8σ and analytical tail corrections, an extrapolation of GEMC simulation data at temperatures in the range from 1.27 to 1.305 yields T_c = 1.3128 ± 0.0016, ρ_c = 0.316 ± 0.004, and p_c = 0.1274 ± 0.0013, in excellent agreement with the thermodynamic limit determined by Potoff and Panagiotopoulos [J. Chem. Phys. 109, 10914 (1998)] using grand canonical Monte Carlo simulations and finite-size scaling. Critical properties estimated using GEMC simulations with different overall system densities (0.296 ≤ ρ_t ≤ 0.336) agree to within the statistical uncertainties. For simulations with tail corrections, data obtained using r_cut = 3.5σ yield T_c and p_c that are higher by 0.2% and 1.4% than simulations with r_cut = 5σ and 8σ but still with overlapping 95% confidence intervals. In contrast, GEMC simulations with a truncated and shifted potential show that r_cut = 8σ is insufficient to obtain accurate results. Additional GEMC simulations for hard
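A minimal sketch of the kind of extrapolation referred to above: fitting the coexistence-density difference to the Ising scaling law and the diameter to the law of rectilinear diameters. The densities below are synthetic placeholders, not the paper's data.

```python
import numpy as np

# synthetic placeholder coexistence densities (illustration only)
T       = np.array([1.270, 1.280, 1.290, 1.300, 1.305])
rho_liq = np.array([0.402, 0.392, 0.380, 0.366, 0.357])
rho_vap = np.array([0.232, 0.240, 0.250, 0.263, 0.271])
beta_exp = 0.326   # 3D Ising order-parameter exponent

# scaling law: rho_l - rho_v = B (Tc - T)^beta, so (rho_l - rho_v)^(1/beta)
# is linear in T and vanishes at Tc
y = (rho_liq - rho_vap) ** (1.0 / beta_exp)
slope, intercept = np.polyfit(T, y, 1)
T_c = -intercept / slope

# law of rectilinear diameters: (rho_l + rho_v)/2 = rho_c + A (Tc - T)
a, b = np.polyfit(T, 0.5 * (rho_liq + rho_vap), 1)
rho_c = a * T_c + b
print(f"T_c ~ {T_c:.4f}, rho_c ~ {rho_c:.4f}")
```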
Sloth, Peter
1990-01-01
Density profiles and partition coefficients are obtained for hard-sphere fluids inside hard, spherical pores of different sizes by grand canonical ensemble Monte Carlo calculations. The Monte Carlo results are compared to the results obtained by application of different kinds of integral equation approximations. Also, some exact, analytical results for the partition coefficients are given, which are valid in the case of (very) small pores or at low density, respectively.
Zhang, Zhigang; Duan, Zhenhao
2002-10-01
A new technique of temperature scaling combined with the conventional Gibbs ensemble Monte Carlo simulation was used to study the liquid-vapor phase equilibria of the methane-ethane (CH4-C2H6) system. With this efficient method, a new set of united-atom Lennard-Jones potential parameters for pure C2H6 was found to be more accurate than those of previous models in the prediction of phase equilibria. Using the optimized potentials for liquid simulations (OPLS) potential for CH4 and the potential of this study for C2H6, together with a simple mixing rule, we simulated the equilibrium compositions and densities of the CH4-C2H6 mixtures with accuracy close to experiments. The simulated data supplement the experiments, and may cover a larger temperature-pressure-composition space than experiments. Compared with some well-established equations of state such as the Peng-Robinson equation of state (PR-EQS), the simulated results are found to be closer to experiments, at least in some temperature and pressure ranges.
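As an illustration of what a "simple mixing rule" typically means in this context, a Lorentz-Berthelot combining sketch for the unlike CH4-C2H6 Lennard-Jones interaction; the numerical values shown are approximate or hypothetical, not the optimized parameter set reported in the paper.

```python
import math

def lorentz_berthelot(eps_i, sig_i, eps_j, sig_j):
    """Lorentz-Berthelot combining rule: geometric mean of well depths,
    arithmetic mean of size parameters."""
    return math.sqrt(eps_i * eps_j), 0.5 * (sig_i + sig_j)

# OPLS united-atom CH4 (eps/kB ~ 147.9 K, sigma ~ 3.73 A) combined with a
# hypothetical ethane pseudo-atom parameter set (placeholder values)
eps_mix, sig_mix = lorentz_berthelot(147.9, 3.73, 98.0, 3.75)
```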
Typicality in Ensembles of Quantum States: Monte Carlo Sampling vs Analytical Approximations
Fresch, Barbara
2009-01-01
Random quantum states are presently of interest in the fields of quantum information theory and quantum chaos. Moreover, a detailed study of their properties can shed light on some foundational issues of quantum statistical mechanics, such as the emergence of well-defined thermal properties from the pure quantum mechanical description of large many-body systems. When dealing with an ensemble of pure quantum states, two questions naturally arise: what is the probability density function on the parameters which specify the state of the system in a given ensemble? And, does there exist a most typical value of a function of interest in the considered ensemble? Here two different ensembles are considered: the Random Pure State Ensemble (RPSE) and the Fixed Expectation Energy Ensemble (FEEE). By means of a suitable parameterization of the wave function in terms of populations and phases, we focus on the probability distribution of the populations in such ensembles. A comparison is made between the distribution i...
Homma, Yuto; Moriwaki, Hiroyuki; Ohki, Shigeo; Ikeda, Kazumi
2014-06-01
This paper deals with verification of the three-dimensional triangular prismatic discrete ordinates transport calculation code ENSEMBLE-TRIZ by comparison with the multi-group Monte Carlo calculation code GMVP in a large fast breeder reactor. The reactor is a 750 MWe sodium-cooled reactor. Nuclear characteristics are calculated at the beginning of cycle of an initial core and at the beginning and end of cycle of an equilibrium core. According to the calculations, the differences between the two methodologies are smaller than 0.0002 Δk in the multiplication factor, about 1% (relative) in the control rod reactivity, and 1% in the sodium void reactivity.
Bai, Peng; Siepmann, J Ilja
2017-02-14
Particle swap moves between phases are usually the rate-limiting step for Gibbs ensemble Monte Carlo (GEMC) simulations of fluid phase equilibria at low reduced temperatures because the acceptance probabilities for these moves can become very low for molecules with articulated architecture and/or highly directional interactions. The configurational-bias Monte Carlo (CBMC) technique can greatly increase the acceptance probabilities, but the efficiency of the CBMC algorithm is influenced by multiple parameters. In this work we assess the performance of different CBMC strategies for GEMC simulations using the SPC/E and TIP4P water models at 283, 343, and 473 K, demonstrate that much higher acceptance probabilities can be achieved than previously reported in the literature, and make recommendations for CBMC strategies leading to optimal efficiency.
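A minimal sketch of the core CBMC step (choosing one of k trial positions with probability proportional to its Boltzmann factor and accumulating the Rosenbluth weight); a generic illustration, not the specific strategies benchmarked in the paper.

```python
import math
import random

def cbmc_select(trial_energies, beta):
    """Pick one of k trial positions with probability proportional to its
    Boltzmann factor and return (chosen index, Rosenbluth weight W).
    In the swap acceptance rule the ratio W_new/W_old replaces the bare
    exp(-beta*dU) of an unbiased insertion."""
    boltz = [math.exp(-beta * u) for u in trial_energies]
    W = sum(boltz)
    r = random.random() * W
    acc = 0.0
    for i, b in enumerate(boltz):
        acc += b
        if r <= acc:
            return i, W
    return len(boltz) - 1, W
```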
Hybrid Multilevel Monte Carlo Simulation of Stochastic Reaction Networks
Moraes, Alvaro
2015-01-07
Stochastic reaction networks (SRNs) are a class of continuous-time Markov chains intended to describe, from the kinetic point of view, the time evolution of chemical systems in which molecules of different chemical species undergo a finite set of reaction channels. This talk is based on articles [4, 5, 6], where we are interested in the following problem: given an SRN, X, defined through its set of reaction channels and its initial state, x0, estimate E(g(X(T))); that is, the expected value of a scalar observable, g, of the process, X, at a fixed time, T. This problem leads us to define a series of Monte Carlo estimators, M, that with high probability produce values close to the quantity of interest, E(g(X(T))). More specifically, given a user-selected tolerance, TOL, and a small confidence level, η, find an estimator, M, based on approximate sampled paths of X, such that P(|E(g(X(T))) − M| ≤ TOL) ≥ 1 − η; moreover, we want to achieve this objective with near-optimal computational work. We first introduce a hybrid path-simulation scheme based on the well-known stochastic simulation algorithm (SSA) [3] and the tau-leap method [2]. Then, we introduce a multilevel Monte Carlo strategy that allows us to achieve a computational complexity of order O(TOL^-2); this is the same computational complexity as an exact method but with a smaller constant. We provide numerical examples to show our results.
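For concreteness, a minimal sketch of the exact SSA path simulator referenced above, on top of which the hybrid SSA/tau-leap and multilevel layers are built; the interfaces (stoichiometry lists, propensity callables) are illustrative assumptions.

```python
import math
import random

def ssa(x0, stoich, propensity, T):
    """Exact stochastic simulation algorithm for a reaction network.
    x0         : initial copy numbers
    stoich     : list of per-reaction state-change vectors
    propensity : list of callables a_j(x)
    T          : final time."""
    t, x = 0.0, list(x0)
    while t < T:
        a = [prop(x) for prop in propensity]
        a0 = sum(a)
        if a0 == 0.0:
            break
        t += -math.log(random.random()) / a0        # exponential waiting time
        if t >= T:
            break
        r, acc = random.random() * a0, 0.0
        for j, aj in enumerate(a):                  # pick reaction channel j
            acc += aj
            if r <= acc:
                x = [xi + nu for xi, nu in zip(x, stoich[j])]
                break
    return x

# e.g. decay A -> 0 with rate 0.5: average g(ssa(...)) over many paths to estimate E[g(X(T))]
# x_T = ssa([100], [[-1]], [lambda x: 0.5 * x[0]], T=2.0)
```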
Gartner, Thomas E; Epps, Thomas H; Jayaraman, Arthi
2016-11-08
We describe an extension of the Gibbs ensemble molecular dynamics (GEMD) method for studying phase equilibria. Our modifications to GEMD allow for direct control over particle transfer between phases and improve the method's numerical stability. Additionally, we found that the modified GEMD approach had advantages in computational efficiency in comparison to a hybrid Monte Carlo (MC)/MD Gibbs ensemble scheme in the context of the single component Lennard-Jones fluid. We note that this increase in computational efficiency does not compromise the close agreement of phase equilibrium results between the two methods. However, numerical instabilities in the GEMD scheme hamper GEMD's use near the critical point. We propose that the computationally efficient GEMD simulations can be used to map out the majority of the phase window, with hybrid MC/MD used as a follow up for conditions under which GEMD may be unstable (e.g., near-critical behavior). In this manner, we can capitalize on the contrasting strengths of these two methods to enable the efficient study of phase equilibria for systems that present challenges for a purely stochastic GEMC method, such as dense or low temperature systems, and/or those with complex molecular topologies.
Study of Phase Equilibria of Petrochemical Fluids using Gibbs Ensemble Monte Carlo Methods
Nath, Shyamal
2001-03-01
Knowledge of the phase behavior of hydrocarbons and related compounds is of great interest to the chemical and petrochemical industries, for example in the design of processes such as supercritical fluid extraction, petroleum refining, enhanced oil recovery, gas treatment, and fractionation of wax products. A precise knowledge of the phase equilibria of alkanes, alkenes, and related compounds and their mixtures is required for the efficient design of these processes. Experimental studies to understand the related phase equilibria often become unsuitable for various reasons. With the advancement of simulation technology, molecular simulations can provide a useful complement and alternative in the study and description of the phase behavior of these systems. In this work we study the vapor-liquid phase equilibria of pure hydrocarbons and their mixtures using Gibbs ensemble simulation. Insertion of long and articulated chain molecules is facilitated in our simulations by means of configurational bias and expanded ensemble methods. We use the newly developed NERD force field in our simulations. In this work the NERD force field is extended to provide coverage for hydrocarbons with any arbitrary architecture. Our simulation results provide excellent quantitative agreement with available experimental phase equilibria data for both the pure components and mixtures.
Moučka, Filip; Lísal, Martin; Škvor, Jiří; Jirsák, Jan; Nezbeda, Ivo; Smith, William R
2011-06-23
We present a new and computationally efficient methodology using osmotic ensemble Monte Carlo (OEMC) simulation to calculate chemical potential-concentration curves and the solubility of aqueous electrolytes. The method avoids calculations for the solid phase, incorporating readily available data from thermochemical tables that are based on well-defined reference states. It performs simulations of the aqueous solution at a fixed number of water molecules, pressure, temperature, and specified overall electrolyte chemical potential. Insertion/deletion of ions to/from the system is implemented using fractional ions, which are coupled to the system via a coupling parameter λ that varies between 0 (no interaction between the fractional ions and the other particles in the system) and 1 (full interaction between the fractional ions and the other particles of the system). Transitions between λ-states are accepted with a probability following from the osmotic ensemble partition function. Biasing weights associated with the λ-states are used in order to efficiently realize transitions between them; these are determined by means of the Wang-Landau method. We also propose a novel scaling procedure for λ, which can be used for both nonpolarizable and polarizable models of aqueous electrolyte systems. The approach is readily extended to involve other solvents, multiple electrolytes, and species complexation reactions. The method is illustrated for NaCl, using SPC/E water and several force field models for NaCl from the literature, and the results are compared with experiment at ambient conditions. Good agreement is obtained for the chemical potential-concentration curve and the solubility prediction is reasonable. Future improvements to the predictions will require improved force field models.
Trajectory study of dissociation reactions. The single-ensemble method. II
Kutz, H. Douglas; Burns, George
1981-04-01
The single uniform ensemble method was previously employed in 3D classical trajectory calculations [H. D. Kutz and G. Burns, J. Chem. Phys. 72, 3652 (1980)]. Presently it is applied to the Br2+Ar system to study nonequilibrium effects in diatom dissociation over a wide temperature range. It was found that, for a given large set of trajectories, observables such as reaction cross sections or rate constants are independent, within four significant figures, of the initial distribution function. This indicates a high degree of reliability of the single uniform ensemble method, once the choice of a set of trajectories is made. In order to study dissociation from the low-lying energy states, the uniform velocity selection method in trajectory calculations was used. It was found that dissociation from these states contributes but little to the overall dissociation reaction. The latter finding is consistent with the attractive nature of the potential energy surface used, and constitutes an argument against those current theories of diatom dissociation which explain experimental data by postulating a high probability of dissociation from low-lying energy states of diatoms. It was found that the contribution from the low-lying states to dissociation can be estimated with good accuracy using information theory expressions. The temperature dependence of nonequilibrium effects was investigated between 1500 and 6000 K. In this range the nonequilibrium correction factor varies between 0.2 and 0.5. The angular momentum dependence of observables such as the reaction rate constant and reaction cross section was also investigated.
Zhang, Minhua; Chen, Lihang; Yang, Huaming; Sha, Xijiang; Ma, Jing
2016-07-01
Gibbs ensemble Monte Carlo simulation with configurational bias was employed to study the vapor-liquid equilibrium (VLE) for pure acetic acid and for a mixture of acetic acid and ethylene. An improved united-atom force field for acetic acid based on a Lennard-Jones functional form was proposed. The Lennard-Jones well depth and size parameters for the carboxyl oxygen and hydroxyl oxygen were determined by fitting the interaction energies of acetic acid dimers to the Lennard-Jones potential function. Four different acetic acid dimers and the proportions of them were considered when the force field was optimized. It was found that the new optimized force field provides a reasonable description of the vapor-liquid phase equilibrium for pure acetic acid and for the mixture of acetic acid and ethylene. Accurate values were obtained for the saturated liquid density of the pure compound (average deviation: 0.84 %) and for the critical points. The new optimized force field demonstrated greater accuracy and reliability in calculations of the solubility of the mixture of acetic acid and ethylene as compared with the results obtained with the original TraPPE-UA force field.
Messerly, Richard A; Rowley, Richard L; Knotts, Thomas A; Wilding, W Vincent
2015-09-14
A rigorous statistical analysis is presented for Gibbs ensemble Monte Carlo simulations. This analysis reduces the uncertainty in the critical point estimate when compared with traditional methods found in the literature. Two different improvements are recommended due to the following results. First, the traditional propagation of error approach for estimating the standard deviations used in regression improperly weighs the terms in the objective function due to the inherent interdependence of the vapor and liquid densities. For this reason, an error model is developed to predict the standard deviations. Second, and most importantly, a rigorous algorithm for nonlinear regression is compared to the traditional approach of linearizing the equations and propagating the error in the slope and the intercept. The traditional regression approach can yield nonphysical confidence intervals for the critical constants. By contrast, the rigorous algorithm restricts the confidence regions to values that are physically sensible. To demonstrate the effect of these conclusions, a case study is performed to enhance the reliability of molecular simulations to resolve the n-alkane family trend for the critical temperature and critical density.
Database of atomistic reaction mechanisms with application to kinetic Monte Carlo.
Terrell, Rye; Welborn, Matthew; Chill, Samuel T; Henkelman, Graeme
2012-07-07
Kinetic Monte Carlo is a method used to model the state-to-state kinetics of atomic systems when all reaction mechanisms and rates are known a priori. Adaptive versions of this algorithm use saddle searches from each visited state so that unexpected and complex reaction mechanisms can also be included. Here, we describe how calculated reaction mechanisms can be stored concisely in a kinetic database and subsequently reused to reduce the computational cost of such simulations. As all accessible reaction mechanisms available in a system are contained in the database, the cost of the adaptive algorithm is reduced towards that of standard kinetic Monte Carlo.
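A minimal sketch of how a stored rate catalog is reused in a standard rejection-free KMC step; the `database` mapping of states to (rate, next state) pairs is a hypothetical interface, not the actual kinetic database format described in the paper.

```python
import math
import random

def kmc_step(state, database, t):
    """One kinetic Monte Carlo step using a catalog of known mechanisms:
    look up the reactions available from the current state, pick one with
    probability proportional to its rate, and advance the clock."""
    mechanisms = database[state]                 # [(rate, next_state), ...]
    total = sum(rate for rate, _ in mechanisms)
    r, acc = random.random() * total, 0.0
    for rate, next_state in mechanisms:
        acc += rate
        if r <= acc:
            dt = -math.log(random.random()) / total
            return next_state, t + dt
    return state, t                              # unreachable for total > 0
```

In an adaptive scheme, states missing from the catalog would trigger new saddle searches whose results are then added to the database for reuse.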
McGraw, David [Desert Research Inst. (DRI), Reno, NV (United States); Hershey, Ronald L. [Desert Research Inst. (DRI), Reno, NV (United States)
2016-06-01
Methods were developed to quantify uncertainty and sensitivity for NETPATH inverse water-rock reaction models and to calculate dissolved inorganic carbon, carbon-14 groundwater travel times. The NETPATH models calculate upgradient groundwater mixing fractions that produce the downgradient target water chemistry along with amounts of mineral phases that are either precipitated or dissolved. Carbon-14 groundwater travel times are calculated based on the upgradient source-water fractions, carbonate mineral phase changes, and isotopic fractionation. Custom scripts and statistical code were developed for this study to facilitate modifying input parameters, running the NETPATH simulations, extracting relevant output, postprocessing the results, and producing graphs and summaries. The scripts read user-specified values for each constituent's coefficient of variation, distribution, sensitivity parameter, maximum dissolution or precipitation amounts, and number of Monte Carlo simulations. Monte Carlo methods for analysis of parametric uncertainty assign a distribution to each uncertain variable, sample from those distributions, and evaluate the ensemble output. The uncertainty in input affected the variability of outputs, namely source-water mixing, phase dissolution and precipitation amounts, and carbon-14 travel time. Although NETPATH may provide models that satisfy the constraints, it is up to the geochemist to determine whether the results are geochemically reasonable. Two example water-rock reaction models from previous geochemical reports were considered in this study. Sensitivity analysis was also conducted to evaluate the change in output caused by a small change in input, one constituent at a time. Results were standardized to allow for sensitivity comparisons across all inputs, which results in a representative value for each scenario. The approach yielded insight into the uncertainty in water-rock reactions and travel times. For example, there was little
Multilevel ensemble Kalman filter
Chernov, Alexey
2016-01-06
This work embeds a multilevel Monte Carlo (MLMC) sampling strategy into the Monte Carlo step of the ensemble Kalman filter (EnKF). In terms of computational cost versus approximation error, the asymptotic performance of the multilevel ensemble Kalman filter (MLEnKF) is superior to that of the EnKF.
Evolution of a spallation reaction: experiment and Monte Carlo simulation
Enke, M; Hilscher, D; Jahnke, U; Schapiro, O; Letourneau, A; Galin, J; Goldenbaum, F; Lott, B; Peghaire, A; Filges, D; Neef, R D; Nünnighoff, K; Paul, N; Schaal, H; Sterzenbach, G; Tietze, A; Pienkowski, L
1999-01-01
Reaction cross sections and production cross sections for neutrons, hydrogen, and helium have been measured for 1.2 and 1.8 GeV p + Fe, Ni, Ag, Ta, W, Au, Pb, and U and are compared with different intra-nuclear-cascade models combined with evaporation models. Agreement for neutrons and considerable differences for light charged particles are observed between experiment and calculation as well as between different models. The discrepancies are associated with specific deficiencies in the models. The exclusive data measured with two 4π detectors for neutron and charged particle detection furthermore allowed a systematic comparison of observables characteristic of different stages of the temporal evolution of a spallation reaction: inelastic collision probability, excitation energy distribution, pre-equilibrium emission, and inclusive production cross sections.
Stolarski, R. S.; Butler, D. M.; Rundel, R. D.
1977-01-01
A concise stratospheric model was used in a Monte Carlo analysis of the propagation of reaction rate uncertainties through the calculation of an ozone perturbation due to the addition of chlorine. Two thousand Monte Carlo cases were run with 55 reaction rates being varied. Excellent convergence was obtained in the output distributions because the model is sensitive to the uncertainties in only about 10 reactions. For a 1 ppbv chlorine perturbation added to a 1.5 ppbv chlorine background, the resultant 1-sigma uncertainty on the ozone perturbation is a factor of 1.69 on the high side and 1.80 on the low side. The corresponding 2-sigma factors are 2.86 and 3.23. Results are also given for the uncertainties, due to reaction rates, in the ambient concentrations of stratospheric species.
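A small sketch of the Monte Carlo propagation idea (sampling each rate constant from a lognormal uncertainty factor and summarizing the output distribution as multiplicative 1-sigma factors); the model function and parameters are placeholders, not the 1977 stratospheric code.

```python
import numpy as np

def propagate(model, nominal_rates, sigma_ln, n_cases=2000, seed=0):
    """Sample rate constants lognormally about their nominal values, run the
    model for each case, and report the median output with multiplicative
    high-side / low-side 1-sigma factors."""
    rng = np.random.default_rng(seed)
    k0 = np.asarray(nominal_rates, dtype=float)
    out = np.empty(n_cases)
    for i in range(n_cases):
        k = k0 * rng.lognormal(mean=0.0, sigma=sigma_ln, size=k0.size)
        out[i] = model(k)
    med = np.median(out)
    hi, lo = np.percentile(out, [84.1, 15.9])
    return med, hi / med, med / lo

# toy example: an "ozone change" proxy depending nonlinearly on two rates
toy = lambda k: k[0] / (k[0] + 10.0 * k[1])
median, factor_hi, factor_lo = propagate(toy, [1.0, 0.1], sigma_ln=0.4)
```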
REX: A Monte Carlo simulation of thick gas target resonant scattering reactions
Curtis, N., E-mail: n.curtis@bham.ac.uk; Walshe, J.
2015-10-11
A Monte Carlo code has been developed to simulate resonant scattering reactions using the thick gas target technique in inverse kinematics. Results are presented for the ⁴He(²⁰Ne,α)²⁰Ne reaction at 70 MeV, and compared to an experimental measurement which utilised an array of segmented silicon strip detectors. In the case studied, angular straggling in the chamber window is found to dominate the excitation energy resolution.
De Napoli, M.; Romano, F.; D'Urso, D.; Licciardello, T.; Agodi, C.; Candiano, G.; Cappuzzello, F.; Cirrone, G. A. P.; Cuttone, G.; Musumarra, A.; Pandola, L.; Scuderi, V.
2014-12-01
When a carbon beam interacts with human tissues, many secondary fragments are produced in the tumor region and the surrounding healthy tissues. Therefore, in hadrontherapy precise dose calculations require Monte Carlo tools equipped with complex nuclear reaction models. To get realistic predictions, however, simulation codes must be validated against experimental results; the wider the dataset is, the more finely the models are tuned. Since no fragmentation data for tissue-equivalent materials at Fermi energies are available in the literature, we measured secondary fragments produced by the interaction of a 55.6 MeV/u ¹²C beam with thick muscle and cortical bone targets. Three reaction models used by the Geant4 Monte Carlo code, the Binary Light Ion Cascade, the Quantum Molecular Dynamics and the Liège Intranuclear Cascade, have been benchmarked against the collected data. In this work we present the experimental results and we discuss the predictive power of the above-mentioned models.
Kawano, Toshihiko [Los Alamos National Laboratory]; Talou, Patrick [Los Alamos National Laboratory]; Watanabe, Takehito [Los Alamos National Laboratory]; Chadwick, Mark [Los Alamos National Laboratory]
2010-01-01
Monte Carlo simulations for particle and γ-ray emissions from an excited nucleus based on the Hauser-Feshbach statistical theory are performed to obtain correlated information between emitted particles and γ-rays. We calculate neutron-induced reactions on ⁵¹V to demonstrate unique advantages of the Monte Carlo method, which are the correlated γ-rays in the neutron radiative capture reaction, the neutron and γ-ray correlation, and the particle-particle correlations at higher energies. It is shown that properties of nuclear reactions that are difficult to study with a deterministic method can be obtained with Monte Carlo simulations.
Hofmann, H.M.; Mertelmeier, T. (Erlangen-Nuernberg Univ., Erlangen (Germany, F.R.). Inst. fuer Theoretische Physik); Mello, P.A. (Instituto Nacional de Investigaciones Nucleares, Mexico City. Lab. del Acelerador); Seligman, T.H. (Universidad Nacional Autonoma de Mexico, Mexico City. Inst. de Fisica)
1981-12-14
A comparison is presented between predictions of the entropy approach to statistical nuclear reactions, and numerical calculations performed by generating an ensemble of S-matrices in terms of K-matrices with specified statistical distributions for their parameters. The comparison is done for: (a) the 2nd, 3rd and 4th moments of S in a 4-channel case and (b) the actual distribution of the S-matrix elements in a 2-channel case. In both cases the agreement is found to be very good in the domain of strong absorption.
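A minimal numpy sketch of generating an ensemble of unitary, symmetric S-matrices from random real symmetric K-matrices via the Cayley transform S = (1 + iK)(1 − iK)⁻¹; the Gaussian K-matrix statistics used here are an assumption for illustration, not the distributions specified in the paper.

```python
import numpy as np

def random_S_matrix(n_channels, rng, scale=1.0):
    """Draw a unitary, symmetric S-matrix from a random real symmetric
    K-matrix (Gaussian entries, an illustrative choice)."""
    A = rng.normal(scale=scale, size=(n_channels, n_channels))
    K = 0.5 * (A + A.T)                       # real symmetric K-matrix
    I = np.eye(n_channels)
    return (I + 1j * K) @ np.linalg.inv(I - 1j * K)

# 2-channel ensemble and a low moment of an off-diagonal element
rng = np.random.default_rng(1)
ensemble = [random_S_matrix(2, rng) for _ in range(10_000)]
second_moment = np.mean([S[0, 1] ** 2 for S in ensemble])
```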
Kadoura, Ahmad; Sun, Shuyu, E-mail: shuyu.sun@kaust.edu.sa; Salama, Amgad
2014-08-01
Accurate determination of thermodynamic properties of petroleum reservoir fluids is of great interest to many applications, especially in petroleum engineering and chemical engineering. Molecular simulation has many appealing features, especially its requirement of fewer tuned parameters but yet better predicting capability; however it is well known that molecular simulation is very CPU expensive, as compared to equation of state approaches. We have recently introduced an efficient thermodynamically consistent technique to regenerate rapidly Monte Carlo Markov Chains (MCMCs) at different thermodynamic conditions from the existing data points that have been pre-computed with expensive classical simulation. This technique can speed up the simulation more than a million times, making the regenerated molecular simulation almost as fast as equation of state approaches. In this paper, this technique is first briefly reviewed and then numerically investigated in its capability of predicting ensemble averages of primary quantities at different neighboring thermodynamic conditions to the original simulated MCMCs. Moreover, this extrapolation technique is extended to predict second derivative properties (e.g. heat capacity and fluid compressibility). The method works by reweighting and reconstructing generated MCMCs in the canonical ensemble for Lennard-Jones particles. In this paper, the system's potential energy, pressure, isochoric heat capacity and isothermal compressibility along isochors, isotherms and paths of changing temperature and density from the original simulated points were extrapolated. Finally, an optimized set of Lennard-Jones parameters (ε, σ) for single site models was proposed for methane, nitrogen and carbon monoxide.
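A minimal sketch of the single-temperature reweighting idea that underlies reusing stored Markov chains at neighboring conditions (canonical reweighting of an observable from β to β'); a generic illustration, not the authors' full reconstruction scheme.

```python
import numpy as np

def reweight_to_beta(U_samples, A_samples, beta_old, beta_new):
    """Estimate <A> at beta_new from configurations sampled at beta_old in
    the canonical (NVT) ensemble, using Boltzmann reweighting of the stored
    potential energies."""
    U = np.asarray(U_samples, dtype=float)
    A = np.asarray(A_samples, dtype=float)
    dlogw = -(beta_new - beta_old) * U
    dlogw -= dlogw.max()            # guard against overflow in exp
    w = np.exp(dlogw)
    return np.sum(w * A) / np.sum(w)
```

The estimate is reliable only when beta_new is close enough to beta_old that the stored configurations still overlap with the target ensemble, which is the regime targeted by the extrapolation technique described above.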
Lattice based Kinetic Monte Carlo Simulations of a complex chemical reaction network
Danielson, Thomas; Savara, Aditya; Hin, Celine
Lattice Kinetic Monte Carlo (KMC) simulations offer a powerful alternative to using ordinary differential equations for the simulation of complex chemical reaction networks. Lattice KMC provides the ability to account for local spatial configurations of species in the reaction network, resulting in a more detailed description of the reaction pathway. In KMC simulations with a large number of reactions, the range of transition probabilities can span many orders of magnitude, creating subsets of processes that occur more frequently or more rarely. Consequently, processes that have a high probability of occurring may be selected repeatedly without actually progressing the system (i.e. the forward and reverse process for the same reaction). In order to avoid the repeated occurrence of fast frivolous processes, it is necessary to throttle the transition probabilities in such a way that avoids altering the overall selectivity. Likewise, as the reaction progresses, new frequently occurring species and reactions may be introduced, making a dynamic throttling algorithm a necessity. We present a dynamic steady-state detection scheme with the goal of accurately throttling rate constants in order to optimize the KMC run time without compromising the selectivity of the reaction network. The algorithm has been applied to a large catalytic chemical reaction network, specifically that of methanol oxidative dehydrogenation, as well as additional pathways on CeO2(111) resulting in formaldehyde, CO, methanol, CO2, H2 and H2O as gas products.
Monte Carlo simulations of surface reactions: NO reduction by CO or H2
Álvarez-Falcón, L.; Alas, S. J.; Vicente, L.
2014-01-01
The development of surface science has given an opportunity to investigate the process of heterogeneous catalysis at a molecular level. In this way there has been great progress in understanding the mechanism of NO decomposition, and modeling has been a very important tool toward this goal. In this work we analyze the reactions NO+H2 and NO+CO. The extremely narrow production peak of N2 and CO2 which occurs in the reaction of NO+CO on Pt(100), a phenomenon known as "surface explosion," is studied using a dynamic Monte Carlo method on a square lattice at low pressure under isothermal conditions. The catalytic reduction of nitric oxide by hydrogen over a Pt surface is also studied by using dynamic Monte Carlo. Using a Langmuir-Hinshelwood reaction mechanism, a simplified model with only four adsorbed species (NO, H, O, and N) is constructed. The NO dissociation rate, the limiting step of the whole reaction, is inhibited by coadsorbed NO and H2 molecules, and is enhanced both by the presence of empty sites and by adsorbed N atoms as nearest neighbors. These simulations include experimental parameter values for adsorption, desorption, and diffusion of the reactants. The phenomenon is studied by changing the temperature in the range of 300-550 K. The modeling reproduces well the observed TPD and TPR experimental results and allows a visualization of the spatial development of the surface explosion.
Sample Duplication Method for Monte Carlo Simulation of Large Reaction-Diffusion System
张红东; 陆建明; 杨玉良
1994-01-01
The sample duplication method for the Monte Carlo simulation of large reaction-diffusion systems is proposed in this paper. It is proved that the sample duplication method effectively raises the efficiency and statistical precision of the simulation without changing the kinetic behaviour of the reaction-diffusion system or the critical condition for the bifurcation of the steady states. The method has been applied to the simulation of the spatial and temporal dissipative structures of the Brusselator under Dirichlet boundary conditions. The results presented in this paper clearly show that the sample duplication method provides a very efficient way to solve the master equation of a large reaction-diffusion system. For the case of a two-dimensional system, it is found that the computation time is reduced by at least two orders of magnitude compared to the algorithm reported in the literature.
Barrier heights of hydrogen-transfer reactions with diffusion quantum Monte Carlo method.
Zhou, Xiaojun; Wang, Fan
2017-04-30
Hydrogen-transfer reactions are an important class of reactions in many chemical and biological processes. Barrier heights of H-transfer reactions are underestimated significantly by popular exchange-correlation functionals in density functional theory (DFT), while the coupled-cluster (CC) method is quite expensive and can be applied only to rather small systems. Quantum Monte Carlo methods can usually provide reliable results for large systems. The performance of the fixed-node diffusion quantum Monte Carlo method (FN-DMC) on barrier heights of the 19 H-transfer reactions in the HTBH38/08 database is investigated in this study with trial wavefunctions of the single-Slater-Jastrow form and orbitals from DFT using the local density approximation. Our results show that barrier heights of these reactions can be calculated rather accurately using FN-DMC, with a mean absolute error of 1.0 kcal/mol in all-electron calculations. Introduction of pseudopotentials (PPs) in FN-DMC calculations improves efficiency pronouncedly. According to our results, the error of the employed PPs is smaller than that of the present CCSD(T) and FN-DMC calculations. FN-DMC using PPs can thus be applied to investigate H-transfer reactions involving larger molecules reliably. In addition, bond dissociation energies of the involved molecules computed using FN-DMC are in excellent agreement with reference values and are even better than the results of the employed CCSD(T) calculations using the aug-cc-pVQZ basis set. © 2017 Wiley Periodicals, Inc.
Francis, S.; van Zyl, R. R.; Perold, W. J.
2015-08-01
The ensemble Monte Carlo particle simulation technique is used to determine the upper operational frequency limit of the transferred electron mechanism in bulk GaAs and GaN empirically. This mechanism manifests as a decrease in the average velocity of the electrons in the bulk material with an increase in the electric field bias, which yields the characteristic negative slope in the velocity-field curves of these materials. A novel approach is proposed whereby the hysteresis in the simulated dynamic, high-frequency velocity-field curves is exploited. The upper operational frequency limit supported by the material is defined as that frequency, where the average gradient of the dynamic characteristic curve over a radio frequency cycle approaches zero. Effects of temperature and doping level on the operational frequency limit are reported. The frequency limit thus obtained is also useful to predict the highest fundamental frequency of operation of transferred electron devices, such as Gunn diodes, which are based on materials that support the transferred electron mechanism. Based on the method presented here, the upper operational frequency limits of the transferred electron mechanism in bulk GaAs and GaN are 80 and 255 GHz, respectively, at typical doping levels and operating temperatures of Gunn diodes.
Monaco, James Peter; Madabhushi, Anant
2011-07-01
The ability of classification systems to adjust their performance (sensitivity/specificity) is essential for tasks in which certain errors are more significant than others. For example, mislabeling cancerous lesions as benign is typically more detrimental than mislabeling benign lesions as cancerous. Unfortunately, methods for modifying the performance of Markov random field (MRF) based classifiers are noticeably absent from the literature, and thus most such systems restrict their performance to a single, static operating point (a paired sensitivity/specificity). To address this deficiency we present weighted maximum posterior marginals (WMPM) estimation, an extension of maximum posterior marginals (MPM) estimation. Whereas the MPM cost function penalizes each error equally, the WMPM cost function allows misclassifications associated with certain classes to be weighted more heavily than others. This creates a preference for specific classes, and consequently a means for adjusting classifier performance. Realizing WMPM estimation (like MPM estimation) requires estimates of the posterior marginal distributions. The most prevalent means for estimating these, proposed by Marroquin, utilizes a Markov chain Monte Carlo (MCMC) method. Though Marroquin's method (M-MCMC) yields estimates that are sufficiently accurate for MPM estimation, they are inadequate for WMPM. To more accurately estimate the posterior marginals we present an equally simple, but more effective extension of the MCMC method (E-MCMC). Assuming an identical number of iterations, E-MCMC as compared to M-MCMC yields estimates with higher fidelity, thereby 1) allowing a far greater number and diversity of operating points and 2) improving overall classifier performance. To illustrate the utility of WMPM and compare the efficacies of M-MCMC and E-MCMC, we integrate them into our MRF-based classification system for detecting cancerous glands in (whole-mount or quarter) histological sections of the prostate.
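As an illustrative aside, the following is a minimal sketch of a weighted decision rule of the kind described above: given estimates of the posterior marginals at each site and per-class weights, each site receives the class with the largest weighted marginal. The function name, shapes and example weights are illustrative, not the authors' implementation.

```python
import numpy as np

def wmpm_labels(posterior_marginals, class_weights):
    """posterior_marginals : array (n_sites, n_classes) of MCMC estimates
    of P(label = c | data) at each site.
    class_weights : length-n_classes array; a larger weight biases decisions
    toward that class (e.g. toward 'cancer' to raise sensitivity)."""
    weighted = posterior_marginals * np.asarray(class_weights)[None, :]
    return weighted.argmax(axis=1)

# Hypothetical usage, favoring class 1 ("cancer") over class 0 ("benign"):
# labels = wmpm_labels(marginals, class_weights=[1.0, 3.0])
```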
Sadi, M; Dabir, B
2003-01-01
The Monte Carlo method is one of the most powerful techniques for modelling different processes, such as polymerization reactions. With this method, very detailed information on the structure and properties of polymers is obtained without any need to solve moment equations. In the Monte Carlo method, calculations are based on random number generation and the determination of reaction probabilities, so the number of algorithm repetitions (the selected volume of reactor used for modelling, which represents the number of initial molecules) is very important. In this paper, the initiation reaction was considered alone and the influence of the number of initiator molecules on the results was studied. It can be concluded that the Monte Carlo method will not give accurate results if the number of molecules is not big enough, because in that case the selected volume would not be representative of the whole system.
Dynamic Monte Carlo simulation of the NO+H reaction on Pt(100): TPR spectra
Álvarez-Falcón, L.; Alas, S. J.; Vicente, L.
2011-11-01
The catalytic reduction of nitric oxide by hydrogen over a Pt surface is studied using a dynamic Monte Carlo (MC) method on a square lattice under low pressure conditions. Using a Langmuir-Hinshelwood reaction mechanism, a simplified model with only four adsorbed species (NO, H, O, and N) is constructed. The NO dissociation rate, the limiting step in the whole reaction, is inhibited by co-adsorbed NO and H2 molecules and is enhanced both by the presence of empty sites and by adsorbed N atoms at nearest neighbors. In these simulations, several experimental parameter values are included, such as adsorption, desorption and diffusion of the reactants. The phenomenon is studied while varying the temperature over the 300-550 K range. The model reproduces well the observed TPD and TPR experimental results. For the whole NO+H2 reaction, the phenomenon of "surface explosion" is observed and can be explained as the result of the abrupt production of N2 due to both the autocatalytic NO decomposition favored by the presence of vacant sites and the development of inhomogeneous fluctuations. MC simulations also allow a visualization of the spatial development of the surface explosion as heating proceeds.
Saritas, Kayahan; Grossman, Jeffrey C.
2015-03-01
Molecules that undergo pericyclic isomerization reactions find interesting optical and energy storage applications, because of their usually high quantum yields, large spectral shifts and small structural changes upon light absorption. These reactions induce a drastic change in the conjugated structure such that substituents that become a part of the conjugated system upon isomerization can play an important role in determining properties such as enthalpy of isomerization and HOMO-LUMO gap. Therefore, theoretical investigations dealing with such systems should be capable of accurately capturing the interplay between electron correlation and exchange effects. In this work, we examine the dihydroazulene isomerization as an example conjugated system. We employ the highly accurate quantum Monte Carlo (QMC) method to predict thermochemical properties and to benchmark results from density functional theory (DFT) methods. Although DFT provides sufficient accuracy for similar systems, in this particular system, DFT predictions of ground state and reaction paths are inconsistent and non-systematic errors arise. We present a comparison between QMC and DFT results for enthalpy of isomerization, HOMO-LUMO gap and charge densities with a range of DFT functionals.
Application of proton boron fusion reaction to radiation therapy: A Monte Carlo simulation study
Yoon, Do-Kun; Jung, Joo-Young; Suh, Tae Suk
2014-12-01
Three alpha particles are emitted from the point of reaction between a proton and boron, and these alpha particles are effective in inducing the death of a tumor cell. After boron is accumulated in the tumor region, a proton delivered from outside the body can react with the boron in the tumor region. The boron causes an increase of the proton's maximum dose level, so that only the tumor cells are damaged more critically. In addition, a prompt gamma ray is emitted from the proton-boron reaction point. Here, we show that the effectiveness of proton boron fusion therapy was verified using Monte Carlo simulations. We found that a dramatic increase, by more than half, of the proton's maximum dose level was induced by the boron in the tumor region. This increase occurred only when the proton's maximum dose point was located within the boron uptake region. In addition, the 719 keV prompt gamma ray peak produced by the proton boron fusion reaction was positively detected. This therapy method features advantages such as the application of the Bragg peak to therapy, accurate targeting of the tumor, improved therapeutic effects, and monitoring of the therapy region during treatment.
A global reaction route mapping-based kinetic Monte Carlo algorithm
Mitchell, Izaac; Irle, Stephan; Page, Alister J.
2016-07-01
We propose a new on-the-fly kinetic Monte Carlo (KMC) method that is based on exhaustive potential energy surface searching carried out with the global reaction route mapping (GRRM) algorithm. Starting from any given equilibrium state, this GRRM-KMC algorithm performs a one-step GRRM search to identify all surrounding transition states. Intrinsic reaction coordinate pathways are then calculated to identify potential subsequent equilibrium states. Harmonic transition state theory is used to calculate rate constants for all potential pathways, before a standard KMC accept/reject selection is performed. The selected pathway is then used to propagate the system forward in time, which is calculated on the basis of 1st order kinetics. The GRRM-KMC algorithm is validated here in two challenging contexts: intramolecular proton transfer in malonaldehyde and surface carbon diffusion on an iron nanoparticle. We demonstrate that in both cases the GRRM-KMC method is capable of reproducing the 1st order kinetics observed during independent quantum chemical molecular dynamics simulations using the density-functional tight-binding potential.
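As an illustrative aside, the sketch below shows the two generic ingredients named above: a harmonic transition-state-theory rate constant for each candidate pathway and a standard KMC selection with a first-order time increment. The simple kT/h prefactor and the function names are simplifying assumptions, not the GRRM-KMC implementation itself.

```python
import math, random

KB = 8.617333262e-5   # Boltzmann constant, eV/K
H  = 4.135667696e-15  # Planck constant, eV*s

def htst_rate(barrier_ev, temperature):
    """Harmonic TST rate with a bare kT/h prefactor (vibrational partition
    functions omitted for brevity)."""
    return (KB * temperature / H) * math.exp(-barrier_ev / (KB * temperature))

def kmc_select(barriers_ev, temperature):
    """Pick one pathway among the transition states found around the current
    minimum; returns (pathway_index, time_increment)."""
    rates = [htst_rate(b, temperature) for b in barriers_ev]
    total = sum(rates)
    r, acc = random.random() * total, 0.0
    for i, k in enumerate(rates):
        acc += k
        if acc >= r:
            return i, -math.log(random.random()) / total

# Hypothetical usage with three barriers (eV) at 300 K:
# path, dt = kmc_select([0.35, 0.52, 0.80], temperature=300.0)
```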
Monte Carlo simulations of yttrium reaction rates in Quinta uranium target
Suchopár M.
2017-01-01
The international collaboration Energy and Transmutation of Radioactive Waste (E&T RAW) performed intensive studies of several simple accelerator-driven system (ADS) setups consisting of lead, uranium and graphite which were irradiated by relativistic proton and deuteron beams in past years at the Joint Institute for Nuclear Research (JINR) in Dubna, Russia. The most recent setup, called Quinta, consisting of a natural uranium target-blanket and lead shielding, was irradiated by deuteron beams in the energy range between 1 and 8 GeV in three accelerator runs at the JINR Nuclotron in 2011 and 2012, with yttrium samples, among others, inserted inside the setup to measure the neutron flux in various places. Suitable activation detectors serve as one of the possible tools for monitoring proton and deuteron beams and for measurements of the neutron field distribution in ADS studies. Yttrium is one such suitable material for monitoring high-energy neutrons, and various threshold reactions can be observed in yttrium samples. The yields of isotopes produced in the samples were determined using the activation method. Monte Carlo simulations of the reaction rates leading to the production of different isotopes were performed with the MCNPX transport code and compared with the experimental results obtained from the yttrium samples.
Battogtokh, D.; Asch, D. K.; Case, M. E.; Arnold, J.; Schüttler, H.-B.
2002-01-01
A chemical reaction network for the regulation of the quinic acid (qa) gene cluster of Neurospora crassa is proposed. An efficient Monte Carlo method for walking through the parameter space of possible chemical reaction networks is developed to identify an ensemble of deterministic kinetics models with rate constants consistent with RNA and protein profiling data. This method was successful in identifying a model ensemble fitting available RNA profiling data on the qa gene cluster. PMID:12477937
Kawano, Toshihiko [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2015-11-10
This theoretical treatment of low-energy compound nucleus reactions begins with the Bohr hypothesis, with corrections, and various statistical theories. The author investigates the statistical properties of the scattering matrix containing a Gaussian Orthogonal Ensemble (GOE) Hamiltonian in the propagator. The following conclusions are reached: For all parameter values studied, the numerical average of MC-generated cross sections coincides with the result of the Verbaarschot, Weidenmueller, Zirnbauer triple-integral formula. Energy average and ensemble average agree reasonably well when the width Γ is one or two orders of magnitude larger than the average resonance spacing d. In the strong-absorption limit, the channel degree-of-freedom ν_a is 2. The direct reaction increases the inelastic cross sections while the elastic cross section is reduced.
Drift-Implicit Multi-Level Monte Carlo Tau-Leap Methods for Stochastic Reaction Networks
Ben Hammouda, Chiheb
2015-05-12
In biochemical systems, stochastic effects can be caused by the presence of small numbers of certain reactant molecules. In this setting, discrete state-space and stochastic simulation approaches have proved to be more relevant than continuous state-space and deterministic ones. These stochastic models constitute the theory of stochastic reaction networks (SRNs). Furthermore, in some cases the dynamics of fast and slow time scales can be well separated, a property characterized by what is called stiffness. For such problems, the existing discrete state-space stochastic path simulation methods, such as the stochastic simulation algorithm (SSA) and the explicit tau-leap method, can be very slow. Therefore, implicit tau-leap approximations were developed to improve the numerical stability and provide more efficient simulation algorithms for these systems. One of the interesting tasks for SRNs is to approximate the expected values of some observables of the process at a certain fixed time T. This can be achieved using Monte Carlo (MC) techniques. However, in recent work, Anderson and Higham (2013) proposed a more computationally efficient method which combines the multilevel Monte Carlo (MLMC) technique with explicit tau-leap schemes. In this MSc thesis, we propose a new fast stochastic algorithm, particularly designed to address stiff systems, for approximating the expected values of some observables of SRNs. In fact, we take advantage of the idea of MLMC techniques and the drift-implicit tau-leap approximation to construct a drift-implicit MLMC tau-leap estimator. In addition to accurately estimating the expected values of a given observable of SRNs at a final time T, our proposed estimator ensures numerical stability with a lower cost than the MLMC explicit tau-leap algorithm, for systems including simultaneously fast and slow species. The key contribution of our work is the coupling of two drift-implicit tau-leap paths, which is the basic brick for
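As an illustrative aside, the generic multilevel Monte Carlo telescoping estimator underlying this kind of method is sketched below: the expectation at the finest level is the coarse-level estimate plus corrections from coupled pairs of paths. The callables are placeholders that would wrap the (drift-implicit) tau-leap path simulators; the names and counts are illustrative.

```python
import numpy as np

def mlmc_estimate(sample_level0, sample_pair, samples_per_level):
    """Telescoping MLMC estimator  E[g_L] ~ E[g_0] + sum_l E[g_l - g_{l-1}].

    sample_level0()   : draws one realization of the observable at the coarsest level.
    sample_pair(l)    : draws a *coupled* pair (fine, coarse) at levels (l, l-1).
    samples_per_level : list of sample counts M_0, ..., M_L.
    """
    est = np.mean([sample_level0() for _ in range(samples_per_level[0])])
    for l in range(1, len(samples_per_level)):
        pairs = [sample_pair(l) for _ in range(samples_per_level[l])]
        est += np.mean([fine - coarse for fine, coarse in pairs])
    return est
```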
Deng, Yu-Jia; Tripkovic, Vladimir; Rossmeisl, Jan;
2016-01-01
We study the oxygen reduction reaction (ORR), the catalytic process occurring at the cathode in fuel cells, on Pt layers prepared by electrodeposition onto an Au substrate. Using a nominal Pt layer by layer deposition method previously proposed, imperfect layers of Pt on Au are obtained. The ORR ...
Brons, S; Elsässer, T; Ferrari, A; Gadioli, E; Mairani, A; Parodi, K; Sala, P; Scholz, M; Sommerer, F
2010-01-01
Monte Carlo codes are rapidly spreading in the hadron therapy community due to their sophisticated nuclear/electromagnetic models, which allow an improved description of the complex mixed radiation field produced by nuclear reactions in therapeutic irradiation. In this contribution, results obtained with the Monte Carlo code FLUKA are presented, focusing on the production of secondary fragments in carbon ion interaction with water and on CT-based calculations of absorbed and biological effective dose for typical clinical situations. The results of the simulations are compared with the available experimental data and with the predictions of the GSI analytical treatment planning code TRiP.
Lu, Shih-I
2005-05-15
Ab initio calculations of the transition state structure and reaction enthalpy of the F + H2 --> HF + H reaction have been carried out by the fixed-node diffusion quantum Monte Carlo method in this study. The Monte Carlo sampling is based on Ornstein-Uhlenbeck random walks guided by a trial wave function constructed from floating spherical Gaussian orbitals and spherical Gaussian geminals. The Monte Carlo calculated barrier height of 1.09(16) kcal/mol is consistent with the experimental values, 0.86(10)/1.18(10) kcal/mol, and the calculated value from a multireference-type coupled-cluster (MRCC) calculation with the aug-cc-pVQZ(F)/cc-pVQZ(H) basis set, 1.11 kcal/mol. The Monte Carlo-based calculation also gives a similar value of the reaction enthalpy, -32.00(4) kcal/mol, compared with the experimental value, -32.06(17) kcal/mol, and the calculated value from a MRCC/aug-cc-pVQZ(F)/cc-pVQZ(H) calculation, -31.94 kcal/mol. This study clearly indicates a further application of the random-walk-based approach in the field of quantum chemical calculation.
A. U. Qaisrani; M. Khalid; M. K.Khan
2005-01-01
The CO-NO catalytic reaction on a body-centred cubic (bcc) lattice is studied by Monte Carlo simulation. The simple Langmuir-Hinshelwood (LH) mechanism yields a steady reactive window, which is bounded by continuous and discontinuous irreversible phase transitions. The effect of a precursor mechanism on the phase diagram of the system is also studied. According to this mechanism, the precursor motion of CO molecules is considered only on the surface of the bcc lattice. Some interesting observations are reported.
M. Khalid; A. U. Qaisrani; M. G. Ullah
2008-01-01
We study a model based on a precursor mechanism for the CO-NO catalytic reaction on a square lattice with Monte Carlo simulation. The precursor mechanism clearly demonstrates its impact on the phase diagram: a steady reactive state (SRS) is established. The width of the reactive region increases with the range of precursor mobility. When the precursor mobility is extended to the third-nearest neighborhood, the second-order transition disappears.
MonChER: Monte-Carlo generator for CHarge Exchange Reactions. Version 1.1. Physics and Manual
Ryutin, R. A.; Sobol, A E.; Petrov, V. A.
2011-01-01
MonChER is a Monte Carlo event generator for simulation of single and double charge exchange reactions in proton-proton collisions at energies from 0.9 to 14 TeV. Such reactions, $pp\to n+X$ and $pp\to n+X+n$, are characterized by leading neutron production. They are dominated by $\pi^+$ exchange and could provide us with more information about total and elastic $\pi^+ p$ and $\pi^+\pi^+$ cross sections and parton distributions in pions in the still unexplored kinematical region.
Multilevel ensemble Kalman filtering
Hoel, Haakon
2016-01-08
The ensemble Kalman filter (EnKF) is a sequential filtering method that uses an ensemble of particle paths to estimate the means and covariances required by the Kalman filter by the use of sample moments, i.e., the Monte Carlo method. EnKF is often both robust and efficient, but its performance may suffer in settings where the computational cost of accurate simulations of particles is high. The multilevel Monte Carlo method (MLMC) is an extension of classical Monte Carlo methods which by sampling stochastic realizations on a hierarchy of resolutions may reduce the computational cost of moment approximations by orders of magnitude. In this work we have combined the ideas of MLMC and EnKF to construct the multilevel ensemble Kalman filter (MLEnKF) for the setting of finite dimensional state and observation spaces. The main idea of this method is to compute particle paths on a hierarchy of resolutions and to apply multilevel estimators on the ensemble hierarchy of particles to compute Kalman filter means and covariances. Theoretical results and a numerical study of the performance gains of MLEnKF over EnKF will be presented. Some ideas on the extension of MLEnKF to settings with infinite dimensional state spaces will also be presented.
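As an illustrative aside, the sketch below shows the standard single-level, perturbed-observation EnKF analysis step in which the Kalman gain is built from ensemble sample covariances; the multilevel variant described above replaces these sample moments with multilevel estimators over a hierarchy of ensembles. Array names and shapes are assumptions for the sketch.

```python
import numpy as np

def enkf_analysis(ensemble, H, y, R, rng=None):
    """Perturbed-observation EnKF analysis step with sample covariances.

    ensemble : (n_members, n_state) forecast ensemble.
    H        : (n_obs, n_state) linear observation operator.
    y        : (n_obs,) observation vector.
    R        : (n_obs, n_obs) observation error covariance.
    """
    rng = rng or np.random.default_rng(0)
    X = np.asarray(ensemble, dtype=float)
    A = X - X.mean(axis=0)                       # ensemble anomalies
    HA = A @ H.T
    Pyy = HA.T @ HA / (len(X) - 1) + R           # innovation covariance
    Pxy = A.T @ HA / (len(X) - 1)                # state-observation cross covariance
    K = np.linalg.solve(Pyy.T, Pxy.T).T          # Kalman gain = Pxy @ inv(Pyy)
    perturbed = y + rng.multivariate_normal(np.zeros(len(y)), R, size=len(X))
    return X + (perturbed - X @ H.T) @ K.T
```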
Non-Thermal Effects on CO-NO Surface Catalytic Reaction on Square Surface: Monte Carlo Study
M. Khalid; A. U. Qaisrani; W. Ahmad
2005-01-01
A Monte Carlo simulation of the CO-NO heterogeneous catalytic reaction over a square surface has already been studied with a model based on the Langmuir-Hinshelwood (LH) mechanism, and the results of that study are well known. Here we study the effects on the phase diagram of the transient non-thermal mobility of the monomer (CO), based on a precursor mechanism, and of the diffusion of adsorbed nitrogen and oxygen atoms. The interesting feature of this model is that it yields a steady reactive window, while the simple LH mechanism is not capable of producing a steady reactive state.
Multilevel ensemble Kalman filtering
Hoel, Hakon
2016-06-14
This work embeds a multilevel Monte Carlo sampling strategy into the Monte Carlo step of the ensemble Kalman filter (EnKF) in the setting of finite dimensional signal evolution and noisy discrete-time observations. The signal dynamics is assumed to be governed by a stochastic differential equation (SDE), and a hierarchy of time grids is introduced for multilevel numerical integration of that SDE. The resulting multilevel EnKF is proved to asymptotically outperform EnKF in terms of computational cost versus approximation accuracy. The theoretical results are illustrated numerically.
惠丰; 张敏华; 马静
2012-01-01
Gibbs ensemble Monte Carlo (GEMC) has matured as a simulation technique for computing the phase equilibria of pure substances and mixtures. This paper describes the basic principles and development of the GEMC method, introduces and evaluates the force fields that are closely tied to GEMC, and illustrates the effectiveness of GEMC predictions of fluid phase equilibria using the results of the Sixth Industrial Fluid Properties Simulation Challenge as an example. Applications of GEMC to the calculation of pure-component properties and the phase behavior of mixtures, both in China and abroad, are then reviewed by category. The current limitations of the GEMC method are pointed out and its possible improvements and future development trends are discussed.
Kanai, Y; Takeuchi, N
2009-10-14
We revisit the molecular line growth mechanism of styrene on the hydrogenated Si(001) 2×1 surface. In particular, we investigate the energetics of the radical chain reaction mechanism by means of diffusion quantum Monte Carlo (QMC) and density functional theory (DFT) calculations. For the exchange-correlation (XC) functional we use the non-empirical generalized-gradient approximation (GGA) and meta-GGA. We find that the QMC result also predicts the intra-dimer-row growth of the molecular line over the inter-dimer-row growth, supporting the conclusion based on DFT results. However, the absolute magnitudes of the adsorption and reaction energies, and the heights of the energy barriers, differ considerably between QMC and DFT with the GGA/meta-GGA XC functionals.
Dybeck, Eric Christopher; Plaisance, Craig Patrick; Neurock, Matthew
2017-02-14
A novel algorithm has been developed to achieve temporal acceleration during kinetic Monte Carlo (KMC) simulations of surface catalytic processes. This algorithm allows for the direct simulation of reaction networks containing kinetic processes occurring on vastly disparate timescales which computationally overburden standard KMC methods. Previously developed methods for temporal acceleration in KMC have been designed for specific systems and often require a priori information from the user such as identifying the fast and slow processes. In the approach presented herein, quasi-equilibrated processes are identified automatically based on previous executions of the forward and reverse reactions. Temporal acceleration is achieved by automatically scaling the intrinsic rate constants of the quasi-equilibrated processes, bringing their rates closer to the timescales of the slow kinetically relevant non-equilibrated processes. All reactions are still simulated directly, although with modified rate constants. Abrupt changes in the underlying dynamics of the reaction network are identified during the simulation and the reaction rate constants are rescaled accordingly. The algorithm has been utilized here to model the Fischer-Tropsch synthesis reaction over ruthenium nanoparticles. This reaction network has multiple timescale-disparate processes which would be intractable to simulate without the aid of temporal acceleration. The accelerated simulations are found to give reaction rates and selectivities indistinguishable from those calculated by an equivalent mean-field kinetic model. The computational savings of the algorithm can span many orders of magnitude in realistic systems and the computational cost is not limited by the magnitude of the timescale disparity in the system processes. Furthermore, the algorithm has been designed in a generic fashion and can easily be applied to other surface catalytic processes of interest.
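As an illustrative aside, the following is a minimal sketch of the automatic-detection idea described above: over a sampling window, a reversible process pair is flagged as quasi-equilibrated when its forward and reverse execution counts are large and nearly balanced, and both rate constants are then scaled down by a common factor so the equilibrium constant is preserved. The thresholds and data structures are illustrative, not the published algorithm.

```python
def rescale_quasi_equilibrated(exec_counts, rate_constants,
                               min_events=100, balance_tol=0.05, scale=1e-3):
    """exec_counts    : dict pair_name -> (n_forward, n_reverse) in the last window.
    rate_constants : dict pair_name -> (k_forward, k_reverse); a modified copy
    is returned. A pair is quasi-equilibrated if both directions fired often
    and the net flux is a small fraction of the total; its rates are scaled
    down together so k_f / k_r is unchanged."""
    new_rates = dict(rate_constants)
    for pair, (nf, nr) in exec_counts.items():
        total = nf + nr
        if total >= min_events and abs(nf - nr) / total <= balance_tol:
            kf, kr = rate_constants[pair]
            new_rates[pair] = (kf * scale, kr * scale)
    return new_rates

# Hypothetical usage:
# rates = rescale_quasi_equilibrated({"CO_ads/des": (5400, 5350)},
#                                    {"CO_ads/des": (1.0e6, 9.8e5)})
```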
Kerisit, Sebastien N.; Pierce, Eric M.; Ryan, Joseph V.
2015-01-01
Borosilicate nuclear waste glasses develop complex altered layers as a result of coupled processes such as hydrolysis of network species, condensation of Si species, and diffusion. However, diffusion has often been overlooked in Monte Carlo models of the aqueous corrosion of borosilicate glasses. Therefore, three different models for dissolved Si diffusion in the altered layer were implemented in a Monte Carlo model and evaluated for glasses in the compositional range (75-x) mol% SiO2, (12.5+x/2) mol% B2O3 and (12.5+x/2) mol% Na2O, where 0 ≤ x ≤ 20%, and corroded in static conditions at a surface-to-volume ratio of 1000 m-1. The three models considered instantaneous homogenization (M1), linear concentration gradients (M2), and concentration profiles determined by solving Fick's 2nd law using a finite difference method (M3). Model M3 revealed that concentration profiles in the altered layer are not linear and show changes in shape and magnitude as corrosion progresses, unlike those assumed in model M2. Furthermore, model M3 showed that, for borosilicate glasses with a high forward dissolution rate compared to the diffusion rate, the gradual polymerization and densification of the altered layer is significantly delayed compared to models M1 and M2. Models M1 and M2 were found to be appropriate models only for glasses with high release rates such as simple borosilicate glasses with low ZrO2 content.
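As an illustrative aside, a finite-difference treatment of the kind named for model M3 can be sketched as an explicit (FTCS) update of Fick's second law on a 1D grid across the altered layer; the boundary conditions, grid spacing and diffusivity below are placeholders, not the parameters of the study.

```python
import numpy as np

def fick_step(c, D, dx, dt, c_surface, c_bulk):
    """One explicit (FTCS) time step of dc/dt = D * d2c/dx2 on a 1D grid.
    Stability requires D * dt / dx**2 <= 0.5."""
    c_new = c.copy()
    c_new[1:-1] = c[1:-1] + D * dt / dx**2 * (c[2:] - 2 * c[1:-1] + c[:-2])
    c_new[0], c_new[-1] = c_surface, c_bulk    # fixed-concentration boundaries
    return c_new

# Hypothetical usage: concentration profile across a 1-micron altered layer
# c = np.zeros(101)
# for _ in range(10000):
#     c = fick_step(c, D=1e-16, dx=1e-8, dt=0.1, c_surface=1e-3, c_bulk=0.0)
```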
Bouland, Olivier H.
2016-03-01
This article supplies an overview of issues related to the interpretation of surrogate measurement results for neutron-incident cross section predictions; difficulties that are somehow masked by the historical conversion route based on the Weisskopf-Ewing approximation. Our proposal is to handle the various difficulties by using a more rigorous approach relying on Monte Carlo simulation of transfer reactions with extended R-matrix theory. The multiple deficiencies of the historical surrogate treatment are recalled, but only one is examined in some detail here, namely the calculation of in-out-going channel Width Fluctuation Correction Factors (WFCF), whose behavior partly witnesses the failure of Niels Bohr's compound nucleus theoretical landmark. Relevant WFCF calculations, according to neutron-induced surrogate and cross section types, as a function of the neutron-induced fluctuating energy range [0 - 2.1 MeV], are presented and commented on for the case of the 240Pu* and 241Pu* compound nucleus isotopes.
Fracchia, Francesco; Filippi, Claudia; Amovilli, Claudio
2014-01-05
We present here several novel features of our recently proposed Jastrow linear generalized valence bond (J-LGVB) wave functions, which allow a consistently accurate description of complex potential energy surfaces (PES) of medium-large systems within quantum Monte Carlo (QMC). In particular, we develop a multilevel scheme to treat different regions of the molecule at different levels of the theory. As a prototypical study case, we investigate the decomposition of α-hydroxy-dimethylnitrosamine, a carcinogenic metabolite of dimethylnitrosamine (NDMA), through a two-step mechanism of isomerization followed by a retro-ene reaction. We compute a reliable reaction path with the quadratic configuration interaction method and employ QMC for the calculation of the electronic energies. We show that the use of multideterminantal wave functions is very important to correctly describe the critical points of this PES within QMC, and that our multilevel J-LGVB approach is an effective tool to significantly reduce the cost of QMC calculations without loss of accuracy. As regards the complex PES of α-hydroxy-dimethylnitrosamine, the accurate energies computed with our approach allow us to confirm the validity of the two-step reaction mechanism of decomposition originally proposed within density functional theory, but with some important differences in the barrier heights of the individual steps.
Re, Matteo; Valentini, Giorgio
2012-03-01
Ensemble methods are statistical and computational learning procedures reminiscent of the human social learning behavior of seeking several opinions before making any crucial decision. The idea of combining the opinions of different "experts" to obtain an overall “ensemble” decision is rooted in our culture at least from the classical age of ancient Greece, and it has been formalized during the Enlightenment with the Condorcet Jury Theorem [45], which proved that the judgment of a committee is superior to those of individuals, provided the individuals have reasonable competence. Ensembles are sets of learning machines that combine in some way their decisions, or their learning algorithms, or different views of data, or other specific characteristics to obtain more reliable and more accurate predictions in supervised and unsupervised learning problems [48,116]. A simple example is represented by the majority vote ensemble, by which the decisions of different learning machines are combined, and the class that receives the majority of “votes” (i.e., the class predicted by the majority of the learning machines) is the class predicted by the overall ensemble [158]. In the literature, a plethora of terms other than ensembles has been used, such as fusion, combination, aggregation, and committee, to indicate sets of learning machines that work together to solve a machine learning problem [19,40,56,66,99,108,123], but in this chapter we maintain the term ensemble in its widest meaning, in order to include the whole range of combination methods. Nowadays, ensemble methods represent one of the main current research lines in machine learning [48,116], and the interest of the research community in ensemble methods is witnessed by conferences and workshops specifically devoted to ensembles, first of all the multiple classifier systems (MCS) conference organized by Roli, Kittler, Windeatt, and other researchers of this area [14,62,85,149,173]. Several theories have been
Iftimie, R; Schofield, J P; Iftimie, Radu; Salahub, Dennis; Schofield, Jeremy
2003-01-01
In this article, we propose an efficient method for sampling the relevant state space in condensed phase reactions. In the present method, the reaction is described by solving the electronic Schrödinger equation for the solute atoms in the presence of explicit solvent molecules. The sampling algorithm uses a molecular mechanics guiding potential in combination with simulated tempering ideas and allows thorough exploration of the solvent state space in the context of an ab initio calculation, even when the dielectric relaxation time of the solvent is long. The method is applied to the study of the double proton transfer reaction that takes place between a molecule of acetic acid and a molecule of methanol in tetrahydrofuran. It is demonstrated that calculations of rates of chemical transformations occurring in solvents of medium polarity can be performed with an increase in CPU time by factors ranging from 4 to 15 with respect to gas-phase calculations.
2002-01-01
On the NYYD Ensemble duo Traksmann - Lukk performing E.-S. Tüür's work "Symbiosis", which has also been recorded on the recently released NYYD Ensemble CD. Concerts on 2 March in the small hall of the Rakvere Theatre and on 3 March at Rotermanni Soolaladu; the programme includes Tüür, Kaumann, Berio, Reich, Yun, Hauta-aho and Buckinx.
On Ensemble Nonlinear Kalman Filtering with Symmetric Analysis Ensembles
Luo, Xiaodong
2010-09-19
The ensemble square root filter (EnSRF) [1, 2, 3, 4] is a popular method for data assimilation in high dimensional systems (e.g., geophysics models). Essentially the EnSRF is a Monte Carlo implementation of the conventional Kalman filter (KF) [5, 6]. It is mainly different from the KF at the prediction steps, where it is some ensembles, rather than the means and covariance matrices, of the system state that are propagated forward. In doing this, the EnSRF is computationally more efficient than the KF, since propagating a covariance matrix forward in high dimensional systems is prohibitively expensive. In addition, the EnSRF is also very convenient in implementation. By propagating the ensembles of the system state, the EnSRF can be directly applied to nonlinear systems without any change in comparison to the assimilation procedures in linear systems. However, by adopting the Monte Carlo method, the EnSRF also incurs certain sampling errors. One way to alleviate this problem is to introduce certain symmetry to the ensembles, which can reduce the sampling errors and spurious modes in evaluation of the means and covariances of the ensembles [7]. In this contribution, we present two methods to produce symmetric ensembles. One is based on the unscented transform [8, 9], which leads to the unscented Kalman filter (UKF) [8, 9] and its variant, the ensemble unscented Kalman filter (EnUKF) [7]. The other is based on Stirling’s interpolation formula (SIF), which results in the divided difference filter (DDF) [10]. Here we propose a simplified divided difference filter (sDDF) in the context of ensemble filtering. The similarity and difference between the sDDF and the EnUKF will be discussed. Numerical experiments will also be conducted to investigate the performance of the sDDF and the EnUKF, and compare them to a well-established EnSRF, the ensemble transform Kalman filter (ETKF) [2].
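As an illustrative aside, the symmetric sigma-point construction of the unscented transform referenced above can be sketched as follows: 2n+1 points that reproduce a given mean and covariance exactly under their weights. The scaling parameter and function name are illustrative; this is not the EnUKF implementation itself.

```python
import numpy as np

def sigma_points(mean, cov, kappa=1.0):
    """Symmetric sigma-point set of the unscented transform: the mean plus 2n
    points displaced along +/- columns of a square root of (n + kappa) * cov.
    Returns (points, weights) with shapes ((2n+1, n), (2n+1,)); the weighted
    points reproduce `mean` and `cov` exactly."""
    mean = np.asarray(mean, dtype=float)
    cov = np.asarray(cov, dtype=float)
    n = len(mean)
    L = np.linalg.cholesky((n + kappa) * cov)
    pts = [mean] + [mean + L[:, i] for i in range(n)] + [mean - L[:, i] for i in range(n)]
    w = np.full(2 * n + 1, 1.0 / (2.0 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    return np.array(pts), w

# Hypothetical usage for a 3-dimensional state:
# pts, w = sigma_points(np.zeros(3), np.eye(3))
```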
Optimization of 67Cu production via 70Zn(p,α) reaction using Monte Carlo method
Kim, Gye Hong; Yoo, Jae Jun; Chun, Kwon Soo; An, Gwang Il; Park, Hyun; Kim, Byung Il [Korea Institute of Radiological and Medical Sciences, Seoul (Korea, Republic of); Park, Sung Ho [Dept. of Neurosurgery, Ulsan University Hospital, Ulsan (Korea, Republic of)
2014-11-15
Copper-67 (T1/2 = 61.9 h) is a radioisotope with significant potential for therapeutic application in nuclear medicine. This radionuclide emits β-particles with a maximum energy of 561.7 keV (mean Eβ = 141 keV) and γ-rays of 91.266 keV (7.0%), 93.311 keV (16.1%) and 184.577 keV (48.7%). These γ-rays emitted from 67Cu make it suitable for imaging the tracer distribution by single photon emission computed tomography (SPECT) and for dosimetry calculations. The Monte Carlo code MCNPX was used to model the interaction of proton radiation with a zinc target for the production of 67Cu. The optimum irradiation condition of the solid target to obtain a high production rate of 67Cu was investigated. Theoretical production yields were predicted for the 70Zn(p,α)67Cu reaction over a broad range of energy levels using the MCNPX and SRIM codes. The results of these calculations were compared with published data for the same reaction. Reasonable agreement between the experimental and theoretical production yields was obtained. The results of the simulations confirmed that the MCNPX code is a useful and accurate tool for the prediction of medical radioisotope production and the optimization of the target design.
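As a hedged aside, yield predictions of the kind compared above are commonly organized around the standard thick-target activation integral; the expression below is the generic textbook form under that assumption, not the specific MCNPX/SRIM workflow of this study.

```latex
% Thick-target production yield per incident proton (generic form), where
% n_t is the target atom number density, sigma(E) the 70Zn(p,alpha)67Cu
% cross section, and S(E) = -dE/dx the proton stopping power in the target:
Y \;=\; n_t \int_{E_{\mathrm{out}}}^{E_{\mathrm{in}}} \frac{\sigma(E)}{S(E)}\,\mathrm{d}E
```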
Long Wang; Xiao-mei Yang; Xue-hao He
2013-01-01
The AB2-type bulk polymerization of 3,5-bis(trimethylsiloxy)benzoyl chloride is studied by the reactive 3D bond fluctuation lattice model (3d-BFLM). Through tuning the reactivity parameters, the experimental data are fitted well via an iterative dichotomy method. Using the optimized reactivity parameters, the number-average degree of polymerization and the degree of branching obtained in the simulation are very close to the experimental data. Meanwhile, information about the weight-average degree of polymerization and the polydispersity index is provided, and the internal structural properties of the hyperbranched polyesters are investigated. Simulation results demonstrate that the 3d-BFLM can be used to study specific hyperbranched polymerizations semi-quantitatively, which is helpful for a deeper understanding of the reaction kinetics and for making predictions for specific polymerization systems.
Stoller, Roger E [ORNL; Golubov, Stanislav I [ORNL; Becquart, C. S. [Universite de Lille; Domain, C. [EDF R& D, Clamart, France
2006-09-01
The multiscale modeling scheme encompasses models from the atomistic to the continuum scale. Phenomena at the mesoscale are typically simulated using reaction rate theory (RT), Monte Carlo (MC), or phase field models. These mesoscale models are appropriate for application to problems that involve intermediate length scales (μm to >mm) and timescales from diffusion (~μs) to long-term microstructural evolution (~years). Phenomena at this scale have the most direct impact on mechanical properties in structural materials of interest to nuclear energy systems, and are also the most accessible to direct comparison between the results of simulations and experiments. Recent advances in computational power have substantially expanded the range of application for MC models. Although the RT and MC models can be used to simulate the same phenomena, many of the details are handled quite differently in the two approaches. A direct comparison of the RT and MC descriptions has been made in the domain of point defect cluster dynamics modeling, which is relevant to both the nucleation and evolution of radiation-induced defect structures. The relative merits and limitations of the two approaches are discussed, and the predictions of the two approaches are compared for specific irradiation conditions.
Biedermann, Frank; Nau, Werner M
2014-05-26
Ternary complexes between the macrocyclic host cucurbit[8]uril, dicationic dyes, and chiral aromatic analytes afford strong induced circular dichroism (ICD) signals in the near-UV and visible regions. This allows for chirality sensing and peptide-sequence recognition in water at low micromolar analyte concentrations. The reversible and noncovalent mode of binding ensures an immediate response to concentration changes, which allows the real-time monitoring of chemical reactions. The introduced supramolecular method is likely to find applications in bioanalytical chemistry, especially enzyme assays, for drug-related analytical applications, and for continuous monitoring of enantioselective reactions, particularly asymmetric catalysis.
Rundel, R. D.; Butler, D. M.; Stolarski, R. S.
1978-01-01
The paper discusses the development of a concise stratospheric model which uses iteration to obtain coupling between interacting species. The one-dimensional, steady-state, diurnally-averaged model generates diffusion equations with appropriate sources and sinks for the species odd oxygen, H2O, H2, CO, N2O, odd nitrogen, CH4, CH3Cl, CCl4, CF2Cl2, CFCl3, and odd chlorine. The model evaluates steady-state perturbations caused by injections of chlorine and NO(x) and may be used to predict ozone depletion. The model is used in a Monte Carlo study of the propagation of reaction-rate imprecisions by calculating the ozone perturbation caused by the addition of chlorine. Since the model is sensitive to only 10 of the more than 50 reaction rates considered, only about 1000 Monte Carlo cases are required to span the space of possible results.
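As an illustrative aside, the Monte Carlo propagation of rate-constant imprecision described above can be sketched generically: each sensitive rate constant is sampled from a log-normal distribution about its nominal value, the model is re-run, and the spread of outputs quantifies the propagated uncertainty. The model callable, uncertainty factors and sample count are placeholders.

```python
import numpy as np

def propagate_rate_uncertainty(model, nominal_rates, uncertainty_factors,
                               n_cases=1000, rng=np.random.default_rng(1)):
    """model               : callable mapping a rate-constant array to a scalar output
                             (e.g. percent ozone depletion); placeholder here.
    nominal_rates       : array of recommended rate constants.
    uncertainty_factors : array f_i; each rate is sampled log-normally so that
                          one standard deviation spans a factor of f_i."""
    sigmas = np.log(np.asarray(uncertainty_factors, dtype=float))
    nominal = np.asarray(nominal_rates, dtype=float)
    outputs = []
    for _ in range(n_cases):
        sampled = nominal * np.exp(rng.normal(0.0, sigmas))
        outputs.append(model(sampled))
    return np.array(outputs)
```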
Borgia, Alessandro; Wensley, Beth G; Soranno, Andrea; Nettels, Daniel; Borgia, Madeleine B; Hoffmann, Armin; Pfeil, Shawn H; Lipman, Everett A; Clarke, Jane; Schuler, Benjamin
2012-01-01
Theory, simulations and experimental results have suggested an important role of internal friction in the kinetics of protein folding. Recent experiments on spectrin domains provided the first evidence for a pronounced contribution of internal friction in proteins that fold on the millisecond timescale. However, it has remained unclear how this contribution is distributed along the reaction and what influence it has on the folding dynamics. Here we use a combination of single-molecule Förster resonance energy transfer, nanosecond fluorescence correlation spectroscopy, microfluidic mixing and denaturant- and viscosity-dependent protein-folding kinetics to probe internal friction in the unfolded state and at the early and late transition states of slow- and fast-folding spectrin domains. We find that the internal friction affecting the folding rates of spectrin domains is highly localized to the early transition state, suggesting an important role of rather specific interactions in the rate-limiting conformational changes.
Mello, P.A.; Pereyra, P.; Seligman, T.H.
1985-05-01
Ensembles of scattering S-matrices have been used in the past to describe the statistical fluctuations exhibited by many nuclear-reaction cross sections as a function of energy. In recent years, there have been attempts to construct these ensembles explicitly in terms of S, by directly proposing a statistical law for S. In the present paper, it is shown that, for an arbitrary number of channels, one can incorporate, in the ensemble of S-matrices, the conditions of flux conservation, time-reversal invariance, causality, ergodicity, and the requirement that the ensemble average coincide with the optical scattering matrix. Since these conditions do not specify the ensemble uniquely, the ensemble with maximum information entropy among those that satisfy the above requirements is dealt with. Some applications to few-channel problems and comparisons with Monte Carlo calculations are presented.
Aken, Bronwen L.; Achuthan, Premanand; Akanni, Wasiu; Amode, M. Ridwan; Bernsdorff, Friederike; Bhai, Jyothish; Billis, Konstantinos; Carvalho-Silva, Denise; Cummins, Carla; Clapham, Peter; Gil, Laurent; Girón, Carlos García; Gordon, Leo; Hourlier, Thibaut; Hunt, Sarah E.; Janacek, Sophie H.; Juettemann, Thomas; Keenan, Stephen; Laird, Matthew R.; Lavidas, Ilias; Maurel, Thomas; McLaren, William; Moore, Benjamin; Murphy, Daniel N.; Nag, Rishi; Newman, Victoria; Nuhn, Michael; Ong, Chuang Kee; Parker, Anne; Patricio, Mateus; Riat, Harpreet Singh; Sheppard, Daniel; Sparrow, Helen; Taylor, Kieron; Thormann, Anja; Vullo, Alessandro; Walts, Brandon; Wilder, Steven P.; Zadissa, Amonida; Kostadima, Myrto; Martin, Fergal J.; Muffato, Matthieu; Perry, Emily; Ruffier, Magali; Staines, Daniel M.; Trevanion, Stephen J.; Cunningham, Fiona; Yates, Andrew; Zerbino, Daniel R.; Flicek, Paul
2017-01-01
Ensembl (www.ensembl.org) is a database and genome browser for enabling research on vertebrate genomes. We import, analyse, curate and integrate a diverse collection of large-scale reference data to create a more comprehensive view of genome biology than would be possible from any individual dataset. Our extensive data resources include evidence-based gene and regulatory region annotation, genome variation and gene trees. An accompanying suite of tools, infrastructure and programmatic access methods ensure uniform data analysis and distribution for all supported species. Together, these provide a comprehensive solution for large-scale and targeted genomics applications alike. Among many other developments over the past year, we have improved our resources for gene regulation and comparative genomics, and added CRISPR/Cas9 target sites. We released new browser functionality and tools, including improved filtering and prioritization of genome variation, Manhattan plot visualization for linkage disequilibrium and eQTL data, and an ontology search for phenotypes, traits and disease. We have also enhanced data discovery and access with a track hub registry and a selection of new REST end points. All Ensembl data are freely released to the scientific community and our source code is available via the open source Apache 2.0 license. PMID:27899575
K.Iqbal; A.Basit
2011-01-01
The presence of oxygen in the subsurface in monomer-dimer reactions (CO-O2 and NO-CO) is observed experimentally. The effect of subsurface oxygen on a CO-O2 catalytic reaction on a face-centered cubic (FCC) lattice is studied using Monte Carlo simulation. The effect of adding subsurface neighbours on the phase diagram is also extensively explored. It is observed that the subsurface oxygen totally eliminates the typical second order phase transition. It is also shown that the introduction of the diffusion of O atoms and the subsurface of the FCC lattice shifts the single transition point towards the stoichiometric ratio.
An introduction to Monte Carlo methods
Walter, J. -C.; Barkema, G. T.
2015-01-01
Monte Carlo simulations are methods for simulating statistical systems. The aim is to generate a representative ensemble of configurations to access thermodynamical quantities without the need to solve the system analytically or to perform an exact enumeration. The main principles of Monte Carlo sim
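As an illustrative aside, the principle stated above can be made concrete with a minimal Metropolis sampler for a toy one-dimensional energy function; the step size, temperature and example potential are illustrative assumptions, not taken from the introduction itself.

```python
import math, random

def metropolis(energy, x0, beta, n_steps, step=0.5, rng=random.Random(0)):
    """Generate canonical-ensemble samples of a 1D coordinate by the Metropolis
    rule: propose a displaced configuration, accept with min(1, exp(-beta*dE))."""
    x, e = x0, energy(x0)
    samples = []
    for _ in range(n_steps):
        x_new = x + rng.uniform(-step, step)
        e_new = energy(x_new)
        if e_new <= e or rng.random() < math.exp(-beta * (e_new - e)):
            x, e = x_new, e_new
        samples.append(x)
    return samples

# Example: representative ensemble for a harmonic well
# samples = metropolis(lambda x: 0.5 * x * x, x0=0.0, beta=1.0, n_steps=10000)
```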
Monte Carlo Simulation of CO Oxidation Reaction on Fractal Surface
曾健青; 张镜澄; 钟炳
1999-01-01
The CO oxidation reaction on a DLA fractal surface has been studied in detail by the Monte Carlo method. It was found that (1) when only adsorption and surface reaction were considered, the O atoms and CO molecules on the surface would self-organize after a reaction period, which greatly decreased the chance of contact among different reactant molecules and consequently decreased the reaction rate; furthermore, O atoms tended to be adsorbed at the center or the inner area of the DLA surface while CO molecules could exist only at the exterior; (2) the clusters of O atoms could be cut off by vacant active sites after the introduction of CO surface diffusion, which further accelerated the reaction; (3) when the reversible adsorption of CO was introduced, O atoms and CO molecules could adsorb evenly over the whole DLA surface and the reaction was then sped up greatly, which suggests that for a good catalyst the adsorption strength should be moderate.
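As an illustrative aside, the adsorption and reaction steps discussed above can be sketched with a minimal ZGB-style CO oxidation trial on a periodic square lattice; this is a generic textbook model, not the DLA-surface model of the study, and the lattice representation and parameters are assumptions.

```python
import random

def neighbor(site, L, rng):
    """Random nearest neighbor of a site on an L x L periodic square lattice."""
    i, j = site
    di, dj = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
    return ((i + di) % L, (j + dj) % L)

def react(lattice, site, L):
    """If the species at `site` has an adjacent reaction partner (CO next to O
    or vice versa), both desorb immediately as CO2, leaving two vacancies."""
    partner = {'CO': 'O', 'O': 'CO'}.get(lattice[site])
    if partner is None:
        return
    i, j = site
    for di, dj in [(1, 0), (-1, 0), (0, 1), (0, -1)]:
        nbr = ((i + di) % L, (j + dj) % L)
        if lattice[nbr] == partner:
            lattice[site] = lattice[nbr] = None
            return

def zgb_step(lattice, y_co, rng=random.Random(0)):
    """One adsorption trial: CO adsorbs on a single empty site with probability
    y_co, otherwise O2 adsorbs dissociatively on an empty nearest-neighbor pair."""
    L = int(len(lattice) ** 0.5)
    site = (rng.randrange(L), rng.randrange(L))
    if rng.random() < y_co:
        if lattice[site] is None:
            lattice[site] = 'CO'
            react(lattice, site, L)
    else:
        nbr = neighbor(site, L, rng)
        if lattice[site] is None and lattice[nbr] is None:
            lattice[site] = lattice[nbr] = 'O'
            react(lattice, site, L)
            if lattice[nbr] == 'O':
                react(lattice, nbr, L)

# Hypothetical usage on a 50 x 50 lattice:
# lat = {(i, j): None for i in range(50) for j in range(50)}
# for _ in range(100000):
#     zgb_step(lat, y_co=0.4)
```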
Composed ensembles of random unitary ensembles
Pozniak, Marcin; Zyczkowski, Karol; Kus, Marek
1997-01-01
Composed ensembles of random unitary matrices are defined via products of matrices, each pertaining to a given canonical circular ensemble of Dyson. We investigate statistical properties of spectra of some composed ensembles and demonstrate their physical relevance. We discuss also the methods of generating random matrices distributed according to invariant Haar measure on the orthogonal and unitary group.
Heinisch, H.L.; Trinkaus, H.; Singh, Bachu Narain
2007-01-01
and confirmed by kinetic Monte Carlo (KMC) simulations. Here we report on KMC simulations investigating a different transition from 1D to 3D diffusion of 1D gliding loops for which their 1D migration is interrupted by occasional 2D migration due to conservative climb by dislocation core diffusion within a plane transverse to their 1D glide direction. Their transition from 1D to 3D kinetics is significantly different from that due to direction changes. The KMC results are compared to an analytical description of this diffusion mode in the form of a master curve relating the 1D normalized sink strength
A Localized Ensemble Kalman Smoother
Butala, Mark D.
2012-01-01
Numerous geophysical inverse problems prove difficult because the available measurements are indirectly related to the underlying unknown dynamic state and the physics governing the system may involve imperfect models or unobserved parameters. Data assimilation addresses these difficulties by combining the measurements and physical knowledge. The main challenge in such problems usually involves their high dimensionality and the standard statistical methods prove computationally intractable. This paper develops and addresses the theoretical convergence of a new high-dimensional Monte-Carlo approach called the localized ensemble Kalman smoother.
Stochastic Simulation of CSTR-BZ Reaction System
魏庆莉; 王贵昌; 蔡遵生; 赵学庄
2000-01-01
A flow-rate term is introduced into the Monte Carlo algorithm. Through a stochastic analysis of the SNB (Showalter-Noyes-Bar-Eli) model of the CSTR-BZ reaction system, which exhibits a flow-rate bifurcation structure, the meaning of the minimum reaction concentration and the influence of the continuity of the flow-rate term are discussed. Comparison with experimental results and with deterministic numerical integration shows that, for real flow reaction systems, the macroscopic deterministic approach based on the law of mass action remains applicable.
倪嵩波; 黄凯; 傅淑霞
2011-01-01
The mechanism of the oxidative coupling of methane over a Mn-Na2WO4/SiO2 catalyst is studied. Based on the double-activity-site Mn/W catalytic centers, the reaction model of methane and oxygen co-feeding to produce C2 hydrocarbons was simulated by a Monte Carlo approach. The computer simulation results are in good agreement with experiments under similar conditions, which indicates that the proposed complex surface model is reasonable. The influence of the grid size and of the methane-to-oxygen feeding ratio is also taken into account, and the evolution of the surface coverage with virtual reaction time is analyzed. The results show that the surface reaction is slowed by enlargement of the grid size, and that the feeding ratio NCH4/NO2 has a significant effect on the conversion of methane and the selectivity for C2 hydrocarbons. The analysis of the surface coverage with virtual reaction time indicates that insufficient replenishment of lattice oxygen results in low catalyst activity late in the reaction, and that most of the oxidation of ethane and ethylene occurs in the gas phase.
Authier, N
1998-12-01
One of the questions asked in radiation shielding problems is the estimation of the radiation level, in particular to determine the accessibility for working personnel in controlled areas (nuclear power plants, nuclear fuel reprocessing plants) or to study the dose gradients encountered in materials (iron nuclear vessels, medical therapy, electronics in satellites). The flux and reaction rate estimators used in Monte Carlo codes give average values over volumes or surfaces of the geometrical description of the system. In certain configurations, however, point estimates of deposited energy and dose are necessary. The Monte Carlo estimate of the flux at a point of interest is a calculation with unbounded variance; the central limit theorem cannot be applied, so no simple confidence level may be computed and the convergence rate is very poor. We propose in this study a new estimator for the photon flux at a point. The method is based on the 'once more collided flux estimator' developed earlier for neutron calculations. It solves the problem of the unbounded variance and does not add any bias to the estimation. We show, however, that new sampling schemes, specially developed to treat the anisotropy of photon coherent scattering, are necessary for a good and regular behavior of the estimator. These developments, integrated into the TRIPOLI-4 Monte Carlo code, add the possibility of an unbiased point estimate on media interfaces. (author)
Exploring ensemble visualization
Phadke, Madhura N.; Pinto, Lifford; Alabi, Oluwafemi; Harter, Jonathan; Taylor, Russell M., II; Wu, Xunlei; Petersen, Hannah; Bass, Steffen A.; Healey, Christopher G.
2012-01-01
An ensemble is a collection of related datasets. Each dataset, or member, of an ensemble is normally large, multidimensional, and spatio-temporal. Ensembles are used extensively by scientists and mathematicians, for example, by executing a simulation repeatedly with slightly different input parameters and saving the results in an ensemble to see how parameter choices affect the simulation. To draw inferences from an ensemble, scientists need to compare data both within and between ensemble members. We propose two techniques to support ensemble exploration and comparison: a pairwise sequential animation method that visualizes locally neighboring members simultaneously, and a screen door tinting method that visualizes subsets of members using screen space subdivision. We demonstrate the capabilities of both techniques, first using synthetic data, then with simulation data of heavy ion collisions in high-energy physics. Results show that both techniques are capable of supporting meaningful comparisons of ensemble data.
Lima, Maria Carolina P; Coutinho, Kaline; Canuto, Sylvio; Rocha, Willian R
2006-06-08
A combined Monte Carlo and quantum mechanical study was carried out to analyze the tautomeric equilibrium of 2-mercaptopyrimidine in the gas phase and in aqueous solution. Second- and fourth-order Møller-Plesset perturbation theory calculations indicate that in the gas phase the thiol (Pym-SH) is more stable than the thione (Pym-NH) by ca. 8 kcal/mol. In aqueous solution, thermodynamic perturbation theory implemented on a Monte Carlo NpT simulation indicates that both the differential enthalpy and Gibbs free energy favor the thione form. The calculated differential enthalpy is $\Delta H_{\mathrm{solv}}(\mathrm{SH}\to\mathrm{NH}) = -1.7$ kcal/mol and the differential Gibbs free energy is $\Delta G_{\mathrm{solv}}(\mathrm{SH}\to\mathrm{NH}) = -1.9$ kcal/mol. Analysis is made of the contribution of the solute-solvent hydrogen bonds, and it is noted that the SH group in the thiol and the NH group in the thione tautomers act exclusively as hydrogen bond donors in aqueous solution. The proton transfer reaction between the tautomeric forms was also investigated in the gas phase and in aqueous solution. Two distinct mechanisms were considered: a direct intramolecular transfer and a water-assisted mechanism. In the gas phase, the intramolecular transfer leads to a large energy barrier of 34.4 kcal/mol, passing through a three-center transition state. The proton transfer with the assistance of one water molecule decreases the energy barrier to 17.2 kcal/mol. In solution, these calculated activation barriers are, respectively, 32.0 and 14.8 kcal/mol. The solvent effect is found to be sizable, but the explicit water molecule is considerably more important as a participant in the water-assisted mechanism than as part of the solvent field of the solute-solvent interaction. Finally, the calculated total Gibbs free energy is used to estimate the equilibrium constant.
朱群雄; 赵乃伟; 徐圆
2012-01-01
Chemical processes are complex, and traditional neural network models usually cannot achieve satisfactory accuracy for them. Selective neural network ensembles are an effective way to enhance generalization accuracy, but some problems remain, e.g., the lack of a unified definition of diversity among component neural networks, and the difficulty of improving accuracy by selection when the diversity of the available networks is small. In this study, the output errors of the networks are vectorized, the diversity of the networks is defined based on these error vectors, and the size of the ensemble is analyzed. An error-vectorization-based selective neural network ensemble (EVSNE) is then proposed, in which the error vector of each network can offset those of the other networks by training the component networks in order, so that the component networks have large diversity. Experiments and comparisons over standard data sets and an actual chemical process data set for the production of high-density polyethylene demonstrate that EVSNE has better generalization ability.
Multiscale ensemble filtering for reservoir engineering applications
Lawniczak, W.; Hanea, R.G.; Heemink, A.; Mclaughlin, D.
2009-01-01
Reservoir management requires periodic updates of the simulation models using the production data available over time. Traditionally, validation of reservoir models with production data is done using a history matching process. Uncertainties in the data, as well as in the model, lead to a nonunique history matching inverse problem. It has been shown that the ensemble Kalman filter (EnKF) is an adequate method for predicting the dynamics of the reservoir. The EnKF is a sequential Monte-Carlo a...
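For readers unfamiliar with the EnKF mentioned above, the core analysis step can be sketched in a few lines. The snippet below is a minimal stochastic (perturbed-observation) EnKF update assuming a linear observation operator H; it illustrates the generic method only, not the multiscale filtering scheme developed in this work.

```python
import numpy as np

def enkf_update(ensemble, H, y, R, rng):
    """Stochastic (perturbed-observation) EnKF analysis step.

    ensemble : (n_state, n_members) forecast ensemble
    H        : (n_obs, n_state) linear observation operator
    y        : (n_obs,) observation vector
    R        : (n_obs, n_obs) observation-error covariance
    """
    n_state, n_members = ensemble.shape
    x_mean = ensemble.mean(axis=1, keepdims=True)
    X = (ensemble - x_mean) / np.sqrt(n_members - 1)   # state anomalies
    Y = H @ X                                          # observation-space anomalies
    # Kalman gain built from the low-rank ensemble covariance.
    K = X @ Y.T @ np.linalg.inv(Y @ Y.T + R)
    # Perturb the observation for each member to preserve analysis spread.
    y_pert = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, n_members).T
    return ensemble + K @ (y_pert - H @ ensemble)
```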
Demonstrating the value of larger ensembles in forecasting physical systems
Reason L. Machete
2016-12-01
Ensemble simulation propagates a collection of initial states forward in time in a Monte Carlo fashion. Depending on the fidelity of the model and the properties of the initial ensemble, the goal of ensemble simulation can range from merely quantifying variations in the sensitivity of the model all the way to providing actionable probability forecasts of the future. Whatever the goal is, success depends on the properties of the ensemble, and there is a longstanding discussion in meteorology as to the size of initial-condition ensemble most appropriate for Numerical Weather Prediction. In terms of resource allocation, how is one to divide finite computing resources between model complexity, ensemble size, data assimilation and other components of the forecast system? One wishes to avoid undersampling the information available from the model's dynamics, yet one also wishes to use the highest fidelity model available. Arguably, a higher fidelity model can better exploit a larger ensemble; nevertheless it is often suggested that a relatively small ensemble, say ~16 members, is sufficient and that larger ensembles are not an effective investment of resources. This claim is shown to be dubious when the goal is probabilistic forecasting, even in settings where the forecast model is informative but imperfect. Probability forecasts for a 'simple' physical system are evaluated at different lead times; ensembles of up to 256 members are considered. The pure density estimation context (where ensemble members are drawn from the same underlying distribution as the target) differs from the forecasting context, where one is given a high fidelity (but imperfect) model. In the forecasting context, the information provided by additional members depends also on the fidelity of the model, the ensemble formation scheme (data assimilation), the ensemble interpretation and the nature of the observational noise. The effect of increasing the ensemble size is quantified ...
Marcus, Ryan C. [Los Alamos National Laboratory
2012-07-25
MCMini is a proof of concept that demonstrates the possibility for Monte Carlo neutron transport using OpenCL with a focus on performance. This implementation, written in C, shows that tracing particles and calculating reactions on a 3D mesh can be done in a highly scalable fashion. These results demonstrate a potential path forward for MCNP or other Monte Carlo codes.
Vasyunin, A I
2012-01-01
The observed gas-phase molecular inventory of hot cores is believed to be significantly impacted by the products of chemistry in interstellar ices. In this study, we report the construction of a full macroscopic Monte Carlo model of both the gas-phase chemistry and the chemistry occurring in the icy mantles of interstellar grains. Our model treats icy grain mantles in a layer-by-layer manner, which incorporates laboratory data on ice desorption correctly. The ice treatment includes a distinction between a reactive ice surface and an inert bulk. The treatment also distinguishes between zeroth and first order desorption, and includes the entrapment of volatile species in more refractory ice mantles. We apply the model to the investigation of the chemistry in hot cores, in which a thick ice mantle built up during the previous cold phase of protostellar evolution undergoes surface reactions and is eventually evaporated. For the first time, the impact of a detailed multilayer approach to grain mantle formation on ...
Berkemeier, Thomas; Ammann, Markus; Krieger, Ulrich K.; Peter, Thomas; Spichtinger, Peter; Pöschl, Ulrich; Shiraiwa, Manabu; Huisman, Andrew J.
2017-06-01
We present a Monte Carlo genetic algorithm (MCGA) for efficient, automated, and unbiased global optimization of model input parameters by simultaneous fitting to multiple experimental data sets. The algorithm was developed to address the inverse modelling problems associated with fitting large sets of model input parameters encountered in state-of-the-art kinetic models for heterogeneous and multiphase atmospheric chemistry. The MCGA approach utilizes a sequence of optimization methods to find and characterize the solution of an optimization problem. It addresses an issue inherent to complex models whose extensive input parameter sets may not be uniquely determined from limited input data. Such ambiguity in the derived parameter values can be reliably detected using this new set of tools, allowing users to design experiments that should be particularly useful for constraining model parameters. We show that the MCGA has been used successfully to constrain parameters such as chemical reaction rate coefficients, diffusion coefficients, and Henry's law solubility coefficients in kinetic models of gas uptake and chemical transformation of aerosol particles as well as multiphase chemistry at the atmosphere-biosphere interface. While this study focuses on the processes outlined above, the MCGA approach should be portable to any numerical process model with similar computational expense and extent of the fitting parameter space.
Error in Monte Carlo, quasi-error in Quasi-Monte Carlo
Kleiss, R H
2006-01-01
While the Quasi-Monte Carlo method of numerical integration achieves smaller integration error than standard Monte Carlo, its use in particle physics phenomenology has been hindered by the absence of a reliable way to estimate that error. The standard Monte Carlo error estimator relies on the assumption that the points are generated independently of each other and, therefore, fails to account for the error improvement advertised by the Quasi-Monte Carlo method. We advocate the construction of an estimator of stochastic nature, based on the ensemble of pointsets with a particular discrepancy value. We investigate the consequences of this choice and give some first empirical results on the suggested estimators.
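A small numerical illustration of the point made above can be given with SciPy's quasi-random generators (assuming scipy.stats.qmc is available, SciPy 1.7+). The sketch compares a plain Monte Carlo estimate with a scrambled Sobol estimate of a smooth integral, and uses the spread over independently scrambled replicates as a randomized-QMC error estimate, which is related in spirit to, though not identical with, the ensemble-of-pointsets estimator advocated in the abstract.

```python
import numpy as np
from scipy.stats import qmc

dim, n = 5, 2**12
exact = (np.e - 1.0) ** dim          # integral of prod_i exp(x_i) over [0,1]^5

def f(x):
    return np.exp(x).prod(axis=1)

rng = np.random.default_rng(1)

# Plain Monte Carlo: i.i.d. uniform points.
mc_est = f(rng.random((n, dim))).mean()

# Quasi-Monte Carlo: scrambled Sobol points (low discrepancy).
qmc_est = f(qmc.Sobol(d=dim, scramble=True, seed=1).random(n)).mean()

print(f"exact {exact:.5f}  MC error {abs(mc_est - exact):.2e}  "
      f"QMC error {abs(qmc_est - exact):.2e}")

# An error estimate in the spirit of randomized QMC: spread across
# independently scrambled replicates of the same point set.
reps = [f(qmc.Sobol(d=dim, scramble=True, seed=s).random(n)).mean() for s in range(16)]
print("randomized-QMC standard error:", np.std(reps, ddof=1) / np.sqrt(len(reps)))
```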
Making Tree Ensembles Interpretable
Hara, Satoshi; Hayashi, Kohei
2016-01-01
Tree ensembles, such as random forests and boosted trees, are renowned for their high prediction performance, but their interpretability is critically limited. In this paper, we propose a post-processing method that improves the model interpretability of tree ensembles. After learning a complex tree ensemble in a standard way, we approximate it by a simpler model that is interpretable for humans. To obtain the simpler model, we derive the EM algorithm minimizing the KL divergence from the ...
The Ensembl REST API: Ensembl Data for Any Language.
Yates, Andrew; Beal, Kathryn; Keenan, Stephen; McLaren, William; Pignatelli, Miguel; Ritchie, Graham R S; Ruffier, Magali; Taylor, Kieron; Vullo, Alessandro; Flicek, Paul
2015-01-01
We present a Web service to access Ensembl data using Representational State Transfer (REST). The Ensembl REST server enables the easy retrieval of a wide range of Ensembl data by most programming languages, using standard formats such as JSON and FASTA while minimizing client work. We also introduce bindings to the popular Ensembl Variant Effect Predictor tool permitting large-scale programmatic variant analysis independent of any specific programming language. The Ensembl REST API can be accessed at http://rest.ensembl.org and source code is freely available under an Apache 2.0 license from http://github.com/Ensembl/ensembl-rest. © The Author 2014. Published by Oxford University Press.
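As a concrete illustration of the kind of call the service supports, the snippet below queries a gene by symbol using Python's requests library. The /lookup/symbol endpoint and the response fields are taken from the public Ensembl REST documentation and may change; treat the exact names here as assumptions rather than a guaranteed interface.

```python
import requests

SERVER = "https://rest.ensembl.org"

def lookup_gene(symbol, species="homo_sapiens"):
    """Look up a gene by symbol via the Ensembl REST API and return the JSON record."""
    url = f"{SERVER}/lookup/symbol/{species}/{symbol}"
    resp = requests.get(url, headers={"Content-Type": "application/json"}, timeout=30)
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    gene = lookup_gene("BRCA2")
    # Field names assumed from the documented JSON response.
    print(gene.get("id"), gene.get("seq_region_name"), gene.get("start"), gene.get("end"))
```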
Ensemble Forecasting of Major Solar Flares
Guerra, J A; Uritsky, V M
2015-01-01
We present the results from the first ensemble prediction model for major solar flares (M and X classes). Using the probabilistic forecasts from three models hosted at the Community Coordinated Modeling Center (NASA-GSFC) and the NOAA forecasts, we developed an ensemble forecast by linearly combining the flaring probabilities from all four methods. Performance-based combination weights were calculated using a Monte Carlo-type algorithm by applying a decision threshold $P_{th}$ to the combined probabilities and maximizing the Heidke Skill Score (HSS). Using the probabilities and events time series from 13 recent solar active regions (2012 - 2014), we found that a linear combination of probabilities can improve both probabilistic and categorical forecasts. Combination weights vary with the applied threshold and none of the tested individual forecasting models seem to provide more accurate predictions than the others for all values of $P_{th}$. According to the maximum values of HSS, a performance-based weights ...
Oza, Nikunj C.
2004-01-01
Ensemble Data Mining Methods, also known as Committee Methods or Model Combiners, are machine learning methods that leverage the power of multiple models to achieve better prediction accuracy than any of the individual models could on their own. The basic goal when designing an ensemble is the same as when establishing a committee of people: each member of the committee should be as competent as possible, but the members should be complementary to one another. If the members are not complementary, i.e., if they always agree, then the committee is unnecessary---any one member is sufficient. If the members are complementary, then when one or a few members make an error, the probability is high that the remaining members can correct this error. Research in ensemble methods has largely revolved around designing ensembles consisting of competent yet complementary models.
Marin-Garcia Pablo
2010-05-01
Background: The maturing field of genomics is rapidly increasing the number of sequenced genomes and producing more information from those previously sequenced. Much of this additional information is variation data derived from sampling multiple individuals of a given species with the goal of discovering new variants and characterising the population frequencies of the variants that are already known. These data have immense value for many studies, including those designed to understand evolution and connect genotype to phenotype. Maximising the utility of the data requires that it be stored in an accessible manner that facilitates the integration of variation data with other genome resources such as gene annotation and comparative genomics. Description: The Ensembl project provides comprehensive and integrated variation resources for a wide variety of chordate genomes. This paper provides a detailed description of the sources of data and the methods for creating the Ensembl variation databases. It also explores the utility of the information by explaining the range of query options available, from using interactive web displays, to online data mining tools and connecting directly to the data servers programmatically. It gives a good overview of the variation resources and future plans for expanding the variation data within Ensembl. Conclusions: Variation data is an important key to understanding the functional and phenotypic differences between individuals. The development of new sequencing and genotyping technologies is greatly increasing the amount of variation data known for almost all genomes. The Ensembl variation resources are integrated into the Ensembl genome browser and provide a comprehensive way to access this data in the context of a widely used genome bioinformatics system. All Ensembl data is freely available at http://www.ensembl.org and from the public MySQL database server at ensembldb.ensembl.org.
Monte Carlo approach to turbulence
Dueben, P.; Homeier, D.; Muenster, G. [Muenster Univ. (Germany). Inst. fuer Theoretische Physik; Jansen, K. [DESY, Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC; Mesterhazy, D. [Humboldt Univ., Berlin (Germany). Inst. fuer Physik
2009-11-15
The behavior of the one-dimensional random-force-driven Burgers equation is investigated in the path integral formalism on a discrete space-time lattice. We show that by means of Monte Carlo methods one may evaluate observables, such as structure functions, as ensemble averages over different field realizations. The regularization of shock solutions to the zero-viscosity limit (Hopf-equation) eventually leads to constraints on lattice parameters required for the stability of the simulations. Insight into the formation of localized structures (shocks) and their dynamics is obtained. (orig.)
Ensemble Bayesian forecasting system Part I: Theory and algorithms
Herr, Henry D.; Krzysztofowicz, Roman
2015-05-01
The ensemble Bayesian forecasting system (EBFS), whose theory was published in 2001, is developed for the purpose of quantifying the total uncertainty about a discrete-time, continuous-state, non-stationary stochastic process such as a time series of stages, discharges, or volumes at a river gauge. The EBFS is built of three components: an input ensemble forecaster (IEF), which simulates the uncertainty associated with random inputs; a deterministic hydrologic model (of any complexity), which simulates physical processes within a river basin; and a hydrologic uncertainty processor (HUP), which simulates the hydrologic uncertainty (an aggregate of all uncertainties except input). It works as a Monte Carlo simulator: an ensemble of time series of inputs (e.g., precipitation amounts) generated by the IEF is transformed deterministically through a hydrologic model into an ensemble of time series of outputs, which is next transformed stochastically by the HUP into an ensemble of time series of predictands (e.g., river stages). Previous research indicated that in order to attain an acceptable sampling error, the ensemble size must be on the order of hundreds (for probabilistic river stage forecasts and probabilistic flood forecasts) or even thousands (for probabilistic stage transition forecasts). The computing time needed to run the hydrologic model this many times renders the straightforward simulations operationally infeasible. This motivates the development of the ensemble Bayesian forecasting system with randomization (EBFSR), which takes full advantage of the analytic meta-Gaussian HUP and generates multiple ensemble members after each run of the hydrologic model; this auxiliary randomization reduces the required size of the meteorological input ensemble and makes it operationally feasible to generate a Bayesian ensemble forecast of large size. Such a forecast quantifies the total uncertainty, is well calibrated against the prior (climatic) distribution of
Statistical ensembles and fragmentation of finite nuclei
Das, P.; Mallik, S.; Chaudhuri, G.
2017-09-01
Statistical models based on different ensembles are very commonly used to describe the nuclear multifragmentation reaction in heavy-ion collisions at intermediate energies. Canonical model results are more appropriate for finite-nuclei calculations, while those obtained from the grand canonical ensemble are more easily calculable. A transformation relation has been worked out for converting finite-nuclei results from grand canonical to canonical and vice versa. The formula shows that, irrespective of the particle-number fluctuation in the grand canonical ensemble, exact canonical results can be recovered for observables varying linearly or quadratically with the number of particles. This result is of great significance, since the baryon and charge conservation constraints can make exact canonical calculations extremely difficult in general. The concept developed in this work can be extended in the future to transformations to ensembles where analytical solutions do not exist. The applicability of certain equations (isoscaling, etc.) in the regime of finite nuclei can also be tested using this transformation relation.
The semantic similarity ensemble
Andrea Ballatore
2013-12-01
Computational measures of semantic similarity between geographic terms provide valuable support across geographic information retrieval, data mining, and information integration. To date, a wide variety of approaches to geo-semantic similarity have been devised. A judgment of similarity is not intrinsically right or wrong, but obtains a certain degree of cognitive plausibility, depending on how closely it mimics human behavior. Thus selecting the most appropriate measure for a specific task is a significant challenge. To address this issue, we make an analogy between computational similarity measures and soliciting domain expert opinions, which incorporate a subjective set of beliefs, perceptions, hypotheses, and epistemic biases. Following this analogy, we define the semantic similarity ensemble (SSE) as a composition of different similarity measures, acting as a panel of experts having to reach a decision on the semantic similarity of a set of geographic terms. The approach is evaluated in comparison to human judgments, and results indicate that an SSE performs better than the average of its parts. Although the best member tends to outperform the ensemble, all ensembles outperform the average performance of each ensemble's member. Hence, in contexts where the best measure is unknown, the ensemble provides a more cognitively plausible approach.
王冠博; 杨鑫; 王侃; 刘汉刚; 李润东; 窦海峰
2013-01-01
Coupled simulation of deuteron or triton ionization and secondary fusion reactions was studied with a Monte Carlo tool named RSMC (Reaction Sequence Monte Carlo). The detailed-history and condensed-history methods were employed for the ionization simulation. Fusion cross sections of deuteron and triton were adopted from ENDF or TENDL. The "forced particle production" variance reduction technique was also employed to improve the simulation efficiency. As a validation, three types of examples were introduced, including neutron depth profiling, an accelerator-based mono-energy neutron source, and a thermal-to-fusion neutron convertor.
Dynamic Analogue Initialization for Ensemble Forecasting
LI Shan; RONG Xingyao; LIU Yun; LIU Zhengyu; Klaus FRAEDRICH
2013-01-01
This paper introduces a new approach for the initialization of ensemble numerical forecasting: Dynamic Analogue Initialization (DAI). DAI assumes that the best model state trajectories for the past provide the initial conditions for the best forecasts in the future. As such, DAI performs the ensemble forecast using the best analogues from a full size ensemble. As a pilot study, the Lorenz63 and Lorenz96 models were used to test DAI's effectiveness independently. Results showed that DAI can improve the forecast significantly. Especially in lower-dimensional systems, DAI can reduce the forecast RMSE by ~50% compared to the Monte Carlo forecast (MC). This improvement arises because DAI is able to recognize the direction of the analysis error through the embedding process and therefore selects those good trajectories with reduced initial error. Meanwhile, a potential improvement of DAI is also proposed, which is to find the optimal range of embedding time based on the error's growth rate.
Wakefield, M. E.
1982-01-01
Protective garment ensemble with internally-mounted environmental-control unit contains its own air supply. Alternatively, a remote environmental-control unit or an air line is attached at the umbilical quick disconnect. The unit uses liquid air that is vaporized to provide both breathing air and cooling. The totally enclosed garment protects against toxic substances.
Music Ensemble: Course Proposal.
Kovach, Brian
A proposal is presented for a Music Ensemble course to be offered at the Community College of Philadelphia for music students who have had previous vocal or instrumental training. A standardized course proposal cover form is followed by a statement of purpose for the course, a list of major course goals, a course outline, and a bibliography. Next,…
Hansen, Lars Kai; Salamon, Peter
1990-01-01
We propose several means for improving the performance and training of neural networks for classification. We use cross-validation as a tool for optimizing network parameters and architecture. We show further that the remaining generalization error can be reduced by invoking ensembles of similar networks.
Ensembles and their modules as objects of cartosemiotic inquiry
Hansgeorg Schlichtmann
2010-01-01
The structured set of signs in a map face -- here called a map-face aggregate or MFA -- and the associated marginal notes make up an ensemble of modules or components (a modular ensemble). Such ensembles are recognized where groups of entries are intuitively viewed as complex units, which includes the case that entries are consulted jointly and thus are involved in the same process of sign reception. Modular ensembles are amenable to semiotic study, just as are written or pictorial stories. Four kinds (one of them mentioned above) are discussed in detail, two involving single MFAs, the other two being assemblages of maps, such as atlases. In terms of their internal structure, two types are recognized: the combinate (or grouping), in which modules are directly linked by combinatorial relations (example above), and the cumulate (or collection of documents), in which modules are indirectly related through some conceptual commonality (example: a series of geological maps). The discussion then turns to basic points concerning modular ensembles (identification of a module, internal organization of an ensemble, and characteristics which establish an ensemble as a unit) and further to a few general semiotic concepts as they relate to the present research. Since this paper originated as a reaction to several of A. Wolodtschenko's recent publications, it concludes with comments on some of his arguments which pertain to modular ensembles.
Properties of the Affine Invariant Ensemble Sampler in high dimensions
Huijser, David; Brewer, Brendon J
2015-01-01
We present theoretical and practical properties of the affine-invariant ensemble sampler Markov chain Monte Carlo method. In high dimensions the affine-invariant ensemble sampler shows unusual and undesirable properties. We demonstrate this with an $n$-dimensional correlated Gaussian toy problem with a known mean and covariance structure, and analyse the burn-in period. The burn-in period seems to be short, however upon closer inspection we discover the mean and the variance of the target distribution do not match the expected, known values. This problem becomes greater as $n$ increases. We therefore conclude that the affine-invariant ensemble sampler should be used with caution in high dimensional problems. We also present some theoretical results explaining this behaviour.
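The affine-invariant ensemble sampler studied above is implemented in the widely used emcee package; the sketch below shows a minimal run on a correlated Gaussian toy target similar in spirit to the one in the abstract. It is a usage illustration only, and the dimension, walker count, and burn-in length are arbitrary choices rather than the settings used in the paper.

```python
import numpy as np
import emcee  # affine-invariant ensemble sampler of Goodman & Weare

ndim, nwalkers, nsteps = 10, 64, 5000

# Correlated Gaussian target with a known covariance structure.
rng = np.random.default_rng(0)
A = rng.standard_normal((ndim, ndim))
cov = A @ A.T + ndim * np.eye(ndim)
cov_inv = np.linalg.inv(cov)

def log_prob(x):
    return -0.5 * x @ cov_inv @ x

p0 = rng.standard_normal((nwalkers, ndim))           # initial walker positions
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
sampler.run_mcmc(p0, nsteps, progress=False)

chain = sampler.get_chain(discard=1000, flat=True)   # drop burn-in, flatten walkers
print("sample mean (should be ~0):", chain.mean(axis=0)[:3])
print("sample variance vs truth:", chain.var(axis=0)[:3], np.diag(cov)[:3])
```

Comparing the empirical moments against the known mean and covariance, as in the last two lines, is exactly the kind of check the abstract uses to expose the sampler's difficulties in high dimensions.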
Effective Visualization of Temporal Ensembles.
Hao, Lihua; Healey, Christopher G; Bass, Steffen A
2016-01-01
An ensemble is a collection of related datasets, called members, built from a series of runs of a simulation or an experiment. Ensembles are large, temporal, multidimensional, and multivariate, making them difficult to analyze. Another important challenge is visualizing ensembles that vary both in space and time. Initial visualization techniques displayed ensembles with a small number of members, or presented an overview of an entire ensemble but without potentially important details. Recently, researchers have suggested combining these two directions, allowing users to choose subsets of members to visualize. This manual selection process places the burden on the user to identify which members to explore. We first introduce a static ensemble visualization system that automatically helps users locate interesting subsets of members to visualize. We next extend the system to support analysis and visualization of temporal ensembles. We employ 3D shape comparison, cluster tree visualization, and glyph-based visualization to represent different levels of detail within an ensemble. This strategy is used to provide two approaches for temporal ensemble analysis: (1) segment-based ensemble analysis, which captures important shape-transition time-steps, clusters groups of similar members, and identifies common shape changes over time across multiple members; and (2) time-step-based ensemble analysis, which assumes ensemble members are aligned in time and combines similar shapes at common time-steps. Both approaches enable users to interactively visualize and analyze a temporal ensemble from different perspectives at different levels of detail. We demonstrate our techniques on an ensemble studying matter transition from hadronic gas to quark-gluon plasma during gold-on-gold particle collisions.
An Ensemble-Based Smoother with Retrospectively Updated Weights for Highly Nonlinear Systems
Chin, T. M.; Turmon, M. J.; Jewell, J. B.; Ghil, M.
2006-01-01
Monte Carlo computational methods have been introduced into data assimilation for nonlinear systems in order to alleviate the computational burden of updating and propagating the full probability distribution. By propagating an ensemble of representative states, algorithms like the ensemble Kalman filter (EnKF) and the resampled particle filter (RPF) rely on the existing modeling infrastructure to approximate the distribution based on the evolution of this ensemble. This work presents an ensemble-based smoother that is applicable to the Monte Carlo filtering schemes like EnKF and RPF. At the minor cost of retrospectively updating a set of weights for ensemble members, this smoother has demonstrated superior capabilities in state tracking for two highly nonlinear problems: the double-well potential and trivariate Lorenz systems. The algorithm does not require retrospective adaptation of the ensemble members themselves, and it is thus suited to a streaming operational mode. The accuracy of the proposed backward-update scheme in estimating non-Gaussian distributions is evaluated by comparison to the more accurate estimates provided by a Markov chain Monte Carlo algorithm.
Imprinting and recalling cortical ensembles.
Carrillo-Reid, Luis; Yang, Weijian; Bando, Yuki; Peterka, Darcy S; Yuste, Rafael
2016-08-12
Neuronal ensembles are coactive groups of neurons that may represent building blocks of cortical circuits. These ensembles could be formed by Hebbian plasticity, whereby synapses between coactive neurons are strengthened. Here we report that repetitive activation with two-photon optogenetics of neuronal populations from ensembles in the visual cortex of awake mice builds neuronal ensembles that recur spontaneously after being imprinted and do not disrupt preexisting ones. Moreover, imprinted ensembles can be recalled by single-cell stimulation and remain coactive on consecutive days. Our results demonstrate the persistent reconfiguration of cortical circuits by two-photon optogenetics into neuronal ensembles that can perform pattern completion. Copyright © 2016, American Association for the Advancement of Science.
Ensemble Forecasting of Major Solar Flares -- First Results
Pulkkinen, A. A.; Guerra, J. A.; Uritsky, V. M.
2015-12-01
We present the results from the first ensemble prediction model for major solar flares (M and X classes). Using the probabilistic forecasts from three models hosted at the Community Coordinated Modeling Center (NASA-GSFC) and the NOAA forecasts, we developed an ensemble forecast by linearly combining the flaring probabilities from all four methods. Performance-based combination weights were calculated using a Monte-Carlo-type algorithm that applies a decision threshold $P_{th}$ to the combined probabilities and maximizes the Heidke Skill Score (HSS). Using the data for 13 recent solar active regions between 2012 and 2014, we found that linear combination methods can improve the overall probabilistic prediction and improve the categorical prediction for certain values of the decision threshold. Combination weights vary with the applied threshold, and none of the tested individual forecasting models seems to provide more accurate predictions than the others for all values of $P_{th}$. According to the maximum values of HSS, performance-based weights calculated by averaging over the sample performed similarly to an equally weighted model. The values of $P_{th}$ for which the ensemble forecast performs best are 25% for M-class flares and 15% for X-class flares. When the human-adjusted probabilities from NOAA are excluded from the ensemble, the ensemble performance in terms of the Heidke score is reduced.
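A minimal sketch of the combination-and-thresholding step described above is given below: member probabilities are combined linearly with fixed weights, a decision threshold is swept, and the Heidke Skill Score is computed from the resulting 2x2 contingency table. The synthetic probabilities and the equal weights are placeholders; the paper derives performance-based weights from real forecast archives.

```python
import numpy as np

def heidke_skill_score(forecast, observed):
    """HSS for binary forecasts, computed from the 2x2 contingency table."""
    a = np.sum(forecast & observed)          # hits
    b = np.sum(forecast & ~observed)         # false alarms
    c = np.sum(~forecast & observed)         # misses
    d = np.sum(~forecast & ~observed)        # correct negatives
    denom = (a + c) * (c + d) + (a + b) * (b + d)
    return 2.0 * (a * d - b * c) / denom if denom else 0.0

def best_threshold(probs, weights, events, thresholds=np.linspace(0.05, 0.95, 19)):
    """Combine member probabilities linearly and pick the threshold maximizing HSS."""
    combined = probs @ weights               # (n_days,) ensemble probability
    scores = [heidke_skill_score(combined >= t, events) for t in thresholds]
    i = int(np.argmax(scores))
    return thresholds[i], scores[i]

# Synthetic illustration: 4 member models, 200 daily probabilities, binary flare events.
rng = np.random.default_rng(3)
events = rng.random(200) < 0.2
probs = np.clip(events[:, None] * 0.5 + rng.random((200, 4)) * 0.5, 0, 1)
weights = np.full(4, 0.25)                   # equal weights; performance-based in the paper
print(best_threshold(probs, weights, events))
```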
Critical behavior in topological ensembles
Bulycheva, K; Nechaev, S
2014-01-01
We consider the relation between three physical problems: 2D directed lattice random walks in an external magnetic field, ensembles of torus knots, and 5d Abelian SUSY gauge theory with massless hypermultiplet in $\Omega$ background. All these systems exhibit the critical behavior typical for the "area+length" statistics of grand ensembles of 2D directed paths. In particular, using the combinatorial description, we have found the new critical behavior in the ensembles of the torus knots and in the instanton ensemble in 5d gauge theory. The relation with the integrable model is discussed.
Phase-selective entrainment of nonlinear oscillator ensembles
Zlotnik, Anatoly; Nagao, Raphael; Kiss, István Z.; Li, Jr-Shin
2016-03-01
The ability to organize and finely manipulate the hierarchy and timing of dynamic processes is important for understanding and influencing brain functions, sleep and metabolic cycles, and many other natural phenomena. However, establishing spatiotemporal structures in biological oscillator ensembles is a challenging task that requires controlling large collections of complex nonlinear dynamical units. In this report, we present a method to design entrainment signals that create stable phase patterns in ensembles of heterogeneous nonlinear oscillators without using state feedback information. We demonstrate the approach using experiments with electrochemical reactions on multielectrode arrays, in which we selectively assign ensemble subgroups into spatiotemporal patterns with multiple phase clusters. The experimentally confirmed mechanism elucidates the connection between the phases and natural frequencies of a collection of dynamical elements, the spatial and temporal information that is encoded within this ensemble, and how external signals can be used to retrieve this information.
Walcott, Sam
2013-03-01
Interactions between the proteins actin and myosin drive muscle contraction. Properties of a single myosin interacting with an actin filament are largely known, but a trillion myosins work together in muscle. We are interested in how single-molecule properties relate to ensemble function. Myosin's reaction rates depend on force, so ensemble models keep track of both molecular state and force on each molecule. These models make subtle predictions, e.g. that myosin, when part of an ensemble, moves actin faster than when isolated. This acceleration arises because forces between molecules speed reaction kinetics. Experiments support this prediction and allow parameter estimates. A model based on this analysis describes experiments from single molecule to ensemble. In vivo, actin is regulated by proteins that, when present, cause the binding of one myosin to speed the binding of its neighbors; binding becomes cooperative. Although such interactions preclude the mean field approximation, a set of linear ODEs describes these ensembles under simplified experimental conditions. In these experiments cooperativity is strong, with the binding of one molecule affecting ten neighbors on either side. We progress toward a description of myosin ensembles under physiological conditions.
ESPC Coupled Global Ensemble Design
2014-09-30
coupled system infrastructure and forecasting capabilities. Initial operational capability is targeted for 2018. APPROACH: 1. It is recognized ... provided will be the probability distribution function (PDF) of environmental conditions. It is expected that this distribution will have skill. To ... system would be the initial capability for ensemble forecasts. Extensions to fully coupled ensembles would be the next step. 2. Develop an extended ...
Botnet analysis using ensemble classifier
Anchit Bijalwan
2016-09-01
This paper analyses botnet traffic using an ensemble-of-classifiers algorithm to find bot evidence. We used the ISCX dataset for training and testing. We extracted the features of both the training and testing datasets, bifurcated these features into two classes, normal traffic and botnet traffic, and provided labelling. Thereafter, using a modern data mining tool, we applied the ensemble-of-classifiers algorithm. Our experimental results show that the performance in finding bot evidence using an ensemble of classifiers is better than with a single classifier. Ensemble-based classifiers perform better than a single classifier either by combining the powers of multiple algorithms or by introducing diversification to the same classifier by varying the input in bot analysis. Our results show that by using the voting method of the ensemble-based classifier, accuracy is increased from 93.37% to 96.41%.
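A voting ensemble of the kind described can be assembled with scikit-learn's VotingClassifier. The sketch below uses synthetic data in place of the ISCX flow features and three generic member classifiers, so the specific models and numbers are illustrative assumptions rather than the authors' setup.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

# Stand-in for labelled flow features (normal vs. botnet); the ISCX features
# themselves are not reproduced here.
X, y = make_classification(n_samples=2000, n_features=20, weights=[0.7, 0.3], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

voter = VotingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
                ("lr", LogisticRegression(max_iter=1000)),
                ("nb", GaussianNB())],
    voting="hard")   # majority vote across the member classifiers
voter.fit(X_tr, y_tr)
print("ensemble accuracy:", voter.score(X_te, y_te))
```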
A Monte Carlo Uncertainty Analysis of Ozone Trend Predictions in a Two Dimensional Model. Revision
Considine, D. B.; Stolarski, R. S.; Hollandsworth, S. M.; Jackman, C. H.; Fleming, E. L.
1998-01-01
We use Monte Carlo analysis to estimate the uncertainty in predictions of total O3 trends between 1979 and 1995 made by the Goddard Space Flight Center (GSFC) two-dimensional (2D) model of stratospheric photochemistry and dynamics. The uncertainty is caused by the gas-phase chemical reaction rates, photolysis coefficients, and heterogeneous reaction parameters which are model inputs. The uncertainty represents a lower bound to the total model uncertainty, assuming the input parameter uncertainties are characterized correctly. Each of the Monte Carlo runs was initialized in 1970 and integrated for 26 model years through the end of 1995. This was repeated 419 times using input parameter sets generated by Latin Hypercube Sampling. The standard deviation (σ) of the Monte Carlo ensemble of total O3 trend predictions is used to quantify the model uncertainty. The 34% difference between the model trend in globally and annually averaged total O3 using nominal inputs and atmospheric trends calculated from Nimbus 7 and Meteor 3 total ozone mapping spectrometer (TOMS) version 7 data is less than the 46% calculated 1σ model uncertainty, so there is no significant difference between the modeled and observed trends. In the northern hemisphere midlatitude spring the modeled and observed total O3 trends differ by more than 1σ but less than 2σ, which we refer to as marginal significance. We perform a multiple linear regression analysis of the runs which suggests that only a few of the model reactions contribute significantly to the variance in the model predictions. The lack of significance in these comparisons suggests that they are of questionable use as guides for continuing model development. Large model/measurement differences which are many multiples of the input parameter uncertainty are seen in the meridional gradients of the trend and the peak-to-peak variations in the trends over an annual cycle. These discrepancies unambiguously indicate model formulation ...
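The Latin Hypercube Sampling step described above can be sketched with scipy.stats.qmc: draw stratified quantiles, map them through assumed (here lognormal) uncertainty distributions of a few rate coefficients, and evaluate the model once per parameter set. The toy model_trend function, the nominal rates, and the uncertainty factors below are hypothetical stand-ins for the 2D model and its input database.

```python
import numpy as np
from scipy.stats import norm, qmc

# Hypothetical nominal rate coefficients and lognormal uncertainty factors
# for a handful of reactions; the real model uses many more.
nominal = np.array([1.0e-12, 3.0e-11, 5.0e-15, 8.0e-12])
sigma_ln = np.log(np.array([1.2, 1.3, 1.5, 1.25]))

def model_trend(rates):
    """Toy stand-in for the 2D model's O3 trend as a function of the rates."""
    return (-3.0 * np.log(rates[..., 0] / nominal[0])
            + 1.5 * np.log(rates[..., 2] / nominal[2]) - 4.0)

n_runs = 419
lhs = qmc.LatinHypercube(d=len(nominal), seed=1).random(n_runs)   # stratified in [0,1)^d
# Map LHS quantiles to lognormally distributed rate coefficients.
rates = nominal * np.exp(norm.ppf(lhs) * sigma_ln)

trends = model_trend(rates)
print("ensemble mean trend:", trends.mean(), " 1-sigma:", trends.std(ddof=1))
```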
SAChES: Scalable Adaptive Chain-Ensemble Sampling.
Swiler, Laura Painton [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Ray, Jaideep [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Ebeida, Mohamed Salah [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Huang, Maoyi [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Hou, Zhangshuan [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Bao, Jie [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Ren, Huiying [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)
2017-08-01
We present the development of a parallel Markov Chain Monte Carlo (MCMC) method called SAChES, Scalable Adaptive Chain-Ensemble Sampling. This capability is targeted at Bayesian calibration of computationally expensive simulation models. SAChES involves a hybrid of two methods: Differential Evolution Monte Carlo followed by Adaptive Metropolis. Both methods involve parallel chains. Differential evolution allows one to explore high-dimensional parameter spaces using loosely coupled (i.e., largely asynchronous) chains. Loose coupling allows the use of large chain ensembles, with far more chains than the number of parameters to explore. This reduces the per-chain sampling burden and enables high-dimensional inversions and the use of computationally expensive forward models. The large number of chains can also ameliorate the impact of silent errors, which may affect only a few chains. The chain ensemble can also be sampled to provide an initial condition when an aberrant chain is re-spawned. Adaptive Metropolis takes the best points from the differential evolution and efficiently hones in on the posterior density. The multitude of chains in SAChES is leveraged to (1) enable efficient exploration of the parameter space and (2) ensure robustness to silent errors, which may be unavoidable in extreme-scale computational platforms of the future. This report outlines SAChES, describes four papers that are the result of the project, and discusses some additional results.
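The Differential Evolution Monte Carlo stage named above can be sketched as follows, using ter Braak's DE-MC update in a serial loop and omitting the Adaptive Metropolis stage; the target density, chain count, and step sizes are illustrative assumptions, not the SAChES implementation.

```python
import numpy as np

def de_mc_step(chains, log_post, rng, gamma=None, eps=1e-6):
    """One Differential Evolution MC sweep over an ensemble of chains.

    Each chain proposes a jump along the difference of two other randomly
    chosen chains (ter Braak 2006) and accepts with a Metropolis test.
    """
    n_chains, dim = chains.shape
    if gamma is None:
        gamma = 2.38 / np.sqrt(2 * dim)
    new = chains.copy()
    logp = np.array([log_post(c) for c in chains])
    for i in range(n_chains):
        r1, r2 = rng.choice([j for j in range(n_chains) if j != i], size=2, replace=False)
        proposal = chains[i] + gamma * (chains[r1] - chains[r2]) + eps * rng.standard_normal(dim)
        logp_prop = log_post(proposal)
        if np.log(rng.random()) < logp_prop - logp[i]:
            new[i], logp[i] = proposal, logp_prop
    return new

# Usage on a 5-dimensional standard normal posterior.
rng = np.random.default_rng(0)
chains = rng.standard_normal((16, 5))
for _ in range(2000):
    chains = de_mc_step(chains, lambda x: -0.5 * np.sum(x**2), rng)
print("per-dimension variance (target 1):", chains.var(axis=0))
```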
Efficient Kernel-Based Ensemble Gaussian Mixture Filtering
Liu, Bo
2015-11-11
We consider the Bayesian filtering problem for data assimilation following the kernel-based ensemble Gaussian-mixture filtering (EnGMF) approach introduced by Anderson and Anderson (1999). In this approach, the posterior distribution of the system state is propagated with the model using the ensemble Monte Carlo method, providing a forecast ensemble that is then used to construct a prior Gaussian-mixture (GM) based on the kernel density estimator. This results in two update steps: a Kalman filter (KF)-like update of the ensemble members and a particle filter (PF)-like update of the weights, followed by a resampling step to start a new forecast cycle. After formulating EnGMF for any observational operator, we analyze the influence of the bandwidth parameter of the kernel function on the covariance of the posterior distribution. We then focus on two aspects: i) the efficient implementation of EnGMF with (relatively) small ensembles, where we propose a new deterministic resampling strategy preserving the first two moments of the posterior GM to limit the sampling error; and ii) the analysis of the effect of the bandwidth parameter on contributions of KF and PF updates and on the weights variance. Numerical results using the Lorenz-96 model are presented to assess the behavior of EnGMF with deterministic resampling, study its sensitivity to different parameters and settings, and evaluate its performance against ensemble KFs. The proposed EnGMF approach with deterministic resampling suggests improved estimates in all tested scenarios, and is shown to require less localization and to be less sensitive to the choice of filtering parameters.
Ensemble manifold regularization.
Geng, Bo; Tao, Dacheng; Xu, Chao; Yang, Linjun; Hua, Xian-Sheng
2012-06-01
We propose an automatic approximation of the intrinsic manifold for general semi-supervised learning (SSL) problems. Unfortunately, it is not trivial to define an optimization function to obtain optimal hyperparameters. Usually, cross validation is applied, but it does not necessarily scale up. Other problems derive from the suboptimality incurred by discrete grid search and the overfitting. Therefore, we develop an ensemble manifold regularization (EMR) framework to approximate the intrinsic manifold by combining several initial guesses. Algorithmically, we designed EMR carefully so it 1) learns both the composite manifold and the semi-supervised learner jointly, 2) is fully automatic for learning the intrinsic manifold hyperparameters implicitly, 3) is conditionally optimal for intrinsic manifold approximation under a mild and reasonable assumption, and 4) is scalable for a large number of candidate manifold hyperparameters, from both time and space perspectives. Furthermore, we prove the convergence property of EMR to the deterministic matrix at rate root-n. Extensive experiments over both synthetic and real data sets demonstrate the effectiveness of the proposed framework.
Diurnal Ensemble Surface Meteorology Statistics
U.S. Environmental Protection Agency — Excel file containing diurnal ensemble statistics of 2-m temperature, 2-m mixing ratio and 10-m wind speed. This Excel file contains figures for Figure 2 in the...
2004-01-01
Within the framework of the PSO-Ensemble project (FU2101) a demo application has been created. The application uses ECMWF ensemble forecasts. Two instances of the application are running; one for Nysted Offshore and one for the total production (except Horns Rev) in the Eltra area. The output is available via two password-protected web-pages hosted at IMM and is used daily by Elsam and E2.
Similarity measures for protein ensembles
Lindorff-Larsen, Kresten; Ferkinghoff-Borg, Jesper
2009-01-01
Analyses of similarities and changes in protein conformation can provide important information regarding protein function and evolution. Many scores, including the commonly used root mean square deviation, have therefore been developed to quantify the similarities of different protein conformations ... a synthetic example from molecular dynamics simulations. We then apply the algorithms to revisit the problem of ensemble averaging during structure determination of proteins, and find that an ensemble refinement method is able to recover the correct distribution of conformations better than standard single ...
Direct Monte Carlo Measurement of the Surface Tension in Ising Models
Hasenbusch, M
1992-01-01
I present a cluster Monte Carlo algorithm that gives direct access to the interface free energy of Ising models. The basic idea is to simulate an ensemble that consists of both configurations with periodic and with antiperiodic boundary conditions. A cluster algorithm is provided that efficiently updates this joint ensemble. The interface tension is obtained from the ratio of configurations with periodic and antiperiodic boundary conditions, respectively. The method is tested for the 3-dimensional Ising model.
Brown, F.B.; Sutton, T.M.
1996-02-01
This report is composed of the lecture notes from the first half of a 32-hour graduate-level course on Monte Carlo methods offered at KAPL. These notes, prepared by two of the principal developers of KAPL's RACER Monte Carlo code, cover the fundamental theory, concepts, and practices for Monte Carlo analysis. In particular, a thorough grounding in the basic fundamentals of Monte Carlo methods is presented, including random number generation, random sampling, the Monte Carlo approach to solving transport problems, computational geometry, collision physics, tallies, and eigenvalue calculations. Furthermore, modern computational algorithms for vector and parallel approaches to Monte Carlo calculations are covered in detail, including fundamental parallel and vector concepts, the event-based algorithm, master/slave schemes, parallel scaling laws, and portability issues.
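As a taste of the fundamentals such notes cover (random sampling, transport, collision physics, tallies), the sketch below estimates the transmission probability through a homogeneous 1D slab with an analogue Monte Carlo loop. The cross sections and the simplified isotropic 1D scattering model are arbitrary illustrative choices, not an excerpt from RACER.

```python
import numpy as np

def slab_transmission(sigma_t, sigma_a, thickness, n_particles, rng):
    """Estimate the probability that a particle crosses a 1D slab.

    Minimal analogue Monte Carlo loop: sample a free flight from the
    exponential distribution, then either absorb or scatter isotropically.
    """
    transmitted = 0
    for _ in range(n_particles):
        x, mu = 0.0, 1.0                                # start at the left face, moving right
        while True:
            x += mu * rng.exponential(1.0 / sigma_t)    # flight to the next collision
            if x >= thickness:
                transmitted += 1                        # tally a transmission
                break
            if x < 0.0:
                break                                   # leaked back out of the left face
            if rng.random() < sigma_a / sigma_t:
                break                                   # absorbed at the collision site
            mu = rng.uniform(-1.0, 1.0)                 # isotropic scatter (1D direction cosine)
    return transmitted / n_particles

rng = np.random.default_rng(42)
print("transmission:", slab_transmission(sigma_t=1.0, sigma_a=0.3, thickness=3.0,
                                          n_particles=20000, rng=rng))
```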
Ensemble models of neutrophil trafficking in severe sepsis.
Sang Ok Song
A hallmark of severe sepsis is systemic inflammation, which activates leukocytes and can result in their misdirection. This leads to both impaired migration to the locus of infection and increased infiltration into healthy tissues. In order to better understand the pathophysiologic mechanisms involved, we developed a coarse-grained phenomenological model of the acute inflammatory response in CLP (cecal ligation and puncture)-induced sepsis in rats. This model incorporates distinct neutrophil kinetic responses to the inflammatory stimulus and the dynamic interactions between components of a compartmentalized inflammatory response. Ensembles of model parameter sets consistent with experimental observations were statistically generated using Markov chain Monte Carlo sampling. Prediction uncertainty in the model states was quantified over the resulting ensemble parameter sets. Forward simulation of the parameter ensembles successfully captured experimental features and predicted that systemically activated circulating neutrophils display impaired migration to the tissue and neutrophil sequestration in the lung, consequently contributing to tissue damage and mortality. Principal component and multiple regression analyses of the parameter ensembles estimated from survivor and non-survivor cohorts provide insight into pathologic mechanisms dictating outcome in sepsis. Furthermore, the model was extended to incorporate hypothetical mechanisms by which immune modulation using extracorporeal blood purification results in improved outcome in septic rats. Simulations identified a sub-population (about 18% of the treated population) that benefited from blood purification. Survivors displayed enhanced neutrophil migration to tissue and reduced sequestration of lung neutrophils, contributing to improved outcome. The model ensemble presented herein provides a platform for generating and testing hypotheses in silico, as well as motivating further experimental ...
Bardenet, R.
2012-01-01
Bayesian inference often requires integrating some function with respect to a posterior distribution. Monte Carlo methods are sampling algorithms that allow one to compute these integrals numerically when they are not analytically tractable. We review here the basic principles and the most common Monte Carlo algorithms, among which rejection sampling, importance sampling and Markov chain Monte Carlo (MCMC) methods. We give intuition on the theoretic...
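Of the algorithms listed above, rejection sampling is the simplest to sketch: draw from a proposal and accept with probability proportional to the ratio of the target density to its bound. The Beta(2,5) target and the envelope constant M below are illustrative choices only.

```python
import numpy as np
from math import gamma

def rejection_sample(target_pdf, proposal_sample, proposal_pdf, M, n, rng):
    """Draw n samples from target_pdf by rejection sampling.

    Requires target_pdf(x) <= M * proposal_pdf(x) for all x.
    """
    out = []
    while len(out) < n:
        x = proposal_sample(rng)
        # Accept x with probability target_pdf(x) / (M * proposal_pdf(x)).
        if rng.random() * M * proposal_pdf(x) < target_pdf(x):
            out.append(x)
    return np.array(out)

def beta_pdf(x, a=2.0, b=5.0):
    c = gamma(a + b) / (gamma(a) * gamma(b))
    return c * x**(a - 1) * (1 - x)**(b - 1)

rng = np.random.default_rng(0)
M = 2.5   # any bound >= the maximum of the Beta(2,5) density (about 2.46)
samples = rejection_sample(beta_pdf, lambda r: r.random(), lambda x: 1.0, M, 10000, rng)
print("mean (target 2/7 ~ 0.286):", samples.mean())
```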
Lei, Lili; Whitaker, Jeffrey S.
2017-06-01
The current NCEP operational four-dimensional ensemble-variational data assimilation system uses a control forecast at T1534 resolution coupled with an 80 member ensemble at T574 resolution. Given an increase in computing resources, and assuming the control forecast resolution is fixed, would it be better to increase the ensemble size and keep the ensemble resolution the same, or increase the ensemble resolution and keep the ensemble size the same? To answer this question, experiments are conducted at reduced resolutions. Two sets of experiments are conducted which both use approximately four times more computational resources than the control experiment that uses a control forecast at T670 and an 80 member ensemble at T254. One increases the ensemble size to 320 but keeps the ensemble resolution at T254; and the other increases the ensemble resolution to T670 but retains an 80 ensemble size. When ensemble size increases to 320, turning off the static component of the background-error covariance does not degrade performance. When the data assimilation parameters are tuned for optimal performance, increasing either ensemble size or ensemble resolution can improve the forecast performance. Increasing ensemble resolution is slightly, but significantly better than increasing ensemble size for these experiments, particularly when considering errors at smaller scales. Much of the benefit of increasing ensemble resolution comes about by eliminating the need for a deterministic control forecast and running all of the background forecasts at the same resolution. In this "single-resolution" mode, the control forecast is replaced by an ensemble average, which reduces small-scale errors significantly.
Dunn, William L
2012-01-01
Exploring Monte Carlo Methods is a basic text that describes the numerical methods that have come to be known as "Monte Carlo." The book treats the subject generically through the first eight chapters and, thus, should be of use to anyone who wants to learn to use Monte Carlo. The next two chapters focus on applications in nuclear engineering, which are illustrative of uses in other fields. Five appendices are included, which provide useful information on probability distributions, general-purpose Monte Carlo codes for radiation transport, and other matters. The famous "Buffon's needle problem"...
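The Buffon's needle problem mentioned at the end is the classic Monte Carlo illustration; the sketch below is a hedged example, not drawn from the book, that estimates π from needle drops (taking the needle length equal to the line spacing is an illustrative choice).

```python
import numpy as np

rng = np.random.default_rng(1)

def buffon_pi(n_throws, needle_len=1.0, line_spacing=1.0):
    """Estimate pi by dropping a needle of length L onto lines spaced D apart (L <= D)."""
    # Distance from the needle's centre to the nearest line, and the needle's angle
    d = rng.uniform(0.0, line_spacing / 2.0, size=n_throws)
    theta = rng.uniform(0.0, np.pi / 2.0, size=n_throws)
    # The needle crosses a line when d <= (L/2) sin(theta)
    crossings = np.count_nonzero(d <= (needle_len / 2.0) * np.sin(theta))
    # P(cross) = 2L / (pi * D)  =>  pi ≈ 2 L n / (D * crossings)
    return 2.0 * needle_len * n_throws / (line_spacing * crossings)

print(buffon_pi(1_000_000))  # typically within a few hundredths of pi
```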
Algorithms on ensemble quantum computers.
Boykin, P Oscar; Mor, Tal; Roychowdhury, Vwani; Vatan, Farrokh
2010-06-01
In ensemble (or bulk) quantum computation, all computations are performed on an ensemble of computers rather than on a single computer. Measurements of qubits in an individual computer cannot be performed; instead, only expectation values (over the complete ensemble of computers) can be measured. As a result of this limitation on the model of computation, many algorithms cannot be processed directly on such computers, and must be modified, as the common strategy of delaying the measurements usually does not resolve this ensemble-measurement problem. Here we present several new strategies for resolving this problem. Based on these strategies we provide new versions of some of the most important quantum algorithms, versions that are suitable for implementation on ensemble quantum computers, e.g., on liquid NMR quantum computers. These algorithms are Shor's factorization algorithm, Grover's search algorithm (with several marked items), and an algorithm for quantum fault-tolerant computation. The first two algorithms are modified using randomizing and sorting strategies. For the last algorithm, we develop a classical-quantum hybrid strategy for removing measurements. We use it to present a novel quantum fault-tolerant scheme. More explicitly, we present schemes for fault-tolerant, measurement-free implementation of the Toffoli and σ_z^(1/4) gates, as these operations cannot be implemented "bitwise", and their standard fault-tolerant implementations require measurement.
CME Ensemble Forecasting - A Primer
Pizzo, V. J.; de Koning, C. A.; Cash, M. D.; Millward, G. H.; Biesecker, D. A.; Codrescu, M.; Puga, L.; Odstrcil, D.
2014-12-01
SWPC has been evaluating various approaches for ensemble forecasting of Earth-directed CMEs. We have developed the software infrastructure needed to support broad-ranging CME ensemble modeling, including composing, interpreting, and making intelligent use of ensemble simulations. The first step is to determine whether the physics of the interplanetary propagation of CMEs is better described as chaotic (like terrestrial weather) or deterministic (as in tsunami propagation). This is important, since different ensemble strategies are to be pursued under the two scenarios. We present the findings of a comprehensive study of CME ensembles in uniform and structured backgrounds that reveals systematic relationships between input cone parameters and ambient flow states and resulting transit times and velocity/density amplitudes at Earth. These results clearly indicate that the propagation of single CMEs to 1 AU is a deterministic process. Thus, the accuracy with which one can forecast the gross properties (such as arrival time) of CMEs at 1 AU is determined primarily by the accuracy of the inputs. This is no tautology - it means specifically that efforts to improve forecast accuracy should focus upon obtaining better inputs, as opposed to developing better propagation models. In a companion paper (deKoning et al., this conference), we compare in situ solar wind data with forecast events in the SWPC operational archive to show how the qualitative and quantitative findings presented here are entirely consistent with the observations and may lead to improved forecasts of arrival time at Earth.
Estimating preselected and postselected ensembles
Massar, Serge [Laboratoire d'Information Quantique, C.P. 225, Universite libre de Bruxelles (U.L.B.), Av. F. D. Roosevelt 50, B-1050 Bruxelles (Belgium)]; Popescu, Sandu [H. H. Wills Physics Laboratory, University of Bristol, Tyndall Avenue, Bristol BS8 1TL (United Kingdom); Hewlett-Packard Laboratories, Stoke Gifford, Bristol BS12 6QZ (United Kingdom)]
2011-11-15
In analogy with the usual quantum state-estimation problem, we introduce the problem of state estimation for a pre- and postselected ensemble. The problem has fundamental physical significance since, as argued by Y. Aharonov and collaborators, pre- and postselected ensembles are the most basic quantum ensembles. Two new features are shown to appear: (1) information is flowing to the measuring device both from the past and from the future; (2) because of the postselection, certain measurement outcomes can be forced never to occur. Due to these features, state estimation in such ensembles is dramatically different from the case of ordinary, preselected-only ensembles. We develop a general theoretical framework for studying this problem and illustrate it through several examples. We also prove general theorems establishing that information flowing from the future is closely related to, and in some cases equivalent to, the complex conjugate information flowing from the past. Finally, we illustrate our approach on examples involving covariant measurements on spin-1/2 particles. We emphasize that all state-estimation problems can be extended to the pre- and postselected situation. The present work thus lays the foundations of a much more general theory of quantum state estimation.
An Introduction to Monte Carlo Simulation of Statistical physics Problem
Murthy, K. P. N.
2001-01-01
A brief introduction to the technique of Monte Carlo simulations in statistical physics is presented. The topics covered include statistical ensembles, random and pseudo-random numbers, random sampling techniques, importance sampling, Markov chains, the Metropolis algorithm, continuous phase transitions, statistical errors from correlated and uncorrelated data, finite-size scaling, the n-fold way, critical slowing down, the blocking technique, percolation, cluster algorithms, cluster counting, histogram tech...
Linking neuronal ensembles by associative synaptic plasticity.
Qi Yuan
Synchronized activity in ensembles of neurons recruited by excitatory afferents is thought to contribute to the coding of information in the brain. However, the mechanisms by which neuronal ensembles are generated and modified are not known. Here we show that in rat hippocampal slices associative synaptic plasticity enables ensembles of neurons to change by incorporating neurons belonging to different ensembles. Associative synaptic plasticity redistributes the composition of different ensembles recruited by distinct inputs so as to specifically increase the similarity between the ensembles. These results show that in the hippocampus, the ensemble of neurons recruited by a given afferent projection is fluid and can be rapidly and persistently modified to specifically include neurons from different ensembles. This linking of ensembles may contribute to the formation of associative memories.
A mollified Ensemble Kalman filter
Bergemann, Kay
2010-01-01
It is well recognized that discontinuous analysis increments of sequential data assimilation systems, such as ensemble Kalman filters, might lead to spurious high frequency adjustment processes in the model dynamics. Various methods have been devised to continuously spread out the analysis increments over a fixed time interval centered about analysis time. Among these techniques are nudging and incremental analysis updates (IAU). Here we propose another alternative, which may be viewed as a hybrid of nudging and IAU and which arises naturally from a recently proposed continuous formulation of the ensemble Kalman analysis step. A new slow-fast extension of the popular Lorenz-96 model is introduced to demonstrate the properties of the proposed mollified ensemble Kalman filter.
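For context, here is a minimal sketch of the standard (un-mollified) stochastic, perturbed-observation EnKF analysis step that such schemes modify; the linear observation operator and all array names are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def enkf_analysis(X, y, H, R, rng):
    """One stochastic (perturbed-observation) EnKF analysis step.

    X : (n_state, n_ens) forecast ensemble
    y : (n_obs,) observation vector
    H : (n_obs, n_state) linear observation operator
    R : (n_obs, n_obs) observation-error covariance
    """
    n_ens = X.shape[1]
    A = X - X.mean(axis=1, keepdims=True)              # ensemble anomalies
    Pf = A @ A.T / (n_ens - 1)                          # sample forecast covariance
    K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)      # Kalman gain
    # Perturb observations so the analysis ensemble has the correct spread
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=n_ens).T
    return X + K @ (Y - H @ X)                          # analysis ensemble

rng = np.random.default_rng(11)
X = rng.normal(size=(3, 20))                            # 3 state variables, 20 members
H = np.eye(2, 3)                                        # observe the first two variables
Xa = enkf_analysis(X, y=np.array([0.5, -0.2]), H=H, R=0.1 * np.eye(2), rng=rng)
print(Xa.mean(axis=1))
```

Mollification, as described in the abstract above, would spread the single increment K(Y - HX) continuously over a time window instead of applying it at the analysis time in one discontinuous jump.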
Excitation energies from ensemble DFT
Borgoo, Alex; Teale, Andy M.; Helgaker, Trygve
2015-12-01
We study the evaluation of the Gross-Oliveira-Kohn expression for excitation energies, $E_1 - E_0 = \varepsilon_1 - \varepsilon_0 + \left.\partial E_{\mathrm{xc},w}[\rho]/\partial w\right|_{\rho=\rho_0}$. This expression gives the difference between an excitation energy $E_1 - E_0$ and the corresponding Kohn-Sham orbital energy difference $\varepsilon_1 - \varepsilon_0$ as a partial derivative of the exchange-correlation energy of an ensemble of states, $E_{\mathrm{xc},w}[\rho]$. Through Lieb maximisation, on input full-CI density functions, the exchange-correlation energy is evaluated accurately and the partial derivative is evaluated numerically using finite difference. The equality is studied numerically for different geometries of the H2 molecule and different ensemble weights. We explore the adiabatic connection for the ensemble exchange-correlation energy. The latter may prove useful when modelling the unknown weight dependence of the exchange-correlation energy.
The Partition Ensemble Fallacy Fallacy
Nemoto, K; Nemoto, Kae; Braunstein, Samuel L.
2002-01-01
The Partition Ensemble Fallacy was recently applied to claim that no quantum coherence exists in coherent states produced by lasers. We show that this claim relies on an untestable belief in a particular prior distribution of absolute phase. One's choice of prior distribution for an unobservable quantity is a matter of 'religion'. We call this principle the Partition Ensemble Fallacy Fallacy. Further, we show an alternative approach to constructing a relative-quantity Hilbert subspace where the unobservability of certain quantities is guaranteed by global conservation laws. This approach is applied to coherent states and constructs an approximate relative-phase Hilbert subspace.
Wissdorf, Walter; Seifert, Luzia; Derpmann, Valerie; Klee, Sonja; Vautz, Wolfgang; Benter, Thorsten
2013-04-01
For the comprehensive simulation of ion trajectories including reactive collisions at elevated pressure conditions, a chemical reaction simulation (RS) extension to the popular SIMION software package was developed, which is based on the Monte Carlo statistical approach. The RS extension is of particular interest to SIMION users who wish to simulate ion trajectories in collision-dominated environments such as atmospheric pressure ion sources, ion guides (e.g., funnels, transfer multipoles), chemical reaction chambers (e.g., proton transfer tubes), and/or ion mobility analyzers. It is well known that ion-molecule reaction rate constants frequently reach or exceed the collision limit obtained from kinetic gas theory. Thus, with typical ion dwell times in the above-mentioned devices in the millisecond range, chemical transformation reactions are likely to occur. In other words, individual ions change critical parameters such as mass, mobility, and chemical reactivity en route to the analyzer, which naturally strongly affects their trajectories. The RS method simulates elementary reaction events of individual ions, reflecting the behavior of a large ensemble by a representative set of simulated reacting particles. The simulation of the proton-bound water cluster reactant ion peak (RIP) in ion mobility spectrometry (IMS) was chosen as a benchmark problem. For this purpose, the RIP was experimentally determined as a function of the background water concentration present in the IMS drift tube. It is shown that simulation and experimental data are in very good agreement, demonstrating the validity of the method.
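A minimal sketch of the kind of per-time-step reaction decision that underlies such a Monte Carlo treatment is given below; the rate constant, species names, and numbers are illustrative assumptions and are not taken from the SIMION RS extension itself.

```python
import numpy as np

rng = np.random.default_rng(2)

def react_step(species, k2, conc_neutral, dt):
    """Decide whether one simulated ion undergoes an ion-molecule reaction in a time step.

    species      : current ion species label (hypothetical name)
    k2           : bimolecular rate constant (cm^3 s^-1) for species -> product
    conc_neutral : neutral reagent number density (cm^-3)
    dt           : time step (s)
    """
    k_pseudo = k2 * conc_neutral             # pseudo-first-order rate constant (s^-1)
    p_react = 1.0 - np.exp(-k_pseudo * dt)   # probability of reacting within dt
    if rng.random() < p_react:
        return "product"                     # ion changes identity (mass, mobility, ...)
    return species

# Illustrative numbers only
print(react_step("H3O+", k2=2e-9, conc_neutral=1e14, dt=1e-6))
```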
Multimodel ensembles of wheat growth
Martre, Pierre; Wallach, Daniel; Asseng, Senthold
2015-01-01
, but such studies are difficult to organize and have only recently begun. We report on the largest ensemble study to date, of 27 wheat models tested in four contrasting locations for their accuracy in simulating multiple crop growth and yield variables. The relative error averaged over models was 24...
An introduction to Monte Carlo methods
Walter, J.-C.; Barkema, G. T.
2015-01-01
Monte Carlo simulations are methods for simulating statistical systems. The aim is to generate a representative ensemble of configurations to access thermodynamical quantities without the need to solve the system analytically or to perform an exact enumeration. The main principles of Monte Carlo simulations are ergodicity and detailed balance. The Ising model is a lattice spin system with nearest-neighbor interactions that is appropriate to illustrate different examples of Monte Carlo simulations. It displays a second-order phase transition between disordered (high temperature) and ordered (low temperature) phases, leading to different strategies of simulation. The Metropolis algorithm and the Glauber dynamics are efficient at high temperature. Close to the critical temperature, where the spins display long-range correlations, cluster algorithms are more efficient. We introduce the rejection-free (or continuous-time) algorithm and describe in detail an interesting alternative representation of the Ising model using graphs instead of spins, the so-called Worm algorithm. We conclude with an important discussion of dynamical effects such as thermalization and correlation time.
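A minimal single-spin-flip Metropolis sketch for the 2D Ising model described above follows; the lattice size, temperature, and sweep count are illustrative choices, not parameters from the text.

```python
import numpy as np

rng = np.random.default_rng(3)

def metropolis_sweep(spins, beta):
    """One Metropolis sweep of a 2D Ising model with periodic boundaries (J = 1, h = 0)."""
    L = spins.shape[0]
    for _ in range(L * L):
        i, j = rng.integers(0, L, size=2)
        # Sum of the four nearest neighbours (periodic boundary conditions)
        nn = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
              + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        dE = 2.0 * spins[i, j] * nn           # energy change of flipping spin (i, j)
        if dE <= 0.0 or rng.random() < np.exp(-beta * dE):
            spins[i, j] *= -1                 # accept the flip
    return spins

L = 32
spins = rng.choice([-1, 1], size=(L, L))
for _ in range(500):
    metropolis_sweep(spins, beta=0.5)         # beta > beta_c ≈ 0.4407: ordered phase
print("magnetization per spin:", spins.mean())
```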
Global Ensemble Forecast System (GEFS) [1 Deg.
National Oceanic and Atmospheric Administration, Department of Commerce — The Global Ensemble Forecast System (GEFS) is a weather forecast model made up of 21 separate forecasts, or ensemble members. The National Centers for Environmental...
Squeezing of Collective Excitations in Spin Ensembles
Kraglund Andersen, Christian; Mølmer, Klaus
2012-01-01
We analyse the possibility to create two-mode spin squeezed states of two separate spin ensembles by inverting the spins in one ensemble and allowing spin exchange between the ensembles via a near resonant cavity field. We investigate the dynamics of the system using a combination of numerical an...
Cramer, S.N.
1984-01-01
The MORSE code is a large general-use multigroup Monte Carlo code system. Although no claims can be made regarding its superiority in either theoretical details or Monte Carlo techniques, MORSE has been, since its inception at ORNL in the late 1960s, the most widely used Monte Carlo radiation transport code. The principal reason for this popularity is that MORSE is relatively easy to use, independent of any installation or distribution center, and it can be easily customized to fit almost any specific need. Features of the MORSE code are described.
Hydrological Ensemble Prediction System (HEPS)
Thielen-Del Pozo, J.; Schaake, J.; Martin, E.; Pailleux, J.; Pappenberger, F.
2010-09-01
Flood forecasting systems form a key part of 'preparedness' strategies for disastrous floods and provide hydrological services, civil protection authorities and the public with information on upcoming events. Provided the warning lead time is sufficiently long, adequate preparatory actions can be taken to efficiently reduce the impacts of the flooding. Following on the success of the use of ensembles for weather forecasting, the hydrological community is now moving increasingly towards Hydrological Ensemble Prediction Systems (HEPS) for improved flood forecasting using operationally available NWP products as inputs. However, these products are often generated on relatively coarse scales compared to hydrologically relevant basin units and suffer from systematic biases that may have considerable impact when passed through the non-linear hydrological filters. Therefore, a better understanding of how best to produce, communicate and use hydrologic ensemble forecasts in hydrological short-, medium- and long-term prediction of hydrological processes is necessary. The "Hydrologic Ensemble Prediction Experiment" (HEPEX) is an international initiative consisting of hydrologists, meteorologists and end-users to advance probabilistic hydrologic forecast techniques for flood, drought and water management applications. Different aspects of the hydrological ensemble processor are being addressed, including • Production of useful meteorological products relevant for hydrological applications, ranging from nowcasting products to seasonal forecasts. The importance of hindcasts that are consistent with the operational weather forecasts will be discussed to support bias correction and downscaling, statistically meaningful verification of HEPS, and the development and testing of operating rules; • Need for downscaling and post-processing of weather ensembles to reduce bias before entering hydrological applications; • Hydrological model and parameter uncertainty and how to correct and
Quantum Monte Carlo simulation
Wang, Yazhen
2011-01-01
Contemporary scientific studies often rely on the understanding of complex quantum systems via computer simulation. This paper initiates the statistical study of quantum simulation and proposes a Monte Carlo method for estimating analytically intractable quantities. We derive the bias and variance for the proposed Monte Carlo quantum simulation estimator and establish the asymptotic theory for the estimator. The theory is used to design a computational scheme for minimizing the mean square er...
Monte Carlo transition probabilities
Lucy, L. B.
2001-01-01
Transition probabilities governing the interaction of energy packets and matter are derived that allow Monte Carlo NLTE transfer codes to be constructed without simplifying the treatment of line formation. These probabilities are such that the Monte Carlo calculation asymptotically recovers the local emissivity of a gas in statistical equilibrium. Numerical experiments with one-point statistical equilibrium problems for Fe II and Hydrogen confirm this asymptotic behaviour. In addition, the re...
Laver, M.; Forgan, E.M.; Abrahamsen, Asger Bech
2008-01-01
We describe the use of reverse Monte Carlo refinement to extract structural information from angle-resolved data of a Bragg peak. Starting with small-angle neutron scattering data, the positional order of an ensemble of flux lines in superconducting Nb is revealed. We discuss the uncovered correl...
Toward an Operational Particle Filter-Based Ensemble Data Assimilation System
2014-09-22
Subject terms: data assimilation, ensemble Kalman filter, Markov chain Monte... Transform Kalman Filter (ETKF) to represent convective processes. Previous research found that the probability density functions (PDFs) of cloud... effect of these changes on the model output, but it was unclear whether an EnKF algorithm is capable of doing the same. We generated posterior probability
MCMC for non-linear state space models using ensembles of latent sequences
2013-01-01
Non-linear state space models are a widely used class of models for biological, economic, and physical processes. Fitting these models to observed data is a difficult inference problem that has no straightforward solution. We take a Bayesian approach to the inference of the unknown parameters of a non-linear state space model; this, in turn, requires the availability of efficient Markov chain Monte Carlo (MCMC) sampling methods for the latent (hidden) variables and model parameters. Using the ensemble ...
Bridging the ensemble Kalman filter and particle filters: the adaptive Gaussian mixture filter
Stordal, Andreas Størksen; Karlsen, Hans A.; Nævdal, Geir; Hans J. Skaug; Vallès, Brice
2010-01-01
The nonlinear filtering problem occurs in many scientific areas. Sequential Monte Carlo solutions with the correct asymptotic behavior such as particle filters exist, but they are computationally too expensive when working with high-dimensional systems. The ensemble Kalman filter (EnKF) is a more robust method that has shown promising results with a small sample size, but the samples are not guaranteed to come from the true posterior distribution. By approximating the model error with a Gauss...
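For reference, here is a minimal sketch of the bootstrap particle filter that methods such as the adaptive Gaussian mixture filter aim to approximate more cheaply; the scalar state model, noise levels, and observations are illustrative assumptions, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(4)

def bootstrap_pf_step(particles, y_obs, f, obs_std):
    """One step of a bootstrap particle filter (propagate, weight, resample)."""
    # Propagate each particle through the (possibly nonlinear) model f, adding process noise
    particles = f(particles) + rng.normal(0.0, 0.1, size=particles.shape)
    # Weight by the observation likelihood p(y | x), here a Gaussian observation model
    log_w = -0.5 * ((y_obs - particles) / obs_std) ** 2
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    # Multinomial resampling to combat weight degeneracy
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return particles[idx]

# Illustrative use with a mildly nonlinear scalar model
x = rng.normal(size=1000)
for y in [0.3, 0.5, 0.1]:
    x = bootstrap_pf_step(x, y, f=lambda s: 0.9 * s + 0.1 * np.sin(s), obs_std=0.2)
print("posterior mean estimate:", x.mean())
```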
Ensemble models of proteins and protein domains based on distance distribution restraints.
Jeschke, Gunnar
2016-04-01
Conformational ensembles of intrinsically disordered peptide chains are not fully determined by experimental observations. Uncertainty due to lack of experimental restraints and due to intrinsic disorder can be distinguished if distance distribution restraints are available. Such restraints can be obtained from pulsed dipolar electron paramagnetic resonance (EPR) spectroscopy applied to pairs of spin labels. Here, we introduce a Monte Carlo approach for generating conformational ensembles that are consistent with a set of distance distribution restraints, backbone dihedral angle statistics in known protein structures, and, optionally, secondary structure propensities or membrane immersion depths. The approach is tested with simulated restraints for a terminal and an internal loop and for a protein with 69 residues by using sets of sparse restraints for underlying well-defined conformations and for published ensembles of a premolten globule-like and a coil-like intrinsically disordered protein.
Spectral diagonal ensemble Kalman filters
Kasanický, Ivan; Vejmelka, Martin
2015-01-01
A new type of ensemble Kalman filter is developed, which is based on replacing the sample covariance in the analysis step by its diagonal in a spectral basis. It is proved that this technique improves the approximation of the covariance when the covariance itself is diagonal in the spectral basis, as is the case, e.g., for a second-order stationary random field and the Fourier basis. The method is extended by wavelets to the case when the state variables are random fields which are not spatially homogeneous. Efficient implementations by the fast Fourier transform (FFT) and discrete wavelet transform (DWT) are presented for several types of observations, including high-dimensional data given on a part of the domain, such as radar and satellite images. Computational experiments confirm that the method performs well on the Lorenz 96 problem and the shallow water equations with very small ensembles and over multiple analysis cycles.
Symanzik flow on HISQ ensembles
Bazavov, A; Brown, N; DeTar, C; Foley, J; Gottlieb, Steven; Heller, U M; Hetrick, J E; Laiho, J; Levkova, L; Oktay, M; Sugar, R L; Toussaint, D; Van de Water, R S; Zhou, R
2013-01-01
We report on a scale determination with gradient-flow techniques on the $N_f = 2 + 1 + 1$ HISQ ensembles generated by the MILC collaboration. The lattice scale $w_0/a$, originally proposed by the BMW collaboration, is computed using Symanzik flow at four lattice spacings ranging from 0.15 to 0.06 fm. With a Taylor series ansatz, the results are simultaneously extrapolated to the continuum and interpolated to physical quark masses. We give a preliminary determination of the scale $w_0$ in physical units, along with associated systematic errors, and compare with results from other groups. We also present a first estimate of autocorrelation lengths as a function of flow time for these ensembles.
Statistical Analysis of Protein Ensembles
Máté, Gabriell; Heermann, Dieter
2014-04-01
As 3D protein-configuration data is piling up, there is an ever-increasing need for well-defined, mathematically rigorous analysis approaches, especially that the vast majority of the currently available methods rely heavily on heuristics. We propose an analysis framework which stems from topology, the field of mathematics which studies properties preserved under continuous deformations. First, we calculate a barcode representation of the molecules employing computational topology algorithms. Bars in this barcode represent different topological features. Molecules are compared through their barcodes by statistically determining the difference in the set of their topological features. As a proof-of-principle application, we analyze a dataset compiled of ensembles of different proteins, obtained from the Ensemble Protein Database. We demonstrate that our approach correctly detects the different protein groupings.
Ensemble modeling for aromatic production in Escherichia coli.
Matthew L Rizk
Ensemble Modeling (EM) is a recently developed method for metabolic modeling, particularly for utilizing the effect of enzyme tuning data on the production of a specific compound to refine the model. This approach is used here to investigate the production of aromatic products in Escherichia coli. Instead of using dynamic metabolite data to fit a model, the EM approach uses phenotypic data (effects of enzyme overexpression or knockouts on the steady-state production rate) to screen possible models. These data are routinely generated during strain design. An ensemble of models is constructed that all reach the same steady state and are based on the same mechanistic framework at the elementary reaction level. The behavior of the models spans the kinetics allowable by thermodynamics. Then, by using existing data from the literature for the overexpression of genes coding for transketolase (Tkt), transaldolase (Tal), and phosphoenolpyruvate synthase (Pps) to screen the ensemble, we arrive at a set of models that properly describes the known enzyme overexpression phenotypes. This subset of models becomes more predictive as additional data are used to refine the models. The final ensemble of models demonstrates the characteristic of the cell that Tkt is the first rate-controlling step, and correctly predicts that only after Tkt is overexpressed does an increase in Pps increase the production rate of aromatics. This work demonstrates that EM is able to capture the result of enzyme overexpression on aromatic-producing bacteria by successfully utilizing routinely generated enzyme tuning data to guide model learning.
Classical and Quantum Ensembles via Multiresolution. II. Wigner Ensembles
Fedorova, A N; Fedorova, Antonina N.; Zeitlin, Michael G.
2004-01-01
We present the application of the variational-wavelet analysis to the analysis of quantum ensembles in Wigner framework. (Naive) deformation quantization, the multiresolution representations and the variational approach are the key points. We construct the solutions of Wigner-like equations via the multiscale expansions in the generalized coherent states or high-localized nonlinear eigenmodes in the base of the compactly supported wavelets and the wavelet packets. We demonstrate the appearance of (stable) localized patterns (waveletons) and consider entanglement and decoherence as possible applications.
Monte Carlo molecular simulation of phase-coexistence for oil production and processing
Li, Jun
2011-01-01
The Gibbs-NVT ensemble Monte Carlo method is used to simulate the liquid-vapor coexistence diagram, and the simulation results for methane agree well with the experimental data over a wide range of temperatures. For systems with two components, the Gibbs-NPT ensemble Monte Carlo method is employed to simulate the mole fraction of each component in each phase, with each component modeled as a Lennard-Jones fluid. As the results of Monte Carlo simulations usually contain considerable statistical error, the blocking method is used to estimate the variance of the simulation results. Additionally, in order to improve the simulation efficiency, the step sizes of the different trial moves are adjusted automatically so that their acceptance probabilities approach the preset values.
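A minimal sketch of the blocking estimate of statistical error mentioned above is given below; the AR(1) series is an illustrative stand-in for correlated Monte Carlo output, not data from the paper.

```python
import numpy as np

def blocking_error(samples, block_size):
    """Estimate the standard error of the mean of correlated Monte Carlo data by blocking.

    The samples are grouped into non-overlapping blocks; once the block size exceeds the
    correlation time, the block means are nearly independent and the usual error formula applies.
    """
    n_blocks = len(samples) // block_size
    trimmed = np.asarray(samples[: n_blocks * block_size])
    block_means = trimmed.reshape(n_blocks, block_size).mean(axis=1)
    return block_means.std(ddof=1) / np.sqrt(n_blocks)

# Illustrative correlated series (an AR(1) process standing in for a simulation time series)
rng = np.random.default_rng(5)
x = np.empty(100_000)
x[0] = 0.0
for t in range(1, len(x)):
    x[t] = 0.95 * x[t - 1] + rng.normal()
for b in (1, 10, 100, 1000):
    print(b, blocking_error(x, b))   # the estimate grows, then plateaus near the true error
```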
2012-01-01
In 1935, a group of French mathematicians set out with the ambition of rebuilding the entire edifice of mathematics ('la mathématique', in the singular, to emphasize its unity) according to Hilbert's formalist thinking. The founding members were Henri Cartan, Claude Chevalley, Jean Delsarte, Jean Dieudonné and André Weil, later joined by René de Possel. In July 1935, during a seminar in Auvergne, the group 'Nicolas Bourbaki' was thus created. The name of this association in fact refers to an anecdote that...
Thermodynamics and kinetics of a molecular motor ensemble.
Baker, J E; Thomas, D D
2000-10-01
If, contrary to conventional models of muscle, it is assumed that molecular forces equilibrate among rather than within molecular motors, an equation of state and an expression for energy output can be obtained for a near-equilibrium, coworking ensemble of molecular motors. These equations predict clear, testable relationships between motor structure, motor biochemistry, and ensemble motor function, and we discuss these relationships in the context of various experimental studies. In this model, net work by molecular motors is performed with the relaxation of a near-equilibrium intermediate step in a motor-catalyzed reaction. The free energy available for work is localized to this step, and the rate at which this free energy is transferred to work is accelerated by the free energy of a motor-catalyzed reaction. This thermodynamic model implicitly deals with a motile cell system as a dynamic network (not a rigid lattice) of molecular motors within which the mechanochemistry of one motor influences and is influenced by the mechanochemistry of other motors in the ensemble.
Hrivnacova, I; Berejnov, V V; Brun, R; Carminati, F; Fassò, A; Futo, E; Gheata, A; Caballero, I G; Morsch, Andreas
2003-01-01
The concept of Virtual Monte Carlo (VMC) has been developed by the ALICE Software Project to allow different Monte Carlo simulation programs to run without changing the user code, such as the geometry definition, the detector response simulation or input and output formats. Recently, the VMC classes have been integrated into the ROOT framework, and the other relevant packages have been separated from the AliRoot framework and can be used individually by any other HEP project. The general concept of the VMC and its set of base classes provided in ROOT will be presented. Existing implementations for Geant3, Geant4 and FLUKA and simple examples of usage will be described.
Analysis of mesoscale forecasts using ensemble methods
Gross, Markus
2016-01-01
Mesoscale forecasts are now routinely performed as elements of operational forecasts and their outputs do appear convincing. However, despite their realistic appearance, at times the comparison to observations is less favorable. At the grid scale these forecasts often do not compare well with observations. This is partly due to the chaotic system underlying the weather. Another key problem is that it is impossible to evaluate the risk of making decisions based on these forecasts because they do not provide a measure of confidence. Ensembles provide this information in the ensemble spread and quartiles. However, running global ensembles at the meso- or sub-mesoscale involves substantial computational resources. National centers do run such ensembles, but the subject of this publication is a method which requires significantly less computation. The ensemble-enhanced mesoscale system presented here does not aim at the creation of an improved mesoscale forecast model. Nor is it intended to create an improved ensemble syste...
Measuring social interaction in music ensembles.
Volpe, Gualtiero; D'Ausilio, Alessandro; Badino, Leonardo; Camurri, Antonio; Fadiga, Luciano
2016-05-05
Music ensembles are an ideal test-bed for quantitative analysis of social interaction. Music is an inherently social activity, and music ensembles offer a broad variety of scenarios which are particularly suitable for investigation. Small ensembles, such as string quartets, are deemed a significant example of self-managed teams, where all musicians contribute equally to a task. In bigger ensembles, such as orchestras, the relationship between a leader (the conductor) and a group of followers (the musicians) clearly emerges. This paper presents an overview of recent research on social interaction in music ensembles with a particular focus on (i) studies from cognitive neuroscience; and (ii) studies adopting a computational approach for carrying out automatic quantitative analysis of ensemble music performances.
Gibbs Ensembles of Nonintersecting Paths
Borodin, Alexei
2008-01-01
We consider a family of determinantal random point processes on the two-dimensional lattice and prove that members of our family can be interpreted as a kind of Gibbs ensembles of nonintersecting paths. Examples include probability measures on lozenge and domino tilings of the plane, some of which are non-translation-invariant. The correlation kernels of our processes can be viewed as extensions of the discrete sine kernel, and we show that the Gibbs property is a consequence of simple linear relations satisfied by these kernels. The processes depend on infinitely many parameters, which are closely related to parametrization of totally positive Toeplitz matrices.
Wind Power Prediction using Ensembles
Giebel, Gregor; Badger, Jake; Landberg, Lars
2005-01-01
offshore wind farm and the whole Jutland/Funen area. The utilities used these forecasts for maintenance planning, fuel consumption estimates and over-the-weekend trading on the Leipzig power exchange. Other notable scientific results include the better accuracy of forecasts made up from a simple superposition of two NWP providers (in our case, DMI and DWD), an investigation of the merits of a parameterisation of the turbulent kinetic energy within the delivered wind speed forecasts, and the finding that a "naïve" downscaling of each of the coarse ECMWF ensemble members with higher resolution HIRLAM did...
Ensemble Methods Foundations and Algorithms
Zhou, Zhi-Hua
2012-01-01
An up-to-date, self-contained introduction to a state-of-the-art machine learning approach, Ensemble Methods: Foundations and Algorithms shows how these accurate methods are used in real-world tasks. It gives you the necessary groundwork to carry out further research in this evolving field. After presenting background and terminology, the book covers the main algorithms and theories, including Boosting, Bagging, Random Forest, averaging and voting schemes, the Stacking method, mixture of experts, and diversity measures. It also discusses multiclass extension, noise tolerance, error-ambiguity a
Quantum Repeaters and Atomic Ensembles
Borregaard, Johannes
a previous protocol, thereby enabling fast local processing, which greatly enhances the distribution rate. We then move on to describe our work on improving the stability of atomic clocks using entanglement. Entanglement can potentially push the stability of atomic clocks to the so-called Heisenberg limit ..., which is the absolute upper limit on the stability allowed by the Heisenberg uncertainty relation. It has, however, been unclear whether entangled states' enhanced sensitivity to noise would prevent reaching this limit. We have developed an adaptive measurement protocol, which circumvents this problem ... based on atomic ensembles.
Walcott, Sam
2014-10-01
Molecular motors, by turning chemical energy into mechanical work, are responsible for active cellular processes. Often groups of these motors work together to perform their biological role. Motors in an ensemble are coupled and exhibit complex emergent behavior. Although large motor ensembles can be modeled with partial differential equations (PDEs) by assuming that molecules function independently of their neighbors, this assumption is violated when motors are coupled locally. It is therefore unclear how to describe the ensemble behavior of the locally coupled motors responsible for biological processes such as calcium-dependent skeletal muscle activation. Here we develop a theory to describe locally coupled motor ensembles and apply the theory to skeletal muscle activation. The central idea is that a muscle filament can be divided into two phases: an active and an inactive phase. Dynamic changes in the relative size of these phases are described by a set of linear ordinary differential equations (ODEs). As the dynamics of the active phase are described by PDEs, muscle activation is governed by a set of coupled ODEs and PDEs, building on previous PDE models. With comparison to Monte Carlo simulations, we demonstrate that the theory captures the behavior of locally coupled ensembles. The theory also plausibly describes and predicts muscle experiments from molecular to whole muscle scales, suggesting that a micro- to macroscale muscle model is within reach.
Heterogeneous versus Homogeneous Machine Learning Ensembles
Petrakova Aleksandra
2015-12-01
The research demonstrates the efficiency of applying a heterogeneous model ensemble to a cancer diagnostic procedure. The machine learning methods used for training the ensemble models are neural networks, random forest, support vector machine and offspring selection genetic algorithm. Training of the models and the ensemble design are performed by means of the HeuristicLab software. The data used in the research have been provided by the General Hospital of Linz, Austria.
Interpreting Tree Ensembles with inTrees
Deng, Houtao
2014-01-01
Tree ensembles such as random forests and boosted trees are accurate but difficult to understand, debug and deploy. In this work, we provide the inTrees (interpretable trees) framework that extracts, measures, prunes and selects rules from a tree ensemble, and calculates frequent variable interactions. A rule-based learner, referred to as the simplified tree ensemble learner (STEL), can also be formed and used for future prediction. The inTrees framework can be applied to both classification an...
Analysis of peeling decoder for MET ensembles
Hinton, Ryan
2009-01-01
The peeling decoder introduced by Luby et al. allows analysis of LDPC decoding for the binary erasure channel (BEC). For irregular ensembles, they analyze the decoder state as a Markov process and present a solution to the differential equations describing the process mean. Multi-edge type (MET) ensembles allow greater precision through specifying graph connectivity. We generalize the peeling decoder for MET ensembles and derive analogous differential equations. We offer a new change of variables and a solution to the node fraction evolutions in the general (MET) case. This result is preparatory to investigating finite-length ensemble behavior.
Goodwin, Philip
2016-10-01
Projections of future climate made by model ensembles have credibility because the historic simulations by these models are consistent with, or near-consistent with, historic observations. However, it is not known how small inconsistencies between the ranges of observed and simulated historic climate change affect the future projections made by a model ensemble. Here, the impact of historical simulation-observation inconsistencies on future warming projections is quantified in a 4-million-member Monte Carlo ensemble from a new efficient Earth System Model (ESM). Of the 4 million ensemble members, a subset of 182,500 are consistent with historic ranges of warming, heat uptake and carbon uptake simulated by the Climate Model Intercomparison Project 5 (CMIP5) ensemble. This simulation-consistent subset projects similar future warming ranges to the CMIP5 ensemble for all four RCP scenarios, indicating the new ESM represents an efficient tool to explore parameter space for future warming projections based on historic performance. A second subset of 14,500 ensemble members are consistent with historic observations for warming, heat uptake and carbon uptake. This observation-consistent subset projects a narrower range for future warming, with the lower bounds of projected warming still similar to CMIP5, but the upper warming bounds reduced by 20-35 %. These findings suggest that part of the upper range of twenty-first century CMIP5 warming projections may reflect historical simulation-observation inconsistencies. However, the agreement of lower bounds for projected warming implies that the likelihood of warming exceeding dangerous levels over the twenty-first century is unaffected by small discrepancies between CMIP5 models and observations.
Hierarchical Bayes Ensemble Kalman Filtering
Tsyrulnikov, Michael
2015-01-01
Ensemble Kalman filtering (EnKF), when applied to high-dimensional systems, suffers from an inevitably small affordable ensemble size, which results in poor estimates of the background error covariance matrix ${\\bf B}$. The common remedy is a kind of regularization, usually an ad-hoc spatial covariance localization (tapering) combined with artificial covariance inflation. Instead of using an ad-hoc regularization, we adopt the idea by Myrseth and Omre (2010) and explicitly admit that the ${\\bf B}$ matrix is unknown and random and estimate it along with the state (${\\bf x}$) in an optimal hierarchical Bayes analysis scheme. We separate forecast errors into predictability errors (i.e. forecast errors due to uncertainties in the initial data) and model errors (forecast errors due to imperfections in the forecast model) and include the two respective components ${\\bf P}$ and ${\\bf Q}$ of the ${\\bf B}$ matrix into the extended control vector $({\\bf x},{\\bf P},{\\bf Q})$. Similarly, we break the traditional backgrou...
Visualizing ensembles in structural biology.
Melvin, Ryan L; Salsbury, Freddie R
2016-06-01
Displaying a single representative conformation of a biopolymer rather than an ensemble of states mistakenly conveys a static nature rather than the actual dynamic personality of biopolymers. However, there are few apparent options due to the fixed nature of print media. Here we suggest a standardized methodology for visually indicating the distribution width, standard deviation and uncertainty of ensembles of states with little loss of the visual simplicity of displaying a single representative conformation. Of particular note is that the visualization method employed clearly distinguishes between isotropic and anisotropic motion of polymer subunits. We also apply this method to ligand binding, suggesting a way to indicate the expected error in many high throughput docking programs when visualizing the structural spread of the output. We provide several examples in the context of nucleic acids and proteins with particular insights gained via this method. Such examples include investigating a therapeutic polymer of FdUMP (5-fluoro-2-deoxyuridine-5-O-monophosphate) - a topoisomerase-1 (Top1), apoptosis-inducing poison - and nucleotide-binding proteins responsible for ATP hydrolysis from Bacillus subtilis. We also discuss how these methods can be extended to any macromolecular data set with an underlying distribution, including experimental data such as NMR structures.
Optimized nested Markov chain Monte Carlo sampling: theory
Coe, Joshua D [Los Alamos National Laboratory; Shaw, M Sam [Los Alamos National Laboratory; Sewell, Thomas D [U. MISSOURI
2009-01-01
Metropolis Monte Carlo sampling of a reference potential is used to build a Markov chain in the isothermal-isobaric ensemble. At the endpoints of the chain, the energy is reevaluated at a different level of approximation (the 'full' energy) and a composite move encompassing all of the intervening steps is accepted on the basis of a modified Metropolis criterion. By manipulating the thermodynamic variables characterizing the reference system we maximize the average acceptance probability of composite moves, lengthening significantly the random walk made between consecutive evaluations of the full energy at a fixed acceptance probability. This provides maximally decorrelated samples of the full potential, thereby lowering the total number required to build ensemble averages of a given variance. The efficiency of the method is illustrated using model potentials appropriate to molecular fluids at high pressure. Implications for ab initio or density functional theory (DFT) treatment are discussed.
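As a sketch of the criterion described above (assuming the reference chain is run at the same temperature as the full system and leaving out the pressure-volume terms of the isothermal-isobaric ensemble), the composite move from configuration $o$ to configuration $n$ is accepted with probability

$$P_{\mathrm{acc}}(o \to n) = \min\left\{1,\; \exp\!\left(-\beta\Big[\big(E_{\mathrm{full}}(n)-E_{\mathrm{ref}}(n)\big)-\big(E_{\mathrm{full}}(o)-E_{\mathrm{ref}}(o)\big)\Big]\right)\right\},$$

so that the reference potential generates candidate configurations and the full energy is evaluated only at the chain endpoints; the paper's modified criterion additionally accounts for the shifted thermodynamic variables of the reference system.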
Quantum Monte Carlo for vibrating molecules
Brown, W.R. [Univ. of California, Berkeley, CA (United States). Chemistry Dept.]|[Lawrence Berkeley National Lab., CA (United States). Chemical Sciences Div.
1996-08-01
Quantum Monte Carlo (QMC) has successfully computed the total electronic energies of atoms and molecules. The main goal of this work is to use correlation function quantum Monte Carlo (CFQMC) to compute the vibrational state energies of molecules given a potential energy surface (PES). In CFQMC, an ensemble of random walkers simulates the diffusion and branching processes of the imaginary-time time-dependent Schroedinger equation in order to evaluate the matrix elements. The program QMCVIB was written to perform multi-state VMC and CFQMC calculations and was employed for several calculations of the H2O and C3 vibrational states, using 7 PESs, 3 trial wavefunction forms, two methods of non-linear basis function parameter optimization, and both serial and parallel computers. Different trial wavefunction forms were required to obtain accurate results for H2O and C3. In order to construct accurate trial wavefunctions for C3, the non-linear parameters were optimized with respect to the sum of the energies of several low-lying vibrational states. In order to stabilize the statistical error estimates for C3, the Monte Carlo data were collected into blocks. Accurate vibrational state energies were computed using both serial and parallel QMCVIB programs. Comparison of the vibrational state energies computed from the three C3 PESs suggested that a non-linear equilibrium geometry PES is the most accurate and that discrete potential representations may be used to conveniently determine vibrational state energies.
Kadoura, Ahmad Salim
2014-03-17
Molecular simulation can provide a detailed description of fluid systems compared to experimental techniques. It can also replace equations of state; however, molecular simulation usually requires considerable computational effort. Several techniques have been developed to overcome such high computational costs. In this paper, two early rejection schemes, a conservative one and a hybrid one, are introduced. In these two methods, undesired configurations generated by the Monte Carlo trials are rejected earlier than they would be when using conventional algorithms. The methods are tested for structureless single-component Lennard-Jones particles in both the canonical and NVT-Gibbs ensembles. The reduction in computational time is observed for both ensembles over a wide range of thermodynamic conditions. Results show that computational time savings are directly proportional to the rejection rate of Monte Carlo trials. The proposed conservative scheme has been shown to save up to 40% of the computational time in the canonical ensemble and up to 30% in the NVT-Gibbs ensemble when compared to standard algorithms. In addition, it preserves the exact Markov chains produced by the Metropolis scheme. Further enhancement for the NVT-Gibbs ensemble is achieved by combining this technique with the bond-formation early rejection scheme. The hybrid method achieves more than 50% saving of the central processing unit (CPU) time.
Improved customer choice predictions using ensemble methods
M.C. van Wezel (Michiel); R. Potharst (Rob)
2005-01-01
In this paper various ensemble learning methods from machine learning and statistics are considered and applied to the customer choice modeling problem. The application of ensemble learning usually improves the prediction quality of flexible models like decision trees and thus leads to
Layered Ensemble Architecture for Time Series Forecasting.
Rahman, Md Mustafizur; Islam, Md Monirul; Murase, Kazuyuki; Yao, Xin
2016-01-01
Time series forecasting (TSF) has been widely used in many application areas such as science, engineering, and finance. The phenomena generating time series are usually unknown, and the information available for forecasting is limited to the past values of the series. It is, therefore, necessary to use an appropriate number of past values, termed the lag, for forecasting. This paper proposes a layered ensemble architecture (LEA) for TSF problems. Our LEA consists of two layers, each of which uses an ensemble of multilayer perceptron (MLP) networks. While the first ensemble layer tries to find an appropriate lag, the second ensemble layer employs the obtained lag for forecasting. Unlike most previous work on TSF, the proposed architecture considers both accuracy and diversity of the individual networks in constructing an ensemble. LEA trains different networks in the ensemble by using different training sets, with the aim of maintaining diversity among the networks. However, it uses the appropriate lag and combines the best trained networks to construct the ensemble. This indicates LEA's emphasis on the accuracy of the networks. The proposed architecture has been tested extensively on time series data from the NN3 and NN5 neural network forecasting competitions. It has also been tested on several standard benchmark time series data sets. In terms of forecasting accuracy, our experimental results have revealed clearly that LEA is better than other ensemble and non-ensemble methods.
Ensemble methods for handwritten digit recognition
Hansen, Lars Kai; Liisberg, Christian; Salamon, P.
1992-01-01
. It is further shown that it is possible to estimate the ensemble performance as well as the learning curve on a medium-size database. In addition the authors present preliminary analysis of experiments on a large database and show that state-of-the-art performance can be obtained using the ensemble approach...
Nonextensivity in magnetic nanoparticle ensembles
Binek, Ch.; Polisetty, S.; He, Xi; Mukherjee, T.; Rajesh, R.; Redepenning, J.
2006-08-01
A superconducting quantum interference device and Faraday rotation technique are used to study dipolar interacting nanoparticles embedded in a polystyrene matrix. Magnetization isotherms are measured for three cylindrically shaped samples of constant diameter but various heights. Detailed analysis of the isotherms supports Tsallis’ conjecture of a magnetic equation of state that involves temperature and magnetic field variables scaled by the logarithm of the number of magnetic nanoparticles. This unusual scaling of thermodynamic variables, which are conventionally considered to be intensive, originates from the nonextensivity of the Gibbs free energy in three-dimensional dipolar interacting particle ensembles. Our experimental evidence for nonextensivity is based on the data collapse of various isotherms that require scaling of the field variable in accordance with Tsallis’ equation of state.
Perception of ensemble statistics requires attention.
Jackson-Nielsen, Molly; Cohen, Michael A; Pitts, Michael A
2017-02-01
To overcome inherent limitations in perceptual bandwidth, many aspects of the visual world are represented as summary statistics (e.g., average size, orientation, or density of objects). Here, we investigated the relationship between summary (ensemble) statistics and visual attention. Recently, it was claimed that one ensemble statistic in particular, color diversity, can be perceived without focal attention. However, a broader debate exists over the attentional requirements of conscious perception, and it is possible that some form of attention is necessary for ensemble perception. To test this idea, we employed a modified inattentional blindness paradigm and found that multiple types of summary statistics (color and size) often go unnoticed without attention. In addition, we found attentional costs in dual-task situations, further implicating a role for attention in statistical perception. Overall, we conclude that while visual ensembles may be processed efficiently, some amount of attention is necessary for conscious perception of ensemble statistics.
Popular Ensemble Methods: An Empirical Study
Maclin, R; 10.1613/jair.614
2011-01-01
An ensemble consists of a set of individually trained classifiers (such as neural networks or decision trees) whose predictions are combined when classifying novel instances. Previous research has shown that an ensemble is often more accurate than any of the single classifiers in the ensemble. Bagging (Breiman, 1996c) and Boosting (Freund and Schapire, 1996; Schapire, 1990) are two relatively new but popular methods for producing ensembles. In this paper we evaluate these methods on 23 data sets using both neural networks and decision trees as our classification algorithms. Our results clearly indicate a number of conclusions. First, while Bagging is almost always more accurate than a single classifier, it is sometimes much less accurate than Boosting. On the other hand, Boosting can create ensembles that are less accurate than a single classifier -- especially when using neural networks. Analysis indicates that the performance of the Boosting methods is dependent on the characteristics of the data set being exa...
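A minimal bagging sketch in the spirit of Breiman (1996) follows; it uses scikit-learn's decision tree as the base learner and majority voting over bootstrap-trained models. The dataset, base learner, and parameters are illustrative choices, not the paper's experimental setup.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(6)

def bagging_fit_predict(X_train, y_train, X_test, n_estimators=25):
    """Train a bagged ensemble of decision trees and predict by majority vote.

    Assumes integer class labels (needed for np.bincount below).
    """
    n = len(X_train)
    votes = np.zeros((len(X_test), n_estimators), dtype=int)
    for m in range(n_estimators):
        idx = rng.integers(0, n, size=n)                      # bootstrap sample (with replacement)
        tree = DecisionTreeClassifier().fit(X_train[idx], y_train[idx])
        votes[:, m] = tree.predict(X_test)
    return np.array([np.bincount(row).argmax() for row in votes])  # majority vote

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
pred = bagging_fit_predict(X_tr, y_tr, X_te)
print("accuracy:", (pred == y_te).mean())
```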
Monte Carlo Simulations of Arterial Imaging with Optical Coherence Tomography
Amendt, P.; Estabrook, K.; Everett, M.; London, R.A.; Maitland, D.; Zimmerman, G.; Colston, B.; da Silva, L.; Sathyam, U.
2000-02-01
The laser-tissue interaction code LATIS [London et al., Appl. Optics 36, 9068 (1998)] is used to analyze photon scattering histories representative of optical coherence tomography (OCT) experiments performed at Lawrence Livermore National Laboratory. Monte Carlo photonics with Henyey-Greenstein anisotropic scattering is implemented and used to simulate signal discrimination of intravascular structure. An analytic model is developed and used to obtain a scaling-law relation for optimization of the OCT signal and to validate the Monte Carlo photonics. The appropriateness of the Henyey-Greenstein phase function is studied by direct comparison with more detailed Mie scattering theory using an ensemble of spherical dielectric scatterers. Modest differences are found between the two prescriptions for describing photon angular scattering in tissue. In particular, the Mie scattering phase functions provide less overall reflectance signal but more signal contrast compared to the Henyey-Greenstein formulation.
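A minimal sketch of sampling the Henyey-Greenstein phase function by inverse transform, the standard step in Monte Carlo photon transport of this kind, is given below; the anisotropy factor g = 0.9 is an illustrative tissue-like value, not a number from the paper.

```python
import numpy as np

rng = np.random.default_rng(7)

def sample_hg_cos_theta(g, n):
    """Sample the cosine of the scattering angle from the Henyey-Greenstein phase function."""
    xi = rng.random(n)
    if abs(g) < 1e-6:
        return 1.0 - 2.0 * xi                         # isotropic limit
    frac = (1.0 - g * g) / (1.0 - g + 2.0 * g * xi)
    return (1.0 + g * g - frac * frac) / (2.0 * g)

cos_t = sample_hg_cos_theta(g=0.9, n=1_000_000)
print("mean cosine:", cos_t.mean())                   # should be close to g = 0.9
```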
Bouallegue, Zied Ben; Theis, Susanne E; Pinson, Pierre
2015-01-01
Probabilistic forecasts in the form of ensembles of scenarios are required for complex decision-making processes. Ensemble forecasting systems provide such products, but the spatio-temporal structure of the forecast uncertainty is lost when statistical calibration of the ensemble forecasts is applied for each lead time and location independently. Non-parametric approaches allow the reconstruction of spatio-temporal joint probability distributions at a low computational cost. For example, the ensemble copula coupling (ECC) method consists of rebuilding the multivariate aspect of the forecast from the original ensemble forecasts. Based on the assumption of error stationarity, parametric methods aim to fully describe the forecast dependence structures. In this study, the concept of ECC is combined with past data statistics in order to account for the autocorrelation of the forecast error. The new approach, which preserves the dynamical development of the ensemble members, is called dynamic ensemble copula coupling (...
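A minimal sketch of the basic ECC reordering step that the dynamic variant builds on is shown below; the array shapes and the calibrated samples themselves are illustrative assumptions, and the calibration step is not shown.

```python
import numpy as np

def ecc_reorder(raw_ensemble, calibrated_samples):
    """Ensemble copula coupling: impose the rank structure of the raw ensemble on
    calibrated samples, column by column (one column per location/lead time).

    raw_ensemble, calibrated_samples : arrays of shape (n_members, n_dimensions)
    """
    ranks = np.argsort(np.argsort(raw_ensemble, axis=0), axis=0)   # rank of each raw member
    sorted_cal = np.sort(calibrated_samples, axis=0)               # order statistics of calibrated margins
    return np.take_along_axis(sorted_cal, ranks, axis=0)           # calibrated values, raw dependence

# Illustrative example with 5 members and 3 dimensions
rng = np.random.default_rng(8)
raw = rng.normal(size=(5, 3))
cal = rng.normal(loc=1.0, scale=2.0, size=(5, 3))
print(ecc_reorder(raw, cal))
```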
Cecilia Maya
2004-12-01
Full Text Available The Monte Carlo method is applied to several cases of financial option valuation. The method yields a good approximation when its precision is compared with that of other numerical methods. The estimate produced by the crude Monte Carlo version can be made even more accurate by resorting to variance-reduction techniques, among which the antithetic variate and the control variate are suggested. However, these techniques require greater computational effort, so they must be evaluated not only in terms of their precision but also of their efficiency.
Monte Carlo and nonlinearities
Dauchet, Jérémi; Blanco, Stéphane; Caliot, Cyril; Charon, Julien; Coustet, Christophe; Hafi, Mouna El; Eymet, Vincent; Farges, Olivier; Forest, Vincent; Fournier, Richard; Galtier, Mathieu; Gautrais, Jacques; Khuong, Anaïs; Pelissier, Lionel; Piaud, Benjamin; Roger, Maxime; Terrée, Guillaume; Weitz, Sebastian
2016-01-01
The Monte Carlo method is widely used to numerically predict systems behaviour. However, its powerful incremental design assumes a strong premise which has severely limited application so far: the estimation process must combine linearly over dimensions. Here we show that this premise can be alleviated by projecting nonlinearities on a polynomial basis and increasing the configuration-space dimension. Considering phytoplankton growth in light-limited environments, radiative transfer in planetary atmospheres, electromagnetic scattering by particles and concentrated-solar-power-plant productions, we prove the real world usability of this advance on four test-cases that were so far regarded as impracticable by Monte Carlo approaches. We also illustrate an outstanding feature of our method when applied to sharp problems with interacting particles: handling rare events is now straightforward. Overall, our extension preserves the features that made the method popular: addressing nonlinearities does not compromise o...
Wollaber, Allan Benton [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2016-06-16
This is a powerpoint presentation which serves as lecture material for the Parallel Computing summer school. It goes over the fundamentals of the Monte Carlo calculation method. The material is presented according to the following outline: Introduction (background, a simple example: estimating π), Why does this even work? (The Law of Large Numbers, The Central Limit Theorem), How to sample (inverse transform sampling, rejection), and An example from particle transport.
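A minimal sketch of two of the textbook ingredients listed in this outline (estimating π, and inverse-transform sampling); these are generic illustrations, not material taken from the slides themselves.

    # Textbook Monte Carlo illustrations: estimating pi and inverse-transform sampling.
    import numpy as np

    rng = np.random.default_rng(42)
    n = 1_000_000

    # Estimate pi: the fraction of uniform points in the unit square falling inside
    # the quarter circle of radius 1 converges to pi/4 by the Law of Large Numbers.
    xy = rng.random((n, 2))
    pi_hat = 4.0 * np.mean(np.sum(xy**2, axis=1) <= 1.0)

    # Inverse-transform sampling: if U ~ Uniform(0, 1), then -ln(U)/lam ~ Exponential(lam).
    lam = 2.0
    exp_samples = -np.log(rng.random(n)) / lam

    print(pi_hat, exp_samples.mean())   # ~3.1416 and ~1/lam = 0.5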
Calculations of canonical averages from the grand canonical ensemble.
Kosov, D S; Gelin, M F; Vdovin, A I
2008-02-01
Grand canonical and canonical ensembles become equivalent in the thermodynamic limit, but when the system size is finite the results obtained in the two ensembles deviate from each other. In many important cases, the canonical ensemble provides an appropriate physical description but it is often much easier to perform the calculations in the corresponding grand canonical ensemble. We present a method to compute averages in the canonical ensemble based on calculations of the expectation values in the grand canonical ensemble. The number of particles, which is fixed in the canonical ensemble, is not necessarily the same as the average number of particles in the grand canonical ensemble.
Stochastic ensembles, conformationally adaptive teamwork, and enzymatic detoxification.
Atkins, William M; Qian, Hong
2011-05-17
It has been appreciated for a long time that enzymes exist as conformational ensembles throughout multiple stages of the reactions they catalyze, but there is renewed interest in the functional implications. The energy landscape that results from conformationally diverse proteins is a complex surface with an energetic topography in multiple dimensions, even at the transition state(s) leading to product formation, and this represents a new paradigm. At the same time as there has been renewed interest in conformational ensembles, a new paradigm concerning enzyme function has emerged, wherein catalytic promiscuity has clear biological advantages in some cases. "Useful", or biologically functional, promiscuity or the related behavior of "multifunctionality" can be found in the immune system, enzymatic detoxification, signal transduction, and the evolution of new function from an existing pool of folded protein scaffolds. Experimental evidence supports the widely held assumption that conformational heterogeneity promotes functional promiscuity. The common link between these coevolving paradigms is the inherent structural plasticity and conformational dynamics of proteins that, on one hand, lead to complex but evolutionarily selected energy landscapes and, on the other hand, promote functional promiscuity. Here we consider a logical extension of the overlap between these two nascent paradigms: functionally promiscuous and multifunctional enzymes such as detoxification enzymes are expected to have an ensemble landscape with more states accessible on multiple time scales than substrate specific enzymes. Two attributes of detoxification enzymes become important in the context of conformational ensembles: these enzymes metabolize multiple substrates, often in substrate mixtures, and they can form multiple products from a single substrate. These properties, combined with complex conformational landscapes, lead to the possibility of interesting time-dependent, or emergent
Hybrid Data Assimilation without Ensemble Filtering
Todling, Ricardo; Akkraoui, Amal El
2014-01-01
The Global Modeling and Assimilation Office is preparing to upgrade its three-dimensional variational system to a hybrid approach in which the ensemble is generated using a square-root ensemble Kalman filter (EnKF) and the variational problem is solved using the Grid-point Statistical Interpolation system. As in most EnKF applications, we found it necessary to employ a combination of multiplicative and additive inflations, to compensate for sampling and modeling errors, respectively, and to maintain the small-member ensemble solution close to the variational solution; we also found it necessary to re-center the members of the ensemble about the variational analysis. During tuning of the filter we have found re-centering and additive inflation to play a considerably larger role than expected, particularly in a dual-resolution context when the variational analysis is run at a larger resolution than the ensemble. This led us to consider a hybrid strategy in which the members of the ensemble are generated by simply converting the variational analysis to the resolution of the ensemble and applying additive inflation, thus bypassing the EnKF. Comparisons of this so-called filter-free hybrid procedure with an EnKF-based hybrid procedure and a control non-hybrid, traditional, scheme show both hybrid strategies to provide equally significant improvement over the control; more interestingly, the filter-free procedure was found to give qualitatively similar results to the EnKF-based procedure.
MSEBAG: a dynamic classifier ensemble generation based on `minimum-sufficient ensemble' and bagging
Chen, Lei; Kamel, Mohamed S.
2016-01-01
In this paper, we propose a dynamic classifier system, MSEBAG, which is characterised by searching for the 'minimum-sufficient ensemble' and bagging at the ensemble level. It adopts an 'over-generation and selection' strategy and aims to achieve a good bias-variance trade-off. In the training phase, MSEBAG first searches for the 'minimum-sufficient ensemble', which maximises the in-sample fitness with the minimal number of base classifiers. Then, starting from the 'minimum-sufficient ensemble', a backward stepwise algorithm is employed to generate a collection of ensembles. The objective is to create a collection of ensembles with a descending fitness on the data, as well as a descending complexity in the structure. MSEBAG dynamically selects the ensembles from the collection for the decision aggregation. The extended adaptive aggregation (EAA) approach, a bagging-style algorithm performed at the ensemble level, is employed for this task. EAA searches for the competent ensembles using a score function, which takes into consideration both the in-sample fitness and the confidence of the statistical inference, and averages the decisions of the selected ensembles to label the test pattern. The experimental results show that the proposed MSEBAG outperforms the benchmarks on average.
4DVAR by ensemble Kalman smoother
Mandel, Jan; Gratton, Serge
2013-01-01
We propose to use the ensemble Kalman smoother (EnKS) as the linear least squares solver in the Gauss-Newton method for the large nonlinear least squares problem in incremental 4DVAR. The ensemble approach is naturally parallel over the ensemble members and no tangent or adjoint operators are needed. Further, adding a regularization term results in replacing the Gauss-Newton method, which may diverge, by the Levenberg-Marquardt method, which is known to be convergent. The regularization is implemented efficiently as an additional observation in the EnKS.
Derivation of Mayer Series from Canonical Ensemble
Wang, Xian-Zhi
2016-02-01
Mayer derived the Mayer series from both the canonical ensemble and the grand canonical ensemble by use of the cluster expansion method. In 2002, we conjectured a recursion formula of the canonical partition function of a fluid (X.Z. Wang, Phys. Rev. E 66 (2002) 056102). In this paper we give a proof for this formula by developing an appropriate expansion of the integrand of the canonical partition function. We further derive the Mayer series solely from the canonical ensemble by use of this recursion formula.
Switching Between the NVT and NpT Ensembles Using the Reweighting and Reconstruction Scheme
Kadoura, Ahmad Salim
2015-06-01
Recently, we have developed several techniques in order to accelerate Monte Carlo (MC) molecular simulations. For that purpose, two strategies were followed. In the first, new algorithms were proposed as a set of early rejection schemes performing faster than the conventional algorithm while preserving the accuracy of the method. On the other hand, a reweighting and reconstruction scheme was introduced that is capable of retrieving primary quantities and second derivative properties at several thermodynamic conditions from a single MC Markov chain. The latter scheme was first developed to extrapolate quantities in the NVT ensemble for structureless Lennard-Jones particles. However, it is evident that for most real life applications the NpT ensemble is more convenient, as pressure and temperature are usually known. Therefore, in this paper we present an extension of the reweighting and reconstruction method to solve NpT problems utilizing the same Markov chains generated by the NVT ensemble simulations. Eventually, the new approach allows elegant switching between the two ensembles for several quantities at a wide range of neighboring thermodynamic conditions.
Data Assimilation for Wildland Fires: Ensemble Kalman filters in coupled atmosphere-surface models
Mandel, Jan; Beezley, Jonathan D.; Coen, Janice L.; Kim, Minjeong
2007-01-01
Two wildland fire models are described, one based on reaction-diffusion-convection partial differential equations, and one based on semi-empirical fire spread by the level set method. The level set method model is coupled with the Weather Research and Forecasting (WRF) atmospheric model. The regularized and the morphing ensemble Kalman filter are used for data assimilation.
Data Assimilation for Wildland Fires: Ensemble Kalman filters in coupled atmosphere-surface models
Mandel, Jan; Coen, Janice L; Kim, Minjeong
2007-01-01
Two wildland fire models are described, one based on reaction-diffusion-convection partial differential equations, and one based on empirical fire spread by the level set method. The level set method model is coupled with the Weather Research and Forecasting (WRF) atmospheric model. The regularized and the morphing ensemble Kalman filter are used for data assimilation.
Nonextensivity in Magnetic Nanocluster Ensembles
Binek, Christian; Polisetty, Srinivas; He, Xi; Mukherjee, Tathagata; Rajasekeran, Rajesh; Redepenning, Jody
2006-03-01
We study the scaling behavior of dipolar interacting nanoparticles in 3D samples of various sizes but constant particle density. Ferromagnetic γ-Fe2O3 clusters embedded in a polystyrene matrix are fabricated by thermal decomposition of metal carbonyls. Transmission electron microscopy reveals a narrow size distribution of 12 nm clusters. They are randomly dispersed in the matrix with an average separation of 80 nm. Magnetization isotherms of these single domain particle ensembles are measured by SQUID magnetometry above the blocking temperature TB = 115 K, where non-equilibrium effects are avoided. After demagnetization corrections which convert the applied magnetic fields into internal fields, H, a data collapse is achieved when scaling the magnetic moment, m, and H by appropriate factors. The latter are theoretically predicted functions of the number of particles and are determined here numerically. Scaling of H takes into account the nonextensive (NE) behavior of dipolar interacting particles. In the case of long range interactions a scaling scheme has been proposed by Tsallis and confirmed by simulations. The controversial field of NE thermodynamics requires, however, experimental evidence, which is provided here.
Ensemble Dynamics and Bred Vectors
Balci, Nusret; Restrepo, Juan M; Sell, George R
2011-01-01
We introduce the new concept of an EBV to assess the sensitivity of model outputs to changes in initial conditions for weather forecasting. The new algorithm, which we call the "Ensemble Bred Vector" or EBV, is based on collective dynamics in essential ways. By construction, the EBV algorithm produces one or more dominant vectors. We investigate the performance of EBV, comparing it to the BV algorithm as well as the finite-time Lyapunov Vectors. We give a theoretical justification to the observed fact that the vectors produced by BV, EBV, and the finite-time Lyapunov vectors are similar for small amplitudes. Numerical comparisons of BV and EBV for the 3-equation Lorenz model and for a forced, dissipative partial differential equation of Cahn-Hilliard type that arises in modeling the thermohaline circulation, demonstrate that the EBV yields a size-ordered description of the perturbation field, and is more robust than the BV in the higher nonlinear regime. The EBV yields insight into the fractal structure of th...
2012-01-01
The 5th edition of the "Monts Jura Jazz Festival" will take place at the Esplanade du Lac in Divonne-les-Bains, France on September 21 and 22. This festival, organized by the CERN Jazz Club and supported by the CERN Staff Association, is becoming a major musical event in the Geneva region. International jazz artists like Didier Lockwood and David Reinhardt are part of this year's outstanding program. Full program and e-tickets are available on the festival website. Don't miss this great festival!
Jazz Club
2012-01-01
The 5th edition of the "Monts Jura Jazz Festival" will take place on September 21st and 22nd 2012 at the Esplanade du Lac in Divonne-les-Bains. This festival is organized by the "CERN Jazz Club" with the support of the "CERN Staff Association". It is a major musical event in the French/Swiss area and offers a world-class program with jazz artists such as D. Lockwood and D. Reinhardt. More information on http://www.jurajazz.com.
LMC: Logarithmantic Monte Carlo
Mantz, Adam B.
2017-06-01
LMC is a Markov Chain Monte Carlo engine in Python that implements adaptive Metropolis-Hastings and slice sampling, as well as the affine-invariant method of Goodman & Weare, in a flexible framework. It can be used for simple problems, but the main use case is problems where expensive likelihood evaluations are provided by less flexible third-party software, which benefit from parallelization across many nodes at the sampling level. The parallel/adaptive methods use communication through MPI, or alternatively by writing/reading files, and mostly follow the approaches pioneered by CosmoMC (ascl:1106.025).
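For orientation, the core update that such engines are built around can be written in a few lines; the sketch below is a generic random-walk Metropolis sampler and does not reflect the LMC package's actual interface, and the target density is a placeholder.

    # Generic random-walk Metropolis sampler (illustrative only, not the LMC API).
    import numpy as np

    def metropolis(log_post, x0, n_steps, step=0.5, seed=1):
        rng = np.random.default_rng(seed)
        x = np.asarray(x0, dtype=float)
        lp = log_post(x)
        chain = np.empty((n_steps, x.size))
        for i in range(n_steps):
            prop = x + step * rng.standard_normal(x.size)   # symmetric Gaussian proposal
            lp_prop = log_post(prop)
            if np.log(rng.random()) < lp_prop - lp:          # Metropolis accept/reject
                x, lp = prop, lp_prop
            chain[i] = x
        return chain

    # Example target: a 2D standard normal (log-posterior up to a constant).
    samples = metropolis(lambda x: -0.5 * np.dot(x, x), [0.0, 0.0], 20_000)
    print(samples.mean(axis=0), samples.var(axis=0))         # ~[0, 0] and ~[1, 1]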
Zhao, Huawei
2009-01-01
A ZEMAX model was constructed to simulate a clinical trial of intraocular lenses (IOLs) based on a clinically oriented Monte Carlo ensemble analysis using postoperative ocular parameters. The purpose of this model is to test the feasibility of streamlining and optimizing both the design process and the clinical testing of IOLs. This optical ensemble analysis (OEA) is also validated. Simulated pseudophakic eyes were generated by using the tolerancing and programming features of ZEMAX optical design software. OEA methodology was verified by demonstrating that the results of clinical performance simulations were consistent with previously published clinical performance data using the same types of IOLs. From these results we conclude that the OEA method can objectively simulate the potential clinical trial performance of IOLs.
Manca, Fabio; Giordano, Stefano; Palla, Pier Luca; Zucca, Rinaldo; Cleri, Fabrizio; Colombo, Luciano
2012-04-21
Stretching experiments on single molecules of arbitrary length opened the way for studying the statistical mechanics of small systems. In many cases in which the thermodynamic limit is not satisfied, different macroscopic boundary conditions, corresponding to different statistical mechanics ensembles, yield different force-displacement curves. We formulate analytical expressions and develop Monte Carlo simulations to quantitatively evaluate the difference between the Helmholtz and the Gibbs ensembles for a wide range of polymer models of biological relevance. We consider generalizations of the freely jointed chain and of the worm-like chain models with extensible bonds. In all cases we show that the convergence to the thermodynamic limit upon increasing contour length is described by a suitable power law and a specific scaling exponent, characteristic of each model.
A 4D-Ensemble-Variational System for Data Assimilation and Ensemble Initialization
Bowler, Neill; Clayton, Adam; Jardak, Mohamed; Lee, Eunjoo; Jermey, Peter; Lorenc, Andrew; Piccolo, Chiara; Pring, Stephen; Wlasak, Marek; Barker, Dale; Inverarity, Gordon; Swinbank, Richard
2016-04-01
The Met Office has been developing a four-dimensional ensemble variational (4DEnVar) data assimilation system over the past four years. The 4DEnVar system is intended both as a data assimilation system in its own right and as an improved means of initializing the Met Office Global and Regional Ensemble Prediction System (MOGREPS). The global MOGREPS ensemble has been initialized by running an ensemble of 4DEnVars (En-4DEnVar). The scalability and maintainability of ensemble data assimilation methods make them increasingly attractive, and 4DEnVar may be adopted in the context of the Met Office's LFRic project to redevelop the technical infrastructure to enable its Unified Model (MetUM) to be run efficiently on massively parallel supercomputers. This presentation will report on the results of the 4DEnVar development project, including experiments that have been run using ensemble sizes of up to 200 members.
Transition from Poisson to circular unitary ensemble
Vinayak; Akhilesh Pandey
2009-09-01
Transitions to universality classes of random matrix ensembles have been useful in the study of weakly-broken symmetries in quantum chaotic systems. Transitions involving Poisson as the initial ensemble have been particularly interesting. The exact two-point correlation function was derived by one of the present authors for the Poisson to circular unitary ensemble (CUE) transition with uniform initial density. This is given in terms of a rescaled symmetry breaking parameter Λ. The same result was obtained for the Poisson to Gaussian unitary ensemble (GUE) transition by Kunz and Shapiro, using the contour-integral method of Brezin and Hikami. We show that their method is applicable to the Poisson to CUE transition with arbitrary initial density. Their method is also applicable to the more general ℓCUE to CUE transition, where ℓCUE refers to the superposition of ℓ independent CUE spectra in arbitrary ratio.
Ensemble treatments of thermal pairing in nuclei
Hung, Nguyen Quang; Dang, Nguyen Dinh
2009-10-01
A systematic comparison is conducted for pairing properties of finite systems at nonzero temperature as predicted by the exact solutions of the pairing problem embedded in three principal statistical ensembles, namely the grand-canonical ensemble, canonical ensemble and microcanonical ensemble, as well as the unprojected (FTBCS1+SCQRPA) and Lipkin-Nogami projected (FTLN1+SCQRPA) theories that include the quasiparticle number fluctuation and coupling to pair vibrations within the self-consistent quasiparticle random-phase approximation. The numerical calculations are performed for the pairing gap, total energy, heat capacity, entropy, and microcanonical temperature within the doubly-folded equidistant multilevel pairing model. The FTLN1+SCQRPA predictions are found to agree best with the exact grand-canonical results. In general, all approaches clearly show that the superfluid-normal phase transition is smoothed out in finite systems. A novel formula is suggested for extracting the empirical pairing gap in reasonable agreement with the exact canonical results.
Ensemble Machine Learning Methods and Applications
Ma, Yunqian
2012-01-01
It is common wisdom that gathering a variety of views and inputs improves the process of decision making, and, indeed, underpins a democratic society. Dubbed “ensemble learning” by researchers in computational intelligence and machine learning, it is known to improve a decision system’s robustness and accuracy. Now, fresh developments are allowing researchers to unleash the power of ensemble learning in an increasing range of real-world applications. Ensemble learning algorithms such as “boosting” and “random forest” facilitate solutions to key computational issues such as face detection and are now being applied in areas as diverse as object tracking and bioinformatics. Responding to a shortage of literature dedicated to the topic, this volume offers comprehensive coverage of state-of-the-art ensemble learning techniques, including various contributions from researchers in leading industrial research labs. At once a solid theoretical study and a practical guide, the volume is a windfall for r...
Ensemble Learning for Free with Evolutionary Algorithms ?
Gagné, Christian; Schoenauer, Marc; Tomassini, Marco
2007-01-01
Evolutionary Learning proceeds by evolving a population of classifiers, from which it generally returns (with some notable exceptions) the single best-of-run classifier as final result. In the meanwhile, Ensemble Learning, one of the most efficient approaches in supervised Machine Learning for the last decade, proceeds by building a population of diverse classifiers. Ensemble Learning with Evolutionary Computation thus receives increasing attention. The Evolutionary Ensemble Learning (EEL) approach presented in this paper features two contributions. First, a new fitness function, inspired by co-evolution and enforcing the classifier diversity, is presented. Further, a new selection criterion based on the classification margin is proposed. This criterion is used to extract the classifier ensemble from the final population only (Off-line) or incrementally along evolution (On-line). Experiments on a set of benchmark problems show that Off-line outperforms single-hypothesis evolutionary learning and state-of-art ...
Desgranges, Caroline; Delhommelle, Jerome
2009-06-28
In recent years, powerful and accurate methods, based on a Wang-Landau sampling, have been developed to determine phase equilibria. However, while these methods have been extensively applied to study the phase behavior of model fluids, they have yet to be applied to molecular systems. In this work, we show how, by combining hybrid Monte Carlo simulations in the isothermal-isobaric ensemble with the Wang-Landau sampling method, we determine the vapor-liquid equilibria of various molecular fluids. More specifically, we present results obtained on rigid molecules, such as benzene, as well as on flexible chains of n-alkanes. The reliability of the method introduced in this work is assessed by demonstrating that our results are in excellent agreement with the results obtained in previous work on simple fluids, using either transition matrix or conventional Monte Carlo simulations with a Wang-Landau sampling, and on molecular fluids, using histogram reweighting or Gibbs ensemble Monte Carlo simulations.
Reversible Projective Measurement in Quantum Ensembles
Khitrin, Anatoly; Lee, Jae-Seung
2010-01-01
We present experimental NMR demonstration of a scheme of reversible projective measurement, which allows extracting information on outcomes and probabilities of a projective measurement in a non-destructive way, with a minimal net effect on the quantum state of an ensemble. The scheme uses reversible dynamics and weak measurement of the intermediate state. The experimental system is an ensemble of 133Cs (S = 7/2) nuclei in a liquid-crystalline matrix.
Ozone ensemble forecast with machine learning algorithms
Mallet, Vivien; Stoltz, Gilles; Mauricette, Boris
2009-01-01
International audience; We apply machine learning algorithms to perform sequential aggregation of ozone forecasts. The latter rely on a multimodel ensemble built for ozone forecasting with the modeling system Polyphemus. The ensemble simulations are obtained by changes in the physical parameterizations, the numerical schemes, and the input data to the models. The simulations are carried out for summer 2001 over western Europe in order to forecast ozone daily peaks and ozone hourly concentrati...
Cluster Ensemble-based Image Segmentation
Xiaoru Wang; Junping Du; Shuzhe Wu; Xu Li; Fu Li
2013-01-01
Image segmentation is the foundation of computer vision applications. In this paper, we propose a new cluster ensemble-based image segmentation algorithm, which overcomes several problems of traditional methods. We make two main contributions in this paper. First, we introduce the cluster ensemble concept to fuse the segmentation results from different types of visual features effectively, which can deliver a better final result and achieve a much more stable performance for broad categories ...
Calibrating ensemble reliability whilst preserving spatial structure
Jonathan Flowerdew
2014-03-01
Full Text Available Ensemble forecasts aim to improve decision-making by predicting a set of possible outcomes. Ideally, these would provide probabilities which are both sharp and reliable. In practice, the models, data assimilation and ensemble perturbation systems are all imperfect, leading to deficiencies in the predicted probabilities. This paper presents an ensemble post-processing scheme which directly targets local reliability, calibrating both climatology and ensemble dispersion in one coherent operation. It makes minimal assumptions about the underlying statistical distributions, aiming to extract as much information as possible from the original dynamic forecasts and support statistically awkward variables such as precipitation. The output is a set of ensemble members preserving the spatial, temporal and inter-variable structure from the raw forecasts, which should be beneficial to downstream applications such as hydrological models. The calibration is tested on three leading 15-d ensemble systems, and their aggregation into a simple multimodel ensemble. Results are presented for 12 h, 1° scale over Europe for a range of surface variables, including precipitation. The scheme is very effective at removing unreliability from the raw forecasts, whilst generally preserving or improving statistical resolution. In most cases, these benefits extend to the rarest events at each location within the 2-yr verification period. The reliability and resolution are generally equivalent or superior to those achieved using a Local Quantile-Quantile Transform, an established calibration method which generalises bias correction. The value of preserving spatial structure is demonstrated by the fact that 3×3 averages derived from grid-scale precipitation calibration perform almost as well as direct calibration at 3×3 scale, and much better than a similar test neglecting the spatial relationships. Some remaining issues are discussed regarding the finite size of the output
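As a point of reference for the calibration ideas discussed above, the sketch below applies a plain local quantile-quantile (quantile-mapping) transform, the benchmark method mentioned in the abstract; the climatologies and the raw ensemble are synthetic placeholders, and the paper's own spatially coherent scheme is more elaborate than this.

    # Minimal local quantile-quantile transform at a single location (illustrative).
    import numpy as np

    rng = np.random.default_rng(11)
    obs_clim = rng.gamma(2.0, 2.0, size=5000)           # past observations at one location
    fcst_clim = 0.7 * rng.gamma(2.0, 2.0, size=5000)    # past raw forecasts (biased)

    def qq_transform(members, fcst_clim, obs_clim):
        # Map each member to the observed value having the same climatological quantile.
        ranks = np.searchsorted(np.sort(fcst_clim), members) / len(fcst_clim)
        return np.quantile(obs_clim, np.clip(ranks, 0.0, 1.0))

    raw_members = 0.7 * rng.gamma(2.0, 2.0, size=15)     # today's raw ensemble at that location
    calibrated = qq_transform(raw_members, fcst_clim, obs_clim)
    print(calibrated)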
Liu, Li; Xu, Yue-Ping
2017-04-01
Ensemble flood forecasting driven by numerical weather prediction products is becoming more commonly used in operational flood forecasting applications. In this study, a hydrological ensemble flood forecasting system based on the Variable Infiltration Capacity (VIC) model and quantitative precipitation forecasts from the TIGGE dataset is constructed for the Lanjiang Basin, Southeast China. The impacts of calibration strategies and ensemble methods on the performance of the system are then evaluated. The hydrological model is optimized by a parallel-programmed ɛ-NSGAII multi-objective algorithm, and two separately parameterized models are determined to simulate daily flows and peak flows, coupled with a modular approach. The results indicate that the ɛ-NSGAII algorithm permits more efficient optimization and a rational determination of parameter settings. It is demonstrated that the multimodel ensemble streamflow mean has better skill than the best single-model ensemble mean (ECMWF), and that the multimodel ensembles weighted on members and skill scores outperform other multimodel ensembles. For a typical flood event, it is shown that the flood can be predicted 3-4 days in advance, but the flows in the rising limb can be captured only 1-2 days ahead owing to the flashy character of the basin. With respect to peak flows selected by a Peaks Over Threshold approach, the ensemble means from either single models or multimodels are generally underestimated, as the extreme values are smoothed out by the ensemble process.
Towards a GME ensemble forecasting system: Ensemble initialization using the breeding technique
Jan D. Keller
2008-12-01
Full Text Available The quantitative forecast of precipitation requires a probabilistic background, particularly with regard to forecast lead times of more than 3 days. As only ensemble simulations can provide useful information on the underlying probability density function, we built a new ensemble forecasting system (GME-EFS) based on the GME model of the German Meteorological Service (DWD). For the generation of appropriate initial ensemble perturbations we chose the breeding technique developed by Toth and Kalnay (1993, 1997), which develops perturbations by estimating the regions of largest model-error-induced uncertainty. This method is applied and tested in the framework of quasi-operational forecasts for a three month period in 2007. The performance of the resulting ensemble forecasts is compared to the operational ensemble prediction systems ECMWF EPS and NCEP GFS by means of ensemble spread of free atmosphere parameters (geopotential and temperature) and ensemble skill of precipitation forecasting. This comparison indicates that the GME ensemble forecasting system (GME-EFS) provides reasonable forecasts with a spread skill score comparable to that of the NCEP GFS. An analysis with the continuous ranked probability score exhibits a lack of resolution for the GME forecasts compared to the operational ensembles. However, with significant enhancements during the 3 month test period, the first results of our work with the GME-EFS indicate possibilities for further development as well as the potential for later operational usage.
Direct determination of liquid phase coexistence by Monte Carlo simulations.
Zweistra, Henk J A; Besseling, N A M
2006-07-01
A formalism to determine coexistence points by means of Monte Carlo simulations is presented. The general idea of the method is to perform a simulation simultaneously in several unconnected boxes which can exchange particles. At equilibrium, most of the boxes will be occupied by a homogeneous phase. The compositions of these boxes yield coexisting points on the binodal. However, since the overall composition is fixed, at least one of the boxes will contain an interface. We show that this does not affect the results, provided that the interface has no net curvature. We coin the name "Helmholtz-ensemble method," because the method is related to the well-known Gibbs-ensemble method, but the volume of the boxes is constant. Since the box volumes are constant, we expect that this method will be particularly useful for lattice models. The accuracy of the Helmholtz-ensemble method is benchmarked against known coexistence curves of the three-dimensional Ising model with excellent results.
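A schematic particle-transfer move between two boxes of fixed, equal volume, in the spirit of the constant-volume scheme described above; the continuum boxes and the soft repulsive pair energy are placeholder assumptions (the paper works with lattice models), so this is a sketch of the move type rather than the authors' algorithm.

    # Schematic constant-volume particle-swap move between two boxes (illustrative).
    import numpy as np

    rng = np.random.default_rng(7)
    beta, L = 1.0, 8.0                        # inverse temperature, box edge (equal volumes)

    def energy(coords):
        # Placeholder soft repulsive pair energy with the minimum-image convention.
        if len(coords) < 2:
            return 0.0
        d = coords[:, None, :] - coords[None, :, :]
        d -= L * np.round(d / L)
        r2 = np.sum(d**2, axis=-1)[np.triu_indices(len(coords), 1)]
        return np.sum(np.exp(-r2))

    def swap_move(boxes):
        src, dst = rng.permutation(2)
        if len(boxes[src]) == 0:
            return
        k = rng.integers(len(boxes[src]))
        new_src = np.delete(boxes[src], k, axis=0)
        new_dst = np.vstack([boxes[dst], rng.random(3) * L])   # insert at a random position
        dE = (energy(new_src) + energy(new_dst)) - (energy(boxes[src]) + energy(boxes[dst]))
        # Transfer acceptance for boxes of equal, fixed volume (Gibbs-ensemble rule with V_src = V_dst).
        acc = (len(boxes[src]) / (len(boxes[dst]) + 1)) * np.exp(-beta * dE)
        if rng.random() < acc:
            boxes[src], boxes[dst] = new_src, new_dst

    boxes = [rng.random((20, 3)) * L, rng.random((20, 3)) * L]
    for _ in range(1000):
        swap_move(boxes)
    print(len(boxes[0]), len(boxes[1]))        # particle numbers after equilibration moves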
Minimal redefinition of the OSV ensemble
Parvizi, S; Parvizi, Shahrokh; Tavanfar, Alireza
2005-01-01
In the interesting conjecture, Z_{BH}=|Z_{top}|^2, proposed by Ooguri, Strominger and Vafa (OSV), the black hole ensemble is a mixed ensemble, and the resulting degeneracy of states, as obtained from the ensemble inverse-Laplace integration, suffers from prefactors which do not respect the (relevant) electric-magnetic dualities. One idea to overcome this deficiency, as claimed recently, is imposing a nontrivial measure for the ensemble sum. We address this problem and, upon a redefinition of the OSV ensemble whose variables are as numerous as the electric potentials, show that no non-Euclidean measure is needed to restore the symmetry. In detail, we rewrite the OSV free energy as a function of new variables which are combinations of the electric potentials and the black hole charges. Subsequently the Legendre transformation which bridges between the entropy and the black hole free energy in terms of these variables points to a generalized ensemble. In this context we will consider all the cases of relevance: sm...
Level density for deformations of the Gaussian orthogonal ensemble
Bertuola, A C; Hussein, M S; Pato, M P; Sargeant, A J
2004-01-01
Formulas are derived for the average level density of deformed, or transition, Gaussian orthogonal random matrix ensembles. After some general considerations about Gaussian ensembles we derive formulas for the average level density for (i) the transition from the Gaussian orthogonal ensemble (GOE) to the Poisson ensemble and (ii) the transition from the GOE to m GOEs.
Multiscale Monte Carlo equilibration: Two-color QCD with two fermion flavors
Detmold, William
2016-01-01
We demonstrate the applicability of a recently proposed multi-scale thermalization algorithm to two-color quantum chromodynamics (QCD) with two mass-degenerate fermion flavors. The algorithm involves refining an ensemble of gauge configurations that had been generated using a renormalization group (RG) matched coarse action, thereby producing a fine ensemble that is close to the thermalized distribution of a target fine action; the refined ensemble is subsequently rethermalized using conventional algorithms. Although the generalization of this algorithm from pure Yang-Mills theory to QCD with dynamical fermions is straightforward, we find that in the latter case, the method is susceptible to numerical instabilities during the initial stages of rethermalization when using the hybrid Monte Carlo algorithm. We find that these instabilities arise from large fermion forces in the evolution, which are attributed to an accumulation of spurious near-zero modes of the Dirac operator. We propose a simple strategy for ...
The classicality and quantumness of a quantum ensemble
Zhu, Xuanmin; Wu, Shengjun; Liu, Quanhui
2010-01-01
In this paper, we investigate the classicality and quantumness of a quantum ensemble. We define a quantity called classicality to characterize how classical a quantum ensemble is. An ensemble of commuting states that can be manipulated classically has a unit classicality, while a general ensemble has a classicality less than 1. We also study how quantum an ensemble is by defining a related quantity called quantumness. We find that the classicality of an ensemble is closely related to how perfectly the ensemble can be cloned, and that the quantumness of an ensemble is essentially responsible for the security of quantum key distribution (QKD) protocols using that ensemble. Furthermore, we show that the quantumness of an ensemble used in a QKD protocol is exactly the attainable lower bound of the error rate in the sifted key.
Ensemble postprocessing for probabilistic quantitative precipitation forecasts
Bentzien, S.; Friederichs, P.
2012-12-01
Precipitation is one of the most difficult weather variables to predict in hydrometeorological applications. In order to assess the uncertainty inherent in deterministic numerical weather prediction (NWP), meteorological services around the globe develop ensemble prediction systems (EPS) based on high-resolution NWP systems. With non-hydrostatic model dynamics and without parameterization of deep moist convection, high-resolution NWP models are able to describe convective processes in more detail and provide more realistic mesoscale structures. However, precipitation forecasts are still affected by displacement errors, systematic biases and fast error growth on small scales. Probabilistic guidance can be achieved from an ensemble setup which accounts for model error and uncertainty of initial and boundary conditions. The German Meteorological Service (Deutscher Wetterdienst, DWD) provides such an ensemble system based on the German-focused limited-area model COSMO-DE. With a horizontal grid-spacing of 2.8 km, COSMO-DE is the convection-permitting high-resolution part of the operational model chain at DWD. The COSMO-DE-EPS consists of 20 realizations of COSMO-DE, driven by initial and boundary conditions derived from 4 global models and 5 perturbations of model physics. Ensemble systems like COSMO-DE-EPS are often limited with respect to ensemble size due to the immense computational costs. As a consequence, they can be biased and exhibit insufficient ensemble spread, and probabilistic forecasts may not be well calibrated. In this study, probabilistic quantitative precipitation forecasts are derived from COSMO-DE-EPS and evaluated at more than 1000 rain gauges located all over Germany. COSMO-DE-EPS is a frequently updated ensemble system, initialized 8 times a day. We use the time-lagged approach to inexpensively increase ensemble spread, which results in more reliable forecasts especially for extreme precipitation events. Moreover, we will show that statistical
Athènes, Manuel; Terrier, Pierre
2017-05-01
Markov chain Monte Carlo methods are primarily used for sampling from a given probability distribution and estimating multi-dimensional integrals based on the information contained in the generated samples. Whenever it is possible, more accurate estimates are obtained by combining Monte Carlo integration and integration by numerical quadrature along particular coordinates. We show that this variance reduction technique, referred to as conditioning in probability theory, can be advantageously implemented in expanded ensemble simulations. These simulations aim at estimating thermodynamic expectations as a function of an external parameter that is sampled like an additional coordinate. Conditioning therein entails integrating along the external coordinate by numerical quadrature. We prove variance reduction with respect to alternative standard estimators and demonstrate the practical efficiency of the technique by estimating free energies and characterizing a structural phase transition between two solid phases.
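A minimal illustration of the conditioning idea described above, under the assumption of a toy observable: one coordinate (an external parameter lam) is integrated by Gauss-Legendre quadrature for every sample of the other coordinate, instead of being sampled itself. The integrand is a placeholder, not the expanded-ensemble free-energy application treated in the paper.

    # Conditioning (Rao-Blackwellization): replace sampling along one coordinate by quadrature.
    import numpy as np

    rng = np.random.default_rng(4)
    f = lambda x, lam: np.exp(-lam * x**2)          # observable depending on an external parameter lam

    x = rng.standard_normal(100_000)                # samples of the "internal" coordinate

    # Crude estimator: also sample lam uniformly in [0, 1] as an extra coordinate.
    lam_samples = rng.random(100_000)
    crude = f(x, lam_samples).mean()

    # Conditioned estimator: integrate over lam by quadrature for every x sample.
    lam_grid, w = np.polynomial.legendre.leggauss(16)
    lam_grid = 0.5 * (lam_grid + 1.0)               # map nodes from [-1, 1] to [0, 1]
    w = 0.5 * w
    conditioned = (f(x[:, None], lam_grid[None, :]) @ w).mean()

    print(crude, conditioned)                        # same target; the conditioned estimate has lower variance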
A Monte Carlo Resampling Approach for the Calculation of Hybrid Classical and Quantum Free Energies.
Cave-Ayland, Christopher; Skylaris, Chris-Kriton; Essex, Jonathan W
2017-02-14
Hybrid free energy methods allow estimation of free energy differences at the quantum mechanics (QM) level with high efficiency by performing sampling at the classical mechanics (MM) level. Various approaches to allow the calculation of QM corrections to classical free energies have been proposed. The single step free energy perturbation approach starts with a classically generated ensemble, a subset of structures of which are postprocessed to obtain QM energies for use with the Zwanzig equation. This gives an estimate of the free energy difference associated with the change from an MM to a QM Hamiltonian. Owing to the poor numerical properties of the Zwanzig equation, however, recent developments have produced alternative methods which aim to provide access to the properties of the true QM ensemble. Here we propose an approach based on the resampling of MM structural ensembles and application of a Monte Carlo acceptance test which, in principle, can generate the exact QM ensemble or intermediate ensembles between the MM and QM states. We carry out a detailed comparison against the Zwanzig equation and recently proposed non-Boltzmann methods. As a test system we use a set of small molecule hydration free energies for which hybrid free energy calculations are performed at the semiempirical Density Functional Tight Binding level. Equivalent ensembles at this level of theory have also been generated allowing the reverse QM to MM perturbations to be performed along with a detailed analysis of the results. Additionally, a previously published nucleotide base pair data set simulated at the QM level using ab initio molecular dynamics is also considered. We provide a strong rationale for the use of the Monte Carlo Resampling and non-Boltzmann approaches by showing that configuration space overlaps can be estimated which provide useful diagnostic information regarding the accuracy of these hybrid approaches.
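For reference, the single-step exponential-averaging (Zwanzig) estimator that this abstract takes as its starting point can be written in a few lines; the energy gaps below are synthetic placeholders rather than QM/MM data, and the temperature and units are illustrative assumptions.

    # Single-step free energy perturbation (Zwanzig) estimator on synthetic energy gaps.
    import numpy as np

    rng = np.random.default_rng(3)
    beta = 1.0 / (0.001987 * 300.0)          # 1/kT in (kcal/mol)^-1 at 300 K

    # Pretend these are (U_QM - U_MM) gaps evaluated on structures drawn from the MM ensemble.
    dU = rng.normal(loc=2.0, scale=1.0, size=5000)

    # Zwanzig exponential averaging, written in a numerically safer log-sum-exp form:
    # dA = -(1/beta) * ln < exp(-beta * dU) >_MM
    x = -beta * dU
    dA_mm_to_qm = -(np.max(x) + np.log(np.mean(np.exp(x - np.max(x))))) / beta

    print(dA_mm_to_qm)   # for a Gaussian gap this approaches mean(dU) - beta*var(dU)/2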
Nanoporous gold formation by dealloying : A Metropolis Monte Carlo study
Zinchenko, O.; De Raedt, H. A.; Detsi, E.; Onck, P. R.; De Hosson, J. T. M.
2013-01-01
A Metropolis Monte Carlo study of the dealloying mechanism leading to the formation of nanoporous gold is presented. A simple lattice-gas model for gold, silver and acid particles, vacancies and products of chemical reactions is adopted. The influence of temperature, concentration and lattice defect
Bayesian Monte Carlo Method for Nuclear Data Evaluation
Koning, A.J., E-mail: koning@nrg.eu
2015-01-15
A Bayesian Monte Carlo method is outlined which allows a systematic evaluation of nuclear reactions using TALYS. The result will be either an EXFOR-weighted covariance matrix or a collection of random files, each accompanied by an experiment-based weight.
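A toy sketch of how a collection of random files with experiment-based weights can be turned into a weighted mean and covariance matrix, under the assumption of chi-square-based weights; the parameters and chi-square values below are synthetic, not TALYS/EXFOR output, and the weighting rule is only one plausible choice.

    # Weighted mean and covariance from randomly sampled parameter files (illustrative).
    import numpy as np

    rng = np.random.default_rng(2)
    params = rng.normal(size=(500, 3))               # 500 random files, 3 model parameters
    chi2 = np.sum((params - 0.3)**2, axis=1) * 5.0   # pretend goodness-of-fit to experiment

    w = np.exp(-0.5 * (chi2 - chi2.min()))           # experiment-based weights (one common choice)
    w /= w.sum()
    mean = w @ params
    d = params - mean
    cov = d.T @ (d * w[:, None])                     # weighted covariance matrix

    print(mean, cov, sep="\n")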
Monte Carlo simulations of Protein Adsorption
Sharma, Sumit; Kumar, Sanat K.; Belfort, Georges
2008-03-01
Amyloidogenic diseases, such as Alzheimer's, are caused by adsorption and aggregation of partially unfolded proteins. Adsorption of proteins is a concern in the design of biomedical devices, such as dialysis membranes. Protein adsorption is often accompanied by conformational rearrangements in protein molecules. Such conformational rearrangements are thought to affect many properties of adsorbed protein molecules such as their adhesion strength to the surface, biological activity, and aggregation tendency. It has been experimentally shown that many naturally occurring proteins, upon adsorption to hydrophobic surfaces, undergo a helix to sheet or random coil secondary structural rearrangement. However, to better understand the equilibrium structural complexities of this phenomenon, we have performed Monte Carlo (MC) simulations of adsorption of a four helix bundle, modeled as a lattice protein, and studied the adsorption behavior and equilibrium protein conformations at different temperatures and degrees of surface hydrophobicity. To study the free energy and entropic effects on adsorption, canonical ensemble MC simulations have been combined with the Weighted Histogram Analysis Method (WHAM). Conformational transitions of proteins on surfaces will be discussed as a function of surface hydrophobicity and compared to analogous bulk transitions.
Monte Carlo methods for electromagnetics
Sadiku, Matthew NO
2009-01-01
Until now, novices had to painstakingly dig through the literature to discover how to use Monte Carlo techniques for solving electromagnetic problems. Written by one of the foremost researchers in the field, Monte Carlo Methods for Electromagnetics provides a solid understanding of these methods and their applications in electromagnetic computation. Including much of his own work, the author brings together essential information from several different publications. Using a simple, clear writing style, the author begins with a historical background and review of electromagnetic theory. After addressing probability and statistics, he introduces the finite difference method as well as the fixed and floating random walk Monte Carlo methods. The text then applies the Exodus method to Laplace's and Poisson's equations and presents Monte Carlo techniques for handling Neumann problems. It also deals with whole field computation using the Markov chain, applies Monte Carlo methods to time-varying diffusion problems, and ...
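As a small illustration of the fixed random walk method mentioned above (an illustrative toy, not the book's code), the sketch estimates the potential at the centre of a grounded square whose top edge is held at 100 V by averaging the boundary values reached by lattice random walks.

    # Fixed random walk Monte Carlo estimate for Laplace's equation on a square.
    import numpy as np

    rng = np.random.default_rng(0)
    N = 20                                     # lattice with (N+1) x (N+1) nodes

    def boundary(i, j):
        return 100.0 if j == N else 0.0        # top edge at 100 V, the other edges grounded

    def walk(i, j):
        # Step randomly on the lattice until a boundary node is reached,
        # then return the potential prescribed there.
        while 0 < i < N and 0 < j < N:
            di, dj = ((1, 0), (-1, 0), (0, 1), (0, -1))[rng.integers(4)]
            i, j = i + di, j + dj
        return boundary(i, j)

    n_walks = 5_000
    phi_centre = np.mean([walk(N // 2, N // 2) for _ in range(n_walks)])
    print(phi_centre)                          # close to 25 V at the centre by symmetry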
Ensemble data assimilation in the Red Sea: sensitivity to ensemble selection and atmospheric forcing
Toye, Habib
2017-05-26
We present our efforts to build an ensemble data assimilation and forecasting system for the Red Sea. The system consists of the high-resolution Massachusetts Institute of Technology general circulation model (MITgcm) to simulate ocean circulation and of the Data Research Testbed (DART) for ensemble data assimilation. DART has been configured to integrate all members of an ensemble adjustment Kalman filter (EAKF) in parallel, based on which we adapted the ensemble operations in DART to use an invariant ensemble, i.e., an ensemble Optimal Interpolation (EnOI) algorithm. This approach requires only single forward model integration in the forecast step and therefore saves substantial computational cost. To deal with the strong seasonal variability of the Red Sea, the EnOI ensemble is then seasonally selected from a climatology of long-term model outputs. Observations of remote sensing sea surface height (SSH) and sea surface temperature (SST) are assimilated every 3 days. Real-time atmospheric fields from the National Center for Environmental Prediction (NCEP) and the European Center for Medium-Range Weather Forecasts (ECMWF) are used as forcing in different assimilation experiments. We investigate the behaviors of the EAKF and (seasonal-) EnOI and compare their performances for assimilating and forecasting the circulation of the Red Sea. We further assess the sensitivity of the assimilation system to various filtering parameters (ensemble size, inflation) and atmospheric forcing.
Ensemble data assimilation in the Red Sea: sensitivity to ensemble selection and atmospheric forcing
Toye, Habib; Zhan, Peng; Gopalakrishnan, Ganesh; Kartadikaria, Aditya R.; Huang, Huang; Knio, Omar; Hoteit, Ibrahim
2017-07-01
We present our efforts to build an ensemble data assimilation and forecasting system for the Red Sea. The system consists of the high-resolution Massachusetts Institute of Technology general circulation model (MITgcm) to simulate ocean circulation and of the Data Research Testbed (DART) for ensemble data assimilation. DART has been configured to integrate all members of an ensemble adjustment Kalman filter (EAKF) in parallel, based on which we adapted the ensemble operations in DART to use an invariant ensemble, i.e., an ensemble Optimal Interpolation (EnOI) algorithm. This approach requires only single forward model integration in the forecast step and therefore saves substantial computational cost. To deal with the strong seasonal variability of the Red Sea, the EnOI ensemble is then seasonally selected from a climatology of long-term model outputs. Observations of remote sensing sea surface height (SSH) and sea surface temperature (SST) are assimilated every 3 days. Real-time atmospheric fields from the National Center for Environmental Prediction (NCEP) and the European Center for Medium-Range Weather Forecasts (ECMWF) are used as forcing in different assimilation experiments. We investigate the behaviors of the EAKF and (seasonal-) EnOI and compare their performances for assimilating and forecasting the circulation of the Red Sea. We further assess the sensitivity of the assimilation system to various filtering parameters (ensemble size, inflation) and atmospheric forcing.
Ben Bouallègue, Zied; Heppelmann, Tobias; Theis, Susanne E.
2015-01-01
Probabilistic forecasts in the form of ensembles of scenarios are required for complex decision making processes. Ensemble forecasting systems provide such products, but the spatio-temporal structures of the forecast uncertainty are lost when statistical calibration of the ensemble forecasts is applied for each lead time and location independently. Non-parametric approaches allow the reconstruction of spatio-temporal joint probability distributions at a low computational cost. For example, the ensemble copula coupling (ECC) method consists in rebuilding the multivariate aspect of the forecast from the original ensemble forecasts. Based on the assumption of error stationarity, parametric methods aim to fully describe the forecast dependence structures. In this study, the concept of ECC is combined with past data statistics in order to account for the autocorrelation of the forecast error...
Ben Bouallègue, Zied; Heppelmann, Tobias; Theis, Susanne E.
2016-01-01
Probabilistic forecasts in the form of ensembles of scenarios are required for complex decision making processes. Ensemble forecasting systems provide such products, but the spatio-temporal structures of the forecast uncertainty are lost when statistical calibration of the ensemble forecasts is applied for each lead time and location independently. Non-parametric approaches allow the reconstruction of spatio-temporal joint probability distributions at a low computational cost. For example, the ensemble copula coupling (ECC) method rebuilds the multivariate aspect of the forecast from the original ensemble forecasts. Based on the assumption of error stationarity, parametric methods aim to fully describe the forecast dependence structures. In this study, the concept of ECC is combined with past data statistics in order to account for the autocorrelation of the forecast error. The new...
Erdmann, Thorsten; Albert, Philipp J; Schwarz, Ulrich S
2013-11-07
Non-processive molecular motors have to work together in ensembles in order to generate appreciable levels of force or movement. In skeletal muscle, for example, hundreds of myosin II molecules cooperate in thick filaments. In non-muscle cells, by contrast, small groups with few tens of non-muscle myosin II motors contribute to essential cellular processes such as transport, shape changes, or mechanosensing. Here we introduce a detailed and analytically tractable model for this important situation. Using a three-state crossbridge model for the myosin II motor cycle and exploiting the assumptions of fast power stroke kinetics and equal load sharing between motors in equivalent states, we reduce the stochastic reaction network to a one-step master equation for the binding and unbinding dynamics (parallel cluster model) and derive the rules for ensemble movement. We find that for constant external load, ensemble dynamics is strongly shaped by the catch bond character of myosin II, which leads to an increase of the fraction of bound motors under load and thus to firm attachment even for small ensembles. This adaptation to load results in a concave force-velocity relation described by a Hill relation. For external load provided by a linear spring, myosin II ensembles dynamically adjust themselves towards an isometric state with constant average position and load. The dynamics of the ensembles is now determined mainly by the distribution of motors over the different kinds of bound states. For increasing stiffness of the external spring, there is a sharp transition beyond which myosin II can no longer perform the power stroke. Slow unbinding from the pre-power-stroke state protects the ensembles against detachment.
Erdmann, Thorsten; Albert, Philipp J.; Schwarz, Ulrich S.
2013-11-01
Non-processive molecular motors have to work together in ensembles in order to generate appreciable levels of force or movement. In skeletal muscle, for example, hundreds of myosin II molecules cooperate in thick filaments. In non-muscle cells, by contrast, small groups with few tens of non-muscle myosin II motors contribute to essential cellular processes such as transport, shape changes, or mechanosensing. Here we introduce a detailed and analytically tractable model for this important situation. Using a three-state crossbridge model for the myosin II motor cycle and exploiting the assumptions of fast power stroke kinetics and equal load sharing between motors in equivalent states, we reduce the stochastic reaction network to a one-step master equation for the binding and unbinding dynamics (parallel cluster model) and derive the rules for ensemble movement. We find that for constant external load, ensemble dynamics is strongly shaped by the catch bond character of myosin II, which leads to an increase of the fraction of bound motors under load and thus to firm attachment even for small ensembles. This adaptation to load results in a concave force-velocity relation described by a Hill relation. For external load provided by a linear spring, myosin II ensembles dynamically adjust themselves towards an isometric state with constant average position and load. The dynamics of the ensembles is now determined mainly by the distribution of motors over the different kinds of bound states. For increasing stiffness of the external spring, there is a sharp transition beyond which myosin II can no longer perform the power stroke. Slow unbinding from the pre-power-stroke state protects the ensembles against detachment.
Multiscale macromolecular simulation: role of evolving ensembles.
Singharoy, A; Joshi, H; Ortoleva, P J
2012-10-22
Multiscale analysis provides an algorithm for the efficient simulation of macromolecular assemblies. This algorithm involves the coevolution of a quasiequilibrium probability density of atomic configurations and the Langevin dynamics of spatial coarse-grained variables denoted order parameters (OPs) characterizing nanoscale system features. In practice, implementation of the probability density involves the generation of constant-OP ensembles of atomic configurations. Such ensembles are used to construct thermal forces and diffusion factors that mediate the stochastic OP dynamics. Generation of all-atom ensembles at every Langevin time step is computationally expensive. Here, multiscale computation for macromolecular systems is made more efficient by a method that self-consistently folds in ensembles of all-atom configurations constructed in an earlier step (the history) of the Langevin evolution. This procedure accounts for the temporal evolution of these ensembles, accurately providing thermal forces and diffusions. It is shown that the efficiency and accuracy of the OP-based simulations are increased via the integration of this historical information. Accuracy improves with the square root of the number of historical timesteps included in the calculation. As a result, CPU usage can be decreased by a factor of 3-8 without loss of accuracy. The algorithm is implemented into our existing force-field based multiscale simulation platform and demonstrated via the structural dynamics of viral capsomers.
Metropolis Methods for Quantum Monte Carlo Simulations
Ceperley, D. M.
2003-01-01
Since its first description fifty years ago, the Metropolis Monte Carlo method has been used in a variety of different ways for the simulation of continuum quantum many-body systems. This paper will consider some of the generalizations of the Metropolis algorithm employed in quantum Monte Carlo: Variational Monte Carlo, dynamical methods for projector Monte Carlo (i.e. diffusion Monte Carlo with rejection), multilevel sampling in path integral Monte Carlo, the sampling of permutations, ...
A New Approach to Monte Carlo Simulations in Statistical Physics
Landau, David P.
2002-08-01
Monte Carlo simulations [1] have become a powerful tool for the study of diverse problems in statistical/condensed matter physics. Standard methods sample the probability distribution for the states of the system, most often in the canonical ensemble, and over the past several decades enormous improvements have been made in performance. Nonetheless, difficulties arise near phase transitions, due to critical slowing down near 2nd order transitions and to metastability near 1st order transitions, and these complications limit the applicability of the method. We shall describe a new Monte Carlo approach [2] that uses a random walk in energy space to determine the density of states directly. Once the density of states is known, all thermodynamic properties can be calculated. This approach can be extended to multi-dimensional parameter spaces and should be effective for systems with complex energy landscapes, e.g., spin glasses, protein folding models, etc. Generalizations should produce a broadly applicable optimization tool. 1. A Guide to Monte Carlo Simulations in Statistical Physics, D. P. Landau and K. Binder (Cambridge U. Press, Cambridge, 2000). 2. Fugao Wang and D. P. Landau, Phys. Rev. Lett. 86, 2050 (2001); Phys. Rev. E64, 056101-1 (2001).
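A minimal Wang-Landau-style sketch for a small 2D Ising model, illustrating the random walk in energy space described above; the lattice size, flatness criterion, and stopping threshold are illustrative choices, not those of the cited papers.

    # Wang-Landau sketch: estimate ln g(E) for an 8x8 periodic Ising model.
    import numpy as np

    rng = np.random.default_rng(5)
    L = 8
    spins = rng.choice([-1, 1], size=(L, L))

    def site_energy(s, i, j):
        return -s[i, j] * (s[(i+1) % L, j] + s[(i-1) % L, j] + s[i, (j+1) % L] + s[i, (j-1) % L])

    E = sum(site_energy(spins, i, j) for i in range(L) for j in range(L)) // 2
    energies = np.arange(-2 * L * L, 2 * L * L + 1, 4)         # allowed energy grid
    idx = {e: k for k, e in enumerate(energies)}
    log_g = np.zeros(len(energies))                             # running estimate of ln g(E)
    hist = np.zeros(len(energies))
    f = 1.0                                                     # modification factor (ln f in the usual notation)

    while f > 1e-4:
        for _ in range(10_000):
            i, j = rng.integers(L), rng.integers(L)
            dE = -2 * site_energy(spins, i, j)                  # energy change of flipping spin (i, j)
            E_new = E + dE
            # Accept with probability min(1, g(E)/g(E_new)), evaluated in log form.
            if np.log(rng.random()) < log_g[idx[E]] - log_g[idx[E_new]]:
                spins[i, j] *= -1
                E = E_new
            log_g[idx[E]] += f
            hist[idx[E]] += 1
        visited = hist[hist > 0]
        if visited.min() > 0.8 * visited.mean():                # crude flatness check
            f /= 2.0                                            # equivalent to f -> sqrt(f) multiplicatively
            hist[:] = 0.0

    print(log_g[idx[-2 * L * L]] - log_g.min())                 # relative ln g at the ground state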
Wen, Fufang; Wang, Gang
2016-01-01
Recent charge-dependent azimuthal correlation measurements in high-energy heavy-ion collisions have observed charge-separation signals perpendicular to the reaction plane, and the observations have been related to the chiral magnetic effect (CME). However, the correlation signal is contaminated with the background contributions due to the collective motion (flow) of the collision system, and it remains elusive to effectively remove the background from the correlation. We present a method study with Monte Carlo simulations and a multi-phase transport model, and develop a scheme to reveal the true CME signal via the event-shape engineering with the flow vector, $\\overrightarrow{q}$. An alternative approach using the ensemble averages of observables is also discussed.
Control and Synchronization of Neuron Ensembles
Li, Jr-Shin; Ruths, Justin
2011-01-01
Synchronization of oscillations is a phenomenon prevalent in natural, social, and engineering systems. Controlling synchronization of oscillating systems is motivated by a wide range of applications from neurological treatment of Parkinson's disease to the design of neurocomputers. In this article, we study the control of an ensemble of uncoupled neuron oscillators described by phase models. We examine controllability of such a neuron ensemble for various phase models and, furthermore, study the related optimal control problems. In particular, by employing Pontryagin's maximum principle, we analytically derive optimal controls for spiking single- and two-neuron systems, and analyze the applicability of the latter to an ensemble system. Finally, we present a robust computational method for optimal control of spiking neurons based on pseudospectral approximations. The methodology developed here is universal to the control of general nonlinear phase oscillators.
On large deviations for ensembles of distributions
Khrychev, D. A.
2013-11-01
The paper is concerned with the large deviations problem in the Freidlin-Wentzell formulation without the assumption of the uniqueness of the solution to the equation involving white noise. In other words, it is assumed that for each \\varepsilon>0 the nonempty set \\mathscr P_\\varepsilon of weak solutions is not necessarily a singleton. Analogues of a number of concepts in the theory of large deviations are introduced for the set \\{\\mathscr P_\\varepsilon,\\,\\varepsilon>0\\}, hereafter referred to as an ensemble of distributions. The ensembles of weak solutions of an n-dimensional stochastic Navier-Stokes system and stochastic wave equation with power-law nonlinearity are shown to be uniformly exponentially tight. An idempotent Wiener process in a Hilbert space and idempotent partial differential equations are defined. The accumulation points in the sense of large deviations of the ensembles in question are shown to be weak solutions of the corresponding idempotent equations. Bibliography: 14 titles.
Cavity cooling of an ensemble spin system.
Wood, Christopher J; Borneman, Troy W; Cory, David G
2014-02-07
We describe how sideband cooling techniques may be applied to large spin ensembles in magnetic resonance. Using the Tavis-Cummings model in the presence of a Rabi drive, we solve a Markovian master equation describing the joint spin-cavity dynamics to derive cooling rates as a function of ensemble size. Our calculations indicate that the coupled angular momentum subspaces of a spin ensemble containing roughly 10(11) electron spins may be polarized in a time many orders of magnitude shorter than the typical thermal relaxation time. The described techniques should permit efficient removal of entropy for spin-based quantum information processors and fast polarization of spin samples. The proposed application of a standard technique in quantum optics to magnetic resonance also serves to reinforce the connection between the two fields, which has recently begun to be explored in further detail due to the development of hybrid designs for manufacturing noise-resilient quantum devices.
Characteristic polynomials in real Ginibre ensembles
Akemann, G; Phillips, M J; Sommers, H-J
2009-01-09
We calculate the average of two characteristic polynomials for the real Ginibre ensemble of asymmetric random matrices, and its chiral counterpart. Considered as quadratic forms they determine a skew-symmetric kernel from which all complex eigenvalue correlations can be derived. Our results are obtained in a very simple fashion without going to an eigenvalue representation, and are completely new in the chiral case. They hold for Gaussian ensembles which are partly symmetric, with kernels given in terms of Hermite and Laguerre polynomials respectively, depending on an asymmetry parameter. This allows us to interpolate between the maximally asymmetric real Ginibre and the Gaussian orthogonal ensemble, as well as their chiral counterparts. (fast track communication)
Embedded random matrix ensembles in quantum physics
Kota, V K B
2014-01-01
Although used with increasing frequency in many branches of physics, random matrix ensembles are not always sufficiently specific to account for important features of the physical system at hand. One refinement which retains the basic stochastic approach but allows for such features consists in the use of embedded ensembles. The present text is an exhaustive introduction to and survey of this important field. Starting with an easy-to-read introduction to general random matrix theory, the text then develops the necessary concepts from the beginning, accompanying the reader to the frontiers of present-day research. With some notable exceptions, to date these ensembles have primarily been applied in nuclear spectroscopy. A characteristic example is the use of a random two-body interaction in the framework of the nuclear shell model. Yet, topics in atomic physics, mesoscopic physics, quantum information science and statistical mechanics of isolated finite quantum systems can also be addressed using these ensemb...
Total probabilities of ensemble runoff forecasts
Olav Skøien, Jon; Bogner, Konrad; Salamon, Peter; Smith, Paul; Pappenberger, Florian
2017-04-01
Ensemble forecasting has a long history in meteorological modelling, as an indication of the uncertainty of the forecasts. However, it is necessary to calibrate and post-process the ensembles as they often exhibit both bias and dispersion errors. Two of the most common methods for this are Bayesian Model Averaging (Raftery et al., 2005) and Ensemble Model Output Statistics (EMOS) (Gneiting et al., 2005). There are also methods for regionalizing these methods (Berrocal et al., 2007) and for incorporating the correlation between lead times (Hemri et al., 2013). Engeland and Steinsland (2014) developed a framework which can estimate post-processing parameters varying in space and time, while giving a spatially and temporally consistent output. However, their method is computationally complex for our larger number of stations, which makes it unsuitable for our purpose. Our post-processing method of the ensembles is developed in the framework of the European Flood Awareness System (EFAS - http://www.efas.eu), where we are making forecasts for the whole of Europe, based on observations from around 700 catchments. As the target is flood forecasting, we are also more interested in improving the forecast skill for high flows rather than in a good prediction of the entire flow regime. EFAS uses a combination of ensemble forecasts and deterministic forecasts from different meteorological forecasters to force a distributed hydrologic model and to compute runoff ensembles for each river pixel within the model domain. Instead of showing the mean and the variability of each forecast ensemble individually, we will now post-process all model outputs to estimate the total probability, the post-processed mean and uncertainty of all ensembles. The post-processing parameters are first calibrated for each calibration location, but we are adding a spatial penalty in the calibration process to force a spatial correlation of the parameters. The penalty takes
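For readers unfamiliar with EMOS, the sketch below fits the usual Gaussian model N(a + b*(ensemble mean), c + d*(ensemble variance)) by maximum likelihood on synthetic data; the scipy-based fit and the variable names are assumptions of this sketch, and operational systems such as the one described here typically add CRPS minimization, regionalization and the spatial penalty discussed above.

    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import norm

    def fit_emos(ens, obs):
        """Fit N(a + b*mean, c + d*var) to an (n_cases, n_members) ensemble and observations."""
        m, v = ens.mean(axis=1), ens.var(axis=1)

        def neg_log_lik(p):
            a, b, log_c, log_d = p
            mu = a + b * m
            sigma = np.sqrt(np.exp(log_c) + np.exp(log_d) * v)   # keep variance terms positive
            return -norm.logpdf(obs, loc=mu, scale=sigma).sum()

        return minimize(neg_log_lik, x0=[0.0, 1.0, 0.0, 0.0], method="Nelder-Mead").x

    # Synthetic example: a biased, under-dispersive 10-member "runoff" ensemble.
    rng = np.random.default_rng(1)
    truth = rng.gamma(shape=2.0, scale=50.0, size=500)
    ens = 1.2 * truth[:, None] + rng.normal(0.0, 10.0, size=(500, 10))
    a, b, log_c, log_d = fit_emos(ens, truth)
    print("fitted a, b, c, d:", a, b, np.exp(log_c), np.exp(log_d))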
Circular β ensembles, CMV representation, characteristic polynomials
SU ZhongGen
2009-01-01
In this note we first briefly review some recent progress in the study of the circular β ensemble on the unit circle, where β > 0 is a model parameter. In the special cases β = 1, 2 and 4, this ensemble describes the joint probability density of eigenvalues of random orthogonal, unitary and symplectic matrices, respectively. For general β, Killip and Nenciu discovered a five-diagonal sparse matrix model, the CMV representation. This representation is new even in the case β = 2, and it has become a powerful tool for studying the circular β ensemble. We then give an elegant derivation of the moment identities of characteristic polynomials via the link with orthogonal polynomials on the unit circle.
Becheva, E
2004-11-01
Elastic and inelastic proton scattering on the unstable nucleus ²²O was measured in inverse kinematics at the GANIL facility. A secondary beam of ²²O at 46.6 MeV/A with an intensity of ≈1000 pps impinged on a (CH₂)ₙ target. Recoiling protons were detected in the silicon strip array MUST. We measured the angular distributions of the ground and 2₁⁺ states of ²²O. Phenomenological and microscopic analyses of the data were performed. The phenomenological analysis, using the global potential parameterizations of Becchetti and Greenlees and CH89, yields a deformation parameter β_{p,p'} = 0.23±0.04 for ²²O, much lower than that of ²⁰O. The ratio of neutron and proton matrix elements M_n/M_p is found equal to 1.46±0.50. The microscopic analysis used densities and transition densities calculated within the HFB and QRPA models, respectively. Optical potentials were obtained through both folding and JLM procedures. A ratio M_n/M_p = 2.5±1.0 is deduced. In contrast to ²⁰O, ²²O behaves like a doubly magic nucleus, suggesting a pronounced sub-shell closure at N=14. To develop the study of direct reactions induced by radioactive beams, we have developed and built a new multi-detector, MUST II, devoted to light charged-particle detection. In this work we established the requirements for the CsI(Tl) detector stage and tested four CsI detector prototypes constructed by the SCIONIX company.
Lectures on Monte Carlo methods
Madras, Neal
2001-01-01
Monte Carlo methods form an experimental branch of mathematics that employs simulations driven by random number generators. These methods are often used when others fail, since they are much less sensitive to the "curse of dimensionality", which plagues deterministic methods in problems with a large number of variables. Monte Carlo methods are used in many fields: mathematics, statistics, physics, chemistry, finance, computer science, and biology, for instance. This book is an introduction to Monte Carlo methods for anyone who would like to use these methods to study various kinds of mathemati
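A minimal illustration of the point about dimensionality: estimating the volume of a d-dimensional unit ball by uniform sampling, where the statistical error shrinks as 1/sqrt(n) independently of d (the sample sizes and dimensions below are arbitrary).

    import numpy as np
    from math import gamma, pi

    def ball_volume_mc(d, n=200_000, seed=0):
        # Fraction of points from the cube [-1, 1]^d inside the unit ball, times the cube volume 2^d.
        rng = np.random.default_rng(seed)
        x = rng.uniform(-1.0, 1.0, size=(n, d))
        return ((x ** 2).sum(axis=1) <= 1.0).mean() * 2.0 ** d

    for d in (2, 5, 10):
        exact = pi ** (d / 2) / gamma(d / 2 + 1)
        print(d, ball_volume_mc(d), exact)
    # Note: for large d the hit rate drops sharply, which is why practical high-dimensional
    # problems combine Monte Carlo with importance sampling or Markov chain methods.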
Appraisal of jump distributions in ensemble-based sampling algorithms
Dejanic, Sanda; Scheidegger, Andreas; Rieckermann, Jörg; Albert, Carlo
2017-04-01
Sampling Bayesian posteriors of model parameters is often required for making model-based probabilistic predictions. For complex environmental models, standard Markov chain Monte Carlo (MCMC) methods are often infeasible because they require too many sequential model runs. Therefore, we focused on ensemble methods that use many Markov chains in parallel, since they can be run on modern cluster architectures. Little is known about how to choose the best-performing sampler for a given application. A poor choice can lead to an inappropriate representation of posterior knowledge. We assessed two different jump moves, the stretch and the differential evolution move, underlying, respectively, the software packages EMCEE and DREAM, which are popular in different scientific communities. For the assessment, we used analytical posteriors with features as they often occur in real posteriors, namely high dimensionality, strong non-linear correlations or multimodality. For posteriors with non-linear features, standard convergence diagnostics based on sample means can be insufficient. Therefore, we resorted to an entropy-based convergence measure. We assessed the samplers by means of their convergence speed, robustness and effective sample sizes. For posteriors with strongly non-linear features, we found that the stretch move outperforms the differential evolution move, w.r.t. all three aspects.
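The stretch move assessed here (the jump underlying the EMCEE package, due to Goodman and Weare) is compact enough to sketch directly; the Gaussian target, the ensemble size and the serial update order are illustrative choices, and EMCEE itself updates the walkers in two parallel half-ensembles.

    import numpy as np

    rng = np.random.default_rng(0)

    def log_prob(x):
        return -0.5 * np.sum(x ** 2)        # illustrative target: standard normal

    def stretch_update(walkers, j, a=2.0):
        """One stretch-move update of walker j against a randomly chosen other walker."""
        n, d = walkers.shape
        k = rng.choice([i for i in range(n) if i != j])
        z = (1.0 + (a - 1.0) * rng.random()) ** 2 / a      # z ~ g(z) proportional to 1/sqrt(z) on [1/a, a]
        proposal = walkers[k] + z * (walkers[j] - walkers[k])
        log_accept = (d - 1) * np.log(z) + log_prob(proposal) - log_prob(walkers[j])
        if np.log(rng.random()) < log_accept:
            walkers[j] = proposal

    walkers = rng.normal(size=(20, 5))      # 20 walkers in 5 dimensions
    for _ in range(2000):
        for j in range(walkers.shape[0]):
            stretch_update(walkers, j)
    print(walkers.mean(axis=0))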
The effect of sampling noise in ensemble-based Kalman filters
Sacher, William
Ensemble-based Kalman filters have drawn a lot of attention in the atmospheric and ocean scientific community because of their potential to be used as a data assimilation tool for numerical prediction in a strongly nonlinear context at an affordable cost. However, many studies have noted practical problems in their implementation. Indeed, being Monte Carlo methods, they estimate the useful parameters from a sample of limited size of independent realizations of the process. As a consequence, the unavoidable sampling noise impacts the quality of the analysis. An idealized perfect model context is considered in which the analytical expression for the analysis accuracy and reliability as a function of the ensemble size is established, from a second-order moment perspective. It is proved that one can analytically explain the general tendency for ensemble-based Kalman filters to underestimate, on average, the analysis variance and therefore the likelihood that these filters diverge. The performance of alternative methods designed to reduce or eliminate sampling error effects, such as the double ensemble Kalman filter or covariance inflation, is also analytically explored. For methods using perturbed observations, it is shown that covariance inflation is the easiest and least expensive method to obtain the most accurate and reliable analysis. These analytical results agreed well with means over a large number of experiments using a perfect, low-resolution, quasi-geostrophic barotropic model, in a series of observation system simulation experiments of single analysis cycles as well as in a simulated forecast system. In one-analysis-cycle experiments with rank histograms, non-perturbed-observation methods show a lack of reliability regardless of the number of members. For small ensemble sizes, sampling error effects are dominant but have a smaller impact than in the perturbed observation method, making non-perturbed-observation method filters much less subject to
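To make the quantities discussed above concrete, here is a minimal stochastic (perturbed-observation) EnKF analysis step with multiplicative covariance inflation; the linear observation operator, the inflation factor and the toy sizes are placeholders, not the setup of the cited study.

    import numpy as np

    def enkf_analysis(X, y, H, R, rho=1.05, seed=0):
        """Perturbed-observation EnKF analysis step.
        X : (n_state, n_ens) forecast ensemble      y : (n_obs,) observations
        H : (n_obs, n_state) observation operator   R : (n_obs, n_obs) obs-error covariance
        rho : multiplicative inflation applied to forecast anomalies (assumed value)."""
        rng = np.random.default_rng(seed)
        n_ens = X.shape[1]
        x_mean = X.mean(axis=1, keepdims=True)
        A = rho * (X - x_mean)                              # inflated anomalies
        X = x_mean + A
        Pf = A @ A.T / (n_ens - 1)                          # sample forecast covariance
        K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)      # Kalman gain
        # Perturb the observations so the analysis ensemble keeps the correct spread on average.
        Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=n_ens).T
        return X + K @ (Y - H @ X)

    rng = np.random.default_rng(1)
    X = rng.normal(size=(3, 20))                            # 3-variable state, 20 members
    H = np.array([[1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
    Xa = enkf_analysis(X, y=np.array([0.5, -0.2]), H=H, R=0.1 * np.eye(2))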
Monte Carlo Radiation Hydrodynamics: Methods, Tests and Application to Supernova Type Ia Ejecta
Noebauer, U M; Kromer, M; Röpke, F K; Hillebrandt, W
2012-01-01
In astrophysical systems, radiation-matter interactions are important in transferring energy and momentum between the radiation field and the surrounding material. This coupling often makes it necessary to consider the role of radiation when modelling the dynamics of astrophysical fluids. During the last few years, there have been rapid developments in the use of Monte Carlo methods for numerical radiative transfer simulations. Here, we present an approach to radiation hydrodynamics that is based on coupling Monte Carlo radiative transfer techniques with finite-volume hydrodynamical methods in an operator-split manner. In particular, we adopt an indivisible packet formalism to discretize the radiation field into an ensemble of Monte Carlo packets and employ volume-based estimators to reconstruct the radiation field characteristics. In this paper the numerical tools of this method are presented and their accuracy is verified in a series of test calculations. Finally, as a practical example, we use our approach...
P.Orea
2003-01-01
We have performed Monte Carlo simulations in the canonical ensemble of a hard-sphere fluid adsorbed in microporous media. The pressure of the adsorbed fluid is calculated by using an original procedure that includes the calculations of the pressure tensor components during the simulation. In order to confirm the equivalence of bulk and adsorbed fluid pressures, we have exploited the mechanical condition of equilibrium and performed additional canonical Monte Carlo simulations in a super system "bulk fluid + adsorbed fluid". When the configuration of a model porous media permits each of its particles to be in contact with adsorbed fluid particles, we found that these pressures are equal. Unlike the grand canonical Monte Carlo method, the proposed calculation approach can be used efficiently to obtain adsorption isotherms over a wide range of fluid densities and porosities of adsorbent.
Ensemble Enabled Weighted PageRank
Luo, Dongsheng; Hu, Renjun; Duan, Liang; Ma, Shuai
2016-01-01
This paper describes our solution for WSDM Cup 2016. Ranking the query independent importance of scholarly articles is a critical and challenging task, due to the heterogeneity and dynamism of entities involved. Our approach is called Ensemble enabled Weighted PageRank (EWPR). To do this, we first propose Time-Weighted PageRank that extends PageRank by introducing a time decaying factor. We then develop an ensemble method to assemble the authorities of the heterogeneous entities involved in scholarly articles. We finally propose to use external data sources to further improve the ranking accuracy. Our experimental study shows that our EWPR is a good choice for ranking scholarly articles.
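The abstract does not spell out the decay used by Time-Weighted PageRank, so the sketch below shows one plausible reading, assumed rather than taken from the paper: a PageRank power iteration in which each citation edge is down-weighted exponentially with the age of the citing paper; the toy graph, the decay constant and the handling of dangling nodes are illustrative.

    import numpy as np

    def time_weighted_pagerank(edges, years, damping=0.85, tau=5.0, n_iter=100):
        """edges: (citing, cited) pairs; years[i]: publication year of node i.
        Edge weight decays exponentially with the age of the citing paper (assumed form)."""
        n = len(years)
        now = max(years)
        W = np.zeros((n, n))
        for src, dst in edges:
            W[dst, src] += np.exp(-(now - years[src]) / tau)
        col_sums = W.sum(axis=0)
        col_sums[col_sums == 0] = 1.0        # dangling nodes left unhandled for brevity
        W = W / col_sums
        r = np.full(n, 1.0 / n)
        for _ in range(n_iter):
            r = (1.0 - damping) / n + damping * (W @ r)
        return r / r.sum()

    # Toy citation graph: node 3 is the newest paper and cites nodes 0 and 2.
    print(time_weighted_pagerank(edges=[(1, 0), (2, 0), (3, 0), (3, 2)],
                                 years=[2008, 2010, 2013, 2015]))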
Ensemble Eclipse: A Process for Prefab Development Environment for the Ensemble Project
Wallick, Michael N.; Mittman, David S.; Shams, Khawaja, S.; Bachmann, Andrew G.; Ludowise, Melissa
2013-01-01
This software simplifies the process of having to set up an Eclipse IDE programming environment for the members of the cross-NASA center project, Ensemble. It achieves this by assembling all the necessary add-ons and custom tools/preferences. This software is unique in that it allows developers in the Ensemble Project (approximately 20 to 40 at any time) across multiple NASA centers to set up a development environment almost instantly and work on Ensemble software. The software automatically has the source code repositories and other vital information and settings included. The Eclipse IDE is an open-source development framework. The NASA (Ensemble-specific) version of the software includes Ensemble-specific plug-ins as well as settings for the Ensemble project. This software saves developers the time and hassle of setting up a programming environment, making sure that everything is set up in the correct manner for Ensemble development. Existing software (i.e., standard Eclipse) requires an intensive setup process that is both time-consuming and error prone. This software is built once by a single user and tested, allowing other developers to simply download and use the software
pyNSMC: A Python Module for Null-Space Monte Carlo Uncertainty Analysis
White, J.; Brakefield, L. K.
2015-12-01
The null-space Monte Carlo technique is a non-linear uncertainty analysis technique that is well suited to high-dimensional inverse problems. While the technique is powerful, the existing workflow for completing null-space Monte Carlo is cumbersome, requiring the use of multiple command-line utilities, several sets of intermediate files and even a text editor. pyNSMC is an open-source python module that automates the workflow of null-space Monte Carlo uncertainty analyses. The module is fully compatible with the PEST and PEST++ software suites and leverages existing functionality of pyEMU, a python framework for linear-based uncertainty analyses. pyNSMC greatly simplifies the existing workflow for null-space Monte Carlo by taking advantage of object-oriented design facilities in python. The core of pyNSMC is the ensemble class, which draws and stores realized random vectors and also provides functionality for exporting and visualizing results. By relieving users of the tedium associated with file handling and command-line utility execution, pyNSMC instead focuses the user on the important steps and assumptions of null-space Monte Carlo analysis. Furthermore, pyNSMC facilitates learning through flow charts and results visualization, which are available at many points in the algorithm. The ease of use of the pyNSMC workflow is compared to the existing workflow for null-space Monte Carlo for a synthetic groundwater model with hundreds of estimable parameters.
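For orientation, the core null-space Monte Carlo draw can be sketched with generic numpy (this is not the pyNSMC or pyEMU API): random parameter perturbations are projected onto the approximate null space of the Jacobian so that, to first order, they do not degrade the calibrated fit; each realization would then normally be briefly re-calibrated, which is the step the PEST workflow automates.

    import numpy as np

    def null_space_draws(J, p_cal, n_real, n_sv, sigma=1.0, seed=0):
        """J     : (n_obs, n_par) Jacobian at the calibrated parameter set
           p_cal : (n_par,) calibrated parameters
           n_sv  : number of singular values retained as the "solution space"
           Returns n_real realizations differing from p_cal only within the null space."""
        rng = np.random.default_rng(seed)
        _, _, Vt = np.linalg.svd(J, full_matrices=True)
        V2 = Vt[n_sv:].T                                    # basis of the approximate null space
        draws = []
        for _ in range(n_real):
            dp = rng.normal(scale=sigma, size=p_cal.size)   # stochastic parameter perturbation
            draws.append(p_cal + V2 @ (V2.T @ dp))          # keep only the null-space component
        return np.array(draws)

    # Toy problem: 5 observations informing only some combinations of 8 parameters.
    J = np.random.default_rng(1).normal(size=(5, 8))
    realizations = null_space_draws(J, p_cal=np.zeros(8), n_real=100, n_sv=5)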
Kinetic Monte Carlo Studies of Hydrogen Abstraction from Graphite
Cuppen, H M
2008-01-01
We present Monte Carlo simulations on Eley-Rideal abstraction reactions of atomic hydrogen chemisorbed on graphite. The results are obtained via a hybrid approach where energy barriers derived from density functional theory calculations are used as input to Monte Carlo simulations. By comparing with experimental data, we discriminate between contributions from different Eley-Rideal mechanisms. A combination of two different mechanisms yields good quantitative and qualitative agreement between the experimentally derived and the simulated Eley-Rideal abstraction cross sections and surface configurations. These two mechanisms include a direct Eley-Rideal reaction with fast diffusing H atoms and a dimer mediated Eley-Rideal mechanism with increased cross section at low coverage. Such a dimer mediated Eley-Rideal mechanism has not previously been proposed and serves as an alternative explanation to the steering behavior often given as the cause of the coverage dependence observed in Eley-Rideal reaction cross sect...
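The hybrid strategy described above ultimately feeds barrier-derived rates into a standard rejection-free kinetic Monte Carlo loop; a generic sketch follows, with made-up barriers and attempt frequency rather than the graphite values of the paper.

    import numpy as np

    def kmc_step(rates, rng):
        """One rejection-free kinetic Monte Carlo step: pick an event with probability
        proportional to its rate and advance the clock by an exponential waiting time."""
        total = rates.sum()
        k = rng.choice(len(rates), p=rates / total)
        dt = -np.log(rng.random()) / total
        return k, dt

    kT = 8.617e-5 * 300.0                                   # eV at 300 K
    barriers = np.array([0.20, 0.45, 0.60])                 # assumed barriers (eV), e.g. diffusion,
                                                            # abstraction, desorption
    rates = 1e13 * np.exp(-barriers / kT)                   # assumed attempt frequency of 1e13 / s

    rng = np.random.default_rng(0)
    t, counts = 0.0, np.zeros(len(barriers), dtype=int)
    for _ in range(10_000):
        k, dt = kmc_step(rates, rng)
        counts[k] += 1
        t += dt
    print("simulated time (s):", t, "event counts:", counts)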
Total probabilities of ensemble runoff forecasts
Olav Skøien, Jon; Bogner, Konrad; Salamon, Peter; Smith, Paul; Pappenberger, Florian
2016-04-01
Ensemble forecasting has for a long time been used as a method in meteorological modelling to indicate the uncertainty of the forecasts. However, as the ensembles often exhibit both bias and dispersion errors, it is necessary to calibrate and post-process them. Two of the most common methods for this are Bayesian Model Averaging (Raftery et al., 2005) and Ensemble Model Output Statistics (EMOS) (Gneiting et al., 2005). There are also methods for regionalizing these methods (Berrocal et al., 2007) and for incorporating the correlation between lead times (Hemri et al., 2013). Engeland and Steinsland (2014) developed a framework which can estimate post-processing parameters that vary in space and time while still giving a spatially and temporally consistent output. However, their method is computationally complex for our larger number of stations, and cannot directly be regionalized in the way we would like, so we suggest a different path below. The target of our work is to create a mean forecast with uncertainty bounds for a large number of locations in the framework of the European Flood Awareness System (EFAS - http://www.efas.eu). We are therefore more interested in improving the forecast skill for high flows rather than the forecast skill of lower runoff levels. EFAS uses a combination of ensemble forecasts and deterministic forecasts from different forecasters to force a distributed hydrologic model and to compute runoff ensembles for each river pixel within the model domain. Instead of showing the mean and the variability of each forecast ensemble individually, we will now post-process all model outputs to find a total probability, the post-processed mean and uncertainty of all ensembles. The post-processing parameters are first calibrated for each calibration location, while ensuring that they have some spatial correlation by adding a spatial penalty in the calibration process. This can in some cases have a slight negative
Generalized ensemble method applied to study systems with strong first order transitions
Małolepsza, E.; Kim, J.; Keyes, T.
2015-09-01
At strong first-order phase transitions, the entropy versus energy or, at constant pressure, enthalpy, exhibits convex behavior, and the statistical temperature curve correspondingly exhibits an S-loop or back-bending. In the canonical and isothermal-isobaric ensembles, with temperature as the control variable, the probability density functions become bimodal with peaks localized outside of the S-loop region. Inside, states are unstable, and as a result simulation of equilibrium phase coexistence becomes impossible. To overcome this problem, a method was proposed by Kim, Keyes and Straub [1], where optimally designed generalized ensemble sampling was combined with replica exchange, and denoted generalized replica exchange method (gREM). This new technique uses parametrized effective sampling weights that lead to a unimodal energy distribution, transforming unstable states into stable ones. In the present study, the gREM, originally developed as a Monte Carlo algorithm, was implemented to work with molecular dynamics in an isobaric ensemble and coded into LAMMPS, a highly optimized open source molecular simulation package. The method is illustrated in a study of the very strong solid/liquid transition in water.
Monte Carlo integration on GPU
Kanzaki, J.
2010-01-01
We use a graphics processing unit (GPU) for fast computations of Monte Carlo integrations. Two widely used Monte Carlo integration programs, VEGAS and BASES, are parallelized on GPU. By using $W^{+}$ plus multi-gluon production processes at LHC, we test integrated cross sections and execution time for programs in FORTRAN and C on CPU and those on GPU. Integrated results agree with each other within statistical errors. Programs on GPU run about 50 times faster than those in C...
An Ensemble Approach for Expanding Queries
2012-11-01
vincristine; thalidomide; painful; cisplatin; oxaliplatin; Charcot-Marie-Tooth disease; drugs; neuropathy. Ensemble expansion: child of, asthma, kids ... system disorders; peripheral nerve diseases; peripheral neuropathies; peripheral nervous system disorder; peripheral nervous system disease ... peripheral nerve disease; peripheral nerve disorders, peripheral nerve disorder. Relation expansion: offspring, child of, of child, child find
NYYD Ensemble and Riho Sibul / Anneli Remme
Remme, Anneli, 1968-
2001-01-01
Gavin Bryars' work "Jesus' Blood Never Failed Me Yet" performed by the NYYD Ensemble and Riho Sibul on 27 December at the Pauluse Church in Tartu and on 28 December at the Rootsi-Mihkli Church in Tallinn. With the participation of the Tartu University Chamber Choir (in Tartu) and the chamber choir Voces Musicales (in Tallinn). Artistic director Olari Elts.
A method for ensemble wildland fire simulation
Mark A. Finney; Isaac C. Grenfell; Charles W. McHugh; Robert C. Seli; Diane Trethewey; Richard D. Stratton; Stuart Brittain
2011-01-01
An ensemble simulation system that accounts for uncertainty in long-range weather conditions and two-dimensional wildland fire spread is described. Fuel moisture is expressed based on the energy release component, a US fire danger rating index, and its variation throughout the fire season is modeled using time series analysis of historical weather data. This analysis...
Eigenstate Gibbs ensemble in integrable quantum systems
Nandy, Sourav; Sen, Arnab; Das, Arnab; Dhar, Abhishek
2016-12-01
The eigenstate thermalization hypothesis conjectures that for a thermodynamically large system in one of its energy eigenstates, the reduced density matrix describing any finite subsystem is determined solely by a set of relevant conserved quantities. In a chaotic quantum system, only the energy is expected to play that role and hence eigenstates appear locally thermal. Integrable systems, on the other hand, possess an extensive number of such conserved quantities and therefore the reduced density matrix requires specification of all the corresponding parameters (generalized Gibbs ensemble). However, here we show by unbiased statistical sampling of the individual eigenstates with a given finite energy density that the local description of an overwhelming majority of these states of even such an integrable system is actually Gibbs-like, i.e., requires only the energy density of the eigenstate. Rare eigenstates that cannot be represented by the Gibbs ensemble can also be sampled efficiently by our method and their local properties are then shown to be described by appropriately truncated generalized Gibbs ensembles. We further show that the presence of these rare eigenstates differentiates the model from the chaotic case and leads to the system being described by a generalized Gibbs ensemble at long time under a unitary dynamics following a sudden quench, even when the initial state is a typical (Gibbs-like) eigenstate of the prequench Hamiltonian.
Locally Accessible Information from Multipartite Ensembles
SONG Wei
2009-01-01
We present a universal Holevo-like upper bound on the locally accessible information for arbitrary multipartite ensembles. This bound allows us to analyze the indistinguishability of a set of orthogonal states under local operations and classical communication. We also derive the upper bound for the capacity of distributed dense coding with multipartite senders and multipartite receivers.
Canonical Ensemble Model for Black Hole Radiation
Jingyi Zhang
2014-09-01
In this paper, a canonical ensemble model for the black hole quantum tunnelling radiation is introduced. In this model the probability distribution function corresponding to the emission shell is calculated to second order. The formula of pressure and internal energy of the thermal system is modified, and the fundamental equation of thermodynamics is also discussed.
A Hierarchical Bayes Ensemble Kalman Filter
Tsyrulnikov, Michael; Rakitko, Alexander
2017-01-01
A new ensemble filter that allows for the uncertainty in the prior distribution is proposed and tested. The filter relies on the conditional Gaussian distribution of the state given the model-error and predictability-error covariance matrices. The latter are treated as random matrices and updated in a hierarchical Bayes scheme along with the state. The (hyper)prior distribution of the covariance matrices is assumed to be inverse Wishart. The new Hierarchical Bayes Ensemble Filter (HBEF) assimilates ensemble members as generalized observations and allows ordinary observations to influence the covariances. The actual probability distribution of the ensemble members is allowed to be different from the true one. An approximation that leads to a practicable analysis algorithm is proposed. The new filter is studied in numerical experiments with a doubly stochastic one-variable model of "truth". The model permits the assessment of the variance of the truth and the true filtering error variance at each time instance. The HBEF is shown to outperform the EnKF and the HEnKF by Myrseth and Omre (2010) in a wide range of filtering regimes in terms of performance of its primary and secondary filters.
Statistical theory of hierarchical avalanche ensemble
Olemskoi, Alexander I.
1999-01-01
The statistical ensemble of avalanche intensities is considered to investigate diffusion in ultrametric space of hierarchically subordinated avalanches. The stationary intensity distribution and the steady-state current are obtained. The critical avalanche intensity needed to initiate the global avalanche formation is calculated depending on noise intensity. The large time asymptotic for the probability of the global avalanche appearance is derived.
Marking up lattice QCD configurations and ensembles
Coddington, P; Maynard, C M; Pleiter, D; Yoshié, T
2007-01-01
QCDml is an XML-based markup language designed for sharing QCD configurations and ensembles world-wide via the International Lattice Data Grid (ILDG). Based on the latest release, we present key ingredients of the QCDml in order to provide some starting points for colleagues in this community to markup valuable configurations and submit them to the ILDG.
A Theoretical Analysis of Why Hybrid Ensembles Work
Kuo-Wei Hsu
2017-01-01
Inspired by the group decision making process, ensembles or combinations of classifiers have been found favorable in a wide variety of application domains. Some researchers propose to use the mixture of two different types of classification algorithms to create a hybrid ensemble. Why does such an ensemble work? The question remains. Following the concept of diversity, which is one of the fundamental elements of the success of ensembles, we conduct a theoretical analysis of why hybrid ensembles work, connecting the use of different algorithms to accuracy gain. We also conduct experiments on the classification performance of hybrid ensembles of classifiers created by decision tree and naïve Bayes classification algorithms, each of which is a top data mining algorithm and often used to create non-hybrid ensembles. Therefore, through this paper, we provide a complement to the theoretical foundation of creating and using hybrid ensembles.
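A minimal way to reproduce the kind of hybrid ensemble studied here with scikit-learn is to combine a decision tree and naive Bayes by soft voting; the dataset, the hyperparameters and the two-member size are placeholders (the paper considers larger mixtures of the two algorithm families).

    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import VotingClassifier
    from sklearn.model_selection import cross_val_score
    from sklearn.naive_bayes import GaussianNB
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_breast_cancer(return_X_y=True)

    # Hybrid ensemble: two different algorithm families combined by soft (probability) voting.
    hybrid = VotingClassifier(
        estimators=[("dt", DecisionTreeClassifier(max_depth=5, random_state=0)),
                    ("nb", GaussianNB())],
        voting="soft",
    )

    for name, clf in [("decision tree", DecisionTreeClassifier(max_depth=5, random_state=0)),
                      ("naive Bayes", GaussianNB()),
                      ("hybrid ensemble", hybrid)]:
        print(name, cross_val_score(clf, X, y, cv=5).mean())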
Global Ensemble Forecast System (GEFS) [2.5 Deg.
National Oceanic and Atmospheric Administration, Department of Commerce — The Global Ensemble Forecast System (GEFS) is a weather forecast model made up of 21 separate forecasts, or ensemble members. The National Centers for Environmental...
An educational model for ensemble streamflow simulation and uncertainty analysis
AghaKouchak, A; Nakhjiri, N; Habib, E
2013-01-01
...) are interconnected. The educational toolbox includes a MATLAB Graphical User Interface (GUI) and an ensemble simulation scheme that can be used for teaching uncertainty analysis, parameter estimation, ensemble simulation and model sensitivity...
Ensemble-based Kalman Filters in Strongly Nonlinear Dynamics
Zhaoxia PU; Joshua HACKER
2009-01-01
This study examines the effectiveness of ensemble Kalman filters in data assimilation with the strongly nonlinear dynamics of the Lorenz-63 model, and in particular their use in predicting the regime transition that occurs when the model jumps from one basin of attraction to the other. Four configurations of the ensemble-based Kalman filtering data assimilation techniques, including the ensemble Kalman filter, ensemble adjustment Kalman filter, ensemble square root filter and ensemble transform Kalman filter, are evaluated with their ability in predicting the regime transition (also called phase transition) and also are compared in terms of their sensitivity to both observational and sampling errors. The sensitivity of each ensemble-based filter to the size of the ensemble is also examined.
Space Applications for Ensemble Detection and Analysis Project
National Aeronautics and Space Administration — Ensemble Detection is both a measurement technique and analysis tool. Like a prism that separates light into spectral bands, an ensemble detector mixes a signal with...
Ensemble-based forecasting at Horns Rev: Ensemble conversion and kernel dressing
Pinson, Pierre; Madsen, Henrik
The paper concentrates on the test case of the Horns Rev wind farm over a period of approximately one year, in order to describe, apply and discuss a complete ensemble-based forecasting methodology yielding probabilistic forecasts, the resolution of which may be maximized by using meteorological ensemble predictions as input. In a first stage, ensemble forecasts of meteorological variables are converted to power through a suitable power curve model. The relevance and benefits of employing a newly developed orthogonal fitting method for the power curve model over the traditional least-squares one are discussed. The converted ensembles are subsequently dressed with kernels to obtain predictive distributions. Such a methodology has the benefit of yielding predictive distributions that are of increased reliability (in a probabilistic sense) in comparison with the raw ensemble forecasts, while taking advantage of their high resolution.
Theory and Practice of Phase-aware Ensemble Forecasting
Schulte, J. A.; Georgas, N.
2016-12-01
The timing of events represents a source of uncertainty in ensemble forecasting that can produce misleading ensemble statistics. A general theory is presented to overcome drawbacks of traditional ensemble forecasting statistics that perform poorly in the presence of timing disagreements among ensemble members. It was shown, in particular, that ensemble forecasts containing substantial uncertainty in timing can produce non-trivial higher-order statistical moments, rendering the ensemble mean inappropriate as a best available estimate of the future state of the forecast parameter in question. A set of theoretical experiments showed that the existence of large timing differences among ensemble members can produce negative ensemble skewness even when the ensemble members are sinusoids whose amplitudes are drawn from a normal distribution: Consistently, the ensemble mean will tend to fall on the left tail of the normal distribution representing the originally sampled amplitudes, rather than at the mean or median. To remedy the left-tail placement problem of the ensemble mean, a new generally applicable ensemble statistic - the phase-aware ensemble mean - is proposed that is more robust against ensemble skewness resulting from timing spread. The computation of the phase-aware mean involves the transformation of all ensemble members to wavelet space and the subsequent inverse wavelet transformation of the product of the ensemble mean wavelet phase and modulus back to the time domain. The new methods were applied to storm surge reforecasts for Hurricane Irene and Sandy at 8 stations located around the New York City metropolitan area. The phase-aware ensemble mean was found to perform better at detecting the magnitude of events compared to the traditional ensemble mean, consistent with the results from theoretical experiments. The ensemble mean, moreover, was found to be consistently located on the left tail of distributions representing future peak storm surge outcomes. A
Quantum canonical ensemble: A projection operator approach
Magnus, Wim; Lemmens, Lucien; Brosens, Fons
2017-09-01
Knowing the exact number of particles N, and taking this knowledge into account, the quantum canonical ensemble imposes a constraint on the occupation number operators. The constraint particularly hampers the systematic calculation of the partition function and any relevant thermodynamic expectation value for arbitrary but fixed N. On the other hand, fixing only the average number of particles, one may remove the above constraint and simply factorize the traces in Fock space into traces over single-particle states. As is well known, that would be the strategy of the grand-canonical ensemble which, however, comes with an additional Lagrange multiplier to impose the average number of particles. The appearance of this multiplier can be avoided by invoking a projection operator that enables a constraint-free computation of the partition function and its derived quantities in the canonical ensemble, at the price of an angular or contour integration. Introduced in the recent past to handle various issues related to particle-number projected statistics, the projection operator approach proves beneficial to a wide variety of problems in condensed matter physics for which the canonical ensemble offers a natural and appropriate environment. In this light, we present a systematic treatment of the canonical ensemble that embeds the projection operator into the formalism of second quantization while explicitly fixing N, the very number of particles rather than the average. Being applicable to both bosonic and fermionic systems in arbitrary dimensions, transparent integral representations are provided for the partition function Z_N and the Helmholtz free energy F_N as well as for two- and four-point correlation functions. The chemical potential is not a Lagrange multiplier regulating the average particle number but can be extracted from F_{N+1} - F_N, as illustrated for a two-dimensional fermion gas.
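For concreteness, one standard particle-number projection of this kind, written here for non-interacting fermions with single-particle energies \varepsilon_k (the paper's integral representations are more general), reads

    Z_N = \frac{1}{2\pi} \int_0^{2\pi} d\phi \; e^{-iN\phi} \prod_k \left( 1 + e^{i\phi} e^{-\beta \varepsilon_k} \right),
    \qquad
    F_N = -\frac{1}{\beta} \ln Z_N ,

so that the angular integration replaces the grand-canonical Lagrange multiplier, and the chemical potential can afterwards be read off from F_{N+1} - F_N, as stated in the abstract.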
Flood Forecasting Based on TIGGE Precipitation Ensemble Forecast
Jinyin Ye; Yuehong Shao; Zhijia Li
2016-01-01
TIGGE (THORPEX International Grand Global Ensemble) was a major part of THORPEX (The Observing System Research and Predictability Experiment). It integrates ensemble precipitation products from all the major forecast centers in the world and provides a systematic evaluation of the multimodel ensemble prediction system. Development of a meteorologic-hydrologic coupled flood forecasting model and an early warning model based on the TIGGE precipitation ensemble forecast can provide flood probability fo...
Ensembles of signal transduction models using Pareto Optimal Ensemble Techniques (POETs).
Song, Sang Ok; Chakrabarti, Anirikh; Varner, Jeffrey D
2010-07-01
Mathematical modeling of complex gene expression programs is an emerging tool for understanding disease mechanisms. However, identification of large models sometimes requires training using qualitative, conflicting or even contradictory data sets. One strategy to address this challenge is to estimate experimentally constrained model ensembles using multiobjective optimization. In this study, we used Pareto Optimal Ensemble Techniques (POETs) to identify a family of proof-of-concept signal transduction models. POETs integrate Simulated Annealing (SA) with Pareto optimality to identify models near the optimal tradeoff surface between competing training objectives. We modeled a prototypical-signaling network using mass-action kinetics within an ordinary differential equation (ODE) framework (64 ODEs in total). The true model was used to generate synthetic immunoblots from which the POET algorithm identified the 117 unknown model parameters. POET generated an ensemble of signaling models, which collectively exhibited population-like behavior. For example, scaled gene expression levels were approximately normally distributed over the ensemble following the addition of extracellular ligand. Also, the ensemble recovered robust and fragile features of the true model, despite significant parameter uncertainty. Taken together, these results suggest that experimentally constrained model ensembles could capture qualitatively important network features without exact parameter information.
Data assimilation in integrated hydrological modeling using ensemble Kalman filtering
Rasmussen, Jørn; Madsen, H.; Jensen, Karsten Høgh
2015-01-01
Groundwater head and stream discharge is assimilated using the ensemble transform Kalman filter in an integrated hydrological model with the aim of studying the relationship between the filter performance and the ensemble size. In an attempt to reduce the required number of ensemble members...
Exploring and Listening to Chinese Classical Ensembles in General Music
Zhang, Wenzhuo
2017-01-01
Music diversity is valued in theory, but the extent to which it is efficiently presented in music class remains limited. Within this article, I aim to bridge this gap by introducing four genres of Chinese classical ensembles--Qin and Xiao duets, Jiang Nan bamboo and silk ensembles, Cantonese ensembles, and contemporary Chinese orchestras--into the…
An Efficient Approach to Ab Initio Monte Carlo Simulation
Leiding, Jeff
2013-01-01
We present a Nested Markov Chain Monte Carlo (NMC) scheme for building equilibrium averages based on accurate potentials such as density functional theory. Metropolis sampling of a reference system, defined by an inexpensive but approximate potential, is used to substantially decorrelate configurations at which the potential of interest is evaluated, thereby dramatically reducing the number needed to build ensemble averages at a given level of precision. The efficiency of this procedure is maximized on-the-fly through variation of the reference system thermodynamic state (characterized here by its inverse temperature \\beta^0), which is otherwise unconstrained. Local density approximation (LDA) results are presented for shocked states in argon at pressures from 4 to 60 GPa. Depending on the quality of the reference potential, the acceptance probability is enhanced by factors of 1.2-28 relative to unoptimized NMC sampling, and the procedure's efficiency is found to be competitive with that of standard ab initio...
Replica exchange Monte Carlo applied to hard spheres.
Odriozola, Gerardo
2009-10-14
In this work a replica exchange Monte Carlo scheme which considers an extended isobaric-isothermal ensemble with respect to pressure is applied to study hard spheres (HSs). The idea behind the proposal is to expand volume instead of increasing temperature to let crowded systems characterized by dominant repulsive interactions unblock, and so to produce sampling from disjoint configurations. The method produces, in a single parallel run, the complete HS equation of state. Thus, the first-order fluid-solid transition is captured. The obtained results agree well with previous calculations. This approach seems particularly useful to treat purely entropy-driven systems such as hard body and nonadditive hard mixtures, where temperature plays a trivial role.
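The swap criterion implied by expanding the isobaric-isothermal ensemble in pressure (rather than temperature) has the standard form below; for hard spheres the configurational energy is zero for every non-overlapping configuration, so only the pressure-volume term survives. This is the textbook expression and may differ in detail from the paper's implementation.

    P_{\mathrm{acc}}(m \leftrightarrow n) = \min\Big\{ 1,\; \exp\big[ \beta \,(P_m - P_n)(V_m - V_n) \big] \Big\},

where replicas m and n run at pressures P_m and P_n and currently have volumes V_m and V_n.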
Monte Carlo simulations of systems with complex energy landscapes
Wüst, T.; Landau, D. P.; Gervais, C.; Xu, Y.
2009-04-01
Non-traditional Monte Carlo simulations are a powerful approach to the study of systems with complex energy landscapes. After reviewing several of these specialized algorithms we shall describe the behavior of typical systems including spin glasses, lattice proteins, and models for "real" proteins. In the Edwards-Anderson spin glass it is now possible to produce probability distributions in the canonical ensemble and thermodynamic results of high numerical quality. In the hydrophobic-polar (HP) lattice protein model Wang-Landau sampling with an improved move set (pull-moves) produces results of very high quality. These can be compared with the results of other methods of statistical physics. A more realistic membrane protein model for Glycophorin A is also examined. Wang-Landau sampling allows the study of the dimerization process including an elucidation of the nature of the process.
Status of Monte-Carlo Event Generators
Hoeche, Stefan; /SLAC
2011-08-11
Recent progress on general-purpose Monte-Carlo event generators is reviewed with emphasis on the simulation of hard QCD processes and subsequent parton cascades. Describing full final states of high-energy particle collisions in contemporary experiments is an intricate task. Hundreds of particles are typically produced, and the reactions involve both large and small momentum transfer. The high-dimensional phase space makes an exact solution of the problem impossible. Instead, one typically resorts to regarding events as factorized into different steps, ordered descending in the mass scales or invariant momentum transfers which are involved. In this picture, a hard interaction, described through fixed-order perturbation theory, is followed by multiple Bremsstrahlung emissions off initial- and final-state partons and, finally, by the hadronization process, which binds QCD partons into color-neutral hadrons. Each of these steps can be treated independently, which is the basic concept inherent to general-purpose event generators. Their development is nowadays often focused on an improved description of radiative corrections to hard processes through perturbative QCD. In this context, the concept of jets is introduced, which allows one to relate sprays of hadronic particles in detectors to the partons in perturbation theory. In this talk, we briefly review recent progress on perturbative QCD in event generation. The main focus lies on the general-purpose Monte-Carlo programs HERWIG, PYTHIA and SHERPA, which will be the workhorses for LHC phenomenology. A detailed description of the physics models included in these generators can be found in [8]. We also discuss matrix-element generators, which provide the parton-level input for general-purpose Monte Carlo.
Simulation of an ensemble of future climate time series with an hourly weather generator
Caporali, E.; Fatichi, S.; Ivanov, V. Y.; Kim, J.
2010-12-01
There is evidence that climate change is occurring in many regions of the world. The necessity of climate change predictions at the local scale and fine temporal resolution is thus warranted for hydrological, ecological, geomorphological, and agricultural applications that can provide thematic insights into the corresponding impacts. Numerous downscaling techniques have been proposed to bridge the gap between the spatial scales adopted in General Circulation Models (GCM) and regional analyses. Nevertheless, the time and spatial resolutions obtained as well as the type of meteorological variables may not be sufficient for detailed studies of climate change effects at the local scales. In this context, this study presents a stochastic downscaling technique that makes use of an hourly weather generator to simulate time series of predicted future climate. Using a Bayesian approach, the downscaling procedure derives distributions of factors of change for several climate statistics from a multi-model ensemble of GCMs. Factors of change are sampled from their distributions using a Monte Carlo technique to entirely account for the probabilistic information obtained with the Bayesian multi-model ensemble. Factors of change are subsequently applied to the statistics derived from observations to re-evaluate the parameters of the weather generator. The weather generator can reproduce a wide set of climate variables and statistics over a range of temporal scales, from extremes, to the low-frequency inter-annual variability. The final result of such a procedure is the generation of an ensemble of hourly time series of meteorological variables that can be considered as representative of future climate, as inferred from GCMs. The generated ensemble of scenarios also accounts for the uncertainty derived from multiple GCMs used in downscaling. Applications of the procedure in reproducing present and future climates are presented for different locations world-wide: Tucson (AZ
The role of ensemble post-processing for modeling the ensemble tail
Van De Vyver, Hans; Van Schaeybroeck, Bert; Vannitsem, Stéphane
2016-04-01
Over the past decades the numerical weather prediction community has witnessed a paradigm shift from deterministic to probabilistic forecast and state estimation (Buizza and Leutbecher, 2015; Buizza et al., 2008), in an attempt to quantify the uncertainties associated with initial-condition and model errors. An important benefit of a probabilistic framework is the improved prediction of extreme events. However, one may ask to what extent such model estimates contain information on the occurrence probability of extreme events and how this information can be optimally extracted. Different approaches have been proposed and applied on real-world systems which, based on extreme value theory, allow the estimation of extreme-event probabilities conditional on forecasts and state estimates (Ferro, 2007; Friederichs, 2010). Using ensemble predictions generated with a model of low dimensionality, a thorough investigation is presented quantifying the change of predictability of extreme events associated with ensemble post-processing and other influencing factors including the finite ensemble size, lead time and model assumption and the use of different covariates (ensemble mean, maximum, spread...) for modeling the tail distribution. Tail modeling is performed by deriving extreme-quantile estimates using a peak-over-threshold representation (generalized Pareto distribution) or quantile regression. Common ensemble post-processing methods aim to improve mostly the ensemble mean and spread of a raw forecast (Van Schaeybroeck and Vannitsem, 2015). Conditional tail modeling, on the other hand, is a post-processing in itself, focusing on the tails only. Therefore, it is unclear how applying ensemble post-processing prior to conditional tail modeling impacts the skill of extreme-event predictions. This work investigates this question in detail. Buizza, Leutbecher, and Isaksen, 2008: Potential use of an ensemble of analyses in the ECMWF Ensemble Prediction System, Q. J. R. Meteorol
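As a pointer for the peak-over-threshold step mentioned above, the sketch below fits a generalized Pareto distribution to threshold exceedances with scipy and reads off an extreme quantile; the synthetic data and the threshold choice are illustrative, and the conditional modelling discussed in the abstract would additionally make the GPD parameters depend on covariates such as the ensemble mean or spread.

    import numpy as np
    from scipy.stats import genpareto

    rng = np.random.default_rng(0)
    x = rng.gumbel(loc=0.0, scale=1.0, size=20_000)       # stand-in for a forecast variable

    u = np.quantile(x, 0.95)                              # threshold (illustrative choice)
    exceedances = x[x > u] - u
    xi, _, sigma = genpareto.fit(exceedances, floc=0.0)   # shape and scale, location fixed at 0

    # Level exceeded with probability p, via the usual peak-over-threshold formula.
    p = 1e-3
    zeta_u = (x > u).mean()                               # probability of exceeding the threshold
    level = u + genpareto.ppf(1.0 - p / zeta_u, xi, loc=0.0, scale=sigma)
    print("estimated level exceeded with probability 1e-3:", level)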
Multilevel sequential Monte Carlo samplers
Beskos, Alexandros
2016-08-29
In this article we consider the approximation of expectations w.r.t. probability distributions associated to the solution of partial differential equations (PDEs); this scenario appears routinely in Bayesian inverse problems. In practice, one often has to solve the associated PDE numerically, using, for instance, finite element methods which depend on the step-size level h_L. In addition, the expectation cannot be computed analytically and one often resorts to Monte Carlo methods. In the context of this problem, it is known that the introduction of the multilevel Monte Carlo (MLMC) method can reduce the amount of computational effort to estimate expectations, for a given level of error. This is achieved via a telescoping identity associated to a Monte Carlo approximation of a sequence of probability distributions with discretization levels ∞ > h_0 > h_1 > ⋯ > h_L. In many practical problems of interest, one cannot achieve an i.i.d. sampling of the associated sequence and a sequential Monte Carlo (SMC) version of the MLMC method is introduced to deal with this problem. It is shown that under appropriate assumptions, the attractive property of a reduction of the amount of computational effort to estimate expectations, for a given level of error, can be maintained within the SMC context. That is, relative to exact sampling and Monte Carlo for the distribution at the finest level h_L. The approach is numerically illustrated on a Bayesian inverse problem.
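The telescoping identity referred to above can be written out explicitly; with g_l denoting the quantity of interest computed at discretization level h_l, the expectation at the finest level decomposes as

    \mathbb{E}[g_L] \;=\; \mathbb{E}[g_0] \;+\; \sum_{l=1}^{L} \mathbb{E}\,[\,g_l - g_{l-1}\,],

and each term is estimated with its own (coupled) sample. Because the corrections g_l - g_{l-1} have shrinking variance, most of the sampling effort can be placed on the coarse, cheap levels; the SMC variant replaces the i.i.d. samples at each level, which are unavailable here, by particle approximations.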
An efficient approach to ab initio Monte Carlo simulation.
Leiding, Jeff; Coe, Joshua D
2014-01-21
We present a Nested Markov chain Monte Carlo (NMC) scheme for building equilibrium averages based on accurate potentials such as density functional theory. Metropolis sampling of a reference system, defined by an inexpensive but approximate potential, was used to substantially decorrelate configurations at which the potential of interest was evaluated, thereby dramatically reducing the number needed to build ensemble averages at a given level of precision. The efficiency of this procedure was maximized on-the-fly through variation of the reference system thermodynamic state (characterized here by its inverse temperature β(0)), which was otherwise unconstrained. Local density approximation results are presented for shocked states of argon at pressures from 4 to 60 GPa, where, depending on the quality of the reference system potential, acceptance probabilities were enhanced by factors of 1.2-28 relative to unoptimized NMC. The optimization procedure compensated strongly for reference potential shortcomings, as evidenced by significantly higher speedups when using a reference potential of lower quality. The efficiency of optimized NMC is shown to be competitive with that of standard ab initio molecular dynamics in the canonical ensemble.
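A hedged sketch of the nested acceptance step this describes: a segment of the cheap reference chain, run at inverse temperature \beta_0 with potential U_r, proposes a move x \to y for the expensive target (\beta, U_t), and detailed balance of the reference chain gives the acceptance probability below. This is the generic nested Markov chain Monte Carlo form, not necessarily the exact expression used in the paper.

    A(x \to y) \;=\; \min\Big\{ 1,\; \exp\big[ -\beta \,( U_t(y) - U_t(x) ) \;+\; \beta_0 \,( U_r(y) - U_r(x) ) \big] \Big\},

so that only one expensive evaluation of U_t is needed per already-decorrelated candidate y.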
Mester, Zoltan; Lynd, Nathaniel; Fredrickson, Glenn
2013-03-01
Melts of block copolymer blends can exhibit coexistence between compositionally and morphologically distinct phases. We derived a unit-cell approach for a field theoretic Gibbs ensemble formalism to rapidly map out such coexistence regions. We also developed a canonical ensemble model for the reversible reaction of supramolecular polymers and integrated it into the Gibbs ensemble scheme. This creates a faster method for generating phase diagrams in complex supramolecular systems than the usual grand canonical ensemble method and allows us to specify the system in experimentally accessible volume fractions rather than chemical potentials. The integrated approach is used to calculate phase diagrams for AB diblock copolymers reversibly reacting with B homopolymers to form new diblocks we term "ABB". For our case, we use a diblock that is sixty percent A monomer and a homopolymer that is the same length as the diblock. In the limit of infinite reaction favorability (large equilibrium constant), the system approaches an ABB diblock-B homopolymer blend when the AB diblock is the limiting reactant and an AB diblock-ABB diblock blend when the homopolymer is the limiting reactant. As the reaction favorability is decreased, the phase boundaries shift towards higher homopolymer compositions so that sufficient reaction can take place to produce the ABB diblock, which plays a deciding role in stabilizing the observed phases.
Equilibrium Statistics: Monte Carlo Methods
Kröger, Martin
Monte Carlo methods use random numbers, or ‘random’ sequences, to sample from a known shape of a distribution, or to extract a distribution by other means, and, in the context of this book, to (i) generate representative equilibrated samples prior to being subjected to external fields, or (ii) evaluate high-dimensional integrals. Recipes for both topics, and some more general methods, are summarized in this chapter. It is important to realize that Monte Carlo should be as artificial as possible to be efficient and elegant. Advanced Monte Carlo ‘moves’, required to optimize the speed of algorithms for a particular problem at hand, are outside the scope of this brief introduction. One particular modern example is the wavelet-accelerated MC sampling of polymer chains [406].
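For readers who want the simplest of the recipes alluded to above, here is a textbook Metropolis sampler for a scalar coordinate and an arbitrary potential; the function names and parameters are illustrative, not taken from the chapter.

```python
import numpy as np

def metropolis(energy, x0, n_steps, beta=1.0, step=0.5, rng=np.random.default_rng(1)):
    """Sample exp(-beta * energy(x)) for a scalar coordinate x with symmetric
    uniform trial moves and the Metropolis acceptance rule."""
    x, e = x0, energy(x0)
    samples = np.empty(n_steps)
    for i in range(n_steps):
        x_new = x + step * rng.uniform(-1.0, 1.0)     # symmetric trial move
        e_new = energy(x_new)
        if rng.random() < np.exp(min(0.0, -beta * (e_new - e))):
            x, e = x_new, e_new                       # accept
        samples[i] = x                                # a rejection keeps the old state
    return samples

# Harmonic well at beta = 1: the sampled variance should approach 1.
xs = metropolis(lambda x: 0.5 * x**2, x0=0.0, n_steps=50_000)
print(xs[5_000:].var())
```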
Accurate barrier heights using diffusion Monte Carlo
Krongchon, Kittithat; Wagner, Lucas K
2016-01-01
Fixed node diffusion Monte Carlo (DMC) has been performed on a test set of forward and reverse barrier heights for 19 non-hydrogen-transfer reactions, and the nodal error has been assessed. The DMC results are robust to changes in the nodal surface, as assessed by using different mean-field techniques to generate single determinant wave functions. Using these single determinant nodal surfaces, DMC results in errors of 1.5(5) kcal/mol on barrier heights. Using the large data set of DMC energies, we attempted to find good descriptors of the fixed node error. It does not correlate with a number of descriptors including change in density, but does correlate with the gap between the highest occupied and lowest unoccupied orbital energies in the mean-field calculation.
Quantum data compression of a qubit ensemble.
Rozema, Lee A; Mahler, Dylan H; Hayat, Alex; Turner, Peter S; Steinberg, Aephraim M
2014-10-17
Data compression is a ubiquitous aspect of modern information technology, and the advent of quantum information raises the question of what types of compression are feasible for quantum data, where it is especially relevant given the extreme difficulty involved in creating reliable quantum memories. We present a protocol in which an ensemble of quantum bits (qubits) can in principle be perfectly compressed into exponentially fewer qubits. We then experimentally implement our algorithm, compressing three photonic qubits into two. This protocol sheds light on the subtle differences between quantum and classical information. Furthermore, since data compression stores all of the available information about the quantum state in fewer physical qubits, it could allow for a vast reduction in the amount of quantum memory required to store a quantum ensemble, making even today's limited quantum memories far more powerful than previously recognized.
Rotationally invariant ensembles of integrable matrices.
Yuzbashyan, Emil A; Shastry, B Sriram; Scaramazza, Jasen A
2016-05-01
We construct ensembles of random integrable matrices with any prescribed number of nontrivial integrals and formulate integrable matrix theory (IMT)-a counterpart of random matrix theory (RMT) for quantum integrable models. A type-M family of integrable matrices consists of exactly N-M independent commuting N×N matrices linear in a real parameter. We first develop a rotationally invariant parametrization of such matrices, previously only constructed in a preferred basis. For example, an arbitrary choice of a vector and two commuting Hermitian matrices defines a type-1 family and vice versa. Higher types similarly involve a random vector and two matrices. The basis-independent formulation allows us to derive the joint probability density for integrable matrices, similar to the construction of Gaussian ensembles in the RMT.
Face Recognition using Optimal Representation Ensemble
Li, Hanxi; Gao, Yongsheng
2011-01-01
Recently, the face recognizers based on linear representations have been shown to deliver state-of-the-art performance. In real-world applications, however, face images usually suffer from expressions, disguises and random occlusions. The problematic facial parts undermine the validity of the linear-subspace assumption and thus the recognition performance deteriorates significantly. In this work, we address the problem in a learning-inference-mixed fashion. By observing that the linear-subspace assumption is more reliable on certain face patches rather than on the holistic face, some Bayesian Patch Representations (BPRs) are randomly generated and interpreted according to Bayes' theory. We then train an ensemble model over the patch-representations by minimizing the empirical risk w.r.t. the "leave-one-out margins". The obtained model is termed Optimal Representation Ensemble (ORE), since it guarantees the optimality from the perspective of Empirical Risk Minimization. To handle the unknown patterns in tes...
Statistical ensembles for money and debt
Viaggiu, Stefano; Lionetto, Andrea; Bargigli, Leonardo; Longo, Michele
2012-10-01
We build a statistical ensemble representation of two economic models describing respectively, in simplified terms, a payment system and a credit market. To this purpose we adopt the Boltzmann-Gibbs distribution where the role of the Hamiltonian is taken by the total money supply (i.e. including money created from debt) of a set of interacting economic agents. As a result, we can read the main thermodynamic quantities in terms of monetary ones. In particular, we define for the credit market model a work term which is related to the impact of monetary policy on credit creation. Furthermore, with our formalism we recover and extend some results concerning the temperature of an economic system, previously presented in the literature by considering only the monetary base as a conserved quantity. Finally, we study the statistical ensemble for the Pareto distribution.
Staying thermal with Hartree ensemble approximations
Salle, Mischa E-mail: msalle@science.uva.nl; Smit, Jan E-mail: jsmit@science.uva.nl; Vink, Jeroen C. E-mail: jcvink@science.uva.nl
2002-03-25
We study thermal behavior of a recently introduced Hartree ensemble approximation, which allows for non-perturbative inhomogeneous field configurations as well as for approximate thermalization, in the φ^4 model in 1+1 dimensions. Using ensembles with a free field thermal distribution as out-of-equilibrium initial conditions we determine thermalization time scales. The time scale for which the system stays in approximate quantum thermal equilibrium is an indication of the time scales for which the approximation method stays reasonable. This time scale turns out to be two orders of magnitude larger than the time scale for thermalization, in the range of couplings and temperatures studied. We also discuss simplifications of our method which are numerically more efficient and make a comparison with classical dynamics.
Entanglement in a Solid State Spin Ensemble
Simmons, Stephanie; Riemann, Helge; Abrosimov, Nikolai V; Becker, Peter; Pohl, Hans-Joachim; Thewalt, Mike L W; Itoh, Kohei M; Morton, John J L
2010-01-01
Entanglement is the quintessential quantum phenomenon and a necessary ingredient in most emerging quantum technologies, including quantum repeaters, quantum information processing (QIP) and the strongest forms of quantum cryptography. Spin ensembles, such as those in liquid state nuclear magnetic resonance, have been powerful in the development of quantum control methods, however, these demonstrations contained no entanglement and ultimately constitute classical simulations of quantum algorithms. Here we report the on-demand generation of entanglement between an ensemble of electron and nuclear spins in isotopically engineered phosphorus-doped silicon. We combined high field/low temperature electron spin resonance (3.4 T, 2.9 K) with hyperpolarisation of the 31P nuclear spin to obtain an initial state of sufficient purity to create a non-classical, inseparable state. The state was verified using density matrix tomography based on geometric phase gates, and had a fidelity of 98% compared with the ideal state a...
Dysonian dynamics of the Ginibre ensemble.
Burda, Zdzislaw; Grela, Jacek; Nowak, Maciej A; Tarnowski, Wojciech; Warchoł, Piotr
2014-09-05
We study the time evolution of Ginibre matrices whose elements undergo Brownian motion. The non-Hermitian character of the Ginibre ensemble binds the dynamics of eigenvalues to the evolution of eigenvectors in a nontrivial way, leading to a system of coupled nonlinear equations resembling those for turbulent systems. We formulate a mathematical framework allowing simultaneous description of the flow of eigenvalues and eigenvectors, and we unravel a hidden dynamics as a function of a new complex variable, which in the standard description is treated as a regulator only. We solve the evolution equations for large matrices and demonstrate that the nonanalytic behavior of the Green's functions is associated with a shock wave stemming from a Burgers-like equation describing correlations of eigenvectors. We conjecture that the hidden dynamics that we observe for the Ginibre ensemble is a general feature of non-Hermitian random matrix models and is relevant to related physical applications.
Rotationally invariant ensembles of integrable matrices
Yuzbashyan, Emil A.; Shastry, B. Sriram; Scaramazza, Jasen A.
2016-05-01
We construct ensembles of random integrable matrices with any prescribed number of nontrivial integrals and formulate integrable matrix theory (IMT)—a counterpart of random matrix theory (RMT) for quantum integrable models. A type-M family of integrable matrices consists of exactly N -M independent commuting N ×N matrices linear in a real parameter. We first develop a rotationally invariant parametrization of such matrices, previously only constructed in a preferred basis. For example, an arbitrary choice of a vector and two commuting Hermitian matrices defines a type-1 family and vice versa. Higher types similarly involve a random vector and two matrices. The basis-independent formulation allows us to derive the joint probability density for integrable matrices, similar to the construction of Gaussian ensembles in the RMT.
Eigenstate Gibbs Ensemble in Integrable Quantum Systems
Nandy, Sourav; Das, Arnab; Dhar, Abhishek
2016-01-01
The Eigenstate Thermalization Hypothesis implies that for a thermodynamically large system in one of its eigenstates, the reduced density matrix describing any finite subsystem is determined solely by a set of relevant conserved quantities. In a generic system, only the energy plays that role and hence eigenstates appear locally thermal. Integrable systems, on the other hand, possess an extensive number of such conserved quantities and hence the reduced density matrix requires specification of an infinite number of parameters (Generalized Gibbs Ensemble). However, here we show by unbiased statistical sampling of the individual eigenstates with a given finite energy density, that the local description of an overwhelming majority of these states of even such an integrable system is actually Gibbs-like, i.e. requires only the energy density of the eigenstate. Rare eigenstates that cannot be represented by the Gibbs ensemble can also be sampled efficiently by our method and their local properties are then s...
ABCD of Beta Ensembles and Topological Strings
Krefl, Daniel
2012-01-01
We study beta-ensembles with Bn, Cn, and Dn eigenvalue measure and their relation with refined topological strings. Our results generalize the familiar connections between local topological strings and matrix models leading to An measure, and illustrate that all those classical eigenvalue ensembles, and their topological string counterparts, are related one to another via various deformations and specializations, quantum shifts and discrete quotients. We review the solution of the Gaussian models via Macdonald identities, and interpret them as conifold theories. The interpolation between the various models is plainly apparent in this case. For general polynomial potential, we calculate the partition function in the multi-cut phase in a perturbative fashion, beyond tree-level in the large-N limit. The relation to refined topological string orientifolds on the corresponding local geometry is discussed along the way.
Support Vector Machine Ensemble Based on Genetic Algorithm
LI Ye; YIN Ru-po; CAI Yun-ze; XU Xiao-ming
2006-01-01
Support vector machines (SVMs) have been introduced as effective methods for solving classification problems. However, due to some limitations in practical applications, their generalization performance is sometimes far from the expected level. Therefore, it is meaningful to study SVM ensemble learning. In this paper, a novel genetic algorithm based ensemble learning method, namely Direct Genetic Ensemble (DGE), is proposed. DGE adopts the predictive accuracy of the ensemble as the fitness function and searches for a good ensemble in the ensemble space. In essence, DGE is also a selective ensemble learning method because the base classifiers of the ensemble are selected according to the solution of the genetic algorithm. In comparison with other ensemble learning methods, DGE works on a higher level and is more direct. Different strategies of constructing diverse base classifiers can be utilized in DGE. Experimental results show that SVM ensembles constructed by DGE can achieve better performance than single SVMs, bagged and boosted SVM ensembles. In addition, some valuable conclusions are obtained.
Equilibrium simulation: the Monte Carlo method
Alejandro López-Castillo
2007-01-01
We make several simulations using the Monte Carlo method in order to obtain the chemical equilibrium for several first-order reactions and one second-order reaction. We study several direct, reverse and consecutive reactions. These simulations show the fluctuations and relaxation time and help to understand the solution of the corresponding differential equations of chemical kinetics. This work was done in an undergraduate physical chemistry course at UNIFIEO.
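A toy Monte Carlo of the kind described, for a single reversible first-order reaction A <-> B; the per-attempt probabilities p_f and p_r are illustrative parameters and the expected equilibrium fraction of B is p_f/(p_f + p_r).

```python
import numpy as np

rng = np.random.default_rng(2)

N, p_f, p_r, n_steps = 10_000, 0.02, 0.01, 200_000
is_B = np.zeros(N, dtype=bool)            # start from pure A
fraction_B = []

for step in range(n_steps):
    i = rng.integers(N)                   # pick one molecule at random
    if is_B[i]:
        if rng.random() < p_r:            # reverse step B -> A
            is_B[i] = False
    elif rng.random() < p_f:              # forward step A -> B
        is_B[i] = True
    if step % 1000 == 0:
        fraction_B.append(is_B.mean())    # relaxes toward p_f/(p_f+p_r) = 2/3 and
                                          # fluctuates around it at equilibrium
print(fraction_B[-1])
```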
Probability-weighted ensembles of U.S. county-level climate projections for climate risk analysis
Rasmussen, D J; Kopp, Robert E
2015-01-01
Quantitative assessment of climate change risk requires a method for constructing probabilistic time series of changes in physical climate parameters. Here, we develop two such methods, Surrogate/Model Mixed Ensemble (SMME) and Monte Carlo Pattern/Residual (MCPR), and apply them to construct joint probability density functions (PDFs) of temperature and precipitation change over the 21st century for every county in the United States. Both methods produce likely (67% probability) temperature and precipitation projections consistent with the Intergovernmental Panel on Climate Change's interpretation of an equal-weighted Coupled Model Intercomparison Project 5 (CMIP5) ensemble, but also provide full PDFs that include tail estimates. For example, both methods indicate that, under representative concentration pathway (RCP) 8.5, there is a 5% chance that the contiguous United States could warm by at least 8 °C. Variance decomposition of SMME and MCPR projections indicates that background variability dominates...
Various multistage ensembles for prediction of heating energy consumption
Radisa Jovanovic
2015-04-01
Feedforward neural network models are created for prediction of daily heating energy consumption of the NTNU university campus Gloshaugen using actual measured data for training and testing. Improvement of prediction accuracy is proposed by using a neural network ensemble. Previously trained feed-forward neural networks are first separated into clusters, using the k-means algorithm, and then the best network of each cluster is chosen as a member of an ensemble. Two conventional averaging methods for obtaining the ensemble output are applied: simple and weighted. In order to achieve better prediction results, a multistage ensemble is investigated. At the second level, an adaptive neuro-fuzzy inference system with various clustering and membership functions is used to aggregate the selected ensemble members. A feedforward neural network in the second stage is also analyzed. It is shown that using an ensemble of neural networks can predict heating energy consumption with better accuracy than the best trained single neural network, while the best results are achieved with the multistage ensemble.
Spatially Coupled Ensembles Universally Achieve Capacity under Belief Propagation
Kudekar, Shrinivas; Urbanke, Ruediger
2012-01-01
We investigate spatially coupled code ensembles. For transmission over the binary erasure channel, it was recently shown that spatial coupling increases the belief propagation threshold of the ensemble to essentially the maximum a-priori threshold of the underlying component ensemble. This explains why convolutional LDPC ensembles, originally introduced by Felstrom and Zigangirov, perform so well over this channel. We show that the equivalent result holds true for transmission over general binary-input memoryless output-symmetric channels. More precisely, given a desired error probability and a gap to capacity, we can construct a spatially coupled ensemble which fulfills these constraints universally on this class of channels under belief propagation decoding. In fact, most codes in that ensemble have that property. The quantifier universal refers to the single ensemble/code which is good for all channels but we assume that the channel is known at the receiver. The key technical result is a proof that under b...
Analysis and optimization of weighted ensemble sampling
Aristoff, David
2016-01-01
We give a mathematical framework for weighted ensemble (WE) sampling, a binning and resampling technique for efficiently computing probabilities in molecular dynamics. We prove that WE sampling is unbiased in a very general setting that includes adaptive binning. We show that when WE is used for stationary calculations in tandem with a Markov state model (MSM), the MSM can be used to optimize the allocation of replicas in the bins.
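A schematic of one binning/resampling step of weighted ensemble sampling, using multinomial resampling within each bin as a stand-in for the split/merge rules analyzed in the paper; positions are scalar progress-coordinate values and all names are illustrative.

```python
import numpy as np

def we_resample(positions, weights, bin_index, replicas_per_bin,
                rng=np.random.default_rng()):
    """Rebuild the replica set so that every occupied bin holds exactly
    replicas_per_bin replicas whose weights sum to the original bin weight,
    leaving the represented probability distribution statistically unchanged."""
    bins = {}
    for x, w in zip(positions, weights):
        bins.setdefault(bin_index(x), []).append((x, w))
    new_pos, new_w = [], []
    for members in bins.values():
        xs = np.array([x for x, _ in members])
        ws = np.array([w for _, w in members])
        total = ws.sum()
        picks = rng.choice(len(xs), size=replicas_per_bin, p=ws / total)
        new_pos.extend(xs[picks])                       # resample inside the bin
        new_w.extend([total / replicas_per_bin] * replicas_per_bin)
    return np.array(new_pos), np.array(new_w)
```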
Quantum Data Compression of a Qubit Ensemble
Rozema, Lee A.; Mahler, Dylan H.; Hayat, Alex; Turner, Peter S.; Steinberg, Aephraim M.
2014-01-01
Data compression is a ubiquitous aspect of modern information technology, and the advent of quantum information raises the question of what types of compression are feasible for quantum data, where it is especially relevant given the extreme difficulty involved in creating reliable quantum memories. We present a protocol in which an ensemble of quantum bits (qubits) can in principle be perfectly compressed into exponentially fewer qubits. We then experimentally implement our algorithm, compre...
Statistical Ensemble Theory of Gompertz Growth Model
Takuya Yamano
2009-11-01
An ensemble formulation for the Gompertz growth function within the framework of statistical mechanics is presented, where the two growth parameters are assumed to be statistically distributed. The growth can be viewed as a self-referential process, which enables us to use the Bose-Einstein statistics picture. The analytical entropy expression pertaining to the law can be obtained in terms of the growth velocity distribution as well as the Gompertz function itself for the whole process.
Staying Thermal with Hartree Ensemble Approximations
Salle, M; Vink, Jeroen C
2000-01-01
Using Hartree ensemble approximations to compute the real time dynamics of scalar fields in 1+1 dimension, we find that with suitable initial conditions, approximate thermalization is achieved much faster than found in our previous work. At large times, depending on the interaction strength and temperature, the particle distribution slowly changes: the Bose-Einstein distribution of the particle densities develops classical features. We also discuss variations of our method which are numerically more efficient.
Bayesian Monte Carlo method for nuclear data evaluation
Koning, A.J. [Nuclear Research and Consultancy Group NRG, P.O. Box 25, ZG Petten (Netherlands)
2015-12-15
A Bayesian Monte Carlo method is outlined which allows a systematic evaluation of nuclear reactions using the nuclear model code TALYS and the experimental nuclear reaction database EXFOR. The method is applied to all nuclides at the same time. First, the global predictive power of TALYS is numerically assessed, which enables the prior space of nuclear model solutions to be set. Next, the method gradually zooms in on particular experimental data per nuclide, until for each specific target nuclide its existing experimental data can be used for weighted Monte Carlo sampling. To connect to the various different schools of uncertainty propagation in applied nuclear science, the result will be either an EXFOR-weighted covariance matrix or a collection of random files, each accompanied by the EXFOR-based weight. (orig.)
Bayesian Monte Carlo method for nuclear data evaluation
Koning, A. J.
2015-12-01
A Bayesian Monte Carlo method is outlined which allows a systematic evaluation of nuclear reactions using the nuclear model code TALYS and the experimental nuclear reaction database EXFOR. The method is applied to all nuclides at the same time. First, the global predictive power of TALYS is numerically assessed, which enables the prior space of nuclear model solutions to be set. Next, the method gradually zooms in on particular experimental data per nuclide, until for each specific target nuclide its existing experimental data can be used for weighted Monte Carlo sampling. To connect to the various different schools of uncertainty propagation in applied nuclear science, the result will be either an EXFOR-weighted covariance matrix or a collection of random files, each accompanied by the EXFOR-based weight.
Monte Carlo Hamiltonian: Linear Potentials
LUO Xiang-Qian; LIU Jin-Jiang; HUANG Chun-Qing; JIANG Jun-Qin; Helmut KROGER
2002-01-01
We further study the validity of the Monte Carlo Hamiltonian method. The advantage of the method, in comparison with the standard Monte Carlo Lagrangian approach, is its capability to study the excited states. We consider two quantum mechanical models: a symmetric one, V(x) = |x|/2; and an asymmetric one, V(x) = ∞ for x < 0 and V(x) = x for x ≥ 0. The results for the spectrum, wave functions and thermodynamical observables are in agreement with the analytical or Runge-Kutta calculations.
Proton Upset Monte Carlo Simulation
O'Neill, Patrick M.; Kouba, Coy K.; Foster, Charles C.
2009-01-01
The Proton Upset Monte Carlo Simulation (PROPSET) program calculates the frequency of on-orbit upsets in computer chips (for given orbits such as Low Earth Orbit, Lunar Orbit, and the like) from proton bombardment based on the results of heavy ion testing alone. The software simulates the bombardment of modern microelectronic components (computer chips) with high-energy (>200 MeV) protons. The nuclear interaction of the proton with the silicon of the chip is modeled and nuclear fragments from this interaction are tracked using Monte Carlo techniques to produce statistically accurate predictions.
Interplanetary magnetic field ensemble at 1 AU
Matthaeus, W.H.; Goldstein, M.L.; King, J.H.
1985-04-01
A method for calculating ensemble averages from magnetic field data is described. A data set comprising approximately 16 months of nearly continuous ISEE-3 magnetic field data is used in this study. Individual subintervals of this data, ranging from 15 hours to 15.6 days, comprise the ensemble. The sole condition for including each subinterval in the averages is the degree to which it represents a weakly time-stationary process. Averages obtained by this method are appropriate for a turbulence description of the interplanetary medium. The ensemble average correlation length obtained from all subintervals is found to be 4.9 × 10^11 cm. The average values of the variances of the magnetic field components are in the approximate ratio 8:9:10, where the third component is the local mean field direction. The correlation lengths and variances are found to have a systematic variation with subinterval duration, reflecting the important role of low-frequency fluctuations in the interplanetary medium.
Gradient Flow Analysis on MILC HISQ Ensembles
Brown, Nathan [Washington U., St. Louis; Bazavov, Alexei [Brookhaven; Bernard, Claude [Washington U., St. Louis; DeTar, Carleton [Utah U.; Foley, Justin [Utah U.; Gottlieb, Steven [Indiana U.; Heller, Urs M. [APS, New York; Hetrick, J. E. [U. Pacific, Stockton; Komijani, Javad [Washington U., St. Louis; Laiho, Jack [Syracuse U.; Levkova, Ludmila [Utah U.; Oktay, M. B. [Utah U.; Sugar, Robert [UC, Santa Barbara; Toussaint, Doug [Arizona U.; Van de Water, Ruth S. [Fermilab; Zhou, Ran [Fermilab
2014-11-14
We report on a preliminary scale determination with gradient-flow techniques on the $N_f = 2 + 1 + 1$ HISQ ensembles generated by the MILC collaboration. The ensembles include four lattice spacings, ranging from 0.15 to 0.06 fm, and both physical and unphysical values of the quark masses. The scales $\\sqrt{t_0}/a$ and $w_0/a$ are computed using Symanzik flow and the cloverleaf definition of $\\langle E \\rangle$ on each ensemble. Then both scales and the meson masses $aM_\\pi$ and $aM_K$ are adjusted for mistunings in the charm mass. Using a combination of continuum chiral perturbation theory and a Taylor series ansatz in the lattice spacing, the results are simultaneously extrapolated to the continuum and interpolated to physical quark masses. Our preliminary results are $\\sqrt{t_0} = 0.1422(7)$fm and $w_0 = 0.1732(10)$fm. We also find the continuum mass-dependence of $w_0$.
Cavity Cooling for Ensemble Spin Systems
Cory, David
2015-03-01
Recently there has been a surge of interest in exploring thermodynamics in quantum systems where dissipative effects can be exploited to perform useful work. One such example is quantum state engineering, where a quantum state of high purity may be prepared by dissipative coupling through a cold thermal bath. This has been used to great effect in many quantum systems where cavity cooling has been used to cool mechanical modes to their quantum ground state through coupling to the resolved sidebands of a high-Q resonator. In this talk we explore how these techniques may be applied to an ensemble spin system. This is an attractive process as it potentially allows for parallel removal of entropy from a large number of quantum systems, enabling an ensemble to achieve a polarization greater than thermal equilibrium, and potentially on a time scale much shorter than thermal relaxation processes. This is achieved by the coupled angular momentum subspaces of the ensemble behaving as larger effective spins, overcoming the weak coupling of individual spins to a microwave resonator. Cavity cooling is shown to cool each of these subspaces to its respective ground state; however, an additional algorithmic step or dissipative process is required to couple between these subspaces and enable cooling to the full ground state of the joint system.
Multivariate localization methods for ensemble Kalman filtering
Roh, S.
2015-05-08
In ensemble Kalman filtering (EnKF), the small number of ensemble members that is feasible to use in a practical data assimilation application leads to sampling variability of the estimates of the background error covariances. The standard approach to reducing the effects of this sampling variability, which has also been found to be highly efficient in improving the performance of EnKF, is the localization of the estimates of the covariances. One family of localization techniques is based on taking the Schur (entry-wise) product of the ensemble-based sample covariance matrix and a correlation matrix whose entries are obtained by the discretization of a distance-dependent correlation function. While the proper definition of the localization function for a single state variable has been extensively investigated, a rigorous definition of the localization function for multiple state variables has been seldom considered. This paper introduces two strategies for the construction of localization functions for multiple state variables. The proposed localization functions are tested by assimilating simulated observations experiments into the bivariate Lorenz 95 model with their help.
Gradient Flow Analysis on MILC HISQ Ensembles
Bazavov, A; Brown, N; DeTar, C; Foley, J; Gottlieb, Steven; Heller, U M; Hetrick, J E; Komijani, J; Laiho, J; Levkova, L; Oktay, M; Sugar, R L; Toussaint, D; Van de Water, R S; Zhou, R
2014-01-01
We report on a preliminary scale determination with gradient-flow techniques on the $N_f = 2 + 1 + 1$ HISQ ensembles generated by the MILC collaboration. The ensembles include four lattice spacings, ranging from 0.15 to 0.06 fm, and both physical and unphysical values of the quark masses. The scales $\\sqrt{t_0}/a$ and $w_0/a$ are computed using Symanzik flow and the cloverleaf definition of $\\langle E \\rangle$ on each ensemble. Then both scales and the meson masses $aM_\\pi$ and $aM_K$ are adjusted for mistunings in the charm mass. Using a combination of continuum chiral perturbation theory and a Taylor series ansatz in the lattice spacing, the results are simultaneously extrapolated to the continuum and interpolated to physical quark masses. Our preliminary results are $\\sqrt{t_0} = 0.1422(7)$fm and $w_0 = 0.1732(10)$fm. We also find the continuum mass-dependence of $w_0$.
Multivariate localization methods for ensemble Kalman filtering
Roh, S.
2015-12-03
In ensemble Kalman filtering (EnKF), the small number of ensemble members that is feasible to use in a practical data assimilation application leads to sampling variability of the estimates of the background error covariances. The standard approach to reducing the effects of this sampling variability, which has also been found to be highly efficient in improving the performance of EnKF, is the localization of the estimates of the covariances. One family of localization techniques is based on taking the Schur (element-wise) product of the ensemble-based sample covariance matrix and a correlation matrix whose entries are obtained by the discretization of a distance-dependent correlation function. While the proper definition of the localization function for a single state variable has been extensively investigated, a rigorous definition of the localization function for multiple state variables that exist at the same locations has been seldom considered. This paper introduces two strategies for the construction of localization functions for multiple state variables. The proposed localization functions are tested by assimilating simulated observations experiments into the bivariate Lorenz 95 model with their help.
Ensemble transform sensitivity method for adaptive observations
Zhang, Yu; Xie, Yuanfu; Wang, Hongli; Chen, Dehui; Toth, Zoltan
2016-01-01
The Ensemble Transform (ET) method has been shown to be useful in providing guidance for adaptive observation deployment. It predicts forecast error variance reduction for each possible deployment using its corresponding transformation matrix in an ensemble subspace. In this paper, a new ET-based sensitivity (ETS) method, which calculates the gradient of forecast error variance reduction in terms of analysis error variance reduction, is proposed to specify regions for possible adaptive observations. ETS is a first order approximation of the ET; it requires just one calculation of a transformation matrix, increasing computational efficiency (60%-80% reduction in computational cost). An explicit mathematical formulation of the ETS gradient is derived and described. Both the ET and ETS methods are applied to the Hurricane Irene (2011) case and a heavy rainfall case for comparison. The numerical results imply that the sensitive areas estimated by the ETS and ET are similar. However, ETS is much more efficient, particularly when the resolution is higher and the number of ensemble members is larger.
Multivariate localization methods for ensemble Kalman filtering
S. Roh
2015-05-01
In ensemble Kalman filtering (EnKF), the small number of ensemble members that is feasible to use in a practical data assimilation application leads to sampling variability of the estimates of the background error covariances. The standard approach to reducing the effects of this sampling variability, which has also been found to be highly efficient in improving the performance of EnKF, is the localization of the estimates of the covariances. One family of localization techniques is based on taking the Schur (entry-wise) product of the ensemble-based sample covariance matrix and a correlation matrix whose entries are obtained by the discretization of a distance-dependent correlation function. While the proper definition of the localization function for a single state variable has been extensively investigated, a rigorous definition of the localization function for multiple state variables has been seldom considered. This paper introduces two strategies for the construction of localization functions for multiple state variables. The proposed localization functions are tested by assimilating simulated observations experiments into the bivariate Lorenz 95 model with their help.
On large deviations for ensembles of distributions
Khrychev, D A [Moscow State Institute of Radio-Engineering, Electronics and Automation (Technical University), Moscow (Russian Federation)
2013-11-30
The paper is concerned with the large deviations problem in the Freidlin-Wentzell formulation without the assumption of the uniqueness of the solution to the equation involving white noise. In other words, it is assumed that for each ε>0 the nonempty set P_ε of weak solutions is not necessarily a singleton. Analogues of a number of concepts in the theory of large deviations are introduced for the set (P_ε, ε>0), hereafter referred to as an ensemble of distributions. The ensembles of weak solutions of an n-dimensional stochastic Navier-Stokes system and stochastic wave equation with power-law nonlinearity are shown to be uniformly exponentially tight. An idempotent Wiener process in a Hilbert space and idempotent partial differential equations are defined. The accumulation points in the sense of large deviations of the ensembles in question are shown to be weak solutions of the corresponding idempotent equations. Bibliography: 14 titles.
Multivariate localization methods for ensemble Kalman filtering
Roh, S.; Jun, M.; Szunyogh, I.; Genton, M. G.
2015-12-01
In ensemble Kalman filtering (EnKF), the small number of ensemble members that is feasible to use in a practical data assimilation application leads to sampling variability of the estimates of the background error covariances. The standard approach to reducing the effects of this sampling variability, which has also been found to be highly efficient in improving the performance of EnKF, is the localization of the estimates of the covariances. One family of localization techniques is based on taking the Schur (element-wise) product of the ensemble-based sample covariance matrix and a correlation matrix whose entries are obtained by the discretization of a distance-dependent correlation function. While the proper definition of the localization function for a single state variable has been extensively investigated, a rigorous definition of the localization function for multiple state variables that exist at the same locations has been seldom considered. This paper introduces two strategies for the construction of localization functions for multiple state variables. The proposed localization functions are tested by assimilating simulated observations experiments into the bivariate Lorenz 95 model with their help.
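The Schur-product localization that these abstracts refer to can be sketched as follows; the Gaussian-shaped correlation function is used purely for illustration (operational EnKF codes typically use a compactly supported function such as Gaspari-Cohn), and the one-dimensional coordinate array is an assumption.

```python
import numpy as np

def localized_covariance(ensemble, coords, loc_radius):
    """Element-wise (Schur) product of the ensemble sample covariance with a
    distance-dependent correlation matrix, damping spurious long-range terms.

    ensemble: array of shape (n_members, n_state)
    coords:   1-D positions of the state variables
    """
    anomalies = ensemble - ensemble.mean(axis=0)
    sample_cov = anomalies.T @ anomalies / (ensemble.shape[0] - 1)
    dist = np.abs(coords[:, None] - coords[None, :])
    localization = np.exp(-0.5 * (dist / loc_radius) ** 2)   # illustrative choice
    return localization * sample_cov
```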
Quantum entanglement at ambient conditions in a macroscopic solid-state spin ensemble.
Klimov, Paul V; Falk, Abram L; Christle, David J; Dobrovitski, Viatcheslav V; Awschalom, David D
2015-11-01
Entanglement is a key resource for quantum computers, quantum-communication networks, and high-precision sensors. Macroscopic spin ensembles have been historically important in the development of quantum algorithms for these prospective technologies and remain strong candidates for implementing them today. This strength derives from their long-lived quantum coherence, strong signal, and ability to couple collectively to external degrees of freedom. Nonetheless, preparing ensembles of genuinely entangled spin states has required high magnetic fields and cryogenic temperatures or photochemical reactions. We demonstrate that entanglement can be realized in solid-state spin ensembles at ambient conditions. We use hybrid registers comprising electron-nuclear spin pairs that are localized at color-center defects in a commercial SiC wafer. We optically initialize 10^3 identical registers in a 40-μm^3 volume (with [Formula: see text] fidelity) and deterministically prepare them into the maximally entangled Bell states (with 0.88 ± 0.07 fidelity). To verify entanglement, we develop a register-specific quantum-state tomography protocol. The entanglement of a macroscopic solid-state spin ensemble at ambient conditions represents an important step toward practical quantum technology.
Ensemble Averaged Probability Density Function (APDF) for Compressible Turbulent Reacting Flows
Shih, Tsan-Hsing; Liu, Nan-Suey
2012-01-01
In this paper, we present a concept of the averaged probability density function (APDF) for studying compressible turbulent reacting flows. The APDF is defined as an ensemble average of the fine grained probability density function (FG-PDF) with a mass density weighting. It can be used to exactly deduce the mass density weighted, ensemble averaged turbulent mean variables. The transport equation for APDF can be derived in two ways. One is the traditional way that starts from the transport equation of FG-PDF, in which the compressible Navier-Stokes equations are embedded. The resulting transport equation of APDF is then in a traditional form that contains conditional means of all terms from the right hand side of the Navier-Stokes equations except for the chemical reaction term. These conditional means are new unknown quantities that need to be modeled. Another way of deriving the transport equation of APDF is to start directly from the ensemble averaged Navier-Stokes equations. The resulting transport equation of APDF derived from this approach appears in a closed form without any need for additional modeling. The methodology of ensemble averaging presented in this paper can be extended to other averaging procedures: for example, the Reynolds time averaging for statistically steady flow and the Reynolds spatial averaging for statistically homogeneous flow. It can also be extended to a time or spatial filtering procedure to construct the filtered density function (FDF) for the large eddy simulation (LES) of compressible turbulent reacting flows.
Exploring fluctuations and phase equilibria in fluid mixtures via Monte Carlo simulation
Denton, Alan R.; Schmidt, Michael P.
2013-03-01
Monte Carlo simulation provides a powerful tool for understanding and exploring thermodynamic phase equilibria in many-particle interacting systems. Among the most physically intuitive simulation methods is Gibbs ensemble Monte Carlo (GEMC), which allows direct computation of phase coexistence curves of model fluids by assigning each phase to its own simulation cell. When one or both of the phases can be modelled virtually via an analytic free energy function (Mehta and Kofke 1993 Mol. Phys. 79 39), the GEMC method takes on new pedagogical significance as an efficient means of analysing fluctuations and illuminating the statistical foundation of phase behaviour in finite systems. Here we extend this virtual GEMC method to binary fluid mixtures and demonstrate its implementation and instructional value with two applications: (1) a lattice model of simple mixtures and polymer blends and (2) a free-volume model of a complex mixture of colloids and polymers. We present algorithms for performing Monte Carlo trial moves in the virtual Gibbs ensemble, validate the method by computing fluid demixing phase diagrams, and analyse the dependence of fluctuations on system size. Our open-source simulation programs, coded in the platform-independent Java language, are suitable for use in classroom, tutorial, or computational laboratory settings.
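For orientation, the standard Gibbs ensemble acceptance probability for transferring one particle between the two simulation cells (the textbook form, not the virtual-phase variant developed in the paper) is sketched below.

```python
import numpy as np

def gemc_transfer_acceptance(n_src, n_dst, v_src, v_dst, delta_u, beta):
    """Acceptance probability for moving one particle from the source box
    (n_src particles, volume v_src) to the destination box (n_dst, v_dst):

        min{1, [n_src * v_dst / ((n_dst + 1) * v_src)] * exp(-beta * delta_u)},

    where delta_u is the total potential energy change of both boxes."""
    ratio = n_src * v_dst / ((n_dst + 1) * v_src)
    return min(1.0, ratio * np.exp(-beta * delta_u))
```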
EnsembleGraph: Interactive Visual Analysis of Spatial-Temporal Behavior for Ensemble Simulation Data
Shu, Qingya; Guo, Hanqi; Che, Limei; Yuan, Xiaoru; Liu, Junfeng; Liang, Jie
2016-04-19
We present EnsembleGraph, a novel visualization framework for analyzing ensemble simulation data, intended to help scientists understand behavior similarities between ensemble members over space and time. A graph-based representation is used to visualize individual spatiotemporal regions with similar behaviors, which are extracted by hierarchical clustering algorithms. A user interface with multiple linked views is provided, which enables users to explore, locate, and compare regions that have similar behaviors between ensemble members; users can then investigate and analyze the selected regions in detail. The driving application of this paper is the study of regional emission influences on tropospheric ozone, which is based on ensemble simulations conducted with different anthropogenic emission absences using the MOZART-4 (model of ozone and related tracers, version 4) model. We demonstrate the effectiveness of our method by visualizing the MOZART-4 ensemble simulation data and evaluating the relative regional emission influences on tropospheric ozone concentrations. Positive feedback from domain experts and two case studies demonstrates the efficiency of our method.
Kikombo, Andrew Kilinga; Asai, Tetsuya; Amemiya, Yoshihito
We investigated the implications of static noise in a pulse-density modulator based on a vestibulo-ocular reflex model. We constructed a simple neuromorphic circuit consisting of an ensemble of single-electron devices and confirmed that static noise (heterogeneity in circuit parameters) introduced into the network indeed played an important role in improving the fidelity with which neurons could encode signals whose input frequencies are higher than the intrinsic response frequencies of single neurons. Through Monte-Carlo based computer simulations, we demonstrated that the heterogeneous network could correctly encode signals with input frequencies as high as 1 GHz, twice the range for single (or a network of homogeneous) neurons.
Monte Carlo Particle Lists: MCPL
Kittelmann, Thomas; Knudsen, Erik B; Willendrup, Peter; Cai, Xiao Xiao; Kanaki, Kalliopi
2016-01-01
A binary format with lists of particle state information, for interchanging particles between various Monte Carlo simulation applications, is presented. Portable C code for file manipulation is made available to the scientific community, along with converters and plugins for several popular simulation packages.
Ensemble data assimilation with an adjusted forecast spread
Sabrina Rainwater
2013-04-01
Ensemble data assimilation typically evolves an ensemble of model states whose spread is intended to represent the algorithm's uncertainty about the state of the physical system that produces the data. The analysis phase treats the forecast ensemble as a random sample from a background distribution, and it transforms the ensemble according to the background and observation error statistics to provide an appropriate sample for the next forecast phase. We find that in the presence of model nonlinearity and model error, it can be fruitful to rescale the ensemble spread prior to the forecast and then reverse this rescaling after the forecast. We call this approach forecast spread adjustment, which we discuss and test in this article using an ensemble Kalman filter and a 2005 model due to Lorenz. We argue that forecast spread adjustment provides a tunable parameter that is complementary to covariance inflation, which cumulatively increases ensemble spread to compensate for underestimation of uncertainty. We also show that as the adjustment parameter approaches zero, the filter approaches the extended Kalman filter if the ensemble size is sufficiently large. We find that varying the adjustment parameter can significantly reduce analysis and forecast errors in some cases. We evaluate how the improvement provided by forecast spread adjustment depends on ensemble size, observation error and model error. Our results indicate that the technique is most effective for small ensembles, small observation error and large model error, though the effectiveness depends significantly on the nature of the model error.
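A schematic of the forecast spread adjustment described above: shrink (or inflate) the spread about the ensemble mean by a factor alpha before the forecast and undo the rescaling afterwards. The function names and the exact placement of the rescaling are illustrative assumptions, not the authors' code.

```python
import numpy as np

def adjusted_forecast(ensemble, forecast_model, alpha):
    """ensemble: array (n_members, n_state); forecast_model advances one member
    through a forecast cycle; alpha is the tunable spread-adjustment parameter."""
    mean = ensemble.mean(axis=0)
    rescaled = mean + alpha * (ensemble - mean)          # adjust spread before the forecast
    forecast = np.array([forecast_model(m) for m in rescaled])
    f_mean = forecast.mean(axis=0)
    return f_mean + (forecast - f_mean) / alpha          # reverse the adjustment afterwards
```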
De praeceptis ferendis: good practice in multi-model ensembles
I. Kioutsioukis
2014-06-01
Ensembles of air quality models have been formally and empirically shown to outperform single models in many cases. Evidence suggests that ensemble error is reduced when the members form a diverse and accurate ensemble. Diversity and accuracy are hence two factors that should be taken care of while designing ensembles in order for them to provide better predictions. There exists a trade-off between diversity and accuracy, whereby one cannot be gained without sacrificing some of the other. Theoretical results like the bias-variance-covariance decomposition and the accuracy-diversity decomposition are linked together and support the importance of creating an ensemble that incorporates both elements. Hence, the common practice of unconditional averaging of models without prior manipulation limits the advantages of ensemble averaging. We demonstrate the importance of ensemble accuracy and diversity through an inter-comparison of ensemble products for which a sound mathematical framework exists, and provide specific recommendations for model selection and weighting for multi-model ensembles. To this end we have devised statistical tools that can be used for diagnostic evaluation of ensemble modelling products, complementing existing operational methods.
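For reference, the bias-variance-covariance decomposition invoked above is usually written as follows for the mean of an M-member ensemble predicting a target t (standard machine-learning notation, included as background rather than quoted from the paper).

```latex
% Mean-squared error of the ensemble mean \bar{f} = (1/M)\sum_i f_i with respect to a target t:
\mathbb{E}\bigl[(\bar{f}-t)^2\bigr]
  = \overline{\mathrm{bias}}^{\,2}
  + \frac{1}{M}\,\overline{\mathrm{var}}
  + \Bigl(1-\frac{1}{M}\Bigr)\,\overline{\mathrm{covar}},
% with
\overline{\mathrm{bias}} = \frac{1}{M}\sum_i \mathbb{E}[f_i - t], \quad
\overline{\mathrm{var}} = \frac{1}{M}\sum_i \mathbb{E}\bigl[(f_i-\mathbb{E}f_i)^2\bigr], \quad
\overline{\mathrm{covar}} = \frac{1}{M(M-1)}\sum_{i\neq j} \mathbb{E}\bigl[(f_i-\mathbb{E}f_i)(f_j-\mathbb{E}f_j)\bigr].
```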
Giancarlo Mauri
2013-09-01
An extensive rewiring of cell metabolism supports enhanced proliferation in cancer cells. We propose a systems-level approach to describe this phenomenon based on Flux Balance Analysis (FBA). The approach does not rely on an explicit cell biomass formation reaction to be maximized, but takes into account an ensemble of alternative flux distributions that match the cancer metabolic rewiring (CMR) phenotype description. The underlying concept is that analysis of the common/distinguishing properties of the ensemble can provide indications on how CMR is achieved and sustained and thus on how it can be controlled.
Applications of Monte Carlo Methods in Calculus.
Gordon, Sheldon P.; Gordon, Florence S.
1990-01-01
Discusses the application of probabilistic ideas, especially Monte Carlo simulation, to calculus. Describes some applications using the Monte Carlo method: Riemann sums; maximizing and minimizing a function; mean value theorems; and testing conjectures. (YP)
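The simplest of the listed applications, a Monte Carlo estimate of a definite integral (the "Riemann sums" item), looks like this; the integrand and interval are arbitrary examples.

```python
import numpy as np

rng = np.random.default_rng(3)

# Estimate the integral of f on [a, b] as (b - a) times the mean of f at
# uniformly random points; the standard error shrinks like 1/sqrt(n).
f = lambda x: np.sin(x) ** 2
a, b, n = 0.0, np.pi, 100_000
x = rng.uniform(a, b, n)
print((b - a) * f(x).mean())     # close to the exact value pi/2 = 1.5708
```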
Quantifying uncertainties in primordial nucleosynthesis without Monte Carlo simulations
Fiorentini, G; Sarkar, S; Villante, F L
1998-01-01
We present a simple method for determining the (correlated) uncertainties of the light element abundances expected from big bang nucleosynthesis, which avoids the need for lengthy Monte Carlo simulations. Our approach helps to clarify the role of the different nuclear reactions contributing to a particular elemental abundance and makes it easy to implement energy-independent changes in the measured reaction rates. As an application, we demonstrate how this method simplifies the statistical estimation of the nucleon-to-photon ratio through comparison of the standard BBN predictions with the observationally inferred abundances.
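The general schema of such a linearized, Monte Carlo-free error propagation, assuming logarithmic sensitivities of the abundances Y_i to the reaction rates R_k; the precise coefficients and correlations are given in the paper, and this is only the generic form.

```latex
\delta \ln Y_i \;\simeq\; \sum_k \lambda_{ik}\, \delta \ln R_k,
\qquad
\lambda_{ik} \equiv \frac{\partial \ln Y_i}{\partial \ln R_k},
\qquad
\mathrm{cov}(\ln Y_i, \ln Y_j) \;\simeq\; \sum_k \lambda_{ik}\lambda_{jk}\,\sigma^2(\ln R_k).
```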
(U) Introduction to Monte Carlo Methods
Hungerford, Aimee L. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2017-03-20
Monte Carlo methods are very valuable for representing solutions to particle transport problems. Here we describe a “cook book” approach to handling the terms in a transport equation using Monte Carlo methods. Focus is on the mechanics of a numerical Monte Carlo code, rather than the mathematical foundations of the method.
ZHENG Fei; ZHU Jiang
2010-01-01
The initial ensemble perturbations for an ensemble data assimilation system are expected to reasonably sample model uncertainty at the time of analysis to further reduce analysis uncertainty. Therefore, the careful choice of an initial ensemble perturbation method that dynamically cycles ensemble perturbations is required for the optimal performance of the system. Based on the multivariate empirical orthogonal function (MEOF) method, a new ensemble initialization scheme is developed to generate balanced initial perturbations for the ensemble Kalman filter (EnKF) data assimilation, with a reasonable consideration of the physical relationships between different model variables. The scheme is applied in assimilation experiments with a global spectral atmospheric model and with real observations. The proposed perturbation method is compared to the commonly used method of spatially-correlated random perturbations. The comparisons show that the model uncertainties prior to the first analysis time, which are forecasted from the balanced ensemble initial fields, maintain a much more reasonable spread and a more accurate forecast error covariance than those from the randomly perturbed initial fields. The analysis results are further improved by the balanced ensemble initialization scheme due to more accurate background information. Also, a 20-day continuous assimilation experiment shows that the ensemble spreads for each model variable are still retained in reasonable ranges without considering additional perturbations or inflations during the assimilation cycles, while the ensemble spreads from the randomly perturbed initialization scheme decrease and collapse rapidly.
Seasonal hydrological ensemble forecasts over Europe
Arnal, Louise; Wetterhall, Fredrik; Pappenberger, Florian
2015-04-01
Seasonal forecasts have an important socio-economic value in hydro-meteorological forecasting. The applications are for example hydropower management, spring flood prediction and water resources management. The latter includes prediction of low flows, which is essential for navigation, water quality assessment, droughts and agricultural water needs. Traditionally, seasonal hydrological forecasts are done using the observed discharge from previous years, so-called Ensemble Streamflow Prediction (ESP). With the recent increasing development of seasonal meteorological forecasts, the incentive for developing and improving seasonal hydrological forecasts is great. In this study, a seasonal hydrological forecast, driven by the ECMWF's System 4 (SEA), was compared with an ESP of modelled discharge using observations. The hydrological model used for both forecasts was the LISFLOOD model, run over a European domain with a spatial resolution of 5 km. The forecasts were produced from 1990 until the present time, with a daily time step. They were issued once a month with a lead time of seven months. The SEA forecasts are constituted of 15 ensemble members, extended to 51 members every three months. The ESP forecasts comprise 20 ensembles and served as a benchmark for this comparative study. The forecast systems were compared using a diverse set of verification metrics, such as continuous ranked probability scores, ROC curves, anomaly correlation coefficients and Nash-Sutcliffe efficiency coefficients. These metrics were computed over several time-scales, ranging from a weekly to a six-months basis, for each season. The evaluation enabled the investigation of several aspects of seasonal forecasting, such as limits of predictability, timing of high and low flows, as well as exceedance of percentiles. The analysis aimed at exploring the spatial distribution and temporal evolution of the limits of predictability.
Seasonal hydrological ensemble forecasts over Europe
Arnal, Louise; Wetterhall, Fredrik; Stephens, Elisabeth; Cloke, Hannah; Pappenberger, Florian
2016-04-01
This study investigates the limits of predictability in dynamical seasonal discharge forecasting, in both space and time, over Europe. Seasonal forecasts have an important socioeconomic value. Applications are numerous and cover hydropower management, spring flood prediction, low flow prediction for navigation and agricultural water demands. Additionally, the constant increase in NWP skill for longer lead times and the predicted increase in the intensity and frequency of hydro-meteorological extremes, have amplified the incentive to promote and further improve hydrological forecasts on sub-seasonal to seasonal timescales. In this study, seasonal hydrological forecasts (SEA), driven by the ECMWF's System 4 in hindcast mode, were analysed against an Ensemble Streamflow Prediction (ESP) benchmark. The ESP was forced with an ensemble of resampled historical meteorological observations and started with perfect initial conditions. Both forecasts were produced by the LISFLOOD model, run on the pan-European scale with a spatial resolution of 5 by 5 km. The forecasts were issued monthly on a daily time step, from 1990 until the current time, up to a lead time of 7 months. The seasonal discharge forecasts were analysed against the ESP on a catchment scale in terms of their accuracy, skill and sharpness, using a diverse set of verification metrics (e.g. KGE, CRPSS and ROC). Additionally, a reverse-ESP was constructed by forcing the LISFLOOD model with a single perfect meteorological set of observations and initiated from an ensemble of resampled historical initial conditions. The comparison of the ESP with the reverse-ESP approach enabled the identification of the respective contribution of meteorological forcings and hydrologic initial conditions errors to seasonal discharge forecasting uncertainties in Europe. These results could help pinpoint target elements of the forecasting chain which, after being improved, could lead to substantial increase in discharge predictability
Bayesian network ensemble as a multivariate strategy to predict radiation pneumonitis risk
Lee, Sangkyu, E-mail: sangkyu.lee@mail.mcgill.ca; Ybarra, Norma; Jeyaseelan, Krishinima; Seuntjens, Jan; El Naqa, Issam [Medical Physics Unit, McGill University, Montreal, Quebec H3G1A4 (Canada); Faria, Sergio; Kopek, Neil; Brisebois, Pascale [Department of Radiation Oncology, Montreal General Hospital, Montreal, H3G1A4 (Canada); Bradley, Jeffrey D.; Robinson, Clifford [Radiation Oncology, Washington University School of Medicine in St. Louis, St. Louis, Missouri 63110 (United States)
2015-05-15
Purpose: Prediction of radiation pneumonitis (RP) has been shown to be challenging due to the involvement of a variety of factors including dose–volume metrics and radiosensitivity biomarkers. Some of these factors are highly correlated and might affect prediction results when combined. Bayesian network (BN) provides a probabilistic framework to represent variable dependencies in a directed acyclic graph. The aim of this study is to integrate the BN framework and a systems’ biology approach to detect possible interactions among RP risk factors and exploit these relationships to enhance both the understanding and prediction of RP. Methods: The authors studied 54 nonsmall-cell lung cancer patients who received curative 3D-conformal radiotherapy. Nineteen RP events were observed (common toxicity criteria for adverse events grade 2 or higher). Serum concentration of the following four candidate biomarkers were measured at baseline and midtreatment: alpha-2-macroglobulin, angiotensin converting enzyme (ACE), transforming growth factor, interleukin-6. Dose-volumetric and clinical parameters were also included as covariates. Feature selection was performed using a Markov blanket approach based on the Koller–Sahami filter. The Markov chain Monte Carlo technique estimated the posterior distribution of BN graphs built from the observed data of the selected variables and causality constraints. RP probability was estimated using a limited number of high posterior graphs (ensemble) and was averaged for the final RP estimate using Bayes’ rule. A resampling method based on bootstrapping was applied to model training and validation in order to control under- and overfit pitfalls. Results: RP prediction power of the BN ensemble approach reached its optimum at a size of 200. The optimized performance of the BN model recorded an area under the receiver operating characteristic curve (AUC) of 0.83, which was significantly higher than multivariate logistic regression (0
A Framework for Non-Equilibrium Statistical Ensemble Theory
BI Qiao; HE Zu-Tan; LIU Jie
2011-01-01
Since Gibbs synthesized a general equilibrium statistical ensemble theory, many theorists have attempted to generalize the Gibbsian theory to the domain of non-equilibrium phenomena; however, the theory of non-equilibrium phenomena cannot be said to be as firmly established as the Gibbsian ensemble theory. In this work, we present a framework for the non-equilibrium statistical ensemble formalism based on a subdynamic kinetic equation (SKE) rooted in the Brussels-Austin school and followed by some up-to-date works. The key of the construction is to use a similarity transformation between the Gibbsian ensemble formalism based on the Liouville equation and the subdynamic ensemble formalism based on the SKE. Using this formalism, we study the spin-boson system, for the cases of weak and strong coupling, and easily obtain the reduced density operators for the canonical ensembles.
Monte Carlo calculations of the physical properties of RDX, β-HMX, and TATB
Sewell, T.D.
1997-09-01
Atomistic Monte Carlo simulations in the NpT ensemble are used to calculate the physical properties of crystalline RDX, β-HMX, and TATB. Among the issues being considered are the effects of various treatments of the intermolecular potential, the inclusion of intramolecular flexibility, and the simulation-size dependence of the results. Calculations of the density, lattice energy, and lattice parameters are made over a wide range of pressures, thereby allowing for predictions of the bulk and linear coefficients of isothermal expansion of the crystals. Comparison with experiment is made where possible.
An example of the management of a mussel-farming basin: the Bay of Mont St-Michel
Gerla, Daniel
1990-01-01
Since its establishment in 1954, mussel farming in the Bay of Mont-Saint-Michel has been struck by very serious crises. In collaboration with ISTPM and later IFREMER, the producers were able to analyse the causes of these drops in production and to put in place measures suited to the situation. The most recent of these was a restructuring of the layout of all the bouchot mussel beds in the bay, from 1985 to 1981. This commitment to overall management of the Vivier-sur-Mer mussel-farming basin...
Monte Carlo simulations of hole dynamics in SiGe/Si terahertz quantum-cascade structures
Ikonić, Z.; Kelsall, R. W.; Harrison, P.
2004-06-01
A detailed analysis of hole transport in cascaded p-Si/SiGe quantum well structures is performed using ensemble Monte Carlo simulations. The hole subband structure is calculated using the 6×6 k·p model, and then used to find carrier relaxation rates due to the alloy disorder, acoustic and optical phonon scattering. The simulation accounts for the in-plane k-space anisotropy of both the hole subband structure and the scattering rates. Results are presented for prototype terahertz Si/SiGe quantum cascade structures.
Chemical Potential of Benzene Fluid from Monte Carlo Simulation with Anisotropic United Atom Model
Mahfuzh Huda
2013-07-01
The profile of the chemical potential of benzene fluid has been investigated using the Anisotropic United Atom (AUA) model. A Monte Carlo simulation in the canonical ensemble was done to obtain the isotherm of benzene fluid, from which the excess part of the chemical potential was calculated. A surge of potential energy is observed during the simulation at high temperature, which is related to the gas-liquid phase transition. The isotherm profile indicates the tendency of benzene to condense due to the strong attractive interaction. The results show that the chemical potential of benzene rapidly deviates from its ideal gas counterpart even at low density.
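The excess chemical potential referred to here is commonly estimated from canonical-ensemble configurations by Widom test-particle insertion, mu_ex = -kT ln<exp(-dU/kT)>. The sketch below is a generic illustration of that estimator, not the authors' code; the pair-energy function, box geometry and sampling parameters are placeholders.

```python
import numpy as np

def widom_excess_mu(configs, box, pair_energy, kT, n_insert=1000, rng=None):
    """Estimate the excess chemical potential by Widom test-particle insertion.

    configs     : list of (N, 3) arrays of particle coordinates (canonical samples)
    box         : cubic box length
    pair_energy : pair_energy(r2) -> energy for squared distances r2 (assumed model)
    kT          : thermal energy
    """
    rng = rng or np.random.default_rng()
    boltz = []
    for pos in configs:
        for _ in range(n_insert):
            trial = rng.uniform(0.0, box, size=3)            # random ghost-particle position
            d = pos - trial
            d -= box * np.round(d / box)                      # minimum-image convention
            du = np.sum(pair_energy(np.sum(d * d, axis=1)))   # insertion energy
            boltz.append(np.exp(-du / kT))
    return -kT * np.log(np.mean(boltz))                       # mu_ex = -kT ln <exp(-dU/kT)>
```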
Cluster ensembles, quantization and the dilogarithm
Fock, Vladimir; Goncharov, Alexander B.
2009-01-01
, possibly degenerate, and the space has a Poisson structure. The map is compatible with these structures. The dilogarithm together with its motivic and quantum avatars plays a central role in the cluster ensemble structure. We define a non-commutative -deformation of the -space. When is a root of unity...... group . It is an algebraic-geometric avatar of higher Teichmüller theory on related to . We suggest that there exists a duality between the and spaces. In particular, we conjecture that the tropical points of one of the spaces parametrise a basis in the space of functions on the Langlands dual space. We...
Supervised Ensemble Classification of Kepler Variable Stars
Bass, Gideon
2016-01-01
Variable star analysis and classification is an important task in the understanding of stellar features and processes. While historically classifications have been done manually by highly skilled experts, the recent and rapid expansion in the quantity and quality of data has demanded new techniques, most notably automatic classification through supervised machine learning. We present an expansion of existing work in the field by analyzing variable stars in the Kepler field using an ensemble approach, combining multiple characterization and classification techniques to produce improved classification rates. Classifications for each of the roughly 150,000 stars observed by Kepler are produced, separating the stars into one of 14 variable star classes.
Modeling Coordination Problems in a Music Ensemble
Frimodt-Møller, Søren R.
2008-01-01
This paper considers, in general terms, how musicians are able to coordinate through rational choices in a situation of (temporary) doubt in an ensemble performance. A fictitious example involving a 5-bar development in an unknown piece of music is analyzed in terms of epistemic logic, more specifically a multi-agent system, where it is shown that perfect coordination can only be certain to take place if the musicians have common knowledge of certain rules of the composition. We subsequently argue, however, that the musicians need not agree on the central features of the piece of music in order...
Asymptotic expansions for the Gaussian unitary ensemble
Haagerup, Uffe; Thorbjørnsen, Steen
2012-01-01
Let g : ℝ → ℂ be a C∞-function with all derivatives bounded and let tr_n denote the normalized trace on the n × n matrices. In Ref. 3 Ercolani and McLaughlin established asymptotic expansions of the mean value E{tr_n(g(X_n))} for a rather general class of random matrices X_n, including the Gaussian Unitary Ensemble (GUE). Using an analytical approach, we provide in the present paper an alternative proof of this asymptotic expansion in the GUE case. Specifically, we derive for a random matrix X_n that ... where k is an arbitrary positive integer. Considered as mappings of g, we determine the coefficients...
Accurate atom counting in mesoscopic ensembles.
Hume, D B; Stroescu, I; Joos, M; Muessel, W; Strobel, H; Oberthaler, M K
2013-12-20
Many cold atom experiments rely on precise atom number detection, especially in the context of quantum-enhanced metrology where effects at the single particle level are important. Here, we investigate the limits of atom number counting via resonant fluorescence detection for mesoscopic samples of trapped atoms. We characterize the precision of these fluorescence measurements beginning from the single-atom level up to more than one thousand. By investigating the primary noise sources, we obtain single-atom resolution for atom numbers as high as 1200. This capability is an essential prerequisite for future experiments with highly entangled states of mesoscopic atomic ensembles.
Accessing Many-Body Localized States through the Generalized Gibbs Ensemble
Inglis, Stephen; Pollet, Lode
2016-09-01
We show how the thermodynamic properties of large many-body localized systems can be studied using quantum Monte Carlo simulations. We devise a heuristic way of constructing local integrals of motion of high quality, which are added to the Hamiltonian in conjunction with Lagrange multipliers. The ground state simulation of the shifted Hamiltonian corresponds to a high-energy state of the original Hamiltonian in the case of exactly known local integrals of motion. The inevitable mixing between eigenstates as a consequence of nonperfect integrals of motion is weak enough such that the characteristics of many-body localized systems are not averaged out, unlike the standard ensembles of statistical mechanics. Our method paves the way to study higher dimensions and indicates that a fully many-body localized phase in 2D, where (nearly) all eigenstates are localized, is likely to exist.
Sturm, Irene; Treder, Matthias S.; Miklody, Daniel
2015-01-01
When listening to ensemble music even non-musicians can follow single instruments effortlessly. Electrophysiological indices for neural sensory encoding of separate streams have been described using oddball paradigms, which utilize brain reactions to sound events that deviate from a repeating standard pattern. Obviously, these paradigms put constraints on the compositional complexity of the musical stimulus. Here, we apply a regression-based method of multivariate EEG analysis in order to reveal the neural encoding of separate voices of naturalistic ensemble music that is based on cortical responses to tone onsets, such as N1/P2 ERP components. Music clips (resembling minimalistic electro-pop) were presented to 11 subjects, either in an ensemble version (drums, bass, keyboard) or in the corresponding three solo versions. For each instrument we train a spatio-temporal regression filter...
Validation of the Air Force Weather Agency Ensemble Prediction Systems
2014-03-27
to deterministic models. Results from ensemble weather input into operational risk management (ORM), destruction of enemy air defense simulations... growth during the analysis period (Toth and Kalnay, 1993; Toth and Kalnay, 1997). From this framework the ensemble transform bred vector, ensemble... features. Each of its 10 members is run independently using different configurations in the framework of the Weather Research and Forecasting (WRF)...
Unconditional two-mode squeezing of separated atomic ensembles
Parkins, A S; Solano, E
2005-01-01
We propose schemes for the unconditional preparation of a two-mode squeezed state of effective bosonic modes realized in a pair of atomic ensembles interacting collectively with optical cavity and laser fields. The scheme uses Raman transitions between stable atomic ground states and under ideal conditions produces pure entangled states in the steady state. The scheme works both for ensembles confined within a single cavity and for ensembles confined in separate, cascaded cavities.
The Moment Convergence Rates for Largest Eigenvalues of β Ensembles
Jun Shan XIE
2013-01-01
The paper focuses on the largest eigenvalues of the β-Hermite ensemble and the β-Laguerre ensemble. In particular, we obtain the precise moment convergence rates of their largest eigenvalues. The results are motivated by the complete convergence for partial sums of i.i.d. random variables, and the proofs depend on the small deviations for largest eigenvalues of the β ensembles and tail inequalities of the general β Tracy–Widom law.
Extracting Value from Ensembles for Cloud-Free Forecasting
2011-09-01
for Medium-range Weather Forecasting; EMean, ensemble mean; ETR, ensemble transform with rescaling; EUMETSAT, European Organization for the... transform method (ET) with rescaling (ETR) to define the initial atmospheric uncertainty (Wei et al. 2008). Adapted from the ET method devised by... variances of each grid point to further restrain the initial ensemble spread. The ETR method replaced the breeding method in GEFS during NCEP's May...
On sequential observation processing in localized ensemble Kalman filters
Nerger, Lars
2014-01-01
The different variants of current ensemble square-root Kalman filters assimilate either all observations at once or perform a sequence in which batches of observations or each single observation is assimilated. The sequential observation processing is used in filter algorithms like the ensemble adjustment Kalman filter (EAKF) and the ensemble square-root filter (EnSRF) and can result in computationally efficient algorithms because matrix inversions in the observation space are reduced to the ...
Man, Jun [Zhejiang Provincial Key Laboratory of Agricultural Resources and Environment, Institute of Soil and Water Resources and Environmental Science, College of Environmental and Resource Sciences, Zhejiang University, Hangzhou China; Zhang, Jiangjiang [Zhejiang Provincial Key Laboratory of Agricultural Resources and Environment, Institute of Soil and Water Resources and Environmental Science, College of Environmental and Resource Sciences, Zhejiang University, Hangzhou China; Li, Weixuan [Pacific Northwest National Laboratory, Richland Washington USA; Zeng, Lingzao [Zhejiang Provincial Key Laboratory of Agricultural Resources and Environment, Institute of Soil and Water Resources and Environmental Science, College of Environmental and Resource Sciences, Zhejiang University, Hangzhou China; Wu, Laosheng [Department of Environmental Sciences, University of California, Riverside California USA
2016-10-01
The ensemble Kalman filter (EnKF) has been widely used in parameter estimation for hydrological models. The focus of most previous studies was to develop more efficient analysis (estimation) algorithms. On the other hand, it is intuitively understandable that a well-designed sampling (data-collection) strategy should provide more informative measurements and subsequently improve the parameter estimation. In this work, a Sequential Ensemble-based Optimal Design (SEOD) method, coupled with EnKF, information theory and sequential optimal design, is proposed to improve the performance of parameter estimation. Based on the first-order and second-order statistics, different information metrics including the Shannon entropy difference (SD), degrees of freedom for signal (DFS) and relative entropy (RE) are used to design the optimal sampling strategy, respectively. The effectiveness of the proposed method is illustrated by synthetic one-dimensional and two-dimensional unsaturated flow case studies. It is shown that the designed sampling strategies can provide more accurate parameter estimation and state prediction compared with conventional sampling strategies. Optimal sampling designs based on various information metrics perform similarly in our cases. The effect of ensemble size on the optimal design is also investigated. Overall, larger ensemble size improves the parameter estimation and convergence of optimal sampling strategy. Although the proposed method is applied to unsaturated flow problems in this study, it can be equally applied in any other hydrological problems.
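Of the information metrics listed above, the relative entropy has a simple closed form when the prior and posterior parameter distributions are approximated as Gaussians built from the ensembles before and after assimilation. The sketch below illustrates only that Gaussian approximation; the ensembles, dimensions and function names are illustrative and are not part of the SEOD implementation.

```python
import numpy as np

def gaussian_relative_entropy(mu_post, cov_post, mu_prior, cov_prior):
    """Relative entropy (KL divergence) D(posterior || prior) for two Gaussians.

    RE = 0.5 * [ tr(Cp^-1 Ca) + (mp - ma)^T Cp^-1 (mp - ma) - k + ln(det Cp / det Ca) ]
    where (ma, Ca) are the posterior moments and (mp, Cp) the prior moments.
    """
    k = len(mu_post)
    cov_prior_inv = np.linalg.inv(cov_prior)
    diff = mu_prior - mu_post
    _, logdet_prior = np.linalg.slogdet(cov_prior)
    _, logdet_post = np.linalg.slogdet(cov_post)
    return 0.5 * (np.trace(cov_prior_inv @ cov_post)
                  + diff @ cov_prior_inv @ diff
                  - k + logdet_prior - logdet_post)

# Illustrative ensemble-based moments: prior and (hypothetical) analysis ensembles.
rng = np.random.default_rng(1)
prior = rng.normal(0.0, 1.0, size=(100, 3))
post = prior * 0.5 + 0.2
re = gaussian_relative_entropy(post.mean(0), np.cov(post.T),
                               prior.mean(0), np.cov(prior.T))
```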
Ensemble-based forecasting at Horns Rev: Ensemble conversion and kernel dressing
Pinson, Pierre; Madsen, Henrik
...The obtained ensemble forecasts of wind power are then converted into predictive distributions with an original adaptive kernel dressing method. The shape of the kernels is driven by a mean-variance model, the parameters of which are recursively estimated in order to maximize the overall skill of obtained...
2011-09-01
variable is appropriately sized for the region (UCAR 2010). 4. An Isotropic Joint-Ensemble. Majumdar and Finochio (2010) develop a probability circle... Forecasting, 22, 671–675. UCAR, cited 2010: NCEP Perturbation Method. [Available online at http://www.meted.ucar.edu/nwp/pcu2/ens_matrix
The MIP Ensemble Simulation: Local Ensemble Statistics in the Cosmic Web
Aragon-Calvo, M A
2012-01-01
Here we present a novel N-body simulation technique that allows us to compute ensemble statistics on a local basis, directly relating halo properties to their environment. This is achieved by the use of an ensemble simulation in which the otherwise independent realizations share the same fluctuations above a given cut-off scale. This produces a constrained ensemble where the LSS is common to all realizations while having an independent halo population. By generating a large number of semi-independent realizations we can effectively increase the local halo density by an arbitrary factor thus breaking the fundamental limit of the finite halo density (for a given halo mass range) determined by the halo mass function. This technique allows us to compute local ensemble statistics of the matter/halo distribution at a particular position in space, removing the intrinsic stochasticity in the halo formation process and directly relating halo properties to their environment. This is a major improvement over global desc...
A parallel systematic-Monte Carlo algorithm for exploring conformational space.
Perez-Riverol, Yasset; Vera, Roberto; Mazola, Yuliet; Musacchio, Alexis
2012-01-01
Computational algorithms for exploring the conformational space of small molecules form a complex and computationally demanding field in chemoinformatics. In this paper a hybrid algorithm to explore the conformational space of organic molecules is presented. This hybrid algorithm is based on a systematic search approach combined with a Monte Carlo-based method in order to obtain an ensemble of low-energy conformations that captures the flexibility of small chemical compounds. The Monte Carlo method uses the Metropolis criterion to accept or reject a conformation, with an in-house implementation of the MMFF94s force field to calculate the conformational energy. The parallel design of this algorithm, based on the message passing interface (MPI) paradigm, was implemented. The results showed a performance increase in terms of speed and efficiency.
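The Monte Carlo half of such a hybrid search reduces to the Metropolis criterion applied to conformational energies. The following is a minimal, generic sketch of that accept/reject loop; the perturbation and energy functions are placeholders standing in for a torsion move and the MMFF94s evaluation, and none of this is the authors' MPI implementation.

```python
import math
import random

def metropolis_accept(e_old, e_new, kT):
    """Metropolis criterion: always accept downhill moves, accept uphill
    moves with probability exp(-(E_new - E_old) / kT)."""
    if e_new <= e_old:
        return True
    return random.random() < math.exp(-(e_new - e_old) / kT)

def sample_conformers(initial, perturb, energy, kT, n_steps, keep_every=10):
    """Collect an ensemble of low-energy conformations.

    perturb(conf) -> new trial conformation (e.g. a random torsion change)
    energy(conf)  -> conformational energy (placeholder for the force field)
    """
    conf, e = initial, energy(initial)
    ensemble = []
    for step in range(n_steps):
        trial = perturb(conf)
        e_trial = energy(trial)
        if metropolis_accept(e, e_trial, kT):
            conf, e = trial, e_trial          # move accepted
        if step % keep_every == 0:
            ensemble.append((conf, e))        # record current conformation
    return ensemble
```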
Monte Carlo transport simulation of velocity undershoot in zinc blende and wurtzite InN
Wang, Shulong; Liu, Hongxia; Gao, Bo; Zhuo, Qingqing [School of Microelectronics, Key Laboratory of Wide Band-gap Semiconductor Materials and Device, Xidian University, Xi'an, 710071 (China)]
2012-09-15
Velocity undershoot in zinc blende (ZB) and wurtzite (WZ) InN is investigated by ensemble Monte Carlo (EMC) calculation. The results show that velocity undershoot arises from the relatively long energy relaxation time compared with the momentum relaxation time. Monte Carlo transport simulations over a wide range of electric fields are presented in the paper. The results show that velocity undershoot affects electron transport greatly, compared with velocity overshoot, when the electric field changes quickly with time and space. A comparative study of WZ and ZB InN shows that WZ InN has more advantages in device applications due to its excellent electron transport properties.
Lucena, Sebastião M P; Mileo, Paulo G M; Silvino, Pedro F G; Cavalcante, Célio L
2011-12-01
The adsorption equilibrium of methane in PCN-14 was simulated by the Monte Carlo technique in the grand canonical ensemble. A new force field was proposed for the methane/PCN-14 system, and the temperature dependence of the molecular siting was investigated. A detailed study of the statistics of the center of mass and potential energy showed a surprising site behavior with no energy barriers between weak and strong sites, allowing open metal sites to guide methane molecules to other neighboring sites. Moreover, this study showed that a model assuming weakly adsorbing open metal clusters in PCN-14, densely populated only at low temperatures (below 150 K), can explain published experimental data. These results also explain previously observed discrepancies between neutron diffraction experiments and Monte Carlo simulations.
Velazquez, L.; Castro-Palacio, J. C.
2013-07-01
Recently, Velazquez and Curilef proposed a methodology to extend Monte Carlo algorithms based on a canonical ensemble which aims to overcome slow sampling problems associated with temperature-driven discontinuous phase transitions. We show in this work that Monte Carlo algorithms extended with this methodology also exhibit a remarkable efficiency near a critical point. Our study is performed for the particular case of a two-dimensional four-state Potts model on a square lattice with periodic boundary conditions. This analysis reveals that the extended version of Metropolis importance sampling is more efficient than the usual Swendsen-Wang and Wolff cluster algorithms. These results demonstrate the effectiveness of this methodology to improve the efficiency of MC simulations of systems that undergo any type of temperature-driven phase transition.
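For orientation, the baseline against which the extended Metropolis importance sampling and the cluster algorithms are compared is the standard single-spin-flip Metropolis update of the q-state Potts model. The sketch below shows only that baseline (not the extended-ensemble algorithm itself) on a periodic square lattice with q = 4, as in the study; lattice size, temperature and sweep count are arbitrary.

```python
import numpy as np

def potts_metropolis_sweep(spins, beta, q, rng):
    """One Metropolis sweep of the ferromagnetic q-state Potts model
    (H = -J * sum_<ij> delta(s_i, s_j), J = 1) on a periodic square lattice."""
    L = spins.shape[0]
    for _ in range(L * L):
        i, j = rng.integers(L, size=2)
        new = rng.integers(q)
        nbrs = [spins[(i + 1) % L, j], spins[(i - 1) % L, j],
                spins[i, (j + 1) % L], spins[i, (j - 1) % L]]
        # Energy change = satisfied bonds before minus satisfied bonds after (J = 1).
        dE = sum(n == spins[i, j] for n in nbrs) - sum(n == new for n in nbrs)
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            spins[i, j] = new

rng = np.random.default_rng(0)
lattice = rng.integers(4, size=(32, 32))   # q = 4 Potts model on a 32x32 lattice
for sweep in range(100):
    potts_metropolis_sweep(lattice, beta=1.0, q=4, rng=rng)
```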
Massively parallel Monte Carlo for many-particle simulations on GPUs
Anderson, Joshua A; Grubb, Thomas L; Engel, Michael; Glotzer, Sharon C
2013-01-01
Current trends in parallel processors call for the design of efficient massively parallel algorithms for scientific computing. Parallel algorithms for Monte Carlo simulations of thermodynamic ensembles of particles have received little attention because of the inherent serial nature of the statistical sampling. In this paper, we present a massively parallel method that obeys detailed balance and implement it for a system of hard disks on the GPU. We reproduce results of serial high-precision Monte Carlo runs to verify the method. This is a good test case because the hard disk equation of state over the range where the liquid transforms into the solid is particularly sensitive to small deviations away from the balance conditions. On a GeForce GTX 680, our GPU implementation executes 95 times faster than on a single Intel Xeon E5540 CPU core, enabling 17 times better performance per dollar and cutting energy usage by a factor of 10.
Deformed Gaussian Orthogonal Ensemble Analysis of the Interacting Boson Model
Pato, M P; Lima, C L; Hussein, M S; Alhassid, Y
1994-01-01
A Deformed Gaussian Orthogonal Ensemble (DGOE) which interpolates between the Gaussian Orthogonal Ensemble and a Poissonian Ensemble is constructed. This new ensemble is then applied to the analysis of the chaotic properties of the low lying collective states of nuclei described by the Interacting Boson Model (IBM). This model undergoes a transition order-chaos-order from the $SU(3)$ limit to the $O(6)$ limit. Our analysis shows that the quantum fluctuations of the IBM Hamiltonian, both of the spectrum and the eigenvectors, follow the expected behaviour predicted by the DGOE when one goes from one limit to the other.
Bayesian ensemble refinement by replica simulations and reweighting.
Hummer, Gerhard; Köfinger, Jürgen
2015-12-28
We describe different Bayesian ensemble refinement methods, examine their interrelation, and discuss their practical application. With ensemble refinement, the properties of dynamic and partially disordered (bio)molecular structures can be characterized by integrating a wide range of experimental data, including measurements of ensemble-averaged observables. We start from a Bayesian formulation in which the posterior is a functional that ranks different configuration space distributions. By maximizing this posterior, we derive an optimal Bayesian ensemble distribution. For discrete configurations, this optimal distribution is identical to that obtained by the maximum entropy "ensemble refinement of SAXS" (EROS) formulation. Bayesian replica ensemble refinement enhances the sampling of relevant configurations by imposing restraints on averages of observables in coupled replica molecular dynamics simulations. We show that the strength of the restraints should scale linearly with the number of replicas to ensure convergence to the optimal Bayesian result in the limit of infinitely many replicas. In the "Bayesian inference of ensembles" method, we combine the replica and EROS approaches to accelerate the convergence. An adaptive algorithm can be used to sample directly from the optimal ensemble, without replicas. We discuss the incorporation of single-molecule measurements and dynamic observables such as relaxation parameters. The theoretical analysis of different Bayesian ensemble refinement approaches provides a basis for practical applications and a starting point for further investigations.
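For the discrete-configuration case described above, the optimal Bayesian/maximum-entropy ensemble amounts to exponentially reweighting prior configuration weights so that the restrained ensemble average matches the measured value. The sketch below illustrates this idea for a single scalar observable, with the Lagrange multiplier found by bisection; it is a simplified illustration under those assumptions, not the EROS or replica-averaging implementation, and all names are placeholders.

```python
import numpy as np

def reweight_to_observable(obs_per_config, target, w0=None, lam_range=(-50.0, 50.0)):
    """Maximum-entropy style reweighting: w_i ~ w0_i * exp(-lam * o_i),
    with lam chosen so that the reweighted average of o matches `target`."""
    o = np.asarray(obs_per_config, dtype=float)
    w0 = np.ones_like(o) / len(o) if w0 is None else np.asarray(w0, dtype=float)

    def reweighted(lam):
        w = w0 * np.exp(-lam * (o - o.mean()))   # shift by the mean for numerical stability
        w /= w.sum()
        return w, np.dot(w, o)

    lo, hi = lam_range
    for _ in range(200):                          # bisection on the monotone map lam -> <o>
        mid = 0.5 * (lo + hi)
        w, mean_o = reweighted(mid)
        if mean_o > target:
            lo = mid                              # need a larger lam to pull <o> down
        else:
            hi = mid
    return w

# Usage sketch: per-configuration observable values and an experimental target.
weights = reweight_to_observable([1.2, 0.8, 1.5, 0.4], target=0.9)
```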
Adiabatic Passage of Collective Excitations in Atomic Ensembles
LI Yong; MIAO Yuan-Xiu; SUN Chang-Pu
2004-01-01
We describe a theoretical scheme that allows for the transfer of quantum states of atomic collective excitation between two macroscopic atomic ensembles localized in two spatially separated domains. The conception is based on the occurrence of double-exciton dark states due to the collective destructive quantum interference of the emissions from the two atomic ensembles. With adiabatic coherent manipulation of the atom-field couplings by stimulated Raman scattering, the dark states will extrapolate from an exciton state of one ensemble to that of the other. This realizes the transport of quantum information among atomic ensembles.
Relation between native ensembles and experimental structures of proteins
Best, R. B.; Lindorff-Larsen, Kresten; DePristo, M. A.
2006-01-01
Different experimental structures of the same protein or of proteins with high sequence similarity contain many small variations. Here we construct ensembles of "high-sequence similarity Protein Data Bank" (HSP) structures and consider the extent to which such ensembles represent the structural...... Data Bank ensembles; moreover, we show that the effects of uncertainties in structure determination are insufficient to explain the results. These results highlight the importance of accounting for native-state protein dynamics in making comparisons with ensemble-averaged experimental data and suggest...
Fractional exclusion statistics and the Random Matrix Boson Ensemble
Hernández-Quiroz, Saul; Benet, Luis; Flores, Jorge; Cocho, Germinal
2012-01-01
The k-body Gaussian Embedded Ensemble of Random Matrices is considered for N bosons distributed on two single-particle levels. When k = N, the ensemble is equivalent to the Gaussian Orthogonal Ensemble (GOE), and when k = 2 it corresponds to the Two-body Random Ensemble (TBRE) for bosons. It is shown that the energy spectrum leads to a rank function which is of the form of a discrete generalized beta distribution. The same distribution is obtained assuming N non-interacting quasiparticles that obey the fractional exclusion statistics introduced by Haldane two decades ago.
Density matrix quantum Monte Carlo
Blunt, N S; Spencer, J S; Foulkes, W M C
2013-01-01
This paper describes a quantum Monte Carlo method capable of sampling the full density matrix of a many-particle system, thus granting access to arbitrary reduced density matrices and allowing expectation values of complicated non-local operators to be evaluated easily. The direct sampling of the density matrix also raises the possibility of calculating previously inaccessible entanglement measures. The algorithm closely resembles the recently introduced full configuration interaction quantum Monte Carlo method, but works all the way from infinite to zero temperature. We explain the theory underlying the method, describe the algorithm, and introduce an importance-sampling procedure to improve the stochastic efficiency. To demonstrate the potential of our approach, the energy and staggered magnetization of the isotropic antiferromagnetic Heisenberg model on small lattices and the concurrence of one-dimensional spin rings are compared to exact or well-established results. Finally, the nature of the sign problem...
Efficient kinetic Monte Carlo simulation
Schulze, Tim P.
2008-02-01
This paper concerns kinetic Monte Carlo (KMC) algorithms that have a single-event execution time independent of the system size. Two methods are presented—one that combines the use of inverted-list data structures with rejection Monte Carlo and a second that combines inverted lists with the Marsaglia-Norman-Cannon algorithm. The resulting algorithms apply to models with rates that are determined by the local environment but are otherwise arbitrary, time-dependent and spatially heterogeneous. While especially useful for crystal growth simulation, the algorithms are presented from the point of view that KMC is the numerical task of simulating a single realization of a Markov process, allowing application to a broad range of areas where heterogeneous random walks are the dominant simulation cost.
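As context for what these constant-time schemes accelerate, the elementary KMC step selects the next event with probability proportional to its rate and advances the clock by an exponential waiting time. The naive O(n) version of that step is sketched below; the event catalogue is illustrative, and the inverted-list and Marsaglia-Norman-Cannon variants discussed above replace the linear cumulative-sum search.

```python
import numpy as np

def kmc_step(rates, rng):
    """One kinetic Monte Carlo (n-fold way / BKL) step.

    rates : array of rates for all currently possible events.
    Returns (event_index, dt). A naive O(n) selection; the schemes above
    make the selection cost independent of the number of events."""
    total = rates.sum()
    r = rng.random() * total
    event = int(np.searchsorted(np.cumsum(rates), r))   # pick event with probability rate/total
    dt = -np.log(rng.random()) / total                   # exponentially distributed waiting time
    return event, dt

rng = np.random.default_rng(0)
rates = np.array([1.0, 0.1, 0.1, 2.5])                   # illustrative event catalogue
t = 0.0
for _ in range(5):
    ev, dt = kmc_step(rates, rng)
    t += dt                                              # advance simulated time
```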
Adaptive Multilevel Monte Carlo Simulation
Hoel, H
2011-08-23
This work generalizes a multilevel forward Euler Monte Carlo method introduced by Michael B. Giles (Oper. Res. 56(3):607–617, 2008) for the approximation of expected values depending on the solution to an Itô stochastic differential equation. That work proposed and analyzed a forward Euler multilevel Monte Carlo method based on a hierarchy of uniform time discretizations and control variates to reduce the computational effort required by a standard, single-level, forward Euler Monte Carlo method. This work introduces an adaptive hierarchy of non-uniform time discretizations, generated by an adaptive algorithm introduced in (Anna Dzougoutov et al. Adaptive Monte Carlo algorithms for stopped diffusion. In Multiscale methods in science and engineering, volume 44 of Lect. Notes Comput. Sci. Eng., pages 59–88. Springer, Berlin, 2005; Kyoung-Sook Moon et al. Stoch. Anal. Appl. 23(3):511–558, 2005; Kyoung-Sook Moon et al. An adaptive algorithm for ordinary, stochastic and partial differential equations. In Recent advances in adaptive computation, volume 383 of Contemp. Math., pages 325–343. Amer. Math. Soc., Providence, RI, 2005). This form of the adaptive algorithm generates stochastic, path-dependent time steps and is based on a posteriori error expansions first developed in (Anders Szepessy et al. Comm. Pure Appl. Math. 54(10):1169–1214, 2001). Our numerical results for a stopped diffusion problem exhibit savings in the computational cost to achieve an accuracy of O(TOL): from O(TOL^-3), using a single-level version of the adaptive algorithm, to O((TOL^-1 log(TOL))^2).
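As a reminder of the underlying construction, the multilevel estimator combines a cheap estimate on a coarse time grid with corrections computed from fine and coarse paths driven by the same Brownian increments. The sketch below shows a two-level version for geometric Brownian motion with uniform steps; it illustrates only the plain (non-adaptive) multilevel idea, and all parameters are arbitrary.

```python
import numpy as np

def euler_gbm(T, n_steps, n_paths, mu=0.05, sigma=0.2, x0=1.0, rng=None, dW=None):
    """Forward Euler for dX = mu X dt + sigma X dW; returns X(T) for each path."""
    rng = rng or np.random.default_rng()
    dt = T / n_steps
    if dW is None:
        dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
    x = np.full(n_paths, x0)
    for k in range(n_steps):
        x = x + mu * x * dt + sigma * x * dW[:, k]
    return x

# Two-level Monte Carlo estimate of E[X(T)]: coarse-level mean plus a correction
# from coupled fine/coarse paths that share the same Brownian increments.
rng = np.random.default_rng(0)
T, n0, M = 1.0, 8, 2                        # coarse steps n0, refinement factor M
coarse = euler_gbm(T, n0, 100_000, rng=rng).mean()

n_corr = 10_000
dW_fine = rng.normal(0.0, np.sqrt(T / (n0 * M)), size=(n_corr, n0 * M))
dW_coarse = dW_fine.reshape(n_corr, n0, M).sum(axis=2)   # aggregate increments for the coupling
correction = (euler_gbm(T, n0 * M, n_corr, dW=dW_fine)
              - euler_gbm(T, n0, n_corr, dW=dW_coarse)).mean()
estimate = coarse + correction
```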
Cluster Ensemble-based Image Segmentation
Xiaoru Wang
2013-07-01
Image segmentation is the foundation of computer vision applications. In this paper, we propose a new cluster ensemble-based image segmentation algorithm, which overcomes several problems of traditional methods. We make two main contributions in this paper. First, we introduce the cluster ensemble concept to fuse the segmentation results from different types of visual features effectively, which can deliver a better final result and achieve a much more stable performance for broad categories of images. Second, we exploit the PageRank idea from Internet applications and apply it to the image segmentation task. This can improve the final segmentation results by combining the spatial information of the image and the semantic similarity of regions. Our experiments on four public image databases validate the superiority of our algorithm over conventional single type of feature or multiple types of features-based algorithms, since our algorithm can fuse multiple types of features effectively for better segmentation results. Moreover, our method is also proved to be very competitive in comparison with other state-of-the-art segmentation algorithms.
Online cross-validation-based ensemble learning.
Benkeser, David; Ju, Cheng; Lendle, Sam; van der Laan, Mark
2017-05-04
Online estimators update a current estimate with a new incoming batch of data without having to revisit past data thereby providing streaming estimates that are scalable to big data. We develop flexible, ensemble-based online estimators of an infinite-dimensional target parameter, such as a regression function, in the setting where data are generated sequentially by a common conditional data distribution given summary measures of the past. This setting encompasses a wide range of time-series models and, as special case, models for independent and identically distributed data. Our estimator considers a large library of candidate online estimators and uses online cross-validation to identify the algorithm with the best performance. We show that by basing estimates on the cross-validation-selected algorithm, we are asymptotically guaranteed to perform as well as the true, unknown best-performing algorithm. We provide extensions of this approach including online estimation of the optimal ensemble of candidate online estimators. We illustrate excellent performance of our methods using simulations and a real data example where we make streaming predictions of infectious disease incidence using data from a large database. Copyright © 2017 John Wiley & Sons, Ltd.
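The selection step described above can be made concrete with a small sketch: each candidate online learner is scored on every incoming batch before that batch is used for its own update, and predictions are taken from the candidate with the lowest cumulative out-of-sample loss so far. This is a simplified, discrete-selection illustration (squared-error loss, duck-typed learners exposing predict and partial_fit), not the authors' estimator of the optimal ensemble.

```python
import numpy as np

class OnlineSelector:
    """Online cross-validation over a library of candidate online learners.

    Each learner must expose predict(X) and partial_fit(X, y).  A batch is
    scored out-of-sample before it is used for updating, and the learner with
    the smallest cumulative loss so far is used for new predictions."""

    def __init__(self, learners):
        self.learners = learners
        self.cum_loss = np.zeros(len(learners))

    def update(self, X, y):
        for k, learner in enumerate(self.learners):
            pred = learner.predict(X)                 # evaluate before training on this batch
            self.cum_loss[k] += np.mean((pred - y) ** 2)
            learner.partial_fit(X, y)                 # then let the learner see the batch

    def predict(self, X):
        best = int(np.argmin(self.cum_loss))          # cross-validation-selected learner
        return self.learners[best].predict(X)
```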
Nanobiosensing with Arrays and Ensembles of Nanoelectrodes
Najmeh Karimian
2016-12-01
Since the first reports dating back to the mid-1990s, ensembles and arrays of nanoelectrodes (NEEs and NEAs, respectively) have gained an important role as advanced electroanalytical tools thanks to their unique characteristics which include, among others, dramatically improved signal/noise ratios, enhanced mass transport and suitability for extreme miniaturization. From the year 2000 onward, these properties have been exploited to develop electrochemical biosensors in which the surfaces of NEEs/NEAs have been functionalized with biorecognition layers using immobilization modes able to take maximum advantage of the special morphology and composite nature of their surface. This paper presents an updated overview of this field. It consists of two parts. In the first, we discuss nanofabrication methods and the principles of functioning of NEEs/NEAs, focusing, in particular, on those features which are important for the development of highly sensitive and miniaturized biosensors. In the second part, we review literature references dealing with the bioanalytical and biosensing applications of sensors based on biofunctionalized arrays/ensembles of nanoelectrodes, focusing our attention on the most recent advances published in the last five years. The goal of this review is both to furnish fundamental knowledge to researchers starting their activity in this field and to provide experienced scientists with critical information on recent achievements which can stimulate new ideas for future developments.
Hsaing Waing: Classical Ensemble of Myanmar
Chalermkit Kengkeaw
2013-09-01
Hsaing Waing is a classical music ensemble and a prominent cultural identity of Myanmar. The Hsaing Waing ensemble consists of many instruments such as the Pat Waing, Muang Hsaing, Hne, Chauk Lon Bat, Byaung, Wa, Wallet Kok, Yakin, Si, and Mong. The earliest historical record of the Hsaing Waing is in 1544, when the Pat Waing, and possibly the Hsaing Waing, was in royal service at the court of King Tabinshwehti of the Taungoo dynasty; it prospered under the Kaunbaun dynasty up to colonial rule. During colonization, Hsaing Waing's popularity declined, but other innovations were introduced, such as modern recording media and broadcasts, which transferred the popularity of Hsaing Waing to a broader public audience and brought innovation to religious music and ceremonial rituals, as well as fusion with western musical instruments such as the piano, violin and mandolin. The wealth of knowledge and the number of connoisseurs during the Kaunbaun dynasty led to the transfer of knowledge to many apprentices, who were responsible for the development, adaptation and continuation of Hsaing Waing during colonization, socialism and independence. The transfer of knowledge was carried out by previous generations through apprentices, family members, close relatives and inspired individuals. The factors for the successful inheritance of Hsaing Waing are management, education, musicians and opportunity.
Deterministic Mean-Field Ensemble Kalman Filtering
Law, Kody J. H.
2016-05-03
The proof of convergence of the standard ensemble Kalman filter (EnKF) from Le Gland, Monbet, and Tran [Large sample asymptotics for the ensemble Kalman filter, in The Oxford Handbook of Nonlinear Filtering, Oxford University Press, Oxford, UK, 2011, pp. 598--631] is extended to non-Gaussian state-space models. A density-based deterministic approximation of the mean-field limit EnKF (DMFEnKF) is proposed, consisting of a PDE solver and a quadrature rule. Given a certain minimal order of convergence k between the two, this extends to the deterministic filter approximation, which is therefore asymptotically superior to standard EnKF for dimension d<2k. The fidelity of approximation of the true distribution is also established using an extension of the total variation metric to random measures. This is limited by a Gaussian bias term arising from nonlinearity/non-Gaussianity of the model, which arises in both deterministic and standard EnKF. Numerical results support and extend the theory.
Ensemble Kalman filtering with residual nudging
Luo, X.
2012-10-03
Covariance inflation and localisation are two important techniques that are used to improve the performance of the ensemble Kalman filter (EnKF) by (in effect) adjusting the sample covariances of the estimates in the state space. In this work, an additional auxiliary technique, called residual nudging, is proposed to monitor and, if necessary, adjust the residual norms of state estimates in the observation space. In an EnKF with residual nudging, if the residual norm of an analysis is larger than a pre-specified value, then the analysis is replaced by a new one whose residual norm is no larger than a pre-specified value. Otherwise, the analysis is considered as a reasonable estimate and no change is made. A rule for choosing the pre-specified value is suggested. Based on this rule, the corresponding new state estimates are explicitly derived in case of linear observations. Numerical experiments in the 40-dimensional Lorenz 96 model show that introducing residual nudging to an EnKF may improve its accuracy and/or enhance its stability against filter divergence, especially in the small ensemble scenario.
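The nudging rule stated above can be sketched directly: if the observation-space residual norm of an analysis exceeds a chosen bound, the analysis is shifted just enough toward an observation-matching state to meet the bound. The snippet below is a simplified illustration for a linear observation operator with full row rank; the bound would follow the rule suggested in the paper, but here it is simply an input, and all names are placeholders.

```python
import numpy as np

def residual_nudging(x_analysis, y_obs, H, bound):
    """Replace the analysis by a convex combination of itself and a state that
    fits the observations, so that ||y - H x|| <= bound (linear H assumed).

    Assumes H has full row rank, so that H @ pinv(H) is the identity on
    observation space; otherwise the residual is only reduced approximately."""
    residual = y_obs - H @ x_analysis
    norm = np.linalg.norm(residual)
    if norm <= bound:
        return x_analysis                      # residual already acceptable, no change
    correction = np.linalg.pinv(H) @ residual  # pull toward an observation-matching state
    frac = 1.0 - bound / norm                  # smallest shift that meets the bound
    return x_analysis + frac * correction
```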
Phase transitions in ensembles of solitons induced by an optical pumping or a strong electric field
Karpov, P.; Brazovskii, S.
2016-09-01
The latest trend in studies of modern electronically and/or optically active materials is to provoke phase transformations induced by high electric fields or by short (femtosecond) powerful optical pulses. The systems of choice are cooperative electronic states whose broken symmetries give rise to topological defects. For typical quasi-one-dimensional architectures, those are the microscopic solitons taking from electrons the major roles as carriers of charge or spin. Because of the long-range ordering, the solitons experience unusual super-long-range forces leading to a sequence of phase transitions in their ensembles: the higher-temperature transition of confinement and the lower one of aggregation into macroscopic walls. Here we present results of extensive numerical modeling for ensembles of both neutral and charged solitons in both two- and three-dimensional systems. We suggest a specific Monte Carlo algorithm preserving the number of solitons, which substantially facilitates the calculations, allows us to extend them to the three-dimensional case, and allows the important long-range Coulomb interactions to be included. The results confirm the first confinement transition, except for a very strong Coulomb repulsion, and demonstrate a pattern formation at the second transition of aggregation.
Transitions between imperfectly ordered crystalline structures: a phase switch Monte Carlo study.
Wilms, Dorothea; Wilding, Nigel B; Binder, Kurt
2012-05-01
A model for two-dimensional colloids confined laterally by "structured boundaries" (i.e., ones that impose a periodicity along the slit) is studied by Monte Carlo simulations. When the distance D between the confining walls is reduced at constant particle number from an initial value D(0), for which a crystalline structure commensurate with the imposed periodicity fits, to smaller values, a succession of phase transitions to imperfectly ordered structures occur. These structures have a reduced number of rows parallel to the boundaries (from n to n-1 to n-2, etc.) and are accompanied by an almost periodic strain pattern, due to "soliton staircases" along the boundaries. Since standard simulation studies of such transitions are hampered by huge hysteresis effects, we apply the phase switch Monte Carlo method to estimate the free energy difference between the structures as a function of the misfit between D and D(0), thereby locating where the transitions occur in equilibrium. For comparison, we also obtain this free energy difference from a thermodynamic integration method: The results agree, but the effort required to obtain the same accuracy as provided by phase switch Monte Carlo would be at least three orders of magnitude larger. We also show for a situation where several "candidate structures" exist for a phase, that phase switch Monte Carlo can clearly distinguish the metastable structures from the stable one. Finally, applying the method in the conjugate statistical ensemble (where the normal pressure conjugate to D is taken as an independent control variable), we show that the standard equivalence between the conjugate ensembles of statistical mechanics is violated.
Visualizing uncertainties in a storm surge ensemble data assimilation and forecasting system
Hollt, Thomas
2015-01-15
We present a novel integrated visualization system that enables the interactive visual analysis of ensemble simulations and estimates of the sea surface height and other model variables that are used for storm surge prediction. Coastal inundation, caused by hurricanes and tropical storms, poses large risks for today's societies. High-fidelity numerical models of water levels driven by hurricane-force winds are required to predict these events, posing a challenging computational problem, and even though computational models continue to improve, uncertainties in storm surge forecasts are inevitable. Today, this uncertainty is often exposed to the user by running the simulation many times with different parameters or inputs following a Monte-Carlo framework in which uncertainties are represented as stochastic quantities. This results in multidimensional, multivariate and multivalued data, so-called ensemble data. While the resulting datasets are very comprehensive, they are also huge in size and thus hard to visualize and interpret. In this paper, we tackle this problem by means of an interactive and integrated visual analysis system. By harnessing the power of modern graphics processing units for visualization as well as computation, our system allows the user to browse through the simulation ensembles in real time, view specific parameter settings or simulation models and move between different spatial and temporal regions without delay. In addition, our system provides advanced visualizations to highlight the uncertainty or show the complete distribution of the simulations at user-defined positions over the complete time series of the prediction. We highlight the benefits of our system by presenting its application in a real-world scenario using a simulation of Hurricane Ike.
Jun Kyung KAY; Hyun Mee KIM; Young-Youn PARK; Joohyung SON
2013-01-01
Using the Met Office Global and Regional Ensemble Prediction System (MOGREPS) implemented at the Korea Meteorological Administration (KMA), the effect of doubling the ensemble size on the performance of ensemble prediction in the warm season was evaluated. Because a finite ensemble size causes sampling error in the full forecast probability distribution function (PDF), ensemble size is closely related to the efficiency of the ensemble prediction system. Prediction capability according to doubling the ensemble size was evaluated by increasing the number of ensembles from 24 to 48 in MOGREPS implemented at the KMA. The initial analysis perturbations generated by the Ensemble Transform Kalman Filter (ETKF) were integrated for 10 days from 22 May to 23 June 2009. Several statistical verification scores were used to measure the accuracy, reliability, and resolution of ensemble probabilistic forecasts for 24 and 48 ensemble member forecasts. Even though the results were not significant, the accuracy of ensemble prediction improved slightly as ensemble size increased, especially for longer forecast times in the Northern Hemisphere. While increasing the number of ensemble members resulted in a slight improvement in resolution as forecast time increased, inconsistent results were obtained for the scores assessing the reliability of ensemble prediction. The overall performance of ensemble prediction in terms of accuracy, resolution, and reliability increased slightly with ensemble size, especially for longer forecast times.
Multispecies pair annihilation reactions.
Deloubrière, Olivier; Hilhorst, Henk J; Täuber, Uwe C
2002-12-16
We consider diffusion-limited reactions A_i + A_j → ∅ (1 ≤ i < j ≤ q). For d ≥ 2, we argue that the asymptotic density decay for such mutual annihilation processes with equal rates and initial densities is the same as for single-species pair annihilation A + A → ∅. In d = 1, however, particle segregation occurs for all q < ∞. The total density decays according to a q-dependent power law, ρ(t) ∼ t^(−α(q)). Within a simplified version of the model, α(q) = (q − 1)/(2q) can be determined exactly. Our findings are supported through Monte Carlo simulations.
kmos: A lattice kinetic Monte Carlo framework
Hoffmann, Max J.; Matera, Sebastian; Reuter, Karsten
2014-07-01
Kinetic Monte Carlo (kMC) simulations have emerged as a key tool for microkinetic modeling in heterogeneous catalysis and other materials applications. Systems, where site-specificity of all elementary reactions allows a mapping onto a lattice of discrete active sites, can be addressed within the particularly efficient lattice kMC approach. To this end we describe the versatile kmos software package, which offers a most user-friendly implementation, execution, and evaluation of lattice kMC models of arbitrary complexity in one- to three-dimensional lattice systems, involving multiple active sites in periodic or aperiodic arrangements, as well as site-resolved pairwise and higher-order lateral interactions. Conceptually, kmos achieves a maximum runtime performance which is essentially independent of lattice size by generating code for the efficiency-determining local update of available events that is optimized for a defined kMC model. For this model definition and the control of all runtime and evaluation aspects kmos offers a high-level application programming interface. Usage proceeds interactively, via scripts, or a graphical user interface, which visualizes the model geometry, the lattice occupations and rates of selected elementary reactions, while allowing on-the-fly changes of simulation parameters. We demonstrate the performance and scaling of kmos with the application to kMC models for surface catalytic processes, where for given operation conditions (temperature and partial pressures of all reactants) central simulation outcomes are catalytic activity and selectivities, surface composition, and mechanistic insight into the occurrence of individual elementary processes in the reaction network.
Toward a Monte Carlo program for simulating vapor-liquid phase equilibria from first principles
McGrath, M; Siepmann, J I; Kuo, I W; Mundy, C J; Vandevondele, J; Sprik, M; Hutter, J; Mohamed, F; Krack, M; Parrinello, M
2004-10-20
Efficient Monte Carlo algorithms are combined with the Quickstep energy routines of CP2K to develop a program that allows for Monte Carlo simulations in the canonical, isobaric-isothermal, and Gibbs ensembles using a first principles description of the physical system. Configurational-bias Monte Carlo techniques and pre-biasing using an inexpensive approximate potential are employed to increase the sampling efficiency and to reduce the frequency of expensive ab initio energy evaluations. The new Monte Carlo program has been validated through extensive comparison with molecular dynamics simulations using the programs CPMD and CP2K. Preliminary results for the vapor-liquid coexistence properties (T = 473 K) of water using the Becke-Lee-Yang-Parr exchange and correlation energy functionals, a triple-zeta valence basis set augmented with two sets of d-type or p-type polarization functions, and Goedecker-Teter-Hutter pseudopotentials are presented. The preliminary results indicate that this description of water leads to an underestimation of the saturated liquid density and heat of vaporization and, correspondingly, an overestimation of the saturated vapor pressure.
Monte Carlo Methods in ICF (LIRPP Vol. 13)
Zimmerman, George B.
2016-10-01
Monte Carlo methods appropriate to simulate the transport of x-rays, neutrons, ions and electrons in Inertial Confinement Fusion targets are described and analyzed. The Implicit Monte Carlo method of x-ray transport handles symmetry within indirect-drive ICF hohlraums well, but can be improved by about 50% in efficiency by angular biasing the x-rays towards the fuel capsule. Accurate simulation of thermonuclear burn and burn diagnostics involves detailed particle source spectra, charged particle ranges, in-flight reaction kinematics, corrections for bulk and thermal Doppler effects and variance reduction to obtain adequate statistics for rare events. It is found that the effects of angular Coulomb scattering must be included in models of charged particle transport through heterogeneous materials.
Quantum Monte Carlo study of the protonated water dimer
Dagrada, Mario; Saitta, Antonino M; Sorella, Sandro; Mauri, Francesco
2013-01-01
We report an extensive theoretical study of the protonated water dimer (Zundel ion) by means of the highly correlated variational Monte Carlo and lattice regularized Monte Carlo approaches. This system represents the simplest model for proton transfer (PT) and a correct description of its properties is essential in order to understand the PT mechanism in more complex aqueous systems. Our Jastrow correlated AGP wave function ensures an accurate treatment of electron correlations. Exploiting the advantages of contracting the primitive basis set over atomic hybrid orbitals, we are able to limit dramatically the number of variational parameters with a systematic control on the numerical precision, crucial in order to simulate larger systems. We investigate energetics and geometrical properties of the Zundel ion as a function of the oxygen-oxygen distance, taken as reaction coordinate. In both cases, our QMC results are found in excellent agreement with coupled cluster CCSD(T) technique, the quantum chemistry "go...
Gold nanoelectrode ensembles for direct trace electroanalysis of iodide.
Pereira, Francisco C; Moretto, Ligia M; De Leo, Manuela; Zanoni, Maria V Boldrin; Ugo, Paolo
2006-08-01
A procedure for the standardization of ensembles of gold nanodisk electrodes (NEE) of 30 nm diameter is presented, which is based on the analytical comparison between experimental cyclic voltammograms (CV) obtained at the NEEs in diluted solutions of redox probes and CV patterns obtained by digital simulation. Possible origins of defects sometimes found in NEEs are discussed. Selected NEEs are then employed for the study of the electrochemical oxidation of iodide in acidic solutions. CV patterns display typical quasi-reversible behavior which involves associated chemical reactions between adsorbed and solution species. The main CV characteristics at the NEE compare well with those observed at millimeter-sized gold disk electrodes (Au-macro), apart from a slight shift in E1/2 values and a slightly higher peak-to-peak separation at the NEE. The detection limit (DL) at NEEs is 0.3 microM, which is more than one order of magnitude lower than the DL at the Au-macro (4 microM). The mechanism of the electrochemical oxidation of iodide at NEEs is discussed. Finally, NEEs are applied to the direct determination of iodide at micromolar concentration levels in real samples, namely in some ophthalmic drugs and iodized table salt.
Fission Matrix Capability for MCNP Monte Carlo
Carney, Sean E. [Los Alamos National Laboratory; Brown, Forrest B. [Los Alamos National Laboratory; Kiedrowski, Brian C. [Los Alamos National Laboratory; Martin, William R. [Los Alamos National Laboratory
2012-09-05
In a Monte Carlo criticality calculation, before the tallying of quantities can begin, a converged fission source (the fundamental eigenvector of the fission kernel) is required. Tallies of interest may include powers, absorption rates, leakage rates, or the multiplication factor (the fundamental eigenvalue of the fission kernel, k{sub eff}). Just as in the power iteration method of linear algebra, if the dominance ratio (the ratio of the first and zeroth eigenvalues) is high, many iterations of neutron history simulations are required to isolate the fundamental mode of the problem. Optically large systems have large dominance ratios, and systems containing poor neutron communication between regions are also slow to converge. The fission matrix method, implemented into MCNP[1], addresses these problems. When Monte Carlo random walk from a source is executed, the fission kernel is stochastically applied to the source. Random numbers are used for: distances to collision, reaction types, scattering physics, fission reactions, etc. This method is used because the fission kernel is a complex, 7-dimensional operator that is not explicitly known. Deterministic methods use approximations/discretization in energy, space, and direction to the kernel. Consequently, they are faster. Monte Carlo directly simulates the physics, which necessitates the use of random sampling. Because of this statistical noise, common convergence acceleration methods used in deterministic methods do not work. In the fission matrix method, we are using the random walk information not only to build the next-iteration fission source, but also a spatially-averaged fission kernel. Just like in deterministic methods, this involves approximation and discretization. The approximation is the tallying of the spatially-discretized fission kernel with an incorrect fission source. We address this by making the spatial mesh fine enough that this error is negligible. As a consequence of discretization we get a
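The payoff of tallying a spatially discretized fission matrix is that the fundamental eigenpair can then be obtained by ordinary power iteration on that matrix rather than by many additional cycles of random walks. A minimal sketch of that deterministic step is given below; the 3x3 matrix is synthetic and merely stands in for a kernel accumulated from the Monte Carlo tallies.

```python
import numpy as np

def power_iteration(F, tol=1e-10, max_iter=10_000):
    """Fundamental eigenpair (k_eff, fission source) of a fission matrix F,
    where F[i, j] ~ expected fission neutrons born in cell i per fission in cell j."""
    s = np.ones(F.shape[0]) / F.shape[0]       # initial flat source guess
    k = 1.0
    for _ in range(max_iter):
        s_new = F @ s / k                      # apply the kernel, scaled by current k
        k_new = k * s_new.sum() / s.sum()      # update the eigenvalue estimate
        s_new /= s_new.sum()                   # renormalize the source
        if abs(k_new - k) < tol:
            return k_new, s_new
        k, s = k_new, s_new
    return k, s

# Illustrative 3-cell system with weak neutron communication between regions.
F = np.array([[1.00, 0.05, 0.00],
              [0.05, 0.95, 0.05],
              [0.00, 0.05, 1.00]])
k_eff, source = power_iteration(F)
```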
Thermal Insulation Distribution Pattern of Layered Clothing Ensemble
李俊; 韦鸿发; 刘岩; 张渭源
2004-01-01
With a thermal manikin, the distribution pattern of thermal insulation in a multi-layered clothing ensemble is studied. It is found that the thermal insulation of a multi-layered clothing ensemble has a definite statistical relationship with the thermal insulation of each layer, and a prediction equation has been established.
Building Identity in Collegiate Midlevel Choral Ensembles: The Director's Perspective
Major, Marci L.
2017-01-01
This study was designed to explore the director's perspective on the role organizational images play in social identity development in midlevel choral ensembles. Using a phenomenological methodology, I interviewed 10 current or former directors of midlevel choral ensembles from eight midwestern U.S. colleges and universities. Directors cited…
Calculation of the chemical potential in the Gibbs ensemble
Smit, B.; Frenkel, D.
1989-01-01
An expression for the chemical potential in the Gibbs ensemble is derived. For finite system sizes this expression for the chemical potential differs systematically from Widom's test particle insertion method for the N, V, T ensemble. In order to compare these two methods for calculating the chemic
Stochastic and dynamical downscaling of ensemble precipitation forecasts
Brussolo, E.; von Hardenberg, J.; Rebora, N.
2009-04-01
Forecasting hydrogeological risk in small basins requires quantitative forecasts and an estimate of the probability of occurrence of severe, localized precipitation events at spatial scales of the order of tens of kilometers or less, significantly smaller than those currently provided by large scale, global, ensemble forecasting systems (EPS). Dynamically based forecasts at these scales can be obtained extending EPS scenarios with high-resolution, non-hydrostatic, limited area ensemble prediction systems. An alternative is represented by the direct application of stochastic downscaling techniques to the large scale ensemble forecasts. This work compares the performances of these two very different ensemble forecast downscaling approaches. To this purpose we consider ensemble forecasts provided by the ECMWF EPS, downscaled in space using the RainFARM stochastic technique [1], and ensembles of forecasts obtained from the COSMO-LEPS limited area prediction system (which also uses ECMWF EPS ensemble members as boundary conditions), for three intense precipitation events over northern Italy in 2006. The statistical properties of the fields produced with these two techniques are compared and the skill of the resulting ensembles is verified against direct precipitation measurements from a dense network of rain gauges. Reference: 1. Rebora, N., L. Ferraris, J. von Hardenberg, and A. Provenzale, 2006: The RainFARM: Rainfall Downscaling by a Filtered AutoRegressive Model. J. Hydrometeorol., 7, 724-738.
A Comparison of Ensemble Kalman Filters for Storm Surge Assimilation
Altaf, Muhammad
2014-08-01
This study evaluates and compares the performances of several variants of the popular ensemble Kalman filter for the assimilation of storm surge data with the advanced circulation (ADCIRC) model. Using meteorological data from Hurricane Ike to force the ADCIRC model on a domain including the Gulf of Mexico coastline, the authors implement and compare the standard stochastic ensemble Kalman filter (EnKF) and three deterministic square root EnKFs: the singular evolutive interpolated Kalman (SEIK) filter, the ensemble transform Kalman filter (ETKF), and the ensemble adjustment Kalman filter (EAKF). Covariance inflation and localization are implemented in all of these filters. The results from twin experiments suggest that the square root ensemble filters could lead to very comparable performances with appropriate tuning of inflation and localization, suggesting that practical implementation details are at least as important as the choice of the square root ensemble filter itself. These filters also perform reasonably well with a relatively small ensemble size, whereas the stochastic EnKF requires larger ensemble sizes to provide similar accuracy for forecasts of storm surge.
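For readers unfamiliar with the baseline against which the square root variants are compared, a minimal sketch of the stochastic (perturbed-observation) EnKF analysis step with multiplicative covariance inflation is given below. The function and variable names are hypothetical, the observation operator is assumed linear, and this is not the ADCIRC configuration used in the study.

```python
import numpy as np

def enkf_update(X, y, H, R, inflation=1.05, rng=None):
    """Stochastic (perturbed-observation) EnKF analysis step.

    X : (n_state, n_ens) forecast ensemble
    y : (n_obs,) observation vector
    H : (n_obs, n_state) linear observation operator
    R : (n_obs, n_obs) observation-error covariance
    """
    rng = np.random.default_rng(rng)
    n_state, n_ens = X.shape
    x_mean = X.mean(axis=1, keepdims=True)
    A = (X - x_mean) * inflation                       # inflated anomalies
    X = x_mean + A
    P_f = A @ A.T / (n_ens - 1)                        # sample forecast covariance
    K = P_f @ H.T @ np.linalg.inv(H @ P_f @ H.T + R)   # Kalman gain
    # Perturb the observations independently for each member.
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=n_ens).T
    return X + K @ (Y - H @ X)                         # analysis ensemble
```

Deterministic square root filters replace the perturbed-observation step by an exact transform of the anomalies, which is the main design difference the study examines.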
Conductor and Ensemble Performance Expressivity and State Festival Ratings
Price, Harry E.; Chang, E. Christina
2005-01-01
This study is the second in a series examining the relationship between conducting and ensemble performance. The purpose was to further examine the associations among conductor, ensemble performance expressivity, and festival ratings. Participants were asked to rate the expressivity of video-only conducting and parallel audio-only excerpts from a…
An iterative ensemble Kalman filter for reservoir engineering applications
Krymskaya, M.V.; Hanea, R.G.; Verlaan, M.
2009-01-01
The study has been focused on examining the usage and the applicability of ensemble Kalman filtering techniques to the history matching procedures. The ensemble Kalman filter (EnKF) is often applied nowadays to solving such a problem. Meanwhile, traditional EnKF requires assumption of the
Ensemble Forecast: A New Approach to Uncertainty and Predictability
[No author listed]
2005-01-01
Ensemble techniques have been used to generate daily numerical weather forecasts in numerical centers around the world since the 1990s, enabled by the increase in computing power. One of the main purposes of numerical ensemble forecasting is to account for the initial uncertainty (initial error) and the forecast uncertainty (forecast error) by applying either the initial perturbation method or the multi-model/multi-physics method. In fact, the mean of an ensemble forecast offers a better forecast than a deterministic (or control) forecast after a short lead time (3-5 days) for global modelling applications. There is about a 1-2-day improvement in forecast skill at longer lead times when using an ensemble mean instead of a single forecast. Skillful forecasts (an anomaly correlation of 65% and above) can be extended to 8 days (or longer) by present-day ensemble forecast systems. Furthermore, ensemble forecasts can deliver a probabilistic forecast to users, based on the probability density function (PDF) instead of a single-value forecast from a traditional deterministic system. It has long been recognized that ensemble forecasting not only improves weather forecast predictability but also offers a valuable measure of future uncertainty, such as the relative measure of predictability (RMOP) and the probabilistic quantitative precipitation forecast (PQPF). Not surprisingly, the success of ensemble forecasting and its wide application have greatly increased the confidence of model developers and research communities.
Ensemble Methods for Classification of Physical Activities from Wrist Accelerometry.
Chowdhury, Alok Kumar; Tjondronegoro, Dian; Chandran, Vinod; Trost, Stewart G
2017-09-01
To investigate whether the use of ensemble learning algorithms improves physical activity recognition accuracy compared to single classifier algorithms, and to compare the classification accuracy achieved by three conventional ensemble machine learning methods (bagging, boosting, random forest) and a custom ensemble model comprising four algorithms commonly used for activity recognition (binary decision tree, k nearest neighbor, support vector machine, and neural network). The study used three independent data sets that included wrist-worn accelerometer data. For each data set, a four-step classification framework consisting of data preprocessing, feature extraction, normalization and feature selection, and classifier training and testing was implemented. For the custom ensemble, decisions from the single classifiers were aggregated using three decision fusion methods: weighted majority vote, naïve Bayes combination, and behavior knowledge space combination. Classifiers were cross-validated using leave-one-subject-out cross-validation and compared on the basis of average F1 scores. In all three data sets, ensemble learning methods consistently outperformed the individual classifiers. Among the conventional ensemble methods, random forest models provided consistently high activity recognition accuracy; however, the custom ensemble model using weighted majority voting demonstrated the highest classification accuracy in two of the three data sets. Combining multiple individual classifiers using conventional or custom ensemble learning methods can improve activity recognition accuracy from wrist-worn accelerometer data.
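A minimal sketch of the weighted-majority-vote fusion step used by the custom ensemble might look like the following; the activity labels, the four base classifiers and the weights (e.g., cross-validated F1 scores) are placeholders, not the study's configuration.

```python
import numpy as np

def weighted_majority_vote(predictions, weights, classes):
    """Fuse per-classifier label predictions by weighted vote.

    predictions : (n_classifiers, n_samples) array of predicted labels
    weights     : (n_classifiers,) non-negative classifier weights,
                  e.g. cross-validated F1 scores
    classes     : sequence of possible activity labels
    """
    predictions = np.asarray(predictions)
    weights = np.asarray(weights, dtype=float)
    scores = np.zeros((len(classes), predictions.shape[1]))
    for c_idx, c in enumerate(classes):
        # Each classifier casts its weight for the label it predicted.
        scores[c_idx] = ((predictions == c) * weights[:, None]).sum(axis=0)
    return np.asarray(classes)[scores.argmax(axis=0)]

# Hypothetical fusion of four base classifiers on five accelerometer windows.
preds = [["walk", "run", "sit", "sit", "walk"],
         ["walk", "run", "run", "sit", "walk"],
         ["run",  "run", "sit", "sit", "sit"],
         ["walk", "sit", "sit", "sit", "walk"]]
print(weighted_majority_vote(preds, [0.9, 0.8, 0.7, 0.6],
                             ["walk", "run", "sit"]))
```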
Competitive Learning Neural Network Ensemble Weighted by Predicted Performance
Ye, Qiang
2010-01-01
Ensemble approaches have been shown to enhance classification by combining the outputs from a set of voting classifiers. Diversity in error patterns among base classifiers promotes ensemble performance. Multi-task learning is an important characteristic for Neural Network classifiers. Introducing a secondary output unit that receives different…
Exact ensemble density-functional theory for excited states
Yang, Zeng-hui; Pribram-Jones, Aurora; Burke, Kieron; Needs, Richard J; Ullrich, Carsten A
2014-01-01
We construct exact Kohn-Sham potentials for the ensemble density-functional theory (EDFT) of excited states from the ground and excited states of helium. The exchange-correlation potential is compared with current approximations, which miss prominent features. The ensemble derivative discontinuity is tested, and the virial theorem is proven and illustrated.
Modality-Driven Classification and Visualization of Ensemble Variance
Bensema, Kevin; Gosink, Luke; Obermaier, Harald; Joy, Kenneth I.
2016-10-01
Advances in computational power now enable domain scientists to address conceptual and parametric uncertainty by running simulations multiple times in order to sufficiently sample the uncertain input space. While this approach helps address conceptual and parametric uncertainties, the ensemble datasets produced by this technique present a special challenge to visualization researchers as the ensemble dataset records a distribution of possible values for each location in the domain. Contemporary visualization approaches that rely solely on summary statistics (e.g., mean and variance) cannot convey the detailed information encoded in ensemble distributions that are paramount to ensemble analysis; summary statistics provide no information about modality classification and modality persistence. To address this problem, we propose a novel technique that classifies high-variance locations based on the modality of the distribution of ensemble predictions. Additionally, we develop a set of confidence metrics to inform the end-user of the quality of fit between the distribution at a given location and its assigned class. We apply a similar method to time-varying ensembles to illustrate the relationship between peak variance and bimodal or multimodal behavior. These classification schemes enable a deeper understanding of the behavior of the ensemble members by distinguishing between distributions that can be described by a single tendency and distributions which reflect divergent trends in the ensemble.
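One simple way to label a location's ensemble distribution as unimodal or multimodal, in the spirit of (but not identical to) the classification described above, is to compare BIC scores of Gaussian mixtures with increasing numbers of components; a hedged sketch follows, with synthetic ensemble values standing in for simulation output.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def modality(samples, max_components=3):
    """Return the BIC-preferred number of modes for one location's
    ensemble of values (a crude unimodal/bimodal/multimodal label)."""
    x = np.asarray(samples, dtype=float).reshape(-1, 1)
    bics = [GaussianMixture(n_components=k, random_state=0).fit(x).bic(x)
            for k in range(1, max_components + 1)]
    return int(np.argmin(bics)) + 1

rng = np.random.default_rng(0)
unimodal = rng.normal(0.0, 1.0, 200)
bimodal = np.concatenate([rng.normal(-3, 1, 100), rng.normal(3, 1, 100)])
print(modality(unimodal), modality(bimodal))   # typically 1 and 2
```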
Kazuo Saito
2012-01-01
The effect of lateral boundary perturbations (LBPs) on the mesoscale breeding (MBD) method and the local ensemble transform Kalman filter (LETKF) as the initial perturbation generators for mesoscale ensemble prediction systems (EPSs) was examined. An LBP method using the Japan Meteorological Agency's (JMA's) operational one-week global ensemble prediction was developed and applied to the mesoscale EPS of the Meteorological Research Institute for the World Weather Research Programme, Beijing 2008 Olympics Research and Development Project. The amplitude of the LBPs was adjusted based on the ensemble spread statistics, considering the difference between the forecast times of the JMA's one-week EPS and the associated breeding/ensemble Kalman filter (EnKF) cycles. LBPs in the ensemble forecast increase the ensemble spread and improve the accuracy of the ensemble mean forecast. In the MBD method, if LBPs are introduced in its breeding cycles, the growth rate of the generated bred vectors increases, and the ensemble spread and the root mean square errors (RMSEs) of the ensemble mean are further improved in the ensemble forecast. With LBPs in the breeding cycles, positional correspondence to the meteorological disturbances and the orthogonality of the bred vectors are improved. Brier Skill Scores (BSSs) also showed a remarkable effect of LBPs in the breeding cycles. LBPs showed a similar effect with the LETKF. If LBPs are introduced in the EnKF data assimilation cycles, the ensemble spread, ensemble mean accuracy, and BSSs for precipitation are improved, although the relative advantage of the LETKF as the initial perturbation generator over MBD is not necessarily clear. LBPs in the EnKF cycles contribute not to the orthogonalisation but to preventing the underestimation of the forecast error near the lateral boundary. The accuracy of the LETKF analyses was compared with that of the mesoscale 4D-VAR analyses. With LBPs in the LETKF cycles, the RMSEs of the
Ensemble-Based Data Assimilation With a Martian GCM
Lawson, W.; Richardson, M. I.; McCleese, D. J.; Anderson, J. L.; Chen, Y.; Snyder, C.
2007-12-01
Monte Carlo approximations, "ensemble-based methods," have matured enough to be both appropriate for use in planetary problems and exploitable within the reach of planetary scientists. Capitalizing on this new class of methods, the National Center for Atmospheric Research (NCAR) has developed a framework for ensemble-based DA that is flexible and modular in its use of various forecast models and data sets. The framework is called DART, the Data Assimilation Research Testbed, and it is freely available on-line. We have begun to take advantage of this rich software infrastructure, and are on our way toward performing state-of-the-art DA in the martian atmosphere using Caltech's martian general circulation model, PlanetWRF. We have begun by testing and validating the model within DART under idealized scenarios, and we hope to address actual, available infrared remote sensing datasets from Mars orbiters in the coming year. We shall present the details of this approach and our progress to date.
Data assimilation the ensemble Kalman filter
Evensen, Geir
2007-01-01
Data Assimilation comprehensively covers data assimilation and inverse methods, including both traditional state estimation and parameter estimation. This text and reference focuses on various popular data assimilation methods, such as weak and strong constraint variational methods and ensemble filters and smoothers. It is demonstrated how the different methods can be derived from a common theoretical basis, as well as how they differ and/or are related to each other, and which properties characterize them, using several examples. Rather than emphasize a particular discipline such as oceanography or meteorology, it presents the mathematical framework and derivations in a way which is common for any discipline where dynamics is merged with measurements. The mathematics level is modest, although it requires knowledge of basic spatial statistics, Bayesian statistics, and calculus of variations. Readers will also appreciate the introduction to the mathematical methods used and detailed derivations, which should b...
Predicting protein dynamics from structural ensembles
Copperman, J
2015-01-01
The biological properties of proteins are uniquely determined by their structure and dynamics. A protein in solution populates a structural ensemble of metastable configurations around the global fold. From overall rotation to local fluctuations, the dynamics of proteins can cover several orders of magnitude in time scales. We propose a simulation-free coarse-grained approach which utilizes knowledge of the important metastable folded states of the protein to predict the protein dynamics. This approach is based upon the Langevin Equation for Protein Dynamics (LE4PD), a Langevin formalism in the coordinates of the protein backbone. The linear modes of this Langevin formalism organize the fluctuations of the protein, so that more extended dynamical cooperativity relates to increasing energy barriers to mode diffusion. The accuracy of the LE4PD is verified by analyzing the predicted dynamics across a set of seven different proteins for which both relaxation data and NMR solution structures are available. Using e...
China’s First Modern Dance Ensemble
1992-01-01
After four years' hard work by both Chinese and foreign artists, the Guangdong Experimental Modern Dance Ensemble, the first of its kind in China, was established on June 6, 1992, in the Friendship Theater of Guangzhou. Ms. Yang Meiqi, a famous Chinese folk dance educator, was chosen as head and Mr. Willy Tsao, a famous young Hong Kong dancer, as artistic director. China's Central TV Station reported the news. Recommended by Ms. Chiang Ching, a Chinese-American dancer, Yang Meiqi went to Durham, North Carolina, in the United States in the summer of 1986 to attend the American Dance Festival. The modern dances put on during the festival fascinated her with their universal "language," flexible movement, choreography and scientific training. "Isn't this just what China's dance
ARM Cloud Retrieval Ensemble Data Set (ACRED)
Zhao, C; Xie, S; Klein, SA; McCoy, R; Comstock, JM; Delanoë, J; Deng, M; Dunn, M; Hogan, RJ; Jensen, MP; Mace, GG; McFarlane, SA; O’Connor, EJ; Protat, A; Shupe, MD; Turner, D; Wang, Z
2011-09-12
This document describes a new Atmospheric Radiation Measurement (ARM) data set, the ARM Cloud Retrieval Ensemble Data Set (ACRED), which is created by assembling nine existing ground-based cloud retrievals of ARM measurements from different cloud retrieval algorithms. The current version of ACRED includes an hourly average of nine ground-based retrievals with vertical resolution of 45 m for 512 layers. The techniques used for the nine cloud retrievals are briefly described in this document. This document also outlines the ACRED data availability, variables, and the nine retrieval products. Technical details about the generation of ACRED, such as the methods used for time average and vertical re-grid, are also provided.
An educational model for ensemble streamflow simulation and uncertainty analysis
A. AghaKouchak
2013-02-01
This paper presents the hands-on modeling toolbox, HBV-Ensemble, designed as a complement to theoretical hydrology lectures, to teach hydrological processes and their uncertainties. HBV-Ensemble can be used for in-class lab practices and homework assignments, and for assessment of students' understanding of hydrological processes. Using this modeling toolbox, students can gain more insight into how hydrological processes (e.g., precipitation, snowmelt and snow accumulation, soil moisture, evapotranspiration and runoff generation) are interconnected. The educational toolbox includes a MATLAB Graphical User Interface (GUI) and an ensemble simulation scheme that can be used for teaching uncertainty analysis, parameter estimation, ensemble simulation and model sensitivity. HBV-Ensemble was administered in a class for both in-class instruction and a final project, and students submitted their feedback about the toolbox. The results indicate that this educational software had a positive impact on students' understanding and knowledge of uncertainty in hydrological modeling.
Ensemble inequivalence: Landau theory and the ABC model
Cohen, O.; Mukamel, D.
2012-12-01
It is well known that systems with long-range interactions may exhibit different phase diagrams when studied within two different ensembles. In many of the previously studied examples of ensemble inequivalence, the phase diagrams differ only when the transition in one of the ensembles is first order. By contrast, in a recent study of a generalized ABC model, the canonical and grand-canonical ensembles of the model were shown to differ even when they both exhibit a continuous transition. Here we show that the order of the transition where ensemble inequivalence may occur is related to the symmetry properties of the order parameter associated with the transition. This is done by analyzing the Landau expansion of a generic model with long-range interactions. The conclusions drawn from the generic analysis are demonstrated for the ABC model by explicit calculation of its Landau expansion.
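As a reminder of the object this analysis works with, the generic Landau expansion of a free energy in a scalar order parameter m can be written schematically as below; this is a textbook form with placeholder coefficients, not the model-specific expansion derived in the paper.

```latex
f(m) = f_0 + a(T)\,m^{2} + c\,m^{3} + b\,m^{4} + \mathcal{O}(m^{5}),
\qquad a(T) \propto T - T_{c}.
```

When the order-parameter symmetry m -> -m forbids the cubic term (c = 0) and b > 0, the transition is continuous; a symmetry-allowed cubic term, or b < 0 with a stabilizing higher-order term, generically produces a first-order transition. This dependence on the order-parameter symmetry is the kind of criterion the abstract relates to where ensemble inequivalence can occur.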
Excitations and benchmark ensemble density functional theory for two electrons
Pribram-Jones, Aurora; Trail, John R; Burke, Kieron; Needs, Richard J; Ullrich, Carsten A
2014-01-01
A new method for extracting ensemble Kohn-Sham potentials from accurate excited state densities is applied to a variety of two electron systems, exploring the behavior of exact ensemble density functional theory. The issue of separating the Hartree energy and the choice of degenerate eigenstates is explored. A new approximation, spin eigenstate Hartree-exchange (SEHX), is derived. Exact conditions that are proven include the signs of the correlation energy components, the virial theorem for both exchange and correlation, and the asymptotic behavior of the potential for small weights of the excited states. Many energy components are given as a function of the weights for two electrons in a one-dimensional flat box, in a box with a large barrier to create charge transfer excitations, in a three-dimensional harmonic well (Hooke's atom), and for the He atom singlet-triplet ensemble, singlet-triplet-singlet ensemble, and triplet bi-ensemble.
Excitations and benchmark ensemble density functional theory for two electrons
Pribram-Jones, Aurora; Burke, Kieron [Department of Chemistry, University of California-Irvine, Irvine, California 92697 (United States); Yang, Zeng-hui; Ullrich, Carsten A. [Department of Physics and Astronomy, University of Missouri, Columbia, Missouri 65211 (United States); Trail, John R.; Needs, Richard J. [Theory of Condensed Matter Group, Cavendish Laboratory, University of Cambridge, Cambridge CB3 0HE (United Kingdom)
2014-05-14
A new method for extracting ensemble Kohn-Sham potentials from accurate excited state densities is applied to a variety of two-electron systems, exploring the behavior of exact ensemble density functional theory. The issue of separating the Hartree energy and the choice of degenerate eigenstates is explored. A new approximation, spin eigenstate Hartree-exchange, is derived. Exact conditions that are proven include the signs of the correlation energy components and the asymptotic behavior of the potential for small weights of the excited states. Many energy components are given as a function of the weights for two electrons in a one-dimensional flat box, in a box with a large barrier to create charge transfer excitations, in a three-dimensional harmonic well (Hooke's atom), and for the He atom singlet-triplet ensemble, singlet-triplet-singlet ensemble, and triplet bi-ensemble.
Adaptive calibration of (u,v)‐wind ensemble forecasts
Pinson, Pierre
2012-01-01
Ensemble forecasts of (u,v)-wind are of crucial importance for a number of decision-making problems related to e.g. air traffic control, ship routeing and energy management. The skill of these ensemble forecasts as generated by NWP-based models can be maximised by correcting for their lack ... of sufficient reliability. The original framework introduced here allows for an adaptive bivariate calibration of these ensemble forecasts. The originality of this methodology lies in the fact that calibrated ensembles still consist of a set of (space-time) trajectories, after translation and dilation ... on the adaptive calibration of ECMWF ensemble forecasts of (u,v)-wind at 10 m above ground level over Europe over a three-year period between December 2006 and December 2009. Substantial improvements in (bivariate) reliability and in various deterministic/probabilistic scores are observed. Finally, the maps ...
Induced Ginibre ensemble of random matrices and quantum operations
Fischmann, J; Khoruzhenko, B A; Sommers, H -J; Zyczkowski, K
2011-01-01
A generalisation of the Ginibre ensemble of non-Hermitian random square matrices is introduced. The corresponding probability measure is induced by the ensemble of rectangular Gaussian matrices via a quadratisation procedure. We derive the joint probability density of eigenvalues for such an induced Ginibre ensemble, study various spectral correlation functions for complex and real matrices, and analyse universal behaviour in the limit of large dimensions. In this limit the eigenvalues of the induced Ginibre ensemble cover a ring in the complex plane uniformly. The real induced Ginibre ensemble is shown to be useful for describing the statistical properties of evolution operators associated with random quantum operations, for which the dimensions of the input and output states differ.
Discrete post-processing of total cloud cover ensemble forecasts
Hemri, Stephan; Haiden, Thomas; Pappenberger, Florian
2017-04-01
This contribution presents an approach to post-process ensemble forecasts for the discrete and bounded weather variable of total cloud cover. Two methods for discrete statistical post-processing of ensemble predictions are tested. The first approach is based on multinomial logistic regression, the second involves a proportional odds logistic regression model. Applying them to total cloud cover raw ensemble forecasts from the European Centre for Medium-Range Weather Forecasts improves forecast skill significantly. Based on station-wise post-processing of raw ensemble total cloud cover forecasts for a global set of 3330 stations over the period from 2007 to early 2014, the more parsimonious proportional odds logistic regression model proved to slightly outperform the multinomial logistic regression model. Reference Hemri, S., Haiden, T., & Pappenberger, F. (2016). Discrete post-processing of total cloud cover ensemble forecasts. Monthly Weather Review 144, 2565-2577.
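As an illustration of the first of the two regression approaches, the sketch below fits a multinomial logistic regression mapping raw-ensemble statistics to categorical cloud-cover probabilities with scikit-learn; the features, categories and toy training data are invented for illustration and do not reflect the paper's configuration (the proportional odds model would require an ordinal-regression implementation instead).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: one row per forecast case at a station.
# Features: raw-ensemble mean and standard deviation of total cloud cover (okta).
X_train = np.array([[1.2, 0.8], [6.5, 1.1], [4.0, 2.3], [7.8, 0.5], [0.4, 0.3]])
# Target: observed cloud-cover category (e.g. 0 = clear, 1 = partly cloudy, 2 = overcast).
y_train = np.array([0, 2, 1, 2, 0])

model = LogisticRegression(multi_class="multinomial", max_iter=1000)
model.fit(X_train, y_train)

# Post-processed categorical probabilities for a new raw-ensemble forecast.
print(model.predict_proba([[5.0, 1.5]]))
```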
Halu, Arda; Bianconi, Ginestra
2013-01-01
Spatial networks range from brain networks to transportation networks and infrastructures. Recently, interacting and multiplex networks have been attracting great attention because their dynamics and robustness cannot be understood without treating several networks at the same time. Here we present maximal entropy ensembles of spatial multiplex and spatial interacting networks that can be used to model spatial multilayer network structures and to build null models of real datasets. We show that spatial multiplexes naturally develop a significant overlap of the links, a noticeable property of many multiplexes that can significantly affect the dynamics taking place on them. Additionally, we characterize ensembles of spatial interacting networks and we analyse the structure of interacting airport and railway networks in India, showing the effect of space in determining the link probability.
Monte Carlo simulation on kinetics of batch and semi-batch free radical polymerization
Shao, Jing
2015-10-27
Based on Monte Carlo simulation technology, we proposed a hybrid routine which combines reaction mechanisms with coarse-grained molecular simulation to study the kinetics of free radical polymerization. By comparing with previous experimental and simulation studies, we showed the capability of our Monte Carlo scheme to represent polymerization kinetics in batch and semi-batch processes. Various kinds of kinetic information, such as instantaneous monomer conversion, molecular weight, and polydispersity, are readily calculated from the Monte Carlo simulation. Kinetic constants such as the polymerization rate k_p are determined in the simulation without invoking the "steady-state" hypothesis. We explored the mechanisms behind the variation of polymerization kinetics observed in previous studies, as well as polymerization-induced phase separation. Our Monte Carlo simulation scheme is versatile for studying polymerization kinetics in batch and semi-batch processes.
Monte Carlo techniques in radiation therapy
Verhaegen, Frank
2013-01-01
Modern cancer treatment relies on Monte Carlo simulations to help radiotherapists and clinical physicists better understand and compute radiation dose from imaging devices as well as exploit four-dimensional imaging data. With Monte Carlo-based treatment planning tools now available from commercial vendors, a complete transition to Monte Carlo-based dose calculation methods in radiotherapy could likely take place in the next decade. Monte Carlo Techniques in Radiation Therapy explores the use of Monte Carlo methods for modeling various features of internal and external radiation sources, including light ion beams. The book-the first of its kind-addresses applications of the Monte Carlo particle transport simulation technique in radiation therapy, mainly focusing on external beam radiotherapy and brachytherapy. It presents the mathematical and technical aspects of the methods in particle transport simulations. The book also discusses the modeling of medical linacs and other irradiation devices; issues specific...
Robust Ensemble Filtering and Its Relation to Covariance Inflation in the Ensemble Kalman Filter
Luo, Xiaodong
2011-12-01
A robust ensemble filtering scheme based on the H∞ filtering theory is proposed. The optimal H∞ filter is derived by minimizing the supremum (or maximum) of a predefined cost function, a criterion different from the minimum variance used in the Kalman filter. By design, the H∞ filter is more robust than the Kalman filter, in the sense that the estimation error in the H∞ filter in general has a finite growth rate with respect to the uncertainties in assimilation, except for a special case that corresponds to the Kalman filter. The original form of the H∞ filter contains global constraints in time, which may be inconvenient for sequential data assimilation problems. Therefore a variant is introduced that solves some time-local constraints instead, and hence it is called the time-local H∞ filter (TLHF). By analogy to the ensemble Kalman filter (EnKF), the concept of ensemble time-local H∞ filter (EnTLHF) is also proposed. The general form of the EnTLHF is outlined, and some of its special cases are discussed. In particular, it is shown that an EnKF with certain covariance inflation is essentially an EnTLHF. In this sense, the EnTLHF provides a general framework for conducting covariance inflation in the EnKF-based methods. Some numerical examples are used to assess the relative robustness of the TLHF–EnTLHF in comparison with the corresponding KF–EnKF method.
On stochastic error and computational efficiency of the Markov Chain Monte Carlo method
Li, Jun
2014-01-01
In Markov Chain Monte Carlo (MCMC) simulations, thermal equilibrium quantities are estimated by ensemble averaging over a sample set containing a large number of correlated samples. These samples are selected in accordance with the probability distribution function, known from the partition function of the equilibrium state. As the stochastic error of the simulation results is significant, it is desirable to understand the variance of the estimate obtained by ensemble averaging, which depends on the sample size (i.e., the total number of samples in the set) and the sampling interval (i.e., the number of cycles between two consecutive samples). Although large sample sizes reduce the variance, they increase the computational cost of the simulation. For a given CPU time, the sample size can be reduced greatly by increasing the sampling interval, while the corresponding increase in variance is negligible if the original sampling interval is very small. In this work, we report a few general rules that relate the variance to the sample size and the sampling interval. These results are observed and confirmed numerically. These variance rules are derived for the MCMC method but are also valid for correlated samples obtained using other Monte Carlo methods. The main contribution of this work includes the theoretical proof of these numerical observations and the set of assumptions that lead to them. © 2014 Global-Science Press.
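The trade-off described above can be probed numerically with a batch-means estimate of the variance of the ensemble average at different sampling intervals; in the sketch below the "chain" is a surrogate AR(1) process rather than an actual MCMC run, and all numbers are purely illustrative.

```python
import numpy as np

def variance_of_mean(samples, interval, n_batches=50):
    """Estimate Var[ensemble average] when keeping every `interval`-th
    sample, using non-overlapping batch means."""
    thinned = samples[::interval]
    usable = (len(thinned) // n_batches) * n_batches
    batches = thinned[:usable].reshape(n_batches, -1).mean(axis=1)
    return batches.var(ddof=1) / n_batches

# Surrogate correlated chain: AR(1) with strong correlation.
rng = np.random.default_rng(1)
x = np.empty(200_000)
x[0] = 0.0
for t in range(1, len(x)):
    x[t] = 0.99 * x[t - 1] + rng.normal()

for interval in (1, 10, 100):
    print(interval, variance_of_mean(x, interval))
```

For a fixed chain length, the estimated variance should change little between intervals of 1 and 100 even though the retained sample size drops by two orders of magnitude, which is the behaviour the variance rules in the abstract formalize.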
Monte Carlo Studies of the Orientational Order-Disorder Phase Transition in Solid Ammonium Chloride
Topper, R Q; Topper, Robert Q.; Freeman, David L.
1994-01-01
Monte Carlo methods are used to study the phase transition in ammonium chloride from the orientationally ordered $\delta$ phase to the orientationally disordered $\gamma$ phase. An effective pair potential is used to model the interaction between ions. Thermodynamic properties are computed in the canonical and isothermal-isobaric ensembles. Each ammonium ion is treated as a rigidly rotating body and the lattice is fixed in the low-temperature CsCl geometry. A simple extension of the Metropolis Monte Carlo method is used to overcome quasiergodicity in the rotational sampling. In the constant-$NVT$ calculations the lattice is held rigid; in the constant-$NpT$ calculations the lattice parameter is allowed to fluctuate. In both ensembles the order parameter rapidly falls to zero in the range (200 - 250) K, suggesting that the model disorders at a temperature in fair agreement with the experimental disordering temperature (243 K). Peaks in the heat capacity and thermal expansivity curves are also found in the same t...
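For reference, the Metropolis acceptance rules underlying the two ensembles used in this study can be sketched as below; the energy differences, pressure and reduced units are placeholders, and the rotational-sampling extension mentioned in the abstract is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
k_B = 1.0   # reduced units, for illustration only

def accept_nvt(dE, T):
    """Metropolis acceptance for a trial move at constant N, V, T."""
    return dE <= 0.0 or rng.random() < np.exp(-dE / (k_B * T))

def accept_npt_volume(dE, p, V_old, V_new, N, T):
    """Acceptance for a trial volume change at constant N, p, T
    (includes the p*dV work term and the N*ln(V_new/V_old) entropy term)."""
    beta = 1.0 / (k_B * T)
    arg = -beta * (dE + p * (V_new - V_old)) + N * np.log(V_new / V_old)
    return arg >= 0.0 or rng.random() < np.exp(arg)
```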
Kinetic Monte Carlo simulation of surface segregation in Pd–Cu alloys
Cheng, Feng [Institute of Theoretical and Computational Chemistry, School of Chemistry and Chemical Engineering, Key Laboratory of Mesoscopic Chemistry of MOE, Nanjing University (China); He, Xiang [Nanjing Institute of Geography and Limnology, Chinese Academy of Sciences, Nanjing 210008 (China); Chen, Zhao-Xu, E-mail: zxchen@nju.edu.cn [Institute of Theoretical and Computational Chemistry, School of Chemistry and Chemical Engineering, Key Laboratory of Mesoscopic Chemistry of MOE, Nanjing University (China); Huang, Yu-Gai [Institute of Theoretical and Computational Chemistry, School of Chemistry and Chemical Engineering, Key Laboratory of Mesoscopic Chemistry of MOE, Nanjing University (China); JiangSu Second Normal University, Nanjing (China)
2015-11-05
Knowledge of the surface composition and atomic arrangement is a prerequisite for understanding the catalytic properties of an alloy catalyst. Gaining such knowledge is rather difficult, especially for alloys exhibiting surface segregation. Pd–Cu alloys are used in many fields and show surface segregation. In this paper the kinetic Monte Carlo method is used to explore the surface composition and structure and to examine the effects of bulk composition and temperature on the surface segregation of Pd–Cu alloys. It is shown that the segregation is essentially complete within 900 s at 500 K. Below 900 K and within 20 min, the enriched surface Cu atoms mainly come from the top five layers. For the first time we demonstrate that there exists a "bulk-inside flocking" or clustering phenomenon (the same component element congregates in the bulk) in Pd–Cu alloys. Our results indicate that for alloys with higher Cu content there are small Pd ensembles, such as monomers, dimers and trimers with contiguous subsurface Pd atoms. - Highlights: • Kinetic Monte Carlo was first used to study surface segregation of Pd–Cu alloys. • Bulk-inside flocking (the same component element congregating in the bulk) was observed. • Small Pd ensembles with contiguous subsurface Pd exist on the surfaces of Cu-rich alloys.
Erdmann, Thorsten; Schwarz, Ulrich S
2013-01-01
Non-processive molecular motors have to work together in ensembles in order to generate appreciable levels of force or movement. In skeletal muscle, for example, hundreds of myosin II molecules cooperate in thick filaments. In non-muscle cells, by contrast, small groups with few tens of non-muscle myosin II motors contribute to essential cellular processes such as transport, shape changes or mechanosensing. Here we introduce a detailed and analytically tractable model for this important situation. Using a three-state crossbridge model for the myosin II motor cycle and exploiting the assumptions of fast power stroke kinetics and equal load sharing between motors in equivalent states, we reduce the stochastic reaction network to a one-step master equation for the binding and unbinding dynamics (parallel cluster model) and derive the rules for ensemble movement. We find that for constant external load, ensemble dynamics is strongly shaped by the catch bond character of myosin II, which leads to an increase of th...
Constructing the equilibrium ensemble of folding pathways from short off-equilibrium simulations.
Noé, Frank; Schütte, Christof; Vanden-Eijnden, Eric; Reich, Lothar; Weikl, Thomas R
2009-11-10
Characterizing the equilibrium ensemble of folding pathways, including their relative probability, is one of the major challenges in protein folding theory today. Although this information is in principle accessible via all-atom molecular dynamics simulations, it is difficult to compute in practice because protein folding is a rare event and the affordable simulation length is typically not sufficient to observe an appreciable number of folding events, unless very simplified protein models are used. Here we present an approach that allows for the reconstruction of the full ensemble of folding pathways from simulations that are much shorter than the folding time. This approach can be applied to all-atom protein simulations in explicit solvent. It does not use a predefined reaction coordinate but is based on partitioning the state space into small conformational states and constructing a Markov model between them. A theory is presented that allows for the extraction of the full ensemble of transition pathways from the unfolded to the folded configurations. The approach is applied to the folding of a PinWW domain in explicit solvent where the folding time is two orders of magnitude larger than the length of individual simulations. The results are in good agreement with kinetic experimental data and give detailed insights about the nature of the folding process which is shown to be surprisingly complex and parallel. The analysis reveals the existence of misfolded trap states outside the network of efficient folding intermediates that significantly reduce the folding speed.
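The bookkeeping at the heart of such a Markov model, counting transitions between discretized conformational states at a fixed lag time and row-normalizing, reduces to a few lines; the trajectory below is synthetic and the estimator is the simplest non-reversible one, not the full transition-path machinery of the paper.

```python
import numpy as np

def transition_matrix(state_traj, n_states, lag=1):
    """Row-stochastic Markov transition matrix estimated from a
    discretized trajectory at the given lag time (in frames)."""
    C = np.zeros((n_states, n_states))
    for i, j in zip(state_traj[:-lag], state_traj[lag:]):
        C[i, j] += 1.0
    rows = C.sum(axis=1, keepdims=True)
    rows[rows == 0.0] = 1.0      # avoid division by zero for unvisited states
    return C / rows

# Toy discretized trajectory over 3 conformational states.
traj = [0, 0, 1, 1, 1, 2, 2, 1, 0, 0, 1, 2, 2, 2, 1]
print(transition_matrix(traj, n_states=3, lag=1))
```

In practice many short trajectories are pooled and detailed balance is usually enforced before folding pathways are extracted from the model.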
Ensemble: a web-based system for psychology survey and experiment management.
Tomic, Stefan T; Janata, Petr
2007-08-01
We provide a description of Ensemble, a suite of Web-integrated modules for managing and analyzing data associated with psychology experiments in a small research lab. The system delivers interfaces via a Web browser for creating and presenting simple surveys without the need to author Web pages and with little or no programming effort. The surveys may be extended by selecting and presenting auditory and/or visual stimuli with MATLAB and Flash to enable a wide range of psychophysical and cognitive experiments which do not require the recording of precise reaction times. Additionally, one is provided with the ability to administer and present experiments remotely. The software technologies employed by the various modules of Ensemble are MySQL, PHP, MATLAB, and Flash. The code for Ensemble is open source and available to the public, so that its functions can be readily extended by users. We describe the architecture of the system, the functionality of each module, and provide basic examples of the interfaces.
Approaching Chemical Accuracy with Quantum Monte Carlo
Petruzielo, Frank R.; Toulouse, Julien; Umrigar, C. J.
2012-01-01
A quantum Monte Carlo study of the atomization energies for the G2 set of molecules is presented. Basis size dependence of diffusion Monte Carlo atomization energies is studied with a single determinant Slater-Jastrow trial wavefunction formed from Hartree-Fock orbitals. With the largest basis set, the mean absolute deviation from experimental atomization energies for the G2 set is 3.0 kcal/mol. Optimizing the orbitals within variational Monte Carlo improves the agreem...
Mean field simulation for Monte Carlo integration
Del Moral, Pierre
2013-01-01
In the last three decades, there has been a dramatic increase in the use of interacting particle methods as a powerful tool in real-world applications of Monte Carlo simulation in computational physics, population biology, computer sciences, and statistical machine learning. Ideally suited to parallel and distributed computation, these advanced particle algorithms include nonlinear interacting jump diffusions; quantum, diffusion, and resampled Monte Carlo methods; Feynman-Kac particle models; genetic and evolutionary algorithms; sequential Monte Carlo methods; adaptive and interacting Marko
Viney, N.R.; Bormann, H.; Breuer, L.; Bronstert, A.; Croke, B.F.W.; Frede, H.; Graff, T.; Hubrechts, L.; Huisman, J.A.; Jakeman, A.J.; Kite, G.W.; Lanini, J.; Leavesley, G.; Lettenmaier, D.P.; Lindstrom, G.; Seibert, J.; Sivapalan, M.; Willems, P.
2009-01-01
This paper reports on a project to compare predictions from a range of catchment models applied to a mesoscale river basin in central Germany and to assess various ensemble predictions of catchment streamflow. The models encompass a large range in inherent complexity and input requirements. In approximate order of decreasing complexity, they are DHSVM, MIKE-SHE, TOPLATS, WASIM-ETH, SWAT, PRMS, SLURP, HBV, LASCAM and IHACRES. The models are calibrated twice using different sets of input data. The two predictions from each model are then combined by simple averaging to produce a single-model ensemble. The 10 resulting single-model ensembles are combined in various ways to produce multi-model ensemble predictions. Both the single-model ensembles and the multi-model ensembles are shown to give predictions that are generally superior to those of their respective constituent models, both during a 7-year calibration period and a 9-year validation period. This occurs despite a considerable disparity in performance of the individual models. Even the weakest of models is shown to contribute useful information to the ensembles they are part of. The best model combination methods are a trimmed mean (constructed using the central four or six predictions each day) and a weighted mean ensemble (with weights calculated from calibration performance) that places relatively large weights on the better performing models. Conditional ensembles, in which separate model weights are used in different system states (e.g. summer and winter, high and low flows) generally yield little improvement over the weighted mean ensemble. However a conditional ensemble that discriminates between rising and receding flows shows moderate improvement. An analysis of ensemble predictions shows that the best ensembles are not necessarily those containing the best individual models. Conversely, it appears that some models that predict well individually do not necessarily combine well with other models in
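The two best-performing combination rules named above, a trimmed mean of the central members and a calibration-weighted mean, reduce to a few lines of array arithmetic; the member predictions and skill weights below are placeholders rather than outputs of the ten catchment models.

```python
import numpy as np

def trimmed_mean(preds, keep=4):
    """Mean of the central `keep` member predictions for each day.

    preds : (n_models, n_days) array of streamflow predictions
    """
    s = np.sort(preds, axis=0)
    drop = (preds.shape[0] - keep) // 2
    return s[drop:drop + keep].mean(axis=0)

def weighted_mean(preds, calibration_skill):
    """Weighted-mean ensemble with weights from calibration performance."""
    w = np.asarray(calibration_skill, dtype=float)
    w = w / w.sum()
    return w @ preds

# Hypothetical daily predictions from 10 single-model ensembles.
rng = np.random.default_rng(3)
preds = rng.lognormal(mean=1.0, sigma=0.3, size=(10, 5))
skill = rng.uniform(0.4, 0.9, size=10)     # e.g. calibration-period skill scores
print(trimmed_mean(preds, keep=4))
print(weighted_mean(preds, skill))
```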
Monte Carlo Treatment Planning for Advanced Radiotherapy
Cronholm, Rickard
and validation of a Monte Carlo model of a medical linear accelerator (i), converting a CT scan of a patient to a Monte Carlo compliant phantom (ii) and translating the treatment plan parameters (including beam energy, angles of incidence, collimator settings etc.) to a Monte Carlo input file (iii). A protocol ... previous algorithms since it uses delineations of structures in order to include and/or exclude certain media in various anatomical regions. This method has the potential to reduce anatomically irrelevant media assignment. In-house MATLAB scripts translating the treatment plan parameters to Monte Carlo ...
1-D EQUILIBRIUM DISCRETE DIFFUSION MONTE CARLO
T. EVANS; ET AL
2000-08-01
We present a new hybrid Monte Carlo method for 1-D equilibrium diffusion problems in which the radiation field coexists with matter in local thermodynamic equilibrium. This method, the Equilibrium Discrete Diffusion Monte Carlo (EqDDMC) method, combines Monte Carlo particles with spatially discrete diffusion solutions. We verify the EqDDMC method with computational results from three slab problems. The EqDDMC method represents an incremental step toward applying this hybrid methodology to non-equilibrium diffusion, where it could be simultaneously coupled to Monte Carlo transport.
Ensemble Kalman filtering without the intrinsic need for inflation
M. Bocquet
2011-10-01
The main intrinsic source of error in the ensemble Kalman filter (EnKF) is sampling error. External sources of error, such as model error or deviations from Gaussianity, depend on the dynamical properties of the model. Sampling errors can lead to instability of the filter which, as a consequence, often requires inflation and localization. The goal of this article is to derive an ensemble Kalman filter which is less sensitive to sampling errors. A prior probability density function conditional on the forecast ensemble is derived using Bayesian principles. Even though this prior is built upon the assumption that the ensemble is Gaussian-distributed, it is different from the Gaussian probability density function defined by the empirical mean and the empirical error covariance matrix of the ensemble, which is implicitly used in traditional EnKFs. This new prior generates a new class of ensemble Kalman filters, called the finite-size ensemble Kalman filter (EnKF-N). One deterministic variant, the finite-size ensemble transform Kalman filter (ETKF-N), is derived. It is tested on the Lorenz '63 and Lorenz '95 models. In this context, ETKF-N is shown to be stable without inflation for ensemble sizes greater than the model unstable subspace dimension, at the same numerical cost as the ensemble transform Kalman filter (ETKF). One variant of ETKF-N seems to systematically outperform the ETKF with optimally tuned inflation. However, it is shown that ETKF-N does not account for all sampling errors, and necessitates localization like any EnKF whenever the ensemble size is too small. In order to explore the need for inflation in this small-ensemble-size regime, a local version of the new class of filters is defined (LETKF-N) and tested on the Lorenz '95 toy model. Whatever the size of the ensemble, the filter is stable. Its performance without inflation is slightly inferior to that of LETKF with optimally tuned inflation for small intervals between updates, and
Decadal climate predictions improved by ocean ensemble dispersion filtering
Kadow, C.; Illing, S.; Kröner, I.; Ulbrich, U.; Cubasch, U.
2017-06-01
Decadal predictions by Earth system models aim to capture the state and phase of the climate several years in advance. Atmosphere-ocean interaction plays an important role for such climate forecasts. While short-term weather forecasts represent an initial value problem and long-term climate projections represent a boundary condition problem, the decadal climate prediction falls in-between these two time scales. In recent years, more precise initialization techniques of coupled Earth system models and increased ensemble sizes have improved decadal predictions. However, climate models in general start losing the initialized signal and its predictive skill from one forecast year to the next. Here we show that the climate prediction skill of an Earth system model can be improved by a shift of the ocean state toward the ensemble mean of its individual members at seasonal intervals. We found that this procedure, called ensemble dispersion filter, results in more accurate results than the standard decadal prediction. Global mean and regional temperature, precipitation, and winter cyclone predictions show an increased skill up to 5 years ahead. Furthermore, the novel technique outperforms predictions with larger ensembles and higher resolution. Our results demonstrate how decadal climate predictions benefit from ocean ensemble dispersion filtering toward the ensemble mean. Plain Language Summary: Decadal predictions aim to predict the climate several years in advance. Atmosphere-ocean interaction plays an important role for such climate forecasts. The ocean memory due to its heat capacity holds big potential skill. In recent years, more precise initialization techniques of coupled Earth system models (incl. atmosphere and ocean) have improved decadal predictions. Ensembles are another important aspect. Applying slightly perturbed predictions to trigger the famous butterfly effect results in an ensemble. Instead of evaluating one prediction, but the
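The core operation of the ensemble dispersion filter, relaxing each member's ocean state toward the ensemble mean at fixed intervals, can be sketched generically as below; the relaxation weight, array shapes and field are invented for illustration, and the actual filter presumably operates on the full prognostic ocean state inside the coupled model.

```python
import numpy as np

def disperse_toward_mean(ocean_states, alpha=0.5):
    """Shift every ensemble member's ocean state toward the ensemble mean.

    ocean_states : (n_members, ...) array of a prognostic ocean field
    alpha        : fraction of the member-minus-mean anomaly removed
                   (alpha = 0 leaves members unchanged, alpha = 1 collapses
                   them onto the mean)
    """
    mean = ocean_states.mean(axis=0, keepdims=True)
    return (1.0 - alpha) * ocean_states + alpha * mean

# Hypothetical 10-member ensemble of a small 2-D ocean temperature field.
rng = np.random.default_rng(7)
states = rng.normal(15.0, 0.5, size=(10, 4, 4))
states = disperse_toward_mean(states, alpha=0.5)   # applied at seasonal intervals
```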
Rare event simulation using Monte Carlo methods
Rubino, Gerardo
2009-01-01
In a probabilistic model, a rare event is an event with a very small probability of occurrence. The forecasting of rare events is a formidable task but is important in many areas: for instance, a catastrophic failure in a transport system or in a nuclear power plant, or the failure of an information processing system in a bank or in the communication network of a group of banks, leading to financial losses. Being able to evaluate the probability of rare events is therefore a critical issue. Monte Carlo methods, the simulation of corresponding models, are used to analyze rare events. This book sets out to present the mathematical tools available for the efficient simulation of rare events. Importance sampling and splitting are presented along with an exposition of how to apply these tools to a variety of fields ranging from performance and dependability evaluation of complex systems, typically in computer science or in telecommunications, to chemical reaction analysis in biology or particle transport in physics. ...
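As a minimal illustration of the importance-sampling idea the book covers, the sketch below estimates a Gaussian tail probability by sampling from a shifted proposal and reweighting; the toy target is chosen only because its exact value is easy to check, and it stands in for genuinely expensive rare-event models.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
threshold = 4.0                      # rare event: X > 4 for X ~ N(0, 1)
n = 100_000

# Crude Monte Carlo: almost no samples hit the rare event.
x = rng.normal(size=n)
crude = np.mean(x > threshold)

# Importance sampling: draw from N(threshold, 1) and reweight by the
# likelihood ratio of the target density to the proposal density.
y = rng.normal(loc=threshold, size=n)
weights = norm.pdf(y) / norm.pdf(y, loc=threshold)
importance = np.mean((y > threshold) * weights)

print(crude, importance, norm.sf(threshold))   # exact value is about 3.2e-5
```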
Hg(2+) mediated quinazoline ensemble for highly selective recognition of Cysteine.
Anand, Thangaraj; Sivaraman, Gandhi; Chellappa, Duraisamy
2014-04-05
A fluorimetric sensor for the Hg(2+) ion and Cysteine based on a quinazoline platform was designed and synthesized in a one-step reaction and characterized using common spectroscopic methods. Time-Dependent Density Functional Theory calculations show that the probe behaves as an "ON-OFF" fluorescence quenching sensor via electron transfer/heavy atom effect. The receptor was found to exhibit selective fluorescence quenching behavior over the other competitive metal ions, and the receptor-Hg(2+) ensemble acts as an efficient "OFF-ON" sensor for Cysteine. Moreover, this sensor has also been successfully applied to the detection of Hg(2+) in natural water samples with good recovery.
Kadoura, Ahmad
2011-06-06
Lennard-Jones (L-J) and Buckingham exponential-6 (exp-6) potential models were used to produce isotherms for methane at temperatures below and above the critical one. A molecular simulation approach, specifically Monte Carlo simulation, was employed to create these isotherms, working with both canonical and Gibbs ensembles. Experiments in the canonical ensemble with each model were conducted to estimate pressures at a range of temperatures above the methane critical temperature. Results were collected and compared to experimental data existing in the literature; both models showed close agreement with the experimental data. In parallel, experiments below the critical temperature were run in the Gibbs ensemble using the L-J model only. Upon comparing results with experimental ones, a good fit was obtained with small deviations. The work was further developed by adding some statistical studies in order to achieve better understanding and interpretation of the quantities estimated by the simulation. Methane phase diagrams were successfully reproduced by an efficient molecular simulation technique with different potential models. This relatively simple demonstration shows how powerful molecular simulation methods can be; hence further applications to more complicated systems are considered. Prediction of the phase behavior of elemental sulfur in sour natural gases has been an interesting and challenging field in the oil and gas industry. Determination of elemental sulfur solubility conditions helps avoid the various problems caused by its dissolution during gas production and transportation processes. For this purpose, further enhancement of the methods used is to be considered in order to successfully simulate elemental sulfur phase behavior in sour natural gas mixtures.
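The two pair potentials named in the abstract have standard functional forms, sketched below; the methane-like parameter values are placeholders and are not the ones used in the study.

```python
import numpy as np

def lennard_jones(r, epsilon, sigma):
    """12-6 Lennard-Jones pair potential."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 ** 2 - sr6)

def buckingham_exp6(r, epsilon, r_m, alpha):
    """Buckingham exponential-6 pair potential (standard exp-6 form)."""
    pref = epsilon / (1.0 - 6.0 / alpha)
    return pref * ((6.0 / alpha) * np.exp(alpha * (1.0 - r / r_m))
                   - (r_m / r) ** 6)

r = np.linspace(3.0, 10.0, 5)                        # angstrom, illustrative
print(lennard_jones(r, epsilon=1.23, sigma=3.73))    # placeholder parameters
print(buckingham_exp6(r, epsilon=1.23, r_m=4.19, alpha=15.0))
```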
W. Colgan
2012-11-01
Due to the abundance of observational datasets collected since the onset of its retreat (c. 1983), Columbia Glacier, Alaska, provides an exciting modeling target. We perform Monte Carlo simulations of the form and flow of Columbia Glacier, using a 1-D (depth-integrated) flowline model, over a wide range of parameter values and forcings. An ensemble filter is imposed following spin-up to ensure that only simulations that accurately reproduce the observed pre-retreat glacier geometry are retained; all other simulations are discarded. The selected ensemble of simulations reasonably reproduces numerous highly transient post-retreat observed datasets. The selected ensemble mean projection suggests that Columbia Glacier will achieve a new dynamic equilibrium (i.e., "stable" ice geometry) c. 2020, at which time the iceberg calving rate will have returned to approximately pre-retreat values. Comparison of the observed 1957 and 2007 glacier geometries with the projected 2100 glacier geometry suggests that Columbia Glacier had already discharged ~82% of its projected 1957–2100 sea level rise contribution by 2007. This case study therefore highlights the difficulties associated with the future extrapolation of observed glacier mass loss rates that are dominated by iceberg calving.
A Hybrid Monte Carlo Sampling Filter for Non-Gaussian Data Assimilation
Adrian Sandu
2015-12-01
Data assimilation combines information from models, measurements, and priors to obtain improved estimates of the state of a dynamical system such as the atmosphere. Ensemble-based data assimilation approaches such as the Ensemble Kalman filter (EnKF) have gained wide popularity due to their simple formulation, ease of implementation, and good practical results. Many of these methods are derived under the assumption that the underlying probability distributions are Gaussian. It is well accepted, however, that the Gaussianity assumption is too restrictive when applied to large nonlinear models, nonlinear observation operators, and large levels of uncertainty. When the Gaussianity assumptions are severely violated, the performance of EnKF variants degrades. This paper proposes a new ensemble-based data assimilation method, named the sampling filter, which obtains the analysis by sampling directly from the posterior distribution. The sampling strategy is based on a Hybrid Monte Carlo (HMC) approach that can handle non-Gaussian probability distributions. Numerical experiments are carried out using the Lorenz-96 model and observation operators with different levels of non-linearity and differentiability. The proposed filter is also tested with a shallow water model on a sphere with a linear observation operator. Numerical results show that the sampling filter performs well even in highly nonlinear situations where the traditional filters diverge.
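The building block of the HMC sampling strategy is a leapfrog trajectory followed by a Metropolis accept/reject test on the total (potential plus kinetic) energy; a generic sketch follows, with a two-dimensional Gaussian standing in for the filter's posterior, which it is not.

```python
import numpy as np

rng = np.random.default_rng(42)

def hmc_step(x, log_prob_grad, log_prob, step=0.1, n_leapfrog=20):
    """One Hybrid Monte Carlo step targeting exp(log_prob)."""
    p = rng.normal(size=x.shape)                      # fresh momentum
    x_new, p_new = x.copy(), p.copy()
    p_new += 0.5 * step * log_prob_grad(x_new)        # half kick
    for _ in range(n_leapfrog - 1):
        x_new += step * p_new                         # drift
        p_new += step * log_prob_grad(x_new)          # full kick
    x_new += step * p_new
    p_new += 0.5 * step * log_prob_grad(x_new)        # final half kick
    # Metropolis test on the change in total energy.
    dH = (log_prob(x_new) - 0.5 * p_new @ p_new) - (log_prob(x) - 0.5 * p @ p)
    return x_new if np.log(rng.random()) < dH else x

# Illustrative 2-D standard normal target.
log_prob = lambda x: -0.5 * x @ x
log_prob_grad = lambda x: -x
x = np.zeros(2)
samples = []
for _ in range(1000):
    x = hmc_step(x, log_prob_grad, log_prob)
    samples.append(x.copy())
```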
McGrath, M; Siepmann, J I; Kuo, I W; Mundy, C J; VandeVondele, J; Hutter, J; Mohamed, F; Krack, M
2004-12-02
A series of first principles Monte Carlo simulations in the isobaric-isothermal ensemble were carried out for liquid water at ambient conditions (T = 298 K and p = 1 atm). The Becke-Lee-Yang-Parr (BLYP) exchange and correlation energy functionals and norm-conserving Goedecker-Teter-Hutter (GTH) pseudopotentials were employed with the CP2K simulation package to examine systems consisting of 64 water molecules. The fluctuations in the system volume encountered in simulations in the isobaric-isothermal ensemble require a reconsideration of the suitability of the typical charge density cutoff and the regular grid generation method previously used for the computation of the electrostatic energy in first principles simulations in the microcanonical or canonical ensembles. In particular, it is noted that a much higher cutoff is needed and that the most computationally efficient method of creating grids can result in poor simulations. Analysis of the simulation trajectories using a very large charge density cutoff of 1200 Ry and four different grid generation methods points to a substantially underestimated liquid density of about 0.85 g/cm^3, resulting in a somewhat understructured liquid (with a value of about 2.7 for the height of the first peak in the oxygen/oxygen radial distribution function) for BLYP-GTH water at ambient conditions.
Yun, Su-Won; Park, Shin-Ae; Kim, Tae-June; Kim, Jun-Hyuk; Pak, Gi-Woong; Kim, Yong-Tae
2017-02-08
A simple, inexpensive approach is proposed for enhancing the durability of automotive proton exchange membrane fuel cells by selective promotion of the hydrogen oxidation reaction (HOR) and suppression of the oxygen reduction reaction (ORR) at the anode in startup/shutdown events. Dodecanethiol forms a self-assembled monolayer (SAM) on the surface of Pt particles, thus decreasing the number of Pt ensemble sites. Interestingly, by controlling the dodecanethiol concentration during SAM formation, the number of ensemble sites can be precisely optimized such that it is sufficient for the HOR but insufficient for the ORR. Thus, a Pt surface with an SAM of dodecanethiol clearly effects HOR-selective electrocatalysis. Clear HOR selectivity is demonstrated in unit cell tests with the actual membrane electrode assembly, as well as in an electrochemical three-electrode setup with a thin-film rotating disk electrode configuration. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
Error in Monte Carlo, quasi-error in Quasi-Monte Carlo
Kleiss, R. H. P.; Lazopoulos, A.
2006-01-01
While the Quasi-Monte Carlo method of numerical integration achieves smaller integration error than standard Monte Carlo, its use in particle physics phenomenology has been hindered by the absence of a reliable way to estimate that error. The standard Monte Carlo error estimator relies on the assumption that the points are generated independently of each other and, therefore, fails to account for the error improvement advertised by the Quasi-Monte Carlo method. We advocate the construction o...
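To make the point concrete: the standard Monte Carlo error estimator is s/√N with s the sample standard deviation, and it is valid only for independent points. For quasi-random point sets, one common workaround (not necessarily the construction advocated by the authors, whose proposal is truncated above) is to use several independently randomized copies of the point set, e.g. random shifts of a rank-1 lattice rule, and estimate the error from the spread across copies. The generating vector below is arbitrary and chosen only for illustration:

```python
import numpy as np

def mc_error(f, n, dim, rng):
    """Plain Monte Carlo estimate and its standard error (assumes i.i.d. points)."""
    vals = f(rng.random((n, dim)))
    return vals.mean(), vals.std(ddof=1) / np.sqrt(n)

def shifted_lattice_error(f, n, dim, generating_vector, n_shifts, rng):
    """Randomly shifted rank-1 lattice rule; error estimated from the spread over shifts."""
    k = np.arange(n)[:, None]
    base = (k * np.asarray(generating_vector)[None, :] / n) % 1.0   # lattice points
    estimates = []
    for _ in range(n_shifts):
        pts = (base + rng.random(dim)) % 1.0                        # random shift mod 1
        estimates.append(f(pts).mean())
    estimates = np.asarray(estimates)
    return estimates.mean(), estimates.std(ddof=1) / np.sqrt(n_shifts)

# Example: integrate f(x) = prod_j (3/2) sqrt(x_j) over the unit cube (exact value 1)
rng = np.random.default_rng(0)
f = lambda x: np.prod(1.5 * np.sqrt(x), axis=1)
print(mc_error(f, 4096, 3, rng))
print(shifted_lattice_error(f, 4096, 3, generating_vector=[1, 1571, 1023],
                            n_shifts=10, rng=rng))
```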
A modified iterative ensemble Kalman filter data assimilation method
Xu, Baoxiong; Bai, Yulong; Wang, Yizhao; Li, Zhe; Ma, Boyang
2017-08-01
High nonlinearity is a typical characteristic of data assimilation systems, and iterative ensemble-based methods have attracted considerable research attention for dealing with nonlinearity problems. To solve the local convergence problem of the iterative ensemble Kalman filter, a modified iterative ensemble Kalman filter algorithm is put forward, based on a global convergence strategy from the perspective of a Gauss-Newton iteration. Through self-adaptation, the step factor is adjusted so that every iteration moves toward the expected values during the data assimilation process. A sensitivity experiment was carried out in the low-dimensional Lorenz-63 chaotic system as well as the Lorenz-96 model. The new method was tested with respect to changes in ensemble size, observation variance, and inflation factor, among other aspects. Comparisons were made with both a traditional ensemble Kalman filter and an iterative ensemble Kalman filter. The results show that the modified iterative ensemble Kalman filter algorithm is a data assimilation method that can effectively estimate the state of a strongly nonlinear system.
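The global convergence strategy can be read as a damped Gauss-Newton iteration: the full update direction is kept, but the step factor is shrunk adaptively until the least-squares cost actually decreases. A generic sketch of such an iteration (written without the ensemble approximation of the Jacobian, which is where the EnKF machinery would enter):

```python
import numpy as np

def damped_gauss_newton(residual, jacobian, x0, max_iter=50, tol=1e-8):
    """Gauss-Newton iteration with a self-adaptive (backtracking) step factor.

    residual : callable r(x) returning the residual vector
    jacobian : callable J(x) returning the Jacobian of r at x
    """
    x = np.asarray(x0, dtype=float)
    cost = 0.5 * residual(x) @ residual(x)
    for _ in range(max_iter):
        r, J = residual(x), jacobian(x)
        dx = np.linalg.lstsq(J, -r, rcond=None)[0]   # full Gauss-Newton direction
        step = 1.0
        while step > 1e-6:                           # shrink step until the cost decreases
            x_trial = x + step * dx
            cost_trial = 0.5 * residual(x_trial) @ residual(x_trial)
            if cost_trial < cost:
                x, cost = x_trial, cost_trial
                break
            step *= 0.5
        if np.linalg.norm(step * dx) < tol:          # converged (or no further progress)
            break
    return x
```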
Progressive freezing of interacting spins in isolated finite magnetic ensembles
Bhattacharya, Kakoli; Dupuis, Veronique; Le-Roy, Damien; Deb, Pritam
2017-02-01
Self-organization of magnetic nanoparticles into secondary nanostructures provides an innovative way of designing functional nanomaterials with novel properties, different from those of the constituent primary nanoparticles as well as their bulk counterparts. The collective magnetic properties of such complex close packings of magnetic nanoparticles make them more appealing than individual magnetic nanoparticles in many technological applications. This work reports the collective magnetic behaviour of magnetic ensembles comprising single-domain Fe3O4 nanoparticles. The present work reveals that the ensemble formation is based on the re-orientation and attachment of the nanoparticles in an iso-oriented fashion at the mesoscale regime. Comprehensive dc magnetic measurements show the prevalence of strong interparticle interactions in the ensembles. Due to the close-range organization of primary Fe3O4 nanoparticles in the ensemble, the spins of the individual nanoparticles interact through dipolar interactions, as realized from remnant magnetization measurements. A signature of super-spin-glass-like behaviour in the ensembles is observed in the memory studies carried out under field-cooled conditions. Progressive freezing of spins in the ensembles is corroborated by the Vogel-Fulcher fit of the susceptibility data. Dynamic scaling of the relaxation reasserts slow spin dynamics, substantiating cluster-spin-glass-like behaviour in the ensembles.
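The Vogel-Fulcher analysis mentioned above fits the frequency dependence of the ac-susceptibility freezing temperature to τ = τ₀ exp[E_a / k_B(T_f − T₀)], where a finite T₀ signals interparticle interactions. A hedged sketch of such a fit with scipy; the data arrays are placeholders standing in for measured peak temperatures at each drive frequency:

```python
import numpy as np
from scipy.optimize import curve_fit

def ln_vogel_fulcher(T_f, ln_tau0, Ea_over_kB, T0):
    """ln tau = ln tau0 + (Ea / k_B) / (T_f - T0)."""
    return ln_tau0 + Ea_over_kB / (T_f - T0)

# Placeholder data: freezing temperature T_f (K) at each ac drive frequency f (Hz)
freq = np.array([1.0, 10.0, 100.0, 1e3, 1e4])
T_f = np.array([61.4, 62.2, 63.4, 65.0, 67.4])
tau = 1.0 / (2 * np.pi * freq)                       # observation time per frequency

popt, _ = curve_fit(ln_vogel_fulcher, T_f, np.log(tau), p0=[-21.0, 120.0, 55.0])
ln_tau0, Ea_over_kB, T0 = popt
print(f"tau0 = {np.exp(ln_tau0):.2e} s, Ea/k_B = {Ea_over_kB:.0f} K, T0 = {T0:.1f} K")
# A nonzero T0 indicates interacting, progressively freezing spins;
# T0 -> 0 would recover the noninteracting Arrhenius (Neel-Brown) limit.
```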
Soil texture reclassification by an ensemble model
Cisty, Milan; Hlavcova, Kamila
2015-04-01
a prerequisite for solving some subsequent task, this bias is propagated to the subsequent modelling or other work. Therefore, for the sake of achieving more general and precise outputs while solving such tasks, the authors of the present paper propose a hybrid approach, which has the potential to obtain improved results. Although the authors continue to recommend the use of the mentioned parametric PSD models in the proposed methodology, the final prediction is made by an ensemble machine learning algorithm based on regression trees, the so-called Random Forest algorithm, which is built on top of the outputs of such models, which serve as ensemble members. An improvement in precision was demonstrated, and it is documented in the paper that the ensemble model worked better than any of its constituents. References: Nemes, A., Wosten, J.H.M., Lilly, A., Voshaar, J.H.O.: Evaluation of different procedures to interpolate particle-size distributions to achieve compatibility within soil databases. Geoderma 90, 187-202 (1999); Hwang, S.: Effect of texture on the performance of soil particle-size distribution models. Geoderma 123, 363-371 (2004); Botula, Y.D., Cornelis, W.M., Baert, G., Mafuka, P., Van Ranst, E.: Particle size distribution models for soils of the humid tropics. J Soils Sediments 13, 686-698 (2013)
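A minimal sketch of the kind of stacking described above, assuming the outputs of several parametric particle-size distribution (PSD) models are already available as feature columns and the target is the measured texture fraction of interest; the arrays here are synthetic placeholders, not the soil database used by the authors:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Each column holds the prediction of one parametric PSD model for the fraction of interest
psd_model_outputs = rng.random((500, 4))                                   # placeholder members
measured_fraction = psd_model_outputs.mean(axis=1) + 0.05 * rng.standard_normal(500)

# Random Forest built on top of the parametric model outputs (the "ensemble members")
rf = RandomForestRegressor(n_estimators=500, min_samples_leaf=3, random_state=0)
scores = cross_val_score(rf, psd_model_outputs, measured_fraction,
                         scoring="neg_root_mean_squared_error", cv=5)
print("Cross-validated RMSE of the stacked Random Forest:", -scores.mean())
```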
Monte Carlo Simulation for Statistical Decay of Compound Nucleus
Chadwick M.B.
2012-02-01
We perform Monte Carlo simulations of neutron and γ-ray emissions from a compound nucleus based on the Hauser-Feshbach statistical theory. This Monte Carlo Hauser-Feshbach (MCHF) method gives us correlated information between emitted particles and γ-rays. It will be a powerful tool in many applications, as nuclear reactions can be probed in a more microscopic way. We have been developing the MCHF code, CGM, which solves the Hauser-Feshbach theory with the Monte Carlo method. The code includes all the standard models used in a standard Hauser-Feshbach code, namely the particle transmission generator, the level density module, the interface to the discrete level database, and so on. CGM can emit multiple neutrons, as long as the excitation energy of the compound nucleus is larger than the neutron separation energy. The γ-ray competition is always included at each compound decay stage, and the angular momentum and parity are conserved. Some calculations for the fission fragment 140Xe are shown as examples of the MCHF method, and the correlation between the neutrons and γ-rays is discussed.
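As a toy illustration of the Monte Carlo Hauser-Feshbach idea (emphatically not the CGM code itself, which uses proper transmission coefficients and level densities), the sketch below follows one compound nucleus per history: neutrons are emitted with energies drawn from a crude evaporation-like spectrum as long as the excitation energy exceeds the neutron separation energy, after which the residual energy is carried away by γ-rays; repeating many histories yields correlated neutron/γ multiplicities.

```python
import numpy as np

def decay_history(e_excitation, separation_energy, temperature, rng):
    """One Monte Carlo history of sequential neutron then gamma emission (toy model, MeV)."""
    neutrons, gammas = [], []
    e = e_excitation
    while e > separation_energy:                   # neutron emission energetically allowed
        e_n = rng.exponential(temperature)         # crude evaporation-like spectrum
        e_n = min(e_n, e - separation_energy)      # cannot exceed the available energy
        neutrons.append(e_n)
        e -= separation_energy + e_n
    while e > 0.1:                                 # de-excite by gammas toward the ground state
        e_g = rng.uniform(0.1, e)
        gammas.append(e_g)
        e -= e_g
    return neutrons, gammas

rng = np.random.default_rng(1)
histories = [decay_history(12.0, 5.0, 0.8, rng) for _ in range(10000)]
nu = np.array([len(n) for n, g in histories])
print("mean neutron multiplicity in the toy model:", nu.mean())
```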
Ensemble Deep Learning for Biomedical Time Series Classification
Lin-peng Jin
2016-01-01
Ensemble learning has been proven to improve generalization ability effectively in both theory and practice. In this paper, we first briefly outline the current status of research on it. Then, a new deep neural network-based ensemble method that integrates filtering views, local views, distorted views, explicit training, implicit training, subview prediction, and Simple Average is proposed for biomedical time series classification. Finally, we validate its effectiveness on the Chinese Cardiovascular Disease Database, which contains a large number of electrocardiogram recordings. The experimental results show that the proposed method has certain advantages compared to some well-known ensemble methods, such as Bagging and AdaBoost.
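The Simple Average combination named above is just the mean of the member networks' class-probability outputs. A minimal sketch, assuming each trained member exposes probability predictions of shape (n_samples, n_classes):

```python
import numpy as np

def simple_average_ensemble(member_probas):
    """Combine member networks by averaging their class-probability outputs.

    member_probas : list of arrays, each of shape (n_samples, n_classes)
    Returns the predicted class index per sample.
    """
    avg = np.mean(np.stack(member_probas, axis=0), axis=0)   # element-wise average
    return avg.argmax(axis=1)
```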
An Improved Particle Swarm Optimization Algorithm Based on Ensemble Technique
SHI Yan; HUANG Cong-ming
2006-01-01
An improved particle swarm optimization (PSO) algorithm based on an ensemble technique is presented. The algorithm combines some previous best positions (pbest) of the particles to obtain an ensemble position (Epbest), which is used to replace the global best position (gbest). It is compared with the standard PSO algorithm invented by Kennedy and Eberhart and with some improved PSO algorithms on three different benchmark functions. The simulation results show that the improved PSO based on the ensemble technique obtains better solutions than the standard PSO and some other improved algorithms in all test cases.
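A hedged sketch of the idea: the velocity update keeps the standard PSO form, but gbest is replaced by an ensemble position Epbest built from several personal bests. Here Epbest is a simple average of the k best pbest positions; the exact combination rule in the paper may differ.

```python
import numpy as np

def pso_ensemble_step(x, v, pbest, pbest_cost, w=0.7, c1=1.5, c2=1.5, k=5, rng=np.random):
    """One velocity/position update in which an ensemble position Epbest replaces gbest.

    x, v       : particle positions and velocities, shape (n_particles, dim)
    pbest      : personal best positions, same shape
    pbest_cost : cost of each personal best (lower is better)
    """
    n_particles, dim = x.shape
    best_idx = np.argsort(pbest_cost)[:k]
    epbest = pbest[best_idx].mean(axis=0)          # ensemble of the k best personal bests
    r1 = rng.random((n_particles, dim))
    r2 = rng.random((n_particles, dim))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (epbest - x)
    return x + v, v
```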
Deterministic entanglement of Rydberg ensembles by engineered dissipation
Dasari, Durga; Mølmer, Klaus
2014-01-01
We propose a scheme that employs dissipation to deterministically generate entanglement in an ensemble of strongly interacting Rydberg atoms. With a combination of microwave driving between different Rydberg levels and a resonant laser coupling to a short-lived atomic state, the ensemble can be driven towards a dark steady state that entangles all atoms. The long-range resonant dipole-dipole interaction between different Rydberg states extends the entanglement beyond the van der Waals interaction range, with perspectives for entangling large and distant ensembles.
The stochastic separatrix and the reaction coordinate for complex systems.
Antoniou, Dimitri; Schwartz, Steven D
2009-04-21
We present a new approach to the identification of degrees of freedom which comprise a reaction coordinate in a complex system. The method begins with the generation of an ensemble of reactive trajectories. Each trajectory is analyzed for its equicommittor position or transition state; then the transition state ensemble is identified as the stochastic separatrix. Numerical analysis of the points along the separatrix for variability of coordinate location correctly identifies the components of the reaction coordinate in a test system of a double well coupled to a promoting vibration and a bath of linearly coupled oscillators.
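Identifying the equicommittor positions requires estimating, for configurations drawn from reactive trajectories, the committor p_B: the fraction of trajectories launched from that configuration with fresh momenta that commit to the product basin before the reactant basin; the stochastic separatrix is the set of configurations with p_B ≈ 0.5. A generic sketch under these assumptions (the shooting routine is a placeholder, not the authors' implementation):

```python
def committor(config, shoot, n_shots=100):
    """Estimate p_B for one configuration by repeated shooting.

    shoot(config) -> True if a trajectory launched with fresh momenta reaches
    the product basin before the reactant basin (placeholder callable).
    """
    return sum(shoot(config) for _ in range(n_shots)) / n_shots

def stochastic_separatrix(configs, shoot, n_shots=100, tol=0.1):
    """Return the configurations whose committor is ~0.5 (the transition state ensemble)."""
    return [c for c in configs if abs(committor(c, shoot, n_shots) - 0.5) < tol]
```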
Langevin Monte Carlo filtering for target tracking
Iglesias Garcia, Fernando; Bocquel, Melanie; Driessen, Hans
2015-01-01
This paper introduces the Langevin Monte Carlo Filter (LMCF), a particle filter with a Markov chain Monte Carlo algorithm which draws proposals by simulating Hamiltonian dynamics. This approach is well suited to non-linear filtering problems in high dimensional state spaces where the bootstrap filte
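For reference, one common way to draw such gradient-informed MCMC proposals is the Metropolis-adjusted Langevin step sketched below; it is a generic building block, not necessarily the exact proposal used in the LMCF:

```python
import numpy as np

def mala_step(x, log_post, grad_log_post, eps=0.05, rng=np.random):
    """One Metropolis-adjusted Langevin (MALA) step targeting exp(log_post)."""
    mean_fwd = x + 0.5 * eps * grad_log_post(x)
    prop = mean_fwd + np.sqrt(eps) * rng.standard_normal(x.shape)
    mean_bwd = prop + 0.5 * eps * grad_log_post(prop)
    # The acceptance ratio must include the asymmetric Gaussian proposal densities
    log_q_fwd = -np.sum((prop - mean_fwd) ** 2) / (2 * eps)
    log_q_bwd = -np.sum((x - mean_bwd) ** 2) / (2 * eps)
    log_alpha = log_post(prop) - log_post(x) + log_q_bwd - log_q_fwd
    return prop if np.log(rng.uniform()) < log_alpha else x
```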
Large margin classifier-based ensemble tracking
Wang, Yuru; Liu, Qiaoyuan; Yin, Minghao; Wang, ShengSheng
2016-07-01
In recent years, many studies have considered visual tracking as a two-class classification problem. The key problem is to construct a classifier with sufficient accuracy in distinguishing the target from its background and sufficient generalization ability in handling new frames. However, variable tracking conditions challenge the existing methods. The difficulty mainly comes from the confused boundary between the foreground and background. This paper handles this difficulty by generalizing the classifier's learning step. By introducing the distribution data of the samples, the classifier learns more essential characteristics for discriminating the two classes. Specifically, the samples are represented in a multiscale visual model. For features with different scales, several large margin distribution machines (LDMs) with adaptive kernels are combined in a Bayesian way into a strong classifier, where, in order to improve the accuracy and generalization ability, not only the margin distance but also the sample distribution is optimized in the learning step. Comprehensive experiments are performed on several challenging video sequences; through parameter analysis and field comparison, the proposed LDM-combined ensemble tracker is demonstrated to perform with sufficient accuracy and generalization ability in handling various typical tracking difficulties.
Model error estimation in ensemble data assimilation
S. Gillijns
2007-01-01
A new methodology is proposed to estimate and account for systematic model error in linear filtering as well as in nonlinear ensemble-based filtering. Our results extend the work of Dee and Todling (2000) on constant bias errors to time-varying model errors. In contrast to existing methodologies, the new filter can also deal with the case where no dynamical model for the systematic error is available. In the latter case, the applicability is limited by a matrix rank condition which has to be satisfied in order for the filter to exist. The performance of the filter developed in this paper is limited by the availability and the accuracy of observations and by the variance of the stochastic model error component. The effect of these aspects on the estimation accuracy is investigated in several numerical experiments using the Lorenz (1996) model. Experimental results indicate that the availability of a dynamical model for the systematic error significantly reduces the variance of the model error estimates, but has only a minor effect on the estimates of the system state. The filter is able to estimate additive model error of any type, provided that the rank condition is satisfied and that the stochastic errors and measurement errors are significantly smaller than the systematic errors. The results of this study are encouraging. However, it remains to be seen how the filter performs in more realistic applications.
Variety of synchronous regimes in neuronal ensembles
Komarov, M. A.; Osipov, G. V.; Suykens, J. A. K.
2008-09-01
We consider a Hodgkin-Huxley-type model of oscillatory activity in neurons of the snail Helix pomatia. This model has a distinctive feature: it demonstrates multistability between oscillatory and silent modes, which is typical for thalamocortical neurons. A single neuron cell can demonstrate a variety of oscillatory activity: regular and chaotic spiking and bursting behavior. We study collective phenomena in small and large arrays of nonidentical cells coupled by models of electrical and chemical synapses. Two single elements coupled by electrical coupling show different types of synchronous behavior, in particular in-phase and antiphase synchronous regimes. In an ensemble of three inhibitory synaptically coupled elements, the phenomenon of sequential synchronous dynamics is observed. We study synchronization phenomena in chains of nonidentical neurons with different oscillatory behavior, coupled by electrical and chemical synapses. Various regimes of phase synchronization are observed: (i) synchronous regular and chaotic spiking; (ii) synchronous regular and chaotic bursting; and (iii) synchronous regular and chaotic bursting with different numbers of spikes inside the bursts. We detect and study the effect of collective synchronous burst generation due to cluster formation and oscillatory death.
General approaches in ensemble quantum computing
V Vimalan; N Chandrakumar
2008-01-01
We have developed methodology for NMR quantum computing focusing on enhancing the efficiency of initialization, of logic gate implementation, and of readout. Our general strategy involves the application of rotating frame pulse sequences to prepare pseudopure states and to perform logic operations. We demonstrate our methodology experimentally for both homonuclear and heteronuclear spin ensembles. On model two-spin systems, the initialization time of one of our sequences is three-fourths (in the heteronuclear case) or one-fourth (in the homonuclear case) of that of the typical pulsed free precession sequences, attaining the same initialization efficiency. We have implemented the logical SWAP operation in homonuclear AMX spin systems using selective isotropic mixing, reducing the duration to a third compared to the standard re-focused INEPT-type sequence. We introduce the 1D version for readout of the rotating frame SWAP operation, in an attempt to reduce readout time. We further demonstrate the Hadamard mode of 1D SWAP, which offers a 2N-fold reduction in experiment time for a system with N working bits, attaining the same sensitivity as the standard 1D version.
Ensemble LUT classification for degraded document enhancement
Obafemi-Ajayi, Tayo; Agam, Gady; Frieder, Ophir
2008-01-01
The fast evolution of scanning and computing technologies has led to the creation of large collections of scanned paper documents. Examples of such collections include historical collections, legal depositories, medical archives, and business archives. Moreover, in many situations, such as legal litigation and security investigations, scanned collections are being used to facilitate systematic exploration of the data. It is almost always the case that scanned documents suffer from some form of degradation. Large degradations make documents hard to read and substantially deteriorate the performance of automated document processing systems. Enhancement of degraded document images is normally performed assuming global degradation models. When the degradation is large, global degradation models do not perform well. In contrast, we propose to estimate local degradation models and use them in enhancing degraded document images. Using a semi-automated enhancement system, we have labeled a subset of the Frieder diaries collection. This labeled subset was then used to train an ensemble classifier. The component classifiers are based on lookup tables (LUT) in conjunction with the approximated nearest neighbor algorithm. The resulting algorithm is highly efficient. Experimental evaluation results are provided using the Frieder diaries collection.
Group Theory for Embedded Random Matrix Ensembles
Kota, V K B
2014-01-01
Embedded random matrix ensembles are generic models for describing statistical properties of finite isolated quantum many-particle systems. For the simplest spinless fermion (or boson) systems with, say, $m$ fermions (or bosons) in $N$ single particle states and interacting with, say, $k$-body interactions, we have EGUE($k$) (embedded GUE of $k$-body interactions) with GUE embedding, and the embedding algebra is $U(N)$. In this paper, using the EGUE($k$) representation for a Hamiltonian that is $k$-body and an independent EGUE($t$) representation for a transition operator that is $t$-body, and employing the embedding $U(N)$ algebra, finite-$N$ formulas for moments up to order four are derived, for the first time, for the transition strength densities (transition strengths multiplied by the density of states at the initial and final energies). In the asymptotic limit, these formulas reduce to those derived for the EGOE version and establish that in general bivariate transition strength densities take bivariate Gaussian ...
Emergent order in ensembles of active spinners
van Zuiden, Benjamin C.; Paulose, Jayson; Irvine, William T. M.; Bartolo, Denis; Vitelli, Vincenzo
Interacting self-propelled particles serve as a proxy for modeling many living systems, from cytoskeletal motors to bird flocks, while also providing a framework for investigating fundamental questions in non-equilibrium statistical mechanics. A surge of recent studies has shown that self-propulsion significantly modifies the phase behavior of particles interacting via potential interactions. A prototypical example is the so-called Motility Induced Phase Separation occurring in ensembles of self-propelled hard spheres. In stark contrast, our understanding of active spinning, as opposed to self-propulsion, remains very scarce. Here, we study a system of self-spinning dimers interacting via soft repulsive forces. Upon varying the density and activity, we observe a range of emergent phases characterized by different degrees of spatiotemporal order in the position and orientation of the dimers. Changes in bulk properties, including crystallization, melting, and freezing, are reflected in the collective motion of the particles. We rationalize our numerical findings theoretically and demonstrate some of these concepts in an active granular experiment.
Ensemble Kalman filtering with residual nudging
Luo, Xiaodong; 10.3402/tellusa.v64i0.17130
2012-01-01
Covariance inflation and localization are two important techniques that are used to improve the performance of the ensemble Kalman filter (EnKF) by (in effect) adjusting the sample covariances of the estimates in the state space. In this work an additional auxiliary technique, called residual nudging, is proposed to monitor and, if necessary, adjust the residual norms of state estimates in the observation space. In an EnKF with residual nudging, if the residual norm of an analysis is larger than a pre-specified value, then the analysis is replaced by a new one whose residual norm is no larger than that value. Otherwise the analysis is considered a reasonable estimate and no change is made. A rule for choosing the pre-specified value is suggested. Based on this rule, the corresponding new state estimates are explicitly derived in the case of linear observations. Numerical experiments in the 40-dimensional Lorenz-96 model show that introducing residual nudging to an EnKF may improve its accuracy and/o...
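A minimal sketch of the residual-nudging check, under the simplifying assumptions of a linear observation operator H and a blending rule that pulls the analysis toward the least-squares observation-matching state until the residual norm equals the threshold; the explicit replacement derived in the paper is more specific than this stand-in:

```python
import numpy as np

def residual_nudging(x_analysis, y_obs, H, threshold):
    """Check the observation-space residual of an analysis and nudge it if too large."""
    residual = y_obs - H @ x_analysis
    norm = np.linalg.norm(residual)
    if norm <= threshold:
        return x_analysis                       # analysis deemed reasonable, no change
    x_obs = np.linalg.pinv(H) @ y_obs           # least-squares observation-matching state
    lam = threshold / norm                      # fraction of the original analysis kept
    # Residual norm of the blend equals the threshold when H has full row rank
    return lam * x_analysis + (1.0 - lam) * x_obs
```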
Orchestrating Distributed Resource Ensembles for Petascale Science
Baldin, Ilya; Mandal, Anirban; Ruth, Paul; Yufeng, Xin
2014-04-24
Distributed, data-intensive computational science applications of interest to DOE scientific communities move large amounts of data for experiment data management, distributed analysis steps, remote visualization, and accessing scientific instruments. These applications need to orchestrate ensembles of resources from multiple resource pools and interconnect them with high-capacity multi-layered networks across multiple domains. It is highly desirable that mechanisms are designed that provide this type of resource provisioning capability to a broad class of applications. It is also important to have coherent monitoring capabilities for such complex distributed environments. In this project, we addressed these problems by designing an abstract API, enabled by novel semantic resource descriptions, for provisioning complex and heterogeneous resources from multiple providers using their native provisioning mechanisms and control planes: computational, storage, and multi-layered high-speed network domains. We used an extensible resource representation based on semantic web technologies to afford maximum flexibility to applications in specifying their needs. We evaluated the effectiveness of provisioning using representative data-intensive applications. We also developed mechanisms for providing feedback about resource performance to the application, to enable closed-loop feedback control and dynamic adjustments to resource allocations (elasticity). This was enabled through development of a novel persistent query framework that consumes disparate sources of monitoring data, including perfSONAR, and provides scalable distribution of asynchronous notifications.
Quantum metrology with cold atomic ensembles
Mitchell Morgan W.
2013-08-01
Quantum metrology uses quantum features such as entanglement and squeezing to improve the sensitivity of quantum-limited measurements. Long established as a valuable technique in optical measurements such as gravitational-wave detection, quantum metrology is increasingly being applied to atomic instruments such as matter-wave interferometers, atomic clocks, and atomic magnetometers. Several of these new applications involve dual optical/atomic quantum systems, presenting both new challenges and new opportunities. Here we describe an optical magnetometry system that achieves both shot-noise-limited and projection-noise-limited performance, allowing study of optical magnetometry in a fully quantum regime [1]. By near-resonant Faraday rotation probing, we demonstrate measurement-based spin squeezing in a magnetically sensitive atomic ensemble [2-4]. The versatility of this system allows us also to design metrologically relevant optical nonlinearities, and to perform quantum-noise-limited measurements with interacting photons. As a first interaction-based measurement [5], we implement a non-linear metrology scheme proposed by Boixo et al. with the surprising feature of precision scaling better than the 1/N "Heisenberg limit" [6].
Evaluation of seasonal ensemble forecasts in Norway
Tore Sinnes, Svein; Engeland, Kolbjørn; Langsholt, Elin; Roar Sælthun, Nils
2017-04-01
Throughout the winter and spring season, seasonal forecasts are used by the Norwegian Water Resources and Energy Directorate (NVE) in order to assess the probability of severe floods or of low seasonal runoff volumes. The latter is especially important for hydropower production. The seasonal forecasts are generated by a set of 145 lumped, elevation-distributed HBV models distributed all over Norway. The observed weather is used to establish the initial snow cover, soil moisture, and groundwater levels in the HBV model. Subsequently, scenarios are created by using time series of observed weather from the previous 50 years, creating a total of 50 ensemble members. The predictability of this seasonal forecasting system therefore depends on the importance of the initial conditions, and in Norway the seasonal snow cover is especially important. The aim of this study is to evaluate the performance of the seasonal forecasts of flood peaks and seasonal runoff volumes, and especially to evaluate whether the predictability depends on (i) catchment climatology and (ii) issue dates and lead times. For achieving these aims, evaluation criteria assessing reliability and sharpness were used. The results show that the predictability is highest for catchments where the spring runoff is dominated by snow melt. The predictability is highest for the shortest lead times (up to 1 month ahead). The predictive performance is higher for runoff volumes than for flood peaks.
G. Thirel
2010-08-01
The use of ensemble streamflow forecasts is developing in international flood forecasting services. Ensemble streamflow forecast systems can provide more accurate forecasts and useful information about the uncertainty of the forecasts, thus improving the assessment of risks. Nevertheless, these systems, like all hydrological forecasts, suffer from errors in the initialization or in the meteorological data, which lead to hydrological prediction errors. This article, which is the second part of a 2-part article, concerns the impacts of initial states, improved by a streamflow assimilation system, on an ensemble streamflow prediction system over France. An assimilation system was implemented to improve the streamflow analysis of the SAFRAN-ISBA-MODCOU (SIM) hydro-meteorological suite, which initializes the ensemble streamflow forecasts at Météo-France. This assimilation system, using the Best Linear Unbiased Estimator (BLUE) and modifying the initial soil moisture states, showed an improvement of the streamflow analysis with low soil moisture increments. The final states of this suite were used to initialize the ensemble streamflow forecasts of Météo-France, which are based on the SIM model and use the European Centre for Medium-range Weather Forecasts (ECMWF) 10-day Ensemble Prediction System (EPS). Two different configurations of the assimilation system were used in this study: the first with the classical SIM model and the second using improved soil physics in ISBA. The effects of the assimilation system on the ensemble streamflow forecasts were assessed for these two configurations, and a comparison was made with the original (i.e. without data assimilation and without the improved physics) ensemble streamflow forecasts. It is shown that the assimilation system improved most of the statistical scores usually computed for the validation of ensemble predictions (RMSE, Brier Skill Score and its decomposition, Ranked Probability Skill Score, False Alarm
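The BLUE analysis step mentioned above updates a background state x_b with observations y via x_a = x_b + K(y − H x_b), with gain K = B Hᵀ(H B Hᵀ + R)⁻¹. A generic linear sketch (the matrices are assumed given; in the SIM suite the increments act on the initial soil moisture states):

```python
import numpy as np

def blue_update(x_b, y, H, B, R):
    """Best Linear Unbiased Estimator analysis step: x_a = x_b + K (y - H x_b).

    x_b : background state, B : background error covariance
    y   : observations,     R : observation error covariance
    H   : (linearized) observation operator
    """
    S = H @ B @ H.T + R                    # innovation covariance
    K = B @ H.T @ np.linalg.inv(S)         # Kalman gain
    increment = K @ (y - H @ x_b)          # analysis increment (e.g. soil moisture)
    return x_b + increment, increment
```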
Challenges of Monte Carlo Transport
Long, Alex Roberts [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2016-06-10
These are slides from a presentation for the Parallel Summer School at Los Alamos National Laboratory. Solving discretized partial differential equations (PDEs) of interest can require a large number of computations. We can identify concurrency to allow parallel solution of discrete PDEs. Simulated particle histories can be used to solve the Boltzmann transport equation. Particle histories are independent in neutral particle transport, making them amenable to parallel computation. Physical parameters and method type determine the data dependencies of particle histories. Data requirements shape parallel algorithms for Monte Carlo. Then, Parallel Computational Physics and Parallel Monte Carlo are discussed and, finally, the results are given. The mesh passing method greatly simplifies the IMC implementation and allows simple load balancing. Using MPI windows and passive, one-sided RMA further simplifies the implementation by removing target synchronization. The author is very interested in implementations of PGAS that may allow further optimization for one-sided, read-only memory access (e.g. OpenSHMEM). The MPICH_RMA_OVER_DMAPP option and library is required to make one-sided messaging scale on Trinitite - Moonlight scales poorly. Interconnect-specific libraries or functions are likely necessary to ensure performance. BRANSON has been used to directly compare the current standard method to a proposed method on idealized problems. The mesh passing algorithm performs well on problems that are designed to show the scalability of the particle passing method. BRANSON can now run load-imbalanced, dynamic problems. Potential avenues of improvement in the mesh passing algorithm will be implemented and explored. A suite of test problems that stress DD methods will elucidate a possible path forward for production codes.
Scalable Metropolis Monte Carlo for simulation of hard shapes
Anderson, Joshua A.; Eric Irrgang, M.; Glotzer, Sharon C.
2016-07-01
We design and implement a scalable hard particle Monte Carlo simulation toolkit (HPMC), and release it open source as part of HOOMD-blue. HPMC runs in parallel on many CPUs and many GPUs using domain decomposition. We employ BVH trees instead of cell lists on the CPU for fast performance, especially with large particle size disparity, and optimize inner loops with SIMD vector intrinsics on the CPU. Our GPU kernel proposes many trial moves in parallel on a checkerboard and uses a block-level queue to redistribute work among threads and avoid divergence. HPMC supports a wide variety of shape classes, including spheres/disks, unions of spheres, convex polygons, convex spheropolygons, concave polygons, ellipsoids/ellipses, convex polyhedra, convex spheropolyhedra, spheres cut by planes, and concave polyhedra. NVT and NPT ensembles can be run in 2D or 3D triclinic boxes. Additional integration schemes permit Frenkel-Ladd free energy computations and implicit depletant simulations. In a benchmark system of a fluid of 4096 pentagons, HPMC performs 10 million sweeps in 10 min on 96 CPU cores on XSEDE Comet. The same simulation would take 7.6 h in serial. HPMC also scales to large system sizes, and the same benchmark with 16.8 million particles runs in 1.4 h on 2048 GPUs on OLCF Titan.
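For hard shapes the Metropolis criterion degenerates to a pure overlap test: a trial move is accepted if and only if it creates no overlaps. A deliberately simple serial sketch for hard disks in a periodic square box illustrates the kernel that HPMC parallelizes; this is generic illustration code, not the HOOMD-blue API:

```python
import numpy as np

def hard_disk_sweep(pos, box, radius, max_disp=0.1, rng=np.random):
    """One Monte Carlo sweep of single-particle trial moves for hard disks.

    pos    : (n, 2) array of disk centers
    box    : side length of the periodic square box
    radius : disk radius
    """
    n = len(pos)
    for i in rng.permutation(n):
        trial = (pos[i] + max_disp * (2 * rng.random(2) - 1)) % box   # periodic wrap
        d = pos - trial
        d -= box * np.round(d / box)                                  # minimum image
        d2 = np.einsum("ij,ij->i", d, d)
        d2[i] = np.inf                                                # ignore self
        if d2.min() >= (2 * radius) ** 2:                             # no overlap: accept
            pos[i] = trial
    return pos
```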
Monte Carlo Sampling of Negative-temperature Plasma States
John A. Krommes; Sharadini Rath
2002-07-19
A Monte Carlo procedure is used to generate N-particle configurations compatible with two-temperature canonical equilibria in two dimensions, with particular attention to nonlinear plasma gyrokinetics. An unusual feature of the problem is the importance of a nontrivial probability density function R0(Φ), the probability of realizing a set Φ of Fourier amplitudes associated with an ensemble of uniformly distributed, independent particles. This quantity arises because the equilibrium distribution is specified in terms of Φ, whereas the sampling procedure naturally produces particle states Γ; Φ and Γ are related via a gyrokinetic Poisson equation, highly nonlinear in its dependence on Γ. Expansion and asymptotic methods are used to calculate R0(Φ) analytically; excellent agreement is found between the large-N asymptotic result and a direct numerical calculation. The algorithm is tested by successfully generating a variety of states of both positive and negative temperature, including ones in which either the longest- or shortest-wavelength modes are excited to relatively very large amplitudes.
The MC21 Monte Carlo Transport Code
Sutton TM, Donovan TJ, Trumbull TH, Dobreff PS, Caro E, Griesheimer DP, Tyburski LJ, Carpenter DC, Joo H
2007-01-09
MC21 is a new Monte Carlo neutron and photon transport code currently under joint development at the Knolls Atomic Power Laboratory and the Bettis Atomic Power Laboratory. MC21 is the Monte Carlo transport kernel of the broader Common Monte Carlo Design Tool (CMCDT), which is also currently under development. The vision for CMCDT is to provide an automated, computer-aided modeling and post-processing environment integrated with a Monte Carlo solver that is optimized for reactor analysis. CMCDT represents a strategy to push the Monte Carlo method beyond its traditional role as a benchmarking tool or "tool of last resort" and into a dominant design role. This paper describes various aspects of the code, including the neutron physics and nuclear data treatments, the geometry representation, and the tally and depletion capabilities.
Zhe Zhang
The high-resolution refinement of docked protein-protein complexes can provide valuable structural and mechanistic insight into protein complex formation, complementing experiment. Monte Carlo (MC)-based approaches are frequently applied to sample putative interaction geometries of proteins, including also possible conformational changes of the binding partners. In order to explore efficiency improvements of the MC sampling, several enhanced sampling techniques, including temperature or Hamiltonian replica exchange and well-tempered ensemble approaches, have been combined with the MC method and were evaluated on 20 protein complexes using unbound partner structures. The well-tempered ensemble method combined with a 2-dimensional temperature and Hamiltonian replica exchange scheme (WTE-H-REMC) was identified as the most efficient search strategy. Comparison with prolonged MC searches indicates that the WTE-H-REMC approach requires approximately 5 times fewer MC steps to identify near-native docking geometries compared to conventional MC searches.
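For reference, the temperature replica-exchange component of such a scheme swaps configurations between neighbouring temperatures with probability min{1, exp[(β_i − β_j)(E_i − E_j)]}. A minimal sketch of that swap step (the Hamiltonian dimension and the well-tempered bias of the full WTE-H-REMC scheme are omitted):

```python
import numpy as np

def attempt_swaps(energies, betas, rng=np.random):
    """Attempt nearest-neighbour replica swaps; return the configuration held at each temperature.

    energies : potential energy of each configuration (indexed by configuration id)
    betas    : inverse temperatures of the replica ladder (indexed by temperature slot)
    """
    order = np.arange(len(betas))                  # order[i] = configuration at temperature i
    for i in range(len(betas) - 1):
        delta = (betas[i] - betas[i + 1]) * (energies[order[i]] - energies[order[i + 1]])
        if np.log(rng.uniform()) < delta:          # accept with min(1, exp(delta))
            order[i], order[i + 1] = order[i + 1], order[i]
    return order
```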
Infinite ensemble of support vector machines for prediction of ...
Many researchers have demonstrated the use of artificial neural networks (ANNs) to ... Following section discusses the effect of infinite ensemble approach ... major problem with artificial intelligence-based modeling approaches is their ...