WorldWideScience

Sample records for canonical ensemble method

  1. Stabilizing Canonical-Ensemble Calculations in the Auxiliary-Field Monte Carlo Method

    CERN Document Server

    Gilbreth, C N

    2014-01-01

    Quantum Monte Carlo methods are powerful techniques for studying strongly interacting Fermi systems. However, implementing these methods on computers with finite-precision arithmetic requires careful attention to numerical stability. In the auxiliary-field Monte Carlo (AFMC) method, low-temperature or large-model-space calculations require numerically stabilized matrix multiplication. When adapting methods used in the grand-canonical ensemble to the canonical ensemble of fixed particle number, the numerical stabilization increases the number of required floating-point operations for computing observables by a factor of the size of the single-particle model space, and thus can greatly limit the systems that can be studied. We describe an improved method for stabilizing canonical-ensemble calculations in AFMC that exhibits better scaling, and present numerical tests that demonstrate the accuracy and improved performance of the method.
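
    As background, the sketch below shows a generic QR-based stabilization of a long product of matrices, the kind of numerically stabilized multiplication this record refers to. It is an illustrative baseline under our own naming and conventions, not the improved canonical-ensemble algorithm of the paper.

    ```python
    import numpy as np

    def stabilized_product(matrices):
        """Accumulate prod_k B_k as Q * diag(d) * T via repeated QR factorizations.

        Generic illustration only: this is the standard flavor of stabilization
        used in auxiliary-field Monte Carlo codes, not the paper's improved
        canonical-ensemble scheme.
        """
        n = matrices[0].shape[0]
        Q, d, T = np.eye(n), np.ones(n), np.eye(n)
        for B in matrices:
            C = (B @ Q) * d                # B Q diag(d): scale column j by d[j]
            Q, R = np.linalg.qr(C)
            d = np.abs(np.diag(R))
            d[d == 0.0] = 1.0              # avoid division by zero below
            T = (R / d[:, None]) @ T       # keep the well-conditioned factor in T
        return Q, d, T                     # accumulated product equals Q @ np.diag(d) @ T (up to round-off)

    # Example: product of matrices with widely varying scales
    # mats = [np.random.rand(4, 4) * 10.0**k for k in range(-8, 9)]
    # Q, d, T = stabilized_product(mats)
    ```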

  2. Derivation of Mayer Series from Canonical Ensemble

    Science.gov (United States)

    Xian-Zhi, Wang

    2016-02-01

    Mayer derived the Mayer series from both the canonical ensemble and the grand canonical ensemble by use of the cluster expansion method. In 2002, we conjectured a recursion formula of the canonical partition function of a fluid (X.Z. Wang, Phys. Rev. E 66 (2002) 056102). In this paper we give a proof for this formula by developing an appropriate expansion of the integrand of the canonical partition function. We further derive the Mayer series solely from the canonical ensemble by use of this recursion formula.
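
    For context, the textbook Mayer-theory identity below relates the canonical partition functions Q_N to the cluster integrals b_l (conventions for absorbing volume and thermal-wavelength factors into b_l vary). It is quoted only for orientation; whether it coincides exactly with the recursion conjectured by Wang (2002) is not asserted here.

    ```latex
    \[
      \Xi(z,V,T)\;=\;\sum_{N\ge 0} Q_N\,z^N\;=\;\exp\!\Big(V\sum_{l\ge 1} b_l\,z^l\Big)
      \quad\Longrightarrow\quad
      N\,Q_N \;=\; V\sum_{l=1}^{N} l\,b_l\,Q_{N-l},\qquad Q_0=1.
    \]
    ```

    The recursion follows by differentiating Ξ with respect to the fugacity z and matching powers of z, so every Q_N is determined by the cluster integrals b_1, ..., b_N.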

  3. Quantum statistical model of nuclear multifragmentation in the canonical ensemble method

    Energy Technology Data Exchange (ETDEWEB)

    Toneev, V.D.; Ploszajczak, M. [Grand Accelerateur National d' Ions Lourds (GANIL), 14 - Caen (France); Parvant, A.S. [Institute of Applied Physics, Moldova Academy of Sciences, Chisinau (Moldova, Republic of); Parvant, A.S. [Joint Institute for Nuclear Research, Bogoliubov Lab. of Theoretical Physics, Dubna (Russian Federation)

    1999-07-01

    A quantum statistical model of nuclear multifragmentation is proposed. The recurrence equation method used in the canonical ensemble makes the model solvable and transparent to physical assumptions, and allows one to obtain results without involving the Monte Carlo technique. The model exhibits a first-order phase transition. Quantum statistics effects are clearly seen at the microscopic level of occupation numbers but are almost washed out for the global thermodynamic variables and the averaged observables studied. In the latter case, the recurrence relations for the multiplicity distributions of both intermediate-mass and all fragments are derived, and the specific changes in the shape of the multiplicity distributions in the narrow region of the transition temperature are stressed. The temperature domain favorable for a search for the HBT effect is noted. (authors)

  4. Quantum statistical model of nuclear multifragmentation in the canonical ensemble method

    International Nuclear Information System (INIS)

    A quantum statistical model of nuclear multifragmentation is proposed. The recurrence equation method used in the canonical ensemble makes the model solvable and transparent to physical assumptions, and allows one to obtain results without involving the Monte Carlo technique. The model exhibits a first-order phase transition. Quantum statistics effects are clearly seen at the microscopic level of occupation numbers but are almost washed out for the global thermodynamic variables and the averaged observables studied. In the latter case, the recurrence relations for the multiplicity distributions of both intermediate-mass and all fragments are derived, and the specific changes in the shape of the multiplicity distributions in the narrow region of the transition temperature are stressed. The temperature domain favorable for a search for the HBT effect is noted. (authors)

  5. Extending the parQ transition matrix method to grand canonical ensembles.

    Science.gov (United States)

    Haber, René; Hoffmann, Karl Heinz

    2016-06-01

    Phase coexistence properties as well as other thermodynamic features of fluids can be effectively determined from the grand canonical density of states (DOS). We present an extension of the parQ transition matrix method in combination with the efasTM method as a very fast approach for determining the grand canonical DOS from the transition matrix. The efasTM method minimizes the deviation from detailed balance in the transition matrix using a fast Krylov-based equation solver. The method allows a very effective use of state space transition data obtained by different exploration schemes. An application to a Lennard-Jones system produces phase coexistence properties of the same quality as reference data. PMID:27415394
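
    As background on how a density of states can be extracted from transition-matrix data, the sketch below fits ln g(E) to the macrostate detailed-balance relations in a least-squares sense. It is a generic transition-matrix estimator in the spirit of the approach above and does not reproduce the parQ/efasTM specifics (grand-canonical state space, Krylov-based solver); the function and variable names are ours.

    ```python
    import numpy as np

    def dos_from_transition_matrix(T):
        """Estimate ln g for each macrostate from an (unbiased) transition matrix T
        by least-squares solution of the detailed-balance relations
            ln g[j] - ln g[i] = ln T[i, j] - ln T[j, i].
        Generic sketch only, not the parQ/efasTM algorithm.
        """
        n = T.shape[0]
        rows, rhs = [], []
        for i in range(n):
            for j in range(i + 1, n):
                if T[i, j] > 0 and T[j, i] > 0:
                    r = np.zeros(n)
                    r[j], r[i] = 1.0, -1.0
                    rows.append(r)
                    rhs.append(np.log(T[i, j]) - np.log(T[j, i]))
        rows.append(np.ones(n) / n)        # pin down the arbitrary additive constant
        rhs.append(0.0)
        lng, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
        return lng
    ```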

  6. Canonical Ensemble Model for Black Hole Radiation

    Indian Academy of Sciences (India)

    Jingyi Zhang

    2014-09-01

    In this paper, a canonical ensemble model for the black hole quantum tunnelling radiation is introduced. In this model the probability distribution function corresponding to the emission shell is calculated to second order. The formula of pressure and internal energy of the thermal system is modified, and the fundamental equation of thermodynamics is also discussed.

  7. Ensemble Methods

    Science.gov (United States)

    Re, Matteo; Valentini, Giorgio

    2012-03-01

    Ensemble methods are statistical and computational learning procedures reminiscent of the human social learning behavior of seeking several opinions before making any crucial decision. The idea of combining the opinions of different "experts" to obtain an overall “ensemble” decision is rooted in our culture at least since the classical age of ancient Greece, and it was formalized during the Enlightenment with the Condorcet Jury Theorem [45], which proved that the judgment of a committee is superior to those of individuals, provided the individuals have reasonable competence. Ensembles are sets of learning machines that combine in some way their decisions, or their learning algorithms, or different views of data, or other specific characteristics to obtain more reliable and more accurate predictions in supervised and unsupervised learning problems [48,116]. A simple example is represented by the majority vote ensemble, by which the decisions of different learning machines are combined, and the class that receives the majority of “votes” (i.e., the class predicted by the majority of the learning machines) is the class predicted by the overall ensemble [158]. In the literature, a plethora of terms other than ensembles has been used, such as fusion, combination, aggregation, and committee, to indicate sets of learning machines that work together to solve a machine learning problem [19,40,56,66,99,108,123], but in this chapter we maintain the term ensemble in its widest meaning, in order to include the whole range of combination methods. Nowadays, ensemble methods represent one of the main current research lines in machine learning [48,116], and the interest of the research community in ensemble methods is witnessed by conferences and workshops specifically devoted to ensembles, first of all the multiple classifier systems (MCS) conference organized by Roli, Kittler, Windeatt, and other researchers of this area [14,62,85,149,173]. Several theories have been
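
    As a concrete illustration of the majority-vote ensemble mentioned above, the minimal sketch below combines label predictions from several learners by simple voting; the function and the example data are ours, not taken from the chapter.

    ```python
    from collections import Counter

    def majority_vote(predictions):
        """Combine class predictions from several learners by majority vote.
        `predictions` is a list of equal-length label sequences, one per learner.
        """
        combined = []
        for labels in zip(*predictions):              # labels predicted for one sample
            combined.append(Counter(labels).most_common(1)[0][0])
        return combined

    # Example: three learners, four samples
    print(majority_vote([[0, 1, 1, 0], [0, 1, 0, 0], [1, 1, 1, 0]]))  # -> [0, 1, 1, 0]
    ```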

  8. Triality and the grand canonical ensemble in QCD

    International Nuclear Information System (INIS)

    QCD in the usual finite-temperature formulation uses the grand canonical ensemble with zero chemical potential. We demonstrate that this description may give wrong predictions. QCD in the canonical formulation does not explicitly break Z(3) symmetry; in this sense it behaves like pure gluonic QCD. The metastable states predicted by the grand canonical ensemble formalism are absent in the canonical ensemble description. ((orig.))

  9. Multiplicity fluctuations in heavy-ion collisions using canonical and grand-canonical ensemble

    Energy Technology Data Exchange (ETDEWEB)

    Garg, P. [Indian Institute of Technology Indore, Discipline of Physics, School of Basic Science, Simrol (India); Mishra, D.K.; Netrakanti, P.K.; Mohanty, A.K. [Bhabha Atomic Research Center, Nuclear Physics Division, Mumbai (India)

    2016-02-15

    We report the higher-order cumulants and their ratios for baryon, charge and strangeness multiplicity in the canonical and grand-canonical ensembles within an ideal thermal model including all the resonances. When the number of conserved quanta is small, an explicit treatment of these conserved charges is required, which leads to a canonical description of the system, and the fluctuations are significantly different from those in the grand-canonical ensemble. Cumulant ratios of total-charge and net-charge multiplicity as a function of collision energies are also compared in the grand-canonical ensemble. (orig.)

  10. Multiplicity fluctuations in heavy ion collisions using canonical and grand canonical ensemble

    CERN Document Server

    Garg, P; Netrakanti, P K; Mohanty, A K

    2015-01-01

    We report the higher order cumulants and their ratios for baryon, charge and strangeness multiplicity in the canonical and grand canonical ensembles within an ideal thermal model including all the resonances. When the number of conserved quanta is small, an explicit treatment of these conserved charges is required, which leads to a canonical description of the system, and the fluctuations are significantly different from those in the grand canonical ensemble. Cumulant ratios of total charge and net-charge multiplicity as a function of collision energies are also compared in the grand canonical ensemble.

  11. Critical adsorption and critical Casimir forces in the canonical ensemble

    Science.gov (United States)

    Gross, Markus; Vasilyev, Oleg; Gambassi, Andrea; Dietrich, S.

    2016-08-01

    Critical properties of a liquid film between two planar walls are investigated in the canonical ensemble, within which the total number of fluid particles, rather than their chemical potential, is kept constant. The effect of this constraint is analyzed within mean-field theory (MFT) based on a Ginzburg-Landau free-energy functional as well as via Monte Carlo simulations of the three-dimensional Ising model with fixed total magnetization. Within MFT and for finite adsorption strengths at the walls, the thermodynamic properties of the film in the canonical ensemble can be mapped exactly onto a grand canonical ensemble in which the corresponding chemical potential plays the role of the Lagrange multiplier associated with the constraint. However, due to a nonintegrable divergence of the mean-field order parameter profile near a wall, the limit of infinitely strong adsorption turns out to be not well-defined within MFT, because it would necessarily violate the constraint. The critical Casimir force (CCF) acting on the two planar walls of the film is generally found to behave differently in the canonical and grand canonical ensembles. For instance, the canonical CCF in the presence of equal preferential adsorption at the two walls is found to have the opposite sign and a slower decay behavior as a function of the film thickness compared to its grand canonical counterpart. We derive the stress tensor in the canonical ensemble and find that it has the same expression as in the grand canonical case, but with the chemical potential playing the role of the Lagrange multiplier associated with the constraint. The different behavior of the CCF in the two ensembles is rationalized within MFT by showing that, for a prescribed value of the thermodynamic control parameter of the film, i.e., density or chemical potential, the film pressures are identical in the two ensembles, while the corresponding bulk pressures are not.

  12. Generalized Gibbs canonical ensemble: A possible physical scenario

    OpenAIRE

    Velazquez, L.

    2007-01-01

    After reviewing some fundamental results derived from the introduction of the generalized Gibbs canonical ensemble, such as the so-called thermodynamic uncertainty relation, we describe a physical scenario in which such a generalized ensemble naturally appears as a consequence of a modification of the mechanism of energy exchange between the system of interest and its surroundings, which could be relevant within the framework of long-range interacting systems.

  13. Geometric integrator for simulations in the canonical ensemble

    CERN Document Server

    Tapias, Diego; Bravetti, Alessandro

    2016-01-01

    In this work we introduce a geometric integrator for molecular dynamics simulations of physical systems in the canonical ensemble. In particular, we consider the equations arising from the so-called density dynamics algorithm with any possible type of thermostat and provide an integrator that preserves the invariant distribution. Our integrator thus constitutes a unified framework that allows the study and comparison of different thermostats and of their influence on the equilibrium and non-equilibrium (thermo-)dynamic properties of the system. To show the validity and the generality of the integrator, we implement it with a second-order, time-reversible method and apply it to the simulation of a Lennard-Jones system with three different thermostats, obtaining good conservation of the geometrical properties and recovering the expected thermodynamic results.

  14. Extending canonical Monte Carlo methods

    Science.gov (United States)

    Velazquez, L.; Curilef, S.

    2010-02-01

    In this paper, we discuss the implications of a recently obtained equilibrium fluctuation-dissipation relation for the extension of the available Monte Carlo methods based on the Gibbs canonical ensemble to account for the existence of an anomalous regime with negative heat capacities C < 0. The resulting framework appears to be a suitable generalization of the methodology associated with the so-called dynamical ensemble, which is applied to the extension of two well-known Monte Carlo methods: the Metropolis importance sampling and the Swendsen-Wang cluster algorithm. These Monte Carlo algorithms are employed to study the anomalous thermodynamic behavior of the Potts models with many spin states q defined on a d-dimensional hypercubic lattice with periodic boundary conditions; they successfully reduce the exponential divergence of the decorrelation time τ with increasing system size N to a weak power-law divergence τ ∝ N^α with α ≈ 0.2 for the particular case of the 2D ten-state Potts model.
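
    For reference, the sketch below is the ordinary canonical Metropolis sampler for the 2D q-state Potts model that such extensions start from; the generalized acceptance rule of the dynamical-ensemble framework (needed in the C < 0 regime) is deliberately not reproduced, and all parameter choices are illustrative.

    ```python
    import numpy as np

    def metropolis_potts(L=16, q=10, beta=1.0, sweeps=200, rng=None):
        """Plain canonical Metropolis sampling of the 2D q-state Potts model
        (coupling J = 1, periodic boundaries). Baseline illustration only.
        """
        rng = np.random.default_rng() if rng is None else rng
        spins = rng.integers(q, size=(L, L))
        for _ in range(sweeps):
            for _ in range(L * L):
                i, j = rng.integers(L, size=2)
                new = rng.integers(q)
                nbrs = [spins[(i + 1) % L, j], spins[(i - 1) % L, j],
                        spins[i, (j + 1) % L], spins[i, (j - 1) % L]]
                # Potts energy counts equal neighbours, so the change on updating the spin is:
                dE = sum(n == spins[i, j] for n in nbrs) - sum(n == new for n in nbrs)
                if dE <= 0 or rng.random() < np.exp(-beta * dE):
                    spins[i, j] = new
        return spins
    ```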

  15. Ensemble Data Mining Methods

    Data.gov (United States)

    National Aeronautics and Space Administration — Ensemble Data Mining Methods, also known as Committee Methods or Model Combiners, are machine learning methods that leverage the power of multiple models to achieve...

  16. Climate Prediction Center (CPC) Ensemble Canonical Correlation Analysis Forecast of Temperature

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Ensemble Canonical Correlation Analysis (ECCA) temperature forecast is a 90-day (seasonal) outlook of US surface temperature anomalies. The ECCA uses Canonical...

  17. Canonical ensemble in non-extensive statistical mechanics, q > 1

    Science.gov (United States)

    Ruseckas, Julius

    2016-09-01

    The non-extensive statistical mechanics has been used to describe a variety of complex systems. The maximization of entropy, often used to introduce the non-extensive statistical mechanics, is a formal procedure and does not easily lead to physical insight. In this article we investigate the canonical ensemble in the non-extensive statistical mechanics by considering a small system interacting with a large reservoir via short-range forces and assuming equal probabilities for all available microstates. We concentrate on the situation when the reservoir is characterized by generalized entropy with non-extensivity parameter q > 1. We also investigate the problem of divergence in the non-extensive statistical mechanics occurring when q > 1 and show that there is a limit on the growth of the number of microstates of the system that is given by the same expression for all values of q.
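
    For orientation, the canonical distribution of non-extensive statistical mechanics is usually quoted in the q-exponential form below; normalization and constraint conventions vary across the literature, and the record's specific derivation from a system-plus-reservoir setup is not reproduced here.

    ```latex
    \[
      p_i \;=\; \frac{1}{Z_q}\,\bigl[\,1-(1-q)\,\beta E_i\,\bigr]^{\frac{1}{1-q}},
      \qquad
      Z_q \;=\; \sum_j \bigl[\,1-(1-q)\,\beta E_j\,\bigr]^{\frac{1}{1-q}},
    \]
    ```

    which reduces to the Boltzmann-Gibbs weight e^{-βE_i}/Z as q → 1. For q > 1 the weight decays only as a power law in the energy, which is why a limit on the growth of the number of microstates, as discussed above, becomes relevant.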

  18. Canonical ensemble in non-extensive statistical mechanics

    Science.gov (United States)

    Ruseckas, Julius

    2016-04-01

    The framework of non-extensive statistical mechanics, proposed by Tsallis, has been used to describe a variety of systems. The non-extensive statistical mechanics is usually introduced in a formal way, using the maximization of entropy. In this paper we investigate the canonical ensemble in the non-extensive statistical mechanics using a more traditional way, by considering a small system interacting with a large reservoir via short-range forces. The reservoir is characterized by generalized entropy instead of the Boltzmann-Gibbs entropy. Assuming equal probabilities for all available microstates we derive the equations of the non-extensive statistical mechanics. Such a procedure can provide deeper insight into applicability of the non-extensive statistics.

  19. Self-consistent thermodynamics for the Tsallis statistics in the grand canonical ensemble: Nonrelativistic hadron gas

    Energy Technology Data Exchange (ETDEWEB)

    Parvan, A.S. [Joint Institute for Nuclear Research, Bogoliubov Laboratory of Theoretical Physics, Dubna (Russian Federation); Horia Hulubei National Institute of Physics and Nuclear Engineering, Department of Theoretical Physics, Bucharest (Romania); Moldova Academy of Sciences, Institute of Applied Physics, Chisinau (Moldova, Republic of)

    2015-09-15

    In the present paper, the Tsallis statistics in the grand canonical ensemble was reconsidered in a general form. The thermodynamic properties of the nonrelativistic ideal gas of hadrons in the grand canonical ensemble were studied numerically and analytically in a finite volume and in the thermodynamic limit. It was proved that the Tsallis statistics in the grand canonical ensemble satisfies the requirements of equilibrium thermodynamics in the thermodynamic limit if the thermodynamic potential is a homogeneous function of the first order with respect to the extensive variables of state of the system and the entropic variable z = 1/(q - 1) is an extensive variable of state. The equivalence of the canonical, microcanonical and grand canonical ensembles for the nonrelativistic ideal gas of hadrons was demonstrated. (orig.)

  1. Climate Prediction Center (CPC) Ensemble Canonical Correlation Analysis 90-Day Seasonal Forecast of Precipitation

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Ensemble Canonical Correlation Analysis (ECCA) precipitation forecast is a 90-day (seasonal) outlook of US surface precipitation anomalies. The ECCA uses...

  2. Non-extended phase space thermodynamics of Lovelock AdS black holes in the grand canonical ensemble

    International Nuclear Information System (INIS)

    Recently, the extended phase space thermodynamics of Lovelock AdS black holes has attracted great interest. To provide insight from a different perspective and to gain a unified picture of the phase transitions, the non-extended phase space thermodynamics of (n+1)-dimensional charged topological Lovelock AdS black holes is investigated in detail in the grand canonical ensemble. Specifically, the specific heat at constant electric potential is calculated and the phase transition in the grand canonical ensemble is discussed. To probe the impact of the various parameters, we utilize the control variate method and solve the phase transition condition equation numerically for the cases k = 1, -1. There are two critical points for the case n = 6, k = 1, while there is only one for the other cases. For k = 0, there exists no phase transition point. To figure out the nature of the phase transition in the grand canonical ensemble, we carry out an analytic check of the analog form of the Ehrenfest equations proposed by Banerjee et al. It is shown that Lovelock AdS black holes in the grand canonical ensemble undergo a second-order phase transition. To examine the phase structure in the grand canonical ensemble, we utilize the thermodynamic geometry method and calculate both the Weinhold metric and the Ruppeiner metric. It is shown, both analytically and graphically, that the divergence structure of the Ruppeiner scalar curvature coincides with that of the specific heat. Our research provides one more example that the Ruppeiner metric serves as a wonderful tool to probe the phase structures of black holes. (orig.)

  3. Efficient and Unbiased Sampling of Biomolecular Systems in the Canonical Ensemble: A Review of Self-Guided Langevin Dynamics.

    Science.gov (United States)

    Wu, Xiongwu; Damjanovic, Ana; Brooks, Bernard R

    2012-01-31

    This review provides a comprehensive description of the self-guided Langevin dynamics (SGLD) and the self-guided molecular dynamics (SGMD) methods and their applications. Example systems are included to provide guidance on the optimal application of these methods in simulation studies. SGMD/SGLD has an enhanced ability to overcome energy barriers and accelerate rare events to affordable time scales. It has been demonstrated that, with moderate parameters, SGLD can routinely cross energy barriers of 20 kT at the rate at which molecular dynamics (MD) or Langevin dynamics (LD) crosses 10 kT barriers. The core of these methods is the use of local averages of forces and momenta in a direct manner that can preserve the canonical ensemble. The use of such local averages results in methods where low-frequency motion "borrows" energy from high-frequency degrees of freedom when a barrier is approached and then returns that excess energy after the barrier is crossed. This self-guiding effect also results in an accelerated diffusion that enhances conformational sampling efficiency. The resulting ensemble with SGLD deviates in a small way from the canonical ensemble, and that deviation can be corrected with either an on-the-fly or a post-processing reweighting procedure that provides an excellent canonical ensemble for systems with a limited number of accelerated degrees of freedom. Since reweighting procedures are generally not size extensive, a newer method, SGLDfp, uses local averages of both momenta and forces to preserve the ensemble without reweighting. The SGLDfp approach is size extensive and can be used to accelerate low-frequency motion in large systems, or in systems with explicit solvent where solvent diffusion is also to be enhanced. Since these methods are direct and straightforward, they can be used in conjunction with many other sampling methods or free energy methods by simply replacing the integration of degrees of freedom that are normally sampled by MD or LD. PMID:23913991
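
    To make the idea of local averages of forces and momenta concrete, here is a deliberately simplified 1D Langevin integrator in which a guiding force proportional to an exponential moving average of the momentum is added to the systematic force. This is a toy sketch under our own assumptions; it is not the SGLD/SGLDfp algorithm of the review, and no reweighting correction is applied.

    ```python
    import numpy as np

    def guided_langevin_1d(force, kT=1.0, mass=1.0, gamma=1.0, lam=0.2,
                           tavg=1.0, dt=0.01, nsteps=20000, rng=None):
        """Toy 1D Langevin dynamics with a 'self-guiding' force built from an
        exponential moving average of the momentum. Schematic illustration only,
        not the SGLD/SGLDfp equations; the sampled ensemble is not corrected.
        """
        rng = np.random.default_rng() if rng is None else rng
        x, v, p_avg = 0.0, 0.0, 0.0
        mix = dt / tavg                                  # moving-average weight
        sigma = np.sqrt(2.0 * gamma * mass * kT / dt)    # white-noise force amplitude
        traj = np.empty(nsteps)
        for k in range(nsteps):
            p_avg = (1.0 - mix) * p_avg + mix * mass * v     # local momentum average
            f = (force(x) - gamma * mass * v                 # systematic force + friction
                 + lam * gamma * p_avg                       # guiding force
                 + sigma * rng.standard_normal())            # thermal noise
            v += dt * f / mass
            x += dt * v
            traj[k] = x
        return traj

    # Example: double-well potential U(x) = (x^2 - 1)^2, force = -dU/dx
    # traj = guided_langevin_1d(lambda x: -4.0 * x * (x * x - 1.0))
    ```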

  4. Generalized network structures: The configuration model and the canonical ensemble of simplicial complexes

    CERN Document Server

    Courtney, Owen T

    2016-01-01

    Simplicial complexes are generalized network structures able to encode interactions occurring between more than two nodes. Simplicial complexes describe a large variety of complex interacting systems ranging from brain networks, to social and collaboration networks. Here we characterize the structure of simplicial complexes using their generalized degrees that capture fundamental properties of one, two, three or more linked nodes. Moreover we introduce the configuration model and the canonical ensemble of simplicial complexes, enforcing respectively the sequence of generalized degrees of the nodes and the sequence of the expected generalized degrees of the nodes. We evaluate the entropy of these ensembles, finding the asymptotic expression for the number of simplicial complexes in the configuration model. We provide the algorithms for the construction of simplicial complexes belonging to the configuration model and the canonical ensemble of simplicial complexes. We give an expression for the structural cutoff...

  5. A Canonical Ensemble Approach to the Fermion/Boson Random Point Processes and Its Applications

    Science.gov (United States)

    Tamura, H.; Ito, K. R.

    2006-04-01

    We introduce the boson and the fermion point processes from the elementary quantum mechanical point of view. That is, we consider quantum statistical mechanics of the canonical ensemble for a fixed number of particles which obey Bose-Einstein, Fermi-Dirac statistics, respectively, in a finite volume. Focusing on the distribution of positions of the particles, we have point processes of the fixed number of points in a bounded domain. By taking the thermodynamic limit such that the particle density converges to a finite value, the boson/fermion processes are obtained. This argument is a realization of the equivalence of ensembles, since resulting processes are considered to describe a grand canonical ensemble of points. Random point processes corresponding to para-particles of order two are discussed as an application of the formulation. Statistics of a system of composite particles at zero temperature are also considered as a model of determinantal random point processes.

  6. Extending canonical Monte Carlo methods: II

    International Nuclear Information System (INIS)

    We have previously presented a methodology for extending canonical Monte Carlo methods inspired by a suitable extension of the canonical fluctuation relation C = β²⟨δE²⟩ compatible with negative heat capacities, C < 0. The extended canonical Monte Carlo methods overcome the so-called supercritical slowing down observed close to the region of the temperature-driven first-order phase transition: the size dependence of the decorrelation time τ is reduced from an exponential growth to a weak power-law behavior, τ(N) ∝ N^α, as is shown in the particular case of the 2D seven-state Potts model where the exponent α = 0.14–0.18.

  7. Study of critical dynamics in fluids via molecular dynamics in canonical ensemble.

    Science.gov (United States)

    Roy, Sutapa; Das, Subir K

    2015-12-01

    With the objective of understanding the usefulness of thermostats in the study of dynamic critical phenomena in fluids, we present results for transport properties in a binary Lennard-Jones fluid that exhibits a liquid-liquid phase transition. Various collective transport properties, calculated from molecular dynamics (MD) simulations in the canonical ensemble with different thermostats, are compared with those obtained from MD simulations in the microcanonical ensemble. It is observed that the Nosé-Hoover and dissipative particle dynamics thermostats are useful for the calculation of mutual diffusivity and shear viscosity. The Nosé-Hoover thermostat, however, as opposed to the latter, appears inadequate for the study of bulk viscosity. PMID:26687057

  8. A Canonical Ensemble Approach to the Fermion/Boson Random Point Processes and its Applications

    OpenAIRE

    Tamura, H.; Ito, K. R.

    2005-01-01

    We introduce the boson and the fermion point processes from the elementary quantum mechanical point of view. That is, we consider quantum statistical mechanics of canonical ensemble for a fixed number of particles which obey Bose-Einstein, Fermi-Dirac statistics, respectively, in a finite volume. Focusing on the distribution of positions of the particles, we have point processes of the fixed number of points in a bounded domain. By taking the thermodynamic limit such that the particle density...

  9. A Canonical Ensemble Approach to the Fermion/Boson Random Point Processes and Its Applications

    OpenAIRE

    Tamura, Hiroshi; Ito, Keiichi R.

    2006-01-01

    We introduce the boson and the fermion point processes from the elementary quantum mechanical point of view. That is, we consider quantum statistical mechanics of the canonical ensemble for a fixed number of particles which obey Bose-Einstein, Fermi-Dirac statistics, respectively, in a finite volume. Focusing on the distribution of positions of the particles, we have point processes of the fixed number of points in a bounded domain. By taking the thermodynamic limit such that the particle den...

  10. THERMODYNAMICS OF GLOBAL MONOPOLE ANTI-DE-SITTER BLACK HOLE IN GRAND CANONICAL ENSEMBLE

    Institute of Scientific and Technical Information of China (English)

    陈菊华; 荆继良; 王永久

    2001-01-01

    In this paper, we investigate the thermodynamics of the global monopole anti-de Sitter black hole in the grand canonical ensemble following York's formalism. The black hole is enclosed in a cavity with a finite radius where the temperature and potential are fixed. We have studied some thermodynamical properties, i.e. the reduced action, thermal energy and entropy. By investigating the stability of the solutions, we find stable solutions and instantons.

  11. Isobar of an ideal Bose gas within the grand canonical ensemble

    OpenAIRE

    Jeon, Imtak; Kim, Sang-Woo; Park, Jeong-Hyuck

    2011-01-01

    We investigate the isobar of an ideal Bose gas confined in a cubic box within the grand canonical ensemble, for a large yet finite number of particles, N. After solving the equation of the spinodal curve, we derive precise formulae for the supercooling and the superheating temperatures which reveal an N^{-1/3} or N^{-1/4} power correction to the known Bose-Einstein condensation temperature in the thermodynamic limit. Numerical computations confirm the accuracy of our analytical approximation,...

  12. Pattern classification using ensemble methods

    CERN Document Server

    Rokach, Lior

    2009-01-01

    Researchers from various disciplines such as pattern recognition, statistics, and machine learning have explored the use of ensemble methodology since the late seventies. Thus, they are faced with a wide variety of methods, given the growing interest in the field. This book aims to impose a degree of order upon this diversity by presenting a coherent and unified repository of ensemble methods, theories, trends, challenges and applications. The book describes in detail the classical methods, as well as the extensions and novel approaches developed recently. Along with algorithmic descriptions o

  13. Hori method for generalized canonical systems

    Science.gov (United States)

    da Silva Fernandes, Sandro

    2009-01-01

    In this paper, some special features on the canonical version of Hori method, when it is applied to generalized canonical systems (systems of differential equations described by a Hamiltonian function linear in the momenta), are presented. Two different procedures, based on a new approach for the integration theory recently presented for the canonical version, are proposed for determining the new Hamiltonian and the generating function for systems whose differential equations for the coordinates describe a periodic system with one fast phase. These procedures are equivalent and they are directly related to the canonical transformations defined by the general solution of the integrable kernel of the Hamiltonian. They provide the same near-identity transformation for the coordinates obtained through the non-canonical version of Hori method. It is also shown that these procedures are connected to the classic averaging principle through a canonical transformation. As examples, asymptotic solutions of a non-linear oscillations problem and of the elliptic perturbed problem are discussed.

  14. Using lattice methods in non-canonical quantum statistics

    International Nuclear Information System (INIS)

    We define a natural coarse-graining procedure which can be applied to any closed equilibrium quantum system described by a density matrix ensemble and we show how the coarse-graining leads to the Gaussian and canonical ensembles. After this motivation, we present two ways of evaluating the Gaussian expectation values with lattice simulations. The first one is computationally demanding but general, whereas the second employs only canonical expectation values but it is applicable only for systems which are almost thermodynamical

  15. Phase changes in 38-atom Lennard-Jones clusters: I. A parallel tempering study in the canonical ensemble

    CERN Document Server

    Neirotti, J P; Freeman, D L; Doll, J D; Freeman, David L.

    2000-01-01

    The heat capacity and isomer distributions of the 38 atom Lennard-Jones cluster have been calculated in the canonical ensemble using parallel tempering Monte Carlo methods. A distinct region of temperature is identified that corresponds to equilibrium between the global minimum structure and the icosahedral basin of structures. This region of temperatures occurs below the melting peak of the heat capacity and is accompanied by a peak in the derivative of the heat capacity with temperature. Parallel tempering is shown to introduce correlations between results at different temperatures. A discussion is given that compares parallel tempering with other related approaches that ensure ergodic simulations.
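
    For readers unfamiliar with the technique, the sketch below shows the standard replica-exchange (parallel tempering) swap move between neighbouring temperatures; it is a minimal generic illustration, and the cluster-specific simulation details of the study are not reproduced. The correlations between temperatures discussed above arise precisely from such swaps.

    ```python
    import numpy as np

    def attempt_neighbour_swaps(energies, betas, rng=None):
        """One sweep of configuration swaps between neighbouring inverse
        temperatures. `energies[r]` is the potential energy of replica r;
        `perm[t]` records which replica currently sits at temperature slot t.
        """
        rng = np.random.default_rng() if rng is None else rng
        perm = list(range(len(betas)))
        for t in range(len(betas) - 1):
            a, b = perm[t], perm[t + 1]
            # log of the Metropolis acceptance ratio for exchanging the two configurations
            log_ratio = (betas[t + 1] - betas[t]) * (energies[b] - energies[a])
            if log_ratio >= 0.0 or rng.random() < np.exp(log_ratio):
                perm[t], perm[t + 1] = perm[t + 1], perm[t]
        return perm
    ```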

  16. Generalized network structures: The configuration model and the canonical ensemble of simplicial complexes

    Science.gov (United States)

    Courtney, Owen T.; Bianconi, Ginestra

    2016-06-01

    Simplicial complexes are generalized network structures able to encode interactions occurring between more than two nodes. Simplicial complexes describe a large variety of complex interacting systems ranging from brain networks to social and collaboration networks. Here we characterize the structure of simplicial complexes using their generalized degrees that capture fundamental properties of one, two, three, or more linked nodes. Moreover, we introduce the configuration model and the canonical ensemble of simplicial complexes, enforcing, respectively, the sequence of generalized degrees of the nodes and the sequence of the expected generalized degrees of the nodes. We evaluate the entropy of these ensembles, finding the asymptotic expression for the number of simplicial complexes in the configuration model. We provide the algorithms for the construction of simplicial complexes belonging to the configuration model and the canonical ensemble of simplicial complexes. We give an expression for the structural cutoff of simplicial complexes that for simplicial complexes of dimension d =1 reduces to the structural cutoff of simple networks. Finally, we provide a numerical analysis of the natural correlations emerging in the configuration model of simplicial complexes without structural cutoff.
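
    As a point of reference for the d = 1 case mentioned above, the sketch below builds an ordinary configuration-model (multi)graph by naive stub matching; the generalized-degree construction for higher-dimensional simplicial complexes given in the paper is not implemented here.

    ```python
    import random

    def configuration_model_graph(degrees, rng=random):
        """Naive stub-matching realization of the configuration model for simple
        networks (the d = 1 special case). Self-loops and multi-edges are kept
        for brevity; rejecting or rewiring them is a common refinement.
        """
        stubs = [node for node, k in enumerate(degrees) for _ in range(k)]
        if len(stubs) % 2 != 0:
            raise ValueError("degree sequence must have an even sum")
        rng.shuffle(stubs)
        return [(stubs[i], stubs[i + 1]) for i in range(0, len(stubs), 2)]

    # Example: a degree sequence with even sum gives 4 edges
    # print(configuration_model_graph([3, 2, 2, 1]))
    ```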

  17. Ensemble Methods Foundations and Algorithms

    CERN Document Server

    Zhou, Zhi-Hua

    2012-01-01

    An up-to-date, self-contained introduction to a state-of-the-art machine learning approach, Ensemble Methods: Foundations and Algorithms shows how these accurate methods are used in real-world tasks. It gives you the necessary groundwork to carry out further research in this evolving field. After presenting background and terminology, the book covers the main algorithms and theories, including Boosting, Bagging, Random Forest, averaging and voting schemes, the Stacking method, mixture of experts, and diversity measures. It also discusses multiclass extension, noise tolerance, error-ambiguity a

  18. Linkage-specific conformational ensembles of non-canonical polyubiquitin chains.

    Science.gov (United States)

    Castañeda, Carlos A; Chaturvedi, Apurva; Camara, Christina M; Curtis, Joseph E; Krueger, Susan; Fushman, David

    2016-02-17

    Polyubiquitination is a critical protein post-translational modification involved in a variety of processes in eukaryotic cells. The molecular basis for selective recognition of the polyubiquitin signals by cellular receptors is determined by the conformations polyubiquitin chains adopt; this has been demonstrated for K48- and K63-linked chains. Recent studies of the so-called non-canonical chains (linked via K6, K11, K27, K29, or K33) suggest they play important regulatory roles in growth, development, and immune system pathways, but biophysical studies are needed to elucidate the physical/structural basis of their interactions with receptors. A first step towards this goal is characterization of the conformations these chains adopt in solution. We assembled diubiquitins (Ub2) comprised of every lysine linkage. Using solution NMR measurements, small-angle neutron scattering (SANS), and in silico ensemble generation, we determined population-weighted conformational ensembles that shed light on the structure and dynamics of the non-canonical polyubiquitin chains. We found that polyubiquitin is conformationally heterogeneous, and each chain type exhibits unique conformational ensembles. For example, K6-Ub2 and K11-Ub2 (at physiological salt concentration) are in dynamic equilibrium between at least two conformers, where one exhibits a unique Ub/Ub interface, distinct from that observed in K48-Ub2 but similar to crystal structures of these chains. Conformers for K29-Ub2 and K33-Ub2 resemble recent crystal structures in the ligand-bound state. Remarkably, a number of diubiquitins adopt conformers similar to K48-Ub2 or K63-Ub2, suggesting potential overlap of biological function among different lysine linkages. These studies highlight the potential power of determining function from elucidation of conformational states. PMID:26422168

  19. Extending canonical Monte Carlo methods: II

    Science.gov (United States)

    Velazquez, L.; Curilef, S.

    2010-04-01

    We have previously presented a methodology for extending canonical Monte Carlo methods inspired by a suitable extension of the canonical fluctuation relation C = β²⟨δE²⟩ compatible with negative heat capacities, C < 0. Now, we improve this methodology by including the finite size effects that reduce the precision of a direct determination of the microcanonical caloric curve β(E) = ∂S(E)/∂E, as well as by carrying out a better implementation of the MC schemes. We show that, despite the modifications considered, the extended canonical MC methods lead to an impressive overcoming of the so-called supercritical slowing down observed close to the region of the temperature-driven first-order phase transition. In this case, the size dependence of the decorrelation time τ is reduced from an exponential growth to a weak power-law behavior, τ(N) ∝ N^α, as is shown in the particular case of the 2D seven-state Potts model where the exponent α = 0.14-0.18.

  20. On the calculation of single ion activity coefficients in homogeneous ionic systems by application of the grand canonical ensemble

    DEFF Research Database (Denmark)

    Sloth, Peter

    1993-01-01

    The grand canonical ensemble has been used to study the evaluation of single ion activity coefficients in homogeneous ionic fluids. In this work, the Coulombic interactions are truncated according to the minimum image approximation, and the ions are assumed to be placed in a structureless, homogeneous dielectric continuum. Grand canonical ensemble Monte Carlo calculation results for two primitive model electrolyte solutions are presented. Also, a formula involving the second moments of the total correlation functions is derived from fluctuation theory, which applies for the derivatives of the individual ionic activity coefficients with respect to the total ionic concentration. This formula has previously been proposed on the basis of somewhat different considerations.

  1. Phase structures of 4D stringy charged black holes in canonical ensemble

    Science.gov (United States)

    Jia, Qiang; Lu, J. X.; Tan, Xiao-Jun

    2016-08-01

    We study the thermodynamics and phase structures of the asymptotically flat dilatonic black holes in 4 dimensions, placed in a cavity a la York, in string theory for an arbitrary dilaton coupling. We consider these charged black systems in the canonical ensemble, for which the temperature at the wall of the cavity and the charge inside it are fixed. We find that the dilaton coupling plays the key role in the underlying phase structures. The connection of these black holes to higher dimensional brane systems via diagonal (double) and/or direct dimensional reductions indicates that the phase structures of the former may exhaust all possible ones of the latter, which are more difficult to study, under conditions of similar settings. Our study also shows that a diagonal (double) dimensional reduction preserves the underlying phase structure while a direct dimensional reduction has the potential to change it.

  2. Phase structures of 4D stringy charged black holes in canonical ensemble

    CERN Document Server

    Jia, Qiang; Tan, Xiao-Jun

    2016-01-01

    We study the thermodynamics and phase structures of the asymptotically flat dilatonic black holes in 4 dimensions, placed in a cavity a la York, in string theory for an arbitrary dilaton coupling. We consider these charged black systems in the canonical ensemble, for which the temperature at the wall of the cavity and the charge inside it are fixed. We find that the dilaton coupling plays the key role in the underlying phase structures. The connection of these black holes to higher dimensional brane systems via diagonal (double) and/or direct dimensional reductions indicates that the phase structures of the former may exhaust all possible ones of the latter, which are more difficult to study, under conditions of similar settings. Our study also shows that a diagonal (double) dimensional reduction preserves the underlying phase structure while a direct dimensional reduction has the potential to change it.

  3. Phase transition and thermodynamic geometry of f (R ) AdS black holes in the grand canonical ensemble

    Science.gov (United States)

    Li, Gu-Qiang; Mo, Jie-Xiong

    2016-06-01

    The phase transition of a four-dimensional charged AdS black hole solution in the R + f(R) gravity with constant curvature is investigated in the grand canonical ensemble, where we find novel characteristics quite different from those in the canonical ensemble. There exists no critical point for the T-S curve, while in earlier research a critical point was found for both the T-S curve and the T-r+ curve when the electric charge of f(R) black holes is kept fixed. Moreover, we derive the explicit expressions for the specific heat, the analog of the volume expansion coefficient and the isothermal compressibility coefficient when the electric potential of the f(R) AdS black hole is fixed. The specific heat CΦ encounters a divergence when 0 b. This finding also differs from the result in the canonical ensemble, where there may be two, one or no divergence points for the specific heat CQ. To examine the phase structure newly found in the grand canonical ensemble, we appeal to the well-known thermodynamic geometry tools and derive the analytic expressions for both the Weinhold scalar curvature and the Ruppeiner scalar curvature. It is shown that they diverge exactly where the specific heat CΦ diverges.

  4. Phase transition and thermodynamic geometry of $f(R)$ AdS black holes in the grand canonical ensemble

    CERN Document Server

    Li, Gu-Qiang

    2016-01-01

    The phase transition of four-dimensional charged AdS black hole solution in the $R+f(R)$ gravity with constant curvature is investigated in the grand canonical ensemble, where we find novel characteristics quite different from that in canonical ensemble. There exists no critical point for $T-S$ curve while in former research critical point was found for both the $T-S$ curve and $T-r_+$ curve when the electric charge of $f(R)$ black holes is kept fixed. Moreover, we derive the explicit expression for the specific heat, the analog of volume expansion coefficient and isothermal compressibility coefficient when the electric potential of $f(R)$ AdS black hole is fixed. The specific heat $C_\\Phi$ encounters a divergence when $0b$. This finding also differs from the result in the canonical ensemble, where there may be two, one or no divergence points for the specific heat $C_Q$. To examine the phase structure newly found in the grand canonical ensemble, we appeal to the well-known thermodynamic geometry tools and de...

  5. Hard-sphere fluids inside spherical, hard pores. Grand canonical ensemble Monte Carlo calculations and integral equation approximations

    DEFF Research Database (Denmark)

    Sloth, Peter

    1990-01-01

    Density profiles and partition coefficients are obtained for hard-sphere fluids inside hard, spherical pores of different sizes by grand canonical ensemble Monte Carlo calculations. The Monte Carlo results are compared to the results obtained by application of different kinds of integral equation...

  6. Canonical vs. micro-canonical sampling methods in a 2D Ising model

    International Nuclear Information System (INIS)

    Canonical and micro-canonical Monte Carlo algorithms were implemented on a 2D Ising model. Expressions for the internal energy, U, inverse temperature, Z, and specific heat, C, are given. These quantities were calculated over a range of temperature, lattice sizes, and time steps. Both algorithms accurately simulate the Ising model. To obtain greater than three decimal accuracy from the micro-canonical method requires that the more complicated expression for Z be used. The overall difference between the algorithms is small. The physics of the problem under study should be the deciding factor in determining which algorithm to use. 13 refs., 6 figs., 2 tabs
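
    The record does not say which micro-canonical algorithm was implemented; as one standard example of micro-canonical sampling, the sketch below runs Creutz 'demon' dynamics for the 2D Ising model. It is our own illustrative choice, with coupling J = 1 assumed.

    ```python
    import numpy as np

    def creutz_demon_ising(L=32, demon_energy=16, sweeps=500, rng=None):
        """Micro-canonical (Creutz 'demon') sampling of the 2D Ising model with
        J = 1 and periodic boundaries. Illustration only; not necessarily the
        algorithm used in the record above.
        """
        rng = np.random.default_rng() if rng is None else rng
        spins = np.ones((L, L), dtype=int)   # start from the ground state
        demon = demon_energy                 # auxiliary energy reservoir, kept >= 0
        for _ in range(sweeps):
            for _ in range(L * L):
                i, j = rng.integers(L, size=2)
                nb = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
                      + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
                dE = 2 * spins[i, j] * nb    # energy cost of flipping spin (i, j)
                if dE <= demon:              # flip only if the demon can pay for it
                    demon -= dE
                    spins[i, j] *= -1
        return spins, demon                  # demon energy statistics estimate the temperature
    ```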

  7. Canonical Ensemble Model for Black Hole Horizon of Schwarzschild–de Sitter Black Holes Quantum Tunnelling Radiation

    Indian Academy of Sciences (India)

    W. X. Zhong

    2014-09-01

    In this paper, we use the canonical ensemble model to discuss the radiation of a Schwarzschild–de Sitter black hole on the black hole horizon. Using this model, we calculate the probability distribution function of the emission shell, and its statistical interpretation is used to investigate the black hole tunnelling radiation spectrum. We also discuss the mechanism of information flow from the black hole.

  8. Canonical-ensemble state-averaged complete active space self-consistent field (SA-CASSCF) strategy for problems with more diabatic than adiabatic states: Charge-bond resonance in monomethine cyanines

    International Nuclear Information System (INIS)

    This paper reviews basic results from a theory of the a priori classical probabilities (weights) in state-averaged complete active space self-consistent field (SA-CASSCF) models. It addresses how the classical probabilities limit the invariance of the self-consistency condition to transformations of the complete active space configuration interaction (CAS-CI) problem. Such transformations are of interest for choosing representations of the SA-CASSCF solution that are diabatic with respect to some interaction. I achieve the known result that a SA-CASSCF can be self-consistently transformed only within degenerate subspaces of the CAS-CI ensemble density matrix. For uniformly distributed (“microcanonical”) SA-CASSCF ensembles, self-consistency is invariant to any unitary CAS-CI transformation that acts locally on the ensemble support. Most SA-CASSCF applications in current literature are microcanonical. A problem with microcanonical SA-CASSCF models for problems with “more diabatic than adiabatic” states is described. The problem is that not all diabatic energies and couplings are self-consistently resolvable. A canonical-ensemble SA-CASSCF strategy is proposed to solve the problem. For canonical-ensemble SA-CASSCF, the equilibrated ensemble is a Boltzmann density matrix parametrized by its own CAS-CI Hamiltonian and a Lagrange multiplier acting as an inverse “temperature,” unrelated to the physical temperature. Like the convergence criterion for microcanonical-ensemble SA-CASSCF, the equilibration condition for canonical-ensemble SA-CASSCF is invariant to transformations that act locally on the ensemble CAS-CI density matrix. The advantage of a canonical-ensemble description is that more adiabatic states can be included in the support of the ensemble without running into convergence problems. The constraint on the dimensionality of the problem is relieved by the introduction of an energy constraint. The method is illustrated with a complete active space

  9. Ensemble methods for noise in classification problems

    OpenAIRE

    Verbaeten, Sofie; Van Assche, Anneleen

    2003-01-01

    Ensemble methods combine a set of classifiers to construct a new classifier that is (often) more accurate than any of its component classifiers. In this paper, we use ensemble methods to identify noisy training examples. More precisely, we consider the problem of mislabeled training examples in classification tasks, and address this problem by pre-processing the training set, i.e. by identifying and removing outliers from the training set. We study a number of filter techniques that are based...
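
    As one concrete filter of the kind discussed above, the sketch below flags a training example as probably mislabeled when a majority of cross-validated base classifiers misclassify it. This is a generic majority filter with an arbitrary choice of base learners, not the authors' exact procedure.

    ```python
    import numpy as np
    from sklearn.model_selection import cross_val_predict
    from sklearn.naive_bayes import GaussianNB
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.tree import DecisionTreeClassifier

    def majority_vote_filter(X, y, cv=5):
        """Return a boolean mask of training examples to keep: an example is
        flagged as noisy when a majority of the base classifiers misclassify it
        on out-of-fold predictions. Generic sketch only.
        """
        y = np.asarray(y)
        learners = [DecisionTreeClassifier(), GaussianNB(), KNeighborsClassifier()]
        miss = np.zeros(len(y), dtype=int)
        for clf in learners:
            pred = cross_val_predict(clf, X, y, cv=cv)   # out-of-fold predictions
            miss += (pred != y).astype(int)
        return miss <= len(learners) // 2                # True = keep the example
    ```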

  10. Canonical Correlation Forests

    OpenAIRE

    Rainforth, Tom; Wood, Frank

    2015-01-01

    We introduce canonical correlation forests (CCFs), a new decision tree ensemble method for classification. Individual canonical correlation trees are binary decision trees with hyperplane splits based on canonical correlation components. Unlike axis-aligned alternatives, the decision surfaces of CCFs are not restricted to the coordinate system of the input features and therefore more naturally represent data with correlation between the features. Additionally we introduce a novel alternative ...
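
    To illustrate the kind of projection a canonical correlation tree splits on, here is a plain CCA computation via whitening and an SVD of the cross-covariance between the features X and a one-hot encoding Y of the class labels. The tree-growing and ensembling logic of CCFs is not reproduced, and the small regularization term is our own addition for numerical safety.

    ```python
    import numpy as np

    def cca_directions(X, Y, reg=1e-6):
        """Canonical correlation directions between X (n x p features) and
        Y (n x c one-hot labels). Plain CCA sketch, not the CCF split routine.
        """
        Xc = X - X.mean(axis=0)
        Yc = Y - Y.mean(axis=0)
        n = X.shape[0]
        Cxx = Xc.T @ Xc / n + reg * np.eye(X.shape[1])
        Cyy = Yc.T @ Yc / n + reg * np.eye(Y.shape[1])
        Cxy = Xc.T @ Yc / n
        # Whiten both blocks, then take singular vectors of the whitened cross-covariance.
        Wx = np.linalg.inv(np.linalg.cholesky(Cxx))
        Wy = np.linalg.inv(np.linalg.cholesky(Cyy))
        U, s, Vt = np.linalg.svd(Wx @ Cxy @ Wy.T)
        A = Wx.T @ U           # canonical directions in feature space
        B = Wy.T @ Vt.T        # canonical directions in label space
        return A, B, s         # s holds the canonical correlations
    ```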

  11. Hamiltonian Dynamics of Bounded Spacetime and Black Hole Entropy Canonical Method

    CERN Document Server

    Park, M

    2002-01-01

    Recently, Carlip proposed a formulation which computes the Bekenstein-Hawking (BH) entropy for a black hole in any dimension. But it has been known that his theory has some technical inconsistencies, although his idea has received wide attention. This paper addresses a resolution of the problem. By considering a correct gravity action whose variational principle is well defined at the horizon, one can derive the correct Virasoro generator for the surface deformation at the horizon through the canonical method. The grand canonical ensemble, where the horizon and its angular velocity and temperature are fixed, is appropriate for my purpose. From the canonical quantization of the Virasoro algebra, it is found that the existence of the classical Virasoro algebra is crucial to obtain the operator Virasoro algebra which produces the right conformal weights ~ A/ħG for the semiclassical black hole entropy from Cardy's universal entropy formula. The correct numerical factor 1/4 is obtained by choosin...

  12. Ensemble Kalman methods for inverse problems

    International Nuclear Information System (INIS)

    The ensemble Kalman filter (EnKF) was introduced by Evensen in 1994 (Evensen 1994 J. Geophys. Res. 99 10143–62) as a novel method for data assimilation: state estimation for noisily observed time-dependent problems. Since that time it has had enormous impact in many application domains because of its robustness and ease of implementation, and numerical evidence of its accuracy. In this paper we propose the application of an iterative ensemble Kalman method for the solution of a wide class of inverse problems. In this context we show that the estimate of the unknown function that we obtain with the ensemble Kalman method lies in a subspace A spanned by the initial ensemble. Hence the resulting error may be bounded above by the error found from the best approximation in this subspace. We provide numerical experiments which compare the error incurred by the ensemble Kalman method for inverse problems with the error of the best approximation in A, and with variants on traditional least-squares approaches, restricted to the subspace A. In so doing we demonstrate that the ensemble Kalman method for inverse problems provides a derivative-free optimization method with comparable accuracy to that achieved by traditional least-squares approaches. Furthermore, we also demonstrate that the accuracy is of the same order of magnitude as that achieved by the best approximation. Three examples are used to demonstrate these assertions: inversion of a compact linear operator; inversion of piezometric head to determine hydraulic conductivity in a Darcy model of groundwater flow; and inversion of Eulerian velocity measurements at positive times to determine the initial condition in an incompressible fluid. (paper)
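
    To make the update concrete, the sketch below performs one generic ensemble Kalman iteration for an inverse problem y = G(u) + η with noise covariance Γ. The notation and the perturbed-observation variant follow common practice rather than the paper itself; note that each update direction lies in the span of the ensemble deviations, consistent with the subspace property discussed above.

    ```python
    import numpy as np

    def eki_update(ensemble, forward, y, Gamma, rng=None):
        """One ensemble Kalman iteration for y = G(u) + noise.
        `ensemble` is a (J, d) array of parameter particles, `forward` maps a
        parameter vector to an m-dimensional prediction, `Gamma` is the (m, m)
        noise covariance. Generic sketch with perturbed observations.
        """
        rng = np.random.default_rng() if rng is None else rng
        J = ensemble.shape[0]
        G = np.array([forward(u) for u in ensemble])       # (J, m) predictions
        du = ensemble - ensemble.mean(axis=0)
        dg = G - G.mean(axis=0)
        C_ug = du.T @ dg / J                               # cross-covariance (d, m)
        C_gg = dg.T @ dg / J                               # prediction covariance (m, m)
        K = C_ug @ np.linalg.inv(C_gg + Gamma)             # Kalman gain (d, m)
        # Perturbed observations keep the ensemble spread consistent with the noise.
        y_pert = y + rng.multivariate_normal(np.zeros(len(y)), Gamma, size=J)
        return ensemble + (y_pert - G) @ K.T
    ```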

  13. Accelerating Monte Carlo molecular simulations by reweighting and reconstructing Markov chains: Extrapolation of canonical ensemble averages and second derivatives to different temperature and density conditions

    International Nuclear Information System (INIS)

    Accurate determination of thermodynamic properties of petroleum reservoir fluids is of great interest to many applications, especially in petroleum engineering and chemical engineering. Molecular simulation has many appealing features, especially its requirement of fewer tuned parameters but yet better predicting capability; however it is well known that molecular simulation is very CPU expensive, as compared to equation of state approaches. We have recently introduced an efficient thermodynamically consistent technique to regenerate rapidly Monte Carlo Markov Chains (MCMCs) at different thermodynamic conditions from the existing data points that have been pre-computed with expensive classical simulation. This technique can speed up the simulation more than a million times, making the regenerated molecular simulation almost as fast as equation of state approaches. In this paper, this technique is first briefly reviewed and then numerically investigated in its capability of predicting ensemble averages of primary quantities at different neighboring thermodynamic conditions to the original simulated MCMCs. Moreover, this extrapolation technique is extended to predict second derivative properties (e.g. heat capacity and fluid compressibility). The method works by reweighting and reconstructing generated MCMCs in canonical ensemble for Lennard-Jones particles. In this paper, system's potential energy, pressure, isochoric heat capacity and isothermal compressibility along isochors, isotherms and paths of changing temperature and density from the original simulated points were extrapolated. Finally, an optimized set of Lennard-Jones parameters (ε, σ) for single site models were proposed for methane, nitrogen and carbon monoxide
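
    The single-histogram reweighting identity that this kind of extrapolation builds on is shown below as a minimal sketch; the paper's full chain-reconstruction and second-derivative machinery is not reproduced, and the variable names are ours.

    ```python
    import numpy as np

    def reweighted_average(obs, energies, beta_sim, beta_new):
        """Estimate a canonical-ensemble average at inverse temperature beta_new
        from configurations sampled at beta_sim, using
            <A>_new = <A exp(-dbeta E)>_sim / <exp(-dbeta E)>_sim.
        `obs` and `energies` are per-configuration values from the original chain.
        """
        dbeta = beta_new - beta_sim
        logw = -dbeta * np.asarray(energies, dtype=float)
        logw -= logw.max()                    # guard against overflow
        w = np.exp(logw)
        return np.sum(w * np.asarray(obs, dtype=float)) / np.sum(w)
    ```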

  14. Accelerating Monte Carlo molecular simulations by reweighting and reconstructing Markov chains: Extrapolation of canonical ensemble averages and second derivatives to different temperature and density conditions

    Science.gov (United States)

    Kadoura, Ahmad; Sun, Shuyu; Salama, Amgad

    2014-08-01

    Accurate determination of thermodynamic properties of petroleum reservoir fluids is of great interest to many applications, especially in petroleum engineering and chemical engineering. Molecular simulation has many appealing features, especially its requirement of fewer tuned parameters yet better predictive capability; however, it is well known that molecular simulation is very CPU expensive, as compared to equation of state approaches. We have recently introduced an efficient thermodynamically consistent technique to regenerate rapidly Monte Carlo Markov Chains (MCMCs) at different thermodynamic conditions from the existing data points that have been pre-computed with expensive classical simulation. This technique can speed up the simulation more than a million times, making the regenerated molecular simulation almost as fast as equation of state approaches. In this paper, this technique is first briefly reviewed and then numerically investigated in its capability of predicting ensemble averages of primary quantities at different neighboring thermodynamic conditions to the original simulated MCMCs. Moreover, this extrapolation technique is extended to predict second derivative properties (e.g. heat capacity and fluid compressibility). The method works by reweighting and reconstructing generated MCMCs in the canonical ensemble for Lennard-Jones particles. In this paper, the system's potential energy, pressure, isochoric heat capacity and isothermal compressibility along isochors, isotherms and paths of changing temperature and density from the original simulated points were extrapolated. Finally, an optimized set of Lennard-Jones parameters (ε, σ) for single-site models was proposed for methane, nitrogen and carbon monoxide.

  15. Accelerating Monte Carlo molecular simulations by reweighting and reconstructing Markov chains: Extrapolation of canonical ensemble averages and second derivatives to different temperature and density conditions

    KAUST Repository

    Kadoura, Ahmad Salim

    2014-08-01

    Accurate determination of thermodynamic properties of petroleum reservoir fluids is of great interest to many applications, especially in petroleum engineering and chemical engineering. Molecular simulation has many appealing features, especially its requirement of fewer tuned parameters yet better predictive capability; however, it is well known that molecular simulation is very CPU expensive, as compared to equation of state approaches. We have recently introduced an efficient thermodynamically consistent technique to regenerate rapidly Monte Carlo Markov Chains (MCMCs) at different thermodynamic conditions from the existing data points that have been pre-computed with expensive classical simulation. This technique can speed up the simulation more than a million times, making the regenerated molecular simulation almost as fast as equation of state approaches. In this paper, this technique is first briefly reviewed and then numerically investigated in its capability of predicting ensemble averages of primary quantities at different neighboring thermodynamic conditions to the original simulated MCMCs. Moreover, this extrapolation technique is extended to predict second derivative properties (e.g. heat capacity and fluid compressibility). The method works by reweighting and reconstructing generated MCMCs in the canonical ensemble for Lennard-Jones particles. In this paper, the system's potential energy, pressure, isochoric heat capacity and isothermal compressibility along isochors, isotherms and paths of changing temperature and density from the original simulated points were extrapolated. Finally, an optimized set of Lennard-Jones parameters (ε, σ) for single-site models was proposed for methane, nitrogen and carbon monoxide. © 2014 Elsevier Inc.

  16. Quantum Decoherence and Thermalization at Finite Temperature within the Canonical Thermal State Ensemble

    CERN Document Server

    Novotny, M A; Yuan, S; Miyashita, S; De Raedt, H; Michielsen, K

    2016-01-01

    We study measures of decoherence and thermalization of a quantum system $S$ in the presence of a quantum environment (bath) $E$. The entirety $S$$+$$E$ is prepared in a canonical thermal state at a finite temperature, that is the entirety is in a steady state. Both our numerical results and theoretical predictions show that measures of the decoherence and the thermalization of $S$ are generally finite, even in the thermodynamic limit, when the entirety $S$$+$$E$ is at finite temperature. Notably, applying perturbation theory with respect to the system-environment coupling strength, we find that under common Hamiltonian symmetries, up to first order in the coupling strength it is sufficient to consider $S$ uncoupled from $E$, but entangled with $E$, to predict decoherence and thermalization measures of $S$. This decoupling allows closed form expressions for perturbative expansions for the measures of decoherence and thermalization in terms of the free energies of $S$ and of $E$. Large-scale numerical results f...

  17. Quantum decoherence and thermalization at finite temperature within the canonical-thermal-state ensemble

    Science.gov (United States)

    Novotny, M. A.; Jin, F.; Yuan, S.; Miyashita, S.; De Raedt, H.; Michielsen, K.

    2016-03-01

    We study measures of decoherence and thermalization of a quantum system S in the presence of a quantum environment (bath) E . The entirety S +E is prepared in a canonical-thermal state at a finite temperature; that is, the entirety is in a steady state. Both our numerical results and theoretical predictions show that measures of the decoherence and the thermalization of S are generally finite, even in the thermodynamic limit, when the entirety S +E is at finite temperature. Notably, applying perturbation theory with respect to the system-environment coupling strength, we find that under common Hamiltonian symmetries, up to first order in the coupling strength it is sufficient to consider S uncoupled from E , but entangled with E , to predict decoherence and thermalization measures of S . This decoupling allows closed-form expressions for perturbative expansions for the measures of decoherence and thermalization in terms of the free energies of S and of E . Large-scale numerical results for both coupled and uncoupled entireties with up to 40 quantum spins support these findings.

  18. Electronic chemical response indexes at finite temperature in the canonical ensemble

    Energy Technology Data Exchange (ETDEWEB)

    Franco-Pérez, Marco, E-mail: qimfranco@hotmail.com, E-mail: jlgm@xanum.uam.mx, E-mail: avela@cinvestav.mx; Gázquez, José L., E-mail: qimfranco@hotmail.com, E-mail: jlgm@xanum.uam.mx, E-mail: avela@cinvestav.mx [Departamento de Química, Universidad Autónoma Metropolitana-Iztapalapa, Av. San Rafael Atlixco 186, México, D. F. 09340, México (Mexico); Departamento de Química, Centro de Investigación y de Estudios Avanzados, Av. Instituto Politécnico Nacional 2508, México, D. F. 07360, México (Mexico); Vela, Alberto, E-mail: qimfranco@hotmail.com, E-mail: jlgm@xanum.uam.mx, E-mail: avela@cinvestav.mx [Departamento de Química, Centro de Investigación y de Estudios Avanzados, Av. Instituto Politécnico Nacional 2508, México, D. F. 07360, México (Mexico)

    2015-07-14

    Assuming that the electronic energy is given by a smooth function of the number of electrons and within the extension of density functional theory to finite temperature, the first and second order chemical reactivity response functions of the Helmholtz free energy with respect to the temperature, the number of electrons, and the external potential are derived. It is found that in all cases related to the first or second derivatives with respect to the number of electrons or the external potential, there is a term given by the average of the corresponding derivative of the electronic energy of each state (ground and excited). For the second derivatives, including those related with the temperature, there is a thermal fluctuation contribution that is zero at zero temperature. Thus, all expressions reduce correctly to their corresponding chemical reactivity expressions at zero temperature and show that, at room temperature, the corrections are very small. When the assumption that the electronic energy is given by a smooth function of the number of electrons is replaced by the straight lines behavior connecting integer values, as required by the ensemble theorem, one needs to introduce directional derivatives in most cases, so that the temperature dependent expressions reduce correctly to their zero temperature counterparts. However, the main result holds, namely, at finite temperature the thermal corrections to the chemical reactivity response functions are very small. Consequently, the present work validates the usage of reactivity indexes calculated at zero temperature to infer chemical behavior at room and even higher temperatures.

  19. Electronic chemical response indexes at finite temperature in the canonical ensemble

    International Nuclear Information System (INIS)

    Assuming that the electronic energy is given by a smooth function of the number of electrons and within the extension of density functional theory to finite temperature, the first and second order chemical reactivity response functions of the Helmholtz free energy with respect to the temperature, the number of electrons, and the external potential are derived. It is found that in all cases related to the first or second derivatives with respect to the number of electrons or the external potential, there is a term given by the average of the corresponding derivative of the electronic energy of each state (ground and excited). For the second derivatives, including those related with the temperature, there is a thermal fluctuation contribution that is zero at zero temperature. Thus, all expressions reduce correctly to their corresponding chemical reactivity expressions at zero temperature and show that, at room temperature, the corrections are very small. When the assumption that the electronic energy is given by a smooth function of the number of electrons is replaced by the straight lines behavior connecting integer values, as required by the ensemble theorem, one needs to introduce directional derivatives in most cases, so that the temperature dependent expressions reduce correctly to their zero temperature counterparts. However, the main result holds, namely, at finite temperature the thermal corrections to the chemical reactivity response functions are very small. Consequently, the present work validates the usage of reactivity indexes calculated at zero temperature to infer chemical behavior at room and even higher temperatures

  20. Ensemble Machine Learning Methods and Applications

    CERN Document Server

    Ma, Yunqian

    2012-01-01

    It is common wisdom that gathering a variety of views and inputs improves the process of decision making, and, indeed, underpins a democratic society. Dubbed “ensemble learning” by researchers in computational intelligence and machine learning, it is known to improve a decision system’s robustness and accuracy. Now, fresh developments are allowing researchers to unleash the power of ensemble learning in an increasing range of real-world applications. Ensemble learning algorithms such as “boosting” and “random forest” facilitate solutions to key computational issues such as face detection and are now being applied in areas as diverse as object tracking and bioinformatics. Responding to a shortage of literature dedicated to the topic, this volume offers comprehensive coverage of state-of-the-art ensemble learning techniques, including various contributions from researchers in leading industrial research labs. At once a solid theoretical study and a practical guide, the volume is a windfall for r...

  1. The canonical and grand canonical models for nuclear multifragmentation

    Indian Academy of Sciences (India)

    G Chaudhuri; S Das Gupta

    2010-08-01

    Many observables seen in intermediate energy heavy-ion collisions can be explained on the basis of statistical equilibrium. Calculations based on statistical equilibrium can be implemented in microcanonical ensemble, canonical ensemble or grand canonical ensemble. This paper deals with calculations with canonical and grand canonical ensembles. A recursive relation developed recently allows calculations with arbitrary precision for many nuclear problems. Calculations are done to study the nature of phase transition in nuclear matter.
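    The recursive relation mentioned above is not reproduced in the record; a commonly used recursion of this kind for a one-component fragmenting system, assuming the single-fragment canonical partition functions omega_k are already known, can be sketched as follows. The omega_k values in the usage example are purely illustrative.

```python
def canonical_partition_functions(omega, A):
    """Build Z_0 .. Z_A for a system of A identical constituents from the
    single-fragment canonical partition functions omega[1..A], using
    Z_A = (1/A) * sum_{k=1..A} k * omega[k] * Z_{A-k},  with Z_0 = 1."""
    Z = [1.0] + [0.0] * A
    for a in range(1, A + 1):
        Z[a] = sum(k * omega[k] * Z[a - k] for k in range(1, a + 1)) / a
    return Z

# average fragment multiplicities then follow as <n_k> = omega[k] * Z[A-k] / Z[A]
A = 50
omega = [0.0] + [1.0 / k**2.5 for k in range(1, A + 1)]   # hypothetical omega_k
Z = canonical_partition_functions(omega, A)
n_avg = [omega[k] * Z[A - k] / Z[A] for k in range(1, A + 1)]
```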

  2. Multivariate localization methods for ensemble Kalman filtering

    KAUST Repository

    Roh, S.

    2015-05-08

    In ensemble Kalman filtering (EnKF), the small number of ensemble members that is feasible to use in a practical data assimilation application leads to sampling variability of the estimates of the background error covariances. The standard approach to reducing the effects of this sampling variability, which has also been found to be highly efficient in improving the performance of EnKF, is the localization of the estimates of the covariances. One family of localization techniques is based on taking the Schur (entry-wise) product of the ensemble-based sample covariance matrix and a correlation matrix whose entries are obtained by the discretization of a distance-dependent correlation function. While the proper definition of the localization function for a single state variable has been extensively investigated, a rigorous definition of the localization function for multiple state variables has been seldom considered. This paper introduces two strategies for the construction of localization functions for multiple state variables. The proposed localization functions are tested by assimilating simulated observations experiments into the bivariate Lorenz 95 model with their help.
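    The Schur-product construction described above can be sketched for a one-dimensional state as follows. The Gaussian taper stands in for a compactly supported correlation function such as Gaspari-Cohn, and all names and sizes are illustrative rather than taken from the paper.

```python
import numpy as np

def sample_covariance(ensemble):
    """ensemble: (n_state, n_members) array of model-state vectors."""
    anomalies = ensemble - ensemble.mean(axis=1, keepdims=True)
    return anomalies @ anomalies.T / (ensemble.shape[1] - 1)

def localized_covariance(ensemble, coords, length_scale):
    """Schur (entry-wise) product of the sample covariance with a
    distance-dependent correlation matrix for a one-dimensional grid."""
    cov = sample_covariance(ensemble)
    dist = np.abs(coords[:, None] - coords[None, :])       # pairwise distances
    taper = np.exp(-0.5 * (dist / length_scale) ** 2)      # simple smooth taper
    return cov * taper                                     # Schur product

# usage on a toy grid with 25 members
coords = np.arange(40.0)
ens = np.random.randn(40, 25)
P_loc = localized_covariance(ens, coords, length_scale=5.0)
```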

  3. Multivariate localization methods for ensemble Kalman filtering

    Directory of Open Access Journals (Sweden)

    S. Roh

    2015-05-01

    Full Text Available In ensemble Kalman filtering (EnKF), the small number of ensemble members that is feasible to use in a practical data assimilation application leads to sampling variability of the estimates of the background error covariances. The standard approach to reducing the effects of this sampling variability, which has also been found to be highly efficient in improving the performance of EnKF, is the localization of the estimates of the covariances. One family of localization techniques is based on taking the Schur (entry-wise) product of the ensemble-based sample covariance matrix and a correlation matrix whose entries are obtained by the discretization of a distance-dependent correlation function. While the proper definition of the localization function for a single state variable has been extensively investigated, a rigorous definition of the localization function for multiple state variables has been seldom considered. This paper introduces two strategies for the construction of localization functions for multiple state variables. The proposed localization functions are tested by assimilating simulated observations experiments into the bivariate Lorenz 95 model with their help.

  4. Multivariate localization methods for ensemble Kalman filtering

    Science.gov (United States)

    Roh, S.; Jun, M.; Szunyogh, I.; Genton, M. G.

    2015-12-01

    In ensemble Kalman filtering (EnKF), the small number of ensemble members that is feasible to use in a practical data assimilation application leads to sampling variability of the estimates of the background error covariances. The standard approach to reducing the effects of this sampling variability, which has also been found to be highly efficient in improving the performance of EnKF, is the localization of the estimates of the covariances. One family of localization techniques is based on taking the Schur (element-wise) product of the ensemble-based sample covariance matrix and a correlation matrix whose entries are obtained by the discretization of a distance-dependent correlation function. While the proper definition of the localization function for a single state variable has been extensively investigated, a rigorous definition of the localization function for multiple state variables that exist at the same locations has been seldom considered. This paper introduces two strategies for the construction of localization functions for multiple state variables. The proposed localization functions are tested by assimilating simulated observations experiments into the bivariate Lorenz 95 model with their help.

  5. Multivariate localization methods for ensemble Kalman filtering

    KAUST Repository

    Roh, S.

    2015-12-03

    In ensemble Kalman filtering (EnKF), the small number of ensemble members that is feasible to use in a practical data assimilation application leads to sampling variability of the estimates of the background error covariances. The standard approach to reducing the effects of this sampling variability, which has also been found to be highly efficient in improving the performance of EnKF, is the localization of the estimates of the covariances. One family of localization techniques is based on taking the Schur (element-wise) product of the ensemble-based sample covariance matrix and a correlation matrix whose entries are obtained by the discretization of a distance-dependent correlation function. While the proper definition of the localization function for a single state variable has been extensively investigated, a rigorous definition of the localization function for multiple state variables that exist at the same locations has been seldom considered. This paper introduces two strategies for the construction of localization functions for multiple state variables. The proposed localization functions are tested by assimilating simulated observations experiments into the bivariate Lorenz 95 model with their help.

  6. Hybrid Intrusion Detection Using Ensemble of Classification Methods

    Directory of Open Access Journals (Sweden)

    M.Govindarajan

    2014-01-01

    Full Text Available One of the major developments in machine learning in the past decade is the ensemble method, which finds a highly accurate classifier by combining many moderately accurate component classifiers. In this research work, new ensemble classification methods are proposed for homogeneous ensemble classifiers using bagging and for heterogeneous ensemble classifiers using an arcing classifier, and their performances are analyzed in terms of accuracy. A classifier ensemble is designed using Radial Basis Function (RBF) and Support Vector Machine (SVM) as base classifiers. The feasibility and the benefits of the proposed approaches are demonstrated by means of real and benchmark data sets of intrusion detection. The main originality of the proposed approach is based on three main parts: a preprocessing phase, a classification phase and a combining phase. A wide range of comparative experiments is conducted for real and benchmark data sets of intrusion detection. The accuracy of the base classifiers is compared with that of the homogeneous and heterogeneous models for the data mining problem. The proposed ensemble methods provide a significant improvement in accuracy compared to individual classifiers, and the heterogeneous models exhibit better results than the homogeneous models for real and benchmark data sets of intrusion detection.
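    A minimal sketch in the spirit of this record, combining an RBF-kernel SVM, an MLP and a k-NN by soft voting on a stand-in dataset, is given below (assuming scikit-learn is available). It does not reproduce the paper's bagging/arcing combination scheme or its preprocessing phase.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import VotingClassifier

# stand-in for a preprocessed intrusion-detection dataset (attack vs. normal)
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

ensemble = VotingClassifier(
    estimators=[
        ("svm", SVC(kernel="rbf", probability=True)),               # RBF-kernel SVM
        ("mlp", MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000)),
        ("knn", KNeighborsClassifier(n_neighbors=5)),
    ],
    voting="soft",   # average the predicted class probabilities
)
ensemble.fit(X_tr, y_tr)
print("held-out accuracy:", ensemble.score(X_te, y_te))
```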

  7. Bose-Einstein condensation in mesoscopic systems: The self-similar structure of the critical region and the nonequivalence of the canonical and grand canonical ensembles

    Science.gov (United States)

    Kocharovsky, V. V.; Kocharovsky, Vl. V.; Tarasov, S. V.

    2016-01-01

    The analytical theory of Bose-Einstein condensation of an ideal gas in mesoscopic systems has been briefly reviewed in application to traps with arbitrary shapes and dimension. This theory describes the phases of the classical gas and the formed Bose-Einstein condensate, as well as the entire vicinity of the phase transition point. The statistics and thermodynamics of Bose-Einstein condensation have been studied in detail, including their self-similar structure in the critical region, transition to the thermodynamic limit, effect of boundary conditions on the properties of a system, and nonequivalence of the description of Bose-Einstein condensation in different statistical ensembles. The complete classification of universality classes of Bose-Einstein condensation has been given.

  8. An Alternative Method to Predict Performance: Canonical Redundancy Analysis.

    Science.gov (United States)

    Dawson-Saunders, Beth; Doolen, Deane R.

    1981-01-01

    The relationships between predictors of performance and subsequent measures of clinical performance in medical school were examined for two classes at Southern Illinois University of Medicine. Canonical redundancy analysis was used to evaluate the association between six academic and three biographical preselection characteristics and four…

  9. Extended canonical Monte Carlo methods: Improving accuracy of microcanonical calculations using a reweighting technique

    Science.gov (United States)

    Velazquez, L.; Castro-Palacio, J. C.

    2015-03-01

    Velazquez and Curilef [J. Stat. Mech. (2010) P02002, 10.1088/1742-5468/2010/02/P02002; J. Stat. Mech. (2010) P04026, 10.1088/1742-5468/2010/04/P04026] have proposed a methodology to extend Monte Carlo algorithms that are based on the canonical ensemble. According to our previous study, their proposal allows us to overcome slow sampling problems in systems that undergo any type of temperature-driven phase transition. After a comprehensive review of the ideas and connections of this framework, we discuss the application of a reweighting technique to improve the accuracy of microcanonical calculations, specifically, the well-known multihistogram method of Ferrenberg and Swendsen [Phys. Rev. Lett. 63, 1195 (1989), 10.1103/PhysRevLett.63.1195]. As an example of application, we reconsider the study of the four-state Potts model on the square lattice L × L with periodic boundary conditions. This analysis allows us to detect the existence of a very small latent heat per site q_L during the temperature-driven phase transition of this model, whose size dependence seems to follow a power law q_L(L) ∝ (1/L)^z with exponent z ≃ 0.26 ± 0.02. The compatibility of these results with the continuous character of the temperature-driven phase transition when L → +∞ is discussed.
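    The quoted size dependence can be extracted from finite-size data by a simple log-log fit; the sketch below uses hypothetical q_L values purely to illustrate the fitting step, not the measured data of the study.

```python
import numpy as np

def fit_power_law_exponent(L, qL):
    """Fit qL(L) ∝ (1/L)^z by linear regression in log-log coordinates and
    return the exponent z together with its standard error."""
    x = np.log(1.0 / np.asarray(L, dtype=float))
    ylog = np.log(np.asarray(qL, dtype=float))
    coeffs, cov = np.polyfit(x, ylog, 1, cov=True)
    return coeffs[0], np.sqrt(cov[0, 0])

# purely illustrative latent-heat-per-site values for increasing lattice sizes
L_values = [16, 32, 64, 128]
q_values = [0.120, 0.101, 0.084, 0.070]
z, z_err = fit_power_law_exponent(L_values, q_values)
```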

  10. EnsembleGASVR: A novel ensemble method for classifying missense single nucleotide polymorphisms

    KAUST Repository

    Rapakoulia, Trisevgeni

    2014-04-26

    Motivation: Single nucleotide polymorphisms (SNPs) are considered the most frequently occurring DNA sequence variations. Several computational methods have been proposed for the classification of missense SNPs as neutral or disease-associated. However, existing computational approaches fail to select relevant features by choosing them arbitrarily without sufficient documentation. Moreover, they are limited by the problems of missing values and imbalance between the learning datasets, and most of them do not support their predictions with confidence scores. Results: To overcome these limitations, a novel ensemble computational methodology is proposed. EnsembleGASVR facilitates a two-step algorithm, which in its first step applies a novel evolutionary embedded algorithm to locate close-to-optimal Support Vector Regression models. In its second step, these models are combined to extract a universal predictor, which is less prone to overfitting issues, systematizes the rebalancing of the learning sets and uses an internal approach for solving the missing values problem without loss of information. Confidence scores support all the predictions and the model becomes tunable by modifying the classification thresholds. An extensive study was performed for collecting the most relevant features for the problem of classifying SNPs, and a superset of 88 features was constructed. Experimental results show that the proposed framework outperforms well-known algorithms in terms of classification performance in the examined datasets. Finally, the proposed algorithmic framework was able to uncover the significant role of certain features such as the solvent accessibility feature, and the top-scored predictions were further validated by linking them with disease phenotypes. © The Author 2014.

  11. Canonical density matrix perturbation theory.

    Science.gov (United States)

    Niklasson, Anders M N; Cawkwell, M J; Rubensson, Emanuel H; Rudberg, Elias

    2015-12-01

    Density matrix perturbation theory [Niklasson and Challacombe, Phys. Rev. Lett. 92, 193001 (2004)] is generalized to canonical (NVT) free-energy ensembles in tight-binding, Hartree-Fock, or Kohn-Sham density-functional theory. The canonical density matrix perturbation theory can be used to calculate temperature-dependent response properties from the coupled perturbed self-consistent field equations as in density-functional perturbation theory. The method is well suited to take advantage of sparse matrix algebra to achieve linear scaling complexity in the computational cost as a function of system size for sufficiently large nonmetallic materials and metals at high temperatures. PMID:26764847

  12. Black Hole Statistical Mechanics and The Angular Velocity Ensemble

    CERN Document Server

    Thomson, Mitchell

    2012-01-01

    A new ensemble - the angular velocity ensemble - is derived using Jaynes' method of maximising entropy subject to prior information constraints. The relevance of the ensemble to black holes is motivated by a discussion of external parameters in statistical mechanics and their absence from the Hamiltonian of general relativity. It is shown how this leads to difficulty in deriving entropy as a function of state and recovering the first law of thermodynamics from the microcanonical and canonical ensembles applied to black holes.

  13. Methods of weyl representation of the phase space and canonical transformations

    International Nuclear Information System (INIS)

    The author finds the structure of the kernel of a canonical transformation and a differential equation for the symbol of the intertwining operator. The symbol of a general linear canonical transformation is constructed in terms of a Cayley transformation of the symplectic transformation of the phase space. Its singularities and applications to group theory are studied. The Green's functions and spectral projectors of arbitrary quadratic systems are constructed using the classification methods of classical mechanics

  14. Land Cover Mapping Using Ensemble Feature Selection Methods

    CERN Document Server

    Gidudu, A; Marwala, T

    2008-01-01

    Ensemble classification is an emerging approach to land cover mapping whereby the final classification output is a result of a consensus of classifiers. Intuitively, an ensemble system should consist of base classifiers which are diverse, i.e. classifiers whose decision boundaries err differently. In this paper, ensemble feature selection is used to impose diversity in ensembles. The features of the constituent base classifiers for each ensemble were created through an exhaustive search algorithm using different separability indices. For each ensemble, the classification accuracy was derived, as well as a diversity measure intended to quantify the in-ensemble diversity. The correlation between ensemble classification accuracy and the diversity measure was determined to establish the interplay between the two variables. From the findings of this paper, diversity measures as currently formulated do not provide an adequate means upon which to constitute ensembles for land cover mapping.

  15. The ensemble switch method for computing interfacial tensions

    International Nuclear Information System (INIS)

    We present a systematic thermodynamic integration approach to compute interfacial tensions for solid-liquid interfaces, which is based on the ensemble switch method. Applying Monte Carlo simulations and finite-size scaling techniques, we obtain results for hard spheres, which are in agreement with previous computations. The case of solid-liquid interfaces in a variant of the effective Asakura-Oosawa model and of liquid-vapor interfaces in the Lennard-Jones model are discussed as well. We demonstrate that a thorough finite-size analysis of the simulation data is required to obtain precise results for the interfacial tension

  16. The ensemble switch method for computing interfacial tensions

    Energy Technology Data Exchange (ETDEWEB)

    Schmitz, Fabian; Virnau, Peter [Institute of Physics, Johannes Gutenberg University Mainz, Staudingerweg 9, D-55128 Mainz (Germany)

    2015-04-14

    We present a systematic thermodynamic integration approach to compute interfacial tensions for solid-liquid interfaces, which is based on the ensemble switch method. Applying Monte Carlo simulations and finite-size scaling techniques, we obtain results for hard spheres, which are in agreement with previous computations. The case of solid-liquid interfaces in a variant of the effective Asakura-Oosawa model and of liquid-vapor interfaces in the Lennard-Jones model are discussed as well. We demonstrate that a thorough finite-size analysis of the simulation data is required to obtain precise results for the interfacial tension.

  17. Ensemble Methods in Data Mining Improving Accuracy Through Combining Predictions

    CERN Document Server

    Seni, Giovanni

    2010-01-01

    This book is aimed at novice and advanced analytic researchers and practitioners -- especially in Engineering, Statistics, and Computer Science. Those with little exposure to ensembles will learn why and how to employ this breakthrough method, and advanced practitioners will gain insight into building even more powerful models. Throughout, snippets of code in R are provided to illustrate the algorithms described and to encourage the reader to try the techniques. The authors are industry experts in data mining and machine learning who are also adjunct professors and popular speakers. Although e

  18. Adaptive error covariances estimation methods for ensemble Kalman filters

    International Nuclear Information System (INIS)

    This paper presents a computationally fast algorithm for estimating both the system and observation noise covariances of nonlinear dynamics, which can be used in an ensemble Kalman filtering framework. The new method is a modification of Belanger's recursive method that avoids the expensive computational cost of inverting error covariance matrices of products of innovation processes of different lags when the number of observations becomes large. When only products of innovation processes up to one lag are used, the computational cost is indeed comparable to that of a method recently proposed by Berry and Sauer. However, our method is more flexible since it allows for using information from products of innovation processes of more than one lag. Extensive numerical comparisons between the proposed method and both the original Belanger scheme and the Berry–Sauer scheme are shown in various examples, ranging from low-dimensional linear and nonlinear systems of SDEs to the 40-dimensional stochastically forced Lorenz-96 model. Our numerical results suggest that the proposed scheme is as accurate as the original Belanger scheme on low-dimensional problems and has a wider range of more accurate estimates than the Berry–Sauer method on the L-96 example.

  19. Adaptive error covariances estimation methods for ensemble Kalman filters

    Energy Technology Data Exchange (ETDEWEB)

    Zhen, Yicun, E-mail: zhen@math.psu.edu [Department of Mathematics, The Pennsylvania State University, University Park, PA 16802 (United States); Harlim, John, E-mail: jharlim@psu.edu [Department of Mathematics and Department of Meteorology, The Pennsylvania State University, University Park, PA 16802 (United States)

    2015-08-01

    This paper presents a computationally fast algorithm for estimating both the system and observation noise covariances of nonlinear dynamics, which can be used in an ensemble Kalman filtering framework. The new method is a modification of Belanger's recursive method that avoids the expensive computational cost of inverting error covariance matrices of products of innovation processes of different lags when the number of observations becomes large. When only products of innovation processes up to one lag are used, the computational cost is indeed comparable to that of a method recently proposed by Berry and Sauer. However, our method is more flexible since it allows for using information from products of innovation processes of more than one lag. Extensive numerical comparisons between the proposed method and both the original Belanger scheme and the Berry–Sauer scheme are shown in various examples, ranging from low-dimensional linear and nonlinear systems of SDEs to the 40-dimensional stochastically forced Lorenz-96 model. Our numerical results suggest that the proposed scheme is as accurate as the original Belanger scheme on low-dimensional problems and has a wider range of more accurate estimates than the Berry–Sauer method on the L-96 example.

  20. A canonical correlation analysis based method for contamination event detection in water sources.

    Science.gov (United States)

    Li, Ruonan; Liu, Shuming; Smith, Kate; Che, Han

    2016-06-15

    In this study, a general framework integrating a data-driven estimation model is employed for contamination event detection in water sources. Sequential canonical correlation coefficients are updated in the model using multivariate water quality time series. The proposed method utilizes canonical correlation analysis for studying the interplay between two sets of water quality parameters. The model is assessed by precision, recall and F-measure. The proposed method is tested using data from a laboratory contaminant injection experiment. The proposed method could detect a contamination event 1 minute after the introduction of 1.600 mg l⁻¹ acrylamide solution. With optimized parameter values, the proposed method can correctly detect 97.50% of all contamination events with no false alarms. The robustness of the proposed method can be explained using the Bauer-Fike theorem. PMID:27264637
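    To make the idea concrete, the sketch below computes a windowed first canonical correlation between two blocks of water-quality series with scikit-learn and flags windows where it drops. The sequential updating scheme and the precision/recall evaluation of the record are not reproduced, and the window length and threshold are illustrative assumptions.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

def first_canonical_correlation(X, Y):
    """First canonical correlation between two blocks of water-quality series."""
    Xc, Yc = CCA(n_components=1).fit_transform(X, Y)
    return np.corrcoef(Xc[:, 0], Yc[:, 0])[0, 1]

def detect_events(X, Y, window=60, threshold=0.5):
    """Flag time steps where the windowed canonical correlation between the two
    parameter blocks drops below a threshold (illustrative values only)."""
    flags = np.zeros(len(X), dtype=bool)
    for t in range(window, len(X)):
        r = first_canonical_correlation(X[t - window:t], Y[t - window:t])
        flags[t] = r < threshold
    return flags
```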

  1. Extending the square root method to account for additive forecast noise in ensemble methods

    CERN Document Server

    Raanes, Patrick N; Bertino, Laurent

    2015-01-01

    A square root approach is considered for the problem of accounting for model noise in the forecast step of the ensemble Kalman filter (EnKF) and related algorithms. The primary aim is to replace the method of simulated, pseudo-random, additive noise so as to eliminate the associated sampling errors. The core method is based on the analysis step of ensemble square root filters, and consists in the deterministic computation of a transform matrix. The theoretical advantages regarding dynamical consistency are surveyed, applying equally well to the square root method in the analysis step. A fundamental problem due to the limited size of the ensemble subspace is discussed, and novel solutions that complement the core method are suggested and studied. Benchmarks from twin experiments with simple, low-order dynamics indicate improved performance over standard approaches such as additive, simulated noise and multiplicative inflation.

  2. Microcanonical ensemble and algebra of conserved generators for generalized quantum dynamics

    International Nuclear Information System (INIS)

    It has recently been shown, by application of statistical mechanical methods to determine the canonical ensemble governing the equilibrium distribution of operator initial values, that complex quantum field theory can emerge as a statistical approximation to an underlying generalized quantum dynamics. This result was obtained by an argument based on a Ward identity analogous to the equipartition theorem of classical statistical mechanics. We construct here a microcanonical ensemble which forms the basis of this canonical ensemble. This construction enables us to define the microcanonical entropy and free energy of the field configuration of the equilibrium distribution and to study the stability of the canonical ensemble. We also study the algebraic structure of the conserved generators from which the microcanonical and canonical ensembles are constructed, and the flows they induce on the phase space. copyright 1996 American Institute of Physics

  3. Microcanonical ensemble simulation method applied to discrete potential fluids.

    Science.gov (United States)

    Sastre, Francisco; Benavides, Ana Laura; Torres-Arenas, José; Gil-Villegas, Alejandro

    2015-09-01

    In this work we extend the applicability of the microcanonical ensemble simulation method, originally proposed to study the Ising model [A. Hüller and M. Pleimling, Int. J. Mod. Phys. C 13, 947 (2002), 10.1142/S0129183102003693], to the case of simple fluids. An algorithm is developed by measuring the transition-rate probabilities between macroscopic states, which has the advantage over conventional Monte Carlo NVT (MC-NVT) simulations that a continuous range of temperatures is covered in a single run. For a given density, this new algorithm provides the inverse temperature, which can be parametrized as a function of the internal energy, and the isochoric heat capacity is then evaluated through a numerical derivative. As an illustrative example we consider a fluid composed of particles interacting via a square-well (SW) pair potential of variable range. Equilibrium internal energies and isochoric heat capacities are obtained with very high accuracy compared with data obtained from MC-NVT simulations. These results are important in the context of the application of the Hüller-Pleimling method to discrete-potential systems that are based on generalizations of the SW and square-shoulder fluid properties. PMID:26465582

  4. ENSEMBLE methods to reconcile disparate national long range dispersion forecasts

    DEFF Research Database (Denmark)

    Mikkelsen, Torben; Galmarini, S.; Bianconi, R.;

    2003-01-01

    and Web-based software evaluation and exchange tools have been created for real-time reconciliation and harmonisation of real-time dispersion forecasts from meteorological and emergency centres across Europe during an accident. The new ENSEMBLE software tools are available to participating national....... ENSEMBLE addresses the problem of achieving a common coherent strategy across European national emergency management when national long-range dispersion forecasts differ from one another during an accidental atmospheric release of radioactive material. A series of new decision-making “ENSEMBLE” procedures...

  5. Sparse canonical methods for biological data integration: application to a cross-platform study

    Directory of Open Access Journals (Sweden)

    Robert-Granié Christèle

    2009-01-01

    Full Text Available Abstract Background In the context of systems biology, few sparse approaches have been proposed so far to integrate several data sets. It is however an important and fundamental issue that will be widely encountered in post genomic studies, when simultaneously analyzing transcriptomics, proteomics and metabolomics data using different platforms, so as to understand the mutual interactions between the different data sets. In this high dimensional setting, variable selection is crucial to give interpretable results. We focus on a sparse Partial Least Squares approach (sPLS) to handle two-block data sets, where the relationship between the two types of variables is known to be symmetric. Sparse PLS has been developed either for a regression or a canonical correlation framework and includes a built-in procedure to select variables while integrating data. To illustrate the canonical mode approach, we analyzed the NCI60 data sets, where two different platforms (cDNA and Affymetrix chips) were used to study the transcriptome of sixty cancer cell lines. Results We compare the results obtained with two other sparse or related canonical correlation approaches: CCA with Elastic Net penalization (CCA-EN) and Co-Inertia Analysis (CIA). The latter does not include a built-in procedure for variable selection and requires a two-step analysis. We stress the lack of statistical criteria to evaluate canonical correlation methods, which makes biological interpretation absolutely necessary to compare the different gene selections. We also propose comprehensive graphical representations of both samples and variables to facilitate the interpretation of the results. Conclusion sPLS and CCA-EN selected highly relevant genes and complementary findings from the two data sets, which enabled a detailed understanding of the molecular characteristics of several groups of cell lines. These two approaches were found to bring similar results, although they highlighted the same

  6. Hybrid Levenberg-Marquardt and weak-constraint ensemble Kalman smoother method

    Science.gov (United States)

    Mandel, J.; Bergou, E.; Gürol, S.; Gratton, S.; Kasanický, I.

    2016-03-01

    The ensemble Kalman smoother (EnKS) is used as a linear least-squares solver in the Gauss-Newton method for the large nonlinear least-squares system in incremental 4DVAR. The ensemble approach is naturally parallel over the ensemble members and no tangent or adjoint operators are needed. Furthermore, adding a regularization term results in replacing the Gauss-Newton method, which may diverge, by the Levenberg-Marquardt method, which is known to be convergent. The regularization is implemented efficiently as an additional observation in the EnKS. The method is illustrated on the Lorenz 63 model and a two-level quasi-geostrophic model.

  7. ENSEMBLE methods to reconcile disparate national long range dispersion forecasting

    Energy Technology Data Exchange (ETDEWEB)

    Mikkelsen, T.; Galmarini, S.; Bianconi, R.; French, S. (eds.)

    2003-11-01

    ENSEMBLE is a web-based decision support system for real-time exchange and evaluation of national long-range dispersion forecasts of nuclear releases with cross-boundary consequences. The system is developed with the purpose of reconciling disparate national forecasts for long-range dispersion. ENSEMBLE addresses the problem of achieving a common coherent strategy across European national emergency management when national long-range dispersion forecasts differ from one another during an accidental atmospheric release of radioactive material. A series of new decision-making 'ENSEMBLE' procedures and Web-based software evaluation and exchange tools have been created for real-time reconciliation and harmonisation of real-time dispersion forecasts from meteorological and emergency centres across Europe during an accident. The new ENSEMBLE software tools are available to participating national emergency and meteorological forecasting centres, which may choose to integrate them directly into operational emergency information systems, or possibly use them as a basis for future system development. (au)

  8. ENSEMBLE methods to reconcile disparate national long range dispersion forecasting

    International Nuclear Information System (INIS)

    ENSEMBLE is a web-based decision support system for real-time exchange and evaluation of national long-range dispersion forecasts of nuclear releases with cross-boundary consequences. The system is developed with the purpose of reconciling disparate national forecasts for long-range dispersion. ENSEMBLE addresses the problem of achieving a common coherent strategy across European national emergency management when national long-range dispersion forecasts differ from one another during an accidental atmospheric release of radioactive material. A series of new decision-making 'ENSEMBLE' procedures and Web-based software evaluation and exchange tools have been created for real-time reconciliation and harmonisation of real-time dispersion forecasts from meteorological and emergency centres across Europe during an accident. The new ENSEMBLE software tools are available to participating national emergency and meteorological forecasting centres, which may choose to integrate them directly into operational emergency information systems, or possibly use them as a basis for future system development. (au)

  9. Development of a regional ensemble prediction method for probabilistic weather prediction

    International Nuclear Information System (INIS)

    A regional ensemble prediction method has been developed to provide probabilistic weather prediction using a numerical weather prediction model. To obtain perturbations consistent with the synoptic weather pattern, both initial and lateral boundary perturbations were given by differences between the control and ensemble members of the Japan Meteorological Agency (JMA)'s operational one-week ensemble forecast. The method provides multiple ensemble members with a horizontal resolution of 15 km for 48 hours, based on a downscaling of the JMA's operational global forecast accompanied by the perturbations. The ensemble prediction was examined for the heavy snowfall event in the Kanto area on January 14, 2013. The results showed that the predictions represent different features of the high-resolution spatiotemporal distribution of precipitation, affected by the intensity and location of the extra-tropical cyclone in each ensemble member. Although the ensemble prediction has model biases in the mean values and variances of some variables, such as wind speed and solar radiation, it has the potential to add probabilistic information to a deterministic prediction. (author)

  10. Constrained Canonical Correlation.

    Science.gov (United States)

    DeSarbo, Wayne S.; And Others

    1982-01-01

    A variety of problems associated with the interpretation of traditional canonical correlation are discussed. A response surface approach is developed which allows for investigation of changes in the coefficients while maintaining an optimum canonical correlation value. Also, a discrete or constrained canonical correlation method is presented. (JKS)

  11. Methods of Weyl representation of the phase space and canonical transformations

    International Nuclear Information System (INIS)

    The author studies nonlinear canonical transformations realized in the space of Weyl symbols of quantum operators. The kernels of the transformations, the symbol of the intertwining operator of the group of inhomogeneous point transformations, and the group characters are constructed. The group of PL transformations, which is the free product of the group of point, p, and linear, L, transformations, is considered. The simplest PL complexes relating problems with different potentials, in particular, containing a general Darboux transformation of the factorization method, are constructed. The kernel of an arbitrary element of the group PL is found.

  12. A canonical perturbation method for computing the guiding-center motion in magnetized axisymmetric plasma columns

    International Nuclear Information System (INIS)

    The motion of charged particles in a magnetized plasma column, such as that of a magnetic mirror trap or a tokamak, is determined in the framework of the canonical perturbation theory through a method of variation of constants which preserves the energy conservation and the symmetry invariance. The choice of a frame of coordinates close to that of the magnetic coordinates allows a relatively precise determination of the guiding-center motion with a low-ordered approximation in the adiabatic parameter. A Hamiltonian formulation of the motion equations is obtained

  13. A comparison of ensemble post-processing methods for extreme events

    Science.gov (United States)

    Williams, Robin; Ferro, Chris; Kwasniok, Frank

    2015-04-01

    Ensemble post-processing methods are used in operational weather forecasting to form probability distributions that represent forecast uncertainty. Several such methods have been proposed in the literature, including logistic regression, ensemble dressing, Bayesian model averaging and non-homogeneous Gaussian regression. We conduct an imperfect model experiment with the Lorenz 1996 model to investigate the performance of these methods, especially when forecasting the occurrence of rare extreme events. We show how flexible bias-correction schemes can be incorporated into these post-processing methods, and that allowing the bias correction to depend on the ensemble mean can yield considerable improvements in skill when forecasting extreme events. In the Lorenz 1996 setting, we find that ensemble dressing, Bayesian model averaging and non-homogeneous Gaussian regression perform similarly, while logistic regression performs less well.
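    One of the post-processing methods named above, logistic regression with predictors built from the ensemble, can be sketched as follows. The choice of ensemble mean and spread as predictors is an assumption for illustration; the ensemble-mean-dependent bias correction and the dressing, BMA and NGR variants studied in the record are not reproduced.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_exceedance_model(ens_forecasts, obs, threshold):
    """Logistic-regression post-processing: P(obs > threshold) predicted from the
    ensemble mean and spread. ens_forecasts has shape (n_cases, n_members)."""
    features = np.column_stack([ens_forecasts.mean(axis=1), ens_forecasts.std(axis=1)])
    target = (obs > threshold).astype(int)   # training data must contain both classes
    return LogisticRegression().fit(features, target)

def predict_exceedance(model, ens_forecast):
    """Probability that the verifying observation exceeds the threshold."""
    x = np.array([[ens_forecast.mean(), ens_forecast.std()]])
    return model.predict_proba(x)[0, 1]
```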

  14. Application of the Multimodel Ensemble Kalman Filter Method in Groundwater System

    OpenAIRE

    Liang Xue

    2015-01-01

    With the development of in-situ monitoring techniques, the ensemble Kalman filter (EnKF) has become a popular data assimilation method due to its capability to jointly update model parameters and state variables in a sequential way, and to assess the uncertainty associated with estimation and prediction. To take the conceptual model uncertainty into account during the data assimilation process, a novel multimodel ensemble Kalman filter method has been proposed by incorporating the standard En...

  15. A multi-model ensemble method that combines imperfect models through learning

    OpenAIRE

    Berge, L.A.; F. M. Selten; Wiegerinck, W.; Duane, G. S.

    2010-01-01

    In the current multi-model ensemble approach climate model simulations are combined a posteriori. In the method of this study the models in the ensemble exchange information during simulations and learn from historical observations to combine their strengths into a best representation of the observed climate. The method is developed and tested in the context of small chaotic dynamical systems, like the Lorenz 63 system. Imperfect models are created by perturbing the standard parameter ...

  16. A Simple Bayesian Climate Index Weighting Method for Seasonal Ensemble Forecasting

    Science.gov (United States)

    Bradley, A.; Habib, M. A.; Schwartz, S. S.

    2014-12-01

    Climate information — in the form of a measure of climate state or a climate forecast — can be an important predictor of future hydrologic conditions. For instance, streamflow variability for many locations around the globe is related to large-scale atmospheric oscillations, like the El Nino Southern Oscillation (ENSO) or the Pacific/Decadal Oscillation (PDO). Furthermore, climate forecast models are growing more skillful in their predictions of future climate variables on seasonal time scales. Finding effective ways to translate this climate information into improved hydrometeorological predictions is an area of ongoing research. In ensemble streamflow forecasting, where historical weather inputs or streamflow observations are used to generate the ensemble, climate index weighting is one way to represent the influence of current climate information. Using a climate index, each forecast variable member of the ensemble is selectively weighted to reflect climate conditions at the time of the forecast. A simple Bayesian climate index weighting of ensemble forecasts is presented. The original hydrologic ensemble members define a sample of the prior distribution; the relationship between the climate index and the ensemble member forecast variable is used to estimate a likelihood function. Given an observation of the climate index at the time of the forecast, the estimated likelihood function is then used to assign weights to each ensemble member. The weighted ensemble forecast is then used to estimate the posterior distribution of the forecast variable conditioned on the climate index. The proposed approach has several advantages over traditional climate index weighting methods. The weights assigned to the ensemble members accomplish the updating of the (prior) ensemble forecast distribution based on Bayes' Theorem, so the method is theoretically sound. The method also automatically adapts to the strength of the relationship between the climate index and the
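    Although the record is truncated, the weighting idea it describes can be sketched as follows: each historical ensemble member is weighted by the likelihood of the observed climate index, here modelled with a simple Gaussian kernel. The kernel form and bandwidth are assumptions for illustration; the likelihood estimation in the proposed method may differ.

```python
import numpy as np

def climate_index_weights(member_index_values, observed_index, bandwidth):
    """Weight each historical ensemble member by the likelihood of the observed
    climate index under a Gaussian kernel centred on that member's index value."""
    z = (observed_index - np.asarray(member_index_values, dtype=float)) / bandwidth
    w = np.exp(-0.5 * z**2)
    return w / w.sum()

def weighted_exceedance_probability(member_flows, weights, q):
    """Posterior (weighted-ensemble) probability that the forecast variable exceeds q."""
    member_flows = np.asarray(member_flows, dtype=float)
    return np.asarray(weights)[member_flows > q].sum()
```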

  17. Evaluation of the thermodynamics of a four level system using canonical density matrix method

    Directory of Open Access Journals (Sweden)

    Awoga Oladunjoye A.

    2013-02-01

    Full Text Available We consider a four-level system with two subsystems coupled by a weak interaction. The system is in thermal equilibrium. The thermodynamics of the system, namely the internal energy, free energy, entropy and heat capacity, are evaluated using the canonical density matrix by two methods: first by the Kronecker product method, and then by treating the subsystems separately and adding the evaluated thermodynamic properties of each subsystem. It is found that both methods yield the same result, that the results obey the laws of thermodynamics, and that they agree with earlier results. The results also show that each level of the subsystems introduces a new degree of freedom and increases the entropy of the entire system. We also find that the four-level system predicts a linear relationship between heat capacity and temperature at very low temperatures, just as in metals. Our numerical results show the same trend.
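    The evaluation described above can be reproduced in miniature for any set of discrete energy levels. The sketch below builds four levels from two hypothetical two-level subsystems (a Kronecker-sum construction) and evaluates U, F, S and C from the canonical distribution with k_B = 1; the level spacings are illustrative, not those of the paper.

```python
import numpy as np

def canonical_thermodynamics(levels, T, k_B=1.0):
    """Internal energy, Helmholtz free energy, entropy and heat capacity of a
    system with the given discrete energy levels at temperature T."""
    levels = np.asarray(levels, dtype=float)
    beta = 1.0 / (k_B * T)
    w = np.exp(-beta * (levels - levels.min()))       # shifted for numerical stability
    Z = w.sum()
    p = w / Z
    U = np.sum(p * levels)
    F = levels.min() - k_B * T * np.log(Z)
    S = (U - F) / T
    C = k_B * beta**2 * (np.sum(p * levels**2) - U**2)   # fluctuation formula
    return U, F, S, C

# four levels built from two hypothetical two-level subsystems (Kronecker sum)
e1, e2 = np.array([0.0, 1.0]), np.array([0.0, 1.5])
levels = (e1[:, None] + e2[None, :]).ravel()
print(canonical_thermodynamics(levels, T=0.5))
```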

  18. On the Relativistic Micro-Canonical Ensemble and Relativistic Kinetic Theory for N Relativistic Particles in Inertial and Non-Inertial Rest Frames

    OpenAIRE

    Alba, David; Crater, Horace W.; Lusanna, Luca

    2012-01-01

    A new formulation of relativistic classical mechanics allows a revisiting of old unsolved problems in relativistic kinetic theory and in relativistic statistical mechanics. In particular a definition of the relativistic micro-canonical partition function is given strictly in terms of the Poincaré generators of an interacting N-particle system both in the inertial and non-inertial rest frames. The non-relativistic limit allows a definition of both the inertial and non-inertial micro-canonica...

  19. Hybrid Modeling of Flotation Height in Air Flotation Oven Based on Selective Bagging Ensemble Method

    Directory of Open Access Journals (Sweden)

    Shuai Hou

    2013-01-01

    Full Text Available Accurate prediction of the flotation height is necessary for precise control of the air flotation oven process, thereby avoiding scratches and improving production quality. In this paper, a hybrid flotation height prediction model is developed. First, a simplified mechanism model is introduced to capture the main dynamic behavior of the process. Thereafter, to compensate for the modeling errors between the actual system and the mechanism model, an error compensation model based on the proposed selective bagging ensemble method is established to boost prediction accuracy. In the framework of the selective bagging ensemble method, negative correlation learning and a genetic algorithm are imposed on the bagging ensemble method to promote cooperation between base learners. As a result, a subset of base learners can be selected from the original bagging ensemble to compose a selective bagging ensemble that outperforms the original one in prediction accuracy with a compact ensemble size. Simulation results indicate that the proposed hybrid model gives better prediction performance for the flotation height than the other algorithms.

  20. Clustered iterative stochastic ensemble method for multi-modal calibration of subsurface flow models

    KAUST Repository

    Elsheikh, Ahmed H.

    2013-05-01

    A novel multi-modal parameter estimation algorithm is introduced. Parameter estimation is an ill-posed inverse problem that might admit many different solutions. This is attributed to the limited amount of measured data used to constrain the inverse problem. The proposed multi-modal model calibration algorithm uses an iterative stochastic ensemble method (ISEM) for parameter estimation. ISEM employs an ensemble of directional derivatives within a Gauss-Newton iteration for nonlinear parameter estimation. ISEM is augmented with a clustering step based on k-means algorithm to form sub-ensembles. These sub-ensembles are used to explore different parts of the search space. Clusters are updated at regular intervals of the algorithm to allow merging of close clusters approaching the same local minima. Numerical testing demonstrates the potential of the proposed algorithm in dealing with multi-modal nonlinear parameter estimation for subsurface flow models. © 2013 Elsevier B.V.
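    The clustering step can be illustrated with k-means from scikit-learn. This minimal sketch only partitions the parameter ensemble into sub-ensembles; the ISEM Gauss-Newton update and the periodic merging of close clusters are left out, and all names and sizes are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

def split_into_subensembles(param_ensemble, n_clusters):
    """Partition an ensemble of parameter vectors (n_members, n_params) into
    sub-ensembles with k-means; each sub-ensemble would then explore its own
    part of the search space in the iterative stochastic ensemble method."""
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(param_ensemble)
    return [param_ensemble[labels == k] for k in range(n_clusters)]

# toy usage: a 60-member ensemble of 8-dimensional parameter vectors
sub_ensembles = split_into_subensembles(np.random.randn(60, 8), n_clusters=3)
```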

  1. A Novel Bias Correction Method for Soil Moisture and Ocean Salinity (SMOS) Soil Moisture: Retrieval Ensembles

    Directory of Open Access Journals (Sweden)

    Ju Hyoung Lee

    2015-12-01

    Full Text Available Bias correction is a very important pre-processing step in satellite data assimilation analysis, as data assimilation itself cannot circumvent satellite biases. We introduce a retrieval algorithm-specific and spatially heterogeneous Instantaneous Field of View (IFOV) bias correction method for Soil Moisture and Ocean Salinity (SMOS) soil moisture. To the best of our knowledge, this is the first paper to present the probabilistic presentation of SMOS soil moisture using retrieval ensembles. We illustrate that retrieval ensembles effectively mitigated the overestimation problem of SMOS soil moisture arising from brightness temperature errors over West Africa in a computationally efficient way (ensemble size: 12, no time-integration). In contrast, the existing method of Cumulative Distribution Function (CDF) matching considerably increased the SMOS biases, due to the limitations of relying on the imperfect reference data. From the validation at two semi-arid sites, Benin (moderately wet and vegetated area) and Niger (dry and sandy bare soils), it was shown that the SMOS errors arising from rain and vegetation attenuation were appropriately corrected by ensemble approaches. In Benin, the Root Mean Square Errors (RMSEs) decreased from 0.1248 m3/m3 for CDF matching to 0.0678 m3/m3 for the proposed ensemble approach. In Niger, the RMSEs decreased from 0.14 m3/m3 for CDF matching to 0.045 m3/m3 for the ensemble approach.
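For context, here is a minimal sketch of the CDF-matching baseline that the retrieval-ensemble approach is compared against (the ensemble approach itself requires the SMOS retrieval chain and is not reproduced). The soil moisture series below are synthetic placeholders.

```python
# Sketch of CDF-matching bias correction: satellite soil moisture values are
# mapped onto the distribution of a reference series via quantile matching.
import numpy as np

def cdf_match(satellite, reference):
    """Map each satellite value to the reference value of equal empirical quantile."""
    sat_sorted = np.sort(satellite)
    ref_sorted = np.sort(reference)
    # empirical quantile of each satellite observation within its own record
    quantiles = np.searchsorted(sat_sorted, satellite, side="right") / len(sat_sorted)
    return np.quantile(ref_sorted, np.clip(quantiles, 0.0, 1.0))

rng = np.random.default_rng(1)
reference = np.clip(rng.normal(0.20, 0.05, 500), 0.0, 0.5)   # e.g. modelled soil moisture (m3/m3)
satellite = np.clip(rng.normal(0.28, 0.08, 500), 0.0, 0.5)   # biased retrievals (synthetic)
corrected = cdf_match(satellite, reference)
print(round(satellite.mean(), 3), round(corrected.mean(), 3), round(reference.mean(), 3))
```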

  2. A Bayes fusion method based ensemble classification approach for Brown cloud application

    Directory of Open Access Journals (Sweden)

    M.Krishnaveni

    2014-03-01

    Full Text Available Classification is the recurring task of determining a target function that maps each attribute set to one of the predefined class labels. Ensemble fusion is a classifier-fusion technique that combines multiple classifiers to achieve higher classification accuracy than the individual classifiers. The main objective of this paper is to combine base classifiers using the ensemble fusion methods Decision Template, Dempster-Shafer and Bayes, and to compare the accuracy of each fusion method on the brown cloud dataset. The base classifiers KNN, MLP and SVM are considered in the ensemble classification, each with four different function parameters. The experimental study shows that the Bayes fusion method achieves a better classification accuracy of 95% than Decision Template (80%) and Dempster-Shafer (85%) on the Brown Cloud image dataset.
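A toy illustration of two fusion rules related to those named above, majority voting and a naive-Bayes-style product of posteriors, for a single sample; the posterior values are made up, and the Decision Template and Dempster-Shafer rules are not shown.

```python
# Minimal sketch of two classifier-fusion rules: majority voting of crisp labels
# and a naive-Bayes-style product of per-class posteriors (synthetic posteriors).
import numpy as np

# rows: base classifiers (e.g. KNN, MLP, SVM); columns: class posteriors
posteriors = np.array([[0.6, 0.3, 0.1],
                       [0.5, 0.4, 0.1],
                       [0.2, 0.7, 0.1]])

# Majority voting on the crisp decisions of each base classifier
votes = np.argmax(posteriors, axis=1)
majority = np.bincount(votes, minlength=posteriors.shape[1]).argmax()

# Bayes (product) fusion: multiply posteriors class-wise and renormalise
product = posteriors.prod(axis=0)
bayes = product / product.sum()

print("majority vote:", majority)                       # class 0 (two of three classifiers)
print("Bayes fusion :", bayes.argmax(), np.round(bayes, 3))  # can disagree with the vote
```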

  3. A Synergy Method to Improve Ensemble Weather Predictions and Differential SAR Interferograms

    Science.gov (United States)

    Ulmer, Franz-Georg; Adam, Nico

    2015-11-01

    A compensation of atmospheric effects is essential for mm-sensitivity in differential interferometric synthetic aperture radar (DInSAR) techniques. Numerical weather predictions are used to compensate for these disturbances, allowing a reduction in the number of required radar scenes. Practically, predictions are solutions of partial differential equations which can never be precise due to model or initialisation uncertainties. In order to deal with the chaotic nature of the solutions, ensembles of predictions are computed. From a stochastic point of view, the ensemble mean is the expected prediction, if all ensemble members are equally likely. This corresponds to the typical assumption that all ensemble members are physically correct solutions of the set of partial differential equations. DInSAR can add to this knowledge. Observations of refractivity can now be utilised to check the likelihood of a solution and to weight the respective ensemble member, so as to estimate a better expected prediction. The objective of the paper is to show the synergy between ensemble weather predictions and differential interferometric atmospheric correction. We demonstrate a new method, first, to compensate better for the atmospheric effect in DInSAR and, second, to estimate an improved numerical weather prediction (NWP) ensemble mean. Practically, a least squares fit of predicted atmospheric effects with respect to a differential interferogram is computed. The coefficients of this fit are interpreted as likelihoods and used as weights for the weighted ensemble mean. Finally, the derived weighted prediction has minimal expected quadratic error, which makes it a better solution than the straightforward best-fitting ensemble member. Furthermore, we propose an extension of the algorithm which avoids the systematic bias caused by deformations. It makes this technique suitable for time series analysis, e.g. persistent scatterer interferometry (PSI). We validate the algorithm using the well known
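A rough numerical sketch of the weighting step under stated assumptions: the "interferogram" and the member "delay maps" are synthetic arrays, and the least-squares coefficients are clipped to be non-negative before normalisation, which is one simple choice rather than the authors' exact procedure.

```python
# Fit the observed atmospheric signal as a linear combination of the delays
# predicted by each ensemble member; reuse the normalised coefficients as
# weights for a weighted ensemble mean. Synthetic data throughout.
import numpy as np

rng = np.random.default_rng(2)
n_pixels, n_members = 2000, 8
truth = rng.normal(size=n_pixels)                           # "true" atmospheric delay
members = truth + rng.normal(0.0, np.linspace(0.2, 1.5, n_members)[:, None],
                             size=(n_members, n_pixels))    # members of varying quality
observed = truth + 0.05 * rng.normal(size=n_pixels)         # differential interferogram signal

# Least-squares fit of the observation onto the ensemble members
coeffs, *_ = np.linalg.lstsq(members.T, observed, rcond=None)
weights = np.clip(coeffs, 0.0, None)
weights /= weights.sum()

weighted_mean = weights @ members
plain_mean = members.mean(axis=0)
print("RMSE plain mean   :", np.sqrt(np.mean((plain_mean - truth) ** 2)).round(3))
print("RMSE weighted mean:", np.sqrt(np.mean((weighted_mean - truth) ** 2)).round(3))
```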

  4. Ensemble-trained source apportionment of fine particulate matter and method uncertainty analysis

    Science.gov (United States)

    Balachandran, Sivaraman; Pachon, Jorge E.; Hu, Yongtao; Lee, Dongho; Mulholland, James A.; Russell, Armistead G.

    2012-12-01

    An ensemble-based approach is applied to better estimate source impacts on fine particulate matter (PM2.5) and quantify uncertainties in various source apportionment (SA) methods. The approach combines source impacts from applications of four individual SA methods: three receptor-based models and one chemical transport model (CTM). Receptor models used are the chemical mass balance methods CMB-LGO (Chemical Mass Balance-Lipschitz global optimizer) and CMB-MM (molecular markers) as well as a factor analytic method, Positive Matrix Factorization (PMF). The CTM used is the Community Multiscale Air Quality (CMAQ) model. New source impact estimates and uncertainties in these estimates are calculated in a two-step process. First, an ensemble average is calculated for each source category using results from applying the four individual SA methods. The root mean square error (RMSE) of each method with respect to the average is calculated for each source category; the RMSE is then taken to be the updated uncertainty for each individual SA method. Second, these new uncertainties are used to re-estimate ensemble source impacts and uncertainties. The approach is applied to data from daily PM2.5 measurements at the Atlanta, GA, Jefferson Street (JST) site in July 2001 and January 2002. The procedure provides updated uncertainties for the individual SA methods that are calculated in a consistent way across methods. Overall, the ensemble has lower relative uncertainties as compared to the individual SA methods. Calculated CMB-LGO uncertainties tend to decrease from initial estimates, while PMF and CMB-MM uncertainties increase. Estimated CMAQ source impact uncertainties are comparable to those of other SA methods for gasoline vehicles and SOC but are larger than other methods for other sources. In addition to providing improved estimates of source impact uncertainties, the ensemble estimates do not have unrealistic extremes as compared to individual SA methods and avoid zero impact
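The two-step procedure can be caricatured with made-up source impact estimates from four methods; the gamma-distributed "truth" and the per-method noise levels are purely illustrative.

```python
# Sketch of the two-step ensemble averaging: (1) simple average, per-method RMSE
# against that average becomes the updated uncertainty; (2) inverse-variance
# weighted re-estimate of the ensemble source impacts.
import numpy as np

rng = np.random.default_rng(3)
n_days = 30
truth = rng.gamma(2.0, 1.5, n_days)                       # hypothetical source impacts (ug/m3)
methods = {name: truth + rng.normal(0, s, n_days)
           for name, s in [("CMB-LGO", 0.4), ("CMB-MM", 0.8), ("PMF", 0.6), ("CMAQ", 1.0)]}
est = np.array(list(methods.values()))                    # shape (4, n_days)

# Step 1: ensemble average and updated per-method uncertainties (RMSE vs. average)
avg1 = est.mean(axis=0)
rmse = np.sqrt(np.mean((est - avg1) ** 2, axis=1))

# Step 2: re-estimate the ensemble using inverse-variance weights from step 1
w = 1.0 / rmse**2
avg2 = (w[:, None] * est).sum(axis=0) / w.sum()
sigma2 = np.sqrt(1.0 / w.sum())                           # uncertainty of the re-estimate

for name, r in zip(methods, rmse):
    print(f"{name:8s} updated uncertainty = {r:.2f}")
print("ensemble uncertainty =", round(float(sigma2), 2))
```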

  5. A regularizing iterative ensemble Kalman method for PDE-constrained inverse problems

    Science.gov (United States)

    Iglesias, Marco A.

    2016-02-01

    We introduce a derivative-free computational framework for approximating solutions to nonlinear PDE-constrained inverse problems. The general aim is to merge ideas from iterative regularization with ensemble Kalman methods from Bayesian inference to develop a derivative-free stable method that is easy to implement in applications where the PDE (forward) model is only accessible as a black box (e.g. with commercial software). The proposed regularizing ensemble Kalman method can be derived as an approximation of the regularizing Levenberg-Marquardt (LM) scheme (Hanke 1997 Inverse Problems 13 79-95) in which the derivative of the forward operator and its adjoint are replaced with empirical covariances from an ensemble of elements from the admissible space of solutions. The resulting ensemble method consists of an update formula that is applied to each ensemble member and that has a regularization parameter selected in a similar fashion to the one in the LM scheme. Moreover, an early termination of the scheme is proposed according to a discrepancy principle-type criterion. The proposed method can also be viewed as a regularizing version of standard Kalman approaches which are often unstable unless ad hoc fixes, such as covariance localization, are implemented. The aim of this paper is to provide a detailed numerical investigation of the regularizing and convergence properties of the proposed regularizing ensemble Kalman scheme; the proof of these properties is an open problem. By means of numerical experiments, we investigate the conditions under which the proposed method inherits the regularizing properties of the LM scheme (Hanke 1997 Inverse Problems 13 79-95) and is thus stable and suitable for its application in problems where the computation of the Fréchet derivative is not computationally feasible. More concretely, we study the effect of ensemble size, number of measurements, selection of initial ensemble and tunable parameters on the performance of the method
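A compact numerical sketch of one such regularized ensemble Kalman iteration for a linear toy problem; the forward model, ensemble size, and the fixed regularization parameter alpha are assumptions for illustration, and the discrepancy-principle stopping rule is not implemented.

```python
# Regularizing ensemble Kalman iteration for y = G u + noise: empirical
# covariances replace derivatives of G, and alpha damps the update in a
# Levenberg-Marquardt-like fashion (here fixed rather than adaptively chosen).
import numpy as np

rng = np.random.default_rng(4)
n_u, n_y, J = 20, 10, 50                    # unknowns, measurements, ensemble size
G = rng.normal(size=(n_y, n_u))             # black-box forward model (linear toy case)
u_true = rng.normal(size=n_u)
Gamma = 0.05**2 * np.eye(n_y)               # observation noise covariance
y = G @ u_true + rng.multivariate_normal(np.zeros(n_y), Gamma)

U = rng.normal(size=(n_u, J))               # initial ensemble of candidate solutions
alpha = 1.0                                 # regularization parameter (fixed for the sketch)
for _ in range(10):
    W = G @ U                               # forward-map the ensemble (black-box calls)
    u_bar, w_bar = U.mean(axis=1, keepdims=True), W.mean(axis=1, keepdims=True)
    C_uw = (U - u_bar) @ (W - w_bar).T / (J - 1)
    C_ww = (W - w_bar) @ (W - w_bar).T / (J - 1)
    K = C_uw @ np.linalg.solve(C_ww + alpha * Gamma, np.eye(n_y))
    U = U + K @ (y[:, None] - W)            # update every ensemble member

rel_err = np.linalg.norm(U.mean(axis=1) - u_true) / np.linalg.norm(u_true)
print("relative error of ensemble mean:", round(float(rel_err), 3))
```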

  6. Stochastic dynamics simulations in a new generalized ensemble

    CERN Document Server

    Hansmann, Uwe H E; Okamoto, Y; Hansmann, Ulrich H.E.; Eisenmenger, Frank; Okamoto, Yuko

    1998-01-01

    We develop a formulation for molecular dynamics, Langevin, and hybrid Monte Carlo algorithms in the recently proposed generalized ensemble that is based on a physically motivated realisation of Tsallis weights. The effectiveness of the methods is tested with an energy function for a protein system. Simulations in this generalized ensemble by the three methods are performed for a pentapeptide, Met-enkephalin. For each algorithm, it is shown that from only one simulation run one can not only find the global-minimum-energy conformation but also obtain probability distributions in the canonical ensemble at any temperature, which allows the calculation of any thermodynamic quantity as a function of temperature.

  7. Method to detect gravitational waves from an ensemble of known pulsars

    CERN Document Server

    Fan, Xilong; Messenger, Christopher

    2016-01-01

    Combining information from weak sources, such as known pulsars, for gravitational wave detection, is an attractive approach to improve detection efficiency. We propose an optimal statistic for a general ensemble of signals and apply it to an ensemble of known pulsars. Our method combines $\mathcal{F}$-statistic values from individual pulsars using weights proportional to each pulsar's expected optimal signal-to-noise ratio to improve the detection efficiency. We also point out that to detect at least one pulsar within an ensemble, different thresholds should be designed for each source based on the expected signal strength. The performance of our proposed detection statistic is demonstrated using simulated sources, with the assumption that all pulsars' ellipticities belong to a common (yet unknown) distribution. Comparing with an equal-weight strategy and with individual source approaches, we show that the weighted-combination of all known pulsars, where weights are assigned based on the pulsars' known informa...

  8. An iterative stochastic ensemble method for parameter estimation of subsurface flow models

    KAUST Repository

    Elsheikh, Ahmed H.

    2013-06-01

    Parameter estimation for subsurface flow models is an essential step for maximizing the value of numerical simulations for future prediction and the development of effective control strategies. We propose the iterative stochastic ensemble method (ISEM) as a general method for parameter estimation based on stochastic estimation of gradients using an ensemble of directional derivatives. ISEM eliminates the need for adjoint coding and deals with the numerical simulator as a black box. The proposed method employs directional derivatives within a Gauss-Newton iteration. The update equation in ISEM resembles the update step in the ensemble Kalman filter; however, the inverse of the output covariance matrix in ISEM is regularized using standard truncated singular value decomposition or Tikhonov regularization. We also investigate the performance of a set of shrinkage based covariance estimators within ISEM. The proposed method is successfully applied on several nonlinear parameter estimation problems for subsurface flow models. The efficiency of the proposed algorithm is demonstrated by the small size of utilized ensembles and in terms of error convergence rates. © 2013 Elsevier Inc.

  9. An Introduction to Ensemble Methods for Data Analysis (Revised July, 2004)

    OpenAIRE

    Berk, Richard

    2004-01-01

    This paper provides an introduction to ensemble statistical procedures as a special case of algorithmic methods. The discussion begins with classification and regression trees (CART) as a didactic device to introduce many of the key issues. Following the material on CART is a consideration of cross-validation, bagging, random forests and boosting. Major points are illustrated with analyses of real data.

  10. Linear interpolation method in ensemble Kohn-Sham and range-separated density-functional approximations for excited states

    CERN Document Server

    Senjean, Bruno; Jensen, Hans Jørgen Aa; Fromager, Emmanuel

    2015-01-01

    The computation of excitation energies in range-separated ensemble density-functional theory (DFT) is discussed. The latter approach is appealing as it enables the rigorous formulation of a multi-determinant state-averaged DFT method. In the exact theory, the short-range density functional, that complements the long-range wavefunction-based ensemble energy contribution, should vary with the ensemble weights even when the density is held fixed. This weight dependence ensures that the range-separated ensemble energy varies linearly with the ensemble weights. When the (weight-independent) ground-state short-range exchange-correlation functional is used in this context, curvature appears thus leading to an approximate weight-dependent excitation energy. In order to obtain unambiguous approximate excitation energies, we simply propose to interpolate linearly the ensemble energy between equiensembles. It is shown that such a linear interpolation method (LIM) effectively introduces weight dependence effects. LIM has...

  11. Identifying a robust method to build RCMs ensemble as climate forcing for hydrological impact models

    Science.gov (United States)

    Olmos Giménez, P.; García Galiano, S. G.; Giraldo-Osorio, J. D.

    2016-06-01

    Regional climate models (RCMs) improve the understanding of climate mechanisms and are often used as climate forcing for hydrological impact models. Rainfall is the principal input to the water cycle, so special attention should be paid to its accurate estimation. However, climate change projections of rainfall events exhibit great divergence between RCMs. As a consequence, rainfall projections, and the estimation of their uncertainties, are better based on the combination of the information provided by an ensemble of different RCM simulations. Taking into account the rainfall variability provided by different RCMs, the aim of this work is to evaluate the performance of two novel approaches, based on the reliability ensemble averaging (REA) method, for building RCM ensembles of monthly precipitation over Spain. The proposed methodologies are based on probability density functions (PDFs) and consider the variability at different levels of information: on the one hand annual and seasonal rainfall, and on the other hand monthly rainfall. The sensitivity of the proposed approaches to two metrics for identifying the best ensemble-building method is evaluated. A plausible future rainfall scenario for 2021-2050 over Spain, based on the more robust method, is identified. As a result, the rainfall projections are improved and the associated uncertainties are decreased, so as to drive hydrological impact models and thereby reduce the cumulative errors in the modeling chain.

  12. Monthly water balance modeling: Probabilistic, possibilistic and hybrid methods for model combination and ensemble simulation

    Science.gov (United States)

    Nasseri, M.; Zahraie, B.; Ajami, N. K.; Solomatine, D. P.

    2014-04-01

    Multi-model (ensemble, or committee) techniques have been shown to be an effective way to improve hydrological prediction performance and provide uncertainty information. This paper presents two novel multi-model ensemble techniques, one probabilistic, the Modified Bootstrap Ensemble Model (MBEM), and one possibilistic, the FUzzy C-means Ensemble based on data Pattern (FUCEP). The paper also explores utilization of the Ordinary Kriging (OK) method as a multi-model combination scheme for hydrological simulation/prediction. These techniques are compared against Bayesian Model Averaging (BMA) and Weighted Average (WA) methods to demonstrate their effectiveness. The techniques are applied to three monthly water balance models used to generate streamflow simulations for two mountainous basins in southwestern Iran. For both basins, the results demonstrate that MBEM and FUCEP generate more skillful and reliable probabilistic predictions, outperforming all the other techniques. We also found that OK did not demonstrate any improved skill as a simple combination method over the WA scheme for either of the basins.

  13. Application of the Multimodel Ensemble Kalman Filter Method in Groundwater System

    Directory of Open Access Journals (Sweden)

    Liang Xue

    2015-02-01

    Full Text Available With the development of in-situ monitoring techniques, the ensemble Kalman filter (EnKF) has become a popular data assimilation method due to its capability to jointly update model parameters and state variables in a sequential way, and to assess the uncertainty associated with estimation and prediction. To take conceptual model uncertainty into account during the data assimilation process, a novel multimodel ensemble Kalman filter method has been proposed by incorporating the standard EnKF into a Bayesian model averaging framework. In this paper, this method is applied to analyze the dataset obtained from the Hailiutu River Basin located in the northwest part of China. Multiple conceptual models are created by considering two important factors that control groundwater dynamics in semi-arid areas: the zonation pattern of the hydraulic conductivity field and the relationship between evapotranspiration and groundwater level. The results show that the posterior model weights of the postulated models can be dynamically adjusted according to the mismatch between the measurements and the ensemble predictions, and that the multimodel ensemble estimation and the corresponding uncertainty can be quantified.

  14. Canonical and grand canonical theory of spinodal instabilities

    International Nuclear Information System (INIS)

    In the context of the mean field approximation to the Landau-Ginzburg-Wilson functional integral, describing the equilibrium properties of a system with a conserved order parameter, the conditions for critical instabilities in the canonical ensemble are analysed. (A.C.A.S.)

  15. EXPERIMENTS OF ENSEMBLE FORECAST OF TYPHOON TRACK USING BDA PERTURBING METHOD

    Institute of Scientific and Technical Information of China (English)

    HUANG Yan-yan; WAN Qi-lin; YUAN Jin-nan; DING Wei-yu

    2006-01-01

    A new method, BDA perturbing, is used in ensemble forecasting of typhoon tracks. The method is based on the Bogus Data Assimilation (BDA) scheme. It perturbs the initial position and intensity of typhoons to obtain a series of bogus vortices. Each bogus vortex is then used in data assimilation to obtain initial conditions, and ensemble forecast members are constructed by running simulations from these initial conditions. Several typhoon cases are chosen to test the validity of the new method, and the results show that using the BDA perturbing method to perturb the initial position and intensity of a typhoon improves track forecast accuracy compared with the direct use of the BDA assimilation scheme. It is also concluded that a perturbing amplitude of 5 hPa for the intensity is probably more appropriate than 10 hPa if the BDA perturbing method is used in combination with initial position perturbation.

  16. An Introduction to Ensemble Methods for Data Analysis

    OpenAIRE

    Berk, Richard A.

    2011-01-01

    There are a growing number of new statistical procedures Leo Breiman (2001b) has called "algorithmic". Coming from work primarily in statistics, applied mathematics, and computer science, these techniques are sometimes linked to "data mining", "machine learning", and "statistical learning". A key idea behind algorithmic methods is that there is no statistical model in the usual sense; no effort is made to represent how the data were generated. And no apologies are made for the absence of a mo...

  17. A New Ensemble Method with Feature Space Partitioning for High-Dimensional Data Classification

    Directory of Open Access Journals (Sweden)

    Yongjun Piao

    2015-01-01

    Full Text Available Ensemble data mining methods, also known as classifier combination, are often used to improve the performance of classification. Various classifier combination methods such as bagging, boosting, and random forest have been devised and have received considerable attention in the past. However, data dimensionality is increasing rapidly, and this trend poses various challenges because such methods are not suited to direct application to high-dimensional datasets. In this paper, we propose an ensemble method for the classification of high-dimensional data, with each classifier constructed from a different set of features determined by a partitioning of the redundant features. In our method, the redundancy of features is used to divide the original feature space. Each generated feature subset is then trained by a support vector machine, and the results of the individual classifiers are combined by majority voting. The efficiency and effectiveness of our method are demonstrated through comparisons with other ensemble techniques, and the results show that our method outperforms the other methods.
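A simplified sketch of the partition-and-vote idea on synthetic data: here the features are split into random disjoint groups as a stand-in for the paper's redundancy-based partitioning, one SVM is trained per group, and predictions are combined by majority voting.

```python
# Partition the feature space, train one SVM per partition, combine by majority vote.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=600, n_features=60, n_informative=15, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Random disjoint feature groups (stand-in for the redundancy-based partition)
k = 5
rng = np.random.default_rng(0)
groups = np.array_split(rng.permutation(X.shape[1]), k)

models = [SVC(kernel="rbf", gamma="scale").fit(X_tr[:, g], y_tr) for g in groups]
votes = np.array([m.predict(X_te[:, g]) for m, g in zip(models, groups)])
ensemble_pred = (votes.mean(axis=0) > 0.5).astype(int)   # majority vote (0/1 labels, odd k)

single = SVC(kernel="rbf", gamma="scale").fit(X_tr, y_tr)
print("single SVM accuracy :", round((single.predict(X_te) == y_te).mean(), 3))
print("partitioned ensemble:", round((ensemble_pred == y_te).mean(), 3))
```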

  18. A Numerical Comparison of Rule Ensemble Methods and Support Vector Machines

    Energy Technology Data Exchange (ETDEWEB)

    Meza, Juan C.; Woods, Mark

    2009-12-18

    Machine or statistical learning is a growing field that encompasses many scientific problems including estimating parameters from data, identifying risk factors in health studies, image recognition, and finding clusters within datasets, to name just a few examples. Statistical learning can be described as 'learning from data', with the goal of making a prediction of some outcome of interest. This prediction is usually made on the basis of a computer model that is built using data where the outcomes and a set of features have been previously matched. The computer model is called a learner, hence the name machine learning. In this paper, we present two such algorithms, a support vector machine method and a rule ensemble method. We compared their predictive power on three type Ia supernova data sets provided by the Nearby Supernova Factory and found that while both methods give accuracies of approximately 95%, the rule ensemble method gives much lower false negative rates.

  19. Rhythmic canons and modular tiling

    OpenAIRE

    Caure, Hélianthe

    2016-01-01

    This thesis is a contribution to the study of modulo p tiling. Many mathematical and computational tools were used for the study of rhythmic tiling canons. Recent research has mainly focused on finding tilings without inner periodicity, which are called Vuza canons. Those canons are a constructive basis for all rhythmic tiling canons; however, they are really difficult to obtain. The best current method is a brute-force exploration that, despite a few recent enhancements, is exponential. Many techniques ...

  20. Thermodynamic stability of charged BTZ black holes: Ensemble dependency problem and its solution

    CERN Document Server

    Hendi, S H; Mamasani, R

    2015-01-01

    Motivated by the wide applications of thermal stability and phase transition, we investigate thermodynamic properties of charged BTZ black holes. We apply the standard method to calculate the heat capacity and the Hessian matrix and find that thermal stability of charged BTZ solutions depends on the choice of ensemble. To overcome this problem, we take into account cosmological constant as a thermodynamical variable. By this modification, we show that the ensemble dependency is eliminated and thermal stability conditions are the same in both ensembles. Then, we generalize our solutions to the case of nonlinear electrodynamics. We show how nonlinear matter field modifies the geometrical behavior of the metric function. We also study phase transition and thermal stability of these black holes in context of both canonical and grand canonical ensembles. We show that by considering the cosmological constant as a thermodynamical variable and modifying the Hessian matrix, the ensemble dependency of thermal stability...

  1. Short ensembles: an efficient method for discerning climate-relevant sensitivities in atmospheric general circulation models

    Science.gov (United States)

    Wan, H.; Rasch, P. J.; Zhang, K.; Qian, Y.; Yan, H.; Zhao, C.

    2014-09-01

    This paper explores the feasibility of an experimentation strategy for investigating sensitivities in fast components of atmospheric general circulation models. The basic idea is to replace the traditional serial-in-time long-term climate integrations by representative ensembles of shorter simulations. The key advantage of the proposed method lies in its efficiency: since fewer days of simulation are needed, the computational cost is less, and because individual realizations are independent and can be integrated simultaneously, the new dimension of parallelism can dramatically reduce the turnaround time in benchmark tests, sensitivity studies, and model tuning exercises. The strategy is not appropriate for exploring sensitivity of all model features, but it is very effective in many situations. Two examples are presented using the Community Atmosphere Model, version 5. In the first example, the method is used to characterize sensitivities of the simulated clouds to time-step length. Results show that 3-day ensembles of 20 to 50 members are sufficient to reproduce the main signals revealed by traditional 5-year simulations. A nudging technique is applied to an additional set of simulations to help understand the contribution of physics-dynamics interaction to the detected time-step sensitivity. In the second example, multiple empirical parameters related to cloud microphysics and aerosol life cycle are perturbed simultaneously in order to find out which parameters have the largest impact on the simulated global mean top-of-atmosphere radiation balance. It turns out that 12-member ensembles of 10-day simulations are able to reveal the same sensitivities as seen in 4-year simulations performed in a previous study. In both cases, the ensemble method reduces the total computational time by a factor of about 15, and the turnaround time by a factor of several hundred. The efficiency of the method makes it particularly useful for the development of high

  2. A Fuzzy Integral Ensemble Method in Visual P300 Brain-Computer Interface

    Science.gov (United States)

    Cavrini, Francesco; Quitadamo, Lucia Rita; Saggio, Giovanni

    2016-01-01

    We evaluate the possibility of application of combination of classifiers using fuzzy measures and integrals to Brain-Computer Interface (BCI) based on electroencephalography. In particular, we present an ensemble method that can be applied to a variety of systems and evaluate it in the context of a visual P300-based BCI. Offline analysis of data relative to 5 subjects lets us argue that the proposed classification strategy is suitable for BCI. Indeed, the achieved performance is significantly greater than the average of the base classifiers and, broadly speaking, similar to that of the best one. Thus the proposed methodology allows realizing systems that can be used by different subjects without the need for a preliminary configuration phase in which the best classifier for each user has to be identified. Moreover, the ensemble is often capable of detecting uncertain situations and turning them from misclassifications into abstentions, thereby improving the level of safety in BCI for environmental or device control. PMID:26819595

  3. Multi-model ensemble forecasts of tropical cyclones in 2010 and 2011 based on the Kalman Filter method

    Science.gov (United States)

    He, Chengfei; Zhi, Xiefei; You, Qinglong; Song, Bin; Fraedrich, Klaus

    2015-08-01

    This study conducted 24- to 72-h multi-model ensemble forecasts to explore the tracks and intensities (central mean sea level pressure) of tropical cyclones (TCs). Forecast data for the northwestern Pacific basin in 2010 and 2011 were selected from the China Meteorological Administration, European Centre for Medium-Range Weather Forecasts (ECMWF), Japan Meteorological Agency, and National Centers for Environmental Prediction datasets of the Observing System Research and Predictability Experiment Interactive Grand Global Ensemble project. The Kalman Filter was employed to conduct the TC forecasts, along with the ensemble mean and super-ensemble for comparison. The following results were obtained: (1) The statistical-dynamic Kalman Filter, in which recent observations are given more importance and model weighting coefficients are adjusted over time, produced quite different results from that of the super-ensemble. (2) The Kalman Filter reduced the TC mean absolute track forecast error by approximately 50, 80 and 100 km in the 24-, 48- and 72-h forecasts, respectively, compared with the best individual model (ECMWF). Also, the intensity forecasts were improved by the Kalman Filter to some extent in terms of average intensity deviation (AID) and correlation coefficients with reanalysis intensity data. Overall, the Kalman Filter technique performed better compared to multi-models, the ensemble mean, and the super-ensemble in 3-day forecasts. The implication of this study is that this technique appears to be a very promising statistical-dynamic method for multi-model ensemble forecasts of TCs.
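One way to picture the statistical-dynamical weighting is to treat the combination weights of the individual model forecasts as a state vector that a Kalman filter updates whenever a verifying observation arrives, so that recent observations carry more influence. The sketch below does this for a synthetic scalar predictand with assumed noise settings; it is not the authors' implementation.

```python
# Kalman-filter update of multi-model combination weights (synthetic example).
import numpy as np

rng = np.random.default_rng(5)
n_models, n_times = 4, 200
truth = 980 + 10 * np.sin(np.linspace(0, 8, n_times))                 # verifying analyses
forecasts = truth[:, None] + rng.normal(0, [2.0, 3.0, 4.0, 5.0],
                                        (n_times, n_models))          # model forecasts

w = np.full(n_models, 1.0 / n_models)    # state: combination weights
P = np.eye(n_models)                     # weight-error covariance
Q, R = 1e-4 * np.eye(n_models), 4.0      # process and observation noise (tuning assumptions)

combined = np.empty(n_times)
for t in range(n_times):
    H = forecasts[t][None, :]            # observation operator: weighted sum of forecasts
    combined[t] = (H @ w).item()         # forecast issued before seeing the observation
    P = P + Q                            # time update (weights assumed nearly constant)
    S = (H @ P @ H.T).item() + R
    K = (P @ H.T / S).ravel()            # Kalman gain
    w = w + K * (truth[t] - combined[t]) # measurement update of the weights
    P = P - np.outer(K, H @ P)

print("MAE equal weights :", np.abs(forecasts.mean(axis=1) - truth).mean().round(2))
print("MAE Kalman weights:", np.abs(combined - truth).mean().round(2))
```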

  4. Data Mining and Ensemble of Learning Methods

    Institute of Scientific and Technical Information of China (English)

    刁力力; 胡可云; 陆玉昌; 石纯一

    2001-01-01

    Data mining is one kind of solution to the problem of information explosion. Classification and prediction are among the most fundamental tasks in the data-mining field. Many experiments have shown that ensembles of learning methods generally give better results than single learning methods most of the time. In this sense, it is of great value to introduce ensembles of learning methods into data mining. This paper introduces data mining and ensembles of learning methods respectively, and analyses the role that ensembles of learning methods can play in several important practical aspects of data mining: text mining, multimedia information mining and web mining.

  5. Extensions and applications of ensemble-of-trees methods in machine learning

    Science.gov (United States)

    Bleich, Justin

    Ensemble-of-trees algorithms have emerged to the forefront of machine learning due to their ability to generate high forecasting accuracy for a wide array of regression and classification problems. Classic ensemble methodologies such as random forests (RF) and stochastic gradient boosting (SGB) rely on algorithmic procedures to generate fits to data. In contrast, more recent ensemble techniques such as Bayesian Additive Regression Trees (BART) and Dynamic Trees (DT) focus on an underlying Bayesian probability model to generate the fits. These new probability model-based approaches show much promise versus their algorithmic counterparts, but also offer substantial room for improvement. The first part of this thesis focuses on methodological advances for ensemble-of-trees techniques with an emphasis on the more recent Bayesian approaches. In particular, we focus on extensions of BART in four distinct ways. First, we develop a more robust implementation of BART for both research and application. We then develop a principled approach to variable selection for BART as well as the ability to naturally incorporate prior information on important covariates into the algorithm. Next, we propose a method for handling missing data that relies on the recursive structure of decision trees and does not require imputation. Last, we relax the assumption of homoskedasticity in the BART model to allow for parametric modeling of heteroskedasticity. The second part of this thesis returns to the classic algorithmic approaches in the context of classification problems with asymmetric costs of forecasting errors. First we consider the performance of RF and SGB more broadly and demonstrate their superiority to logistic regression for applications in criminology with asymmetric costs. Next, we use RF to forecast unplanned hospital readmissions upon patient discharge with asymmetric costs taken into account. Finally, we explore the construction of stable decision trees for forecasts of

  6. Canonical Information Analysis

    DEFF Research Database (Denmark)

    Vestergaard, Jacob Schack; Nielsen, Allan Aasbjerg

    2015-01-01

    Canonical correlation analysis is an established multivariate statistical method in which correlation between linear combinations of multivariate sets of variables is maximized. In canonical information analysis introduced here, linear correlation as a measure of association between variables is replaced by the information theoretical, entropy based measure mutual information, which is a much more general measure of association. We make canonical information analysis feasible for large sample problems, including for example multispectral images, due to the use of a fast kernel density estimator for entropy estimation. Canonical information analysis is applied successfully to (1) simple simulated data to illustrate the basic idea and evaluate performance, (2) fusion of weather radar and optical geostationary satellite data in a situation with heavy precipitation, and (3) change detection in...

  7. Multilevel Monte Carlo methods using ensemble level mixed MsFEM for two-phase flow and transport simulations

    KAUST Repository

    Efendiev, Yalchin R.

    2013-08-21

    In this paper, we propose multilevel Monte Carlo (MLMC) methods that use ensemble level mixed multiscale methods in the simulations of multiphase flow and transport. The contribution of this paper is twofold: (1) a design of ensemble level mixed multiscale finite element methods and (2) a novel use of mixed multiscale finite element methods within multilevel Monte Carlo techniques to speed up the computations. The main idea of ensemble level multiscale methods is to construct local multiscale basis functions that can be used for any member of the ensemble. In this paper, we consider two ensemble level mixed multiscale finite element methods: (1) the no-local-solve-online ensemble level method (NLSO); and (2) the local-solve-online ensemble level method (LSO). The first approach was proposed in Aarnes and Efendiev (SIAM J. Sci. Comput. 30(5):2319-2339, 2008) while the second approach is new. Both mixed multiscale methods use a number of snapshots of the permeability media in generating multiscale basis functions. As a result, in the off-line stage, we construct multiple basis functions for each coarse region where basis functions correspond to different realizations. In the no-local-solve-online ensemble level method, one uses the whole set of precomputed basis functions to approximate the solution for an arbitrary realization. In the local-solve-online ensemble level method, one uses the precomputed functions to construct a multiscale basis for a particular realization. With this basis, the solution corresponding to this particular realization is approximated in LSO mixed multiscale finite element method (MsFEM). In both approaches, the accuracy of the method is related to the number of snapshots computed based on different realizations that one uses to precompute a multiscale basis. In this paper, ensemble level multiscale methods are used in multilevel Monte Carlo methods (Giles 2008a, Oper.Res. 56(3):607-617, b). In multilevel Monte Carlo methods, more accurate

  8. Canonical symplectic particle-in-cell method for long-term large-scale simulations of the Vlasov-Maxwell system

    OpenAIRE

    Qin, Hong; Liu, Jian; Xiao, Jianyuan; ZHANG, RUILI; He, Yang; Wang, Yulei; Sun, Yajuan; Burby, Joshua W.; Ellison, Leland; Zhou, Yao

    2015-01-01

    Particle-in-Cell (PIC) simulation is the most important numerical tool in plasma physics. However, its long-term accuracy has not been established. To overcome this difficulty, we developed a canonical symplectic PIC method for the Vlasov-Maxwell system by discretizing its canonical Poisson bracket. A fast local algorithm to solve the symplectic implicit time advance is discovered without root searching or global matrix inversion, enabling applications of the proposed method to very large-sca...

  9. Simulating large-scale crop yield by using perturbed-parameter ensemble method

    Science.gov (United States)

    Iizumi, T.; Yokozawa, M.; Sakurai, G.; Nishimori, M.

    2010-12-01

    One pressing issue for food security under a changing climate is predicting the inter-annual variation of crop production induced by climate extremes and modulated climate. To secure the food supply for a growing world population, a methodology that can accurately predict crop yield on a large scale is needed. However, when developing a process-based large-scale crop model at the scale of general circulation models (GCMs), 100 km in latitude and longitude, researchers encounter difficulties with the spatial heterogeneity of available information on crop production, such as cultivated cultivars and management. This study proposed an ensemble-based simulation method that uses a process-based crop model and a systematic parameter perturbation procedure, taking maize in the U.S., China, and Brazil as examples. The crop model was developed by modifying the fundamental structure of the Soil and Water Assessment Tool (SWAT) to incorporate the effect of heat stress on yield. We called the new model PRYSBI: the Process-based Regional-scale Yield Simulator with Bayesian Inference. The posterior probability density function (PDF) of 17 parameters, which represents the crop- and grid-specific features of the crop and its uncertainty under the given data, was estimated by Bayesian inversion analysis. We then took 1500 ensemble members of simulated yield values, based on parameter sets sampled from the posterior PDF, to describe yearly changes of the yield, i.e., the perturbed-parameter ensemble method. The ensemble median for 27 years (1980-2006) was compared with the data aggregated from county yields. On a country scale, the ensemble median of the simulated yield showed a good correspondence with the reported yield: the Pearson's correlation coefficient is over 0.6 for all countries. In contrast, on a grid scale, the correspondence

  10. Fast-sum method for the elastic field of three-dimensional dislocation ensembles

    International Nuclear Information System (INIS)

    The elastic field of complex shape ensembles of dislocation loops is developed as an essential ingredient in the dislocation dynamics method for computer simulation of mesoscopic plastic deformation. Dislocation ensembles are sorted into individual loops, which are then divided into segments represented as parametrized space curves. Numerical solutions are presented as fast numerical sums for relevant elastic field variables (i.e., displacement, strain, stress, force, self-energy, and interaction energy). Gaussian numerical quadratures are utilized to solve for field equations of linear elasticity in an infinite isotropic elastic medium. The accuracy of the method is verified by comparison of numerical results to analytical solutions for typical prismatic and slip dislocation loops. The method is shown to be highly accurate, computationally efficient, and numerically convergent as the number of segments and quadrature points are increased on each loop. Several examples of method applications to calculations of the elastic field of simple and complex loop geometries are given in infinite crystals. The effect of crystal surfaces on the redistribution of the elastic field is demonstrated by superposition of a finite-element image force field on the computed results. copyright 1999 The American Physical Society

  11. An ensemble method for data stream classification in the presence of concept drift

    Institute of Scientific and Technical Information of China (English)

    Omid ABBASZADEH; Ali AMIRI‡; Ali Reza KHANTEYMOORI

    2015-01-01

    One recent area of interest in computer science is data stream management and processing. By ‘data stream', we refer to continuous and rapidly generated packages of data. Specific features of data streams are their immense volume, high production rate, limited data processing time, and concept drift; these features differentiate data streams from standard types of data. One issue for data streams is the classification of input data. A novel ensemble classifier is proposed in this paper. The classifier uses base classifiers with two weighting functions under different data input conditions. In addition, a new method is used to determine drift, which emphasizes the precision of the algorithm. Another characteristic of the proposed method is the removal of varying numbers of base classifiers based on their quality. The implementation of a weighting mechanism for the base classifiers at the decision-making stage is a further advantage of the algorithm. This facilitates adaptability when drifts take place, which leads to classifiers with higher efficiency. Furthermore, the proposed method is tested on a set of standard data and the results confirm higher accuracy compared to available ensemble classifiers and single classifiers. In addition, in some cases the proposed classifier is faster and needs less storage space.

  12. Examination of multi-perturbation methods for ensemble prediction of the MJO during boreal summer

    Science.gov (United States)

    Kang, In-Sik; Jang, Pyong-Hwa; Almazroui, Mansour

    2014-05-01

    The impact of initialization and perturbation methods on the ensemble prediction of the boreal summer intraseasonal oscillation was investigated using 20-year hindcast predictions of a coupled general circulation model. The three perturbation methods used in the present study are the lagged-averaged forecast (LAF) method, the breeding method, and the empirical singular vector (ESV) method. Hindcast experiments were performed with a prediction interval of 10 days for extended boreal summer (May-October) seasons over a 20 year period. The empirical orthogonal function (EOF) eigenvectors of the initial perturbations depend on the individual perturbation method used. The leading EOF eigenvectors of the LAF perturbations exhibit large variances in the extratropics. Bred vectors with a breeding interval of 3 days represent the local unstable mode moving northward and eastward over the Indian and western Pacific region, and the leading EOF modes of the ESV perturbations represent planetary-scale eastward moving perturbations over the tropics. By combining the three perturbation methods, a multi-perturbation (MP) ensemble prediction system for the intraseasonal time scale was constructed, and the effectiveness of the MP prediction system for the Madden and Julian oscillation (MJO) prediction was examined in the present study. The MJO prediction skills of the individual perturbation methods are all similar; however, the MP-based prediction has a higher level of correlation skill for predicting the real-time multivariate MJO indices compared to those of the other individual perturbation methods. The predictability of the intraseasonal oscillation is sensitive to the MJO amplitude and to the location of the dominant convective anomaly in the initial state. The improvement in the skill of the MP prediction system is more effective during periods of weak MJO activity.

  13. Fault diagnosis method for nuclear power plant based on ensemble learning

    International Nuclear Information System (INIS)

    A nuclear power plant (NPP) is a very complex system in which a vast number of parameters must be collected and monitored, so diagnosing the faults of an NPP is difficult. An ensemble learning method is proposed to address this problem. The method was applied to learn from training samples representing typical faults of a nuclear power plant, i.e., loss of coolant accident (LOCA), feed-water pipe rupture, steam generator tube rupture (SGTR), and main steam pipe rupture. Simulations were carried out under normal conditions and under conditions with invalid or absent parameters. The simulation results show that the method performs well even when parameters are invalid or absent, demonstrating very good generalization performance and fault tolerance. (authors)

  14. An efficient ensemble of radial basis functions method based on quadratic programming

    Science.gov (United States)

    Shi, Renhe; Liu, Li; Long, Teng; Liu, Jian

    2016-07-01

    Radial basis function (RBF) surrogate models have been widely applied in engineering design optimization problems to approximate computationally expensive simulations. Ensemble of radial basis functions (ERBF) using the weighted sum of stand-alone RBFs improves the approximation performance. To achieve a good trade-off between the accuracy and efficiency of the modelling process, this article presents a novel efficient ERBF method to determine the weights through solving a quadratic programming subproblem, denoted ERBF-QP. Several numerical benchmark functions are utilized to test the performance of the proposed ERBF-QP method. The results show that ERBF-QP can significantly improve the modelling efficiency compared with several existing ERBF methods. Moreover, ERBF-QP also provides satisfactory performance in terms of approximation accuracy. Finally, the ERBF-QP method is applied to a satellite multidisciplinary design optimization problem to illustrate its practicality and effectiveness for real-world engineering applications.
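The weight-determination step can be illustrated with a small equality-constrained quadratic program: minimize w^T C w subject to the weights summing to one, where C is built from cross-validation residuals of the stand-alone surrogates. With only the equality constraint the KKT system has a closed-form solution; the actual ERBF-QP formulation in the article may differ in detail (e.g. additional bounds on the weights).

```python
# Ensemble-of-surrogates weights from a small QP: min w^T C w  s.t.  sum(w) = 1.
import numpy as np

def ensemble_weights(cv_errors):
    """cv_errors: (n_models, n_points) cross-validation residuals of each surrogate."""
    C = cv_errors @ cv_errors.T / cv_errors.shape[1]
    Cinv_1 = np.linalg.solve(C, np.ones(C.shape[0]))   # closed-form KKT solution
    return Cinv_1 / Cinv_1.sum()

# made-up residuals for three stand-alone RBF surrogates at 20 validation points
rng = np.random.default_rng(6)
cv_errors = np.vstack([rng.normal(0, s, 20) for s in (0.2, 0.5, 0.9)])
w = ensemble_weights(cv_errors)
print(np.round(w, 3), "sum =", round(float(w.sum()), 3))  # lower-error surrogates usually get larger weights
```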

  15. Boosting iterative stochastic ensemble method for nonlinear calibration of subsurface flow models

    KAUST Repository

    Elsheikh, Ahmed H.

    2013-06-01

    A novel parameter estimation algorithm is proposed. The inverse problem is formulated as a sequential data integration problem in which Gaussian process regression (GPR) is used to integrate the prior knowledge (static data). The search space is further parameterized using Karhunen-Loève expansion to build a set of basis functions that spans the search space. Optimal weights of the reduced basis functions are estimated by an iterative stochastic ensemble method (ISEM). ISEM employs directional derivatives within a Gauss-Newton iteration for efficient gradient estimation. The resulting update equation relies on the inverse of the output covariance matrix which is rank deficient. In the proposed algorithm we use an iterative regularization based on the ℓ2 Boosting algorithm. ℓ2 Boosting iteratively fits the residual and the amount of regularization is controlled by the number of iterations. A termination criterion based on the Akaike information criterion (AIC) is utilized. This regularization method is very attractive in terms of performance and simplicity of implementation. The proposed algorithm combining ISEM and ℓ2 Boosting is evaluated on several nonlinear subsurface flow parameter estimation problems. The efficiency of the proposed algorithm is demonstrated by the small size of utilized ensembles and in terms of error convergence rates. © 2013 Elsevier B.V.
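For readers unfamiliar with ℓ2 Boosting, here is a stand-alone sketch of the generic idea (componentwise least-squares base learner, shrunken steps, iteration count as the regularization knob) on synthetic data; it is not the paper's coupling of boosting with ISEM, and the stopping rule is a fixed iteration count rather than AIC.

```python
# Generic componentwise L2 boosting: repeatedly fit the residual with the single
# best coordinate and take a shrunken step; iterations control regularization.
import numpy as np

rng = np.random.default_rng(7)
n, p = 100, 30
X = rng.normal(size=(n, p))
beta_true = np.zeros(p); beta_true[:3] = [2.0, -1.0, 0.5]      # sparse ground truth
y = X @ beta_true + 0.3 * rng.normal(size=n)

beta = np.zeros(p)
nu = 0.1                                   # shrinkage (step size)
residual = y.copy()
for _ in range(200):                       # number of iterations = regularization knob
    # least-squares coefficient of each single coordinate against the residual
    scores = X.T @ residual / (X**2).sum(axis=0)
    sse = ((residual[:, None] - X * scores) ** 2).sum(axis=0)
    j = int(np.argmin(sse))                # coordinate that best explains the residual
    beta[j] += nu * scores[j]
    residual -= nu * scores[j] * X[:, j]

print("largest fitted coefficients:", np.round(np.sort(np.abs(beta))[::-1][:5], 2))
```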

  16. Ensemble approach combining multiple methods improves human transcription start site prediction

    LENUS (Irish Health Repository)

    Dineen, David G

    2010-11-30

    Abstract Background The computational prediction of transcription start sites is an important unsolved problem. Some recent progress has been made, but many promoters, particularly those not associated with CpG islands, are still difficult to locate using current methods. These methods use different features and training sets, along with a variety of machine learning techniques and result in different prediction sets. Results We demonstrate the heterogeneity of current prediction sets, and take advantage of this heterogeneity to construct a two-level classifier ('Profisi Ensemble') using predictions from 7 programs, along with 2 other data sources. Support vector machines using 'full' and 'reduced' data sets are combined in an either/or approach. We achieve a 14% increase in performance over the current state-of-the-art, as benchmarked by a third-party tool. Conclusions Supervised learning methods are a useful way to combine predictions from diverse sources.

  17. Acceleration of ensemble machine learning methods using many-core devices

    Science.gov (United States)

    Tamerus, A.; Washbrook, A.; Wyeth, D.

    2015-12-01

    We present a case study into the acceleration of ensemble machine learning methods using many-core devices in collaboration with Toshiba Medical Visualisation Systems Europe (TMVSE). The adoption of GPUs to execute a key algorithm in the classification of medical image data was shown to significantly reduce overall processing time. Using a representative dataset and pre-trained decision trees as input we will demonstrate how the decision forest classification method can be mapped onto the GPU data processing model. It was found that a GPU-based version of the decision forest method resulted in over 138 times speed-up over a single-threaded CPU implementation with further improvements possible. The same GPU-based software was then directly applied to a suitably formed dataset to benefit supervised learning techniques applied in High Energy Physics (HEP) with similar improvements in performance.

  18. Regularized Generalized Canonical Correlation Analysis

    Science.gov (United States)

    Tenenhaus, Arthur; Tenenhaus, Michel

    2011-01-01

    Regularized generalized canonical correlation analysis (RGCCA) is a generalization of regularized canonical correlation analysis to three or more sets of variables. It constitutes a general framework for many multi-block data analysis methods. It combines the power of multi-block data analysis methods (maximization of well identified criteria) and…

  19. Combining linear interpolation with extrapolation methods in range-separated ensemble density-functional theory

    CERN Document Server

    Senjean, Bruno; Alam, Md Mehboob; Knecht, Stefan; Fromager, Emmanuel

    2015-01-01

    The combination of a recently proposed linear interpolation method (LIM) [Senjean et al., Phys. Rev. A 92, 012518 (2015)], which enables the calculation of weight-independent excitation energies in range-separated ensemble density-functional approximations, with the extrapolation scheme of Savin [J. Chem. Phys. 140, 18A509 (2014)] is presented in this work. It is shown that LIM excitation energies vary quadratically with the inverse of the range-separation parameter mu when the latter is large. As a result, the extrapolation scheme, which is usually applied to long-range interacting energies, can be adapted straightforwardly to LIM. This extrapolated LIM (ELIM) has been tested on a small test set consisting of He, Be, H2 and HeH+. Relatively accurate results have been obtained for the first singlet excitation energies with the typical mu=0.4 value. The improvement of LIM after extrapolation is remarkable, in particular for the doubly-excited 2^1Sigma+g state in the stretched H2 molecule. Three-state ensemble ...

  20. A Fuzzy Integral Ensemble Method in Visual P300 Brain-Computer Interface

    Directory of Open Access Journals (Sweden)

    Francesco Cavrini

    2016-01-01

    Full Text Available We evaluate the possibility of application of combination of classifiers using fuzzy measures and integrals to Brain-Computer Interface (BCI) based on electroencephalography. In particular, we present an ensemble method that can be applied to a variety of systems and evaluate it in the context of a visual P300-based BCI. Offline analysis of data relative to 5 subjects lets us argue that the proposed classification strategy is suitable for BCI. Indeed, the achieved performance is significantly greater than the average of the base classifiers and, broadly speaking, similar to that of the best one. Thus the proposed methodology allows realizing systems that can be used by different subjects without the need for a preliminary configuration phase in which the best classifier for each user has to be identified. Moreover, the ensemble is often capable of detecting uncertain situations and turning them from misclassifications into abstentions, thereby improving the level of safety in BCI for environmental or device control.

  1. Ensembl 2012.

    Science.gov (United States)

    Flicek, Paul; Amode, M Ridwan; Barrell, Daniel; Beal, Kathryn; Brent, Simon; Carvalho-Silva, Denise; Clapham, Peter; Coates, Guy; Fairley, Susan; Fitzgerald, Stephen; Gil, Laurent; Gordon, Leo; Hendrix, Maurice; Hourlier, Thibaut; Johnson, Nathan; Kähäri, Andreas K; Keefe, Damian; Keenan, Stephen; Kinsella, Rhoda; Komorowska, Monika; Koscielny, Gautier; Kulesha, Eugene; Larsson, Pontus; Longden, Ian; McLaren, William; Muffato, Matthieu; Overduin, Bert; Pignatelli, Miguel; Pritchard, Bethan; Riat, Harpreet Singh; Ritchie, Graham R S; Ruffier, Magali; Schuster, Michael; Sobral, Daniel; Tang, Y Amy; Taylor, Kieron; Trevanion, Stephen; Vandrovcova, Jana; White, Simon; Wilson, Mark; Wilder, Steven P; Aken, Bronwen L; Birney, Ewan; Cunningham, Fiona; Dunham, Ian; Durbin, Richard; Fernández-Suarez, Xosé M; Harrow, Jennifer; Herrero, Javier; Hubbard, Tim J P; Parker, Anne; Proctor, Glenn; Spudich, Giulietta; Vogel, Jan; Yates, Andy; Zadissa, Amonida; Searle, Stephen M J

    2012-01-01

    The Ensembl project (http://www.ensembl.org) provides genome resources for chordate genomes with a particular focus on human genome data as well as data for key model organisms such as mouse, rat and zebrafish. Five additional species were added in the last year including gibbon (Nomascus leucogenys) and Tasmanian devil (Sarcophilus harrisii) bringing the total number of supported species to 61 as of Ensembl release 64 (September 2011). Of these, 55 species appear on the main Ensembl website and six species are provided on the Ensembl preview site (Pre!Ensembl; http://pre.ensembl.org) with preliminary support. The past year has also seen improvements across the project. PMID:22086963

  2. On Reducing the Effect of Covariate Factors in Gait Recognition: A Classifier Ensemble Method.

    Science.gov (United States)

    Guan, Yu; Li, Chang-Tsun; Roli, Fabio

    2015-07-01

    Robust human gait recognition is challenging because of the presence of covariate factors such as carrying condition, clothing, walking surface, etc. In this paper, we model the effect of covariates as an unknown partial feature corruption problem. Since the locations of corruptions may differ for different query gaits, relevant features may become irrelevant when the walking condition changes. In this case, it is difficult to train one fixed classifier that is robust to a large number of different covariates. To tackle this problem, we propose a classifier ensemble method based on the random subspace method (RSM) and majority voting (MV). Its theoretical basis suggests it is insensitive to the locations of corrupted features, and thus can generalize well to a large number of covariates. We also extend this method by proposing two strategies, i.e., local enhancing (LE) and hybrid decision-level fusion (HDF), to suppress the ratio of false votes to true votes (before MV). The performance of our approach is competitive against the most challenging covariates like clothing, walking surface, and elapsed time. We evaluate our method on the USF dataset and OU-ISIR-B dataset, and it has much higher performance than other state-of-the-art algorithms. PMID:26352457
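A toy sketch of the random subspace method with majority voting under a simulated covariate change: a block of test-set features is corrupted, each base classifier sees only a random feature subset, and votes are combined. Nearest-centroid base classifiers and all sizes are arbitrary stand-ins for the paper's actual setup.

```python
# Random subspace method + majority voting: corruption confined to a block of
# features only affects the voters that happen to use those features.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import NearestCentroid

X, y = make_classification(n_samples=500, n_features=100, n_informative=30, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.4, random_state=1)

# corrupt a block of features in the test set to mimic a covariate change
X_te_cor = X_te.copy()
X_te_cor[:, :20] += np.random.default_rng(1).normal(0, 5, size=(X_te.shape[0], 20))

rng = np.random.default_rng(2)
subspaces = [rng.choice(X.shape[1], size=25, replace=False) for _ in range(51)]
models = [NearestCentroid().fit(X_tr[:, s], y_tr) for s in subspaces]
votes = np.array([m.predict(X_te_cor[:, s]) for m, s in zip(models, subspaces)])
rsm_pred = (votes.mean(axis=0) > 0.5).astype(int)     # majority vote (odd number of voters)

full = NearestCentroid().fit(X_tr, y_tr)
print("full-feature classifier on corrupted data:", round((full.predict(X_te_cor) == y_te).mean(), 3))
print("RSM + majority voting on corrupted data  :", round((rsm_pred == y_te).mean(), 3))
```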

  3. Inferring Association between Compound and Pathway with an Improved Ensemble Learning Method.

    Science.gov (United States)

    Song, Meiyue; Jiang, Zhenran

    2015-11-01

    The emergence of compound molecular data coupled to pathway information offers the possibility of using machine learning methods for the inference of compound-pathway associations. To provide insights into the global relationship between compounds and their affected pathways, an improved Rotation Forest ensemble learning method called RGRF (Relief & GBSSL - Rotation Forest) was proposed to predict their potential associations. The main characteristic of RGRF lies in using the Relief algorithm for feature extraction and the Graph-Based Semi-Supervised Learning method as the classifier. By incorporating chemical structure information, drug mode-of-action information and genomic space information, our method achieves better precision and flexibility in compound-pathway prediction. Moreover, several new compound-pathway associations that have the potential for further clinical investigation were identified by database searching. Finally, a prediction tool was developed using the RGRF algorithm, which can predict the interactions between pathways and all of the compounds in the cMap database. PMID:27491036

  4. The random-variable canonical distribution

    International Nuclear Information System (INIS)

    An alternative interpretation to Gibbs' concept of the canonical distribution for an ensemble of systems in statistical equilibrium is proposed. Whereas Gibbs' theory is based upon a consideration of systems subject to dynamical law, the present analysis relies neither on the classical equations of motion nor on any a priori probability of a complexion; rather, it makes use of the basic algebra of random variables and, specifically, invokes the law of large numbers. Thereby, a canonical distribution is derived which describes a macrosystem in probabilistic, rather than deterministic, terms, and facilitates the understanding of energy fluctuations which occur in macrosystems at an overall constant ensemble temperature. A discussion is given of a modified form of the Gibbs canonical distribution which takes full account of the effects of random energy fluctuations. It is demonstrated that the results from this modified analysis are entirely consonant with those derived from the random-variable approach. (author)
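
    For reference, the Gibbs canonical distribution referred to here, and the standard expression for the resulting energy fluctuations, can be written as follows (textbook forms given for orientation, not equations quoted from the paper):

    ```latex
    P(E) \;=\; \frac{\Omega(E)\, e^{-E/k_{\mathrm B}T}}{Z(T)},
    \qquad
    Z(T) \;=\; \sum_{E} \Omega(E)\, e^{-E/k_{\mathrm B}T},
    \qquad
    \langle (\Delta E)^{2} \rangle \;=\; k_{\mathrm B} T^{2}\, C_V .
    ```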

  5. Functional Multiple-Set Canonical Correlation Analysis

    Science.gov (United States)

    Hwang, Heungsun; Jung, Kwanghee; Takane, Yoshio; Woodward, Todd S.

    2012-01-01

    We propose functional multiple-set canonical correlation analysis for exploring associations among multiple sets of functions. The proposed method includes functional canonical correlation analysis as a special case when only two sets of functions are considered. As in classical multiple-set canonical correlation analysis, computationally, the…

  6. An efficient method to generate a perturbed parameter ensemble of a fully coupled AOGCM without flux-adjustment

    Directory of Open Access Journals (Sweden)

    P. J. Irvine

    2013-02-01

    Full Text Available We present a simple method to generate a perturbed parameter ensemble (PPE) of a fully-coupled atmosphere-ocean general circulation model (AOGCM), HadCM3, without requiring flux-adjustment. The aim was to produce an ensemble that samples parametric uncertainty in some key variables and displays a range of behavior similar to that seen in multi-model ensembles (MMEs). Six atmospheric parameters, a sea-ice parameter and an ocean parameter were jointly perturbed within a reasonable range to generate an initial group of 200 members. To screen out implausible ensemble members, 20 yr pre-industrial control simulations were run, and members whose temperature response to the parameter perturbations was projected to be outside the range of 13.6 ± 2°C, i.e. near the observed pre-industrial global mean, were discarded. 21 members, including the standard unperturbed model, were accepted, covering almost the entire span of the eight parameters, challenging the argument that without flux-adjustment parameter ranges would be unduly restricted. This ensemble was used in three experiments: an 800 yr pre-industrial simulation, a 150 yr quadrupled-CO2 simulation, and a 150 yr 1% per annum CO2 rise simulation. The behavior of the PPE for the pre-industrial control compared well to the CMIP3 ensemble for a number of surface and atmospheric column variables, with the exception of a few members in the Tropics. However, we find that members of the PPE with low values of the entrainment rate coefficient show very large increases in upper tropospheric and stratospheric water vapor concentrations in response to elevated CO2, and some show implausibly high climate sensitivities; as such, some of these members will be excluded from future experiments with this ensemble. The outcome of this study is a PPE of a fully-coupled AOGCM which samples parametric uncertainty with a range of behavior similar to the CMIP3 ensemble, and a simple methodology which would be applicable to other GCMs.

  7. Ensembl 2012

    OpenAIRE

    Flicek, Paul; Amode, M. Ridwan; Barrell, Daniel; Beal, Kathryn; Brent, Simon; Carvalho-Silva, Denise; Clapham, Peter; Coates, Guy; Fairley, Susan; Fitzgerald, Stephen; Gil, Laurent; Gordon, Leo; Hendrix, Maurice; Hourlier, Thibaut; Johnson, Nathan

    2011-01-01

    The Ensembl project (http://www.ensembl.org) provides genome resources for chordate genomes with a particular focus on human genome data as well as data for key model organisms such as mouse, rat and zebrafish. Five additional species were added in the last year including gibbon (Nomascus leucogenys) and Tasmanian devil (Sarcophilus harrisii) bringing the total number of supported species to 61 as of Ensembl release 64 (September 2011). Of these, 55 species appear on the main Ensembl website ...

  8. Landmine detection using ensemble discrete hidden Markov models with context dependent training methods

    Science.gov (United States)

    Hamdi, Anis; Missaoui, Oualid; Frigui, Hichem; Gader, Paul

    2010-04-01

    We propose a landmine detection algorithm that uses ensemble discrete hidden Markov models with context dependent training schemes. We hypothesize that the data are generated by K models. These different models reflect the fact that mines and clutter objects have different characteristics depending on the mine type, soil and weather conditions, and burial depth. Model identification is based on clustering in the log-likelihood space. First, one HMM is fit to each of the N individual sequence. For each fitted model, we evaluate the log-likelihood of each sequence. This will result in an N x N log-likelihood distance matrix that will be partitioned into K groups. In the second step, we learn the parameters of one discrete HMM per group. We propose using and optimizing various training approaches for the different K groups depending on their size and homogeneity. In particular, we will investigate the maximum likelihood, and the MCE-based discriminative training approaches. Results on large and diverse Ground Penetrating Radar data collections show that the proposed method can identify meaningful and coherent HMM models that describe different properties of the data. Each HMM models a group of alarm signatures that share common attributes such as clutter, mine type, and burial depth. Our initial experiments have also indicated that the proposed mixture model outperform the baseline HMM that uses one model for the mine and one model for the background.

  9. An ensemble method for gene discovery based on DNA microarray data

    Institute of Scientific and Technical Information of China (English)

    LI Xia; RAO Shaoqi; ZHANG Tianwen; GUO Zheng; ZHANG Qingpu; Kathy L. MOSER; Eric J. TOPOL

    2004-01-01

    The advent of DNA microarray technology has offered the promise of casting new insights onto deciphering secrets of life by monitoring the activities of thousands of genes simultaneously. Current analyses of microarray data focus on precise classification of biological types, for example, tumor versus normal tissues. A further scientifically challenging task is to extract disease-relevant genes from the bewildering amounts of raw data, which is one of the most critical themes in the post-genomic era, but it is generally ignored due to the lack of an efficient approach. In this paper, we present a novel ensemble method for gene extraction that can be tailored to fulfill multiple biological tasks, including (i) precise classification of biological types; (ii) disease gene mining; and (iii) target-driven gene networking. We also give a numerical application for (i) and (ii) using a public microarray data set, and set aside a separate paper to address (iii).

  10. An adaptively fast ensemble empirical mode decomposition method and its applications to rolling element bearing fault diagnosis

    Science.gov (United States)

    Xue, Xiaoming; Zhou, Jianzhong; Xu, Yanhe; Zhu, Wenlong; Li, Chaoshun

    2015-10-01

    Ensemble empirical mode decomposition (EEMD) represents a significant improvement over the original empirical mode decomposition (EMD) method for eliminating the mode mixing problem. However, the added white noises generate some tough problems, including the high computational cost, the determination of the two critical parameters (the amplitude of the added white noise and the number of ensemble trials), and the contamination of residual noise in the signal reconstruction. To solve these problems, an adaptively fast EEMD (AFEEMD) method combined with complementary EEMD (CEEMD) is proposed in this paper. In the proposed method, the two critical parameters are fixed at 0.01 times the standard deviation of the original signal and two ensemble trials, respectively. Instead, the upper frequency limit of the added white noise is the key parameter which needs to be prescribed beforehand. Unlike the original EEMD method, in AFEEMD only two high-frequency white noises are added to the signal under investigation, in anti-phase. Furthermore, an index termed relative root-mean-square error is employed for the adaptive selection of the proper upper frequency limit of the added white noises. A simulation test and vibration-signal-based fault diagnosis of rolling element bearings under different fault types are used to demonstrate the feasibility and effectiveness of the proposed method. The analysis results indicate that the AFEEMD method represents a sound improvement over the original EEMD method and is highly practical.

  11. An artificial neural network ensemble method for fault diagnosis of proton exchange membrane fuel cell system

    International Nuclear Information System (INIS)

    The commercial viability of PEMFC (proton exchange membrane fuel cell) systems depends on using effective fault diagnosis technologies in PEMFC systems. However, many researchers have experimentally studied PEMFC systems without considering certain fault conditions. In this paper, an ANN (artificial neural network) ensemble method is presented that improves the stability and reliability of PEMFC systems. In the first part, a transient model is built, giving the approach flexibility in application to some exceptional conditions. The PEMFC dynamic model is built and simulated using MATLAB. In the second part, using this model and experiments, the mechanisms of four different faults in PEMFC systems are analyzed in detail. Third, the ANN ensemble for fault diagnosis is built and modeled. This model is trained and tested with the data. The test results show that, compared with the previous method for fault diagnosis of PEMFC systems, the proposed fault diagnosis method has a higher diagnostic rate and better generalization ability. Moreover, the partial structure of this method can be altered easily along with changes to the PEMFC system. In general, this method for the diagnosis of PEMFC has value for certain applications. - Highlights: • We analyze the principles and mechanisms of the four faults in a PEMFC (proton exchange membrane fuel cell) system. • We design and model an ANN (artificial neural network) ensemble method for the fault diagnosis of the PEMFC system. • This method has a high diagnostic rate and strong generalization ability

  12. Enhanced Sampling in the Well-Tempered Ensemble

    Science.gov (United States)

    Bonomi, M.; Parrinello, M.

    2010-05-01

    We introduce the well-tempered ensemble (WTE) which is the biased ensemble sampled by well-tempered metadynamics when the energy is used as collective variable. WTE can be designed so as to have approximately the same average energy as the canonical ensemble but much larger fluctuations. These two properties lead to an extremely fast exploration of phase space. An even greater efficiency is obtained when WTE is combined with parallel tempering. Unbiased Boltzmann averages are computed on the fly by a recently developed reweighting method [M. Bonomi et al., J. Comput. Chem. 30, 1615 (2009), doi:10.1002/jcc.21305]. We apply WTE and its parallel tempering variant to the 2d Ising model and to a Gō model of HIV protease, demonstrating in these two representative cases that convergence is accelerated by orders of magnitude.

  13. Blending a probabilistic nowcasting method with a high-resolution numerical weather prediction ensemble for convective precipitation forecasts

    OpenAIRE

    Kober, K.; Craig, C; Keil, C.; A. Dörnbrack

    2012-01-01

    A seamless prediction of convective precipitation for a continuous range of lead times from 0–8 h requires the application of different approaches. Here, a nowcasting method and a high-resolution numerical weather prediction ensemble are combined to provide probabilistic precipitation forecasts. For the nowcast, an existing deterministic extrapolation technique was modified by the local Lagrangian method to calculate the probability of exceeding a threshold value in radar reflectivity...

  14. A novel test method for measuring the thermal properties of clothing ensembles under dynamic conditions

    International Nuclear Information System (INIS)

    The dynamic thermal properties of clothing ensembles are important to thermal transient comfort, but have so far not been properly quantified. In this paper, a novel test procedure and a new index based on measurements on the sweating fabric manikin Walter are proposed to quantify and measure the dynamic thermal properties of clothing ensembles. Experiments showed that the new index is correlated with the rate of change of the wearer's body temperature, which is an important indicator of thermal transient comfort. Clothing ensembles with higher values of the index mean that the wearer will have a faster rate of change of body temperature and a shorter duration before approaching a dangerous thermo-physiological state when changing from 'resting' to 'exercising' mode. Clothing should therefore be designed to reduce the value of the index

  15. Design Hybrid method for intrusion detection using Ensemble cluster classification and SOM network

    Directory of Open Access Journals (Sweden)

    Deepak Rathore

    2012-09-01

    Full Text Available In the current scenario of internet technology, security is a big challenge. Networks are threatened by various cyber-attacks that cause loss of system data and degrade the performance of the host computer. In this sense, intrusion detection is a challenging field of research concerning network security, based on firewalls and rule-based detection techniques. In this paper we propose an Ensemble Cluster Classification technique using a SOM network for the detection of mixed-variable data generated by malicious software for attack purposes on a host system. In our methodology, the SOM network controls the iteration of distances over the different ensembling parameters. Our experimental results show better empirical evaluation on the KDD 99 data set in comparison with existing ensemble classifiers.

  16. Stochastic methods for light propagation and recurrent scattering in saturated and nonsaturated atomic ensembles

    CERN Document Server

    Lee, Mark D; Ruostekoski, Janne

    2016-01-01

    We derive equations for the strongly coupled system of light and dense atomic ensembles. The formalism includes an arbitrary internal level structure for the atoms and is not restricted to weak excitation of atoms by light. In the low light intensity limit for atoms with a single electronic ground state, the full quantum field-theoretical representation of the model can be solved exactly by means of classical stochastic electrodynamics simulations for stationary atoms that represent cold atomic ensembles. Simulations for the optical response of atoms in a quantum degenerate regime require one to synthesize a stochastic ensemble of atomic positions that generates the corresponding quantum statistical position correlations between the atoms. In the case of multiple ground levels or at light intensities where saturation becomes important, the classical simulations require approximations that neglect quantum fluctuations between the levels. We show how the model is extended to incorporate corrections due to quant...

  17. Contour-integral method for transitions to the circular unitary ensemble

    Science.gov (United States)

    Vinayak; Pandey, Akhilesh

    2009-08-01

    The representation of correlation functions as a contour integral has been useful in the study of transitions to the Gaussian unitary ensemble (GUE). We develop the formalism for transitions to the circular unitary ensemble (CUE) and consider the general ℓCUE to CUE transition, where ℓCUE denotes a superposition of ℓ independent CUE spectra in an arbitrary ratio. For large matrices, we derive the two-level correlation function for all ℓ, including ℓ = ∞ (the Poisson case). The results are useful in the study of weakly broken partitioning symmetries and weakly coupled mesoscopic cavities.

  18. Contour-integral method for transitions to the circular unitary ensemble

    International Nuclear Information System (INIS)

    The representation of correlation functions as a contour integral has been useful in the study of transitions to the Gaussian unitary ensemble (GUE). We develop the formalism for transitions to the circular unitary ensemble (CUE) and consider the general lCUE to CUE transition where lCUE denotes a superposition of l independent CUE spectra in an arbitrary ratio. For large matrices, we derive the two-level correlation function for all l including l = ∞ (the Poisson case). The results are useful in the study of weakly broken partitioning symmetries and weakly coupled mesoscopic cavities.

  19. The method of canonical transformations applied to the motion of a charge coupled to the electromagnetic field in the non-relativistic approximation

    International Nuclear Information System (INIS)

    A method of canonical transformations extended to dissipative Hamiltonian systems in a previous article is here applied to the behaviour of an extended charge coupled to the em field which is deductible from a Lagrangian function explicitly dependent on time. The generating function of a transformation which decouples the variables of the system is given, for an elastic applied force, and hence the constants in motion are found by a general method. Some limit cases are examined. (auth)

  20. A new method to calibrate aerodynamic roughness over the Tibetan Plateau using Ensemble Kalman Filter

    Science.gov (United States)

    Lee, J. H.; Timmermans, J.; Su, Z.; Mancini, M.

    2012-04-01

    Aerodynamic roughness height (Zom) is a key parameter required in land surface hydrological models, since errors in heat flux estimations are largely dependent on accurate optimization of this parameter. Despite its significance, it remains an uncertain parameter that is not easily determined. This is mostly because of the non-linear relationship in Monin-Obukhov Similarity (MOS) and the unknown vertical characteristics of vegetation. Previous studies determined aerodynamic roughness using the traditional wind profile method, remotely sensed vegetation indices, minimization of a cost function over the MOS relationship, or linear regression. However, these are complicated procedures that presume high accuracy for several other related parameters embedded in the MOS equations. In order to simplify the procedure and reduce the number of parameters needed, this study suggests a new approach to extract the aerodynamic roughness parameter via an Ensemble Kalman Filter (EnKF) that accommodates non-linearity and requires only one or two heat flux measurements. So far, to our knowledge, no previous study has applied the EnKF to aerodynamic roughness estimation, while the majority of data assimilation studies have paid attention to land surface state variables such as soil moisture or land surface temperature. This approach was applied to grassland in a semi-arid Tibetan area and to maize under moderately wet conditions in Italy. It was demonstrated that the aerodynamic roughness parameter can inversely be tracked from data-assimilated heat flux analysis. The aerodynamic roughness height estimated in this approach was consistent with eddy covariance results and literature values. Consequently, this newly estimated input corrected the sensible heat flux overestimated and the latent heat flux underestimated by the original Surface Energy Balance System (SEBS) model, suggesting better heat flux estimation especially during the summer Monsoon period. The advantage of this approach over other methodologies is that aerodynamic roughness height
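
    A toy sketch of the underlying idea: estimate a roughness-like parameter with an ensemble Kalman filter by augmenting the state with the parameter and assimilating flux observations. The forward model, noise levels and log-transform below are placeholder choices, not the SEBS/MOS formulation used in the study.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def heat_flux(z0m, wind):
        """Toy stand-in for a MOS-type flux model (larger z0m -> larger flux)."""
        return 300.0 / np.log(10.0 / z0m) ** 2 * wind

    true_z0m, n_ens, obs_err = 0.05, 100, 5.0
    z0m = np.exp(rng.normal(np.log(0.2), 0.8, n_ens))    # prior parameter ensemble

    for t in range(30):                                   # assimilate one flux obs per step
        wind = 2.0 + 0.1 * t
        y_obs = heat_flux(true_z0m, wind) + rng.normal(0, obs_err)
        h = heat_flux(z0m, wind) + rng.normal(0, obs_err, n_ens)   # perturbed predictions
        x = np.log(z0m)                                   # update in log space (positivity)
        c = np.cov(x, h)
        gain = c[0, 1] / c[1, 1]                          # scalar Kalman gain from the ensemble
        x = x + gain * (y_obs - h)                        # EnKF update of the parameter
        z0m = np.exp(x)

    print("posterior mean z0m:", float(z0m.mean().round(3)), "truth:", true_z0m)
    ```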

  1. Ensemble averaged surface normal impedance of material using an in-situ technique: preliminary study using boundary element method.

    Science.gov (United States)

    Otsuru, Toru; Tomiku, Reiji; Din, Nazli Bin Che; Okamoto, Noriko; Murakami, Masahiko

    2009-06-01

    An in-situ measurement technique of a material surface normal impedance is proposed. It includes a concept of "ensemble averaged" surface normal impedance that extends the usage of obtained values to various applications such as architectural acoustics and computational simulations, especially those based on the wave theory. The measurement technique itself is a refinement of a method using a two-microphone technique and environmental anonymous noise, or diffused ambient noise, as proposed by Takahashi et al. [Appl. Acoust. 66, 845-865 (2005)]. Measured impedance can be regarded as time-space averaged normal impedance at the material surface. As a preliminary study using numerical simulations based on the boundary element method, normal incidence and random incidence measurements are compared numerically: results clarify that ensemble averaging is an effective mode of measuring sound absorption characteristics of materials with practical sizes in the lower frequency range of 100-1000 Hz, as confirmed by practical measurements. PMID:19507960

  2. Canonical symplectic particle-in-cell method for long-term large-scale simulations of the Vlasov-Maxwell equations

    Science.gov (United States)

    Qin, Hong; Liu, Jian; Xiao, Jianyuan; Zhang, Ruili; He, Yang; Wang, Yulei; Sun, Yajuan; Burby, Joshua W.; Ellison, Leland; Zhou, Yao

    2016-01-01

    Particle-in-cell (PIC) simulation is the most important numerical tool in plasma physics. However, its long-term accuracy has not been established. To overcome this difficulty, we developed a canonical symplectic PIC method for the Vlasov-Maxwell system by discretising its canonical Poisson bracket. A fast local algorithm to solve the symplectic implicit time advance is discovered without root searching or global matrix inversion, enabling applications of the proposed method to very large-scale plasma simulations with many, e.g. 109, degrees of freedom. The long-term accuracy and fidelity of the algorithm enables us to numerically confirm Mouhot and Villani’s theory and conjecture on nonlinear Landau damping over several orders of magnitude using the PIC method, and to calculate the nonlinear evolution of the reflectivity during the mode conversion process from extraordinary waves to Bernstein waves.

  3. Canonical symplectic particle-in-cell method for long-term large-scale simulations of the Vlasov–Maxwell equations

    Energy Technology Data Exchange (ETDEWEB)

    Qin, Hong; Liu, Jian; Xiao, Jianyuan; Zhang, Ruili; He, Yang; Wang, Yulei; Sun, Yajuan; Burby, Joshua W.; Ellison, Leland; Zhou, Yao

    2015-12-14

    Particle-in-cell (PIC) simulation is the most important numerical tool in plasma physics. However, its long-term accuracy has not been established. To overcome this difficulty, we developed a canonical symplectic PIC method for the Vlasov-Maxwell system by discretising its canonical Poisson bracket. A fast local algorithm to solve the symplectic implicit time advance is discovered without root searching or global matrix inversion, enabling applications of the proposed method to very large-scale plasma simulations with many, e.g. 10^9, degrees of freedom. The long-term accuracy and fidelity of the algorithm enables us to numerically confirm Mouhot and Villani's theory and conjecture on nonlinear Landau damping over several orders of magnitude using the PIC method, and to calculate the nonlinear evolution of the reflectivity during the mode conversion process from extraordinary waves to Bernstein waves.

  4. Comparison of Selected methods of Ensemble Generation in EnKF for Air-Quality Modelling

    Czech Academy of Sciences Publication Activity Database

    Resler, Jaroslav; Juruš, Pavel; Eben, Kryštof; Belda, Michal

    Praha: Český hydrometeorologický ústav, 2005. s. 37-37. ISBN 80-86690-23-7. [WMO International Symposium on Assimilation of Observations in Meteorology and Oceanography /4./. 18.04.2005-22.04.2005, Prague] Institutional research plan: CEZ:AV0Z10300504 Keywords : ensemble Kalman filter * data assimilation * spatial correlation * NMC

  5. Sparse calibration of subsurface flow models using nonlinear orthogonal matching pursuit and an iterative stochastic ensemble method

    KAUST Repository

    Elsheikh, Ahmed H.

    2013-06-01

    We introduce a nonlinear orthogonal matching pursuit (NOMP) for sparse calibration of subsurface flow models. Sparse calibration is a challenging problem as the unknowns are both the non-zero components of the solution and their associated weights. NOMP is a greedy algorithm that discovers, at each iteration, the basis function most correlated with the residual from a large pool of basis functions. The discovered basis (aka support) is augmented across the nonlinear iterations. Once a set of basis functions is selected, the solution is obtained by applying Tikhonov regularization. The proposed algorithm relies on a stochastically approximated gradient using an iterative stochastic ensemble method (ISEM). In the current study, the search space is parameterized using an overcomplete dictionary of basis functions built using the K-SVD algorithm. The proposed algorithm is the first ensemble-based algorithm that tackles the sparse nonlinear parameter estimation problem. © 2013 Elsevier Ltd.
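
    A sketch of the greedy matching-pursuit-plus-Tikhonov idea on a purely linear synthetic problem; the dictionary, data and regularization weight are made up, and the real NOMP works with a nonlinear flow simulator and ISEM-approximated gradients rather than exact residual correlations.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_obs, n_atoms, sparsity, lam = 60, 200, 4, 1e-3

    D = rng.standard_normal((n_obs, n_atoms))             # overcomplete dictionary
    D /= np.linalg.norm(D, axis=0)
    x_true = np.zeros(n_atoms)
    x_true[rng.choice(n_atoms, sparsity, replace=False)] = rng.standard_normal(sparsity)
    y = D @ x_true + 0.01 * rng.standard_normal(n_obs)

    support, residual = [], y.copy()
    for _ in range(sparsity):
        # Greedy step: pick the atom most correlated with the current residual.
        k = int(np.argmax(np.abs(D.T @ residual)))
        if k not in support:
            support.append(k)
        # Tikhonov-regularized least squares on the current support.
        A = D[:, support]
        coef = np.linalg.solve(A.T @ A + lam * np.eye(len(support)), A.T @ y)
        residual = y - A @ coef

    x_hat = np.zeros(n_atoms)
    x_hat[support] = coef
    print("recovered support:", sorted(support), "true:", sorted(np.flatnonzero(x_true)))
    ```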

  6. Molecular dynamics, Langevin, and hybrid Monte Carlo simulations in multicanonical ensemble

    CERN Document Server

    Hansmann, Uwe H E; Eisenmenger, F; Hansmann, Ulrich H.E.; Okamoto, Yuko; Eisenmenger, Frank

    1996-01-01

    We demonstrate that the multicanonical approach is not restricted to Monte Carlo simulations, but can also be applied to simulation techniques such as molecular dynamics, Langevin, and hybrid Monte Carlo algorithms. The effectiveness of the methods is tested with an energy function for the protein folding problem. Simulations in the multicanonical ensemble by the three methods are performed for a pentapeptide, Met-enkephalin. For each algorithm, it is shown that from only one simulation run one can not only find the global-minimum-energy conformation but also obtain probability distributions in the canonical ensemble at any temperature, which allows the calculation of any thermodynamic quantity as a function of temperature.
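
    The reweighting step mentioned here is, in its standard form, the following: configurations x sampled with multicanonical weight w_mu(E) are reweighted to canonical averages at any temperature T (a standard textbook expression written for orientation, not quoted from the paper):

    ```latex
    \langle A \rangle_T \;=\;
    \frac{\sum_{x} A(x)\, w_{\mathrm{mu}}^{-1}\!\big(E(x)\big)\, e^{-E(x)/k_{\mathrm B}T}}
         {\sum_{x} w_{\mathrm{mu}}^{-1}\!\big(E(x)\big)\, e^{-E(x)/k_{\mathrm B}T}} ,
    ```

    where the sums run over the configurations generated in the single multicanonical run.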

  7. The role of the canonical biplot method in the study of volatile compounds in cheeses of variable composition

    Directory of Open Access Journals (Sweden)

    González-Martín, M. I.

    2016-03-01

    Full Text Available The canonical biplot method (CB) is used to determine the discriminatory power of volatile chemical compounds in cheese. These volatile compounds were used as variables in order to differentiate among 6 groups or populations of cheeses (combinations of two seasons (winter and summer) with 3 types of cheese (cow, sheep and goat's milk)). We analyzed a total of 17 volatile compounds by means of gas chromatography coupled with mass detection. The compounds included aldehydes and methyl-aldehydes, alcohols (primary, secondary and branched chain), ketones, methyl-ketones and esters in winter (WC) and summer (SC) cow's cheeses, winter (WSh) and summer (SSh) sheep's cheeses, and winter (WG) and summer (SG) goat's cheeses. The CB method allows differences to be found as a function of the elaboration of the cheeses and the seasonality of the milk, and allows the separation of the six groups of cheeses, characterizing the specific volatile chemical compounds responsible for such differences.

  8. A new method to calibrate aerodynamic roughness over the Tibetan Plateau using Ensemble Kalman Filter

    Directory of Open Access Journals (Sweden)

    J. H. Lee

    2012-04-01

    Full Text Available Aerodynamic roughness height (Zom) is a key parameter required in land surface hydrological models, since errors in heat flux estimations are largely dependent on accurate optimization of this parameter. Despite its significance, it remains an uncertain parameter that is not easily determined. This is mostly because of the non-linear relationship in Monin-Obukhov Similarity (MOS) and the unknown vertical characteristics of vegetation. Previous studies determined aerodynamic roughness using the traditional wind profile method, remotely sensed vegetation indices, minimization of a cost function over the MOS relationship, or linear regression. However, these are complicated procedures that presume high accuracy for several other related parameters embedded in the MOS equations. In order to simplify the procedure and reduce the number of parameters needed, this study suggests a new approach to extract the aerodynamic roughness parameter via an Ensemble Kalman Filter (EnKF) that accommodates non-linearity and requires only one or two heat flux measurements. So far, to our knowledge, no previous study has applied the EnKF to aerodynamic roughness estimation, while the majority of data assimilation studies have paid attention to land surface state variables such as soil moisture or land surface temperature. This approach was applied to grassland in a semi-arid Tibetan area and to maize under moderately wet conditions in Italy. It was demonstrated that the aerodynamic roughness parameter can inversely be tracked from data-assimilated heat flux analysis. The aerodynamic roughness height estimated in this approach was consistent with eddy covariance results and literature values. Consequently, this newly estimated input corrected the sensible heat flux overestimated and the latent heat flux underestimated by the original Surface Energy Balance System (SEBS) model, suggesting better heat flux estimation especially during the summer Monsoon period. The advantage of this approach over other methodologies is

  9. Performance comparison of several response surface surrogate models and ensemble methods for water injection optimization under uncertainty

    Science.gov (United States)

    Babaei, Masoud; Pan, Indranil

    2016-06-01

    In this paper we defined a relatively complex reservoir engineering optimization problem of maximizing the net present value of the hydrocarbon production in a water flooding process by controlling the water injection rates in multiple control periods. We assessed the performance of a number of response surface surrogate models and their ensembles, which are combined by Dempster-Shafer theory and Weighted Averaged Surrogates as found in contemporary literature. Most of these ensemble methods are based on the philosophy that multiple weak learners can be leveraged to obtain one strong learner which is better than the individual weak ones. Even though these techniques have been shown to work well for test bench functions, we found that they did not offer a considerable improvement over an individually used cubic radial basis function surrogate model. Our simulations on two- and three-dimensional cases, with varying numbers of optimization variables, suggest that the cubic radial basis function-based surrogate model is reliable and outperforms Kriging surrogates and multivariate adaptive regression splines, and, if it does not outperform the ensemble surrogate models, it is rarely outperformed by them.
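
    A minimal sketch of fitting and querying a cubic radial basis function surrogate with SciPy; the objective function and sample sizes are toy placeholders rather than the reservoir NPV model of the paper.

    ```python
    import numpy as np
    from scipy.interpolate import Rbf

    rng = np.random.default_rng(0)

    def expensive_objective(x1, x2):
        """Toy stand-in for a costly reservoir-simulation objective."""
        return np.sin(3 * x1) * np.cos(2 * x2) + 0.5 * x1 * x2

    # Train the surrogate on a small design of 'expensive' evaluations.
    x1, x2 = rng.uniform(-1, 1, 40), rng.uniform(-1, 1, 40)
    surrogate = Rbf(x1, x2, expensive_objective(x1, x2), function='cubic')

    # Cheap surrogate predictions on a dense grid, e.g. for use inside an optimizer.
    g1, g2 = np.meshgrid(np.linspace(-1, 1, 50), np.linspace(-1, 1, 50))
    pred = surrogate(g1, g2)
    err = np.abs(pred - expensive_objective(g1, g2))
    print("max surrogate error on grid:", float(err.max()))
    ```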

  10. NYYD Ensemble

    Index Scriptorium Estoniae

    2002-01-01

    On the duo Traksmann - Lukk of the NYYD Ensemble and E.-S. Tüür's work "Symbiosis", which is also recorded on the recently released NYYD Ensemble CD. Concerts on 2 March in the small hall of the Rakvere Theatre and on 3 March at the Rotermann Salt Storage; the programme includes Tüür, Kaumann, Berio, Reich, Yun, Hauta-aho and Buckinx.

  11. An Online NIR Sensor for the Pilot-Scale Extraction Process in Fructus Aurantii Coupled with Single and Ensemble Methods

    Directory of Open Access Journals (Sweden)

    Xiaoning Pan

    2015-04-01

    Full Text Available Model performance of the partial least squares method (PLS) alone and of bagging-PLS was investigated in online near-infrared (NIR) sensor monitoring of a pilot-scale extraction process in Fructus aurantii. High-performance liquid chromatography (HPLC) was used as a reference method to identify the active pharmaceutical ingredients: naringin, hesperidin and neohesperidin. Several preprocessing methods and the synergy interval partial least squares (SiPLS) and moving window partial least squares (MWPLS) variable selection methods were compared. Single quantification models (PLS) and ensemble methods combined with partial least squares (bagging-PLS) were developed for quantitative analysis of naringin, hesperidin and neohesperidin. SiPLS was compared to SiPLS combined with bagging-PLS. Final results showed the root mean square error of prediction (RMSEP) of bagging-PLS to be lower than that of PLS regression alone. For this reason, an ensemble method for an online NIR sensor is proposed here as a means of monitoring the pilot-scale extraction process in Fructus aurantii, which may also constitute a suitable strategy for online NIR monitoring of CHM.
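
    A rough sketch of the bagging-PLS idea with scikit-learn: fit one PLS regressor per bootstrap resample and average the predictions. The spectra and reference values are synthetic, and the preprocessing and SiPLS/MWPLS variable-selection steps of the paper are omitted.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.utils import resample

    rng = np.random.default_rng(0)
    n_samples, n_wavelengths = 80, 300
    X = rng.standard_normal((n_samples, n_wavelengths))                 # synthetic NIR spectra
    y = X[:, :10].sum(axis=1) + 0.1 * rng.standard_normal(n_samples)    # synthetic reference values

    # Bagging-PLS: one PLS model per bootstrap resample, predictions averaged.
    n_models, n_components = 25, 5
    models = []
    for i in range(n_models):
        Xb, yb = resample(X, y, random_state=i)
        models.append(PLSRegression(n_components=n_components).fit(Xb, yb))

    X_new = rng.standard_normal((5, n_wavelengths))
    y_pred = np.mean([m.predict(X_new).ravel() for m in models], axis=0)
    print("bagged predictions:", np.round(y_pred, 2))
    ```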

  12. Consecutive Charging of a Molecule-on-Insulator Ensemble Using Single Electron Tunnelling Methods.

    Science.gov (United States)

    Rahe, Philipp; Steele, Ryan P; Williams, Clayton C

    2016-02-10

    We present the local charge state modification at room temperature of small insulator-supported molecular ensembles formed by 1,1'-ferrocenedicarboxylic acid on calcite. Single electron tunnelling between the conducting tip of a noncontact atomic force microscope (NC-AFM) and the molecular islands is observed. By joining NC-AFM with Kelvin probe force microscopy, successive charge build-up in the sample is observed from consecutive experiments. Charge transfer within the islands and structural relaxation of the adsorbate/surface system is suggested by the experimental data. PMID:26713686

  13. Canonical symplectic particle-in-cell method for long-term large-scale simulations of the Vlasov-Maxwell system

    CERN Document Server

    Qin, Hong; Xiao, Jianyuan; Zhang, Ruili; He, Yang; Wang, Yulei; Burby, Joshua W; Ellison, Leland; Zhou, Yao

    2015-01-01

    Particle-in-Cell (PIC) simulation is the most important numerical tool in plasma physics and accelerator physics. However, its long-term accuracy has not been established. To overcome this difficulty, we developed a canonical symplectic PIC method for the Vlasov-Maxwell system by discretizing the Marsden-Weinstein bracket. A fast local algorithm to solve the symplectic implicit time advance is discovered without root searching or global matrix inversion, enabling applications of the proposed method to very large-scale plasma simulations with many, e.g., $10^{9}$, degrees of freedom.

  14. A comparison study of three statistical downscaling methods and their model-averaging ensemble for precipitation downscaling in China

    Science.gov (United States)

    Duan, Kai; Mei, Yadong

    2014-05-01

    This study evaluated the performance of three frequently applied statistical downscaling tools, SDSM, SVM, and LARS-WG, and their model-averaging ensembles under diverse moisture conditions with respect to their capability of reproducing the extremes as well as the mean behavior of precipitation. Daily observed precipitation and NCEP reanalysis data of 30 stations across China were collected for the period 1961-2000, and model parameters were calibrated for each season at each individual site, with 1961-1990 as the calibration period and 1991-2000 as the validation period. A flexible framework of multi-criteria model averaging was established in which model weights were optimized by the shuffled complex evolution algorithm. Model performance was compared for the optimal objective and nine more specific metrics. Results indicate that different downscaling methods show diverse strengths and weaknesses in simulating various precipitation characteristics under different circumstances. SDSM showed more adaptability by acquiring better overall performance at a majority of the stations, while LARS-WG revealed better accuracy in modeling most of the single metrics, especially extreme indices. SVM was more useful under drier conditions, but it had less skill in capturing temporal patterns. Optimized model averaging, aimed at certain objective functions, can achieve a promising ensemble at the cost of increased model complexity and computational expense. However, the variation of the different methods' performances highlighted the tradeoff among different criteria, which compromised the ensemble forecast in terms of single metrics. As superiority over single models cannot be guaranteed, the model averaging technique should be used cautiously in precipitation downscaling.
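
    A minimal sketch of the weight-optimization step: find non-negative ensemble weights summing to one that minimize RMSE against observations. SciPy's SLSQP is used here as a stand-in for the shuffled complex evolution algorithm of the study, and the member forecasts are synthetic.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)
    obs = rng.gamma(2.0, 2.0, 365)                          # synthetic daily precipitation
    members = np.stack([obs + rng.normal(0, s, obs.size)    # three imperfect downscalers
                        for s in (1.0, 2.0, 3.0)])

    def rmse_of_weighted_mean(w):
        return np.sqrt(np.mean((w @ members - obs) ** 2))

    w0 = np.full(members.shape[0], 1.0 / members.shape[0])
    res = minimize(rmse_of_weighted_mean, w0, method="SLSQP",
                   bounds=[(0.0, 1.0)] * members.shape[0],
                   constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}])
    print("optimized weights:", np.round(res.x, 3), "RMSE:", round(float(res.fun), 3))
    ```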

  15. Ensemble Equivalence for Distinguishable Particles

    Directory of Open Access Journals (Sweden)

    Antonio Fernández-Peralta

    2016-07-01

    Full Text Available Statistics of distinguishable particles has become relevant in systems of colloidal particles and in the context of applications of statistical mechanics to complex networks. In this paper, we present evidence that a commonly used expression for the partition function of a system of distinguishable particles leads to huge fluctuations of the number of particles in the grand canonical ensemble and, consequently, to nonequivalence of statistical ensembles. We will show that the alternative definition of the partition function including, naturally, Boltzmann’s correct counting factor for distinguishable particles solves the problem and restores ensemble equivalence. Finally, we also show that this choice for the partition function does not produce any inconsistency for a system of distinguishable localized particles, where the monoparticular partition function is not extensive.
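
    The counting issue at stake can be stated compactly: for N non-interacting particles with single-particle partition function z(T), the two competing definitions are (standard expressions, given for orientation rather than quoted from the paper)

    ```latex
    Z_N^{\mathrm{dist}}(T) \;=\; z(T)^{N}
    \qquad\text{versus}\qquad
    Z_N(T) \;=\; \frac{z(T)^{N}}{N!} ,
    ```

    and only the second choice yields an extensive free energy and, in the grand canonical ensemble, number fluctuations of order ⟨N⟩ rather than the huge fluctuations discussed in the abstract.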

  16. Detection of non-canonical DNA structures in genomic DNA sequences by enzymatic, electrophoretic and in silico methods

    Czech Academy of Sciences Publication Activity Database

    Brázdová, Marie; Kyjovský, Ivo; Tichý, Vlastimil; Navrátilová, Lucie; Loscher, Ch.; Jurčo, J.; Kotrs, J.; Lexa, M.; Martínek, T.; Tolstonog, G.; Fojta, Miroslav; Paleček, Emil; Deppert, W.

    Brno, 2009. s. 78. ISBN 978-80-210-4830-0. [Pracovní setkání biochemiků a molekulárních biologů /13./. 14.04.2009-15.04.2009, Brno] R&D Projects: GA MŠk(CZ) 1K04119; GA MŠk(CZ) LC06035; GA ČR(CZ) GP204/06/P369; GA ČR(CZ) GA204/08/1560; GA AV ČR(CZ) IAA500040701 Institutional research plan: CEZ:AV0Z50040507; CEZ:AV0Z50040702 Keywords : mutant p53 * non-canonical DNA * glioblastoma cells Subject RIV: BO - Biophysics

  17. An Ensemble Method to Distinguish Bacteriophage Virion from Non-Virion Proteins Based on Protein Sequence Characteristics

    Directory of Open Access Journals (Sweden)

    Lina Zhang

    2015-09-01

    Full Text Available Bacteriophage virion proteins and non-virion proteins have distinct functions in biological processes, such as specificity determination for host bacteria, bacteriophage replication and transcription. Accurate identification of bacteriophage virion proteins from bacteriophage protein sequences is significant to understand the complex virulence mechanism in host bacteria and the influence of bacteriophages on the development of antibacterial drugs. In this study, an ensemble method for bacteriophage virion protein prediction from bacteriophage protein sequences is put forward with hybrid feature spaces incorporating CTD (composition, transition and distribution), bi-profile Bayes, PseAAC (pseudo-amino acid composition) and PSSM (position-specific scoring matrix). When performing 10-fold cross-validation on the training dataset, the presented method achieves a satisfactory prediction result with a sensitivity of 0.870, a specificity of 0.830, an accuracy of 0.850 and a Matthew's correlation coefficient (MCC) of 0.701. To evaluate the prediction performance objectively, an independent testing dataset is used to evaluate the proposed method. Encouragingly, our proposed method performs better than previous studies, with a sensitivity of 0.853, a specificity of 0.815, an accuracy of 0.831 and an MCC of 0.662 on the independent testing dataset. These results suggest that the proposed method can be a potential candidate for bacteriophage virion protein prediction, which may provide a useful tool to find novel antibacterial drugs and to understand the relationship between bacteriophage and host bacteria. For the convenience of the vast majority of experimental scientists, a user-friendly and publicly-accessible web-server for the proposed ensemble method is established.
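
    For reference, the evaluation metrics quoted in the abstract (sensitivity, specificity, accuracy and Matthews correlation coefficient) follow directly from the binary confusion matrix; the labels below are made up for illustration and do not come from the study's datasets.

    ```python
    import numpy as np

    def binary_metrics(y_true, y_pred):
        """Sensitivity, specificity, accuracy and Matthews correlation coefficient."""
        y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
        tp = np.sum((y_true == 1) & (y_pred == 1))
        tn = np.sum((y_true == 0) & (y_pred == 0))
        fp = np.sum((y_true == 0) & (y_pred == 1))
        fn = np.sum((y_true == 1) & (y_pred == 0))
        sens = tp / (tp + fn)
        spec = tn / (tn + fp)
        acc = (tp + tn) / (tp + tn + fp + fn)
        mcc = ((tp * tn - fp * fn) /
               np.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)))
        return sens, spec, acc, mcc

    # Hypothetical predictions for 10 proteins (1 = virion, 0 = non-virion).
    y_true = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
    y_pred = [1, 1, 1, 1, 0, 0, 0, 0, 1, 0]
    print([round(float(v), 3) for v in binary_metrics(y_true, y_pred)])
    ```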

  18. Technical Note: Measuring contrast- and noise-dependent spatial resolution of an iterative reconstruction method in CT using ensemble averaging

    Energy Technology Data Exchange (ETDEWEB)

    Yu, Lifeng, E-mail: yu.lifeng@mayo.edu; Vrieze, Thomas J.; Leng, Shuai; Fletcher, Joel G.; McCollough, Cynthia H. [Department of Radiology, Mayo Clinic, Rochester, Minnesota 55905 (United States)

    2015-05-15

    Purpose: The spatial resolution of iterative reconstruction (IR) in computed tomography (CT) is contrast- and noise-dependent because of the nonlinear regularization. Due to the severe noise contamination, it is challenging to perform precise spatial-resolution measurements at very low-contrast levels. The purpose of this study was to measure the spatial resolution of a commercially available IR method using ensemble-averaged images acquired from repeated scans. Methods: A low-contrast phantom containing three rods (7, 14, and 21 HU below background) was scanned on a 128-slice CT scanner at three dose levels (CTDIvol = 16, 8, and 4 mGy). Images were reconstructed using two filtered-backprojection (FBP) kernels (B40 and B20) and a commercial IR method (sinogram affirmed iterative reconstruction, SAFIRE, Siemens Healthcare) with two strength settings (I40-3 and I40-5). The same scan was repeated 100 times at each dose level. The modulation transfer function (MTF) was calculated based on the edge profile measured on the ensemble-averaged images. Results: The spatial resolution of the two FBP kernels, B40 and B20, remained relatively constant across contrast and dose levels. However, the spatial resolution of the two IR kernels degraded relative to FBP as contrast or dose level decreased. For a given dose level at 16 mGy, the MTF50% value normalized to the B40 kernel decreased from 98.4% at 21 HU to 88.5% at 7 HU for I40-3 and from 97.6% to 82.1% for I40-5. At 21 HU, the relative MTF50% value decreased from 98.4% at 16 mGy to 90.7% at 4 mGy for I40-3 and from 97.6% to 85.6% for I40-5. Conclusions: A simple technique using ensemble averaging from repeated CT scans can be used to measure the spatial resolution of IR techniques in CT at very low contrast levels. The evaluated IR method degraded the spatial resolution at low contrast and high noise levels.
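
    A numeric sketch of the measurement idea on a 1-D toy problem: average many noisy realizations of a low-contrast edge, differentiate the edge-spread function to get the line-spread function, and take the normalized FFT magnitude as the MTF. The blur, noise and pixel size are toy values, not the scanner settings of the study.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_pix, n_scans, sigma_blur, sigma_noise = 256, 100, 3.0, 20.0

    x = np.arange(n_pix)
    ideal_edge = (x > n_pix // 2).astype(float) * 7.0            # 7 HU low-contrast edge
    kernel = np.exp(-0.5 * ((x - n_pix // 2) / sigma_blur) ** 2)
    kernel /= kernel.sum()
    blurred = np.convolve(ideal_edge, kernel, mode="same")       # system blur

    # Ensemble averaging over repeated noisy scans suppresses the noise.
    scans = blurred + rng.normal(0, sigma_noise, (n_scans, n_pix))
    esf = scans.mean(axis=0)                                      # edge-spread function

    lsf = np.gradient(esf)                                        # line-spread function
    mtf = np.abs(np.fft.rfft(lsf))
    mtf /= mtf[0]
    freq = np.fft.rfftfreq(n_pix, d=1.0)                          # cycles per pixel
    mtf50 = freq[np.argmax(mtf < 0.5)]                            # first crossing below 50%
    print("MTF50 ~", round(float(mtf50), 4), "cycles/pixel")
    ```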

  19. Ensembl variation resources

    Directory of Open Access Journals (Sweden)

    Marin-Garcia Pablo

    2010-05-01

    Full Text Available Abstract Background The maturing field of genomics is rapidly increasing the number of sequenced genomes and producing more information from those previously sequenced. Much of this additional information is variation data derived from sampling multiple individuals of a given species with the goal of discovering new variants and characterising the population frequencies of the variants that are already known. These data have immense value for many studies, including those designed to understand evolution and connect genotype to phenotype. Maximising the utility of the data requires that it be stored in an accessible manner that facilitates the integration of variation data with other genome resources such as gene annotation and comparative genomics. Description The Ensembl project provides comprehensive and integrated variation resources for a wide variety of chordate genomes. This paper provides a detailed description of the sources of data and the methods for creating the Ensembl variation databases. It also explores the utility of the information by explaining the range of query options available, from using interactive web displays, to online data mining tools and connecting directly to the data servers programmatically. It gives a good overview of the variation resources and future plans for expanding the variation data within Ensembl. Conclusions Variation data is an important key to understanding the functional and phenotypic differences between individuals. The development of new sequencing and genotyping technologies is greatly increasing the amount of variation data known for almost all genomes. The Ensembl variation resources are integrated into the Ensembl genome browser and provide a comprehensive way to access this data in the context of a widely used genome bioinformatics system. All Ensembl data is freely available at http://www.ensembl.org and from the public MySQL database server at ensembldb.ensembl.org.

  20. An Ensemble Method based on Particle of Swarm for the Reduction of Noise, Outlier and Core Point

    Directory of Open Access Journals (Sweden)

    Satish Dehariya,

    2013-04-01

    Full Text Available The majority voting and accurate prediction of classification algorithms in data mining are challenging tasks for data classification. To improve data classification, different classifiers are combined with one another in an ensemble process. An ensemble process increases the classification ratio of the classification algorithm; such a combination of classification algorithms is called an ensemble classifier. Ensemble learning is a technique to improve the performance and accuracy of classification and prediction of machine learning algorithms. Many researchers have proposed models for ensemble classifiers that merge different classification algorithms, but the performance of the ensemble algorithm suffers from the problems of outliers, noise and core points in the data from the feature selection process. In this paper we combined core, outlier and noise data (COB) in the feature selection process for the ensemble model. The process of best feature selection with an appropriate classifier uses particle swarm optimization.

  1. Enhanced Sampling in the Well-Tempered Ensemble

    OpenAIRE

    Bonomi, M.; Parrinello, M

    2009-01-01

    We introduce the well-tempered ensemble (WTE) which is the biased ensemble sampled by well-tempered metadynamics when the energy is used as collective variable. WTE can be designed so as to have approximately the same average energy as the canonical ensemble but much larger fluctuations. These two properties lead to an extremely fast exploration of phase space. An even greater efficiency is obtained when WTE is combined with parallel tempering. Unbiased Boltzmann averages are computed on the ...

  2. El canon literario peruano

    Directory of Open Access Journals (Sweden)

    Carlos García-Bedoya Maguiña

    2011-05-01

    Full Text Available The canon is a key concept in literary history. This article reviews the historical evolution of the Peruvian literary canon. Only with the so-called Aristocratic Republic, in the first decades of the twentieth century, can one speak, in the Peruvian case, of the formation of a genuine national canon. The author calls this first version of the Peruvian literary canon the oligarchic canon and highlights the importance of the work of Riva Agüero and Ventura García Calderón in its configuration. Only later, from the 1920s and definitively from the 1950s, can one speak of the emergence of a new literary canon, which the author proposes to call the post-oligarchic canon.

  3. Resistant multiple sparse canonical correlation.

    Science.gov (United States)

    Coleman, Jacob; Replogle, Joseph; Chandler, Gabriel; Hardin, Johanna

    2016-04-01

    Canonical correlation analysis (CCA) is a multivariate technique that takes two datasets and forms the most highly correlated possible pairs of linear combinations between them. Each subsequent pair of linear combinations is orthogonal to the preceding pair, meaning that new information is gleaned from each pair. By looking at the magnitude of coefficient values, we can find out which variables can be grouped together, thus better understanding multiple interactions that are otherwise difficult to compute or grasp intuitively. CCA appears to have quite powerful applications to high-throughput data, as we can use it to discover, for example, relationships between gene expression and gene copy number variation. One of the biggest problems of CCA is that the number of variables (often upwards of 10,000) makes biological interpretation of linear combinations nearly impossible. To limit variable output, we have employed a method known as sparse canonical correlation analysis (SCCA), while adding estimation which is resistant to extreme observations or other types of deviant data. In this paper, we have demonstrated the success of resistant estimation in variable selection using SCCA. Additionally, we have used SCCA to find multiple canonical pairs for extended knowledge about the datasets at hand. Again, using resistant estimators provided more accurate estimates than standard estimators in the multiple canonical correlation setting. R code is available and documented at https://github.com/hardin47/rmscca. PMID:26963062
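
    A brief sketch of ordinary (non-sparse, non-resistant) CCA with scikit-learn, just to show what the canonical pairs and their correlations look like; the sparse and resistant estimation of the paper, and the R package it documents, are not reproduced here, and the data are synthetic.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import CCA

    rng = np.random.default_rng(0)
    n = 200
    latent = rng.standard_normal((n, 2))                      # shared structure
    X = np.hstack([latent + 0.3 * rng.standard_normal((n, 2)),
                   rng.standard_normal((n, 8))])              # e.g. gene expression
    Y = np.hstack([latent + 0.3 * rng.standard_normal((n, 2)),
                   rng.standard_normal((n, 6))])              # e.g. copy number variation

    cca = CCA(n_components=2).fit(X, Y)
    Xc, Yc = cca.transform(X, Y)
    for k in range(2):
        r = np.corrcoef(Xc[:, k], Yc[:, k])[0, 1]
        print(f"canonical pair {k + 1}: correlation = {r:.3f}")
    # cca.x_weights_ and cca.y_weights_ hold the coefficients used to group variables.
    ```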

  4. [Research on ECG de-noising method based on ensemble empirical mode decomposition and wavelet transform using improved threshold function].

    Science.gov (United States)

    Ye, Linlin; Yang, Dan; Wang, Xu

    2014-06-01

    A de-noising method for electrocardiogram (ECG) signals based on ensemble empirical mode decomposition (EEMD) and wavelet threshold de-noising theory is proposed in this study. We decomposed noisy ECG signals with EEMD to obtain a series of intrinsic mode functions (IMFs), then selected and reconstructed the relevant IMFs to de-noise the ECG. The processed ECG signals were filtered again with a wavelet transform using an improved threshold function. In the experiments, the MIT-BIH ECG database was used to evaluate the performance of the proposed method, in comparison with de-noising based on EEMD alone and on the wavelet transform with the improved threshold function alone, in terms of signal-to-noise ratio (SNR) and mean square error (MSE). The results showed that the ECG waveforms de-noised with the proposed method were smooth and the amplitudes of the ECG features were not attenuated. In conclusion, the method discussed in this paper can de-noise ECG signals while preserving the characteristics of the original signal. PMID:25219236
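
    A sketch of the wavelet-thresholding stage only, using PyWavelets on a synthetic signal standing in for an ECG; the EEMD stage and the paper's improved threshold function are replaced here by a plain universal soft threshold.

    ```python
    import numpy as np
    import pywt

    rng = np.random.default_rng(0)
    t = np.linspace(0, 1, 1024)
    clean = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 25 * t)   # stand-in for an ECG
    noisy = clean + 0.3 * rng.standard_normal(t.size)

    coeffs = pywt.wavedec(noisy, "db4", level=5)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745          # noise estimate from finest scale
    thr = sigma * np.sqrt(2 * np.log(noisy.size))           # universal threshold
    den_coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    denoised = pywt.waverec(den_coeffs, "db4")[: noisy.size]

    snr = 10 * np.log10(np.sum(clean ** 2) / np.sum((denoised - clean) ** 2))
    print("output SNR (dB):", round(float(snr), 2))
    ```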

  5. Improving the sampling efficiency of the Grand Canonical Simulated Quenching approach

    International Nuclear Information System (INIS)

    Most common atomistic simulation techniques, like molecular dynamics or Metropolis Monte Carlo, operate under a constant interatomic Hamiltonian with a fixed number of atoms. Internal (atom positions or velocities) or external (simulation cell size or geometry) variables are then evolved dynamically or stochastically to yield sampling in different ensembles, such as microcanonical (NVE), canonical (NVT), isothermal-isobaric (NPT), etc. Averages are then taken to compute relevant physical properties. At least two limitations of these standard approaches can seriously hamper their application to many important systems: (1) they do not allow for the exchange of particles with a reservoir, and (2) the sampling efficiency is insufficient to obtain converged results because of the very long intrinsic timescales associated with these quantities. To fix ideas, one might want to identify low (free) energy configurations of grain boundaries (GB). In reality, grain boundaries are in contact with the grains, which act as reservoirs of defects (e.g., vacancies and interstitials). Since the GB can exchange particles with its environment, the most stable configuration cannot provably be found by sampling from NVE or NVT ensembles alone: one needs to allow the number of atoms in the sample to fluctuate. The first limitation can be circumvented by working in the grand canonical ensemble (μVT) or its derivatives (such as the semi-grand-canonical ensemble, useful for the study of substitutional alloys). Monte Carlo methods were the first to be adapted to this kind of system, where the number of atoms is allowed to fluctuate. Many of these methods are based on the Widom insertion method [Widom63], where the chemical potential of a given chemical species can be inferred from the potential energy changes upon random insertion of a new particle within the simulation cell. Other techniques, such as the Gibbs ensemble Monte Carlo [Panagiotopoulos87] where exchanges of particles are
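
    For orientation, the Widom insertion estimator referred to above is usually written as follows (standard form, with β = 1/k_BT and ΔU the energy change of inserting a test particle at a random position; not an equation quoted from this record):

    ```latex
    \mu_{\mathrm{ex}} \;=\; -\,k_{\mathrm B}T\,
    \ln \left\langle e^{-\beta\,\Delta U} \right\rangle_{N,V,T} .
    ```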

  6. An introduction to the theory of canonical matrices

    CERN Document Server

    Turnbull, H W

    2004-01-01

    Thorough and self-contained, this penetrating study of the theory of canonical matrices presents a detailed consideration of all the theory's principal features. Topics include elementary transformations and bilinear and quadratic forms; canonical reduction of equivalent matrices; subgroups of the group of equivalent transformations; and rational and classical canonical forms. The final chapters explore several methods of canonical reduction, including those of unitary and orthogonal transformations. 1952 edition. Index. Appendix. Historical notes. Bibliographies. 275 problems.

  7. Canonical Big Operators

    OpenAIRE

    Bertot, Yves; Gonthier, Georges; Ould Biha, Sidi; Pasca, Ioana

    2008-01-01

    In this paper, we present an approach to describe uniformly iterated “big” operations and to provide lemmas that encapsulate all the commonly used reasoning steps on these constructs. We show that these iterated operations can be handled generically using the syntactic notation and canonical structure facilities provided by the Coq system. We then show how these canonical big operations played a crucial enabling role in the study of various parts of linear algebra and multi-dimensional real a...

  8. An Ensemble Method based on Particle of Swarm for the Reduction of Noise, Outlier and Core Point

    Directory of Open Access Journals (Sweden)

    Satish Dehariya

    2013-03-01

    Full Text Available Majority voting and accurate prediction in classification algorithms are challenging tasks in data mining. To improve data classification, different classifiers are combined in an ensemble process; such a combination of classification algorithms is called an ensemble classifier. Ensemble learning is a technique for improving the performance and accuracy of classification and prediction in machine learning. Many researchers have proposed models that merge different classification algorithms into an ensemble, but the performance of ensemble algorithms suffers from the outlier, noise and core point problems of the data arising in the feature selection process. In this paper we combine core, outlier and noise data (COB) in the feature selection process for the ensemble model, and use particle swarm optimization to select the best features together with an appropriate classifier. Empirical results on the UCI Ecoli and Glass datasets indicate that the proposed COB model optimization algorithm can help to improve accuracy and classification.
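
    As a point of reference for the ensemble step described above, the following is a minimal majority-voting ensemble over feature subsets. It is a baseline sketch only: the random subset choice stands in for the paper's PSO-driven selection of core/outlier/noise features, and the Iris dataset is used purely for illustration.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

rng = np.random.default_rng(0)
members = []
for _ in range(15):
    cols = rng.choice(X.shape[1], size=2, replace=False)   # stand-in for PSO-chosen features
    clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(Xtr[:, cols], ytr)
    members.append((cols, clf))

# Majority vote over the member predictions
votes = np.stack([clf.predict(Xte[:, cols]) for cols, clf in members])
majority = np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)
print("ensemble accuracy:", (majority == yte).mean())
```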

  9. Relations between canonical and non-canonical inflation

    Energy Technology Data Exchange (ETDEWEB)

    Gwyn, Rhiannon [Max-Planck-Institut fuer Gravitationsphysik (Albert-Einstein-Institut), Potsdam (Germany); Rummel, Markus [Hamburg Univ. (Germany). 2. Inst. fuer Theoretische Physik; Westphal, Alexander [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany). Theory Group

    2012-12-15

    We look for potential observational degeneracies between canonical and non-canonical models of inflation of a single field φ. Non-canonical inflationary models are characterized by higher than linear powers of the standard kinetic term X in the effective Lagrangian p(X,φ) and arise for instance in the context of the Dirac-Born-Infeld (DBI) action in string theory. An on-shell transformation is introduced that transforms non-canonical inflationary theories to theories with a canonical kinetic term. The 2-point function observables of the original non-canonical theory and its canonical transform are found to match in the case of DBI inflation.

  10. Relations between canonical and non-canonical inflation

    International Nuclear Information System (INIS)

    We look for potential observational degeneracies between canonical and non-canonical models of inflation of a single field φ. Non-canonical inflationary models are characterized by higher than linear powers of the standard kinetic term X in the effective Lagrangian p(X,φ) and arise for instance in the context of the Dirac-Born-Infeld (DBI) action in string theory. An on-shell transformation is introduced that transforms non-canonical inflationary theories to theories with a canonical kinetic term. The 2-point function observables of the original non-canonical theory and its canonical transform are found to match in the case of DBI inflation.

  11. Ensemble approach combining multiple methods improves human transcription start site prediction.

    LENUS (Irish Health Repository)

    Dineen, David G

    2010-01-01

    The computational prediction of transcription start sites is an important unsolved problem. Some recent progress has been made, but many promoters, particularly those not associated with CpG islands, are still difficult to locate using current methods. These methods use different features and training sets, along with a variety of machine learning techniques and result in different prediction sets.

  12. Ensemble and constrained clustering with applications

    OpenAIRE

    Abdala, D.D. (Daniel)

    2011-01-01

    This thesis presents new developments in ensemble and constrained clustering and makes the following main contributions: 1) A unification of constrained and ensemble clustering in a single framework. 2) A new method for measuring and visualizing the variability of ensembles. 3) A new random-walker-based method for ensemble clustering. 4) An application of ensemble clustering to image segmentation. 5) A new consensus function for ensemble cluste...

  13. Comparison of the ensemble Kalman filter and 4D-Var assimilation methods using a stratospheric tracer transport model

    Directory of Open Access Journals (Sweden)

    S. Skachko

    2014-01-01

    Full Text Available The Ensemble Kalman filter (EnKF) assimilation method is applied to the tracer transport using the same stratospheric transport model as in the 4D-Var assimilation system BASCOE. This EnKF version of BASCOE was built primarily to avoid the large costs associated with the maintenance of an adjoint model. The EnKF developed in BASCOE accounts for two adjustable parameters: a parameter α controlling the model error term and a parameter r controlling the observational error. The EnKF system is shown to be markedly sensitive to these two parameters, which are adjusted based on the monitoring of a χ2-test measuring the misfit between the control variable and the observations. The performance of the EnKF and 4D-Var versions was estimated through the assimilation of Aura-MLS ozone observations during an 8 month period which includes the formation of the 2008 Antarctic ozone hole. To ensure a proper comparison, despite the fundamental differences between the two assimilation methods, both systems use identical and carefully calibrated input error statistics. We provide the detailed procedure for these calibrations, and compare the two sets of analyses with a focus on the lower and middle stratosphere where the ozone lifetime is much larger than the observational update frequency. Based on the Observation-minus-Forecast statistics, we show that the analyses provided by the two systems are markedly similar, with biases smaller than 5% and standard deviation errors smaller than 10% in most of the stratosphere. Since the biases are markedly similar, they most probably have the same causes: these can be deficiencies in the model and in the observation dataset, but not in the assimilation algorithm nor in the error calibration. The remarkably similar performance also shows that in the context of stratospheric transport, the choice of the assimilation method can be based on application-dependent factors, such as CPU cost or the ability to generate an ensemble
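
    The EnKF analysis step underlying the system described above can be sketched generically as follows. This is a plain stochastic EnKF update with perturbed observations, not the BASCOE implementation; the model-error parameter α, the observation-error scaling r and the χ2-based tuning discussed in the abstract are not reproduced here, and all array shapes are assumptions.

```python
import numpy as np

def enkf_analysis(X, y, H, R, rng=None):
    """One stochastic EnKF analysis step with perturbed observations.
    X: (n, N) forecast ensemble; y: (m,) observations;
    H: (m, n) observation operator; R: (m, m) observation-error covariance."""
    rng = rng or np.random.default_rng(0)
    n, N = X.shape
    A = X - X.mean(axis=1, keepdims=True)        # ensemble anomalies
    Pf_Ht = A @ (H @ A).T / (N - 1)              # P_f H^T
    S = H @ Pf_Ht + R                            # innovation covariance
    K = Pf_Ht @ np.linalg.inv(S)                 # Kalman gain
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, N).T
    return X + K @ (Y - H @ X)                   # analysis ensemble
```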

  14. A Classifier Ensemble of Binary Classifier Ensembles

    Directory of Open Access Journals (Sweden)

    Sajad Parvin

    2011-09-01

    Full Text Available This paper proposes a combinational algorithm to improve performance in multiclass classification domains. Because more accurate classifiers yield better classification performance, researchers have tended to concentrate on improving the accuracy of individual classifiers. However, turning to the single best classifier is not always the best way to obtain the best classification quality: an alternative is to use many weak classifiers, each specialized for a sub-space of the problem space, and to take their consensus vote as the final decision. This paper therefore proposes a heuristic classifier ensemble to improve the performance of classification learning, dealing in particular with multiclass problems, whose aim is to learn the boundaries of each class against many other classes. Based on the structure of multiclass problems, classifiers can be divided into two categories: pairwise classifiers and multiclass classifiers. The aim of a pairwise classifier is to separate one class from another; because pairwise classifiers are trained to discriminate between only two classes, their decision boundaries are simpler and more effective than those of multiclass classifiers. The main idea behind the proposed method is to focus the classifiers on the erroneous regions of the problem space and to use the pairwise classification concept instead of the multiclass one. Although the use of pairwise classification in place of multiclass classification is not new, we propose a new pairwise classifier ensemble with a much lower order. In this paper, the most confused classes are first determined and then several ensembles of classifiers are created; the classifiers of each of these ensembles work jointly using majority weighting votes. The results of these ensembles
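
    A minimal sketch of the pairwise (one-vs-one) building block described above follows: one binary classifier per class pair, combined by majority vote. The heuristic for concentrating on the most confused classes and the weighting scheme of the paper are not reproduced; logistic regression as the base learner is an assumption.

```python
import numpy as np
from itertools import combinations
from sklearn.linear_model import LogisticRegression

class PairwiseVotingEnsemble:
    """One binary (pairwise) classifier per class pair, combined by majority vote."""

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.models_ = {}
        for a, b in combinations(self.classes_, 2):
            mask = np.isin(y, [a, b])
            self.models_[(a, b)] = LogisticRegression(max_iter=1000).fit(X[mask], y[mask])
        return self

    def predict(self, X):
        index = {c: i for i, c in enumerate(self.classes_)}
        votes = np.zeros((len(X), len(self.classes_)))
        for (a, b), model in self.models_.items():
            pred = model.predict(X)
            for c in (a, b):
                votes[pred == c, index[c]] += 1      # each pairwise model casts one vote
        return self.classes_[votes.argmax(axis=1)]
```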

  15. A Cutting Pattern Recognition Method for Shearers Based on Improved Ensemble Empirical Mode Decomposition and a Probabilistic Neural Network

    Directory of Open Access Journals (Sweden)

    Jing Xu

    2015-10-01

    Full Text Available In order to guarantee the stable operation of shearers and promote construction of an automatic coal mining working face, an online cutting pattern recognition method with high accuracy and speed based on Improved Ensemble Empirical Mode Decomposition (IEEMD) and a Probabilistic Neural Network (PNN) is proposed. An industrial microphone is installed on the shearer and the cutting sound is collected as the recognition criterion to overcome the disadvantages of giant size, contact measurement and low identification rate of traditional detectors. To avoid end-point effects and get rid of undesirable intrinsic mode function (IMF) components in the initial signal, IEEMD is conducted on the sound. The end-point continuation based on the practical storage data is performed first to overcome the end-point effect. Next the average correlation coefficient, which is calculated by the correlation of the first IMF with others, is introduced to select essential IMFs. Then the energy and standard deviation of the remaining IMFs are extracted as features and PNN is applied to classify the cutting patterns. Finally, a simulation example, with an accuracy of 92.67%, and an industrial application prove the efficiency and correctness of the proposed method.

  16. A Cutting Pattern Recognition Method for Shearers Based on Improved Ensemble Empirical Mode Decomposition and a Probabilistic Neural Network

    Science.gov (United States)

    Xu, Jing; Wang, Zhongbin; Tan, Chao; Si, Lei; Liu, Xinhua

    2015-01-01

    In order to guarantee the stable operation of shearers and promote construction of an automatic coal mining working face, an online cutting pattern recognition method with high accuracy and speed based on Improved Ensemble Empirical Mode Decomposition (IEEMD) and Probabilistic Neural Network (PNN) is proposed. An industrial microphone is installed on the shearer and the cutting sound is collected as the recognition criterion to overcome the disadvantages of giant size, contact measurement and low identification rate of traditional detectors. To avoid end-point effects and get rid of undesirable intrinsic mode function (IMF) components in the initial signal, IEEMD is conducted on the sound. The end-point continuation based on the practical storage data is performed first to overcome the end-point effect. Next the average correlation coefficient, which is calculated by the correlation of the first IMF with others, is introduced to select essential IMFs. Then the energy and standard deviation of the remaining IMFs are extracted as features and PNN is applied to classify the cutting patterns. Finally, a simulation example, with an accuracy of 92.67%, and an industrial application prove the efficiency and correctness of the proposed method. PMID:26528985
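
    The IMF-screening and feature-extraction steps described in the two records above can be sketched as follows. This is only an illustration of the stated criterion (keep IMFs whose correlation with the first IMF reaches the average correlation coefficient; use the energy and standard deviation of the retained IMFs as features); the end-point continuation and the PNN classifier are not reproduced, and the array layout is an assumption.

```python
import numpy as np

def select_imfs(imfs):
    """Keep IMFs whose absolute correlation with the first IMF is at least the
    average correlation coefficient (the selection criterion in the abstract).
    `imfs` has shape (n_imfs, n_samples)."""
    ref = imfs[0]
    corr = np.array([abs(np.corrcoef(ref, imf)[0, 1]) for imf in imfs])
    return imfs[corr >= corr.mean()]

def imf_features(imfs):
    """Energy and standard deviation of each retained IMF, concatenated into one
    feature vector for the downstream classifier (a PNN in the paper)."""
    energy = np.sum(imfs ** 2, axis=1)
    std = np.std(imfs, axis=1)
    return np.concatenate([energy, std])
```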

  17. Canonical field theory

    Science.gov (United States)

    You, Setthivoine

    2015-11-01

    A new canonical field theory has been developed to help interpret the interaction between plasma flows and magnetic fields. The theory augments the Lagrangian of general dynamical systems to rigorously demonstrate that canonical helicity transport is valid across single particle, kinetic and fluid regimes, on scales ranging from classical to general relativistic. The Lagrangian is augmented with two extra terms that represent the interaction between the motion of matter and electromagnetic fields. The dynamical equations can then be re-formulated as a canonical form of Maxwell's equations or a canonical form of Ohm's law valid across all non-quantum regimes. The field theory rigorously shows that helicity can be preserved in kinetic regimes and not only fluid regimes, that helicity transfer between species governs the formation of flows or magnetic fields, and that helicity changes little compared to total energy only if density gradients are shallow. The theory suggests a possible interpretation of particle energization partitioning during magnetic reconnection as canonical wave interactions. This work is supported by US DOE Grant DE-SC0010340.

  18. Canonical phylogenetic ordination.

    Science.gov (United States)

    Giannini, Norberto P

    2003-10-01

    A phylogenetic comparative method is proposed for estimating historical effects on comparative data using the partitions that compose a cladogram, i.e., its monophyletic groups. Two basic matrices, Y and X, are defined in the context of an ordinary linear model. Y contains the comparative data measured over t taxa. X consists of an initial tree matrix that contains all the xj monophyletic groups (each coded separately as a binary indicator variable) of the phylogenetic tree available for those taxa. The method seeks to define the subset of groups, i.e., a reduced tree matrix, that best explains the patterns in Y. This definition is accomplished via regression or canonical ordination (depending on the dimensionality of Y) coupled with Monte Carlo permutations. It is argued here that unrestricted permutations (i.e., under an equiprobable model) are valid for testing this specific kind of groupwise hypothesis. Phylogeny is either partialled out or, more properly, incorporated into the analysis in the form of component variation. Direct extensions allow for testing ecomorphological data controlled by phylogeny in a variation partitioning approach. Currently available statistical techniques make this method applicable under most univariate/multivariate models and metrics; two-way phylogenetic effects can be estimated as well. The simplest case (univariate Y), tested with simulations, yielded acceptable type I error rates. Applications presented include examples from evolutionary ethology, ecology, and ecomorphology. Results showed that the new technique detected previously overlooked variation clearly associated with phylogeny and that many phylogenetic effects on comparative data may occur at particular groups rather than across the entire tree. PMID:14530135
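
    The core computation described above, for the simplest (univariate Y) case, amounts to regressing the comparative data on a binary clade-indicator matrix and assessing the fit with unrestricted Monte Carlo permutations. The sketch below illustrates only that step under assumed inputs; the search for the reduced tree matrix and the multivariate (canonical ordination) case are not reproduced.

```python
import numpy as np

def permutation_r2_test(Y, X, n_perm=999, seed=0):
    """R^2 of the regression of Y (length t) on X (t x k binary clade-indicator
    matrix, intercept added), with a p-value from unrestricted permutations of Y."""
    rng = np.random.default_rng(seed)

    def r2(y):
        Xc = np.column_stack([np.ones(len(y)), X])
        beta, *_ = np.linalg.lstsq(Xc, y, rcond=None)
        resid = y - Xc @ beta
        return 1.0 - resid @ resid / np.sum((y - y.mean()) ** 2)

    obs = r2(Y)
    perm = np.array([r2(rng.permutation(Y)) for _ in range(n_perm)])
    p_value = (1 + np.sum(perm >= obs)) / (n_perm + 1)
    return obs, p_value
```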

  19. Canonical affordances in context

    Directory of Open Access Journals (Sweden)

    Alan Costall

    2012-12-01

    Full Text Available James Gibson’s concept of affordances was an attempt to undermine the traditional dualism of the objective and subjective. Gibson himself insisted on the continuity of “affordances in general” and those attached to human artifacts. However, a crucial distinction needs to be drawn between “affordances in general” and the “canonical affordances” that are connected primarily to artifacts. Canonical affordances are conventional and normative. It is only in such cases that it makes sense to talk of the affordance of the object. Chairs, for example, are for sitting-on, even though we may also use them in many other ways. A good deal of confusion has arisen in the discussion of affordances from (1) the failure to recognize the normative status of canonical affordances and (2) then generalizing from this special case.

  20. Modeling of two-phase flow in boiling water reactor using phase-weighted ensemble average method

    International Nuclear Information System (INIS)

    Investigations into boiling, the generation of vapor and the prediction of its behavior are important for the stability of boiling water reactors. Existing models are limited by the simplifications made in deriving the governing equations or by the lack of a closure framework for the constitutive relations; commercial codes fall into this category as well. Consequently, researchers cannot easily find the comprehensive, un-simplified relations in order to simplify them for their own work. This study offers a state-of-the-art, phase-weighted, ensemble-averaged two-fluid model for the simulation of two-phase flow with heat and mass transfer. This approach is then used for modeling the bulk boiling (thermal-hydraulic modeling) in boiling water reactors. The resulting approach uses the energy balance equation to find a relation for the vapor quality at any point. The equations are solved using the SIMPLE algorithm in a finite volume method, and the results are compared with data from a real BWR (PB2 BWR/4 NPP) and with boiling data. The comparison shows that the present model offers satisfactorily improved accuracy.

  1. A composite state method for ensemble data assimilation with multiple limited-area models

    Directory of Open Access Journals (Sweden)

    Matthew Kretschmer

    2015-04-01

    Full Text Available Limited-area models (LAMs) allow high-resolution forecasts to be made for geographic regions of interest when resources are limited. Typically, boundary conditions for these models are provided through one-way boundary coupling from a coarser resolution global model. Here, data assimilation is considered in a situation in which a global model supplies boundary conditions to multiple LAMs. The data assimilation method presented combines information from all of the models to construct a single ‘composite state’, on which data assimilation is subsequently performed. The analysis composite state is then used to form the initial conditions of the global model and all of the LAMs for the next forecast cycle. The method is tested by using numerical experiments with simple, chaotic models. The results of the experiments show that there is a clear forecast benefit to allowing LAM states to influence one another during the analysis. In addition, adding LAM information at analysis time has a strong positive impact on global model forecast performance, even at points not covered by the LAMs.

  2. Research on an Ensemble Method of Uncertainty Reasoning

    Institute of Scientific and Technical Information of China (English)

    贺怀清; 李建伏

    2011-01-01

    Ensemble learning is a machine learning paradigm in which multiple models are strategically generated and combined to obtain better predictive performance than a single learning method, and it has been shown to be feasible and to yield better results than traditional learning methods. Uncertainty reasoning is one of the important research directions in artificial intelligence; various uncertainty reasoning methods have been developed, each with its own advantages and disadvantages in practical applications. Motivated by ensemble learning, an ensemble method of uncertainty reasoning is proposed. The main idea of the new method follows the basic framework of ensemble learning: multiple uncertainty reasoning methods are applied, and their results are integrated by some rule into the final result, so as to improve the accuracy of reasoning. Theoretical analysis and experimental tests show that the ensemble uncertainty reasoning method is effective and feasible.

  3. Measuring sub-canopy evaporation in a forested wetland using an ensemble of methods

    Science.gov (United States)

    Allen, S. T.; Edwards, B.; Reba, M. L.; Keim, R.

    2013-12-01

    and humidity gradients. This suggests the need to use combined methods during periods with problematic boundary layer conditions.

  4. Bayesian Decision-theoretic Methods for Parameter Ensembles with Application to Epidemiology

    Science.gov (United States)

    Gunterman, Haluna Penelope Frances

    and water-uptake behavior of CLs. Isolated CLs were made in-house and commercially and tested for their PC-S response. CLs have the propensity to be highly hydrophilic and require capillary pressures as low as -80 kPa to eject water. The presence of Pt or surface cracks increases hydrophilicity. These findings suggest that saturation in CLs, especially cracked CLs, may exacerbate poor transport. Lastly, this work includes early-stage development of a limiting-current measurement that can be used to calculate effective transport properties as a function of saturation. Results indicate that the method is valid, and different DM have higher transport depending on the operating condition. The technique is yet in a formative stage, and this work includes advice and recommendations for operation and design improvements.

  5. Differential Forms on Log Canonical Spaces

    CERN Document Server

    Greb, Daniel; Kovacs, Sandor J; Peternell, Thomas

    2010-01-01

    The present paper is concerned with differential forms on log canonical varieties. It is shown that any p-form defined on the smooth locus of a variety with canonical or klt singularities extends regularly to any resolution of singularities. In fact, a much more general theorem for log canonical pairs is established. The proof relies on vanishing theorems for log canonical varieties and on methods of the minimal model program. In addition, a theory of differential forms on dlt pairs is developed. It is shown that many of the fundamental theorems and techniques known for sheaves of logarithmic differentials on smooth varieties also hold in the dlt setting. Immediate applications include the existence of a pull-back map for reflexive differentials, generalisations of Bogomolov-Sommese type vanishing results, and a positive answer to the Lipman-Zariski conjecture for klt spaces.

  6. Regularized canonical correlation analysis with unlabeled data

    Institute of Scientific and Technical Information of China (English)

    Xi-chuan ZHOU; Hai-bin SHEN

    2009-01-01

    In standard canonical correlation analysis (CCA), the data from definite datasets are used to estimate their canonical correlation. In real applications, for example in bilingual text retrieval, there may be a large portion of data for which we do not know which set it belongs to. This part of the data is called unlabeled data, while the rest, from definite datasets, is called labeled data. We propose a novel method called regularized canonical correlation analysis (RCCA), which makes use of both labeled and unlabeled samples. Specifically, we learn to approximate the canonical correlation as if all data were labeled. Then, we describe a generalization of RCCA for the multi-set situation. Experiments on four real world datasets, Yeast, Cloud, Iris, and Haberman, demonstrate that, by incorporating the unlabeled data points, the accuracy of correlation coefficients can be improved by over 30%.
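
    For reference, the quantity being estimated can be illustrated with a plain two-view CCA with ridge-regularized within-view covariances. This is a generic sketch only; the way RCCA incorporates the unlabeled samples is not spelled out in the abstract and is not reproduced here.

```python
import numpy as np

def first_canonical_correlation(X, Y, reg=1e-3):
    """First canonical correlation of two views X (n x p) and Y (n x q), with a
    ridge term `reg` added to the within-view covariances for stability."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = len(X)
    Sxx = X.T @ X / n + reg * np.eye(X.shape[1])
    Syy = Y.T @ Y / n + reg * np.eye(Y.shape[1])
    Sxy = X.T @ Y / n
    # Largest eigenvalue of Sxx^{-1} Sxy Syy^{-1} Syx equals rho^2
    M = np.linalg.solve(Sxx, Sxy) @ np.linalg.solve(Syy, Sxy.T)
    return float(np.sqrt(np.max(np.linalg.eigvals(M).real)))
```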

  7. Canonical variate regression.

    Science.gov (United States)

    Luo, Chongliang; Liu, Jin; Dey, Dipak K; Chen, Kun

    2016-07-01

    In many fields, multi-view datasets, measuring multiple distinct but interrelated sets of characteristics on the same set of subjects, together with data on certain outcomes or phenotypes, are routinely collected. The objective in such a problem is often two-fold: both to explore the association structures of multiple sets of measurements and to develop a parsimonious model for predicting the future outcomes. We study a unified canonical variate regression framework to tackle the two problems simultaneously. The proposed criterion integrates multiple canonical correlation analysis with predictive modeling, balancing between the association strength of the canonical variates and their joint predictive power on the outcomes. Moreover, the proposed criterion seeks multiple sets of canonical variates simultaneously to enable the examination of their joint effects on the outcomes, and is able to handle multivariate and non-Gaussian outcomes. An efficient algorithm based on variable splitting and Lagrangian multipliers is proposed. Simulation studies show the superior performance of the proposed approach. We demonstrate the effectiveness of the proposed approach in an [Formula: see text] intercross mice study and an alcohol dependence study. PMID:26861909

  8. On Ensemble Nonlinear Kalman Filtering with Symmetric Analysis Ensembles

    KAUST Repository

    Luo, Xiaodong

    2010-09-19

    The ensemble square root filter (EnSRF) [1, 2, 3, 4] is a popular method for data assimilation in high dimensional systems (e.g., geophysics models). Essentially the EnSRF is a Monte Carlo implementation of the conventional Kalman filter (KF) [5, 6]. It is mainly different from the KF at the prediction steps, where it is some ensembles, rather than the means and covariance matrices, of the system state that are propagated forward. In doing this, the EnSRF is computationally more efficient than the KF, since propagating a covariance matrix forward in high dimensional systems is prohibitively expensive. In addition, the EnSRF is also very convenient in implementation. By propagating the ensembles of the system state, the EnSRF can be directly applied to nonlinear systems without any change in comparison to the assimilation procedures in linear systems. However, by adopting the Monte Carlo method, the EnSRF also incurs certain sampling errors. One way to alleviate this problem is to introduce certain symmetry to the ensembles, which can reduce the sampling errors and spurious modes in evaluation of the means and covariances of the ensembles [7]. In this contribution, we present two methods to produce symmetric ensembles. One is based on the unscented transform [8, 9], which leads to the unscented Kalman filter (UKF) [8, 9] and its variant, the ensemble unscented Kalman filter (EnUKF) [7]. The other is based on Stirling’s interpolation formula (SIF), which results in the divided difference filter (DDF) [10]. Here we propose a simplified divided difference filter (sDDF) in the context of ensemble filtering. The similarity and difference between the sDDF and the EnUKF will be discussed. Numerical experiments will also be conducted to investigate the performance of the sDDF and the EnUKF, and compare them to a well-established EnSRF, the ensemble transform Kalman filter (ETKF) [2].
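
    The symmetric-ensemble idea based on the unscented transform mentioned above can be illustrated by the standard sigma-point construction: 2n+1 members placed at the mean and at plus/minus scaled columns of a matrix square root of the covariance. This is a generic sketch of that construction, not of the EnUKF or sDDF themselves; the scaling parameter kappa is an assumption.

```python
import numpy as np

def sigma_points(mean, cov, kappa=1.0):
    """Symmetric sigma-point ensemble of the unscented transform for a state of
    dimension n: 2n+1 points with their associated weights."""
    n = len(mean)
    L = np.linalg.cholesky((n + kappa) * cov)        # scaled matrix square root
    pts = [mean] \
        + [mean + L[:, i] for i in range(n)] \
        + [mean - L[:, i] for i in range(n)]
    weights = np.full(2 * n + 1, 1.0 / (2.0 * (n + kappa)))
    weights[0] = kappa / (n + kappa)
    return np.array(pts), weights

# The weighted sample mean and covariance of the points reproduce `mean` and `cov`:
# pts, w = sigma_points(mu, P); np.allclose((w[:, None] * pts).sum(0), mu)
```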

  9. Quantum Gibbs ensemble Monte Carlo

    International Nuclear Information System (INIS)

    We present a path integral Monte Carlo method which is the full quantum analogue of the Gibbs ensemble Monte Carlo method of Panagiotopoulos to study the gas-liquid coexistence line of a classical fluid. Unlike previous extensions of Gibbs ensemble Monte Carlo to include quantum effects, our scheme is viable even for systems with strong quantum delocalization in the degenerate regime of temperature. This is demonstrated by an illustrative application to the gas-superfluid transition of 4He in two dimensions

  10. Quaternion Linear Canonical Transform Application

    OpenAIRE

    Bahri, Mawardi

    2015-01-01

    Quaternion linear canonical transform (QLCT) is a generalization of the classical linear canonical transform (LCT) using quaternion algebra. The focus of this paper is to introduce an application of the QLCT to the study of generalized swept-frequency filter

  11. Realizations of the Canonical Representation

    Indian Academy of Sciences (India)

    M K Vemuri

    2008-02-01

    A characterisation of the maximal abelian subalgebras of the bounded operators on Hilbert space that are normalised by the canonical representation of the Heisenberg group is given. This is used to classify the perfect realizations of the canonical representation.

  12. Sum of ranking differences (SRD) to ensemble multivariate calibration model merits for tuning parameter selection and comparing calibration methods

    International Nuclear Information System (INIS)

    Highlights: • Sum of ranking differences (SRD) used for tuning parameter selection based on fusion of multicriteria. • No weighting scheme is needed for the multicriteria. • SRD allows automatic selection of one model or a collection of models if so desired. • SRD allows simultaneous comparison of different calibration methods with tuning parameter selection. • New MATLAB programs are described and made available. - Abstract: Most multivariate calibration methods require selection of tuning parameters, such as partial least squares (PLS) or the Tikhonov regularization variant ridge regression (RR). Tuning parameter values determine the direction and magnitude of respective model vectors, thereby setting the resultant prediction abilities of the model vectors. Simultaneously, tuning parameter values establish the corresponding bias/variance and the underlying selectivity/sensitivity tradeoffs. Selection of the final tuning parameter is often accomplished through some form of cross-validation and the resultant root mean square error of cross-validation (RMSECV) values are evaluated. However, selection of a “good” tuning parameter with this one model evaluation merit is almost impossible. Including additional model merits assists tuning parameter selection to provide better balanced models as well as allowing for a reasonable comparison between calibration methods. Using multiple merits requires decisions to be made on how to combine and weight the merits into an information criterion. An abundance of options are possible. Presented in this paper is the sum of ranking differences (SRD) to ensemble a collection of model evaluation merits varying across tuning parameters. It is shown that the SRD consensus ranking of model tuning parameters allows automatic selection of the final model, or a collection of models if so desired. Essentially, the user’s preference for the degree of balance between bias and variance ultimately decides the merits used in SRD

  13. Sum of ranking differences (SRD) to ensemble multivariate calibration model merits for tuning parameter selection and comparing calibration methods

    Energy Technology Data Exchange (ETDEWEB)

    Kalivas, John H., E-mail: kalijohn@isu.edu [Department of Chemistry, Idaho State University, Pocatello, ID 83209 (United States); Héberger, Károly [Research Centre for Natural Sciences, Hungarian Academy of Sciences, Pusztaszeri út 59-67, 1025 Budapest (Hungary); Andries, Erik [Center for Advanced Research Computing, University of New Mexico, Albuquerque, NM 87106 (United States); Department of Mathematics, Central New Mexico Community College, Albuquerque, NM 87106 (United States)

    2015-04-15

    Highlights: • Sum of ranking differences (SRD) used for tuning parameter selection based on fusion of multicriteria. • No weighting scheme is needed for the multicriteria. • SRD allows automatic selection of one model or a collection of models if so desired. • SRD allows simultaneous comparison of different calibration methods with tuning parameter selection. • New MATLAB programs are described and made available. - Abstract: Most multivariate calibration methods require selection of tuning parameters, such as partial least squares (PLS) or the Tikhonov regularization variant ridge regression (RR). Tuning parameter values determine the direction and magnitude of respective model vectors, thereby setting the resultant prediction abilities of the model vectors. Simultaneously, tuning parameter values establish the corresponding bias/variance and the underlying selectivity/sensitivity tradeoffs. Selection of the final tuning parameter is often accomplished through some form of cross-validation and the resultant root mean square error of cross-validation (RMSECV) values are evaluated. However, selection of a “good” tuning parameter with this one model evaluation merit is almost impossible. Including additional model merits assists tuning parameter selection to provide better balanced models as well as allowing for a reasonable comparison between calibration methods. Using multiple merits requires decisions to be made on how to combine and weight the merits into an information criterion. An abundance of options are possible. Presented in this paper is the sum of ranking differences (SRD) to ensemble a collection of model evaluation merits varying across tuning parameters. It is shown that the SRD consensus ranking of model tuning parameters allows automatic selection of the final model, or a collection of models if so desired. Essentially, the user’s preference for the degree of balance between bias and variance ultimately decides the merits used in SRD
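
    The SRD calculation itself is simple to sketch: rank the candidate models (e.g., tuning-parameter values) by each merit, rank them by a reference column, and sum the absolute rank differences per model. The sketch below assumes all merits are "smaller is better", uses the row-wise minimum as the reference, and presumes the merits have been scaled to comparable ranges beforehand; the validation steps of the published procedure are not reproduced.

```python
import numpy as np
from scipy.stats import rankdata

def srd(merits, reference=None):
    """Sum of ranking differences. `merits` is an (n_merits, n_models) array in
    which each column holds one candidate model's values on every merit. The
    reference defaults to the row-wise minimum (ideal value of each merit)."""
    if reference is None:
        reference = merits.min(axis=1)
    ref_rank = rankdata(reference)
    return np.array([np.abs(rankdata(col) - ref_rank).sum() for col in merits.T])

# The column (model / tuning-parameter value) with the smallest SRD is closest
# to the consensus ranking across all merits: best = srd(merit_matrix).argmin()
```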

  14. A Framework for Non-Equilibrium Statistical Ensemble Theory

    Institute of Scientific and Technical Information of China (English)

    BI Qiao; HE Zu-Tan; LIU Jie

    2011-01-01

    Since Gibbs synthesized a general equilibrium statistical ensemble theory, many theorists have attempted to generalize the Gibbsian theory to the domain of non-equilibrium phenomena; however, the theory of non-equilibrium phenomena cannot be said to be as firmly established as the Gibbsian ensemble theory. In this work, we present a framework for a non-equilibrium statistical ensemble formalism based on a subdynamic kinetic equation (SKE) rooted in the Brussels-Austin school and subsequent work. The key to the construction is a similarity transformation between the Gibbsian ensemble formalism based on the Liouville equation and the subdynamic ensemble formalism based on the SKE. Using this formalism, we study the spin-boson system in the weak- and strong-coupling cases and easily obtain the reduced density operators for the canonical ensembles.

  15. Canonical quantization of macroscopic electromagnetism

    OpenAIRE

    Philbin, Thomas Gerard

    2010-01-01

    Application of the standard canonical quantization rules of quantum field theory to macroscopic electromagnetism has encountered obstacles due to material dispersion and absorption. This has led to a phenomenological approach to macroscopic quantum electrodynamics where no canonical formulation is attempted. In this paper macroscopic electromagnetism is canonically quantized. The results apply to any linear, inhomogeneous, magnetodielectric medium with dielectric functions that obey the Krame...

  16. Revisiting Canonical Quantization

    OpenAIRE

    Klauder, John R.

    2012-01-01

    Conventional canonical quantization procedures directly link various c-number and q-number quantities. Here, we advocate a different association of classical and quantum quantities that renders classical theory a natural subset of quantum theory with \\hbar>0, in conformity with the real world wherein nature has chosen \\hbar>0 rather than \\hbar=0. While keeping the good results of conventional procedures, some examples are presented for which the new procedures offer better results than conven...

  17. Canonical Infinitesimal Deformations

    OpenAIRE

    Ran, Ziv

    1998-01-01

    This paper gives a canonical construction, in terms of additive cohomological functors, of the universal formal deformation of a compact complex manifold without vector fields (more generally of a faithful $g$-module, where $g$ is a sheaf of Lie algebras without sections). The construction is based on a certain (multivariate) Jacobi complex $J(g)$ associatd to $g$: indeed ${\\mathbb C}\\oplus {\\mathbb H}^0(J(g))^*$ is precisely the base ring of the universal deformation.

  18. A CLUE for CLUster Ensembles

    OpenAIRE

    Kurt Hornik

    2005-01-01

    Cluster ensembles are collections of individual solutions to a given clustering problem which are useful or necessary to consider in a wide range of applications. The R package clue provides an extensible computational environment for creating and analyzing cluster ensembles, with basic data structures for representing partitions and hierarchies, and facilities for computing on these, including methods for measuring proximity and obtaining consensus and "secondary" clusterings....

  19. Similarity measures for protein ensembles

    DEFF Research Database (Denmark)

    Lindorff-Larsen, Kresten; Ferkinghoff-Borg, Jesper

    2009-01-01

    Analyses of similarities and changes in protein conformation can provide important information regarding protein function and evolution. Many scores, including the commonly used root mean square deviation, have therefore been developed to quantify the similarities of different protein conformations...... synthetic example from molecular dynamics simulations. We then apply the algorithms to revisit the problem of ensemble averaging during structure determination of proteins, and find that an ensemble refinement method is able to recover the correct distribution of conformations better than standard single...

  20. Meaning of temperature in different thermostatistical ensembles.

    Science.gov (United States)

    Hänggi, Peter; Hilbert, Stefan; Dunkel, Jörn

    2016-03-28

    Depending on the exact experimental conditions, the thermodynamic properties of physical systems can be related to one or more thermostatistical ensembles. Here, we survey the notion of thermodynamic temperature in different statistical ensembles, focusing in particular on subtleties that arise when ensembles become non-equivalent. The 'mother' of all ensembles, the microcanonical ensemble, uses entropy and internal energy (the most fundamental, dynamically conserved quantity) to derive temperature as a secondary thermodynamic variable. Over the past century, some confusion has been caused by the fact that several competing microcanonical entropy definitions are used in the literature, most commonly the volume and surface entropies introduced by Gibbs. It can be proved, however, that only the volume entropy satisfies exactly the traditional form of the laws of thermodynamics for a broad class of physical systems, including all standard classical Hamiltonian systems, regardless of their size. This mathematically rigorous fact implies that negative 'absolute' temperatures and Carnot efficiencies more than 1 are not achievable within a standard thermodynamical framework. As an important offspring of microcanonical thermostatistics, we shall briefly consider the canonical ensemble and comment on the validity of the Boltzmann weight factor. We conclude by addressing open mathematical problems that arise for systems with discrete energy spectra. PMID:26903095

  1. Stroke in Canon of Medicine

    Science.gov (United States)

    Alorizi, Seyed Morteza Emami; Nimruzi, Majid

    2016-01-01

    Background: Stroke has a huge negative impact on society and affects women more adversely. There is scarce evidence of any neuroprotective effect of commonly used drugs in acute stroke. Bushnell et al. provided a guideline focusing on the risk factors of stroke unique to women, including reproductive factors, metabolic syndrome, obesity, atrial fibrillation, and migraine with aura. The ten variables cited by Avicenna in the Canon of Medicine would compensate for the gaps mentioned in this guideline. The prescribed drugs should be selected with qualities opposite to the Mizaj of the disease (the warm-cold and wet-dry qualities induced by the disease state) and according to ten variables, including the nature of the affected organ, intensity of disease, sex, age, habit, season, place of living, occupation, stamina and physical status. Methods: Information related to stroke was searched in the Canon of Medicine, an outstanding book in traditional Persian medicine written by Avicenna. Results: A hemorrhagic stroke is the result of increasing sanguine humor in the body. Sanguine has a warm-wet quality and should be treated with food and drugs that quench the abundance of blood in the body. An acute episode of ischemic stroke is due to the abundance of phlegm that causes a blockage in the cerebral vessels. Phlegm has a cold-wet quality, and treatment should be started with compound medicines that either dissolve the phlegm or eject it from the body. Conclusion: Avicenna cited in the Canon of Medicine that women have a cold and wet temperament compared to men. For this reason, they are more prone to accumulation of phlegm in their body organs, including the liver, joints and vessels, and are consequently at risk of fatty liver, degenerative joint disease, atherosclerosis, and stroke, especially the ischemic type. This is in accordance with epidemiological studies that showed a higher rate of ischemic than hemorrhagic stroke in women. PMID:26722147

  2. Coupling machine learning methods with wavelet transforms and the bootstrap and boosting ensemble approaches for drought prediction

    Science.gov (United States)

    Belayneh, A.; Adamowski, J.; Khalil, B.; Quilty, J.

    2016-05-01

    This study explored the ability of coupled machine learning models and ensemble techniques to predict drought conditions in the Awash River Basin of Ethiopia. The potential of wavelet transforms coupled with the bootstrap and boosting ensemble techniques to develop reliable artificial neural network (ANN) and support vector regression (SVR) models was explored in this study for drought prediction. Wavelet analysis was used as a pre-processing tool and was shown to improve drought predictions. The Standardized Precipitation Index (SPI) (in this case SPI 3, SPI 12 and SPI 24) is a meteorological drought index that was forecasted using the aforementioned models and these SPI values represent short and long-term drought conditions. The performances of all models were compared using RMSE, MAE, and R2. The prediction results indicated that the use of the boosting ensemble technique consistently improved the correlation between observed and predicted SPIs. In addition, the use of wavelet analysis improved the prediction results of all models. Overall, the wavelet boosting ANN (WBS-ANN) and wavelet boosting SVR (WBS-SVR) models provided better prediction results compared to the other model types evaluated.
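
    The coupling described above (wavelet pre-processing of the SPI series, then an ensemble-boosted learner on lagged components) can be sketched in a few lines. This is only an illustration of the general idea under assumed settings (PyWavelets for the decomposition, AdaBoost with SVR base learners as a stand-in for the paper's WBS-SVR/WBS-ANN configurations, and an arbitrary lag length); it is not the authors' model.

```python
import numpy as np
import pywt                                      # PyWavelets, assumed available
from sklearn.ensemble import AdaBoostRegressor
from sklearn.svm import SVR

def wavelet_components(x, wavelet="db4", level=2):
    """Split a series into per-level wavelet components that sum back to x."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    comps = []
    for i in range(len(coeffs)):
        keep = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
        comps.append(pywt.waverec(keep, wavelet)[: len(x)])
    return np.array(comps)                       # shape: (level + 1, len(x))

def lagged_features(comps, target, lags=6):
    """Use the previous `lags` values of every component to predict target[t]."""
    X = np.array([comps[:, t - lags:t].ravel() for t in range(lags, len(target))])
    return X, target[lags:]

# spi = ...  # an SPI-3/12/24 series as a 1-D array
# X, y = lagged_features(wavelet_components(spi), spi)
# model = AdaBoostRegressor(SVR(), n_estimators=20).fit(X, y)   # boosted SVR stand-in
```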

  3. A Selective Fuzzy Clustering Ensemble Algorithm

    OpenAIRE

    Kai Li; Peng Li

    2013-01-01

    To improve the performance of clustering ensemble method, a selective fuzzy clustering ensemble algorithm is proposed. It mainly includes selection of clustering ensemble members and combination of clustering results. In the process of member selection, measure method is defined to select the better clustering members. Then some selected clustering members are viewed as hyper-graph in order to select the more influential hyper-edges (or features) and to weight the selected features. For proce...

  4. Multi-Model Ensemble Wake Vortex Prediction

    Science.gov (United States)

    Koerner, Stephan; Holzaepfel, Frank; Ahmad, Nash'at N.

    2015-01-01

    Several multi-model ensemble methods are investigated for predicting wake vortex transport and decay. This study is a joint effort between National Aeronautics and Space Administration and Deutsches Zentrum fuer Luft- und Raumfahrt to develop a multi-model ensemble capability using their wake models. An overview of different multi-model ensemble methods and their feasibility for wake applications is presented. The methods include Reliability Ensemble Averaging, Bayesian Model Averaging, and Monte Carlo Simulations. The methodologies are evaluated using data from wake vortex field experiments.

  5. Canonical brackets of a toy model for the Hodge theory without its canonical conjugate momenta

    CERN Document Server

    Shukla, D; Malik, R P

    2014-01-01

    We consider the toy model of a rigid rotor as an example of the Hodge theory within the framework of the Becchi-Rouet-Stora-Tyutin (BRST) formalism and show that the internal symmetries of this theory lead to the derivation of canonical brackets amongst the creation and annihilation operators of the dynamical variables where the definition of the canonical conjugate momenta is not required. We invoke only the spin-statistics theorem, normal ordering and basic concepts of continuous symmetries (and their generators) to derive the canonical brackets for the model of a one (0 + 1)-dimensional (1D) rigid rotor without using the definition of the canonical conjugate momenta anywhere. Our present method of derivation of the basic brackets is conjectured to be true for a class of theories that provide a set of tractable physical examples for the Hodge theory.

  6. Canonical brackets of a toy model for the Hodge theory without its canonical conjugate momenta

    Science.gov (United States)

    Shukla, D.; Bhanja, T.; Malik, R. P.

    2015-07-01

    We consider the toy model of a rigid rotor as an example of the Hodge theory within the framework of Becchi-Rouet-Stora-Tyutin (BRST) formalism and show that the internal symmetries of this theory lead to the derivation of canonical brackets amongst the creation and annihilation operators of the dynamical variables where the definition of the canonical conjugate momenta is not required. We invoke only the spin-statistics theorem, normal ordering and basic concepts of continuous symmetries (and their generators) to derive the canonical brackets for the model of a one (0 + 1)-dimensional (1D) rigid rotor without using the definition of the canonical conjugate momenta anywhere. Our present method of derivation of the basic brackets is conjectured to be true for a class of theories that provide a set of tractable physical examples for the Hodge theory.

  7. [Epilepsy and Canon Law].

    Science.gov (United States)

    Bonduelle, M

    1987-01-01

    The Code of Canon Law (Codex Iuris Canonici), promulgated in 1917 as a codification of the laws and jurisprudence that had ruled the early Church, governed the ecclesiastical condition of the Roman Church until its reorganisation in 1983. It forbade ordination, or the exercise of orders already received, to "those who are or were epileptics either not quite in their right mind or possessed by the Evil One". The whole context, and in particular the paragraph treating bodily defects, indicated that these three conditions were juxtaposed, not confused with one another. The texts based these dispositions not on a malefic view of epilepsy inherited from the Morbus Sacer of Antiquity, but on decency and on the risk incurred by the Eucharist in case of a fit. Some derogations could attenuate the severity of these dispositions, as jurisprudence took the progress of epileptology and therapeutics into consideration. In the new Code of Canon Law (1983) physical disabilities were removed from the text, as were demonic possession and epilepsy, the only impediment being "insanity or other psychic defect", the appreciation of which is left to experts. Concerning poorly controlled epilepsies, we believe that experts will be allowed to express their opinion and that a new jurisprudence will make up for the silence of the law. PMID:3310183

  8. Bridging the gap between single molecule and ensemble methods for measuring lateral dynamics in the plasma membrane

    DEFF Research Database (Denmark)

    Christensen, Eva Arnspang; Schwartzentruber, J.; Clausen, M. P.;

    2013-01-01

    comparing the results for a biotinylated lipid labeled at high densities with Atto647N-strepatvidin (sAv) or sparse densities with sAv-QDs. In this latter case, we see that the recovered diffusion rate is two-fold greater for the same lipid and in the same cell-type when labeled with Atto647N-sAv as...... compared to sAv-QDs. This data demonstrates that kICS can be used for analysis of single molecule data and furthermore can bridge between samples with a labeling densities ranging from single molecule to ensemble level measurements....

  9. Boundary conditions in first order gravity: Hamiltonian and Ensemble

    OpenAIRE

    Aros, Rodrigo

    2005-01-01

    In this work two different boundary conditions for first order gravity, corresponding to a null and a negative cosmological constant respectively, are studied. Both boundary conditions allow one to obtain the standard black hole thermodynamics. Furthermore, both boundary conditions define a canonical ensemble. Additionally, the quasilocal energy definition is obtained for the null cosmological constant case.

  10. Canonic form of linear quaternion functions

    OpenAIRE

    Sangwine, Stephen J.

    2008-01-01

    The general linear quaternion function of degree one is a sum of terms with quaternion coefficients on the left and right. The paper considers the canonic form of such a function, and builds on the recent work of Todd Ell, who has shown that any such function may be represented using at most four quaternion coefficients. In this paper, a new and simple method is presented for obtaining these coefficients numerically using a matrix approach which also gives an alternative proof of the canonic ...

  11. Ensemble algorithms in reinforcement learning.

    Science.gov (United States)

    Wiering, Marco A; van Hasselt, Hado

    2008-08-01

    This paper describes several ensemble methods that combine multiple different reinforcement learning (RL) algorithms in a single agent. The aim is to enhance learning speed and final performance by combining the chosen actions or action probabilities of different RL algorithms. We designed and implemented four different ensemble methods combining the following five different RL algorithms: Q-learning, Sarsa, actor-critic (AC), QV-learning, and AC learning automaton. The intuitively designed ensemble methods, namely, majority voting (MV), rank voting, Boltzmann multiplication (BM), and Boltzmann addition, combine the policies derived from the value functions of the different RL algorithms, in contrast to previous work where ensemble methods have been used in RL for representing and learning a single value function. We show experiments on five maze problems of varying complexity; the first problem is simple, but the other four maze tasks are of a dynamic or partially observable nature. The results indicate that the BM and MV ensembles significantly outperform the single RL algorithms. PMID:18632380
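
    Two of the combination rules named above, majority voting (MV) and Boltzmann multiplication (BM), are easy to sketch for tabular value functions and policies. The sketch assumes discrete states and actions and simple array representations; the full learning loops of the five RL algorithms are of course not reproduced.

```python
import numpy as np

def majority_vote_action(q_tables, state):
    """MV: each algorithm votes for its greedy action; ties go to the lowest
    action index (a simplifying assumption)."""
    greedy = [int(np.argmax(q[state])) for q in q_tables]     # q: (n_states, n_actions)
    return int(np.bincount(greedy).argmax())

def boltzmann_multiplication(policies, state):
    """BM: multiply the action probabilities of the individual policies and
    renormalize to obtain the ensemble policy for this state."""
    p = np.ones_like(policies[0][state], dtype=float)
    for pi in policies:                                       # pi: (n_states, n_actions)
        p *= pi[state]
    return p / p.sum()
```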

  12. Multinomial logistic regression ensembles.

    Science.gov (United States)

    Lee, Kyewon; Ahn, Hongshik; Moon, Hojin; Kodell, Ralph L; Chen, James J

    2013-05-01

    This article proposes a method for multiclass classification problems using ensembles of multinomial logistic regression models. A multinomial logit model is used as a base classifier in ensembles from random partitions of predictors. The multinomial logit model can be applied to each mutually exclusive subset of the feature space without variable selection. By combining multiple models the proposed method can handle a huge database without a constraint needed for analyzing high-dimensional data, and the random partition can improve the prediction accuracy by reducing the correlation among base classifiers. The proposed method is implemented using R, and the performance including overall prediction accuracy, sensitivity, and specificity for each category is evaluated on two real data sets and simulation data sets. To investigate the quality of prediction in terms of sensitivity and specificity, the area under the receiver operating characteristic (ROC) curve (AUC) is also examined. The performance of the proposed model is compared to a single multinomial logit model and it shows a substantial improvement in overall prediction accuracy. The proposed method is also compared with other classification methods such as the random forest, support vector machines, and random multinomial logit model. PMID:23611203
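
    The random-partition idea described above can be sketched as follows: the predictors are split into mutually exclusive blocks, one multinomial logit model is fitted per block, and the class probabilities are averaged. The number of blocks, the use of probability averaging rather than voting, and scikit-learn's LogisticRegression as the multinomial logit are assumptions of this sketch, not details taken from the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_mlr_ensemble(X, y, n_blocks=5, seed=0):
    """One multinomial logit model per random, mutually exclusive block of predictors."""
    rng = np.random.default_rng(seed)
    blocks = np.array_split(rng.permutation(X.shape[1]), n_blocks)
    return [(b, LogisticRegression(max_iter=2000).fit(X[:, b], y)) for b in blocks]

def predict_mlr_ensemble(models, X):
    """Average the class-probability outputs of the block models (assumes every
    block model has seen all classes, so the class ordering matches)."""
    probs = np.mean([m.predict_proba(X[:, b]) for b, m in models], axis=0)
    return models[0][1].classes_[probs.argmax(axis=1)]
```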

  13. Generation of scenarios from calibrated ensemble forecasts with a dynamic ensemble copula coupling approach

    CERN Document Server

    Bouallegue, Zied Ben; Theis, Susanne E; Pinson, Pierre

    2015-01-01

    Probabilistic forecasts in the form of ensemble of scenarios are required for complex decision making processes. Ensemble forecasting systems provide such products, but the spatio-temporal structures of the forecast uncertainty are lost when statistical calibration of the ensemble forecasts is applied for each lead time and location independently. Non-parametric approaches allow the reconstruction of spatio-temporal joint probability distributions at a low computational cost. For example, the ensemble copula coupling (ECC) method consists in rebuilding the multivariate aspect of the forecast from the original ensemble forecasts. Based on the assumption of error stationarity, parametric methods aim to fully describe the forecast dependence structures. In this study, the concept of ECC is combined with past data statistics in order to account for the autocorrelation of the forecast error. The new approach, which preserves the dynamical development of the ensemble members, is called dynamic ensemble copula coupling (...
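
    The basic ECC step that the dynamic variant builds on is easy to sketch: at every margin (lead time and location), the calibrated samples are rearranged into the rank order of the raw ensemble, so that the raw ensemble's space-time dependence structure is re-imposed on the calibrated margins. The dynamic extension using past error statistics is not reproduced here; array shapes are assumptions.

```python
import numpy as np

def ecc_reorder(raw_ensemble, calibrated_samples):
    """Standard ECC reordering. Both inputs have shape (n_members, n_margins);
    the output carries the calibrated marginal values in the raw ensemble's rank order."""
    out = np.empty_like(calibrated_samples)
    for j in range(raw_ensemble.shape[1]):
        ranks = np.argsort(np.argsort(raw_ensemble[:, j]))   # rank of each raw member
        out[:, j] = np.sort(calibrated_samples[:, j])[ranks]
    return out
```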

  14. Canonical correlation analysis of course and teacher evaluation

    DEFF Research Database (Denmark)

    Sliusarenko, Tamara; Ersbøll, Bjarne Kjær

    2010-01-01

    information obtained from the course evaluation form overlaps with information obtained from the teacher evaluation form. Employing canonical correlation analysis it was found that course and teacher evaluations are correlated. However, the structure of the canonical correlation is subject to change with...... changes in teaching methods from one year to another....

  15. Building hazard maps of extreme daily rainy events from PDF ensemble, via REA method, on Senegal River Basin

    Directory of Open Access Journals (Sweden)

    J. D. Giraldo

    2011-04-01

    Full Text Available The Sudano-Sahelian zone of West Africa, one of the poorest regions of the Earth, is characterized by high rainfall variability and rapid population growth. In this region, heavy storm events frequently cause extensive damage. Nonetheless, the projections for change in extreme rainfall values have shown a great divergence between Regional Climate Models (RCMs), increasing the forecast uncertainty. Novel methodologies should be applied that take into account both the variability provided by different RCMs and the non-stationary nature of the time series for the building of hazard maps of extreme rainfall events. The present work focuses on the probability density function (PDF)-based evaluation, and a simple quantitative measure, of how well each RCM considered can capture the observed annual maximum daily rainfall (AMDR) series on the Senegal River basin. Since meaningful trends have been detected in historical rainfall time series for the region, non-stationary probabilistic models were used to fit the PDF parameters to the AMDR time series. In the development of the PDF ensemble by bootstrapping techniques, Reliability Ensemble Averaging (REA) maps were applied to score the RCMs. The REA factors were computed using a metric to evaluate the agreement between observed -or best estimated- PDFs and those simulated with each RCM. The assessment of plausible regional trends associated with the return period, from the hazard maps of AMDR, showed a general rise, owing to an increase in the mean and the variability of extreme precipitation. These spatial-temporal distributions could be considered by local stakeholders in such a way as to reach a better balance between mitigation and adaptation.

  16. Data assimilation the ensemble Kalman filter

    CERN Document Server

    Evensen, Geir

    2006-01-01

    Covers data assimilation and inverse methods, including both traditional state estimation and parameter estimation. This text and reference focuses on various popular data assimilation methods, such as weak and strong constraint variational methods and ensemble filters and smoothers.

  17. Ensemble clustering in deterministic ensemble Kalman filters

    Directory of Open Access Journals (Sweden)

    Javier Amezcua

    2012-07-01

    Full Text Available Ensemble clustering (EC) can arise in data assimilation with ensemble square root filters (EnSRFs) using non-linear models: an M-member ensemble splits into a single outlier and a cluster of M–1 members. The stochastic Ensemble Kalman Filter does not present this problem. Modifications to the EnSRFs by a periodic resampling of the ensemble through random rotations have been proposed to address it. We introduce a metric to quantify the presence of EC and present evidence to dispel the notion that EC leads to filter failure. Starting from a univariate model, we show that EC is not a permanent but a transient phenomenon; it occurs intermittently in non-linear models. We perform a series of data assimilation experiments using a standard EnSRF and an EnSRF modified by resampling through random rotations. The modified EnSRF thus alleviates issues associated with EC at the cost of traceability of individual ensemble trajectories and cannot use some of the algorithms that enhance the performance of the standard EnSRF. In the non-linear regimes of low-dimensional models, the analysis root mean square error of the standard EnSRF slowly grows with ensemble size if the size is larger than the dimension of the model state. However, we do not observe this problem in a more complex model that uses an ensemble size much smaller than the dimension of the model state, along with inflation and localisation. Overall, we find that transient EC does not handicap the performance of the standard EnSRF.

  18. A COMPREHENSIVE EVOLUTIONARY APPROACH FOR NEURAL NETWORK ENSEMBLES AUTOMATIC DESIGN

    OpenAIRE

    Bukhtoyarov, V.; Semenkin, E.

    2010-01-01

    A new comprehensive approach for the design of neural network ensembles is proposed. It consists of a method for the automatic design of neural networks and a method for automatically forming an ensemble solution from the solutions of the separate neural networks. It is demonstrated that the proposed approach is no less effective than a number of other approaches to neural network ensemble design.

  19. A Selective Fuzzy Clustering Ensemble Algorithm

    Directory of Open Access Journals (Sweden)

    Kai Li

    2013-12-01

    Full Text Available To improve the performance of clustering ensemble methods, a selective fuzzy clustering ensemble algorithm is proposed. It mainly consists of the selection of clustering ensemble members and the combination of clustering results. In the member-selection step, a measure is defined to select the better clustering members. Some of the selected clustering members are then viewed as a hypergraph in order to select the more influential hyperedges (or features) and to weight the selected features. To process hyperedges with fuzzy membership, the CSPA and MCLA consensus functions are generalized. In the experiments, several UCI data sets are chosen to test the performance of the presented algorithm. The experimental results show that the proposed ensemble method obtains better clustering results.

  20. A mollified Ensemble Kalman filter

    CERN Document Server

    Bergemann, Kay

    2010-01-01

    It is well recognized that discontinuous analysis increments of sequential data assimilation systems, such as ensemble Kalman filters, might lead to spurious high frequency adjustment processes in the model dynamics. Various methods have been devised to continuously spread out the analysis increments over a fixed time interval centered about analysis time. Among these techniques are nudging and incremental analysis updates (IAU). Here we propose another alternative, which may be viewed as a hybrid of nudging and IAU and which arises naturally from a recently proposed continuous formulation of the ensemble Kalman analysis step. A new slow-fast extension of the popular Lorenz-96 model is introduced to demonstrate the properties of the proposed mollified ensemble Kalman filter.
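    A schematic of the incremental analysis update (IAU) idea mentioned above, which the mollified filter generalizes; this is a sketch with a trivial scalar toy model, not the proposed mollified ensemble Kalman filter itself. Instead of adding the full analysis increment at the analysis time, a fraction of it is added at every model step over a window.

```python
import numpy as np

def step(x, dt=0.1):
    """Toy model: slow relaxation towards zero."""
    return x - 0.05 * x * dt

def run_with_iau(x0, increment, n_steps=20):
    """Spread the analysis increment uniformly over n_steps model steps (IAU)."""
    x, traj = x0, []
    for _ in range(n_steps):
        x = step(x) + increment / n_steps   # add a slice of the increment each step
        traj.append(x)
    return np.array(traj)

def run_with_jump(x0, increment, n_steps=20):
    """Add the whole increment at once at analysis time, then integrate."""
    x, traj = x0 + increment, []
    for _ in range(n_steps):
        x = step(x)
        traj.append(x)
    return np.array(traj)

iau = run_with_iau(1.0, 0.5)
jump = run_with_jump(1.0, 0.5)
print("final states:", iau[-1], jump[-1])   # similar end points, but IAU avoids the initial jump
```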

  1. Canonical duties, liabilities of trustees and administrators.

    Science.gov (United States)

    Morrisey, F G

    1985-06-01

    The new Code of Canon Law outlines a number of duties of those who have responsibility for administering the Church's temporal goods. Before assuming office, administrators must pledge to be efficient and faithful, and they must prepare an inventory of goods belonging to the juridic person they serve. Among their duties, administrators must: Ensure that adequate insurance is provided; Use civilly valid methods to protect canonical ownership of the goods; Observe civil and canon law prescriptions as well as donors' intentions; Collect and safeguard revenues, repay debts, and invest funds securely; Maintain accurate records, keep documents secure, and prepare an annual budget; Prepare an annual report and present it to the Ordinary where prescribed; Observe civil law concerning labor and social policy, and pay employees a just and decent wage. Administrators who carry out acts that are invalid canonically are liable for such acts. The juridic person is not liable, unless it derived benefit from the transaction. Liability is especially high when the sale of property is involved or when a contract is entered into without proper canonical consent. Although Church law is relatively powerless to punish those who have been negligent, stewards, administrators, and trustees must do all they can to be faithful to the responsibility with which they have been entrusted. PMID:10271510

  2. Canonical versus grand canonical treatment of the conservation laws

    International Nuclear Information System (INIS)

    The differences between the canonical and the grand canonical treatment of the conservation laws in relativistic statistical thermodynamics are discussed. The possible implications for the thermodynamic description of hadronic matter created in particle or ion collisions are considered

  3. Ensemble algorithms in reinforcement learning

    NARCIS (Netherlands)

    Wiering, Marco A; van Hasselt, Hado

    2008-01-01

    This paper describes several ensemble methods that combine multiple different reinforcement learning (RL) algorithms in a single agent. The aim is to enhance learning speed and final performance by combining the chosen actions or action probabilities of different RL algorithms. We designed and imple
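    A sketch of one way such a combination can work (names and numbers are illustrative, and this is only one of several combination schemes of this general kind, not necessarily the ones designed in the paper): each RL algorithm contributes an action-preference vector, the preferences are turned into Boltzmann probabilities, and the ensemble averages the probabilities before selecting an action.

```python
import numpy as np

rng = np.random.default_rng(3)

def boltzmann(preferences, temperature=1.0):
    """Turn action preferences (e.g. Q-values) into a probability distribution."""
    z = np.exp((preferences - preferences.max()) / temperature)
    return z / z.sum()

# Hypothetical Q-values for 4 actions from three different RL algorithms
# (e.g. Q-learning, SARSA, actor-critic) in the same state.
q_values = {
    "q_learning":   np.array([1.0, 2.0, 0.5, 1.5]),
    "sarsa":        np.array([0.8, 1.9, 0.9, 1.4]),
    "actor_critic": np.array([1.2, 1.5, 0.4, 2.0]),
}

# Average the action probabilities of the individual algorithms, then sample an action.
probs = np.mean([boltzmann(q) for q in q_values.values()], axis=0)
action = rng.choice(len(probs), p=probs)
print("combined probabilities:", np.round(probs, 3), " chosen action:", action)
```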

  4. Generation of scenarios from calibrated ensemble forecasts with a dynamic ensemble copula coupling approach

    DEFF Research Database (Denmark)

    Ben Bouallègue, Zied; Heppelmann, Tobias; Theis, Susanne E.;

    2015-01-01

    is applied for each lead time and location independently. Non-parametric approaches allow the reconstruction of spatio-temporal joint probability distributions at a low computational cost. For example, the ensemble copula coupling (ECC) method consists in rebuilding the multivariate aspect of the forecast … from the original ensemble forecasts. Based on the assumption of error stationarity, parametric methods aim to fully describe the forecast dependence structures. In this study, the concept of ECC is combined with past data statistics in order to account for the autocorrelation of the forecast error. … The new approach, which preserves the dynamical development of the ensemble members, is called dynamic ensemble copula coupling (d-ECC). The ensemble-based empirical copulas, ECC and d-ECC, are applied to wind forecasts from the high-resolution ensemble system COSMO-DE-EPS run operationally at the German...
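    The basic ECC step referred to above admits a very short sketch: calibrated samples are reordered according to the rank structure of the raw ensemble, so the multivariate (here, temporal) dependence of the raw members is transferred to the calibrated ones. The d-ECC extension with forecast-error autocorrelation is not reproduced here, and the data below are synthetic placeholders.

```python
import numpy as np

rng = np.random.default_rng(4)

n_members, n_lead_times = 10, 6
raw = rng.normal(size=(n_members, n_lead_times))            # raw ensemble forecasts
# Hypothetical calibrated samples, e.g. quantiles of a post-processed predictive
# distribution, produced independently for each lead time.
calibrated = rng.normal(loc=0.5, scale=1.2, size=(n_members, n_lead_times))

def ecc(raw, calibrated):
    """Ensemble copula coupling: reorder calibrated samples with the raw ensemble ranks."""
    members = np.empty_like(calibrated)
    for t in range(raw.shape[1]):
        ranks = raw[:, t].argsort().argsort()               # rank of each raw member
        members[:, t] = np.sort(calibrated[:, t])[ranks]
    return members

scenarios = ecc(raw, calibrated)
# Each column keeps the calibrated marginal values, but the member-to-member ordering
# now follows the raw ensemble, restoring a physically plausible temporal structure.
print(scenarios.shape)
```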

  5. Online Learning with Ensembles

    OpenAIRE

    Urbanczik, R

    1999-01-01

    Supervised online learning with an ensemble of students randomized by the choice of initial conditions is analyzed. For the case of the perceptron learning rule, asymptotically the same improvement in the generalization error of the ensemble compared to the performance of a single student is found as in Gibbs learning. For more optimized learning rules, however, using an ensemble yields no improvement. This is explained by showing that for any learning rule $f$ a transform $\\tilde{f}$ exists,...

  6. Morphing ensemble Kalman filters

    OpenAIRE

    Beezley, Jonathan D.; Mandel, Jan

    2008-01-01

    A new type of ensemble filter is proposed, which combines an ensemble Kalman filter (EnKF) with the ideas of morphing and registration from image processing. This results in filters suitable for non-linear problems whose solutions exhibit moving coherent features, such as thin interfaces in wildfire modelling. The ensemble members are represented as the composition of one common state with a spatial transformation, called registration mapping, plus a residual. A fully automatic registration m...

  7. Morphing Ensemble Kalman Filters

    OpenAIRE

    Beezley, Jonathan D.; Mandel, Jan

    2007-01-01

    A new type of ensemble filter is proposed, which combines an ensemble Kalman filter (EnKF) with the ideas of morphing and registration from image processing. This results in filters suitable for nonlinear problems whose solutions exhibit moving coherent features, such as thin interfaces in wildfire modeling. The ensemble members are represented as the composition of one common state with a spatial transformation, called registration mapping, plus a residual. A fully automatic registration met...

  8. The canon as text for a biblical theology

    Directory of Open Access Journals (Sweden)

    James A. Loader

    2005-10-01

    Full Text Available The novelty of the canonical approach is questioned and its fascination at least partly traced to the Reformation, as well as to the post-Reformation’s need for a clear and authoritative canon to perform the function previously performed by the church. This does not minimise the elusiveness and deeply contradictory positions both within the canon and triggered by it. On the one hand, the canon itself is a centripetal phenomenon and does play an important role in exegesis and theology. Even so, on the other hand, it not only contains many difficulties, but also causes various additional problems of a formal as well as a theological nature. The question is mooted whether the canonical approach alleviates or aggravates the dilemma. Since this approach has become a major factor in Christian theology, aspects of the Christian canon are used to gauge whether “canon” is an appropriate category for eliminating difficulties that arise by virtue of its own existence. Problematic uses and appropriations of several Old Testament canons are advanced, as well as evidence in the New Testament of a consciousness that the “old” has been surpassed (“Überbietungsbewußtsein”). It is maintained that at least the Childs version of the canonical approach fails to smooth out these and similar difficulties. As a method it can cater for the New Testament’s (superior) role as the hermeneutical standard for evaluating the Old, but founders on its inability to create the theological unity it claims can solve religious problems exposed by Old Testament historical criticism. It is concluded that canon as a category cannot be dispensed with, but is useful for the opposite of the purpose to which it is conventionally put: far from bringing about theological “unity” or producing a standard for “correct” exegesis, it requires different readings of different canons.

  9. On the canonical quantization of local field theories

    International Nuclear Information System (INIS)

    A nonconventional extension of the canonical quantization method for local field theories is presented. Some difficulties of the conventional approach are avoided, e.g. there are no divergencies in the corresponding S-matrices. (author)

  10. Asymptotic distributions in the projection pursuit based canonical correlation analysis

    Institute of Scientific and Technical Information of China (English)

    2010-01-01

    In this paper, associations between two sets of random variables based on the projection pursuit (PP) method are studied. The asymptotic normal distributions of estimators of the PP based canonical correlations and weighting vectors are derived.

  11. Improving the sampling efficiency of the Grand Canonical Simulated Quenching approach

    Energy Technology Data Exchange (ETDEWEB)

    Perez, Danny [Los Alamos National Laboratory; Vernon, Louis J. [Los Alamos National Laboratory

    2012-04-04

    Most common atomistic simulation techniques, like molecular dynamics or Metropolis Monte Carlo, operate under a constant interatomic Hamiltonian with a fixed number of atoms. Internal (atom positions or velocities) or external (simulation cell size or geometry) variables are then evolved dynamically or stochastically to yield sampling in different ensembles, such as microcanonical (NVE), canonical (NVT), isothermal-isobaric (NPT), etc. Averages are then taken to compute relevant physical properties. At least two limitations of these standard approaches can seriously hamper their application to many important systems: (1) they do not allow for the exchange of particles with a reservoir, and (2) the sampling efficiency is insufficient to obtain converged results because of the very long intrinsic timescales associated with these quantities. To fix ideas, one might want to identify low (free) energy configurations of grain boundaries (GB). In reality, grain boundaries are in contact with the grains, which act as reservoirs of defects (e.g., vacancies and interstitials). Since the GB can exchange particles with its environment, the most stable configuration cannot provably be found by sampling from NVE or NVT ensembles alone: one needs to allow the number of atoms in the sample to fluctuate. The first limitation can be circumvented by working in the grand canonical ensemble (μVT) or its derivatives (such as the semi-grand-canonical ensemble, useful for the study of substitutional alloys). Monte Carlo methods were the first to be adapted to this kind of system, where the number of atoms is allowed to fluctuate. Many of these methods are based on the Widom insertion method [Widom63], where the chemical potential of a given chemical species can be inferred from the potential energy changes upon random insertion of a new particle within the simulation cell. Other techniques, such as the Gibbs ensemble Monte Carlo [Panagiotopoulos87] where exchanges of particles are
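    The Widom insertion estimate mentioned above can be sketched in a few lines: for a stored configuration of a Lennard-Jones fluid, ghost particles are inserted at random positions and the excess chemical potential follows from the average Boltzmann factor of the insertion energy, μ_ex = -kT ln⟨exp(-ΔU/kT)⟩. The configuration below is random rather than properly equilibrated, so the number it prints is only illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)

# Reduced Lennard-Jones units: epsilon = sigma = kT = 1.
N, L, kT = 100, 6.0, 1.0                      # particles, box length, temperature
positions = rng.uniform(0.0, L, size=(N, 3))  # stand-in for an equilibrated configuration

def insertion_energy(r_new, positions, L):
    """LJ energy of a test particle at r_new, with the minimum-image convention."""
    d = positions - r_new
    d -= L * np.round(d / L)                  # periodic boundaries
    r2 = np.maximum(np.sum(d * d, axis=1), 1e-12)   # guard against exact overlap
    inv6 = 1.0 / r2**3
    return np.sum(4.0 * (inv6**2 - inv6))

n_trials = 20000
boltz = np.empty(n_trials)
for i in range(n_trials):
    r_test = rng.uniform(0.0, L, size=3)
    boltz[i] = np.exp(-insertion_energy(r_test, positions, L) / kT)

mu_excess = -kT * np.log(boltz.mean())
print(f"estimated excess chemical potential: {mu_excess:.3f} (reduced units)")
```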

  12. ENCORE: Software for Quantitative Ensemble Comparison.

    Directory of Open Access Journals (Sweden)

    Matteo Tiberti

    2015-10-01

    Full Text Available There is increasing evidence that protein dynamics and conformational changes can play an important role in modulating biological function. As a result, experimental and computational methods are being developed, often synergistically, to study the dynamical heterogeneity of a protein or other macromolecules in solution. Thus, methods such as molecular dynamics simulations or ensemble refinement approaches have provided conformational ensembles that can be used to understand protein function and biophysics. These developments have in turn created a need for algorithms and software that can be used to compare structural ensembles in the same way as the root-mean-square deviation is often used to compare static structures. Although a few such approaches have been proposed, these can be difficult to implement efficiently, hindering broader application and further development. Here, we present an easily accessible software toolkit, called ENCORE, which can be used to compare conformational ensembles generated either from simulations alone or synergistically with experiments. ENCORE implements three previously described methods for ensemble comparison, each of which can be used to quantify the similarity between conformational ensembles by estimating the overlap between the probability distributions that underlie them. We demonstrate the kinds of insights that can be obtained by providing examples of three typical use-cases: comparing ensembles generated with different molecular force fields, assessing convergence in molecular simulations, and calculating differences and similarities in structural ensembles refined with various sources of experimental data. We also demonstrate efficient computational scaling for typical analyses, and robustness against both the size and sampling of the ensembles. ENCORE is freely available and extendable, integrates with the established MDAnalysis software package, reads ensemble data in many common formats, and can

  13. On Complex Supermanifolds with Trivial Canonical Bundle

    CERN Document Server

    Groeger, Josua

    2016-01-01

    We give an algebraic characterisation for the triviality of the canonical bundle of a complex supermanifold in terms of a certain Batalin-Vilkovisky superalgebra structure. As an application, we study the Calabi-Yau case, in which an explicit formula in terms of the Levi-Civita connection is achieved. Our methods include the use of complex integral forms and the recently developed theory of superholonomy.

  14. Canonical quantization of substrate-less fields

    CERN Document Server

    Gründler, Gerold

    2015-01-01

    An improved law for the canonical quantization of fields is presented, which is based on the distinction between fields which have a material substrate, and substrate-less fields. It is shown that the improved quantization method solves the (old) cosmological constant problem for all fields of the standard model of elementary particles and for the metric field, but not for the hypothetical inflaton fields, without compromising any of the achievements of the established quantum field theories.

  15. THEOLOGY OF CANONS IN CATHOLIC UNIVERSITIES?

    OpenAIRE

    IVÁN FEDERICO MEJÍA A

    2010-01-01

    Is it useful today, or necessary, an interpretation of the Code of Canon Law from Christology? The article examines some opposition which consider as inappropriate to search for foundations or links from "outside" the Code itself and the normal legislative living tradition of the Catholic Church. The Second Vatican Council and Pope John Paul II sponsored a theological interpretation of the Code, and this article summarizes some features of the validation, method, and the successful applicatio...

  16. Layered Ensemble Architecture for Time Series Forecasting.

    Science.gov (United States)

    Rahman, Md Mustafizur; Islam, Md Monirul; Murase, Kazuyuki; Yao, Xin

    2016-01-01

    Time series forecasting (TSF) has been widely used in many application areas such as science, engineering, and finance. The phenomena generating time series are usually unknown, and the information available for forecasting is limited to the past values of the series. It is, therefore, necessary to use an appropriate number of past values, termed the lag, for forecasting. This paper proposes a layered ensemble architecture (LEA) for TSF problems. Our LEA consists of two layers, each of which uses an ensemble of multilayer perceptron (MLP) networks. While the first ensemble layer tries to find an appropriate lag, the second ensemble layer employs the obtained lag for forecasting. Unlike most previous work on TSF, the proposed architecture considers both the accuracy and the diversity of the individual networks in constructing an ensemble. LEA trains the different networks in the ensemble on different training sets with the aim of maintaining diversity among the networks. However, it uses the appropriate lag and combines the best trained networks to construct the ensemble, which reflects LEA's emphasis on the accuracy of the networks. The proposed architecture has been tested extensively on time series data from the NN3 and NN5 neural network forecasting competitions. It has also been tested on several standard benchmark time series data sets. In terms of forecasting accuracy, our experimental results clearly show that LEA is better than other ensemble and non-ensemble methods. PMID:25751882

  17. Comparison of initial perturbation methods for the mesoscale ensemble prediction system of the Meteorological Research Institute for the WWRP Beijing 2008 Olympics Research and Development Project (B08RDP)

    Science.gov (United States)

    Saito, Kazuo; Hara, Masahiro; Kunii, Masaru; Seko, Hiromu; Yamaguchi, Munehiko

    2011-05-01

    Different initial perturbation methods for the mesoscale ensemble prediction were compared by the Meteorological Research Institute (MRI) as a part of the intercomparison of mesoscale ensemble prediction systems (EPSs) of the World Weather Research Programme (WWRP) Beijing 2008 Olympics Research and Development Project (B08RDP). Five initial perturbation methods for mesoscale ensemble prediction were developed for B08RDP and compared at MRI: (1) a downscaling method of the Japan Meteorological Agency (JMA)'s operational one-week EPS (WEP), (2) a targeted global model singular vector (GSV) method, (3) a mesoscale model singular vector (MSV) method based on the adjoint model of the JMA non-hydrostatic model (NHM), (4) a mesoscale breeding growing mode (MBD) method based on the NHM forecast and (5) a local ensemble transform (LET) method based on the local ensemble transform Kalman filter (LETKF) using NHM. These perturbation methods were applied to the preliminary experiments of the B08RDP Tier-1 mesoscale ensemble prediction with a horizontal resolution of 15 km. To make the comparison easier, the same horizontal resolution (40 km) was employed for the three mesoscale model-based initial perturbation methods (MSV, MBD and LET). The GSV method completely outperformed the WEP method, confirming the advantage of targeting in mesoscale EPS. The GSV method generally performed well with regard to root mean square errors of the ensemble mean, large growth rates of ensemble spreads throughout the 36-h forecast period, and high detection rates and high Brier skill scores (BSSs) for weak rains. On the other hand, the mesoscale model-based initial perturbation methods showed good detection rates and BSSs for intense rains. The MSV method showed a rapid growth in the ensemble spread of precipitation up to a forecast time of 6 h, which suggests suitability of the mesoscale SV for short-range EPSs, but the initial large growth of the perturbation did not last long. The
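    The breeding (MBD-type) idea can be sketched with the Lorenz-96 model, a standard low-order testbed rather than the NHM used in the study: a perturbed run is integrated alongside a control run, and the difference is rescaled back to a fixed amplitude at regular intervals; the rescaled difference is the bred vector used as an initial perturbation. All parameter values here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(6)

F, n = 8.0, 40                                     # Lorenz-96 forcing and dimension

def l96(x):
    """Lorenz-96 tendency: dx_i/dt = (x_{i+1} - x_{i-2}) x_{i-1} - x_i + F."""
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F

def rk4(x, dt=0.05):
    k1 = l96(x); k2 = l96(x + 0.5 * dt * k1)
    k3 = l96(x + 0.5 * dt * k2); k4 = l96(x + dt * k3)
    return x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

# Spin up a control trajectory onto the attractor.
x = F + rng.normal(scale=0.1, size=n)
for _ in range(1000):
    x = rk4(x)

amp = 0.1                                          # breeding amplitude
xp = x + amp * rng.normal(size=n) / np.sqrt(n)     # perturbed run

for cycle in range(20):
    for _ in range(8):                             # breeding interval: 8 model steps
        x, xp = rk4(x), rk4(xp)
    diff = xp - x
    growth = np.linalg.norm(diff) / amp
    xp = x + amp * diff / np.linalg.norm(diff)     # rescale to the breeding amplitude
    print(f"cycle {cycle:2d}: growth factor {growth:.2f}")

bred_vector = xp - x                               # usable as an initial ensemble perturbation
```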

  18. Multilevel ensemble Kalman filtering

    OpenAIRE

    Hoel, Håkon; Law, Kody J. H.; Tempone, Raul

    2015-01-01

    This work embeds a multilevel Monte Carlo (MLMC) sampling strategy into the Monte Carlo step of the ensemble Kalman filter (ENKF), thereby yielding a multilevel ensemble Kalman filter (MLENKF) which has provably superior asymptotic cost to a given accuracy level. The theoretical results are illustrated numerically.

  19. Towards Intelligent Ensembles

    Czech Academy of Sciences Publication Activity Database

    Bureš, Tomáš; Krijt, F.; Plášil, F.; Hnětynka, P.; Jiráček, Z.

    New York,: ACM, 2015, Article No. 17. ISBN 978-1-4503-3393-1. [ECSAW '15. European Conference on Software Architecture Workshops. Dubrovnik (HR), 07.09.2015-08.09.2015] Institutional support: RVO:67985807 Keywords : distributed coordination * architectural adaptation * ensemble-based component systems * component model * emergent architecture * component ensembles * autonomic systems Subject RIV: JC - Computer Hardware ; Software

  20. Application of the Clustering Method in Molecular Dynamics Simulation of the Diffusion Coefficient

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    Using molecular dynamics (MD) simulation, the diffusion of oxygen, methane, ammonia and carbon dioxide in water was simulated in the canonical NVT ensemble, and the diffusion coefficient was analyzed by the clustering method. By comparison with the conventional method (using the Einstein model) and the differentiation-interval variation method, we found that the results obtained by the clustering method used in this study are closer to the experimental values. This method proved to be more reasonable than the other two methods.
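    For reference, the conventional Einstein-relation estimate mentioned above can be written down directly: in three dimensions the diffusion coefficient is one sixth of the slope of the mean-square displacement versus lag time. The trajectory below is synthetic Brownian motion with a known D, not MD output, and the clustering variant of the paper is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic 3-D Brownian trajectory with known diffusion coefficient D_true.
D_true, dt, n_steps = 2.0e-9, 1.0e-12, 20000            # m^2/s, s, steps
steps = rng.normal(scale=np.sqrt(2.0 * D_true * dt), size=(n_steps, 3))
traj = np.cumsum(steps, axis=0)

def msd(traj, max_lag):
    """Mean-square displacement versus lag, averaged over time origins."""
    out = np.empty(max_lag)
    for lag in range(1, max_lag + 1):
        disp = traj[lag:] - traj[:-lag]
        out[lag - 1] = np.mean(np.sum(disp**2, axis=1))
    return out

max_lag = 2000
lags = np.arange(1, max_lag + 1) * dt
m = msd(traj, max_lag)

slope = np.polyfit(lags, m, 1)[0]        # linear fit: MSD ~ 6 D t
D_est = slope / 6.0
print(f"true D = {D_true:.2e} m^2/s, estimated D = {D_est:.2e} m^2/s")
```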

  1. Canonical analysis based on mutual information

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg; Vestergaard, Jacob Schack

    2015-01-01

    Canonical correlation analysis (CCA) is an established multi-variate statistical method for finding similarities between linear combinations of (normally two) sets of multivariate observations. In this contribution we replace (linear) correlation as the measure of association between the linear combinations with the information theoretical measure mutual information (MI). We term this type of analysis canonical information analysis (CIA). MI allows for the actual joint distribution of the variables involved and not just second order statistics. While CCA is ideal for Gaussian data, CIA facilitates analysis of variables with different genesis and therefore different statistical distributions and different modalities. As a proof of concept we give a toy example. We also give an example with one (weather radar based) variable in the one set and eight spectral bands of optical satellite data in the...
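    Classical CCA, the starting point of the paper, has a compact linear-algebra sketch: whiten each data set and take the SVD of the cross-covariance; the singular values are the canonical correlations. The mutual-information variant (CIA) is not reproduced here, and the data below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(8)

# Two synthetic data sets sharing one latent signal z.
n = 500
z = rng.normal(size=n)
X = np.column_stack([z + 0.5 * rng.normal(size=n), rng.normal(size=n), rng.normal(size=n)])
Y = np.column_stack([0.8 * z + 0.5 * rng.normal(size=n), rng.normal(size=n)])

def cca(X, Y):
    """Canonical correlations and weight vectors via whitening + SVD."""
    Xc, Yc = X - X.mean(0), Y - Y.mean(0)
    Sxx = Xc.T @ Xc / (len(X) - 1)
    Syy = Yc.T @ Yc / (len(Y) - 1)
    Sxy = Xc.T @ Yc / (len(X) - 1)

    def inv_sqrt(S):
        """Inverse square root of a symmetric positive-definite matrix."""
        w, V = np.linalg.eigh(S)
        return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

    K = inv_sqrt(Sxx) @ Sxy @ inv_sqrt(Syy)
    U, s, Vt = np.linalg.svd(K)
    A = inv_sqrt(Sxx) @ U            # canonical weight vectors for X
    B = inv_sqrt(Syy) @ Vt.T         # canonical weight vectors for Y
    return s, A, B

corrs, A, B = cca(X, Y)
print("canonical correlations:", np.round(corrs, 3))
```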

  2. Periodicity, the Canon and Sport

    Directory of Open Access Journals (Sweden)

    Thomas F. Scanlon

    2015-10-01

    Full Text Available The topic according to this title is admittedly a broad one, embracing two very general concepts of time and of the cultural valuation of artistic products. Both phenomena are, in the present view, largely constructed by their contemporary cultures, and given authority to a great extent from the prestige of the past. The antiquity of tradition brings with it a certain cachet. Even though there may be peripheral debates in any given society which question the specifics of periodization or canonicity, individuals generally accept the consensus designation of a sequence of historical periods and they accept a list of highly valued artistic works as canonical or authoritative. We will first examine some of the processes of periodization and of canon-formation, after which we will discuss some specific examples of how these processes have worked in the sport of two ancient cultures, namely Greece and Mesoamerica.

  3. Towards a GME ensemble forecasting system: Ensemble initialization using the breeding technique

    Directory of Open Access Journals (Sweden)

    Jan D. Keller

    2008-12-01

    Full Text Available The quantitative forecast of precipitation requires a probabilistic background, particularly with regard to forecast lead times of more than 3 days. As only ensemble simulations can provide useful information on the underlying probability density function, we built a new ensemble forecasting system (GME-EFS) based on the GME model of the German Meteorological Service (DWD). For the generation of appropriate initial ensemble perturbations we chose the breeding technique developed by Toth and Kalnay (1993, 1997), which develops perturbations by estimating the regions of largest model-error-induced uncertainty. This method is applied and tested in the framework of quasi-operational forecasts for a three-month period in 2007. The performance of the resulting ensemble forecasts is compared to the operational ensemble prediction systems ECMWF EPS and NCEP GFS by means of the ensemble spread of free-atmosphere parameters (geopotential and temperature) and the ensemble skill of precipitation forecasting. This comparison indicates that the GME ensemble forecasting system (GME-EFS) provides reasonable forecasts with a spread skill score comparable to that of the NCEP GFS. An analysis with the continuous ranked probability score exhibits a lack of resolution for the GME forecasts compared to the operational ensembles. However, with significant enhancements during the 3-month test period, the first results of our work with the GME-EFS indicate possibilities for further development as well as the potential for later operational usage.

  4. Existence of log canonical closures

    CERN Document Server

    Hacon, Christopher D

    2011-01-01

    Let $f:X\\to U$ be a projective morphism of normal varieties and $(X,\\Delta)$ a dlt pair. We prove that if there is an open set $U^0\\subset U$, such that $(X,\\Delta)\\times_U U^0$ has a good minimal model over $U^0$ and the images of all the non-klt centers intersect $U^0$, then $(X,\\Delta)$ has a good minimal model over $U$. As consequences we show the existence of log canonical compactifications for open log canonical pairs, and the fact that the moduli functor of stable schemes satisfies the valuative criterion for properness.

  5. Gauge Theory by canonical Transformations

    CERN Document Server

    Koenigstein, Adrian; Stoecker, Horst; Struckmeier, Juergen; Vasak, David; Hanauske, Matthias

    2016-01-01

    Electromagnetism, the strong and the weak interaction are commonly formulated as gauge theories in a Lagrangian description. In this paper we present an alternative formal derivation of U(1)-gauge theory in a manifestly covariant Hamilton formalism. We make use of canonical transformations as our guiding tool to formalize the gauging procedure. The introduction of the gauge field, its transformation behaviour and a dynamical gauge field Lagrangian/Hamiltonian are unavoidable consequences of this formalism, whereas the form of the free gauge Lagrangian/Hamiltonian depends on the selection of the gauge dependence of the canonically conjugate gauge fields.

  6. Case studies in canonical stewardship.

    Science.gov (United States)

    Cafardi, N P; Hite, J

    1985-11-01

    In facing the challenges that confront Catholic health care today, it is important to know which civil law forms will assist in preserving the Church's ministry. The proper meshing of civil law and canon law thus provides a vehicle to strengthen the apostolate's work. The case studies presented here suggest several means of applying the principles in the new Code of Canon Law to three potentially problematic situations: the merger of a Catholic and non-Catholic hospital, the leasing of a Catholic hospital to an operating company, and the use of the multicorporate format. PMID:10274590

  7. On the statistical mechanics of an adiabatic ensemble

    Directory of Open Access Journals (Sweden)

    S.N.Andreev

    2004-01-01

    Full Text Available Different descriptions of an adiabatic process based on statistical thermodynamics and statistical mechanics are discussed. The equality of the so-called adiabatic and isolated susceptibilities, its generalization, and adiabatic invariants are essentially used to describe adiabatic processes in the framework of quantum and classical statistical mechanics. It is shown that the distribution function in the adiabatic ensemble differs from a quasi-equilibrium canonical form provided that the heat capacity of the system is not constant during the adiabatic process.

  8. Transition from Poisson to circular unitary ensemble

    Indian Academy of Sciences (India)

    Vinayak; Akhilesh Pandey

    2009-09-01

    Transitions to universality classes of random matrix ensembles have been useful in the study of weakly-broken symmetries in quantum chaotic systems. Transitions involving Poisson as the initial ensemble have been particularly interesting. The exact two-point correlation function was derived by one of the present authors for the Poisson to circular unitary ensemble (CUE) transition with uniform initial density. This is given in terms of a rescaled symmetry-breaking parameter Λ. The same result was obtained for the Poisson to Gaussian unitary ensemble (GUE) transition by Kunz and Shapiro, using the contour-integral method of Brezin and Hikami. We show that their method is applicable to the Poisson to CUE transition with arbitrary initial density. Their method is also applicable to the more general ℓCUE to CUE transition, where ℓCUE refers to the superposition of ℓ independent CUE spectra in arbitrary ratio.

  9. Spectral diagonal ensemble Kalman filters

    CERN Document Server

    Kasanický, Ivan; Vejmelka, Martin

    2015-01-01

    A new type of ensemble Kalman filter is developed, which is based on replacing the sample covariance in the analysis step by its diagonal in a spectral basis. It is proved that this technique improves the approximation of the covariance when the covariance itself is diagonal in the spectral basis, as is the case, e.g., for a second-order stationary random field and the Fourier basis. The method is extended by wavelets to the case when the state variables are random fields, which are not spatially homogeneous. Efficient implementations by the fast Fourier transform (FFT) and discrete wavelet transform (DWT) are presented for several types of observations, including high-dimensional data given on a part of the domain, such as radar and satellite images. Computational experiments confirm that the method performs well on the Lorenz 96 problem and the shallow water equations with very small ensembles and over multiple analysis cycles.

  10. Spectral diagonal ensemble Kalman filters

    Directory of Open Access Journals (Sweden)

    I. Kasanický

    2015-01-01

    Full Text Available A new type of ensemble Kalman filter is developed, which is based on replacing the sample covariance in the analysis step by its diagonal in a spectral basis. It is proved that this technique improves the approximation of the covariance when the covariance itself is diagonal in the spectral basis, as is the case, e.g., for a second-order stationary random field and the Fourier basis. The method is extended by wavelets to the case when the state variables are random fields which are not spatially homogeneous. Efficient implementations by the fast Fourier transform (FFT) and discrete wavelet transform (DWT) are presented for several types of observations, including high-dimensional data given on a part of the domain, such as radar and satellite images. Computational experiments confirm that the method performs well on the Lorenz 96 problem and the shallow water equations with very small ensembles and over multiple analysis cycles.

  11. Remotely sensed imagery classification by the SVM-based Infinite Ensemble Learning method

    Institute of Scientific and Technical Information of China (English)

    杨娜; 秦志远; 张俊

    2013-01-01

    The support-vector-machine-based Infinite Ensemble Learning method (SVM-based IEL) is a newly emerging ensemble learning method in the field of machine learning. In this paper, SVM-based IEL is introduced into the classification of remotely sensed imagery, and SVM, Bagging, AdaBoost and SVM-based IEL are all applied to remote sensing image classification, with SVM taken as the base classifier in Bagging and AdaBoost. The experiments show that the classic ensemble learning methods perform differently from SVM: Bagging improves the classification accuracy of the remotely sensed imagery, whereas AdaBoost decreases it. Furthermore, compared with SVM and the finite ensemble learning methods, SVM-based IEL has the advantage of significantly improving both the classification accuracy and the classification efficiency.

  12. A non-destructive surface burn detection method for ferrous metals based on acoustic emission and ensemble empirical mode decomposition: from laser simulation to grinding process

    International Nuclear Information System (INIS)

    Grinding is usually done in the final finishing of a component. As a result, the surface quality of finished products, e.g., surface roughness, hardness and residual stress, are affected by the grinding procedure. However, the lack of methods for monitoring of grinding makes it difficult to control the quality of the process. This paper focuses on the monitoring approaches for the surface burn phenomenon in grinding. A non-destructive burn detection method based on acoustic emission (AE) and ensemble empirical mode decomposition (EEMD) was proposed for this purpose. To precisely extract the AE features caused by phase transformation during burn formation, artificial burn was produced to mimic grinding burn by means of laser irradiation, since laser-induced burn involves less mechanical and electrical noise. The burn formation process was monitored by an AE sensor. The frequency band ranging from 150 to 400 kHz was believed to be related to surface burn formation in the laser irradiation process. The burn-sensitive frequency band was further used to instruct feature extraction during the grinding process based on EEMD. Linear classification results evidenced a distinct margin between samples with and without surface burn. This work provides a practical means for grinding burn detection. (paper)
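    A sketch of the EEMD feature-extraction step described above, assuming the third-party PyEMD package (its availability and API are an assumption on my part, not something stated in the paper); a synthetic signal stands in for the AE waveform, and the energy of each intrinsic mode function is used as a simple burn-sensitive feature.

```python
import numpy as np
from PyEMD import EEMD   # assumed third-party package, installable as `pip install EMD-signal`

rng = np.random.default_rng(9)

# Synthetic stand-in for an AE waveform: two tones in the burn-sensitive band plus noise.
fs = 2_000_000                                     # 2 MHz sampling rate (illustrative)
t = np.arange(0, 0.002, 1.0 / fs)
signal = (np.sin(2 * np.pi * 200e3 * t)
          + 0.5 * np.sin(2 * np.pi * 350e3 * t)
          + 0.2 * rng.normal(size=t.size))

eemd = EEMD(trials=50)                             # ensemble of noise-assisted EMD runs
imfs = eemd.eemd(signal, t)                        # intrinsic mode functions

# Relative energy of each IMF as a simple feature vector for burn classification.
energies = np.array([np.sum(imf**2) for imf in imfs])
features = energies / energies.sum()
print("IMF relative energies:", np.round(features, 3))
```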

  13. Ising model on random networks and the canonical tensor model

    International Nuclear Information System (INIS)

    We introduce a statistical system on random networks of trivalent vertices for the purpose of studying the canonical tensor model, which is a rank-three tensor model in the canonical formalism. The partition function of the statistical system has a concise expression in terms of integrals, and has the same symmetries as the kinematical ones of the canonical tensor model. We consider the simplest non-trivial case of the statistical system corresponding to the Ising model on random networks, and find that its phase diagram agrees with what is implied by regarding the Hamiltonian vector field of the canonical tensor model with N=2 as a renormalization group flow. Along the way, we obtain an explicit exact expression for the free energy of the Ising model on random networks in the thermodynamic limit by the Laplace method. This paper provides a new example connecting a model of quantum gravity and a random statistical system

  14. Non diffusive corrections to the long scale behavior of ensembles of turbulent magnetic field lines. Application of the functional method

    International Nuclear Information System (INIS)

    The problem of the transversal spreading of the magnetic field lines in a turbulent plasma is investigated analytically in order to obtain a statistical characterization at large spatial scales. We develop a functional integral method which allows to calculate in a systematic way statistical averages of physical quantities which depend on the fluctuating field. The known magnetic diffusion coefficient for the shearless case is corrected with a term which arises from the assumption of a finite transversal correlation length. For the case with magnetic shear the functional method provides the appropriate frame for a perturbative approach based on series of diagrams

  15. Quantum statistical ensemble for emissive correlated systems

    Science.gov (United States)

    Shakirov, Alexey M.; Shchadilova, Yulia E.; Rubtsov, Alexey N.

    2016-06-01

    Relaxation dynamics of complex quantum systems with strong interactions towards the steady state is a fundamental problem in statistical mechanics. The steady state of subsystems weakly interacting with their environment is described by the canonical ensemble which assumes the probability distribution for energy to be of the Boltzmann form. The emergence of this probability distribution is ensured by the detailed balance of the transitions induced by the interaction with the environment. Here we consider relaxation of an open correlated quantum system brought into contact with a reservoir in the vacuum state. We refer to such a system as emissive since particles irreversibly evaporate into the vacuum. The steady state of the system is a statistical mixture of the stable eigenstates. We found that, despite the absence of the detailed balance, the stationary probability distribution over these eigenstates is of the Boltzmann form in each N -particle sector. A quantum statistical ensemble corresponding to the steady state is characterized by different temperatures in the different sectors, in contrast to the Gibbs ensemble. We investigate the transition rates between the eigenstates to understand the emergence of the Boltzmann distribution and find their exponential dependence on the transition energy. We argue that this property of transition rates is generic for a wide class of emissive quantum many-body systems.

  16. Romanticism, Sexuality, and the Canon.

    Science.gov (United States)

    Rowe, Kathleen K.

    1990-01-01

    Traces the Romanticism in the work and persona of film director Jean-Luc Godard. Examines the contradictions posed by Godard's politics and representations of sexuality. Asserts that, by bringing an ironic distance to the works of such canonized directors, viewers can take pleasure in those works despite their contradictions. (MM)

  17. Linear interpolation method in ensemble Kohn-Sham and range-separated density-functional approximations for excited states

    DEFF Research Database (Denmark)

    Senjean, Bruno; Knecht, Stefan; Jensen, Hans Jørgen Aa;

    2015-01-01

    equiensembles. It is shown that such a linear interpolation method (LIM) can be rationalized and that it effectively introduces weight dependence effects. As proof of principle, the LIM has been applied to He, Be, and H2 in both equilibrium and stretched geometries as well as the stretched HeH+ molecule. Very...

  18. Ensemble learning incorporating uncertain registration.

    Science.gov (United States)

    Simpson, Ivor J A; Woolrich, Mark W; Andersson, Jesper L R; Groves, Adrian R; Schnabel, Julia A

    2013-04-01

    This paper proposes a novel approach for improving the accuracy of statistical prediction methods in spatially normalized analysis. This is achieved by incorporating registration uncertainty into an ensemble learning scheme. A probabilistic registration method is used to estimate a distribution of probable mappings between subject and atlas space. This allows the estimation of the distribution of spatially normalized feature data, e.g., grey matter probability maps. From this distribution, samples are drawn for use as training examples. This allows the creation of multiple predictors, which are subsequently combined using an ensemble learning approach. Furthermore, extra testing samples can be generated to measure the uncertainty of prediction. This is applied to separating subjects with Alzheimer's disease from normal controls using a linear support vector machine on a region of interest in magnetic resonance images of the brain. We show that our proposed method leads to an improvement in discrimination using voxel-based morphometry and deformation tensor-based morphometry over bootstrap aggregating, a common ensemble learning framework. The proposed approach also generates more reasonable soft-classification predictions than bootstrap aggregating. We expect that this approach could be applied to other statistical prediction tasks where registration is important. PMID:23288332

  19. Canonical particle tracking in undulator fields

    International Nuclear Information System (INIS)

    A new algebraic mapping routine for particle tracking across wiggler and undulator fields is presented. It is based on a power series expansion of the generating function to guarantee fully canonical transformations. This method is 10 to 100 times faster than the integration routines applied in tracking codes like BETA or RACETRACK. The tracking method presented is not restricted to wigglers and undulators; it can be applied to other magnetic fields as well, such as the fringing fields of quadrupoles or dipoles, if the suggested expansion converges

  20. Ensembles on Random Patches

    OpenAIRE

    Louppe, Gilles; Geurts, Pierre

    2012-01-01

    In this paper, we consider supervised learning under the assumption that the available memory is small compared to the dataset size. This general framework is relevant in the context of big data, distributed databases and embedded systems. We investigate a very simple, yet effective, ensemble framework that builds each individual model of the ensemble from a random patch of data obtained by drawing random subsets of both instances and features from the whole dataset. We carry out an extensive...
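    The Random Patches scheme can be reproduced with scikit-learn's bagging meta-estimator by subsampling both instances and features without replacement; the data set and parameter values below are arbitrary and the snippet is only a sketch of the framework, not the paper's experimental setup.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=40, n_informative=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Random Patches: each tree sees a random subset of instances AND features,
# both drawn without replacement.
clf = BaggingClassifier(
    DecisionTreeClassifier(),
    n_estimators=100,
    max_samples=0.5,          # fraction of instances per patch
    max_features=0.5,         # fraction of features per patch
    bootstrap=False,
    bootstrap_features=False,
    random_state=0,
)
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```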

  1. Orbital magnetism in ensembles of ballistic billiards

    International Nuclear Information System (INIS)

    The magnetic response of ensembles of small two-dimensional structures at finite temperatures is calculated. Using semiclassical methods and numerical calculation it is demonstrated that only short classical trajectories are relevant. The magnetic susceptibility is enhanced in regular systems, where these trajectories appear in families. For ensembles of squares large paramagnetic susceptibility is obtained, in good agreement with recent measurements in the ballistic regime. (authors). 20 refs., 2 figs

  2. Controlling balance in an ensemble Kalman filter

    OpenAIRE

    G. A. Gottwald

    2014-01-01

    We present a method to control unbalanced fast dynamics in an ensemble Kalman filter by introducing a weak constraint on the imbalance in a spatially sparse observational network. We show that the balance constraint produces significantly more balanced analyses than ensemble Kalman filters without balance constraints and than filters implementing incremental analysis updates (IAU). Furthermore, our filter with the weak constraint on imbalance produces good rms error statisti...

  3. Fault Diagnosis Method for Nuclear Power Plant Based on Ensemble Learning

    Institute of Scientific and Technical Information of China (English)

    慕昱; 夏虹; 刘永阔

    2012-01-01

    A nuclear power plant (NPP) is a very complex system in which a large number of parameters must be collected and monitored, which makes fault diagnosis difficult. An ensemble learning method is proposed to address this problem. The method was trained on samples of four typical faults of a nuclear power plant: loss-of-coolant accident (LOCA), feed-water pipe rupture, steam generator tube rupture (SGTR) and main steam pipe rupture. Simulations were carried out both under normal conditions and with invalid or missing parameters. The simulation results show that the method still obtains good diagnostic results when parameters are invalid or missing, demonstrating good generalization performance and fault tolerance.

  4. Using Data Assimilation Method Via an Ensemble Kalman Filter to Predict Adsorptive Solute Cr(Ⅵ) Transfer from Soil into Surface Runoff

    Science.gov (United States)

    Tong, J.

    2014-12-01

    With the development of modern agriculture, the outflow of large amounts of fertilizer and pesticide from farmland causes great waste and contributes to serious pollution of surface water and groundwater, which threatens the ecological environment and human life. In this paper, laboratory experiments are conducted to simulate adsorbed Cr(VI) transfer from soil into runoff. A two-layer in-mixing analytical model is developed to analyze the laboratory experimental results. A data assimilation (DA) method via the ensemble Kalman filter (EnKF) is used to update parameters and improve predictions. In comparison with the observed data, the DA results are much better than the forward model predictions. Based on the rainfall used and relevant physical principles, the updated value of the incomplete mixing coefficient is about 7.4 times the value of the incomplete mixing coefficient in experiment 1 and about 14.0 times that in experiment 2, which indicates that the loss of Cr(VI) in soil solute is mainly due to infiltration rather than surface runoff. With the increase of soil adsorption ability and mixing layer depth, the loss of soil solute will decrease. These results provide information for preventing and reducing agricultural non-point source pollution.

  5. Generalized Bulgac-Kusnezov Methods for Sampling of the Gibbs-Boltzmann Measure

    OpenAIRE

    Leimkuhler, Benedict

    2010-01-01

    A wide family of methods is described for sampling in the canonical ensemble. The Bulgac-Kusnezov method is generalized to include a more complicated coupling structure and stochastic perturbations. It is shown that a controlled fluctuation of the potential surface or force field in a molecular model may be used as part of a sampling method (instead of the more standard friction or driving term). Numerical experiments demonstrate that the family includes methods that are effective for recover...
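    For comparison with the generalized Bulgac-Kusnezov thermostats discussed above, a more standard route to sampling the canonical (Gibbs-Boltzmann) measure is Langevin dynamics; the BAOAB splitting below, applied to a one-dimensional double-well potential, is a minimal sketch and not the method of the paper.

```python
import numpy as np

rng = np.random.default_rng(10)

def force(q):
    """Force for the double-well potential U(q) = (q^2 - 1)^2."""
    return -4.0 * q * (q * q - 1.0)

kT, gamma, dt, n_steps = 1.0, 1.0, 0.01, 200_000
c1 = np.exp(-gamma * dt)
c2 = np.sqrt((1.0 - c1 * c1) * kT)       # fluctuation amplitude (unit mass)

q, p = 0.0, 0.0
q_samples = np.empty(n_steps)
kinetic = 0.0
for i in range(n_steps):
    p += 0.5 * dt * force(q)             # B: half kick
    q += 0.5 * dt * p                    # A: half drift
    p = c1 * p + c2 * rng.normal()       # O: exact Ornstein-Uhlenbeck step (thermostat)
    q += 0.5 * dt * p                    # A: half drift
    p += 0.5 * dt * force(q)             # B: half kick
    q_samples[i] = q
    kinetic += 0.5 * p * p

print("mean kinetic energy (should be close to kT/2):", kinetic / n_steps)
print("fraction of samples in the right well:", np.mean(q_samples > 0.0))
```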

  6. Basic Canonical Brackets Without Canonical Conjugate Momenta: Supersymmetric Harmonic Oscillator

    CERN Document Server

    Shukla, A; Malik, R P

    2014-01-01

    We exploit the ideas of the spin-statistics theorem, normal ordering and the key concepts behind the symmetry principles to derive the canonical (anti)commutators for the case of a one (0 + 1)-dimensional (1D) supersymmetric (SUSY) harmonic oscillator without invoking the mathematical definition of the canonical conjugate momenta with respect to the bosonic and fermionic variables of this toy model for the Hodge theory (where the continuous and discrete symmetries of the theory provide the physical realizations of the de Rham cohomological operators of differential geometry). In our present endeavor, it is the full set of continuous symmetries and their corresponding generators that lead to the derivation of the basic (anti)commutators amongst the creation and annihilation operators that appear in the normal mode expansions of the dynamical variables of our theory.

  7. Statistical thermodynamics in relativistic particle and ion physics: Canonical or grand canonical

    International Nuclear Information System (INIS)

    We consider the relativistic statistical thermodynamics of an ideal Boltzmann gas consisting of the particles K, Λ, A, Σ and their antiparticles. Baryon number (B) and strangeness (S) are conserved. While any relativistic gas is necessarily grand canonical with respect to particle numbers, conservation laws can be treated canonically or grand canonically. We construct the partition function for canonical B×S conservation and compare it with the grand canonical one. It is found that the grand canonical partition function is equivalent to a large-B approximation of the canonical one. The relative difference between canonical and grand canonical quantities seems to decrease like const/B (two numerical examples), and from this a simple rule of thumb for computing canonical quantities from grand canonical ones is guessed. For precise calculations, an integral representation is given. (orig.)
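    The canonical versus grand canonical comparison can be made concrete with a tiny numerical sketch for a single Abelian charge (a deliberate simplification of the B×S case treated in the paper): for an ideal Boltzmann gas with single-particle partition functions z₊ and z₋ for the charge +1 and -1 species, the canonical partition function at fixed net charge B is obtained by projecting the grand canonical one over a phase, Z_B = (1/2π)∫dφ e^{-iBφ} exp(z₊e^{iφ} + z₋e^{-iφ}). The values of z₊ and z₋ are arbitrary.

```python
import numpy as np

z_plus, z_minus = 4.0, 4.0        # single-particle partition functions of the +1 / -1 species

def canonical_Z(B, n_phi=4096):
    """Canonical partition function at fixed net charge B via the projection integral."""
    phi = np.linspace(0.0, 2.0 * np.pi, n_phi, endpoint=False)
    integrand = np.exp(-1j * B * phi) * np.exp(z_plus * np.exp(1j * phi)
                                               + z_minus * np.exp(-1j * phi))
    return np.real(integrand.mean())

for B in [0, 2, 5, 10]:
    ZB = canonical_Z(B)
    # Canonical mean number of +1 particles: <N+> = z+ * Z_{B-1} / Z_B.
    n_plus_canonical = z_plus * canonical_Z(B - 1) / ZB
    # Grand canonical estimate with the fugacity lam fixed by <charge> = B:
    # z+ lam - z- / lam = B  ->  solve the quadratic for lam.
    lam = (B + np.sqrt(B * B + 4.0 * z_plus * z_minus)) / (2.0 * z_plus)
    n_plus_grand = z_plus * lam
    print(f"B={B:2d}: <N+> canonical = {n_plus_canonical:6.3f}, grand canonical = {n_plus_grand:6.3f}")
```

    The printout illustrates the effect discussed in the abstract: the two treatments differ noticeably at small B and approach each other as B grows.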

  8. Canonical and non-canonical pathways of osteoclast formation

    OpenAIRE

    Knowles, H.J.; Athanasou, N A

    2009-01-01

    Physiological and pathological bone resorption is mediated by osteoclasts, multinucleated cells which are formed by the fusion of monocyte / macrophage precursors. The canonical pathway of osteoclast formation requires the presence of the receptor activator for NFkB ligand (RANKL) and macrophage colony stimulating factor (M-CSF). Noncanonical pathways of osteoclast formation have been described in which cytokines / growth factors can substitute for RANKL or M-CSF to...

  9. Kato expansion in quantum canonical perturbation theory

    Science.gov (United States)

    Nikolaev, Andrey

    2016-06-01

    This work establishes a connection between canonical perturbation series in quantum mechanics and a Kato expansion for the resolvent of the Liouville superoperator. Our approach leads to an explicit expression for a generator of a block-diagonalizing Dyson's ordered exponential in arbitrary perturbation order. Unitary intertwining of perturbed and unperturbed averaging superprojectors allows for a description of ambiguities in the generator and block-diagonalized Hamiltonian. We compare the efficiency of the corresponding computational algorithm with the efficiencies of the Van Vleck and Magnus methods for high perturbative orders.

  10. Canonical proper time quantum gravitation

    Science.gov (United States)

    Lindesay, James

    2015-05-01

    At the root of the tensions involved in modeling the quantum dynamics of gravitating systems are the subtleties of quantum locality. Quantum mechanics describes physical phenomena using a theory of non-local phase relationships (non-local in the sense that quantum states maintain a space-like coherence that is acausal). However, the principle of equivalence in general relativity asserts that freely falling frames are locally inertial frames of reference. Thus, gravitating systems are often described using constituents that are freely falling, undergoing geodesic motion defining well localized trajectories. The canonical proper time formulation of relativistic dynamics is particularly useful for describing such inertial constituents using the coordinates of non-inertial observers. The physics of the simplest of gravitating inertial quantum systems, consistent with presented experimental evidence, will be examined. Subsequently, descriptions of both weakly and strongly gravitating quantum systems will be developed using canonical proper gravitation.

  11. Canonical computations of cerebral cortex.

    Science.gov (United States)

    Miller, Kenneth D

    2016-04-01

    The idea that there is a fundamental cortical circuit that performs canonical computations remains compelling though far from proven. Here we review evidence for two canonical operations within sensory cortical areas: a feedforward computation of selectivity; and a recurrent computation of gain in which, given sufficiently strong external input, perhaps from multiple sources, intracortical input largely, but not completely, cancels this external input. This operation leads to many characteristic cortical nonlinearities in integrating multiple stimuli. The cortical computation must combine such local processing with hierarchical processing across areas. We point to important changes in moving from sensory cortex to motor and frontal cortex and the possibility of substantial differences between cortex in rodents vs. species with columnar organization of selectivity. PMID:26868041

  12. A Method for Automated Classification of Parkinson's Disease Diagnosis Using an Ensemble Average Propagator Template Brain Map Estimated from Diffusion MRI.

    Science.gov (United States)

    Banerjee, Monami; Okun, Michael S; Vaillancourt, David E; Vemuri, Baba C

    2016-01-01

    Parkinson's disease (PD) is a common and debilitating neurodegenerative disorder that affects patients in all countries and of all nationalities. Magnetic resonance imaging (MRI) is currently one of the most widely used diagnostic imaging techniques utilized for detection of neurologic diseases. Changes in structural biomarkers will likely play an important future role in assessing progression of many neurological diseases inclusive of PD. In this paper, we derived structural biomarkers from diffusion MRI (dMRI), a structural modality that allows for non-invasive inference of neuronal fiber connectivity patterns. The structural biomarker we use is the ensemble average propagator (EAP), a probability density function fully characterizing the diffusion locally at a voxel level. To assess changes with respect to a normal anatomy, we construct an unbiased template brain map from the EAP fields of a control population. Use of an EAP captures both orientation and shape information of the diffusion process at each voxel in the dMRI data, and this feature can be a powerful representation to achieve enhanced PD brain mapping. This template brain map construction method is applicable to small animal models as well as to human brains. The differences between the control template brain map and novel patient data can then be assessed via a nonrigid warping algorithm that transforms the novel data into correspondence with the template brain map, thereby capturing the amount of elastic deformation needed to achieve this correspondence. We present the use of a manifold-valued feature called the Cauchy deformation tensor (CDT), which facilitates morphometric analysis and automated classification of a PD versus a control population. Finally, we present preliminary results of automated discrimination between a group of 22 controls and 46 PD patients using CDT. This method may be possibly applied to larger population sizes and other parkinsonian syndromes in the near future. PMID

  13. A Method for Automated Classification of Parkinson’s Disease Diagnosis Using an Ensemble Average Propagator Template Brain Map Estimated from Diffusion MRI

    Science.gov (United States)

    Banerjee, Monami; Okun, Michael S.; Vaillancourt, David E.; Vemuri, Baba C.

    2016-01-01

    Parkinson’s disease (PD) is a common and debilitating neurodegenerative disorder that affects patients in all countries and of all nationalities. Magnetic resonance imaging (MRI) is currently one of the most widely used diagnostic imaging techniques utilized for detection of neurologic diseases. Changes in structural biomarkers will likely play an important future role in assessing progression of many neurological diseases inclusive of PD. In this paper, we derived structural biomarkers from diffusion MRI (dMRI), a structural modality that allows for non-invasive inference of neuronal fiber connectivity patterns. The structural biomarker we use is the ensemble average propagator (EAP), a probability density function fully characterizing the diffusion locally at a voxel level. To assess changes with respect to a normal anatomy, we construct an unbiased template brain map from the EAP fields of a control population. Use of an EAP captures both orientation and shape information of the diffusion process at each voxel in the dMRI data, and this feature can be a powerful representation to achieve enhanced PD brain mapping. This template brain map construction method is applicable to small animal models as well as to human brains. The differences between the control template brain map and novel patient data can then be assessed via a nonrigid warping algorithm that transforms the novel data into correspondence with the template brain map, thereby capturing the amount of elastic deformation needed to achieve this correspondence. We present the use of a manifold-valued feature called the Cauchy deformation tensor (CDT), which facilitates morphometric analysis and automated classification of a PD versus a control population. Finally, we present preliminary results of automated discrimination between a group of 22 controls and 46 PD patients using CDT. This method may be possibly applied to larger population sizes and other parkinsonian syndromes in the near future. PMID

  14. Three Dimensional Canonical Quantum Gravity

    OpenAIRE

    Matschull, Hans-Juergen

    1995-01-01

    General aspects of vielbein representation, ADM formulation and canonical quantization of gravity are reviewed using pure gravity in three dimensions as a toy model. The classical part focusses on the role of observers in general relativity, which will later be identified with quantum observers. A precise definition of gauge symmetries and a classification of inequivalent solutions of Einstein's equations in dreibein formalism is given as well. In the quantum part the construction of the phys...

  15. Resistant Multiple Sparse Canonical Correlation

    OpenAIRE

    Coleman, Jacob; Replogle, Joseph; Chandler, Gabriel; Hardin, Johanna

    2014-01-01

    Canonical Correlation Analysis (CCA) is a multivariate technique that takes two datasets and forms the most highly correlated possible pairs of linear combinations between them. Each subsequent pair of linear combinations is orthogonal to the preceding pair, meaning that new information is gleaned from each pair. By looking at the magnitude of coefficient values, we can find out which variables can be grouped together, thus better understanding multiple interactions that are otherwise difficu...
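
    As orientation for readers unfamiliar with CCA, the following is a minimal sketch of ordinary (non-resistant, non-sparse) canonical correlation with scikit-learn; the synthetic matrices X and Y, the shared latent signal, and the number of components are illustrative assumptions and not the method of the record above.

        # Minimal illustration of ordinary CCA (not the resistant/sparse variant in the record).
        import numpy as np
        from sklearn.cross_decomposition import CCA

        rng = np.random.default_rng(0)
        n = 200
        # Two synthetic datasets sharing one latent signal (assumed for illustration).
        latent = rng.normal(size=n)
        X = np.column_stack([latent + 0.5 * rng.normal(size=n), rng.normal(size=n)])
        Y = np.column_stack([latent + 0.5 * rng.normal(size=n), rng.normal(size=n)])

        cca = CCA(n_components=2)
        U, V = cca.fit_transform(X, Y)   # paired canonical variates

        # Canonical correlations: correlation between each pair of variates.
        for k in range(U.shape[1]):
            r = np.corrcoef(U[:, k], V[:, k])[0, 1]
            print(f"canonical correlation {k + 1}: {r:.3f}")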

  16. Triticeae resources in Ensembl Plants.

    Science.gov (United States)

    Bolser, Dan M; Kerhornou, Arnaud; Walts, Brandon; Kersey, Paul

    2015-01-01

    Recent developments in DNA sequencing have enabled the large and complex genomes of many crop species to be determined for the first time, even those previously intractable due to their polyploid nature. Indeed, over the course of the last 2 years, the genome sequences of several commercially important cereals, notably barley and bread wheat, have become available, as well as those of related wild species. While still incomplete, comparison with other, more completely assembled species suggests that coverage of genic regions is likely to be high. Ensembl Plants (http://plants.ensembl.org) is an integrative resource organizing, analyzing and visualizing genome-scale information for important crop and model plants. Available data include reference genome sequence, variant loci, gene models and functional annotation. For variant loci, individual and population genotypes, linkage information and, where available, phenotypic information are shown. Comparative analyses are performed on DNA and protein sequence alignments. The resulting genome alignments and gene trees, representing the implied evolutionary history of the gene family, are made available for visualization and analysis. Driven by the case of bread wheat, specific extensions to the analysis pipelines and web interface have recently been developed to support polyploid genomes. Data in Ensembl Plants is accessible through a genome browser incorporating various specialist interfaces for different data types, and through a variety of additional methods for programmatic access and data mining. These interfaces are consistent with those offered through the Ensembl interface for the genomes of non-plant species, including those of plant pathogens, pests and pollinators, facilitating the study of the plant in its environment. PMID:25432969

  17. The semantic similarity ensemble

    Directory of Open Access Journals (Sweden)

    Andrea Ballatore

    2013-12-01

    Full Text Available Computational measures of semantic similarity between geographic terms provide valuable support across geographic information retrieval, data mining, and information integration. To date, a wide variety of approaches to geo-semantic similarity have been devised. A judgment of similarity is not intrinsically right or wrong, but obtains a certain degree of cognitive plausibility, depending on how closely it mimics human behavior. Thus selecting the most appropriate measure for a specific task is a significant challenge. To address this issue, we make an analogy between computational similarity measures and soliciting domain expert opinions, which incorporate a subjective set of beliefs, perceptions, hypotheses, and epistemic biases. Following this analogy, we define the semantic similarity ensemble (SSE as a composition of different similarity measures, acting as a panel of experts having to reach a decision on the semantic similarity of a set of geographic terms. The approach is evaluated in comparison to human judgments, and results indicate that an SSE performs better than the average of its parts. Although the best member tends to outperform the ensemble, all ensembles outperform the average performance of each ensemble's member. Hence, in contexts where the best measure is unknown, the ensemble provides a more cognitively plausible approach.
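
    To make the "panel of experts" analogy concrete, here is a minimal sketch in which several similarity measures score the same term pairs and their rank-normalized scores are averaged into an ensemble judgment. The three toy measures, the random term vectors, and the human ratings are invented for illustration and are not the measures or data evaluated in the article.

        # Toy semantic similarity ensemble: average the rank-normalized scores of several measures.
        import numpy as np
        from scipy.stats import rankdata, spearmanr

        def cosine(a, b):
            return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

        # Hypothetical vector representations of geographic terms (illustrative only).
        rng = np.random.default_rng(1)
        terms = ["river", "stream", "mountain", "hill", "lake"]
        vecs = {t: rng.normal(size=8) for t in terms}
        pairs = [("river", "stream"), ("mountain", "hill"), ("river", "lake"), ("hill", "lake")]

        # A "panel" of toy similarity measures (stand-ins for real geo-semantic measures).
        measures = [
            lambda a, b: cosine(vecs[a], vecs[b]),
            lambda a, b: -np.linalg.norm(vecs[a] - vecs[b]),                  # negative distance
            lambda a, b: float(np.dot(np.sign(vecs[a]), np.sign(vecs[b]))),   # crude sign overlap
        ]

        scores = np.array([[m(a, b) for (a, b) in pairs] for m in measures])
        ranked = np.array([rankdata(row) for row in scores])   # make measures comparable
        ensemble_score = ranked.mean(axis=0)                   # the ensemble's judgment

        human = np.array([4, 3, 2, 1])   # hypothetical human similarity ratings for the pairs
        print("ensemble vs human (Spearman):", spearmanr(ensemble_score, human)[0])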

  18. Data assimilation with the weighted ensemble Kalman filter

    OpenAIRE

    Papadakis, Nicolas; Mémin, Etienne; Cuzol, Anne; Gengembre, Nicolas

    2010-01-01

    In this paper, two data assimilation methods based on sequential Monte Carlo sampling are studied and compared: the ensemble Kalman filter and the particle filter. Each of these techniques has its own advantages and drawbacks. In this work, we try to get the best of each method by combining them. The proposed algorithm, called the weighted ensemble Kalman filter, consists in relying on the Ensemble Kalman Filter updates of samples in order to define a proposal distribution for the particle filte...
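
    As background for the combination described above, here is a minimal sketch of the standard stochastic ensemble Kalman filter analysis step (perturbed observations). The linear observation operator, toy dimensions and noise levels are assumptions, and the particle-filter weighting that defines the proposed algorithm is not shown.

        # Minimal stochastic EnKF analysis step with perturbed observations (illustrative only).
        import numpy as np

        rng = np.random.default_rng(2)
        n_state, n_obs, n_ens = 10, 4, 20

        # Forecast ensemble (columns are members) and a linear observation operator H.
        Xf = rng.normal(size=(n_state, n_ens))
        H = rng.normal(size=(n_obs, n_state))
        R = 0.5 * np.eye(n_obs)                      # observation-error covariance
        y = rng.normal(size=n_obs)                   # observation vector

        # Ensemble-estimated forecast covariance.
        Xm = Xf.mean(axis=1, keepdims=True)
        A = Xf - Xm                                  # anomalies
        Pf = A @ A.T / (n_ens - 1)

        # Kalman gain and analysis update with perturbed observations.
        K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)
        Yp = y[:, None] + rng.multivariate_normal(np.zeros(n_obs), R, size=n_ens).T
        Xa = Xf + K @ (Yp - H @ Xf)

        print("forecast spread:", A.std(),
              "analysis spread:", (Xa - Xa.mean(axis=1, keepdims=True)).std())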

  19. Molecular Dynamics and Monte Carlo simulations in the microcanonical ensemble: Quantitative comparison and reweighting techniques

    Science.gov (United States)

    Schierz, Philipp; Zierenberg, Johannes; Janke, Wolfhard

    2015-10-01

    Molecular Dynamics (MD) and Monte Carlo (MC) simulations are the most popular simulation techniques for many-particle systems. Although they are often applied to similar systems, it is unclear to which extent one has to expect quantitative agreement of the two simulation techniques. In this work, we present a quantitative comparison of MD and MC simulations in the microcanonical ensemble. For three test examples, we study first- and second-order phase transitions with a focus on liquid-gas like transitions. We present MD analysis techniques to compensate for conservation law effects due to linear and angular momentum conservation. Additionally, we apply the weighted histogram analysis method to microcanonical histograms reweighted from MD simulations. By this means, we are able to estimate the density of states from many microcanonical simulations at various total energies. This further allows us to compute estimates of canonical expectation values.

  20. Molecular Dynamics and Monte Carlo simulations in the microcanonical ensemble: Quantitative comparison and reweighting techniques.

    Science.gov (United States)

    Schierz, Philipp; Zierenberg, Johannes; Janke, Wolfhard

    2015-10-01

    Molecular Dynamics (MD) and Monte Carlo (MC) simulations are the most popular simulation techniques for many-particle systems. Although they are often applied to similar systems, it is unclear to which extent one has to expect quantitative agreement of the two simulation techniques. In this work, we present a quantitative comparison of MD and MC simulations in the microcanonical ensemble. For three test examples, we study first- and second-order phase transitions with a focus on liquid-gas like transitions. We present MD analysis techniques to compensate for conservation law effects due to linear and angular momentum conservation. Additionally, we apply the weighted histogram analysis method to microcanonical histograms reweighted from MD simulations. By this means, we are able to estimate the density of states from many microcanonical simulations at various total energies. This further allows us to compute estimates of canonical expectation values. PMID:26450299
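
    A minimal sketch, not taken from the paper, of the final reweighting step: given a density of states g(E) on an energy grid (here an exactly known toy g(E) rather than one estimated from simulations), canonical expectation values at inverse temperature beta follow from Boltzmann reweighting, with a log-sum-exp shift for numerical stability.

        # Canonical averages from a density of states:
        # <A>_beta = sum_E g(E) A(E) exp(-beta E) / sum_E g(E) exp(-beta E).
        import numpy as np

        def canonical_average(E, ln_g, A, beta):
            """Reweight microcanonical data to a canonical average at inverse temperature beta."""
            ln_w = ln_g - beta * E
            ln_w -= ln_w.max()                 # log-sum-exp shift for numerical stability
            w = np.exp(ln_w)
            return float(np.sum(w * A) / np.sum(w))

        # Toy example: ideal-gas-like density of states g(E) ~ E^(f/2 - 1), observable A(E) = E.
        E = np.linspace(0.01, 120.0, 12000)
        f = 30                                 # degrees of freedom of a hypothetical small system
        ln_g = (f / 2 - 1) * np.log(E)

        for beta in (0.5, 1.0, 2.0):
            mean_E = canonical_average(E, ln_g, E, beta)
            print(f"beta={beta}: <E> = {mean_E:.2f}  (equipartition predicts {f / (2 * beta):.2f})")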

  1. Reconciliation of Statistical Mechanics and Astro-Physical Statistics. The errors of conventional canonical thermostatistics

    CERN Document Server

    Gross, D H E

    2005-01-01

    Conventional thermo-statistics address infinite homogeneous systems within the canonical ensemble. (Only in this case is this equivalent to the fundamental microcanonical ensemble.) However, some 170 years ago the original motivation of thermodynamics was the description of steam engines, i.e. boiling water. Its essential physics is the separation of the gas phase from the liquid. Of course, boiling water is inhomogeneous and as such cannot be treated by conventional thermo-statistics. It is then not astonishing that a phase transition of first order is signaled canonically by a Yang-Lee singularity. Thus it is only treated correctly by microcanonical Boltzmann-Planck statistics. It turns out that the Boltzmann-Planck statistics is much richer and gives fundamental insight into statistical mechanics and especially into entropy. This can be done to a far extent rigorously and analytically. As no extensivity, no thermodynamic limit, no concavity, no homogeneity is needed, it also applies to astro-physical syst...

  2. Canonical path integral quantization of Einstein's gravitational field

    OpenAIRE

    Muslih, Sami I.

    2000-01-01

    The connection between the canonical and the path integral formulations of Einstein's gravitational field is discussed using the Hamilton-Jacobi method. Unlike conventional methods, it is shown that our path integral method yields the measure of integration with no $\delta$-functions, no need to fix any gauge, and hence no ambiguous determinants appear.

  3. Particle Swarm Optimization Based Selective Ensemble of Online Sequential Extreme Learning Machine

    OpenAIRE

    Yang Liu; Bo He; Diya Dong; Yue Shen; Tianhong Yan; Rui Nian; Amaury Lendasse

    2015-01-01

    A novel particle swarm optimization based selective ensemble (PSOSEN) of online sequential extreme learning machine (OS-ELM) is proposed. It is based on the original OS-ELM with an adaptive selective ensemble framework. Two novel insights are proposed in this paper. First, a novel selective ensemble algorithm referred to as particle swarm optimization selective ensemble is proposed, noting that PSOSEN is a general selective ensemble method which is applicable to any learning algorithms, inclu...

  4. Calibrating ensemble reliability whilst preserving spatial structure

    Directory of Open Access Journals (Sweden)

    Jonathan Flowerdew

    2014-03-01

    Full Text Available Ensemble forecasts aim to improve decision-making by predicting a set of possible outcomes. Ideally, these would provide probabilities which are both sharp and reliable. In practice, the models, data assimilation and ensemble perturbation systems are all imperfect, leading to deficiencies in the predicted probabilities. This paper presents an ensemble post-processing scheme which directly targets local reliability, calibrating both climatology and ensemble dispersion in one coherent operation. It makes minimal assumptions about the underlying statistical distributions, aiming to extract as much information as possible from the original dynamic forecasts and support statistically awkward variables such as precipitation. The output is a set of ensemble members preserving the spatial, temporal and inter-variable structure from the raw forecasts, which should be beneficial to downstream applications such as hydrological models. The calibration is tested on three leading 15-d ensemble systems, and their aggregation into a simple multimodel ensemble. Results are presented for 12 h, 1° scale over Europe for a range of surface variables, including precipitation. The scheme is very effective at removing unreliability from the raw forecasts, whilst generally preserving or improving statistical resolution. In most cases, these benefits extend to the rarest events at each location within the 2-yr verification period. The reliability and resolution are generally equivalent or superior to those achieved using a Local Quantile-Quantile Transform, an established calibration method which generalises bias correction. The value of preserving spatial structure is demonstrated by the fact that 3×3 averages derived from grid-scale precipitation calibration perform almost as well as direct calibration at 3×3 scale, and much better than a similar test neglecting the spatial relationships. Some remaining issues are discussed regarding the finite size of the output
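
    For orientation, a minimal sketch of the quantile-quantile (quantile-mapping) idea that the Local Quantile-Quantile Transform baseline generalizes: each raw forecast value is mapped through the forecast climatology's empirical CDF onto the observed climatology's quantiles. The synthetic climatologies and the raw ensemble are assumptions; the paper's own scheme, which also calibrates dispersion and preserves spatial structure, is not reproduced here.

        # Minimal quantile-mapping calibration of a biased forecast climatology (illustrative only).
        import numpy as np

        rng = np.random.default_rng(3)
        obs_climo = rng.gamma(shape=2.0, scale=3.0, size=5000)                 # "observed" climatology
        fcst_climo = 1.5 * rng.gamma(shape=2.0, scale=3.0, size=5000) + 2.0    # biased forecast climatology

        def quantile_map(x, fcst_ref, obs_ref):
            """Map value(s) x through the forecast CDF onto observed quantiles."""
            # Empirical non-exceedance probability of x within the forecast climatology.
            p = np.searchsorted(np.sort(fcst_ref), x) / len(fcst_ref)
            p = np.clip(p, 0.0, 1.0)
            return np.quantile(obs_ref, p)

        raw_members = 1.5 * rng.gamma(shape=2.0, scale=3.0, size=50) + 2.0     # one raw ensemble
        calibrated = quantile_map(raw_members, fcst_climo, obs_climo)

        print("raw mean %.2f -> calibrated mean %.2f (obs climo mean %.2f)"
              % (raw_members.mean(), calibrated.mean(), obs_climo.mean()))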

  5. Imprinting and recalling cortical ensembles.

    Science.gov (United States)

    Carrillo-Reid, Luis; Yang, Weijian; Bando, Yuki; Peterka, Darcy S; Yuste, Rafael

    2016-08-12

    Neuronal ensembles are coactive groups of neurons that may represent building blocks of cortical circuits. These ensembles could be formed by Hebbian plasticity, whereby synapses between coactive neurons are strengthened. Here we report that repetitive activation with two-photon optogenetics of neuronal populations from ensembles in the visual cortex of awake mice builds neuronal ensembles that recur spontaneously after being imprinted and do not disrupt preexisting ones. Moreover, imprinted ensembles can be recalled by single-cell stimulation and remain coactive on consecutive days. Our results demonstrate the persistent reconfiguration of cortical circuits by two-photon optogenetics into neuronal ensembles that can perform pattern completion. PMID:27516599

  6. Canonical group quantization and boundary conditions

    Energy Technology Data Exchange (ETDEWEB)

    Jung, Florian

    2012-07-16

    In the present thesis, we study quantization of classical systems with non-trivial phase spaces using the group-theoretical quantization technique proposed by Isham. Our main goal is a better understanding of global and topological aspects of quantum theory. In practice, the group-theoretical approach enables direct quantization of systems subject to constraints and boundary conditions in a natural and physically transparent manner -- cases for which the canonical quantization method of Dirac fails. First, we provide a clarification of the quantization formalism. In contrast to prior treatments, we introduce a sharp distinction between the two group structures that are involved and explain their physical meaning. The benefit is a consistent and conceptually much clearer construction of the Canonical Group. In particular, we shed light upon the 'pathological' case for which the Canonical Group must be defined via a central Lie algebra extension and emphasise the role of the central extension in general. In addition, we study direct quantization of a particle restricted to a half-line with 'hard wall' boundary condition. Despite the apparent simplicity of this example, we show that a naive quantization attempt based on the cotangent bundle over the half-line as classical phase space leads to an incomplete quantum theory; the reflection which is a characteristic aspect of the 'hard wall' is not reproduced. Instead, we propose a different phase space that realises the necessary boundary condition as a topological feature and demonstrate that quantization yields a suitable quantum theory for the half-line model. The insights gained in the present special case improve our understanding of the relation between classical and quantum theory and illustrate how contact interactions may be incorporated.

  7. Canonical group quantization and boundary conditions

    International Nuclear Information System (INIS)

    In the present thesis, we study quantization of classical systems with non-trivial phase spaces using the group-theoretical quantization technique proposed by Isham. Our main goal is a better understanding of global and topological aspects of quantum theory. In practice, the group-theoretical approach enables direct quantization of systems subject to constraints and boundary conditions in a natural and physically transparent manner -- cases for which the canonical quantization method of Dirac fails. First, we provide a clarification of the quantization formalism. In contrast to prior treatments, we introduce a sharp distinction between the two group structures that are involved and explain their physical meaning. The benefit is a consistent and conceptually much clearer construction of the Canonical Group. In particular, we shed light upon the 'pathological' case for which the Canonical Group must be defined via a central Lie algebra extension and emphasise the role of the central extension in general. In addition, we study direct quantization of a particle restricted to a half-line with 'hard wall' boundary condition. Despite the apparent simplicity of this example, we show that a naive quantization attempt based on the cotangent bundle over the half-line as classical phase space leads to an incomplete quantum theory; the reflection which is a characteristic aspect of the 'hard wall' is not reproduced. Instead, we propose a different phase space that realises the necessary boundary condition as a topological feature and demonstrate that quantization yields a suitable quantum theory for the half-line model. The insights gained in the present special case improve our understanding of the relation between classical and quantum theory and illustrate how contact interactions may be incorporated.

  8. A Molecular Based Osmotic Ensemble Monte Carlo Simulation Method for Free Energy Solvation Curves and the Direct Calculation of Aqueous Electrolyte Solubility

    Czech Academy of Sciences Publication Activity Database

    Smith, W.R.; Moučka, F.; Lísal, Martin

    Saint Petersburg: Saint Petersburg State University, 2011 - (Gotlib, I.; Victorov, A.; Smirnova, N.), s. 27 ISBN 5-85263-061-6. [European Symposium on Applied Thermodynamics /25./. Saint Petersburg (RU), 24.06.2011-27.06.2011] Institutional research plan: CEZ:AV0Z40720504 Keywords : general methodology * osmotic ensemble monte carlo * water molecules Subject RIV: CF - Physical ; Theoretical Chemistry

  9. Neural Network Ensembles

    DEFF Research Database (Denmark)

    Hansen, Lars Kai; Salamon, Peter

    1990-01-01

    We propose several means for improving the performance and training of neural networks for classification. We use cross-validation as a tool for optimizing network parameters and architecture. We show further that the remaining generalization error can be reduced by invoking ensembles of similar networks.

  10. De praeceptis ferendis: good practice in multi-model ensembles

    Directory of Open Access Journals (Sweden)

    I. Kioutsioukis

    2014-06-01

    Full Text Available Ensembles of air quality models have been formally and empirically shown to outperform single models in many cases. Evidence suggests that ensemble error is reduced when the members form a diverse and accurate ensemble. Diversity and accuracy are hence two factors that should be taken into account when designing ensembles if they are to provide better predictions. There exists a trade-off between diversity and accuracy: one cannot be gained without expense of the other. Theoretical aspects like the bias-variance-covariance decomposition and the accuracy-diversity decomposition are linked together and support the importance of creating ensembles that incorporate both elements. Hence, the common practice of unconditional averaging of models without prior manipulation limits the advantages of ensemble averaging. We demonstrate the importance of ensemble accuracy and diversity through an inter-comparison of ensemble products for which a sound mathematical framework exists, and provide specific recommendations for model selection and weighting for multi-model ensembles. To this end we have devised statistical tools that can be used for diagnostic evaluation of ensemble modelling products, complementing existing operational methods.
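
    A minimal numerical illustration, with invented data, of the accuracy-diversity (ambiguity) decomposition mentioned above: for a weighted-average ensemble, the ensemble squared error equals the weighted mean member error minus the weighted mean ambiguity (spread of the members around the ensemble mean).

        # Numerical check of the ambiguity decomposition:
        # ensemble error = weighted member error - weighted ambiguity.
        import numpy as np

        rng = np.random.default_rng(4)
        n_models, n_cases = 5, 1000

        truth = rng.normal(size=n_cases)
        # Hypothetical member predictions: truth + model-specific bias + noise.
        preds = truth + rng.normal(size=(n_models, 1)) * 0.3 \
                      + rng.normal(size=(n_models, n_cases)) * 0.5

        w = rng.random(n_models)
        w /= w.sum()                                   # convex weights

        ens = w @ preds                                # weighted-average ensemble prediction
        ens_err = np.mean((ens - truth) ** 2)
        member_err = np.mean(w @ (preds - truth) ** 2)   # weighted mean member error
        ambiguity = np.mean(w @ (preds - ens) ** 2)      # weighted mean spread about the ensemble

        print(f"ensemble error {ens_err:.4f} = {member_err:.4f} - {ambiguity:.4f} "
              f"= {member_err - ambiguity:.4f}")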

  11. Black Hole Statistical Mechanics and The Angular Velocity Ensemble

    OpenAIRE

    Thomson, Mitchell; Dyer, Charles C.

    2012-01-01

    A new ensemble - the angular velocity ensemble - is derived using Jaynes' method of maximising entropy subject to prior information constraints. The relevance of the ensemble to black holes is motivated by a discussion of external parameters in statistical mechanics and their absence from the Hamiltonian of general relativity. It is shown how this leads to difficulty in deriving entropy as a function of state and recovering the first law of thermodynamics from the microcanonical and canonica...

  12. Enhanced ensemble-based 4DVar scheme for data assimilation

    OpenAIRE

    Yang, Yin; Robinson, Cordelia; Heitz, Dominique; Mémin, Etienne

    2015-01-01

    Ensemble based optimal control schemes combine the components of ensemble Kalman filters and variational data assimilation (4DVar). They are trendy because they are easier to implement than 4DVar. In this paper, we evaluate a modified version of an ensemble based optimal control strategy for image data assimilation. This modified method is assessed with a Shallow Water model combined with synthetic data and original incomplete experimental depth sensor observations. ...

  13. Dibaryons as canonically quantized biskyrmions

    CERN Document Server

    Krupovnickas, T; Riska, D O

    2000-01-01

    The characteristic feature of the ground state configuration of the Skyrme model description of nuclei is the absence of recognizable individual nucleons. The ground state of the skyrmion with baryon number 2 is axially symmetric, and is well approximated by a simple rational map, which represents a direct generalization of Skyrme's hedgehog ansatz for the nucleon. If the Lagrangian density is canonically quantized this configuration may support excitations that lie close to, and possibly below, the threshold for pion decay, and therefore describe dibaryons. The quantum corrections stabilize these solutions, the mass density of which has the correct exponential fall-off at large distances.

  14. Canonical metrics on complex manifold

    Institute of Scientific and Technical Information of China (English)

    YAU Shing-Tung

    2008-01-01

    Complex manifolds are topological spaces that are covered by coordinate charts where the coordinate changes are given by holomorphic transformations. For example, Riemann surfaces are one-dimensional complex manifolds. In order to understand complex manifolds, it is useful to introduce metrics that are compatible with the complex structure. In general, we should have a pair (M, ds^2_M) where ds^2_M is the metric. The metric is said to be canonical if any biholomorphisms of the complex manifolds are automatically isometries. Such metrics can naturally be used to describe invariants of the complex structures of the manifold.

  15. Canonical metrics on complex manifold

    Institute of Scientific and Technical Information of China (English)

    YAU Shing-Tung (Yau, S.-T.)

    2008-01-01

    Complex manifolds are topological spaces that are covered by coordinate charts where the coordinate changes are given by holomorphic transformations. For example, Riemann surfaces are one-dimensional complex manifolds. In order to understand complex manifolds, it is useful to introduce metrics that are compatible with the complex structure. In general, we should have a pair (M, ds^2_M) where ds^2_M is the metric. The metric is said to be canonical if any biholomorphisms of the complex manifolds are automatically isometries. Such metrics can naturally be used to describe invariants of the complex structures of the manifold.

  16. Gauge fixing and canonical quantization

    International Nuclear Information System (INIS)

    We study the canonical quantization of non-Abelian gauge fields in the temporal gauge A0 = 0. We impose the constraint condition of Gauss's law by performing a point transformation into any of a large class of noncovariant gauges. The Faddeev and Popov operator arises naturally in this procedure; indeed, we prove the equivalence of all gauges in this class. We discuss the nonexistence of some simple gauges and show how topological considerations reduce the theory to quantum mechanics on an infinite-dimensional periodic hypersurface

  17. Visualizing ensembles in structural biology.

    Science.gov (United States)

    Melvin, Ryan L; Salsbury, Freddie R

    2016-06-01

    Displaying a single representative conformation of a biopolymer rather than an ensemble of states mistakenly conveys a static nature rather than the actual dynamic personality of biopolymers. However, there are few apparent options due to the fixed nature of print media. Here we suggest a standardized methodology for visually indicating the distribution width, standard deviation and uncertainty of ensembles of states with little loss of the visual simplicity of displaying a single representative conformation. Of particular note is that the visualization method employed clearly distinguishes between isotropic and anisotropic motion of polymer subunits. We also apply this method to ligand binding, suggesting a way to indicate the expected error in many high throughput docking programs when visualizing the structural spread of the output. We provide several examples in the context of nucleic acids and proteins with particular insights gained via this method. Such examples include investigating a therapeutic polymer of FdUMP (5-fluoro-2-deoxyuridine-5-O-monophosphate) - a topoisomerase-1 (Top1), apoptosis-inducing poison - and nucleotide-binding proteins responsible for ATP hydrolysis from Bacillus subtilis. We also discuss how these methods can be extended to any macromolecular data set with an underlying distribution, including experimental data such as NMR structures. PMID:27179343

  18. A Localized Ensemble Kalman Smoother

    Science.gov (United States)

    Butala, Mark D.

    2012-01-01

    Numerous geophysical inverse problems prove difficult because the available measurements are indirectly related to the underlying unknown dynamic state and the physics governing the system may involve imperfect models or unobserved parameters. Data assimilation addresses these difficulties by combining the measurements and physical knowledge. The main challenge in such problems is usually their high dimensionality, for which standard statistical methods prove computationally intractable. This paper develops and addresses the theoretical convergence of a new high-dimensional Monte-Carlo approach called the localized ensemble Kalman smoother.

  19. Generalized Gibbs ensemble in a nonintegrable system with an extensive number of local symmetries

    Science.gov (United States)

    Hamazaki, Ryusuke; Ikeda, Tatsuhiko N.; Ueda, Masahito

    2016-03-01

    We numerically study the unitary time evolution of a nonintegrable model of hard-core bosons with an extensive number of local Z2 symmetries. We find that the expectation values of local observables in the stationary state are described better by the generalized Gibbs ensemble (GGE) than by the canonical ensemble. We also find that the eigenstate thermalization hypothesis fails for the entire spectrum but holds true within each symmetry sector, which justifies the GGE. In contrast, if the model has only one global Z2 symmetry or a size-independent number of local Z2 symmetries, we find that the stationary state is described by the canonical ensemble. Thus, the GGE is necessary to describe the stationary state even in a nonintegrable system if it has an extensive number of local symmetries.

  20. Face hallucination using orthogonal canonical correlation analysis

    Science.gov (United States)

    Zhou, Huiling; Lam, Kin-Man

    2016-05-01

    A two-step face-hallucination framework is proposed to reconstruct a high-resolution (HR) version of a face from an input low-resolution (LR) face, based on learning from LR-HR example face pairs using orthogonal canonical correlation analysis (orthogonal CCA) and linear mapping. In the proposed algorithm, face images are first represented using principal component analysis (PCA). Canonical correlation analysis (CCA) with the orthogonality property is then employed, to maximize the correlation between the PCA coefficients of the LR and the HR face pairs to improve the hallucination performance. The original CCA does not own the orthogonality property, which is crucial for information reconstruction. We propose using orthogonal CCA, which is proven by experiments to achieve a better performance in terms of global face reconstruction. In addition, in the residual-compensation process, a linear-mapping method is proposed to include both the inter- and intrainformation about manifolds of different resolutions. Compared with other state-of-the-art approaches, the proposed framework can achieve a comparable, or even better, performance in terms of global face reconstruction and the visual quality of face hallucination. Experiments on images with various parameter settings and blurring distortions show that the proposed approach is robust and has great potential for real-world applications.

  1. Bayesian ensemble refinement by replica simulations and reweighting

    CERN Document Server

    Hummer, Gerhard

    2015-01-01

    We describe different Bayesian ensemble refinement methods, examine their interrelation, and discuss their practical application. With ensemble refinement, the properties of dynamic and partially disordered (bio)molecular structures can be characterized by integrating a wide range of experimental data, including measurements of ensemble-averaged observables. We start from a Bayesian formulation in which the posterior is a functional that ranks different configuration space distributions. By maximizing this posterior, we derive an optimal Bayesian ensemble distribution. For discrete configurations, this optimal distribution is identical to that obtained by the maximum entropy "ensemble refinement of SAXS" (EROS) formulation. Bayesian replica ensemble refinement enhances the sampling of relevant configurations by imposing restraints on averages of observables in coupled replica molecular dynamics simulations. We find that the strength of the restraint scales with the number of replicas and we show that this sca...

  2. Partition Function of Interacting Calorons Ensemble

    CERN Document Server

    Deldar, Sedigheh

    2015-01-01

    We present a method for computing the partition function of a caloron ensemble taking into account the interaction of calorons. We focus on the caloron-Dirac string interaction and show that the metric that Diakonov and Petrov offered works well in the limit where this interaction occurs. We suggest computing the correlation function of two Polyakov loops by applying Ewald's method.

  3. Partition function of interacting calorons ensemble

    Science.gov (United States)

    Deldar, S.; Kiamari, M.

    2016-01-01

    We present a method for computing the partition function of a caloron ensemble taking into account the interaction of calorons. We focus on the caloron-Dirac string interaction and show that the metric that Diakonov and Petrov offered works well in the limit where this interaction occurs. We suggest computing the correlation function of two Polyakov loops by applying Ewald's method.

  4. Simulations in generalized ensembles through noninstantaneous switches

    Science.gov (United States)

    Giovannelli, Edoardo; Cardini, Gianni; Chelli, Riccardo

    2015-10-01

    Generalized-ensemble simulations, such as replica exchange and serial generalized-ensemble methods, are powerful simulation tools to enhance sampling of free energy landscapes in systems with high energy barriers. In these methods, sampling is enhanced through instantaneous transitions of replicas, i.e., copies of the system, between different ensembles characterized by some control parameter associated with thermodynamical variables (e.g., temperature or pressure) or collective mechanical variables (e.g., interatomic distances or torsional angles). An interesting evolution of these methodologies has been proposed by replacing the conventional instantaneous (trial) switches of replicas with noninstantaneous switches, realized by varying the control parameter in a finite time and accepting the final replica configuration with a Metropolis-like criterion based on the Crooks nonequilibrium work (CNW) theorem. Here we revise these techniques focusing on their correlation with the CNW theorem in the framework of Markovian processes. An outcome of this report is the derivation of the acceptance probability for noninstantaneous switches in serial generalized-ensemble simulations, where we show that explicit knowledge of the time dependence of the weight factors entering such simulations is not necessary. A generalized relationship of the CNW theorem is also provided in terms of the underlying equilibrium probability distribution at a fixed control parameter. Illustrative calculations on a toy model are performed with serial generalized-ensemble simulations, especially focusing on the different behavior of instantaneous and noninstantaneous replica transition schemes.
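
    To make the acceptance rule concrete, here is a toy sketch, not the authors' derivation, of a nonequilibrium-switch Monte Carlo move at fixed inverse temperature: the control parameter lambda of a harmonic potential is ramped in a finite number of steps while the accumulated work W is recorded, and the final configuration is accepted with the Metropolis-like probability min(1, exp(-beta*W)). The overdamped Langevin dynamics and all parameters are assumptions, and the ensemble weight factors of a serial generalized-ensemble simulation are taken as equal and omitted.

        # Toy nonequilibrium-switch MC move: ramp lambda in n_switch steps, accept with min(1, exp(-beta*W)).
        import numpy as np

        rng = np.random.default_rng(5)
        beta, dt, n_switch, n_trials = 1.0, 0.01, 50, 500
        lam_a, lam_b = 1.0, 4.0                      # initial and final control parameter

        def U(x, lam):
            return 0.5 * lam * x * x                 # harmonic potential with stiffness lambda

        def langevin_step(x, lam):
            """One overdamped Langevin (Euler-Maruyama) step at fixed lambda."""
            return x - lam * x * dt + np.sqrt(2.0 * dt / beta) * rng.normal()

        n_accept = 0
        for _ in range(n_trials):
            # Start from an equilibrium sample of the initial ensemble (stiffness lam_a).
            x = rng.normal(scale=1.0 / np.sqrt(beta * lam_a))
            work = 0.0
            for k in range(n_switch):
                lam_old = lam_a + (lam_b - lam_a) * k / n_switch
                lam_new = lam_a + (lam_b - lam_a) * (k + 1) / n_switch
                work += U(x, lam_new) - U(x, lam_old)   # work done by moving the control parameter
                x = langevin_step(x, lam_new)           # brief relaxation at the new lambda
            # Metropolis-like acceptance based on the accumulated nonequilibrium work.
            if rng.random() < min(1.0, np.exp(-beta * work)):
                n_accept += 1

        print("acceptance rate of the finite-time a -> b switch:", n_accept / n_trials)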

  5. Multiple Time-Step Dual-Hamiltonian Hybrid Molecular Dynamics - Monte Carlo Canonical Propagation Algorithm.

    Science.gov (United States)

    Chen, Yunjie; Kale, Seyit; Weare, Jonathan; Dinner, Aaron R; Roux, Benoît

    2016-04-12

    A multiple time-step integrator based on a dual Hamiltonian and a hybrid method combining molecular dynamics (MD) and Monte Carlo (MC) is proposed to sample systems in the canonical ensemble. The Dual Hamiltonian Multiple Time-Step (DHMTS) algorithm is based on two similar Hamiltonians: a computationally expensive one that serves as a reference and a computationally inexpensive one to which the workload is shifted. The central assumption is that the difference between the two Hamiltonians is slowly varying. Earlier work has shown that such dual Hamiltonian multiple time-step schemes effectively precondition nonlinear differential equations for dynamics by reformulating them into a recursive root finding problem that can be solved by propagating a correction term through an internal loop, analogous to RESPA. Of special interest in the present context, a hybrid MD-MC version of the DHMTS algorithm is introduced to enforce detailed balance via a Metropolis acceptance criterion and ensure consistency with the Boltzmann distribution. The Metropolis criterion suppresses the discretization errors normally associated with the propagation according to the computationally inexpensive Hamiltonian, treating the discretization error as an external work. Illustrative tests are carried out to demonstrate the effectiveness of the method. PMID:26918826
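
    The following toy sketch is an assumption-laden illustration of the underlying hybrid MD-MC idea rather than the DHMTS integrator itself: a short trajectory is generated with a cheap surrogate potential, and the move is accepted with a Metropolis test on the total energy change of the expensive (reference) potential, which removes both the surrogate bias and the integrator's discretization error.

        # Hybrid MD-MC with a cheap proposal Hamiltonian and a Metropolis test on the expensive one.
        import numpy as np

        rng = np.random.default_rng(6)
        beta, dt, n_md, n_steps = 1.0, 0.05, 20, 20000

        def U_ref(x):            # "expensive" reference potential (the one we want to sample)
            return 0.5 * x**2 + 0.1 * x**4

        def F_cheap(x):          # force of the cheap surrogate potential used for propagation
            return -x

        def propose(x):
            """Velocity-Verlet trajectory under the cheap force; returns end point and momenta."""
            p0 = rng.normal() / np.sqrt(beta)          # fresh momentum, unit mass
            x_new, p = x, p0
            for _ in range(n_md):
                p += 0.5 * dt * F_cheap(x_new)
                x_new += dt * p
                p += 0.5 * dt * F_cheap(x_new)
            return x_new, p0, p

        x, samples = 0.0, []
        for _ in range(n_steps):
            x_new, p0, p_new = propose(x)
            # Metropolis test on the REFERENCE Hamiltonian: corrects surrogate and integration error.
            dH = (U_ref(x_new) + 0.5 * p_new**2) - (U_ref(x) + 0.5 * p0**2)
            if rng.random() < min(1.0, np.exp(-beta * dH)):
                x = x_new
            samples.append(x)

        print("sampled <x^2> under the reference potential:", np.mean(np.square(samples)))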

  6. Ensemble approach for differentiation of malignant melanoma

    Science.gov (United States)

    Rastgoo, Mojdeh; Morel, Olivier; Marzani, Franck; Garcia, Rafael

    2015-04-01

    Melanoma is the deadliest type of skin cancer, yet it is the most treatable kind if diagnosed early. The early prognosis of melanoma is a challenging task for both clinicians and dermatologists. Due to the importance of early diagnosis and in order to assist the dermatologists, we propose an automated framework based on ensemble learning methods and dermoscopy images to differentiate melanoma from dysplastic and benign lesions. The evaluation of our framework on the recent and public dermoscopy benchmark (PH2 dataset) indicates the potential of the proposed method. Our evaluation, using only global features, revealed that ensembles such as random forest perform better than a single learner. Using a random forest ensemble and a combination of color and texture features, our framework achieved the highest sensitivity of 94% and specificity of 92%.
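
    A minimal sketch of the classification stage only, assuming a precomputed feature matrix of global color/texture descriptors and binary labels; the feature extraction from dermoscopy images, the PH2 data, and the exact evaluation protocol of the paper are not reproduced, and the stand-in data are random.

        # Random-forest ensemble on precomputed lesion features (illustrative stand-in data).
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_predict
        from sklearn.metrics import confusion_matrix

        rng = np.random.default_rng(7)
        n_lesions, n_features = 200, 30               # hypothetical sizes
        X = rng.normal(size=(n_lesions, n_features))  # stand-in for color + texture descriptors
        y = rng.integers(0, 2, size=n_lesions)        # 1 = melanoma, 0 = benign/dysplastic (toy labels)

        clf = RandomForestClassifier(n_estimators=300, random_state=0)
        y_pred = cross_val_predict(clf, X, y, cv=5)

        tn, fp, fn, tp = confusion_matrix(y, y_pred).ravel()
        sensitivity = tp / (tp + fn)
        specificity = tn / (tn + fp)
        print(f"sensitivity {sensitivity:.2f}, specificity {specificity:.2f} (random features, so ~0.5)")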

  7. Canonical Entropy and Phase Transition of Rotating Black Hole

    Institute of Scientific and Technical Information of China (English)

    ZHAO Ren; WU Yue-Qin; ZHANG Li-Chun

    2008-01-01

    Recently, the Hawking radiation of a black hole has been studied using the tunnel effect method, and the radiation spectrum of a black hole is derived. By discussing the correction to the spectrum of the rotating black hole, we obtain the canonical entropy. The derived canonical entropy is equal to the sum of the Bekenstein-Hawking entropy and a correction term. The correction term near the critical point differs from that near other points. This difference plays an important role in studying the phase transition of the black hole. The black hole thermal capacity diverges at the critical point. However, the canonical entropy is not a complex number at this point. Thus we think that the phase transition created by this critical point is a second-order phase transition. The discussed black hole is a five-dimensional Kerr-AdS black hole. We provide a basis for discussing thermodynamic properties of a higher-dimensional rotating black hole.

  8. A multisite seasonal ensemble streamflow forecasting technique

    Science.gov (United States)

    Bracken, Cameron; Rajagopalan, Balaji; Prairie, James

    2010-03-01

    We present a technique for providing seasonal ensemble streamflow forecasts at several locations simultaneously on a river network. The framework is an integration of two recent approaches: the nonparametric multimodel ensemble forecast technique and the nonparametric space-time disaggregation technique. The four main components of the proposed framework are as follows: (1) an index gauge streamflow is constructed as the sum of flows at all the desired spatial locations; (2) potential predictors of the spring season (April-July) streamflow at this index gauge are identified from the large-scale ocean-atmosphere-land system, including snow water equivalent; (3) the multimodel ensemble forecast approach is used to generate the ensemble flow forecast at the index gauge; and (4) the ensembles are disaggregated using a nonparametric space-time disaggregation technique resulting in forecast ensembles at the desired locations and for all the months within the season. We demonstrate the utility of this technique in skillful forecast of spring seasonal streamflows at four locations in the Upper Colorado River Basin at different lead times. Where applicable, we compare the forecasts to the Colorado Basin River Forecast Center's Ensemble Streamflow Prediction (ESP) and the National Resource Conservation Service "coordinated" forecast, which is a combination of the ESP, Statistical Water Supply, a principal component regression technique, and modeler knowledge. We find that overall, the proposed method is equally skillful to existing operational models while tending to better predict wet years. The forecasts from this approach can be a valuable input for efficient planning and management of water resources in the basin.
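
    A highly simplified sketch, with synthetic data, of the last two components of the pipeline described above: an ensemble is generated at the index gauge by resampling historical years with similar predictor values (a k-nearest-neighbour stand-in for the multimodel ensemble forecast), and each member is then disaggregated to the individual sites using the spatial proportions of the resampled historical year. The predictor, gauge network, and k are assumptions; the nonparametric details of the authors' method are not reproduced.

        # Toy index-gauge ensemble forecast + proportional space-time disaggregation.
        import numpy as np

        rng = np.random.default_rng(8)
        n_years, n_sites, k, n_members = 40, 4, 5, 100

        # Synthetic history: site flows driven by a common predictor (e.g. snow water equivalent).
        predictor = rng.normal(size=n_years)
        site_flows = np.exp(0.8 * predictor[:, None] + 0.3 * rng.normal(size=(n_years, n_sites)))
        index_flow = site_flows.sum(axis=1)                    # index gauge = sum over sites

        def forecast(pred_now):
            """k-NN resampling ensemble at the index gauge, then disaggregation to sites."""
            nn = np.argsort(np.abs(predictor - pred_now))[:k]  # k most similar historical years
            members = []
            for _ in range(n_members):
                yr = rng.choice(nn)                            # resample one analogue year
                total = index_flow[yr] * (1.0 + 0.1 * rng.normal())   # perturbed index-gauge flow
                proportions = site_flows[yr] / index_flow[yr]  # that year's spatial pattern
                members.append(total * proportions)            # disaggregate to the sites
            return np.array(members)                           # shape (n_members, n_sites)

        ens = forecast(pred_now=1.0)
        print("ensemble mean flow per site:", np.round(ens.mean(axis=0), 2))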

  9. Bayesian ensemble refinement by replica simulations and reweighting

    Science.gov (United States)

    Hummer, Gerhard; Köfinger, Jürgen

    2015-12-01

    We describe different Bayesian ensemble refinement methods, examine their interrelation, and discuss their practical application. With ensemble refinement, the properties of dynamic and partially disordered (bio)molecular structures can be characterized by integrating a wide range of experimental data, including measurements of ensemble-averaged observables. We start from a Bayesian formulation in which the posterior is a functional that ranks different configuration space distributions. By maximizing this posterior, we derive an optimal Bayesian ensemble distribution. For discrete configurations, this optimal distribution is identical to that obtained by the maximum entropy "ensemble refinement of SAXS" (EROS) formulation. Bayesian replica ensemble refinement enhances the sampling of relevant configurations by imposing restraints on averages of observables in coupled replica molecular dynamics simulations. We show that the strength of the restraints should scale linearly with the number of replicas to ensure convergence to the optimal Bayesian result in the limit of infinitely many replicas. In the "Bayesian inference of ensembles" method, we combine the replica and EROS approaches to accelerate the convergence. An adaptive algorithm can be used to sample directly from the optimal ensemble, without replicas. We discuss the incorporation of single-molecule measurements and dynamic observables such as relaxation parameters. The theoretical analysis of different Bayesian ensemble refinement approaches provides a basis for practical applications and a starting point for further investigations.
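
    As a concrete, much-reduced illustration of the reweighting limit closest to the maximum entropy "EROS" formulation discussed above, the sketch below reweights a discrete set of configurations so that ensemble-averaged observables match target "experimental" values, by minimizing the convex dual function log Z(lambda) + lambda.a_exp. The configurations, observables, and targets are synthetic, and the replica-simulation machinery is not shown.

        # Maximum-entropy reweighting of a discrete ensemble to match ensemble-averaged observables.
        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(9)
        n_conf, n_obs = 500, 3

        a = rng.normal(size=(n_conf, n_obs))       # observables a_k computed for each configuration
        w0 = np.full(n_conf, 1.0 / n_conf)         # prior (reference ensemble) weights
        a_exp = np.array([0.3, -0.2, 0.1])         # synthetic "experimental" averages to match

        def dual(lam):
            # Gamma(lambda) = log sum_i w0_i exp(-lambda.a_i) + lambda.a_exp  (convex dual)
            logw = np.log(w0) - a @ lam
            m = logw.max()
            return m + np.log(np.exp(logw - m).sum()) + lam @ a_exp

        res = minimize(dual, x0=np.zeros(n_obs), method="BFGS")
        lam = res.x

        logw = np.log(w0) - a @ lam
        w = np.exp(logw - logw.max())
        w /= w.sum()                               # optimal reweighted ensemble

        print("target averages:    ", a_exp)
        print("reweighted averages:", np.round(w @ a, 3))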

  10. Integral canonical models for Spin Shimura varieties

    OpenAIRE

    Pera, Keerthi Madapusi

    2012-01-01

    We construct regular integral canonical models for Shimura varieties attached to Spin groups at (possibly ramified) odd primes. We exhibit these models as schemes of 'relative PEL type' over integral canonical models of larger Spin Shimura varieties with good reduction. Work of Vasiu-Zink then shows that the classical Kuga-Satake construction extends over the integral model and that the integral models we construct are canonical in a very precise sense. We also construct good compactification...

  11. Kingston Soundpainting Ensemble

    OpenAIRE

    Minors, Helen Julia

    2012-01-01

    This performance is designed to introduce teachers and school musicians to this multidisciplinary live composing sign language. The Kingston Soundpainting Ensemble, led by Dr. Helen Julia Minors (soundpainter, trumpet, voice) at Kingston University, is represented by a varied set of performers using woodwind, brass, voice and percussion, spanning popular, classical and world styles. This performance consists of: Philip Warda (electronic instruments,...

  12. Effective Visualization of Temporal Ensembles.

    Science.gov (United States)

    Hao, Lihua; Healey, Christopher G; Bass, Steffen A

    2016-01-01

    An ensemble is a collection of related datasets, called members, built from a series of runs of a simulation or an experiment. Ensembles are large, temporal, multidimensional, and multivariate, making them difficult to analyze. Another important challenge is visualizing ensembles that vary both in space and time. Initial visualization techniques displayed ensembles with a small number of members, or presented an overview of an entire ensemble, but without potentially important details. Recently, researchers have suggested combining these two directions, allowing users to choose subsets of members to visualize. This manual selection process places the burden on the user to identify which members to explore. We first introduce a static ensemble visualization system that automatically helps users locate interesting subsets of members to visualize. We next extend the system to support analysis and visualization of temporal ensembles. We employ 3D shape comparison, cluster tree visualization, and glyph based visualization to represent different levels of detail within an ensemble. This strategy is used to provide two approaches for temporal ensemble analysis: (1) segment-based ensemble analysis, which captures important shape transition time-steps, clusters groups of similar members, and identifies common shape changes over time across multiple members; and (2) time-step-based ensemble analysis, which assumes ensemble members are aligned in time by combining similar shapes at common time-steps. Both approaches enable users to interactively visualize and analyze a temporal ensemble from different perspectives at different levels of detail. We demonstrate our techniques on an ensemble studying matter transition from hadronic gas to quark-gluon plasma during gold-on-gold particle collisions. PMID:26529728

  13. Uncertainty relations, zero point energy and the linear canonical group

    Science.gov (United States)

    Sudarshan, E. C. G.

    1993-01-01

    The close relationship between the zero point energy, the uncertainty relations, coherent states, squeezed states, and correlated states for one mode is investigated. This group-theoretic perspective enables the parametrization and identification of their multimode generalization. In particular the generalized Schroedinger-Robertson uncertainty relations are analyzed. An elementary method of determining the canonical structure of the generalized correlated states is presented.

  14. AN ALGORITHM FOR JORDAN CANONICAL FORM OF A QUATERNION MATRIX

    Institute of Scientific and Technical Information of China (English)

    姜同松; 魏木生

    2003-01-01

    In this paper, we first introduce the concept of a companion vector and study the Jordan canonical forms of quaternion matrices by using the methods of complex representation and companion vector. We not only give a practical algorithm for the Jordan canonical form J of a quaternion matrix A, but also provide a practical algorithm for the corresponding nonsingular matrix P with P^{-1}AP = J.

  15. Canonical curves with low apolarity

    CERN Document Server

    Ballico, Edoardo; Notari, Roberto

    2010-01-01

    Let $k$ be an algebraically closed field and let $C$ be a non-hyperelliptic smooth projective curve of genus $g$ defined over $k$. Since the canonical model of $C$ is arithmetically Gorenstein, Macaulay's theory of inverse systems allows one to associate to $C$ a cubic form $f$ in the divided power $k$-algebra $R$ in $g-2$ variables. The apolarity of $C$ is the minimal number $t$ of linear forms in $R$ needed to write $f$ as a sum of their divided power cubes. It is easy to see that the apolarity of $C$ is at least $g-2$, and P. De Poi and F. Zucconi classified curves with apolarity $g-2$ when $k$ is the complex field. In this paper, we give a complete, characteristic-free classification of curves $C$ with apolarity $g-1$ (and $g-2$).

  16. The Hydrologic Ensemble Prediction Experiment (HEPEX)

    Science.gov (United States)

    Wood, A. W.; Thielen, J.; Pappenberger, F.; Schaake, J. C.; Hartman, R. K.

    2012-12-01

    The Hydrologic Ensemble Prediction Experiment was established in March, 2004, at a workshop hosted by the European Center for Medium Range Weather Forecasting (ECMWF). With support from the US National Weather Service (NWS) and the European Commission (EC), the HEPEX goal was to bring the international hydrological and meteorological communities together to advance the understanding and adoption of hydrological ensemble forecasts for decision support in emergency management and water resources sectors. The strategy to meet this goal includes meetings that connect the user, forecast producer and research communities to exchange ideas, data and methods; the coordination of experiments to address specific challenges; and the formation of testbeds to facilitate shared experimentation. HEPEX has organized about a dozen international workshops, as well as sessions at scientific meetings (including AMS, AGU and EGU) and special issues of scientific journals where workshop results have been published. Today, the HEPEX mission is to demonstrate the added value of hydrological ensemble prediction systems (HEPS) for emergency management and water resources sectors to make decisions that have important consequences for economy, public health, safety, and the environment. HEPEX is now organised around six major themes that represent core elements of a hydrologic ensemble prediction enterprise: input and pre-processing, ensemble techniques, data assimilation, post-processing, verification, and communication and use in decision making. This poster presents an overview of recent and planned HEPEX activities, highlighting case studies that exemplify the focus and objectives of HEPEX.

  17. El canon de la periferia

    Directory of Open Access Journals (Sweden)

    Karina Beatriz Lemes

    2010-11-01

    Full Text Available We aim to show how we have been working on the reconstruction of the literary memory of the province of Misiones through the compilation of the manuscripts of its most representative authors. For our reading, in dialogue with genetic criticism, we have drawn on the relations that Fernando Ainsa establishes between canon and periphery, spaces of memory and the construction of utopia. Ainsa conceives of writing as a genetic process that is personal, visceral and solitary in origin, a constant search for identity that is enriched in contact with the world, with the opening of frontiers. These connections have allowed us to interpret the social practices that founded aesthetic activities at a distance from the Argentine centers of power. This paper shows some findings of our ongoing research project dealing with the recuperation of literary memory in the province of Misiones by analysing a compilation of the literary manuscripts by the most representative authors of this northern region of Argentina. Here, we follow Fernando Ainsa's notions of canon and periphery, of memory spaces and construction of utopias. Ainsa sees the act of writing as a genetic process for it originates within a personal, visceral, and solitary realm. For Ainsa, writing is also a permanent search for identity which becomes richer when in contact with the world, when frontiers open up. These concepts allow us to interpret the social practices that gave birth to these aesthetic projects far away from Argentina's power centers.

  18. Reversible Digital Filters Total Parametric Sensitivity Optimization using Non-canonical Hypercomplex Number Systems

    OpenAIRE

    Kalinovsky, Yakiv O.; Boyarinova, Yuliya E.; Khitsko, Iana V.

    2015-01-01

    A digital filter construction method that is optimal with respect to total parametric sensitivity and based on the use of non-canonical hypercomplex number systems is proposed and investigated. It is shown that using a non-canonical hypercomplex number system with a greater number of non-zero structure constants in the multiplication table can significantly improve the parametric sensitivity of the digital filter.

  19. Theory of Stochastic Canonical Equations of Random Matrix Physics, SOS Law, Elliptical Galactic Law, Sand Clock Law And Heart Law, Life, Sombrero and Halloween Laws

    International Nuclear Information System (INIS)

    Our studies are essentially based on the martingale differences method developed in my previous papers for resolvents of random matrices. This method exploits the self-averaging property of the entries of resolvents of random matrices and, hence, allows us to deduce the stochastic canonical equation. The lecture contains the most important results from numerous papers and books dealing with the theory of unitary random matrices and functions of random matrices. We give the REFORM method of proving all results, avoiding the method of moments. We do not try to describe here all known properties of the eigenvalues and eigenvectors for all classes of random matrices. However, our aim is rather to present the theory of stochastic canonical equations, and to give rigorous proofs of the procedures used to deduce these equations on the basis of the author's General Statistical Analysis. We consider special classes of analytic functions of random matrices. The description problem for normalized spectral functions of some analytic functions of random matrices is discussed in detail. Specifically, we present here the new theory LIFE, which is the abbreviation of Limit Independence of Functions of Ensembles. (author)

  20. Revisiting Interpretation of Canonical Correlation Analysis: A Tutorial and Demonstration of Canonical Commonality Analysis

    Science.gov (United States)

    Nimon, Kim; Henson, Robin K.; Gates, Michael S.

    2010-01-01

    In the face of multicollinearity, researchers face challenges interpreting canonical correlation analysis (CCA) results. Although standardized function and structure coefficients provide insight into the canonical variates produced, they fall short when researchers want to fully report canonical effects. This article revisits the interpretation of…

  1. Control and Synchronization of Neuron Ensembles

    CERN Document Server

    Li, Jr-Shin; Ruths, Justin

    2011-01-01

    Synchronization of oscillations is a phenomenon prevalent in natural, social, and engineering systems. Controlling synchronization of oscillating systems is motivated by a wide range of applications from neurological treatment of Parkinson's disease to the design of neurocomputers. In this article, we study the control of an ensemble of uncoupled neuron oscillators described by phase models. We examine controllability of such a neuron ensemble for various phase models and, furthermore, study the related optimal control problems. In particular, by employing Pontryagin's maximum principle, we analytically derive optimal controls for spiking single- and two-neuron systems, and analyze the applicability of the latter to an ensemble system. Finally, we present a robust computational method for optimal control of spiking neurons based on pseudospectral approximations. The methodology developed here is universal to the control of general nonlinear phase oscillators.

  2. Ensemble Enabled Weighted PageRank

    CERN Document Server

    Luo, Dongsheng; Hu, Renjun; Duan, Liang; Ma, Shuai

    2016-01-01

    This paper describes our solution for WSDM Cup 2016. Ranking the query-independent importance of scholarly articles is a critical and challenging task, due to the heterogeneity and dynamism of the entities involved. Our approach is called Ensemble enabled Weighted PageRank (EWPR). To do this, we first propose Time-Weighted PageRank, which extends PageRank by introducing a time decaying factor. We then develop an ensemble method to assemble the authorities of the heterogeneous entities involved in scholarly articles. We finally propose to use external data sources to further improve the ranking accuracy. Our experimental study shows that our EWPR is a good choice for ranking scholarly articles.
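
    A minimal sketch, under stated assumptions, of the time-decay idea only: ordinary PageRank power iteration on a small citation graph whose edge weights are multiplied by an exponential decay in the age of the citing paper. The toy graph, decay constant, and damping factor are invented, and the ensemble and external-data components of EWPR are not shown.

        # PageRank power iteration with time-decayed edge weights (toy citation graph).
        import numpy as np

        # edges: (citing paper, cited paper); years: publication year of each paper (assumed data).
        edges = [(0, 1), (0, 2), (1, 2), (2, 0), (3, 2), (3, 1), (4, 3)]
        years = {0: 2010, 1: 2012, 2: 2014, 3: 2015, 4: 2016}
        n, now, tau, d = 5, 2016, 3.0, 0.85

        # Weighted adjacency: more recent citations count more (exponential decay in citing-paper age).
        W = np.zeros((n, n))
        for src, dst in edges:
            W[src, dst] = np.exp(-(now - years[src]) / tau)

        # Column-stochastic transition matrix (dangling nodes jump uniformly).
        out = W.sum(axis=1)
        P = np.where(out[:, None] > 0, W / np.maximum(out[:, None], 1e-12), 1.0 / n).T

        r = np.full(n, 1.0 / n)
        for _ in range(100):
            r = (1 - d) / n + d * (P @ r)      # standard damped power iteration
        r /= r.sum()
        print("time-weighted PageRank scores:", np.round(r, 3))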

  3. Efficient inference of protein structural ensembles

    CERN Document Server

    Lane, Thomas J; Beauchamp, Kyle A; Pande, Vijay S

    2014-01-01

    It is becoming clear that traditional, single-structure models of proteins are insufficient for understanding their biological function. Here, we outline one method for inferring, from experiments, not only the most common structure a protein adopts (native state), but the entire ensemble of conformations the system can adopt. Such ensemble models are necessary to understand intrinsically disordered proteins, enzyme catalysis, and signaling. We suggest that the most difficult aspect of generating such a model will be finding a small set of configurations to accurately model structural heterogeneity and present one way to overcome this challenge.

  4. CANONICAL EXTENSIONS OF SYMMETRIC LINEAR RELATIONS

    NARCIS (Netherlands)

    Sandovici, Adrian; Davidson, KR; Gaspar, D; Stratila, S; Timotin, D; Vasilescu, FH

    2006-01-01

    The concept of canonical extension of Hermitian operators has been recently introduced by A. Kuzhel. This paper deals with a generalization of this notion to the case of symmetric linear relations. Namely, canonical regular extensions of symmetric linear relations in Hilbert spaces are studied. The

  5. Properties of the linear canonical integral transformation.

    Science.gov (United States)

    Alieva, Tatiana; Bastiaans, Martin J

    2007-11-01

    We provide a general expression and different classification schemes for the general two-dimensional canonical integral transformations that describe the propagation of coherent light through lossless first-order optical systems. Main theorems for these transformations, such as shift, scaling, derivation, etc., together with the canonical integral transforms of selected functions, are derived. PMID:17975592

  6. 37 CFR 10.46 - Canon 3.

    Science.gov (United States)

    2010-07-01

    ... Responsibility, § 10.46 Canon 3: A practitioner should assist in preventing the unauthorized practice of law. (37 CFR, Patents, Trademarks, and Copyrights, United States Patent and Trademark Office, Department of..., 2010-07-01.)

  7. 37 CFR 10.83 - Canon 7.

    Science.gov (United States)

    2010-07-01

    ... Responsibility, § 10.83 Canon 7: A practitioner should represent a client zealously within the bounds of the law. (37 CFR, Patents, Trademarks, and Copyrights, United States Patent and Trademark Office, Department of..., 2010-07-01.)

  8. The Current Canon in British Romantics Studies.

    Science.gov (United States)

    Linkin, Harriet Kramer

    1991-01-01

    Describes and reports on a survey of 164 U.S. universities to ascertain what is taught as the current canon of British Romantic literature. Asserts that the canon may now include Mary Shelley with the former standard six major male Romantic poets, indicating a significant emergence of a feminist perspective on British Romanticism in the classroom.…

  9. Automatic counting and recording unit used for dating by the carbon 14 method; Ensemble de comptage et d'impression automatique utilise pour la datation par la methode du carbone 14

    Energy Technology Data Exchange (ETDEWEB)

    Albertinoli, P.; Galliot, J.; Thommeret, J. [Commissariat a l' Energie Atomique, Saclay (France). Centre d' Etudes Nucleaires. Centre Scientifique de Monaco, Monte Carlo

    1969-07-01

    A description is given of the unit used by the 'Centre Scientifique de Monaco' for low-level beta counting, fitted for radioactive dating by the carbon-14 method. Built entirely by the laboratory in 1964 on the basis of electronic techniques that were then recent, it has worked without failure since that time. The proportional counter, its negative high-voltage supply, and the counting chains with visual display and printed output are detailed in 38 figures reproducing the counter and the electronic circuits, which are housed in two standard 5 U.I. chassis. The low-voltage supply of the whole unit is provided by +12 V and -12 V storage batteries floating on a charger connected to the 110 V AC mains. The proportional counter described is filled with CO2 at a pressure of one atmosphere and permits the dating of carbonaceous samples up to a maximum of 30,000 ± 1,000 years (background: 3.96 c.p.m.) within a moderate time (72 hours). (authors)

  10. Total probabilities of ensemble runoff forecasts

    Science.gov (United States)

    Olav Skøien, Jon; Bogner, Konrad; Salamon, Peter; Smith, Paul; Pappenberger, Florian

    2016-04-01

    Ensemble forecasting has for a long time been used as a method in meteorological modelling to indicate the uncertainty of the forecasts. However, as the ensembles often exhibit both bias and dispersion errors, it is necessary to calibrate and post-process them. Two of the most common methods for this are Bayesian Model Averaging (Raftery et al., 2005) and Ensemble Model Output Statistics (EMOS) (Gneiting et al., 2005). There are also methods for regionalizing these methods (Berrocal et al., 2007) and for incorporating the correlation between lead times (Hemri et al., 2013). Engeland and Steinsland (2014) developed a framework which can estimate post-processing parameters that differ in space and time, but still give a spatially and temporally consistent output. However, their method is computationally complex for the large number of stations we consider, and cannot directly be regionalized in the way we would like, so we suggest a different path below. The target of our work is to create a mean forecast with uncertainty bounds for a large number of locations in the framework of the European Flood Awareness System (EFAS - http://www.efas.eu). We are therefore more interested in improving the forecast skill for high flows rather than the forecast skill at lower runoff levels. EFAS uses a combination of ensemble forecasts and deterministic forecasts from different forecasters to force a distributed hydrologic model and to compute runoff ensembles for each river pixel within the model domain. Instead of showing the mean and the variability of each forecast ensemble individually, we will now post-process all model outputs to find a total probability, the post-processed mean and uncertainty of all ensembles. The post-processing parameters are first calibrated for each calibration location, while assuring that they have some spatial correlation by adding a spatial penalty in the calibration process. This can in some cases have a slight negative
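
    As background, EMOS (Gneiting et al., 2005) fits a Gaussian predictive distribution whose mean is affine in the ensemble mean and whose variance is affine in the ensemble variance. The sketch below is a minimal, illustrative version fitted by maximum likelihood (the original uses minimum CRPS); the data and parameter names are synthetic assumptions.

      import numpy as np
      from scipy.optimize import minimize
      from scipy.stats import norm

      def fit_emos(ens, obs):
          """Fit obs ~ N(a + b*mean(ens), c + d*var(ens)) by maximum likelihood."""
          m, v = ens.mean(axis=1), ens.var(axis=1)

          def nll(p):
              a, b, log_c, log_d = p
              mu = a + b * m
              sigma = np.sqrt(np.exp(log_c) + np.exp(log_d) * v)
              return -norm.logpdf(obs, mu, sigma).sum()

          a, b, log_c, log_d = minimize(nll, [0.0, 1.0, 0.0, 0.0],
                                        method="Nelder-Mead").x
          return a, b, np.exp(log_c), np.exp(log_d)

      # Synthetic test: 500 forecast cases, 10 members, biased and overdispersive.
      rng = np.random.default_rng(0)
      truth = rng.normal(size=500)
      ens = truth[:, None] + 0.5 + rng.normal(scale=1.5, size=(500, 10))
      print(fit_emos(ens, truth))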

  11. Ensemble Data Assimilation Without Ensembles: Methodology and Application to Ocean Data Assimilation

    Science.gov (United States)

    Keppenne, Christian L.; Rienecker, Michele M.; Kovach, Robin M.; Vernieres, Guillaume

    2013-01-01

    Two methods to estimate background error covariances for data assimilation are introduced. While both share properties with the ensemble Kalman filter (EnKF), they differ from it in that they do not require the integration of multiple model trajectories. Instead, all the necessary covariance information is obtained from a single model integration. The first method is referred to as SAFE (Space Adaptive Forecast error Estimation) because it estimates error covariances from the spatial distribution of model variables within a single state vector. It can thus be thought of as sampling an ensemble in space. The second method, named FAST (Flow Adaptive error Statistics from a Time series), constructs an ensemble sampled from a moving window along a model trajectory. The underlying assumption in these methods is that forecast errors in data assimilation are primarily phase errors in space and/or time.
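
    The FAST idea of treating a moving window along a single trajectory as an ensemble can be caricatured in a few lines; the window length, the toy model, and the absence of any localization are illustrative simplifications, not the authors' implementation.

      import numpy as np

      def fast_covariance(trajectory, window=20):
          """Background-error covariance estimated from the last `window`
          states of a single model trajectory (a time-sampled 'ensemble')."""
          X = np.asarray(trajectory[-window:])   # shape (window, n_state)
          X = X - X.mean(axis=0)                 # perturbations about the window mean
          return X.T @ X / (window - 1)

      # Toy three-variable trajectory.
      rng = np.random.default_rng(1)
      traj = np.cumsum(rng.normal(size=(200, 3)), axis=0)
      print(fast_covariance(traj))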

  12. A New Selective Neural Network Ensemble Method Based on Error Vectorization and Its Application in High-density Polyethylene (HDPE) Cascade Reaction Process

    Institute of Scientific and Technical Information of China (English)

    朱群雄; 赵乃伟; 徐圆

    2012-01-01

    Chemical processes are complex, and traditional neural network models usually cannot reach satisfactory accuracy for them. Selective neural network ensembles are an effective way to enhance the generalization accuracy of networks, but there are some problems, e.g., the lack of a unified definition of diversity among component neural networks, and the difficulty of improving accuracy by selection when the diversities of the available networks are small. In this study, the output errors of the networks are vectorized, the diversity of the networks is defined based on the error vectors, and the size of the ensemble is analyzed. An error vectorization based selective neural network ensemble (EVSNE) is then proposed, in which the error vector of each network can offset those of the other networks by training the component networks in order, so that the component networks have large diversity. Experiments and comparisons over standard data sets and an actual chemical process data set for the production of high-density polyethylene demonstrate that EVSNE performs better in generalization ability.

  13. Multilevel ensemble Kalman filtering

    KAUST Repository

    Hoel, Hakon

    2016-06-14

    This work embeds a multilevel Monte Carlo sampling strategy into the Monte Carlo step of the ensemble Kalman filter (EnKF) in the setting of finite dimensional signal evolution and noisy discrete-time observations. The signal dynamics is assumed to be governed by a stochastic differential equation (SDE), and a hierarchy of time grids is introduced for multilevel numerical integration of that SDE. The resulting multilevel EnKF is proved to asymptotically outperform EnKF in terms of computational cost versus approximation accuracy. The theoretical results are illustrated numerically.
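
    The multilevel construction rests on the usual multilevel Monte Carlo telescoping identity; writing u_ℓ for the numerical SDE solution on time-grid level ℓ and g for an observable,

    \[
    \mathbb{E}\big[g(u_L)\big] \;=\; \mathbb{E}\big[g(u_0)\big]
    \;+\; \sum_{\ell=1}^{L} \mathbb{E}\big[g(u_\ell) - g(u_{\ell-1})\big],
    \]

    and each term is estimated with its own ensemble, with most samples placed on the cheap coarse levels; this is what yields the improved cost-versus-accuracy scaling.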

  14. The Phase Diagram of QCD: Methods and Logic of the Comparison of Experiments and Theory

    International Nuclear Information System (INIS)

    A small review of the phase diagram of QCD is presented. This is followed by a discussion of the conditions under which experimental data on the fluctuations of conserved quantities in heavy-ion collisions can be treated in the grand canonical ensemble. A robust method for detecting the presence of a nearby critical point in experiments is given. There is a short discussion of the need for non-Gaussian error analysis.

  15. The Phase Diagram of QCD: Methods and Logic of the Comparison of Experiments and Theory

    Energy Technology Data Exchange (ETDEWEB)

    Gupta, Sourendu [Department of Theoretical Physics, Tata Institute of Fundamental Research, Homi Bhabha Road, Mumbai 400005 (India)

    2011-07-15

    A small review of the phase diagram of QCD is presented. This is followed by a discussion of the conditions under which experimental data on the fluctuations of conserved quantities in heavy-ion collisions can be treated in the grand canonical ensemble. A robust method for detecting the presence of a nearby critical point in experiments is given. There is a short discussion of the need for non-Gaussian error analysis.

  16. Ensemble nonequivalence in random graphs with modular structure

    CERN Document Server

    Garlaschelli, Diego; Roccaverde, Andrea

    2016-01-01

    Breaking of equivalence between the microcanonical ensemble and the canonical ensemble, describing a large system subject to hard and soft constraints, respectively, was recently shown to occur in large random graphs. Hard constraints must be met by every graph, soft constraints must be met only on average, subject to maximal entropy. In Squartini et al. (2015) it was shown that ensembles of random graphs are non-equivalent when the degrees of the nodes are constrained, in the sense of a non-zero limiting specific relative entropy as the number of nodes diverges. In that paper, the nodes were placed either on a single layer (uni-partite graphs) or on two layers (bi-partite graphs). In the present paper we consider an arbitrary number of intra-connected and inter-connected layers, thus allowing for modular graphs with a multi-partite, multiplex, block-model or community structure. We give a full classification of ensemble equivalence, proving that breakdown occurs if and only if the number of local constraints...
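
    In this line of work, ensemble (non-)equivalence is quantified by the specific relative entropy between the microcanonical and canonical graph ensembles; schematically, for graph probability distributions P_mic and P_can on n nodes (a standard definition, paraphrased here rather than quoted),

    \[
    s_\infty \;=\; \lim_{n\to\infty} \frac{1}{n}\, S\!\left(P_{\mathrm{mic}} \,\middle\|\, P_{\mathrm{can}}\right),
    \qquad
    S(P\,\|\,Q) \;=\; \sum_{G} P(G)\,\ln\frac{P(G)}{Q(G)},
    \]

    with ensemble equivalence corresponding to s_∞ = 0 and its breakdown to s_∞ > 0.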

  17. Minimalist ensemble algorithms for genome-wide protein localization prediction

    Directory of Open Access Journals (Sweden)

    Lin Jhih-Rong

    2012-07-01

    Full Text Available Abstract Background Computational prediction of protein subcellular localization can greatly help to elucidate its functions. Despite the existence of dozens of protein localization prediction algorithms, the prediction accuracy and coverage are still low. Several ensemble algorithms have been proposed to improve the prediction performance, which usually include as many as 10 or more individual localization algorithms. However, their performance is still limited by the running complexity and redundancy among individual prediction algorithms. Results This paper proposed a novel method for rational design of minimalist ensemble algorithms for practical genome-wide protein subcellular localization prediction. The algorithm is based on combining a feature selection based filter and a logistic regression classifier. Using a novel concept of contribution scores, we analyzed issues of algorithm redundancy, consensus mistakes, and algorithm complementarity in designing ensemble algorithms. We applied the proposed minimalist logistic regression (LR) ensemble algorithm to two genome-wide datasets of Yeast and Human and compared its performance with current ensemble algorithms. Experimental results showed that the minimalist ensemble algorithm can achieve high prediction accuracy with only 1/3 to 1/2 of the individual predictors of current ensemble algorithms, which greatly reduces computational complexity and running time. It was found that the high performance ensemble algorithms are usually composed of the predictors that together cover most of the available features. Compared to the best individual predictor, our ensemble algorithm improved the prediction accuracy from an AUC score of 0.558 to 0.707 for the Yeast dataset and from 0.628 to 0.646 for the Human dataset. Compared with popular weighted voting based ensemble algorithms, our classifier-based ensemble algorithms achieved much better performance without suffering from inclusion of too many individual
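
    The paper's own pipeline is not reproduced here; the Python sketch below only illustrates the generic pattern of feeding a handful of individual predictors' scores to a logistic-regression meta-classifier. The number of predictors, their noise levels, and the labels are all synthetic assumptions.

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(2)
      n_proteins = 1000
      labels = rng.integers(0, 2, n_proteins)        # e.g. nuclear vs. not nuclear

      # Hypothetical confidence scores from three individual localization tools
      # (a noisier score mimics a weaker tool).
      scores = np.column_stack([
          labels + rng.normal(scale=s, size=n_proteins) for s in (0.8, 1.0, 1.3)
      ])

      meta = LogisticRegression()
      print(cross_val_score(meta, scores, labels, cv=5, scoring="roc_auc").mean())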

  18. Measurement of the quantum superposition state of an imaging ensemble of photons prepared in orbital angular momentum states using a phase-diversity method

    International Nuclear Information System (INIS)

    We propose the use of a phase-diversity technique to estimate the orbital angular momentum (OAM) superposition state of an ensemble of photons that passes through an optical system, proceeding from an extended object. The phase-diversity technique permits the estimation of the optical transfer function (OTF) of an imaging optical system. As the OTF is derived directly from the wave-front characteristics of the observed light, we redefine the phase-diversity technique in terms of a superposition of OAM states. We test this new technique experimentally and find coherent results among different tests, which gives us confidence in the estimation of the photon ensemble state. We find that this technique not only allows us to estimate the square of the amplitude of each OAM state, but also the relative phases among all states, thus providing complete information about the quantum state of the photons. This technique could be used to measure the OAM spectrum of extended objects in astronomy or in an optical communication scheme using OAM states. In this sense, the use of extended images could lead to new techniques in which the communication is further multiplexed along the field.

  19. Crossover ensembles of random matrices and skew-orthogonal polynomials

    International Nuclear Information System (INIS)

    Highlights: → We study crossover ensembles of Jacobi family of random matrices. → We consider correlations for orthogonal-unitary and symplectic-unitary crossovers. → We use the method of skew-orthogonal polynomials and quaternion determinants. → We prove universality of spectral correlations in crossover ensembles. → We discuss applications to quantum conductance and communication theory problems. - Abstract: In a recent paper (S. Kumar, A. Pandey, Phys. Rev. E, 79, 2009, p. 026211) we considered Jacobi family (including Laguerre and Gaussian cases) of random matrix ensembles and reported exact solutions of crossover problems involving time-reversal symmetry breaking. In the present paper we give details of the work. We start with Dyson's Brownian motion description of random matrix ensembles and obtain universal hierarchic relations among the unfolded correlation functions. For arbitrary dimensions we derive the joint probability density (jpd) of eigenvalues for all transitions leading to unitary ensembles as equilibrium ensembles. We focus on the orthogonal-unitary and symplectic-unitary crossovers and give generic expressions for jpd of eigenvalues, two-point kernels and n-level correlation functions. This involves generalization of the theory of skew-orthogonal polynomials to crossover ensembles. We also consider crossovers in the circular ensembles to show the generality of our method. In the large dimensionality limit, correlations in spectra with arbitrary initial density are shown to be universal when expressed in terms of a rescaled symmetry breaking parameter. Applications of our crossover results to communication theory and quantum conductance problems are also briefly discussed.

  20. Ensemble Forecast: A New Approach to Uncertainty and Predictability

    Institute of Scientific and Technical Information of China (English)

    2005-01-01

    Ensemble techniques have been used to generate daily numerical weather forecasts since the 1990s in numerical centers around the world, owing to the increase in computational ability. One of the main purposes of numerical ensemble forecasts is to try to capture the initial uncertainty (initial error) and the forecast uncertainty (forecast error) by applying either the initial-perturbation method or the multi-model/multi-physics method. In fact, the mean of an ensemble forecast offers a better forecast than a deterministic (or control) forecast after a short lead time (3-5 days) for global modelling applications. There is about a 1-2-day improvement in forecast skill when using an ensemble mean instead of a single forecast at longer lead times. The skillful forecast range (an anomaly correlation of 65% or above) can be extended to 8 days (or longer) by present-day ensemble forecast systems. Furthermore, ensemble forecasts can deliver a probabilistic forecast to users, based on the probability density function (PDF) instead of a single-value forecast from a traditional deterministic system. It has long been recognized that the ensemble forecast not only improves weather forecast predictability but also offers a remarkable forecast of the future uncertainty, such as the relative measure of predictability (RMOP) and the probabilistic quantitative precipitation forecast (PQPF). Not surprisingly, the success of the ensemble forecast and its wide application have greatly increased the confidence of model developers and research communities.

  1. Filtering single atoms from Rydberg blockaded mesoscopic ensembles

    CERN Document Server

    Petrosyan, David; Mølmer, Klaus

    2015-01-01

    We propose an efficient method to filter out single atoms from trapped ensembles with unknown number of atoms. The method employs stimulated adiabatic passage to reversibly transfer a single atom to the Rydberg state which blocks subsequent Rydberg excitation of all the other atoms within the ensemble. This triggers the excitation of Rydberg blockaded atoms to short lived intermediate states and their subsequent decay to untrapped states. Using an auxiliary microwave field to carefully engineer the dissipation, we obtain a nearly deterministic single-atom source. Our method is applicable to small atomic ensembles in individual microtraps and in lattice arrays.

  2. Construction of High-accuracy Ensemble of Classifiers

    Directory of Open Access Journals (Sweden)

    Hedieh Sajedi

    2014-04-01

    Full Text Available There have been several methods developed to construct ensembles. Some of these methods, such as Bagging and Boosting, are meta-learners, i.e. they can be applied to any base classifier. The combination of methods should be selected so that the classifiers cover each other's weaknesses. In an ensemble, the output of several classifiers is useful only when they disagree on some inputs, and the degree of disagreement is called the diversity of the ensemble. Another factor that plays a significant role in the performance of an ensemble is the accuracy of the base classifiers. It can be said that all procedures for constructing ensembles seek a balance between these two parameters, and successful methods reach a better balance. The diversity of the members of an ensemble is known to be an important factor in determining its generalization error. In this paper, we present a new approach for generating ensembles. The proposed approach uses Bagging and Boosting as the generators of the base classifiers. Subsequently, the classifiers are partitioned by means of a clustering algorithm. We introduce a selection phase for constructing the final ensemble, and three different selection methods are proposed for this phase. In the first proposed selection method, a classifier is selected randomly from each cluster. The second method selects the most accurate classifier from each cluster, and the third one selects the classifier nearest to the center of each cluster to construct the final ensemble. The results of the experiments on well-known datasets demonstrate the strength of our proposed approach, especially when selecting the most accurate classifiers from the clusters and employing the Bagging generator, as sketched below.
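
    A schematic rendering of the 'generate with Bagging, cluster the classifiers, then keep one per cluster' idea follows; the base learner, the number of clusters, the use of a held-out split for both clustering and accuracy, and all dataset details are illustrative assumptions rather than the authors' setup.

      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.tree import DecisionTreeClassifier
      from sklearn.cluster import KMeans
      from sklearn.utils import resample

      X, y = make_classification(n_samples=600, n_features=20, random_state=0)
      X_tr, y_tr, X_val, y_val = X[:400], y[:400], X[400:], y[400:]

      # 1) Generate a pool of base classifiers by bagging.
      pool = []
      for seed in range(30):
          Xb, yb = resample(X_tr, y_tr, random_state=seed)
          pool.append(DecisionTreeClassifier(random_state=seed).fit(Xb, yb))

      # 2) Cluster the classifiers by their validation prediction vectors.
      preds = np.array([clf.predict(X_val) for clf in pool])
      clusters = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(preds)

      # 3) Keep the most accurate classifier from each cluster (second selection rule).
      accs = (preds == y_val).mean(axis=1)
      chosen = [pool[np.flatnonzero(clusters == c)[np.argmax(accs[clusters == c])]]
                for c in range(5)]

      # 4) Combine the selected members by majority vote.
      vote = np.mean([clf.predict(X_val) for clf in chosen], axis=0) >= 0.5
      print("ensemble accuracy:", (vote == y_val).mean())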

  3. Complete Hamiltonian analysis of cosmological perturbations at all orders II: Non-canonical scalar field

    CERN Document Server

    Nandi, Debottam

    2016-01-01

    In this work, we present a consistent Hamiltonian analysis of cosmological perturbations for generalized non-canonical scalar fields. In order to do so, we introduce a new phase-space variable that is uniquely defined for different non-canonical scalar fields. We also show that this is the simplest and most efficient way of expressing the Hamiltonian. We extend the Hamiltonian approach of [arXiv:1512.02539] to non-canonical scalar fields and obtain a new definition of the speed of sound in phase space. In order to invert the generalized phase-space Hamilton's equations to the Euler-Lagrange equations of motion, we prescribe a general inversion formula and show that our approach for non-canonical scalar fields is consistent. We also obtain the third- and fourth-order interaction Hamiltonians for generalized non-canonical scalar fields and briefly discuss the extension of our method to generalized Galilean scalar fields.

  4. Excitations and benchmark ensemble density functional theory for two electrons

    International Nuclear Information System (INIS)

    A new method for extracting ensemble Kohn-Sham potentials from accurate excited state densities is applied to a variety of two-electron systems, exploring the behavior of exact ensemble density functional theory. The issue of separating the Hartree energy and the choice of degenerate eigenstates is explored. A new approximation, spin eigenstate Hartree-exchange, is derived. Exact conditions that are proven include the signs of the correlation energy components and the asymptotic behavior of the potential for small weights of the excited states. Many energy components are given as a function of the weights for two electrons in a one-dimensional flat box, in a box with a large barrier to create charge transfer excitations, in a three-dimensional harmonic well (Hooke's atom), and for the He atom singlet-triplet ensemble, singlet-triplet-singlet ensemble, and triplet bi-ensemble

  5. Excitations and benchmark ensemble density functional theory for two electrons

    CERN Document Server

    Pribram-Jones, Aurora; Trail, John R; Burke, Kieron; Needs, Richard J; Ullrich, Carsten A

    2014-01-01

    A new method for extracting ensemble Kohn-Sham potentials from accurate excited state densities is applied to a variety of two electron systems, exploring the behavior of exact ensemble density functional theory. The issue of separating the Hartree energy and the choice of degenerate eigenstates is explored. A new approximation, spin eigenstate Hartree-exchange (SEHX), is derived. Exact conditions that are proven include the signs of the correlation energy components, the virial theorem for both exchange and correlation, and the asymptotic behavior of the potential for small weights of the excited states. Many energy components are given as a function of the weights for two electrons in a one-dimensional flat box, in a box with a large barrier to create charge transfer excitations, in a three-dimensional harmonic well (Hooke's atom), and for the He atom singlet-triplet ensemble, singlet-triplet-singlet ensemble, and triplet bi-ensemble.

  6. Canonical pseudotensors, Sparling's form and Noether currents

    International Nuclear Information System (INIS)

    The canonical energy-momentum and spin pseudotensors of the Einstein theory are studied in two ways. First they are studied in the framework of the Lagrangian formalism. It is shown that for a first-order Lagrangian and a rigid-basis description the canonical energy-momentum, the canonical spin, and the Noether current are tensorial quantities, and the canonical energy-momentum and spin tensors satisfy the tensorial Belinfante-Rosenfeld equations. Then the differential-geometric unification and reformulation of the previous different pseudotensorial approaches is given. Finally, for any vector field on the spacetime an (m-1)-form, called the Noether form, is defined. (K.A.) 34 refs

  7. Canonical variables and quasilocal energy in general relativity

    OpenAIRE

    Lau, Stephen

    1993-01-01

    Recently Brown and York have devised a new method for defining quasilocal energy in general relativity. Their method yields expressions for the quasilocal energy and momentum surface densities associated with the two-boundary of a spacelike slice of a spatially bounded spacetime. These expressions are essentially Arnowitt-Deser-Misner variables, but with canonical conjugacy defined with respect to the time history of the two-boundary. This paper introduces Ashtekar-type variables on the time ...

  8. Improving land resource evaluation using fuzzy neural network ensembles

    Science.gov (United States)

    XUE, Y.-J.; HU, Y.-M.; Liu, S.-G.; YANG, J.-F.; CHEN, Q.-C.; BAO, S.-T.

    2007-01-01

    Land evaluation factors often contain continuous-, discrete- and nominal-valued attributes. In traditional land evaluation, these different attributes are usually graded into categorical indexes by land resource experts, and the evaluation results rely heavily on the experts' experience. In order to overcome this shortcoming, we present a fuzzy neural network ensemble method that does not require grading the evaluation factors into categorical indexes and can evaluate land resources by using the three kinds of attribute values directly. A fuzzy back propagation neural network (BPNN), a fuzzy radial basis function neural network (RBFNN), a fuzzy BPNN ensemble, and a fuzzy RBFNN ensemble were used to evaluate the land resources in Guangdong Province. The evaluation results obtained using the fuzzy BPNN ensemble and the fuzzy RBFNN ensemble were much better than those obtained using the single fuzzy BPNN and the single fuzzy RBFNN, and the error rate of the single fuzzy RBFNN or fuzzy RBFNN ensemble was lower than that of the single fuzzy BPNN or fuzzy BPNN ensemble, respectively. By using the fuzzy neural network ensembles, the validity of land resource evaluation was improved and reliance on land evaluators' experience was considerably reduced. © 2007 Soil Science Society of China.

  9. The dynamical-analogue ensemble method for improving operational monthly forecasting

    Institute of Scientific and Technical Information of China (English)

    任宏利; 张培群; 李维京; 陈丽娟

    2014-01-01

    Focusing on the monthly forecasting problem based on an atmospheric general circulation model (AGCM), a dynamical-analogue ensemble forecasting (DAEF) method is proposed to effectively reduce prediction errors and increase prediction skill. The method aims at an intrinsic combination of the dynamical model and statistical-empirical methods: it establishes perturbation members for ensemble forecasting by extracting historical analogue information of the atmospheric general circulation, parameterizing model errors empirically and generating multiple time-varying analogue forcings. Applying this new ensemble method to the operational AGCM of the Beijing Climate Center (BCC AGCM1), a 10-yr monthly forecasting experiment under quasi-operational conditions shows encouraging results. Compared with the operational ensemble forecasts by the BCC AGCM1, the DAEF method effectively improves the prediction skill of monthly-mean and daily atmospheric circulation forecasts, with the former almost reaching the standard required in BCC operations, through improved predictions of the zonal mean, ultra-long waves and long waves of the circulation. The results also show that prediction errors for the DAEF are significantly reduced and the spread of its ensemble members is reasonably increased, indicating an improved relationship between prediction errors and spread. This study suggests a large potential for application of the DAEF method in the BCC monthly forecasting operation.

  10. Association Study between Lead and Zinc Accumulation at Different Physiological Systems of Cattle by Canonical Correlation and Canonical Correspondence Analyses

    International Nuclear Information System (INIS)

    Pb pollution from automobile exhausts around highways is a persistent problem in India. Pb intoxication in the mammalian body is a complex phenomenon which is influenced by agonistic and antagonistic interactions of several other heavy metals and micronutrients. An attempt has been made to study the association between Pb and Zn accumulation in different physiological systems of cattle (n = 200) by applying both canonical correlation and canonical correspondence analyses. Pb was estimated from plasma, liver, bone, muscle, kidney, blood and milk, whereas Zn was measured from all these systems except bone, blood and milk. Both statistical techniques demonstrated that there was a strong association among blood-Pb, liver-Zn, kidney-Zn and muscle-Zn. From these observations, it can be assumed that Zn accumulation in cattle muscle, liver and kidney directs Pb mobilization from those organs, which in turn increases the Pb pool in blood. This indicates an antagonistic activity of Zn towards the accumulation of Pb. Although there were some contradictions between the observations obtained from the two different statistical methods, the overall pattern of Pb accumulation in various organs as influenced by Zn was the same. This is mainly due to the fact that canonical correlation is actually a special type of canonical correspondence analysis in which a linear relationship is assumed between two groups of variables instead of a Gaussian relationship.

  11. Association Study between Lead and Zinc Accumulation at Different Physiological Systems of Cattle by Canonical Correlation and Canonical Correspondence Analyses

    Science.gov (United States)

    Karmakar, Partha; Das, Pradip Kumar; Mondal, Seema Sarkar; Karmakar, Sougata; Mazumdar, Debasis

    2010-10-01

    Pb pollution from automobile exhausts around highways is a persistent problem in India. Pb intoxication in the mammalian body is a complex phenomenon which is influenced by agonistic and antagonistic interactions of several other heavy metals and micronutrients. An attempt has been made to study the association between Pb and Zn accumulation in different physiological systems of cattle (n = 200) by applying both canonical correlation and canonical correspondence analyses. Pb was estimated from plasma, liver, bone, muscle, kidney, blood and milk, whereas Zn was measured from all these systems except bone, blood and milk. Both statistical techniques demonstrated that there was a strong association among blood-Pb, liver-Zn, kidney-Zn and muscle-Zn. From these observations, it can be assumed that Zn accumulation in cattle muscle, liver and kidney directs Pb mobilization from those organs, which in turn increases the Pb pool in blood. This indicates an antagonistic activity of Zn towards the accumulation of Pb. Although there were some contradictions between the observations obtained from the two different statistical methods, the overall pattern of Pb accumulation in various organs as influenced by Zn was the same. This is mainly due to the fact that canonical correlation is actually a special type of canonical correspondence analysis in which a linear relationship is assumed between two groups of variables instead of a Gaussian relationship.

  12. Representative Ensembles in Statistical Mechanics

    OpenAIRE

    V. I. YUKALOV

    2007-01-01

    The notion of representative statistical ensembles, correctly representing statistical systems, is strictly formulated. This notion allows for a proper description of statistical systems, avoiding inconsistencies in theory. As an illustration, a Bose-condensed system is considered. It is shown that a self-consistent treatment of the latter, using a representative ensemble, always yields a conserving and gapless theory.

  13. PSO-Ensemble Demo Application

    DEFF Research Database (Denmark)

    2004-01-01

    Within the framework of the PSO-Ensemble project (FU2101) a demo application has been created. The application use ECMWF ensemble forecasts. Two instances of the application are running; one for Nysted Offshore and one for the total production (except Horns Rev) in the Eltra area. The output is...

  14. Ensemble of Causal Trees

    International Nuclear Information System (INIS)

    We discuss the geometry of trees endowed with a causal structure using the conventional framework of equilibrium statistical mechanics. We show how this ensemble is related to popular growing network models. In particular we demonstrate that for a class of affine attachment kernels the two models are identical, but they can differ substantially for other choices of weights. We show that causal trees exhibit condensation even for asymptotically linear kernels. We derive general formulae describing the degree distribution, the ancestor-descendant correlation, and the probability that a randomly chosen node lives at a given geodesic distance from the root. It is shown that the Hausdorff dimension dH of the causal networks is generically infinite. (author)

  15. Ensemble of Causal Trees

    Science.gov (United States)

    Bialas, Piotr

    2003-10-01

    We discuss the geometry of trees endowed with a causal structure using the conventional framework of equilibrium statistical mechanics. We show how this ensemble is related to popular growing network models. In particular we demonstrate that for a class of affine attachment kernels the two models are identical, but they can differ substantially for other choices of weights. We show that causal trees exhibit condensation even for asymptotically linear kernels. We derive general formulae describing the degree distribution, the ancestor-descendant correlation, and the probability that a randomly chosen node lives at a given geodesic distance from the root. It is shown that the Hausdorff dimension dH of the causal networks is generically infinite.

  16. The Hydrologic Ensemble Prediction Experiment (HEPEX)

    Science.gov (United States)

    Wood, Andy; Wetterhall, Fredrik; Ramos, Maria-Helena

    2015-04-01

    The Hydrologic Ensemble Prediction Experiment was established in March, 2004, at a workshop hosted by the European Center for Medium Range Weather Forecasting (ECMWF), and co-sponsored by the US National Weather Service (NWS) and the European Commission (EC). The HEPEX goal was to bring the international hydrological and meteorological communities together to advance the understanding and adoption of hydrological ensemble forecasts for decision support. HEPEX pursues this goal through research efforts and practical implementations involving six core elements of a hydrologic ensemble prediction enterprise: input and pre-processing, ensemble techniques, data assimilation, post-processing, verification, and communication and use in decision making. HEPEX has grown through meetings that connect the user, forecast producer and research communities to exchange ideas, data and methods; the coordination of experiments to address specific challenges; and the formation of testbeds to facilitate shared experimentation. In the last decade, HEPEX has organized over a dozen international workshops, as well as sessions at scientific meetings (including AMS, AGU and EGU) and special issues of scientific journals where workshop results have been published. Through these interactions and an active online blog (www.hepex.org), HEPEX has built a strong and active community of nearly 400 researchers & practitioners around the world. This poster presents an overview of recent and planned HEPEX activities, highlighting case studies that exemplify the focus and objectives of HEPEX.

  17. Subsets of configurations and canonical partition functions

    DEFF Research Database (Denmark)

    Bloch, J.; Bruckmann, F.; Kieburg, M.;

    2013-01-01

    We explain the physical nature of the subset solution to the sign problem in chiral random matrix theory: the subset sum over configurations is shown to project out the canonical determinant with zero quark charge from a given configuration. As the grand canonical chiral random matrix partition function is independent of the chemical potential, the zero-quark-charge sector provides the full result. © 2013 American Physical Society.

  18. Canonical equations of Hamilton with beautiful symmetry

    OpenAIRE

    Liang, Guo; Guo, Qi

    2012-01-01

    The Hamiltonian formulation plays the essential role in constructing the framework of modern physics. In this paper, a new form of the canonical equations of Hamilton with complete symmetry is obtained, which is valid not only for the first-order differential system, but also for the second-order differential system. The conventional form of the canonical equations without the symmetry [Goldstein et al., Classical Mechanics, 3rd ed, Addison-Wesley, 2001] is valid only for the second-order differential system.

  19. Heisenberg Uncertainty Relation for Three Canonical Observables

    OpenAIRE

    Kechrimparis, Spiros; Weigert, Stefan

    2014-01-01

    Uncertainty relations provide fundamental limits on what can be said about the properties of quantum systems. For a quantum particle, the commutation relation of position and momentum observables entails Heisenberg's uncertainty relation. A third observable is presented which satisfies canonical commutation relations with both position and momentum. The resulting triple of pairwise canonical observables gives rise to a Heisenberg-type uncertainty relation for the product of three standard deviations.

  20. Canonical studies of the cluster distribution, dynamical evolution, and critical temperature in nuclear multifragmentation processes

    International Nuclear Information System (INIS)

    Partition functions for a canonical and microcanonical ensemble are developed which are then used to describe various properties of excited hadronic systems. Relating multinomial coefficients to a generating function of these partition functions, it is shown that the average value of various moments of cluster sizes are of a quite simple form in terms of canonical partition functions. Specific applications of the results are to partitioning problems as in the partitioning of nucleons into clusters arising from a nuclear collision and to branching processes as in Furry branching. The underlying dynamical evolution of a system is studied by parametrizing the multinomial variables of the theory. A Fokker-Planck equation can be obtained from these evolutionary equations. By relating the parameters and variables of the theory to thermodynamic variables, the thermal properties of excited hadronic systems are studied
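
    A generating-function identity of the kind invoked above is the textbook recursion for a canonical partition function built from single-cluster partition functions z_k (a standard relation for A nucleons, stated here as background rather than quoted from the paper),

    \[
    Z_A \;=\; \frac{1}{A}\sum_{k=1}^{A} k\, z_k\, Z_{A-k}, \qquad Z_0 = 1,
    \]

    from which moments of the cluster-size distribution follow directly, e.g. the mean multiplicity of clusters of size k, ⟨n_k⟩ = z_k Z_{A−k}/Z_A.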

  1. A simple grand canonical approach to compute the vapor pressure of bulk and finite size systems

    Energy Technology Data Exchange (ETDEWEB)

    Factorovich, Matías H.; Scherlis, Damián A. [Departamento de Química Inorgánica, Analítica y Química Física/INQUIMAE, Facultad de Ciencias Exactas y Naturales, Universidad de Buenos Aires, Ciudad Universitaria, Pab. II, Buenos Aires C1428EHA (Argentina); Molinero, Valeria [Department of Chemistry, University of Utah, 315 South 1400 East, Salt Lake City, Utah 84112-0850 (United States)

    2014-02-14

    In this article we introduce a simple grand canonical screening (GCS) approach to accurately compute vapor pressures from molecular dynamics or Monte Carlo simulations. This procedure entails a screening of chemical potentials using a conventional grand canonical scheme, and therefore it is straightforward to implement for any kind of interface. The scheme is validated against data obtained from Gibbs ensemble simulations for water and argon. Then, it is applied to obtain the vapor pressure of the coarse-grained mW water model, and it is shown that the computed value is in excellent accord with the one formally deduced using statistical thermodynamics arguments. Finally, this methodology is used to calculate the vapor pressure of a water nanodroplet of 94 molecules. Interestingly, the result is in perfect agreement with the one predicted by the Kelvin equation for a homogeneous droplet of that size.
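
    As a reminder of how a screened chemical potential translates into a pressure (a generic low-density relation, not a formula quoted from the paper): once the chemical potential μ* at which the condensed phase coexists with its vapor has been bracketed, an ideal-gas-like estimate of the vapor pressure is

    \[
    p \;\simeq\; \frac{k_B T}{\Lambda^{3}}\, e^{\beta\mu^{*}},
    \qquad \Lambda = \frac{h}{\sqrt{2\pi m k_B T}},
    \]

    with corrections needed only when the vapor is appreciably non-ideal.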

  2. Both canonical and non-canonical Wnt signaling independently promote stem cell growth in mammospheres.

    Directory of Open Access Journals (Sweden)

    Alexander M Many

    Full Text Available The characterization of mammary stem cells, and signals that regulate their behavior, is of central importance in understanding developmental changes in the mammary gland and possibly for targeting stem-like cells in breast cancer. The canonical Wnt/β-catenin pathway is a signaling mechanism associated with maintenance of self-renewing stem cells in many tissues, including mammary epithelium, and can be oncogenic when deregulated. Wnt1 and Wnt3a are examples of ligands that activate the canonical pathway. Other Wnt ligands, such as Wnt5a, typically signal via non-canonical, β-catenin-independent, pathways that in some cases can antagonize canonical signaling. Since the role of non-canonical Wnt signaling in stem cell regulation is not well characterized, we set out to investigate this using mammosphere formation assays that reflect and quantify stem cell properties. Ex vivo mammosphere cultures were established from both wild-type and Wnt1 transgenic mice and were analyzed in response to manipulation of both canonical and non-canonical Wnt signaling. An increased level of mammosphere formation was observed in cultures derived from MMTV-Wnt1 versus wild-type animals, and this was blocked by treatment with Dkk1, a selective inhibitor of canonical Wnt signaling. Consistent with this, we found that a single dose of recombinant Wnt3a was sufficient to increase mammosphere formation in wild-type cultures. Surprisingly, we found that Wnt5a also increased mammosphere formation in these assays. We confirmed that this was not caused by an increase in canonical Wnt/β-catenin signaling but was instead mediated by non-canonical Wnt signals requiring the receptor tyrosine kinase Ror2 and activity of the Jun N-terminal kinase, JNK. We conclude that both canonical and non-canonical Wnt signals have positive effects promoting stem cell activity in mammosphere assays and that they do so via independent signaling mechanisms.

  3. Refining inflation using non-canonical scalars

    Energy Technology Data Exchange (ETDEWEB)

    Unnikrishnan, Sanil; Sahni, Varun [Inter-University Centre for Astronomy and Astrophysics, Post Bag 4, Ganeshkhind, Pune 411 007 (India); Toporensky, Aleksey, E-mail: sanil@iucaa.ernet.in, E-mail: varun@iucaa.ernet.in, E-mail: atopor@rambler.ru [Sternberg Astronomical Institute, Moscow State University, Universitetsky Prospekt, 13, Moscow 119992 (Russian Federation)

    2012-08-01

    This paper revisits the Inflationary scenario within the framework of scalar field models possessing a non-canonical kinetic term. We obtain closed form solutions for all essential quantities associated with chaotic inflation including slow roll parameters, scalar and tensor power spectra, spectral indices, the tensor-to-scalar ratio, etc. We also examine the Hamilton-Jacobi equation and demonstrate the existence of an inflationary attractor. Our results highlight the fact that non-canonical scalars can significantly improve the viability of inflationary models. They accomplish this by decreasing the tensor-to-scalar ratio while simultaneously increasing the value of the scalar spectral index, thereby redeeming models which are incompatible with the cosmic microwave background (CMB) in their canonical version. For instance, the non-canonical version of the chaotic inflationary potential, V(φ) ∼ λφ^4, is found to agree with observations for values of λ as large as unity! The exponential potential can also provide a reasonable fit to CMB observations. A central result of this paper is that steep potentials (such as V ∝ φ^(-n)) usually associated with dark energy, can drive inflation in the non-canonical setting. Interestingly, non-canonical scalars violate the consistency relation r = -8n_T, which emerges as a smoking gun test for this class of models.
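
    For orientation, the standard k-inflation expressions behind these statements (results from the general literature, not quoted from this paper) are, for a Lagrangian L(X, φ) with X = ½(∂φ)²,

    \[
    c_s^{2} \;=\; \frac{\partial L/\partial X}{\partial L/\partial X + 2X\,\partial^{2}L/\partial X^{2}},
    \qquad
    r \;=\; 16\,\epsilon\, c_s \;=\; -8\, n_T\, c_s,
    \]

    so a sound speed c_s < 1 suppresses the tensor-to-scalar ratio relative to the canonical case, and the canonical consistency relation r = −8 n_T no longer holds.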

  4. Assimilating Doppler radar radial velocity and reflectivity observations in the weather research and forecasting model by a proper orthogonal-decomposition-based ensemble, three-dimensional variational assimilation method

    Science.gov (United States)

    Pan, Xiaoduo; Tian, Xiangjun; Li, Xin; Xie, Zhenghui; Shao, Aimei; Lu, Chunyan

    2012-09-01

    Doppler radar observations with high spatial and temporal resolution can effectively improve the description of small-scale structures in the initial condition and enhance the mesoscale and microscale model skills of numerical weather prediction (NWP). In this paper, Doppler radar radial velocity and reflectivity are simultaneously assimilated into a weather research and forecasting (WRF) model by a proper orthogonal-decomposition-based ensemble, three-dimensional variational assimilation method (referred to as PODEn3DVar), which therefore forms the PODEn3DVar-based radar assimilation system (referred to as WRF-PODEn3DVar). The main advantages of WRF-PODEn3DVar over the standard WRF-3DVar are that (1) the PODEn3DVar provides flow-dependent covariances through the evolving ensemble of short-range forecasts, and (2) the PODEn3DVar analysis can be obtained directly without an iterative process, which significantly simplifies the assimilation. Results from real data assimilation experiments with the WRF model show that WRF-PODEn3DVar simulation yields better rainfall forecasting than radar retrieval, and radar retrieval is better than the standard WRF-3DVar assimilation, probably because of the flow-dependence character embedded in the WRF-PODEn3DVar.
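
    Generically, ensemble-based 3DVar schemes of this family express the analysis increment in the subspace spanned by (here, POD-transformed) ensemble perturbations X′ and minimize a cost function of the schematic form (with perturbations normalized by √(N−1); this is the generic construction, not the paper's exact formulation)

    \[
    \delta\mathbf{x} = \mathbf{X}'\mathbf{w},
    \qquad
    J(\mathbf{w}) = \tfrac{1}{2}\,\mathbf{w}^{\mathrm T}\mathbf{w}
    + \tfrac{1}{2}\,\big(\mathbf{H}\,\delta\mathbf{x}-\mathbf{d}\big)^{\mathrm T}\mathbf{R}^{-1}\big(\mathbf{H}\,\delta\mathbf{x}-\mathbf{d}\big),
    \]

    where d is the innovation vector, H the observation operator and R the observation-error covariance; because the control vector w lives in the low-dimensional ensemble space, the minimizer can be written down directly, which is what allows PODEn3DVar to avoid an iterative solve.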

  5. Analysis of multivariate genotype - environment data using Nonlinear Canonical Correlation Analysis

    OpenAIRE

    Pinnschmidt, H.O.

    2004-01-01

    Nonlinear Canonical Correlation Analysis (NCCA) is a method well suited for visualising the main features in multivariate data of various scales. NCCA is useful for obtaining an overall orientation of genotype properties and environment characteristics.

  6. Construction of canonical constants of motion for non-local field theories

    International Nuclear Information System (INIS)

    The paper proposes an unambiguous procedure along the lines of Rzewuski's method of derivation of conservation laws for non-local interactions, leading to the simplest constants of motion which are called canonical. (author)

  7. Description of the DEMETER gas gun and the associated measurement chains; Description du canon à gaz DEMETER et des chaînes de mesures associées

    OpenAIRE

    Chartagnac, P.; Jimenez, B

    1984-01-01

    The experimental facility described in this article is intended for studying the behaviour of solids under planar shock loading. It consists of the DEMETER compressed-gas gun and several measurement chains. The gun, using air or helium as the driver gas, can propel 'projectiles' 110 mm in diameter at a velocity continuously programmable from 100 m/s to 1,150 m/s with a reproducibility of 1%. The measurement chains installed on the gun and connected to a ...

  8. Bayesian Tracking of Emerging Epidemics Using Ensemble Optimal Statistical Interpolation

    OpenAIRE

    Cobb, Loren; Krishnamurthy, Ashok; Mandel, Jan; Beezley, Jonathan D.

    2014-01-01

    We present a preliminary test of the Ensemble Optimal Statistical Interpolation (EnOSI) method for the statistical tracking of an emerging epidemic, with a comparison to its popular relative for Bayesian data assimilation, the Ensemble Kalman Filter (EnKF). The spatial data for this test was generated by a spatial susceptible-infectious-removed (S-I-R) epidemic model of an airborne infectious disease. Both tracking methods in this test employed Poisson rather than Gaussian noise, so as to han...

  9. Sampling the isothermal-isobaric ensemble by Langevin dynamics

    OpenAIRE

    Gao, Xingyu; Fang, Jun; Wang, Han

    2016-01-01

    We present a new method of conducting molecular dynamics simulations in the isothermal-isobaric ensemble based on Langevin equations of motion. The stochastic coupling to all particle and cell degrees of freedom is introduced in a correct way, in the sense that the stationary configurational distribution is proved to be consistent with that of the isothermal-isobaric ensemble. In order to apply the proposed method in computer simulations, a second-order symmetric numerical integration scheme is ...

  10. Scalable Ensemble Learning and Computationally Efficient Variance Estimation

    OpenAIRE

    LeDell, Erin

    2015-01-01

    Ensemble machine learning methods are often used when the true prediction function is not easily approximated by a single algorithm. The Super Learner algorithm is an ensemble method that has been theoretically proven to represent an asymptotically optimal system for learning. The Super Learner, also known as stacking, combines multiple, typically diverse, base learning algorithms into a single, powerful prediction function through a secondary learning process called metalearning. Although...

  11. Assessing the impact of land use change on hydrology by ensemble modeling (LUCHEM) III: Scenario analysis

    Science.gov (United States)

    Huisman, J.A.; Breuer, L.; Bormann, H.; Bronstert, A.; Croke, B.F.W.; Frede, H.-G.; Graff, T.; Hubrechts, L.; Jakeman, A.J.; Kite, G.; Lanini, J.; Leavesley, G.; Lettenmaier, D.P.; Lindstrom, G.; Seibert, J.; Sivapalan, M.; Viney, N.R.; Willems, P.

    2009-01-01

    An ensemble of 10 hydrological models was applied to the same set of land use change scenarios. There was general agreement about the direction of changes in the mean annual discharge and 90% discharge percentile predicted by the ensemble members, although a considerable range in the magnitude of predictions for the scenarios and catchments under consideration was obvious. Differences in the magnitude of the increase were attributed to the different mean annual actual evapotranspiration rates for each land use type. The ensemble of model runs was further analyzed with deterministic and probabilistic ensemble methods. The deterministic ensemble method based on a trimmed mean resulted in a single somewhat more reliable scenario prediction. The probabilistic reliability ensemble averaging (REA) method allowed a quantification of the model structure uncertainty in the scenario predictions. It was concluded that the use of a model ensemble has greatly increased our confidence in the reliability of the model predictions. © 2008 Elsevier Ltd.

  12. Assessing the impact of land use change on hydrology by ensemble modelling (LUCHEM) II: Ensemble combinations and predictions

    Science.gov (United States)

    Viney, N.R.; Bormann, H.; Breuer, L.; Bronstert, A.; Croke, B.F.W.; Frede, H.; Graff, T.; Hubrechts, L.; Huisman, J.A.; Jakeman, A.J.; Kite, G.W.; Lanini, J.; Leavesley, G.; Lettenmaier, D.P.; Lindstrom, G.; Seibert, J.; Sivapalan, M.; Willems, P.

    2009-01-01

    This paper reports on a project to compare predictions from a range of catchment models applied to a mesoscale river basin in central Germany and to assess various ensemble predictions of catchment streamflow. The models encompass a large range in inherent complexity and input requirements. In approximate order of decreasing complexity, they are DHSVM, MIKE-SHE, TOPLATS, WASIM-ETH, SWAT, PRMS, SLURP, HBV, LASCAM and IHACRES. The models are calibrated twice using different sets of input data. The two predictions from each model are then combined by simple averaging to produce a single-model ensemble. The 10 resulting single-model ensembles are combined in various ways to produce multi-model ensemble predictions. Both the single-model ensembles and the multi-model ensembles are shown to give predictions that are generally superior to those of their respective constituent models, both during a 7-year calibration period and a 9-year validation period. This occurs despite a considerable disparity in performance of the individual models. Even the weakest of models is shown to contribute useful information to the ensembles they are part of. The best model combination methods are a trimmed mean (constructed using the central four or six predictions each day) and a weighted mean ensemble (with weights calculated from calibration performance) that places relatively large weights on the better performing models. Conditional ensembles, in which separate model weights are used in different system states (e.g. summer and winter, high and low flows) generally yield little improvement over the weighted mean ensemble. However a conditional ensemble that discriminates between rising and receding flows shows moderate improvement. An analysis of ensemble predictions shows that the best ensembles are not necessarily those containing the best individual models. Conversely, it appears that some models that predict well individually do not necessarily combine well with other models in
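
    The two best-performing combination rules mentioned above are easy to state concretely. The Python sketch below shows a daily trimmed mean (keeping the central members) and a skill-weighted mean; the weighting rule, the synthetic streamflow values, and the member count are illustrative choices, not the study's calibration.

      import numpy as np
      from scipy.stats import trim_mean

      def trimmed_ensemble(preds, keep=6):
          """preds: array (n_models, n_days); average the central `keep` members."""
          frac = (preds.shape[0] - keep) / (2 * preds.shape[0])
          return trim_mean(preds, proportiontocut=frac, axis=0)

      def weighted_ensemble(preds, calib_skill):
          """Weight each model by its calibration skill, clipped at zero so that
          poorly performing models receive zero weight (illustrative rule)."""
          w = np.clip(np.asarray(calib_skill, dtype=float), 0.0, None)
          return (w[:, None] * preds).sum(axis=0) / w.sum()

      rng = np.random.default_rng(3)
      preds = np.abs(rng.normal(1.0, 0.3, size=(10, 365)))   # 10 models, 1 year
      skill = np.linspace(0.3, 0.8, 10)
      print(trimmed_ensemble(preds)[:3], weighted_ensemble(preds, skill)[:3])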

  13. Unified expression for the calculation of thermal conductivity in the canonical ensemble

    Science.gov (United States)

    Chialvo, Ariel A.; Cummings, Peter T.

    A proof of the theoretical equivalence between the equations of E. Helfand, 1960, Phys. Rev., 119, 1, and of D. McQuarrie, 1976, Statistical Mechanics (Harper & Row), Chap. 21, for the calculation of thermal conductivity via Einstein-type relations is presented here. Some theoretical implications of that equivalence are also discussed, such as the unification of the thermal conductivity expressions into one similar to that given for linear transport coefficients by F. C. Andrews, 1967, J. Chem. Phys., 47, 3161.
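
    For reference, the Einstein (Helfand-moment) form of the thermal conductivity referred to above can be written, up to conventions about subtracting the mean per-particle energy, as

    \[
    \lambda \;=\; \lim_{t\to\infty} \frac{1}{2\,t\,V\,k_B T^{2}}
    \left\langle \big[\, G(t) - G(0) \,\big]^{2} \right\rangle,
    \qquad
    G(t) \;=\; \sum_{i=1}^{N} x_i(t)\,\big(E_i(t) - \langle E_i\rangle\big),
    \]

    where E_i is the energy of particle i and x_i one of its Cartesian coordinates; the equivalence discussed above concerns two different ways of writing such Einstein-type expressions.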

  14. Non-equivalence of the microcanonical and canonical ensembles in a bosonic Josephson junction

    Directory of Open Access Journals (Sweden)

    L.A. González-García

    2011-01-01

    Full Text Available The thermodynamic properties of a bosonic Josephson junction are investigated in the two-mode quantum approximation; in particular, thermal averages of one- and two-body properties are studied below and above the transition from the delocalized to the self-trapped state. This temperature dependence is determined using the fact that, in equilibrium, the canonical and microcanonical ensembles should be equivalent. First, the robustness of the equilibrium state is established by studying a one-body property and showing numerically that any arbitrary state localized in energy reaches a stationary, or equilibrium, state. Comparison between averages of one- and two-body properties in the canonical and microcanonical schemes reveals discrepancies, thus exhibiting the non-equivalence between ensembles. These differences in the averages can be attributed to the fact that the Hilbert space of the system scales as its size N, and consequently the entropy does not scale with N. In addition, the existence of negative temperatures is found as a natural consequence of studying the bosonic Josephson junction in the two-mode approximation. This result can be generalized to finite optical lattices.
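
    For context, the two-mode Hamiltonian of a bosonic Josephson junction is, in standard notation (not quoted from the article),

    \[
    \hat H \;=\; -J\big(\hat a^{\dagger}\hat b + \hat b^{\dagger}\hat a\big)
    + \frac{U}{2}\Big[\hat n_a(\hat n_a - 1) + \hat n_b(\hat n_b - 1)\Big],
    \]

    with J the tunneling amplitude and U the on-site interaction; at fixed total particle number N the Hilbert-space dimension is only N + 1, which is the scaling responsible for the ensemble nonequivalence and the negative temperatures discussed above.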

  15. Grand canonical ensemble, multi-particle wave functions and scattering data

    CERN Document Server

    Bruckmann, Falk; Kloiber, Thomas; Sulejmanpasic, Tin

    2015-01-01

    We show that information about scattering data of a quantum field theory can be obtained from studying the system at finite density and low temperatures. In particular we consider models formulated on the lattice which can be exactly dualized to theories of conserved charge fluxes on lattice links. Apart from eliminating the complex action problem at nonzero chemical potential mu, these dualizations allow for a particle world line interpretation of the dual fluxes from which one can extract data about the 2-particle wave function. As an example we perform dual Monte Carlo simulations of the 2-dimensional O(3) model at nonzero mu and finite volume, whose non-perturbative spectrum consists of a massive triplet of particles. At nonzero mu particles are induced in the system, which at sufficiently low temperature give rise to sectors of fixed particle number. We show that the scattering phase shifts can be obtained either from the critical chemical potential values separating the sectors or directly from the wave function.

  16. Nonlinear Higher-Order Thermo-Hydrodynamics: Generalized Approach in a Nonequilibrium Ensemble Formalism

    OpenAIRE

    Vasconcellos, Áurea R.; Ramos, J. Galvão; Luzzi, Roberto

    2004-01-01

    Construction of a nonlinear higher-order thermo-hydrodynamics, including correlations, in the framework of a Generalized Nonequilibrium Statistical Grand-Canonical Ensemble is presented. In this way a particular formalism is provided for the coupling and simultaneous treatment of the kinetic and hydrodynamic levels of description. It is based on a complete thermostatistical approach in terms of the densities of energy and of matter and their fluxes of all orders, as well as on their direc...

  17. Grand-canonical and canonical solution of self-avoiding walks with up to three monomers per site on the Bethe lattice.

    Science.gov (United States)

    Oliveira, Tiago J; Stilck, Jürgen F; Serra, Pablo

    2009-10-01

    We solve a model of polymers, represented by self-avoiding walks on a lattice which may visit the same site up to three times, in the grand-canonical formalism on the Bethe lattice. This may be a model for the collapse transition of polymers where only interactions between monomers at the same site are considered. The phase diagram of the model is very rich, displaying coexistence and critical surfaces, critical, critical end point, and tricritical lines, as well as a multicritical point. From the grand-canonical results, we present an argument to obtain the properties of the model in the canonical ensemble, and compare our results with simulations in the literature. We do actually find extended and collapsed phases, but the transition between them, composed of a line of critical end points and a line of tricritical points separated by the multicritical point, is always continuous. This result is at variance with the simulations for the model, which suggest that part of the line should be a discontinuous transition. Finally, we discuss the connection of the present model with the standard model for the collapse of polymers (self-avoiding, self-attracting walks), where the transition between the extended and collapsed phases is a tricritical point. PMID:19905330

  18. The Ensembl gene annotation system.

    Science.gov (United States)

    Aken, Bronwen L; Ayling, Sarah; Barrell, Daniel; Clarke, Laura; Curwen, Valery; Fairley, Susan; Fernandez Banet, Julio; Billis, Konstantinos; García Girón, Carlos; Hourlier, Thibaut; Howe, Kevin; Kähäri, Andreas; Kokocinski, Felix; Martin, Fergal J; Murphy, Daniel N; Nag, Rishi; Ruffier, Magali; Schuster, Michael; Tang, Y Amy; Vogel, Jan-Hinnerk; White, Simon; Zadissa, Amonida; Flicek, Paul; Searle, Stephen M J

    2016-01-01

    The Ensembl gene annotation system has been used to annotate over 70 different vertebrate species across a wide range of genome projects. Furthermore, it generates the automatic alignment-based annotation for the human and mouse GENCODE gene sets. The system is based on the alignment of biological sequences, including cDNAs, proteins and RNA-seq reads, to the target genome in order to construct candidate transcript models. Careful assessment and filtering of these candidate transcripts ultimately leads to the final gene set, which is made available on the Ensembl website. Here, we describe the annotation process in detail.Database URL: http://www.ensembl.org/index.html. PMID:27337980

  19. Sampling Motif-Constrained Ensembles of Networks

    Science.gov (United States)

    Fischer, Rico; Leitão, Jorge C.; Peixoto, Tiago P.; Altmann, Eduardo G.

    2015-10-01

    The statistical significance of network properties is conditioned on null models which satisfy specified properties but that are otherwise random. Exponential random graph models are a principled theoretical framework to generate such constrained ensembles, but they often fail in practice, either due to model inconsistency or due to the impossibility of sampling networks from them. These problems affect the important case of networks with prescribed clustering coefficient or number of small connected subgraphs (motifs). In this Letter we use the Wang-Landau method to obtain a multicanonical sampling that overcomes both these problems. We sample, in polynomial time, networks with arbitrary degree sequences from ensembles with imposed motif counts. Applying this method to social networks, we investigate the relation between transitivity and homophily, and we quantify the correlation between different types of motifs, finding that single motifs can explain up to 60% of the variation of motif profiles.
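
    The multicanonical (Wang-Landau) idea referred to above can be sketched compactly. The toy Python code below estimates the log-density of states of the triangle count over graphs with a fixed number of nodes and edges, using simple edge rewiring; it assumes the networkx package and does not preserve degree sequences, so it only illustrates the sampling scheme and is not the authors' algorithm:

      import numpy as np
      import networkx as nx

      def triangle_count(G):
          return sum(nx.triangles(G).values()) // 3

      def wang_landau_triangles(n=15, m=30, moves_per_stage=20000, flatness=0.8, seed=1):
          """Toy Wang-Landau estimate of ln g(T): the log-density of states of the
          triangle count T over simple graphs with n nodes and m edges."""
          rng = np.random.default_rng(seed)
          G = nx.gnm_random_graph(n, m, seed=seed)
          t = triangle_count(G)
          ln_g, hist, ln_f = {t: 0.0}, {t: 0}, 1.0
          for _ in range(40):                                    # at most 40 refinement stages
              for _ in range(moves_per_stage):
                  edges = list(G.edges())
                  u, v = edges[rng.integers(len(edges))]
                  a, b = int(rng.integers(n)), int(rng.integers(n))
                  if a != b and not G.has_edge(a, b):            # propose a single-edge rewiring
                      G.remove_edge(u, v); G.add_edge(a, b)
                      t_new = triangle_count(G)
                      if t_new not in ln_g:                      # first visit to this bin
                          ln_g[t_new], hist[t_new] = min(ln_g.values()), 0
                      if np.log(rng.random()) < ln_g[t] - ln_g[t_new]:
                          t = t_new                              # accept with min(1, g(t)/g(t_new))
                      else:
                          G.remove_edge(a, b); G.add_edge(u, v)  # reject: undo the move
                  ln_g[t] += ln_f                                # update density of states and histogram
                  hist[t] += 1
              h = np.array(list(hist.values()), dtype=float)
              if h.min() > flatness * h.mean():                  # flat histogram: refine modification factor
                  ln_f *= 0.5
                  hist = {k: 0 for k in hist}
              if ln_f < 1e-3:
                  break
          return ln_g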

  20. Sampling motif-constrained ensembles of networks

    CERN Document Server

    Fischer, Rico; Peixoto, Tiago P; Altmann, Eduardo G

    2015-01-01

    The statistical significance of network properties is conditioned on null models which satisfy specified properties but that are otherwise random. Exponential random graph models are a principled theoretical framework to generate such constrained ensembles, but they often fail in practice, either due to model inconsistency, or due to the impossibility of sampling networks from them. These problems affect the important case of networks with prescribed clustering coefficient or number of small connected subgraphs (motifs). In this paper we use the Wang-Landau method to obtain a multicanonical sampling that overcomes both these problems. We sample, in polynomial time, networks with arbitrary degree sequences from ensembles with imposed motif counts. Applying this method to social networks, we investigate the relation between transitivity and homophily, and we quantify the correlation between different types of motifs, finding that single motifs can explain up to 60% of the variation of motif profiles.

  1. Ensemble Models with Trees and Rules

    OpenAIRE

    Akdemir, Deniz

    2011-01-01

    In this article, we have proposed several approaches for post-processing a large ensemble of prediction models or rules. The results from our simulations show that the post-processing methods we have considered here are promising. We have used the techniques developed here for estimation of quantitative traits from markers, on the benchmark "Boston Housing" data set and in some simulations. In most cases, the produced models had better prediction performance than, for example, the ones produce...

  2. Application of Adaptive Ensemble Empirical Mode Decomposition Method to Electrocardiogram Signal Processing

    Institute of Scientific and Technical Information of China (English)

    陈略; 唐歌实; 訾艳阳; 冯卓楠; 李康

    2011-01-01

    To solve the problem of automatically obtaining parameters in ensemble empirical mode decomposition (EEMD), a new method called adaptive EEMD is proposed in this paper. First, by analysing how white noise affects the result of empirical mode decomposition, a criterion for the magnitude of the white noise added in the EEMD method is established; with this criterion, the two key EEMD parameters, the added white-noise magnitude and the ensemble number, can be obtained adaptively for different signals, yielding an adaptive EEMD algorithm. Finally, the algorithm is applied to electrocardiogram (ECG) signal processing: ECG signal denoising and heart-rate feature extraction are accomplished successfully, verifying the validity of adaptive EEMD and providing an effective method for processing cosmonaut ECG signals under complex background conditions.
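
    The core EEMD loop that the adaptive method tunes can be sketched as follows. The emd callable is assumed to be supplied by a third-party package and to return a fixed number of IMFs per call (both assumptions made only to keep the sketch short); the default noise ratio and ensemble size are the usual rules of thumb that the adaptive criterion replaces:

      import numpy as np

      def eemd(signal, emd, noise_std_ratio=0.2, n_ensemble=100, seed=0):
          """Ensemble EMD: average the IMFs of many noise-perturbed copies of the signal.

          The residual white noise in the averaged IMFs decays roughly as
          sigma / sqrt(n_ensemble), which is what the adaptive criterion controls.
          """
          rng = np.random.default_rng(seed)
          sigma = noise_std_ratio * signal.std()
          imf_sum = None
          for _ in range(n_ensemble):
              noisy = signal + rng.normal(0.0, sigma, size=signal.shape)
              imfs = np.asarray(emd(noisy))          # assumed shape: (n_imfs, len(signal))
              imf_sum = imfs if imf_sum is None else imf_sum + imfs
          return imf_sum / n_ensemble                # ensemble-averaged IMFs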

  3. Canonical reduction for dilatonic gravity in 3+1 dimensions

    CERN Document Server

    Scott, T C; Mann, R B; Fee, G J

    2016-01-01

    We generalize the 1+1-dimensional gravity formalism of Ohta and Mann to 3+1 dimensions by developing the canonical reduction of a proposed formalism applied to a system coupled with a set of point particles. This is done via the Arnowitt-Deser-Misner method and by eliminating the resulting constraints and imposing coordinate conditions. The reduced Hamiltonian is completely determined in terms of the particles' canonical variables (coordinates, dilaton field and momenta). It is found that the equation governing the dilaton field under suitable gauge and coordinate conditions, including the absence of transverse-traceless metric components, is a logarithmic Schroedinger equation. Thus, although different, the 3+1 formalism retains some essential features of the earlier 1+1 formalism, in particular the means of obtaining a quantum theory for dilatonic gravity.

  4. Quantum canonical transformations in star-product formalism

    International Nuclear Information System (INIS)

    We study the construction of the star-product version of three basic quantum canonical transformations, which are known as the generators of the full canonical algebra. By considering the fact that the star-product of c-number phase-space functions is in complete isomorphism to the Hilbert-space operator algebra, it is shown that while the constructions of gauge and point transformations are immediate, the generator of the interchanging transformation deforms this isomorphism. As an alternative approach, we study all of them within the deformed form. How any c-number function transforms under linear and nonlinear transformations, together with the intertwining method, is presented within this argument as a complementary subject of the text

  5. Prediction of Weather Impacted Airport Capacity using Ensemble Learning

    Science.gov (United States)

    Wang, Yao Xun

    2011-01-01

    Ensemble learning with the Bagging Decision Tree (BDT) model was used to assess the impact of weather on airport capacities at selected high-demand airports in the United States. The ensemble bagging decision tree models were developed and validated using the Federal Aviation Administration (FAA) Aviation System Performance Metrics (ASPM) data and weather forecasts at these airports. The study examines the performance of BDT, along with a traditional single Support Vector Machine (SVM), for airport runway configuration selection and airport arrival rate (AAR) prediction during weather impacts. Testing of these models was accomplished using observed weather, weather forecasts, and airport operation information at the chosen airports. The experimental results show that ensemble methods are more accurate than a single SVM classifier. The airport capacity ensemble method presented here can be used as a decision support model that helps air traffic flow management meet weather-impacted airport capacity in order to reduce costs and increase safety.
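
    A minimal scikit-learn sketch of the comparison described above, using synthetic data in place of the ASPM and weather records (which are not reproduced here):

      from sklearn.ensemble import BaggingClassifier
      from sklearn.svm import SVC
      from sklearn.model_selection import train_test_split
      from sklearn.datasets import make_classification

      # synthetic stand-in for the weather/airport features and arrival-rate classes
      X, y = make_classification(n_samples=2000, n_features=12, n_informative=6,
                                 n_classes=3, n_clusters_per_class=1, random_state=0)
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

      # bagged decision trees (the default base estimator of BaggingClassifier)
      bdt = BaggingClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
      svm = SVC(kernel="rbf", C=1.0).fit(X_tr, y_tr)

      print("BDT accuracy:", bdt.score(X_te, y_te))
      print("SVM accuracy:", svm.score(X_te, y_te))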

  6. Protein Remote Homology Detection Based on an Ensemble Learning Approach

    Science.gov (United States)

    Chen, Junjie; Liu, Bingquan; Huang, Dong

    2016-01-01

    Protein remote homology detection is one of the central problems in bioinformatics. Although some computational methods have been proposed, the problem is still far from being solved. In this paper, an ensemble classifier for protein remote homology detection, called SVM-Ensemble, was proposed with a weighted voting strategy. SVM-Ensemble combined three basic classifiers based on different feature spaces, including Kmer, ACC, and SC-PseAAC. These features consider the characteristics of proteins from various perspectives, incorporating both the sequence composition and the sequence-order information along the protein sequences. Experimental results on a widely used benchmark dataset showed that the proposed SVM-Ensemble can obviously improve the predictive performance for the protein remote homology detection. Moreover, it achieved the best performance and outperformed other state-of-the-art methods. PMID:27294123
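
    The weighted soft-voting step can be sketched as follows (Python with scikit-learn). The feature extraction for Kmer, ACC and SC-PseAAC is not shown, and the weight vector is a stand-in for whatever calibration-based weights the paper actually uses:

      import numpy as np
      from sklearn.svm import SVC

      def fit_svm_ensemble(feature_sets, y):
          """One probabilistic SVM per feature space (e.g. Kmer, ACC, SC-PseAAC)."""
          return [SVC(probability=True, random_state=0).fit(X, y) for X in feature_sets]

      def predict_weighted_vote(models, weights, feature_sets):
          """Weighted soft vote: average the class-probability outputs of the base SVMs."""
          probs = sum(w * m.predict_proba(X)
                      for m, w, X in zip(models, np.asarray(weights), feature_sets))
          return probs.argmax(axis=1)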

  7. Ensemble-based Probabilistic Forecasting at Horns Rev

    DEFF Research Database (Denmark)

    Pinson, Pierre; Madsen, Henrik

    2009-01-01

    probabilistic forecasts, the resolution of which may be maximized by using meteorological ensemble predictions as input. The paper concentrates on the test case of the Horns Rev wind farm over a period of approximately 1 year, in order to describe, apply and discuss a complete ensemble-based probabilistic...... forecasting methodology. In a first stage, ensemble forecasts of meteorological variables are converted to power through a suitable power curve model. This model employs local polynomial regression, and is adaptively estimated with an orthogonal fitting method. The obtained ensemble forecasts of wind power are...... then converted into predictive distributions with an original adaptive kernel dressing method. The shape of the kernels is driven by a mean-variance model, the parameters of which are recursively estimated in order to maximize the overall skill of the obtained predictive distributions. Such a methodology...
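
    The kernel-dressing step can be illustrated with a short sketch: each (power-curve-converted) ensemble member is dressed with a Gaussian kernel whose width follows a simple mean-variance model. The recursive, adaptively estimated version described above is not reproduced; the coefficients a and b below are placeholders:

      import numpy as np
      from scipy.stats import norm

      def dressed_cdf(members, x_grid, a=0.0, b=1.0):
          """Predictive CDF from Gaussian-kernel dressing of wind power ensemble members."""
          sigma = np.sqrt(max(a + b * members.var(), 1e-6))   # mean-variance model for kernel width
          # equally weighted mixture with one kernel per ensemble member
          return np.mean([norm.cdf(x_grid, loc=m, scale=sigma) for m in members], axis=0)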

  8. Inter-comparison study of the ENSEMBLE project

    International Nuclear Information System (INIS)

    During the days of the Chernobyl accident, the European national long-range dispersion forecasts differed because of differences in national models and in weather prediction methods. The ENSEMBLE project was launched for the reconciliation and harmonization of these disparate long-range dispersion forecasts. Responsible European emergency organizations, in addition to Canadian, Japanese, Korean and US agencies, have participated in ENSEMBLE. KAERI joined the inter-comparison study for the exercise on the 901-001 scenario in ENSEMBLE. KAERI was assigned KR1 as a national code and 53 as a model number. The model of KAERI was compared with the other models of the participants in ENSEMBLE. The comparative results are presented with scatter plots and statistical methods in this paper. (author)

  9. The entropy of network ensembles

    OpenAIRE

    Bianconi, Ginestra

    2008-01-01

    In this paper we generalize the concept of random networks to describe networks with non-trivial features by a statistical mechanics approach. This framework is able to describe ensembles of undirected, directed as well as weighted networks. These networks might have non-trivial community structure or, in the case of networks embedded in a given space, a non-trivial distance dependence of the link probability. These ensembles are characterized by their entropy, which evaluates the cardinality of ...

  10. Deformed Ginibre ensembles and integrable systems

    Energy Technology Data Exchange (ETDEWEB)

    Orlov, A.Yu., E-mail: orlovs@ocean.ru

    2014-01-17

    We consider three Ginibre ensembles (real, complex and quaternion-real) with deformed measures and relate them to known integrable systems by presenting partition functions of these ensembles in the form of fermionic expectation values. We also introduce doubly deformed Dyson–Wigner ensembles and compare their fermionic representations with those of Ginibre ensembles.

  11. Universal canonical entropy for gravitating systems

    Indian Academy of Sciences (India)

    Ashok Chatterjee; Parthasarathi Majumdar

    2004-10-01

    The thermodynamics of general relativistic systems with boundary, obeying a Hamiltonian constraint in the bulk, is determined solely by the boundary quantum dynamics, and hence by the area spectrum. Assuming, for large area of the boundary, (a) an area spectrum as determined by non-perturbative canonical quantum general relativity (NCQGR), (b) an energy spectrum that bears a power law relation to the area spectrum, (c) an area law for the leading order microcanonical entropy, leading thermal fluctuation corrections to the canonical entropy are shown to be logarithmic in area with a universal coefficient. Since the microcanonical entropy also has universal logarithmic corrections to the area law (from quantum space-time fluctuations, as found earlier) the canonical entropy then has a universal form including logarithmic corrections to the area law. This form is shown to be independent of the index appearing in assumption (b). The index, however, is crucial in ascertaining the domain of validity of our approach based on thermal equilibrium.

  12. Covariant Gauge Fixing and Canonical Quantization

    CERN Document Server

    McKeon, D G C

    2011-01-01

    Theories that contain first class constraints possess gauge invariance which results in the necessity of altering the measure in the associated quantum mechanical path integral. If the path integral is derived from the canonical structure of the theory, then the choice of gauge conditions used in constructing Faddeev's measure cannot be covariant. This shortcoming is normally overcome either by using the "Faddeev-Popov" quantization procedure, or by the approach of Batalin-Fradkin-Fradkina-Vilkovisky, and then demonstrating that these approaches are equivalent to the path integral constructed from the canonical approach with Faddeev's measure. We propose in this paper an alternate way of defining the measure for the path integral when it is constructed using the canonical procedure for theories containing first class constraints and that this new approach can be used in conjunction with covariant gauges. This procedure follows the Faddeev-Popov approach, but rather than working with the form of the gauge tran...

  13. A Canonical Analysis of the Massless Superparticle

    CERN Document Server

    McKeon, D G C

    2012-01-01

    The canonical structure of the action for a massless superparticle is considered in d = 2 + 1 and d = 3 + 1 dimensions. This is done by examining the contribution to the action of each of the components of the spinor θ present; no attempt is made to maintain manifest covariance. Upon using the Dirac Bracket to eliminate the second class constraints arising from the canonical momenta associated with half of these components, we find that the remaining components have canonical momenta that are all first class constraints. From these first class constraints, it is possible to derive the generator of half of the local Fermionic κ-symmetry of Siegel; which half is contingent upon the choice of which half of the momenta associated with the components of θ are taken to be second class constraints. The algebra of the generator of this Fermionic symmetry transformation is examined.

  14. Functional linear regression via canonical analysis

    CERN Document Server

    He, Guozhong; Wang, Jane-Ling; Yang, Wenjing; 10.3150/09-BEJ228

    2011-01-01

    We study regression models for the situation where both dependent and independent variables are square-integrable stochastic processes. Questions concerning the definition and existence of the corresponding functional linear regression models and some basic properties are explored for this situation. We derive a representation of the regression parameter function in terms of the canonical components of the processes involved. This representation establishes a connection between functional regression and functional canonical analysis and suggests alternative approaches for the implementation of functional linear regression analysis. A specific procedure for the estimation of the regression parameter function using canonical expansions is proposed and compared with an established functional principal component regression approach. As an example of an application, we present an analysis of mortality data for cohorts of medflies, obtained in experimental studies of aging and longevity.

  15. Eigenstate Gibbs Ensemble in Integrable Quantum Systems

    CERN Document Server

    Nandy, Sourav; Das, Arnab; Dhar, Abhishek

    2016-01-01

    The Eigenstate Thermalization Hypothesis implies that for a thermodynamically large system in one of its eigenstates, the reduced density matrix describing any finite subsystem is determined solely by a set of relevant conserved quantities. In a generic system, only the energy plays that role and hence eigenstates appear locally thermal. Integrable systems, on the other hand, possess an extensive number of such conserved quantities and hence the reduced density matrix requires specification of an infinite number of parameters (Generalized Gibbs Ensemble). However, here we show by unbiased statistical sampling of the individual eigenstates with a given finite energy density, that the local description of an overwhelming majority of these states of even such an integrable system is actually Gibbs-like, i.e. requires only the energy density of the eigenstate. Rare eigenstates that cannot be represented by the Gibbs ensemble can also be sampled efficiently by our method and their local properties are then s...

  16. Ensemble annealing of complex physical systems

    CERN Document Server

    Habeck, Michael

    2015-01-01

    Algorithms for simulating complex physical systems or solving difficult optimization problems often resort to an annealing process. Rather than simulating the system at the temperature of interest, an annealing algorithm starts at a temperature that is high enough to ensure ergodicity and gradually decreases it until the destination temperature is reached. This idea is used in popular algorithms such as parallel tempering and simulated annealing. A general problem with annealing methods is that they require a temperature schedule. Choosing well-balanced temperature schedules can be tedious and time-consuming. Imbalanced schedules can have a negative impact on the convergence, runtime and success of annealing algorithms. This article outlines a unifying framework, ensemble annealing, that combines ideas from simulated annealing, histogram reweighting and nested sampling with concepts in thermodynamic control. Ensemble annealing simultaneously simulates a physical system and estimates its density of states. The...

  17. Ensemble Forecasting of Major Solar Flares

    CERN Document Server

    Guerra, J A; Uritsky, V M

    2015-01-01

    We present the results from the first ensemble prediction model for major solar flares (M and X classes). Using the probabilistic forecasts from three models hosted at the Community Coordinated Modeling Center (NASA-GSFC) and the NOAA forecasts, we developed an ensemble forecast by linearly combining the flaring probabilities from all four methods. Performance-based combination weights were calculated using a Monte Carlo-type algorithm by applying a decision threshold $P_{th}$ to the combined probabilities and maximizing the Heidke Skill Score (HSS). Using the probabilities and events time series from 13 recent solar active regions (2012 - 2014), we found that a linear combination of probabilities can improve both probabilistic and categorical forecasts. Combination weights vary with the applied threshold and none of the tested individual forecasting models seem to provide more accurate predictions than the others for all values of $P_{th}$. According to the maximum values of HSS, a performance-based weights ...

  18. Rényi entropy, abundance distribution, and the equivalence of ensembles

    Science.gov (United States)

    Mora, Thierry; Walczak, Aleksandra M.

    2016-05-01

    Distributions of abundances or frequencies play an important role in many fields of science, from biology to sociology, as does the Rényi entropy, which measures the diversity of a statistical ensemble. We derive a mathematical relation between the abundance distribution and the Rényi entropy, by analogy with the equivalence of ensembles in thermodynamics. The abundance distribution is mapped onto the density of states, and the Rényi entropy to the free energy. The two quantities are related in the thermodynamic limit by a Legendre transform, by virtue of the equivalence between the micro-canonical and canonical ensembles. In this limit, we show how the Rényi entropy can be constructed geometrically from rank-frequency plots. This mapping predicts that non-concave regions of the rank-frequency curve should result in kinks in the Rényi entropy as a function of its order. We illustrate our results on simple examples, and emphasize the limitations of the equivalence of ensembles when a thermodynamic limit is not well defined. Our results help choose reliable diversity measures based on the experimental accuracy of the abundance distributions in particular frequency ranges.
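
    One way to write the mapping sketched above (a reconstruction from the abstract, with the surprisal E_i = -ln p_i playing the role of an energy and S(E), the log of the abundance distribution, playing the role of a microcanonical entropy) is

      H_q = \frac{1}{1-q}\,\ln \sum_i p_i^{\,q},
      \qquad
      (q-1)\,H_q \;\simeq\; \min_E \bigl[\, qE - S(E) \,\bigr]
      \quad \text{(thermodynamic limit)},

    which is the Legendre-transform, equivalence-of-ensembles relation between the Rényi entropy, in the role of a free energy, and the abundance distribution, in the role of a density of states.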

  19. Current commutator anomalies and chiral anomalies in the canonical formalism

    International Nuclear Information System (INIS)

    Without recourse to the Bjorken-Johnson-Low (BJL) method, current-current and current-electric-field commutator anomalies are evaluated in chiral gauge theories in two- and four-dimensional spacetime with the help of a gauge covariant regularization method. The results are consistent with previous analyses through the BJL method, and partially confirmed Faddeev's conjecture on the commutator anomalies of the Gauss law constraint operators within the canonical formalism. The chiral anomalies of the current divergence are derived from these commutator anomalies in the Weyl gauge where current-electric-field commutator anomalies play important roles

  20. From Classical to Quantum: New Canonical Tools for the Dynamics of Gravity

    OpenAIRE

    Höhn, P.A.

    2012-01-01

    In a gravitational context, canonical methods offer an intuitive picture of the dynamics and simplify an identification of the degrees of freedom. Nevertheless, extracting dynamical information from background independent approaches to quantum gravity is a highly non-trivial challenge. In this thesis, the conundrum of (quantum) gravitational dynamics is approached from two different directions by means of new canonical tools. This thesis is accordingly divided into two parts: In the first par...

  1. On a Canonical Formulation of String Theory in Massive Background Fields

    OpenAIRE

    I.L. Buchbinder; Pershin, V. D.; Toder, G. B.

    1996-01-01

    We propose a method of constructing a gauge-invariant canonical formulation for a non-gauge classical theory which depends on a set of parameters. The requirement of closure for the algebra of operators generating quantum gauge transformations leads to restrictions on the parameters of the theory. This approach is then applied to bosonic string theory coupled to massive background fields. It is shown that within the proposed canonical formulation the correct linear equations of motion for background fields...

  2. The canonical effect in statistical models for relativistic heavy ion collisions

    OpenAIRE

    Keranen, A.; Becattini, F.

    2001-01-01

    Enforcing exact conservation laws instead of average ones in statistical thermal models for relativistic heavy ion reactions gives rise to the so-called canonical effect, which can be used to explain some enhancement effects when going from elementary (e.g. pp) or small (pA) systems towards large AA systems. We review the recently developed method for the computation of canonical statistical thermodynamics, and give insight into when this is needed in the analysis of experimental data.

  3. The canonical effect in statistical models for relativistic heavy ion collisions

    CERN Document Server

    Keränen, A

    2002-01-01

    Enforcing exact conservation laws instead of average ones in statistical thermal models for relativistic heavy ion reactions gives rise to the so-called canonical effect, which can be used to explain some enhancement effects when going from elementary (e.g. pp) or small (pA) systems towards large AA systems. We review the recently developed method for the computation of canonical statistical thermodynamics, and give insight into when this is needed in the analysis of experimental data.

  4. On the canonical forms of the multi-dimensional averaged brackets

    OpenAIRE

    Maltsev, A. Ya.

    2015-01-01

    We consider here special Poisson brackets given by the "averaging" of local multi-dimensional Poisson brackets in the Whitham method. For brackets of this kind it is natural to ask about their canonical forms, which can be obtained after transformations preserving the "physical meaning" of the field variables. We show here that the averaged bracket can always be written in canonical form after a transformation of "Hydrodynamic Type" in the case of the absence of annihilators of the initial br...

  5. The canonical effect in statistical models for relativistic heavy ion collisions

    International Nuclear Information System (INIS)

    Enforcing exact conservation laws instead of average laws in statistical thermal models for relativistic heavy ion reactions gives rise to the so-called canonical effect, which can be used to explain some enhancement effects when going from elementary (e.g. pp) or small (pA) systems towards large AA systems. We review the recently developed method for the computation of canonical statistical thermodynamics, and give an insight into when this is needed in the analysis of experimental data. (author)

  6. Non-canonical two-field inflation to order $\\xi^2$

    CERN Document Server

    Wang, Yun-Chao

    2016-01-01

    In non-canonical two-field inflation models, deviations from the canonical model can be captured by a parameter $\\xi$. We show this parameter is usually one half of the slow-roll order and analytically calculate the primordial power spectra to the precision of order $\\xi^2$. The super-horizon perturbations are studied with an improved method, which gives a correction of order $\\xi$. Three typical examples demonstrate that our analytical formulae of power spectra fit well with numerical simulation.

  7. Canonical quantization of generally covariant systems

    International Nuclear Information System (INIS)

    Kretschmann (1917) argued that general relativity does not satisfy any relativity principle and that it is actually a theory of absolute space-time. The issues raised by Kretschmann, that of Hamiltonian dynamics and of canonical quantization of generally covariant systems, are discussed. The questions raised are: what is the role of space-time diffeomorphisms in Hamiltonian dynamics of generally covariant systems, what is the role of isometries in Hamiltonian dynamics of such systems and what happens to both problems in canonical quantization. (author)

  8. Canonical transformations and Hamiltonian evolutionary systems

    International Nuclear Information System (INIS)

    In many Lagrangian field theories, one has a Poisson bracket defined on the space of local functionals. We find necessary and sufficient conditions for a transformation on the space of local functionals to be canonical in three different cases. These three cases depend on the specific dimensions of the vector bundle of the theory and the associated Hamiltonian differential operator. We also show how a canonical transformation transforms a Hamiltonian evolutionary system and its conservation laws. Finally, we illustrate these ideas with three examples.

  9. Hierarchical ensemble-based data fusion for structural health monitoring

    International Nuclear Information System (INIS)

    In structural health monitoring, damage detection results always have uncertainty because of three factors: measurement noise, modeling error and environmental changes. Data fusion can lead to improved accuracy of a classification decision as compared to a decision based on any individual data source alone. Ensemble approaches constitute a relatively new breed of algorithms used for data fusion. In this paper, we introduced a hierarchical ensemble scheme to the data fusion field. The hierarchical ensemble scheme was based on the Dempster–Shafer (DS) theory and the Rotation Forest (RF) method; it was called a hierarchical ensemble because the RF method itself was an ensemble method. The DS theory was used to combine the output of RF based on different data sources. The validation accuracy of the RF model was used to improve the performance of the hierarchical ensemble. Health monitoring of a small-scale two-story frame structure with different damage states, subjected to shaking-table tests, was used as an example to validate the efficiency of the proposed scheme. The experimental results indicated that the proposed scheme improves the identification accuracy and increases the reliability of identification
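
    As an illustration of the DS combination step (not the exact implementation used in the study), Dempster's rule for two basic probability assignments can be written in a few lines; the sensor masses below are made-up numbers:

      def dempster_combine(m1, m2):
          """Combine two basic probability assignments with Dempster's rule.

          Focal elements are frozensets over a common frame of discernment; mass
          assigned to the empty intersection (conflict) is removed and the rest
          renormalised.
          """
          combined, conflict = {}, 0.0
          for A, mA in m1.items():
              for B, mB in m2.items():
                  C = A & B
                  if C:
                      combined[C] = combined.get(C, 0.0) + mA * mB
                  else:
                      conflict += mA * mB
          k = 1.0 - conflict
          return {C: v / k for C, v in combined.items()}

      # two classifiers (e.g. Rotation Forests trained on different sensor sets)
      # reporting beliefs over {damaged, healthy}; the numbers are illustrative
      frame = frozenset({"damaged", "healthy"})
      m_sensor1 = {frozenset({"damaged"}): 0.6, frame: 0.4}
      m_sensor2 = {frozenset({"damaged"}): 0.7, frozenset({"healthy"}): 0.1, frame: 0.2}
      print(dempster_combine(m_sensor1, m_sensor2))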

  10. Localization of atomic ensembles via superfluorescence

    OpenAIRE

    Macovei, M.; Evers, J.; Keitel, C. H.; Zubairy, M. S.

    2006-01-01

    The sub-wavelength localization of an ensemble of atoms concentrated to a small volume in space is investigated. The localization relies on the interaction of the ensemble with a standing wave laser field. The light scattered in the interaction of standing wave field and atom ensemble depends on the position of the ensemble relative to the standing wave nodes. This relation can be described by a fluorescence intensity profile, which depends on the standing wave field parameters, the ensemble ...

  11. Combining 2-m temperature nowcasting and short range ensemble forecasting

    Directory of Open Access Journals (Sweden)

    A. Kann

    2011-12-01

    Full Text Available During recent years, numerical ensemble prediction systems have become an important tool for estimating the uncertainties of dynamical and physical processes as represented in numerical weather models. The latest generation of limited area ensemble prediction systems (LAM-EPSs) allows for probabilistic forecasts at high resolution in both space and time. However, these systems still suffer from systematic deficiencies. Especially for nowcasting (0–6 h) applications the ensemble spread is smaller than the actual forecast error. This paper tries to generate probabilistic short range 2-m temperature forecasts by combining a state-of-the-art nowcasting method and a limited area ensemble system, and compares the results with statistical methods. The Integrated Nowcasting Through Comprehensive Analysis (INCA) system, which has been in operation at the Central Institute for Meteorology and Geodynamics (ZAMG) since 2006 (Haiden et al., 2011), provides short range deterministic forecasts at high temporal (15 min–60 min) and spatial (1 km) resolution. An INCA Ensemble (INCA-EPS) of 2-m temperature forecasts is constructed by applying a dynamical approach, a statistical approach, and a combined dynamic-statistical method. The dynamical method takes uncertainty information (i.e. ensemble variance) from the operational limited area ensemble system ALADIN-LAEF (Aire Limitée Adaptation Dynamique Développement InterNational Limited Area Ensemble Forecasting), which is running operationally at ZAMG (Wang et al., 2011). The purely statistical method assumes a well-calibrated spread-skill relation and applies ensemble spread according to the skill of the INCA forecast of the most recent past. The combined dynamic-statistical approach adapts the ensemble variance gained from ALADIN-LAEF with non-homogeneous Gaussian regression (NGR), which yields a statistical correction of the first and second moment (mean bias and dispersion) for Gaussian distributed continuous
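
    The NGR step mentioned above can be sketched as follows. For brevity the parameters are fitted here by maximum likelihood on synthetic data; operational NGR is often fitted by minimising the CRPS instead, and the exact predictor choices of the study are not reproduced:

      import numpy as np
      from scipy.optimize import minimize
      from scipy.stats import norm

      def fit_ngr(ens_mean, ens_var, obs):
          """Non-homogeneous Gaussian regression: y ~ N(a + b*mean, c + d*var)."""
          def nll(p):
              a, b, c, d = p
              mu = a + b * ens_mean
              var = np.maximum(c + d * ens_var, 1e-6)   # keep the variance positive
              return -norm.logpdf(obs, loc=mu, scale=np.sqrt(var)).sum()
          return minimize(nll, x0=[0.0, 1.0, 1.0, 1.0], method="Nelder-Mead").x

      # synthetic 2-m temperature example: slightly biased, under-dispersive ensemble
      rng = np.random.default_rng(0)
      truth = rng.normal(10.0, 3.0, size=500)
      ens = truth[:, None] + 0.5 + rng.normal(0.0, 1.0, size=(500, 20))
      a, b, c, d = fit_ngr(ens.mean(axis=1), ens.var(axis=1), truth)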

  12. A canonical theory of dynamic decision-making

    Directory of Open Access Journals (Sweden)

    John eFox

    2013-04-01

    Full Text Available Decision-making behaviour is studied in many very different fields, from medicine and economics to psychology and neuroscience, with major contributions from mathematics and statistics, computer science, AI and other technical disciplines. However the conceptualisation of what decision-making is and methods for studying it vary greatly and this has resulted in fragmentation of the field. A theory that can accommodate various perspectives may facilitate interdisciplinary working. We present such a theory in which decision-making is articulated as a set of canonical functions that are sufficiently general to accommodate diverse viewpoints, yet sufficiently precise that they can be instantiated in different ways for specific theoretical or practical purposes. The canons cover the whole decision cycle, from the framing of a decision based on the goals, beliefs, and background knowledge of the decision maker to the formulation of decision options, establishing preferences over them, and making commitments. Commitments can lead to the initiation of new decisions and any step in the cycle can incorporate reasoning about previous decisions and the rationales for them, and lead to revising or abandoning existing commitments. The theory situates decision making with respect to other high-level cognitive capabilities like problem-solving, planning and collaborative decision-making. The canonical approach is assessed in three domains: cognitive and neuro-psychology, artificial intelligence, and decision engineering.

  13. Accurate and precise determination of critical properties from Gibbs ensemble Monte Carlo simulations.

    Science.gov (United States)

    Dinpajooh, Mohammadhasan; Bai, Peng; Allan, Douglas A; Siepmann, J Ilja

    2015-09-21

    Since the seminal paper by Panagiotopoulos [Mol. Phys. 61, 813 (1987)], the Gibbs ensemble Monte Carlo (GEMC) method has been the most popular particle-based simulation approach for the computation of vapor-liquid phase equilibria. However, the validity of GEMC simulations in the near-critical region has been questioned because rigorous finite-size scaling approaches cannot be applied to simulations with fluctuating volume. Valleau [Mol. Simul. 29, 627 (2003)] has argued that GEMC simulations would lead to a spurious overestimation of the critical temperature. More recently, Patel et al. [J. Chem. Phys. 134, 024101 (2011)] opined that the use of analytical tail corrections would be problematic in the near-critical region. To address these issues, we perform extensive GEMC simulations for Lennard-Jones particles in the near-critical region varying the system size, the overall system density, and the cutoff distance. For a system with N = 5500 particles, potential truncation at 8σ and analytical tail corrections, an extrapolation of GEMC simulation data at temperatures in the range from 1.27 to 1.305 yields T_c = 1.3128 ± 0.0016, ρ_c = 0.316 ± 0.004, and p_c = 0.1274 ± 0.0013 in excellent agreement with the thermodynamic limit determined by Potoff and Panagiotopoulos [J. Chem. Phys. 109, 10914 (1998)] using grand canonical Monte Carlo simulations and finite-size scaling. Critical properties estimated using GEMC simulations with different overall system densities (0.296 ≤ ρ_t ≤ 0.336) agree to within the statistical uncertainties. For simulations with tail corrections, data obtained using r_cut = 3.5σ yield T_c and p_c that are higher by 0.2% and 1.4% than simulations with r_cut = 5 and 8σ but still with overlapping 95% confidence intervals. In contrast, GEMC simulations with a truncated and shifted potential show that r_cut = 8σ is insufficient to obtain accurate results. Additional GEMC simulations for hard-core square-well particles with
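
    The extrapolation of GEMC coexistence densities to the critical point is conventionally done with the density scaling law and the law of rectilinear diameters; the sketch below is the standard textbook procedure, not necessarily the exact fitting protocol of this paper:

      import numpy as np
      from scipy.optimize import curve_fit

      BETA = 0.325   # 3-D Ising critical exponent, assumed in the scaling fit

      def estimate_critical_point(T, rho_liq, rho_vap):
          """Estimate (T_c, rho_c) from GEMC coexistence densities."""
          # rho_l - rho_v = B * (1 - T/T_c)**BETA  ->  fit B and T_c
          f = lambda T, B, Tc: B * np.maximum(1.0 - T / Tc, 0.0) ** BETA
          (B, Tc), _ = curve_fit(f, T, rho_liq - rho_vap, p0=[1.0, T.max() * 1.05])
          # (rho_l + rho_v)/2 = rho_c + A * (T_c - T)  ->  linear fit gives rho_c as intercept
          A, rho_c = np.polyfit(Tc - T, 0.5 * (rho_liq + rho_vap), 1)
          return Tc, rho_c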

  14. Accurate and precise determination of critical properties from Gibbs ensemble Monte Carlo simulations

    Energy Technology Data Exchange (ETDEWEB)

    Dinpajooh, Mohammadhasan [Department of Chemistry and Chemical Theory Center, University of Minnesota, 207 Pleasant Street SE, Minneapolis, Minnesota 55455 (United States); Bai, Peng; Allan, Douglas A. [Department of Chemical Engineering and Materials Science, University of Minnesota, 421 Washington Avenue SE, Minneapolis, Minnesota 55455 (United States); Siepmann, J. Ilja, E-mail: siepmann@umn.edu [Department of Chemistry and Chemical Theory Center, University of Minnesota, 207 Pleasant Street SE, Minneapolis, Minnesota 55455 (United States); Department of Chemical Engineering and Materials Science, University of Minnesota, 421 Washington Avenue SE, Minneapolis, Minnesota 55455 (United States)

    2015-09-21

    Since the seminal paper by Panagiotopoulos [Mol. Phys. 61, 813 (1987)], the Gibbs ensemble Monte Carlo (GEMC) method has been the most popular particle-based simulation approach for the computation of vapor–liquid phase equilibria. However, the validity of GEMC simulations in the near-critical region has been questioned because rigorous finite-size scaling approaches cannot be applied to simulations with fluctuating volume. Valleau [Mol. Simul. 29, 627 (2003)] has argued that GEMC simulations would lead to a spurious overestimation of the critical temperature. More recently, Patel et al. [J. Chem. Phys. 134, 024101 (2011)] opined that the use of analytical tail corrections would be problematic in the near-critical region. To address these issues, we perform extensive GEMC simulations for Lennard-Jones particles in the near-critical region varying the system size, the overall system density, and the cutoff distance. For a system with N = 5500 particles, potential truncation at 8σ and analytical tail corrections, an extrapolation of GEMC simulation data at temperatures in the range from 1.27 to 1.305 yields T_c = 1.3128 ± 0.0016, ρ_c = 0.316 ± 0.004, and p_c = 0.1274 ± 0.0013 in excellent agreement with the thermodynamic limit determined by Potoff and Panagiotopoulos [J. Chem. Phys. 109, 10914 (1998)] using grand canonical Monte Carlo simulations and finite-size scaling. Critical properties estimated using GEMC simulations with different overall system densities (0.296 ≤ ρ_t ≤ 0.336) agree to within the statistical uncertainties. For simulations with tail corrections, data obtained using r_cut = 3.5σ yield T_c and p_c that are higher by 0.2% and 1.4% than simulations with r_cut = 5 and 8σ but still with overlapping 95% confidence intervals. In contrast, GEMC simulations with a truncated and shifted potential show that r_cut = 8σ is insufficient to obtain accurate results. Additional GEMC simulations for hard

  15. Accurate and precise determination of critical properties from Gibbs ensemble Monte Carlo simulations

    International Nuclear Information System (INIS)

    Since the seminal paper by Panagiotopoulos [Mol. Phys. 61, 813 (1987)], the Gibbs ensemble Monte Carlo (GEMC) method has been the most popular particle-based simulation approach for the computation of vapor–liquid phase equilibria. However, the validity of GEMC simulations in the near-critical region has been questioned because rigorous finite-size scaling approaches cannot be applied to simulations with fluctuating volume. Valleau [Mol. Simul. 29, 627 (2003)] has argued that GEMC simulations would lead to a spurious overestimation of the critical temperature. More recently, Patel et al. [J. Chem. Phys. 134, 024101 (2011)] opined that the use of analytical tail corrections would be problematic in the near-critical region. To address these issues, we perform extensive GEMC simulations for Lennard-Jones particles in the near-critical region varying the system size, the overall system density, and the cutoff distance. For a system with N = 5500 particles, potential truncation at 8σ and analytical tail corrections, an extrapolation of GEMC simulation data at temperatures in the range from 1.27 to 1.305 yields Tc = 1.3128 ± 0.0016, ρc = 0.316 ± 0.004, and pc = 0.1274 ± 0.0013 in excellent agreement with the thermodynamic limit determined by Potoff and Panagiotopoulos [J. Chem. Phys. 109, 10914 (1998)] using grand canonical Monte Carlo simulations and finite-size scaling. Critical properties estimated using GEMC simulations with different overall system densities (0.296 ≤ ρt ≤ 0.336) agree to within the statistical uncertainties. For simulations with tail corrections, data obtained using rcut = 3.5σ yield Tc and pc that are higher by 0.2% and 1.4% than simulations with rcut = 5 and 8σ but still with overlapping 95% confidence intervals. In contrast, GEMC simulations with a truncated and shifted potential show that rcut = 8σ is insufficient to obtain accurate results. Additional GEMC simulations for hard-core square-well particles with various ranges of the

  16. Grand canonical Monte Carlo using solvent repacking: Application to phase behavior of hard disk mixtures

    International Nuclear Information System (INIS)

    A new “solvent repacking Monte Carlo” strategy for performing grand canonical ensemble simulations in condensed phases is introduced and applied to the study of hard-disk systems. The strategy is based on the configuration-bias approach, but uses an auxiliary biasing potential to improve the efficiency of packing multiple solvent particles in the cavity formed by removing one large solute. The method has been applied to study the coexistence of ordered and isotropic phases in three binary mixtures of hard disks with a small mole fraction (x_L < 0.02) of the larger “solute” component. A chemical potential of 12.81 ± 0.01 k_BT was found to correspond to the freezing transition of the pure hard disk “solvent.” Simulations permitted the study of partitioning of large disks between ordered and isotropic phases, which showed a distinct non-monotonic dependence on size; the isotropic phase was enriched approximately 10-fold, 20-fold, and 5-fold over the coexisting ordered phases at diameter ratios d = 1.4, 2.5, and 3, respectively. Mixing of large and small disks within both phases near coexistence was strongly non-ideal in spite of the dilution. Structures of systems near coexistence were analyzed to determine correlations between large disks’ positions within each phase, the orientational correlation length of small disks within the fluid phases, and the nature of translational order in the ordered phase. The analyses indicate that the ordered phase coexists with an isotropic phase resembling a nanoemulsion of ordered domains of small disks, with large disks enriched at the disordered domain interfaces

  17. Phase-selective entrainment of nonlinear oscillator ensembles

    Science.gov (United States)

    Zlotnik, Anatoly; Nagao, Raphael; Kiss, István Z.; Li, Jr-Shin

    2016-03-01

    The ability to organize and finely manipulate the hierarchy and timing of dynamic processes is important for understanding and influencing brain functions, sleep and metabolic cycles, and many other natural phenomena. However, establishing spatiotemporal structures in biological oscillator ensembles is a challenging task that requires controlling large collections of complex nonlinear dynamical units. In this report, we present a method to design entrainment signals that create stable phase patterns in ensembles of heterogeneous nonlinear oscillators without using state feedback information. We demonstrate the approach using experiments with electrochemical reactions on multielectrode arrays, in which we selectively assign ensemble subgroups into spatiotemporal patterns with multiple phase clusters. The experimentally confirmed mechanism elucidates the connection between the phases and natural frequencies of a collection of dynamical elements, the spatial and temporal information that is encoded within this ensemble, and how external signals can be used to retrieve this information.

  18. The influence of canon law on ius commune in its formative period

    Directory of Open Access Journals (Sweden)

    Mehmeti Sami

    2015-12-01

    Full Text Available In the Medieval period, Roman law and canon law formed ius commune or the common European law. The similarity between Roman and canon law was that they used the same methods, and the difference was that they relied on different authoritative texts. In their works canonists and civilists combined the ancient Greek achievements in philosophy with the Roman achievements in the field of law. Canonists were the first to carry out research on the distinctions between various legal sources and to systematize them according to a hierarchical order. The Medieval civilists sought solutions in canon law for a large number of problems that Justinian’s Codification did not address, or addressed only superficially. Solutions offered by canon law were accepted not only in the civil law of Continental Europe, but also in English law.

  19. Ensemble forecasting of major solar flares: First results

    Science.gov (United States)

    Guerra, J. A.; Pulkkinen, A.; Uritsky, V. M.

    2015-10-01

    We present the results from the first ensemble prediction model for major solar flares (M and X classes). The primary aim of this investigation is to explore the construction of an ensemble for an initial prototyping of this new concept. Using the probabilistic forecasts from three models hosted at the Community Coordinated Modeling Center (NASA-GSFC) and the NOAA forecasts, we developed an ensemble forecast by linearly combining the flaring probabilities from all four methods. Performance-based combination weights were calculated using a Monte Carlo-type algorithm that applies a decision threshold Pth to the combined probabilities and maximizes the Heidke Skill Score (HSS). Using the data for 13 recent solar active regions between 2012 and 2014, we found that linear combination methods can improve the overall probabilistic prediction and improve the categorical prediction for certain values of the decision threshold. Combination weights vary with the applied threshold, and none of the tested individual forecasting models seems to provide more accurate predictions than the others for all values of Pth. According to the maximum values of HSS, performance-based weights calculated by averaging over the sample performed similarly to an equally weighted model. The values of Pth for which the ensemble forecast performs best are 25% for M-class flares and 15% for X-class flares. When the human-adjusted probabilities from NOAA are excluded from the ensemble, the ensemble performance in terms of the Heidke score is reduced.
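
    A compact sketch of the performance-based weighting described above: random weights on the simplex are drawn, the member probabilities are linearly combined, thresholded at Pth, and the combination with the highest Heidke Skill Score is kept. The Dirichlet draws and the number of iterations are illustrative assumptions, not the paper's exact algorithm:

      import numpy as np

      def heidke_skill_score(pred, obs):
          """HSS from the 2x2 contingency table of binary forecasts and events."""
          a = np.sum(pred & obs)           # hits
          b = np.sum(pred & ~obs)          # false alarms
          c = np.sum(~pred & obs)          # misses
          d = np.sum(~pred & ~obs)         # correct nulls
          num = 2.0 * (a * d - b * c)
          den = (a + c) * (c + d) + (a + b) * (b + d)
          return num / den if den else 0.0

      def best_linear_combination(probs, events, p_th, n_draws=5000, seed=0):
          """Monte-Carlo search for member weights maximising HSS at threshold p_th.

          probs  : (n_members, n_days) probabilistic forecasts from the individual models
          events : (n_days,) boolean flare occurrence
          """
          rng = np.random.default_rng(seed)
          best = (-np.inf, None)
          for _ in range(n_draws):
              w = rng.dirichlet(np.ones(probs.shape[0]))   # random weights on the simplex
              combined = w @ probs
              hss = heidke_skill_score(combined >= p_th, events)
              if hss > best[0]:
                  best = (hss, w)
          return best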

  20. Evolutionary Ensemble for In Silico Prediction of Ames Test Mutagenicity

    Science.gov (United States)

    Chen, Huanhuan; Yao, Xin

    Driven by new regulations and animal welfare concerns, the need to develop in silico models as alternative approaches to the safety assessment of chemicals without animal testing has increased recently. This paper describes a novel machine learning ensemble approach to building an in silico model for the prediction of Ames test mutagenicity, one of a battery of the most commonly used experimental in vitro and in vivo genotoxicity tests for the safety evaluation of chemicals. Evolutionary random neural ensemble with negative correlation learning (ERNE) [1] was developed based on neural networks and evolutionary algorithms. ERNE combines the method of bootstrap sampling on training data with the method of random subspace feature selection to ensure diversity in creating individuals within an initial ensemble. Furthermore, while evolving individuals within the ensemble, it makes use of negative correlation learning, enabling individual NNs to be trained to be as accurate as possible while remaining as diverse as possible. Therefore, the resulting individuals in the final ensemble are capable of cooperating collectively to achieve better generalization of prediction. The empirical experiments suggest that ERNE is an effective ensemble approach for predicting the Ames test mutagenicity of chemicals.

  1. Microcanonical ensemble extensive thermodynamics of Tsallis statistics

    International Nuclear Information System (INIS)

    The microscopic foundation of the generalized equilibrium statistical mechanics based on the Tsallis entropy is given by using the Gibbs idea of statistical ensembles of classical and quantum mechanics. The equilibrium distribution functions are derived by the thermodynamic method based upon the use of the fundamental equation of thermodynamics and the statistical definition of the functions of the state of the system. It is shown that if the entropic index ξ = 1/(q − 1) in the microcanonical ensemble is an extensive variable of the state of the system, then in the thermodynamic limit z̄ = 1/[(q − 1)N] = const the principle of additivity and the zeroth law of thermodynamics are satisfied. In particular, the Tsallis entropy of the system is extensive and the temperature is intensive. Thus, the Tsallis statistics completely satisfies all the postulates of equilibrium thermodynamics. Moreover, evaluation of the thermodynamic identities in the microcanonical ensemble is provided by the Euler theorem. The principle of additivity and the Euler theorem are explicitly proved by using the illustration of the classical microcanonical ideal gas in the thermodynamic limit

  2. How Canon grew big / Andres Eilart

    Index Scriptorium Estoniae

    Eilart, Andres

    2004-01-01

    On the development of Canon Group, the Japanese maker of cameras and office machines, its activities in three regions (the USA, Europe and Asia), and the reasons for the company's long-term success: its business philosophy and well-timed product development. See also in the same issue: the company's original name was Kwanon; competitors are consolidating

  3. On the canonical treatment of Lagrangian constraints

    International Nuclear Information System (INIS)

    The canonical treatment of dynamic systems with manifest Lagrangian constraints proposed by Berezin is applied to concrete examples: a special Lagrangian linear in velocities, relativistic particles in proper time gauge, a relativistic string in orthonormal gauge, and the Maxwell field in the Lorentz gauge

  4. Canonical Quantization of Higher-Order Lagrangians

    Directory of Open Access Journals (Sweden)

    Khaled I. Nawafleh

    2011-01-01

    Full Text Available After reducing a system with a higher-order regular Lagrangian to one with a first-order singular Lagrangian using a constrained auxiliary description, the Hamilton-Jacobi function is constructed. In addition, the quantization of the system is investigated using the canonical path integral approximation.

  5. On the generalized Lorenz canonical form

    Czech Academy of Sciences Publication Activity Database

    Čelikovský, Sergej; Guanrong, Ch.

    2005-01-01

    Roč. 26, č. 5 (2005), s. 1271-1276. ISSN 0960-0779 R&D Projects: GA ČR GA102/05/0011; GA MŠk 1P05LA262 Institutional research plan: CEZ:AV0Z10750506 Keywords : chaos * synchronization * canonical form Subject RIV: BC - Control Systems Theory Impact factor: 1.938, year: 2005

  6. Infants' Recognition of Objects Using Canonical Color

    Science.gov (United States)

    Kimura, Atsushi; Wada, Yuji; Yang, Jiale; Otsuka, Yumiko; Dan, Ippeita; Masuda, Tomohiro; Kanazawa, So; Yamaguchi, Masami K.

    2010-01-01

    We explored infants' ability to recognize the canonical colors of daily objects, including two color-specific objects (human face and fruit) and a non-color-specific object (flower), by using a preferential looking technique. A total of 58 infants between 5 and 8 months of age were tested with a stimulus composed of two color pictures of an object…

  7. Conservation laws of semidiscrete canonical Hamiltonian equations

    International Nuclear Information System (INIS)

    There are many evolution partial differential equations which can be cast into Hamiltonian form. Conservation laws of these equations are related to one-parameter Hamiltonian symmetries admitted by the PDEs. The same result holds for semidiscrete Hamiltonian equations. In this paper we consider semidiscrete canonical Hamiltonian equations. Using symmetries, we find conservation laws for the semidiscretized nonlinear wave equation and Schroedinger equation. (author)

  8. Kelvin's Canonical Circulation Theorem in Hall Magnetohydrodynamics

    CERN Document Server

    Shivamoggi, B K

    2016-01-01

    The purpose of this paper is to show that, thanks to the restoration of the legitimate connection between the current density and the plasma flow velocity in Hall magnetohydrodynamics (MHD), Kelvin's Circulation Theorem becomes valid in Hall MHD. The ion-flow velocity in the usual circulation integral is now replaced by the canonical ion-flow velocity.

  9. Estimating preselected and postselected ensembles

    Energy Technology Data Exchange (ETDEWEB)

    Massar, Serge [Laboratoire d' Information Quantique, C.P. 225, Universite libre de Bruxelles (U.L.B.), Av. F. D. Rooselvelt 50, B-1050 Bruxelles (Belgium); Popescu, Sandu [H. H. Wills Physics Laboratory, University of Bristol, Tyndall Avenue, Bristol BS8 1TL (United Kingdom); Hewlett-Packard Laboratories, Stoke Gifford, Bristol BS12 6QZ (United Kingdom)

    2011-11-15

    In analogy with the usual quantum state-estimation problem, we introduce the problem of state estimation for a pre- and postselected ensemble. The problem has fundamental physical significance since, as argued by Y. Aharonov and collaborators, pre- and postselected ensembles are the most basic quantum ensembles. Two new features are shown to appear: (1) information is flowing to the measuring device both from the past and from the future; (2) because of the postselection, certain measurement outcomes can be forced never to occur. Due to these features, state estimation in such ensembles is dramatically different from the case of ordinary, preselected-only ensembles. We develop a general theoretical framework for studying this problem and illustrate it through several examples. We also prove general theorems establishing that information flowing from the future is closely related to, and in some cases equivalent to, the complex conjugate information flowing from the past. Finally, we illustrate our approach on examples involving covariant measurements on spin-1/2 particles. We emphasize that all state-estimation problems can be extended to the pre- and postselected situation. The present work thus lays the foundations of a much more general theory of quantum state estimation.

  10. Joint state and parameter estimation with an iterative ensemble Kalman smoother

    OpenAIRE

    M. Bocquet; Sakov, P.

    2013-01-01

    Both ensemble filtering and variational data assimilation methods have proven useful in the joint estimation of state variables and parameters of geophysical models. Yet, their respective benefits and drawbacks in this task are distinct. An ensemble variational method, known as the iterative ensemble Kalman smoother (IEnKS), has recently been introduced. It is based on an adjoint model-free variational, but flow-dependent, scheme. As such, the IEnKS is a candidate too...

  11. A Spectral Canonical Electrostatic Algorithm

    CERN Document Server

    Webb, Stephen D

    2015-01-01

    Studying single-particle dynamics over many periods of oscillations is a well-understood problem solved using symplectic integration. Such integration schemes derive their update sequence from an approximate Hamiltonian, guaranteeing that the geometric structure of the underlying problem is preserved. Simulating a self-consistent system over many oscillations can introduce numerical artifacts such as grid heating. This unphysical heating stems from using non-symplectic methods on Hamiltonian systems. With this guidance, we derive an electrostatic algorithm using a discrete form of Hamilton's Principle. The resulting algorithm, a gridless spectral electrostatic macroparticle model, does not exhibit the unphysical heating typical of most particle-in-cell methods. We present results of this algorithm, using a two-body problem as an example of its energy- and momentum-conserving properties.

  12. Canonical brackets of a toy model for the Hodge theory without its canonical conjugate momenta

    OpenAIRE

    D Shukla; Bhanja, T.; Malik, R. P.

    2014-01-01

    We consider the toy model of a rigid rotor as an example of the Hodge theory within the framework of the Becchi-Rouet-Stora-Tyutin (BRST) formalism and show that the internal symmetries of this theory lead to the derivation of canonical brackets amongst the creation and annihilation operators of the dynamical variables where the definition of the canonical conjugate momenta is not required. We invoke only the spin-statistics theorem, normal ordering and basic concepts of continuous symmetries...

  13. Heteroscedastic Extended Logistic Regression for Post-Processing of Ensemble Guidance

    Science.gov (United States)

    Messner, Jakob W.; Mayr, Georg J.; Wilks, Daniel S.; Zeileis, Achim

    2014-05-01

    To achieve well-calibrated probabilistic weather forecasts, numerical ensemble forecasts are often statistically post-processed. One recent ensemble-calibration method is extended logistic regression, which extends the popular logistic regression to yield full probability distribution forecasts. Although the purpose of this method is to post-process ensemble forecasts, usually only the ensemble mean is used as a predictor variable, whereas the ensemble spread is neglected because it does not improve the forecasts. In this study we show that when simply used as an ordinary predictor variable in extended logistic regression, the ensemble spread only affects the location but not the variance of the predictive distribution. Uncertainty information contained in the ensemble spread is therefore not utilized appropriately. To address this drawback we propose a new approach in which the ensemble spread is directly used to predict the dispersion of the predictive distribution. With wind speed data and ensemble forecasts from the European Centre for Medium-Range Weather Forecasts (ECMWF) we show that, using this approach, the ensemble spread can be used effectively to improve forecasts from extended logistic regression.
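
    The core idea above, location from the ensemble mean and dispersion from the ensemble spread, can be illustrated with a small maximum-likelihood fit. The sketch below uses synthetic data and purely illustrative names (ens_mean, ens_spread, thresholds); it is a simplified stand-in for, not a reproduction of, the authors' model.

```python
# Sketch of a heteroscedastic extended logistic regression fit: the location of
# the predictive distribution depends on the ensemble mean, the dispersion on the
# ensemble spread. Data and names are synthetic placeholders, not the ECMWF data.
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(0)
n = 500
ens_mean = rng.gamma(2.0, 3.0, n)                     # hypothetical ensemble-mean wind speed
ens_spread = 0.5 + 0.3 * rng.random(n)                # hypothetical ensemble spread
obs = ens_mean + ens_spread * rng.normal(size=n)      # synthetic verifying observations
thresholds = np.array([2.0, 5.0, 10.0])               # wind-speed thresholds

def neg_log_lik(params):
    a0, a1, b0, g0, g1 = params
    mu = b0 * ens_mean                                # location from the ensemble mean
    sigma = np.exp(g0 + g1 * ens_spread)              # dispersion from the ensemble spread
    nll = 0.0
    for q in thresholds:
        prob = expit((a0 + a1 * np.sqrt(q) - mu) / sigma)   # P(obs <= q | forecast)
        prob = np.clip(prob, 1e-12, 1.0 - 1e-12)
        y = (obs <= q).astype(float)
        nll -= np.sum(y * np.log(prob) + (1.0 - y) * np.log(1.0 - prob))
    return nll

fit = minimize(neg_log_lik, x0=[0.0, 1.0, 1.0, 0.0, 0.0], method="Nelder-Mead")
print("fitted coefficients:", fit.x)
```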

  14. Ensemble teleportation under suboptimal conditions

    International Nuclear Information System (INIS)

    The possibility of teleportation is certainly the most interesting consequence of quantum non-separability. In the present paper, the feasibility of teleportation is examined on the basis of the rigorous ensemble interpretation of quantum mechanics when non-ideal constraints are imposed on the teleportation scheme. Importance is attached both to the case of noisy Einstein-Podolsky-Rosen (EPR) ensembles and to the conditions under which automatic teleportation is still possible. The success of teleportation is discussed using a new fidelity measure which avoids the weaknesses of previous proposals.

  15. The Ensembl Variant Effect Predictor.

    Science.gov (United States)

    McLaren, William; Gil, Laurent; Hunt, Sarah E; Riat, Harpreet Singh; Ritchie, Graham R S; Thormann, Anja; Flicek, Paul; Cunningham, Fiona

    2016-01-01

    The Ensembl Variant Effect Predictor is a powerful toolset for the analysis, annotation, and prioritization of genomic variants in coding and non-coding regions. It provides access to an extensive collection of genomic annotation, with a variety of interfaces to suit different requirements, and simple options for configuring and extending analysis. It is open source, free to use, and supports full reproducibility of results. The Ensembl Variant Effect Predictor can simplify and accelerate variant interpretation in a wide range of study designs. PMID:27268795

  16. Improving ensemble forecasting with q-norm bred vectors

    Science.gov (United States)

    Pazo, Diego; Lopez, Juan Manuel; Rodriguez, Miguel Angel

    2016-04-01

    Error breeding is a simple and popular method for generating initial perturbations for ensemble forecasting, used operationally in many weather and climate centres worldwide. There is a widespread belief among practitioners that the type of norm used in the periodic normalizations of bred vectors (BVs) does not affect the performance of ensemble forecasting systems. However, we have recently reported that BVs constructed with different norms have very different dynamical and spatial properties. In particular, BVs constructed with the 0-norm (geometric norm) have attractive properties, such as enhanced ensemble diversity, which in principle make them more suitable for constructing ensembles than other norm types such as the Euclidean one. These advantages are clearly demonstrated here in a simple ensemble forecasting experiment for the Lorenz-96 model with ensembles of BVs. Our simple numerical assimilation experiment shows how the increased statistical diversity of geometric BVs leads to improved forecast scores compared with BVs constructed with the standard Euclidean norm.

  17. Effects of ensembles on methane hydrate nucleation kinetics.

    Science.gov (United States)

    Zhang, Zhengcai; Liu, Chan-Juan; Walsh, Matthew R; Guo, Guang-Jun

    2016-06-21

    By performing molecular dynamics simulations to form a hydrate with a methane nano-bubble in liquid water at 250 K and 50 MPa, we report how different ensembles, namely the NPT, NVT, and NVE ensembles, affect the nucleation kinetics of the methane hydrate. The nucleation trajectories are monitored using the face-saturated incomplete cage analysis (FSICA) and the mutually coordinated guest (MCG) order parameter (OP). The nucleation rate and the critical nucleus are obtained using the mean first-passage time (MFPT) method based on the FS cages and the MCG-1 OPs, respectively. The fitting results of MFPT show that hydrate nucleation and growth are coupled together, consistent with the cage adsorption hypothesis, which emphasizes that cage adsorption of methane is a mechanism for both hydrate nucleation and growth. For the three ensembles, the hydrate nucleation rate is ordered as NPT > NVT > NVE, while the sequence of hydrate crystallinity is exactly reversed. However, the largest critical nucleus appears in the NVT ensemble, rather than in the NVE ensemble. These results are helpful for choosing a suitable ensemble when studying hydrate formation via computer simulations, and they emphasize the importance of the degree of order of the critical nucleus. PMID:27222203

  18. A new approach to derive Pfaffian structures for random matrix ensembles

    International Nuclear Information System (INIS)

    Correlation functions for matrix ensembles with orthogonal and unitary-symplectic rotation symmetry are more complicated to calculate than in the unitary case. The supersymmetry method and orthogonal polynomials are two techniques for tackling this task. Recently, we presented a new method to average ratios of characteristic polynomials over matrix ensembles invariant under the unitary group. Here, we extend this approach to ensembles with orthogonal and unitary-symplectic rotation symmetry. We show that Pfaffian structures can be derived for a wide class of orthogonal and unitary-symplectic rotation-invariant ensembles in a unifying way. This also includes ensembles for which this structure was not known previously, such as the real Ginibre ensemble and the Gaussian real chiral ensemble with two independent matrices.

  19. Communication: Generalized canonical purification for density matrix minimization

    Science.gov (United States)

    Truflandier, Lionel A.; Dianzinga, Rivo M.; Bowler, David R.

    2016-03-01

    A Lagrangian formulation for the constrained search for the N-representable one-particle density matrix based on the McWeeny idempotency error minimization is proposed, which converges systematically to the ground state. A closed form of the canonical purification is derived for which no a posteriori adjustment on the trace of the density matrix is needed. The relationship with comparable methods is discussed, showing their possible generalization through the hole-particle duality. The appealing simplicity of this self-consistent recursion relation along with its low computational complexity could prove useful as an alternative to diagonalization in solving dense and sparse matrix eigenvalue problems.
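
    For readers unfamiliar with purification, the sketch below runs the textbook McWeeny recursion P -> 3P^2 - 2P^3 on a toy symmetric matrix with a grand-canonical (fixed chemical potential) starting guess; it is not the generalized canonical purification derived in the paper, and all matrices and parameters are illustrative.

```python
# Textbook McWeeny purification on a toy symmetric matrix, using a grand-canonical
# (fixed chemical potential) starting guess; not the generalized canonical
# purification of the paper. H, mu and the matrix size are illustrative.
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(8, 8))
H = 0.5 * (A + A.T)                      # toy symmetric "Hamiltonian"
mu = 0.0                                 # chemical potential separating occupied/virtual states

# Linear guess mapping the spectrum into [0, 1]; eigenvalues of H below mu land above 1/2.
# (Spectral bounds are computed exactly here only because this is a small toy example.)
evals = np.linalg.eigvalsh(H)
lam = max(evals[-1] - mu, mu - evals[0])
P = (mu * np.eye(8) - H) / (2.0 * lam) + 0.5 * np.eye(8)

for _ in range(100):
    P2 = P @ P
    P_next = 3.0 * P2 - 2.0 * P @ P2     # McWeeny step: drives eigenvalues to 0 or 1
    if np.linalg.norm(P_next - P) < 1e-12:
        P = P_next
        break
    P = P_next

print("idempotency error:", np.linalg.norm(P @ P - P))
print("trace of P:", np.trace(P), "| eigenvalues of H below mu:", int(np.sum(evals < mu)))
```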

  20. Multimodel ensembles of wheat growth

    DEFF Research Database (Denmark)

    Martre, Pierre; Wallach, Daniel; Asseng, Senthold;

    2015-01-01

    such studies are difficult to organize and have only recently begun. We report on the largest ensemble study to date, of 27 wheat models tested in four contrasting locations for their accuracy in simulating multiple crop growth and yield variables. The relative error averaged over models was 24–38% for...

  1. Spectral Diagonal Ensemble Kalman Filters

    Czech Academy of Sciences Publication Activity Database

    Kasanický, Ivan; Mandel, Jan; Vejmelka, Martin

    2015-01-01

    Roč. 22, č. 4 (2015), s. 485-497. ISSN 1023-5809 R&D Projects: GA ČR GA13-34856S Grant ostatní: NSF(US) DMS-1216481 Institutional support: RVO:67985807 Keywords: data assimilation * ensemble Kalman filter * spectral representation Subject RIV: DG - Athmosphere Sciences, Meteorology Impact factor: 0.987, year: 2014

  3. Dunkl Operators and Canonical Invariants of Reflection Groups

    OpenAIRE

    Arkady Berenstein; Yurii Burman

    2008-01-01

    Using Dunkl operators, we introduce a continuous family of canonical invariants of finite reflection groups. We verify that the elementary canonical invariants of the symmetric group are deformations of the elementary symmetric polynomials. We also compute the canonical invariants for all dihedral groups as certain hypergeometric functions.

  4. DNA pattern recognition using canonical correlation algorithm

    Indian Academy of Sciences (India)

    B K Sarkar; Chiranjib Chakraborty

    2015-10-01

    We performed canonical correlation analysis as an unsupervised statistical tool to describe related views of the same semantic object for identifying patterns. A pattern recognition technique based on canonical correlation analysis (CCA) was proposed for finding a required genetic code in a DNA sequence. Two related but different objects were considered: one was a particular pattern, and the other was a test DNA sequence. CCA found correlations between the two observations, the semantic pattern and the test sequence. It is concluded that the relationship attains its maximum value at the position where the pattern exists. As a case study, the potential of CCA was demonstrated on sequences from HIV-1 preferred integration sites. The subsequences flanking the integration site on the left and right were considered as the two views, and statistically significant relationships were established between these two views to elucidate the viral preference as an important factor for the correlation.
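
    A minimal illustration of the two-view idea, with one-hot encoded flanking windows as the views, can be sketched with scikit-learn's CCA; the encoding, window length and random sequences below are assumptions, not the authors' data or code.

```python
# Two-view canonical correlation on one-hot encoded flanking windows; sklearn's CCA
# stands in for the authors' implementation, and the data are random placeholders.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(2)
bases = list("ACGT")
one_hot = {b: np.eye(4)[i] for i, b in enumerate(bases)}

def encode(seq):
    """Flatten a short DNA string into a one-hot feature vector."""
    return np.concatenate([one_hot[b] for b in seq])

# Hypothetical left/right flanking windows around candidate sites
left = ["".join(rng.choice(bases, 6)) for _ in range(200)]
right = ["".join(rng.choice(bases, 6)) for _ in range(200)]
X = np.array([encode(s) for s in left])
Y = np.array([encode(s) for s in right])

cca = CCA(n_components=2)
Xc, Yc = cca.fit_transform(X, Y)
print("first canonical correlation:", np.corrcoef(Xc[:, 0], Yc[:, 0])[0, 1])
```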

  5. Genre, canon et monstruosités

    OpenAIRE

    2010-01-01

    Issue seven of our journal brings together contributions by Isabelle López García (posthumous), an interesting transition between the literary canon, the social canon, and the figuration of bodies rendered monstrous by the scourge of AIDS; by Richard Cleminson, a specialist in the history of sexuality (in the appendix); as well as those of several researchers who took part in the study days held between Toulouse and Tours: Cecilia González analyses the dialogue between the "ancients and the moderns"...

  6. Statistical Mechanics of Linear and Nonlinear Time-Domain Ensemble Learning

    OpenAIRE

    Miyoshi, Seiji; Okada, Masato

    2006-01-01

    Conventional ensemble learning combines students in the space domain. In this paper, however, we combine students in the time domain and call this time-domain ensemble learning. We analyze, compare, and discuss the generalization performance of time-domain ensemble learning for both a linear model and a nonlinear model. Analyzing within the framework of online learning using a statistical mechanical method, we show qualitatively different behaviors of the two models. In a linear mod...

  7. Global Ensemble Forecast System (GEFS) [1 Deg.

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Global Ensemble Forecast System (GEFS) is a weather forecast model made up of 21 separate forecasts, or ensemble members. The National Centers for Environmental...

  8. Rewriting Canonical Love Stories from the Peripheries

    OpenAIRE

    Yang, Karen Ya-Chu

    2013-01-01

    In her article "Rewriting Canonical Love Stories from the Peripheries" Karen Ya-Chu Yang compares postcolonial and postmodern intertextuality in Taiwanese and Caribbean texts. Hsien-Yung Pai's "Wandering in the Garden, Waking from a Dream" (1966) and Tien-Hsin Chu's "Breakfast at Tiffany's" (1997) are two short stories which depict identity crises of first-generation and second-generation 外省人 (waishengren, mainland immigrants). In these two texts disillusionment towards the center's roma...

  9. A canonical representation for aggregated Markov processes

    OpenAIRE

    Larget, Bret

    1998-01-01

    A deterministic function of a Markov process is called an aggregated Markov process. We give necessary and sufficient conditions for the equivalence of continuous-time aggregated Markov processes. For both discrete- and continuous-time, we show that any aggregated Markov process which satisfies mild regularity conditions can be directly converted to a canonical representation which is unique for each class of equivalent models, and furthermore, is a minimal parameterization ...

  10. Bayesian Model Averaging for Ensemble-Based Estimates of Solvation Free Energies

    CERN Document Server

    Gosink, Luke J; Reehl, Sarah M; Whitney, Paul D; Mobley, David L; Baker, Nathan A

    2016-01-01

    This paper applies the Bayesian Model Averaging (BMA) statistical ensemble technique to estimate small-molecule solvation free energies. There is a wide range of methods for predicting solvation free energies, from empirical statistical models to ab initio quantum mechanical approaches. Each of these methods is based on a set of conceptual assumptions that can affect its predictive accuracy and transferability. Using an iterative statistical process, we have selected and combined solvation energy estimates using an ensemble of 17 diverse methods from the SAMPL4 blind prediction study to form a single, aggregated solvation energy estimate. The ensemble design process evaluates the statistical information in each individual method as well as the performance of the aggregate estimate obtained from the ensemble as a whole. Methods that possess minimal or redundant information are pruned from the ensemble and the evaluation process repeats until aggregate predictive performance can no longer be improv...
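
    The aggregation step can be sketched as a weighted combination of per-method estimates, with weights derived from performance on a training split; the exponential weighting and synthetic data below are simplifications of the paper's iterative BMA selection, not a reproduction of it.

```python
# Weighted aggregation of several methods' estimates; weights are set from errors on a
# training split. Synthetic data stand in for the SAMPL4 submissions.
import numpy as np

rng = np.random.default_rng(3)
n_mol, n_methods = 20, 5
truth = rng.normal(-5.0, 2.0, n_mol)                            # "experimental" values
noise_levels = np.array([0.5, 1.0, 1.5, 2.0, 3.0])              # per-method error scales
preds = truth[:, None] + rng.normal(0.0, noise_levels, (n_mol, n_methods))

train, test = slice(0, 10), slice(10, None)
rmse = np.sqrt(((preds[train] - truth[train, None]) ** 2).mean(axis=0))
weights = np.exp(-rmse ** 2)                                    # crude performance-based weights
weights /= weights.sum()

aggregate = preds @ weights                                     # ensemble (BMA-style) estimate
print("aggregate RMSE:", np.sqrt(((aggregate[test] - truth[test]) ** 2).mean()))
print("best single method RMSE:",
      np.sqrt(((preds[test] - truth[test, None]) ** 2).mean(axis=0)).min())
```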

  11. Canonical Energy is Quantum Fisher Information

    CERN Document Server

    Lashkari, Nima

    2015-01-01

    In quantum information theory, Fisher Information is a natural metric on the space of perturbations to a density matrix, defined by calculating the relative entropy with the unperturbed state at quadratic order in perturbations. In gravitational physics, Canonical Energy defines a natural metric on the space of perturbations to spacetimes with a Killing horizon. In this paper, we show that the Fisher information metric for perturbations to the vacuum density matrix of a ball-shaped region B in a holographic CFT is dual to the canonical energy metric for perturbations to a corresponding Rindler wedge R_B of Anti-de-Sitter space. Positivity of relative entropy at second order implies that the Fisher information metric is positive definite. Thus, for physical perturbations to anti-de-Sitter spacetime, the canonical energy associated to any Rindler wedge must be positive. This second-order constraint on the metric extends the first order result from relative entropy positivity that physical perturbations must sat...

  12. Juridic status: canonical provisions, possible applications.

    Science.gov (United States)

    Morrisey, F G

    1986-09-01

    The 1983 Code of Canon Law presents the basic legislation regarding juridic persons, which are entities brought into existence to assist in carrying out the Church's mission. Juridic persons by nature are perpetual and are not directly identified with their members. The private juridic person, a concept introduced in the 1983 code, operates collegially on behalf of its members or noncollegially on behalf of the things that constitute it. A ministry that receives private juridic status does not share as integrally in the Church's name. The latter therefore has more duties to fulfill in regard to observance of Church law, particularly that concerning the administration of temporal goods. The goods of a private juridic person, in contrast, are not ecclesiastical and thus are not subject to canon law. Instead, the private juridic persons' statutes provide norms for their administration. Canon law in establishing juridic persons enables the ministries they represent to last beyond the lives of those who initiated the ministries. Juridic persons offer both security and possibilities for concerted apostolic activity in the Church. PMID:10277620

  13. Exploring the calibration of a wind forecast ensemble for energy applications

    Science.gov (United States)

    Heppelmann, Tobias; Ben Bouallegue, Zied; Theis, Susanne

    2015-04-01

    In the German research project EWeLiNE, Deutscher Wetterdienst (DWD) and the Fraunhofer Institute for Wind Energy and Energy System Technology (IWES) are collaborating with three German Transmission System Operators (TSOs) in order to provide the TSOs with improved probabilistic power forecasts. Probabilistic power forecasts are derived from probabilistic weather forecasts, themselves derived from ensemble prediction systems (EPS). Since the raw ensemble wind forecasts considered here suffer from underdispersion and bias, calibration methods are developed for the correction of the model bias and the ensemble spread bias. The overall aim is to improve the ensemble forecasts such that the uncertainty of the possible weather development is captured by the ensemble spread from the first forecast hours. Additionally, the ensemble members after calibration should remain physically consistent scenarios. We focus on probabilistic hourly wind forecasts with a horizon of 21 h delivered by the convection-permitting high-resolution ensemble system COSMO-DE-EPS, which became operational at DWD in 2012. The ensemble consists of 20 members driven by four different global models. The model area covers the whole of Germany and parts of Central Europe with a horizontal resolution of 2.8 km and a vertical resolution of 50 model levels. For verification we use wind mast measurements at around 100 m height, which corresponds to the hub height of the wind turbines in wind farms within the model area. Calibration of the ensemble forecasts can be performed by different statistical methods applied to the raw ensemble output. Here, we explore local bivariate Ensemble Model Output Statistics at individual sites and quantile regression with different predictors. Applying different methods, we already show an improvement of ensemble wind forecasts from COSMO-DE-EPS for energy applications. In addition, an ensemble copula coupling approach transfers the time-dependencies of the raw

  14. Quantifying Monte Carlo uncertainty in ensemble Kalman filter

    Energy Technology Data Exchange (ETDEWEB)

    Thulin, Kristian; Naevdal, Geir; Skaug, Hans Julius; Aanonsen, Sigurd Ivar

    2009-01-15

    This report presents results obtained during Kristian Thulin's PhD study, and is a slightly modified form of a paper submitted to SPE Journal. Kristian Thulin did most of his portion of the work while a PhD student at CIPR, University of Bergen. The ensemble Kalman filter (EnKF) is currently considered one of the most promising methods for conditioning reservoir simulation models to production data. The EnKF is a sequential Monte Carlo method based on a low-rank approximation of the system covariance matrix. The posterior probability distribution of model variables may be estimated from the updated ensemble, but because of the low-rank covariance approximation, the updated ensemble members become correlated samples from the posterior distribution. We suggest using multiple EnKF runs, each with a smaller ensemble size, to obtain truly independent samples from the posterior distribution. This allows a point-wise confidence interval for the posterior cumulative distribution function (CDF) to be constructed. We present a methodology for finding an optimal combination of ensemble batch size (n) and number of EnKF runs (m) while keeping the total number of ensemble members (m x n) constant. The optimal combination of n and m is found by minimizing the integrated mean square error (MSE) of the CDFs, and we choose to define an EnKF run with 10,000 ensemble members as having zero Monte Carlo error. The methodology is tested on a simplistic, synthetic 2D model, but should be applicable also to larger, more realistic models. (author). 12 refs., figs., tabs.
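
    The multiple-run idea can be sketched directly: from m independent updated ensembles of n members each, build empirical CDFs and a pointwise confidence band. The samples below are synthetic stand-ins for real EnKF output.

```python
# Pointwise confidence band for the posterior CDF from m independent runs of n members
# each; the "updated ensembles" here are synthetic, not actual EnKF output.
import numpy as np

rng = np.random.default_rng(4)
m, n = 10, 100                                        # runs x members per run
runs = rng.normal(loc=1.0, scale=0.5, size=(m, n))    # hypothetical posterior samples

x_grid = np.linspace(-0.5, 2.5, 61)
cdfs = (runs[:, :, None] <= x_grid).mean(axis=1)      # empirical CDF of each run on the grid

cdf_mean = cdfs.mean(axis=0)
se = cdfs.std(axis=0, ddof=1) / np.sqrt(m)            # spread of the m independent CDFs
lower, upper = cdf_mean - 1.96 * se, cdf_mean + 1.96 * se
print(np.column_stack([x_grid, lower, cdf_mean, upper])[::10].round(3))
```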

  15. A novel hybrid ensemble learning paradigm for nuclear energy consumption forecasting

    International Nuclear Information System (INIS)

    Highlights: (1) A hybrid ensemble learning paradigm integrating EEMD and LSSVR is proposed. (2) The hybrid ensemble method is useful for predicting time series with high volatility. (3) The ensemble method can be used for both one-step and multi-step-ahead forecasting. Abstract: In this paper, a novel hybrid ensemble learning paradigm integrating ensemble empirical mode decomposition (EEMD) and least squares support vector regression (LSSVR) is proposed for nuclear energy consumption forecasting, based on the principle of "decomposition and ensemble". This hybrid ensemble learning paradigm is formulated specifically to address difficulties in modeling nuclear energy consumption, which has inherently high volatility, complexity and irregularity. In the proposed hybrid ensemble learning paradigm, EEMD, as a competitive decomposition method, is first applied to decompose the original data of nuclear energy consumption (i.e. a difficult task) into a number of independent intrinsic mode functions (IMFs) of the original data (i.e. some relatively easy subtasks). Then LSSVR, as a powerful forecasting tool, is implemented to predict all extracted IMFs independently. Finally, these predicted IMFs are aggregated into an ensemble result as the final prediction, using another LSSVR. For illustration and verification purposes, the proposed learning paradigm is used to predict nuclear energy consumption in China. Empirical results demonstrate that the novel hybrid ensemble learning paradigm can outperform some other popular forecasting models in both level prediction and directional forecasting, indicating that it is a promising tool for predicting complex time series with high volatility and irregularity.
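
    A minimal sketch of the "decomposition and ensemble" principle follows, assuming the PyEMD package (installable as EMD-signal) for EEMD and using scikit-learn's SVR as a stand-in for LSSVR; the series is synthetic and the final aggregation is a plain sum rather than the second LSSVR used in the paper.

```python
# "Decomposition and ensemble" sketch: decompose with EEMD, forecast each component,
# aggregate. Assumes the PyEMD package (pip install EMD-signal); SVR replaces LSSVR.
import numpy as np
from PyEMD import EEMD
from sklearn.svm import SVR

rng = np.random.default_rng(5)
t = np.arange(200, dtype=float)
series = 0.05 * t + np.sin(0.3 * t) + 0.3 * rng.normal(size=t.size)

imfs = EEMD(trials=50).eemd(series)        # rows: ensemble IMFs (roughly summing to the signal)

def lagged(y, p=4):
    """Build a lagged-feature matrix X and one-step-ahead targets."""
    X = np.column_stack([y[i:len(y) - p + i] for i in range(p)])
    return X, y[p:]

component_forecasts = []
for comp in imfs:
    X, y = lagged(comp)
    model = SVR(kernel="rbf", C=10.0).fit(X[:-1], y[:-1])   # hold out the final point
    component_forecasts.append(model.predict(X[-1:])[0])

print("one-step-ahead forecast (sum of components):", sum(component_forecasts))
print("last value of the series:", series[-1])
```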

  16. An intercomparison of a large ensemble of statistical downscaling methods for Europe: Overall results from the VALUE perfect predictor cross-validation experiment

    Science.gov (United States)

    Gutiérrez, Jose Manuel; Maraun, Douglas; Widmann, Martin; Huth, Radan; Hertig, Elke; Benestad, Rasmus; Roessler, Ole; Wibig, Joanna; Wilcke, Renate; Kotlarski, Sven

    2016-04-01

    VALUE is an open European network to validate and compare downscaling methods for climate change research (http://www.value-cost.eu). A key deliverable of VALUE is the development of a systematic validation framework to enable the assessment and comparison of both dynamical and statistical downscaling methods. This framework is based on a user-focused validation tree, guiding the selection of relevant validation indices and performance measures for different aspects of the validation (marginal, temporal, spatial, multi-variable). Moreover, several experiments have been designed to isolate specific points in the downscaling procedure where problems may occur (assessment of intrinsic performance, effect of errors inherited from the global models, effect of non-stationarity, etc.). The list of downscaling experiments includes 1) cross-validation with perfect predictors, 2) GCM predictors -aligned with EURO-CORDEX experiment- and 3) pseudo reality predictors (see Maraun et al. 2015, Earth's Future, 3, doi:10.1002/2014EF000259, for more details). The results of these experiments are gathered, validated and publicly distributed through the VALUE validation portal, allowing for a comprehensive community-open downscaling intercomparison study. In this contribution we describe the overall results from Experiment 1), consisting of a European wide 5-fold cross-validation (with consecutive 6-year periods from 1979 to 2008) using predictors from ERA-Interim to downscale precipitation and temperatures (minimum and maximum) over a set of 86 ECA&D stations representative of the main geographical and climatic regions in Europe. As a result of the open call for contribution to this experiment (closed in Dec. 2015), over 40 methods representative of the main approaches (MOS and Perfect Prognosis, PP) and techniques (linear scaling, quantile mapping, analogs, weather typing, linear and generalized regression, weather generators, etc.) were submitted, including information both data

  17. A Network Traffic Classification Method for Multiple Classifiers Selective Ensemble (一种多分类器选择性集成的网络流量分类方法)

    Institute of Scientific and Technical Information of China (English)

    李平红; 陶晓玲; 王勇

    2014-01-01

    To address the limited generalization ability of traffic classifiers produced by multi-classifier ensemble methods, a selective-ensemble network traffic classification framework is proposed to meet the demand for efficient classifiers in traffic classification. Based on this framework, a multiple-classifier selective ensemble network traffic classification method, MCSE (Multiple Classifiers Selective Ensemble network traffic classification method), is proposed to solve the problem of selecting among multiple classifiers. The method first uses semi-supervised learning to improve the accuracy of the base classifiers, and then improves the disagreement-measure strategy for quantifying classifier diversity, reducing the complexity of applying multi-classifier ensembles to network traffic classification and effectively cutting the computational cost of selecting the optimal classifiers. Experiments show that, compared with the Bagging and GASEN algorithms, MCSE exploits the complementarity among base classifiers more fully and achieves more efficient traffic classification performance.

  18. Ciliary IFT80 balances canonical versus non-canonical hedgehog signalling for osteoblast differentiation

    Science.gov (United States)

    Yuan, Xue; Cao, Jay; He, Xiaoning; Serra, Rosa; Qu, Jun; Cao, Xu; Yang, Shuying

    2016-01-01

    Intraflagellar transport (IFT) proteins are required for hedgehog (Hh) signalling transduction, which is essential for bone development; however, how IFT proteins regulate Hh signalling in osteoblasts (OBs) remains unclear. Here we show that deletion of ciliary IFT80 in OB precursor cells (OPCs) in mice results in growth retardation and markedly decreased bone mass with impaired OB differentiation. Loss of IFT80 blocks canonical Hh–Gli signalling via disrupting Smo ciliary localization, but elevates non-canonical Hh–Gαi–RhoA–stress fibre signalling by increasing Smo and Gαi binding. Inhibition of RhoA and ROCK activity partially restores osteogenic differentiation of IFT80-deficient OPCs by inhibiting non-canonical Hh–RhoA–Cofilin/MLC2 signalling. Cytochalasin D, an actin destabilizer, dramatically restores OB differentiation of IFT80-deficient OPCs by disrupting actin stress fibres and promoting cilia formation and Hh–Gli signalling. These findings reveal that IFT80 is required for OB differentiation by balancing between canonical Hh–Gli and non-canonical Hh–Gαi–RhoA pathways and highlight IFT80 as a therapeutic target for craniofacial and skeletal abnormalities. PMID:26996322

  19. Changes in Canon Cosmetic Standards after Rhinoplasty and Its Association with Patients Satisfaction Level

    Directory of Open Access Journals (Sweden)

    S. Mohammad Motamed-al-Shariati

    2012-04-01

    Background: Rhinoplasty is one of the most common plastic surgeries. Although patient satisfaction is still the main prerequisite for success, this way of determining the outcome of surgery is qualitative; a quantitative method is required to compare the results of rhinoplasty. Materials and Methods: In this pilot study, Canon cosmetic standards were measured in 15 patients undergoing rhinoplasty, before and after the surgery, and the changes in these standards were presented quantitatively. In addition, the patients' satisfaction with the surgery was examined through questionnaires. Data were analyzed using SPSS-11 software, the dependent t-test and the Pearson correlation coefficient. Results: 15 patients were examined over a 6-month period; all patients were female and their average age was 23. The results showed that rhinoplasty changes 5 out of 9 Canon standards. The lowest patient satisfaction score was 17 and the highest was 24, with an average satisfaction score of 22.3. A score reduction after rhinoplasty was shown in all Canon standards except standards 7 and 8 (p < 0.05). There was no statistically significant relationship between changes in Canon standards before and after rhinoplasty and patient satisfaction. Conclusion: The results showed that even if Canon standards change after the surgery, patients' satisfaction depends on factors other than the mathematical calculation of changes in facial components. In other words, although symmetry is desirable, it is not equivalent to beauty.

  20. Robust Ensemble Filtering and Its Relation to Covariance Inflation in the Ensemble Kalman Filter

    KAUST Repository

    Luo, Xiaodong

    2011-12-01

    A robust ensemble filtering scheme based on the H∞ filtering theory is proposed. The optimal H∞ filter is derived by minimizing the supremum (or maximum) of a predefined cost function, a criterion different from the minimum variance used in the Kalman filter. By design, the H∞ filter is more robust than the Kalman filter, in the sense that the estimation error in the H∞ filter in general has a finite growth rate with respect to the uncertainties in assimilation, except for a special case that corresponds to the Kalman filter. The original form of the H∞ filter contains global constraints in time, which may be inconvenient for sequential data assimilation problems. Therefore a variant is introduced that solves some time-local constraints instead, and hence it is called the time-local H∞ filter (TLHF). By analogy to the ensemble Kalman filter (EnKF), the concept of ensemble time-local H∞ filter (EnTLHF) is also proposed. The general form of the EnTLHF is outlined, and some of its special cases are discussed. In particular, it is shown that an EnKF with certain covariance inflation is essentially an EnTLHF. In this sense, the EnTLHF provides a general framework for conducting covariance inflation in the EnKF-based methods. Some numerical examples are used to assess the relative robustness of the TLHF–EnTLHF in comparison with the corresponding KF–EnKF method.
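
    As background for the inflation-robustness connection discussed above, the sketch below applies plain multiplicative covariance inflation to ensemble perturbations before a stochastic EnKF update; the inflation factor, dimensions and error statistics are assumptions, and no H∞ machinery is implemented.

```python
# Multiplicative covariance inflation before a stochastic EnKF update; all values are
# illustrative and no H-infinity machinery is implemented here.
import numpy as np

rng = np.random.default_rng(6)
n_state, n_ens = 3, 20
ensemble = rng.normal(size=(n_state, n_ens))          # columns are ensemble members

def inflate(ens, rho=1.1):
    """Scale perturbations about the ensemble mean by the factor rho."""
    mean = ens.mean(axis=1, keepdims=True)
    return mean + rho * (ens - mean)

def enkf_update(ens, y_obs, H, R):
    """Stochastic EnKF analysis step with perturbed observations."""
    mean = ens.mean(axis=1, keepdims=True)
    A = ens - mean
    P = A @ A.T / (ens.shape[1] - 1)                  # sample forecast covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)      # Kalman gain
    y_pert = y_obs[:, None] + rng.multivariate_normal(np.zeros(len(y_obs)), R, ens.shape[1]).T
    return ens + K @ (y_pert - H @ ens)

H = np.array([[1.0, 0.0, 0.0]])                       # observe the first state component
R = np.array([[0.04]])
analysis = enkf_update(inflate(ensemble, rho=1.1), np.array([0.5]), H, R)
print("analysis mean:", analysis.mean(axis=1))
```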

  1. Reservoir History Matching Using Ensemble Kalman Filters with Anamorphosis Transforms

    KAUST Repository

    Aman, Beshir M.

    2012-12-01

    This work aims to enhance the performance of the Ensemble Kalman Filter by transforming the non-Gaussian state variables into Gaussian variables, to be a step closer to optimality. This is done by using univariate and multivariate Box-Cox transformations. Some history matching methods, such as the Kalman filter, the particle filter and the ensemble Kalman filter, are reviewed and applied to a test case in the reservoir application. The key idea is to apply the transformation before the update step and then transform back after applying the Kalman correction. In general, the results of the multivariate method were promising, despite the fact that it overestimated some variables.
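
    A univariate sketch of the anamorphosis idea follows: Box-Cox transform a skewed ensemble, apply a scalar Kalman correction in the transformed space, and back-transform. The ensemble, observation and error variance are synthetic, and the update is a plain scalar Kalman step rather than the reservoir history-matching workflow.

```python
# Univariate Box-Cox "anamorphosis" around a scalar Kalman correction; data are synthetic.
import numpy as np
from scipy.stats import boxcox
from scipy.special import inv_boxcox

rng = np.random.default_rng(7)
ens = rng.lognormal(mean=0.0, sigma=0.7, size=200)     # skewed, non-Gaussian members

z, lam = boxcox(ens)                                   # transform toward Gaussianity
y_obs, obs_var = 0.4, 0.05                             # hypothetical obs in transformed units

var_z = z.var(ddof=1)
gain = var_z / (var_z + obs_var)                       # scalar Kalman gain
z_analysis = z + gain * (y_obs + np.sqrt(obs_var) * rng.normal(size=z.size) - z)

ens_analysis = inv_boxcox(z_analysis, lam)             # back-transform to physical space
print("prior mean:", ens.mean(), "posterior mean:", ens_analysis.mean())
```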

  2. Energy Landscapes of Dynamic Ensembles of Rolling Triplet Repeat Bulge Loops: Implications for DNA Expansion Associated with Disease States

    OpenAIRE

    Völker, Jens; Gindikin, Vera; Klump, Horst H.; Plum, G. Eric; Breslauer, Kenneth J.

    2012-01-01

    DNA repeat domains can form ensembles of canonical and noncanonical states, including stable and metastable DNA secondary structures. Such sequence-induced structural diversity creates complex conformational landscapes for DNA processing pathways, including those triplet expansion events that accompany replication, recombination, and/or repair. Here we demonstrate further levels of conformational complexity within repeat domains. Specifically, we show that bulge loop structures within an exte...

  3. An adaptive additive inflation scheme for Ensemble Kalman Filters

    Science.gov (United States)

    Sommer, Matthias; Janjic, Tijana

    2016-04-01

    Data assimilation for atmospheric dynamics requires an accurate estimate of the uncertainty of the forecast in order to obtain an optimal combination with available observations. This uncertainty has two components: firstly, the uncertainty which originates in the initial condition of the forecast itself, and secondly, the error of the numerical model used. While the former can be approximated quite successfully with an ensemble of forecasts (an additional sampling error will occur), little is known about the latter. For ensemble data assimilation, ad hoc methods to address model error include multiplicative and additive inflation schemes, possibly also flow-dependent ones. The additive schemes rely on samples for the model error, e.g. from short-term forecast tendencies or differences of forecasts with varying resolutions. However, since these methods work in ensemble space (i.e. act directly on the ensemble perturbations), the sampling error is fixed and can be expected to affect the skill substantially. In this contribution we show how inflation can be generalized to take into account more degrees of freedom and what improvements for future operational ensemble data assimilation can be expected from this, also in comparison with other inflation schemes.

  4. Data-adaptive unfolding of nonergodic spectra: Two-Body Random Ensemble

    International Nuclear Information System (INIS)

    The statistics of spectral fluctuations is sensitive to the unfolding procedure that separates global from local properties. Previously, we presented a parameter-free and data-adaptive unfolding method that we demonstrated to be highly effective for standard random-matrix ensembles from Random Matrix Theory (RMT). More general ensembles often break the ergodicity property, which leads to ambiguities between individual-spectrum averaged and ensemble-averaged fluctuation measures. Here, we apply our data-adaptive unfolding to a nonergodic Two-Body Random Ensemble (TBRE). In the present approach, both fluctuation measures can be calculated simultaneously within the same unfolding step, and possible arbitrariness introduced by traditional unfolding procedures is avoided.

  5. Canonical Relational Quantum Mechanics from Information Theory

    OpenAIRE

    Munkhammar, Joakim

    2011-01-01

    In this paper we construct a theory of quantum mechanics based on Shannon information theory. We define a few principles regarding information-based frames of reference, including explicitly the concept of information covariance, and show how an ensemble of all possible physical states can be setup on the basis of the accessible information in the local frame of reference. In the next step the Bayesian principle of maximum entropy is utilized in order to constrain the dynamics. We then show, ...

  6. Probabilistic Quantitative Precipitation Forecasting Using Ensemble Model Output Statistics

    CERN Document Server

    Scheuerer, Michael

    2013-01-01

    Statistical post-processing of dynamical forecast ensembles is an essential component of weather forecasting. In this article, we present a post-processing method that generates full predictive probability distributions for precipitation accumulations based on ensemble model output statistics (EMOS). We model precipitation amounts by a generalized extreme value distribution that is left-censored at zero. This distribution permits modelling precipitation on the original scale without prior transformation of the data. A closed form expression for its continuous rank probability score can be derived and permits computationally efficient model fitting. We discuss an extension of our approach that incorporates further statistics characterizing the spatial variability of precipitation amounts in the vicinity of the location of interest. The proposed EMOS method is applied to daily 18-h forecasts of 6-h accumulated precipitation over Germany in 2011 using the COSMO-DE ensemble prediction system operated by the Germa...

  7. Study on ETKF-Based Initial Perturbation Scheme for GRAPES Global Ensemble Prediction

    Institute of Scientific and Technical Information of China (English)

    MA Xulin; XUE Jishan; LU Weisong

    2009-01-01

    Initial perturbation schemes are one of the important problems for ensemble prediction. In this paper, an ensemble initial perturbation scheme for Global/Regional Assimilation and PrEdiction System (GRAPES) global ensemble prediction is developed in terms of the ensemble transform Kalman filter (ETKF) method. A new GRAPES global ensemble prediction system (GEPS) is also constructed. Spherical simplex 14-member ensemble prediction experiments, using a simulated observation network, the error characteristics of simulated observations, and innovation-based inflation, are carried out for about two months. The structural characteristics and perturbation amplitudes of the ETKF initial perturbations and the perturbation growth characteristics are analyzed, and their quality and suitability as ensemble initial perturbations are assessed. The preliminary experimental results indicate that the ETKF-based GRAPES ensemble initial perturbations can identify the main structures of the analysis error variance and reflect the perturbation amplitudes. The initial perturbations and the spread are reasonable. The initial perturbation variance, which is approximately equal to the forecast error variance, is found to respond to changes in the observational spatial variations with simulated observational network density. The perturbations generated through the simplex method also exhibit a very high degree of consistency between initial analysis and short-range forecast perturbations. Appropriate growth and spread of the ensemble perturbations can be maintained up to a 96-h lead time. The statistical results for 52-day ensemble forecasts show that the forecast scores of the ensemble average for the Northern Hemisphere are higher than those of the control forecast. Provided that more ensemble members, a real-time observational network and a more appropriate inflation factor are used, the ETKF-based initial scheme should perform even better.
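
    The ETKF perturbation update at the heart of such schemes can be sketched as a symmetric square-root transform of the forecast perturbations; the dimensions, observation operator and error statistics below are illustrative, not the GRAPES configuration.

```python
# Symmetric square-root ETKF transform of forecast perturbations into analysis
# (initial) perturbations; all quantities are illustrative.
import numpy as np

rng = np.random.default_rng(8)
n_state, n_obs, n_ens = 40, 10, 14
Xf = rng.normal(size=(n_state, n_ens))
Xf -= Xf.mean(axis=1, keepdims=True)                  # forecast perturbations
H = rng.normal(size=(n_obs, n_state)) / np.sqrt(n_state)
R = 0.25 * np.eye(n_obs)                              # observation-error covariance

Y = H @ Xf                                            # perturbations in observation space
S = Y.T @ np.linalg.inv(R) @ Y / (n_ens - 1)          # (n_ens x n_ens) matrix
vals, C = np.linalg.eigh(S)
T = C @ np.diag(1.0 / np.sqrt(vals + 1.0)) @ C.T      # symmetric square-root transform
Xa = Xf @ T                                           # analysis perturbations
# An innovation-based inflation factor would rescale Xa here (omitted).
print("forecast spread:", Xf.std(), "analysis spread:", Xa.std())
```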

  8. A Comparison of ETKF and Downscaling in a Regional Ensemble Prediction System

    Directory of Open Access Journals (Sweden)

    Hanbin Zhang

    2015-03-01

    Based on the operational regional ensemble prediction system (REPS) of the China Meteorological Administration (CMA), this paper compares two initial-condition perturbation methods: an ensemble transform Kalman filter (ETKF) and a dynamical downscaling of global ensemble perturbations. One month of consecutive tests is implemented to evaluate the performance of both methods in the operational REPS environment. The perturbation characteristics are analyzed and ensemble forecast verifications are conducted; furthermore, a tropical cyclone (TC) case is investigated. The main conclusions are as follows: the ETKF perturbations contain more power at small scales while those derived from downscaling contain more power at large scales, and the relative difference between the two types of perturbations across scales becomes smaller with forecast lead time. The growth of the downscaling perturbations is more pronounced, and the downscaling perturbations have larger magnitude than the ETKF perturbations at all forecast lead times. However, the ETKF perturbation variance represents the forecast error variance better than downscaling. Ensemble forecast verification shows slightly higher skill for the downscaling ensemble than for the ETKF ensemble. The TC case study indicates that the overall performance of the two systems is quite similar, despite the slightly smaller error of the downscaling (DOWN) ensemble compared with the ETKF ensemble at long forecast lead times.

  9. Ensemble Kalman Filtering without a Model

    Science.gov (United States)

    Hamilton, Franz; Berry, Tyrus; Sauer, Timothy

    2016-01-01

    Methods of data assimilation are established in physical sciences and engineering for the merging of observed data with dynamical models. When the model is nonlinear, methods such as the ensemble Kalman filter have been developed for this purpose. At the other end of the spectrum, when a model is not known, the delay coordinate method introduced by Takens has been used to reconstruct nonlinear dynamics. In this article, we merge these two important lines of research. A model-free filter is introduced based on the filtering equations of Kalman and the data-driven modeling of Takens. This procedure replaces the model with dynamics reconstructed from delay coordinates, while using the Kalman update formulation to reconcile new observations. We find that this combination of approaches results in comparable efficiency to parametric methods in identifying underlying dynamics, and may actually be superior in cases of model error.
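
    The delay-coordinate reconstruction that replaces the model can be sketched as follows; the embedding dimension, delay, analogue-based forecast and synthetic signal are assumptions, and the Kalman-style update itself is omitted.

```python
# Delay-coordinate reconstruction in place of a model: embed a scalar observable and
# propose a forecast from nearest-neighbour analogues. All parameters are illustrative.
import numpy as np

rng = np.random.default_rng(9)
t = np.arange(0.0, 60.0, 0.05)
obs = np.sin(t) + 0.5 * np.sin(2.3 * t) + 0.05 * rng.normal(size=t.size)

def delay_embed(x, dim=4, tau=5):
    """Stack delayed copies of x into reconstructed state vectors (Takens embedding)."""
    rows = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau: i * tau + rows] for i in range(dim)])

E = delay_embed(obs)
current, history = E[-1], E[:-1]

k = 10
dist = np.linalg.norm(history[:-1] - current, axis=1)   # distances to past states
nearest = np.argsort(dist)[:k]
forecast = history[nearest + 1][:, -1].mean()           # mean successor of the analogues
print("analogue forecast of the next observation:", forecast)
```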

  10. Symanzik flow on HISQ ensembles

    CERN Document Server

    Bazavov, A; Brown, N; DeTar, C; Foley, J; Gottlieb, Steven; Heller, U M; Hetrick, J E; Laiho, J; Levkova, L; Oktay, M; Sugar, R L; Toussaint, D; Van de Water, R S; Zhou, R

    2013-01-01

    We report on a scale determination with gradient-flow techniques on the $N_f = 2 + 1 + 1$ HISQ ensembles generated by the MILC collaboration. The lattice scale $w_0/a$, originally proposed by the BMW collaboration, is computed using Symanzik flow at four lattice spacings ranging from 0.15 to 0.06 fm. With a Taylor series ansatz, the results are simultaneously extrapolated to the continuum and interpolated to physical quark masses. We give a preliminary determination of the scale $w_0$ in physical units, along with associated systematic errors, and compare with results from other groups. We also present a first estimate of autocorrelation lengths as a function of flowtime for these ensembles.

  11. Precipitation ensembles conforming to natural variations derived from a regional climate model using a new bias correction scheme

    Science.gov (United States)

    Kim, Kue Bum; Kwon, Hyun-Han; Han, Dawei

    2016-05-01

    This study presents a novel bias correction scheme for regional climate model (RCM) precipitation ensembles. A primary advantage of using model ensembles for climate change impact studies is that the uncertainties associated with the systematic error can be quantified through the ensemble spread. Currently, however, most conventional bias correction methods adjust all the ensemble members to one reference observation. As a result, the ensemble spread is degraded during bias correction. Since the observation is only one of many possible realizations due to natural climate variability, a successful bias correction scheme should preserve the ensemble spread within the bounds of its natural variability (i.e. sampling uncertainty). To demonstrate a new bias correction scheme conforming to RCM precipitation ensembles, an application to the Thorverton catchment in the south-west of England is presented. For the ensemble, 11 members from the Hadley Centre Regional Climate Model (HadRM3-PPE) data are used, and monthly bias correction has been done for the baseline period 1961 to 1990. In the typical conventional method, the monthly mean precipitation of each ensemble member becomes nearly identical to the observation, i.e. the ensemble spread is removed. In contrast, the proposed method corrects the bias while maintaining the ensemble spread within the natural variability of the observations.
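
    The contrast between member-wise correction and a spread-preserving alternative can be sketched with a simple scaling example; the synthetic ensemble below is a placeholder for HadRM3-PPE, and the common-factor correction is only one simple way to preserve spread, not the paper's scheme.

```python
# Member-wise correction collapses the spread of member means; a single common
# correction factor preserves it. Data are synthetic monthly precipitation.
import numpy as np

rng = np.random.default_rng(10)
n_members, n_months = 11, 360
obs = rng.gamma(shape=2.0, scale=30.0, size=n_months)                    # observed series
raw = 1.3 * rng.gamma(shape=2.0, scale=30.0, size=(n_members, n_months)) + 10.0

# Conventional: scale each member so its own mean matches the observed mean
conventional = raw * (obs.mean() / raw.mean(axis=1, keepdims=True))
# Spread-preserving: one common factor based on the ensemble-mean climatology
common = raw * (obs.mean() / raw.mean())

for name, ens in [("raw", raw), ("member-wise", conventional), ("common factor", common)]:
    print(name, "- spread of member means:", ens.mean(axis=1).std())
```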

  12. State Ensembles and Quantum Entropy

    Science.gov (United States)

    Kak, Subhash

    2016-06-01

    This paper considers quantum communication involving an ensemble of states. Apart from the von Neumann entropy, it considers other measures, one of which may be useful in obtaining information about an unknown pure state and another of which may be useful in quantum games. It is shown that under certain conditions in a two-party quantum game, the receiver of the states can increase the entropy by adding another pure state.

  13. Simple Deep Random Model Ensemble

    OpenAIRE

    ZHANG, XIAO-LEI; Wu, Ji

    2013-01-01

    Representation learning and unsupervised learning are two central topics of machine learning and signal processing. Deep learning is one of the most effective unsupervised representation learning approaches. The main contributions of this paper to these topics are as follows. (i) We propose to view the representative deep learning approaches as special cases of the knowledge reuse framework of clustering ensembles. (ii) We propose to view sparse coding, when used as a feature encoder, as the consens...

  14. On the use and computation of the Jordan canonical form in system theory

    Science.gov (United States)

    Sridhar, B.; Jordan, D.

    1974-01-01

    This paper investigates various aspects of the application of the Jordan canonical form of a matrix in system theory and develops a computational approach to determining the Jordan form for a given matrix. Applications include pole placement, controllability and observability studies, serving as an intermediate step in yielding other canonical forms, and theorem proving. The computational method developed in this paper is both simple and efficient. The method is based on the definition of a generalized eigenvector and a natural extension of Gauss elimination techniques. Examples are included for demonstration purposes.
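
    For a quick numerical check of a Jordan form, SymPy's built-in routine can be used as below; this is a library-based illustration, not the generalized-eigenvector and Gauss-elimination procedure developed in the paper, and the matrix is a standard textbook example.

```python
# Library-based Jordan form check with SymPy on a textbook example matrix.
import sympy as sp

A = sp.Matrix([[5, 4, 2, 1],
               [0, 1, -1, -1],
               [-1, -1, 3, 0],
               [1, 1, -1, 2]])

P, J = A.jordan_form()                       # A = P * J * P**(-1)
sp.pprint(J)                                 # Jordan blocks on the diagonal

residual = (P * J * P.inv() - A).applyfunc(sp.simplify)
assert residual == sp.zeros(4, 4)            # exact reconstruction of A
```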

  15. Study of Monte Carlo Simulation Method for Methane Phase Diagram Prediction using Two Different Potential Models

    KAUST Repository

    Kadoura, Ahmad

    2011-06-06

    Lennard-Jones (L-J) and Buckingham exponential-6 (exp-6) potential models were used to produce isotherms for methane at temperatures below and above the critical temperature. A molecular simulation approach, specifically Monte Carlo simulation, was employed to create these isotherms, working with both the canonical and Gibbs ensembles. Experiments in the canonical ensemble with each model were conducted to estimate pressures over a range of temperatures above the methane critical temperature. Results were collected and compared to experimental data from the literature; both models showed close agreement with the experimental data. In parallel, experiments below the critical temperature were run in the Gibbs ensemble using the L-J model only. Upon comparing the results with experimental ones, a good fit was obtained, with small deviations. The work was further developed by adding statistical studies in order to achieve a better understanding and interpretation of the quantities estimated by the simulation. Methane phase diagrams were successfully reproduced by an efficient molecular simulation technique with different potential models. This relatively simple demonstration shows how powerful molecular simulation methods can be; hence, further applications to more complicated systems are considered. Prediction of the phase behavior of elemental sulfur in sour natural gases has been an interesting and challenging field in the oil and gas industry. Determination of elemental sulfur solubility conditions helps to avoid many kinds of problems caused by its dissolution in gas production and transportation processes. For this purpose, further enhancement of the methods used is to be considered in order to successfully simulate elemental sulfur phase behavior in sour natural gas mixtures.
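
    On the canonical-ensemble side, the essence of such calculations is a Metropolis NVT Monte Carlo loop; the sketch below treats a small Lennard-Jones system in reduced units with illustrative parameters and reports only a mean potential energy, not the methane isotherms of the study.

```python
# Minimal canonical-ensemble (NVT) Metropolis Monte Carlo for a small Lennard-Jones
# system in reduced units; all parameters are illustrative.
import numpy as np

rng = np.random.default_rng(11)
N, rho, T = 32, 0.5, 2.0                     # particles, reduced density, reduced temperature
L = (N / rho) ** (1.0 / 3.0)                 # cubic box length
beta, dmax = 1.0 / T, 0.2                    # inverse temperature, max displacement
pos = rng.random((N, 3)) * L

def pair_energy(i, coords):
    """Lennard-Jones energy of particle i with all others (minimum-image convention)."""
    d = coords - coords[i]
    d -= L * np.round(d / L)
    r2 = (d ** 2).sum(axis=1)
    r2[i] = np.inf                           # exclude self-interaction
    inv6 = 1.0 / r2 ** 3
    return np.sum(4.0 * (inv6 ** 2 - inv6))

samples = []
for step in range(20000):
    i = rng.integers(N)
    trial = pos.copy()
    trial[i] = (trial[i] + dmax * (rng.random(3) - 0.5)) % L
    dE = pair_energy(i, trial) - pair_energy(i, pos)
    if dE <= 0.0 or rng.random() < np.exp(-beta * dE):   # Metropolis acceptance
        pos = trial
    if step % 100 == 0:
        samples.append(sum(pair_energy(j, pos) for j in range(N)) / 2.0)

print("mean potential energy per particle:", np.mean(samples[50:]) / N)
```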

  16. The Application of Canonical Correlation to Two-Dimensional Contingency Tables

    OpenAIRE

    Alberto F. Restori; Gary S. Katz; Howard B. Lee

    2010-01-01

    This paper re-introduces and demonstrates the use of Mickey's (1970) canonical correlation method in analyzing large two-dimensional contingency tables. This method of analysis supplements the traditional analysis using the Pearson chi-square. Examples and a MATLAB source listing are provided.

  17. The Application of Canonical Correlation to Two-Dimensional Contingency Tables

    Directory of Open Access Journals (Sweden)

    Alberto F. Restori

    2010-03-01

    This paper re-introduces and demonstrates the use of Mickey's (1970) canonical correlation method in analyzing large two-dimensional contingency tables. This method of analysis supplements the traditional analysis using the Pearson chi-square. Examples and a MATLAB source listing are provided.

  18. Canonical Notch activation in osteocytes causes osteopetrosis.

    Science.gov (United States)

    Canalis, Ernesto; Bridgewater, David; Schilling, Lauren; Zanotti, Stefano

    2016-01-15

    Activation of Notch1 in cells of the osteoblastic lineage inhibits osteoblast differentiation/function and causes osteopenia, whereas its activation in osteocytes causes a distinct osteopetrotic phenotype. To explore mechanisms responsible, we established the contributions of canonical Notch signaling (Rbpjκ dependent) to osteocyte function. Transgenics expressing Cre recombinase under the control of the dentin matrix protein-1 (Dmp1) promoter were crossed with Rbpjκ conditional mice to generate Dmp1-Cre(+/-);Rbpjκ(Δ/Δ) mice. These mice did not have a skeletal phenotype, indicating that Rbpjκ is dispensable for osteocyte function. To study the Rbpjκ contribution to Notch activation, Rosa(Notch) mice, where a loxP-flanked STOP cassette is placed between the Rosa26 promoter and the NICD coding sequence, were crossed with Dmp1-Cre transgenic mice and studied in the context (Dmp1-Cre(+/-);Rosa(Notch);Rbpjκ(Δ/Δ)) or not (Dmp1-Cre(+/-);Rosa(Notch)) of Rbpjκ inactivation. Dmp1-Cre(+/-);Rosa(Notch) mice exhibited increased femoral trabecular bone volume and decreased osteoclasts and bone resorption. The phenotype was reversed in the context of the Rbpjκ inactivation, demonstrating that Notch canonical signaling was accountable for the phenotype. Notch activation downregulated Sost and Dkk1 and upregulated Axin2, Tnfrsf11b, and Tnfsf11 mRNA expression, and these effects were not observed in the context of the Rbpjκ inactivation. In conclusion, Notch activation in osteocytes suppresses bone resorption and increases bone volume by utilization of canonical signals that also result in the inhibition of Sost and Dkk1 and upregulation of Wnt signaling. PMID:26578715

  19. Path ensembles for conformational transitions in adenylate kinase using weighted-ensemble path sampling

    CERN Document Server

    Bhatt, Divesh

    2009-01-01

    We perform the first path sampling simulations of conformational transitions of semi-atomistic protein models. We generate an ensemble of pathways for conformational transitions between the open and closed forms of adenylate kinase using the weighted-ensemble path sampling method. Such an ensemble of pathways is critical in determining the important regions of configuration space sampled during a transition. Two different semi-atomistic models are used: one is a pure Go model, whereas the other includes a level of residue specificity via the use of Miyazawa-Jernigan-type interactions and hydrogen bonding. For both models, we find that the open form of adenylate kinase is more flexible and that the transition from open to closed is significantly faster than the reverse transition. We find that the transition occurs via the AMP-binding domain snapping shut on a fairly fast time scale. On the other hand, the flexible lid domain fluctuates significantly and the shutting of the AMP-binding domain does not depend upon the positi...

  20. Evidence of non-canonical NOTCH signaling

    DEFF Research Database (Denmark)

    Traustadóttir, Gunnhildur Ásta; Jensen, Charlotte H; Thomassen, Mads;

    2016-01-01

    Canonical NOTCH signaling, known to be essential for tissue development, requires the Delta-Serrate-LAG2 (DSL) domain for NOTCH to interact with its ligand. However, despite lacking DSL, Delta-like 1 homolog (DLK1), a protein that plays a significant role in mammalian development, has been... In Dlk1(+/+) and Dlk1(-/-) mouse tissues at E16.5, we demonstrated that several NOTCH signaling pathways indeed are affected by DLK1 during tissue development, and this was supported by a lower activation of NOTCH1 protein in Dlk1(+/+) embryos. Likewise, but using a distinct Dlk1-manipulated (si...