Stabilizing Canonical-Ensemble Calculations in the Auxiliary-Field Monte Carlo Method
Gilbreth, C N
2014-01-01
Quantum Monte Carlo methods are powerful techniques for studying strongly interacting Fermi systems. However, implementing these methods on computers with finite-precision arithmetic requires careful attention to numerical stability. In the auxiliary-field Monte Carlo (AFMC) method, low-temperature or large-model-space calculations require numerically stabilized matrix multiplication. When adapting methods used in the grand-canonical ensemble to the canonical ensemble of fixed particle number, the numerical stabilization increases the number of required floating-point operations for computing observables by a factor of the size of the single-particle model space, and thus can greatly limit the systems that can be studied. We describe an improved method for stabilizing canonical-ensemble calculations in AFMC that exhibits better scaling, and present numerical tests that demonstrate the accuracy and improved performance of the method.
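The stabilized matrix multiplication the abstract refers to can be illustrated with a QR-based factorization commonly used in auxiliary-field calculations. The sketch below (our own illustration in Python/NumPy; function names and conventions are ours, not the paper's algorithm) accumulates a long product of matrices as Q·diag(d)·T, re-factorizing after each multiplication so that widely separated numerical scales live in the diagonal factor d rather than contaminating the accumulated product:

```python
import numpy as np

def stabilized_product(mats):
    """Accumulate mats[-1] @ ... @ mats[0] as Q @ diag(d) @ T,
    re-factorizing with QR at each step so that large and small scales
    are carried in d instead of mixing inside the running product."""
    n = mats[0].shape[0]
    Q, d, T = np.eye(n), np.ones(n), np.eye(n)
    for B in mats:
        C = (B @ Q) * d                 # C = B @ Q @ diag(d)
        Qn, R = np.linalg.qr(C)
        dn = np.abs(np.diag(R))
        dn[dn == 0.0] = 1.0             # guard against exactly singular steps
        Tn = (R / dn[:, None]) @ T      # strip the scales out of R
        Q, d, T = Qn, dn, Tn
    return Q, d, T
```

For well-conditioned inputs the factorization reproduces the naive product; its advantage appears when the matrices span many orders of magnitude, as at low temperature in AFMC.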
Derivation of Mayer Series from Canonical Ensemble
Xian-Zhi, Wang
2016-02-01
Mayer derived the Mayer series from both the canonical ensemble and the grand canonical ensemble by use of the cluster expansion method. In 2002, we conjectured a recursion formula of the canonical partition function of a fluid (X.Z. Wang, Phys. Rev. E 66 (2002) 056102). In this paper we give a proof for this formula by developing an appropriate expansion of the integrand of the canonical partition function. We further derive the Mayer series solely from the canonical ensemble by use of this recursion formula.
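The kind of recursion in question can be made concrete. If the grand partition function is written as Ξ(z) = exp(V Σⱼ bⱼ zʲ), the canonical partition functions Q_N satisfy the standard generating-function identity N·Q_N = Σⱼ j·(V bⱼ)·Q_{N−j}. The Python sketch below implements this identity (our notation; not necessarily the exact form of Wang's conjectured formula):

```python
def canonical_Z(vb, N):
    """Canonical partition functions Q_0..Q_N from the identity
    N * Q_N = sum_{j=1..N} j * vb[j] * Q_{N-j},
    where vb[j] = V * b_j are volume-scaled cluster integrals (vb[0] unused)."""
    Q = [1.0] + [0.0] * N
    for n in range(1, N + 1):
        Q[n] = sum(j * vb[j] * Q[n - j] for j in range(1, n + 1)) / n
    return Q
```

As a sanity check, for an ideal gas (b₁ = 1, bⱼ = 0 for j > 1) the recursion reproduces Q_N = V^N / N!.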
Quantum statistical model of nuclear multifragmentation in the canonical ensemble method
Energy Technology Data Exchange (ETDEWEB)
Toneev, V.D.; Ploszajczak, M. [Grand Accelerateur National d' Ions Lourds (GANIL), 14 - Caen (France); Parvant, A.S. [Institute of Applied Physics, Moldova Academy of Sciences, MD Moldova (Ukraine); Parvant, A.S. [Joint Institute for Nuclear Research, Bogoliubov Lab. of Theoretical Physics, Dubna (Russian Federation)
1999-07-01
A quantum statistical model of nuclear multifragmentation is proposed. The recurrence equation method used in the canonical ensemble makes the model solvable and transparent to physical assumptions, and allows one to obtain results without invoking Monte Carlo techniques. The model exhibits a first-order phase transition. Quantum statistics effects are clearly seen at the microscopic level of occupation numbers but are almost washed out for the global thermodynamic variables and averaged observables studied. In the latter case, the recurrence relations for multiplicity distributions of both intermediate-mass and all fragments are derived, and the specific changes in the shape of the multiplicity distributions in the narrow region of the transition temperature are stressed. The temperature domain favorable for the search for the HBT effect is noted. (authors)
Extending the parQ transition matrix method to grand canonical ensembles.
Haber, René; Hoffmann, Karl Heinz
2016-06-01
Phase coexistence properties as well as other thermodynamic features of fluids can be effectively determined from the grand canonical density of states (DOS). We present an extension of the parQ transition matrix method in combination with the efasTM method as a very fast approach for determining the grand canonical DOS from the transition matrix. The efasTM method minimizes the deviation from detailed balance in the transition matrix using a fast Krylov-based equation solver. The method allows a very effective use of state space transition data obtained by different exploration schemes. An application to a Lennard-Jones system produces phase coexistence properties of the same quality as reference data. PMID:27415394
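A toy version of the underlying idea, recovering a density of states from transition data by enforcing detailed balance, can be sketched as follows. This mirrors the spirit of the approach only; the actual parQ/efasTM algorithms use a Krylov-based solver on far larger transition matrices, and the function below is our own illustration:

```python
import numpy as np

def dos_from_transition_matrix(T):
    """Infer a density of states g from an energy-level transition matrix T
    using detailed balance, g_i * T_ij = g_j * T_ji. Each observed pair
    gives one linear equation ln g_j - ln g_i = ln(T_ij / T_ji); the
    overdetermined system is solved by least squares with gauge ln g_0 = 0."""
    n = T.shape[0]
    rows, rhs = [], []
    for i in range(n):
        for j in range(i + 1, n):
            if T[i, j] > 0 and T[j, i] > 0:
                r = np.zeros(n)
                r[i], r[j] = -1.0, 1.0
                rows.append(r)
                rhs.append(np.log(T[i, j] / T[j, i]))
    gauge = np.zeros(n)
    gauge[0] = 1.0
    rows.append(gauge)
    rhs.append(0.0)
    lng, *_ = np.linalg.lstsq(np.vstack(rows), np.array(rhs), rcond=None)
    return np.exp(lng)
```

With exact detailed-balance data the least-squares solution recovers g exactly (up to the chosen normalization g₀ = 1); with noisy sampled data it averages the deviations, which is the minimization the efasTM method performs at scale.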
Canonical Ensemble Model for Black Hole Radiation
Indian Academy of Sciences (India)
Jingyi Zhang
2014-09-01
In this paper, a canonical ensemble model for the black hole quantum tunnelling radiation is introduced. In this model the probability distribution function corresponding to the emission shell is calculated to second order. The formula of pressure and internal energy of the thermal system is modified, and the fundamental equation of thermodynamics is also discussed.
Re, Matteo; Valentini, Giorgio
2012-03-01
Ensemble methods are statistical and computational learning procedures reminiscent of the human social learning behavior of seeking several opinions before making any crucial decision. The idea of combining the opinions of different "experts" to obtain an overall "ensemble" decision is rooted in our culture at least since the classical age of ancient Greece, and it was formalized during the Enlightenment with the Condorcet Jury Theorem [45], which proved that the judgment of a committee is superior to those of individuals, provided the individuals have reasonable competence. Ensembles are sets of learning machines that combine in some way their decisions, or their learning algorithms, or different views of the data, or other specific characteristics to obtain more reliable and more accurate predictions in supervised and unsupervised learning problems [48,116]. A simple example is the majority vote ensemble, in which the decisions of different learning machines are combined and the class that receives the majority of "votes" (i.e., the class predicted by the majority of the learning machines) is the class predicted by the overall ensemble [158]. In the literature, a plethora of terms other than ensembles has been used, such as fusion, combination, aggregation, and committee, to indicate sets of learning machines that work together to solve a machine learning problem [19,40,56,66,99,108,123], but in this chapter we maintain the term ensemble in its widest meaning, in order to include the whole range of combination methods. Nowadays, ensemble methods represent one of the main current research lines in machine learning [48,116], and the interest of the research community in ensemble methods is witnessed by conferences and workshops specifically devoted to ensembles, first of all the Multiple Classifier Systems (MCS) conference organized by Roli, Kittler, Windeatt, and other researchers of this area [14,62,85,149,173]. Several theories have been
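The majority-vote combiner described above fits in a few lines. The following Python sketch is our own minimal illustration (names are hypothetical, not from the chapter):

```python
from collections import Counter

def majority_vote(predictions):
    """Return the class predicted by the most base learners; ties are
    broken in favour of the class seen first (Counter insertion order)."""
    return Counter(predictions).most_common(1)[0][0]

def ensemble_predict(models, x):
    """Combine the decisions of several learning machines on input x."""
    return majority_vote([model(x) for model in models])
```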
Triality and the grand canonical ensemble in QCD
International Nuclear Information System (INIS)
QCD in the usual finite-temperature formulation uses the grand canonical ensemble with zero chemical potential. We demonstrate that this description may give wrong predictions. QCD in the canonical formulation does not explicitly break Z(3) symmetry; in this sense it behaves like pure gluonic QCD. There are no metastable states in the canonical ensemble description as predicted in the grand canonical ensemble formalism. ((orig.))
Multiplicity fluctuations in heavy-ion collisions using canonical and grand-canonical ensemble
Energy Technology Data Exchange (ETDEWEB)
Garg, P. [Indian Institute of Technology Indore, Discipline of Physics, School of Basic Science, Simrol (India); Mishra, D.K.; Netrakanti, P.K.; Mohanty, A.K. [Bhabha Atomic Research Center, Nuclear Physics Division, Mumbai (India)
2016-02-15
We report the higher-order cumulants and their ratios for baryon, charge, and strangeness multiplicity in the canonical and grand-canonical ensembles in an ideal thermal model including all the resonances. When the number of conserved quanta is small, an explicit treatment of these conserved charges is required; this leads to a canonical description of the system, and the fluctuations differ significantly from those in the grand-canonical ensemble. Cumulant ratios of total-charge and net-charge multiplicity as a function of collision energy are also compared in the grand-canonical ensemble. (orig.)
Multiplicity fluctuations in heavy ion collisions using canonical and grand canonical ensemble
Garg, P; Netrakanti, P K; Mohanty, A K
2015-01-01
We report the higher-order cumulants and their ratios for baryon, charge, and strangeness multiplicity in canonical and grand canonical ensembles in an ideal thermal model including all the resonances. When the number of conserved quanta is small, an explicit treatment of these conserved charges is required; this leads to a canonical description of the system, and the fluctuations differ significantly from those in the grand canonical ensemble. Cumulant ratios of total-charge and net-charge multiplicity as a function of collision energy are also compared in the grand canonical ensemble.
Critical adsorption and critical Casimir forces in the canonical ensemble
Gross, Markus; Vasilyev, Oleg; Gambassi, Andrea; Dietrich, S.
2016-08-01
Critical properties of a liquid film between two planar walls are investigated in the canonical ensemble, within which the total number of fluid particles, rather than their chemical potential, is kept constant. The effect of this constraint is analyzed within mean-field theory (MFT) based on a Ginzburg-Landau free-energy functional as well as via Monte Carlo simulations of the three-dimensional Ising model with fixed total magnetization. Within MFT and for finite adsorption strengths at the walls, the thermodynamic properties of the film in the canonical ensemble can be mapped exactly onto a grand canonical ensemble in which the corresponding chemical potential plays the role of the Lagrange multiplier associated with the constraint. However, due to a nonintegrable divergence of the mean-field order parameter profile near a wall, the limit of infinitely strong adsorption turns out to be not well-defined within MFT, because it would necessarily violate the constraint. The critical Casimir force (CCF) acting on the two planar walls of the film is generally found to behave differently in the canonical and grand canonical ensembles. For instance, the canonical CCF in the presence of equal preferential adsorption at the two walls is found to have the opposite sign and a slower decay behavior as a function of the film thickness compared to its grand canonical counterpart. We derive the stress tensor in the canonical ensemble and find that it has the same expression as in the grand canonical case, but with the chemical potential playing the role of the Lagrange multiplier associated with the constraint. The different behavior of the CCF in the two ensembles is rationalized within MFT by showing that, for a prescribed value of the thermodynamic control parameter of the film, i.e., density or chemical potential, the film pressures are identical in the two ensembles, while the corresponding bulk pressures are not.
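Simulating the Ising model at fixed total magnetization, as in the study above, requires Monte Carlo moves that conserve the order parameter. A minimal sketch (our own illustration under standard conventions, not the authors' production code) uses nonlocal Kawasaki-type spin exchanges:

```python
import numpy as np

def kawasaki_sweep(spins, beta, rng):
    """One sweep of nonlocal spin-exchange (Kawasaki-type) moves for a 2D
    Ising lattice with periodic boundaries. Only +1/-1 pairs are swapped,
    so the total magnetization -- the canonical constraint -- is conserved."""
    L = spins.shape[0]

    def incident_bonds(*sites):
        # unique lattice bonds touching any of the given sites
        bonds = set()
        for (i, j) in sites:
            for nb in (((i + 1) % L, j), ((i - 1) % L, j),
                       (i, (j + 1) % L), (i, (j - 1) % L)):
                bonds.add(frozenset(((i, j), nb)))
        return bonds

    def bond_energy(bonds):
        return -sum(spins[a] * spins[b]
                    for a, b in (tuple(bond) for bond in bonds))

    for _ in range(L * L):
        i1, j1, i2, j2 = rng.integers(0, L, size=4)
        if spins[i1, j1] == spins[i2, j2]:
            continue                    # equal spins: swap is a no-op
        bonds = incident_bonds((i1, j1), (i2, j2))
        e_old = bond_energy(bonds)
        spins[i1, j1], spins[i2, j2] = spins[i2, j2], spins[i1, j1]
        dE = bond_energy(bonds) - e_old
        if dE > 0 and rng.random() >= np.exp(-beta * dE):
            # reject: swap back
            spins[i1, j1], spins[i2, j2] = spins[i2, j2], spins[i1, j1]
    return spins
```

Recomputing the energy of the exact set of bonds touching the two sites keeps the move correct even when the chosen sites happen to be neighbors.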
Generalized Gibbs canonical ensemble: A possible physical scenario
Velazquez, L.
2007-01-01
After reviewing some fundamental results derived from the introduction of the generalized Gibbs canonical ensemble, such as the so-called thermodynamic uncertainty relation, we describe a physical scenario in which such a generalized ensemble naturally appears as a consequence of a modification of the energy-exchange mechanism between the system of interest and its surroundings, which could be relevant within the framework of long-range interacting systems.
Geometric integrator for simulations in the canonical ensemble
Tapias, Diego; Bravetti, Alessandro
2016-01-01
In this work we introduce a geometric integrator for molecular dynamics simulations of physical systems in the canonical ensemble. In particular, we consider the equations arising from the so-called density dynamics algorithm with any possible type of thermostat and provide an integrator that preserves the invariant distribution. Our integrator thus constitutes a unified framework that allows the study and comparison of different thermostats and of their influence on the equilibrium and non-equilibrium (thermo-)dynamic properties of the system. To show the validity and the generality of the integrator, we implement it with a second-order, time-reversible method and apply it to the simulation of a Lennard-Jones system with three different thermostats, obtaining good conservation of the geometrical properties and recovering the expected thermodynamic results.
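For orientation, the equations of motion that canonical-ensemble thermostats generate can be written down for the simplest case. The sketch below shows a plain explicit Euler step of the familiar Nosé-Hoover equations for a 1D harmonic oscillator; it is our own illustration of the structure of such dynamics, not the geometric, invariant-preserving integrator of the paper (which uses a time-reversible splitting scheme):

```python
def nose_hoover_step(x, p, zeta, dt, beta, Q=1.0):
    """One explicit Euler step of Nose-Hoover dynamics for a 1D harmonic
    oscillator (V = x^2/2, unit mass):
        xdot = p,  pdot = -x - zeta*p,  zetadot = (p**2 - 1/beta) / Q.
    The thermostat variable zeta pumps energy in or out so that the
    long-time average of p**2 approaches 1/beta. Illustrative only."""
    xn = x + dt * p
    pn = p + dt * (-x - zeta * p)
    zn = zeta + dt * (p * p - 1.0 / beta) / Q
    return xn, pn, zn
```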
Extending canonical Monte Carlo methods
Velazquez, L.; Curilef, S.
2010-02-01
In this paper, we discuss the implications of a recently obtained equilibrium fluctuation-dissipation relation for the extension of the available Monte Carlo methods based on the consideration of the Gibbs canonical ensemble, to account for the existence of an anomalous regime with negative heat capacities C < 0. The resulting framework appears to be a suitable generalization of the methodology associated with the so-called dynamical ensemble, which is applied to the extension of two well-known Monte Carlo methods: Metropolis importance sampling and the Swendsen-Wang cluster algorithm. These Monte Carlo algorithms are employed to study the anomalous thermodynamic behavior of Potts models with many spin states q defined on a d-dimensional hypercubic lattice with periodic boundary conditions; they successfully reduce the exponential divergence of the decorrelation time τ with increasing system size N to a weak power-law divergence τ ∝ N^α, with α ≈ 0.2 for the particular case of the 2D ten-state Potts model.
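As a baseline for what these extended methods improve upon, a plain single-spin-flip Metropolis sampler for the q-state Potts model can be sketched as follows (an illustration under the standard conventions H = −Σ δ(sᵢ, sⱼ) over nearest neighbors; this is not the authors' extended algorithm):

```python
import numpy as np

def potts_metropolis(L=8, q=10, beta=0.5, sweeps=50, seed=1):
    """Single-spin-flip Metropolis for the 2D q-state Potts model,
    H = -sum_<ij> delta(s_i, s_j), periodic boundaries."""
    rng = np.random.default_rng(seed)
    s = rng.integers(0, q, size=(L, L))
    for _ in range(sweeps):
        for _ in range(L * L):
            i, j = rng.integers(0, L, size=2)
            new = rng.integers(0, q)
            nbrs = (s[(i + 1) % L, j], s[(i - 1) % L, j],
                    s[i, (j + 1) % L], s[i, (j - 1) % L])
            # dE = (# of bonds satisfied before) - (# satisfied after)
            dE = sum(n == s[i, j] for n in nbrs) - sum(n == new for n in nbrs)
            if dE <= 0 or rng.random() < np.exp(-beta * dE):
                s[i, j] = new
    return s
```

Near the transition this sampler decorrelates exponentially slowly in N, which is exactly the pathology the extended Metropolis and Swendsen-Wang variants address.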
National Aeronautics and Space Administration — Ensemble Data Mining Methods, also known as Committee Methods or Model Combiners, are machine learning methods that leverage the power of multiple models to achieve...
Climate Prediction Center(CPC)Ensemble Canonical Correlation Analysis Forecast of Temperature
National Oceanic and Atmospheric Administration, Department of Commerce — The Ensemble Canonical Correlation Analysis (ECCA) temperature forecast is a 90-day (seasonal) outlook of US surface temperature anomalies. The ECCA uses Canonical...
Canonical ensemble in non-extensive statistical mechanics, q > 1
Ruseckas, Julius
2016-09-01
The non-extensive statistical mechanics has been used to describe a variety of complex systems. The maximization of entropy, often used to introduce the non-extensive statistical mechanics, is a formal procedure and does not easily lead to physical insight. In this article we investigate the canonical ensemble in the non-extensive statistical mechanics by considering a small system interacting with a large reservoir via short-range forces and assuming equal probabilities for all available microstates. We concentrate on the situation when the reservoir is characterized by generalized entropy with non-extensivity parameter q > 1. We also investigate the problem of divergence in the non-extensive statistical mechanics occurring when q > 1 and show that there is a limit on the growth of the number of microstates of the system that is given by the same expression for all values of q.
Canonical ensemble in non-extensive statistical mechanics
Ruseckas, Julius
2016-04-01
The framework of non-extensive statistical mechanics, proposed by Tsallis, has been used to describe a variety of systems. The non-extensive statistical mechanics is usually introduced in a formal way, using the maximization of entropy. In this paper we investigate the canonical ensemble in the non-extensive statistical mechanics using a more traditional way, by considering a small system interacting with a large reservoir via short-range forces. The reservoir is characterized by generalized entropy instead of the Boltzmann-Gibbs entropy. Assuming equal probabilities for all available microstates we derive the equations of the non-extensive statistical mechanics. Such a procedure can provide deeper insight into applicability of the non-extensive statistics.
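The canonical distribution that emerges in the non-extensive framework replaces the Boltzmann exponential with the Tsallis q-exponential. The Python sketch below (our own minimal illustration, using the standard convention exp_q(x) = [1 + (1−q)x]₊^{1/(1−q)}) computes the resulting normalized weights:

```python
import numpy as np

def q_exponential(x, q):
    """Tsallis q-exponential, [1 + (1-q)x]_+ ** (1/(1-q)); reduces to
    exp(x) in the limit q -> 1."""
    x = np.asarray(x, dtype=float)
    if abs(q - 1.0) < 1e-12:
        return np.exp(x)
    base = 1.0 + (1.0 - q) * x
    out = np.zeros_like(base)
    mask = base > 0.0           # the q-exponential is cut off at base <= 0
    out[mask] = base[mask] ** (1.0 / (1.0 - q))
    return out

def tsallis_weights(energies, beta, q):
    """Normalized canonical weights p_i proportional to exp_q(-beta * E_i)."""
    w = q_exponential(-beta * np.asarray(energies, dtype=float), q)
    return w / w.sum()
```

For q > 1 the weights decay as a power law in energy rather than exponentially, which is the origin of the divergence issue the paper analyzes.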
Energy Technology Data Exchange (ETDEWEB)
Parvan, A.S. [Joint Institute for Nuclear Research, Bogoliubov Laboratory of Theoretical Physics, Dubna (Russian Federation); Horia Hulubei National Institute of Physics and Nuclear Engineering, Department of Theoretical Physics, Bucharest (Romania); Moldova Academy of Sciences, Institute of Applied Physics, Chisinau (Moldova, Republic of)
2015-09-15
In the present paper, the Tsallis statistics in the grand canonical ensemble was reconsidered in a general form. The thermodynamic properties of the nonrelativistic ideal gas of hadrons in the grand canonical ensemble were studied numerically and analytically in a finite volume and in the thermodynamic limit. It was proved that the Tsallis statistics in the grand canonical ensemble satisfies the requirements of equilibrium thermodynamics in the thermodynamic limit if the thermodynamic potential is a homogeneous function of the first order with respect to the extensive variables of state of the system and the entropic variable z = 1/(q - 1) is an extensive variable of state. The equivalence of the canonical, microcanonical, and grand canonical ensembles for the nonrelativistic ideal gas of hadrons was demonstrated. (orig.)
National Oceanic and Atmospheric Administration, Department of Commerce — The Ensemble Canonical Correlation Analysis (ECCA) precipitation forecast is a 90-day (seasonal) outlook of US surface precipitation anomalies. The ECCA uses...
Non-extended phase space thermodynamics of Lovelock AdS black holes in the grand canonical ensemble
International Nuclear Information System (INIS)
Recently, the extended phase space thermodynamics of Lovelock AdS black holes has been of great interest. To provide insight from a different perspective and to gain a unified phase-transition picture, the non-extended phase space thermodynamics of (n+1)-dimensional charged topological Lovelock AdS black holes is investigated in detail in the grand canonical ensemble. Specifically, the specific heat at constant electric potential is calculated and the phase transition in the grand canonical ensemble is discussed. To probe the impact of the various parameters, we utilize the control-variate method and solve the phase-transition condition equation numerically for the cases k = 1, -1. There are two critical points for the case n = 6, k = 1, while there is only one for the other cases. For k = 0 there exists no phase transition point. To clarify the nature of the phase transition in the grand canonical ensemble, we carry out an analytic check of the analog form of the Ehrenfest equations proposed by Banerjee et al. It is shown that Lovelock AdS black holes in the grand canonical ensemble undergo a second-order phase transition. To examine the phase structure in the grand canonical ensemble, we utilize the thermodynamic geometry method and calculate both the Weinhold metric and the Ruppeiner metric. Both analytic and graphical results show that the divergence structure of the Ruppeiner scalar curvature coincides with that of the specific heat. Our research provides one more example in which the Ruppeiner metric serves as a powerful tool to probe the phase structures of black holes. (orig.)
Wu, Xiongwu; Damjanovic, Ana; Brooks, Bernard R
2012-01-31
This review provides a comprehensive description of the self-guided Langevin dynamics (SGLD) and the self-guided molecular dynamics (SGMD) methods and their applications. Example systems are included to provide guidance on optimal application of these methods in simulation studies. SGMD/SGLD has enhanced ability to overcome energy barriers and accelerate rare events to affordable time scales. It has been demonstrated that with moderate parameters, SGLD can routinely cross energy barriers of 20 kT at a rate that molecular dynamics (MD) or Langevin dynamics (LD) crosses 10 kT barriers. The core of these methods is the use of local averages of forces and momenta in a direct manner that can preserve the canonical ensemble. The use of such local averages results in methods where low frequency motion "borrows" energy from high frequency degrees of freedom when a barrier is approached and then returns that excess energy after a barrier is crossed. This self-guiding effect also results in an accelerated diffusion to enhance conformational sampling efficiency. The resulting ensemble with SGLD deviates in a small way from the canonical ensemble, and that deviation can be corrected with either an on-the-fly or a post processing reweighting procedure that provides an excellent canonical ensemble for systems with a limited number of accelerated degrees of freedom. Since reweighting procedures are generally not size extensive, a newer method, SGLDfp, uses local averages of both momenta and forces to preserve the ensemble without reweighting. The SGLDfp approach is size extensive and can be used to accelerate low frequency motion in large systems, or in systems with explicit solvent where solvent diffusion is also to be enhanced. Since these methods are direct and straightforward, they can be used in conjunction with many other sampling methods or free energy methods by simply replacing the integration of degrees of freedom that are normally sampled by MD or LD. PMID:23913991
Courtney, Owen T
2016-01-01
Simplicial complexes are generalized network structures able to encode interactions occurring between more than two nodes. Simplicial complexes describe a large variety of complex interacting systems ranging from brain networks, to social and collaboration networks. Here we characterize the structure of simplicial complexes using their generalized degrees that capture fundamental properties of one, two, three or more linked nodes. Moreover we introduce the configuration model and the canonical ensemble of simplicial complexes, enforcing respectively the sequence of generalized degrees of the nodes and the sequence of the expected generalized degrees of the nodes. We evaluate the entropy of these ensembles, finding the asymptotic expression for the number of simplicial complexes in the configuration model. We provide the algorithms for the construction of simplicial complexes belonging to the configuration model and the canonical ensemble of simplicial complexes. We give an expression for the structural cutoff...
A Canonical Ensemble Approach to the Fermion/Boson Random Point Processes and Its Applications
Tamura, H.; Ito, K. R.
2006-04-01
We introduce the boson and the fermion point processes from the elementary quantum mechanical point of view. That is, we consider quantum statistical mechanics of the canonical ensemble for a fixed number of particles which obey Bose-Einstein, Fermi-Dirac statistics, respectively, in a finite volume. Focusing on the distribution of positions of the particles, we have point processes of the fixed number of points in a bounded domain. By taking the thermodynamic limit such that the particle density converges to a finite value, the boson/fermion processes are obtained. This argument is a realization of the equivalence of ensembles, since resulting processes are considered to describe a grand canonical ensemble of points. Random point processes corresponding to para-particles of order two are discussed as an application of the formulation. Statistics of a system of composite particles at zero temperature are also considered as a model of determinantal random point processes.
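Canonical-ensemble quantities for a fixed number of ideal bosons or fermions, as used above, are computable in practice through a standard recursion over particle number (the Borrmann-Franke-type relation; our own implementation sketch, not the paper's formalism):

```python
import numpy as np

def canonical_partition(energies, beta, N, fermions=False):
    """Canonical partition function Z_N of an ideal quantum gas with
    single-particle levels `energies`, via the standard recursion
        Z_N = (1/N) * sum_{k=1..N} s_k * C_k * Z_{N-k},
    with C_k = sum_i exp(-k*beta*e_i) and s_k = (-1)**(k+1) for
    fermions, s_k = 1 for bosons. Z_0 = 1."""
    e = np.asarray(energies, dtype=float)
    C = [0.0] + [np.exp(-k * beta * e).sum() for k in range(1, N + 1)]
    Z = [1.0] + [0.0] * N
    for n in range(1, N + 1):
        Z[n] = sum(((-1.0) ** (k + 1) if fermions else 1.0) * C[k] * Z[n - k]
                   for k in range(1, n + 1)) / n
    return Z[N]
```

For two levels at energies 0 and 1, two fermions have the single configuration with both levels filled (Z₂ = e^{−β}), while two bosons also admit double occupation (Z₂ = 1 + e^{−β} + e^{−2β}); the recursion reproduces both.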
Extending canonical Monte Carlo methods: II
International Nuclear Information System (INIS)
We have previously presented a methodology for extending canonical Monte Carlo methods inspired by a suitable extension of the canonical fluctuation relation C = β²⟨δE²⟩ compatible with negative heat capacities, C < 0. The approach reduces the divergence of the decorrelation time with system size to a weak power law τ ∝ N^α, as is shown in the particular case of the 2D seven-state Potts model, where the exponent α = 0.14-0.18.
Study of critical dynamics in fluids via molecular dynamics in canonical ensemble.
Roy, Sutapa; Das, Subir K
2015-12-01
With the objective of understanding the usefulness of thermostats in the study of dynamic critical phenomena in fluids, we present results for transport properties in a binary Lennard-Jones fluid that exhibits a liquid-liquid phase transition. Various collective transport properties, calculated from molecular dynamics (MD) simulations in the canonical ensemble with different thermostats, are compared with those obtained from MD simulations in the microcanonical ensemble. It is observed that the Nosé-Hoover and dissipative particle dynamics thermostats are useful for the calculation of mutual diffusivity and shear viscosity. The Nosé-Hoover thermostat, however, as opposed to the latter, appears inadequate for the study of bulk viscosity. PMID:26687057
A Canonical Ensemble Approach to the Fermion/Boson Random Point Processes and its Applications
Tamura, H.; Ito, K. R.
2005-01-01
We introduce the boson and the fermion point processes from the elementary quantum mechanical point of view. That is, we consider quantum statistical mechanics of canonical ensemble for a fixed number of particles which obey Bose-Einstein, Fermi-Dirac statistics, respectively, in a finite volume. Focusing on the distribution of positions of the particles, we have point processes of the fixed number of points in a bounded domain. By taking the thermodynamic limit such that the particle density...
A Canonical Ensemble Approach to the Fermion/Boson Random Point Processes and Its Applications
Tamura, Hiroshi; Ito, Keiichi R.
2006-01-01
We introduce the boson and the fermion point processes from the elementary quantum mechanical point of view. That is, we consider quantum statistical mechanics of the canonical ensemble for a fixed number of particles which obey Bose-Einstein, Fermi-Dirac statistics, respectively, in a finite volume. Focusing on the distribution of positions of the particles, we have point processes of the fixed number of points in a bounded domain. By taking the thermodynamic limit such that the particle den...
THERMODYNAMICS OF GLOBAL MONOPOLE ANTI-DE-SITTER BLACK HOLE IN GRAND CANONICAL ENSEMBLE
Institute of Scientific and Technical Information of China (English)
陈菊华; 荆继良; 王永久
2001-01-01
In this paper, we investigate the thermodynamics of the global monopole anti-de Sitter black hole in the grand canonical ensemble following York's formalism. The black hole is enclosed in a cavity with a finite radius where the temperature and potential are fixed. We have studied some thermodynamical properties, namely the reduced action, thermal energy, and entropy. By investigating the stability of the solutions, we find stable solutions and instantons.
Isobar of an ideal Bose gas within the grand canonical ensemble
Jeon, Imtak; Kim, Sang-Woo; Park, Jeong-Hyuck
2011-01-01
We investigate the isobar of an ideal Bose gas confined in a cubic box within the grand canonical ensemble, for a large yet finite number of particles, N. After solving the equation of the spinodal curve, we derive precise formulae for the supercooling and the superheating temperatures which reveal an N^{-1/3} or N^{-1/4} power correction to the known Bose-Einstein condensation temperature in the thermodynamic limit. Numerical computations confirm the accuracy of our analytical approximation,...
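The grand-canonical bookkeeping behind such calculations is easy to sketch: for a finite mode spectrum one solves ⟨N⟩(z) = N for the fugacity z. The code below is our own illustration (hard-wall box modes with ħ = m = 1; the paper's precise boundary conditions and observables may differ):

```python
import numpy as np

def bose_fugacity(N, beta, L, nmax=10):
    """Solve <N>(z) = N for an ideal Bose gas in a cubic box of side L,
    using hard-wall single-particle modes e = (pi^2/2)(nx^2+ny^2+nz^2)/L^2
    (hbar = m = 1) truncated at nmax per direction. Energies are shifted
    by the ground-state energy, so the physical range is 0 < z < 1 and
    simple bisection applies (<N>(z) is monotonically increasing)."""
    n = np.arange(1, nmax + 1)
    nx, ny, nz = np.meshgrid(n, n, n, indexing="ij")
    e = (np.pi**2 / 2.0) * (nx**2 + ny**2 + nz**2).ravel() / L**2
    e = e - e.min()

    def mean_n(z):
        x = z * np.exp(-beta * e)
        return (x / (1.0 - x)).sum()   # sum of Bose occupation numbers

    lo, hi = 0.0, 1.0 - 1e-12
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mean_n(mid) < N:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Larger particle numbers push z toward its saturation value, which is the mechanism behind the finite-N corrections to the condensation temperature discussed in the abstract.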
Pattern classification using ensemble methods
Rokach, Lior
2009-01-01
Researchers from various disciplines such as pattern recognition, statistics, and machine learning have explored the use of ensemble methodology since the late seventies. Thus, they are faced with a wide variety of methods, given the growing interest in the field. This book aims to impose a degree of order upon this diversity by presenting a coherent and unified repository of ensemble methods, theories, trends, challenges and applications. The book describes in detail the classical methods, as well as the extensions and novel approaches developed recently. Along with algorithmic descriptions o
Hori method for generalized canonical systems
da Silva Fernandes, Sandro
2009-01-01
In this paper, some special features of the canonical version of the Hori method, as applied to generalized canonical systems (systems of differential equations described by a Hamiltonian function linear in the momenta), are presented. Two different procedures, based on a new approach to the integration theory recently presented for the canonical version, are proposed for determining the new Hamiltonian and the generating function for systems whose differential equations for the coordinates describe a periodic system with one fast phase. These procedures are equivalent, and they are directly related to the canonical transformations defined by the general solution of the integrable kernel of the Hamiltonian. They provide the same near-identity transformation for the coordinates as obtained through the non-canonical version of the Hori method. It is also shown that these procedures are connected to the classic averaging principle through a canonical transformation. As examples, asymptotic solutions of a non-linear oscillations problem and of the elliptic perturbed problem are discussed.
Using lattice methods in non-canonical quantum statistics
International Nuclear Information System (INIS)
We define a natural coarse-graining procedure which can be applied to any closed equilibrium quantum system described by a density-matrix ensemble, and we show how the coarse-graining leads to the Gaussian and canonical ensembles. After this motivation, we present two ways of evaluating the Gaussian expectation values with lattice simulations. The first one is computationally demanding but general, whereas the second employs only canonical expectation values but is applicable only to systems which are almost thermodynamical
Neirotti, J P; Freeman, D L; Doll, J D; Freeman, David L.
2000-01-01
The heat capacity and isomer distributions of the 38 atom Lennard-Jones cluster have been calculated in the canonical ensemble using parallel tempering Monte Carlo methods. A distinct region of temperature is identified that corresponds to equilibrium between the global minimum structure and the icosahedral basin of structures. This region of temperatures occurs below the melting peak of the heat capacity and is accompanied by a peak in the derivative of the heat capacity with temperature. Parallel tempering is shown to introduce correlations between results at different temperatures. A discussion is given that compares parallel tempering with other related approaches that ensure ergodic simulations.
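The core of the parallel tempering scheme used above is the replica-exchange step: configurations at adjacent inverse temperatures are swapped with a Metropolis probability. A minimal sketch (hypothetical helper, not the authors' code):

```python
import math

def pt_swap_prob(beta_i, beta_j, E_i, E_j):
    # Metropolis acceptance probability for exchanging the configurations of
    # two replicas at inverse temperatures beta_i, beta_j with current
    # energies E_i, E_j; satisfies detailed balance in the product ensemble.
    return min(1.0, math.exp((beta_i - beta_j) * (E_i - E_j)))
```

Accepted swaps let low-temperature replicas escape local minima via excursions to high temperature, which is what equilibrates the global-minimum and icosahedral basins; it is also the mechanism that introduces the cross-temperature correlations the abstract notes.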
Courtney, Owen T.; Bianconi, Ginestra
2016-06-01
Simplicial complexes are generalized network structures able to encode interactions occurring between more than two nodes. Simplicial complexes describe a large variety of complex interacting systems ranging from brain networks to social and collaboration networks. Here we characterize the structure of simplicial complexes using their generalized degrees that capture fundamental properties of one, two, three, or more linked nodes. Moreover, we introduce the configuration model and the canonical ensemble of simplicial complexes, enforcing, respectively, the sequence of generalized degrees of the nodes and the sequence of the expected generalized degrees of the nodes. We evaluate the entropy of these ensembles, finding the asymptotic expression for the number of simplicial complexes in the configuration model. We provide the algorithms for the construction of simplicial complexes belonging to the configuration model and the canonical ensemble of simplicial complexes. We give an expression for the structural cutoff of simplicial complexes that for simplicial complexes of dimension d =1 reduces to the structural cutoff of simple networks. Finally, we provide a numerical analysis of the natural correlations emerging in the configuration model of simplicial complexes without structural cutoff.
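The generalized degree k_{d,d'} of a d'-face is the number of d-dimensional simplices incident to it; counting it from a facet list is straightforward. A sketch under the assumption that all facets share the same dimension d (helper name hypothetical):

```python
from collections import Counter
from itertools import combinations

def generalized_degrees(facets, dprime):
    # Count, for every d'-face, the number of facets (d-simplices) that
    # contain it; faces are keyed as sorted vertex tuples.
    deg = Counter()
    for simplex in facets:
        for face in combinations(sorted(simplex), dprime + 1):
            deg[face] += 1
    return dict(deg)
```

For the two triangles {0,1,2} and {1,2,3}, the shared edge (1,2) has generalized degree k_{2,1} = 2 while every other edge has degree 1; the configuration model of the abstract fixes such sequences exactly, the canonical ensemble only in expectation.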
Ensemble Methods Foundations and Algorithms
Zhou, Zhi-Hua
2012-01-01
An up-to-date, self-contained introduction to a state-of-the-art machine learning approach, Ensemble Methods: Foundations and Algorithms shows how these accurate methods are used in real-world tasks. It gives you the necessary groundwork to carry out further research in this evolving field. After presenting background and terminology, the book covers the main algorithms and theories, including Boosting, Bagging, Random Forest, averaging and voting schemes, the Stacking method, mixture of experts, and diversity measures. It also discusses multiclass extension, noise tolerance, error-ambiguity a
Linkage-specific conformational ensembles of non-canonical polyubiquitin chains.
Castañeda, Carlos A; Chaturvedi, Apurva; Camara, Christina M; Curtis, Joseph E; Krueger, Susan; Fushman, David
2016-02-17
Polyubiquitination is a critical protein post-translational modification involved in a variety of processes in eukaryotic cells. The molecular basis for selective recognition of the polyubiquitin signals by cellular receptors is determined by the conformations polyubiquitin chains adopt; this has been demonstrated for K48- and K63-linked chains. Recent studies of the so-called non-canonical chains (linked via K6, K11, K27, K29, or K33) suggest they play important regulatory roles in growth, development, and immune system pathways, but biophysical studies are needed to elucidate the physical/structural basis of their interactions with receptors. A first step towards this goal is characterization of the conformations these chains adopt in solution. We assembled diubiquitins (Ub2) comprised of every lysine linkage. Using solution NMR measurements, small-angle neutron scattering (SANS), and in silico ensemble generation, we determined population-weighted conformational ensembles that shed light on the structure and dynamics of the non-canonical polyubiquitin chains. We found that polyubiquitin is conformationally heterogeneous, and each chain type exhibits unique conformational ensembles. For example, K6-Ub2 and K11-Ub2 (at physiological salt concentration) are in dynamic equilibrium between at least two conformers, where one exhibits a unique Ub/Ub interface, distinct from that observed in K48-Ub2 but similar to crystal structures of these chains. Conformers for K29-Ub2 and K33-Ub2 resemble recent crystal structures in the ligand-bound state. Remarkably, a number of diubiquitins adopt conformers similar to K48-Ub2 or K63-Ub2, suggesting potential overlap of biological function among different lysine linkages. These studies highlight the potential power of determining function from elucidation of conformational states. PMID:26422168
Extending canonical Monte Carlo methods: II
Velazquez, L.; Curilef, S.
2010-04-01
We have previously presented a methodology for extending canonical Monte Carlo methods inspired by a suitable extension of the canonical fluctuation relation C = β²⟨δE²⟩ compatible with negative heat capacities, C < 0. Now, we improve this methodology by including the finite size effects that reduce the precision of a direct determination of the microcanonical caloric curve β(E) = ∂S(E)/∂E, as well as by carrying out a better implementation of the MC schemes. We show that, despite the modifications considered, the extended canonical MC methods lead to an impressive overcoming of the so-called supercritical slowing down observed close to the region of the temperature driven first-order phase transition. In this case, the size dependence of the decorrelation time τ is reduced from an exponential growth to a weak power-law behavior, τ(N) ∝ N^α, as is shown in the particular case of the 2D seven-state Potts model where the exponent α = 0.14-0.18.
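The fluctuation relation C = β²⟨δE²⟩ that this extension starts from is simple to estimate from canonical MC samples. A sketch (hypothetical helper; the extended method itself modifies the sampling weight and is not reproduced here):

```python
def heat_capacity(energies, beta):
    # Canonical fluctuation estimate C = beta^2 * (<E^2> - <E>^2)
    # computed from a list of sampled energies.
    n = len(energies)
    mean = sum(energies) / n
    var = sum((e - mean) ** 2 for e in energies) / n
    return beta * beta * var
```

Note that this estimator is non-negative by construction, which is precisely why plain canonical sampling cannot resolve the C < 0 branches that motivate the extended ensemble.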
DEFF Research Database (Denmark)
Sloth, Peter
1993-01-01
The grand canonical ensemble has been used to study the evaluation of single ion activity coefficients in homogeneous ionic fluids. In this work, the Coulombic interactions are truncated according to the minimum image approximation, and the ions are assumed to be placed in a structureless, homogeneous dielectric continuum. Grand canonical ensemble Monte Carlo calculation results for two primitive model electrolyte solutions are presented. Also, a formula involving the second moments of the total correlation functions is derived from fluctuation theory, which applies for the derivatives of the individual ionic activity coefficients with respect to the total ionic concentration. This formula has previously been proposed on the basis of somewhat different considerations.
Phase structures of 4D stringy charged black holes in canonical ensemble
Jia, Qiang; Lu, J. X.; Tan, Xiao-Jun
2016-08-01
We study the thermodynamics and phase structures of asymptotically flat dilatonic black holes in 4 dimensions, placed in a cavity a la York, in string theory for an arbitrary dilaton coupling. We consider these charged black systems in the canonical ensemble, for which the temperature at the wall of the cavity and the charge inside it are fixed. We find that the dilaton coupling plays the key role in the underlying phase structures. The connection of these black holes to higher-dimensional brane systems via diagonal (double) and/or direct dimensional reductions indicates that the phase structures of the former may exhaust all possible ones of the latter, which are more difficult to study, under conditions of similar settings. Our study also shows that a diagonal (double) dimensional reduction preserves the underlying phase structure while a direct dimensional reduction has the potential to change it.
Phase structures of 4D stringy charged black holes in canonical ensemble
Jia, Qiang; Tan, Xiao-Jun
2016-01-01
We study the thermodynamics and phase structures of asymptotically flat dilatonic black holes in 4 dimensions, placed in a cavity a la York, in string theory for an arbitrary dilaton coupling. We consider these charged black systems in the canonical ensemble, for which the temperature at the wall of the cavity and the charge inside it are fixed. We find that the dilaton coupling plays the key role in the underlying phase structures. The connection of these black holes to higher-dimensional brane systems via diagonal (double) and/or direct dimensional reductions indicates that the phase structures of the former may exhaust all possible ones of the latter, which are more difficult to study, under conditions of similar settings. Our study also shows that a diagonal (double) dimensional reduction preserves the underlying phase structure while a direct dimensional reduction has the potential to change it.
Li, Gu-Qiang; Mo, Jie-Xiong
2016-06-01
The phase transition of a four-dimensional charged AdS black hole solution in the R + f(R) gravity with constant curvature is investigated in the grand canonical ensemble, where we find novel characteristics quite different from those in the canonical ensemble. There exists no critical point for the T-S curve, while in former research a critical point was found for both the T-S curve and the T-r+ curve when the electric charge of f(R) black holes is kept fixed. Moreover, we derive the explicit expression for the specific heat, the analog of the volume expansion coefficient and the isothermal compressibility coefficient when the electric potential of the f(R) AdS black hole is fixed. The specific heat CΦ encounters a divergence when 0 < Φ < b. This finding also differs from the result in the canonical ensemble, where there may be two, one or no divergence points for the specific heat CQ. To examine the phase structure newly found in the grand canonical ensemble, we appeal to the well-known thermodynamic geometry tools and derive the analytic expressions for both the Weinhold scalar curvature and the Ruppeiner scalar curvature. It is shown that they diverge exactly where the specific heat CΦ diverges.
Li, Gu-Qiang
2016-01-01
The phase transition of a four-dimensional charged AdS black hole solution in the $R+f(R)$ gravity with constant curvature is investigated in the grand canonical ensemble, where we find novel characteristics quite different from those in the canonical ensemble. There exists no critical point for the $T-S$ curve while in former research a critical point was found for both the $T-S$ curve and the $T-r_+$ curve when the electric charge of $f(R)$ black holes is kept fixed. Moreover, we derive the explicit expression for the specific heat, the analog of volume expansion coefficient and isothermal compressibility coefficient when the electric potential of the $f(R)$ AdS black hole is fixed. The specific heat $C_\Phi$ encounters a divergence when $0<\Phi<b$. This finding also differs from the result in the canonical ensemble, where there may be two, one or no divergence points for the specific heat $C_Q$. To examine the phase structure newly found in the grand canonical ensemble, we appeal to the well-known thermodynamic geometry tools and de...
DEFF Research Database (Denmark)
Sloth, Peter
1990-01-01
Density profiles and partition coefficients are obtained for hard-sphere fluids inside hard, spherical pores of different sizes by grand canonical ensemble Monte Carlo calculations. The Monte Carlo results are compared to the results obtained by application of different kinds of integral equation...
Canonical vs. micro-canonical sampling methods in a 2D Ising model
International Nuclear Information System (INIS)
Canonical and micro-canonical Monte Carlo algorithms were implemented on a 2D Ising model. Expressions for the internal energy, U, inverse temperature, Z, and specific heat, C, are given. These quantities were calculated over a range of temperature, lattice sizes, and time steps. Both algorithms accurately simulate the Ising model. To obtain greater than three decimal accuracy from the micro-canonical method requires that the more complicated expression for Z be used. The overall difference between the algorithms is small. The physics of the problem under study should be the deciding factor in determining which algorithm to use. 13 refs., 6 figs., 2 tabs
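The canonical side of the comparison above is the standard Metropolis algorithm on the 2D Ising lattice. A minimal sketch (hypothetical helper; J = 1, periodic boundaries assumed):

```python
import math
import random

def ising_energy_per_spin(L=8, beta=1.0, sweeps=200, seed=1):
    # Canonical (Metropolis) sampling of the 2D Ising model starting from
    # the all-up state; returns the final energy per spin.
    rng = random.Random(seed)
    s = [[1] * L for _ in range(L)]
    for _ in range(sweeps * L * L):
        i, j = rng.randrange(L), rng.randrange(L)
        nb = (s[(i + 1) % L][j] + s[(i - 1) % L][j]
              + s[i][(j + 1) % L] + s[i][(j - 1) % L])
        dE = 2 * s[i][j] * nb  # energy cost of flipping spin (i, j)
        if dE <= 0 or rng.random() < math.exp(-beta * dE):
            s[i][j] = -s[i][j]
    # total energy, counting each nearest-neighbour bond once
    E = -sum(s[i][j] * (s[(i + 1) % L][j] + s[i][(j + 1) % L])
             for i in range(L) for j in range(L))
    return E / (L * L)
```

At β = 1.0, well below the critical temperature, the chain stays near the ground state with energy per spin close to -2; a micro-canonical (demon-style) algorithm would instead fix U and estimate the inverse temperature from the demon-energy distribution, which is where the more complicated expression for Z in the abstract comes in.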
Indian Academy of Sciences (India)
W. X. Zhong
2014-09-01
In this paper, we use the canonical ensemble model to discuss the radiation of a Schwarzschild–de Sitter black hole at the black hole horizon. Using this model, we calculate the probability distribution function of the emission shell. The statistical meaning of this distribution function is then used to investigate the black hole tunnelling radiation spectrum. We also discuss the mechanism of information flowing from the black hole.
International Nuclear Information System (INIS)
This paper reviews basic results from a theory of the a priori classical probabilities (weights) in state-averaged complete active space self-consistent field (SA-CASSCF) models. It addresses how the classical probabilities limit the invariance of the self-consistency condition to transformations of the complete active space configuration interaction (CAS-CI) problem. Such transformations are of interest for choosing representations of the SA-CASSCF solution that are diabatic with respect to some interaction. I achieve the known result that a SA-CASSCF can be self-consistently transformed only within degenerate subspaces of the CAS-CI ensemble density matrix. For uniformly distributed (“microcanonical”) SA-CASSCF ensembles, self-consistency is invariant to any unitary CAS-CI transformation that acts locally on the ensemble support. Most SA-CASSCF applications in current literature are microcanonical. A problem with microcanonical SA-CASSCF models for problems with “more diabatic than adiabatic” states is described. The problem is that not all diabatic energies and couplings are self-consistently resolvable. A canonical-ensemble SA-CASSCF strategy is proposed to solve the problem. For canonical-ensemble SA-CASSCF, the equilibrated ensemble is a Boltzmann density matrix parametrized by its own CAS-CI Hamiltonian and a Lagrange multiplier acting as an inverse “temperature,” unrelated to the physical temperature. Like the convergence criterion for microcanonical-ensemble SA-CASSCF, the equilibration condition for canonical-ensemble SA-CASSCF is invariant to transformations that act locally on the ensemble CAS-CI density matrix. The advantage of a canonical-ensemble description is that more adiabatic states can be included in the support of the ensemble without running into convergence problems. The constraint on the dimensionality of the problem is relieved by the introduction of an energy constraint. The method is illustrated with a complete active space
Ensemble methods for noise in classification problems
Verbaeten, Sofie; Van Assche, Anneleen
2003-01-01
Ensemble methods combine a set of classifiers to construct a new classifier that is (often) more accurate than any of its component classifiers. In this paper, we use ensemble methods to identify noisy training examples. More precisely, we consider the problem of mislabeled training examples in classification tasks, and address this problem by pre-processing the training set, i.e. by identifying and removing outliers from the training set. We study a number of filter techniques that are based...
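A common filter of this kind removes a training example when a majority of ensemble members, trained on other folds, misclassify it. A minimal majority-vote sketch (hypothetical names; member predictions assumed precomputed):

```python
def majority_filter(predictions, labels):
    # predictions: one list of predicted labels per ensemble member.
    # Returns indices of examples misclassified by a strict majority of
    # members, i.e. the candidates to treat as mislabeled noise.
    flagged = []
    for i, y in enumerate(labels):
        wrong = sum(1 for member in predictions if member[i] != y)
        if wrong > len(predictions) / 2:
            flagged.append(i)
    return flagged
```

A "consensus" variant flags an example only when every member misclassifies it, trading recall for a lower risk of discarding correctly labeled data.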
Rainforth, Tom; Wood, Frank
2015-01-01
We introduce canonical correlation forests (CCFs), a new decision tree ensemble method for classification. Individual canonical correlation trees are binary decision trees with hyperplane splits based on canonical correlation components. Unlike axis-aligned alternatives, the decision surfaces of CCFs are not restricted to the coordinate system of the input features and therefore more naturally represent data with correlation between the features. Additionally we introduce a novel alternative ...
Hamiltonian Dynamics of Bounded Spacetime and Black Hole Entropy Canonical Method
Park, M
2002-01-01
Recently, Carlip proposed a formulation which computes the Bekenstein-Hawking (BH) entropy for the black hole in any dimension. But it has been known that his theory has some technical inconsistencies, although his idea has received wide attention. This paper addresses a resolution of the problem. By considering a correct gravity action whose variational principle is well defined at the horizon, one can $derive$ the correct Virasoro generator for the surface deformation at the horizon through the canonical method. The grand canonical ensemble, where the horizon and its angular velocity and temperature are fixed, is appropriate for my purpose. From the canonical quantization of the Virasoro algebra, it is found that the existence of the $classical$ Virasoro algebra is crucial to obtain the operator Virasoro algebra which produces the right conformal weights $\sim A/\hbar G$ for the semiclassical black hole entropy from the universal Cardy's entropy formula. The correct numerical factor 1/4 is obtained by choosin...
Ensemble Kalman methods for inverse problems
International Nuclear Information System (INIS)
The ensemble Kalman filter (EnKF) was introduced by Evensen in 1994 (Evensen 1994 J. Geophys. Res. 99 10143–62) as a novel method for data assimilation: state estimation for noisily observed time-dependent problems. Since that time it has had enormous impact in many application domains because of its robustness and ease of implementation, and numerical evidence of its accuracy. In this paper we propose the application of an iterative ensemble Kalman method for the solution of a wide class of inverse problems. In this context we show that the estimate of the unknown function that we obtain with the ensemble Kalman method lies in a subspace A spanned by the initial ensemble. Hence the resulting error may be bounded above by the error found from the best approximation in this subspace. We provide numerical experiments which compare the error incurred by the ensemble Kalman method for inverse problems with the error of the best approximation in A, and with variants on traditional least-squares approaches, restricted to the subspace A. In so doing we demonstrate that the ensemble Kalman method for inverse problems provides a derivative-free optimization method with comparable accuracy to that achieved by traditional least-squares approaches. Furthermore, we also demonstrate that the accuracy is of the same order of magnitude as that achieved by the best approximation. Three examples are used to demonstrate these assertions: inversion of a compact linear operator; inversion of piezometric head to determine hydraulic conductivity in a Darcy model of groundwater flow; and inversion of Eulerian velocity measurements at positive times to determine the initial condition in an incompressible fluid. (paper)
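One iteration of the ensemble Kalman method for an inverse problem y = G(u) + noise updates each member with a Kalman gain built from ensemble covariances; because the update is a linear combination of members, the estimate stays in the subspace spanned by the initial ensemble, as the abstract states. A scalar-parameter sketch with perturbed observations (hypothetical helper, not the paper's code):

```python
import random

def enkf_update(ensemble, forward, y_obs, obs_var, rng):
    # One ensemble Kalman iteration for a scalar unknown u and forward
    # map G = forward, using perturbed observations.
    g = [forward(u) for u in ensemble]
    n = len(ensemble)
    mu_u = sum(ensemble) / n
    mu_g = sum(g) / n
    c_ug = sum((u - mu_u) * (gv - mu_g)
               for u, gv in zip(ensemble, g)) / (n - 1)
    c_gg = sum((gv - mu_g) ** 2 for gv in g) / (n - 1)
    gain = c_ug / (c_gg + obs_var)  # Kalman gain
    return [u + gain * (y_obs + rng.gauss(0.0, obs_var ** 0.5) - gv)
            for u, gv in zip(ensemble, g)]
```

For a linear map such as G(u) = 2u with observation y = 4, a few iterations drive the ensemble mean toward the least-squares solution u = 2 without ever evaluating a derivative of G.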
International Nuclear Information System (INIS)
Accurate determination of thermodynamic properties of petroleum reservoir fluids is of great interest to many applications, especially in petroleum engineering and chemical engineering. Molecular simulation has many appealing features, especially its requirement of fewer tuned parameters but yet better predicting capability; however it is well known that molecular simulation is very CPU expensive, as compared to equation of state approaches. We have recently introduced an efficient thermodynamically consistent technique to regenerate rapidly Monte Carlo Markov Chains (MCMCs) at different thermodynamic conditions from the existing data points that have been pre-computed with expensive classical simulation. This technique can speed up the simulation more than a million times, making the regenerated molecular simulation almost as fast as equation of state approaches. In this paper, this technique is first briefly reviewed and then numerically investigated in its capability of predicting ensemble averages of primary quantities at different neighboring thermodynamic conditions to the original simulated MCMCs. Moreover, this extrapolation technique is extended to predict second derivative properties (e.g. heat capacity and fluid compressibility). The method works by reweighting and reconstructing generated MCMCs in canonical ensemble for Lennard-Jones particles. In this paper, system's potential energy, pressure, isochoric heat capacity and isothermal compressibility along isochors, isotherms and paths of changing temperature and density from the original simulated points were extrapolated. Finally, an optimized set of Lennard-Jones parameters (ε, σ) for single site models were proposed for methane, nitrogen and carbon monoxide
Kadoura, Ahmad; Sun, Shuyu; Salama, Amgad
2014-08-01
Accurate determination of thermodynamic properties of petroleum reservoir fluids is of great interest to many applications, especially in petroleum engineering and chemical engineering. Molecular simulation has many appealing features, especially its requirement of fewer tuned parameters but yet better predicting capability; however it is well known that molecular simulation is very CPU expensive, as compared to equation of state approaches. We have recently introduced an efficient thermodynamically consistent technique to regenerate rapidly Monte Carlo Markov Chains (MCMCs) at different thermodynamic conditions from the existing data points that have been pre-computed with expensive classical simulation. This technique can speed up the simulation more than a million times, making the regenerated molecular simulation almost as fast as equation of state approaches. In this paper, this technique is first briefly reviewed and then numerically investigated in its capability of predicting ensemble averages of primary quantities at different neighboring thermodynamic conditions to the original simulated MCMCs. Moreover, this extrapolation technique is extended to predict second derivative properties (e.g. heat capacity and fluid compressibility). The method works by reweighting and reconstructing generated MCMCs in canonical ensemble for Lennard-Jones particles. In this paper, system's potential energy, pressure, isochoric heat capacity and isothermal compressibility along isochors, isotherms and paths of changing temperature and density from the original simulated points were extrapolated. Finally, an optimized set of Lennard-Jones parameters (ε, σ) for single site models were proposed for methane, nitrogen and carbon monoxide.
Kadoura, Ahmad Salim
2014-08-01
Accurate determination of thermodynamic properties of petroleum reservoir fluids is of great interest to many applications, especially in petroleum engineering and chemical engineering. Molecular simulation has many appealing features, especially its requirement of fewer tuned parameters but yet better predicting capability; however it is well known that molecular simulation is very CPU expensive, as compared to equation of state approaches. We have recently introduced an efficient thermodynamically consistent technique to regenerate rapidly Monte Carlo Markov Chains (MCMCs) at different thermodynamic conditions from the existing data points that have been pre-computed with expensive classical simulation. This technique can speed up the simulation more than a million times, making the regenerated molecular simulation almost as fast as equation of state approaches. In this paper, this technique is first briefly reviewed and then numerically investigated in its capability of predicting ensemble averages of primary quantities at different neighboring thermodynamic conditions to the original simulated MCMCs. Moreover, this extrapolation technique is extended to predict second derivative properties (e.g. heat capacity and fluid compressibility). The method works by reweighting and reconstructing generated MCMCs in canonical ensemble for Lennard-Jones particles. In this paper, system's potential energy, pressure, isochoric heat capacity and isothermal compressibility along isochors, isotherms and paths of changing temperature and density from the original simulated points were extrapolated. Finally, an optimized set of Lennard-Jones parameters (ε, σ) for single site models were proposed for methane, nitrogen and carbon monoxide. © 2014 Elsevier Inc.
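The reweighting idea underlying this MCMC-regeneration technique is classical: canonical samples recorded at β₀ can be reused to estimate an average at a nearby β₁ by attaching Boltzmann weight ratios. A minimal sketch (hypothetical helper; the paper's full reconstruction scheme is more involved):

```python
import math

def reweight(values, energies, beta0, beta1):
    # Estimate <A> at beta1 from samples (A_i, E_i) generated at beta0:
    # <A>_beta1 = sum_i A_i w_i / sum_i w_i, w_i = exp(-(beta1-beta0)*E_i).
    e_min = min(energies)  # shift energies for numerical stability
    w = [math.exp(-(beta1 - beta0) * (e - e_min)) for e in energies]
    return sum(a * wi for a, wi in zip(values, w)) / sum(w)
```

The estimate degrades as |β₁ − β₀| grows, because the effective sample size collapses when the weights become strongly non-uniform; this is why the technique targets neighboring thermodynamic conditions.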
Novotny, M A; Yuan, S; Miyashita, S; De Raedt, H; Michielsen, K
2016-01-01
We study measures of decoherence and thermalization of a quantum system $S$ in the presence of a quantum environment (bath) $E$. The entirety $S$$+$$E$ is prepared in a canonical thermal state at a finite temperature, that is the entirety is in a steady state. Both our numerical results and theoretical predictions show that measures of the decoherence and the thermalization of $S$ are generally finite, even in the thermodynamic limit, when the entirety $S$$+$$E$ is at finite temperature. Notably, applying perturbation theory with respect to the system-environment coupling strength, we find that under common Hamiltonian symmetries, up to first order in the coupling strength it is sufficient to consider $S$ uncoupled from $E$, but entangled with $E$, to predict decoherence and thermalization measures of $S$. This decoupling allows closed form expressions for perturbative expansions for the measures of decoherence and thermalization in terms of the free energies of $S$ and of $E$. Large-scale numerical results f...
Novotny, M. A.; Jin, F.; Yuan, S.; Miyashita, S.; De Raedt, H.; Michielsen, K.
2016-03-01
We study measures of decoherence and thermalization of a quantum system S in the presence of a quantum environment (bath) E . The entirety S +E is prepared in a canonical-thermal state at a finite temperature; that is, the entirety is in a steady state. Both our numerical results and theoretical predictions show that measures of the decoherence and the thermalization of S are generally finite, even in the thermodynamic limit, when the entirety S +E is at finite temperature. Notably, applying perturbation theory with respect to the system-environment coupling strength, we find that under common Hamiltonian symmetries, up to first order in the coupling strength it is sufficient to consider S uncoupled from E , but entangled with E , to predict decoherence and thermalization measures of S . This decoupling allows closed-form expressions for perturbative expansions for the measures of decoherence and thermalization in terms of the free energies of S and of E . Large-scale numerical results for both coupled and uncoupled entireties with up to 40 quantum spins support these findings.
Electronic chemical response indexes at finite temperature in the canonical ensemble
Energy Technology Data Exchange (ETDEWEB)
Franco-Pérez, Marco, E-mail: qimfranco@hotmail.com, E-mail: jlgm@xanum.uam.mx, E-mail: avela@cinvestav.mx; Gázquez, José L., E-mail: qimfranco@hotmail.com, E-mail: jlgm@xanum.uam.mx, E-mail: avela@cinvestav.mx [Departamento de Química, Universidad Autónoma Metropolitana-Iztapalapa, Av. San Rafael Atlixco 186, México, D. F. 09340, México (Mexico); Departamento de Química, Centro de Investigación y de Estudios Avanzados, Av. Instituto Politécnico Nacional 2508, México, D. F. 07360, México (Mexico); Vela, Alberto, E-mail: qimfranco@hotmail.com, E-mail: jlgm@xanum.uam.mx, E-mail: avela@cinvestav.mx [Departamento de Química, Centro de Investigación y de Estudios Avanzados, Av. Instituto Politécnico Nacional 2508, México, D. F. 07360, México (Mexico)
2015-07-14
Assuming that the electronic energy is given by a smooth function of the number of electrons and within the extension of density functional theory to finite temperature, the first and second order chemical reactivity response functions of the Helmholtz free energy with respect to the temperature, the number of electrons, and the external potential are derived. It is found that in all cases related to the first or second derivatives with respect to the number of electrons or the external potential, there is a term given by the average of the corresponding derivative of the electronic energy of each state (ground and excited). For the second derivatives, including those related with the temperature, there is a thermal fluctuation contribution that is zero at zero temperature. Thus, all expressions reduce correctly to their corresponding chemical reactivity expressions at zero temperature and show that, at room temperature, the corrections are very small. When the assumption that the electronic energy is given by a smooth function of the number of electrons is replaced by the straight lines behavior connecting integer values, as required by the ensemble theorem, one needs to introduce directional derivatives in most cases, so that the temperature dependent expressions reduce correctly to their zero temperature counterparts. However, the main result holds, namely, at finite temperature the thermal corrections to the chemical reactivity response functions are very small. Consequently, the present work validates the usage of reactivity indexes calculated at zero temperature to infer chemical behavior at room and even higher temperatures.
Electronic chemical response indexes at finite temperature in the canonical ensemble
International Nuclear Information System (INIS)
Assuming that the electronic energy is given by a smooth function of the number of electrons and within the extension of density functional theory to finite temperature, the first and second order chemical reactivity response functions of the Helmholtz free energy with respect to the temperature, the number of electrons, and the external potential are derived. It is found that in all cases related to the first or second derivatives with respect to the number of electrons or the external potential, there is a term given by the average of the corresponding derivative of the electronic energy of each state (ground and excited). For the second derivatives, including those related with the temperature, there is a thermal fluctuation contribution that is zero at zero temperature. Thus, all expressions reduce correctly to their corresponding chemical reactivity expressions at zero temperature and show that, at room temperature, the corrections are very small. When the assumption that the electronic energy is given by a smooth function of the number of electrons is replaced by the straight lines behavior connecting integer values, as required by the ensemble theorem, one needs to introduce directional derivatives in most cases, so that the temperature dependent expressions reduce correctly to their zero temperature counterparts. However, the main result holds, namely, at finite temperature the thermal corrections to the chemical reactivity response functions are very small. Consequently, the present work validates the usage of reactivity indexes calculated at zero temperature to infer chemical behavior at room and even higher temperatures
Ensemble Machine Learning Methods and Applications
Ma, Yunqian
2012-01-01
It is common wisdom that gathering a variety of views and inputs improves the process of decision making, and, indeed, underpins a democratic society. Dubbed “ensemble learning” by researchers in computational intelligence and machine learning, it is known to improve a decision system’s robustness and accuracy. Now, fresh developments are allowing researchers to unleash the power of ensemble learning in an increasing range of real-world applications. Ensemble learning algorithms such as “boosting” and “random forest” facilitate solutions to key computational issues such as face detection and are now being applied in areas as diverse as object tracking and bioinformatics. Responding to a shortage of literature dedicated to the topic, this volume offers comprehensive coverage of state-of-the-art ensemble learning techniques, including various contributions from researchers in leading industrial research labs. At once a solid theoretical study and a practical guide, the volume is a windfall for r...
The canonical and grand canonical models for nuclear multifragmentation
Indian Academy of Sciences (India)
G Chaudhuri; S Das Gupta
2010-08-01
Many observables seen in intermediate energy heavy-ion collisions can be explained on the basis of statistical equilibrium. Calculations based on statistical equilibrium can be implemented in microcanonical ensemble, canonical ensemble or grand canonical ensemble. This paper deals with calculations with canonical and grand canonical ensembles. A recursive relation developed recently allows calculations with arbitrary precision for many nuclear problems. Calculations are done to study the nature of phase transition in nuclear matter.
Multivariate localization methods for ensemble Kalman filtering
Roh, S.
2015-05-08
In ensemble Kalman filtering (EnKF), the small number of ensemble members that is feasible to use in a practical data assimilation application leads to sampling variability of the estimates of the background error covariances. The standard approach to reducing the effects of this sampling variability, which has also been found to be highly efficient in improving the performance of EnKF, is the localization of the estimates of the covariances. One family of localization techniques is based on taking the Schur (entry-wise) product of the ensemble-based sample covariance matrix and a correlation matrix whose entries are obtained by the discretization of a distance-dependent correlation function. While the proper definition of the localization function for a single state variable has been extensively investigated, a rigorous definition of the localization function for multiple state variables has seldom been considered. This paper introduces two strategies for the construction of localization functions for multiple state variables. The proposed localization functions are tested in experiments that assimilate simulated observations into the bivariate Lorenz 95 model.
Multivariate localization methods for ensemble Kalman filtering
Directory of Open Access Journals (Sweden)
S. Roh
2015-05-01
Full Text Available In ensemble Kalman filtering (EnKF), the small number of ensemble members that is feasible to use in a practical data assimilation application leads to sampling variability of the estimates of the background error covariances. The standard approach to reducing the effects of this sampling variability, which has also been found to be highly efficient in improving the performance of EnKF, is the localization of the estimates of the covariances. One family of localization techniques is based on taking the Schur (entry-wise) product of the ensemble-based sample covariance matrix and a correlation matrix whose entries are obtained by the discretization of a distance-dependent correlation function. While the proper definition of the localization function for a single state variable has been extensively investigated, a rigorous definition of the localization function for multiple state variables has seldom been considered. This paper introduces two strategies for the construction of localization functions for multiple state variables. The proposed localization functions are tested in experiments that assimilate simulated observations into the bivariate Lorenz 95 model.
Multivariate localization methods for ensemble Kalman filtering
Roh, S.; Jun, M.; Szunyogh, I.; Genton, M. G.
2015-12-01
In ensemble Kalman filtering (EnKF), the small number of ensemble members that is feasible to use in a practical data assimilation application leads to sampling variability of the estimates of the background error covariances. The standard approach to reducing the effects of this sampling variability, which has also been found to be highly efficient in improving the performance of EnKF, is the localization of the estimates of the covariances. One family of localization techniques is based on taking the Schur (element-wise) product of the ensemble-based sample covariance matrix and a correlation matrix whose entries are obtained by the discretization of a distance-dependent correlation function. While the proper definition of the localization function for a single state variable has been extensively investigated, a rigorous definition of the localization function for multiple state variables that exist at the same locations has seldom been considered. This paper introduces two strategies for the construction of localization functions for multiple state variables. The proposed localization functions are tested in experiments that assimilate simulated observations into the bivariate Lorenz 95 model.
Multivariate localization methods for ensemble Kalman filtering
Roh, S.
2015-12-03
In ensemble Kalman filtering (EnKF), the small number of ensemble members that is feasible to use in a practical data assimilation application leads to sampling variability of the estimates of the background error covariances. The standard approach to reducing the effects of this sampling variability, which has also been found to be highly efficient in improving the performance of EnKF, is the localization of the estimates of the covariances. One family of localization techniques is based on taking the Schur (element-wise) product of the ensemble-based sample covariance matrix and a correlation matrix whose entries are obtained by the discretization of a distance-dependent correlation function. While the proper definition of the localization function for a single state variable has been extensively investigated, a rigorous definition of the localization function for multiple state variables that exist at the same locations has seldom been considered. This paper introduces two strategies for the construction of localization functions for multiple state variables. The proposed localization functions are tested in experiments that assimilate simulated observations into the bivariate Lorenz 95 model.
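The Schur-product localization described in the records above can be sketched in a few lines. The Gaspari-Cohn fifth-order taper used here is a common choice for the distance-dependent correlation function, though the abstracts do not prescribe a specific one; treat this as a hypothetical single-variable illustration, not the papers' multivariate construction:

```python
import numpy as np

def gaspari_cohn(r):
    """Gaspari-Cohn fifth-order compactly supported taper (zero beyond r = 2)."""
    r = np.abs(np.asarray(r, dtype=float))
    taper = np.zeros_like(r)
    m1 = r <= 1.0
    m2 = (r > 1.0) & (r < 2.0)
    z = r[m1]
    taper[m1] = -z**5 / 4 + z**4 / 2 + 5 * z**3 / 8 - 5 * z**2 / 3 + 1
    z = r[m2]
    taper[m2] = z**5 / 12 - z**4 / 2 + 5 * z**3 / 8 + 5 * z**2 / 3 - 5 * z + 4 - 2 / (3 * z)
    return taper

def localized_cov(ensemble, dists, L):
    """Schur (entry-wise) product of the sample covariance and a taper matrix.

    ensemble: (N_members, n_state) array; dists: (n_state, n_state) distances;
    L: localization length scale (taper support ends at 2*L).
    """
    X = ensemble - ensemble.mean(axis=0)
    P = X.T @ X / (ensemble.shape[0] - 1)   # ensemble-based sample covariance
    rho = gaspari_cohn(dists / L)           # discretized correlation function
    return rho * P                          # entry-wise (Schur) product
```

The taper leaves nearby covariances almost untouched and zeroes out spurious long-range sample correlations, which is exactly the effect localization is meant to have.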
Hybrid Intrusion Detection Using Ensemble of Classification Methods
Directory of Open Access Journals (Sweden)
M.Govindarajan
2014-01-01
Full Text Available One of the major developments in machine learning in the past decade is the ensemble method, which finds a highly accurate classifier by combining many moderately accurate component classifiers. In this research work, new ensemble classification methods are proposed for homogeneous ensemble classifiers using bagging and for heterogeneous ensemble classifiers using an arcing classifier, and their performances are analyzed in terms of accuracy. A classifier ensemble is designed using Radial Basis Function (RBF) and Support Vector Machine (SVM) as base classifiers. The feasibility and the benefits of the proposed approaches are demonstrated by means of real and benchmark data sets of intrusion detection. The main originality of the proposed approach lies in its three main parts: a preprocessing phase, a classification phase and a combining phase. A wide range of comparative experiments is conducted for real and benchmark data sets of intrusion detection. The accuracy of the base classifiers is compared with that of the homogeneous and heterogeneous models for the data mining problem. The proposed ensemble methods provide a significant improvement in accuracy compared to individual classifiers, and the heterogeneous models exhibit better results than the homogeneous models for real and benchmark data sets of intrusion detection.
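As a minimal illustration of the homogeneous (bagging) side of such an ensemble, the sketch below trains decision stumps on bootstrap resamples and combines them by majority vote. The stump base learner is a stand-in for the RBF/SVM base classifiers of the paper, chosen only to keep the example self-contained:

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_stump(X, y):
    """Best single-feature threshold classifier (a minimal base learner)."""
    best, best_err = (0, 0.0, 1), np.inf
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            for s in (1, -1):
                pred = np.where(s * (X[:, j] - t) > 0, 1, 0)
                err = np.mean(pred != y)
                if err < best_err:
                    best_err, best = err, (j, t, s)
    return best

def stump_predict(stump, X):
    j, t, s = stump
    return np.where(s * (X[:, j] - t) > 0, 1, 0)

def bagging_fit(X, y, n_estimators=25):
    """Bagging: fit each base learner on a bootstrap resample of the data."""
    models = []
    for _ in range(n_estimators):
        idx = rng.integers(0, len(X), len(X))   # sample with replacement
        models.append(fit_stump(X[idx], y[idx]))
    return models

def bagging_predict(models, X):
    """Combine the ensemble by majority vote."""
    votes = np.stack([stump_predict(m, X) for m in models])
    return (votes.mean(axis=0) > 0.5).astype(int)
```

The bootstrap resampling is what makes the component classifiers differ; the vote then averages away their individual errors.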
Kocharovsky, V. V.; Kocharovsky, Vl. V.; Tarasov, S. V.
2016-01-01
The analytical theory of Bose-Einstein condensation of an ideal gas in mesoscopic systems has been briefly reviewed in application to traps with arbitrary shapes and dimension. This theory describes the phases of the classical gas and the formed Bose-Einstein condensate, as well as the entire vicinity of the phase transition point. The statistics and thermodynamics of Bose-Einstein condensation have been studied in detail, including their self-similar structure in the critical region, transition to the thermodynamic limit, effect of boundary conditions on the properties of a system, and nonequivalence of the description of Bose-Einstein condensation in different statistical ensembles. The complete classification of universality classes of Bose-Einstein condensation has been given.
An Alternative Method to Predict Performance: Canonical Redundancy Analysis.
Dawson-Saunders, Beth; Doolen, Deane R.
1981-01-01
The relationships between predictors of performance and subsequent measures of clinical performance in medical school were examined for two classes at Southern Illinois University of Medicine. Canonical redundancy analysis was used to evaluate the association between six academic and three biographical preselection characteristics and four…
Velazquez, L.; Castro-Palacio, J. C.
2015-03-01
Velazquez and Curilef [J. Stat. Mech. (2010) P02002, 10.1088/1742-5468/2010/02/P02002; J. Stat. Mech. (2010) P04026, 10.1088/1742-5468/2010/04/P04026] have proposed a methodology to extend Monte Carlo algorithms based on the canonical ensemble. According to our previous study, their proposal allows us to overcome slow sampling problems in systems that undergo any type of temperature-driven phase transition. After a comprehensive review of the ideas and connections of this framework, we discuss the application of a reweighting technique to improve the accuracy of microcanonical calculations, specifically the well-known multihistogram method of Ferrenberg and Swendsen [Phys. Rev. Lett. 63, 1195 (1989), 10.1103/PhysRevLett.63.1195]. As an example of application, we reconsider the study of the four-state Potts model on the square lattice L × L with periodic boundary conditions. This analysis allows us to detect the existence of a very small latent heat per site q_L during the temperature-driven phase transition of this model, whose size dependence seems to follow a power law q_L(L) ∝ (1/L)^z with exponent z ≃ 0.26 ± 0.02. The compatibility of these results with the continuous character of the temperature-driven phase transition as L → +∞ is also discussed.
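The single-histogram version of the Ferrenberg-Swendsen reweighting idea invoked above can be written compactly: samples drawn at inverse temperature beta0 are reweighted by exp(-(beta1 - beta0)E) to estimate canonical averages at a nearby beta1. This is a minimal sketch; the multihistogram method combines several such runs with optimal weights:

```python
import numpy as np

def reweight(energies, obs, beta0, beta1):
    """Estimate <obs> at inverse temperature beta1 from a run at beta0.

    Single-histogram Ferrenberg-Swendsen reweighting: each sample is
    reweighted by exp(-(beta1 - beta0) * E).
    """
    # subtract the mean energy before exponentiating for numerical stability
    w = np.exp(-(beta1 - beta0) * (energies - energies.mean()))
    return np.sum(w * obs) / np.sum(w)
```

The method is reliable only for beta1 close enough to beta0 that the sampled energy histogram still overlaps the target distribution, which is precisely why multihistogram combinations of several runs are used in practice.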
EnsembleGASVR: A novel ensemble method for classifying missense single nucleotide polymorphisms
Rapakoulia, Trisevgeni
2014-04-26
Motivation: Single nucleotide polymorphisms (SNPs) are considered the most frequently occurring DNA sequence variations. Several computational methods have been proposed for the classification of missense SNPs as neutral or disease-associated. However, existing computational approaches fail to select relevant features, choosing them arbitrarily without sufficient documentation. Moreover, they are limited by the problem of missing values and by imbalance between the learning datasets, and most of them do not support their predictions with confidence scores. Results: To overcome these limitations, a novel ensemble computational methodology is proposed. EnsembleGASVR uses a two-step algorithm, which in its first step applies a novel evolutionary embedded algorithm to locate close-to-optimal Support Vector Regression models. In its second step, these models are combined to extract a universal predictor, which is less prone to overfitting, systematizes the rebalancing of the learning sets and uses an internal approach for solving the missing-values problem without loss of information. Confidence scores support all the predictions, and the model can be tuned by modifying the classification thresholds. An extensive study was performed to collect the most relevant features for the problem of classifying SNPs, and a superset of 88 features was constructed. Experimental results show that the proposed framework outperforms well-known algorithms in terms of classification performance on the examined datasets. Finally, the proposed algorithmic framework was able to uncover the significant role of certain features, such as the solvent accessibility feature, and the top-scored predictions were further validated by linking them with disease phenotypes. © The Author 2014.
Canonical density matrix perturbation theory.
Niklasson, Anders M N; Cawkwell, M J; Rubensson, Emanuel H; Rudberg, Elias
2015-12-01
Density matrix perturbation theory [Niklasson and Challacombe, Phys. Rev. Lett. 92, 193001 (2004)] is generalized to canonical (NVT) free-energy ensembles in tight-binding, Hartree-Fock, or Kohn-Sham density-functional theory. The canonical density matrix perturbation theory can be used to calculate temperature-dependent response properties from the coupled perturbed self-consistent field equations as in density-functional perturbation theory. The method is well suited to take advantage of sparse matrix algebra to achieve linear scaling complexity in the computational cost as a function of system size for sufficiently large nonmetallic materials and metals at high temperatures. PMID:26764847
Black Hole Statistical Mechanics and The Angular Velocity Ensemble
Thomson, Mitchell
2012-01-01
A new ensemble - the angular velocity ensemble - is derived using Jaynes' method of maximising entropy subject to prior information constraints. The relevance of the ensemble to black holes is motivated by a discussion of external parameters in statistical mechanics and their absence from the Hamiltonian of general relativity. It is shown how this leads to difficulty in deriving entropy as a function of state and recovering the first law of thermodynamics from the microcanonical and canonical ensembles applied to black holes.
Methods of weyl representation of the phase space and canonical transformations
International Nuclear Information System (INIS)
The author finds the structure of the kernel of a canonical transformation and a differential equation for the symbol of the intertwining operator. The symbol of a general linear canonical transformation is constructed in terms of a Cayley transformation of the symplectic transformation of the phase space. Its singularities and applications to group theory are studied. The Green's functions and spectral projectors of arbitrary quadratic systems are constructed using the classification methods of classical mechanics
Land Cover Mapping Using Ensemble Feature Selection Methods
Gidudu, A; Marwala, T
2008-01-01
Ensemble classification is an emerging approach to land cover mapping whereby the final classification output is a result of a consensus of classifiers. Intuitively, an ensemble system should consist of base classifiers which are diverse, i.e. classifiers whose decision boundaries err differently. In this paper, ensemble feature selection is used to impose diversity in ensembles. The features of the constituent base classifiers for each ensemble were created through an exhaustive search algorithm using different separability indices. For each ensemble, the classification accuracy was derived, as well as a diversity measure purported to quantify the in-ensemble diversity. The correlation between ensemble classification accuracy and the diversity measure was determined to establish the interplay between the two variables. From the findings of this paper, diversity measures as currently formulated do not provide an adequate basis upon which to constitute ensembles for land cover mapping.
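A typical diversity measure of the kind discussed in this record is the average pairwise disagreement between the base classifiers' label vectors. The paper does not specify this exact measure, so the following is only a representative sketch:

```python
import numpy as np

def disagreement(preds):
    """Average pairwise disagreement among classifier predictions.

    preds: (n_classifiers, n_samples) array of predicted labels.
    Returns a value in [0, 1]: 0 for identical classifiers, 1 when every
    pair disagrees on every sample.
    """
    M = preds.shape[0]
    total, pairs = 0.0, 0
    for i in range(M):
        for j in range(i + 1, M):
            total += np.mean(preds[i] != preds[j])   # fraction of samples in conflict
            pairs += 1
    return total / pairs
```

Correlating such a measure with ensemble accuracy across candidate ensembles is the experiment the abstract describes.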
The ensemble switch method for computing interfacial tensions
International Nuclear Information System (INIS)
We present a systematic thermodynamic integration approach to compute interfacial tensions for solid-liquid interfaces, which is based on the ensemble switch method. Applying Monte Carlo simulations and finite-size scaling techniques, we obtain results for hard spheres, which are in agreement with previous computations. The case of solid-liquid interfaces in a variant of the effective Asakura-Oosawa model and of liquid-vapor interfaces in the Lennard-Jones model are discussed as well. We demonstrate that a thorough finite-size analysis of the simulation data is required to obtain precise results for the interfacial tension
The ensemble switch method for computing interfacial tensions
Energy Technology Data Exchange (ETDEWEB)
Schmitz, Fabian; Virnau, Peter [Institute of Physics, Johannes Gutenberg University Mainz, Staudingerweg 9, D-55128 Mainz (Germany)
2015-04-14
We present a systematic thermodynamic integration approach to compute interfacial tensions for solid-liquid interfaces, which is based on the ensemble switch method. Applying Monte Carlo simulations and finite-size scaling techniques, we obtain results for hard spheres, which are in agreement with previous computations. The case of solid-liquid interfaces in a variant of the effective Asakura-Oosawa model and of liquid-vapor interfaces in the Lennard-Jones model are discussed as well. We demonstrate that a thorough finite-size analysis of the simulation data is required to obtain precise results for the interfacial tension.
Ensemble Methods in Data Mining Improving Accuracy Through Combining Predictions
Seni, Giovanni
2010-01-01
This book is aimed at novice and advanced analytic researchers and practitioners -- especially in Engineering, Statistics, and Computer Science. Those with little exposure to ensembles will learn why and how to employ this breakthrough method, and advanced practitioners will gain insight into building even more powerful models. Throughout, snippets of code in R are provided to illustrate the algorithms described and to encourage the reader to try the techniques. The authors are industry experts in data mining and machine learning who are also adjunct professors and popular speakers. Although e
Adaptive error covariances estimation methods for ensemble Kalman filters
International Nuclear Information System (INIS)
This paper presents a computationally fast algorithm for estimating both the system and observation noise covariances of nonlinear dynamics, which can be used in an ensemble Kalman filtering framework. The new method is a modification of Belanger's recursive method that avoids the expensive computational cost of inverting error covariance matrices of products of innovation processes of different lags when the number of observations becomes large. When we use only products of innovation processes up to one lag, the computational cost is indeed comparable to that of a method recently proposed by Berry and Sauer. However, our method is more flexible, since it allows for using information from products of innovation processes of more than one lag. Extensive numerical comparisons between the proposed method and both the original Belanger and the Berry–Sauer schemes are shown in various examples, ranging from low-dimensional linear and nonlinear systems of SDEs to the 40-dimensional stochastically forced Lorenz-96 model. Our numerical results suggest that the proposed scheme is as accurate as the original Belanger scheme on low-dimensional problems and has a wider range of accurate estimates than the Berry–Sauer method on the L-96 example.
Adaptive error covariances estimation methods for ensemble Kalman filters
Energy Technology Data Exchange (ETDEWEB)
Zhen, Yicun, E-mail: zhen@math.psu.edu [Department of Mathematics, The Pennsylvania State University, University Park, PA 16802 (United States); Harlim, John, E-mail: jharlim@psu.edu [Department of Mathematics and Department of Meteorology, The Pennsylvania State University, University Park, PA 16802 (United States)
2015-08-01
This paper presents a computationally fast algorithm for estimating both the system and observation noise covariances of nonlinear dynamics, which can be used in an ensemble Kalman filtering framework. The new method is a modification of Belanger's recursive method that avoids the expensive computational cost of inverting error covariance matrices of products of innovation processes of different lags when the number of observations becomes large. When we use only products of innovation processes up to one lag, the computational cost is indeed comparable to that of a method recently proposed by Berry and Sauer. However, our method is more flexible, since it allows for using information from products of innovation processes of more than one lag. Extensive numerical comparisons between the proposed method and both the original Belanger and the Berry–Sauer schemes are shown in various examples, ranging from low-dimensional linear and nonlinear systems of SDEs to the 40-dimensional stochastically forced Lorenz-96 model. Our numerical results suggest that the proposed scheme is as accurate as the original Belanger scheme on low-dimensional problems and has a wider range of accurate estimates than the Berry–Sauer method on the L-96 example.
A canonical correlation analysis based method for contamination event detection in water sources.
Li, Ruonan; Liu, Shuming; Smith, Kate; Che, Han
2016-06-15
In this study, a general framework integrating a data-driven estimation model is employed for contamination event detection in water sources. Sequential canonical correlation coefficients are updated in the model using multivariate water quality time series. The method uses canonical correlation analysis to study the interplay between two sets of water quality parameters, and is assessed by precision, recall and F-measure. Tested on data from a laboratory contaminant injection experiment, the method could detect a contamination event 1 minute after the introduction of a 1.600 mg l(-1) acrylamide solution. With optimized parameter values, it correctly detects 97.50% of all contamination events with no false alarms. The robustness of the method can be explained using the Bauer-Fike theorem. PMID:27264637
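The canonical correlations underlying such a detector can be computed with a standard QR-plus-SVD recipe. This is a minimal numpy sketch of batch CCA between two parameter sets, not the authors' sequential-update model:

```python
import numpy as np

def canonical_correlations(X, Y):
    """Canonical correlations between two sets of variables (columns).

    X: (n, p) and Y: (n, q) data matrices over the same n observations.
    Returns min(p, q) correlations in decreasing order.
    """
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    qx, _ = np.linalg.qr(X)   # orthonormal basis for the column space of X
    qy, _ = np.linalg.qr(Y)   # orthonormal basis for the column space of Y
    # singular values of the cross-product of the bases are the canonical correlations
    s = np.linalg.svd(qx.T @ qy, compute_uv=False)
    return np.clip(s, 0.0, 1.0)
```

A detector of the kind described would monitor how these coefficients, updated over a sliding window of the water quality time series, deviate from their baseline values.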
Extending the square root method to account for additive forecast noise in ensemble methods
Raanes, Patrick N; Bertino, Laurent
2015-01-01
A square root approach is considered for the problem of accounting for model noise in the forecast step of the ensemble Kalman filter (EnKF) and related algorithms. The primary aim is to replace the method of simulated, pseudo-random, additive noise so as to eliminate the associated sampling errors. The core method is based on the analysis step of ensemble square root filters, and consists in the deterministic computation of a transform matrix. The theoretical advantages regarding dynamical consistency are surveyed, applying equally well to the square root method in the analysis step. A fundamental problem due to the limited size of the ensemble subspace is discussed, and novel solutions that complement the core method are suggested and studied. Benchmarks from twin experiments with simple, low-order dynamics indicate improved performance over standard approaches such as additive, simulated noise and multiplicative inflation.
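The deterministic transform at the heart of ensemble square root filters, which the abstract says the core method builds on, can be sketched as the symmetric-square-root ETKF update of the ensemble anomalies. This is a generic minimal version, not the authors' model-noise extension:

```python
import numpy as np

def etkf_transform(Yb, R):
    """Symmetric square-root ETKF transform for ensemble anomalies.

    Yb: (m, N) observation-space ensemble perturbations (columns = members)
    R:  (m, m) observation error covariance
    Returns the (N, N) transform T; analysis anomalies are Ab @ T, so the
    posterior spread is set deterministically, with no sampled noise.
    """
    m, N = Yb.shape
    C = Yb.T @ np.linalg.solve(R, Yb)                 # N x N information matrix
    evals, evecs = np.linalg.eigh((N - 1) * np.eye(N) + C)
    # T = [(N-1) ((N-1) I + C)^{-1}]^{1/2}, the symmetric square root
    return evecs @ np.diag(np.sqrt((N - 1) / evals)) @ evecs.T
```

With no observational information (Yb = 0) the transform is the identity; otherwise its eigenvalues are at most one, contracting the ensemble spread exactly as the Kalman update prescribes.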
Microcanonical ensemble and algebra of conserved generators for generalized quantum dynamics
International Nuclear Information System (INIS)
It has recently been shown, by application of statistical mechanical methods to determine the canonical ensemble governing the equilibrium distribution of operator initial values, that complex quantum field theory can emerge as a statistical approximation to an underlying generalized quantum dynamics. This result was obtained by an argument based on a Ward identity analogous to the equipartition theorem of classical statistical mechanics. We construct here a microcanonical ensemble which forms the basis of this canonical ensemble. This construction enables us to define the microcanonical entropy and free energy of the field configuration of the equilibrium distribution and to study the stability of the canonical ensemble. We also study the algebraic structure of the conserved generators from which the microcanonical and canonical ensembles are constructed, and the flows they induce on the phase space. copyright 1996 American Institute of Physics
Microcanonical ensemble simulation method applied to discrete potential fluids.
Sastre, Francisco; Benavides, Ana Laura; Torres-Arenas, José; Gil-Villegas, Alejandro
2015-09-01
In this work we extend the applicability of the microcanonical ensemble simulation method, originally proposed to study the Ising model [A. Hüller and M. Pleimling, Int. J. Mod. Phys. C 13, 947 (2002), 10.1142/S0129183102003693], to the case of simple fluids. An algorithm is developed that measures the transition probabilities between macroscopic states and has the advantage over conventional Monte Carlo NVT (MC-NVT) simulations that a continuous range of temperatures is covered in a single run. For a given density, this new algorithm provides the inverse temperature, which can be parametrized as a function of the internal energy, and the isochoric heat capacity is then evaluated through a numerical derivative. As an illustrative example we consider a fluid composed of particles interacting via a square-well (SW) pair potential of variable range. Equilibrium internal energies and isochoric heat capacities are obtained with very high accuracy compared with data obtained from MC-NVT simulations. These results are important in the context of the application of the Hüller-Pleimling method to discrete-potential systems, which are generalizations of the SW and square-shoulder fluids. PMID:26465582
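The final step described above, obtaining the isochoric heat capacity from the parametrized inverse-temperature curve by a numerical derivative, amounts to C_v = dU/dT = -beta^2 dU/dbeta (with k_B = 1). A minimal sketch with numpy.gradient, illustrating the formula rather than the paper's simulation machinery:

```python
import numpy as np

def isochoric_heat_capacity(beta, u):
    """C_v from sampled (beta, U) pairs along a single run.

    beta: inverse temperatures (k_B = 1); u: internal energies.
    Since T = 1/beta, dU/dT = -beta**2 * dU/dbeta.
    """
    du_dbeta = np.gradient(u, beta)   # numerical derivative on the beta grid
    return -beta**2 * du_dbeta
```

For an ideal-gas-like curve U = (3/2) T = 1.5/beta this returns C_v ≈ 1.5 everywhere, as the formula requires.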
ENSEMBLE methods to reconcile disparate national long range dispersion forecasts
DEFF Research Database (Denmark)
Mikkelsen, Torben; Galmarini, S.; Bianconi, R.;
2003-01-01
ENSEMBLE addresses the problem of achieving a common coherent strategy across European national emergency management when national long-range dispersion forecasts differ from one another during an accidental atmospheric release of radioactive material. A series of new decision-making “ENSEMBLE” procedures and Web-based software evaluation and exchange tools have been created for real-time reconciliation and harmonisation of real-time dispersion forecasts from meteorological and emergency centres across Europe during an accident. The new ENSEMBLE software tools are available to participating national emergency and meteorological forecasting centres.
Sparse canonical methods for biological data integration: application to a cross-platform study
Directory of Open Access Journals (Sweden)
Robert-Granié Christèle
2009-01-01
Full Text Available Abstract Background In the context of systems biology, few sparse approaches have been proposed so far to integrate several data sets. It is however an important and fundamental issue that will be widely encountered in post genomic studies, when simultaneously analyzing transcriptomics, proteomics and metabolomics data using different platforms, so as to understand the mutual interactions between the different data sets. In this high dimensional setting, variable selection is crucial to give interpretable results. We focus on a sparse Partial Least Squares approach (sPLS) to handle two-block data sets, where the relationship between the two types of variables is known to be symmetric. Sparse PLS has been developed either for a regression or a canonical correlation framework and includes a built-in procedure to select variables while integrating data. To illustrate the canonical mode approach, we analyzed the NCI60 data sets, where two different platforms (cDNA and Affymetrix chips) were used to study the transcriptome of sixty cancer cell lines. Results We compare the results obtained with two other sparse or related canonical correlation approaches: CCA with Elastic Net penalization (CCA-EN) and Co-Inertia Analysis (CIA). The latter does not include a built-in procedure for variable selection and requires a two-step analysis. We stress the lack of statistical criteria to evaluate canonical correlation methods, which makes biological interpretation absolutely necessary to compare the different gene selections. We also propose comprehensive graphical representations of both samples and variables to facilitate the interpretation of the results. Conclusion sPLS and CCA-EN selected highly relevant genes and complementary findings from the two data sets, which enabled a detailed understanding of the molecular characteristics of several groups of cell lines. These two approaches were found to bring similar results, although they highlighted the same
Hybrid Levenberg-Marquardt and weak-constraint ensemble Kalman smoother method
Mandel, J.; Bergou, E.; Gürol, S.; Gratton, S.; Kasanický, I.
2016-03-01
The ensemble Kalman smoother (EnKS) is used as a linear least-squares solver in the Gauss-Newton method for the large nonlinear least-squares system in incremental 4DVAR. The ensemble approach is naturally parallel over the ensemble members and no tangent or adjoint operators are needed. Furthermore, adding a regularization term results in replacing the Gauss-Newton method, which may diverge, by the Levenberg-Marquardt method, which is known to be convergent. The regularization is implemented efficiently as an additional observation in the EnKS. The method is illustrated on the Lorenz 63 model and a two-level quasi-geostrophic model.
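The Gauss-Newton versus Levenberg-Marquardt distinction drawn above is easy to see on a toy nonlinear least-squares problem: adding a lambda*I regularization term to the normal equations yields steps that interpolate between gradient descent and Gauss-Newton, restoring convergence when pure Gauss-Newton would diverge. A generic sketch, not the EnKS implementation:

```python
import numpy as np

def levenberg_marquardt(r, J, x0, lam=1e-2, iters=50):
    """Minimise ||r(x)||^2 with Levenberg-Marquardt.

    r: residual function x -> (m,) array; J: Jacobian x -> (m, n) array.
    The lam * I term regularises the Gauss-Newton normal equations.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        Jx, rx = J(x), r(x)
        step = np.linalg.solve(Jx.T @ Jx + lam * np.eye(len(x)), -Jx.T @ rx)
        if np.linalg.norm(r(x + step)) < np.linalg.norm(rx):
            x, lam = x + step, lam * 0.5   # accept: relax toward Gauss-Newton
        else:
            lam *= 2.0                      # reject: take a more cautious step
    return x

# demo: fit the decay rate of y = exp(-k t); true k = 1.3 (synthetic data)
t = np.linspace(0.0, 2.0, 20)
y = np.exp(-1.3 * t)
residual = lambda x: np.exp(-x[0] * t) - y
jacobian = lambda x: (-t * np.exp(-x[0] * t)).reshape(-1, 1)
k_hat = levenberg_marquardt(residual, jacobian, [0.1])
```

In the paper this regularization term is not added explicitly to the normal equations but is implemented as an additional observation inside the EnKS, which achieves the same damping effect.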
ENSEMBLE methods to reconcile disparate national long range dispersion forecasting
Energy Technology Data Exchange (ETDEWEB)
Mikkelsen, T.; Galmarini, S.; Bianconi, R.; French, S. (eds.)
2003-11-01
ENSEMBLE is a web-based decision support system for real-time exchange and evaluation of national long-range dispersion forecasts of nuclear releases with cross-boundary consequences. The system is developed with the purpose of reconciling disparate national long-range dispersion forecasts. ENSEMBLE addresses the problem of achieving a common coherent strategy across European national emergency management when national long-range dispersion forecasts differ from one another during an accidental atmospheric release of radioactive material. A series of new decision-making 'ENSEMBLE' procedures and Web-based software evaluation and exchange tools have been created for real-time reconciliation and harmonisation of real-time dispersion forecasts from meteorological and emergency centres across Europe during an accident. The new ENSEMBLE software tools are available to participating national emergency and meteorological forecasting centres, which may choose to integrate them directly into operational emergency information systems, or possibly use them as a basis for future system development. (au)
ENSEMBLE methods to reconcile disparate national long range dispersion forecasting
International Nuclear Information System (INIS)
ENSEMBLE is a web-based decision support system for real-time exchange and evaluation of national long-range dispersion forecasts of nuclear releases with cross-boundary consequences. The system is developed with the purpose of reconciling disparate national long-range dispersion forecasts. ENSEMBLE addresses the problem of achieving a common coherent strategy across European national emergency management when national long-range dispersion forecasts differ from one another during an accidental atmospheric release of radioactive material. A series of new decision-making 'ENSEMBLE' procedures and Web-based software evaluation and exchange tools have been created for real-time reconciliation and harmonisation of real-time dispersion forecasts from meteorological and emergency centres across Europe during an accident. The new ENSEMBLE software tools are available to participating national emergency and meteorological forecasting centres, which may choose to integrate them directly into operational emergency information systems, or possibly use them as a basis for future system development. (au)
Development of a regional ensemble prediction method for probabilistic weather prediction
International Nuclear Information System (INIS)
A regional ensemble prediction method has been developed to provide probabilistic weather prediction using a numerical weather prediction model. To obtain perturbations consistent with the synoptic weather pattern, both initial and lateral-boundary perturbations were given by the differences between the control and ensemble members of the Japan Meteorological Agency (JMA)'s operational one-week ensemble forecast. The method provides multiple ensemble members at a horizontal resolution of 15 km for 48-hour forecasts, based on a downscaling of the JMA's operational global forecast accompanied by the perturbations. The ensemble prediction was examined for the heavy snowfall event in the Kanto area on January 14, 2013. The results showed that the predictions represent different features of the high-resolution spatiotemporal distribution of precipitation, reflecting the intensity and location of the extra-tropical cyclone in each ensemble member. Although the ensemble prediction has model biases in the means and variances of some variables, such as wind speed and solar radiation, it has the potential to add probabilistic information to a deterministic prediction. (author)
Constrained Canonical Correlation.
DeSarbo, Wayne S.; And Others
1982-01-01
A variety of problems associated with the interpretation of traditional canonical correlation are discussed. A response surface approach is developed which allows for investigation of changes in the coefficients while maintaining an optimum canonical correlation value. Also, a discrete or constrained canonical correlation method is presented. (JKS)
Methods of Weyl representation of the phase space and canonical transformations
International Nuclear Information System (INIS)
The author studies nonlinear canonical transformations realized in the space of Weyl symbols of quantum operators. The kernels of the transformations, the symbol of the intertwining operator of the group of inhomogeneous point transformations, and the group characters are constructed. The group of PL transformations, which is the free product of the group of point transformations, P, and linear transformations, L, is considered. The simplest PL complexes relating problems with different potentials, in particular containing a general Darboux transformation of the factorization method, are constructed. The kernel of an arbitrary element of the group PL is found.
International Nuclear Information System (INIS)
The motion of charged particles in a magnetized plasma column, such as that of a magnetic mirror trap or a tokamak, is determined in the framework of canonical perturbation theory through a method of variation of constants which preserves energy conservation and symmetry invariance. The choice of a frame of coordinates close to that of the magnetic coordinates allows a relatively precise determination of the guiding-center motion with a low-order approximation in the adiabatic parameter. A Hamiltonian formulation of the equations of motion is obtained.
A comparison of ensemble post-processing methods for extreme events
Williams, Robin; Ferro, Chris; Kwasniok, Frank
2015-04-01
Ensemble post-processing methods are used in operational weather forecasting to form probability distributions that represent forecast uncertainty. Several such methods have been proposed in the literature, including logistic regression, ensemble dressing, Bayesian model averaging and non-homogeneous Gaussian regression. We conduct an imperfect model experiment with the Lorenz 1996 model to investigate the performance of these methods, especially when forecasting the occurrence of rare extreme events. We show how flexible bias-correction schemes can be incorporated into these post-processing methods, and that allowing the bias correction to depend on the ensemble mean can yield considerable improvements in skill when forecasting extreme events. In the Lorenz 1996 setting, we find that ensemble dressing, Bayesian model averaging and non-homogeneous Gaussian regression perform similarly, while logistic regression performs less well.
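The flexible bias correction described above, in which the correction is allowed to depend on the ensemble mean, can be sketched as a simple linear regression. This is a minimal illustration under assumed data, not the paper's implementation: we fit obs ≈ a + b · ensemble_mean on a training period and apply the fitted correction afterwards.

```python
import numpy as np

def fit_mean_dependent_correction(ens_mean, obs):
    """Fit obs ~ a + b * ens_mean by ordinary least squares; return (a, b)."""
    A = np.column_stack([np.ones_like(ens_mean), ens_mean])
    coeffs, *_ = np.linalg.lstsq(A, obs, rcond=None)
    return coeffs

def apply_correction(coeffs, ens_mean):
    a, b = coeffs
    return a + b * ens_mean

# Synthetic training data: raw ensemble-mean forecasts with a
# state-dependent bias (slope 0.8, offset 0.5) plus noise.
rng = np.random.default_rng(0)
truth = rng.normal(size=200)
raw_mean = 0.8 * truth + 0.5 + rng.normal(scale=0.1, size=200)

coeffs = fit_mean_dependent_correction(raw_mean, truth)
corrected = apply_correction(coeffs, raw_mean)
```

Because the slope term absorbs the state-dependent part of the bias, this kind of scheme can help precisely in the tails, where a constant offset correction fails.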
Application of the Multimodel Ensemble Kalman Filter Method in Groundwater System
Liang Xue
2015-01-01
With the development of in-situ monitoring techniques, the ensemble Kalman filter (EnKF) has become a popular data assimilation method due to its capability to jointly update model parameters and state variables in a sequential way, and to assess the uncertainty associated with estimation and prediction. To take the conceptual model uncertainty into account during the data assimilation process, a novel multimodel ensemble Kalman filter method has been proposed by incorporating the standard En...
A multi-model ensemble method that combines imperfect models through learning
Berge, L.A.; F. M. Selten; Wiegerinck, W.; Duane, G. S.
2010-01-01
In the current multi-model ensemble approach, climate model simulations are combined a posteriori. In the method of this study, the models in the ensemble exchange information during simulations and learn from historical observations to combine their strengths into a best representation of the observed climate. The method is developed and tested in the context of small chaotic dynamical systems, like the Lorenz 63 system. Imperfect models are created by perturbing the standard parameter ...
A Simple Bayesian Climate Index Weighting Method for Seasonal Ensemble Forecasting
Bradley, A.; Habib, M. A.; Schwartz, S. S.
2014-12-01
Climate information — in the form of a measure of climate state or a climate forecast — can be an important predictor of future hydrologic conditions. For instance, streamflow variability for many locations around the globe is related to large-scale atmospheric oscillations, like the El Niño-Southern Oscillation (ENSO) or the Pacific Decadal Oscillation (PDO). Furthermore, climate forecast models are growing more skillful in their predictions of future climate variables on seasonal time scales. Finding effective ways to translate this climate information into improved hydrometeorological predictions is an area of ongoing research. In ensemble streamflow forecasting, where historical weather inputs or streamflow observations are used to generate the ensemble, climate index weighting is one way to represent the influence of current climate information. Using a climate index, each ensemble member of the forecast variable is selectively weighted to reflect climate conditions at the time of the forecast. A simple Bayesian climate index weighting of ensemble forecasts is presented. The original hydrologic ensemble members define a sample of the prior distribution; the relationship between the climate index and the ensemble member forecast variable is used to estimate a likelihood function. Given an observation of the climate index at the time of the forecast, the estimated likelihood function is then used to assign weights to each ensemble member. The weighted ensemble forecast is then used to estimate the posterior distribution of the forecast variable conditioned on the climate index. The proposed approach has several advantages over traditional climate index weighting methods. The weights assigned to the ensemble members accomplish the updating of the (prior) ensemble forecast distribution based on Bayes' theorem, so the method is theoretically sound. The method also automatically adapts to the strength of the relationship between the climate index and the
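The weighting step described above can be sketched as follows. This is an illustration under a simplifying assumption: the likelihood of the observed climate index given an ensemble member is modelled as a Gaussian kernel centred on the index value of the historical year behind that member (the paper instead estimates the likelihood from the index/forecast-variable relationship). All data values are hypothetical.

```python
import numpy as np

def bayesian_index_weights(member_index, observed_index, bandwidth=1.0):
    """Weight each ensemble member by the (kernel) likelihood of the
    observed climate index given that member's historical index value."""
    member_index = np.asarray(member_index, float)
    logw = -0.5 * ((member_index - observed_index) / bandwidth) ** 2
    w = np.exp(logw - logw.max())   # stabilised before normalising
    return w / w.sum()

# Hypothetical ensemble: streamflow traces and the ENSO index of the
# historical year that generated each trace.
member_flow = np.array([100.0, 120.0, 90.0, 150.0, 80.0])
member_enso = np.array([-1.0, 0.5, -1.5, 1.2, -0.5])

# Observed ENSO index at forecast time favours the wet, high-index years.
w = bayesian_index_weights(member_enso, observed_index=1.0)
posterior_mean = float(np.dot(w, member_flow))
```

The weighted ensemble is then a sample from the posterior distribution of streamflow conditioned on the climate index; here the posterior mean is pulled above the equal-weight mean because members from high-index years carry more weight.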
Evaluation of the thermodynamics of a four level system using canonical density matrix method
Directory of Open Access Journals (Sweden)
Awoga Oladunjoye A.
2013-02-01
Full Text Available We consider a four-level system with two subsystems coupled by a weak interaction. The system is in thermal equilibrium. The thermodynamics of the system, namely the internal energy, free energy, entropy and heat capacity, are evaluated using the canonical density matrix by two methods: first by the Kronecker product method, and then by treating the subsystems separately and adding the evaluated thermodynamic properties of each subsystem. Both methods yield the same result; the results obey the laws of thermodynamics and agree with earlier results. The results also show that each level of the subsystems introduces a new degree of freedom and increases the entropy of the entire system. We also find that the four-level system predicts a linear relationship between heat capacity and temperature at very low temperatures, just as in metals. Our numerical results show the same trend.
Alba, David; Crater, Horace W.; Lusanna, Luca
2012-01-01
A new formulation of relativistic classical mechanics allows a revisiting of old unsolved problems in relativistic kinetic theory and in relativistic statistical mechanics. In particular, a definition of the relativistic micro-canonical partition function is given strictly in terms of the Poincaré generators of an interacting N-particle system, both in the inertial and non-inertial rest frames. The non-relativistic limit allows a definition of both the inertial and non-inertial micro-canonica...
Hybrid Modeling of Flotation Height in Air Flotation Oven Based on Selective Bagging Ensemble Method
Directory of Open Access Journals (Sweden)
Shuai Hou
2013-01-01
Full Text Available Accurate prediction of the flotation height is necessary for precise control of the air flotation oven process, thereby avoiding scratches and improving production quality. In this paper, a hybrid flotation height prediction model is developed. First, a simplified mechanism model is introduced to capture the main dynamic behavior of the process. Then, to compensate for the modeling errors between the actual system and the mechanism model, an error compensation model based on the proposed selective bagging ensemble method is established to boost prediction accuracy. In this framework, negative correlation learning and a genetic algorithm are applied to the bagging ensemble to promote cooperation between base learners. As a result, a subset of base learners can be selected from the original bagging ensemble to compose a selective bagging ensemble, which outperforms the original one in prediction accuracy with a compact ensemble size. Simulation results indicate that the proposed hybrid model predicts flotation height better than the other algorithms tested.
Clustered iterative stochastic ensemble method for multi-modal calibration of subsurface flow models
Elsheikh, Ahmed H.
2013-05-01
A novel multi-modal parameter estimation algorithm is introduced. Parameter estimation is an ill-posed inverse problem that might admit many different solutions. This is attributed to the limited amount of measured data used to constrain the inverse problem. The proposed multi-modal model calibration algorithm uses an iterative stochastic ensemble method (ISEM) for parameter estimation. ISEM employs an ensemble of directional derivatives within a Gauss-Newton iteration for nonlinear parameter estimation. ISEM is augmented with a clustering step based on k-means algorithm to form sub-ensembles. These sub-ensembles are used to explore different parts of the search space. Clusters are updated at regular intervals of the algorithm to allow merging of close clusters approaching the same local minima. Numerical testing demonstrates the potential of the proposed algorithm in dealing with multi-modal nonlinear parameter estimation for subsurface flow models. © 2013 Elsevier B.V.
Directory of Open Access Journals (Sweden)
Ju Hyoung Lee
2015-12-01
Full Text Available Bias correction is a very important pre-processing step in satellite data assimilation analysis, as data assimilation itself cannot circumvent satellite biases. We introduce a retrieval-algorithm-specific and spatially heterogeneous Instantaneous Field of View (IFOV) bias correction method for Soil Moisture and Ocean Salinity (SMOS) soil moisture. To the best of our knowledge, this is the first paper to present a probabilistic representation of SMOS soil moisture using retrieval ensembles. We illustrate that retrieval ensembles effectively mitigated the overestimation problem of SMOS soil moisture arising from brightness temperature errors over West Africa in a computationally efficient way (ensemble size: 12, no time-integration). In contrast, the existing method of Cumulative Distribution Function (CDF) matching considerably increased the SMOS biases, due to the limitations of relying on imperfect reference data. Validation at two semi-arid sites, Benin (a moderately wet and vegetated area) and Niger (dry and sandy bare soils), showed that the SMOS errors arising from rain and vegetation attenuation were appropriately corrected by the ensemble approach. In Benin, the Root Mean Square Error (RMSE) decreased from 0.1248 m3/m3 for CDF matching to 0.0678 m3/m3 for the proposed ensemble approach. In Niger, the RMSE decreased from 0.14 m3/m3 for CDF matching to 0.045 m3/m3 for the ensemble approach.
A Bayes fusion method based ensemble classification approach for Brown cloud application
Directory of Open Access Journals (Sweden)
M.Krishnaveni
2014-03-01
Full Text Available Classification is a recurring task of determining a target function that maps each attribute set to one of the predefined class labels. Ensemble fusion is a classifier model fusion technique which combines multiple classifiers to achieve higher classification accuracy than individual classifiers. The main objective of this paper is to combine base classifiers using the ensemble fusion methods Decision Template, Dempster-Shafer and Bayes, and to compare the accuracy of each fusion method on the brown cloud dataset. The base classifiers KNN, MLP and SVM are considered in the ensemble classification, each with four different function parameters. The experimental study shows that the Bayes fusion method achieves a better classification accuracy of 95% than Decision Template (80%) and Dempster-Shafer (85%) on the Brown Cloud image dataset.
A Synergy Method to Improve Ensemble Weather Predictions and Differential SAR Interferograms
Ulmer, Franz-Georg; Adam, Nico
2015-11-01
A compensation of atmospheric effects is essential for mm-sensitivity in differential interferometric synthetic aperture radar (DInSAR) techniques. Numerical weather predictions are used to compensate for these disturbances, allowing a reduction in the number of required radar scenes. Practically, predictions are solutions of partial differential equations which can never be precise due to model or initialisation uncertainties. In order to deal with the chaotic nature of the solutions, ensembles of predictions are computed. From a stochastic point of view, the ensemble mean is the expected prediction if all ensemble members are equally likely. This corresponds to the typical assumption that all ensemble members are physically correct solutions of the set of partial differential equations. DInSAR allows adding to this knowledge. Observations of refractivity can now be utilised to check the likelihood of a solution and to weight the respective ensemble member to estimate a better expected prediction. The objective of the paper is to show the synergy between ensemble weather predictions and differential interferometric atmospheric correction. We demonstrate a new method, first, to better compensate for the atmospheric effect in DInSAR and, second, to estimate an improved numerical weather prediction (NWP) ensemble mean. Practically, a least squares fit of predicted atmospheric effects with respect to a differential interferogram is computed. The coefficients of this fit are interpreted as likelihoods and used as weights for the weighted ensemble mean. Finally, the derived weighted prediction has minimal expected quadratic error, which is a better solution than the straightforward best-fitting ensemble member. Furthermore, we propose an extension of the algorithm which avoids the systematic bias caused by deformations, making the technique suitable for time series analysis, e.g. persistent scatterer interferometry (PSI). We validate the algorithm using the well known
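The least-squares weighting step can be sketched as follows. This is a toy illustration with assumed interfaces, not the authors' code: each member's predicted atmospheric phase screen becomes one regressor, the interferogram is the target, and the clipped, normalised coefficients act as likelihood weights for a weighted ensemble mean.

```python
import numpy as np

def weighted_nwp_mean(predictions, interferogram):
    """predictions: (n_members, n_pixels) phase screens;
    interferogram: (n_pixels,) observed differential phase."""
    coeffs, *_ = np.linalg.lstsq(predictions.T, interferogram, rcond=None)
    w = np.clip(coeffs, 0.0, None)        # negative fits carry no likelihood
    w = w / w.sum()
    return w, w @ predictions             # weights and weighted ensemble mean

# Toy example: member 0 matches the true phase screen closely,
# member 1 is biased and noisy.
rng = np.random.default_rng(2)
x = np.linspace(0.0, 2.0 * np.pi, 100)
true_screen = np.sin(x)
members = np.vstack([
    true_screen + 0.05 * rng.normal(size=x.size),
    0.3 * true_screen + 0.5 * rng.normal(size=x.size),
])
w, mean_pred = weighted_nwp_mean(members, true_screen)
```

The weighted mean concentrates on the well-fitting member, so it improves on the plain ensemble mean while still blending members when several fit comparably (the deformation-bias extension mentioned in the abstract is omitted here).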
Ensemble-trained source apportionment of fine particulate matter and method uncertainty analysis
Balachandran, Sivaraman; Pachon, Jorge E.; Hu, Yongtao; Lee, Dongho; Mulholland, James A.; Russell, Armistead G.
2012-12-01
An ensemble-based approach is applied to better estimate source impacts on fine particulate matter (PM2.5) and quantify uncertainties in various source apportionment (SA) methods. The approach combines source impacts from applications of four individual SA methods: three receptor-based models and one chemical transport model (CTM). Receptor models used are the chemical mass balance methods CMB-LGO (Chemical Mass Balance-Lipschitz global optimizer) and CMB-MM (molecular markers) as well as a factor analytic method, Positive Matrix Factorization (PMF). The CTM used is the Community Multiscale Air Quality (CMAQ) model. New source impact estimates and uncertainties in these estimates are calculated in a two-step process. First, an ensemble average is calculated for each source category using results from applying the four individual SA methods. The root mean square error (RMSE) of each method with respect to the average is calculated for each source category; the RMSE is then taken to be the updated uncertainty for each individual SA method. Second, these new uncertainties are used to re-estimate ensemble source impacts and uncertainties. The approach is applied to data from daily PM2.5 measurements at the Atlanta, GA, Jefferson Street (JST) site in July 2001 and January 2002. The procedure provides updated uncertainties for the individual SA methods that are calculated in a consistent way across methods. Overall, the ensemble has lower relative uncertainties than the individual SA methods. Calculated CMB-LGO uncertainties tend to decrease from initial estimates, while PMF and CMB-MM uncertainties increase. Estimated CMAQ source impact uncertainties are comparable to those of other SA methods for gasoline vehicles and SOC but are larger for other sources. In addition to providing improved estimates of source impact uncertainties, the ensemble estimates do not have unrealistic extremes as compared to individual SA methods and avoid zero impact
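The two-step procedure can be sketched for one source category. This is a simplified illustration with invented numbers: step 1 takes a plain ensemble average and uses each method's RMSE against that average as its updated uncertainty; step 2 re-estimates the ensemble as an inverse-variance weighted mean.

```python
import numpy as np

def two_step_ensemble(estimates):
    """estimates: (n_methods, n_days) impacts from each SA method
    for one source category."""
    avg = estimates.mean(axis=0)                          # step 1: ensemble average
    rmse = np.sqrt(np.mean((estimates - avg) ** 2, axis=1))
    w = 1.0 / rmse ** 2                                   # step 2: re-weight
    ensemble = (w[:, None] * estimates).sum(axis=0) / w.sum()
    ensemble_sigma = np.sqrt(1.0 / w.sum())               # combined uncertainty
    return ensemble, rmse, ensemble_sigma

# Hypothetical impacts (ug/m3) from four SA methods over five days;
# method 3 (index 2) is an outlier relative to the others.
impacts = np.array([
    [2.0, 2.2, 1.8, 2.5, 2.1],
    [2.1, 2.3, 1.7, 2.4, 2.0],
    [1.5, 3.0, 1.0, 3.2, 1.4],
    [2.2, 2.1, 1.9, 2.6, 2.2],
])
ensemble, rmse, sigma = two_step_ensemble(impacts)
```

The outlier method receives the largest updated uncertainty and therefore the smallest weight, which is how the ensemble avoids being dragged toward any single method's extremes.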
A regularizing iterative ensemble Kalman method for PDE-constrained inverse problems
Iglesias, Marco A.
2016-02-01
We introduce a derivative-free computational framework for approximating solutions to nonlinear PDE-constrained inverse problems. The general aim is to merge ideas from iterative regularization with ensemble Kalman methods from Bayesian inference to develop a derivative-free, stable method that is easy to implement in applications where the PDE (forward) model is only accessible as a black box (e.g. with commercial software). The proposed regularizing ensemble Kalman method can be derived as an approximation of the regularizing Levenberg-Marquardt (LM) scheme (Hanke 1997 Inverse Problems 13 79-95) in which the derivative of the forward operator and its adjoint are replaced with empirical covariances from an ensemble of elements from the admissible space of solutions. The resulting ensemble method consists of an update formula that is applied to each ensemble member and that has a regularization parameter selected in a similar fashion to the one in the LM scheme. Moreover, an early termination of the scheme is proposed according to a discrepancy-principle type criterion. The proposed method can also be viewed as a regularizing version of standard Kalman approaches, which are often unstable unless ad hoc fixes, such as covariance localization, are implemented. The aim of this paper is to provide a detailed numerical investigation of the regularizing and convergence properties of the proposed regularizing ensemble Kalman scheme; the proof of these properties is an open problem. By means of numerical experiments, we investigate the conditions under which the proposed method inherits the regularizing properties of the LM scheme of (Hanke 1997 Inverse Problems 13 79-95) and is thus stable and suitable for application in problems where computing the Fréchet derivative is not feasible. More concretely, we study the effect of ensemble size, number of measurements, selection of initial ensemble and tunable parameters on the performance of the method.
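One iteration of the update described above can be sketched as follows. This is a minimal illustration in the spirit of the abstract, not the paper's scheme: derivatives of the forward map are replaced by empirical cross-covariances, and alpha stands in for the LM regularization parameter (the parameter-selection rule and discrepancy-principle stopping criterion are omitted).

```python
import numpy as np

def regularizing_ek_update(ensemble, forward, y, gamma, alpha):
    """ensemble: (n, p) parameter vectors; forward: black-box map p -> m;
    y: (m,) data; gamma: (m, m) observation noise covariance."""
    U = np.asarray(ensemble, float)
    W = np.array([forward(u) for u in U])          # forward-mapped ensemble
    du = U - U.mean(axis=0)
    dw = W - W.mean(axis=0)
    Cuw = du.T @ dw / (len(U) - 1)                 # empirical cross-covariance
    Cww = dw.T @ dw / (len(U) - 1)
    K = Cuw @ np.linalg.inv(Cww + alpha * gamma)   # regularized "gain"
    return U + (y - W) @ K.T                       # update every member

# Toy linear inverse problem G(u) = A u with an ensemble far from truth.
A = np.array([[2.0, 0.0], [0.0, 1.0]])
u_true = np.array([1.0, 2.0])
y = A @ u_true
rng = np.random.default_rng(3)
ens = u_true + 4.0 + rng.normal(size=(30, 2))
updated = regularizing_ek_update(ens, lambda u: A @ u, y, np.eye(2), alpha=0.1)
```

A single update already moves the ensemble mean sharply toward the solution; larger alpha damps the step, which is exactly the stabilizing role the LM parameter plays in the full scheme.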
Stochastic dynamics simulations in a new generalized ensemble
Hansmann, Uwe H E; Okamoto, Y; Hansmann, Ulrich H.E.; Eisenmenger, Frank; Okamoto, Yuko
1998-01-01
We develop a formulation for molecular dynamics, Langevin, and hybrid Monte Carlo algorithms in the recently proposed generalized ensemble that is based on a physically motivated realisation of Tsallis weights. The effectiveness of the methods is tested with an energy function for a protein system. Simulations in this generalized ensemble by the three methods are performed for a pentapeptide, Met-enkephalin. For each algorithm, it is shown that from only one simulation run one can not only find the global-minimum-energy conformation but also obtain probability distributions in the canonical ensemble at any temperature, which allows the calculation of any thermodynamic quantity as a function of temperature.
Method to detect gravitational waves from an ensemble of known pulsars
Fan, Xilong; Messenger, Christopher
2016-01-01
Combining information from weak sources, such as known pulsars, for gravitational wave detection is an attractive approach to improve detection efficiency. We propose an optimal statistic for a general ensemble of signals and apply it to an ensemble of known pulsars. Our method combines $\mathcal{F}$-statistic values from individual pulsars using weights proportional to each pulsar's expected optimal signal-to-noise ratio to improve the detection efficiency. We also point out that to detect at least one pulsar within an ensemble, different thresholds should be designed for each source based on the expected signal strength. The performance of our proposed detection statistic is demonstrated using simulated sources, with the assumption that all pulsars' ellipticities belong to a common (yet unknown) distribution. Comparing with an equal-weight strategy and with individual source approaches, we show that the weighted combination of all known pulsars, where weights are assigned based on the pulsars' known informa...
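The weighting idea alone can be sketched in a few lines. This illustrates only the combination step with invented numbers; the paper's optimal statistic is derived from the full likelihood, not this simplified weighted sum.

```python
import numpy as np

def weighted_ensemble_statistic(f_stats, expected_snr):
    """Combine per-pulsar F-statistic values with weights proportional
    to each pulsar's expected optimal signal-to-noise ratio."""
    w = np.asarray(expected_snr, float)
    w = w / w.sum()
    return float(np.dot(w, f_stats))

# Hypothetical values: one loud pulsar, two quiet ones.
f_stats = np.array([10.0, 2.0, 2.0])
snr = np.array([3.0, 1.0, 1.0])
combined = weighted_ensemble_statistic(f_stats, snr)  # 0.6*10 + 0.2*2 + 0.2*2 = 6.8
```

Compared with the equal-weight average (about 4.67 here), the SNR-proportional weights let the loud pulsar dominate, which is the mechanism behind the improved detection efficiency claimed above.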
An iterative stochastic ensemble method for parameter estimation of subsurface flow models
Elsheikh, Ahmed H.
2013-06-01
Parameter estimation for subsurface flow models is an essential step for maximizing the value of numerical simulations for future prediction and the development of effective control strategies. We propose the iterative stochastic ensemble method (ISEM) as a general method for parameter estimation based on stochastic estimation of gradients using an ensemble of directional derivatives. ISEM eliminates the need for adjoint coding and treats the numerical simulator as a black box. The proposed method employs directional derivatives within a Gauss-Newton iteration. The update equation in ISEM resembles the update step in the ensemble Kalman filter; however, the inverse of the output covariance matrix in ISEM is regularized using standard truncated singular value decomposition or Tikhonov regularization. We also investigate the performance of a set of shrinkage-based covariance estimators within ISEM. The proposed method is successfully applied to several nonlinear parameter estimation problems for subsurface flow models. The efficiency of the proposed algorithm is demonstrated by the small size of the utilized ensembles and by the error convergence rates. © 2013 Elsevier Inc.
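An ISEM-style iteration can be sketched as follows. This is an illustrative reading of the abstract, not the published algorithm: ensemble differences play the role of directional derivatives, the simulator is a black box, and the output covariance is inverted via a truncated SVD (Tikhonov regularization would be the analogous alternative).

```python
import numpy as np

def isem_like_update(ensemble, forward, y, rank):
    """One Gauss-Newton-like step using ensemble-based derivatives and a
    rank-truncated pseudo-inverse of the output covariance."""
    U = np.asarray(ensemble, float)
    W = np.array([forward(u) for u in U])          # black-box simulator calls
    du = U - U.mean(axis=0)
    dw = W - W.mean(axis=0)
    Cuw = du.T @ dw / (len(U) - 1)
    Cww = dw.T @ dw / (len(U) - 1)
    P, s, Qt = np.linalg.svd(Cww)
    Cww_pinv = (Qt[:rank].T / s[:rank]) @ P[:, :rank].T   # truncated SVD inverse
    return U + (y - W) @ (Cuw @ Cww_pinv).T

# A mildly nonlinear two-parameter "simulator" standing in for a flow model.
def simulator(u):
    return np.array([u[0] + 0.1 * u[1] ** 2, u[1]])

u_true = np.array([1.0, 2.0])
y = simulator(u_true)
rng = np.random.default_rng(4)
ens = u_true + 3.0 + 0.5 * rng.normal(size=(40, 2))
updated = isem_like_update(ens, simulator, y, rank=2)
```

Truncating the SVD (or adding a Tikhonov term) keeps the step well-behaved when the output covariance is ill-conditioned, which is the point of the regularization discussed in the abstract.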
An Introduction to Ensemble Methods for Data Analysis (Revised July, 2004)
Berk, Richard
2004-01-01
This paper provides an introduction to ensemble statistical procedures as a special case of algorithmic methods. The discussion begins with classification and regression trees (CART) as a didactic device to introduce many of the key issues. Following the material on CART is a consideration of cross-validation, bagging, random forests and boosting. Major points are illustrated with analyses of real data.
Senjean, Bruno; Jensen, Hans Jørgen Aa; Fromager, Emmanuel
2015-01-01
The computation of excitation energies in range-separated ensemble density-functional theory (DFT) is discussed. The latter approach is appealing as it enables the rigorous formulation of a multi-determinant state-averaged DFT method. In the exact theory, the short-range density functional, that complements the long-range wavefunction-based ensemble energy contribution, should vary with the ensemble weights even when the density is held fixed. This weight dependence ensures that the range-separated ensemble energy varies linearly with the ensemble weights. When the (weight-independent) ground-state short-range exchange-correlation functional is used in this context, curvature appears thus leading to an approximate weight-dependent excitation energy. In order to obtain unambiguous approximate excitation energies, we simply propose to interpolate linearly the ensemble energy between equiensembles. It is shown that such a linear interpolation method (LIM) effectively introduces weight dependence effects. LIM has...
Identifying a robust method to build RCMs ensemble as climate forcing for hydrological impact models
Olmos Giménez, P.; García Galiano, S. G.; Giraldo-Osorio, J. D.
2016-06-01
Regional climate models (RCMs) improve the understanding of climate mechanisms and are often used as climate forcing for hydrological impact models. Rainfall is the principal input to the water cycle, so special attention should be paid to its accurate estimation. However, climate change projections of rainfall events exhibit great divergence between RCMs. As a consequence, rainfall projections, and the estimation of their uncertainties, are better based on combining the information provided by an ensemble of different RCM simulations. Taking into account the rainfall variability among RCMs, the aims of this work are to evaluate the performance of two novel approaches, based on the reliability ensemble averaging (REA) method, for building RCM ensembles of monthly precipitation over Spain. The proposed methodologies are based on probability density functions (PDFs) that account for variability at different levels of information: on the one hand annual and seasonal rainfall, and on the other hand monthly rainfall. The sensitivity of the proposed approaches to two metrics for identifying the best ensemble-building method is evaluated. A plausible future rainfall scenario for 2021-2050 over Spain, based on the more robust method, is identified. As a result, the rainfall projections are improved and the uncertainties involved in driving hydrological impact models are decreased, reducing the cumulative errors in the modeling chain.
Nasseri, M.; Zahraie, B.; Ajami, N. K.; Solomatine, D. P.
2014-04-01
Multi-model (ensemble, or committee) techniques have been shown to be an effective way to improve hydrological prediction performance and provide uncertainty information. This paper presents two novel multi-model ensemble techniques, one probabilistic, the Modified Bootstrap Ensemble Model (MBEM), and one possibilistic, the FUzzy C-means Ensemble based on data Pattern (FUCEP). The paper also explores utilization of the Ordinary Kriging (OK) method as a multi-model combination scheme for hydrological simulation/prediction. These techniques are compared against Bayesian Model Averaging (BMA) and Weighted Average (WA) methods to demonstrate their effectiveness. The techniques are applied to three monthly water balance models used to generate streamflow simulations for two mountainous basins in the south-west of Iran. For both basins, the results demonstrate that MBEM and FUCEP generate more skillful and reliable probabilistic predictions, outperforming all the other techniques. We also found that OK did not demonstrate any improvement in skill over the simple WA scheme for either basin.
Application of the Multimodel Ensemble Kalman Filter Method in Groundwater System
Directory of Open Access Journals (Sweden)
Liang Xue
2015-02-01
Full Text Available With the development of in-situ monitoring techniques, the ensemble Kalman filter (EnKF) has become a popular data assimilation method due to its capability to jointly update model parameters and state variables in a sequential way, and to assess the uncertainty associated with estimation and prediction. To take conceptual model uncertainty into account during the data assimilation process, a novel multimodel ensemble Kalman filter method has been proposed that incorporates the standard EnKF within a Bayesian model averaging framework. In this paper, this method is applied to analyze a dataset obtained from the Hailiutu River Basin located in the northwest part of China. Multiple conceptual models are created by considering two important factors that control groundwater dynamics in semi-arid areas: the zonation pattern of the hydraulic conductivity field and the relationship between evapotranspiration and groundwater level. The results show that the posterior model weights of the postulated models can be dynamically adjusted according to the mismatch between the measurements and the ensemble predictions, and that the multimodel ensemble estimate and the corresponding uncertainty can be quantified.
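The dynamic adjustment of model weights can be sketched as a Bayesian update. This is a simplified illustration with invented numbers: a Gaussian likelihood for the measurement mismatch is an assumption here, and the paper couples this weighting with a full EnKF update of each model's parameters and states.

```python
import numpy as np

def update_model_weights(prior_w, mismatches, sigma):
    """Posterior weight of each conceptual model = prior weight times the
    likelihood of the measurement given that model's ensemble prediction.
    mismatches: measurement minus each model's ensemble-mean prediction."""
    like = np.exp(-0.5 * (np.asarray(mismatches, float) / sigma) ** 2)
    post = np.asarray(prior_w, float) * like
    return post / post.sum()

# Two conceptual models (e.g. two zonation patterns); model 0 fits the
# new groundwater-level measurement much better than model 1.
w = update_model_weights([0.5, 0.5], mismatches=[0.2, 1.5], sigma=0.5)
```

Repeating this update as measurements arrive is what lets the posterior weights shift toward the conceptual model that best explains the data, while the weighted combination quantifies the conceptual-model uncertainty.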
Canonical and grand canonical theory of spinodal instabilities
International Nuclear Information System (INIS)
In the context of the mean field approximation to the Landau-Ginzburg-Wilson functional integral, describing the equilibrium properties of a system with a conserved order parameter, the conditions for critical instabilities in the canonical ensemble are analysed. (A.C.A.S.)
EXPERIMENTS OF ENSEMBLE FORECAST OF TYPHOON TRACK USING BDA PERTURBING METHOD
Institute of Scientific and Technical Information of China (English)
HUANG Yan-yan; WAN Qi-lin; YUAN Jin-nan; DING Wei-yu
2006-01-01
A new method, BDA perturbing, is used in ensemble forecasting of typhoon tracks. The method is based on the Bogus Data Assimilation (BDA) scheme. It perturbs the initial position and intensity of a typhoon to generate a series of bogus vortices; each bogus vortex is then used in data assimilation to obtain initial conditions. Ensemble forecast members are constructed by conducting simulations with these initial conditions. Several typhoon cases are chosen to test the validity of the new method, and the results show that using the BDA perturbing method to perturb the initial position and intensity of a typhoon improves track forecast accuracy compared with the direct use of the BDA assimilation scheme. It is also concluded that a perturbing amplitude of 5 hPa in intensity is probably more appropriate than 10 hPa when the BDA perturbing method is used in combination with initial position perturbation.
An Introduction to Ensemble Methods for Data Analysis
Berk, Richard A.
2011-01-01
There are a growing number of new statistical procedures Leo Breiman (2001b) has called "algorithmic". Coming from work primarily in statistics, applied mathematics, and computer science, these techniques are sometimes linked to "data mining", "machine learning", and "statistical learning". A key idea behind algorithmic methods is that there is no statistical model in the usual sense; no effort is made to represent how the data were generated. And no apologies are made for the absence of a mo...
A New Ensemble Method with Feature Space Partitioning for High-Dimensional Data Classification
Directory of Open Access Journals (Sweden)
Yongjun Piao
2015-01-01
Full Text Available Ensemble data mining methods, also known as classifier combination, are often used to improve the performance of classification. Various classifier combination methods such as bagging, boosting, and random forest have been devised and have received considerable attention in the past. However, data dimensionality is increasing rapidly, and such methods are not suitable for direct application to high-dimensional datasets. In this paper, we propose an ensemble method for the classification of high-dimensional data, with each classifier constructed from a different set of features determined by a partitioning of redundant features. In our method, the redundancy of features is used to divide the original feature space. Each generated feature subset is then trained by a support vector machine, and the results of the individual classifiers are combined by majority voting. The efficiency and effectiveness of our method are demonstrated through comparisons with other ensemble techniques, and the results show that our method outperforms the others.
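The partition-then-vote structure can be sketched with stand-in components. Two loud assumptions: the paper groups features by redundancy (round-robin partitioning is used here as a placeholder), and it trains an SVM per subset (a tiny nearest-centroid classifier stands in so the sketch needs only numpy).

```python
import numpy as np

def partition_features(n_features, n_parts):
    """Disjoint feature subsets; round-robin is a stand-in for the
    redundancy-based partitioning of the paper."""
    return [list(range(i, n_features, n_parts)) for i in range(n_parts)]

class CentroidClassifier:
    """Minimal stand-in for the per-subset SVM."""
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.centroids_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        return self
    def predict(self, X):
        d = np.linalg.norm(X[:, None, :] - self.centroids_[None], axis=2)
        return self.classes_[d.argmin(axis=1)]

def ensemble_predict(classifiers, subsets, X):
    """Majority voting across the per-subset classifiers."""
    votes = np.array([clf.predict(X[:, cols]) for clf, cols in zip(classifiers, subsets)])
    return np.array([np.bincount(col).argmax() for col in votes.T])

# Toy data: class 0 clustered near 0, class 1 near 1, six features.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (20, 6)), rng.normal(1, 0.3, (20, 6))])
y = np.array([0] * 20 + [1] * 20)

subsets = partition_features(6, 3)
clfs = [CentroidClassifier().fit(X[:, cols], y) for cols in subsets]
pred = ensemble_predict(clfs, subsets, X)
```

Because each base classifier sees only a low-dimensional subset, the ensemble sidesteps the curse of dimensionality while the majority vote recovers accuracy; swapping the stand-in classifier for an SVM preserves the same structure.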
A Numerical Comparison of Rule Ensemble Methods and Support Vector Machines
Energy Technology Data Exchange (ETDEWEB)
Meza, Juan C.; Woods, Mark
2009-12-18
Machine or statistical learning is a growing field that encompasses many scientific problems, including estimating parameters from data, identifying risk factors in health studies, image recognition, and finding clusters within datasets, to name just a few examples. Statistical learning can be described as 'learning from data', with the goal of making a prediction of some outcome of interest. This prediction is usually made on the basis of a computer model that is built using data where the outcomes and a set of features have been previously matched. The computer model is called a learner, hence the name machine learning. In this paper, we present two such algorithms, a support vector machine method and a rule ensemble method. We compared their predictive power on three type Ia supernova data sets provided by the Nearby Supernova Factory and found that while both methods give accuracies of approximately 95%, the rule ensemble method gives much lower false negative rates.
Rhythmic canons and modular tiling
Caure, Hélianthe
2016-01-01
This thesis is a contribution to the study of modulo p tiling. Many mathematical and computational tools were used in the study of rhythmic tiling canons. Recent research has mainly focused on finding tilings without inner periodicity, called Vuza canons. Those canons are a constructive basis for all rhythmic tiling canons; however, they are really difficult to obtain. The best current method is a brute-force exploration that, despite a few recent enhancements, is exponential. Many techniques ...
Thermodynamic stability of charged BTZ black holes: Ensemble dependency problem and its solution
Hendi, S H; Mamasani, R
2015-01-01
Motivated by the wide applications of thermal stability and phase transition, we investigate the thermodynamic properties of charged BTZ black holes. We apply the standard method to calculate the heat capacity and the Hessian matrix, and find that the thermal stability of charged BTZ solutions depends on the choice of ensemble. To overcome this problem, we treat the cosmological constant as a thermodynamic variable. With this modification, we show that the ensemble dependency is eliminated and the thermal stability conditions are the same in both ensembles. Then, we generalize our solutions to the case of nonlinear electrodynamics. We show how the nonlinear matter field modifies the geometrical behavior of the metric function. We also study the phase transition and thermal stability of these black holes in the context of both the canonical and grand canonical ensembles. We show that by considering the cosmological constant as a thermodynamic variable and modifying the Hessian matrix, the ensemble dependency of thermal stability...
Wan, H.; Rasch, P. J.; Zhang, K.; Qian, Y.; Yan, H.; Zhao, C.
2014-09-01
This paper explores the feasibility of an experimentation strategy for investigating sensitivities in fast components of atmospheric general circulation models. The basic idea is to replace the traditional serial-in-time long-term climate integrations by representative ensembles of shorter simulations. The key advantage of the proposed method lies in its efficiency: since fewer days of simulation are needed, the computational cost is lower, and because individual realizations are independent and can be integrated simultaneously, the new dimension of parallelism can dramatically reduce the turnaround time in benchmark tests, sensitivity studies, and model tuning exercises. The strategy is not appropriate for exploring the sensitivity of all model features, but it is very effective in many situations. Two examples are presented using the Community Atmosphere Model, version 5. In the first example, the method is used to characterize sensitivities of the simulated clouds to time-step length. Results show that 3-day ensembles of 20 to 50 members are sufficient to reproduce the main signals revealed by traditional 5-year simulations. A nudging technique is applied to an additional set of simulations to help understand the contribution of physics-dynamics interaction to the detected time-step sensitivity. In the second example, multiple empirical parameters related to cloud microphysics and aerosol life cycle are perturbed simultaneously in order to find out which parameters have the largest impact on the simulated global mean top-of-atmosphere radiation balance. It turns out that 12-member ensembles of 10-day simulations are able to reveal the same sensitivities as seen in 4-year simulations performed in a previous study. In both cases, the ensemble method reduces the total computational time by a factor of about 15, and the turnaround time by a factor of several hundred. The efficiency of the method makes it particularly useful for the development of high
A Fuzzy Integral Ensemble Method in Visual P300 Brain-Computer Interface
Cavrini, Francesco; Quitadamo, Lucia Rita; Saggio, Giovanni
2016-01-01
We evaluate the possibility of applying combinations of classifiers, based on fuzzy measures and integrals, to electroencephalography-based Brain-Computer Interfaces (BCIs). In particular, we present an ensemble method that can be applied to a variety of systems and evaluate it in the context of a visual P300-based BCI. Offline analysis of data from 5 subjects lets us argue that the proposed classification strategy is suitable for BCI. Indeed, the achieved performance is significantly greater than the average of the base classifiers and, broadly speaking, similar to that of the best one. Thus the proposed methodology allows realizing systems that can be used by different subjects without the need for a preliminary configuration phase in which the best classifier for each user has to be identified. Moreover, the ensemble is often capable of detecting uncertain situations and turning them from misclassifications into abstentions, thereby improving the level of safety in BCI for environmental or device control. PMID:26819595
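The fuzzy-integral combination underlying this work can be illustrated with a discrete Choquet integral. The fuzzy measure values and per-classifier scores below are invented for illustration; a real system would learn the measure from data.

```python
def choquet(scores, measure):
    """
    Discrete Choquet integral of classifier scores with respect to a fuzzy
    measure, given as a dict mapping frozensets of classifier names to
    values in [0, 1] (monotone, with measure of the full set equal to 1).
    """
    items = sorted(scores.items(), key=lambda kv: kv[1])  # ascending scores
    total, prev = 0.0, 0.0
    remaining = set(scores)
    for name, value in items:
        total += (value - prev) * measure[frozenset(remaining)]
        prev = value
        remaining.discard(name)
    return total

# Hypothetical fuzzy measure over three base classifiers A, B, C; the
# measure of a coalition need not equal the sum of its members' measures.
g = {
    frozenset(): 0.0,
    frozenset("A"): 0.3, frozenset("B"): 0.4, frozenset("C"): 0.2,
    frozenset("AB"): 0.8, frozenset("AC"): 0.5, frozenset("BC"): 0.6,
    frozenset("ABC"): 1.0,
}

# Per-classifier confidence that the current epoch contains the target P300.
scores = {"A": 0.9, "B": 0.6, "C": 0.2}
print(round(choquet(scores, g), 3))  # → 0.61
```

Unlike a weighted average, the Choquet integral can model interaction between classifiers (a coalition worth more, or less, than its parts), which is what supports the abstention behaviour described in the abstract.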
He, Chengfei; Zhi, Xiefei; You, Qinglong; Song, Bin; Fraedrich, Klaus
2015-08-01
This study conducted 24- to 72-h multi-model ensemble forecasts to explore the tracks and intensities (central mean sea level pressure) of tropical cyclones (TCs). Forecast data for the northwestern Pacific basin in 2010 and 2011 were selected from the China Meteorological Administration, European Centre for Medium-Range Weather Forecasts (ECMWF), Japan Meteorological Agency, and National Centers for Environmental Prediction datasets of the Observing System Research and Predictability Experiment Interactive Grand Global Ensemble project. The Kalman Filter was employed to conduct the TC forecasts, along with the ensemble mean and super-ensemble for comparison. The following results were obtained: (1) The statistical-dynamical Kalman Filter, in which recent observations are given more importance and model weighting coefficients are adjusted over time, produced quite different results from those of the super-ensemble. (2) The Kalman Filter reduced the TC mean absolute track forecast error by approximately 50, 80 and 100 km in the 24-, 48- and 72-h forecasts, respectively, compared with the best individual model (ECMWF). Also, the intensity forecasts were improved by the Kalman Filter to some extent in terms of average intensity deviation (AID) and correlation coefficients with reanalysis intensity data. Overall, the Kalman Filter technique performed better than the individual models, the ensemble mean, and the super-ensemble in 3-day forecasts. The implication of this study is that this technique appears to be a very promising statistical-dynamical method for multi-model ensemble forecasts of TCs.
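As a rough illustration of the idea of adjusting model weights over time, with more importance given to recent observations, here is a much-simplified recency-weighted combination. It is not the paper's Kalman filter, and the model names and pressure values are hypothetical.

```python
import math

def update_weights(weights, forecasts, observation, gain=0.3):
    """One update cycle: score each model by its current error and blend
    the scores into the running weights (recent errors count more)."""
    scores = {m: math.exp(-abs(f - observation)) for m, f in forecasts.items()}
    total = sum(scores.values())
    new = {m: (1 - gain) * weights[m] + gain * scores[m] / total
           for m in forecasts}
    norm = sum(new.values())
    return {m: v / norm for m, v in new.items()}

def combine(weights, forecasts):
    """Weighted multi-model consensus forecast."""
    return sum(weights[m] * f for m, f in forecasts.items())

# Hypothetical 24-h central-pressure forecasts (hPa) from three centres,
# verified against an analysis value after each cycle.
weights = {"ECMWF": 1 / 3, "JMA": 1 / 3, "NCEP": 1 / 3}
history = [
    ({"ECMWF": 962.0, "JMA": 970.0, "NCEP": 975.0}, 960.0),
    ({"ECMWF": 955.0, "JMA": 964.0, "NCEP": 970.0}, 953.0),
]
for forecasts, observed in history:
    weights = update_weights(weights, forecasts, observed)

# The consistently more accurate model accumulates the largest weight.
print(max(weights, key=weights.get))  # → ECMWF
```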
Data Mining and Ensemble of Learning Methods
Institute of Scientific and Technical Information of China (English)
刁力力; 胡可云; 陆玉昌; 石纯一
2001-01-01
Data mining is one kind of solution to the problem of information explosion. Classification and prediction are among the most fundamental tasks in the data-mining field. Many experiments have shown that the results of ensembles of learning methods are generally better than those of single learning methods most of the time. In this sense, it is of great value to introduce ensembles of learning methods into data mining. This paper introduces data mining and ensembles of learning methods respectively, along with an analysis of the role that ensembles of learning methods can play in several important practical aspects of data mining: text mining, multimedia information mining, and web mining.
Extensions and applications of ensemble-of-trees methods in machine learning
Bleich, Justin
Ensemble-of-trees algorithms have risen to the forefront of machine learning due to their ability to generate high forecasting accuracy for a wide array of regression and classification problems. Classic ensemble methodologies such as random forests (RF) and stochastic gradient boosting (SGB) rely on algorithmic procedures to generate fits to data. In contrast, more recent ensemble techniques such as Bayesian Additive Regression Trees (BART) and Dynamic Trees (DT) focus on an underlying Bayesian probability model to generate the fits. These new probability-model-based approaches show much promise versus their algorithmic counterparts, but also offer substantial room for improvement. The first part of this thesis focuses on methodological advances for ensemble-of-trees techniques, with an emphasis on the more recent Bayesian approaches. In particular, we focus on extensions of BART in four distinct ways. First, we develop a more robust implementation of BART for both research and application. We then develop a principled approach to variable selection for BART, as well as the ability to naturally incorporate prior information on important covariates into the algorithm. Next, we propose a method for handling missing data that relies on the recursive structure of decision trees and does not require imputation. Last, we relax the assumption of homoskedasticity in the BART model to allow for parametric modeling of heteroskedasticity. The second part of this thesis returns to the classic algorithmic approaches in the context of classification problems with asymmetric costs of forecasting errors. First, we consider the performance of RF and SGB more broadly and demonstrate their superiority to logistic regression for applications in criminology with asymmetric costs. Next, we use RF to forecast unplanned hospital readmissions upon patient discharge with asymmetric costs taken into account. Finally, we explore the construction of stable decision trees for forecasts of
Canonical Information Analysis
DEFF Research Database (Denmark)
Vestergaard, Jacob Schack; Nielsen, Allan Aasbjerg
2015-01-01
Canonical correlation analysis is an established multivariate statistical method in which correlation between linear combinations of multivariate sets of variables is maximized. In canonical information analysis, introduced here, linear correlation as a measure of association between variables is replaced by the information-theoretical, entropy-based measure mutual information, which is a much more general measure of association. We make canonical information analysis feasible for large sample problems, including for example multispectral images, due to the use of a fast kernel density estimator for entropy estimation. Canonical information analysis is applied successfully to (1) simple simulated data to illustrate the basic idea and evaluate performance, (2) fusion of weather radar and optical geostationary satellite data in a situation with heavy precipitation, and (3) change detection in...
Efendiev, Yalchin R.
2013-08-21
In this paper, we propose multilevel Monte Carlo (MLMC) methods that use ensemble level mixed multiscale methods in the simulations of multiphase flow and transport. The contribution of this paper is twofold: (1) a design of ensemble level mixed multiscale finite element methods and (2) a novel use of mixed multiscale finite element methods within multilevel Monte Carlo techniques to speed up the computations. The main idea of ensemble level multiscale methods is to construct local multiscale basis functions that can be used for any member of the ensemble. In this paper, we consider two ensemble level mixed multiscale finite element methods: (1) the no-local-solve-online ensemble level method (NLSO); and (2) the local-solve-online ensemble level method (LSO). The first approach was proposed in Aarnes and Efendiev (SIAM J. Sci. Comput. 30(5):2319-2339, 2008) while the second approach is new. Both mixed multiscale methods use a number of snapshots of the permeability media in generating multiscale basis functions. As a result, in the off-line stage, we construct multiple basis functions for each coarse region where basis functions correspond to different realizations. In the no-local-solve-online ensemble level method, one uses the whole set of precomputed basis functions to approximate the solution for an arbitrary realization. In the local-solve-online ensemble level method, one uses the precomputed functions to construct a multiscale basis for a particular realization. With this basis, the solution corresponding to this particular realization is approximated in LSO mixed multiscale finite element method (MsFEM). In both approaches, the accuracy of the method is related to the number of snapshots computed based on different realizations that one uses to precompute a multiscale basis. In this paper, ensemble level multiscale methods are used in multilevel Monte Carlo methods (Giles 2008a, Oper. Res. 56(3):607-617; 2008b). In multilevel Monte Carlo methods, more accurate
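The telescoping structure of a multilevel Monte Carlo estimator, which the ensemble-level multiscale solvers plug into, can be sketched as follows; the toy "solver" below is a stand-in for a multiscale flow simulation on a permeability realization.

```python
import random

def mlmc_estimate(sampler, samples_per_level, seed=42):
    """Multilevel Monte Carlo: estimate E[P_L] as the telescoping sum
    E[P_0] + sum over l of E[P_l - P_{l-1}], using many cheap coarse
    samples and few expensive fine ones."""
    rng = random.Random(seed)
    estimate = 0.0
    for level, n in enumerate(samples_per_level):
        total = 0.0
        for _ in range(n):
            omega = rng.random()  # the same realization couples both levels
            if level == 0:
                total += sampler(0, omega)
            else:
                total += sampler(level, omega) - sampler(level - 1, omega)
        estimate += total / n
    return estimate

def sampler(level, omega):
    """Toy 'solver': converges to omega**2 as the level is refined,
    standing in for a multiscale flow solve on realization omega."""
    return omega ** 2 + 0.5 ** level * (omega - 0.5)

# Many coarse samples, progressively fewer fine ones.
est = mlmc_estimate(sampler, samples_per_level=[4000, 400, 40])
print(round(est, 3))  # close to E[omega**2] = 1/3 for omega ~ U(0, 1)
```

Because the level corrections P_l - P_{l-1} have small variance, the fine (expensive) levels need very few samples, which is where the cost saving over plain Monte Carlo comes from.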
Qin, Hong; Liu, Jian; Xiao, Jianyuan; Zhang, Ruili; He, Yang; Wang, Yulei; Sun, Yajuan; Burby, Joshua W.; Ellison, Leland; Zhou, Yao
2015-01-01
Particle-in-Cell (PIC) simulation is the most important numerical tool in plasma physics. However, its long-term accuracy has not been established. To overcome this difficulty, we developed a canonical symplectic PIC method for the Vlasov-Maxwell system by discretizing its canonical Poisson bracket. A fast local algorithm to solve the symplectic implicit time advance is discovered without root searching or global matrix inversion, enabling applications of the proposed method to very large-sca...
Simulating large-scale crop yield by using perturbed-parameter ensemble method
Iizumi, T.; Yokozawa, M.; Sakurai, G.; Nishimori, M.
2010-12-01
One of the concerning issues of food security under a changing climate is predicting the interannual variation of crop production induced by climate extremes and a modulated climate. To secure the food supply for a growing world population, a methodology that can accurately predict crop yield on a large scale is needed. However, when developing a process-based large-scale crop model at the scale of general circulation models (GCMs), 100 km in latitude and longitude, researchers encounter difficulties with the spatial heterogeneity of available information on crop production, such as cultivated cultivars and management. This study proposed an ensemble-based simulation method that uses a process-based crop model and a systematic parameter-perturbation procedure, taking maize in the U.S., China, and Brazil as examples. The crop model was developed by modifying the fundamental structure of the Soil and Water Assessment Tool (SWAT) to incorporate the effect of heat stress on yield. We call the new model PRYSBI: the Process-based Regional-scale Yield Simulator with Bayesian Inference. The posterior probability density function (PDF) of 17 parameters, which represents the crop- and grid-specific features of the crop and its uncertainty given the data, was estimated by Bayesian inversion analysis. We then took 1500 ensemble members of simulated yield values, based on parameter sets sampled from the posterior PDF, to describe yearly changes of the yield, i.e., the perturbed-parameter ensemble method. The ensemble median for 27 years (1980-2006) was compared with data aggregated from county yields. On a country scale, the ensemble median of the simulated yield showed good correspondence with the reported yield: the Pearson correlation coefficient is over 0.6 for all countries. In contrast, on a grid scale, the correspondence
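The perturbed-parameter ensemble procedure (sample parameter sets from a posterior, run the crop model for each, report the ensemble median) can be sketched with a toy yield model; the model form, parameter values, and posterior below are invented for illustration and are not PRYSBI.

```python
import random
import statistics

def toy_yield_model(t_opt, sensitivity, temperature):
    """Hypothetical yield (t/ha): falls off as the growing-season
    temperature departs from the cultivar's optimum t_opt."""
    return max(0.0, 10.0 - sensitivity * (temperature - t_opt) ** 2)

rng = random.Random(7)
# 1500 parameter sets drawn from a stand-in "posterior PDF".
posterior_draws = [(rng.gauss(24.0, 1.0), abs(rng.gauss(0.4, 0.1)))
                   for _ in range(1500)]

medians = {}
for temperature in [23.0, 27.0]:  # a normal season and a hot season
    ensemble = [toy_yield_model(t_opt, sens, temperature)
                for t_opt, sens in posterior_draws]
    medians[temperature] = statistics.median(ensemble)

# The hot season yields less, and the ensemble spread carries the
# parameter uncertainty through to the yield estimate.
print(medians[27.0] < medians[23.0])  # → True
```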
Fast-sum method for the elastic field of three-dimensional dislocation ensembles
International Nuclear Information System (INIS)
The elastic field of complex shape ensembles of dislocation loops is developed as an essential ingredient in the dislocation dynamics method for computer simulation of mesoscopic plastic deformation. Dislocation ensembles are sorted into individual loops, which are then divided into segments represented as parametrized space curves. Numerical solutions are presented as fast numerical sums for relevant elastic field variables (i.e., displacement, strain, stress, force, self-energy, and interaction energy). Gaussian numerical quadratures are utilized to solve for field equations of linear elasticity in an infinite isotropic elastic medium. The accuracy of the method is verified by comparison of numerical results to analytical solutions for typical prismatic and slip dislocation loops. The method is shown to be highly accurate, computationally efficient, and numerically convergent as the number of segments and quadrature points are increased on each loop. Several examples of method applications to calculations of the elastic field of simple and complex loop geometries are given in infinite crystals. The effect of crystal surfaces on the redistribution of the elastic field is demonstrated by superposition of a finite-element image force field on the computed results. copyright 1999 The American Physical Society
An ensemble method for data stream classification in the presence of concept drift
Institute of Scientific and Technical Information of China (English)
Omid ABBASZADEH; Ali AMIRI‡; Ali Reza KHANTEYMOORI
2015-01-01
One recent area of interest in computer science is data stream management and processing. By 'data stream', we refer to continuous and rapidly generated packages of data. Specific features of data streams are immense volume, high production rate, limited data processing time, and data concept drift; these features differentiate the data stream from standard types of data. An issue for the data stream is classification of input data. A novel ensemble classifier is proposed in this paper. The classifier uses base classifiers with two weighting functions under different data input conditions. In addition, a new method is used to determine drift, which emphasizes the precision of the algorithm. Another characteristic of the proposed method is the removal of varying numbers of base classifiers based on their quality. The implementation of a weighting mechanism for the base classifiers at the decision-making stage is another advantage of the algorithm. This facilitates adaptability when drifts take place, which leads to classifiers with higher efficiency. Furthermore, the proposed method was tested on a set of standard data, and the results confirm higher accuracy compared to available ensemble classifiers and single classifiers. In addition, in some cases the proposed classifier is faster and needs less storage space.
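A minimal sketch of a drift-aware weighted ensemble in this spirit: base classifiers vote with weights derived from a sliding window of recent accuracy, and persistently poor members are pruned. This is an illustrative simplification, not the paper's exact algorithm.

```python
from collections import deque

class DriftEnsemble:
    """Weighted ensemble for data streams: accuracy-weighted voting over a
    sliding window, with removal of base classifiers whose recent quality
    drops below a threshold."""

    def __init__(self, classifiers, window=50, drop_below=0.4):
        self.classifiers = list(classifiers)
        self.window = {c: deque(maxlen=window) for c in self.classifiers}
        self.drop_below = drop_below

    def predict(self, x):
        votes = {}
        for clf in self.classifiers:
            label = clf(x)
            votes[label] = votes.get(label, 0.0) + self._accuracy(clf)
        return max(votes, key=votes.get)

    def update(self, x, label):
        """Record each member's hit/miss, then prune poor members."""
        for clf in self.classifiers:
            self.window[clf].append(clf(x) == label)
        self.classifiers = [c for c in self.classifiers
                            if len(self.window[c]) < 10  # grace period
                            or self._accuracy(c) >= self.drop_below]

    def _accuracy(self, clf):
        w = self.window[clf]
        return sum(w) / len(w) if w else 0.5

# Two stable members and one whose concept has drifted (always wrong here).
good1 = lambda x: x > 0
good2 = lambda x: x >= 0
drifted = lambda x: x < 0

ens = DriftEnsemble([good1, good2, drifted])
stream = [(v, v > 0) for v in [-2, -1, 3, 4, -5, 6, 1, -3, 2, 7, -1, 5]]
for x, label in stream:
    ens.update(x, label)

print(len(ens.classifiers))  # → 2 (the drifted member has been pruned)
print(ens.predict(5))        # → True
```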
Examination of multi-perturbation methods for ensemble prediction of the MJO during boreal summer
Kang, In-Sik; Jang, Pyong-Hwa; Almazroui, Mansour
2014-05-01
The impact of initialization and perturbation methods on the ensemble prediction of the boreal summer intraseasonal oscillation was investigated using 20-year hindcast predictions of a coupled general circulation model. The three perturbation methods used in the present study are the lagged-averaged forecast (LAF) method, the breeding method, and the empirical singular vector (ESV) method. Hindcast experiments were performed with a prediction interval of 10 days for extended boreal summer (May-October) seasons over a 20-year period. The empirical orthogonal function (EOF) eigenvectors of the initial perturbations depend on the individual perturbation method used. The leading EOF eigenvectors of the LAF perturbations exhibit large variances in the extratropics. Bred vectors with a breeding interval of 3 days represent the local unstable mode moving northward and eastward over the Indian and western Pacific region, and the leading EOF modes of the ESV perturbations represent planetary-scale eastward-moving perturbations over the tropics. By combining the three perturbation methods, a multi-perturbation (MP) ensemble prediction system for the intraseasonal time scale was constructed, and the effectiveness of the MP prediction system for Madden-Julian Oscillation (MJO) prediction was examined in the present study. The MJO prediction skills of the individual perturbation methods are all similar; however, the MP-based prediction has a higher level of correlation skill for predicting the real-time multivariate MJO indices compared to those of the other individual perturbation methods. The predictability of the intraseasonal oscillation is sensitive to the MJO amplitude and to the location of the dominant convective anomaly in the initial state. The improvement in the skill of the MP prediction system is more effective during periods of weak MJO activity.
Fault diagnosis method for nuclear power plant based on ensemble learning
International Nuclear Information System (INIS)
A nuclear power plant (NPP) is a very complex system that requires collecting and monitoring vast numbers of parameters, so it is hard to diagnose the faults of an NPP. An ensemble learning method is proposed to address this problem. The method was applied to learn from training samples representing typical faults of a nuclear power plant, i.e., loss-of-coolant accident (LOCA), feed-water pipe rupture, steam generator tube rupture (SGTR), and main steam pipe rupture. Simulations were carried out under normal conditions and under conditions with invalid or absent parameters, respectively. The simulation results show that the method performs well even with invalid or absent parameters, exhibiting very good generalization performance and fault tolerance. (authors)
An efficient ensemble of radial basis functions method based on quadratic programming
Shi, Renhe; Liu, Li; Long, Teng; Liu, Jian
2016-07-01
Radial basis function (RBF) surrogate models have been widely applied in engineering design optimization problems to approximate computationally expensive simulations. Ensemble of radial basis functions (ERBF) using the weighted sum of stand-alone RBFs improves the approximation performance. To achieve a good trade-off between the accuracy and efficiency of the modelling process, this article presents a novel efficient ERBF method to determine the weights through solving a quadratic programming subproblem, denoted ERBF-QP. Several numerical benchmark functions are utilized to test the performance of the proposed ERBF-QP method. The results show that ERBF-QP can significantly improve the modelling efficiency compared with several existing ERBF methods. Moreover, ERBF-QP also provides satisfactory performance in terms of approximation accuracy. Finally, the ERBF-QP method is applied to a satellite multidisciplinary design optimization problem to illustrate its practicality and effectiveness for real-world engineering applications.
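The core idea of an ERBF (a convex combination of stand-alone RBF models with weights chosen to minimize validation error) can be sketched as follows. To keep the example linear-algebra free, a normalized-RBF (kernel regression) predictor stands in for a full RBF interpolant, and a coarse grid search over the weight stands in for the quadratic programming solve of ERBF-QP.

```python
import math

def rbf_predict(xs, ys, shape, x):
    """Normalized Gaussian-RBF predictor: a kernel-regression stand-in
    for a full RBF interpolant, avoiding a linear system solve."""
    w = [math.exp(-shape * (x - xi) ** 2) for xi in xs]
    return sum(wi * yi for wi, yi in zip(w, ys)) / sum(w)

def ensemble_weight(xs, ys, val_x, val_y, shapes):
    """Choose the convex weight of a two-member RBF ensemble by a coarse
    grid search over [0, 1] that minimizes squared validation error."""
    def val_error(w):
        return sum((w * rbf_predict(xs, ys, shapes[0], x)
                    + (1 - w) * rbf_predict(xs, ys, shapes[1], x) - y) ** 2
                   for x, y in zip(val_x, val_y))
    return min((i / 100 for i in range(101)), key=val_error)

# "Expensive simulation" being approximated: f(x) = sin(x).
train_x = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
train_y = [math.sin(x) for x in train_x]
val_x = [0.5, 1.5, 2.5, 3.5, 4.5, 5.5]
val_y = [math.sin(x) for x in val_x]

# Blend a narrow-kernel and a broad-kernel RBF model.
w = ensemble_weight(train_x, train_y, val_x, val_y, shapes=(4.0, 0.25))
print(round(w, 2))
```

Since the endpoints w = 0 and w = 1 are in the search grid, the ensemble can never do worse on the validation points than the better of the two stand-alone models, which is the motivation for weighted-sum ensembles of surrogates.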
Boosting iterative stochastic ensemble method for nonlinear calibration of subsurface flow models
Elsheikh, Ahmed H.
2013-06-01
A novel parameter estimation algorithm is proposed. The inverse problem is formulated as a sequential data integration problem in which Gaussian process regression (GPR) is used to integrate the prior knowledge (static data). The search space is further parameterized using Karhunen-Loève expansion to build a set of basis functions that spans the search space. Optimal weights of the reduced basis functions are estimated by an iterative stochastic ensemble method (ISEM). ISEM employs directional derivatives within a Gauss-Newton iteration for efficient gradient estimation. The resulting update equation relies on the inverse of the output covariance matrix, which is rank deficient. In the proposed algorithm we use an iterative regularization based on the ℓ2 Boosting algorithm. ℓ2 Boosting iteratively fits the residual, and the amount of regularization is controlled by the number of iterations. A termination criterion based on the Akaike information criterion (AIC) is utilized. This regularization method is very attractive in terms of performance and simplicity of implementation. The proposed algorithm combining ISEM and ℓ2 Boosting is evaluated on several nonlinear subsurface flow parameter estimation problems. The efficiency of the proposed algorithm is demonstrated by the small size of utilized ensembles and in terms of error convergence rates. © 2013 Elsevier B.V.
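The ℓ2 Boosting regularization used here can be illustrated with componentwise L2 boosting on a toy regression problem: each iteration fits the single coordinate that best explains the current residual and takes a small shrunken step, so the iteration count controls the amount of regularization.

```python
def l2_boost(X, y, steps, shrinkage=0.1):
    """
    Componentwise L2 boosting: at each step, fit the one coordinate that
    best explains the current residual and take a small (shrunken) step.
    Stopping early leaves the fit regularized; more steps approach the
    least-squares solution.
    """
    n, p = len(X), len(X[0])
    beta = [0.0] * p
    residual = list(y)
    for _ in range(steps):
        best_j, best_b, best_gain = 0, 0.0, -1.0
        for j in range(p):
            sxx = sum(X[i][j] ** 2 for i in range(n))
            if sxx == 0:
                continue
            b = sum(X[i][j] * residual[i] for i in range(n)) / sxx
            gain = b * b * sxx  # reduction in squared error
            if gain > best_gain:
                best_j, best_b, best_gain = j, b, gain
        beta[best_j] += shrinkage * best_b
        residual = [residual[i] - shrinkage * best_b * X[i][best_j]
                    for i in range(n)]
    return beta

# y depends only on the first of three features; boosting concentrates
# the coefficients there.
X = [[1, 0.1, -0.2], [2, -0.3, 0.4], [3, 0.2, 0.1], [4, -0.1, -0.3],
     [5, 0.3, 0.2], [6, -0.2, 0.3]]
y = [2 * row[0] for row in X]

beta = l2_boost(X, y, steps=200)
print([round(b, 2) for b in beta])  # → [2.0, 0.0, 0.0]
```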
Ensemble approach combining multiple methods improves human transcription start site prediction
LENUS (Irish Health Repository)
Dineen, David G
2010-11-30
Abstract Background The computational prediction of transcription start sites is an important unsolved problem. Some recent progress has been made, but many promoters, particularly those not associated with CpG islands, are still difficult to locate using current methods. These methods use different features and training sets, along with a variety of machine learning techniques, and result in different prediction sets. Results We demonstrate the heterogeneity of current prediction sets, and take advantage of this heterogeneity to construct a two-level classifier ('Profisi Ensemble') using predictions from 7 programs, along with 2 other data sources. Support vector machines using 'full' and 'reduced' data sets are combined in an either/or approach. We achieve a 14% increase in performance over the current state-of-the-art, as benchmarked by a third-party tool. Conclusions Supervised learning methods are a useful way to combine predictions from diverse sources.
Acceleration of ensemble machine learning methods using many-core devices
Tamerus, A.; Washbrook, A.; Wyeth, D.
2015-12-01
We present a case study of the acceleration of ensemble machine learning methods using many-core devices, carried out in collaboration with Toshiba Medical Visualisation Systems Europe (TMVSE). The adoption of GPUs to execute a key algorithm in the classification of medical image data was shown to significantly reduce overall processing time. Using a representative dataset and pre-trained decision trees as input, we demonstrate how the decision-forest classification method can be mapped onto the GPU data-processing model. A GPU-based version of the decision-forest method achieved a more than 138-fold speed-up over a single-threaded CPU implementation, with further improvements possible. The same GPU-based software was then directly applied to a suitably formed dataset to benefit supervised learning techniques applied in High Energy Physics (HEP), with similar improvements in performance.
Regularized Generalized Canonical Correlation Analysis
Tenenhaus, Arthur; Tenenhaus, Michel
2011-01-01
Regularized generalized canonical correlation analysis (RGCCA) is a generalization of regularized canonical correlation analysis to three or more sets of variables. It constitutes a general framework for many multi-block data analysis methods. It combines the power of multi-block data analysis methods (maximization of well identified criteria) and…
Senjean, Bruno; Alam, Md Mehboob; Knecht, Stefan; Fromager, Emmanuel
2015-01-01
The combination of a recently proposed linear interpolation method (LIM) [Senjean et al., Phys. Rev. A 92, 012518 (2015)], which enables the calculation of weight-independent excitation energies in range-separated ensemble density-functional approximations, with the extrapolation scheme of Savin [J. Chem. Phys. 140, 18A509 (2014)] is presented in this work. It is shown that LIM excitation energies vary quadratically with the inverse of the range-separation parameter μ when the latter is large. As a result, the extrapolation scheme, which is usually applied to long-range interacting energies, can be adapted straightforwardly to LIM. This extrapolated LIM (ELIM) has been tested on a small test set consisting of He, Be, H2 and HeH+. Relatively accurate results have been obtained for the first singlet excitation energies with the typical value μ = 0.4. The improvement of LIM after extrapolation is remarkable, in particular for the doubly excited 2 ¹Σg⁺ state in the stretched H2 molecule. Three-state ensemble ...
A Fuzzy Integral Ensemble Method in Visual P300 Brain-Computer Interface
Directory of Open Access Journals (Sweden)
Francesco Cavrini
2016-01-01
We evaluate the possibility of applying combinations of classifiers, based on fuzzy measures and integrals, to electroencephalography-based Brain-Computer Interfaces (BCIs). In particular, we present an ensemble method that can be applied to a variety of systems and evaluate it in the context of a visual P300-based BCI. Offline analysis of data from 5 subjects lets us argue that the proposed classification strategy is suitable for BCI. Indeed, the achieved performance is significantly greater than the average of the base classifiers and, broadly speaking, similar to that of the best one. Thus the proposed methodology allows realizing systems that can be used by different subjects without the need for a preliminary configuration phase in which the best classifier for each user has to be identified. Moreover, the ensemble is often capable of detecting uncertain situations and turning them from misclassifications into abstentions, thereby improving the level of safety in BCI for environmental or device control.
Flicek, Paul; Amode, M Ridwan; Barrell, Daniel; Beal, Kathryn; Brent, Simon; Carvalho-Silva, Denise; Clapham, Peter; Coates, Guy; Fairley, Susan; Fitzgerald, Stephen; Gil, Laurent; Gordon, Leo; Hendrix, Maurice; Hourlier, Thibaut; Johnson, Nathan; Kähäri, Andreas K; Keefe, Damian; Keenan, Stephen; Kinsella, Rhoda; Komorowska, Monika; Koscielny, Gautier; Kulesha, Eugene; Larsson, Pontus; Longden, Ian; McLaren, William; Muffato, Matthieu; Overduin, Bert; Pignatelli, Miguel; Pritchard, Bethan; Riat, Harpreet Singh; Ritchie, Graham R S; Ruffier, Magali; Schuster, Michael; Sobral, Daniel; Tang, Y Amy; Taylor, Kieron; Trevanion, Stephen; Vandrovcova, Jana; White, Simon; Wilson, Mark; Wilder, Steven P; Aken, Bronwen L; Birney, Ewan; Cunningham, Fiona; Dunham, Ian; Durbin, Richard; Fernández-Suarez, Xosé M; Harrow, Jennifer; Herrero, Javier; Hubbard, Tim J P; Parker, Anne; Proctor, Glenn; Spudich, Giulietta; Vogel, Jan; Yates, Andy; Zadissa, Amonida; Searle, Stephen M J
2012-01-01
The Ensembl project (http://www.ensembl.org) provides genome resources for chordate genomes with a particular focus on human genome data as well as data for key model organisms such as mouse, rat and zebrafish. Five additional species were added in the last year including gibbon (Nomascus leucogenys) and Tasmanian devil (Sarcophilus harrisii) bringing the total number of supported species to 61 as of Ensembl release 64 (September 2011). Of these, 55 species appear on the main Ensembl website and six species are provided on the Ensembl preview site (Pre!Ensembl; http://pre.ensembl.org) with preliminary support. The past year has also seen improvements across the project. PMID:22086963
On Reducing the Effect of Covariate Factors in Gait Recognition: A Classifier Ensemble Method.
Guan, Yu; Li, Chang-Tsun; Roli, Fabio
2015-07-01
Robust human gait recognition is challenging because of the presence of covariate factors such as carrying condition, clothing, walking surface, etc. In this paper, we model the effect of covariates as an unknown partial feature corruption problem. Since the locations of corruptions may differ for different query gaits, relevant features may become irrelevant when the walking condition changes. In this case, it is difficult to train one fixed classifier that is robust to a large number of different covariates. To tackle this problem, we propose a classifier ensemble method based on the random subspace method (RSM) and majority voting (MV). Its theoretical basis suggests it is insensitive to the locations of corrupted features, and thus can generalize well to a large number of covariates. We also extend this method by proposing two strategies, i.e., local enhancing (LE) and hybrid decision-level fusion (HDF), to suppress the ratio of false votes to true votes (before MV). The performance of our approach is competitive against the most challenging covariates such as clothing, walking surface, and elapsed time. We evaluate our method on the USF dataset and the OU-ISIR-B dataset, and it achieves much higher performance than other state-of-the-art algorithms. PMID:26352457
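The RSM-plus-majority-voting structure described above can be sketched in a few lines. This is a toy illustration, not the paper's gait classifiers: the base learners are simple nearest-centroid classifiers, and the data (with a block of deliberately "corrupted" noise features) are synthetic.

```python
import random

def train_rsm_ensemble(X, y, n_classifiers=15, subspace_frac=0.5, seed=0):
    """Random subspace method (RSM): each base learner is a nearest-centroid
    classifier trained on a random subset of the features."""
    rng = random.Random(seed)
    n_features = len(X[0])
    k = max(1, int(subspace_frac * n_features))
    ensemble = []
    for _ in range(n_classifiers):
        feats = rng.sample(range(n_features), k)
        centroids = {}
        for label in set(y):
            rows = [x for x, t in zip(X, y) if t == label]
            centroids[label] = [sum(r[f] for r in rows) / len(rows) for f in feats]
        ensemble.append((feats, centroids))
    return ensemble

def predict_mv(ensemble, x):
    """Majority voting (MV) over the base classifiers' votes."""
    votes = []
    for feats, centroids in ensemble:
        sub = [x[f] for f in feats]
        label = min(centroids,
                    key=lambda c: sum((a - b) ** 2
                                      for a, b in zip(sub, centroids[c])))
        votes.append(label)
    return max(set(votes), key=votes.count)

# Toy data: 8 informative features separate the classes; the last 4 features
# are pure noise, mimicking the partial feature corruption setting.
rng = random.Random(1)
X, y = [], []
for label in (0, 1):
    for _ in range(30):
        x = [label * 2.0 + rng.gauss(0, 0.5) for _ in range(8)]
        x += [rng.gauss(0, 3.0) for _ in range(4)]  # corrupted features
        X.append(x)
        y.append(label)

model = train_rsm_ensemble(X, y)
print(predict_mv(model, [2.0] * 8 + [0.0] * 4))  # classifies as class 1
```

Because each base learner sees only half of the features, no single corrupted block dominates every vote, which is the intuition behind RSM's insensitivity to corruption locations.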
Inferring Association between Compound and Pathway with an Improved Ensemble Learning Method.
Song, Meiyue; Jiang, Zhenran
2015-11-01
The emergence of compound molecular data coupled to pathway information offers the possibility of using machine learning methods for inferring compound-pathway associations. To provide insights into the global relationship between compounds and their affected pathways, an improved Rotation Forest ensemble learning method called RGRF (Relief & GBSSL - Rotation Forest) was proposed to predict their potential associations. The main characteristic of RGRF lies in using the Relief algorithm for feature extraction and the Graph-Based Semi-Supervised Learning method as the classifier. By incorporating chemical structure information, drug mode-of-action information and genomic space information, our method can achieve better precision and flexibility in compound-pathway prediction. Moreover, several new compound-pathway associations that have the potential for further clinical investigation have been identified by database searching. In the end, a prediction tool was developed using the RGRF algorithm, which can predict the interactions between pathways and all of the compounds in the cMap database. PMID:27491036
The random-variable canonical distribution
International Nuclear Information System (INIS)
An alternative interpretation to Gibbs' concept of the canonical distribution for an ensemble of systems in statistical equilibrium is proposed. Whereas Gibbs' theory is based upon a consideration of systems subject to dynamical law, the present analysis relies neither on the classical equations of motion nor on any a priori probability of a complexion; rather, it makes use of the basic algebra of random variables and, specifically, invokes the law of large numbers. Thereby, a canonical distribution is derived which describes a macrosystem in probabilistic, rather than deterministic, terms, and facilitates the understanding of energy fluctuations which occur in macrosystems at an overall constant ensemble temperature. A discussion is given of a modified form of the Gibbs canonical distribution which takes full account of the effects of random energy fluctuations. It is demonstrated that the results from this modified analysis are entirely consonant with those derived from the random-variable approach. (author)
Functional Multiple-Set Canonical Correlation Analysis
Hwang, Heungsun; Jung, Kwanghee; Takane, Yoshio; Woodward, Todd S.
2012-01-01
We propose functional multiple-set canonical correlation analysis for exploring associations among multiple sets of functions. The proposed method includes functional canonical correlation analysis as a special case when only two sets of functions are considered. As in classical multiple-set canonical correlation analysis, computationally, the…
Directory of Open Access Journals (Sweden)
P. J. Irvine
2013-02-01
We present a simple method to generate a perturbed parameter ensemble (PPE) of a fully-coupled atmosphere-ocean general circulation model (AOGCM), HadCM3, without requiring flux adjustment. The aim was to produce an ensemble that samples parametric uncertainty in some key variables and displays a similar range of behavior to that seen in multi-model ensembles (MMEs). Six atmospheric parameters, a sea-ice parameter and an ocean parameter were jointly perturbed within a reasonable range to generate an initial group of 200 members. To screen out implausible ensemble members, 20 yr pre-industrial control simulations were run, and members whose temperature response to the parameter perturbations was projected to be outside the range of 13.6 ± 2°C, i.e. near the observed pre-industrial global mean, were discarded. 21 members, including the standard unperturbed model, were accepted, covering almost the entire span of the eight parameters, challenging the argument that without flux adjustment parameter ranges would be unduly restricted. This ensemble was used in three experiments: an 800 yr pre-industrial simulation, a 150 yr quadrupled-CO2 simulation, and a 150 yr 1% per annum CO2-rise simulation. The behavior of the PPE for the pre-industrial control compared well to the CMIP3 ensemble for a number of surface and atmospheric column variables, with the exception of a few members in the Tropics. However, we find that members of the PPE with low values of the entrainment rate coefficient show very large increases in upper tropospheric and stratospheric water vapor concentrations in response to elevated CO2, and some show implausibly high climate sensitivities; as such, some of these members will be excluded from future experiments with this ensemble. The outcome of this study is a PPE of a fully-coupled AOGCM which samples parametric uncertainty with a range of behavior similar to the CMIP3 ensemble, and a simple methodology which would be applicable to other GCMs.
Flicek, Paul; Amode, M. Ridwan; Barrell, Daniel; Beal, Kathryn; Brent, Simon; Carvalho-Silva, Denise; Clapham, Peter; Coates, Guy; Fairley, Susan; Fitzgerald, Stephen; Gil, Laurent; Gordon, Leo; Hendrix, Maurice; Hourlier, Thibaut; Johnson, Nathan
2011-01-01
The Ensembl project (http://www.ensembl.org) provides genome resources for chordate genomes with a particular focus on human genome data as well as data for key model organisms such as mouse, rat and zebrafish. Five additional species were added in the last year including gibbon (Nomascus leucogenys) and Tasmanian devil (Sarcophilus harrisii) bringing the total number of supported species to 61 as of Ensembl release 64 (September 2011). Of these, 55 species appear on the main Ensembl website ...
Hamdi, Anis; Missaoui, Oualid; Frigui, Hichem; Gader, Paul
2010-04-01
We propose a landmine detection algorithm that uses ensemble discrete hidden Markov models with context-dependent training schemes. We hypothesize that the data are generated by K models. These different models reflect the fact that mines and clutter objects have different characteristics depending on the mine type, soil and weather conditions, and burial depth. Model identification is based on clustering in the log-likelihood space. First, one HMM is fit to each of the N individual sequences. For each fitted model, we evaluate the log-likelihood of each sequence. This results in an N x N log-likelihood distance matrix that is partitioned into K groups. In the second step, we learn the parameters of one discrete HMM per group. We propose using and optimizing various training approaches for the different K groups depending on their size and homogeneity. In particular, we investigate the maximum likelihood and the MCE-based discriminative training approaches. Results on large and diverse ground penetrating radar data collections show that the proposed method can identify meaningful and coherent HMM models that describe different properties of the data. Each HMM models a group of alarm signatures that share common attributes such as clutter, mine type, and burial depth. Our initial experiments have also indicated that the proposed mixture model outperforms the baseline HMM that uses one model for the mine and one model for the background.
An ensemble method for gene discovery based on DNA microarray data
Institute of Scientific and Technical Information of China (English)
LI Xia; RAO Shaoqi; ZHANG Tianwen; GUO Zheng; ZHANG Qingpu; Kathy L. MOSER; Eric J. TOPOL
2004-01-01
The advent of DNA microarray technology has offered the promise of casting new insights into deciphering the secrets of life by monitoring the activities of thousands of genes simultaneously. Current analyses of microarray data focus on precise classification of biological types, for example, tumor versus normal tissues. A further scientifically challenging task is to extract disease-relevant genes from the bewildering amounts of raw data, which is one of the most critical themes in the post-genomic era but is generally ignored due to the lack of an efficient approach. In this paper, we present a novel ensemble method for gene extraction that can be tailored to fulfill multiple biological tasks, including (i) precise classification of biological types; (ii) disease gene mining; and (iii) target-driven gene networking. We also give a numerical application for (i) and (ii) using a public microarray data set, and set aside a separate paper to address (iii).
Xue, Xiaoming; Zhou, Jianzhong; Xu, Yanhe; Zhu, Wenlong; Li, Chaoshun
2015-10-01
Ensemble empirical mode decomposition (EEMD) represents a significant improvement over the original empirical mode decomposition (EMD) method for eliminating the mode mixing problem. However, the added white noises generate some tough problems, including the high computational cost, the determination of the two critical parameters (the amplitude of the added white noise and the number of ensemble trials), and the contamination of residue noise in the signal reconstruction. To solve these problems, an adaptively fast EEMD (AFEEMD) method combined with complementary EEMD (CEEMD) is proposed in this paper. In the proposed method, the two critical parameters are fixed at 0.01 times the standard deviation of the original signal and two ensemble trials, respectively. Instead, the upper frequency limit of the added white noise is the key parameter which needs to be prescribed beforehand. Unlike the original EEMD method, only two high-frequency white noises are added to the signal under investigation, in anti-phase, in AFEEMD. Furthermore, an index termed the relative root-mean-square error is employed for the adaptive selection of the proper upper frequency limit of the added white noises. A simulation test and vibration-signal-based fault diagnosis of rolling element bearings under different fault types are utilized to demonstrate the feasibility and effectiveness of the proposed method. The analysis results indicate that the AFEEMD method represents a sound improvement over the original EEMD method, and has strong practicability.
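The complementary (anti-phase) noise idea that AFEEMD borrows from CEEMD can be illustrated directly. The sketch below omits the EMD step itself and only shows why adding the same white noise in anti-phase to two copies of the signal leaves no residue noise in the ensemble mean; the 0.01-standard-deviation amplitude follows the abstract, and everything else is an invented signal.

```python
import random
import statistics

def complementary_pair_mean(signal, noise):
    """The complementary-ensemble idea behind CEEMD/AFEEMD: process the signal
    twice with the same white noise added in anti-phase (+n and -n), then
    average the two results so the added noise cancels exactly in the ensemble
    mean. (The EMD decomposition applied to each noisy copy is omitted.)"""
    plus = [s + n for s, n in zip(signal, noise)]
    minus = [s - n for s, n in zip(signal, noise)]
    return [(a + b) / 2.0 for a, b in zip(plus, minus)]

rng = random.Random(0)
signal = [i * 0.1 for i in range(64)]
# Noise amplitude fixed at 0.01 times the signal's standard deviation,
# as prescribed for the two ensemble trials in the abstract above.
amp = 0.01 * statistics.pstdev(signal)
noise = [rng.gauss(0, amp) for _ in signal]
recon = complementary_pair_mean(signal, noise)
residue = max(abs(r - s) for r, s in zip(recon, signal))
print(residue)  # ~0: no residue noise contaminates the reconstruction
```

This cancellation is why AFEEMD can get away with only two ensemble trials, whereas plain EEMD must average many independent noisy trials to suppress the residue noise statistically.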
International Nuclear Information System (INIS)
The commercial viability of PEMFC (proton exchange membrane fuel cell) systems depends on effective fault diagnosis technologies. However, many researchers have studied PEMFC systems experimentally without considering certain fault conditions. In this paper, an ANN (artificial neural network) ensemble method is presented that improves the stability and reliability of PEMFC systems. First, a transient model is built, giving flexibility for application to some exceptional conditions; this PEMFC dynamic model is built and simulated using MATLAB. Second, using this model and experiments, the mechanisms of four different faults in PEMFC systems are analyzed in detail. Third, the ANN ensemble for fault diagnosis is built and modeled; this model is trained and tested with the data. The test results show that, compared with the previous method for fault diagnosis of PEMFC systems, the proposed method has a higher diagnostic rate and better generalization ability. Moreover, the partial structure of this method can be altered easily along with changes to the PEMFC systems. In general, this method for diagnosis of PEMFC has value for certain applications. - Highlights: • We analyze the principles and mechanisms of the four faults in the PEMFC (proton exchange membrane fuel cell) system. • We design and model an ANN (artificial neural network) ensemble method for fault diagnosis of the PEMFC system. • This method has a high diagnostic rate and strong generalization ability
Enhanced Sampling in the Well-Tempered Ensemble
Bonomi, M.; Parrinello, M.
2010-05-01
We introduce the well-tempered ensemble (WTE) which is the biased ensemble sampled by well-tempered metadynamics when the energy is used as collective variable. WTE can be designed so as to have approximately the same average energy as the canonical ensemble but much larger fluctuations. These two properties lead to an extremely fast exploration of phase space. An even greater efficiency is obtained when WTE is combined with parallel tempering. Unbiased Boltzmann averages are computed on the fly by a recently developed reweighting method [M. Bonomi, J. Comput. Chem. 30, 1615 (2009), doi:10.1002/jcc.21305]. We apply WTE and its parallel tempering variant to the 2d Ising model and to a Gō model of HIV protease, demonstrating in these two representative cases that convergence is accelerated by orders of magnitude.
Kober, K.; Craig, C.; Keil, C.; Dörnbrack, A.
2012-01-01
A seamless prediction of convective precipitation for a continuous range of lead times from 0–8 h requires the application of different approaches. Here, a nowcasting method and a high-resolution numerical weather prediction ensemble are combined to provide probabilistic precipitation forecasts. For the nowcast, an existing deterministic extrapolation technique was modified by the local Lagrangian method to calculate the probability of exceeding a threshold value in radar reflectivity...
International Nuclear Information System (INIS)
The dynamic thermal properties of clothing ensembles are important to thermal transient comfort, but have so far not been properly quantified. In this paper, a novel test procedure and a new index based on measurements on the sweating fabric manikin Walter are proposed to quantify and measure the dynamic thermal properties of clothing ensembles. Experiments showed that the new index is correlated with the rate of change of the wearer's body temperature, which is an important indicator of thermal transient comfort. Clothing ensembles with higher values of the index mean the wearer will experience a faster rate of body-temperature change and a shorter duration before approaching a dangerous thermo-physiological state when changing from 'resting' to 'exercising' mode. Clothing should therefore be designed to reduce the value of the index
Design Hybrid method for intrusion detection using Ensemble cluster classification and SOM network
Directory of Open Access Journals (Sweden)
Deepak Rathore
2012-09-01
In the current internet-technology landscape, security is a major challenge. Networks are threatened by various cyber-attacks that compromise system data and degrade the performance of the host computer. Intrusion detection is therefore a challenging field of research in network security, building on firewalls and rule-based detection techniques. In this paper we propose an ensemble cluster classification technique using a SOM network for detection of the mixed-variable data generated by malicious software attacking a host system. In our methodology, the SOM network controls the iteration of distances over the different ensembling parameters. Our experimental results show better empirical performance on the KDD-99 data set in comparison with an existing ensemble classifier.
Lee, Mark D; Ruostekoski, Janne
2016-01-01
We derive equations for the strongly coupled system of light and dense atomic ensembles. The formalism includes an arbitrary internal level structure for the atoms and is not restricted to weak excitation of atoms by light. In the low light intensity limit for atoms with a single electronic ground state, the full quantum field-theoretical representation of the model can be solved exactly by means of classical stochastic electrodynamics simulations for stationary atoms that represent cold atomic ensembles. Simulations for the optical response of atoms in a quantum degenerate regime require one to synthesize a stochastic ensemble of atomic positions that generates the corresponding quantum statistical position correlations between the atoms. In the case of multiple ground levels or at light intensities where saturation becomes important, the classical simulations require approximations that neglect quantum fluctuations between the levels. We show how the model is extended to incorporate corrections due to quant...
Contour-integral method for transitions to the circular unitary ensemble
Vinayak; Pandey, Akhilesh
2009-08-01
The representation of correlation functions as a contour integral has been useful in the study of transitions to the Gaussian unitary ensemble (GUE). We develop the formalism for transitions to the circular unitary ensemble (CUE) and consider the general ℓCUE to CUE transition, where ℓCUE denotes a superposition of ℓ independent CUE spectra in an arbitrary ratio. For large matrices, we derive the two-level correlation function for all ℓ, including ℓ = ∞ (the Poisson case). The results are useful in the study of weakly broken partitioning symmetries and weakly coupled mesoscopic cavities.
Contour-integral method for transitions to the circular unitary ensemble
International Nuclear Information System (INIS)
The representation of correlation functions as a contour integral has been useful in the study of transitions to the Gaussian unitary ensemble (GUE). We develop the formalism for transitions to the circular unitary ensemble (CUE) and consider the general ℓCUE to CUE transition, where ℓCUE denotes a superposition of ℓ independent CUE spectra in an arbitrary ratio. For large matrices, we derive the two-level correlation function for all ℓ, including ℓ = ∞ (the Poisson case). The results are useful in the study of weakly broken partitioning symmetries and weakly coupled mesoscopic cavities.
International Nuclear Information System (INIS)
A method of canonical transformations extended to dissipative Hamiltonian systems in a previous article is here applied to the behaviour of an extended charge coupled to the EM field, which is deducible from a Lagrangian function explicitly dependent on time. The generating function of a transformation which decouples the variables of the system is given for an elastic applied force, and hence the constants of motion are found by a general method. Some limit cases are examined. (auth)
Lee, J. H.; Timmermans, J.; Su, Z.; Mancini, M.
2012-04-01
Aerodynamic roughness height (Zom) is a key parameter required in land surface hydrological models, since errors in heat flux estimations depend largely on accurate optimization of this parameter. Despite its significance, it remains an uncertain parameter that is not easily determined. This is mostly because of the non-linear relationships in Monin-Obukhov similarity (MOS) and the unknown vertical characteristics of vegetation. Previous studies determined aerodynamic roughness using the traditional wind profile method, remotely sensed vegetation indices, minimization of a cost function over the MOS relationship, or linear regression. However, these are complicated procedures that presume high accuracy for several other related parameters embedded in the MOS equations. In order to simplify the procedure and reduce the number of parameters needed, this study suggests a new approach to extract the aerodynamic roughness parameter via an Ensemble Kalman Filter (EnKF) that accommodates non-linearity and requires only one or two heat flux measurements. So far, to our knowledge, no previous study has applied the EnKF to aerodynamic roughness estimation, while the majority of data assimilation studies have paid attention to land surface state variables such as soil moisture or land surface temperature. This approach was applied to grassland in a semi-arid Tibetan area and to maize under moderately wet conditions in Italy. It was demonstrated that the aerodynamic roughness parameter can inversely be tracked from data-assimilated heat flux analysis. The aerodynamic roughness height estimated in this approach was consistent with the eddy covariance result and literature values. Consequently, this newly estimated input adjusted the sensible heat flux overestimated and the latent heat flux underestimated by the original Surface Energy Balance System (SEBS) model, suggesting better heat flux estimation especially during the summer monsoon period. The advantage of this approach over other methodologies is that aerodynamic roughness height
Otsuru, Toru; Tomiku, Reiji; Din, Nazli Bin Che; Okamoto, Noriko; Murakami, Masahiko
2009-06-01
An in-situ measurement technique of a material surface normal impedance is proposed. It includes a concept of "ensemble averaged" surface normal impedance that extends the usage of obtained values to various applications such as architectural acoustics and computational simulations, especially those based on the wave theory. The measurement technique itself is a refinement of a method using a two-microphone technique and environmental anonymous noise, or diffused ambient noise, as proposed by Takahashi et al. [Appl. Acoust. 66, 845-865 (2005)]. Measured impedance can be regarded as time-space averaged normal impedance at the material surface. As a preliminary study using numerical simulations based on the boundary element method, normal incidence and random incidence measurements are compared numerically: results clarify that ensemble averaging is an effective mode of measuring sound absorption characteristics of materials with practical sizes in the lower frequency range of 100-1000 Hz, as confirmed by practical measurements. PMID:19507960
Qin, Hong; Liu, Jian; Xiao, Jianyuan; Zhang, Ruili; He, Yang; Wang, Yulei; Sun, Yajuan; Burby, Joshua W.; Ellison, Leland; Zhou, Yao
2016-01-01
Particle-in-cell (PIC) simulation is the most important numerical tool in plasma physics. However, its long-term accuracy has not been established. To overcome this difficulty, we developed a canonical symplectic PIC method for the Vlasov-Maxwell system by discretising its canonical Poisson bracket. A fast local algorithm to solve the symplectic implicit time advance is discovered without root searching or global matrix inversion, enabling applications of the proposed method to very large-scale plasma simulations with many (e.g. 10^9) degrees of freedom. The long-term accuracy and fidelity of the algorithm enables us to numerically confirm Mouhot and Villani's theory and conjecture on nonlinear Landau damping over several orders of magnitude using the PIC method, and to calculate the nonlinear evolution of the reflectivity during the mode conversion process from extraordinary waves to Bernstein waves.
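The benefit of a canonical symplectic update can be seen even on a one-degree-of-freedom toy problem. The sketch below is not the paper's Vlasov-Maxwell bracket discretization; it is a first-order symplectic Euler step for a harmonic oscillator, shown only to illustrate the bounded long-term energy error that motivates symplectic PIC.

```python
def symplectic_euler(q, p, dt, steps, omega=1.0):
    """First-order canonical symplectic integrator for the toy Hamiltonian
    H = p**2/2 + omega**2 * q**2/2. The kick uses the current q and the
    drift uses the *updated* p, which is what makes the map symplectic."""
    for _ in range(steps):
        p -= dt * omega**2 * q   # kick: momentum update from the current q
        q += dt * p              # drift: position update uses the new p
    return q, p

def energy(q, p, omega=1.0):
    return 0.5 * p * p + 0.5 * omega**2 * q * q

q0, p0 = 1.0, 0.0
q, p = symplectic_euler(q0, p0, dt=0.05, steps=100_000)
drift = abs(energy(q, p) - energy(q0, p0))
print(drift)  # energy error stays bounded; no secular drift over 100k steps
```

A non-symplectic explicit Euler step (updating q with the old p) would instead grow the energy exponentially over such a run, which is precisely the long-term inaccuracy the canonical symplectic PIC method is designed to avoid.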
Energy Technology Data Exchange (ETDEWEB)
Qin, Hong; Liu, Jian; Xiao, Jianyuan; Zhang, Ruili; He, Yang; Wang, Yulei; Sun, Yajuan; Burby, Joshua W.; Ellison, Leland; Zhou, Yao
2015-12-14
Particle-in-cell (PIC) simulation is the most important numerical tool in plasma physics. However, its long-term accuracy has not been established. To overcome this difficulty, we developed a canonical symplectic PIC method for the Vlasov-Maxwell system by discretising its canonical Poisson bracket. A fast local algorithm to solve the symplectic implicit time advance is discovered without root searching or global matrix inversion, enabling applications of the proposed method to very large-scale plasma simulations with many (e.g. 10^9) degrees of freedom. The long-term accuracy and fidelity of the algorithm enables us to numerically confirm Mouhot and Villani's theory and conjecture on nonlinear Landau damping over several orders of magnitude using the PIC method, and to calculate the nonlinear evolution of the reflectivity during the mode conversion process from extraordinary waves to Bernstein waves.
Comparison of Selected methods of Ensemble Generation in EnKF for Air-Quality Modelling
Czech Academy of Sciences Publication Activity Database
Resler, Jaroslav; Juruš, Pavel; Eben, Kryštof; Belda, Michal
Praha: Český hydrometeorologický ústav, 2005. s. 37-37. ISBN 80-86690-23-7. [WMO International Symposium on Assimilation of Observations in Meteorology and Oceanography /4./. 18.04.2005-22.04.2005, Prague] Institutional research plan: CEZ:AV0Z10300504 Keywords : ensemble Kalman filter * data assimilation * spatial correlation * NMC
Elsheikh, Ahmed H.
2013-06-01
We introduce a nonlinear orthogonal matching pursuit (NOMP) for sparse calibration of subsurface flow models. Sparse calibration is a challenging problem as the unknowns are both the non-zero components of the solution and their associated weights. NOMP is a greedy algorithm that discovers at each iteration the basis function most correlated with the residual from a large pool of basis functions. The discovered basis (aka support) is augmented across the nonlinear iterations. Once a set of basis functions is selected, the solution is obtained by applying Tikhonov regularization. The proposed algorithm relies on a stochastically approximated gradient using an iterative stochastic ensemble method (ISEM). In the current study, the search space is parameterized using an overcomplete dictionary of basis functions built using the K-SVD algorithm. The proposed algorithm is the first ensemble-based algorithm that tackles the sparse nonlinear parameter estimation problem. © 2013 Elsevier Ltd.
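The greedy support-discovery loop of matching pursuit can be sketched for the ordinary linear case. This is not NOMP itself (no ensemble-approximated gradients, no K-SVD dictionary, no Tikhonov step); it is a minimal linear orthogonal matching pursuit on a random dictionary, shown only to illustrate the select-then-refit structure that NOMP extends to the nonlinear setting.

```python
import numpy as np

def omp(D, r, n_nonzero):
    """Linear orthogonal matching pursuit: greedily pick the dictionary
    column most correlated with the residual, then refit all selected
    weights by least squares before recomputing the residual."""
    support, residual = [], r.copy()
    for _ in range(n_nonzero):
        corr = np.abs(D.T @ residual)
        corr[support] = -np.inf                  # never reselect an atom
        support.append(int(np.argmax(corr)))
        coef, *_ = np.linalg.lstsq(D[:, support], r, rcond=None)
        residual = r - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(0)
D = rng.standard_normal((100, 20))
D /= np.linalg.norm(D, axis=0)                   # unit-norm dictionary atoms
x_true = np.zeros(20)
x_true[[3, 11]] = [2.0, -1.5]                    # sparse ground truth
y = D @ x_true
x_hat = omp(D, y, n_nonzero=2)
print(np.allclose(x_hat, x_true, atol=1e-8))     # recovers support and weights
```

The two unknowns the abstract mentions, which components are active and what their weights are, correspond to the `argmax` selection and the least-squares refit, respectively.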
Molecular dynamics, Langevin, and hybrid Monte Carlo simulations in multicanonical ensemble
Hansmann, Ulrich H. E.; Okamoto, Yuko; Eisenmenger, Frank
1996-01-01
We demonstrate that the multicanonical approach is not restricted to Monte Carlo simulations, but can also be applied to simulation techniques such as molecular dynamics, Langevin, and hybrid Monte Carlo algorithms. The effectiveness of the methods is tested with an energy function for the protein folding problem. Simulations in the multicanonical ensemble by the three methods are performed for a pentapeptide, Met-enkephalin. For each algorithm, it is shown that from only one simulation run one can not only find the global-minimum-energy conformation but also obtain probability distributions in the canonical ensemble at any temperature, which allows the calculation of any thermodynamic quantity as a function of temperature.
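The reweighting step described above, recovering canonical averages at any temperature from a single multicanonical run, can be sketched directly. The toy system below (two energy levels with known degeneracies) is an assumption for illustration only; the log-sum-exp stabilization is standard practice, not taken from the paper.

```python
import math

def canonical_average_energy(energies, muca_logweights, beta):
    """Reweight multicanonical samples to the canonical ensemble at inverse
    temperature beta. A sample drawn with multicanonical weight w(E) gets
    canonical weight exp(-beta*E)/w(E); log-sum-exp keeps this stable."""
    logs = [-beta * e - lw for e, lw in zip(energies, muca_logweights)]
    m = max(logs)
    ws = [math.exp(l - m) for l in logs]
    return sum(w * e for w, e in zip(ws, energies)) / sum(ws)

# Toy two-level system: E=0 with degeneracy 1, E=1 with degeneracy 3.
# A flat (multicanonical) run visits both energies equally often, i.e. it
# samples with weights w(E) = 1/n(E), so the log-weights are 0 and -log(3).
energies = [0.0] * 100 + [1.0] * 100
logweights = [0.0] * 100 + [-math.log(3.0)] * 100
avg = canonical_average_energy(energies, logweights, beta=1.0)
exact = 3 * math.exp(-1) / (1 + 3 * math.exp(-1))  # <E> from the partition sum
print(abs(avg - exact))  # matches the exact canonical average
```

Changing `beta` in the same call yields the canonical average at any other temperature from the same sample set, which is the "any thermodynamic quantity as a function of temperature" property noted in the abstract.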
Directory of Open Access Journals (Sweden)
González-Martín, M. I.
2016-03-01
The canonical biplot (CB) method is used to determine the discriminatory power of volatile chemical compounds in cheese. These volatile compounds were used as variables in order to differentiate among 6 groups or populations of cheeses: combinations of two seasons (winter and summer) with 3 types of cheese (cow's, sheep's and goat's milk). We analyzed a total of 17 volatile compounds by means of gas chromatography coupled with mass detection. The compounds included aldehydes and methyl-aldehydes, alcohols (primary, secondary and branched-chain), ketones, methyl-ketones and esters in winter (WC) and summer (SC) cow's cheeses, winter (WSh) and summer (SSh) sheep's cheeses, and winter (WG) and summer (SG) goat's cheeses. The CB method allows differences to be found as a function of the elaboration of the cheeses and the seasonality of the milk, separates the six groups of cheeses, and characterizes the specific volatile chemical compounds responsible for such differences.
Directory of Open Access Journals (Sweden)
J. H. Lee
2012-04-01
Aerodynamic roughness height (Z_om) is a key parameter required in land surface hydrological models, since errors in heat flux estimations depend largely on accurate optimization of this parameter. Despite its significance, it remains an uncertain parameter that is not easily determined. This is mostly because of the non-linear relationships in Monin-Obukhov similarity (MOS) and the unknown vertical characteristics of vegetation. Previous studies determined aerodynamic roughness using the traditional wind profile method, remotely sensed vegetation indices, minimization of a cost function over the MOS relationship, or linear regression. However, these are complicated procedures that presume high accuracy for several other related parameters embedded in the MOS equations. In order to simplify the procedure and reduce the number of parameters needed, this study suggests a new approach to extract the aerodynamic roughness parameter via an Ensemble Kalman Filter (EnKF) that accommodates non-linearity and requires only one or two heat flux measurements. So far, to our knowledge, no previous study has applied the EnKF to aerodynamic roughness estimation, while the majority of data assimilation studies have paid attention to land surface state variables such as soil moisture or land surface temperature. This approach was applied to grassland in a semi-arid Tibetan area and to maize under moderately wet conditions in Italy. It was demonstrated that the aerodynamic roughness parameter can inversely be tracked from data-assimilated heat flux analysis. The aerodynamic roughness height estimated in this approach was consistent with the eddy covariance result and literature values. Consequently, this newly estimated input adjusted the sensible heat flux overestimated and the latent heat flux underestimated by the original Surface Energy Balance System (SEBS) model, suggesting better heat flux estimation especially during the summer Monsoon period. The advantage of this approach over other methodologies is
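A scalar version of the EnKF parameter-tracking idea can be sketched as follows. The "flux model" here is a hypothetical log-law-like observation operator, not SEBS, and all numbers are invented; the sketch only shows how an ensemble over a roughness-like parameter is updated from flux observations without ever linearizing the operator.

```python
import numpy as np

def enkf_update(ensemble, obs, obs_operator, obs_var, rng):
    """One stochastic EnKF analysis step for a scalar parameter. The gain is
    built from ensemble covariances, so the (possibly nonlinear) observation
    operator never needs to be linearized."""
    predicted = np.array([obs_operator(theta) for theta in ensemble])
    cov_ty = np.cov(ensemble, predicted)[0, 1]      # parameter-obs covariance
    gain = cov_ty / (predicted.var(ddof=1) + obs_var)
    perturbed_obs = obs + rng.normal(0.0, np.sqrt(obs_var), ensemble.size)
    return ensemble + gain * (perturbed_obs - predicted)

def flux_model(z0m, u=3.0, z_ref=10.0):
    """Hypothetical log-law-like flux: larger roughness -> larger flux."""
    return u / np.log(z_ref / abs(z0m))

rng = np.random.default_rng(42)
z0m_true, obs_var = 0.05, 0.01**2
ensemble = rng.uniform(0.01, 0.5, 100)              # prior spread over z0m
for _ in range(8):                                  # assimilate repeated obs
    obs = flux_model(z0m_true) + rng.normal(0.0, np.sqrt(obs_var))
    ensemble = enkf_update(ensemble, obs, flux_model, obs_var, rng)
print(ensemble.mean())                              # tracks z0m_true = 0.05
```

The appeal noted in the abstract shows up here: only the flux observation and its error variance are needed, and the nonlinearity of the observation operator is handled entirely through the ensemble statistics.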
Babaei, Masoud; Pan, Indranil
2016-06-01
In this paper we define a relatively complex reservoir engineering optimization problem: maximizing the net present value of hydrocarbon production in a water flooding process by controlling the water injection rates over multiple control periods. We assessed the performance of a number of response surface surrogate models and their ensembles, combined by Dempster-Shafer theory and weighted average surrogates as found in contemporary literature. Most of these ensemble methods are based on the philosophy that multiple weak learners can be leveraged to obtain one strong learner that is better than the individual weak ones. Even though these techniques have been shown to work well for test bench functions, we found that they offered no considerable improvement over an individually used cubic radial basis function surrogate model. Our simulations on two- and three-dimensional cases, with varying numbers of optimization variables, suggest that the cubic radial basis function-based surrogate model is reliable, outperforms Kriging surrogates and multivariate adaptive regression splines, and if it does not outperform the ensemble surrogate models, it is rarely outperformed by them.
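A cubic radial basis function surrogate of the kind favored above can be built with a few lines of linear algebra. This is a generic interpolation sketch on an invented cheap objective, not the authors' reservoir setup; the polynomial tail often added to cubic RBFs is omitted for brevity, with a small diagonal jitter for numerical safety.

```python
import numpy as np

def fit_cubic_rbf(X, y):
    """Interpolating cubic RBF surrogate, phi(r) = r**3: solve Phi w = y on
    the training points, then predict by summing the weighted basis values."""
    r = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    w = np.linalg.solve(r**3 + 1e-9 * np.eye(len(X)), y)
    return lambda x: float((np.linalg.norm(X - x, axis=1) ** 3) @ w)

# Hypothetical cheap objective standing in for the expensive simulator output.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(40, 2))
y = X[:, 0] ** 2 + X[:, 1]
surrogate = fit_cubic_rbf(X, y)
x_new = np.array([0.3, -0.2])
err = abs(surrogate(x_new) - (0.3**2 - 0.2))
print(err)  # small interpolation error at an unseen point
```

The surrogate is cheap to evaluate once fitted, which is what makes it useful inside an optimization loop where each true objective evaluation would require a full reservoir simulation.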
2002-01-01
On the NYYD Ensemble duo Traksmann - Lukk performing E.-S. Tüür's work "Symbiosis", which is also recorded on the recently released NYYD Ensemble CD. On 2 March in the small hall of the Rakvere Theatre and on 3 March at the Rotermann Salt Storage; the programme includes Tüür, Kaumann, Berio, Reich, Yun, Hauta-aho and Buckinx.
Directory of Open Access Journals (Sweden)
Xiaoning Pan
2015-04-01
Full Text Available Model performance of the partial least squares method (PLS) alone and of bagging-PLS was investigated in online near-infrared (NIR) sensor monitoring of a pilot-scale extraction process for Fructus aurantii. High-performance liquid chromatography (HPLC) was used as a reference method to identify the active pharmaceutical ingredients: naringin, hesperidin and neohesperidin. Several preprocessing methods and the synergy interval partial least squares (SiPLS) and moving window partial least squares (MWPLS) variable selection methods were compared. Single quantification models (PLS) and ensemble methods combined with partial least squares (bagging-PLS) were developed for quantitative analysis of naringin, hesperidin and neohesperidin. SiPLS was compared to SiPLS combined with bagging-PLS. Final results showed the root mean square error of prediction (RMSEP) of bagging-PLS to be lower than that of PLS regression alone. For this reason, an ensemble method with an online NIR sensor is proposed here as a means of monitoring the pilot-scale extraction process of Fructus aurantii, which may also constitute a suitable strategy for online NIR monitoring of CHM.
Consecutive Charging of a Molecule-on-Insulator Ensemble Using Single Electron Tunnelling Methods.
Rahe, Philipp; Steele, Ryan P; Williams, Clayton C
2016-02-10
We present the local charge state modification at room temperature of small insulator-supported molecular ensembles formed by 1,1'-ferrocenedicarboxylic acid on calcite. Single electron tunnelling between the conducting tip of a noncontact atomic force microscope (NC-AFM) and the molecular islands is observed. By joining NC-AFM with Kelvin probe force microscopy, successive charge build-up in the sample is observed from consecutive experiments. Charge transfer within the islands and structural relaxation of the adsorbate/surface system is suggested by the experimental data. PMID:26713686
Qin, Hong; Xiao, Jianyuan; Zhang, Ruili; He, Yang; Wang, Yulei; Burby, Joshua W; Ellison, Leland; Zhou, Yao
2015-01-01
Particle-in-Cell (PIC) simulation is the most important numerical tool in plasma physics and accelerator physics. However, its long-term accuracy has not been established. To overcome this difficulty, we developed a canonical symplectic PIC method for the Vlasov-Maxwell system by discretizing the Marsden-Weinstein bracket. A fast local algorithm to solve the symplectic implicit time advance is discovered without root searching or global matrix inversion, enabling applications of the proposed method to very large-scale plasma simulations with many, e.g., $10^{9}$, degrees of freedom.
Duan, Kai; Mei, Yadong
2014-05-01
This study evaluated the performance of three frequently applied statistical downscaling tools, SDSM, SVM, and LARS-WG, and their model-averaging ensembles under diverse moisture conditions, with respect to their capability of reproducing the extremes as well as the mean behavior of precipitation. Daily observed precipitation and NCEP reanalysis data from 30 stations across China were collected for the period 1961-2000, and model parameters were calibrated for each season at each individual site, with 1961-1990 as the calibration period and 1991-2000 as the validation period. A flexible framework of multi-criteria model averaging was established in which model weights were optimized by the shuffled complex evolution algorithm. Model performance was compared for the optimal objective and nine more specific metrics. Results indicate that different downscaling methods show diverse strengths and weaknesses in simulating various precipitation characteristics under different circumstances. SDSM showed more adaptability, acquiring better overall performance at a majority of the stations, while LARS-WG revealed better accuracy in modeling most of the single metrics, especially extreme indices. SVM was more useful under drier conditions, but it had less skill in capturing temporal patterns. Optimized model averaging, aimed at certain objective functions, can achieve a promising ensemble at the price of increased model complexity and computational cost. However, the variation in the different methods' performances highlighted the tradeoff among different criteria, which compromised the ensemble forecast in terms of single metrics. As superiority over single models cannot be guaranteed, the model averaging technique should be used cautiously in precipitation downscaling.
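A minimal sketch of the multi-criteria weighting idea: ensemble weights on the simplex are tuned to minimize an objective, here RMSE against observations. SciPy's SLSQP optimizer stands in for the shuffled complex evolution algorithm used in the study, and all data are synthetic.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
obs = rng.normal(size=200)                       # observed precipitation series (stand-in)
# Three downscaling models with different error levels (stand-ins for SDSM/SVM/LARS-WG)
preds = np.stack([obs + rng.normal(0, s, 200) for s in (0.3, 0.5, 0.8)])

def rmse(w):
    ens = w @ preds                              # weighted ensemble forecast
    return np.sqrt(np.mean((ens - obs) ** 2))

# Weights constrained to the simplex; SLSQP stands in for shuffled complex evolution
res = minimize(rmse, x0=np.full(3, 1 / 3), method="SLSQP",
               bounds=[(0, 1)] * 3,
               constraints={"type": "eq", "fun": lambda w: w.sum() - 1})
print(res.x, rmse(res.x))
```

The optimized weights concentrate on the lower-error models, improving on the naive equal-weight ensemble at the cost of an extra optimization run per objective.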
Ensemble Equivalence for Distinguishable Particles
Directory of Open Access Journals (Sweden)
Antonio Fernández-Peralta
2016-07-01
Full Text Available Statistics of distinguishable particles has become relevant in systems of colloidal particles and in the context of applications of statistical mechanics to complex networks. In this paper, we present evidence that a commonly used expression for the partition function of a system of distinguishable particles leads to huge fluctuations of the number of particles in the grand canonical ensemble and, consequently, to nonequivalence of statistical ensembles. We will show that the alternative definition of the partition function including, naturally, Boltzmann’s correct counting factor for distinguishable particles solves the problem and restores ensemble equivalence. Finally, we also show that this choice for the partition function does not produce any inconsistency for a system of distinguishable localized particles, where the monoparticular partition function is not extensive.
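The ensemble nonequivalence described above can be made concrete with a standard textbook calculation for noninteracting particles with single-particle partition function \(z_1\); this derivation is a generic illustration, not reproduced from the paper:

```latex
% Without the 1/N! factor the grand partition function is a geometric series:
\Xi_{\mathrm{dist}} = \sum_{N=0}^{\infty} z^{N} z_1^{N} = \frac{1}{1 - z z_1},
\qquad
\langle N \rangle = \frac{z z_1}{1 - z z_1},
\qquad
\frac{\langle \Delta N^2 \rangle}{\langle N \rangle^2} = \frac{1}{z z_1}
  \xrightarrow[\; z z_1 \to 1 \;]{} 1,
% so relative particle-number fluctuations remain of order one even as
% <N> diverges: the grand canonical and canonical ensembles disagree.
% With Boltzmann's correct counting factor the series exponentiates:
\Xi_{\mathrm{corr}} = \sum_{N=0}^{\infty} \frac{(z z_1)^{N}}{N!} = e^{z z_1},
\qquad
\langle N \rangle = \langle \Delta N^2 \rangle = z z_1,
% a Poisson distribution whose relative fluctuations 1/<N> vanish for
% large systems, restoring ensemble equivalence.
```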
Czech Academy of Sciences Publication Activity Database
Brázdová, Marie; Kyjovský, Ivo; Tichý, Vlastimil; Navrátilová, Lucie; Loscher, Ch.; Jurčo, J.; Kotrs, J.; Lexa, M.; Martínek, T.; Tolstonog, G.; Fojta, Miroslav; Paleček, Emil; Deppert, W.
Brno, 2009. s. 78. ISBN 978-80-210-4830-0. [Pracovní setkání biochemiků a molekulárních biologů /13./. 14.04.2009-15.04.2009, Brno] R&D Projects: GA MŠk(CZ) 1K04119; GA MŠk(CZ) LC06035; GA ČR(CZ) GP204/06/P369; GA ČR(CZ) GA204/08/1560; GA AV ČR(CZ) IAA500040701 Institutional research plan: CEZ:AV0Z50040507; CEZ:AV0Z50040702 Keywords : mutant p53 * non-canonical DNA * glioblastoma cells Subject RIV: BO - Biophysics
Directory of Open Access Journals (Sweden)
Lina Zhang
2015-09-01
Full Text Available Bacteriophage virion proteins and non-virion proteins have distinct functions in biological processes, such as specificity determination for host bacteria, bacteriophage replication and transcription. Accurate identification of bacteriophage virion proteins from bacteriophage protein sequences is significant to understand the complex virulence mechanism in host bacteria and the influence of bacteriophages on the development of antibacterial drugs. In this study, an ensemble method for bacteriophage virion protein prediction from bacteriophage protein sequences is put forward with hybrid feature spaces incorporating CTD (composition, transition and distribution), bi-profile Bayes, PseAAC (pseudo-amino acid composition) and PSSM (position-specific scoring matrix). In 10-fold cross-validation on the training dataset, the presented method achieves a satisfactory prediction result with a sensitivity of 0.870, a specificity of 0.830, an accuracy of 0.850 and a Matthews correlation coefficient (MCC) of 0.701, respectively. To evaluate the prediction performance objectively, an independent testing dataset is used to evaluate the proposed method. Encouragingly, our proposed method performs better than previous studies, with a sensitivity of 0.853, a specificity of 0.815, an accuracy of 0.831 and an MCC of 0.662 on the independent testing dataset. These results suggest that the proposed method can be a potential candidate for bacteriophage virion protein prediction, which may provide a useful tool to find novel antibacterial drugs and to understand the relationship between bacteriophage and host bacteria. For the convenience of the vast majority of experimental scientists, a user-friendly and publicly-accessible web-server for the proposed ensemble method is established.
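The reported performance measures are standard confusion-matrix statistics; a small self-contained helper (not the authors' code) makes the definitions explicit:

```python
import numpy as np

def binary_metrics(y_true, y_pred):
    """Sensitivity, specificity, accuracy and Matthews correlation coefficient."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    sens = tp / (tp + fn)                         # true positive rate
    spec = tn / (tn + fp)                         # true negative rate
    acc = (tp + tn) / len(y_true)
    mcc = (tp * tn - fp * fn) / np.sqrt(
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return sens, spec, acc, mcc

print(binary_metrics([1, 1, 1, 0, 0, 0], [1, 1, 0, 0, 0, 1]))
```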
Energy Technology Data Exchange (ETDEWEB)
Yu, Lifeng, E-mail: yu.lifeng@mayo.edu; Vrieze, Thomas J.; Leng, Shuai; Fletcher, Joel G.; McCollough, Cynthia H. [Department of Radiology, Mayo Clinic, Rochester, Minnesota 55905 (United States)
2015-05-15
Purpose: The spatial resolution of iterative reconstruction (IR) in computed tomography (CT) is contrast- and noise-dependent because of the nonlinear regularization. Due to the severe noise contamination, it is challenging to perform precise spatial-resolution measurements at very low-contrast levels. The purpose of this study was to measure the spatial resolution of a commercially available IR method using ensemble-averaged images acquired from repeated scans. Methods: A low-contrast phantom containing three rods (7, 14, and 21 HU below background) was scanned on a 128-slice CT scanner at three dose levels (CTDI_vol = 16, 8, and 4 mGy). Images were reconstructed using two filtered-backprojection (FBP) kernels (B40 and B20) and a commercial IR method (sinogram affirmed iterative reconstruction, SAFIRE, Siemens Healthcare) with two strength settings (I40-3 and I40-5). The same scan was repeated 100 times at each dose level. The modulation transfer function (MTF) was calculated based on the edge profile measured on the ensemble-averaged images. Results: The spatial resolution of the two FBP kernels, B40 and B20, remained relatively constant across contrast and dose levels. However, the spatial resolution of the two IR kernels degraded relative to FBP as contrast or dose level decreased. For a given dose level at 16 mGy, the MTF_50% value normalized to the B40 kernel decreased from 98.4% at 21 HU to 88.5% at 7 HU for I40-3 and from 97.6% to 82.1% for I40-5. At 21 HU, the relative MTF_50% value decreased from 98.4% at 16 mGy to 90.7% at 4 mGy for I40-3 and from 97.6% to 85.6% for I40-5. Conclusions: A simple technique using ensemble averaging from repeated CT scans can be used to measure the spatial resolution of IR techniques in CT at very low contrast levels. The evaluated IR method degraded the spatial resolution at low contrast and high noise levels.
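The measurement idea, sketched on synthetic data: average many repeated noisy scans of a low-contrast edge, differentiate the edge spread function into a line spread function, and take its normalized Fourier magnitude as the MTF. Numbers and names here are illustrative assumptions, not the study's protocol.

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(-10, 10, 256)                   # mm, sample positions across the edge
edge = 7 / (1 + np.exp(-x / 0.8))               # 7 HU blurred edge (stand-in for a rod boundary)

# 100 repeated scans; ensemble averaging suppresses noise by a factor of 10
scans = edge + rng.normal(0, 1.0, size=(100, x.size))
esf = scans.mean(axis=0)                        # ensemble-averaged edge spread function

lsf = np.gradient(esf, x)                       # line spread function
mtf = np.abs(np.fft.rfft(lsf))
mtf /= mtf[0]                                   # normalize to unity at zero frequency

freq = np.fft.rfftfreq(x.size, d=x[1] - x[0])   # cycles/mm
f50 = freq[np.argmin(np.abs(mtf - 0.5))]        # frequency where the MTF falls to 50%
print(f50)
```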
Directory of Open Access Journals (Sweden)
Marin-Garcia Pablo
2010-05-01
Full Text Available Abstract Background The maturing field of genomics is rapidly increasing the number of sequenced genomes and producing more information from those previously sequenced. Much of this additional information is variation data derived from sampling multiple individuals of a given species with the goal of discovering new variants and characterising the population frequencies of the variants that are already known. These data have immense value for many studies, including those designed to understand evolution and connect genotype to phenotype. Maximising the utility of the data requires that it be stored in an accessible manner that facilitates the integration of variation data with other genome resources such as gene annotation and comparative genomics. Description The Ensembl project provides comprehensive and integrated variation resources for a wide variety of chordate genomes. This paper provides a detailed description of the sources of data and the methods for creating the Ensembl variation databases. It also explores the utility of the information by explaining the range of query options available, from using interactive web displays, to online data mining tools and connecting directly to the data servers programmatically. It gives a good overview of the variation resources and future plans for expanding the variation data within Ensembl. Conclusions Variation data is an important key to understanding the functional and phenotypic differences between individuals. The development of new sequencing and genotyping technologies is greatly increasing the amount of variation data known for almost all genomes. The Ensembl variation resources are integrated into the Ensembl genome browser and provide a comprehensive way to access this data in the context of a widely used genome bioinformatics system. All Ensembl data is freely available at http://www.ensembl.org and from the public MySQL database server at ensembldb.ensembl.org.
An Ensemble Method based on Particle of Swarm for the Reduction of Noise, Outlier and Core Point
Directory of Open Access Journals (Sweden)
Satish Dehariya,
2013-04-01
Full Text Available The majority voting and accurate prediction of classification algorithms in data mining are challenging tasks for data classification. To improve data classification, different classifiers are combined with one another in an ensemble process. An ensemble process increases the classification ratio of the classification algorithm; such a paradigm of classification algorithms is called an ensemble classifier. Ensemble learning is a technique to improve the performance and accuracy of classification and prediction in machine learning algorithms. Many researchers have proposed models of ensemble classifiers that merge different classification algorithms, but the performance of an ensemble algorithm suffers from the problems of outliers, noise and core points in the data from the feature selection process. In this paper we combine core, outlier and noise data (COB) in the feature selection process for the ensemble model. The process of best feature selection with an appropriate classifier uses particle swarm optimization.
Enhanced Sampling in the Well-Tempered Ensemble
Bonomi, M.; Parrinello, M
2009-01-01
We introduce the well-tempered ensemble (WTE) which is the biased ensemble sampled by well-tempered metadynamics when the energy is used as collective variable. WTE can be designed so as to have approximately the same average energy as the canonical ensemble but much larger fluctuations. These two properties lead to an extremely fast exploration of phase space. An even greater efficiency is obtained when WTE is combined with parallel tempering. Unbiased Boltzmann averages are computed on the ...
Directory of Open Access Journals (Sweden)
Carlos García-Bedoya Maguiña
2011-05-01
Full Text Available The canon is a key concept in literary history. This article reviews the historical evolution of the Peruvian literary canon. Only with the so-called Aristocratic Republic, in the first decades of the twentieth century, can one speak, in the Peruvian case, of the formation of a genuine national canon. The author calls this first version of the Peruvian literary canon the oligarchic canon and highlights the importance of the work of Riva Agüero and Ventura García Calderón in its configuration. Only later, from the 1920s and definitively from the 1950s, can one speak of the emergence of a new literary canon, which the author proposes to call the post-oligarchic canon.
Resistant multiple sparse canonical correlation.
Coleman, Jacob; Replogle, Joseph; Chandler, Gabriel; Hardin, Johanna
2016-04-01
Canonical correlation analysis (CCA) is a multivariate technique that takes two datasets and forms the most highly correlated possible pairs of linear combinations between them. Each subsequent pair of linear combinations is orthogonal to the preceding pair, meaning that new information is gleaned from each pair. By looking at the magnitude of coefficient values, we can find out which variables can be grouped together, thus better understanding multiple interactions that are otherwise difficult to compute or grasp intuitively. CCA appears to have quite powerful applications to high-throughput data, as we can use it to discover, for example, relationships between gene expression and gene copy number variation. One of the biggest problems of CCA is that the number of variables (often upwards of 10,000) makes biological interpretation of linear combinations nearly impossible. To limit variable output, we have employed a method known as sparse canonical correlation analysis (SCCA), while adding estimation which is resistant to extreme observations or other types of deviant data. In this paper, we have demonstrated the success of resistant estimation in variable selection using SCCA. Additionally, we have used SCCA to find multiple canonical pairs for extended knowledge about the datasets at hand. Again, using resistant estimators provided more accurate estimates than standard estimators in the multiple canonical correlation setting. R code is available and documented at https://github.com/hardin47/rmscca. PMID:26963062
Ye, Linlin; Yang, Dan; Wang, Xu
2014-06-01
A de-noising method for electrocardiogram (ECG) signals based on ensemble empirical mode decomposition (EEMD) and wavelet threshold de-noising theory is proposed. We decomposed noisy ECG signals with the EEMD and calculated a series of intrinsic mode functions (IMFs). We then selected IMFs and reconstructed them to de-noise the ECG. The processed ECG signals were filtered again with a wavelet transform using an improved threshold function. In the experiments, the MIT-BIH ECG database was used to evaluate the performance of the proposed method, in comparison with de-noising methods based on EEMD alone and on the wavelet transform with the improved threshold function alone, in terms of signal-to-noise ratio (SNR) and mean square error (MSE). The results showed that the ECG waveforms de-noised with the proposed method were smooth and the amplitudes of the ECG features did not attenuate. In conclusion, the method discussed in this paper can de-noise the ECG while preserving the characteristics of the original ECG signal. PMID:25219236
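The wavelet-threshold half of such a pipeline can be sketched in a few lines. To stay self-contained, this uses a single-level Haar transform and the universal soft threshold rather than the paper's EEMD-plus-improved-threshold scheme, and a sine wave stands in for the ECG.

```python
import numpy as np

def haar_dwt(x):
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail coefficients
    return a, d

def haar_idwt(a, d):
    x = np.empty(2 * a.size)
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

soft = lambda c, t: np.sign(c) * np.maximum(np.abs(c) - t, 0.0)  # soft threshold

rng = np.random.default_rng(4)
n = 1024
t = np.linspace(0, 1, n)
clean = np.sin(2 * np.pi * 5 * t)                      # stand-in for an ECG trace
noisy = clean + 0.3 * rng.normal(size=n)

a, d = haar_dwt(noisy)
sigma = np.median(np.abs(d)) / 0.6745                  # noise level from detail coefficients
thr = sigma * np.sqrt(2 * np.log(n))                   # universal threshold
denoised = haar_idwt(a, soft(d, thr))

snr = lambda ref, sig: 10 * np.log10(np.sum(ref**2) / np.sum((ref - sig) ** 2))
print(round(snr(clean, noisy), 1), round(snr(clean, denoised), 1))
```

Thresholding the noise-dominated detail coefficients while keeping the approximation band is the same trade-off the abstract describes: remove noise without attenuating the signal's features.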
Improving the sampling efficiency of the Grand Canonical Simulated Quenching approach
International Nuclear Information System (INIS)
Most common atomistic simulation techniques, like molecular dynamics or Metropolis Monte Carlo, operate under a constant interatomic Hamiltonian with a fixed number of atoms. Internal (atom positions or velocities) or external (simulation cell size or geometry) variables are then evolved dynamically or stochastically to yield sampling in different ensembles, such as microcanonical (NVE), canonical (NVT), isothermal-isobaric (NPT), etc. Averages are then taken to compute relevant physical properties. At least two limitations of these standard approaches can seriously hamper their application to many important systems: (1) they do not allow for the exchange of particles with a reservoir, and (2) the sampling efficiency is insufficient to obtain converged results because of the very long intrinsic timescales associated with these quantities. To fix ideas, one might want to identify low (free) energy configurations of grain boundaries (GB). In reality, grain boundaries are in contact with the grains, which act as reservoirs of defects (e.g., vacancies and interstitials). Since the GB can exchange particles with its environment, the most stable configuration provably cannot be found by sampling from the NVE or NVT ensembles alone: one needs to allow the number of atoms in the sample to fluctuate. The first limitation can be circumvented by working in the grand canonical ensemble (μVT) or its derivatives (such as the semi-grand-canonical ensemble, useful for the study of substitutional alloys). Monte Carlo methods were the first to adapt to this kind of system, where the number of atoms is allowed to fluctuate. Many of these methods are based on the Widom insertion method [Widom63], where the chemical potential of a given chemical species can be inferred from the potential energy changes upon random insertion of a new particle within the simulation cell. Other techniques, such as the Gibbs ensemble Monte Carlo [Panagiotopoulos87], where exchanges of particles are
An introduction to the theory of canonical matrices
Turnbull, H W
2004-01-01
Thorough and self-contained, this penetrating study of the theory of canonical matrices presents a detailed consideration of all the theory's principal features. Topics include elementary transformations and bilinear and quadratic forms; canonical reduction of equivalent matrices; subgroups of the group of equivalent transformations; and rational and classical canonical forms. The final chapters explore several methods of canonical reduction, including those of unitary and orthogonal transformations. 1952 edition. Index. Appendix. Historical notes. Bibliographies. 275 problems.
Bertot, Yves; Gonthier, Georges; Ould Biha, Sidi; Pasca, Ioana
2008-01-01
In this paper, we present an approach to describe uniformly iterated “big” operations and to provide lemmas that encapsulate all the commonly used reasoning steps on these constructs. We show that these iterated operations can be handled generically using the syntactic notation and canonical structure facilities provided by the Coq system. We then show how these canonical big operations played a crucial enabling role in the study of various parts of linear algebra and multi-dimensional real a...
An Ensemble Method based on Particle of Swarm for the Reduction of Noise, Outlier and Core Point
Directory of Open Access Journals (Sweden)
Satish Dehariya
2013-03-01
Full Text Available The majority voting and accurate prediction of classification algorithms in data mining are challenging tasks for data classification. To improve data classification, different classifiers are combined with one another in an ensemble process. An ensemble process increases the classification ratio of the classification algorithm; such a paradigm of classification algorithms is called an ensemble classifier. Ensemble learning is a technique to improve the performance and accuracy of classification and prediction in machine learning algorithms. Many researchers have proposed models of ensemble classifiers that merge different classification algorithms, but the performance of an ensemble algorithm suffers from the problems of outliers, noise and core points in the data from the feature selection process. In this paper we combine core, outlier and noise data (COB) in the feature selection process for the ensemble model. The process of best feature selection with an appropriate classifier uses particle swarm optimization. Empirical results on UCI data, with predictions on the Ecoli and glass datasets, indicate that the proposed COB model optimization algorithm can help to improve accuracy and classification.
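The majority-voting idea that the abstract starts from can be shown with scikit-learn's `VotingClassifier` over three heterogeneous base learners. The data are synthetic, and the COB feature selection and particle swarm steps are not reproduced here.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=400, n_features=10, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

# Hard majority vote over three heterogeneous base classifiers
ens = VotingClassifier([
    ("lr", LogisticRegression(max_iter=1000)),
    ("dt", DecisionTreeClassifier(random_state=0)),
    ("nb", GaussianNB()),
], voting="hard").fit(Xtr, ytr)

print(ens.score(Xte, yte))
```

The ensemble outvotes the idiosyncratic errors of any single base learner; noisy or outlying training points, however, degrade all three voters at once, which is the failure mode the COB feature selection targets.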
Relations between canonical and non-canonical inflation
Energy Technology Data Exchange (ETDEWEB)
Gwyn, Rhiannon [Max-Planck-Institut fuer Gravitationsphysik (Albert-Einstein-Institut), Potsdam (Germany); Rummel, Markus [Hamburg Univ. (Germany). 2. Inst. fuer Theoretische Physik; Westphal, Alexander [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany). Theory Group
2012-12-15
We look for potential observational degeneracies between canonical and non-canonical models of inflation of a single field φ. Non-canonical inflationary models are characterized by higher than linear powers of the standard kinetic term X in the effective Lagrangian p(X,φ) and arise for instance in the context of the Dirac-Born-Infeld (DBI) action in string theory. An on-shell transformation is introduced that transforms non-canonical inflationary theories to theories with a canonical kinetic term. The 2-point function observables of the original non-canonical theory and its canonical transform are found to match in the case of DBI inflation.
Relations between canonical and non-canonical inflation
International Nuclear Information System (INIS)
We look for potential observational degeneracies between canonical and non-canonical models of inflation of a single field φ. Non-canonical inflationary models are characterized by higher than linear powers of the standard kinetic term X in the effective Lagrangian p(X,φ) and arise for instance in the context of the Dirac-Born-Infeld (DBI) action in string theory. An on-shell transformation is introduced that transforms non-canonical inflationary theories to theories with a canonical kinetic term. The 2-point function observables of the original non-canonical theory and its canonical transform are found to match in the case of DBI inflation.
Ensemble approach combining multiple methods improves human transcription start site prediction.
LENUS (Irish Health Repository)
Dineen, David G
2010-01-01
The computational prediction of transcription start sites is an important unsolved problem. Some recent progress has been made, but many promoters, particularly those not associated with CpG islands, are still difficult to locate using current methods. These methods use different features and training sets, along with a variety of machine learning techniques and result in different prediction sets.
Ensemble and constrained clustering with applications
Abdala, D.D. (Daniel)
2011-01-01
This thesis presents new developments in ensemble and constrained clustering and makes the following main contributions: 1) A unification of constrained and ensemble clustering in a single framework. 2) A new method for measuring and visualizing the variability of ensembles. 3) A new random-walker-based method for ensemble clustering. 4) An application of ensemble clustering to image segmentation. 5) A new consensus function for ensemble cluste...
Directory of Open Access Journals (Sweden)
S. Skachko
2014-01-01
Full Text Available The Ensemble Kalman filter (EnKF assimilation method is applied to the tracer transport using the same stratospheric transport model as in the 4D-Var assimilation system BASCOE. This EnKF version of BASCOE was built primarily to avoid the large costs associated with the maintenance of an adjoint model. The EnKF developed in BASCOE accounts for two adjustable parameters: a parameter α controlling the model error term and a parameter r controlling the observational error. The EnKF system is shown to be markedly sensitive to these two parameters, which are adjusted based on the monitoring of a χ2-test measuring the misfit between the control variable and the observations. The performance of the EnKF and 4D-Var versions was estimated through the assimilation of Aura-MLS ozone observations during an 8 month period which includes the formation of the 2008 Antarctic ozone hole. To ensure a proper comparison, despite the fundamental differences between the two assimilation methods, both systems use identical and carefully calibrated input error statistics. We provide the detailed procedure for these calibrations, and compare the two sets of analyses with a focus on the lower and middle stratosphere where the ozone lifetime is much larger than the observational update frequency. Based on the Observation-minus-Forecast statistics, we show that the analyses provided by the two systems are markedly similar, with biases smaller than 5% and standard deviation errors smaller than 10% in most of the stratosphere. Since the biases are markedly similar, they have most probably the same causes: these can be deficiencies in the model and in the observation dataset, but not in the assimilation algorithm nor in the error calibration. The remarkably similar performance also shows that in the context of stratospheric transport, the choice of the assimilation method can be based on application-dependent factors, such as CPU cost or the ability to generate an ensemble
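For reference, the analysis step of a stochastic EnKF of the kind used above can be written in a few lines of NumPy. This is a generic textbook update on a toy two-variable state, not the BASCOE implementation; all symbols are the standard ones (H the observation operator, r the observational error variance).

```python
import numpy as np

rng = np.random.default_rng(5)

def enkf_analysis(ensemble, y_obs, H, r):
    """Stochastic EnKF update. ensemble: (n_members, n_state); r: obs error variance."""
    n = ensemble.shape[0]
    A = ensemble - ensemble.mean(axis=0)            # state anomalies
    HX = ensemble @ H.T                             # ensemble mapped to observation space
    HA = HX - HX.mean(axis=0)
    P_HT = A.T @ HA / (n - 1)                       # cross covariance P H^T
    S = HA.T @ HA / (n - 1) + r * np.eye(H.shape[0])
    K = P_HT @ np.linalg.inv(S)                     # Kalman gain
    # Perturbed observations keep the analysis spread statistically consistent
    y_pert = y_obs + rng.normal(0, np.sqrt(r), size=(n, H.shape[0]))
    return ensemble + (y_pert - HX) @ K.T

truth = np.array([1.0, -2.0])
H = np.array([[1.0, 0.0]])                          # observe the first component only
r = 0.05
prior = truth + rng.normal(0, 1.0, size=(200, 2))   # prior ensemble around the truth
y = truth[:1] + rng.normal(0, np.sqrt(r))           # one noisy observation
post = enkf_analysis(prior, y, H, r)
print(prior.mean(axis=0), post.mean(axis=0))
```

The observation contracts the ensemble spread in the observed direction; in the paper's setting the α and r parameters would rescale exactly these model- and observation-error terms.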
A Classifier Ensemble of Binary Classifier Ensembles
Directory of Open Access Journals (Sweden)
Sajad Parvin
2011-09-01
Full Text Available This paper proposes an innovative combinational algorithm to improve performance in multiclass classification domains. Because a more accurate classifier yields better classification performance, researchers in the computing community have tended to work on improving classifier accuracy. However, choosing the single most accurate classifier is not always the best option for obtaining the best classification quality: an alternative is to use many inaccurate or weak classifiers, each specialized for a sub-space of the problem space, and to use their consensus vote as the final classifier. This paper therefore proposes a heuristic classifier ensemble to improve the performance of classification learning. It deals especially with multiclass problems, whose aim is to learn the boundaries of each class from many other classes. Based on the structure of multiclass problems, classifiers are divided into two categories: pairwise classifiers and multiclass classifiers. The aim of a pairwise classifier is to separate one class from another. Because pairwise classifiers are trained only to discriminate between two classes, their decision boundaries are simpler and more effective than those of multiclass classifiers. The main idea behind the proposed method is to focus classifiers on the erroneous regions of the problem and to use the pairwise classification concept instead of the multiclass classification concept. Although the use of pairwise classification instead of multiclass classification is not new, we propose a new pairwise classifier ensemble of much lower order. In this paper, the most confused classes are determined first and then ensembles of classifiers are created; the classifiers of each ensemble work jointly using majority weighted voting. The results of these ensembles
Directory of Open Access Journals (Sweden)
Jing Xu
2015-10-01
Full Text Available In order to guarantee the stable operation of shearers and promote construction of an automatic coal mining working face, an online cutting pattern recognition method with high accuracy and speed based on Improved Ensemble Empirical Mode Decomposition (IEEMD) and Probabilistic Neural Network (PNN) is proposed. An industrial microphone is installed on the shearer and the cutting sound is collected as the recognition criterion to overcome the disadvantages of giant size, contact measurement and low identification rate of traditional detectors. To avoid end-point effects and get rid of undesirable intrinsic mode function (IMF) components in the initial signal, IEEMD is conducted on the sound. The end-point continuation based on the practical storage data is performed first to overcome the end-point effect. Next the average correlation coefficient, which is calculated by the correlation of the first IMF with others, is introduced to select essential IMFs. Then the energy and standard deviation of the remaining IMFs are extracted as features and PNN is applied to classify the cutting patterns. Finally, a simulation example, with an accuracy of 92.67%, and an industrial application prove the efficiency and correctness of the proposed method.
Xu, Jing; Wang, Zhongbin; Tan, Chao; Si, Lei; Liu, Xinhua
2015-01-01
In order to guarantee the stable operation of shearers and promote construction of an automatic coal mining working face, an online cutting pattern recognition method with high accuracy and speed based on Improved Ensemble Empirical Mode Decomposition (IEEMD) and Probabilistic Neural Network (PNN) is proposed. An industrial microphone is installed on the shearer and the cutting sound is collected as the recognition criterion to overcome the disadvantages of giant size, contact measurement and low identification rate of traditional detectors. To avoid end-point effects and get rid of undesirable intrinsic mode function (IMF) components in the initial signal, IEEMD is conducted on the sound. The end-point continuation based on the practical storage data is performed first to overcome the end-point effect. Next the average correlation coefficient, which is calculated by the correlation of the first IMF with others, is introduced to select essential IMFs. Then the energy and standard deviation of the remaining IMFs are extracted as features and PNN is applied to classify the cutting patterns. Finally, a simulation example, with an accuracy of 92.67%, and an industrial application prove the efficiency and correctness of the proposed method. PMID:26528985
You, Setthivoine
2015-11-01
A new canonical field theory has been developed to help interpret the interaction between plasma flows and magnetic fields. The theory augments the Lagrangian of general dynamical systems to rigorously demonstrate that canonical helicity transport is valid across single particle, kinetic and fluid regimes, on scales ranging from classical to general relativistic. The Lagrangian is augmented with two extra terms that represent the interaction between the motion of matter and electromagnetic fields. The dynamical equations can then be re-formulated as a canonical form of Maxwell's equations or a canonical form of Ohm's law valid across all non-quantum regimes. The field theory rigorously shows that helicity can be preserved in kinetic regimes and not only fluid regimes, that helicity transfer between species governs the formation of flows or magnetic fields, and that helicity changes little compared to total energy only if density gradients are shallow. The theory suggests a possible interpretation of particle energization partitioning during magnetic reconnection as canonical wave interactions. This work is supported by US DOE Grant DE-SC0010340.
Canonical phylogenetic ordination.
Giannini, Norberto P
2003-10-01
A phylogenetic comparative method is proposed for estimating historical effects on comparative data using the partitions that compose a cladogram, i.e., its monophyletic groups. Two basic matrices, Y and X, are defined in the context of an ordinary linear model. Y contains the comparative data measured over t taxa. X consists of an initial tree matrix that contains all the xj monophyletic groups (each coded separately as a binary indicator variable) of the phylogenetic tree available for those taxa. The method seeks to define the subset of groups, i.e., a reduced tree matrix, that best explains the patterns in Y. This definition is accomplished via regression or canonical ordination (depending on the dimensionality of Y) coupled with Monte Carlo permutations. It is argued here that unrestricted permutations (i.e., under an equiprobable model) are valid for testing this specific kind of groupwise hypothesis. Phylogeny is either partialled out or, more properly, incorporated into the analysis in the form of component variation. Direct extensions allow for testing ecomorphological data controlled by phylogeny in a variation partitioning approach. Currently available statistical techniques make this method applicable under most univariate/multivariate models and metrics; two-way phylogenetic effects can be estimated as well. The simplest case (univariate Y), tested with simulations, yielded acceptable type I error rates. Applications presented include examples from evolutionary ethology, ecology, and ecomorphology. Results showed that the new technique detected previously overlooked variation clearly associated with phylogeny and that many phylogenetic effects on comparative data may occur at particular groups rather than across the entire tree. PMID:14530135
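The univariate case of the method, regressing a comparative trait on a binary clade-indicator variable and testing it with unrestricted (equiprobable) permutations, can be sketched as follows. This is a minimal illustration with made-up taxa and trait values; the function names and the toy clade are not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def group_r2(y, x):
    """R^2 of a univariate trait y regressed on one binary group indicator x."""
    x = x - x.mean()
    y = y - y.mean()
    denom = (x @ x) * (y @ y)
    return 0.0 if denom == 0 else float((x @ y) ** 2 / denom)

def permutation_p(y, x, n_perm=999):
    """Unrestricted permutation test of the group effect (equiprobable model)."""
    obs = group_r2(y, x)
    hits = sum(group_r2(rng.permutation(y), x) >= obs for _ in range(n_perm))
    return (hits + 1) / (n_perm + 1)

# 12 taxa; one hypothetical monophyletic group (taxa 0-5) coded as a binary
# indicator variable, with the trait shifted inside that clade.
clade = np.array([1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0], dtype=float)
trait = 2.0 * clade + rng.normal(0, 0.3, size=12)

p = permutation_p(trait, clade)
print(p < 0.05)
```

In the full method each monophyletic group of the cladogram contributes one such indicator column to X, and the best-explaining subset is kept.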
Canonical affordances in context
Directory of Open Access Journals (Sweden)
Alan Costall
2012-12-01
Full Text Available James Gibson’s concept of affordances was an attempt to undermine the traditional dualism of the objective and subjective. Gibson himself insisted on the continuity of “affordances in general” and those attached to human artifacts. However, a crucial distinction needs to be drawn between “affordances in general” and the “canonical affordances” that are connected primarily to artifacts. Canonical affordances are conventional and normative. It is only in such cases that it makes sense to talk of the affordance of the object. Chairs, for example, are for sitting-on, even though we may also use them in many other ways. A good deal of confusion has arisen in the discussion of affordances from (1) the failure to recognize the normative status of canonical affordances and (2) then generalizing from this special case.
Modeling of two-phase flow in boiling water reactor using phase-weighted ensemble average method
International Nuclear Information System (INIS)
Investigations into boiling, the generation of vapor and the prediction of its behavior are important for the stability of boiling water reactors. Present models are limited by the simplifications made in deriving the governing equations or by the lack of a closure framework for the constitutive relations; the commercial codes fall into this category as well. Consequently, researchers cannot simply look up the comprehensive, up-to-date relations in their unsimplified form in order to adapt them for their own work. This study offers a state-of-the-art, phase-weighted, ensemble-averaged two-fluid model for the simulation of two-phase flow with heat and mass transfer. This approach is then used for modeling bulk boiling (thermal-hydraulic modeling) in boiling water reactors. The resultant approach is based on using the energy balance equation to find a relation for the quality of vapor at any point. The equations are solved using the SIMPLE algorithm in the finite volume method, and the results are compared with real BWR (PB2 BWR/4 NPP) and boiling data. The comparison shows that the present model is satisfactorily improved in accuracy.
A composite state method for ensemble data assimilation with multiple limited-area models
Directory of Open Access Journals (Sweden)
Matthew Kretschmer
2015-04-01
Full Text Available Limited-area models (LAMs) allow high-resolution forecasts to be made for geographic regions of interest when resources are limited. Typically, boundary conditions for these models are provided through one-way boundary coupling from a coarser resolution global model. Here, data assimilation is considered in a situation in which a global model supplies boundary conditions to multiple LAMs. The data assimilation method presented combines information from all of the models to construct a single ‘composite state’, on which data assimilation is subsequently performed. The analysis composite state is then used to form the initial conditions of the global model and all of the LAMs for the next forecast cycle. The method is tested by using numerical experiments with simple, chaotic models. The results of the experiments show that there is a clear forecast benefit to allowing LAM states to influence one another during the analysis. In addition, adding LAM information at analysis time has a strong positive impact on global model forecast performance, even at points not covered by the LAMs.
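The compositing step can be illustrated on a toy 1-D domain. The grid size, window positions, and the equal-weight averaging rule below are illustrative assumptions, not the paper's exact scheme; the point is that overlapping LAMs and the global model all contribute to a single composite state.

```python
import numpy as np

# Toy 1-D domain with 20 grid points; two LAM windows that overlap.
n = 20
global_state = np.zeros(n)                                    # coarse global forecast
lam_windows = {"lam_a": slice(2, 9), "lam_b": slice(6, 15)}   # hypothetical LAM domains
lam_states = {"lam_a": np.full(7, 1.0), "lam_b": np.full(9, 2.0)}

# Composite state: average every model value available at each grid point,
# so overlapping LAMs influence one another through the shared analysis.
values = global_state.copy()
counts = np.ones(n)
for name, win in lam_windows.items():
    values[win] += lam_states[name]
    counts[win] += 1
composite = values / counts

print(composite[0], composite[4], composite[7])
```

Data assimilation would then be performed on `composite`, and the analysis mapped back to the global model and each LAM.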
Research on an Ensemble Method of Uncertainty Reasoning
Institute of Scientific and Technical Information of China (English)
贺怀清; 李建伏
2011-01-01
Ensemble learning is a machine learning paradigm in which multiple models are strategically generated and combined to obtain better predictive performance than a single learning method. It has been shown that ensemble learning is feasible and tends to yield better results. Uncertainty reasoning is one of the important directions in artificial intelligence. Various uncertainty reasoning methods have been developed, each with its own advantages and disadvantages in practical applications. Motivated by ensemble learning, an ensemble method of uncertainty reasoning is proposed. The main idea of the new method follows the basic framework of ensemble learning: multiple uncertainty reasoning methods are applied in turn, and the results of the various reasoning methods are integrated by some rule into the final result. Finally, theoretical analysis and experimental tests show that the ensemble uncertainty reasoning method is effective and feasible.
Measuring sub-canopy evaporation in a forested wetland using an ensemble of methods
Allen, S. T.; Edwards, B.; Reba, M. L.; Keim, R.
2013-12-01
and humidity gradients. This suggests the need to use combined methods during periods with problematic boundary layer conditions.
Bayesian Decision-theoretic Methods for Parameter Ensembles with Application to Epidemiology
Gunterman, Haluna Penelope Frances
and water-uptake behavior of CLs. Isolated CLs were made in-house and commercially and tested for their PC-S response. CLs have the propensity to be highly hydrophilic and require capillary pressures as low as -80 kPa to eject water. The presence of Pt or surface cracks increases hydrophilicity. These findings suggest that saturation in CLs, especially cracked CLs, may exacerbate poor transport. Lastly, this work includes early-stage development of a limiting-current measurement that can be used to calculate effective transport properties as a function of saturation. Results indicate that the method is valid, and different DM have higher transport depending on the operating condition. The technique is yet in a formative stage, and this work includes advice and recommendations for operation and design improvements.
Differential Forms on Log Canonical Spaces
Greb, Daniel; Kovacs, Sandor J; Peternell, Thomas
2010-01-01
The present paper is concerned with differential forms on log canonical varieties. It is shown that any p-form defined on the smooth locus of a variety with canonical or klt singularities extends regularly to any resolution of singularities. In fact, a much more general theorem for log canonical pairs is established. The proof relies on vanishing theorems for log canonical varieties and on methods of the minimal model program. In addition, a theory of differential forms on dlt pairs is developed. It is shown that many of the fundamental theorems and techniques known for sheaves of logarithmic differentials on smooth varieties also hold in the dlt setting. Immediate applications include the existence of a pull-back map for reflexive differentials, generalisations of Bogomolov-Sommese type vanishing results, and a positive answer to the Lipman-Zariski conjecture for klt spaces.
Regularized canonical correlation analysis with unlabeled data
Institute of Scientific and Technical Information of China (English)
Xi-chuan ZHOU; Hai-bin SHEN
2009-01-01
In standard canonical correlation analysis (CCA), data from definite datasets are used to estimate their canonical correlation. In real applications, for example in bilingual text retrieval, a great portion of the data may not be labeled with the dataset it belongs to. This part of the data is called unlabeled data, while the rest, from definite datasets, is called labeled data. We propose a novel method called regularized canonical correlation analysis (RCCA), which makes use of both labeled and unlabeled samples. Specifically, we learn to approximate the canonical correlation as if all data were labeled. Then, we describe a generalization of RCCA for the multi-set situation. Experiments on four real-world datasets, Yeast, Cloud, Iris, and Haberman, demonstrate that, by incorporating the unlabeled data points, the accuracy of correlation coefficients can be improved by over 30%.
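For reference, the labeled-data quantity that RCCA approximates, the first canonical correlation, can be computed with a standard whitening-plus-SVD sketch. This is plain CCA with a small ridge term added only for numerical stability; it is not the RCCA algorithm of the paper, and the synthetic data are made up.

```python
import numpy as np

rng = np.random.default_rng(1)

def first_canonical_corr(X, Y, reg=1e-6):
    """First canonical correlation of two centred data sets.
    Whiten each block via Cholesky factors, then take the leading
    singular value of the whitened cross-covariance."""
    X = X - X.mean(0)
    Y = Y - Y.mean(0)
    Cxx = X.T @ X + reg * np.eye(X.shape[1])
    Cyy = Y.T @ Y + reg * np.eye(Y.shape[1])
    Lx = np.linalg.cholesky(Cxx)
    Ly = np.linalg.cholesky(Cyy)
    M = np.linalg.inv(Lx) @ (X.T @ Y) @ np.linalg.inv(Ly).T
    return float(np.linalg.svd(M, compute_uv=False)[0])

# Two views sharing a latent variable z in their first coordinate.
z = rng.normal(size=(200, 1))
X = np.hstack([z + 0.3 * rng.normal(size=(200, 1)), rng.normal(size=(200, 1))])
Y = np.hstack([z + 0.3 * rng.normal(size=(200, 1)), rng.normal(size=(200, 1))])
rho = first_canonical_corr(X, Y)
print(round(rho, 2))
```

RCCA's contribution is to recover a comparable estimate when part of the rows of X and Y have no known dataset membership.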
Luo, Chongliang; Liu, Jin; Dey, Dipak K; Chen, Kun
2016-07-01
In many fields, multi-view datasets, measuring multiple distinct but interrelated sets of characteristics on the same set of subjects, together with data on certain outcomes or phenotypes, are routinely collected. The objective in such a problem is often two-fold: both to explore the association structures of multiple sets of measurements and to develop a parsimonious model for predicting the future outcomes. We study a unified canonical variate regression framework to tackle the two problems simultaneously. The proposed criterion integrates multiple canonical correlation analysis with predictive modeling, balancing between the association strength of the canonical variates and their joint predictive power on the outcomes. Moreover, the proposed criterion seeks multiple sets of canonical variates simultaneously to enable the examination of their joint effects on the outcomes, and is able to handle multivariate and non-Gaussian outcomes. An efficient algorithm based on variable splitting and Lagrangian multipliers is proposed. Simulation studies show the superior performance of the proposed approach. We demonstrate the effectiveness of the proposed approach in an [Formula: see text] intercross mice study and an alcohol dependence study. PMID:26861909
On Ensemble Nonlinear Kalman Filtering with Symmetric Analysis Ensembles
Luo, Xiaodong
2010-09-19
The ensemble square root filter (EnSRF) [1, 2, 3, 4] is a popular method for data assimilation in high dimensional systems (e.g., geophysics models). Essentially the EnSRF is a Monte Carlo implementation of the conventional Kalman filter (KF) [5, 6]. It differs from the KF mainly at the prediction steps, where it is ensembles of the system state, rather than the means and covariance matrices, that are propagated forward. In doing this, the EnSRF is computationally more efficient than the KF, since propagating a covariance matrix forward in high dimensional systems is prohibitively expensive. In addition, the EnSRF is also very convenient to implement. By propagating the ensembles of the system state, the EnSRF can be applied directly to nonlinear systems without any change in comparison to the assimilation procedures in linear systems. However, by adopting the Monte Carlo method, the EnSRF also incurs certain sampling errors. One way to alleviate this problem is to introduce certain symmetry to the ensembles, which can reduce the sampling errors and spurious modes in evaluating the means and covariances of the ensembles [7]. In this contribution, we present two methods to produce symmetric ensembles. One is based on the unscented transform [8, 9], which leads to the unscented Kalman filter (UKF) [8, 9] and its variant, the ensemble unscented Kalman filter (EnUKF) [7]. The other is based on Stirling’s interpolation formula (SIF), which results in the divided difference filter (DDF) [10]. Here we propose a simplified divided difference filter (sDDF) in the context of ensemble filtering. The similarity and difference between the sDDF and the EnUKF will be discussed. Numerical experiments will also be conducted to investigate the performance of the sDDF and the EnUKF, and compare them to a well-established EnSRF, the ensemble transform Kalman filter (ETKF) [2].
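The unscented-transform construction of a symmetric ensemble can be sketched directly: 2n+1 sigma points placed symmetrically about the mean reproduce the prescribed mean and covariance exactly, which is what suppresses the sampling error of a random draw. The weights and scaling follow the standard unscented transform with parameter kappa; this is a minimal illustration, not the EnUKF itself.

```python
import numpy as np

def sigma_point_ensemble(mean, cov, kappa=1.0):
    """Symmetric (unscented-transform) ensemble: 2n+1 sigma points whose
    weighted sample mean and covariance match `mean` and `cov` exactly."""
    n = len(mean)
    L = np.linalg.cholesky((n + kappa) * cov)   # columns scale the spread
    pts = [mean]
    w = [kappa / (n + kappa)]
    for i in range(n):
        pts += [mean + L[:, i], mean - L[:, i]]     # symmetric pair
        w += [0.5 / (n + kappa), 0.5 / (n + kappa)]
    return np.array(pts), np.array(w)

mean = np.array([1.0, -2.0])
cov = np.array([[2.0, 0.3], [0.3, 1.0]])
pts, w = sigma_point_ensemble(mean, cov)

# The weighted moments are reproduced up to round-off.
m = w @ pts
C = (pts - m).T @ (w[:, None] * (pts - m))
print(np.allclose(m, mean), np.allclose(C, cov))
```

In an ensemble filter these points, rather than random ensemble members, are propagated through the (possibly nonlinear) model.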
Quantum Gibbs ensemble Monte Carlo
International Nuclear Information System (INIS)
We present a path integral Monte Carlo method which is the full quantum analogue of the Gibbs ensemble Monte Carlo method of Panagiotopoulos to study the gas-liquid coexistence line of a classical fluid. Unlike previous extensions of Gibbs ensemble Monte Carlo to include quantum effects, our scheme is viable even for systems with strong quantum delocalization in the degenerate regime of temperature. This is demonstrated by an illustrative application to the gas-superfluid transition of 4He in two dimensions
Quaternion Linear Canonical Transform Application
Bahri, Mawardi
2015-01-01
Quaternion linear canonical transform (QLCT) is a generalization of the classical linear canonical transform (LCT) using quaternion algebra. The focus of this paper is to introduce an application of the QLCT to the study of generalized swept-frequency filters.
Realizations of the Canonical Representation
Indian Academy of Sciences (India)
M K Vemuri
2008-02-01
A characterisation of the maximal abelian subalgebras of the bounded operators on Hilbert space that are normalised by the canonical representation of the Heisenberg group is given. This is used to classify the perfect realizations of the canonical representation.
International Nuclear Information System (INIS)
Highlights: • Sum of ranking differences (SRD) used for tuning parameter selection based on fusion of multicriteria. • No weighting scheme is needed for the multicriteria. • SRD allows automatic selection of one model or a collection of models if so desired. • SRD allows simultaneous comparison of different calibration methods with tuning parameter selection. • New MATLAB programs are described and made available. - Abstract: Most multivariate calibration methods require selection of tuning parameters, such as partial least squares (PLS) or the Tikhonov regularization variant ridge regression (RR). Tuning parameter values determine the direction and magnitude of respective model vectors thereby setting the resultant prediction abilities of the model vectors. Simultaneously, tuning parameter values establish the corresponding bias/variance and the underlying selectivity/sensitivity tradeoffs. Selection of the final tuning parameter is often accomplished through some form of cross-validation and the resultant root mean square error of cross-validation (RMSECV) values are evaluated. However, selection of a “good” tuning parameter with this one model evaluation merit is almost impossible. Including additional model merits assists tuning parameter selection to provide better balanced models as well as allowing for a reasonable comparison between calibration methods. Using multiple merits requires decisions to be made on how to combine and weight the merits into an information criterion. An abundance of options are possible. Presented in this paper is the sum of ranking differences (SRD) to ensemble a collection of model evaluation merits varying across tuning parameters. It is shown that the SRD consensus ranking of model tuning parameters allows automatic selection of the final model, or a collection of models if so desired. Essentially, the user’s preference for the degree of balance between bias and variance ultimately decides the merits used in SRD
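The SRD computation itself is simple: rank the tuning parameters by each merit, rank them by a consensus reference (here the row average), and sum the absolute rank differences per merit. The sketch below uses made-up merit values and omits tie handling; it is not the authors' MATLAB implementation.

```python
import numpy as np

def ranks(v):
    """Simple ranking (1 = smallest); ties are not handled in this sketch."""
    r = np.empty(len(v), dtype=int)
    r[np.argsort(v)] = np.arange(1, len(v) + 1)
    return r

def srd(merit, reference):
    """Sum of ranking differences of one merit column vs. the reference."""
    return int(np.abs(ranks(merit) - ranks(reference)).sum())

# Rows: candidate tuning parameter values; columns: model evaluation merits
# (e.g. RMSECV, a bias proxy, a variance proxy). Values are illustrative.
M = np.array([[0.9, 0.8, 1.1],
              [0.5, 0.6, 0.4],
              [0.7, 0.9, 0.8],
              [1.2, 1.1, 1.3]])
reference = M.mean(axis=1)                         # consensus: row average
scores = [srd(M[:, j], reference) for j in range(M.shape[1])]
best_tuning = int(np.argmin(ranks(reference)))     # tuning value ranked best overall
print(scores, best_tuning)
```

Merits with small SRD agree closely with the consensus ordering of the tuning parameters, which is what drives the automatic final-model selection.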
Energy Technology Data Exchange (ETDEWEB)
Kalivas, John H., E-mail: kalijohn@isu.edu [Department of Chemistry, Idaho State University, Pocatello, ID 83209 (United States); Héberger, Károly [Research Centre for Natural Sciences, Hungarian Academy of Sciences, Pusztaszeri út 59-67, 1025 Budapest (Hungary); Andries, Erik [Center for Advanced Research Computing, University of New Mexico, Albuquerque, NM 87106 (United States); Department of Mathematics, Central New Mexico Community College, Albuquerque, NM 87106 (United States)
2015-04-15
A Framework for Non-Equilibrium Statistical Ensemble Theory
Institute of Scientific and Technical Information of China (English)
BI Qiao; HE Zu-Tan; LIU Jie
2011-01-01
Since Gibbs synthesized a general equilibrium statistical ensemble theory, many theorists have attempted to generalize the Gibbsian theory to the domain of non-equilibrium phenomena; however, the theory of non-equilibrium phenomena cannot be said to be as firmly established as the Gibbsian ensemble theory. In this work, we present a framework for the non-equilibrium statistical ensemble formalism based on a subdynamic kinetic equation (SKE) rooted in the Brussels-Austin school and followed by some up-to-date works. The key of the construction is a similarity transformation between the Gibbsian ensemble formalism based on the Liouville equation and the subdynamic ensemble formalism based on the SKE. Using this formalism, we study the spin-boson system, in the cases of weak and strong coupling, and easily obtain the reduced density operators for the canonical ensembles.
Canonical quantization of macroscopic electromagnetism
Philbin, Thomas Gerard
2010-01-01
Application of the standard canonical quantization rules of quantum field theory to macroscopic electromagnetism has encountered obstacles due to material dispersion and absorption. This has led to a phenomenological approach to macroscopic quantum electrodynamics where no canonical formulation is attempted. In this paper macroscopic electromagnetism is canonically quantized. The results apply to any linear, inhomogeneous, magnetodielectric medium with dielectric functions that obey the Krame...
Revisiting Canonical Quantization
Klauder, John R.
2012-01-01
Conventional canonical quantization procedures directly link various c-number and q-number quantities. Here, we advocate a different association of classical and quantum quantities that renders classical theory a natural subset of quantum theory with \\hbar>0, in conformity with the real world wherein nature has chosen \\hbar>0 rather than \\hbar=0. While keeping the good results of conventional procedures, some examples are presented for which the new procedures offer better results than conven...
Canonical Infinitesimal Deformations
Ran, Ziv
1998-01-01
This paper gives a canonical construction, in terms of additive cohomological functors, of the universal formal deformation of a compact complex manifold without vector fields (more generally of a faithful $g$-module, where $g$ is a sheaf of Lie algebras without sections). The construction is based on a certain (multivariate) Jacobi complex $J(g)$ associated to $g$: indeed ${\mathbb C}\oplus {\mathbb H}^0(J(g))^*$ is precisely the base ring of the universal deformation.
Kurt Hornik
2005-01-01
Cluster ensembles are collections of individual solutions to a given clustering problem which are useful or necessary to consider in a wide range of applications. The R package clue provides an extensible computational environment for creating and analyzing cluster ensembles, with basic data structures for representing partitions and hierarchies, and facilities for computing on these, including methods for measuring proximity and obtaining consensus and "secondary" clusterings....
Similarity measures for protein ensembles
DEFF Research Database (Denmark)
Lindorff-Larsen, Kresten; Ferkinghoff-Borg, Jesper
2009-01-01
Analyses of similarities and changes in protein conformation can provide important information regarding protein function and evolution. Many scores, including the commonly used root mean square deviation, have therefore been developed to quantify the similarities of different protein conformations...... synthetic example from molecular dynamics simulations. We then apply the algorithms to revisit the problem of ensemble averaging during structure determination of proteins, and find that an ensemble refinement method is able to recover the correct distribution of conformations better than standard single...
Meaning of temperature in different thermostatistical ensembles.
Hänggi, Peter; Hilbert, Stefan; Dunkel, Jörn
2016-03-28
Depending on the exact experimental conditions, the thermodynamic properties of physical systems can be related to one or more thermostatistical ensembles. Here, we survey the notion of thermodynamic temperature in different statistical ensembles, focusing in particular on subtleties that arise when ensembles become non-equivalent. The 'mother' of all ensembles, the microcanonical ensemble, uses entropy and internal energy (the most fundamental, dynamically conserved quantity) to derive temperature as a secondary thermodynamic variable. Over the past century, some confusion has been caused by the fact that several competing microcanonical entropy definitions are used in the literature, most commonly the volume and surface entropies introduced by Gibbs. It can be proved, however, that only the volume entropy satisfies exactly the traditional form of the laws of thermodynamics for a broad class of physical systems, including all standard classical Hamiltonian systems, regardless of their size. This mathematically rigorous fact implies that negative 'absolute' temperatures and Carnot efficiencies more than 1 are not achievable within a standard thermodynamical framework. As an important offspring of microcanonical thermostatistics, we shall briefly consider the canonical ensemble and comment on the validity of the Boltzmann weight factor. We conclude by addressing open mathematical problems that arise for systems with discrete energy spectra. PMID:26903095
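The two competing microcanonical entropy definitions mentioned above can be stated compactly in standard notation (the ideal-gas line is a textbook consequence, not taken from this paper):

```latex
% Gibbs volume entropy vs. surface entropy
S_{\mathrm{vol}}(E) = k_{B}\ln\Omega(E), \qquad
\Omega(E) = \int_{H(\xi)\le E} \mathrm{d}\xi ,
\qquad
S_{\mathrm{surf}}(E) = k_{B}\ln\bigl[\varepsilon\,\omega(E)\bigr], \qquad
\omega(E) = \frac{\partial \Omega}{\partial E},
\qquad
\frac{1}{T} = \frac{\partial S}{\partial E}.
```

For a classical ideal gas, $\Omega(E)\propto E^{3N/2}$, so the volume-entropy temperature satisfies $k_{B}T = 2E/(3N)$, while the surface entropy gives $k_{B}T = E/(3N/2-1)$; the two definitions differ only by terms that vanish for large $N$, which is why the distinction matters chiefly for small systems.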
Alorizi, Seyed Morteza Emami; Nimruzi, Majid
2016-01-01
Background: Stroke has a huge negative impact on society and affects women more adversely. There is scarce evidence of any neuroprotective effect of commonly used drugs in acute stroke. Bushnell et al. provided a guideline focusing on the risk factors of stroke unique to women, including reproductive factors, metabolic syndrome, obesity, atrial fibrillation, and migraine with aura. The ten variables cited by Avicenna in the Canon of Medicine would compensate for the gaps mentioned in this guideline. The prescribed drugs should be selected qualitatively opposite to the Mizaj (warm-cold and wet-dry qualities induced by the disease state) of the disease and according to ten variables, including the nature of the affected organ, intensity of disease, sex, age, habit, season, place of living, occupation, stamina and physical status. Methods: Information related to stroke was searched in the Canon of Medicine, an outstanding book of traditional Persian medicine written by Avicenna. Results: A hemorrhagic stroke is the result of increasing sanguine humor in the body. Sanguine has a warm-wet quality, and should be treated with food and drugs that quench the abundance of blood in the body. An acute episode of ischemic stroke is due to an abundance of phlegm that causes a blockage in the cerebral vessels. Phlegm has a cold-wet quality and treatment should be started with compound medicines that either dissolve the phlegm or eject it from the body. Conclusion: Avicenna cited in the Canon of Medicine that women have a cold and wet temperament compared to men. For this reason, they are more prone to accumulation of phlegm in their body organs, including the liver, joints and vessels, and are consequently at risk of fatty liver, degenerative joint disease, atherosclerosis, and stroke, especially the ischemic type. This is in accordance with epidemiological studies that show a higher rate of ischemic stroke in women than hemorrhagic stroke. PMID:26722147
Belayneh, A.; Adamowski, J.; Khalil, B.; Quilty, J.
2016-05-01
This study explored the ability of coupled machine learning models and ensemble techniques to predict drought conditions in the Awash River Basin of Ethiopia. The potential of wavelet transforms coupled with the bootstrap and boosting ensemble techniques to develop reliable artificial neural network (ANN) and support vector regression (SVR) models was explored in this study for drought prediction. Wavelet analysis was used as a pre-processing tool and was shown to improve drought predictions. The Standardized Precipitation Index (SPI) (in this case SPI 3, SPI 12 and SPI 24) is a meteorological drought index that was forecasted using the aforementioned models and these SPI values represent short and long-term drought conditions. The performances of all models were compared using RMSE, MAE, and R2. The prediction results indicated that the use of the boosting ensemble technique consistently improved the correlation between observed and predicted SPIs. In addition, the use of wavelet analysis improved the prediction results of all models. Overall, the wavelet boosting ANN (WBS-ANN) and wavelet boosting SVR (WBS-SVR) models provided better prediction results compared to the other model types evaluated.
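The bootstrap half of the ensemble approach can be sketched on a synthetic series. The AR(1) stand-in for the SPI, the lag choice, and the linear base learner are all illustrative assumptions; the paper's models are ANNs and SVRs with wavelet pre-processing.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in for an SPI-like drought index (AR(1), illustrative only).
n = 300
spi = np.zeros(n)
for t in range(1, n):
    spi[t] = 0.8 * spi[t - 1] + rng.normal(0, 0.5)

lag = 3  # predict spi[t] from the previous `lag` values
X = np.column_stack([spi[i:n - lag + i] for i in range(lag)])
y = spi[lag:]
train, test = slice(0, 247), slice(247, None)

def fit_lr(X, y):
    """Ordinary least squares with an intercept column."""
    A = np.column_stack([X, np.ones(len(X))])
    return np.linalg.lstsq(A, y, rcond=None)[0]

def predict(coef, X):
    return np.column_stack([X, np.ones(len(X))]) @ coef

# Bootstrap ensemble: average predictors fit on resampled training sets.
preds = []
for _ in range(25):
    idx = rng.integers(0, 247, size=247)
    coef = fit_lr(X[train][idx], y[train][idx])
    preds.append(predict(coef, X[test]))
ensemble_pred = np.mean(preds, axis=0)

rmse = float(np.sqrt(np.mean((ensemble_pred - y[test]) ** 2)))
print(round(rmse, 3))
```

Boosting differs in that members are fit sequentially on reweighted data; wavelet pre-processing would decompose `spi` into sub-series before the lag features are built.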
A Selective Fuzzy Clustering Ensemble Algorithm
Kai Li; Peng Li
2013-01-01
To improve the performance of clustering ensemble methods, a selective fuzzy clustering ensemble algorithm is proposed. It mainly includes selection of clustering ensemble members and combination of clustering results. In the process of member selection, a measure is defined to select the better clustering members. Some selected clustering members are then viewed as a hyper-graph in order to select the more influential hyper-edges (or features) and to weight the selected features. For proce...
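One way to make member selection concrete is to score each member by its agreement with the ensemble's co-association matrix and keep only the better-scoring partitions. This is an illustrative stand-in for the paper's selection measure, with made-up partitions; it is label-invariant because comparisons go through same-cluster indicator matrices.

```python
import numpy as np

def coassociation(labels_list, n):
    """Co-association matrix: fraction of members placing points i, j together."""
    C = np.zeros((n, n))
    for lab in labels_list:
        lab = np.asarray(lab)
        C += (lab[:, None] == lab[None, :])
    return C / len(labels_list)

def member_scores(labels_list, n):
    """Agreement of each member with the ensemble consensus (illustrative)."""
    C = coassociation(labels_list, n)
    scores = []
    for lab in labels_list:
        lab = np.asarray(lab)
        same = lab[:, None] == lab[None, :]
        scores.append(float((C * same).mean() + ((1 - C) * ~same).mean()))
    return scores

members = [
    [0, 0, 0, 1, 1, 1],   # two members describe the same partition...
    [1, 1, 1, 0, 0, 0],   # ...under a different labelling
    [0, 1, 0, 1, 0, 1],   # a dissenting member
]
scores = member_scores(members, 6)
threshold = sorted(scores)[1]          # keep the better-scoring members
keep = [m for m, s in zip(members, scores) if s >= threshold]
print(len(keep), scores.index(min(scores)))
```

The kept members would then feed the hyper-graph combination stage described in the abstract.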
Multi-Model Ensemble Wake Vortex Prediction
Koerner, Stephan; Holzaepfel, Frank; Ahmad, Nash'at N.
2015-01-01
Several multi-model ensemble methods are investigated for predicting wake vortex transport and decay. This study is a joint effort between National Aeronautics and Space Administration and Deutsches Zentrum fuer Luft- und Raumfahrt to develop a multi-model ensemble capability using their wake models. An overview of different multi-model ensemble methods and their feasibility for wake applications is presented. The methods include Reliability Ensemble Averaging, Bayesian Model Averaging, and Monte Carlo Simulations. The methodologies are evaluated using data from wake vortex field experiments.
Canonical brackets of a toy model for the Hodge theory without its canonical conjugate momenta
Shukla, D; Malik, R P
2014-01-01
We consider the toy model of a rigid rotor as an example of the Hodge theory within the framework of the Becchi-Rouet-Stora-Tyutin (BRST) formalism and show that the internal symmetries of this theory lead to the derivation of canonical brackets amongst the creation and annihilation operators of the dynamical variables where the definition of the canonical conjugate momenta is not required. We invoke only the spin-statistics theorem, normal ordering and basic concepts of continuous symmetries (and their generators) to derive the canonical brackets for the model of a one (0 + 1)-dimensional (1D) rigid rotor without using the definition of the canonical conjugate momenta anywhere. Our present method of derivation of the basic brackets is conjectured to be true for a class of theories that provide a set of tractable physical examples for the Hodge theory.
Canonical brackets of a toy model for the Hodge theory without its canonical conjugate momenta
Shukla, D.; Bhanja, T.; Malik, R. P.
2015-07-01
We consider the toy model of a rigid rotor as an example of the Hodge theory within the framework of Becchi-Rouet-Stora-Tyutin (BRST) formalism and show that the internal symmetries of this theory lead to the derivation of canonical brackets amongst the creation and annihilation operators of the dynamical variables where the definition of the canonical conjugate momenta is not required. We invoke only the spin-statistics theorem, normal ordering and basic concepts of continuous symmetries (and their generators) to derive the canonical brackets for the model of a one (0 + 1)-dimensional (1D) rigid rotor without using the definition of the canonical conjugate momenta anywhere. Our present method of derivation of the basic brackets is conjectured to be true for a class of theories that provide a set of tractable physical examples for the Hodge theory.
Bonduelle, M
1987-01-01
The Code of Canon Law (Codex Iuris Canonici), promulgated in 1917, was a classification of the laws and jurisprudence that had ruled the early Church; it governed the ecclesiastical condition of the Roman Church until its reorganisation in 1983. It forbade ordination, or the exercise of orders already received, to "those who are or were epileptics, either not quite in their right mind or possessed by the Evil One". The whole context, and in particular the paragraph treating bodily defects, indicates that these three conditions were juxtaposed, not confused. The texts grounded these provisions not in a malefic view of epilepsy inherited from the morbus sacer of Antiquity, but in decency and in the risk incurred by the Eucharist in case of a seizure. Certain derogations could attenuate the severity of these provisions, as jurisprudence took into account the progress of epileptology and therapeutics. In the new Code of Canon Law (1983), physical disabilities were removed from the text, as were demonic possession and epilepsy, the only remaining impediment being "insanity or other psychic defect", the assessment of which is made by experts. Concerning poorly controlled epilepsies, we believe that experts will be allowed to express their opinion and that a new jurisprudence will make up for the silence of the law. PMID:3310183
DEFF Research Database (Denmark)
Christensen, Eva Arnspang; Schwartzentruber, J.; Clausen, M. P.;
2013-01-01
comparing the results for a biotinylated lipid labeled at high densities with Atto647N-streptavidin (sAv) or at sparse densities with sAv-QDs. In the latter case, we see that the recovered diffusion rate is two-fold greater for the same lipid, in the same cell type, when labeled with Atto647N-sAv as compared to sAv-QDs. These data demonstrate that kICS can be used for the analysis of single-molecule data and, furthermore, can bridge between samples with labeling densities ranging from the single-molecule to the ensemble level.
Boundary conditions in first order gravity: Hamiltonian and Ensemble
Aros, Rodrigo
2005-01-01
In this work two different boundary conditions for first-order gravity, corresponding to a null and a negative cosmological constant respectively, are studied. Both boundary conditions allow one to recover the standard black hole thermodynamics. Furthermore, both boundary conditions define a canonical ensemble. Additionally, a quasilocal energy definition is obtained for the null cosmological constant case.
Canonic form of linear quaternion functions
Sangwine, Stephen J.
2008-01-01
The general linear quaternion function of degree one is a sum of terms with quaternion coefficients on the left and right. The paper considers the canonic form of such a function, and builds on the recent work of Todd Ell, who has shown that any such function may be represented using at most four quaternion coefficients. In this paper, a new and simple method is presented for obtaining these coefficients numerically using a matrix approach which also gives an alternative proof of the canonic ...
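Ell's canonic-form result itself is not reproduced here; the sketch below (a minimal illustration in Python, with function names of our own choosing) only demonstrates the underlying fact the abstract relies on: a degree-one quaternion function f(q) = Σ aₙ q bₙ is real-linear in the four components of q, and is therefore fully determined by a 4×4 real matrix.

```python
def qmult(p, q):
    """Hamilton product of quaternions given as (w, x, y, z) tuples."""
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def linear_fn(terms):
    """f(q) = sum_n a_n * q * b_n, the general degree-one quaternion function."""
    def f(q):
        out = (0.0, 0.0, 0.0, 0.0)
        for a, b in terms:
            t = qmult(qmult(a, q), b)
            out = tuple(o + u for o, u in zip(out, t))
        return out
    return f

def as_matrix(f):
    """Such an f is R-linear, so its action on the basis 1, i, j, k
    determines it completely: the images form the matrix columns."""
    basis = [(1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)]
    cols = [f(e) for e in basis]
    return [[cols[c][r] for c in range(4)] for r in range(4)]

print(qmult((0, 1, 0, 0), (0, 0, 1, 0)))  # (0, 0, 0, 1), i.e. i*j = k
```

Applying `as_matrix(f)` to the components of any q reproduces f(q) exactly, which is the linear-algebra fact behind representing such functions with a small, fixed number of quaternion coefficients.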
Ensemble algorithms in reinforcement learning.
Wiering, Marco A; van Hasselt, Hado
2008-08-01
This paper describes several ensemble methods that combine multiple different reinforcement learning (RL) algorithms in a single agent. The aim is to enhance learning speed and final performance by combining the chosen actions or action probabilities of different RL algorithms. We designed and implemented four different ensemble methods combining the following five different RL algorithms: Q-learning, Sarsa, actor-critic (AC), QV-learning, and AC learning automaton. The intuitively designed ensemble methods, namely, majority voting (MV), rank voting, Boltzmann multiplication (BM), and Boltzmann addition, combine the policies derived from the value functions of the different RL algorithms, in contrast to previous work where ensemble methods have been used in RL for representing and learning a single value function. We show experiments on five maze problems of varying complexity; the first problem is simple, but the other four maze tasks are of a dynamic or partially observable nature. The results indicate that the BM and MV ensembles significantly outperform the single RL algorithms. PMID:18632380
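Two of the voting schemes named in the abstract, majority voting (MV) and Boltzmann multiplication (BM), can be sketched as follows. This is an illustrative Python sketch, not the authors' implementation; the Q-table representation is an assumption.

```python
import math
import random
from collections import Counter

def majority_vote(policies, state):
    """MV: each learner votes for its greedy action; ties break at random.
    `policies` is a list of Q-tables (dict: state -> list of action values)."""
    votes = Counter()
    for q in policies:
        values = q[state]
        votes[max(range(len(values)), key=lambda a: values[a])] += 1
    top = max(votes.values())
    return random.choice([a for a, v in votes.items() if v == top])

def boltzmann_multiplication(policies, state, tau=1.0):
    """BM: multiply the learners' Boltzmann action preferences, then normalize."""
    n_actions = len(policies[0][state])
    prefs = [math.prod(math.exp(q[state][a] / tau) for q in policies)
             for a in range(n_actions)]
    z = sum(prefs)
    return [p / z for p in prefs]

# Three hypothetical learners rate two actions in state "s0"
qs = [{"s0": [0.1, 0.9]}, {"s0": [0.2, 0.5]}, {"s0": [0.7, 0.3]}]
print(majority_vote(qs, "s0"))  # action 1 wins the vote, 2 to 1
```

Note that BM combines the full action distributions rather than only the greedy choices, which is one reason it can outperform plain voting in the paper's maze experiments.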
Multinomial logistic regression ensembles.
Lee, Kyewon; Ahn, Hongshik; Moon, Hojin; Kodell, Ralph L; Chen, James J
2013-05-01
This article proposes a method for multiclass classification problems using ensembles of multinomial logistic regression models. A multinomial logit model is used as a base classifier in ensembles built from random partitions of the predictors. The multinomial logit model can be applied to each mutually exclusive subset of the feature space without variable selection. By combining multiple models, the proposed method can handle a huge database without the constraints needed for analyzing high-dimensional data, and the random partition can improve prediction accuracy by reducing the correlation among base classifiers. The proposed method is implemented using R, and its performance, including overall prediction accuracy, sensitivity, and specificity for each category, is evaluated on two real data sets and on simulated data sets. To investigate the quality of prediction in terms of sensitivity and specificity, the area under the receiver operating characteristic (ROC) curve (AUC) is also examined. The performance of the proposed model is compared to that of a single multinomial logit model and shows a substantial improvement in overall prediction accuracy. The proposed method is also compared with other classification methods such as the random forest, support vector machines, and the random multinomial logit model. PMID:23611203
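The core idea, one multinomial logit per block of a random, mutually exclusive partition of the predictors, with class probabilities averaged across base models, can be sketched as below. This is a simplified Python sketch under our own assumptions (plain gradient-descent fitting, two blocks per partition), not the authors' R implementation.

```python
import math
import random

def train_softmax(X, y, n_classes, epochs=200, lr=0.1):
    """Multinomial logit fitted by plain stochastic gradient descent."""
    d = len(X[0])
    W = [[0.0] * d for _ in range(n_classes)]
    for _ in range(epochs):
        for x, label in zip(X, y):
            scores = [sum(w[i] * x[i] for i in range(d)) for w in W]
            m = max(scores)
            exps = [math.exp(s - m) for s in scores]
            z = sum(exps)
            for c in range(n_classes):
                g = exps[c] / z - (1.0 if c == label else 0.0)
                for i in range(d):
                    W[c][i] -= lr * g * x[i]
    return W

def predict_proba(W, x):
    scores = [sum(w[i] * x[i] for i in range(len(x))) for w in W]
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def logit_ensemble(X, y, n_classes, n_partitions=2, rng=random.Random(0)):
    """One base model per block of a random, mutually exclusive split of the
    predictors; final class probabilities are averaged over all base models."""
    d = len(X[0])
    models = []
    for _ in range(n_partitions):
        idx = list(range(d))
        rng.shuffle(idx)
        for block in (idx[: d // 2], idx[d // 2:]):
            Xb = [[x[i] for i in block] for x in X]
            models.append((block, train_softmax(Xb, y, n_classes)))
    def predict(x):
        probs = [0.0] * n_classes
        for block, W in models:
            for c, p in enumerate(predict_proba(W, [x[i] for i in block])):
                probs[c] += p / len(models)
        return max(range(n_classes), key=probs.__getitem__)
    return predict

# Toy 3-class data: feature c is large exactly for class c (feature 3 is noise)
rng = random.Random(1)
X, y = [], []
for c in range(3):
    for _ in range(30):
        X.append([(1.0 if i == c else 0.0) + rng.gauss(0.0, 0.1)
                  for i in range(4)])
        y.append(c)
clf = logit_ensemble(X, y, n_classes=3)
```

A base model whose block lacks the informative feature for a class contributes a near-uniform vote, but averaging over blocks recovers the correct class, which mirrors how the random partition decorrelates the base classifiers.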
Bouallegue, Zied Ben; Theis, Susanne E; Pinson, Pierre
2015-01-01
Probabilistic forecasts in the form of ensembles of scenarios are required for complex decision-making processes. Ensemble forecasting systems provide such products, but the spatio-temporal structure of the forecast uncertainty is lost when statistical calibration of the ensemble forecasts is applied for each lead time and location independently. Non-parametric approaches allow the reconstruction of spatio-temporal joint probability distributions at a low computational cost. For example, the ensemble copula coupling (ECC) method consists in rebuilding the multivariate aspect of the forecast from the original ensemble forecasts. Based on the assumption of error stationarity, parametric methods aim to fully describe the forecast dependence structures. In this study, the concept of ECC is combined with past data statistics in order to account for the autocorrelation of the forecast error. The new approach, which preserves the dynamical development of the ensemble members, is called dynamic ensemble copula coupling (d-ECC).
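Standard ECC, the starting point that d-ECC extends, admits a compact sketch: at each lead time, the sorted calibrated values are reassigned to members following the rank order of the raw ensemble, so the raw forecast's space-time dependence structure is transplanted onto the calibrated margins. A minimal Python sketch (list-of-lists data layout is our assumption):

```python
def ecc_reorder(raw_trajectories, calibrated_quantiles):
    """Standard ECC: at each lead time, assign the sorted calibrated values
    to members following the rank order of the raw ensemble."""
    n_members = len(raw_trajectories)
    n_times = len(raw_trajectories[0])
    out = [[None] * n_times for _ in range(n_members)]
    for t in range(n_times):
        raw_col = [traj[t] for traj in raw_trajectories]
        # order[r] = index of the member holding rank r in the raw ensemble
        order = sorted(range(n_members), key=lambda i: raw_col[i])
        calib = sorted(q[t] for q in calibrated_quantiles)
        for rank, member in enumerate(order):
            out[member][t] = calib[rank]
    return out

raw = [[2.0, 5.0], [1.0, 7.0], [3.0, 6.0]]          # 3 members, 2 lead times
calib = [[10.0, 40.0], [20.0, 60.0], [30.0, 50.0]]  # calibrated margins
print(ecc_reorder(raw, calib))  # [[20.0, 40.0], [10.0, 60.0], [30.0, 50.0]]
```

Member 1, lowest in the raw ensemble at lead time 0 and highest at lead time 1, keeps that rank pattern after calibration; d-ECC additionally corrects this rank structure for the autocorrelation of past forecast errors.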
Canonical correlation analysis of course and teacher evaluation
DEFF Research Database (Denmark)
Sliusarenko, Tamara; Ersbøll, Bjarne Kjær
2010-01-01
information obtained from the course evaluation form overlaps with information obtained from the teacher evaluation form. Employing canonical correlation analysis, it was found that course and teacher evaluations are correlated. However, the structure of the canonical correlation is subject to change with changes in teaching methods from one year to another.
Directory of Open Access Journals (Sweden)
J. D. Giraldo
2011-04-01
The Sudano-Sahelian zone of West Africa, one of the poorest regions on Earth, is characterized by high rainfall variability and rapid population growth. In this region, heavy storm events frequently cause extensive damage. Nonetheless, projections of change in extreme rainfall values show great divergence between Regional Climate Models (RCMs), increasing the forecast uncertainty. Novel methodologies should be applied that take into account both the variability across different RCMs and the non-stationary nature of the time series when building hazard maps of extreme rainfall events. The present work focuses on a probability density function (PDF)-based evaluation, with a simple quantitative measure of how well each RCM considered can capture the observed annual maximum daily rainfall (AMDR) series on the Senegal River basin. Since meaningful trends have been detected in historical rainfall time series for the region, non-stationary probabilistic models were used to fit the PDF parameters to the AMDR time series. In developing the PDF ensemble by bootstrapping techniques, Reliability Ensemble Averaging (REA) maps were applied to score the RCMs. The REA factors were computed using a metric that evaluates the agreement between the observed (or best-estimated) PDFs and those simulated with each RCM. The assessment of plausible regional trends associated with the return period, from the hazard maps of AMDR, showed a general rise, owing to an increase in the mean and the variability of extreme precipitation. These spatial-temporal distributions could be considered by local stakeholders so as to reach a better balance between mitigation and adaptation.
Data assimilation the ensemble Kalman filter
Evensen, Geir
2006-01-01
Covers data assimilation and inverse methods, including both traditional state estimation and parameter estimation. This text and reference focuses on various popular data assimilation methods, such as weak and strong constraint variational methods and ensemble filters and smoothers.
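The ensemble Kalman filter analysis step at the heart of the book reduces, for a scalar state observed directly, to a one-line gain computation. The following Python sketch shows the stochastic (perturbed-observation) variant under that simplifying assumption; it is an illustration, not an excerpt from the text.

```python
import random
import statistics

def enkf_update(ensemble, y_obs, obs_var, rng):
    """Stochastic EnKF analysis step for a scalar state observed directly:
    each member is pulled toward a perturbed observation with gain
    K = P / (P + R), where P is the prior ensemble variance and R = obs_var."""
    p = statistics.variance(ensemble)
    k = p / (p + obs_var)
    return [x + k * (y_obs + rng.gauss(0.0, obs_var ** 0.5) - x)
            for x in ensemble]

rng = random.Random(1)
prior = [rng.gauss(0.0, 1.0) for _ in range(200)]       # forecast ensemble
posterior = enkf_update(prior, y_obs=2.0, obs_var=0.5, rng=rng)
# The analysis mean moves toward the observation and the spread shrinks.
```

Perturbing the observation for each member keeps the analysis ensemble's variance statistically consistent with the Kalman posterior; square root filters achieve the same without perturbations.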
Ensemble clustering in deterministic ensemble Kalman filters
Directory of Open Access Journals (Sweden)
Javier Amezcua
2012-07-01
Ensemble clustering (EC) can arise in data assimilation with ensemble square root filters (EnSRFs) using non-linear models: an M-member ensemble splits into a single outlier and a cluster of M−1 members. The stochastic ensemble Kalman filter does not present this problem. Modifications to EnSRFs by periodic resampling of the ensemble through random rotations have been proposed to address it. We introduce a metric to quantify the presence of EC and present evidence to dispel the notion that EC leads to filter failure. Starting from a univariate model, we show that EC is not a permanent but a transient phenomenon; it occurs intermittently in non-linear models. We perform a series of data assimilation experiments using a standard EnSRF and an EnSRF modified by resampling through random rotations. The modified EnSRF alleviates issues associated with EC, but at the cost of the traceability of individual ensemble trajectories, and it cannot use some of the algorithms that enhance the performance of the standard EnSRF. In the non-linear regimes of low-dimensional models, the analysis root-mean-square error of the standard EnSRF slowly grows with ensemble size if the size is larger than the dimension of the model state. However, we do not observe this problem in a more complex model that uses an ensemble size much smaller than the dimension of the model state, along with inflation and localisation. Overall, we find that transient EC does not handicap the performance of the standard EnSRF.
A COMPREHENSIVE EVOLUTIONARY APPROACH FOR NEURAL NETWORK ENSEMBLES AUTOMATIC DESIGN
Bukhtoyarov, V.; Semenkin, E.
2010-01-01
A new comprehensive approach to neural network ensemble design is proposed. It consists of a method for the automatic design of neural networks and a method for automatically forming an ensemble solution from the solutions of the separate networks. The proposed approach is demonstrated to be no less effective than a number of other approaches to neural network ensemble design.
A Selective Fuzzy Clustering Ensemble Algorithm
Directory of Open Access Journals (Sweden)
Kai Li
2013-12-01
To improve the performance of clustering ensemble methods, a selective fuzzy clustering ensemble algorithm is proposed. It comprises two parts: selection of clustering ensemble members and combination of clustering results. In the member-selection step, a measure is defined to select the better clustering members. The selected clustering members are then viewed as a hyper-graph, in order to select the more influential hyper-edges (features) and to weight them. To process hyper-edges with fuzzy membership, the CSPA and MCLA consensus functions are generalized. In the experiments, several UCI data sets are chosen to test the presented algorithm's performance. The experimental results show that the proposed method obtains better clustering ensemble results.
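The combination step that consensus functions like CSPA build on can be illustrated with a co-association matrix: the fraction of base clusterings in which each pair of points shares a cluster, followed by a simple single-link grouping. This Python sketch is a generic consensus-clustering illustration, not the paper's fuzzy CSPA/MCLA generalization.

```python
def co_association(labelings, n_points):
    """Fraction of base clusterings in which each pair falls in the same cluster."""
    m = [[0.0] * n_points for _ in range(n_points)]
    for labels in labelings:
        for i in range(n_points):
            for j in range(n_points):
                if labels[i] == labels[j]:
                    m[i][j] += 1.0 / len(labelings)
    return m

def consensus_clusters(labelings, n_points, threshold=0.5):
    """Greedy single-link grouping on the co-association matrix (union-find)."""
    m = co_association(labelings, n_points)
    parent = list(range(n_points))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i
    for i in range(n_points):
        for j in range(i + 1, n_points):
            if m[i][j] > threshold:
                parent[find(i)] = find(j)
    return [find(i) for i in range(n_points)]

# Three base clusterings of 5 points; points 0-2 usually co-occur, 3-4 likewise
base = [[0, 0, 0, 1, 1], [0, 0, 1, 1, 1], [1, 1, 1, 0, 0]]
print(consensus_clusters(base, 5))  # points 0-2 and 3-4 form two groups
```

A *selective* ensemble, as in the abstract, would first score and drop weak base clusterings before building the co-association matrix.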
A mollified Ensemble Kalman filter
Bergemann, Kay
2010-01-01
It is well recognized that discontinuous analysis increments of sequential data assimilation systems, such as ensemble Kalman filters, might lead to spurious high frequency adjustment processes in the model dynamics. Various methods have been devised to continuously spread out the analysis increments over a fixed time interval centered about analysis time. Among these techniques are nudging and incremental analysis updates (IAU). Here we propose another alternative, which may be viewed as a hybrid of nudging and IAU and which arises naturally from a recently proposed continuous formulation of the ensemble Kalman analysis step. A new slow-fast extension of the popular Lorenz-96 model is introduced to demonstrate the properties of the proposed mollified ensemble Kalman filter.
Canonical duties, liabilities of trustees and administrators.
Morrisey, F G
1985-06-01
The new Code of Canon Law outlines a number of duties of those who have responsibility for administering the Church's temporal goods. Before assuming office, administrators must pledge to be efficient and faithful, and they must prepare an inventory of goods belonging to the juridic person they serve. Among their duties, administrators must: Ensure that adequate insurance is provided; Use civilly valid methods to protect canonical ownership of the goods; Observe civil and canon law prescriptions as well as donors' intentions; Collect and safeguard revenues, repay debts, and invest funds securely; Maintain accurate records, keep documents secure, and prepare an annual budget; Prepare an annual report and present it to the Ordinary where prescribed; Observe civil law concerning labor and social policy, and pay employees a just and decent wage. Administrators who carry out acts that are invalid canonically are liable for such acts. The juridic person is not liable, unless it derived benefit from the transaction. Liability is especially high when the sale of property is involved or when a contract is entered into without proper canonical consent. Although Church law is relatively powerless to punish those who have been negligent, stewards, administrators, and trustees must do all they can to be faithful to the responsibility with which they have been entrusted. PMID:10271510
Canonical versus grand canonical treatment of the conservation laws
International Nuclear Information System (INIS)
The differences between the canonical and the grand canonical treatment of the conservation laws in relativistic statistical thermodynamics are discussed. The possible implications for the thermodynamic description of hadronic matter created in particle or ion collisions are considered.
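The distinction at issue, conserving a quantity exactly (canonical) versus on average through a chemical potential (grand canonical), can be made concrete for the simplest case of non-interacting fermions. The Python sketch below verifies the textbook relation Ξ(μ) = Σ_N e^{βμN} Z_N; it is a minimal illustration, not the relativistic hadronic setting of the abstract.

```python
import itertools
import math

def canonical_Z(levels, n_particles, beta):
    """Canonical partition function at fixed N for non-interacting fermions:
    sum over all N-particle occupations of the single-particle levels."""
    return sum(math.exp(-beta * sum(occ))
               for occ in itertools.combinations(levels, n_particles))

def grand_canonical_Z(levels, mu, beta):
    """Grand-canonical partition function: product over levels of
    (1 + e^{-beta (eps - mu)})."""
    return math.prod(1.0 + math.exp(-beta * (e - mu)) for e in levels)

levels = [0.0, 1.0, 2.0, 3.0]
beta, mu = 1.0, 1.5
# The grand-canonical Z is the fugacity-weighted sum of the canonical Z_N:
z_sum = sum(math.exp(beta * mu * n) * canonical_Z(levels, n, beta)
            for n in range(len(levels) + 1))
print(abs(z_sum - grand_canonical_Z(levels, mu, beta)) < 1e-9)  # True
```

The two treatments agree only after summing over all particle numbers; at fixed N, the canonical sum over `itertools.combinations` enforces the conservation law exactly, which is precisely what the grand-canonical product relaxes.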
Ensemble algorithms in reinforcement learning
Wiering, Marco A; van Hasselt, Hado
2008-01-01
This paper describes several ensemble methods that combine multiple different reinforcement learning (RL) algorithms in a single agent. The aim is to enhance learning speed and final performance by combining the chosen actions or action probabilities of different RL algorithms. We designed and implemented four different ensemble methods combining the following five different RL algorithms: Q-learning, Sarsa, actor-critic (AC), QV-learning, and AC learning automaton.
DEFF Research Database (Denmark)
Ben Bouallègue, Zied; Heppelmann, Tobias; Theis, Susanne E.;
2015-01-01
is applied for each lead time and location independently. Non-parametric approaches allow the reconstruction of spatio-temporal joint probability distributions at a low computational cost. For example, the ensemble copula coupling (ECC) method consists in rebuilding the multivariate aspect of the forecast from the original ensemble forecasts. Based on the assumption of error stationarity, parametric methods aim to fully describe the forecast dependence structures. In this study, the concept of ECC is combined with past data statistics in order to account for the autocorrelation of the forecast error. The new approach, which preserves the dynamical development of the ensemble members, is called dynamic ensemble copula coupling (d-ECC). The ensemble-based empirical copulas, ECC and d-ECC, are applied to wind forecasts from the high-resolution ensemble system COSMO-DE-EPS run operationally at the German Weather Service (DWD).
Online Learning with Ensembles
Urbanczik, R
1999-01-01
Supervised online learning with an ensemble of students randomized by the choice of initial conditions is analyzed. For the case of the perceptron learning rule, asymptotically the same improvement in the generalization error of the ensemble compared to the performance of a single student is found as in Gibbs learning. For more optimized learning rules, however, using an ensemble yields no improvement. This is explained by showing that for any learning rule $f$ a transform $\\tilde{f}$ exists,...
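The setup analysed in this abstract, students identical except for their random initial weights, combined by averaging their outputs, can be sketched in a few lines. This Python sketch uses the classic perceptron rule on a toy separable task; the data layout and training schedule are our assumptions, not the paper's statistical-mechanics setting.

```python
import random

def perceptron_ensemble(n_students, dim, examples, epochs=3,
                        rng=random.Random(0)):
    """An ensemble of perceptrons that differ only in their random initial
    weights, trained with the classic perceptron rule; the ensemble predicts
    by summing (equivalently, averaging) the students' linear outputs."""
    students = [[rng.gauss(0.0, 1.0) for _ in range(dim)]
                for _ in range(n_students)]
    for _ in range(epochs):
        for x, label in examples:
            for w in students:
                if label * sum(wi * xi for wi, xi in zip(w, x)) <= 0:
                    for i in range(dim):
                        w[i] += label * x[i]    # perceptron update
    def predict(x):
        score = sum(sum(wi * xi for wi, xi in zip(w, x)) for w in students)
        return 1 if score >= 0 else -1
    return predict

# Toy separable task: the label is the sign of the first coordinate.
data_rng = random.Random(1)
data = []
for _ in range(200):
    x = [data_rng.uniform(-1.0, 1.0), data_rng.uniform(-1.0, 1.0)]
    data.append((x, 1 if x[0] > 0 else -1))
predict = perceptron_ensemble(5, 2, data)
```

Averaging over random initial conditions smooths out initialization noise, which is the mechanism behind the Gibbs-learning-like improvement the abstract reports for the plain perceptron rule.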
Morphing ensemble Kalman filters
Beezley, Jonathan D.; Mandel, Jan
2008-01-01
A new type of ensemble filter is proposed, which combines an ensemble Kalman filter (EnKF) with the ideas of morphing and registration from image processing. This results in filters suitable for non-linear problems whose solutions exhibit moving coherent features, such as thin interfaces in wildfire modelling. The ensemble members are represented as the composition of one common state with a spatial transformation, called registration mapping, plus a residual. A fully automatic registration m...
Morphing Ensemble Kalman Filters
Beezley, Jonathan D.; Mandel, Jan
2007-01-01
A new type of ensemble filter is proposed, which combines an ensemble Kalman filter (EnKF) with the ideas of morphing and registration from image processing. This results in filters suitable for nonlinear problems whose solutions exhibit moving coherent features, such as thin interfaces in wildfire modeling. The ensemble members are represented as the composition of one common state with a spatial transformation, called registration mapping, plus a residual. A fully automatic registration met...
The canon as text for a biblical theology
Directory of Open Access Journals (Sweden)
James A. Loader
2005-10-01
The novelty of the canonical approach is questioned, and its fascination is at least partly traced to the Reformation, as well as to the post-Reformation's need for a clear and authoritative canon to perform the function previously performed by the church. This does not minimise the elusiveness of, and the deeply contradictory positions both within, the canon and triggered by it. On the one hand, the canon itself is a centripetal phenomenon and does play an important role in exegesis and theology. Even so, on the other hand, it not only contains many difficulties, but also causes various additional problems of a formal as well as a theological nature. The question is mooted whether the canonical approach alleviates or aggravates the dilemma. Since this approach has become a major factor in Christian theology, aspects of the Christian canon are used to gauge whether "canon" is an appropriate category for eliminating difficulties that arise by virtue of its own existence. Problematic uses and appropriations of several Old Testament canons are advanced, as well as evidence in the New Testament of a consciousness that the "old" has been surpassed ("Überbietungsbewußtsein"). It is maintained that at least the Childs version of the canonical approach fails to smooth out these and similar difficulties. As a method it can cater for the New Testament's superior role as the hermeneutical standard for evaluating the Old, but flounders on its inability to create the theological unity it claims can solve religious problems exposed by Old Testament historical criticism. It is concluded that canon as a category cannot be dispensed with, but is useful for the opposite of the purpose to which it is conventionally put: far from bringing about theological "unity" or producing a standard for "correct" exegesis, it requires different readings of different canons.
On the canonical quantization of local field theories
International Nuclear Information System (INIS)
A nonconventional extension of the canonical quantization method for local field theories is presented. Some difficulties of the conventional approach are avoided, e.g. there are no divergencies in the corresponding S-matrices. (author)
Asymptotic distributions in the projection pursuit based canonical correlation analysis
Institute of Scientific and Technical Information of China (English)
[Anonymous]
2010-01-01
In this paper, associations between two sets of random variables based on the projection pursuit (PP) method are studied. The asymptotic normal distributions of estimators of the PP-based canonical correlations and weighting vectors are derived.