Stabilizing Canonical-Ensemble Calculations in the Auxiliary-Field Monte Carlo Method
Gilbreth, C N
2014-01-01
Quantum Monte Carlo methods are powerful techniques for studying strongly interacting Fermi systems. However, implementing these methods on computers with finite-precision arithmetic requires careful attention to numerical stability. In the auxiliary-field Monte Carlo (AFMC) method, low-temperature or large-model-space calculations require numerically stabilized matrix multiplication. When adapting methods used in the grand-canonical ensemble to the canonical ensemble of fixed particle number, the numerical stabilization increases the number of required floating-point operations for computing observables by a factor of the size of the single-particle model space, and thus can greatly limit the systems that can be studied. We describe an improved method for stabilizing canonical-ensemble calculations in AFMC that exhibits better scaling, and present numerical tests that demonstrate the accuracy and improved performance of the method.
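The stabilized matrix multiplication the abstract refers to can be illustrated with the standard factored-product technique used in AFMC codes: the product of propagator matrices is kept in the form Q·diag(d)·T and re-orthogonalized after each multiplication, so that widely separated scales live in the diagonal factor. The sketch below is our own plain-NumPy illustration of that baseline idea, not the paper's improved canonical-ensemble algorithm; the function name is ours.

```python
import numpy as np

def stabilized_product(matrices):
    """Compute B_L @ ... @ B_1 in the stabilized form Q @ diag(d) @ T,
    re-orthogonalizing (QR) after each multiplication so that large and
    small scales are separated into the diagonal factor d."""
    n = matrices[0].shape[0]
    Q = np.eye(n)
    d = np.ones(n)
    T = np.eye(n)
    for B in matrices:
        # Fold the next factor into the decomposition: C = B @ Q @ diag(d)
        C = (B @ Q) * d              # scales column j of (B @ Q) by d[j]
        Q, R = np.linalg.qr(C)
        d_new = np.abs(np.diag(R))
        d_new[d_new == 0.0] = 1.0    # guard against exact zeros
        T = (R / d_new[:, None]) @ T  # keep T well-conditioned
        d = d_new
    return Q, d, T
```

At every step the invariant Q·diag(d)·T equals the partial product, so observables can be formed from the well-conditioned factors instead of the ill-conditioned product itself.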
Derivation of Mayer Series from Canonical Ensemble
Wang, Xian-Zhi
2016-02-01
Mayer derived the Mayer series from both the canonical ensemble and the grand canonical ensemble by use of the cluster expansion method. In 2002, we conjectured a recursion formula of the canonical partition function of a fluid (X.Z. Wang, Phys. Rev. E 66 (2002) 056102). In this paper we give a proof for this formula by developing an appropriate expansion of the integrand of the canonical partition function. We further derive the Mayer series solely from the canonical ensemble by use of this recursion formula.
Quantum statistical model of nuclear multifragmentation in the canonical ensemble method
Energy Technology Data Exchange (ETDEWEB)
Toneev, V.D.; Ploszajczak, M. [Grand Accelerateur National d'Ions Lourds (GANIL), 14 - Caen (France)]; Parvan, A.S. [Institute of Applied Physics, Moldova Academy of Sciences, Chisinau (Moldova); Joint Institute for Nuclear Research, Bogoliubov Lab. of Theoretical Physics, Dubna (Russian Federation)]
1999-07-01
A quantum statistical model of nuclear multifragmentation is proposed. The recurrence equation method used in the canonical ensemble makes the model solvable and transparent to physical assumptions, and allows one to obtain results without invoking Monte Carlo techniques. The model exhibits a first-order phase transition. Quantum statistics effects are clearly seen at the microscopic level of occupation numbers but are almost washed out for the global thermodynamic variables and averaged observables studied. In the latter case, the recurrence relations for multiplicity distributions of both intermediate-mass and all fragments are derived, and the specific changes in the shape of the multiplicity distributions in the narrow region of the transition temperature are stressed. The temperature domain favorable for searching for the HBT effect is noted. (authors)
Extending the parQ transition matrix method to grand canonical ensembles.
Haber, René; Hoffmann, Karl Heinz
2016-06-01
Phase coexistence properties as well as other thermodynamic features of fluids can be effectively determined from the grand canonical density of states (DOS). We present an extension of the parQ transition matrix method in combination with the efasTM method as a very fast approach for determining the grand canonical DOS from the transition matrix. The efasTM method minimizes the deviation from detailed balance in the transition matrix using a fast Krylov-based equation solver. The method allows a very effective use of state space transition data obtained by different exploration schemes. An application to a Lennard-Jones system produces phase coexistence properties of the same quality as reference data.
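The detailed-balance condition being enforced can be written, for log densities x_i = ln g_i, as x_j − x_i = ln(T_ij / T_ji) for every observed transition pair. The following dense least-squares sketch is our own small-scale illustration of that relation; the actual efasTM method uses a fast Krylov solver on far larger state spaces.

```python
import numpy as np

def dos_from_transition_matrix(T):
    """Estimate x = ln g (log density of states) from a transition matrix T
    by least-squares enforcement of detailed balance:
        x_j - x_i = ln(T_ij / T_ji)  for every pair with data both ways."""
    n = T.shape[0]
    rows, rhs = [], []
    for i in range(n):
        for j in range(i + 1, n):
            if T[i, j] > 0 and T[j, i] > 0:
                row = np.zeros(n)
                row[j], row[i] = 1.0, -1.0
                rows.append(row)
                rhs.append(np.log(T[i, j] / T[j, i]))
    A, b = np.array(rows), np.array(rhs)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x - x[0]   # fix the gauge: ln g relative to state 0
```

Since only differences of x are constrained, the overall normalization of g is fixed by convention (here, relative to state 0).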
Matrix product purifications for canonical ensembles and quantum number distributions
Barthel, Thomas
2016-09-01
Matrix product purifications (MPPs) are a very efficient tool for the simulation of strongly correlated quantum many-body systems at finite temperatures. When a system features symmetries, these can be used to reduce computation costs substantially. It is straightforward to compute an MPP of a grand-canonical ensemble, also when symmetries are exploited. This paper provides and demonstrates methods for the efficient computation of MPPs of canonical ensembles under utilization of symmetries. Furthermore, we present a scheme for the evaluation of global quantum number distributions using matrix product density operators (MPDOs). We provide exact matrix product representations for canonical infinite-temperature states, and discuss how they can be constructed alternatively by applying matrix product operators to vacuum-type states or by using entangler Hamiltonians. A demonstration of the techniques for Heisenberg spin-1/2 chains explains why the difference in the energy densities of canonical and grand-canonical ensembles decays as 1/L.
Canonical Ensemble Model for Black Hole Radiation
Indian Academy of Sciences (India)
Jingyi Zhang
2014-09-01
In this paper, a canonical ensemble model for the black hole quantum tunnelling radiation is introduced. In this model the probability distribution function corresponding to the emission shell is calculated to second order. The formula of pressure and internal energy of the thermal system is modified, and the fundamental equation of thermodynamics is also discussed.
Re, Matteo; Valentini, Giorgio
2012-03-01
Ensemble methods are statistical and computational learning procedures reminiscent of the human social behavior of seeking several opinions before making any crucial decision. The idea of combining the opinions of different "experts" to obtain an overall "ensemble" decision has been rooted in our culture at least since the classical age of ancient Greece, and it was formalized during the Enlightenment with the Condorcet Jury Theorem [45], which proved that the judgment of a committee is superior to those of individuals, provided the individuals have reasonable competence. Ensembles are sets of learning machines that combine in some way their decisions, or their learning algorithms, or different views of the data, or other specific characteristics to obtain more reliable and more accurate predictions in supervised and unsupervised learning problems [48,116]. A simple example is the majority vote ensemble, in which the decisions of different learning machines are combined and the class that receives the majority of "votes" (i.e., the class predicted by the majority of the learning machines) is the class predicted by the overall ensemble [158]. In the literature, a plethora of terms other than ensemble has been used, such as fusion, combination, aggregation, and committee, to indicate sets of learning machines that work together to solve a machine learning problem [19,40,56,66,99,108,123], but in this chapter we keep the term ensemble in its widest meaning, in order to include the whole range of combination methods. Nowadays, ensemble methods represent one of the main research lines in machine learning [48,116], and the interest of the research community in ensemble methods is witnessed by conferences and workshops specifically devoted to ensembles, first of all the multiple classifier systems (MCS) conference organized by Roli, Kittler, Windeatt, and other researchers in this area [14,62,85,149,173]. Several theories have been
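The majority-vote combiner described above is simple enough to state directly in code. A minimal sketch (our own illustration, with trained learners modeled as plain callables mapping an input to a class label):

```python
from collections import Counter

def majority_vote(predictions):
    """Combine the class labels predicted by several learners for one sample:
    the label receiving the most 'votes' is the ensemble prediction."""
    return Counter(predictions).most_common(1)[0][0]

def ensemble_predict(learners, x):
    """Apply a list of trained learners (callables x -> label) and vote."""
    return majority_vote([learner(x) for learner in learners])
```

With three learners, the ensemble tolerates one wrong prediction per sample, which is the intuition behind the Condorcet argument quoted in the text.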
Multiplicity fluctuations in heavy-ion collisions using canonical and grand-canonical ensemble
Energy Technology Data Exchange (ETDEWEB)
Garg, P. [Indian Institute of Technology Indore, Discipline of Physics, School of Basic Science, Simrol (India); Mishra, D.K.; Netrakanti, P.K.; Mohanty, A.K. [Bhabha Atomic Research Center, Nuclear Physics Division, Mumbai (India)
2016-02-15
We report the higher-order cumulants and their ratios for baryon, charge and strangeness multiplicity in the canonical and grand-canonical ensembles in an ideal thermal model including all the resonances. When the number of conserved quanta is small, an explicit treatment of these conserved charges is required, which leads to a canonical description of the system, and the fluctuations are significantly different from those in the grand-canonical ensemble. Cumulant ratios of total-charge and net-charge multiplicity as a function of collision energy are also compared in the grand-canonical ensemble. (orig.)
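The cumulants underlying such ratios follow from central moments of the multiplicity distribution: c1 = ⟨N⟩, c2 = μ2, c3 = μ3, c4 = μ4 − 3μ2². A short NumPy sketch (our own, for event-by-event samples rather than the thermal model's analytic expressions):

```python
import numpy as np

def cumulants(samples):
    """First four cumulants of a (net-)multiplicity sample, as used for
    ratios like c3/c2 and c4/c2 in fluctuation analyses."""
    m = np.mean(samples)
    d = samples - m
    mu2 = np.mean(d**2)
    mu3 = np.mean(d**3)
    mu4 = np.mean(d**4)
    return m, mu2, mu3, mu4 - 3.0 * mu2**2
```

For a Poisson multiplicity distribution all four cumulants equal the mean, which is the common baseline against which canonical suppression of fluctuations is judged.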
Critical adsorption and critical Casimir forces in the canonical ensemble.
Gross, Markus; Vasilyev, Oleg; Gambassi, Andrea; Dietrich, S
2016-08-01
Critical properties of a liquid film between two planar walls are investigated in the canonical ensemble, within which the total number of fluid particles, rather than their chemical potential, is kept constant. The effect of this constraint is analyzed within mean-field theory (MFT) based on a Ginzburg-Landau free-energy functional as well as via Monte Carlo simulations of the three-dimensional Ising model with fixed total magnetization. Within MFT and for finite adsorption strengths at the walls, the thermodynamic properties of the film in the canonical ensemble can be mapped exactly onto a grand canonical ensemble in which the corresponding chemical potential plays the role of the Lagrange multiplier associated with the constraint. However, due to a nonintegrable divergence of the mean-field order parameter profile near a wall, the limit of infinitely strong adsorption turns out to be not well-defined within MFT, because it would necessarily violate the constraint. The critical Casimir force (CCF) acting on the two planar walls of the film is generally found to behave differently in the canonical and grand canonical ensembles. For instance, the canonical CCF in the presence of equal preferential adsorption at the two walls is found to have the opposite sign and a slower decay behavior as a function of the film thickness compared to its grand canonical counterpart. We derive the stress tensor in the canonical ensemble and find that it has the same expression as in the grand canonical case, but with the chemical potential playing the role of the Lagrange multiplier associated with the constraint. The different behavior of the CCF in the two ensembles is rationalized within MFT by showing that, for a prescribed value of the thermodynamic control parameter of the film, i.e., density or chemical potential, the film pressures are identical in the two ensembles, while the corresponding bulk pressures are not.
Geometric integrator for simulations in the canonical ensemble
Tapias, Diego; Sanders, David P.; Bravetti, Alessandro
2016-08-01
We introduce a geometric integrator for molecular dynamics simulations of physical systems in the canonical ensemble that preserves the invariant distribution of the equations arising from the density dynamics algorithm, for any type of thermostat. Our integrator thus constitutes a unified framework that allows the study and comparison of different thermostats and of their influence on the equilibrium and non-equilibrium (thermo-)dynamic properties of a system. To show the validity and the generality of the integrator, we implement it with a second-order, time-reversible method and apply it to the simulation of a Lennard-Jones system with three different thermostats, obtaining good conservation of the geometrical properties and recovering the expected thermodynamic results. Moreover, to show the advantage of our geometric integrator over a non-geometric one, we compare the results with those obtained by using the non-geometric Gear integrator, which is frequently used to perform simulations in the canonical ensemble. The non-geometric integrator induces a drift in the invariant quantity, while our integrator has no such drift, thus ensuring that the system is effectively sampling the correct ensemble.
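The "invariant quantity" whose drift is monitored above can be made concrete with the Nosé-Hoover thermostat, whose extended-system equations exactly conserve H' = p²/2 + V(q) + Qξ²/2 + T·s along true trajectories. The sketch below is our own generic RK4 illustration for a 1D harmonic oscillator, used only to show the drift-monitoring idea; it is not the paper's geometric integrator.

```python
import numpy as np

def nose_hoover_rhs(state, T=1.0, Q=1.0):
    """Nose-Hoover equations for a 1D harmonic oscillator; state = (q, p, xi, s)."""
    q, p, xi, s = state
    return np.array([p, -q - xi * p, (p * p - T) / Q, xi])

def rk4_step(state, dt):
    """One classical 4th-order Runge-Kutta step (a generic, non-geometric method)."""
    k1 = nose_hoover_rhs(state)
    k2 = nose_hoover_rhs(state + 0.5 * dt * k1)
    k3 = nose_hoover_rhs(state + 0.5 * dt * k2)
    k4 = nose_hoover_rhs(state + dt * k3)
    return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def invariant(state, T=1.0, Q=1.0):
    """Conserved quantity of the extended system; its drift diagnoses an integrator."""
    q, p, xi, s = state
    return 0.5 * p**2 + 0.5 * q**2 + 0.5 * Q * xi**2 + T * s
```

Tracking `invariant` along a trajectory is exactly the diagnostic the abstract applies when comparing the geometric integrator against the Gear integrator.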
Geometric integrator for simulations in the canonical ensemble
Tapias, Diego; Bravetti, Alessandro
2016-01-01
In this work we introduce a geometric integrator for molecular dynamics simulations of physical systems in the canonical ensemble. In particular, we consider the equations arising from the so-called density dynamics algorithm with any possible type of thermostat and provide an integrator that preserves the invariant distribution. Our integrator thus constitutes a unified framework that allows the study and comparison of different thermostats and of their influence on the equilibrium and non-equilibrium (thermo-)dynamic properties of the system. To show the validity and the generality of the integrator, we implement it with a second-order, time-reversible method and apply it to the simulation of a Lennard-Jones system with three different thermostats, obtaining good conservation of the geometrical properties and recovering the expected thermodynamic results.
On Black Hole Entropy Corrections in the Grand Canonical Ensemble
Mahapatra, Subhash; Sarkar, Tapobrata
2011-01-01
We study entropy corrections due to thermal fluctuations for asymptotically AdS black holes in the grand canonical ensemble. To leading order, these can be expressed in terms of the black hole response coefficients via fluctuation moments. We also analyze entropy corrections due to mass and charge fluctuations of R-charged black holes, and our results indicate a universality in the logarithmic corrections to charged AdS black hole entropy in various dimensions.
Climate Prediction Center(CPC)Ensemble Canonical Correlation Analysis Forecast of Temperature
National Oceanic and Atmospheric Administration, Department of Commerce — The Ensemble Canonical Correlation Analysis (ECCA) temperature forecast is a 90-day (seasonal) outlook of US surface temperature anomalies. The ECCA uses Canonical...
National Aeronautics and Space Administration — Ensemble Data Mining Methods, also known as Committee Methods or Model Combiners, are machine learning methods that leverage the power of multiple models to achieve...
Canonical ensemble in non-extensive statistical mechanics, q > 1
Ruseckas, Julius
2016-09-01
The non-extensive statistical mechanics has been used to describe a variety of complex systems. The maximization of entropy, often used to introduce the non-extensive statistical mechanics, is a formal procedure and does not easily lead to physical insight. In this article we investigate the canonical ensemble in the non-extensive statistical mechanics by considering a small system interacting with a large reservoir via short-range forces and assuming equal probabilities for all available microstates. We concentrate on the situation when the reservoir is characterized by generalized entropy with non-extensivity parameter q > 1. We also investigate the problem of divergence in the non-extensive statistical mechanics occurring when q > 1 and show that there is a limit on the growth of the number of microstates of the system that is given by the same expression for all values of q.
Energy Technology Data Exchange (ETDEWEB)
Parvan, A.S. [Joint Institute for Nuclear Research, Bogoliubov Laboratory of Theoretical Physics, Dubna (Russian Federation); Horia Hulubei National Institute of Physics and Nuclear Engineering, Department of Theoretical Physics, Bucharest (Romania); Moldova Academy of Sciences, Institute of Applied Physics, Chisinau (Moldova, Republic of)
2015-09-15
In the present paper, the Tsallis statistics in the grand canonical ensemble was reconsidered in a general form. The thermodynamic properties of the nonrelativistic ideal gas of hadrons in the grand canonical ensemble were studied numerically and analytically in a finite volume and in the thermodynamic limit. It was proved that the Tsallis statistics in the grand canonical ensemble satisfies the requirements of equilibrium thermodynamics in the thermodynamic limit if the thermodynamic potential is a homogeneous function of the first order with respect to the extensive variables of state of the system and the entropic variable z = 1/(q - 1) is an extensive variable of state. The equivalence of the canonical, microcanonical and grand canonical ensembles for the nonrelativistic ideal gas of hadrons was demonstrated. (orig.)
National Oceanic and Atmospheric Administration, Department of Commerce — The Ensemble Canonical Correlation Analysis (ECCA) precipitation forecast is a 90-day (seasonal) outlook of US surface precipitation anomalies. The ECCA uses...
Hard, charged spheres in spherical pores. Grand canonical ensemble Monte Carlo calculations
DEFF Research Database (Denmark)
Sloth, Peter; Sørensen, T. S.
1992-01-01
A model consisting of hard charged spheres inside hard spherical pores is investigated by grand canonical ensemble Monte Carlo calculations. It is found that the mean ionic density profiles in the pores are almost the same when the wall of the pore is moderately charged as when it is uncharged...
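Grand canonical ensemble Monte Carlo of the kind used above alternates particle insertion and deletion attempts, with a hard-sphere overlap rejecting an insertion outright; the acceptance probabilities zV/(N+1) and N/(zV) fix the activity z. The following toy sketch (our own illustration, for neutral hard spheres in a cubic box rather than charged spheres in a spherical pore) shows the move structure:

```python
import numpy as np

def gcmc_step(positions, z, box, sigma_sq, rng):
    """One grand canonical insertion/deletion attempt for hard spheres with
    contact distance sqrt(sigma_sq) in a cubic box of side `box`
    (no periodic images, for brevity)."""
    volume = box**3
    N = len(positions)
    if rng.random() < 0.5:                    # insertion attempt
        trial = rng.random(3) * box
        overlap = any(np.sum((p - trial)**2) < sigma_sq for p in positions)
        if not overlap and rng.random() < z * volume / (N + 1):
            positions.append(trial)
    elif N > 0:                               # deletion attempt
        if rng.random() < N / (z * volume):
            positions.pop(rng.integers(N))
    return positions
```

Because insertions that overlap are always rejected, every configuration visited is a valid hard-sphere configuration, and the particle number fluctuates as dictated by the activity.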
A Canonical Ensemble Approach to the Fermion/Boson Random Point Processes and Its Applications
Tamura, H.; Ito, K. R.
2006-04-01
We introduce the boson and the fermion point processes from the elementary quantum mechanical point of view. That is, we consider quantum statistical mechanics of the canonical ensemble for a fixed number of particles which obey Bose-Einstein, Fermi-Dirac statistics, respectively, in a finite volume. Focusing on the distribution of positions of the particles, we have point processes of the fixed number of points in a bounded domain. By taking the thermodynamic limit such that the particle density converges to a finite value, the boson/fermion processes are obtained. This argument is a realization of the equivalence of ensembles, since the resulting processes are considered to describe a grand canonical ensemble of points. Random point processes corresponding to para-particles of order two are discussed as an application of the formulation. Statistics of a system of composite particles at zero temperature are also considered as a model of determinantal random point processes.
Study of critical dynamics in fluids via molecular dynamics in canonical ensemble.
Roy, Sutapa; Das, Subir K
2015-12-01
With the objective of understanding the usefulness of thermostats in the study of dynamic critical phenomena in fluids, we present results for transport properties in a binary Lennard-Jones fluid that exhibits a liquid-liquid phase transition. Various collective transport properties, calculated from molecular dynamics (MD) simulations in the canonical ensemble with different thermostats, are compared with those obtained from MD simulations in the microcanonical ensemble. It is observed that the Nosé-Hoover and dissipative particle dynamics thermostats are useful for the calculation of mutual diffusivity and shear viscosity. The Nosé-Hoover thermostat, however, as opposed to the latter, appears inadequate for the study of bulk viscosity.
THERMODYNAMICS OF THE SLOWLY ROTATING KERR-NEWMAN BLACK HOLE IN THE GRAND CANONICAL ENSEMBLE
Institute of Scientific and Technical Information of China (English)
CHEN JU-HUA; JING JI-LIANG
2001-01-01
We investigate the thermodynamics of the slowly rotating Kerr-Newman (K-N) black hole in the grand canonical ensemble with York's formalism. Some thermodynamical properties, such as the thermodynamical action, entropy, thermodynamical energy and heat capacity are studied, and solutions of the slowly rotating K-N black hole with different boundary conditions are analysed. We find stable solutions and instantons under certain boundary conditions.
THERMODYNAMICS OF GLOBAL MONOPOLE ANTI-DE-SITTER BLACK HOLE IN GRAND CANONICAL ENSEMBLE
Institute of Scientific and Technical Information of China (English)
Chen Ju-Hua; Jing Ji-Liang; Wang Yong-Jiu
2001-01-01
In this paper, we investigate the thermodynamics of the global monopole anti-de-Sitter black hole in the grand canonical ensemble following York's formalism. The black hole is enclosed in a cavity of finite radius where the temperature and potential are fixed. We have studied some thermodynamical properties, i.e. the reduced action, thermal energy and entropy. By investigating the stability of the solutions, we find stable solutions and instantons.
Using lattice methods in non-canonical quantum statistics
International Nuclear Information System (INIS)
We define a natural coarse-graining procedure which can be applied to any closed equilibrium quantum system described by a density matrix ensemble and we show how the coarse-graining leads to the Gaussian and canonical ensembles. After this motivation, we present two ways of evaluating the Gaussian expectation values with lattice simulations. The first one is computationally demanding but general, whereas the second employs only canonical expectation values but it is applicable only for systems which are almost thermodynamical
Neirotti, J P; Freeman, D L; Doll, J D; Freeman, David L.
2000-01-01
The heat capacity and isomer distributions of the 38 atom Lennard-Jones cluster have been calculated in the canonical ensemble using parallel tempering Monte Carlo methods. A distinct region of temperature is identified that corresponds to equilibrium between the global minimum structure and the icosahedral basin of structures. This region of temperatures occurs below the melting peak of the heat capacity and is accompanied by a peak in the derivative of the heat capacity with temperature. Parallel tempering is shown to introduce correlations between results at different temperatures. A discussion is given that compares parallel tempering with other related approaches that ensure ergodic simulations.
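Parallel tempering as used above runs replicas at a ladder of temperatures and periodically proposes exchanging configurations between adjacent replicas i and i+1, accepting with probability min(1, exp[(β_i − β_{i+1})(E_i − E_{i+1})]). A minimal sketch of the acceptance test (our own illustration):

```python
import math
import random

def attempt_swap(betas, energies, i, u=None):
    """Metropolis test for exchanging the configurations of replicas i and i+1.
    `u` may be supplied for reproducibility; otherwise a uniform variate is drawn."""
    delta = (betas[i] - betas[i + 1]) * (energies[i] - energies[i + 1])
    if u is None:
        u = random.random()
    return u < math.exp(min(0.0, delta))
```

A swap that moves the lower-energy configuration to the colder replica is always accepted; the reverse move is accepted only probabilistically, which is also the mechanism that introduces the inter-temperature correlations discussed in the abstract.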
Courtney, Owen T.; Bianconi, Ginestra
2016-06-01
Simplicial complexes are generalized network structures able to encode interactions occurring between more than two nodes. Simplicial complexes describe a large variety of complex interacting systems ranging from brain networks to social and collaboration networks. Here we characterize the structure of simplicial complexes using their generalized degrees that capture fundamental properties of one, two, three, or more linked nodes. Moreover, we introduce the configuration model and the canonical ensemble of simplicial complexes, enforcing, respectively, the sequence of generalized degrees of the nodes and the sequence of the expected generalized degrees of the nodes. We evaluate the entropy of these ensembles, finding the asymptotic expression for the number of simplicial complexes in the configuration model. We provide the algorithms for the construction of simplicial complexes belonging to the configuration model and the canonical ensemble of simplicial complexes. We give an expression for the structural cutoff of simplicial complexes that for simplicial complexes of dimension d =1 reduces to the structural cutoff of simple networks. Finally, we provide a numerical analysis of the natural correlations emerging in the configuration model of simplicial complexes without structural cutoff.
Canonical transformation method in classical electrodynamics
Pavlenko, Yu. G.
1983-08-01
The solutions of Maxwell's equations in the parabolic equation approximation are obtained on the basis of the canonical transformation method. The Hamiltonian form of the equations for the field in an anisotropic stratified medium is also examined. Perturbation theory for the calculation of the wave reflection and transmission coefficients is developed.
Ensemble Methods Foundations and Algorithms
Zhou, Zhi-Hua
2012-01-01
An up-to-date, self-contained introduction to a state-of-the-art machine learning approach, Ensemble Methods: Foundations and Algorithms shows how these accurate methods are used in real-world tasks. It gives you the necessary groundwork to carry out further research in this evolving field. After presenting background and terminology, the book covers the main algorithms and theories, including Boosting, Bagging, Random Forest, averaging and voting schemes, the Stacking method, mixture of experts, and diversity measures. It also discusses multiclass extension, noise tolerance, error-ambiguity a
Phase structures of 4D stringy charged black holes in canonical ensemble
Jia, Qiang; Lu, J. X.; Tan, Xiao-Jun
2016-08-01
We study the thermodynamics and phase structures of asymptotically flat dilatonic black holes in 4 dimensions, placed in a cavity a la York, in string theory for an arbitrary dilaton coupling. We consider these charged black systems in the canonical ensemble, for which the temperature at the wall of the cavity and the charge inside it are fixed. We find that the dilaton coupling plays the key role in the underlying phase structures. The connection of these black holes to higher-dimensional brane systems via diagonal (double) and/or direct dimensional reductions indicates that the phase structures of the former may exhaust all possible ones of the latter, which are more difficult to study, under conditions of similar settings. Our study also shows that a diagonal (double) dimensional reduction preserves the underlying phase structure, while a direct dimensional reduction has the potential to change it.
DEFF Research Database (Denmark)
Sloth, Peter
1990-01-01
Density profiles and partition coefficients are obtained for hard-sphere fluids inside hard, spherical pores of different sizes by grand canonical ensemble Monte Carlo calculations. The Monte Carlo results are compared to the results obtained by application of different kinds of integral equation...
Li, Gu-Qiang
2016-01-01
The phase transition of the four-dimensional charged AdS black hole solution in $R+f(R)$ gravity with constant curvature is investigated in the grand canonical ensemble, where we find novel characteristics quite different from those in the canonical ensemble. There exists no critical point for the $T-S$ curve, while in earlier research a critical point was found for both the $T-S$ curve and the $T-r_+$ curve when the electric charge of $f(R)$ black holes is kept fixed. Moreover, we derive explicit expressions for the specific heat, the analog of the volume expansion coefficient and the isothermal compressibility coefficient when the electric potential of the $f(R)$ AdS black hole is fixed. The specific heat $C_\Phi$ encounters a divergence within a certain range of the potential. This finding also differs from the result in the canonical ensemble, where there may be two, one or no divergence points for the specific heat $C_Q$. To examine the phase structure newly found in the grand canonical ensemble, we appeal to the well-known thermodynamic geometry tools and de...
Path planning in uncertain flow fields using ensemble method
Wang, Tong; Le Maître, Olivier P.; Hoteit, Ibrahim; Knio, Omar M.
2016-08-01
An ensemble-based approach is developed to conduct optimal path planning in unsteady ocean currents under uncertainty. We focus our attention on two-dimensional steady and unsteady uncertain flows, and adopt a sampling methodology that is well suited to operational forecasts, where an ensemble of deterministic predictions is used to model and quantify uncertainty. In an operational setting, much about dynamics, topography, and forcing of the ocean environment is uncertain. To address this uncertainty, the flow field is parametrized using a finite number of independent canonical random variables with known densities, and the ensemble is generated by sampling these variables. For each of the resulting realizations of the uncertain current field, we predict the path that minimizes the travel time by solving a boundary value problem (BVP), based on the Pontryagin maximum principle. A family of backward-in-time trajectories starting at the end position is used to generate suitable initial values for the BVP solver. This allows us to examine and analyze the performance of the sampling strategy and to develop insight into extensions dealing with general circulation ocean models. In particular, the ensemble method enables us to perform a statistical analysis of travel times and consequently develop a path planning approach that accounts for these statistics. The proposed methodology is tested for a number of scenarios. We first validate our algorithms by reproducing simple canonical solutions, and then demonstrate our approach in more complex flow fields, including idealized, steady and unsteady double-gyre flows.
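The ensemble recipe described above — sample the canonical random variables, solve for each realization, then compute statistics on the travel times — can be sketched in heavily simplified form. The straight-channel geometry, the Gaussian current model, and all numbers below are hypothetical stand-ins for the paper's BVP-based planner:

```python
import random
import statistics

# Toy illustration of the ensemble idea: a vessel with speed `v_boat`
# crosses a channel of length L against an uncertain along-track current.
# The current is parametrized by one canonical random variable (here
# Gaussian), mirroring the finite parametrization of the uncertain flow.
# All parameter values are hypothetical.

def travel_time(v_boat, current, L=10.0):
    """Travel time for a straight crossing; the current opposes motion."""
    v_eff = v_boat - current
    if v_eff <= 0:
        return float("inf")      # vessel cannot make headway
    return L / v_eff

def ensemble_travel_times(n_members=1000, v_boat=2.0, seed=0):
    rng = random.Random(seed)
    # Sample the canonical random variable -> one flow realization each.
    return [travel_time(v_boat, rng.gauss(0.5, 0.2)) for _ in range(n_members)]

times = ensemble_travel_times()
mean_t = statistics.fmean(times)   # ensemble statistics of travel time
std_t = statistics.stdev(times)
```

A planner that accounts for these statistics would then rank candidate paths by, e.g., mean travel time plus a risk penalty proportional to the spread.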
Analysis of mesoscale forecasts using ensemble methods
Gross, Markus
2016-01-01
Mesoscale forecasts are now routinely performed as elements of operational forecasts and their outputs do appear convincing. However, despite their realistic appearance, at times the comparison to observations is less favorable. At the grid scale these forecasts often do not compare well with observations. This is partly due to the chaotic system underlying the weather. Another key problem is that it is impossible to evaluate the risk of making decisions based on these forecasts, because they do not provide a measure of confidence. Ensembles provide this information in the ensemble spread and quartiles. However, running global ensembles at the meso- or sub-mesoscale involves substantial computational resources. National centers do run such ensembles, but the subject of this publication is a method which requires significantly less computation. The ensemble-enhanced mesoscale system presented here does not aim at the creation of an improved mesoscale forecast model. Also it is not to create an improved ensemble syste...
Popular Ensemble Methods: An Empirical Study
Maclin, R; 10.1613/jair.614
2011-01-01
An ensemble consists of a set of individually trained classifiers (such as neural networks or decision trees) whose predictions are combined when classifying novel instances. Previous research has shown that an ensemble is often more accurate than any of the single classifiers in the ensemble. Bagging (Breiman, 1996c) and Boosting (Freund and Schapire, 1996; Schapire, 1990) are two relatively new but popular methods for producing ensembles. In this paper we evaluate these methods on 23 data sets using both neural networks and decision trees as our classification algorithms. Our results clearly indicate a number of conclusions. First, while Bagging is almost always more accurate than a single classifier, it is sometimes much less accurate than Boosting. On the other hand, Boosting can create ensembles that are less accurate than a single classifier -- especially when using neural networks. Analysis indicates that the performance of the Boosting methods is dependent on the characteristics of the data set being exa...
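Bagging as described can be sketched with a toy stand-in: decision stumps trained on bootstrap resamples of a synthetic noisy 1-D dataset, combined by majority vote. The dataset, the stump learner, and the ensemble size are all illustrative, not the paper's experimental setup:

```python
import random

# Minimal bagging sketch: each component classifier is a decision stump
# (threshold rule) trained on a bootstrap resample; predictions are
# combined by majority vote. The data are synthetic, with 10% label noise.

def train_stump(data):
    """Pick the threshold (and polarity) minimizing training error."""
    best = None
    for thr, _ in data:
        err = sum((x > thr) != y for x, y in data)
        err = min(err, len(data) - err)           # allow either polarity
        if best is None or err < best[1]:
            sign = sum((x > thr) == y for x, y in data) >= len(data) / 2
            best = ((thr, sign), err)
    return best[0]

def stump_predict(model, x):
    thr, sign = model
    return (x > thr) == sign

def bagged_predict(models, x):
    votes = sum(stump_predict(m, x) for m in models)
    return votes * 2 >= len(models)               # majority vote

rng = random.Random(1)
# True concept: label is (x > 0.5); each label is flipped with prob. 0.1.
train = [(rng.random(), None) for _ in range(200)]
train = [(x, (x > 0.5) != (rng.random() < 0.1)) for x, _ in train]

models = []
for _ in range(25):                               # 25 bootstrap resamples
    boot = [rng.choice(train) for _ in train]
    models.append(train_stump(boot))

# Accuracy of the bagged ensemble against the clean concept.
acc = sum(bagged_predict(models, x) == (x > 0.5)
          for x, _ in train) / len(train)
```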
Indian Academy of Sciences (India)
W. X. Zhong
2014-09-01
In this paper, we use the canonical ensemble model to discuss the radiation of a Schwarzschild–de Sitter black hole on the black hole horizon. Using this model, we calculate the probability distribution function of the emission shell. The statistical interpretation of this distribution function is then used to investigate the black hole tunnelling radiation spectrum. We also discuss the mechanism by which information flows from the black hole.
DEFF Research Database (Denmark)
Sloth, Peter
1993-01-01
The grand canonical ensemble has been used to study the evaluation of single ion activity coefficients in homogeneous ionic fluids. In this work, the Coulombic interactions are truncated according to the minimum image approximation, and the ions are assumed to be placed in a structureless...... of the individual ionic activity coefficients with respect to the total ionic concentration. This formula has previously been proposed on the basis of somewhat different considerations....
Directory of Open Access Journals (Sweden)
Xun Chen
2014-01-01
Electroencephalogram (EEG) recordings are often contaminated with muscle artifacts. This disturbing muscular activity strongly affects the visual analysis of EEG and impairs the results of EEG signal processing such as brain connectivity analysis. If multichannel EEG recordings are available, then there exists a considerable range of methods which can remove or to some extent suppress the distorting effect of such artifacts. Yet to our knowledge, there are no existing means to remove muscle artifacts from single-channel EEG recordings. Moreover, considering the recently increasing need for biomedical signal processing in ambulatory situations, it is crucially important to develop single-channel techniques. In this work, we propose a simple yet effective method to remove muscle artifacts from single-channel EEG by combining ensemble empirical mode decomposition (EEMD) with multiset canonical correlation analysis (MCCA). We demonstrate the performance of the proposed method through numerical simulations and application to real EEG recordings contaminated with muscle artifacts. The proposed method can successfully remove muscle artifacts without altering the recorded underlying EEG activity. It is a promising tool for real-world biomedical signal processing applications.
Composed ensembles of random unitary ensembles
Pozniak, M; Kus, M; Pozniak, Marcin; Zyczkowski, Karol; Kus, Marek
1997-01-01
Composed ensembles of random unitary matrices are defined via products of matrices, each pertaining to a given canonical circular ensemble of Dyson. We investigate statistical properties of the spectra of some composed ensembles and demonstrate their physical relevance. We also discuss methods of generating random matrices distributed according to the invariant Haar measure on the orthogonal and unitary groups.
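A standard way to draw a single matrix from the Haar measure on U(n) — the circular unitary ensemble — and then form a member of a composed ensemble as a product, is the QR-based recipe sketched below; the dimension and seed are arbitrary:

```python
import numpy as np

# Draw a Haar-distributed unitary: QR-factorize a complex Ginibre matrix
# and fix the phases of R's diagonal (Mezzadri's recipe), so the result
# is not biased by the QR sign/phase convention.

def haar_unitary(n, rng):
    z = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    q, r = np.linalg.qr(z)
    phases = np.diag(r) / np.abs(np.diag(r))
    return q * phases            # multiplies column j of q by phases[j]

rng = np.random.default_rng(0)
u1, u2 = haar_unitary(4, rng), haar_unitary(4, rng)
composed = u1 @ u2               # a member of a composed (product) ensemble
```

The product of independent Haar unitaries is again unitary, so spectral statistics of such composed ensembles can be gathered by repeating the draw.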
Improved Canonical Quantization Method of Self Dual Field
Institute of Scientific and Technical Information of China (English)
樊丰华; 黄永畅
2012-01-01
In this paper, an improved canonical quantization method for the self-dual field is given in order to overcome the linear-combination problem for the second-class constraints and the problem of maximizing the number of first-class constraints in the Dirac method. In the improved canonical quantization method there are no artificial linear combinations and no first-class-constraint maximization problem, and at the same time the stability of the system is taken into account. The improved canonical quantization method is therefore more natural and more readily accepted than the usual Dirac method. We use the improved method to carry out the canonical quantization of the self-dual field, which is connected with string theory, and the results agree with those obtained by the Dirac method.
Ensemble methods for noise in classification problems
Verbaeten, Sofie; Van Assche, Anneleen
2003-01-01
Ensemble methods combine a set of classifiers to construct a new classifier that is (often) more accurate than any of its component classifiers. In this paper, we use ensemble methods to identify noisy training examples. More precisely, we consider the problem of mislabeled training examples in classification tasks, and address this problem by pre-processing the training set, i.e. by identifying and removing outliers from the training set. We study a number of filter techniques that are based...
Ensemble Kalman methods for inverse problems
International Nuclear Information System (INIS)
The ensemble Kalman filter (EnKF) was introduced by Evensen in 1994 (Evensen 1994 J. Geophys. Res. 99 10143–62) as a novel method for data assimilation: state estimation for noisily observed time-dependent problems. Since that time it has had enormous impact in many application domains because of its robustness and ease of implementation, and numerical evidence of its accuracy. In this paper we propose the application of an iterative ensemble Kalman method for the solution of a wide class of inverse problems. In this context we show that the estimate of the unknown function that we obtain with the ensemble Kalman method lies in a subspace A spanned by the initial ensemble. Hence the resulting error may be bounded above by the error found from the best approximation in this subspace. We provide numerical experiments which compare the error incurred by the ensemble Kalman method for inverse problems with the error of the best approximation in A, and with variants on traditional least-squares approaches, restricted to the subspace A. In so doing we demonstrate that the ensemble Kalman method for inverse problems provides a derivative-free optimization method with comparable accuracy to that achieved by traditional least-squares approaches. Furthermore, we also demonstrate that the accuracy is of the same order of magnitude as that achieved by the best approximation. Three examples are used to demonstrate these assertions: inversion of a compact linear operator; inversion of piezometric head to determine hydraulic conductivity in a Darcy model of groundwater flow; and inversion of Eulerian velocity measurements at positive times to determine the initial condition in an incompressible fluid. (paper)
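The basic iteration of an ensemble Kalman method for an inverse problem can be sketched for a small linear model y = G u + η. The operator, dimensions, noise level, and the use of perturbed observations below are illustrative choices, not the paper's exact algorithm; note the estimate stays in the span of the initial ensemble, as discussed above:

```python
import numpy as np

# Derivative-free ensemble Kalman iteration for a linear inverse problem
# y = G @ u + noise. All sizes and noise levels are illustrative.

rng = np.random.default_rng(42)
d, m, J = 5, 3, 50                      # data dim, unknown dim, ensemble size
G = rng.standard_normal((d, m))         # forward operator
u_true = np.array([1.0, -2.0, 0.5])
gamma = 1e-4 * np.eye(d)                # observation noise covariance
y = G @ u_true + rng.multivariate_normal(np.zeros(d), gamma)

U = rng.standard_normal((m, J))         # initial ensemble (columns)
for _ in range(20):                     # ensemble Kalman iterations
    Gu = G @ U
    du = U - U.mean(axis=1, keepdims=True)
    dg = Gu - Gu.mean(axis=1, keepdims=True)
    C_ug = du @ dg.T / (J - 1)          # cross-covariance estimate
    C_gg = dg @ dg.T / (J - 1)          # forward-output covariance estimate
    K = C_ug @ np.linalg.inv(C_gg + gamma)
    # Perturbed observations keep the ensemble spread statistically consistent.
    Y = y[:, None] + rng.multivariate_normal(np.zeros(d), gamma, J).T
    U = U + K @ (Y - Gu)                # Kalman-type update of each member

u_est = U.mean(axis=1)                  # point estimate of the unknown
```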
Energy Technology Data Exchange (ETDEWEB)
Kadoura, Ahmad; Sun, Shuyu, E-mail: shuyu.sun@kaust.edu.sa; Salama, Amgad
2014-08-01
Accurate determination of thermodynamic properties of petroleum reservoir fluids is of great interest to many applications, especially in petroleum engineering and chemical engineering. Molecular simulation has many appealing features, especially its requirement of fewer tuned parameters but yet better predicting capability; however it is well known that molecular simulation is very CPU expensive, as compared to equation of state approaches. We have recently introduced an efficient thermodynamically consistent technique to regenerate rapidly Monte Carlo Markov Chains (MCMCs) at different thermodynamic conditions from the existing data points that have been pre-computed with expensive classical simulation. This technique can speed up the simulation more than a million times, making the regenerated molecular simulation almost as fast as equation of state approaches. In this paper, this technique is first briefly reviewed and then numerically investigated in its capability of predicting ensemble averages of primary quantities at different neighboring thermodynamic conditions to the original simulated MCMCs. Moreover, this extrapolation technique is extended to predict second derivative properties (e.g. heat capacity and fluid compressibility). The method works by reweighting and reconstructing generated MCMCs in canonical ensemble for Lennard-Jones particles. In this paper, system's potential energy, pressure, isochoric heat capacity and isothermal compressibility along isochors, isotherms and paths of changing temperature and density from the original simulated points were extrapolated. Finally, an optimized set of Lennard-Jones parameters (ε, σ) for single site models were proposed for methane, nitrogen and carbon monoxide.
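The reweighting idea at the heart of the technique — reuse configurations sampled at one thermodynamic condition to estimate averages at a neighboring one — can be illustrated on a toy system where the answer is known in closed form. This is a bare single-state-point sketch, not the paper's MCMC-regeneration scheme for Lennard-Jones fluids:

```python
import numpy as np

# Configurations sampled at inverse temperature beta0 are reweighted by
# exp(-(beta - beta0) * E) to estimate averages at a nearby beta without
# re-running the simulation. The toy "system" has a flat density of
# states, so E ~ Exponential(beta) and <E> = 1/beta exactly.

rng = np.random.default_rng(7)
beta0, beta = 1.0, 1.25
E = rng.exponential(1.0 / beta0, size=200_000)   # energies sampled at beta0

w = np.exp(-(beta - beta0) * E)                  # reweighting factors
E_reweighted = np.sum(w * E) / np.sum(w)         # estimate of <E> at beta
# Exact answer at beta = 1.25 is 1/1.25 = 0.8.
```

The same weights applied to any sampled observable give its extrapolated average; accuracy degrades as beta moves far from beta0, which is why the paper restricts extrapolation to neighboring conditions.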
Electronic chemical response indexes at finite temperature in the canonical ensemble
Energy Technology Data Exchange (ETDEWEB)
Franco-Pérez, Marco, E-mail: qimfranco@hotmail.com, E-mail: jlgm@xanum.uam.mx, E-mail: avela@cinvestav.mx; Gázquez, José L., E-mail: qimfranco@hotmail.com, E-mail: jlgm@xanum.uam.mx, E-mail: avela@cinvestav.mx [Departamento de Química, Universidad Autónoma Metropolitana-Iztapalapa, Av. San Rafael Atlixco 186, México, D. F. 09340, México (Mexico); Departamento de Química, Centro de Investigación y de Estudios Avanzados, Av. Instituto Politécnico Nacional 2508, México, D. F. 07360, México (Mexico); Vela, Alberto, E-mail: qimfranco@hotmail.com, E-mail: jlgm@xanum.uam.mx, E-mail: avela@cinvestav.mx [Departamento de Química, Centro de Investigación y de Estudios Avanzados, Av. Instituto Politécnico Nacional 2508, México, D. F. 07360, México (Mexico)
2015-07-14
Assuming that the electronic energy is given by a smooth function of the number of electrons and within the extension of density functional theory to finite temperature, the first and second order chemical reactivity response functions of the Helmholtz free energy with respect to the temperature, the number of electrons, and the external potential are derived. It is found that in all cases related to the first or second derivatives with respect to the number of electrons or the external potential, there is a term given by the average of the corresponding derivative of the electronic energy of each state (ground and excited). For the second derivatives, including those related with the temperature, there is a thermal fluctuation contribution that is zero at zero temperature. Thus, all expressions reduce correctly to their corresponding chemical reactivity expressions at zero temperature and show that, at room temperature, the corrections are very small. When the assumption that the electronic energy is given by a smooth function of the number of electrons is replaced by the straight lines behavior connecting integer values, as required by the ensemble theorem, one needs to introduce directional derivatives in most cases, so that the temperature dependent expressions reduce correctly to their zero temperature counterparts. However, the main result holds, namely, at finite temperature the thermal corrections to the chemical reactivity response functions are very small. Consequently, the present work validates the usage of reactivity indexes calculated at zero temperature to infer chemical behavior at room and even higher temperatures.
The canonical and grand canonical models for nuclear multifragmentation
Indian Academy of Sciences (India)
G Chaudhuri; S Das Gupta
2010-08-01
Many observables seen in intermediate energy heavy-ion collisions can be explained on the basis of statistical equilibrium. Calculations based on statistical equilibrium can be implemented in microcanonical ensemble, canonical ensemble or grand canonical ensemble. This paper deals with calculations with canonical and grand canonical ensembles. A recursive relation developed recently allows calculations with arbitrary precision for many nuclear problems. Calculations are done to study the nature of phase transition in nuclear matter.
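The kind of recursive relation mentioned above can be sketched as follows. The recursion shown is the standard canonical-ensemble one for non-interacting fragments; the single-fragment weights used in the check are placeholders, not a nuclear model:

```python
from math import factorial

# If omega[k] is the partition function of a single fragment of k
# nucleons, the canonical partition function Q_A of A nucleons obeys
#   Q_A = (1/A) * sum_{k=1..A} k * omega[k] * Q_{A-k},   Q_0 = 1,
# which is exact and cheap to evaluate to arbitrary precision.

def canonical_Q(A, omega):
    """omega: dict mapping fragment size k -> single-fragment weight."""
    Q = [1.0] + [0.0] * A
    for n in range(1, A + 1):
        Q[n] = sum(k * omega.get(k, 0.0) * Q[n - k]
                   for k in range(1, n + 1)) / n
    return Q

# Sanity check: with only monomers (omega_1 = w), Q_A = w**A / A!.
w = 2.0
Q = canonical_Q(6, {1: w})
```

Fragment multiplicities and other observables then follow by differentiating or by ratios such as <n_k> = omega_k * Q_{A-k} / Q_A.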
Parametric Potential Determination by the Canonical Function Method
Tannous, C; Langlois, J M
1999-01-01
The canonical function method (CFM) is a powerful means for solving the radial Schrodinger equation (RSE). The mathematical difficulty of the RSE lies in the fact that it is a singular boundary value problem. The CFM turns it into a regular initial value problem and allows the full determination of the spectrum of the Schrodinger operator without calculating the eigenfunctions. Following the parametrisation suggested by Klapisch and Green, Sellin and Zachor, we develop a CFM to optimise the potential parameters in order to reproduce the experimental quantum defect results for various Rydberg series of He, Ne and Ar as evaluated from Moore's data.
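A stripped-down cousin of the CFM idea — integrate the radial equation as a regular initial value problem and locate eigenvalues from the large-r behavior of the solution alone, without keeping eigenfunctions — can be sketched for hydrogen s-states in atomic units, where the exact ground-state energy is −0.5. This is a plain shooting scheme, not the full CFM:

```python
# Solve u'' = -2*(E + 1/r)*u (hydrogen, l = 0, atomic units) outward
# from near the origin with RK4, and bisect in E until the solution
# decays at large r. The sign of u(r_max) flips as E crosses an
# eigenvalue, which is all the information the search needs.

def shoot(E, r0=1e-3, r_max=25.0, h=0.01):
    def f(r, u, v):                       # returns (u', u'')
        return v, -2.0 * (E + 1.0 / r) * u
    # Series of the regular solution near r = 0: u ~ r - r^2.
    r, u, v = r0, r0 * (1 - r0), 1.0 - 2.0 * r0
    while r < r_max:
        k1u, k1v = f(r, u, v)
        k2u, k2v = f(r + h / 2, u + h / 2 * k1u, v + h / 2 * k1v)
        k3u, k3v = f(r + h / 2, u + h / 2 * k2u, v + h / 2 * k2v)
        k4u, k4v = f(r + h, u + h * k3u, v + h * k3v)
        u += h / 6 * (k1u + 2 * k2u + 2 * k3u + k4u)
        v += h / 6 * (k1v + 2 * k2v + 2 * k3v + k4v)
        r += h
    return u

lo, hi = -0.6, -0.4                       # bracket the ground state
s_lo = shoot(lo)
for _ in range(40):                       # bisection on the energy
    mid = (lo + hi) / 2
    s_mid = shoot(mid)
    if s_lo * s_mid <= 0:
        hi = mid
    else:
        lo, s_lo = mid, s_mid
E0 = (lo + hi) / 2                        # should approach -0.5
```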
Multivariate localization methods for ensemble Kalman filtering
Roh, S.
2015-12-03
In ensemble Kalman filtering (EnKF), the small number of ensemble members that is feasible to use in a practical data assimilation application leads to sampling variability of the estimates of the background error covariances. The standard approach to reducing the effects of this sampling variability, which has also been found to be highly efficient in improving the performance of EnKF, is the localization of the estimates of the covariances. One family of localization techniques is based on taking the Schur (element-wise) product of the ensemble-based sample covariance matrix and a correlation matrix whose entries are obtained by the discretization of a distance-dependent correlation function. While the proper definition of the localization function for a single state variable has been extensively investigated, a rigorous definition of the localization function for multiple state variables that exist at the same locations has been seldom considered. This paper introduces two strategies for the construction of localization functions for multiple state variables. The proposed localization functions are tested by assimilating simulated observations experiments into the bivariate Lorenz 95 model with their help.
Kocharovsky, V. V.; Kocharovsky, Vl. V.; Tarasov, S. V.
2016-01-01
The analytical theory of Bose-Einstein condensation of an ideal gas in mesoscopic systems has been briefly reviewed in application to traps with arbitrary shapes and dimension. This theory describes the phases of the classical gas and the formed Bose-Einstein condensate, as well as the entire vicinity of the phase transition point. The statistics and thermodynamics of Bose-Einstein condensation have been studied in detail, including their self-similar structure in the critical region, transition to the thermodynamic limit, effect of boundary conditions on the properties of a system, and nonequivalence of the description of Bose-Einstein condensation in different statistical ensembles. The complete classification of universality classes of Bose-Einstein condensation has been given.
Hybrid Intrusion Detection Using Ensemble of Classification Methods
Directory of Open Access Journals (Sweden)
M.Govindarajan
2014-01-01
One of the major developments in machine learning in the past decade is the ensemble method, which finds a highly accurate classifier by combining many moderately accurate component classifiers. In this research work, new ensemble classification methods are proposed: homogeneous ensemble classifiers using bagging and heterogeneous ensemble classifiers using an arcing classifier, and their performances are analyzed in terms of accuracy. A classifier ensemble is designed using a Radial Basis Function (RBF) network and a Support Vector Machine (SVM) as base classifiers. The feasibility and the benefits of the proposed approaches are demonstrated by means of real and benchmark data sets of intrusion detection. The main originality of the proposed approach rests on three main parts: a preprocessing phase, a classification phase and a combining phase. A wide range of comparative experiments is conducted for real and benchmark data sets of intrusion detection. The accuracy of the base classifiers is compared with homogeneous and heterogeneous models for the data mining problem. The proposed ensemble methods provide a significant improvement in accuracy compared to individual classifiers, and heterogeneous models exhibit better results than homogeneous models for real and benchmark data sets of intrusion detection.
Methods of Assessing Replicability in Canonical Correlation Analysis (CCA).
King, Jason E.
Theoretical hypotheses generated from data analysis of a single sample should not be advanced until the replicability issue is treated. At least one of three questions usually arises when evaluating the invariance of results obtained from a canonical correlation analysis (CCA): (1) "Will an effect occur in subsequent studies?"; (2) "Will the size…
Velazquez, L.; Castro-Palacio, J. C.
2015-03-01
Velazquez and Curilef [J. Stat. Mech. (2010) P02002, 10.1088/1742-5468/2010/02/P02002; J. Stat. Mech. (2010) P04026, 10.1088/1742-5468/2010/04/P04026] have proposed a methodology to extend Monte Carlo algorithms that are based on the canonical ensemble. According to our previous study, their proposal allows us to overcome slow sampling problems in systems that undergo any type of temperature-driven phase transition. After a comprehensive review of the ideas and connections of this framework, we discuss the application of a reweighting technique to improve the accuracy of microcanonical calculations, specifically, the well-known multihistogram method of Ferrenberg and Swendsen [Phys. Rev. Lett. 63, 1195 (1989), 10.1103/PhysRevLett.63.1195]. As an example of application, we reconsider the study of the four-state Potts model on the square lattice $L \times L$ with periodic boundary conditions. This analysis allows us to detect the existence of a very small latent heat per site $q_L$ during the temperature-driven phase transition of this model, whose size dependence seems to follow a power law $q_L(L) \propto (1/L)^{z}$ with exponent $z \simeq 0.26 \pm 0.02$. We discuss the compatibility of these results with the continuous character of the temperature-driven phase transition when $L \to +\infty$.
Representations and Ensemble Methods for Dynamic Relational Classification
Rossi, Ryan A
2011-01-01
Temporal networks are ubiquitous and evolve over time by the addition, deletion, and changing of links, nodes, and attributes. Although many relational datasets contain temporal information, the majority of existing techniques in relational learning focus on static snapshots and ignore the temporal dynamics. We propose a framework for discovering temporal representations of relational data to increase the accuracy of statistical relational learning algorithms. The temporal relational representations serve as a basis for classification, ensembles, and pattern mining in evolving domains. The framework includes (1) selecting the time-varying relational components (links, attributes, nodes), (2) selecting the temporal granularity, (3) predicting the temporal influence of each time-varying relational component, and (4) choosing the weighted relational classifier. Additionally, we propose temporal ensemble methods that exploit the temporal-dimension of relational data. These ensembles outperform traditional and mor...
Directory of Open Access Journals (Sweden)
Kazuo Saito
2012-01-01
The effect of lateral boundary perturbations (LBPs) on the mesoscale breeding (MBD) method and the local ensemble transform Kalman filter (LETKF) as initial-perturbation generators for mesoscale ensemble prediction systems (EPSs) was examined. A LBP method using the Japan Meteorological Agency's (JMA's) operational one-week global ensemble prediction was developed and applied to the mesoscale EPS of the Meteorological Research Institute for the World Weather Research Programme, Beijing 2008 Olympics Research and Development Project. The amplitude of the LBPs was adjusted based on the ensemble spread statistics, considering the difference between the forecast times of the JMA's one-week EPS and the associated breeding/ensemble Kalman filter (EnKF) cycles. LBPs in the ensemble forecast increase the ensemble spread and improve the accuracy of the ensemble mean forecast. In the MBD method, if LBPs are introduced in its breeding cycles, the growth rate of the generated bred vectors is increased, and the ensemble spread and the root mean square errors (RMSEs) of the ensemble mean are further improved in the ensemble forecast. With LBPs in the breeding cycles, positional correspondence to the meteorological disturbances and the orthogonality of the bred vectors are improved. Brier Skill Scores (BSSs) also showed a remarkable effect of LBPs in the breeding cycles. LBPs showed a similar effect with the LETKF. If LBPs are introduced in the EnKF data assimilation cycles, the ensemble spread, ensemble mean accuracy, and BSSs for precipitation are improved, although the relative advantage of the LETKF as the initial-perturbation generator over MBD was not necessarily clear. LBPs in the EnKF cycles contribute not to the orthogonalisation but to preventing the underestimation of the forecast error near the lateral boundary. The accuracy of the LETKF analyses was compared with that of the mesoscale 4D-VAR analyses. With LBPs in the LETKF cycles, the RMSEs of the
Splitting K-symplectic methods for non-canonical separable Hamiltonian problems
Zhu, Beibei; Zhang, Ruili; Tang, Yifa; Tu, Xiongbiao; Zhao, Yue
2016-10-01
Non-canonical Hamiltonian systems have K-symplectic structures which are preserved by K-symplectic numerical integrators. There is no universal method to construct K-symplectic integrators for arbitrary non-canonical Hamiltonian systems. However, in many cases of interest, by using splitting, we can construct explicit K-symplectic methods for separable non-canonical systems. In this paper, we identify situations where splitting K-symplectic methods can be constructed. Comparative numerical experiments in three non-canonical Hamiltonian problems show that symmetric/non-symmetric splitting K-symplectic methods applied to the non-canonical systems are more efficient than the same-order Gauss' methods/non-symmetric symplectic methods applied to the corresponding canonicalized systems; for the non-canonical Lotka-Volterra model, the splitting algorithms behave better in efficiency and energy conservation than the K-symplectic method we construct via generating function technique. In our numerical experiments, the favorable energy conservation property of the splitting K-symplectic methods is apparent.
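For a concrete instance of a splitting scheme on a non-canonical system, consider the Lotka-Volterra model u' = u(v − 2), v' = v(1 − u), which has the Poisson structure matrix K = [[0, uv], [−uv, 0]] and the invariant H(u, v) = u − ln u + v − 2 ln v. Splitting H into a u-part and a v-part makes each subflow exactly solvable, and a Strang composition gives a second-order structure-preserving scheme. This is an illustration in the spirit of the paper, not its exact construction:

```python
import math

# Splitting integrator for the non-canonical Lotka-Volterra system.
# With H_A(u) = u - ln u the flow freezes u and moves v exactly;
# with H_B(v) = v - 2 ln v the flow freezes v and moves u exactly.
# Each subflow preserves the Poisson structure, so the composition does too.

def H(u, v):
    return u - math.log(u) + v - 2.0 * math.log(v)

def flow_A(u, v, t):          # H_A only: u frozen, v' = v*(1 - u)
    return u, v * math.exp((1.0 - u) * t)

def flow_B(u, v, t):          # H_B only: v frozen, u' = u*(v - 2)
    return u * math.exp((v - 2.0) * t), v

def strang_step(u, v, dt):    # symmetric (second-order) composition
    u, v = flow_A(u, v, dt / 2)
    u, v = flow_B(u, v, dt)
    return flow_A(u, v, dt / 2)

u, v = 1.5, 2.5
h0 = H(u, v)
dt = 0.05
for _ in range(20_000):       # integrate to t = 1000
    u, v = strang_step(u, v, dt)
drift = abs(H(u, v) - h0)     # energy error stays bounded, no secular drift
```

The favorable long-time energy behavior reported in the paper's experiments corresponds to `drift` oscillating at O(dt^2) rather than growing with the number of steps.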
Canonical density matrix perturbation theory.
Niklasson, Anders M N; Cawkwell, M J; Rubensson, Emanuel H; Rudberg, Elias
2015-12-01
Density matrix perturbation theory [Niklasson and Challacombe, Phys. Rev. Lett. 92, 193001 (2004)] is generalized to canonical (NVT) free-energy ensembles in tight-binding, Hartree-Fock, or Kohn-Sham density-functional theory. The canonical density matrix perturbation theory can be used to calculate temperature-dependent response properties from the coupled perturbed self-consistent field equations as in density-functional perturbation theory. The method is well suited to take advantage of sparse matrix algebra to achieve linear scaling complexity in the computational cost as a function of system size for sufficiently large nonmetallic materials and metals at high temperatures. PMID:26764847
EnsembleGASVR: A novel ensemble method for classifying missense single nucleotide polymorphisms
Rapakoulia, Trisevgeni
2014-04-26
Motivation: Single nucleotide polymorphisms (SNPs) are considered the most frequently occurring DNA sequence variations. Several computational methods have been proposed for classifying missense SNPs as neutral or disease-associated. However, existing computational approaches fail to select relevant features, choosing them arbitrarily without sufficient documentation. Moreover, they are hampered by missing values and imbalance between the learning datasets, and most of them do not support their predictions with confidence scores. Results: To overcome these limitations, a novel ensemble computational methodology is proposed. EnsembleGASVR is a two-step algorithm: in its first step it applies a novel evolutionary embedded algorithm to locate close-to-optimal Support Vector Regression models; in its second step, these models are combined to extract a universal predictor, which is less prone to overfitting, systematizes the rebalancing of the learning sets and uses an internal approach for solving the missing-values problem without loss of information. Confidence scores support all predictions, and the model can be tuned by modifying the classification thresholds. An extensive study was performed to collect the most relevant features for the problem of classifying SNPs, and a superset of 88 features was constructed. Experimental results show that the proposed framework outperforms well-known algorithms in terms of classification performance on the examined datasets. Finally, the proposed algorithmic framework was able to uncover the significant role of certain features, such as solvent accessibility, and the top-scored predictions were further validated by linking them with disease phenotypes. © The Author 2014.
Algebraic method for exact solution of canonical partition function in nuclear multifragmentation
Parvan, A S
2002-01-01
An algebraic method yielding an exact recursion formula for the calculation of the canonical partition function of non-interacting finite systems of particles obeying Bose-Einstein, Fermi-Dirac or Maxwell-Boltzmann statistics, or parastatistics, is derived. A new exactly solvable multifragmentation model with baryon and electric charge conservation laws is developed. Recursion relations for this model are presented that allow exact calculation of the canonical partition function for any of these statistics.
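The recursion underlying such algebraic methods can be sketched for non-interacting particles with single-particle energies ε_i: Z_N = (1/N) Σ_{k=1..N} (±1)^{k+1} S(k) Z_{N−k}, where S(k) = Σ_i e^{−kβε_i}, with + for bosons and − for fermions. A minimal sketch (the level energies below are made up; the paper's charge-conserving model is more elaborate):

```python
import math
from itertools import combinations

def canonical_Z(energies, beta, N, stat=-1):
    # Exact recursion for N non-interacting particles
    # (stat = -1: fermions, +1: bosons):
    #   Z_N = (1/N) * sum_{k=1..N} stat**(k+1) * S(k) * Z_{N-k},  Z_0 = 1,
    # with single-particle sums S(k) = sum_i exp(-k*beta*eps_i).
    S = [0.0] + [sum(math.exp(-k * beta * e) for e in energies)
                 for k in range(1, N + 1)]
    Z = [1.0]
    for n in range(1, N + 1):
        Z.append(sum(stat ** (k + 1) * S[k] * Z[n - k]
                     for k in range(1, n + 1)) / n)
    return Z[N]

# Cross-check against brute-force enumeration: 2 fermions in 3 levels.
eps, beta = [0.0, 1.0, 2.0], 1.0
exact = sum(math.exp(-beta * (eps[i] + eps[j]))
            for i, j in combinations(range(3), 2))
```

For two fermions the recursion reduces to Z_2 = (S(1)² − S(2))/2, which is exactly the sum over distinct level pairs, as the enumeration confirms.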
Black Hole Statistical Mechanics and The Angular Velocity Ensemble
Thomson, Mitchell
2012-01-01
A new ensemble - the angular velocity ensemble - is derived using Jaynes' method of maximising entropy subject to prior information constraints. The relevance of the ensemble to black holes is motivated by a discussion of external parameters in statistical mechanics and their absence from the Hamiltonian of general relativity. It is shown how this leads to difficulty in deriving entropy as a function of state and in recovering the first law of thermodynamics from the microcanonical and canonical ensembles applied to black holes.
Ensemble Methods in Data Mining Improving Accuracy Through Combining Predictions
Seni, Giovanni
2010-01-01
This book is aimed at novice and advanced analytic researchers and practitioners -- especially in Engineering, Statistics, and Computer Science. Those with little exposure to ensembles will learn why and how to employ this breakthrough method, and advanced practitioners will gain insight into building even more powerful models. Throughout, snippets of code in R are provided to illustrate the algorithms described and to encourage the reader to try the techniques. The authors are industry experts in data mining and machine learning who are also adjunct professors and popular speakers. Although e
Local polynomial method for ensemble forecast of time series
Directory of Open Access Journals (Sweden)
S. Regonda
2005-01-01
We present a nonparametric approach based on local polynomial regression for ensemble forecasting of time series. The state space is first reconstructed by embedding the univariate time series of the response variable in a space of dimension D with a delay time τ. To obtain a forecast from a given time point t, three steps are involved: (i) the current state of the system is mapped onto the state space as the feature vector; (ii) a small number K = α·n of neighbors of the feature vector (and their future evolution) are identified in the state space, where α is a fraction in (0,1] and n is the data length; and (iii) a polynomial of order p is fitted to the identified neighbors, which is then used for prediction. A suite of parameter combinations (D, τ, α, p) is selected based on an objective criterion, the Generalized Cross Validation (GCV). All of the selected parameter combinations are then used to issue a T-step iterated forecast starting from the current time t, thus generating an ensemble forecast from which the forecast probability density function (PDF) can be obtained. The ensemble approach improves upon the traditional method of providing a single mean forecast by quantifying the forecast uncertainty, and for short, noisy data it can provide better forecasts. We demonstrate the utility of this approach on two synthetic (Henon and Lorenz attractors) and two real data sets (Great Salt Lake bi-weekly volume and the NINO3 index). This framework can also be used to forecast a vector of response variables based on a vector of predictors.
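Steps (i)-(iii) can be sketched with a local linear fit (p = 1); the embedding and neighbourhood parameters below are illustrative, and the full method would additionally select (D, τ, α, p) by GCV and iterate over all selected combinations to build the ensemble:

```python
import numpy as np

def local_poly_forecast(series, D=3, tau=1, alpha=0.5, steps=5):
    """Iterated forecast by local linear (p = 1) regression in a
    delay-embedded state space; a sketch of steps (i)-(iii), with
    hypothetical default parameters."""
    hist = list(np.asarray(series, dtype=float))
    preds = []
    for _ in range(steps):
        arr = np.array(hist)
        m = (D - 1) * tau
        # (i) embed: each row is a state vector, y its one-step successor
        X = np.array([arr[i - m : i + 1 : tau] for i in range(m, len(arr) - 1)])
        y = arr[m + 1 : len(arr)]
        feat = arr[len(arr) - 1 - m :: tau]          # current feature vector
        # (ii) K = alpha * n nearest neighbours of the feature vector
        K = max(int(alpha * len(X)), D + 2)
        idx = np.argsort(np.linalg.norm(X - feat, axis=1))[:K]
        # (iii) fit a local linear map and predict
        A = np.hstack([np.ones((K, 1)), X[idx]])
        coef, *_ = np.linalg.lstsq(A, y[idx], rcond=None)
        nxt = coef[0] + coef[1:] @ feat
        preds.append(nxt)
        hist.append(nxt)                             # iterate the forecast
    return np.array(preds)

# Demonstration on a deterministic series (a sampled sinusoid).
series = np.sin(0.1 * np.arange(200))
preds = local_poly_forecast(series, D=3, tau=1, alpha=0.5, steps=5)
```

Running the fit for every GCV-selected (D, τ, α, p) combination, rather than one fixed setting, yields the ensemble of trajectories from which the forecast PDF is formed.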
A canonical correlation analysis based method for contamination event detection in water sources.
Li, Ruonan; Liu, Shuming; Smith, Kate; Che, Han
2016-06-15
In this study, a general framework integrating a data-driven estimation model is employed for contamination event detection in water sources. Sequential canonical correlation coefficients are updated in the model using multivariate water quality time series; canonical correlation analysis is used to study the interplay between two sets of water quality parameters. The model is assessed by precision, recall and F-measure, and is tested using data from a laboratory contaminant injection experiment. It could detect a contamination event 1 minute after the introduction of a 1.600 mg l⁻¹ acrylamide solution, and with optimized parameter values it correctly detects 97.50% of all contamination events with no false alarms. The robustness of the method can be explained using the Bauer-Fike theorem. PMID:27264637
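The canonical correlations at the heart of such a detector can be computed from the SVD of the whitened cross-covariance of the two parameter sets; a drop or shift in the leading coefficients over a sliding window would then flag an anomaly. A minimal sketch on synthetic data (the sequential updating and the event-decision logic of the paper are omitted):

```python
import numpy as np

def canonical_correlations(X, Y, eps=1e-8):
    # Canonical correlations between two multivariate series X (n x p)
    # and Y (n x q): whiten each block with a Cholesky factor of its
    # covariance, then take singular values of the cross-covariance.
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = len(X)
    Cxx = X.T @ X / n + eps * np.eye(X.shape[1])   # ridge for stability
    Cyy = Y.T @ Y / n + eps * np.eye(Y.shape[1])
    Cxy = X.T @ Y / n
    ix = np.linalg.inv(np.linalg.cholesky(Cxx))
    iy = np.linalg.inv(np.linalg.cholesky(Cyy))
    return np.linalg.svd(ix @ Cxy @ iy.T, compute_uv=False)

# Two sensor groups sharing one latent water-quality signal z.
rng = np.random.default_rng(0)
z = rng.normal(size=2000)
X = np.column_stack([z + 0.01 * rng.normal(size=2000), rng.normal(size=2000)])
Y = np.column_stack([z + 0.01 * rng.normal(size=2000), rng.normal(size=2000)])
rho = canonical_correlations(X, Y)
```

The leading coefficient is near one for the shared signal and near zero for the independent channels; a contaminant that disrupts the usual interplay would perturb this pattern.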
Adaptive error covariances estimation methods for ensemble Kalman filters
Energy Technology Data Exchange (ETDEWEB)
Zhen, Yicun, E-mail: zhen@math.psu.edu [Department of Mathematics, The Pennsylvania State University, University Park, PA 16802 (United States); Harlim, John, E-mail: jharlim@psu.edu [Department of Mathematics and Department of Meteorology, The Pennsylvania State University, University Park, PA 16802 (United States)
2015-08-01
This paper presents a computationally fast algorithm for estimating both the system and observation noise covariances of nonlinear dynamics, which can be used in an ensemble Kalman filtering framework. The new method is a modification of Belanger's recursive method that avoids the expensive cost of inverting error covariance matrices of products of innovation processes of different lags when the number of observations becomes large. When only products of innovation processes up to one lag are used, the computational cost is comparable to the recently proposed method of Berry and Sauer. However, our method is more flexible since it allows the use of information from products of innovation processes of more than one lag. Extensive numerical comparisons between the proposed method and both the original Belanger and the Berry–Sauer schemes are shown in various examples, ranging from low-dimensional linear and nonlinear systems of SDEs to the 40-dimensional stochastically forced Lorenz-96 model. Our numerical results suggest that the proposed scheme is as accurate as the original Belanger scheme on low-dimensional problems and has a wider range of accurate estimates than the Berry–Sauer method on the L-96 example.
Extending the square root method to account for additive forecast noise in ensemble methods
Raanes, Patrick N; Bertino, Laurent
2015-01-01
A square root approach is considered for the problem of accounting for model noise in the forecast step of the ensemble Kalman filter (EnKF) and related algorithms. The primary aim is to replace the method of simulated, pseudo-random, additive noise so as to eliminate the associated sampling errors. The core method is based on the analysis step of ensemble square root filters, and consists in the deterministic computation of a transform matrix. The theoretical advantages regarding dynamical consistency are surveyed, applying equally well to the square root method in the analysis step. A fundamental problem due to the limited size of the ensemble subspace is discussed, and novel solutions that complement the core method are suggested and studied. Benchmarks from twin experiments with simple, low-order dynamics indicate improved performance over standard approaches such as additive, simulated noise and multiplicative inflation.
Microcanonical ensemble simulation method applied to discrete potential fluids.
Sastre, Francisco; Benavides, Ana Laura; Torres-Arenas, José; Gil-Villegas, Alejandro
2015-09-01
In this work we extend the applicability of the microcanonical ensemble simulation method, originally proposed to study the Ising model [A. Hüller and M. Pleimling, Int. J. Mod. Phys. C 13, 947 (2002)], to the case of simple fluids. An algorithm is developed that measures the transition-rate probabilities between macroscopic states; its advantage over conventional Monte Carlo NVT (MC-NVT) simulations is that a continuous range of temperatures is covered in a single run. For a given density, the new algorithm provides the inverse temperature, which can be parametrized as a function of the internal energy, and the isochoric heat capacity is then evaluated through a numerical derivative. As an illustrative example we consider a fluid composed of particles interacting via a square-well (SW) pair potential of variable range. Equilibrium internal energies and isochoric heat capacities are obtained with very high accuracy compared with data from MC-NVT simulations. These results are important in the context of applying the Hüller-Pleimling method to discrete-potential systems, which are based on generalizations of the SW and square-shoulder fluid properties. PMID:26465582
Optimized Audio Classification and Segmentation Algorithm by Using Ensemble Methods
Directory of Open Access Journals (Sweden)
Saadia Zahid
2015-01-01
Audio segmentation is a basis for multimedia content analysis, which is one of the most important and widely used applications today. An optimized audio classification and segmentation algorithm is presented in this paper that segments a superimposed audio stream, on the basis of its content, into four main audio types: pure speech, music, environment sound, and silence. The proposed algorithm preserves important audio content, reduces the misclassification rate without requiring a large amount of training data, handles noise, and is suitable for real-time applications; noise in an audio stream is segmented out as environment sound. A hybrid classification approach is used: bagged support vector machines (SVMs) with artificial neural networks (ANNs). The audio stream is first classified into speech and non-speech segments by bagged SVMs; the non-speech segment is further classified into music and environment sound by ANNs; and finally the speech segment is classified into silence and pure speech by a rule-based classifier. Minimal data is used for training the classifiers, ensemble methods are used to minimize the misclassification rate, and approximately 98% accurate segmentation is obtained. The resulting algorithm is fast and efficient enough to be used with real-time multimedia applications.
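The bagging-with-majority-vote idea can be sketched on a synthetic one-dimensional feature; threshold stumps stand in for the paper's SVM and ANN base learners, and the feature values below are made up:

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic one-dimensional audio feature (e.g. a zero-crossing-rate-like
# statistic; the numbers are illustrative): class 0 = speech, class 1 = music.
X = np.concatenate([rng.normal(0.30, 0.05, 200), rng.normal(0.60, 0.05, 200)])
y = np.concatenate([np.zeros(200, bool), np.ones(200, bool)])

def train_stump(Xb, yb):
    # Weak learner: best single threshold with optional polarity flip
    # (the paper bags SVMs instead of stumps).
    best_err, best = np.inf, (0.0, False)
    for t in np.unique(Xb):
        for flip in (False, True):
            err = np.mean(((Xb > t) ^ flip) != yb)
            if err < best_err:
                best_err, best = err, (t, flip)
    return best

# Bagging: train each weak learner on a bootstrap resample,
# then predict by majority vote over the ensemble.
models = []
for _ in range(15):
    idx = rng.integers(0, len(X), len(X))
    models.append(train_stump(X[idx], y[idx]))

votes = np.mean([(X > t) ^ f for t, f in models], axis=0)
accuracy = np.mean((votes > 0.5) == y)
```

Averaging votes over bootstrap-trained learners is what reduces the misclassification rate relative to any single learner, which is the property the abstract relies on.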
Sparse canonical methods for biological data integration: application to a cross-platform study
Directory of Open Access Journals (Sweden)
Robert-Granié Christèle
2009-01-01
Abstract Background In the context of systems biology, few sparse approaches have been proposed so far to integrate several data sets. It is however an important and fundamental issue that will be widely encountered in post-genomic studies, when simultaneously analyzing transcriptomics, proteomics and metabolomics data using different platforms, so as to understand the mutual interactions between the different data sets. In this high-dimensional setting, variable selection is crucial to give interpretable results. We focus on a sparse Partial Least Squares approach (sPLS) to handle two-block data sets, where the relationship between the two types of variables is known to be symmetric. Sparse PLS has been developed for both a regression and a canonical correlation framework and includes a built-in procedure to select variables while integrating data. To illustrate the canonical mode approach, we analyzed the NCI60 data sets, where two different platforms (cDNA and Affymetrix chips) were used to study the transcriptome of sixty cancer cell lines. Results We compare the results obtained with two other sparse or related canonical correlation approaches: CCA with Elastic Net penalization (CCA-EN) and Co-Inertia Analysis (CIA). The latter does not include a built-in procedure for variable selection and requires a two-step analysis. We stress the lack of statistical criteria to evaluate canonical correlation methods, which makes biological interpretation absolutely necessary to compare the different gene selections. We also propose comprehensive graphical representations of both samples and variables to facilitate the interpretation of the results. Conclusion sPLS and CCA-EN selected highly relevant genes and complementary findings from the two data sets, which enabled a detailed understanding of the molecular characteristics of several groups of cell lines. These two approaches were found to bring similar results, although they highlighted the same
Directory of Open Access Journals (Sweden)
Heinz Toparkus
2014-04-01
In this paper we consider first-order systems with constant coefficients for two real-valued functions of two real variables. This is both a problem in its own right and an alternative view of the classical linear partial differential equations of second order with constant coefficients. The classification of the systems is carried out using elementary methods of linear algebra. Each type has its special canonical form in the associated characteristic coordinate system. Initial value problems can then be formulated in appropriate basic domains, and solutions can be sought by means of transform methods.
ENSEMBLE methods to reconcile disparate national long range dispersion forecasts
DEFF Research Database (Denmark)
Mikkelsen, Torben; Galmarini, S.; Bianconi, R.;
2003-01-01
ENSEMBLE addresses the problem of achieving a common coherent strategy across European national emergency management when national long-range dispersion forecasts differ from one another during an accidental atmospheric release of radioactive material. A series of new decision-making "ENSEMBLE" procedures and Web-based software evaluation and exchange tools have been created for real-time reconciliation and harmonisation of real-time dispersion forecasts from meteorological and emergency centres across Europe during an accident. The new ENSEMBLE software tools are available to participating national emergency and meteorological forecasting centres.
Hybrid Levenberg-Marquardt and weak-constraint ensemble Kalman smoother method
Mandel, J.; Bergou, E.; Gürol, S.; Gratton, S.; Kasanický, I.
2016-03-01
The ensemble Kalman smoother (EnKS) is used as a linear least-squares solver in the Gauss-Newton method for the large nonlinear least-squares system in incremental 4DVAR. The ensemble approach is naturally parallel over the ensemble members and no tangent or adjoint operators are needed. Furthermore, adding a regularization term results in replacing the Gauss-Newton method, which may diverge, by the Levenberg-Marquardt method, which is known to be convergent. The regularization is implemented efficiently as an additional observation in the EnKS. The method is illustrated on the Lorenz 63 model and a two-level quasi-geostrophic model.
ENSEMBLE methods to reconcile disparate national long range dispersion forecasting
Energy Technology Data Exchange (ETDEWEB)
Mikkelsen, T.; Galmarini, S.; Bianconi, R.; French, S. (eds.)
2003-11-01
ENSEMBLE is a web-based decision support system for real-time exchange and evaluation of national long-range dispersion forecasts of nuclear releases with cross-boundary consequences. The system is developed with the purpose of reconciling disparate national forecasts for long-range dispersion. ENSEMBLE addresses the problem of achieving a common coherent strategy across European national emergency management when national long-range dispersion forecasts differ from one another during an accidental atmospheric release of radioactive material. A series of new decision-making 'ENSEMBLE' procedures and Web-based software evaluation and exchange tools have been created for real-time reconciliation and harmonisation of real-time dispersion forecasts from meteorological and emergency centres across Europe during an accident. The new ENSEMBLE software tools are available to participating national emergency and meteorological forecasting centres, which may choose to integrate them directly into operational emergency information systems, or possibly use them as a basis for future system development. (au)
International Nuclear Information System (INIS)
The motion of charged particles in a magnetized plasma column, such as that of a magnetic mirror trap or a tokamak, is determined in the framework of canonical perturbation theory through a method of variation of constants which preserves energy conservation and symmetry invariance. The choice of a frame of coordinates close to that of the magnetic coordinates allows a relatively precise determination of the guiding-center motion with a low-order approximation in the adiabatic parameter. A Hamiltonian formulation of the equations of motion is obtained.
Development of a regional ensemble prediction method for probabilistic weather prediction
International Nuclear Information System (INIS)
A regional ensemble prediction method has been developed to provide probabilistic weather prediction using a numerical weather prediction model. To obtain perturbations consistent with the synoptic weather pattern, both initial and lateral boundary perturbations were given by differences between the control and ensemble members of the Japan Meteorological Agency (JMA)'s operational one-week ensemble forecast. The method provides multiple ensemble members with a horizontal resolution of 15 km for 48 hours, based on downscaling of the JMA's operational global forecast together with the perturbations. The ensemble prediction was examined for the heavy snowfall event in the Kanto area on January 14, 2013. The results showed that the predictions represent different features of the high-resolution spatiotemporal distribution of precipitation, reflecting the intensity and location of the extra-tropical cyclone in each ensemble member. Although the ensemble prediction has model biases in the means and variances of some variables, such as wind speed and solar radiation, it has the potential to add probabilistic information to a deterministic prediction. (author)
A comparison of ensemble post-processing methods for extreme events
Williams, Robin; Ferro, Chris; Kwasniok, Frank
2015-04-01
Ensemble post-processing methods are used in operational weather forecasting to form probability distributions that represent forecast uncertainty. Several such methods have been proposed in the literature, including logistic regression, ensemble dressing, Bayesian model averaging and non-homogeneous Gaussian regression. We conduct an imperfect model experiment with the Lorenz 1996 model to investigate the performance of these methods, especially when forecasting the occurrence of rare extreme events. We show how flexible bias-correction schemes can be incorporated into these post-processing methods, and that allowing the bias correction to depend on the ensemble mean can yield considerable improvements in skill when forecasting extreme events. In the Lorenz 1996 setting, we find that ensemble dressing, Bayesian model averaging and non-homogeneous Gaussian regression perform similarly, while logistic regression performs less well.
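Non-homogeneous Gaussian regression, one of the post-processing methods compared above, models the predictive distribution as Gaussian with mean linear in the ensemble mean and variance linear in the ensemble variance; this is exactly the kind of ensemble-mean-dependent bias correction the abstract highlights. A minimal sketch with a moment-based least-squares fit (the literature more commonly minimizes the CRPS; all data below are synthetic):

```python
import numpy as np

rng = np.random.default_rng(3)

def fit_ngr(ens, obs):
    # Simplified non-homogeneous Gaussian regression:
    #   predictive mean     a + b * ensemble_mean
    #   predictive variance c + d * ensemble_variance
    # Fitted by plain least squares for brevity (a sketch, not the
    # CRPS-minimizing estimator used operationally).
    m, v = ens.mean(axis=1), ens.var(axis=1)
    (a, b), *_ = np.linalg.lstsq(
        np.column_stack([np.ones_like(m), m]), obs, rcond=None)
    r2 = (obs - (a + b * m)) ** 2
    (c, d), *_ = np.linalg.lstsq(
        np.column_stack([np.ones_like(v), v]), r2, rcond=None)
    return a, b, c, d

# Synthetic forecast-observation archive with a conditional bias:
# obs = 2 + 0.5 * signal, ensemble members scatter around the signal.
n = 5000
signal = rng.normal(0.0, 2.0, n)
ens = signal[:, None] + rng.normal(0.0, 1.0, (n, 10))
obs = 2.0 + 0.5 * signal + rng.normal(0.0, 0.4, n)
a, b, c, d = fit_ngr(ens, obs)
```

Because b is fitted rather than fixed at one, the correction automatically removes the conditional (ensemble-mean-dependent) bias, which is what improves skill for extremes in the abstract's experiments.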
Evaluation of the thermodynamics of a four level system using canonical density matrix method
Directory of Open Access Journals (Sweden)
Awoga Oladunjoye A.
2013-02-01
We consider a four-level system consisting of two subsystems coupled by a weak interaction, in thermal equilibrium. The thermodynamics of the system, namely the internal energy, free energy, entropy and heat capacity, are evaluated using the canonical density matrix by two methods: first by the Kronecker product method, and second by treating the subsystems separately and adding the evaluated thermodynamic properties of each. Both methods yield the same result; the results obey the laws of thermodynamics and agree with those obtained earlier. They also show that each level of the subsystems introduces a new degree of freedom and increases the entropy of the entire system. We further find that the four-level system predicts a linear relationship between heat capacity and temperature at very low temperatures, just as in metals. Our numerical results show the same trend.
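The equivalence of the two evaluation routes can be sketched directly: in the limit of vanishing (weak) coupling the Kronecker-product spectrum factorizes, so the composite thermodynamics equals the sum over subsystems. The level spacings below are hypothetical (units with k_B = 1):

```python
import numpy as np

def thermo(E, beta):
    # Canonical thermodynamics from an energy spectrum E (k_B = 1).
    E = np.asarray(E, float)
    w = np.exp(-beta * E)
    Z = w.sum()
    U = (E * w).sum() / Z        # internal energy
    F = -np.log(Z) / beta        # free energy
    S = beta * (U - F)           # entropy
    return Z, U, F, S

# Two two-level subsystems with hypothetical level spacings.
E1, E2 = np.array([0.0, 1.0]), np.array([0.0, 1.5])
beta = 0.7

# Method 1: Kronecker-product Hamiltonian -> four-level composite spectrum.
H = np.kron(np.diag(E1), np.eye(2)) + np.kron(np.eye(2), np.diag(E2))
Zc, Uc, Fc, Sc = thermo(np.diag(H), beta)

# Method 2: treat the subsystems separately and add their properties.
Z1, U1, F1, S1 = thermo(E1, beta)
Z2, U2, F2, S2 = thermo(E2, beta)
```

Since the non-interacting partition function factorizes (Z = Z1·Z2), the extensive quantities U, F, and S add exactly, reproducing the agreement between the two methods reported in the abstract.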
Alba, David; Crater, Horace W.; Lusanna, Luca
2012-01-01
A new formulation of relativistic classical mechanics allows a revisiting of old unsolved problems in relativistic kinetic theory and in relativistic statistical mechanics. In particular a definition of the relativistic micro-canonical partition function is given strictly in terms of the Poincaré generators of an interacting N-particle system both in the inertial and non-inertial rest frames. The non-relativistic limit allows a definition of both the inertial and non-inertial micro-canonica...
Hybrid Modeling of Flotation Height in Air Flotation Oven Based on Selective Bagging Ensemble Method
Directory of Open Access Journals (Sweden)
Shuai Hou
2013-01-01
Accurate prediction of the flotation height is essential for precise control of the air flotation oven process, thereby avoiding scratches and improving production quality. In this paper, a hybrid flotation height prediction model is developed. First, a simplified mechanism model is introduced to capture the main dynamic behavior of the process. Then, to compensate for the modeling errors between the actual system and the mechanism model, an error compensation model based on the proposed selective bagging ensemble method is introduced to boost prediction accuracy. In this framework, negative correlation learning and a genetic algorithm are imposed on the bagging ensemble to promote cooperation among base learners. As a result, a subset of base learners can be selected from the original bagging ensemble to compose a selective bagging ensemble that outperforms the original in prediction accuracy with a compact ensemble size. Simulation results indicate that the proposed hybrid model gives better prediction performance for flotation height than the other algorithms considered.
Clustered iterative stochastic ensemble method for multi-modal calibration of subsurface flow models
Elsheikh, Ahmed H.
2013-05-01
A novel multi-modal parameter estimation algorithm is introduced. Parameter estimation is an ill-posed inverse problem that may admit many different solutions, owing to the limited amount of measured data available to constrain the inverse problem. The proposed multi-modal model calibration algorithm uses an iterative stochastic ensemble method (ISEM) for parameter estimation. ISEM employs an ensemble of directional derivatives within a Gauss-Newton iteration for nonlinear parameter estimation, and is augmented with a clustering step based on the k-means algorithm to form sub-ensembles. These sub-ensembles are used to explore different parts of the search space. Clusters are updated at regular intervals to allow merging of close clusters approaching the same local minimum. Numerical testing demonstrates the potential of the proposed algorithm for multi-modal nonlinear parameter estimation in subsurface flow models. © 2013 Elsevier B.V.
Directory of Open Access Journals (Sweden)
Ju Hyoung Lee
2015-12-01
Bias correction is a very important pre-processing step in satellite data assimilation analysis, as data assimilation itself cannot circumvent satellite biases. We introduce a retrieval-algorithm-specific and spatially heterogeneous Instantaneous Field of View (IFOV) bias correction method for Soil Moisture and Ocean Salinity (SMOS) soil moisture. To the best of our knowledge, this is the first paper to present a probabilistic representation of SMOS soil moisture using retrieval ensembles. We illustrate that retrieval ensembles effectively mitigated the overestimation problem of SMOS soil moisture arising from brightness temperature errors over West Africa in a computationally efficient way (ensemble size: 12, no time-integration). In contrast, the existing method of Cumulative Distribution Function (CDF) matching considerably increased the SMOS biases, due to its reliance on imperfect reference data. Validation at two semi-arid sites, Benin (a moderately wet and vegetated area) and Niger (dry, sandy bare soils), showed that the SMOS errors arising from rain and vegetation attenuation were appropriately corrected by the ensemble approach. In Benin, the Root Mean Square Errors (RMSEs) decreased from 0.1248 m3/m3 for CDF matching to 0.0678 m3/m3 for the proposed ensemble approach; in Niger, from 0.14 m3/m3 for CDF matching to 0.045 m3/m3 for the ensemble approach.
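The CDF matching baseline criticized above is plain quantile matching: each satellite retrieval is replaced by the reference value at the same empirical non-exceedance probability. A minimal sketch on synthetic soil-moisture numbers (illustrative values, not SMOS data):

```python
import numpy as np

rng = np.random.default_rng(11)

def cdf_match(sat, ref):
    # Classical CDF matching: map each satellite value to the reference
    # value with the same empirical non-exceedance probability.
    sat = np.asarray(sat, float)
    ranks = np.argsort(np.argsort(sat))          # rank of each sample
    p = (ranks + 0.5) / len(sat)                 # empirical CDF positions
    return np.quantile(ref, p)

# Synthetic soil-moisture series (m3/m3, illustrative): the "satellite"
# series is a dry-biased, dynamically damped copy of the reference.
ref = rng.normal(0.25, 0.05, 2000).clip(0.01, 0.5)
sat = 0.6 * ref + 0.05
matched = cdf_match(sat, ref)
```

After matching, the first two moments of the corrected series agree with the reference by construction; this is also why the technique inherits any error in the reference data, the weakness the abstract points out.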
A Synergy Method to Improve Ensemble Weather Predictions and Differential SAR Interferograms
Ulmer, Franz-Georg; Adam, Nico
2015-11-01
Compensation of atmospheric effects is essential for mm-level sensitivity in differential interferometric synthetic aperture radar (DInSAR) techniques. Numerical weather predictions are used to compensate for these disturbances, allowing a reduction in the number of required radar scenes. In practice, predictions are solutions of partial differential equations, which can never be precise due to model or initialisation uncertainties. To deal with the chaotic nature of the solutions, ensembles of predictions are computed. From a stochastic point of view, the ensemble mean is the expected prediction if all ensemble members are equally likely; this corresponds to the typical assumption that all members are physically correct solutions of the set of partial differential equations. DInSAR allows adding to this knowledge: observations of refractivity can be used to check the likelihood of a solution and to weight the respective ensemble member, yielding a better expected prediction. The objective of the paper is to show the synergy between ensemble weather predictions and differential interferometric atmospheric correction. We demonstrate a new method, first, to better compensate for the atmospheric effect in DInSAR and, second, to estimate an improved numerical weather prediction (NWP) ensemble mean. Practically, a least squares fit of predicted atmospheric effects with respect to a differential interferogram is computed; the coefficients of this fit are interpreted as likelihoods and used as weights for the weighted ensemble mean. The derived weighted prediction has minimal expected quadratic error, which is a better solution than the straightforward best-fitting ensemble member. Furthermore, we propose an extension of the algorithm which avoids the systematic bias caused by deformations, making the technique suitable for time series analysis, e.g. persistent scatterer interferometry (PSI). We validate the algorithm using the well known
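The weighting step can be sketched as follows: the members' predicted delay fields are fitted to the interferometric observation by least squares, and the (clipped, normalized) coefficients weight the ensemble mean. The toy fields below are synthetic, and the deformation-bias extension mentioned in the abstract is omitted:

```python
import numpy as np

rng = np.random.default_rng(5)

def weighted_ensemble_mean(members, obs):
    # Least-squares fit of the predicted fields (rows) to the observed
    # field; the clipped, normalized coefficients act as likelihood
    # weights for the ensemble mean.
    P = np.asarray(members, float)              # (n_members, n_pixels)
    w, *_ = np.linalg.lstsq(P.T, obs, rcond=None)
    w = np.clip(w, 0.0, None)                   # likelihoods are non-negative
    w /= w.sum()
    return w @ P, w

# Toy "atmospheric delay" fields: member 0 is close to the truth,
# members 1-2 carry large errors; the DInSAR observation is the truth.
truth = rng.normal(size=500)
members = np.stack([truth + 0.01 * rng.normal(size=500),
                    truth + 1.0 * rng.normal(size=500),
                    truth + 1.0 * rng.normal(size=500)])
wmean, w = weighted_ensemble_mean(members, truth)
```

The weighted mean concentrates on the member consistent with the observation, so its expected quadratic error is smaller than that of the plain (equal-weight) ensemble mean, which is the paper's central claim.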
ENSO-conditioned weather resampling method for seasonal ensemble streamflow prediction
Beckers, Joost V.L.; Weerts, Albrecht H.; Tijdeman, Erik; Welles, Edwin
2016-01-01
Oceanic-atmospheric climate modes, such as El Niño-Southern Oscillation (ENSO), are known to affect the local streamflow regime in many rivers around the world. A new method is proposed to incorporate climate mode information into the well-known ensemble streamflow prediction (ESP) method for sea
Energy Technology Data Exchange (ETDEWEB)
Filatov, Michael, E-mail: mike.filatov@gmail.com [Department of Chemistry, Southern Methodist University, 3215 Daniel Avenue, Dallas, Texas 75275-0314 (United States); Huix-Rotllant, Miquel; Burghardt, Irene [Institute of Physical and Theoretical Chemistry, Goethe University Frankfurt, Max-von-Laue-Str. 7, D-60438 Frankfurt am Main (Germany)
2015-05-14
State-averaged (SA) variants of the spin-restricted ensemble-referenced Kohn-Sham (REKS) method, SA-REKS and state-interaction (SI)-SA-REKS, implement ensemble density functional theory for variationally obtaining excitation energies of molecular systems. In this work, the existing version of the SA-REKS method, which included only one excited state in the ensemble averaging, is extended by adding more excited states to the averaged energy functional. A general strategy for extending REKS-type methods to larger ensembles of ground and excited states is outlined and implemented in extended versions of the SA-REKS and SI-SA-REKS methods. The newly developed methods are tested by calculating several excited states of ground-state multi-reference systems, such as the dissociating hydrogen molecule, and excited states of donor–acceptor molecular systems. For the hydrogen molecule, the new method correctly reproduces the distance dependence of the lowest excited-state energies and describes an avoided crossing between the doubly excited and singly excited states. For the bithiophene–perylenediimide stacked complex, the SI-SA-REKS method correctly describes the crossing between the locally excited state and the charge-transfer excited state and yields vertical excitation energies in good agreement with ab initio wavefunction methods.
Method to detect gravitational waves from an ensemble of known pulsars
Fan, Xilong; Messenger, Christopher
2016-01-01
Combining information from weak sources, such as known pulsars, is an attractive approach to improving gravitational wave detection efficiency. We propose an optimal statistic for a general ensemble of signals and apply it to an ensemble of known pulsars. Our method combines $\mathcal F$-statistic values from individual pulsars using weights proportional to each pulsar's expected optimal signal-to-noise ratio to improve the detection efficiency. We also point out that to detect at least one pulsar within an ensemble, different thresholds should be designed for each source based on the expected signal strength. The performance of our proposed detection statistic is demonstrated using simulated sources, with the assumption that all pulsars' ellipticities belong to a common (yet unknown) distribution. Compared with an equal-weight strategy and with individual source approaches, we show that the weighted combination of all known pulsars, where weights are assigned based on the pulsars' known informa...
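The weighting scheme described in this abstract can be sketched in a few lines. The function name and the plain weighted sum below are illustrative stand-ins for the paper's optimal statistic, and the input values are hypothetical:

```python
import numpy as np

def weighted_ensemble_statistic(f_stats, expected_snr):
    """Combine per-pulsar detection statistics into a single ensemble
    statistic, weighting each source by its expected optimal
    signal-to-noise ratio (illustrative weighted sum only)."""
    f_stats = np.asarray(f_stats, dtype=float)
    w = np.asarray(expected_snr, dtype=float)
    w = w / w.sum()                    # normalize weights to sum to 1
    return float(np.sum(w * f_stats))

# A source with a large expected SNR dominates the combined statistic.
stat = weighted_ensemble_statistic([10.0, 2.0, 2.0], [5.0, 1.0, 1.0])
```

Because the loud source carries most of the weight, the combined statistic sits well above the equal-weight average of the same three values.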
An iterative stochastic ensemble method for parameter estimation of subsurface flow models
Elsheikh, Ahmed H.
2013-06-01
Parameter estimation for subsurface flow models is an essential step for maximizing the value of numerical simulations for future prediction and the development of effective control strategies. We propose the iterative stochastic ensemble method (ISEM) as a general method for parameter estimation based on stochastic estimation of gradients using an ensemble of directional derivatives. ISEM eliminates the need for adjoint coding and deals with the numerical simulator as a black box. The proposed method employs directional derivatives within a Gauss-Newton iteration. The update equation in ISEM resembles the update step in the ensemble Kalman filter; however, the inverse of the output covariance matrix in ISEM is regularized using standard truncated singular value decomposition or Tikhonov regularization. We also investigate the performance of a set of shrinkage-based covariance estimators within ISEM. The proposed method is successfully applied to several nonlinear parameter estimation problems for subsurface flow models. The efficiency of the proposed algorithm is demonstrated by the small size of the utilized ensembles and by the error convergence rates. © 2013 Elsevier Inc.
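A minimal sketch of the idea, assuming a black-box forward model that returns residuals: directional derivatives are estimated by finite differences along random ensemble directions, and the Gauss-Newton-like update is regularized by truncated SVD. Function and parameter names (`isem_step`, `n_dirs`, `svd_tol`) are invented for illustration, not taken from the paper:

```python
import numpy as np

def isem_step(forward, theta, n_dirs=20, eps=1e-3, svd_tol=1e-8, rng=None):
    """One Gauss-Newton-like update from an ensemble of directional
    derivatives of a black-box residual function, regularized by
    truncated SVD (illustrative sketch of the ISEM idea)."""
    rng = np.random.default_rng(rng)
    theta = np.asarray(theta, dtype=float)
    r0 = np.asarray(forward(theta), dtype=float)       # current residuals
    D = rng.standard_normal((n_dirs, theta.size))      # random directions
    # Finite-difference directional derivatives of the residuals
    JD = np.array([(np.asarray(forward(theta + eps * d), dtype=float) - r0) / eps
                   for d in D])
    # Solve JD.T @ alpha ~= -r0 by truncated-SVD least squares
    U, s, Vt = np.linalg.svd(JD.T, full_matrices=False)
    keep = s > svd_tol * s[0]
    alpha = Vt[keep].T @ ((U[:, keep].T @ (-r0)) / s[keep])
    return theta + D.T @ alpha                         # step in direction space
```

For an affine forward model the finite-difference derivatives are exact, so a single step drives the residual to zero whenever the random directions span the parameter space.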
An Introduction to Ensemble Methods for Data Analysis (Revised July, 2004)
Berk, Richard
2004-01-01
This paper provides an introduction to ensemble statistical procedures as a special case of algorithmic methods. The discussion begins with classification and regression trees (CART) as a didactic device to introduce many of the key issues. Following the material on CART is a consideration of cross-validation, bagging, random forests and boosting. Major points are illustrated with analyses of real data.
Nasseri, M.; Zahraie, B.; Ajami, N. K.; Solomatine, D. P.
2014-04-01
Multi-model (ensemble, or committee) techniques have been shown to be an effective way to improve hydrological prediction performance and provide uncertainty information. This paper presents two novel multi-model ensemble techniques, one probabilistic, the Modified Bootstrap Ensemble Model (MBEM), and one possibilistic, the FUzzy C-means Ensemble based on data Pattern (FUCEP). The paper also explores utilization of the Ordinary Kriging (OK) method as a multi-model combination scheme for hydrological simulation/prediction. These techniques are compared against Bayesian Model Averaging (BMA) and Weighted Average (WA) methods to demonstrate their effectiveness. The techniques are applied to three monthly water balance models used to generate streamflow simulations for two mountainous basins in southwestern Iran. For both basins, the results demonstrate that MBEM and FUCEP generate more skillful and reliable probabilistic predictions, outperforming all the other techniques. We have also found that OK did not demonstrate any improved skill as a simple combination method over the WA scheme for either basin.
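As context for the WA baseline mentioned above, a simple skill-weighted multi-model combination (weights inversely proportional to each member's historical RMSE) might look like this. It is a generic sketch, not the MBEM or FUCEP algorithms of the paper:

```python
import numpy as np

def weighted_average_combination(preds, obs_hist, preds_hist):
    """Combine member forecasts with weights inversely proportional to
    each member's historical RMSE (a simple WA-style scheme)."""
    preds_hist = np.asarray(preds_hist, dtype=float)   # (n_members, n_times)
    obs_hist = np.asarray(obs_hist, dtype=float)
    rmse = np.sqrt(((preds_hist - obs_hist) ** 2).mean(axis=1))
    w = 1.0 / (rmse + 1e-12)           # small offset guards a perfect member
    w /= w.sum()
    return float(np.dot(w, np.asarray(preds, dtype=float)))
```

A member with near-zero historical error receives nearly all the weight, so the combined forecast tracks that member.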
Application of the Multimodel Ensemble Kalman Filter Method in Groundwater System
Directory of Open Access Journals (Sweden)
Liang Xue
2015-02-01
With the development of in-situ monitoring techniques, the ensemble Kalman filter (EnKF has become a popular data assimilation method due to its capability to jointly update model parameters and state variables in a sequential way, and to assess the uncertainty associated with estimation and prediction. To take the conceptual model uncertainty into account during the data assimilation process, a novel multimodel ensemble Kalman filter method has been proposed by incorporating the standard EnKF with Bayesian model averaging framework. In this paper, this method is applied to analyze the dataset obtained from the Hailiutu River Basin located in the northwest part of China. Multiple conceptual models are created by considering two important factors that control groundwater dynamics in semi-arid areas: the zonation pattern of the hydraulic conductivity field and the relationship between evapotranspiration and groundwater level. The results show that the posterior model weights of the postulated models can be dynamically adjusted according to the mismatch between the measurements and the ensemble predictions, and the multimodel ensemble estimation and the corresponding uncertainty can be quantified.
Identifying a robust method to build RCMs ensemble as climate forcing for hydrological impact models
Olmos Giménez, P.; García Galiano, S. G.; Giraldo-Osorio, J. D.
2016-06-01
The regional climate models (RCMs) improve the understanding of climate mechanisms and are often used as climate forcing for hydrological impact models. Rainfall is the principal input to the water cycle, so special attention should be paid to its accurate estimation. However, climate change projections of rainfall events exhibit great divergence between RCMs. As a consequence, rainfall projections, and the estimation of their uncertainties, are better based on combining the information provided by an ensemble of different RCM simulations. Taking into account the rainfall variability provided by different RCMs, the aims of this work are to evaluate the performance of two novel approaches based on the reliability ensemble averaging (REA) method for building RCM ensembles of monthly precipitation over Spain. The proposed methodologies are based on probability density functions (PDFs) and consider the variability of different levels of information: on the one hand, annual and seasonal rainfall; on the other hand, monthly rainfall. The sensitivity of the proposed approaches to two metrics for identifying the best ensemble-building method is evaluated. A plausible future scenario of rainfall for 2021-2050 over Spain, based on the more robust method, is identified. As a result, the rainfall projections are improved, decreasing the uncertainties involved in driving hydrological impact models and thereby reducing the cumulative errors in the modeling chain.
Canonical Strangeness and Distillation Effects in Hadron Production
Toneev, V D
2004-01-01
Strangeness canonical ensemble for Maxwell-Boltzmann statistics is reconsidered for excited nuclear systems with non-vanishing net strangeness. A new recurrence relation method is applied to find the partition function. The method is first generalized to the case of quantum strangeness canonical ensemble. Uncertainties in calculation of the K+/pi+ excitation function are discussed. A new scenario based on the strangeness distillation effect is put forward for a possible explanation of anomalous strangeness production observed at the bombarding energy near 30 AGeV. The peaked maximum in the K+/pi+ ratio is considered as a sign of the critical end-point reached in evolution of the system rather than a latent heat jump emerging from the onset of the first order deconfinement phase transition.
Rhythmic canons and modular tiling
Caure, Hélianthe
2016-01-01
This thesis is a contribution to the study of modulo p tiling. Many mathematical and computational tools were used for the study of rhythmic tiling canons. Recent research has mainly focused on finding tilings without inner periodicity, called Vuza canons. Those canons are a constructive basis for all rhythmic tiling canons; however, they are very difficult to obtain. The best current method is a brute-force exploration that, despite a few recent enhancements, is exponential. Many techniques ...
An Introduction to Ensemble Methods for Data Analysis
Berk, Richard A.
2011-01-01
There are a growing number of new statistical procedures that Leo Breiman (2001b) has called "algorithmic". Coming from work primarily in statistics, applied mathematics, and computer science, these techniques are sometimes linked to "data mining", "machine learning", and "statistical learning". A key idea behind algorithmic methods is that there is no statistical model in the usual sense; no effort is made to represent how the data were generated. And no apologies are made for the absence of a mo...
EXPERIMENTS OF ENSEMBLE FORECAST OF TYPHOON TRACK USING BDA PERTURBING METHOD
Institute of Scientific and Technical Information of China (English)
HUANG Yan-yan; WAN Qi-lin; YUAN Jin-nan; DING Wei-yu
2006-01-01
A new method, BDA perturbing, is used in ensemble forecasting of typhoon tracks. The method is based on the Bogus Data Assimilation (BDA) scheme. It perturbs the initial position and intensity of typhoons to obtain a series of bogus vortices. Each bogus vortex is then used in data assimilation to obtain initial conditions, and ensemble forecast members are constructed by conducting simulations with these initial conditions. Several typhoon cases are chosen to test the validity of this new method, and the results show that using the BDA perturbing method to perturb the initial position and intensity of a typhoon for track forecasting can improve accuracy compared with the direct use of the BDA assimilation scheme. It is also concluded that a perturbing amplitude of intensity of 5 hPa is probably more appropriate than 10 hPa when the BDA perturbing method is used in combination with initial position perturbation.
A New Ensemble Method with Feature Space Partitioning for High-Dimensional Data Classification
Directory of Open Access Journals (Sweden)
Yongjun Piao
2015-01-01
Ensemble data mining methods, also known as classifier combination, are often used to improve the performance of classification. Various classifier combination methods such as bagging, boosting, and random forest have been devised and have received considerable attention in the past. However, data dimensionality is increasing rapidly, a trend that poses various challenges because these methods are not suitable for direct application to high-dimensional datasets. In this paper, we propose an ensemble method for classification of high-dimensional data, with each classifier constructed from a different set of features determined by a partitioning of redundant features. In our method, the redundancy of features is considered in order to divide the original feature space. Then, each generated feature subset is trained by a support vector machine, and the results of each classifier are combined by majority voting. The efficiency and effectiveness of our method are demonstrated through comparisons with other ensemble techniques, and the results show that our method outperforms the other methods.
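The pipeline the abstract describes (partition the features, train one base learner per subset, majority-vote the predictions) can be sketched as below. To keep the sketch dependency-free, a nearest-centroid classifier stands in for the SVM base learners, and random partitioning replaces the redundancy-based partitioning; the class name and seeds are invented:

```python
import numpy as np

class PartitionedEnsemble:
    """Feature-space-partitioning ensemble sketch: split the features
    into disjoint subsets, train one base classifier per subset, and
    combine predictions by majority voting."""

    def __init__(self, n_partitions=3, seed=0):
        self.n_partitions = n_partitions
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        perm = self.rng.permutation(X.shape[1])
        self.subsets_ = np.array_split(perm, self.n_partitions)
        # One centroid per class per feature subset
        self.centroids_ = [
            np.stack([X[y == c][:, s].mean(axis=0) for c in self.classes_])
            for s in self.subsets_
        ]
        return self

    def predict(self, X):
        votes = []
        for s, cent in zip(self.subsets_, self.centroids_):
            # Distance of every sample to every class centroid
            d = np.linalg.norm(X[:, s][:, None, :] - cent[None, :, :], axis=2)
            votes.append(self.classes_[d.argmin(axis=1)])
        V = np.stack(votes)                    # (n_partitions, n_samples)
        maj = []
        for col in V.T:                        # majority vote per sample
            vals, cnt = np.unique(col, return_counts=True)
            maj.append(vals[cnt.argmax()])
        return np.array(maj)
```

Each base learner only ever sees its own slice of the feature space, which is the point of the partitioning scheme.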
A Numerical Comparison of Rule Ensemble Methods and Support Vector Machines
Energy Technology Data Exchange (ETDEWEB)
Meza, Juan C.; Woods, Mark
2009-12-18
Machine or statistical learning is a growing field that encompasses many scientific problems, including estimating parameters from data, identifying risk factors in health studies, image recognition, and finding clusters within datasets, to name just a few examples. Statistical learning can be described as 'learning from data', with the goal of making a prediction of some outcome of interest. This prediction is usually made on the basis of a computer model that is built using data where the outcomes and a set of features have been previously matched. The computer model is called a learner, hence the name machine learning. In this paper, we present two such algorithms, a support vector machine method and a rule ensemble method. We compared their predictive power on three type Ia supernova data sets provided by the Nearby Supernova Factory and found that while both methods give accuracies of approximately 95%, the rule ensemble method gives much lower false negative rates.
Thermodynamic stability of charged BTZ black holes: Ensemble dependency problem and its solution
Hendi, S H; Mamasani, R
2015-01-01
Motivated by the wide applications of thermal stability and phase transitions, we investigate the thermodynamic properties of charged BTZ black holes. We apply the standard method to calculate the heat capacity and the Hessian matrix, and find that the thermal stability of charged BTZ solutions depends on the choice of ensemble. To overcome this problem, we treat the cosmological constant as a thermodynamic variable. With this modification, we show that the ensemble dependency is eliminated and the thermal stability conditions are the same in both ensembles. We then generalize our solutions to the case of nonlinear electrodynamics and show how the nonlinear matter field modifies the geometrical behavior of the metric function. We also study the phase transition and thermal stability of these black holes in the context of both canonical and grand canonical ensembles. We show that by considering the cosmological constant as a thermodynamic variable and modifying the Hessian matrix, the ensemble dependency of thermal stability...
Chen, Jinglong; Zhang, Chunlin; Zhang, Xiaoyan; Zi, Yanyang; He, Shuilong; Yang, Zhe
2015-03-01
Satellite communication antennas are key devices of a measurement ship, supporting integrated voice, data, fax and video services. Condition monitoring of mechanical equipment from vibration measurement data is significant for guaranteeing safe operation and avoiding unscheduled breakdowns, so a condition monitoring system for ship-based satellite communication antennas is designed and developed. Planetary gearboxes play an important role in the transmission train of a satellite communication antenna. However, condition monitoring of planetary gearboxes still faces challenges due to their complexity and weak condition features. This paper provides a possibility for planetary gearbox condition monitoring by proposing an ensemble multiwavelet analysis method. Benefiting from its multi-resolution analysis property and multiple wavelet basis functions, the multiwavelet transform has an advantage in characterizing non-stationary signals. In order to realize accurate detection of the condition feature and multi-resolution analysis over the whole frequency band, an adaptive multiwavelet basis function is constructed by increasing the multiplicity, and the vibration signal is then processed by the ensemble multiwavelet transform. Finally, a normalized ensemble multiwavelet transform information entropy is computed to describe the condition of the planetary gearbox. The effectiveness of the proposed method is first validated through condition monitoring of an experimental planetary gearbox. The method is then used for planetary gearbox condition monitoring of ship-based satellite communication antennas, and the results support its feasibility.
Canonical Information Analysis
DEFF Research Database (Denmark)
Vestergaard, Jacob Schack; Nielsen, Allan Aasbjerg
2015-01-01
Canonical correlation analysis is an established multivariate statistical method in which correlation between linear combinations of multivariate sets of variables is maximized. In canonical information analysis introduced here, linear correlation as a measure of association between variables is replaced by the information theoretical, entropy based measure mutual information, which is a much more general measure of association. We make canonical information analysis feasible for large sample problems, including for example multispectral images, due to the use of a fast kernel density estimator for entropy estimation. Canonical information analysis is applied successfully to (1) simple simulated data to illustrate the basic idea and evaluate performance, (2) fusion of weather radar and optical geostationary satellite data in a situation with heavy precipitation, and (3) change detection in optical...
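A toy version of the key substitution above, replacing correlation by mutual information as the association measure: the paper uses a fast kernel density estimator, whereas this sketch uses a plain 2-D histogram:

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Histogram-based estimate of mutual information I(X;Y) in nats
    (a simple stand-in for the kernel density estimator in the paper)."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()                              # joint probabilities
    px = pxy.sum(axis=1, keepdims=True)           # marginal of X
    py = pxy.sum(axis=0, keepdims=True)           # marginal of Y
    mask = pxy > 0
    return float((pxy[mask] * np.log(pxy[mask] / (px @ py)[mask])).sum())
```

On data where y depends on x only quadratically, the linear correlation is near zero while the mutual information estimate remains clearly above that of an independent pair, which is the generality the abstract refers to.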
Characterizing the spin state of an atomic ensemble using the magneto-optical resonance method
Julsgaard, B; Sherson, J; Sørensen, J L
2004-01-01
Quantum information protocols utilizing atomic ensembles require preparation of a coherent spin state (CSS) of the ensemble as an important starting point. We investigate the magneto-optical resonance method for characterizing a spin state of cesium atoms in a paraffin coated vapor cell. Atoms in a constant magnetic field are subject to an off-resonant laser beam and an RF magnetic field. The spectrum of the Zeeman sub-levels, in particular the weak quadratic Zeeman effect, enables us to measure the spin orientation, the number of atoms, and the transverse spin coherence time. Notably the use of 894nm pumping light on the D1-line, ensuring the state F=4, m_F=4 to be a dark state, helps us to achieve spin orientation of better than 98%. Hence we can establish a CSS with high accuracy which is critical for the analysis of the entangled states of atoms.
ENSO-conditioned weather resampling method for seasonal ensemble streamflow prediction
Beckers, Joost V. L.; Weerts, Albrecht H.; Tijdeman, Erik; Welles, Edwin
2016-08-01
Oceanic-atmospheric climate modes, such as El Niño-Southern Oscillation (ENSO), are known to affect the local streamflow regime in many rivers around the world. A new method is proposed to incorporate climate mode information into the well-known ensemble streamflow prediction (ESP) method for seasonal forecasting. The ESP is conditioned on an ENSO index in two steps. First, a number of original historical ESP traces are selected based on similarity between the index value in the historical year and the index value at the time of forecast. In the second step, additional ensemble traces are generated by a stochastic ENSO-conditioned weather resampler. These resampled traces compensate for the reduction of ensemble size in the first step and prevent degradation of skill at forecasting stations that are less affected by ENSO. The skill of the ENSO-conditioned ESP is evaluated over 50 years of seasonal hindcasts of streamflows at three test stations in the Columbia River basin in the US Pacific Northwest. An improvement in forecast skill of 5 to 10 % is found for two test stations. The streamflows at the third station are less affected by ENSO and no change in forecast skill is found here.
DEFF Research Database (Denmark)
Senjean, Bruno; Knecht, Stefan; Jensen, Hans Jørgen Aa;
2015-01-01
Gross-Oliveira-Kohn density-functional theory (GOK-DFT) for ensembles is, in principle, very attractive but has been hard to use in practice. A practical model based on GOK-DFT for the calculation of electronic excitation energies is discussed. The model relies on two modifications of GOK-DFT: use ... equiensembles. It is shown that such a linear interpolation method (LIM) can be rationalized and that it effectively introduces weight dependence effects. As proof of principle, the LIM has been applied to He, Be, and H2 in both equilibrium and stretched geometries as well as the stretched HeH+ molecule. Very ... , which complements the long-range wave-function-based ensemble energy contribution, should vary with the ensemble weights even when the density is held fixed. This weight dependence ensures that the range-separated ensemble energy varies linearly with the ensemble weights. When the (weight...
Data Mining and Ensemble of Learning Methods
Institute of Scientific and Technical Information of China (English)
刁力力; 胡可云; 陆玉昌; 石纯一
2001-01-01
Data mining is one kind of solution to the problem of information explosion. Classification and prediction are among the most fundamental tasks in the data-mining field. Many experiments have shown that the results of ensembles of learning methods are generally better than those of single learning methods most of the time. In this sense, it is of great value to introduce ensembles of learning methods to data mining. This paper introduces data mining and ensembles of learning methods respectively, along with an analysis of the role ensembles of learning methods can play in some important practical aspects of data mining: text mining, multi-media information mining and web mining.
Extensions and applications of ensemble-of-trees methods in machine learning
Bleich, Justin
Ensemble-of-trees algorithms have emerged to the forefront of machine learning due to their ability to generate high forecasting accuracy for a wide array of regression and classification problems. Classic ensemble methodologies such as random forests (RF) and stochastic gradient boosting (SGB) rely on algorithmic procedures to generate fits to data. In contrast, more recent ensemble techniques such as Bayesian Additive Regression Trees (BART) and Dynamic Trees (DT) focus on an underlying Bayesian probability model to generate the fits. These new probability model-based approaches show much promise versus their algorithmic counterparts, but also offer substantial room for improvement. The first part of this thesis focuses on methodological advances for ensemble-of-trees techniques with an emphasis on the more recent Bayesian approaches. In particular, we focus on extensions of BART in four distinct ways. First, we develop a more robust implementation of BART for both research and application. We then develop a principled approach to variable selection for BART as well as the ability to naturally incorporate prior information on important covariates into the algorithm. Next, we propose a method for handling missing data that relies on the recursive structure of decision trees and does not require imputation. Last, we relax the assumption of homoskedasticity in the BART model to allow for parametric modeling of heteroskedasticity. The second part of this thesis returns to the classic algorithmic approaches in the context of classification problems with asymmetric costs of forecasting errors. First we consider the performance of RF and SGB more broadly and demonstrate its superiority to logistic regression for applications in criminology with asymmetric costs. Next, we use RF to forecast unplanned hospital readmissions upon patient discharge with asymmetric costs taken into account. Finally, we explore the construction of stable decision trees for forecasts of
Efendiev, Yalchin R.
2013-08-21
In this paper, we propose multilevel Monte Carlo (MLMC) methods that use ensemble level mixed multiscale methods in the simulations of multiphase flow and transport. The contribution of this paper is twofold: (1) a design of ensemble level mixed multiscale finite element methods and (2) a novel use of mixed multiscale finite element methods within multilevel Monte Carlo techniques to speed up the computations. The main idea of ensemble level multiscale methods is to construct local multiscale basis functions that can be used for any member of the ensemble. In this paper, we consider two ensemble level mixed multiscale finite element methods: (1) the no-local-solve-online ensemble level method (NLSO); and (2) the local-solve-online ensemble level method (LSO). The first approach was proposed in Aarnes and Efendiev (SIAM J. Sci. Comput. 30(5):2319-2339, 2008) while the second approach is new. Both mixed multiscale methods use a number of snapshots of the permeability media in generating multiscale basis functions. As a result, in the off-line stage, we construct multiple basis functions for each coarse region where basis functions correspond to different realizations. In the no-local-solve-online ensemble level method, one uses the whole set of precomputed basis functions to approximate the solution for an arbitrary realization. In the local-solve-online ensemble level method, one uses the precomputed functions to construct a multiscale basis for a particular realization. With this basis, the solution corresponding to this particular realization is approximated in LSO mixed multiscale finite element method (MsFEM). In both approaches, the accuracy of the method is related to the number of snapshots computed based on different realizations that one uses to precompute a multiscale basis. In this paper, ensemble level multiscale methods are used in multilevel Monte Carlo methods (Giles 2008a, Oper.Res. 56(3):607-617, b). In multilevel Monte Carlo methods, more accurate
Battogtokh, D.; Asch, D. K.; Case, M. E.; Arnold, J.; Schüttler, H.-B.
2002-01-01
A chemical reaction network for the regulation of the quinic acid (qa) gene cluster of Neurospora crassa is proposed. An efficient Monte Carlo method for walking through the parameter space of possible chemical reaction networks is developed to identify an ensemble of deterministic kinetics models with rate constants consistent with RNA and protein profiling data. This method was successful in identifying a model ensemble fitting available RNA profiling data on the qa gene cluster. PMID:12477937
Simulating large-scale crop yield by using perturbed-parameter ensemble method
Iizumi, T.; Yokozawa, M.; Sakurai, G.; Nishimori, M.
2010-12-01
One pressing issue for food security under a changing climate is predicting the inter-annual variation of crop production induced by climate extremes and climate modulation. To secure the food supply for a growing world population, a methodology that can accurately predict crop yield on a large scale is needed. However, in developing a process-based large-scale crop model at the scale of general circulation models (GCMs), 100 km in latitude and longitude, researchers encounter difficulties in the spatial heterogeneity of available information on crop production, such as cultivated cultivars and management. This study proposed an ensemble-based simulation method that uses a process-based crop model and a systematic parameter perturbation procedure, taking maize in the U.S., China, and Brazil as examples. The crop model was developed by modifying the fundamental structure of the Soil and Water Assessment Tool (SWAT) to incorporate the effect of heat stress on yield. We called the new model PRYSBI: the Process-based Regional-scale Yield Simulator with Bayesian Inference. The posterior probability density function (PDF) of 17 parameters, which represents the crop- and grid-specific features of the crop and its uncertainty under the given data, was estimated by Bayesian inversion analysis. We then took 1500 ensemble members of simulated yield values, based on parameter sets sampled from the posterior PDF, to describe yearly changes of the yield, i.e., the perturbed-parameter ensemble method. The ensemble median for 27 years (1980-2006) was compared with the data aggregated from county yields. On a country scale, the ensemble median of the simulated yield showed a good correspondence with the reported yield: the Pearson correlation coefficient is over 0.6 for all countries. In contrast, on a grid scale, the correspondence
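The perturbed-parameter ensemble step (draw parameter sets from the posterior samples, run the crop model for each, report the ensemble median) reduces to a few lines. The function name and its arguments are hypothetical, and a trivial stand-in model is used:

```python
import numpy as np

def perturbed_parameter_yield(model, posterior_samples, n_members=1500, rng=None):
    """Illustrative perturbed-parameter ensemble: resample parameter
    sets from posterior samples, run the model for each, and return
    the ensemble median trajectory."""
    rng = np.random.default_rng(rng)
    idx = rng.integers(0, len(posterior_samples), size=n_members)
    runs = np.array([model(posterior_samples[i]) for i in idx])
    return np.median(runs, axis=0)          # median across ensemble members
```

Taking the median rather than the mean keeps the ensemble summary robust against the occasional extreme parameter draw.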
Oh, Seok-Geun; Suh, Myoung-Seok
2016-03-01
The projection skills of five ensemble methods were analyzed according to simulation skills, training period, and ensemble members, using 198 sets of pseudo-simulation data (PSD) produced by random number generation assuming the simulated temperature of regional climate models. The PSD sets were classified into 18 categories according to the relative magnitude of bias, variance ratio, and correlation coefficient, where each category had 11 sets (including 1 truth set) with 50 samples. The ensemble methods used were as follows: equal weighted averaging without bias correction (EWA_NBC), EWA with bias correction (EWA_WBC), weighted ensemble averaging based on root mean square errors and correlation (WEA_RAC), WEA based on the Taylor score (WEA_Tay), and multivariate linear regression (Mul_Reg). The projection skills of the ensemble methods generally improved compared with the best member of each category. However, their projection skills are significantly affected by the simulation skills of the ensemble members. The weighted ensemble methods showed better projection skills than the non-weighted methods, in particular for the PSD categories having systematic biases and various correlation coefficients. The EWA_NBC showed considerably lower projection skills than the other methods, in particular for the PSD categories with systematic biases. Although Mul_Reg showed relatively good skills, it showed strong sensitivity to the PSD categories, training periods, and number of members. On the other hand, WEA_Tay and WEA_RAC showed relatively superior skills in both accuracy and reliability for all the sensitivity experiments. This indicates that WEA_Tay and WEA_RAC are applicable even for simulation data with systematic biases, a short training period, and a small number of ensemble members.
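A compact sketch of bias-corrected, skill-weighted ensemble averaging in the spirit of EWA_WBC and WEA_RAC (weights proportional to correlation divided by RMSE after removing each member's systematic bias); the exact weighting formulas of the paper may differ:

```python
import numpy as np

def ensemble_projection(members_hist, obs_hist, members_future):
    """Bias-correct each member against training observations, weight
    members by correlation/RMSE skill, and project the weighted mean
    (illustrative WEA_RAC-style scheme)."""
    Mh = np.asarray(members_hist, dtype=float)   # (n_members, n_times)
    Mf = np.asarray(members_future, dtype=float)
    o = np.asarray(obs_hist, dtype=float)
    bias = Mh.mean(axis=1) - o.mean()            # systematic bias per member
    Mh_c = Mh - bias[:, None]                    # bias-corrected members
    rmse = np.sqrt(((Mh_c - o) ** 2).mean(axis=1))
    corr = np.array([np.corrcoef(m, o)[0, 1] for m in Mh_c])
    w = np.clip(corr, 0.0, None) / (rmse + 1e-12)
    w /= w.sum()
    return w @ (Mf - bias[:, None])              # weighted, de-biased projection
```

A member that is systematically biased but otherwise tracks the observations is fully rehabilitated by the bias correction, which is exactly why EWA_NBC underperforms on biased categories in the study.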
Fast-sum method for the elastic field of three-dimensional dislocation ensembles
International Nuclear Information System (INIS)
The elastic field of complex shape ensembles of dislocation loops is developed as an essential ingredient in the dislocation dynamics method for computer simulation of mesoscopic plastic deformation. Dislocation ensembles are sorted into individual loops, which are then divided into segments represented as parametrized space curves. Numerical solutions are presented as fast numerical sums for relevant elastic field variables (i.e., displacement, strain, stress, force, self-energy, and interaction energy). Gaussian numerical quadratures are utilized to solve for field equations of linear elasticity in an infinite isotropic elastic medium. The accuracy of the method is verified by comparison of numerical results to analytical solutions for typical prismatic and slip dislocation loops. The method is shown to be highly accurate, computationally efficient, and numerically convergent as the number of segments and quadrature points are increased on each loop. Several examples of method applications to calculations of the elastic field of simple and complex loop geometries are given in infinite crystals. The effect of crystal surfaces on the redistribution of the elastic field is demonstrated by superposition of a finite-element image force field on the computed results. copyright 1999 The American Physical Society
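The core numerical ingredient described above, a loop divided into parametrized segments with Gauss-Legendre quadrature applied on each, can be illustrated on a generic line integral; the elastic-field kernels of the paper are replaced here by an arbitrary integrand `f`, and all names are illustrative:

```python
import numpy as np

def loop_line_integral(curve, f, n_segments=8, n_quad=4):
    """Evaluate a line integral of f(r) |dr| over a closed parametrized
    loop: split [0, 1] into segments and apply Gauss-Legendre
    quadrature on each, mirroring the segment-wise fast-sum structure."""
    nodes, weights = np.polynomial.legendre.leggauss(n_quad)
    edges = np.linspace(0.0, 1.0, n_segments + 1)
    h = 1e-6                                     # step for tangent estimate
    total = 0.0
    for a, b in zip(edges[:-1], edges[1:]):
        t = 0.5 * (b - a) * nodes + 0.5 * (a + b)    # map nodes to [a, b]
        r = curve(t)                                 # points on the segment
        dr = (curve(t + h) - curve(t - h)) / (2 * h) # tangent, central diff
        speed = np.linalg.norm(dr, axis=0)
        total += 0.5 * (b - a) * np.sum(weights * f(r) * speed)
    return total
```

For a circle of radius R and f equal to 1, the segment-wise quadrature reproduces the circumference 2*pi*R to near machine precision, analogous to the convergence checks against analytical loop solutions reported in the abstract.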
An ensemble method for data stream classification in the presence of concept drift
Institute of Scientific and Technical Information of China (English)
Omid ABBASZADEH; Ali AMIRI‡; Ali Reza KHANTEYMOORI
2015-01-01
One recent area of interest in computer science is data stream management and processing. By 'data stream', we refer to continuous and rapidly generated packages of data. Specific features of data streams are immense volume, high production rate, limited data processing time, and data concept drift; these features differentiate the data stream from standard types of data. An issue for the data stream is classification of input data. A novel ensemble classifier is proposed in this paper. The classifier uses base classifiers of two weighting functions under different data input conditions. In addition, a new method is used to determine drift, which emphasizes the precision of the algorithm. Another characteristic of the proposed method is removal of different numbers of the base classifiers based on their quality. Implementation of a weighting mechanism for the base classifiers at the decision-making stage is another advantage of the algorithm. This facilitates adaptability when drifts take place, which leads to classifiers with higher efficiency. Furthermore, the proposed method is tested on a set of standard data and the results confirm higher accuracy compared to available ensemble classifiers and single classifiers. In addition, in some cases the proposed classifier is faster and needs less storage space.
Fault diagnosis method for nuclear power plant based on ensemble learning
International Nuclear Information System (INIS)
A nuclear power plant (NPP) is a very complex system in which a vast number of parameters must be collected and monitored, so diagnosing NPP faults is difficult. An ensemble learning method was proposed to address this problem. The method was applied to learn from training samples representing typical NPP faults: loss of coolant accident (LOCA), feed water pipe rupture, steam generator tube rupture (SGTR), and main steam pipe rupture. Simulations were carried out under normal conditions and under conditions of invalid and absent parameters. The simulation results show that the method performs well even with invalid and absent parameters, exhibiting very good generalization performance and fault tolerance. (authors)
An efficient ensemble of radial basis functions method based on quadratic programming
Shi, Renhe; Liu, Li; Long, Teng; Liu, Jian
2016-07-01
Radial basis function (RBF) surrogate models have been widely applied in engineering design optimization problems to approximate computationally expensive simulations. Ensemble of radial basis functions (ERBF) using the weighted sum of stand-alone RBFs improves the approximation performance. To achieve a good trade-off between the accuracy and efficiency of the modelling process, this article presents a novel efficient ERBF method to determine the weights through solving a quadratic programming subproblem, denoted ERBF-QP. Several numerical benchmark functions are utilized to test the performance of the proposed ERBF-QP method. The results show that ERBF-QP can significantly improve the modelling efficiency compared with several existing ERBF methods. Moreover, ERBF-QP also provides satisfactory performance in terms of approximation accuracy. Finally, the ERBF-QP method is applied to a satellite multidisciplinary design optimization problem to illustrate its practicality and effectiveness for real-world engineering applications.
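The paper's exact ERBF-QP formulation is not reproduced here, but the core idea, choosing ensemble weights by minimizing a quadratic error form subject to a sum-to-one constraint, can be sketched as follows. With only the equality constraint, this equality-constrained QP has a closed-form Lagrangian solution, so no QP solver is needed; `ensemble_weights` and the toy cross-validation error matrix are illustrative assumptions, not the authors' code:

```python
import numpy as np

def ensemble_weights(errors):
    """Weights for an ensemble of surrogate models from their
    cross-validation error vectors (one column per model). Solves
        min_w  w^T C w   s.t.  sum(w) = 1,   with  C = E^T E,
    whose closed form is  w = C^{-1} 1 / (1^T C^{-1} 1)."""
    C = errors.T @ errors
    C += 1e-10 * np.eye(C.shape[0])   # regularize a near-singular C
    ones = np.ones(C.shape[0])
    w = np.linalg.solve(C, ones)
    return w / w.sum()

# toy example: model 0 has small CV errors, model 1 large ones
E = np.array([[0.10,  1.0],
              [-0.10, -1.0],
              [0.05,  0.8]])
w = ensemble_weights(E)
```

Note that without non-negativity constraints some weights can be negative; enforcing w >= 0, as a production ensemble typically would, requires a genuine QP solver.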
Boosting iterative stochastic ensemble method for nonlinear calibration of subsurface flow models
Elsheikh, Ahmed H.
2013-06-01
A novel parameter estimation algorithm is proposed. The inverse problem is formulated as a sequential data integration problem in which Gaussian process regression (GPR) is used to integrate the prior knowledge (static data). The search space is further parameterized using a Karhunen-Loève expansion to build a set of basis functions that spans the search space. Optimal weights of the reduced basis functions are estimated by an iterative stochastic ensemble method (ISEM). ISEM employs directional derivatives within a Gauss-Newton iteration for efficient gradient estimation. The resulting update equation relies on the inverse of the output covariance matrix, which is rank deficient. In the proposed algorithm we use an iterative regularization based on the ℓ2 Boosting algorithm: ℓ2 Boosting iteratively fits the residual, and the amount of regularization is controlled by the number of iterations. A termination criterion based on the Akaike information criterion (AIC) is utilized. This regularization method is very attractive in terms of performance and simplicity of implementation. The proposed algorithm combining ISEM and ℓ2 Boosting is evaluated on several nonlinear subsurface flow parameter estimation problems. Its efficiency is demonstrated by the small size of the utilized ensembles and by the error convergence rates. © 2013 Elsevier B.V.
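The ℓ2 Boosting regularization referred to here can be sketched in a few lines for a linear model: each iteration fits the current residual with the single most-correlated basis function and takes a shrunken step, so the iteration count acts as the regularization knob. This is the generic textbook form (the paper's AIC-based stopping rule and ISEM gradients are omitted):

```python
import numpy as np

def l2_boost(X, y, n_iter=1000, shrinkage=0.1):
    """Componentwise L2 Boosting for a linear model: repeatedly fit the
    residual with the best single column and take a shrunken step."""
    beta = np.zeros(X.shape[1])
    residual = y.astype(float).copy()
    for _ in range(n_iter):
        corr = X.T @ residual
        j = int(np.argmax(np.abs(corr)))          # most correlated basis
        step = corr[j] / (X[:, j] @ X[:, j])      # 1-D least-squares coefficient
        beta[j] += shrinkage * step
        residual -= shrinkage * step * X[:, j]
    return beta

# toy problem with a sparse true coefficient vector
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
beta_true = np.array([2.0, 0.0, -1.0, 0.0, 0.0])
y = X @ beta_true
beta = l2_boost(X, y)
```

Stopping early (fewer iterations) leaves a larger residual but a more regularized, shrunken coefficient vector, which is exactly the trade-off the abstract describes.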
Acceleration of ensemble machine learning methods using many-core devices
Tamerus, A.; Washbrook, A.; Wyeth, D.
2015-12-01
We present a case study into the acceleration of ensemble machine learning methods using many-core devices in collaboration with Toshiba Medical Visualisation Systems Europe (TMVSE). The adoption of GPUs to execute a key algorithm in the classification of medical image data was shown to significantly reduce overall processing time. Using a representative dataset and pre-trained decision trees as input we will demonstrate how the decision forest classification method can be mapped onto the GPU data processing model. It was found that a GPU-based version of the decision forest method resulted in over 138 times speed-up over a single-threaded CPU implementation with further improvements possible. The same GPU-based software was then directly applied to a suitably formed dataset to benefit supervised learning techniques applied in High Energy Physics (HEP) with similar improvements in performance.
Ensemble approach combining multiple methods improves human transcription start site prediction
LENUS (Irish Health Repository)
Dineen, David G
2010-11-30
Abstract Background The computational prediction of transcription start sites is an important unsolved problem. Some recent progress has been made, but many promoters, particularly those not associated with CpG islands, are still difficult to locate using current methods. These methods use different features and training sets, along with a variety of machine learning techniques, and result in different prediction sets. Results We demonstrate the heterogeneity of current prediction sets, and take advantage of this heterogeneity to construct a two-level classifier ('Profisi Ensemble') using predictions from 7 programs, along with 2 other data sources. Support vector machines using 'full' and 'reduced' data sets are combined in an either/or approach. We achieve a 14% increase in performance over the current state-of-the-art, as benchmarked by a third-party tool. Conclusions Supervised learning methods are a useful way to combine predictions from diverse sources.
A Fuzzy Integral Ensemble Method in Visual P300 Brain-Computer Interface
Directory of Open Access Journals (Sweden)
Francesco Cavrini
2016-01-01
Full Text Available We evaluate the possibility of applying a combination of classifiers using fuzzy measures and integrals to a Brain-Computer Interface (BCI) based on electroencephalography. In particular, we present an ensemble method that can be applied to a variety of systems and evaluate it in the context of a visual P300-based BCI. Offline analysis of data from five subjects lets us argue that the proposed classification strategy is suitable for BCI. Indeed, the achieved performance is significantly greater than the average of the base classifiers and, broadly speaking, similar to that of the best one. Thus the proposed methodology allows realizing systems that can be used by different subjects without the need for a preliminary configuration phase in which the best classifier for each user has to be identified. Moreover, the ensemble is often capable of detecting uncertain situations and turning them from misclassifications into abstentions, thereby improving the level of safety in BCI for environmental or device control.
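Fuzzy-integral combination in this line of work typically means evaluating the discrete Choquet (or Sugeno) integral of the base classifiers' confidence scores with respect to a fuzzy measure. A minimal Choquet sketch, with an invented two-classifier measure (the paper's measure-identification procedure is not shown), is:

```python
import numpy as np

def choquet_integral(scores, measure):
    """Discrete Choquet integral of base-classifier confidences `scores`
    with respect to a fuzzy measure `measure`, given as a dict mapping
    frozensets of classifier indices to values in [0, 1]."""
    order = np.argsort(scores)                 # ascending confidences
    total, prev = 0.0, 0.0
    for k, i in enumerate(order):
        # coalition of classifiers whose confidence is >= scores[i]
        coalition = frozenset(int(j) for j in order[k:])
        total += (scores[i] - prev) * measure[coalition]
        prev = scores[i]
    return total

# invented measure over two classifiers: classifier 0 slightly more trusted
mu = {frozenset({0, 1}): 1.0, frozenset({0}): 0.6, frozenset({1}): 0.5}
fused = choquet_integral(np.array([0.9, 0.4]), mu)
```

For a P300 speller, one such fused score would be computed per candidate class, and the class with the largest integral (or an abstention, if no score dominates) would be selected.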
Senjean, Bruno; Alam, Md Mehboob; Knecht, Stefan; Fromager, Emmanuel
2015-01-01
The combination of a recently proposed linear interpolation method (LIM) [Senjean et al., Phys. Rev. A 92, 012518 (2015)], which enables the calculation of weight-independent excitation energies in range-separated ensemble density-functional approximations, with the extrapolation scheme of Savin [J. Chem. Phys. 140, 18A509 (2014)] is presented in this work. It is shown that LIM excitation energies vary quadratically with the inverse of the range-separation parameter mu when the latter is large. As a result, the extrapolation scheme, which is usually applied to long-range interacting energies, can be adapted straightforwardly to LIM. This extrapolated LIM (ELIM) has been tested on a small test set consisting of He, Be, H2 and HeH+. Relatively accurate results have been obtained for the first singlet excitation energies with the typical mu=0.4 value. The improvement of LIM after extrapolation is remarkable, in particular for the doubly-excited 2^1Sigma+g state in the stretched H2 molecule. Three-state ensemble ...
Inferring Association between Compound and Pathway with an Improved Ensemble Learning Method.
Song, Meiyue; Jiang, Zhenran
2015-11-01
Emergence of compound molecular data coupled to pathway information offers the possibility of using machine learning methods to infer compound-pathway associations. To provide insights into the global relationship between compounds and their affected pathways, an improved Rotation Forest ensemble learning method called RGRF (Relief & GBSSL - Rotation Forest) is proposed to predict their potential associations. The main characteristic of RGRF lies in using the Relief algorithm for feature extraction and a Graph-Based Semi-Supervised Learning method as the classifier. By incorporating chemical structure information, drug mode-of-action information, and genomic space information, the method achieves better precision and flexibility in compound-pathway prediction. Moreover, several new compound-pathway associations that have the potential for further clinical investigation have been identified by database searching. Finally, a prediction tool based on the RGRF algorithm was developed, which can predict the interactions between pathways and all of the compounds in the cMap database. PMID:27491036
Functional Multiple-Set Canonical Correlation Analysis
Hwang, Heungsun; Jung, Kwanghee; Takane, Yoshio; Woodward, Todd S.
2012-01-01
We propose functional multiple-set canonical correlation analysis for exploring associations among multiple sets of functions. The proposed method includes functional canonical correlation analysis as a special case when only two sets of functions are considered. As in classical multiple-set canonical correlation analysis, computationally, the…
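As a concrete reminder of what the classical two-set special case computes, the first canonical correlation can be obtained from an SVD of the product of the two blocks' orthonormal bases. This sketch covers only the ordinary finite-dimensional, two-set case, not the functional multiple-set extension proposed in the paper:

```python
import numpy as np

def first_canonical_correlation(X, Y):
    """First canonical correlation between two column-centred data blocks,
    computed from the singular values of Ux^T Uy, where Ux and Uy are
    orthonormal bases (from thin SVDs) of the centred blocks."""
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    Ux = np.linalg.svd(Xc, full_matrices=False)[0]
    Uy = np.linalg.svd(Yc, full_matrices=False)[0]
    s = np.linalg.svd(Ux.T @ Uy, compute_uv=False)
    return min(float(s[0]), 1.0)   # clip tiny floating-point overshoot

# toy check: Y built from X's columns has first canonical correlation 1
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
Y = X @ rng.normal(size=(3, 2))    # Y lies in the column span of X
r = first_canonical_correlation(X, Y)
```

The multiple-set generalization replaces this pairwise construction with an objective over all sets simultaneously, which is what the proposed functional method extends to sets of functions.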
An ensemble method with hybrid features to identify extracellular matrix proteins.
Yang, Runtao; Zhang, Chengjin; Gao, Rui; Zhang, Lina
2015-01-01
The extracellular matrix (ECM) is a dynamic composite of secreted proteins that play important roles in numerous biological processes such as tissue morphogenesis, differentiation and homeostasis. Furthermore, various diseases are caused by the dysfunction of ECM proteins. Therefore, identifying these important ECM proteins may assist in understanding related biological processes and drug development. In view of the serious imbalance in the training dataset, a Random Forest-based ensemble method with hybrid features is developed in this paper to identify ECM proteins. Hybrid features are employed by incorporating sequence composition, physicochemical properties, and evolutionary and structural information. The Information Gain Ratio and Incremental Feature Selection (IGR-IFS) methods are adopted to select the optimal features. Finally, the resulting predictor, termed IECMP (Identify ECM Proteins), achieves a balanced accuracy of 86.4% using 10-fold cross-validation on the training dataset, which is much higher than the results obtained by other methods (ECMPRED: 71.0%, ECMPP: 77.8%). Moreover, when tested on a common independent dataset, our method also achieves significantly improved performance over ECMPP and ECMPRED. These results indicate that IECMP is an effective method for ECM protein prediction, with a more balanced prediction capability for positive and negative samples. It is anticipated that the proposed method will provide significant information to fully decipher the molecular mechanisms of ECM-related biological processes and discover candidate drug targets. For public access, we have developed a user-friendly web server for ECM protein identification that is freely accessible at http://iecmp.weka.cc.
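The incremental feature selection (IFS) step can be sketched generically: rank features by a relevance score, then grow the feature set one feature at a time and keep the prefix that scores best. In this sketch, plain correlation stands in for the Information Gain Ratio and an R² scoring function stands in for cross-validated classifier accuracy; neither is IECMP's actual pipeline:

```python
import numpy as np

def incremental_feature_selection(X, y, score_fn):
    """IFS sketch: rank features by |correlation| with the target (a
    stand-in for the Information Gain Ratio), then evaluate growing
    prefixes of the ranking and keep the best-scoring one."""
    corr = np.corrcoef(X.T, y)[-1, :-1]        # correlation of y with each column
    ranking = np.argsort(-np.abs(corr))
    best_score, best_k = -np.inf, 0
    for k in range(1, X.shape[1] + 1):
        s = score_fn(X[:, ranking[:k]], y)
        if s > best_score:
            best_score, best_k = s, k
    return ranking[:best_k], best_score

def r_squared(Xs, y):
    """Toy scoring function: R^2 of an ordinary least-squares fit."""
    beta = np.linalg.lstsq(Xs, y, rcond=None)[0]
    resid = y - Xs @ beta
    return 1.0 - (resid @ resid) / (y @ y)

# toy data: feature 0 is informative, the rest are noise
rng = np.random.default_rng(0)
y = rng.normal(size=60)
X = np.column_stack([y, rng.normal(size=(60, 2))])
selected, score = incremental_feature_selection(X, y, r_squared)
```

In the real method the scoring function is the balanced accuracy of the Random Forest under cross-validation, so the selected prefix is the optimal hybrid feature subset.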
Directory of Open Access Journals (Sweden)
Jiang Tianzi
2004-09-01
Full Text Available Abstract Background Microarray experiments are becoming a powerful tool for clinical diagnosis, as they have the potential to discover gene expression patterns that are characteristic of a particular disease. To date, this problem has received most attention in the context of cancer research, especially tumor classification. Various feature selection methods and classifier design strategies have been used and compared. However, most published articles on tumor classification have applied a particular technique to a particular dataset, and only recently have several researchers compared these techniques on several public datasets. It has been verified that differently selected features reflect different aspects of a dataset and that some selected features obtain better solutions on certain problems. At the same time, faced with a large amount of microarray data and little prior knowledge, it is difficult to find the intrinsic characteristics using traditional methods. In this paper, we attempt to introduce a combinational feature selection method in conjunction with ensemble neural networks to generally improve the accuracy and robustness of sample classification. Results We validate our new method on several recent publicly available datasets, both with the predictive accuracy of testing samples and through cross validation. Compared with the best performance of other current methods, remarkably improved results can be obtained using our new strategy on a wide range of different datasets. Conclusions Thus, we conclude that our method can obtain more information from microarray data to achieve more accurate classification and can also help to extract the latent marker genes of diseases for better diagnosis and treatment.
Xue, L.; Dai, C.; Zhang, D.; Guadagnini, A.
2015-12-01
It is critical to predict a contaminant plume in an aquifer under uncertainty, which can help assess environmental risk and design rational management strategies. An accurate prediction of a contaminant plume requires the collection of data to help characterize the system. Because of limited financial resources, one should estimate the expected value of the data from each optional monitoring scheme before it is carried out. Data-worth analysis is believed to be an effective approach for identifying the value of data in such problems: it quantifies the uncertainty reduction under the assumption that the plausible data have been collected. However, it is difficult to apply data-worth analysis to a dynamic simulation of a contaminant transport model owing to its requirement of a large number of inverse-modeling runs. In this study, a novel efficient data-worth analysis framework is proposed by developing the Probabilistic Collocation Method based Ensemble Kalman Filter (PCKF). The PCKF constructs a polynomial chaos expansion surrogate model to replace the original complex numerical model; consequently, the inverse modeling can be performed on the proxy rather than the original model. An illustrative example, considering the dynamic change of the contaminant concentration, is employed to demonstrate the proposed approach. The results reveal that schemes with different sampling frequencies, monitoring network locations, and prior data content have a significant impact on the uncertainty reduction in the estimation of the contaminant plume. Our proposition is validated to provide a reasonable estimate of the value of data from various schemes.
A novel ensemble learning method for de novo computational identification of DNA binding sites
Directory of Open Access Journals (Sweden)
Khetani Radhika S
2007-07-01
Full Text Available Abstract Background Despite the diversity of motif representations and search algorithms, the de novo computational identification of transcription factor binding sites remains constrained by the limited accuracy of existing algorithms and the need for user-specified input parameters that describe the motif being sought. Results We present a novel ensemble learning method, SCOPE, that is based on the assumption that transcription factor binding sites belong to one of three broad classes of motifs: non-degenerate, degenerate and gapped motifs. SCOPE employs a unified scoring metric to combine the results from three motif finding algorithms each aimed at the discovery of one of these classes of motifs. We found that SCOPE's performance on 78 experimentally characterized regulons from four species was a substantial and statistically significant improvement over that of its component algorithms. SCOPE outperformed a broad range of existing motif discovery algorithms on the same dataset by a statistically significant margin. Conclusion SCOPE demonstrates that combining multiple, focused motif discovery algorithms can provide a significant gain in performance. By building on components that efficiently search for motifs without user-defined parameters, SCOPE requires as input only a set of upstream sequences and a species designation, making it a practical choice for non-expert users. A user-friendly web interface, Java source code and executables are available at http://genie.dartmouth.edu/scope.
JPPRED: Prediction of Types of J-Proteins from Imbalanced Data Using an Ensemble Learning Method
Directory of Open Access Journals (Sweden)
Lina Zhang
2015-01-01
Full Text Available Different types of J-proteins perform distinct functions in chaperone processes and disease development. Accurate identification of the types of J-proteins will provide significant clues to reveal the mechanism of J-proteins and contribute to developing drugs for diseases. In this study, an ensemble predictor called JPPRED for J-protein prediction is proposed with hybrid features, including split amino acid composition (SAAC), pseudo amino acid composition (PseAAC), and position specific scoring matrix (PSSM). To deal with the imbalanced benchmark dataset, the synthetic minority oversampling technique (SMOTE) and an undersampling technique are applied. The average sensitivity of JPPRED based on the above-mentioned individual feature spaces lies in the range of 0.744-0.851, indicating the discriminative power of these features. In addition, JPPRED yields the highest average sensitivity of 0.875 using the hybrid feature space of SAAC, PseAAC, and PSSM. Compared to the individual base classifiers, JPPRED obtains more balanced and better performance for each type of J-protein. To evaluate the prediction performance objectively, JPPRED is compared with a previous study. Encouragingly, JPPRED obtains balanced performance for each type of J-protein, which is significantly superior to that of the existing method. It is anticipated that JPPRED can be a potential candidate for J-protein prediction.
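The SMOTE step used to rebalance the training set has a simple core: each synthetic minority sample is a random interpolation between a minority sample and one of its k nearest minority-class neighbours. A minimal sketch (JPPRED's feature spaces and classifiers are not reproduced; the toy data are invented):

```python
import numpy as np

def smote(X_min, n_synthetic, k=3, rng=None):
    """Minimal SMOTE: each synthetic minority sample interpolates between
    a random minority sample and one of its k nearest minority neighbours."""
    if rng is None:
        rng = np.random.default_rng()
    # pairwise distances within the minority class
    d = np.linalg.norm(X_min[:, None, :] - X_min[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)                 # exclude self-neighbours
    neighbours = np.argsort(d, axis=1)[:, :k]
    synthetic = []
    for _ in range(n_synthetic):
        i = rng.integers(len(X_min))
        j = rng.choice(neighbours[i])
        synthetic.append(X_min[i] + rng.random() * (X_min[j] - X_min[i]))
    return np.array(synthetic)

rng = np.random.default_rng(0)
minority = rng.random((10, 2))      # toy under-represented class
new_samples = smote(minority, 20, rng=rng)
```

Because every synthetic point lies on a segment between two real minority samples, the oversampled class stays inside the region the minority data already occupies, rather than simply duplicating points.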
An ensemble method for gene discovery based on DNA microarray data
Institute of Scientific and Technical Information of China (English)
无
2004-01-01
The advent of DNA microarray technology has offered the promise of casting new insights onto deciphering the secrets of life by monitoring the activities of thousands of genes simultaneously. Current analyses of microarray data focus on precise classification of biological types, for example, tumor versus normal tissues. A further scientifically challenging task is to extract disease-relevant genes from the bewildering amounts of raw data, which is one of the most critical themes in the post-genomic era but is generally ignored due to the lack of an efficient approach. In this paper, we present a novel ensemble method for gene extraction that can be tailored to fulfill multiple biological tasks, including (i) precise classification of biological types; (ii) disease gene mining; and (iii) target-driven gene networking. We also give a numerical application for (i) and (ii) using a public microarray data set, and set aside a separate paper to address (iii).
Enhanced Sampling in the Well-Tempered Ensemble
Bonomi, M.; Parrinello, M.
2010-05-01
We introduce the well-tempered ensemble (WTE) which is the biased ensemble sampled by well-tempered metadynamics when the energy is used as collective variable. WTE can be designed so as to have approximately the same average energy as the canonical ensemble but much larger fluctuations. These two properties lead to an extremely fast exploration of phase space. An even greater efficiency is obtained when WTE is combined with parallel tempering. Unbiased Boltzmann averages are computed on the fly by a recently developed reweighting method [M. Bonomi , J. Comput. Chem. 30, 1615 (2009)JCCHDD0192-865110.1002/jcc.21305]. We apply WTE and its parallel tempering variant to the 2d Ising model and to a Gō model of HIV protease, demonstrating in these two representative cases that convergence is accelerated by orders of magnitude.
Xue, Xiaoming; Zhou, Jianzhong; Xu, Yanhe; Zhu, Wenlong; Li, Chaoshun
2015-10-01
Ensemble empirical mode decomposition (EEMD) represents a significant improvement over the original empirical mode decomposition (EMD) method for eliminating the mode mixing problem. However, the added white noises generate some tough problems, including the high computational cost, the determination of the two critical parameters (the amplitude of the added white noise and the number of ensemble trials), and the contamination of residue noise in the signal reconstruction. To solve these problems, an adaptively fast EEMD (AFEEMD) method combined with complementary EEMD (CEEMD) is proposed in this paper. In the proposed method, the two critical parameters are fixed at 0.01 times the standard deviation of the original signal and two ensemble trials, respectively. Instead, the upper frequency limit of the added white noise is the key parameter which needs to be prescribed beforehand. Unlike the original EEMD method, only two high-frequency white noises are added, in anti-phase, to the signal under investigation in AFEEMD. Furthermore, an index termed the relative root-mean-square error is employed for the adaptive selection of the proper upper frequency limit of the added white noises. A simulation test and vibration-signal-based fault diagnosis of rolling element bearings under different fault types are utilized to demonstrate the feasibility and effectiveness of the proposed method. The analysis results indicate that the AFEEMD method represents a sound improvement over the original EEMD method and has strong practicability.
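The reason two anti-phase trials suffice can be seen without implementing EMD at all: averaging a trial carrying noise +n with a trial carrying noise -n cancels the added noise exactly in the reconstruction. The sketch below demonstrates only this cancellation property (in the real method each noisy trial is first decomposed by EMD, so the cancellation of the residue is approximate rather than exact; the signal is an invented example):

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 1000)
signal = np.sin(2 * np.pi * 5 * t)

# noise amplitude fixed at 0.01 times the signal's standard deviation,
# matching the parameter choice described in the abstract
noise = 0.01 * np.std(signal) * rng.normal(size=t.size)
trial_plus = signal + noise     # first ensemble trial
trial_minus = signal - noise    # complementary anti-phase trial
reconstruction = 0.5 * (trial_plus + trial_minus)
```

Standard EEMD instead averages over many independent noise realizations, so the residue noise only decays like one over the square root of the number of trials, which is why the complementary-pair strategy is so much cheaper.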
International Nuclear Information System (INIS)
The commercial viability of PEMFC (proton exchange membrane fuel cell) systems depends on effective fault diagnosis technologies. However, many researchers have studied PEMFC systems experimentally without considering certain fault conditions. In this paper, an ANN (artificial neural network) ensemble method is presented that improves the stability and reliability of PEMFC systems. First, a transient model is built, giving the approach flexibility in application to some exceptional conditions; the PEMFC dynamic model is built and simulated using MATLAB. Second, using this model and experiments, the mechanisms of four different faults in PEMFC systems are analyzed in detail. Third, the ANN ensemble for fault diagnosis is built and modeled; this model is trained and tested using the data. The test results show that, compared with the previous method for fault diagnosis of PEMFC systems, the proposed method has a higher diagnostic rate and better generalization ability. Moreover, the partial structure of this method can be altered easily along with changes to the PEMFC systems. In general, this method has value for certain applications. - Highlights: • We analyze the principles and mechanisms of four faults in PEMFC (proton exchange membrane fuel cell) systems. • We design and model an ANN (artificial neural network) ensemble method for the fault diagnosis of PEMFC systems. • This method has a high diagnostic rate and strong generalization ability
International Nuclear Information System (INIS)
A method of canonical transformations, extended to dissipative Hamiltonian systems in a previous article, is here applied to the behaviour of an extended charge coupled to the electromagnetic field, which is derivable from a Lagrangian function explicitly dependent on time. The generating function of a transformation which decouples the variables of the system is given for an elastic applied force, and hence the constants of motion are found by a general method. Some limit cases are examined. (auth)
Design Hybrid method for intrusion detection using Ensemble cluster classification and SOM network
Directory of Open Access Journals (Sweden)
Deepak Rathore
2012-09-01
Full Text Available In the current scenario of internet technology, security is a big challenge. Network threats from various cyber-attacks can cause loss of system data and degrade the performance of the host computer. In this sense, intrusion detection is a challenging field of research in network security, which has traditionally relied on firewalls and rule-based detection techniques. In this paper we propose an ensemble cluster classification technique using a SOM (self-organizing map) network for the detection of mixed-variable data generated by malicious software for attack purposes in a host system. In our methodology, the SOM network controls the iteration of distances over the different parameters of the ensemble. Our experimental results show better empirical performance on the KDD Cup 99 data set in comparison with existing ensemble classifiers.
Lee, Mark D; Ruostekoski, Janne
2016-01-01
We derive equations for the strongly coupled system of light and dense atomic ensembles. The formalism includes an arbitrary internal level structure for the atoms and is not restricted to weak excitation of atoms by light. In the low light intensity limit for atoms with a single electronic ground state, the full quantum field-theoretical representation of the model can be solved exactly by means of classical stochastic electrodynamics simulations for stationary atoms that represent cold atomic ensembles. Simulations for the optical response of atoms in a quantum degenerate regime require one to synthesize a stochastic ensemble of atomic positions that generates the corresponding quantum statistical position correlations between the atoms. In the case of multiple ground levels or at light intensities where saturation becomes important, the classical simulations require approximations that neglect quantum fluctuations between the levels. We show how the model is extended to incorporate corrections due to quant...
Energy Technology Data Exchange (ETDEWEB)
Qin, Hong; Liu, Jian; Xiao, Jianyuan; Zhang, Ruili; He, Yang; Wang, Yulei; Sun, Yajuan; Burby, Joshua W.; Ellison, Leland; Zhou, Yao
2015-12-14
Particle-in-cell (PIC) simulation is the most important numerical tool in plasma physics. However, its long-term accuracy has not been established. To overcome this difficulty, we developed a canonical symplectic PIC method for the Vlasov-Maxwell system by discretising its canonical Poisson bracket. A fast local algorithm to solve the symplectic implicit time advance is discovered without root searching or global matrix inversion, enabling applications of the proposed method to very large-scale plasma simulations with many (e.g. 10^9) degrees of freedom. The long-term accuracy and fidelity of the algorithm enables us to numerically confirm Mouhot and Villani's theory and conjecture on nonlinear Landau damping over several orders of magnitude using the PIC method, and to calculate the nonlinear evolution of the reflectivity during the mode conversion process from extraordinary waves to Bernstein waves.
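The Vlasov-Maxwell discretization itself is far beyond a snippet, but the long-term fidelity that motivates canonical symplectic integration is easy to see on a toy Hamiltonian system: a symplectic scheme keeps the energy error bounded over arbitrarily many steps, where a non-symplectic explicit scheme would drift. The following harmonic-oscillator sketch is illustrative only and is not the paper's algorithm:

```python
import numpy as np

def symplectic_euler(q, p, dt, n_steps, omega=1.0):
    """Canonical (semi-implicit) symplectic Euler for H = p^2/2 + omega^2 q^2/2.
    The kick-then-drift update preserves the symplectic structure, so the
    energy error stays bounded for arbitrarily long integrations."""
    for _ in range(n_steps):
        p -= dt * omega**2 * q   # kick:  dp/dt = -dH/dq
        q += dt * p              # drift: dq/dt = +dH/dp
    return q, p

# long run: 10^5 steps at dt = 0.01, starting from (q, p) = (1, 0)
q, p = symplectic_euler(1.0, 0.0, dt=0.01, n_steps=100_000)
energy = 0.5 * p**2 + 0.5 * q**2   # exact initial energy is 0.5
```

An explicit (non-symplectic) Euler step on the same problem multiplies the energy by roughly (1 + dt²·ω²) per step, so over 10^5 steps it would blow up by orders of magnitude; the bounded energy here is the property the canonical symplectic PIC method carries over to plasma simulation.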
Otsuru, Toru; Tomiku, Reiji; Din, Nazli Bin Che; Okamoto, Noriko; Murakami, Masahiko
2009-06-01
An in-situ measurement technique of a material surface normal impedance is proposed. It includes a concept of "ensemble averaged" surface normal impedance that extends the usage of obtained values to various applications such as architectural acoustics and computational simulations, especially those based on the wave theory. The measurement technique itself is a refinement of a method using a two-microphone technique and environmental anonymous noise, or diffused ambient noise, as proposed by Takahashi et al. [Appl. Acoust. 66, 845-865 (2005)]. Measured impedance can be regarded as time-space averaged normal impedance at the material surface. As a preliminary study using numerical simulations based on the boundary element method, normal incidence and random incidence measurements are compared numerically: results clarify that ensemble averaging is an effective mode of measuring sound absorption characteristics of materials with practical sizes in the lower frequency range of 100-1000 Hz, as confirmed by practical measurements. PMID:19507960
Directory of Open Access Journals (Sweden)
González-Martín, M. I.
2016-03-01
Full Text Available The canonical biplot (CB) method is used to determine the discriminatory power of volatile chemical compounds in cheese. These volatile compounds were used as variables in order to differentiate among six groups or populations of cheeses: combinations of two seasons (winter and summer) with three types of cheese (cow's, sheep's and goat's milk). We analyzed a total of 17 volatile compounds by means of gas chromatography coupled with mass detection. The compounds included aldehydes and methyl-aldehydes, alcohols (primary, secondary and branched-chain), ketones, methyl-ketones and esters in winter (WC) and summer (SC) cow's cheeses, winter (WSh) and summer (SSh) sheep's cheeses, and winter (WG) and summer (SG) goat's cheeses. The CB method allows differences to be found as a function of the elaboration of the cheeses and the seasonality of the milk, and allows the separation of the six groups of cheeses, characterizing the specific volatile chemical compounds responsible for such differences.
Dittrich, B.; Höhn, P.A.
2011-01-01
A general canonical formalism for discrete systems is developed which can handle varying phase space dimensions and constraints. The central ingredient is Hamilton's principle function which generates canonical time evolution and ensures that the canonical formalism reproduces the dynamics of the co
Elsheikh, Ahmed H.
2013-06-01
We introduce a nonlinear orthogonal matching pursuit (NOMP) for sparse calibration of subsurface flow models. Sparse calibration is a challenging problem, as the unknowns are both the non-zero components of the solution and their associated weights. NOMP is a greedy algorithm that discovers at each iteration the basis function most correlated with the residual from a large pool of basis functions. The discovered basis (aka support) is augmented across the nonlinear iterations. Once a set of basis functions is selected, the solution is obtained by applying Tikhonov regularization. The proposed algorithm relies on a stochastically approximated gradient using an iterative stochastic ensemble method (ISEM). In the current study, the search space is parameterized using an overcomplete dictionary of basis functions built using the K-SVD algorithm. The proposed algorithm is the first ensemble-based algorithm that tackles the sparse nonlinear parameter estimation problem. © 2013 Elsevier Ltd.
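The classical (linear) orthogonal matching pursuit that NOMP generalizes can be sketched in a few lines: greedily pick the dictionary column most correlated with the residual, then refit all selected coefficients by least squares. The K-SVD dictionary construction and the ISEM-approximated gradients of the nonlinear version are not shown; the toy problem is invented:

```python
import numpy as np

def omp(A, y, n_nonzero):
    """Classical orthogonal matching pursuit. NOMP replaces the exact
    correlations A^T r with stochastically approximated gradients so the
    same greedy support-growing idea works for nonlinear forward models."""
    support = []
    x = np.zeros(A.shape[1])
    residual = y.copy()
    for _ in range(n_nonzero):
        j = int(np.argmax(np.abs(A.T @ residual)))   # most correlated column
        if j not in support:
            support.append(j)
        # refit all selected coefficients jointly by least squares
        coef = np.linalg.lstsq(A[:, support], y, rcond=None)[0]
        x[:] = 0.0
        x[support] = coef
        residual = y - A @ x
    return x, support

# toy 2-sparse recovery problem with a random Gaussian dictionary
rng = np.random.default_rng(0)
A = rng.normal(size=(30, 10))
x_true = np.zeros(10)
x_true[2], x_true[7] = 1.5, -2.0
y = A @ x_true
x_hat, support = omp(A, y, n_nonzero=2)
```

With noiseless data and a well-conditioned dictionary, the greedy selection recovers the true support and the least-squares refit reproduces the true coefficients.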
2002-01-01
On the NYYD Ensemble duo Traksmann-Lukk and E.-S. Tüür's work "Symbiosis", which has also been recorded on the recently released NYYD Ensemble CD. On 2 March in the small hall of the Rakvere Theatre and on 3 March in the Rotermann Salt Storage; the programme includes Tüür, Kaumann, Berio, Reich, Yun, Hauta-aho and Buckinx.
Directory of Open Access Journals (Sweden)
J. H. Lee
2012-04-01
Aerodynamic roughness height (Z_{om}) is a key parameter required in land surface hydrological models, since errors in heat flux estimation largely depend on accurate optimization of this parameter. Despite its significance, it remains an uncertain parameter that is not easily determined. This is mostly because of the non-linear relationship in Monin-Obukhov Similarity (MOS) and the unknown vertical characteristics of vegetation. Previous studies determined aerodynamic roughness using the traditional wind profile method, remotely sensed vegetation indices, minimization of a cost function over the MOS relationship, or linear regression. However, these are complicated procedures that presume high accuracy for several other related parameters embedded in the MOS equations. In order to simplify the procedure and reduce the number of parameters needed, this study suggests a new approach to extract the aerodynamic roughness parameter via an Ensemble Kalman Filter (EnKF) that accommodates non-linearity and requires only one or two heat flux measurements. So far, to our knowledge, no previous study has applied the EnKF to aerodynamic roughness estimation, while the majority of data assimilation studies have paid attention to land surface state variables such as soil moisture or land surface temperature. This approach was applied to grassland in a semi-arid Tibetan area and to maize under moderately wet conditions in Italy. It was demonstrated that the aerodynamic roughness parameter can be inversely tracked from data-assimilated heat flux analysis. The aerodynamic roughness height estimated in this approach was consistent with the eddy covariance result and literature values. Consequently, this newly estimated input adjusted the sensible heat flux overestimated and the latent heat flux underestimated by the original Surface Energy Balance System (SEBS) model, suggesting better heat flux estimation especially during the summer monsoon period. The advantage of this approach over other methodologies is
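The EnKF update underlying such parameter retrieval can be illustrated for a single scalar parameter with a toy linear "flux" model. All names and values below are hypothetical; the paper's observation operator is the nonlinear MOS/SEBS flux model, not this stand-in:

```python
import random

random.seed(0)

def enkf_update(ens, obs, obs_err, h):
    """One perturbed-observation EnKF analysis step for a scalar parameter."""
    n = len(ens)
    hx = [h(t) for t in ens]                     # predicted observations
    t_bar = sum(ens) / n
    h_bar = sum(hx) / n
    cov_th = sum((t - t_bar) * (y - h_bar) for t, y in zip(ens, hx)) / (n - 1)
    var_h = sum((y - h_bar) ** 2 for y in hx) / (n - 1)
    gain = cov_th / (var_h + obs_err ** 2)       # Kalman gain
    return [t + gain * (obs + random.gauss(0.0, obs_err) - y)
            for t, y in zip(ens, hx)]

h = lambda z: 3.0 * z + 1.0          # hypothetical stand-in for the flux model
truth, obs_err = 0.5, 0.05
obs = h(truth)                       # one flux "measurement"
ens = [random.gauss(0.0, 1.0) for _ in range(200)]   # broad prior ensemble
for _ in range(5):                   # assimilate the observation a few times
    ens = enkf_update(ens, obs, obs_err, h)
mean = sum(ens) / len(ens)           # ensemble mean converges toward the truth
```

Note that the filter only needs ensemble evaluations of h, never its derivative, which is what makes it attractive for non-linear operators like MOS.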
Babaei, Masoud; Pan, Indranil
2016-06-01
In this paper we defined a relatively complex reservoir engineering optimization problem of maximizing the net present value of the hydrocarbon production in a water flooding process by controlling the water injection rates in multiple control periods. We assessed the performance of a number of response surface surrogate models and their ensembles which are combined by Dempster-Shafer theory and Weighted Averaged Surrogates as found in contemporary literature works. Most of these ensemble methods are based on the philosophy that multiple weak learners can be leveraged to obtain one strong learner which is better than the individual weak ones. Even though these techniques have been shown to work well for test bench functions, we found them not offering a considerable improvement compared to an individually used cubic radial basis function surrogate model. Our simulations on two and three dimensional cases, with varying number of optimization variables suggest that cubic radial basis functions-based surrogate model is reliable, outperforms Kriging surrogates and multivariate adaptive regression splines, and if it does not outperform, it is rarely outperformed by the ensemble surrogate models.
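A cubic radial basis function surrogate of the kind found most reliable here can be sketched in one dimension: fit weights so that s(x) = sum_j w_j |x - x_j|^3 interpolates the sampled simulator outputs. The quadratic "simulator" below is a hypothetical stand-in; a production surrogate would typically add a polynomial tail and regularization:

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for small dense systems."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def fit_cubic_rbf(xs, ys):
    """Interpolating surrogate s(x) = sum_j w_j * |x - x_j|**3."""
    A = [[abs(xi - xj) ** 3 for xj in xs] for xi in xs]
    w = solve(A, ys)
    return lambda x: sum(wj * abs(x - xj) ** 3 for wj, xj in zip(w, xs))

xs = [0.0, 0.5, 1.0, 1.5]            # sampled control settings
ys = [x * x for x in xs]             # expensive simulator stand-in: f(x) = x**2
s = fit_cubic_rbf(xs, ys)            # cheap surrogate for f
```

Once fitted, the optimizer queries the cheap surrogate s instead of the expensive reservoir simulator.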
Directory of Open Access Journals (Sweden)
Xiaoning Pan
2015-04-01
Model performance of the partial least squares (PLS) method alone and of bagging-PLS was investigated in online near-infrared (NIR) sensor monitoring of a pilot-scale extraction process for Fructus aurantii. High-performance liquid chromatography (HPLC) was used as a reference method to identify the active pharmaceutical ingredients: naringin, hesperidin and neohesperidin. Several preprocessing methods and the synergy interval partial least squares (SiPLS) and moving window partial least squares (MWPLS) variable selection methods were compared. Single quantification models (PLS) and ensemble methods combined with partial least squares (bagging-PLS) were developed for quantitative analysis of naringin, hesperidin and neohesperidin. SiPLS was compared to SiPLS combined with bagging-PLS. Final results showed the root mean square error of prediction (RMSEP) of bagging-PLS to be lower than that of PLS regression alone. For this reason, an ensemble method with an online NIR sensor is proposed here as a means of monitoring the pilot-scale extraction process in Fructus aurantii, which may also constitute a suitable strategy for online NIR monitoring of CHM.
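Bagging in the bagging-PLS sense (fit one model per bootstrap resample of the calibration set, then average the predictions) can be sketched with an ordinary least-squares line standing in for the PLS regression. The data below are a hypothetical toy, not the paper's NIR/HPLC calibration set:

```python
import random

random.seed(1)

def fit_line(pts):
    """Ordinary least-squares slope and intercept for (x, y) pairs."""
    n = len(pts)
    mx = sum(x for x, _ in pts) / n
    my = sum(y for _, y in pts) / n
    sxx = sum((x - mx) ** 2 for x, _ in pts)
    sxy = sum((x - mx) * (y - my) for x, y in pts)
    b = sxy / sxx
    return b, my - b * mx

def bagging_predict(data, x_new, n_models=50):
    """Average the predictions of models fit on bootstrap resamples."""
    preds = []
    for _ in range(n_models):
        boot = [random.choice(data) for _ in data]   # resample with replacement
        b, a = fit_line(boot)
        preds.append(a + b * x_new)
    return sum(preds) / len(preds)

# toy calibration data: y = 2x + noise (stand-in for NIR spectra vs. HPLC)
data = [(x / 10, 2 * x / 10 + random.gauss(0.0, 0.1)) for x in range(20)]
pred = bagging_predict(data, 1.0)    # bagged prediction near 2.0
```

Averaging over resamples reduces the variance of the individual fits, which is the mechanism behind the lower RMSEP reported for bagging-PLS.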
Ensemble Equivalence for Distinguishable Particles
Directory of Open Access Journals (Sweden)
Antonio Fernández-Peralta
2016-07-01
Statistics of distinguishable particles has become relevant in systems of colloidal particles and in the context of applications of statistical mechanics to complex networks. In this paper, we present evidence that a commonly used expression for the partition function of a system of distinguishable particles leads to huge fluctuations of the number of particles in the grand canonical ensemble and, consequently, to nonequivalence of statistical ensembles. We show that the alternative definition of the partition function, which naturally includes Boltzmann's correct counting factor for distinguishable particles, solves the problem and restores ensemble equivalence. Finally, we also show that this choice for the partition function does not produce any inconsistency for a system of distinguishable localized particles, where the single-particle partition function is not extensive.
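The contrast between the two partition functions can be checked numerically: with Z_N = z^N the grand-canonical number distribution is geometric and the relative fluctuations stay of order one, while with Z_N = z^N/N! it is Poisson and the variance equals the mean. A sketch with a hypothetical fugacity z = 0.9:

```python
def number_stats(weights):
    """Mean and variance of particle number N given unnormalized weights P(N)."""
    Z = sum(weights)
    mean = sum(n * w for n, w in enumerate(weights)) / Z
    var = sum(n * n * w for n, w in enumerate(weights)) / Z - mean ** 2
    return mean, var

z, nmax = 0.9, 300
no_factorial, with_factorial = [1.0], [1.0]
for n in range(1, nmax):
    no_factorial.append(no_factorial[-1] * z)           # Z_N = z^N
    with_factorial.append(with_factorial[-1] * z / n)   # Z_N = z^N / N!

m1, v1 = number_stats(no_factorial)     # geometric: huge relative fluctuations
m2, v2 = number_stats(with_factorial)   # Poisson: variance equals mean
```

Here m1 = z/(1-z) = 9 with v1 = z/(1-z)^2 = 90, so the number fluctuations remain comparable to the mean itself, whereas the 1/N! choice gives m2 = v2 = z.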
Directory of Open Access Journals (Sweden)
Lina Zhang
2015-09-01
Bacteriophage virion proteins and non-virion proteins have distinct functions in biological processes, such as specificity determination for host bacteria, bacteriophage replication and transcription. Accurate identification of bacteriophage virion proteins from bacteriophage protein sequences is significant for understanding the complex virulence mechanisms in host bacteria and the influence of bacteriophages on the development of antibacterial drugs. In this study, an ensemble method for bacteriophage virion protein prediction from bacteriophage protein sequences is put forward with hybrid feature spaces incorporating CTD (composition, transition and distribution), bi-profile Bayes, PseAAC (pseudo-amino acid composition) and PSSM (position-specific scoring matrix). In 10-fold cross-validation on the training dataset, the presented method achieves a satisfactory prediction result with a sensitivity of 0.870, a specificity of 0.830, an accuracy of 0.850 and a Matthew's correlation coefficient (MCC) of 0.701. To evaluate the prediction performance objectively, an independent testing dataset is used to evaluate the proposed method. Encouragingly, our proposed method performs better than previous studies, with a sensitivity of 0.853, a specificity of 0.815, an accuracy of 0.831 and an MCC of 0.662 on the independent testing dataset. These results suggest that the proposed method can be a potential candidate for bacteriophage virion protein prediction, which may provide a useful tool to find novel antibacterial drugs and to understand the relationship between bacteriophages and host bacteria. For the convenience of experimental scientists, a user-friendly and publicly accessible web-server for the proposed ensemble method has been established.
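Of the feature groups listed, plain amino-acid composition is the simplest to illustrate. The sequence below is a hypothetical toy; the paper's actual feature space combines CTD, bi-profile Bayes, PseAAC and PSSM features:

```python
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def composition(seq):
    """Fraction of each of the 20 standard amino acids in a protein sequence."""
    seq = seq.upper()
    return {aa: seq.count(aa) / len(seq) for aa in AMINO_ACIDS}

feats = composition("MKLVINGKTLKGEITVEG")    # hypothetical toy sequence
vector = [feats[aa] for aa in AMINO_ACIDS]   # fixed-order feature vector
```

Feature vectors built this way (one fixed position per amino acid) are what the downstream classifier ensemble consumes.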
Directory of Open Access Journals (Sweden)
Marin-Garcia Pablo
2010-05-01
Background: The maturing field of genomics is rapidly increasing the number of sequenced genomes and producing more information from those previously sequenced. Much of this additional information is variation data derived from sampling multiple individuals of a given species with the goal of discovering new variants and characterising the population frequencies of the variants that are already known. These data have immense value for many studies, including those designed to understand evolution and connect genotype to phenotype. Maximising the utility of the data requires that it be stored in an accessible manner that facilitates the integration of variation data with other genome resources such as gene annotation and comparative genomics. Description: The Ensembl project provides comprehensive and integrated variation resources for a wide variety of chordate genomes. This paper provides a detailed description of the sources of data and the methods for creating the Ensembl variation databases. It also explores the utility of the information by explaining the range of query options available, from using interactive web displays, to online data mining tools, to connecting directly to the data servers programmatically. It gives an overview of the variation resources and future plans for expanding the variation data within Ensembl. Conclusions: Variation data are an important key to understanding the functional and phenotypic differences between individuals. The development of new sequencing and genotyping technologies is greatly increasing the amount of variation data known for almost all genomes. The Ensembl variation resources are integrated into the Ensembl genome browser and provide a comprehensive way to access these data in the context of a widely used genome bioinformatics system. All Ensembl data are freely available at http://www.ensembl.org and from the public MySQL database server at ensembldb.ensembl.org.
Energy Technology Data Exchange (ETDEWEB)
Yu, Lifeng, E-mail: yu.lifeng@mayo.edu; Vrieze, Thomas J.; Leng, Shuai; Fletcher, Joel G.; McCollough, Cynthia H. [Department of Radiology, Mayo Clinic, Rochester, Minnesota 55905 (United States)
2015-05-15
Purpose: The spatial resolution of iterative reconstruction (IR) in computed tomography (CT) is contrast- and noise-dependent because of the nonlinear regularization. Due to the severe noise contamination, it is challenging to perform precise spatial-resolution measurements at very low-contrast levels. The purpose of this study was to measure the spatial resolution of a commercially available IR method using ensemble-averaged images acquired from repeated scans. Methods: A low-contrast phantom containing three rods (7, 14, and 21 HU below background) was scanned on a 128-slice CT scanner at three dose levels (CTDIvol = 16, 8, and 4 mGy). Images were reconstructed using two filtered-backprojection (FBP) kernels (B40 and B20) and a commercial IR method (sinogram affirmed iterative reconstruction, SAFIRE, Siemens Healthcare) with two strength settings (I40-3 and I40-5). The same scan was repeated 100 times at each dose level. The modulation transfer function (MTF) was calculated based on the edge profile measured on the ensemble-averaged images. Results: The spatial resolution of the two FBP kernels, B40 and B20, remained relatively constant across contrast and dose levels. However, the spatial resolution of the two IR kernels degraded relative to FBP as contrast or dose level decreased. For a given dose level at 16 mGy, the MTF50% value normalized to the B40 kernel decreased from 98.4% at 21 HU to 88.5% at 7 HU for I40-3 and from 97.6% to 82.1% for I40-5. At 21 HU, the relative MTF50% value decreased from 98.4% at 16 mGy to 90.7% at 4 mGy for I40-3 and from 97.6% to 85.6% for I40-5. Conclusions: A simple technique using ensemble averaging from repeated CT scans can be used to measure the spatial resolution of IR techniques in CT at very low contrast levels. The evaluated IR method degraded the spatial resolution at low contrast and high noise levels.
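The central trick, ensemble-averaging repeated noisy acquisitions until a low-contrast edge becomes measurable, can be sketched with a toy 1D edge profile. The 7 HU step and noise level below are hypothetical; the real measurement fits an MTF to the averaged edge rather than a simple step height:

```python
import random

random.seed(2)

def noisy_edge(n=64, contrast=7.0, noise=10.0):
    """One simulated scan: a 'contrast' HU step at the centre, buried in noise."""
    return [(contrast if i >= n // 2 else 0.0) + random.gauss(0.0, noise)
            for i in range(n)]

def ensemble_average(profiles):
    m = len(profiles[0])
    return [sum(p[i] for p in profiles) / len(profiles) for i in range(m)]

scans = [noisy_edge() for _ in range(100)]   # 100 repeated scans
avg = ensemble_average(scans)

# step height recovered from the averaged profile (left mean vs right mean)
left = sum(avg[:32]) / 32
right = sum(avg[32:]) / 32
step = right - left                          # close to the true 7 HU contrast
```

A single scan has per-pixel noise larger than the edge itself; averaging 100 scans cuts the noise by a factor of 10 and makes the step recoverable.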
Enhanced Sampling in the Well-Tempered Ensemble
Bonomi, M.; Parrinello, M
2009-01-01
We introduce the well-tempered ensemble (WTE) which is the biased ensemble sampled by well-tempered metadynamics when the energy is used as collective variable. WTE can be designed so as to have approximately the same average energy as the canonical ensemble but much larger fluctuations. These two properties lead to an extremely fast exploration of phase space. An even greater efficiency is obtained when WTE is combined with parallel tempering. Unbiased Boltzmann averages are computed on the ...
Directory of Open Access Journals (Sweden)
Krasnogor Natalio
2009-10-01
Background: Statistical analysis of DNA microarray data provides a valuable diagnostic tool for the investigation of genetic components of diseases. To take advantage of the multitude of available data sets and analysis methods, it is desirable to combine both different algorithms and data from different studies. Applying ensemble learning, consensus clustering and cross-study normalization methods for this purpose in an almost fully automated process, and linking different analysis modules together under a single interface, would simplify many microarray analysis tasks. Results: We present ArrayMining.net, a web-application for microarray analysis that provides easy access to a wide choice of feature selection, clustering, prediction, gene set analysis and cross-study normalization methods. In contrast to other microarray-related web-tools, multiple algorithms and data sets for an analysis task can be combined using ensemble feature selection, ensemble prediction, consensus clustering and cross-platform data integration. By interlinking different analysis tools in a modular fashion, new exploratory routes become available, e.g. ensemble sample classification using features obtained from a gene set analysis and data from multiple studies. The analysis is further simplified by automatic parameter selection mechanisms and linkage to web tools and databases for functional annotation and literature mining. Conclusion: ArrayMining.net is a free web-application for microarray analysis combining a broad choice of algorithms based on ensemble and consensus methods, using automatic parameter selection and integration with annotation databases.
Directory of Open Access Journals (Sweden)
Eva C Arnspang
The lateral dynamics of proteins and lipids in the mammalian plasma membrane are heterogeneous, likely reflecting both a complex molecular organization and interactions with other macromolecules that reside outside the plane of the membrane. Several methods are commonly used for characterizing the lateral dynamics of lipids and proteins. These experimental and data analysis methods differ in equipment requirements and labeling complexities, and furthermore often give different results. It would therefore be very convenient to have a single method that is flexible in the choice of fluorescent label and labeling densities, from single molecules to ensemble measurements, that can be performed on a conventional wide-field microscope, and that is suitable for fast and accurate analysis. In this work we show that k-space image correlation spectroscopy (kICS) analysis, a technique originally developed for analyzing lateral dynamics in samples labeled at high densities, can also be used for fast and accurate analysis of single-molecule density data of lipids and proteins labeled with quantum dots (QDs). We further used kICS to investigate the effect of label size by comparing the results for a biotinylated lipid labeled at high densities with Atto647N-streptavidin (sAv) or at sparse densities with sAv-QDs. In the latter case, we see that the recovered diffusion rate is two-fold greater for the same lipid in the same cell type when labeled with Atto647N-sAv as compared to sAv-QDs. These data demonstrate that kICS can be used for analysis of single-molecule data and can furthermore bridge between samples with labeling densities ranging from the single-molecule to the ensemble level.
Classifying Linear Canonical Relations
Lorand, Jonathan
2015-01-01
In this Master's thesis, we consider the problem of classifying, up to conjugation by linear symplectomorphisms, linear canonical relations (lagrangian correspondences) from a finite-dimensional symplectic vector space to itself. We give an elementary introduction to the theory of linear canonical relations and present partial results toward the classification problem. This exposition should be accessible to undergraduate students with a basic familiarity with linear algebra.
Improving the sampling efficiency of the Grand Canonical Simulated Quenching approach
International Nuclear Information System (INIS)
Most common atomistic simulation techniques, like molecular dynamics or Metropolis Monte Carlo, operate under a constant interatomic Hamiltonian with a fixed number of atoms. Internal (atom positions or velocities) or external (simulation cell size or geometry) variables are then evolved dynamically or stochastically to yield sampling in different ensembles, such as microcanonical (NVE), canonical (NVT), isothermal-isobaric (NPT), etc. Averages are then taken to compute relevant physical properties. At least two limitations of these standard approaches can seriously hamper their application to many important systems: (1) they do not allow for the exchange of particles with a reservoir, and (2) the sampling efficiency is insufficient to obtain converged results because of the very long intrinsic timescales associated with these quantities. To fix ideas, one might want to identify low (free) energy configurations of grain boundaries (GB). In reality, grain boundaries are in contact with the grains, which act as reservoirs of defects (e.g., vacancies and interstitials). Since the GB can exchange particles with its environment, the most stable configuration cannot provably be found by sampling from the NVE or NVT ensembles alone: one needs to allow the number of atoms in the sample to fluctuate. The first limitation can be circumvented by working in the grand canonical ensemble (μVT) or its derivatives (such as the semi-grand-canonical ensemble, useful for the study of substitutional alloys). Monte Carlo methods have been the first to adapt to this kind of system, where the number of atoms is allowed to fluctuate. Many of these methods are based on the Widom insertion method [Widom63], where the chemical potential of a given chemical species can be inferred from the potential energy changes upon random insertion of a new particle within the simulation cell. Other techniques, such as the Gibbs ensemble Monte Carlo [Panagiotopoulos87] where exchanges of particles are
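Widom insertion is easy to sketch for a toy 1D hard-rod fluid, where exp(-βΔU) for a trial insertion is simply 1 if the new rod fits and 0 if it overlaps an existing one. The configuration below is a fixed hypothetical one with no periodic boundaries; a real calculation averages over configurations sampled from the ensemble:

```python
import math
import random

random.seed(3)

L_box, sigma = 20.0, 1.0                  # box length and hard-rod diameter
rods = [2.0 * i for i in range(10)]       # fixed toy configuration of centres

def insertion_fits(x, rods, sigma):
    """Hard cores: exp(-beta*dU) is 1 iff no existing centre is within sigma."""
    return all(abs(x - r) >= sigma for r in rods)

trials = 100000
accepted = sum(insertion_fits(random.uniform(0.0, L_box), rods, sigma)
               for _ in range(trials))
boltzmann_avg = accepted / trials         # <exp(-beta * dU)> over insertions
mu_excess = -math.log(boltzmann_avg)      # beta * mu_ex (Widom estimator)
```

For this configuration the accessible length is 1 out of 20, so the acceptance ratio is near 0.05 and βμ_ex is near ln 20.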
An introduction to the theory of canonical matrices
Turnbull, H W
2004-01-01
Thorough and self-contained, this penetrating study of the theory of canonical matrices presents a detailed consideration of all the theory's principal features. Topics include elementary transformations and bilinear and quadratic forms; canonical reduction of equivalent matrices; subgroups of the group of equivalent transformations; and rational and classical canonical forms. The final chapters explore several methods of canonical reduction, including those of unitary and orthogonal transformations. 1952 edition. Index. Appendix. Historical notes. Bibliographies. 275 problems.
Relations between canonical and non-canonical inflation
Energy Technology Data Exchange (ETDEWEB)
Gwyn, Rhiannon [Max-Planck-Institut fuer Gravitationsphysik (Albert-Einstein-Institut), Potsdam (Germany); Rummel, Markus [Hamburg Univ. (Germany). 2. Inst. fuer Theoretische Physik; Westphal, Alexander [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany). Theory Group
2012-12-15
We look for potential observational degeneracies between canonical and non-canonical models of inflation of a single field φ. Non-canonical inflationary models are characterized by higher than linear powers of the standard kinetic term X in the effective Lagrangian p(X, φ) and arise for instance in the context of the Dirac-Born-Infeld (DBI) action in string theory. An on-shell transformation is introduced that transforms non-canonical inflationary theories to theories with a canonical kinetic term. The 2-point function observables of the original non-canonical theory and its canonical transform are found to match in the case of DBI inflation.
Ye, Linlin; Yang, Dan; Wang, Xu
2014-06-01
A de-noising method for electrocardiogram (ECG) signals based on ensemble empirical mode decomposition (EEMD) and wavelet threshold de-noising theory is proposed. We decomposed noisy ECG signals with the EEMD and calculated a series of intrinsic mode functions (IMFs). We then selected IMFs and reconstructed them to realize the de-noising of the ECG. The processed ECG signals were filtered again with a wavelet transform using an improved threshold function. In the experiments, the MIT-BIH ECG database was used to evaluate the performance of the proposed method, comparing it with de-noising based on EEMD alone and on the wavelet transform with the improved threshold function alone, in terms of signal-to-noise ratio (SNR) and mean square error (MSE). The results showed that the ECG waveforms de-noised with the proposed method were smooth and that the amplitudes of the ECG features did not attenuate. In conclusion, the method discussed in this paper can realize ECG de-noising while keeping the characteristics of the original ECG signal.
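The threshold step in such pipelines hinges on a threshold function; a sketch of plain soft thresholding applied to hypothetical detail coefficients follows (the paper's improved threshold function is a refinement of this idea and is not reproduced here):

```python
def soft_threshold(coeffs, t):
    """Shrink each coefficient toward zero by t; zero out the small ones."""
    out = []
    for c in coeffs:
        if c > t:
            out.append(c - t)
        elif c < -t:
            out.append(c + t)
        else:
            out.append(0.0)
    return out

# hypothetical detail coefficients: two strong features plus small noise
coeffs = [5.0, -0.1, 0.2, -4.0, 0.05, -0.15]
denoised = soft_threshold(coeffs, 0.3)   # noise coefficients go to zero
```

Coefficients below the threshold (presumed noise) are removed outright, while strong coefficients (presumed signal, e.g. QRS-related) are kept with a small shrinkage.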
Canon Busting and Cultural Literacy.
National Forum: Phi Kappa Phi Journal, 1989
1989-01-01
Articles on literary canon include: "Educational Anomie" (Stephen W. White); "Why Western Civilization?" (William J. Bennett); "Peace Plan for Canon Wars" (Gerald Graff, William E. Cain); "Canons, Cultural Literacy, and Core Curriculum" (Lynne V. Cheney); "Canon Busting: Basic Issues" (Stanley Fish); "A Truce in Curricular Wars" (Chester E. Finn,…
Energy Technology Data Exchange (ETDEWEB)
Juxiu Tong; Bill X. Hu; Hai Huang; Luanjin Guo; Jinzhong Yang
2014-03-01
With the growing importance of water resources in the world, remediation of anthropogenic contamination due to reactive solute transport becomes even more important. A good understanding of reactive rate parameters such as kinetic parameters is the key to accurately predicting reactive solute transport processes and designing corresponding remediation schemes. For modeling reactive solute transport, it is very difficult to estimate chemical reaction rate parameters due to the complexity of chemical reaction processes and limited available data. To obtain the reactive rate parameters for modeling reactive urea hydrolysis transport and to achieve more accurate predictions of the chemical concentrations, we developed a data assimilation method based on an ensemble Kalman filter (EnKF) to calibrate reactive rate parameters for modeling urea hydrolysis transport in a synthetic one-dimensional column at laboratory scale and to update the modeling prediction. We applied a constrained EnKF method to impose constraints on the updated reactive rate parameters and the predicted solute concentrations based on their physical meanings after the data assimilation calibration. From the study results we concluded that we could efficiently improve the chemical reactive rate parameters with the data assimilation method via the EnKF, and at the same time improve the solute concentration prediction. The more data we assimilated, the more accurate the reactive rate parameters and concentration predictions. The filter divergence problem was also solved in this study.
You, Setthivoine
2015-11-01
A new canonical field theory has been developed to help interpret the interaction between plasma flows and magnetic fields. The theory augments the Lagrangian of general dynamical systems to rigorously demonstrate that canonical helicity transport is valid across single particle, kinetic and fluid regimes, on scales ranging from classical to general relativistic. The Lagrangian is augmented with two extra terms that represent the interaction between the motion of matter and electromagnetic fields. The dynamical equations can then be re-formulated as a canonical form of Maxwell's equations or a canonical form of Ohm's law valid across all non-quantum regimes. The field theory rigorously shows that helicity can be preserved in kinetic regimes and not only fluid regimes, that helicity transfer between species governs the formation of flows or magnetic fields, and that helicity changes little compared to total energy only if density gradients are shallow. The theory suggests a possible interpretation of particle energization partitioning during magnetic reconnection as canonical wave interactions. This work is supported by US DOE Grant DE-SC0010340.
Canonical phylogenetic ordination.
Giannini, Norberto P
2003-10-01
A phylogenetic comparative method is proposed for estimating historical effects on comparative data using the partitions that compose a cladogram, i.e., its monophyletic groups. Two basic matrices, Y and X, are defined in the context of an ordinary linear model. Y contains the comparative data measured over t taxa. X consists of an initial tree matrix that contains all the xj monophyletic groups (each coded separately as a binary indicator variable) of the phylogenetic tree available for those taxa. The method seeks to define the subset of groups, i.e., a reduced tree matrix, that best explains the patterns in Y. This definition is accomplished via regression or canonical ordination (depending on the dimensionality of Y) coupled with Monte Carlo permutations. It is argued here that unrestricted permutations (i.e., under an equiprobable model) are valid for testing this specific kind of groupwise hypothesis. Phylogeny is either partialled out or, more properly, incorporated into the analysis in the form of component variation. Direct extensions allow for testing ecomorphological data controlled by phylogeny in a variation partitioning approach. Currently available statistical techniques make this method applicable under most univariate/multivariate models and metrics; two-way phylogenetic effects can be estimated as well. The simplest case (univariate Y), tested with simulations, yielded acceptable type I error rates. Applications presented include examples from evolutionary ethology, ecology, and ecomorphology. Results showed that the new technique detected previously overlooked variation clearly associated with phylogeny and that many phylogenetic effects on comparative data may occur at particular groups rather than across the entire tree. PMID:14530135
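The Monte Carlo permutation step, comparing the observed group effect against effects under random relabelings, can be sketched for a single binary indicator. The trait data below are a hypothetical toy; the paper applies this to tree-derived indicator matrices within regression or canonical ordination:

```python
import random

random.seed(4)

def mean_diff(y, labels):
    """Difference in mean trait value between group 1 and group 0."""
    in_g = [v for v, g in zip(y, labels) if g == 1]
    out_g = [v for v, g in zip(y, labels) if g == 0]
    return sum(in_g) / len(in_g) - sum(out_g) / len(out_g)

def permutation_pvalue(y, groups, n_perm=2000):
    """Unrestricted (equiprobable) permutations of the group labels."""
    observed = abs(mean_diff(y, groups))
    labels = list(groups)
    hits = 0
    for _ in range(n_perm):
        random.shuffle(labels)
        if abs(mean_diff(y, labels)) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)

# toy comparative data: one clade (label 1) with clearly larger trait values
y = [5.1, 4.9, 5.3, 5.2, 1.0, 1.2, 0.9, 1.1]
groups = [1, 1, 1, 1, 0, 0, 0, 0]
p = permutation_pvalue(y, groups)    # small p: group explains the trait
```

With 4-vs-4 labels only 2 of the 70 distinct relabelings reproduce a difference this extreme, so the permutation p-value settles near 2/70.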
Fu, Mao-Jing; Zhuang, Jian-Jun; Hou, Feng-Zhen; Zhan, Qing-Bo; Shao, Yi; Ning, Xin-Bao
2010-05-01
In this paper, the ensemble empirical mode decomposition (EEMD) is applied to analyse accelerometer signals collected during normal human walking. First, the self-adaptive feature of EEMD is utilised to decompose the accelerometer signals, thus sifting out several intrinsic mode functions (IMFs) at disparate scales. Then, gait series can be extracted through peak detection from the eigen IMF that best represents gait rhythmicity. Compared with the method based on the empirical mode decomposition (EMD), the EEMD-based method has the following advantages: it remarkably improves the detection rate of peak values hidden in the original accelerometer signal, even when the signal is severely contaminated by intermittent noise, and it effectively prevents the mode mixing found in the process of EMD. A reasonable selection of parameters for the stop-filtering criteria can improve the calculation speed of the EEMD-based method. Meanwhile, the endpoint effect can be suppressed by using an autoregressive moving average (ARMA) model to extend a short time series in both directions. The results suggest that EEMD is a powerful tool for extraction of gait rhythmicity and it also provides valuable clues for extracting the eigen rhythm of other physiological signals.
An Ensemble Method based on Particle of Swarm for the Reduction of Noise, Outlier and Core Point
Directory of Open Access Journals (Sweden)
Satish Dehariya
2013-03-01
The majority voting and accurate prediction of classification algorithms in data mining are challenging tasks for data classification. To improve data classification, different classifiers are used together in an ensemble process. The ensemble process increases the classification ratio of the classification algorithm; such a paradigm of classification algorithms is called an ensemble classifier. Ensemble learning is a technique to improve the performance and accuracy of classification and prediction of machine learning algorithms. Many researchers have proposed models for ensemble classifiers that merge different classification algorithms, but the performance of ensemble algorithms suffers from outlier, noise and core point problems in the data arising from the feature selection process. In this paper we combined core, outlier and noise data (COB) for the feature selection process of the ensemble model. The process of best feature selection with an appropriate classifier uses particle swarm optimization. Empirical results with UCI datasets, with prediction on the Ecoli and glass datasets, indicate that the proposed COB model optimization algorithm can help to improve accuracy and classification.
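The particle swarm optimizer at the heart of the proposed feature selection reduces to a short loop: each particle is pulled toward its own best position and the swarm's global best. A minimal continuous 1D sketch follows (the paper applies PSO to the discrete feature/classifier selection problem, not this toy function):

```python
import random

random.seed(5)

def pso(f, lo, hi, n_particles=20, n_iter=100, w=0.7, c1=1.5, c2=1.5):
    """Minimize f on [lo, hi] with a basic 1D particle swarm."""
    xs = [random.uniform(lo, hi) for _ in range(n_particles)]
    vs = [0.0] * n_particles
    pbest = xs[:]                         # each particle's best position so far
    gbest = min(pbest, key=f)             # swarm's best position so far
    for _ in range(n_iter):
        for i in range(n_particles):
            r1, r2 = random.random(), random.random()
            vs[i] = (w * vs[i]
                     + c1 * r1 * (pbest[i] - xs[i])   # pull toward personal best
                     + c2 * r2 * (gbest - xs[i]))     # pull toward global best
            xs[i] = min(hi, max(lo, xs[i] + vs[i]))
            if f(xs[i]) < f(pbest[i]):
                pbest[i] = xs[i]
        gbest = min(pbest, key=f)
    return gbest

best = pso(lambda x: (x - 3.0) ** 2, -10.0, 10.0)   # minimum at x = 3
```

For feature selection the positions would be binary masks and the objective a cross-validated classifier score, but the velocity/best-position bookkeeping is the same.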
Canonical affordances in context
Directory of Open Access Journals (Sweden)
Alan Costall
2012-12-01
Full Text Available James Gibson’s concept of affordances was an attempt to undermine the traditional dualism of the objective and subjective. Gibson himself insisted on the continuity of “affordances in general” and those attached to human artifacts. However, a crucial distinction needs to be drawn between “affordances in general” and the “canonical affordances” that are connected primarily to artifacts. Canonical affordances are conventional and normative. It is only in such cases that it makes sense to talk of the affordance of the object. Chairs, for example, are for sitting-on, even though we may also use them in many other ways. A good deal of confusion has arisen in the discussion of affordances from (1) the failure to recognize the normative status of canonical affordances and (2) then generalizing from this special case.
Covariant canonical quantization
Energy Technology Data Exchange (ETDEWEB)
Hippel, G.M. von [University of Regina, Department of Physics, Regina, Saskatchewan (Canada); Wohlfarth, M.N.R. [Universitaet Hamburg, Institut fuer Theoretische Physik, Hamburg (Germany)
2006-09-15
We present a manifestly covariant quantization procedure based on the de Donder-Weyl Hamiltonian formulation of classical field theory. This procedure agrees with conventional canonical quantization only if the parameter space is d=1 dimensional time. In d>1 quantization requires a fundamental length scale, and any bosonic field generates a spinorial wave function, leading to the purely quantum-theoretical emergence of spinors as a byproduct. We provide a probabilistic interpretation of the wave functions for the fields, and we apply the formalism to a number of simple examples. These show that covariant canonical quantization produces both the Klein-Gordon and the Dirac equation, while also predicting the existence of discrete towers of identically charged fermions with different masses. Covariant canonical quantization can thus be understood as a “first” or pre-quantization within the framework of conventional QFT. (orig.)
Covariant canonical quantization
Von Hippel, G M; Hippel, Georg M. von; Wohlfarth, Mattias N.R.
2006-01-01
We present a manifestly covariant quantization procedure based on the de Donder-Weyl Hamiltonian formulation of classical field theory. Covariant canonical quantization agrees with conventional canonical quantization only if the parameter space is d=1 dimensional time. In d>1 quantization requires a fundamental length scale, and any bosonic field generates a spinorial wave function, leading to the purely quantum-theoretical emergence of spinors as a byproduct. We provide a probabilistic interpretation of the wave functions for the fields, and apply the formalism to a number of simple examples. These show that covariant canonical quantization produces both the Klein-Gordon and the Dirac equation, while also predicting the existence of discrete towers of identically charged fermions with different masses.
A Classifier Ensemble of Binary Classifier Ensembles
Directory of Open Access Journals (Sweden)
Sajad Parvin
2011-09-01
Full Text Available This paper proposes an innovative combinational algorithm to improve performance in multiclass classification domains. Because a more accurate classifier yields better classification performance, researchers in the computing community have sought to improve the accuracy of classifiers. However, although a better-performing classifier is by definition a more accurate one, turning to the single best classifier is not always the best option for obtaining the best classification quality: an alternative is to use many inaccurate or weak classifiers, each specialized for a sub-space of the problem space, and to use their consensus vote as the final classifier. This paper therefore proposes a heuristic classifier ensemble to improve the performance of classification learning. It deals especially with multiclass problems, whose aim is to learn the boundaries of each class from many other classes. Based on the concept of multiclass problems, classifiers are divided into two categories: pairwise classifiers and multiclass classifiers. The aim of a pairwise classifier is to separate one class from another. Because pairwise classifiers train only to discriminate between two classes, their decision boundaries are simpler and more effective than those of multiclass classifiers. The main idea behind the proposed method is to focus classifiers on the erroneous regions of the problem space and to use the pairwise classification concept instead of the multiclass classification concept. Although the use of pairwise classification instead of multiclass classification is not new, we propose a new pairwise classifier ensemble of much lower order. In this paper, first the most confused classes are determined and then several ensembles of classifiers are created. The classifiers of each of these ensembles work jointly using majority-weighted votes. The results of these ensembles
Ensemble approach combining multiple methods improves human transcription start site prediction.
LENUS (Irish Health Repository)
Dineen, David G
2010-01-01
The computational prediction of transcription start sites is an important unsolved problem. Some recent progress has been made, but many promoters, particularly those not associated with CpG islands, are still difficult to locate using current methods. These methods use different features and training sets, along with a variety of machine learning techniques and result in different prediction sets.
Directory of Open Access Journals (Sweden)
S. Skachko
2014-01-01
Full Text Available The Ensemble Kalman filter (EnKF) assimilation method is applied to the tracer transport using the same stratospheric transport model as in the 4D-Var assimilation system BASCOE. This EnKF version of BASCOE was built primarily to avoid the large costs associated with the maintenance of an adjoint model. The EnKF developed in BASCOE accounts for two adjustable parameters: a parameter α controlling the model error term and a parameter r controlling the observational error. The EnKF system is shown to be markedly sensitive to these two parameters, which are adjusted based on the monitoring of a χ2-test measuring the misfit between the control variable and the observations. The performance of the EnKF and 4D-Var versions was estimated through the assimilation of Aura-MLS ozone observations during an 8-month period which includes the formation of the 2008 Antarctic ozone hole. To ensure a proper comparison, despite the fundamental differences between the two assimilation methods, both systems use identical and carefully calibrated input error statistics. We provide the detailed procedure for these calibrations, and compare the two sets of analyses with a focus on the lower and middle stratosphere where the ozone lifetime is much larger than the observational update frequency. Based on the Observation-minus-Forecast statistics, we show that the analyses provided by the two systems are markedly similar, with biases smaller than 5% and standard deviation errors smaller than 10% in most of the stratosphere. Since the biases are markedly similar, they most probably have the same causes: these can be deficiencies in the model and in the observation dataset, but not in the assimilation algorithm nor in the error calibration. The remarkably similar performance also shows that in the context of stratospheric transport, the choice of the assimilation method can be based on application-dependent factors, such as CPU cost or the ability to generate an ensemble
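The role of the two adjustable parameters can be illustrated with a generic stochastic EnKF analysis step. This is a textbook sketch, not the BASCOE implementation: here `alpha` is applied as multiplicative inflation standing in for the model-error term, `r` is the observation-error variance, and all names and the toy two-variable state are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(42)

def enkf_update(ensemble, y_obs, H, r, alpha):
    """Stochastic EnKF analysis step with inflation alpha and obs-error variance r."""
    n_ens = ensemble.shape[1]
    mean = ensemble.mean(axis=1, keepdims=True)
    # Multiplicative inflation of the spread approximates a model-error term.
    ensemble = mean + alpha * (ensemble - mean)
    X = (ensemble - mean) / np.sqrt(n_ens - 1)       # normalized anomalies
    HX = H @ X
    K = X @ HX.T @ np.linalg.inv(HX @ HX.T + r * np.eye(H.shape[0]))
    # Perturbed observations keep the analysis spread statistically consistent.
    y_pert = y_obs[:, None] + np.sqrt(r) * rng.standard_normal((H.shape[0], n_ens))
    return ensemble + K @ (y_pert - H @ ensemble)

# Two-variable state, only the first variable observed.
ens = rng.standard_normal((2, 50)) + np.array([[5.0], [0.0]])
H = np.array([[1.0, 0.0]])
analysis = enkf_update(ens, np.array([6.0]), H, r=0.1, alpha=1.05)
```

With a small observation error (r = 0.1) relative to the prior spread, the analysis mean of the observed variable is pulled close to the observation.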
Xu, Jing; Wang, Zhongbin; Tan, Chao; Si, Lei; Liu, Xinhua
2015-01-01
In order to guarantee the stable operation of shearers and promote construction of an automatic coal mining working face, an online cutting pattern recognition method with high accuracy and speed based on Improved Ensemble Empirical Mode Decomposition (IEEMD) and Probabilistic Neural Network (PNN) is proposed. An industrial microphone is installed on the shearer and the cutting sound is collected as the recognition criterion to overcome the disadvantages of large size, contact measurement and low identification rate of traditional detectors. To avoid end-point effects and remove undesirable intrinsic mode function (IMF) components from the initial signal, IEEMD is conducted on the sound. The end-point continuation based on the practical storage data is performed first to overcome the end-point effect. Next, the average correlation coefficient, which is calculated from the correlation of the first IMF with the others, is introduced to select essential IMFs. Then the energy and standard deviation of the remaining IMFs are extracted as features and PNN is applied to classify the cutting patterns. Finally, a simulation example, with an accuracy of 92.67%, and an industrial application prove the efficiency and correctness of the proposed method. PMID:26528985
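The IMF-selection rule described above (keep the IMFs whose correlation with the first IMF exceeds the average correlation coefficient) can be sketched as below. The synthetic IMFs and the helper name are illustrative assumptions, not the authors' code.

```python
import numpy as np

def select_imfs(imfs):
    """Keep IMFs whose |correlation| with the first IMF is at least the
    average correlation coefficient over all IMFs."""
    corrs = np.array([abs(np.corrcoef(imfs[0], imf)[0, 1]) for imf in imfs])
    threshold = corrs.mean()
    return [i for i, c in enumerate(corrs) if c >= threshold]

t = np.linspace(0, 1, 500)
imfs = np.array([
    np.sin(2 * np.pi * 50 * t),          # IMF 1 (reference component)
    np.sin(2 * np.pi * 50 * t + 0.2),    # strongly correlated with IMF 1
    np.sin(2 * np.pi * 3 * t),           # nearly uncorrelated low-frequency IMF
])
kept = select_imfs(imfs)
```

Only the components sharing the reference rhythm survive; the uncorrelated IMF falls below the average-correlation threshold.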
Wen, Yalu; Lu, Qing
2016-09-01
Although compelling evidence suggests that the genetic etiology of complex diseases could be heterogeneous in subphenotype groups, little attention has been paid to phenotypic heterogeneity in genetic association analysis of complex diseases. Simply ignoring phenotypic heterogeneity in association analysis could result in attenuated estimates of genetic effects and low power of association tests if subphenotypes with similar clinical manifestations have heterogeneous underlying genetic etiologies. To facilitate the family-based association analysis allowing for phenotypic heterogeneity, we propose a clustered multiclass likelihood-ratio ensemble (CMLRE) method. The proposed method provides an alternative way to model the complex relationship between disease outcomes and genetic variants. It allows for heterogeneous genetic causes of disease subphenotypes and can be applied to various pedigree structures. Through simulations, we found CMLRE outperformed the commonly adopted strategies in a variety of underlying disease scenarios. We further applied CMLRE to a family-based dataset from the International Consortium to Identify Genes and Interactions Controlling Oral Clefts (ICOC) to investigate the genetic variants and interactions predisposing to subphenotypes of oral clefts. The analysis suggested that two subphenotypes, nonsyndromic cleft lip without palate (CL) and cleft lip with palate (CLP), shared similar genetic etiologies, while cleft palate only (CP) had its own genetic mechanism. The analysis further revealed that rs10863790 (IRF6), rs7017252 (8q24), and rs7078160 (VAX1) were jointly associated with CL/CLP, while rs7969932 (TBK1), rs227731 (17q22), and rs2141765 (TBK1) jointly contributed to CP.
Directory of Open Access Journals (Sweden)
Jing Xu
2015-10-01
Full Text Available In order to guarantee the stable operation of shearers and promote construction of an automatic coal mining working face, an online cutting pattern recognition method with high accuracy and speed based on Improved Ensemble Empirical Mode Decomposition (IEEMD) and Probabilistic Neural Network (PNN) is proposed. An industrial microphone is installed on the shearer and the cutting sound is collected as the recognition criterion to overcome the disadvantages of large size, contact measurement and low identification rate of traditional detectors. To avoid end-point effects and remove undesirable intrinsic mode function (IMF) components from the initial signal, IEEMD is conducted on the sound. The end-point continuation based on the practical storage data is performed first to overcome the end-point effect. Next, the average correlation coefficient, which is calculated from the correlation of the first IMF with the others, is introduced to select essential IMFs. Then the energy and standard deviation of the remaining IMFs are extracted as features and PNN is applied to classify the cutting patterns. Finally, a simulation example, with an accuracy of 92.67%, and an industrial application prove the efficiency and correctness of the proposed method.
Schneeweis, Lumelle A; Obenauer-Kutner, Linda; Kaur, Parminder; Yamniuk, Aaron P; Tamura, James; Jaffe, Neil; O'Mara, Brian W; Lindsay, Stuart; Doyle, Michael; Bryson, James
2015-12-01
Domain antibodies (dAbs) are single immunoglobulin domains that form the smallest functional unit of an antibody. This study investigates the behavior of these small proteins when covalently attached to the polyethylene glycol (PEG) moiety that is necessary for extending the half-life of a dAb. The effect of the 40 kDa PEG on hydrodynamic properties, particle behavior, and receptor binding of the dAb has been compared by both ensemble solution and surface methods [light scattering, isothermal titration calorimetry (ITC), surface Plasmon resonance (SPR)] and single-molecule atomic force microscopy (AFM) methods (topography, recognition imaging, and force microscopy). The large PEG dominates the properties of the dAb-PEG conjugate such as a hydrodynamic radius that corresponds to a globular protein over four times its size and a much reduced association rate. We have used AFM single-molecule studies to determine the mechanism of PEG-dependent reductions in the effectiveness of the dAb observed by SPR kinetic studies. Recognition imaging showed that all of the PEGylated dAb molecules are active, suggesting that some may transiently become inactive if PEG sterically blocks binding. This helps explain the disconnect between the SPR, determined kinetically, and the force microscopy and ITC results that demonstrated that PEG does not change the binding energy.
Regularized canonical correlation analysis with unlabeled data
Institute of Scientific and Technical Information of China (English)
Xi-chuan ZHOU; Hai-bin SHEN
2009-01-01
In standard canonical correlation analysis (CCA), the data from definite datasets are used to estimate their canonical correlation. In real applications, for example in bilingual text retrieval, a great portion of the data may not be labeled with the dataset it belongs to. This part of the data is called unlabeled data, while the rest, from definite datasets, is called labeled data. We propose a novel method called regularized canonical correlation analysis (RCCA), which makes use of both labeled and unlabeled samples. Specifically, we learn to approximate the canonical correlation as if all data were labeled. Then, we describe a generalization of RCCA for the multi-set situation. Experiments on four real-world datasets, Yeast, Cloud, Iris, and Haberman, demonstrate that, by incorporating the unlabeled data points, the accuracy of correlation coefficients can be improved by over 30%.
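For reference, the quantity being estimated, the first canonical correlation between two views, can be computed directly from covariance matrices. This is plain CCA with a small ridge term for numerical stability, a baseline sketch rather than the paper's RCCA algorithm; all names and the synthetic data are assumptions.

```python
import numpy as np

def cca_first_correlation(X, Y, reg=1e-8):
    """First canonical correlation between X and Y (rows = samples)."""
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    n = len(X)
    Cxx = Xc.T @ Xc / n + reg * np.eye(X.shape[1])
    Cyy = Yc.T @ Yc / n + reg * np.eye(Y.shape[1])
    Cxy = Xc.T @ Yc / n

    def inv_sqrt(C):
        w, V = np.linalg.eigh(C)
        return V @ np.diag(1 / np.sqrt(w)) @ V.T

    # Canonical correlations are the singular values of Cxx^{-1/2} Cxy Cyy^{-1/2}.
    M = inv_sqrt(Cxx) @ Cxy @ inv_sqrt(Cyy)
    return np.linalg.svd(M, compute_uv=False)[0]

rng = np.random.default_rng(0)
z = rng.standard_normal(200)                      # shared latent signal
X = np.column_stack([z + 0.1 * rng.standard_normal(200), rng.standard_normal(200)])
Y = np.column_stack([z + 0.1 * rng.standard_normal(200), rng.standard_normal(200)])
rho = cca_first_correlation(X, Y)
```

Since the first column of each view carries the same latent signal with little noise, the leading canonical correlation comes out close to 1.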
Ensemble Learning Method for Outlier Detection and its Application to Astronomical Light Curves
Nun, Isadora; Protopapas, Pavlos; Sim, Brandon; Chen, Wesley
2016-09-01
Outlier detection is necessary for automated data analysis, with specific applications spanning almost every domain from financial markets to epidemiology to fraud detection. We introduce a novel mixture-of-experts outlier detection model, which uses a dynamically trained, weighted network of five distinct outlier detection methods. After dimensionality reduction, individual outlier detection methods score each data point for “outlierness” in this new feature space. Our model then uses dynamically trained parameters to weigh the scores of each method, yielding a finalized outlier score. We find that the mixture-of-experts model performs, on average, better than any single expert model in identifying both artificially and manually picked outliers. This mixture model is applied to a data set of astronomical light curves, after dimensionality reduction via time series feature extraction. Our model was tested using three fields from the MACHO catalog and generated a list of anomalous candidates. We confirm that the outliers detected using this method belong to rare classes, like Novae, He-burning, and red giant stars; other outlier light curves identified have no available information associated with them. To elucidate their nature, we created a website containing the light-curve data and information about these objects. Users can attempt to classify the light curves, give conjectures about their identities, and sign up for follow-up messages about the progress made on identifying these objects. This user-submitted data can be used to further train our mixture-of-experts model. Our code is publicly available to all who are interested.
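The score-combination idea can be sketched in a few lines: each detector scores every point, the scores are normalised to a common scale, and a weighted average gives the final outlier score. This is a minimal sketch with fixed weights and two trivial "experts"; the paper trains the weights dynamically and uses five real detection methods.

```python
import numpy as np

def ensemble_outlier_scores(data, detectors, weights):
    """Weighted combination of normalised per-detector outlier scores."""
    scores = []
    for detect in detectors:
        s = detect(data)
        s = (s - s.min()) / (s.max() - s.min() + 1e-12)   # normalise to [0, 1]
        scores.append(s)
    return np.average(scores, axis=0, weights=weights)

# Two simple "experts": distance from the mean, and distance from the median.
dist_mean = lambda x: np.abs(x - x.mean())
dist_median = lambda x: np.abs(x - np.median(x))

data = np.array([1.0, 1.1, 0.9, 1.05, 0.95, 8.0])   # last point is an outlier
scores = ensemble_outlier_scores(data, [dist_mean, dist_median], [0.5, 0.5])
outlier = int(np.argmax(scores))
```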
Botnet analysis using ensemble classifier
Directory of Open Access Journals (Sweden)
Anchit Bijalwan
2016-09-01
Full Text Available This paper analyses botnet traffic using an ensemble-of-classifiers algorithm to find bot evidence. We used the ISCX dataset for training and testing purposes. We extracted the features of both the training and testing datasets, bifurcated these features into two classes, normal traffic and botnet traffic, and provided labelling. Thereafter, using a modern data mining tool, we applied the ensemble-of-classifiers algorithm. Our experimental results show that the performance in finding bot evidence with an ensemble of classifiers is better than with a single classifier. Ensemble-based classifiers perform better than a single classifier by either combining the powers of multiple algorithms or introducing diversification to the same classifier by varying the input in bot analysis. Our results show that using the voting method of the ensemble-based classifier increases accuracy from 93.37% to 96.41%.
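The voting step the abstract refers to is ordinary majority voting across classifiers; a generic sketch (the labels and toy predictions are invented for illustration, and the underlying classifiers are omitted):

```python
from collections import Counter

def majority_vote(predictions):
    """Combine per-classifier label predictions sample-by-sample by majority vote."""
    n_samples = len(predictions[0])
    combined = []
    for i in range(n_samples):
        votes = Counter(clf_preds[i] for clf_preds in predictions)
        combined.append(votes.most_common(1)[0][0])
    return combined

# Three classifiers labelling five traffic flows as "bot" or "normal".
preds = [
    ["bot",    "normal", "bot", "normal", "bot"],
    ["bot",    "bot",    "bot", "normal", "normal"],
    ["normal", "normal", "bot", "bot",    "bot"],
]
final = majority_vote(preds)
```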
A composite state method for ensemble data assimilation with multiple limited-area models
Directory of Open Access Journals (Sweden)
Matthew Kretschmer
2015-04-01
Full Text Available Limited-area models (LAMs) allow high-resolution forecasts to be made for geographic regions of interest when resources are limited. Typically, boundary conditions for these models are provided through one-way boundary coupling from a coarser resolution global model. Here, data assimilation is considered in a situation in which a global model supplies boundary conditions to multiple LAMs. The data assimilation method presented combines information from all of the models to construct a single ‘composite state’, on which data assimilation is subsequently performed. The analysis composite state is then used to form the initial conditions of the global model and all of the LAMs for the next forecast cycle. The method is tested by using numerical experiments with simple, chaotic models. The results of the experiments show that there is a clear forecast benefit to allowing LAM states to influence one another during the analysis. In addition, adding LAM information at analysis time has a strong positive impact on global model forecast performance, even at points not covered by the LAMs.
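A highly simplified sketch of the composite-state idea: start from the global state and overwrite each LAM's region with the LAM's values, giving a single vector on which assimilation could then be performed. The 1-D state and the (start, stop) region encoding are illustrative assumptions; the paper's composite construction on real model grids is more involved.

```python
import numpy as np

def composite_state(global_state, lam_states):
    """Merge a global state with limited-area states.

    lam_states is a list of ((start, stop), values) pairs giving each LAM's
    region of the global grid and its higher-resolution values there."""
    composite = global_state.copy()
    for (start, stop), state in lam_states:
        composite[start:stop] = state
    return composite

global_state = np.zeros(10)
lams = [((2, 5), np.array([1.0, 1.0, 1.0])),
        ((6, 8), np.array([2.0, 2.0]))]
comp = composite_state(global_state, lams)
```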
Rodin, Alexander E
2010-01-01
The algorithm of the ensemble pulsar time scale (PT$_{\rm ens}$) based on the optimal Wiener filtration method has been proposed. This algorithm allows the separation of the contributions of the atomic clock and of the pulsar itself to the post-fit pulsar timing residuals. Filters were designed with the use of the cross-spectra of the timing residuals. The method has been applied to the timing data of six millisecond pulsars. Direct comparison with the classical method of the weighted average showed that use of the optimal Wiener filters before averaging noticeably improves the fractional instability of the ensemble time scale. Application of the proposed method to the most stable millisecond pulsars with fractional instability $\sigma_z < 10^{-15}$ may improve the fractional instability of PT$_{\rm ens}$ up to the level $\sim 10^{-16}$.
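The classical weighted-average baseline that the Wiener-filter method is compared against can be sketched in a few lines. Inverse-variance weighting is an assumption for illustration (rows are pulsars, columns are observation epochs), and the Wiener-filtering step itself is omitted.

```python
import numpy as np

def ensemble_timescale(residuals, variances):
    """Combine per-pulsar timing residuals (rows) into an ensemble correction
    per epoch (columns) by inverse-variance weighting."""
    w = 1.0 / np.asarray(variances)
    w = w / w.sum()
    return w @ np.asarray(residuals)

# Three pulsars observing the same clock error (0.5) plus independent noise.
residuals = np.array([
    [0.52, 0.49, 0.51],
    [0.60, 0.40, 0.55],
    [0.50, 0.50, 0.50],
])
variances = [0.01, 0.1, 0.001]      # per-pulsar noise levels
ens = ensemble_timescale(residuals, variances)
```

The quietest pulsar dominates the average, so the ensemble estimate tracks the common clock error closely.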
Realizations of the Canonical Representation
Indian Academy of Sciences (India)
M K Vemuri
2008-02-01
A characterisation of the maximal abelian subalgebras of the bounded operators on Hilbert space that are normalised by the canonical representation of the Heisenberg group is given. This is used to classify the perfect realizations of the canonical representation.
Research on an Ensemble Method of Uncertainty Reasoning
Institute of Scientific and Technical Information of China (English)
贺怀清; 李建伏
2011-01-01
Ensemble learning is a machine learning paradigm in which multiple models are strategically generated and combined to obtain better predictive performance than a single learning method. It has been shown that ensemble learning is feasible and tends to yield better results. Uncertainty reasoning is one of the important directions in artificial intelligence. Various uncertainty reasoning methods have been developed, each with its own advantages and disadvantages in practical applications. Motivated by ensemble learning, an ensemble method of uncertainty reasoning is proposed. The main idea of the new method follows the basic framework of ensemble learning: multiple uncertainty reasoning methods are applied, and the results of the various reasoning methods are integrated by certain rules into the final result. Finally, theoretical analysis and experimental tests show that the ensemble uncertainty reasoning method is effective and feasible.
Measuring sub-canopy evaporation in a forested wetland using an ensemble of methods
Allen, S. T.; Edwards, B.; Reba, M. L.; Keim, R.
2013-12-01
and humidity gradients. This suggests the need to use combined methods during periods with problematic boundary layer conditions.
Canonical quantization of macroscopic electromagnetism
Philbin, T G
2010-01-01
Application of the standard canonical quantization rules of quantum field theory to macroscopic electromagnetism has encountered obstacles due to material dispersion and absorption. This has led to a phenomenological approach to macroscopic quantum electrodynamics where no canonical formulation is attempted. In this paper macroscopic electromagnetism is canonically quantized. The results apply to any linear, inhomogeneous, magnetoelectric medium with dielectric functions that obey the Kramers-Kronig relations. The prescriptions of the phenomenological approach are derived from the canonical theory.
On Ensemble Nonlinear Kalman Filtering with Symmetric Analysis Ensembles
Luo, Xiaodong
2010-09-19
The ensemble square root filter (EnSRF) [1, 2, 3, 4] is a popular method for data assimilation in high dimensional systems (e.g., geophysics models). Essentially the EnSRF is a Monte Carlo implementation of the conventional Kalman filter (KF) [5, 6]. It is mainly different from the KF at the prediction steps, where it is some ensembles, rather than the means and covariance matrices, of the system state that are propagated forward. In doing this, the EnSRF is computationally more efficient than the KF, since propagating a covariance matrix forward in high dimensional systems is prohibitively expensive. In addition, the EnSRF is also very convenient in implementation. By propagating the ensembles of the system state, the EnSRF can be directly applied to nonlinear systems without any change in comparison to the assimilation procedures in linear systems. However, by adopting the Monte Carlo method, the EnSRF also incurs certain sampling errors. One way to alleviate this problem is to introduce certain symmetry to the ensembles, which can reduce the sampling errors and spurious modes in evaluation of the means and covariances of the ensembles [7]. In this contribution, we present two methods to produce symmetric ensembles. One is based on the unscented transform [8, 9], which leads to the unscented Kalman filter (UKF) [8, 9] and its variant, the ensemble unscented Kalman filter (EnUKF) [7]. The other is based on Stirling’s interpolation formula (SIF), which results in the divided difference filter (DDF) [10]. Here we propose a simplified divided difference filter (sDDF) in the context of ensemble filtering. The similarity and difference between the sDDF and the EnUKF will be discussed. Numerical experiments will also be conducted to investigate the performance of the sDDF and the EnUKF, and compare them to a well-established EnSRF, the ensemble transform Kalman filter (ETKF) [2].
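The first symmetrization mentioned, the unscented transform, generates a deterministic, symmetric ensemble whose weighted sample mean and covariance reproduce the prescribed moments exactly. A minimal sketch of the generic transform (with `kappa` as the usual scaling parameter; this is not the EnUKF implementation of the contribution):

```python
import numpy as np

def symmetric_sigma_ensemble(mean, cov, kappa=0.0):
    """Unscented-transform sigma points: the mean plus +/- the scaled columns
    of a matrix square root of the covariance, with matching weights."""
    n = len(mean)
    L = np.linalg.cholesky((n + kappa) * cov)
    points = ([mean]
              + [mean + L[:, i] for i in range(n)]
              + [mean - L[:, i] for i in range(n)])
    weights = [kappa / (n + kappa)] + [1.0 / (2 * (n + kappa))] * (2 * n)
    return np.array(points), np.array(weights)

mean = np.array([1.0, -2.0])
cov = np.array([[2.0, 0.3], [0.3, 1.0]])
pts, w = symmetric_sigma_ensemble(mean, cov, kappa=1.0)

# The symmetric ensemble reproduces the first two moments exactly.
emp_mean = w @ pts
emp_cov = (pts - emp_mean).T @ np.diag(w) @ (pts - emp_mean)
```

Because the +/- pairs cancel, the weighted mean is exact, and the weighted scatter rebuilds L L^T / (n + kappa), i.e. the covariance, exactly; this is the sampling-error reduction the symmetry buys.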
Canonical Strangeness Enhancement
Sollfrank, J; Redlich, Krzysztof; Satz, Helmut
1998-01-01
According to recent experimental data and theoretical developments we discuss three distinct topics related to strangeness enhancement in nuclear reactions. We investigate the compatibility of multi-strange particle ratios measured in a restricted phase space with thermal model parameters extracted recently in 4pi. We study the canonical suppression as a possible reason for the observed strangeness enhancement and argue that a connection between QGP formation and the undersaturation of strangeness is not excluded.
Canonical quantization of macroscopic electromagnetism
Philbin, Thomas Gerard
2010-01-01
Application of the standard canonical quantization rules of quantum field theory to macroscopic electromagnetism has encountered obstacles due to material dispersion and absorption. This has led to a phenomenological approach to macroscopic quantum electrodynamics where no canonical formulation is attempted. In this paper macroscopic electromagnetism is canonically quantized. The results apply to any linear, inhomogeneous, magnetodielectric medium with dielectric functions that obey the Kramers-Kronig relations.
A Framework for Non-Equilibrium Statistical Ensemble Theory
Institute of Scientific and Technical Information of China (English)
BI Qiao; HE Zu-Tan; LIU Jie
2011-01-01
Since Gibbs synthesized a general equilibrium statistical ensemble theory, many theorists have attempted to generalize the Gibbsian theory to the domain of non-equilibrium phenomena; however, the theory of non-equilibrium phenomena cannot be said to be as firmly established as the Gibbsian ensemble theory. In this work, we present a framework for a non-equilibrium statistical ensemble formalism based on a subdynamic kinetic equation (SKE) rooted in the Brussels-Austin school and some more recent works. The key construction is a similarity transformation between the Gibbsian ensemble formalism, based on the Liouville equation, and the subdynamic ensemble formalism, based on the SKE. Using this formalism, we study the spin-boson system, in the weak- and strong-coupling cases, and easily obtain the reduced density operators for the canonical ensembles.
Energy Technology Data Exchange (ETDEWEB)
Kalivas, John H., E-mail: kalijohn@isu.edu [Department of Chemistry, Idaho State University, Pocatello, ID 83209 (United States); Héberger, Károly [Research Centre for Natural Sciences, Hungarian Academy of Sciences, Pusztaszeri út 59-67, 1025 Budapest (Hungary); Andries, Erik [Center for Advanced Research Computing, University of New Mexico, Albuquerque, NM 87106 (United States); Department of Mathematics, Central New Mexico Community College, Albuquerque, NM 87106 (United States)
2015-04-15
Highlights: • Sum of ranking differences (SRD) used for tuning parameter selection based on fusion of multicriteria. • No weighting scheme is needed for the multicriteria. • SRD allows automatic selection of one model or a collection of models if so desired. • SRD allows simultaneous comparison of different calibration methods with tuning parameter selection. • New MATLAB programs are described and made available. - Abstract: Most multivariate calibration methods require selection of tuning parameters, such as partial least squares (PLS) or the Tikhonov regularization variant ridge regression (RR). Tuning parameter values determine the direction and magnitude of respective model vectors, thereby setting the resultant prediction abilities of the model vectors. Simultaneously, tuning parameter values establish the corresponding bias/variance and the underlying selectivity/sensitivity tradeoffs. Selection of the final tuning parameter is often accomplished through some form of cross-validation, and the resultant root mean square error of cross-validation (RMSECV) values are evaluated. However, selection of a “good” tuning parameter with this one model evaluation merit is almost impossible. Including additional model merits assists tuning parameter selection to provide better balanced models as well as allowing for a reasonable comparison between calibration methods. Using multiple merits requires decisions to be made on how to combine and weight the merits into an information criterion. An abundance of options is possible. Presented in this paper is the sum of ranking differences (SRD) to ensemble a collection of model evaluation merits varying across tuning parameters. It is shown that the SRD consensus ranking of model tuning parameters allows automatic selection of the final model, or a collection of models if so desired. Essentially, the user’s preference for the degree of balance between bias and variance ultimately decides the merits used in SRD
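The basic SRD procedure can be sketched as below: rank the candidates by each merit, rank them by a reference (here the column-wise mean), then sum the absolute rank differences per merit; a lower SRD means closer agreement with the consensus. The merit names and values are invented for illustration, and ties and the validation step of the published SRD method are not handled.

```python
import numpy as np

def sum_of_ranking_differences(merits, reference):
    """SRD of each merit's ranking against the reference ranking."""
    def ranks(values):
        order = np.argsort(values)
        r = np.empty(len(values), dtype=int)
        r[order] = np.arange(len(values))   # r[i] = ascending rank of item i
        return r
    ref_ranks = ranks(reference)
    return {merit: int(np.abs(ranks(vals) - ref_ranks).sum())
            for merit, vals in merits.items()}

# Three candidate merits scoring four tuning-parameter choices; the reference
# is the mean score per choice.
merits = {
    "RMSECV": np.array([0.9, 0.5, 0.3, 0.7]),
    "bias":   np.array([0.8, 0.6, 0.2, 0.9]),
    "noisy":  np.array([0.1, 0.2, 0.9, 0.3]),
}
reference = np.mean(list(merits.values()), axis=0)
srd = sum_of_ranking_differences(merits, reference)
```

Here "bias" ranks the choices most like the consensus (lowest SRD), while the "noisy" merit disagrees most.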
Alorizi, Seyed Morteza Emami; Nimruzi, Majid
2016-01-01
Background: Stroke has a huge negative impact on society and affects women more adversely. There is scarce evidence of any neuroprotective effect of commonly used drugs in acute stroke. Bushnell et al. provided a guideline focusing on the risk factors of stroke unique to women, including reproductive factors, metabolic syndrome, obesity, atrial fibrillation, and migraine with aura. The ten variables cited by Avicenna in the Canon of Medicine would compensate for the gaps mentioned in this guideline. The prescribed drugs should be selected qualitatively opposite to the Mizaj (warm-cold and wet-dry qualities induced by the disease state) of the disease and according to ten variables, including the nature of the affected organ, intensity of disease, sex, age, habit, season, place of living, occupation, stamina and physical status. Methods: Information related to stroke was searched in the Canon of Medicine, an outstanding book of traditional Persian medicine written by Avicenna. Results: A hemorrhagic stroke is the result of increased sanguine humor in the body. Sanguine has a warm-wet quality and should be treated with food and drugs that quench the abundance of blood in the body. An acute episode of ischemic stroke is due to an abundance of phlegm that causes a blockage in the cerebral vessels. Phlegm has a cold-wet quality, and treatment should be started with compound medicines that either dissolve the phlegm or eject it from the body. Conclusion: Avicenna states in the Canon of Medicine that women have a cold and wet temperament compared to men. For this reason, they are more prone to accumulation of phlegm in their body organs, including the liver, joints and vessels, and consequently at risk of fatty liver, degenerative joint disease, atherosclerosis, and stroke, especially the ischemic type. This is in accordance with epidemiological studies that show a higher rate of ischemic stroke in women than hemorrhagic stroke. PMID:26722147
Similarity measures for protein ensembles
DEFF Research Database (Denmark)
Lindorff-Larsen, Kresten; Ferkinghoff-Borg, Jesper
2009-01-01
Analyses of similarities and changes in protein conformation can provide important information regarding protein function and evolution. Many scores, including the commonly used root mean square deviation, have therefore been developed to quantify the similarities of different protein conformations...... a synthetic example from molecular dynamics simulations. We then apply the algorithms to revisit the problem of ensemble averaging during structure determination of proteins, and find that an ensemble refinement method is able to recover the correct distribution of conformations better than standard single...
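The commonly used root mean square deviation mentioned above is computed after optimal superposition of the two conformations; a generic sketch using the Kabsch algorithm (this is the standard baseline score, not the ensemble-similarity measures the paper develops):

```python
import numpy as np

def rmsd(A, B):
    """Minimal RMSD between conformations A, B ((n_atoms, 3) arrays) after
    optimal rotation/translation (Kabsch algorithm)."""
    A = A - A.mean(axis=0)
    B = B - B.mean(axis=0)
    V, S, Wt = np.linalg.svd(A.T @ B)
    d = np.sign(np.linalg.det(V @ Wt))
    R = V @ np.diag([1.0, 1.0, d]) @ Wt   # guard against improper rotations
    return float(np.sqrt(np.mean(np.sum((A @ R - B) ** 2, axis=1))))

coords = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1]])
theta = 0.7
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0,            0.0,           1.0]])
score = rmsd(coords, coords @ Rz.T + 2.0)   # rotated + translated copy
```

A rigid-body transformed copy of a structure gives an RMSD of (numerically) zero, which is exactly why superposition must precede the comparison.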
Meaning of temperature in different thermostatistical ensembles.
Hänggi, Peter; Hilbert, Stefan; Dunkel, Jörn
2016-03-28
Depending on the exact experimental conditions, the thermodynamic properties of physical systems can be related to one or more thermostatistical ensembles. Here, we survey the notion of thermodynamic temperature in different statistical ensembles, focusing in particular on subtleties that arise when ensembles become non-equivalent. The 'mother' of all ensembles, the microcanonical ensemble, uses entropy and internal energy (the most fundamental, dynamically conserved quantity) to derive temperature as a secondary thermodynamic variable. Over the past century, some confusion has been caused by the fact that several competing microcanonical entropy definitions are used in the literature, most commonly the volume and surface entropies introduced by Gibbs. It can be proved, however, that only the volume entropy satisfies exactly the traditional form of the laws of thermodynamics for a broad class of physical systems, including all standard classical Hamiltonian systems, regardless of their size. This mathematically rigorous fact implies that negative 'absolute' temperatures and Carnot efficiencies greater than 1 are not achievable within a standard thermodynamic framework. As an important offspring of microcanonical thermostatistics, we shall briefly consider the canonical ensemble and comment on the validity of the Boltzmann weight factor. We conclude by addressing open mathematical problems that arise for systems with discrete energy spectra. PMID:26903095
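As an illustration of the two competing entropy definitions discussed above (a sketch added here, not part of the abstract), the volume and surface entropies and the resulting temperatures can be written as:

```latex
% Volume (Gibbs) and surface entropy; \Omega(E) is the phase-space volume
% enclosed by the energy shell and \omega(E) = \partial\Omega/\partial E
% its density; \epsilon is a constant with dimensions of energy.
\[
  S_v(E) = k_B \ln \Omega(E), \qquad
  S_s(E) = k_B \ln\!\left[\epsilon\,\omega(E)\right], \qquad
  T = \left(\frac{\partial S}{\partial E}\right)^{-1}.
\]
% Example: for a classical ideal gas, \Omega(E) \propto E^{3N/2}, so
% T_v = 2E/(3N k_B) > 0 for all E. For systems with a bounded spectrum,
% \omega(E) can decrease with E, making T_s negative; this is the
% situation the survey argues is excluded when the volume entropy is used.
```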
Canonical brackets of a toy model for the Hodge theory without its canonical conjugate momenta
Shukla, D; Malik, R P
2014-01-01
We consider the toy model of a rigid rotor as an example of the Hodge theory within the framework of the Becchi-Rouet-Stora-Tyutin (BRST) formalism and show that the internal symmetries of this theory lead to the derivation of canonical brackets amongst the creation and annihilation operators of the dynamical variables where the definition of the canonical conjugate momenta is not required. We invoke only the spin-statistics theorem, normal ordering and basic concepts of continuous symmetries (and their generators) to derive the canonical brackets for the model of a one (0 + 1)-dimensional (1D) rigid rotor without using the definition of the canonical conjugate momenta anywhere. Our present method of derivation of the basic brackets is conjectured to be true for a class of theories that provide a set of tractable physical examples for the Hodge theory.
Belayneh, A.; Adamowski, J.; Khalil, B.; Quilty, J.
2016-05-01
This study explored the ability of coupled machine learning models and ensemble techniques to predict drought conditions in the Awash River Basin of Ethiopia. The potential of wavelet transforms coupled with the bootstrap and boosting ensemble techniques to develop reliable artificial neural network (ANN) and support vector regression (SVR) models was explored for drought prediction. Wavelet analysis was used as a pre-processing tool and was shown to improve drought predictions. The Standardized Precipitation Index (SPI) (in this case SPI 3, SPI 12 and SPI 24) is a meteorological drought index that was forecast using the aforementioned models; these SPI values represent short- and long-term drought conditions. The performances of all models were compared using RMSE, MAE, and R2. The prediction results indicated that the use of the boosting ensemble technique consistently improved the correlation between observed and predicted SPIs. In addition, the use of wavelet analysis improved the prediction results of all models. Overall, the wavelet boosting ANN (WBS-ANN) and wavelet boosting SVR (WBS-SVR) models provided better prediction results than the other model types evaluated.
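The bootstrap (bagging) side of the ensemble strategy described above can be sketched as follows. This is an illustrative stand-in, not the paper's code: it uses a synthetic autoregressive series in place of observed SPI values and simple linear autoregressions in place of the ANN/SVR members, and the wavelet pre-processing step is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for an SPI-3 series (the paper uses observed SPI values).
n = 400
spi = np.zeros(n)
for t in range(1, n):
    spi[t] = 0.8 * spi[t - 1] + rng.normal(scale=0.5)

def make_lagged(series, lag=3):
    # Build (lagged inputs, target) pairs for one-step-ahead forecasting.
    X = np.column_stack([series[i:len(series) - lag + i] for i in range(lag)])
    return X, series[lag:]

X, y = make_lagged(spi)
split = 300
X_tr, y_tr, X_te, y_te = X[:split], y[:split], X[split:], y[split:]

def fit_linear(X, y):
    A = np.column_stack([X, np.ones(len(X))])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def predict(coef, X):
    return np.column_stack([X, np.ones(len(X))]) @ coef

# Bootstrap (bagging) ensemble: each member is fit on a resampled training
# set and the member forecasts are averaged.
preds = []
for _ in range(50):
    idx = rng.integers(0, len(X_tr), len(X_tr))
    preds.append(predict(fit_linear(X_tr[idx], y_tr[idx]), X_te))
ens_pred = np.mean(preds, axis=0)

rmse = float(np.sqrt(np.mean((ens_pred - y_te) ** 2)))
print(round(rmse, 3))
```

Boosting, used in the paper alongside the bootstrap, would instead fit members sequentially on reweighted data; the averaging step at the end is the same idea.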
Yin, D. S.; Gao, Y. P.; Zhao, S. H.
2016-05-01
Millisecond pulsars can generate another type of time scale that is totally independent of the atomic time scale, because the physical mechanisms of the pulsar time scale and the atomic time scale are quite different from each other. Usually the pulsar timing observational data are not evenly sampled, and the intervals between data points range from several hours to more than half a month; moreover, these data sets are sparse. All of this makes it difficult to generate an ensemble pulsar time scale. Hence, a new algorithm to calculate the ensemble pulsar time scale is proposed. Firstly, we use cubic spline interpolation to densify the data set and make the intervals between data points even. Then, we employ the Vondrak filter to smooth the data set and get rid of high-frequency noise, and finally adopt the weighted average method to generate the ensemble pulsar time scale. The pulsar timing residuals represent the clock difference between the pulsar time and atomic time, and high-precision pulsar timing data provide the clock difference measurement between the pulsar time and atomic time with a high signal-to-noise ratio, which is fundamental to generating pulsar time. We use the latest released NANOGrav (North American Nanohertz Observatory for Gravitational Waves) 9-year data set, which includes 9-year observational data of 37 millisecond pulsars obtained with the 100-meter Green Bank telescope and the 305-meter Arecibo telescope. We find that the algorithm used in this paper can lower the influence caused by noise in the timing residuals and improve the long-term stability of the pulsar time. Results show that the long-term (> 1 yr) frequency stability of the pulsar time is better than 3.4×10^-15.
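The three-step algorithm (densify, smooth, weighted average) might be sketched as below. All data are synthetic stand-ins for the NANOGrav residuals; linear interpolation replaces the cubic splines, a moving average replaces the Vondrak filter, and inverse-variance weighting is an assumption on my part.

```python
import numpy as np

rng = np.random.default_rng(1)

# Unevenly sampled timing residuals for three hypothetical pulsars.
grid = np.linspace(0.0, 9.0, 200)            # even grid, in years
pulsars = []
for noise in (0.1, 0.2, 0.4):
    t = np.sort(rng.uniform(0.0, 9.0, 60))   # uneven observation epochs
    r = 0.05 * np.sin(2 * np.pi * t / 9.0) + rng.normal(scale=noise, size=t.size)
    pulsars.append((t, r, noise))

def smooth(x, width=9):
    # Simple moving average as a stand-in for the Vondrak filter.
    kernel = np.ones(width) / width
    return np.convolve(x, kernel, mode="same")

# 1) densify each series by interpolation (linear here; the paper uses
#    cubic splines), 2) smooth, 3) combine by weighted average.
resampled, weights = [], []
for t, r, noise in pulsars:
    resampled.append(smooth(np.interp(grid, t, r)))
    weights.append(1.0 / noise**2)
weights = np.array(weights) / np.sum(weights)
ensemble = np.tensordot(weights, np.array(resampled), axes=1)

print(ensemble.shape)
```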
Canonical analysis based on mutual information
DEFF Research Database (Denmark)
Nielsen, Allan Aasbjerg; Vestergaard, Jacob Schack
2015-01-01
Canonical correlation analysis (CCA) is an established multi-variate statistical method for finding similarities between linear combinations of (normally two) sets of multivariate observations. In this contribution we replace (linear) correlation as the measure of association between the linear combinations with the information theoretical measure mutual information (MI). We term this type of analysis canonical information analysis (CIA). MI allows for the actual joint distribution of the variables involved and not just second order statistics. While CCA is ideal for Gaussian data, CIA facilitates...
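The core substitution in CIA, replacing correlation with a mutual-information estimate, can be illustrated with a plug-in histogram MI estimator. This is a hedged sketch of the association measure only; the actual CIA algorithm additionally searches over projection directions.

```python
import numpy as np

rng = np.random.default_rng(2)

def mutual_information(x, y, bins=16):
    # Histogram (plug-in) estimate of mutual information, in nats.
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

# Two variables sharing a common source vs. an independent one.
n = 5000
u = rng.normal(size=n)
x = u + 0.3 * rng.normal(size=n)
y = u + 0.3 * rng.normal(size=n)
z = rng.normal(size=n)                       # independent of x

mi_dep = mutual_information(x, y)
mi_ind = mutual_information(x, z)
print(mi_dep > mi_ind)
```

Unlike Pearson correlation, this measure also picks up nonlinear dependence, which is the motivation the abstract gives for CIA.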
Boundary conditions in first order gravity: Hamiltonian and Ensemble
Aros, Rodrigo
2005-01-01
In this work two different boundary conditions for first order gravity, corresponding to a null and a negative cosmological constant respectively, are studied. Both boundary conditions allow one to obtain the standard black hole thermodynamics. Furthermore, both boundary conditions define a canonical ensemble. Additionally, the quasilocal energy definition is obtained for the null cosmological constant case.
Iterative algorithms to approximate canonical Gabor windows: Computational aspects
DEFF Research Database (Denmark)
Janssen, A.J.E.M; Søndergaard, Peter Lempel
In this paper we investigate the computational aspects of some recently proposed iterative methods for approximating the canonical tight and canonical dual window of a Gabor frame (g,a,b). The iterations start with the window g while the iteration steps comprise the window g, the k^th iterand...
Ensemble algorithms in reinforcement learning.
Wiering, Marco A; van Hasselt, Hado
2008-08-01
This paper describes several ensemble methods that combine multiple different reinforcement learning (RL) algorithms in a single agent. The aim is to enhance learning speed and final performance by combining the chosen actions or action probabilities of different RL algorithms. We designed and implemented four different ensemble methods combining the following five different RL algorithms: Q-learning, Sarsa, actor-critic (AC), QV-learning, and AC learning automaton. The intuitively designed ensemble methods, namely, majority voting (MV), rank voting, Boltzmann multiplication (BM), and Boltzmann addition, combine the policies derived from the value functions of the different RL algorithms, in contrast to previous work where ensemble methods have been used in RL for representing and learning a single value function. We show experiments on five maze problems of varying complexity; the first problem is simple, but the other four maze tasks are of a dynamic or partially observable nature. The results indicate that the BM and MV ensembles significantly outperform the single RL algorithms. PMID:18632380
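Two of the combination rules named above, majority voting and Boltzmann multiplication, can be sketched for a single state. The Q-values below are invented stand-ins for the estimates produced by the individual RL algorithms.

```python
import numpy as np

# Q-values from three hypothetical RL algorithms for one state, 4 actions
# (stand-ins for e.g. Q-learning, Sarsa and actor-critic estimates).
qs = np.array([[0.1, 0.9, 0.2, 0.0],
               [0.3, 0.8, 0.1, 0.2],
               [0.7, 0.2, 0.1, 0.3]])

def majority_voting(qs):
    # Each algorithm votes for its greedy action; ties go to the lowest index.
    votes = np.zeros(qs.shape[1])
    for q in qs:
        votes[np.argmax(q)] += 1
    return int(np.argmax(votes))

def boltzmann_multiplication(qs, tau=0.5):
    # Multiply the Boltzmann action distributions of the individual algorithms.
    probs = np.ones(qs.shape[1])
    for q in qs:
        e = np.exp(q / tau)
        probs *= e / e.sum()
    return int(np.argmax(probs / probs.sum()))

print(majority_voting(qs), boltzmann_multiplication(qs))  # → 1 1
```

In the paper the resulting ensemble preference is then turned into an exploration policy; here only the greedy combined choice is shown.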
Bouallegue, Zied Ben; Theis, Susanne E; Pinson, Pierre
2015-01-01
Probabilistic forecasts in the form of ensembles of scenarios are required for complex decision-making processes. Ensemble forecasting systems provide such products, but the spatio-temporal structure of the forecast uncertainty is lost when statistical calibration of the ensemble forecasts is applied for each lead time and location independently. Non-parametric approaches allow the reconstruction of spatio-temporal joint probability distributions at a low computational cost. For example, the ensemble copula coupling (ECC) method consists of rebuilding the multivariate aspect of the forecast from the original ensemble forecasts. Based on the assumption of error stationarity, parametric methods aim to fully describe the forecast dependence structures. In this study, the concept of ECC is combined with past data statistics in order to account for the autocorrelation of the forecast error. The new approach, which preserves the dynamical development of the ensemble members, is called dynamic ensemble copula coupling (...
Directory of Open Access Journals (Sweden)
J. D. Giraldo
2011-04-01
Full Text Available The Sudano-Sahelian zone of West Africa, one of the poorest regions on Earth, is characterized by high rainfall variability and rapid population growth. In this region, heavy storm events frequently cause extensive damage. Nonetheless, the projections for change in extreme rainfall values have shown great divergence between Regional Climate Models (RCMs), increasing the forecast uncertainty. Novel methodologies should be applied, taking into account both the variability provided by different RCMs and the non-stationary nature of the time series, for the building of hazard maps of extreme rainfall events. The present work focuses on a probability density function (PDF)-based evaluation and a simple quantitative measure of how well each RCM considered can capture the observed annual maximum daily rainfall (AMDR) series on the Senegal River basin. Since meaningful trends have been detected in historical rainfall time series for the region, non-stationary probabilistic models were used to fit the PDF parameters to the AMDR time series. In the development of the PDF ensemble by bootstrapping techniques, Reliability Ensemble Averaging (REA) maps were applied to score the RCMs. The REA factors were computed using a metric to evaluate the agreement between observed (or best estimated) PDFs and those simulated with each RCM. The assessment of plausible regional trends associated with the return period, from the hazard maps of AMDR, showed a general rise, owing to an increase in the mean and the variability of extreme precipitation. These spatial-temporal distributions could be considered by local stakeholders in such a way as to reach a better balance between mitigation and adaptation.
Data assimilation the ensemble Kalman filter
Evensen, Geir
2006-01-01
Covers data assimilation and inverse methods, including both traditional state estimation and parameter estimation. This text and reference focuses on various popular data assimilation methods, such as weak and strong constraint variational methods and ensemble filters and smoothers.
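A minimal analysis (update) step of the stochastic ensemble Kalman filter, one of the methods the book covers, might look as follows. The toy 2-D state and observation setup are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

def enkf_update(ensemble, y_obs, H, obs_var):
    """Stochastic EnKF analysis step; ensemble is (n_members, n_state)."""
    n = ensemble.shape[0]
    A = ensemble - ensemble.mean(axis=0)        # state anomalies
    HX = ensemble @ H.T
    HA = HX - HX.mean(axis=0)                   # observed-space anomalies
    # Sample covariances and Kalman gain.
    P_hh = HA.T @ HA / (n - 1) + obs_var * np.eye(H.shape[0])
    P_xh = A.T @ HA / (n - 1)
    K = P_xh @ np.linalg.inv(P_hh)
    # Perturbed observations, the hallmark of the stochastic EnKF variant.
    y_pert = y_obs + rng.normal(scale=np.sqrt(obs_var), size=(n, H.shape[0]))
    return ensemble + (y_pert - HX) @ K.T

# Toy example: 2-D state, observe the first component only.
truth = np.array([1.0, -1.0])
H = np.array([[1.0, 0.0]])
prior = truth + rng.normal(scale=1.0, size=(100, 2))
post = enkf_update(prior, H @ truth, H, obs_var=0.1)

# The analysis ensemble is tighter in the observed component.
print(post.shape)
```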
Charoenkwan, Phasit; Shoombuatong, Watshara; Lee, Hua-Chin; Chaijaruwanich, Jeerayut; Huang, Hui-Ling; Ho, Shinn-Ying
2013-01-01
Existing methods for predicting protein crystallization obtain high accuracy using various types of complemented features and complex ensemble classifiers, such as support vector machine (SVM) and Random Forest classifiers. It is desirable to develop a simple and easily interpretable prediction method with informative sequence features to provide insights into protein crystallization. This study proposes an ensemble method, SCMCRYS, to predict protein crystallization, in which each classifier is built by using a scoring card method (SCM) that estimates propensity scores of p-collocated amino acid (AA) pairs (p=0 for a dipeptide). The SCM classifier determines the crystallization of a sequence according to a weighted-sum score. The weights are the composition of the p-collocated AA pairs, and the propensity scores of these AA pairs are estimated using a statistical optimization approach. SCMCRYS predicts crystallization using simple voting over a number of SCM classifiers. The experimental results show that a single SCM classifier utilizing dipeptide composition, with an accuracy of 73.90%, is comparable to the best previously-developed SVM-based classifier, SVM_POLY (74.6%), and our proposed SVM-based classifier utilizing the same dipeptide composition (77.55%). The SCMCRYS method, with an accuracy of 76.1%, is comparable to the state-of-the-art ensemble methods PPCpred (76.8%) and RFCRYS (80.0%), which used SVM and Random Forest classifiers, respectively. This study also investigates mutagenesis analysis based on SCM; the result supports the hypothesis that mutagenesis of the surface residues Ala and Cys has large and small probabilities, respectively, of enhancing protein crystallizability, considering the estimated scores of crystallizability and solubility, melting point, molecular weight and conformational entropy of amino acids in a generalized condition. The propensity scores of amino acids and dipeptides for estimating protein crystallizability can aid
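The weighted-sum decision rule of an SCM classifier can be sketched as below. The alphabet is reduced to four residues and the propensity scores are invented for illustration; SCMCRYS itself estimates the scores by optimization and votes over many such classifiers.

```python
import numpy as np

# Toy scoring card: every dipeptide over a reduced alphabet gets an
# (invented) propensity score; higher means more crystallizable.
AMINO = "ACDE"
pairs = [x + y for x in AMINO for y in AMINO]
scores = dict(zip(pairs, np.linspace(-1.0, 1.0, len(pairs))))

def scm_score(seq):
    # Weighted sum: dipeptide composition (weights) times propensity scores.
    dipeptides = [seq[i:i + 2] for i in range(len(seq) - 1)]
    comp = {d: dipeptides.count(d) / len(dipeptides) for d in set(dipeptides)}
    return sum(w * scores[d] for d, w in comp.items())

def scm_classify(seq, threshold=0.0):
    return "crystallizable" if scm_score(seq) >= threshold else "non-crystallizable"

print(scm_classify("EEEEDD"), scm_classify("AAAACC"))
```

An ensemble in the SCMCRYS style would train several such cards on resampled data and take a simple vote over their decisions.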
Canonical quantization of constrained systems
Energy Technology Data Exchange (ETDEWEB)
Bouzas, A.; Epele, L.N.; Fanchiotti, H.; Canal, C.A.G. (Laboratorio de Fisica Teorica, Departamento de Fisica, Universidad Nacional de La Plata, Casilla de Correo No. 67, 1900 La Plata, Argentina (AR))
1990-07-01
The consideration of first-class constraints together with gauge conditions as a set of second-class constraints in a given system is shown to be incorrect when carrying out its canonical quantization.
A COMPREHENSIVE EVOLUTIONARY APPROACH FOR NEURAL NETWORK ENSEMBLES AUTOMATIC DESIGN
Bukhtoyarov, V.; Semenkin, E.
2010-01-01
A new comprehensive approach for neural network ensembles design is proposed. It consists of a method of neural networks automatic design and a method of automatic formation of an ensemble solution on the basis of separate neural networks solutions. It is demonstrated that the proposed approach is not less effective than a number of other approaches for neural network ensembles design.
The canon as text for a biblical theology
Directory of Open Access Journals (Sweden)
James A. Loader
2005-10-01
Full Text Available The novelty of the canonical approach is questioned and its fascination at least partly traced to the Reformation, as well as to the post-Reformation's need for a clear and authoritative canon to perform the function previously performed by the church. This does not minimise the elusiveness and deeply contradictory positions both within the canon and triggered by it. On the one hand, the canon itself is a centripetal phenomenon and does play an important role in exegesis and theology. Even so, on the other hand, it not only contains many difficulties, but also causes various additional problems of a formal as well as a theological nature. The question is mooted whether the canonical approach alleviates or aggravates the dilemma. Since this approach has become a major factor in Christian theology, aspects of the Christian canon are used to gauge whether “canon” is an appropriate category for eliminating difficulties that arise by virtue of its own existence. Problematic uses and appropriations of several Old Testament canons are advanced, as well as evidence in the New Testament of a consciousness that the “old” has been surpassed (“Überbietungsbewußtsein”). It is maintained that at least Childs's version of the canonical approach fails to smooth out these and similar difficulties. As a method it can cater for the New Testament's (superior) role as the hermeneutical standard for evaluating the Old, but founders on its inability to create the theological unity it claims can solve religious problems exposed by Old Testament historical criticism. It is concluded that canon as a category cannot be dispensed with, but is useful for the opposite of the purpose to which it is conventionally put: far from bringing about theological “unity” or producing a standard for “correct” exegesis, it requires different readings of different canons.
Asymptotic distributions in the projection pursuit based canonical correlation analysis
Institute of Scientific and Technical Information of China (English)
Anonymous
2010-01-01
In this paper, associations between two sets of random variables based on the projection pursuit (PP) method are studied. The asymptotic normal distributions of estimators of the PP based canonical correlations and weighting vectors are derived.
Canonical approach to 2D induced gravity
Popovic, D
2001-01-01
Using the canonical method, the Liouville theory has been obtained as the gravitational Wess-Zumino action of the Polyakov string. From this approach it is clear that the form of the Liouville action is a consequence of the bosonic representation of the Virasoro algebra, and that the coefficient in front of the action is proportional to the central charge and measures the quantum breaking of the classical symmetry.
On Complex Supermanifolds with Trivial Canonical Bundle
Groeger, Josua
2016-01-01
We give an algebraic characterisation for the triviality of the canonical bundle of a complex supermanifold in terms of a certain Batalin-Vilkovisky superalgebra structure. As an application, we study the Calabi-Yau case, in which an explicit formula in terms of the Levi-Civita connection is achieved. Our methods include the use of complex integral forms and the recently developed theory of superholonomy.
Online Learning with Ensembles
Urbanczik, R
1999-01-01
Supervised online learning with an ensemble of students randomized by the choice of initial conditions is analyzed. For the case of the perceptron learning rule, asymptotically the same improvement in the generalization error of the ensemble compared to the performance of a single student is found as in Gibbs learning. For more optimized learning rules, however, using an ensemble yields no improvement. This is explained by showing that for any learning rule $f$ a transform $\\tilde{f}$ exists,...
ENCORE: Software for Quantitative Ensemble Comparison.
Directory of Open Access Journals (Sweden)
Matteo Tiberti
2015-10-01
Full Text Available There is increasing evidence that protein dynamics and conformational changes can play an important role in modulating biological function. As a result, experimental and computational methods are being developed, often synergistically, to study the dynamical heterogeneity of a protein or other macromolecules in solution. Thus, methods such as molecular dynamics simulations or ensemble refinement approaches have provided conformational ensembles that can be used to understand protein function and biophysics. These developments have in turn created a need for algorithms and software that can be used to compare structural ensembles in the same way as the root-mean-square-deviation is often used to compare static structures. Although a few such approaches have been proposed, they can be difficult to implement efficiently, hindering broader application and further development. Here, we present an easily accessible software toolkit, called ENCORE, which can be used to compare conformational ensembles generated either from simulations alone or synergistically with experiments. ENCORE implements three previously described methods for ensemble comparison, each of which can be used to quantify the similarity between conformational ensembles by estimating the overlap between the probability distributions that underlie them. We demonstrate the kinds of insights that can be obtained by providing examples of three typical use-cases: comparing ensembles generated with different molecular force fields, assessing convergence in molecular simulations, and calculating differences and similarities in structural ensembles refined with various sources of experimental data. We also demonstrate efficient computational scaling for typical analyses, and robustness against both the size and sampling of the ensembles. ENCORE is freely available and extendable, integrates with the established MDAnalysis software package, reads ensemble data in many common formats, and can
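The idea of quantifying ensemble similarity as the overlap between underlying probability distributions can be sketched with a histogram-based Jensen-Shannon divergence on 1-D projections. This is an illustrative simplification; ENCORE's actual estimators operate on full conformational ensembles.

```python
import numpy as np

rng = np.random.default_rng(5)

def jensen_shannon(p, q):
    # JS divergence between two discrete distributions, in nats.
    p, q = p / p.sum(), q / q.sum()
    m = 0.5 * (p + q)
    def kl(a, b):
        nz = a > 0
        return float(np.sum(a[nz] * np.log(a[nz] / b[nz])))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def ensemble_similarity(samples_a, samples_b, bins=30, lo=-5.0, hi=5.0):
    # Histogram the two ensembles on a common grid, then compare.
    ha, _ = np.histogram(samples_a, bins=bins, range=(lo, hi))
    hb, _ = np.histogram(samples_b, bins=bins, range=(lo, hi))
    return jensen_shannon(ha.astype(float), hb.astype(float))

# Stand-in "ensembles": 1-D projections of conformational coordinates.
ens1 = rng.normal(0.0, 1.0, 20000)
ens2 = rng.normal(0.0, 1.0, 20000)   # same underlying distribution
ens3 = rng.normal(2.0, 1.0, 20000)   # shifted distribution

print(ensemble_similarity(ens1, ens2) < ensemble_similarity(ens1, ens3))
```

A divergence near zero indicates the two ensembles sample the same distribution, which is the role RMSD plays for pairs of static structures.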
Periodicity, the Canon and Sport
Directory of Open Access Journals (Sweden)
Thomas F. Scanlon
2015-10-01
Full Text Available The topic according to this title is admittedly a broad one, embracing two very general concepts of time and of the cultural valuation of artistic products. Both phenomena are, in the present view, largely constructed by their contemporary cultures, and given authority to a great extent from the prestige of the past. The antiquity of tradition brings with it a certain cachet. Even though there may be peripheral debates in any given society which question the specifics of periodization or canonicity, individuals generally accept the consensus designation of a sequence of historical periods and they accept a list of highly valued artistic works as canonical or authoritative. We will first examine some of the processes of periodization and of canon-formation, after which we will discuss some specific examples of how these processes have worked in the sport of two ancient cultures, namely Greece and Mesoamerica.
A 4D-Ensemble-Variational System for Data Assimilation and Ensemble Initialization
Bowler, Neill; Clayton, Adam; Jardak, Mohamed; Lee, Eunjoo; Jermey, Peter; Lorenc, Andrew; Piccolo, Chiara; Pring, Stephen; Wlasak, Marek; Barker, Dale; Inverarity, Gordon; Swinbank, Richard
2016-04-01
The Met Office has been developing a four-dimensional ensemble variational (4DEnVar) data assimilation system over the past four years. The 4DEnVar system is intended both as a data assimilation system in its own right and as an improved means of initializing the Met Office Global and Regional Ensemble Prediction System (MOGREPS). The global MOGREPS ensemble has been initialized by running an ensemble of 4DEnVars (En-4DEnVar). The scalability and maintainability of ensemble data assimilation methods make them increasingly attractive, and 4DEnVar may be adopted in the context of the Met Office's LFRic project to redevelop the technical infrastructure to enable its Unified Model (MetUM) to be run efficiently on massively parallel supercomputers. This presentation will report on the results of the 4DEnVar development project, including experiments that have been run using ensemble sizes of up to 200 members.
Layered Ensemble Architecture for Time Series Forecasting.
Rahman, Md Mustafizur; Islam, Md Monirul; Murase, Kazuyuki; Yao, Xin
2016-01-01
Time series forecasting (TSF) has been widely used in many application areas such as science, engineering, and finance. The phenomena generating time series are usually unknown and information available for forecasting is only limited to the past values of the series. It is, therefore, necessary to use an appropriate number of past values, termed lag, for forecasting. This paper proposes a layered ensemble architecture (LEA) for TSF problems. Our LEA consists of two layers, each of which uses an ensemble of multilayer perceptron (MLP) networks. While the first ensemble layer tries to find an appropriate lag, the second ensemble layer employs the obtained lag for forecasting. Unlike most previous work on TSF, the proposed architecture considers both accuracy and diversity of the individual networks in constructing an ensemble. LEA trains different networks in the ensemble by using different training sets with the aim of maintaining diversity among the networks. However, it uses the appropriate lag and combines the best trained networks to construct the ensemble. This indicates LEA's emphasis on the accuracy of the networks. The proposed architecture has been tested extensively on time series data from the NN3 and NN5 neural network forecasting competitions. It has also been tested on several standard benchmark time series data sets. In terms of forecasting accuracy, our experimental results have revealed clearly that LEA is better than other ensemble and non-ensemble methods. PMID:25751882
Multilevel ensemble Kalman filtering
Hoel, Håkon; Law, Kody J. H.; Tempone, Raul
2015-01-01
This work embeds a multilevel Monte Carlo (MLMC) sampling strategy into the Monte Carlo step of the ensemble Kalman filter (ENKF), thereby yielding a multilevel ensemble Kalman filter (MLENKF) which has provably superior asymptotic cost to a given accuracy level. The theoretical results are illustrated numerically.
Zerbino, Daniel R; Johnson, Nathan; Juetteman, Thomas; Sheppard, Dan; Wilder, Steven P; Lavidas, Ilias; Nuhn, Michael; Perry, Emily; Raffaillac-Desfosses, Quentin; Sobral, Daniel; Keefe, Damian; Gräf, Stefan; Ahmed, Ikhlak; Kinsella, Rhoda; Pritchard, Bethan; Brent, Simon; Amode, Ridwan; Parker, Anne; Trevanion, Steven; Birney, Ewan; Dunham, Ian; Flicek, Paul
2016-01-01
New experimental techniques in epigenomics allow researchers to assay a diversity of highly dynamic features such as histone marks, DNA modifications or chromatin structure. The study of their fluctuations should provide insights into gene expression regulation, cell differentiation and disease. The Ensembl project collects and maintains the Ensembl regulation data resources on epigenetic marks, transcription factor binding and DNA methylation for human and mouse, as well as microarray probe mappings and annotations for a variety of chordate genomes. From this data, we produce a functional annotation of the regulatory elements along the human and mouse genomes with plans to expand to other species as data becomes available. Starting from well-studied cell lines, we will progressively expand our library of measurements to a greater variety of samples. Ensembl's regulation resources provide a central and easy-to-query repository for reference epigenomes. As with all Ensembl data, it is freely available at http://www.ensembl.org, from the Perl and REST APIs and from the public Ensembl MySQL database server at ensembldb.ensembl.org. Database URL: http://www.ensembl.org. PMID:26888907
Existence of log canonical closures
Hacon, Christopher D
2011-01-01
Let $f:X\\to U$ be a projective morphism of normal varieties and $(X,\\Delta)$ a dlt pair. We prove that if there is an open set $U^0\\subset U$, such that $(X,\\Delta)\\times_U U^0$ has a good minimal model over $U^0$ and the images of all the non-klt centers intersect $U^0$, then $(X,\\Delta)$ has a good minimal model over $U$. As consequences we show the existence of log canonical compactifications for open log canonical pairs, and the fact that the moduli functor of stable schemes satisfies the valuative criterion for properness.
Saito, Kazuo; Hara, Masahiro; Kunii, Masaru; Seko, Hiromu; Yamaguchi, Munehiko
2011-05-01
Different initial perturbation methods for the mesoscale ensemble prediction were compared by the Meteorological Research Institute (MRI) as a part of the intercomparison of mesoscale ensemble prediction systems (EPSs) of the World Weather Research Programme (WWRP) Beijing 2008 Olympics Research and Development Project (B08RDP). Five initial perturbation methods for mesoscale ensemble prediction were developed for B08RDP and compared at MRI: (1) a downscaling method of the Japan Meteorological Agency (JMA)'s operational one-week EPS (WEP), (2) a targeted global model singular vector (GSV) method, (3) a mesoscale model singular vector (MSV) method based on the adjoint model of the JMA non-hydrostatic model (NHM), (4) a mesoscale breeding growing mode (MBD) method based on the NHM forecast and (5) a local ensemble transform (LET) method based on the local ensemble transform Kalman filter (LETKF) using NHM. These perturbation methods were applied to the preliminary experiments of the B08RDP Tier-1 mesoscale ensemble prediction with a horizontal resolution of 15 km. To make the comparison easier, the same horizontal resolution (40 km) was employed for the three mesoscale model-based initial perturbation methods (MSV, MBD and LET). The GSV method completely outperformed the WEP method, confirming the advantage of targeting in mesoscale EPS. The GSV method generally performed well with regard to root mean square errors of the ensemble mean, large growth rates of ensemble spreads throughout the 36-h forecast period, and high detection rates and high Brier skill scores (BSSs) for weak rains. On the other hand, the mesoscale model-based initial perturbation methods showed good detection rates and BSSs for intense rains. The MSV method showed a rapid growth in the ensemble spread of precipitation up to a forecast time of 6 h, which suggests suitability of the mesoscale SV for short-range EPSs, but the initial large growth of the perturbation did not last long. The
Pool, René; Heringa, Jaap; Hoefling, Martin; Schulz, Roland; Smith, Jeremy C; Feenstra, K Anton
2012-05-01
We report on a python interface to the GROMACS molecular simulation package, GromPy (available at https://github.com/GromPy). This application programming interface (API) uses the ctypes python module that allows function calls to shared libraries, for example, written in C. To the best of our knowledge, this is the first reported interface to the GROMACS library that uses direct library calls. GromPy can be used for extending the current GROMACS simulation and analysis modes. In this work, we demonstrate that the interface enables hybrid Monte-Carlo/molecular dynamics (MD) simulations in the grand-canonical ensemble, a simulation mode that is currently not implemented in GROMACS. For this application, the interplay between GromPy and GROMACS requires only minor modifications of the GROMACS source code, not affecting the operation, efficiency, and performance of the GROMACS applications. We validate the grand-canonical application against MD in the canonical ensemble by comparison of equations of state. The results of the grand-canonical simulations are in complete agreement with MD in the canonical ensemble. The python overhead of the grand-canonical scheme is only minimal.
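Independently of GromPy's API, the particle-number moves that distinguish the grand-canonical from the canonical ensemble can be sketched for the simplest possible case, an ideal gas, where the exact answer ⟨N⟩ = zV (activity times volume) is known. This is a generic textbook-style GCMC sketch, not code from the paper.

```python
import numpy as np

rng = np.random.default_rng(6)

# Grand-canonical Monte Carlo for an ideal gas (no interactions): only
# insertion/deletion moves, with Metropolis acceptance probabilities
#   insert: min(1, zV / (N + 1))      delete: min(1, N / zV)
# which satisfy detailed balance for the Poisson(zV) distribution of N.
zV = 5.0
N = 0
samples = []
for step in range(200_000):
    if rng.random() < 0.5:                       # attempt an insertion
        if rng.random() < min(1.0, zV / (N + 1)):
            N += 1
    elif N > 0:                                  # attempt a deletion
        if rng.random() < min(1.0, N / zV):
            N -= 1
    if step > 10_000:                            # discard burn-in
        samples.append(N)

mean_N = float(np.mean(samples))
print(round(mean_N, 2))
```

In a canonical-ensemble MD run, as used for the validation in the abstract, N would instead stay fixed and only positions/velocities evolve.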
Application of canonical coordinates for solving single-freedom constraint mechanical systems
Institute of Scientific and Technical Information of China (English)
高芳; 张晓波; 傅景礼
2014-01-01
This paper introduces the canonical coordinates method to obtain the first integral of a single-degree-of-freedom constrained mechanical system, covering both conservative and non-conservative holonomic systems. The definition and properties of canonical coordinates are introduced. The relation between Lie point symmetries and the canonical coordinates of the constrained mechanical system is expressed. By this relation, the canonical coordinates can be obtained. The properties of the canonical coordinates and the Lie symmetry theory are used to seek first integrals of the constrained mechanical system. Three examples are given to show applications of the results.
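As a hedged illustration of the method (this example is constructed here, not taken from the paper): canonical coordinates of a Lie point symmetry straighten the generator to a translation, after which one quadrature yields a first integral. For the time-translation symmetry of a conservative one-degree-of-freedom system this reproduces the energy integral:

```latex
% Canonical coordinates r, s of X = \xi(t,q)\,\partial_t + \eta(t,q)\,\partial_q
% satisfy X r = 0 and X s = 1, so that X = \partial_s in the new variables.
% For X = \partial_t (time translation) one may take r = q, s = t; the
% equation of motion \ddot{q} = -V'(q) then integrates once to
\[
  I \;=\; \tfrac{1}{2}\dot{q}^{2} + V(q) \;=\; \text{const},
\]
% the energy first integral associated with the symmetry.
```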
DEFF Research Database (Denmark)
Sunyer Pinya, Maria Antonia; Gregersen, Ida Bülow; Rosbjerg, Dan;
2015-01-01
change method for extreme events, a weather generator combined with a disaggregation method, and a climate analogue method. All three methods rely on different assumptions and use different outputs from the regional climate models (RCMs). The results of the three methods point towards an increase … in extreme precipitation, but the magnitude of the change varies depending on the RCM used and the spatial location. In general, a similar mean change is obtained for the three methods. This adds confidence in the results, as each method uses different information from the RCMs. The results of this study … highlight the need to use a range of statistical downscaling methods as well as RCMs to assess changes in extreme precipitation. © 2014 Royal Meteorological Society.
Romanticism, Sexuality, and the Canon.
Rowe, Kathleen K.
1990-01-01
Traces the Romanticism in the work and persona of film director Jean-Luc Godard. Examines the contradictions posed by Godard's politics and representations of sexuality. Asserts, that by bringing an ironic distance to the works of such canonized directors, viewers can take pleasure in those works despite their contradictions. (MM)
Towards a GME ensemble forecasting system: Ensemble initialization using the breeding technique
Directory of Open Access Journals (Sweden)
Jan D. Keller
2008-12-01
The quantitative forecast of precipitation requires a probabilistic background, particularly for forecast lead times of more than 3 days. As only ensemble simulations can provide useful information on the underlying probability density function, we built a new ensemble forecasting system (GME-EFS) based on the GME model of the German Meteorological Service (DWD). For the generation of appropriate initial ensemble perturbations we chose the breeding technique developed by Toth and Kalnay (1993, 1997), which develops perturbations by estimating the regions of largest model-error-induced uncertainty. This method is applied and tested in the framework of quasi-operational forecasts for a three-month period in 2007. The performance of the resulting ensemble forecasts is compared to the operational ensemble prediction systems ECMWF EPS and NCEP GFS by means of ensemble spread of free-atmosphere parameters (geopotential and temperature) and ensemble skill of precipitation forecasting. This comparison indicates that the GME ensemble forecasting system (GME-EFS) provides reasonable forecasts with a spread skill score comparable to that of the NCEP GFS. An analysis with the continuous ranked probability score exhibits a lack of resolution for the GME forecasts compared to the operational ensembles. However, with significant enhancements during the 3-month test period, the first results of our work with the GME-EFS indicate possibilities for further development as well as the potential for later operational usage.
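The breeding cycle of Toth and Kalnay can be sketched in a few lines: perturb the control state, integrate control and perturbed runs, rescale their difference back to a fixed amplitude, and repeat. The toy one-step "model" below is an assumption for illustration, not the GME dynamics:

```python
import numpy as np

def breed(step, x0, n_cycles=10, amp=0.1, rng=None):
    """One bred vector: perturb, integrate control and perturbed runs,
    rescale the difference to a fixed amplitude, and repeat."""
    rng = np.random.default_rng(rng)
    pert = rng.standard_normal(x0.shape)
    pert *= amp / np.linalg.norm(pert)
    x = x0.copy()
    for _ in range(n_cycles):
        x_ctrl = step(x)                 # control forecast
        x_pert = step(x + pert)          # perturbed forecast
        diff = x_pert - x_ctrl
        pert = diff * (amp / np.linalg.norm(diff))  # rescale to amp
        x = x_ctrl
    return pert

# Toy nonlinear "model": one step of a logistic-type map per component.
step = lambda x: 3.7 * x * (1.0 - x)
bred_vector = breed(step, np.full(5, 0.3), rng=0)
```

After a few cycles the bred vector aligns with the fastest-growing error directions of the model, which is exactly the property exploited for ensemble initialization.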
Application of the Clustering Method in Molecular Dynamics Simulation of the Diffusion Coefficient
Institute of Scientific and Technical Information of China (English)
Anonymous
2008-01-01
Using molecular dynamics (MD) simulation, the diffusion of oxygen, methane, ammonia and carbon dioxide in water was simulated in the canonical (NVT) ensemble, and the diffusion coefficient was analyzed by the clustering method. Compared with the conventional method (using the Einstein model) and the differentiation-interval variation method, the results obtained by the clustering method used in this study are closer to the experimental values. This method proved to be more reasonable than the other two methods.
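For reference, the conventional Einstein-relation estimate mentioned above can be sketched as follows; a synthetic Brownian trajectory is assumed, and the clustering method itself is not reproduced here:

```python
import numpy as np

def einstein_diffusion(positions, dt):
    """Estimate D from a 3-D trajectory via the Einstein relation
    MSD(t) = 6 D t, fitting MSD against lag time by least squares."""
    n = len(positions)
    lags = np.arange(1, n // 10)
    msd = np.array([np.mean(np.sum((positions[lag:] - positions[:-lag]) ** 2,
                                   axis=1)) for lag in lags])
    slope = np.polyfit(lags * dt, msd, 1)[0]   # slope of MSD vs t is 6 D
    return slope / 6.0

# Synthetic Brownian trajectory with known D = sigma^2 / (2 dt) = 0.5.
rng = np.random.default_rng(1)
trajectory = np.cumsum(rng.normal(0.0, 1.0, size=(10000, 3)), axis=0)
D_est = einstein_diffusion(trajectory, dt=1.0)
```

Restricting the fit to lags well below the trajectory length keeps the long-lag statistical noise of the MSD estimator under control.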
Support Vector Machine Ensemble Based on Genetic Algorithm
Institute of Scientific and Technical Information of China (English)
LI Ye; YIN Ru-po; CAI Yun-ze; XU Xiao-ming
2006-01-01
Support vector machines (SVMs) have been introduced as effective methods for solving classification problems. However, due to some limitations in practical applications, their generalization performance is sometimes far from the expected level. Therefore, it is meaningful to study SVM ensemble learning. In this paper, a novel genetic-algorithm-based ensemble learning method, namely Direct Genetic Ensemble (DGE), is proposed. DGE adopts the predictive accuracy of the ensemble as the fitness function and searches for a good ensemble in the ensemble space. In essence, DGE is also a selective ensemble learning method because the base classifiers of the ensemble are selected according to the solution of the genetic algorithm. In comparison with other ensemble learning methods, DGE works on a higher level and is more direct. Different strategies for constructing diverse base classifiers can be utilized in DGE. Experimental results show that SVM ensembles constructed by DGE can achieve better performance than single SVMs, bagged and boosted SVM ensembles. In addition, some valuable conclusions are obtained.
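A toy sketch of the DGE idea follows, with simulated base-classifier predictions standing in for trained SVMs; the binary-mask encoding and the simple GA operators are illustrative assumptions, not the paper's exact configuration:

```python
import numpy as np

def dge_select(preds, y, pop=20, gens=30, rng=None):
    """Direct-Genetic-Ensemble-style search: a GA over binary masks of
    base classifiers, with majority-vote ensemble accuracy as fitness."""
    rng = np.random.default_rng(rng)

    def fitness(mask):
        if not mask.any():
            return 0.0
        vote = (preds[mask].mean(axis=0) > 0.5).astype(int)  # majority vote
        return float((vote == y).mean())

    population = rng.random((pop, preds.shape[0])) < 0.5
    for _ in range(gens):
        scores = np.array([fitness(m) for m in population])
        parents = population[np.argsort(scores)[::-1][:pop // 2]]  # elitism
        children = parents ^ (rng.random(parents.shape) < 0.1)     # mutation
        population = np.vstack([parents, children])
    scores = np.array([fitness(m) for m in population])
    return population[scores.argmax()], scores.max()

# Five simulated base classifiers: three ~90% accurate, two random.
rng = np.random.default_rng(0)
y = rng.integers(0, 2, 200)
good = [(y ^ (rng.random(200) < 0.1)).astype(int) for _ in range(3)]
bad = [rng.integers(0, 2, 200) for _ in range(2)]
mask, acc = dge_select(np.array(good + bad), y)
```

The GA reliably drops the random classifiers because any mask including them lowers the ensemble's vote accuracy, which is the fitness being maximized.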
Work producing reservoirs: Stochastic thermodynamics with generalized Gibbs ensembles.
Horowitz, Jordan M; Esposito, Massimiliano
2016-08-01
We develop a consistent stochastic thermodynamics for environments composed of thermodynamic reservoirs in an external conservative force field, that is, environments described by the generalized or Gibbs canonical ensemble. We demonstrate that small systems weakly coupled to such reservoirs exchange both heat and work by verifying a local detailed balance relation for the induced stochastic dynamics. Based on this analysis, we help to rationalize the observation that nonthermal reservoirs can increase the efficiency of thermodynamic heat engines. PMID:27627226
Transition from Poisson to circular unitary ensemble
Indian Academy of Sciences (India)
Vinayak; Akhilesh Pandey
2009-09-01
Transitions to universality classes of random matrix ensembles have been useful in the study of weakly-broken symmetries in quantum chaotic systems. Transitions involving Poisson as the initial ensemble have been particularly interesting. The exact two-point correlation function was derived by one of the present authors for the Poisson to circular unitary ensemble (CUE) transition with uniform initial density. This is given in terms of a rescaled symmetry-breaking parameter Λ. The same result was obtained for the Poisson to Gaussian unitary ensemble (GUE) transition by Kunz and Shapiro, using the contour-integral method of Brezin and Hikami. We show that their method is applicable to the Poisson to CUE transition with arbitrary initial density. Their method is also applicable to the more general ℓCUE to CUE transition, where ℓCUE refers to the superposition of ℓ independent CUE spectra in arbitrary ratio.
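CUE matrices such as those underlying this transition can be sampled numerically; the sketch below uses the standard QR-based recipe for Haar-distributed unitaries, which is not taken from the paper:

```python
import numpy as np

def sample_cue(n, rng=None):
    """Haar-distributed unitary (CUE): QR-decompose a complex Ginibre
    matrix and fix the phases of R's diagonal (Mezzadri's recipe)."""
    rng = np.random.default_rng(rng)
    z = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    z /= np.sqrt(2.0)
    q, r = np.linalg.qr(z)
    d = np.diag(r)
    return q * (d / np.abs(d))   # multiply column j by the phase of r_jj

u = sample_cue(50, rng=11)
eigvals = np.linalg.eigvals(u)   # eigenphases lie on the unit circle
```

The phase fix on R's diagonal is essential: a plain QR decomposition of a Ginibre matrix is not Haar-distributed, and spectral statistics computed from it would be subtly biased.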
Spectral diagonal ensemble Kalman filters
Kasanický, Ivan; Vejmelka, Martin
2015-01-01
A new type of ensemble Kalman filter is developed, which is based on replacing the sample covariance in the analysis step by its diagonal in a spectral basis. It is proved that this technique improves the approximation of the covariance when the covariance itself is diagonal in the spectral basis, as is the case, e.g., for a second-order stationary random field and the Fourier basis. The method is extended by wavelets to the case when the state variables are random fields which are not spatially homogeneous. Efficient implementations by the fast Fourier transform (FFT) and discrete wavelet transform (DWT) are presented for several types of observations, including high-dimensional data given on a part of the domain, such as radar and satellite images. Computational experiments confirm that the method performs well on the Lorenz 96 problem and the shallow water equations with very small ensembles and over multiple analysis cycles.
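The core covariance approximation can be sketched directly: transform the ensemble perturbations to the Fourier basis and retain only the per-wavenumber variances. The normalization convention and the full analysis step are omitted, so this is an illustration of the idea rather than the paper's filter:

```python
import numpy as np

def spectral_diag_cov(ensemble):
    """Per-wavenumber variances of ensemble perturbations in the Fourier
    basis -- the diagonal spectral covariance used in place of the full
    sample covariance (normalization here is illustrative)."""
    coeffs = np.fft.fft(ensemble - ensemble.mean(axis=0), axis=1)
    n_ens, n_grid = ensemble.shape
    return (np.abs(coeffs) ** 2).sum(axis=0) / ((n_ens - 1) * n_grid)

# Second-order stationary ensemble: random circular shifts of one wave
# plus small noise, so the true covariance is diagonal in Fourier space.
rng = np.random.default_rng(2)
wave = np.sin(np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False))
ens = np.array([np.roll(wave, rng.integers(64)) + 0.1 * rng.standard_normal(64)
                for _ in range(40)])
spec_var = spectral_diag_cov(ens)
```

Because the field is stationary, nearly all the variance concentrates in the wavenumber-one modes, and the diagonal spectral estimate captures this with far fewer parameters than the full sample covariance.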
Canonical and non-canonical pathways of osteoclast formation
Knowles, H.J.; Athanasou, N A
2009-01-01
Physiological and pathological bone resorption is mediated by osteoclasts, multinucleated cells which are formed by the fusion of monocyte/macrophage precursors. The canonical pathway of osteoclast formation requires the presence of the receptor activator of NF-κB ligand (RANKL) and macrophage colony stimulating factor (M-CSF). Non-canonical pathways of osteoclast formation have been described in which cytokines/growth factors can substitute for RANKL or M-CSF to...
DEFF Research Database (Denmark)
Ben Bouallègue, Zied; Heppelmann, Tobias; Theis, Susanne E.;
2015-01-01
Probabilistic forecasts in the form of ensembles of scenarios are required for complex decision-making processes. Ensemble forecasting systems provide such products, but the spatio-temporal structure of the forecast uncertainty is lost when statistical calibration of the ensemble forecasts … is applied for each lead time and location independently. Non-parametric approaches allow the reconstruction of spatio-temporal joint probability distributions at a low computational cost. For example, the ensemble copula coupling (ECC) method consists in rebuilding the multivariate aspect of the forecast … from the original ensemble forecasts. Based on the assumption of error stationarity, parametric methods aim to fully describe the forecast dependence structures. In this study, the concept of ECC is combined with past data statistics in order to account for the autocorrelation of the forecast error...
Quantum statistical ensemble for emissive correlated systems
Shakirov, Alexey M.; Shchadilova, Yulia E.; Rubtsov, Alexey N.
2016-06-01
Relaxation dynamics of complex quantum systems with strong interactions towards the steady state is a fundamental problem in statistical mechanics. The steady state of subsystems weakly interacting with their environment is described by the canonical ensemble which assumes the probability distribution for energy to be of the Boltzmann form. The emergence of this probability distribution is ensured by the detailed balance of the transitions induced by the interaction with the environment. Here we consider relaxation of an open correlated quantum system brought into contact with a reservoir in the vacuum state. We refer to such a system as emissive since particles irreversibly evaporate into the vacuum. The steady state of the system is a statistical mixture of the stable eigenstates. We found that, despite the absence of the detailed balance, the stationary probability distribution over these eigenstates is of the Boltzmann form in each N -particle sector. A quantum statistical ensemble corresponding to the steady state is characterized by different temperatures in the different sectors, in contrast to the Gibbs ensemble. We investigate the transition rates between the eigenstates to understand the emergence of the Boltzmann distribution and find their exponential dependence on the transition energy. We argue that this property of transition rates is generic for a wide class of emissive quantum many-body systems.
Neighbor k-convex-hull ensemble method based on metric learning
Institute of Scientific and Technical Information of China (English)
牟廉明
2013-01-01
The k-local convex distance nearest neighbor classifier (CKNN) corrects the decision boundary of kNN when the amount of training data is small, thus improving the performance of kNN. The k sub-convex-hull classifier (kCH) weakens the sensitivity of CKNN to the number of classes and to ring-structured sample distributions, and hence improves the classification performance. However, this method is still sensitive to the distance metric. Moreover, the different classes of samples among the k nearest neighbors of a test instance are often seriously imbalanced, which leads to a decline in classification performance. In this paper, a neighbor k-convex-hull classifier (NCH) is proposed to address these problems. The robustness of the neighbor k-convex-hull classifier is improved by the techniques of metric learning and ensemble learning. Experimental results show that the proposed neighbor k-convex-hull ensemble method, which is based on metric learning, is significantly superior to several state-of-the-art nearest neighbor classifiers.
Institute of Scientific and Technical Information of China (English)
杨娜; 秦志远; 张俊
2013-01-01
The support-vector-machines-based Infinite Ensemble Learning method (SVM-based IEL) is a newly emerging ensemble learning method in the field of machine learning. In this paper, SVM-based IEL is applied to the classification of remotely sensed imagery, alongside SVM and the classic ensemble learning methods Bagging and AdaBoost, with SVM taken as the base classifier in Bagging and AdaBoost. The experiments show that the classic ensemble learning methods perform differently compared to SVM: Bagging is capable of enhancing the classification accuracy, whereas AdaBoost decreases it. Furthermore, the experiments suggest that, compared to SVM and the finite ensemble learning methods, SVM-based IEL has the merit of significantly increasing both the classification accuracy and the classification efficiency.
International Nuclear Information System (INIS)
Grinding is usually done in the final finishing of a component. As a result, the surface quality of finished products, e.g., surface roughness, hardness and residual stress, are affected by the grinding procedure. However, the lack of methods for monitoring of grinding makes it difficult to control the quality of the process. This paper focuses on the monitoring approaches for the surface burn phenomenon in grinding. A non-destructive burn detection method based on acoustic emission (AE) and ensemble empirical mode decomposition (EEMD) was proposed for this purpose. To precisely extract the AE features caused by phase transformation during burn formation, artificial burn was produced to mimic grinding burn by means of laser irradiation, since laser-induced burn involves less mechanical and electrical noise. The burn formation process was monitored by an AE sensor. The frequency band ranging from 150 to 400 kHz was believed to be related to surface burn formation in the laser irradiation process. The burn-sensitive frequency band was further used to instruct feature extraction during the grinding process based on EEMD. Linear classification results evidenced a distinct margin between samples with and without surface burn. This work provides a practical means for grinding burn detection. (paper)
Institute of Scientific and Technical Information of China (English)
夏崇坤; 苏成利; 曹江涛; 李平
2016-01-01
Fault diagnosis plays an important role in complicated industrial processes. It is a challenging task to detect, identify and locate faults quickly and accurately for a large-scale process system. To solve this problem, a novel MultiBoost-based integrated ENN (extension neural network) fault diagnosis method is proposed. Fault data of complicated chemical processes have some difficult-to-handle characteristics, such as high dimension, non-linearity and non-Gaussian distribution, so we use the margin discriminant projection (MDP) algorithm to reduce dimensions and extract the main features. Then, the affinity propagation (AP) clustering method is used to select core data and boundary data as training samples to reduce memory consumption and shorten learning time. Afterwards, an integrated ENN classifier based on the MultiBoost strategy is constructed to identify fault types. Artificial data sets are tested to verify the effectiveness of the proposed method, and a detailed sensitivity analysis is made for the key parameters. Finally, a real industrial system, the Tennessee Eastman (TE) process, is employed to evaluate the performance of the proposed method. The results show that the proposed method is efficient and capable of diagnosing various types of faults in complicated chemical processes.
Ensemble learning incorporating uncertain registration.
Simpson, Ivor J A; Woolrich, Mark W; Andersson, Jesper L R; Groves, Adrian R; Schnabel, Julia A
2013-04-01
This paper proposes a novel approach for improving the accuracy of statistical prediction methods in spatially normalized analysis. This is achieved by incorporating registration uncertainty into an ensemble learning scheme. A probabilistic registration method is used to estimate a distribution of probable mappings between subject and atlas space. This allows the estimation of the distribution of spatially normalized feature data, e.g., grey matter probability maps. From this distribution, samples are drawn for use as training examples. This allows the creation of multiple predictors, which are subsequently combined using an ensemble learning approach. Furthermore, extra testing samples can be generated to measure the uncertainty of prediction. This is applied to separating subjects with Alzheimer's disease from normal controls using a linear support vector machine on a region of interest in magnetic resonance images of the brain. We show that our proposed method leads to an improvement in discrimination using voxel-based morphometry and deformation tensor-based morphometry over bootstrap aggregating, a common ensemble learning framework. The proposed approach also generates more reasonable soft-classification predictions than bootstrap aggregating. We expect that this approach could be applied to other statistical prediction tasks where registration is important. PMID:23288332
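The general scheme, training each ensemble member on a draw from a per-subject feature distribution rather than on a bootstrap resample, can be sketched with a stand-in nearest-centroid classifier; the paper itself uses a linear SVM on spatially normalized brain-image features:

```python
import numpy as np

def uncertainty_ensemble(mean_feats, feat_std, labels, n_models=25, rng=None):
    """Each member is trained on features sampled from a per-subject
    Gaussian (modelling registration uncertainty) instead of a bootstrap
    resample; members are simple nearest-centroid rules."""
    rng = np.random.default_rng(rng)
    members = []
    for _ in range(n_models):
        sampled = mean_feats + feat_std * rng.standard_normal(mean_feats.shape)
        members.append((sampled[labels == 0].mean(axis=0),
                        sampled[labels == 1].mean(axis=0)))

    def predict(x):
        votes = [int(np.linalg.norm(x - c1) < np.linalg.norm(x - c0))
                 for c0, c1 in members]
        # hard label plus a soft score, as in the paper's soft classification
        return int(np.mean(votes) > 0.5), float(np.mean(votes))

    return predict

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0.0, 1.0, (30, 4)), rng.normal(2.0, 1.0, (30, 4))])
y = np.array([0] * 30 + [1] * 30)
predict = uncertainty_ensemble(X, 0.2, y, rng=4)
label, soft = predict(np.full(4, 2.0))
```

The spread of votes across members provides the soft-classification output: disagreement among members trained on different registration samples directly reflects prediction uncertainty.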
DEFF Research Database (Denmark)
Christensen, Eva Arnspang; Schwartzentruber, J.; Clausen, M. P.;
2013-01-01
The lateral dynamics of proteins and lipids in the mammalian plasma membrane are heterogeneous, likely reflecting both a complex molecular organization and interactions with other macromolecules that reside outside the plane of the membrane. Several methods are commonly used for characterizing … the lateral dynamics of lipids and proteins. These experimental and data analysis methods differ in equipment requirements and labeling complexities, and furthermore often give different results. It would therefore be very convenient to have a single method that is flexible in the choice of fluorescent label … for analyzing lateral dynamics in samples that are labeled at high densities, can also be used for fast and accurate analysis of single-molecule density data of lipids and proteins labeled with quantum dots (QDs). We have further used kICS to investigate the effect of the label size and by comparing the results...
Titchmarsh-Weyl theory for canonical systems
Directory of Open Access Journals (Sweden)
Keshav Raj Acharya
2014-11-01
The main purpose of this paper is to develop the Titchmarsh–Weyl theory of canonical systems. To this end, we first observe that the Schrödinger and Jacobi equations can be written as canonical systems. We then discuss the theory of the Weyl m-function for canonical systems and establish the relation between the Weyl m-functions of Schrödinger equations and those of the canonical systems which involve Schrödinger equations.
Three Dimensional Canonical Quantum Gravity
Matschull, Hans-Juergen
1995-01-01
General aspects of the vielbein representation, ADM formulation and canonical quantization of gravity are reviewed using pure gravity in three dimensions as a toy model. The classical part focuses on the role of observers in general relativity, which will later be identified with quantum observers. A precise definition of gauge symmetries and a classification of inequivalent solutions of Einstein's equations in the dreibein formalism are given as well. In the quantum part the construction of the phys...
Canonical formalism for coupled beam optics
Energy Technology Data Exchange (ETDEWEB)
Kheifets, S.A.
1989-09-01
Beam optics of a lattice with an inter-plane coupling is treated using the canonical Hamiltonian formalism. The method developed is equally applicable both to a circular (periodic) machine and to an open transport line. A solution of the equation of particle motion (and correspondingly the transfer matrix between two arbitrary points of the lattice) is described in terms of two amplitude functions (with their derivatives and corresponding phases of oscillation) and four coupling functions, defined by a solution of the system of first-order nonlinear differential equations derived in the paper. Thus the total number of independent parameters is equal to ten. 8 refs.
Kato expansion in quantum canonical perturbation theory
Nikolaev, Andrey
2016-06-01
This work establishes a connection between canonical perturbation series in quantum mechanics and a Kato expansion for the resolvent of the Liouville superoperator. Our approach leads to an explicit expression for a generator of a block-diagonalizing Dyson's ordered exponential in arbitrary perturbation order. Unitary intertwining of perturbed and unperturbed averaging superprojectors allows for a description of ambiguities in the generator and block-diagonalized Hamiltonian. We compare the efficiency of the corresponding computational algorithm with the efficiencies of the Van Vleck and Magnus methods for high perturbative orders.
Controlling balance in an ensemble Kalman filter
G. A. Gottwald
2014-01-01
We present a method to control unbalanced fast dynamics in an ensemble Kalman filter by introducing a weak constraint on the imbalance in a spatially sparse observational network. We show that the balance constraint produces significantly more balanced analyses than ensemble Kalman filters without balance constraints and than filters implementing incremental analysis updates (IAU). Furthermore, our filter with the weak constraint on imbalance produces good rms error statisti...
Fault Diagnosis Method for Nuclear Power Plant Based on Ensemble Learning
Institute of Scientific and Technical Information of China (English)
慕昱; 夏虹; 刘永阔
2012-01-01
A nuclear power plant (NPP) is a very complex system in which a large number of variables must be collected and monitored, which makes fault diagnosis of an NPP difficult. An ensemble learning method was proposed to address this problem. The method was applied to learn from training samples of four typical faults of a nuclear power plant: loss of coolant accident (LOCA), feed water pipe rupture, steam generator tube rupture (SGTR) and main steam pipe rupture. Simulation experiments were carried out under normal conditions and under conditions of invalid or missing parameters, respectively. The simulation results show that this method can still obtain good diagnostic results when parameters are invalid or missing, exhibiting good fault tolerance and generalization performance.
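One way an ensemble can tolerate missing parameters is sketched below with a random-subspace vote of nearest-centroid members; this is an illustrative stand-in under assumed toy data, not the paper's exact algorithm:

```python
import numpy as np

def subspace_ensemble(X, y, n_members=15, n_feats=3, rng=None):
    """Random-subspace voting ensemble of nearest-centroid classifiers.
    Each member sees only a subset of parameters, so the overall vote
    degrades gracefully when some input parameters are missing (NaN)."""
    rng = np.random.default_rng(rng)
    classes = np.unique(y)
    members = []
    for _ in range(n_members):
        feats = rng.choice(X.shape[1], size=n_feats, replace=False)
        cents = {c: X[y == c][:, feats].mean(axis=0) for c in classes}
        members.append((feats, cents))

    def predict(x):
        votes = []
        for feats, cents in members:
            sub = x[feats]
            if np.isnan(sub).any():      # member abstains on missing data
                continue
            votes.append(min(cents, key=lambda c: np.linalg.norm(sub - cents[c])))
        return max(set(votes), key=votes.count)

    return predict

rng = np.random.default_rng(5)
X = np.vstack([rng.normal(c, 0.3, (20, 6)) for c in (0.0, 1.0, 2.0)])
y = np.repeat([0, 1, 2], 20)
predict = subspace_ensemble(X, y, rng=6)
sample = np.full(6, 2.0)
sample[0] = np.nan                       # one monitored parameter missing
fault = predict(sample)
```

Members whose feature subset includes the missing parameter simply abstain, and the remaining members still reach a correct majority, which is the fault-tolerance behavior the abstract reports for its ensemble.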
Tong, J.
2014-12-01
With the development of modern agriculture, large amounts of fertilizer and pesticide flow out of farmland, causing great waste and contributing to serious pollution of surface water and groundwater, which threatens the ecological environment and human life. In this paper, laboratory experiments are conducted to simulate adsorbed Cr(VI) transfer from soil into runoff. A two-layer in-mixing analytical model is developed to analyze the laboratory experimental results. A data assimilation (DA) method via the ensemble Kalman filter (EnKF) is used to update parameters and improve predictions. In comparison with the observed data, the DA results are much better than the forward model predictions. Based on the applied rainfall and relevant physical principles, the updated value of the incomplete mixing coefficient is about 7.4 times its experimental value in experiment 1 and about 14.0 times in experiment 2, which indicates that the loss of Cr(VI) in soil solute is mainly due to infiltration rather than surface runoff. With increasing soil adsorption ability and mixing layer depth, the loss of soil solute will decrease. These results provide information for preventing and reducing agricultural nonpoint source pollution.
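The EnKF parameter-update step used in such DA schemes can be sketched for a scalar parameter and a linear toy forward model; the model `h` and all numbers below are assumptions for illustration, not the paper's mixing model:

```python
import numpy as np

def enkf_update(ensemble, obs, obs_std, h, rng=None):
    """One stochastic-EnKF analysis step: nudge each member toward a
    perturbed observation using a gain estimated from ensemble statistics."""
    rng = np.random.default_rng(rng)
    hx = np.array([h(m) for m in ensemble])
    cov_xy = np.cov(ensemble, hx)[0, 1]          # parameter-obs covariance
    gain = cov_xy / (hx.var(ddof=1) + obs_std ** 2)
    perturbed = obs + obs_std * rng.standard_normal(len(ensemble))
    return ensemble + gain * (perturbed - hx)

# Recover a parameter `a` (e.g. a mixing coefficient) from observed output.
true_a = 2.0
h = lambda a: 3.0 * a                            # toy linear forward model
obs = h(true_a)
rng = np.random.default_rng(7)
ens = rng.normal(0.5, 1.0, 100)                  # poor prior guess for a
for _ in range(5):
    ens = enkf_update(ens, obs, obs_std=0.1, h=h, rng=rng)
```

After a few analysis cycles the ensemble mean converges toward the parameter value consistent with the data, which is how the paper's updated incomplete-mixing coefficient is obtained from runoff observations.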
Various multistage ensembles for prediction of heating energy consumption
Directory of Open Access Journals (Sweden)
Radisa Jovanovic
2015-04-01
Feedforward neural network models are created for prediction of the daily heating energy consumption of the NTNU university campus Gloshaugen, using actual measured data for training and testing. An improvement in prediction accuracy is proposed by using a neural network ensemble. Previously trained feedforward neural networks are first separated into clusters using the k-means algorithm, and then the best network of each cluster is chosen as a member of the ensemble. Two conventional averaging methods for obtaining the ensemble output are applied: simple and weighted. In order to achieve better prediction results, a multistage ensemble is investigated. At the second level, adaptive neuro-fuzzy inference systems with various clustering and membership functions are used to aggregate the selected ensemble members. A feedforward neural network in the second stage is also analyzed. It is shown that an ensemble of neural networks can predict heating energy consumption with better accuracy than the best trained single neural network, while the best results are achieved with the multistage ensemble.
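The first stage described above, clustering the trained networks, keeping the best member of each cluster, and combining the survivors by weighted averaging, can be sketched as follows. The toy "networks" are noisy copies of the target series, and the ANFIS second stage is omitted:

```python
import numpy as np

def kmeans_labels(X, k, iters=50, rng=None):
    """Plain k-means, returning cluster labels for the rows of X."""
    rng = np.random.default_rng(rng)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == j].mean(axis=0) if (labels == j).any()
                            else centers[j] for j in range(k)])
    return labels

def cluster_select_ensemble(preds, y_val, k=3, rng=None):
    """Cluster networks by validation predictions, keep the most accurate
    member per cluster, combine them by error-weighted averaging."""
    labels = kmeans_labels(preds, k, rng=rng)
    errors = ((preds - y_val) ** 2).mean(axis=1)
    chosen = [np.where(labels == j)[0][errors[labels == j].argmin()]
              for j in range(k) if (labels == j).any()]
    w = 1.0 / errors[chosen]                     # weighted averaging
    return (w / w.sum()) @ preds[chosen]

rng = np.random.default_rng(8)
y_val = np.sin(np.linspace(0.0, 3.0, 50))        # stand-in target series
preds = y_val + rng.normal(0.0, 0.2, (12, 50))   # 12 imperfect "networks"
combined = cluster_select_ensemble(preds, y_val, rng=9)
```

Selecting one representative per cluster keeps the ensemble diverse while weighted averaging favors the more accurate survivors, the same rationale the abstract gives for its first stage.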
Gross, D H E
2005-01-01
Conventional thermo-statistics address infinite homogeneous systems within the canonical ensemble. (Only in this case is this equivalent to the fundamental microcanonical ensemble.) However, some 170 years ago the original motivation of thermodynamics was the description of steam engines, i.e. boiling water. Its essential physics is the separation of the gas phase from the liquid. Of course, boiling water is inhomogeneous and as such cannot be treated by conventional thermo-statistics. It is then not astonishing that a phase transition of first order is signaled canonically by a Yang-Lee singularity. Thus it is only treated correctly by microcanonical Boltzmann-Planck statistics. It turns out that the Boltzmann-Planck statistics is much richer and gives fundamental insight into statistical mechanics and especially into entropy. This can be done to a far extent rigorously and analytically. As no extensivity, no thermodynamic limit, no concavity, and no homogeneity are needed, it also applies to astro-physical syst...
Canonical path integral quantization of Einstein's gravitational field
Muslih, Sami I.
2000-01-01
The connection between the canonical and the path integral formulations of Einstein's gravitational field is discussed using the Hamilton-Jacobi method. Unlike conventional methods, it is shown that our path integral method leads to the integration measure with no $\delta$-functions, no need to fix any gauge, and hence no ambiguous determinants appear.
Canonical group quantization and boundary conditions
Energy Technology Data Exchange (ETDEWEB)
Jung, Florian
2012-07-16
In the present thesis, we study quantization of classical systems with non-trivial phase spaces using the group-theoretical quantization technique proposed by Isham. Our main goal is a better understanding of global and topological aspects of quantum theory. In practice, the group-theoretical approach enables direct quantization of systems subject to constraints and boundary conditions in a natural and physically transparent manner -- cases for which the canonical quantization method of Dirac fails. First, we provide a clarification of the quantization formalism. In contrast to prior treatments, we introduce a sharp distinction between the two group structures that are involved and explain their physical meaning. The benefit is a consistent and conceptually much clearer construction of the Canonical Group. In particular, we shed light upon the 'pathological' case for which the Canonical Group must be defined via a central Lie algebra extension and emphasise the role of the central extension in general. In addition, we study direct quantization of a particle restricted to a half-line with 'hard wall' boundary condition. Despite the apparent simplicity of this example, we show that a naive quantization attempt based on the cotangent bundle over the half-line as classical phase space leads to an incomplete quantum theory; the reflection which is a characteristic aspect of the 'hard wall' is not reproduced. Instead, we propose a different phase space that realises the necessary boundary condition as a topological feature and demonstrate that quantization yields a suitable quantum theory for the half-line model. The insights gained in the present special case improve our understanding of the relation between classical and quantum theory and illustrate how contact interactions may be incorporated.
Data assimilation with the weighted ensemble Kalman filter
Papadakis, Nicolas; Mémin, Etienne; Cuzol, Anne; Gengembre, Nicolas
2010-01-01
In this paper, two data assimilation methods based on sequential Monte Carlo sampling are studied and compared: the ensemble Kalman filter and the particle filter. Each of these techniques has its own advantages and drawbacks. In this work, we try to get the best of each method by combining them. The proposed algorithm, called the weighted ensemble Kalman filter, relies on the ensemble Kalman filter updates of samples in order to define a proposal distribution for the particle filte...
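The combination can be sketched in its simplest scalar form: an EnKF-style move provides the proposal, and particle-filter reweighting by the observation likelihood follows. The proper proposal-density correction of the full algorithm is simplified away here, so this is only an illustration of the structure:

```python
import numpy as np

def wenkf_step(particles, weights, obs, obs_std, rng=None):
    """EnKF-style move as the proposal, then particle-filter reweighting
    (scalar state, identity observation operator)."""
    rng = np.random.default_rng(rng)
    var = particles.var(ddof=1)
    gain = var / (var + obs_std ** 2)                   # Kalman gain
    perturbed = obs + obs_std * rng.standard_normal(len(particles))
    moved = particles + gain * (perturbed - particles)  # proposal draw
    weights = weights * np.exp(-0.5 * ((obs - moved) / obs_std) ** 2)
    return moved, weights / weights.sum()

rng = np.random.default_rng(10)
particles = rng.normal(0.0, 2.0, 500)       # prior ensemble
weights = np.full(500, 1.0 / 500)
particles, weights = wenkf_step(particles, weights, obs=1.5, obs_std=0.5,
                                rng=rng)
estimate = float(weights @ particles)
```

Because the EnKF move already places particles near the observation, the subsequent importance weights stay well-conditioned, which is the advantage the weighted EnKF seeks over a plain particle filter.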
Banerjee, Monami; Okun, Michael S; Vaillancourt, David E; Vemuri, Baba C
2016-01-01
Parkinson's disease (PD) is a common and debilitating neurodegenerative disorder that affects patients in all countries and of all nationalities. Magnetic resonance imaging (MRI) is currently one of the most widely used diagnostic imaging techniques utilized for detection of neurologic diseases. Changes in structural biomarkers will likely play an important future role in assessing progression of many neurological diseases inclusive of PD. In this paper, we derived structural biomarkers from diffusion MRI (dMRI), a structural modality that allows for non-invasive inference of neuronal fiber connectivity patterns. The structural biomarker we use is the ensemble average propagator (EAP), a probability density function fully characterizing the diffusion locally at a voxel level. To assess changes with respect to a normal anatomy, we construct an unbiased template brain map from the EAP fields of a control population. Use of an EAP captures both orientation and shape information of the diffusion process at each voxel in the dMRI data, and this feature can be a powerful representation to achieve enhanced PD brain mapping. This template brain map construction method is applicable to small animal models as well as to human brains. The differences between the control template brain map and novel patient data can then be assessed via a nonrigid warping algorithm that transforms the novel data into correspondence with the template brain map, thereby capturing the amount of elastic deformation needed to achieve this correspondence. We present the use of a manifold-valued feature called the Cauchy deformation tensor (CDT), which facilitates morphometric analysis and automated classification of a PD versus a control population. Finally, we present preliminary results of automated discrimination between a group of 22 controls and 46 PD patients using CDT. This method may be possibly applied to larger population sizes and other parkinsonian syndromes in the near future.
Canonical metrics on complex manifold
Institute of Scientific and Technical Information of China (English)
YAU Shing-Tung
2008-01-01
Complex manifolds are topological spaces that are covered by coordinate charts in which the coordinate changes are given by holomorphic transformations. For example, Riemann surfaces are one-dimensional complex manifolds. In order to understand complex manifolds, it is useful to introduce metrics that are compatible with the complex structure. In general, we should have a pair (M, ds²_M), where ds²_M is the metric. The metric is said to be canonical if any biholomorphism of the complex manifold is automatically an isometry. Such metrics can naturally be used to describe invariants of the complex structures of the manifold.
Dibaryons as canonically quantized biskyrmions
Krupovnickas, T; Riska, D O
2000-01-01
The characteristic feature of the ground state configuration of the Skyrme model description of nuclei is the absence of recognizable individual nucleons. The ground state of the skyrmion with baryon number 2 is axially symmetric, and is well approximated by a simple rational map, which represents a direct generalization of Skyrme's hedgehog ansatz for the nucleon. If the Lagrangian density is canonically quantized, this configuration may support excitations that lie close to, and possibly below, the threshold for pion decay, and therefore describe dibaryons. The quantum corrections stabilize these solutions, the mass density of which has the correct exponential fall-off at large distances.
Imprinting and recalling cortical ensembles.
Carrillo-Reid, Luis; Yang, Weijian; Bando, Yuki; Peterka, Darcy S; Yuste, Rafael
2016-08-12
Neuronal ensembles are coactive groups of neurons that may represent building blocks of cortical circuits. These ensembles could be formed by Hebbian plasticity, whereby synapses between coactive neurons are strengthened. Here we report that repetitive activation with two-photon optogenetics of neuronal populations from ensembles in the visual cortex of awake mice builds neuronal ensembles that recur spontaneously after being imprinted and do not disrupt preexisting ones. Moreover, imprinted ensembles can be recalled by single-cell stimulation and remain coactive on consecutive days. Our results demonstrate the persistent reconfiguration of cortical circuits by two-photon optogenetics into neuronal ensembles that can perform pattern completion. PMID:27516599
Calibrating ensemble reliability whilst preserving spatial structure
Directory of Open Access Journals (Sweden)
Jonathan Flowerdew
2014-03-01
Full Text Available Ensemble forecasts aim to improve decision-making by predicting a set of possible outcomes. Ideally, these would provide probabilities which are both sharp and reliable. In practice, the models, data assimilation and ensemble perturbation systems are all imperfect, leading to deficiencies in the predicted probabilities. This paper presents an ensemble post-processing scheme which directly targets local reliability, calibrating both climatology and ensemble dispersion in one coherent operation. It makes minimal assumptions about the underlying statistical distributions, aiming to extract as much information as possible from the original dynamic forecasts and support statistically awkward variables such as precipitation. The output is a set of ensemble members preserving the spatial, temporal and inter-variable structure from the raw forecasts, which should be beneficial to downstream applications such as hydrological models. The calibration is tested on three leading 15-d ensemble systems, and their aggregation into a simple multimodel ensemble. Results are presented for 12 h, 1° scale over Europe for a range of surface variables, including precipitation. The scheme is very effective at removing unreliability from the raw forecasts, whilst generally preserving or improving statistical resolution. In most cases, these benefits extend to the rarest events at each location within the 2-yr verification period. The reliability and resolution are generally equivalent or superior to those achieved using a Local Quantile-Quantile Transform, an established calibration method which generalises bias correction. The value of preserving spatial structure is demonstrated by the fact that 3×3 averages derived from grid-scale precipitation calibration perform almost as well as direct calibration at 3×3 scale, and much better than a similar test neglecting the spatial relationships. Some remaining issues are discussed regarding the finite size of the output
Directory of Open Access Journals (Sweden)
Leandro Machado Colli
Full Text Available INTRODUCTION: Canonical and non-canonical Wnt pathways are involved in the genesis of multiple tumors; however, their role in pituitary tumorigenesis is mostly unknown. OBJECTIVE: This study evaluated gene and protein expression of Wnt pathways in pituitary tumors and whether their expression correlates with clinical outcome. MATERIALS AND METHODS: Genes of the WNT canonical pathway: activating ligands (WNT11, WNT4, WNT5A, binding inhibitors (DKK3, sFRP1, β-catenin (CTNNB1, β-catenin degradation complex (APC, AXIN1, GSK3β, inhibitor of β-catenin degradation complex (AKT1, sequester of β-catenin (CDH1, pathway effectors (TCF7, MAPK8, NFAT5, pathway mediators (DVL-1, DVL-2, DVL-3, PRICKLE, VANGL1, target genes (MYB, MYC, WISP2, SPRY1, TP53, CCND1; calcium dependent pathway (PLCB1, CAMK2A, PRKCA, CHP; and planar cell polarity pathway (PTK7, DAAM1, RHOA were evaluated by QPCR, in 19 GH-, 18 ACTH-secreting, 21 non-secreting (NS pituitary tumors, and 5 normal pituitaries. Also, the main effectors of canonical (β-catenin, planar cell polarity (JNK, and calcium dependent (NFAT5 Wnt pathways were evaluated by immunohistochemistry. RESULTS: There are no differences in gene expression of canonical and non-canonical Wnt pathways between all studied subtypes of pituitary tumors and normal pituitaries, except for WISP2, which was over-expressed in ACTH-secreting tumors compared to normal pituitaries (4.8x; p = 0.02, NS pituitary tumors (7.7x; p = 0.004 and GH-secreting tumors (5.0x; p = 0.05. β-catenin, NFAT5 and JNK proteins showed no expression in normal pituitaries or in any of the pituitary tumor subtypes. Furthermore, no association of the studied gene or protein expression was observed with tumor size, recurrence, and progressive disease. The hierarchical clustering showed a regular pattern of genes of the canonical and non-canonical Wnt pathways randomly distributed throughout the dendrogram. CONCLUSIONS: Our data reinforce previous reports
Institute of Scientific and Technical Information of China (English)
Ren Wen-Xiu; Alatancang
2007-01-01
Using the factorization viewpoint of differential operators, this paper discusses how to transform a nonlinear evolution equation into an infinite-dimensional Hamiltonian linear canonical formulation. It proves a sufficient condition for canonical factorization of an operator, and provides a mechanical algebraic method to achieve the corresponding canonical ∂/∂x-type expression. Three examples are then given, which show the application of the obtained algorithm. Thus a novel idea for the inverse problem can be derived feasibly.
Heuser, Frank
2011-01-01
Public school music education in the USA remains wedded to large ensemble performance. Instruction tends to be teacher directed, relies on styles from the Western canon and exhibits little concern for musical interests of students. The idea that a fundamental purpose of education is the creation of a just society is difficult for many music…
Face hallucination using orthogonal canonical correlation analysis
Zhou, Huiling; Lam, Kin-Man
2016-05-01
A two-step face-hallucination framework is proposed to reconstruct a high-resolution (HR) version of a face from an input low-resolution (LR) face, based on learning from LR-HR example face pairs using orthogonal canonical correlation analysis (orthogonal CCA) and linear mapping. In the proposed algorithm, face images are first represented using principal component analysis (PCA). Canonical correlation analysis (CCA) with the orthogonality property is then employed, to maximize the correlation between the PCA coefficients of the LR and the HR face pairs to improve the hallucination performance. The original CCA does not have the orthogonality property, which is crucial for information reconstruction. We propose using orthogonal CCA, which is proven by experiments to achieve a better performance in terms of global face reconstruction. In addition, in the residual-compensation process, a linear-mapping method is proposed to include both the inter- and intra-information about manifolds of different resolutions. Compared with other state-of-the-art approaches, the proposed framework can achieve a comparable, or even better, performance in terms of global face reconstruction and the visual quality of face hallucination. Experiments on images with various parameter settings and blurring distortions show that the proposed approach is robust and has great potential for real-world applications.
DEFF Research Database (Denmark)
Hansen, Lars Kai; Salamon, Peter
1990-01-01
We propose several means for improving the performance and training of neural networks for classification. We use cross-validation as a tool for optimizing network parameters and architecture. We show further that the remaining generalization error can be reduced by invoking ensembles of similar networks.
De praeceptis ferendis: good practice in multi-model ensembles
Directory of Open Access Journals (Sweden)
I. Kioutsioukis
2014-06-01
Full Text Available Ensembles of air quality models have been formally and empirically shown to outperform single models in many cases. Evidence suggests that ensemble error is reduced when the members form a diverse and accurate ensemble. Diversity and accuracy are hence two factors that should be taken care of while designing ensembles in order for them to provide better predictions. There exists a trade-off between diversity and accuracy, for which one cannot be gained without expense of the other. Theoretical aspects like the bias-variance-covariance decomposition and the accuracy-diversity decomposition are linked together and support the importance of creating an ensemble that incorporates both elements. Hence, the common practice of unconditional averaging of models without prior manipulation limits the advantages of ensemble averaging. We demonstrate the importance of ensemble accuracy and diversity through an inter-comparison of ensemble products for which a sound mathematical framework exists, and provide specific recommendations for model selection and weighting for multi-model ensembles. To this end we have devised statistical tools that can be used for diagnostic evaluation of ensemble modelling products, complementing existing operational methods.
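The accuracy-diversity trade-off mentioned above has an exact algebraic form for squared error, the Krogh-Vedelsby ambiguity decomposition: ensemble MSE = mean member MSE minus mean diversity. A small numerical check on synthetic data (the toy "models" below are assumptions for illustration, not the paper's air-quality ensembles):

```python
import numpy as np

rng = np.random.default_rng(1)

# synthetic truth and M imperfect "models" (each biased and noisy)
n, M = 1000, 5
truth = np.sin(np.linspace(0, 4 * np.pi, n))
biases = rng.normal(0.0, 0.3, size=M)
preds = np.stack([truth + b + rng.normal(0.0, 0.5, size=n) for b in biases])

ens = preds.mean(axis=0)                        # unconditional ensemble mean
member_mse = ((preds - truth) ** 2).mean(axis=1)
diversity = ((preds - ens) ** 2).mean(axis=1)   # spread around the ensemble

# ambiguity decomposition: MSE(ensemble) = mean member MSE - mean diversity
ens_mse = ((ens - truth) ** 2).mean()
print(ens_mse, member_mse.mean() - diversity.mean())
```

The identity holds sample by sample, which is why diverse members (large spread) lower the ensemble error below the average member error, exactly the point the abstract makes against unconditional averaging of near-identical members.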
Integral canonical models for Spin Shimura varieties
Pera, Keerthi Madapusi
2012-01-01
We construct regular integral canonical models for Shimura varieties attached to Spin groups at (possibly ramified) odd primes. We exhibit these models as schemes of 'relative PEL type' over integral canonical models of larger Spin Shimura varieties with good reduction. Work of Vasiu-Zink then shows that the classical Kuga-Satake construction extends over the integral model and that the integral models we construct are canonical in a very precise sense. We also construct good compactification...
Enhanced ensemble-based 4DVar scheme for data assimilation
Yang, Yin; Robinson, Cordelia; Heitz, Dominique; Mémin, Etienne
2015-01-01
Ensemble based optimal control schemes combine the components of ensemble Kalman filters and variational data assimilation (4DVar). They are trendy because they are easier to implement than 4DVar. In this paper, we evaluate a modified version of an ensemble based optimal control strategy for image data assimilation. This modified method is assessed with a Shallow Water model combined with synthetic data and original incomplete experimental depth sensor observations. ...
Black Hole Statistical Mechanics and The Angular Velocity Ensemble
Thomson, Mitchell; Dyer, Charles C.
2012-01-01
A new ensemble - the angular velocity ensemble - is derived using Jaynes' method of maximising entropy subject to prior information constraints. The relevance of the ensemble to black holes is motivated by a discussion of external parameters in statistical mechanics and their absence from the Hamiltonian of general relativity. It is shown how this leads to difficulty in deriving entropy as a function of state and recovering the first law of thermodynamics from the microcanonical and canonica...
A Localized Ensemble Kalman Smoother
Butala, Mark D.
2012-01-01
Numerous geophysical inverse problems prove difficult because the available measurements are indirectly related to the underlying unknown dynamic state and the physics governing the system may involve imperfect models or unobserved parameters. Data assimilation addresses these difficulties by combining the measurements and physical knowledge. The main challenge in such problems usually involves their high dimensionality and the standard statistical methods prove computationally intractable. This paper develops and addresses the theoretical convergence of a new high-dimensional Monte-Carlo approach called the localized ensemble Kalman smoother.
Visualizing ensembles in structural biology.
Melvin, Ryan L; Salsbury, Freddie R
2016-06-01
Displaying a single representative conformation of a biopolymer rather than an ensemble of states mistakenly conveys a static nature rather than the actual dynamic personality of biopolymers. However, there are few apparent options due to the fixed nature of print media. Here we suggest a standardized methodology for visually indicating the distribution width, standard deviation and uncertainty of ensembles of states with little loss of the visual simplicity of displaying a single representative conformation. Of particular note is that the visualization method employed clearly distinguishes between isotropic and anisotropic motion of polymer subunits. We also apply this method to ligand binding, suggesting a way to indicate the expected error in many high throughput docking programs when visualizing the structural spread of the output. We provide several examples in the context of nucleic acids and proteins with particular insights gained via this method. Such examples include investigating a therapeutic polymer of FdUMP (5-fluoro-2-deoxyuridine-5-O-monophosphate) - a topoisomerase-1 (Top1), apoptosis-inducing poison - and nucleotide-binding proteins responsible for ATP hydrolysis from Bacillus subtilis. We also discuss how these methods can be extended to any macromolecular data set with an underlying distribution, including experimental data such as NMR structures. PMID:27179343
Canonical Entropy and Phase Transition of Rotating Black Hole
Institute of Scientific and Technical Information of China (English)
ZHAO Ren; WU Yue-Qin; ZHANG Li-Chun
2008-01-01
Recently, the Hawking radiation of a black hole has been studied using the tunnel effect method, and the radiation spectrum of a black hole has been derived. By discussing the correction to the spectrum of the rotating black hole, we obtain the canonical entropy. The derived canonical entropy is equal to the sum of the Bekenstein-Hawking entropy and a correction term. The correction term near the critical point is different from the one elsewhere. This difference plays an important role in studying the phase transition of the black hole. The black hole thermal capacity diverges at the critical point; however, the canonical entropy is not a complex number at this point. Thus we conclude that the phase transition created by this critical point is a second-order phase transition. The black hole discussed is a five-dimensional Kerr-AdS black hole. We provide a basis for discussing thermodynamic properties of higher-dimensional rotating black holes.
Convolution theorems for the linear canonical transform and their applications
Institute of Scientific and Technical Information of China (English)
DENG Bing; TAO Ran; WANG Yue
2006-01-01
As generalization of the fractional Fourier transform (FRFT), the linear canonical transform (LCT) has been used in several areas, including optics and signal processing. Many properties for this transform are already known, but the convolution theorems, similar to the version of the Fourier transform, are still to be determined. In this paper, the authors derive the convolution theorems for the LCT, and explore the sampling theorem and multiplicative filter for the band limited signal in the linear canonical domain. Finally, the sampling and reconstruction formulas are deduced, together with the construction methodology for the above mentioned multiplicative filter in the time domain based on fast Fourier transform (FFT), which has much lower computational load than the construction method in the linear canonical domain.
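For reference, one common convention for the LCT with parameter matrix (a, b; c, d), ad − bc = 1 and b ≠ 0, is the following (conventions vary across the literature; this is a standard textbook form, not necessarily the exact one used in the paper):

```latex
L_{(a,b,c,d)}\{f\}(u)
  = \frac{1}{\sqrt{i\,2\pi b}}
    \int_{-\infty}^{\infty} f(t)\,
    \exp\!\left[\frac{i}{2b}\left(a t^{2} - 2 u t + d u^{2}\right)\right] dt .
```

The fractional Fourier transform is recovered for (a, b; c, d) = (cos θ, sin θ; −sin θ, cos θ), and the ordinary Fourier transform for θ = π/2, which is why the LCT is described above as a generalization of the FRFT.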
Generalized Gibbs ensemble in a nonintegrable system with an extensive number of local symmetries
Hamazaki, Ryusuke; Ikeda, Tatsuhiko N.; Ueda, Masahito
2016-03-01
We numerically study the unitary time evolution of a nonintegrable model of hard-core bosons with an extensive number of local Z2 symmetries. We find that the expectation values of local observables in the stationary state are described better by the generalized Gibbs ensemble (GGE) than by the canonical ensemble. We also find that the eigenstate thermalization hypothesis fails for the entire spectrum but holds true within each symmetry sector, which justifies the GGE. In contrast, if the model has only one global Z2 symmetry or a size-independent number of local Z2 symmetries, we find that the stationary state is described by the canonical ensemble. Thus, the GGE is necessary to describe the stationary state even in a nonintegrable system if it has an extensive number of local symmetries.
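The GGE invoked here has the standard form built from the conserved charges Î_k (the local Z2 symmetries of the model), with one Lagrange multiplier per charge:

```latex
\hat{\rho}_{\mathrm{GGE}}
  = \frac{1}{Z_{\mathrm{GGE}}}
    \exp\Bigl(-\sum_{k}\lambda_{k}\hat{I}_{k}\Bigr),
\qquad
Z_{\mathrm{GGE}} = \operatorname{Tr}\exp\Bigl(-\sum_{k}\lambda_{k}\hat{I}_{k}\Bigr),
```

where each λ_k is fixed by matching the conserved expectation value, Tr(ρ̂_GGE Î_k) = ⟨Î_k⟩ at t = 0. With an extensive number of such charges the GGE differs from the canonical ensemble, which retains only the multiplier conjugate to energy.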
Kernel canonical-correlation Granger causality for multiple time series
Wu, Guorong; Duan, Xujun; Liao, Wei; Gao, Qing; Chen, Huafu
2011-04-01
Canonical-correlation analysis as a multivariate statistical technique has been applied to multivariate Granger causality analysis to infer information flow in complex systems. It shows unique appeal and great superiority over the traditional vector autoregressive method, due to the simplified procedure that detects causal interaction between multiple time series, and the avoidance of potential model estimation problems. However, it is limited to the linear case. Here, we extend the framework of canonical correlation to include the estimation of multivariate nonlinear Granger causality for drawing inference about directed interaction. Its feasibility and effectiveness are verified on simulated data.
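Setting the kernel extension aside, the linear canonical correlations at the core of this framework can be computed with a short QR-plus-SVD routine (a generic sketch, not the authors' code; the synthetic time series below are assumptions for the example):

```python
import numpy as np

rng = np.random.default_rng(3)

def canonical_correlations(X, Y):
    """Linear canonical correlations between column-variable data matrices,
    computed via QR orthonormalization followed by an SVD."""
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    Qx, _ = np.linalg.qr(Xc)
    Qy, _ = np.linalg.qr(Yc)
    s = np.linalg.svd(Qx.T @ Qy, compute_uv=False)
    return np.clip(s, 0.0, 1.0)   # guard against rounding above 1

n = 500
X = rng.normal(size=(n, 3))
# Y shares one linear direction with X, plus two independent noise channels
Y = np.column_stack([X[:, 0] + 0.1 * rng.normal(size=n),
                     rng.normal(size=(n, 2))])
Z = rng.normal(size=(n, 3))       # fully independent control

print(canonical_correlations(X, Y)[0])   # close to 1
print(canonical_correlations(X, Z)[0])   # small
```

In the Granger-causality setting, such correlations are evaluated between the present of one set of series and the lagged past of another; the kernel variant of the paper replaces the linear feature spaces with kernel-induced ones.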
Directory of Open Access Journals (Sweden)
Karina Beatriz Lemes
2010-11-01
Full Text Available We attempt to show how we have been working on the reconstruction of the literary memory of the province of Misiones through the compilation of manuscripts by its most representative authors. For our reading, in dialogue with genetic criticism, we have drawn on the relations that Fernando Ainsa establishes between canon and periphery, spaces of memory and the construction of utopia. Ainsa conceives of writing as a genetic process that is personal, visceral and solitary in origin, a constant search for identity that is enriched in contact with the world, with the opening of frontiers. These connections have allowed us to interpret the social practices that founded aesthetic activities at a distance from Argentina's centers of power. This paper shows some findings of our ongoing research project dealing with the recuperation of literary memory in the province of Misiones by analysing a compilation of the literary manuscripts by the most representative authors of this northern region of Argentina. Here, we follow Fernando Ainsa's notions of canon and periphery, of memory spaces and construction of utopias. Ainsa sees the act of writing as a genetic process for it originates within a personal, visceral, and solitary realm. For Ainsa, writing is also a permanent search for identity which becomes richer when in contact with the world, when frontiers open up. These concepts allow us to interpret the social practices that gave birth to these aesthetic projects far away from Argentina's power centers.
Bayesian ensemble refinement by replica simulations and reweighting
Hummer, Gerhard
2015-01-01
We describe different Bayesian ensemble refinement methods, examine their interrelation, and discuss their practical application. With ensemble refinement, the properties of dynamic and partially disordered (bio)molecular structures can be characterized by integrating a wide range of experimental data, including measurements of ensemble-averaged observables. We start from a Bayesian formulation in which the posterior is a functional that ranks different configuration space distributions. By maximizing this posterior, we derive an optimal Bayesian ensemble distribution. For discrete configurations, this optimal distribution is identical to that obtained by the maximum entropy "ensemble refinement of SAXS" (EROS) formulation. Bayesian replica ensemble refinement enhances the sampling of relevant configurations by imposing restraints on averages of observables in coupled replica molecular dynamics simulations. We find that the strength of the restraint scales with the number of replicas and we show that this sca...
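A maximum-entropy reweighting step of the kind underlying EROS-style refinement can be sketched with a single observable: weights w_i ∝ exp(−λ s_i) are applied to the prior ensemble, and the multiplier λ is tuned until the weighted average matches the experimental value. This is a generic illustration with synthetic data, not the paper's method; the observable values and target are assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

# synthetic "ensemble": per-configuration values of one observable
s = rng.normal(1.0, 0.5, size=2000)   # prior ensemble average ~ 1.0
target = 0.8                          # experimental ensemble-averaged value

def weighted_mean(lam):
    """Reweighted ensemble average for Lagrange multiplier lam."""
    w = np.exp(-lam * (s - s.mean()))  # shift by the mean for stability
    w /= w.sum()
    return (w * s).sum()

# weighted_mean is monotonically decreasing in lam, so bisect for the root
lo, hi = -50.0, 50.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if weighted_mean(mid) > target:
        lo = mid        # mean still too high: need a larger multiplier
    else:
        hi = mid
lam = 0.5 * (lo + hi)
print(weighted_mean(lam))   # ~ target
```

With several observables this becomes a multidimensional optimization over the multipliers; the replica formulation described in the abstract instead imposes the averages as restraints during coupled simulations.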
Partition Function of Interacting Calorons Ensemble
Deldar, Sedigheh
2015-01-01
We present a method for computing the partition function of a caloron ensemble taking into account the interaction of calorons. We focus on the caloron-Dirac string interaction and show that the metric that Diakonov and Petrov offered works well in the limit where this interaction occurs. We suggest computing the correlation function of two Polyakov loops by applying Ewald's method.
Partition function of interacting calorons ensemble
Deldar, S.; Kiamari, M.
2016-01-01
We present a method for computing the partition function of a caloron ensemble taking into account the interaction of calorons. We focus on the caloron-Dirac string interaction and show that the metric that Diakonov and Petrov offered works well in the limit where this interaction occurs. We suggest computing the correlation function of two Polyakov loops by applying Ewald's method.
Simulations in generalized ensembles through noninstantaneous switches
Giovannelli, Edoardo; Cardini, Gianni; Chelli, Riccardo
2015-10-01
Generalized-ensemble simulations, such as replica exchange and serial generalized-ensemble methods, are powerful simulation tools to enhance sampling of free energy landscapes in systems with high energy barriers. In these methods, sampling is enhanced through instantaneous transitions of replicas, i.e., copies of the system, between different ensembles characterized by some control parameter associated with thermodynamical variables (e.g., temperature or pressure) or collective mechanical variables (e.g., interatomic distances or torsional angles). An interesting evolution of these methodologies has been proposed by replacing the conventional instantaneous (trial) switches of replicas with noninstantaneous switches, realized by varying the control parameter in a finite time and accepting the final replica configuration with a Metropolis-like criterion based on the Crooks nonequilibrium work (CNW) theorem. Here we revise these techniques focusing on their correlation with the CNW theorem in the framework of Markovian processes. An outcome of this report is the derivation of the acceptance probability for noninstantaneous switches in serial generalized-ensemble simulations, where we show that explicit knowledge of the time dependence of the weight factors entering such simulations is not necessary. A generalized relationship of the CNW theorem is also provided in terms of the underlying equilibrium probability distribution at a fixed control parameter. Illustrative calculations on a toy model are performed with serial generalized-ensemble simulations, especially focusing on the different behavior of instantaneous and noninstantaneous replica transition schemes.
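Schematically, for a switch between two ensembles at inverse temperature β performing nonequilibrium work W, the Crooks theorem relates the forward and reverse work distributions, and the Metropolis-like acceptance built on it reads (standard forms; the serial generalized-ensemble acceptance derived in the paper generalizes the second expression):

```latex
\frac{P_{F}(W)}{P_{R}(-W)} = e^{\beta\,(W - \Delta F)},
\qquad
P_{\mathrm{acc}} = \min\bigl[1,\; e^{-\beta\,(W - \Delta F)}\bigr].
```

An instantaneous switch is the limiting case in which W reduces to the potential-energy difference between the two ensembles, recovering the conventional replica-exchange criterion.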
AN ALGORITHM FOR JORDAN CANONICAL FORM OF A QUATERNION MATRIX
Institute of Scientific and Technical Information of China (English)
姜同松; 魏木生
2003-01-01
In this paper, we first introduce the concept of a companion vector and study the Jordan canonical forms of quaternion matrices using the methods of complex representation and companion vectors. We not only give a practical algorithm for the Jordan canonical form J of a quaternion matrix A, but also provide a practical algorithm for the corresponding nonsingular matrix P with P^{-1}AP = J.
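The algorithmic content of such a computation can be illustrated in the complex/rational case (the quaternion algorithm itself passes through the complex representation); a minimal sketch using SymPy's `jordan_form` on a standard textbook matrix, not the paper's quaternion algorithm:

```python
import sympy as sp

# A classic 4x4 matrix with a nontrivial Jordan block
# (eigenvalues 1, 2 and a defective double eigenvalue 4).
A = sp.Matrix([[ 5,  4,  2,  1],
               [ 0,  1, -1, -1],
               [-1, -1,  3,  0],
               [ 1,  1, -1,  2]])

# jordan_form returns (P, J) with A = P*J*P**-1,
# i.e. the similarity transform P**-1 * A * P equals J.
P, J = A.jordan_form()

assert sp.simplify(P.inv() * A * P - J) == sp.zeros(4, 4)
```

The quaternion case additionally requires working through the complex representation of the quaternion matrix, which is the main technical contribution of the paper.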
CANONICAL COMPUTATIONAL FORMS FOR AR 2-D SYSTEMS
ROCHA, P; WILLEMS, JC
1990-01-01
A canonical form for AR 2-D systems representations is introduced. This yields a method for computing the system trajectories by means of a line-by-line recursion, and displays some relevant information about the system structure such as the choice of inputs and initial conditions.
On the Canonical Formalism for a Higher-Curvature Gravity
Ezawa, Yasuo; Kiminami, Masahiko; Kajihara, Masahiro; Soda, Jiro; Yano, Tadasi
1999-01-01
Following the method of Buchbinder and Lyahovich, we carry out a canonical formalism for a higher-curvature gravity in which the Lagrangian density ${\cal L}$ is given in terms of a function of the scalar curvature $R$ as ${\cal L}=\sqrt{-\det g_{\mu\nu}}\,f(R)$.
A multisite seasonal ensemble streamflow forecasting technique
Bracken, Cameron; Rajagopalan, Balaji; Prairie, James
2010-03-01
We present a technique for providing seasonal ensemble streamflow forecasts at several locations simultaneously on a river network. The framework is an integration of two recent approaches: the nonparametric multimodel ensemble forecast technique and the nonparametric space-time disaggregation technique. The four main components of the proposed framework are as follows: (1) an index gauge streamflow is constructed as the sum of flows at all the desired spatial locations; (2) potential predictors of the spring season (April-July) streamflow at this index gauge are identified from the large-scale ocean-atmosphere-land system, including snow water equivalent; (3) the multimodel ensemble forecast approach is used to generate the ensemble flow forecast at the index gauge; and (4) the ensembles are disaggregated using a nonparametric space-time disaggregation technique resulting in forecast ensembles at the desired locations and for all the months within the season. We demonstrate the utility of this technique in the skillful forecasting of spring seasonal streamflows at four locations in the Upper Colorado River Basin at different lead times. Where applicable, we compare the forecasts to the Colorado Basin River Forecast Center's Ensemble Streamflow Prediction (ESP) and the Natural Resources Conservation Service "coordinated" forecast, which is a combination of the ESP, Statistical Water Supply, a principal component regression technique, and modeler knowledge. We find that overall, the proposed method is equally skillful to existing operational models while tending to better predict wet years. The forecasts from this approach can be a valuable input for efficient planning and management of water resources in the basin.
Nimon, Kim; Henson, Robin K.; Gates, Michael S.
2010-01-01
In the face of multicollinearity, researchers face challenges interpreting canonical correlation analysis (CCA) results. Although standardized function and structure coefficients provide insight into the canonical variates produced, they fall short when researchers want to fully report canonical effects. This article revisits the interpretation of…
Bayesian ensemble refinement by replica simulations and reweighting
Hummer, Gerhard; Köfinger, Jürgen
2015-12-01
We describe different Bayesian ensemble refinement methods, examine their interrelation, and discuss their practical application. With ensemble refinement, the properties of dynamic and partially disordered (bio)molecular structures can be characterized by integrating a wide range of experimental data, including measurements of ensemble-averaged observables. We start from a Bayesian formulation in which the posterior is a functional that ranks different configuration space distributions. By maximizing this posterior, we derive an optimal Bayesian ensemble distribution. For discrete configurations, this optimal distribution is identical to that obtained by the maximum entropy "ensemble refinement of SAXS" (EROS) formulation. Bayesian replica ensemble refinement enhances the sampling of relevant configurations by imposing restraints on averages of observables in coupled replica molecular dynamics simulations. We show that the strength of the restraints should scale linearly with the number of replicas to ensure convergence to the optimal Bayesian result in the limit of infinitely many replicas. In the "Bayesian inference of ensembles" method, we combine the replica and EROS approaches to accelerate the convergence. An adaptive algorithm can be used to sample directly from the optimal ensemble, without replicas. We discuss the incorporation of single-molecule measurements and dynamic observables such as relaxation parameters. The theoretical analysis of different Bayesian ensemble refinement approaches provides a basis for practical applications and a starting point for further investigations.
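The maximum-entropy ("EROS"-type) reweighting step described above can be sketched in a few lines of NumPy: weights w_i ∝ exp(-λ o_i) are chosen so the reweighted ensemble average of an observable matches its experimental target (function and variable names here are illustrative, not the authors' code):

```python
import numpy as np

def maxent_reweight(obs, target, lam_lo=-50.0, lam_hi=50.0, tol=1e-10):
    """Weights w_i proportional to exp(-lam * obs_i), with lam found by
    bisection so that the weighted average of obs equals target
    (minimum relative-entropy reweighting of a discrete ensemble)."""
    def avg(lam):
        w = np.exp(-lam * (obs - obs.mean()))  # shift exponent for stability
        w /= w.sum()
        return w, float(w @ obs)
    for _ in range(200):
        lam = 0.5 * (lam_lo + lam_hi)
        w, a = avg(lam)
        if abs(a - target) < tol:
            break
        # avg(lam) is strictly decreasing in lam, so move the bracket
        if a > target:
            lam_lo = lam
        else:
            lam_hi = lam
    return w

obs = np.array([1.0, 2.0, 3.0, 4.0])   # per-configuration observable values
w = maxent_reweight(obs, target=2.2)    # experimental ensemble average
assert abs(float(w @ obs) - 2.2) < 1e-6
```

The replica formulation instead enforces the same average through harmonic restraints on coupled simulations, with the restraint strength scaled linearly in the number of replicas, as the abstract notes.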
Kalinovsky, Yakiv O.; Boyarinova, Yuliya E.; Khitsko, Iana V.
2015-01-01
A digital filter construction method that is optimal with respect to parametric sensitivity, based on the use of non-canonical hypercomplex number systems, is proposed and investigated. It is shown that using a non-canonical hypercomplex number system with a greater number of non-zero structure constants in its multiplication table can significantly improve the sensitivity of the digital filter.
The Basic Concepts of the General Linear Model (GLM): Canonical Correlation Analysis (CCA) as a GLM.
Kimbell, Anne-Marie
This paper illustrates how canonical correlation analysis can be used to implement all the parametric tests that canonical methods subsume as special cases. The point is heuristic: all analyses are correlational, apply weights to measured variables to create synthetic variables, and require the interpretation of both weights and structure…
The canon: an Old Catholic church structure?
Smit, P.B.A.
2011-01-01
On 30 November 2011, theologian Prof. Dr. Peter-Ben Smit delivers his inaugural lecture at Utrecht University. In it, he examines how the canon of the New Testament came into being within the early church, and what function the canon served in the interpretation, or exegesis, of Scripture. This subject came
CANONICAL EXTENSIONS OF SYMMETRIC LINEAR RELATIONS
Sandovici, Adrian; Davidson, KR; Gaspar, D; Stratila, S; Timotin, D; Vasilescu, FH
2006-01-01
The concept of canonical extension of Hermitian operators has been recently introduced by A. Kuzhel. This paper deals with a generalization of this notion to the case of symmetric linear relations. Namely, canonical regular extensions of symmetric linear relations in Hilbert spaces are studied. The
The Current Canon in British Romantics Studies.
Linkin, Harriet Kramer
1991-01-01
Describes and reports on a survey of 164 U.S. universities to ascertain what is taught as the current canon of British Romantic literature. Asserts that the canon may now include Mary Shelley with the former standard six major male Romantic poets, indicating a significant emergence of a feminist perspective on British Romanticism in the classroom.…
UNIVARIATE DECOMPOSE-ENSEMBLE METHOD BASED MILK DEMAND FORECASTING
Institute of Scientific and Technical Information of China (English)
王帅; 汤铃; 余乐安
2013-01-01
Prediction of future market demand for milk is important for stabilizing milk prices, developing marketing strategies and making production planning decisions. This paper proposes a novel univariate decompose-ensemble methodology which uses ensemble empirical mode decomposition (EEMD), wavelet decomposition, and least squares support vector regression (LSSVR) to predict milk consumption in China. At the same time, the single LSSVR method is applied for comparison purposes. In the decompose-ensemble methods, the EEMD and wavelet decomposition methods are first used to decompose the original data, and then the LSSVR approach is used to predict the separated components. Finally, the prediction results of the different components are combined to formulate the ensemble result. The forecasting results indicate that milk demand from 2010 to 2012 will increase. Based on this result, the related departments should take action to ensure the healthy development of the dairy market in China.
Effective Visualization of Temporal Ensembles.
Hao, Lihua; Healey, Christopher G; Bass, Steffen A
2016-01-01
An ensemble is a collection of related datasets, called members, built from a series of runs of a simulation or an experiment. Ensembles are large, temporal, multidimensional, and multivariate, making them difficult to analyze. Another important challenge is visualizing ensembles that vary both in space and time. Initial visualization techniques displayed ensembles with a small number of members, or presented an overview of an entire ensemble, but without potentially important details. Recently, researchers have suggested combining these two directions, allowing users to choose subsets of members to visualize. This manual selection process places the burden on the user to identify which members to explore. We first introduce a static ensemble visualization system that automatically helps users locate interesting subsets of members to visualize. We next extend the system to support analysis and visualization of temporal ensembles. We employ 3D shape comparison, cluster tree visualization, and glyph-based visualization to represent different levels of detail within an ensemble. This strategy is used to provide two approaches for temporal ensemble analysis: (1) segment-based ensemble analysis, which captures important shape-transition time-steps, clusters groups of similar members, and identifies common shape changes over time across multiple members; and (2) time-step-based ensemble analysis, which assumes ensemble members are aligned in time, combining similar shapes at common time-steps. Both approaches enable users to interactively visualize and analyze a temporal ensemble from different perspectives at different levels of detail. We demonstrate our techniques on an ensemble studying matter transition from hadronic gas to quark-gluon plasma during gold-on-gold particle collisions. PMID:26529728
Canon, Jubilees 23 and Psalm 90
Directory of Open Access Journals (Sweden)
Pieter M. Venter
2014-02-01
There never existed only one form of the biblical canon. This can be seen in the versions as well as editions of the Hebrew and Greek Bibles. History and circumstances played a central role in the gradual growth of eventually different forms of the biblical canon. This process can be studied using the discipline of intertextuality. There always was a movement from traditum to traditio in the growth of these variant forms of biblical canon. This can be seen in an analysis of the intertextuality in Jubilees 23:8–32. The available canon of the day was interpreted there, not according to a specific demarcated volume of canonical scriptures, but in line with the theology presented in those materials, especially that of Psalm 90.
Control and Synchronization of Neuron Ensembles
Li, Jr-Shin; Ruths, Justin
2011-01-01
Synchronization of oscillations is a phenomenon prevalent in natural, social, and engineering systems. Controlling synchronization of oscillating systems is motivated by a wide range of applications from neurological treatment of Parkinson's disease to the design of neurocomputers. In this article, we study the control of an ensemble of uncoupled neuron oscillators described by phase models. We examine controllability of such a neuron ensemble for various phase models and, furthermore, study the related optimal control problems. In particular, by employing Pontryagin's maximum principle, we analytically derive optimal controls for spiking single- and two-neuron systems, and analyze the applicability of the latter to an ensemble system. Finally, we present a robust computational method for optimal control of spiking neurons based on pseudospectral approximations. The methodology developed here is universal to the control of general nonlinear phase oscillators.
Efficient inference of protein structural ensembles
Lane, Thomas J; Beauchamp, Kyle A; Pande, Vijay S
2014-01-01
It is becoming clear that traditional, single-structure models of proteins are insufficient for understanding their biological function. Here, we outline one method for inferring, from experiments, not only the most common structure a protein adopts (native state), but the entire ensemble of conformations the system can adopt. Such ensemble models are necessary to understand intrinsically disordered proteins, enzyme catalysis, and signaling. We suggest that the most difficult aspect of generating such a model will be finding a small set of configurations to accurately model structural heterogeneity and present one way to overcome this challenge.
Reconstruction of the coupling architecture in an ensemble of coupled time-delay systems
Sysoev, I. V.; Ponomarenko, V. I.; Prokhorov, M. D.
2012-08-01
A method for reconstructing the coupling architecture and coupling values in an ensemble of time-delay interacting systems with an arbitrary number of couplings between ensemble elements is proposed. The method is based on reconstruction of the model equations of the ensemble elements and diagnostics of coupling significance by successive trial exclusion or addition of coupling coefficients to the model.
Total probabilities of ensemble runoff forecasts
Olav Skøien, Jon; Bogner, Konrad; Salamon, Peter; Smith, Paul; Pappenberger, Florian
2016-04-01
Ensemble forecasting has for a long time been used as a method in meteorological modelling to indicate the uncertainty of the forecasts. However, as the ensembles often exhibit both bias and dispersion errors, it is necessary to calibrate and post-process them. Two of the most common methods for this are Bayesian Model Averaging (Raftery et al., 2005) and Ensemble Model Output Statistics (EMOS) (Gneiting et al., 2005). There are also methods for regionalizing these methods (Berrocal et al., 2007) and for incorporating the correlation between lead times (Hemri et al., 2013). Engeland and Steinsland (2014) developed a framework which can estimate post-processing parameters which are different in space and time, but still can give a spatially and temporally consistent output. However, their method is computationally complex for our large number of stations, and cannot directly be regionalized in the way we would like, so we suggest a different path below. The target of our work is to create a mean forecast with uncertainty bounds for a large number of locations in the framework of the European Flood Awareness System (EFAS - http://www.efas.eu). We are therefore more interested in improving the forecast skill for high flows than the forecast skill of lower runoff levels. EFAS uses a combination of ensemble forecasts and deterministic forecasts from different forecasters to force a distributed hydrologic model and to compute runoff ensembles for each river pixel within the model domain. Instead of showing the mean and the variability of each forecast ensemble individually, we will now post-process all model outputs to find a total probability, the post-processed mean and uncertainty of all ensembles. The post-processing parameters are first calibrated for each calibration location, but assuring that they have some spatial correlation, by adding a spatial penalty in the calibration process. This can in some cases have a slight negative
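As a minimal illustration of the post-processing step (in the spirit of EMOS, reduced to an affine bias correction of the ensemble mean on synthetic data; this is not the EFAS implementation and omits the spatial penalty):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training data: truth and a biased ensemble of 20 members.
n, m = 500, 20
truth = rng.normal(10.0, 3.0, n)
ens = truth[:, None] + 1.5 + rng.normal(0.0, 1.0, (n, m))  # bias of +1.5

ens_mean = ens.mean(axis=1)

# EMOS-style affine correction of the ensemble mean: obs ~ a + b * ens_mean.
b, a = np.polyfit(ens_mean, truth, 1)
corrected = a + b * ens_mean

raw_bias = np.mean(ens_mean - truth)
corr_bias = np.mean(corrected - truth)
assert abs(corr_bias) < abs(raw_bias)  # calibration removes the bias
```

Full EMOS additionally fits the predictive spread (e.g. variance as an affine function of the ensemble variance), typically by minimizing the CRPS rather than by least squares.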
Multilevel ensemble Kalman filtering
Hoel, Hakon
2016-06-14
This work embeds a multilevel Monte Carlo sampling strategy into the Monte Carlo step of the ensemble Kalman filter (EnKF) in the setting of finite dimensional signal evolution and noisy discrete-time observations. The signal dynamics is assumed to be governed by a stochastic differential equation (SDE), and a hierarchy of time grids is introduced for multilevel numerical integration of that SDE. The resulting multilevel EnKF is proved to asymptotically outperform EnKF in terms of computational cost versus approximation accuracy. The theoretical results are illustrated numerically.
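The multilevel scheme builds on the standard EnKF analysis step, which for a stochastic (perturbed-observation) filter can be sketched as follows; this is a toy single-level sketch, not the paper's multilevel estimator:

```python
import numpy as np

rng = np.random.default_rng(1)

def enkf_analysis(Xf, y, H, R):
    """Stochastic EnKF analysis step.
    Xf: (n, N) forecast ensemble; y: (p,) observation;
    H: (p, n) observation operator; R: (p, p) observation-error covariance."""
    n, N = Xf.shape
    A = Xf - Xf.mean(axis=1, keepdims=True)
    Pf = A @ A.T / (N - 1)                            # sample covariance
    K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)    # Kalman gain
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, N).T
    return Xf + K @ (Y - H @ Xf)                      # perturbed-obs update

# Toy 2-state system; only the first component is observed.
Xf = rng.normal(0.0, 2.0, (2, 100))
H = np.array([[1.0, 0.0]])
R = np.array([[0.1]])
Xa = enkf_analysis(Xf, np.array([1.0]), H, R)

# The analysis mean of the observed state moves toward the observation.
assert abs(Xa[0].mean() - 1.0) < abs(Xf[0].mean() - 1.0)
```

The multilevel EnKF replaces the single Monte Carlo sample above with a telescoping sum of coupled ensembles integrated on a hierarchy of time grids.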
Ensemble Data Assimilation Without Ensembles: Methodology and Application to Ocean Data Assimilation
Keppenne, Christian L.; Rienecker, Michele M.; Kovach, Robin M.; Vernieres, Guillaume
2013-01-01
Two methods to estimate background error covariances for data assimilation are introduced. While both share properties with the ensemble Kalman filter (EnKF), they differ from it in that they do not require the integration of multiple model trajectories. Instead, all the necessary covariance information is obtained from a single model integration. The first method is referred to as SAFE (Space Adaptive Forecast error Estimation) because it estimates error covariances from the spatial distribution of model variables within a single state vector. It can thus be thought of as sampling an ensemble in space. The second method, named FAST (Flow Adaptive error Statistics from a Time series), constructs an ensemble sampled from a moving window along a model trajectory. The underlying assumption in these methods is that forecast errors in data assimilation are primarily phase errors in space and/or time.
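The FAST idea of sampling an ensemble from a moving window along a single trajectory can be illustrated in a few lines; this is a toy sketch of the windowing step only, not the operational method:

```python
import numpy as np

def fast_covariance(trajectory, window):
    """FAST-like estimate: treat the last `window` states along a single
    model trajectory as an ensemble and estimate a background-error
    covariance from their spread.  trajectory: (T, n) array of states."""
    ens = trajectory[-window:]                 # (window, n) pseudo-ensemble
    anom = ens - ens.mean(axis=0)
    return anom.T @ anom / (window - 1)

# Toy trajectory: two correlated state variables evolving in time.
t = np.linspace(0.0, 1.0, 200)
traj = np.column_stack([np.sin(2 * np.pi * t), 0.5 * np.sin(2 * np.pi * t)])

B = fast_covariance(traj, window=30)
assert B.shape == (2, 2)
assert B[0, 1] > 0  # the two variables co-vary within the window
```

SAFE applies the same sample-covariance estimate across space within a single state vector instead of across a time window.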
The canonical form of the Rabi hamiltonian
Szopa, M; Ceulemans, A; Szopa, Marek; Mys, Geert; Ceulemans, Arnout
1996-01-01
The Rabi Hamiltonian, describing the coupling of a two-level system to a single quantized boson mode, is studied in the Bargmann-Fock representation. The corresponding system of differential equations is transformed into a canonical form in which all regular singularities between zero and infinity have been removed. The canonical or Birkhoff-transformed equations give rise to a two-dimensional eigenvalue problem, involving the energy and a transformational parameter which affects the coupling strength. The known isolated exact solutions of the Rabi Hamiltonian are found to correspond to the uncoupled form of the canonical system.
Tibetan Song and Dance Ensemble
Institute of Scientific and Technical Information of China (English)
1996-01-01
THE chief members of the Tibetan Song and Dance Ensemble are Tibetan, but they also include Hui, Lhoba and Monba artists. This ensemble mainly performs Tibetan traditional music, dance and Tibetan opera. Programs can be divided into three categories: folk, traditional palace and monastery styles. The program of this ensemble includes the Tibetan symphony instrumental suite "Ceremony in the Snowy Region," the palace dance "Karer," and passages of the traditional Tibetan
Institute of Scientific and Technical Information of China (English)
朱群雄; 赵乃伟; 徐圆
2012-01-01
Chemical processes are complex, and traditional neural network models usually cannot achieve satisfactory accuracy for them. Selective neural network ensembles are an effective way to enhance the generalization accuracy of networks, but there are some problems, e.g., the lack of a unified definition of diversity among component neural networks, and the difficulty of improving accuracy by selection when the diversity of the available networks is small. In this study, the output errors of the networks are vectorized, the diversity of the networks is defined based on these error vectors, and the size of the ensemble is analyzed. Then an error-vectorization-based selective neural network ensemble (EVSNE) is proposed, in which the error vector of each network can offset those of the other networks by training the component networks in order; thus the component networks have large diversity. Experiments and comparisons on standard data sets and an actual chemical process data set for the production of high-density polyethylene demonstrate that EVSNE performs better in generalization ability.
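The selection step can be illustrated with precomputed error vectors and greedy forward selection of a simple-average ensemble; this is an illustrative stand-in, not the EVSNE training procedure, which instead trains networks so that their error vectors offset each other:

```python
import numpy as np

rng = np.random.default_rng(2)

# Error vectors of 6 candidate models on 50 validation points
# (stand-ins for the per-network output errors in the paper).
errors = rng.normal(0.0, 1.0, (6, 50))

def ensemble_mse(subset):
    """MSE of the simple-average ensemble built from the models in subset."""
    return float(np.mean(errors[list(subset)].mean(axis=0) ** 2))

# Greedy forward selection: start from the best single model and keep
# adding whichever model most reduces the ensemble MSE.
selected = [int(np.argmin((errors ** 2).mean(axis=1)))]
improved = True
while improved:
    improved = False
    best = ensemble_mse(selected)
    for j in range(len(errors)):
        if j not in selected and ensemble_mse(selected + [j]) < best:
            best, pick = ensemble_mse(selected + [j]), j
            improved = True
    if improved:
        selected.append(pick)

# The selected ensemble is never worse than the best single model.
assert ensemble_mse(selected) <= min((errors ** 2).mean(axis=1))
```

When the error vectors are diverse (here, independent), averaging cancels errors and the greedy subset beats any single model; when they are similar, selection gains little, which is exactly the problem the abstract raises.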
Interpolation of property-values between electron numbers is inconsistent with ensemble averaging
Miranda-Quintana, Ramón Alain; Ayers, Paul W.
2016-06-01
In this work we explore the physical foundations of models that study the variation of the ground state energy with respect to the number of electrons (E vs. N models), in terms of general grand-canonical (GC) ensemble formulations. In particular, we focus on E vs. N models that interpolate the energy between states with integer number of electrons. We show that if the interpolation of the energy corresponds to a GC ensemble, it is not differentiable. Conversely, if the interpolation is smooth, then it cannot be formulated as any GC ensemble. This proves that interpolation of electronic properties between integer electron numbers is inconsistent with any form of ensemble averaging. This emphasizes the role of derivative discontinuities and the critical role of a subsystem's surroundings in determining its properties.
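The non-differentiability of the GC-ensemble interpolation is the standard zero-temperature piecewise-linear result (a textbook sketch, not reproduced from the paper):

```latex
% Zero-temperature GC ensemble between integers N_0 and N_0 + 1:
E(N) = (1-\omega)\,E(N_0) + \omega\,E(N_0+1), \qquad \omega = N - N_0 \in [0,1].
% Piecewise linearity forces a derivative discontinuity at integer N:
\left.\frac{\partial E}{\partial N}\right|_{N_0^-} = E(N_0) - E(N_0-1) = -I,
\qquad
\left.\frac{\partial E}{\partial N}\right|_{N_0^+} = E(N_0+1) - E(N_0) = -A,
```

so the slope jumps by I − A at each integer; any interpolation that is smooth there cannot arise from such an ensemble, which is the dichotomy the abstract proves in general.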
Ensemble Forecast: A New Approach to Uncertainty and Predictability
Institute of Scientific and Technical Information of China (English)
无
2005-01-01
Ensemble techniques have been used to generate daily numerical weather forecasts since the 1990s in numerical centers around the world due to the increase in computation ability. One of the main purposes of numerical ensemble forecasts is to try to assimilate the initial uncertainty (initial error) and the forecast uncertainty (forecast error) by applying either the initial perturbation method or the multi-model/multiphysics method. In fact, the mean of an ensemble forecast offers a better forecast than a deterministic (or control) forecast after a short lead time (3-5 days) for global modelling applications. There is about a 1-2-day improvement in the forecast skill when using an ensemble mean instead of a single forecast at longer lead times. The skillful forecast (an anomaly correlation of 65% and above) can be extended to 8 days (or longer) by present-day ensemble forecast systems. Furthermore, ensemble forecasts can deliver a probabilistic forecast to the users, which is based on the probability density function (PDF) instead of a single-value forecast from a traditional deterministic system. It has long been recognized that the ensemble forecast not only improves our weather forecast predictability but also offers a remarkable forecast for the future uncertainty, such as the relative measure of predictability (RMOP) and probabilistic quantitative precipitation forecast (PQPF). Not surprisingly, the success of the ensemble forecast and its wide application greatly increase the confidence of model developers and research communities.
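The core claim, that averaging members with independent errors beats a single deterministic run, is easy to demonstrate on synthetic data (a toy sketch, not an NWP experiment):

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy "forecasts": truth plus independent errors for 20 members
# over 1000 forecast cases.
truth = rng.normal(0.0, 1.0, 1000)
members = truth + rng.normal(0.0, 1.0, (20, 1000))

def rmse(forecast):
    return float(np.sqrt(np.mean((forecast - truth) ** 2)))

single = rmse(members[0])              # a deterministic/control forecast
ens_mean = rmse(members.mean(axis=0))  # the ensemble-mean forecast

# Independent errors average out: roughly a 1/sqrt(20) error reduction.
assert ens_mean < single
```

In real systems member errors are correlated, so the gain is smaller than the 1/sqrt(N) ideal, which is consistent with the 1-2-day skill improvement quoted above.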
Canonical variables and quasilocal energy in general relativity
Lau, Stephen
1993-01-01
Recently Brown and York have devised a new method for defining quasilocal energy in general relativity. Their method yields expressions for the quasilocal energy and momentum surface densities associated with the two-boundary of a spacelike slice of a spatially bounded spacetime. These expressions are essentially Arnowitt-Deser-Misner variables, but with canonical conjugacy defined with respect to the time history of the two-boundary. This paper introduces Ashtekar-type variables on the time ...
Nandi, Debottam
2016-01-01
In this work, we present a consistent Hamiltonian analysis of cosmological perturbations for generalized non-canonical scalar fields. In order to do so, we introduce a new phase-space variable that is uniquely defined for different non-canonical scalar fields. We also show that this is the simplest and most efficient way of expressing the Hamiltonian. We extend the Hamiltonian approach of [arXiv:1512.02539] to non-canonical scalar fields and obtain a new definition of the speed of sound in phase space. In order to invert the generalized phase-space Hamilton's equations to the Euler-Lagrange equations of motion, we prescribe a general inversion formula and show that our approach for non-canonical scalar fields is consistent. We also obtain the third- and fourth-order interaction Hamiltonians for generalized non-canonical scalar fields and briefly discuss the extension of our method to generalized Galilean scalar fields.
The Literary Canon in the Age of New Media
DEFF Research Database (Denmark)
Backe, Hans-Joachim
2015-01-01
The article offers a comparative overview of the diverging courses of the canon debate in Anglophone and Germanophone contexts. While the Anglophone canon debate has focused on the politics of canon composition, the Germanophone canon debate has been more concerned with the malleability and media…
Karmakar, Partha; Das, Pradip Kumar; Mondal, Seema Sarkar; Karmakar, Sougata; Mazumdar, Debasis
2010-10-01
Pb pollution from automobile exhausts around highways is a persistent problem in India. Pb intoxication in the mammalian body is a complex phenomenon which is influenced by agonistic and antagonistic interactions of several other heavy metals and micronutrients. An attempt has been made to study the association between Pb and Zn accumulation in different physiological systems of cattle (n = 200) by application of both canonical correlation and canonical correspondence analyses. Pb was estimated from plasma, liver, bone, muscle, kidney, blood and milk, whereas Zn was measured from all these systems except bone, blood and milk. Both statistical techniques demonstrated that there was a strong association among blood-Pb, liver-Zn, kidney-Zn and muscle-Zn. From these observations, it can be assumed that Zn accumulation in cattle muscle, liver and kidney directs Pb mobilization from those organs, which in turn increases the Pb pool in blood. This indicates antagonistic activity of Zn towards the accumulation of Pb. Although there were some contradictions between the observations obtained from the two different statistical methods, the overall pattern of Pb accumulation in various organs as influenced by Zn was the same. This is mainly due to the fact that canonical correlation is actually a special case of canonical correspondence analysis in which a linear relationship is assumed between two groups of variables instead of a Gaussian relationship.
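For reference, canonical correlations of two multivariate data sets can be computed by whitening each block with a QR decomposition and taking singular values (a generic CCA sketch; the variables below are synthetic stand-ins for the Pb/Zn measurements):

```python
import numpy as np

rng = np.random.default_rng(4)

def cca(X, Y):
    """Canonical correlations of column-centred data via whitening + SVD."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    qx, _ = np.linalg.qr(X)          # orthonormal basis for each block
    qy, _ = np.linalg.qr(Y)
    s = np.linalg.svd(qx.T @ qy, compute_uv=False)
    return np.clip(s, 0.0, 1.0)     # singular values = canonical corrs

# Two "views" sharing one latent factor, plus a pure-noise variable each.
z = rng.normal(size=200)
X = np.column_stack([z + 0.1 * rng.normal(size=200), rng.normal(size=200)])
Y = np.column_stack([z + 0.1 * rng.normal(size=200), rng.normal(size=200)])

rho = cca(X, Y)
assert rho[0] > 0.9   # shared factor gives a strong first canonical corr.
assert rho[-1] < 0.5  # the noise dimensions are only weakly correlated
```

Canonical correspondence analysis differs in that it assumes unimodal (Gaussian-shaped) rather than linear responses, which is the distinction the abstract draws.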
Ensemble Deep Learning for Biomedical Time Series Classification
Directory of Open Access Journals (Sweden)
Lin-peng Jin
2016-01-01
Ensemble learning has been proved to improve the generalization ability effectively in both theory and practice. In this paper, we briefly outline the current status of research on it first. Then, a new deep neural network-based ensemble method that integrates filtering views, local views, distorted views, explicit training, implicit training, subview prediction, and Simple Average is proposed for biomedical time series classification. Finally, we validate its effectiveness on the Chinese Cardiovascular Disease Database containing a large number of electrocardiogram recordings. The experimental results show that the proposed method has certain advantages compared to some well-known ensemble methods, such as Bagging and AdaBoost.
Filtering single atoms from Rydberg blockaded mesoscopic ensembles
Petrosyan, David; Mølmer, Klaus
2015-01-01
We propose an efficient method to filter out single atoms from trapped ensembles with an unknown number of atoms. The method employs stimulated adiabatic passage to reversibly transfer a single atom to the Rydberg state, which blocks subsequent Rydberg excitation of all the other atoms within the ensemble. This triggers the excitation of the Rydberg-blockaded atoms to short-lived intermediate states and their subsequent decay to untrapped states. Using an auxiliary microwave field to carefully engineer the dissipation, we obtain a nearly deterministic single-atom source. Our method is applicable to small atomic ensembles in individual microtraps and in lattice arrays.
Subsets of configurations and canonical partition functions
DEFF Research Database (Denmark)
Bloch, J.; Bruckmann, F.; Kieburg, M.;
2013-01-01
We explain the physical nature of the subset solution to the sign problem in chiral random matrix theory: the subset sum over configurations is shown to project out the canonical determinant with zero quark charge from a given configuration. As the grand canonical chiral random matrix partition function is independent of the chemical potential, the zero-quark-charge sector provides the full result. © 2013 American Physical Society.
Canonical equations of Hamilton with beautiful symmetry
Liang, Guo; Guo, Qi
2012-01-01
The Hamiltonian formulation plays an essential role in constructing the framework of modern physics. In this paper, a new form of the canonical equations of Hamilton with complete symmetry is obtained, which is valid not only for first-order differential systems but also for second-order ones. The conventional form of the canonical equations without this symmetry [Goldstein et al., Classical Mechanics, 3rd ed., Addison-Wesley, 2001] is valid only for second-order differential systems.
Construction of High-accuracy Ensemble of Classifiers
Directory of Open Access Journals (Sweden)
Hedieh Sajedi
2014-04-01
Full Text Available Several methods have been developed to construct ensembles. Some of these, such as Bagging and Boosting, are meta-learners, i.e. they can be applied to any base classifier. Methods should be combined so that the classifiers cover each other's weaknesses. In an ensemble, the output of several classifiers is useful only when they disagree on some inputs; the degree of disagreement is called the diversity of the ensemble. Another factor that plays a significant role in ensemble performance is the accuracy of the base classifiers. All procedures for constructing ensembles seek a balance between these two parameters, and successful methods achieve a better balance. The diversity of the members of an ensemble is known to be an important factor in determining its generalization error. In this paper, we present a new approach for generating ensembles. The proposed approach uses Bagging and Boosting as generators of base classifiers. The classifiers are then partitioned by means of a clustering algorithm. We introduce a selection phase for constructing the final ensemble and propose three selection methods for this phase. The first selects a classifier at random from each cluster; the second selects the most accurate classifier from each cluster; and the third selects the classifier nearest to the center of each cluster. The results of experiments on well-known datasets demonstrate the strength of our proposed approach, especially when selecting the most accurate classifier from each cluster and employing the Bagging generator.
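The second selection variant (most accurate classifier per cluster) can be sketched as follows. The bagged decision trees, the K-means clustering of validation predictions, and all sizes are illustrative assumptions, not the paper's exact setup.

```python
# Sketch: generate base classifiers with bagging, cluster them by their
# validation-set prediction vectors, keep the most accurate member of each
# cluster, and combine the survivors by majority vote.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=800, n_features=20, random_state=1)
Xtr, Xval, ytr, yval = train_test_split(X, y, random_state=1)

rng = np.random.default_rng(1)
trees = []
for _ in range(20):                              # bagging: bootstrap resamples
    idx = rng.integers(0, len(Xtr), len(Xtr))
    trees.append(DecisionTreeClassifier(random_state=0).fit(Xtr[idx], ytr[idx]))

preds = np.array([t.predict(Xval) for t in trees])          # (20, n_val)
labels = KMeans(n_clusters=5, n_init=10, random_state=1).fit_predict(preds)

selected = []
for c in range(5):                               # most accurate member per cluster
    members = np.where(labels == c)[0]
    best = members[np.argmax([(preds[m] == yval).mean() for m in members])]
    selected.append(trees[best])

vote = np.mean([t.predict(Xval) for t in selected], axis=0) > 0.5
acc = (vote == yval).mean()
print(f"selected-ensemble accuracy: {acc:.3f}")
```

Clustering on prediction vectors groups classifiers that err in similar ways, so picking one per cluster preserves diversity while pruning redundancy.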
Refining inflation using non-canonical scalars
Energy Technology Data Exchange (ETDEWEB)
Unnikrishnan, Sanil; Sahni, Varun [Inter-University Centre for Astronomy and Astrophysics, Post Bag 4, Ganeshkhind, Pune 411 007 (India); Toporensky, Aleksey, E-mail: sanil@iucaa.ernet.in, E-mail: varun@iucaa.ernet.in, E-mail: atopor@rambler.ru [Sternberg Astronomical Institute, Moscow State University, Universitetsky Prospekt, 13, Moscow 119992 (Russian Federation)
2012-08-01
This paper revisits the Inflationary scenario within the framework of scalar field models possessing a non-canonical kinetic term. We obtain closed form solutions for all essential quantities associated with chaotic inflation including slow roll parameters, scalar and tensor power spectra, spectral indices, the tensor-to-scalar ratio, etc. We also examine the Hamilton-Jacobi equation and demonstrate the existence of an inflationary attractor. Our results highlight the fact that non-canonical scalars can significantly improve the viability of inflationary models. They accomplish this by decreasing the tensor-to-scalar ratio while simultaneously increasing the value of the scalar spectral index, thereby redeeming models which are incompatible with the cosmic microwave background (CMB) in their canonical version. For instance, the non-canonical version of the chaotic inflationary potential, V(φ) ∼ λφ{sup 4}, is found to agree with observations for values of λ as large as unity! The exponential potential can also provide a reasonable fit to CMB observations. A central result of this paper is that steep potentials (such as V∝φ{sup −n}) usually associated with dark energy, can drive inflation in the non-canonical setting. Interestingly, non-canonical scalars violate the consistency relation r = −8n{sub T}, which emerges as a smoking gun test for this class of models.
Active Diverse Learning Neural Network Ensemble Approach for Power Transformer Fault Diagnosis
Directory of Open Access Journals (Sweden)
Yu Xu
2010-10-01
Full Text Available An ensemble learning algorithm is proposed in this paper based on an analysis of the error function of neural network ensembles, in which individual neural networks are actively guided to learn diversity. By decomposing the ensemble error function, error-correlation terms are included in the learning criterion of the individual networks, and all individual networks in the ensemble are led to learn diversity through cooperative training. The method was applied to Dissolved Gas Analysis-based fault diagnosis of power transformers. Experimental results show that the algorithm has higher accuracy than the IEC method and a BP network. In addition, its performance is more stable than that of conventional ensemble methods such as Bagging and Boosting.
Ensemble-based Probabilistic Forecasting at Horns Rev
DEFF Research Database (Denmark)
Pinson, Pierre; Madsen, Henrik
2009-01-01
of probabilistic forecasts, the resolution of which may be maximized by using meteorological ensemble predictions as input. The paper concentrates on the test case of the Horns Rev wind farm over a period of approximately one year, in order to describe, apply and discuss a complete ensemble-based probabilistic forecasting methodology. In a first stage, ensemble forecasts of meteorological variables are converted to power through a suitable power curve model. This model employs local polynomial regression and is adaptively estimated with an orthogonal fitting method. The obtained ensemble forecasts of wind power ... have the benefit of yielding predictive distributions that are of increased reliability (in a probabilistic sense) in comparison with the raw ensemble forecasts, while at the same time taking advantage of their high resolution. Copyright (C) 2008 John Wiley & Sons, Ltd.
Excitations and benchmark ensemble density functional theory for two electrons
Pribram-Jones, Aurora; Trail, John R; Burke, Kieron; Needs, Richard J; Ullrich, Carsten A
2014-01-01
A new method for extracting ensemble Kohn-Sham potentials from accurate excited-state densities is applied to a variety of two-electron systems, exploring the behavior of exact ensemble density functional theory. The issue of separating the Hartree energy and the choice of degenerate eigenstates is explored. A new approximation, spin eigenstate Hartree-exchange (SEHX), is derived. Exact conditions that are proven include the signs of the correlation energy components, the virial theorem for both exchange and correlation, and the asymptotic behavior of the potential for small weights of the excited states. Many energy components are given as a function of the weights for two electrons in a one-dimensional flat box, in a box with a large barrier to create charge-transfer excitations, in a three-dimensional harmonic well (Hooke's atom), and for the He atom singlet-triplet ensemble, singlet-triplet-singlet ensemble, and triplet bi-ensemble.
Directory of Open Access Journals (Sweden)
Alexander M Many
Full Text Available The characterization of mammary stem cells, and signals that regulate their behavior, is of central importance in understanding developmental changes in the mammary gland and possibly for targeting stem-like cells in breast cancer. The canonical Wnt/β-catenin pathway is a signaling mechanism associated with maintenance of self-renewing stem cells in many tissues, including mammary epithelium, and can be oncogenic when deregulated. Wnt1 and Wnt3a are examples of ligands that activate the canonical pathway. Other Wnt ligands, such as Wnt5a, typically signal via non-canonical, β-catenin-independent, pathways that in some cases can antagonize canonical signaling. Since the role of non-canonical Wnt signaling in stem cell regulation is not well characterized, we set out to investigate this using mammosphere formation assays that reflect and quantify stem cell properties. Ex vivo mammosphere cultures were established from both wild-type and Wnt1 transgenic mice and were analyzed in response to manipulation of both canonical and non-canonical Wnt signaling. An increased level of mammosphere formation was observed in cultures derived from MMTV-Wnt1 versus wild-type animals, and this was blocked by treatment with Dkk1, a selective inhibitor of canonical Wnt signaling. Consistent with this, we found that a single dose of recombinant Wnt3a was sufficient to increase mammosphere formation in wild-type cultures. Surprisingly, we found that Wnt5a also increased mammosphere formation in these assays. We confirmed that this was not caused by an increase in canonical Wnt/β-catenin signaling but was instead mediated by non-canonical Wnt signals requiring the receptor tyrosine kinase Ror2 and activity of the Jun N-terminal kinase, JNK. We conclude that both canonical and non-canonical Wnt signals have positive effects promoting stem cell activity in mammosphere assays and that they do so via independent signaling mechanisms.
Institute of Scientific and Technical Information of China (English)
杜良敏; 张培群; 周月华; 肖莺; 徐桂荣
2011-01-01
A short-term climate prediction model (NMF-CCA), based on non-negative matrix factorization and canonical correlation, is designed. The model predicts meteorological elements by establishing the relation between the forecast object and suitable impact factors. Using NCEP/NCAR preceding-winter snow cover over the Tibetan Plateau as the factor field and summer (June to August) rainfall in Central China from 1971 to 2008 as the forecast field, EOF-CCA and NMF-CCA were each used for cross validation over 1999-2008. The results show that both methods perform well, but NMF-CCA is better than EOF-CCA when comparing the ten-year averages of the ACC, Ps and Ss scores, which for NMF-CCA are 0.33, 76.68 and 0.12, respectively. This indicates that NMF-CCA has practical application value.
Modality-Driven Classification and Visualization of Ensemble Variance
Energy Technology Data Exchange (ETDEWEB)
Bensema, Kevin; Gosink, Luke J.; Obermaier, Harald; Joy, Kenneth
2016-10-01
Advances in computational power now enable domain scientists to address conceptual and parametric uncertainty by running simulations multiple times in order to sufficiently sample the uncertain input space. While this approach helps address conceptual and parametric uncertainties, the ensemble datasets produced by this technique present a special challenge to visualization researchers as the ensemble dataset records a distribution of possible values for each location in the domain. Contemporary visualization approaches that rely solely on summary statistics (e.g., mean and variance) cannot convey the detailed information encoded in ensemble distributions that are paramount to ensemble analysis; summary statistics provide no information about modality classification and modality persistence. To address this problem, we propose a novel technique that classifies high-variance locations based on the modality of the distribution of ensemble predictions. Additionally, we develop a set of confidence metrics to inform the end-user of the quality of fit between the distribution at a given location and its assigned class. We apply a similar method to time-varying ensembles to illustrate the relationship between peak variance and bimodal or multimodal behavior. These classification schemes enable a deeper understanding of the behavior of the ensemble members by distinguishing between distributions that can be described by a single tendency and distributions which reflect divergent trends in the ensemble.
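The modality-classification idea above can be sketched by fitting Gaussian mixtures with different component counts to the ensemble's values at one location and selecting the count by BIC. This BIC-based classifier is an assumed stand-in for the paper's method, and the data are synthetic.

```python
# Sketch: classify the modality of an ensemble's distribution at one grid
# location by fitting 1-, 2- and 3-component Gaussian mixtures and picking
# the component count with the lowest BIC.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
# synthetic "ensemble members" at one location: two divergent trends
values = np.concatenate([rng.normal(0.0, 0.3, 50),
                         rng.normal(3.0, 0.3, 50)]).reshape(-1, 1)

bics = [GaussianMixture(n_components=k, random_state=0).fit(values).bic(values)
        for k in (1, 2, 3)]
modality = int(np.argmin(bics)) + 1
print(f"estimated modality: {modality}")   # expect 2 for this bimodal sample
```

Reporting this modality alongside the mean and variance conveys exactly the divergent-trend structure that summary statistics hide.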
Improving land resource evaluation using fuzzy neural network ensembles
XUE, Y.-J.; HU, Y.-M.; Liu, S.-G.; YANG, J.-F.; CHEN, Q.-C.; BAO, S.-T.
2007-01-01
Land evaluation factors often contain continuous-, discrete- and nominal-valued attributes. In traditional land evaluation, these different attributes are usually graded into categorical indexes by land resource experts, and the evaluation results rely heavily on the experts' experience. To overcome this shortcoming, we present a fuzzy neural network ensemble method that does not require grading the evaluation factors into categorical indexes and can evaluate land resources using the three kinds of attribute values directly. A fuzzy back-propagation neural network (BPNN), a fuzzy radial basis function neural network (RBFNN), a fuzzy BPNN ensemble, and a fuzzy RBFNN ensemble were used to evaluate the land resources of Guangdong Province. The evaluation results using the fuzzy BPNN ensemble and the fuzzy RBFNN ensemble were much better than those using the single fuzzy BPNN and the single fuzzy RBFNN, and the error rate of the single fuzzy RBFNN or fuzzy RBFNN ensemble was lower than that of the single fuzzy BPNN or fuzzy BPNN ensemble, respectively. By using the fuzzy neural network ensembles, the validity of land resource evaluation was improved and reliance on land evaluators' experience was considerably reduced. © 2007 Soil Science Society of China.
DEFF Research Database (Denmark)
2004-01-01
Within the framework of the PSO-Ensemble project (FU2101) a demo application has been created. The application use ECMWF ensemble forecasts. Two instances of the application are running; one for Nysted Offshore and one for the total production (except Horns Rev) in the Eltra area. The output is...
Representative Ensembles in Statistical Mechanics
V. I. YUKALOV
2007-01-01
The notion of representative statistical ensembles, correctly representing statistical systems, is strictly formulated. This notion allows for a proper description of statistical systems, avoiding inconsistencies in theory. As an illustration, a Bose-condensed system is considered. It is shown that a self-consistent treatment of the latter, using a representative ensemble, always yields a conserving and gapless theory.
A simple grand canonical approach to compute the vapor pressure of bulk and finite size systems
Energy Technology Data Exchange (ETDEWEB)
Factorovich, Matías H.; Scherlis, Damián A. [Departamento de Química Inorgánica, Analítica y Química Física/INQUIMAE, Facultad de Ciencias Exactas y Naturales, Universidad de Buenos Aires, Ciudad Universitaria, Pab. II, Buenos Aires C1428EHA (Argentina); Molinero, Valeria [Department of Chemistry, University of Utah, 315 South 1400 East, Salt Lake City, Utah 84112-0850 (United States)
2014-02-14
In this article we introduce a simple grand canonical screening (GCS) approach to accurately compute vapor pressures from molecular dynamics or Monte Carlo simulations. This procedure entails a screening of chemical potentials using a conventional grand canonical scheme, and therefore it is straightforward to implement for any kind of interface. The scheme is validated against data obtained from Gibbs ensemble simulations for water and argon. Then, it is applied to obtain the vapor pressure of the coarse-grained mW water model, and it is shown that the computed value is in excellent accord with the one formally deduced using statistical thermodynamics arguments. Finally, this methodology is used to calculate the vapor pressure of a water nanodroplet of 94 molecules. Interestingly, the result is in perfect agreement with the one predicted by the Kelvin equation for a homogeneous droplet of that size.
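The Kelvin-equation benchmark mentioned at the end of this abstract is easy to check numerically. The parameter values below are standard textbook numbers for water at 298 K, not the paper's coarse-grained mW-model values, so the resulting ratio is only indicative.

```python
# Quick numerical check of the Kelvin equation, used in the paper as a
# benchmark for the nanodroplet result: p/p0 = exp(2*gamma*Vm / (r*R*T)).
import math

gamma = 0.072      # surface tension of water, N/m (assumed textbook value)
Vm = 1.8e-5        # molar volume of liquid water, m^3/mol
R = 8.314          # gas constant, J/(mol K)
T = 298.0          # temperature, K
N_A = 6.022e23     # Avogadro's number, 1/mol

n_molecules = 94
V = n_molecules * Vm / N_A            # droplet volume, m^3
r = (3 * V / (4 * math.pi)) ** (1/3)  # radius of a homogeneous droplet, m

ratio = math.exp(2 * gamma * Vm / (r * R * T))
print(f"radius = {r*1e9:.2f} nm, p/p0 = {ratio:.2f}")
```

For a 94-molecule droplet the radius is below 1 nm, so the Kelvin correction raises the vapor pressure by a factor of a few over the bulk value.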
Institute of Scientific and Technical Information of China (English)
任宏利; 张培群; 李维京; 陈丽娟
2014-01-01
Focusing on the monthly forecasting problem based on an Atmospheric General Circulation Model (AGCM), a dynamical-analogue ensemble forecasting (DAEF) method is proposed to effectively reduce prediction errors and increase prediction skill. The method aims at an intrinsic combination of the dynamical model with statistical-empirical methods: it establishes perturbation members for ensemble forecasting by extracting historical analogue information of the atmospheric general circulation, empirically parameterizing model errors, and generating multiple time-varying analogue forcings. Applying this new ensemble method to the operational AGCM of the Beijing Climate Center (BCC AGCM1), a 10-yr monthly forecasting experiment under quasi-operational conditions shows encouraging results. Compared with the operational ensemble forecasts of the BCC AGCM1, the DAEF method effectively improves the prediction skill of monthly-mean and daily atmospheric circulation forecasts, mainly by improving predictions of the zonal mean, ultra-long waves and long waves of the circulation. The results also show that prediction errors of the DAEF are significantly reduced and the spread of its ensemble members is reasonably increased, indicating an improved relationship between prediction error and spread. This study suggests a large potential for applying the DAEF method in BCC monthly forecasting operations.
A Method of Analog Circuit Fault Diagnosis Based on Selective SVM Ensemble
Institute of Scientific and Technical Information of China (English)
吴杰长; 刘海松; 陈国钧
2011-01-01
To overcome the shortcomings of support vector machines in fault-diagnosis applications, a selective SVM ensemble learning algorithm based on cluster analysis is designed and applied to analog circuit fault diagnosis. The method uses the K-means clustering algorithm to remove similar, redundant individuals, increasing the diversity of the remaining individual learners and enhancing the generalization ability of the SVM ensemble model. Simulation experiments were carried out on the Leap-Frog filter circuit from the ITC'97 benchmark circuits.
AUC-Maximizing Ensembles through Metalearning.
LeDell, Erin; van der Laan, Mark J; Peterson, Maya
2016-05-01
Area Under the ROC Curve (AUC) is often used to measure the performance of an estimator in binary classification problems. An AUC-maximizing classifier can have significant advantages in cases where ranking correctness is valued or if the outcome is rare. In a Super Learner ensemble, maximization of the AUC can be achieved by the use of an AUC-maximizing metalearning algorithm. We discuss an implementation of an AUC-maximization technique that is formulated as a nonlinear optimization problem. We also evaluate the effectiveness of a large number of different nonlinear optimization algorithms to maximize the cross-validated AUC of the ensemble fit. The results provide evidence that AUC-maximizing metalearners can, and often do, outperform non-AUC-maximizing metalearning methods with respect to ensemble AUC. The results also demonstrate that as the level of imbalance in the training data increases, the Super Learner ensemble outperforms the top base algorithm by a larger degree. PMID:27227721
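The nonlinear-optimization formulation can be sketched as follows. The two base learners, the derivative-free optimizer, and the weight normalization are illustrative assumptions, not the Super Learner implementation.

```python
# Sketch of AUC-maximizing metalearning: choose convex weights for the base
# learners' out-of-fold predictions by directly maximizing cross-validated
# AUC (a non-smooth objective, hence a derivative-free optimizer).
import numpy as np
from scipy.optimize import minimize
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_predict

X, y = make_classification(n_samples=500, weights=[0.9], random_state=3)  # imbalanced
base = [LogisticRegression(max_iter=1000), RandomForestClassifier(random_state=0)]
Z = np.column_stack([cross_val_predict(m, X, y, cv=5, method="predict_proba")[:, 1]
                     for m in base])            # out-of-fold predicted probabilities

def neg_auc(w):
    w = np.abs(w) / np.abs(w).sum()             # keep weights convex
    return -roc_auc_score(y, Z @ w)

res = minimize(neg_auc, x0=np.full(2, 0.5), method="Nelder-Mead")
print(f"ensemble CV AUC: {-res.fun:.3f}")
```

Because AUC depends only on the ranking of predictions, a gradient-free method such as Nelder-Mead is a natural fit for this small weight-optimization problem.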
The Hydrologic Ensemble Prediction Experiment (HEPEX)
Wood, Andy; Wetterhall, Fredrik; Ramos, Maria-Helena
2015-04-01
The Hydrologic Ensemble Prediction Experiment was established in March, 2004, at a workshop hosted by the European Center for Medium Range Weather Forecasting (ECMWF), and co-sponsored by the US National Weather Service (NWS) and the European Commission (EC). The HEPEX goal was to bring the international hydrological and meteorological communities together to advance the understanding and adoption of hydrological ensemble forecasts for decision support. HEPEX pursues this goal through research efforts and practical implementations involving six core elements of a hydrologic ensemble prediction enterprise: input and pre-processing, ensemble techniques, data assimilation, post-processing, verification, and communication and use in decision making. HEPEX has grown through meetings that connect the user, forecast producer and research communities to exchange ideas, data and methods; the coordination of experiments to address specific challenges; and the formation of testbeds to facilitate shared experimentation. In the last decade, HEPEX has organized over a dozen international workshops, as well as sessions at scientific meetings (including AMS, AGU and EGU) and special issues of scientific journals where workshop results have been published. Through these interactions and an active online blog (www.hepex.org), HEPEX has built a strong and active community of nearly 400 researchers & practitioners around the world. This poster presents an overview of recent and planned HEPEX activities, highlighting case studies that exemplify the focus and objectives of HEPEX.
Modulating the phase structure of black D6 branes in canonical ensemble
Lu, J X
2013-01-01
In [1], we found that the phase structure of the charged black D5 system can be changed qualitatively by adding delocalized D1 branes, but the analogous change does not occur for the charged black D6 system when delocalized D2 branes are added, even though both give rise to the same D(p - 4)/Dp type. Adding delocalized D4 branes to the black D6 branes does not work either. In this paper, we therefore add delocalized D0 branes, the only remaining lower-dimensional branes, to the black D6 system. We find that delocalized charged black D0 branes alone share the same phase structure as the charged black D6 branes, with no van der Waals-Maxwell liquid-gas type. However, when the two are combined to form D0/D6, the resulting phase diagram changes dramatically to the desired one, now containing the above liquid-gas type. This change arises from the interaction between the delocalized D0 and D6 branes.
Unified expression for the calculation of thermal conductivity in the canonical ensemble
Chialvo, Ariel A.; Cummings, Peter T.
A proof of the theoretical equivalence between the equations of E. Helfand [1960, Phys. Rev. 119, 1] and D. McQuarrie [1976, Statistical Mechanics (Harper & Row), Chap. 21] for the calculation of thermal conductivity via Einstein-type relations is presented here. Some theoretical implications of this equivalence are also discussed, such as the unification of the thermal conductivity expressions into one similar to that given for linear transport coefficients by F. C. Andrews [1967, J. Chem. Phys. 47, 3161].
Large Ensembles of Regional Climate Projections
Massey, Neil; Allen, Myles; Hall, Jim
2016-04-01
Projections of regional climate change have great utility for impact assessment at a local scale. The CORDEX climate projection framework presents a method of providing these regional projections by driving a regional climate model (RCM) with output from CMIP5 climate projection runs of global climate models (GCM). This produces an ensemble of regional climate projections, sampling the model uncertainty, the forcing uncertainty and the uncertainty of the response of the climate system to the increase in greenhouse gas (GHG) concentrations. Using the weather@home project to compute large ensembles of RCMs via volunteer distributed computing presents another method of generating projections of climate variables and also allows the sampling of the uncertainty due to internal variability. weather@home runs both a RCM and GCM on volunteers' home computers, with the free-running GCM driving the boundaries of the RCM. The GCM is an atmosphere-only model and requires forcing at the lower boundary with sea-surface temperature (SST) and sea-ice concentration (SIC) data. By constructing SST and SIC projections, using projections of GHG and other atmospheric gases, and running the weather@home RCM and GCM with these forcings, large ensembles of projections of climate variables at regional scales can be made. To construct the SSTs and SICs, a statistical model is built to represent the response of SST and SIC to increases in GHG concentrations in the CMIP5 ensemble, for both the RCP4.5 and RCP8.5 scenarios. This statistical model uses empirical orthogonal functions (EOFs) to represent the change in the long-term trend of SSTs in the CMIP5 projections. A multivariate distribution of the leading principal components (PC) is produced using a copula and sampled to produce a timeseries of PCs, which are recombined with the EOFs to generate a timeseries of SSTs, with internal variability added from observations. Hence, a large ensemble of SST projections is generated, with each SST
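The EOF/PC machinery described above can be sketched with a plain SVD on a synthetic anomaly field. Everything below is a toy stand-in for the CMIP5-based statistical model: random spatial patterns play the role of SST trend structures.

```python
# Sketch: decompose a synthetic SST-like anomaly field into EOFs via SVD,
# keep the leading modes, and reconstruct the field from their PCs.
import numpy as np

rng = np.random.default_rng(4)
n_time, n_grid = 120, 200
# two spatial patterns with time-varying amplitudes plus small noise
patterns = rng.normal(size=(2, n_grid))
amps = rng.normal(size=(n_time, 2))
field = amps @ patterns + 0.1 * rng.normal(size=(n_time, n_grid))
field -= field.mean(axis=0)                   # anomalies about the time mean

U, s, Vt = np.linalg.svd(field, full_matrices=False)
k = 2
pcs, eofs = U[:, :k] * s[:k], Vt[:k]          # leading PCs and EOFs
recon = pcs @ eofs                            # reconstruction from k modes

var_explained = (s[:k]**2).sum() / (s**2).sum()
print(f"variance explained by {k} EOFs: {var_explained:.3f}")
```

In the paper's setting, new PC timeseries drawn from a copula would replace `pcs` before the recombination step, yielding sampled SST fields rather than a reconstruction.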
Description of the DEMETER gas gun and its associated measurement chains
Chartagnac, P.; Jimenez, B
1984-01-01
The experimental setup described in this article is intended for studying the behavior of solids under planar shock loading. It consists of the compressed-gas gun DEMETER and several measurement chains. The gun, using air or helium as the driver gas, can launch 110-mm-diameter "projectiles" at velocities continuously programmable from 100 m/s to 1,150 m/s with a reproducibility of 1%. The measurement chains installed on the gun are connected to a ...
The canonical Kravchuk basis for discrete quantum mechanics
Hakioglu, Tugrul; Wolf, Kurt Bernardo
2000-04-01
The well known Kravchuk formalism of the harmonic oscillator obtained from the direct discretization method is shown to be a new way of formulating discrete quantum phase space. It is shown that the Kravchuk oscillator Hamiltonian has a well defined unitary canonical partner which we identify with the quantum phase of the Kravchuk oscillator. The generalized discrete Wigner function formalism based on the action and angle variables is applied to the Kravchuk oscillator and its continuous limit is examined.
Canonical reduction for dilatonic gravity in 3+1 dimensions
Scott, T C; Mann, R B; Fee, G J
2016-01-01
We generalize the 1+1-dimensional gravity formalism of Ohta and Mann to 3+1 dimensions by developing the canonical reduction of a proposed formalism applied to a system coupled with a set of point particles. This is done via the Arnowitt-Deser-Misner method and by eliminating the resulting constraints and imposing coordinate conditions. The reduced Hamiltonian is completely determined in terms of the particles' canonical variables (coordinates, dilaton field and momenta). It is found that the equation governing the dilaton field under suitable gauge and coordinate conditions, including the absence of transverse-traceless metric components, is a logarithmic Schroedinger equation. Thus, although different, the 3+1 formalism retains some essential features of the earlier 1+1 formalism, in particular the means of obtaining a quantum theory for dilatonic gravity.
The Ensembl gene annotation system.
Aken, Bronwen L; Ayling, Sarah; Barrell, Daniel; Clarke, Laura; Curwen, Valery; Fairley, Susan; Fernandez Banet, Julio; Billis, Konstantinos; García Girón, Carlos; Hourlier, Thibaut; Howe, Kevin; Kähäri, Andreas; Kokocinski, Felix; Martin, Fergal J; Murphy, Daniel N; Nag, Rishi; Ruffier, Magali; Schuster, Michael; Tang, Y Amy; Vogel, Jan-Hinnerk; White, Simon; Zadissa, Amonida; Flicek, Paul; Searle, Stephen M J
2016-01-01
The Ensembl gene annotation system has been used to annotate over 70 different vertebrate species across a wide range of genome projects. Furthermore, it generates the automatic alignment-based annotation for the human and mouse GENCODE gene sets. The system is based on the alignment of biological sequences, including cDNAs, proteins and RNA-seq reads, to the target genome in order to construct candidate transcript models. Careful assessment and filtering of these candidate transcripts ultimately leads to the final gene set, which is made available on the Ensembl website. Here, we describe the annotation process in detail.Database URL: http://www.ensembl.org/index.html. PMID:27337980
Global canonical symmetry in a quantum system
Institute of Scientific and Technical Information of China (English)
李子平
1996-01-01
Based on the phase-space path integral for a system with a regular or singular Lagrangian, the generalized canonical Ward identities under global symmetry transformations in extended phase space are deduced, from which relations among Green functions can be found. The connection between canonical symmetries and conservation laws at the quantum level is established. It is pointed out that this connection, which always holds in classical theories, is in general not preserved in quantum theories. The advantage of our formulation is that we do not need to carry out the integration over the canonical momenta in the phase-space generating functional, as is usually done. A precise discussion of the quantization of a nonlinear sigma model with Hopf and Chern-Simons terms is reexamined. The property of fractional spin at the quantum level is clarified.
Covariant Gauge Fixing and Canonical Quantization
McKeon, D G C
2011-01-01
Theories that contain first class constraints possess gauge invariance, which results in the necessity of altering the measure in the associated quantum mechanical path integral. If the path integral is derived from the canonical structure of the theory, then the choice of gauge conditions used in constructing Faddeev's measure cannot be covariant. This shortcoming is normally overcome either by using the "Faddeev-Popov" quantization procedure, or by the approach of Batalin-Fradkin-Fradkina-Vilkovisky, and then demonstrating that these approaches are equivalent to the path integral constructed from the canonical approach with Faddeev's measure. We propose in this paper an alternate way of defining the measure for the path integral when it is constructed using the canonical procedure for theories containing first class constraints, and this new approach can be used in conjunction with covariant gauges. This procedure follows the Faddeev-Popov approach, but rather than working with the form of the gauge transformation...
Functional linear regression via canonical analysis
He, Guozhong; Wang, Jane-Ling; Yang, Wenjing; 10.3150/09-BEJ228
2011-01-01
We study regression models for the situation where both dependent and independent variables are square-integrable stochastic processes. Questions concerning the definition and existence of the corresponding functional linear regression models and some basic properties are explored for this situation. We derive a representation of the regression parameter function in terms of the canonical components of the processes involved. This representation establishes a connection between functional regression and functional canonical analysis and suggests alternative approaches for the implementation of functional linear regression analysis. A specific procedure for the estimation of the regression parameter function using canonical expansions is proposed and compared with an established functional principal component regression approach. As an example of an application, we present an analysis of mortality data for cohorts of medflies, obtained in experimental studies of aging and longevity.
A Canonical Analysis of the Massless Superparticle
McKeon, D G C
2012-01-01
The canonical structure of the action for a massless superparticle is considered in d = 2 + 1 and d = 3 + 1 dimensions. This is done by examining the contribution to the action of each of the components of the spinor {\\theta} present; no attempt is made to maintain manifest covariance. Upon using the Dirac Bracket to eliminate the second class constraints arising from the canonical momenta associated with half of these components, we find that the remaining components have canonical momenta that are all first class constraints. From these first class constraints, it is possible to derive the generator of half of the local Fermionic {\\kappa}-symmetry of Siegel; which half is contingent upon the choice of which half of the momenta associated with the components of {\\theta} are taken to be second class constraints. The algebra of the generator of this Fermionic symmetry transformation is examined.
Universal canonical entropy for gravitating systems
Indian Academy of Sciences (India)
Ashok Chatterjee; Parthasarathi Majumdar
2004-10-01
The thermodynamics of general relativistic systems with boundary, obeying a Hamiltonian constraint in the bulk, is determined solely by the boundary quantum dynamics, and hence by the area spectrum. Assuming, for large area of the boundary, (a) an area spectrum as determined by non-perturbative canonical quantum general relativity (NCQGR), (b) an energy spectrum that bears a power law relation to the area spectrum, (c) an area law for the leading order microcanonical entropy, leading thermal fluctuation corrections to the canonical entropy are shown to be logarithmic in area with a universal coefficient. Since the microcanonical entropy also has universal logarithmic corrections to the area law (from quantum space-time fluctuations, as found earlier) the canonical entropy then has a universal form including logarithmic corrections to the area law. This form is shown to be independent of the index appearing in assumption (b). The index, however, is crucial in ascertaining the domain of validity of our approach based on thermal equilibrium.
Evidence of non-canonical NOTCH signaling
DEFF Research Database (Denmark)
Traustadóttir, Gunnhildur Ásta; Jensen, Charlotte H; Thomassen, Mads;
2016-01-01
suggested to interact with NOTCH1 and act as an antagonist. This non-canonical interaction is, however controversial, and evidence for a direct interaction, still lacking in mammals. In this study, we elucidated the putative DLK1-NOTCH1 interaction in a mammalian context. Taking a global approach and using...... this interaction to occur between EGF domains 5 and 6 of DLK1 and EGF domains 10-15 of NOTCH1. Thus, our data provide the first evidence for a direct interaction between DLK1 and NOTCH1 in mammals, and substantiate that non-canonical NOTCH ligands exist, adding to the complexity of NOTCH signaling....
Jordan Canonical Form Theory and Practice
Weintraub, Steven H
2009-01-01
Jordan Canonical Form (JCF) is one of the most important, and useful, concepts in linear algebra. The JCF of a linear transformation, or of a matrix, encodes all of the structural information about that linear transformation, or matrix. This book is a careful development of JCF. After beginning with background material, we introduce Jordan Canonical Form and related notions: eigenvalues, (generalized) eigenvectors, and the characteristic and minimum polynomials. We decide the question of diagonalizability, and prove the Cayley-Hamilton theorem. Then we present a careful and complete proof of t
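The distinction between diagonalizable and non-diagonalizable matrices that JCF encodes can be checked numerically. The matrix below is an illustrative example, not one taken from the book: it has a single repeated eigenvalue with only one independent eigenvector, and hence a single 2x2 Jordan block. A minimal NumPy sketch:

```python
import numpy as np

# A has the single eigenvalue 2 with algebraic multiplicity 2 but only one
# independent eigenvector, so its Jordan form is a single 2x2 Jordan block.
A = np.array([[3.0, 1.0],
              [-1.0, 1.0]])
lam = 2.0
N = A - lam * np.eye(2)        # nilpotent part: N != 0 but N @ N == 0

# A generalized eigenvector v satisfies N @ v != 0 and N @ (N @ v) == 0.
v = np.array([1.0, 0.0])
w = N @ v                      # ordinary eigenvector at the end of the chain

J = np.array([[lam, 1.0],
              [0.0, lam]])     # the Jordan canonical form of A
P = np.column_stack([w, v])    # change of basis built from the Jordan chain
# Verification of the similarity: A P = P J
```

The check `A @ P == P @ J` is exactly the statement that the Jordan chain (w, v) puts A into the form J.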
Staying Thermal with Hartree Ensemble Approximations
Salle, M; Vink, Jeroen C
2000-01-01
Using Hartree ensemble approximations to compute the real time dynamics of scalar fields in 1+1 dimension, we find that with suitable initial conditions, approximate thermalization is achieved much faster than found in our previous work. At large times, depending on the interaction strength and temperature, the particle distribution slowly changes: the Bose-Einstein distribution of the particle densities develops classical features. We also discuss variations of our method which are numerically more efficient.
Diurnal Ensemble Surface Meteorology Statistics
U.S. Environmental Protection Agency — Excel file containing diurnal ensemble statistics of 2-m temperature, 2-m mixing ratio and 10-m wind speed. This Excel file contains figures for Figure 2 in the...
The entropy of network ensembles
Bianconi, Ginestra
2008-01-01
In this paper we generalize the concept of random networks to describe networks with non-trivial features by a statistical mechanics approach. This framework is able to describe ensembles of undirected, directed as well as weighted networks. These networks might have non-trivial community structure or, in the case of networks embedded in a given space, a non-trivial distance dependence of the link probability. These ensembles are characterized by their entropy, which evaluates the cardinality of ...
Institute of Scientific and Technical Information of China (English)
陈略; 唐歌实; 訾艳阳; 冯卓楠; 李康
2011-01-01
To solve the problem of automatically obtaining parameters in ensemble empirical mode decomposition (EEMD), a new method called adaptive EEMD is proposed in this paper. First, the essence of how white noise affects the result of empirical mode decomposition is revealed, and a criterion for the magnitude of the white noise added in EEMD is established; from this criterion, the two key EEMD parameters, the added white-noise magnitude and the ensemble (averaging) count, can be obtained adaptively for different signals. Finally, the adaptive EEMD algorithm is applied to electrocardiogram (ECG) signal processing: ECG denoising and heart-rate feature extraction are successfully accomplished, which verifies the validity of the algorithm and provides an effective method for processing astronauts' ECG signals under complex background conditions.
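The averaging step that the two EEMD parameters control can be sketched in isolation. The snippet below is a simplified illustration, not the paper's adaptive rule: it shows the standard EEMD premise that averaging M white-noise realizations of amplitude eps leaves a residual of about eps/sqrt(M). A full EEMD would decompose each noisy copy by EMD before averaging; `noise_std` and `n_ensemble` are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 1000)
signal = np.sin(2 * np.pi * 5 * t)

noise_std = 0.2          # added white-noise amplitude (illustrative)
n_ensemble = 400         # ensemble (averaging) count (illustrative)

# In EEMD each noisy realization would first be decomposed by EMD;
# here we only illustrate the noise-cancellation effect of averaging.
trials = signal + rng.normal(0.0, noise_std, size=(n_ensemble, t.size))
averaged = trials.mean(axis=0)

residual = np.std(averaged - signal)
expected = noise_std / np.sqrt(n_ensemble)   # eps / sqrt(M) relation
```

With these values the residual noise level is about 0.01, two orders of magnitude below the signal amplitude, which is why the ensemble count can be chosen from the tolerated residual.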
Prediction of Weather Impacted Airport Capacity using Ensemble Learning
Wang, Yao Xun
2011-01-01
Ensemble learning with the Bagging Decision Tree (BDT) model was used to assess the impact of weather on airport capacities at selected high-demand airports in the United States. The ensemble bagging decision tree models were developed and validated using the Federal Aviation Administration (FAA) Aviation System Performance Metrics (ASPM) data and weather forecast at these airports. The study examines the performance of BDT, along with traditional single Support Vector Machines (SVM), for airport runway configuration selection and airport arrival rates (AAR) prediction during weather impacts. Testing of these models was accomplished using observed weather, weather forecast, and airport operation information at the chosen airports. The experimental results show that ensemble methods are more accurate than a single SVM classifier. The airport capacity ensemble method presented here can be used as a decision support model that supports air traffic flow management to meet the weather impacted airport capacity in order to reduce costs and increase safety.
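A bagging ensemble of the kind used in the study can be sketched without any ML library. The decision stumps below are a stand-in for the full decision trees of a BDT model, and the synthetic two-feature data is purely illustrative:

```python
import numpy as np

def fit_stump(X, y):
    """Pick the (feature, threshold, sign) stump minimizing training error."""
    best = (0, 0.0, 1, np.inf)
    for j in range(X.shape[1]):
        for thr in np.unique(X[:, j]):
            for sign in (1, -1):
                pred = np.where(sign * (X[:, j] - thr) > 0, 1, 0)
                err = np.mean(pred != y)
                if err < best[3]:
                    best = (j, thr, sign, err)
    return best[:3]

def stump_predict(stump, X):
    j, thr, sign = stump
    return np.where(sign * (X[:, j] - thr) > 0, 1, 0)

def bagging_fit(X, y, n_estimators=25, rng=None):
    """Fit each base learner on a bootstrap resample of the training data."""
    rng = rng if rng is not None else np.random.default_rng(0)
    stumps = []
    for _ in range(n_estimators):
        idx = rng.integers(0, len(y), size=len(y))   # bootstrap sample
        stumps.append(fit_stump(X[idx], y[idx]))
    return stumps

def bagging_predict(stumps, X):
    votes = np.mean([stump_predict(s, X) for s in stumps], axis=0)
    return (votes > 0.5).astype(int)                 # majority vote

# Synthetic data: binary label depends noisily on both features.
rng = np.random.default_rng(1)
X = rng.normal(size=(300, 2))
y = ((X[:, 0] + X[:, 1] + rng.normal(0, 0.5, 300)) > 0).astype(int)

stumps = bagging_fit(X, y, rng=np.random.default_rng(2))
acc = np.mean(bagging_predict(stumps, X) == y)
```

Each individual stump can only split on one feature, but the bootstrap-and-vote combination recovers a decision rule that uses both, which is the core mechanism bagging contributes.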
A canonical theory of dynamic decision-making
Directory of Open Access Journals (Sweden)
John Fox
2013-04-01
Decision-making behaviour is studied in many very different fields, from medicine and economics to psychology and neuroscience, with major contributions from mathematics and statistics, computer science, AI and other technical disciplines. However, the conceptualisation of what decision-making is and the methods for studying it vary greatly, and this has resulted in fragmentation of the field. A theory that can accommodate various perspectives may facilitate interdisciplinary working. We present such a theory, in which decision-making is articulated as a set of canonical functions that are sufficiently general to accommodate diverse viewpoints, yet sufficiently precise that they can be instantiated in different ways for specific theoretical or practical purposes. The canons cover the whole decision cycle, from the framing of a decision based on the goals, beliefs, and background knowledge of the decision maker to the formulation of decision options, establishing preferences over them, and making commitments. Commitments can lead to the initiation of new decisions, and any step in the cycle can incorporate reasoning about previous decisions and the rationales for them, and lead to revising or abandoning existing commitments. The theory situates decision-making with respect to other high-level cognitive capabilities like problem-solving, planning and collaborative decision-making. The canonical approach is assessed in three domains: cognitive and neuro-psychology, artificial intelligence, and decision engineering.
The canonical effect in statistical models for relativistic heavy ion collisions
Keranen, A.; Becattini, F.
2001-01-01
Enforcing exact conservation laws instead of average ones in statistical thermal models for relativistic heavy ion reactions gives rise to the so-called canonical effect, which can be used to explain some enhancement effects when going from elementary (e.g. pp) or small (pA) systems towards large AA systems. We review the recently developed method for the computation of canonical statistical thermodynamics, and give insight into when this is needed in the analysis of experimental data.
Change detection in bi-temporal data by canonical information analysis
DEFF Research Database (Denmark)
Nielsen, Allan Aasbjerg; Vestergaard, Jacob Schack
2015-01-01
Canonical correlation analysis (CCA) is an established multivariate statistical method for finding similarities between linear combinations of (normally two) sets of multivariate observations. In this contribution we replace (linear) correlation as the measure of association between the linear...... combinations with the information theoretical measure mutual information (MI). We term this type of analysis canonical information analysis (CIA). MI allows for the actual joint distribution of the variables involved and not just second order statistics. Where CCA is ideal for Gaussian data, CIA facilitates...
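The CCA baseline that CIA generalizes reduces to a singular value decomposition of the whitened cross-covariance matrix. A minimal NumPy sketch (the shared latent variable `z` and the regularization constant are illustrative assumptions, not part of the paper):

```python
import numpy as np

def canonical_correlations(X, Y, reg=1e-8):
    """Classical linear CCA: SVD of the whitened cross-covariance."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = X.shape[0]
    Cxx = X.T @ X / n + reg * np.eye(X.shape[1])
    Cyy = Y.T @ Y / n + reg * np.eye(Y.shape[1])
    Cxy = X.T @ Y / n

    def inv_sqrt(C):
        w, V = np.linalg.eigh(C)
        return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

    K = inv_sqrt(Cxx) @ Cxy @ inv_sqrt(Cyy)
    # Singular values of K are the canonical correlations, in [0, 1].
    return np.clip(np.linalg.svd(K, compute_uv=False), 0.0, 1.0)

rng = np.random.default_rng(0)
z = rng.normal(size=1000)   # latent signal shared by both variable sets
X = np.column_stack([z + 0.1 * rng.normal(size=1000), rng.normal(size=1000)])
Y = np.column_stack([z + 0.1 * rng.normal(size=1000), rng.normal(size=1000)])
rho = canonical_correlations(X, Y)
```

The first canonical correlation picks out the shared latent component (near 1 here), while the second, driven by independent noise, is near 0; CIA replaces this second-order measure with mutual information.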
Non-canonical two-field inflation to order $\\xi^2$
Wang, Yun-Chao
2016-01-01
In non-canonical two-field inflation models, deviations from the canonical model can be captured by a parameter $\\xi$. We show this parameter is usually one half of the slow-roll order and analytically calculate the primordial power spectra to the precision of order $\\xi^2$. The super-horizon perturbations are studied with an improved method, which gives a correction of order $\\xi$. Three typical examples demonstrate that our analytical formulae of power spectra fit well with numerical simulation.
Deformed Ginibre ensembles and integrable systems
Energy Technology Data Exchange (ETDEWEB)
Orlov, A.Yu., E-mail: orlovs@ocean.ru
2014-01-17
We consider three Ginibre ensembles (real, complex and quaternion-real) with deformed measures and relate them to known integrable systems by presenting partition functions of these ensembles in form of fermionic expectation values. We also introduce double deformed Dyson–Wigner ensembles and compare their fermionic representations with those of Ginibre ensembles.
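The undeformed complex Ginibre ensemble is straightforward to sample directly: with i.i.d. complex Gaussian entries of variance 1/N, the eigenvalues fill the unit disk (the circular law) as N grows. A small NumPy check:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 500
# Complex Ginibre matrix: iid complex Gaussian entries with variance 1/N.
G = (rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))) / np.sqrt(2 * N)
eigs = np.linalg.eigvals(G)
radius = np.abs(eigs)      # should concentrate inside the unit disk
```

The real and quaternion-real variants differ in their symmetry class (and show eigenvalue accumulation on or repulsion from the real axis), but the sampling pattern is the same.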
Concrete ensemble Kalman filters with rigorous catastrophic filter divergence.
Kelly, David; Majda, Andrew J; Tong, Xin T
2015-08-25
The ensemble Kalman filter and ensemble square root filters are data assimilation methods used to combine high-dimensional, nonlinear dynamical models with observed data. Ensemble methods are indispensable tools in science and engineering and have enjoyed great success in geophysical sciences, because they allow for computationally cheap low-ensemble-state approximation for extremely high-dimensional turbulent forecast models. From a theoretical perspective, the dynamical properties of these methods are poorly understood. One of the central mysteries is the numerical phenomenon known as catastrophic filter divergence, whereby ensemble-state estimates explode to machine infinity, despite the true state remaining in a bounded region. In this article we provide a breakthrough insight into the phenomenon, by introducing a simple and natural forecast model that transparently exhibits catastrophic filter divergence under all ensemble methods and a large set of initializations. For this model, catastrophic filter divergence is not an artifact of numerical instability, but rather a true dynamical property of the filter. The divergence is not only validated numerically but also proven rigorously. The model cleanly illustrates mechanisms that give rise to catastrophic divergence and confirms intuitive accounts of the phenomena given in past literature.
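A single analysis step of the perturbed-observations ensemble Kalman filter, reduced to a scalar state with identity observation operator, can be sketched as follows (the ensemble size, prior, and observation-error variance are illustrative choices, not taken from the article):

```python
import numpy as np

rng = np.random.default_rng(0)
M = 2000                  # ensemble size (illustrative)
truth = 1.0
R = 1.0                   # observation-error variance

# Forecast ensemble drawn from a prior N(0, 1) that misses the truth.
forecast = rng.normal(0.0, 1.0, M)
obs = truth + rng.normal(0.0, np.sqrt(R))

# Perturbed-observations EnKF analysis step (scalar state, H = identity):
P = np.var(forecast, ddof=1)      # ensemble estimate of forecast covariance
K = P / (P + R)                   # Kalman gain
perturbed_obs = obs + rng.normal(0.0, np.sqrt(R), M)
analysis = forecast + K * (perturbed_obs - forecast)
```

The analysis ensemble contracts toward the observation, with variance close to the optimal (1/P + 1/R)^-1; catastrophic filter divergence is the regime, in nonlinear multi-step settings, where this contraction fails and the ensemble instead explodes.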
The Use of Artificial-Intelligence-Based Ensembles for Intrusion Detection: A Review
Directory of Open Access Journals (Sweden)
Gulshan Kumar
2012-01-01
In supervised learning-based classification, ensembles have been successfully employed in different application domains. In the literature, many researchers have proposed different ensembles by considering different combination methods, training datasets, base classifiers, and many other factors. Artificial-intelligence (AI) based techniques play a prominent role in the development of ensembles for intrusion detection (ID) and have many benefits over other techniques. However, there is no comprehensive review of ensembles in general, and of AI-based ensembles for ID in particular, that examines their current research status. Here, an updated review of ensembles and their taxonomies is presented in general. The paper also presents an updated review of various AI-based ensembles for ID, in particular during the last decade. The related studies of AI-based ensembles are compared by a set of evaluation metrics derived from (1) the architecture and approach followed; (2) the different methods utilized in different phases of ensemble learning; (3) other measures used to evaluate the classification performance of the ensembles. The paper also provides future directions for research in this area, and will help the better understanding of the different directions in which research on ensembles has been done in general and specifically in the field of intrusion detection systems (IDSs).
Regularized Multiple-Set Canonical Correlation Analysis
Takane, Yoshio; Hwang, Heungsun; Abdi, Herve
2008-01-01
Multiple-set canonical correlation analysis (Generalized CANO or GCANO for short) is an important technique because it subsumes a number of interesting multivariate data analysis techniques as special cases. More recently, it has also been recognized as an important technique for integrating information from multiple sources. In this paper, we…
Canonical Quantization of Higher-Order Lagrangians
Directory of Open Access Journals (Sweden)
Khaled I. Nawafleh
2011-01-01
After reducing a system with a higher-order regular Lagrangian to one with a first-order singular Lagrangian using a constrained auxiliary description, the Hamilton-Jacobi function is constructed. In addition, the quantization of the system is investigated using the canonical path integral approximation.
Part and Bipartial Canonical Correlation Analysis.
Timm, Neil H.; Carlson, James E.
Part and bi-partial canonical correlations were developed by extending the definitions of part and bi-partial correlation to sets of variates. These coefficients may be used to help researchers explore relationships which exist among several sets of normally distributed variates. (Author)
Kelvin's Canonical Circulation Theorem in Hall Magnetohydrodynamics
Shivamoggi, B K
2016-01-01
The purpose of this paper is to show that, thanks to the restoration of the legitimate connection between the current density and the plasma flow velocity in Hall magnetohydrodynamics (MHD), Kelvin's Circulation Theorem becomes valid in Hall MHD. The ion-flow velocity in the usual circulation integral is now replaced by the canonical ion-flow velocity.
Canonical Transformation to the Free Particle
Glass, E. N.; Scanio, Joseph J. G.
1977-01-01
Demonstrates how to find some canonical transformations without solving the Hamilton-Jacobi equation. Constructs the transformations from the harmonic oscillator to the free particle and uses these as examples of transformations that cannot be maintained when going from classical to quantum systems. (MLH)
Infants' Recognition of Objects Using Canonical Color
Kimura, Atsushi; Wada, Yuji; Yang, Jiale; Otsuka, Yumiko; Dan, Ippeita; Masuda, Tomohiro; Kanazawa, So; Yamaguchi, Masami K.
2010-01-01
We explored infants' ability to recognize the canonical colors of daily objects, including two color-specific objects (human face and fruit) and a non-color-specific object (flower), by using a preferential looking technique. A total of 58 infants between 5 and 8 months of age were tested with a stimulus composed of two color pictures of an object…
How Canon Grew Big / Andres Eilart
Eilart, Andres
2004-01-01
On the development of Canon Group, the Japanese manufacturer of cameras and office machines; its activities in three regions (the USA, Europe, and Asia); and the reasons for the company's long-term success: its business philosophy and well-timed product development. See also: The company's original name was Kwanon; Competitors are consolidating
A Spectral Canonical Electrostatic Algorithm
Webb, Stephen D
2015-01-01
Studying single-particle dynamics over many periods of oscillations is a well-understood problem solved using symplectic integration. Such integration schemes derive their update sequence from an approximate Hamiltonian, guaranteeing that the geometric structure of the underlying problem is preserved. Simulating a self-consistent system over many oscillations can introduce numerical artifacts such as grid heating. This unphysical heating stems from using non-symplectic methods on Hamiltonian systems. With this guidance, we derive an electrostatic algorithm using a discrete form of Hamilton's Principle. The resulting algorithm, a gridless spectral electrostatic macroparticle model, does not exhibit the unphysical heating typical of most particle-in-cell methods. We present results of this using a two-body problem as an example of the algorithm's energy- and momentum-conserving properties.
Ensemble Forecasting of Major Solar Flares
Guerra, J A; Uritsky, V M
2015-01-01
We present the results from the first ensemble prediction model for major solar flares (M and X classes). Using the probabilistic forecasts from three models hosted at the Community Coordinated Modeling Center (NASA-GSFC) and the NOAA forecasts, we developed an ensemble forecast by linearly combining the flaring probabilities from all four methods. Performance-based combination weights were calculated using a Monte Carlo-type algorithm by applying a decision threshold $P_{th}$ to the combined probabilities and maximizing the Heidke Skill Score (HSS). Using the probabilities and events time series from 13 recent solar active regions (2012 - 2014), we found that a linear combination of probabilities can improve both probabilistic and categorical forecasts. Combination weights vary with the applied threshold and none of the tested individual forecasting models seem to provide more accurate predictions than the others for all values of $P_{th}$. According to the maximum values of HSS, a performance-based weights ...
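The weight-and-threshold search described above can be sketched with synthetic data. The member forecasts `p1`, `p2` and the grids over the weight `w` and the decision threshold `P_th` below are illustrative stand-ins for the four real forecasting methods:

```python
import numpy as np

def heidke_skill_score(pred, event):
    """HSS from the 2x2 contingency table of a categorical forecast."""
    a = np.sum(pred & event)        # hits
    b = np.sum(pred & ~event)       # false alarms
    c = np.sum(~pred & event)       # misses
    d = np.sum(~pred & ~event)      # correct rejections
    denom = (a + c) * (c + d) + (a + b) * (b + d)
    return 2.0 * (a * d - b * c) / denom if denom else 0.0

rng = np.random.default_rng(0)
n = 500
event = rng.random(n) < 0.3         # synthetic flare/no-flare record
# Two synthetic probabilistic members with different skill levels.
p1 = np.clip(event + rng.normal(0, 0.35, n), 0, 1)
p2 = np.clip(event + rng.normal(0, 0.60, n), 0, 1)

best = (0.0, (None, None))
for w in np.linspace(0, 1, 21):             # combination weight search
    combined = w * p1 + (1 - w) * p2
    for pth in np.linspace(0.05, 0.95, 19): # decision threshold P_th
        hss = heidke_skill_score(combined > pth, event)
        if hss > best[0]:
            best = (hss, (w, pth))
best_hss, (best_w, best_pth) = best
```

Because the grid includes the single-member cases (w = 0 and w = 1), the combined forecast can never score below the best individual member on the training record, mirroring the performance-based weighting in the abstract.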
Ensemble annealing of complex physical systems
Habeck, Michael
2015-01-01
Algorithms for simulating complex physical systems or solving difficult optimization problems often resort to an annealing process. Rather than simulating the system at the temperature of interest, an annealing algorithm starts at a temperature that is high enough to ensure ergodicity and gradually decreases it until the destination temperature is reached. This idea is used in popular algorithms such as parallel tempering and simulated annealing. A general problem with annealing methods is that they require a temperature schedule. Choosing well-balanced temperature schedules can be tedious and time-consuming. Imbalanced schedules can have a negative impact on the convergence, runtime and success of annealing algorithms. This article outlines a unifying framework, ensemble annealing, that combines ideas from simulated annealing, histogram reweighting and nested sampling with concepts in thermodynamic control. Ensemble annealing simultaneously simulates a physical system and estimates its density of states. The...
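The role of the temperature schedule is easiest to see in plain simulated annealing, one of the methods that ensemble annealing builds on. The double-well objective and geometric schedule below are illustrative choices, not the article's benchmark:

```python
import numpy as np

def simulated_annealing(f, x0, schedule, step=0.5, rng=None):
    """Metropolis-style annealing over a decreasing temperature schedule."""
    rng = rng if rng is not None else np.random.default_rng(0)
    x, fx = x0, f(x0)
    for T in schedule:
        for _ in range(100):          # proposals per temperature level
            y = x + rng.normal(0.0, step)
            fy = f(y)
            # Accept downhill moves always, uphill moves with Boltzmann prob.
            if fy < fx or rng.random() < np.exp(-(fy - fx) / T):
                x, fx = y, fy
    return x, fx

# Double-well objective: local minimum near x = -1.7, global near x = 2.2.
f = lambda x: (x**2 - 4) ** 2 / 16 - x / 2
schedule = 2.0 * 0.9 ** np.arange(60)   # geometric cooling, T: 2 -> ~0.004
x_best, f_best = simulated_annealing(f, x0=-2.0, schedule=schedule)
```

Starting in the shallow left well, the high-temperature phase lets the walker cross the barrier; the slow cooling then freezes it into the deep well. An imbalanced schedule (cooling too fast) would trap it on the wrong side, which is exactly the failure mode ensemble annealing is designed to avoid.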
Eigenstate Gibbs Ensemble in Integrable Quantum Systems
Nandy, Sourav; Das, Arnab; Dhar, Abhishek
2016-01-01
The Eigenstate Thermalization Hypothesis implies that for a thermodynamically large system in one of its eigenstates, the reduced density matrix describing any finite subsystem is determined solely by a set of {\\it relevant} conserved quantities. In a generic system, only the energy plays that role and hence eigenstates appear locally thermal. Integrable systems, on the other hand, possess an extensive number of such conserved quantities and hence the reduced density matrix requires specification of an infinite number of parameters (Generalized Gibbs Ensemble). However, here we show by unbiased statistical sampling of the individual eigenstates with a given finite energy density, that the local description of an overwhelming majority of these states of even such an integrable system is actually Gibbs-like, i.e. requires only the energy density of the eigenstate. Rare eigenstates that cannot be represented by the Gibbs ensemble can also be sampled efficiently by our method and their local properties are then s...
Rényi entropy, abundance distribution, and the equivalence of ensembles
Mora, Thierry; Walczak, Aleksandra M.
2016-05-01
Distributions of abundances or frequencies play an important role in many fields of science, from biology to sociology, as does the Rényi entropy, which measures the diversity of a statistical ensemble. We derive a mathematical relation between the abundance distribution and the Rényi entropy, by analogy with the equivalence of ensembles in thermodynamics. The abundance distribution is mapped onto the density of states, and the Rényi entropy to the free energy. The two quantities are related in the thermodynamic limit by a Legendre transform, by virtue of the equivalence between the micro-canonical and canonical ensembles. In this limit, we show how the Rényi entropy can be constructed geometrically from rank-frequency plots. This mapping predicts that non-concave regions of the rank-frequency curve should result in kinks in the Rényi entropy as a function of its order. We illustrate our results on simple examples, and emphasize the limitations of the equivalence of ensembles when a thermodynamic limit is not well defined. Our results help choose reliable diversity measures based on the experimental accuracy of the abundance distributions in particular frequency ranges.
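The Rényi entropy used here has a direct numerical form, H_q = log(sum_i p_i^q)/(1 - q), with the Shannon entropy as the q -> 1 limit. A short sketch (the power-law "abundance" distribution is an illustrative example):

```python
import numpy as np

def renyi_entropy(p, q):
    """Renyi entropy of order q for a normalized distribution p."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    if np.isclose(q, 1.0):            # q -> 1 limit is the Shannon entropy
        return -np.sum(p * np.log(p))
    return np.log(np.sum(p ** q)) / (1.0 - q)

# For a uniform distribution over K states, H_q = log K for every order q,
# so the entropy-versus-order curve is flat (maximal diversity at all scales).
K = 100
uniform = np.full(K, 1.0 / K)
skewed = np.arange(1, K + 1, dtype=float) ** -2.0   # power-law abundances
skewed /= skewed.sum()
H_uniform = [renyi_entropy(uniform, q) for q in (0.5, 1.0, 2.0)]
H_skewed = [renyi_entropy(skewed, q) for q in (0.5, 1.0, 2.0)]
```

For any non-uniform distribution, H_q decreases strictly with q, and it is the shape of this curve as a function of the order that the paper relates, via a Legendre transform, to the rank-frequency plot.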
Monthly Ensembles in Algal Bloom Predictions on the Baltic Sea
Roiha, Petra; Westerlund, Antti; Stipa, Tapani
2010-05-01
In this work we explore the statistical features of monthly ensembles and their capability to predict biogeochemical conditions in the Baltic Sea. Operational marine environmental modelling has been considered hard, and consequently there are very few operational ecological models. Operational modelling of harmful algal blooms (HABs) is harder still, since it is difficult to separate the algal species in models and, in general, very little is known of HAB properties. We present results of an ensemble approach to HAB forecasting in the Baltic, and discuss the applicability of the forecasting method to biogeochemical modelling. It turns out that HABs are indeed possible to forecast with useful accuracy. For modelling the algal blooms in the Baltic Sea we used the FMI operational 3-dimensional biogeochemical model to produce seasonal ensemble forecasts for different physical, chemical and biological variables. The modelled variables were temperature, salinity, velocity, silicate, phosphate, nitrate, diatoms, flagellates and two species of potentially toxic filamentous cyanobacteria, Nodularia spumigena and Aphanizomenon flos-aquae. In this work we concentrate on the latter two. Ensembles were produced by running the biogeochemical model several times, forcing it on every run with a different set of seasonal weather parameters from ECMWF's mathematically perturbed ensemble prediction forecasts. The ensembles were then analysed by statistical methods, and the median, quartiles, minimum and maximum values were calculated to estimate the probable amounts of algae. The forecast method was validated by comparing the final results against available and valid in-situ HAB data.
The Algebraic Riccati Matrix Equation for Eigendecomposition of Canonical Forms
Directory of Open Access Journals (Sweden)
M. Nouri
2013-01-01
The algebraic Riccati matrix equation is used for the eigendecomposition of specially structured matrices. This is achieved by a similarity transformation followed by the use of the algebraic Riccati matrix equation for the triangulation of matrices. The process decomposes a matrix into small, specially structured submatrices of low dimension, so that eigenpairs can be found easily. We show that the previous canonical forms I, II, III, and so on are special cases of the presented method. Numerical and structural examples are included to show the efficiency of the present method.
Combining 2-m temperature nowcasting and short range ensemble forecasting
Directory of Open Access Journals (Sweden)
A. Kann
2011-12-01
During recent years, numerical ensemble prediction systems have become an important tool for estimating the uncertainties of dynamical and physical processes as represented in numerical weather models. The latest generation of limited area ensemble prediction systems (LAM-EPSs) allows for probabilistic forecasts at high resolution in both space and time. However, these systems still suffer from systematic deficiencies. Especially for nowcasting (0–6 h) applications the ensemble spread is smaller than the actual forecast error. This paper tries to generate probabilistic short range 2-m temperature forecasts by combining a state-of-the-art nowcasting method and a limited area ensemble system, and compares the results with statistical methods. The Integrated Nowcasting Through Comprehensive Analysis (INCA) system, which has been in operation at the Central Institute for Meteorology and Geodynamics (ZAMG) since 2006 (Haiden et al., 2011), provides short range deterministic forecasts at high temporal (15 min–60 min) and spatial (1 km) resolution. An INCA Ensemble (INCA-EPS) of 2-m temperature forecasts is constructed by applying a dynamical approach, a statistical approach, and a combined dynamic-statistical method. The dynamical method takes uncertainty information (i.e. ensemble variance) from the operational limited area ensemble system ALADIN-LAEF (Aire Limitée Adaptation Dynamique Développement InterNational Limited Area Ensemble Forecasting), which is running operationally at ZAMG (Wang et al., 2011). The purely statistical method assumes a well-calibrated spread-skill relation and applies ensemble spread according to the skill of the INCA forecast of the most recent past. The combined dynamic-statistical approach adapts the ensemble variance gained from ALADIN-LAEF with non-homogeneous Gaussian regression (NGR), which yields a statistical correction of the first and second moment (mean bias and dispersion) for Gaussian distributed continuous
Energy Technology Data Exchange (ETDEWEB)
Dinpajooh, Mohammadhasan [Department of Chemistry and Chemical Theory Center, University of Minnesota, 207 Pleasant Street SE, Minneapolis, Minnesota 55455 (United States); Bai, Peng; Allan, Douglas A. [Department of Chemical Engineering and Materials Science, University of Minnesota, 421 Washington Avenue SE, Minneapolis, Minnesota 55455 (United States); Siepmann, J. Ilja, E-mail: siepmann@umn.edu [Department of Chemistry and Chemical Theory Center, University of Minnesota, 207 Pleasant Street SE, Minneapolis, Minnesota 55455 (United States); Department of Chemical Engineering and Materials Science, University of Minnesota, 421 Washington Avenue SE, Minneapolis, Minnesota 55455 (United States)
2015-09-21
Since the seminal paper by Panagiotopoulos [Mol. Phys. 61, 813 (1987)], the Gibbs ensemble Monte Carlo (GEMC) method has been the most popular particle-based simulation approach for the computation of vapor–liquid phase equilibria. However, the validity of GEMC simulations in the near-critical region has been questioned because rigorous finite-size scaling approaches cannot be applied to simulations with fluctuating volume. Valleau [Mol. Simul. 29, 627 (2003)] has argued that GEMC simulations would lead to a spurious overestimation of the critical temperature. More recently, Patel et al. [J. Chem. Phys. 134, 024101 (2011)] opined that the use of analytical tail corrections would be problematic in the near-critical region. To address these issues, we perform extensive GEMC simulations for Lennard-Jones particles in the near-critical region, varying the system size, the overall system density, and the cutoff distance. For a system with N = 5500 particles, potential truncation at 8σ and analytical tail corrections, an extrapolation of GEMC simulation data at temperatures in the range from 1.27 to 1.305 yields T_c = 1.3128 ± 0.0016, ρ_c = 0.316 ± 0.004, and p_c = 0.1274 ± 0.0013, in excellent agreement with the thermodynamic limit determined by Potoff and Panagiotopoulos [J. Chem. Phys. 109, 10914 (1998)] using grand canonical Monte Carlo simulations and finite-size scaling. Critical properties estimated using GEMC simulations with different overall system densities (0.296 ≤ ρ_t ≤ 0.336) agree to within the statistical uncertainties. For simulations with tail corrections, data obtained using r_cut = 3.5σ yield T_c and p_c that are higher by 0.2% and 1.4% than simulations with r_cut = 5σ and 8σ, but still with overlapping 95% confidence intervals. In contrast, GEMC simulations with a truncated and shifted potential show that r_cut = 8σ is insufficient to obtain accurate results. Additional GEMC simulations for hard
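Near-critical extrapolations of this kind typically combine the scaling law ρ_l − ρ_g = B(T_c − T)^β with a fit that locates T_c. The sketch below uses synthetic coexistence data and the 3D Ising exponent β ≈ 0.325 as assumptions; it is not the paper's actual data or full procedure:

```python
import numpy as np

beta_c = 0.325                  # 3D Ising critical exponent (assumption)
Tc_true = 1.3128                # target value quoted in the abstract

# Synthetic coexistence-density differences obeying the scaling law
#   rho_l - rho_g = B * (Tc - T)**beta   near Tc.
T = np.linspace(1.27, 1.305, 8)
B = 1.0
drho = B * (Tc_true - T) ** beta_c

# Linearize: drho**(1/beta) is linear in T and vanishes at T = Tc,
# so a straight-line fit locates the critical temperature.
slope, intercept = np.polyfit(T, drho ** (1.0 / beta_c), 1)
Tc_est = -intercept / slope
```

With noisy simulation data the same fit would carry error bars, which is where the quoted ±0.0016 uncertainty on T_c comes from.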
Phase-selective entrainment of nonlinear oscillator ensembles
Zlotnik, Anatoly; Nagao, Raphael; Kiss, István Z.; Li, Jr-Shin
2016-03-01
The ability to organize and finely manipulate the hierarchy and timing of dynamic processes is important for understanding and influencing brain functions, sleep and metabolic cycles, and many other natural phenomena. However, establishing spatiotemporal structures in biological oscillator ensembles is a challenging task that requires controlling large collections of complex nonlinear dynamical units. In this report, we present a method to design entrainment signals that create stable phase patterns in ensembles of heterogeneous nonlinear oscillators without using state feedback information. We demonstrate the approach using experiments with electrochemical reactions on multielectrode arrays, in which we selectively assign ensemble subgroups into spatiotemporal patterns with multiple phase clusters. The experimentally confirmed mechanism elucidates the connection between the phases and natural frequencies of a collection of dynamical elements, the spatial and temporal information that is encoded within this ensemble, and how external signals can be used to retrieve this information.
Estimating preselected and postselected ensembles
Energy Technology Data Exchange (ETDEWEB)
Massar, Serge [Laboratoire d'Information Quantique, C.P. 225, Universite libre de Bruxelles (U.L.B.), Av. F. D. Roosevelt 50, B-1050 Bruxelles (Belgium); Popescu, Sandu [H. H. Wills Physics Laboratory, University of Bristol, Tyndall Avenue, Bristol BS8 1TL (United Kingdom); Hewlett-Packard Laboratories, Stoke Gifford, Bristol BS12 6QZ (United Kingdom)
2011-11-15
In analogy with the usual quantum state-estimation problem, we introduce the problem of state estimation for a pre- and postselected ensemble. The problem has fundamental physical significance since, as argued by Y. Aharonov and collaborators, pre- and postselected ensembles are the most basic quantum ensembles. Two new features are shown to appear: (1) information is flowing to the measuring device both from the past and from the future; (2) because of the postselection, certain measurement outcomes can be forced never to occur. Due to these features, state estimation in such ensembles is dramatically different from the case of ordinary, preselected-only ensembles. We develop a general theoretical framework for studying this problem and illustrate it through several examples. We also prove general theorems establishing that information flowing from the future is closely related to, and in some cases equivalent to, the complex conjugate information flowing from the past. Finally, we illustrate our approach on examples involving covariant measurements on spin-1/2 particles. We emphasize that all state-estimation problems can be extended to the pre- and postselected situation. The present work thus lays the foundations of a much more general theory of quantum state estimation.
CME Ensemble Forecasting - A Primer
Pizzo, V. J.; de Koning, C. A.; Cash, M. D.; Millward, G. H.; Biesecker, D. A.; Codrescu, M.; Puga, L.; Odstrcil, D.
2014-12-01
SWPC has been evaluating various approaches for ensemble forecasting of Earth-directed CMEs. We have developed the software infrastructure needed to support broad-ranging CME ensemble modeling, including composing, interpreting, and making intelligent use of ensemble simulations. The first step is to determine whether the physics of the interplanetary propagation of CMEs is better described as chaotic (like terrestrial weather) or deterministic (as in tsunami propagation). This is important, since different ensemble strategies are to be pursued under the two scenarios. We present the findings of a comprehensive study of CME ensembles in uniform and structured backgrounds that reveals systematic relationships between input cone parameters and ambient flow states and resulting transit times and velocity/density amplitudes at Earth. These results clearly indicate that the propagation of single CMEs to 1 AU is a deterministic process. Thus, the accuracy with which one can forecast the gross properties (such as arrival time) of CMEs at 1 AU is determined primarily by the accuracy of the inputs. This is no tautology - it means specifically that efforts to improve forecast accuracy should focus upon obtaining better inputs, as opposed to developing better propagation models. In a companion paper (deKoning et al., this conference), we compare in situ solar wind data with forecast events in the SWPC operational archive to show how the qualitative and quantitative findings presented here are entirely consistent with the observations and may lead to improved forecasts of arrival time at Earth.
Evolutionary Ensemble for In Silico Prediction of Ames Test Mutagenicity
Chen, Huanhuan; Yao, Xin
Driven by new regulations and animal welfare concerns, the need to develop in silico models as alternative approaches to the safety assessment of chemicals without animal testing has increased recently. This paper describes a novel machine-learning ensemble approach to building an in silico model for predicting Ames test mutagenicity, one of a battery of the most commonly used experimental in vitro and in vivo genotoxicity tests for the safety evaluation of chemicals. Evolutionary random neural ensemble with negative correlation learning (ERNE) [1] was developed based on neural networks and evolutionary algorithms. ERNE combines bootstrap sampling of the training data with random-subspace feature selection to ensure diversity among the individuals of the initial ensemble. Furthermore, while evolving individuals within the ensemble, it makes use of negative correlation learning, enabling individual NNs to be trained to be as accurate as possible while remaining as diverse as possible. The resulting individuals in the final ensemble are therefore capable of cooperating collectively to achieve better generalization in prediction. The empirical experiments suggest that ERNE is an effective ensemble approach for predicting the Ames test mutagenicity of chemicals.
Joint state and parameter estimation with an iterative ensemble Kalman smoother
M. Bocquet; Sakov, P.
2013-01-01
Both ensemble filtering and variational data assimilation methods have proven useful in the joint estimation of state variables and parameters of geophysical models. Yet, their respective benefits and drawbacks in this task are distinct. An ensemble variational method, known as the iterative ensemble Kalman smoother (IEnKS), has recently been introduced. It is based on an adjoint-model-free variational, but flow-dependent, scheme. As such, the IEnKS is a candidate too...
Excitation energies from ensemble DFT
Borgoo, Alex; Teale, Andy M.; Helgaker, Trygve
2015-12-01
We study the evaluation of the Gross-Oliveira-Kohn expression for excitation energies, E1 − E0 = ε1 − ε0 + ∂Exc,w[ρ]/∂w |ρ=ρ0. This expression gives the difference between an excitation energy E1 − E0 and the corresponding Kohn-Sham orbital energy difference ε1 − ε0 as a partial derivative of the exchange-correlation energy of an ensemble of states, Exc,w[ρ], with respect to the ensemble weight w, evaluated at the ground-state density ρ0. Through Lieb maximisation, on input full-CI density functions, the exchange-correlation energy is evaluated accurately and the partial derivative is evaluated numerically using finite differences. The equality is studied numerically for different geometries of the H2 molecule and different ensemble weights. We explore the adiabatic connection for the ensemble exchange-correlation energy. The latter may prove useful when modelling the unknown weight dependence of the exchange-correlation energy.
DEFF Research Database (Denmark)
Sunyer Pinya, Maria Antonia; Madsen, Henrik; Rosbjerg, Dan;
2013-01-01
all these methods is that the climate models are independent. This study addresses the validity of this assumption for two ensembles of regional climate models (RCMs) from the Ensemble-Based Predictions of Climate Changes and their Impacts (ENSEMBLES) project based on the land cells covering Denmark. Daily precipitation indices from an ensemble of RCMs driven by the 40-yr ECMWF Re-Analysis (ERA-40) and an ensemble of the same RCMs driven by different general circulation models (GCMs) are analyzed. Two different methods are used to estimate the amount of independent information in the ensembles. These are based on different statistical properties of a measure of climate model error. Additionally, a hierarchical cluster analysis is carried out. Regardless of the method used, the effective number of RCMs is smaller than the total number of RCMs. The estimated effective number of RCMs varies depending...
Introduction to Modern Canonical Quantum General Relativity
Thiemann, T
2001-01-01
This is an introduction to the by now fifteen-year-old research field of canonical quantum general relativity, sometimes called "loop quantum gravity". The term "modern" in the title refers to the fact that the quantum theory is based on formulating classical general relativity as a theory of connections rather than metrics, in contrast to the original version due to Arnowitt, Deser and Misner. Canonical quantum general relativity is an attempt to define a mathematically rigorous, non-perturbative, background-independent theory of Lorentzian quantum gravity in four spacetime dimensions in the continuum. The approach is minimal in that one simply analyzes the logical consequences of combining the principles of general relativity with the principles of quantum mechanics. The requirement to preserve background independence has led to new, fascinating mathematical structures which one does not see in perturbative approaches, e.g. a fundamental discreteness of spacetime seems to be a prediction of the theory provi...
Quaternion Fourier and Linear Canonical Inversion Theorems
Hu, Xiao Xiao; Kou, Kit Ian
2016-01-01
The Quaternion Fourier transform (QFT) is one of the key tools in studying color image processing. Indeed, a deep understanding of the QFT has made it possible to transform color images as a whole, rather than as separated color components. In addition, understanding the QFT paves the way for understanding other integral transforms, such as the Quaternion Fractional Fourier transform (QFRFT), the Quaternion linear canonical transform (QLCT) and the Quaternion Wigner-Ville distribution. The aim of this pa...
Ensemble teleportation under suboptimal conditions
International Nuclear Information System (INIS)
The possibility of teleportation is certainly the most interesting consequence of quantum non-separability. In the present paper, the feasibility of teleportation is examined on the basis of the rigorous ensemble interpretation of quantum mechanics if non-ideal constraints are imposed on the teleportation scheme. Importance is attached both to the case of noisy Einstein-Podolsky-Rosen (EPR) ensembles and to the conditions under which automatic teleportation is still possible. The success of teleportation is discussed using a new fidelity measure which avoids the weaknesses of previous proposals
The Partition Ensemble Fallacy Fallacy
Nemoto, K; Nemoto, Kae; Braunstein, Samuel L.
2002-01-01
The Partition Ensemble Fallacy was recently applied to claim no quantum coherence exists in coherent states produced by lasers. We show that this claim relies on an untestable belief of a particular prior distribution of absolute phase. One's choice for the prior distribution for an unobservable quantity is a matter of `religion'. We call this principle the Partition Ensemble Fallacy Fallacy. Further, we show an alternative approach to construct a relative-quantity Hilbert subspace where unobservability of certain quantities is guaranteed by global conservation laws. This approach is applied to coherent states and constructs an approximate relative-phase Hilbert subspace.
Efficient computations of quantum canonical Gibbs state in phase space.
Bondar, Denys I; Campos, Andre G; Cabrera, Renan; Rabitz, Herschel A
2016-06-01
The Gibbs canonical state, as a maximum-entropy density matrix, represents a quantum system in equilibrium with a thermostat. This state plays an essential role in thermodynamics and serves as the initial condition for nonequilibrium dynamical simulations. We solve a long-standing problem of computing the Gibbs state Wigner function with nearly machine accuracy by solving the Bloch equation directly in phase space. Furthermore, algorithms are provided that yield high-quality Wigner distributions for pure stationary states as well as for Thomas-Fermi and Bose-Einstein distributions. The developed numerical methods furnish a long-sought efficient computational framework for nonequilibrium quantum simulations directly in the Wigner representation. PMID:27415384
Communication: Generalized canonical purification for density matrix minimization
Truflandier, Lionel A.; Dianzinga, Rivo M.; Bowler, David R.
2016-03-01
A Lagrangian formulation for the constrained search for the N-representable one-particle density matrix based on the McWeeny idempotency error minimization is proposed, which converges systematically to the ground state. A closed form of the canonical purification is derived for which no a posteriori adjustment on the trace of the density matrix is needed. The relationship with comparable methods is discussed, showing their possible generalization through the hole-particle duality. The appealing simplicity of this self-consistent recursion relation along with its low computational complexity could prove useful as an alternative to diagonalization in solving dense and sparse matrix eigenvalue problems.
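The idempotency-error minimization underlying this approach can be illustrated with the classic McWeeny purification recursion, ρ → 3ρ² − 2ρ³, which drives the eigenvalues of a near-idempotent density matrix to 0 or 1. The sketch below (plain NumPy, on a toy 2×2 matrix) shows the basic recursion, not the authors' generalized canonical-purification scheme:

```python
import numpy as np

def mcweeny_purify(rho, tol=1e-12, max_iter=100):
    """Drive a near-idempotent density matrix to idempotency.

    Iterates rho -> 3*rho^2 - 2*rho^3, which pushes eigenvalues
    in (0, 0.5) to 0 and eigenvalues in (0.5, 1) to 1."""
    for _ in range(max_iter):
        rho2 = rho @ rho
        if np.linalg.norm(rho2 - rho) < tol:  # McWeeny idempotency error
            break
        rho = 3.0 * rho2 - 2.0 * rho @ rho2
    return rho

# Toy 2x2 one-particle density matrix with eigenvalues near 1 and 0
rho0 = np.array([[0.9, 0.1],
                 [0.1, 0.1]])
rho = mcweeny_purify(rho0)
print(np.linalg.norm(rho @ rho - rho))  # idempotency error ~ 0
```

Because the recursion converges quadratically, a handful of matrix multiplications typically suffices, which is the basis of its appeal as an alternative to diagonalization.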
Dayyani, Z; Dehghani, M H
2016-01-01
We investigate the critical behavior of $(n+1)$-dimensional topological dilaton black holes in an extended phase space, in both the canonical and grand-canonical ensembles, when the gauge field is in the form of a power-Maxwell field. In order to do this we introduce for the first time the counterterms that remove the divergences of the action in dilaton gravity for solutions with a curved boundary. Using the counterterm method, we calculate the conserved quantities and the action, and therefore the Gibbs free energy, in both the canonical and grand-canonical ensembles. We treat the cosmological constant as a thermodynamic pressure and its conjugate quantity as a thermodynamic volume. In the presence of a power-Maxwell field, we find an analogy between topological dilaton black holes and the van der Waals liquid-gas system in all dimensions, provided the dilaton coupling constant $\\alpha$ and the power parameter $p$ are chosen properly. Interestingly enough, we observe that the power-Maxwell dilaton black holes admit...
Canonical Energy is Quantum Fisher Information
Lashkari, Nima
2015-01-01
In quantum information theory, Fisher Information is a natural metric on the space of perturbations to a density matrix, defined by calculating the relative entropy with the unperturbed state at quadratic order in perturbations. In gravitational physics, Canonical Energy defines a natural metric on the space of perturbations to spacetimes with a Killing horizon. In this paper, we show that the Fisher information metric for perturbations to the vacuum density matrix of a ball-shaped region B in a holographic CFT is dual to the canonical energy metric for perturbations to a corresponding Rindler wedge R_B of Anti-de-Sitter space. Positivity of relative entropy at second order implies that the Fisher information metric is positive definite. Thus, for physical perturbations to anti-de-Sitter spacetime, the canonical energy associated to any Rindler wedge must be positive. This second-order constraint on the metric extends the first order result from relative entropy positivity that physical perturbations must sat...
Symmetric Quartic Map in natural canonical coordinates
Baldwin, Danielle; Jones, Bilal; Settle, Talise; Ali, Halima; Punjabi, Alkesh
2015-11-01
The generating function for the simple map is modified by replacing the cubic term in canonical momentum with a quartic term. New parameters are introduced in the modified generating function to control the height and width of the ideal separatrix surface and the poloidal magnetic flux inside the ideal separatrix. The new generating function is the generating function for the Symmetric Quartic Map (SQM). The new parameters in the generating function are chosen such that the height, width, elongation, and poloidal flux inside the separatrix for the SQM are the same as for the simple map. The resulting generating function for the SQM is then transformed from physical coordinates to natural canonical coordinates. The equilibrium separatrix of the SQM is calculated in the natural canonical coordinates. The purpose of this research is to calculate the homoclinic tangle of the SQM and compare it with that of the simple map. The separatrix of the simple map is open and unbounded, while the separatrix of the SQM is closed and compact. The motivation is to see what role the topology of the separatrix plays in its homoclinic tangle in single-null divertor tokamaks. This work is supported by grants DE-FG02-01ER54624, DE-FG02-04ER54793, and DE-FG02-07ER54937.
Improving ensemble forecasting with q-norm bred vectors
Pazo, Diego; Lopez, Juan Manuel; Rodriguez, Miguel Angel
2016-04-01
Error breeding is a popular and simple method to generate initial perturbations for use in ensemble forecasting, used for operational purposes in many weather/climate centres worldwide. There is a widespread belief among practitioners that the type of norm used in the periodic normalizations of bred vectors (BVs) does not affect the performance of ensemble forecasting systems. However, we have recently reported that BVs constructed with different norms have in fact very different dynamical and spatial properties. In particular, BVs constructed with the 0-norm or geometric norm have nice properties (e.g. enhancement of ensemble diversity), which in principle render it more adequate for constructing ensembles than other norm types such as the Euclidean one. These advantages are clearly demonstrated here in a simple ensemble forecasting experiment for the Lorenz-96 model with ensembles of BVs. Our simple numerical assimilation experiment shows how the increased statistical diversity of geometric BVs leads to improved forecasting scores compared with BVs constructed with the standard Euclidean norm.
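The breeding cycle itself is simple: integrate a control and a perturbed run, take their difference, rescale the difference with the chosen norm, and repeat. A minimal NumPy sketch for the Lorenz-96 model is given below; the forward-Euler integrator, the step sizes, and the use of a q-norm (q = 2 recovering the Euclidean case) as a stand-in for the family of norms discussed above are all illustrative assumptions, not the authors' setup:

```python
import numpy as np

def lorenz96_step(x, F=8.0, dt=0.01):
    """One forward-Euler step of the Lorenz-96 model (illustrative only)."""
    dxdt = (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F
    return x + dt * dxdt

def breed(x0, n_cycles=50, steps_per_cycle=4, amp=1e-3, q=2.0, seed=0):
    """Grow a bred vector, renormalizing with a q-norm every cycle.

    q = 2 reproduces the standard Euclidean choice; other q values
    stand in for the alternative norms discussed in the abstract."""
    rng = np.random.default_rng(seed)
    x = x0.copy()
    dx = amp * rng.standard_normal(x.size)
    for _ in range(n_cycles):
        xp = x + dx
        for _ in range(steps_per_cycle):
            x = lorenz96_step(x)
            xp = lorenz96_step(xp)
        dx = xp - x                                   # grown perturbation
        norm = np.mean(np.abs(dx) ** q) ** (1.0 / q)  # q-norm (mean form)
        dx *= amp / norm                              # rescale to amplitude
    return dx

bv = breed(8.0 * np.ones(40) + 0.01 * np.arange(40))
```

Repeating this with different seeds yields an ensemble of bred vectors whose diversity can then be compared across norm choices.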
Heteroscedastic Extended Logistic Regression for Post-Processing of Ensemble Guidance
Messner, Jakob W.; Mayr, Georg J.; Wilks, Daniel S.; Zeileis, Achim
2014-05-01
To achieve well-calibrated probabilistic weather forecasts, numerical ensemble forecasts are often statistically post-processed. One recent ensemble-calibration method is extended logistic regression which extends the popular logistic regression to yield full probability distribution forecasts. Although the purpose of this method is to post-process ensemble forecasts, usually only the ensemble mean is used as predictor variable, whereas the ensemble spread is neglected because it does not improve the forecasts. In this study we show that when simply used as ordinary predictor variable in extended logistic regression, the ensemble spread only affects the location but not the variance of the predictive distribution. Uncertainty information contained in the ensemble spread is therefore not utilized appropriately. To solve this drawback we propose a new approach where the ensemble spread is directly used to predict the dispersion of the predictive distribution. With wind speed data and ensemble forecasts from the European Centre for Medium-Range Weather Forecasts (ECMWF) we show that using this approach, the ensemble spread can be used effectively to improve forecasts from extended logistic regression.
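The proposed approach can be sketched as a logistic predictive CDF whose location depends on the ensemble mean and whose scale depends on the ensemble spread. The coefficients below are illustrative placeholders rather than fitted values, and the exact link functions are assumptions:

```python
import numpy as np

def hxlr_cdf(q, ens_mean, ens_spread, a=0.0, b=1.0, c=0.0, d=1.0):
    """Heteroscedastic extended logistic regression forecast CDF at threshold q.

    Location is driven by the ensemble mean; the scale (dispersion) is
    driven by the ensemble spread. Coefficients a, b, c, d would normally
    be fitted to past forecast/observation pairs; the defaults here are
    illustrative placeholders."""
    loc = a + b * ens_mean
    scale = np.exp(c + d * np.log(ens_spread))
    return 1.0 / (1.0 + np.exp(-(q - loc) / scale))

# Probability that wind speed stays below 10 (arbitrary units) for a
# hypothetical forecast with ensemble mean 8 and spread 2
p = hxlr_cdf(10.0, ens_mean=8.0, ens_spread=2.0)
```

The key point of the paper is captured by the `scale` line: the spread enters the dispersion of the predictive distribution, not merely its location.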
A new ensemble feature selection and its application to pattern classification
Institute of Scientific and Technical Information of China (English)
Dongbo ZHANG; Yaonan WANG
2009-01-01
Neural network ensemble based on rough-set reducts is proposed to decrease the computational complexity of conventional ensemble feature selection algorithms. First, a dynamic reduction technology combining a genetic algorithm with a resampling method is adopted to obtain reducts with good generalization ability. Second, multiple BP neural networks based on different reducts are built as base classifiers. According to the idea of selective ensemble, the neural network ensemble with the best generalization ability can be found by search strategies. Finally, classification based on the neural network ensemble is implemented by combining the predictions of the component networks by voting. The method has been verified in experiments on remote sensing image and five UCI dataset classifications. Compared with conventional ensemble feature selection algorithms, it costs less time, has lower computational complexity, and achieves satisfactory classification accuracy.
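The final combination step, plain majority voting over the component networks' class predictions, can be sketched as follows (the label layout is a hypothetical example, not data from the paper):

```python
import numpy as np

def majority_vote(predictions):
    """Combine class predictions of base classifiers by majority vote.

    predictions: (n_classifiers, n_samples) integer class labels."""
    predictions = np.asarray(predictions)
    n_classes = predictions.max() + 1
    # Count votes per class for each sample (column)
    votes = np.apply_along_axis(np.bincount, 0, predictions, None, n_classes)
    return votes.argmax(axis=0)

# Three hypothetical base classifiers on five samples
preds = [[0, 1, 1, 2, 0],
         [0, 1, 0, 2, 1],
         [1, 1, 1, 2, 0]]
print(majority_vote(preds))  # -> [0 1 1 2 0]
```

In the paper each row would come from a BP network trained on a different rough-set reduct; any set of base classifiers can be combined this way.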
A new approach to derive Pfaffian structures for random matrix ensembles
International Nuclear Information System (INIS)
Correlation functions for matrix ensembles with orthogonal and unitary-symplectic rotation symmetry are more complicated to calculate than in the unitary case. The supersymmetry method and the orthogonal polynomials are two techniques to tackle this task. Recently, we presented a new method to average ratios of characteristic polynomials over matrix ensembles invariant under the unitary group. Here, we extend this approach to ensembles with orthogonal and unitary-symplectic rotation symmetry. We show that Pfaffian structures can be derived for a wide class of orthogonal and unitary-symplectic rotation invariant ensembles in a unifying way. This also includes those for which this structure was not known previously, as the real Ginibre ensemble and the Gaussian real chiral ensemble with two independent matrices as well.
Multimodel ensembles of wheat growth
DEFF Research Database (Denmark)
Martre, Pierre; Wallach, Daniel; Asseng, Senthold;
2015-01-01
Crop models of crop growth are increasingly used to quantify the impact of global changes due to climate or crop management. Therefore, accuracy of simulation results is a major concern. Studies with ensembles of crop models can give valuable information about model accuracy and uncertainty, but ...
Global Ensemble Forecast System (GEFS) [1 Deg.
National Oceanic and Atmospheric Administration, Department of Commerce — The Global Ensemble Forecast System (GEFS) is a weather forecast model made up of 21 separate forecasts, or ensemble members. The National Centers for Environmental...
Statistical Mechanics of Linear and Nonlinear Time-Domain Ensemble Learning
Miyoshi, Seiji; Okada, Masato
2006-01-01
Conventional ensemble learning combines students in the space domain. In this paper, however, we combine students in the time domain and call it time-domain ensemble learning. We analyze, compare, and discuss the generalization performances regarding time-domain ensemble learning of both a linear model and a nonlinear model. Analyzing in the framework of online learning using a statistical mechanical method, we show the qualitatively different behaviors between the two models. In a linear mod...
Directory of Open Access Journals (Sweden)
S. Mohammad Motamed-al-Shariati
2012-04-01
Full Text Available Background: Rhinoplasty is one of the most common plastic surgeries. Although patient satisfaction is still the main prerequisite for success, this method of determining the outcome of surgery is qualitative. A quantitative method is required to compare the results of rhinoplasty. Materials and Methods: In this pilot study, Canon cosmetic standards were measured in 15 patients undergoing rhinoplasty before and after the surgery. The changes in these standards were presented quantitatively. In addition, the patients’ satisfaction with the surgery was examined through questionnaires. Data were analyzed using SPSS-11 statistical software, the dependent t-test and the Pearson correlation coefficient. Results: 15 patients were examined in a 6-month period; all patients were female and their average age was 23. The results showed that rhinoplasty makes changes in 5 out of 9 Canon standards. The lowest patient satisfaction score was 17 and the highest was 24. The average satisfaction score was 22.3; a score reduction was shown after rhinoplasty in all Canon standards except standards 7 and 8 (p < 0.05). There was no statistically significant relationship between changes in Canon standards before and after rhinoplasty surgery and patient satisfaction. Conclusion: The results showed that even if Canon standards change after the surgery, patients’ satisfaction depends on factors other than the mathematical calculation of changes in facial components. In other words, although symmetry is desirable, it is not equivalent to beauty.
Exploring the calibration of a wind forecast ensemble for energy applications
Heppelmann, Tobias; Ben Bouallegue, Zied; Theis, Susanne
2015-04-01
In the German research project EWeLiNE, Deutscher Wetterdienst (DWD) and the Fraunhofer Institute for Wind Energy and Energy System Technology (IWES) are collaborating with three German Transmission System Operators (TSOs) in order to provide the TSOs with improved probabilistic power forecasts. Probabilistic power forecasts are derived from probabilistic weather forecasts, themselves derived from ensemble prediction systems (EPS). Since the considered raw ensemble wind forecasts suffer from underdispersiveness and bias, calibration methods are developed for the correction of the model bias and the ensemble spread bias. The overall aim is to improve the ensemble forecasts such that the uncertainty of the possible weather development is depicted by the ensemble spread from the first forecast hours. Additionally, the ensemble members after calibration should remain physically consistent scenarios. We focus on probabilistic hourly wind forecasts with a horizon of 21 h delivered by the convection-permitting high-resolution ensemble system COSMO-DE-EPS, which has been operational at DWD since 2012. The ensemble consists of 20 members driven by four different global models. The model area includes the whole of Germany and parts of Central Europe with a horizontal resolution of 2.8 km and a vertical resolution of 50 model levels. For verification we use wind mast measurements at around 100 m height, which corresponds to the hub height of wind energy plants belonging to wind farms within the model area. Calibration of the ensemble forecasts can be performed by different statistical methods applied to the raw ensemble output. Here, we explore local bivariate Ensemble Model Output Statistics at individual sites and quantile regression with different predictors. Applying different methods, we already show an improvement of ensemble wind forecasts from COSMO-DE-EPS for energy applications. In addition, an ensemble copula coupling approach transfers the time-dependencies of the raw
Bayesian Model Averaging for Ensemble-Based Estimates of Solvation Free Energies
Gosink, Luke J; Reehl, Sarah M; Whitney, Paul D; Mobley, David L; Baker, Nathan A
2016-01-01
This paper applies the Bayesian Model Averaging (BMA) statistical ensemble technique to estimate small-molecule solvation free energies. There is a wide range of methods for predicting solvation free energies, ranging from empirical statistical models to ab initio quantum mechanical approaches. Each of these methods is based on a set of conceptual assumptions that can affect its predictive accuracy and transferability. Using an iterative statistical process, we have selected and combined solvation energy estimates using an ensemble of 17 diverse methods from the SAMPL4 blind prediction study to form a single, aggregated solvation energy estimate. The ensemble design process evaluates the statistical information in each individual method as well as the performance of the aggregate estimate obtained from the ensemble as a whole. Methods that possess minimal or redundant information are pruned from the ensemble and the evaluation process repeats until aggregate predictive performance can no longer be improv...
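In its simplest form, BMA weights each method by its posterior probability given training data and averages the predictions. The sketch below uses Gaussian likelihoods with a uniform prior purely as an illustration; the paper's iterative pruning procedure and actual weighting scheme are more involved:

```python
import numpy as np

def bma_weights(pred_train, y_train, sigma=1.0):
    """Posterior model weights from Gaussian likelihoods on training data.

    pred_train: (n_methods, n_train) predictions of each method.
    Assumes a uniform prior over methods; sigma is an assumed noise scale."""
    resid = pred_train - y_train                # per-method residuals
    loglik = -0.5 * np.sum(resid**2, axis=1) / sigma**2
    w = np.exp(loglik - loglik.max())           # numerically stable softmax
    return w / w.sum()

def bma_predict(pred_new, w):
    """Aggregate estimate as the weight-averaged prediction."""
    return w @ pred_new

# Three hypothetical solvation-energy methods, two training molecules
y = np.array([-5.0, -3.2])
P = np.array([[-4.9, -3.1],   # accurate method -> largest weight
              [-6.0, -2.0],
              [-5.2, -3.6]])
w = bma_weights(P, y)
```

The more accurate a method is on the training molecules, the larger its contribution to the aggregated estimate.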
Quantifying Monte Carlo uncertainty in ensemble Kalman filter
Energy Technology Data Exchange (ETDEWEB)
Thulin, Kristian; Naevdal, Geir; Skaug, Hans Julius; Aanonsen, Sigurd Ivar
2009-01-15
This report presents results obtained during Kristian Thulin's PhD study and is a slightly modified form of a paper submitted to SPE Journal. Kristian Thulin did most of his portion of the work while a PhD student at CIPR, University of Bergen. The ensemble Kalman filter (EnKF) is currently considered one of the most promising methods for conditioning reservoir simulation models to production data. The EnKF is a sequential Monte Carlo method based on a low-rank approximation of the system covariance matrix. The posterior probability distribution of model variables may be estimated from the updated ensemble, but because of the low-rank covariance approximation, the updated ensemble members become correlated samples from the posterior distribution. We suggest using multiple EnKF runs, each with a smaller ensemble size, to obtain truly independent samples from the posterior distribution. This allows a point-wise confidence interval for the posterior cumulative distribution function (CDF) to be constructed. We present a methodology for finding an optimal combination of ensemble batch size (n) and number of EnKF runs (m) while keeping the total number of ensemble members (m x n) constant. The optimal combination of n and m is found by minimizing the integrated mean square error (MSE) of the CDFs, and we choose to define an EnKF run with 10,000 ensemble members as having zero Monte Carlo error. The methodology is tested on a simplistic, synthetic 2D model, but should be applicable also to larger, more realistic models. (author). 12 refs., figs., tabs.
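The point-wise confidence interval follows from treating each of the m EnKF runs as an independent empirical-CDF estimate. A minimal sketch, using synthetic stand-in samples rather than actual EnKF output:

```python
import numpy as np

def ecdf_confidence(runs, x, z=1.96):
    """Point-wise confidence interval for the posterior CDF at x.

    runs: (m, n) array -- m independent runs of n ensemble members each.
    Each run gives one independent ECDF estimate at x; averaging over runs
    yields a mean CDF value and a normal-approximation confidence band."""
    runs = np.asarray(runs)
    m = runs.shape[0]
    F = (runs <= x).mean(axis=1)            # ECDF of each run at x
    mean = F.mean()
    half = z * F.std(ddof=1) / np.sqrt(m)   # standard error across runs
    return mean - half, mean, mean + half

rng = np.random.default_rng(1)
samples = rng.normal(size=(20, 50))         # stand-in for 20 runs of 50 members
lo, mid, hi = ecdf_confidence(samples, 0.0)
```

Repeating this over a grid of x values traces out the confidence band for the whole CDF, which is what the batch-size/run-count trade-off (n versus m) is tuned against.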
Sysoev, I. V.; Ponomarenko, V. I.; Prokhorov, M. D.
2016-01-01
A method for the reconstruction of the architecture, strength of couplings, and parameters of elements in ensembles of coupled time-delay systems from their time series is proposed. The effectiveness of the method is demonstrated on chaotic time series of the ensemble of diffusively coupled nonidentical Ikeda equations in the presence of noise.
Ensemble segmentation for GBM brain tumors on MR images using confidence-based averaging
Huo, J.; Okada, K.; Rikxoort, E.M. van; Kim, H.J.; Alger, J.R.; Pope, W.B.; Goldin, J.G.; Brown, M.S.
2013-01-01
Purpose: Ensemble segmentation methods combine the segmentation results of individual methods into a final one, with the goal of achieving greater robustness and accuracy. The goal of this study was to develop an ensemble segmentation framework for glioblastoma multiforme tumors on single-channel T1
A novel hybrid ensemble learning paradigm for nuclear energy consumption forecasting
International Nuclear Information System (INIS)
Highlights: ► A hybrid ensemble learning paradigm integrating EEMD and LSSVR is proposed. ► The hybrid ensemble method is useful to predict time series with high volatility. ► The ensemble method can be used for both one-step and multi-step ahead forecasting. - Abstract: In this paper, a novel hybrid ensemble learning paradigm integrating ensemble empirical mode decomposition (EEMD) and least squares support vector regression (LSSVR) is proposed for nuclear energy consumption forecasting, based on the principle of “decomposition and ensemble”. This hybrid ensemble learning paradigm is formulated specifically to address difficulties in modeling nuclear energy consumption, which has inherently high volatility, complexity and irregularity. In the proposed hybrid ensemble learning paradigm, EEMD, as a competitive decomposition method, is first applied to decompose original data of nuclear energy consumption (i.e. a difficult task) into a number of independent intrinsic mode functions (IMFs) of original data (i.e. some relatively easy subtasks). Then LSSVR, as a powerful forecasting tool, is implemented to predict all extracted IMFs independently. Finally, these predicted IMFs are aggregated into an ensemble result as final prediction, using another LSSVR. For illustration and verification purposes, the proposed learning paradigm is used to predict nuclear energy consumption in China. Empirical results demonstrate that the novel hybrid ensemble learning paradigm can outperform some other popular forecasting models in both level prediction and directional forecasting, indicating that it is a promising tool to predict complex time series with high volatility and irregularity.
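The "decomposition and ensemble" principle can be sketched generically: split the series into simpler components, forecast each, and aggregate. The example below substitutes a moving-average decomposition for EEMD and a one-parameter AR(1) fit for LSSVR, purely to keep the sketch dependency-free; it is not the paper's method:

```python
import numpy as np

def decompose(y, window=5):
    """Illustrative two-component decomposition: smooth trend + residual.

    (The paper uses EEMD, which yields several IMFs; a moving average
    stands in here to avoid non-standard dependencies.)"""
    kernel = np.ones(window) / window
    trend = np.convolve(y, kernel, mode="same")
    return [trend, y - trend]

def fit_ar1(c):
    """One-parameter AR(1) fit as a stand-in for the per-component model."""
    return np.dot(c[:-1], c[1:]) / np.dot(c[:-1], c[:-1])

def forecast_next(y):
    """Decompose, predict each component one step ahead, then aggregate."""
    return sum(fit_ar1(c) * c[-1] for c in decompose(y))

y = np.sin(np.linspace(0, 6, 60)) + 0.05 * np.random.default_rng(0).standard_normal(60)
yhat = forecast_next(y)
```

In the paper the aggregation itself is done by another LSSVR rather than a plain sum; the structure (decompose, model each component, recombine) is the same.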
Canonical Notch activation in osteocytes causes osteopetrosis.
Canalis, Ernesto; Bridgewater, David; Schilling, Lauren; Zanotti, Stefano
2016-01-15
Activation of Notch1 in cells of the osteoblastic lineage inhibits osteoblast differentiation/function and causes osteopenia, whereas its activation in osteocytes causes a distinct osteopetrotic phenotype. To explore mechanisms responsible, we established the contributions of canonical Notch signaling (Rbpjκ dependent) to osteocyte function. Transgenics expressing Cre recombinase under the control of the dentin matrix protein-1 (Dmp1) promoter were crossed with Rbpjκ conditional mice to generate Dmp1-Cre(+/-);Rbpjκ(Δ/Δ) mice. These mice did not have a skeletal phenotype, indicating that Rbpjκ is dispensable for osteocyte function. To study the Rbpjκ contribution to Notch activation, Rosa(Notch) mice, where a loxP-flanked STOP cassette is placed between the Rosa26 promoter and the NICD coding sequence, were crossed with Dmp1-Cre transgenic mice and studied in the context (Dmp1-Cre(+/-);Rosa(Notch);Rbpjκ(Δ/Δ)) or not (Dmp1-Cre(+/-);Rosa(Notch)) of Rbpjκ inactivation. Dmp1-Cre(+/-);Rosa(Notch) mice exhibited increased femoral trabecular bone volume and decreased osteoclasts and bone resorption. The phenotype was reversed in the context of the Rbpjκ inactivation, demonstrating that Notch canonical signaling was accountable for the phenotype. Notch activation downregulated Sost and Dkk1 and upregulated Axin2, Tnfrsf11b, and Tnfsf11 mRNA expression, and these effects were not observed in the context of the Rbpjκ inactivation. In conclusion, Notch activation in osteocytes suppresses bone resorption and increases bone volume by utilization of canonical signals that also result in the inhibition of Sost and Dkk1 and upregulation of Wnt signaling. PMID:26578715
Multistage ensemble of feedforward neural networks for prediction of heating energy consumption
Directory of Open Access Journals (Sweden)
Jovanović Radiša Ž.
2016-01-01
Full Text Available Feedforward neural network models are created for prediction of heating energy consumption of a university campus. Actual measured data are used for training and testing the models. A multistage neural network ensemble is proposed for the possible improvement of prediction accuracy. Previously trained feedforward neural networks are first separated into clusters, using the k-means algorithm, and then the best network of each cluster is chosen as a member of the ensemble. Three different averaging methods (simple, weighted and median) for obtaining the ensemble output are applied. Besides this conventional approach, a single radial basis neural network in the second level is used to aggregate the selected ensemble members. It is shown that heating energy consumption can be predicted with better accuracy by using an ensemble of neural networks than by using the best trained single neural network, while the best results are achieved with the multistage ensemble.
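The select-then-aggregate pipeline above can be sketched as follows. This is a simplified illustration with synthetic member predictions: a tiny k-means clusters the members, the lowest-error member of each cluster is kept, and the three averaging modes are compared. The inverse-MSE weighting is an assumption, not the paper's scheme.

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Tiny k-means over member-prediction vectors (rows of X)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

def select_and_average(preds, y_true, k=2):
    """Cluster members, keep the best of each cluster, then aggregate.
    Returns (simple, weighted, median) ensemble predictions."""
    labels = kmeans(preds, k)
    chosen = []
    for j in range(k):
        idx = np.flatnonzero(labels == j)
        if idx.size == 0:
            continue
        errs = [np.mean((preds[i] - y_true) ** 2) for i in idx]
        chosen.append(preds[idx[int(np.argmin(errs))]])
    chosen = np.array(chosen)
    # hypothetical weighting: inverse validation MSE, normalized to sum to 1
    w = 1.0 / np.array([np.mean((c - y_true) ** 2) for c in chosen])
    w /= w.sum()
    return chosen.mean(axis=0), w @ chosen, np.median(chosen, axis=0)

rng = np.random.default_rng(1)
y = np.sin(np.linspace(0, 6, 50))                       # "measured" consumption
preds = np.array([y + 0.1 * m + 0.05 * rng.standard_normal(50) for m in range(6)])
simple, weighted, median = select_and_average(preds, y)
```

Selecting one member per cluster keeps the ensemble diverse while discarding redundant, poorly performing networks within each cluster.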
Gutiérrez, Jose Manuel; Maraun, Douglas; Widmann, Martin; Huth, Radan; Hertig, Elke; Benestad, Rasmus; Roessler, Ole; Wibig, Joanna; Wilcke, Renate; Kotlarski, Sven
2016-04-01
VALUE is an open European network to validate and compare downscaling methods for climate change research (http://www.value-cost.eu). A key deliverable of VALUE is the development of a systematic validation framework to enable the assessment and comparison of both dynamical and statistical downscaling methods. This framework is based on a user-focused validation tree, guiding the selection of relevant validation indices and performance measures for different aspects of the validation (marginal, temporal, spatial, multi-variable). Moreover, several experiments have been designed to isolate specific points in the downscaling procedure where problems may occur (assessment of intrinsic performance, effect of errors inherited from the global models, effect of non-stationarity, etc.). The list of downscaling experiments includes 1) cross-validation with perfect predictors, 2) GCM predictors -aligned with EURO-CORDEX experiment- and 3) pseudo reality predictors (see Maraun et al. 2015, Earth's Future, 3, doi:10.1002/2014EF000259, for more details). The results of these experiments are gathered, validated and publicly distributed through the VALUE validation portal, allowing for a comprehensive community-open downscaling intercomparison study. In this contribution we describe the overall results from Experiment 1), consisting of a European wide 5-fold cross-validation (with consecutive 6-year periods from 1979 to 2008) using predictors from ERA-Interim to downscale precipitation and temperatures (minimum and maximum) over a set of 86 ECA&D stations representative of the main geographical and climatic regions in Europe. As a result of the open call for contribution to this experiment (closed in Dec. 2015), over 40 methods representative of the main approaches (MOS and Perfect Prognosis, PP) and techniques (linear scaling, quantile mapping, analogs, weather typing, linear and generalized regression, weather generators, etc.) were submitted, including information both data
Robust Ensemble Filtering and Its Relation to Covariance Inflation in the Ensemble Kalman Filter
Luo, Xiaodong
2011-12-01
A robust ensemble filtering scheme based on the H∞ filtering theory is proposed. The optimal H∞ filter is derived by minimizing the supremum (or maximum) of a predefined cost function, a criterion different from the minimum variance used in the Kalman filter. By design, the H∞ filter is more robust than the Kalman filter, in the sense that the estimation error in the H∞ filter in general has a finite growth rate with respect to the uncertainties in assimilation, except for a special case that corresponds to the Kalman filter. The original form of the H∞ filter contains global constraints in time, which may be inconvenient for sequential data assimilation problems. Therefore a variant is introduced that solves some time-local constraints instead, and hence it is called the time-local H∞ filter (TLHF). By analogy to the ensemble Kalman filter (EnKF), the concept of ensemble time-local H∞ filter (EnTLHF) is also proposed. The general form of the EnTLHF is outlined, and some of its special cases are discussed. In particular, it is shown that an EnKF with certain covariance inflation is essentially an EnTLHF. In this sense, the EnTLHF provides a general framework for conducting covariance inflation in the EnKF-based methods. Some numerical examples are used to assess the relative robustness of the TLHF–EnTLHF in comparison with the corresponding KF–EnKF method.
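Since the abstract above relates the EnTLHF to covariance inflation in the EnKF, a minimal sketch of an EnKF analysis step with multiplicative inflation is given below. All dimensions and numbers are synthetic; this is the stochastic (perturbed-observation) EnKF variant, not the paper's H∞ filter itself.

```python
import numpy as np

def enkf_update(ens, y_obs, H, R, infl=1.05):
    """Stochastic EnKF analysis step with multiplicative covariance inflation.
    ens: (n_members, n_state); H: (n_obs, n_state); R: (n_obs, n_obs)."""
    mean = ens.mean(axis=0)
    ens = mean + infl * (ens - mean)          # inflate forecast perturbations
    X = (ens - ens.mean(axis=0)).T            # (n_state, n_members)
    P = X @ X.T / (len(ens) - 1)              # sample forecast covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
    rng = np.random.default_rng(0)
    pert_obs = y_obs + rng.multivariate_normal(np.zeros(len(y_obs)), R, len(ens))
    return ens + (pert_obs - ens @ H.T) @ K.T

rng = np.random.default_rng(1)
ens_f = rng.standard_normal((20, 3))          # forecast ensemble, mean near 0
H = np.array([[1.0, 0.0, 0.0]])               # observe the first state component
R = np.array([[0.1]])
y = np.array([5.0])
ens_a = enkf_update(ens_f, y, H, R)
```

Inflating the forecast perturbations before the update compensates for sampling error and model error that would otherwise make the ensemble overconfident, which is the behaviour the EnTLHF framework generalizes.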
Institute of Scientific and Technical Information of China (English)
李平红; 陶晓玲; 王勇
2014-01-01
To address the limited generalization ability of traffic classifiers produced by multi-classifier ensemble methods, a selective-ensemble network traffic classification framework is proposed to meet the demand for efficient classifiers in traffic classification. Based on this framework, a Multiple Classifiers Selective Ensemble network traffic classification method (MCSE) is proposed to solve the problem of selecting among multiple classifiers. The method first uses semi-supervised learning to improve the accuracy of the base classifiers, and then improves the disagreement-measure strategy for quantifying classifier diversity, reducing the complexity of applying multi-classifier ensembles to network traffic classification and effectively cutting the computational cost of selecting the optimal classifiers. Experiments show that, compared with the Bagging and GASEN algorithms, MCSE exploits the complementarity among base classifiers more fully and achieves more efficient traffic classification performance.
Reservoir History Matching Using Ensemble Kalman Filters with Anamorphosis Transforms
Aman, Beshir M.
2012-12-01
This work aims to enhance the Ensemble Kalman Filter performance by transforming the non-Gaussian state variables into Gaussian variables to be a step closer to optimality. This is done by using univariate and multivariate Box-Cox transformations. Some history matching methods, such as the Kalman filter, the particle filter and the ensemble Kalman filter, are reviewed and applied to a test case in the reservoir application. The key idea is to apply the transformation before the update step and then transform back after applying the Kalman correction. In general, the results of the multivariate method were promising, despite the fact that it over-estimated some variables.
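The transform-update-back-transform idea can be sketched for a single positive-valued state variable. This is a deliberately simplified univariate illustration: the Box-Cox parameter, the observation variance being specified directly in transformed space, and all data are assumptions, not the paper's setup.

```python
import numpy as np

def boxcox(x, lmbda):
    """Forward Box-Cox transform (lmbda = 0 reduces to log)."""
    return np.log(x) if lmbda == 0 else (x ** lmbda - 1.0) / lmbda

def inv_boxcox(z, lmbda):
    """Inverse Box-Cox transform."""
    return np.exp(z) if lmbda == 0 else (lmbda * z + 1.0) ** (1.0 / lmbda)

def transformed_update(ens, y_obs, obs_var, lmbda=0.3):
    """Univariate ensemble update in Box-Cox space for a positive state."""
    z = boxcox(ens, lmbda)                 # 1. transform toward Gaussianity
    zv = z.var(ddof=1)
    zy = boxcox(y_obs, lmbda)
    k = zv / (zv + obs_var)                # 2. Kalman gain in transformed space
    z_an = z + k * (zy - z)                # 3. pull members toward the obs
    return inv_boxcox(z_an, lmbda)         # 4. transform back

rng = np.random.default_rng(2)
prior = rng.lognormal(mean=1.0, sigma=0.5, size=30)   # skewed, positive ensemble
analysis = transformed_update(prior, y_obs=4.0, obs_var=0.05)
```

Because the transform is monotone, every analysis member stays between its prior value and the observation, and positivity is preserved automatically, which a raw Gaussian update cannot guarantee.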
An adaptive additive inflation scheme for Ensemble Kalman Filters
Sommer, Matthias; Janjic, Tijana
2016-04-01
Data assimilation for atmospheric dynamics requires an accurate estimate for the uncertainty of the forecast in order to obtain an optimal combination with available observations. This uncertainty has two components: firstly, the uncertainty which originates in the initial condition of that forecast itself, and secondly, the error of the numerical model used. While the former can be approximated quite successfully with an ensemble of forecasts (an additional sampling error will occur), little is known about the latter. For ensemble data assimilation, ad-hoc methods to address model error include multiplicative and additive inflation schemes, possibly also flow-dependent. The additive schemes rely on samples for the model error, e.g. from short-term forecast tendencies or differences of forecasts with varying resolutions. However, since these methods work in ensemble space (i.e. act directly on the ensemble perturbations), the sampling error is fixed and can be expected to affect the skill substantially. In this contribution we show how inflation can be generalized to take into account more degrees of freedom and what improvements for future operational ensemble data assimilation can be expected from this, also in comparison with other inflation schemes.
Ensemble Forecasting of Major Solar Flares -- First Results
Pulkkinen, A. A.; Guerra, J. A.; Uritsky, V. M.
2015-12-01
We present the results from the first ensemble prediction model for major solar flares (M and X classes). Using the probabilistic forecasts from three models hosted at the Community Coordinated Modeling Center (NASA-GSFC) and the NOAA forecasts, we developed an ensemble forecast by linearly combining the flaring probabilities from all four methods. Performance-based combination weights were calculated using a Monte-Carlo-type algorithm that applies a decision threshold $P_{th}$ to the combined probabilities and maximizes the Heidke Skill Score (HSS). Using the data for 13 recent solar active regions between the years 2012-2014, we found that linear combination methods can improve the overall probabilistic prediction and improve the categorical prediction for certain values of the decision threshold. Combination weights vary with the applied threshold, and none of the tested individual forecasting models seems to provide more accurate predictions than the others for all values of $P_{th}$. According to the maximum values of HSS, performance-based weights calculated by averaging over the sample performed similarly to an equally weighted model. The values of $P_{th}$ for which the ensemble forecast performs best are 25% for M-class flares and 15% for X-class flares. When the human-adjusted probabilities from NOAA are excluded from the ensemble, the ensemble performance, in terms of the Heidke score, is reduced.
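The linear-combination-plus-threshold procedure can be sketched as follows. The random Dirichlet search over convex weights is a generic stand-in for the paper's Monte-Carlo weighting algorithm, and the three "models" here are synthetic probability series of varying skill.

```python
import numpy as np

def heidke_skill_score(forecast_prob, events, threshold):
    """HSS of the categorical forecast obtained by thresholding probabilities."""
    f = forecast_prob >= threshold
    a = np.sum(f & events)        # hits
    b = np.sum(f & ~events)       # false alarms
    c = np.sum(~f & events)       # misses
    d = np.sum(~f & ~events)      # correct negatives
    den = (a + c) * (c + d) + (a + b) * (b + d)
    return 2.0 * (a * d - b * c) / den if den else 0.0

def best_weights(prob_models, events, threshold, n_draws=2000, seed=0):
    """Monte-Carlo search for convex combination weights maximizing HSS."""
    rng = np.random.default_rng(seed)
    best, best_w = -np.inf, None
    for _ in range(n_draws):
        w = rng.dirichlet(np.ones(prob_models.shape[0]))   # random convex weights
        hss = heidke_skill_score(w @ prob_models, events, threshold)
        if hss > best:
            best, best_w = hss, w
    return best_w, best

rng = np.random.default_rng(3)
n = 200
events = rng.random(n) < 0.3                      # synthetic flare occurrences
p1 = np.clip(events * 0.6 + rng.random(n) * 0.4, 0, 1)   # skilled model
p2 = np.clip(events * 0.3 + rng.random(n) * 0.5, 0, 1)   # weaker model
p3 = rng.random(n)                                       # no-skill model
models = np.vstack([p1, p2, p3])
w, hss = best_weights(models, events, threshold=0.25)
```

As in the abstract, the optimal weights depend on the chosen threshold, so the search has to be repeated for each candidate $P_{th}$.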
Völker, Jens; Gindikin, Vera; Klump, Horst H.; Plum, G. Eric; Breslauer, Kenneth J.
2012-01-01
DNA repeat domains can form ensembles of canonical and noncanonical states, including stable and metastable DNA secondary structures. Such sequence-induced structural diversity creates complex conformational landscapes for DNA processing pathways, including those triplet expansion events that accompany replication, recombination, and/or repair. Here we demonstrate further levels of conformational complexity within repeat domains. Specifically, we show that bulge loop structures within an exte...
Probabilistic Quantitative Precipitation Forecasting Using Ensemble Model Output Statistics
Scheuerer, Michael
2013-01-01
Statistical post-processing of dynamical forecast ensembles is an essential component of weather forecasting. In this article, we present a post-processing method that generates full predictive probability distributions for precipitation accumulations based on ensemble model output statistics (EMOS). We model precipitation amounts by a generalized extreme value distribution that is left-censored at zero. This distribution permits modelling precipitation on the original scale without prior transformation of the data. A closed form expression for its continuous rank probability score can be derived and permits computationally efficient model fitting. We discuss an extension of our approach that incorporates further statistics characterizing the spatial variability of precipitation amounts in the vicinity of the location of interest. The proposed EMOS method is applied to daily 18-h forecasts of 6-h accumulated precipitation over Germany in 2011 using the COSMO-DE ensemble prediction system operated by the Germa...
The Application of Canonical Correlation to Two-Dimensional Contingency Tables
Alberto F. Restori; Gary S. Katz; Howard B. Lee
2010-01-01
This paper re-introduces and demonstrates the use of Mickey's (1970) canonical correlation method in analyzing large two-dimensional contingency tables. This method of analysis supplements the traditional analysis using the Pearson chi-square. Examples and a MATLAB source listing are provided.
The Application of Canonical Correlation to Two-Dimensional Contingency Tables
Directory of Open Access Journals (Sweden)
Alberto F. Restori
2010-03-01
Full Text Available This paper re-introduces and demonstrates the use of Mickey's (1970) canonical correlation method in analyzing large two-dimensional contingency tables. This method of analysis supplements the traditional analysis using the Pearson chi-square. Examples and a MATLAB source listing are provided.
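One standard way to obtain canonical correlations between the row and column categories of a two-way table is via the singular values of the standardized residual matrix (the correspondence-analysis formulation). The sketch below is a generic illustration of that idea, not necessarily Mickey's exact algorithm, on a hypothetical 2x2 table.

```python
import numpy as np

def canonical_correlations(table):
    """Canonical correlations between row and column categories of a
    two-way contingency table: singular values of the standardized
    residual matrix D_r^{-1/2} (P - r c^T) D_c^{-1/2}."""
    P = table / table.sum()            # cell proportions
    r = P.sum(axis=1)                  # row marginals
    c = P.sum(axis=0)                  # column marginals
    S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))
    return np.linalg.svd(S, compute_uv=False)

counts = np.array([[20.0, 5.0], [5.0, 20.0]])
rho = canonical_correlations(counts)
# for a 2x2 table the first canonical correlation equals the phi coefficient
```

The sum of the squared canonical correlations equals the Pearson chi-square divided by the total count, which is exactly why this decomposition supplements the chi-square analysis: it shows *where* the association lies, not just how large it is.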
Study on ETKF-Based Initial Perturbation Scheme for GRAPES Global Ensemble Prediction
Institute of Scientific and Technical Information of China (English)
MA Xulin; XUE Jishan; LU Weisong
2009-01-01
Initial perturbation scheme is one of the important problems for ensemble prediction. In this paper, an ensemble initial perturbation scheme for Global/Regional Assimilation and PrEdiction System (GRAPES) global ensemble prediction is developed in terms of the ensemble transform Kalman filter (ETKF) method. A new GRAPES global ensemble prediction system (GEPS) is also constructed. The spherical simplex 14-member ensemble prediction experiments, using the simulated observation network, the error characteristics of simulated observations, and innovation-based inflation, are carried out for about two months. The structure characters and perturbation amplitudes of the ETKF initial perturbations and the perturbation growth characters are analyzed, and their qualities and abilities for the ensemble initial perturbations are given. The preliminary experimental results indicate that the ETKF-based GRAPES ensemble initial perturbations could identify main normal structures of analysis error variance and reflect the perturbation amplitudes. The initial perturbations and the spread are reasonable. The initial perturbation variance, which is approximately equal to the forecast error variance, is found to respond to changes in the observational spatial variations with simulated observational network density. The perturbations generated through the simplex method are also shown to exhibit a very high degree of consistency between initial analysis and short-range forecast perturbations. The appropriate growth and spread of ensemble perturbations can be maintained up to 96-h lead time. The statistical results for 52-day ensemble forecasts show that the forecast scores of the ensemble average for the Northern Hemisphere are higher than those of the control forecast. Provided that more ensemble members, a real-time observational network, and a more appropriate inflation factor are used, the ETKF-based initial scheme should show better results.
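The core of an ETKF perturbation scheme is the symmetric transform applied to the forecast perturbations. A minimal sketch of that transform, on synthetic data, is given below (this is the standard symmetric square-root ETKF; details such as inflation and the spherical simplex construction are omitted).

```python
import numpy as np

def etkf_perturbations(Xf, H, R_inv):
    """ETKF analysis perturbations: Xa = Xf T with
    T = C (Gamma + I)^{-1/2} C^T, where (Gamma, C) is the
    eigendecomposition of Yf^T R^{-1} Yf / (m - 1)."""
    m = Xf.shape[1]
    Yf = H @ Xf                              # forecast perturbations in obs space
    S = Yf.T @ R_inv @ Yf / (m - 1)          # m x m ensemble-space matrix
    gamma, C = np.linalg.eigh(S)
    T = C @ np.diag(1.0 / np.sqrt(gamma + 1.0)) @ C.T
    return Xf @ T

rng = np.random.default_rng(4)
m, n_state, n_obs = 10, 8, 4
ens = rng.standard_normal((n_state, m))
Xf = ens - ens.mean(axis=1, keepdims=True)   # centre the ensemble
H = rng.standard_normal((n_obs, n_state))    # synthetic observation operator
R_inv = np.eye(n_obs) / 0.5                  # inverse observation-error covariance
Xa = etkf_perturbations(Xf, H, R_inv)
```

Because the perturbations sum to zero, the vector of ones is an eigenvector of S with eigenvalue zero, so the symmetric transform leaves the ensemble mean untouched while shrinking the spread in observed directions — the property that lets the initial perturbation variance track the analysis error variance.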
A Comparison of ETKF and Downscaling in a Regional Ensemble Prediction System
Directory of Open Access Journals (Sweden)
Hanbin Zhang
2015-03-01
Full Text Available Based on the operational regional ensemble prediction system (REPS) of the China Meteorological Administration (CMA), this paper carried out a comparison of two initial condition perturbation methods: an ensemble transform Kalman filter (ETKF) and a dynamical downscaling of global ensemble perturbations. One-month consecutive tests are implemented to evaluate the performance of both methods in the operational REPS environment. The perturbation characteristics are analyzed and ensemble forecast verifications are conducted; furthermore, a TC case is investigated. The main conclusions are as follows: the ETKF perturbations contain more power at small scales while the ones derived from downscaling contain more power at large scales, and the relative difference between the two types of perturbations across scales becomes smaller with forecast lead time. The growth of downscaling perturbations is more remarkable, and the downscaling perturbations have a larger magnitude than ETKF perturbations at all forecast lead times. However, the ETKF perturbation variance can represent the forecast error variance better than downscaling. Ensemble forecast verification shows slightly higher skill of the downscaling ensemble over the ETKF ensemble. A TC case study indicates that the overall performance of the two systems is quite similar, despite the slightly smaller error of the DOWN ensemble than the ETKF ensemble at long-range forecast lead times.
Symanzik flow on HISQ ensembles
Bazavov, A; Brown, N; DeTar, C; Foley, J; Gottlieb, Steven; Heller, U M; Hetrick, J E; Laiho, J; Levkova, L; Oktay, M; Sugar, R L; Toussaint, D; Van de Water, R S; Zhou, R
2013-01-01
We report on a scale determination with gradient-flow techniques on the $N_f = 2 + 1 + 1$ HISQ ensembles generated by the MILC collaboration. The lattice scale $w_0/a$, originally proposed by the BMW collaboration, is computed using Symanzik flow at four lattice spacings ranging from 0.15 to 0.06 fm. With a Taylor series ansatz, the results are simultaneously extrapolated to the continuum and interpolated to physical quark masses. We give a preliminary determination of the scale $w_0$ in physical units, along with associated systematic errors, and compare with results from other groups. We also present a first estimate of autocorrelation lengths as a function of flowtime for these ensembles.
Canonical Transformations can Dramatically Simplify Supersymmetry
Dixon, John
2016-01-01
A useful way to keep track of the SUSY invariance of a theory is by formulating it with a BRST Poisson Bracket. It turns out that there is a crucial subtlety that is hidden in this formulation. When the theory contains a Chiral Multiplet, the relevant BRST Poisson Bracket has a very important Canonical Transformation that leaves it invariant. This Canonical Transformation takes all or part of the Scalar Field $A$ and replaces it with a Zinn Source $J_A$, and also takes the related Zinn Source $\\Gamma_A$ and replaces it with an `Antighost' Field $\\eta_A$. Naively, this looks like it is just a change of notation. But in fact the interpretation means that one has moved some of the conserved Noether SUSY current from the Field Action, and placed it partly in the Zinn Sources Action, and so the SUSY current in the Field part of the Action is no longer conserved, because the Zinn Sources do not satisfy any equations of motion. They are not quantized, because they are Sources. So it needs to be recognized that SUSY ...
New constraints for canonical general relativity
Reisenberger, M
1995-01-01
Ashtekar's canonical theory of classical complex Euclidean GR (no Lorentzian reality conditions) is found to be invariant under the full algebra of infinitesimal 4-diffeomorphisms, but non-invariant under some finite proper 4-diffeos when the densitized dreibein, \\tilE^a_i, is degenerate. The breakdown of 4-diffeo invariance appears to be due to the inability of the Ashtekar Hamiltonian to generate births and deaths of \\tilE flux loops (leaving open the possibility that a new `causality condition' forbidding the birth of flux loops might justify the non-invariance of the theory). A fully 4-diffeo invariant canonical theory in Ashtekar's variables, derived from Plebanski's action, is found to have constraints that are stronger than Ashtekar's for rank\\tilE < 2. The corresponding Hamiltonian generates births and deaths of \\tilE flux loops. It is argued that this implies a finite amplitude for births and deaths of loops in the physical states of quantum GR in the loop representation, thus modifying this (part...
Canon Fodder: Young Adult Literature as a Tool for Critiquing Canonicity
Hateley, Erica
2013-01-01
Young adult literature is a tool of socialisation and acculturation for young readers. This extends to endowing "reading" with particular significance in terms of what literature should be read and why. This paper considers some recent young adult fiction with an eye to its engagement with canonical literature and its representations of…
Simple Deep Random Model Ensemble
ZHANG, XIAO-LEI; Wu, Ji
2013-01-01
Representation learning and unsupervised learning are two central topics of machine learning and signal processing. Deep learning is one of the most effective unsupervised representation learning approaches. The main contributions of this paper to the topics are as follows. (i) We propose to view the representative deep learning approaches as special cases of the knowledge reuse framework of clustering ensemble. (ii) We propose to view sparse coding when used as a feature encoder as the consens...
Numerical weather prediction model tuning via ensemble prediction system
Jarvinen, H.; Laine, M.; Ollinaho, P.; Solonen, A.; Haario, H.
2011-12-01
This paper discusses a novel approach to tune the predictive skill of numerical weather prediction (NWP) models. NWP models contain tunable parameters which appear in parameterization schemes of sub-grid scale physical processes. Currently, numerical values of these parameters are specified manually. In a recent dual manuscript (QJRMS, revised) we developed a new concept and method for on-line estimation of the NWP model parameters. The EPPES ("Ensemble prediction and parameter estimation system") method requires only minimal changes to the existing operational ensemble prediction infrastructure and it seems very cost-effective because practically no new computations are introduced. The approach provides an algorithmic decision making tool for model parameter optimization in operational NWP. In EPPES, statistical inference about the NWP model tunable parameters is made by (i) generating each member of the ensemble of predictions using different model parameter values, drawn from a proposal distribution, and (ii) feeding back the relative merits of the parameter values to the proposal distribution, based on evaluation of a suitable likelihood function against verifying observations. In the presentation, the method is first illustrated in low-order numerical tests using a stochastic version of the Lorenz-95 model which effectively emulates the principal features of ensemble prediction systems. The EPPES method correctly detects the unknown and wrongly specified parameter values, and leads to an improved forecast skill. Second, results with an atmospheric general circulation model based ensemble prediction system show that the NWP model tuning capacity of EPPES scales up to realistic models and ensemble prediction systems. Finally, preliminary results of a tuning exercise with a global top-end NWP model are presented.
Ensemble Modeling of Cancer Metabolism
Directory of Open Access Journals (Sweden)
Tahmineh eKhazaei
2012-05-01
Full Text Available The metabolic behaviour of cancer cells is adapted to meet their proliferative needs, with notable changes such as enhanced lactate secretion and glucose uptake rates. In this work, we use the Ensemble Modeling (EM) framework to gain insight and predict potential drug targets for tumour cells. EM generates a set of models which span the space of kinetic parameters that are constrained by thermodynamics. Perturbation data based on known targets are used to screen the entire ensemble of models to obtain a subset that is increasingly predictive. EM allows for incorporation of regulatory information and captures the behaviour of enzymatic reactions at the molecular level by representing reactions in the elementary reaction form. In this study, a metabolic network consisting of 58 reactions is considered and accounts for glycolysis, the pentose phosphate pathway, lipid metabolism, amino acid metabolism, and includes allosteric regulation of key enzymes. Experimentally measured intracellular and extracellular metabolite concentrations are used for developing the ensemble of models along with information on established drug targets. The resulting models predicted transaldolase (TALA) and succinyl-CoA ligase (SUCOAS1m) to cause a significant reduction in growth rate when repressed, relative to currently known drug targets. Furthermore, the results suggest that the synergetic repression of transaldolase and glycine hydroxymethyltransferase (GHMT2r) will lead to a three-fold decrease in growth rate compared to the repression of single enzyme targets.
Kim, Kue Bum; Kwon, Hyun-Han; Han, Dawei
2016-05-01
This study presents a novel bias correction scheme for regional climate model (RCM) precipitation ensembles. A primary advantage of using model ensembles for climate change impact studies is that the uncertainties associated with the systematic error can be quantified through the ensemble spread. Currently, however, most of the conventional bias correction methods adjust all the ensemble members to one reference observation. As a result, the ensemble spread is degraded during bias correction. Since the observation is only one case of many possible realizations due to the climate natural variability, a successful bias correction scheme should preserve the ensemble spread within the bounds of its natural variability (i.e. sampling uncertainty). To demonstrate a new bias correction scheme conforming to RCM precipitation ensembles, an application to the Thorverton catchment in the south-west of England is presented. For the ensemble, 11 members from the Hadley Centre Regional Climate Model (HadRM3-PPE) data are used and monthly bias correction has been done for the baseline time period from 1961 to 1990. In the typical conventional method, monthly mean precipitation of each of the ensemble members is nearly identical to the observation, i.e. the ensemble spread is removed. In contrast, the proposed method corrects the bias while maintaining the ensemble spread within the natural variability of the observations.
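The contrast drawn above — correcting every member to the observation versus correcting the ensemble as a whole — can be illustrated with a toy multiplicative scaling. This is a simplified stand-in for the paper's monthly bias correction, with synthetic "observed" and "modelled" series; the actual HadRM3-PPE scheme is more elaborate.

```python
import numpy as np

def conventional_bc(members, obs):
    """Scale each member so its own mean matches the observed mean.
    After this, every member has the same mean: the spread collapses."""
    return np.array([m * obs.mean() / m.mean() for m in members])

def spread_preserving_bc(members, obs):
    """Apply one common scaling based on the ensemble grand mean,
    removing the overall bias while keeping inter-member spread."""
    return members * (obs.mean() / members.mean())

rng = np.random.default_rng(5)
obs = 50 + 10 * rng.standard_normal(120)                 # observed series
# 11 hypothetical RCM members with systematically different biases
members = np.array([40 + 3 * m + 8 * rng.standard_normal(120) for m in range(11)])
conv = conventional_bc(members, obs)
spread = spread_preserving_bc(members, obs)
```

With the conventional approach the member means become identical by construction, whereas the common-factor correction leaves the between-member differences (and hence the uncertainty estimate) intact.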
Maneta Lopez, M. P.; Wallender, W. W.; Schnabel, S. C.
2007-12-01
A common model used to simulate actual evapotranspiration in watershed scale hydrologic models is the Kristensen and Jensen model (e.g. Mike She or MODHMS models). While the Kristensen and Jensen model was originally developed for Nordic climates, it has been extensively used in other types of environments without specific calibration or testing of its performance in climates other than the one for which the model was developed. In semiarid watershed hydrology, evapotranspiration is the main output component of the mass balance and is critical for a correct description of the hydrologic processes during interstorm periods. In this work we calibrate and study the performance of the Kristensen and Jensen model in a semiarid rangeland environment in southwest Spain. For this, a full soil water atmosphere model was used to describe the water fluxes in a column of soil. The model describes variably saturated water flow in the soil using Richards' equation and the van Genuchten soil retention curves. The Kristensen and Jensen model is used to calculate direct evaporation and the water uptake by grass cover. Seven parameters are simultaneously calibrated. Two are for the van Genuchten retention curve and three for the Kristensen and Jensen model. Hydraulic conductivity is assumed to decay exponentially with depth. The decay exponent and the hydraulic conductivity at zero depth are the two remaining parameters to be calibrated. Given the large set of free parameters involved, the calibration set up involves two sources of information: soil moisture measurements at four different depths in the soil column and an auxiliary simple linear model relating maximum daily temperatures and average soil moisture; and two sources of prior information: field capacity measured on soil cores and the maximum dry weight biomass when the soil is fully covered by grass. A global search method (SCE-UA) is used to locate the global minimum in the allowed parameter error space and a local
Bhatt, Divesh
2009-01-01
We perform the first path sampling simulations of conformational transitions of semi-atomistic protein models. We generate an ensemble of pathways for conformational transitions between open and closed forms of adenylate kinase using the weighted ensemble path sampling method. Such an ensemble of pathways is critical in determining the important regions of configuration space sampled during a transition. Two different semi-atomistic models are used: one is a pure Go model, whereas the other includes a level of residue specificity via use of Miyazawa-Jernigan type interactions and hydrogen bonding. For both models, we find that the open form of adenylate kinase is more flexible and the transition from open to closed is significantly faster than the reverse transition. We find that the transition occurs via the AMP binding domain snapping shut at a fairly fast time scale. On the other hand, the flexible lid domain fluctuates significantly and the shutting of the AMP binding domain does not depend upon the positi...
[Huang Yizhou's study on Nei jing (Inner Canon)].
Hu, Benxiang; Huang, Youmei; Yu, Chengfen
2002-01-01
Being a great classical scholar of the late Qing dynasty, Huang Yizhou collated Nei jing (Inner Canon) by textual criticism. But most of his works were missing. By reviewing historical documents and literature, it has been found that his collated books include Huang di nei jing su wen jiao ben (Collated Edition of Huangdi's Inner Canon Plain Questions), Huang di nei jing su wen chong jiao zheng (Recollated Huangdi's Inner Canon Plain Questions), Nei jing zhen ci (Acupuncture in Inner Canon), Huang di nei jing jiu juan ji zhu (Variorum of Nine Volumes of Huangdi's Inner Canon), Huang di nei jing ming tang (Acupuncture Chart of Huangdi's Inner Canon), and Jiu chao tai su jiao ben (Old Extremely Plain Question Recension). Many of his disciples became famous scholars in the Republican period. PMID:12015056
A NOVEL ALGORITHM FOR VOICE CONVERSION USING CANONICAL CORRELATION ANALYSIS
Institute of Scientific and Technical Information of China (English)
Jian Zhihua; Yang Zhen
2008-01-01
A novel algorithm for voice conversion is proposed in this paper. The mapping function of spectral vectors of the source and target speakers is calculated by the Canonical Correlation Analysis (CCA) estimation based on Gaussian mixture models. Since the spectral envelope feature retains a majority of the second order statistical information contained in speech after Linear Prediction Coding (LPC) analysis, the CCA method is more suitable for spectral conversion than Minimum Mean Square Error (MMSE) because CCA explicitly considers the variance of each component of the spectral vectors during the conversion procedure. Both objective evaluations and subjective listening tests are conducted. The experimental results demonstrate that the proposed scheme can achieve better performance than the previous method which uses the MMSE estimation criterion.
Paul Weiss and the genesis of canonical quantization
Rickles, Dean; Blum, Alexander
2015-12-01
This paper describes the life and work of a figure who, we argue, was of primary importance during the early years of field quantisation and (albeit more indirectly) quantum gravity. A student of Dirac and Born, he was interned in Canada during the second world war as an enemy alien and after his release never seemed to regain a good foothold in physics, identifying thereafter as a mathematician. He developed a general method of quantizing (linear and non-linear) field theories based on the parameters labelling an arbitrary hypersurface. This method (the `parameter formalism' often attributed to Dirac), though later discarded, was employed (and viewed at the time as an extremely important tool) by the leading figures associated with canonical quantum gravity: Dirac, Pirani and Schild, Bergmann, DeWitt, and others. We argue that he deserves wider recognition for this and other innovations.
Four-dimensional Localization and the Iterative Ensemble Kalman Smoother
Bocquet, M.
2015-12-01
The iterative ensemble Kalman smoother (IEnKS) is a data assimilation method meant for efficiently tracking the state of nonlinear geophysical models. It combines an ensemble of model states to estimate the errors, similarly to the ensemble square root Kalman filter, with a 4D-variational analysis performed within the ensemble space. As such it belongs to the class of ensemble variational methods. The recently introduced 4DEnVar or the 4D-LETKF can be seen as particular cases of the scheme. The IEnKS was shown to outperform 4D-Var, the ensemble Kalman filter (EnKF) and smoother, with low-order models in all investigated dynamical regimes. Like any ensemble method, it could require the use of localization of the analysis when the state space dimension is high. However, localization for the IEnKS is not as straightforward as for the EnKF. Indeed, localization needs to be defined across time, and it needs to be as consistent as possible with the dynamical flow within the data assimilation variational window. We show that a Liouville equation governs the time evolution of the localization operator, which is linked to the evolution of the error correlations. It is argued that its time integration strongly depends on the forecast dynamics. Using either covariance localization or domain localization, we propose and test several localization strategies meant to address the issue: (i) a constant and uniform localization, (ii) the propagation through the window of a restricted set of dominant modes of the error covariance matrix, (iii) the approximate propagation of the localization operator using model covariant local domains. These schemes are illustrated on the one-dimensional Lorenz 40-variable model.
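Strategy (i) above, a constant and uniform covariance localization, can be sketched as a Schur (elementwise) product of the ensemble sample covariance with a distance-based taper. The Gaussian taper and length scale below are illustrative assumptions, not necessarily the paper's choices.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 40, 10                        # state size (Lorenz-96-like) and ensemble size
ensemble = rng.normal(size=(m, n))

anomalies = ensemble - ensemble.mean(axis=0)
P = anomalies.T @ anomalies / (m - 1)    # raw sample covariance (rank <= m-1, noisy)

# Distance on a periodic one-dimensional domain of n grid points.
i = np.arange(n)
d = np.abs(i[:, None] - i[None, :])
dist = np.minimum(d, n - d)
L = np.exp(-(dist / 4.0) ** 2)           # taper with a length scale of 4 grid points

P_loc = P * L                            # localized covariance: spurious long-range
print(P_loc.shape)                       # (40, 40); correlations are damped
```

The taper leaves the diagonal (variances) untouched while suppressing the spurious long-range correlations that a small ensemble inevitably produces.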
Integral Canonical Models for Automorphic Vector Bundles of Abelian Type
Lovering, Tom
2016-01-01
We define and construct integral canonical models for automorphic vector bundles over Shimura varieties of abelian type. More precisely, we first build on Kisin's work to construct integral canonical models over rings of integers of number fields with finitely many primes inverted for Shimura varieties of abelian type with hyperspecial level at all primes we do not invert, compatible with Kisin's construction. We then define a notion of an integral canonical model for the standard principal b...
Canonical symmetry properties of the constrained singular generalized mechanical system
Institute of Scientific and Technical Information of China (English)
李爱民; 江金环; 李子平
2003-01-01
Based on generalized Appell-Chetaev constraint conditions, and taking the inherent constraints of a singular Lagrangian into account, the generalized canonical equations for a general mechanical system with a singular higher-order Lagrangian and subsidiary constraints are formulated. The canonical symmetries in phase space for such a system are studied, and the Noether theorem and its inverse theorem in the generalized canonical formalism are established.
An Adaptive Approach to Mitigate Background Covariance Limitations in the Ensemble Kalman Filter
Song, Hajoon
2010-07-01
A new approach is proposed to address the background covariance limitations arising from undersampled ensembles and unaccounted model errors in the ensemble Kalman filter (EnKF). The method enhances the representativeness of the EnKF ensemble by augmenting it with new members chosen adaptively to add missing information that prevents the EnKF from fully fitting the data to the ensemble. The vectors to be added are obtained by back projecting the residuals of the observation misfits from the EnKF analysis step onto the state space. The back projection is done using an optimal interpolation (OI) scheme based on an estimated covariance of the subspace missing from the ensemble. In the experiments reported here, the OI uses a preselected stationary background covariance matrix, as in the hybrid EnKF–three-dimensional variational data assimilation (3DVAR) approach, but the resulting correction is included as a new ensemble member instead of being added to all existing ensemble members. The adaptive approach is tested with the Lorenz-96 model. The hybrid EnKF–3DVAR is used as a benchmark to evaluate the performance of the adaptive approach. Assimilation experiments suggest that the new adaptive scheme significantly improves the EnKF behavior when it suffers from small size ensembles and neglected model errors. It was further found to be competitive with the hybrid EnKF–3DVAR approach, depending on ensemble size and data coverage.
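The adaptive step the abstract describes can be sketched directly: the observation residual left after an EnKF analysis is back-projected onto the state space with an optimal-interpolation gain built from a preselected stationary background covariance, and the resulting correction joins the ensemble as a new member. All matrices below are illustrative stand-ins, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(2)
n, p, m = 40, 8, 10                          # state size, obs count, ensemble size
ensemble = rng.normal(size=(m, n))           # post-analysis EnKF ensemble (stand-in)
H = np.zeros((p, n))
H[np.arange(p), np.arange(0, n, n // p)] = 1.0   # observe every 5th state variable
y = rng.normal(size=p)                       # observations
R = 0.5 * np.eye(p)                          # observation-error covariance
B = np.eye(n)                                # preselected stationary background covariance

x_a = ensemble.mean(axis=0)
residual = y - H @ x_a                       # misfit the ensemble could not fit
K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R) # OI gain from the stationary B
new_member = x_a + K @ residual              # adaptively generated member

augmented = np.vstack([ensemble, new_member])
print(augmented.shape)                       # (11, 40): ensemble grew by one member
```

The contrast with hybrid EnKF-3DVAR is visible in the last line: the OI correction is appended as one new member rather than added to every existing member.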
CoNSEnsX: an ensemble view of protein structures and NMR-derived experimental data
Directory of Open Access Journals (Sweden)
Perczel András
2010-10-01
Background: In conjunction with the recognition of the functional role of internal dynamics of proteins at various timescales, there is an emerging use of dynamic structural ensembles instead of individual conformers. These ensembles are usually substantially more diverse than conventional NMR ensembles and eliminate the expectation that a single conformer should fulfill all NMR parameters originating from the 10^16-10^17 molecules in the sample tube. Thus, the accuracy of dynamic conformational ensembles should be evaluated differently to that of single conformers. Results: We constructed the web application CoNSEnsX (Consistency of NMR-derived Structural Ensembles with eXperimental data), allowing fast, simple and convenient assessment of the correspondence of the ensemble as a whole with the diverse independent NMR parameters available. We have chosen different ensembles of three proteins, human ubiquitin, a small protease inhibitor and a disordered subunit of cGMP phosphodiesterase 5/6, for detailed evaluation and demonstration of the capabilities of the CoNSEnsX approach. Conclusions: Our results present a new conceptual method for the evaluation of dynamic conformational ensembles resulting from NMR structure determination. The designed CoNSEnsX approach gives a complete evaluation of these ensembles and is freely available as a web service at http://consensx.chem.elte.hu.
Kadoura, Ahmad
2011-06-06
Lennard-Jones (L-J) and Buckingham exponential-6 (exp-6) potential models were used to produce isotherms for methane at temperatures below and above the critical temperature. A molecular simulation approach, specifically Monte Carlo simulation, was employed to create these isotherms in both the canonical and Gibbs ensembles. Experiments in the canonical ensemble with each model were conducted to estimate pressures at a range of temperatures above the critical temperature of methane. The results were compared to experimental data from the literature; both models showed good agreement with the experimental data. In parallel, experiments below the critical temperature were run in the Gibbs ensemble using the L-J model only. Upon comparing the results with experimental ones, a good fit with small deviations was obtained. The work was further developed by adding statistical studies in order to achieve a better understanding and interpretation of the quantities estimated by the simulation. Methane phase diagrams were successfully reproduced by an efficient molecular simulation technique with different potential models. This relatively simple demonstration shows how powerful molecular simulation methods can be; hence further applications to more complicated systems are considered. Prediction of the phase behavior of elemental sulfur in sour natural gases has been an interesting and challenging field in the oil and gas industry. Determination of the solubility conditions of elemental sulfur helps avoid the problems caused by its dissolution in gas production and transportation processes. For this purpose, further enhancement of the methods used is to be considered in order to successfully simulate the phase behavior of elemental sulfur in sour natural gas mixtures.
Directory of Open Access Journals (Sweden)
Jogendra Kushwah
2013-06-01
The classification of free-radical genes for cancer diseases is a challenging task in biomedical data engineering. Various classifiers have been used to improve gene selection for cancer classification, but a single classifier is difficult to validate, so an ensemble classifier is used for cancer gene classification: a neural network classifier combined with a random forest. A random forest is itself an ensemble technique, in which the class predictions at the leaf nodes of a number of trees are combined. In this paper we combine a neural network with a random forest ensemble classifier to classify cancer gene selections for the diagnostic analysis of cancer diseases. The proposed method differs from most ensemble-classifier methods, which follow an input-output paradigm of neural networks, in that the members of the ensemble are selected from a set of neural network classifiers, and the number of classifiers is determined during the growing procedure of the forest. Furthermore, the proposed method produces an ensemble that is not only accurate but also diverse, ensuring the two important properties that should characterize an ensemble classifier. For empirical evaluation of the proposed method we used UCI cancer disease data sets for classification. Our experimental results show better performance in comparison with random forest classification.
Properties of the Affine Invariant Ensemble Sampler in high dimensions
Huijser, David; Brewer, Brendon J
2015-01-01
We present theoretical and practical properties of the affine-invariant ensemble sampler Markov chain Monte Carlo method. In high dimensions the affine-invariant ensemble sampler shows unusual and undesirable properties. We demonstrate this with an $n$-dimensional correlated Gaussian toy problem with a known mean and covariance structure, and analyse the burn-in period. The burn-in period seems to be short; however, upon closer inspection we discover that the mean and the variance of the target distribution do not match the expected, known values. This problem becomes greater as $n$ increases. We therefore conclude that the affine-invariant ensemble sampler should be used with caution in high-dimensional problems. We also present some theoretical results explaining this behaviour.
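The sampler discussed above is built on the Goodman-Weare "stretch move", which can be sketched in a few lines. This is a minimal toy implementation targeting a standard Gaussian; the stretch parameter, walker count, and iteration count are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(3)
ndim, nwalk, a = 5, 20, 2.0                  # dimension, walkers, stretch parameter

def log_p(x):
    return -0.5 * np.sum(x * x)              # log-density of the toy Gaussian target

walkers = rng.normal(size=(nwalk, ndim))
for _ in range(200):
    for k in range(nwalk):
        j = rng.choice([i for i in range(nwalk) if i != k])   # complementary walker
        z = ((a - 1.0) * rng.random() + 1.0) ** 2 / a         # stretch factor ~ g(z)
        proposal = walkers[j] + z * (walkers[k] - walkers[j])
        # Affine-invariant acceptance rule: min(1, z^(d-1) p(y)/p(x)).
        log_ratio = (ndim - 1) * np.log(z) + log_p(proposal) - log_p(walkers[k])
        if np.log(rng.random()) < log_ratio:
            walkers[k] = proposal
print(walkers.shape)   # (20, 5)
```

Because proposals are built from differences of walker positions, the move is invariant under affine transformations of the target, which is the property whose high-dimensional behaviour the abstract scrutinizes.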
A Flexible Approach for the Statistical Visualization of Ensemble Data
Energy Technology Data Exchange (ETDEWEB)
Potter, K. [Univ. of Utah, Salt Lake City, UT (United States). SCI Institute; Wilson, A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bremer, P. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Williams, Dean N. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Pascucci, V. [Univ. of Utah, Salt Lake City, UT (United States). SCI Institute; Johnson, C. [Univ. of Utah, Salt Lake City, UT (United States). SCI Institute
2009-09-29
Scientists are increasingly moving towards ensemble data sets to explore relationships present in dynamic systems. Ensemble data sets combine spatio-temporal simulation results generated using multiple numerical models, sampled input conditions and perturbed parameters. While ensemble data sets are a powerful tool for mitigating uncertainty, they pose significant visualization and analysis challenges due to their complexity. We present a collection of overview and statistical displays linked through a high level of interactivity to provide a framework for gaining key scientific insight into the distribution of the simulation results as well as the uncertainty associated with the data. In contrast to methods that present large amounts of diverse information in a single display, we argue that combining multiple linked statistical displays yields a clearer presentation of the data and facilitates a greater level of visual data analysis. We demonstrate this approach using driving problems from climate modeling and meteorology and discuss generalizations to other fields.
Generation of Exotic Quantum States of a Cold Atomic Ensemble
DEFF Research Database (Denmark)
Christensen, Stefan Lund
Over the last decades quantum effects have become more and more controllable, leading to the implementation of various quantum information protocols. These protocols are all based on utilizing quantum correlations. In this thesis we consider how states of an atomic ensemble with such correlations can be created and characterized. First we consider a spin-squeezed state. This state is generated by performing quantum non-demolition measurements of the atomic population difference. We show a spectroscopically relevant noise reduction of -1.7 dB; the ensemble is in a many-body entangled state. We then turn to a nanofiber-based light-atom interface. Using a dual-frequency probing method we measure and prepare an ensemble with a sub-Poissonian atom number distribution. This is a first step towards the implementation of more exotic quantum states.
The Geometry of Tangent Bundles: Canonical Vector Fields
Directory of Open Access Journals (Sweden)
Tongzhu Li
2013-01-01
Full Text Available A canonical vector field on the tangent bundle is a vector field defined by an invariant coordinate construction. In this paper, a complete classification of canonical vector fields on tangent bundles, depending on vector fields defined on their bases, is obtained. It is shown that every canonical vector field is a linear combination with constant coefficients of three vector fields: the variational vector field (canonical lift, the Liouville vector field, and the vertical lift of a vector field on the base of the tangent bundle.
Molecular Dynamics Simulation of Glass Transition Behavior of Polyimide Ensemble
Institute of Scientific and Technical Information of China (English)
无
2001-01-01
The effect of chromophores on the glass transition temperature of a polyimide ensemble has been investigated by means of molecular dynamics (MD) simulation in conjunction with barrier analysis. The simulated Tg results were in good agreement with the experimental value. This study showed that MD simulation can conveniently estimate the effect of chromophores on the Tg of a polyimide ensemble, whereas an alternative estimation approach showed a surprisingly large deviation of Tg from experiment. At the same time, a polyimide structure with a higher barrier energy was designed and validated by MD simulation.
Optimally choosing small ensemble members to produce robust climate simulations
International Nuclear Information System (INIS)
This study examines the subset climate model ensemble size required to reproduce certain statistical characteristics from a full ensemble. The ensemble characteristics examined are the root mean square error, the ensemble mean and standard deviation. Subset ensembles are created using measures that consider the simulation performance alone or include a measure of simulation independence relative to other ensemble members. It is found that the independence measure is able to identify smaller subset ensembles that retain the desired full ensemble characteristics than either of the performance based measures. It is suggested that model independence be considered when choosing ensemble subsets or creating new ensembles. (letter)
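The subset-selection idea above can be sketched as a greedy loop: seed the subset with the best-performing member, then repeatedly add the member that balances individual performance (RMSE against a reference) with independence from the members already chosen. The correlation-based dependence measure and equal weighting below are illustrative assumptions, not the letter's exact measures.

```python
import numpy as np

rng = np.random.default_rng(5)
n_members, n_points = 12, 100
obs = rng.normal(size=n_points)                        # reference "observations"
scales = rng.uniform(0.5, 2.0, n_members)[:, None]     # members of varying skill
members = obs + rng.normal(scale=scales, size=(n_members, n_points))

rmse = np.sqrt(np.mean((members - obs) ** 2, axis=1))  # performance measure
chosen = [int(np.argmin(rmse))]                        # best performer seeds the subset
while len(chosen) < 4:
    best, best_score = None, np.inf
    for k in range(n_members):
        if k in chosen:
            continue
        # Dependence penalty: highest correlation with any already-chosen member.
        dep = max(abs(np.corrcoef(members[k], members[j])[0, 1]) for j in chosen)
        score = rmse[k] + dep                          # performance plus dependence
        if score < best_score:
            best, best_score = k, score
    chosen.append(best)
print(len(chosen))   # 4
```

A performance-only selector would drop the `dep` term; the letter's finding is that including it yields smaller subsets that still reproduce the full-ensemble statistics.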
Canonical-basis time-dependent Hartree-Fock-Bogoliubov theory and linear-response calculations
Ebata, Shuichiro; Inakura, Tsunenori; Yoshida, Kenichi; Hashimoto, Yukio; Yabana, Kazuhiro
2010-01-01
We present simple equations for a canonical-basis formulation of the time-dependent Hartree-Fock-Bogoliubov (TDHFB) theory. The equations are obtained from the TDHFB theory with an approximation that the pair potential is assumed to be diagonal in the canonical basis. The canonical-basis formulation significantly reduces the computational cost. We apply the method to linear-response calculations for even-even light nuclei and demonstrate its capability and accuracy by comparing our results with recent calculations of the quasi-particle random-phase approximation with Skyrme functionals. We show systematic studies of E1 strength distributions for Ne and Mg isotopes. The evolution of the low-lying pygmy strength seems to be determined by the interplay of several factors, including the neutron excess, separation energy, neutron shell effects, deformation, and pairing.
Transforming differential equations of multi-loop Feynman integrals into canonical form
Meyer, Christoph
2016-01-01
The method of differential equations has been proven to be a powerful tool for the computation of multi-loop Feynman integrals appearing in quantum field theory. It has been observed that in many instances a canonical basis can be chosen, which drastically simplifies the solution of the differential equation. In this paper, an algorithm is presented that computes the transformation to a canonical basis, starting from some basis that is, for instance, obtained by the usual integration-by-parts reduction techniques. The algorithm requires the existence of a rational transformation to a canonical basis, but is otherwise completely agnostic about the differential equation. In particular, it is applicable to problems involving multiple scales and allows for a rational dependence on the dimensional regulator. It is demonstrated that the algorithm is suitable for current multi-loop calculations by presenting its successful application to a number of non-trivial examples.
The difference between the Weil height and the canonical height on elliptic curves
Silverman, Joseph H.
1990-10-01
Estimates for the difference of the Weil height and the canonical height of points on elliptic curves are used for many purposes, both theoretical and computational. In this note we give an explicit estimate for this difference in terms of the j-invariant and discriminant of the elliptic curve. The method of proof, suggested by Serge Lang, is to use the decomposition of the canonical height into a sum of local heights. We illustrate one use for our estimate by computing generators for the Mordell-Weil group in three examples.
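The two heights compared above can be made concrete with a small numerical experiment: under one common normalization, the canonical height satisfies ĥ(P) = lim h(2^n P)/4^n, where h is the naive (Weil) height of the x-coordinate, so repeated doubling with exact rational arithmetic exposes both the limit and the bounded difference h - ĥ. The curve y² = x³ - 2 and point P = (3, 5) are illustrative choices, not examples from the paper.

```python
from fractions import Fraction
from math import log

a, b = 0, -2                                 # the curve y^2 = x^3 - 2

def double(P):
    """Point doubling on y^2 = x^3 + a*x + b via the tangent line."""
    x, y = P
    lam = (3 * x * x + a) / (2 * y)          # tangent slope
    x2 = lam * lam - 2 * x
    y2 = lam * (x - x2) - y
    return (x2, y2)

def weil_height(P):
    """Naive height: log of the max of |numerator| and |denominator| of x."""
    x = P[0]
    return log(max(abs(x.numerator), abs(x.denominator)))

P = (Fraction(3), Fraction(5))               # check: 5^2 = 3^3 - 2
Q, est = P, []
for n in range(1, 6):
    Q = double(Q)
    est.append(weil_height(Q) / 4 ** n)      # h(2^n P) / 4^n
print(round(est[-1], 4))                     # approaches the canonical height of P
```

The successive estimates settle quickly because the error h(2^n P)/4^n - ĥ(P) is bounded by (an explicit constant)/4^n, which is exactly the kind of bound the paper makes explicit in terms of the j-invariant and discriminant.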
Consistency of canonical formulation of Horava gravity
Energy Technology Data Exchange (ETDEWEB)
Soo, Chopin, E-mail: cpsoo@mail.ncku.edu.tw [Department of Physics, National Cheng Kung University, Tainan, Taiwan (China)
2011-09-22
Both the non-projectable and projectable version of Horava gravity face serious challenges. In the non-projectable version, the constraint algebra is seemingly inconsistent. The projectable version lacks a local Hamiltonian constraint, thus allowing for an extra graviton mode which can be problematic. A new formulation (based on arXiv:1007.1563) of Horava gravity which is naturally realized as a representation of the master constraint algebra (instead of the Dirac algebra) studied by loop quantum gravity researchers is presented. This formulation yields a consistent canonical theory with first class constraints; and captures the essence of Horava gravity in retaining only spatial diffeomorphisms as the physically relevant non-trivial gauge symmetry. At the same time the local Hamiltonian constraint is equivalently enforced by the master constraint.
Comments on the Canonical Measure in Cosmology
Kaya, Ali
2012-01-01
In the mini-superspace approximation to cosmology, the canonical measure can be used to compute probabilities when a cutoff is introduced in the phase space to regularize the divergent measure. However, the region initially constrained by a simple cutoff evolves non-trivially under the Hamiltonian flow. We determine the deformation of the regularized phase space along the orbits when a cutoff is introduced for the scale factor of the universe or for the Hubble parameter. In the former case, we find that the cutoff for the scale factor varies in the phase space and effectively decreases as one evolves backwards in time. In the latter case, we calculate the probability of slow-roll inflation in a chaotic model with a massive scalar, which turns out to be cutoff dependent but not exponentially suppressed. We also investigate the measure problem for non-abelian gauge fields giving rise to inflation.
The Deuteron as a Canonically Quantized Biskyrmion
Acus, A; Norvaisas, E; Riska, D O
2003-01-01
The ground state configurations of the solutions to Skyrme's topological soliton model for systems with baryon number larger than 1 are well approximated with rational map ansätze, without individual baryon coordinates. Here the canonical quantization of the baryon number 2 system, which represents the deuteron, is carried out in the rational map approximation. The solution, which is described by the 6 parameters of the chiral group SU(2)×SU(2), is stabilized by the quantum corrections. The matter density of the variational quantized solution has the required exponential large distance falloff and the quantum numbers of the deuteron. Similarly to the axially symmetric semiclassical solution, the radius and the quadrupole moment are, however, only about half as large as the corresponding empirical values. The quantized deuteron solution is constructed for representations of arbitrary dimension of the chiral group.
Linear canonical transforms theory and applications
Kutay, M; Ozaktas, Haldun; Sheridan, John
2016-01-01
This book provides a clear and accessible introduction to the essential mathematical foundations of linear canonical transforms from a signals and systems perspective. Substantial attention is devoted to how these transforms relate to optical systems and wave propagation. There is extensive coverage of sampling theory and fast algorithms for numerically approximating the family of transforms. Chapters on topics ranging from digital holography to speckle metrology provide a window on the wide range of applications. This volume will serve as a reference for researchers in the fields of image and signal processing, wave propagation, optical information processing and holography, optical system design and modeling, and quantum optics. It will be of use to graduate students in physics and engineering, as well as for scientists in other areas seeking to learn more about this important yet relatively unfamiliar class of integral transformations.
Ensemble data assimilation for the reconstruction of mantle circulation
Bocher, Marie; Coltice, Nicolas; Fournier, Alexandre; Tackley, Paul
2016-04-01
The surface tectonics of the Earth is the result of mantle dynamics. This link between internal and surface dynamics can be used to reconstruct the evolution of mantle circulation. This is classically done by imposing plate tectonics reconstructions as boundary conditions on numerical models of mantle convection. However, this technique does not account for uncertainties in plate tectonics reconstructions and does not allow any dynamical feedback of mantle dynamics on surface tectonics to develop. Mantle convection models are now able to produce surface tectonics comparable to that of the Earth to first order. We capitalize on these convection models to propose a more consistent integration of plate tectonics reconstructions into mantle convection models. For this purpose, we use the ensemble Kalman filter. This method has been developed and successfully applied to meteorology, oceanography and even more recently outer core dynamics. It consists in integrating sequentially a time series of data into a numerical model, starting from an ensemble of possible initial states. The initial ensemble of states is designed to represent an approximation of the probability density function (pdf) of the a priori state of the system. Whenever new observations are available, each member of the ensemble states is corrected considering both the approximated pdf of the state, and the pdf of the new data. Between two observation times, each ensemble member evolution is computed independently, using the convection model. This technique provides at each time an approximation of the pdf of the state of the system, in the form of a finite ensemble of states. We perform synthetic experiments to assess the efficiency of this method for the reconstruction of mantle circulation.
The Wilson loop in the Gaussian Unitary Ensemble
Gurau, Razvan
2016-01-01
Using the supersymmetric formalism we compute exactly at finite $N$ the expectation of the Wilson loop in the Gaussian Unitary Ensemble and derive an exact formula for the spectral density at finite $N$. We obtain the same result by a second method relying on enumerative combinatorics and show that it leads to a novel proof of the Harer-Zagier series formula.
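The finite-N quantity computed above, the GUE expectation of the Wilson loop ⟨(1/N) Tr e^{itH}⟩, can be cross-checked numerically by direct sampling. The ensemble normalization and parameters below are illustrative conventions and may differ from the paper's.

```python
import numpy as np

rng = np.random.default_rng(4)
N, t, samples = 8, 1.0, 1000
acc = 0.0
for _ in range(samples):
    A = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
    H = (A + A.conj().T) / 2                 # GUE-distributed Hermitian matrix
    eig = np.linalg.eigvalsh(H)              # real spectrum of H
    acc += np.mean(np.exp(1j * t * eig)).real  # (1/N) Tr e^{itH}, real part
estimate = acc / samples
print(abs(estimate) <= 1.0)                  # True: an average of unit phases
```

A Monte Carlo estimate like this converges only as 1/sqrt(samples), which is precisely why the exact finite-N formulas derived in the paper are valuable.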
Korean Percussion Ensemble ("Samulnori") in the General Music Classroom
Kang, Sangmi; Yoo, Hyesoo
2016-01-01
This article introduces "samulnori" (Korean percussion ensemble), its cultural background, and instructional methods as parts of a classroom approach to teaching upper-level general music. We introduce five of eight sections from "youngnam nong-ak" (a style of samulnori) as a repertoire for teaching Korean percussion music to…
The "cause of Jesus" (Sache Jesu) as the Canon behind the Canon
Directory of Open Access Journals (Sweden)
Andries G. van Aarde
2001-01-01
God, and not the Bible as such, is the church's primary authority. Jesus of Nazareth is the manifestation of God in history. In a post-Aufklärung environment one cannot escape the demand to think historically. To discern what could be seen as the "ground" of faith, one needs to distinguish the "proclaiming Jesus" from the "proclaimed Jesus", though these two aspects are dialectically intertwined. This dialectic can be described as the "Jesus kerygma" or the "cause of Jesus". The aim of this article is to argue that if Christians focus only on the church's kerygma they base their ultimate trust upon assertions of faith, rather than upon the cause of faith. The dictum that the cause of Jesus is the canon behind the canon is explained in terms of the distinction between "fides qua creditur" and "fides quae creditur", and postmodern historical Jesus research.
Wind Power Prediction using Ensembles
DEFF Research Database (Denmark)
Giebel, Gregor; Badger, Jake; Landberg, Lars;
2005-01-01
Forecasts were produced for an offshore wind farm and the whole Jutland/Funen area. The utilities used these forecasts for maintenance planning, fuel consumption estimates and over-the-weekend trading on the Leipzig power exchange. Other notable scientific results include the better accuracy of forecasts made up from a simple superposition of forecasts from two NWP providers (in our case, DMI and DWD), an investigation of the merits of a parameterisation of the turbulent kinetic energy within the delivered wind speed forecasts, and the finding that a "naïve" downscaling of each of the coarse ECMWF ensemble members with higher-resolution HIRLAM did...
Gibbs Ensembles of Nonintersecting Paths
Borodin, Alexei
2008-01-01
We consider a family of determinantal random point processes on the two-dimensional lattice and prove that members of our family can be interpreted as a kind of Gibbs ensembles of nonintersecting paths. Examples include probability measures on lozenge and domino tilings of the plane, some of which are non-translation-invariant. The correlation kernels of our processes can be viewed as extensions of the discrete sine kernel, and we show that the Gibbs property is a consequence of simple linear relations satisfied by these kernels. The processes depend on infinitely many parameters, which are closely related to parametrization of totally positive Toeplitz matrices.
Zhu, Xiaofeng; Suk, Heung-Il; Lee, Seong-Whan; Shen, Dinggang
2016-09-01
Fusing information from different imaging modalities is crucial for more accurate identification of the brain state because imaging data of different modalities can provide complementary perspectives on the complex nature of brain disorders. However, most existing fusion methods often extract features independently from each modality, and then simply concatenate them into a long vector for classification, without appropriate consideration of the correlation among modalities. In this paper, we propose a novel method to transform the original features from different modalities to a common space, where the transformed features become comparable and easy to find their relation, by canonical correlation analysis. We then perform the sparse multi-task learning for discriminative feature selection by using the canonical features as regressors and penalizing a loss function with a canonical regularizer. In our experiments on the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset, we use Magnetic Resonance Imaging (MRI) and Positron Emission Tomography (PET) images to jointly predict clinical scores of Alzheimer's Disease Assessment Scale-Cognitive subscale (ADAS-Cog) and Mini-Mental State Examination (MMSE) and also identify multi-class disease status for Alzheimer's disease diagnosis. The experimental results showed that the proposed canonical feature selection method helped enhance the performance of both clinical score prediction and disease status identification, outperforming the state-of-the-art methods. PMID:26254746
From Classical to Quantum: New Canonical Tools for the Dynamics of Gravity
Höhn, P.A.
2012-01-01
In a gravitational context, canonical methods offer an intuitive picture of the dynamics and simplify an identification of the degrees of freedom. Nevertheless, extracting dynamical information from background independent approaches to quantum gravity is a highly non-trivial challenge. In this thesis...
Measuring social interaction in music ensembles.
Volpe, Gualtiero; D'Ausilio, Alessandro; Badino, Leonardo; Camurri, Antonio; Fadiga, Luciano
2016-05-01
Music ensembles are an ideal test-bed for quantitative analysis of social interaction. Music is an inherently social activity, and music ensembles offer a broad variety of scenarios which are particularly suitable for investigation. Small ensembles, such as string quartets, are deemed a significant example of self-managed teams, where all musicians contribute equally to a task. In bigger ensembles, such as orchestras, the relationship between a leader (the conductor) and a group of followers (the musicians) clearly emerges. This paper presents an overview of recent research on social interaction in music ensembles with a particular focus on (i) studies from cognitive neuroscience; and (ii) studies adopting a computational approach for carrying out automatic quantitative analysis of ensemble music performances. PMID:27069054
A Fault Diagnosis Method Based on Ensemble Support Vector Machines
Institute of Scientific and Technical Information of China (English)
王金彪; 周伟; 王澍
2012-01-01
In order to enhance the generalization ability of the Support Vector Machine (SVM), the Bagging ensemble learning algorithm was studied. Experimental results for Bagging SVM on standard data sets showed that the Bagging method could not markedly enhance the generalization ability of the SVM. To find the reason for this, the stability of the SVM and of neural networks was studied. The results showed that the SVM is a relatively stable classifier in comparison with neural networks. A double disturbance algorithm was therefore proposed, in which a subspace method is used to perturb the data features and the Bagging method to perturb the data distribution, so as to increase the diversity among the base classifiers. Experiments with the double disturbance algorithm on the standard data sets and a fault diagnosis data set showed that this method clearly improves the recognition rate of the SVM.
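The baseline comparison the abstract starts from, a single SVM versus a Bagging ensemble of SVMs, can be sketched with scikit-learn. The synthetic dataset and hyperparameters are stand-ins, not the paper's fault-diagnosis data; as the abstract reports, bagging a stable classifier like the SVM often yields little improvement.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic stand-in for a standard benchmark data set.
X, y = make_classification(n_samples=400, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

single = SVC(kernel="rbf").fit(X_tr, y_tr)
bagged = BaggingClassifier(SVC(kernel="rbf"), n_estimators=15,
                           random_state=0).fit(X_tr, y_tr)

# Bagging perturbs only the data distribution (bootstrap resampling); the
# paper's "double disturbance" additionally perturbs the feature subspace.
print(round(single.score(X_te, y_te), 3), round(bagged.score(X_te, y_te), 3))
```

The feature-subspace half of the double disturbance corresponds to setting `max_features < 1.0` on `BaggingClassifier`, which draws a random feature subset for each base SVM.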
Optimization of scoliosis examination on Canon's DR system
DEFF Research Database (Denmark)
Precht, Helle
2007-01-01
are provided with the prospect of examining the following hypothesis: a Canon receptor with a CsI scintillator is able to give an acceptable image quality at a lower dose in a scoliosis recording than Canon's receptor with a GOS scintillator. Materials and method: The project relies on an empirical study...... in which theory is used as a background for the test setup and the later audit appraisal. Two tests are carried out at two different hospitals, each with its own receptor, and afterwards the results are compared. To guarantee comparable tests, the results are verified through status checks and statistical t...... the audit appraisal. The test is validated against a scientific article and the results are treated accordingly. Conclusion: I found a bias in the size of the phantom, since its torso is larger than that of an average scoliosis patient. For that reason the test values of the recordings cannot be transferred directly...
A Computer Program for a Canonical Problem in Underwater Shock
Directory of Open Access Journals (Sweden)
Thomas L. Geers
1994-01-01
Full Text Available Finite-element/boundary-element codes are widely used to analyze the response of marine structures to underwater explosions. An important step in verifying the correctness and accuracy of such codes is the comparison of code-generated results for canonical problems with corresponding analytical or semianalytical results. At the present time, such comparisons rely on hardcopy results presented in technical journals and reports. This article describes a computer program available from SAVIAC that produces user-selected numerical results for a step-wave-excited spherical shell submerged in, and optionally filled with, an acoustic fluid. The method of solution employed in the program is based on classical expansion of the field quantities in generalized Fourier series in the meridional coordinate. Convergence of the series is enhanced by judicious application of modified Cesàro summation and partial closed-form solution.
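The convergence-enhancing role of Cesàro summation can be illustrated with an ordinary Fourier sine series (a unit square wave), where the Cesàro (Fejér) means suppress the Gibbs overshoot of the raw partial sums. This is a generic demonstration of the technique, not the program's modified scheme:

```python
import math

def dirichlet_partial(x, n):
    """Partial sum S_n of the Fourier series of a unit square wave on (0, pi)."""
    return sum(4.0 / (math.pi * k) * math.sin(k * x) for k in range(1, n + 1, 2))

def cesaro_mean(x, n):
    """Fejer (first Cesaro) mean: triangular-weighted partial sum,
    algebraically equal to the average of S_0, ..., S_n."""
    return sum((1.0 - k / (n + 1.0)) * 4.0 / (math.pi * k) * math.sin(k * x)
               for k in range(1, n + 1, 2))
```

The raw partial sums overshoot the wave's value 1 by roughly 9% of the jump near the discontinuity (Gibbs phenomenon), while the Fejér means stay within [-1, 1] because the Fejér kernel is positive.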
Canonical Group Quantization, Rotation Generators and Quantum Indistinguishability
Benavides, C
2008-01-01
Using the method of canonical group quantization, we construct the angular momentum operators associated to configuration spaces with the topology of (i) a sphere and (ii) a projective plane. In the first case, the obtained angular momentum operators are the quantum version of Poincare's vector, i.e., the physically correct angular momentum operators for an electron coupled to the field of a magnetic monopole. In the second case, the obtained operators represent the angular momentum operators of a system of two indistinguishable spin zero quantum particles in three spatial dimensions. We explicitly show how our formalism relates to the one developed by Berry and Robbins. The relevance of the proposed formalism for an advance in our understanding of the spin-statistics connection in non-relativistic quantum mechanics is discussed.
Recovery of spectral data using weighted canonical correlation regression
Eslahi, Niloofar; Amirshahi, Seyed Hossein; Agahian, Farnaz
2009-05-01
The weighted canonical correlation regression technique is employed for reconstruction of reflectance spectra of surface colors from the related XYZ tristimulus values of samples. Flexible input data based on applying certain weights to reflectance and colorimetric values of Munsell color chips has been implemented for each particular sample which belongs to Munsell or GretagMacbeth Colorchecker DC color samples. In fact, the colorimetric and spectrophotometric data of Munsell chips are selected as fundamental bases and the color difference values between the target and samples in Munsell dataset are chosen as a criterion for determination of weighting factors. The performance of the suggested method is evaluated in spectral reflectance reconstruction. The results show considerable improvements in terms of root mean square error (RMS) and goodness-of-fit coefficient (GFC) between the actual and reconstructed reflectance curves as well as CIELAB color difference values under illuminants A and TL84 for CIE1964 standard observer.
Interpreting Tree Ensembles with inTrees
Deng, Houtao
2014-01-01
Tree ensembles such as random forests and boosted trees are accurate but difficult to understand, debug and deploy. In this work, we provide the inTrees (interpretable trees) framework that extracts, measures, prunes and selects rules from a tree ensemble, and calculates frequent variable interactions. A rule-based learner, referred to as the simplified tree ensemble learner (STEL), can also be formed and used for future prediction. The inTrees framework can be applied to both classification an...
A Gaussian mixture ensemble transform filter
Reich, Sebastian
2011-01-01
We generalize the popular ensemble Kalman filter to an ensemble transform filter where the prior distribution can take the form of a Gaussian mixture or a Gaussian kernel density estimator. The design of the filter is based on a continuous formulation of the Bayesian filter analysis step. We call the new filter algorithm the ensemble Gaussian mixture filter (EGMF). The EGMF is implemented for three simple test problems (Brownian dynamics in one dimension, Langevin dynamics in two dimensions, ...
Bred vectors with customizable scale: 'À la carte' ensemble perturbations
Homar Santaner, V.; Stensrud, D. J.
2009-09-01
Short-range forecasts of severe weather are one of the most challenging tasks faced by the atmospheric science community. Our persistent failure to generate accurate numerical forecasts of tornadoes, large hail, heavy precipitation or strong wind events is caused by two fundamental aspects of numerical forecast systems: the chaotic nature of the governing equations and the large uncertainties in both the atmospheric state and the models that govern its evolution. Currently, we cope with both sources of error by describing the state of the atmosphere in a probabilistic manner. In this framework, forecasting becomes predicting the probability density function (pdf) of future states, given the pdf of initial states that are compatible with available observations and previous forecasts. This probabilistic perspective is often created by generating ensembles of deterministic predictions that are aimed at sampling the most important sources of uncertainty in the forecasting system. The ensemble generation/sampling strategy is a crucial aspect of their performance and various methods have been proposed. Although global forecasting offices have been using ensembles of perturbed initial conditions for medium-range operational forecasts since 1994, no consensus exists regarding the optimum sampling strategy for high resolution short-range ensemble forecasts with predicting skill in the mesoscale. Bred vectors, however, have been hypothesized to better capture the growing modes in the highly nonlinear mesoscale dynamics of severe episodes than singular vectors or observation perturbations. Yet even this technique is not able to produce enough diversity in the ensembles to accurately and routinely predict extreme phenomena such as severe weather. Thus, we propose a new method to generate ensembles of initial conditions perturbations that is based on the breeding technique. Given a standard bred mode, a set of customized perturbations is derived with specified amplitudes and
Efficient Kernel-Based Ensemble Gaussian Mixture Filtering
Liu, Bo
2015-11-11
We consider the Bayesian filtering problem for data assimilation following the kernel-based ensemble Gaussian-mixture filtering (EnGMF) approach introduced by Anderson and Anderson (1999). In this approach, the posterior distribution of the system state is propagated with the model using the ensemble Monte Carlo method, providing a forecast ensemble that is then used to construct a prior Gaussian-mixture (GM) based on the kernel density estimator. This results in two update steps: a Kalman filter (KF)-like update of the ensemble members and a particle filter (PF)-like update of the weights, followed by a resampling step to start a new forecast cycle. After formulating EnGMF for any observational operator, we analyze the influence of the bandwidth parameter of the kernel function on the covariance of the posterior distribution. We then focus on two aspects: i) the efficient implementation of EnGMF with (relatively) small ensembles, where we propose a new deterministic resampling strategy preserving the first two moments of the posterior GM to limit the sampling error; and ii) the analysis of the effect of the bandwidth parameter on contributions of KF and PF updates and on the weights variance. Numerical results using the Lorenz-96 model are presented to assess the behavior of EnGMF with deterministic resampling, study its sensitivity to different parameters and settings, and evaluate its performance against ensemble KFs. The proposed EnGMF approach with deterministic resampling suggests improved estimates in all tested scenarios, and is shown to require less localization and to be less sensitive to the choice of filtering parameters.
Glycosylation site prediction using ensembles of Support Vector Machine classifiers
Directory of Open Access Journals (Sweden)
Silvescu Adrian
2007-11-01
Background: Glycosylation is one of the most complex post-translational modifications (PTMs) of proteins in eukaryotic cells. Glycosylation plays an important role in biological processes ranging from protein folding and subcellular localization, to ligand recognition and cell-cell interactions. Experimental identification of glycosylation sites is expensive and laborious. Hence, there is significant interest in the development of computational methods for reliable prediction of glycosylation sites from amino acid sequences. Results: We explore machine learning methods for training classifiers to predict the amino acid residues that are likely to be glycosylated using information derived from the target amino acid residue and its sequence neighbors. We compare the performance of Support Vector Machine classifiers and ensembles of Support Vector Machine classifiers trained on a dataset of experimentally determined N-linked, O-linked, and C-linked glycosylation sites extracted from O-GlycBase version 6.00, a database of 242 proteins from several different species. The results of our experiments show that the ensembles of Support Vector Machine classifiers outperform single Support Vector Machine classifiers on the problem of predicting glycosylation sites in terms of a range of standard measures for comparing the performance of classifiers. The resulting methods have been implemented in EnsembleGly, a web server for glycosylation site prediction. Conclusion: Ensembles of Support Vector Machine classifiers offer an accurate and reliable approach to automated identification of putative glycosylation sites in glycoprotein sequences.
Directory of Open Access Journals (Sweden)
J. Rasmussen
2015-02-01
Groundwater head and stream discharge are assimilated using the Ensemble Transform Kalman Filter in an integrated hydrological model with the aim of studying the relationship between the filter performance and the ensemble size. In an attempt to reduce the required number of ensemble members, an adaptive localization method is used. The performance of the adaptive localization method is compared to the more common local analysis localization. The relationship between filter performance, in terms of hydraulic head and discharge error, and the number of ensemble members is investigated for varying numbers and spatial distributions of groundwater head observations and with or without discharge assimilation and parameter estimation. The study shows that (1) more ensemble members are needed when fewer groundwater head observations are assimilated, and (2) assimilating discharge observations and estimating parameters requires a much larger ensemble size than assimilating groundwater head observations alone. However, the required ensemble size can be greatly reduced with the use of adaptive localization, which by far outperforms local analysis localization.
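For reference, the most common non-adaptive form of covariance localization is a fixed distance-based taper such as the Gaspari-Cohn fifth-order function (compact support out to twice the length scale c). The adaptive scheme studied above adjusts localization from the ensemble itself rather than using a fixed radius; the sketch below shows only the standard taper:

```python
def gaspari_cohn(d, c):
    """Gaspari-Cohn fifth-order piecewise-rational correlation taper.
    d: separation distance, c: length scale; the taper vanishes for d >= 2c."""
    r = abs(d) / c
    if r <= 1.0:
        return -0.25 * r**5 + 0.5 * r**4 + 0.625 * r**3 - (5.0 / 3.0) * r**2 + 1.0
    if r <= 2.0:
        return (r**5) / 12.0 - 0.5 * r**4 + 0.625 * r**3 \
               + (5.0 / 3.0) * r**2 - 5.0 * r + 4.0 - (2.0 / 3.0) / r
    return 0.0
```

In an EnKF, each entry of the sample covariance is multiplied (Schur product) by this taper evaluated at the distance between the two grid points, which suppresses spurious long-range correlations from small ensembles.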
Impacts of non-canonical El Niño patterns on Atlantic hurricane activity
Larson, S.; Lee, S.; Wang, C.; Chung, E.; Enfield, D. B.
2012-12-01
The impact of non-canonical El Niño patterns, typically characterized by warmer than normal sea surface temperatures (SSTs) in the central tropical Pacific, on Atlantic tropical cyclone (TC) activity is explored by using composites of key Atlantic TC indices and tropospheric vertical wind shear over the Atlantic main development region (MDR). The highlight of our major findings is that, while the canonical El Niño pattern has a strong suppressing influence on Atlantic TC activity, the non-canonical El Niño patterns considered in this study, namely central Pacific warming, El Niño Modoki, positive phase Trans-Niño, and positive phase Pacific meridional mode, all have an insubstantial impact on Atlantic TC activity. This result becomes more conclusive when the impact of MDR SST is removed from the Atlantic TC indices and MDR wind shear by using the method of linear regression. Further analysis suggests that the tropical Pacific SST anomalies associated with the non-canonical El Niño patterns are not strong enough to cause a substantial warming of the tropical troposphere in the Atlantic region, which is the key factor that increases the wind shear and atmospheric static stability over the MDR. During the recent decades, the non-canonical El Niños have been more frequent while the canonical El Niño has been less frequent. If such a trend continues in the future, it is expected that the suppressing effect of El Niño on Atlantic TC activity will diminish and thus the MDR SST will play a more important role in controlling Atlantic TC activity in the coming decades.
The Asian American Fakeness Canon, 1972-2002
Oishi, Eve
2007-01-01
The year 1972 can be seen to inaugurate not a tradition of Asian American New York theater, but the rich and multigenre collection of writing that the author has called "the Asian American fakeness canon." The fakeness canon refers to a collection of writings that take as one of their central points of reference the question of cultural and ethnic…
Stability of 2nd Hilbert points of canonical curves
Fedorchuk, Maksym
2011-01-01
We establish GIT semistability of the 2nd Hilbert point of every Gieseker-Petri general canonical curve by a simple geometric argument. As a consequence, we obtain an upper bound on slopes of general families of Gorenstein curves. We also explore the question of what replaces hyperelliptic curves in the GIT quotients of the Hilbert scheme of canonical curves.
Canonical connection on a class of Riemannian almost product manifolds
Mekerov, Dimitar
2009-01-01
The canonical connection on a Riemannian almost product manifold is an analogue of the Hermitian connection on an almost Hermitian manifold. In this paper we consider the canonical connection on a class of Riemannian almost product manifolds with nonintegrable almost product structure.
Critical Literature Pedagogy: Teaching Canonical Literature for Critical Literacy
Borsheim-Black, Carlin; Macaluso, Michael; Petrone, Robert
2014-01-01
This article introduces Critical Literature Pedagogy (CLP), a pedagogical framework for applying goals of critical literacy within the context of teaching canonical literature. Critical literacies encompass skills and dispositions to understand, question, and critique ideological messages of texts; because canonical literature is often…
Canonical Quantum Teleportation of Two-Particle Arbitrary State
Institute of Scientific and Technical Information of China (English)
HAO Xiang; ZHU Shi-Qun
2005-01-01
The canonical quantum teleportation of two-particle arbitrary state is realized by means of phase operator and number operator. The maximally entangled eigenstates between the difference of phase operators and the sum of number operators are considered as the quantum channels. In contrast to the standard quantum teleportation, the different unitary local operation of canonical teleportation can be simplified by a general expression.
Canonical representation for approximating solution of fuzzy polynomial equations
Directory of Open Access Journals (Sweden)
M. Salehnegad
2010-06-01
In this paper, the concept of canonical representation is proposed to find fuzzy roots of fuzzy polynomial equations. We transform fuzzy polynomial equations into a system of crisp polynomial equations; this transformation is performed by using a canonical representation based on the three parameters Value, Ambiguity and Fuzziness.
Grand canonical potential of a magnetized neutron gas
Diener, Jacobus P W
2015-01-01
We compute the effective action for stationary and spatially constant magnetic fields, when coupled anomalously to charge neutral fermions, by integrating out the fermions. From this the grand canonical partition function and potential of the fermions and fields are computed. This also takes care of magnetic field dependent vacuum corrections to the grand canonical potential. Possible applications to neutron stars are indicated.
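For a free Fermi gas the grand-canonical potential and mean particle number take the standard textbook forms, sketched here for an arbitrary single-particle spectrum; in the setting above, the magnetic field and its anomalous coupling would enter only through the energies eps_k (this is a generic illustration, not the paper's calculation):

```python
import math

def grand_potential(eps, mu, T):
    """Omega(mu, T) = -T * sum_k ln(1 + exp(-(eps_k - mu)/T)) for ideal fermions
    (units with k_B = 1)."""
    return -T * sum(math.log1p(math.exp(-(e - mu) / T)) for e in eps)

def mean_number(eps, mu, T):
    """<N> = sum_k 1 / (exp((eps_k - mu)/T) + 1): Fermi-Dirac occupations."""
    return sum(1.0 / (math.exp((e - mu) / T) + 1.0) for e in eps)
```

The thermodynamic identity <N> = -dOmega/dmu gives a quick consistency check of the two expressions.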
Hierarchical Bayes Ensemble Kalman Filtering
Tsyrulnikov, Michael
2015-01-01
Ensemble Kalman filtering (EnKF), when applied to high-dimensional systems, suffers from an inevitably small affordable ensemble size, which results in poor estimates of the background error covariance matrix ${\\bf B}$. The common remedy is a kind of regularization, usually an ad-hoc spatial covariance localization (tapering) combined with artificial covariance inflation. Instead of using an ad-hoc regularization, we adopt the idea by Myrseth and Omre (2010) and explicitly admit that the ${\\bf B}$ matrix is unknown and random and estimate it along with the state (${\\bf x}$) in an optimal hierarchical Bayes analysis scheme. We separate forecast errors into predictability errors (i.e. forecast errors due to uncertainties in the initial data) and model errors (forecast errors due to imperfections in the forecast model) and include the two respective components ${\\bf P}$ and ${\\bf Q}$ of the ${\\bf B}$ matrix into the extended control vector $({\\bf x},{\\bf P},{\\bf Q})$. Similarly, we break the traditional backgrou...
Canonical correlations between chemical and energetic characteristics of lignocellulosic wastes
Directory of Open Access Journals (Sweden)
Thiago de Paula Protásio
2012-09-01
Canonical correlation analysis is a statistical multivariate procedure that allows analyzing the linear correlation that may exist between two groups or sets of variables (X and Y). This paper aimed to provide a canonical correlation analysis between a group comprising lignin and total extractives contents and higher heating value (HHV) and a group of elemental components (carbon, hydrogen, nitrogen and sulfur) for lignocellulosic wastes. The following wastes were used: eucalyptus shavings; pine shavings; red cedar shavings; sugar cane bagasse; residual bamboo cellulose pulp; coffee husk and parchment; maize harvesting wastes; and rice husk. Only the first canonical function was significant, but it presented a low canonical R². High carbon, hydrogen and sulfur contents and low nitrogen contents seem to be related to high total extractives contents of the lignocellulosic wastes. The preliminary results found in this paper indicate that the canonical correlations were not efficient to explain the correlations between the chemical elemental components and lignin contents and higher heating values.
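The core computation behind a canonical correlation analysis, finding the maximal correlation between linear combinations of two variable blocks X and Y, can be sketched with numpy (a generic unweighted CCA on synthetic data, not the study's procedure):

```python
import numpy as np

def canonical_correlations(X, Y, eps=1e-9):
    """Canonical correlations between blocks X (n x p) and Y (n x q):
    singular values of the cross-covariance after whitening each block."""
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    n = X.shape[0]
    Sxx = Xc.T @ Xc / (n - 1) + eps * np.eye(X.shape[1])  # small ridge for stability
    Syy = Yc.T @ Yc / (n - 1) + eps * np.eye(Y.shape[1])
    Sxy = Xc.T @ Yc / (n - 1)
    Lx = np.linalg.cholesky(Sxx)   # Sxx = Lx Lx^T, so Lx^{-1} whitens X
    Ly = np.linalg.cholesky(Syy)
    M = np.linalg.solve(Lx, Sxy) @ np.linalg.inv(Ly.T)
    return np.linalg.svd(M, compute_uv=False)
```

With a shared latent factor planted in both blocks, the first canonical correlation should be close to 1 and the remaining ones near the noise level.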
Joys of Community Ensemble Playing: The Case of the Happy Roll Elastic Ensemble in Taiwan
Hsieh, Yuan-Mei; Kao, Kai-Chi
2012-01-01
The Happy Roll Elastic Ensemble (HREE) is a community music ensemble supported by Tainan Culture Centre in Taiwan. With enjoyment and friendship as its primary goals, it aims to facilitate the joys of ensemble playing and the spirit of social networking. This article highlights the key aspects of HREE's development in its first two years…
ARM Cloud Retrieval Ensemble Data Set (ACRED)
Energy Technology Data Exchange (ETDEWEB)
Zhao, C; Xie, S; Klein, SA; McCoy, R; Comstock, JM; Delanoë, J; Deng, M; Dunn, M; Hogan, RJ; Jensen, MP; Mace, GG; McFarlane, SA; O’Connor, EJ; Protat, A; Shupe, MD; Turner, D; Wang, Z
2011-09-12
This document describes a new Atmospheric Radiation Measurement (ARM) data set, the ARM Cloud Retrieval Ensemble Data Set (ACRED), which is created by assembling nine existing ground-based cloud retrievals of ARM measurements from different cloud retrieval algorithms. The current version of ACRED includes an hourly average of nine ground-based retrievals with vertical resolution of 45 m for 512 layers. The techniques used for the nine cloud retrievals are briefly described in this document. This document also outlines the ACRED data availability, variables, and the nine retrieval products. Technical details about the generation of ACRED, such as the methods used for time average and vertical re-grid, are also provided.
Nonlinear stability and ergodicity of ensemble based Kalman filters
Tong, Xin T.; Majda, Andrew J.; Kelly, David
2016-02-01
The ensemble Kalman filter (EnKF) and ensemble square root filter (ESRF) are data assimilation methods used to combine high dimensional, nonlinear dynamical models with observed data. Despite their widespread usage in climate science and oil reservoir simulation, very little is known about the long-time behavior of these methods and why they are effective when applied with modest ensemble sizes in large dimensional turbulent dynamical systems. By following the basic principles of energy dissipation and controllability of filters, this paper establishes a simple, systematic and rigorous framework for the nonlinear analysis of EnKF and ESRF with arbitrary ensemble size, focusing on the dynamical properties of boundedness and geometric ergodicity. The time uniform boundedness guarantees that the filter estimate will not diverge to machine infinity in finite time, which is a potential threat for EnKF and ESRF known as catastrophic filter divergence. Geometric ergodicity ensures in addition that the filter has a unique invariant measure and that initialization errors will dissipate exponentially in time. We establish these results by introducing a natural notion of observable energy dissipation. The time uniform bound is achieved through a simple Lyapunov function argument; this result applies to systems with complete observations and strong kinetic energy dissipation, but also to concrete examples with incomplete observations. With the Lyapunov function argument established, the geometric ergodicity is obtained by verifying the controllability of the filter processes; in particular, such analysis for ESRF relies on a careful multivariate perturbation analysis of the covariance eigen-structure.
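The object of this stability analysis, a single stochastic (perturbed-observation) EnKF analysis step, can be written in a few lines of numpy. This is an illustrative sketch only; operational filters add inflation and localization:

```python
import numpy as np

def enkf_analysis(E, H, y, R, rng):
    """One perturbed-observation EnKF analysis step.
    E: (n, N) state ensemble, H: (m, n) observation operator,
    y: (m,) observation, R: (m, m) observation-error covariance."""
    n, N = E.shape
    A = E - E.mean(axis=1, keepdims=True)             # ensemble anomalies
    P = A @ A.T / (N - 1)                             # sample forecast covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)      # Kalman gain
    # perturbed observations: one noisy copy of y per ensemble member
    Yp = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=N).T
    return E + K @ (Yp - H @ E)
```

On a toy two-dimensional state with one accurately observed component, the analysis contracts the ensemble spread of that component toward the observation.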
A benchmark for reaction coordinates in the transition path ensemble.
Li, Wenjin; Ma, Ao
2016-04-01
The molecular mechanism of a reaction is embedded in its transition path ensemble, the complete collection of reactive trajectories. Utilizing the information in the transition path ensemble alone, we developed a novel metric, which we termed the emergent potential energy, for distinguishing reaction coordinates from the bath modes. The emergent potential energy can be understood as the average energy cost for making a displacement of a coordinate in the transition path ensemble. Whereas displacing a bath mode incurs essentially no cost, moving the reaction coordinate costs significantly. Based on some general assumptions of the behaviors of reaction and bath coordinates in the transition path ensemble, we proved theoretically with statistical mechanics that the emergent potential energy could serve as a benchmark of reaction coordinates and demonstrated its effectiveness by applying it to a prototypical system of biomolecular dynamics. Using the emergent potential energy as guidance, we developed a committor-free and intuition-independent method for identifying reaction coordinates in complex systems. We expect this method to be applicable to a wide range of reaction processes in complex biomolecular systems.
El Escritor y las Normas del Canon Literario (The Writer and the Norms of the Literary Canon).
Policarpo, Alcibiades
This paper speculates about whether a literary canon exists in contemporary Latin American literature, particularly in the prose genre. The paper points to Carlos Fuentes, Gabriel Garcia Marquez, and Mario Vargas Llosa as the three authors who might form this traditional and liberal canon with their works "La Muerte de Artemio Cruz" (Fuentes),…
Finite Canonical Measure for Nonsingular Cosmologies
Page, Don N
2011-01-01
The total canonical (Liouville-Henneaux-Gibbons-Hawking-Stewart) measure is finite for completely nonsingular Friedmann-Robertson-Walker classical universes with a minimally coupled massive scalar field and a positive cosmological constant. For a cosmological constant very small in units of the square of the scalar field mass, most of the measure is for nearly de Sitter solutions with no inflation at a much more rapid rate. However, if one restricts to solutions in which the scalar field energy density is ever more than twice the equivalent energy density of the cosmological constant, then the number of e-folds of rapid inflation must be large, and the fraction of the measure is low in which the spatial curvature is comparable to the cosmological constant at the time when it is comparable to the energy density of the scalar field. The measure for such classical FRW-Lambda-phi models with both a big bang and a big crunch is also finite. Only the solutions with a big bang that expand forever, or the time-revers...
Canonical Coordinates for Retino-Cortical Magnification
Directory of Open Access Journals (Sweden)
Luc Florack
2014-02-01
A geometric model for a biologically-inspired visual front-end is proposed, based on an isotropic, scale-invariant two-form field. The model incorporates a foveal property typical of biological visual systems, with an approximately linear decrease of resolution as a function of eccentricity, governed by a physical size constant that measures the radius of the geometric foveola, the central region characterized by maximal resolving power. It admits a description in singularity-free canonical coordinates generalizing the familiar log-polar coordinates and reducing to these in the asymptotic case of a negligibly-sized geometric foveola or, equivalently, at peripheral locations in the visual field. It has predictive power to the extent that quantitative geometric relationships pertaining to retino-cortical magnification along the primary visual pathway, such as receptive field size distribution and spatial arrangement in retina and striate cortex, can be deduced in a principled manner. The biological plausibility of the model is demonstrated by comparison with known facts of human vision.
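A widely used concrete instance of such singularity-free coordinates is the monopole mapping w = log(z + a) (Schwartz), where retinal position is a complex number z and a > 0 plays the role of the foveal size constant. This is offered as an illustrative analogue, not the paper's exact two-form construction:

```python
import cmath

def retino_cortical(z, a=0.5):
    """Monopole log map: regular at the fovea (z = 0); for |z| >> a it reduces
    to the classical log-polar coordinates log|z| + i*arg(z)."""
    return cmath.log(z + a)

def magnification(z, a=0.5, h=1e-6):
    """Local cortical magnification |dw/dz| = 1/|z + a|, estimated here by a
    finite difference; it decays roughly like 1/eccentricity, i.e. resolution
    falls off approximately linearly with eccentricity."""
    return abs((retino_cortical(z + h, a) - retino_cortical(z, a)) / h)
```

Doubling a peripheral eccentricity shifts the cortical image by a nearly constant amount (about log 2), the hallmark of log-polar magnification.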
An $OSp$ extension of Canonical Tensor Model
Narain, Gaurav
2015-01-01
Tensor models are generalizations of matrix models, and are studied as discrete models of quantum gravity for arbitrary dimensions. Among them, the canonical tensor model (CTM for short) is a rank-three tensor model formulated as a totally constrained system with a number of first-class constraints, which have an algebraic structure similar to that of the constraints of the ADM formalism of general relativity. In this paper, we formulate a super-extension of CTM as an attempt to incorporate fermionic degrees of freedom. The kinematical symmetry group is extended from $O(N)$ to $OSp(N,\tilde N)$, and the constraints are constructed so that they form a first-class constraint super-Poisson algebra. This is a straightforward super-extension, and the constraints and their algebraic structure are formally unchanged from the purely bosonic case, except for the additional signs associated to the order of the fermionic indices and dynamical variables. However, this extension of CTM leads to the existence of negative norm state...
DEFF Research Database (Denmark)
Nielsen, Henrik Aalborg; Nielsen, Torben Skov; Madsen, Henrik;
2006-01-01
Meteorological ensemble forecasts aim at quantifying the uncertainty of the future development of the weather by supplying several possible scenarios of this development. Here we address the use of such scenarios in probabilistic forecasting of wind power production. Specifically, for each forecast...... horizon we aim at supplying quantiles of the wind power production conditional on the information available at the time at which the forecast is generated. This involves: (i) transformation of meteorological ensemble forecasts into wind power ensemble forecasts and (ii) calculation of quantiles based...... on the wind power ensemble forecasts. Given measurements of power production, representing a region or a single wind farm, we have developed methods applicable for these two steps. While (ii) should in principle be a simple task we found that the probabilistic information contained in the wind power ensembles...
Skill and relative economic value of medium-range hydrological ensemble predictions
Directory of Open Access Journals (Sweden)
E. Roulin
2007-01-01
A hydrological ensemble prediction system, integrating a water balance model with ensemble precipitation forecasts from the European Centre for Medium-Range Weather Forecasts (ECMWF) Ensemble Prediction System (EPS), is evaluated for two Belgian catchments using verification methods borrowed from meteorology. The skill of the probability forecast that the streamflow exceeds a given level is measured with the Brier Skill Score. Then the value of the system is assessed using a cost-loss decision model. The verification results of the hydrological ensemble predictions are compared with the corresponding results obtained for simpler alternatives, such as using the deterministic forecast of ECMWF, which has a higher spatial resolution, or using the EPS ensemble mean.
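The Brier Skill Score used in such verification takes only a few lines; the climatological base rate serves as the reference forecast (the values below are illustrative, not the study's data):

```python
def brier_score(probs, outcomes):
    """Mean squared difference between forecast probabilities and 0/1 outcomes."""
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

def brier_skill_score(probs, outcomes):
    """Skill relative to always forecasting the climatological base rate.
    BSS = 1 means a perfect forecast; BSS <= 0 means no skill over climatology."""
    clim = sum(outcomes) / len(outcomes)
    bs_ref = brier_score([clim] * len(outcomes), outcomes)
    return 1.0 - brier_score(probs, outcomes) / bs_ref
```

For ensemble forecasts, `probs` would be the fraction of members exceeding the streamflow threshold on each occasion.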
Constructing Support Vector Machine Ensembles for Cancer Classification Based on Proteomic Profiling
Institute of Scientific and Technical Information of China (English)
Yong Mao; Xiao-Bo Zhou; Dao-Ying Pi; You-Xian Sun
2005-01-01
In this study, we present a constructive algorithm for training cooperative support vector machine ensembles (CSVMEs). CSVME combines ensemble architecture design with cooperative training for individual SVMs in ensembles. Unlike most previous studies on training ensembles, CSVME puts emphasis on both accuracy and collaboration among individual SVMs in an ensemble. A group of SVMs selected on the basis of recursive classifier elimination is used in CSVME, and the number of the individual SVMs selected to construct CSVME is determined by 10-fold cross-validation. This kind of SVME has been tested on two ovarian cancer datasets previously obtained by proteomic mass spectrometry. By combining several individual SVMs, the proposed method achieves better performance than the SVME of all base SVMs.
Basic Brackets of a 2D Model for the Hodge Theory Without its Canonical Conjugate Momenta
Kumar, R.; Gupta, S.; Malik, R. P.
2016-06-01
We deduce the canonical brackets for a two (1+1)-dimensional (2D) free Abelian 1-form gauge theory by exploiting the beauty and strength of the continuous symmetries of a Becchi-Rouet-Stora-Tyutin (BRST) invariant Lagrangian density that respects, in totality, six continuous symmetries. These symmetries make this model a field-theoretic example of Hodge theory. Taken together, these symmetries enforce the existence of exactly the same canonical brackets amongst the creation and annihilation operators that are found to exist within the standard canonical quantization scheme. These creation and annihilation operators appear in the normal mode expansion of the basic fields of this theory. In other words, we provide an alternative to the canonical method of quantization for our present model of Hodge theory where the continuous internal symmetries play a decisive role. We conjecture that our method of quantization is valid for a class of field theories that are tractable physical examples for the Hodge theory. This statement is true in any arbitrary dimension of spacetime.
Directory of Open Access Journals (Sweden)
E. Crestani
2013-04-01
Estimating the spatial variability of hydraulic conductivity K in natural aquifers is important for predicting the transport of dissolved compounds. Especially in the nonreactive case, the plume evolution is mainly controlled by the heterogeneity of K. At the local scale, the spatial distribution of K can be inferred by combining the Lagrangian formulation of the transport with a Kalman-filter-based technique and assimilating a sequence of time-lapse concentration C measurements, which, for example, can be evaluated on site through the application of a geophysical method. The objective of this work is to compare the capabilities of the ensemble Kalman filter (EnKF) and the ensemble smoother (ES) to retrieve the hydraulic conductivity spatial distribution in a groundwater flow and transport modeling framework. The application refers to a two-dimensional synthetic aquifer in which a tracer test is simulated. Moreover, since Kalman-filter-based methods are optimal only if each of the involved variables fits a Gaussian probability density function (pdf), and since this condition may not be met by some of the flow and transport state variables, issues related to the non-Gaussianity of the variables are analyzed and different transformations of the pdfs are considered in order to evaluate their influence on the performance of the methods. The results show that the EnKF reproduces the hydraulic conductivity field with good accuracy, outperforming the ES regardless of the pdf of the concentrations.
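The EnKF analysis step that this comparison builds on can be sketched in its stochastic (perturbed-observations) form as below. The dimensions, the linear observation operator, and the noise levels are illustrative assumptions, not taken from the study:

```python
import numpy as np

rng = np.random.default_rng(0)

def enkf_analysis(X, y, H, R):
    """One stochastic EnKF analysis step.
    X: (n_state, n_ens) forecast ensemble; y: (n_obs,) observations;
    H: (n_obs, n_state) linear observation operator; R: (n_obs, n_obs) obs-error cov."""
    n_ens = X.shape[1]
    A = X - X.mean(axis=1, keepdims=True)          # ensemble anomalies
    P = A @ A.T / (n_ens - 1)                      # sample forecast covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
    # Perturb the observations so the analysis ensemble keeps the right spread.
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, n_ens).T
    return X + K @ (Y - H @ X)

# Toy setup: 4-dimensional state, 20 members, 2 observed components.
n, m, k = 4, 20, 2
truth = np.array([1.0, -0.5, 0.3, 2.0])
X = truth[:, None] + rng.normal(0.0, 1.0, (n, m))  # forecast ensemble
H = np.eye(k, n)                                   # observe first two components
R = 0.1 * np.eye(k)
y = H @ truth + rng.multivariate_normal(np.zeros(k), R)
Xa = enkf_analysis(X, y, H, R)
```

The ES differs mainly in assimilating all time-lapse measurements in a single global update rather than sequentially, which is what the paper's comparison probes.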
Canonical cortical circuits: current evidence and theoretical implications
Directory of Open Access Journals (Sweden)
Capone F
2016-04-01
Fioravante Capone,1,2 Matteo Paolucci,1,2 Federica Assenza,1,2 Nicoletta Brunelli,1,2 Lorenzo Ricci,1,2 Lucia Florio,1,2 Vincenzo Di Lazzaro1,2 1Unit of Neurology, Neurophysiology, Neurobiology, Department of Medicine, Università Campus Bio-Medico di Roma, Rome, Italy; 2Fondazione Alberto Sordi – Research Institute for Aging, Rome, Italy. Abstract: Neurophysiological and neuroanatomical studies have found that the same basic structural and functional organization of neuronal circuits exists throughout the cortex. This kind of cortical organization, termed the canonical circuit, has been functionally demonstrated primarily by studies of the visual striate cortex, and the concept has since been extended to other cortical areas. In brief, the canonical circuit is composed of superficial pyramidal neurons of layers II/III, which receive different inputs, and deep pyramidal neurons of layer V, which are responsible for the cortical output. Superficial and deep pyramidal neurons are reciprocally connected, and inhibitory interneurons modulate the activity of the circuit. The main intuition of this model is that the entire cortical network could be modeled as the repetition of relatively simple modules composed of relatively few types of excitatory and inhibitory, highly interconnected neurons. We review the origin and the application of the canonical cortical circuit model in the six sections of this paper. The first section (The origins of the concept of canonical circuit: the cat visual cortex) reviews the experiments performed in the cat visual cortex, from the origin of the concept of the canonical circuit to the most recent developments in modeling the cortex. The second (The canonical circuit in neocortex) and third (Toward a canonical circuit in agranular cortex) sections extend the concept of the canonical circuit to other cortical areas, providing some significant examples of circuit functioning in different cytoarchitectonic
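The module the abstract describes (layer II/III units receiving external input, layer V units providing the output, reciprocal superficial-deep coupling, and inhibitory modulation of both) can be caricatured as a three-unit firing-rate model. All weights and time constants below are made-up illustrative values, not parameters from any of the reviewed studies:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def simulate(T=200.0, dt=0.1, inp=1.0):
    """Euler-integrate a 3-unit rate model of one cortical module:
    s = superficial (L2/3), d = deep (L5), i = inhibitory interneuron.
    Weights are illustrative, not fitted to data."""
    s = d = i = 0.0
    tau = 10.0
    for _ in range(int(T / dt)):
        s += dt / tau * (-s + relu(inp + 0.5 * d - 1.0 * i))  # L2/3: input + L5 feedback, inhibited
        d += dt / tau * (-d + relu(0.8 * s - 0.6 * i))        # L5: driven by L2/3, provides output
        i += dt / tau * (-i + relu(0.4 * s + 0.4 * d))        # interneuron: samples both layers
    return s, d, i
```

Despite the positive reciprocal loop between the superficial and deep units, the inhibitory feedback keeps the steady-state rates bounded, which is the qualitative point of including interneurons in the module.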
A genetic ensemble approach for gene-gene interaction identification
Directory of Open Access Journals (Sweden)
Ho Joshua WK
2010-10-01
Abstract Background It has now become clear that gene-gene interactions and gene-environment interactions are ubiquitous and fundamental mechanisms in the development of complex diseases. Although considerable effort has been put into developing statistical models and algorithmic strategies for identifying such interactions, their accurate identification has proven to be very challenging. Methods In this paper, we propose a new approach for identifying gene-gene and gene-environment interactions underlying complex diseases. It is a hybrid algorithm that combines a genetic algorithm (GA) with an ensemble of classifiers (called a genetic ensemble). Using this approach, the original problem of SNP interaction identification is converted into a data-mining problem of combinatorial feature selection. By collecting the various single nucleotide polymorphism (SNP) subsets and environmental factors generated in multiple GA runs, patterns of gene-gene and gene-environment interactions can be extracted using a simple combinatorial ranking method. Also considered in this study is the idea of combining identification results obtained from multiple algorithms. A novel formula based on the pairwise double fault is designed to quantify the degree of complementarity. Conclusions Our simulation study demonstrates that the proposed genetic ensemble algorithm has identification power comparable to Multifactor Dimensionality Reduction (MDR) and slightly better than Polymorphism Interaction Analysis (PIA), the two most popular methods for gene-gene interaction identification. More importantly, the identification results generated by our genetic ensemble algorithm are highly complementary to those obtained by PIA and MDR. Experimental results from our simulation studies and a real-world data application also confirm the effectiveness of the proposed genetic ensemble algorithm, as well as the potential benefits of
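The pairwise double-fault idea the abstract mentions has a standard form: the fraction of samples that two classifiers both misclassify, with low values indicating that the two methods fail on different samples and are therefore complementary. A minimal sketch of that standard measure follows (the paper's exact "novel formula" may differ, and the toy data are invented):

```python
def double_fault(pred_a, pred_b, truth):
    """Fraction of samples misclassified by BOTH classifiers.
    Lower values mean the two methods err on different samples,
    i.e. their identification results are complementary."""
    both_wrong = sum(1 for a, b, t in zip(pred_a, pred_b, truth)
                     if a != t and b != t)
    return both_wrong / len(truth)

# Toy example: two classifiers that each err twice, but never together.
truth  = [1, 1, 0, 0, 1, 0]
pred_a = [1, 0, 0, 0, 1, 1]   # errs on samples 1 and 5
pred_b = [0, 1, 0, 1, 1, 0]   # errs on samples 0 and 3
```

Here `double_fault(pred_a, pred_b, truth)` is 0 even though each classifier alone is wrong a third of the time, which is exactly the complementarity the paper reports between the genetic ensemble and PIA/MDR.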
Ensemble-based approximation of observation impact using an observation-based verification metric
Sommer, Matthias; Weissmann, Martin
2016-01-01
Knowledge of the contribution of observations to forecast accuracy is crucial for the refinement of observing and data assimilation systems. Several recent publications have highlighted the benefits of efficiently approximating this observation impact using adjoint methods or ensembles. This study proposes a modification of an existing method for computing observation impact in an ensemble-based data assimilation and forecasting system and applies the method to a pre-operational, convective-scale ...
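The underlying notion of observation impact (how much a given observation changes a chosen forecast-error metric) can be illustrated with a deliberately simple scalar toy, not the paper's ensemble approximation: assimilate one observation with a scalar ensemble Kalman update and measure the resulting change in squared forecast error against a verification value. The persistence forecast and all numbers are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

def analysis_mean(xb, y, r):
    """Scalar ensemble Kalman update of the mean:
    weight the innovation by ensemble variance vs. observation-error variance."""
    pb = xb.var(ddof=1)
    gain = pb / (pb + r)
    return xb.mean() + gain * (y - xb.mean())

def observation_impact(xb, y, r, x_verif):
    """Impact = squared forecast error without the observation minus with it
    (positive means the observation improved the forecast). A persistence
    forecast is assumed, so analysis error stands in for forecast error."""
    e_without = (xb.mean() - x_verif) ** 2
    e_with = (analysis_mean(xb, y, r) - x_verif) ** 2
    return e_without - e_with

truth = 1.0
xb = truth + rng.normal(0.5, 0.3, 50)   # biased 50-member background ensemble
y_good = truth + 0.05                   # accurate observation
impact = observation_impact(xb, y_good, 0.01, truth)
```

In the study itself the verification metric is observation-based rather than a known truth, and the impact is partitioned across many observations, but the sign convention is the same: positive impact means the observation reduced forecast error.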