Derivation of Mayer Series from Canonical Ensemble
Wang, Xian-Zhi
2016-02-01
Mayer derived the Mayer series from both the canonical ensemble and the grand canonical ensemble by use of the cluster expansion method. In 2002, we conjectured a recursion formula of the canonical partition function of a fluid (X.Z. Wang, Phys. Rev. E 66 (2002) 056102). In this paper we give a proof for this formula by developing an appropriate expansion of the integrand of the canonical partition function. We further derive the Mayer series solely from the canonical ensemble by use of this recursion formula.
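The kind of recursion referred to above can be illustrated with a standard cluster-expansion identity (a generic sketch, not necessarily Wang's exact formula): if the grand partition function has the form Ξ(z) = exp(V Σ_l b_l z^l) with cluster integrals b_l, then the canonical partition functions Q_N = [z^N] Ξ obey N·Q_N = Σ_{k=1}^{N} k (V b_k) Q_{N−k}.

```python
def canonical_partition(b, V, N):
    """Canonical partition functions Q_0..Q_N from cluster integrals b[1..N].

    Uses the power-series recursion N*Q_N = sum_{k=1}^{N} k*(V*b_k)*Q_{N-k},
    which follows from Xi(z) = exp(V * sum_l b_l z^l) and Q_N = [z^N] Xi(z).
    """
    Q = [1.0] + [0.0] * N
    for n in range(1, N + 1):
        Q[n] = sum(k * V * b[k] * Q[n - k] for k in range(1, n + 1)) / n
    return Q

# Ideal-gas check (thermal wavelength set to 1): b_1 = 1, b_l = 0 for l > 1
# should reproduce Q_N = V^N / N!.
b = {1: 1.0, 2: 0.0, 3: 0.0}
Q = canonical_partition(b, V=2.0, N=3)
```

For V = 2 this yields Q_1 = 2, Q_2 = 2, Q_3 = 4/3, matching V^N/N! as expected for the ideal gas.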
Matrix product purifications for canonical ensembles and quantum number distributions
Barthel, Thomas
2016-09-01
Matrix product purifications (MPPs) are a very efficient tool for the simulation of strongly correlated quantum many-body systems at finite temperatures. When a system features symmetries, these can be used to reduce computation costs substantially. It is straightforward to compute an MPP of a grand-canonical ensemble, also when symmetries are exploited. This paper provides and demonstrates methods for the efficient computation of MPPs of canonical ensembles under utilization of symmetries. Furthermore, we present a scheme for the evaluation of global quantum number distributions using matrix product density operators (MPDOs). We provide exact matrix product representations for canonical infinite-temperature states, and discuss how they can be constructed alternatively by applying matrix product operators to vacuum-type states or by using entangler Hamiltonians. A demonstration of the techniques for Heisenberg spin-1/2 chains explains why the difference in the energy densities of canonical and grand-canonical ensembles decays as 1/L.
Canonical Ensemble Model for Black Hole Radiation
Indian Academy of Sciences (India)
Jingyi Zhang
2014-09-01
In this paper, a canonical ensemble model for black hole quantum tunnelling radiation is introduced. In this model the probability distribution function corresponding to the emission shell is calculated to second order. The formulas for the pressure and internal energy of the thermal system are modified, and the fundamental equation of thermodynamics is also discussed.
Parvan, A S; Ploszajczak, M
2000-01-01
A quantum statistical model of nuclear multifragmentation is proposed. The recurrence equation method used within the canonical ensemble makes the model solvable and transparent to physical assumptions, and allows one to obtain results without resorting to the Monte Carlo technique. The model exhibits a first-order phase transition. Quantum statistics effects are clearly seen at the microscopic level of occupation numbers but are almost washed out for the global thermodynamic variables and averaged observables studied. In the latter case, the recurrence relations for multiplicity distributions of both intermediate-mass and all fragments are derived, and the specific changes in the shape of the multiplicity distributions in the narrow region of the transition temperature are stressed. The temperature domain favorable for a search for the HBT effect is noted.
Re, Matteo; Valentini, Giorgio
2012-03-01
Ensemble methods are statistical and computational learning procedures reminiscent of the human social learning behavior of seeking several opinions before making any crucial decision. The idea of combining the opinions of different "experts" to obtain an overall "ensemble" decision is rooted in our culture at least from the classical age of ancient Greece, and it has been formalized during the Enlightenment with the Condorcet Jury Theorem [45], which proved that the judgment of a committee is superior to those of its individual members, provided the individuals have reasonable competence. Ensembles are sets of learning machines that combine in some way their decisions, or their learning algorithms, or different views of data, or other specific characteristics to obtain more reliable and more accurate predictions in supervised and unsupervised learning problems [48,116]. A simple example is represented by the majority vote ensemble, by which the decisions of different learning machines are combined, and the class that receives the majority of "votes" (i.e., the class predicted by the majority of the learning machines) is the class predicted by the overall ensemble [158]. In the literature, a plethora of terms other than ensembles has been used, such as fusion, combination, aggregation, and committee, to indicate sets of learning machines that work together to solve a machine learning problem [19,40,56,66,99,108,123], but in this chapter we maintain the term ensemble in its widest meaning, in order to include the whole range of combination methods. Nowadays, ensemble methods represent one of the main current research lines in machine learning [48,116], and the interest of the research community in ensemble methods is witnessed by conferences and workshops specifically devoted to ensembles, first of all the multiple classifier systems (MCS) conference organized by Roli, Kittler, Windeatt, and other researchers of this area [14,62,85,149,173]. Several theories have been
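The majority vote rule and the Condorcet Jury Theorem mentioned above can be sketched in a few lines (a toy illustration; the independence assumption on voter errors is an idealization):

```python
from math import comb

def majority_vote(predictions):
    """Combine class predictions from several learners by majority vote."""
    votes = {}
    for p in predictions:
        votes[p] = votes.get(p, 0) + 1
    return max(votes, key=votes.get)

def committee_error(p, n):
    """Condorcet-style error of a committee of n independent voters (n odd),
    each wrong with probability p: probability that the majority errs."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

winner = majority_vote(["A", "B", "A"])
```

With individual error p = 0.3, an 11-member committee errs far less often than a single voter, consistent with the theorem's "reasonable competence" (p < 1/2) condition.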
Multiplicity fluctuations in heavy-ion collisions using canonical and grand-canonical ensemble
Energy Technology Data Exchange (ETDEWEB)
Garg, P. [Indian Institute of Technology Indore, Discipline of Physics, School of Basic Science, Simrol (India); Mishra, D.K.; Netrakanti, P.K.; Mohanty, A.K. [Bhabha Atomic Research Center, Nuclear Physics Division, Mumbai (India)
2016-02-15
We report the higher-order cumulants and their ratios for baryon, charge and strangeness multiplicity in the canonical and grand-canonical ensembles in an ideal thermal model including all the resonances. When the number of conserved quanta is small, an explicit treatment of these conserved charges is required; this leads to a canonical description of the system, and the fluctuations differ significantly from those in the grand-canonical ensemble. Cumulant ratios of total-charge and net-charge multiplicity as a function of collision energy are also compared in the grand-canonical ensemble. (orig.)
Geometric integrator for simulations in the canonical ensemble
Tapias, Diego; Sanders, David P.; Bravetti, Alessandro
2016-08-01
We introduce a geometric integrator for molecular dynamics simulations of physical systems in the canonical ensemble that preserves the invariant distribution in equations arising from the density dynamics algorithm, with any possible type of thermostat. Our integrator thus constitutes a unified framework that allows the study and comparison of different thermostats and of their influence on the equilibrium and non-equilibrium (thermo-)dynamic properties of a system. To show the validity and the generality of the integrator, we implement it with a second-order, time-reversible method and apply it to the simulation of a Lennard-Jones system with three different thermostats, obtaining good conservation of the geometrical properties and recovering the expected thermodynamic results. Moreover, to show the advantage of our geometric integrator over a non-geometric one, we compare the results with those obtained by using the non-geometric Gear integrator, which is frequently used to perform simulations in the canonical ensemble. The non-geometric integrator induces a drift in the invariant quantity, while our integrator has no such drift, thus ensuring that the system is effectively sampling the correct ensemble.
Grand Canonical Ensemble Monte Carlo Simulation of Depletion Interactions in Colloidal Suspensions
Institute of Scientific and Technical Information of China (English)
GUO Ji-Yuan; XIAO Chang-Ming
2008-01-01
Depletion interactions in colloidal suspensions confined between two parallel plates are investigated by using the acceptance ratio method with grand canonical ensemble Monte Carlo simulation. The numerical results show that both the depletion potential and the depletion force are affected by the confinement from the two parallel plates. Furthermore, it is found that in the grand canonical ensemble Monte Carlo simulation, the depletion interactions are strongly affected by the generalized chemical potential.
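The core move of a grand canonical ensemble Monte Carlo simulation is the particle insertion/deletion step at fixed chemical potential. A minimal sketch for an ideal gas (no pair interactions, thermal wavelength set to 1; the depletion systems above add hard-core potentials and confinement on top of this):

```python
import math
import random

def gcmc_ideal_gas(mu, V, beta=1.0, steps=20000, seed=1):
    """Minimal grand canonical Monte Carlo for an ideal gas in volume V.

    Insertion accepted with min(1, z*V/(N+1)); deletion with min(1, N/(z*V)),
    where z = exp(beta*mu) is the activity. These acceptance rules satisfy
    detailed balance for P(N) ~ (z*V)^N / N!, whose exact mean is <N> = z*V.
    """
    rng = random.Random(seed)
    z = math.exp(beta * mu)
    N, total = 0, 0
    for _ in range(steps):
        if rng.random() < 0.5:      # attempt an insertion
            if rng.random() < min(1.0, z * V / (N + 1)):
                N += 1
        elif N > 0:                 # attempt a deletion
            if rng.random() < min(1.0, N / (z * V)):
                N -= 1
        total += N
    return total / steps

avg_N = gcmc_ideal_gas(mu=0.0, V=10.0)  # exact analytic mean is z*V = 10
```

Adding a pair potential changes only the acceptance ratios (a Boltzmann factor of the energy change enters), which is where the depletion physics would come in.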
Universal critical wrapping probabilities in the canonical ensemble
Directory of Open Access Journals (Sweden)
Hao Hu
2015-09-01
Universal dimensionless quantities, such as Binder ratios and wrapping probabilities, play an important role in the study of critical phenomena. We study the finite-size scaling behavior of the wrapping probability for the Potts model in the random-cluster representation, under the constraint that the total number of occupied bonds is fixed, so that the canonical ensemble applies. We derive that, in the limit L→∞, the critical values of the wrapping probability are different from those of the unconstrained model, i.e. the model in the grand-canonical ensemble, but still universal, for systems with 2yt−d>0, where yt=1/ν is the thermal renormalization exponent and d is the spatial dimension. Similar modifications apply to other dimensionless quantities, such as Binder ratios. For systems with 2yt−d≤0, these quantities share the same critical universal values in the two ensembles. We also derive that new finite-size corrections are induced. These findings apply more generally to systems in the canonical ensemble, e.g. the dilute Potts model with a fixed total number of vacancies. Finally, we formulate an efficient cluster-type algorithm for the canonical ensemble and confirm these predictions by extensive simulations.
Cluster expansion in the canonical ensemble
Pulvirenti, Elena
2011-01-01
We consider a system of particles confined in a box $\Lambda\subset\mathbb{R}^d$ interacting via a tempered and stable pair potential. We prove the validity of the cluster expansion for the canonical partition function in the high-temperature, low-density regime. The convergence is uniform in the volume, and in the thermodynamic limit it reproduces Mayer's virial expansion, providing an alternative and more direct derivation which avoids the deep combinatorial issues present in the original proof.
On Black Hole Entropy Corrections in the Grand Canonical Ensemble
Mahapatra, Subhash; Sarkar, Tapobrata
2011-01-01
We study entropy corrections due to thermal fluctuations for asymptotically AdS black holes in the grand canonical ensemble. To leading order, these can be expressed in terms of the black hole response coefficients via fluctuation moments. We also analyze entropy corrections due to mass and charge fluctuations of R-charged black holes, and our results indicate a universality in the logarithmic corrections to charged AdS black hole entropy in various dimensions.
Canonical ensemble approach to graded-response perceptrons
Bollé, D.; Erichsen, R., Jr.
1999-03-01
Perceptrons with graded input-output relations and a limited output precision are studied within the Gardner-Derrida canonical ensemble approach. Soft non-negative error measures are introduced allowing for extended retrieval properties. In particular, the performance of these systems for a linear (quadratic) error measure, corresponding to the perceptron (adaline) learning algorithm, is compared with the performance for a rigid error measure, simply counting the number of errors. Replica-symmetry-breaking effects are evaluated, and the analytic results are compared with numerical simulations.
Climate Prediction Center(CPC)Ensemble Canonical Correlation Analysis Forecast of Temperature
National Oceanic and Atmospheric Administration, Department of Commerce — The Ensemble Canonical Correlation Analysis (ECCA) temperature forecast is a 90-day (seasonal) outlook of US surface temperature anomalies. The ECCA uses Canonical...
Canonical ensemble in non-extensive statistical mechanics, q > 1
Ruseckas, Julius
2016-09-01
The non-extensive statistical mechanics has been used to describe a variety of complex systems. The maximization of entropy, often used to introduce the non-extensive statistical mechanics, is a formal procedure and does not easily lead to physical insight. In this article we investigate the canonical ensemble in the non-extensive statistical mechanics by considering a small system interacting with a large reservoir via short-range forces and assuming equal probabilities for all available microstates. We concentrate on the situation when the reservoir is characterized by generalized entropy with non-extensivity parameter q > 1. We also investigate the problem of divergence in the non-extensive statistical mechanics occurring when q > 1 and show that there is a limit on the growth of the number of microstates of the system that is given by the same expression for all values of q.
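The canonical weights of non-extensive statistical mechanics replace the Boltzmann factor exp(−βE) with the Tsallis q-exponential; for q > 1 this gives power-law tails, which is the origin of the divergence issue discussed above. A small sketch (a finite, discrete level set is an illustrative assumption):

```python
import math

def q_exponential(x, q):
    """Tsallis q-exponential e_q(x) = [1 + (1-q)x]_+^{1/(1-q)}; e_1(x) = exp(x)."""
    if q == 1.0:
        return math.exp(x)
    base = 1.0 + (1.0 - q) * x
    return base ** (1.0 / (1.0 - q)) if base > 0.0 else 0.0

def tsallis_probabilities(energies, beta, q):
    """Canonical-like probabilities p_i proportional to e_q(-beta*E_i)."""
    w = [q_exponential(-beta * E, q) for E in energies]
    Z = sum(w)
    return [x / Z for x in w]

levels = [0.0, 1.0, 2.0, 3.0]
p = tsallis_probabilities(levels, beta=1.0, q=1.5)
```

For q = 1.5 the weight of level E decays only as (1 + E/2)^(−2), a power law rather than an exponential; summed over an unbounded spectrum whose level count grows too fast, such a tail is what produces the divergence for q > 1.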
Oza, Nikunj C.
2004-01-01
Ensemble Data Mining Methods, also known as Committee Methods or Model Combiners, are machine learning methods that leverage the power of multiple models to achieve better prediction accuracy than any of the individual models could on their own. The basic goal when designing an ensemble is the same as when establishing a committee of people: each member of the committee should be as competent as possible, but the members should be complementary to one another. If the members are not complementary, i.e., if they always agree, then the committee is unnecessary---any one member is sufficient. If the members are complementary, then when one or a few members make an error, the probability is high that the remaining members can correct this error. Research in ensemble methods has largely revolved around designing ensembles consisting of competent yet complementary models.
Energy Technology Data Exchange (ETDEWEB)
Parvan, A.S. [Joint Institute for Nuclear Research, Bogoliubov Laboratory of Theoretical Physics, Dubna (Russian Federation); Horia Hulubei National Institute of Physics and Nuclear Engineering, Department of Theoretical Physics, Bucharest (Romania); Moldova Academy of Sciences, Institute of Applied Physics, Chisinau (Moldova, Republic of)
2015-09-15
In the present paper, the Tsallis statistics in the grand canonical ensemble was reconsidered in a general form. The thermodynamic properties of the nonrelativistic ideal gas of hadrons in the grand canonical ensemble were studied numerically and analytically in a finite volume and in the thermodynamic limit. It was proved that the Tsallis statistics in the grand canonical ensemble satisfies the requirements of equilibrium thermodynamics in the thermodynamic limit if the thermodynamic potential is a homogeneous function of the first order with respect to the extensive variables of state of the system and the entropic variable z = 1/(q - 1) is an extensive variable of state. The equivalence of the canonical, microcanonical and grand canonical ensembles for the nonrelativistic ideal gas of hadrons was demonstrated. (orig.)
Domain walls, $Z(N)$ charge and $A_0$ condensate: a canonical ensemble study
Borisenko, O A; Zinovjev, G M; Petrov, K V
1996-01-01
The deconfinement phase transition is studied in the ensemble that is canonical with respect to triality. Since this ensemble implies a projection onto the zero-triality sector of the theory, we introduce a quantity which is insensitive to $Z(N_c)$ symmetry but can reveal critical behaviour in the theory with dynamical quarks. Further, we argue that in the canonical ensemble description of full QCD there exist domains of different $Z(N_c)$ phases which are degenerate and possess normal physical properties. This contradicts the predictions of the grand canonical ensemble. We propose a new order parameter to test the realization of the discrete $Z(N_c)$ symmetry at finite temperature and calculate it for the case of $Z(2)$ gauge fields coupled to fundamental fermions.
Hard, charged spheres in spherical pores. Grand canonical ensemble Monte Carlo calculations
DEFF Research Database (Denmark)
Sloth, Peter; Sørensen, T. S.
1992-01-01
A model consisting of hard charged spheres inside hard spherical pores is investigated by grand canonical ensemble Monte Carlo calculations. It is found that the mean ionic density profiles in the pores are almost the same when the wall of the pore is moderately charged as when it is uncharged...
THERMODYNAMICS OF THE SLOWLY ROTATING KERR-NEWMAN BLACK HOLE IN THE GRAND CANONICAL ENSEMBLE
Institute of Scientific and Technical Information of China (English)
CHEN JU-HUA; JING JI-LIANG
2001-01-01
We investigate the thermodynamics of the slowly rotating Kerr-Newman (K-N) black hole in the grand canonical ensemble with York's formalism. Some thermodynamical properties, such as the thermodynamical action, entropy,thermodynamical energy and heat capacity are studied, and solutions of the slowly rotating K-N black hole with different boundary conditions are analysed. We find stable solutions and instantons under certain boundary conditions.
Courtney, Owen T.; Bianconi, Ginestra
2016-06-01
Simplicial complexes are generalized network structures able to encode interactions occurring between more than two nodes. Simplicial complexes describe a large variety of complex interacting systems ranging from brain networks to social and collaboration networks. Here we characterize the structure of simplicial complexes using their generalized degrees that capture fundamental properties of one, two, three, or more linked nodes. Moreover, we introduce the configuration model and the canonical ensemble of simplicial complexes, enforcing, respectively, the sequence of generalized degrees of the nodes and the sequence of the expected generalized degrees of the nodes. We evaluate the entropy of these ensembles, finding the asymptotic expression for the number of simplicial complexes in the configuration model. We provide the algorithms for the construction of simplicial complexes belonging to the configuration model and the canonical ensemble of simplicial complexes. We give an expression for the structural cutoff of simplicial complexes that for simplicial complexes of dimension d =1 reduces to the structural cutoff of simple networks. Finally, we provide a numerical analysis of the natural correlations emerging in the configuration model of simplicial complexes without structural cutoff.
Canonical transformation method in classical electrodynamics
Pavlenko, Yu. G.
1983-08-01
The solutions of Maxwell's equations in the parabolic equation approximation are obtained on the basis of the canonical transformation method. The Hamiltonian form of the equations for the field in an anisotropic stratified medium is also examined. A perturbation theory for the calculation of the wave reflection and transmission coefficients is developed.
Ensemble Methods Foundations and Algorithms
Zhou, Zhi-Hua
2012-01-01
An up-to-date, self-contained introduction to a state-of-the-art machine learning approach, Ensemble Methods: Foundations and Algorithms shows how these accurate methods are used in real-world tasks. It gives you the necessary groundwork to carry out further research in this evolving field. After presenting background and terminology, the book covers the main algorithms and theories, including Boosting, Bagging, Random Forest, averaging and voting schemes, the Stacking method, mixture of experts, and diversity measures. It also discusses multiclass extension, noise tolerance, error-ambiguity a
Limit order book and its modeling in terms of Gibbs Grand-Canonical Ensemble
Bicci, Alberto
2016-12-01
In the domain of so-called Econophysics, some attempts have already been made to apply the theory of thermodynamics and statistical mechanics to economics and financial markets. In this paper a similar approach is taken from a different perspective, modeling the limit order book and price formation process of a given stock by the Grand-Canonical Gibbs Ensemble for the bid and ask orders. The application of Bose-Einstein statistics to this ensemble then allows one to derive the distribution of sell and buy orders as a function of price. As a consequence we can define in a meaningful way expressions for the temperatures of the ensembles of bid orders and of ask orders, which are a function of the minimum bid, maximum ask and closure prices of the stock as well as of the exchanged volume of shares. It is demonstrated that the difference between the ask and bid order temperatures can be related to the VAO (Volume Accumulation Oscillator), an indicator empirically defined in Technical Analysis of stock markets. Furthermore, the derived distributions for aggregate bid and ask orders can be subjected to well-defined validations against real data, giving a falsifiable character to the model.
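The Bose-Einstein occupancy the paper applies to order books can be sketched numerically. The specific form below, n(p) = 1/(exp((p − μ)/T) − 1) for the ask side with "chemical potential" μ at the best ask, is an illustrative assumption, not the paper's exact parametrization:

```python
import math

def bose_einstein_orders(price, mu, T):
    """Mean number of orders at a price level under Bose-Einstein statistics:
    n(p) = 1 / (exp((p - mu)/T) - 1), defined for p > mu (ask-side sketch).
    expm1 is used for numerical accuracy near p = mu."""
    return 1.0 / math.expm1((price - mu) / T)

# Toy ask book: order depth thins out as price moves away from the best ask.
mu, T = 100.0, 2.0
depth = [bose_einstein_orders(100.0 + k, mu, T) for k in (1, 2, 3, 4)]
```

The "temperature" T controls how quickly the book empties away from the touch, which is the quantity the paper relates to traded volume and price extremes.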
Phase structures of 4D stringy charged black holes in canonical ensemble
Jia, Qiang; Tan, Xiao-Jun
2016-01-01
We study the thermodynamics and phase structures of asymptotically flat dilatonic black holes in 4 dimensions, placed in a cavity à la York, in string theory for an arbitrary dilaton coupling. We consider these charged black systems in the canonical ensemble, for which the temperature at the wall of the cavity and the charge inside it are fixed. We find that the dilaton coupling plays the key role in the underlying phase structures. The connection of these black holes to higher-dimensional brane systems via diagonal (double) and/or direct dimensional reductions indicates that the phase structures of the former may exhaust all possible ones of the latter, which are more difficult to study under similar settings. Our study also shows that a diagonal (double) dimensional reduction preserves the underlying phase structure, while a direct dimensional reduction has the potential to change it.
Study of lattice QCD at finite chemical potential using canonical ensemble approach
Bornyakov, V G; Goy, V A; Molochkov, A V; Nakamura, Atsushi; Nikolaev, A A; Zakharov, V I
2016-01-01
A new approach to the computation of canonical partition functions in $N_f=2$ lattice QCD is presented. We compare results obtained by the new method with those obtained by the known method of hopping parameter expansion, and observe agreement between the two methods, indicating the validity of the new approach. We use results for the number density obtained in the confining and deconfining phases at imaginary chemical potential to determine the phase transition line at real chemical potential.
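The canonical-ensemble approach at imaginary chemical potential rests on a Fourier projection: Z_n = (1/2π) ∮ dθ e^{−inθ} Z_GC(e^{iθ}). A sketch with a toy grand-canonical function whose canonical coefficients are known exactly (the choice of M zero-energy fermionic modes is an illustrative assumption, not the lattice QCD setup):

```python
import cmath
from math import comb, pi

def canonical_from_grand(Z_gc, n, K=256):
    """Project the canonical partition function Z_n out of the grand-canonical
    one evaluated at imaginary chemical potential (fugacity on the unit circle):
    Z_n = (1/2*pi) * integral of exp(-i*n*theta) * Z_gc(exp(i*theta)) d(theta),
    approximated by a K-point uniform quadrature on the circle."""
    acc = 0.0 + 0.0j
    for k in range(K):
        theta = 2 * pi * k / K
        acc += cmath.exp(-1j * n * theta) * Z_gc(cmath.exp(1j * theta))
    return (acc / K).real

# Toy grand-canonical function: M zero-energy fermionic modes, Z_GC(z) = (1+z)^M,
# whose exact canonical coefficients are binomial: Z_n = C(M, n).
M = 10
Z3 = canonical_from_grand(lambda z: (1 + z) ** M, n=3)
```

Because Z_GC here is a polynomial of degree M < K, the quadrature recovers the coefficient C(10, 3) = 120 essentially to machine precision; in lattice QCD the same projection is applied to Monte Carlo measurements of Z_GC(iθ).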
Li, Gu-Qiang; Mo, Jie-Xiong
2016-06-01
The phase transition of a four-dimensional charged AdS black hole solution in the R+f(R) gravity with constant curvature is investigated in the grand canonical ensemble, where we find novel characteristics quite different from those in the canonical ensemble. There exists no critical point for the T-S curve, while in former research a critical point was found for both the T-S curve and the T-r+ curve when the electric charge of f(R) black holes is kept fixed. Moreover, we derive the explicit expression for the specific heat, the analog of the volume expansion coefficient and the isothermal compressibility coefficient when the electric potential of the f(R) AdS black hole is fixed. The specific heat CΦ encounters a divergence when 0 b. This finding also differs from the result in the canonical ensemble, where there may be two, one or no divergence points for the specific heat CQ. To examine the phase structure newly found in the grand canonical ensemble, we appeal to the well-known thermodynamic geometry tools and derive the analytic expressions for both the Weinhold scalar curvature and the Ruppeiner scalar curvature. It is shown that they diverge exactly where the specific heat CΦ diverges.
DEFF Research Database (Denmark)
Sloth, Peter
1990-01-01
Density profiles and partition coefficients are obtained for hard-sphere fluids inside hard, spherical pores of different sizes by grand canonical ensemble Monte Carlo calculations. The Monte Carlo results are compared to the results obtained by application of different kinds of integral equation...
DEFF Research Database (Denmark)
Sloth, Peter
1993-01-01
The grand canonical ensemble has been used to study the evaluation of single ion activity coefficients in homogeneous ionic fluids. In this work, the Coulombic interactions are truncated according to the minimum image approximation, and the ions are assumed to be placed in a structureless, homoge...
Path planning in uncertain flow fields using ensemble method
Wang, Tong; Le Maître, Olivier P.; Hoteit, Ibrahim; Knio, Omar M.
2016-08-01
An ensemble-based approach is developed to conduct optimal path planning in unsteady ocean currents under uncertainty. We focus our attention on two-dimensional steady and unsteady uncertain flows, and adopt a sampling methodology that is well suited to operational forecasts, where an ensemble of deterministic predictions is used to model and quantify uncertainty. In an operational setting, much about dynamics, topography, and forcing of the ocean environment is uncertain. To address this uncertainty, the flow field is parametrized using a finite number of independent canonical random variables with known densities, and the ensemble is generated by sampling these variables. For each of the resulting realizations of the uncertain current field, we predict the path that minimizes the travel time by solving a boundary value problem (BVP), based on the Pontryagin maximum principle. A family of backward-in-time trajectories starting at the end position is used to generate suitable initial values for the BVP solver. This allows us to examine and analyze the performance of the sampling strategy and to develop insight into extensions dealing with general circulation ocean models. In particular, the ensemble method enables us to perform a statistical analysis of travel times and consequently develop a path planning approach that accounts for these statistics. The proposed methodology is tested for a number of scenarios. We first validate our algorithms by reproducing simple canonical solutions, and then demonstrate our approach in more complex flow fields, including idealized, steady and unsteady double-gyre flows.
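The ensemble statistics described above can be sketched in a drastically simplified setting: a one-dimensional transit through an uncertain along-track current, with the current sampled from a single canonical random variable (a Gaussian here, which is an assumption; the actual method samples several such variables and solves a Pontryagin BVP per realization):

```python
import random
import statistics

def travel_time(distance, boat_speed, current):
    """Time to cover `distance` with a uniform along-track current added to
    the boat speed; an adverse current exceeding boat speed makes it infinite."""
    ground_speed = boat_speed + current
    if ground_speed <= 0:
        return float("inf")
    return distance / ground_speed

def ensemble_travel_stats(distance=10.0, boat_speed=2.0, n_members=500, seed=7):
    """Sample the uncertain current from one canonical random variable and
    collect travel-time statistics over the ensemble members."""
    rng = random.Random(seed)
    times = [travel_time(distance, boat_speed, rng.gauss(0.0, 0.3))
             for _ in range(n_members)]
    return statistics.mean(times), statistics.stdev(times)

mean_t, std_t = ensemble_travel_stats()
```

Even this toy shows the point of the statistical analysis: the mean travel time exceeds the deterministic value distance/boat_speed = 5, because travel time is a convex function of the current.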
Analysis of mesoscale forecasts using ensemble methods
Gross, Markus
2016-01-01
Mesoscale forecasts are now routinely performed as elements of operational forecasts, and their outputs appear convincing. However, despite their realistic appearance, the comparison to observations is at times less favorable. At the grid scale these forecasts often do not compare well with observations. This is partly due to the chaotic system underlying the weather. Another key problem is that it is impossible to evaluate the risk of making decisions based on these forecasts, because they do not provide a measure of confidence. Ensembles provide this information in the ensemble spread and quartiles. However, running global ensembles at the meso- or sub-mesoscale involves substantial computational resources. National centers do run such ensembles, but the subject of this publication is a method which requires significantly less computation. The ensemble-enhanced mesoscale system presented here aims not at the creation of an improved mesoscale forecast model. Also it is not to create an improved ensemble syste...
Popular Ensemble Methods: An Empirical Study
Maclin, R; 10.1613/jair.614
2011-01-01
An ensemble consists of a set of individually trained classifiers (such as neural networks or decision trees) whose predictions are combined when classifying novel instances. Previous research has shown that an ensemble is often more accurate than any of the single classifiers in the ensemble. Bagging (Breiman, 1996c) and Boosting (Freund and Schapire, 1996; Schapire, 1990) are two relatively new but popular methods for producing ensembles. In this paper we evaluate these methods on 23 data sets using both neural networks and decision trees as our classification algorithms. Our results clearly indicate a number of conclusions. First, while Bagging is almost always more accurate than a single classifier, it is sometimes much less accurate than Boosting. On the other hand, Boosting can create ensembles that are less accurate than a single classifier -- especially when using neural networks. Analysis indicates that the performance of the Boosting methods is dependent on the characteristics of the data set being exa...
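As a companion to the comparison in this abstract, here is a minimal bagging sketch (not the paper's experimental setup, which used neural networks and decision trees on 23 real data sets): decision stumps are trained on bootstrap resamples of a toy one-dimensional data set and combined by majority vote. All data values and parameters below are invented for illustration.

```python
import random
from collections import Counter

def train_stump(data):
    """Fit a 1-D threshold classifier (decision stump) minimizing
    training error; data is a list of (x, label) with labels 0/1."""
    best = None
    for thr in sorted(x for x, _ in data):
        for sign in (0, 1):
            # predict `sign` when x >= thr, and 1 - sign otherwise
            err = sum((sign if x >= thr else 1 - sign) != y for x, y in data)
            if best is None or err < best[0]:
                best = (err, thr, sign)
    _, thr, sign = best
    return lambda x: sign if x >= thr else 1 - sign

def bagging(data, n_models=25, seed=0):
    """Bagging (Breiman, 1996): train each stump on a bootstrap
    resample, then combine predictions by majority vote."""
    rng = random.Random(seed)
    models = [train_stump([rng.choice(data) for _ in data])
              for _ in range(n_models)]
    def predict(x):
        return Counter(m(x) for m in models).most_common(1)[0][0]
    return predict

# Toy data: class 1 for larger x, with one noisy point at x = 0.45.
data = [(0.1, 0), (0.2, 0), (0.3, 0), (0.45, 1), (0.6, 1), (0.7, 1), (0.9, 1)]
clf = bagging(data)
print([clf(x) for x in (0.0, 0.25, 0.8, 1.0)])
```

The majority vote over bootstrap-trained models is what makes the combined classifier more stable than any single stump.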
Indian Academy of Sciences (India)
W. X. Zhong
2014-09-01
In this paper, we use the canonical ensemble model to discuss the radiation of a Schwarzschild–de Sitter black hole on the black hole horizon. Using this model, we calculate the probability distribution function of the emission shell. The statistical meaning of the distribution function is then used to investigate the black hole tunnelling radiation spectrum. We also discuss the mechanism of information flowing from the black hole.
Composed ensembles of random unitary ensembles
Pozniak, M; Kus, M; Pozniak, Marcin; Zyczkowski, Karol; Kus, Marek
1997-01-01
Composed ensembles of random unitary matrices are defined via products of matrices, each pertaining to a given canonical circular ensemble of Dyson. We investigate statistical properties of the spectra of some composed ensembles and demonstrate their physical relevance. We also discuss methods of generating random matrices distributed according to the invariant Haar measure on the orthogonal and unitary groups.
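The last point of this abstract, generating Haar-distributed unitaries and composing them via matrix products, can be sketched as follows. This is the generic Ginibre-plus-QR recipe (Gram-Schmidt orthonormalization, which realizes the positive-diagonal QR convention), not code from the paper; the 3 x 3 size and seed are arbitrary choices.

```python
import random

def haar_unitary(n, rng):
    """Draw an n x n Haar-random unitary: fill a matrix with i.i.d.
    complex Gaussians (Ginibre ensemble) and Gram-Schmidt
    orthonormalize its columns."""
    cols = [[complex(rng.gauss(0, 1), rng.gauss(0, 1)) for _ in range(n)]
            for _ in range(n)]
    q = []
    for v in cols:
        w = list(v)
        for u in q:  # remove components along previous orthonormal columns
            coeff = sum(uc.conjugate() * wc for uc, wc in zip(u, w))
            w = [wc - coeff * uc for wc, uc in zip(w, u)]
        norm = sum(abs(wc) ** 2 for wc in w) ** 0.5
        q.append([wc / norm for wc in w])
    # q holds the columns; return a row-major matrix
    return [[q[j][i] for j in range(n)] for i in range(n)]

def compose(a, b):
    """Matrix product, used to build composed ensembles U = U1 U2."""
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

rng = random.Random(1)
u = compose(haar_unitary(3, rng), haar_unitary(3, rng))
```

Since each factor is unitary, the composed matrix is unitary as well; its spectral statistics are what the paper studies.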
Directory of Open Access Journals (Sweden)
Xun Chen
2014-01-01
Electroencephalogram (EEG) recordings are often contaminated with muscle artifacts. This disturbing muscular activity strongly affects the visual analysis of EEG and impairs the results of EEG signal processing such as brain connectivity analysis. If multichannel EEG recordings are available, then there exists a considerable range of methods which can remove or to some extent suppress the distorting effect of such artifacts. Yet to our knowledge, there is no existing means to remove muscle artifacts from single-channel EEG recordings. Moreover, considering the recently increasing need for biomedical signal processing in ambulatory situations, it is crucially important to develop single-channel techniques. In this work, we propose a simple yet effective method to remove muscle artifacts from single-channel EEG, by combining ensemble empirical mode decomposition (EEMD) with multiset canonical correlation analysis (MCCA). We demonstrate the performance of the proposed method through numerical simulations and application to real EEG recordings contaminated with muscle artifacts. The proposed method can successfully remove muscle artifacts without altering the recorded underlying EEG activity. It is a promising tool for real-world biomedical signal processing applications.
Olsen, Seth
2015-01-01
This paper reviews basic results from a theory of the a priori classical probabilities (weights) in state-averaged complete active space self-consistent field (SA-CASSCF) models. It addresses how the classical probabilities limit the invariance of the self-consistency condition to transformations of the complete active space configuration interaction (CAS-CI) problem. Such transformations are of interest for choosing representations of the SA-CASSCF solution that are diabatic with respect to some interaction. I achieve the known result that a SA-CASSCF can be self-consistently transformed only within degenerate subspaces of the CAS-CI ensemble density matrix. For uniformly distributed ("microcanonical") SA-CASSCF ensembles, self-consistency is invariant to any unitary CAS-CI transformation that acts locally on the ensemble support. Most SA-CASSCF applications in current literature are microcanonical. A problem with microcanonical SA-CASSCF models for problems with "more diabatic than adiabatic" states is described. The problem is that not all diabatic energies and couplings are self-consistently resolvable. A canonical-ensemble SA-CASSCF strategy is proposed to solve the problem. For canonical-ensemble SA-CASSCF, the equilibrated ensemble is a Boltzmann density matrix parametrized by its own CAS-CI Hamiltonian and a Lagrange multiplier acting as an inverse "temperature," unrelated to the physical temperature. Like the convergence criterion for microcanonical-ensemble SA-CASSCF, the equilibration condition for canonical-ensemble SA-CASSCF is invariant to transformations that act locally on the ensemble CAS-CI density matrix. The advantage of a canonical-ensemble description is that more adiabatic states can be included in the support of the ensemble without running into convergence problems. The constraint on the dimensionality of the problem is relieved by the introduction of an energy constraint. The method is illustrated with a complete active space valence
Energy Technology Data Exchange (ETDEWEB)
Olsen, Seth, E-mail: seth.olsen@uq.edu.au [School of Mathematics and Physics, The University of Queensland, Brisbane QLD 4072 (Australia)
2015-01-28
This paper reviews basic results from a theory of the a priori classical probabilities (weights) in state-averaged complete active space self-consistent field (SA-CASSCF) models. It addresses how the classical probabilities limit the invariance of the self-consistency condition to transformations of the complete active space configuration interaction (CAS-CI) problem. Such transformations are of interest for choosing representations of the SA-CASSCF solution that are diabatic with respect to some interaction. I achieve the known result that a SA-CASSCF can be self-consistently transformed only within degenerate subspaces of the CAS-CI ensemble density matrix. For uniformly distributed (“microcanonical”) SA-CASSCF ensembles, self-consistency is invariant to any unitary CAS-CI transformation that acts locally on the ensemble support. Most SA-CASSCF applications in current literature are microcanonical. A problem with microcanonical SA-CASSCF models for problems with “more diabatic than adiabatic” states is described. The problem is that not all diabatic energies and couplings are self-consistently resolvable. A canonical-ensemble SA-CASSCF strategy is proposed to solve the problem. For canonical-ensemble SA-CASSCF, the equilibrated ensemble is a Boltzmann density matrix parametrized by its own CAS-CI Hamiltonian and a Lagrange multiplier acting as an inverse “temperature,” unrelated to the physical temperature. Like the convergence criterion for microcanonical-ensemble SA-CASSCF, the equilibration condition for canonical-ensemble SA-CASSCF is invariant to transformations that act locally on the ensemble CAS-CI density matrix. The advantage of a canonical-ensemble description is that more adiabatic states can be included in the support of the ensemble without running into convergence problems. The constraint on the dimensionality of the problem is relieved by the introduction of an energy constraint. The method is illustrated with a complete active space
Critical behavior of charged Gauss-Bonnet AdS black holes in the grand canonical ensemble
Zou, De-Cheng; Wang, Bin
2014-01-01
We study the thermodynamics in the grand canonical ensemble of D-dimensional charged Gauss-Bonnet-AdS black holes in the extended phase space. We find that the usual small-large black hole phase transition, which exhibits an analogy with the van der Waals liquid-gas system, holds in five-dimensional spherical charged Gauss-Bonnet-AdS black holes when the potential is fixed within the range $0<\Phi<\frac{\sqrt{3}\pi}{4}$. For the other higher-dimensional and topological charged Gauss-Bonnet-AdS black holes, there is no such phase transition. In the limiting case of Reissner-Nordstrom-AdS black holes, with vanishing Gauss-Bonnet parameter, there is no critical behavior in the grand canonical ensemble. This result holds independent of the spacetime dimensions and topologies. We also examine the behavior of physical quantities in the vicinity of the critical point in the five-dimensional spherical charged Gauss-Bonnet-AdS black holes.
Ensemble transform sensitivity method for adaptive observations
Zhang, Yu; Xie, Yuanfu; Wang, Hongli; Chen, Dehui; Toth, Zoltan
2016-01-01
The Ensemble Transform (ET) method has been shown to be useful in providing guidance for adaptive observation deployment. It predicts forecast error variance reduction for each possible deployment using its corresponding transformation matrix in an ensemble subspace. In this paper, a new ET-based sensitivity (ETS) method, which calculates the gradient of forecast error variance reduction in terms of analysis error variance reduction, is proposed to specify regions for possible adaptive observations. ETS is a first order approximation of the ET; it requires just one calculation of a transformation matrix, increasing computational efficiency (60%-80% reduction in computational cost). An explicit mathematical formulation of the ETS gradient is derived and described. Both the ET and ETS methods are applied to the Hurricane Irene (2011) case and a heavy rainfall case for comparison. The numerical results imply that the sensitive areas estimated by the ETS and ET are similar. However, ETS is much more efficient, particularly when the resolution is higher and the number of ensemble members is larger.
Sundararaman, Ravishankar; Weaver, Kendra; Arias, Tomas
2012-02-01
The study of electrochemical systems within electronic density functional theory requires the handling of non-neutral electronic systems in the plane-wave basis in order to accurately describe charged metallic surfaces; this can be accomplished in joint density functional theory by adding an electrolyte with Debye screening [K. L. Weaver and T. A. Arias (under preparation)]. This capability opens up the opportunity to work in the grand canonical ensemble at fixed chemical potential μ for the electrons, which corresponds directly to the experimental setting in electrochemistry. We present efficient techniques for electronic density functional calculations at fixed μ, and demonstrate the improvement in predictive power over conventional neutral calculations using the underpotential deposition of Cu/Pt(111) as an example: for the first time, we calculate absolute voltages for electrochemical processes in excellent agreement with experiment, instead of voltage shifts alone.
Energy Technology Data Exchange (ETDEWEB)
Kadoura, Ahmad; Sun, Shuyu, E-mail: shuyu.sun@kaust.edu.sa; Salama, Amgad
2014-08-01
Accurate determination of thermodynamic properties of petroleum reservoir fluids is of great interest to many applications, especially in petroleum engineering and chemical engineering. Molecular simulation has many appealing features, especially its requirement of fewer tuned parameters but yet better predicting capability; however, it is well known that molecular simulation is very CPU expensive compared to equation of state approaches. We have recently introduced an efficient, thermodynamically consistent technique to rapidly regenerate Monte Carlo Markov Chains (MCMCs) at different thermodynamic conditions from existing data points that have been pre-computed with expensive classical simulation. This technique can speed up the simulation more than a million times, making the regenerated molecular simulation almost as fast as equation of state approaches. In this paper, this technique is first briefly reviewed and then numerically investigated in its capability of predicting ensemble averages of primary quantities at thermodynamic conditions neighboring the original simulated MCMCs. Moreover, this extrapolation technique is extended to predict second derivative properties (e.g. heat capacity and fluid compressibility). The method works by reweighting and reconstructing generated MCMCs in the canonical ensemble for Lennard-Jones particles. In this paper, the system's potential energy, pressure, isochoric heat capacity and isothermal compressibility along isochors, isotherms and paths of changing temperature and density were extrapolated from the original simulated points. Finally, an optimized set of Lennard-Jones parameters (ε, σ) for single-site models was proposed for methane, nitrogen and carbon monoxide.
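The reweighting step at the heart of such extrapolation techniques can be sketched in a few lines. The snippet below is a generic single-histogram reweighting demo on a two-level toy system, not the authors' Lennard-Jones implementation: energies sampled at inverse temperature beta0 are reweighted with exp(-(beta - beta0)E) to estimate a canonical average at a neighboring beta.

```python
import math
import random

def reweighted_mean(energies, beta0, beta):
    """Estimate the canonical average <E> at inverse temperature `beta`
    from samples drawn at `beta0`, by reweighting each sample with
    exp(-(beta - beta0) * E)."""
    logw = [-(beta - beta0) * e for e in energies]
    m = max(logw)  # subtract the max to stabilize the exponentials
    w = [math.exp(lw - m) for lw in logw]
    return sum(wi * e for wi, e in zip(w, energies)) / sum(w)

# Toy "simulation": two-level system with energies 0 and 1, sampled at beta0.
rng = random.Random(0)
beta0 = 1.0
p1 = math.exp(-beta0) / (1.0 + math.exp(-beta0))  # exact P(E=1) at beta0
energies = [1.0 if rng.random() < p1 else 0.0 for _ in range(20000)]

est = reweighted_mean(energies, beta0, beta=1.5)
exact = math.exp(-1.5) / (1.0 + math.exp(-1.5))
print(f"reweighted <E> at beta=1.5: {est:.4f} (exact {exact:.4f})")
```

Because the weights depend only on the stored sample energies, no new simulation is needed at the neighboring condition, which is the source of the speedup the abstract describes.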
Kadoura, Ahmad Salim
2014-08-01
Accurate determination of thermodynamic properties of petroleum reservoir fluids is of great interest to many applications, especially in petroleum engineering and chemical engineering. Molecular simulation has many appealing features, especially its requirement of fewer tuned parameters but yet better predicting capability; however, it is well known that molecular simulation is very CPU expensive compared to equation of state approaches. We have recently introduced an efficient, thermodynamically consistent technique to rapidly regenerate Monte Carlo Markov Chains (MCMCs) at different thermodynamic conditions from existing data points that have been pre-computed with expensive classical simulation. This technique can speed up the simulation more than a million times, making the regenerated molecular simulation almost as fast as equation of state approaches. In this paper, this technique is first briefly reviewed and then numerically investigated in its capability of predicting ensemble averages of primary quantities at thermodynamic conditions neighboring the original simulated MCMCs. Moreover, this extrapolation technique is extended to predict second derivative properties (e.g. heat capacity and fluid compressibility). The method works by reweighting and reconstructing generated MCMCs in the canonical ensemble for Lennard-Jones particles. In this paper, the system's potential energy, pressure, isochoric heat capacity and isothermal compressibility along isochors, isotherms and paths of changing temperature and density were extrapolated from the original simulated points. Finally, an optimized set of Lennard-Jones parameters (ε, σ) for single-site models was proposed for methane, nitrogen and carbon monoxide. © 2014 Elsevier Inc.
Electronic chemical response indexes at finite temperature in the canonical ensemble
Energy Technology Data Exchange (ETDEWEB)
Franco-Pérez, Marco, E-mail: qimfranco@hotmail.com, E-mail: jlgm@xanum.uam.mx, E-mail: avela@cinvestav.mx; Gázquez, José L., E-mail: qimfranco@hotmail.com, E-mail: jlgm@xanum.uam.mx, E-mail: avela@cinvestav.mx [Departamento de Química, Universidad Autónoma Metropolitana-Iztapalapa, Av. San Rafael Atlixco 186, México, D. F. 09340, México (Mexico); Departamento de Química, Centro de Investigación y de Estudios Avanzados, Av. Instituto Politécnico Nacional 2508, México, D. F. 07360, México (Mexico); Vela, Alberto, E-mail: qimfranco@hotmail.com, E-mail: jlgm@xanum.uam.mx, E-mail: avela@cinvestav.mx [Departamento de Química, Centro de Investigación y de Estudios Avanzados, Av. Instituto Politécnico Nacional 2508, México, D. F. 07360, México (Mexico)
2015-07-14
Assuming that the electronic energy is given by a smooth function of the number of electrons and within the extension of density functional theory to finite temperature, the first and second order chemical reactivity response functions of the Helmholtz free energy with respect to the temperature, the number of electrons, and the external potential are derived. It is found that in all cases related to the first or second derivatives with respect to the number of electrons or the external potential, there is a term given by the average of the corresponding derivative of the electronic energy of each state (ground and excited). For the second derivatives, including those related to the temperature, there is a thermal fluctuation contribution that is zero at zero temperature. Thus, all expressions reduce correctly to their corresponding chemical reactivity expressions at zero temperature and show that, at room temperature, the corrections are very small. When the assumption that the electronic energy is given by a smooth function of the number of electrons is replaced by the straight-line behavior connecting integer values, as required by the ensemble theorem, one needs to introduce directional derivatives in most cases, so that the temperature dependent expressions reduce correctly to their zero temperature counterparts. However, the main result holds, namely, at finite temperature the thermal corrections to the chemical reactivity response functions are very small. Consequently, the present work validates the usage of reactivity indexes calculated at zero temperature to infer chemical behavior at room and even higher temperatures.
Ensemble methods for handwritten digit recognition
DEFF Research Database (Denmark)
Hansen, Lars Kai; Liisberg, Christian; Salamon, P.
1992-01-01
… It is further shown that it is possible to estimate the ensemble performance as well as the learning curve on a medium-size database. In addition the authors present preliminary analysis of experiments on a large database and show that state-of-the-art performance can be obtained using the ensemble approach...
The canonical and grand canonical models for nuclear multifragmentation
Indian Academy of Sciences (India)
G Chaudhuri; S Das Gupta
2010-08-01
Many observables seen in intermediate energy heavy-ion collisions can be explained on the basis of statistical equilibrium. Calculations based on statistical equilibrium can be implemented in microcanonical ensemble, canonical ensemble or grand canonical ensemble. This paper deals with calculations with canonical and grand canonical ensembles. A recursive relation developed recently allows calculations with arbitrary precision for many nuclear problems. Calculations are done to study the nature of phase transition in nuclear matter.
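The kind of recursive relation referred to in this abstract can be sketched as follows. The exact form used by the authors may differ, but a standard canonical recursion over cluster partition functions ω_k is Q_N = (1/N) Σ_{k=1}^{N} k ω_k Q_{N-k} with Q_0 = 1, which builds the N-particle canonical partition function from those of smaller systems without any approximation.

```python
from math import factorial

def canonical_partition(omega, N):
    """Canonical partition functions Q_0..Q_N from cluster partition
    functions omega[k] (a dict {cluster size: partition function}),
    via the recursion
        Q_N = (1/N) * sum_{k=1}^{N} k * omega[k] * Q_{N-k},  Q_0 = 1."""
    q = [1.0]  # Q_0 = 1
    for n in range(1, N + 1):
        q.append(sum(k * omega.get(k, 0.0) * q[n - k]
                     for k in range(1, n + 1)) / n)
    return q

# Sanity check: with only monomers (omega_1 = z), Q_N must equal z**N / N!.
z = 2.5
q = canonical_partition({1: z}, 6)
print(q[6], z ** 6 / factorial(6))
```

Each Q_N costs O(N) operations given the previous values, so the full table up to N is O(N^2), which is what makes arbitrary-precision calculations for large systems practical.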
Ensemble Machine Learning Methods and Applications
Ma, Yunqian
2012-01-01
It is common wisdom that gathering a variety of views and inputs improves the process of decision making, and, indeed, underpins a democratic society. Dubbed “ensemble learning” by researchers in computational intelligence and machine learning, it is known to improve a decision system’s robustness and accuracy. Now, fresh developments are allowing researchers to unleash the power of ensemble learning in an increasing range of real-world applications. Ensemble learning algorithms such as “boosting” and “random forest” facilitate solutions to key computational issues such as face detection and are now being applied in areas as diverse as object tracking and bioinformatics. Responding to a shortage of literature dedicated to the topic, this volume offers comprehensive coverage of state-of-the-art ensemble learning techniques, including various contributions from researchers in leading industrial research labs. At once a solid theoretical study and a practical guide, the volume is a windfall for r...
Parametric Potential Determination by the Canonical Function Method
Tannous, C; Langlois, J M
1999-01-01
The canonical function method (CFM) is a powerful means for solving the radial Schrodinger equation (RSE). The mathematical difficulty of the RSE lies in the fact that it is a singular boundary value problem. The CFM turns it into a regular initial value problem and allows the full determination of the spectrum of the Schrodinger operator without calculating the eigenfunctions. Following the parametrisation suggested by Klapisch and by Green, Sellin and Zachor, we develop a CFM to optimise the potential parameters in order to reproduce the experimental quantum defect results for various Rydberg series of He, Ne and Ar as evaluated from Moore's data.
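The key idea, trading a singular boundary value problem for regular initial value problems, can be illustrated with a toy shooting calculation. This is not the CFM itself, and it uses an artificial potential V = 0 on [0, 1] rather than an atomic one: each trial energy E is integrated as an IVP from u(0) = 0, u'(0) = 1, and eigenvalues are the energies where the endpoint condition u(1) = 0 is met. For this infinite well the first eigenvalue should come out near π².

```python
import math

def endpoint(E, n=2000):
    """March u'' = -E u across [0, 1] as an initial value problem with
    u(0) = 0, u'(0) = 1, using a three-point scheme; return u(1).
    An eigenvalue is an E for which u(1) = 0."""
    h = 1.0 / n
    u_prev, u = 0.0, h  # u(0) = 0, first step consistent with u'(0) = 1
    for _ in range(n - 1):
        u_prev, u = u, 2.0 * u - u_prev - h * h * E * u
    return u

def eigenvalue(lo, hi, tol=1e-10):
    """Bisection on the endpoint value: the boundary value problem
    'find E with u(0) = u(1) = 0' becomes a sequence of regular
    initial value problems, one per trial energy."""
    f_lo = endpoint(lo)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        f_mid = endpoint(mid)
        if f_mid * f_lo > 0.0:
            lo, f_lo = mid, f_mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

E1 = eigenvalue(8.0, 12.0)
print(E1, math.pi ** 2)
```

No eigenfunction is ever stored: only the endpoint value of each IVP integration is needed, mirroring the CFM's determination of the spectrum without eigenfunctions.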
Multivariate localization methods for ensemble Kalman filtering
Roh, S.; Jun, M.; Szunyogh, I.; Genton, M. G.
2015-12-01
In ensemble Kalman filtering (EnKF), the small number of ensemble members that is feasible to use in a practical data assimilation application leads to sampling variability of the estimates of the background error covariances. The standard approach to reducing the effects of this sampling variability, which has also been found to be highly efficient in improving the performance of EnKF, is the localization of the estimates of the covariances. One family of localization techniques is based on taking the Schur (element-wise) product of the ensemble-based sample covariance matrix and a correlation matrix whose entries are obtained by the discretization of a distance-dependent correlation function. While the proper definition of the localization function for a single state variable has been extensively investigated, a rigorous definition of the localization function for multiple state variables that exist at the same locations has been seldom considered. This paper introduces two strategies for the construction of localization functions for multiple state variables. The proposed localization functions are tested in experiments that assimilate simulated observations into the bivariate Lorenz 95 model.
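The Schur-product localization described here can be sketched in a few lines. This is a generic illustration only: the paper's contribution concerns multivariate localization functions, whereas the snippet shows the standard single-variable case, with a Gaussian taper standing in for the usual compactly supported correlation function (e.g. Gaspari-Cohn), and an invented 10-member, 8-point ensemble.

```python
import math
import random

def sample_covariance(ensemble):
    """Ensemble-based sample covariance; `ensemble` is a list of state
    vectors, one per member."""
    n_mem = len(ensemble)
    dim = len(ensemble[0])
    mean = [sum(m[i] for m in ensemble) / n_mem for i in range(dim)]
    return [[sum((m[i] - mean[i]) * (m[j] - mean[j]) for m in ensemble)
             / (n_mem - 1) for j in range(dim)] for i in range(dim)]

def localize(cov, length_scale):
    """Schur (element-wise) product of the sample covariance with a
    distance-dependent correlation matrix, damping spurious
    long-range covariances caused by sampling variability."""
    dim = len(cov)
    rho = [[math.exp(-0.5 * ((i - j) / length_scale) ** 2)
            for j in range(dim)] for i in range(dim)]
    return [[cov[i][j] * rho[i][j] for j in range(dim)] for i in range(dim)]

rng = random.Random(0)
ens = [[rng.gauss(0.0, 1.0) for _ in range(8)] for _ in range(10)]  # 10 members
cov = sample_covariance(ens)
loc = localize(cov, length_scale=1.5)
print(f"cov[0][7] = {cov[0][7]:.4f}, localized = {loc[0][7]:.4f}")
```

The taper leaves each variance (the diagonal) unchanged while shrinking distant covariances toward zero, which is exactly the effect localization is meant to have.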
Multivariate localization methods for ensemble Kalman filtering
Roh, S.
2015-12-03
In ensemble Kalman filtering (EnKF), the small number of ensemble members that is feasible to use in a practical data assimilation application leads to sampling variability of the estimates of the background error covariances. The standard approach to reducing the effects of this sampling variability, which has also been found to be highly efficient in improving the performance of EnKF, is the localization of the estimates of the covariances. One family of localization techniques is based on taking the Schur (element-wise) product of the ensemble-based sample covariance matrix and a correlation matrix whose entries are obtained by the discretization of a distance-dependent correlation function. While the proper definition of the localization function for a single state variable has been extensively investigated, a rigorous definition of the localization function for multiple state variables that exist at the same locations has been seldom considered. This paper introduces two strategies for the construction of localization functions for multiple state variables. The proposed localization functions are tested in experiments that assimilate simulated observations into the bivariate Lorenz 95 model.
Multivariate localization methods for ensemble Kalman filtering
Directory of Open Access Journals (Sweden)
S. Roh
2015-05-01
In ensemble Kalman filtering (EnKF), the small number of ensemble members that is feasible to use in a practical data assimilation application leads to sampling variability of the estimates of the background error covariances. The standard approach to reducing the effects of this sampling variability, which has also been found to be highly efficient in improving the performance of EnKF, is the localization of the estimates of the covariances. One family of localization techniques is based on taking the Schur (entry-wise) product of the ensemble-based sample covariance matrix and a correlation matrix whose entries are obtained by the discretization of a distance-dependent correlation function. While the proper definition of the localization function for a single state variable has been extensively investigated, a rigorous definition of the localization function for multiple state variables has been seldom considered. This paper introduces two strategies for the construction of localization functions for multiple state variables. The proposed localization functions are tested in experiments that assimilate simulated observations into the bivariate Lorenz 95 model.
Multivariate localization methods for ensemble Kalman filtering
Roh, S.
2015-05-08
In ensemble Kalman filtering (EnKF), the small number of ensemble members that is feasible to use in a practical data assimilation application leads to sampling variability of the estimates of the background error covariances. The standard approach to reducing the effects of this sampling variability, which has also been found to be highly efficient in improving the performance of EnKF, is the localization of the estimates of the covariances. One family of localization techniques is based on taking the Schur (entry-wise) product of the ensemble-based sample covariance matrix and a correlation matrix whose entries are obtained by the discretization of a distance-dependent correlation function. While the proper definition of the localization function for a single state variable has been extensively investigated, a rigorous definition of the localization function for multiple state variables has been seldom considered. This paper introduces two strategies for the construction of localization functions for multiple state variables. The proposed localization functions are tested in experiments that assimilate simulated observations into the bivariate Lorenz 95 model.
Hybrid Intrusion Detection Using Ensemble of Classification Methods
Directory of Open Access Journals (Sweden)
M. Govindarajan
2014-01-01
One of the major developments in machine learning in the past decade is the ensemble method, which finds a highly accurate classifier by combining many moderately accurate component classifiers. In this research work, new ensemble classification methods are proposed: homogeneous ensemble classifiers using bagging and heterogeneous ensemble classifiers using an arcing classifier, and their performances are analyzed in terms of accuracy. A classifier ensemble is designed using Radial Basis Function (RBF) and Support Vector Machine (SVM) as base classifiers. The feasibility and the benefits of the proposed approaches are demonstrated by means of real and benchmark data sets of intrusion detection. The main originality of the proposed approach is based on three main parts: a preprocessing phase, a classification phase and a combining phase. A wide range of comparative experiments is conducted for real and benchmark data sets of intrusion detection. The accuracy of base classifiers is compared with homogeneous and heterogeneous models for the data mining problem. The proposed ensemble methods provide a significant improvement in accuracy compared to individual classifiers, and heterogeneous models exhibit better results than homogeneous models for real and benchmark data sets of intrusion detection.
Directory of Open Access Journals (Sweden)
J.M.P. Carmelo
2017-01-01
Whether in the thermodynamic limit, vanishing magnetic field h→0, and nonzero temperature the spin stiffness of the spin-1/2 XXX Heisenberg chain is finite or vanishes within the grand-canonical ensemble remains an unsolved and controversial issue, as different approaches yield contradictory results. Here we provide an upper bound on the stiffness and show that within that ensemble it vanishes for h→0 in the thermodynamic limit of chain length L→∞, at high temperatures T→∞. Our approach uses a representation in terms of the L physical spins 1/2. For all configurations that generate the exact spin-S energy and momentum eigenstates such a configuration involves a number 2S of unpaired spins 1/2 in multiplet configurations and L−2S spins 1/2 that are paired within Msp=L/2−S spin–singlet pairs. The Bethe-ansatz strings of length n=1 and n>1 describe a single unbound spin–singlet pair and a configuration within which n pairs are bound, respectively. In the case of n>1 pairs this holds both for ideal and deformed strings associated with n complex rapidities with the same real part. The use of such a spin 1/2 representation provides useful physical information on the problem under investigation in contrast to often less controllable numerical studies. Our results provide strong evidence for the absence of ballistic transport in the spin-1/2 XXX Heisenberg chain in the thermodynamic limit, for high temperatures T→∞, vanishing magnetic field h→0 and within the grand-canonical ensemble.
Carmelo, J. M. P.; Prosen, T.
2017-01-01
Whether in the thermodynamic limit, vanishing magnetic field h → 0, and nonzero temperature the spin stiffness of the spin-1/2 XXX Heisenberg chain is finite or vanishes within the grand-canonical ensemble remains an unsolved and controversial issue, as different approaches yield contradictory results. Here we provide an upper bound on the stiffness and show that within that ensemble it vanishes for h → 0 in the thermodynamic limit of chain length L → ∞, at high temperatures T → ∞. Our approach uses a representation in terms of the L physical spins 1/2. For all configurations that generate the exact spin-S energy and momentum eigenstates such a configuration involves a number 2S of unpaired spins 1/2 in multiplet configurations and L − 2S spins 1/2 that are paired within Msp = L/2 − S spin-singlet pairs. The Bethe-ansatz strings of length n = 1 and n > 1 describe a single unbound spin-singlet pair and a configuration within which n pairs are bound, respectively. In the case of n > 1 pairs this holds both for ideal and deformed strings associated with n complex rapidities with the same real part. The use of such a spin-1/2 representation provides useful physical information on the problem under investigation in contrast to often less controllable numerical studies. Our results provide strong evidence for the absence of ballistic transport in the spin-1/2 XXX Heisenberg chain in the thermodynamic limit, for high temperatures T → ∞, vanishing magnetic field h → 0 and within the grand-canonical ensemble.
Generalized ensemble method applied to study systems with strong first order transitions
Małolepsza, E.; Kim, J.; Keyes, T.
2015-09-01
At strong first-order phase transitions, the entropy versus energy or, at constant pressure, enthalpy, exhibits convex behavior, and the statistical temperature curve correspondingly exhibits an S-loop or back-bending. In the canonical and isothermal-isobaric ensembles, with temperature as the control variable, the probability density functions become bimodal with peaks localized outside of the S-loop region. Inside, states are unstable, and as a result simulation of equilibrium phase coexistence becomes impossible. To overcome this problem, a method was proposed by Kim, Keyes and Straub [1], where optimally designed generalized ensemble sampling was combined with replica exchange, and denoted generalized replica exchange method (gREM). This new technique uses parametrized effective sampling weights that lead to a unimodal energy distribution, transforming unstable states into stable ones. In the present study, the gREM, originally developed as a Monte Carlo algorithm, was implemented to work with molecular dynamics in an isobaric ensemble and coded into LAMMPS, a highly optimized open source molecular simulation package. The method is illustrated in a study of the very strong solid/liquid transition in water.
Goldstein, Sheldon; Lebowitz, Joel L.; Tumulka, Roderich; Zanghi, Nino
2005-01-01
It is well known that a system, S, weakly coupled to a heat bath, B, is described by the canonical ensemble when the composite, S+B, is described by the microcanonical ensemble corresponding to a suitable energy shell. This is true both for classical distributions on the phase space and for quantum density matrices. Here we show that a much stronger statement holds for quantum systems. Even if the state of the composite corresponds to a single wave function rather than a mixture, the reduced ...
Velazquez, L.; Castro-Palacio, J. C.
2015-03-01
Velazquez and Curilef [J. Stat. Mech. (2010) P02002, 10.1088/1742-5468/2010/02/P02002; J. Stat. Mech. (2010) P04026, 10.1088/1742-5468/2010/04/P04026] have proposed a methodology to extend Monte Carlo algorithms that are based on the canonical ensemble. According to our previous study, their proposal allows us to overcome slow sampling problems in systems that undergo any type of temperature-driven phase transition. After a comprehensive review of the ideas and connections of this framework, we discuss the application of a reweighting technique to improve the accuracy of microcanonical calculations, specifically, the well-known multihistogram method of Ferrenberg and Swendsen [Phys. Rev. Lett. 63, 1195 (1989), 10.1103/PhysRevLett.63.1195]. As an example of application, we reconsider the study of the four-state Potts model on the square lattice L × L with periodic boundary conditions. This analysis allows us to detect the existence of a very small latent heat per site q_L during the occurrence of the temperature-driven phase transition of this model, whose size dependence seems to follow a power law q_L(L) ∝ (1/L)^z with exponent z ≃ 0.26 ± 0.02. We also discuss the compatibility of these results with the continuous character of the temperature-driven phase transition when L → +∞.
Tweet-based Target Market Classification Using Ensemble Method
Directory of Open Access Journals (Sweden)
Muhammad Adi Khairul Anshary
2016-09-01
Target market classification is aimed at focusing marketing activities on the right targets. Classification of target markets can be done through data mining and by utilizing data from social media, e.g. Twitter. The end results of data mining are learning models that can classify new data. Ensemble methods can improve the accuracy of the models and therefore provide better results. In this study, classification of target markets was conducted on a dataset of 3000 tweets in order to extract features. Classification models were constructed by manipulating the training data using two ensemble methods (bagging and boosting). To investigate the effectiveness of the ensemble methods, this study used the CART (classification and regression tree) algorithm for comparison. Three categories of consumer goods (computers, mobile phones and cameras) and three categories of sentiment (positive, negative and neutral) were classified into three target-market categories. Machine learning was performed using Weka 3.6.9. The results on the test data showed that the bagging method improved the accuracy of CART by 1.9% (to 85.20%). On the other hand, for sentiment classification, the ensemble methods were not successful in increasing the accuracy of CART. The results of this study may be taken into consideration by companies who approach their customers through social media, especially Twitter.
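As a rough illustration of the bagging-versus-CART comparison described above, the following sketch uses scikit-learn on synthetic data; the paper's Weka setup, tweet features, and reported accuracies are not reproduced here, so the dataset and parameters below are assumptions.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic 3-class stand-in for the tweet feature matrix.
X, y = make_classification(n_samples=600, n_features=20, n_informative=5,
                           n_classes=3, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

# Baseline CART-style tree vs. a bagged ensemble of such trees
# (BaggingClassifier's default base learner is a decision tree).
cart = DecisionTreeClassifier(random_state=0).fit(Xtr, ytr)
bag = BaggingClassifier(n_estimators=50, random_state=0).fit(Xtr, ytr)

acc_cart = cart.score(Xte, yte)
acc_bag = bag.score(Xte, yte)
print(acc_cart, acc_bag)
```

On data like this the bagged ensemble typically matches or exceeds the single tree, mirroring the paper's finding for target-market classification.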
Ensemble classification methods for autism disordered speech
Directory of Open Access Journals (Sweden)
Zoubir Abdeslem Benselama
2016-09-01
In this paper, we present the results of our investigation on autism classification by applying ensemble classifiers to disordered speech signals. The aim is to distinguish between autism sub-classes by comparing an ensemble combining three decision methods: the sequential minimal optimization (SMO) algorithm, random forests (RF), and the feature-subspace aggregating approach (Feating). The conducted experiments allowed a reduction of 30% of the feature space with an accuracy increase over the baseline of 8.66% on the development set and 6.62% on the test set.
Directory of Open Access Journals (Sweden)
Kazuo Saito
2012-01-01
The effect of lateral boundary perturbations (LBPs) on the mesoscale breeding (MBD) method and the local ensemble transform Kalman filter (LETKF) as initial perturbation generators for mesoscale ensemble prediction systems (EPSs) was examined. An LBP method using the Japan Meteorological Agency's (JMA's) operational one-week global ensemble prediction was developed and applied to the mesoscale EPS of the Meteorological Research Institute for the World Weather Research Programme, Beijing 2008 Olympics Research and Development Project. The amplitude of the LBPs was adjusted based on the ensemble spread statistics, considering the difference in forecast times between the JMA's one-week EPS and the associated breeding/ensemble Kalman filter (EnKF) cycles. LBPs in the ensemble forecast increase the ensemble spread and improve the accuracy of the ensemble mean forecast. In the MBD method, if LBPs are introduced in the breeding cycles, the growth rate of the generated bred vectors increases, and the ensemble spread and the root mean square errors (RMSEs) of the ensemble mean are further improved in the ensemble forecast. With LBPs in the breeding cycles, positional correspondence to the meteorological disturbances and the orthogonality of the bred vectors are improved. Brier Skill Scores (BSSs) also showed a remarkable effect of LBPs in the breeding cycles. LBPs showed a similar effect with the LETKF. If LBPs are introduced in the EnKF data assimilation cycles, the ensemble spread, ensemble mean accuracy, and BSSs for precipitation are improved, although the relative advantage of the LETKF over the MBD method as the initial perturbation generator was not necessarily clear. LBPs in the EnKF cycles contribute not to orthogonalisation but to preventing the underestimation of the forecast error near the lateral boundary. The accuracy of the LETKF analyses was compared with that of the mesoscale 4D-VAR analyses. With LBPs in the LETKF cycles, the RMSEs of the
Canonical Transform Method for Treating Strongly Anisotropic Magnets
DEFF Research Database (Denmark)
Cooke, J. F.; Lindgård, Per-Anker
1977-01-01
An infinite-order perturbation approach to the theory of magnetism in magnets with strong single-ion anisotropy is given. This approach is based on a canonical transformation of the system into one with a diagonal crystal field, an effective two-ion anisotropy, and reduced ground-state corrections. ... A matrix-element matching procedure is used to obtain an explicit expression for the spin-wave energy to second order. The consequences of this theory are illustrated by an application to a simple example with planar anisotropy and an external magnetic field. A detailed comparison between the results...
Splitting K-symplectic methods for non-canonical separable Hamiltonian problems
Zhu, Beibei; Zhang, Ruili; Tang, Yifa; Tu, Xiongbiao; Zhao, Yue
2016-10-01
Non-canonical Hamiltonian systems have K-symplectic structures which are preserved by K-symplectic numerical integrators. There is no universal method to construct K-symplectic integrators for arbitrary non-canonical Hamiltonian systems. However, in many cases of interest, by using splitting, we can construct explicit K-symplectic methods for separable non-canonical systems. In this paper, we identify situations where splitting K-symplectic methods can be constructed. Comparative numerical experiments in three non-canonical Hamiltonian problems show that symmetric/non-symmetric splitting K-symplectic methods applied to the non-canonical systems are more efficient than the same-order Gauss' methods/non-symmetric symplectic methods applied to the corresponding canonicalized systems; for the non-canonical Lotka-Volterra model, the splitting algorithms behave better in efficiency and energy conservation than the K-symplectic method we construct via generating function technique. In our numerical experiments, the favorable energy conservation property of the splitting K-symplectic methods is apparent.
EnsembleGASVR: A novel ensemble method for classifying missense single nucleotide polymorphisms
Rapakoulia, Trisevgeni
2014-04-26
Motivation: Single nucleotide polymorphisms (SNPs) are considered the most frequently occurring DNA sequence variations. Several computational methods have been proposed for the classification of missense SNPs as neutral or disease-associated. However, existing computational approaches fail to select relevant features, choosing them arbitrarily without sufficient documentation. Moreover, they are limited by the problem of missing values and imbalance between the learning datasets, and most of them do not support their predictions with confidence scores. Results: To overcome these limitations, a novel ensemble computational methodology is proposed. EnsembleGASVR facilitates a two-step algorithm, which in its first step applies a novel evolutionary embedded algorithm to locate close-to-optimal Support Vector Regression models. In its second step, these models are combined to extract a universal predictor, which is less prone to overfitting, systematizes the rebalancing of the learning sets, and uses an internal approach for solving the missing-values problem without loss of information. Confidence scores support all the predictions, and the model can be tuned by modifying the classification thresholds. An extensive study was performed to collect the most relevant features for the problem of classifying SNPs, and a superset of 88 features was constructed. Experimental results show that the proposed framework outperforms well-known algorithms in terms of classification performance on the examined datasets. Finally, the proposed algorithmic framework was able to uncover the significant role of certain features, such as solvent accessibility, and the top-scored predictions were further validated by linking them with disease phenotypes.
Algebraic method for exact solution of canonical partition function in nuclear multifragmentation
Parvan, A S
2002-01-01
An algebraic method is derived for the exact recursion formula for the calculation of the canonical partition function of non-interacting finite systems of particles obeying Bose-Einstein, Fermi-Dirac, or Maxwell-Boltzmann statistics, or parastatistics. A new exactly solvable multifragmentation model with baryon- and electric-charge conservation laws is developed. Recursion relations for this model are presented that allow exact calculation of the canonical partition function for any statistics.
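For non-interacting particles, recursion formulas of this kind are commonly written as Z_N = (1/N) Σ_{k=1}^{N} η^(k+1) z(kβ) Z_{N-k}, with z(β) the one-particle partition function, Z_0 = 1, and η = +1 (Bose-Einstein) or η = -1 (Fermi-Dirac). A minimal sketch of that standard recursion (not the paper's multifragmentation model with conservation laws):

```python
import math

def single_particle_z(beta, levels):
    """One-particle partition function z(beta) = sum_i exp(-beta * e_i)."""
    return sum(math.exp(-beta * e) for e in levels)

def canonical_Z(N, beta, levels, eta=1):
    """Exact recursion Z_N = (1/N) sum_k eta^(k+1) z(k*beta) Z_{N-k}.
    eta=+1: Bose-Einstein statistics, eta=-1: Fermi-Dirac statistics."""
    Z = [1.0]  # Z_0 = 1
    for n in range(1, N + 1):
        s = sum((eta ** (k + 1)) * single_particle_z(k * beta, levels) * Z[n - k]
                for k in range(1, n + 1))
        Z.append(s / n)
    return Z[N]
```

Sanity checks: for a single level, bosons give Z_N = 1 for all N (one symmetric state), while fermions give Z_2 = 0 (Pauli exclusion); for two levels {0, ε}, two fermions give Z_2 = e^(-βε), the single doubly-occupied configuration.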
Ensemble Methods in Data Mining Improving Accuracy Through Combining Predictions
Seni, Giovanni
2010-01-01
This book is aimed at novice and advanced analytic researchers and practitioners -- especially in Engineering, Statistics, and Computer Science. Those with little exposure to ensembles will learn why and how to employ this breakthrough method, and advanced practitioners will gain insight into building even more powerful models. Throughout, snippets of code in R are provided to illustrate the algorithms described and to encourage the reader to try the techniques. The authors are industry experts in data mining and machine learning who are also adjunct professors and popular speakers. Although e
Local polynomial method for ensemble forecast of time series
Directory of Open Access Journals (Sweden)
S. Regonda
2005-01-01
We present a nonparametric approach based on local polynomial regression for ensemble forecasting of time series. The state space is first reconstructed by embedding the univariate time series of the response variable in a space of dimension D with a delay time τ. To obtain a forecast from a given time point t, three steps are involved: (i) the current state of the system is mapped onto the state space as the feature vector; (ii) a small number K = α*n of neighbors of the feature vector (and their future evolution) are identified in the state space, where α is a fraction in (0,1] and n is the data length; and (iii) a polynomial of order p is fitted to the identified neighbors and then used for prediction. A suite of parameter combinations (D, τ, α, p) is selected based on an objective criterion, called Generalized Cross Validation (GCV). All of the selected parameter combinations are then used to issue a T-step iterated forecast starting from the current time t, thus generating an ensemble forecast from which the forecast probability density function (PDF) can be obtained. The ensemble approach improves upon the traditional method of providing a single mean forecast by quantifying the forecast uncertainty. Further, for short, noisy data it can provide better forecasts. We demonstrate the utility of this approach on two synthetic data sets (the Henon and Lorenz attractors) and two real data sets (Great Salt Lake bi-weekly volume and the NINO3 index). This framework can also be used to forecast a vector of response variables based on a vector of predictors.
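A minimal sketch of steps (i)-(iii) and of the ensemble over parameter combinations, assuming τ = 1, a linear (p = 1) local polynomial, and a fixed neighbor count K for brevity (the GCV-based parameter selection of the paper is not reproduced):

```python
import numpy as np

def local_poly_forecast(x, D=3, K=10, T=5):
    """Iterated T-step forecast with a locally fitted linear polynomial,
    using a delay embedding with unit delay time."""
    x = np.asarray(x, dtype=float)
    X = np.array([x[i:i + D] for i in range(len(x) - D)])  # embedded states
    y = x[D:]                                              # one-step targets
    state = list(x[-D:])
    preds = []
    for _ in range(T):
        f = np.array(state)                                # (i) feature vector
        idx = np.argsort(np.linalg.norm(X - f, axis=1))[:K]  # (ii) K neighbors
        A = np.column_stack([np.ones(len(idx)), X[idx]])   # (iii) local fit
        coef, *_ = np.linalg.lstsq(A, y[idx], rcond=None)
        nxt = float(coef[0] + coef[1:] @ f)
        preds.append(nxt)
        state = state[1:] + [nxt]                          # iterate the forecast
    return preds

def ensemble_forecast(x, T=5):
    """Vary (D, K) over a small suite to generate an ensemble of trajectories,
    from which a forecast PDF could be estimated."""
    return [local_poly_forecast(x, D=D, K=K, T=T) for D in (2, 3) for K in (8, 12)]
```

On a purely linear series the local linear fit reproduces the dynamics exactly, which makes a convenient correctness check.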
String Variant Alias Extraction Method using Ensemble Learner
Directory of Open Access Journals (Sweden)
P.Selvaperumal
2016-02-01
String variant alias names are alias names that are string-variant forms of a primary name. Extracting string variant aliases is important in tasks such as information retrieval, information extraction, and name resolution. String variant alias extraction involves candidate alias name extraction and string variant alias validation. In this paper, string variant aliases are first extracted from the web, and then, using seven different string similarity metrics as features, candidate aliases are validated using the ensemble classifier random forest. Experiments were conducted using a string variant name-alias dataset containing name-alias data for 15 persons, comprising 30 name-alias pairs. Experimental results show that the proposed method outperforms other similar methods in terms of accuracy.
Adaptive error covariances estimation methods for ensemble Kalman filters
Energy Technology Data Exchange (ETDEWEB)
Zhen, Yicun, E-mail: zhen@math.psu.edu [Department of Mathematics, The Pennsylvania State University, University Park, PA 16802 (United States); Harlim, John, E-mail: jharlim@psu.edu [Department of Mathematics and Department of Meteorology, The Pennsylvania State University, University Park, PA 16802 (United States)
2015-08-01
This paper presents a computationally fast algorithm for estimating both the system and observation noise covariances of nonlinear dynamics, which can be used in an ensemble Kalman filtering framework. The new method is a modification of Belanger's recursive method that avoids the expensive computational cost of inverting error covariance matrices of products of innovation processes of different lags when the number of observations becomes large. When only products of innovation processes up to one lag are used, the computational cost is comparable to that of a recently proposed method by Berry and Sauer. However, our method is more flexible since it allows the use of information from products of innovation processes of more than one lag. Extensive numerical comparisons between the proposed method and both the original Belanger and the Berry-Sauer schemes are shown in various examples, ranging from low-dimensional linear and nonlinear systems of SDEs to the 40-dimensional stochastically forced Lorenz-96 model. Our numerical results suggest that the proposed scheme is as accurate as the original Belanger scheme on low-dimensional problems and has a wider range of accurate estimates than the Berry-Sauer method on the Lorenz-96 example.
Extending the square root method to account for additive forecast noise in ensemble methods
Raanes, Patrick N; Bertino, Laurent
2015-01-01
A square root approach is considered for the problem of accounting for model noise in the forecast step of the ensemble Kalman filter (EnKF) and related algorithms. The primary aim is to replace the method of simulated, pseudo-random, additive noise so as to eliminate the associated sampling errors. The core method is based on the analysis step of ensemble square root filters, and consists in the deterministic computation of a transform matrix. The theoretical advantages regarding dynamical consistency are surveyed, applying equally well to the square root method in the analysis step. A fundamental problem due to the limited size of the ensemble subspace is discussed, and novel solutions that complement the core method are suggested and studied. Benchmarks from twin experiments with simple, low-order dynamics indicate improved performance over standard approaches such as additive, simulated noise and multiplicative inflation.
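The deterministic computation of a transform matrix mentioned above can be sketched for the analysis-step variant as a generic ETKF-style symmetric square root (an illustration of the idea, not the authors' specific forecast-noise scheme): the forecast anomalies are right-multiplied by a matrix T chosen so that the updated sample covariance equals the Kalman analysis covariance, with no simulated random noise.

```python
import numpy as np

def sqrt_transform(A, HA, R):
    """Right-multiply forecast anomalies A (n x m) by a deterministic transform
    T so that (A T)(A T)^T / (m-1) equals the Kalman analysis covariance."""
    m = A.shape[1]
    S = HA / np.sqrt(m - 1)
    C = S.T @ np.linalg.solve(R, S)                  # (m x m) ensemble-space matrix
    w, V = np.linalg.eigh(C)
    T = V @ np.diag(1.0 / np.sqrt(1.0 + w)) @ V.T    # symmetric sqrt of (I + C)^(-1)
    return A @ T

# Self-check on random data: transformed anomalies reproduce the exact
# Kalman-filter analysis covariance, without sampling error.
rng = np.random.default_rng(0)
n, m, p = 4, 30, 2
A = rng.standard_normal((n, m))
A -= A.mean(axis=1, keepdims=True)                   # anomalies have zero mean
H = rng.standard_normal((p, n))
R = np.eye(p)
Aa = sqrt_transform(A, H @ A, R)

Pf = A @ A.T / (m - 1)
Pa_kf = Pf - Pf @ H.T @ np.linalg.solve(H @ Pf @ H.T + R, H @ Pf)
```

The agreement of the two covariance computations is exactly the sampling-error elimination that motivates replacing simulated additive noise by a square-root transform.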
Comparison of four ensemble methods combining regional climate simulations over Asia
Feng, Jinming; Lee, Dong-Kyou; Fu, Congbin; Tang, Jianping; Sato, Yasuo; Kato, Hisashi; McGregor, John L.; Mabuchi, Kazuo
2011-02-01
A number of uncertainties exist in climate simulation because the results of climate models are influenced by factors such as their dynamic framework, physical processes, initial and driving fields, and horizontal and vertical resolution. The uncertainties of the model results may be reduced, and the credibility can be improved by employing multi-model ensembles. In this paper, multi-model ensemble results using 10-year simulations of five regional climate models (RCMs) from December 1988 to November 1998 over Asia are presented and compared. The simulation results are derived from phase II of the Regional Climate Model Inter-comparison Project (RMIP) for Asia. Using the methods of the arithmetic mean, the weighted mean, multivariate linear regression, and singular value decomposition, the ensembles for temperature, precipitation, and sea level pressure are carried out. The results show that the multi-RCM ensembles outperform the single RCMs in many aspects. Among the four ensemble methods used, the multivariate linear regression, based on the minimization of the root mean square errors, significantly improved the ensemble results. With regard to the spatial distribution of the mean climate, the ensemble result for temperature was better than that for precipitation. With an increasing number of models used in the ensembles, the ensemble results were more accurate. Therefore, a multi-model ensemble is an efficient approach to improve the results of regional climate simulations.
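Of the four combination methods, the multivariate linear regression ensemble is the simplest to sketch: fit weights (and an intercept) to the observations over a training period by least squares, minimizing the root mean square error, then apply them to new model output. The toy data below are assumptions, not the RMIP simulations.

```python
import numpy as np

def regression_ensemble(models_train, obs_train, models_new):
    """Least-squares combination of K model outputs (columns), with intercept,
    fit to minimize RMSE over the training period."""
    T = models_train.shape[0]
    A = np.column_stack([np.ones(T), models_train])
    coef, *_ = np.linalg.lstsq(A, obs_train, rcond=None)
    return coef[0] + models_new @ coef[1:]

# Toy check: if the truth really is an affine combination of the models,
# the regression ensemble recovers it while the arithmetic mean does not.
rng = np.random.default_rng(1)
w_true = np.array([0.6, 0.3, 0.1])
models_train = rng.standard_normal((100, 3))
obs_train = 2.0 + models_train @ w_true
models_new = rng.standard_normal((10, 3))
obs_new = 2.0 + models_new @ w_true
pred = regression_ensemble(models_train, obs_train, models_new)
mean_pred = models_new.mean(axis=1)                # arithmetic-mean ensemble
```

The contrast with the unweighted mean illustrates why the regression-based ensemble significantly improved the results in the comparison above.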
Frank, T. D.; Kim, S.; Dotov, D. G.
2013-11-01
Canonical-dissipative nonequilibrium energy distributions play an important role in the life sciences. In one of the most fundamental forms, such energy distributions correspond to two-parametric normal distributions truncated to the left. We present an implicit moment method involving the first and second energy moments to estimate the distribution parameters. It is shown that the method is consistent with Cohen's 1949 formula. The implementation of the algorithm is discussed and the range of admissible parameter values is identified. In addition, an application to an earlier study on human oscillatory hand movements is presented. In this earlier study, energy was conceptualized as the energy of a Hamiltonian oscillator model. The canonical-dissipative approach allows for studying the systematic change of the model parameters with oscillation frequency. It is shown that the results obtained with the implicit moment method are consistent with those derived in the earlier study by other means.
Ensemble Methods for MiRNA Target Prediction from Expression Data.
Directory of Open Access Journals (Sweden)
Thuc Duy Le
microRNAs (miRNAs) are short regulatory RNAs that are involved in several diseases, including cancers. Identifying miRNA functions is very important in understanding disease mechanisms and determining the efficacy of drugs. An increasing number of computational methods have been developed to explore miRNA functions by inferring the miRNA-mRNA regulatory relationships from data. Each of the methods is developed based on some assumptions and constraints, for instance, assuming linear relationships between variables. For such reasons, computational methods are often subject to the problem of inconsistent performance across different datasets. On the other hand, ensemble methods integrate the results from individual methods and have been proved in theory to outperform each of their individual component methods. In this paper, we investigate the performance of some ensemble methods over the commonly used miRNA target prediction methods. We apply eight different popular miRNA target prediction methods to three cancer datasets and compare their performance with the ensemble methods, which integrate the results from each combination of the individual methods. The validation results using experimentally confirmed databases show that the results of the ensemble methods complement those obtained by the individual methods and that the ensemble methods perform better than the individual methods across different datasets. The ensemble method Pearson+IDA+Lasso, which combines methods of different approaches, including a correlation method, a causal inference method, and a regression method, is the best-performing ensemble method in this study. Further analysis of the results of this ensemble method shows that it can obtain more targets which could not be found by any of the single methods, and that the discovered targets are more statistically significant and functionally enriched. The source codes, datasets, miRNA target predictions by all methods, and
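One simple way to integrate ranked target lists from several prediction methods is Borda counting: each method awards points by rank position, and targets are re-ranked by total points. This is an illustrative rank aggregation, not necessarily the combination scheme used in the paper.

```python
def borda_aggregate(rankings):
    """rankings: list of ranked target lists (best first), one per method.
    Returns the targets sorted by summed Borda points (highest first)."""
    scores = {}
    for ranking in rankings:
        n = len(ranking)
        for pos, target in enumerate(ranking):
            # a target in position pos of an n-long list earns n - pos points
            scores[target] = scores.get(target, 0) + (n - pos)
    return sorted(scores, key=scores.get, reverse=True)
```

For example, three hypothetical methods ranking targets a, b, c differently yield a consensus ordering that no single method need agree with exactly.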
Optimized Audio Classification and Segmentation Algorithm by Using Ensemble Methods
Directory of Open Access Journals (Sweden)
Saadia Zahid
2015-01-01
Audio segmentation is a basis for multimedia content analysis, which is among the most important and widely used applications nowadays. An optimized audio classification and segmentation algorithm is presented in this paper that segments a superimposed audio stream on the basis of its content into four main audio types: pure speech, music, environment sound, and silence. The proposed algorithm preserves important audio content and reduces the misclassification rate without using a large amount of training data; it handles noise and is suitable for real-time applications. Noise in an audio stream is segmented out as environment sound. A hybrid classification approach is used: bagged support vector machines (SVMs) with artificial neural networks (ANNs). The audio stream is first classified into speech and non-speech segments using bagged support vector machines; the non-speech segment is further classified into music and environment sound using artificial neural networks; and lastly, the speech segment is classified into silence and pure-speech segments on the basis of a rule-based classifier. Minimal data is used for training the classifiers; ensemble methods are used for minimizing the misclassification rate, and approximately 98% accurate segments are obtained. A fast and efficient algorithm is designed that can be used with real-time multimedia applications.
Lísal, Martin; Brennan, John K.; Smith, William R.; Siperstein, Flor R.
2004-09-01
We present a simulation tool to study fluid mixtures that are simultaneously chemically reacting and adsorbing in a porous material. The method is a combination of the reaction ensemble Monte Carlo method and the dual control volume grand canonical molecular dynamics technique. The method, termed the dual control cell reaction ensemble molecular dynamics method, allows for the calculation of both equilibrium and nonequilibrium transport properties in porous materials such as diffusion coefficients, permeability, and mass flux. Control cells, which are in direct physical contact with the porous solid, are used to maintain the desired reaction and flow conditions for the system. The simulation setup closely mimics an actual experimental system in which the thermodynamic and flow parameters are precisely controlled. We present an application of the method to the dry reforming of methane reaction within a nanoscale reactor model in the presence of a semipermeable membrane that was modeled as a porous material similar to silicalite. We studied the effects of the membrane structure and porosity on the reaction species permeability by considering three different membrane models. We also studied the effects of an imposed pressure gradient across the membrane on the mass flux of the reaction species. Conversion of syngas (H2/CO) increased significantly in all the nanoscale membrane reactor models considered. A brief discussion of further potential applications is also presented.
Canonical integration and analysis of periodic maps using non-standard analysis and Lie methods
Energy Technology Data Exchange (ETDEWEB)
Forest, E.; Berz, M.
1988-06-01
We describe a method and a way of thinking which is ideally suited for the study of systems represented by canonical integrators. Starting with the continuous description provided by the Hamiltonians, we replace it by a succession of preferably canonical maps. The power series representation of these maps can be extracted with a computer implementation of the tools of Non-Standard Analysis and analyzed by the same tools. For a nearly integrable system, we can define a Floquet ring in a way consistent with our needs. Using the finite time maps, the Floquet ring is defined only at the locations s_i where one perturbs or observes the phase space. At most the total number of locations is equal to the total number of steps of our integrator. We can also produce pseudo-Hamiltonians which describe the motion induced by these maps. 15 refs., 1 fig.
Caprio, M A; McCoy, A E; 10.1063/1.3445529
2010-01-01
It is shown that the method of infinitesimal generators ("Racah's method") can be broadly and systematically formulated as a method applicable to the calculation of reduced coupling coefficients for a generic subalgebra chain G>H, provided the reduced matrix elements of the generators of G and the recoupling coefficients of H are known. The calculation of SO(5)>SO(4) reduced coupling coefficients is considered as an example, and a procedure for transformation of reduced coupling coefficients between canonical and physical subalgebra chains is presented. The problem of calculating coupling coefficients for generic irreps of SO(5), reduced with respect to any of its subalgebra chains, is completely resolved by this approach.
A Random Forest-based ensemble method for activity recognition.
Feng, Zengtao; Mo, Lingfei; Li, Meng
2015-01-01
This paper presents a multi-sensor ensemble approach to human physical activity (PA) recognition using random forests. We designed an ensemble learning algorithm which integrates several independent random forest classifiers based on different sensor feature sets to build a more stable, more accurate, and faster classifier for human activity recognition. To evaluate the algorithm, PA data collected from PAMAP (Physical Activity Monitoring for Aging People), a standard, publicly available database, was utilized for training and testing. The experimental results show that the algorithm is able to correctly recognize 19 PA types with an accuracy of 93.44%, while training is faster than for other methods. The ensemble classifier system based on the random forest (RF) algorithm can achieve high recognition accuracy and fast calculation.
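The integration step of such an ensemble, combining the label predictions of several independently trained classifiers (one per sensor feature set, in the setup above) by majority vote, can be sketched as:

```python
from collections import Counter

def majority_vote(predictions):
    """predictions: list of per-classifier label sequences, one sequence per
    classifier. Returns the majority label at each time step."""
    return [Counter(votes).most_common(1)[0][0] for votes in zip(*predictions)]
```

For instance, three classifiers voting on three samples produce one fused label per sample; with an odd number of voters and no ties, the fused label is the one at least two classifiers agree on.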
Directory of Open Access Journals (Sweden)
Heinz Toparkus
2014-04-01
In this paper we consider first-order systems with constant coefficients for two real-valued functions of two real variables. This is both a problem in itself and an alternative view of the classical linear partial differential equations of second order with constant coefficients. The classification of the systems is done using elementary methods of linear algebra. Each type has its own canonical form in the associated characteristic coordinate system. One can then formulate initial value problems on appropriate base domains and attempt to solve these problems by means of transform methods.
Hybrid Levenberg-Marquardt and weak-constraint ensemble Kalman smoother method
Mandel, J.; Bergou, E.; Gürol, S.; Gratton, S.; Kasanický, I.
2016-03-01
The ensemble Kalman smoother (EnKS) is used as a linear least-squares solver in the Gauss-Newton method for the large nonlinear least-squares system in incremental 4DVAR. The ensemble approach is naturally parallel over the ensemble members and no tangent or adjoint operators are needed. Furthermore, adding a regularization term results in replacing the Gauss-Newton method, which may diverge, by the Levenberg-Marquardt method, which is known to be convergent. The regularization is implemented efficiently as an additional observation in the EnKS. The method is illustrated on the Lorenz 63 model and a two-level quasi-geostrophic model.
ENSEMBLE methods to reconcile disparate national long range dispersion forecasting
Energy Technology Data Exchange (ETDEWEB)
Mikkelsen, T.; Galmarini, S.; Bianconi, R.; French, S. (eds.)
2003-11-01
ENSEMBLE is a web-based decision support system for real-time exchange and evaluation of national long-range dispersion forecasts of nuclear releases with cross-boundary consequences. The system was developed with the purpose of reconciling disparate national forecasts for long-range dispersion. ENSEMBLE addresses the problem of achieving a common coherent strategy across European national emergency management when national long-range dispersion forecasts differ from one another during an accidental atmospheric release of radioactive material. A series of new decision-making 'ENSEMBLE' procedures and web-based software evaluation and exchange tools have been created for real-time reconciliation and harmonisation of dispersion forecasts from meteorological and emergency centres across Europe during an accident. The new ENSEMBLE software tools are available to participating national emergency and meteorological forecasting centres, which may choose to integrate them directly into operational emergency information systems, or possibly use them as a basis for future system development.
Chen, Zhiwen
2017-01-01
Zhiwen Chen aims to develop advanced fault detection (FD) methods for the monitoring of industrial processes. With the ever-increasing demands on reliability and safety in industrial processes, fault detection has become an important issue. Although model-based fault detection theory has been well studied in the past decades, its applications to large-scale industrial processes are limited because it is difficult to build accurate models. Furthermore, motivated by the limitations of existing data-driven FD methods, novel canonical correlation analysis (CCA) and projection-based methods are proposed from the perspectives of process input and output data, less engineering effort, and wide application scope. For performance evaluation of FD methods, a new index is also developed. Contents: A New Index for Performance Evaluation of FD Methods; CCA-based FD Method for the Monitoring of Stationary Processes; Projection-based FD Method for the Monitoring of Dynamic Processes; Benchmark Study and Real-Time Implementat...
On the statistical equivalence of restrained-ensemble simulations with the maximum entropy method.
Roux, Benoît; Weare, Jonathan
2013-02-28
An issue of general interest in computer simulations is to incorporate information from experiments into a structural model. An important caveat in pursuing this goal is to avoid corrupting the resulting model with spurious and arbitrary biases. While the problem of biasing thermodynamic ensembles can be formulated rigorously using the maximum entropy method introduced by Jaynes, the approach can be cumbersome in practical applications with the need to determine multiple unknown coefficients iteratively. A popular alternative strategy to incorporate the information from experiments is to rely on restrained-ensemble molecular dynamics simulations. However, the fundamental validity of this computational strategy remains in question. Here, it is demonstrated that the statistical distribution produced by restrained-ensemble simulations is formally consistent with the maximum entropy method of Jaynes. This clarifies the underlying conditions under which restrained-ensemble simulations will yield results that are consistent with the maximum entropy method.
Evaluation of the thermodynamics of a four level system using canonical density matrix method
Directory of Open Access Journals (Sweden)
Awoga Oladunjoye A.
2013-02-01
Full Text Available We consider a four-level system composed of two subsystems coupled by a weak interaction, in thermal equilibrium. The thermodynamics of the system, namely the internal energy, free energy, entropy and heat capacity, is evaluated from the canonical density matrix by two methods: first by the Kronecker product method, and then by treating the subsystems separately and adding the thermodynamic properties evaluated for each subsystem. Both methods yield the same result; the results obey the laws of thermodynamics and agree with earlier work. They also show that each level of the subsystems introduces a new degree of freedom and increases the entropy of the entire system. We further find that the four-level system predicts a linear relationship between heat capacity and temperature at very low temperatures, just as in metals. Our numerical results show the same trend.
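The equivalence of the two evaluation routes is easy to check numerically. The sketch below uses a toy model with hypothetical energy levels (not the energies used in the paper): it computes the partition function and internal energy once from the combined, Kronecker-product spectrum and once subsystem-by-subsystem. With the weak interaction neglected, the two agree:

```python
import math

def partition(levels, beta):
    return sum(math.exp(-beta * e) for e in levels)

def internal_energy(levels, beta):
    z = partition(levels, beta)
    return sum(e * math.exp(-beta * e) for e in levels) / z

# hypothetical two-level subsystems (energies in units of kT at beta = 1)
A, B, beta = [0.0, 1.0], [0.0, 2.0], 1.0

# Method 1: Kronecker-product route -- enumerate the 4 combined levels
combined = [ea + eb for ea in A for eb in B]
Z_kron, U_kron = partition(combined, beta), internal_energy(combined, beta)

# Method 2: treat the subsystems separately, then combine
Z_sep = partition(A, beta) * partition(B, beta)
U_sep = internal_energy(A, beta) + internal_energy(B, beta)

print(abs(Z_kron - Z_sep) < 1e-12, abs(U_kron - U_sep) < 1e-12)  # → True True
```

The partition function factorizes and the internal energy is additive, which is exactly why the two methods in the abstract yield the same thermodynamics.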
Hybrid Modeling of Flotation Height in Air Flotation Oven Based on Selective Bagging Ensemble Method
Directory of Open Access Journals (Sweden)
Shuai Hou
2013-01-01
Full Text Available The accurate prediction of the flotation height is necessary for precise control of the air flotation oven process, thereby avoiding scratches and improving production quality. In this paper, a hybrid flotation height prediction model is developed. First, a simplified mechanism model is introduced to capture the main dynamic behavior of the process. Then, to compensate for the modeling errors between the actual system and the mechanism model, an error compensation model based on the proposed selective bagging ensemble method is established to boost prediction accuracy. In the framework of the selective bagging ensemble method, negative correlation learning and a genetic algorithm are imposed on the bagging ensemble to promote cooperation among the base learners. As a result, a subset of base learners can be selected from the original bagging ensemble to compose a selective bagging ensemble that outperforms the original one in prediction accuracy with a compact ensemble size. Simulation results indicate that the proposed hybrid model achieves better prediction performance in flotation height than the other algorithms considered.
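The selection step can be illustrated with a minimal sketch. For a self-contained toy, the base learners are bootstrap-trained slope estimators on invented data (y = 2x plus noise), and a greedy forward selection stands in for the paper's genetic-algorithm search; none of these choices come from the paper itself:

```python
import random
random.seed(0)

# toy data; each "base learner" is a slope fitted on a bootstrap resample
data = [(i / 10, 2 * (i / 10) + random.gauss(0, 0.2)) for i in range(50)]

def fit_slope(pairs):  # least-squares slope through the origin
    return sum(x * y for x, y in pairs) / sum(x * x for x, _ in pairs)

# bagging: each learner sees its own bootstrap resample
learners = [fit_slope([random.choice(data) for _ in data]) for _ in range(15)]

def mse(subset):
    avg = sum(subset) / len(subset)  # ensemble prediction = mean slope
    return sum((avg * x - y) ** 2 for x, y in data) / len(data)

# greedy forward selection of a compact sub-ensemble
chosen, best, improved = [], float("inf"), True
while improved:
    improved = False
    for s in learners:
        if s in chosen:
            continue
        err = mse(chosen + [s])
        if err < best:
            best, pick, improved = err, s, True
    if improved:
        chosen.append(pick)

print(len(chosen), round(best, 4))
```

The selected sub-ensemble is never larger than the original bagging ensemble, and by construction its error is at most that of the best single learner, mirroring the "compact ensemble size" claim in the abstract.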
Clustered iterative stochastic ensemble method for multi-modal calibration of subsurface flow models
Elsheikh, Ahmed H.
2013-05-01
A novel multi-modal parameter estimation algorithm is introduced. Parameter estimation is an ill-posed inverse problem that may admit many different solutions, owing to the limited amount of measured data available to constrain the inverse problem. The proposed multi-modal model calibration algorithm uses an iterative stochastic ensemble method (ISEM) for parameter estimation. ISEM employs an ensemble of directional derivatives within a Gauss-Newton iteration for nonlinear parameter estimation. ISEM is augmented with a clustering step based on the k-means algorithm to form sub-ensembles. These sub-ensembles are used to explore different parts of the search space. Clusters are updated at regular intervals to allow merging of close clusters approaching the same local minimum. Numerical testing demonstrates the potential of the proposed algorithm for multi-modal nonlinear parameter estimation in subsurface flow models. © 2013 Elsevier B.V.
Ali, Elaf Jaafar; Gao, David Yang
2016-10-01
The goal of this paper is to solve the post-buckling phenomena of a large deformed elastic beam by a canonical dual mixed finite element method (CD-FEM). The total potential energy of this beam is a nonconvex functional which can be used to model both pre- and post-buckling problems. Different types of dual stress interpolations are used in order to verify the triality theory. Applications are illustrated with different boundary conditions and external loads by using a semi-definite programming (SDP) algorithm. The results show that the global minimum of the total potential energy is a stable buckled configuration, the local maximum solution leads to the unbuckled state, and both of these solutions are numerically stable, while the local minimum is an unstable buckled configuration that is very sensitive to both the stress interpolation and the external loads.
Directory of Open Access Journals (Sweden)
Ju Hyoung Lee
2015-12-01
Full Text Available Bias correction is a very important pre-processing step in satellite data assimilation analysis, as data assimilation itself cannot circumvent satellite biases. We introduce a retrieval algorithm-specific and spatially heterogeneous Instantaneous Field of View (IFOV) bias correction method for Soil Moisture and Ocean Salinity (SMOS) soil moisture. To the best of our knowledge, this is the first paper to present a probabilistic representation of SMOS soil moisture using retrieval ensembles. We illustrate that retrieval ensembles effectively mitigated the overestimation problem of SMOS soil moisture arising from brightness temperature errors over West Africa in a computationally efficient way (ensemble size: 12, no time-integration). In contrast, the existing method of Cumulative Distribution Function (CDF) matching considerably increased the SMOS biases, due to the limitations of relying on imperfect reference data. Validation at two semi-arid sites, Benin (moderately wet and vegetated) and Niger (dry, sandy bare soils), showed that the SMOS errors arising from rain and vegetation attenuation were appropriately corrected by the ensemble approaches. In Benin, the Root Mean Square Errors (RMSEs) decreased from 0.1248 m3/m3 for CDF matching to 0.0678 m3/m3 for the proposed ensemble approach. In Niger, the RMSEs decreased from 0.14 m3/m3 for CDF matching to 0.045 m3/m3 for the ensemble approach.
Canonical Information Analysis
DEFF Research Database (Denmark)
Vestergaard, Jacob Schack; Nielsen, Allan Aasbjerg
2015-01-01
Canonical correlation analysis is an established multivariate statistical method in which correlation between linear combinations of multivariate sets of variables is maximized. In canonical information analysis introduced here, linear correlation as a measure of association between variables is ...... airborne data. The simulation study shows that canonical information analysis is as accurate as and much faster than algorithms presented in previous work, especially for large sample sizes. URL: http://www.imm.dtu.dk/pubdb/p.php?6270...
Hybrid Levenberg–Marquardt and weak constraint ensemble Kalman smoother method
Directory of Open Access Journals (Sweden)
J. Mandel
2015-05-01
Full Text Available We propose to use the ensemble Kalman smoother (EnKS) as the linear least squares solver in the Gauss–Newton method for the large nonlinear least squares problems arising in incremental 4DVAR. The ensemble approach is naturally parallel over the ensemble members, and no tangent or adjoint operators are needed. Further, adding a regularization term replaces the Gauss–Newton method, which may diverge, with the Levenberg–Marquardt method, which is known to be convergent. The regularization is implemented efficiently as an additional observation in the EnKS. The method is illustrated on the Lorenz 63 and the two-level quasi-geostrophic model problems.
A Bayes fusion method based ensemble classification approach for Brown cloud application
Directory of Open Access Journals (Sweden)
M.Krishnaveni
2014-03-01
Full Text Available Classification is a recurrent task of determining a target function that maps each attribute set to one of the predefined class labels. Ensemble fusion is a classifier fusion technique that combines multiple classifiers to achieve higher classification accuracy than individual classifiers. The main objective of this paper is to combine base classifiers using the ensemble fusion methods Decision Template, Dempster-Shafer and Bayes, and to compare the accuracy of each fusion method on the brown cloud dataset. The base classifiers KNN, MLP and SVM are considered in the ensemble classification, each with four different function parameters. The experimental study shows that the Bayes fusion method achieves a better classification accuracy of 95% than Decision Template (80%) and Dempster-Shafer (85%) on a Brown Cloud image dataset.
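As a minimal illustration of Bayes (product-rule) fusion, the sketch below combines hypothetical class posteriors from three base classifiers; the numbers and class labels are invented for the example and are not the paper's data:

```python
# hypothetical posteriors from three base classifiers (KNN, MLP, SVM)
posteriors = [
    {"cloud": 0.7, "clear": 0.3},   # KNN
    {"cloud": 0.6, "clear": 0.4},   # MLP
    {"cloud": 0.4, "clear": 0.6},   # SVM
]

def bayes_fusion(posts):
    # product rule with a uniform prior, then renormalise
    classes = posts[0].keys()
    scores = {c: 1.0 for c in classes}
    for p in posts:
        for c in classes:
            scores[c] *= p[c]
    z = sum(scores.values())
    return {c: s / z for c, s in scores.items()}

fused = bayes_fusion(posteriors)
print(max(fused, key=fused.get))  # → cloud  (0.7*0.6*0.4 = 0.168 vs 0.3*0.4*0.6 = 0.072)
```

Note the fusion overrides the dissenting SVM vote because the other two classifiers are jointly more confident; a plain majority vote would reach the same decision here, but with posteriors of different confidence the two rules can disagree.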
Directory of Open Access Journals (Sweden)
O’Boyle Noel M
2012-09-01
Full Text Available Abstract Background There are two line notations of chemical structures that have established themselves in the field: the SMILES string and the InChI string. The InChI aims to provide a unique, or canonical, identifier for chemical structures, while SMILES strings are widely used for storage and interchange of chemical structures, but no standard exists to generate a canonical SMILES string. Results I describe how to use the InChI canonicalisation to derive a canonical SMILES string in a straightforward way, either incorporating the InChI normalisations (Inchified SMILES) or not (Universal SMILES). This is the first description of a method to generate canonical SMILES that takes stereochemistry into account. When tested on the 1.1 million compounds in the ChEMBL database, and a 1 million compound subset of the PubChem Substance database, no canonicalisation failures were found with Inchified SMILES. Using Universal SMILES, 99.79% of the ChEMBL database was canonicalised successfully, and 99.77% of the PubChem subset. Conclusions The InChI canonicalisation algorithm can successfully be used as the basis for a common standard for canonical SMILES. While challenges remain – such as the development of a standard aromatic model for SMILES – the ability to create the same SMILES using different toolkits will mean that for the first time it will be possible to easily compare the chemical models used by different toolkits.
A Synergy Method to Improve Ensemble Weather Predictions and Differential SAR Interferograms
Ulmer, Franz-Georg; Adam, Nico
2015-11-01
A compensation of atmospheric effects is essential for mm-sensitivity in differential interferometric synthetic aperture radar (DInSAR) techniques. Numerical weather predictions are used to compensate these disturbances allowing a reduction in the number of required radar scenes. Practically, predictions are solutions of partial differential equations which never can be precise due to model or initialisation uncertainties. In order to deal with the chaotic nature of the solutions, ensembles of predictions are computed. From a stochastic point of view, the ensemble mean is the expected prediction, if all ensemble members are equally likely. This corresponds to the typical assumption that all ensemble members are physically correct solutions of the set of partial differential equations. DInSAR allows adding to this knowledge. Observations of refractivity can now be utilised to check the likelihood of a solution and to weight the respective ensemble member to estimate a better expected prediction. The objective of the paper is to show the synergy between ensemble weather predictions and differential interferometric atmospheric correction. We demonstrate a new method first to compensate better for the atmospheric effect in DInSAR and second to estimate an improved numerical weather prediction (NWP) ensemble mean. Practically, a least squares fit of predicted atmospheric effects with respect to a differential interferogram is computed. The coefficients of this fit are interpreted as likelihoods and used as weights for the weighted ensemble mean. Finally, the derived weighted prediction has minimal expected quadratic errors which is a better solution compared to the straightforward best-fitting ensemble member. Furthermore, we propose an extension of the algorithm which avoids the systematic bias caused by deformations. It makes this technique suitable for time series analysis, e.g. persistent scatterer interferometry (PSI). We validate the algorithm using the well known
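The weighting step can be sketched with a tiny least-squares fit. Assuming two hypothetical ensemble members' predicted atmospheric delays and one observed interferometric delay (all numbers invented), the fitted coefficient serves as the likelihood weight, and the weighted blend cannot fit worse than either member alone:

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# hypothetical: two ensemble predictions of atmospheric phase delay and
# the delay observed in a differential interferogram (arbitrary units)
m1  = [1.0, 2.0, 3.0, 4.0]
m2  = [2.0, 1.0, 4.0, 3.0]
obs = [1.2, 1.8, 3.2, 3.8]

# least-squares fit of obs ≈ w*m1 + (1-w)*m2; the coefficient acts as a likelihood weight
d = [a - b for a, b in zip(m1, m2)]
r = [a - b for a, b in zip(obs, m2)]
w = max(0.0, min(1.0, dot(r, d) / dot(d, d)))

blend = [w * a + (1 - w) * b for a, b in zip(m1, m2)]
err = lambda m: sum((a - b) ** 2 for a, b in zip(m, obs))
print(err(blend) <= min(err(m1), err(m2)))  # → True
```

Because the blend minimizes the quadratic error over the weight, it is at least as good as the best-fitting individual member, which is the "minimal expected quadratic errors" argument made in the abstract (here reduced to two members for clarity).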
A regularizing iterative ensemble Kalman method for PDE-constrained inverse problems
Iglesias, Marco A.
2016-02-01
We introduce a derivative-free computational framework for approximating solutions to nonlinear PDE-constrained inverse problems. The general aim is to merge ideas from iterative regularization with ensemble Kalman methods from Bayesian inference to develop a derivative-free stable method easy to implement in applications where the PDE (forward) model is only accessible as a black box (e.g. with commercial software). The proposed regularizing ensemble Kalman method can be derived as an approximation of the regularizing Levenberg-Marquardt (LM) scheme (Hanke 1997 Inverse Problems 13 79-95) in which the derivative of the forward operator and its adjoint are replaced with empirical covariances from an ensemble of elements from the admissible space of solutions. The resulting ensemble method consists of an update formula that is applied to each ensemble member and that has a regularization parameter selected in a similar fashion to the one in the LM scheme. Moreover, an early termination of the scheme is proposed according to a discrepancy principle-type of criterion. The proposed method can be also viewed as a regularizing version of standard Kalman approaches which are often unstable unless ad hoc fixes, such as covariance localization, are implemented. The aim of this paper is to provide a detailed numerical investigation of the regularizing and convergence properties of the proposed regularizing ensemble Kalman scheme; the proof of these properties is an open problem. By means of numerical experiments, we investigate the conditions under which the proposed method inherits the regularizing properties of the LM scheme of (Hanke 1997 Inverse Problems 13 79-95) and is thus stable and suitable for its application in problems where the computation of the Fréchet derivative is not computationally feasible. More concretely, we study the effect of ensemble size, number of measurements, selection of initial ensemble and tunable parameters on the performance of the method
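A scalar sketch of such an ensemble Kalman iteration may help. This is not the paper's implementation: the forward model, noise level, ensemble size and the fixed regularization parameter are all assumptions for illustration (the paper selects the parameter LM-style and adds a discrepancy-principle stopping rule):

```python
import random
random.seed(1)

def G(u):  # black-box forward model (an assumed scalar nonlinearity)
    return u ** 3

y_obs, noise_sd = 8.0, 0.1          # u = 2 solves G(u) = y_obs
ensemble = [random.gauss(2.5, 0.5) for _ in range(200)]

for _ in range(10):
    preds = [G(u) for u in ensemble]
    mu = sum(ensemble) / len(ensemble)
    mw = sum(preds) / len(preds)
    # empirical covariances replace the Frechet derivative and its adjoint
    c_uw = sum((u - mu) * (w - mw) for u, w in zip(ensemble, preds)) / (len(ensemble) - 1)
    c_ww = sum((w - mw) ** 2 for w in preds) / (len(preds) - 1)
    alpha = 1.0                     # regularization parameter (fixed here for simplicity)
    gain = c_uw / (c_ww + alpha * noise_sd ** 2)
    # Kalman-type update applied to each ensemble member, with perturbed observations
    ensemble = [u + gain * (y_obs + random.gauss(0, noise_sd) - G(u)) for u in ensemble]

mean = sum(ensemble) / len(ensemble)
print(round(mean, 2))
```

The key point matches the abstract: no derivative of G is ever evaluated; the cross-covariance c_uw plays the role the Fréchet derivative plays in the LM scheme.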
ENSO-conditioned weather resampling method for seasonal ensemble streamflow prediction
Beckers, Joost V.L.; Weerts, Albrecht H.; Tijdeman, Erik; Welles, Edwin
2016-01-01
Oceanic-atmospheric climate modes, such as El Niño-Southern Oscillation (ENSO), are known to affect the local streamflow regime in many rivers around the world. A new method is proposed to incorporate climate mode information into the well-known ensemble streamflow prediction (ESP) method for seasonal forecasting.
Method to detect gravitational waves from an ensemble of known pulsars
Fan, Xilong; Messenger, Christopher
2016-01-01
Combining information from weak sources, such as known pulsars, for gravitational wave detection, is an attractive approach to improve detection efficiency. We propose an optimal statistic for a general ensemble of signals and apply it to an ensemble of known pulsars. Our method combines $\mathcal{F}$-statistic values from individual pulsars using weights proportional to each pulsar's expected optimal signal-to-noise ratio to improve the detection efficiency. We also point out that to detect at least one pulsar within an ensemble, different thresholds should be designed for each source based on the expected signal strength. The performance of our proposed detection statistic is demonstrated using simulated sources, with the assumption that all pulsars' ellipticities belong to a common (yet unknown) distribution. Comparing with an equal-weight strategy and with individual source approaches, we show that the weighted-combination of all known pulsars, where weights are assigned based on the pulsars' known informa...
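The weighted combination can be sketched in a few lines. The $\mathcal{F}$-statistic values and expected SNRs below are invented for illustration; following the abstract, the weights are taken proportional to each pulsar's expected optimal signal-to-noise ratio:

```python
# hypothetical per-pulsar F-statistic values and expected optimal SNRs
f_stats = [4.1, 5.6, 3.2, 7.8]
snr_exp = [0.5, 1.2, 0.3, 2.0]

# weights proportional to the expected optimal SNR, normalised to sum to one
total = sum(snr_exp)
weights = [s / total for s in snr_exp]

weighted_stat = sum(w * f for w, f in zip(weights, f_stats))
equal_stat = sum(f_stats) / len(f_stats)
print(weighted_stat > equal_stat)  # → True
```

With these numbers the weighted statistic exceeds the equal-weight average because the loudest expected source also has the largest $\mathcal{F}$-statistic; that is precisely the regime in which the weighting improves detection efficiency.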
An iterative stochastic ensemble method for parameter estimation of subsurface flow models
Elsheikh, Ahmed H.
2013-06-01
Parameter estimation for subsurface flow models is an essential step for maximizing the value of numerical simulations for future prediction and the development of effective control strategies. We propose the iterative stochastic ensemble method (ISEM) as a general method for parameter estimation based on stochastic estimation of gradients using an ensemble of directional derivatives. ISEM eliminates the need for adjoint coding and treats the numerical simulator as a black box. The proposed method employs directional derivatives within a Gauss-Newton iteration. The update equation in ISEM resembles the update step in the ensemble Kalman filter; however, the inverse of the output covariance matrix in ISEM is regularized using standard truncated singular value decomposition or Tikhonov regularization. We also investigate the performance of a set of shrinkage-based covariance estimators within ISEM. The proposed method is successfully applied to several nonlinear parameter estimation problems for subsurface flow models. The efficiency of the proposed algorithm is demonstrated by the small size of the utilized ensembles and in terms of error convergence rates. © 2013 Elsevier Inc.
Evaluation of bias-correction methods for ensemble streamflow volume forecasts
Directory of Open Access Journals (Sweden)
T. Hashino
2007-01-01
Full Text Available Ensemble prediction systems are used operationally to make probabilistic streamflow forecasts for seasonal time scales. However, hydrological models used for ensemble streamflow prediction often have simulation biases that degrade forecast quality and limit the operational usefulness of the forecasts. This study evaluates three bias-correction methods for ensemble streamflow volume forecasts. All three adjust the ensemble traces using a transformation derived with simulated and observed flows from a historical simulation. The quality of probabilistic forecasts issued when using the three bias-correction methods is evaluated using a distributions-oriented verification approach. Comparisons are made of retrospective forecasts of monthly flow volumes for a north-central United States basin (Des Moines River, Iowa, issued sequentially for each month over a 48-year record. The results show that all three bias-correction methods significantly improve forecast quality by eliminating unconditional biases and enhancing the potential skill. Still, subtle differences in the attributes of the bias-corrected forecasts have important implications for their use in operational decision-making. Diagnostic verification distinguishes these attributes in a context meaningful for decision-making, providing criteria to choose among bias-correction methods with comparable skill.
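A minimal sketch of one such transformation is empirical quantile mapping derived from a historical simulated/observed record. The flow values are invented and the method is a generic stand-in, not necessarily one of the three methods the study evaluates:

```python
from bisect import bisect_left

# hypothetical historical record: the simulated flows run wet (a 1.3x bias)
observed  = sorted([10, 14, 18, 22, 26, 30, 34, 38, 42, 46])
simulated = sorted(round(1.3 * q, 1) for q in observed)

def quantile_map(value, sim, obs):
    """Map a simulated flow to the observed flow at the same empirical quantile."""
    i = min(bisect_left(sim, value), len(obs) - 1)
    return obs[i]

# each trace of the forecast ensemble is adjusted with the historical transformation
forecast_ensemble = [26.0, 39.0, 48.1]
corrected = [quantile_map(v, simulated, observed) for v in forecast_ensemble]
print(corrected)  # → [22, 30, 38]
```

Because every trace passes through the same monotone map, the rank order of the ensemble is preserved while the unconditional bias is removed, which is the behaviour the verification in the abstract attributes to all three methods.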
Higher Derivatives and Canonical Formalism
HAMAMOTO, Shinji
1995-01-01
A canonical formalism for higher-derivative theories is presented on the basis of Dirac's method for constrained systems. It is shown that this formalism shares a path integral expression with Ostrogradski's canonical formalism.
Senjean, Bruno; Jensen, Hans Jørgen Aa; Fromager, Emmanuel
2015-01-01
The computation of excitation energies in range-separated ensemble density-functional theory (DFT) is discussed. The latter approach is appealing as it enables the rigorous formulation of a multi-determinant state-averaged DFT method. In the exact theory, the short-range density functional, that complements the long-range wavefunction-based ensemble energy contribution, should vary with the ensemble weights even when the density is held fixed. This weight dependence ensures that the range-separated ensemble energy varies linearly with the ensemble weights. When the (weight-independent) ground-state short-range exchange-correlation functional is used in this context, curvature appears thus leading to an approximate weight-dependent excitation energy. In order to obtain unambiguous approximate excitation energies, we simply propose to interpolate linearly the ensemble energy between equiensembles. It is shown that such a linear interpolation method (LIM) effectively introduces weight dependence effects. LIM has...
Application of the Multimodel Ensemble Kalman Filter Method in Groundwater System
Directory of Open Access Journals (Sweden)
Liang Xue
2015-02-01
Full Text Available With the development of in-situ monitoring techniques, the ensemble Kalman filter (EnKF has become a popular data assimilation method due to its capability to jointly update model parameters and state variables in a sequential way, and to assess the uncertainty associated with estimation and prediction. To take the conceptual model uncertainty into account during the data assimilation process, a novel multimodel ensemble Kalman filter method has been proposed by incorporating the standard EnKF with Bayesian model averaging framework. In this paper, this method is applied to analyze the dataset obtained from the Hailiutu River Basin located in the northwest part of China. Multiple conceptual models are created by considering two important factors that control groundwater dynamics in semi-arid areas: the zonation pattern of the hydraulic conductivity field and the relationship between evapotranspiration and groundwater level. The results show that the posterior model weights of the postulated models can be dynamically adjusted according to the mismatch between the measurements and the ensemble predictions, and the multimodel ensemble estimation and the corresponding uncertainty can be quantified.
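The Bayesian-model-averaging weight update at the heart of the multimodel scheme can be sketched as follows; the model names, predictions, measurement and error standard deviation are all hypothetical:

```python
import math

# hypothetical: three conceptual models' ensemble-mean predictions of groundwater level
models = {"zonal-A": 10.2, "zonal-B": 11.5, "uniform": 9.1}
weights = {m: 1 / 3 for m in models}      # equal prior model weights
measurement, sd = 10.0, 0.5               # new observation and its error std

def update_weights(weights, preds, z, sd):
    # Gaussian likelihood of the measurement under each model, then renormalise
    like = {m: math.exp(-0.5 * ((z - p) / sd) ** 2) for m, p in preds.items()}
    total = sum(weights[m] * like[m] for m in preds)
    return {m: weights[m] * like[m] / total for m in preds}

weights = update_weights(weights, models, measurement, sd)
best = max(weights, key=weights.get)
print(best)  # → zonal-A
```

The weight of the model whose prediction best matches the measurement grows at each assimilation step, which is the "dynamically adjusted posterior model weights" behaviour described above.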
An optimized ensemble local mean decomposition method for fault detection of mechanical components
Zhang, Chao; Li, Zhixiong; Hu, Chao; Chen, Shuai; Wang, Jianguo; Zhang, Xiaogang
2017-03-01
Mechanical transmission systems have been widely adopted in most industrial applications, and issues related to the maintenance of these systems have attracted considerable attention in the past few decades. The recently developed ensemble local mean decomposition (ELMD) method shows satisfactory performance in fault detection of mechanical components for preventing catastrophic failures and reducing maintenance costs. However, the performance of ELMD often depends heavily on the proper selection of its model parameters. To this end, this paper proposes an optimized ensemble local mean decomposition (OELMD) method to determine an optimal set of ELMD parameters for vibration signal analysis. In OELMD, an error index termed the relative root-mean-square error (Relative RMSE) is used to evaluate the decomposition performance of ELMD with a certain amplitude of the added white noise. Once a maximum Relative RMSE, corresponding to an optimal noise amplitude, is determined, OELMD then identifies the optimal noise bandwidth and ensemble number based on the Relative RMSE and signal-to-noise ratio (SNR), respectively. Thus, all three critical parameters of ELMD (i.e. noise amplitude and bandwidth, and ensemble number) are optimized by OELMD. The effectiveness of OELMD was evaluated using experimental vibration signals measured from three different mechanical components (i.e. a rolling bearing, a gear and a diesel engine) under faulty operation conditions.
EXPERIMENTS OF ENSEMBLE FORECAST OF TYPHOON TRACK USING BDA PERTURBING METHOD
Institute of Scientific and Technical Information of China (English)
HUANG Yan-yan; WAN Qi-lin; YUAN Jin-nan; DING Wei-yu
2006-01-01
A new method, BDA perturbing, is used in ensemble forecasting of typhoon tracks. This method is based on the Bogus Data Assimilation scheme. It perturbs the initial position and intensity of typhoons to generate a series of bogus vortices. Each bogus vortex is then used in data assimilation to obtain initial conditions, and ensemble forecast members are constructed by conducting simulations with these initial conditions. Several typhoon cases are chosen to test the validity of this new method, and the results show that using the BDA perturbing method to perturb the initial position and intensity of a typhoon improves track forecast accuracy compared with the direct use of the BDA assimilation scheme. It is also concluded that a perturbing amplitude of 5 hPa for intensity is probably more appropriate than 10 hPa when the BDA perturbing method is used in combination with initial position perturbation.
Canonical Strangeness and Distillation Effects in Hadron Production
Toneev, V D
2004-01-01
Strangeness canonical ensemble for Maxwell-Boltzmann statistics is reconsidered for excited nuclear systems with non-vanishing net strangeness. A new recurrence relation method is applied to find the partition function. The method is first generalized to the case of quantum strangeness canonical ensemble. Uncertainties in calculation of the K+/pi+ excitation function are discussed. A new scenario based on the strangeness distillation effect is put forward for a possible explanation of anomalous strangeness production observed at the bombarding energy near 30 AGeV. The peaked maximum in the K+/pi+ ratio is considered as a sign of the critical end-point reached in evolution of the system rather than a latent heat jump emerging from the onset of the first order deconfinement phase transition.
A New Ensemble Method with Feature Space Partitioning for High-Dimensional Data Classification
Directory of Open Access Journals (Sweden)
Yongjun Piao
2015-01-01
Full Text Available Ensemble data mining methods, also known as classifier combination, are often used to improve classification performance. Various classifier combination methods such as bagging, boosting, and random forest have been devised and have received considerable attention in the past. However, the dimensionality of data is increasing rapidly, a trend that poses a challenge because these methods are not suitable for direct application to high-dimensional datasets. In this paper, we propose an ensemble method for the classification of high-dimensional data, with each classifier constructed from a different set of features determined by a partitioning of redundant features. In our method, the redundancy of features is considered when dividing the original feature space. Each generated feature subset is then trained by a support vector machine, and the results of the classifiers are combined by majority voting. The efficiency and effectiveness of our method are demonstrated through comparisons with other ensemble techniques, and the results show that our method outperforms them.
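The partition-train-vote pipeline can be sketched as below. For a self-contained toy, the SVM base learners are replaced with trivial sign classifiers on each feature subset, and a round-robin split stands in for the paper's redundancy-based partition; the task (class = sign of the feature sum) is invented:

```python
from collections import Counter
import random
random.seed(2)

# hypothetical toy task: 9 features, class = sign of the feature sum
n_feat, n_groups = 9, 3
# round-robin partition of the feature space into disjoint subsets
groups = [list(range(g, n_feat, n_groups)) for g in range(n_groups)]

def base_predict(x, subset):
    # stand-in base learner (the paper trains an SVM per subset):
    # classify by the sign of the subset's feature sum
    return int(sum(x[i] for i in subset) > 0)

def ensemble_predict(x):
    votes = Counter(base_predict(x, g) for g in groups)
    return votes.most_common(1)[0][0]   # majority voting over the subsets

data = [[random.uniform(-1, 1) for _ in range(n_feat)] for _ in range(500)]
labels = [int(sum(x) > 0) for x in data]
acc = sum(ensemble_predict(x) == y for x, y in zip(data, labels)) / len(data)
print(round(acc, 2))
```

Each base learner sees only a third of the features, yet the majority vote recovers most of the signal, illustrating why partitioning plus voting scales to high-dimensional data where a single full-feature classifier may be impractical.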
A Numerical Comparison of Rule Ensemble Methods and Support Vector Machines
Energy Technology Data Exchange (ETDEWEB)
Meza, Juan C.; Woods, Mark
2009-12-18
Machine or statistical learning is a growing field that encompasses many scientific problems including estimating parameters from data, identifying risk factors in health studies, image recognition, and finding clusters within datasets, to name just a few examples. Statistical learning can be described as 'learning from data', with the goal of making a prediction of some outcome of interest. This prediction is usually made on the basis of a computer model that is built using data where the outcomes and a set of features have been previously matched. The computer model is called a learner, hence the name machine learning. In this paper, we present two such algorithms, a support vector machine method and a rule ensemble method. We compared their predictive power on three type Ia supernova data sets provided by the Nearby Supernova Factory and found that while both methods give accuracies of approximately 95%, the rule ensemble method gives much lower false negative rates.
Thermodynamic stability of charged BTZ black holes: Ensemble dependency problem and its solution
Hendi, S H; Mamasani, R
2015-01-01
Motivated by the wide applications of thermal stability and phase transition, we investigate the thermodynamic properties of charged BTZ black holes. We apply the standard method to calculate the heat capacity and the Hessian matrix, and find that the thermal stability of charged BTZ solutions depends on the choice of ensemble. To overcome this problem, we treat the cosmological constant as a thermodynamic variable. With this modification, we show that the ensemble dependency is eliminated and the thermal stability conditions are the same in both ensembles. We then generalize our solutions to the case of nonlinear electrodynamics and show how the nonlinear matter field modifies the geometrical behavior of the metric function. We also study the phase transition and thermal stability of these black holes in the context of both the canonical and grand canonical ensembles. We show that by considering the cosmological constant as a thermodynamic variable and modifying the Hessian matrix, the ensemble dependency of thermal stability...
Filatov, Michael; Liu, Fang; Kim, Kwang S.; Martínez, Todd J.
2016-12-01
The spin-restricted ensemble-referenced Kohn-Sham (REKS) method is based on an ensemble representation of the density and is capable of correctly describing the non-dynamic electron correlation stemming from (near-)degeneracy of several electronic configurations. The existing REKS methodology describes systems with two electrons in two fractionally occupied orbitals. In this work, the REKS methodology is extended to treat systems with four fractionally occupied orbitals accommodating four electrons and self-consistent implementation of the REKS(4,4) method with simultaneous optimization of the orbitals and their fractional occupation numbers is reported. The new method is applied to a number of molecular systems where simultaneous dissociation of several chemical bonds takes place, as well as to the singlet ground states of organic tetraradicals 2,4-didehydrometaxylylene and 1,4,6,9-spiro[4.4]nonatetrayl.
Broderick, Ciaran; Matthews, Tom; Wilby, Robert L.; Bastola, Satish; Murphy, Conor
2016-10-01
Understanding hydrological model predictive capabilities under contrasting climate conditions enables more robust decision making. Using Differential Split Sample Testing (DSST), we analyze the performance of six hydrological models for 37 Irish catchments under climate conditions unlike those used for model training. Additionally, we consider four ensemble averaging techniques when examining interperiod transferability. DSST is conducted using 2/3 year noncontinuous blocks of (i) the wettest/driest years on record based on precipitation totals and (ii) years with a more/less pronounced seasonal precipitation regime. Model transferability between contrasting regimes was found to vary depending on the testing scenario, catchment, and evaluation criteria considered. As expected, the ensemble average outperformed most individual ensemble members. However, averaging techniques differed considerably in the number of times they surpassed the best individual model member. Bayesian Model Averaging (BMA) and the Granger-Ramanathan Averaging (GRA) method were found to outperform the simple arithmetic mean (SAM) and Akaike Information Criteria Averaging (AICA). Here GRA performed better than the best individual model in 51%-86% of cases (according to the Nash-Sutcliffe criterion). When assessing model predictive skill under climate change conditions we recommend (i) setting up DSST to select the best available analogues of expected annual mean and seasonal climate conditions; (ii) applying multiple performance criteria; (iii) testing transferability using a diverse set of catchments; and (iv) using a multimodel ensemble in conjunction with an appropriate averaging technique. Given the computational efficiency and performance of GRA relative to BMA, the former is recommended as the preferred ensemble averaging technique for climate assessment.
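Granger-Ramanathan averaging amounts to an ordinary least-squares fit of the member simulations to the observations, without the sum-to-one constraint of a simple arithmetic mean. A two-member sketch with invented flows (solving the 2x2 normal equations directly):

```python
# hypothetical: two hydrological models' simulated flows plus observations
q_obs = [5.0, 7.0, 6.0, 9.0, 8.0]
q_m1  = [4.0, 6.5, 5.0, 8.0, 7.0]    # model 1 tends to underestimate
q_m2  = [6.5, 8.0, 7.5, 10.5, 9.5]   # model 2 tends to overestimate

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# GRA weights: solve the 2x2 normal equations of the OLS fit q_obs ≈ w1*q_m1 + w2*q_m2
a11, a12, a22 = dot(q_m1, q_m1), dot(q_m1, q_m2), dot(q_m2, q_m2)
b1, b2 = dot(q_m1, q_obs), dot(q_m2, q_obs)
det = a11 * a22 - a12 * a12
w1 = (b1 * a22 - b2 * a12) / det
w2 = (b2 * a11 - b1 * a12) / det

gra = [w1 * x + w2 * y for x, y in zip(q_m1, q_m2)]
sam = [(x + y) / 2 for x, y in zip(q_m1, q_m2)]
sse = lambda q: sum((a - b) ** 2 for a, b in zip(q, q_obs))
print(sse(gra) <= sse(sam))  # → True
```

Since the 50/50 blend lies in the span of the members, the OLS fit can never have a larger in-sample error than the simple arithmetic mean, which is consistent with GRA outperforming SAM in the study above (out-of-sample performance, of course, is what the hindcasts actually test).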
ATMOSPHERIC MODELS INCLUDING ENSEMBLE METHODS
Miller, Scott E.
2012-09-01
[Extraction fragment: only front matter survives. It identifies the author as Lieutenant Commander Scott E. Miller, United States Navy (B.S., University of South Carolina, 2000), and preserves two figure captions, "Typical gas turbine fuel consumption curve and relationship to sea state" and "DDG 58 speed reduction curves for bow seas", followed by part of an acronym glossary (ECDIS-N, ECMWF, EFAS).]
Chen, Jinglong; Zhang, Chunlin; Zhang, Xiaoyan; Zi, Yanyang; He, Shuilong; Yang, Zhe
2015-03-01
Satellite communication antennas are key devices on a measurement ship, supporting integrated voice, data, fax and video services. Condition monitoring of mechanical equipment from vibration measurement data is significant for guaranteeing safe operation and avoiding unscheduled breakdowns. A condition monitoring system for ship-based satellite communication antennas is therefore designed and developed. Planetary gearboxes play an important role in the transmission train of a satellite communication antenna; however, their condition monitoring still faces challenges due to structural complexity and weak condition features. This paper makes planetary gearbox condition monitoring possible by proposing an ensemble multiwavelet analysis method. Benefiting from its multi-resolution analysis property and multiple wavelet basis functions, the multiwavelet has an advantage in characterizing non-stationary signals. In order to realize accurate detection of the condition feature with multi-resolution analysis over the whole frequency band, an adaptive multiwavelet basis function is constructed by increasing the multiplicity, and the vibration signal is then processed by the ensemble multiwavelet transform. Finally, a normalized ensemble multiwavelet transform information entropy is computed to describe the condition of the planetary gearbox. The effectiveness of the proposed method is first validated through condition monitoring of an experimental planetary gearbox. The method is then used for condition monitoring of planetary gearboxes in ship-based satellite communication antennas, and the results support its feasibility.
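The condition indicator above is an entropy of normalized sub-band energies after a wavelet decomposition. As a simplified stand-in for the paper's ensemble multiwavelet transform, the sketch below uses a plain multilevel Haar decomposition; the signals and fault model are synthetic:

```python
import numpy as np

def haar_subbands(x, levels):
    """Multilevel Haar decomposition; returns the detail sub-bands plus the
    final approximation (a single-wavelet stand-in for the multiwavelet transform)."""
    bands = []
    a = np.asarray(x, float)
    for _ in range(levels):
        a, d = (a[0::2] + a[1::2]) / np.sqrt(2), (a[0::2] - a[1::2]) / np.sqrt(2)
        bands.append(d)
    bands.append(a)
    return bands

def normalized_energy_entropy(bands):
    """Shannon entropy of normalized sub-band energies, scaled to [0, 1].
    Values near 1 mean energy spread over many bands; a drop flags energy
    concentrating at fault-related frequencies."""
    e = np.array([np.sum(b ** 2) for b in bands])
    p = e / e.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum() / np.log(len(bands)))

rng = np.random.default_rng(1)
healthy = rng.normal(0, 1, 1024)                          # broadband vibration
t = np.arange(1024)
faulty = 0.2 * rng.normal(0, 1, 1024) + np.sin(0.8 * np.pi * t)  # strong tonal component
h = normalized_energy_entropy(haar_subbands(healthy, 4))
f = normalized_energy_entropy(haar_subbands(faulty, 4))
print(h > f)  # entropy drops when energy concentrates
```

The drop in the entropy value, rather than its absolute level, is what serves as the condition feature.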
ENSO-conditioned weather resampling method for seasonal ensemble streamflow prediction
Beckers, Joost V. L.; Weerts, Albrecht H.; Tijdeman, Erik; Welles, Edwin
2016-08-01
Oceanic-atmospheric climate modes, such as El Niño-Southern Oscillation (ENSO), are known to affect the local streamflow regime in many rivers around the world. A new method is proposed to incorporate climate mode information into the well-known ensemble streamflow prediction (ESP) method for seasonal forecasting. The ESP is conditioned on an ENSO index in two steps. First, a number of original historical ESP traces are selected based on similarity between the index value in the historical year and the index value at the time of forecast. In the second step, additional ensemble traces are generated by a stochastic ENSO-conditioned weather resampler. These resampled traces compensate for the reduction of ensemble size in the first step and prevent degradation of skill at forecasting stations that are less affected by ENSO. The skill of the ENSO-conditioned ESP is evaluated over 50 years of seasonal hindcasts of streamflows at three test stations in the Columbia River basin in the US Pacific Northwest. An improvement in forecast skill of 5 to 10 % is found for two test stations. The streamflows at the third station are less affected by ENSO and no change in forecast skill is found here.
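The two conditioning steps can be sketched in a few lines. Note the simplification: the paper resamples weather inputs and reruns the hydrological model, whereas this toy resamples the streamflow traces directly; all series and parameter values are hypothetical:

```python
import numpy as np

def conditioned_esp(index_hist, traces, index_now, k=10, n_extra=20, rng=None):
    """Two-step ENSO conditioning of ESP (simplified sketch).
    Step 1: keep the k historical traces whose ENSO index is closest to the
    index at forecast time.  Step 2: top the ensemble back up by resampling
    (with replacement) among the selected analogue years."""
    rng = rng or np.random.default_rng()
    order = np.argsort(np.abs(index_hist - index_now))
    selected = traces[order[:k]]                       # step 1: analogue selection
    extra = selected[rng.integers(0, k, n_extra)]      # step 2: stochastic top-up
    return np.vstack([selected, extra])

rng = np.random.default_rng(2)
years = 50
index_hist = rng.normal(0, 1, years)                   # toy ENSO index per year
# toy daily-flow traces: ENSO-sensitive mean plus noise
traces = 100 + 20 * index_hist[:, None] + rng.normal(0, 5, (years, 90))
ens = conditioned_esp(index_hist, traces, index_now=1.5, k=10, n_extra=20, rng=rng)
print(ens.shape)  # (30, 90)
```

Conditioning on a strong positive index shifts the forecast ensemble toward the flows observed in analogue years, which is the mechanism behind the skill gain reported at the ENSO-affected stations.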
Data Mining and Ensemble of Learning Methods
Institute of Scientific and Technical Information of China (English)
刁力力; 胡可云; 陆玉昌; 石纯一
2001-01-01
Data mining is one solution to the problem of information explosion. Classification and prediction are among the most fundamental tasks in the data mining field. Many experiments have shown that the results of ensembles of learning methods are generally better than those of single learning methods most of the time. In this sense, it is of great value to introduce ensemble learning methods to data mining. This paper introduces data mining and ensemble learning methods respectively, along with an analysis of the role ensemble learning methods can play in some important practical aspects of data mining: text mining, multimedia information mining and web mining.
Extensions and applications of ensemble-of-trees methods in machine learning
Bleich, Justin
Ensemble-of-trees algorithms have emerged at the forefront of machine learning due to their ability to generate high forecasting accuracy for a wide array of regression and classification problems. Classic ensemble methodologies such as random forests (RF) and stochastic gradient boosting (SGB) rely on algorithmic procedures to generate fits to data. In contrast, more recent ensemble techniques such as Bayesian Additive Regression Trees (BART) and Dynamic Trees (DT) focus on an underlying Bayesian probability model to generate the fits. These new probability model-based approaches show much promise versus their algorithmic counterparts, but also offer substantial room for improvement. The first part of this thesis focuses on methodological advances for ensemble-of-trees techniques with an emphasis on the more recent Bayesian approaches. In particular, we focus on extensions of BART in four distinct ways. First, we develop a more robust implementation of BART for both research and application. We then develop a principled approach to variable selection for BART as well as the ability to naturally incorporate prior information on important covariates into the algorithm. Next, we propose a method for handling missing data that relies on the recursive structure of decision trees and does not require imputation. Last, we relax the assumption of homoskedasticity in the BART model to allow for parametric modeling of heteroskedasticity. The second part of this thesis returns to the classic algorithmic approaches in the context of classification problems with asymmetric costs of forecasting errors. First we consider the performance of RF and SGB more broadly and demonstrate their superiority to logistic regression for applications in criminology with asymmetric costs. Next, we use RF to forecast unplanned hospital readmissions upon patient discharge with asymmetric costs taken into account. Finally, we explore the construction of stable decision trees for forecasts of
Efendiev, Yalchin R.
2013-08-21
In this paper, we propose multilevel Monte Carlo (MLMC) methods that use ensemble level mixed multiscale methods in the simulations of multiphase flow and transport. The contribution of this paper is twofold: (1) a design of ensemble level mixed multiscale finite element methods and (2) a novel use of mixed multiscale finite element methods within multilevel Monte Carlo techniques to speed up the computations. The main idea of ensemble level multiscale methods is to construct local multiscale basis functions that can be used for any member of the ensemble. In this paper, we consider two ensemble level mixed multiscale finite element methods: (1) the no-local-solve-online ensemble level method (NLSO); and (2) the local-solve-online ensemble level method (LSO). The first approach was proposed in Aarnes and Efendiev (SIAM J. Sci. Comput. 30(5):2319-2339, 2008) while the second approach is new. Both mixed multiscale methods use a number of snapshots of the permeability media in generating multiscale basis functions. As a result, in the off-line stage, we construct multiple basis functions for each coarse region where basis functions correspond to different realizations. In the no-local-solve-online ensemble level method, one uses the whole set of precomputed basis functions to approximate the solution for an arbitrary realization. In the local-solve-online ensemble level method, one uses the precomputed functions to construct a multiscale basis for a particular realization. With this basis, the solution corresponding to this particular realization is approximated in LSO mixed multiscale finite element method (MsFEM). In both approaches, the accuracy of the method is related to the number of snapshots computed based on different realizations that one uses to precompute a multiscale basis. In this paper, ensemble level multiscale methods are used in multilevel Monte Carlo methods (Giles 2008a, Oper.Res. 56(3):607-617, b). In multilevel Monte Carlo methods, more accurate
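The multilevel Monte Carlo idea underlying the approach above telescopes an expectation across discretization levels, spending many samples on cheap coarse levels and few on expensive fine ones. The sketch below illustrates only the MLMC estimator itself, on a toy geometric-Brownian-motion Euler scheme rather than the paper's multiscale finite element solver (all parameter values are hypothetical):

```python
import numpy as np

def level_diff(l, n, rng, T=1.0, mu=0.05, sigma=0.2, s0=1.0):
    """Coupled fine/coarse Euler paths of GBM sharing the same Brownian
    increments; returns n samples of P_l - P_{l-1} (or P_0 when l == 0)."""
    M = 2 ** l
    dt = T / M
    dW = rng.normal(0, np.sqrt(dt), (n, M))
    sf = np.full(n, s0)
    for k in range(M):                                 # fine path
        sf = sf * (1 + mu * dt + sigma * dW[:, k])
    if l == 0:
        return sf
    sc = np.full(n, s0)
    for k in range(M // 2):                            # coarse path: paired increments
        sc = sc * (1 + mu * 2 * dt + sigma * (dW[:, 2 * k] + dW[:, 2 * k + 1]))
    return sf - sc

rng = np.random.default_rng(10)
L, N = 5, [20000, 10000, 5000, 2500, 1250, 625]        # many coarse, few fine samples
est = sum(level_diff(l, N[l], rng).mean() for l in range(L + 1))
print(abs(est - np.exp(0.05)) < 0.02)                  # E[S_T] = exp(mu * T)
```

The coupling (shared increments) is what makes the level differences low-variance, so the expensive fine levels need only a handful of samples.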
Battogtokh, D.; Asch, D. K.; Case, M. E.; Arnold, J.; Schüttler, H.-B.
2002-01-01
A chemical reaction network for the regulation of the quinic acid (qa) gene cluster of Neurospora crassa is proposed. An efficient Monte Carlo method for walking through the parameter space of possible chemical reaction networks is developed to identify an ensemble of deterministic kinetics models with rate constants consistent with RNA and protein profiling data. This method was successful in identifying a model ensemble fitting available RNA profiling data on the qa gene cluster. PMID:12477937
Oh, Seok-Geun; Suh, Myoung-Seok
2016-03-01
The projection skills of five ensemble methods were analyzed according to simulation skills, training period, and ensemble members, using 198 sets of pseudo-simulation data (PSD) produced by random number generation assuming the simulated temperature of regional climate models. The PSD sets were classified into 18 categories according to the relative magnitude of bias, variance ratio, and correlation coefficient, where each category had 11 sets (including 1 truth set) with 50 samples. The ensemble methods used were as follows: equal weighted averaging without bias correction (EWA_NBC), EWA with bias correction (EWA_WBC), weighted ensemble averaging based on root mean square errors and correlation (WEA_RAC), WEA based on the Taylor score (WEA_Tay), and multivariate linear regression (Mul_Reg). The projection skills of the ensemble methods improved generally as compared with the best member for each category. However, their projection skills are significantly affected by the simulation skills of the ensemble member. The weighted ensemble methods showed better projection skills than non-weighted methods, in particular, for the PSD categories having systematic biases and various correlation coefficients. The EWA_NBC showed considerably lower projection skills than the other methods, in particular, for the PSD categories with systematic biases. Although Mul_Reg showed relatively good skills, it showed strong sensitivity to the PSD categories, training periods, and number of members. On the other hand, the WEA_Tay and WEA_RAC showed relatively superior skills in both the accuracy and reliability for all the sensitivity experiments. This indicates that WEA_Tay and WEA_RAC are applicable even for simulation data with systematic biases, a short training period, and a small number of ensemble members.
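A skill-weighted ensemble average of the WEA_RAC kind can be sketched as follows. The weighting formula here (correlation divided by RMSE, normalized) is an illustrative assumption, not necessarily the paper's exact definition:

```python
import numpy as np

def wea_rac_weights(members, obs, eps=1e-12):
    """Weights favouring members with low RMSE and high correlation
    (a sketch of the WEA_RAC idea; the paper's exact formula may differ)."""
    w = []
    for m in members:
        rmse = np.sqrt(np.mean((m - obs) ** 2))
        r = np.corrcoef(m, obs)[0, 1]
        w.append(max(r, 0.0) / (rmse + eps))
    w = np.array(w)
    return w / w.sum()

rng = np.random.default_rng(3)
obs = np.sin(np.linspace(0, 4 * np.pi, 120))           # toy "observed" climate series
good = obs + rng.normal(0, 0.1, 120)
biased = obs + 1.0 + rng.normal(0, 0.1, 120)           # systematic bias
noisy = obs + rng.normal(0, 1.0, 120)
w = wea_rac_weights([good, biased, noisy], obs)
print(w.argmax())  # the low-error, high-correlation member dominates
```

Because the bias enters the RMSE but not the correlation, such weights down-weight systematically biased members, which matches the finding above that weighted methods help most for categories with systematic biases.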
Full canonical information from grand-potential density-functional theory.
de Las Heras, Daniel; Schmidt, Matthias
2014-12-05
We present a general and formally exact method to obtain the canonical one-body density distribution and the canonical free energy from direct decomposition of classical density functional results in the grand ensemble. We test the method for confined one-dimensional hard-core particles for which the exact grand potential density functional is explicitly known. The results agree to within high accuracy with those from exact methods and our Monte Carlo many-body simulations. The method is relevant for treating finite systems and for dynamical density functional theory.
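The decomposition idea can be illustrated on the partition-function level: the grand partition function is Xi(z) = sum_N z^N Z_N, so "measurements" of Xi at several fugacities can be inverted for the canonical Z_N and hence canonical free energies. This toy uses an ideal gas with known Z_N, not the paper's density-functional machinery:

```python
import numpy as np
from math import factorial

# Toy system: ideal gas in volume V, canonical Z_N = V**N / N!.
V, kT, Nmax = 2.0, 1.0, 6
Z = np.array([V ** n / factorial(n) for n in range(Nmax + 1)])

# "Measure" the grand partition function at several fugacities z ...
zs = np.linspace(0.1, 1.0, Nmax + 1)
Xi = np.array([(z ** np.arange(Nmax + 1) * Z).sum() for z in zs])

# ... and invert Xi(z) = sum_N z^N Z_N (a Vandermonde system) to recover the
# canonical partition functions, hence canonical free energies F_N = -kT ln Z_N.
A = zs[:, None] ** np.arange(Nmax + 1)[None, :]
Z_rec = np.linalg.solve(A, Xi)
F3 = -kT * np.log(Z_rec[3])            # canonical free energy at fixed N = 3
print(np.allclose(Z_rec, Z))
```

The same logic, applied to the one-body density rather than the partition function, is what lets grand-ensemble functional results be decomposed into canonical contributions.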
Institute of Scientific and Technical Information of China (English)
JIANG Zhina; MU Mu
2009-01-01
The authors apply the technique of conditional nonlinear optimal perturbations (CNOPs) as a means of providing initial perturbations for ensemble forecasting by using a barotropic quasi-geostrophic (QG) model in a perfect-model scenario. Ensemble forecasts for the medium range (14 days) are made from initial states perturbed by CNOPs and singular vectors (SVs). Thirteen cases in which the analysis error is a fast-growing error have been chosen. Our experiments show that the introduction of CNOP provides better forecast skill than the SV method. Moreover, the spread-skill relationship reveals that the ensemble samples in which the first SV is replaced by CNOP appear superior to those obtained by SVs from day 6 to day 14. Rank diagrams are adopted to compare the new method with the SV approach. The results illustrate that the introduction of CNOP has higher reliability for medium-range ensemble forecasts.
Exploring ensemble visualization
Phadke, Madhura N.; Pinto, Lifford; Alabi, Oluwafemi; Harter, Jonathan; Taylor, Russell M., II; Wu, Xunlei; Petersen, Hannah; Bass, Steffen A.; Healey, Christopher G.
2012-01-01
An ensemble is a collection of related datasets. Each dataset, or member, of an ensemble is normally large, multidimensional, and spatio-temporal. Ensembles are used extensively by scientists and mathematicians, for example, by executing a simulation repeatedly with slightly different input parameters and saving the results in an ensemble to see how parameter choices affect the simulation. To draw inferences from an ensemble, scientists need to compare data both within and between ensemble members. We propose two techniques to support ensemble exploration and comparison: a pairwise sequential animation method that visualizes locally neighboring members simultaneously, and a screen door tinting method that visualizes subsets of members using screen space subdivision. We demonstrate the capabilities of both techniques, first using synthetic data, then with simulation data of heavy ion collisions in high-energy physics. Results show that both techniques are capable of supporting meaningful comparisons of ensemble data.
Trajectory study of dissociation reactions. The single-ensemble method. II
Kutz, H. Douglas; Burns, George
1981-04-01
The single uniform ensemble method was previously employed in 3D classical trajectory calculations [H. D. Kutz and G. Burns, J. Chem. Phys. 72, 3652 (1980)]. Here it is applied to the Br2+Ar system to study nonequilibrium effects in diatom dissociation over a wide temperature range. It was found that, for a given large set of trajectories, observables such as reaction cross sections or rate constants are independent, to within four significant figures, of the initial distribution function. This indicates a high degree of reliability of the single uniform ensemble method, once the choice of a set of trajectories is made. In order to study dissociation from the low-lying energy states, the uniform velocity selection method in trajectory calculations was used. It was found that dissociation from these states contributes but little to the overall dissociation reaction. The latter finding is consistent with the attractive nature of the potential energy surface used, and constitutes an argument against those current theories of diatom dissociation which explain experimental data by postulating a high probability of dissociation from low-lying energy states of diatoms. It was found that the contribution from the low-lying states to dissociation can be estimated with good accuracy using information theory expressions. The temperature dependence of nonequilibrium effects was investigated between 1500 and 6000 K. In this range the nonequilibrium correction factor varies between 0.2 and 0.5. The angular momentum dependence of observables such as the reaction rate constant and reaction cross section was also investigated.
An ensemble method for data stream classification in the presence of concept drift
Institute of Scientific and Technical Information of China (English)
Omid ABBASZADEH; Ali AMIRI‡; Ali Reza KHANTEYMOORI
2015-01-01
One recent area of interest in computer science is data stream management and processing. By ‘data stream’, we refer to continuous and rapidly generated packages of data. Specific features of data streams are immense volume, high production rate, limited data processing time, and data concept drift; these features differentiate the data stream from standard types of data. An issue for the data stream is classification of input data. A novel ensemble classifier is proposed in this paper. The classifier uses base classifiers of two weighting functions under different data input conditions. In addition, a new method is used to determine drift, which emphasizes the precision of the algorithm. Another characteristic of the proposed method is removal of different numbers of the base classifiers based on their quality. Implementation of a weighting mechanism to the base classifiers at the decision-making stage is another advantage of the algorithm. This facilitates adaptability when drifts take place, which leads to classifiers with higher efficiency. Furthermore, the proposed method is tested on a set of standard data and the results confirm higher accuracy compared to available ensemble classifiers and single classifiers. In addition, in some cases the proposed classifier is faster and needs less storage space.
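The three ingredients described above, accuracy-based weighting, quality-based removal of members, and drift detection, can be sketched generically. This is an illustrative toy, not the paper's classifier; the window sizes and thresholds are hypothetical:

```python
import numpy as np

class DriftVotingEnsemble:
    """Accuracy-weighted voting ensemble for a binary data stream (sketch).
    Members whose recent accuracy falls below `prune` are dropped, mimicking
    quality-based removal; a sharp drop in mean accuracy is flagged as drift."""
    def __init__(self, members, window=50, prune=0.55, drift_drop=0.2):
        self.members = list(members)                 # callables x -> {0, 1}
        self.hist = [[] for _ in self.members]       # per-member hit history
        self.window, self.prune, self.drift_drop = window, prune, drift_drop
        self.prev_acc = None

    def predict(self, x):
        accs = [np.mean(h[-self.window:]) if h else 0.5 for h in self.hist]
        votes = np.array([m(x) for m in self.members], float)
        return int(np.dot(accs, votes) >= 0.5 * sum(accs))

    def update(self, x, y):
        for h, m in zip(self.hist, self.members):
            h.append(int(m(x) == y))
        accs = [np.mean(h[-self.window:]) for h in self.hist]
        # prune chronically weak members (but always keep at least one)
        keep = [i for i, a in enumerate(accs) if a >= self.prune] or [int(np.argmax(accs))]
        self.members = [self.members[i] for i in keep]
        self.hist = [self.hist[i] for i in keep]
        acc = float(np.mean([accs[i] for i in keep]))
        drift = self.prev_acc is not None and self.prev_acc - acc > self.drift_drop
        self.prev_acc = acc
        return drift

good = lambda x: int(x > 0)       # matches the stream's concept
bad = lambda x: 0                 # constant, ~50% accurate
ens = DriftVotingEnsemble([good, bad], prune=0.6)
rng = np.random.default_rng(4)
for _ in range(100):
    x = rng.uniform(-1, 1)
    ens.update(x, int(x > 0))
print(len(ens.members))           # the constant classifier gets pruned
```

The `update` return value plays the role of the drift signal: a drop in ensemble accuracy beyond `drift_drop` between consecutive steps is treated as a concept change.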
Boosting iterative stochastic ensemble method for nonlinear calibration of subsurface flow models
Elsheikh, Ahmed H.
2013-06-01
A novel parameter estimation algorithm is proposed. The inverse problem is formulated as a sequential data integration problem in which Gaussian process regression (GPR) is used to integrate the prior knowledge (static data). The search space is further parameterized using a Karhunen-Loève expansion to build a set of basis functions that spans the search space. Optimal weights of the reduced basis functions are estimated by an iterative stochastic ensemble method (ISEM). ISEM employs directional derivatives within a Gauss-Newton iteration for efficient gradient estimation. The resulting update equation relies on the inverse of the output covariance matrix, which is rank deficient. In the proposed algorithm we use an iterative regularization based on the ℓ2 Boosting algorithm. ℓ2 Boosting iteratively fits the residual, and the amount of regularization is controlled by the number of iterations. A termination criterion based on the Akaike information criterion (AIC) is utilized. This regularization method is very attractive in terms of performance and simplicity of implementation. The proposed algorithm combining ISEM and ℓ2 Boosting is evaluated on several nonlinear subsurface flow parameter estimation problems. The efficiency of the proposed algorithm is demonstrated by the small size of the utilized ensembles and in terms of error convergence rates. © 2013 Elsevier B.V.
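The regularization ingredient, ℓ2 Boosting with an AIC-based stopping rule, can be shown in isolation on a linear regression toy. This is a componentwise least-squares variant on synthetic data, not the paper's ISEM coupling:

```python
import numpy as np

def l2boost_aic(X, y, nu=0.1, max_iter=500):
    """Componentwise ℓ2 Boosting: repeatedly fit the residual with the single
    best predictor, shrunk by nu.  The iteration count acts as regularization;
    an AIC-style criterion picks the stopping point."""
    n, p = X.shape
    f = np.zeros(n)
    B = np.zeros(p)
    coefs, aics = [], []
    for _ in range(max_iter):
        r = y - f
        b = np.array([X[:, j] @ r / (X[:, j] @ X[:, j]) for j in range(p)])
        sse = [np.sum((r - b[j] * X[:, j]) ** 2) for j in range(p)]
        j = int(np.argmin(sse))           # predictor most helpful for the residual
        B[j] += nu * b[j]
        f = X @ B
        k = np.count_nonzero(B)           # crude model-size proxy for AIC
        rss = np.sum((y - f) ** 2)
        aics.append(n * np.log(rss / n) + 2 * k)
        coefs.append(B.copy())
    return coefs[int(np.argmin(aics))]

rng = np.random.default_rng(5)
X = rng.normal(size=(200, 10))
beta = np.zeros(10)
beta[:3] = [2.0, -1.5, 1.0]               # sparse truth
y = X @ beta + rng.normal(0, 0.5, 200)
B = l2boost_aic(X, y)
print(sorted(int(i) for i in np.argsort(-np.abs(B))[:3]))
```

Early stopping plays the role that an explicit penalty would otherwise play, which is why the abstract describes it as attractive for both performance and simplicity.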
Radhakrishnan, Mala L; Tidor, Bruce
2008-05-01
Drug resistance is a significant obstacle in the effective treatment of diseases with rapidly mutating targets, such as AIDS, malaria, and certain forms of cancer. Such targets are remarkably efficient at exploring the space of functional mutants and at evolving to evade drug binding while still maintaining their biological role. To overcome this challenge, drug regimens must be active against potential target variants. Such a goal may be accomplished by one drug molecule that recognizes multiple variants or by a drug "cocktail"--a small collection of drug molecules that collectively binds all desired variants. Ideally, one wants the smallest cocktail possible due to the potential for increased toxicity with each additional drug. Therefore, the task of designing a regimen for multiple target variants can be framed as an optimization problem--find the smallest collection of molecules that together "covers" the relevant target variants. In this work, we formulate and apply this optimization framework to theoretical model target ensembles. These results are analyzed to develop an understanding of how the physical properties of a target ensemble relate to the properties of the optimal cocktail. We focus on electrostatic variation within target ensembles, as it is one important mechanism by which drug resistance is achieved. Using integer programming, we systematically designed optimal cocktails to cover model target ensembles. We found that certain drug molecules covered much larger regions of target space than others, a phenomenon explained by theory grounded in continuum electrostatics. Molecules within optimal cocktails were often dissimilar, such that each drug was responsible for binding variants with a certain electrostatic property in common. On average, the number of molecules in the optimal cocktails correlated with the number of variants, the differences in the variants' electrostatic properties at the binding interface, and the level of binding affinity
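The optimization framed above is a minimum set cover: find the fewest drugs whose combined coverage includes every variant. The paper solves it with integer programming; for small ensembles an exhaustive search is an adequate stand-in (the drug/variant names below are hypothetical):

```python
from itertools import combinations

def min_cocktail(coverage, variants):
    """Exact smallest cocktail by brute force: the fewest drugs whose combined
    coverage includes every target variant (an ILP stand-in, fine for small
    ensembles; returns None if no cocktail covers everything)."""
    drugs = list(coverage)
    for k in range(1, len(drugs) + 1):
        for combo in combinations(drugs, k):
            if set().union(*(coverage[d] for d in combo)) >= set(variants):
                return list(combo)
    return None

# Toy electrostatic picture: each drug binds variants of a compatible charge class
coverage = {
    "drugA": {"v1", "v2"},        # binds the more negative variants
    "drugB": {"v3", "v4"},        # binds the more positive variants
    "drugC": {"v2", "v3"},
}
print(min_cocktail(coverage, ["v1", "v2", "v3", "v4"]))  # ['drugA', 'drugB']
```

This mirrors the paper's observation that optimal cocktails pair dissimilar molecules: each member is responsible for one electrostatic class of variants.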
Ensemble approach combining multiple methods improves human transcription start site prediction
LENUS (Irish Health Repository)
Dineen, David G
2010-11-30
Abstract Background The computational prediction of transcription start sites is an important unsolved problem. Some recent progress has been made, but many promoters, particularly those not associated with CpG islands, are still difficult to locate using current methods. These methods use different features and training sets, along with a variety of machine learning techniques, and result in different prediction sets. Results We demonstrate the heterogeneity of current prediction sets, and take advantage of this heterogeneity to construct a two-level classifier ('Profisi Ensemble') using predictions from 7 programs, along with 2 other data sources. Support vector machines using 'full' and 'reduced' data sets are combined in an either/or approach. We achieve a 14% increase in performance over the current state-of-the-art, as benchmarked by a third-party tool. Conclusions Supervised learning methods are a useful way to combine predictions from diverse sources.
A Fuzzy Integral Ensemble Method in Visual P300 Brain-Computer Interface
Directory of Open Access Journals (Sweden)
Francesco Cavrini
2016-01-01
We evaluate the possibility of application of combination of classifiers using fuzzy measures and integrals to Brain-Computer Interface (BCI) based on electroencephalography. In particular, we present an ensemble method that can be applied to a variety of systems and evaluate it in the context of a visual P300-based BCI. Offline analysis of data relative to 5 subjects lets us argue that the proposed classification strategy is suitable for BCI. Indeed, the achieved performance is significantly greater than the average of the base classifiers and, broadly speaking, similar to that of the best one. Thus the proposed methodology allows realizing systems that can be used by different subjects without the need for a preliminary configuration phase in which the best classifier for each user has to be identified. Moreover, the ensemble is often capable of detecting uncertain situations and turning them from misclassifications into abstentions, thereby improving the level of safety in BCI for environmental or device control.
Study of Phase Equilibria of Petrochemical Fluids using Gibbs Ensemble Monte Carlo Methods
Nath, Shyamal
2001-03-01
Knowledge of the phase behavior of hydrocarbons and related compounds is of great interest to the chemical and petrochemical industries, for example in the design of processes such as supercritical fluid extraction, petroleum refining, enhanced oil recovery, gas treatment, and fractionation of wax products. A precise knowledge of the phase equilibria of alkanes, alkenes and related compounds and their mixtures is required for efficient design of these processes. Experimental studies of the relevant phase equilibria are often unsuitable for various reasons. With the advancement of simulation technology, molecular simulations can provide a useful complement and alternative for studying and describing the phase behavior of these systems. In this work we study vapor-liquid phase equilibria of pure hydrocarbons and their mixtures using Gibbs ensemble simulation. Insertion of long and articulated chain molecules is facilitated in our simulations by means of configurational-bias and expanded-ensemble methods. We use the newly developed NERD force field, extended in this work to cover hydrocarbons of arbitrary architecture. Our simulation results show excellent quantitative agreement with available experimental phase equilibrium data for both pure components and mixtures.
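The core Gibbs-ensemble move is the particle transfer between the two boxes, accepted with probability min(1, N1·V2 / ((N2+1)·V1) · exp(-βΔU)). A minimal sketch for the ideal-gas limit (ΔU = 0), where transfers alone should equalize the densities; a real hydrocarbon simulation would of course evaluate ΔU from the force field:

```python
import numpy as np

def gibbs_transfer(N1, N2, V1, V2, dU=0.0, beta=1.0, rng=None):
    """Gibbs-ensemble particle-transfer move, box 1 -> box 2, accepted with
    probability min(1, N1*V2 / ((N2 + 1)*V1) * exp(-beta*dU))."""
    rng = rng or np.random.default_rng()
    if N1 == 0:
        return N1, N2
    acc = N1 * V2 / ((N2 + 1) * V1) * np.exp(-beta * dU)
    if rng.random() < min(1.0, acc):
        return N1 - 1, N2 + 1
    return N1, N2

# Ideal-gas check: starting from a strongly unequal split, transfers
# drive the two boxes toward equal density.
rng = np.random.default_rng(9)
N1, N2, V1, V2 = 190, 10, 1.0, 1.0
for _ in range(20000):
    if rng.random() < 0.5:
        N1, N2 = gibbs_transfer(N1, N2, V1, V2, rng=rng)
    else:
        N2, N1 = gibbs_transfer(N2, N1, V2, V1, rng=rng)
print(abs(N1 - N2) < 60)  # densities fluctuate around equality
```

Configurational-bias and expanded-ensemble machinery, as used in the abstract, exists precisely to make this transfer step feasible for long chain molecules, where a blind insertion would almost always overlap existing particles.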
Ahn, Joong-Bae; Lee, Joonlee
2016-08-01
A new multimodel ensemble (MME) method that uses a genetic algorithm (GA) is developed and applied to the prediction of winter surface air temperature (SAT) and precipitation. The GA, based on the biological process of natural evolution, is a nonlinear method for solving nonlinear optimization problems. Hindcast data of winter SAT and precipitation from the six coupled general circulation models participating in the seasonal MME prediction system of the Asia-Pacific Economic Cooperation Climate Center are used. Three MME methods using GA (MME/GAs) are examined in comparison with a simple composite MME strategy (MS0): MS1, which applies GA to single-model ensembles (SMEs); MS2, which applies GA to each ensemble member and then performs a simple composite method for MME; and MS3, which applies GA to both MME and SME. MS3 shows the highest predictability compared to MS0, MS1, and MS2 for both winter SAT and precipitation. These results indicate that biases of ensemble members of each model and model ensemble are more reduced with MS3 than with other MME/GAs and MS0. The predictability of the MME/GAs shows a greater improvement than that of MS0, particularly in higher-latitude land areas. The reason for the greater improvement of predictability over land, particularly in MS3, seems to be that GA is more efficient at finding an optimum solution in a complex region where nonlinear physical properties are evident.
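A GA-based search for MME combination weights can be sketched as follows. This is a generic toy (tournament selection, blend crossover, Gaussian mutation) on synthetic hindcasts, not the operational MS1-MS3 configurations:

```python
import numpy as np

def ga_mme_weights(models, obs, pop=40, gens=60, rng=None):
    """Tiny genetic algorithm for MME combination weights: evolve a population
    of weight vectors, select by negative RMSE, blend-cross, and mutate."""
    rng = rng or np.random.default_rng()
    M = models.shape[0]
    P = rng.uniform(0, 1, (pop, M))

    def fitness(W):
        pred = (W / W.sum(axis=1, keepdims=True)) @ models
        return -np.sqrt(np.mean((pred - obs) ** 2, axis=1))    # negative RMSE

    for _ in range(gens):
        f = fitness(P)
        i, j = rng.integers(0, pop, (2, pop))                  # tournament selection
        parents = P[np.where(f[i] > f[j], i, j)]
        alpha = rng.uniform(size=(pop, 1))                     # blend crossover
        children = alpha * parents + (1 - alpha) * parents[rng.permutation(pop)]
        children += rng.normal(0, 0.05, children.shape)        # Gaussian mutation
        P = np.clip(children, 1e-6, None)
        P[0] = parents[np.argmax(fitness(parents))]            # elitism
    best = P[np.argmax(fitness(P))]
    return best / best.sum()

rng = np.random.default_rng(6)
obs = rng.normal(0, 1, 120)                                    # toy hindcast target
skilled = obs + rng.normal(0, 0.2, 120)
poor = rng.normal(0, 1, 120)                                   # no skill
w = ga_mme_weights(np.vstack([skilled, poor]), obs, rng=rng)
print(w[0] > w[1])  # weight concentrates on the skilled model
```

Because the fitness surface over weights is smooth here, even this small population suffices; the appeal of the GA in the abstract is that it needs no gradient and tolerates the nonlinear, noisy skill landscapes of real hindcasts.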
Comparing generalized ensemble methods for sampling of systems with many degrees of freedom
Lincoff, James; Sasmal, Sukanya; Head-Gordon, Teresa
2016-11-01
We compare two standard replica exchange methods using temperature and dielectric constant as the scaling variables for independent replicas against two new corresponding enhanced sampling methods based on non-equilibrium statistical cooling (temperature) or descreening (dielectric). We test the four methods on a rough 1D potential as well as for alanine dipeptide in water, for which their relatively small phase space allows for the ability to define quantitative convergence metrics. We show that both dielectric methods are inferior to the temperature enhanced sampling methods, and in turn show that temperature cool walking (TCW) systematically outperforms the standard temperature replica exchange (TREx) method. We extend our comparisons of the TCW and TREx methods to the 5 residue met-enkephalin peptide, in which we evaluate the Kullback-Leibler divergence metric to show that the rate of convergence between two independent trajectories is faster for TCW compared to TREx. Finally we apply the temperature methods to the 42 residue amyloid-β peptide in which we find non-negligible differences in the disordered ensemble using TCW compared to the standard TREx. All four methods have been made available as software through the OpenMM Omnia software consortium (http://www.omnia.md/).
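The baseline TREx method compared above alternates within-replica Metropolis moves with neighbour swaps accepted with probability min(1, exp((βᵢ − βⱼ)(Uᵢ − Uⱼ))). A minimal sketch on a 1D tilted double well, in the spirit of the rough 1D potential test (the potential and ladder are hypothetical):

```python
import numpy as np

def trex_sweep(xs, betas, U, rng, step=0.5):
    """One sweep of temperature replica exchange (TREx): Metropolis moves
    within each replica, then neighbour swaps accepted with probability
    min(1, exp((beta_i - beta_j) * (U_i - U_j)))."""
    for k, b in enumerate(betas):
        prop = xs[k] + rng.normal(0, step)
        dU = U(prop) - U(xs[k])
        if dU <= 0 or rng.random() < np.exp(-b * dU):
            xs[k] = prop
    for k in range(len(betas) - 1):
        d = (betas[k] - betas[k + 1]) * (U(xs[k]) - U(xs[k + 1]))
        if d >= 0 or rng.random() < np.exp(d):
            xs[k], xs[k + 1] = xs[k + 1], xs[k]
    return xs

U = lambda x: x ** 4 - 3 * x ** 2 + 0.5 * x       # rough tilted double well
rng = np.random.default_rng(7)
betas = [4.0, 2.0, 1.0, 0.5]                      # cold ... hot ladder
xs = [1.0, 1.0, 1.0, 1.0]
cold = [trex_sweep(xs, betas, U, rng)[0] for _ in range(5000)]
frac_left = float(np.mean(np.array(cold) < 0))
print(0.0 < frac_left < 1.0)                      # cold replica crosses the barrier
```

The hot replica crosses the barrier freely and the swaps propagate those crossings down the ladder, which is exactly the mechanism whose convergence rate the paper benchmarks TCW against.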
Aken, Bronwen L.; Achuthan, Premanand; Akanni, Wasiu; Amode, M. Ridwan; Bernsdorff, Friederike; Bhai, Jyothish; Billis, Konstantinos; Carvalho-Silva, Denise; Cummins, Carla; Clapham, Peter; Gil, Laurent; Girón, Carlos García; Gordon, Leo; Hourlier, Thibaut; Hunt, Sarah E.; Janacek, Sophie H.; Juettemann, Thomas; Keenan, Stephen; Laird, Matthew R.; Lavidas, Ilias; Maurel, Thomas; McLaren, William; Moore, Benjamin; Murphy, Daniel N.; Nag, Rishi; Newman, Victoria; Nuhn, Michael; Ong, Chuang Kee; Parker, Anne; Patricio, Mateus; Riat, Harpreet Singh; Sheppard, Daniel; Sparrow, Helen; Taylor, Kieron; Thormann, Anja; Vullo, Alessandro; Walts, Brandon; Wilder, Steven P.; Zadissa, Amonida; Kostadima, Myrto; Martin, Fergal J.; Muffato, Matthieu; Perry, Emily; Ruffier, Magali; Staines, Daniel M.; Trevanion, Stephen J.; Cunningham, Fiona; Yates, Andrew; Zerbino, Daniel R.; Flicek, Paul
2017-01-01
Ensembl (www.ensembl.org) is a database and genome browser for enabling research on vertebrate genomes. We import, analyse, curate and integrate a diverse collection of large-scale reference data to create a more comprehensive view of genome biology than would be possible from any individual dataset. Our extensive data resources include evidence-based gene and regulatory region annotation, genome variation and gene trees. An accompanying suite of tools, infrastructure and programmatic access methods ensure uniform data analysis and distribution for all supported species. Together, these provide a comprehensive solution for large-scale and targeted genomics applications alike. Among many other developments over the past year, we have improved our resources for gene regulation and comparative genomics, and added CRISPR/Cas9 target sites. We released new browser functionality and tools, including improved filtering and prioritization of genome variation, Manhattan plot visualization for linkage disequilibrium and eQTL data, and an ontology search for phenotypes, traits and disease. We have also enhanced data discovery and access with a track hub registry and a selection of new REST end points. All Ensembl data are freely released to the scientific community and our source code is available via the open source Apache 2.0 license. PMID:27899575
Meng, S.; Xie, X.
2014-12-01
Hydrological model performance is usually not as acceptable as expected due to limited measurements and imperfect parameterization, which is attributable to uncertainties from model parameters and model structures. In applications, a general assumption is held that model parameters are constant under a stationary condition during the simulation period, and the parameters are generally prescribed through calibration with observed data. In reality, however, the model parameters related to the physical or conceptual characteristics of a catchment will vary under nonstationary conditions in response to climate transition and land use alteration. Such parameter changes are especially evident in long-term hydrological simulations. Therefore, the assumption of constant parameters under nonstationary conditions is inappropriate, and it will propagate errors from the parameters to the outputs during simulation and prediction. Even though a few studies have acknowledged such parameter change, little attention has been paid to the estimation of changing parameters. In this study, we employ an ensemble Kalman filter (EnKF) based method to trace parameter changes in real time. Through synthetic experiments, the capability of the EnKF-based method is demonstrated by assimilating runoff observations into a rainfall-runoff model, i.e., the Xinanjiang model. In addition to the stationary condition, three typical nonstationary conditions are considered, i.e., leap, linear and Ω-shaped transitions. To examine the robustness of the method, different errors from rainfall input, modelling and observations are investigated. The shuffled complex evolution (SCE-UA) algorithm is applied under the same conditions for comparison. The results show that the EnKF-based method is capable of capturing the general pattern of parameter change even for high levels of uncertainty. It provides better estimates than the SCE-UA method does by taking advantage of real
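The EnKF-based parameter-tracking idea can be reduced to a scalar sketch: treat the parameter itself as the state, map it to predicted runoff, and apply the stochastic EnKF update with inflation to keep the ensemble from collapsing. The toy runoff model q = k·rain and all settings below are illustrative, not the Xinanjiang model:

```python
import numpy as np

def enkf_param_step(params, obs, h, obs_err, infl=1.05, rng=None):
    """One stochastic-EnKF analysis step for parameter estimation: the state
    is the parameter ensemble, h maps parameter -> predicted runoff, and
    perturbed observations give the classic stochastic update."""
    rng = rng or np.random.default_rng()
    params = params.mean() + infl * (params - params.mean())   # inflation fights collapse
    pred = h(params)
    cov_py = np.mean((params - params.mean()) * (pred - pred.mean()))
    var_yy = np.var(pred) + obs_err ** 2
    gain = cov_py / var_yy                                     # scalar Kalman gain
    obs_pert = obs + rng.normal(0, obs_err, params.size)       # perturbed observations
    return params + gain * (obs_pert - pred)

# Synthetic experiment: toy runoff model q = k * rain with a linearly
# drifting "true" parameter k (a nonstationary catchment).
rng = np.random.default_rng(8)
ens = rng.normal(0.3, 0.1, 100)                # initial ensemble of k
for t in range(200):
    k_true = 0.3 + 0.002 * t                   # linear parameter transition
    rain = 7.0 + 3.0 * np.sin(0.1 * t)
    q_obs = k_true * rain + rng.normal(0, 0.1)
    ens = enkf_param_step(ens, q_obs, lambda p: p * rain, 0.1, rng=rng)
err = abs(ens.mean() - (0.3 + 0.002 * 199))
print(err < 0.1)  # the filter tracks the drifting parameter
```

The inflation factor is what lets the filter keep following a moving parameter; without it, the ensemble spread shrinks and the estimate freezes at an early value.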
DEFF Research Database (Denmark)
Senjean, Bruno; Knecht, Stefan; Jensen, Hans Jørgen Aa
2015-01-01
… of range separation and use of the slope of the linearly interpolated ensemble energy, rather than orbital energies. The range-separated approach is appealing, as it enables the rigorous formulation of a multideterminant state-averaged DFT method. In the exact theory, the short-range density functional … -independent) ground-state short-range exchange-correlation functional is used in this context, curvature appears, thus leading to an approximate weight-dependent excitation energy. In order to obtain unambiguous approximate excitation energies, we propose to interpolate linearly the ensemble energy between … promising results have been obtained for both single (including charge transfer) and double excitations with spin-independent short-range local and semilocal functionals. Even at the Kohn-Sham ensemble DFT level, which is recovered when the range-separation parameter is set to 0, LIM performs better than …
Design Hybrid method for intrusion detection using Ensemble cluster classification and SOM network
Directory of Open Access Journals (Sweden)
Deepak Rathore
2012-09-01
Full Text Available In the current scenario of internet technology, security is a big challenge. Internet networks are threatened by various cyber-attacks that steal system data and degrade the performance of the host computer. In this sense, intrusion detection is a challenging field of research concerning network security based on firewalls and rule-based detection techniques. In this paper we propose an Ensemble Cluster Classification technique using a SOM network for the detection of mixed-variable data generated by malicious software for attack purposes on a host system. In our methodology, the SOM network controls the iteration of the distances of the different ensembling parameters. Our experimental results show better empirical performance on the KDD 99 data set in comparison with existing ensemble classifiers.
DEFF Research Database (Denmark)
Olesen, Alexander Neergaard; Christensen, Julie Anja Engelhard; Sørensen, Helge Bjarup Dissing;
2016-01-01
(EOG) signals by presenting a method for automatic sleep staging using the complete ensemble empirical mode decomposition with adaptive noise algorithm, and a random forest classifier. It achieves a high overall accuracy of 82% and a Cohen’s kappa of 0.74 indicating substantial agreement between...
An ensemble method with hybrid features to identify extracellular matrix proteins.
Yang, Runtao; Zhang, Chengjin; Gao, Rui; Zhang, Lina
2015-01-01
The extracellular matrix (ECM) is a dynamic composite of secreted proteins that play important roles in numerous biological processes such as tissue morphogenesis, differentiation and homeostasis. Furthermore, various diseases are caused by the dysfunction of ECM proteins. Therefore, identifying these important ECM proteins may assist in understanding related biological processes and drug development. In view of the serious imbalance in the training dataset, a Random Forest-based ensemble method with hybrid features is developed in this paper to identify ECM proteins. Hybrid features are employed by incorporating sequence composition, physicochemical properties, evolutionary and structural information. The Information Gain Ratio and Incremental Feature Selection (IGR-IFS) methods are adopted to select the optimal features. Finally, the resulting predictor termed IECMP (Identify ECM Proteins) achieves a balanced accuracy of 86.4% using 10-fold cross-validation on the training dataset, which is much higher than results obtained by other methods (ECMPRED: 71.0%, ECMPP: 77.8%). Moreover, when tested on a common independent dataset, our method also achieves significantly improved performance over ECMPP and ECMPRED. These results indicate that IECMP is an effective method for ECM protein prediction, which has a more balanced prediction capability for positive and negative samples. It is anticipated that the proposed method will provide significant information to fully decipher the molecular mechanisms of ECM-related biological processes and discover candidate drug targets. For public access, we develop a user-friendly web server for ECM protein identification that is freely accessible at http://iecmp.weka.cc.
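A toy sketch of the Information Gain Ratio ranking step that the IGR-IFS procedure above builds on. The data are synthetic binary features (the paper's hybrid sequence, physicochemical and structural features are not reproduced); the point is only how IGR scores separate informative features from noise before the incremental selection stage.

```python
import numpy as np

rng = np.random.default_rng(1)

def entropy(y):
    # Shannon entropy of a 0/1 label vector, in bits
    p = np.bincount(y, minlength=2) / len(y)
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

def info_gain_ratio(x, y):
    """Information gain ratio of a binary feature x w.r.t. labels y."""
    H = entropy(y)
    cond, split = 0.0, 0.0
    for v in (0, 1):
        mask = x == v
        w = mask.mean()
        if 0 < w < 1:
            cond += w * entropy(y[mask])
            split -= w * np.log2(w)
    return (H - cond) / split if split > 0 else 0.0

# synthetic data: features 0 and 1 carry label information, the rest is noise
n = 400
y = rng.integers(0, 2, n)
X = rng.integers(0, 2, (n, 8))
X[:, 0] = y ^ (rng.random(n) < 0.10)   # agrees with the label ~90% of the time
X[:, 1] = y ^ (rng.random(n) < 0.25)   # agrees ~75% of the time

scores = np.array([info_gain_ratio(X[:, j], y) for j in range(X.shape[1])])
ranking = np.argsort(scores)[::-1]     # IFS would now add features in this order
```

Incremental feature selection then evaluates a classifier on the top-1, top-2, ... feature subsets in `ranking` order and keeps the best-scoring prefix.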
Directory of Open Access Journals (Sweden)
Jiang Tianzi
2004-09-01
Full Text Available Abstract Background Microarray experiments are becoming a powerful tool for clinical diagnosis, as they have the potential to discover gene expression patterns that are characteristic of a particular disease. To date, this problem has received most attention in the context of cancer research, especially in tumor classification. Various feature selection methods and classifier design strategies have been used and compared. However, most published articles on tumor classification have applied a certain technique to a certain dataset, and recently several researchers have compared these techniques on several public datasets. It has been verified that differently selected features reflect different aspects of a dataset and that some selected features yield better solutions for certain problems. At the same time, faced with a large amount of microarray data and little prior knowledge, it is difficult to find the intrinsic characteristics using traditional methods. In this paper, we attempt to introduce a combinational feature selection method in conjunction with ensemble neural networks to generally improve the accuracy and robustness of sample classification. Results We validate our new method on several recent publicly available datasets, both with predictive accuracy on testing samples and through cross validation. Compared with the best performance of other current methods, remarkably improved results can be obtained using our new strategy on a wide range of different datasets. Conclusions Thus, we conclude that our methods can obtain more information from microarray data to achieve more accurate classification, and can also help to extract the latent marker genes of the diseases for better diagnosis and treatment.
Making Tree Ensembles Interpretable
Hara, Satoshi; Hayashi, Kohei
2016-01-01
Tree ensembles, such as random forests and boosted trees, are renowned for their high prediction performance, whereas their interpretability is critically limited. In this paper, we propose a post-processing method that improves the interpretability of tree ensembles. After learning a complex tree ensemble in a standard way, we approximate it by a simpler model that is interpretable for humans. To obtain the simpler model, we derive an EM algorithm minimizing the KL divergence from the ...
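The paper above minimizes a KL divergence with an EM algorithm; the sketch below shows only the simpler underlying idea of post-hoc distillation, where an interpretable model is fit to imitate the ensemble's predictions. The "complex" ensemble here is 100 bootstrap-randomized stumps on synthetic 1-D data, and it is distilled into a single threshold rule.

```python
import numpy as np

rng = np.random.default_rng(2)

# a "complex" ensemble: 100 bootstrap-randomized stumps fit to noisy step data
x_train = rng.uniform(-1, 1, 200)
y_train = np.sign(x_train) + rng.normal(0.0, 0.3, 200)
stumps = []
for _ in range(100):
    idx = rng.integers(0, 200, 200)            # bootstrap resample
    t = rng.uniform(-1, 1)                     # randomized split point
    xl, yl = x_train[idx], y_train[idx]
    left, right = yl[xl < t], yl[xl >= t]
    stumps.append((t,
                   left.mean() if left.size else 0.0,
                   right.mean() if right.size else 0.0))

def ensemble(x):
    # average the 100 stump predictions
    return np.mean([np.where(x < t, a, b) for t, a, b in stumps], axis=0)

# distil the ensemble into ONE stump by imitating its predictions on a grid
grid = np.linspace(-1, 1, 201)
f = ensemble(grid)
candidates = [(t, f[grid < t].mean(), f[grid >= t].mean())
              for t in np.linspace(-0.9, 0.9, 37)]
best = min(candidates,
           key=lambda s: ((np.where(grid < s[0], s[1], s[2]) - f) ** 2).sum())
```

`best` is the single human-readable rule "if x < threshold predict a, else b" that best matches the black-box ensemble; since the underlying data follow sign(x), the recovered threshold lands near zero.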
You, Wei; Wang, Yuanyuan
2011-11-01
In ultrasound color flow imaging (CFI), single-ensemble eigen-based filters can reject clutter components using each slow-time ensemble individually, and they have shown excellent spatial adaptability. This article proposes a novel clutter rejection method called the single-ensemble geometry filter (SGF), which is derived from an analytic geometry perspective. If the transmitted pulse number M equals two, the clutter component distribution on a two-dimensional (2-D) plane will be similar to a tilted ellipse; therefore, the direction of the major axis of the ellipse can be used as the first principal component of the autocorrelation matrix estimated from multiple ensembles. The algorithm is then generalized from 2-D to a higher-dimensional space by using linear algebra representations of the ellipse. Comparisons have been made with the high-pass filter (HPF), the Hankel singular value decomposition (SVD) filter and the recursive eigen-decomposition (RED) method using both simulated and human carotid data. Results show that, compared with HPF and Hankel-SVD, the proposed filter causes less bias in the velocity estimation when the clutter velocity is close to that of the blood flow. On the other hand, the proposed filter does not need to update the autocorrelation matrix and can achieve better spatial adaptability than the RED method.
An ensemble method for gene discovery based on DNA microarray data
Institute of Scientific and Technical Information of China (English)
Anonymous
2004-01-01
The advent of DNA microarray technology has offered the promise of casting new insights into deciphering the secrets of life by monitoring the activities of thousands of genes simultaneously. Current analyses of microarray data focus on the precise classification of biological types, for example, tumor versus normal tissues. A further scientifically challenging task is to extract disease-relevant genes from the bewildering amounts of raw data, which is one of the most critical themes in the post-genomic era, but is generally ignored for lack of an efficient approach. In this paper, we present a novel ensemble method for gene extraction that can be tailored to fulfill multiple biological tasks, including (i) precise classification of biological types; (ii) disease gene mining; and (iii) target-driven gene networking. We also give a numerical application for (i) and (ii) using a public microarray data set, and set aside a separate paper to address (iii).
Hamdi, Anis; Missaoui, Oualid; Frigui, Hichem; Gader, Paul
2010-04-01
We propose a landmine detection algorithm that uses ensemble discrete hidden Markov models (HMMs) with context-dependent training schemes. We hypothesize that the data are generated by K models. These different models reflect the fact that mines and clutter objects have different characteristics depending on the mine type, soil and weather conditions, and burial depth. Model identification is based on clustering in the log-likelihood space. First, one HMM is fit to each of the N individual sequences. For each fitted model, we evaluate the log-likelihood of each sequence. This results in an N x N log-likelihood distance matrix that is partitioned into K groups. In the second step, we learn the parameters of one discrete HMM per group. We propose using and optimizing various training approaches for the different K groups depending on their size and homogeneity. In particular, we investigate the maximum likelihood and MCE-based discriminative training approaches. Results on large and diverse ground penetrating radar data collections show that the proposed method can identify meaningful and coherent HMM models that describe different properties of the data. Each HMM models a group of alarm signatures that share common attributes such as clutter, mine type, and burial depth. Our initial experiments have also indicated that the proposed mixture model outperforms the baseline HMM that uses one model for the mines and one model for the background.
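A minimal sketch of the log-likelihood-space clustering step described above. To stay self-contained, a per-sequence Gaussian stands in for the per-sequence HMM, K is fixed to 2, and the final partition uses a crude median split rather than a full clustering algorithm; the N x N log-likelihood matrix construction is the part being illustrated.

```python
import numpy as np

rng = np.random.default_rng(4)
N, L = 30, 100                       # sequences, samples per sequence

# toy "signatures": two regimes (think mine-like vs clutter-like alarms)
true = np.repeat([0, 1], N // 2)
means = np.array([0.0, 3.0])
seqs = means[true][:, None] + rng.normal(size=(N, L))

# step 1: fit one model per sequence (a Gaussian stands in for the HMM)
mu = seqs.mean(axis=1)
sig = seqs.std(axis=1)

# step 2: N x N matrix with LL[i, j] = log p(seq_j | model_i)
LL = np.array([
    (-0.5 * ((seqs - m) / s) ** 2 - np.log(s) - 0.5 * np.log(2 * np.pi))
    .sum(axis=1)
    for m, s in zip(mu, sig)
])

# step 3: symmetrised distance matrix, partitioned into K = 2 groups
D = -(LL + LL.T) / 2
labels = (D[0] > np.median(D[0])).astype(int)   # crude two-way split
```

In the full method each of the K groups discovered here would then get its own discrete HMM, trained by maximum likelihood or MCE depending on the group's size and homogeneity.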
Xue, L.; Dai, C.; Zhang, D.; Guadagnini, A.
2015-12-01
It is critical to predict contaminant plumes in an aquifer under uncertainty, which can help assess environmental risk and design rational management strategies. An accurate prediction of a contaminant plume requires the collection of data to help characterize the system. Owing to limited financial resources, one should estimate the expected value of the data from each candidate monitoring scheme before it is carried out. Data-worth analysis is believed to be an effective approach to identify the value of data in such problems, as it quantifies the uncertainty reduction assuming that the plausible data have been collected. However, it is difficult to apply data-worth analysis to a dynamic contaminant transport model owing to its requirement of a large number of inverse-modeling runs. In this study, a novel, efficient data-worth analysis framework is proposed by developing the Probabilistic Collocation Method based Ensemble Kalman Filter (PCKF). The PCKF constructs a polynomial chaos expansion surrogate model to replace the original complex numerical model. Consequently, the inverse modeling can be performed on the proxy rather than on the original model. An illustrative example, considering the dynamic change of the contaminant concentration, is employed to demonstrate the proposed approach. The results reveal that schemes with different sampling frequencies, monitoring network locations and prior data content have a significant impact on the uncertainty reduction of the estimated contaminant plume. Our proposition is validated to provide a reasonable estimate of the value of data from the various schemes.
Ensemble treatments of thermal pairing in nuclei
Hung, Nguyen Quang; Dang, Nguyen Dinh
2009-10-01
A systematic comparison is conducted for pairing properties of finite systems at nonzero temperature as predicted by the exact solutions of the pairing problem embedded in three principal statistical ensembles, namely the grand-canonical ensemble, canonical ensemble and microcanonical ensemble, as well as the unprojected (FTBCS1+SCQRPA) and Lipkin-Nogami projected (FTLN1+SCQRPA) theories that include the quasiparticle number fluctuation and coupling to pair vibrations within the self-consistent quasiparticle random-phase approximation. The numerical calculations are performed for the pairing gap, total energy, heat capacity, entropy, and microcanonical temperature within the doubly-folded equidistant multilevel pairing model. The FTLN1+SCQRPA predictions are found to agree best with the exact grand-canonical results. In general, all approaches clearly show that the superfluid-normal phase transition is smoothed out in finite systems. A novel formula is suggested for extracting the empirical pairing gap in reasonable agreement with the exact canonical results.
Lee, Mark D; Ruostekoski, Janne
2016-01-01
We derive equations for the strongly coupled system of light and dense atomic ensembles. The formalism includes an arbitrary internal level structure for the atoms and is not restricted to weak excitation of atoms by light. In the low light intensity limit for atoms with a single electronic ground state, the full quantum field-theoretical representation of the model can be solved exactly by means of classical stochastic electrodynamics simulations for stationary atoms that represent cold atomic ensembles. Simulations for the optical response of atoms in a quantum degenerate regime require one to synthesize a stochastic ensemble of atomic positions that generates the corresponding quantum statistical position correlations between the atoms. In the case of multiple ground levels or at light intensities where saturation becomes important, the classical simulations require approximations that neglect quantum fluctuations between the levels. We show how the model is extended to incorporate corrections due to quant...
Energy Technology Data Exchange (ETDEWEB)
Macedo-Junior, A.F. [Departamento de Fisica, Laboratorio de Fisica Teorica e Computacional, Universidade Federal de Pernambuco, 50670-901 Recife, PE (Brazil)]. E-mail: ailton@df.ufpe.br; Macedo, A.M.S. [Departamento de Fisica, Laboratorio de Fisica Teorica e Computacional, Universidade Federal de Pernambuco, 50670-901 Recife, PE (Brazil)
2006-09-25
We study a class of Brownian-motion ensembles obtained from the general theory of Markovian stochastic processes in random-matrix theory. The ensembles admit a complete classification scheme based on a recent multivariable generalization of classical orthogonal polynomials and are closely related to Hamiltonians of Calogero-Sutherland-type quantum systems. An integral transform is proposed to evaluate the n-point correlation function for a large class of initial distribution functions. Applications of the classification scheme and of the integral transform to concrete physical systems are presented in detail.
Energy Technology Data Exchange (ETDEWEB)
Qin, Hong; Liu, Jian; Xiao, Jianyuan; Zhang, Ruili; He, Yang; Wang, Yulei; Sun, Yajuan; Burby, Joshua W.; Ellison, Leland; Zhou, Yao
2015-12-14
Particle-in-cell (PIC) simulation is the most important numerical tool in plasma physics. However, its long-term accuracy has not been established. To overcome this difficulty, we developed a canonical symplectic PIC method for the Vlasov-Maxwell system by discretising its canonical Poisson bracket. A fast local algorithm to solve the symplectic implicit time advance is discovered without root searching or global matrix inversion, enabling applications of the proposed method to very large-scale plasma simulations with many (e.g., 10^9) degrees of freedom. The long-term accuracy and fidelity of the algorithm enables us to numerically confirm Mouhot and Villani's theory and conjecture on nonlinear Landau damping over several orders of magnitude using the PIC method, and to calculate the nonlinear evolution of the reflectivity during the mode conversion process from extraordinary waves to Bernstein waves.
Directory of Open Access Journals (Sweden)
González-Martín, M. I.
2016-03-01
Full Text Available The canonical biplot method (CB) is used to determine the discriminatory power of volatile chemical compounds in cheese. These volatile compounds were used as variables in order to differentiate among 6 groups or populations of cheeses: combinations of two seasons (winter and summer) with 3 types of cheese (cow, sheep and goat's milk). We analyzed a total of 17 volatile compounds by means of gas chromatography coupled with mass detection. The compounds included aldehydes and methyl-aldehydes, alcohols (primary, secondary and branched chain), ketones, methyl-ketones and esters in winter (WC) and summer (SC) cow's cheeses, winter (WSh) and summer (SSh) sheep's cheeses, and winter (WG) and summer (SG) goat's cheeses. The CB method allows differences to be found as a function of the elaboration of the cheeses and the seasonality of the milk, and permits the separation of the six groups of cheeses, characterizing the specific volatile chemical compounds responsible for such differences.
Elsheikh, Ahmed H.
2013-06-01
We introduce a nonlinear orthogonal matching pursuit (NOMP) for sparse calibration of subsurface flow models. Sparse calibration is a challenging problem, as the unknowns are both the non-zero components of the solution and their associated weights. NOMP is a greedy algorithm that discovers at each iteration the basis function most correlated with the residual from a large pool of basis functions. The discovered basis (aka support) is augmented across the nonlinear iterations. Once a set of basis functions is selected, the solution is obtained by applying Tikhonov regularization. The proposed algorithm relies on a stochastically approximated gradient using an iterative stochastic ensemble method (ISEM). In the current study, the search space is parameterized using an overcomplete dictionary of basis functions built using the K-SVD algorithm. The proposed algorithm is the first ensemble-based algorithm that tackles the sparse nonlinear parameter estimation problem. © 2013 Elsevier Ltd.
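A sketch of the linear orthogonal matching pursuit loop that NOMP generalizes: pick the dictionary atom most correlated with the residual, augment the support, solve a Tikhonov-regularized least-squares problem on the support, and update the residual. The random Gaussian dictionary and 3-sparse signal are illustrative stand-ins for the paper's K-SVD dictionary and flow-model parameters.

```python
import numpy as np

rng = np.random.default_rng(5)
m, n, k = 60, 200, 3                     # measurements, atoms, sparsity

D = rng.normal(size=(m, n))
D /= np.linalg.norm(D, axis=0)           # unit-norm dictionary atoms
x_true = np.zeros(n)
x_true[[5, 17, 120]] = [2.0, -1.5, 1.0]  # 3-sparse ground truth
y = D @ x_true + 0.01 * rng.normal(size=m)

support, r = [], y.copy()
for _ in range(k):
    j = int(np.argmax(np.abs(D.T @ r)))  # atom most correlated with residual
    support.append(j)
    B = D[:, support]
    # Tikhonov-regularized coefficients on the current support
    coef = np.linalg.solve(B.T @ B + 1e-6 * np.eye(len(support)), B.T @ y)
    r = y - B @ coef                     # update the residual
```

NOMP replaces the exact correlations `D.T @ r` with stochastically approximated gradients from an ensemble (ISEM), so the same loop applies when the forward model is nonlinear.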
Shen, Bo-Wen; Cheung, Samson; Li, Jui-Lin F.; Wu, Yu-ling
2013-01-01
In this study, we discuss the performance of the parallel ensemble empirical mode decomposition (EMD) in the analysis of tropical waves that are associated with tropical cyclone (TC) formation. To efficiently analyze high-resolution, global, multidimensional data sets, we first implement multilevel parallelism in the ensemble EMD (EEMD) and obtain a parallel speedup of 720 using 200 eight-core processors. We then apply the parallel EEMD (PEEMD) to extract the intrinsic mode functions (IMFs) from preselected data sets that represent (1) idealized tropical waves and (2) large-scale environmental flows associated with Hurricane Sandy (2012). Results indicate that the PEEMD is efficient and effective in revealing the major wave characteristics of the data, such as wavelengths and periods, by sifting out the dominant (wave) components. This approach has potential for hurricane climate studies that examine the statistical relationship between tropical waves and TC formation.
2002-01-01
On the NYYD Ensemble duo Traksmann and Lukk performing E.-S. Tüür's work "Symbiosis", which has also been recorded on the recently released NYYD Ensemble CD. Concerts on 2 March in the small hall of the Rakvere Theatre and on 3 March at the Rotermann Salt Storage; the programme includes Tüür, Kaumann, Berio, Reich, Yun, Hauta-aho and Buckinx.
Directory of Open Access Journals (Sweden)
J. H. Lee
2012-04-01
Full Text Available Aerodynamic roughness height (Z_{om}) is a key parameter required in land surface hydrological models, since errors in heat flux estimation depend strongly on accurate optimization of this parameter. Despite its significance, it remains an uncertain parameter that is not easily determined, mostly because of the non-linear relationships in Monin-Obukhov Similarity (MOS) theory and the unknown vertical characteristics of vegetation. Previous studies determined aerodynamic roughness using the traditional wind profile method, remotely sensed vegetation indices, minimization of a cost function over the MOS relationship, or linear regression. However, these are complicated procedures that presume high accuracy for several other related parameters embedded in the MOS equations. In order to simplify the procedure and reduce the number of parameters needed, this study suggests a new approach that extracts the aerodynamic roughness parameter via an Ensemble Kalman Filter (EnKF), which accommodates non-linearity and requires only one or two heat flux measurements. To our knowledge, no previous study has applied the EnKF to aerodynamic roughness estimation, while the majority of data assimilation studies have focused on land surface state variables such as soil moisture or land surface temperature. This approach was applied to grassland in a semi-arid Tibetan area and to maize under moderately wet conditions in Italy. It was demonstrated that the aerodynamic roughness parameter can be tracked inversely from data-assimilated heat flux analysis. The aerodynamic roughness height estimated in this approach was consistent with eddy covariance results and literature values. Consequently, this newly estimated input corrected the sensible heat flux overestimated and the latent heat flux underestimated by the original Surface Energy Balance System (SEBS) model, suggesting better heat flux estimation, especially during the summer Monsoon period. The advantage of this approach over other methodologies is
Lee, Mark D.; Jenkins, Stewart D.; Ruostekoski, Janne
2016-06-01
We derive equations for the strongly coupled system of light and dense atomic ensembles. The formalism includes an arbitrary internal-level structure for the atoms and is not restricted to weak excitation of atoms by light. In the low-light-intensity limit for atoms with a single electronic ground state, the full quantum field-theoretical representation of the model can be solved exactly by means of classical stochastic electrodynamics simulations for stationary atoms that represent cold atomic ensembles. Simulations for the optical response of atoms in a quantum degenerate regime require one to synthesize a stochastic ensemble of atomic positions that generates the corresponding quantum statistical position correlations between the atoms. In the case of multiple ground levels or at light intensities where saturation becomes important, the classical simulations require approximations that neglect quantum fluctuations between the levels. We show how the model is extended to incorporate corrections due to quantum fluctuations that result from virtual scattering processes. In the low-light-intensity limit, we illustrate the simulations in a system of atoms in a Mott-insulator state in a two-dimensional optical lattice, where recurrent scattering of light induces strong interatomic correlations. These correlations result in collective many-atom subradiant and superradiant states and a strong dependence of the response on the spatial confinement within the lattice sites.
Microcanonical and canonical approach to traffic flow
Surda, Anton
2007-01-01
A system of identical cars on a single-lane road is treated as a microcanonical and a canonical ensemble. The behaviour of the cars is characterized by the probability of a car's velocity as a function of the distance to and velocity of the car ahead. The calculations are performed on a discrete 1D lattice with discrete car velocities. The probability of the total velocity of a group of cars as a function of density is calculated in the microcanonical approach. For a canonical ensemble, fluctuations of car density as a function of total velocity are found. Phase transitions between free and jammed flow for large deceleration rates of cars, and the formation of queues of cars with the same velocity for low deceleration rates, are described.
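A toy numerical contrast between the two ensembles used above, with no car-following dynamics (the interaction with the car ahead is omitted, so this only illustrates the statistical bookkeeping): the microcanonical ensemble enumerates all velocity configurations with an exactly fixed total velocity, while the canonical ensemble fixes only the mean velocity via a temperature-like parameter beta.

```python
import numpy as np
from itertools import product

N, vmax, Vtot = 4, 3, 6      # cars, max velocity, fixed total velocity (toy)
v = np.arange(vmax + 1)

# microcanonical: enumerate every configuration with the exact total velocity
states = [s for s in product(range(vmax + 1), repeat=N) if sum(s) == Vtot]
p_micro = np.bincount([s[0] for s in states], minlength=vmax + 1) / len(states)

# canonical: single-car weights exp(beta*v), beta fixed so that <v> = Vtot/N
def mean_v(beta):
    w = np.exp(beta * v)
    return float((v * w).sum() / w.sum())

lo, hi = -10.0, 10.0
for _ in range(100):          # bisection on the monotone function <v>(beta)
    mid = 0.5 * (lo + hi)
    if mean_v(mid) < Vtot / N:
        lo = mid
    else:
        hi = mid
beta = 0.5 * (lo + hi)
p_canon = np.exp(beta * v) / np.exp(beta * v).sum()
```

Both distributions share the same mean velocity, but `p_micro` carries the correlations induced by the hard constraint, which is exactly the distinction the paper exploits between the two ensembles.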
Babaei, Masoud; Pan, Indranil
2016-06-01
In this paper we define a relatively complex reservoir engineering optimization problem: maximizing the net present value of hydrocarbon production in a water flooding process by controlling the water injection rates in multiple control periods. We assess the performance of a number of response surface surrogate models and of their ensembles, combined by Dempster-Shafer theory and Weighted Averaged Surrogates as found in contemporary literature. Most of these ensemble methods are based on the philosophy that multiple weak learners can be leveraged to obtain one strong learner that is better than the individual weak ones. Even though these techniques have been shown to work well for test bench functions, we found that they did not offer a considerable improvement over an individually used cubic radial basis function surrogate model. Our simulations on two- and three-dimensional cases, with varying numbers of optimization variables, suggest that the cubic radial basis function based surrogate model is reliable, outperforms Kriging surrogates and multivariate adaptive regression splines, and, when it does not outperform them, is rarely outperformed by the ensemble surrogate models.
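A minimal sketch of the cubic radial basis function surrogate that the comparison above favours. The "simulator" is a cheap stand-in function of two control variables, not a reservoir model; a small ridge term is added to the interpolation system for numerical safety (practical implementations often add a polynomial tail instead).

```python
import numpy as np

rng = np.random.default_rng(6)

def rbf_fit(X, y):
    """Fit a cubic RBF interpolant phi(r) = r**3 through the samples."""
    r = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    return np.linalg.solve(r ** 3 + 1e-8 * np.eye(len(X)), y)

def rbf_eval(Xc, w, Xq):
    # evaluate the interpolant at query points Xq given centres Xc
    r = np.linalg.norm(Xq[:, None, :] - Xc[None, :, :], axis=-1)
    return r ** 3 @ w

# stand-in for an expensive simulator response over two control variables
def simulator(X):
    return np.sin(3.0 * X[:, 0]) + X[:, 1] ** 2

Xs = rng.uniform(-1, 1, (80, 2))        # 80 sampled "simulator runs"
w = rbf_fit(Xs, simulator(Xs))
Xq = rng.uniform(-0.8, 0.8, (200, 2))   # interior query points
err = float(np.max(np.abs(rbf_eval(Xs, w, Xq) - simulator(Xq))))
```

An optimizer would now evaluate `rbf_eval` thousands of times in place of the expensive simulator, refitting the surrogate as new runs become available.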
A Margin-Based Greedy Ensemble Pruning Method
Institute of Scientific and Technical Information of China (English)
郭华平; 范明; 职为梅
2013-01-01
Theoretical and experimental results indicate that, for ensemble classifiers with the same training error, the one with the higher margin distribution on the training examples has better generalization performance. Therefore, the concept of example margins is introduced to ensemble pruning and employed to guide the design of pruning methods. Based on the margins, a new metric called the margin-based metric (MBM) is designed to evaluate the importance of a classifier to an ensemble on an example set, and a greedy ensemble pruning method called MBM-based ensemble selection (MBMEP) is proposed to reduce the ensemble size and improve its accuracy. Experimental results on 30 UCI datasets show that, compared with other state-of-the-art greedy ensemble pruning methods, the sub-ensembles selected by the proposed method have better performance.
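A toy sketch of margin-guided greedy pruning in the spirit of the abstract above (the paper's MBM metric is not reproduced; here the greedy criterion is simply the mean margin y*vote(x) of the enlarged sub-ensemble, on a synthetic pool of threshold stumps).

```python
import numpy as np

rng = np.random.default_rng(7)
n, T = 300, 40                       # examples, pool size

x = rng.uniform(-1, 1, n)
y = np.where(x + 0.1 * rng.normal(size=n) > 0, 1, -1)   # labels in {-1, +1}

# pool of weak classifiers: randomized threshold stumps
thr = rng.uniform(-1, 1, T)
preds = np.where(x[None, :] > thr[:, None], 1, -1)      # (T, n) predictions

# greedy pruning: repeatedly add the stump that maximizes the mean margin
# of the enlarged sub-ensemble
selected, votes = [], np.zeros(n)
for _ in range(10):
    gains = [(y * (votes + preds[t])).mean() / (len(selected) + 1)
             if t not in selected else -np.inf
             for t in range(T)]
    best = int(np.argmax(gains))
    selected.append(best)
    votes = votes + preds[best]

acc = float((np.sign(votes) == y).mean())   # accuracy of the pruned ensemble
```

The pruned sub-ensemble of 10 stumps keeps high accuracy while discarding three quarters of the pool, which is the size-versus-accuracy trade-off that MBMEP optimizes with its margin-based metric.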
Pan, Xiaoning; Li, Yang; Wu, Zhisheng; Zhang, Qiao; Zheng, Zhou; Shi, Xinyuan; Qiao, Yanjiang
2015-04-14
Model performance of the partial least squares method (PLS) alone and of bagging-PLS was investigated in online near-infrared (NIR) sensor monitoring of a pilot-scale extraction process for Fructus aurantii. High-performance liquid chromatography (HPLC) was used as a reference method to identify the active pharmaceutical ingredients: naringin, hesperidin and neohesperidin. Several preprocessing methods and the synergy interval partial least squares (SiPLS) and moving window partial least squares (MWPLS) variable selection methods were compared. Single quantification models (PLS) and ensemble methods combined with partial least squares (bagging-PLS) were developed for quantitative analysis of naringin, hesperidin and neohesperidin. SiPLS was compared to SiPLS combined with bagging-PLS. Final results showed the root mean square error of prediction (RMSEP) of bagging-PLS to be lower than that of PLS regression alone. For this reason, an ensemble method for online NIR sensor monitoring of the pilot-scale extraction process in Fructus aurantii is proposed here, which may also constitute a suitable strategy for online NIR monitoring of Chinese herbal medicine (CHM).
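A self-contained sketch of the bagging-PLS idea above: fit a PLS1 model (NIPALS form) to each bootstrap resample of the calibration set and average the predictions. The synthetic "spectra" with one latent concentration are illustrative; they are not the paper's NIR data, and no preprocessing or SiPLS/MWPLS variable selection is included.

```python
import numpy as np

rng = np.random.default_rng(8)

def pls1_fit(X, y, ncomp=3):
    """NIPALS PLS1: returns regression coefficients plus the centring means."""
    Xm, ym = X.mean(axis=0), y.mean()
    Xk, yk = X - Xm, y - ym
    W, P, q = [], [], []
    for _ in range(ncomp):
        w = Xk.T @ yk
        w = w / np.linalg.norm(w)
        t = Xk @ w
        W.append(w)
        P.append(Xk.T @ t / (t @ t))
        q.append(yk @ t / (t @ t))
        Xk = Xk - np.outer(t, P[-1])      # deflate X
        yk = yk - q[-1] * t               # deflate y
    W, P = np.array(W).T, np.array(P).T
    B = W @ np.linalg.solve(P.T @ W, np.array(q))
    return B, Xm, ym

def pls1_predict(model, Xq):
    B, Xm, ym = model
    return (Xq - Xm) @ B + ym

# toy "spectra": 100 samples, 20 wavelengths, one latent concentration
n, p = 100, 20
c = rng.normal(size=n)                    # e.g. a naringin-like concentration
S = rng.normal(size=p)                    # pure-component spectrum
X = np.outer(c, S) + 0.1 * rng.normal(size=(n, p))
y = c + 0.05 * rng.normal(size=n)
Xtr, ytr, Xte, yte = X[:80], y[:80], X[80:], y[80:]

# bagging-PLS: average the predictions of PLS models fit to bootstrap samples
nbag = 30
preds = np.zeros(len(yte))
for _ in range(nbag):
    idx = rng.integers(0, len(ytr), len(ytr))
    preds += pls1_predict(pls1_fit(Xtr[idx], ytr[idx]), Xte)
preds /= nbag
rmsep = float(np.sqrt(np.mean((preds - yte) ** 2)))   # held-out RMSEP
```

Averaging over resamples stabilizes the coefficient estimates, which is why the paper reports a lower RMSEP for bagging-PLS than for a single PLS fit.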
Consecutive Charging of a Molecule-on-Insulator Ensemble Using Single Electron Tunnelling Methods.
Rahe, Philipp; Steele, Ryan P; Williams, Clayton C
2016-02-10
We present the local charge state modification at room temperature of small insulator-supported molecular ensembles formed by 1,1'-ferrocenedicarboxylic acid on calcite. Single electron tunnelling between the conducting tip of a noncontact atomic force microscope (NC-AFM) and the molecular islands is observed. By joining NC-AFM with Kelvin probe force microscopy, successive charge build-up in the sample is observed from consecutive experiments. Charge transfer within the islands and structural relaxation of the adsorbate/surface system is suggested by the experimental data.
Directory of Open Access Journals (Sweden)
Jiaming Liu
2016-01-01
Full Text Available Many downscaling techniques have been developed in the past few years for the projection of station-scale hydrological variables from large-scale atmospheric variables, in order to assess the hydrological impacts of climate change. To improve the simulation accuracy of downscaling methods, a Bayesian Model Averaging (BMA) method combined with three statistical downscaling methods, namely support vector machine (SVM), BCC/RCG-Weather Generators (BCC/RCG-WG) and the Statistical Downscaling Model (SDSM), is proposed in this study, based on the statistical relationship between the large-scale climate predictors and observed precipitation in the upper Hanjiang River Basin (HRB). The statistical analysis of three performance criteria (the Nash-Sutcliffe coefficient of efficiency, the coefficient of correlation, and the relative error) shows that the performance of the BMA-based ensemble downscaling method for rainfall is better than that of each single statistical downscaling method. Moreover, the performance for runoff modelled by the SWAT rainfall-runoff model using the daily rainfall downscaled by the four methods is also compared, and the ensemble downscaling method has better simulation accuracy. The ensemble downscaling technology based on BMA can provide a scientific basis for the study of runoff response to climate change.
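A toy sketch of the BMA combination step described above. The three "downscaling methods" are synthetic stand-ins for SVM, BCC/RCG-WG and SDSM, and the weights come from a single analytic Gaussian-likelihood step; the full BMA method estimates weights and variances jointly by EM, which is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(9)
T = 200
truth = np.maximum(rng.gamma(2.0, 2.0, T) - 2.0, 0.0)   # toy daily rainfall

# three downscaling predictions with different biases and noise levels
models = np.stack([
    truth + rng.normal(0.0, 1.0, T),        # unbiased, moderate noise
    0.8 * truth + rng.normal(0.0, 0.5, T),  # biased low, small noise
    truth + rng.normal(0.0, 2.0, T),        # unbiased, large noise
])

# BMA weights: posterior model probabilities under a Gaussian likelihood,
# with each model's error variance profiled from the calibration period
sig2 = ((models - truth) ** 2).mean(axis=1)
loglik = -0.5 * (T * np.log(sig2) + T)
w = np.exp(loglik - loglik.max())
w /= w.sum()                                # weights sum to one

bma = w @ models                            # weighted ensemble prediction
rmse_bma = float(np.sqrt(((bma - truth) ** 2).mean()))
```

The weights concentrate on the model with the smallest calibration error variance, so the combined series performs at least as well as the best single method, mirroring the comparison reported in the abstract.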
Directory of Open Access Journals (Sweden)
Vasić Aleksandar
2008-01-01
Serbian music criticism became the province of professional music critics at the beginning of the twentieth century, after having been cultivated by amateurs throughout the whole previous century. The Serbian Literary Magazine (1901-1914, 1920-1941), the forum of the Serbian modernist writers in the early 1900s, had a crucial role in shaping Serbian music criticism and essay writing of the modern era. The Serbian elite musicians wrote for the SLM, and it therefore reflects the most important issues of early twentieth-century Serbian music. The SLM undertook the mission of educating its readers. The music culture of the Serbian public had only recently developed, and the public needed an introduction to the most important features of European music, as well as help in developing its own taste. This paper deals with two aspects of the music criticism in the SLM, in view of its educational role: the problem of virtuosity, and the method used by the magazine's music critics. The aesthetic canon of the SLM was marked by a decidedly negative attitude towards virtuosity. Concerned mainly with educating the Serbian music public in the spirit of the highest musical achievements in Europe, the music writers of the SLM criticized both domestic and foreign performers who favoured virtuosity over the 'essence' of music. Thus Niccolò Paganini, Franz Liszt, and even Peter Tchaikowsky with his Violin Concerto became subjects of the magazine's criticism. However, their attitude towards interpreters possessing both musicality and virtuoso technique was always positive, as is evident in the writings on Jan Kubelík. This educational mission also affected the structure of critique writing in the SLM. Wishing to inform the Serbian public about European music (which they did very professionally), the critics gave much more information on the biographies, bibliographies and style of the European composers than they valued the interpretation
Non-Boltzmann Ensembles and Monte Carlo Simulations
Murthy, K. P. N.
2016-10-01
Boltzmann sampling based on the Metropolis algorithm has been extensively used for simulating a canonical ensemble and for calculating macroscopic properties of a closed system at desired temperatures. An estimate of a mechanical property, like energy, of an equilibrium system is made by averaging over a large number of microstates generated by Boltzmann Monte Carlo methods. This is possible because we can assign a numerical value for energy to each microstate. However, a thermal property like entropy is not easily accessible to these methods. The reason is simple: we cannot assign a numerical value for entropy to a microstate. Entropy is not a property associated with any single microstate; it is a collective property of all the microstates. Toward calculating entropy and other thermal properties, a non-Boltzmann Monte Carlo technique called umbrella sampling was proposed some forty years ago. Umbrella sampling has since undergone several metamorphoses, and we now have multicanonical Monte Carlo, entropic sampling, flat-histogram methods, the Wang-Landau algorithm, etc. This class of methods generates non-Boltzmann ensembles, which are unphysical. However, physical quantities can be calculated as follows: first un-weight a microstate of the entropic ensemble; then re-weight it to the desired physical ensemble; finally, carry out a weighted average over the entropic ensemble to estimate physical quantities. In this talk I shall tell you of the most recent non-Boltzmann Monte Carlo method and show how to calculate the free energy for a few systems. We first consider estimation of free energy as a function of energy at different temperatures to characterize the phase transition in a hairpin DNA in the presence of an unzipping force. Next we consider free energy as a function of order parameter, and to this end we estimate the density of states g(E, M) as a function of both energy E and order parameter M. This is carried out in two stages. We estimate g(E) in the first stage. Employing g
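A minimal flat-histogram (Wang-Landau) sketch illustrates the idea on a toy system of n independent two-state units, where the exact density of states g(E) = C(n, E) is known and the estimate is easy to verify. The system, parameters, and flatness rule below are illustrative assumptions, not from the talk:

```python
import math
import random

def wang_landau(n=6, flatness=0.8, lnf_final=1e-4, sweeps=10000, seed=1):
    """Estimate ln g(E) for n two-state units with E = number of 'up' units.
    Exact answer is ln C(n, E)."""
    rng = random.Random(seed)
    state, E = [0] * n, 0
    lng = {e: 0.0 for e in range(n + 1)}   # running estimate of ln g(E)
    hist = {e: 0 for e in range(n + 1)}
    lnf = 1.0                              # ln of the modification factor
    while lnf > lnf_final:
        for _ in range(sweeps):
            i = rng.randrange(n)
            E_new = E + (1 - 2 * state[i])          # effect of flipping unit i
            # accept with probability min(1, g(E)/g(E_new))
            if rng.random() < math.exp(min(0.0, lng[E] - lng[E_new])):
                state[i] ^= 1
                E = E_new
            lng[E] += lnf                           # penalize the visited level
            hist[E] += 1
        mean = sum(hist.values()) / len(hist)
        if min(hist.values()) > flatness * mean:    # histogram flat enough?
            lnf /= 2.0                              # refine, restart histogram
            hist = {e: 0 for e in hist}
    return lng

lng = wang_landau()
rel = {e: lng[e] - lng[0] for e in lng}   # normalize so that g(0) = 1
```

Once ln g(E) is known, canonical averages at any temperature follow by re-weighting with exp(-E/kT), which is exactly the un-weight/re-weight step described above.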
Deterministic Methods for Filtering, part I: Mean-field Ensemble Kalman Filtering
Law, Kody J H; Tempone, Raul
2014-01-01
This paper provides a proof of convergence of the standard EnKF, generalized to non-Gaussian state-space models, based on the indistinguishability property of the joint distribution on the ensemble. A density-based deterministic approximation of the mean-field EnKF (MFEnKF) is proposed, consisting of a PDE solver and a quadrature rule. Given a certain minimal order of convergence k between the two, this extends to the deterministic filter approximation, which is therefore asymptotically superior to the standard EnKF for d < 2k. The fidelity of approximation of the true distribution is also established using an extension of the total variation metric to random measures. This is limited by a Gaussian bias term arising from the non-linearity/non-Gaussianity of the model, which appears in both the deterministic and the standard EnKF. Numerical results support and extend the theory.
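The standard (stochastic, perturbed-observation) EnKF analysis step that the paper generalizes can be sketched for a scalar linear-Gaussian case, where the exact posterior is known. Everything below is an illustrative assumption, not the authors' MFEnKF code:

```python
import random
import statistics

def enkf_analysis(ensemble, y, H, R, seed=0):
    """Stochastic EnKF analysis step for a scalar state: each member is
    nudged toward its own perturbed copy of the observation y."""
    rng = random.Random(seed)
    P = statistics.variance(ensemble)      # sample forecast covariance
    K = P * H / (H * H * P + R)            # Kalman gain
    return [x + K * (y + rng.gauss(0.0, R ** 0.5) - H * x) for x in ensemble]

rng = random.Random(42)
prior = [rng.gauss(0.0, 1.0) for _ in range(2000)]   # forecast: x ~ N(0, 1)
posterior = enkf_analysis(prior, y=1.0, H=1.0, R=1.0)
# exact Gaussian posterior in this conjugate case is N(0.5, 0.5)
```

For linear-Gaussian models the ensemble mean and variance converge to the exact Kalman posterior as the ensemble grows; the paper's Gaussian bias term quantifies what survives of this in the non-Gaussian setting.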
Directory of Open Access Journals (Sweden)
Lina Zhang
2015-09-01
Bacteriophage virion proteins and non-virion proteins have distinct functions in biological processes, such as specificity determination for host bacteria, bacteriophage replication and transcription. Accurate identification of bacteriophage virion proteins from bacteriophage protein sequences is significant for understanding the complex virulence mechanisms in host bacteria and the influence of bacteriophages on the development of antibacterial drugs. In this study, an ensemble method for bacteriophage virion protein prediction from bacteriophage protein sequences is put forward with hybrid feature spaces incorporating CTD (composition, transition and distribution), bi-profile Bayes, PseAAC (pseudo-amino acid composition) and PSSM (position-specific scoring matrix). In 10-fold cross-validation on the training dataset, the presented method achieves a satisfactory prediction result with a sensitivity of 0.870, a specificity of 0.830, an accuracy of 0.850 and a Matthews correlation coefficient (MCC) of 0.701. To evaluate the prediction performance objectively, an independent testing dataset is used. Encouragingly, our proposed method performs better than previous studies, with a sensitivity of 0.853, a specificity of 0.815, an accuracy of 0.831 and an MCC of 0.662 on the independent testing dataset. These results suggest that the proposed method can be a potential candidate for bacteriophage virion protein prediction, which may provide a useful tool to find novel antibacterial drugs and to understand the relationship between bacteriophages and host bacteria. For the convenience of the vast majority of experimental scientists, a user-friendly and publicly accessible web-server for the proposed ensemble method is established.
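The reported training-set numbers are mutually consistent: for a hypothetical balanced set of 100 positives and 100 negatives, sensitivity 0.870 and specificity 0.830 imply exactly the stated accuracy and MCC. The majority-vote combiner below is an illustrative assumption; the paper's actual fusion rule over the hybrid feature spaces may differ:

```python
import math

def mcc(tp, tn, fp, fn):
    """Matthews correlation coefficient from confusion-matrix counts."""
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den if den else 0.0

def majority_vote(per_classifier_votes):
    """Combine base-classifier votes (1 = virion, 0 = non-virion)."""
    return [1 if 2 * sum(v) > len(v) else 0
            for v in zip(*per_classifier_votes)]

# hypothetical balanced set (100 positives, 100 negatives):
# sensitivity 0.870 and specificity 0.830 give tp=87, fn=13, tn=83, fp=17,
# hence accuracy (87+83)/200 = 0.850 and MCC ~ 0.701
training_mcc = mcc(87, 83, 17, 13)
```

MCC is preferred over accuracy here because it stays informative even when the virion/non-virion classes are imbalanced.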
Directory of Open Access Journals (Sweden)
Marin-Garcia Pablo
2010-05-01
Background: The maturing field of genomics is rapidly increasing the number of sequenced genomes and producing more information from those previously sequenced. Much of this additional information is variation data derived from sampling multiple individuals of a given species, with the goal of discovering new variants and characterising the population frequencies of the variants that are already known. These data have immense value for many studies, including those designed to understand evolution and connect genotype to phenotype. Maximising the utility of the data requires that it be stored in an accessible manner that facilitates the integration of variation data with other genome resources such as gene annotation and comparative genomics. Description: The Ensembl project provides comprehensive and integrated variation resources for a wide variety of chordate genomes. This paper provides a detailed description of the sources of data and the methods for creating the Ensembl variation databases. It also explores the utility of the information by explaining the range of query options available, from using interactive web displays, to online data-mining tools, to connecting directly to the data servers programmatically. It gives an overview of the variation resources and future plans for expanding the variation data within Ensembl. Conclusions: Variation data is an important key to understanding the functional and phenotypic differences between individuals. The development of new sequencing and genotyping technologies is greatly increasing the amount of variation data known for almost all genomes. The Ensembl variation resources are integrated into the Ensembl genome browser and provide a comprehensive way to access this data in the context of a widely used genome bioinformatics system. All Ensembl data is freely available at http://www.ensembl.org and from the public MySQL database server at ensembldb.ensembl.org.
Wind Power Prediction using Ensembles
DEFF Research Database (Denmark)
Giebel, Gregor; Badger, Jake; Landberg, Lars;
2005-01-01
The Ensemble project investigated the use of meteorological ensemble forecasts for the prognosis of the uncertainty of the forecasts, and found a good method to make use of ensemble forecasts. This method was then tried, based on ensembles from ECMWF, in form of a demo application for both the Nysted...... offshore wind farm and the whole Jutland/Funen area. The utilities used these forecasts for maintenance planning, fuel consumption estimates and over-the-weekend trading on the Leipzig power exchange. Other notable scientific results include the better accuracy of forecasts made up from a simple......
Directory of Open Access Journals (Sweden)
Eva C Arnspang
The lateral dynamics of proteins and lipids in the mammalian plasma membrane are heterogeneous, likely reflecting both a complex molecular organization and interactions with other macromolecules that reside outside the plane of the membrane. Several methods are commonly used for characterizing the lateral dynamics of lipids and proteins. These experimental and data analysis methods differ in equipment requirements and labeling complexities, and furthermore often give different results. It would therefore be very convenient to have a single method that is flexible in the choice of fluorescent label and labeling densities, from single molecules to ensemble measurements, that can be performed on a conventional wide-field microscope, and that is suitable for fast and accurate analysis. In this work we show that k-space image correlation spectroscopy (kICS) analysis, a technique originally developed for analyzing lateral dynamics in samples labeled at high densities, can also be used for fast and accurate analysis of single-molecule density data of lipids and proteins labeled with quantum dots (QDs). We have further used kICS to investigate the effect of label size by comparing the results for a biotinylated lipid labeled at high densities with Atto647N-streptavidin (sAv) or at sparse densities with sAv-QDs. In the latter case, we see that the recovered diffusion rate is two-fold greater for the same lipid in the same cell type when labeled with Atto647N-sAv as compared to sAv-QDs. These data demonstrate that kICS can be used for analysis of single-molecule data and can furthermore bridge between samples with labeling densities ranging from the single-molecule to the ensemble level.
Resistant multiple sparse canonical correlation.
Coleman, Jacob; Replogle, Joseph; Chandler, Gabriel; Hardin, Johanna
2016-04-01
Canonical correlation analysis (CCA) is a multivariate technique that takes two datasets and forms the most highly correlated possible pairs of linear combinations between them. Each subsequent pair of linear combinations is orthogonal to the preceding pair, meaning that new information is gleaned from each pair. By looking at the magnitude of the coefficient values, we can find out which variables can be grouped together, thus better understanding multiple interactions that are otherwise difficult to compute or grasp intuitively. CCA appears to have quite powerful applications to high-throughput data, as we can use it to discover, for example, relationships between gene expression and gene copy number variation. One of the biggest problems with CCA is that the number of variables (often upwards of 10,000) makes biological interpretation of the linear combinations nearly impossible. To limit variable output, we have employed a method known as sparse canonical correlation analysis (SCCA), while adding estimation that is resistant to extreme observations or other types of deviant data. In this paper, we demonstrate the success of resistant estimation in variable selection using SCCA. Additionally, we use SCCA to find multiple canonical pairs for extended knowledge about the datasets at hand. Again, using resistant estimators provided more accurate estimates than standard estimators in the multiple canonical correlation setting. R code is available and documented at https://github.com/hardin47/rmscca.
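One common way to realize SCCA is by alternating power iterations on the cross-covariance matrix with a soft-thresholding (sparsity) step, in the spirit of penalized matrix decomposition. The sketch below is that generic scheme, not the authors' resistant estimator, and the toy matrix is an assumption:

```python
import math

def soft(v, lam):
    """Soft-thresholding: shrinks small coefficients to exactly zero."""
    return [math.copysign(max(abs(x) - lam, 0.0), x) for x in v]

def unit(v):
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v] if n else v

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def scca_first_pair(C, lam=0.1, iters=100):
    """First sparse canonical pair via alternating thresholded power
    iterations on the cross-covariance matrix C = X^T Y."""
    Ct = list(map(list, zip(*C)))
    v = unit([1.0] * len(C[0]))
    u = unit([1.0] * len(C))
    for _ in range(iters):
        u = unit(soft(matvec(C, v), lam))
        v = unit(soft(matvec(Ct, u), lam))
    return u, v

# toy cross-covariance with one dominant association: the sparse pair
# should load on the first variable of each dataset only
u, v = scca_first_pair([[1.0, 0.05], [0.05, 0.1]])
```

The resistant variant would replace the sample cross-covariance entries of C with a robust association measure before iterating.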
Botvina, A; Gupta, S Das; Mishustin, I
2008-01-01
The statistical multifragmentation model (SMM) has been widely used to explain experimental data from intermediate-energy heavy-ion collisions. A later entrant in the field is the canonical thermodynamic model (CTM), which is also being used to fit experimental data. The basic physics of the two models is the same, namely that fragments are produced according to their statistical weights in the available phase space. However, they are based on different statistical ensembles, and the methods of calculation are different: while the SMM uses Monte Carlo simulations, the CTM solves recursion relations. In this paper we compare the predictions of the two models for a few representative cases.
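The recursion-relation approach of the CTM has the generic form below, building the canonical partition function from one-fragment partition functions ω_k. The sketch and its check case (only single-particle "fragments" allowed, giving Q_A = 1/A!) are illustrative assumptions, not the CTM code:

```python
def canonical_Q(omega, A):
    """Canonical partition functions from the recursion
    Q_A = (1/A) * sum_{k=1..A} k * omega[k] * Q_{A-k}, with Q_0 = 1,
    where omega maps fragment size k to its one-fragment partition function."""
    Q = [1.0]
    for n in range(1, A + 1):
        Q.append(sum(k * omega.get(k, 0.0) * Q[n - k]
                     for k in range(1, n + 1)) / n)
    return Q

# check: if only monomers exist (omega_1 = 1, all others 0),
# the recursion reduces to Q_A = Q_{A-1}/A, i.e. Q_A = 1/A!
Q = canonical_Q({1: 1.0}, 4)
```

This is why the CTM needs no Monte Carlo sampling: the exact canonical sum over all fragment partitions is obtained in O(A^2) arithmetic operations.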
Classifying Linear Canonical Relations
Lorand, Jonathan
2015-01-01
In this Master's thesis, we consider the problem of classifying, up to conjugation by linear symplectomorphisms, linear canonical relations (lagrangian correspondences) from a finite-dimensional symplectic vector space to itself. We give an elementary introduction to the theory of linear canonical relations and present partial results toward the classification problem. This exposition should be accessible to undergraduate students with a basic familiarity with linear algebra.
Ye, Linlin; Yang, Dan; Wang, Xu
2014-06-01
A de-noising method for electrocardiogram (ECG) signals based on ensemble empirical mode decomposition (EEMD) and wavelet threshold de-noising theory is proposed. We decomposed noisy ECG signals with EEMD into a series of intrinsic mode functions (IMFs), then selected and reconstructed IMFs to de-noise the ECG. The processed ECG signals were filtered again with a wavelet transform using an improved threshold function. In the experiments, the MIT-BIH ECG database was used to evaluate the performance of the proposed method against de-noising based on EEMD alone and on the wavelet transform with the improved threshold function alone, in terms of signal-to-noise ratio (SNR) and mean square error (MSE). The results showed that the ECG waveforms de-noised with the proposed method were smooth and the amplitudes of the ECG features did not attenuate. In conclusion, the method discussed in this paper can de-noise ECG signals while preserving the characteristics of the original signal.
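Wavelet threshold de-noising keeps large coefficients and suppresses small ones; hard and soft thresholding are the standard choices, and "improved" functions interpolate between them. The particular improved form below (and its parameter a) is an illustrative assumption, not necessarily the paper's exact function:

```python
import math

def hard_thresh(w, t):
    """Keep coefficients above the threshold t unchanged, zero the rest."""
    return w if abs(w) > t else 0.0

def soft_thresh(w, t):
    """Shrink every surviving coefficient by t (introduces bias)."""
    return math.copysign(max(abs(w) - t, 0.0), w)

def improved_thresh(w, t, a=0.5):
    """A compromise: zero below t, shrinkage by only a*t above it, reducing
    soft thresholding's bias while avoiding hard thresholding's discontinuity."""
    return math.copysign(abs(w) - a * t, w) if abs(w) > t else 0.0

w, t = 1.0, 0.4
```

For any coefficient above the threshold, the improved output lies between the soft and hard outputs, which is the design goal of such functions.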
Energy Technology Data Exchange (ETDEWEB)
Juxiu Tong; Bill X. Hu; Hai Huang; Luanjin Guo; Jinzhong Yang
2014-03-01
With the growing importance of water resources in the world, remediation of anthropogenic contamination involving reactive solute transport becomes ever more important. A good understanding of reactive rate parameters, such as kinetic parameters, is the key to accurately predicting reactive solute transport processes and designing corresponding remediation schemes. In modeling reactive solute transport, it is very difficult to estimate chemical reaction rate parameters because of the complexity of chemical reactions and the limited available data. To obtain the reactive rate parameters for modeling urea hydrolysis transport and to improve the prediction of chemical concentrations, we developed a data assimilation method based on an ensemble Kalman filter (EnKF) to calibrate reactive rate parameters in a synthetic one-dimensional column at laboratory scale and to update the model prediction. We applied a constrained EnKF method that imposes constraints on the updated reactive rate parameters and the predicted solute concentrations, based on their physical meanings, after the data assimilation calibration. From the study results we concluded that the data assimilation method via the EnKF could efficiently improve the chemical reactive rate parameters and, at the same time, the solute concentration prediction. The more data we assimilated, the more accurate the reactive rate parameters and concentration predictions became. The filter divergence problem was also solved in this study.
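The constrained update can be sketched as a standard EnKF parameter update followed by projection of each member back into its physically meaningful range. The toy linear forward model, bounds, and numbers below are illustrative assumptions, not the study's urea-hydrolysis model:

```python
import random
import statistics

def constrained_enkf_step(params, preds, y, R, lo, hi, seed=0):
    """Update a reaction-rate parameter ensemble from one concentration
    observation y, then clamp each member into the physical range [lo, hi]."""
    rng = random.Random(seed)
    pm = statistics.fmean(params)
    cm = statistics.fmean(preds)
    cov = sum((p - pm) * (c - cm)
              for p, c in zip(params, preds)) / (len(params) - 1)
    K = cov / (statistics.variance(preds) + R)   # parameter-observation gain
    upd = [p + K * (y + rng.gauss(0.0, R ** 0.5) - c)
           for p, c in zip(params, preds)]
    return [min(max(p, lo), hi) for p in upd]    # constraint projection

rng = random.Random(7)
params = [rng.gauss(1.0, 0.5) for _ in range(500)]   # prior rate ensemble
preds = [2.0 * p for p in params]                    # toy forward model c = 2k
post = constrained_enkf_step(params, preds, y=4.0, R=0.01, lo=0.0, hi=10.0)
# observing c = 4 implies a rate near k = 2 under the toy model
```

The projection is the simplest way to honor non-negativity of rates and concentrations; without it, perturbed-observation updates can push individual members outside the physical range.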
An introduction to the theory of canonical matrices
Turnbull, H W
2004-01-01
Thorough and self-contained, this penetrating study of the theory of canonical matrices presents a detailed consideration of all the theory's principal features. Topics include elementary transformations and bilinear and quadratic forms; canonical reduction of equivalent matrices; subgroups of the group of equivalent transformations; and rational and classical canonical forms. The final chapters explore several methods of canonical reduction, including those of unitary and orthogonal transformations. 1952 edition. Index. Appendix. Historical notes. Bibliographies. 275 problems.
Fu, Mao-Jing; Zhuang, Jian-Jun; Hou, Feng-Zhen; Zhan, Qing-Bo; Shao, Yi; Ning, Xin-Bao
2010-05-01
In this paper, the ensemble empirical mode decomposition (EEMD) is applied to analyse accelerometer signals collected during normal human walking. First, the self-adaptive feature of EEMD is utilised to decompose the accelerometer signals, sifting out several intrinsic mode functions (IMFs) at disparate scales. Then, gait series can be extracted through peak detection from the eigen IMF that best represents gait rhythmicity. Compared with the method based on empirical mode decomposition (EMD), the EEMD-based method has the following advantages: it remarkably improves the detection rate of peak values hidden in the original accelerometer signal, even when the signal is severely contaminated by intermittent noise, and it effectively prevents the mode mixing found in EMD. A reasonable selection of parameters for the stop-filtering criteria can further improve the calculation speed of the EEMD-based method, while the end-point effect can be suppressed by using an autoregressive moving average model to extend a short time series in both directions. The results suggest that EEMD is a powerful tool for the extraction of gait rhythmicity, and it also provides valuable clues for extracting the eigen rhythm of other physiological signals.
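Once the rhythmic IMF is obtained, the final extraction step reduces to thresholded local-maximum detection, with successive peak indices giving the gait intervals. The synthetic "gait" signal below is an illustrative assumption:

```python
import math

def detect_peaks(x, min_height):
    """Indices of local maxima exceeding min_height (step candidates in the
    rhythmic IMF); differences between successive peaks give gait intervals."""
    return [i for i in range(1, len(x) - 1)
            if x[i] > min_height and x[i] > x[i - 1] and x[i] >= x[i + 1]]

# synthetic rhythmic component with a 20-sample "stride" period
sig = [math.sin(2 * math.pi * t / 20) for t in range(100)]
peaks = detect_peaks(sig, 0.9)
intervals = [b - a for a, b in zip(peaks, peaks[1:])]
```

On real accelerometer data the height threshold would be set relative to the IMF's amplitude, and the EEMD step is what makes these peaks stand out against intermittent noise.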
Canon Busting and Cultural Literacy.
National Forum: Phi Kappa Phi Journal, 1989
1989-01-01
Articles on literary canon include: "Educational Anomie" (Stephen W. White); "Why Western Civilization?" (William J. Bennett); "Peace Plan for Canon Wars" (Gerald Graff, William E. Cain); "Canons, Cultural Literacy, and Core Curriculum" (Lynne V. Cheney); "Canon Busting: Basic Issues" (Stanley Fish); "A Truce in Curricular Wars" (Chester E. Finn,…
Ensemble inequivalence: Landau theory and the ABC model
Cohen, O.; Mukamel, D.
2012-12-01
It is well known that systems with long-range interactions may exhibit different phase diagrams when studied within two different ensembles. In many of the previously studied examples of ensemble inequivalence, the phase diagrams differ only when the transition in one of the ensembles is first order. By contrast, in a recent study of a generalized ABC model, the canonical and grand-canonical ensembles of the model were shown to differ even when they both exhibit a continuous transition. Here we show that the order of the transition where ensemble inequivalence may occur is related to the symmetry properties of the order parameter associated with the transition. This is done by analyzing the Landau expansion of a generic model with long-range interactions. The conclusions drawn from the generic analysis are demonstrated for the ABC model by explicit calculation of its Landau expansion.
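Schematically, the Landau expansion analyzed in the paper takes the generic form below; the concrete coefficients for the ABC model are derived in the paper itself, so this is only an illustrative rendering of the argument:

```latex
% Generic Landau expansion of the free energy in the order parameter m:
f(m) = f_0 + a_2\,m^2 + a_3\,m^3 + a_4\,m^4 + \dots
% If the symmetry of m forces a_3 = 0 (with a_4 > 0), the transition at
% a_2 = 0 is continuous; which terms symmetry allows determines the order
% of the transition at which the two ensembles can exhibit inequivalence.
```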
Ensemble approach combining multiple methods improves human transcription start site prediction.
LENUS (Irish Health Repository)
Dineen, David G
2010-01-01
The computational prediction of transcription start sites is an important unsolved problem. Some recent progress has been made, but many promoters, particularly those not associated with CpG islands, are still difficult to locate using current methods. These methods use different features and training sets, along with a variety of machine learning techniques and result in different prediction sets.
Directory of Open Access Journals (Sweden)
S. Skachko
2014-01-01
The Ensemble Kalman filter (EnKF) assimilation method is applied to tracer transport using the same stratospheric transport model as in the 4D-Var assimilation system BASCOE. This EnKF version of BASCOE was built primarily to avoid the large costs associated with the maintenance of an adjoint model. The EnKF developed in BASCOE accounts for two adjustable parameters: a parameter α controlling the model error term and a parameter r controlling the observational error. The EnKF system is shown to be markedly sensitive to these two parameters, which are adjusted based on the monitoring of a χ2-test measuring the misfit between the control variable and the observations. The performance of the EnKF and 4D-Var versions was estimated through the assimilation of Aura-MLS ozone observations during an 8-month period which includes the formation of the 2008 Antarctic ozone hole. To ensure a proper comparison, despite the fundamental differences between the two assimilation methods, both systems use identical and carefully calibrated input error statistics. We provide the detailed procedure for these calibrations, and compare the two sets of analyses with a focus on the lower and middle stratosphere, where the ozone lifetime is much larger than the observational update frequency. Based on the observation-minus-forecast statistics, we show that the analyses provided by the two systems are markedly similar, with biases smaller than 5% and standard deviation errors smaller than 10% in most of the stratosphere. Since the biases are markedly similar, they most probably have the same causes: these can be deficiencies in the model and in the observation dataset, but not in the assimilation algorithm nor in the error calibration. The remarkably similar performance also shows that in the context of stratospheric transport, the choice of the assimilation method can be based on application-dependent factors, such as CPU cost or the ability to generate an ensemble
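The χ2-style consistency check used to tune α and r can be sketched as the mean normalized squared innovation, which should be close to 1 when the prescribed error statistics match the actual misfits. The function and numbers below are illustrative, not BASCOE code:

```python
def chi2_statistic(innovations, variances):
    """Mean of d_i^2 / s_i over observation-minus-forecast residuals d_i with
    predicted innovation variances s_i. Values well above 1 suggest the error
    parameters are too small (overconfident); well below 1, too large."""
    return sum(d * d / s
               for d, s in zip(innovations, variances)) / len(innovations)

# toy residuals whose spread exactly matches the predicted variances
stat = chi2_statistic([1.0, -2.0, 0.5], [1.0, 4.0, 0.25])
```

In an operational cycle the statistic is monitored over time and the parameters are nudged until it stays near unity.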
Covariant canonical quantization
Energy Technology Data Exchange (ETDEWEB)
Hippel, G.M. von [University of Regina, Department of Physics, Regina, Saskatchewan (Canada); Wohlfarth, M.N.R. [Universitaet Hamburg, Institut fuer Theoretische Physik, Hamburg (Germany)
2006-09-15
We present a manifestly covariant quantization procedure based on the de Donder-Weyl Hamiltonian formulation of classical field theory. This procedure agrees with conventional canonical quantization only if the parameter space is d=1 dimensional time. In d>1, quantization requires a fundamental length scale, and any bosonic field generates a spinorial wave function, leading to the purely quantum-theoretical emergence of spinors as a byproduct. We provide a probabilistic interpretation of the wave functions for the fields, and we apply the formalism to a number of simple examples. These show that covariant canonical quantization produces both the Klein-Gordon and the Dirac equation, while also predicting the existence of discrete towers of identically charged fermions with different masses. Covariant canonical quantization can thus be understood as a "first" or pre-quantization within the framework of conventional QFT.
Covariant canonical quantization
Von Hippel, G M; Hippel, Georg M. von; Wohlfarth, Mattias N.R.
2006-01-01
We present a manifestly covariant quantization procedure based on the de Donder-Weyl Hamiltonian formulation of classical field theory. Covariant canonical quantization agrees with conventional canonical quantization only if the parameter space is d=1 dimensional time. In d>1 quantization requires a fundamental length scale, and any bosonic field generates a spinorial wave function, leading to the purely quantum-theoretical emergence of spinors as a byproduct. We provide a probabilistic interpretation of the wave functions for the fields, and apply the formalism to a number of simple examples. These show that covariant canonical quantization produces both the Klein-Gordon and the Dirac equation, while also predicting the existence of discrete towers of identically charged fermions with different masses.
Schneeweis, Lumelle A; Obenauer-Kutner, Linda; Kaur, Parminder; Yamniuk, Aaron P; Tamura, James; Jaffe, Neil; O'Mara, Brian W; Lindsay, Stuart; Doyle, Michael; Bryson, James
2015-12-01
Domain antibodies (dAbs) are single immunoglobulin domains that form the smallest functional unit of an antibody. This study investigates the behavior of these small proteins when covalently attached to the polyethylene glycol (PEG) moiety that is necessary for extending the half-life of a dAb. The effect of the 40 kDa PEG on the hydrodynamic properties, particle behavior, and receptor binding of the dAb has been compared by both ensemble solution and surface methods [light scattering, isothermal titration calorimetry (ITC), surface plasmon resonance (SPR)] and single-molecule atomic force microscopy (AFM) methods (topography, recognition imaging, and force microscopy). The large PEG dominates the properties of the dAb-PEG conjugate, such as a hydrodynamic radius that corresponds to a globular protein over four times its size and a much reduced association rate. We have used AFM single-molecule studies to determine the mechanism of the PEG-dependent reductions in dAb effectiveness observed in the SPR kinetic studies. Recognition imaging showed that all of the PEGylated dAb molecules are active, suggesting that some may become transiently inactive if PEG sterically blocks binding. This helps explain the disconnect between the kinetically determined SPR results and the force microscopy and ITC results, which demonstrated that PEG does not change the binding energy.
Directory of Open Access Journals (Sweden)
Jing Xu
2015-10-01
In order to guarantee the stable operation of shearers and promote the construction of automatic coal-mining working faces, an online cutting pattern recognition method with high accuracy and speed, based on Improved Ensemble Empirical Mode Decomposition (IEEMD) and a Probabilistic Neural Network (PNN), is proposed. An industrial microphone is installed on the shearer and the cutting sound is collected as the recognition criterion, overcoming the large size, contact measurement and low identification rate of traditional detectors. To avoid end-point effects and remove undesirable intrinsic mode function (IMF) components from the initial signal, IEEMD is conducted on the sound. End-point continuation based on the practical storage data is performed first to overcome the end-point effect. Next, the average correlation coefficient, calculated from the correlation of the first IMF with the others, is introduced to select the essential IMFs. Then the energy and standard deviation of the remaining IMFs are extracted as features, and PNN is applied to classify the cutting patterns. Finally, a simulation example, with an accuracy of 92.67%, and an industrial application prove the efficiency and correctness of the proposed method.
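The IMF-selection rule can be sketched as keeping the components whose correlation with the first IMF reaches the average of those correlations. This is one reading of the abstract's rule; the toy IMFs below are illustrative assumptions:

```python
import statistics

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length series."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x)
           * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den if den else 0.0

def select_imfs(imfs):
    """Keep the IMFs whose |correlation| with the first IMF is at least the
    average of those correlations (the 'average correlation' rule)."""
    cs = [abs(pearson(imfs[0], imf)) for imf in imfs[1:]]
    thresh = sum(cs) / len(cs)
    return [imf for imf, c in zip(imfs[1:], cs) if c >= thresh]

imfs = [[1.0, 2.0, 3.0, 4.0],      # first IMF (reference)
        [2.0, 4.0, 6.0, 8.0],      # strongly correlated: kept
        [1.0, -1.0, 1.0, -1.0]]    # weakly correlated: discarded
kept = select_imfs(imfs)
```

Energy and standard deviation of the kept IMFs would then form the feature vector passed to the PNN classifier.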
Wen, Yalu; Lu, Qing
2016-09-01
Although compelling evidence suggests that the genetic etiology of complex diseases could be heterogeneous in subphenotype groups, little attention has been paid to phenotypic heterogeneity in genetic association analysis of complex diseases. Simply ignoring phenotypic heterogeneity in association analysis could result in attenuated estimates of genetic effects and low power of association tests if subphenotypes with similar clinical manifestations have heterogeneous underlying genetic etiologies. To facilitate the family-based association analysis allowing for phenotypic heterogeneity, we propose a clustered multiclass likelihood-ratio ensemble (CMLRE) method. The proposed method provides an alternative way to model the complex relationship between disease outcomes and genetic variants. It allows for heterogeneous genetic causes of disease subphenotypes and can be applied to various pedigree structures. Through simulations, we found CMLRE outperformed the commonly adopted strategies in a variety of underlying disease scenarios. We further applied CMLRE to a family-based dataset from the International Consortium to Identify Genes and Interactions Controlling Oral Clefts (ICOC) to investigate the genetic variants and interactions predisposing to subphenotypes of oral clefts. The analysis suggested that two subphenotypes, nonsyndromic cleft lip without palate (CL) and cleft lip with palate (CLP), shared similar genetic etiologies, while cleft palate only (CP) had its own genetic mechanism. The analysis further revealed that rs10863790 (IRF6), rs7017252 (8q24), and rs7078160 (VAX1) were jointly associated with CL/CLP, while rs7969932 (TBK1), rs227731 (17q22), and rs2141765 (TBK1) jointly contributed to CP.
Dehzangi, Abdollah; Paliwal, Kuldip; Sharma, Alok; Dehzangi, Omid; Sattar, Abdul
2013-01-01
Better understanding of the structural class of a given protein reveals important information about its overall folding type and its domain. It can also be directly used to provide critical information on the general tertiary structure of a protein, which has a profound impact on protein function determination and drug design. Despite tremendous enhancements made by pattern-recognition-based approaches, this problem remains open in bioinformatics and demands more attention and exploration. In this study, we propose a novel feature extraction model that incorporates physicochemical and evolutionary-based information simultaneously. We also propose overlapped segmented distribution and autocorrelation-based feature extraction methods to provide more local and global discriminatory information. The proposed feature extraction methods are explored for the 15 most promising attributes, selected from a wide range of physicochemical-based attributes. Finally, by applying an ensemble of different classifiers, namely AdaBoost.M1, LogitBoost, naive Bayes, multilayer perceptron (MLP), and support vector machine (SVM), we show enhancement of the protein structural class prediction accuracy for four popular benchmarks.
Origin of the Canonical Ensemble : Thermalization with Decoherence
Yuan, Shengjun; Katsnelson, Mikhail I.; De Raedt, Hans
2009-01-01
We solve the time-dependent Schrödinger equation for a spin system interacting with a spin-bath environment. In particular, we focus on the time development of the reduced density matrix of the spin system. Under normal circumstances we show that the environment drives the reduced
Improved method for measuring the ensemble average of strand breaks in genomic DNA.
Bespalov, V A; Conconi, A; Zhang, X; Fahy, D; Smerdon, M J
2001-01-01
The cis-syn cyclobutane pyrimidine dimer (CPD) is the major photoproduct induced in DNA by low wavelength ultraviolet radiation. An improved method was developed to detect CPD formation and removal in genomic DNA that avoids the problems encountered with the standard method of endonuclease detection of these photoproducts. Since CPD-specific endonucleases make single-strand cuts at CPD sites, quantification of the frequency of CPDs in DNA is usually done by denaturing gel electrophoresis. The standard method of ethidium bromide staining and gel photography requires more than 10 microg of DNA per gel lane, and correction of the photographic signal for the nonlinear film response. To simplify this procedure, a standard Southern blot protocol, coupled with phosphorimage analysis, was developed. This method uses random hybridization probes to detect genomic sequences with minimal sequence bias. Because of the vast linearity range of phosphorimage detection, scans of the signal profiles for the heterogeneous population of DNA fragments can be integrated directly to determine the number-average size of the population.
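The number-average size mentioned above is the standard first moment of the fragment-length distribution. As a hedged sketch of standard number-average length analysis (a textbook relation, not a protocol detail taken from this abstract), the break frequency per nucleotide follows from the number-average lengths of enzyme-cut and control lanes:

```latex
% Number-average fragment length from the integrated signal profile
L_n \;=\; \frac{\sum_i n_i L_i}{\sum_i n_i},
\qquad
% Break (CPD) frequency per nucleotide; superscripts label the
% enzyme-cut and untreated control lanes
\phi \;\approx\; \frac{1}{L_n^{\mathrm{cut}}} \;-\; \frac{1}{L_n^{\mathrm{control}}}.
```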
Ensemble Learning Method for Outlier Detection and its Application to Astronomical Light Curves
Nun, Isadora; Protopapas, Pavlos; Sim, Brandon; Chen, Wesley
2016-09-01
Outlier detection is necessary for automated data analysis, with specific applications spanning almost every domain from financial markets to epidemiology to fraud detection. We introduce a novel mixture-of-experts outlier detection model, which uses a dynamically trained, weighted network of five distinct outlier detection methods. After dimensionality reduction, individual outlier detection methods score each data point for “outlierness” in this new feature space. Our model then uses dynamically trained parameters to weigh the scores of each method, producing a final outlier score. We find that the mixture-of-experts model performs, on average, better than any single expert model in identifying both artificially and manually picked outliers. This mixture model is applied to a data set of astronomical light curves, after dimensionality reduction via time series feature extraction. Our model was tested using three fields from the MACHO catalog and generated a list of anomalous candidates. We confirm that the outliers detected using this method belong to rare classes, like Novae, He-burning, and red giant stars; other outlier light curves identified have no available information associated with them. To elucidate their nature, we created a website containing the light-curve data and information about these objects. Users can attempt to classify the light curves, give conjectures about their identities, and sign up for follow-up messages about the progress made on identifying these objects. This user-submitted data can be used to further train our mixture-of-experts model. Our code is publicly available to all who are interested.
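A minimal sketch of the score-weighting idea described above, assuming rank-normalization and a fixed weight vector; the paper's dynamic weight training and its five specific detectors are not reproduced here:

```python
import numpy as np

def moe_outlier_scores(score_matrix, weights):
    """Combine per-detector outlier scores into one score per point.

    score_matrix : (n_points, n_detectors) raw scores, one column per method
    weights      : (n_detectors,) non-negative weights; in a real system these
                   would be trained dynamically, here they are fixed inputs
    """
    # Rank-normalize each detector's scores to [0, 1] so that methods with
    # different scales are comparable before weighting.
    ranks = score_matrix.argsort(axis=0).argsort(axis=0)
    normed = ranks / (score_matrix.shape[0] - 1)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return normed @ w

# Toy example: two detectors, the third point is anomalous for both.
scores = np.array([[0.1, 0.2],
                   [0.3, 0.1],
                   [9.0, 5.0]])
combined = moe_outlier_scores(scores, [0.5, 0.5])
```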
Hocker, Harrison J; Rambahal, Nandini; Gorfe, Alemayehu A
2014-02-24
Incorporation of receptor flexibility into computational drug discovery through the relaxed complex scheme is well suited for screening against a single binding site. In the absence of a known pocket, or if there are multiple potential binding sites, it may be necessary to dock against the entire surface of the target (global docking). However, no suitable and easy-to-use tool is currently available to rank global docking results based on the preference of a ligand for a given binding site. We have developed a protocol, termed LIBSA for LIgand Binding Specificity Analysis, that analyzes multiple docked poses against a single receptor conformation or an ensemble of receptor conformations and returns a metric for the relative binding to a specific region of interest. By using novel filtering algorithms and the signal-to-noise ratio (SNR), the relative ligand-binding frequency at different pockets can be calculated and compared quantitatively. Ligands can then be triaged by their tendency to bind to a site instead of being ranked by affinity alone. The method thus facilitates screening libraries of ligand cores against a large library of receptor conformations without prior knowledge of specific pockets, which is especially useful in searching for hits that selectively target a particular site. We demonstrate the utility of LIBSA by showing that it correctly identifies known ligand binding sites and predicts the relative preference of a set of related ligands for different pockets on the same receptor.
A composite state method for ensemble data assimilation with multiple limited-area models
Directory of Open Access Journals (Sweden)
Matthew Kretschmer
2015-04-01
Limited-area models (LAMs) allow high-resolution forecasts to be made for geographic regions of interest when resources are limited. Typically, boundary conditions for these models are provided through one-way boundary coupling from a coarser resolution global model. Here, data assimilation is considered in a situation in which a global model supplies boundary conditions to multiple LAMs. The data assimilation method presented combines information from all of the models to construct a single ‘composite state’, on which data assimilation is subsequently performed. The analysis composite state is then used to form the initial conditions of the global model and all of the LAMs for the next forecast cycle. The method is tested by using numerical experiments with simple, chaotic models. The results of the experiments show that there is a clear forecast benefit to allowing LAM states to influence one another during the analysis. In addition, adding LAM information at analysis time has a strong positive impact on global model forecast performance, even at points not covered by the LAMs.
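The composite-state construction can be sketched as a simple blend of the global state with every LAM state covering each grid point. This is a toy illustration on a shared 1-D grid; the grid-to-grid interpolation used in practice is omitted:

```python
import numpy as np

def composite_state(global_state, lam_states, lam_masks):
    """Average the global state with every LAM state covering each point.

    global_state : (n,) global model state on a common grid
    lam_states   : list of (n,) LAM states (values outside a LAM's domain
                   are ignored via its mask)
    lam_masks    : list of (n,) boolean masks marking each LAM's domain
    """
    comp = global_state.astype(float).copy()
    count = np.ones(len(global_state))
    for x, m in zip(lam_states, lam_masks):
        comp[m] += x[m]   # accumulate LAM contribution inside its domain
        count[m] += 1     # track how many models cover each point
    return comp / count   # simple unweighted average of covering models

# Toy example: one LAM covering the left half of the domain.
g = np.array([1.0, 1.0, 1.0, 1.0])
lam = np.array([3.0, 3.0, 0.0, 0.0])
mask = np.array([True, True, False, False])
comp = composite_state(g, [lam], [mask])
```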
Revisiting Canonical Quantization
Klauder, John R
2012-01-01
Conventional canonical quantization procedures directly link various c-number and q-number quantities. Here, we advocate a different association of classical and quantum quantities that renders classical theory a natural subset of quantum theory with ħ > 0. While keeping the good results of conventional procedures, some examples are noted where the new procedures offer better results than conventional ones.
Hopfion canonical quantization
Acus, A; Norvaisas, E; Shnir, Ya
2012-01-01
We study the effect of the canonical quantization of the rotational mode of the charge Q=1 and Q=2 spinning Hopfions. The axially symmetric solutions are constructed numerically; it is shown that the quantum corrections to the mass of the configurations are relatively large.
Hopfion canonical quantization
Energy Technology Data Exchange (ETDEWEB)
Acus, A. [Vilnius University, Institute of Theoretical Physics and Astronomy, Gostauto 12, Vilnius 01108 (Lithuania); Halavanau, A. [Department of Theoretical Physics and Astrophysics, BSU, Minsk (Belarus); Norvaisas, E. [Vilnius University, Institute of Theoretical Physics and Astronomy, Gostauto 12, Vilnius 01108 (Lithuania); Shnir, Ya., E-mail: shnir@maths.tcd.ie [Department of Theoretical Physics and Astrophysics, BSU, Minsk (Belarus); Institute of Physics, Carl von Ossietzky University Oldenburg (Germany)
2012-05-03
We study the effect of the canonical quantization of the rotational mode of the charge Q=1 and Q=2 spinning Hopfions. The axially symmetric solutions are constructed numerically; it is shown that the quantum corrections to the mass of the configurations are relatively large.
Escobedo, Fernando A.
2007-11-01
In the Grand Canonical, osmotic, and Gibbs ensembles, chemical potential equilibrium is attained via transfers of molecules between the system and either a reservoir or another subsystem. In this work, the expanded ensemble (EXE) methods described in part I [F. A. Escobedo and F. J. Martínez-Veracoechea, J. Chem. Phys. 127, 174103 (2007)] of this series are extended to these ensembles to overcome the difficulties associated with implementing such whole-molecule transfers. In EXE, such moves occur via a target molecule that undergoes transitions through a number of intermediate coupling states. To minimize the tunneling time between the fully coupled and fully decoupled states, the intermediate states could be either: (i) sampled with an optimal frequency distribution (the sampling problem) or (ii) selected with an optimal spacing distribution (the staging problem). The sampling issue is addressed by determining the biasing weights that would allow generating an optimal ensemble; discretized versions of this algorithm (well suited for a small number of coupling stages) are also presented. The staging problem is addressed by selecting the intermediate stages in such a way that a flat histogram is the optimized ensemble. The validity of the advocated methods is demonstrated by their application to two model problems: the solvation of large hard spheres into a fluid of small and large spheres, and the vapor-liquid equilibrium of a chain system.
Regularized canonical correlation analysis with unlabeled data
Institute of Scientific and Technical Information of China (English)
Xi-chuan ZHOU; Hai-bin SHEN
2009-01-01
In standard canonical correlation analysis (CCA), the data from definite datasets are used to estimate their canonical correlation. In real applications, for example in bilingual text retrieval, a great portion of the data may not be labeled with the set it belongs to. This part of the data is called unlabeled data, while the rest, from definite datasets, is called labeled data. We propose a novel method called regularized canonical correlation analysis (RCCA), which makes use of both labeled and unlabeled samples. Specifically, we learn to approximate the canonical correlation as if all data were labeled. Then, we describe a generalization of RCCA for the multi-set situation. Experiments on four real-world datasets, Yeast, Cloud, Iris, and Haberman, demonstrate that, by incorporating the unlabeled data points, the accuracy of the correlation coefficients can be improved by over 30%.
Research on an Ensemble Method of Uncertainty Reasoning
Institute of Scientific and Technical Information of China (English)
贺怀清; 李建伏
2011-01-01
Ensemble learning is a machine learning paradigm in which multiple models are strategically generated and combined to obtain better predictive performance than a single learning method. It has been shown that ensemble learning is feasible and tends to yield better results. Uncertainty reasoning is one of the important directions in artificial intelligence. Various uncertainty reasoning methods have been developed, each with its own advantages and disadvantages in practical applications. Motivated by ensemble learning, an ensemble method of uncertainty reasoning is proposed. The main idea of the new method follows the basic framework of ensemble learning: multiple uncertainty reasoning methods are applied, and their results are integrated by certain rules into a final result, improving the accuracy of the reasoning. Theoretical analysis and experimental tests show that the ensemble uncertainty reasoning method is effective and feasible.
On Ensemble Nonlinear Kalman Filtering with Symmetric Analysis Ensembles
Luo, Xiaodong
2010-09-19
The ensemble square root filter (EnSRF) [1, 2, 3, 4] is a popular method for data assimilation in high-dimensional systems (e.g., geophysics models). Essentially, the EnSRF is a Monte Carlo implementation of the conventional Kalman filter (KF) [5, 6]. It differs from the KF mainly at the prediction steps, where it is ensembles of the system state, rather than the means and covariance matrices, that are propagated forward. In doing this, the EnSRF is computationally more efficient than the KF, since propagating a covariance matrix forward in high-dimensional systems is prohibitively expensive. In addition, the EnSRF is also very convenient to implement. By propagating the ensembles of the system state, the EnSRF can be directly applied to nonlinear systems without any change in comparison to the assimilation procedures in linear systems. However, by adopting the Monte Carlo method, the EnSRF also incurs certain sampling errors. One way to alleviate this problem is to introduce certain symmetry to the ensembles, which can reduce the sampling errors and spurious modes in the evaluation of the means and covariances of the ensembles [7]. In this contribution, we present two methods to produce symmetric ensembles. One is based on the unscented transform [8, 9], which leads to the unscented Kalman filter (UKF) [8, 9] and its variant, the ensemble unscented Kalman filter (EnUKF) [7]. The other is based on Stirling's interpolation formula (SIF), which results in the divided difference filter (DDF) [10]. Here we propose a simplified divided difference filter (sDDF) in the context of ensemble filtering. The similarity and difference between the sDDF and the EnUKF will be discussed. Numerical experiments will also be conducted to investigate the performance of the sDDF and the EnUKF, and compare them to a well-established EnSRF, the ensemble transform Kalman filter (ETKF) [2].
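A minimal sketch of one way to build a symmetric ensemble, via unscented-transform sigma points. The scaling parameter kappa and the Cholesky factorization are standard UKF choices; the EnUKF and sDDF details of the paper are not reproduced here:

```python
import numpy as np

def sigma_point_ensemble(mean, cov, kappa=0.0):
    """Generate a symmetric ensemble of 2n+1 sigma points.

    Members come in +/- pairs around the mean, so the ensemble mean
    reproduces `mean` exactly, suppressing odd-order sampling errors.
    (The UKF additionally assigns member weights, omitted here, which
    are needed to recover the covariance exactly.)
    """
    mean = np.asarray(mean, dtype=float)
    n = len(mean)
    # Columns of L are the symmetric perturbation directions.
    L = np.linalg.cholesky((n + kappa) * np.asarray(cov, dtype=float))
    members = [mean]
    for i in range(n):
        members.append(mean + L[:, i])
        members.append(mean - L[:, i])
    return np.array(members)

# Toy example: 2-D state with identity covariance -> 5 members.
mean = np.array([1.0, 2.0])
cov = np.eye(2)
ens = sigma_point_ensemble(mean, cov)
```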
Choquet-Fuzzy-Integral-Based SVM Ensemble Method and Its Empirical Study
Institute of Scientific and Technical Information of China (English)
倪渊; 林健
2012-01-01
In order to improve the classification performance of support vector machine (SVM) ensemble methods, a modified SVM ensemble method is put forward using the Choquet fuzzy integral rather than the Sugeno integral. The proposed method takes the output of every SVM component into account, overcoming the drawback of existing SVM ensemble methods that neglect secondary information. As an example, based on data collected in Shandong Province, the proposed method is used to evaluate the performance of social service provided by the colleges in the province. Simulation results show that the proposed method outperforms existing SVM ensemble methods, including Sugeno-fuzzy-integral-based and voting-based SVM ensembles, in terms of classification performance.
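The discrete Choquet integral used for the aggregation step can be sketched as follows. The fuzzy measure mu is a hypothetical user-supplied input here; learning it from data, as such methods typically do, is not shown:

```python
import numpy as np

def choquet_integral(scores, mu):
    """Discrete Choquet integral of classifier outputs w.r.t. fuzzy measure mu.

    scores : (n,) outputs of the n sub-classifiers for one sample
    mu     : dict mapping frozenset of classifier indices to a value in [0, 1],
             with mu(empty set) = 0 and mu(all indices) = 1
    """
    scores = np.asarray(scores, dtype=float)
    order = np.argsort(scores)            # ascending order of outputs
    total, prev = 0.0, 0.0
    remaining = frozenset(range(len(scores)))
    for idx in order:
        # Increment above the previous level, weighted by the measure of
        # the coalition of classifiers still at or above this level.
        total += (scores[idx] - prev) * mu[remaining]
        prev = scores[idx]
        remaining = remaining - {idx}
    return total

# Sanity check: with an additive measure the Choquet integral reduces
# to a plain weighted mean (0.5 * 0.2 + 0.5 * 0.8 = 0.5).
mu = {frozenset(): 0.0, frozenset({0}): 0.5,
      frozenset({1}): 0.5, frozenset({0, 1}): 1.0}
agg = choquet_integral([0.2, 0.8], mu)
```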
Mester, Zoltan; Lynd, Nathaniel; Fredrickson, Glenn
2013-03-01
Melts of block copolymer blends can exhibit coexistence between compositionally and morphologically distinct phases. We derived a unit-cell approach for a field-theoretic Gibbs ensemble formalism to rapidly map out such coexistence regions. We also developed a canonical ensemble model for the reversible reaction of supramolecular polymers and integrated it into the Gibbs ensemble scheme. This creates a faster method for generating phase diagrams in complex supramolecular systems than the usual grand canonical ensemble method and allows us to specify the system in experimentally accessible volume fractions rather than chemical potentials. The integrated approach is used to calculate phase diagrams for AB diblock copolymers reversibly reacting with B homopolymers to form a new diblock we term "ABB". For our case, we use a diblock that is sixty percent A monomer and a homopolymer that is the same length as the diblock. In the limit of infinite reaction favorability (large equilibrium constant), the system approaches an ABB diblock-B homopolymer blend when the AB diblock is the limiting reactant and an AB diblock-ABB diblock blend when the homopolymer is the limiting reactant. As reaction favorability is decreased, the phase boundaries shift towards higher homopolymer compositions so that sufficient reaction can take place to produce the ABB diblock that has a deciding role in stabilizing the observed phases.
Realizations of the Canonical Representation
Indian Academy of Sciences (India)
M K Vemuri
2008-02-01
A characterisation of the maximal abelian subalgebras of the bounded operators on Hilbert space that are normalised by the canonical representation of the Heisenberg group is given. This is used to classify the perfect realizations of the canonical representation.
Canonical quantization of macroscopic electromagnetism
Energy Technology Data Exchange (ETDEWEB)
Philbin, T G, E-mail: tgp3@st-andrews.ac.u [School of Physics and Astronomy, University of St Andrews, North Haugh, St Andrews, Fife KY16 9SS (United Kingdom)
2010-12-15
Application of the standard canonical quantization rules of quantum field theory to macroscopic electromagnetism has encountered obstacles due to material dispersion and absorption. This has led to a phenomenological approach to macroscopic quantum electrodynamics where no canonical formulation is attempted. In this paper macroscopic electromagnetism is canonically quantized. The results apply to any linear, inhomogeneous, magnetodielectric medium with dielectric functions that obey the Kramers-Kronig relations. The prescriptions of the phenomenological approach are derived from the canonical theory.
Canonical quantization of macroscopic electromagnetism
Philbin, T G
2010-01-01
Application of the standard canonical quantization rules of quantum field theory to macroscopic electromagnetism has encountered obstacles due to material dispersion and absorption. This has led to a phenomenological approach to macroscopic quantum electrodynamics where no canonical formulation is attempted. In this paper macroscopic electromagnetism is canonically quantized. The results apply to any linear, inhomogeneous, magnetoelectric medium with dielectric functions that obey the Kramers-Kronig relations. The prescriptions of the phenomenological approach are derived from the canonical theory.
Canonical Strangeness Enhancement
Sollfrank, J; Redlich, Krzysztof; Satz, Helmut
1998-01-01
According to recent experimental data and theoretical developments we discuss three distinct topics related to strangeness enhancement in nuclear reactions. We investigate the compatibility of multi-strange particle ratios measured in a restricted phase space with thermal model parameters extracted recently in 4pi. We study the canonical suppression as a possible reason for the observed strangeness enhancement and argue that a connection between QGP formation and the undersaturation of strangeness is not excluded.
2015-01-01
The traditional vision of Middleton as a playwright depicted him as an author of city comedies and tragicomedies, who in his very last years suddenly approached the tragic genre. Among his last four plays, three composed in succession are tragedies: Hengist, 1620, Women Beware Women, 1621 and The Changeling, 1622; the last two are recognized as masterpieces. In the last forty years, Middleton’s canon has changed with new attributions. This paper analyses the new pattern emerging in Middleton’...
Canonical Transformations of Kepler Trajectories
Mostowski, Jan
2010-01-01
In this paper, canonical transformations generated by constants of motion in the case of the Kepler problem are discussed. It is shown that canonical transformations generated by angular momentum are rotations of the trajectory. Particular attention is paid to canonical transformations generated by the Runge-Lenz vector. It is shown that these…
Grand Canonical Molecular Dynamics Simulations
Fritsch, S; Junghans, C; Ciccotti, G; Site, L Delle; Kremer, K
2011-01-01
For simulation studies of (macro-) molecular liquids it would be of significant interest to be able to adjust/increase the level of resolution within one region of space, while allowing for the free exchange of molecules between (open) regions of different resolution/representation. In the present work we generalize the adaptive resolution idea in terms of a generalized Grand Canonical approach. This provides a robust framework for truly open Molecular Dynamics systems. We apply the method to liquid water at ambient conditions.
A Framework for Non-Equilibrium Statistical Ensemble Theory
Institute of Scientific and Technical Information of China (English)
BI Qiao; HE Zu-Tan; LIU Jie
2011-01-01
Since Gibbs synthesized a general equilibrium statistical ensemble theory, many theorists have attempted to generalize the Gibbsian theory to the domain of non-equilibrium phenomena; however, the theory of non-equilibrium phenomena cannot be said to be as well established as the Gibbsian ensemble theory. In this work, we present a framework for the non-equilibrium statistical ensemble formalism based on a subdynamic kinetic equation (SKE) rooted in the Brussels-Austin school and followed by some up-to-date works. The key of the construction is to use a similarity transformation between the Gibbsian ensemble formalism based on the Liouville equation and the subdynamic ensemble formalism based on the SKE. Using this formalism, we study the spin-boson system, in the cases of weak and strong coupling, and easily obtain the reduced density operators for the canonical ensembles.
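For reference, the canonical-ensemble reduced density operator that such derivations target has the standard Gibbs form (a textbook result, not specific to the SKE formalism):

```latex
\rho_{\mathrm{can}} \;=\; \frac{e^{-\beta H}}{Z},
\qquad
Z \;=\; \operatorname{Tr}\, e^{-\beta H},
\qquad
\beta \;=\; \frac{1}{k_B T}.
```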
Similarity measures for protein ensembles
DEFF Research Database (Denmark)
Lindorff-Larsen, Kresten; Ferkinghoff-Borg, Jesper
2009-01-01
Analyses of similarities and changes in protein conformation can provide important information regarding protein function and evolution. Many scores, including the commonly used root mean square deviation, have therefore been developed to quantify the similarities of different protein conformations...... a synthetic example from molecular dynamics simulations. We then apply the algorithms to revisit the problem of ensemble averaging during structure determination of proteins, and find that an ensemble refinement method is able to recover the correct distribution of conformations better than standard single...
Comparison of canonical and microcanonical definitions of entropy
Matty, Michael; Lancaster, Lachlan; Griffin, William; Swendsen, Robert H.
2017-02-01
For more than 100 years, one of the central concepts in statistical mechanics has been the microcanonical ensemble, which provides a way of calculating the thermodynamic entropy for a specified energy. A controversy has recently emerged between two distinct definitions of the entropy based on the microcanonical ensemble: (1) The Boltzmann entropy, defined by the density of states at a specified energy, and (2) The Gibbs entropy, defined by the sum or integral of the density of states below a specified energy. A critical difference between the consequences of these definitions pertains to the concept of negative temperatures, which by the Gibbs definition cannot exist. In this paper, we call into question the fundamental assumption that the microcanonical ensemble should be used to define the entropy. We base our analysis on a recently proposed canonical definition of the entropy as a function of energy. We investigate the predictions of the Boltzmann, Gibbs, and canonical definitions for a variety of classical and quantum models. Our results support the validity of the concept of negative temperature, but not for all models with a decreasing density of states. We find that only the canonical entropy consistently predicts the correct thermodynamic properties, while microcanonical definitions of entropy, including those of Boltzmann and Gibbs, are correct only for a limited set of models. For models which exhibit a first-order phase transition, we show that the use of the thermodynamic limit, as usually interpreted, can conceal the essential physics.
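The three competing definitions compared above can be written compactly (standard forms, with ω(E) the density of states and Z the canonical partition function):

```latex
% Boltzmann entropy: density of states at energy E (delta E fixes units)
S_B(E) \;=\; k_B \ln\!\big[\omega(E)\,\delta E\big]
% Gibbs (volume) entropy: integrated density of states below E
S_G(E) \;=\; k_B \ln \Omega(E),
\qquad
\Omega(E) \;=\; \int^{E}\! \omega(E')\,\mathrm{d}E'
% Canonical entropy as a function of the mean energy
S(\langle E \rangle) \;=\; k_B \ln Z \;+\; \frac{\langle E \rangle}{T}
```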
A Novel Method of Support Vector Classifier Ensembles
Institute of Scientific and Technical Information of China (English)
李满; 李春华
2011-01-01
A novel support vector machine ensemble method is proposed to eliminate the impact of data representation and model selection of individual classifiers on the performance of support vector machines. First, principal component analysis (PCA) is applied to the original data in order to seek a better representation and reduce the dimensionality of the high-dimensional feature space. Second, the fuzzy integral is used to fuse the decisions of the sub-SVM classifiers, taking into account both the classification result of each sub-classifier and the importance of its decision to the final decision; this importance-weighted support vector classifier (SVC) ensemble improves the classification performance. Simulation results demonstrate that the proposed SVC ensemble approach using the fuzzy integral and PCA outperforms a single SVC and the traditional SVC ensemble technique via majority voting in terms of classification accuracy.
Generalized canonical correlation analysis with missing values
M. van de Velden (Michel); Y. Takane
2009-01-01
Two new methods for dealing with missing values in generalized canonical correlation analysis are introduced. The first approach, which does not require iterations, is a generalization of the Test Equating method available for principal component analysis. In the second approach, missing
Concept maps and canonical models in neuropsychiatry.
Marin-Sanguino, A; del Rosario, R C H; Mendoza, E R
2009-05-01
Most bioscientists engage in informal modelling in their research and explicitly document this activity's results in diagrams or "concept maps". While canonical modelling approaches such as Biochemical Systems Theory (BST) immediately allow the construction of a corresponding system of equations, the problem of determining appropriate parameter values remains. Goel et al. introduced Concept Map Modelling (CMM) as a framework to address this problem through an interactive dialogue between experimenters and modellers. The CMM dialogue extracts the experimenters' implicit knowledge about the dynamical behaviour of the parts of the system being modelled in the form of rough sketches and verbal statements, e.g. value ranges. These are then used as inputs for parameter and initial value estimates for the symbolic canonical model based on the diagram. Canonical models have the big advantage that a great variety of parameter estimation methods have been developed for them in recent years. The paper discusses the suitability of this approach for neuropsychiatry using recent work of Qi et al. on a canonical model of presynaptic dopamine metabolism. Due to the complexity of systems encountered in neuropsychiatry, hybrid models are often used to complement the canonical models discussed here.
Brochero, D.; Anctil, F.; Gagné, C.
2012-04-01
Today, the availability of the Meteorological Ensemble Prediction Systems (MEPS) and its subsequent coupling with multiple hydrological models offer the possibility of building Hydrological Ensemble Prediction Systems (HEPS) consisting of a large number of members. However, this task is complex both in terms of the coupling of information and of the computational time, which may create an operational barrier. The evaluation of the prominence of each hydrological members can be seen as a non-parametric post-processing stage that seeks finding the optimal participation of the hydrological models (in a fashion similar to the Bayesian model averaging technique), maintaining or improving the quality of a probabilistic forecasts based on only x members drawn from a super ensemble of d members, thus allowing the reduction of the task required to issue the probabilistic forecast. The main objective of the current work consists in assessing the degree of simplification (reduction of the number of hydrological members) that can be achieved with a HEPS configured using 16 lumped hydrological models driven by the 50 weather ensemble forecasts from the European Centre for Medium-range Weather Forecasts (ECMWF), i.e. an 800-member HEPS. In a previous work (Brochero et al., 2011a, b), we demonstrated that the proportion of members allocated to each hydrological model is a sufficient criterion to reduce the number of hydrological members while improving the balance of the scores, taking into account interchangeability of the ECMWF MEPS. Here, we compare the proportion of members allocated to each hydrological model derived from three non-parametric techniques: correlation analysis of hydrological members, Backward Greedy Selection (BGS) and Nondominated Sorting Genetic Algorithm (NSGA II). The last two techniques allude to techniques developed in machine learning, in a multicriteria framework exploiting the relationship between bias, reliability, and the number of members of the
Yu, Alfred C H; Cobbold, Richard S C
2008-03-01
Because of their adaptability to the slow-time signal contents, eigen-based filters have shown potential in improving the flow detection performance of color flow images. This paper proposes a new eigen-based filter called the Hankel-SVD filter that is intended to process each slow-time ensemble individually. The new filter is derived using the notion of principal Hankel component analysis, and it achieves clutter suppression by retaining only the principal components whose order is greater than the clutter eigen-space dimension estimated from a frequency-based analysis algorithm. To assess its efficacy, the Hankel-SVD filter was first applied to synthetic slow-time data (ensemble size: 10) simulated from two different sets of flow parameters that model: 1) arterial imaging (blood velocity: 0 to 38.5 cm/s, tissue motion: up to 2 mm/s, transmit frequency: 5 MHz, pulse repetition period: 0.4 ms) and 2) deep vessel imaging (blood velocity: 0 to 19.2 cm/s, tissue motion: up to 2 cm/s, transmit frequency: 2 MHz, pulse repetition period: 2.0 ms). In the simulation analysis, the post-filter clutter-to-blood signal ratio (CBR) was computed as a function of blood velocity. Results show that for the same effective stopband size (50 Hz), the Hankel-SVD filter has a narrower transition region in the post-filter CBR curve than another type of adaptive filter called the clutter-downmixing filter. The practical efficacy of the proposed filter was tested by application to in vivo color flow data obtained from the human carotid arteries (transmit frequency: 4 MHz, pulse repetition period: 0.333 ms, ensemble size: 10). The resulting power images show that the Hankel-SVD filter can better distinguish between blood and moving-tissue regions (about 9 dB separation in power) than the clutter-downmixing filter and a fixed-rank multi-ensemble-based eigen-filter (which showed a 2 to 3 dB separation).
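A minimal sketch of a Hankel-SVD clutter filter of the kind described above. The clutter-dimension estimate is assumed to be supplied externally; the paper's frequency-based estimator is not reproduced:

```python
import numpy as np

def hankel_svd_filter(x, clutter_dim):
    """Suppress clutter in one slow-time ensemble by truncating principal
    Hankel components.

    x : (N,) complex slow-time samples from one sample volume
    clutter_dim : assumed clutter eigen-space dimension (in practice
                  estimated from a separate frequency-based analysis)
    """
    x = np.asarray(x)
    N = len(x)
    L = N // 2 + 1
    # Hankel matrix whose anti-diagonals hold consecutive samples.
    H = np.array([x[i:i + N - L + 1] for i in range(L)])
    U, s, Vh = np.linalg.svd(H, full_matrices=False)
    # Zero out the largest singular values (clutter subspace); keep the rest.
    s[:clutter_dim] = 0.0
    Hf = (U * s) @ Vh
    # Average the anti-diagonals to recover the filtered 1-D signal.
    y = np.zeros(N, dtype=complex)
    cnt = np.zeros(N)
    for i in range(Hf.shape[0]):
        for j in range(Hf.shape[1]):
            y[i + j] += Hf[i, j]
            cnt[i + j] += 1
    return y / cnt

# Toy check: a pure DC "clutter" ensemble is rank one, so removing the
# first principal component should null the signal.
y = hankel_svd_filter(np.ones(10, dtype=complex), clutter_dim=1)
```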
Azreg-Aïnou, Mustapha
2012-01-01
Properties pertaining to the thermodynamical local stability of Reissner-Nordström black holes surrounded by quintessence, as well as adiabatic invariance, adiabatic charging and a generalized Smarr formula, are discussed. Limits for the entropy, temperature and electric potential ensuring stability of canonical ensembles are determined by the classical thermodynamical and Poincaré methods. By the latter approach we show that microcanonical ensembles (isolated black holes) are stable. Two geometrical approaches lead to determining the same states corresponding to second-order phase transitions.
Bouallegue, Zied Ben; Theis, Susanne E; Pinson, Pierre
2015-01-01
Probabilistic forecasts in the form of ensembles of scenarios are required for complex decision-making processes. Ensemble forecasting systems provide such products, but the spatio-temporal structure of the forecast uncertainty is lost when statistical calibration of the ensemble forecasts is applied for each lead time and location independently. Non-parametric approaches allow the reconstruction of spatio-temporal joint probability distributions at a low computational cost. For example, the ensemble copula coupling (ECC) method consists in rebuilding the multivariate aspect of the forecast from the original ensemble forecasts. Based on the assumption of error stationarity, parametric methods aim to fully describe the forecast dependence structures. In this study, the concept of ECC is combined with past data statistics in order to account for the autocorrelation of the forecast error. The new approach, which preserves the dynamical development of the ensemble members, is called dynamic ensemble copula coupling (...
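The core ECC step, reordering calibrated marginal quantiles according to the raw ensemble's rank structure, can be sketched as:

```python
import numpy as np

def ensemble_copula_coupling(raw_ens, calibrated_quantiles):
    """Restore the raw ensemble's rank structure in calibrated margins.

    raw_ens              : (m, k) raw ensemble, m members, k margins
                           (lead times and/or locations)
    calibrated_quantiles : (m, k) equidistant quantiles drawn from the
                           calibrated marginal distribution of each margin
    """
    out = np.empty_like(calibrated_quantiles, dtype=float)
    for j in range(raw_ens.shape[1]):
        # Rank of each raw member within this margin (double argsort).
        ranks = raw_ens[:, j].argsort().argsort()
        # Assign the sorted calibrated quantiles in raw-ensemble rank order,
        # preserving the multivariate dependence of the raw forecast.
        out[:, j] = np.sort(calibrated_quantiles[:, j])[ranks]
    return out

# Toy example with one margin: raw member ranks are (2, 0, 1), so the
# calibrated quantiles 10 < 20 < 30 are permuted accordingly.
raw = np.array([[3.0], [1.0], [2.0]])
quant = np.array([[10.0], [20.0], [30.0]])
out = ensemble_copula_coupling(raw, quant)
```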
Iterative algorithms to approximate canonical Gabor windows: Computational aspects
DEFF Research Database (Denmark)
Janssen, A.J.E.M.; Søndergaard, Peter Lempel
In this paper we investigate the computational aspects of some recently proposed iterative methods for approximating the canonical tight and canonical dual window of a Gabor frame (g,a,b). The iterations start with the window g while the iteration steps comprise the window g, the k^th iterand...
de Sousa, Leonardo Evaristo; Ribeiro, Luiz Antonio; Fonseca, Antonio Luciano de Almeida; da Silva Filho, Demétrio Antonio
2016-07-14
The emission spectra of flexible and rigid organic molecules are theoretically investigated in the framework of the Franck-Condon (FC) and nuclear ensemble (NE) approaches, both of which rely on results from density functional theory but differ in the way vibrational contributions are taken into account. Our findings show that the emission spectra obtained using the NE approach are in better agreement with experiment than the ones produced by FC calculations considering both rigid and flexible molecules. Surprisingly, the description of a suitable balance between the vibronic progression and the emission spectra envelope shows dependency on the initial sampling for the NE calculations which must be judiciously selected. Our results intend to provide guidance for a better theoretical description of light emission properties of organic molecules with applications in organic electronic devices.
Directory of Open Access Journals (Sweden)
J. D. Giraldo
2011-04-01
The Sudano-Sahelian zone of West Africa, one of the poorest regions on Earth, is characterized by high rainfall variability and rapid population growth. In this region, heavy storm events frequently cause extensive damage. Nonetheless, the projections for change in extreme rainfall values have shown a great divergence between Regional Climate Models (RCM), increasing the forecast uncertainty. Novel methodologies should be applied, taking into account both the variability provided by different RCMs, as well as the non-stationary nature of time series for the building of hazard maps of extreme rainfall events. The present work focuses on the probability density functions (PDFs)-based evaluation and a simple quantitative measure of how well each RCM considered can capture the observed annual maximum daily rainfall (AMDR) series on the Senegal River basin. Since meaningful trends have been detected in historical rainfall time series for the region, non-stationary probabilistic models were used to fit the PDF parameters to the AMDR time series. In the development of PDF ensemble by bootstrapping techniques, Reliability Ensemble Averaging (REA) maps were applied to score the RCMs. The REA factors were computed using a metric to evaluate the agreement between observed -or best estimated- PDFs, and that simulated with each RCM. The assessment of plausible regional trends associated with the return period, from the hazard maps of AMDR, showed a general rise, owing to an increase in the mean and the variability of extreme precipitation. These spatial-temporal distributions could be considered by local stakeholders in such a way as to reach a better balance between mitigation and adaptation.
Directory of Open Access Journals (Sweden)
J. D. Giraldo Osorio
2011-11-01
The Sudano-Sahelian zone of West Africa, one of the poorest regions on Earth, is characterized by high rainfall variability and rapid population growth. In this region, heavy storm events frequently cause extensive damage. Nonetheless, the projections for change in extreme rainfall values have shown a great divergence between Regional Climate Models (RCM), increasing the forecast uncertainty. Novel methodologies should be applied, taking into account both the variability provided by different RCMs, as well as the non-stationary nature of time series for the building of hazard maps of extreme rainfall events. The present work focuses on the probability density functions (PDFs)-based evaluation and a simple quantitative measure of how well each RCM considered can capture the observed annual maximum daily rainfall (AMDR) series on the Senegal River basin. Since meaningful trends have been detected in historical rainfall time series for the region, non-stationary probabilistic models were used to fit the PDF parameters to the AMDR time series. In the development of PDF ensemble by bootstrapping techniques, Reliability Ensemble Averaging (REA) maps were applied to score the RCMs. The REA factors were computed using a metric to evaluate the agreement between observed -or best estimated- PDFs, and that simulated with each RCM. The assessment of plausible regional trends associated with the return period, from the hazard maps of AMDR, showed a general rise, owing to an increase in the mean and the variability of extreme precipitation. These spatial-temporal distributions could be considered by the Organization for the Development of the Senegal River (Organisation pour la mise en valeur du fleuve Sénégal, OMVS), in such a way as to reach a better balance between mitigation and adaptation.
Giraldo Osorio, J. D.; García Galiano, S. G.
2011-11-01
The Sudano-Sahelian zone of West Africa, one of the poorest regions on Earth, is characterized by high rainfall variability and rapid population growth. In this region, heavy storm events frequently cause extensive damage. Nonetheless, the projections for change in extreme rainfall values have shown a great divergence between Regional Climate Models (RCM), increasing the forecast uncertainty. Novel methodologies should be applied, taking into account both the variability provided by different RCMs, as well as the non-stationary nature of time series for the building of hazard maps of extreme rainfall events. The present work focuses on the probability density functions (PDFs)-based evaluation and a simple quantitative measure of how well each RCM considered can capture the observed annual maximum daily rainfall (AMDR) series on the Senegal River basin. Since meaningful trends have been detected in historical rainfall time series for the region, non-stationary probabilistic models were used to fit the PDF parameters to the AMDR time series. In the development of PDF ensemble by bootstrapping techniques, Reliability Ensemble Averaging (REA) maps were applied to score the RCMs. The REA factors were computed using a metric to evaluate the agreement between observed -or best estimated- PDFs, and that simulated with each RCM. The assessment of plausible regional trends associated with the return period, from the hazard maps of AMDR, showed a general rise, owing to an increase in the mean and the variability of extreme precipitation. These spatial-temporal distributions could be considered by the Organization for the Development of the Senegal River (Organisation pour la mise en valeur du fleuve Sénégal, OMVS), in such a way as to reach a better balance between mitigation and adaptation.
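The REA-weighted PDF ensemble can be sketched as follows; the overlap metric used here (a Perkins-style skill score of bin-wise minima between PDFs on a common grid) is an assumed stand-in for the agreement metric actually used in the study:

```python
import numpy as np

def rea_weights(obs_pdf, model_pdfs):
    """Score each RCM by the overlap between its PDF and the observed
    (or best-estimated) PDF, then normalize into REA-style weights."""
    obs = np.asarray(obs_pdf, float)
    scores = []
    for pdf in model_pdfs:
        # Overlap skill score: sum of bin-wise minima of the two PDFs.
        scores.append(np.minimum(obs, np.asarray(pdf, float)).sum())
    w = np.array(scores)
    return w / w.sum()

def weighted_ensemble_pdf(model_pdfs, weights):
    """REA-weighted average of the model PDFs."""
    return np.tensordot(weights, np.asarray(model_pdfs, float), axes=1)
```

A model whose PDF matches the observations perfectly gets the full weight; a model with no overlap contributes nothing to the ensemble PDF.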
Charoenkwan, Phasit; Shoombuatong, Watshara; Lee, Hua-Chin; Chaijaruwanich, Jeerayut; Huang, Hui-Ling; Ho, Shinn-Ying
2013-01-01
Existing methods for predicting protein crystallization obtain high accuracy using various types of complemented features and complex ensemble classifiers, such as support vector machine (SVM) and Random Forest classifiers. It is desirable to develop a simple and easily interpretable prediction method with informative sequence features to provide insights into protein crystallization. This study proposes an ensemble method, SCMCRYS, to predict protein crystallization, for which each classifier is built by using a scoring card method (SCM) that estimates propensity scores of p-collocated amino acid (AA) pairs (p=0 for a dipeptide). The SCM classifier determines the crystallization of a sequence according to a weighted-sum score. The weights are the composition of the p-collocated AA pairs, and the propensity scores of these AA pairs are estimated using a statistical optimization approach. SCMCRYS predicts crystallization by simple voting over a number of SCM classifiers. The experimental results show that the single SCM classifier utilizing dipeptide composition with accuracy of 73.90% is comparable to the best previously-developed SVM-based classifier, SVM_POLY (74.6%), and our proposed SVM-based classifier utilizing the same dipeptide composition (77.55%). The SCMCRYS method with accuracy of 76.1% is comparable to the state-of-the-art ensemble methods PPCpred (76.8%) and RFCRYS (80.0%), which used the SVM and Random Forest classifiers, respectively. This study also investigates mutagenesis analysis based on SCM, and the results support the hypothesis that mutagenesis of the surface residues Ala and Cys has large and small probabilities, respectively, of enhancing protein crystallizability, considering the estimated scores of crystallizability and solubility, melting point, molecular weight and conformational entropy of amino acids in a generalized condition. The propensity scores of amino acids and dipeptides for estimating the protein crystallizability can aid
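The SCM weighted-sum score for the dipeptide case (p = 0) can be sketched as below; the propensity table and decision threshold are illustrative assumptions, since the real scores come from the statistical optimization step:

```python
def scm_score(seq, propensity):
    """Weighted-sum crystallization score of a protein sequence:
    dipeptide composition weighted by estimated propensity scores."""
    counts = {}
    for i in range(len(seq) - 1):
        dp = seq[i:i + 2]
        counts[dp] = counts.get(dp, 0) + 1
    total = max(len(seq) - 1, 1)
    # Weight of each dipeptide is its composition (relative frequency).
    return sum((c / total) * propensity.get(dp, 0.0)
               for dp, c in counts.items())

def scm_predict(seq, propensity, threshold=0.5):
    """Classify a sequence as crystallizable when its score exceeds
    a threshold (the threshold here is a hypothetical placeholder)."""
    return scm_score(seq, propensity) > threshold
```

A voting ensemble in the spirit of SCMCRYS would simply apply several such classifiers with different propensity tables and take the majority vote.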
Ordered analytic representation of PDEs by Hamiltonian canonical system
Institute of Scientific and Technical Information of China (English)
Zheng, Yu; Chen, Yong
2002-01-01
Based on the method of symplectic geometry and variational calculation, the means by which some PDEs can be given an ordered analytic representation by a Hamiltonian canonical system is discussed. Meanwhile, some related necessary and sufficient conditions are obtained.
Long-range interacting systems in the unconstrained ensemble.
Latella, Ivan; Pérez-Madrid, Agustín; Campa, Alessandro; Casetti, Lapo; Ruffo, Stefano
2017-01-01
Completely open systems can exchange heat, work, and matter with the environment. While energy, volume, and number of particles fluctuate under completely open conditions, the equilibrium states of the system, if they exist, can be specified using the temperature, pressure, and chemical potential as control parameters. The unconstrained ensemble is the statistical ensemble describing completely open systems and the replica energy is the appropriate free energy for these control parameters from which the thermodynamics must be derived. It turns out that macroscopic systems with short-range interactions cannot attain equilibrium configurations in the unconstrained ensemble, since temperature, pressure, and chemical potential cannot be taken as a set of independent variables in this case. In contrast, we show that systems with long-range interactions can reach states of thermodynamic equilibrium in the unconstrained ensemble. To illustrate this fact, we consider a modification of the Thirring model and compare the unconstrained ensemble with the canonical and grand-canonical ones: The more the ensemble is constrained by fixing the volume or number of particles, the larger the space of parameters defining the equilibrium configurations.
Canonical duties, liabilities of trustees and administrators.
Morrisey, F G
1985-06-01
The new Code of Canon Law outlines a number of duties of those who have responsibility for administering the Church's temporal goods. Before assuming office, administrators must pledge to be efficient and faithful, and they must prepare an inventory of goods belonging to the juridic person they serve. Among their duties, administrators must: Ensure that adequate insurance is provided; Use civilly valid methods to protect canonical ownership of the goods; Observe civil and canon law prescriptions as well as donors' intentions; Collect and safeguard revenues, repay debts, and invest funds securely; Maintain accurate records, keep documents secure, and prepare an annual budget; Prepare an annual report and present it to the Ordinary where prescribed; Observe civil law concerning labor and social policy, and pay employees a just and decent wage. Administrators who carry out acts that are invalid canonically are liable for such acts. The juridic person is not liable, unless it derived benefit from the transaction. Liability is especially high when the sale of property is involved or when a contract is entered into without proper canonical consent. Although Church law is relatively powerless to punish those who have been negligent, stewards, administrators, and trustees must do all they can to be faithful to the responsibility with which they have been entrusted.
Salmon, Loïc; Bascom, Gavin; Andricioaei, Ioan; Al-Hashimi, Hashim M
2013-04-10
The ability to modulate alignment and measure multiple independent sets of NMR residual dipolar couplings (RDCs) has made it possible to characterize internal motions in proteins at atomic resolution and with time scale sensitivity ranging from picoseconds up to milliseconds. The application of such methods to the study of RNA dynamics, however, remains fundamentally limited by the inability to modulate alignment and by strong couplings between internal and overall motions that complicate the quantitative interpretation of RDCs. Here, we address this problem by showing that RNA alignment can be generally modulated, in a controlled manner, by variable elongation of A-form helices and that the information contained within the measured RDCs can be extracted even in the presence of strong couplings between motions and overall alignment via structure-based prediction of alignment. Using this approach, four RDC data sets, and a broad conformational pool obtained from a 8.2 μs molecular dynamics simulation, we successfully construct and validate an atomic resolution ensemble of human immunodeficiency virus type I transactivation response element RNA. This ensemble reveals local motions in and around the bulge involving changes in stacking and hydrogen-bonding interactions, which are undetectable by traditional spin relaxation and drive global changes in interhelical orientation. This new approach broadens the scope of using RDCs in characterizing the dynamics of nucleic acids.
Canonical approach to finite density QCD with multiple precision computation
Fukuda, Ryutaro; Oka, Shotaro
2015-01-01
We calculate the baryon chemical potential ($\mu_B$) dependence of thermodynamic observables, i.e., pressure, baryon number density and susceptibility, by lattice QCD using the canonical approach. We compare the results with those of the multi-parameter reweighting (MPR) method; both methods give very consistent values in the regions where the errors of the MPR are under control. The canonical method gives reliable results beyond $\mu_B/T = 3$, with $T$ being the temperature. Multiple-precision operations play an important role in the evaluation of canonical partition functions.
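The canonical approach referred to above conventionally rests on the fugacity expansion of the grand-canonical partition function; as a hedged sketch in standard notation (not taken from the paper itself):

```latex
Z_{GC}(\mu_B, T, V) = \sum_{n=-\infty}^{\infty} Z_C(n, T, V)\, e^{\,n\mu_B/T},
\qquad
Z_C(n, T, V) = \frac{1}{2\pi} \int_0^{2\pi} \mathrm{d}\theta\; e^{-in\theta}\, Z_{GC}(i\theta T, T, V).
```

Each canonical partition function $Z_C(n,T,V)$ at fixed net baryon number $n$ is obtained by a Fourier transform with respect to an imaginary chemical potential; the severe cancellations among the terms of this sum are the usual reason multiple-precision arithmetic becomes necessary.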
A mollified Ensemble Kalman filter
Bergemann, Kay
2010-01-01
It is well recognized that discontinuous analysis increments of sequential data assimilation systems, such as ensemble Kalman filters, might lead to spurious high frequency adjustment processes in the model dynamics. Various methods have been devised to continuously spread out the analysis increments over a fixed time interval centered about analysis time. Among these techniques are nudging and incremental analysis updates (IAU). Here we propose another alternative, which may be viewed as a hybrid of nudging and IAU and which arises naturally from a recently proposed continuous formulation of the ensemble Kalman analysis step. A new slow-fast extension of the popular Lorenz-96 model is introduced to demonstrate the properties of the proposed mollified ensemble Kalman filter.
Canonical quantization of constrained systems
Energy Technology Data Exchange (ETDEWEB)
Bouzas, A.; Epele, L.N.; Fanchiotti, H.; Canal, C.A.G. (Laboratorio de Fisica Teorica, Departamento de Fisica, Universidad Nacional de La Plata, Casilla de Correo No. 67, 1900 La Plata, Argentina (AR))
1990-07-01
The consideration of first-class constraints together with gauge conditions as a set of second-class constraints in a given system is shown to be incorrect when carrying out its canonical quantization.
The canon as text for a biblical theology
Directory of Open Access Journals (Sweden)
James A. Loader
2005-10-01
The novelty of the canonical approach is questioned and its fascination at least partly traced to the Reformation, as well as to the post-Reformation's need for a clear and authoritative canon to perform the function previously performed by the church. This does not minimise the elusiveness and deeply contradictory positions both within the canon and triggered by it. On the one hand, the canon itself is a centripetal phenomenon and does play an important role in exegesis and theology. Even so, on the other hand, it not only contains many difficulties, but also causes various additional problems of a formal as well as a theological nature. The question is mooted whether the canonical approach alleviates or aggravates the dilemma. Since this approach has become a major factor in Christian theology, aspects of the Christian canon are used to gauge whether “canon” is an appropriate category for eliminating difficulties that arise by virtue of its own existence. Problematic uses and appropriations of several Old Testament canons are advanced, as well as evidence in the New Testament of a consciousness that the “old” has been surpassed (“Überbietungsbewußtsein”). It is maintained that at least the Childs version of the canonical approach fails to smooth out these and similar difficulties. As a method it can cater for the New Testament’s (superior) role as the hermeneutical standard for evaluating the Old, but founders on its inability to create the theological unity it claims can solve religious problems exposed by Old Testament historical criticism. It is concluded that canon as a category cannot be dispensed with, but is useful for the opposite of the purpose to which it is conventionally put: far from bringing about theological “unity” or producing a standard for “correct” exegesis, it requires different readings of different canons.
Improving the sampling efficiency of the Grand Canonical Simulated Quenching approach
Energy Technology Data Exchange (ETDEWEB)
Perez, Danny; Vernon, Louis J. (Los Alamos National Laboratory)
2012-04-04
Most common atomistic simulation techniques, like molecular dynamics or Metropolis Monte Carlo, operate under a constant interatomic Hamiltonian with a fixed number of atoms. Internal (atom positions or velocities) or external (simulation cell size or geometry) variables are then evolved dynamically or stochastically to yield sampling in different ensembles, such as microcanonical (NVE), canonical (NVT), isothermal-isobaric (NPT), etc. Averages are then taken to compute relevant physical properties. At least two limitations of these standard approaches can seriously hamper their application to many important systems: (1) they do not allow for the exchange of particles with a reservoir, and (2) the sampling efficiency is insufficient to obtain converged results because of the very long intrinsic timescales associated with these quantities. To fix ideas, one might want to identify low (free) energy configurations of grain boundaries (GB). In reality, grain boundaries are in contact with the grains, which act as reservoirs of defects (e.g., vacancies and interstitials). Since the GB can exchange particles with its environment, the most stable configuration cannot provably be found by sampling from NVE or NVT ensembles alone: one needs to allow the number of atoms in the sample to fluctuate. The first limitation can be circumvented by working in the grand canonical ensemble (μVT) or its derivatives (such as the semi-grand-canonical ensemble useful for the study of substitutional alloys). Monte Carlo methods were the first to be adapted to this kind of system, where the number of atoms is allowed to fluctuate. Many of these methods are based on the Widom insertion method [Widom63], where the chemical potential of a given chemical species can be inferred from the potential energy changes upon random insertion of a new particle within the simulation cell. Other techniques, such as the Gibbs ensemble Monte Carlo [Panagiotopoulos87] where exchanges of particles are
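The Widom insertion estimate mentioned above can be sketched in a few lines (the function name and the pair-energy callback are illustrative; production codes add cell lists, periodic boundary conditions and proper sampling over configurations):

```python
import random
import math

def widom_mu_excess(positions, box, n_trials, beta, pair_energy):
    """Widom test-particle estimate of the excess chemical potential:
    mu_ex = -(1/beta) * ln < exp(-beta * dU) >, averaged over random
    trial insertions into a cubic box of side `box`."""
    acc = 0.0
    for _ in range(n_trials):
        # Random trial position for the ghost particle.
        trial = tuple(random.uniform(0.0, box) for _ in range(3))
        # Energy change of inserting the ghost into the configuration.
        dU = sum(pair_energy(trial, p, box) for p in positions)
        acc += math.exp(-beta * dU)
    return -math.log(acc / n_trials) / beta
```

For an ideal gas (zero pair energy) the estimate returns mu_ex = 0, as expected.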
Asymptotic distributions in the projection pursuit based canonical correlation analysis
Institute of Scientific and Technical Information of China (English)
Anonymous
2010-01-01
In this paper, associations between two sets of random variables based on the projection pursuit (PP) method are studied. The asymptotic normal distributions of estimators of the PP based canonical correlations and weighting vectors are derived.
4DVAR by ensemble Kalman smoother
Mandel, Jan; Gratton, Serge
2013-01-01
We propose to use the ensemble Kalman smoother (EnKS) as the linear least-squares solver in the Gauss-Newton method for the large nonlinear least-squares problem in incremental 4DVAR. The ensemble approach is naturally parallel over the ensemble members and no tangent or adjoint operators are needed. Further, adding a regularization term results in replacing the Gauss-Newton method, which may diverge, by the Levenberg-Marquardt method, which is known to be convergent. The regularization is implemented efficiently as an additional observation in the EnKS.
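Generically, the regularization amounts to damping the Gauss-Newton normal equations ($J$, $r$ and $\lambda$ are generic symbols for the Jacobian, residual and damping parameter, not notation taken from the paper):

```latex
\underbrace{(J^{\mathsf T} J)\,\delta x = -J^{\mathsf T} r}_{\text{Gauss--Newton}}
\qquad\longrightarrow\qquad
\underbrace{(J^{\mathsf T} J + \lambda I)\,\delta x = -J^{\mathsf T} r}_{\text{Levenberg--Marquardt}}
```

As the abstract notes, this damping term can be implemented in the EnKS simply as one additional observation.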
On Complex Supermanifolds with Trivial Canonical Bundle
Groeger, Josua
2016-01-01
We give an algebraic characterisation for the triviality of the canonical bundle of a complex supermanifold in terms of a certain Batalin-Vilkovisky superalgebra structure. As an application, we study the Calabi-Yau case, in which an explicit formula in terms of the Levi-Civita connection is achieved. Our methods include the use of complex integral forms and the recently developed theory of superholonomy.
Canonical approach to 2D induced gravity
Popovic, D
2001-01-01
Using the canonical method, the Liouville theory is obtained as the gravitational Wess-Zumino action of the Polyakov string. From this approach it is clear that the form of the Liouville action is the consequence of the bosonic representation of the Virasoro algebra, and that the coefficient in front of the action is proportional to the central charge and measures the quantum breaking of the classical symmetry.
A 4D-Ensemble-Variational System for Data Assimilation and Ensemble Initialization
Bowler, Neill; Clayton, Adam; Jardak, Mohamed; Lee, Eunjoo; Jermey, Peter; Lorenc, Andrew; Piccolo, Chiara; Pring, Stephen; Wlasak, Marek; Barker, Dale; Inverarity, Gordon; Swinbank, Richard
2016-04-01
The Met Office has been developing a four-dimensional ensemble variational (4DEnVar) data assimilation system over the past four years. The 4DEnVar system is intended both as a data assimilation system in its own right and as an improved means of initializing the Met Office Global and Regional Ensemble Prediction System (MOGREPS). The global MOGREPS ensemble has been initialized by running an ensemble of 4DEnVars (En-4DEnVar). The scalability and maintainability of ensemble data assimilation methods make them increasingly attractive, and 4DEnVar may be adopted in the context of the Met Office's LFRic project to redevelop the technical infrastructure to enable its Unified Model (MetUM) to be run efficiently on massively parallel supercomputers. This presentation will report on the results of the 4DEnVar development project, including experiments that have been run using ensemble sizes of up to 200 members.
Layered Ensemble Architecture for Time Series Forecasting.
Rahman, Md Mustafizur; Islam, Md Monirul; Murase, Kazuyuki; Yao, Xin
2016-01-01
Time series forecasting (TSF) has been widely used in many application areas such as science, engineering, and finance. The phenomena generating time series are usually unknown and the information available for forecasting is limited to the past values of the series. It is, therefore, necessary to use an appropriate number of past values, termed the lag, for forecasting. This paper proposes a layered ensemble architecture (LEA) for TSF problems. Our LEA consists of two layers, each of which uses an ensemble of multilayer perceptron (MLP) networks. While the first ensemble layer tries to find an appropriate lag, the second ensemble layer employs the obtained lag for forecasting. Unlike most previous work on TSF, the proposed architecture considers both the accuracy and diversity of the individual networks in constructing an ensemble. LEA trains different networks in the ensemble by using different training sets, with the aim of maintaining diversity among the networks. However, it uses the appropriate lag and combines the best trained networks to construct the ensemble. This reflects LEA's emphasis on the accuracy of the networks. The proposed architecture has been tested extensively on time series data from the NN3 and NN5 neural network forecasting competitions. It has also been tested on several standard benchmark time series datasets. In terms of forecasting accuracy, our experimental results clearly show that LEA is better than other ensemble and non-ensemble methods.
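A much-simplified sketch of the two-layer idea, with least-squares autoregressive models standing in for the MLP ensembles (all names, the candidate-lag search and the diversity mechanism here are illustrative simplifications of LEA, not the published algorithm):

```python
import numpy as np

def fit_ar(series, lag):
    """Least-squares AR(lag) model: a simple stand-in for one MLP member."""
    X = np.array([series[i:i + lag] for i in range(len(series) - lag)])
    y = series[lag:]
    coef, *_ = np.linalg.lstsq(np.c_[X, np.ones(len(X))], y, rcond=None)
    return coef

def predict_ar(coef, window):
    return float(np.dot(coef[:-1], window) + coef[-1])

def layered_forecast(series, candidate_lags):
    """Layer 1: pick the lag whose model has the lowest in-sample error.
    Layer 2: average an ensemble of models trained with that lag on
    different subsets of the data (a crude diversity mechanism)."""
    series = np.asarray(series, float)
    best_lag, best_err = None, np.inf
    for lag in candidate_lags:
        coef = fit_ar(series, lag)
        preds = [predict_ar(coef, series[i:i + lag])
                 for i in range(len(series) - lag)]
        err = np.mean((np.array(preds) - series[lag:]) ** 2)
        if err < best_err:
            best_lag, best_err = lag, err
    members = [fit_ar(series[s:], best_lag) for s in range(3)]
    window = series[-best_lag:]
    return np.mean([predict_ar(c, window) for c in members])
```

On a linear ramp every AR member fits exactly, so the layered forecast reproduces the next value of the ramp.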
Observables in classical canonical gravity: folklore demystified
Pons, J M; Sundermeyer, K A
2010-01-01
We give an overview of some conceptual difficulties, sometimes called paradoxes, that have for years puzzled the physical interpretation of classical canonical gravity and, by extension, the canonical formulation of generally covariant theories. We identify these difficulties as stemming from some terminological misunderstandings as to what is meant by "gauge invariance", or what is understood classically by a "physical state". We make a thorough analysis of the issue and show that all purported paradoxes disappear when the right terminology is in place. Since this issue is connected with the search for observables - gauge invariant quantities - for these theories, we formally show that time-evolving observables can be constructed for every observer. This construction relies on fixing the gauge freedom of diffeomorphism invariance by means of a scalar coordinatization. We stress the condition that the coordinatization must be made with scalars. As an example of our method for obtaining observables we d...
Saito, Kazuo; Hara, Masahiro; Kunii, Masaru; Seko, Hiromu; Yamaguchi, Munehiko
2011-05-01
Different initial perturbation methods for the mesoscale ensemble prediction were compared by the Meteorological Research Institute (MRI) as a part of the intercomparison of mesoscale ensemble prediction systems (EPSs) of the World Weather Research Programme (WWRP) Beijing 2008 Olympics Research and Development Project (B08RDP). Five initial perturbation methods for mesoscale ensemble prediction were developed for B08RDP and compared at MRI: (1) a downscaling method of the Japan Meteorological Agency (JMA)'s operational one-week EPS (WEP), (2) a targeted global model singular vector (GSV) method, (3) a mesoscale model singular vector (MSV) method based on the adjoint model of the JMA non-hydrostatic model (NHM), (4) a mesoscale breeding growing mode (MBD) method based on the NHM forecast and (5) a local ensemble transform (LET) method based on the local ensemble transform Kalman filter (LETKF) using NHM. These perturbation methods were applied to the preliminary experiments of the B08RDP Tier-1 mesoscale ensemble prediction with a horizontal resolution of 15 km. To make the comparison easier, the same horizontal resolution (40 km) was employed for the three mesoscale model-based initial perturbation methods (MSV, MBD and LET). The GSV method completely outperformed the WEP method, confirming the advantage of targeting in mesoscale EPS. The GSV method generally performed well with regard to root mean square errors of the ensemble mean, large growth rates of ensemble spreads throughout the 36-h forecast period, and high detection rates and high Brier skill scores (BSSs) for weak rains. On the other hand, the mesoscale model-based initial perturbation methods showed good detection rates and BSSs for intense rains. The MSV method showed a rapid growth in the ensemble spread of precipitation up to a forecast time of 6 h, which suggests suitability of the mesoscale SV for short-range EPSs, but the initial large growth of the perturbation did not last long. The
Thampi, Smitha V.; Bagiya, Mala S.; Chakrabarty, D.; Acharya, Y. B.; Yamamoto, M.
2014-12-01
A GNU Radio Beacon Receiver (GRBR) system for total electron content (TEC) measurements using 150 and 400 MHz transmissions from Low-Earth Orbiting Satellites (LEOS) was fabricated in-house and has been operational at Ahmedabad (23.04°N, 72.54°E geographic, dip latitude 17°N) since May 2013. This system receives the 150 and 400 MHz transmissions from high-inclination LEOS. The first few days of observations are presented in this work to bring out the efficacy of an ensemble average method to convert the relative TECs to absolute TECs. This method is a modified version of the differential Doppler-based method proposed by de Mendonca (1962) and is suitable even for ionospheric regions with large spatial gradients. Comparison with TECs derived from a collocated GPS receiver shows that the absolute TECs estimated by this method are reliable over regions with a large spatial gradient. This method is useful even when only one receiving station is available. The differences between these observations are discussed to bring out the importance of the spatial differences between the ionospheric pierce points of these satellites. A few examples of the latitudinal variation of TEC during different local times using GRBR measurements are also presented, which demonstrates the potential of radio beacon measurements in capturing the large-scale plasma transport processes in the low-latitude ionosphere.
Periodicity, the Canon and Sport
Directory of Open Access Journals (Sweden)
Thomas F. Scanlon
2015-10-01
The topic according to this title is admittedly a broad one, embracing two very general concepts of time and of the cultural valuation of artistic products. Both phenomena are, in the present view, largely constructed by their contemporary cultures, and given authority to a great extent from the prestige of the past. The antiquity of tradition brings with it a certain cachet. Even though there may be peripheral debates in any given society which question the specifics of periodization or canonicity, individuals generally accept the consensus designation of a sequence of historical periods and they accept a list of highly valued artistic works as canonical or authoritative. We will first examine some of the processes of periodization and of canon-formation, after which we will discuss some specific examples of how these processes have worked in the sport of two ancient cultures, namely Greece and Mesoamerica.
Spin Foams and Canonical Quantization
Alexandrov, Sergei; Noui, Karim
2011-01-01
This review is devoted to the analysis of the mutual consistency of the spin foam and canonical loop quantizations in three and four spacetime dimensions. In the three-dimensional context, where the two approaches are in good agreement, we show how the canonical quantization à la Witten of Riemannian gravity with a positive cosmological constant is related to the Turaev-Viro spin foam model, and how the Ponzano-Regge amplitudes are related to the physical scalar product of Riemannian loop quantum gravity without cosmological constant. In the four-dimensional case, we recall a Lorentz-covariant formulation of loop quantum gravity using projected spin networks, compare it with the new spin foam models, and identify interesting relations and their pitfalls. Finally, we discuss the properties which a spin foam model is expected to possess in order to be consistent with the canonical quantization, and suggest a new model illustrating these results.
Pool, René; Heringa, Jaap; Hoefling, Martin; Schulz, Roland; Smith, Jeremy C; Feenstra, K Anton
2012-05-01
We report on a Python interface to the GROMACS molecular simulation package, GromPy (available at https://github.com/GromPy). This application programming interface (API) uses the ctypes Python module, which allows function calls to shared libraries written, for example, in C. To the best of our knowledge, this is the first reported interface to the GROMACS library that uses direct library calls. GromPy can be used for extending the current GROMACS simulation and analysis modes. In this work, we demonstrate that the interface enables hybrid Monte Carlo/molecular dynamics (MD) simulations in the grand-canonical ensemble, a simulation mode that is currently not implemented in GROMACS. For this application, the interplay between GromPy and GROMACS requires only minor modifications of the GROMACS source code, without affecting the operation, efficiency, and performance of the GROMACS applications. We validate the grand-canonical application against MD in the canonical ensemble by comparing equations of state. The results of the grand-canonical simulations are in complete agreement with MD in the canonical ensemble. The Python overhead of the grand-canonical scheme is minimal.
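The ctypes mechanism that GromPy relies on can be illustrated with the standard C math library as a stand-in for libgromacs (this is a generic sketch, not the GromPy/GROMACS API):

```python
import ctypes
import ctypes.util

# Load a shared C library via a direct library call; libm stands in for
# libgromacs here (illustrative only, not the GromPy API).
libm = ctypes.CDLL(ctypes.util.find_library("m"))

# ctypes needs the C signature declared before calling: double sqrt(double)
libm.sqrt.restype = ctypes.c_double
libm.sqrt.argtypes = [ctypes.c_double]

root = libm.sqrt(16.0)  # calls the compiled C function directly
```

Declaring `restype`/`argtypes` is essential: without them ctypes assumes `int` arguments and return values, which silently corrupts floating-point calls.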
Towards a GME ensemble forecasting system: Ensemble initialization using the breeding technique
Directory of Open Access Journals (Sweden)
Jan D. Keller
2008-12-01
The quantitative forecast of precipitation requires a probabilistic background, particularly with regard to forecast lead times of more than 3 days. As only ensemble simulations can provide useful information on the underlying probability density function, we built a new ensemble forecasting system (GME-EFS) based on the GME model of the German Meteorological Service (DWD). For the generation of appropriate initial ensemble perturbations we chose the breeding technique developed by Toth and Kalnay (1993, 1997), which develops perturbations by estimating the regions of largest model-error-induced uncertainty. This method is applied and tested in the framework of quasi-operational forecasts for a three month period in 2007. The performance of the resulting ensemble forecasts is compared to the operational ensemble prediction systems ECMWF EPS and NCEP GFS by means of ensemble spread of free-atmosphere parameters (geopotential and temperature) and ensemble skill of precipitation forecasting. This comparison indicates that the GME ensemble forecasting system (GME-EFS) provides reasonable forecasts with a spread skill score comparable to that of the NCEP GFS. An analysis with the continuous ranked probability score exhibits a lack of resolution for the GME forecasts compared to the operational ensembles. However, with significant enhancements during the 3 month test period, the first results of our work with the GME-EFS indicate possibilities for further development as well as the potential for later operational usage.
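The breeding cycle of Toth and Kalnay can be sketched on a toy chaotic model (Lorenz-63 with forward-Euler stepping; the model, step sizes, and amplitude are illustrative, not the GME-EFS configuration):

```python
import numpy as np

def model_step(x, dt=0.01, steps=100):
    """Toy nonlinear model (Lorenz-63), integrated with forward Euler."""
    x = x.copy()
    for _ in range(steps):
        dx = np.array([10.0 * (x[1] - x[0]),
                       x[0] * (28.0 - x[2]) - x[1],
                       x[0] * x[1] - 8.0 / 3.0 * x[2]])
        x = x + dt * dx
    return x

def breed(x0, amplitude=0.1, cycles=20, rng=np.random.default_rng(0)):
    """Breeding cycle: let the model grow a perturbation, then rescale it
    back to a fixed amplitude each cycle. The perturbation converges to a
    fast-growing error direction (a bred vector)."""
    control = x0
    pert = rng.normal(size=3)
    pert *= amplitude / np.linalg.norm(pert)
    for _ in range(cycles):
        grown = model_step(control + pert) - model_step(control)
        pert = amplitude * grown / np.linalg.norm(grown)  # rescale bred vector
        control = model_step(control)
    return pert

bv = breed(np.array([1.0, 1.0, 20.0]))
```

The returned bred vector is what would be added to (and subtracted from) the analysis to initialize ensemble members.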
Canonical and Irish Gothic Features in Melmoth the Wanderer
González Rodríguez, Julia
2016-01-01
In the eighteenth century, a Gothic literary canon emerged. This B.A. Thesis aims to show that there is not a unique type of Gothic literary tradition. To illustrate this, a variant of the canonical Gothic, namely the Irish Gothic, is presented, with an Irish novel, Melmoth the Wanderer (1820) by Charles Robert Maturin, as an illustration of its main traits. Following an analytic method, the distinctive features of each Gothic tradition are explained separately. Then, an analysis of the major...
Application of the Clustering Method in Molecular Dynamics Simulation of the Diffusion Coefficient
Institute of Scientific and Technical Information of China (English)
(No author listed)
2008-01-01
Using molecular dynamics (MD) simulation, the diffusion of oxygen, methane, ammonia and carbon dioxide in water was simulated in the canonical (NVT) ensemble, and the diffusion coefficient was analyzed by the clustering method. Compared with the conventional method (using the Einstein model) and the differentiation-interval variation method, the results obtained by the clustering method used in this study are closer to the experimental values. This method proved to be more reasonable than the other two methods.
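The conventional Einstein-relation route that the clustering method is compared against can be sketched in a few lines (a toy 3-D Brownian walk with a known diffusion coefficient, not the paper's MD data or clustering method):

```python
import numpy as np

rng = np.random.default_rng(1)
dt, n_steps, D_true = 1.0, 20000, 0.5

# Synthetic 3-D Brownian trajectory: per-dimension step variance 2*D*dt.
steps = rng.normal(scale=np.sqrt(2 * D_true * dt), size=(n_steps, 3))
traj = np.cumsum(steps, axis=0)

# Einstein relation in 3-D: MSD(t) = 6*D*t, so D is the MSD slope / 6.
lags = np.arange(1, 200)
msd = np.array([np.mean(np.sum((traj[lag:] - traj[:-lag]) ** 2, axis=1))
                for lag in lags])
D_est = np.polyfit(lags * dt, msd, 1)[0] / 6.0
```

With this synthetic walk, `D_est` should recover a value close to the known `D_true = 0.5`; the fit window (here lags 1-199) is the kind of choice the differentiation-interval variation method addresses.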
2015-01-01
Model performance of the partial least squares (PLS) method alone and of bagging-PLS was investigated in online near-infrared (NIR) sensor monitoring of a pilot-scale extraction process of Fructus aurantii. High-performance liquid chromatography (HPLC) was used as a reference method to identify the active pharmaceutical ingredients: naringin, hesperidin and neohesperidin. Several preprocessing methods and synergy interval partial least squares (SiPLS) and moving window partial least squares (MWP...
Support Vector Machine Ensemble Based on Genetic Algorithm
Institute of Scientific and Technical Information of China (English)
LI Ye; YIN Ru-po; CAI Yun-ze; XU Xiao-ming
2006-01-01
Support vector machines (SVMs) have been introduced as effective methods for solving classification problems. However, due to some limitations in practical applications, their generalization performance is sometimes far from the expected level. Therefore, it is meaningful to study SVM ensemble learning. In this paper, a novel genetic algorithm based ensemble learning method, namely Direct Genetic Ensemble (DGE), is proposed. DGE adopts the predictive accuracy of the ensemble as the fitness function and searches for a good ensemble in the ensemble space. In essence, DGE is also a selective ensemble learning method because the base classifiers of the ensemble are selected according to the solution of the genetic algorithm. In comparison with other ensemble learning methods, DGE works on a higher level and is more direct. Different strategies for constructing diverse base classifiers can be utilized in DGE. Experimental results show that SVM ensembles constructed by DGE can achieve better performance than single SVMs, bagged and boosted SVM ensembles. In addition, some valuable conclusions are obtained.
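The core DGE idea, a bit-string genome encoding which base classifiers join the ensemble, with ensemble accuracy as the GA fitness, can be sketched with random linear stumps standing in for trained SVMs (population size, generations, and mutation rate are all illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy data and a pool of weak "base classifiers" (random linear stumps
# standing in for SVMs; illustrative only).
X = rng.normal(size=(200, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
pool = [rng.normal(size=5) for _ in range(15)]
preds = np.array([(X @ w > 0).astype(int) for w in pool])  # (15, 200)

def fitness(mask):
    """Fitness = majority-vote accuracy of the selected subset."""
    if mask.sum() == 0:
        return 0.0
    vote = (preds[mask.astype(bool)].mean(axis=0) > 0.5).astype(int)
    return (vote == y).mean()

# Simple GA over bit strings: selection, one-point crossover, mutation.
popsize, gens = 30, 40
population = rng.integers(0, 2, size=(popsize, len(pool)))
for _ in range(gens):
    scores = np.array([fitness(ind) for ind in population])
    parents = population[np.argsort(scores)[-popsize // 2:]]   # selection
    cut = rng.integers(1, len(pool), size=popsize // 2)
    children = np.array([np.r_[a[:c], b[c:]] for a, b, c in
                         zip(parents, parents[::-1], cut)])    # crossover
    flip = rng.random(children.shape) < 0.05                   # mutation
    children = np.where(flip, 1 - children, children)
    population = np.vstack([parents, children])

best = population[np.argmax([fitness(ind) for ind in population])]
```

Because the kept parents are carried over unchanged, the best fitness in the population is non-decreasing across generations, mirroring DGE's direct search in the ensemble space.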
Application of canonical coordinates for solving single-freedom constraint mechanical systems
Institute of Scientific and Technical Information of China (English)
高芳; 张晓波; 傅景礼
2014-01-01
This paper introduces the canonical coordinates method for obtaining the first integral of a single-degree-of-freedom constrained mechanical system, covering both conservative and non-conservative holonomic systems. The definition and properties of canonical coordinates are introduced. The relation between Lie point symmetries and the canonical coordinates of the constrained mechanical system is established, and from this relation the canonical coordinates can be obtained. The properties of the canonical coordinates and Lie symmetry theory are used to seek first integrals of the constrained mechanical system. Three examples are given to show applications of the results.
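As a standard textbook illustration of such canonical coordinates (not one of the paper's three examples): for a Lie point symmetry generator X, canonical coordinates (r, s) are chosen so that X r = 0 and X s = 1, which rectifies the symmetry to a pure translation in s. For the rotation generator, for instance:

```latex
% Canonical coordinates (r,s) for the rotation generator X:
% they must satisfy X r = 0 and X s = 1, so that X = \partial_s.
X = -y\,\frac{\partial}{\partial x} + x\,\frac{\partial}{\partial y},
\qquad
r = \sqrt{x^2 + y^2},
\qquad
s = \arctan\frac{y}{x}.
```

One checks directly that X r = (-y)(x/r) + x(y/r) = 0 and X s = (y^2 + x^2)/(x^2 + y^2) = 1; in these coordinates the symmetry acts as s -> s + epsilon, and first integrals are sought among functions of the invariant r.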
Transition from Poisson to circular unitary ensemble
Indian Academy of Sciences (India)
Vinayak; Akhilesh Pandey
2009-09-01
Transitions to universality classes of random matrix ensembles have been useful in the study of weakly-broken symmetries in quantum chaotic systems. Transitions involving Poisson as the initial ensemble have been particularly interesting. The exact two-point correlation function was derived by one of the present authors for the Poisson to circular unitary ensemble (CUE) transition with uniform initial density. This is given in terms of a rescaled symmetry breaking parameter Λ. The same result was obtained for the Poisson to Gaussian unitary ensemble (GUE) transition by Kunz and Shapiro, using the contour-integral method of Brezin and Hikami. We show that their method is applicable to the Poisson to CUE transition with arbitrary initial density. Their method is also applicable to the more general ℓ-CUE to CUE transition, where ℓ-CUE refers to the superposition of ℓ independent CUE spectra in arbitrary ratio.
Work producing reservoirs: Stochastic thermodynamics with generalized Gibbs ensembles
Horowitz, Jordan M.; Esposito, Massimiliano
2016-08-01
We develop a consistent stochastic thermodynamics for environments composed of thermodynamic reservoirs in an external conservative force field, that is, environments described by the generalized or Gibbs canonical ensemble. We demonstrate that small systems weakly coupled to such reservoirs exchange both heat and work by verifying a local detailed balance relation for the induced stochastic dynamics. Based on this analysis, we help to rationalize the observation that nonthermal reservoirs can increase the efficiency of thermodynamic heat engines.
Romanticism, Sexuality, and the Canon.
Rowe, Kathleen K.
1990-01-01
Traces the Romanticism in the work and persona of film director Jean-Luc Godard. Examines the contradictions posed by Godard's politics and representations of sexuality. Asserts that, by bringing an ironic distance to the works of such canonized directors, viewers can take pleasure in those works despite their contradictions. (MM)
Spectral diagonal ensemble Kalman filters
Kasanický, Ivan; Vejmelka, Martin
2015-01-01
A new type of ensemble Kalman filter is developed, which is based on replacing the sample covariance in the analysis step by its diagonal in a spectral basis. It is proved that this technique improves the approximation of the covariance when the covariance itself is diagonal in the spectral basis, as is the case, e.g., for a second-order stationary random field and the Fourier basis. The method is extended by wavelets to the case when the state variables are random fields which are not spatially homogeneous. Efficient implementations by the fast Fourier transform (FFT) and discrete wavelet transform (DWT) are presented for several types of observations, including high-dimensional data given on a part of the domain, such as radar and satellite images. Computational experiments confirm that the method performs well on the Lorenz 96 problem and the shallow water equations with very small ensembles and over multiple analysis cycles.
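The core spectral-diagonal step can be sketched on a toy 1-D periodic domain with direct observations of the full state (a minimal sketch under stationarity assumptions, not the paper's algorithm or test problems):

```python
import numpy as np

rng = np.random.default_rng(3)
n, n_ens, obs_err = 64, 10, 0.2

x = np.linspace(0, 2 * np.pi, n, endpoint=False)
truth = np.sin(3 * x)

# Ensemble: truth plus spatially smooth (second-order stationary) noise.
kernel = np.exp(-0.5 * (np.arange(n) - n // 2) ** 2 / 4.0)
noise = np.array([np.convolve(rng.normal(size=n), kernel, "same")
                  for _ in range(n_ens)])
ens = truth + noise / noise.std()

# Direct, noisy observations of the full state.
obs = truth + obs_err * rng.normal(size=n)

# Spectral-diagonal analysis: keep only the DIAGONAL of the sample
# covariance in the Fourier basis (exact for stationary fields).
E = np.fft.fft(ens, axis=1)
var = E.var(axis=0)                    # per-mode sample variance (diagonal)
r = obs_err ** 2 * n                   # obs-error variance per Fourier mode
gain = var / (var + r)                 # scalar Kalman gain for each mode
Ea = E + gain * (np.fft.fft(obs) - E)  # pull each member toward the obs
ens_a = np.real(np.fft.ifft(Ea, axis=1))

err_b = np.abs(ens.mean(0) - truth).mean()   # background mean error
err_a = np.abs(ens_a.mean(0) - truth).mean() # analysis mean error
```

Because the per-mode gain is a scalar, the analysis costs O(n log n) via the FFT instead of the O(n^3) of a full-covariance update, which is the point of the diagonal spectral approximation.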
Extracting Value from Ensembles for Cloud-Free Forecasting
2011-09-01
... for Medium range Weather Forecasting; EMean, ensemble mean; ETR, ensemble transform with rescaling; EUMETSAT, European Organization for the ... transform method (ET) with rescaling (ETR) to define the initial atmospheric uncertainty (Wei et al. 2008). Adapted from the ET method devised by ... variances of each grid point to further restrain the initial ensemble spread. The ETR method replaced the breeding method in GEFS during NCEP's May ...
DEFF Research Database (Denmark)
Ben Bouallègue, Zied; Heppelmann, Tobias; Theis, Susanne E.
2016-01-01
Probabilistic forecasts in the form of an ensemble of scenarios are required for complex decision making processes. Ensemble forecasting systems provide such products, but the spatio-temporal structure of the forecast uncertainty is lost when statistical calibration of the ensemble forecasts...... is applied for each lead time and location independently. Non-parametric approaches allow the reconstruction of spatio-temporal joint probability distributions at a low computational cost. For example, the ensemble copula coupling (ECC) method rebuilds the multivariate aspect of the forecast from...... the original ensemble forecasts. Based on the assumption of error stationarity, parametric methods aim to fully describe the forecast dependence structures. In this study, the concept of ECC is combined with past data statistics in order to account for the autocorrelation of the forecast error. The new...
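The ECC step described above, giving calibrated marginal samples the rank order of the raw ensemble so the multivariate dependence is restored, can be sketched as follows (toy data; the "calibration" shown is a hypothetical recenter-and-rescale stand-in for any univariate method):

```python
import numpy as np

rng = np.random.default_rng(4)
n_ens, n_lead = 10, 6

# Raw ensemble forecasts: members x lead times (carry temporal dependence).
raw = np.cumsum(rng.normal(size=(n_ens, n_lead)), axis=1)

# Calibrated marginal samples per lead time (hypothetical calibration:
# recentred, rescaled draws; sorted so each column lists its quantiles).
calibrated = np.sort(2.0 + 0.8 * rng.normal(size=(n_ens, n_lead)), axis=0)

# ECC: at each lead time, reorder the calibrated samples to match the
# rank order of the raw ensemble -> dependence structure is restored.
ranks = raw.argsort(axis=0).argsort(axis=0)
ecc = np.take_along_axis(calibrated, ranks, axis=0)
```

Each lead time keeps exactly the calibrated marginal values, only reordered, which is why ECC is essentially free computationally.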
DEFF Research Database (Denmark)
Ben Bouallègue, Zied; Heppelmann, Tobias; Theis, Susanne E.
2015-01-01
Probabilistic forecasts in the form of an ensemble of scenarios are required for complex decision making processes. Ensemble forecasting systems provide such products, but the spatio-temporal structure of the forecast uncertainty is lost when statistical calibration of the ensemble forecasts...... is applied for each lead time and location independently. Non-parametric approaches allow the reconstruction of spatio-temporal joint probability distributions at a low computational cost. For example, the ensemble copula coupling (ECC) method consists in rebuilding the multivariate aspect of the forecast...... from the original ensemble forecasts. Based on the assumption of error stationarity, parametric methods aim to fully describe the forecast dependence structures. In this study, the concept of ECC is combined with past data statistics in order to account for the autocorrelation of the forecast error...
Neighbor k-convex-hull ensemble method based on metric learning
Institute of Scientific and Technical Information of China (English)
牟廉明
2013-01-01
The k-local convex distance nearest neighbor classifier (CKNN) corrects the decision boundary of kNN when the amount of training data is small, thus improving the performance of kNN. The k sub-convex-hull classifier (kCH) weakens the sensitivity of CKNN to the number of classes and to ring-structured sample distributions, and hence improves classification performance. However, this method is still sensitive to the distance metric. Moreover, the numbers of samples of different classes among the k nearest neighbors of a test instance are often seriously imbalanced, which degrades classification performance. In this paper, a neighbor k-convex-hull classifier (NCH) is proposed to address these problems. The robustness of the neighbor k-convex-hull classifier with respect to the sample-space metric is improved by metric learning and ensemble learning techniques. Extensive experiments show that the proposed neighbor k-convex-hull ensemble method based on metric learning is significantly superior to several state-of-the-art nearest neighbor classifiers.
Institute of Scientific and Technical Information of China (English)
杨娜; 秦志远; 张俊
2013-01-01
Support-vector-machine-based Infinite Ensemble Learning (SVM-based IEL) is a recently developed ensemble learning method in the field of machine learning. In this paper, SVM-based IEL is applied to the classification of remotely sensed imagery, alongside SVM itself and the classic ensemble learning methods Bagging and AdaBoost, with SVM taken as the base classifier in Bagging and AdaBoost. The experiments show that Bagging can improve the classification accuracy of remotely sensed imagery, whereas AdaBoost decreases it. Furthermore, compared with SVM and the finite ensemble learning methods, SVM-based IEL significantly improves both classification accuracy and classification efficiency.
Iba, Yukito
2000-01-01
"Extended Ensemble Monte Carlo" is a generic term that indicates a set of algorithms which are now popular in a variety of fields in physics and statistical information processing. Exchange Monte Carlo (Metropolis-Coupled Chain, Parallel Tempering), Simulated Tempering (Expanded Ensemble Monte Carlo), and Multicanonical Monte Carlo (Adaptive Umbrella Sampling) are typical members of this family. Here we give a cross-disciplinary survey of these algorithms with special emphasis on the great f...
DEFF Research Database (Denmark)
Christensen, Eva Arnspang; Schwartzentruber, J.; Clausen, M. P.;
2013-01-01
The lateral dynamics of proteins and lipids in the mammalian plasma membrane are heterogeneous, likely reflecting both a complex molecular organization and interactions with other macromolecules that reside outside the plane of the membrane. Several methods are commonly used for characterizing...... the lateral dynamics of lipids and proteins. These experimental and data analysis methods differ in equipment requirements and labeling complexities, and oftentimes give different results. It would therefore be very convenient to have a single method that is flexible in the choice of fluorescent label...... for analyzing lateral dynamics in samples that are labeled at high densities, and can also be used for fast and accurate analysis of single molecule density data of lipids and proteins labeled with quantum dots (QDs). We have further used kICS to investigate the effect of the label size and by comparing the results...
Cluster Ensemble-based Image Segmentation
Xiaoru Wang; Junping Du; Shuzhe Wu; Xu Li; Fu Li
2013-01-01
Image segmentation is the foundation of computer vision applications. In this paper, we propose a new cluster ensemble-based image segmentation algorithm, which overcomes several problems of traditional methods. We make two main contributions in this paper. First, we introduce the cluster ensemble concept to fuse the segmentation results from different types of visual features effectively, which can deliver a better final result and achieve a much more stable performance for broad categories ...
Statistical Analysis of Protein Ensembles
Máté, Gabriell; Heermann, Dieter
2014-04-01
As 3D protein-configuration data piles up, there is an ever-increasing need for well-defined, mathematically rigorous analysis approaches, especially as the vast majority of currently available methods rely heavily on heuristics. We propose an analysis framework which stems from topology, the field of mathematics which studies properties preserved under continuous deformations. First, we calculate a barcode representation of the molecules employing computational topology algorithms. Bars in this barcode represent different topological features. Molecules are compared through their barcodes by statistically determining the difference in the set of their topological features. As a proof-of-principle application, we analyze a dataset compiled of ensembles of different proteins, obtained from the Ensemble Protein Database. We demonstrate that our approach correctly detects the different protein groupings.
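A minimal sketch of the barcode idea, restricted to 0-dimensional features (connected components, computable with a union-find over minimum-spanning-tree edges; real analyses use computational topology packages and higher-dimensional features):

```python
import numpy as np

def zero_dim_barcode(points):
    """0-dimensional persistence: components merge at MST edge lengths
    (single linkage). Returns the sorted death times of the bars."""
    n = len(points)
    d = np.linalg.norm(points[:, None] - points[None, :], axis=-1)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    edges = sorted((d[i, j], i, j) for i in range(n) for j in range(i + 1, n))
    deaths = []
    for w, i, j in edges:               # Kruskal: process edges by length
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            deaths.append(w)            # a bar dies when components merge
    return np.array(deaths)

rng = np.random.default_rng(5)
# Two well-separated point clusters -> one long bar (inter-cluster merge).
pts = np.vstack([rng.normal(0, 0.1, (10, 3)), rng.normal(5, 0.1, (10, 3))])
bars = zero_dim_barcode(pts)
```

Comparing two molecules would then amount to comparing their bar multisets, e.g. with a bottleneck or Wasserstein distance, which is where the statistical step of the paper comes in.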
The Transport of Relative Canonical Helicity
You, Setthivoine
2012-01-01
The evolution of relative canonical helicity is examined in the two-fluid magnetohydrodynamic formalism. Canonical helicity is defined here as the helicity of the plasma species' canonical momentum. The species' canonical helicities are coupled together and can be converted from one into the other, while the total gauge-invariant relative canonical helicity remains globally invariant. The conversion is driven by enthalpy differences at a surface common to the ion and electron canonical flux tubes. The model provides an explanation for why the threshold for bifurcation in counter-helicity merging depends on the size parameter. The size parameter determines whether magnetic helicity annihilation channels enthalpy into the magnetic flux tube or into the vorticity flow tube components of the canonical flux tube. The transport of relative canonical helicity constrains the interaction between plasma flows and magnetic fields, and provides a more general framework for driving flows and currents from enthalpy or inductive b...
Miller, W. P.; Lamb, K. W.; Piechota, T. C.; Lakshmi, V.; Santos, N. I.; Tootle, G. A.; Kalra, A.; Fayne, J.
2015-12-01
Water resource managers throughout the Western United States have struggled with persistent and severe drought since the early 2000s. In the Colorado River Basin, the National Oceanic and Atmospheric Administration's (NOAA's) Colorado Basin River Forecast Center (CBRFC) provides forecasts of water supply conditions to resource managers throughout the basin using Ensemble Streamflow Prediction (ESP) methods that are largely driven by historical observations of temperature and precipitation. Currently, the CBRFC does not have a way to incorporate information from climatic teleconnections such as the El Niño Southern Oscillation (ENSO). ENSO describes warming sea surface temperatures in the Pacific Ocean that typically correlate with cool and wet winter precipitation events in California and the Lower Colorado River Basin during an El Niño event. Past research indicates the potential to identify analog ENSO events to evaluate the impact on reservoir storage in the Colorado River Basin. Current forecasts indicate the potential for one of the strongest El Niño events on record this winter. In this study, information regarding the upcoming ENSO event is used to inform water supply forecasts over the Upper Colorado River Basin. These forecasts are then compared to traditionally derived water supply forecasts in an attempt to evaluate the possible impact of the El Niño event on water supply over the Colorado River Basin.
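The conditioning idea, restricting an ESP-style ensemble to ENSO-analog years, can be sketched on hypothetical data (the index values, the linear runoff relationship, and the analog count k are all illustrative, not CBRFC data or methodology):

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical historical record: a winter ENSO index and seasonal runoff
# that tends to be higher in El Nino (positive-index) years.
years = np.arange(1980, 2015)
enso = rng.normal(size=years.size)          # e.g., a Nino-3.4-like anomaly
runoff = 10.0 + 2.0 * enso + rng.normal(scale=1.0, size=years.size)

def conditioned_ensemble(current_enso, k=10):
    """ESP-style ensemble restricted to the k historical years whose ENSO
    state most resembles the current one (analog-year conditioning)."""
    analog = np.argsort(np.abs(enso - current_enso))[:k]
    return runoff[analog]

full = runoff                           # traditional ESP: all years equally
el_nino = conditioned_ensemble(2.0)     # strong El Nino conditioning
```

The conditioned ensemble shifts toward the analog years' runoff, which is the kind of difference the study evaluates against traditionally derived forecasts.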
A selective ensemble method for traveling salesman problems
Institute of Scientific and Technical Information of China (English)
王立宏; 李强
2016-01-01
To address the difficulty of finding optimal solutions of very large TSP (traveling salesman problem) instances, a selective ensemble method is proposed. First, an expanding-path method is used to selectively integrate several high-quality solutions, producing a number of maximal paths. A vertex-insertion method then connects these paths and the remaining vertices into a Hamiltonian tour. Finally, the tour is improved by the 2-opt method. Experimental results on 5 TSP instances show that the maximal deviation of the best solutions obtained was 1.69%, demonstrating that the algorithm solves TSP effectively.
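The final 2-opt improvement step is standard and can be sketched as follows (toy random instance; the expanding-path and vertex-insertion stages of the proposed method are not reproduced here):

```python
import math
import random

def tour_length(tour, pts):
    """Total length of a closed tour over 2-D points."""
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def two_opt(tour, pts):
    """2-opt: repeatedly reverse a segment whenever doing so shortens the
    tour, until no improving move remains (a local optimum)."""
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 1):
            for j in range(i + 1, len(tour)):
                cand = tour[:i] + tour[i:j][::-1] + tour[j:]
                if tour_length(cand, pts) < tour_length(tour, pts) - 1e-12:
                    tour, improved = cand, True
    return tour

random.seed(7)
pts = [(random.random(), random.random()) for _ in range(12)]
start = list(range(12))
best = two_opt(start, pts)
```

Each accepted move strictly decreases the tour length, so the loop terminates; in practice 2-opt is applied exactly as here, as a cheap polish on a tour built by a constructive heuristic.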
Titchmarsh-Weyl theory for canonical systems
Directory of Open Access Journals (Sweden)
Keshav Raj Acharya
2014-11-01
The main purpose of this paper is to develop the Titchmarsh-Weyl theory of canonical systems. To this end, we first observe that Schrödinger and Jacobi equations can be written as canonical systems. We then discuss the theory of the Weyl m-function for canonical systems and establish the relation between the Weyl m-functions of Schrödinger equations and those of the canonical systems which involve Schrödinger equations.
Various multistage ensembles for prediction of heating energy consumption
Directory of Open Access Journals (Sweden)
Radisa Jovanovic
2015-04-01
Feedforward neural network models are created for the prediction of daily heating energy consumption of the NTNU university campus Gloshaugen, using actual measured data for training and testing. An improvement of prediction accuracy is proposed by using a neural network ensemble. Previously trained feedforward neural networks are first separated into clusters using the k-means algorithm, and then the best network of each cluster is chosen as a member of the ensemble. Two conventional averaging methods for obtaining the ensemble output are applied: simple and weighted. In order to achieve better prediction results, a multistage ensemble is investigated. At the second level, an adaptive neuro-fuzzy inference system with various clustering and membership functions is used to aggregate the selected ensemble members. A feedforward neural network in the second stage is also analyzed. It is shown that an ensemble of neural networks can predict heating energy consumption with better accuracy than the best trained single neural network, while the best results are achieved with a multistage ensemble.
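The two conventional aggregation rules, simple and weighted averaging of member outputs, can be sketched as follows (toy member predictions; inverse-MSE weights are one common weighting choice, not necessarily the paper's):

```python
import numpy as np

rng = np.random.default_rng(8)

# Predictions of three hypothetical ensemble members on validation data,
# each corrupted by noise of a different magnitude.
y_true = rng.normal(size=100)
members = np.array([y_true + rng.normal(scale=s, size=100)
                    for s in (0.2, 0.5, 1.0)])

# Simple average: every member counts equally.
simple = members.mean(axis=0)

# Weighted average: weights inversely proportional to validation MSE.
mse = ((members - y_true) ** 2).mean(axis=1)
w = (1 / mse) / (1 / mse).sum()
weighted = w @ members

err_simple = ((simple - y_true) ** 2).mean()
err_weighted = ((weighted - y_true) ** 2).mean()
```

With members of unequal quality and roughly independent errors, inverse-MSE weighting downweights the noisy member and beats the simple average, which motivates the second-stage (ANFIS or neural) aggregators the abstract goes on to describe.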
Modern Canonical Quantum General Relativity
Thiemann, Thomas
2008-11-01
Preface; Notation and conventions; Introduction; Part I. Classical Foundations, Interpretation and the Canonical Quantisation Programme: 1. Classical Hamiltonian formulation of general relativity; 2. The problem of time, locality and the interpretation of quantum mechanics; 3. The programme of canonical quantisation; 4. The new canonical variables of Ashtekar for general relativity; Part II. Foundations of Modern Canonical Quantum General Relativity: 5. Introduction; 6. Step I: the holonomy-flux algebra [P]; 7. Step II: quantum-algebra; 8. Step III: representation theory of [A]; 9. Step IV: 1. Implementation and solution of the kinematical constraints; 10. Step V: 2. Implementation and solution of the Hamiltonian constraint; 11. Step VI: semiclassical analysis; Part III. Physical Applications: 12. Extension to standard matter; 13. Kinematical geometrical operators; 14. Spin foam models; 15. Quantum black hole physics; 16. Applications to particle physics and quantum cosmology; 17. Loop quantum gravity phenomenology; Part IV. Mathematical Tools and their Connection to Physics: 18. Tools from general topology; 19. Differential, Riemannian, symplectic and complex geometry; 20. Semianalytical category; 21. Elements of fibre bundle theory; 22. Holonomies on non-trivial fibre bundles; 23. Geometric quantisation; 24. The Dirac algorithm for field theories with constraints; 25. Tools from measure theory; 26. Elementary introduction to Gel'fand theory for Abelian C* algebras; 27. Bohr compactification of the real line; 28. Operator *-algebras and spectral theorem; 29. Refined algebraic quantisation (RAQ) and direct integral decomposition (DID); 30. Basics of harmonic analysis on compact Lie groups; 31. Spin network functions for SU(2); 32. Functional analytical description of classical connection dynamics; Bibliography; Index.
Kato expansion in quantum canonical perturbation theory
Nikolaev, Andrey
2016-06-01
This work establishes a connection between canonical perturbation series in quantum mechanics and a Kato expansion for the resolvent of the Liouville superoperator. Our approach leads to an explicit expression for a generator of a block-diagonalizing Dyson's ordered exponential in arbitrary perturbation order. Unitary intertwining of perturbed and unperturbed averaging superprojectors allows for a description of ambiguities in the generator and block-diagonalized Hamiltonian. We compare the efficiency of the corresponding computational algorithm with the efficiencies of the Van Vleck and Magnus methods for high perturbative orders.
Canonical formalism for coupled beam optics
Energy Technology Data Exchange (ETDEWEB)
Kheifets, S.A.
1989-09-01
Beam optics of a lattice with inter-plane coupling is treated using the canonical Hamiltonian formalism. The method developed is equally applicable both to a circular (periodic) machine and to an open transport line. A solution of the equation of particle motion (and, correspondingly, the transfer matrix between two arbitrary points of the lattice) is described in terms of two amplitude functions (with their derivatives and corresponding phases of oscillation) and four coupling functions, defined by a solution of the system of first-order nonlinear differential equations derived in the paper. Thus the total number of independent parameters is equal to ten. 8 refs.
Kato expansion in quantum canonical perturbation theory
Nikolaev, A S
2015-01-01
This work establishes a connection between canonical perturbation series in quantum mechanics and a Kato expansion for the resolvent of the Liouville superoperator. Our approach leads to an explicit expression for a generator of a block-diagonalizing Dyson ordered exponential in arbitrary perturbation order. Unitary intertwining of perturbed and unperturbed averaging superprojectors allows for a description of ambiguities in the generator and block-diagonalized Hamiltonian. The corresponding computational algorithm is more efficient for high perturbative orders than the algorithms of Van Vleck and Magnus methods.
Institute of Scientific and Technical Information of China (English)
ZHENG Fei; ZHU Jiang
2010-01-01
The initial ensemble perturbations for an ensemble data assimilation system are expected to reasonably sample model uncertainty at the time of analysis to further reduce analysis uncertainty. Therefore, the careful choice of an initial ensemble perturbation method that dynamically cycles ensemble perturbations is required for the optimal performance of the system. Based on the multivariate empirical orthogonal function (MEOF) method, a new ensemble initialization scheme is developed to generate balanced initial perturbations for ensemble Kalman filter (EnKF) data assimilation, with a reasonable consideration of the physical relationships between different model variables. The scheme is applied in assimilation experiments with a global spectral atmospheric model and with real observations. The proposed perturbation method is compared to the commonly used method of spatially-correlated random perturbations. The comparisons show that the model uncertainties prior to the first analysis time, which are forecasted from the balanced ensemble initial fields, maintain a much more reasonable spread and a more accurate forecast error covariance than those from the randomly perturbed initial fields. The analysis results are further improved by the balanced ensemble initialization scheme due to more accurate background information. Also, a 20-day continuous assimilation experiment shows that the ensemble spreads for each model variable are retained in reasonable ranges without considering additional perturbations or inflation during the assimilation cycles, while the ensemble spreads from the randomly perturbed initialization scheme decrease and collapse rapidly.
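The balance idea, perturbing along multivariate EOFs so that different model variables stay physically correlated, can be sketched on a toy two-variable state (the coupled fields, mode count, and scaling here are all illustrative, not the paper's model):

```python
import numpy as np

rng = np.random.default_rng(9)
n_time, n_grid = 60, 40

# Toy history of two coupled model variables (think u and T) whose
# anomalies share a common signal; stacking them gives the MEOF problem.
signal = rng.normal(size=(n_time, n_grid))
u = signal + 0.1 * rng.normal(size=(n_time, n_grid))
T = 0.5 * signal + 0.1 * rng.normal(size=(n_time, n_grid))
state = np.hstack([u, T])                 # joint (multivariate) state vector
anom = state - state.mean(axis=0)

# MEOFs = right singular vectors of the joint anomaly matrix.
_, s, vt = np.linalg.svd(anom, full_matrices=False)

# Balanced initial perturbations: random combinations of the leading
# MEOFs, so the u- and T-components remain physically correlated.
n_modes, n_ens = 5, 8
coeff = rng.normal(size=(n_ens, n_modes)) * (s[:n_modes] / np.sqrt(n_time))
perts = coeff @ vt[:n_modes]
u_pert, T_pert = perts[:, :n_grid], perts[:, n_grid:]
```

Independently drawn random perturbations would give uncorrelated `u_pert` and `T_pert`; sampling along the joint MEOFs preserves the cross-variable relationship, which is the "balance" the scheme is after.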
MBARI CANON Experiment Visualization and Analysis
Fatland, R.; Oscar, N.; Ryan, J. P.; Bellingham, J. G.
2013-12-01
We describe the task of understanding a marine drift experiment conducted by MBARI in Fall 2012 ('CANON'). Datasets were aggregated from a drifting ADCP, from the MBARI Environmental Sample Processor, from Long Range Autonomous Underwater Vehicles (LRAUVs), from other in situ sensors, from NASA and NOAA remote sensing platforms, from moorings, from shipboard CTD casts and from post-experiment metagenomic analysis. We seek to combine existing approaches to data synthesis -- visual inspection, cross correlation and co.-- with three new ideas. This approach has the purpose of differentiating biological signals into three causal categories: microcurrent advection, physical factors and microbial metabolism. Respective examples are aberrance from Lagrangian-frame drift due to windage, changes in solar flux over several days, and microbial population responses to shifts in nitrate concentration. The three ideas we implemented are as follows. First, we advect LRAUV data to look for patterns in time series data for conserved quantities such as salinity; we investigate whether such patterns can be used to support or undermine the premise of Lagrangian motion of the experiment ensemble. Second, we built a set of configurable filters that enable us to visually isolate segments of data: by type, value, time, anomaly and location. Third, we associated data hypotheses with a Bayesian inference engine for the purpose of model validation, again across sections taken from within the complete data complex. The end result moves toward a free-form exploration of experimental data with low latency: from question to view, from hypothesis to test (albeit with considerable preparatory effort). Preliminary results show the three causal categories shifting in relative influence.
Canonical group quantization and boundary conditions
Energy Technology Data Exchange (ETDEWEB)
Jung, Florian
2012-07-16
In the present thesis, we study quantization of classical systems with non-trivial phase spaces using the group-theoretical quantization technique proposed by Isham. Our main goal is a better understanding of global and topological aspects of quantum theory. In practice, the group-theoretical approach enables direct quantization of systems subject to constraints and boundary conditions in a natural and physically transparent manner -- cases for which the canonical quantization method of Dirac fails. First, we provide a clarification of the quantization formalism. In contrast to prior treatments, we introduce a sharp distinction between the two group structures that are involved and explain their physical meaning. The benefit is a consistent and conceptually much clearer construction of the Canonical Group. In particular, we shed light upon the 'pathological' case for which the Canonical Group must be defined via a central Lie algebra extension and emphasise the role of the central extension in general. In addition, we study direct quantization of a particle restricted to a half-line with 'hard wall' boundary condition. Despite the apparent simplicity of this example, we show that a naive quantization attempt based on the cotangent bundle over the half-line as classical phase space leads to an incomplete quantum theory; the reflection which is a characteristic aspect of the 'hard wall' is not reproduced. Instead, we propose a different phase space that realises the necessary boundary condition as a topological feature and demonstrate that quantization yields a suitable quantum theory for the half-line model. The insights gained in the present special case improve our understanding of the relation between classical and quantum theory and illustrate how contact interactions may be incorporated.
Calibrating ensemble reliability whilst preserving spatial structure
Directory of Open Access Journals (Sweden)
Jonathan Flowerdew
2014-03-01
Ensemble forecasts aim to improve decision-making by predicting a set of possible outcomes. Ideally, these would provide probabilities which are both sharp and reliable. In practice, the models, data assimilation and ensemble perturbation systems are all imperfect, leading to deficiencies in the predicted probabilities. This paper presents an ensemble post-processing scheme which directly targets local reliability, calibrating both climatology and ensemble dispersion in one coherent operation. It makes minimal assumptions about the underlying statistical distributions, aiming to extract as much information as possible from the original dynamic forecasts and support statistically awkward variables such as precipitation. The output is a set of ensemble members preserving the spatial, temporal and inter-variable structure from the raw forecasts, which should be beneficial to downstream applications such as hydrological models. The calibration is tested on three leading 15-d ensemble systems, and their aggregation into a simple multimodel ensemble. Results are presented for 12 h, 1° scale over Europe for a range of surface variables, including precipitation. The scheme is very effective at removing unreliability from the raw forecasts, whilst generally preserving or improving statistical resolution. In most cases, these benefits extend to the rarest events at each location within the 2-yr verification period. The reliability and resolution are generally equivalent or superior to those achieved using a Local Quantile-Quantile Transform, an established calibration method which generalises bias correction. The value of preserving spatial structure is demonstrated by the fact that 3×3 averages derived from grid-scale precipitation calibration perform almost as well as direct calibration at 3×3 scale, and much better than a similar test neglecting the spatial relationships. Some remaining issues are discussed regarding the finite size of the output
Heuser, Frank
2011-01-01
Public school music education in the USA remains wedded to large ensemble performance. Instruction tends to be teacher directed, relies on styles from the Western canon and exhibits little concern for musical interests of students. The idea that a fundamental purpose of education is the creation of a just society is difficult for many music…
Canonical metrics on complex manifold
Institute of Scientific and Technical Information of China (English)
YAU Shing-Tung
2008-01-01
Complex manifolds are topological spaces that are covered by coordinate charts where the coordinate changes are given by holomorphic transformations. For example, Riemann surfaces are one-dimensional complex manifolds. In order to understand complex manifolds, it is useful to introduce metrics that are compatible with the complex structure. In general, we should have a pair (M, ds^2_M) where ds^2_M is the metric. The metric is said to be canonical if any biholomorphism of the complex manifold is automatically an isometry. Such metrics can naturally be used to describe invariants of the complex structures of the manifold.
What is "Relativistic Canonical Quantization"?
Arbatsky, D. A.
2005-01-01
The purpose of this review is to give the most popular description of the scheme of quantization of relativistic fields that was named relativistic canonical quantization (RCQ). I do not give here the full exact account of this scheme. But with the help of this review any physicist, even not a specialist in the relativistic quantum theory, will be able to get a general view of the content of RCQ, of its connection with other known approaches, of its novelty and of its fruitfulness.
Data assimilation in integrated hydrological modeling using ensemble Kalman filtering
DEFF Research Database (Denmark)
Rasmussen, Jørn; Madsen, H.; Jensen, Karsten Høgh
2015-01-01
Groundwater head and stream discharge are assimilated using the ensemble transform Kalman filter in an integrated hydrological model with the aim of studying the relationship between the filter performance and the ensemble size. In an attempt to reduce the required number of ensemble members......, an adaptive localization method is used. The performance of the adaptive localization method is compared to the more common distance-based localization. The relationship between filter performance in terms of hydraulic head and discharge error and the number of ensemble members is investigated for varying...... and estimating parameters requires a much larger ensemble size than just assimilating groundwater head observations. However, the required ensemble size can be greatly reduced with the use of adaptive localization, which by far outperforms distance-based localization. The study is conducted using synthetic data...
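For orientation, the stochastic ensemble Kalman analysis step that such studies build on can be sketched as follows (a generic textbook update with invented sizes and a single synthetic head observation, not the paper's setup; localization is omitted):

```python
import numpy as np

rng = np.random.default_rng(1)

# Forecast ensemble: n_state hypothetical hydraulic-head values, n_members members.
n_members, n_state = 30, 50
ensemble = rng.standard_normal((n_state, n_members))

# One observation of state element 10, with error variance obs_var.
H = np.zeros((1, n_state)); H[0, 10] = 1.0
obs, obs_var = 0.5, 0.1 ** 2

A = ensemble - ensemble.mean(axis=1, keepdims=True)  # ensemble anomalies
HA = H @ A
# Kalman gain from sample covariances: K = P H^T (H P H^T + R)^-1
PHt = A @ HA.T / (n_members - 1)
S = HA @ HA.T / (n_members - 1) + obs_var
K = PHt / S                                          # S is 1x1 here

# Perturbed observations keep the analysis spread statistically consistent.
perturbed = obs + rng.normal(0.0, 0.1, size=(1, n_members))
analysis = ensemble + K @ (perturbed - H @ ensemble)
```

After the update, the ensemble spread at the observed state element shrinks toward the observation error level, while unobserved elements are adjusted through the sample covariances; localization, when used, tapers those covariances with distance or adaptively.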
DEFF Research Database (Denmark)
Hansen, Lars Kai; Salamon, Peter
1990-01-01
We propose several means for improving the performance and training of neural networks for classification. We use cross-validation as a tool for optimizing network parameters and architecture. We show further that the remaining generalization error can be reduced by invoking ensembles of similar...
Wakefield, M. E.
1982-01-01
Protective garment ensemble with internally-mounted environmental-control unit contains its own air supply. Alternatively, a remote environmental-control unit or an air line is attached at the umbilical quick disconnect. Unit uses liquid air that is vaporized to provide both breathing air and cooling. Totally enclosed garment protects against toxic substances.
Directory of Open Access Journals (Sweden)
Leandro Machado Colli
INTRODUCTION: Canonical and non-canonical Wnt pathways are involved in the genesis of multiple tumors; however, their role in pituitary tumorigenesis is mostly unknown. OBJECTIVE: This study evaluated gene and protein expression of Wnt pathways in pituitary tumors and whether this expression correlates with clinical outcome. MATERIALS AND METHODS: Genes of the Wnt canonical pathway: activating ligands (WNT11, WNT4, WNT5A), binding inhibitors (DKK3, sFRP1), β-catenin (CTNNB1), β-catenin degradation complex (APC, AXIN1, GSK3β), inhibitor of the β-catenin degradation complex (AKT1), sequester of β-catenin (CDH1), pathway effectors (TCF7, MAPK8, NFAT5), pathway mediators (DVL-1, DVL-2, DVL-3, PRICKLE, VANGL1), target genes (MYB, MYC, WISP2, SPRY1, TP53, CCND1); the calcium-dependent pathway (PLCB1, CAMK2A, PRKCA, CHP); and the planar cell polarity pathway (PTK7, DAAM1, RHOA) were evaluated by QPCR in 19 GH-secreting, 18 ACTH-secreting, 21 non-secreting (NS) pituitary tumors, and 5 normal pituitaries. Also, the main effectors of the canonical (β-catenin), planar cell polarity (JNK), and calcium-dependent (NFAT5) Wnt pathways were evaluated by immunohistochemistry. RESULTS: There were no differences in gene expression of canonical and non-canonical Wnt pathways between the studied subtypes of pituitary tumors and normal pituitaries, except for WISP2, which was over-expressed in ACTH-secreting tumors compared to normal pituitaries (4.8x; p = 0.02), NS pituitary tumors (7.7x; p = 0.004) and GH-secreting tumors (5.0x; p = 0.05). β-catenin, NFAT5 and JNK proteins showed no expression in normal pituitaries or in any of the pituitary tumor subtypes. Furthermore, no association of the studied gene or protein expression was observed with tumor size, recurrence, or progressive disease. Hierarchical clustering showed a regular pattern of genes of the canonical and non-canonical Wnt pathways randomly distributed throughout the dendrogram. CONCLUSIONS: Our data reinforce previous reports
Institute of Scientific and Technical Information of China (English)
Ren Wen-Xiu; Alatancang
2007-01-01
Using the factorization viewpoint of differential operators, this paper discusses how to transform a nonlinear evolution equation into an infinite-dimensional Hamiltonian linear canonical formulation. It proves a sufficient condition for the canonical factorization of an operator, and provides a kind of mechanical algebraic method to achieve the corresponding canonical '(δ)/(δ)x'-type expression. Then three examples are given, which show the application of the obtained algorithm. Thus a novel idea for the inverse problem can be derived feasibly.
De praeceptis ferendis: good practice in multi-model ensembles
Directory of Open Access Journals (Sweden)
I. Kioutsioukis
2014-06-01
Ensembles of air quality models have been formally and empirically shown to outperform single models in many cases. Evidence suggests that ensemble error is reduced when the members form a diverse and accurate ensemble. Diversity and accuracy are hence two factors that should be taken care of while designing ensembles in order for them to provide better predictions. There exists a trade-off between diversity and accuracy, whereby one cannot be gained without expense of the other. Theoretical aspects like the bias-variance-covariance decomposition and the accuracy-diversity decomposition are linked together and support the importance of creating an ensemble that incorporates both elements. Hence, the common practice of unconditional averaging of models without prior manipulation limits the advantages of ensemble averaging. We demonstrate the importance of ensemble accuracy and diversity through an inter-comparison of ensemble products for which a sound mathematical framework exists, and provide specific recommendations for model selection and weighting for multi-model ensembles. To this end we have devised statistical tools that can be used for diagnostic evaluation of ensemble modelling products, complementing existing operational methods.
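The accuracy-diversity (ambiguity) decomposition mentioned above can be verified numerically; the forecasts and truth below are synthetic and the weights a plain average:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic forecasts from five models at 100 points, plus a synthetic truth.
n_models, n_points = 5, 100
preds = rng.standard_normal((n_models, n_points)) + 2.0
truth = rng.standard_normal(n_points) + 2.0
w = np.full(n_models, 1.0 / n_models)  # equal convex weights

ens = w @ preds                                  # ensemble-mean forecast
ens_err = np.mean((ens - truth) ** 2)            # ensemble squared error
member_err = np.mean(w @ (preds - truth) ** 2)   # weighted member error
diversity = np.mean(w @ (preds - ens) ** 2)      # spread around the mean

# ensemble error = weighted average member error - diversity, exactly
assert np.allclose(ens_err, member_err - diversity)
```

Because diversity enters with a negative sign, an accurate yet diverse ensemble beats its average member, which is why unconditionally averaging near-identical models gains little.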
Face hallucination using orthogonal canonical correlation analysis
Zhou, Huiling; Lam, Kin-Man
2016-05-01
A two-step face-hallucination framework is proposed to reconstruct a high-resolution (HR) version of a face from an input low-resolution (LR) face, based on learning from LR-HR example face pairs using orthogonal canonical correlation analysis (orthogonal CCA) and linear mapping. In the proposed algorithm, face images are first represented using principal component analysis (PCA). Canonical correlation analysis (CCA) with the orthogonality property is then employed to maximize the correlation between the PCA coefficients of the LR and the HR face pairs, improving the hallucination performance. The original CCA does not possess the orthogonality property, which is crucial for information reconstruction. We propose using orthogonal CCA, which is proven by experiments to achieve a better performance in terms of global face reconstruction. In addition, in the residual-compensation process, a linear-mapping method is proposed to include both the inter- and intra-information about manifolds of different resolutions. Compared with other state-of-the-art approaches, the proposed framework can achieve a comparable, or even better, performance in terms of global face reconstruction and the visual quality of face hallucination. Experiments on images with various parameter settings and blurring distortions show that the proposed approach is robust and has great potential for real-world applications.
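For reference, the standard CCA step that orthogonal CCA modifies can be sketched via whitening and an SVD (synthetic data with a shared two-dimensional latent signal; sizes, mixing matrices and noise level are arbitrary illustrative choices, not the paper's face data):

```python
import numpy as np

rng = np.random.default_rng(5)

# Two views X, Y driven by a shared two-dimensional latent signal plus noise.
n, p, q = 500, 6, 4
z = rng.standard_normal((n, 2))
X = z @ rng.standard_normal((2, p)) + 0.05 * rng.standard_normal((n, p))
Y = z @ rng.standard_normal((2, q)) + 0.05 * rng.standard_normal((n, q))

Xc, Yc = X - X.mean(0), Y - Y.mean(0)
Cxx, Cyy, Cxy = Xc.T @ Xc / n, Yc.T @ Yc / n, Xc.T @ Yc / n

def inv_sqrt(C):
    # symmetric inverse square root via eigendecomposition
    w, V = np.linalg.eigh(C)
    return V @ np.diag(w ** -0.5) @ V.T

# canonical correlations = singular values of the whitened cross-covariance
corr = np.linalg.svd(inv_sqrt(Cxx) @ Cxy @ inv_sqrt(Cyy), compute_uv=False)
print(np.round(corr, 3))  # leading values close to 1, reflecting the shared signal
```

Orthogonal CCA, as the abstract notes, additionally constrains the projection bases to be orthogonal, which standard CCA does not guarantee.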
Observables in classical canonical gravity: Folklore demystified
Pons, J. M.; Salisbury, D. C.; Sundermeyer, K. A.
2010-04-01
We give an overview of some conceptual difficulties, sometimes called paradoxes, that have for years puzzled the physical interpretation of classical canonical gravity and, by extension, the canonical formulation of generally covariant theories. We identify these difficulties as stemming from some terminological misunderstandings as to what is meant by "gauge invariance", or what is understood classically by a "physical state". We make a thorough analysis of the issue and show that all purported paradoxes disappear when the right terminology is in place. Since this issue is connected with the search for observables - gauge invariant quantities - for these theories, we formally show that time-evolving observables can be constructed for every observer. This construction relies on the fixation of the gauge freedom of diffeomorphism invariance by means of a scalar coordinatization. We stress the condition that the coordinatization must be made with scalars. As an example of our method for obtaining observables we discuss the case of the massive particle in AdS spacetime.
Canonical and alternative MAPK signaling.
Pimienta, Genaro; Pascual, Jaime
2007-11-01
The archetype of MAPK cascade activation is somewhat challenged by the most recent discovery of unexpected phosphorylation patterns, alternative activation mechanisms and sub-cellular localization, in various members of this protein kinase family. In particular, activation by autophosphorylation pathways has now been described for the three best understood MAPK subgroups: ERK1/2; JNK1/2 and p38 alpha/beta. Also, a form of dosage compensation between homologs has been shown to occur in the case of ERK1/2 and JNK1/2. In this paper we summarize the MAPK activation pathway, with an emphasis on non-canonical examples. We use this information to propose a model for MAPK signal transduction that considers a cross-talk between MAPKs with different activation loop sequence motifs and unique C-terminal extensions. We highlight the occurrence of non-canonical substrate specificity during MAPK auto-activation, in strong connection with MAPK homo- and hetero-dimerization events.
A Localized Ensemble Kalman Smoother
Butala, Mark D.
2012-01-01
Numerous geophysical inverse problems prove difficult because the available measurements are indirectly related to the underlying unknown dynamic state, and the physics governing the system may involve imperfect models or unobserved parameters. Data assimilation addresses these difficulties by combining the measurements and physical knowledge. The main challenge in such problems usually involves their high dimensionality, and standard statistical methods prove computationally intractable. This paper develops and addresses the theoretical convergence of a new high-dimensional Monte-Carlo approach called the localized ensemble Kalman smoother.
Visualizing ensembles in structural biology.
Melvin, Ryan L; Salsbury, Freddie R
2016-06-01
Displaying a single representative conformation of a biopolymer rather than an ensemble of states mistakenly conveys a static nature rather than the actual dynamic personality of biopolymers. However, there are few apparent options due to the fixed nature of print media. Here we suggest a standardized methodology for visually indicating the distribution width, standard deviation and uncertainty of ensembles of states with little loss of the visual simplicity of displaying a single representative conformation. Of particular note is that the visualization method employed clearly distinguishes between isotropic and anisotropic motion of polymer subunits. We also apply this method to ligand binding, suggesting a way to indicate the expected error in many high throughput docking programs when visualizing the structural spread of the output. We provide several examples in the context of nucleic acids and proteins with particular insights gained via this method. Such examples include investigating a therapeutic polymer of FdUMP (5-fluoro-2-deoxyuridine-5-O-monophosphate) - a topoisomerase-1 (Top1), apoptosis-inducing poison - and nucleotide-binding proteins responsible for ATP hydrolysis from Bacillus subtilis. We also discuss how these methods can be extended to any macromolecular data set with an underlying distribution, including experimental data such as NMR structures.
Calculation of the chemical potential in the Gibbs ensemble
Smit, B.; Frenkel, D.
1989-01-01
An expression for the chemical potential in the Gibbs ensemble is derived. For finite system sizes this expression for the chemical potential differs systematically from Widom's test-particle insertion method for the N, V, T ensemble. In order to compare these two methods for calculating the chemical potential...
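Widom's test-particle method, the comparison point in this abstract, can be sketched as follows (toy pair potential, box size and particle count are invented for illustration; not the authors' code):

```python
import math
import random

random.seed(3)

def pair_energy(r2):
    # soft repulsive pair potential; a toy stand-in, not a physical model
    return 1.0 / (r2 + 1.0)

def widom_mu_excess(particles, box, kT, n_insert=2000):
    # mu_excess = -kT * ln < exp(-dU / kT) >, averaged over random ghost insertions
    acc = 0.0
    for _ in range(n_insert):
        ghost = [random.uniform(0.0, box) for _ in range(3)]
        dU = 0.0
        for p in particles:
            r2 = sum(min(abs(g - c), box - abs(g - c)) ** 2
                     for g, c in zip(ghost, p))  # minimum-image convention
            dU += pair_energy(r2)
        acc += math.exp(-dU / kT)
    return -kT * math.log(acc / n_insert)

box, kT = 8.0, 1.0
particles = [[random.uniform(0.0, box) for _ in range(3)] for _ in range(20)]
print(widom_mu_excess(particles, box, kT) > 0.0)  # repulsion gives positive mu_excess
```

For a purely repulsive potential every insertion raises the energy, so the excess chemical potential is strictly positive; in the non-interacting limit it vanishes, as the estimator correctly reproduces.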
Gillers, Benjamin S; Chiplunkar, Aditi; Aly, Haytham; Valenta, Tomas; Basler, Konrad; Christoffels, Vincent M.; Efimov, Igor R; Boukens, Bastiaan J; Rentschler, Stacey
2014-01-01
Rationale Proper patterning of the atrioventricular canal (AVC) is essential for delay of electrical impulses between atria and ventricles, and defects in AVC maturation can result in congenital heart disease. Objective To determine the role of canonical Wnt signaling in the myocardium during AVC development. Methods and Results We utilized a novel allele of β-catenin that preserves β-catenin’s cell adhesive functions but disrupts canonical Wnt signaling, allowing us to probe the effects of Wnt loss of function independently. We show that loss of canonical Wnt signaling in the myocardium results in tricuspid atresia with hypoplastic right ventricle associated with loss of AVC myocardium. In contrast, ectopic activation of Wnt signaling was sufficient to induce formation of ectopic AV junction-like tissue as assessed by morphology, gene expression, and electrophysiologic criteria. Aberrant AVC development can lead to ventricular preexcitation, a characteristic feature of Wolff-Parkinson-White syndrome. We demonstrate that postnatal activation of Notch signaling downregulates canonical Wnt targets within the AV junction. Stabilization of β-catenin protein levels can rescue Notch-mediated ventricular preexcitation and dysregulated ion channel gene expression. Conclusions Our data demonstrate that myocardial canonical Wnt signaling is an important regulator of AVC maturation and electrical programming upstream of Tbx3. Our data further suggests that ventricular preexcitation may require both morphologic patterning defects, as well as myocardial lineage reprogramming, to allow robust conduction across accessory pathway tissue. PMID:25599332
Chen, Yunjie; Kale, Seyit; Weare, Jonathan; Dinner, Aaron R; Roux, Benoît
2016-04-12
A multiple time-step integrator based on a dual Hamiltonian and a hybrid method combining molecular dynamics (MD) and Monte Carlo (MC) is proposed to sample systems in the canonical ensemble. The Dual Hamiltonian Multiple Time-Step (DHMTS) algorithm is based on two similar Hamiltonians: a computationally expensive one that serves as a reference and a computationally inexpensive one to which the workload is shifted. The central assumption is that the difference between the two Hamiltonians is slowly varying. Earlier work has shown that such dual Hamiltonian multiple time-step schemes effectively precondition nonlinear differential equations for dynamics by reformulating them into a recursive root finding problem that can be solved by propagating a correction term through an internal loop, analogous to RESPA. Of special interest in the present context, a hybrid MD-MC version of the DHMTS algorithm is introduced to enforce detailed balance via a Metropolis acceptance criterion and ensure consistency with the Boltzmann distribution. The Metropolis criterion suppresses the discretization errors normally associated with the propagation according to the computationally inexpensive Hamiltonian, treating the discretization error as an external work. Illustrative tests are carried out to demonstrate the effectiveness of the method.
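The Metropolis correction idea can be illustrated with a one-dimensional hybrid MC sketch in the same spirit: trajectories are propagated with an inexpensive surrogate Hamiltonian, and acceptance is decided with the expensive reference Hamiltonian (both potentials and all parameters here are invented for illustration; this is not the DHMTS code):

```python
import math
import random

random.seed(4)

def u_ref(x):
    # "expensive" reference potential defining the target Boltzmann distribution
    return 0.5 * x * x + 0.1 * x ** 4

def grad_cheap(x):
    # gradient of the "cheap" surrogate potential 0.5 * x**2
    return x

def hmc_step(x, dt=0.2, n_leap=10):
    p = random.gauss(0.0, 1.0)
    h_old = u_ref(x) + 0.5 * p * p
    x_new = x
    # leapfrog integration under the cheap Hamiltonian only
    p -= 0.5 * dt * grad_cheap(x_new)
    for _ in range(n_leap - 1):
        x_new += dt * p
        p -= dt * grad_cheap(x_new)
    x_new += dt * p
    p -= 0.5 * dt * grad_cheap(x_new)
    # Metropolis test on the reference Hamiltonian absorbs the surrogate error,
    # so accepted samples follow exp(-u_ref) exactly
    h_new = u_ref(x_new) + 0.5 * p * p
    if random.random() < math.exp(min(0.0, h_old - h_new)):
        return x_new
    return x

x, samples = 0.0, []
for _ in range(20000):
    x = hmc_step(x)
    samples.append(x)
mean = sum(samples) / len(samples)
```

Because the leapfrog map is volume-preserving and reversible regardless of which Hamiltonian drives it, evaluating the acceptance test with the reference energy difference enforces detailed balance with respect to the target distribution, mirroring how the Metropolis criterion in the abstract treats the surrogate's discretization error as external work.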
Canonical Entropy and Phase Transition of Rotating Black Hole
Institute of Scientific and Technical Information of China (English)
ZHAO Ren; WU Yue-Qin; ZHANG Li-Chun
2008-01-01
Recently, the Hawking radiation of a black hole has been studied using the tunnel effect method, and the radiation spectrum of a black hole has been derived. By discussing the correction to the spectrum of the rotating black hole, we obtain the canonical entropy. The derived canonical entropy is equal to the sum of the Bekenstein-Hawking entropy and a correction term. The correction term near the critical point differs from that at other points, and this difference plays an important role in studying the phase transition of the black hole. The black hole thermal capacity diverges at the critical point; however, the canonical entropy is not a complex number at this point. Thus we conclude that the phase transition created by this critical point is a second-order phase transition. The discussed black hole is a five-dimensional Kerr-AdS black hole. We provide a basis for discussing the thermodynamic properties of higher-dimensional rotating black holes.
Convolution theorems for the linear canonical transform and their applications
Institute of Scientific and Technical Information of China (English)
DENG Bing; TAO Ran; WANG Yue
2006-01-01
As generalization of the fractional Fourier transform (FRFT), the linear canonical transform (LCT) has been used in several areas, including optics and signal processing. Many properties for this transform are already known, but the convolution theorems, similar to the version of the Fourier transform, are still to be determined. In this paper, the authors derive the convolution theorems for the LCT, and explore the sampling theorem and multiplicative filter for the band limited signal in the linear canonical domain. Finally, the sampling and reconstruction formulas are deduced, together with the construction methodology for the above mentioned multiplicative filter in the time domain based on fast Fourier transform (FFT), which has much lower computational load than the construction method in the linear canonical domain.
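A discrete analogue of the classical Fourier convolution theorem that these LCT results generalize can be checked directly (pure-Python DFT on a short sequence, illustration only):

```python
import cmath

def dft(x, inverse=False):
    # unnormalized forward DFT; inverse carries the 1/n factor
    n = len(x)
    sign = 1 if inverse else -1
    out = [sum(x[k] * cmath.exp(sign * 2j * cmath.pi * j * k / n)
               for k in range(n)) for j in range(n)]
    return [v / n for v in out] if inverse else out

def circular_convolve(a, b):
    n = len(a)
    return [sum(a[k] * b[(j - k) % n] for k in range(n)) for j in range(n)]

a = [1.0, 2.0, 0.0, -1.0]
b = [0.5, 0.0, 1.0, 0.0]
direct = circular_convolve(a, b)
via_dft = dft([fa * fb for fa, fb in zip(dft(a), dft(b))], inverse=True)
print([round(v.real, 6) for v in via_dft])  # matches the direct convolution
```

The LCT convolution theorems discussed in the paper play the same role in the linear canonical domain, which is what enables FFT-based implementation of the multiplicative filter.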
Kernel canonical-correlation Granger causality for multiple time series
Wu, Guorong; Duan, Xujun; Liao, Wei; Gao, Qing; Chen, Huafu
2011-04-01
Canonical-correlation analysis as a multivariate statistical technique has been applied to multivariate Granger causality analysis to infer information flow in complex systems. It shows unique appeal and great superiority over the traditional vector autoregressive method, due to the simplified procedure that detects causal interaction between multiple time series, and the avoidance of potential model estimation problems. However, it is limited to the linear case. Here, we extend the framework of canonical correlation to include the estimation of multivariate nonlinear Granger causality for drawing inference about directed interaction. Its feasibility and effectiveness are verified on simulated data.
Bayesian ensemble refinement by replica simulations and reweighting.
Hummer, Gerhard; Köfinger, Jürgen
2015-12-28
We describe different Bayesian ensemble refinement methods, examine their interrelation, and discuss their practical application. With ensemble refinement, the properties of dynamic and partially disordered (bio)molecular structures can be characterized by integrating a wide range of experimental data, including measurements of ensemble-averaged observables. We start from a Bayesian formulation in which the posterior is a functional that ranks different configuration space distributions. By maximizing this posterior, we derive an optimal Bayesian ensemble distribution. For discrete configurations, this optimal distribution is identical to that obtained by the maximum entropy "ensemble refinement of SAXS" (EROS) formulation. Bayesian replica ensemble refinement enhances the sampling of relevant configurations by imposing restraints on averages of observables in coupled replica molecular dynamics simulations. We show that the strength of the restraints should scale linearly with the number of replicas to ensure convergence to the optimal Bayesian result in the limit of infinitely many replicas. In the "Bayesian inference of ensembles" method, we combine the replica and EROS approaches to accelerate the convergence. An adaptive algorithm can be used to sample directly from the optimal ensemble, without replicas. We discuss the incorporation of single-molecule measurements and dynamic observables such as relaxation parameters. The theoretical analysis of different Bayesian ensemble refinement approaches provides a basis for practical applications and a starting point for further investigations.
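The maximum-entropy reweighting step underlying such refinement can be sketched in a few lines: given per-configuration values of an observable, find exponential weights whose ensemble average matches a target (toy numbers; restraint strength, replicas, and error models are omitted):

```python
import math

def reweight(s_vals, target, lo=-50.0, hi=50.0, tol=1e-10):
    # maximum-entropy weights w_i ∝ exp(-lam * s_i) matching <s> = target
    def avg(lam):
        ws = [math.exp(-lam * s) for s in s_vals]
        z = sum(ws)
        return sum(w * s for w, s in zip(ws, s_vals)) / z
    # <s>(lam) decreases monotonically in lam (its derivative is -Var(s)),
    # so a simple bisection finds the matching multiplier.
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if avg(mid) > target:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    ws = [math.exp(-lam * s) for s in s_vals]
    z = sum(ws)
    return [w / z for w in ws]

# toy ensemble: one observable value per configuration
s_vals = [0.2, 0.5, 0.9, 1.4, 2.0]
w = reweight(s_vals, target=0.8)
print(sum(wi * si for wi, si in zip(w, s_vals)))  # ≈ 0.8
```

These weights minimally perturb the uniform prior distribution in the relative-entropy sense, which is the discrete analogue of the optimal Bayesian ensemble distribution the abstract describes.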
Canonical curves with low apolarity
Ballico, Edoardo; Notari, Roberto
2010-01-01
Let $k$ be an algebraically closed field and let $C$ be a non--hyperelliptic smooth projective curve of genus $g$ defined over $k$. Since the canonical model of $C$ is arithmetically Gorenstein, Macaulay's theory of inverse systems allows one to associate to $C$ a cubic form $f$ in the divided power $k$--algebra $R$ in $g-2$ variables. The apolarity of $C$ is the minimal number $t$ of linear forms in $R$ needed to write $f$ as a sum of their divided power cubes. It is easy to see that the apolarity of $C$ is at least $g-2$, and P. De Poi and F. Zucconi classified curves with apolarity $g-2$ when $k$ is the complex field. In this paper, we give a complete, characteristic-free classification of curves $C$ with apolarity $g-1$ (and $g-2$).
Directory of Open Access Journals (Sweden)
Knud Jeppesen
2003-11-01
The Psalter, read as a coherent book instead of as 150 independent poems, reveals some patterns and a continuum of ideas, which might not express the editors' original intention, but support the readers' understanding of this canonical book. The article suggests that, even if the majority of texts are laments, the Psalter is a book of praise, underlined for instance by the endings of the Psalter's five books. The five books relate the Psalter to the Pentateuch, and a form of competition between David and Moses is found (see esp. Book 4), of which David was the winner. This is one of the reasons why the Christians were able to read the Psalter as a Christian book.
Ye, Aizhong; Deng, Xiaoxue; Ma, Feng; Duan, Qingyun; Zhou, Zheng; Du, Chao
2017-04-01
Despite the tremendous improvement made in numerical weather and climate models over recent years, the forecasts generated by those models still cannot be used directly for hydrological forecasting. A post-processor such as the Ensemble Pre-Processor (EPP) developed by the U.S. National Weather Service must be used to remove various biases and to extract useful predictive information from those forecasts. In this paper, we investigate how different designs of canonical events in the EPP can help post-process precipitation forecasts from the Global Ensemble Forecast System (GEFS) and Climate Forecast System Version 2 (CFSv2). The use of canonical events allows those products to be linked seamlessly, and the post-processed ensemble precipitation forecasts can then be generated using the Schaake Shuffle procedure. We used the post-processed ensemble precipitation forecasts to drive a distributed hydrological model to obtain ensemble streamflow forecasts and evaluated those forecasts against the observed streamflow. We found that careful design of canonical events can help extract more useful information, especially when up-to-date observed precipitation is used to set up the canonical events. We also found that streamflow forecasts using post-processed precipitation forecasts have longer lead times and higher accuracy than streamflow forecasts made by traditional Extended Streamflow Prediction (ESP) and forecasts based on the original GEFS and CFSv2 precipitation forecasts.
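The Schaake Shuffle mentioned above can be sketched as a rank-reordering step: at each site, the sorted ensemble values are reassigned the rank order of a matching set of historical observations, restoring realistic spatial dependence (toy two-site example, not the EPP implementation):

```python
def schaake_shuffle(ensemble, historical):
    # ensemble, historical: lists (one per site) of equal-length value lists
    shuffled = []
    for fcst, hist in zip(ensemble, historical):
        fcst_sorted = sorted(fcst)
        # positions of the historical values in ascending order
        order = sorted(range(len(hist)), key=lambda i: hist[i])
        out = [0.0] * len(fcst)
        for rank, idx in enumerate(order):
            out[idx] = fcst_sorted[rank]  # member idx gets the rank-th value
        shuffled.append(out)
    return shuffled

# two sites, five members; the historical record supplies the dependence template
ensemble = [[3.0, 1.0, 2.5, 0.5, 4.0], [10.0, 30.0, 20.0, 5.0, 25.0]]
historical = [[1.2, 0.3, 2.2, 3.1, 0.9], [12.0, 3.0, 22.0, 30.0, 9.0]]
out = schaake_shuffle(ensemble, historical)
print(out)
```

The marginal distribution at each site is untouched (the same values are merely reordered), while members now co-vary across sites the way the historical template does.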
Institute of Scientific and Technical Information of China (English)
卿光辉; 王亚辉; 李顶河
2011-01-01
In applying symplectic numerical methods to Hamiltonian systems, it is important to recognize that a nearby Hamiltonian is approximately conserved for exponentially long times, and the numerical solution of a separable differential equation obtained with symplectic methods is very accurate. Based on the modified Hellinger-Reissner (H-R) variational principle of piezoelectricity, a Hamiltonian four-node rectangular element matrix is constructed in this paper. The separable K-canonical formulation of the Hamiltonian element is then derived by exchanging rows and columns of the Hamiltonian element formulation. Finally, an explicit symplectic scheme is employed to solve the static problem of a piezoelectric laminated plate. The numerical examples show that the explicit symplectic method can be applied to large-scale differential equations.
UNIVARIATE DECOMPOSE-ENSEMBLE METHOD BASED MILK DEMAND FORECASTING
Institute of Scientific and Technical Information of China (English)
王帅; 汤铃; 余乐安
2013-01-01
Prediction of future market demand for milk is important for stabilizing milk prices, developing marketing strategies, and making production planning decisions. This paper proposes a novel univariate decompose-ensemble methodology that uses ensemble empirical mode decomposition (EEMD), wavelet decomposition, and least squares support vector regression (LSSVR) to predict milk consumption in China; the single LSSVR method is applied for comparison. In the decompose-ensemble methods, EEMD and wavelet decomposition are first used to decompose the original data, and LSSVR is then used to predict the separated components. Finally, the predictions of the different components are combined to form the ensemble result. The forecasting results indicate that milk demand will increase from 2010 to 2012. Based on this result, the related departments should take action to ensure the healthy development of the dairy market in China.
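The decompose-predict-recombine pipeline described above can be illustrated with a minimal sketch. EEMD/wavelet decomposition and LSSVR are replaced here by simple stand-ins (a moving-average trend/residual split and a least-squares AR(1) fit); only the shape of the pipeline follows the abstract.

```python
import numpy as np

def decompose(series, window=3):
    """Stand-in for EEMD/wavelet decomposition: split the series into a
    smooth trend (moving average) and a residual component."""
    kernel = np.ones(window) / window
    trend = np.convolve(series, kernel, mode="same")
    return trend, series - trend

def fit_ar1(component):
    """Stand-in for LSSVR: least-squares AR(1) predictor x_t ~ a*x_{t-1} + b."""
    x, y = component[:-1], component[1:]
    a, b = np.polyfit(x, y, 1)
    return lambda last: a * last + b

def forecast_next(series):
    """Decompose, predict each component one step ahead, then recombine."""
    parts = decompose(np.asarray(series, dtype=float))
    return sum(fit_ar1(p)(p[-1]) for p in parts)
```

The key design point is that each component is forecast separately and the component forecasts are summed to form the ensemble result.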
Effective Visualization of Temporal Ensembles.
Hao, Lihua; Healey, Christopher G; Bass, Steffen A
2016-01-01
An ensemble is a collection of related datasets, called members, built from a series of runs of a simulation or an experiment. Ensembles are large, temporal, multidimensional, and multivariate, which makes them difficult to analyze. Another important challenge is visualizing ensembles that vary in both space and time. Initial visualization techniques displayed ensembles with a small number of members, or presented an overview of an entire ensemble but without potentially important details. Recently, researchers have suggested combining these two directions, allowing users to choose subsets of members to visualize. This manual selection process places the burden on the user to identify which members to explore. We first introduce a static ensemble visualization system that automatically helps users locate interesting subsets of members to visualize. We next extend the system to support analysis and visualization of temporal ensembles. We employ 3D shape comparison, cluster tree visualization, and glyph-based visualization to represent different levels of detail within an ensemble. This strategy provides two approaches to temporal ensemble analysis: (1) segment-based ensemble analysis, which captures important shape transition time-steps, clusters groups of similar members, and identifies common shape changes over time across multiple members; and (2) time-step-based ensemble analysis, which assumes ensemble members are aligned in time, combining similar shapes at common time-steps. Both approaches enable users to interactively visualize and analyze a temporal ensemble from different perspectives at different levels of detail. We demonstrate our techniques on an ensemble studying the transition of matter from hadronic gas to quark-gluon plasma during gold-on-gold particle collisions.
Uncertainty relations, zero point energy and the linear canonical group
Sudarshan, E. C. G.
1993-01-01
The close relationship between the zero point energy, the uncertainty relations, coherent states, squeezed states, and correlated states for one mode is investigated. This group-theoretic perspective enables the parametrization and identification of their multimode generalization. In particular the generalized Schroedinger-Robertson uncertainty relations are analyzed. An elementary method of determining the canonical structure of the generalized correlated states is presented.
AN ALGORITHM FOR JORDAN CANONICAL FORM OF A QUATERNION MATRIX
Institute of Scientific and Technical Information of China (English)
姜同松; 魏木生
2003-01-01
In this paper, we first introduce the concept of a companion vector and study the Jordan canonical forms of quaternion matrices using the methods of complex representation and companion vectors. We not only give a practical algorithm for the Jordan canonical form J of a quaternion matrix A, but also provide a practical algorithm for the corresponding nonsingular matrix P with P^{-1}AP = J.
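For context, Jordan-form computations for a quaternion matrix are typically carried out on its complex representation. A minimal sketch, assuming the standard block convention for A = A1 + A2·j (sign conventions vary between authors):

```python
import numpy as np

def complex_representation(A1, A2):
    """Complex adjoint of the quaternion matrix A = A1 + A2*j, where A1 and
    A2 are complex n x n blocks. Eigenstructure and Jordan-form computations
    for A can be carried out on this 2n x 2n complex matrix."""
    top = np.hstack([A1, A2])
    bottom = np.hstack([-np.conj(A2), np.conj(A1)])
    return np.vstack([top, bottom])
```

The algorithms in the paper work on top of this representation; the block layout above is one common convention.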
Canonical derivation of the Vlasov-Coulomb noncanonical Poisson structure
Energy Technology Data Exchange (ETDEWEB)
Kaufman, A.N.; Dewar, R.L.
1983-09-01
Starting from a Lagrangian formulation of the Vlasov-Coulomb system, canonical methods are used to define a Poisson structure for this system. Successive changes of representation then lead systematically to the noncanonical Lie-Poisson structure for functionals of the Vlasov distribution.
CANONICAL COMPUTATIONAL FORMS FOR AR 2-D SYSTEMS
ROCHA, P; WILLEMS, JC
1990-01-01
A canonical form for AR 2-D systems representations is introduced. This yields a method for computing the system trajectories by means of a line-by-line recursion, and displays some relevant information about the system structure such as the choice of inputs and initial conditions.
Multiscale macromolecular simulation: role of evolving ensembles.
Singharoy, A; Joshi, H; Ortoleva, P J
2012-10-22
Multiscale analysis provides an algorithm for the efficient simulation of macromolecular assemblies. This algorithm involves the coevolution of a quasiequilibrium probability density of atomic configurations and the Langevin dynamics of spatial coarse-grained variables, denoted order parameters (OPs), characterizing nanoscale system features. In practice, implementation of the probability density involves the generation of constant-OP ensembles of atomic configurations. Such ensembles are used to construct thermal forces and diffusion factors that mediate the stochastic OP dynamics. Generation of all-atom ensembles at every Langevin time step is computationally expensive. Here, multiscale computation for macromolecular systems is made more efficient by a method that self-consistently folds in ensembles of all-atom configurations constructed in an earlier step (the history) of the Langevin evolution. This procedure accounts for the temporal evolution of these ensembles, accurately providing thermal forces and diffusion factors. It is shown that the efficiency and accuracy of the OP-based simulations are increased via the integration of this historical information. Accuracy improves with the square root of the number of historical time steps included in the calculation. As a result, CPU usage can be decreased by a factor of 3-8 without loss of accuracy. The algorithm is implemented in our existing force-field-based multiscale simulation platform and demonstrated via the structural dynamics of viral capsomers.
Nimon, Kim; Henson, Robin K.; Gates, Michael S.
2010-01-01
In the face of multicollinearity, researchers face challenges interpreting canonical correlation analysis (CCA) results. Although standardized function and structure coefficients provide insight into the canonical variates produced, they fall short when researchers want to fully report canonical effects. This article revisits the interpretation of…
Ensemble Bayesian model averaging using Markov Chain Monte Carlo sampling
Vrugt, J.A.; Diks, C.G.H.; Clark, M.
2008-01-01
Bayesian model averaging (BMA) has recently been proposed as a statistical method to calibrate forecast ensembles from numerical weather models. Successful implementation of BMA, however, requires accurate estimates of the weights and variances of the individual competing models in the ensemble. In t
CANONICAL EXTENSIONS OF SYMMETRIC LINEAR RELATIONS
Sandovici, Adrian; Davidson, KR; Gaspar, D; Stratila, S; Timotin, D; Vasilescu, FH
2006-01-01
The concept of canonical extension of Hermitian operators has been recently introduced by A. Kuzhel. This paper deals with a generalization of this notion to the case of symmetric linear relations. Namely, canonical regular extensions of symmetric linear relations in Hilbert spaces are studied. The
De canon : een oude katholieke kerkstructuur?
Smit, P.B.A.
2011-01-01
On 30 November 2011, theologian Prof. Dr. Peter-Ben Smit delivered his inaugural lecture at Utrecht University. In it he examines how the canon of the New Testament came into being within the early church, and what function the canon had in the interpretation (exegesis) of Scripture. This subject came
The Current Canon in British Romantics Studies.
Linkin, Harriet Kramer
1991-01-01
Describes and reports on a survey of 164 U.S. universities to ascertain what is taught as the current canon of British Romantic literature. Asserts that the canon may now include Mary Shelley with the former standard six major male Romantic poets, indicating a significant emergence of a feminist perspective on British Romanticism in the classroom.…
Subsets of configurations and canonical partition functions
DEFF Research Database (Denmark)
Bloch, J.; Bruckmann, F.; Kieburg, M.;
2013-01-01
We explain the physical nature of the subset solution to the sign problem in chiral random matrix theory: the subset sum over configurations is shown to project out the canonical determinant with zero quark charge from a given configuration. As the grand canonical chiral random matrix partition...
Canonical structure of 2D black holes
Navarro-Salas, J; Talavera, C F
1994-01-01
We determine the canonical structure of two-dimensional black-hole solutions arising in $2D$ dilaton gravity. By choosing the Cauchy surface appropriately we find that the canonically conjugate variable to the black hole mass is given by the difference of local (Schwarzschild) time translations at right and left spatial infinities. This can be regarded as a generalization of Birkhoff's theorem.
Control and Synchronization of Neuron Ensembles
Li, Jr-Shin; Ruths, Justin
2011-01-01
Synchronization of oscillations is a phenomenon prevalent in natural, social, and engineering systems. Controlling synchronization of oscillating systems is motivated by a wide range of applications from neurological treatment of Parkinson's disease to the design of neurocomputers. In this article, we study the control of an ensemble of uncoupled neuron oscillators described by phase models. We examine controllability of such a neuron ensemble for various phase models and, furthermore, study the related optimal control problems. In particular, by employing Pontryagin's maximum principle, we analytically derive optimal controls for spiking single- and two-neuron systems, and analyze the applicability of the latter to an ensemble system. Finally, we present a robust computational method for optimal control of spiking neurons based on pseudospectral approximations. The methodology developed here is universal to the control of general nonlinear phase oscillators.
Ensemble Enabled Weighted PageRank
Luo, Dongsheng; Hu, Renjun; Duan, Liang; Ma, Shuai
2016-01-01
This paper describes our solution for the WSDM Cup 2016. Ranking the query-independent importance of scholarly articles is a critical and challenging task, due to the heterogeneity and dynamism of the entities involved. Our approach is called Ensemble enabled Weighted PageRank (EWPR). We first propose Time-Weighted PageRank, which extends PageRank by introducing a time-decaying factor. We then develop an ensemble method to assemble the authorities of the heterogeneous entities involved in scholarly articles. We finally propose using external data sources to further improve the ranking accuracy. Our experimental study shows that EWPR is a good choice for ranking scholarly articles.
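A minimal sketch of a time-decayed PageRank of the kind the abstract describes; the exponential decay form, parameter names, and dangling-node handling below are illustrative assumptions, not taken from the EWPR paper.

```python
import numpy as np

def time_weighted_pagerank(edges, years, n, d=0.85, decay=0.1,
                           ref_year=2016, iters=50):
    """Power iteration where a citation made in year y contributes with
    weight exp(-decay * (ref_year - y)), so recent citations count more."""
    W = np.zeros((n, n))
    for (src, dst), y in zip(edges, years):
        W[dst, src] += np.exp(-decay * (ref_year - y))
    col = W.sum(axis=0)
    col[col == 0] = 1.0          # avoid division by zero for dangling nodes
    P = W / col                   # column-stochastic transition matrix
    r = np.full(n, 1.0 / n)
    for _ in range(iters):
        r = (1 - d) / n + d * P @ r   # standard damped PageRank iteration
    return r / r.sum()
```

With decay set to zero this reduces to ordinary PageRank on the weighted citation graph.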
Reconstruction of the coupling architecture in an ensemble of coupled time-delay systems
Sysoev, I. V.; Ponomarenko, V. I.; Prokhorov, M. D.
2012-08-01
A method for reconstructing the coupling architecture and values in an ensemble of time-delay interacting systems with an arbitrary number of couplings between ensemble elements is proposed. This method is based on reconstruction of the model equations of ensemble elements and diagnostics of the coupling significance by successive trial exclusion or adding coupling coefficients to the model.
Canon, Jubilees 23 and Psalm 90
Directory of Open Access Journals (Sweden)
Pieter M. Venter
2014-02-01
There never existed only one form of the biblical canon. This can be seen in the versions as well as editions of the Hebrew and Greek Bibles. History and circumstances played a central role in the gradual growth of eventually different forms of the biblical canon. This process can be studied using the discipline of intertextuality. There always was a movement from traditum to traditio in the growth of these variant forms of biblical canon. This can be seen in an analysis of the intertextuality in Jubilees 23:8–32. The available canon of the day was interpreted there, not according to a specific demarcated volume of canonical scriptures, but in line with the theology presented in those materials, especially that of Psalm 90.
Statistical ensembles of virialized halo matter density profiles
Carron, Julien
2013-01-01
We define and study statistical ensembles of matter density profiles describing spherically symmetric, virialized dark matter haloes of finite extent with a given mass and total gravitational potential energy. We provide an exact solution for the grand canonical partition functional, and show its equivalence to that of the microcanonical ensemble. We obtain analytically the mean profiles that correspond to an overwhelming majority of micro-states. All such profiles have an infinitely deep potential well, with the singular isothermal sphere arising in the infinite temperature limit. Systems with virial radius larger than gravitational radius exhibit a localization of a finite fraction of the energy in the very center. The universal logarithmic inner slope of unity of the NFW haloes is predicted at any mass and energy if an upper bound is set to the maximal depth of the potential well. In this case, the statistically favored mean profiles compare well to the NFW profiles. For very massive haloes the agreement b...
Total probabilities of ensemble runoff forecasts
Skøien, Jon Olav; Bogner, Konrad; Salamon, Peter; Smith, Paul; Pappenberger, Florian
2016-04-01
Ensemble forecasting has long been used in meteorological modelling to indicate the uncertainty of the forecasts. However, as the ensembles often exhibit both bias and dispersion errors, it is necessary to calibrate and post-process them. Two of the most common methods for this are Bayesian Model Averaging (Raftery et al., 2005) and Ensemble Model Output Statistics (EMOS) (Gneiting et al., 2005). There are also methods for regionalizing these approaches (Berrocal et al., 2007) and for incorporating the correlation between lead times (Hemri et al., 2013). Engeland and Steinsland (2014) developed a framework which can estimate post-processing parameters that differ in space and time while still giving spatially and temporally consistent output. However, their method is computationally complex for our large number of stations, and cannot directly be regionalized in the way we would like, so we suggest a different path below. The target of our work is to create a mean forecast with uncertainty bounds for a large number of locations in the framework of the European Flood Awareness System (EFAS - http://www.efas.eu). We are therefore more interested in improving the forecast skill for high flows than the forecast skill at lower runoff levels. EFAS uses a combination of ensemble forecasts and deterministic forecasts from different forecasters to force a distributed hydrologic model and to compute runoff ensembles for each river pixel within the model domain. Instead of showing the mean and the variability of each forecast ensemble individually, we will now post-process all model outputs to find a total probability, the post-processed mean and uncertainty of all ensembles. The post-processing parameters are first calibrated for each calibration location, while assuring that they have some spatial correlation by adding a spatial penalty in the calibration process. This can in some cases have a slight negative
Multilevel ensemble Kalman filtering
Hoel, Hakon
2016-06-14
This work embeds a multilevel Monte Carlo sampling strategy into the Monte Carlo step of the ensemble Kalman filter (EnKF) in the setting of finite dimensional signal evolution and noisy discrete-time observations. The signal dynamics is assumed to be governed by a stochastic differential equation (SDE), and a hierarchy of time grids is introduced for multilevel numerical integration of that SDE. The resulting multilevel EnKF is proved to asymptotically outperform EnKF in terms of computational cost versus approximation accuracy. The theoretical results are illustrated numerically.
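For context, the multilevel scheme builds on the standard EnKF update. Below is a minimal sketch of a single-level stochastic EnKF analysis step (the multilevel Monte Carlo layering itself is not shown); variable names are illustrative.

```python
import numpy as np

def enkf_update(ensemble, obs, H, obs_cov, rng):
    """Stochastic EnKF analysis step: each member is nudged toward a
    perturbed observation using the sample Kalman gain."""
    X = np.asarray(ensemble, dtype=float)       # (n_members, state_dim)
    n = X.shape[0]
    A = X - X.mean(axis=0)                      # state anomalies
    HX = X @ H.T                                # members mapped to obs space
    HA = HX - HX.mean(axis=0)                   # obs-space anomalies
    P_hh = HA.T @ HA / (n - 1) + obs_cov        # innovation covariance
    P_xh = A.T @ HA / (n - 1)                   # state-obs cross covariance
    K = P_xh @ np.linalg.inv(P_hh)              # sample Kalman gain
    perturbed = obs + rng.multivariate_normal(np.zeros(len(obs)), obs_cov,
                                              size=n)
    return X + (perturbed - HX) @ K.T
```

The multilevel variant of the paper replaces the single ensemble with a hierarchy of coupled ensembles on coarser and finer time grids, reducing cost at fixed accuracy.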
Institute of Scientific and Technical Information of China (English)
朱群雄; 赵乃伟; 徐圆
2012-01-01
Chemical processes are complex, and traditional neural network models usually cannot achieve satisfactory accuracy for them. Selective neural network ensembles are an effective way to enhance the generalization accuracy of networks, but some problems remain, e.g., the lack of a unified definition of diversity among component neural networks, and the difficulty of improving accuracy by selection when the diversities of the available networks are small. In this study, the output errors of the networks are vectorized, the diversity of the networks is defined based on the error vectors, and the size of the ensemble is analyzed. An error-vectorization-based selective neural network ensemble (EVSNE) is then proposed, in which the error vector of each network can offset those of the other networks by training the component networks in order. Thus the component networks have large diversity. Experiments and comparisons over standard data sets and an actual chemical process data set for the production of high-density polyethylene demonstrate that EVSNE performs better in generalization ability.
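The idea of error vectors offsetting each other can be illustrated with a simplified greedy selection; this is not the authors' EVSNE training procedure, only a sketch of the underlying diversity criterion.

```python
import numpy as np

def select_by_error_vectors(error_vectors, k):
    """Greedily pick k members whose error vectors best cancel the running
    sum, so the selected members' errors offset each other (high diversity
    in the error-vector sense)."""
    E = np.asarray(error_vectors, dtype=float)   # (n_models, n_samples)
    chosen, total = [], np.zeros(E.shape[1])
    for _ in range(k):
        remaining = [i for i in range(len(E)) if i not in chosen]
        best = min(remaining, key=lambda i: np.linalg.norm(total + E[i]))
        chosen.append(best)
        total = total + E[best]
    return chosen
```

A small residual norm of the summed error vectors means the simple average of the chosen members has a small aggregate error on the reference samples.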
Ensemble nonequivalence in random graphs with modular structure
Garlaschelli, Diego; Roccaverde, Andrea
2016-01-01
Breaking of equivalence between the microcanonical ensemble and the canonical ensemble, describing a large system subject to hard and soft constraints, respectively, was recently shown to occur in large random graphs. Hard constraints must be met by every graph, soft constraints must be met only on average, subject to maximal entropy. In Squartini et al. (2015) it was shown that ensembles of random graphs are non-equivalent when the degrees of the nodes are constrained, in the sense of a non-zero limiting specific relative entropy as the number of nodes diverges. In that paper, the nodes were placed either on a single layer (uni-partite graphs) or on two layers (bi-partite graphs). In the present paper we consider an arbitrary number of intra-connected and inter-connected layers, thus allowing for modular graphs with a multi-partite, multiplex, block-model or community structure. We give a full classification of ensemble equivalence, proving that breakdown occurs if and only if the number of local constraints...
Critical behavior in topological ensembles
Bulycheva, K; Nechaev, S
2014-01-01
We consider the relation between three physical problems: 2D directed lattice random walks in an external magnetic field, ensembles of torus knots, and 5d Abelian SUSY gauge theory with massless hypermultiplet in $\\Omega$ background. All these systems exhibit the critical behavior typical for the "area+length" statistics of grand ensembles of 2D directed paths. In particular, using the combinatorial description, we have found the new critical behavior in the ensembles of the torus knots and in the instanton ensemble in 5d gauge theory. The relation with the integrable model is discussed.
Ensemble Forecast: A New Approach to Uncertainty and Predictability
Institute of Scientific and Technical Information of China (English)
无
2005-01-01
Ensemble techniques have been used to generate daily numerical weather forecasts since the 1990s in numerical centers around the world, owing to the increase in computational ability. One of the main purposes of numerical ensemble forecasts is to try to account for the initial uncertainty (initial error) and the forecast uncertainty (forecast error) by applying either the initial perturbation method or the multi-model/multi-physics method. In fact, the mean of an ensemble forecast offers a better forecast than a deterministic (or control) forecast after a short lead time (3-5 days) for global modelling applications. There is about a 1-2-day improvement in forecast skill when using an ensemble mean instead of a single forecast at longer lead times. The skillful forecast (an anomaly correlation of 65% and above) can be extended to 8 days (or longer) by present-day ensemble forecast systems. Furthermore, ensemble forecasts can deliver a probabilistic forecast to users, based on the probability density function (PDF) instead of the single-value forecast of a traditional deterministic system. It has long been recognized that the ensemble forecast not only improves our weather forecast predictability but also offers a remarkable forecast of future uncertainty, such as the relative measure of predictability (RMOP) and the probabilistic quantitative precipitation forecast (PQPF). Not surprisingly, the success of the ensemble forecast and its wide application have greatly increased the confidence of model developers and research communities.
The canonical form of the Rabi hamiltonian
Szopa, M; Ceulemans, A; Szopa, Marek; Mys, Geert; Ceulemans, Arnout
1996-01-01
The Rabi Hamiltonian, describing the coupling of a two-level system to a single quantized boson mode, is studied in the Bargmann-Fock representation. The corresponding system of differential equations is transformed into a canonical form in which all regular singularities between zero and infinity have been removed. The canonical or Birkhoff-transformed equations give rise to a two-dimensional eigenvalue problem, involving the energy and a transformational parameter which affects the coupling strength. The known isolated exact solutions of the Rabi Hamiltonian are found to correspond to the uncoupled form of the canonical system.
Ensemble Deep Learning for Biomedical Time Series Classification
Directory of Open Access Journals (Sweden)
Lin-peng Jin
2016-01-01
Ensemble learning has been proved to improve the generalization ability effectively in both theory and practice. In this paper, we briefly outline the current status of research on it first. Then, a new deep neural network-based ensemble method that integrates filtering views, local views, distorted views, explicit training, implicit training, subview prediction, and Simple Average is proposed for biomedical time series classification. Finally, we validate its effectiveness on the Chinese Cardiovascular Disease Database containing a large number of electrocardiogram recordings. The experimental results show that the proposed method has certain advantages compared to some well-known ensemble methods, such as Bagging and AdaBoost.
Ensemble Deep Learning for Biomedical Time Series Classification.
Jin, Lin-Peng; Dong, Jun
2016-01-01
Ensemble learning has been proved to improve the generalization ability effectively in both theory and practice. In this paper, we briefly outline the current status of research on it first. Then, a new deep neural network-based ensemble method that integrates filtering views, local views, distorted views, explicit training, implicit training, subview prediction, and Simple Average is proposed for biomedical time series classification. Finally, we validate its effectiveness on the Chinese Cardiovascular Disease Database containing a large number of electrocardiogram recordings. The experimental results show that the proposed method has certain advantages compared to some well-known ensemble methods, such as Bagging and AdaBoost.
The role of ensemble post-processing for modeling the ensemble tail
Van De Vyver, Hans; Van Schaeybroeck, Bert; Vannitsem, Stéphane
2016-04-01
Over the past decades, the numerical weather prediction community has witnessed a paradigm shift from deterministic to probabilistic forecasting and state estimation (Buizza and Leutbecher, 2015; Buizza et al., 2008), in an attempt to quantify the uncertainties associated with initial-condition and model errors. An important benefit of a probabilistic framework is the improved prediction of extreme events. However, one may ask to what extent such model estimates contain information on the occurrence probability of extreme events and how this information can be optimally extracted. Different approaches have been proposed and applied to real-world systems which, based on extreme value theory, allow the estimation of extreme-event probabilities conditional on forecasts and state estimates (Ferro, 2007; Friederichs, 2010). Using ensemble predictions generated with a model of low dimensionality, a thorough investigation is presented quantifying the change in predictability of extreme events associated with ensemble post-processing and other influencing factors, including the finite ensemble size, lead time, model assumptions, and the use of different covariates (ensemble mean, maximum, spread...) for modeling the tail distribution. Tail modeling is performed by deriving extreme-quantile estimates using a peaks-over-threshold representation (generalized Pareto distribution) or quantile regression. Common ensemble post-processing methods aim to improve mostly the ensemble mean and spread of a raw forecast (Van Schaeybroeck and Vannitsem, 2015). Conditional tail modeling, on the other hand, is a post-processing in itself, focusing on the tails only. Therefore, it is unclear how applying ensemble post-processing prior to conditional tail modeling impacts the skill of extreme-event predictions. This work investigates this question in detail. Buizza, Leutbecher, and Isaksen, 2008: Potential use of an ensemble of analyses in the ECMWF Ensemble Prediction System, Q. J. R. Meteorol
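The peaks-over-threshold tail modeling mentioned above can be sketched as follows; moment estimators are used here as a simple stand-in for the likelihood-based fitting typically employed.

```python
import numpy as np

def fit_gpd_tail(values, threshold):
    """Peaks-over-threshold sketch: moment estimators for the generalized
    Pareto shape (xi) and scale (sigma) of exceedances over `threshold`,
    returning a conditional tail quantile function."""
    exc = np.asarray(values, dtype=float)
    exc = exc[exc > threshold] - threshold
    m, v = exc.mean(), exc.var(ddof=1)
    # Moment estimators: mean = sigma/(1-xi), var = sigma^2/((1-xi)^2 (1-2xi))
    xi = 0.5 * (1.0 - m * m / v)
    sigma = 0.5 * m * (1.0 + m * m / v)
    def quantile(p):
        if abs(xi) < 1e-9:                      # exponential limit
            return threshold - sigma * np.log(1.0 - p)
        return threshold + sigma / xi * ((1.0 - p) ** (-xi) - 1.0)
    return quantile
```

In a conditional tail model, the exceedances would additionally be regressed on covariates such as the ensemble mean or spread.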
Excitations and benchmark ensemble density functional theory for two electrons
Pribram-Jones, Aurora; Trail, John R; Burke, Kieron; Needs, Richard J; Ullrich, Carsten A
2014-01-01
A new method for extracting ensemble Kohn-Sham potentials from accurate excited state densities is applied to a variety of two electron systems, exploring the behavior of exact ensemble density functional theory. The issue of separating the Hartree energy and the choice of degenerate eigenstates is explored. A new approximation, spin eigenstate Hartree-exchange (SEHX), is derived. Exact conditions that are proven include the signs of the correlation energy components, the virial theorem for both exchange and correlation, and the asymptotic behavior of the potential for small weights of the excited states. Many energy components are given as a function of the weights for two electrons in a one-dimensional flat box, in a box with a large barrier to create charge transfer excitations, in a three-dimensional harmonic well (Hooke's atom), and for the He atom singlet-triplet ensemble, singlet-triplet-singlet ensemble, and triplet bi-ensemble.
Nandi, Debottam; Shankaranarayanan, S.
2016-10-01
In this work, we present a consistent Hamiltonian analysis of cosmological perturbations for generalized non-canonical scalar fields. In order to do so, we introduce a new phase-space variable that is uniquely defined for different non-canonical scalar fields. We also show that this is the simplest and most efficient way of expressing the Hamiltonian. We extend the Hamiltonian approach of [1] to non-canonical scalar fields and obtain a unique expression for the speed of sound in terms of the phase-space variable. In order to invert the generalized phase-space Hamilton's equations to Euler-Lagrange equations of motion, we prescribe a general inversion formula and show that our approach for non-canonical scalar fields is consistent. We also obtain the third- and fourth-order interaction Hamiltonians for generalized non-canonical scalar fields and briefly discuss the extension of our method to generalized Galilean scalar fields.
Modality-Driven Classification and Visualization of Ensemble Variance
Energy Technology Data Exchange (ETDEWEB)
Bensema, Kevin; Gosink, Luke; Obermaier, Harald; Joy, Kenneth I.
2016-10-01
Advances in computational power now enable domain scientists to address conceptual and parametric uncertainty by running simulations multiple times in order to sufficiently sample the uncertain input space. While this approach helps address conceptual and parametric uncertainties, the ensemble datasets produced by this technique present a special challenge to visualization researchers as the ensemble dataset records a distribution of possible values for each location in the domain. Contemporary visualization approaches that rely solely on summary statistics (e.g., mean and variance) cannot convey the detailed information encoded in ensemble distributions that are paramount to ensemble analysis; summary statistics provide no information about modality classification and modality persistence. To address this problem, we propose a novel technique that classifies high-variance locations based on the modality of the distribution of ensemble predictions. Additionally, we develop a set of confidence metrics to inform the end-user of the quality of fit between the distribution at a given location and its assigned class. We apply a similar method to time-varying ensembles to illustrate the relationship between peak variance and bimodal or multimodal behavior. These classification schemes enable a deeper understanding of the behavior of the ensemble members by distinguishing between distributions that can be described by a single tendency and distributions which reflect divergent trends in the ensemble.
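A crude version of the modality classification described above, using a smoothed histogram and a peak-prominence floor (both illustrative choices, not the paper's method):

```python
import numpy as np

def modality(samples, bins=12):
    """Classify the distribution of ensemble values at one location as
    unimodal or multimodal by counting local maxima of a smoothed
    histogram, ignoring tiny noise bumps below 10% of the peak."""
    hist, _ = np.histogram(samples, bins=bins)
    smooth = np.convolve(hist, [0.25, 0.5, 0.25], mode="same")
    floor = 0.1 * smooth.max()
    peaks = sum(
        1 for i in range(1, len(smooth) - 1)
        if smooth[i] > floor
        and smooth[i] > smooth[i - 1] and smooth[i] >= smooth[i + 1]
    )
    return "unimodal" if peaks <= 1 else "multimodal"
```

Applied per grid location and per time step, such a classifier distinguishes distributions with a single tendency from those reflecting divergent trends among ensemble members.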
Canonical equations of Hamilton with beautiful symmetry
Liang, Guo; Guo, Qi
2012-01-01
The Hamiltonian formulation plays an essential role in constructing the framework of modern physics. In this paper, a new form of the canonical equations of Hamilton with complete symmetry is obtained, which is valid not only for first-order differential systems but also for second-order differential systems. The conventional form of the canonical equations without this symmetry [Goldstein et al., Classical Mechanics, 3rd ed., Addison-Wesley, 2001] applies only to second-order differential systems. It is pointed out for the first time that the number of canonical equations for a first-order differential system is half of that for a second-order differential system. The nonlinear Schrödinger equation, a universal first-order differential system, can be expressed with the new canonical equations in a consistent way.
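For context, the conventional asymmetric form referred to in this abstract is the standard pair of Hamilton's equations for an n-degree-of-freedom (second-order) system, where the asymmetry is the relative minus sign:

```latex
\dot{q}_i = \frac{\partial H}{\partial p_i}, \qquad
\dot{p}_i = -\,\frac{\partial H}{\partial q_i}, \qquad i = 1, \dots, n .
```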
ESPC Coupled Global Ensemble Design
2014-09-30
coupled system infrastructure and forecasting capabilities. Initial operational capability is targeted for 2018. APPROACH 1. It is recognized... provided will be the probability distribution function (PDF) of environmental conditions. It is expected that this distribution will have skill. To... system would be the initial capability for ensemble forecasts. Extensions to fully coupled ensembles would be the next step. 2. Develop an extended
A Method of Analog Circuit Fault Diagnosis Based on Selective SVM Ensemble
Institute of Scientific and Technical Information of China (English)
吴杰长; 刘海松; 陈国钧
2011-01-01
To overcome the shortcomings of support vector machines (SVMs) in fault-diagnosis applications, a selective SVM ensemble learning algorithm based on cluster analysis is designed and applied to analog circuit fault diagnosis. The method uses the K-means clustering algorithm to remove similar, redundant individuals, which increases the diversity of the remaining individual learners and improves the generalization ability of the SVM ensemble model. Simulation experiments were carried out on the Leap-Frog filter circuit from the ITC'97 benchmark set.
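The selection step can be sketched in a few lines (a toy stand-in: plain k-means over the members' 0/1 prediction vectors replaces the paper's clustering of trained SVM individuals, and all names and tie-breaking rules are illustrative):

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Plain k-means on real-valued vectors (stdlib only)."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2
                                      for a, b in zip(p, centroids[c])))
            clusters[j].append(p)
        for j, cl in enumerate(clusters):
            if cl:  # keep the old centroid if a cluster empties out
                centroids[j] = [sum(col) / len(cl) for col in zip(*cl)]
    return centroids

def select_diverse(preds, k):
    """Keep one ensemble member per cluster of prediction vectors,
    discarding near-duplicate (redundant) members."""
    centroids = kmeans(preds, k)
    chosen = []
    for c in centroids:
        i = min(range(len(preds)),
                key=lambda j: sum((a - b) ** 2 for a, b in zip(preds[j], c)))
        if i not in chosen:
            chosen.append(i)
    return chosen

def majority_vote(preds, members):
    """0/1 class decision per sample by majority over the kept members."""
    votes_per_sample = zip(*(preds[i] for i in members))
    return [1 if 2 * sum(v) >= len(v) else 0 for v in votes_per_sample]
```

Keeping one representative per cluster prevents near-identical members from dominating the vote, which is the diversity argument made in the abstract.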
Investigating the Dynamics of Canonical Flux Tubes
von der Linden, Jens; Sears, Jason; Intrator, Thomas; You, Setthivoine
2016-10-01
Canonical flux tubes are flux tubes of the circulation of a species' canonical momentum. They provide a convenient generalization of magnetic flux tubes to regimes beyond magnetohydrodynamics (MHD). We hypothesize that hierarchies of instabilities which couple disparate scales could transfer magnetic pitch into helical flows and vice versa while conserving the total canonical helicity. This work first explores the possibility of a sausage instability existing on top of a kink as mechanism for coupling scales, then presents the evolution of canonical helicity in a gyrating kinked flux rope. Analytical and numerical stability spaces derived for magnetic flux tubes with core and skin currents indicate that, as a flux tube lengthens and collimates, it may become kink unstable with a sausage instability developing on top of the kink. A new analysis of 3D magnetic field and ion flow data on gyrating kinked magnetic flux ropes from the Reconnection Scaling Experiment tracks the evolution of canonical flux tubes and their helicity. These results and methodology are being developed as part of the Mochi experiment specifically designed to observe the dynamics of canonical flux tubes. This work is supported by DOE Grant DE-SC0010340 and the DOE Office of Science Graduate Student Research Program and prepared in part by LLNL under Contract DE-AC52-07NA27344. LLNL-ABS-697161.
Directory of Open Access Journals (Sweden)
Alexander M Many
The characterization of mammary stem cells, and signals that regulate their behavior, is of central importance in understanding developmental changes in the mammary gland and possibly for targeting stem-like cells in breast cancer. The canonical Wnt/β-catenin pathway is a signaling mechanism associated with maintenance of self-renewing stem cells in many tissues, including mammary epithelium, and can be oncogenic when deregulated. Wnt1 and Wnt3a are examples of ligands that activate the canonical pathway. Other Wnt ligands, such as Wnt5a, typically signal via non-canonical, β-catenin-independent, pathways that in some cases can antagonize canonical signaling. Since the role of non-canonical Wnt signaling in stem cell regulation is not well characterized, we set out to investigate this using mammosphere formation assays that reflect and quantify stem cell properties. Ex vivo mammosphere cultures were established from both wild-type and Wnt1 transgenic mice and were analyzed in response to manipulation of both canonical and non-canonical Wnt signaling. An increased level of mammosphere formation was observed in cultures derived from MMTV-Wnt1 versus wild-type animals, and this was blocked by treatment with Dkk1, a selective inhibitor of canonical Wnt signaling. Consistent with this, we found that a single dose of recombinant Wnt3a was sufficient to increase mammosphere formation in wild-type cultures. Surprisingly, we found that Wnt5a also increased mammosphere formation in these assays. We confirmed that this was not caused by an increase in canonical Wnt/β-catenin signaling but was instead mediated by non-canonical Wnt signals requiring the receptor tyrosine kinase Ror2 and activity of the Jun N-terminal kinase, JNK. We conclude that both canonical and non-canonical Wnt signals have positive effects promoting stem cell activity in mammosphere assays and that they do so via independent signaling mechanisms.
The dark sector from interacting canonical and non-canonical scalar fields
Energy Technology Data Exchange (ETDEWEB)
De Souza, Rudinei C; Kremer, Gilberto M, E-mail: kremer@Fisica.ufpr.b [Departamento de Fisica, Universidade Federal do Parana, Curitiba (Brazil)
2010-09-07
In this work, general models with interactions between two canonical scalar fields and between one non-canonical (tachyon-type) and one canonical scalar field are investigated. The potentials and couplings to gravity are selected through the Noether symmetry approach. These general models are employed to describe interactions between dark energy and dark matter, with the fields being constrained by the astronomical data. The cosmological solutions of some cases are compared with the observed evolution of the late Universe.
Dark Sector from Interacting Canonical and Non-Canonical Scalar Fields
de Souza, Rudinei C
2010-01-01
In this work, general models with interactions between two canonical scalar fields and between one non-canonical (tachyon-type) and one canonical scalar field are investigated. The potentials and couplings to gravity are selected through the Noether symmetry approach. These general models are employed to describe interactions between dark energy and dark matter, with the fields being constrained by the astronomical data. The cosmological solutions of some cases are compared with the observed evolution of the late Universe.
Eigenstate Gibbs ensemble in integrable quantum systems
Nandy, Sourav; Sen, Arnab; Das, Arnab; Dhar, Abhishek
2016-12-01
The eigenstate thermalization hypothesis conjectures that for a thermodynamically large system in one of its energy eigenstates, the reduced density matrix describing any finite subsystem is determined solely by a set of relevant conserved quantities. In a chaotic quantum system, only the energy is expected to play that role and hence eigenstates appear locally thermal. Integrable systems, on the other hand, possess an extensive number of such conserved quantities and therefore the reduced density matrix requires specification of all the corresponding parameters (generalized Gibbs ensemble). However, here we show by unbiased statistical sampling of the individual eigenstates with a given finite energy density that the local description of an overwhelming majority of these states of even such an integrable system is actually Gibbs-like, i.e., requires only the energy density of the eigenstate. Rare eigenstates that cannot be represented by the Gibbs ensemble can also be sampled efficiently by our method and their local properties are then shown to be described by appropriately truncated generalized Gibbs ensembles. We further show that the presence of these rare eigenstates differentiates the model from the chaotic case and leads to the system being described by a generalized Gibbs ensemble at long time under a unitary dynamics following a sudden quench, even when the initial state is a typical (Gibbs-like) eigenstate of the prequench Hamiltonian.
Unified expression for the calculation of thermal conductivity in the canonical ensemble
Chialvo, Ariel A.; Cummings, Peter T.
A proof is presented of the theoretical equivalence between the equations of E. Helfand [1960, Phys. Rev. 119, 1] and D. McQuarrie [1976, Statistical Mechanics (Harper & Row), Chap. 21] for the calculation of thermal conductivity via Einstein-type relations. Some theoretical implications of that equivalence are also discussed, such as the unification of the thermal conductivity expressions into one similar to that given for linear transport coefficients by F. C. Andrews [1967, J. Chem. Phys. 47, 3161].
Viney, N.R.; Bormann, H.; Breuer, L.; Bronstert, A.; Croke, B.F.W.; Frede, H.; Graff, T.; Hubrechts, L.; Huisman, J.A.; Jakeman, A.J.; Kite, G.W.; Lanini, J.; Leavesley, G.; Lettenmaier, D.P.; Lindstrom, G.; Seibert, J.; Sivapalan, M.; Willems, P.
2009-01-01
This paper reports on a project to compare predictions from a range of catchment models applied to a mesoscale river basin in central Germany and to assess various ensemble predictions of catchment streamflow. The models encompass a large range in inherent complexity and input requirements. In approximate order of decreasing complexity, they are DHSVM, MIKE-SHE, TOPLATS, WASIM-ETH, SWAT, PRMS, SLURP, HBV, LASCAM and IHACRES. The models are calibrated twice using different sets of input data. The two predictions from each model are then combined by simple averaging to produce a single-model ensemble. The 10 resulting single-model ensembles are combined in various ways to produce multi-model ensemble predictions. Both the single-model ensembles and the multi-model ensembles are shown to give predictions that are generally superior to those of their respective constituent models, both during a 7-year calibration period and a 9-year validation period. This occurs despite a considerable disparity in performance of the individual models. Even the weakest of models is shown to contribute useful information to the ensembles they are part of. The best model combination methods are a trimmed mean (constructed using the central four or six predictions each day) and a weighted mean ensemble (with weights calculated from calibration performance) that places relatively large weights on the better performing models. Conditional ensembles, in which separate model weights are used in different system states (e.g. summer and winter, high and low flows) generally yield little improvement over the weighted mean ensemble. However a conditional ensemble that discriminates between rising and receding flows shows moderate improvement. An analysis of ensemble predictions shows that the best ensembles are not necessarily those containing the best individual models. Conversely, it appears that some models that predict well individually do not necessarily combine well with other models in
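The two best-performing combination rules described above can be sketched directly (a minimal sketch assuming one streamflow prediction per model per day; function names are illustrative):

```python
def trimmed_mean(preds, keep=4):
    """Average the central `keep` member predictions at each time step,
    discarding the extremes on either side."""
    out = []
    for t in range(len(preds[0])):
        vals = sorted(p[t] for p in preds)
        drop = (len(vals) - keep) // 2
        central = vals[drop:drop + keep]
        out.append(sum(central) / len(central))
    return out

def weighted_mean(preds, weights):
    """Weighted combination; in the study the weights come from each
    model's calibration-period performance."""
    s = sum(weights)
    return [sum(w * p[t] for w, p in zip(weights, preds)) / s
            for t in range(len(preds[0]))]
```

The trimmed mean discards outlying members day by day, so even a model that is occasionally far off can still contribute on the days it is central.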
New QCD sum rules based on canonical commutation relations
Hayata, Tomoya
2012-04-01
A new derivation of QCD sum rules from canonical commutators is developed. It is a simple and straightforward generalization of the Thomas-Reiche-Kuhn sum rule, based on the Kugo-Ojima operator formalism of non-abelian gauge theory and a suitable subtraction of UV divergences. By applying the method to the vector and axial-vector currents in QCD, the exact Weinberg sum rules are examined. Vector-current sum rules and new fractional-power sum rules are also discussed.
The canonical Kravchuk basis for discrete quantum mechanics
Hakioglu, Tugrul; Wolf, Kurt Bernardo
2000-04-01
The well-known Kravchuk formalism of the harmonic oscillator, obtained from the direct discretization method, is shown to be a new way of formulating a discrete quantum phase space. It is shown that the Kravchuk oscillator Hamiltonian has a well-defined unitary canonical partner, which we identify with the quantum phase of the Kravchuk oscillator. The generalized discrete Wigner function formalism based on action and angle variables is applied to the Kravchuk oscillator, and its continuous limit is examined.
Ensemble manifold regularization.
Geng, Bo; Tao, Dacheng; Xu, Chao; Yang, Linjun; Hua, Xian-Sheng
2012-06-01
We propose an automatic approximation of the intrinsic manifold for general semi-supervised learning (SSL) problems. Unfortunately, it is not trivial to define an optimization function to obtain optimal hyperparameters. Usually, cross-validation is applied, but it does not necessarily scale up. Other problems stem from the suboptimality incurred by discrete grid search and from overfitting. Therefore, we develop an ensemble manifold regularization (EMR) framework to approximate the intrinsic manifold by combining several initial guesses. Algorithmically, we designed EMR carefully so that it 1) learns both the composite manifold and the semi-supervised learner jointly, 2) is fully automatic in learning the intrinsic manifold hyperparameters implicitly, 3) is conditionally optimal for intrinsic manifold approximation under a mild and reasonable assumption, and 4) is scalable to a large number of candidate manifold hyperparameters, from both time and space perspectives. Furthermore, we prove the convergence of EMR to its deterministic matrix at a root-n rate. Extensive experiments over both synthetic and real data sets demonstrate the effectiveness of the proposed framework.
Ensemble-based forecasting at Horns Rev: Ensemble conversion and kernel dressing
DEFF Research Database (Denmark)
Pinson, Pierre; Madsen, Henrik
…The obtained ensemble forecasts of wind power are then converted into predictive distributions with an original adaptive kernel dressing method. The shape of the kernels is driven by a mean-variance model, the parameters of which are recursively estimated in order to maximize the overall skill of obtained…
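The kernel dressing idea can be sketched as follows: each ensemble member is "dressed" with a kernel and the predictive distribution is their equal-weight mixture. This is only the basic idea; the paper's adaptive scheme estimates the kernel shape recursively from a mean-variance model, so the fixed `sigma` here is an assumption:

```python
import math

def kernel_dressed_cdf(members, sigma, x):
    """Predictive CDF at x: equal-weight mixture of Gaussian kernels
    centred on the ensemble members. A fixed `sigma` is an assumption;
    the paper drives the kernel shape with a recursively estimated
    mean-variance model."""
    phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return sum(phi((x - m) / sigma) for m in members) / len(members)
```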
2011-09-01
variable is appropriately sized for the region (UCAR 2010). 4. An Isotropic Joint-Ensemble Majumdar and Finochio (2010) develop a probability circle... Forecasting, 22, 671–675. UCAR, cited 2010: NCEP Perturbation Method. [Available online at http://www.meted.ucar.edu/nwp/pcu2/ens_matrix
Ensemble Local Mean Decomposition Method Based on Noise-assisted Analysis
Institute of Scientific and Technical Information of China (English)
程军圣; 张亢; 杨宇
2011-01-01
Local mean decomposition (LMD) is a recently developed self-adaptive time-frequency analysis method. When LMD is performed, a mode-mixing phenomenon can occur that distorts the decomposition results. The filter-bank structure of LMD in white noise is obtained by numerical experiments, and on this basis the ensemble local mean decomposition (ELMD) method is proposed to overcome the mode-mixing shortcoming. In ELMD, different realizations of white noise are added to the target signal, each noise-added signal is decomposed using LMD, and the average of the decomposition results over all realizations is taken as the final result. Analysis of a simulated signal and of an experimental rotor local rub-impact signal demonstrates that ELMD effectively mitigates the mode mixing of the original LMD method.
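The noise-assisted ensemble structure can be sketched as follows (a toy moving-average split stands in for the actual LMD sifting, which is too long to reproduce here; all parameter values are illustrative):

```python
import random

def toy_decompose(x, w=5):
    """Stand-in for LMD: split a signal into a smooth trend (moving
    average) plus residual. Real ELMD would run the full LMD sifting."""
    n = len(x)
    trend = []
    for i in range(n):
        lo, hi = max(0, i - w), min(n, i + w + 1)
        trend.append(sum(x[lo:hi]) / (hi - lo))
    resid = [a - b for a, b in zip(x, trend)]
    return trend, resid

def ensemble_decompose(x, n_trials=50, noise_std=0.1, seed=0):
    """ELMD structure: add independent white noise, decompose each noisy
    copy, and average the components so the added noise cancels."""
    rng = random.Random(seed)
    n = len(x)
    acc = [[0.0] * n, [0.0] * n]
    for _ in range(n_trials):
        noisy = [v + rng.gauss(0.0, noise_std) for v in x]
        for c, comp in enumerate(toy_decompose(noisy)):
            for i, v in enumerate(comp):
                acc[c][i] += v / n_trials
    return acc
```

Because each noise realization is independent, the averaged components converge to those of the clean signal while the noise perturbs each single decomposition enough to suppress mode mixing.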
Agonistic and antagonistic roles for TNIK and MINK in non-canonical and canonical Wnt signalling.
Directory of Open Access Journals (Sweden)
Alexander Mikryukov
Wnt signalling is a key regulatory factor in animal development and homeostasis and plays an important role in the establishment and progression of cancer. Wnt signals are predominantly transduced via the Frizzled family of serpentine receptors to two distinct pathways, the canonical β-catenin pathway and a non-canonical pathway controlling planar cell polarity and convergent extension. Interference between these pathways is an important determinant of cellular and phenotypic responses, but is poorly understood. Here we show that the MAP4K signalling kinases TNIK (Traf2 and Nck-interacting kinase) and MINK (Misshapen/NIKs-related kinase) are integral components of both canonical and non-canonical pathways in Xenopus. xTNIK and xMINK interact and are proteolytically cleaved in vivo to generate Kinase domain fragments that are active in signal transduction, and Citron-NIK-Homology (CNH) domain fragments that are suppressive. The catalytic activity of the Kinase domain fragments of both xTNIK and xMINK mediates non-canonical signalling. However, while the Kinase domain fragments of xTNIK also mediate canonical signalling, the analogous fragments derived from xMINK strongly antagonize this signalling. Our data suggest that the proteolytic cleavage of xTNIK and xMINK determines their respective activities and is an important factor in controlling the balance between canonical and non-canonical Wnt signalling in vivo.
Linear response calculation using the canonical-basis TDHFB with a schematic pairing functional
Ebata, Shuichiro; Yabana, Kazuhiro
2010-01-01
A canonical-basis formulation of the time-dependent Hartree-Fock-Bogoliubov (TDHFB) theory is obtained with an approximation that the pair potential is assumed to be diagonal in the time-dependent canonical basis. The canonical-basis formulation significantly reduces the computational cost. We apply the method to linear-response calculations for even-even nuclei. E1 strength distributions for proton-rich Mg isotopes are systematically calculated. The calculation suggests strong Landau damping of giant dipole resonance for drip-line nuclei.
Minary, Peter; Martyna, Glenn J.; Tuckerman, Mark E.
2003-02-01
In this paper (Paper I) and a companion paper (Paper II), novel algorithms and applications of the isokinetic ensemble as generated by Gauss' principle of least constraint, pioneered for use with molecular dynamics 20 years ago, are presented for biophysical, path integral, and Car-Parrinello based ab initio molecular dynamics. In Paper I, a new "extended system" version of the isokinetic equations of motion that overcomes the ergodicity problems inherent in the standard approach is developed using a new theory of non-Hamiltonian phase space analysis [M. E. Tuckerman et al., Europhys. Lett. 45, 149 (1999); J. Chem. Phys. 115, 1678 (2001)]. Reversible multiple time step integration schemes for the isokinetic methods, first presented by Zhang [J. Chem. Phys. 106, 6102 (1997)], are reviewed. Next, holonomic constraints are incorporated into the isokinetic methodology for use in fast, efficient biomolecular simulation studies. Model and realistic examples are presented in order to critically evaluate the performance of the new isokinetic molecular dynamics schemes. Comparisons are made to the now-standard canonical dynamics method, Nosé-Hoover chain dynamics [G. J. Martyna et al., J. Chem. Phys. 97, 2635 (1992)]. The new isokinetic techniques are found to yield more efficient sampling than the Nosé-Hoover chain method in both path integral molecular dynamics and biophysical molecular dynamics calculations. In Paper II, the use of isokinetic methods in Car-Parrinello based ab initio molecular dynamics calculations is presented.
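For orientation, the basic Gauss least-constraint isokinetic equations (prior to the extended-system modification developed in the paper) thermostat the kinetic energy exactly:

```latex
\dot{\mathbf{r}}_i = \frac{\mathbf{p}_i}{m_i}, \qquad
\dot{\mathbf{p}}_i = \mathbf{F}_i - \alpha\,\mathbf{p}_i, \qquad
\alpha = \frac{\sum_i \mathbf{F}_i \cdot \mathbf{p}_i / m_i}
              {\sum_i \mathbf{p}_i^{2} / m_i},
```

where the choice of the multiplier \(\alpha\) makes the total kinetic energy \(\sum_i \mathbf{p}_i^{2}/2m_i\) an exact constant of the motion.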
Staying Thermal with Hartree Ensemble Approximations
Salle, M; Vink, Jeroen C
2000-01-01
Using Hartree ensemble approximations to compute the real-time dynamics of scalar fields in 1+1 dimensions, we find that, with suitable initial conditions, approximate thermalization is achieved much faster than found in our previous work. At large times, depending on the interaction strength and temperature, the particle distribution slowly changes: the Bose-Einstein distribution of the particle densities develops classical features. We also discuss variations of our method which are numerically more efficient.
Canonical and micro-canonical typical entanglement of continuous variable systems
Energy Technology Data Exchange (ETDEWEB)
Serafini, A; Dahlsten, O C O; Gross, D; Plenio, M B [Institute for Mathematical Sciences, 53 Prince's Gate, Imperial College London, London SW7 2PG (United Kingdom)]
2007-08-03
We present a framework, compliant with the general canonical principle of statistical mechanics, to define measures on the set of pure Gaussian states of continuous variable systems. Within such a framework, we define two specific measures, referred to as 'micro-canonical' and 'canonical', and apply them to study systematically the statistical properties of the bipartite entanglement of n-mode pure Gaussian states at, respectively, given maximal energy and given temperature. We prove the 'concentration of measure' around a finite average, occurring for the entanglement in the thermodynamical limit in both the canonical and the micro-canonical approach. For finite n, we determine analytically the average and standard deviation of the entanglement (as quantified by the reduced purity) between one mode and all the other modes. Furthermore, we numerically investigate more general situations, clearly showing that the onset of the concentration of measure already occurs at relatively small n.
Canonical reduction for dilatonic gravity in 3+1 dimensions
Scott, T C; Mann, R B; Fee, G J
2016-01-01
We generalize the 1+1-dimensional gravity formalism of Ohta and Mann to 3+1 dimensions by developing the canonical reduction of a proposed formalism applied to a system coupled with a set of point particles. This is done via the Arnowitt-Deser-Misner method and by eliminating the resulting constraints and imposing coordinate conditions. The reduced Hamiltonian is completely determined in terms of the particles' canonical variables (coordinates, dilaton field and momenta). It is found that the equation governing the dilaton field under suitable gauge and coordinate conditions, including the absence of transverse-traceless metric components, is a logarithmic Schrödinger equation. Thus, although different, the 3+1 formalism retains some essential features of the earlier 1+1 formalism, in particular the means of obtaining a quantum theory for dilatonic gravity.
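The logarithmic Schrödinger equation mentioned above has the standard form (written here as an illustration, with a generic coupling constant b, not the paper's exact gauge-fixed equation):

```latex
i\hbar\,\frac{\partial \psi}{\partial t}
  = -\frac{\hbar^{2}}{2m}\,\nabla^{2}\psi
    + b\,\ln\!\bigl(|\psi|^{2}\bigr)\,\psi .
```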
Diurnal Ensemble Surface Meteorology Statistics
U.S. Environmental Protection Agency — Excel file containing diurnal ensemble statistics of 2-m temperature, 2-m mixing ratio and 10-m wind speed. This Excel file contains figures for Figure 2 in the...
DEFF Research Database (Denmark)
2004-01-01
Within the framework of the PSO-Ensemble project (FU2101) a demo application has been created. The application uses ECMWF ensemble forecasts. Two instances of the application are running; one for Nysted Offshore and one for the total production (except Horns Rev) in the Eltra area. The output is a… …is available via two password-protected web pages hosted at IMM and is used daily by Elsam and E2.
Universal canonical entropy for gravitating systems
Indian Academy of Sciences (India)
Ashok Chatterjee; Parthasarathi Majumdar
2004-10-01
The thermodynamics of general relativistic systems with boundary, obeying a Hamiltonian constraint in the bulk, is determined solely by the boundary quantum dynamics, and hence by the area spectrum. Assuming, for large area of the boundary, (a) an area spectrum as determined by non-perturbative canonical quantum general relativity (NCQGR), (b) an energy spectrum that bears a power law relation to the area spectrum, (c) an area law for the leading order microcanonical entropy, leading thermal fluctuation corrections to the canonical entropy are shown to be logarithmic in area with a universal coefficient. Since the microcanonical entropy also has universal logarithmic corrections to the area law (from quantum space-time fluctuations, as found earlier) the canonical entropy then has a universal form including logarithmic corrections to the area law. This form is shown to be independent of the index appearing in assumption (b). The index, however, is crucial in ascertaining the domain of validity of our approach based on thermal equilibrium.
Global canonical symmetry in a quantum system
Institute of Scientific and Technical Information of China (English)
李子平
1996-01-01
Based on the phase-space path integral for a system with a regular or singular Lagrangian, the generalized canonical Ward identities under global symmetry transformations in extended phase space are deduced, and thus the relations among Green functions can be found. The connection between canonical symmetries and conservation laws at the quantum level is established. It is pointed out that this connection, which holds in classical theories, is in general no longer preserved in quantum theories. The advantage of our formulation is that we do not need to carry out the integration over the canonical momenta in the phase-space generating functional, as is usually done. A precise discussion of the quantization of a nonlinear sigma model with Hopf and Chern-Simons terms is reexamined. The property of fractional spin at the quantum level is clarified.
Data assimilation using a climatologically augmented local ensemble transform Kalman filter
Directory of Open Access Journals (Sweden)
Matthew Kretschmer
2015-05-01
Ensemble data assimilation methods are potentially attractive because they provide a computationally affordable (and computationally parallel) means of obtaining flow-dependent background-error statistics. However, a limitation of these methods is that the rank of their flow-dependent background-error covariance estimate, and hence the space of possible analysis increments, is limited by the number of forecast ensemble members. To overcome this deficiency ensemble methods typically use empirical localisation, which allows more degrees of freedom for the analysis increment by suppressing spatially distant background correlations. The method presented here improves the performance of an Ensemble Kalman filter by increasing the size of the ensemble at analysis time in order to boost the rank of its background-error covariance estimate. The additional ensemble members added to the forecast ensemble at analysis time are created by adding a collection of ‘climatological’ perturbations to the forecast ensemble mean. These perturbations are constant in time and provide state space directions, possibly missed by the dynamically forecasted background ensemble, in which the analysis increment can correct the forecast mean based on observations. As the climatological perturbations are calculated once, there is negligible computational cost in obtaining the additional ensemble members at each analysis cycle. Included here are a formulation of the method, results of numerical experiments conducted with a spatiotemporally chaotic model in one spatial dimension and discussion of possible future extensions and applications. The numerical tests indicate that the method presented here has significant potential for improving analyses and forecasts.
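The augmentation step itself is simple enough to sketch (illustrative stdlib code, not the authors' implementation; a tiny rank routine is included to show the covariance-rank boost):

```python
def augment_ensemble(forecast, clim_perts):
    """Append 'climatological' members: forecast mean + fixed
    perturbations. The extra members are cheap (the perturbations are
    precomputed once) and raise the rank of the sample covariance."""
    dim = len(forecast[0])
    mean = [sum(m[i] for m in forecast) / len(forecast) for i in range(dim)]
    extra = [[mean[i] + p[i] for i in range(dim)] for p in clim_perts]
    return forecast + extra

def matrix_rank(rows, tol=1e-9):
    """Rank of a small matrix by Gaussian elimination (stdlib only)."""
    m = [row[:] for row in rows]
    rank = 0
    for col in range(len(m[0])):
        piv = next((r for r in range(rank, len(m))
                    if abs(m[r][col]) > tol), None)
        if piv is None:
            continue
        m[rank], m[piv] = m[piv], m[rank]
        for r in range(len(m)):
            if r != rank and abs(m[r][col]) > tol:
                f = m[r][col] / m[rank][col]
                m[r] = [a - f * b for a, b in zip(m[r], m[rank])]
        rank += 1
    return rank
```

With two forecast members in three dimensions, the anomaly matrix has rank 1; adding two independent climatological perturbations raises it to 3, enlarging the space of possible analysis increments.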
Concrete ensemble Kalman filters with rigorous catastrophic filter divergence.
Kelly, David; Majda, Andrew J; Tong, Xin T
2015-08-25
The ensemble Kalman filter and ensemble square root filters are data assimilation methods used to combine high-dimensional, nonlinear dynamical models with observed data. Ensemble methods are indispensable tools in science and engineering and have enjoyed great success in geophysical sciences, because they allow for computationally cheap low-ensemble-state approximation for extremely high-dimensional turbulent forecast models. From a theoretical perspective, the dynamical properties of these methods are poorly understood. One of the central mysteries is the numerical phenomenon known as catastrophic filter divergence, whereby ensemble-state estimates explode to machine infinity, despite the true state remaining in a bounded region. In this article we provide a breakthrough insight into the phenomenon, by introducing a simple and natural forecast model that transparently exhibits catastrophic filter divergence under all ensemble methods and a large set of initializations. For this model, catastrophic filter divergence is not an artifact of numerical instability, but rather a true dynamical property of the filter. The divergence is not only validated numerically but also proven rigorously. The model cleanly illustrates mechanisms that give rise to catastrophic divergence and confirms intuitive accounts of the phenomena given in past literature.
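The analysis step of a basic (perturbed-observation) ensemble Kalman filter can be sketched for a directly observed scalar state (a toy sketch; the divergence phenomenon discussed above arises in high-dimensional nonlinear settings, not in this trivial case):

```python
import random

def enkf_analysis(ensemble, y, obs_var, seed=0):
    """One perturbed-observation EnKF analysis step for a scalar state
    that is observed directly (H = 1). Kalman gain K = P / (P + R)."""
    rng = random.Random(seed)
    n = len(ensemble)
    mean = sum(ensemble) / n
    P = sum((x - mean) ** 2 for x in ensemble) / (n - 1)  # sample variance
    K = P / (P + obs_var)
    # each member assimilates its own perturbed copy of the observation
    return [x + K * (y + rng.gauss(0.0, obs_var ** 0.5) - x)
            for x in ensemble]
```

With a very accurate observation the gain approaches 1 and the ensemble collapses onto the observation; with a very noisy one the gain approaches 0 and the forecast ensemble is left essentially unchanged.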
Jordan Canonical Form: Theory and Practice
Weintraub, Steven H
2009-01-01
Jordan Canonical Form (JCF) is one of the most important, and useful, concepts in linear algebra. The JCF of a linear transformation, or of a matrix, encodes all of the structural information about that linear transformation, or matrix. This book is a careful development of JCF. After beginning with background material, we introduce Jordan Canonical Form and related notions: eigenvalues, (generalized) eigenvectors, and the characteristic and minimum polynomials. We decide the question of diagonalizability, and prove the Cayley-Hamilton theorem. Then we present a careful and complete proof of t
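As a small illustrative example of what the JCF encodes (not taken from the book): consider

```latex
A = \begin{pmatrix} 3 & 1 \\ -1 & 1 \end{pmatrix},
\qquad
\chi_A(\lambda) = (\lambda - 2)^2,
\qquad
J = \begin{pmatrix} 2 & 1 \\ 0 & 2 \end{pmatrix}.
```

The eigenvalue 2 has algebraic multiplicity 2 but geometric multiplicity 1 (A − 2I has rank 1), so A is not diagonalizable and its Jordan canonical form is the single 2×2 Jordan block J.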
Dispersion Operators Algebra and Linear Canonical Transformations
Andriambololona, Raoelina; Ranaivoson, Ravo Tokiniaina; Hasimbola Damo Emile, Randriamisy; Rakotoson, Hanitriarivo
2017-02-01
This work presents a study of the relations between a Lie algebra called the dispersion operators algebra, linear canonical transformations, and a phase-space representation of quantum mechanics that we introduced and studied in previous works. The paper begins with a brief recall of our previous works, followed by a description of the dispersion operators algebra, which is formulated in the framework of the phase-space representation. Then, linear canonical transformations are introduced and linked with this algebra. A multidimensional generalization of the obtained results is given.
The Use of Artificial-Intelligence-Based Ensembles for Intrusion Detection: A Review
Directory of Open Access Journals (Sweden)
Gulshan Kumar
2012-01-01
In supervised-learning-based classification, ensembles have been successfully employed in many application domains. In the literature, researchers have proposed different ensembles by varying the combination methods, training datasets, base classifiers, and many other factors. Artificial-intelligence (AI)-based techniques play a prominent role in the development of ensembles for intrusion detection (ID) and have many benefits over other techniques. However, there is no comprehensive review of ensembles in general, and of AI-based ensembles for ID in particular, that examines their current research status with respect to the ID problem. Here, an updated review of ensembles and their taxonomies is presented in general terms. The paper also reviews various AI-based ensembles for ID (in particular those of the last decade). The related studies of AI-based ensembles are compared using a set of evaluation metrics derived from (1) the architecture and approach followed; (2) the methods utilized in the different phases of ensemble learning; (3) other measures used to evaluate the classification performance of the ensembles. The paper also outlines future directions for research in this area, and should aid understanding of the directions in which ensemble research has been conducted, both in general and specifically in the field of intrusion detection systems (IDSs).
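The simplest combination method surveyed in such reviews, majority voting, can be sketched in a few lines of Python. The three rule-based detectors and their thresholds below are invented for illustration; real AI-based ensembles would combine trained classifiers instead.

```python
from collections import Counter

class MajorityVoteEnsemble:
    """Combine base classifiers by unweighted majority vote."""
    def __init__(self, classifiers):
        self.classifiers = classifiers

    def predict(self, x):
        votes = [clf(x) for clf in self.classifiers]
        return Counter(votes).most_common(1)[0][0]

# Toy feature vector: (packets_per_sec, failed_logins, bytes_out).
detectors = [
    lambda x: "attack" if x[0] > 1000 else "normal",  # flood heuristic
    lambda x: "attack" if x[1] >= 5 else "normal",    # brute-force heuristic
    lambda x: "attack" if x[2] > 1e7 else "normal",   # exfiltration heuristic
]
ids_ensemble = MajorityVoteEnsemble(detectors)
flagged = ids_ensemble.predict((1500, 6, 100))   # two of three detectors fire
cleared = ids_ensemble.predict((10, 0, 100))     # no detector fires
```

Weighted voting, stacking, and the other combination methods the review compares all generalize this same predict-then-aggregate structure.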
Ensemble Forecasting of Major Solar Flares
Guerra, J A; Uritsky, V M
2015-01-01
We present the results from the first ensemble prediction model for major solar flares (M and X classes). Using the probabilistic forecasts from three models hosted at the Community Coordinated Modeling Center (NASA-GSFC) and the NOAA forecasts, we developed an ensemble forecast by linearly combining the flaring probabilities from all four methods. Performance-based combination weights were calculated using a Monte Carlo-type algorithm by applying a decision threshold $P_{th}$ to the combined probabilities and maximizing the Heidke Skill Score (HSS). Using the probabilities and events time series from 13 recent solar active regions (2012 - 2014), we found that a linear combination of probabilities can improve both probabilistic and categorical forecasts. Combination weights vary with the applied threshold and none of the tested individual forecasting models seem to provide more accurate predictions than the others for all values of $P_{th}$. According to the maximum values of HSS, a performance-based weights ...
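The two ingredients of this scheme, a linear opinion pool and the Heidke Skill Score, are easy to state in code. The probabilities, weights, and threshold below are invented for illustration; the paper optimizes the weights by a Monte Carlo search rather than fixing them.

```python
def heidke_skill_score(hits, false_alarms, misses, correct_negatives):
    """HSS = 2(ad - bc) / [(a + c)(c + d) + (a + b)(b + d)]."""
    a, b, c, d = hits, false_alarms, misses, correct_negatives
    return 2.0 * (a * d - b * c) / ((a + c) * (c + d) + (a + b) * (b + d))

def combine(prob_sets, weights):
    """Linear combination of per-model event probabilities."""
    return [sum(w * p for w, p in zip(weights, probs)) for probs in zip(*prob_sets)]

# Hypothetical daily M/X-flare probabilities from three models, plus the
# observed events (1 = flare occurred).
model_probs = [[0.2, 0.7, 0.9, 0.1],
               [0.3, 0.6, 0.8, 0.2],
               [0.1, 0.8, 0.7, 0.3]]
events = [0, 1, 1, 0]
combined = combine(model_probs, weights=[0.5, 0.3, 0.2])
p_th = 0.5                                       # decision threshold
forecasts = [int(p >= p_th) for p in combined]
a = sum(f == 1 and e == 1 for f, e in zip(forecasts, events))
b = sum(f == 1 and e == 0 for f, e in zip(forecasts, events))
c = sum(f == 0 and e == 1 for f, e in zip(forecasts, events))
d = sum(f == 0 and e == 0 for f, e in zip(forecasts, events))
hss = heidke_skill_score(a, b, c, d)
```

Sweeping p_th and re-optimizing the weights at each threshold is what produces the threshold-dependent weights reported in the abstract.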
Eigenstate Gibbs Ensemble in Integrable Quantum Systems
Nandy, Sourav; Das, Arnab; Dhar, Abhishek
2016-01-01
The Eigenstate Thermalization Hypothesis implies that for a thermodynamically large system in one of its eigenstates, the reduced density matrix describing any finite subsystem is determined solely by a set of relevant conserved quantities. In a generic system, only the energy plays that role and hence eigenstates appear locally thermal. Integrable systems, on the other hand, possess an extensive number of such conserved quantities and hence the reduced density matrix requires specification of an infinite number of parameters (Generalized Gibbs Ensemble). However, here we show by unbiased statistical sampling of the individual eigenstates with a given finite energy density, that the local description of an overwhelming majority of these states of even such an integrable system is actually Gibbs-like, i.e. requires only the energy density of the eigenstate. Rare eigenstates that cannot be represented by the Gibbs ensemble can also be sampled efficiently by our method and their local properties are then s...
Entanglement in a Solid State Spin Ensemble
Simmons, Stephanie; Riemann, Helge; Abrosimov, Nikolai V; Becker, Peter; Pohl, Hans-Joachim; Thewalt, Mike L W; Itoh, Kohei M; Morton, John J L
2010-01-01
Entanglement is the quintessential quantum phenomenon and a necessary ingredient in most emerging quantum technologies, including quantum repeaters, quantum information processing (QIP) and the strongest forms of quantum cryptography. Spin ensembles, such as those in liquid state nuclear magnetic resonance, have been powerful in the development of quantum control methods, however, these demonstrations contained no entanglement and ultimately constitute classical simulations of quantum algorithms. Here we report the on-demand generation of entanglement between an ensemble of electron and nuclear spins in isotopically engineered phosphorus-doped silicon. We combined high field/low temperature electron spin resonance (3.4 T, 2.9 K) with hyperpolarisation of the 31P nuclear spin to obtain an initial state of sufficient purity to create a non-classical, inseparable state. The state was verified using density matrix tomography based on geometric phase gates, and had a fidelity of 98% compared with the ideal state a...
Interplanetary magnetic field ensemble at 1 AU
Energy Technology Data Exchange (ETDEWEB)
Matthaeus, W.H.; Goldstein, M.L.; King, J.H.
1985-04-01
A method for calculating ensemble averages from magnetic field data is described. A data set comprising approximately 16 months of nearly continuous ISEE-3 magnetic field data is used in this study. Individual subintervals of these data, ranging from 15 hours to 15.6 days, comprise the ensemble. The sole condition for including each subinterval in the averages is the degree to which it represents a weakly time-stationary process. Averages obtained by this method are appropriate for a turbulence description of the interplanetary medium. The ensemble-average correlation length obtained from all subintervals is found to be 4.9 × 10^11 cm. The average values of the variances of the magnetic field components are in the approximate ratio 8:9:10, where the third component is along the local mean field direction. The correlation lengths and variances are found to vary systematically with subinterval duration, reflecting the important role of low-frequency fluctuations in the interplanetary medium.
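A correlation length of this kind is typically obtained by integrating the normalized autocorrelation of a field component over lag, here up to its first zero crossing, which is one common convention. The sketch below applies this estimator to a synthetic AR(1) series whose correlation time is known, not to ISEE-3 data.

```python
import numpy as np

def correlation_time(x, dt=1.0):
    """Integral of the normalized autocorrelation up to its first zero crossing."""
    x = x - x.mean()
    f = np.fft.rfft(x, 2 * len(x))                # zero-padded FFT
    acf = np.fft.irfft(f * np.conj(f))[:len(x)]   # linear autocorrelation
    acf /= acf[0]
    crossing = np.argmax(acf <= 0) if np.any(acf <= 0) else len(acf)
    return dt * acf[:crossing].sum()

# AR(1) surrogate: the autocorrelation is phi**k, so the correlation time
# should come out near 1 / (1 - phi) = 20 time units.
rng = np.random.default_rng(1)
phi, n = 0.95, 50_000
x = np.empty(n)
x[0] = 0.0
for i in range(1, n):
    x[i] = phi * x[i - 1] + rng.normal()
tau = correlation_time(x)
```

For real solar wind data the dt would be the magnetometer cadence and the result converted to a length using the mean flow speed (Taylor's hypothesis).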
Rényi entropy, abundance distribution, and the equivalence of ensembles
Mora, Thierry; Walczak, Aleksandra M.
2016-05-01
Distributions of abundances or frequencies play an important role in many fields of science, from biology to sociology, as does the Rényi entropy, which measures the diversity of a statistical ensemble. We derive a mathematical relation between the abundance distribution and the Rényi entropy, by analogy with the equivalence of ensembles in thermodynamics. The abundance distribution is mapped onto the density of states, and the Rényi entropy to the free energy. The two quantities are related in the thermodynamic limit by a Legendre transform, by virtue of the equivalence between the micro-canonical and canonical ensembles. In this limit, we show how the Rényi entropy can be constructed geometrically from rank-frequency plots. This mapping predicts that non-concave regions of the rank-frequency curve should result in kinks in the Rényi entropy as a function of its order. We illustrate our results on simple examples, and emphasize the limitations of the equivalence of ensembles when a thermodynamic limit is not well defined. Our results help choose reliable diversity measures based on the experimental accuracy of the abundance distributions in particular frequency ranges.
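For concreteness, the Rényi entropy of order α of a normalized abundance distribution {p_i} is H_α = (1/(1−α)) ln Σ_i p_i^α, with the Shannon entropy recovered as α → 1 and the log of the species richness at α = 0. A minimal sketch on a toy distribution of ours, not the paper's data:

```python
import math

def renyi_entropy(p, alpha):
    """H_alpha = log(sum p_i**alpha) / (1 - alpha); Shannon entropy at alpha = 1."""
    if abs(alpha - 1.0) < 1e-12:
        return -sum(pi * math.log(pi) for pi in p if pi > 0)
    return math.log(sum(pi ** alpha for pi in p)) / (1.0 - alpha)

p = [0.5, 0.25, 0.125, 0.125]
h0 = renyi_entropy(p, 0.0)   # log(4): counts all four species equally
h1 = renyi_entropy(p, 1.0)   # Shannon entropy
h2 = renyi_entropy(p, 2.0)   # collision entropy, dominated by common species
# H_alpha is non-increasing in alpha: h0 >= h1 >= h2.
```

Scanning α traces out exactly the diversity profile whose kinks, per the abstract, diagnose non-concave regions of the rank-frequency curve.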
A canonical theory of dynamic decision-making
Directory of Open Access Journals (Sweden)
John eFox
2013-04-01
Decision-making behaviour is studied in many very different fields, from medicine and economics to psychology and neuroscience, with major contributions from mathematics and statistics, computer science, AI and other technical disciplines. However, the conceptualisation of what decision-making is, and methods for studying it, vary greatly, and this has resulted in fragmentation of the field. A theory that can accommodate various perspectives may facilitate interdisciplinary working. We present such a theory in which decision-making is articulated as a set of canonical functions that are sufficiently general to accommodate diverse viewpoints, yet sufficiently precise that they can be instantiated in different ways for specific theoretical or practical purposes. The canons cover the whole decision cycle, from the framing of a decision based on the goals, beliefs, and background knowledge of the decision maker to the formulation of decision options, establishing preferences over them, and making commitments. Commitments can lead to the initiation of new decisions, and any step in the cycle can incorporate reasoning about previous decisions and the rationales for them, and lead to revising or abandoning existing commitments. The theory situates decision-making with respect to other high-level cognitive capabilities like problem-solving, planning and collaborative decision-making. The canonical approach is assessed in three domains: cognitive and neuro-psychology, artificial intelligence, and decision engineering.
Combining 2-m temperature nowcasting and short range ensemble forecasting
Directory of Open Access Journals (Sweden)
A. Kann
2011-12-01
During recent years, numerical ensemble prediction systems have become an important tool for estimating the uncertainties of dynamical and physical processes as represented in numerical weather models. The latest generation of limited-area ensemble prediction systems (LAM-EPSs) allows for probabilistic forecasts at high resolution in both space and time. However, these systems still suffer from systematic deficiencies. Especially for nowcasting (0–6 h) applications, the ensemble spread is smaller than the actual forecast error. This paper generates probabilistic short-range 2-m temperature forecasts by combining a state-of-the-art nowcasting method and a limited-area ensemble system, and compares the results with statistical methods. The Integrated Nowcasting Through Comprehensive Analysis (INCA) system, which has been in operation at the Central Institute for Meteorology and Geodynamics (ZAMG) since 2006 (Haiden et al., 2011), provides short-range deterministic forecasts at high temporal (15 min–60 min) and spatial (1 km) resolution. An INCA ensemble (INCA-EPS) of 2-m temperature forecasts is constructed by applying a dynamical approach, a statistical approach, and a combined dynamic-statistical method. The dynamical method takes uncertainty information (i.e. ensemble variance) from the operational limited-area ensemble system ALADIN-LAEF (Aire Limitée Adaptation Dynamique Développement InterNational Limited Area Ensemble Forecasting), which runs operationally at ZAMG (Wang et al., 2011). The purely statistical method assumes a well-calibrated spread-skill relation and applies ensemble spread according to the skill of the INCA forecast over the most recent past. The combined dynamic-statistical approach adapts the ensemble variance obtained from ALADIN-LAEF with non-homogeneous Gaussian regression (NGR), which yields a statistical correction of the first and second moment (mean bias and dispersion) for Gaussian distributed continuous
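The purely statistical variant can be reduced to a one-line calibration: rescale the raw ensemble spread so that, over a recent training window, spread matches the root-mean-square forecast error. The numbers below are invented, and the paper's actual methods (NGR in particular) fit more parameters than this.

```python
import math

def calibrate_spread(past_errors, past_spreads, raw_spread):
    """Scale raw spread by RMS(error) / mean(spread) over the recent past,
    assuming a well-calibrated spread-skill relation."""
    rms_error = math.sqrt(sum(e * e for e in past_errors) / len(past_errors))
    factor = rms_error / (sum(past_spreads) / len(past_spreads))
    return factor * raw_spread

# Toy 2-m temperature verification window (degrees C): the raw ensemble is
# underdispersive (mean spread 1.0 vs RMS error 2.0), so spread is doubled.
calibrated = calibrate_spread([2.0, -2.0, 2.0, -2.0], [1.0, 1.0, 1.0, 1.0], 0.8)
```

NGR generalizes this by regressing both the forecast mean and variance on the ensemble mean and variance, with coefficients fitted by minimizing a proper score.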
Canonical equivalence between massive spin 1 theories
Arias, P J; Arias, Pio J.; Perez-Mosquera, Jean C.
2004-01-01
The Cremmer-Scherck and Proca models are considered in dimensions greater than 3+1. It is shown that the Proca model corresponds to a gauge-fixed version of the Cremmer-Scherck one, and their canonical equivalence is established.
Scalar potentials out of canonical quantum cosmology
Guzman, W; Socorro, J; Urena-Lopez, L A
2005-01-01
Using canonical quantization of a flat FRW cosmological model containing a real scalar field $\phi$ endowed with a scalar potential $V(\phi)$, we are able to obtain exact and semiclassical solutions of the so-called Wheeler-DeWitt equation for a particular family of scalar potentials. Some features of the solutions and their classical limit are discussed.
Green's Conjecture for the generic canonical curve
Teixidor-I-Bigas, Montserrat
1998-01-01
Green's Conjecture states the following: the syzygies of the canonical model of a curve C are simple up to the p-th stage if and only if the Clifford index of C is greater than p. We prove that the generic curve of genus g satisfies Green's conjecture.
Canonical analysis based on mutual information
DEFF Research Database (Denmark)
Nielsen, Allan Aasbjerg; Vestergaard, Jacob Schack
2015-01-01
combinations with the information theoretical measure mutual information (MI). We term this type of analysis canonical information analysis (CIA). MI allows for the actual joint distribution of the variables involved and not just second order statistics. While CCA is ideal for Gaussian data, CIA facilitates...
Regularized Multiple-Set Canonical Correlation Analysis
Takane, Yoshio; Hwang, Heungsun; Abdi, Herve
2008-01-01
Multiple-set canonical correlation analysis (Generalized CANO or GCANO for short) is an important technique because it subsumes a number of interesting multivariate data analysis techniques as special cases. More recently, it has also been recognized as an important technique for integrating information from multiple sources. In this paper, we…
Infants' Recognition of Objects Using Canonical Color
Kimura, Atsushi; Wada, Yuji; Yang, Jiale; Otsuka, Yumiko; Dan, Ippeita; Masuda, Tomohiro; Kanazawa, So; Yamaguchi, Masami K.
2010-01-01
We explored infants' ability to recognize the canonical colors of daily objects, including two color-specific objects (human face and fruit) and a non-color-specific object (flower), by using a preferential looking technique. A total of 58 infants between 5 and 8 months of age were tested with a stimulus composed of two color pictures of an object…
Probing the small distance structure of canonical
t Hooft, G.
2010-01-01
In canonical quantum gravity, the formal functional integral includes an integration over the local conformal factor, and we propose to perform the functional integral over this factor before doing any of the other functional integrals. By construction, the resulting effective theory would be expect
How Canon grew big / Andres Eilart
Eilart, Andres
2004-01-01
On the development of Canon Group, the Japanese maker of cameras and office machines; its operations in three regions (the USA, Europe, and Asia); and the reasons for the company's long-term success: its business philosophy and well-timed product development. See also: The firm's original name was Kwanon; Competitors consolidate.
Energy Technology Data Exchange (ETDEWEB)
Dinpajooh, Mohammadhasan [Department of Chemistry and Chemical Theory Center, University of Minnesota, 207 Pleasant Street SE, Minneapolis, Minnesota 55455 (United States); Bai, Peng; Allan, Douglas A. [Department of Chemical Engineering and Materials Science, University of Minnesota, 421 Washington Avenue SE, Minneapolis, Minnesota 55455 (United States); Siepmann, J. Ilja, E-mail: siepmann@umn.edu [Department of Chemistry and Chemical Theory Center, University of Minnesota, 207 Pleasant Street SE, Minneapolis, Minnesota 55455 (United States); Department of Chemical Engineering and Materials Science, University of Minnesota, 421 Washington Avenue SE, Minneapolis, Minnesota 55455 (United States)
2015-09-21
Since the seminal paper by Panagiotopoulos [Mol. Phys. 61, 813 (1987)], the Gibbs ensemble Monte Carlo (GEMC) method has been the most popular particle-based simulation approach for the computation of vapor–liquid phase equilibria. However, the validity of GEMC simulations in the near-critical region has been questioned because rigorous finite-size scaling approaches cannot be applied to simulations with fluctuating volume. Valleau [Mol. Simul. 29, 627 (2003)] has argued that GEMC simulations would lead to a spurious overestimation of the critical temperature. More recently, Patel et al. [J. Chem. Phys. 134, 024101 (2011)] opined that the use of analytical tail corrections would be problematic in the near-critical region. To address these issues, we perform extensive GEMC simulations for Lennard-Jones particles in the near-critical region, varying the system size, the overall system density, and the cutoff distance. For a system with N = 5500 particles, potential truncation at 8σ, and analytical tail corrections, an extrapolation of GEMC simulation data at temperatures in the range from 1.27 to 1.305 yields T_c = 1.3128 ± 0.0016, ρ_c = 0.316 ± 0.004, and p_c = 0.1274 ± 0.0013, in excellent agreement with the thermodynamic limit determined by Potoff and Panagiotopoulos [J. Chem. Phys. 109, 10914 (1998)] using grand canonical Monte Carlo simulations and finite-size scaling. Critical properties estimated using GEMC simulations with different overall system densities (0.296 ≤ ρ_t ≤ 0.336) agree to within the statistical uncertainties. For simulations with tail corrections, data obtained using r_cut = 3.5σ yield T_c and p_c that are higher by 0.2% and 1.4% than simulations with r_cut = 5σ and 8σ, but still with overlapping 95% confidence intervals. In contrast, GEMC simulations with a truncated and shifted potential show that r_cut = 8σ is insufficient to obtain accurate results. Additional GEMC simulations for hard
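The role of the analytical tail correction is easy to see numerically. Assuming g(r) = 1 beyond the cutoff, the standard Lennard-Jones energy tail per particle is u_tail = (8/3)πρσ³ε[(1/3)(σ/r_cut)⁹ − (σ/r_cut)³] in reduced units; this sketch is ours, not the authors' simulation code.

```python
import math

def lj_energy_tail(rho, r_cut, sigma=1.0, epsilon=1.0):
    """Analytical LJ energy tail correction per particle (g(r) = 1 beyond r_cut)."""
    sr3 = (sigma / r_cut) ** 3
    return (8.0 / 3.0) * math.pi * rho * epsilon * sigma ** 3 * (sr3 ** 3 / 3.0 - sr3)

rho = 0.316   # near-critical density quoted in the abstract
tails = {rc: lj_energy_tail(rho, rc) for rc in (3.5, 5.0, 8.0)}
# The attraction neglected inside r_cut = 3.5 sigma is roughly an order of
# magnitude larger than at 8 sigma, which is why the cutoff choice matters.
```

The correction is attractive (negative), so truncating at a short cutoff without it systematically destabilizes the liquid phase and shifts the apparent critical point.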
Phase-selective entrainment of nonlinear oscillator ensembles
Zlotnik, Anatoly; Nagao, Raphael; Kiss, István Z.; Li, Jr-Shin
2016-03-01
The ability to organize and finely manipulate the hierarchy and timing of dynamic processes is important for understanding and influencing brain functions, sleep and metabolic cycles, and many other natural phenomena. However, establishing spatiotemporal structures in biological oscillator ensembles is a challenging task that requires controlling large collections of complex nonlinear dynamical units. In this report, we present a method to design entrainment signals that create stable phase patterns in ensembles of heterogeneous nonlinear oscillators without using state feedback information. We demonstrate the approach using experiments with electrochemical reactions on multielectrode arrays, in which we selectively assign ensemble subgroups into spatiotemporal patterns with multiple phase clusters. The experimentally confirmed mechanism elucidates the connection between the phases and natural frequencies of a collection of dynamical elements, the spatial and temporal information that is encoded within this ensemble, and how external signals can be used to retrieve this information.
A Spectral Canonical Electrostatic Algorithm
Webb, Stephen D
2015-01-01
Studying single-particle dynamics over many periods of oscillation is a well-understood problem solved using symplectic integration. Such integration schemes derive their update sequence from an approximate Hamiltonian, guaranteeing that the geometric structure of the underlying problem is preserved. Simulating a self-consistent system over many oscillations can introduce numerical artifacts such as grid heating. This unphysical heating stems from using non-symplectic methods on Hamiltonian systems. With this guidance, we derive an electrostatic algorithm using a discrete form of Hamilton's Principle. The resulting algorithm, a gridless spectral electrostatic macroparticle model, does not exhibit the unphysical heating typical of most particle-in-cell methods. We present results for this algorithm, using a two-body problem as an example of its energy- and momentum-conserving properties.
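The contrast the authors draw between symplectic and non-symplectic updates can be seen on the simplest Hamiltonian system. In this sketch (ours, not the paper's spectral algorithm), a leapfrog/velocity-Verlet step keeps the harmonic-oscillator energy bounded over many periods, while explicit Euler heats without limit.

```python
def leapfrog(q, p, dt, steps, omega=1.0):
    """Symplectic velocity-Verlet integration of H = p**2/2 + omega**2 * q**2/2."""
    for _ in range(steps):
        p -= 0.5 * dt * omega ** 2 * q
        q += dt * p
        p -= 0.5 * dt * omega ** 2 * q
    return q, p

def explicit_euler(q, p, dt, steps, omega=1.0):
    """Non-symplectic update: energy grows without bound (numerical 'heating')."""
    for _ in range(steps):
        q, p = q + dt * p, p - dt * omega ** 2 * q
    return q, p

def energy(q, p, omega=1.0):
    return 0.5 * p * p + 0.5 * omega ** 2 * q * q

e0 = energy(1.0, 0.0)
q_lf, p_lf = leapfrog(1.0, 0.0, 0.05, 20_000)        # ~160 oscillation periods
q_eu, p_eu = explicit_euler(1.0, 0.0, 0.05, 20_000)
```

Grid heating in particle-in-cell codes is the many-body analogue of the Euler curve here, which is the behavior the paper's variational derivation is designed to eliminate.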
SVM and SVM Ensembles in Breast Cancer Prediction
Huang, Min-Wei; Chen, Chih-Wen; Lin, Wei-Chao; Ke, Shih-Wen; Tsai, Chih-Fong
2017-01-01
Breast cancer is an all too common disease in women, making how to effectively predict it an active research problem. A number of statistical and machine learning techniques have been employed to develop various breast cancer prediction models. Among them, support vector machines (SVM) have been shown to outperform many related techniques. To construct the SVM classifier, it is first necessary to decide the kernel function, and different kernel functions can result in different prediction performance. However, there have been very few studies focused on examining the prediction performances of SVM based on different kernel functions. Moreover, it is unknown whether SVM classifier ensembles which have been proposed to improve the performance of single classifiers can outperform single SVM classifiers in terms of breast cancer prediction. Therefore, the aim of this paper is to fully assess the prediction performance of SVM and SVM ensembles over small and large scale breast cancer datasets. The classification accuracy, ROC, F-measure, and computational times of training SVM and SVM ensembles are compared. The experimental results show that linear kernel based SVM ensembles based on the bagging method and RBF kernel based SVM ensembles with the boosting method can be the better choices for a small scale dataset, where feature selection should be performed in the data pre-processing stage. For a large scale dataset, RBF kernel based SVM ensembles based on boosting perform better than the other classifiers. PMID:28060807
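Bagging, one of the two ensemble constructions the paper compares, can be sketched without any ML library: train each base learner on a bootstrap resample and aggregate by majority vote. A trivial 1-nearest-neighbour rule stands in for an SVM below (with scikit-learn one would wrap `SVC` in `BaggingClassifier` instead), and the four labelled points are invented.

```python
import random
from collections import Counter

def nearest_neighbor(train):
    """Trivial 1-NN base learner, standing in for an SVM here."""
    def predict(x):
        point, label = min(train,
                           key=lambda t: sum((a - b) ** 2 for a, b in zip(t[0], x)))
        return label
    return predict

def bagging_fit(train, n_estimators, rng):
    """Bootstrap aggregation: each base learner sees a resampled training set."""
    return [nearest_neighbor([rng.choice(train) for _ in train])
            for _ in range(n_estimators)]

def bagging_predict(models, x):
    return Counter(m(x) for m in models).most_common(1)[0][0]

rng = random.Random(0)
train = [((0.0, 0.0), "benign"), ((0.2, 0.1), "benign"),
         ((1.0, 1.0), "malignant"), ((0.9, 1.1), "malignant")]
models = bagging_fit(train, n_estimators=11, rng=rng)
```

Boosting, the paper's other construction, differs in that resampling is replaced by reweighting toward the examples the previous learners misclassified.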
The influence of canon law on ius commune in its formative period
Directory of Open Access Journals (Sweden)
Mehmeti Sami
2015-12-01
In the Medieval period, Roman law and canon law formed the ius commune, the common European law. The similarity between Roman and canon law was that they used the same methods; the difference was that they relied on different authoritative texts. In their works, canonists and civilists combined the ancient Greek achievements in philosophy with the Roman achievements in the field of law. Canonists were the first to carry out research on the distinctions between various legal sources and to systematize them according to a hierarchical order. The Medieval civilists sought solutions in canon law for a large number of problems that Justinian's Codification did not address, or addressed only superficially. Solutions offered by canon law were accepted not only in the civil law of Continental Europe, but also in English law.
Estimating preselected and postselected ensembles
Energy Technology Data Exchange (ETDEWEB)
Massar, Serge [Laboratoire d' Information Quantique, C.P. 225, Universite libre de Bruxelles (U.L.B.), Av. F. D. Rooselvelt 50, B-1050 Bruxelles (Belgium); Popescu, Sandu [H. H. Wills Physics Laboratory, University of Bristol, Tyndall Avenue, Bristol BS8 1TL (United Kingdom); Hewlett-Packard Laboratories, Stoke Gifford, Bristol BS12 6QZ (United Kingdom)
2011-11-15
In analogy with the usual quantum state-estimation problem, we introduce the problem of state estimation for a pre- and postselected ensemble. The problem has fundamental physical significance since, as argued by Y. Aharonov and collaborators, pre- and postselected ensembles are the most basic quantum ensembles. Two new features are shown to appear: (1) information is flowing to the measuring device both from the past and from the future; (2) because of the postselection, certain measurement outcomes can be forced never to occur. Due to these features, state estimation in such ensembles is dramatically different from the case of ordinary, preselected-only ensembles. We develop a general theoretical framework for studying this problem and illustrate it through several examples. We also prove general theorems establishing that information flowing from the future is closely related to, and in some cases equivalent to, the complex conjugate information flowing from the past. Finally, we illustrate our approach on examples involving covariant measurements on spin-1/2 particles. We emphasize that all state-estimation problems can be extended to the pre- and postselected situation. The present work thus lays the foundations of a much more general theory of quantum state estimation.
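The "forced outcomes" feature has a compact quantitative form: for a projective measurement performed between pre- and postselection, the Aharonov-Bergmann-Lebowitz (ABL) rule gives P(a) proportional to |⟨φ|a⟩⟨a|ψ⟩|². A small numerical sketch for a spin-1/2, with example states of our choosing:

```python
import numpy as np

def abl_probabilities(pre, post, basis):
    """ABL rule: P(a) proportional to |<post|a><a|pre>|**2 for the outcomes of an
    intermediate projective measurement on a pre/postselected ensemble."""
    w = [abs(np.vdot(post, a) * np.vdot(a, pre)) ** 2 for a in basis]
    total = sum(w)
    return [wi / total for wi in w]

z_plus = np.array([1, 0], dtype=complex)
z_minus = np.array([0, 1], dtype=complex)
x_plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
y_basis = [np.array([1, 1j], dtype=complex) / np.sqrt(2),
           np.array([1, -1j], dtype=complex) / np.sqrt(2)]

# Preselect |z+>, postselect |x+>: an intermediate sigma_z measurement is
# forced to give +1 (the z- outcome can never occur), while sigma_y is 50/50.
p_z = abl_probabilities(z_plus, x_plus, [z_plus, z_minus])
p_y = abl_probabilities(z_plus, x_plus, y_basis)
```

The zero-probability z− outcome is exactly the feature (2) of the abstract: postselection can force certain measurement outcomes never to occur.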
Algorithms on ensemble quantum computers.
Boykin, P Oscar; Mor, Tal; Roychowdhury, Vwani; Vatan, Farrokh
2010-06-01
In ensemble (or bulk) quantum computation, all computations are performed on an ensemble of computers rather than on a single computer. Measurements of qubits in an individual computer cannot be performed; instead, only expectation values (over the complete ensemble of computers) can be measured. As a result of this limitation on the model of computation, many algorithms cannot be processed directly on such computers, and must be modified, as the common strategy of delaying the measurements usually does not resolve this ensemble-measurement problem. Here we present several new strategies for resolving this problem. Based on these strategies we provide new versions of some of the most important quantum algorithms, versions that are suitable for implementation on ensemble quantum computers, e.g., on liquid-state NMR quantum computers. These algorithms are Shor's factorization algorithm, Grover's search algorithm (with several marked items), and an algorithm for quantum fault-tolerant computation. The first two algorithms are modified using randomizing and sorting strategies. For the last algorithm, we develop a classical-quantum hybrid strategy for removing measurements. We use it to present a novel quantum fault-tolerant scheme. More explicitly, we present schemes for fault-tolerant, measurement-free implementation of Toffoli and σ_z^{1/4}, as these operations cannot be implemented "bitwise", and their standard fault-tolerant implementations require measurement.
CME Ensemble Forecasting - A Primer
Pizzo, V. J.; de Koning, C. A.; Cash, M. D.; Millward, G. H.; Biesecker, D. A.; Codrescu, M.; Puga, L.; Odstrcil, D.
2014-12-01
SWPC has been evaluating various approaches for ensemble forecasting of Earth-directed CMEs. We have developed the software infrastructure needed to support broad-ranging CME ensemble modeling, including composing, interpreting, and making intelligent use of ensemble simulations. The first step is to determine whether the physics of the interplanetary propagation of CMEs is better described as chaotic (like terrestrial weather) or deterministic (as in tsunami propagation). This is important, since different ensemble strategies are to be pursued under the two scenarios. We present the findings of a comprehensive study of CME ensembles in uniform and structured backgrounds that reveals systematic relationships between input cone parameters and ambient flow states and resulting transit times and velocity/density amplitudes at Earth. These results clearly indicate that the propagation of single CMEs to 1 AU is a deterministic process. Thus, the accuracy with which one can forecast the gross properties (such as arrival time) of CMEs at 1 AU is determined primarily by the accuracy of the inputs. This is no tautology - it means specifically that efforts to improve forecast accuracy should focus upon obtaining better inputs, as opposed to developing better propagation models. In a companion paper (deKoning et al., this conference), we compare in situ solar wind data with forecast events in the SWPC operational archive to show how the qualitative and quantitative findings presented here are entirely consistent with the observations and may lead to improved forecasts of arrival time at Earth.
The Algebraic Riccati Matrix Equation for Eigendecomposition of Canonical Forms
Directory of Open Access Journals (Sweden)
M. Nouri
2013-01-01
The algebraic Riccati matrix equation is used for the eigendecomposition of specially structured matrices. This is achieved by similarity transformation, followed by application of the algebraic Riccati matrix equation to triangularize the matrices. The process decomposes matrices into small, specially structured submatrices of low dimension, from which eigenpairs are easily found. We show that the previous canonical forms I, II, III, and so on are special cases of the presented method. Numerical and structural examples are included to show the efficiency of the present method.
Linking neuronal ensembles by associative synaptic plasticity.
Directory of Open Access Journals (Sweden)
Qi Yuan
Full Text Available Synchronized activity in ensembles of neurons recruited by excitatory afferents is thought to contribute to the coding of information in the brain. However, the mechanisms by which neuronal ensembles are generated and modified are not known. Here we show that in rat hippocampal slices associative synaptic plasticity enables ensembles of neurons to change by incorporating neurons belonging to different ensembles. Associative synaptic plasticity redistributes the composition of different ensembles recruited by distinct inputs so as to specifically increase the similarity between the ensembles. These results show that in the hippocampus, the ensemble of neurons recruited by a given afferent projection is fluid and can be rapidly and persistently modified to specifically include neurons from different ensembles. This linking of ensembles may contribute to the formation of associative memories.
Directory of Open Access Journals (Sweden)
Zwinderman Aeilko H
2009-09-01
Full Text Available Abstract Background We generalized penalized canonical correlation analysis for analyzing microarray gene-expression measurements for checking completeness of known metabolic pathways and identifying candidate genes for incorporation in the pathway. We used Wold's method for calculation of the canonical variates, and we applied ridge penalization to the regression of pathway genes on canonical variates of the non-pathway genes, and the elastic net to the regression of non-pathway genes on the canonical variates of the pathway genes. Results We performed a small simulation to illustrate the model's capability to identify new candidate genes to incorporate in the pathway: in our simulations it appeared that a gene was correctly identified if the correlation with the pathway genes was 0.3 or more. We applied the methods to a gene-expression microarray data set of 12,209 genes measured in 45 patients with glioblastoma, and we considered genes to incorporate in the glioma-pathway: we identified more than 25 genes that correlated > 0.9 with canonical variates of the pathway genes. Conclusion We concluded that penalized canonical correlation analysis is a powerful tool to identify candidate genes in pathway analysis.
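As a rough illustration of the approach, the following numpy sketch computes one canonical pair by Wold-style alternating regressions with a ridge penalty on both sides (hypothetical toy data; the paper penalizes the two regressions differently, applying the elastic net on the non-pathway side):

```python
import numpy as np

def ridge_cca(X, Y, lam=1.0, iters=200):
    """One canonical pair via Wold-style alternating ridge regressions.

    X : samples x pathway genes, Y : samples x non-pathway genes.
    Returns weight vectors (wx, wy) and the canonical correlation.
    (Toy sketch; the paper additionally uses an elastic net on Y.)
    """
    X = X - X.mean(0); Y = Y - Y.mean(0)
    wy = np.ones(Y.shape[1])
    for _ in range(iters):
        v = Y @ wy                                   # canonical variate of Y
        wx = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ v)
        wx /= np.linalg.norm(X @ wx)
        u = X @ wx                                   # canonical variate of X
        wy = np.linalg.solve(Y.T @ Y + lam * np.eye(Y.shape[1]), Y.T @ u)
        wy /= np.linalg.norm(Y @ wy)
    r = float((X @ wx) @ (Y @ wy))                   # cosine of unit variates
    return wx, wy, r

# Correlated toy data: a shared latent signal drives one column of each block
rng = np.random.default_rng(1)
z = rng.standard_normal(45)
X = rng.standard_normal((45, 5)); X[:, 0] += 2 * z
Y = rng.standard_normal((45, 8)); Y[:, 3] += 2 * z
wx, wy, r = ridge_cca(X, Y, lam=0.1)
print(round(r, 2))  # high canonical correlation driven by the shared signal
```

In the paper's setting, non-pathway genes whose weights correlate strongly with the pathway variates become candidates for inclusion in the pathway.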
Excitation energies from ensemble DFT
Borgoo, Alex; Teale, Andy M.; Helgaker, Trygve
2015-12-01
We study the evaluation of the Gross-Oliveira-Kohn expression for excitation energies, E1 - E0 = ɛ1 - ɛ0 + ∂Exc,w[ρ]/∂w |ρ=ρ0. This expression gives the difference between an excitation energy E1 - E0 and the corresponding Kohn-Sham orbital energy difference ɛ1 - ɛ0 as a partial derivative of the exchange-correlation energy of an ensemble of states Exc,w[ρ]. Through Lieb maximisation, on input full-CI density functions, the exchange-correlation energy is evaluated accurately, and the partial derivative is evaluated numerically using finite differences. The equality is studied numerically for different geometries of the H2 molecule and different ensemble weights. We explore the adiabatic connection for the ensemble exchange-correlation energy. The latter may prove useful when modelling the unknown weight dependence of the exchange-correlation energy.
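The finite-difference evaluation of the partial derivative can be sketched as follows; the quadratic `E_xc(w)` below is a hypothetical stand-in for the Lieb-maximised ensemble exchange-correlation energy, which in practice is only available pointwise in w:

```python
import numpy as np

# Hypothetical stand-in for the w-dependence of the ensemble xc energy;
# in the paper, E_xc,w[rho] comes from Lieb maximisation on full-CI densities.
def E_xc(w):
    return -1.0 + 0.3 * w + 0.05 * w**2

def dEdw_central(E, w, h=1e-5):
    """Central finite difference, as used to evaluate dE_xc,w/dw."""
    return (E(w + h) - E(w - h)) / (2 * h)

w0 = 0.2
num = dEdw_central(E_xc, w0)
exact = 0.3 + 0.1 * w0          # analytic derivative of the toy model
print(abs(num - exact) < 1e-8)  # central differences are exact for quadratics
```

The derivative at w → 0 is the quantity added to the Kohn-Sham gap in the Gross-Oliveira-Kohn expression.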
Support vector machine ensemble using rough sets theory
Institute of Scientific and Technical Information of China (English)
Anonymous
2006-01-01
A support vector machine (SVM) ensemble classifier is proposed. The performance of an SVM trained in an input space consisting of all the information from many sources is not always good. The strategy of partitioning the original input space into several input subspaces usually works to improve performance. Different from conventional partition methods, the partition method used in this paper, rough sets theory based attribute reduction, allows the input subspaces to partially overlap. These input subspaces can offer complementary information about hidden data patterns. In every subspace, an SVM sub-classifier is learned. With information fusion techniques, those SVM sub-classifiers with better performance are selected and combined to construct an SVM ensemble. The proposed method is applied to decision-making in medical diagnosis. We compare the performance of our method with several other popular ensemble methods. Experimental results demonstrate that our proposed approach can make full use of the information contained in data and improve decision-making performance.
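A toy numpy sketch of the overall scheme: partially overlapping feature subspaces (standing in for the rough-set reducts), one sub-classifier per subspace, and majority-vote fusion. For self-containment, simple least-squares linear classifiers replace the SVMs; the dataset and subspaces are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy data: 6 features, class label depends on features 0 and 3
n = 300
Xf = rng.standard_normal((n, 6))
y = (Xf[:, 0] + Xf[:, 3] > 0).astype(int)

# Partially overlapping feature subspaces (stand-in for rough-set reducts)
subspaces = [[0, 1, 3], [0, 2, 4], [3, 4, 5]]

def fit_linear(X, y):
    # Least-squares linear classifier as a stand-in for an SVM sub-classifier
    Xb = np.hstack([X, np.ones((len(X), 1))])
    w, *_ = np.linalg.lstsq(Xb, 2 * y - 1, rcond=None)
    return w

def predict(w, X):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return (Xb @ w > 0).astype(int)

models = [fit_linear(Xf[:, s], y) for s in subspaces]

# Fuse the sub-classifier outputs by majority vote
votes = np.stack([predict(w, Xf[:, s]) for w, s in zip(models, subspaces)])
ensemble_pred = (votes.mean(0) > 0.5).astype(int)
print((ensemble_pred == y).mean())  # training accuracy of the fused ensemble
```

The overlap means several sub-classifiers see the informative features, so the vote remains robust when any single subspace is weak.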
The Partition Ensemble Fallacy Fallacy
Nemoto, Kae; Braunstein, Samuel L.
2002-01-01
The Partition Ensemble Fallacy was recently applied to claim no quantum coherence exists in coherent states produced by lasers. We show that this claim relies on an untestable belief of a particular prior distribution of absolute phase. One's choice for the prior distribution for an unobservable quantity is a matter of `religion'. We call this principle the Partition Ensemble Fallacy Fallacy. Further, we show an alternative approach to construct a relative-quantity Hilbert subspace where unobservability of certain quantities is guaranteed by global conservation laws. This approach is applied to coherent states and constructs an approximate relative-phase Hilbert subspace.
Dayyani, Z; Dehghani, M H
2016-01-01
We investigate the critical behavior of $(n+1)$-dimensional topological dilaton black holes in an extended phase space, in both the canonical and grand-canonical ensembles, when the gauge field is in the form of a power-Maxwell field. In order to do this we introduce, for the first time, the counterterms that remove the divergences of the action in dilaton gravity for solutions with curved boundary. Using the counterterm method, we calculate the conserved quantities and the action, and therefore the Gibbs free energy, in both the canonical and grand-canonical ensembles. We treat the cosmological constant as a thermodynamic pressure, and its conjugate quantity as a thermodynamic volume. In the presence of the power-Maxwell field, we find an analogy between topological dilaton black holes and the van der Waals liquid-gas system in all dimensions, provided the dilaton coupling constant $\\alpha$ and the power parameter $p$ are chosen properly. Interestingly enough, we observe that the power-Maxwell dilaton black holes admit...
Seismology of an Ensemble of ZZ Ceti Stars
Clemens, J C; Dunlap, Bart H; Hermes, J J
2016-01-01
We combine all the reliably-measured eigenperiods for hot, short-period ZZ Ceti stars onto one diagram and show that it has the features expected from evolutionary and pulsation theory. To make a more detailed comparison with theory we concentrate on a subset of 16 stars for which rotational splitting or other evidence gives clues to the spherical harmonic index (l) of the modes. The suspected l=1 periods in this subset of stars form a pattern of consecutive radial overtones that allow us to conduct ensemble seismology using published theoretical model grids. We find that the best-matching models have hydrogen layer masses most consistent with the canonically thick limit calculated from nuclear burning. We also find that the evolutionary models with masses and temperatures from spectroscopic fits cannot correctly reproduce the periods of the k=1 to 4 mode groups in these stars, and speculate that the mass of the helium layer in the models is too large.
Canonical Quantum Gravity on Noncommutative Spacetime
Kober, Martin
2014-01-01
In this paper canonical quantum gravity on noncommutative space-time is considered. The corresponding generalized classical theory is formulated by using the Moyal star product, which enables the representation of field quantities depending on noncommuting coordinates by generalized quantities depending on usual coordinates. But not only the classical theory has to be generalized in analogy to other field theories. Besides, the necessity arises to replace the commutator between the gravitational field operator and its canonically conjugate quantity by a corresponding generalized expression on noncommutative space-time. Accordingly, the transition to the quantum theory also has to be performed in a generalized way and leads to extended representations of the quantum theoretical operators. If the generalized representations of the operators are inserted into the generalized constraints, one obtains the corresponding generalized quantum constraints, including the Hamiltonian constraint as dynamical constraint. Af...
Deformed Special Relativity in a Canonical Framework
Ghosh, Subir; Pal, Probir
2007-01-01
In this paper we have studied the nature of kinematical and dynamical laws in $\\kappa $-Minkowski spacetime from a new perspective: the canonical phase space approach. We have introduced a new form of $\\kappa$-Minkowski phase space algebra from which we recover the $\\kappa$-extended finite Lorentz transformations derived in \\cite{kim}. This is a particular form of a Deformed Special Relativity model that admits a modified energy-momentum dispersion law as well as noncommutative $\\kappa$-Minkowski phase space. We show that this system can be completely mapped to a set of phase space variables that obey canonical (and {\\it{not}} $\\kappa$-Minkowski) phase space algebra and Special Relativity Lorentz transformation (and {\\it{not}} $\\kappa$-extended Lorentz transformation). We demonstrate the usefulness and simplicity of this approach through a number of applications both in classical and quantum mechanics. We also construct a Lagrangian for the $\\kappa$-particle.
Non-canonical modulators of nuclear receptors.
Tice, Colin M; Zheng, Ya-Jun
2016-09-01
Like G protein-coupled receptors (GPCRs) and protein kinases, nuclear receptors (NRs) are a rich source of pharmaceutical targets. Over 80 NR-targeting drugs have been approved for 18 NRs. The focus of drug discovery in NRs has hitherto been on identifying ligands that bind to the canonical ligand binding pockets of the C-terminal ligand binding domains (LBDs). Due to the development of drug resistance and selectivity concerns, there has been considerable interest in exploring other, non-canonical ligand binding sites. Unfortunately, the potencies of compounds binding at other sites have generally not been sufficient for clinical development. However, the situation has changed dramatically over the last 3 years, as compounds with sufficient potency have been reported for several NR targets. Here we review recent developments in this area from a medicinal chemistry point of view in the hope of stimulating further interest in this area of research.
Baby Skyrmions stabilized by canonical quantization
Energy Technology Data Exchange (ETDEWEB)
Acus, A.; Norvaisas, E. [Vilnius University, Institute of Theoretical Physics and Astronomy, Gostauto 12, Vilnius 01108 (Lithuania); Shnir, Ya., E-mail: shnir@maths.tcd.i [School of Theoretical Physics - DIAS, 10 Burlington Road, Dublin 4 (Ireland); Institute of Physics, Jagiellonian University, Krakow (Poland)
2009-11-23
We analyse the effect of the canonical quantization of the rotational mode of the O(3) sigma-model which includes the Skyrme term. Numerical evidence is presented that the quantum correction to the mass of the rotationally-invariant charge n=1,2 configurations may stabilize the solution even in the limit of vanishing potential. The corresponding range of values of the parameters is discussed.
CANONICAL FORMULATION OF NONHOLONOMIC CONSTRAINED SYSTEMS
Institute of Scientific and Technical Information of China (English)
Guo Yong-Xin; Yu Ying; Huang Hai-Jun
2001-01-01
Based on the Ehresmann connection theory and symplectic geometry, the canonical formulation of nonholonomic constrained mechanical systems is described. Following the Lagrangian formulation of the constrained system, the Hamiltonian formulation is given by Legendre transformation. The Poisson bracket defined by an anti-symmetric tensor does not satisfy the Jacobi identity, owing to the nonintegrability of the nonholonomic constraints. In some cases the constraint manifold can admit a symplectic submanifold, in which the Lie algebraic structure exists.
Il Canone Linguistico Boccacciano, Non Senza Dissenso
Directory of Open Access Journals (Sweden)
Cecilia Casini
2015-06-01
Full Text Available Author of the greatest prose masterpiece of medieval vernacular literature, Giovanni Boccaccio was crucial in defining the canon of the Italian language, especially after Pietro Bembo proposed its codification in the sixteenth century. Not without dissent, however: shortly after the publication of Bembo's Prose della volgar lingua, the first contrasting theories appeared, supporting the linguistic model put forward by Machiavelli.
A new ensemble feature selection and its application to pattern classification
Institute of Scientific and Technical Information of China (English)
Dongbo ZHANG; Yaonan WANG
2009-01-01
A neural network ensemble based on rough-set reducts is proposed to decrease the computational complexity of conventional ensemble feature selection algorithms. First, a dynamic reduction technique combining a genetic algorithm with a resampling method is adopted to obtain reducts with good generalization ability. Second, multiple BP neural networks based on different reducts are built as base classifiers. According to the idea of selective ensemble, the neural network ensemble with the best generalization ability can be found by search strategies. Finally, classification based on the neural network ensemble is implemented by combining the predictions of component networks by voting. The method was verified in experiments on remote-sensing image classification and five UCI datasets. Compared with conventional ensemble feature selection algorithms, it costs less time and has lower computational complexity, and the classification accuracy is satisfactory.
Mapping of shape invariant potentials by the point canonical transformation
Setare, M R
2008-01-01
In this paper by using the method of point canonical transformation we find that the Coulomb and Kratzer potentials can be mapped to the Morse potential. Then we show that the P\\"{o}schl-Teller potential type I belongs to the same subclass of shape invariant potentials as Hulth\\'{e}n potential. Also we show that the shape-invariant algebra for Coulomb, Kratzer, and Morse potentials is SU(1,1), while the shape-invariant algebra for P\\"{o}schl-Teller type I and Hulth\\'{e}n is SU(2).
Canonical Sets of Best L1-Approximation
Directory of Open Access Journals (Sweden)
Dimiter Dryanov
2012-01-01
Full Text Available In mathematics, the term approximation usually means either interpolation on a point set or approximation with respect to a given distance. There is a concept, which joins the two approaches together, and this is the concept of characterization of the best approximants via interpolation. It turns out that for some large classes of functions the best approximants with respect to a certain distance can be constructed by interpolation on a point set that does not depend on the choice of the function to be approximated. Such point sets are called canonical sets of best approximation. The present paper summarizes results on canonical sets of best L1-approximation with emphasis on multivariate interpolation and best L1-approximation by blending functions. The best L1-approximants are characterized as transfinite interpolants on canonical sets. The notion of a Haar-Chebyshev system in the multivariate case is discussed also. In this context, it is shown that some multivariate interpolation spaces share properties of univariate Haar-Chebyshev systems. We study also the problem of best one-sided multivariate L1-approximation by sums of univariate functions. Explicit constructions of best one-sided L1-approximants give rise to well-known and new inequalities.
An algorithm for calculation of the Jordan canonical form of a matrix
Sridhar, B.; Jordan, D.
1973-01-01
Jordan canonical forms are used extensively in the literature on control systems. However, very few methods are available to compute them numerically. Most numerical methods compute a set of basis vectors in terms of which the given matrix is diagonalized when such a change of basis is possible. Here, a simple and efficient method is suggested for computing the Jordan canonical form and the corresponding transformation matrix. The method is based on the definition of a generalized eigenvector, and a natural extension of Gauss elimination techniques.
Multimodel ensembles of wheat growth
DEFF Research Database (Denmark)
Martre, Pierre; Wallach, Daniel; Asseng, Senthold;
2015-01-01
Models of crop growth are increasingly used to quantify the impact of global changes due to climate or crop management. Therefore, accuracy of simulation results is a major concern. Studies with ensembles of crop models can give valuable information about model accuracy and uncertainty, but ...
Spatio-chromatic adaptation via higher-order canonical correlation analysis of natural images.
Directory of Open Access Journals (Sweden)
Michael U Gutmann
Full Text Available Independent component and canonical correlation analysis are two general-purpose statistical methods with wide applicability. In neuroscience, independent component analysis of chromatic natural images explains the spatio-chromatic structure of primary cortical receptive fields in terms of properties of the visual environment. Canonical correlation analysis similarly explains chromatic adaptation to different illuminations. But, as we show in this paper, neither of the two methods generalizes well to explain both spatio-chromatic processing and adaptation at the same time. We propose a statistical method which combines the desirable properties of independent component and canonical correlation analysis: It finds independent components in each data set which, across the two data sets, are related to each other via linear or higher-order correlations. The new method is as widely applicable as canonical correlation analysis, and also to more than two data sets. We call it higher-order canonical correlation analysis. When applied to chromatic natural images, we found that it provides a single (unified) statistical framework which accounts for both spatio-chromatic processing and adaptation. Filters with spatio-chromatic tuning properties as in the primary visual cortex emerged and corresponding-colors psychophysics was reproduced reasonably well. We used the new method to make a theory-driven testable prediction on how the neural response to colored patterns should change when the illumination changes. We predict shifts in the responses which are comparable to the shifts reported for chromatic contrast habituation.
Global Ensemble Forecast System (GEFS) [1 Deg.
National Oceanic and Atmospheric Administration, Department of Commerce — The Global Ensemble Forecast System (GEFS) is a weather forecast model made up of 21 separate forecasts, or ensemble members. The National Centers for Environmental...
Equivalence of matrix product ensembles of trajectories in open quantum systems.
Kiukas, Jukka; Guţă, Mădălin; Lesanovsky, Igor; Garrahan, Juan P
2015-07-01
The equivalence of thermodynamic ensembles is at the heart of statistical mechanics and central to our understanding of equilibrium states of matter. Recently, a formal connection has been established between the dynamics of open quantum systems and statistical mechanics in an extra dimension: an open system dynamics generates a matrix product state (MPS) encoding all possible quantum jump trajectories, which allows one to construct generating functions akin to partition functions. For dynamics generated by a Lindblad master equation, the corresponding MPS is a so-called continuous MPS which encodes the set of continuous measurement records terminated at some fixed total observation time. Here, we show that if one instead terminates trajectories after a fixed total number of quantum jumps, e.g., emission events into the environment, the associated MPS is discrete. The continuous and discrete MPS correspond to different ensembles of quantum trajectories, one characterized by total time, the other by total number of quantum jumps. Hence, they give rise to quantum versions of different thermodynamic ensembles, akin to "grand canonical" and "isobaric," but for trajectories. Here, we prove that these trajectory ensembles are equivalent in a suitable limit of long time or large number of jumps. This is in direct analogy to equilibrium statistical mechanics, where equivalence between ensembles is only strictly established in the thermodynamic limit. An intrinsic quantum feature is that the equivalence holds only for all observables that commute with the number of quantum jumps.
Bayesian Model Averaging for Ensemble-Based Estimates of Solvation Free Energies
Gosink, Luke J; Reehl, Sarah M; Whitney, Paul D; Mobley, David L; Baker, Nathan A
2016-01-01
This paper applies the Bayesian Model Averaging (BMA) statistical ensemble technique to estimate small molecule solvation free energies. There is a wide range of methods for predicting solvation free energies, ranging from empirical statistical models to ab initio quantum mechanical approaches. Each of these methods is based on a set of conceptual assumptions that can affect a method's predictive accuracy and transferability. Using an iterative statistical process, we have selected and combined solvation energy estimates using an ensemble of 17 diverse methods from the SAMPL4 blind prediction study to form a single, aggregated solvation energy estimate. The ensemble design process evaluates the statistical information in each individual method as well as the performance of the aggregate estimate obtained from the ensemble as a whole. Methods that possess minimal or redundant information are pruned from the ensemble and the evaluation process repeats until aggregate predictive performance can no longer be improv...
Quantifying Monte Carlo uncertainty in ensemble Kalman filter
Energy Technology Data Exchange (ETDEWEB)
Thulin, Kristian; Naevdal, Geir; Skaug, Hans Julius; Aanonsen, Sigurd Ivar
2009-01-15
This report presents results obtained during Kristian Thulin's PhD study, and is a slightly modified form of a paper submitted to SPE Journal. Kristian Thulin did most of his portion of the work while a PhD student at CIPR, University of Bergen. The ensemble Kalman filter (EnKF) is currently considered one of the most promising methods for conditioning reservoir simulation models to production data. The EnKF is a sequential Monte Carlo method based on a low-rank approximation of the system covariance matrix. The posterior probability distribution of model variables may be estimated from the updated ensemble, but because of the low-rank covariance approximation, the updated ensemble members become correlated samples from the posterior distribution. We suggest using multiple EnKF runs, each with smaller ensemble size, to obtain truly independent samples from the posterior distribution. This allows a point-wise confidence interval for the posterior cumulative distribution function (CDF) to be constructed. We present a methodology for finding an optimal combination of ensemble batch size (n) and number of EnKF runs (m) while keeping the total number of ensemble members (m x n) constant. The optimal combination of n and m is found by minimizing the integrated mean square error (MSE) of the CDFs, and we choose to define an EnKF run with 10,000 ensemble members as having zero Monte Carlo error. The methodology is tested on a simplistic, synthetic 2D model, but should be applicable also to larger, more realistic models. (author). 12 refs., figs., tabs.
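The m-versus-n trade-off can be illustrated with a toy Monte Carlo experiment in which members of one run share a run-level offset, mimicking the correlation among members of a single EnKF run (all numbers hypothetical; no reservoir model involved):

```python
import numpy as np
from math import erf

rng = np.random.default_rng(3)

def cdf_mse(m, n, s=0.3, n_rep=300):
    """Mean squared error of the pooled empirical CDF from m runs of n members.

    Within a run, members share a run-level offset of std s, a toy analogue of
    the correlation between members of a single EnKF run."""
    grid = np.linspace(-2, 2, 9)
    sig = np.sqrt(1 + s**2)                      # marginal std of one member
    truth = np.array([0.5 * (1 + erf(x / (sig * np.sqrt(2)))) for x in grid])
    errs = []
    for _ in range(n_rep):
        samples = np.concatenate(
            [rng.standard_normal(n) + s * rng.standard_normal()
             for _ in range(m)])
        emp = np.searchsorted(np.sort(samples), grid, side="right") / (m * n)
        errs.append(np.mean((emp - truth) ** 2))
    return float(np.mean(errs))

# Same total of 1000 members: many small independent runs reduce the
# run-level (correlated) error component, few large runs do not.
a = cdf_mse(m=20, n=50)
b = cdf_mse(m=2, n=500)
print(a < b)
```

With truly independent samples the split would not matter; it is the within-run correlation that makes multiple smaller runs preferable, which is the report's motivation.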
Sysoev, I. V.; Ponomarenko, V. I.; Prokhorov, M. D.
2016-01-01
A method for the reconstruction of the architecture, strength of couplings, and parameters of elements in ensembles of coupled time-delay systems from their time series is proposed. The effectiveness of the method is demonstrated on chaotic time series of the ensemble of diffusively coupled nonidentical Ikeda equations in the presence of noise.
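A simplified discrete-time analogue of the reconstruction problem: once the delayed nonlinearity is treated as a regressor, the model is linear in the unknown parameter and coupling coefficients, so ordinary least squares on the time series recovers them. Coupled delayed sine maps stand in here for the Ikeda delay-differential equations; all parameters are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(7)

# Discrete-time analogue of diffusively coupled delay systems:
# x_i[t+1] = mu_i * sin(x_i[t - tau]) + sum_j K_ij (x_j[t] - x_i[t]) + noise
tau, T = 5, 2000
mu = np.array([2.8, 3.0, 2.9])
K = np.array([[0.0, 0.15, 0.0],
              [0.15, 0.0, 0.1],
              [0.0, 0.1, 0.0]])     # coupling architecture to be recovered
n = len(mu)
x = rng.random((T, n))
for t in range(tau, T - 1):
    drive = np.sin(x[t - tau])
    diff = x[t][None, :] - x[t][:, None]       # diff[i, j] = x_j - x_i
    x[t + 1] = mu * drive + (K * diff).sum(1) + 1e-3 * rng.standard_normal(n)

# Recover mu_i and row i of K by linear least squares on the time series
i = 1
regs = [np.sin(x[0:T - 1 - tau, i])] + \
       [x[tau:T - 1, j] - x[tau:T - 1, i] for j in range(n) if j != i]
Amat = np.stack(regs, axis=1)
coef, *_ = np.linalg.lstsq(Amat, x[tau + 1:T, i], rcond=None)
print(np.round(coef, 2))  # ≈ [3.0, 0.15, 0.1]: parameter and couplings
```

Zero estimated couplings identify absent links, so the ensemble's architecture is reconstructed along with its parameters.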
Squeezing of Collective Excitations in Spin Ensembles
DEFF Research Database (Denmark)
Kraglund Andersen, Christian; Mølmer, Klaus
2012-01-01
We analyse the possibility to create two-mode spin squeezed states of two separate spin ensembles by inverting the spins in one ensemble and allowing spin exchange between the ensembles via a near resonant cavity field. We investigate the dynamics of the system using a combination of numerical an...
Generalized canonical correlation analysis of matrices with missing rows : A simulation study
van de Velden, Michel; Bijmolt, Tammo H. A.
2006-01-01
A method is presented for generalized canonical correlation analysis of two or more matrices with missing rows. The method is a combination of Carroll's (1968) method and the missing data approach of the OVERALS technique (Van der Burg, 1988). In a simulation study we assess the performance of the m
Classical and Quantum Ensembles via Multiresolution. II. Wigner Ensembles
2004-01-01
We present the application of the variational-wavelet analysis to the analysis of quantum ensembles in Wigner framework. (Naive) deformation quantization, the multiresolution representations and the variational approach are the key points. We construct the solutions of Wigner-like equations via the multiscale expansions in the generalized coherent states or high-localized nonlinear eigenmodes in the base of the compactly supported wavelets and the wavelet packets. We demonstrate the appearanc...
Gutiérrez, Jose Manuel; Maraun, Douglas; Widmann, Martin; Huth, Radan; Hertig, Elke; Benestad, Rasmus; Roessler, Ole; Wibig, Joanna; Wilcke, Renate; Kotlarski, Sven
2016-04-01
VALUE is an open European network to validate and compare downscaling methods for climate change research (http://www.value-cost.eu). A key deliverable of VALUE is the development of a systematic validation framework to enable the assessment and comparison of both dynamical and statistical downscaling methods. This framework is based on a user-focused validation tree, guiding the selection of relevant validation indices and performance measures for different aspects of the validation (marginal, temporal, spatial, multi-variable). Moreover, several experiments have been designed to isolate specific points in the downscaling procedure where problems may occur (assessment of intrinsic performance, effect of errors inherited from the global models, effect of non-stationarity, etc.). The list of downscaling experiments includes 1) cross-validation with perfect predictors, 2) GCM predictors -aligned with EURO-CORDEX experiment- and 3) pseudo reality predictors (see Maraun et al. 2015, Earth's Future, 3, doi:10.1002/2014EF000259, for more details). The results of these experiments are gathered, validated and publicly distributed through the VALUE validation portal, allowing for a comprehensive community-open downscaling intercomparison study. In this contribution we describe the overall results from Experiment 1), consisting of a European wide 5-fold cross-validation (with consecutive 6-year periods from 1979 to 2008) using predictors from ERA-Interim to downscale precipitation and temperatures (minimum and maximum) over a set of 86 ECA&D stations representative of the main geographical and climatic regions in Europe. As a result of the open call for contribution to this experiment (closed in Dec. 2015), over 40 methods representative of the main approaches (MOS and Perfect Prognosis, PP) and techniques (linear scaling, quantile mapping, analogs, weather typing, linear and generalized regression, weather generators, etc.) were submitted, including information both data
Hydrological Ensemble Prediction System (HEPS)
Thielen-Del Pozo, J.; Schaake, J.; Martin, E.; Pailleux, J.; Pappenberger, F.
2010-09-01
Flood forecasting systems form a key part of ‘preparedness' strategies for disastrous floods and provide hydrological services, civil protection authorities and the public with information on upcoming events. Provided the warning leadtime is sufficiently long, adequate preparatory actions can be taken to efficiently reduce the impacts of the flooding. Following the success of the use of ensembles for weather forecasting, the hydrological community now moves increasingly towards Hydrological Ensemble Prediction Systems (HEPS) for improved flood forecasting using operationally available NWP products as inputs. However, these products are often generated on relatively coarse scales compared to hydrologically relevant basin units and suffer systematic biases that may have considerable impact when passed through the non-linear hydrological filters. Therefore, a better understanding of how best to produce, communicate and use hydrologic ensemble forecasts in hydrological short-, medium- and long-term prediction of hydrological processes is necessary. The "Hydrologic Ensemble Prediction Experiment" (HEPEX) is an international initiative consisting of hydrologists, meteorologists and end-users to advance probabilistic hydrologic forecast techniques for flood, drought and water management applications. Different aspects of the hydrological ensemble processor are being addressed, including • Production of useful meteorological products relevant for hydrological applications, ranging from nowcasting products to seasonal forecasts. The importance of hindcasts that are consistent with the operational weather forecasts will be discussed to support bias correction and downscaling, statistically meaningful verification of HEPS, and the development and testing of operating rules; • Need for downscaling and post-processing of weather ensembles to reduce bias before entering hydrological applications; • Hydrological model and parameter uncertainty and how to correct and
Robust Ensemble Filtering and Its Relation to Covariance Inflation in the Ensemble Kalman Filter
Luo, Xiaodong
2011-12-01
A robust ensemble filtering scheme based on the H∞ filtering theory is proposed. The optimal H∞ filter is derived by minimizing the supremum (or maximum) of a predefined cost function, a criterion different from the minimum variance used in the Kalman filter. By design, the H∞ filter is more robust than the Kalman filter, in the sense that the estimation error in the H∞ filter in general has a finite growth rate with respect to the uncertainties in assimilation, except for a special case that corresponds to the Kalman filter. The original form of the H∞ filter contains global constraints in time, which may be inconvenient for sequential data assimilation problems. Therefore a variant is introduced that solves some time-local constraints instead, and hence it is called the time-local H∞ filter (TLHF). By analogy to the ensemble Kalman filter (EnKF), the concept of ensemble time-local H∞ filter (EnTLHF) is also proposed. The general form of the EnTLHF is outlined, and some of its special cases are discussed. In particular, it is shown that an EnKF with certain covariance inflation is essentially an EnTLHF. In this sense, the EnTLHF provides a general framework for conducting covariance inflation in the EnKF-based methods. Some numerical examples are used to assess the relative robustness of the TLHF–EnTLHF in comparison with the corresponding KF–EnKF method.
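For reference, a minimal sketch of what covariance inflation looks like in a stochastic EnKF analysis step for a directly observed scalar state (toy numbers throughout; the abstract's point is that such inflation can be interpreted as a time-local H∞ filter, which this sketch does not derive):

```python
import numpy as np

rng = np.random.default_rng(4)

def enkf_update(ens, y_obs, obs_std, infl=1.0):
    """Stochastic EnKF analysis for a directly observed scalar state (H = 1),
    with multiplicative covariance inflation applied to forecast anomalies."""
    mean = ens.mean()
    anom = infl * (ens - mean)          # inflate the forecast spread
    ens_f = mean + anom
    var_f = anom.var(ddof=1)
    K = var_f / (var_f + obs_std**2)    # Kalman gain
    y_pert = y_obs + obs_std * rng.standard_normal(ens.size)  # perturbed obs
    return ens_f + K * (y_pert - ens_f)

ens = rng.normal(1.0, 0.5, size=50)     # forecast ensemble
ana = enkf_update(ens, y_obs=2.0, obs_std=0.3, infl=1.1)
print(ens.mean() < ana.mean() < 2.2)    # analysis pulled toward the obs
```

A larger inflation factor widens the forecast covariance and hence increases the gain, guarding against the underestimated spread that causes filter divergence.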
Reservoir History Matching Using Ensemble Kalman Filters with Anamorphosis Transforms
Aman, Beshir M.
2012-12-01
This work aims to enhance the Ensemble Kalman Filter performance by transforming the non-Gaussian state variables into Gaussian variables, a step closer to optimality. This is done using univariate and multivariate Box-Cox transformations. Several history matching methods, such as the Kalman filter, the particle filter and the ensemble Kalman filter, are reviewed and applied to a test case in a reservoir application. The key idea is to apply the transformation before the update step and then transform back after applying the Kalman correction. In general, the results of the multivariate method were promising, despite the fact that it over-estimated some variables.
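The transform-then-update-then-back-transform idea can be sketched for a single scalar state; the Box-Cox parameter and the fixed Kalman-style gain below are illustrative assumptions, not values from this work:

```python
import math

def boxcox(y, lam):
    # Box-Cox transform toward Gaussianity (requires y > 0)
    return (y ** lam - 1.0) / lam if lam != 0 else math.log(y)

def inv_boxcox(z, lam):
    # Inverse transform back to the original variable
    return (lam * z + 1.0) ** (1.0 / lam) if lam != 0 else math.exp(z)

def kalman_like_update(prior, obs, gain):
    # Linear correction applied in the transformed (more Gaussian) space
    return prior + gain * (obs - prior)

lam = 0.5
state, obs = 4.0, 9.0
z = kalman_like_update(boxcox(state, lam), boxcox(obs, lam), 0.5)
updated = inv_boxcox(z, lam)   # correction mapped back to physical space
```

Because the linear Kalman correction is only optimal for Gaussian errors, performing it in the transformed space and mapping back is what the transformation step above buys.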
Ensemble Forecasting of Major Solar Flares -- First Results
Pulkkinen, A. A.; Guerra, J. A.; Uritsky, V. M.
2015-12-01
We present the results from the first ensemble prediction model for major solar flares (M and X classes). Using the probabilistic forecasts from three models hosted at the Community Coordinated Modeling Center (NASA-GSFC) and the NOAA forecasts, we developed an ensemble forecast by linearly combining the flaring probabilities from all four methods. Performance-based combination weights were calculated using a Monte-Carlo-type algorithm that applies a decision threshold $P_{th}$ to the combined probabilities and maximizes the Heidke Skill Score (HSS). Using the data for 13 recent solar active regions between 2012 and 2014, we found that linear combination methods can improve the overall probabilistic prediction and improve the categorical prediction for certain values of the decision threshold. Combination weights vary with the applied threshold, and none of the tested individual forecasting models seems to provide more accurate predictions than the others for all values of $P_{th}$. According to the maximum values of HSS, performance-based weights calculated by averaging over the sample performed similarly to an equally weighted model. The values of $P_{th}$ for which the ensemble forecast performs best are 25% for M-class flares and 15% for X-class flares. When the human-adjusted probabilities from NOAA are excluded from the ensemble, the ensemble performance in terms of the Heidke score is reduced.
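The combination-plus-threshold evaluation can be sketched as follows; the contingency-table form of the Heidke Skill Score is standard, while the weights and probabilities in the example are made-up, not from the paper:

```python
def heidke_skill_score(hits, false_alarms, misses, correct_negs):
    # HSS from a 2x2 contingency table; 1 is a perfect forecast, 0 is no skill
    num = 2.0 * (hits * correct_negs - false_alarms * misses)
    den = ((hits + misses) * (misses + correct_negs)
           + (hits + false_alarms) * (false_alarms + correct_negs))
    return num / den

def categorical_forecast(member_probs, weights, p_th):
    # Linearly combine member probabilities, then apply the decision
    # threshold p_th to get a yes/no flare forecast
    p = sum(w * q for w, q in zip(weights, member_probs))
    return p >= p_th
```

Scanning the weights and the threshold, and keeping the combination that maximizes the HSS over a training sample, is the Monte-Carlo-type tuning described above.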
An adaptive additive inflation scheme for Ensemble Kalman Filters
Sommer, Matthias; Janjic, Tijana
2016-04-01
Data assimilation for atmospheric dynamics requires an accurate estimate of the uncertainty of the forecast in order to obtain an optimal combination with available observations. This uncertainty has two components: first, the uncertainty which originates in the initial condition of the forecast itself, and second, the error of the numerical model used. While the former can be approximated quite successfully with an ensemble of forecasts (an additional sampling error will occur), little is known about the latter. For ensemble data assimilation, ad hoc methods to address model error include multiplicative and additive inflation schemes, possibly also flow-dependent ones. The additive schemes rely on samples for the model error, e.g. from short-term forecast tendencies or differences of forecasts with varying resolutions. However, since these methods work in ensemble space (i.e. act directly on the ensemble perturbations), the sampling error is fixed and can be expected to affect the skill substantially. In this contribution we show how inflation can be generalized to take into account more degrees of freedom, and what improvements for future operational ensemble data assimilation can be expected from this, also in comparison with other inflation schemes.
Building Orff Ensemble Skills with Mentally Handicapped Adolescents.
Dervan, Nancy
1982-01-01
Discusses how Orff-Schulwerk methods are used to teach music ensemble skills to mentally retarded adolescents. The author describes how the analysis of basic musical tasks reveals the essential subskills of motor coordination, timing, and attentiveness necessary to music-making. Specific teaching methods for skill development and Orff…
Probabilistic Quantitative Precipitation Forecasting Using Ensemble Model Output Statistics
Scheuerer, Michael
2013-01-01
Statistical post-processing of dynamical forecast ensembles is an essential component of weather forecasting. In this article, we present a post-processing method that generates full predictive probability distributions for precipitation accumulations based on ensemble model output statistics (EMOS). We model precipitation amounts by a generalized extreme value distribution that is left-censored at zero. This distribution permits modelling precipitation on the original scale without prior transformation of the data. A closed form expression for its continuous rank probability score can be derived and permits computationally efficient model fitting. We discuss an extension of our approach that incorporates further statistics characterizing the spatial variability of precipitation amounts in the vicinity of the location of interest. The proposed EMOS method is applied to daily 18-h forecasts of 6-h accumulated precipitation over Germany in 2011 using the COSMO-DE ensemble prediction system operated by the Germa...
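The left-censored generalized extreme value distribution used above as the predictive distribution can be sketched directly from the standard GEV CDF; the parameter values in the example are arbitrary, not fitted EMOS output:

```python
import math

def gev_cdf(x, mu, sigma, xi):
    # CDF of the generalized extreme value distribution
    # (location mu, scale sigma, shape xi)
    if xi == 0:
        return math.exp(-math.exp(-(x - mu) / sigma))
    t = 1.0 + xi * (x - mu) / sigma
    if t <= 0:
        return 0.0 if xi > 0 else 1.0
    return math.exp(-t ** (-1.0 / xi))

def prob_dry(mu, sigma, xi):
    # Left-censoring at zero: all probability mass below 0 is
    # reassigned to the event "no precipitation"
    return gev_cdf(0.0, mu, sigma, xi)
```

Censoring at zero is what lets the model assign a positive probability to exactly-zero accumulations without transforming the data first.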
Energy Technology Data Exchange (ETDEWEB)
Bracegirdle, Thomas J. [British Antarctic Survey, Cambridge (United Kingdom); Stephenson, David B. [University of Exeter, Mathematics Research Institute, Exeter (United Kingdom); NCAS-Climate, Reading (United Kingdom)
2012-12-15
This study presents projections of twenty-first century wintertime surface temperature changes over the high-latitude regions, based on the third Coupled Model Inter-comparison Project (CMIP3) multi-model ensemble. The dependence of the climate change response on the present-day mean state is captured using a simple yet robust ensemble linear regression model. The ensemble regression approach gives different and more precise estimated mean responses compared to the ensemble mean approach. Over the Arctic in January, ensemble regression gives less warming than the ensemble mean along the boundary between sea ice and open ocean (the sea ice edge). Most notably, the results show 3°C less warming over the Barents Sea (∼7°C compared to ∼10°C). In addition, the ensemble regression method gives projections that are 30% more precise over the Sea of Okhotsk, Bering Sea and Labrador Sea. For the Antarctic in winter (July), the ensemble regression method gives 2°C more warming over the Southern Ocean close to the Greenwich Meridian (∼7°C compared to ∼5°C). Projection uncertainty was almost half that of the ensemble mean over the Southern Ocean between 30°W and 90°E, and 30% less over the northern Antarctic Peninsula. The ensemble regression model avoids the need for explicit ad hoc weighting of models and exploits the whole ensemble to objectively identify overly influential outlier models. Bootstrap resampling shows that maximum precision over the Southern Ocean can be obtained with ensembles having as few as six climate models.
Study on ETKF-Based Initial Perturbation Scheme for GRAPES Global Ensemble Prediction
Institute of Scientific and Technical Information of China (English)
MA Xulin; XUE Jishan; LU Weisong
2009-01-01
The initial perturbation scheme is one of the important problems for ensemble prediction. In this paper, an ensemble initial perturbation scheme for Global/Regional Assimilation and PrEdiction System (GRAPES) global ensemble prediction is developed in terms of the ensemble transform Kalman filter (ETKF) method. A new GRAPES global ensemble prediction system (GEPS) is also constructed. Spherical simplex 14-member ensemble prediction experiments, using a simulated observation network, the error characteristics of simulated observations and innovation-based inflation, are carried out for about two months. The structural characteristics and perturbation amplitudes of the ETKF initial perturbations and the perturbation growth characteristics are analyzed, and their quality and suitability as ensemble initial perturbations are assessed. The preliminary experimental results indicate that the ETKF-based GRAPES ensemble initial perturbations can identify the main normal structures of the analysis error variance and reflect the perturbation amplitudes. The initial perturbations and the spread are reasonable. The initial perturbation variance, which is approximately equal to the forecast error variance, is found to respond to changes in the observational spatial variations with the simulated observational network density. The perturbations generated through the simplex method are also shown to exhibit a very high degree of consistency between initial analysis and short-range forecast perturbations. The appropriate growth and spread of ensemble perturbations can be maintained up to a 96-h lead time. The statistical results for 52-day ensemble forecasts show that the forecast scores of the ensemble average for the Northern Hemisphere are higher than those of the control forecast. With more ensemble members, a real-time observational network and a more appropriate inflation factor, the ETKF-based initial scheme should perform even better.
A Comparison of ETKF and Downscaling in a Regional Ensemble Prediction System
Directory of Open Access Journals (Sweden)
Hanbin Zhang
2015-03-01
Based on the operational regional ensemble prediction system (REPS) of the China Meteorological Administration (CMA), this paper compares two initial condition perturbation methods: an ensemble transform Kalman filter (ETKF) and a dynamical downscaling of global ensemble perturbations. One-month consecutive tests are implemented to evaluate the performance of both methods in the operational REPS environment. The perturbation characteristics are analyzed, ensemble forecast verifications are conducted, and a tropical cyclone (TC) case is investigated. The main conclusions are as follows: the ETKF perturbations contain more power at small scales while those derived from downscaling contain more power at large scales, and the relative difference between the two types of perturbations across scales becomes smaller with forecast lead time. The growth of the downscaling perturbations is more remarkable, and the downscaling perturbations have larger magnitude than the ETKF perturbations at all forecast lead times. However, the ETKF perturbation variance represents the forecast error variance better than downscaling. Ensemble forecast verification shows slightly higher skill for the downscaling ensemble than for the ETKF ensemble. The TC case study indicates that the overall performance of the two systems is quite similar, despite the slightly smaller error of the downscaling ensemble than the ETKF ensemble at long forecast lead times.
Institute of Scientific and Technical Information of China (English)
曹冬寅; 王琼; 张兴敢
2016-01-01
Based on sparse representation computed by l2-minimization and ensemble learning, we propose a general algorithm for image classification. This new framework provides new insights into two crucial issues in image classification: feature extraction and classification accuracy. Since it was proposed, random forest has become a well-known data analysis method and has been applied to a wide variety of scientific areas. As random forest classification shows good performance and high stability, in this paper we choose random forest as the ensemble learning classifier. A classifier based on sparse representation classifies a test sample by calculating the l2 norm of the residual vector between its real values and its reconstructed values. In some cases, however, the differences between the residuals are very small, and it is hard to decide the right class to which the test sample belongs. We therefore propose a reconstruction algorithm based on sparse representation to extract image features and classify the images with a random forest classifier. First, a learning dictionary is obtained from the training image data set. We generate a sparse vector on the over-complete dictionary and then calculate the residuals between the real values and the reconstructed values of the training samples. The residual vector is used as the training sample of the random forest classifier. Finally, the image is classified by the trained random forest classifier. Random forests are constructed on the residuals, and the classification result is decided by a voting strategy. Our experiments use the standard digit database MNIST as the image recognition database. The recognition rate of the proposed method is clearly superior to that of some other popular classification methods, such as SVM. We used MATLAB for the experiments. The experimental results indicate that the proposed method has better performance than methods based on random forest and sparse
Symanzik flow on HISQ ensembles
Bazavov, A; Brown, N; DeTar, C; Foley, J; Gottlieb, Steven; Heller, U M; Hetrick, J E; Laiho, J; Levkova, L; Oktay, M; Sugar, R L; Toussaint, D; Van de Water, R S; Zhou, R
2013-01-01
We report on a scale determination with gradient-flow techniques on the $N_f = 2 + 1 + 1$ HISQ ensembles generated by the MILC collaboration. The lattice scale $w_0/a$, originally proposed by the BMW collaboration, is computed using Symanzik flow at four lattice spacings ranging from 0.15 to 0.06 fm. With a Taylor series ansatz, the results are simultaneously extrapolated to the continuum and interpolated to physical quark masses. We give a preliminary determination of the scale $w_0$ in physical units, along with associated systematic errors, and compare with results from other groups. We also present a first estimate of autocorrelation lengths as a function of flow time for these ensembles.
Ritzefeld, Markus; Walhorn, Volker; Kleineberg, Christin; Bieker, Adeline; Kock, Klaus; Herrmann, Christian; Anselmetti, Dario; Sewald, Norbert
2013-11-19
A combined approach based on isothermal titration calorimetry (ITC), fluorescence resonance energy transfer (FRET) experiments, circular dichroism spectroscopy (CD), atomic force microscopy (AFM) dynamic force spectroscopy (DFS), and surface plasmon resonance (SPR) was applied to elucidate the mechanism of protein-DNA complex formation and the impact of protein dimerization of the DNA-binding domain of PhoB (PhoB(DBD)). These insights can be translated to related members of the family of winged helix-turn-helix proteins. One central question was the assembly of the trimeric complex formed by two molecules of PhoB(DBD) and two cognate binding sites of a single oligonucleotide. In addition to the native protein WT-PhoB(DBD), semisynthetic covalently linked dimers with different linker lengths were studied. The ITC, SPR, FRET, and CD results indicate a positive cooperative binding mechanism and a decisive contribution of dimerization on the complex stability. Furthermore, an alanine scan was performed and binding of the corresponding point mutants was analyzed by both techniques to discriminate between different binding types involved in the protein-DNA interaction and to compare the information content of the two methods DFS and SPR. In light of the published crystal structure, four types of contribution to the recognition process of the pho box by the protein PhoB(DBD) could be differentiated and quantified. Consequently, it could be shown that investigating the interactions between DNA and proteins with complementary techniques is necessary to fully understand the corresponding recognition process.
Canonical energy and linear stability of Schwarzschild
Prabhu, Kartik; Wald, Robert
2017-01-01
Consider linearised perturbations of the Schwarzschild black hole in 4 dimensions. Using the linearised Newman-Penrose curvature component, which satisfies the Teukolsky equation, as a Hertz potential we generate a `new' metric perturbation satisfying the linearised Einstein equation. We show that the canonical energy, given by Hollands and Wald, of the `new' metric perturbation is the conserved Regge-Wheeler-like energy used by Dafermos, Holzegel and Rodnianski to prove linear stability and decay of perturbations of Schwarzschild. We comment on a generalisation of this strategy to prove the linear stability of the Kerr black hole.
Women and Textiles: Warping the Architectural Canon
Aron, Jamie
2012-01-01
Textiles have long been a part of the canon of Western architecture—from the folds of draped female forms in ancient Greek temples to the abstract Mayan patterns “knitted” together in Frank Lloyd Wright’s textile block houses of the 1920s. Yet just as any façade may conceal what’s inside, architecture’s shared history with weaving is often obscured. Today architecture sits at the top alongside the “fine arts” of painting and sculpture, while woven textiles occupy a less prominent position in ...
Institute of Scientific and Technical Information of China (English)
孔英会; 景美丽
2012-01-01
For the multi-classification problem, a classification method based on a confusion matrix and ensemble learning is proposed in this paper. A hierarchical classifier structure is generated from the similarities between patterns. The method chooses the support vector machine (SVM) as the basic binary classifier. The AdaBoost algorithm applies weighted voting to SVMs whose classification accuracy is not ideal. Taking object recognition in substation environmental monitoring as an example (involving people, animals, ordinary flames (red and yellow flames), white flames and incandescent lamps), object classification is achieved. The experiments show that the proposed method effectively improves classification accuracy.
Numerical weather prediction model tuning via ensemble prediction system
Jarvinen, H.; Laine, M.; Ollinaho, P.; Solonen, A.; Haario, H.
2011-12-01
This paper discusses a novel approach to tune the predictive skill of numerical weather prediction (NWP) models. NWP models contain tunable parameters which appear in parameterization schemes of sub-grid-scale physical processes. Currently, numerical values of these parameters are specified manually. In a recent dual manuscript (QJRMS, revised) we developed a new concept and method for on-line estimation of the NWP model parameters. The EPPES ("Ensemble prediction and parameter estimation system") method requires only minimal changes to the existing operational ensemble prediction infrastructure and appears very cost-effective, because practically no new computations are introduced. The approach provides an algorithmic decision-making tool for model parameter optimization in operational NWP. In EPPES, statistical inference about the NWP model tunable parameters is made by (i) generating each member of the ensemble of predictions using different model parameter values, drawn from a proposal distribution, and (ii) feeding back the relative merits of the parameter values to the proposal distribution, based on evaluation of a suitable likelihood function against verifying observations. In the presentation, the method is first illustrated in low-order numerical tests using a stochastic version of the Lorenz-95 model, which effectively emulates the principal features of ensemble prediction systems. The EPPES method correctly detects the unknown and wrongly specified parameter values, and leads to an improved forecast skill. Second, results with an atmospheric general circulation model based ensemble prediction system show that the NWP model tuning capacity of EPPES scales up to realistic models and ensemble prediction systems. Finally, a global top-end NWP model tuning exercise with preliminary results is presented.
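A one-parameter caricature of the draw-score-update loop in steps (i) and (ii) above can be sketched as follows; the Gaussian proposal, the likelihood-weighted mean update and the toy "true" parameter are simplifying assumptions, not the EPPES algorithm itself:

```python
import math
import random

def proposal_step(mu, sigma, likelihood, n, rng):
    # (i) draw candidate parameter values from the Gaussian proposal,
    # (ii) feed back their relative merits by moving the proposal mean
    #      to the likelihood-weighted average of the draws.
    draws = [rng.gauss(mu, sigma) for _ in range(n)]
    weights = [likelihood(t) for t in draws]
    return sum(w * t for w, t in zip(weights, draws)) / sum(weights)

# Toy verification score: forecasts verify best when the parameter is 2.0
likelihood = lambda t: math.exp(-(t - 2.0) ** 2)

rng = random.Random(0)
mu = 0.0
for _ in range(20):              # repeated forecast/verification cycles
    mu = proposal_step(mu, 1.0, likelihood, 50, rng)
```

Over the cycles the proposal mean drifts toward the parameter value that scores best against the verifying observations, which is the essential feedback mechanism of on-line parameter estimation.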
Cluster Ensemble-based Image Segmentation
Directory of Open Access Journals (Sweden)
Xiaoru Wang
2013-07-01
Image segmentation is the foundation of computer vision applications. In this paper, we propose a new cluster ensemble-based image segmentation algorithm, which overcomes several problems of traditional methods. We make two main contributions. First, we introduce the cluster ensemble concept to fuse the segmentation results from different types of visual features effectively, which delivers a better final result and achieves much more stable performance across broad categories of images. Second, we exploit the PageRank idea from Internet applications and apply it to the image segmentation task. This improves the final segmentation results by combining the spatial information of the image and the semantic similarity of regions. Our experiments on four public image databases validate the superiority of our algorithm over conventional algorithms based on a single type of feature or multiple types of features, since our algorithm can fuse multiple types of features effectively for better segmentation results. Moreover, our method also proves very competitive in comparison with other state-of-the-art segmentation algorithms.
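The PageRank component can be sketched with plain power iteration on a region-similarity graph; treating image regions as graph nodes follows the idea above, while the 3-node adjacency matrix below is only a toy example:

```python
def pagerank(adj, damping=0.85, iters=100):
    # Power iteration for PageRank on a weighted adjacency matrix;
    # adj[i][j] is the similarity (edge weight) from region i to region j.
    n = len(adj)
    rank = [1.0 / n] * n
    out = [sum(row) for row in adj]   # out-degree for row normalization
    for _ in range(iters):
        rank = [(1.0 - damping) / n
                + damping * sum(adj[i][j] / out[i] * rank[i]
                                for i in range(n) if out[i] > 0)
                for j in range(n)]
    return rank

# Fully connected toy graph of three equally similar regions
ranks = pagerank([[0, 1, 1], [1, 0, 1], [1, 1, 0]])
```

In a segmentation setting the resulting ranks weight how strongly each region agrees, spatially and semantically, with the rest of the image.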
Bornyakov, V G; Goy, V A; Molochkov, A V; Nakamura, Atsushi; Nikolaev, A A; Zakharov, V I
2016-01-01
We propose and test a new approach to the computation of canonical partition functions in lattice QCD at finite density. We suggest a procedure in a few steps. We first compute numerically the quark number density for imaginary chemical potential $i\mu_{qI}$. Then we restore the grand canonical partition function for imaginary chemical potential, using a fitting procedure for the quark number density. Finally, we compute the canonical partition functions using a high-precision numerical Fourier transformation. Additionally, we compute the canonical partition functions using the known method of the hopping parameter expansion, and compare the results obtained by the two methods in the deconfining as well as the confining phase. The agreement between the two methods indicates the validity of the new method. Our numerical results are obtained in two-flavor lattice QCD with clover-improved Wilson fermions.
Quasinormal modes and Regge poles of the canonical acoustic hole
Dolan, Sam R; Crispino, Luis C B
2014-01-01
We compute the quasinormal mode frequencies and Regge poles of the canonical acoustic hole (a black hole analogue), using three methods. First, we show how damped oscillations arise by evolving generic perturbations in the time domain using a simple finite-difference scheme. We use our results to estimate the fundamental QN frequencies of the low multipolar modes $l=1, 2, \\ldots$. Next, we apply an asymptotic method to obtain an expansion for the frequency in inverse powers of $l+1/2$ for low overtones. We test the expansion by comparing against our time-domain results, and (existing) WKB results. The expansion method is then extended to locate the Regge poles. Finally, to check the expansion of Regge poles we compute the spectrum numerically by direct integration in the frequency domain. We give a geometric interpretation of our results and comment on experimental verification.
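The time-domain step can be illustrated with a simple leapfrog finite-difference scheme for a 1+1D wave equation with a potential term, $u_{tt} = u_{xx} - V(x)\,u$; the grid sizes and the Gaussian initial pulse are illustrative choices, not the paper's setup:

```python
import math

def evolve_wave(n=200, steps=150, dx=1.0, dt=0.5, potential=lambda x: 0.0):
    # Second-order leapfrog for u_tt = u_xx - V(x) u with fixed endpoints;
    # generic initial data evolved this way develops the damped ringing
    # from which quasinormal frequencies can be estimated.
    xs = [i * dx for i in range(n)]
    u_prev = [math.exp(-0.05 * (x - 50.0) ** 2) for x in xs]  # Gaussian pulse
    u = u_prev[:]                                             # zero initial velocity
    r2 = (dt / dx) ** 2                                       # CFL ratio^2 (stable if <= 1)
    for _ in range(steps):
        u_next = [0.0] * n
        for i in range(1, n - 1):
            lap = u[i - 1] - 2.0 * u[i] + u[i + 1]
            u_next[i] = (2.0 * u[i] - u_prev[i] + r2 * lap
                         - dt * dt * potential(xs[i]) * u[i])
        u_prev, u = u, u_next
    return u
```

Fitting damped sinusoids to the signal recorded at a fixed grid point is the usual way to read off fundamental quasinormal frequencies from such an evolution.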
Probabilistic Determination of Native State Ensembles of Proteins.
Olsson, Simon; Vögeli, Beat Rolf; Cavalli, Andrea; Boomsma, Wouter; Ferkinghoff-Borg, Jesper; Lindorff-Larsen, Kresten; Hamelryck, Thomas
2014-08-12
The motions of biological macromolecules are tightly coupled to their functions. However, while the study of fast motions has become increasingly feasible in recent years, the study of slower, biologically important motions remains difficult. Here, we present a method to construct native state ensembles of proteins by the combination of physical force fields and experimental data through modern statistical methodology. As an example, we use NMR residual dipolar couplings to determine a native state ensemble of the extensively studied third immunoglobulin binding domain of protein G (GB3). The ensemble accurately describes both local and nonlocal backbone fluctuations as judged by its reproduction of complementary experimental data. While it is difficult to assess precise time-scales of the observed motions, our results suggest that it is possible to construct realistic conformational ensembles of biomolecules very efficiently. The approach may allow for a dramatic reduction in the computational as well as experimental resources needed to obtain accurate conformational ensembles of biological macromolecules in a statistically sound manner.
Canonical Transformations can Dramatically Simplify Supersymmetry
Dixon, John
2016-01-01
A useful way to keep track of the SUSY invariance of a theory is by formulating it with a BRST Poisson Bracket. It turns out that there is a crucial subtlety that is hidden in this formulation. When the theory contains a Chiral Multiplet, the relevant BRST Poisson Bracket has a very important Canonical Transformation that leaves it invariant. This Canonical Transformation takes all or part of the Scalar Field $A$ and replaces it with a Zinn Source $J_A$, and also takes the related Zinn Source $\\Gamma_A$ and replaces it with an `Antighost' Field $\\eta_A$. Naively, this looks like it is just a change of notation. But in fact the interpretation means that one has moved some of the conserved Noether SUSY current from the Field Action, and placed it partly in the Zinn Sources Action, and so the SUSY current in the Field part of the Action is no longer conserved, because the Zinn Sources do not satisfy any equations of motion. They are not quantized, because they are Sources. So it needs to be recognized that SUSY ...
Four-dimensional Localization and the Iterative Ensemble Kalman Smoother
Bocquet, M.
2015-12-01
The iterative ensemble Kalman smoother (IEnKS) is a data assimilation method meant for efficiently tracking the state of nonlinear geophysical models. It combines an ensemble of model states to estimate the errors similarly to the ensemble square root Kalman filter, with a 4D-variational analysis performed within the ensemble space. As such it belongs to the class of ensemble variational methods. The recently introduced 4DEnVar or the 4D-LETKF can be seen as particular cases of the scheme. The IEnKS was shown to outperform 4D-Var, the ensemble Kalman filter (EnKF) and smoother, with low-order models in all investigated dynamical regimes. Like any ensemble method, it could require the use of localization of the analysis when the state space dimension is high. However, localization for the IEnKS is not as straightforward as for the EnKF. Indeed, localization needs to be defined across time, and it needs to be as consistent as possible with the dynamical flow within the data assimilation variational window. We show that a Liouville equation governs the time evolution of the localization operator, which is linked to the evolution of the error correlations. It is argued that its time integration strongly depends on the forecast dynamics. Using either covariance localization or domain localization, we propose and test several localization strategies meant to address the issue: (i) a constant and uniform localization, (ii) the propagation through the window of a restricted set of dominant modes of the error covariance matrix, (iii) the approximate propagation of the localization operator using model covariant local domains. These schemes are illustrated on the one-dimensional Lorenz 40-variable model.
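The one-dimensional Lorenz 40-variable model (Lorenz-96) used in the illustrations can be sketched with a standard fourth-order Runge-Kutta step; the forcing F = 8 is the usual chaotic setting:

```python
def l96_tendency(x, forcing=8.0):
    # dx_i/dt = (x_{i+1} - x_{i-2}) x_{i-1} - x_i + F, with cyclic indices
    n = len(x)
    return [(x[(i + 1) % n] - x[i - 2]) * x[i - 1] - x[i] + forcing
            for i in range(n)]

def rk4_step(x, dt, forcing=8.0):
    # Classical fourth-order Runge-Kutta integration step
    f = lambda y: l96_tendency(y, forcing)
    k1 = f(x)
    k2 = f([xi + 0.5 * dt * ki for xi, ki in zip(x, k1)])
    k3 = f([xi + 0.5 * dt * ki for xi, ki in zip(x, k2)])
    k4 = f([xi + dt * ki for xi, ki in zip(x, k3)])
    return [xi + dt / 6.0 * (a + 2 * b + 2 * c + d)
            for xi, a, b, c, d in zip(x, k1, k2, k3, k4)]
```

Python's negative indexing (x[-1], x[-2]) supplies the cyclic wrap-around at the lower end for free; the uniform state x_i = F is an (unstable) fixed point, so assimilation experiments start from a perturbation of it.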
Canon Fodder: Young Adult Literature as a Tool for Critiquing Canonicity
Hateley, Erica
2013-01-01
Young adult literature is a tool of socialisation and acculturation for young readers. This extends to endowing "reading" with particular significance in terms of what literature should be read and why. This paper considers some recent young adult fiction with an eye to its engagement with canonical literature and its representations of…
Non-canonical RAN Translation of CGG Repeats Has Canonical Requirements.
Cox, Diana C; Cooper, Thomas A
2016-04-21
Repeat expansions cause dominantly inherited neurological disorders. In this issue of Molecular Cell, Kearse et al. (2016) examine the requirements for RAN translation of the CGG repeats that cause fragile X-associated tremor/ataxia syndrome, revealing similarities and differences with canonical translation.
Lakkaraju, Sirish Kaushik; Raman, E Prabhu; Yu, Wenbo; MacKerell, Alexander D
2014-06-10
Solute sampling of explicit bulk-phase aqueous environments in grand canonical (GC) ensemble simulations suffers from poor convergence due to low insertion probabilities of the solutes. To address this, we developed an iterative procedure involving Grand Canonical-like Monte Carlo (GCMC) and molecular dynamics (MD) simulations. Each iteration involves GCMC of both the solutes and water followed by MD, with the excess chemical potential (μex) of both the solute and the water oscillated to attain their target concentrations in the simulation system. By periodically varying the μex of the water and solutes over the GCMC-MD iterations, solute exchange probabilities and the spatial distributions of the solutes improved. The utility of the oscillating-μex GCMC-MD method is indicated by its ability to approximate the hydration free energy (HFE) of the individual solutes in aqueous solution as well as in dilute aqueous mixtures of multiple solutes. For seven organic solutes — benzene, propane, acetaldehyde, methanol, formamide, acetate, and methylammonium — the average μex of the solutes and the water converged close to their respective HFEs in both 1 M standard state and dilute aqueous mixture systems. The oscillating-μex GCMC methodology is also able to drive solute sampling in proteins in aqueous environments, as shown using the occluded binding pocket of the T4 lysozyme L99A mutant as a model system. The approach was shown to satisfactorily reproduce the free energy of binding of benzene as well as sample the functional group requirements of the occluded pocket, consistent with the crystal structures of known ligands bound to the L99A mutant as well as their relative binding affinities.
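The idea of oscillating a control parameter (here μex) until a target concentration is attained can be caricatured as a simple feedback loop; the exponential concentration response below is a stand-in toy model, not the GCMC-MD machinery:

```python
import math

def adjust_mu(mu, measured_conc, target_conc, gain=0.2):
    # Raise the excess chemical potential when the measured solute
    # concentration is below target, lower it when above (log-ratio feedback).
    return mu - gain * math.log(measured_conc / target_conc)

mu = 2.0
for _ in range(50):
    measured = math.exp(mu)          # toy response: concentration grows with mu
    mu = adjust_mu(mu, measured, 1.0)
```

In the actual method the "measurement" comes from counting inserted solutes/waters during the GCMC stage, and the converged μex values are what approximate the hydration free energies.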
Dettinger, M.
2006-01-01
In many meteorological and climatological modeling applications, the availability of ensembles of predictions containing very large numbers of members would substantially ease statistical analyses and validations. This study describes and demonstrates an objective approach for generating large ensembles of "additional" realizations from smaller ensembles, where the additional ensemble members share important first- and second-order statistical characteristics and some dynamic relations within the original ensemble. By decomposing the original ensemble members into assuredly independent time-series components (using a form of principal component decomposition) that can then be resampled randomly and recombined, the component-resampling procedure generates additional time series that follow the large- and small-scale structures in the original ensemble members, without requiring any tuning by the user. The method is demonstrated by application to operational medium-range weather forecast ensembles from a single NCEP weather model and application to a multi-model, multi-emission-scenario ensemble of 21st-century climate-change projections. © Springer 2006.
An Adaptive Approach to Mitigate Background Covariance Limitations in the Ensemble Kalman Filter
Song, Hajoon
2010-07-01
A new approach is proposed to address the background covariance limitations arising from undersampled ensembles and unaccounted model errors in the ensemble Kalman filter (EnKF). The method enhances the representativeness of the EnKF ensemble by augmenting it with new members chosen adaptively to add missing information that prevents the EnKF from fully fitting the data to the ensemble. The vectors to be added are obtained by back projecting the residuals of the observation misfits from the EnKF analysis step onto the state space. The back projection is done using an optimal interpolation (OI) scheme based on an estimated covariance of the subspace missing from the ensemble. In the experiments reported here, the OI uses a preselected stationary background covariance matrix, as in the hybrid EnKF–three-dimensional variational data assimilation (3DVAR) approach, but the resulting correction is included as a new ensemble member instead of being added to all existing ensemble members. The adaptive approach is tested with the Lorenz-96 model. The hybrid EnKF–3DVAR is used as a benchmark to evaluate the performance of the adaptive approach. Assimilation experiments suggest that the new adaptive scheme significantly improves the EnKF behavior when it suffers from small size ensembles and neglected model errors. It was further found to be competitive with the hybrid EnKF–3DVAR approach, depending on ensemble size and data coverage.
Directory of Open Access Journals (Sweden)
Jogendra Kushwah
2013-06-01
Full Text Available The classification of free-radical genes for cancer diseases is a challenging task in biomedical data engineering. Various classifiers have been used to improve gene selection for cancer classification, but individual classifiers are poorly validated, so an ensemble classifier is used for cancer gene classification: a neural network classifier combined with a random forest. A random forest is an ensemble technique in which the classes predicted at the leaf nodes of a number of tree classifiers are combined. In this paper we combine a neural network with a random forest ensemble classifier for the classification of selected cancer genes for the diagnostic analysis of cancer diseases. The proposed method differs from most ensemble classifier methods, which follow an input-output paradigm of neural networks, in that the members of the ensemble are selected from a set of neural network classifiers and the number of classifiers is determined during the growing procedure of the forest. Furthermore, the proposed method produces an ensemble that is not only accurate but also diverse, ensuring the two important properties that should characterize an ensemble classifier. For empirical evaluation of the proposed method we used UCI cancer disease data sets for classification. Our experimental results show better performance in comparison with random forest classification alone.
phenix.ensemble_refinement: a test study of apo and holo BACE1
Burnley, B.T.; Gros, P.
2013-01-01
phenix.ensemble_refinement (Burnley et al. 2012) combines molecular dynamics (MD) simulations with X-ray structure refinement to generate ensemble models fitted to diffraction data. It is an evolution of the 'time-averaging' method first proposed by Gros et al. in 1990 (Gros, van Gunsteren, and H
Kadoura, Ahmad
2011-06-06
Lennard-Jones (L-J) and Buckingham exponential-6 (exp-6) potential models were used to produce isotherms for methane at temperatures below and above the critical temperature. A molecular simulation approach, specifically Monte Carlo simulation, was employed to create these isotherms, working with both the canonical and Gibbs ensembles. Simulations in the canonical ensemble with each model were conducted to estimate pressures at a range of temperatures above the critical temperature of methane. The results were collected and compared to experimental data in the literature; both models showed good agreement with the experimental data. In parallel, simulations below the critical temperature were run in the Gibbs ensemble using the L-J model only. Upon comparing the results with experimental ones, a good fit was obtained with small deviations. The work was further developed by adding statistical analyses in order to achieve a better understanding and interpretation of the quantities estimated by the simulations. Methane phase diagrams were successfully reproduced by an efficient molecular simulation technique with different potential models. This relatively simple demonstration shows how powerful molecular simulation methods can be; hence, further applications to more complicated systems are envisaged. Prediction of the phase behavior of elemental sulfur in sour natural gases has been an interesting and challenging field in the oil and gas industry. Determination of elemental sulfur solubility conditions helps avoid the problems caused by its dissolution in gas production and transportation processes. For this purpose, further enhancement of the methods used is to be considered in order to successfully simulate the phase behavior of elemental sulfur in sour natural gas mixtures.
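A minimal Metropolis Monte Carlo sketch in the canonical (NVT) ensemble with a Lennard-Jones potential, in the spirit of the simulations described above. All parameters (particle number, box size, reduced temperature, step size, sweep count) are toy values chosen for illustration, not those of the methane study.

```python
import numpy as np

rng = np.random.default_rng(1)
N, L, T = 20, 6.0, 2.0          # particles, box length, reduced temperature (toy values)
beta = 1.0 / T
pos = rng.uniform(0, L, size=(N, 3))

def pair_energy(ri, rj):
    """Lennard-Jones pair energy with minimum-image convention, cut at L/2."""
    d = ri - rj
    d -= L * np.round(d / L)    # periodic boundary conditions
    r2 = d @ d
    if r2 > (L / 2) ** 2:
        return 0.0
    inv6 = (1.0 / r2) ** 3
    return 4.0 * (inv6 * inv6 - inv6)

def particle_energy(pos, i, ri):
    """Interaction energy of particle i (at trial position ri) with all others."""
    return sum(pair_energy(ri, pos[j]) for j in range(len(pos)) if j != i)

accepted, n_steps = 0, 2000
for step in range(n_steps):
    i = rng.integers(N)
    trial = pos[i] + rng.uniform(-0.2, 0.2, size=3)
    dE = particle_energy(pos, i, trial) - particle_energy(pos, i, pos[i])
    if dE <= 0 or rng.random() < np.exp(-beta * dE):   # Metropolis criterion
        pos[i] = trial % L
        accepted += 1
acc_rate = accepted / n_steps
```

In a production code one would add tail corrections, equilibration, and pressure estimation via the virial; a Gibbs-ensemble run additionally exchanges particles and volume between two boxes.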
Large unbalanced credit scoring using Lasso-logistic regression ensemble.
Directory of Open Access Journals (Sweden)
Wang, Hong; Xu, Qingsong; Zhou, Lifeng
2015-01-01
Full Text Available Recently, various ensemble learning methods with different base classifiers have been proposed for credit scoring problems. However, for various reasons, there has been little research using logistic regression as the base classifier. In this paper, given large unbalanced data, we consider the plausibility of ensemble learning using regularized logistic regression as the base classifier to deal with credit scoring problems. In this research, the data is first balanced and diversified by clustering and bagging algorithms. Then we apply a Lasso-logistic regression learning ensemble to evaluate the credit risks. We show that the proposed algorithm outperforms popular credit scoring models such as decision tree, Lasso-logistic regression and random forests in terms of AUC and F-measure. We also provide two importance measures for the proposed model to identify important variables in the data.
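The balance-then-bag-then-Lasso pipeline can be sketched as follows. This is a hedged illustration, not the authors' code: it fits L1-regularized logistic regression by proximal gradient descent on class-balanced bootstrap samples of synthetic unbalanced data and averages the predicted probabilities; the clustering step of the original method is simplified here to class-balanced resampling.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy unbalanced binary data (illustrative, not a real credit data set)
n, p = 400, 8
X = rng.standard_normal((n, p))
w_true = np.array([2.0, -1.5, 0, 0, 1.0, 0, 0, 0])
prob = 1 / (1 + np.exp(-(X @ w_true - 2.0)))       # the shift makes positives rare
y = (rng.random(n) < prob).astype(float)

def fit_lasso_logistic(X, y, lam=0.05, lr=0.1, iters=500):
    """L1-regularized logistic regression via proximal gradient descent (ISTA)."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(iters):
        z = 1 / (1 + np.exp(-(X @ w + b)))
        g = X.T @ (z - y) / len(y)
        w = w - lr * g
        w = np.sign(w) * np.maximum(np.abs(w) - lr * lam, 0.0)  # soft threshold
        b -= lr * np.mean(z - y)
    return w, b

def bagged_predict(X_train, y_train, X_test, B=10):
    """Average probabilities over models fit on class-balanced bootstrap bags."""
    preds = []
    pos, neg = np.where(y_train == 1)[0], np.where(y_train == 0)[0]
    k = min(len(pos), len(neg))
    for _ in range(B):
        idx = np.concatenate([rng.choice(pos, k), rng.choice(neg, k)])
        w, b = fit_lasso_logistic(X_train[idx], y_train[idx])
        preds.append(1 / (1 + np.exp(-(X_test @ w + b))))
    return np.mean(preds, axis=0)

p_hat = bagged_predict(X, y, X)
auc_like = p_hat[y == 1].mean() > p_hat[y == 0].mean()  # sanity: positives score higher
```

The soft-threshold step is what makes the base learner a Lasso: coefficients of uninformative columns are driven exactly to zero in each bag.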
Properties of the Affine Invariant Ensemble Sampler in high dimensions
Huijser, David; Brewer, Brendon J
2015-01-01
We present theoretical and practical properties of the affine-invariant ensemble sampler Markov chain Monte Carlo method. In high dimensions, the affine-invariant ensemble sampler shows unusual and undesirable properties. We demonstrate this with an $n$-dimensional correlated Gaussian toy problem with a known mean and covariance structure, and analyse the burn-in period. The burn-in period seems to be short; however, upon closer inspection we discover that the mean and the variance of the target distribution do not match the expected, known values. This problem becomes greater as $n$ increases. We therefore conclude that the affine-invariant ensemble sampler should be used with caution in high-dimensional problems. We also present some theoretical results explaining this behaviour.
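For reference, the affine-invariant ensemble sampler is built around the Goodman-Weare "stretch move", which can be implemented in a few lines. The sketch below targets a 2-D standard Gaussian with assumed toy settings (20 walkers, stretch parameter a = 2, fixed sweep counts); it is a minimal illustration, not the authors' test setup.

```python
import numpy as np

rng = np.random.default_rng(3)

def log_target(x):
    """Standard 2-D Gaussian log-density (toy target)."""
    return -0.5 * np.sum(x**2)

def stretch_move_step(walkers, log_p, a=2.0):
    """One sweep of the Goodman-Weare affine-invariant stretch move."""
    K, d = walkers.shape
    for k in range(K):
        j = rng.integers(K - 1)
        j = j if j < k else j + 1                        # complementary walker, j != k
        z = ((a - 1.0) * rng.random() + 1.0) ** 2 / a    # draw z with g(z) ~ 1/sqrt(z)
        proposal = walkers[j] + z * (walkers[k] - walkers[j])
        log_ratio = (d - 1) * np.log(z) + log_p(proposal) - log_p(walkers[k])
        if np.log(rng.random()) < log_ratio:
            walkers[k] = proposal
    return walkers

K, d = 20, 2
walkers = rng.standard_normal((K, d)) + 5.0   # deliberately offset start
chain = []
for step in range(1500):
    walkers = stretch_move_step(walkers, log_target)
    if step >= 500:                           # discard burn-in sweeps
        chain.append(walkers.copy())
samples = np.concatenate(chain)               # (1000 sweeps x 20 walkers, 2)
est_mean = samples.mean(axis=0)
```

Because the proposal is built purely from differences between walkers, the move is invariant under affine transformations; the pathologies the paper reports appear when the dimension grows large relative to the walker count.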
Ensemble-based Probabilistic Forecasting at Horns Rev
DEFF Research Database (Denmark)
Pinson, Pierre; Madsen, Henrik
2009-01-01
of probabilistic forecasts, the resolution of which may be maximized by using meteorological ensemble predictions as input. The paper concentrates on the test case of the Horns Rev wind farm over a period of approximately 1 year, in order to describe, apply and discuss a complete ensemble-based probabilistic...... the benefit of yielding predictive distributions that are of increased reliability (in a probabilistic sense) in comparison with the raw ensemble forecasts, at the same time taking advantage of their high resolution. Copyright (C) 2008 John Wiley & Sons, Ltd....... are then converted into predictive distributions with an original adaptive kernel dressing method. The shape of the kernels is driven by a mean-variance model, the parameters of which are recursively estimated in order to maximize the overall skill of the obtained predictive distributions. Such a methodology has...
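The kernel-dressing idea, converting a finite ensemble into a continuous predictive distribution whose kernel width is tuned to maximize skill, can be sketched as follows. The synthetic data, the Gaussian kernel, and the grid search over a single width parameter are all simplifying assumptions; the paper instead drives the kernel shape with a recursively estimated mean-variance model.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy training record: M-member ensemble forecasts with matching observations
T, M = 300, 10
truth = rng.normal(0.0, 1.0, size=T)
ens = truth[:, None] + rng.normal(0.0, 0.7, size=(T, M))   # raw ensemble members

def log_score(ens, obs, sigma):
    """Mean log predictive density of Gaussian kernel dressing with width sigma."""
    z = (obs[:, None] - ens) / sigma
    dens = np.exp(-0.5 * z**2) / (np.sqrt(2.0 * np.pi) * sigma)
    return float(np.mean(np.log(dens.mean(axis=1) + 1e-300)))

# Pick the kernel width that maximizes overall skill (log score) on the record
sigmas = np.linspace(0.05, 2.0, 40)
scores = [log_score(ens, truth, s) for s in sigmas]
best_sigma = float(sigmas[int(np.argmax(scores))])
```

Too small a width produces overconfident, unreliable distributions; too large a width throws away the ensemble's resolution, which is exactly the trade-off the skill score arbitrates.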
Exact and approximate ensemble treatments of thermal pairing in a multilevel model
Hung, N. Quang; Dang, N. Dinh
2009-05-01
A systematic comparison is conducted for pairing properties of finite systems at nonzero temperature as predicted by the exact solutions of the pairing problem embedded in three principal statistical ensembles, as well as the unprojected (FTBCS1+SCQRPA) and Lipkin-Nogami projected (FTLN1+SCQRPA) theories that include the quasiparticle number fluctuation and coupling to pair vibrations within the self-consistent quasiparticle random-phase approximation. The numerical calculations are performed for the pairing gap, total energy, heat capacity, entropy, and microcanonical temperature within the doubly folded equidistant multilevel pairing model. The FTLN1+SCQRPA predictions agree best with the exact grand-canonical results. In general, all approaches clearly show that the superfluid-normal phase transition is smoothed out in finite systems. A novel formula is suggested for extracting the empirical pairing gap in reasonable agreement with the exact canonical results.
Molecular Dynamics Simulation of Glass Transition Behavior of Polyimide Ensemble
Institute of Scientific and Technical Information of China (English)
[No author listed]
2001-01-01
The effect of chromophores on the glass transition temperature (Tg) of a polyimide ensemble has been investigated by means of molecular dynamics (MD) simulation in conjunction with barrier analysis. The simulated Tg shows good agreement with the experimental value. This study showed that MD simulation can conveniently estimate the effect of chromophores on the Tg of a polyimide ensemble, whereas a simpler estimation approach showed a surprisingly large deviation of Tg from experiment. At the same time, a polyimide structure with a higher barrier energy was designed and validated by MD simulation.
Probabilistic Determination of Native State Ensembles of Proteins
DEFF Research Database (Denmark)
Olsson, Simon; Vögeli, Beat Rolf; Cavalli, Andrea
2014-01-01
The motions of biological macromolecules are tightly coupled to their functions. However, while the study of fast motions has become increasingly feasible in recent years, the study of slower, biologically important motions remains difficult. Here, we present a method to construct native state ensembles of proteins by the combination of physical force fields and experimental data through modern statistical methodology. As an example, we use NMR residual dipolar couplings to determine a native state ensemble of the extensively studied third immunoglobulin binding domain of protein G (GB3...
Classical and Quantum Ensembles via Multiresolution. II. Wigner Ensembles
Fedorova, A N; Fedorova, Antonina N.; Zeitlin, Michael G.
2004-01-01
We present the application of variational-wavelet analysis to the analysis of quantum ensembles in the Wigner framework. (Naive) deformation quantization, multiresolution representations and the variational approach are the key points. We construct the solutions of Wigner-like equations via multiscale expansions in generalized coherent states or high-localized nonlinear eigenmodes in the basis of compactly supported wavelets and wavelet packets. We demonstrate the appearance of (stable) localized patterns (waveletons) and consider entanglement and decoherence as possible applications.
Nanobiosensing with Arrays and Ensembles of Nanoelectrodes
Directory of Open Access Journals (Sweden)
Najmeh Karimian
2016-12-01
Full Text Available Since the first reports dating back to the mid-1990s, ensembles and arrays of nanoelectrodes (NEEs and NEAs, respectively) have gained an important role as advanced electroanalytical tools thanks to their unique characteristics, which include, among others, dramatically improved signal/noise ratios, enhanced mass transport and suitability for extreme miniaturization. From the year 2000 onward, these properties have been exploited to develop electrochemical biosensors in which the surfaces of NEEs/NEAs have been functionalized with biorecognition layers, using immobilization modes able to take maximum advantage of the special morphology and composite nature of their surface. This paper presents an updated overview of this field. It consists of two parts. In the first, we discuss nanofabrication methods and the principles of functioning of NEEs/NEAs, focusing, in particular, on those features which are important for the development of highly sensitive and miniaturized biosensors. In the second part, we review the literature dealing with the bioanalytical and biosensing applications of sensors based on biofunctionalized arrays/ensembles of nanoelectrodes, focusing our attention on the most recent advances published in the last five years. The goal of this review is both to furnish fundamental knowledge to researchers starting their activity in this field and to provide experienced scientists with critical information on recent achievements which can stimulate new ideas for future developments.
Ensemble methods for large scale inverse problems
Heemink, A.W.; Umer Altaf, M.; Barbu, A.L.; Verlaan, M.
2013-01-01
Variational data assimilation, also sometimes simply called the ‘adjoint method’, is used very often for large scale model calibration problems. Using the available data, the uncertain parameters in the model are identified by minimizing a certain cost function that measures the difference between t
Ensemble data assimilation for the reconstruction of mantle circulation
Bocher, Marie; Coltice, Nicolas; Fournier, Alexandre; Tackley, Paul
2016-04-01
The surface tectonics of the Earth is the result of mantle dynamics. This link between internal and surface dynamics can be used to reconstruct the evolution of mantle circulation. This is classically done by imposing plate tectonics reconstructions as boundary conditions on numerical models of mantle convection. However, this technique does not account for uncertainties in plate tectonics reconstructions and does not allow any dynamical feedback of mantle dynamics on surface tectonics to develop. Mantle convection models are now able to produce surface tectonics comparable to that of the Earth to first order. We capitalize on these convection models to propose a more consistent integration of plate tectonics reconstructions into mantle convection models. For this purpose, we use the ensemble Kalman filter. This method has been developed and successfully applied to meteorology, oceanography and even more recently outer core dynamics. It consists in integrating sequentially a time series of data into a numerical model, starting from an ensemble of possible initial states. The initial ensemble of states is designed to represent an approximation of the probability density function (pdf) of the a priori state of the system. Whenever new observations are available, each member of the ensemble states is corrected considering both the approximated pdf of the state, and the pdf of the new data. Between two observation times, each ensemble member evolution is computed independently, using the convection model. This technique provides at each time an approximation of the pdf of the state of the system, in the form of a finite ensemble of states. We perform synthetic experiments to assess the efficiency of this method for the reconstruction of mantle circulation.
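A scalar toy version of the sequential scheme described above, an ensemble propagated independently between observation times and corrected whenever a new observation arrives, can be written as follows. The "model" is an arbitrary damped map standing in for mantle convection, and all noise levels and sizes are assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)

def model_step(x):
    """Stand-in 'convection' model: a damped nonlinear map (purely illustrative)."""
    return 0.9 * x + 0.1 * np.sin(x)

N, T = 50, 30                        # ensemble size, number of observation times
truth = 2.0
E = rng.normal(0.0, 1.0, size=N)     # initial ensemble approximates the prior pdf
obs_err = 0.3

for t in range(T):
    truth = model_step(truth)
    E = model_step(E) + rng.normal(0.0, 0.05, size=N)   # independent member forecasts
    y = truth + rng.normal(0.0, obs_err)                # a new observation arrives
    # Analysis: correct every member using the ensemble variance and the obs pdf
    Pf = np.var(E, ddof=1)
    K = Pf / (Pf + obs_err**2)                          # scalar Kalman gain
    y_pert = y + rng.normal(0.0, obs_err, size=N)       # perturbed observations
    E = E + K * (y_pert - E)

err = abs(E.mean() - truth)
```

At every time the ensemble is a finite-sample approximation of the state pdf; the gain weights the ensemble spread against the observation uncertainty, which is the mechanism that lets reconstruction uncertainty (here, in plate reconstructions) propagate into the analysis.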
Paul Weiss and the genesis of canonical quantization
Rickles, Dean; Blum, Alexander
2015-12-01
This paper describes the life and work of a figure who, we argue, was of primary importance during the early years of field quantisation and (albeit more indirectly) quantum gravity. A student of Dirac and Born, he was interned in Canada during the second world war as an enemy alien and after his release never seemed to regain a good foothold in physics, identifying thereafter as a mathematician. He developed a general method of quantizing (linear and non-linear) field theories based on the parameters labelling an arbitrary hypersurface. This method (the `parameter formalism' often attributed to Dirac), though later discarded, was employed (and viewed at the time as an extremely important tool) by the leading figures associated with canonical quantum gravity: Dirac, Pirani and Schild, Bergmann, DeWitt, and others. We argue that he deserves wider recognition for this and other innovations.
2012-01-01
Licence; In 1935, a group of French mathematicians set out to rebuild the whole edifice of mathematics ("la mathématique", in the singular, to emphasize its unity) along the formalist lines of Hilbert. The founding members were Henri Cartan, Claude Chevalley, Jean Delsarte, Jean Dieudonné and André Weil, later joined by René de Possel. Thus, in July 1935, during a seminar in Auvergne, the group 'Nicolas Bourbaki' was created. The name of this association in fact refers to an anecdote that...
When Canonical Quantization Fails, Here is How to Fix It
Klauder, John R.
2016-01-01
Following Dirac, the rules of canonical quantization include classical and quantum contact transformations of classical and quantum phase space variables. While arbitrary classical canonical coordinate transformations exist, that is not the case for some analogous quantum canonical coordinate transformations. This failure is due to the rigid connection of quantum variables arising by promoting the corresponding classical variable from a $c$-number to a $q$-number. A different relationship of $...
Canonical symmetry properties of the constrained singular generalized mechanical system
Institute of Scientific and Technical Information of China (English)
Li Ai-Min; Jiang Jin-Huan; Li Zi-Ping
2003-01-01
Based on generalized Appell-Chetaev constraint conditions, and taking the inherent constraints of a singular Lagrangian into account, the generalized canonical equations for a general mechanical system with a singular higher-order Lagrangian and subsidiary constraints are formulated. The canonical symmetries in phase space for such a system are studied, and the Noether theorem and its inversion theorem in the generalized canonical formalism are established.
Canonical terminal patterning is an evolutionary novelty.
Duncan, Elizabeth J; Benton, Matthew A; Dearden, Peter K
2013-05-01
Patterning of the terminal regions of the Drosophila embryo is achieved by an exquisitely regulated signal that passes between the follicle cells of the ovary, and the developing embryo. This pathway, however, is missing or modified in other insects. Here we trace the evolution of this pathway by examining the origins and expression of its components. The three core components of this pathway: trunk, torso and torso-like have different evolutionary histories and have been assembled step-wise to form the canonical terminal patterning pathway of Drosophila and Tribolium. Trunk, torso and a gene unrelated to terminal patterning, prothoracicotropic hormone (PTTH), show an intimately linked evolutionary history, with every holometabolous insect, except the honeybee, possessing both PTTH and torso genes. Trunk is more restricted in its phylogenetic distribution, present only in the Diptera and Tribolium and, surprisingly, in the chelicerate Ixodes scapularis, raising the possibility that trunk and torso evolved earlier than previously thought. In Drosophila torso-like restricts the activation of the terminal patterning pathway to the poles of the embryo. Torso-like evolved in the pan-crustacean lineage, but based on expression of components of the canonical terminal patterning system in the hemimetabolous insect Acyrthosiphon pisum and the holometabolous insect Apis mellifera, we find that the canonical terminal-patterning system is not active in these insects. We therefore propose that the ancestral function of torso-like is unrelated to terminal patterning and that torso-like has become co-opted into terminal patterning in the lineage leading to Coleoptera and Diptera. We also show that this co-option has not resulted in changes to the molecular function of this protein. Torso-like from the pea aphid, honeybee and Drosophila, despite being expressed in different patterns, are functionally equivalent. We propose that co-option of torso-like into restricting the activity
The Wilson loop in the Gaussian Unitary Ensemble
Gurau, Razvan
2016-01-01
Using the supersymmetric formalism we compute exactly at finite $N$ the expectation of the Wilson loop in the Gaussian Unitary Ensemble and derive an exact formula for the spectral density at finite $N$. We obtain the same result by a second method relying on enumerative combinatorics and show that it leads to a novel proof of the Harer-Zagier series formula.
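The finite-$N$ expectation can be checked numerically. The sketch below is not the paper's supersymmetric computation: it simply Monte Carlo samples GUE matrices and compares the Wilson loop $\langle \frac{1}{N}\mathrm{Tr}\,e^{iH}\rangle$ against the large-$N$ semicircle-law prediction; the scaling convention (limiting spectrum supported on $[-2,2]$) is our choice.

```python
import numpy as np

rng = np.random.default_rng(6)

def gue_eigenvalues(N):
    """Eigenvalues of an N x N GUE matrix, scaled so the limiting spectrum is [-2, 2]."""
    A = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
    H = (A + A.conj().T) / 2
    return np.linalg.eigvalsh(H) / np.sqrt(N)

# Monte Carlo estimate of the Wilson loop <(1/N) Tr exp(i H)>
N, n_samples = 60, 200
W = np.mean([np.mean(np.exp(1j * gue_eigenvalues(N))) for _ in range(n_samples)])

# Large-N check: integrating exp(i*lam) against the semicircle density rho_sc
lam = np.linspace(-2.0, 2.0, 4001)
rho = np.sqrt(4.0 - lam**2) / (2.0 * np.pi)
W_semicircle = np.sum(rho * np.exp(1j * lam)) * (lam[1] - lam[0])
```

Because $(1/N)\mathrm{Tr}\,e^{iH}$ is a self-averaging linear statistic, already modest values of $N$ and a few hundred samples reproduce the limiting value closely; the exact finite-$N$ formula of the paper captures the remaining $1/N$ corrections.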
Nonlinear reaction coordinate analysis in the reweighted path ensemble
Lechner, W.; Rogal, J.; Juraszek, J.; Ensing, B.; Bolhuis, P.G.
2010-01-01
We present a flexible nonlinear reaction coordinate analysis method for the transition path ensemble based on the likelihood maximization approach developed by Peters and Trout [J. Chem. Phys. 125, 054108 (2006)] . By parametrizing the reaction coordinate by a string of images in a collective variab
Korean Percussion Ensemble ("Samulnori") in the General Music Classroom
Kang, Sangmi; Yoo, Hyesoo
2016-01-01
This article introduces "samulnori" (Korean percussion ensemble), its cultural background, and instructional methods as parts of a classroom approach to teaching upper-level general music. We introduce five of eight sections from "youngnam nong-ak" (a style of samulnori) as a repertoire for teaching Korean percussion music to…
Update on non-canonical microRNAs
2014-01-01
Non-canonical microRNAs are a recently-discovered subset of microRNAs. They structurally and functionally resemble canonical miRNAs, but were found to follow distinct maturation pathways, typically bypassing one or more steps of the classic canonical biogenesis pathway. Non-canonical miRNAs were found to have diverse origins, including introns, snoRNAs, endogenous shRNAs and tRNAs. Our knowledge about their functions remains relatively primitive; however, many interesting discoveries have tak...
The Topology of Canonical Flux Tubes in Flared Jet Geometry
Sander Lavine, Eric; You, Setthivoine
2017-01-01
Magnetized plasma jets are generally modeled as magnetic flux tubes filled with flowing plasma governed by magnetohydrodynamics (MHD). We outline here a more fundamental approach based on flux tubes of canonical vorticity, where canonical vorticity is defined as the circulation of the species’ canonical momentum. This approach extends the concept of magnetic flux tube evolution to include the effects of finite particle momentum and enables visualization of the topology of plasma jets in regimes beyond MHD. A flared, current-carrying magnetic flux tube in an ion-electron plasma with finite ion momentum is thus equivalent to either a pair of electron and ion flow flux tubes, a pair of electron and ion canonical momentum flux tubes, or a pair of electron and ion canonical vorticity flux tubes. We examine the morphology of all these flux tubes for increasing electrical currents, different radial current profiles, different electron Mach numbers, and a fixed, flared, axisymmetric magnetic geometry. Calculations of gauge-invariant relative canonical helicities track the evolution of magnetic, cross, and kinetic helicities in the system, and show that ion flow fields can unwind to compensate for an increasing magnetic twist. The results demonstrate that including a species’ finite momentum can result in a very long collimated canonical vorticity flux tube even if the magnetic flux tube is flared. With finite momentum, particle density gradients must be normal to canonical vorticities, not to magnetic fields, so observations of collimated astrophysical jets could be images of canonical vorticity flux tubes instead of magnetic flux tubes.
The Literary Canon in the Age of New Media
DEFF Research Database (Denmark)
Backe, Hans-Joachim
2015-01-01
and mediality of the canon. In a development that has largely gone unnoticed outside German-speaking countries, new approaches for discussing current and future processes of canonization have been developed in recent years. One pivotal element of this process has been a thorough re-evaluation of new media...... as a touchstone for both defining literature in the digital age and inquiring into the mechanisms of contemporary canon formation. The article is thus aimed at introducing both the specifically German approach to canon developed in recent years and its results to a larger scholarly community....
Transforming differential equations of multi-loop Feynman integrals into canonical form
Meyer, Christoph
2016-01-01
The method of differential equations has been proven to be a powerful tool for the computation of multi-loop Feynman integrals appearing in quantum field theory. It has been observed that in many instances a canonical basis can be chosen, which drastically simplifies the solution of the differential equation. In this paper, an algorithm is presented that computes the transformation to a canonical basis, starting from some basis that is, for instance, obtained by the usual integration-by-parts reduction techniques. The algorithm requires the existence of a rational transformation to a canonical basis, but is otherwise completely agnostic about the differential equation. In particular, it is applicable to problems involving multiple scales and allows for a rational dependence on the dimensional regulator. It is demonstrated that the algorithm is suitable for current multi-loop calculations by presenting its successful application to a number of non-trivial examples.
The Canonical Correlation Analysis on Semen Quality and Serum Heavy Metals in Chinese Young Men
Institute of Scientific and Technical Information of China (English)
Jun-qing WU; Jiang ZHU; Zhan-hai FU; Yin-mei DU; Cui-ling LIANG; Er-sheng GAO; Jian-guo TAO; Qiu-ying YANG; Xiao XU; Wen-juan CAI; Jian GUO; Feng TANG
2003-01-01
Objective: To explore the correlation between serum heavy metals and semen quality in normal Chinese young men. Methods: This study was designed as a multi-center cross-sectional investigation. The subjects consisted of 562 male volunteers who had undergone premarital physical examination in maternal and child health centers in 7 provinces in China. Results: Spearman rank correlation analysis (partial variable: region) shows that serum lead and cadmium are negatively related to the percentage of morphologically normal sperm, but the canonical correlation between semen quality and serum heavy metals is not significant. Canonical correlation analysis among the subjects from Guizhou shows that cadmium is harmful to sperm morphology. In Henan, furthermore, the results show that lead and cadmium could negatively affect sperm viability and morphology. Conclusion: Among all study subjects, the canonical correlation between semen quality and serum heavy metals was not significant; however, results in some regions showed that serum cadmium and lead might be harmful to sperm quality.
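Canonical correlation analysis itself can be computed with a standard QR-plus-SVD recipe. The sketch below uses synthetic data with one shared latent factor (an assumption for illustration, not the study's semen-quality and heavy-metal data) to show how the leading canonical correlation picks up the shared variation between two variable sets.

```python
import numpy as np

rng = np.random.default_rng(7)

def cca(X, Y):
    """Canonical correlations between column sets X and Y (QR + SVD method)."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    qx, _ = np.linalg.qr(X)                    # orthonormal basis of each column space
    qy, _ = np.linalg.qr(Y)
    s = np.linalg.svd(qx.T @ qy, compute_uv=False)
    return np.clip(s, 0.0, 1.0)                # canonical correlations, largest first

# Synthetic data with one shared latent factor (illustrative only)
n = 500
latent = rng.standard_normal(n)
X = np.column_stack([latent + 0.5 * rng.standard_normal(n) for _ in range(3)])
Y = np.column_stack([latent + 0.5 * rng.standard_normal(n),
                     rng.standard_normal(n)])
rhos = cca(X, Y)
```

The singular values of the product of the two orthonormal bases are exactly the cosines of the principal angles between the subspaces, i.e. the canonical correlations; a significance test (e.g. Bartlett's) would then decide whether they differ from zero, which is the step the study reports as non-significant overall.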
Distribution and genetic diversity of functional microorganisms in different CANON reactors.
Liu, Tao; Li, Dong; Zeng, Huiping; Li, Xiangkun; Liang, Yuhai; Chang, Xiaoyan; Zhang, Jie
2012-11-01
Completely autotrophic nitrogen removal over nitrite (CANON) has been regarded as an efficient and economical process for nitrogen removal from wastewater. The distribution and genetic diversity of the functional microorganisms in five lab-scale CANON reactors have been investigated by using some molecular biology methods. Nitrosomonas-like aerobic ammonium oxidizing bacteria (AerAOB) and Candidatus Brocadia-related anaerobic ammonium oxidizing bacteria (AnAOB) were detected as predominant functional microbes in the five reactors while Nitrobacter-like nitrite oxidizing bacteria (NOB) existed only in the systems operated at ambient temperature. Communities of AerAOB and AnAOB were almost similar among the five reactors while the distribution of the functional microbes was either scattered or densely packed. Meanwhile, this study has demonstrated the feasibility of starting up CANON by inoculating conventional activated sludge in low ammonium content at ambient temperature.
Institute of Scientific and Technical Information of China (English)
HUANG YongChang; JIANG YunGuo; LI XinGuo
2007-01-01
Using the method of path-integral quantization for canonical constrained systems in the Becchi-Rouet-Stora-Tyutin (BRST) scheme, the supersymmetric electromagnetic interaction system was quantized. Both the Hamiltonian of the supersymmetric electromagnetic interaction system in phase space and the quantization procedure were simplified. The BRST generator was constructed, and the BRST transformations of the supersymmetric fields were obtained; the effective action was calculated, and the generating functional for the Green functions was achieved; also, the gauge generator was constructed, and the gauge transformation of the system was obtained. Finally, the Ward-Takahashi identities based on the canonical Noether theorem were calculated, and two relations between proper vertices and propagators were obtained.
Gauge-covariant canonical formalism revisited with application to the proton spin decomposition
Lorcé, Cédric
2013-01-01
We revisit the gauge-covariant canonical formalism by separating explicitly physical and gauge degrees of freedom. We show in particular that the gauge-invariant linear and angular momentum operators proposed by Chen et al. can consistently be derived from the standard procedure based on Noether's theorem. Finally, we demonstrate that this approach is essentially equivalent to the gauge-invariant canonical formalism based on the concept of Dirac variables. Because of many similarities with the background field method, the formalism developed here should also be relevant to general relativity and any metric theory.
The difference between the Weil height and the canonical height on elliptic curves
Silverman, Joseph H.
1990-10-01
Estimates for the difference of the Weil height and the canonical height of points on elliptic curves are used for many purposes, both theoretical and computational. In this note we give an explicit estimate for this difference in terms of the j-invariant and discriminant of the elliptic curve. The method of proof, suggested by Serge Lang, is to use the decomposition of the canonical height into a sum of local heights. We illustrate one use for our estimate by computing generators for the Mordell-Weil group in three examples.
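The quantity being bounded can be illustrated numerically: with one common normalization, the canonical height is the limit of $4^{-n}h(x([2^n]P))$, where $h$ is the naive logarithmic Weil height of the $x$-coordinate. The curve, the point, and the number of doublings below are toy choices for illustration, not the paper's examples.

```python
from fractions import Fraction
from math import log

# Curve y^2 = x^3 + a*x + b and a rational point; toy example y^2 = x^3 - 2, P = (3, 5)
a, b = 0, -2
x = Fraction(3)

def dup_x(x, a, b):
    """x-coordinate of [2]P from x(P), via the standard duplication formula."""
    num = x**4 - 2 * a * x**2 - 8 * b * x + a * a
    den = 4 * (x**3 + a * x + b)
    return num / den

def naive_height(x):
    """Logarithmic Weil height of a rational number x = p/q in lowest terms."""
    return log(max(abs(x.numerator), abs(x.denominator)))

# Canonical height (one common normalization) as the limit of 4^{-n} h(x([2^n]P));
# the gap |4^{-n} h - h_canonical| decays geometrically, which is what explicit
# Weil-versus-canonical height bounds control.
estimates = []
for k in range(1, 7):
    x = dup_x(x, a, b)
    estimates.append(naive_height(x) / 4**k)
```

Exact rational arithmetic is essential here: after $n$ doublings the numerator and denominator have roughly $4^n$ times as many digits, which is precisely why the telescoping-limit definition converges so quickly.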
Quantum Repeaters and Atomic Ensembles
DEFF Research Database (Denmark)
Borregaard, Johannes
… a previous protocol, thereby enabling fast local processing, which greatly enhances the distribution rate. We then move on to describe our work on improving the stability of atomic clocks using entanglement. Entanglement can potentially push the stability of atomic clocks to the so-called Heisenberg limit, which is the absolute upper limit of the stability allowed by the Heisenberg uncertainty relation. It has, however, been unclear whether entangled states' enhanced sensitivity to noise would prevent reaching this limit. We have developed an adaptive measurement protocol which circumvents this problem … based on atomic ensembles.
Gibbs Ensembles of Nonintersecting Paths
Borodin, Alexei
2008-01-01
We consider a family of determinantal random point processes on the two-dimensional lattice and prove that members of our family can be interpreted as a kind of Gibbs ensembles of nonintersecting paths. Examples include probability measures on lozenge and domino tilings of the plane, some of which are non-translation-invariant. The correlation kernels of our processes can be viewed as extensions of the discrete sine kernel, and we show that the Gibbs property is a consequence of simple linear relations satisfied by these kernels. The processes depend on infinitely many parameters, which are closely related to parametrization of totally positive Toeplitz matrices.
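To make the kernel concrete, here is a small sketch (our own illustration, not code from the paper) that evaluates the discrete sine kernel and a gap probability via the standard determinantal identity: the probability that a set of sites is empty equals det(I − K) restricted to those sites.

```python
import numpy as np

def discrete_sine_kernel(rho, sites):
    """Discrete sine kernel K(x, y) = sin(pi*rho*(x - y)) / (pi*(x - y)),
    with K(x, x) = rho (the particle density)."""
    sites = np.asarray(sites, dtype=float)
    d = sites[:, None] - sites[None, :]
    # Safe denominator on the diagonal; the np.where picks rho there anyway.
    return np.where(d == 0, rho,
                    np.sin(np.pi * rho * d) / (np.pi * np.where(d == 0, 1.0, d)))

def gap_probability(rho, sites):
    """P(all given lattice sites unoccupied) = det(I - K) on those sites."""
    K = discrete_sine_kernel(rho, sites)
    return float(np.linalg.det(np.eye(len(sites)) - K))

# A single site at density rho is empty with probability 1 - rho.
p = gap_probability(0.3, [0])
```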
Measuring social interaction in music ensembles.
Volpe, Gualtiero; D'Ausilio, Alessandro; Badino, Leonardo; Camurri, Antonio; Fadiga, Luciano
2016-05-05
Music ensembles are an ideal test-bed for quantitative analysis of social interaction. Music is an inherently social activity, and music ensembles offer a broad variety of scenarios which are particularly suitable for investigation. Small ensembles, such as string quartets, are deemed a significant example of self-managed teams, where all musicians contribute equally to a task. In bigger ensembles, such as orchestras, the relationship between a leader (the conductor) and a group of followers (the musicians) clearly emerges. This paper presents an overview of recent research on social interaction in music ensembles with a particular focus on (i) studies from cognitive neuroscience; and (ii) studies adopting a computational approach for carrying out automatic quantitative analysis of ensemble music performances.
Rollins, C.; Barbot, S.; Avouac, J. P.
2014-12-01
The 2010 M=7.2 El Mayor-Cucapah earthquake occurred in the Salton Trough, a region of thinned lithosphere and high heat flow, and the postseismic deformation following this earthquake presents a unique opportunity to study the rheology of extensional environments and the mechanics of ductile flow within and beneath the lithosphere. Previous work [Rollins et al., in prep.] revealed that GPS time series of surface displacement following the earthquake were well fit by a coupled model simulating stress-driven afterslip on the deep extension of the coseismic rupture, Newtonian viscoelastic relaxation in a low-viscosity zone in the lower crust of the Salton Trough aligned with areas of high heat flow, and Newtonian viscoelastic relaxation in a three-dimensional asthenosphere with geometry matching that of the regional lithosphere-asthenosphere boundary inferred from receiver functions. Extending the success of this model to a robust interpretation of the mechanics of deformation at depth requires a better understanding of uncertainties and trade-offs between parameters (depth of the brittle-ductile transition, viscosities of the lower crust and asthenosphere, geometry of viscosity anomalies in the Salton Trough, frictional parameters of the possible downdip extensions of the coseismic rupture, and correlations among these parameters). We will show results from recent work that uses a newly developed method to efficiently explore this model space in a Bayesian sense. The method employs the Neighborhood Algorithm of Sambridge [1999], which makes use of Voronoi cells to optimize the search in the model space, samples regions that contain models of acceptable data fit, and extracts robust information from the ensemble of models obtained. The method is particularly well suited to identifying a class of models that fit geodetic data approximately equally well, allowing us to present and discuss a range of possible deformation mechanisms. This method can be applied to any study of …
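A highly simplified sketch of the Neighborhood Algorithm's sampling idea follows; this is our own toy version, not the authors' implementation. The real algorithm performs an efficient random walk inside each Voronoi cell, whereas this sketch approximates "sample inside the cell of a good model" by rejection: a uniform candidate is kept only if its nearest ensemble member is that model.

```python
import numpy as np

def neighborhood_search(misfit, bounds, n_init=64, n_resample=16, n_iter=20, seed=None):
    """Toy Neighborhood-Algorithm-style search: keep an ensemble of sampled
    models and, at each iteration, draw new models (approximately) uniformly
    inside the Voronoi cells of the current best ones."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    models = rng.uniform(lo, hi, size=(n_init, len(lo)))
    for _ in range(n_iter):
        f = np.array([misfit(m) for m in models])
        best = models[np.argsort(f)[:n_resample]]
        new = []
        for b in best:
            # Rejection step: candidate must fall in b's Voronoi cell.
            for _ in range(100):
                cand = rng.uniform(lo, hi)
                d = np.linalg.norm(models - cand, axis=1)
                if np.allclose(models[np.argmin(d)], b):
                    new.append(cand)
                    break
        if new:
            models = np.vstack([models, new])
    f = np.array([misfit(m) for m in models])
    return models[np.argmin(f)], models   # best model, full ensemble

# Toy misfit: quadratic bowl centred at (0.3, -0.2).
best, ensemble = neighborhood_search(
    lambda m: (m[0] - 0.3) ** 2 + (m[1] + 0.2) ** 2,
    bounds=[(-1, 1), (-1, 1)], seed=0)
```

The returned ensemble, not just the single best model, is the point of the method: it characterizes the whole region of acceptable data fit, which is what enables the Bayesian appraisal described in the abstract.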
Consistency of canonical formulation of Horava gravity
Energy Technology Data Exchange (ETDEWEB)
Soo, Chopin, E-mail: cpsoo@mail.ncku.edu.tw [Department of Physics, National Cheng Kung University, Tainan, Taiwan (China)
2011-09-22
Both the non-projectable and projectable version of Horava gravity face serious challenges. In the non-projectable version, the constraint algebra is seemingly inconsistent. The projectable version lacks a local Hamiltonian constraint, thus allowing for an extra graviton mode which can be problematic. A new formulation (based on arXiv:1007.1563) of Horava gravity which is naturally realized as a representation of the master constraint algebra (instead of the Dirac algebra) studied by loop quantum gravity researchers is presented. This formulation yields a consistent canonical theory with first class constraints; and captures the essence of Horava gravity in retaining only spatial diffeomorphisms as the physically relevant non-trivial gauge symmetry. At the same time the local Hamiltonian constraint is equivalently enforced by the master constraint.
The Deuteron as a Canonically Quantized Biskyrmion
Acus, A; Norvaisas, E; Riska, D O
2003-01-01
The ground state configurations of the solutions to Skyrme's topological soliton model for systems with baryon number larger than 1 are well approximated by rational map ansätze, without individual baryon coordinates. Here canonical quantization of the baryon number 2 system, which represents the deuteron, is carried out in the rational map approximation. The solution, which is described by the 6 parameters of the chiral group $SU(2)\times SU(2)$, is stabilized by the quantum corrections. The matter density of the variational quantized solution has the required exponential large-distance falloff and the quantum numbers of the deuteron. As for the axially symmetric semiclassical solution, however, the radius and the quadrupole moment are only about half as large as the corresponding empirical values. The quantized deuteron solution is constructed for representations of arbitrary dimension of the chiral group.
Linear canonical transforms theory and applications
Kutay, M; Ozaktas, Haldun; Sheridan, John
2016-01-01
This book provides a clear and accessible introduction to the essential mathematical foundations of linear canonical transforms from a signals and systems perspective. Substantial attention is devoted to how these transforms relate to optical systems and wave propagation. There is extensive coverage of sampling theory and fast algorithms for numerically approximating the family of transforms. Chapters on topics ranging from digital holography to speckle metrology provide a window on the wide range of applications. This volume will serve as a reference for researchers in the fields of image and signal processing, wave propagation, optical information processing and holography, optical system design and modeling, and quantum optics. It will be of use to graduate students in physics and engineering, as well as for scientists in other areas seeking to learn more about this important yet relatively unfamiliar class of integral transformations.
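One structural fact from this theory is easy to demonstrate numerically: each linear canonical transform is parameterized by a real 2×2 matrix with unit determinant, and composing two LCTs corresponds to multiplying their parameter matrices. The sketch below is our own illustration under the common signals-and-systems convention (sign conventions vary between references).

```python
import numpy as np

# An LCT is parameterized by [[a, b], [c, d]] with ad - bc = 1.
FOURIER = np.array([[0.0, 1.0], [-1.0, 0.0]])   # ordinary Fourier transform

def chirp_mult(tau):
    """Multiplication by a quadratic-phase (chirp) factor: lower-triangular."""
    return np.array([[1.0, 0.0], [-tau, 1.0]])

def fresnel(z):
    """Fresnel (free-space) propagation over distance z: upper-triangular."""
    return np.array([[1.0, z], [0.0, 1.0]])

def compose(*mats):
    """Parameter matrix of the composed system, applied right to left."""
    out = np.eye(2)
    for m in mats:
        out = m @ out
    return out

# Four successive Fourier transforms return a signal to itself.
identity = compose(FOURIER, FOURIER, FOURIER, FOURIER)
```

This matrix bookkeeping is also how optical systems are modeled: a cascade of lenses (chirp multiplications) and free-space sections (Fresnel matrices) reduces to a single unimodular matrix, hence a single LCT.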
New Canonical Variables for d=11 Supergravity
Melosch, S; Melosch, Stephan; Nicolai, Hermann
1998-01-01
A set of new canonical variables for $d=11$ supergravity is proposed which renders the supersymmetry variations and the supersymmetry constraint polynomial. The construction is based on the $SO(1,2)\times SO(16)$ invariant reformulation of $d=11$ supergravity given in previous work, and has some similarities with Ashtekar's reformulation of Einstein's theory. The new bosonic variables fuse the gravitational degrees of freedom with those of the three-index photon $A_{MNP}$ in accordance with the hidden symmetries of the dimensionally reduced theory. Although $E_8$ is not a symmetry of the theory, the bosonic sector exhibits a remarkable $E_8$ structure, hinting at the existence of a novel type of "exceptional geometry".
Families of Log Canonically Polarized Varieties
Dundon, Ariana
2011-01-01
Determining the number of singular fibers in a family of varieties over a curve is a generalization of Shafarevich's conjecture and has implications for the types of subvarieties that can appear in the corresponding moduli stack. We consider families of log canonically polarized varieties over $\mathbb{P}^1$, i.e. families $g:(Y,D)\to \mathbb{P}^1$ where $D$ is an effective snc divisor and the sheaf $\omega_{Y/\mathbb{P}^1}(D)$ is $g$-ample. After first defining what it means for fibers of such a family to be singular, we show that with the addition of certain mild hypotheses (the fibers have finite automorphism group, $\mathcal{O}_Y(D)$ is semi-ample, and the components of $D$ must avoid the singular locus of the fibers and intersect the fibers transversely), such a family must either be isotrivial or contain at least 3 singular fibers.
DEFF Research Database (Denmark)
Sunyer Pinya, Maria Antonia; Madsen, Henrik; Rosbjerg, Dan
2013-01-01
Daily precipitation indices from an ensemble of RCMs driven by the 40-yr ECMWF Re-Analysis (ERA-40) and an ensemble of the same RCMs driven by different general circulation models (GCMs) are analyzed. Two different methods are used to estimate the amount of independent information in the ensembles; the results depend on the method and precipitation index considered. The results also show that the main cause of interdependency in the ensemble is the use of the same RCM driven by different GCMs. This study shows that the precipitation outputs from the RCMs in the ENSEMBLES project cannot be considered independent. If the interdependency between RCMs is not taken into account, the uncertainty in the RCM simulations of current regional climate may be underestimated. This will in turn lead to an underestimation of the uncertainty in future precipitation projections. © 2013 American Meteorological Society.
Interpreting Tree Ensembles with inTrees
Deng, Houtao
2014-01-01
Tree ensembles such as random forests and boosted trees are accurate but difficult to understand, debug and deploy. In this work, we provide the inTrees (interpretable trees) framework that extracts, measures, prunes and selects rules from a tree ensemble, and calculates frequent variable interactions. A rule-based learner, referred to as the simplified tree ensemble learner (STEL), can also be formed and used for future prediction. The inTrees framework can be applied to both classification and regression problems.
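The core extraction step can be sketched in a few lines: every root-to-leaf path of every tree becomes one condition-plus-prediction rule. This is our own self-contained illustration in the spirit of inTrees (the actual framework is an R package; the toy trees and the dict representation here are hypothetical stand-ins).

```python
# A tree is either a leaf (a class label) or a dict:
#   {"feature": name, "threshold": t, "left": subtree, "right": subtree}

def extract_rules(tree, conds=()):
    """Flatten one decision tree into (conditions, predicted class) rules,
    one rule per root-to-leaf path."""
    if not isinstance(tree, dict):                       # leaf node
        return [(list(conds), tree)]
    f, t = tree["feature"], tree["threshold"]
    return (extract_rules(tree["left"],  conds + (f"{f} <= {t}",)) +
            extract_rules(tree["right"], conds + (f"{f} > {t}",)))

# Two toy trees of a "forest" over features x and y.
forest = [
    {"feature": "x", "threshold": 2,
     "left": "A",
     "right": {"feature": "y", "threshold": 5, "left": "A", "right": "B"}},
    {"feature": "y", "threshold": 3, "left": "A", "right": "B"},
]

rules = [r for tree in forest for r in extract_rules(tree)]
```

In the full framework, the extracted rule set would then be measured (frequency, error), pruned of redundant conditions, and a selected subset assembled into the STEL rule-based learner.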