Abolishing the maximum tension principle
Dabrowski, Mariusz P
2015-01-01
We find a series of example theories for which the relativistic limit of maximum tension $F_{max} = c^{4}/4G$ represented by the entropic force can be abolished. Among them are the varying constants theories, some generalized entropy models applied to both cosmological and black hole horizons, as well as some generalized uncertainty principle models.
Maximum Entropy Principle Based Estimation of Performance Distribution in Queueing Theory
He, Dayi; Li, Ran; Huang, Qi; Lei, Ping
2014-01-01
In related research on queuing systems, in order to determine the system state, there is a widespread practice to assume that the system is stable and that the distributions of the customer arrival ratio and service ratio are known. In this study, the queuing system is treated as a black box, without any assumptions on the distributions of the arrival and service ratios and keeping only the assumption on the stability of the queuing system. By applying the principle of maximum entropy, the performance distribution of queuing systems is derived from some easily accessible indexes, such as the capacity of the system, the mean number of customers in the system, and the mean utilization of the servers. Some special cases are modeled and their performance distributions are derived. Using the chi-square goodness of fit test, the accuracy and generality of the maximum entropy approach for practical purposes are demonstrated. PMID:25207992
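The construction this abstract describes, inferring a queue-length distribution from only the system capacity and the mean number of customers, can be sketched numerically. The MaxEnt solution under a mean constraint has the Gibbs form p(n) = x**n / Z, so it suffices to solve for x; the function name and parameter values below are illustrative, not taken from the paper:

```python
import numpy as np

def maxent_queue(capacity, mean_customers, iters=200):
    """Maximum-entropy queue-length distribution on n = 0..capacity
    with a prescribed mean number of customers (0 < mean < capacity).
    The MaxEnt solution is geometric-like, p(n) = x**n / Z, and the
    mean is strictly increasing in x, so we bisect on x."""
    n = np.arange(capacity + 1)

    def mean_for(x):
        w = x ** n
        return float((n * w).sum() / w.sum())

    lo, hi = 1e-9, 1e9            # brackets mean ~0 and mean ~capacity
    for _ in range(iters):
        mid = np.sqrt(lo * hi)    # bisect on a log scale
        if mean_for(mid) < mean_customers:
            lo = mid
        else:
            hi = mid
    x = np.sqrt(lo * hi)
    p = x ** n
    return p / p.sum()

# Example: capacity 10, mean number of customers 2.
p = maxent_queue(capacity=10, mean_customers=2.0)
```

Because the only constraint is the mean, the result is the truncated-geometric distribution; adding further constraints (e.g. server utilization) would add further Lagrange factors.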
The strong maximum principle revisited
Pucci, Patrizia; Serrin, James
In this paper we first present the classical maximum principle due to E. Hopf, together with an extended commentary and discussion of Hopf's paper. We emphasize the comparison technique invented by Hopf to prove this principle, which has since become a main mathematical tool for the study of second order elliptic partial differential equations and has generated an enormous number of important applications. While Hopf's principle is generally understood to apply to linear equations, it is in fact also crucial in nonlinear theories, such as those under consideration here. In particular, we shall treat and discuss recent generalizations of the strong maximum principle, and also the compact support principle, for the case of singular quasilinear elliptic differential inequalities, under generally weak assumptions on the quasilinear operators and the nonlinearities involved. Our principal interest is in necessary and sufficient conditions for the validity of both principles; in exposing and simplifying earlier proofs of corresponding results; and in extending the conclusions to wider classes of singular operators than previously considered. The results have unexpected ramifications for other problems, as will develop from the exposition, e.g. two point boundary value problems for singular quasilinear ordinary differential equations (Sections 3 and 4); the exterior Dirichlet boundary value problem (Section 5); the existence of dead cores and compact support solutions, i.e. dead cores at infinity (Section 7); Euler-Lagrange inequalities on a Riemannian manifold (Section 9); comparison and uniqueness theorems for solutions of singular quasilinear differential inequalities (Section 10). The case of p-regular elliptic inequalities is briefly considered in Section 11.
Pesch, Hans-Josef
2013-01-01
The purpose of the present paper is to show that the most prominent results in optimal control theory (the distinction between state and control variables, the maximum principle, and the principle of optimality, i.e., Bellman's equation) are immediate consequences of Carathéodory's achievements, published about two decades before optimal control theory saw the light of day.
Weak scale from the maximum entropy principle
Hamada, Yuta; Kawai, Hikaru; Kawana, Kiyoharu
2015-03-01
The theory of the multiverse and wormholes suggests that the parameters of the Standard Model (SM) are fixed in such a way that the radiation of the $S^3$ universe at the final stage, $S_{rad}$, becomes maximum, which we call the maximum entropy principle. Although it is difficult to confirm this principle generally, for a few parameters of the SM we can check whether $S_{rad}$ actually becomes maximum at the observed values. In this paper, we regard $S_{rad}$ at the final stage as a function of the weak scale (the Higgs expectation value) $v_h$, and show that it becomes maximum around $v_h = \mathcal{O}(300\,\text{GeV})$ when the dimensionless couplings in the SM, i.e., the Higgs self-coupling, the gauge couplings, and the Yukawa couplings, are fixed. Roughly speaking, we find that the weak scale is given by $v_h \sim T_{BBN}^2/(M_{pl} y_e^5)$, where $y_e$ is the Yukawa coupling of the electron, $T_{BBN}$ is the temperature at which Big Bang nucleosynthesis starts, and $M_{pl}$ is the Planck mass.
A stochastic maximum principle via Malliavin calculus
Øksendal, Bernt; Zhou, Xun Yu; Meyer-Brandis, Thilo
2008-01-01
This paper considers a controlled Itô-Lévy process where the information available to the controller is possibly less than the overall information. All the system coefficients and the objective performance functional are allowed to be random, possibly non-Markovian. Malliavin calculus is employed to derive a maximum principle for the optimal control of such a system where the adjoint process is explicitly expressed.
Proposed principles of maximum local entropy production.
Ross, John; Corlan, Alexandru D; Müller, Stefan C
2012-07-12
Articles have appeared that rely on the application of some form of "maximum local entropy production principle" (MEPP). This is usually an optimization principle that is supposed to compensate for the lack of structural information and measurements about complex systems, even systems as complex and as little characterized as the whole biosphere or the atmosphere of the Earth or even of less known bodies in the solar system. We select a number of claims from a few well-known papers that advocate this principle and we show that they are in error with the help of simple examples of well-known chemical and physical systems. These erroneous interpretations can be attributed to ignoring well-established and verified theoretical results such as (1) entropy does not necessarily increase in nonisolated systems, such as "local" subsystems; (2) macroscopic systems, as described by classical physics, are in general intrinsically deterministic-there are no "choices" in their evolution to be selected by using supplementary principles; (3) macroscopic deterministic systems are predictable to the extent to which their state and structure is sufficiently well-known; usually they are not sufficiently known, and probabilistic methods need to be employed for their prediction; and (4) there is no causal relationship between the thermodynamic constraints and the kinetics of reaction systems. In conclusion, any predictions based on MEPP-like principles should not be considered scientifically founded.
Principles of Copula Theory
Durante, Fabrizio
2015-01-01
Principles of Copula Theory explores the state of the art on copulas and provides you with the foundation to use copulas in a variety of applications. Throughout the book, historical remarks and further readings highlight active research in the field, including new results, streamlined presentations, and new proofs of old results. After covering the essentials of copula theory, the book addresses the issue of modeling dependence among components of a random vector using copulas. It then presents copulas from the point of view of measure theory, compares methods for the approximation of copulas,
Thermodynamic hardness and the maximum hardness principle
Franco-Pérez, Marco; Gázquez, José L.; Ayers, Paul W.; Vela, Alberto
2017-08-01
An alternative definition of hardness (called the thermodynamic hardness) within the grand canonical ensemble formalism is proposed in terms of the partial derivative of the electronic chemical potential with respect to the thermodynamic chemical potential of the reservoir, keeping the temperature and the external potential constant. This temperature dependent definition may be interpreted as a measure of the propensity of a system to go through a charge transfer process when it interacts with other species, and thus it keeps the philosophy of the original definition. When the derivative is expressed in terms of the three-state ensemble model, in the regime of low temperatures and up to temperatures of chemical interest, one finds that for zero fractional charge, the thermodynamic hardness is proportional to $T^{-1}(I - A)$, where $I$ is the first ionization potential, $A$ is the electron affinity, and $T$ is the temperature. However, the thermodynamic hardness is nearly zero when the fractional charge is different from zero. Thus, through the present definition, one avoids the presence of the Dirac delta function. We show that the chemical hardness defined in this way provides meaningful and discernible information about the hardness properties of a chemical species exhibiting an integer or a fractional average number of electrons, and this analysis allowed us to establish a link between the maximum possible value of the hardness here defined and the minimum softness principle, showing that both principles are related to minimum fractional charge and maximum stability conditions.
Maximum entropy principle and texture formation
Arminjon, M; Arminjon, Mayeul; Imbault, Didier
2006-01-01
The macro-to-micro transition in a heterogeneous material is envisaged as the selection of a probability distribution by the Principle of Maximum Entropy (MAXENT). The material is made of constituents, e.g. given crystal orientations. Each constituent is itself made of a large number of elementary constituents. The relevant probability is the volume fraction of the elementary constituents that belong to a given constituent and undergo a given stimulus. Assuming only obvious constraints in MAXENT means describing a maximally disordered material. This is proved to have the same average stimulus in each constituent. By adding a constraint in MAXENT, a new model, potentially interesting e.g. for texture prediction, is obtained.
Pan, Sudip; Solà, Miquel; Chattaraj, Pratim K
2013-02-28
Hardness and electrophilicity values for several molecules involved in different chemical reactions are calculated at various levels of theory and by using different basis sets. Effects of these aspects as well as different approximations to the calculation of those values vis-à-vis the validity of the maximum hardness and minimum electrophilicity principles are analyzed in the cases of some representative reactions. Among 101 studied exothermic reactions, 61.4% and 69.3% of the reactions are found to obey the maximum hardness and minimum electrophilicity principles, respectively, when the hardness of products and reactants is expressed in terms of their geometric means. However, when we use the arithmetic mean, the percentage reduces to some extent. When we express the hardness in terms of scaled hardness, the percentage obeying the maximum hardness principle improves. We have observed that the maximum hardness principle is more likely to fail in the cases of very hard species like F⁻, H₂, CH₄, N₂, and OH appearing on the reactant side, and in most cases of the association reactions. Most of the association reactions obey the minimum electrophilicity principle nicely. The best results (69.3%) for the maximum hardness and minimum electrophilicity principles reject the 50% null hypothesis at the 2% level of significance.
Maximum entropy production and plant optimization theories.
Dewar, Roderick C
2010-05-12
Plant ecologists have proposed a variety of optimization theories to explain the adaptive behaviour and evolution of plants from the perspective of natural selection ('survival of the fittest'). Optimization theories identify some objective function--such as shoot or canopy photosynthesis, or growth rate--which is maximized with respect to one or more plant functional traits. However, the link between these objective functions and individual plant fitness is seldom quantified and there remains some uncertainty about the most appropriate choice of objective function to use. Here, plants are viewed from an alternative thermodynamic perspective, as members of a wider class of non-equilibrium systems for which maximum entropy production (MEP) has been proposed as a common theoretical principle. I show how MEP unifies different plant optimization theories that have been proposed previously on the basis of ad hoc measures of individual fitness--the different objective functions of these theories emerge as examples of entropy production on different spatio-temporal scales. The proposed statistical explanation of MEP, that states of MEP are by far the most probable ones, suggests a new and extended paradigm for biological evolution--'survival of the likeliest'--which applies from biomacromolecules to ecosystems, not just to individuals.
Maximum Principles for Discrete and Semidiscrete Reaction-Diffusion Equation
Petr Stehlík
2015-01-01
We study reaction-diffusion equations with a general reaction function $f$ on one-dimensional lattices with continuous or discrete time, $u_x' = k(u_{x-1} - 2u_x + u_{x+1}) + f(u_x)$ (or, in discrete time, $\Delta_t u_x = k(u_{x-1} - 2u_x + u_{x+1}) + f(u_x)$), $x \in \mathbb{Z}$. We prove weak and strong maximum and minimum principles for corresponding initial-boundary value problems. Whereas the maximum principles in the semidiscrete case (continuous time) exhibit similar features to those of the fully continuous reaction-diffusion model, in the discrete case the weak maximum principle holds for a smaller class of functions and the strong maximum principle is valid in a weaker sense. We describe in detail how the validity of maximum principles depends on the nonlinearity and the time step. We illustrate our results on the Nagumo equation with the bistable nonlinearity.
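The lattice Nagumo dynamics mentioned in this abstract can be simulated directly to see the weak maximum principle at work: solutions starting in [0, 1] stay in [0, 1] when the explicit time step is small enough. The parameter values below (k, a, the step size) are illustrative choices, not the paper's:

```python
import numpy as np

# Lattice Nagumo equation u_x' = k*(u_{x-1} - 2*u_x + u_{x+1}) + f(u_x)
# with the bistable nonlinearity f(u) = u*(1 - u)*(u - a).
k, a = 1.0, 0.3
f = lambda u: u * (1.0 - u) * (u - a)

rng = np.random.default_rng(0)
u = rng.uniform(0.0, 1.0, size=50)   # random initial data in [0, 1]
dt = 0.01                            # small explicit-Euler step

for _ in range(2000):
    # discrete Laplacian on a periodic lattice
    lap = np.roll(u, 1) - 2.0 * u + np.roll(u, -1)
    u = u + dt * (k * lap + f(u))

# With dt*(2k + max f') < 1 the update is monotone and f(0) = f(1) = 0,
# so the interval [0, 1] is invariant (a weak maximum principle).
in_bounds = bool(np.all((u >= 0.0) & (u <= 1.0)))
```

Raising the step size until the monotonicity condition fails is a quick way to see how, as the abstract notes, the validity of the maximum principle depends on the time step.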
The maximum entropy production principle: two basic questions.
Martyushev, Leonid M
2010-05-12
The overwhelming majority of maximum entropy production applications to ecological and environmental systems are based on thermodynamics and statistical physics. Here, we briefly discuss the maximum entropy production principle and raise two questions: (i) can this principle be used as the basis for non-equilibrium thermodynamics and statistical mechanics, and (ii) is it possible to 'prove' the principle? We adduce one more proof, which is the most concise available today.
The Application of Maximum Principle in Supply Chain Cost Optimization
Zhou Ling; Wang Jun
2013-01-01
In this paper, using the maximum principle for analyzing dynamic cost, we propose a new two-stage supply chain model of the manufacturing-assembly mode for high-tech perishable products supply chain...
Maximum Principle for Nonlinear Cooperative Elliptic Systems on $\mathbb{R}^N$
LEADI Liamidi; MARCOS Aboubacar
2011-01-01
We investigate in this work necessary and sufficient conditions for having a maximum principle for a cooperative elliptic system on the whole space $\mathbb{R}^N$. Moreover, we prove the existence of solutions for the considered system by an approximation method.
On the sufficiency of the linear maximum principle
Vidal, Rene Victor Valqui
1987-01-01
Presents a family of linear maximum principles for the discrete-time optimal control problem, derived from the saddle-point theorem of mathematical programming. Some simple examples illustrate the applicability of the main theoretical results...
Combining Experiments and Simulations Using the Maximum Entropy Principle
Boomsma, Wouter; Ferkinghoff-Borg, Jesper; Lindorff-Larsen, Kresten
2014-01-01
The principle of maximum entropy is a general procedure for constructing probability distributions in the light of new data, making it a natural tool in cases when an initial model provides results that are at odds with experiments. We describe the maximum entropy procedure in the context of a simple example, after which we proceed with a real-world application in the field of molecular simulations, where the procedure has recently provided new insight. Given the limited accuracy of force fields, macromolecular simulations sometimes produce results that are not in quantitative agreement with experimental data. Three very recent papers have explored this problem using the maximum entropy approach, providing both new theoretical and practical insights. We highlight each of these contributions in turn and conclude with a discussion of remaining challenges.
Maximum information entropy: a foundation for ecological theory.
Harte, John; Newman, Erica A
2014-07-01
The maximum information entropy (MaxEnt) principle is a successful method of statistical inference that has recently been applied to ecology. Here, we show how MaxEnt can accurately predict patterns such as species-area relationships (SARs) and abundance distributions in macroecology and be a foundation for ecological theory. We discuss the conceptual foundation of the principle, why it often produces accurate predictions of probability distributions in science despite not incorporating explicit mechanisms, and how mismatches between predictions and data can shed light on driving mechanisms in ecology. We also review possible future extensions of the maximum entropy theory of ecology (METE), a potentially important foundation for future developments in ecological theory.
Multitime maximum principle approach of minimal submanifolds and harmonic maps
Udriste, Constantin
2011-01-01
Some optimization problems coming from the Differential Geometry, as for example, the minimal submanifolds problem and the harmonic maps problem are solved here via interior solutions of appropriate multitime optimal control problems. Section 1 underlines some science domains where appear multitime optimal control problems. Section 2 (Section 3) recalls the multitime maximum principle for optimal control problems with multiple (curvilinear) integral cost functionals and $m$-flow type constraint evolution. Section 4 shows that there exists a multitime maximum principle approach of multitime variational calculus. Section 5 (Section 6) proves that the minimal submanifolds (harmonic maps) are optimal solutions of multitime evolution PDEs in an appropriate multitime optimal control problem. Section 7 uses the multitime maximum principle to show that of all solids having a given surface area, the sphere is the one having the greatest volume. Section 8 studies the minimal area of a multitime linear flow as optimal c...
Remarks on the strong maximum principle for nonlocal operators
Jerome Coville
2008-05-01
In this note, we study the existence of a strong maximum principle for the nonlocal operator $$ \mathcal{M}[u](x) := \int_{G} J(g)\, u(x * g^{-1})\, d\mu(g) - u(x), $$ where $G$ is a topological group acting continuously on a Hausdorff space $X$ and $u \in C(X)$. First we investigate the general situation and derive a pre-maximum principle. Then we restrict our analysis to the case of homogeneous spaces (i.e., $X = G/H$). For such Hausdorff spaces, depending on the topology, we give a condition on $J$ such that a strong maximum principle holds for $\mathcal{M}$. We also revisit the classical case of the convolution operator (i.e., $G = (\mathbb{R}^n, +)$, $X = \mathbb{R}^n$, $d\mu = dy$).
Maximum-entropy principle as Galerkin modelling paradigm
Noack, Bernd R.; Niven, Robert K.; Rowley, Clarence W.
2012-11-01
We show how the empirical Galerkin method, leading e.g. to POD models, can be derived from maximum-entropy principles building on Noack & Niven 2012 JFM. In particular, principles are proposed (1) for the Galerkin expansion, (2) for the Galerkin system identification, and (3) for the probability distribution of the attractor. Examples will illustrate the advantages of the entropic modelling paradigm. Partially supported by the ANR Chair of Excellence TUCOROM and an ADFA/UNSW Visiting Fellowship.
Microcanonical origin of the maximum entropy principle for open systems.
Lee, Julian; Pressé, Steve
2012-10-01
There are two distinct approaches for deriving the canonical ensemble. The canonical ensemble either follows as a special limit of the microcanonical ensemble or alternatively follows from the maximum entropy principle. We show the equivalence of these two approaches by applying the maximum entropy formulation to a closed universe consisting of an open system plus bath. We show that the target function for deriving the canonical distribution emerges as a natural consequence of partial maximization of the entropy over the bath degrees of freedom alone. By extending this mathematical formalism to dynamical paths rather than equilibrium ensembles, the result provides an alternative justification for the principle of path entropy maximization as well.
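The entropy maximization this abstract refers to rests on the standard Lagrange-multiplier derivation of the canonical distribution. As a reminder (a textbook sketch in generic notation, not the paper's own formalism):

```latex
% Maximize the Gibbs-Shannon entropy subject to normalization
% and a fixed mean energy U:
\max_{\{p_i\}} \; S = -\sum_i p_i \ln p_i
\quad \text{s.t.} \quad \sum_i p_i = 1, \qquad \sum_i p_i E_i = U .
% Stationarity of L = S - \lambda_0\bigl(\textstyle\sum_i p_i - 1\bigr)
%                     - \beta\bigl(\textstyle\sum_i p_i E_i - U\bigr)
% with respect to each p_i gives -\ln p_i - 1 - \lambda_0 - \beta E_i = 0, i.e.
p_i = \frac{e^{-\beta E_i}}{Z}, \qquad Z = \sum_j e^{-\beta E_j}.
```

The paper's point is that the same target function emerges from partially maximizing the microcanonical entropy over the bath degrees of freedom alone.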
A Clustering Method Based on the Maximum Entropy Principle
Edwin Aldana-Bobadilla
2015-01-01
Clustering is an unsupervised process to determine which unlabeled objects in a set share interesting properties. The objects are grouped into k subsets (clusters) whose elements optimize a proximity measure. Methods based on information theory have proven to be feasible alternatives. They are based on the assumption that a cluster is one subset with the minimal possible degree of “disorder”. They attempt to minimize the entropy of each cluster. We propose a clustering method based on the maximum entropy principle. Such a method explores the space of all possible probability distributions of the data to find one that maximizes the entropy subject to extra conditions based on prior information about the clusters. The prior information is based on the assumption that the elements of a cluster are “similar” to each other in accordance with some statistical measure. As a consequence of such a principle, those distributions of high entropy that satisfy the conditions are favored over others. Searching the space to find the optimal distribution of objects in the clusters represents a hard combinatorial problem, which disallows the use of traditional optimization techniques. Genetic algorithms are a good alternative to solve this problem. We benchmark our method relative to the best theoretical performance, which is given by the Bayes classifier when data are normally distributed, and a multilayer perceptron network, which offers the best practical performance when data are not normal. In general, a supervised classification method will outperform a non-supervised one, since, in the first case, the elements of the classes are known a priori. In what follows, we show that our method’s effectiveness is comparable to a supervised one. This clearly exhibits the superiority of our method.
Exploiting the Maximum Entropy Principle to Increase Retrieval Effectiveness.
Cooper, William S.
1983-01-01
Presents an information retrieval design approach in which queries of a computer-based system consist of sets of terms, either unweighted or weighted with subjective term precision estimates, and retrieval outputs are ranked by probability of usefulness estimated by the "maximum entropy principle." Boolean and weighted request systems are discussed.
The constraint rule of the maximum entropy principle
Uffink, J.
2001-01-01
The principle of maximum entropy is a method for assigning values to probability distributions on the basis of partial information. In usual formulations of this and related methods of inference, one assumes that this partial information takes the form of a constraint on allowed probability distributions.
On the maximum entropy principle in non-extensive thermostatistics
Naudts, Jan
2004-01-01
It is possible to derive the maximum entropy principle from thermodynamic stability requirements. Using as a starting point the equilibrium probability distribution, currently used in non-extensive thermostatistics, it turns out that the relevant entropy function is Renyi's alpha-entropy, and not Tsallis' entropy.
A Remark on the Omori-Yau Maximum Principle
Borbely, Albert
2012-01-01
A Riemannian manifold $M$ is said to satisfy the Omori-Yau maximum principle if for any $C^2$ bounded function $g: M \to \mathbb{R}$ there is a sequence $x_n \in M$ such that $\lim_{n\to \infty} g(x_n) = \sup_M g$, $\lim_{n\to \infty} |\nabla g(x_n)| = 0$, and $\limsup_{n\to \infty} \Delta g(x_n) \le 0$.
[Study on the maximum entropy principle and population genetic equilibrium].
Zhang, Hong-Li; Zhang, Hong-Yan
2006-03-01
A general mathematical model of population genetic equilibrium at one locus was constructed on the basis of the maximum entropy principle by WANG Xiao-Long et al. They proved that the maximum solution of the model was exactly the frequency distribution at which a population reaches Hardy-Weinberg genetic equilibrium. This suggests that a population reaches Hardy-Weinberg genetic equilibrium when the genotype entropy of the population reaches the maximal possible value, and that the frequency distribution of maximum entropy is equivalent to the distribution given by the Hardy-Weinberg equilibrium law at one locus. They further assumed that the frequency distribution of maximum entropy was equivalent to all genetic equilibrium distributions. This is incorrect, however. The frequency distribution of maximum entropy is equivalent to the Hardy-Weinberg equilibrium distribution only with respect to one locus or several limited loci. The case of limited loci is proved in this paper. Finally, we also discuss an example where the maximum entropy principle is not equivalent to other genetic equilibria.
Effective soil hydraulic conductivity predicted with the maximum power principle
Westhoff, Martijn; Erpicum, Sébastien; Archambeau, Pierre; Pirotton, Michel; Zehe, Erwin; Dewals, Benjamin
2016-04-01
Drainage of water in soils happens to a large extent through preferential flowpaths, but these subsurface flowpaths are extremely difficult to observe or parameterize in hydrological models. To potentially overcome this problem, thermodynamic optimality principles have been suggested to predict effective parametrizations of these (sub-grid) structures, such as the maximum entropy production principle or the equivalent maximum power principle. These principles have been successfully applied to predict heat transfer from the Equator to the Poles, or turbulent heat fluxes between the surface and the atmosphere. In these examples, the effective flux adapts itself to its boundary condition by adapting its effective conductance through the creation of e.g. convection cells. However, flow through porous media, such as soils, can only quickly adapt its effective flow conductance by creation of preferential flowpaths, and it is unknown whether this is guided by the aim to create maximum power. Here we show experimentally that this is indeed the case. In the lab, we created a hydrological analogue to the atmospheric model dealing with heat transport between Equator and Poles. The experimental setup consists of two freely draining reservoirs connected with each other by a confined aquifer. By adding water to only one reservoir, a potential difference builds up until a steady state is reached. From the steady state potential difference and the observed flow through the aquifer, an effective hydraulic conductance can be determined. This observed conductance does correspond to the one maximizing the power of the flux through the confined aquifer. Although this experiment is done in an idealized setting, it opens doors for better parameterizing hydrological models. Furthermore, it shows that hydraulic properties of soils are not static, but change with changing boundary conditions. A potential limitation of the principle is that it only applies to steady state conditions.
An application of Hamiltonian neurodynamics using Pontryagin's Maximum (Minimum) Principle.
Koshizen, T; Fulcher, J
1995-12-01
Classical optimal control methods, notably Pontryagin's Maximum (Minimum) Principle (PMP), can be employed, together with Hamiltonians, to determine optimal system weights in artificial neural dynamical systems. A new learning rule based on weight equations derived using PMP is shown to be suitable for both discrete- and continuous-time systems and, moreover, can also be applied to feedback networks. Preliminary testing shows that this PMP learning rule compares favourably with standard backpropagation (SBP) on the XOR problem.
MAXIMUM PRINCIPLES FOR SECOND-ORDER PARABOLIC EQUATIONS
Antonio Vitolo
2004-01-01
This paper is the parabolic counterpart of previous ones about elliptic operators in unbounded domains. Maximum principles for second-order linear parabolic equations are established, showing a variant of the ABP-Krylov-Tso estimate based on a lower bound for super-solutions due to Krylov and Safonov. The results imply uniqueness for the Cauchy-Dirichlet problem in a large class of infinite cylindrical and non-cylindrical domains.
Peyronie's Reconstruction for Maximum Length and Girth Gain: Geometrical Principles
Paulo H. Egydio
2008-01-01
Peyronie's disease has been associated with penile shortening and some degree of erectile dysfunction. Surgical reconstruction should aim at a functional penis, that is, a straightened penis with rigidity sufficient for sexual intercourse. The procedure should be discussed preoperatively in terms of length and girth reconstruction in order to improve patient satisfaction. Tunical reconstruction for maximum penile length and girth restoration should be based on the maximum possible length of the dissected neurovascular bundle and the application of geometrical principles to define the precise site and size of the tunical incision and grafting procedure. As penile straightening and rigidity are required to achieve complete functional restoration of the penis, and 20 to 54% of patients experience associated erectile dysfunction, penile straightening alone may not be enough to provide complete functional restoration. Therefore, phosphodiesterase inhibitors, self-injection, or a penile prosthesis may need to be added in some cases.
Optimal Control of Polymer Flooding Based on Maximum Principle
Yang Lei
2012-01-01
Polymer flooding is one of the most important technologies for enhanced oil recovery (EOR). In this paper, an optimal control model of distributed parameter systems (DPSs) for polymer injection strategies is established, in which the performance index is the maximum profit, the governing equations are the fluid flow equations of polymer flooding, and the inequality constraint is the polymer concentration limitation. To cope with the optimal control problem (OCP) of this DPS, the necessary conditions for optimality are obtained through application of the calculus of variations and Pontryagin's weak maximum principle. A gradient method is proposed for the computation of optimal injection strategies. The numerical results of an example illustrate the effectiveness of the proposed method.
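The gradient method mentioned in this abstract follows the standard forward-backward pattern of PMP-based optimization. The sketch below applies it to a scalar toy problem rather than the polymer-flooding PDEs (the dynamics, cost, and step size are assumptions for illustration): the state runs forward, the costate runs backward, and the control is updated along the gradient of the Hamiltonian.

```python
import numpy as np

# Minimal forward-backward sweep sketch (toy problem, not the paper's model):
# minimize J = sum_t (x_t**2 + u_t**2) * dt subject to x_{t+1} = x_t + u_t*dt.
# The costate lam_t accumulates dJ/dx_t backward in time, and
# dH/du_t = 2*u_t*dt + lam_{t+1}*dt gives the descent direction.
N, dt, x0 = 100, 0.05, 1.0
u = np.zeros(N)

def cost(u):
    x, J = x0, 0.0
    for t in range(N):
        J += (x**2 + u[t]**2) * dt
        x += u[t] * dt
    return J

for _ in range(500):                       # gradient iterations
    # forward sweep: state trajectory
    x = np.empty(N + 1); x[0] = x0
    for t in range(N):
        x[t + 1] = x[t] + u[t] * dt
    # backward sweep: costate (adjoint) trajectory, lam[N] = 0
    lam = np.zeros(N + 1)
    for t in range(N - 1, -1, -1):
        lam[t] = lam[t + 1] + 2 * x[t] * dt
    grad = 2 * u * dt + lam[1:] * dt       # dH/du at each step
    u -= 0.5 * grad                        # steepest-descent update
print(cost(u) < cost(np.zeros(N)))         # the optimized control lowers the cost
```

For the actual polymer-flooding problem the same loop runs over PDE discretizations and the concentration constraint is enforced on the update, but the forward/backward/update structure is identical.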
Training Concept, Evolution Time, and the Maximum Entropy Production Principle
Alexey Bezryadin
2016-04-01
The maximum entropy production principle (MEPP) is a type of entropy optimization which demands that complex non-equilibrium systems should organize such that the rate of entropy production is maximized. Our take on this principle is that to prove or disprove the validity of the MEPP, and to test the scope of its applicability, it is necessary to conduct experiments in which the entropy produced per unit time is measured with high precision. Thus we study electric-field-induced self-assembly in suspensions of carbon nanotubes and realize precise measurements of the entropy production rate (EPR). As a strong voltage is applied, the suspended nanotubes merge together into a conducting cloud which produces Joule heat and, correspondingly, produces entropy. We introduce two types of EPR, which have qualitatively different significance: the global EPR (g-EPR) and the entropy production rate of the dissipative cloud itself (DC-EPR). The following results are obtained: (1) As the system reaches the maximum of the DC-EPR, it becomes stable because the applied voltage acts as a stabilizing thermodynamic potential; (2) We discover metastable states characterized by high, near-maximum values of the DC-EPR. Under certain conditions, such efficient entropy-producing regimes can only be achieved if the system is allowed to initially evolve under mildly non-equilibrium conditions, namely at a reduced voltage; (3) Without such a "training" period the system typically is not able to reach the allowed maximum of the DC-EPR if the bias is high; (4) We observe that the DC-EPR maximum is achieved within a time, Te, the evolution time, which scales as a power-law function of the applied voltage; (5) Finally, we present a clear example in which the g-EPR theoretical maximum can never be achieved. Yet, under a wide range of conditions, the system can self-organize and achieve a dissipative regime in which the DC-EPR equals its theoretical maximum.
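The distinction between a subsystem's EPR and the global EPR can be illustrated with a toy circuit, which is an analogy assumed here for illustration and not the authors' model of the nanotube cloud: a voltage source with internal resistance feeds a "cloud" resistance, and only the cloud's dissipation has an interior maximum.

```python
import numpy as np

# Toy circuit analogy (an illustration, not the authors' model): a source of
# voltage V with internal resistance Rs feeds a dissipative load ("cloud")
# of resistance Rc in a bath at temperature T. The cloud's entropy
# production rate DC-EPR = I**2 * Rc / T with I = V / (Rs + Rc) peaks at
# Rc = Rs (impedance matching), whereas the global EPR = I * V / T is
# monotone in Rc and has no interior maximum.
V, Rs, T = 10.0, 100.0, 300.0           # hypothetical source and bath parameters
Rc = np.linspace(1.0, 1000.0, 100_000)  # candidate cloud resistances
I = V / (Rs + Rc)
dc_epr = I**2 * Rc / T                  # entropy production of the cloud alone
print(Rc[np.argmax(dc_epr)])            # close to Rs = 100 ohms
```

A self-organizing conductance that climbs the DC-EPR curve would therefore settle at the matched value, which is the kind of stable maximum the abstract reports.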
Maximum process problems in optimal control theory
Goran Peskir
2005-01-01
Given a standard Brownian motion (B_t)_{t≥0} and the equation of motion dX_t = v_t dt + 2 dB_t, we set S_t = max_{0≤s≤t} X_s and consider the optimal control problem sup_v E(S_τ − cτ), where c > 0 and the supremum is taken over all admissible controls v satisfying v_t ∈ [μ_0, μ_1] for all t up to τ = inf{t > 0 | X_t ∉ (ℓ_0, ℓ_1)}, with μ_0 < 0 < μ_1 and ℓ_0 < 0 < ℓ_1. The following control is proved to be optimal: "pull as hard as possible", that is, v_t = μ_0 if X_t < g∗(S_t), and "push as hard as possible", that is, v_t = μ_1 if X_t > g∗(S_t), where s ↦ g∗(s) is a switching curve that is determined explicitly (as the unique solution to a nonlinear differential equation). The solution found demonstrates that problem formulations based on a maximum functional can be successfully included in optimal control theory (calculus of variations), in addition to the classic problem formulations due to Lagrange, Mayer, and Bolza.
Enzyme kinetics and the maximum entropy production principle.
Dobovišek, Andrej; Zupanović, Paško; Brumen, Milan; Bonačić-Lošić, Zeljana; Kuić, Domagoj; Juretić, Davor
2011-03-01
A general proof is derived that entropy production can be maximized with respect to rate constants in any enzymatic transition. This result is used to test the assumption that biological evolution of an enzyme is accompanied by an increase of entropy production in its internal transitions, and that such an increase can serve to quantify the progress of enzyme evolution. The state of maximum entropy production would correspond to a fully evolved enzyme. As an example, the internal transition ES↔EP in a generalized reversible Michaelis-Menten three-state scheme is analyzed. Good agreement is found between experimentally determined values of the forward rate constant in the internal transition ES→EP for three types of β-lactamase enzymes and their optimal values predicted by the maximum entropy production principle, which agrees with earlier observations that β-lactamase enzymes are nearly fully evolved. The optimization of rate constants as a consequence of a basic physical principle, which is the subject of this paper, is a completely different concept from (a) net metabolic flux maximization or (b) entropy production minimization (in the static head state), both of which have also been proposed to be tightly connected to biological evolution.
Robust stochastic maximum principle: Complete proof and discussions
Poznyak Alex S.
2002-01-01
This paper develops a version of the Robust Stochastic Maximum Principle (RSMP) applied to the Minimax Mayer Problem formulated for stochastic differential equations with a control-dependent diffusion term. Parametric families of first- and second-order adjoint stochastic processes are introduced to construct the corresponding Hamiltonian formalism. The Hamiltonian function used for the construction of the robust optimal control is shown to be equal to the Lebesgue integral over a parametric set of the standard stochastic Hamiltonians corresponding to a fixed value of the uncertain parameter. The paper deals with a cost function given at a finite horizon and containing the mathematical expectation of a terminal term. A terminal condition, given by a vector function, is also considered. The optimal control strategies, adapted to the available information, are constructed for a wide class of uncertain systems given by a stochastic differential equation with unknown parameters from a given compact set. This problem belongs to the class of minimax stochastic optimization problems. The proof is based on recent results obtained for the Minimax Mayer Problem with a finite uncertainty set [14,43-45], as well as on the variational results of [53] derived for the Stochastic Maximum Principle for nonlinear stochastic systems under complete information. A discussion of the obtained results concludes this study.
Combining experiments and simulations using the maximum entropy principle.
Wouter Boomsma
2014-02-01
A key component of computational biology is to compare the results of computer modelling with experimental measurements. Despite substantial progress in the models and algorithms used in many areas of computational biology, such comparisons sometimes reveal that the computations are not in quantitative agreement with experimental data. The principle of maximum entropy is a general procedure for constructing probability distributions in the light of new data, making it a natural tool in cases when an initial model provides results that are at odds with experiments. The number of maximum entropy applications in our field has grown steadily in recent years, in areas as diverse as sequence analysis, structural modelling, and neurobiology. In this Perspectives article, we give a broad introduction to the method, in an attempt to encourage its further adoption. The general procedure is explained in the context of a simple example, after which we proceed with a real-world application in the field of molecular simulations, where the maximum entropy procedure has recently provided new insight. Given the limited accuracy of force fields, macromolecular simulations sometimes produce results that are not in complete quantitative accordance with experiments. A common solution to this problem is to explicitly ensure agreement between the two by perturbing the potential energy function towards the experimental data. So far, a general consensus for how such perturbations should be implemented has been lacking. Three very recent papers have explored this problem using the maximum entropy approach, providing both new theoretical and practical insights to the problem. We highlight each of these contributions in turn and conclude with a discussion on remaining challenges.
Combining experiments and simulations using the maximum entropy principle.
Boomsma, Wouter; Ferkinghoff-Borg, Jesper; Lindorff-Larsen, Kresten
2014-02-01
A key component of computational biology is to compare the results of computer modelling with experimental measurements. Despite substantial progress in the models and algorithms used in many areas of computational biology, such comparisons sometimes reveal that the computations are not in quantitative agreement with experimental data. The principle of maximum entropy is a general procedure for constructing probability distributions in the light of new data, making it a natural tool in cases when an initial model provides results that are at odds with experiments. The number of maximum entropy applications in our field has grown steadily in recent years, in areas as diverse as sequence analysis, structural modelling, and neurobiology. In this Perspectives article, we give a broad introduction to the method, in an attempt to encourage its further adoption. The general procedure is explained in the context of a simple example, after which we proceed with a real-world application in the field of molecular simulations, where the maximum entropy procedure has recently provided new insight. Given the limited accuracy of force fields, macromolecular simulations sometimes produce results that are not in complete quantitative accordance with experiments. A common solution to this problem is to explicitly ensure agreement between the two by perturbing the potential energy function towards the experimental data. So far, a general consensus for how such perturbations should be implemented has been lacking. Three very recent papers have explored this problem using the maximum entropy approach, providing both new theoretical and practical insights to the problem. We highlight each of these contributions in turn and conclude with a discussion on remaining challenges.
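The maximum entropy correction this abstract describes has a simple generic form: the minimally perturbed sample weights are w_i ∝ p_i exp(λ f_i), with the multiplier λ chosen so that the reweighted average of the observable f matches the experimental value. The sketch below is a generic illustration on synthetic data, not any of the cited papers' implementations.

```python
import numpy as np

# Maximum-entropy reweighting sketch (generic illustration): given samples
# with prior weights p and a per-sample observable f, find the minimally
# perturbed weights w_i ∝ p_i * exp(lam * f_i) whose average of f matches
# an "experimental" value f_exp. The reweighted mean is monotone in lam,
# so bisection suffices.
rng = np.random.default_rng(0)
f = rng.normal(loc=0.0, scale=1.0, size=5000)   # synthetic observable per sample
p = np.full(f.size, 1.0 / f.size)               # uniform prior weights
f_exp = 0.3                                     # hypothetical experimental value

def reweighted_mean(lam):
    w = p * np.exp(lam * f)
    w /= w.sum()
    return w @ f

lo, hi = -10.0, 10.0
for _ in range(100):                            # bisection on the multiplier
    mid = 0.5 * (lo + hi)
    if reweighted_mean(mid) < f_exp:
        lo = mid
    else:
        hi = mid
lam = 0.5 * (lo + hi)
w = p * np.exp(lam * f); w /= w.sum()
print(abs(w @ f - f_exp) < 1e-6)                # reweighted average matches the data
```

Because the weights are an exponential tilt of the prior, this is exactly the distribution closest to the simulation (in Kullback-Leibler divergence) among all distributions consistent with the measurement.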
The Application of Maximum Principle in Supply Chain Cost Optimization
Zhou Ling
2013-09-01
In this paper, using the maximum principle to analyze dynamic cost, we propose a new two-stage supply chain model of the manufacturing-assembly mode for high-tech perishable products and obtain the optimal conditions and results. On this basis, we further investigate the effect of the location of the CODP (customer order decoupling point) on the total cost, and the relations between the CODP, inventory policy, and demand type, through data simulation. The simulation results show that the CODP is located in the downstream of the product life cycle and is a linear function of the product life cycle. The results indicate that the demand forecast is the main factor influencing the total cost; the mode of production according to the demand forecast is the deciding factor of the total cost. The model can also reflect the relation between the total cost of the two-stage supply chain, inventory, and demand.
Optimal control of a double integrator a primer on maximum principle
Locatelli, Arturo
2017-01-01
This book provides an introductory yet rigorous treatment of Pontryagin’s Maximum Principle and its application to optimal control problems when simple and complex constraints act on state and control variables, the two classes of variable in such problems. The achievements resulting from first-order variational methods are illustrated with reference to a large number of problems that, almost universally, relate to a particular second-order, linear and time-invariant dynamical system, referred to as the double integrator. The book is ideal for students who have some knowledge of the basics of system and control theory and possess the calculus background typically taught in undergraduate curricula in engineering. Optimal control theory, of which the Maximum Principle must be considered a cornerstone, has been very popular ever since the late 1950s. However, the possibly excessive initial enthusiasm engendered by its perceived capability to solve any kind of problem gave way to its equally unjustified rejecti...
Setting the renormalization scale in QCD: The principle of maximum conformality
Brodsky, S. J.; Di Giustino, L.
2012-01-01
the renormalization scale is set properly, all nonconformal beta not equal 0 terms in a perturbative expansion arising from renormalization are summed into the running coupling. The remaining terms in the perturbative series are then identical to that of a conformal theory; i.e., the corresponding theory with beta...... = 0. The resulting scale-fixed predictions using the principle of maximum conformality (PMC) are independent of the choice of renormalization scheme-a key requirement of renormalization group invariance. The results avoid renormalon resummation and agree with QED scale setting in the Abelian limit...
How multiplicity determines entropy and the derivation of the maximum entropy principle for complex systems
Hanel, Rudolf; Gell-Mann, Murray
2014-01-01
The maximum entropy principle (MEP) is a method for obtaining the most likely distribution functions of observables from statistical systems, by maximizing entropy under constraints. The MEP has found hundreds of applications in ergodic and Markovian systems in statistical mechanics, information theory, and statistics. For several decades there has been an ongoing controversy over whether the notion of the maximum entropy principle can be extended in a meaningful way to non-extensive, non-ergodic, and complex statistical systems and processes. In this paper we start by reviewing how Boltzmann-Gibbs-Shannon entropy is related to multiplicities of independent random processes. We then show how the relaxation of independence naturally leads to the most general entropies that are compatible with the first three Shannon-Khinchin axioms, the (c,d)-entropies. We demonstrate that the MEP is a perfectly consistent concept for non-ergodic and complex statistical systems if their relative entropy can be factored into a general...
Dynamics of non-stationary processes that follow the maximum of the Rényi entropy principle.
Shalymov, Dmitry S; Fradkov, Alexander L
2016-01-01
We propose dynamics equations which describe the behaviour of non-stationary processes that follow the maximum Rényi entropy principle. The equations are derived on the basis of the speed-gradient principle originating in control theory. The maximum Rényi entropy principle is analysed for the discrete and continuous cases; both a discrete random variable and a probability density function (PDF) are used. We consider mass conservation and energy conservation constraints and demonstrate the uniqueness of the limit distribution and asymptotic convergence of the PDF in both cases. The coincidence of the limit distribution of the proposed equations with the Rényi distribution is examined.
The general principles of quantum theory
Temple, George
2014-01-01
Published in 1934, this monograph was one of the first introductory accounts of the principles which form the physical basis of the Quantum Theory, considered as a branch of mathematics. The exposition is restricted to a discussion of general principles and does not attempt detailed application to the wide domain of atomic physics, although a number of special problems are considered in elucidation of the principles. The necessary fundamental mathematical methods - the theory of linear operators and of matrices - are developed in the first chapter so this could introduce anyone to the new theor
An Extension of Chebyshev’s Maximum Principle to Several Variables
Meng Zhao-liang; Luo Zhong-xuan
2013-01-01
In this article, we generalize Chebyshev's maximum principle to several variables. Some analogous maximum formulae for the special integration functional are given. A sufficient condition for the existence of Chebyshev's maximum principle is also obtained.
Quantum Mechanics as a Principle Theory
Bub, J
1999-01-01
I show how quantum mechanics, like the theory of relativity, can be understood as a 'principle theory' in Einstein's sense, and I use this notion to explore the approach to the problem of interpretation developed in my book Interpreting the Quantum World (Cambridge: Cambridge University Press, 1999).
Decision theory principles and approaches
Parmigiani, Giovanni
2009-01-01
Decision theory provides a formal framework for making logical choices in the face of uncertainty. Given a set of alternatives, a set of consequences, and a correspondence between those sets, decision theory offers conceptually simple procedures for choice. This book presents an overview of the fundamental concepts and outcomes of rational decision making under uncertainty, highlighting the implications for statistical practice. The authors have developed a series of self contained chapters focusing on bridging the gaps between the different fields that have contributed to rational decision making and presenting ideas in a unified framework and notation while respecting and highlighting the different and sometimes conflicting perspectives. This book: Provides a rich collection of techniques and procedures.Discusses the foundational aspects and modern day practice.Links foundations to practical applications in biostatistics, computer science, engineering and economics.Presents different perspectives and cont...
The maximum sizes of large scale structures in alternative theories of gravity
Bhattacharya, Sourav; Romano, Antonio Enea; Skordis, Constantinos; Tomaras, Theodore N
2016-01-01
The maximum size of a cosmic structure is given by the maximum turnaround radius -- the scale where the attraction due to its mass is balanced by the repulsion due to dark energy. We derive generic formulas for the estimation of the maximum turnaround radius in any theory of gravity obeying the Einstein equivalence principle, in two situations: on a spherically symmetric spacetime and on a perturbed Friedmann-Robertson-Walker spacetime. We show that the two formulas agree. As an application of our formula, we calculate the maximum turnaround radius in the case of the Brans-Dicke theory of gravity. We find that for this theory, such maximum sizes always lie above the $\Lambda$CDM value, by a factor $1 + \frac{1}{3\omega}$, where $\omega\gg 1$ is the Brans-Dicke parameter, implying consistency of the theory with current data.
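For orientation, the $\Lambda$CDM turnaround radius has the well-known closed form $R_{max} = (3GM/\Lambda c^2)^{1/3}$, and the abstract's Brans-Dicke result multiplies it by $1 + 1/(3\omega)$. The numbers below (cluster mass, value of $\omega$) are illustrative assumptions, not the paper's:

```python
# Back-of-the-envelope evaluation of the maximum turnaround radius.
# R_max = (3*G*M / (Lambda * c**2))**(1/3) is the standard LambdaCDM form;
# the cluster mass and Brans-Dicke parameter below are illustrative.
G = 6.674e-11          # m^3 kg^-1 s^-2
c = 2.998e8            # m/s
Lam = 1.1e-52          # m^-2, cosmological constant
Msun = 1.989e30        # kg
Mpc = 3.086e22         # m

M = 1e15 * Msun                                  # a rich galaxy cluster
R_lcdm = (3 * G * M / (Lam * c**2)) ** (1 / 3)   # LambdaCDM turnaround radius
omega = 4e4                                      # illustrative Brans-Dicke parameter
R_bd = R_lcdm * (1 + 1 / (3 * omega))            # always above the LambdaCDM value
print(R_lcdm / Mpc)                              # roughly 10 Mpc
```

For large $\omega$ the Brans-Dicke correction is tiny, which is why the enhancement is consistent with current structure data.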
Gauge theory and variational principles
Bleecker, David
2005-01-01
This text provides a framework for describing and organizing the basic forces of nature and the interactions of subatomic particles. A detailed and self-contained mathematical account of gauge theory, it is geared toward beginning graduate students and advanced undergraduates in mathematics and physics. This well-organized treatment supplements its rigor with intuitive ideas.Starting with an examination of principal fiber bundles and connections, the text explores curvature; particle fields, Lagrangians, and gauge invariance; Lagrange's equation for particle fields; and the inhomogeneous field
Possible dynamical explanations for Paltridge's principle of maximum entropy production
Virgo, Nathaniel, E-mail: nathanielvirgo@gmail.com; Ikegami, Takashi, E-mail: nathanielvirgo@gmail.com [Ikegami Laboratory, University of Tokyo (Japan)
2014-12-05
Throughout the history of non-equilibrium thermodynamics a number of theories have been proposed in which complex, far from equilibrium flow systems are hypothesised to reach a steady state that maximises some quantity. Perhaps the most celebrated is Paltridge's principle of maximum entropy production for the horizontal heat flux in Earth's atmosphere, for which there is some empirical support. There have been a number of attempts to derive such a principle from maximum entropy considerations. However, we currently lack a more mechanistic explanation of how any particular system might self-organise into a state that maximises some quantity. This is in contrast to equilibrium thermodynamics, in which models such as the Ising model have been a great help in understanding the relationship between the predictions of MaxEnt and the dynamics of physical systems. In this paper we show that, unlike in the equilibrium case, Paltridge-type maximisation in non-equilibrium systems cannot be achieved by a simple dynamical feedback mechanism. Nevertheless, we propose several possible mechanisms by which maximisation could occur. Showing that these occur in any real system is a task for future work. The possibilities presented here may not be the only ones. We hope that by presenting them we can provoke further discussion about the possible dynamical mechanisms behind extremum principles for non-equilibrium systems, and their relationship to predictions obtained through MaxEnt.
The three faces of entropy for complex systems: Information, thermodynamics, and the maximum entropy principle
Thurner, Stefan; Corominas-Murtra, Bernat; Hanel, Rudolf
2017-09-01
There are at least three distinct ways to conceptualize entropy: entropy as an extensive thermodynamic quantity of physical systems (Clausius, Boltzmann, Gibbs), entropy as a measure for information production of ergodic sources (Shannon), and entropy as a means for statistical inference on multinomial processes (Jaynes maximum entropy principle). Even though these notions represent fundamentally different concepts, the functional form of the entropy for thermodynamic systems in equilibrium, for ergodic sources in information theory, and for independent sampling processes in statistical systems, is degenerate, H(p) = −∑_i p_i log p_i. For many complex systems, which are typically history-dependent, nonergodic, and nonmultinomial, this is no longer the case. Here we show that for such processes, the three entropy concepts lead to different functional forms of entropy, which we will refer to as SEXT for extensive entropy, SIT for the source information rate in information theory, and SMEP for the entropy functional that appears in the so-called maximum entropy principle, which characterizes the most likely observable distribution functions of a system. We explicitly compute these three entropy functionals for three concrete examples: for Pólya urn processes, which are simple self-reinforcing processes, for sample-space-reducing (SSR) processes, which are simple history-dependent processes that are associated with power-law statistics, and finally for multinomial mixture processes.
A general maximum entropy framework for thermodynamic variational principles
Dewar, Roderick C., E-mail: roderick.dewar@anu.edu.au [Research School of Biology, The Australian National University, Canberra ACT 0200 (Australia)
2014-12-05
Minimum free energy principles are familiar in equilibrium thermodynamics, as expressions of the second law. They also appear in statistical mechanics as variational approximation schemes, such as the mean-field and steepest-descent approximations. These well-known minimum free energy principles are here unified and extended to any system analyzable by MaxEnt, including non-equilibrium systems. The MaxEnt Lagrangian associated with a generic MaxEnt distribution p defines a generalized potential Ψ for an arbitrary probability distribution p-hat, such that Ψ is a minimum at (p-hat) = p. Minimization of Ψ with respect to p-hat thus constitutes a generic variational principle, and is equivalent to minimizing the Kullback-Leibler divergence between p-hat and p. Illustrative examples of min–Ψ are given for equilibrium and non-equilibrium systems. An interpretation of changes in Ψ is given in terms of the second law, although min–Ψ itself is an intrinsic variational property of MaxEnt that is distinct from the second law.
A general maximum entropy framework for thermodynamic variational principles
Dewar, Roderick C.
2014-12-01
Minimum free energy principles are familiar in equilibrium thermodynamics, as expressions of the second law. They also appear in statistical mechanics as variational approximation schemes, such as the mean-field and steepest-descent approximations. These well-known minimum free energy principles are here unified and extended to any system analyzable by MaxEnt, including non-equilibrium systems. The MaxEnt Lagrangian associated with a generic MaxEnt distribution p defines a generalized potential Ψ for an arbitrary probability distribution ̂p, such that Ψ is a minimum at ̂p = p. Minimization of Ψ with respect to ̂p thus constitutes a generic variational principle, and is equivalent to minimizing the Kullback-Leibler divergence between ̂p and p. Illustrative examples of min-Ψ are given for equilibrium and non-equilibrium systems. An interpretation of changes in Ψ is given in terms of the second law, although min-Ψ itself is an intrinsic variational property of MaxEnt that is distinct from the second law.
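The central claim of this abstract, that minimizing the generalized potential Ψ is equivalent to minimizing the Kullback-Leibler divergence between a trial distribution and the MaxEnt distribution p, can be checked numerically in miniature. The sketch below is a generic illustration of that equivalence, not the paper's formalism; the reference distribution and trial distributions are made up.

```python
import numpy as np

# Numerical illustration of the variational property: the Kullback-Leibler
# divergence D(p_hat || p) = sum_i p_hat_i * log(p_hat_i / p_i) vanishes
# at p_hat = p and is strictly positive elsewhere, so minimizing it over
# trial distributions p_hat recovers the MaxEnt distribution p.
def kl(q, p):
    return float(np.sum(q * np.log(q / p)))

p = np.array([0.5, 0.3, 0.2])              # a reference (MaxEnt-like) distribution
print(kl(p, p))                             # 0.0 at the minimum
for q in ([0.4, 0.4, 0.2], [0.6, 0.2, 0.2], [0.3, 0.3, 0.4]):
    print(kl(np.array(q), p) > 0.0)         # True: positive away from p
```

In the paper's notation, Ψ differs from this divergence only by terms independent of the trial distribution, which is why min-Ψ and min-KL pick out the same point.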
Combining Experiments and Simulations Using the Maximum Entropy Principle
Boomsma, Wouter; Ferkinghoff-Borg, Jesper; Lindorff-Larsen, Kresten
2014-01-01
applications in our field has grown steadily in recent years, in areas as diverse as sequence analysis, structural modelling, and neurobiology. In this Perspectives article, we give a broad introduction to the method, in an attempt to encourage its further adoption. The general procedure is explained...... in the context of a simple example, after which we proceed with a real-world application in the field of molecular simulations, where the maximum entropy procedure has recently provided new insight. Given the limited accuracy of force fields, macromolecular simulations sometimes produce results...
OIL MONITORING DIAGNOSTIC CRITERIONS BASED ON MAXIMUM ENTROPY PRINCIPLE
Huo Hua; Li Zhuguo; Xia Yanchun
2005-01-01
A method is presented that applies the maximum entropy probability density estimation approach to constructing diagnostic criteria for oil monitoring data. The method improves the precision of diagnostic criteria for evaluating the wear state of mechanical facilities and judging abnormal data. From the critical boundary points defined, a new measure for monitoring wear state and identifying probable wear faults can be obtained. The method can be applied to spectrometric analysis and direct-reading ferrographic analysis. On the basis of the analysis and discussion of two examples of 8NVD48A-2U diesel engines, the method is shown to be effective in oil monitoring.
Can the maximum entropy principle be explained as a consistency requirement?
Uffink, J.
2001-01-01
The principle of maximum entropy is a general method to assign values to probability distributions on the basis of partial information. This principle, introduced by Jaynes in 1957, forms an extension of the classical principle of insufficient reason. It has been further generalized, both in mathema
General proof of (maximum) entropy principle in Lovelock gravity
Cao, Li-Ming
2014-01-01
We consider a static self-gravitating perfect fluid system in Lovelock gravity theory. For a spatial region on the hypersurface orthogonal to the static Killing vector, using Tolman's law of temperature, the assumption of a fixed total particle number inside the spatial region, and variations (of the relevant fields) in which the induced metric and its first derivatives are fixed on the boundary of the spatial region, with the help of the gravitational equations of the theory we can prove a theorem stating that the total entropy of the fluid in this region takes an extremum value. A converse theorem can also be obtained by following the reverse process of our proof.
General proof of (maximum) entropy principle in Lovelock gravity
Cao, Li-Ming; Xu, Jianfei
2015-02-01
We consider a static self-gravitating perfect fluid system in Lovelock gravity theory. For a spatial region on the hypersurface orthogonal to the static Killing vector, using Tolman's law of temperature, the assumption of a fixed total particle number inside the spatial region, and variations (of the relevant fields) in which the induced metric and its first derivatives are fixed on the boundary of the spatial region, with the help of the gravitational and fluid equations of the theory we can prove a theorem stating that the total entropy of the fluid in this region takes an extremum value. A converse theorem can also be obtained by following the reverse process of our proof. We also propose a quasilocal definition of isolation for the system and explain the physical meaning of the boundary conditions in the proof of the theorems.
Shi Jingtao; Wu Zhen
2011-01-01
A stochastic maximum principle for the risk-sensitive optimal control problem of jump diffusion processes with an exponential-of-integral cost functional is derived, assuming that the value function is smooth, where the diffusion and jump terms may both depend on the control. The form of the maximum principle is similar to its risk-neutral counterpart, but the adjoint equations and the maximum condition depend heavily on the risk-sensitive parameter. As an application, a linear-quadratic risk-sensitive control problem is solved using the maximum principle derived, and an explicit optimal control is obtained.
Venus atmosphere profile from a maximum entropy principle
L. N. Epele
2007-10-01
The variational method with constraints recently developed by Verkley and Gerkema to describe maximum-entropy atmospheric profiles is generalized to ideal gases but with temperature-dependent specific heats. In so doing, an extended and non-standard potential temperature is introduced that is well suited for tackling the problem under consideration. This new formalism is successfully applied to the atmosphere of Venus. Three well-defined regions emerge in this atmosphere up to a height of 100 km from the surface: the lowest one, up to about 35 km, is adiabatic; a transition layer is located at the height of the cloud deck; and finally a third region is practically isothermal.
Hanel, Rudolf; Thurner, Stefan; Gell-Mann, Murray
2014-05-13
The maximum entropy principle (MEP) is a method for obtaining the most likely distribution functions of observables from statistical systems by maximizing entropy under constraints. The MEP has found hundreds of applications in ergodic and Markovian systems in statistical mechanics, information theory, and statistics. For several decades there has been an ongoing controversy over whether the notion of the maximum entropy principle can be extended in a meaningful way to nonextensive, nonergodic, and complex statistical systems and processes. In this paper we start by reviewing how Boltzmann-Gibbs-Shannon entropy is related to multiplicities of independent random processes. We then show how the relaxation of independence naturally leads to the most general entropies that are compatible with the first three Shannon-Khinchin axioms, the (c,d)-entropies. We demonstrate that the MEP is a perfectly consistent concept for nonergodic and complex statistical systems if their relative entropy can be factored into a generalized multiplicity and a constraint term. The problem of finding such a factorization reduces to finding an appropriate representation of relative entropy in a linear basis. In a particular example we show that path-dependent random processes with memory naturally require specific generalized entropies. The example is to our knowledge the first exact derivation of a generalized entropy from the microscopic properties of a path-dependent random process.
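The Boltzmann-Gibbs-Shannon case reviewed above can be made concrete with a small numerical sketch (illustrative code, not from the paper): maximizing Shannon entropy over a finite support subject to normalization and a fixed mean yields the exponential (Gibbs) family, with the Lagrange multiplier fixed by the mean constraint.

```python
import math

def maxent_distribution(values, target_mean, tol=1e-10):
    """Maximize Shannon entropy over a finite support subject to a fixed
    mean. The solution has the Gibbs form p_i ∝ exp(-lam * x_i); the
    multiplier lam is found by bisection on the mean constraint."""
    def mean_for(lam):
        w = [math.exp(-lam * x) for x in values]
        z = sum(w)
        return sum(x * wi for x, wi in zip(values, w)) / z

    lo, hi = -50.0, 50.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        # mean_for is decreasing in lam: larger lam suppresses large x
        if mean_for(mid) > target_mean:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    w = [math.exp(-lam * x) for x in values]
    z = sum(w)
    return [wi / z for wi in w]

# toy support {0,1,2,3} with prescribed mean 1.0
probs = maxent_distribution([0, 1, 2, 3], target_mean=1.0)
```

For a uniform target mean (here, 1.5 on this support) the multiplier would vanish and the uniform distribution would be recovered, which is the standard consistency check for such solvers.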
Setting the Renormalization Scale in QCD: The Principle of Maximum Conformality
Brodsky, Stanley J.; /SLAC /Southern Denmark U., CP3-Origins; Di Giustino, Leonardo; /SLAC
2011-08-19
A key problem in making precise perturbative QCD predictions is the uncertainty in determining the renormalization scale μ of the running coupling α_s(μ²): The purpose of the running coupling in any gauge theory is to sum all terms involving the β function; in fact, when the renormalization scale is set properly, all non-conformal β ≠ 0 terms in a perturbative expansion arising from renormalization are summed into the running coupling. The remaining terms in the perturbative series are then identical to those of a conformal theory, i.e., the corresponding theory with β = 0. The resulting scale-fixed predictions using the 'principle of maximum conformality' (PMC) are independent of the choice of renormalization scheme - a key requirement of renormalization group invariance. The results avoid renormalon resummation and agree with QED scale-setting in the Abelian limit. The PMC is also the theoretical principle underlying the BLM procedure, commensurate scale relations between observables, and the scale-setting method used in lattice gauge theory. The number of active flavors n_f in the QCD β function is also correctly determined. We discuss several methods for determining the PMC/BLM scale for QCD processes. We show that a single global PMC scale, valid at leading order, can be derived from basic properties of the perturbative QCD cross section. The elimination of the renormalization scheme ambiguity using the PMC will not only increase the precision of QCD tests, but it will also increase the sensitivity of collider experiments to new physics beyond the Standard Model.
Polyatomic gases with dynamic pressure: Maximum entropy principle and shock structure
Pavić-Čolić, Milana; Simić, Srboljub
2016-01-01
This paper is concerned with the analysis of polyatomic gases within the framework of kinetic theory. Internal degrees of freedom are modeled using a single continuous variable corresponding to the molecular internal energy. A non-equilibrium velocity distribution function, compatible with the macroscopic field variables, is constructed using the maximum entropy principle. A proper collision cross section is constructed which obeys the micro-reversibility requirement. The source term and entropy production rate are determined in a form which generalizes the results obtained within the framework of extended thermodynamics; owing to the presence of free parameters, they can be adapted to appropriate physical situations. They are also compared with the results obtained using the BGK approximation. For the proposed model, the shock structure problem is thoroughly analyzed.
Maximum principle and convergence of central schemes based on slope limiters
Mehmetoglu, Orhan
2012-01-01
A maximum principle and convergence of second order central schemes is proven for scalar conservation laws in dimension one. It is well known that to establish a maximum principle a nonlinear piecewise linear reconstruction is needed and a typical choice is the minmod limiter. Unfortunately, this implies that the scheme uses a first order reconstruction at local extrema. The novelty here is that we allow local nonlinear reconstructions which do not reduce to first order at local extrema and still prove maximum principle and convergence. © 2011 American Mathematical Society.
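The baseline that this abstract contrasts with can be sketched in a few lines (an illustrative stand-alone snippet, not the authors' scheme): the classical minmod limiter returns a zero slope whenever neighboring differences disagree in sign, which is exactly why standard second-order central schemes fall back to a first-order reconstruction at local extrema.

```python
def minmod(a, b):
    """Minmod limiter: the smaller-magnitude argument when the signs
    agree, zero otherwise. The zero branch is what forces first-order
    reconstruction at local extrema."""
    if a * b <= 0:
        return 0.0
    return a if abs(a) < abs(b) else b

def limited_slopes(u):
    """Piecewise-linear slopes for cell averages u using minmod;
    boundary cells are left with zero slope for simplicity."""
    n = len(u)
    s = [0.0] * n
    for i in range(1, n - 1):
        s[i] = minmod(u[i] - u[i - 1], u[i + 1] - u[i])
    return s

u = [0.0, 1.0, 0.5, 0.7, 0.2]
s = limited_slopes(u)
```

Note that at the cell `u = [0, 1, 0]` the two one-sided differences have opposite signs and the limited slope is zero, which is the degeneracy the cited work avoids with nonlinear reconstructions that stay second order at extrema.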
The physical principles of the quantum theory
Heisenberg, Werner
1949-01-01
The contributions of few contemporary scientists have been as far-reaching in their effects as those of Nobel Laureate Werner Heisenberg. His matrix theory is one of the bases of modern quantum mechanics, while his "uncertainty principle" has altered our whole philosophy of science. In this classic, based on lectures delivered at the University of Chicago, Heisenberg presents a complete physical picture of quantum theory. He covers not only his own contributions, but also those of Bohr, Dirac, Bose, de Broglie, Fermi, Einstein, Pauli, Schrödinger, Sommerfeld, Rupp, Wilson, Germer, and others.
A discussion on maximum entropy production and information theory
Bruers, Stijn [Instituut voor Theoretische Fysica, Celestijnenlaan 200D, Katholieke Universiteit Leuven, B-3001 Leuven (Belgium)
2007-07-06
We will discuss the maximum entropy production (MaxEP) principle based on Jaynes' information theoretical arguments, as was done by Dewar (2003 J. Phys. A: Math. Gen. 36 631-41, 2005 J. Phys. A: Math. Gen. 38 371-81). With the help of a simple mathematical model of a non-equilibrium system, we will show how to derive minimum and maximum entropy production. Furthermore, the model will help us to clarify some confusing points and to see differences between some MaxEP studies in the literature.
Domoshnitsky Alexander
2009-01-01
Full Text Available We obtain maximum principles for a first-order neutral functional differential equation whose coefficient operators are linear continuous operators, some of them positive, acting between the space of continuous functions and the space of essentially bounded functions defined on the relevant interval. New tests for positivity of the Cauchy function and its derivative are proposed. Results on existence and uniqueness of solutions for various boundary value problems are obtained on the basis of the maximum principles.
A maximum principle for diffusive Lotka-Volterra systems of two competing species
Chen, Chiun-Chuan; Hung, Li-Chang
2016-10-01
Using an elementary approach, we establish a new maximum principle for the diffusive Lotka-Volterra system of two competing species, which involves pointwise estimates of an elliptic equation consisting of the second derivative of one function, the first derivative of another function, and a quadratic nonlinearity. This maximum principle gives a priori estimates for the total mass of the two species. Moreover, applying it to the system of three competing species leads to a nonexistence theorem of traveling wave solutions.
J. G. Dyke; Kleidon, A.
2010-01-01
The Maximum Entropy Production (MEP) principle has been remarkably successful in producing accurate predictions for non-equilibrium states. We argue that this is because the MEP principle is an effective inference procedure that produces the best predictions from the available information. Since all Earth system processes are subject to the conservation of energy, mass and momentum, we argue that in practical terms the MEP principle should be applied to Earth system processes in terms of the ...
Principles of the theory of solids
Ziman, J M
1972-01-01
Professor Ziman's classic textbook on the theory of solids was first published in 1964. This paperback edition is a reprint of the second edition, which was substantially revised and enlarged in 1972. The value and popularity of this textbook is well attested by reviewers' opinions and by the existence of several foreign language editions, including German, Italian, Spanish, Japanese, Polish and Russian. The book gives a clear exposition of the elements of the physics of perfect crystalline solids. In discussing the principles, the author aims to give students an appreciation of the conditions which are necessary for the appearance of the various phenomena. A self-contained mathematical account is given of the simplest model that will demonstrate each principle. A grounding in quantum mechanics and knowledge of elementary facts about solids is assumed. This is therefore a textbook for advanced undergraduates and is also appropriate for graduate courses.
General proof of entropy principle in Einstein-Maxwell theory
Fang, Xiongjun
2015-01-01
We consider a static self-gravitating charged perfect fluid system in the Einstein-Maxwell theory. Assuming that Maxwell's equations and the Einstein constraint equation are satisfied and that the temperature of the fluid obeys Tolman's law, we prove that if the total entropy of the fluid achieves an extremum for all variations of the metric and electric potential with fixed boundary values, then the remaining components of Einstein's equation hold. Conversely, if Einstein's equation and Maxwell's equations hold, the total entropy achieves an extremum. Our work suggests that the maximum entropy principle is consistent with Einstein's equation when the electric field is taken into account.
Wang, Junping
2011-01-01
This paper derives some maximum principles for P1-conforming finite element approximations of quasi-linear second-order elliptic equations. The results are extensions of the classical maximum principles in the theory of partial differential equations to finite element methods. The mathematical tools are also extensions of the variational approach that was used in classical PDE theories. The maximum principles for finite element approximations are valid under some geometric conditions on the angles of each element. For the general quasi-linear elliptic equation, each triangle or tetrahedron needs to be $O(h^\alpha)$-acute in the sense that each angle $\alpha_{ij}$ (for a triangle) or interior dihedral angle $\alpha_{ij}$ (for a tetrahedron) must satisfy $\alpha_{ij}\le \pi/2-\gamma h^\alpha$ for some $\alpha\ge 0$ and $\gamma>0$. For the Poisson problem, where the differential operator is the Laplacian, the angle requirement is the same as the classical one: either all the triangles are non-obtuse ...
A strong test of the maximum entropy theory of ecology.
Xiao, Xiao; McGlinn, Daniel J; White, Ethan P
2015-03-01
The maximum entropy theory of ecology (METE) is a unified theory of biodiversity that predicts a large number of macroecological patterns using information on only species richness, total abundance, and total metabolic rate of the community. We evaluated four major predictions of METE simultaneously at an unprecedented scale using data from 60 globally distributed forest communities including more than 300,000 individuals and nearly 2,000 species. METE successfully captured 96% and 89% of the variation in the rank distribution of species abundance and individual size but performed poorly when characterizing the size-density relationship and intraspecific distribution of individual size. Specifically, METE predicted a negative correlation between size and species abundance, which is weak in natural communities. By evaluating multiple predictions with large quantities of data, our study not only identifies a mismatch between abundance and body size in METE but also demonstrates the importance of conducting strong tests of ecological theories.
Applying the maximum information principle to cell transmission model of traffic flow
刘喜敏; 卢守峰
2013-01-01
This paper integrates the maximum information principle with the Cell Transmission Model (CTM) to formulate the velo-city distribution evolution of vehicle traffic flow. The proposed discrete traffic kinetic model uses the cell transmission model to cal-culate the macroscopic variables of the vehicle transmission, and the maximum information principle to examine the velocity distri-bution in each cell. The velocity distribution based on maximum information principle is solved by the Lagrange multiplier method. The advantage of the proposed model is that it can simultaneously calculate the hydrodynamic variables and velocity distribution at the cell level. An example shows how the proposed model works. The proposed model is a hybrid traffic simulation model, which can be used to understand the self-organization phenomena in traffic flows and predict the traffic evolution.
Maximum hardness and minimum polarizability principles through lattice energies of ionic compounds
Kaya, Savaş, E-mail: savaskaya@cumhuriyet.edu.tr [Department of Chemistry, Faculty of Science, Cumhuriyet University, Sivas 58140 (Turkey); Kaya, Cemal, E-mail: kaya@cumhuriyet.edu.tr [Department of Chemistry, Faculty of Science, Cumhuriyet University, Sivas 58140 (Turkey); Islam, Nazmul, E-mail: nazmul.islam786@gmail.com [Theoretical and Computational Chemistry Research Laboratory, Department of Basic Science and Humanities/Chemistry Techno Global-Balurghat, Balurghat, D. Dinajpur 733103 (India)
2016-03-15
The maximum hardness (MHP) and minimum polarizability (MPP) principles have been analyzed using the relationship among the lattice energies of ionic compounds with their electronegativities, chemical hardnesses and electrophilicities. Lattice energy, electronegativity, chemical hardness and electrophilicity values of the ionic compounds considered in the present study have been calculated using new equations derived by some of the authors in recent years. For 4 simple reactions, the changes of the hardness (Δη), polarizability (Δα) and electrophilicity index (Δω) were calculated. It is shown that the maximum hardness principle is obeyed by all chemical reactions, but the minimum polarizability principle and the minimum electrophilicity principle are not valid for all reactions. We also propose simple methods to compute the percentage of ionic character and internuclear distances of ionic compounds. Comparative studies with experimental sets of data reveal that the proposed methods of computing the percentage of ionic character and internuclear distances of ionic compounds are valid.
Moroz, Adam
2009-06-11
The maximum energy dissipation principle is applied to nonlinear chemical thermodynamics in terms of a distance variable (generalized displacement) from the global equilibrium, using an optimal control interpretation to develop a variational formulation. The cost-like functional was chosen to support the suggestion that such a formulation corresponds to the maximum energy dissipation principle. Using this approach, a variational framework was proposed for nonlinear chemical thermodynamics, including a general cooperative kinetics model. The formulation is in good agreement with standard linear nonequilibrium chemical thermodynamics.
Wang, Tianxiao
2010-01-01
This paper formulates and studies a stochastic maximum principle for forward-backward stochastic Volterra integral equations (FBSVIEs for short), where the control domain is assumed to be convex. A linear-quadratic (LQ for short) problem for backward stochastic Volterra integral equations (BSVIEs for short) is then presented to illustrate the optimal control problem. Motivated by the techniques used in solving the above problem, a more convenient and briefer method for the unique solvability of the M-solution for BSVIEs is proposed. Finally, we investigate a risk minimization problem by means of the maximum principle for FBSVIEs. A closed-form optimal portfolio is obtained in some special cases.
Predicting the Outcome of NBA Playoffs Based on the Maximum Entropy Principle
Ge Cheng; Zhenyu Zhang; Moses Ntanda Kyebambe; Nasser Kimbugwe
2016-01-01
Predicting the outcome of National Basketball Association (NBA) matches poses a challenging problem of interest to the research community as well as the general public. In this article, we formalize the problem of predicting NBA game results as a classification problem and apply the principle of maximum entropy to construct an NBA Maximum Entropy (NBAME) model that fits discrete statistics for NBA games, and then predict the outcomes of NBA playoffs using the model. Our results reveal that...
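In the binary case, a maximum entropy classifier of the kind described above reduces to logistic regression fit by maximizing the log-likelihood. A minimal sketch with made-up toy features (not the NBAME model or its data):

```python
import math

def train_maxent(samples, labels, lr=0.1, epochs=2000):
    """Tiny binary maximum-entropy (logistic) classifier trained by
    batch gradient ascent on the log-likelihood. Features here are
    illustrative stand-ins for discrete game statistics."""
    n_feat = len(samples[0])
    w = [0.0] * n_feat
    for _ in range(epochs):
        grad = [0.0] * n_feat
        for x, y in zip(samples, labels):
            p = 1.0 / (1.0 + math.exp(-sum(wi * xi for wi, xi in zip(w, x))))
            for j in range(n_feat):
                grad[j] += (y - p) * x[j]   # gradient of log-likelihood
        w = [wi + lr * g / len(samples) for wi, g in zip(w, grad)]
    return w

def predict(w, x):
    """Win probability under the fitted maximum entropy model."""
    return 1.0 / (1.0 + math.exp(-sum(wi * xi for wi, xi in zip(w, x))))

# hypothetical toy data: (home-court indicator, scaled point differential)
X = [(1, 0.5), (0, -0.3), (1, 0.8), (0, -0.6)]
y = [1, 0, 1, 0]
w = train_maxent(X, y)
```

A multi-class version with feature expectations as explicit constraints would be closer to the NBAME setup, but the fitting principle is the same.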
A Matter of Principle: The Principles of Quantum Theory, Dirac's Equation, and Quantum Information
Plotnitsky, Arkady
2015-01-01
This article is concerned with the role of fundamental principles in theoretical physics, especially quantum theory. The fundamental principles of relativity will be addressed as well, in view of their role in quantum electrodynamics and quantum field theory, specifically in Dirac's work; in particular, Dirac's derivation of his relativistic equation for the electron from the principles of relativity and quantum theory is the main focus of this article. I shall, however, also consider Heisenberg's derivation of quantum mechanics, which inspired Dirac. I argue that Heisenberg's and Dirac's work alike was guided by their adherence to and confidence in the fundamental principles of quantum theory. The final section of the article discusses the recent work by G. M. D'Ariano and his coworkers on the principles of quantum information theory, which extends quantum theory and its principles in a new direction. This extension enabled them to offer a new derivation of Dirac's equation from these principles alone...
Saha, Ranajit; Pan, Sudip; Chattaraj, Pratim K
2016-11-05
The validity of the maximum hardness principle (MHP) is tested in the cases of 50 chemical reactions, most of which are organic in nature and exhibit anomeric effect. To explore the effect of the level of theory on the validity of MHP in an exothermic reaction, B3LYP/6-311++G(2df,3pd) and LC-BLYP/6-311++G(2df,3pd) (def2-QZVP for iodine and mercury) levels are employed. Different approximations like the geometric mean of hardness and combined hardness are considered in case there are multiple reactants and/or products. It is observed that, based on the geometric mean of hardness, while 82% of the studied reactions obey the MHP at the B3LYP level, 84% of the reactions follow this rule at the LC-BLYP level. Most of the reactions possess the hardest species on the product side. A 50% null hypothesis is rejected at a 1% level of significance.
Ranajit Saha
2016-11-01
Full Text Available The validity of the maximum hardness principle (MHP) is tested in the cases of 50 chemical reactions, most of which are organic in nature and exhibit anomeric effect. To explore the effect of the level of theory on the validity of MHP in an exothermic reaction, B3LYP/6-311++G(2df,3pd) and LC-BLYP/6-311++G(2df,3pd) (def2-QZVP for iodine and mercury) levels are employed. Different approximations like the geometric mean of hardness and combined hardness are considered in case there are multiple reactants and/or products. It is observed that, based on the geometric mean of hardness, while 82% of the studied reactions obey the MHP at the B3LYP level, 84% of the reactions follow this rule at the LC-BLYP level. Most of the reactions possess the hardest species on the product side. A 50% null hypothesis is rejected at a 1% level of significance.
Likelihood Principle and Maximum Likelihood Estimator of Location Parameter for Cauchy Distribution.
1986-05-01
The consistency (or strong consistency) of the maximum likelihood estimator has been studied by many researchers, for example, Wald (1949) and Wolfowitz (1953, 1965).
[24] Wald, A. (1949). Note on the consistency of maximum likelihood estimates. Ann. Math. Statist., Vol. 20, 595-601.
[25] Wolfowitz, J. (1953). The method of maximum likelihood and Wald theory of decision functions. Indag. Math., Vol. 15, 114-119.
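The estimator in question can be illustrated numerically (a sketch under simplifying assumptions: known scale and a search bracketed by the sample range, inside which the location MLE always lies; not the report's derivation). Note that the Cauchy likelihood can be multimodal, so a bracketed unimodal search is only a heuristic.

```python
import math

def cauchy_negloglik(theta, data, scale=1.0):
    """Negative log-likelihood of a Cauchy(location=theta, scale) sample."""
    return sum(math.log(math.pi * scale * (1 + ((x - theta) / scale) ** 2))
               for x in data)

def mle_location(data, iters=200):
    """Golden-section search for the location MLE over the sample range.
    Caveat: assumes the likelihood is unimodal on that bracket."""
    phi = (math.sqrt(5) - 1) / 2
    a, b = min(data), max(data)
    for _ in range(iters):
        c = b - phi * (b - a)
        d = a + phi * (b - a)
        if cauchy_negloglik(c, data) < cauchy_negloglik(d, data):
            b = d
        else:
            a = c
    return 0.5 * (a + b)

theta_hat = mle_location([-2.0, 0.0, 2.0])
```

For symmetric samples such as the one above, the estimate coincides with the center of symmetry, which gives a quick sanity check.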
Incorporation of generalized uncertainty principle into Lifshitz field theories
Faizal, Mir, E-mail: f2mir@uwaterloo.ca [Department of Physics and Astronomy, University of Waterloo, Waterloo, Ontario N2L 3G1 (Canada); Majumder, Barun, E-mail: barunbasanta@iitgn.ac.in [Indian Institute of Technology Gandhinagar, Ahmedabad, 382424 (India)
2015-06-15
In this paper, we will incorporate the generalized uncertainty principle into field theories with Lifshitz scaling. We will first construct both bosonic and fermionic theories with Lifshitz scaling based on the generalized uncertainty principle. After that we will incorporate the generalized uncertainty principle into a non-abelian gauge theory with Lifshitz scaling. We will observe that even though the action for this theory is non-local, it is invariant under local gauge transformations. We will also perform the stochastic quantization of this Lifshitz fermionic theory based on the generalized uncertainty principle.
Variational principles for multisymplectic second-order classical field theories
Prieto-Martínez, Pedro Daniel; Román-Roy, Narciso
2015-06-01
We state a unified geometrical version of the variational principles for second-order classical field theories. The standard Lagrangian and Hamiltonian variational principles and the corresponding field equations are recovered from this unified framework.
Variational principles for multisymplectic second-order classical field theories
Román Roy, Narciso; Prieto Martínez, Pedro Daniel
2015-01-01
We state a unified geometrical version of the variational principles for second-order classical field theories. The standard Lagrangian and Hamiltonian variational principles and the corresponding field equations are recovered from this unified framework. Peer Reviewed
MAXIMUM PRINCIPLES OF NONHOMOGENEOUS SUBELLIPTIC P-LAPLACE EQUATIONS AND APPLICATIONS
Liu Haifeng; Niu Pengcheng
2006-01-01
Maximum principles for weak solutions of nonhomogeneous subelliptic p-Laplace equations related to smooth vector fields {X_j} satisfying the Hörmander condition are proved by the choice of suitable test functions and the adaptation of the classical Moser iteration method. Some applications are given in this paper.
Lagrange Multipliers, Adjoint Equations, the Pontryagin Maximum Principle and Heuristic Proofs
Ollerton, Richard L.
2013-01-01
Deeper understanding of important mathematical concepts by students may be promoted through the (initial) use of heuristic proofs, especially when the concepts are also related back to previously encountered mathematical ideas or tools. The approach is illustrated by use of the Pontryagin maximum principle which is then illuminated by reference to…
Carathéodory domains and Rudin's converse of the maximum modulus principle
Fedorovskiy, K. Yu
2015-01-01
We obtain extensions of the classical Rudin theorem on the converse of the maximum modulus principle from the unit disc to Carathéodory domains. The proofs are based on recent results about properties of conformal mappings of Carathéodory domains, which are also considered in the paper. Bibliography: 18 titles.
Jingtao Shi
2013-01-01
Full Text Available This paper is concerned with the relationship between maximum principle and dynamic programming for stochastic recursive optimal control problems. Under certain differentiability conditions, relations among the adjoint processes, the generalized Hamiltonian function, and the value function are given. A linear quadratic recursive utility portfolio optimization problem in the financial engineering is discussed as an explicitly illustrated example of the main result.
Uri UDIN
2014-06-01
Full Text Available This article proposes the use of the Pontryagin maximum principle for parametric identification of a mathematical model of a vessel. The proposed method is particularly promising for identification in real-time mode, when the identified parameters can be used for forecasting upcoming maneuvers.
Hazoglou, Michael J; Walther, Valentin; Dixit, Purushottam D; Dill, Ken A
2015-08-07
There has been interest in finding a general variational principle for non-equilibrium statistical mechanics. We give evidence that Maximum Caliber (Max Cal) is such a principle. Max Cal, a variant of maximum entropy, predicts dynamical distribution functions by maximizing a path entropy subject to dynamical constraints, such as average fluxes. We first show that Max Cal leads to standard near-equilibrium results - including the Green-Kubo relations, Onsager's reciprocal relations of coupled flows, and Prigogine's principle of minimum entropy production - in a way that is particularly simple. We develop some generalizations of the Onsager and Prigogine results that apply arbitrarily far from equilibrium. Because Max Cal does not require any notion of "local equilibrium," or any notion of entropy dissipation, or temperature, or even any restriction to material physics, it is more general than many traditional approaches. It is also applicable to flows and traffic on networks, for example.
Westhoff, M.; Erpicum, S.; Archambeau, P.; Pirotton, M.; Zehe, E.; Dewals, B.
2015-12-01
Power can be produced by a system driven by a potential difference. For a given potential difference, the power that can be extracted is constrained by the Carnot limit, which follows from the first and second laws of thermodynamics. If the system is such that the flux producing power (with power being the flux times its driving potential difference) also influences the potential difference, a maximum in power can be obtained as a result of the trade-off between the flux and the potential difference. This is referred to as the maximum power principle. It has already been shown that the atmosphere operates close to this maximum power limit when it comes to heat transport from the Equator to the poles, or vertically, from the surface to the atmospheric boundary layer. To reach this state of maximum power, the effective thermal conductivity of the atmosphere is adapted by the creation of convection cells. The aim of this study is to test whether the soil's effective hydraulic conductivity also adapts in such a way that it produces maximum power. However, the soil's hydraulic conductivity adapts differently, for example by the creation of preferential flow paths. Here, this process is simulated in a lab experiment, which focuses on preferential flow paths created by piping. In the lab, we created a hydrological analogue to the atmospheric model dealing with heat transport between Equator and poles, with the aim to test whether the effective hydraulic conductivity of the sand bed can be predicted with the maximum power principle. The experimental setup consists of two freely draining reservoirs connected with each other by a confined aquifer. By adding water to only one reservoir, a potential difference builds up until a steady state is reached. The results will indicate whether the maximum power principle applies to groundwater flow and how it should be applied. Because of the different way of adaptation of flow conductivity, the results differ from that of the ...
Quantum theory of the generalised uncertainty principle
Bruneton, Jean-Philippe; Larena, Julien
2017-04-01
We significantly extend previous works on the Hilbert space representations of the generalized uncertainty principle (GUP) in 3 + 1 dimensions of the form [X_i,P_j] = i F_{ij} where F_{ij} = f({{P}}^2) δ_{ij} + g({{P}}^2) P_i P_j for arbitrary functions f and g. However, we restrict our study to the case of commuting X's. We focus in particular on the symmetries of the theory, and the minimal length that emerges in some cases. We first show that, at the algebraic level, there exists an unambiguous mapping between the GUP with a deformed quantum algebra and a quadratic Hamiltonian into a standard, Heisenberg algebra of operators and an aquadratic Hamiltonian, provided the boost sector of the symmetries is modified accordingly. The theory can also be mapped to a completely standard Quantum Mechanics with standard symmetries, but with momentum-dependent position operators. Next, we investigate the Hilbert space representations of these algebraically equivalent models, and focus specifically on whether they exhibit a minimal length. We carry out the functional analysis of the various operators involved, and show that the appearance of a minimal length critically depends on the relationship between the generators of translations and the physical momenta. In particular, because this relationship is preserved by the algebraic mapping presented in this paper, when a minimal length is present in the standard GUP, it is also present in the corresponding aquadratic Hamiltonian formulation, despite the perfectly standard algebra of this model. In general, a minimal length requires bounded generators of translations, i.e. a specific kind of quantization of space, and this depends on the precise shape of the function f defined previously. This result provides an elegant and unambiguous classification of which universal quantum gravity corrections lead to the emergence of a minimal length.
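For the simplest quadratic member of this family, f(P²) = 1 + βP² in one dimension, the commutator implies ΔxΔp ≥ (ħ/2)(1 + βΔp²), whose right-hand side divided by Δp has a nonzero minimum. A quick numerical check (illustrative units and β value assumed; not from the paper) recovers the analytic result Δx_min = ħ√β at Δp = 1/√β.

```python
import math

hbar = 1.0    # natural units, for illustration only
beta = 0.04   # hypothetical GUP deformation parameter

def dx_lower_bound(dp):
    """Lower bound on Δx implied by ΔxΔp ≥ (ħ/2)(1 + βΔp²)."""
    return 0.5 * hbar * (1.0 + beta * dp * dp) / dp

# scan Δp for the minimum of the bound; analytically the minimum sits
# at Δp* = 1/√β with Δx_min = ħ√β
dps = [0.01 * k for k in range(1, 2001)]
dx_min = min(dx_lower_bound(dp) for dp in dps)
```

With β = 0.04 and ħ = 1 the scan should land on Δx_min ≈ √β = 0.2, matching the closed-form minimization of (ħ/2)(1/Δp + βΔp).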
Moroz, Adam
2008-05-01
In this work we revise the applicability of the optimal control and variational approach to the maximum energy dissipation (MED) principle in non-equilibrium thermodynamics. The optimal control analogies for the kinetical and potential parts of thermodynamic Lagrangian (in the form of a sum of the positively defined thermodynamic potential and positively defined dissipative function) have been considered. An interpretation of thermodynamic momenta is discussed with respect to standard optimal control applications, which employ dynamic constraints. Also included is interpretation in terms of the least action principle.
Invulnerability of power grids based on maximum flow theory
Fan, Wenli; Huang, Shaowei; Mei, Shengwei
2016-11-01
The invulnerability analysis against cascades is of great significance in evaluating the reliability of power systems. In this paper, we propose a novel cascading failure model based on the maximum flow theory to analyze the invulnerability of power grids. In the model, initial node loads are built on the feasible flows of the nodes, with a tunable parameter γ used to control the initial node load distribution. The simulation results show that both the invulnerability against cascades and the tolerance parameter threshold αT are greatly affected by the node load distribution. As γ grows, the invulnerability exhibits distinct patterns of change under different attack strategies and different tolerance parameters α. These results are useful in power grid planning and cascading failure prevention.
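The maximum flow computation at the core of such a model can be sketched with a standard Edmonds-Karp implementation (illustrative code and toy network; the paper's node-load construction and the parameter γ are not reproduced here):

```python
from collections import deque

def max_flow(capacity, s, t):
    """Edmonds-Karp maximum flow: repeatedly augment along the shortest
    residual path found by BFS. `capacity` maps node -> {node: capacity}."""
    res = {u: dict(vs) for u, vs in capacity.items()}   # residual capacities
    for u, vs in capacity.items():
        for v in vs:
            res.setdefault(v, {}).setdefault(u, 0)      # reverse edges
    flow = 0
    while True:
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:                    # BFS for a path
            u = q.popleft()
            for v, c in res[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        v, bottleneck = t, float('inf')                 # path bottleneck
        while parent[v] is not None:
            bottleneck = min(bottleneck, res[parent[v]][v])
            v = parent[v]
        v = t
        while parent[v] is not None:                    # update residuals
            u = parent[v]
            res[u][v] -= bottleneck
            res[v][u] += bottleneck
            v = u
        flow += bottleneck

# hypothetical toy network; a node's feasible flow would be the max
# source-to-sink flow routed through it in the paper's construction
cap = {'s': {'a': 3, 'b': 2}, 'a': {'t': 2, 'b': 1}, 'b': {'t': 3}, 't': {}}
```

On this network the minimum cut consists of the edges into `t` (capacity 2 + 3), so the maximum flow is 5, which the BFS-based augmentation reaches in three augmenting paths.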
ZHUANG Huifu
2016-03-01
Full Text Available Spatial-contextual information is generally used in change detection because there is significant speckle noise in synthetic aperture radar (SAR) images. In this paper, exploiting the rich texture information of SAR images, an unsupervised change detection approach for high-resolution SAR images based on a texture feature vector and the maximum entropy principle is proposed. The difference image is generated using the 32-dimensional texture feature vector of the gray-level co-occurrence matrix (GLCM), and the threshold is obtained automatically by the maximum entropy principle. In this method, the appropriate window size for change detection is 11×11, according to a regression analysis of window size against a precision index. The experimental results show that the proposed approach can both reduce the influence of speckle noise and improve the detection accuracy for high-resolution SAR images effectively, and that it outperforms the Markov random field approach.
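Automatic threshold selection by the maximum entropy principle is commonly done in the style of Kapur's method: choose the histogram cut that maximizes the summed entropies of the below- and above-threshold classes. A small self-contained sketch on a made-up bimodal histogram (the GLCM feature extraction itself is not shown, and this need not match the authors' exact formulation):

```python
import math

def maxent_threshold(hist):
    """Kapur-style maximum-entropy threshold: pick the bin t that
    maximizes the sum of the Shannon entropies of the two classes
    hist[:t] and hist[t:], each renormalized to a distribution."""
    total = float(sum(hist))
    p = [h / total for h in hist]
    best_t, best_h = 0, float('-inf')
    for t in range(1, len(hist)):
        w0 = sum(p[:t])
        w1 = 1.0 - w0
        if w0 <= 0 or w1 <= 0:
            continue
        h0 = -sum(pi / w0 * math.log(pi / w0) for pi in p[:t] if pi > 0)
        h1 = -sum(pi / w1 * math.log(pi / w1) for pi in p[t:] if pi > 0)
        if h0 + h1 > best_h:
            best_h, best_t = h0 + h1, t
    return best_t

# hypothetical bimodal difference-image histogram: unchanged pixels
# peak near bin 2, changed pixels near bin 7
hist = [1, 8, 20, 8, 1, 1, 6, 15, 6, 1]
t = maxent_threshold(hist)
```

For a cleanly bimodal histogram like this one, the selected cut falls between the two modes, separating "unchanged" from "changed" pixels.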
Faggian, Silvia
2007-01-01
The paper studies the Pontryagin Maximum Principle for an infinite dimensional, infinite horizon boundary control problem for linear partial differential equations. The optimal control model has already been studied, both in finite and infinite horizon, with Dynamic Programming methods in a series of papers by the same author, or by Faggian and Gozzi. Necessary and sufficient optimality conditions for open loop controls are established. Moreover, the co-state variable is shown to coincide with the spatial gradient of the value function evaluated along the trajectory of the system, creating a parallel between the Maximum Principle and Dynamic Programming. The abstract model applies, as recalled in one of the first sections, to optimal investment with vintage capital.
Bulgakov, V. K.; Strigunov, V. V.
2009-05-01
The Pontryagin maximum principle is used to prove a theorem concerning optimal control in regional macroeconomics. A boundary value problem for optimal trajectories of the state and adjoint variables is formulated, and optimal curves are analyzed. An algorithm is proposed for solving the boundary value problem of optimal control. The performance of the algorithm is demonstrated by computing an optimal control and the corresponding optimal trajectories.
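The two-point boundary value problem that the Pontryagin maximum principle produces can be solved by a simple shooting method. The toy linear-quadratic problem below — minimize (1/2)∫(x² + u²)dt with ẋ = u, x(0) = 1 and free endpoint, whose optimality system is ẋ = −p, ṗ = −x with p(T) = 0 — is an illustrative stand-in for the macroeconomic model, not the paper's algorithm:

```python
import math

# Shooting method: integrate the optimality system forward from a guessed
# initial costate p(0), then bisect so the terminal condition p(T) = 0 holds.
# The exact answer for this toy problem is p(0) = tanh(T).

def terminal_costate(p0, T=1.0, n=2000):
    x, p, dt = 1.0, p0, T / n
    for _ in range(n):                # midpoint (RK2) integration
        kx1, kp1 = -p, -x
        kx2 = -(p + 0.5 * dt * kp1)
        kp2 = -(x + 0.5 * dt * kx1)
        x += dt * kx2
        p += dt * kp2
    return p                          # should vanish at the optimum

lo, hi = 0.0, 2.0                     # bisect on the unknown p(0)
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if terminal_costate(lo) * terminal_costate(mid) <= 0:
        hi = mid
    else:
        lo = mid
p0 = 0.5 * (lo + hi)                  # converges to tanh(1) ~ 0.7616
```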
OPTIMAL FEED STRATEGY FOR FED-BATCH GLYCEROL FERMENTATION DETERMINED BY MAXIMUM PRINCIPLE
Anonymous
2000-01-01
1 Introduction. Glycerol fed-batch fermentation is attractive for commercial application since it can control the glucose concentration by changing the feed rate and achieve a high glycerol yield; it is therefore essential to develop an optimal glucose feed strategy. For most fed-batch fermentations, optimization of the feed rate has been based on Pontryagin's maximum principle [1]. Since the feed rate term appears linearly in the Hamiltonian, the optimal feed rate profile usually consists of bang-bang intervals and singular ...
Hydrodynamic equations for electrons in graphene obtained from the maximum entropy principle
Barletti, Luigi, E-mail: luigi.barletti@unifi.it [Dipartimento di Matematica e Informatica “Ulisse Dini”, Università degli Studi di Firenze, Viale Morgagni 67/A, 50134 Firenze (Italy)
2014-08-15
The maximum entropy principle is applied to the formal derivation of isothermal, Euler-like equations for semiclassical fermions (electrons and holes) in graphene. After proving general mathematical properties of the equations so obtained, their asymptotic form corresponding to significant physical regimes is investigated. In particular, the diffusive regime, the Maxwell-Boltzmann regime (high temperature), the collimation regime and the degenerate gas limit (vanishing temperature) are considered.
A maximum principle for smooth optimal impulsive control problems with multipoint state constraints
Dykhta, V. A.; Samsonyuk, O. N.
2009-06-01
A nonlinear optimal impulsive control problem with trajectories of bounded variation subject to intermediate state constraints at a finite number of nonfixed instants of time is considered. Features of this problem are discussed from the viewpoint of the extension of the classical optimal control problem with the corresponding state constraints. A necessary optimality condition is formulated in the form of a smooth maximum principle; thorough comments are given, a short proof is presented, and examples are discussed.
MAXIMUM PRINCIPLE FOR THE OPTIMAL CONTROL OF AN ABLATION-TRANSPIRATION COOLING SYSTEM
SUN Bing; GUO Baozhu
2005-01-01
This paper is concerned with an optimal control problem for an ablation-transpiration cooling control system with a Stefan-Signorini boundary condition. The existence of a weak solution of the system is considered. The Dubovitskii and Milyutin approach is adopted in the investigation of Pontryagin's maximum principle for the system. The necessary optimality condition is presented for the problem with fixed final horizon and phase constraints.
Sob'yanin, Denis Nikolaevich
2012-06-01
A principle of hierarchical entropy maximization is proposed for generalized superstatistical systems, which are characterized by the existence of three levels of dynamics. If a generalized superstatistical system comprises a set of superstatistical subsystems, each made up of a set of cells, then the Boltzmann-Gibbs-Shannon entropy should be maximized first for each cell, second for each subsystem, and finally for the whole system. Hierarchical entropy maximization naturally reflects the sufficient time-scale separation between different dynamical levels and allows one to find the distribution of both the intensive parameter and the control parameter for the corresponding superstatistics. The hierarchical maximum entropy principle is applied to fluctuations of the photon Bose-Einstein condensate in a dye microcavity. This principle provides an alternative to the master equation approach recently applied to this problem. The possibility of constructing generalized superstatistics based on a statistics different from the Boltzmann-Gibbs statistics is pointed out.
Predicting the Outcome of NBA Playoffs Based on the Maximum Entropy Principle
Ge Cheng
2016-12-01
Full Text Available Predicting the outcome of National Basketball Association (NBA) matches poses a challenging problem of interest to the research community as well as the general public. In this article, we formalize the problem of predicting NBA game results as a classification problem and apply the principle of Maximum Entropy to construct an NBA Maximum Entropy (NBAME) model that fits discrete statistics for NBA games, and then predict the outcomes of NBA playoffs using the model. Our results reveal that the model is able to predict the winning team with 74.4% accuracy, outperforming other classical machine learning algorithms that could only achieve a maximum prediction accuracy of 70.6% in our experiments.
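A conditional maximum-entropy classifier with a binary outcome is equivalent to logistic regression, which gives a minimal sketch of the modelling idea. The two synthetic "team statistic" features, the weights, and the training loop below are illustrative assumptions, not the NBAME model or its data:

```python
import numpy as np

# Maximum-entropy (logistic) classification on synthetic two-feature data,
# fit by plain gradient ascent on the log-likelihood.

rng = np.random.default_rng(0)
n = 400
X = rng.normal(size=(n, 2))                 # e.g. standardized team stats
w_true = np.array([1.5, -2.0])
y = (X @ w_true + rng.normal(scale=0.5, size=n) > 0).astype(float)

w = np.zeros(2)
for _ in range(500):                        # maximize the log-likelihood
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    w += 0.1 * X.T @ (y - p) / n            # likelihood gradient step

acc = float(np.mean(((X @ w) > 0) == (y > 0.5)))
```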
Effective medium theory principles and applications
Choy, Tuck C
2015-01-01
Effective medium theory dates back to the early days of the theory of electricity. Faraday in 1837 proposed one of the earliest models for a composite metal-insulator dielectric, and around 1870 Maxwell and later Garnett (1904) developed models to describe a composite or mixed material medium. The subject has been developed considerably since then, and while the results are useful for predicting materials performance, the theory can also be used in a wide range of problems in physics and materials engineering. This book develops the topic of effective medium theory by bringing together the essentials of both the static and the dynamical theory. Electromagnetic systems are thoroughly dealt with, as well as related areas such as the CPA theory of alloys, liquids, density functional theory etc., with applications to ultrasonics, hydrodynamics, superconductors, porous media and others, where the unifying aspects of the effective medium concept are emphasized. In this new second edition two further chapters have been...
Self-assembled wiggling nano-structures and the principle of maximum entropy production.
Belkin, A; Hubler, A; Bezryadin, A
2015-02-09
While the behavior of equilibrium systems is well understood, the evolution of nonequilibrium ones is much less clear. Yet many researchers have suggested that the principle of maximum entropy production is of key importance in complex systems away from equilibrium. Here, we present a quantitative study of large ensembles of carbon nanotubes suspended in a non-conducting, non-polar fluid subject to a strong electric field. Being driven out of equilibrium, the suspension spontaneously organizes into an electrically conducting state under a wide range of parameters. Such self-assembly allows the Joule heating, and therefore the entropy production in the fluid, to be maximized. Curiously, we find that the emerging self-assembled structures can start to wiggle. The wiggling takes place only until the entropy production in the suspension reaches its maximum, at which time the wiggling stops and the structure becomes quasi-stable. Thus, we provide strong evidence that the maximum entropy production principle plays an essential role in the evolution of self-organizing systems far from equilibrium.
Surface Elevation Distribution of Sea Waves Based on the Maximum Entropy Principle
戴德君; 王伟; 钱成春; 孙孚
2001-01-01
A probability density function of surface elevation is obtained through improvement of the method introduced byCieslikiewicz who employed the maximum entropy principle to investigate the surface elevation distribution. The densityfunction can be easily extended to higher order according to demand and is non-negative everywhere, satisfying the basicbehavior of the probability. Moreover because the distribution is derived without any assumption about sea waves, it isfound from comparison with several accepted distributions that the new form of distribution can be applied in a widerrange of wave conditions. In addition, the density function can be used to fit some observed distributions of surface verti-cal acceleration although something remains unsolved.
A mixed relaxed singular maximum principle for linear SDEs with random coefficients
Andersson, Daniel
2008-01-01
We study singular stochastic control of a two dimensional stochastic differential equation, where the first component is linear with random and unbounded coefficients. We derive existence of an optimal relaxed control and necessary conditions for optimality in the form of a mixed relaxed-singular maximum principle in a global form. A motivating example is given in the form of an optimal investment and consumption problem with transaction costs, where we consider a portfolio with a continuum of bonds and where the portfolio weights are modeled as measure-valued processes on the set of times to maturity.
A maximum principle for the mutation--selection equilibrium of nucleotide sequences
Garske, Tini; Grimm, Uwe
2004-01-01
We study the equilibrium behaviour of a deterministic four-state mutation--selection model as a model for the evolution of a population of nucleotide sequences. The mutation model is the Kimura 3ST mutation scheme, and selection is assumed to be permutation invariant. Considering the evolution process both forward and backward in time, we use the ancestral distribution as the stationary state of the backward process to derive an expression for the mutational loss (as the difference between ancestral and population mean fitness), and we prove a maximum principle that determines the population mean fitness in mutation--selection balance.
Quality Evaluation and Its Application to Surface Water Ecosystem Based on Maximum Flux Principle
刘年磊; 毛国柱; 赵林
2010-01-01
Based on the maximum flux principle (MFP), a water quality evaluation model for surface water ecosystems is presented by using a self-organization map (SOM) neural network simulation algorithm, from the aspect of systematic structural evolution. This evaluation model is applied to the case of the surface water ecosystem in Xindu District of Chengdu City in China. The values reflecting the water quality of five cross-sections of the system at different developing stages are obtained, with stable values of 1.438, 2.952, 1.86...
Guermond, Jean-Luc
2014-01-01
© 2014 Society for Industrial and Applied Mathematics. This paper proposes an explicit, (at least) second-order, maximum principle satisfying, Lagrange finite element method for solving nonlinear scalar conservation equations. The technique is based on a new viscous bilinear form introduced in Guermond and Nazarov [Comput. Methods Appl. Mech. Engrg., 272 (2014), pp. 198-213], a high-order entropy viscosity method, and the Boris-Book-Zalesak flux correction technique. The algorithm works for arbitrary meshes in any space dimension and for all Lipschitz fluxes. The formal second-order accuracy of the method and its convergence properties are tested on a series of linear and nonlinear benchmark problems.
A maximum-principle preserving finite element method for scalar conservation equations
Guermond, Jean-Luc
2014-04-01
This paper introduces a first-order viscosity method for the explicit approximation of scalar conservation equations with Lipschitz fluxes using continuous finite elements on arbitrary grids in any space dimension. Provided the lumped mass matrix is positive definite, the method is shown to satisfy the local maximum principle under a usual CFL condition. The method is independent of the cell type; for instance, the mesh can be a combination of tetrahedra, hexahedra, and prisms in three space dimensions. © 2014 Elsevier B.V.
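The convex-combination structure behind such local maximum principles is easiest to see in a 1-D finite-difference stand-in (not the finite element method of the abstract): under the CFL condition c = a·Δt/Δx ∈ (0, 1], the first-order upwind update is a convex combination of neighboring values, so no new extrema can be created:

```python
import numpy as np

# First-order upwind step for u_t + a u_x = 0 on a periodic grid:
# u_i <- (1 - c) u_i + c u_{i-1}, a convex combination whenever 0 < c <= 1,
# so the solution stays within its initial bounds (discrete maximum principle).

n, c = 200, 0.8                       # c is the CFL number a*dt/dx
x = np.arange(n) / n
u = np.where((x > 0.3) & (x < 0.6), 1.0, 0.0)   # square pulse, values in [0, 1]

for _ in range(100):
    u = (1 - c) * u + c * np.roll(u, 1)         # periodic upwind step

overshoot = bool(u.max() > 1.0 + 1e-12 or u.min() < -1e-12)
```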
The Maximum Principle of Pontryagin in Control of Twolegged Robot Based on Human Walking System
Żur K.K.
2014-05-01
Full Text Available In the paper a hypothesis about the state equations of human gait is presented. The instantaneous normalized power developed by human muscles at the particular joints of a leg is the control vector in the state equations of the human walking system. The maximum principle of Pontryagin is applied to the analysis of the dynamic human knee joint. The discrete Hamilton function of the knee joint is similar to a discrete square function of the normalized power developed by muscles at the knee joint. The results satisfy the optimality conditions and could be applied in the control of exoskeletons and DAR-type robots.
Dynamic Optimization of a Polymer Flooding Process Based on Implicit Discrete Maximum Principle
Yang Lei
2012-01-01
Full Text Available Polymer flooding is one of the most important technologies for enhanced oil recovery (EOR). In this paper, an optimal control model of distributed parameter systems (DPSs) for polymer injection strategies is established, which takes the maximization of profit as the performance index, the fluid flow equations of polymer flooding as the governing equations, and limits on polymer concentration and injection amount as inequality constraints. The optimal control model is discretized by a fully implicit finite-difference method. To cope with the discrete optimal control problem (OCP), the necessary conditions for optimality are obtained through application of the calculus of variations and Pontryagin's discrete maximum principle. A modified gradient method with a new adjoint construction is proposed for the computation of optimal injection strategies. The numerical results of an example illustrate the effectiveness of the proposed method.
String theory, scale relativity and the generalized uncertainty principle
Castro, C
1995-01-01
An extension/modification of the stringy Heisenberg uncertainty principle is derived within the framework of the theory of Special Scale-Relativity proposed by Nottale. Based on the fractal structure of two-dimensional Quantum Gravity, which has attracted considerable interest recently, we conjecture that the underlying fundamental principle behind String theory should be based on an extension of Scale Relativity where both dynamics and scales are incorporated on the same footing.
Anthropic-principle arguments against steady-state cosmological theories
Tipler, F.J. (Tulane Univ., New Orleans, LA (USA))
1982-04-01
Steady-state theories are very difficult to rule out on observational grounds, particularly if they are adjusted to contain a three-degree isotropic thermal-background radiation. However, anthropic-principle arguments can be used to rule out virtually any cosmological theory which has the universe stationary in the large. For example, anthropic considerations show that the perfect cosmological principle is self-contradictory.
Applications of the principle of maximum entropy: from physics to ecology.
Banavar, Jayanth R; Maritan, Amos; Volkov, Igor
2010-02-17
There are numerous situations in physics and other disciplines which can be described at different levels of detail in terms of probability distributions. Such descriptions arise either intrinsically as in quantum mechanics, or because of the vast amount of details necessary for a complete description as, for example, in Brownian motion and in many-body systems. We show that an application of the principle of maximum entropy for estimating the underlying probability distribution can depend on the variables used for describing the system. The choice of characterization of the system carries with it implicit assumptions about fundamental attributes such as whether the system is classical or quantum mechanical or equivalently whether the individuals are distinguishable or indistinguishable. We show that the correct procedure entails the maximization of the relative entropy subject to known constraints and, additionally, requires knowledge of the behavior of the system in the absence of these constraints. We present an application of the principle of maximum entropy to understanding species diversity in ecology and introduce a new statistical ensemble corresponding to the distribution of a variable population of individuals into a set of species not defined a priori.
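The point about constraints and prior-dependent inference can be made concrete: maximizing relative entropy with respect to a prior q under a mean constraint yields the exponentially tilted distribution p_i ∝ q_i·exp(λx_i), and the same constraint applied to two different priors (the "behavior of the system in the absence of constraints") yields two different inferred distributions. The die example below is purely illustrative:

```python
import numpy as np

# Exponential tilting: solve for the multiplier lam by bisection so that
# the tilted distribution p_i ~ q_i * exp(lam * x_i) has the target mean.

def tilt(q, x, mean_target):
    lo, hi = -20.0, 20.0              # bracket for lam; mean is monotone in lam
    for _ in range(200):
        lam = 0.5 * (lo + hi)
        p = q * np.exp(lam * x)
        p /= p.sum()
        if p @ x < mean_target:
            lo = lam
        else:
            hi = lam
    return p

x = np.arange(1.0, 7.0)               # faces of a die
uniform = np.ones(6) / 6
skewed = np.array([0.3, 0.25, 0.2, 0.1, 0.1, 0.05])
p1 = tilt(uniform, x, 4.5)            # Jaynes's classic die solution
p2 = tilt(skewed, x, 4.5)             # same constraint, different prior
```

Both p1 and p2 reproduce the constrained mean exactly, yet they differ appreciably — the prior dependence the abstract emphasizes.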
First-principles theory of flexoelectricity
Stengel, Massimiliano; Vanderbilt, David
2015-01-01
In this Chapter we provide an overview of the current first-principles perspective on flexoelectric effects in crystalline solids. We base our theoretical formalism on the long-wave expansion of the electrical response of a crystal to an acoustic phonon perturbation. In particular, we recover the known expression for the piezoelectric tensor from the response at first order in wavevector ${\\bf q}$, and then obtain the flexoelectric tensor by extending the formalism to second order in $\\bf q$....
Principle of maximum Fisher information from Hardy's axioms applied to statistical systems.
Frieden, B Roy; Gatenby, Robert A
2013-10-01
Consider a finite-sized, multidimensional system in parameter state a. The system is either at statistical equilibrium or general nonequilibrium, and may obey either classical or quantum physics. L. Hardy's mathematical axioms provide a basis for the physics obeyed by any such system. One axiom is that the number N of distinguishable states a in the system obeys N=max. This assumes that N is known as deterministic prior knowledge. However, most observed systems suffer statistical fluctuations, for which N is therefore only known approximately. Then what happens if the scope of the axiom N=max is extended to include such observed systems? It is found that the state a of the system must obey a principle of maximum Fisher information, I=I(max). This is important because many physical laws have been derived, assuming as a working hypothesis that I=I(max). These derivations include uses of the principle of extreme physical information (EPI). Examples of such derivations were of the De Broglie wave hypothesis, quantum wave equations, Maxwell's equations, new laws of biology (e.g., of Coulomb force-directed cell development and of in situ cancer growth), and new laws of economic fluctuation and investment. That the principle I=I(max) itself derives from suitably extended Hardy axioms thereby eliminates its need to be assumed in these derivations. Thus, uses of I=I(max) and EPI express physics at its most fundamental level, its axiomatic basis in math.
Ivashin V.A.
2013-12-01
Full Text Available Aims. The study presents the results of experimental research to verify the superposition principle for maximum permissible levels (MPL) of single exposure of the eyes to multicolor laser radiation. This principle of the independence of the effects of radiation at each wavelength (the imposing principle) was established and generalized to a wide range of exposure conditions. As an analysis of the literature shows, experimental verification of this approach with respect to the impact of laser radiation on the tissue of the fundus of the eye had not previously been carried out. Material and methods. An experimental laser generating radiation with wavelengths λ1 = 0.532 μm, λ2 = 0.556 to 0.562 μm and λ3 = 0.619 to 0.621 μm was used. Experiments were carried out on the eyes of rabbits with an evenly pigmented fundus. Results. Comparison of the processed experimental data with the calculated data shows that these levels are close in their parameters. Conclusions. For the first time in the Russian Federation, experimental studies on the validity of the superposition principle for multicolor laser radiation acting on the organ of vision have been performed. In view of the objective coincidence of the experimental data with the calculated data, we can conclude that the mathematical formulas work.
Test the Principle of Maximum Entropy in Constant Sum 2x2 Game: Evidence in Experimental Economics
Xu, Bin; Wang, Zhijian; Zhang, Jianbo
2011-01-01
Entropy serves as a central observable indicating uncertainty in many chemical, thermodynamical, biological and ecological systems, and the principle of maximum entropy (MaxEnt) is widely supported in natural science. Recently, entropy has been employed to describe social systems in which human subjects interact with each other, but the principle of maximum entropy has never been tested empirically in this field. Using laboratory experimental data, we test the uncertainty of strategy type in various competing environments with a two-person constant sum $2 \times 2$ game. Empirical evidence shows that, in this competing game environment, the outcome of human decision-making obeys the principle of maximum entropy.
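A minimal sketch of the entropy diagnostic: estimate the Shannon entropy of observed strategy choices and compare it with the maximum-entropy benchmark ln 2 for a two-strategy game. The simulated 50/50 "subjects" below are an illustrative assumption, not the laboratory data:

```python
import numpy as np

# Shannon entropy of an empirical choice distribution; for two strategies
# the MaxEnt benchmark is ln(2), attained by uniform mixing.

def shannon_entropy(counts):
    p = counts / counts.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

rng = np.random.default_rng(1)
choices = rng.integers(0, 2, size=10000)        # near-equilibrium mixing
counts = np.bincount(choices, minlength=2).astype(float)
h = shannon_entropy(counts)
h_max = float(np.log(2.0))                      # MaxEnt benchmark
```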
Jiang Zhu
2014-01-01
Full Text Available Some delta-nabla type maximum principles for second-order dynamic equations on time scales are proved. By using these maximum principles, the uniqueness theorems of the solutions, the approximation theorems of the solutions, the existence theorem, and construction techniques of the lower and upper solutions for second-order linear and nonlinear initial value problems and boundary value problems on time scales are proved; the oscillation of second-order mixed delta-nabla differential equations is discussed; and some maximum principles for second-order mixed forward and backward difference dynamic systems are proved.
Cristian Enache
2006-06-01
Full Text Available For a class of nonlinear elliptic boundary value problems in divergence form, we construct some general elliptic inequalities for appropriate combinations of u(x) and |∇u|², where u(x) are the solutions of our problems. From these inequalities we derive, using Hopf's maximum principles, some maximum principles for the appropriate combinations of u(x) and |∇u|², and we list a few examples of problems to which these maximum principles may be applied.
A principle of relativity for quantum theory
Zaopo, Marco
2012-01-01
In non-relativistic physics it is assumed that both the chronological ordering and the causal ordering of events (telling whether there exists a causal relationship between two events or not) are absolute, observer-independent properties. In relativistic physics, on the other hand, chronological ordering depends on the observer who assigns space-time coordinates to physical events, and only causal ordering is regarded as an observer-independent property. In this paper it is shown that quantum theory can be considered as a physical theory in which the causal (as well as chronological) ordering of probabilistic events happening in experiments may be regarded as an observer-dependent property.
The Main General Didactical Principles of Glotoeducological Theory and Practice
Regina Juškienė
2011-04-01
Full Text Available As a pedagogical discipline, glotoeducology is related to didactics, i.e. the theory of teaching. Three concepts of didactics are distinguished: teaching, teaching principles, and types of teaching activity. The authors limit themselves in this paper to one of them, namely the teaching principles that govern the use of teaching regularities in the course of implementing the objectives of teaching and education. The article also analyzes the interaction of linguodidactical principles with general didactical principles and their impact on the teaching of foreign languages.
Application of the Principle of Maximum Conformality to Top-Pair Production
Brodsky, Stanley J.; /SLAC; Wu, Xing-Gang; /SLAC /Chongqing U.
2013-05-13
A major contribution to the uncertainty of finite-order perturbative QCD predictions is the perceived ambiguity in setting the renormalization scale μ_r. For example, by using the conventional way of setting μ_r ∈ [m_t/2, 2m_t], one obtains the total tt̄ production cross-section σ_tt̄ with the uncertainty Δσ_tt̄/σ_tt̄ ≈ (+3%/−4%) at the Tevatron and LHC even at the present NNLO level. The Principle of Maximum Conformality (PMC) eliminates the renormalization scale ambiguity in precision tests of Abelian QED and non-Abelian QCD theories. By using the PMC, all nonconformal {β_i}-terms in the perturbative expansion series are summed into the running coupling constant, and the resulting scale-fixed predictions are independent of the renormalization scheme. The correct scale displacement between the arguments of different renormalization schemes is automatically set, and the number of active flavors n_f in the {β_i}-function is correctly determined. The PMC is consistent with the renormalization group property that a physical result is independent of the renormalization scheme and of the choice of the initial renormalization scale μ_r^init. The PMC scale μ_r^PMC is unambiguous at finite order. Any residual dependence on μ_r^init for a finite-order calculation will be highly suppressed, since the unknown higher-order {β_i}-terms will be absorbed into the PMC scales of higher-order perturbative terms. We find that such renormalization group invariance can be satisfied to high accuracy for σ_tt̄ at the NNLO level. In this paper we apply PMC scale-setting to predict the tt̄ cross-section σ_tt̄ at the Tevatron and LHC colliders. It is found that σ_tt̄ remains almost unchanged by varying μ_r^init.
Quantum theory from first principles an informational approach
D'Ariano, Giacomo Mauro; Perinotti, Paolo
2017-01-01
Quantum theory is the soul of theoretical physics. It is not just a theory of specific physical systems, but rather a new framework with universal applicability. This book shows how we can reconstruct the theory from six information-theoretical principles, by rebuilding the quantum rules from the bottom up. Step by step, the reader will learn how to master the counterintuitive aspects of the quantum world, and how to efficiently reconstruct quantum information protocols from first principles. Using intuitive graphical notation to represent equations, and with shorter and more efficient derivations, the theory can be understood and assimilated with exceptional ease. Offering a radically new perspective on the field, the book contains an efficient course of quantum theory and quantum information for undergraduates. The book is aimed at researchers, professionals, and students in physics, computer science and philosophy, as well as the curious outsider seeking a deeper understanding of the theory.
Shen, Hua
2016-10-19
A maximum-principle-satisfying space-time conservation element and solution element (CE/SE) scheme is constructed to solve a reduced five-equation model coupled with the stiffened equation of state for compressible multifluids. We first derive a sufficient condition for CE/SE schemes to satisfy the maximum principle when solving a general conservation law. We then introduce a slope limiter to enforce the sufficient condition, applicable to both central and upwind CE/SE schemes. Finally, we implement the upwind maximum-principle-satisfying CE/SE scheme to solve the volume-fraction-based five-equation model for compressible multifluids. Several numerical examples are carried out to carefully examine the accuracy, efficiency, conservativeness and maximum-principle-satisfying property of the proposed approach.
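The role of the slope limiter can be illustrated with a generic 1-D minmod-limited reconstruction (not the CE/SE discretization itself): limited slopes keep reconstructed interface values inside the range of neighboring cell averages, which is the kind of sufficient condition a maximum-principle-satisfying scheme enforces:

```python
import numpy as np

# Minmod limiter: take the smaller of the two one-sided slopes when they
# agree in sign, zero otherwise. The reconstructed interface values then
# stay within the local range of cell averages.

def minmod(a, b):
    same_sign = a * b > 0
    return np.where(same_sign, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

u = np.array([0.0, 0.2, 0.5, 0.9, 1.0, 1.0])    # cell averages
slope = minmod(u[1:-1] - u[:-2], u[2:] - u[1:-1])
left = u[1:-1] - 0.5 * slope                    # reconstructed interface values
right = u[1:-1] + 0.5 * slope

lo = np.minimum(np.minimum(u[:-2], u[1:-1]), u[2:])
hi = np.maximum(np.maximum(u[:-2], u[1:-1]), u[2:])
within = bool(np.all((left >= lo - 1e-12) & (left <= hi + 1e-12)
                     & (right >= lo - 1e-12) & (right <= hi + 1e-12)))
```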
General principles of quantum field theory
Bogolubov, N.N.; Logunov, A.A. (AN SSSR, Moscow (USSR) Moskovskij Gosudarstvennyj Univ., Moscow (USSR)); Oksak, A.I. (Institute for High Energy Physics, Moscow (USSR)); Todorov, I.T. (Bylgarska Akademiya na Naukite, Sofia (Bulgaria) Bulgarian Institute for Nuclear Research and Nuclear Energy, Sofia (Bulgaria))
1990-01-01
This major volume provides an account of general quantum field theory, with an emphasis on model-independent methods. The important aspects of the development of the subject are described in detail and are shown to have promising links with many branches of modern mathematics and theoretical physics, such as random fields (probability), statistical physics, and elementary particles. The material is presented in a thorough, systematic way and the mathematical methods of quantum field theory are also given. The text is self-contained and contains numerous exercises. Topics of independent interest are given in appendices. The book also contains a large bibliography. (author). 1181 refs. Includes index of notation and subject index.
Quantum Field Theory from First Principles
Esposito, Giampiero
2000-01-01
When quantum fields are studied on manifolds with boundary, the corresponding one-loop quantum theory for bosonic gauge fields with linear covariant gauges needs the assignment of suitable boundary conditions for elliptic differential operators of Laplace type. There are however deep reasons to modify such a scheme and allow for pseudo-differential boundary-value problems. When the boundary operator is allowed to be pseudo-differential while remaining a projector, the conditions on its kernel...
Descent Principle in Modular Galois Theory
Shreeram S Abhyankar; Pradipkumar H Keskar
2001-05-01
We propound a descent principle by which previously constructed equations over GF()() may be deformed to have incarnations over GF()() without changing their Galois groups. Currently this is achieved by starting with a vectorial (= additive) -polynomial of -degree with Galois group GL(, ) and then, under suitable conditions, enlarging its Galois group to GL(, ) by forming its generalized iterate relative to an auxiliary irreducible polynomial of degree . Elsewhere this was proved under certain conditions by using the classification of finite simple groups, and under some other conditions by using Kantor's classification of linear groups containing a Singer cycle. Now under different conditions we prove it by using Cameron-Kantor's classification of two-transitive linear groups.
Shao Dian-guo
2016-01-01
In this paper, we derive the stochastic maximum principle for optimal control problems of the forward-backward Markovian regime-switching system. The control system is described by an anticipated forward-backward stochastic pantograph equation and modulated by a continuous-time finite-state Markov chain. By virtue of the classical variational approach, the duality method, and convex analysis, we obtain a stochastic maximum principle for the optimal control.
NONE
2000-07-01
The guide sets out the mathematical definitions and principles involved in the calculation of the equivalent dose and the effective dose, and the instructions concerning the application of the maximum values of these quantities. Further, for monitoring the dose caused by internal radiation, the guide defines the limits derived from annual dose limits (the Annual Limit on Intake and the Derived Air Concentration). Finally, the guide defines the operational quantities to be used in estimating the equivalent dose and the effective dose, and also sets out the definitions of some other quantities and concepts to be used in monitoring radiation exposure. The guide does not include the calculation of patient doses carried out for the purposes of quality assurance.
Source Function Determined from HBT Correlations by the Maximum Entropy Principle
Wu, Yuanfang; Heinz, Ulrich
1996-01-01
We study the reconstruction of the source function in space-time directly from the measured HBT correlation function using the Maximum Entropy Principle. We find that the problem is ill-defined without at least one additional theoretical constraint as input. Using the requirement of a finite source lifetime for the latter we find a new Gaussian parametrization of the source function directly in terms of the measured HBT radius parameters and its lifetime, where the latter is a free parameter which is not directly measurable by HBT. We discuss the implications of our results for the remaining freedom in building source models consistent with a given set of measured HBT radius parameters.
Source Function Determined from Hanbury-Brown/Twiss Correlations by the Maximum Entropy Principle
Wu Yuanfang; Liu Lianshou
2002-01-01
We study the reconstruction of the source function in space-time directly from the measured Hanbury-Brown/Twiss (HBT) correlation function using the maximum entropy principle. We find that the problem is ill-defined without at least one additional theoretical constraint as input. Using the requirement of a finite source lifetime as this constraint, we find a new Gaussian parametrization of the source function directly in terms of the measured HBT radius parameters and its lifetime, where the latter is a free parameter which is not directly measurable by HBT. We discuss the implications of our results for the remaining freedom in building source models consistent with a given set of measured HBT radius parameters.
Brodsky, Stanley J
2012-01-01
The uncertainty in setting the renormalization scale in finite-order perturbative QCD predictions using standard methods substantially reduces the precision of tests of the Standard Model in collider experiments. It is conventional to choose a typical momentum transfer of the process as the renormalization scale and take an arbitrary range to estimate the uncertainty in the QCD prediction. However, predictions using this procedure depend on the choice of renormalization scheme, and moreover, one obtains incorrect results when applied to QED processes. In contrast, if one fixes the renormalization scale using the Principle of Maximum Conformality (PMC), all non-conformal $\{\beta_i\}$-terms in the perturbative expansion series are summed into the running coupling, and one obtains a unique, scale-fixed, scheme-independent prediction at any finite order. The PMC renormalization scale $\mu^{\rm PMC}_R$ and the resulting finite-order PMC prediction are both to high accuracy independent of choice of the initial ren...
The Stampacchia maximum principle for stochastic partial differential equations and applications
Chekroun, Mickaël D.; Park, Eunhee; Temam, Roger
2016-02-01
Stochastic partial differential equations (SPDEs) are considered, linear and nonlinear, for which we establish comparison theorems for the solutions, or positivity results a.e. and a.s., for suitable data. Comparison theorems for SPDEs are available in the literature. The originality of our approach is that it is based on the use of truncations, following the Stampacchia approach to the maximum principle. We believe that our method, which does not rely too much on probability considerations, is simpler than the existing approaches and, to a certain extent, more directly applicable to concrete situations. Among the applications, boundedness results and positivity results are respectively proved for the solutions of a stochastic Boussinesq temperature equation, and of reaction-diffusion equations perturbed by a non-Lipschitz nonlinear noise. Stabilization results for a Chafee-Infante equation perturbed by a nonlinear noise are also derived.
Principle of Maximum Fisher Information from Hardy's Axioms Applied to Statistical Systems
Frieden, B R
2014-01-01
Consider a finite-sized, multidimensional system in parameter state a. The system is either at statistical equilibrium or general non-equilibrium, and may obey either classical or quantum physics. L. Hardy's mathematical axioms provide a basis for the physics obeyed by any such system. One axiom is that the number N of distinguishable states a in the system obeys N=max. This assumes that N is known as deterministic prior knowledge. However, most observed systems suffer statistical fluctuations, for which N is therefore only known approximately. Then what happens if the scope of the axiom N=max is extended to include such observed systems? It is found that the state a of the system must obey a principle of maximum Fisher information, I=I_{max}. This is important because many physical laws have been derived, assuming as a working hypothesis that I=I_{max}. These derivations include uses of the principle of Extreme physical information (EPI). Examples of such derivations were of the De Broglie wave hypothesis, q...
A Stochastic Maximum Principle for Risk-Sensitive Mean-Field Type Control
Djehiche, Boualem
2015-02-24
In this paper we study mean-field type control problems with risk-sensitive performance functionals. We establish a stochastic maximum principle (SMP) for optimal control of stochastic differential equations (SDEs) of mean-field type, in which the drift and the diffusion coefficients as well as the performance functional depend not only on the state and the control but also on the mean of the distribution of the state. Our result extends the risk-sensitive SMP (without mean-field coupling) of Lim and Zhou (2005), derived for feedback (or Markov) type optimal controls, to optimal control problems for non-Markovian dynamics which may be time-inconsistent in the sense that the Bellman optimality principle does not hold. In our approach to the risk-sensitive SMP, the smoothness assumption on the value function imposed in Lim and Zhou (2005) need not be satisfied. For a general action space a Peng-type SMP is derived, specifying the necessary conditions for optimality. Two examples are carried out to illustrate the proposed risk-sensitive mean-field type SMP under linear stochastic dynamics with exponential quadratic cost function. Explicit solutions are given for both mean-field free and mean-field models.
Svyatskiy, Daniil [Los Alamos National Laboratory; Shashkov, Mikhail [Los Alamos National Laboratory; Kuzmin, D [DORTMUND UNIV
2008-01-01
A new approach to the design of constrained finite element approximations to second-order elliptic problems is introduced. This approach guarantees that the finite element solution satisfies the discrete maximum principle (DMP). To enforce these monotonicity constraints, sufficient conditions on the elements of the stiffness matrix are formulated. An algebraic splitting of the stiffness matrix is employed to separate the contributions of diffusive and antidiffusive numerical fluxes, respectively. In order to prevent the formation of spurious undershoots and overshoots, a symmetric slope limiter is designed for the antidiffusive part. The corresponding upper and lower bounds are defined using an estimate of the steepest gradient in terms of the maximum and minimum solution values at surrounding nodes. The recovery of nodal gradients is performed by means of a lumped-mass L2 projection. The proposed slope limiting strategy preserves the consistency of the underlying discrete problem and the structure of the stiffness matrix (symmetry, zero row and column sums). A positivity-preserving defect correction scheme is devised for the nonlinear algebraic system to be solved. Numerical results and a grid convergence study are presented for a number of anisotropic diffusion problems in two space dimensions.
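The discrete maximum principle that the limiter is designed to enforce can be illustrated with a minimal sketch. This hypothetical 1D example (not the paper's anisotropic 2D setting) solves the standard three-point discrete Laplacian with zero source and checks that interior nodal values stay between the Dirichlet boundary values:

```python
import numpy as np

# Standard 1D stiffness matrix (tridiagonal [-1, 2, -1]) on n interior nodes.
n = 50
A = np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)

# Zero source; Dirichlet boundary values enter through the right-hand side.
uL, uR = 1.0, 3.0
rhs = np.zeros(n)
rhs[0] += uL
rhs[-1] += uR

u = np.linalg.solve(A, rhs)
# DMP: with zero source, interior values lie between the boundary values.
print(u.min(), u.max())
```

For this M-matrix the DMP holds automatically; the point of the paper's slope limiter is to restore this property for discretizations (e.g. anisotropic diffusion) whose stiffness matrices violate the sign conditions.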
Modelling and Simulation of Seasonal Rainfall Using the Principle of Maximum Entropy
Jonathan Borwein
2014-02-01
We use the principle of maximum entropy to propose a parsimonious model for the generation of simulated rainfall during the wettest three-month season at a typical location on the east coast of Australia. The model uses a checkerboard copula of maximum entropy to model the joint probability distribution for total seasonal rainfall and a set of two-parameter gamma distributions to model each of the marginal monthly rainfall totals. The model allows us to match the grade correlation coefficients for the checkerboard copula to the observed Spearman rank correlation coefficients for the monthly rainfalls and, hence, provides a model that correctly describes the mean and variance for each of the monthly totals and also for the overall seasonal total. Thus, we avoid the need for a posteriori adjustment of simulated monthly totals in order to correctly simulate the observed seasonal statistics. Detailed results are presented for the modelling and simulation of seasonal rainfall in the town of Kempsey on the mid-north coast of New South Wales. Empirical evidence from extensive simulations is used to validate this application of the model. A similar analysis for Sydney is also described.
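The two ingredients the model needs from data — two-parameter gamma fits for each monthly marginal, and the Spearman rank correlations the checkerboard copula is tuned to reproduce — can be sketched as follows. The synthetic data and parameter values here are purely illustrative, not the Kempsey fits:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical monthly rainfall totals (mm) for a three-month season,
# 200 seasons; real use would substitute observed monthly records.
months = rng.gamma(shape=2.0, scale=60.0, size=(200, 3))

# Fit a two-parameter gamma distribution to each monthly marginal
# (floc=0 pins the location, leaving shape and scale free).
params = [stats.gamma.fit(months[:, j], floc=0) for j in range(3)]

# Spearman rank correlation matrix between the months: these are the
# quantities the maximum-entropy checkerboard copula is matched to.
rho, _ = stats.spearmanr(months)
print(params)
print(rho)
```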
Metz, Johan A Jacob; Staňková, Kateřina; Johansson, Jacob
2016-03-01
This paper should be read as addendum to Dieckmann et al. (J Theor Biol 241:370-389, 2006) and Parvinen et al. (J Math Biol 67: 509-533, 2013). Our goal is, using little more than high-school calculus, to (1) exhibit the form of the canonical equation (CE) of adaptive dynamics for classical life history problems, where the examples in Dieckmann et al. (J Theor Biol 241:370-389, 2006) and Parvinen et al. (J Math Biol 67: 509-533, 2013) are chosen such that they avoid a number of the problems that one gets in this most relevant of applications, (2) derive the fitness gradient occurring in the CE from simple fitness return arguments, (3) show explicitly that setting said fitness gradient equal to zero results in the classical marginal value principle from evolutionary ecology, (4) show that the latter in turn is equivalent to Pontryagin's maximum principle, a well known equivalence that however in the literature is given either ex cathedra or is proven with more advanced tools, (5) connect the classical optimisation arguments of life history theory a little better to real biology (Mendelian populations with separate sexes subject to an environmental feedback loop), (6) make a minor improvement to the form of the CE for the examples in Dieckmann et al. and Parvinen et al.
The free-energy principle: a unified brain theory?
Friston, Karl
2010-02-01
A free-energy principle has been proposed recently that accounts for action, perception and learning. This Review looks at some key brain theories in the biological (for example, neural Darwinism) and physical (for example, information theory and optimal control theory) sciences from the free-energy perspective. Crucially, one key theme runs through each of these theories - optimization. Furthermore, if we look closely at what is optimized, the same quantity keeps emerging, namely value (expected reward, expected utility) or its complement, surprise (prediction error, expected cost). This is the quantity that is optimized under the free-energy principle, which suggests that several global brain theories might be unified within a free-energy framework.
In search of principles for a Theory of Organisms.
Longo, Giuseppe; Montevil, Mael; Sonnenschein, Carlos; Soto, Ana M
2015-12-01
Lacking an operational theory to explain the organization and behaviour of matter in unicellular and multicellular organisms hinders progress in biology. Such a theory should address life cycles from ontogenesis to death. This theory would complement the theory of evolution that addresses phylogenesis, and would posit theoretical extensions to accepted physical principles and default states in order to grasp the living state of matter and define proper biological observables. Thus, we favour adopting the default state implicit in Darwin's theory, namely, cell proliferation with variation plus motility, and a framing principle, namely, life phenomena manifest themselves as non-identical iterations of morphogenetic processes. From this perspective, organisms become a consequence of the inherent variability generated by proliferation, motility and self-organization. Morphogenesis would then be the result of the default state plus physical constraints, like gravity, and those present in living organisms, like muscular tension.
In search of principles for a Theory of Organisms
Giuseppe Longo; Maël Montévil; Carlos Sonnenschein; Ana M Soto
2015-12-01
Lacking an operational theory to explain the organization and behaviour of matter in unicellular and multicellular organisms hinders progress in biology. Such a theory should address life cycles from ontogenesis to death. This theory would complement the theory of evolution that addresses phylogenesis, and would posit theoretical extensions to accepted physical principles and default states in order to grasp the living state of matter and define proper biological observables. Thus, we favour adopting the default state implicit in Darwin's theory, namely, cell proliferation with variation plus motility, and a framing principle, namely, life phenomena manifest themselves as non-identical iterations of morphogenetic processes. From this perspective, organisms become a consequence of the inherent variability generated by proliferation, motility and self-organization. Morphogenesis would then be the result of the default state plus physical constraints, like gravity, and those present in living organisms, like muscular tension.
Groups, information theory, and Einstein's likelihood principle
Sicuro, Gabriele; Tempesta, Piergiulio
2016-04-01
We propose a unifying picture where the notion of generalized entropy is related to information theory by means of a group-theoretical approach. The group structure comes from the requirement that an entropy be well defined with respect to the composition of independent systems, in the context of a recently proposed generalization of the Shannon-Khinchin axioms. We associate to each member of a large class of entropies a generalized information measure, satisfying the additivity property on a set of independent systems as a consequence of the underlying group law. At the same time, we also show that Einstein's likelihood function naturally emerges as a byproduct of our informational interpretation of (generally nonadditive) entropies. These results confirm the adequacy of composable entropies both in physical and social science contexts.
The physics of forgetting: Landauer's erasure principle and information theory
Plenio, MB; Vitelli, V
2001-01-01
This article discusses the concept of information and its intimate relationship with physics. After an introduction of all the necessary quantum mechanical and information theoretical concepts we analyze Landauer's principle that states that the erasure of information is inevitably accompanied by the generation of heat. We employ this principle to rederive a number of results in classical and quantum information theory whose rigorous mathematical derivations are difficult. This demonstrates t...
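Landauer's bound is easy to evaluate numerically. The sketch below simply computes the minimum heat k_B·T·ln 2 dissipated when erasing one bit, at an assumed room temperature of 300 K (the temperature is illustrative):

```python
import numpy as np
import scipy.constants as const

# Landauer's principle: erasing one bit at temperature T dissipates
# at least k_B * T * ln(2) of heat, regardless of implementation.
T = 300.0  # kelvin (assumed room temperature)
E_min = const.k * T * np.log(2)  # joules per erased bit
print(E_min)
```

At 300 K this is of order 10^-21 J, far below the switching energies of present-day electronics, which is why the bound is of conceptual rather than engineering importance.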
LUO Xiao-hui; LI Yong-le; LUO Xin
2005-01-01
The constitutive behaviour and large deformation of soil mass are basic issues in analysing its deformation characteristics. Following the description method of finite deformation, the large-deformation consolidation equations of soil mass were established and their variational principles were rigorously verified. The region-wise variational principles of consolidation theory were deduced using the sub-structure continuity conditions between regions. Using the Lagrange multiplier method, generalized region-wise variational principles for large-deformation consolidation under unconstrained conditions were established and verified.
Wang, Sheng-Quan; Brodsky, Stanley J; Mojaza, Matin
2016-01-01
We present improved pQCD predictions for Higgs boson hadroproduction at the Large Hadron Collider (LHC) by applying the Principle of Maximum Conformality (PMC), a procedure which resums the pQCD series using the renormalization group (RG), thereby eliminating the dependence of the predictions on the choice of the renormalization scheme while minimizing sensitivity to the initial choice of the renormalization scale. In previous pQCD predictions for Higgs boson hadroproduction, it has been conventional to assume that the renormalization scale $\mu_r$ of the QCD coupling $\alpha_s(\mu_r)$ is the Higgs mass, and then to vary this choice over the range $1/2\, m_H < \mu_r < 2\, m_H$ in order to estimate the theory uncertainty. However, this error estimate is only sensitive to the non-conformal $\beta$ terms in the pQCD series, and thus it fails to correctly estimate the theory uncertainty in cases where the pQCD series has large higher order contributions, as is the case for Higgs boson hadroproduction. Furthermor...
Ionization and maximum energy of nuclei in shock acceleration theory
Morlino, Giovanni
2011-01-01
We study the acceleration of heavy nuclei at SNR shocks when the process of ionization is taken into account. Heavy atoms ($Z_N >$ few) in the interstellar medium which start the diffusive shock acceleration (DSA) are never fully ionized at the moment of injection. The ionization occurs during the acceleration process, when atoms already move relativistically. For typical environment around SNRs the photo-ionization due to the background galactic radiation dominates over Coulomb collisions. The main consequence of ionization is the reduction of the maximum energy which ions can achieve with respect to the standard result of the DSA. In fact the photo-ionization has a timescale comparable to the beginning of the Sedov-Taylor phase, hence the maximum energy is no more proportional to the nuclear charge, as predicted by standard DSA, but rather to the effective ions' charge during the acceleration process, which is smaller than the total nuclear charge $Z_N$. This result can have a direct consequence in the pred...
XU Fu-min; XUE Hong-chao
2004-01-01
The Maximum Entropy Principle (MEP) method is elaborated, and the corresponding probability density evaluation method for random fluctuation systems is introduced; the goal of the article is to find the best fitting method for the wave climate statistical distribution. For the first time, a new maximum entropy probability distribution (MEP distribution) expression is deduced from the second-order moment of a random process. Unlike previous fitting methods, the MEP distribution can describe the probability distribution of any random fluctuation system conveniently and reasonably. If the moments of the random signal are limited to second order, that is, the ratio of the root-mean-square value to the mean value of the random variable is obtained from the random sample, the corresponding MEP distribution can be computed from the deduced expression. The concept of wave climate is introduced, and the MEP distribution is applied to fit the probability density distributions of the significant wave height and spectral peak period. Taking the Gulf of Mexico as an example, three stations at different locations, depths and wind-wave strengths are chosen in the half-closed gulf, and the significant wave height and spectral peak period distributions at each station are fitted with the MEP distribution, the Weibull distribution and the Log-normal distribution. The fitted results are compared with field observations, which show that the MEP distribution gives the best fit and the Weibull distribution the worst for the significant wave height and spectral peak period distributions at different locations, water depths and wind-wave strengths in the Gulf. This demonstrates the feasibility and reasonability of fitting wave climate statistical distributions with the deduced MEP distributions, and furthermore proves the great potential of the MEP method to
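A maximum-entropy density constrained by the first two moments on [0, ∞) has the exponential-quadratic form f(x) ∝ exp(−λ₁x − λ₂x²). The sketch below solves for the two Lagrange multipliers numerically; the moment values are made up for illustration, not taken from the Gulf of Mexico data:

```python
import numpy as np
from scipy.optimize import fsolve
from scipy.integrate import quad

# Illustrative target moments for a nonnegative variable such as
# significant wave height (assumed values, not field observations).
m1, m2 = 2.0, 5.5  # mean and mean-square

def residuals(lam):
    l1, l2 = lam
    # k-th moment of the unnormalized density exp(-l1*x - l2*x^2) on [0, inf).
    mom = lambda k: quad(lambda x: x**k * np.exp(-l1 * x - l2 * x**2), 0, np.inf)[0]
    z = mom(0)
    return [mom(1) / z - m1, mom(2) / z - m2]

# Start from the untruncated-Gaussian guess l2 = 1/(2*var), l1 = -m1/var.
var = m2 - m1**2
lam = fsolve(residuals, [-m1 / var, 1.0 / (2.0 * var)])
print(lam)
```

With both moments matched, the resulting density is a truncated Gaussian; constraining only the first moment would instead give the exponential distribution, which is the general pattern behind MEP fits.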
Special relativity and theory of gravity via maximum symmetry and localization
2008-01-01
Just as the Euclid, Riemann and Lobachevski geometries are on an almost equal footing, so, based on the principle of relativity of maximum symmetry proposed by Professor Lu Qikeng and the postulate of invariant universal constants c and R, the de Sitter/anti-de Sitter (dS/AdS) special relativity on a dS/AdS space of radius R can be set up on an almost equal footing with Einstein's special relativity on Minkowski space, which is recovered in the limit R → ∞. Thus the dS space is coin-like: on one side, a law of inertia in the Beltrami atlas with Beltrami time simultaneity for the principle of relativity; on the other, proper-time simultaneity and a Robertson-Walker-like dS space with entropy and an accelerated expanding S^3 fitting the cosmological principle. If our universe is asymptotic to the Robertson-Walker-like dS space with R ≈ (3/Λ)^1/2, it should be slightly closed at O(Λ) with entropy bound S ≈ 3πc^3 k_B/(ΛGℏ). Conversely, via its asymptotic behavior, it can fix Beltrami inertial frames without 'an argument in a circle' and acts as the origin of inertia. There is a triality of conformal extensions of the three kinds of special relativity and their null physics on the projective boundary of a 5-d AdS space, a null cone modulo projective equivalence [N] ≅ ∂(AdS_5). Thus there should be a dS space on the boundary of S^5 × AdS_5 as a vacuum of supergravity. In the light of Einstein's 'Galilean regions', gravity should be based on the localized principle of relativity of full maximum symmetry with a gauge-like dynamics; this may lead to a theory of gravity with the corresponding local symmetry. A simple model of dS gravity, characterized by a dimensionless constant g ≈ (ΛGℏ/3c^3)^1/2 ~ 10^-61, shows the features on umbilical manifolds of local dS invariance. Some gravitational effects beyond general relativity may play a role as dark matter. The dark universe and its asymptotic behavior may already indicate that dS special relativity and dS gravity are the foundation of large-scale physics.
Special relativity and theory of gravity via maximum symmetry and localization
GUO HanYing
2008-01-01
Just as the Euclid, Riemann and Lobachevski geometries are on an almost equal footing, so, based on the principle of relativity of maximum symmetry proposed by Professor Lu Qikeng and the postulate of invariant universal constants c and R, the de Sitter/anti-de Sitter (dS/AdS) special relativity on a dS/AdS space of radius R can be set up on an almost equal footing with Einstein's special relativity on Minkowski space, which is recovered in the limit R → ∞. Thus the dS space is coin-like: on one side, a law of inertia in the Beltrami atlas with Beltrami time simultaneity for the principle of relativity; on the other, proper-time simultaneity and a Robertson-Walker-like dS space with entropy and an accelerated expanding S^3 fitting the cosmological principle. If our universe is asymptotic to the Robertson-Walker-like dS space with R ≈ (3/Λ)^1/2, it should be slightly closed at O(Λ) with entropy bound S ≈ 3πc^3 k_B/(ΛGℏ). Conversely, via its asymptotic behavior, it can fix Beltrami inertial frames without 'an argument in a circle' and acts as the origin of inertia. There is a triality of conformal extensions of the three kinds of special relativity and their null physics on the projective boundary of a 5-d AdS space, a null cone modulo projective equivalence [N] ≅ ∂(AdS_5). Thus there should be a dS space on the boundary of S^5 × AdS_5 as a vacuum of supergravity. In the light of Einstein's 'Galilean regions', gravity should be based on the localized principle of relativity of full maximum symmetry with a gauge-like dynamics; this may lead to a theory of gravity with the corresponding local symmetry. A simple model of dS gravity, characterized by a dimensionless constant g ≈ (ΛGℏ/3c^3)^1/2 ~ 10^-61, shows the features on umbilical manifolds of local dS invariance. Some gravitational effects beyond general relativity may play a role as dark matter. The dark universe and its asymptotic behavior may already indicate that dS special relativity and dS gravity are the foundation of large-scale physics.
Initial system-bath state via the maximum-entropy principle
Dai, Jibo; Len, Yink Loong; Ng, Hui Khoon
2016-11-01
The initial state of a system-bath composite is needed as the input for prediction from any quantum evolution equation to describe subsequent system-only reduced dynamics or the noise on the system from joint evolution of the system and the bath. The conventional wisdom is to write down an uncorrelated state as if the system and the bath were prepared in the absence of each other; yet, such a factorized state cannot be the exact description in the presence of system-bath interactions. Here, we show how to go beyond the simplistic factorized-state prescription using ideas from quantum tomography: We employ the maximum-entropy principle to deduce an initial system-bath state consistent with the available information. For the generic case of weak interactions, we obtain an explicit formula for the correction to the factorized state. Such a state turns out to have little correlation between the system and the bath, which we can quantify using our formula. This has implications, in particular, on the subject of subsequent non-completely positive dynamics of the system. Deviation from predictions based on such an almost uncorrelated state is indicative of accidental control of hidden degrees of freedom in the bath.
Zhao, Quanyu; Kurata, Hiroyuki
2010-08-01
Elementary mode (EM) analysis is potentially effective in integrating transcriptome or proteome data into metabolic network analyses and in exploring the mechanism of how the phenotypic or metabolic flux distribution changes with respect to environmental and genetic perturbations. The EM coefficients (EMCs) indicate the quantitative contribution of their associated EMs and can be estimated by maximizing Shannon's entropy as a general objective function, as in our previous study, but the use of EMCs is still restricted to relatively small-scale networks. We propose a fast and universal method that optimizes hundreds of thousands of EMCs under the constraint of the maximum entropy principle (MEP). Lagrange multipliers (LMs) are applied to maximize the Shannon's entropy-based objective function, analytically solving each EMC as a function of the LMs. Consequently, the number of search variables is dramatically reduced from the number of EMs to the number of reactions. To demonstrate the feasibility of the MEP with Lagrange multipliers (MEPLM), it is coupled with enzyme control flux (ECF) to predict the flux distributions of Escherichia coli and Saccharomyces cerevisiae under different conditions (gene deletion, adaptive evolution, temperature, and dilution rate) and to provide a quantitative understanding, at the elementary mode level, of how metabolic or physiological states change in response to these genetic or environmental perturbations. It is shown that the ECF-based method is a feasible framework for predicting metabolic flux distributions in response to genetic and environmental perturbations by integrating enzyme activity data into EMs.
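The computational trick described above — trading many entropy-maximizing coefficients for a few Lagrange multipliers, one per constraint — can be sketched on a toy problem. The constraint matrix and sizes below are arbitrary illustrations, not an actual metabolic network:

```python
import numpy as np
from scipy.optimize import minimize

# Toy MEPLM-style problem: 1000 nonnegative coefficients p summing to 1
# ("EMCs"), but only 3 linear constraints A @ p = b ("reactions"), so we
# solve for 3 multipliers instead of 1000 coefficients.
rng = np.random.default_rng(1)
n = 1000
A = rng.random((3, n))
p_true = rng.dirichlet(np.ones(n))  # guarantees b is feasible
b = A @ p_true

def dual(lam):
    # Log-partition function: the maximum-entropy solution has the
    # analytic form p_i ∝ exp(-(A^T lam)_i), so only lam is searched.
    w = np.exp(-A.T @ lam)
    return np.log(w.sum()) + lam @ b

res = minimize(dual, np.zeros(3))
w = np.exp(-A.T @ res.x)
p = w / w.sum()
print(np.abs(A @ p - b).max())  # constraint residual, should be tiny
```

Minimizing the convex dual drives its gradient b − A·p to zero, so the recovered p satisfies the constraints while maximizing Shannon entropy, mirroring the dimensionality reduction the paper exploits.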
On the relevance of the maximum entropy principle in non-equilibrium statistical mechanics
Auletta, Gennaro; Rondoni, Lamberto; Vulpiani, Angelo
2017-07-01
At first glance, the maximum entropy principle (MEP) apparently allows us to derive, or justify in a simple way, fundamental results of equilibrium statistical mechanics. Because of this, a school of thought considers the MEP a powerful and elegant way to make predictions in physics and other disciplines, rather than a useful technical tool like others in statistical physics. From this point of view the MEP appears as an alternative and more general predictive method than the traditional ones of statistical physics. Actually, careful inspection shows that such success is due to a series of fortunate facts that characterize the physics of equilibrium systems, but which are absent in situations not described by Hamiltonian dynamics, or generically in nonequilibrium phenomena. Here we discuss several important examples in nonequilibrium statistical mechanics in which the MEP leads to incorrect predictions, proving that it does not have a predictive nature. We conclude that, in these paradigmatic examples, an approach that uses a detailed analysis of the relevant aspects of the dynamics cannot be avoided.
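The equilibrium success the abstract refers to is the textbook result that maximizing Shannon entropy at fixed mean energy yields the Boltzmann distribution. A minimal sketch with made-up energy levels:

```python
import numpy as np
from scipy.optimize import brentq

# Discrete energy levels (illustrative values) and a prescribed mean energy.
E = np.array([0.0, 1.0, 2.0, 3.0])
U = 1.2

def mean_energy(beta):
    # Mean energy of the Boltzmann distribution p_i ∝ exp(-beta * E_i),
    # which is the unique entropy maximizer at fixed <E>.
    w = np.exp(-beta * E)
    return (E * w).sum() / w.sum()

# Solve <E>(beta) = U for the Lagrange multiplier beta (inverse temperature).
beta = brentq(lambda bb: mean_energy(bb) - U, -10.0, 10.0)
p = np.exp(-beta * E)
p /= p.sum()
print(beta, p)
```

The point of the paper is that this clean recipe relies on features special to equilibrium; outside that setting the same variational step can give answers that disagree with the actual dynamics.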
Fujino, Akinori; Ueda, Naonori; Saito, Kazumi
2008-03-01
This paper presents a method for designing semi-supervised classifiers trained on labeled and unlabeled samples. We focus on probabilistic semi-supervised classifier design for multi-class and single-labeled classification problems, and propose a hybrid approach that takes advantage of generative and discriminative approaches. In our approach, we first consider a generative model trained by using labeled samples and introduce a bias correction model, where these models belong to the same model family, but have different parameters. Then, we construct a hybrid classifier by combining these models based on the maximum entropy principle. To enable us to apply our hybrid approach to text classification problems, we employed naive Bayes models as the generative and bias correction models. Our experimental results for four text data sets confirmed that the generalization ability of our hybrid classifier was much improved by using a large number of unlabeled samples for training when there were too few labeled samples to obtain good performance. We also confirmed that our hybrid approach significantly outperformed generative and discriminative approaches when the performance of the generative and discriminative approaches was comparable. Moreover, we examined the performance of our hybrid classifier when the labeled and unlabeled data distributions were different.
A Stochastic Maximum Principle for a Stochastic Differential Game of a Mean-Field Type
Hosking, John Joseph Absalom, E-mail: j.j.a.hosking@cma.uio.no [University of Oslo, Centre of Mathematics for Applications (CMA) (Norway)
2012-12-15
We construct a stochastic maximum principle (SMP) which provides necessary conditions for the existence of Nash equilibria in a certain form of N-agent stochastic differential game (SDG) of a mean-field type. The information structure considered for the SDG is of a possible asymmetric and partial type. To prove our SMP we take an approach based on spike-variations and adjoint representation techniques, analogous to that of S. Peng (SIAM J. Control Optim. 28(4):966-979, 1990) in the optimal stochastic control context. In our proof we apply adjoint representation procedures at three points. The first-order adjoint processes are defined as solutions to certain mean-field backward stochastic differential equations, and second-order adjoint processes of a first type are defined as solutions to certain backward stochastic differential equations. Second-order adjoint processes of a second type are defined as solutions of certain backward stochastic equations of a type that we introduce in this paper, and which we term conditional mean-field backward stochastic differential equations. From the resulting representations, we show that the terms relating to these second-order adjoint processes of the second type are of an order such that they do not appear in our final SMP equations. A comparable situation exists in an article by R. Buckdahn, B. Djehiche, and J. Li (Appl. Math. Optim. 64(2):197-216, 2011) that constructs a SMP for a mean-field type optimal stochastic control problem; however, the approach we take of using these second-order adjoint processes of a second type to deal with the type of terms that we refer to as the second form of quadratic-type terms represents an alternative to a development, to our setting, of the approach used in their article for their analogous type of term.
Brodsky, Stanley J; Wu, Xing-Gang
2012-07-27
It is conventional to choose a typical momentum transfer of the process as the renormalization scale and take an arbitrary range to estimate the uncertainty in the QCD prediction. However, predictions using this procedure depend on the renormalization scheme, leave a nonconvergent renormalon perturbative series, and moreover, one obtains incorrect results when applied to QED processes. In contrast, if one fixes the renormalization scale using the principle of maximum conformality (PMC), all nonconformal {β(i)} terms in the perturbative expansion series are summed into the running coupling, and one obtains a unique, scale-fixed, scheme-independent prediction at any finite order. The PMC scale μ(R)(PMC) and the resulting finite-order PMC prediction are both to high accuracy independent of the choice of initial renormalization scale μ(R)(init), consistent with renormalization group invariance. As an application, we apply the PMC procedure to obtain next-to-next-to-leading-order (NNLO) predictions for the tt-pair production at the Tevatron and LHC colliders. The PMC prediction for the total cross section σ(tt) agrees well with the present Tevatron and LHC data. We also verify that the initial scale independence of the PMC prediction is satisfied to high accuracy at the NNLO level: the total cross section remains almost unchanged even when taking very disparate initial scales μ(R)(init) equal to m(t), 20m(t), and √s. Moreover, after PMC scale setting, we obtain A(FB)(tt)≃12.5%, A(FB)(pp)≃8.28% and A(FB)(tt)(M(tt)>450 GeV)≃35.0%. These predictions have a 1σ deviation from the present CDF and D0 measurements; the large discrepancy of the top quark forward-backward asymmetry between the standard model estimate and the data is, thus, greatly reduced.
Lattice Field Theory with the Sign Problem and the Maximum Entropy Method
Masahiro Imachi
2007-02-01
Although numerical simulation in lattice field theory is one of the most effective tools to study non-perturbative properties of field theories, it faces serious obstacles coming from the sign problem in some theories such as finite density QCD and lattice field theory with the θ term. We reconsider this problem from the point of view of the maximum entropy method.
Noncommutative Common Cause Principles in algebraic quantum field theory
Hofer-Szabó, Gábor; Vecsernyés, Péter
2013-04-01
States in algebraic quantum field theory "typically" establish correlation between spacelike separated events. Reichenbach's Common Cause Principle, generalized to the quantum field theoretical setting, offers an apt tool to causally account for these superluminal correlations. In the paper we motivate first why commutativity between the common cause and the correlating events should be abandoned in the definition of the common cause. Then we show that the Noncommutative Weak Common Cause Principle holds in algebraic quantum field theory with locally finite degrees of freedom. Namely, for any pair of projections A, B supported in spacelike separated regions VA and VB, respectively, there is a local projection C not necessarily commuting with A and B such that C is supported within the union of the backward light cones of VA and VB and the set {C, C⊥} screens off the correlation between A and B.
Noncommutative Common Cause Principles in Algebraic Quantum Field Theory
Hofer-Szabó, Gábor
2012-01-01
States in algebraic quantum field theory "typically" establish correlation between spacelike separated events. Reichenbach's Common Cause Principle, generalized to the quantum field theoretical setting, offers an apt tool to causally account for these superluminal correlations. In the paper we motivate first why commutativity between the common cause and the correlating events should be abandoned in the definition of the common cause. Then we show that the Noncommutative Weak Common Cause Principle holds in algebraic quantum field theory with locally finite degrees of freedom. Namely, for any pair of projections A, B supported in spacelike separated regions V_A and V_B, respectively, there is a local projection C not necessarily commuting with A and B such that C is supported within the union of the backward light cones of V_A and V_B and the set {C, non-C} screens off the correlation between A and B.
Unification of Field Theory and Maximum Entropy Methods for Learning Probability Densities
Kinney, Justin B
2014-01-01
Bayesian field theory and maximum entropy are two methods for learning smooth probability distributions (a.k.a. probability densities) from finite sampled data. Both methods were inspired by statistical physics, but the relationship between them has remained unclear. Here I show that Bayesian field theory subsumes maximum entropy density estimation. In particular, the most common maximum entropy methods are shown to be limiting cases of Bayesian inference using field theory priors that impose no boundary conditions on candidate densities. This unification provides a natural way to test the validity of the maximum entropy assumption on one's data. It also provides a better-fitting nonparametric density estimate when the maximum entropy assumption is rejected.
Derivation of instanton rate theory from first principles
Richardson, Jeremy O
2015-01-01
Instanton rate theory is used to study tunneling events in a wide range of systems including low-temperature chemical reactions. Despite many successful applications, the method has never been obtained from first principles, relying instead on the "ImF" premise. In this paper, the same expression for the rate of barrier penetration at finite temperature is rederived from quantum scattering theory [W. H. Miller, S. D. Schwartz, and J. W. Tromp, J. Chem. Phys. 79, 4889 (1983)] using a semiclassical Green's function formalism. This justifies the instanton approach and provides a route to deriving the rate of other processes.
Holographic Principle of Black Holes in Brans-Dicke Theory
Chen, C Y; Chen, Chi-Yi; Shen, You-Gen
2003-01-01
We consider the general situation of type-I stationary solutions of black holes in Brans-Dicke theory and investigate their statistical entropies by using the brick wall model. Comparing with a generalized entropy formula derived from their thermodynamical evolution by Kang, we obtain the ultimate scenario of black hole entropies in Brans-Dicke theory. Considering further the bound of the holographic principle, we obtain a new constraint on the parameters in this type of solution, which reads $2Q-\chi=2$ and corresponds to $\omega=-3/2$.
Derivation of instanton rate theory from first principles
Richardson, Jeremy O.
2016-03-01
Instanton rate theory is used to study tunneling events in a wide range of systems including low-temperature chemical reactions. Despite many successful applications, the method has never been obtained from first principles, relying instead on the "Im F" premise. In this paper, the same expression for the rate of barrier penetration at finite temperature is rederived from quantum scattering theory [W. H. Miller, S. D. Schwartz, and J. W. Tromp, J. Chem. Phys. 79, 4889 (1983)] using a semiclassical Green's function formalism. This justifies the instanton approach and provides a route to deriving the rate of other processes.
Wang, Sheng-Quan; Wu, Xing-Gang; Brodsky, Stanley J.; Mojaza, Matin
2016-09-09
We present improved perturbative QCD (pQCD) predictions for Higgs boson hadroproduction at the LHC by applying the principle of maximum conformality (PMC), a procedure which resums the pQCD series using the renormalization group (RG), thereby eliminating the dependence of the predictions on the choice of the renormalization scheme while minimizing sensitivity to the initial choice of the renormalization scale. In previous pQCD predictions for Higgs boson hadroproduction, it has been conventional to assume that the renormalization scale μ_r of the QCD coupling α_s(μ_r) is the Higgs mass and then to vary this choice over the range m_H/2 < μ_r < 2m_H in order to estimate the theory uncertainty. However, this error estimate is only sensitive to the nonconformal β terms in the pQCD series, and thus it fails to correctly estimate the theory uncertainty in cases where a pQCD series has large higher-order contributions, as is the case for Higgs boson hadroproduction. Furthermore, this ad hoc choice of scale and range gives pQCD predictions which depend on the renormalization scheme being used, in contradiction to basic RG principles. In contrast, after applying the PMC, we obtain next-to-next-to-leading-order RG resummed pQCD predictions for Higgs boson hadroproduction which are renormalization-scheme independent and have minimal sensitivity to the choice of the initial renormalization scale. Taking m_H = 125 GeV, the PMC predictions for the pp → HX Higgs inclusive hadroproduction cross sections for various LHC center-of-mass energies are σ_Incl|_7TeV = 21.21 (+1.36/-1.32) pb, σ_Incl|_8TeV = 27.37 (+1.65/-1.59) pb, and σ_Incl|_13TeV = 65.72 (+3.46/-3.0) pb. We also predict the fiducial cross section σ_fid(pp → H → γγ): σ_fid|_7TeV = 30.1 (+2.3/-2.2) fb, σ_fid|_8TeV = 38.3 (+2.9/-2.8) fb, and σ_fid|_13TeV = 85.8 (+5.7/-5.3) fb. The error limits in these predictions include the small residual high
Universality principle and the development of classical density functional theory
周世琦; 张晓琪
2002-01-01
The universality principle of the free energy density functional and the 'test particle' trick by Percus are combined to construct the approximate free energy density functional or its functional derivative. Information about the bulk fluid radial distribution function is integrated into the density functional approximation directly for the first time in the present methodology. The physical foundation of the present methodology also applies to the quantum density functional theory.
A Simple First-Principles Homogenization Theory for Chiral Metamaterials
Carlo Rizza
2015-04-01
We discuss a simple first-principles homogenization theory for describing, in the long-wavelength limit, the effective bianisotropic response of a periodic metamaterial composite without intrinsic chiral and magnetic inclusions. In the case where the dielectric contrast is low, we obtain a full analytical description which can be considered the extension of the Landau-Lifshitz-Looyenga effective-medium formulation in the context of periodic metamaterials.
Brodsky, Stanley J.; /SLAC; Wu, Xing-Gang; /Chongqing U.
2012-04-02
The uncertainty in setting the renormalization scale in finite-order perturbative QCD predictions using standard methods substantially reduces the precision of tests of the Standard Model in collider experiments. It is conventional to choose a typical momentum transfer of the process as the renormalization scale and take an arbitrary range to estimate the uncertainty in the QCD prediction. However, predictions using this procedure depend on the choice of renormalization scheme, leave a non-convergent renormalon perturbative series, and moreover, one obtains incorrect results when applied to QED processes. In contrast, if one fixes the renormalization scale using the Principle of Maximum Conformality (PMC), all non-conformal {β_i}-terms in the perturbative expansion series are summed into the running coupling, and one obtains a unique, scale-fixed, scheme-independent prediction at any finite order. The PMC renormalization scale μ_R^PMC and the resulting finite-order PMC prediction are both to high accuracy independent of the choice of the initial renormalization scale μ_R^init, consistent with renormalization group invariance. Moreover, after PMC scale-setting, the n!-growth of the pQCD expansion is eliminated. Even the residual scale-dependence at fixed order due to unknown higher-order {β_i}-terms is substantially suppressed. As an application, we apply the PMC procedure to obtain NNLO predictions for the tt̄-pair hadroproduction cross-section at the Tevatron and LHC colliders. There are no renormalization scale or scheme uncertainties, thus greatly improving the precision of the QCD prediction. The PMC prediction for σ_tt̄ is larger in magnitude in comparison with the conventional scale-setting method, and it agrees well with the present Tevatron and LHC data. We also verify that the initial scale-independence of the PMC prediction is satisfied to high accuracy at the
Beretta, Gian Paolo
2014-10-01
By suitable reformulations, we cast the mathematical frameworks of several well-known different approaches to the description of nonequilibrium dynamics into a unified formulation valid in all these contexts, which extends to such frameworks the concept of steepest entropy ascent (SEA) dynamics introduced by the present author in previous works on quantum thermodynamics. Actually, the present formulation constitutes a generalization also for the quantum thermodynamics framework. The analysis emphasizes that in the SEA modeling principle a key role is played by the geometrical metric with respect to which to measure the length of a trajectory in state space. In the near-thermodynamic-equilibrium limit, the metric tensor is directly related to Onsager's generalized resistivity tensor. Therefore, through the identification of a suitable metric field which generalizes the Onsager generalized resistance to the arbitrarily far-nonequilibrium domain, most of the existing theories of nonequilibrium thermodynamics can be cast in such a way that the state exhibits the spontaneous tendency to evolve in state space along the path of SEA compatible with the conservation constraints and the boundary conditions. The resulting unified family of SEA dynamical models is intrinsically and strongly consistent with the second law of thermodynamics. The non-negativity of the entropy production is a general and readily proved feature of SEA dynamics. In several of the different approaches to nonequilibrium description we consider here, the SEA concept has not been investigated before. We believe it defines the precise meaning and the domain of general validity of the so-called maximum entropy production principle. Therefore, it is hoped that the present unifying approach may prove useful in providing a fresh basis for effective, thermodynamically consistent, numerical models and theoretical treatments of irreversible conservative relaxation towards equilibrium from far nonequilibrium.
Ethics and nursing research. 1: Development, theories and principles.
Noble-Adams, R
This article, the first of two looking at nursing ethics and research, outlines the foundations and development of an ethical framework for nursing research. The two dominant theories of ethics--utilitarianism and deontology--are described as they relate to the rights of individuals undergoing the research. Each of these approaches has limitations and in some instances choosing the right action may be difficult. The guiding ethical standards of beneficence/non-maleficence, respect for human dignity, justice, informed consent and vulnerable subjects are reviewed for the reader as they relate to undertaking research. This knowledge will help nurses conduct, participate in, or use research that is based on ethically sound principles. The second article will explore and explain the relationship between these guiding principles and the elemental steps of the research process.
Einstein's Equivalence Principle and Invalidity of Thorne's Theory for LIGO
Lo C. Y.
2006-04-01
The theoretical foundation of LIGO's design is based on the equation of motion derived by Thorne. His formula, motivated by Einstein's theory of measurement, shows that the gravitational wave-induced displacement of a mass with respect to an object is proportional to the distance from the object. On the other hand, based on the observed bending of light and Einstein's equivalence principle, it is concluded that such induced displacement has nothing to do with the distance from another object. It is shown that the derivation of Thorne's formula has invalid assumptions that make it inapplicable to LIGO. This is a good counter example for those who claimed that Einstein's equivalence principle is not important or even irrelevant.
Principle-based concept analysis: intentionality in holistic nursing theories.
Aghebati, Nahid; Mohammadi, Eesa; Ahmadi, Fazlollah; Noaparast, Khosrow Bagheri
2015-03-01
This is a report of a principle-based concept analysis of intentionality in holistic nursing theories. A principle-based concept analysis method was used to analyze seven holistic theories. The data included eight books and 31 articles (1998-2011), which were retrieved through MEDLINE and CINAHL. Erickson, Kriger, Parse, Watson, and Zahourek define intentionality as a capacity, a focused consciousness, and a pattern of human being. Rogers and Newman do not explicitly mention intentionality; however, they do explain pattern and consciousness (epistemology). Intentionality has been operationalized as a core concept of nurse-client relationships (pragmatic). The theories are consistent on intentionality as a noun and as an attribute of the person-intentionality is different from intent and intention (linguistic). There is ambiguity concerning the boundaries between intentionality and consciousness (logic). Theoretically, intentionality is an evolutionary capacity to integrate human awareness and experience. Because intentionality is an individualized concept, we introduced it as "a matrix of continuous known changes" that emerges in two forms: as a capacity of human being and as a capacity of transpersonal caring. This study has produced a theoretical definition of intentionality and provides a foundation for future research to further investigate intentionality to better delineate its boundaries. © The Author(s) 2014.
Acoustic space dimensionality selection and combination using the maximum entropy principle
Abdel-Haleem, Yasser H.; Renals, Steve; Lawrence, Neil D.
2004-01-01
In this paper we propose a discriminative approach to acoustic space dimensionality selection based on maximum entropy modelling. We form a set of constraints by composing the acoustic space with the space of phone classes, and use a continuous feature formulation of maximum entropy modelling to select an optimal feature set. The suggested approach has two steps: (1) the selection of the best acoustic space that efficiently and economically represents the acoustic data and its variability;...
Unification of field theory and maximum entropy methods for learning probability densities.
Kinney, Justin B
2015-09-01
The need to estimate smooth probability distributions (a.k.a. probability densities) from finite sampled data is ubiquitous in science. Many approaches to this problem have been described, but none is yet regarded as providing a definitive solution. Maximum entropy estimation and Bayesian field theory are two such approaches. Both have origins in statistical physics, but the relationship between them has remained unclear. Here I unify these two methods by showing that every maximum entropy density estimate can be recovered in the infinite smoothness limit of an appropriate Bayesian field theory. I also show that Bayesian field theory estimation can be performed without imposing any boundary conditions on candidate densities, and that the infinite smoothness limit of these theories recovers the most common types of maximum entropy estimates. Bayesian field theory thus provides a natural test of the maximum entropy null hypothesis and, furthermore, returns an alternative (lower entropy) density estimate when the maximum entropy hypothesis is falsified. The computations necessary for this approach can be performed rapidly for one-dimensional data, and software for doing this is provided.
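The moment-constrained maximum entropy estimates discussed in this abstract can be illustrated with a minimal sketch (this is not Kinney's actual field-theory implementation; the discrete support, single mean constraint, and bisection bounds below are illustrative assumptions):

```python
import math

def maxent_pmf(support, target_mean, tol=1e-10):
    """Maximum-entropy pmf on a finite support subject to a fixed mean.

    The solution has the Gibbs form p_i proportional to exp(-lam * x_i);
    the Lagrange multiplier lam is found by bisection so that the mean
    constraint is satisfied.
    """
    def mean_for(lam):
        exps = [-lam * x for x in support]
        shift = max(exps)                      # avoid overflow in exp()
        w = [math.exp(e - shift) for e in exps]
        z = sum(w)
        return sum(x * wi for x, wi in zip(support, w)) / z

    lo, hi = -50.0, 50.0                       # mean_for() decreases in lam
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mean_for(mid) > target_mean:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    exps = [-lam * x for x in support]
    shift = max(exps)
    w = [math.exp(e - shift) for e in exps]
    z = sum(w)
    return [wi / z for wi in w]
```

When the mean is pinned at the midpoint of a symmetric support, the multiplier vanishes and the estimate reduces to the uniform distribution, the global entropy maximum; any other constrained mean tilts the pmf exponentially.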
Test the principle of maximum entropy in constant sum 2×2 game: Evidence in experimental economics
Xu, Bin, E-mail: xubin211@zju.edu.cn [Experimental Social Science Laboratory, Zhejiang University, Hangzhou, 310058 (China); Public Administration College, Zhejiang Gongshang University, Hangzhou, 310018 (China); Zhang, Hongen, E-mail: hongen777@163.com [Department of Physics, Zhejiang University, Hangzhou, 310027 (China); Wang, Zhijian, E-mail: wangzj@zju.edu.cn [Experimental Social Science Laboratory, Zhejiang University, Hangzhou, 310058 (China); Zhang, Jianbo, E-mail: jbzhang08@zju.edu.cn [Department of Physics, Zhejiang University, Hangzhou, 310027 (China)
2012-03-19
By using laboratory experimental data, we test the uncertainty of strategy type in various competing environments with two-person constant sum 2×2 games in the social system. It first shows that, in these competing game environments, the outcomes of human decision-making obey the principle of maximum entropy. -- Highlights: ► Test the uncertainty in two-person constant sum games with experimental data. ► On game level, the constant sum game fits the principle of maximum entropy. ► On group level, all empirical entropy values are close to theoretical maxima. ► The results can be different for games that are not constant sum games.
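The group-level comparison described in the highlights (empirical entropy of observed play versus the theoretical maximum) can be sketched as follows; the action counts are hypothetical and purely for illustration:

```python
import math

def shannon_entropy(counts):
    """Shannon entropy in bits of the empirical distribution given raw counts."""
    total = sum(counts)
    return -sum((c / total) * math.log2(c / total) for c in counts if c > 0)

# Hypothetical action counts for one player over repeated rounds of a 2x2 game
counts = [250, 250]
empirical = shannon_entropy(counts)
theoretical_max = math.log2(2)   # 1 bit for two pure strategies
gap = theoretical_max - empirical  # small gap suggests maximum-entropy play
```

An empirical entropy close to the theoretical maximum (here, 1 bit) is the kind of evidence the abstract reports for constant sum games, while skewed counts push the entropy below that bound.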
Theory-generating practice. Proposing a principle for learning design
Buhl, Mie
2016-01-01
This contribution proposes a principle for learning design – Theory-Generating Practice (TGP) – as an alternative to the way university courses are traditionally taught and structured, with a series of theoretical lectures isolated from practical experience and concluding with an exam or a project...... building, and takes tacit knowledge into account. The article introduces TGP, contextualizes it to a Danish tradition of didactics, and discusses it in relation to contemporary conceptual currents of didactic design and learning design. This is followed by a theoretical framing of TGP. Finally, three...
Directionality Theory and the Entropic Principle of Natural Selection
Lloyd A. Demetrius
2014-10-01
Darwinian fitness describes the capacity of an organism to appropriate resources from the environment and to convert these resources into net-offspring production. Studies of competition between related types indicate that fitness is analytically described by entropy, a statistical measure which is positively correlated with population stability, and describes the number of accessible pathways of energy flow between the individuals in the population. Directionality theory is a mathematical model of the evolutionary process based on the concept of evolutionary entropy as the measure of fitness. The theory predicts that the changes which occur as a population evolves from one non-equilibrium steady state to another are described by the following directionality principle (fundamental theorem of evolution): (a) an increase in evolutionary entropy when resource composition is diverse and resource abundance constant; (b) a decrease in evolutionary entropy when resource composition is singular and resource abundance variable. Evolutionary entropy characterizes the dynamics of energy flow between the individual elements in various classes of biological networks: (a) where the units are individuals parameterized by age, and their age-specific fecundity and mortality; (b) where the units are metabolites, and the transitions are the biochemical reactions that convert substrates to products; (c) where the units are social groups, and the forces are the cooperative and competitive interactions between the individual groups. This article reviews the analytical basis of the evolutionary entropic principle, and describes applications of directionality theory to the study of evolutionary dynamics in two biological systems: (i) social networks (the evolution of cooperation); (ii) metabolic networks (the evolution of body size). Statistical thermodynamics is a mathematical model of macroscopic behavior in inanimate matter based on entropy, a statistical measure which
The workings of the Maximum Entropy Principle in collective human behavior
Hernando, A; Plastino, A; Plastino, A R
2012-01-01
We exhibit compelling evidence regarding how well the MaxEnt principle describes the rank-distribution of city-populations via an exhaustive study of the 50 Spanish provinces (more than 8000 cities) in a time-window of 15 years (1996-2010). We show that the dynamics that governs the population-growth is the deciding factor that originates the observed distributions. The connection between dynamics and distributions is unravelled via MaxEnt.
Jiang, Yulin; Li, Bin; Chen, Jie
2016-01-01
The flow velocity distribution in a partially-filled circular pipe was investigated in this paper. The velocity profile is different from that of full-filled pipe flow, since the flow is driven by gravity, not by pressure. The research findings show that the position of maximum flow is below the water surface, and varies with the water depth. In the region near the tube wall, the fluid velocity is mainly influenced by the friction of the wall and the pipe bottom slope, and the variation of velocity is similar to that in full-filled pipe flow. But near the free water surface, the velocity distribution is mainly affected by the contractive tube wall and the secondary flow, and the variation of the velocity is relatively small. A literature search shows that relatively little research has addressed a practical expression to describe the velocity distribution of partially-filled circular pipes. An expression of the two-dimensional (2D) velocity distribution in partially-filled circular pipe flow was derived based on the principle of maximum entropy (POME). Different entropies were compared according to fluid knowledge, and non-extensive entropy was chosen. A new cumulative distribution function (CDF) of partially-filled circular pipe velocity in terms of flow depth was hypothesized. Combined with the CDF hypothesis, the 2D velocity distribution was derived, and the position of maximum velocity distribution was analyzed. The experimental results show that the estimated velocity values based on the principle of maximum Tsallis wavelet entropy are in good agreement with measured values.
Prediction of the maximum water inflow in Pingdingshan No.8 mine based on grey system theory
XU Jie; JING Guo-xun; XU Yao-yao
2012-01-01
In order to prevent and control water inflow in mines, this paper builds a new initial GM(1,1) model to forecast the maximum water inflow according to the principle of new information. A concrete example shows that the effect of the new initial GM(1,1) model is not ideal. Then, according to the principle of minimizing the sum of the squares of the differences between the calculated sequence and the original sequence, an optimized GM(1,1) model was established. The result shows that this method is a new prediction method which can predict the maximum water inflow accurately. It not only conforms to the guideline of prevention first, but also provides reference standards to managers for making prevention measures.
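The basic GM(1,1) procedure the abstract builds on can be sketched as follows (standard grey-model fitting via the 1-AGO sequence and least squares; the example data and the plain initial-value convention x̂(1) = x(1) are assumptions, not the paper's optimized variant):

```python
import math

def gm11(x):
    """Fit a basic GM(1,1) grey model to a positive series x and return
    the development coefficient a, grey input b, and a predictor."""
    n = len(x)
    # 1-AGO: cumulative sum of the original series
    x1 = [sum(x[:k + 1]) for k in range(n)]
    # Background values: means of consecutive AGO terms
    z = [0.5 * (x1[k] + x1[k + 1]) for k in range(n - 1)]
    # Least squares for x(k) = -a*z(k-1) + b over k = 2..n
    Y = x[1:]
    m = n - 1
    szz = sum(zk * zk for zk in z)
    sz = sum(z)
    sy = sum(Y)
    szy = sum(zk * yk for zk, yk in zip(z, Y))
    det = m * szz - sz * sz
    a = (sz * sy - m * szy) / det
    b = (szz * sy - sz * szy) / det

    def predict(k):
        """Restored value x_hat(k), with k a 1-based index."""
        if k == 1:
            return x[0]
        return (x[0] - b / a) * math.exp(-a * (k - 1)) * (1 - math.exp(a))

    return a, b, predict
```

For forecasting, `predict(n + 1)` extrapolates one step beyond the observed series; the optimized model in the paper differs in how the initial condition is chosen.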
Maximum path information and the principle of least action for chaotic system
Wang, Qiuping A.
2004-01-01
A path information is defined in connection with the different possible paths of a chaotic system moving in its phase space between two cells. On the basis of the assumption that the paths are differentiated by their actions, we show that the maximum path information leads to a path probability distribution as a function of action from which the well known transition probability of Brownian motion can be easily derived. An interesting result is that the most probable paths ar...
The workings of the maximum entropy principle in collective human behaviour
Hernando, A.; Hernando, R.; Plastino, A.; Plastino, A. R.
2013-01-01
We present an exhaustive study of the rank-distribution of city-population and population-dynamics of the 50 Spanish provinces (more than 8000 municipalities) in a time-window of 15 years (1996–2010). We exhibit compelling evidence regarding how well the MaxEnt principle describes the equilibrium distributions. We show that the microscopic dynamics that governs population growth is the deciding factor that originates the observed macroscopic distributions. The connection between microscopic dynamics and macroscopic distributions is unravelled via MaxEnt. PMID:23152105
Unified Field Theory and Principle of Representation Invariance
Ma, Tian
2012-01-01
This is part of a research program to establish a unified field model for interactions in nature. The aim of this article is to postulate a new principle of representation invariance (PRI), to provide a needed mathematical foundation for PRI, and to use PRI to refine the unified field equations of four interactions. Intuitively, PRI amounts to saying that all SU(N) gauge theories should be invariant under transformations of different representations of SU(N). With PRI, we are able to substantially reduce the number of to-be-determined parameters in the unified model to two SU(2) and SU(3) constant vectors $\\{\\alpha^1_\\mu \\}$ and $\\{\\alpha^2_k\\}$, containing 11 parameters, which represent the portions distributed to the gauge potentials by the weak and strong charges. Furthermore, both PRI and PID can be directly applied to individual interactions, leading to a unified theory for dark matter and dark energy, and theories on strong and weak interaction potentials. As a direct application of the strong interacti...
A theory for cursive handwriting based on the minimization principle.
Wada, Y; Kawato, M
1995-06-01
We propose a trajectory planning and control theory which provides explanations at the computation, algorithm, representation, and hardware levels for continuous movement such as connected cursive handwriting. The hardware is based on our previously proposed forward-inverse-relaxation neural network. Computationally, the optimization principle is the minimum torque-change criterion. At the representation level, hard constraints satisfied by a trajectory are represented as a set of via-points extracted from handwritten characters. Accordingly, we propose a via-point estimation algorithm that estimates via-points by repeating trajectory formation of a character and via-point extraction from the character. It is shown experimentally that for movements with a single via-point target, the via-point estimation algorithm can assign a point near the actual via-point target. Good quantitative agreement is found between human movement data and the trajectories generated by the proposed model.
Violation of the Equivalence Principle in String Dilaton Theories
Landau, S J; Vucetich, H; Landau, Susana J.; Sisterna, Pablo D.
2003-01-01
We study violations of the weak equivalence principle in the context of string dilaton theories. In these models some fundamental constants become space- as well as time-dependent. We show that although universality of free fall (UFF) experiments set bounds on parameters that govern the cosmological evolution of the scalar fields, these are strongly relaxed when considering the space-dependent behavior of the scalar field. We also analyze the Oklo bound on the variation of the fine structure constant. Conversely, including the space-dependent solution of the dilaton field does not affect the restrictions on the free parameters of the model. Finally, consequences for the relevance of experiments on UFF are reanalyzed.
Principles of physics from quantum field theory to classical mechanics
Jun, Ni
2014-01-01
This book starts from a set of common basic principles to establish the formalisms in all areas of fundamental physics, including quantum field theory, quantum mechanics, statistical mechanics, thermodynamics, general relativity, electromagnetic field, and classical mechanics. Instead of the traditional pedagogic way, the author arranges the subjects and formalisms in a logical-sequential way, i.e. all the formulas are derived from the formulas before them. The formalisms are also kept self-contained. Most of the required mathematical tools are also given in the appendices. Although this book covers all the disciplines of fundamental physics, the book is concise and can be treated as an integrated entity. This is consistent with the aphorism that simplicity is beauty, unification is beauty, and thus physics is beauty. The book may be used as an advanced textbook by graduate students. It is also suitable for physicists who wish to have an overview of fundamental physics. Readership: This is an advanced gradua...
Principle of Maximum Entanglement Entropy and Local Physics of Strongly Correlated Materials
Lanatà, Nicola [Rutgers University; Strand, Hugo U. R. [University of Gothenburg; Yao, Yongxin [Ames Laboratory; Kotliar, Gabriel [Rutgers University
2014-07-01
We argue that, because of quantum entanglement, the local physics of strongly correlated materials at zero temperature is described, to a very good approximation, by a simple generalized Gibbs distribution, which depends on a relatively small number of local quantum thermodynamical potentials. We demonstrate that our statement is exact in certain limits, and we present numerical calculations, employing the Gutzwiller approximation, of the iron compounds FeSe and FeTe and of elemental cerium that strongly support our theory in general.
Cavalli, Andrea; Camilloni, Carlo; Vendruscolo, Michele
2013-03-07
In order to characterise the dynamics of proteins, a well-established method is to incorporate experimental parameters as replica-averaged structural restraints into molecular dynamics simulations. Here, we justify this approach in the case of interproton distance information provided by nuclear Overhauser effects by showing that it generates ensembles of conformations according to the maximum entropy principle. These results indicate that the use of replica-averaged structural restraints in molecular dynamics simulations, given a force field and a set of experimental data, can provide an accurate approximation of the unknown Boltzmann distribution of a system.
Chattaraj, Pratim K; Ayers, Paul W; Melin, Junia
2007-08-07
Ayers, Parr, and Pearson recently showed that insight into the hard/soft acid/base (HSAB) principle could be obtained by analyzing the energy of reactions in hard/soft exchange reactions, i.e., reactions in which a soft acid replaces a hard acid or a soft base replaces a hard base [J. Chem. Phys., 2006, 124, 194107]. We show, in accord with the maximum hardness principle, that the hardness increases for favorable hard/soft exchange reactions and decreases when the HSAB principle indicates that hard/soft exchange reactions are unfavorable. This extends the previous work of the authors, which treated only the "double hard/soft exchange" reaction [P. K. Chattaraj and P. W. Ayers, J. Chem. Phys., 2005, 123, 086101]. We also discuss two different approaches to computing the hardness of molecules from the hardness of the composing fragments, and explain how the results differ. In the present context, it seems that the arithmetic mean of fragment softnesses is the preferable definition.
Chavanis, Pierre-Henri
2014-01-01
In the context of two-dimensional (2D) turbulence, we apply the maximum entropy production principle (MEPP) by enforcing a local conservation of energy. This leads to an equation for the vorticity distribution that conserves all the Casimirs, the energy, and that increases monotonically the mixing entropy ($H$-theorem). Furthermore, the equation for the coarse-grained vorticity dissipates monotonically all the generalized enstrophies. These equations may provide a parametrization of 2D turbulence. They do not generally relax towards the maximum entropy state. The vorticity current vanishes for any steady state of the 2D Euler equation. Interestingly, the equation for the coarse-grained vorticity obtained from the MEPP turns out to coincide, after some algebraic manipulations, with the one obtained with the anticipated vorticity method. This shows a connection between these two approaches when the conservation of energy is treated locally. Furthermore, the newly derived equation, which incorporates a diffusion...
A First-Principle Kinetic Theory of Meteor Plasma Formation
Dimant, Yakov; Oppenheim, Meers
2015-11-01
Every second, millions of tiny meteoroids hit the Earth from space, the vast majority too small to observe visually. However, radars detect the plasma they generate and use the collected data to characterize the incoming meteoroids and the atmosphere in which they disintegrate. This diagnostic requires a detailed quantitative understanding of the formation of the meteor plasma. Fast-descending meteoroids become detectable to radars after they heat sufficiently due to collisions with atmospheric molecules and start ablating. The ablated material then collides with atmospheric molecules and forms plasma around the meteoroid. Reflection of radar pulses from this plasma produces a localized signal called a head echo. Using first principles, we have developed a consistent collisional kinetic theory of the near-meteoroid plasma. This theory shows that the meteoroid plasma develops over a length scale close to the ion mean free path, with a non-Maxwellian velocity distribution. The spatial distribution of the plasma density shows significant deviations from the Gaussian law usually employed in head-echo modeling. This analytical model will serve as a basis for more accurate quantitative interpretation of head echo radar measurements. Work supported by NSF Grant 1244842.
Chavanis, Pierre-Henri, E-mail: chavanis@irsamc.ups-tlse.fr [Laboratoire de Physique Théorique, Université Paul Sabatier, 118 route de Narbonne, F-31062 Toulouse (France)
2014-12-01
In the context of two-dimensional (2D) turbulence, we apply the maximum entropy production principle (MEPP) by enforcing a local conservation of energy. This leads to an equation for the vorticity distribution that conserves all the Casimirs, the energy, and that increases monotonically the mixing entropy (H-theorem). Furthermore, the equation for the coarse-grained vorticity dissipates monotonically all the generalized enstrophies. These equations may provide a parametrization of 2D turbulence. They do not generally relax towards the maximum entropy state. The vorticity current vanishes for any steady state of the 2D Euler equation. Interestingly, the equation for the coarse-grained vorticity obtained from the MEPP turns out to coincide, after some algebraic manipulations, with the one obtained with the anticipated vorticity method. This shows a connection between these two approaches when the conservation of energy is treated locally. Furthermore, the newly derived equation, which incorporates a diffusion term and a drift term, has a nice physical interpretation in terms of a selective decay principle. This sheds new light on both the MEPP and the anticipated vorticity method. (paper)
Toward a Principled Sampling Theory for Quasi-Orders
Ünlü, Ali; Schrepp, Martin
2016-01-01
Quasi-orders, that is, reflexive and transitive binary relations, have numerous applications. In educational theories, the dependencies of mastery among the problems of a test can be modeled by quasi-orders. Methods such as item tree or Boolean analysis that mine for quasi-orders in empirical data are sensitive to the underlying quasi-order structure. These data mining techniques have to be compared based on extensive simulation studies, with unbiased samples of randomly generated quasi-orders at their basis. In this paper, we develop techniques that can provide the required quasi-order samples. We introduce a discrete doubly inductive procedure for incrementally constructing the set of all quasi-orders on a finite item set. A randomization of this deterministic procedure allows us to generate representative samples of random quasi-orders. With an outer-level inductive algorithm, we consider the uniform random extensions of the trace quasi-orders to higher dimension. This is combined with an inner-level inductive algorithm to correct the extensions that violate the transitivity property. The inner-level correction step entails sampling biases. We propose three algorithms for bias correction and investigate them in simulation. It is evident that, even on item sets of up to 50 items, the new algorithms create close to representative quasi-order samples within acceptable computing time. Hence, the principled approach is a significant improvement over existing methods that are used to draw quasi-orders uniformly at random but cannot cope with reasonably large item sets. PMID:27965601
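The paper's doubly inductive sampler is not reproduced here. As a contrast, a naive baseline is easy to sketch: draw a random binary relation, add reflexivity, and take the transitive closure. This produces valid quasi-orders but illustrates exactly the bias the authors address, since the closure step skews samples toward larger relations. The function names are illustrative.

```python
import itertools
import random

def random_quasi_order(items, p=0.3, seed=None):
    """Naive baseline sampler: draw a random relation edge-wise with
    probability p, force reflexivity, then take the transitive closure
    (Warshall's algorithm).  NOTE: unlike the inductive procedure in the
    paper, this does NOT sample uniformly over all quasi-orders -- the
    closure step biases the draw toward larger relations."""
    rng = random.Random(seed)
    n = len(items)
    rel = [[i == j or rng.random() < p for j in range(n)] for i in range(n)]
    for k in range(n):            # Warshall transitive closure
        for i in range(n):
            if rel[i][k]:
                for j in range(n):
                    if rel[k][j]:
                        rel[i][j] = True
    return rel

def is_quasi_order(rel):
    """Check reflexivity and transitivity by brute force."""
    n = len(rel)
    reflexive = all(rel[i][i] for i in range(n))
    transitive = all(not (rel[i][k] and rel[k][j]) or rel[i][j]
                     for i, k, j in itertools.product(range(n), repeat=3))
    return reflexive and transitive

R = random_quasi_order(range(6), p=0.2, seed=42)
```

Every output of this sampler is a genuine quasi-order, which is why the simulation-study question in the abstract is about representativeness of the sample, not validity of individual draws.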
Sanchez-Martinez, M; Crehuet, R
2014-12-21
We present a method based on the maximum entropy principle that can re-weight an ensemble of protein structures based on data from residual dipolar couplings (RDCs). The RDCs of intrinsically disordered proteins (IDPs) provide information on the secondary structure elements present in an ensemble; however even two sets of RDCs are not enough to fully determine the distribution of conformations, and the force field used to generate the structures has a pervasive influence on the refined ensemble. Two physics-based coarse-grained force fields, Profasi and Campari, are able to predict the secondary structure elements present in an IDP, but even after including the RDC data, the re-weighted ensembles differ between both force fields. Thus the spread of IDP ensembles highlights the need for better force fields. We distribute our algorithm in an open-source Python code.
The Principle of Equivalence as a Guide towards Matrix Theory Compactifications
Peñalba, J P
1998-01-01
The principle of equivalence is translated into the language of the world-volume field theories that define matrix and string theories. This idea leads us to explore possible matrix descriptions of M-theory compactifications. An interesting case is the relationship between D=6 N=1 U(M) SYM and Matrix Theory on K3.
Principles of General Systems Theory: Some Implications for Higher Education Administration
Gilliland, Martha W.; Gilliland, J. Richard
1978-01-01
Three principles of general systems theory are presented and systems theory is distinguished from systems analysis. The principles state that all systems tend to become more disorderly, that they must be diverse in order to be stable, and that only those maximizing their resource utilization for doing useful work will survive. (Author/LBH)
Westhoff, Martijn; Zehe, Erwin; Erpicum, Sébastien; Archambeau, Pierre; Pirotton, Michel; Dewals, Benjamin
2015-04-01
The Maximum Entropy Production (MEP) principle is a conjecture assuming that a medium is organized in such a way that maximum power is subtracted from a gradient driving a flux (with power being a flux times its driving gradient). This maximum power is also known as the Carnot limit. It has already been shown that the atmosphere operates close to this Carnot limit when it comes to heat transport from the Equator to the poles, or vertically, from the surface to the atmospheric boundary layer. To reach this state close to the Carnot limit, the effective thermal conductivity of the atmosphere is adapted by the creation of convection cells (e.g. wind). The aim of this study is to test whether the soil's effective hydraulic conductivity also adapts itself in such a way that it operates close to the Carnot limit. The big difference between the atmosphere and the soil is the way in which their resistance adapts. The soil's hydraulic conductivity is either changed by weathering processes, which is a very slow process, or by the creation of preferential flow paths. In this study the latter process is simulated in a lab experiment, where we focus on the preferential flow paths created by piping. Piping is the process of backward erosion of sand particles subject to a large pressure gradient. Since this is a relatively fast process, it is suitable for being tested in the lab. In the lab setup a horizontal sand bed connects two reservoirs that both drain freely at a level high enough to keep the sand bed saturated at all times. By adding water to only one reservoir, a horizontal pressure gradient is maintained. If the flow resistance is small, a large gradient develops, leading to the effect of piping. When pipes are being formed, the effective flow resistance decreases; the flow through the sand bed increases and the pressure gradient decreases. At a certain point, the flow velocity is small enough to stop the pipes from growing any further. In this steady state, the effective flow resistance of
Maximum-Likelihood Approach to Topological Charge Fluctuations in Lattice Gauge Theory
Brower, R C; Fleming, G T; Lin, M F; Neil, E T; Osborn, J C; Rebbi, C; Rinaldi, E; Schaich, D; Schroeder, C; Voronov, G; Vranas, P; Weinberg, E; Witzel, O
2014-01-01
We present a novel technique for the determination of the topological susceptibility (related to the variance of the distribution of global topological charge) from lattice gauge theory simulations, based on maximum-likelihood analysis of the Markov-chain Monte Carlo time series. This technique is expected to be particularly useful in situations where relatively few tunneling events are observed. Restriction to a lattice subvolume on which topological charge is not quantized is explored, and may lead to further improvement when the global topology is poorly sampled. We test our proposed method on a set of lattice data, and compare it to traditional methods.
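As a toy illustration of the idea (not the authors' estimator), one can fit the width of a discrete Gaussian to an integer topological-charge series by maximizing the log-likelihood over a grid of candidate variances; the fitted variance plays the role of the susceptibility up to a volume factor. The model family, grid, and synthetic data below are all assumptions for illustration.

```python
import math
import random

def discrete_gauss_loglik(charges, var, qmax=50):
    """Log-likelihood of integer charges under p(Q) proportional to
    exp(-Q^2 / (2*var)), normalized over Q in [-qmax, qmax]."""
    z = sum(math.exp(-q * q / (2.0 * var)) for q in range(-qmax, qmax + 1))
    lz = math.log(z)
    return sum(-q * q / (2.0 * var) - lz for q in charges)

def ml_susceptibility(charges, grid=None):
    """Grid-search maximum-likelihood width of the charge distribution
    (a stand-in for the susceptibility up to a volume factor)."""
    if grid is None:
        grid = [0.1 * k for k in range(1, 101)]  # var in (0, 10]
    return max(grid, key=lambda v: discrete_gauss_loglik(charges, v))

# Synthetic stand-in for a Monte Carlo time series: rounded normal draws
rng = random.Random(0)
qs = [round(rng.gauss(0.0, math.sqrt(2.0))) for _ in range(4000)]
est = ml_susceptibility(qs)
```

A likelihood fit of this kind uses the whole shape of the charge histogram rather than only the sample variance, which is the qualitative reason it can help when tunneling events, and hence nonzero charges, are rare.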
Optimizing Computer Assisted Instruction By Applying Principles of Learning Theory.
Edwards, Thomas O.
The development of learning theory and its application to computer-assisted instruction (CAI) are described. Among the early theoretical constructs thought to be important are E. L. Thorndike's concept of connectionism in learning, Neal Miller's theory of motivation, and B. F. Skinner's theory of operant conditioning. Early devices incorporating those…
Jungemann, C.; Pham, A. T.; Meinerzhagen, B.; Ringhofer, C.; Bollhöfer, M.
2006-07-01
The Boltzmann equation for transport in semiconductors is projected onto spherical harmonics in such a way that the resultant balance equations for the coefficients of the distribution function times the generalized density of states can be discretized over energy and real spaces by box integration. This ensures exact current continuity for the discrete equations. Spurious oscillations of the distribution function are suppressed by stabilization based on a maximum entropy dissipation principle avoiding the H transformation. The derived formulation can be used on arbitrary grids as long as box integration is possible. The approach works not only with analytical bands but also with full band structures in the case of holes. Results are presented for holes in bulk silicon based on a full band structure and electrons in a Si NPN bipolar junction transistor. The convergence of the spherical harmonics expansion is shown for a device, and it is found that the quasiballistic transport in nanoscale devices requires an expansion of considerably higher order than the usual first one. The stability of the discretization is demonstrated for a range of grid spacings in the real space and bias points which produce huge gradients in the electron density and electric field. It is shown that the resultant large linear system of equations can be solved in a memory efficient way by the numerically robust package ILUPACK.
周良明; 郭佩芳; 王强; 杜伊
2004-01-01
Based on the maximum entropy principle, a probability density function (PDF) is derived for the distribution of wave heights in a random wave field, without any further hypotheses. The present PDF, being of non-Rayleigh form, involves two parameters: the average wave height H and the state parameter γ. The role of γ in the distribution of wave heights is examined. It is found that γ may be a certain measure of sea state. A least-squares method for determining γ from measured data is proposed. By virtue of this method, the values of γ are determined for three sea states from data measured in the East China Sea. The present PDF is compared with the well-known Rayleigh PDF of wave height and is shown to fit the data much better than the Rayleigh PDF. It is expected that the present PDF would fit some other wave variables, since its derivation is not restricted to the wave height alone.
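The two-parameter PDF of the abstract is not reproduced here, but the general mechanism behind such derivations can be: maximizing entropy subject to a mean constraint yields an exponential (Gibbs) family whose Lagrange multiplier is fixed by the constraint. The discrete sketch below solves for that multiplier by bisection; the grid of values and the bracket on the multiplier are illustrative assumptions.

```python
import math

def maxent_mean(values, target_mean, tol=1e-10):
    """Maximum-entropy distribution over `values` subject to a fixed mean.
    The solution has the Gibbs form p_i proportional to exp(-lam * x_i);
    lam is found by bisection so the model mean matches `target_mean`."""
    def mean_for(lam):
        w = [math.exp(-lam * x) for x in values]
        z = sum(w)
        return sum(x * wi for x, wi in zip(values, w)) / z
    # Assumes the target mean is attainable for lam in this bracket
    lo, hi = -5.0, 5.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        # mean_for is decreasing in lam: more weight on small x as lam grows
        if mean_for(mid) > target_mean:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    w = [math.exp(-lam * x) for x in values]
    z = sum(w)
    return [wi / z for wi in w]

# Max-entropy distribution on {0, ..., 10} with mean 3
p = maxent_mean(list(range(11)), target_mean=3.0)
```

Adding further constraints (e.g. on higher moments, as a wave-height model would) adds further multipliers, which is how non-Rayleigh shapes arise from the same principle.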
The argument of the principles in contemporary theory of law: An antipositivist plea
José Julián Suárez-Rodríguez
2012-06-01
Full Text Available The theory of legal principles enjoys a resonance today that it did not know in earlier periods of legal science, and several authors have dedicated themselves to its formation, each contributing important elements to its configuration. This article presents the characteristics of the contemporary theory of principles and the contributions that the most important authors in the field have made to it. Furthermore, it shows how the theory of principles has been developed as an argument against the main theses of legal positivism, the dominant legal culture until the second half of the twentieth century.
Convex integration theory solutions to the h-principle in geometry and topology
Spring, David
1998-01-01
This book provides a comprehensive study of convex integration theory in immersion-theoretic topology. Convex integration theory, developed originally by M. Gromov, provides general topological methods for solving the h-principle for a wide variety of problems in differential geometry and topology, with applications also to PDE theory and to optimal control theory. Though topological in nature, the theory is based on a precise analytical approximation result for higher order derivatives of functions, proved by M. Gromov. This book is the first to present an exacting record and exposition of all of the basic concepts and technical results of convex integration theory in higher order jet spaces, including the theory of iterated convex hull extensions and the theory of relative h-principles. A second feature of the book is its detailed presentation of applications of the general theory to topics in symplectic topology, divergence free vector fields on 3-manifolds, isometric immersions, totally real embeddings, u...
Generalized invariance principles and the theory of stability.
Lasalle, J. P.
1971-01-01
Description of some recent extensions of the invariance principle to more generalized dynamical systems where the state space is not locally compact and the flow is unique only in the forward direction of time. A sufficient condition for asymptotic stability of an invariant set is obtained which does not require that the Liapunov function be positive-definite. A recently developed generalized invariance principle is described which is applicable to functional differential equations, partial differential equations, and, in particular, to certain stability problems arising in thermoelasticity, viscoelasticity, and distributed nonlinear networks.
Shaolin Ji
2012-01-01
Full Text Available We study the optimal control problem of a controlled time-symmetric forward-backward doubly stochastic differential equation with initial-terminal state constraints. Applying the terminal perturbation method and Ekeland's variational principle, a necessary condition for stochastic optimal control, that is, a stochastic maximum principle, is derived. Applications to backward doubly stochastic linear-quadratic control models are investigated.
MBA theory and application of business and management principles
Davim, J
2016-01-01
This book focuses on the relevant subjects in the curriculum of an MBA program. Covering many different fields within business, this book is ideal for readers who want to prepare for a Master of Business Administration degree. It provides discussions and exchanges of information on principles, strategies, models, techniques, methodologies and applications in the business area.
Basic economic principles of road pricing: From theory to applications
Rouwendal, J.; Verhoef, E.T.
2006-01-01
This paper presents a non-technical introduction to the economic principles relevant for transport pricing design and analysis. We provide the basic rationale behind the pricing of externalities, discuss why simple Pigouvian tax rules that equate charges to marginal external costs are not optimal in 's
Basic economic principles of road pricing: From theory to applications
Rouwendal, J.; Verhoef, E.T.
2006-01-01
This paper presents a non-technical introduction to the economic principles relevant for transport pricing design and analysis. We provide the basic rationale behind the pricing of externalities, discuss why simple Pigouvian tax rules that equate charges to marginal external costs are not optimal in 's
Designing the Electronic Classroom: Applying Learning Theory and Ergonomic Design Principles.
Emmons, Mark; Wilkinson, Frances C.
2001-01-01
Applies learning theory and ergonomic principles to the design of effective learning environments for library instruction. Discusses features of electronic classroom ergonomics, including the ergonomics of physical space, environmental factors, and workstations; and includes classroom layouts. (Author/LRW)
The Correspondence Principle and the Founding of the Atomic Quantum Theory.
Liu, Hua-Xiang
1995-01-01
Presents a brief historical review and a discussion of the Bohr theory aimed at helping readers understand more completely the development of atomic quantum physics and comprehend more precisely and profoundly the essence of the correspondence principle. (JRH)
Principles of computer graphics theory and practice using OpenGL and Maya
Govil-Pai, Shalini
2004-01-01
Principles of Computer Graphics: Theory and Practice Using OpenGL and Maya' helps readers understand the principles of interactive computer graphics. Hands-on examples developed in OpenGL illustrate key concepts, and readers develop a professional animation, following traditional processes used in production houses.
Fundamentals of the theory of computation principles and practice
Greenlaw, Raymond
1998-01-01
This innovative textbook presents the key foundational concepts for a one-semester undergraduate course in the theory of computation. It offers the most accessible and motivational course material available for undergraduate computer theory classes. Directed at undergraduates who may have difficulty understanding the relevance of the course to their future careers, the text helps make them more comfortable with the techniques required for the deeper study of computer science. The text motivates students by clarifying complex theory with many examples, exercises and detailed proofs.
Chemical Principles Revisited: Updating the Atomic Theory in General Chemistry.
Whitman, Mark
1984-01-01
Presents a descriptive overview of recent achievements in atomic structure to provide instructors with the background necessary to enhance their classroom presentations. Topics considered include hadrons, quarks, leptons, forces, and the unified fields theory. (JN)
Quantum theory and statistical thermodynamics principles and worked examples
Hertel, Peter
2017-01-01
This textbook presents a concise yet detailed introduction to quantum physics. Concise, because it condenses the essentials to a few principles. Detailed, because these few principles – necessarily rather abstract – are illustrated by several telling examples. A fairly complete overview of the conventional quantum mechanics curriculum is the primary focus, but the huge field of statistical thermodynamics is covered as well. The text explains why a few key discoveries shattered the prevailing broadly accepted classical view of physics. First, matter appears to consist of particles which, when propagating, resemble waves. Consequently, some observable properties cannot be measured simultaneously with arbitrary precision. Second, events with single particles are not determined, but are more or less probable. The essence of this is that the observable properties of a physical system are to be represented by non-commuting mathematical objects instead of real numbers. Chapters on exceptionally simple, but h...
Chahim, M.; Hartl, R.F.; Kort, P.M.
2011-01-01
This paper considers a class of optimal control problems that allows jumps in the state variable. We present the necessary optimality conditions of the Impulse Control Maximum Principle based on the current value formulation. By reviewing the existing impulse control models in the literature, we poi
Ergodic theory and the duality principle on homogeneous spaces
Gorodnik, Alexander
2012-01-01
We prove mean and pointwise ergodic theorems for the action of a discrete lattice subgroup in a connected algebraic Lie group, on infinite volume homogeneous algebraic varieties. Under suitable necessary conditions, our results are quantitative, namely we establish rates of convergence in the mean and pointwise ergodic theorems, which can be estimated explicitly. Our results give a precise and in most cases optimal quantitative form to the duality principle governing dynamics on homogeneous spaces. We illustrate their scope in a variety of equidistribution problems.
Variational principle for theories with dissipation from analytic continuation
Floerchinger, Stefan
2016-01-01
The analytic continuation from the Euclidean domain to real space of the one-particle irreducible quantum effective action is discussed in the context of generalized local equilibrium states. Discontinuous terms associated with dissipative behavior are parametrized in terms of a conveniently defined sign operator. A generalized variational principle is then formulated, which allows to obtain causal and real dissipative equations of motion from the analytically continued quantum effective action. Differential equations derived from the implications of general covariance determine the space-time evolution of the temperature and fluid velocity fields and allow for a discussion of entropy production including a local form of the second law of thermodynamics.
Variational principle for theories with dissipation from analytic continuation
Floerchinger, Stefan
2016-09-01
The analytic continuation from the Euclidean domain to real space of the one-particle irreducible quantum effective action is discussed in the context of generalized local equilibrium states. Discontinuous terms associated with dissipative behavior are parametrized in terms of a conveniently defined sign operator. A generalized variational principle is then formulated, which allows to obtain causal and real dissipative equations of motion from the analytically continued quantum effective action. Differential equations derived from the implications of general covariance determine the space-time evolution of the temperature and fluid velocity fields and allow for a discussion of entropy production including a local form of the second law of thermodynamics.
On Mach's Principle and the "Special" Theory of Relativity
Ashura, Uzumaki
2016-01-01
First, we present a history of the school of thought that the Cosmic Microwave Background Radiation acts as an ether in language familiar to high school students in English-speaking countries. Then we illustrate the properties of this ether and of a hypothetical "test mass" using a brand new thought experiment. Finally, we recount some post-Einstein efforts at a mathematical formulation of Mach's principle and raise some questions about what implications it has for the locality of rotation and for quantum gravity. This paper does not prove Einstein wrong.
Some basic principles for the linear theory of piezoelectric micropolar elastodynamics
Anonymous
2010-01-01
According to the basic idea of classical yin-yang complementarity and modern dual-complementarity, in a simple and unified way proposed by Luo, some basic principles in the linear theory of piezoelectric micropolar elastodynamics can be established systematically. In this paper, an important integral relation in terms of convolutions is given, which can be considered as the generalized principle of virtual work in mechanics. Based on this relation, it is not only possible to obtain the principle of virtual work and the reciprocal theorem, but also to systematically derive the complementary functionals for the eleven-field, nine-field and six-field simplified Gurtin-type variational principles and the potential energy-functional for the three-field one in the linear theory of piezoelectric micropolar elastodynamics by the generalized Legendre transformations given in this paper. Furthermore, with this approach, the intrinsic relationships among various principles can be explained clearly.
Some basic principles in dynamic theory of viscoelastic materials with voids
2007-01-01
According to the basic idea of classical yin-yang complementarity and modern dual-complementarity, in a simple and unified way proposed by Luo, some basic principles in the dynamic theory of viscoelastic materials with voids can be established systematically. In this paper, an important integral relation in terms of convolutions is given, which can be considered as the generalized principle of virtual work in mechanics. Based on this relation, it is possible not only to obtain the principle of virtual work and the reciprocal theorem, but also to derive systematically the complementary functionals for the eight-field, six-field, four-field simplified Gurtin-type variational principles and the potential energy-functional for the two-field one in the dynamic theory of viscoelastic materials with voids by the generalized Legendre transformations given in this paper. Furthermore, with this approach, the intrinsic relationship among various principles can be explained clearly.
Some basic principles in dynamic theory of viscoelastic materials with voids
LUO En; LI WeiHua
2007-01-01
According to the basic idea of classical yin-yang complementarity and modern dual-complementarity, in a simple and unified way proposed by Luo, some basic principles in the dynamic theory of viscoelastic materials with voids can be established systematically. In this paper, an important integral relation in terms of convolutions is given, which can be considered as the generalized principle of virtual work in mechanics. Based on this relation, it is possible not only to obtain the principle of virtual work and the reciprocal theorem, but also to derive systematically the complementary functionals for the eight-field, six-field, four-field simplified Gurtin-type variational principles and the potential energy-functional for the two-field one in the dynamic theory of viscoelastic materials with voids by the generalized Legendre transformations given in this paper. Furthermore, with this approach, the intrinsic relationship among various principles can be explained clearly.
Measurement Invariance: A Foundational Principle for Quantitative Theory Building
Nimon, Kim; Reio, Thomas G., Jr.
2011-01-01
This article describes why measurement invariance is a critical issue to quantitative theory building within the field of human resource development. Readers will learn what measurement invariance is and how to test for its presence using techniques that are accessible to applied researchers. Using data from a LibQUAL+[TM] study of user…
First-principles theory of inelastic currents in a scanning tunneling microscope
Stokbro, Kurt; Hu, Ben Yu-Kuang; Thirstrup, C.
1998-01-01
A first-principles theory of inelastic tunneling between a model probe tip and an atom adsorbed on a surface is presented, extending the elastic tunneling theory of Tersoff and Hamann. The inelastic current is proportional to the change in the local density of states at the center of the tip due ...
Rui A. P. Perdigão
2012-06-01
Full Text Available The application of the Maximum Entropy (ME) principle leads to a minimum of the Mutual Information (MI), I(X,Y), between random variables X and Y, which is compatible with prescribed joint expectations and given ME marginal distributions. A sequence of sets of joint constraints leads to a hierarchy of lower MI bounds increasingly approaching the true MI. In particular, using standard bivariate Gaussian marginal distributions, it allows for the MI decomposition into two positive terms: the Gaussian MI (I_g), depending upon the Gaussian correlation or the correlation between ‘Gaussianized variables’, and a non-Gaussian MI (I_ng), coinciding with joint negentropy and depending upon nonlinear correlations. Joint moments of a prescribed total order p are bounded within a compact set defined by Schwarz-like inequalities, where I_ng grows from zero at the ‘Gaussian manifold’, where moments are those of Gaussian distributions, towards infinity at the set’s boundary, where a deterministic relationship holds. Sources of joint non-Gaussianity have been systematized by estimating I_ng between the input and output of a nonlinear synthetic channel contaminated by multiplicative and non-Gaussian additive noises for a full range of signal-to-noise ratio (snr) variances. We have studied the effect of varying snr on I_g and I_ng under several signal/noise scenarios.
Preisach theory and the principle of loss separation
L. Dupré
2001-01-01
Full Text Available In this paper we present two simplified methods for the evaluation of magnetisation loops in laminated SiFe alloys, using the Preisach theory and the statistical loss theory. These methods are investigated in detail as a practical alternative to a very accurate, but much more involved, numerical approach, viz. a combined lamination model and dynamic Preisach model developed earlier by the authors. In particular, one of the two methods provides accurate results in spite of a dramatic reduction of CPU time in comparison with the earlier combined model. For the other simplified method, the reduction of CPU time is less pronounced but still considerable, and the results are fairly good.
Deep ecology: Creation of theory, principles and perspectives
Ćorić Dragana
2012-01-01
Full Text Available From landlord to victim, nature and our environment have come a long way. Nature is destroyed by man, the only creature to have come from her. Deep ecology is a philosophical theory that seeks to lead us to protect and respect our environment again, as we once did. While trying to reconcile the dysfunctional parts of our lives with our industrialized present, deep ecology has tried to resolve some rather important ecological issues. Although deep ecology is sometimes dismissed as a general and not very successful ecological philosophy, or as quite misanthropic, even bordering on the terroristic, the theory has remained popular through the decades, especially in the USA and the Scandinavian countries. We think it is important to know where its 'rights' and 'wrongs' lie, so that we can make our future better.
Student friendly quantum field theory basic principles & quantum electrodynamics
Klauber, Robert D
2013-01-01
By incorporating extensive student input and innovative teaching methodologies, this book aims to make the process of learning quantum field theory easier, and thus more rapid, profound, and efficient, for both students and instructors. Comprehensive explanations are favored over conciseness, every step in derivations is included, and ‘big picture’ overviews are provided throughout. Typical student responses indicate how well the text achieves its aim.
First principles theory of disordered alloys and alloy phase stability
Stocks, G.M.; Nicholson, D.M.C.; Shelton, W.A. [and others]
1993-06-05
These lecture notes review the LDA-KKR-CPA method for treating the electronic structure and energetics of random alloys, and the MF-CF and GPM theories of ordering and phase stability built on the LDA-KKR-CPA description of the disordered phase. Section 2 lays out the basic LDA-KKR-CPA theory of random alloys and some applications. Section 3 reviews the progress made in understanding specific ordering phenomena in binary solid solutions based on the MF-CF and GPM theories of ordering and phase stability. Examples are Fermi surface nesting, band filling, off-diagonal randomness, charge transfer, size difference or local strain fluctuations, and magnetic effects; in each case, an attempt is made to link the ordering to the underlying electronic structure of the disordered phase. Section 4 reviews calculations of the electronic structure of β-phase Ni_cAl_{1-c} alloys using a version of the LDA-KKR-CPA codes generalized to complex lattices.
Principles of hyperplasticity an approach to plasticity theory based on thermodynamic principles
Houlsby, Guy T
2007-01-01
A new approach to plasticity theory, firmly rooted in and compatible with the laws of thermodynamics. Provides a common basis for the formulation and comparison of many existing plasticity models. Incorporates an introduction to elasticity, plasticity, thermodynamics, and their interactions. Shows the reader how to formulate constitutive models completely specified by two scalar potential functions, from which the incremental response of any hyperplastic model can be derived.
Computer-based teaching module design: principles derived from learning theories.
Lau, K H Vincent
2014-03-01
The computer-based teaching module (CBTM), which has recently gained prominence in medical education, is a teaching format in which a multimedia program serves as a single source for knowledge acquisition rather than playing an adjunctive role as it does in computer-assisted learning (CAL). Despite empirical validation in the past decade, there is limited research into the optimisation of CBTM design. This review aims to summarise research in classic and modern multimedia-specific learning theories applied to computer learning, and to collapse the findings into a set of design principles to guide the development of CBTMs. Scopus was searched for: (i) studies of classic cognitivism, constructivism and behaviourism theories (search terms: 'cognitive theory' OR 'constructivism theory' OR 'behaviourism theory' AND 'e-learning' OR 'web-based learning') and their sub-theories applied to computer learning, and (ii) recent studies of modern learning theories applied to computer learning (search terms: 'learning theory' AND 'e-learning' OR 'web-based learning') for articles published between 1990 and 2012. The first search identified 29 studies, dominated in topic by the cognitive load, elaboration and scaffolding theories. The second search identified 139 studies, with diverse topics in connectivism, discovery and technical scaffolding. Based on their relative representation in the literature, the applications of these theories were collapsed into a list of CBTM design principles. Ten principles were identified and categorised into three levels of design: the global level (managing objectives, framing, minimising technical load); the rhetoric level (optimising modality, making modality explicit, scaffolding, elaboration, spaced repetition); and the detail level (managing text, managing devices). This review examined the literature in the application of learning theories to CAL to develop a set of principles that guide CBTM design. Further research will enable educators to
Maximum Entropy Methods as the Bridge Between Microscopic and Macroscopic Theory
Taylor, Jamie M.
2016-09-01
This paper is concerned with an investigation into a function of macroscopic variables known as the singular potential, building on previous work by Ball and Majumdar. The singular potential is a function of the admissible statistical averages of probability distributions on a state space, defined so that it corresponds to the maximum possible entropy given known observed statistical averages, although non-classical entropy-like objective functions will also be considered. First the set of admissible moments must be established, and under the conditions presented in this work the set is open, bounded and convex allowing a description in terms of supporting hyperplanes, which provides estimates on the development of singularities for related probability distributions. Under appropriate conditions it is shown that the singular potential is strictly convex, as differentiable as the microscopic entropy, and blows up uniformly as the macroscopic variable tends to the boundary of the set of admissible moments. Applications of the singular potential are then discussed, and particular consideration will be given to certain free-energy functionals typical in mean-field theory, demonstrating an equivalence between certain microscopic and macroscopic free-energy functionals. This allows statements about L^1-local minimisers of Onsager's free energy to be obtained which cannot be given by two-sided variations, and overcomes the need to ensure local minimisers are bounded away from zero and +∞ before taking L^∞ variations. The analysis also permits the definition of a dual order parameter for which Onsager's free energy allows an explicit representation. Also, the difficulties in approximating the singular potential by everywhere defined functions, in particular by polynomial functions, are addressed, with examples demonstrating the failure of the Taylor approximation to preserve relevant shape properties of the singular potential.
Pilot-Wave Quantum Theory in Discrete Space and Time and the Principle of Least Action
Gluza, Janusz; Kosek, Jerzy
2016-11-01
The idea of obtaining a pilot-wave quantum theory on a lattice with discrete time is presented. The motion of quantum particles is described by a |Ψ |^2-distributed Markov chain. Stochastic matrices of the process are found by the discrete version of the least-action principle. Probability currents are the consequence of Hamilton's principle and the stochasticity of the Markov process is minimized. As an example, stochastic motion of single particles in a double-slit experiment is examined.
An Evolutionary Comparison of the Handicap Principle and Hybrid Equilibrium Theories of Signaling.
Kane, Patrick; Zollman, Kevin J S
2015-01-01
The handicap principle has come under significant challenge both from empirical studies and from theoretical work. As a result, a number of alternative explanations for honest signaling have been proposed. This paper compares the evolutionary plausibility of one such alternative, the "hybrid equilibrium," to the handicap principle. We utilize computer simulations to compare these two theories as they are instantiated in Maynard Smith's Sir Philip Sidney game. We conclude that, when both types of communication are possible, evolution is unlikely to lead to handicap signaling and is far more likely to result in the partially honest signaling predicted by hybrid equilibrium theory.
First-Principles Atomic Force Microscopy Image Simulations with Density Embedding Theory.
Sakai, Yuki; Lee, Alex J; Chelikowsky, James R
2016-05-11
We present an efficient first-principles method for simulating noncontact atomic force microscopy (nc-AFM) images using a "frozen density" embedding theory. Frozen density embedding theory enables one to efficiently compute the tip-sample interaction by considering a sample as a frozen external field. This method reduces the extensive computational load of first-principles AFM simulations by avoiding consideration of the entire tip-sample system and focusing on the tip alone. We demonstrate that our simulation with frozen density embedding theory accurately reproduces full density functional theory simulations of freestanding hydrocarbon molecules while the computational time is significantly reduced. Our method also captures the electronic effect of a Cu(111) substrate on the AFM image of pentacene and reproduces the experimental AFM image of Cu2N on a Cu(100) surface. This approach is applicable for theoretical imaging applications on large molecules, two-dimensional materials, and materials surfaces.
Dynamics of test bodies in scalar-tensor theory and equivalence principle
Obukhov, Yuri N
2016-01-01
How do test bodies move in scalar-tensor theories of gravitation? We provide an answer to this question on the basis of a unified multipolar scheme. In particular, we give the explicit equations of motion for pointlike, as well as spinning test bodies, thus extending the well-known general relativistic results of Mathisson, Papapetrou, and Dixon to scalar-tensor theories of gravity. We demonstrate the validity of the equivalence principle for test bodies.
Bing SUN; Baozhu GUO
2005-01-01
This paper is concerned with an optimal control problem for an ablation-transpiration cooling control system with a Stefan-Signorini boundary condition. As a continuation of the authors' previous paper, the Dubovitskii-Milyutin functional approach is again adopted to investigate Pontryagin's maximum principle for the system. The necessary optimality condition is presented for the problem with free final horizon and phase constraints.
Effective Principles In Designing E-Course In Light Of Learning Theories
Muhammad K. AFIFI
2014-01-01
Full Text Available The researchers conducted an exploratory study to determine the design quality of some E-courses delivered via the web to a number of colleagues at the university. Results revealed a number of shortcomings in the design of these courses, mostly due to the absence of effective principles in the design of these E-courses, especially principles of pedagogy in relation to learning theories. So, this study seeks to identify effective principles in the design of courses for internet-based learning in the light of current learning theories, by answering the following question: What are the most effective principles when designing E-learning courses in the light of current learning theories? After an extensive review and analysis of the literature and previous studies relating to quality standards for the instructional design of E-courses delivered via the web, in particular, and quality standards for E-learning, in general, the results of this study revealed a number of principles for course design in E-learning. These are: identifying learning and performance outcomes; identifying methods and strategies of learning; designing learning activities; providing feedback and motivating the learner and determining the context and impact of learning. In the light of the findings of this study, with reference to the literature, we present a set of recommendations and pedagogical implications for professionals working in course design in E-learning at University of Dammam.
Kamenshchik, A Yu
2013-01-01
We suggest combining the Anthropic Principle with the Many-Worlds Interpretation of quantum theory. By recognizing the multiplicity of worlds, this combination offers an explanation for certain important events which are assumed to be extremely improbable. The Mesoscopic Anthropic Principle suggested here aims to explain the appearance of those events which are necessary for the emergence of Life and Mind. It is complementary to the Cosmological Anthropic Principle, which explains the fine-tuning of the fundamental constants. We briefly discuss various possible applications of the Mesoscopic Anthropic Principle, including solar eclipses and the assembly of complex molecules. Besides, we address the problem of Time's Arrow in the framework of the Many-Worlds Interpretation. We suggest a recipe for disentangling quantities defined by fundamental physical laws from those fixed by anthropic selection. The main emphasis is on the problem of biological evolution.
Pankovic, Vladan
2010-01-01
In this work we consider some consequences of the Bohr-Sommerfeld-Hansson (old, or quasi-classical) quantum theory of Newtonian gravity, i.e. of the "gravitational atom". We prove that in this case (for a gravitational central force and quantized angular momentum) the centrifugal acceleration becomes formally dependent (proportional to the fourth power) on the mass of the "gravitational electron" rotating around the "gravitational nucleus", for any quantum number (state). This seemingly leads to a paradoxical breaking of the relativistic equivalence principle, which contradicts real experimental data. We demonstrate that this breaking of the equivalence principle does not really appear in the (quasi-classical) quantum theory, but necessarily appears only in a hypothetical extension of the quantum theory that requires a classical-like interpretation of the Bohr-Sommerfeld angular momentum quantization postulate. It is, in some sense, similar to the Bell-Aspect analysis, which points out that a hypothetical determinis...
The principle of maximum entropy and its applications in ecology
邢丁亮; 郝占庆
2011-01-01
The principle of maximum entropy (MaxEnt) originated in information theory and statistical mechanics, and has been widely employed in a variety of contexts. MaxEnt provides a statistical inference of unknown distributions on the basis of partial knowledge, without taking into account any unknown information. Recently there has been growing interest in the use of MaxEnt in ecology. In this review, to provide an intuitive understanding of the principle, we first use the example of dice throwing to demonstrate the underlying basis of MaxEnt, and list the steps one should take when applying the principle. We then focus on its applications in several fields of ecology and biodiversity, including the prediction of species relative abundances using community-aggregated traits (CATs), the MaxEnt niche model of biogeography based on environmental factors, the study of macroecological patterns such as the species abundance distribution (SAD) and the species-area relationship (SAR), inference of species interactions from species abundance matrices or mere occurrence (presence/absence) data, and the prediction of food web degree distributions. We also highlight the main debates about these applications and some recent tests of the models' strengths and limitations. We conclude by discussing some points ecologists should keep in mind when using MaxEnt.
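The dice example mentioned in the review can be made concrete. A minimal sketch (illustrative, not the review's code): given only the mean face value of a die, the MaxEnt distribution on the faces {1,...,6} has the exponential-family form p_i proportional to exp(lam * i), with lam fixed by the mean constraint; lam = 0 (the uniform distribution) recovers the fair die with mean 3.5.

```python
import math

def maxent_die(mean_target, faces=6, tol=1e-12):
    # MaxEnt distribution on {1..faces} subject to a prescribed mean:
    # p_i ∝ exp(lam * i). The tilting parameter lam is found by bisection,
    # since the tilted mean is monotonically increasing in lam.
    def mean_for(lam):
        w = [math.exp(lam * i) for i in range(1, faces + 1)]
        z = sum(w)
        return sum(i * wi for i, wi in enumerate(w, start=1)) / z

    lo, hi = -50.0, 50.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mean_for(mid) < mean_target:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    w = [math.exp(lam * i) for i in range(1, faces + 1)]
    z = sum(w)
    return [wi / z for wi in w]

# A constrained mean above 3.5 tilts probability toward the high faces.
print(maxent_die(4.5))
```

The same construction (exponential tilting with Lagrange multipliers) underlies the ecological applications listed above, with traits or environmental factors playing the role of the constraints.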
A new orthogonality relationship for orthotropic thin plate theory and its variational principle
LUO; Jianhui; LONG; Yuqiu
2005-01-01
The idea of how dual vectors are constructed in a new orthogonality relationship for the theory of elasticity is generalized to orthotropic thin-plate bending problems, using the analogy between plane elasticity problems and plate bending problems. Dual differential equations are obtained directly by a mixed-variables method. The dual differential matrix derived has the peculiarity that its principal diagonal sub-matrices are zero matrices. Two independent, symmetric orthogonality sub-relationships are discovered. Using the integral form of the elastic bending theory of orthotropic thin plates, the orthogonality relationship is demonstrated. By selecting suitable dual vectors, the new orthogonality relationship for the theory of elasticity can be generalized to the elastic bending theory of orthotropic thin plates. Using the integral form, a variational principle corresponding to the differential form and a whole-function expression are proposed.
Asselmeyer-Maluga, Torsten
2016-01-01
In this book, leading theorists present new contributions and reviews addressing longstanding challenges and ongoing progress in spacetime physics. In the anniversary year of Einstein's General Theory of Relativity, developed 100 years ago, this collection reflects the subsequent and continuing fruitful development of spacetime theories. The volume is published in honour of Carl Brans on the occasion of his 80th birthday. Carl H. Brans, who also contributes personally, is a creative and independent researcher and one of the founders of the scalar-tensor theory, also known as Jordan-Brans-Dicke theory. In the present book, much space is devoted to scalar-tensor theories. Since the beginning of the 1990s, Brans has worked on new models of spacetime, collectively known as exotic smoothness, a field largely established by him. In this Festschrift, one finds an outstanding and unique collection of articles about exotic smoothness. Also featured are Bell's inequality and Mach's principle. Personal memories and hist...
Uniform estimate for maximum of randomly weighted sums with applications to insurance risk theory
WANG Dingcheng; SU Chun; ZENG Yong
2005-01-01
This paper obtains a uniform estimate for the maximum of sums of independent heavy-tailed random variables with nonnegative random weights, which may be arbitrarily dependent on each other. Applications to ruin probabilities in a discrete-time risk model with dependent stochastic returns are then considered.
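The discrete-time risk model with stochastic returns can be illustrated with a small Monte Carlo sketch. All distributional choices below (Pareto-type net losses for heavy tails, uniform discount weights) are illustrative assumptions, not the paper's model:

```python
import numpy as np

def ruin_probability(u, horizon=50, n_paths=200_000, seed=42):
    # Estimate P(max_k sum_{i<=k} W_1*...*W_i * X_i > u) by Monte Carlo:
    # X_i are heavy-tailed net losses per period, W_i are random positive
    # discount weights, u is the initial reserve.
    rng = np.random.default_rng(seed)
    X = rng.pareto(2.5, size=(n_paths, horizon)) - 0.5   # net losses (heavy-tailed)
    W = rng.uniform(0.8, 1.0, size=(n_paths, horizon))   # stochastic discount weights
    discounted = np.cumprod(W, axis=1) * X               # randomly weighted summands
    running_max = np.maximum.accumulate(np.cumsum(discounted, axis=1), axis=1)
    return float(np.mean(running_max[:, -1] > u))

print(ruin_probability(5.0))
```

As expected, the estimated ruin probability decreases in the initial reserve u; the paper's uniform estimate controls exactly this maximum of randomly weighted sums.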
Heshi, Kamal Nosrati; Nasrabadi, Hassanali Bakhtiyar
2016-01-01
The present paper attempts to identify principles and methods of education based on Wittgenstein's picture theory of language. This qualitative research utilized an inferential analytical approach to review the related literature and extracted a set of principles and methods from his picture theory of language. Findings revealed that Wittgenstein…
Brizard, Alain J.
2017-08-01
The nonlinear (full-f) electromagnetic gyrokinetic Vlasov-Maxwell equations are derived in the parallel-symplectic representation from an Eulerian gyrokinetic variational principle. The gyrokinetic Vlasov-Maxwell equations are shown to possess an exact energy conservation law, which is derived by the Noether method from the gyrokinetic variational principle. Here, the gyrocenter Poisson bracket and the gyrocenter Jacobian contain contributions from the perturbed magnetic field. In the full-f formulation of the gyrokinetic Vlasov-Maxwell theory presented here, the gyrocenter parallel-Ampère equation contains a second-order contribution to the gyrocenter current density that is derived from the second-order gyrocenter ponderomotive Hamiltonian.
First-principles Theory of the Momentum-dependent Local Ansatz for Correlated Electron System
Chandra, Sumal; Kakehashi, Yoshiro
The momentum-dependent local-ansatz (MLA) wavefunction describes correlated electrons in solids well, in both the weak and strong interaction regimes. In order to apply the theory to realistic systems, we have extended the MLA to a first-principles version using the tight-binding LDA+U Hamiltonian. We demonstrate for paramagnetic Fe that the first-principles MLA can describe a reasonable correlation-energy gain and the suppression of charge fluctuations due to electron correlations. Furthermore, we show that the MLA yields a distinct momentum dependence of the momentum distribution, and thus improves on the Gutzwiller wavefunction.
Superfluid density in He II near the lambda transition: First principles theory
Jackson, H.W., E-mail: hwjackson2@gmail.com
2015-03-15
A first principles theory of the λ transition in liquid ⁴He was introduced in a recent paper [H. W. Jackson, J. Low Temp. Phys. 155, 1 (2009)]. In that theory, critical fluctuations consisting of isothermal fourth-sound waves are treated with quantum statistical mechanics methods in deriving constant-volume formulas for the specific heat, correlation length, equal-time pair correlation function, and isothermal compressibility. To leading order in (T_λ − T), the theory yields the exact results α′ = 0 and ν′ = 2/3 for the critical exponents at constant volume. A follow-up study in the present paper demonstrates by a least-squares fit that a logarithmic function accurately describes the specific heat at svp when (T_λ − T) is between 10⁻⁹ K and 10⁻⁵ K. This logarithmically divergent behavior conflicts with previous analyses of experimental data and with predictions of renormalization group theory that the constant-pressure specific heat is finite at T_λ, but is thermodynamically consistent with the logarithmic asymptotic behavior of the specific heat at constant volume predicted in the new theory. The first-principles theory is extended in this paper to derive formulas for the superfluid density and for a relation between superfluid density and correlation length in He II near T_λ. Numerical results based on these formulas are in good agreement with experimental data produced by second-sound measurements.
Oyola, Yatsandra; Vukovic, Sinisa; Dai, Sheng
2016-05-28
Amidoxime-based polymer adsorbents have attracted interest within the last decade due to their high adsorption capacities for uranium and other rare earth metals from seawater. The ocean contains an estimated 4-5 billion tons of uranium, and even though amidoxime-based adsorbents have demonstrated the highest uranium adsorption capacities to date, they are still economically impractical because of their limited recyclability. Typically, the adsorbed metals are eluted with a dilute acid solution that not only damages the amidoxime groups (metal adsorption sites), but is also not strong enough to remove the strongly bound vanadium, which decreases the adsorption capacity with each cycle. We resolved this challenge by applying Le Chatelier's principle to recycle adsorbents indefinitely. We used a solution with a high concentration of amidoxime-like chelating agents, such as hydroxylamine, to desorb nearly 100% of the adsorbed metals, including vanadium, without damaging the metal adsorption sites, thus preserving the high adsorption capacity. The method takes advantage of knowing the binding mode between the amidoxime ligand and the metal, and mimics it with chelating agents that then, in Le Chatelier's manner, remove the metals by shifting to a new chemical equilibrium. For this reason the method is applicable to any ligand-metal adsorbent, and it will make an impact on other extraction technologies.
戴天民
2001-01-01
The aim of this paper is to establish new principles of power and energy rate of incremental type in generalized continuum mechanics. By combining new principles of virtual velocity and virtual angular velocity, as well as of virtual stress and virtual couple stress, with cross terms of incremental rate type, a new principle of power and energy rate of incremental rate type with cross terms for micropolar continuum field theories is presented. From it, all corresponding equations of motion and boundary conditions, as well as power and energy rate equations of incremental rate type, for micropolar and nonlocal micropolar continua are derived with the help of generalized Piola's theorems, in full and without any additional requirement. Complete results for micromorphic continua could be derived similarly. The results derived in the present paper are believed to be new. They could be used to establish corresponding finite element methods of incremental rate type for generalized continuum mechanics.
Meng Zhang
2016-02-01
Full Text Available Conceptualization in theory development has received limited consideration despite its frequently stressed importance in Information Systems research. This paper focuses on the role of construct clarity in conceptualization, arguing that construct clarity should be considered an essential criterion for evaluating conceptualization and that a focus on construct clarity can advance conceptualization methodology. Drawing from Facet Theory literature, we formulate a set of principles for assessing construct clarity, particularly regarding a construct’s relationships to its extant related constructs. Conscious and targeted attention to this criterion can promote a research ecosystem more supportive of knowledge accumulation.
Furbish, David J.; Schmeeckle, Mark; Schumer, Rina; Fathel, Siobhan L.
2016-01-01
We describe the most likely forms of the probability distributions of bed load particle velocities, accelerations, hop distances, and travel times, in a manner that formally appeals to inferential statistics while honoring mechanical and kinematic constraints imposed by equilibrium transport conditions. The analysis is based on E. Jaynes's elaboration of the implications of the similarity between the Gibbs entropy in statistical mechanics and the Shannon entropy in information theory. By maximizing the information entropy of a distribution subject to known constraints on its moments, our choice of the form of the distribution is unbiased. The analysis suggests that particle velocities and travel times are exponentially distributed and that particle accelerations follow a Laplace distribution with zero mean. Particle hop distances, viewed alone, ought to be distributed exponentially. However, the covariance between hop distances and travel times precludes this result. Instead, the covariance structure suggests that hop distances follow a Weibull distribution. These distributions are consistent with high-resolution measurements obtained from high-speed imaging of bed load particle motions. The analysis brings us closer to choosing distributions based on our mechanical insight.
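The flavor of the entropy-maximization argument can be checked numerically. A hedged sketch (not the authors' code): among densities on [0, ∞) with a fixed mean, the exponential density maximizes the differential entropy; here it is compared against a half-normal density with the same mean.

```python
import numpy as np

x = np.linspace(1e-8, 40.0, 400_000)
dx = np.diff(x)
mu = 1.0  # the common mean constraint

# Exponential density with mean mu: the MaxEnt solution on [0, inf)
# when only the mean is prescribed (Jaynes).
f_exp = np.exp(-x / mu) / mu

# Competitor with the same support and mean: a half-normal density,
# with scale sigma chosen so that its mean sigma*sqrt(2/pi) equals mu.
sigma = mu * np.sqrt(np.pi / 2.0)
f_hn = np.sqrt(2.0 / np.pi) / sigma * np.exp(-x ** 2 / (2.0 * sigma ** 2))

def integrate(g):
    # trapezoid rule on the fixed grid
    return float(np.sum(0.5 * (g[1:] + g[:-1]) * dx))

def entropy(f):
    # differential entropy -∫ f ln f dx; points with f == 0 contribute 0
    g = np.where(f > 0, f, 1.0)
    return integrate(-g * np.log(g))

# Both densities satisfy the same mean constraint, and the exponential
# attains the larger entropy (analytically 1 + ln(mu)).
print(entropy(f_exp), entropy(f_hn))
```

The same maximization with a travel-time covariance constraint added is what pushes the hop-distance distribution away from the exponential toward the Weibull form described above.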
Mudunuru, M. K.; Nakshatrala, K. B.
2016-01-01
We present a robust computational framework for advective-diffusive-reactive systems that satisfies maximum principles, the non-negative constraint, and element-wise species balance property. The proposed methodology is valid on general computational grids, can handle heterogeneous anisotropic media, and provides accurate numerical solutions even for very high Péclet numbers. The significant contribution of this paper is to incorporate advection (which makes the spatial part of the differential operator non-self-adjoint) into the non-negative computational framework, and overcome numerical challenges associated with advection. We employ low-order mixed finite element formulations based on least-squares formalism, and enforce explicit constraints on the discrete problem to meet the desired properties. The resulting constrained discrete problem belongs to convex quadratic programming for which a unique solution exists. Maximum principles and the non-negative constraint give rise to bound constraints while element-wise species balance gives rise to equality constraints. The resulting convex quadratic programming problems are solved using an interior-point algorithm. Several numerical results pertaining to advection-dominated problems are presented to illustrate the robustness, convergence, and the overall performance of the proposed computational framework.
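The constrained discrete problem described above is a convex quadratic program with bound and equality constraints. A toy illustration under stated assumptions (a least-squares objective with non-negativity bounds and a single "balance" equality; the paper uses an interior-point solver, while this sketch substitutes SciPy's SLSQP):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
A = rng.normal(size=(8, 4))   # toy least-squares operator
b = rng.normal(size=8)
total = 1.0                   # stand-in for a species-balance equality

def objective(x):
    # convex quadratic objective 1/2 * ||A x - b||^2
    r = A @ x - b
    return 0.5 * float(r @ r)

res = minimize(
    objective,
    x0=np.full(4, total / 4),                        # feasible starting point
    method="SLSQP",
    bounds=[(0.0, None)] * 4,                        # non-negative constraint
    constraints=[{"type": "eq",
                  "fun": lambda x: float(x.sum() - total)}],
)
x_opt = res.x
```

Because the feasible set is convex and the objective is strictly convex on it, the minimizer is unique, which is the property the framework relies on.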
A construction principle for ADM-type theories in maximal slicing gauge
Gomes, Henrique
2013-01-01
The differing concepts of time in general relativity and quantum mechanics are widely accused as the main culprits in our persistent failure to find a complete theory of quantum gravity. Here we address this issue by constructing ADM-type theories in a particular time gauge directly from first principles. The principles are expressed as conditions on phase space constraints: we search for two sets of spatially covariant constraints which generate symmetries (are first class) and gauge-fix each other, leaving two propagating degrees of freedom. One of the sets is the Weyl generator tr(π), and the other is a one-parameter family containing the ADM scalar constraint λR − β(π^{ab}π_{ab} + (tr(π))²/2). The two sets of constraints can be seen as defining ADM-type theories with a maximal-slicing gauge fixing. The principles above are motivated by a heuristic argument relying on the relation between symmetry doubling and exact renormalization arguments for quantum gravity, aside...
Nathaniel James Siebert Ashby
2011-10-01
Full Text Available Daily we make decisions ranging from the mundane to the seemingly pivotal that shape our lives. Assuming rationality, all relevant information about one’s options should be thoroughly examined in order to make the best choice. However, some findings suggest that under specific circumstances thinking too much has disadvantageous effects on decision quality and that it might be best to let the unconscious do the busy work. In three studies we test the capacity assumption and the appropriate weighting principle of unconscious thought theory using a classic risky choice paradigm and including a ‘deliberation with information’ condition. Although we replicate an advantage for unconscious thought over ‘deliberation without information’, we find that ‘deliberation with information’ equals or outperforms unconscious thought in risky choices. These results speak against the generality of the assumption that unconscious thought has a higher capacity for information integration and show that this capacity assumption does not hold in all domains. We furthermore show that ‘deliberate thought with information’ leads to more differentiated knowledge compared to unconscious thought which speaks against the generality of the appropriate weighting assumption.
戴天民
2003-01-01
The purpose is to reestablish the balance laws of momentum, angular momentum and energy, and to derive the corresponding local and nonlocal balance equations for micromorphic continuum mechanics and couple stress theory. The desired results for micromorphic continuum mechanics and couple stress theory are naturally obtained via direct transitions and reductions from the coupled conservation law of energy for micropolar continuum theory, respectively. The basic balance laws and equations for micromorphic continuum mechanics and couple stress theory are constituted by combining the results derived here with the traditional conservation laws and equations of mass and microinertia and the entropy inequality. The incomplete aspects of the related earlier continuum theories are clarified. Finally, some special cases are conveniently derived.
Limit Distribution Theory for Maximum Likelihood Estimation of a Log-Concave Density.
Balabdaoui, Fadoua; Rufibach, Kaspar; Wellner, Jon A
2009-06-01
We find limiting distributions of the nonparametric maximum likelihood estimator (MLE) of a log-concave density, i.e. a density of the form f_0 = exp(φ_0), where φ_0 is a concave function on R. Existence, form, characterizations and uniform rates of convergence of the MLE are given by Rufibach (2006) and Dümbgen and Rufibach (2007). The characterization of the log-concave MLE in terms of distribution functions is the same (up to sign) as the characterization of the least squares estimator of a convex density on [0, ∞) as studied by Groeneboom, Jongbloed and Wellner (2001b). We use this connection to show that the limiting distributions of the MLE and its derivative are, under comparable smoothness assumptions, the same (up to sign) as in the convex density estimation problem. In particular, changing the smoothness assumptions of Groeneboom, Jongbloed and Wellner (2001b) slightly by allowing some higher derivatives to vanish at the point of interest, we find that the pointwise limiting distributions depend on the second and third derivatives at 0 of H_(k), the "lower invelope" of an integrated Brownian motion process minus a drift term depending on the number of vanishing derivatives of φ_0 = log f_0 at the point of interest. We also establish the limiting distribution of the resulting estimator of the mode M(f_0) and establish a new local asymptotic minimax lower bound which shows the optimality of our mode estimator in terms of both rate of convergence and dependence of constants on population values.
The breaking of the Equivalence Principle in theories with varying α
Kraiselburd, Lucila; Vucetich, Héctor
2010-11-01
The Standard Model and General Relativity provide a good description of phenomena at low energy. These theories, which agree very well with experiment, contain a set of parameters called "fundamental constants", which are assumed invariant under changes of location and reference system. However, their possible variation has been studied ever since Dirac made the large numbers hypothesis (LNH). Moreover, unified field theories and extra-dimension theories such as Kaluza-Klein or superstring theories predict not only the variation of these constants, but also the simultaneity of the variations. The Eötvös effect is one of the most sensitive indicators of changes in fundamental constants. Bekenstein (2002) showed that in his theory, using a classical static particle model of matter, there is no Eötvös effect, so that the Universality of Free Fall and the Principle of Equivalence are satisfied. We present results different from those obtained by Bekenstein (Kraiselburd and Vucetich (2009)). Modifying his theory, taking more realistic models of matter, and using THεμ model techniques (Lightman-Lee (1975) and Haugan (1979)), not used before to analyze this model, very small but measurable effects have been found.
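For reference, Eötvös-type tests of the kind invoked above are conventionally quantified by a dimensionless parameter η comparing the free-fall accelerations of two test bodies A and B; the abstract does not spell this out, so the following is the standard definition rather than the authors' notation:

```latex
\eta(A,B) \;\equiv\; 2\,\frac{\lvert a_A - a_B \rvert}{\lvert a_A + a_B \rvert}.
```

A nonzero η for bodies of different composition signals a violation of the Universality of Free Fall; in varying-α theories, η generically picks up a contribution proportional to the difference in the electromagnetic binding-energy fractions of the two bodies.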
Ding, Y. H.; Hu, S. X.
2017-06-01
Beryllium has been considered a superior ablator material for inertial confinement fusion (ICF) target designs. An accurate equation of state (EOS) of beryllium under extreme conditions is essential for reliable ICF designs. Based on density-functional theory (DFT) calculations, we have established a wide-range beryllium EOS table covering densities ρ = 0.001 to 500 g/cm³ and temperatures T = 2000 to 10⁸ K. Our first-principles equation-of-state (FPEOS) table is in better agreement with the widely used SESAME EOS table (SESAME 2023) than the average-atom INFERNO and Purgatorio models. For the principal Hugoniot, our FPEOS prediction is ~10% stiffer than the latter two models at maximum compression. Although the existing experimental data (only up to 17 Mbar) cannot distinguish these EOS models, we anticipate that high-pressure experiments in the maximum-compression region should differentiate our FPEOS from the INFERNO and Purgatorio models. Comparisons between FPEOS and SESAME EOS for off-Hugoniot conditions show that the differences in pressure and internal energy are within ~20%. By implementing the FPEOS table into the 1-D radiation-hydrodynamic code LILAC, we studied EOS effects on beryllium-shell-target implosions. The FPEOS simulation predicts a ~15% higher neutron yield compared to the simulation using the SESAME 2023 EOS table.
Kheyfets, Arkady; Miller, Warner A.
1991-11-01
The boundary of a boundary principle has been suggested by J. A. Wheeler as a realization of the austerity idea in field theories. This principle is described in three basic field theories—electrodynamics, Yang-Mills theory, and general relativity. It is demonstrated that it supplies a unified geometric interpretation of the source current in each of the three theories in terms of a generalized E. Cartan moment of rotation. The extent to which the boundary of a boundary principle represents the austerity principle is discussed. It is concluded that it works in a way analogous to thermodynamic relations and it is argued that deeper principles might be needed to comprehend the nature of austerity.
Rong Jiang
2014-09-01
As the early design decision-making structure, a software architecture plays a key role in the final software product quality and the whole project. In the software design and development process, an effective evaluation of the trustworthiness of a software architecture can help in making scientific and reasonable decisions on the architecture, which are necessary for the construction of highly trustworthy software. In view of the lack of trustworthiness evaluation and measurement studies for software architecture, this paper provides a trustworthy attribute model of software architecture. Based on this model, the paper proposes to use the Principle of Maximum Entropy (POME) and the Grey Decision-making Method (GDMM) as the trustworthiness evaluation method for a software architecture, proves the scientificity and rationality of this method, and verifies its feasibility through case analysis.
戴天民
2003-01-01
The purpose is to reestablish rather complete basic balance equations and boundary conditions for polar thermomechanical continua based on the restudy of the traditional theories of micropolar thermoelasticity and thermopiezoelectricity. The equations of motion and the local balance equation of energy rate for micropolar thermoelasticity are derived from the rather complete principle of virtual power. The equations of motion, the balance equation of entropy and all boundary conditions are derived from the rather complete Hamilton principle. The new balance equations of momentum and energy rate which are essentially different from the existing results are presented. The corresponding results of micromorphic thermoelasticity and couple stress elastodynamics may be naturally obtained by the transition and the reduction from the micropolar case, respectively. Finally, the results of micropolar thermopiezoelectricity are directly given.
On a consistent finite-strain plate theory based on 3-D energy principle
Dai, Hui-Hui
2014-01-01
This paper derives a finite-strain plate theory consistent with the principle of stationary three-dimensional (3-D) potential energy under general loadings with a third-order error. Starting from 3-D nonlinear elasticity (with both geometrical and material nonlinearity) and by a series expansion, we deduce a vector plate equation with three unknowns, which exhibits the local force-balance structure. The success relies on using the 3-D field equations and the bottom traction condition to derive exact recursion relations for the coefficients. Associated weak formulations are considered, leading to a 2-D virtual work principle. An alternative approach based on a 2-D truncated energy is also provided, which is less consistent than the first plate theory but has the advantage of the existence of a 2-D energy function. As an example, we consider the pure bending problem of a hyperelastic block. The comparison between the analytical plate solution and the available exact solution shows that the plate theory gives second-order...
Hatton, Leslie; Warr, Gregory
2015-01-01
That the physicochemical properties of amino acids constrain the structure, function and evolution of proteins is not in doubt. However, principles derived from information theory may also set bounds on the structure (and thus also the evolution) of proteins. Here we analyze the global properties of the full set of proteins in release 13-11 of the SwissProt database, showing by experimental test of predictions from information theory that their collective structure exhibits properties that are consistent with their being guided by a conservation principle. This principle (Conservation of Information) defines the global properties of systems composed of discrete components each of which is in turn assembled from discrete smaller pieces. In the system of proteins, each protein is a component, and each protein is assembled from amino acids. Central to this principle is the inter-relationship of the unique amino acid count and total length of a protein and its implications for both average protein length and occurrence of proteins with specific unique amino acid counts. The unique amino acid count is simply the number of distinct amino acids (including those that are post-translationally modified) that occur in a protein, and is independent of the number of times that the particular amino acid occurs in the sequence. Conservation of Information does not operate at the local level (it is independent of the physicochemical properties of the amino acids) where the influences of natural selection are manifest in the variety of protein structure and function that is well understood. Rather, this analysis implies that Conservation of Information would define the global bounds within which the whole system of proteins is constrained; thus it appears to be acting to constrain evolution at a level different from natural selection, a conclusion that appears counter-intuitive but is supported by the studies described herein.
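The "unique amino acid count" central to the analysis above is straightforward to compute: it is the number of distinct residue symbols in a sequence, independent of how often each occurs. A minimal sketch (hypothetical sequences, standard library only):

```python
def unique_aa_count(seq: str) -> int:
    """Number of distinct amino acid symbols in a protein sequence.

    Independent of how many times each amino acid occurs, as in the
    Conservation of Information analysis described above.
    """
    return len(set(seq))

# Two hypothetical sequences of different total length, same unique count:
assert unique_aa_count("ACDEFACDEF") == 5   # 10 residues, 5 distinct
assert unique_aa_count("ACDEF") == 5        # 5 residues, 5 distinct
```

The inter-relationship studied in the paper is then between this count and the total sequence length across the whole protein database.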
Bedrin, L M; Zagriadskaia, A P; Tomilin, V V; Fedorovtsev, A L
1990-01-01
The possibility of using the principles of criminalistic identification theory in the investigation of objects of medicolegal expert evaluation is discussed. Analysis shows that these principles extend to the objects of medicolegal identification, which, however, possess certain features that influence the specific character of their investigation.
Banerjee, Amartya S.; Suryanarayana, Phanish
2016-11-01
We formulate and implement Cyclic Density Functional Theory (Cyclic DFT) - a self-consistent first principles simulation method for nanostructures with cyclic symmetries. Using arguments based on Group Representation Theory, we rigorously demonstrate that the Kohn-Sham eigenvalue problem for such systems can be reduced to a fundamental domain (or cyclic unit cell) augmented with cyclic-Bloch boundary conditions. Analogously, the equations of electrostatics appearing in Kohn-Sham theory can be reduced to the fundamental domain augmented with cyclic boundary conditions. By making use of this symmetry cell reduction, we show that the electronic ground-state energy and the Hellmann-Feynman forces on the atoms can be calculated using quantities defined over the fundamental domain. We develop a symmetry-adapted finite-difference discretization scheme to obtain a fully functional numerical realization of the proposed approach. We verify that our formulation and implementation of Cyclic DFT is both accurate and efficient through selected examples. The connection of cyclic symmetries with uniform bending deformations provides an elegant route to the ab-initio study of bending in nanostructures using Cyclic DFT. As a demonstration of this capability, we simulate the uniform bending of a silicene nanoribbon and obtain its energy-curvature relationship from first principles. A self-consistent ab-initio simulation of this nature is unprecedented and well outside the scope of any other systematic first principles method in existence. Our simulations reveal that the bending stiffness of the silicene nanoribbon is intermediate between that of graphene and molybdenum disulphide - a trend which can be ascribed to the variation in effective thickness of these materials. We describe several future avenues and applications of Cyclic DFT, including its extension to the study of non-uniform bending deformations and its possible use in the study of the nanoscale flexoelectric effect.
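The symmetry-cell reduction described above has a simple linear-algebra analogue: an operator with cyclic symmetry (a circulant matrix) is diagonalized by cyclic-Bloch vectors, so every eigenpair can be computed from the fundamental domain (one row) alone. The sketch below is an illustrative stand-in for the Kohn-Sham reduction, not the authors' implementation:

```python
import cmath

def circulant_row(c, i):
    """Row i of the circulant matrix whose first row is c: C[i][j] = c[(j-i) mod n]."""
    n = len(c)
    return [c[(j - i) % n] for j in range(n)]

def bloch_vector(n, k):
    """Cyclic-Bloch vector for symmetry label k: v[j] = exp(2*pi*i*j*k/n)."""
    return [cmath.exp(2j * cmath.pi * j * k / n) for j in range(n)]

# Toy cyclic "Hamiltonian": on-site term 2.0, hopping -1.0 to each neighbour.
c = [2.0, -1.0, 0.0, 0.0, 0.0, -1.0]   # first row of a 6x6 circulant matrix
n = len(c)

for k in range(n):
    v = bloch_vector(n, k)
    # Eigenvalue obtained from the fundamental domain (one row) alone:
    lam = sum(c[j] * cmath.exp(2j * cmath.pi * j * k / n) for j in range(n))
    # Verify C v = lam v componentwise, i.e. the reduction is exact:
    for i in range(n):
        cv_i = sum(circulant_row(c, i)[j] * v[j] for j in range(n))
        assert abs(cv_i - lam * v[i]) < 1e-9
```

In Cyclic DFT the same group-representation argument reduces the Kohn-Sham eigenproblem and the electrostatics to the cyclic unit cell with cyclic-Bloch boundary conditions.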
Shu-Kun Lin
2001-03-01
Abstract: Symmetry is a measure of indistinguishability. Similarity is a continuous measure of imperfect symmetry. Lewis' remark that "gain of entropy means loss of information" defines the relationship of entropy and information. Three laws of information theory have been proposed. Labeling by introducing nonsymmetry and formatting by introducing symmetry are defined. The function L (L = ln w, where w is the number of microstates), or the sum of entropy and information, L = S + I, of the universe is a constant (the first law of information theory). The entropy S of the universe tends toward a maximum (the second law of information theory). For a perfectly symmetric static structure, the information is zero and the static entropy is at its maximum (the third law of information theory). Based on the Gibbs inequality and the second law of the revised information theory, we have proved the similarity principle (a continuous higher similarity-higher entropy relation) after the rejection of the Gibbs paradox, and proved the Curie-Rosen symmetry principle (a higher symmetry-higher stability relation) as a special case of the similarity principle. The principles of information minimization and potential energy minimization are compared. Entropy is the degree of symmetry and information is the degree of nonsymmetry. There are two kinds of symmetries: dynamic and static symmetries. Any kind of symmetry will define an entropy and, corresponding to the dynamic and static symmetries, there are static entropy and dynamic entropy. Entropy in thermodynamics is a special kind of dynamic entropy. Any spontaneous process will evolve towards the highest possible symmetry, either dynamic or static or both. Therefore the revised information theory can be applied to characterizing all kinds of structural stability and process spontaneity. Some examples in chemical physics have been given. Spontaneous processes of all kinds of molecular
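The relation S = ln w invoked above can be illustrated with a toy system of my own choosing (not from the paper): for N two-state particles with k in the "up" state, the microstate count is w = C(N, k), and the static entropy S = ln w is largest for the most indistinguishable, maximally mixed configuration k = N/2:

```python
import math

def static_entropy(N: int, k: int) -> float:
    """S = ln w for w = C(N, k) microstates (Boltzmann constant set to 1)."""
    return math.log(math.comb(N, k))

N = 10
entropies = [static_entropy(N, k) for k in range(N + 1)]

# The maximum-entropy (maximally mixed) configuration is k = N/2:
assert max(range(N + 1), key=lambda k: entropies[k]) == N // 2
# The perfectly ordered configurations carry zero entropy (w = 1):
assert entropies[0] == 0.0 and entropies[N] == 0.0
```

This is the usual statistical-mechanics reading of ln w; the paper's identification of entropy with "degree of symmetry" is its own interpretive step on top of it.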
A Toy-World That Satisfies Some Principles of ``El Naschie's E-Infinity Theory''
Sommer, Hanns
2006-06-01
The classical view of our external world is revised and its tacit a priori assumptions are confronted with consequences of Mohammed S. El Naschie's E-infinity theory. The far-reaching investigations of El Naschie have demonstrated the necessity of a new, unconventional way of thinking in physics. First we motivate the a priori assumptions of classical mechanics with the requirements of the mathematical formalisms. We explain the difficulties of constructing models for a reality that exhibits the phenomenon of contextualism, as quantum mechanics does. In a second step we use the principles of Husserl's phenomenology to deduce a toy world from a contextual understanding of empirical data. In such a toy world one can illustrate the fundamental phenomena of quantum mechanics and ideas from E-infinity theory. We use this toy world to demonstrate that the classical a priori assumptions are not necessary and that alternative ways of thinking are possible in physics.
Mach's Principle and Model for a Broken Symmetric Theory of Gravity
Bisabr, Y
2003-01-01
We investigate spontaneous symmetry breaking in a conformally invariant gravitational model. In particular, we use a conformally invariant scalar tensor theory as the vacuum sector of a gravitational model to examine the idea that gravitational coupling may be the result of a spontaneous symmetry breaking. In this model matter is taken to be coupled with a metric which is different from, but conformally related to, the metric appearing explicitly in the vacuum sector. We show that after the spontaneous symmetry breaking the resulting theory is consistent with Mach's principle in the sense that inertial masses of particles have variable configurations in a cosmological context. Moreover, our analysis allows us to construct a mechanism in which the resulting large vacuum energy density relaxes during the evolution of the universe.
F. Topsøe
2001-09-01
Abstract: In its modern formulation, the Maximum Entropy Principle was promoted by E.T. Jaynes, starting in the mid-fifties. The principle dictates that one should look for a distribution, consistent with available information, which maximizes the entropy. However, this principle focuses only on distributions, and it appears advantageous to bring information-theoretical thinking more prominently into play by also focusing on the "observer" and on coding. This view was brought forward by the second named author in the late seventies and is the view we follow up on here. It leads to the consideration of a certain game, the Code Length Game, and, via standard game-theoretical thinking, to a principle of Game Theoretical Equilibrium. This principle is more basic than the Maximum Entropy Principle in the sense that the search for one type of optimal strategies in the Code Length Game translates directly into the search for distributions with maximum entropy. In the present paper we offer a self-contained and comprehensive treatment of the fundamentals of both principles mentioned, based on a study of the Code Length Game. Though new concepts and results are presented, the reading should be instructional and accessible to a rather wide audience, at least if certain mathematical details are left aside at a first reading. The most frequently studied instance of entropy maximization pertains to the Mean Energy Model, which involves a moment constraint related to a given function, here taken to represent "energy". This type of application is very well known from the literature, with hundreds of applications pertaining to several different fields, and will also serve here as an important illustration of the theory. But our approach reaches further, especially regarding the study of continuity properties of the entropy function, and this leads to new results which allow a discussion of models with so-called entropy loss. These results have tempted us to speculate over
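The Mean Energy Model mentioned above has the familiar closed-form maximum-entropy solution p_i ∝ exp(−βE_i), with β fixed by the mean-energy constraint. A minimal sketch (illustrative energy levels of my own choosing, standard library only), solving for β by bisection on the monotone mean-energy function:

```python
import math

def gibbs(energies, beta):
    """Maximum-entropy distribution p_i proportional to exp(-beta * E_i)."""
    w = [math.exp(-beta * e) for e in energies]
    z = sum(w)
    return [x / z for x in w]

def mean_energy(energies, beta):
    p = gibbs(energies, beta)
    return sum(pi * e for pi, e in zip(p, energies))

def solve_beta(energies, target, lo=-50.0, hi=50.0):
    """Bisection: the mean energy is strictly decreasing in beta."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mean_energy(energies, mid) > target:
            lo = mid   # mean too high -> increase beta
        else:
            hi = mid
    return 0.5 * (lo + hi)

E = [0.0, 1.0, 2.0, 3.0]           # illustrative energy levels
beta = solve_beta(E, target=1.0)   # constrain the mean energy to 1.0
p = gibbs(E, beta)
assert abs(sum(p) - 1.0) < 1e-9
assert abs(mean_energy(E, beta) - 1.0) < 1e-6
```

In the Code Length Game reading, this same distribution is the optimal strategy against an adversary constrained to the given mean energy.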
Toward a binary interpretation of acupuncture theory: principles and practical consequences.
González-Correa, Carlos-Augusto
2004-06-01
There is an argument that the universe functions according to binary principles. This paper applies such a system of evaluation to the interpretation of acupuncture principles, the t'ai chi symbol and the I Ching. As a result, it can be shown that a coherent and interesting ordering of some earthly phenomena can be reached. When these same principles are applied to acupuncture theory, a grouping and ordering of the meridians and associated organs and their respective functions are obtained. A suggestion is also presented about how, using a digital device, diagnosis and treatment could be reached in the future if this approach is deepened and proves successful. I call this binary medicine, something that would have some similarities to how electroacupuncture according to Voll (EAV) works. Basically, diagnosis would be done in terms of excess, balance or deficit (+1, 0 or -1, respectively), and treatment would be done in terms of draining for +1 or filling for -1; 0 means a balanced state that does not need any treatment.
Adapting evidence-based interventions using a common theory, practices, and principles.
Rotheram-Borus, Mary Jane; Swendeman, Dallas; Becker, Kimberly D
2014-01-01
Hundreds of validated evidence-based intervention programs (EBIP) aim to improve families' well-being; however, most are not broadly adopted. As an alternative diffusion strategy, we created wellness centers to reach families' everyday lives with a prevention framework. At two wellness centers, one in a middle-class neighborhood and one in a low-income neighborhood, popular local activity leaders (instructors of martial arts, yoga, sports, music, dancing, Zumba), and motivated parents were trained to be Family Mentors. Trainings focused on a framework that taught synthesized, foundational prevention science theory, practice elements, and principles, applied to specific content areas (parenting, social skills, and obesity). Family Mentors were then allowed to adapt scripts and activities based on their cultural experiences but were closely monitored and supervised over time. The framework was implemented in a range of activities (summer camps, coaching) aimed at improving social, emotional, and behavioral outcomes. Successes and challenges are discussed for (a) engaging parents and communities; (b) identifying and training Family Mentors to promote children and families' well-being; and (c) gathering data for supervision, outcome evaluation, and continuous quality improvement. To broadly diffuse prevention to families, far more experimentation is needed with alternative and engaging implementation strategies that are enhanced with knowledge harvested from researchers' past 30 years of experience creating EBIP. One strategy is to train local parents and popular activity leaders in applying robust prevention science theory, common practice elements, and principles of EBIP. More systematic evaluation of such innovations is needed.
Scale relativity theory and integrative systems biology: 1. Founding principles and scale laws.
Auffray, Charles; Nottale, Laurent
2008-05-01
In these two companion papers, we provide an overview and a brief history of the multiple roots, current developments and recent advances of integrative systems biology and identify multiscale integration as its grand challenge. Then we introduce the fundamental principles and the successive steps that have been followed in the construction of the scale relativity theory, and discuss how scale laws of increasing complexity can be used to model and understand the behaviour of complex biological systems. In scale relativity theory, the geometry of space is considered to be continuous but non-differentiable, therefore fractal (i.e., explicitly scale-dependent). One writes the equations of motion in such a space as geodesics equations, under the constraint of the principle of relativity of all scales in nature. To this purpose, covariant derivatives are constructed that implement the various effects of the non-differentiable and fractal geometry. In this first review paper, the scale laws that describe the new dependence on resolutions of physical quantities are obtained as solutions of differential equations acting in the scale space. This leads to several possible levels of description for these laws, from the simplest scale invariant laws to generalized laws with variable fractal dimensions. Initial applications of these laws to the study of species evolution, embryogenesis and cell confinement are discussed.
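The simplest of the scale laws referred to above can be written explicitly; the following is a sketch in the standard scale-relativity notation (ε the resolution, λ a reference scale, δ the scale dimension, here assumed constant), not a formula quoted from this review:

```latex
\frac{\partial \ln \mathcal{L}}{\partial \ln \varepsilon} = -\,\delta
\quad\Longrightarrow\quad
\mathcal{L}(\varepsilon) = \mathcal{L}_0 \left(\frac{\lambda}{\varepsilon}\right)^{\delta}.
```

That is, a fractal coordinate length diverges as a power law of resolution; the "generalized laws with variable fractal dimensions" mentioned above correspond to letting δ itself depend on ln ε.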
Freed, Karl F
2014-10-14
A general theory of the long time, low temperature dynamics of glass-forming fluids remains elusive despite the almost 20 years since the famous pronouncement by the Nobel Laureate P. W. Anderson, "The deepest and most interesting unsolved problem in solid state theory is probably the theory of the nature of glass and the glass transition" [Science 267, 1615 (1995)]. While recent work indicates that Adam-Gibbs theory (AGT) provides a framework for computing the structural relaxation time of supercooled fluids and for analyzing the properties of the cooperatively rearranging dynamical strings observed in low temperature molecular dynamics simulations, the heuristic nature of AGT has impeded general acceptance due to the lack of a first principles derivation [G. Adam and J. H. Gibbs, J. Chem. Phys. 43, 139 (1965)]. This deficiency is rectified here by a statistical mechanical derivation of AGT that uses transition state theory and the assumption that the transition state is composed of elementary excitations of a string-like form. The strings are assumed to form in equilibrium with the mobile particles in the fluid. Hence, transition state theory requires the strings to be in mutual equilibrium and thus to have the size distribution of a self-assembling system, in accord with the simulations and analyses of Douglas and co-workers. The average relaxation rate is computed as a grand canonical ensemble average over all string sizes, and use of the previously determined relation between configurational entropy and the average cluster size in several model equilibrium self-associating systems produces the AGT expression in a manner enabling further extensions and more fundamental tests of the assumptions.
Dewar, Roderick [Unite de Bioclimatologie, INRA Centre de Bordeaux, BP 81, 33883 Villenave d' Ornon (France)
2003-01-24
Jaynes' information theory formalism of statistical mechanics is applied to the stationary states of open, non-equilibrium systems. First, it is shown that the probability distribution p_γ of the underlying microscopic phase space trajectories γ over a time interval of length τ satisfies p_γ ∝ exp(τσ_γ / 2k_B), where σ_γ is the time-averaged rate of entropy production of γ. Three consequences of this result are then derived: (1) the fluctuation theorem, which describes the exponentially declining probability of deviations from the second law of thermodynamics as τ → ∞; (2) the selection principle of maximum entropy production for non-equilibrium stationary states, empirical support for which has been found in studies of phenomena as diverse as the Earth's climate and crystal growth morphology; and (3) the emergence of self-organized criticality for flux-driven systems in the slowly-driven limit. The explanation of these results on general information theoretic grounds underlines their relevance to a broad class of stationary, non-equilibrium systems. In turn, the accumulating empirical evidence for these results lends support to Jaynes' formalism as a common predictive framework for equilibrium and non-equilibrium statistical mechanics.
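The step from the trajectory distribution to consequence (1) is short: for a trajectory γ and its time-reverse with entropy production −σ_γ, the stated distribution gives p_γ / p_{−γ} = exp(τσ_γ / k_B), the standard fluctuation-theorem form. A toy numeric check (arbitrary illustrative values, k_B set to 1):

```python
import math

k_B = 1.0

def traj_weight(sigma, tau):
    """Unnormalized trajectory probability, p proportional to exp(tau*sigma/(2 k_B))."""
    return math.exp(tau * sigma / (2.0 * k_B))

tau, sigma = 3.0, 0.7   # illustrative time interval and entropy production rate

# Ratio of a trajectory to its time-reverse (entropy production -sigma);
# the normalization constant cancels in the ratio:
ratio = traj_weight(sigma, tau) / traj_weight(-sigma, tau)
assert math.isclose(ratio, math.exp(tau * sigma / k_B))
```

Trajectories violating the second law (σ_γ < 0) are thus exponentially suppressed in τ, recovering the fluctuation theorem.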
Laurence, P.; Shen, M.C.
1982-03-01
Based upon the existence and uniqueness of a solution to the linearized Lundquist equations established previously, the modified energy principle for the sigma-stability of a confined toroidal plasma is rigorously justified. A variational principle is developed to find the infimum of sigma, and an estimate for the maximum growth rate is obtained. The results are also extended to a diffuse pinch and a multiple tori plasma.
DAI Tian-min
2005-01-01
Theoretical incompleteness of the existing conservation laws of energy for polar continuum mechanics is further clarified. For completeness, the principles of total work and energy and of total work and energy of incremental rate type are postulated. Via total variations of the former and the latter of them, the principles of virtual displacement and microrotation & stress and couple stress as well as virtual velocity and angular velocity & stress rate and couple stress rate are immediately obtained, respectively. From these principles all balance equations and boundary conditions for micropolar mechanics are naturally and simultaneously deduced. The essential differences between the nontraditional results obtained in this paper and the existing conservation laws of energy are expounded.
Phase transition in gauge theories, monopoles and the Multiple Point Principle
Das, C R
2005-01-01
This review is devoted to the Multiple Point Principle (MPP), according to which several vacuum states with the same energy density exist in Nature. The MPP is implemented to the Standard Model (SM), Family replicated gauge group model (FRGGM) and phase transitions in gauge theories with/without monopoles. Lattice gauge theories are reviewed. The lattice results for critical coupling constants are compared with those of the Higgs Monopole Model (HMM), in which the lattice artifact monopoles are replaced by the point-like Higgs scalar particles with a magnetic charge. Considering our (3+1)-dimensional space-time as discrete, for example, as a lattice with a parameter a = λ_P equal to the Planck length, we have investigated the additional contributions of monopoles to beta-functions of renormalization group equations in the FRGGM extended beyond the SM at high (the Planck scale) energies. We have reviewed that, in contrast to the Anti-grand unified theory (AGUT), there exists a possibility of unification o...
Analytical study of Yang-Mills theory in the infrared from first principles
Siringo, Fabio
2015-01-01
Pure Yang-Mills SU(N) theory is studied in the Landau gauge and four dimensional space. While leaving the original Lagrangian unmodified, a double perturbative expansion is devised, based on a massive free-particle propagator. In dimensional regularization, all diverging mass terms cancel exactly in the double expansion, without the need to include mass counterterms that would spoil the symmetry of the Lagrangian. No free parameters are included that were not in the original theory, yielding a fully analytical approach from first principles. The expansion is safe in the infrared and is equivalent to the standard perturbation theory in the UV. At one-loop, explicit analytical expressions are given for the propagators and the running coupling and are found in excellent agreement with the data of lattice simulations. A universal scaling property is predicted for the inverse propagators and shown to be satisfied by the lattice data. Higher loops are found to be negligible in the infrared below 300 MeV where the c...
Wang, Sheng-Quan; Si, Zong-Guo; Brodsky, Stanley J
2016-01-01
The D0 collaboration at FermiLab has recently measured the top-quark pair forward-backward asymmetry in $\\bar p p \\to t \\bar t X$ reactions as a function of the $\\bar t t $ invariant mass $M_{t\\bar{t}}$. D0 observed that the asymmetry $A_{\\rm FB}(M_{t\\bar{t}})$ first increases and then decreases as $M_{t\\bar{t}}$ is increased. This behavior is not explained using conventional renormalization scale-setting, even by a next-to-next-to-leading order (N$^2$LO) QCD calculation -- one predicts a monotonically increasing behavior. In the conventional scale-setting method, one simply guesses a single renormalization scale $\\mu_r$ for the argument of the QCD running coupling and then varies it over an arbitrary range. However, the conventional method has inherent difficulties. ...... In contrast, if one fixes the scale using the Principle of Maximum Conformality (PMC), the resulting pQCD predictions are renormalization-scheme independent since all of the scheme-dependent $\\{\\beta_i\\}$-terms in the QCD perturbative seri...
2015-08-20
the water balance equation. (a) Water-balance-based estimate of Fw using evaporation E data from the OAFlux product, precipitation P data from GPCP...cover types including bare soil, canopy, water, snow and ice, was expanded to modeling the global surface energy budget, fluxes and greenhouse-gas (e.g. carbon dioxide and methane) fluxes, ocean freshwater fluxes, and regional crop yield, among others. An on-going study suggests that the global annual
Cancer control through principles of systems science, complexity, and chaos theory: a model.
Janecka, Ivo P
2007-06-05
Cancer is a significant medical and societal problem. This reality arises from the fact that exponential, unrestricted cellular growth destabilizes the human body as a system. From this perspective, cancer is a manifestation of a system-in-failing. A model of normal and abnormal cell cycle oscillations has been developed incorporating systems science, complexity, and chaos theories. In this model, cancer expresses a failing subsystem and is characterized by positive exponential growth taking place at the outer edge of chaos. The overall survival of the human body as a system is threatened. The model suggests, however, that cancer's exponential cellular growth and disorganized complexity could be controlled through the induction of differentiation of cancer stem cells into cells of low and basic functionality. This concept would imply reorientation of current treatment principles from cellular killing (cyto-toxic therapies) to cellular retraining (cyto-education).
The principle of stationary nonconservative action for classical mechanics and field theories
Galley, Chad R; Stein, Leo C
2014-01-01
We further develop a recently introduced variational principle of stationary action for problems in nonconservative classical mechanics and extend it to classical field theories. The variational calculus used is consistent with an initial value formulation of physical problems and allows for time-irreversible processes, such as dissipation, to be included at the level of the action. In this formalism, the equations of motion are generated by extremizing a nonconservative action $\\mathcal{S}$, which is a functional of a doubled set of degrees of freedom. The corresponding nonconservative Lagrangian contains a potential $K$ which generates nonconservative forces and interactions. Such a nonconservative potential can arise in several ways, including from an open system interacting with inaccessible degrees of freedom or from integrating out or coarse-graining a subset of variables in closed systems. We generalize Noether's theorem to show how Noether currents are modified and no longer conserved when $K$ is non-...
Principles of Motor Recovery After Neurological Injury Based on a Motor Control Theory.
Levin, Mindy F
2016-01-01
Problems of neurological rehabilitation are considered based on two levels of the International Classification of Functioning (ICF), the Body Structures and Function level and the Activity level, and on modulating factors related to the individual and the environment. Specifically, at the Body Structures and Function level, problems addressed include spasticity, muscle weakness, disordered muscle activation patterns and disruptions in coordinated movement. At the Activity level, deficits in multi-joint and multi-segment upper limb reaching movements are reviewed. We address how physiologically well-established principles in the control of actions, Threshold Control and Referent Control as outlined in the Equilibrium-Point theory, can help advance the understanding of underlying deficits that may limit recovery at each level.
Dielectric Anisotropy of the GaP/Si(001) Interface from First-Principles Theory
Kumar, Pankaj; Patterson, Charles H.
2017-06-01
First-principles calculations of the dielectric anisotropy of the GaP/Si(001) interface are compared to the anisotropy extracted from reflectance measurements on GaP thin films on Si(001) [O. Supplie et al., Phys. Rev. B 86, 035308 (2012), 10.1103/PhysRevB.86.035308]. Optical excitations from two states localized in several Si layers adjacent to the interface result in the observed anisotropy of the interface. The calculations show excellent agreement with experiment only for a gapped interface with a P layer in contact with Si, and show that a combination of theory and experiment can reveal localized electronic states and the atomic structure at buried interfaces.
Gavrilenko, A V; Bonner, C E; Sun, S-S; Zhang, C; Gavrilenko, V I
2008-01-01
Optical absorption spectra of poly(thiophene vinylene) (PTV) conjugated polymers have been studied at room temperature in the spectral range of 450 to 800 nm. A dominant peak located at 577 nm and a prominent shoulder at 619 nm are observed. Another shoulder located at 685 nm is observed only at high concentration and after additional treatment (heat, sonication). Equilibrium atomic geometries and optical absorption of PTV conjugated polymers have also been studied by first-principles density functional theory (DFT). For PTV in solvent, the theoretical calculations predict two equilibrium geometries with different interchain distances. By comparative analysis of the experimental and theoretical data, it is demonstrated that the newly measured long-wavelength optical absorption shoulder is consistent with the new optical absorption peak predicted for the most energetically favorable PTV phase in the solvent. This shoulder is interpreted as a direct indication of increased interchain interaction in the solvent which ha...
Lee, B; Rudd, R E
2006-10-19
We report the results of first-principles density functional theory calculations of the Young's modulus and other mechanical properties of hydrogen-passivated Si ⟨001⟩ nanowires. The nanowires are taken to have predominantly {100} surfaces, with small {110} facets according to the Wulff shape. The Young's modulus, the equilibrium length and the constrained residual stress of a series of prismatic beams of differing sizes are found to have size dependences that scale like the surface-area-to-volume ratio for all but the smallest beam. The results are compared with a continuum model and the results of classical atomistic calculations based on an empirical potential. We attribute the size dependence to specific physical structures and interactions. In particular, the hydrogen interactions on the surface and the charge density variations within the beam are quantified and used both to parameterize the continuum model and to account for the discrepancies between the two models and the first-principles results.
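The surface-to-volume scaling described in this abstract can be illustrated with a deliberately simple two-parameter model. The bulk modulus and surface coefficient below are hypothetical round numbers, not values from the paper:

```python
def youngs_modulus_gpa(side_nm, y_bulk=130.0, k_surf=-40.0):
    """Toy size-dependence model for a square prismatic wire of side d (nm):
    the modulus shifts in proportion to the surface-to-volume ratio
    S/V = 4/d. y_bulk (GPa) and k_surf (GPa*nm) are illustrative only."""
    return y_bulk + k_surf * 4.0 / side_nm
```

With a negative surface coefficient the wire softens as its cross section shrinks, approaching the bulk value for large sizes, which is the qualitative trend such S/V models capture.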
Phase Transition in Gauge Theories, Monopoles and the Multiple Point Principle
Das, C. R.; Laperashvili, L. V.
This review is devoted to the Multiple Point Principle (MPP), according to which several vacuum states with the same energy density exist in Nature. The MPP is applied to the Standard Model (SM), the Family Replicated Gauge Group Model (FRGGM) and phase transitions in gauge theories with/without monopoles. Using renormalization group equations for the SM, the effective potential in the two-loop approximation is investigated, and the existence of its postulated second minimum at the fundamental scale is confirmed. Phase transitions in lattice gauge theories are reviewed. The lattice results for critical coupling constants are compared with those of the Higgs monopole model, in which the lattice artifact monopoles are replaced by point-like Higgs scalar particles with magnetic charge. Considering our (3+1)-dimensional space-time as, in some way, discrete, or imagining it as a lattice with a parameter a = λP, where λP is the Planck length, we have investigated the additional contributions of monopoles to the β-functions of the renormalization group equations for the running fine structure constants αi(μ) (i = 1, 2, 3 correspond to the U(1), SU(2) and SU(3) gauge groups of the SM) in the FRGGM extended beyond the SM at high energies. It is shown that monopoles have an Nfam-times smaller magnetic charge in the FRGGM than in the SM (Nfam is the number of families in the FRGGM). We have also estimated the enlargement of the number of fermions in the FRGGM, leading to the suppression of asymptotic freedom in the non-Abelian theory. We have reviewed that, in contrast to the case of the Anti-grand-unified-theory (AGUT), there exists a possibility of unification of all gauge interactions (including gravity) near the Planck scale due to monopoles. The possibility of [SU(5)]^3 or [SO(10)]^3 unification at the GUT scale ~10^18 GeV is briefly considered.
Quantum theory of the Generalised Uncertainty Principle and the existence of a Minimal Length
Bruneton, Jean-Philippe
2016-01-01
We extend significantly previous works on the Hilbert space representations of the Generalized Uncertainty Principle (GUP) in 3+1 dimensions of the form $[X_i,P_j] = i F_{ij}$ where $ F_{ij} = f(P^2) \delta_{ij} + g(P^2) P_i P_j $ for any functions $f$ and $g$. However, we restrict our study to the case of commuting $X$'s. We focus in particular on the symmetries of the theory, and the minimal length that emerges in some cases. We first show that, at the algebraic level, there exists an unambiguous mapping between the GUP with a deformed quantum algebra and a quadratic Hamiltonian, and a standard Heisenberg algebra of operators with an aquadratic Hamiltonian, provided the boost sector of the symmetries is modified accordingly. The theory can also be mapped to a completely standard Quantum Mechanics with standard symmetries, but with momentum-dependent position operators. Next, we investigate the Hilbert space representations of these algebraically equivalent models, and focus specifically on whether they exhibit a mi...
First-principles AFM image simulation with frozen density embedding theory
Sakai, Yuki; Lee, Alex J.; Chelikowsky, James R.
We present an efficient first-principles method for non-contact atomic force microscopy (nc-AFM) simulation. Ordinary nc-AFM simulations based on density functional theory (DFT) require exhaustive computational cost because they involve thousands of total energy calculations. Regarding the sample as a fixed external potential can reduce the computational cost, and we adopt frozen density embedding theory (FDET) for this purpose. Simulated nc-AFM images with FDET using a carbon monoxide tip reproduce the full DFT images of benzene, pentacene, and graphene well, although optimized tip-sample distances and interaction energies in FDET are underestimated and overestimated, respectively. The FDET-based simulation method is promising for AFM image simulation of surfaces and two-dimensional materials. This work was supported by U.S. DOE under Grant No. DE-FG02-06ER46286 and Award No. DE-SC0008877, and by the Welch Foundation under Grant F-1837. Computational resources are provided by NERSC and TACC.
George, Janine; Deringer, Volker L; Wang, Ai; Müller, Paul; Englert, Ulli; Dronskowski, Richard
2016-12-21
Thermal properties of solid-state materials are a fundamental topic of study with important practical implications. For example, anisotropic displacement parameters (ADPs) are routinely used in physics, chemistry, and crystallography to quantify the thermal motion of atoms in crystals. ADPs are commonly derived from diffraction experiments, but recent developments have also enabled their first-principles prediction using periodic density-functional theory (DFT). Here, we combine experiments and dispersion-corrected DFT to quantify lattice thermal expansion and ADPs in crystalline α-sulfur (S8), a prototypical elemental solid that is controlled by the interplay of covalent and van der Waals interactions. We begin by reporting on single-crystal and powder X-ray diffraction measurements that provide new and improved reference data from 10 K up to room temperature. We then use several popular dispersion-corrected DFT methods to predict vibrational and thermal properties of α-sulfur, including the anisotropic lattice thermal expansion. Thereafter, ADPs are derived in the commonly used harmonic approximation (in the computed zero-Kelvin structure) and also in the quasi-harmonic approximation (QHA) which takes the predicted lattice thermal expansion into account. At the PBE+D3(BJ) level, the QHA leads to excellent agreement with experiments. Finally, more general implications of this study for theory and experiment are discussed.
Plotnitsky, Arkady
2016-01-01
The book considers foundational thinking in quantum theory, focusing on the role of fundamental principles and principle thinking there, including thinking that leads to the invention of new principles, which is, the book contends, one of the ultimate achievements of theoretical thinking in physics and beyond. The focus on principles, prominent during the rise and in the immediate aftermath of quantum theory, has been uncommon in more recent discussions and debates concerning it. The book argues, however, that exploring the fundamental principles and principle thinking is exceptionally helpful in addressing the key issues at stake in quantum foundations and the seemingly interminable debates concerning them. Principle thinking led to major breakthroughs throughout the history of quantum theory, beginning with the old quantum theory and quantum mechanics, the first definitive quantum theory, which it remains within its proper (nonrelativistic) scope. It has, the book also argues, been equally important in qua...
Maximum-entropy clustering algorithm and its global convergence analysis
[No author listed]
2001-01-01
Constructing a batch of differentiable entropy functions to uniformly approximate an objective function by means of the maximum-entropy principle, a new clustering algorithm, called the maximum-entropy clustering algorithm, is proposed based on optimization theory. This algorithm is a soft generalization of the hard C-means algorithm and possesses global convergence. Its relations with other clustering algorithms are discussed.
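A minimal numerical sketch of the idea behind such soft generalizations of hard C-means, assuming Boltzmann-weighted (maximum-entropy) memberships controlled by an inverse-temperature parameter beta; this is an illustration, not the authors' algorithm:

```python
import numpy as np

def max_entropy_cluster(X, k, beta=5.0, iters=100):
    """Soft clustering with maximum-entropy (Boltzmann) memberships.

    Each point's membership in cluster j is proportional to
    exp(-beta * ||x - c_j||^2); as beta -> infinity the assignments
    become hard and the update reduces to hard C-means.
    Illustrative sketch only.
    """
    X = np.asarray(X, dtype=float)
    centers = X[:: max(len(X) // k, 1)][:k].copy()  # simple spread-out init
    for _ in range(iters):
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        w = np.exp(-beta * (d2 - d2.min(axis=1, keepdims=True)))
        w /= w.sum(axis=1, keepdims=True)             # soft memberships
        centers = (w.T @ X) / w.sum(axis=0)[:, None]  # weighted means
    return centers, w
```

Lowering beta smooths the membership functions (higher entropy); annealing beta upward recovers hard clustering, which is the limiting relationship the abstract alludes to.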
Steier, E. Joseph, III
2010-01-01
The objective of this dissertation was to explore the concept that knowledge and application of theories, principles and methods of adult learning to teaching may be a core management competency needed for companies to improve employee reaction to learning, knowledge transfer and behavior as well as engagement, retention and profitability.…
NEW PRINCIPLES OF WORK AND ENERGY AS WELL AS POWER AND ENERGY RATE FOR CONTINUUM FIELD THEORIES
戴天民
2001-01-01
New principles of work and energy as well as power and energy rate with cross terms for polar and nonlocal polar continuum field theories were presented, and from them all corresponding equations of motion and boundary conditions, as well as complete equations of energy and energy rate, were naturally derived with the help of generalized Piola's theorems, without any additional requirement. Finally, some new balance laws of energy and energy rate for generalized continuum mechanics were established. The new principles of work and energy as well as power and energy rate with cross terms presented in this paper are believed to be new, and they correct the incompleteness of all existing corresponding principles and laws without cross terms in the literature on generalized continuum field theories.
Mukhopadhyay, S., E-mail: sanghamitra.mukhopadhyay@stfc.ac.uk [ISIS Facility, Rutherford Appleton Laboratory, Chilton, Didcot, Oxfordshire OX11 0QX (United Kingdom); Department of Materials, Imperial College London, Exhibition Road, London SW7 2AZ (United Kingdom); Gutmann, M.J.; Jura, M. [ISIS Facility, Rutherford Appleton Laboratory, Chilton, Didcot, Oxfordshire OX11 0QX (United Kingdom); Jochym, D.B. [Scientific Computing Department, Rutherford Appleton Laboratory, Chilton, Didcot, Oxfordshire OX11 0QX (United Kingdom); Jimenez-Ruiz, M. [Institut Laue Langevin, 6 rue Jules Horowitz 38042, Grenoble Cedex 9 (France); Sturniolo, S.; Refson, K. [Scientific Computing Department, Rutherford Appleton Laboratory, Chilton, Didcot, Oxfordshire OX11 0QX (United Kingdom); Fernandez-Alonso, F. [ISIS Facility, Rutherford Appleton Laboratory, Chilton, Didcot, Oxfordshire OX11 0QX (United Kingdom); Department of Physics and Astronomy, University College London, Gower Street, London WC1E 6BT (United Kingdom)
2013-12-12
Highlights: • We have presented results of neutron diffraction on croconic acid (CA). • We have presented results of inelastic neutron scattering (INS) spectra. • INS is compared with lattice dynamical simulations using density functional theory. • The prominent doublet in INS spectra around 1000 cm⁻¹ arises from two hydrogen ions. • We identify the role of these H ions in the ferroelectricity of the CA crystal. - Abstract: A combination of neutron-scattering experiments and first-principles calculations using density-functional theory have been performed to explore the structural and dynamical properties of the single-component organic ferroelectric croconic acid. Neutron diffraction and spectroscopy have been used to determine the location and underlying vibrational motions of the hydrogen ions within the crystalline lattice, respectively. On the computational front we find that dispersion corrections within the generalised-gradient approximation are essential to obtain a satisfactory crystal structure for this organic solid. Two distinct types of hydrogen ions in the crystal also have been identified, located at the ‘hinge’ and ‘terrace’ positions of a pleated, accordion-like structure. Phonon calculations and simulated neutron spectra show that the prominent doublet observed at ca. 1000 cm⁻¹ arises from out-of-plane motions associated with these two types of hydrogen ions. Calculated Born-effective-charge tensors yield an anomalously high dynamic charge centered on the hydrogen ions at the hinges, a finding which serves to identify the primary motif underpinning ferroelectric behaviour in this novel material.
Ma, Hong-Hao [Chongqing Univ., Chongqing (People's Republic of China)]; Wu, Xing-Gang [Chongqing Univ., Chongqing (People's Republic of China)]; Ma, Yang [Chongqing Univ., Chongqing (People's Republic of China)]; Brodsky, Stanley J. [Stanford Univ., Stanford, CA (United States)]; Mojaza, Matin [KTH Royal Inst. of Technology and Stockholm Univ., Stockholm (Sweden)]
2015-05-26
A key problem in making precise perturbative QCD (pQCD) predictions is how to set the renormalization scale of the running coupling unambiguously at each finite order. The elimination of the uncertainty in setting the renormalization scale in pQCD will greatly increase the precision of collider tests of the Standard Model and the sensitivity to new phenomena. Renormalization group invariance requires that predictions for observables must also be independent on the choice of the renormalization scheme. The well-known Brodsky-Lepage-Mackenzie (BLM) approach cannot be easily extended beyond next-to-next-to-leading order of pQCD. Several suggestions have been proposed to extend the BLM approach to all orders. In this paper we discuss two distinct methods. One is based on the “Principle of Maximum Conformality” (PMC), which provides a systematic all-orders method to eliminate the scale and scheme ambiguities of pQCD. The PMC extends the BLM procedure to all orders using renormalization group methods; as an outcome, it significantly improves the pQCD convergence by eliminating renormalon divergences. An alternative method is the “sequential extended BLM” (seBLM) approach, which has been primarily designed to improve the convergence of pQCD series. The seBLM, as originally proposed, introduces auxiliary fields and follows the pattern of the β0-expansion to fix the renormalization scale. However, the seBLM requires a recomputation of pQCD amplitudes including the auxiliary fields; due to the limited availability of calculations using these auxiliary fields, the seBLM has only been applied to a few processes at low orders. In order to avoid the complications of adding extra fields, we propose a modified version of seBLM which allows us to apply this method to higher orders. As a result, we then perform detailed numerical comparisons of the two alternative scale-setting approaches by investigating their predictions for the annihilation cross section ratio R
Algorithm Preserving Mass Fraction Maximum Principle for Multi-component Flows
唐维军; 蒋浪; 程军波
2014-01-01
We propose a new method for compressible multi-component flows with a Mie-Gruneisen equation of state based on mass fraction. The model preserves the conservation laws of mass, momentum and total energy for the mixture flow. It also preserves conservation of mass for each single component. Moreover, it prevents pressure and velocity from jumping across interfaces that separate regions of different fluid components. The wave propagation method is used to discretize this quasi-conservative system. A modification of the numerical method is adopted for the conservative equation of mass fraction, which preserves the maximum principle of the mass fraction. The unmodified wave propagation method, applied directly to the conservation equations of the component masses, cannot keep the mass fraction within the interval [0,1]. Numerical results confirm the validity of the method.
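The discrete maximum principle this abstract refers to can be demonstrated with a one-dimensional toy scheme; first-order upwind advection is used here purely as an illustration of the bound-preserving property, it is not the paper's wave-propagation method:

```python
import numpy as np

def advect_upwind(Y, u, dx, dt, steps):
    """Advect a mass fraction Y on a periodic grid with velocity u > 0.

    Each update is a convex combination of neighboring values, so the
    scheme obeys the discrete maximum principle: Y never leaves
    [Y.min(), Y.max()], and in particular stays within [0, 1].
    """
    c = u * dt / dx
    assert 0.0 <= c <= 1.0  # CFL condition required for the bound
    for _ in range(steps):
        Y = Y - c * (Y - np.roll(Y, 1))  # Y_i <- (1-c) Y_i + c Y_{i-1}
    return Y
```

A dispersive, unlimited scheme would overshoot at the material interface and produce fractions outside [0,1]; the abstract's modified update is designed to rule out exactly that failure.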
Shu, Weixing; Luo, Hailu; Wen, Shuangchun; Fan, Dianyuan
2012-01-01
Associating transformation optics with the fundamental concepts of wavefronts and optical path lengths (OPLs), we propose a general theory for the transformation between two arbitrary wavefronts by a slab. The OPLs travelled by rays across a wavefront, rather than the shape of a medium, essentially determine the resulting phase front, in accord with the principle of equal optical path. So one phase transformation can be realized by materials of various profiles as long as the needed OPLs are satisfied. Thereupon we find a general coordinate transformation method for phase conversion, i.e. the profile of the spatial separation between the two wavefronts is taken to be transformed to a plane surface. Interestingly, for the mutual conversion between planar and curved wavefronts, the method reduces to an inverse transformation method in which it is the reversed shape of the desired wavefront that is converted to a planar one. As an application, four kinds of phase transformation, planar to planar, planar/curved to curv...
Selection principles and pattern formation in fluid mechanics and nonlinear shell theory
Sather, Duane P.
1987-01-01
Research accomplishments are summarized and publications generated under the contract are listed. The general purpose of the research was to investigate various symmetry breaking problems in fluid mechanics by the use of structure parameters and selection principles. Although all of the nonlinear problems studied involved systems of partial differential equations, many of these problems led to the study of a single nonlinear operator equation of the form F(w, λ, γ) = 0, with w ∈ H and λ, γ ∈ ℝ. Instead of varying only the load parameter λ, as is often done in the study of such equations, one of the main ideas used was to vary the structure parameter γ in such a way that stable solutions were obtained. In this way one determines detailed stability results by making use of the structure of the model equations and the known physical parameters of the problem. The approach was carried out successfully for Benard-type convection problems, Taylor-like problems for short cylinders, rotating Couette-Poiseuille channel flows, and plane Couette flows. The main focus of the research was on wave theory of vortex breakdown in a tube. A number of preliminary results for inviscid axisymmetric flows were obtained.
Sindelka, Milan; Moiseyev, Nimrod
2006-04-27
We study a general problem of the translational/rotational/vibrational/electronic dynamics of a diatomic molecule exposed to an interaction with an arbitrary external electromagnetic field. The theory developed in this paper is relevant to a variety of specific applications, such as alignment or orientation of molecules by lasers, trapping of ultracold molecules in optical traps, molecular optics and interferometry, rovibrational spectroscopy of molecules in the presence of intense laser light, or generation of high order harmonics from molecules. Starting from the first quantum mechanical principles, we derive an appropriate molecular Hamiltonian suitable for description of the center of mass, rotational, vibrational, and electronic molecular motions driven by the field within the electric dipole approximation. Consequently, the concept of the Born-Oppenheimer separation between the electronic and the nuclear degrees of freedom in the presence of an electromagnetic field is introduced. Special cases of the dc/ac-field limits are then discussed separately. Finally, we consider a perturbative regime of a weak dc/ac field, and obtain simple analytic formulas for the associated Born-Oppenheimer translational/rotational/vibrational molecular Hamiltonian.
A new ordering principle in quantum field theory and its consequences
Greben, Jan M
2016-01-01
The ad-hoc imposition of normal ordering on the Lagrangian, energy-momentum tensor and currents is a standard tool in quantum field theory (QFT) to eliminate infinite vacuum expectation values (v.e.v.). However, for fermionic expressions these infinite terms are due to anti-particles only. This exposes an asymmetry in standard QFT, which can be traced back to a bias towards particles in the Dirac bra-ket notation. To counter this bias a new ordering principle (called the $\mathbb{R}$-product) is required which restores the symmetry (or rather duality) between particles and anti-particles and eliminates the infinite v.e.v. While this $\mathbb{R}$-product was already used in a bound-state application, this paper aims to give it a more general foundation and analyze its overall impact in QFT. For boson fields the particle bias is hidden and the fields must first be expanded into bilinear particle-anti-particle fermion operators. This new representation also leads to vanishing v.e.v.'s and avoids some common techn...
Pressler, David E.
2012-03-01
A great discrepancy exists - the speed of light and the neutrino speed must be identical, as indicated by supernova 1987A; yet OPERA predicts faster-than-light neutrinos. Einstein's theories are based on the invariance of the speed of light, and no privileged Galilean frame of reference exists. Both of these hypotheses are in error and must be reconciled in order to solve the dilemma. The Michelson-Morley experiment was misinterpreted - my Neoclassical Theory postulates that BOTH mirrors of the interferometer physically and absolutely move towards its center. The result is a three-directional contraction (x, y, z axes), an actual distortion of space itself; a C-Space condition. "PRESSLER'S LAW OF C-SPACE: The speed of light, c, will always be measured the same speed in all three directions (~300,000 km/sec), in one's own inertial reference system, and will always be measured as having a different speed in all other inertial frames which are at a different kinetic energy level or at a location with a different strength gravity field." Thus, the faster you go (motion), or the stronger the gravity field, the smaller you get in all three directions. The OPERA results are explained; at the surface of the Earth, the strength of the gravity field is at a maximum - below the Earth's surface, time and space are less distorted; therefore, time is absolutely faster accordingly. Reference OPERA's preprint: the neutrino's faster time-effect is due to the altitude difference; (10^-13 ns) x c (299792458 m) = 2.9 x 10^-5 m/ns x distance (730085 m) + 21.8 m. This is consistent with the OPERA result.
Jaehnert, Martin [MPIWG, Berlin (Germany)
2013-07-01
In 1922 Niels Bohr wrote a letter to Arnold Sommerfeld complaining that: "[i]n the last years my attempts to develop the principles of quantum theory were met with very little understanding." Looking for the correspondence idea in publications, one finds that the principle was indeed hardly applied by physicists outside of Copenhagen. Only by 1922 did physicists from the wider research networks of quantum theory start to transfer the principle into their research fields, often far removed from its initial realm of atomic spectroscopy. How and why did physicists suddenly become interested in the idea that Bohr's writings had been promoting since 1918? How was the correspondence principle transferred to these fields, and how did its transfer affect these fields and likewise the correspondence principle itself? To discuss these questions, my talk focuses on the work of James Franck and Friedrich Hund on the Ramsauer effect in 1922 and follows the interrelation of the developing understanding of a newly found effect and the adaptation of the correspondence idea in a new conceptual and sociological context.
Maximum likely scale estimation
Loog, Marco; Pedersen, Kim Steenstrup; Markussen, Bo
2005-01-01
A maximum likelihood local scale estimation principle is presented. An actual implementation of the estimation principle uses second order moments of multiple measurements at a fixed location in the image. These measurements consist of Gaussian derivatives possibly taken at several scales and/or ...
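A sketch of scale selection from Gaussian-derivative measurements, assuming the classic normalized-response criterion (pick the scale with the strongest s²-weighted second-derivative response at a point). The paper's estimator uses second-order moments of several such measurements, so this is only the simplest relative of the idea:

```python
import numpy as np

def scale_select(signal, scales):
    """Return the scale with the largest s^2-normalized response of a
    Gaussian second-derivative filter at the signal's center.
    Illustrative sketch, not the maximum-likelihood estimator itself.
    """
    signal = np.asarray(signal, dtype=float)
    n = len(signal)
    x = np.arange(n) - n // 2
    best, best_s = -np.inf, None
    for s in scales:
        # second derivative of a normalized Gaussian of width s
        g2 = (x**2 / s**4 - 1.0 / s**2) * np.exp(-x**2 / (2.0 * s**2))
        g2 /= np.sqrt(2.0 * np.pi) * s
        # normalized response at the center sample
        resp = s**2 * abs(np.convolve(signal, g2, mode="same")[n // 2])
        if resp > best:
            best, best_s = resp, s
    return best_s
```

For a Gaussian blob of width t, the s²-normalized second-derivative response peaks near s = √2·t, so the selected scale tracks the local structure size.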
Milton, Kimball A
2015-01-01
Starting from the earlier notions of stationary action principles, these tutorial notes show how Schwinger's Quantum Action Principle descended from Dirac's formulation, which independently led Feynman to his path-integral formulation of quantum mechanics. Part I brings out in more detail the connection between the two formulations, and applications are discussed. Then, the Keldysh-Schwinger time-cycle method of extracting matrix elements is described. Part II will discuss the variational formulation of quantum electrodynamics and the development of source theory.
Kvaal, Simen; Helgaker, Trygve
2015-11-14
The relationship between the densities of ground-state wave functions (i.e., the minimizers of the Rayleigh-Ritz variation principle) and the ground-state densities in density-functional theory (i.e., the minimizers of the Hohenberg-Kohn variation principle) is studied within the framework of convex conjugation, in a generic setting covering molecular systems, solid-state systems, and more. Having introduced admissible density functionals as functionals that produce the exact ground-state energy for a given external potential by minimizing over densities in the Hohenberg-Kohn variation principle, necessary and sufficient conditions on such functionals are established to ensure that the Rayleigh-Ritz ground-state densities and the Hohenberg-Kohn ground-state densities are identical. We apply the results to molecular systems in the Born-Oppenheimer approximation. For any given potential v ∈ L^{3/2}(ℝ³) + L^∞(ℝ³), we establish a one-to-one correspondence between the mixed ground-state densities of the Rayleigh-Ritz variation principle and the mixed ground-state densities of the Hohenberg-Kohn variation principle when the Lieb density-matrix constrained-search universal density functional is taken as the admissible functional. A similar one-to-one correspondence is established between the pure ground-state densities of the Rayleigh-Ritz variation principle and the pure ground-state densities obtained using the Hohenberg-Kohn variation principle with the Levy-Lieb pure-state constrained-search functional. In other words, all physical ground-state densities (pure or mixed) are recovered with these functionals and no false densities (i.e., minimizing densities that are not physical) exist. The importance of topology (i.e., choice of Banach space of densities and potentials) is emphasized and illustrated. The relevance of these results for current-density-functional theory is examined.
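The variation principles being compared admit a compact convex-conjugate statement; the display below is the standard textbook form of the Hohenberg-Kohn/Lieb conjugate pair, included for orientation rather than quoted from the paper:

```latex
E(v) \;=\; \inf_{\rho}\Bigl\{\, F(\rho) + \int v(\mathbf r)\,\rho(\mathbf r)\,\mathrm d\mathbf r \,\Bigr\},
\qquad
F(\rho) \;=\; \sup_{v}\Bigl\{\, E(v) - \int v(\mathbf r)\,\rho(\mathbf r)\,\mathrm d\mathbf r \,\Bigr\}.
```

In this language a functional F is admissible when the first infimum reproduces the exact ground-state energy for every admissible potential v, which is the setting in which the one-to-one density correspondences above are established.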
Andersen, Steven Arild Wuyts; Mikkelsen, Peter Trier; Konge, Lars
2016-01-01
Cognitive overload can inhibit learning, and cognitive load theory-based instructional design principles can be used to optimize learning situations. This study aims to investigate the effect of implementing cognitive load theory-based design principles in virtual reality simulation training...
Revisiting a theory of negotiation: the utility of Markiewicz (2005) proposed six principles.
McDonald, Diane
2008-08-01
their differences and be willing to move on. But the problem is that evaluators are not necessarily equipped with the technical or personal skills required for effective negotiation. In addition, the time and effort that are required to undertake this mediating role are often not sufficiently understood by those who commission a review. With such issues in mind Markiewicz, A. [(2005). A balancing act: Resolving multiple stakeholder interests in program evaluation. Evaluation Journal of Australasia, 4(1-2), 13-21] has proposed six principles upon which to build a case for negotiation to be integrated into the evaluation process. This paper critiques each of these principles in the context of an evaluation undertaken of a youth program. In doing so it challenges the view that stakeholder consensus is always possible if program improvement is to be achieved. This has led to some refinement and further extension of the proposed theory of negotiation that is seen to be instrumental to the role of an evaluator.
Okie, Jordan G.; Van Horn, David J.; Storch, David; Barrett, John E.; Gooseff, Michael N.; Kopsova, Lenka; Takacs-Vesbach, Cristina D.
2015-01-01
The causes of biodiversity patterns are controversial and elusive due to complex environmental variation, covarying changes in communities, and lack of baseline and null theories to differentiate straightforward causes from more complex mechanisms. To address these limitations, we developed general diversity theory integrating metabolic principles with niche-based community assembly. We evaluated this theory by investigating patterns in the diversity and distribution of soil bacteria taxa across four orders of magnitude variation in spatial scale on an Antarctic mountainside in low complexity, highly oligotrophic soils. Our theory predicts that lower temperatures should reduce taxon niche widths along environmental gradients due to decreasing growth rates, and the changing niche widths should lead to contrasting α- and β-diversity patterns. In accord with the predictions, α-diversity, niche widths and occupancies decreased while β-diversity increased with increasing elevation and decreasing temperature. The theory also successfully predicts a hump-shaped relationship between α-diversity and pH and a negative relationship between α-diversity and salinity. Thus, a few simple principles explained systematic microbial diversity variation along multiple gradients. Such general theory can be used to disentangle baseline effects from more complex effects of temperature and other variables on biodiversity patterns in a variety of ecosystems and organisms. PMID:26019154
Chi, Do Minh
2001-01-01
We advance a famous principle, the causality principle, but under a new view. This principle is a principium that automatically leads to the most fundamental laws of nature. It is the inner origin of variation, it rules the evolutionary processes of things, and it answers the quest for ultimate theories of the Universe.
陈洪兵
2012-01-01
Suiting punishment to crime is not only a legislative principle but also a principle that interpretative theory should follow, and it plays an important guiding role in the interpretation of constitutive requirements. The previous understanding, which strictly distinguishes overlap of articles of law from imaginative joinder of offenses and reads the clause "where this Law provides otherwise, those provisions shall apply" as permitting only the special provision, routinely blames the legislation for defects and inevitably produces widespread imbalance between crime and punishment. We should decisively abandon this earlier understanding and practice, implement the principle of suiting punishment to crime in interpretative theory to the maximum extent, and make full use of the concurrence doctrine of punishing according to the heavier provision, so as to realize the fairness and justice of criminal law.
Effective Principles in Designing E-Course in Light of Learning Theories
Afifi, Muhammad K.; Alamri, Saad S.
2014-01-01
The researchers conducted an exploratory study to determine the design quality of some E-courses delivered via the web to a number of colleagues at the university. Results revealed a number of shortcomings in the design of these courses, mostly due to the absence of effective principles in the design of these E-courses, especially principles of…
Lubrication Chemistry Viewed from DFT-Based Concepts and Electronic Structural Principles
Jin Yuansheng; Yang He; Li Shenghua
2003-01-01
Abstract: Fundamental molecular issues in lubrication chemistry were reviewed under categories of solution chemistry, contact chemistry and tribochemistry. By introducing the Density Functional Theory (DFT)-derived chemical reactivity parameters (chemical potential, electronegativity, hardness, softness and Fukui function) and related electronic structural principles (electronegativity equalization principle, hard-soft acid-base principle, and maximum hardness principle), their relevancy to lu...
Wu, Y L
2003-01-01
Through defining irreducible loop integrals (ILIs), a set of conditions for the regularized (quadratically and logarithmically) divergent ILIs is derived from the generalized Ward identities of gauge invariance in non-Abelian gauge theories. Overlapping ultraviolet (UV) divergences are explicitly shown to be factorizable in the ILIs and to be harmless. A new regularization and renormalization method is presented in the initial space-time dimension of the theory. The procedure respects unitarity and causality. Of interest, the method leads to an infinity-free renormalization and meanwhile maintains the symmetry principles of the original theory, except for the conformal scaling symmetry breaking caused by the intrinsic mass scale and the anomaly-induced symmetry breaking. Quantum field theories (QFTs) regularized through the new method are well defined and governed by a physically meaningful characteristic energy scale (CES) $M_c$ and a physically interesting sliding energy scale (SES) $\\mu_s$ which can run from $\\mu_s \\si...
Rawls and the Aristotelian Principle An Approach to the Idea of the Good in A Theory of Justice
Pablo Andrés Aguayo Westwood
2014-12-01
Full Text Available In order to ground and reinforce his theory of primary goods, J. Rawls introduces the idea of the “Aristotelian principle” in §65 of A Theory of Justice. The article discusses the difficulties entailed by the acceptance of this notion, as well as the limitations of the idea of the good underlying that principle. The objective is to show that the conception of the good presented by Rawls is affected by “moral insufficiency” and to argue in support of the thesis that his approach to the idea of the good overshadows the practical dimension of rationality.
Tang, Jian-Shun; Wang, Yi-Tao; Yu, Shang; He, De-Yong; Xu, Jin-Shi; Liu, Bi-Heng; Chen, Geng; Sun, Yong-Nan; Sun, Kai; Han, Yong-Jian; Li, Chuan-Feng; Guo, Guang-Can
2016-10-01
The experimental progress achieved in parity-time (PT) symmetry in classical optics is the most important accomplishment in the past decade and stimulates many new applications, such as unidirectional light transport and single-mode lasers. However, in the quantum regime, some controversial effects have been proposed for PT-symmetric theory, for example, the potential violation of the no-signalling principle. It is therefore important to understand whether PT-symmetric theory is consistent with well-established principles. Here, we experimentally study this no-signalling problem related to PT-symmetric theory using two space-like separated entangled photons, with one of them passing through a post-selected quantum gate, which effectively simulates a PT-symmetric evolution. Our results suggest that superluminal information transmission can be simulated when the successfully PT-symmetrically evolved subspace is considered on its own. However, since this subspace is only a part of the full Hermitian system, additional information regarding whether the PT-symmetric evolution is successful is necessary; this information reaches the receiver at most at light speed, maintaining the no-signalling principle.
Kelderman, Henk
1992-01-01
In this paper algorithms are described for obtaining the maximum likelihood estimates of the parameters in loglinear models. Modified versions of the iterative proportional fitting and Newton-Raphson algorithms are described that work on the minimal sufficient statistics rather than on the usual counts.
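The iterative proportional fitting (IPF) idea can be sketched for the simplest loglinear model, independence in a two-way table, where the minimal sufficient statistics are the row and column totals (an illustrative sketch of standard IPF, not the paper's modified algorithm; the table and function name are hypothetical):

```python
# Iterative proportional fitting (IPF) for the independence loglinear model
# of a two-way table: the ML fitted counts are the table that matches the
# observed row and column totals (the minimal sufficient statistics).

def ipf_independence(table, iters=50):
    rows = [sum(r) for r in table]            # observed row margins
    cols = [sum(c) for c in zip(*table)]      # observed column margins
    fit = [[1.0] * len(cols) for _ in rows]   # start from a constant table
    for _ in range(iters):
        # scale each row to match its observed row total
        for i, r in enumerate(fit):
            s = sum(r)
            fit[i] = [x * rows[i] / s for x in r]
        # scale each column to match its observed column total
        for j in range(len(cols)):
            s = sum(fit[i][j] for i in range(len(rows)))
            for i in range(len(rows)):
                fit[i][j] *= cols[j] / s
    return fit

observed = [[10, 20], [30, 40]]
fitted = ipf_independence(observed)   # fitted[0][0] == 30 * 40 / 100 == 12.0
```

Each sweep rescales the fitted table to match the observed margins; for the independence model the iteration converges to the familiar closed form row total × column total / grand total.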
Laben, J K; Dodd, D; Sneed, L
1991-01-01
Group psychotherapy has been considered the treatment of choice by many therapists working with offenders within the criminal justice system. However, there has been little written by nurses regarding this special population. This article's purpose is to illustrate how King's theory of goal attainment may be used in conducting group psychotherapy with offender populations. The application of King's model is demonstrated in three milieus: an inpatient setting for juvenile sexual offenders, a state maximum security prison, and a halfway house for offenders involved in a work-release program. The methodology and use of visual aids in actualizing King's theory of mutual goal setting and goal attainment are discussed.
Fischer, Alastair J; Ghelardi, Gemma
2016-01-01
The precautionary principle (PP) has been used in the evaluation of the effectiveness and/or cost-effectiveness of interventions designed to prevent future harms in a range of activities, particularly in the area of the environment. Here, we provide details of circumstances under which the PP can be applied to the topic of harm reduction in Public Health. The definition of PP that we use says that the PP reverses the onus of proof of effectiveness between an intervention and its comparator when the intervention has been designed to reduce harm. We first describe the two frameworks used for health-care evaluation: evidence-based medicine (EBM) and decision theory (DT). EBM is usually used in treatment effectiveness evaluation, while either EBM or DT may be used in evaluating the effectiveness of the prevention of illness. For cost-effectiveness, DT is always used. The expectation in Public Health is that interventions employed to reduce harm will not actually increase harm, where "harm" in this context does not include opportunity cost. That implies that an intervention's effectiveness can often be assumed. Attention should therefore focus on its cost-effectiveness. This view is consistent with the conclusions of DT. It is also very close to the PP notion of reversing the onus of proof, but is not consistent with EBM as normally practiced, where the onus is on showing a new practice to be superior to usual practice with a sufficiently high degree of certainty. Under our definitions, we show that where DT and the PP differ in their evaluation is in cost-effectiveness, but only for decisions that involve potential catastrophic circumstances, where the nation-state will act as if it is risk-averse. In those cases, it is likely that the state will pay more, and possibly much more, than DT would allow, in an attempt to mitigate impending disaster. That is, the rules that until now have governed all cost-effectiveness analyses are shown not to apply to catastrophic
Kawazura, Yohei; Morrison, Philip J
2016-01-01
Two types of Eulerian action principles for relativistic extended magnetohydrodynamics (MHD) are formulated. With the first, the action is extremized under the constraints of density, entropy, and Lagrangian label conservation, which leads to a Clebsch representation for a generalized momentum and a generalized vector potential. The second action arises upon transformation to physical field variables, giving rise to a covariant bracket action principle, i.e., a variational principle in which constrained variations are generated by a degenerate Poisson bracket. Upon taking appropriate limits, the action principles lead to relativistic Hall MHD and well-known relativistic ideal MHD. For the first time, the Hamiltonian formulation of relativistic Hall MHD with electron thermal inertia (akin to [Comisso \\textit{et al.}, Phys. Rev. Lett. {\\bf 113}, 045001 (2014)] for the electron--positron plasma) is introduced. This thermal inertia effect allows for violation of the frozen-in magnetic flux condition in marked con...
Design Principles for Serious Video Games in Mathematics Education: From Theory to Practice
Konstantinos Chorianopoulos; Michail Giannakos
2014-01-01
There is growing interest in the employment of serious video games in science education, but there are no clear design principles. After surveying previous work in serious video game design, we highlighted the following design principles: 1) engage the students with narrative (hero, story), 2) employ familiar gameplay mechanics from popular video games, 3) engage students into constructive trial and error game-play and 4) situate collaborative learning. As illustrated examples we designed two...
Zhang, Wei; Zhang, Rui-xian; Li, Jian
2015-12-01
All previous literature about Chinese herbal medicines shows distinctive traditional Chinese medicine (TCM) flavors. Compendium of Materia Medica is an influential book in TCM history. The TCM flavor theory and the flavor standardization principle in this book have important significance for modern TCM flavor standardization. Compendium of Materia Medica pays attention to the flavor theory and explains the relations between the flavor of a medicine and its therapeutic effects by means of the Neo-Confucianism of the Song and Ming Dynasties. However, the book does not reflect or further develop the systemic theory, which originated in the Jin and Yuan dynasties. In Compendium of Materia Medica, flavors are standardized just by tasting medicines, instead of deducing flavors. Therefore, medicine tasting should be adopted as the major method to standardize the flavor of medicine.
Szalma, James L
2014-12-01
Motivation is a driving force in human-technology interaction. This paper represents an effort to (a) describe a theoretical model of motivation in human-technology interaction, (b) provide design principles and guidelines based on this theory, and (c) describe a sequence of steps for the evaluation of motivational factors in human-technology interaction. Motivation theory has been relatively neglected in human factors/ergonomics (HF/E). In both research and practice, the (implicit) assumption has been that the operator is already motivated or that motivation is an organizational concern and beyond the purview of HF/E. However, technology can induce task-related boredom (e.g., automation) that can be stressful and also increase system vulnerability to performance failures. A theoretical model of motivation in human-technology interaction is proposed, based on extension of the self-determination theory of motivation to HF/E. This model provides the basis both for future research and for development of practical recommendations for design. General principles and guidelines for motivational design are described, as well as a sequence of steps for the design process. Human motivation is an important concern for HF/E research and practice. Procedures in the design of both simple and complex technologies can, and should, include the evaluation of motivational characteristics of the task, interface, or system. In addition, researchers should investigate these factors in specific human-technology domains. The theory, principles, and guidelines described here can be incorporated into existing techniques for task analysis and for interface and system design.
Houde, Joseph
2006-01-01
Andragogy, originally proposed by Malcolm Knowles, has been criticized as an atheoretical model. Validation of andragogy has been advocated by scholars, and this paper explores one method for that process. Current motivation theory, specifically socioemotional selectivity and self-determination theory correspond with aspects of andragogy. In…
Scaling theory put into practice: First-principles modeling of transport in doped silicon nanowires
Markussen, Troels; Rurali, R.; Jauho, Antti-Pekka
2007-01-01
We combine the ideas of scaling theory and universal conductance fluctuations with density-functional theory to analyze the conductance properties of doped silicon nanowires. Specifically, we study the crossover from ballistic to diffusive transport in boron or phosphorus doped Si nanowires...
Bussotti, Paolo
2015-01-01
This book presents new insights into Leibniz’s research on planetary theory and his system of pre-established harmony. Although some aspects of this theory have been explored in the literature, others are less well known. In particular, the book offers new contributions on the connection between the planetary theory and the theory of gravitation. It also provides an in-depth discussion of Kepler’s influence on Leibniz’s planetary theory and, more generally, on Leibniz’s concept of pre-established harmony. Three initial chapters presenting the mathematical and physical details of Leibniz’s works provide a frame of reference. The book then goes on to discuss research on Leibniz’s conception of gravity and the connection between Leibniz and Kepler.
Mekios, Constantinos
2016-04-01
Twentieth-century theoretical efforts towards the articulation of general system properties came short of having the significant impact on biological practice that their proponents envisioned. Although the latter did arrive at preliminary mathematical formulations of such properties, they had little success in showing how these could be productively incorporated into the research agenda of biologists. Consequently, the gap that kept system-theoretic principles cut-off from biological experimentation persisted. More recently, however, simple theoretical tools have proved readily applicable within the context of systems biology. In particular, examples reviewed in this paper suggest that rigorous mathematical expressions of design principles, imported primarily from engineering, could produce experimentally confirmable predictions of the regulatory properties of small biological networks. But this is not enough for contemporary systems biologists who adopt the holistic aspirations of early systemologists, seeking high-level organizing principles that could provide insights into problems of biological complexity at the whole-system level. While the presented evidence is not conclusive about whether this strategy could lead to the realization of the lofty goal of a comprehensive explanatory integration, it suggests that the ongoing quest for organizing principles is pragmatically advantageous for systems biologists. The formalisms postulated in the course of this process can serve as bridges between system-theoretic concepts and the results of molecular experimentation: they constitute theoretical tools for generalizing molecular data, thus producing increasingly accurate explanations of system-wide phenomena.
Kawazura, Yohei; Miloshevich, George; Morrison, Philip J.
2017-02-01
Two types of Eulerian action principles for relativistic extended magnetohydrodynamics (MHD) are formulated. With the first, the action is extremized under the constraints of density, entropy, and Lagrangian label conservation, which leads to a Clebsch representation for a generalized momentum and a generalized vector potential. The second action arises upon transformation to physical field variables, giving rise to a covariant bracket action principle, i.e., a variational principle in which constrained variations are generated by a degenerate Poisson bracket. Upon taking appropriate limits, the action principles lead to relativistic Hall MHD and well-known relativistic ideal MHD. For the first time, the Hamiltonian formulation of relativistic Hall MHD with electron thermal inertia (akin to Comisso et al., Phys. Rev. Lett. 113, 045001 (2014) for the electron-positron plasma) is introduced. This thermal inertia effect allows for violation of the frozen-in magnetic flux condition in marked contrast to nonrelativistic Hall MHD that does satisfy the frozen-in condition. We also find the violation of the frozen-in condition is accompanied by freezing-in of an alternative flux determined by a generalized vector potential. Finally, we derive a more general 3 + 1 Poisson bracket for nonrelativistic extended MHD, one that does not assume smallness of the electron ion mass ratio.
Owusu-Agyeman, Yaw; Larbi-Siaw, Otu
2017-01-01
This study argues that in developing a robust framework for students in a blended learning environment, Structural Alignment (SA) becomes the third principle of specialisation in addition to Epistemic Relation (ER) and Social Relation (SR). We provide an extended code: (ER+/-, SR+/-, SA+/-) that present strong classification and framing to the…
Design Principles for Serious Video Games in Mathematics Education: From Theory to Practice
Konstantinos Chorianopoulos
2014-09-01
Full Text Available There is growing interest in the employment of serious video games in science education, but there are no clear design principles. After surveying previous work in serious video game design, we highlighted the following design principles: 1) engage the students with narrative (hero, story), 2) employ familiar gameplay mechanics from popular video games, 3) engage students into constructive trial and error game-play and 4) situate collaborative learning. As illustrated examples we designed two math video games targeted to primary education students. The gameplay of the math video games embeds addition operations in a seamless way, which has been inspired by that of classic platform games. In this way, the students are adding numbers as part of popular gameplay mechanics and as a means to reach the video game objective, rather than as an end in itself. The employment of well-defined principles in the design of math video games should facilitate the evaluation of learning effectiveness by researchers. Moreover, educators can deploy alternative versions of the games in order to engage students with diverse learning styles. For example, some students might be motivated and benefited by narrative, while others by collaboration, because it is unlikely that one type of serious video game might fit all learning styles. The proposed principles are not meant to be an exhaustive list, but a starting point for extending the list and applying them in other cases of serious video games beyond mathematics and learning.
A variational principle for compressible fluid mechanics: Discussion of the multi-dimensional theory
Prozan, R. J.
1982-01-01
The variational principle for compressible fluid mechanics previously introduced is extended to two dimensional flow. The analysis is stable, exactly conservative, adaptable to coarse or fine grids, and very fast. Solutions for two dimensional problems are included. The excellent behavior and results lend further credence to the variational concept and its applicability to the numerical analysis of complex flow fields.
A Proposed New "Nano-Particle" Theory of Light Based on Heat Transfer Principles
Das, Ashis
2004-05-01
To date, theories of light (visible and other radiations over the electromagnetic scale) are divided into two classes, viz. particle and wave theory. A particle on the classical view is a concentration of energy and other properties in space and time, whereas a wave is spread out over a larger region of space and time. It is generally understood that particle theory talks about corpuscles of finite measurable mass whereas wave theory is about packets of massless energy. This paper is a summary of thoughts collected so far on building an only-particle theory of light or other radiations, assuming the Universe to be filled with "nano-particles", or very small particles, and large particles. Although revolutionary, thought-provoking, and challenging, the collected pointers outlined in this account appear logical and mathematically sound, although experiments are required to give this theory a firm basis for widespread recognition in scientific forums. The major support for the nano-particle theory comes from the observation of a term called "radiation pressure", which incorporates a sense of impact or pressure, and therefore a force, and so some particle impact, although very feeble compared to normal large-particle impact, yielding a noticeable effect on most pressure gauges measuring it. Similar feeble impact effects are possible in other phenomena like current, magnetic field, etc., whose measurement will require very sensitive instruments. In this paper, I have explained that the common method of estimating momentum and heat transfer, applied to very-small-mass nano-particles, can explain at least three major phenomena of visible light, viz. rectilinear propagation, reflection and refraction. Other phenomena such as diffraction, interference, polarization, diffusion, etc. will be presented in a future paper. This presentation is meant for collecting wide readership views to approve or deny this explanation of the only-particle theory after famous Compton scattering
The physical Church-Turing thesis and the principles of quantum theory
Arrighi, Pablo
2011-01-01
Notoriously, quantum computation shatters complexity theory, but is innocuous to computability theory. Yet several works have shown how quantum theory as it stands could breach the physical Church-Turing thesis. We draw a clear line as to when this is the case, in a way that is inspired by Gandy. Gandy formulates postulates about physics, such as homogeneity of space and time, bounded density and velocity of information --- and proves that the physical Church-Turing thesis is a consequence of these postulates. We provide a quantum version of the theorem. Thus this approach exhibits a formal non-trivial interplay between theoretical physics symmetries and computability assumptions.
Study on Universal Principle in Literary Theory Teaching%文学理论教学中的原则探究
陈长利
2015-01-01
Facing the drawbacks of the essentialist mode of teaching literary theory, this essay systematically discusses the six basic principles of a teaching mode based on relationalist literary theory: the relation principle, the framing principle, the process principle, the dialectical principle, the dialogue principle, and the problem principle. These principles concern the nature, purpose, method, composition, mode, and driving force of literary theory teaching. The essay is a systematic reflection on, and theoretical description of, the universal principles to be followed in literary theory teaching after essentialist literary theory.
Hong Xu 徐红; David LAWSON
2004-01-01
Osteoporosis is a worldwide problem that is increasing in significance as the global population both increases and ages. While osteoporosis has been extensively studied in recent years, the utilization of Traditional Chinese Herbal Medicine for the prevention and treatment of this condition has seldom been examined in the Western world. This paper reviews the theories and the literature that relate to prevention and treatment of bone loss at the time of menopause according to the principles of Traditional Chinese Herbal Medicine. Practical developments in these areas are also illustrated in this paper based on the authors' research findings in recent studies.
Maximum likely scale estimation
Loog, Marco; Pedersen, Kim Steenstrup; Markussen, Bo
2005-01-01
A maximum likelihood local scale estimation principle is presented. An actual implementation of the estimation principle uses second order moments of multiple measurements at a fixed location in the image. These measurements consist of Gaussian derivatives possibly taken at several scales and/or having different derivative orders. Although the principle is applicable to a wide variety of image models, the main focus here is on the Brownian model and its use for scale selection in natural images. Furthermore, in the examples provided, the simplifying assumption is made that the behavior of the measurements is completely characterized by all moments up to second order.
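The selection principle can be illustrated with a toy one-dimensional analogue (an assumption-laden sketch, not the authors' implementation: the candidate scales, the linear variance model `var_model`, and the simulated responses are all invented here): model the responses at one location as zero-mean Gaussians whose variance depends on the hypothesized local scale, and pick the scale that maximizes the log-likelihood.

```python
import math
import random

# Toy maximum-likelihood scale selection: under a hypothesised scale s, each
# measurement is modelled as a zero-mean Gaussian whose variance var_model(s)
# the image model predicts; the maximum likely scale maximises the Gaussian
# log-likelihood of the observed measurements over a grid of candidate scales.

def log_likelihood(xs, var):
    # log of a product of zero-mean Gaussian densities with common variance
    return sum(-0.5 * x * x / var - 0.5 * math.log(2 * math.pi * var)
               for x in xs)

def ml_scale(xs, candidate_scales, var_model):
    return max(candidate_scales, key=lambda s: log_likelihood(xs, var_model(s)))

# Hypothetical model: measurement variance grows linearly with scale.
var_model = lambda s: 0.5 * s

random.seed(0)
true_scale = 4.0
xs = [random.gauss(0.0, math.sqrt(var_model(true_scale))) for _ in range(500)]
est = ml_scale(xs, [1.0, 2.0, 4.0, 8.0], var_model)
```

Because only second-order moments enter the zero-mean Gaussian likelihood, the selected scale is the candidate whose predicted variance best matches, in the likelihood sense, the empirical second moment of the responses.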
Schmidt, Jesper Hvass; Brandt, Christian; Pedersen, Ellen Raben
2014-01-01
Objective: To create a user-operated pure-tone audiometry method based on the method of maximum likelihood (MML) and the two-alternative forced-choice (2AFC) paradigm with high test-retest reliability, without the need of an external operator, and with minimal influence of subjects' fluctuating response criteria. User-operated audiometry was developed as an alternative to traditional audiometry for research purposes among musicians. Design: Test-retest reliability of the user-operated audiometry system was evaluated and the user-operated audiometry system was compared with traditional audiometry. Study sample: Test-retest reliability of user-operated 2AFC audiometry was tested with 38 naïve listeners. User-operated 2AFC audiometry was compared to traditional audiometry in 41 subjects. Results: The repeatability of user-operated 2AFC audiometry was comparable to traditional audiometry...
Variational principles for buckling and vibration of MWCNTs modeled by strain gradient theory
徐晓建; 邓子辰
2014-01-01
Variational principles for the buckling and vibration of multi-walled carbon nanotubes (MWCNTs) are established with the aid of the semi-inverse method. They are used to derive the natural and geometric boundary conditions coupled by small scale parameters. Hamilton’s principle and Rayleigh’s quotient for the buckling and vibration of the MWCNTs are given. The Rayleigh-Ritz method is used to study the buckling and vibration of the single-walled carbon nanotubes (SWCNTs) and double-walled carbon nanotubes (DWCNTs) with three typical boundary conditions. The numerical results reveal that the small scale parameter, aspect ratio, and boundary conditions have a profound effect on the buckling and vibration of the SWCNTs and DWCNTs.
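Rayleigh's quotient, which the abstract applies to carbon nanotube models, can be checked on the textbook case of a simply supported Euler-Bernoulli beam (a classical local-theory sketch with unit properties assumed, not the paper's nonlocal model): with the trial shape w(x) = sin(πx/L), the quotient ω² = ∫ EI (w″)² dx / ∫ ρA w² dx reproduces the exact fundamental frequency ω₁ = (π/L)² √(EI/ρA).

```python
import math

# Rayleigh's quotient for a simply supported Euler-Bernoulli beam.
# Unit properties EI = rho*A = L = 1 are assumptions for illustration; the
# trial function sin(pi x / L) happens to be the exact first mode shape, so
# the quotient reproduces omega_1 = (pi/L)**2 * sqrt(EI/(rho*A)) = pi**2.

EI, rhoA, L, n = 1.0, 1.0, 1.0, 2000

def trapz(f, a, b, n):
    # composite trapezoidal rule on n subintervals
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return s * h

w = lambda x: math.sin(math.pi * x / L)                          # trial deflection
w2 = lambda x: -(math.pi / L) ** 2 * math.sin(math.pi * x / L)   # second derivative

num = trapz(lambda x: EI * w2(x) ** 2, 0.0, L, n)   # bending strain energy term
den = trapz(lambda x: rhoA * w(x) ** 2, 0.0, L, n)  # kinetic energy term
omega = math.sqrt(num / den)                        # fundamental frequency estimate
```

Because the Rayleigh quotient is stationary at the true mode, even an approximate trial shape gives an upper bound on the fundamental frequency; that is the property the Rayleigh-Ritz method exploits when exact mode shapes are unavailable.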
Staats, Peter S; Hekmat, Hamid; Staats, Arthur W
2004-01-01
The psychological behaviorism theory of pain unifies biological, behavioral, and cognitive-behavioral theories of pain and facilitates development of a common vocabulary for pain research across disciplines. Pain investigation proceeds in seven interacting realms: basic biology, conditioned learning, language cognition, personality differences, pain behavior, the social environment, and emotions. Because pain is an emotional response, examining the bidirectional impact of emotion is pivotal to understanding pain. Emotion influences each of the other areas of interest and causes the impact of each factor to amplify or diminish in an additive fashion. Research based on this theory of pain has revealed the ameliorating impact on pain of (1) improving mood by engaging in pleasant sexual fantasies, (2) reducing anxiety, and (3) reducing anger through various techniques. Application of the theory to therapy improved the results of treatment of osteoarthritic pain. The psychological behaviorism theory of the placebo considers the placebo a stimulus conditioned to elicit a positive emotional response. This response is most powerful if it is elicited by conditioned language. Research based on this theory of the placebo indicates that pain is ameliorated by a placebo suggestion and augmented by a nocebo suggestion, and that pain sensitivity and pain anxiety increase susceptibility to a placebo.
Shen Xiaoming; Du Yuanhao; Shi Xuemin
2005-01-01
The brain is the sea of marrow; it stores the cerebral spirit and dominates all the life activities of the human body. These are the basic TCM concepts of the brain. Based on this knowledge, the pathogenesis of climacteric syndrome is considered to be consumption and deficiency of kidney-essence and incoordination between the brain and kidney. The principle of acupuncture treatment should be soothing the mind and tonifying the kidney.
Using the IRPA Guiding Principles on Stakeholder Engagement: putting theory into practice.
Jones, C Rick
2011-11-01
The International Radiation Protection Association (IRPA) published their Guiding Principles for Radiation Protection Professionals on Stakeholder Engagement in February 2009. The publication of this document is the culmination of four years of work by the Spanish Society for Radiological Protection, the French Society of Radioprotection, the United Kingdom Society of Radiological Protection, and the IRPA organization, with full participation by the Italian Associate Society and the Nuclear Energy Agency's Committee on Radiation Protection and Public Health. The Guiding Principles provide field-tested and sound counsel to the radiation protection profession to aid it in successfully engaging with stakeholders in decision-making processes that result in mutually agreeable and sustainable decisions. Stakeholders in the radiation protection decision making process are now being recognized as a spectrum of individuals and organizations specific to the situation. It is also important to note that stakeholder engagement is not needed or advised in all decision making situations, although it has been shown to be a tool of first choice in dealing with such topics as intervention and chronic exposure situations, as well as situations that have reached an impasse using traditional approaches to decision-making. To enhance the contribution of the radiation protection profession, it is important for radiation protection professionals and their national professional societies to embrace and implement the IRPA Guiding Principles in a sustainable way by making them a cornerstone of their operations and an integral part of day-to-day activities.
Myslivets, S A; Kimberg, V V; George, T F; George, Thomas F.
2003-01-01
A scheme is analyzed for efficient generation of vacuum ultraviolet radiation through four-wave mixing processes assisted by the technique of Stark-chirped rapid adiabatic passage. These opportunities are associated with pulse excitation of ladder-type short-wavelength two-photon atomic or molecular transitions so that relaxation processes can be neglected. In this three-laser technique, a delayed pulse of strong off-resonant infrared radiation sweeps the laser-induced Stark shift of a two-photon transition in such a way as to facilitate robust maximum two-photon coherence induced by the first ultraviolet laser. A judiciously delayed third pulse scatters at this coherence and generates short-wavelength radiation. A theoretical analysis of these problems based on the density matrix is performed. A numerical model is developed to carry out simulations of a typical experiment. The results illustrate the behavior of populations, coherence and generated radiation along the medium as well as opportunities for efficient ge...
Tsyshevsky, Roman V; Sharia, Onise; Kuklja, Maija M
2016-02-19
This review presents a concept, which assumes that thermal decomposition processes play a major role in defining the sensitivity of organic energetic materials to detonation initiation. As a science and engineering community we are still far away from having a comprehensive molecular detonation initiation theory in a widely agreed upon form. However, recent advances in experimental and theoretical methods allow for a constructive and rigorous approach to design and test the theory or at least some of its fundamental building blocks. In this review, we analyzed a set of select experimental and theoretical articles, which were augmented by our own first principles modeling and simulations, to reveal new trends in energetic materials and to refine known existing correlations between their structures, properties, and functions. Our consideration is intentionally limited to the processes of thermally stimulated chemical reactions at the earliest stage of decomposition of molecules and materials containing defects.
戴安民
2003-01-01
The purpose is to reestablish the coupled conservation laws, the local conservation equations and the jump conditions of mass and inertia for polar continuum theories. In this connection the new material derivatives of the deformation gradient, the line element, the surface element and the volume element were derived and the generalized Reynolds transport theorem was presented. Combining these conservation laws of mass and inertia with the balance laws of momentum, angular momentum and energy derived in our previous papers of this series, a rather complete system of coupled basic laws and principles for polar continuum theories is constituted on the whole. From this system the coupled nonlocal balance equations of mass, inertia, momentum, angular momentum and energy may be obtained by the usual localization.
Schild Action and Space-Time Uncertainty Principle in String Theory
Yoneya, T
1997-01-01
We show that the path-integral quantization of relativistic strings with the Schild action is essentially equivalent to the usual Polyakov quantization at critical space-time dimensions. We then present an interpretation of the Schild action which points towards a derivation of superstring theory as a theory of quantized space-time where the squared string scale, $\ell_s^2 \sim \alpha'$, plays the role of the minimum quantum for space-time areas. A tentative approach towards such a goal is proposed, based on a microcanonical formulation of large N supersymmetric matrix model.
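The space-time uncertainty relation that this minimal area quantum suggests is conventionally written (in the author's later expositions; given here as context, not quoted from this abstract) as

\[
\Delta T \,\Delta X \;\gtrsim\; \alpha' \sim \ell_s^{2},
\]

so that resolving temporal structure below $\Delta T$ necessarily blurs spatial structure below $\Delta X$, keeping any probed space-time area above the string scale.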
RENEWAL OF BASIC LAWS AND PRINCIPLES FOR POLAR CONTINUUM THEORIES (Ⅰ)-MICROPOLAR CONTINUA
戴天民
2003-01-01
Based on the restudies of existing polar continuum theories, rather complete systems of basic balance laws and equations for micropolar continuum theory are presented. In these new systems not only the additional angular momentum, surface moment and body moment produced by the linear momentum, surface force and body force, respectively, but also the additional velocity produced by the angular velocity are considered. The new coupled balance laws of linear momentum, angular momentum and energy are reestablished. From them the new coupled local and nonlocal balance equations are naturally derived. Via contrast it can be clearly seen that the new results are believed to be rather general and complete.
Inelastic transport theory from first principles: Methodology and application to nanoscale devices
Frederiksen, Thomas; Paulsson, Magnus; Brandbyge, Mads
2007-01-01
We describe a first-principles method for calculating electronic structure, vibrational modes and frequencies, electron-phonon couplings, and inelastic electron transport properties of an atomic-scale device bridging two metallic contacts under nonequilibrium conditions. The method extends...... approximation. While these calculations often are computationally demanding, we show how they can be approximated by a simple and efficient lowest order expansion. Our method also addresses effects of energy dissipation and local heating of the junction via detailed calculations of the power flow. We...
First-principles theory, coarse-grained models, and simulations of ferroelectrics.
Waghmare, Umesh V
2014-11-18
CONSPECTUS: A ferroelectric crystal exhibits macroscopic electric dipole or polarization arising from spontaneous ordering of its atomic-scale dipoles that breaks inversion symmetry. Changes in applied pressure or electric field generate changes in electric polarization in a ferroelectric, defining its piezoelectric and dielectric properties, respectively, which make it useful as an electromechanical sensor and actuator in a number of applications. In addition, a characteristic of a ferroelectric is the presence of domains or states with different symmetry equivalent orientations of spontaneous polarization that are switchable with large enough applied electric field, a nonlinear property that makes it useful for applications in nonvolatile memory devices. Central to these properties of a ferroelectric are the phase transitions it undergoes as a function of temperature that involve lowering of the symmetry of its high temperature centrosymmetric paraelectric phase. Ferroelectricity arises from a delicate balance between short and long-range interatomic interactions, and hence the resulting properties are quite sensitive to chemistry, strains, and electric charges associated with its interface with substrate and electrodes. First-principles density functional theoretical (DFT) calculations have been very effective in capturing this and predicting material and environment specific properties of ferroelectrics, leading to fundamental insights into origins of ferroelectricity in oxides and chalcogenides uncovering a precise picture of electronic hybridization, topology, and mechanisms. However, use of DFT in molecular dynamics for detailed prediction of ferroelectric phase transitions and associated temperature dependent properties has been limited due to large length and time scales of the processes involved. To this end, it is quite appealing to start with input from DFT calculations and construct material-specific models that are realistic yet simple for use in
Temperament: Theory and Practice. Brunner/Mazel Basic Principles into Practice Series, Volume 12.
Chess, Stella; Thomas, Alexander
This book outlines the basic tenets and applications of the theory of temperament based on the findings of the New York Longitudinal Study begun in 1956. It describes the concept and definition of temperament, reviews studies that support and expand on the definition, and explores temperament and its impact across various practice settings and…
Lattice instability and martensitic transformation in LaAg predicted from first-principles theory
Vaitheeswaran, G.; Kanchana, V.; Zhang, X.
2012-01-01
, calculated using density functional perturbation theory, are in good agreement with available inelastic neutron scattering data. Under pressure, the phonon dispersions develop imaginary frequencies, starting at around 2.3 GPa, in good accordance with the martensitic instability observed above 3.4 GPa...
Aldalalah, Osamah Ahmad; Fong, Soon Fook
2010-01-01
The purpose of this study was to investigate the effects of modality and redundancy principles on the attitude and learning of music theory among primary pupils of different aptitudes in Jordan. The lesson of music theory was developed in three different modes, audio and image (AI), text with image (TI) and audio with image and text (AIT). The…
Prigogine, I; George, C
1983-07-01
The second law of thermodynamics, for quantum systems, is formulated, on the microscopic level. As for classical systems, such a formulation is only possible when specific conditions are satisfied (continuous spectrum, nonvanishing of the collision operator, etc.). The unitary dynamical group can then be mapped into two contractive semigroups, reaching equilibrium either for t → +∞ or for t → -∞. The second law appears as a symmetry-breaking selection principle, limiting the observables and density functions to the class that tends to thermodynamic equilibrium in the future (for t → +∞). The physical content of the dynamical structure is now displayed in terms of the appropriate semigroup, which is realized through a nonunitary transformation. The superposition principle of quantum mechanics has to be reconsidered as irreversible processes transform pure states into mixtures and unitary transformations are limited by the requirement that entropy remains invariant. In the semigroup representation, interacting fields lead to units that behave incoherently at equilibrium. Inversely, nonequilibrium constraints introduce correlations between these units.
Michele Campagna
2014-05-01
Full Text Available Geodesign is a trans-disciplinary concept emerging in a growing debate among scholars in North America, Europe and Asia with the aim of bridging the gap between landscape architecture, spatial planning and design, and Geographic Information Science. The concept entails the application of methods and techniques for planning sustainable development in an integrated process, from project conceptualization to analysis, simulation and evaluation, from scenario design to impact assessment, in a process including stakeholder participation and collaboration in decision-making strongly relaying on the use of digital information technologies. As such, the concept may be not entirely new. However, it is argued here, its application have not reached expected results so far. Hence, more research is needed in order to better understand methodological, technical, organizational, professional and institutional issues for a fruitful application of Geodesign principles and method in the practices. In line with the above assumptions, this paper is aimed at supplying early critical insights as a contribution towards a clearer understanding of the relationships between Geodesign concepts and planning regulations. The auspice with this first endeavour along this research issue is to make a more explicit and robust link between policy principles and planning, design and decision-making methods and tools, possibly as a small contribution to bring innovation in the planning education, governance and practice.
Pastor-Bernier, Alexandre; Plott, Charles R; Schultz, Wolfram
2017-03-07
Revealed preference theory provides axiomatic tools for assessing whether individuals make observable choices "as if" they are maximizing an underlying utility function. The theory evokes a tradeoff between goods whereby individuals improve themselves by trading one good for another good to obtain the best combination. Preferences revealed in these choices are modeled as curves of equal choice (indifference curves) and reflect an underlying process of optimization. These notions have far-reaching applications in consumer choice theory and impact the welfare of human and animal populations. However, they lack the empirical implementation in animals that would be required to establish a common biological basis. In a design using basic features of revealed preference theory, we measured in rhesus monkeys the frequency of repeated choices between bundles of two liquids. For various liquids, the animals' choices were compatible with the notion of giving up a quantity of one good to gain one unit of another good while maintaining choice indifference, thereby implementing the concept of marginal rate of substitution. The indifference maps consisted of nonoverlapping, linear, convex, and occasionally concave curves with typically negative, but also sometimes positive, slopes depending on bundle composition. Out-of-sample predictions using homothetic polynomials validated the indifference curves. The animals' preferences were internally consistent in satisfying transitivity. Change of option set size demonstrated choice optimality and satisfied the Weak Axiom of Revealed Preference (WARP). These data are consistent with a version of revealed preference theory in which preferences are stochastic; the monkeys behaved "as if" they had well-structured preferences and maximized utility.
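The WARP test described above is mechanical to run on choice data; a minimal sketch with invented two-good toy data (not the monkey dataset):

```python
import numpy as np

def warp_violations(prices, choices):
    """Find pairs of observations violating the Weak Axiom of Revealed
    Preference.  Bundle x_s is directly revealed preferred to x_t when
    x_t was affordable at observation s (p_s . x_t <= p_s . x_s); WARP
    forbids two distinct bundles each being revealed preferred to the other."""
    P = [np.asarray(p, float) for p in prices]
    X = [np.asarray(x, float) for x in choices]
    bad = []
    for s in range(len(X)):
        for t in range(s + 1, len(X)):
            if np.allclose(X[s], X[t]):
                continue
            if P[s] @ X[t] <= P[s] @ X[s] and P[t] @ X[s] <= P[t] @ X[t]:
                bad.append((s, t))
    return bad

# Consistent data: each bundle is unaffordable when the other was chosen.
print(warp_violations([(1, 2), (2, 1)], [(2, 0), (0, 2)]))   # → []
# Inconsistent data: same prices, yet two different bundles chosen.
print(warp_violations([(1, 1), (1, 1)], [(2, 0), (0, 2)]))   # → [(0, 1)]
```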
Variational principle for the P(4) affine theory of gravitation and electromagnetism
Chilton, J.H.; Norris, L.K. [North Carolina State Univ., Raleigh, NC (United States)
1992-07-01
We propose a Lagrangian for the P(4) theory of gravitation and electromagnetism which is a straightforward generalization of the Einstein Lagrangian. A constrained Palatini variation of this Lagrangian yields the geometrical Einstein-Maxwell affine field equations. We show that these results can be extended easily to include both electric and magnetic charges. Finally, we consider conservation laws arising from the invariance properties of the Lagrangian. 14 refs.
RENEWAL OF BASIC LAWS AND PRINCIPLES FOR POLAR CONTINUUM THEORIES (Ⅲ)-NOETHER'S THEOREM
戴天民
2003-01-01
The existing various couple stress theories have been carefully restudied. The purpose is to propose a coupled Noether's theorem and to reestablish rather complete conservation laws and balance equations for couple stress elastodynamics. The new concrete forms of various conservation laws of couple stress elasticity are derived. The precise nature of these conservation laws, which result from the given invariance requirements, is established. Various special cases are reduced and the results of micropolar continua may be naturally recovered from the results presented in this paper.
Selection principles and pattern formation in fluid mechanics and nonlinear shell theory
Sather, Duane P.
1987-01-01
Wave theories of vortex breakdown were studied. A setting which involved dynamical systems and bifurcations of homoclinic and heteroclinic orbits in infinite-dimensional spaces was investigated. The determination of axisymmetric inviscid flows bifurcating from the primary flow lead to the study of a system of ordinary differential equations. The problem of rotating plane Couette flow was solved by means of the structure parameter approach.
A New Monotone Iteration Principle in the Theory of Nonlinear Fractional Differential Equations
Bapurao C. Dhage
2015-08-01
Full Text Available In this paper the author proves the algorithms for the existence as well as approximations of the solutions for the initial value problems of nonlinear fractional differential equations using the operator theoretic techniques in a partially ordered metric space. The main results rely on the Dhage iteration principle embodied in the recent hybrid fixed point theorems of Dhage (2014) in a partially ordered normed linear space, and the existence and approximations of the solutions of the considered nonlinear fractional differential equations are obtained under weak mixed partial continuity and partial Lipschitz conditions. Our hypotheses and existence and approximation results are also well illustrated by some numerical examples.
Unnikrishnan, C S
2012-01-01
I show that no force or torque is generated in cases involving a charge and a magnet with their relative velocity zero, in any inertial frame of reference. A recent suspicion of an anomalous torque and conflict with relativity in this case is laid to rest. What is distilled as `Lorentz force' in standard electrodynamics, with relative velocity as the parameter, is an under-representation of two distinct physical phenomena, an effect due to Lorentz contraction and another due to the Ampere current-current interaction, rolled into one due to prejudice from special relativity applied only to linear motion. When both are included in the analysis of the problem there is no anomalous force or torque, ensuring the validity of Poincare's principle of relativity. The issue of the validity of electrodynamics without the concept of absolute rest, however, is subtle and empirically open when general noninertial motion is considered, as I will discuss in another paper.
Segmentation of electron tomographic data sets using fuzzy set theory principles.
Garduño, Edgar; Wong-Barnum, Mona; Volkmann, Niels; Ellisman, Mark H
2008-06-01
In electron tomography the reconstructed density function is typically corrupted by noise and artifacts. Under those conditions, separating the meaningful regions of the reconstructed density function is not trivial. Despite development efforts that specifically target electron tomography, manual segmentation continues to be the preferred method. Based on previous good experiences using a segmentation based on fuzzy logic principles (fuzzy segmentation) where the reconstructed density functions also have low signal-to-noise ratios, we applied it to electron tomographic reconstructions. We demonstrate the usefulness of the fuzzy segmentation algorithm by evaluating it within the limits of segmenting electron tomograms of selectively stained, plastic-embedded spiny dendrites. The results produced by the fuzzy segmentation algorithm within the framework presented are encouraging.
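As a toy illustration of graded (fuzzy) membership, the kind of partition a fuzzy segmentation produces, here is a 1-D fuzzy c-means sketch; note the paper's algorithm is a fuzzy-connectedness segmentation, not plain c-means, and the intensity values below are invented:

```python
import numpy as np

def fuzzy_cmeans_1d(values, k=2, m=2.0, iters=50):
    """Toy 1-D fuzzy c-means: every intensity value receives a graded
    membership u in each of k classes (rows of u sum to 1) instead of a
    hard label.  Illustration of fuzzy partitioning only."""
    v = np.asarray(values, float)
    c = np.quantile(v, np.linspace(0.1, 0.9, k))     # spread-out initial centroids
    for _ in range(iters):
        d = np.abs(v[:, None] - c[None, :]) + 1e-12  # value-to-centroid distances
        u = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0)), axis=2)
        c = (u**m).T @ v / np.sum(u**m, axis=0)      # membership-weighted centroids
    return u, c

intens = np.array([0.0, 0.2, 0.1, 9.8, 10.0, 10.1])  # two obvious intensity classes
u, c = fuzzy_cmeans_1d(intens)
```

The two centroids land near 0.1 and 10, and each voxel's memberships quantify how strongly it belongs to each class rather than forcing a binary decision.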
A variational principle for compressible fluid mechanics. Discussion of the one-dimensional theory
Prozan, R. J.
1982-01-01
The second law of thermodynamics is used as a variational statement to derive a numerical procedure to satisfy the governing equations of motion. The procedure, based on numerical experimentation, appears to be stable provided the CFL condition is satisfied. This stability is manifested no matter how severe the gradients (compression or expansion) are in the flow field. For reasons of simplicity only one dimensional inviscid compressible unsteady flow is discussed here; however, the concepts and techniques are not restricted to one dimension nor are they restricted to inviscid non-reacting flow. The solution here is explicit in time. Further study is required to determine the impact of the variational principle on implicit algorithms.
The Cooperative Principle: Is Grice's Theory Suitable to Indonesian Language Culture?
Agnes Herawati
2013-04-01
Full Text Available This article discusses how native speakers of Indonesian observe Grice's maxims. One hundred conversations contributed in live talk shows from various Indonesian television channels were analysed. The results show that Grice's maxims are fulfilled in many conversations. Nevertheless, in other situations, two kinds of non-fulfilment of the maxims are observed. First, the speaker deliberately exploits a maxim, which is consistent with Grice's theory. Second, the speaker fails to observe but does not exploit a maxim, which leads to some interpretations of the cultural patterns of the Indonesian language: communicative politeness, high-context culture and the need for harmony in communication, which are considered manifestations of Indonesian culture.
Phase stability in heavy f-electron metals from first-principles theory
Soderlind, P
2005-11-17
The structural phase stability of heavy f-electron metals is studied by means of density-functional theory (DFT). These include temperature-induced transitions in plutonium metal as well as pressure-induced transitions in the trans-plutonium metals Am, Cm, Bk, and Cf. The early actinides (Th-Np) display phases that could be rather well understood from the competition of a crystal-symmetry breaking mechanism (Peierls distortion) of the 5f states and electrostatic forces, while for the trans-plutonium metals (Am-Cf) the ground-state structures are governed by 6d bonding. We show in this paper that new physics is needed to understand the phases of the actinides in the volume range of about 15-30 Å³. At these volumes one would expect, from theoretical arguments made in the past, to encounter highly complex crystal phases due to a Peierls distortion. Here we argue that the symmetry reduction associated with spin polarization can make higher symmetry phases competitive. Taking this into account, DFT is shown to describe the well-known phase diagram of plutonium and also the recently discovered complex and intriguing high-pressure phase diagrams of Am and Cm. The theory is further applied to investigate the behaviors of Bk and Cf under compression.
Principles of Volcano Risk Metrics: theory and the case study of Mt. Vesuvius and Campi Flegrei.
Marzocchi, W.; Woo, G.
2009-04-01
Despite volcanic risk having been defined quantitatively more than thirty years ago, it has been always managed without being effectively measured. Yet, the recent substantial progress in quantifying eruption probability paves the way for a new era of rational science-based volcano risk management, which we name Volcanic Risk Metrics (VRM). In this talk, we propose some principles of VRM, based on two main components: a probabilistic volcanic hazard assessment and eruption forecasting, and a cost/benefit analysis. In a nutshell, the method assists managers in decision-making under uncertainty, weighing appropriately the cost and benefit of actions to mitigate the effects of a threat having a specific probability of occurrence. The strategy has the potential to rationalize decision-making across a broad spectrum of volcanological questions: what areas should be covered by an emergency plan? What early preparations should be made for a volcano crisis? When should the call for evacuation be made? The strategy has the paramount advantage of providing a set of quantitative and transparent 'rules' that can be established before a crisis, optimizing and clarifying decision-making procedures. It places volcanologists at the centre of decision-making, applying all their scientific knowledge and observational information to assist authorities in quantifying the positive and negative risk implications of any decision.
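The cost/benefit kernel of the proposed decision support reduces to an expected-loss comparison; a sketch with hypothetical names and numbers:

```python
def act_is_worthwhile(p_event, loss_avoided, cost_of_action):
    """Cost/benefit kernel of a risk-metrics decision rule (hypothetical
    names): mitigate when the expected loss avoided, p * L, exceeds the
    cost C of the mitigating action."""
    return p_event * loss_avoided > cost_of_action

# An evacuation costing 50 units, against a possible 1000-unit loss:
print(act_is_worthwhile(0.10, 1000, 50))   # True  (expected benefit 100 > 50)
print(act_is_worthwhile(0.01, 1000, 50))   # False (expected benefit 10 < 50)
```

The point of fixing such a rule before a crisis is exactly the transparency argued for above: the threshold probability at which action becomes rational (here C/L = 0.05) is known to everyone in advance.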
Sly, Krystal L; Conboy, John C
2017-06-01
A novel application of second harmonic correlation spectroscopy (SHCS) for the direct determination of molecular adsorption and desorption kinetics to a surface is discussed in detail. The surface-specific nature of second harmonic generation (SHG) provides an efficient means to determine the kinetic rates of adsorption and desorption of molecular species to an interface without interference from bulk diffusion, which is a significant limitation of fluorescence correlation spectroscopy (FCS). The underlying principles of SHCS for the determination of surface binding kinetics are presented, including the role of optical coherence and optical heterodyne mixing. These properties of SHCS are extremely advantageous and lead to an increase in the signal-to-noise (S/N) of the correlation data, increasing the sensitivity of the technique. The influence of experimental parameters, including the uniformity of the TEM00 laser beam, the overall photon flux, and collection time are also discussed, and are shown to significantly affect the S/N of the correlation data. Second harmonic correlation spectroscopy is a powerful, surface-specific, and label-free alternative to other correlation spectroscopic methods for examining surface binding kinetics.
Dodin, I Y; Ruiz, D E
2016-01-01
Applications of variational methods are typically restricted to conservative systems. Some extensions to dissipative systems have been reported too but require ad hoc techniques such as artificial doubling of variables. Here, a different approach is proposed. We show that, for a broad class of dissipative systems of practical interest, variational principles can be formulated using constant Lagrange multipliers and Lagrangians nonlocal in time, which allow treating reversible and irreversible dynamics on the same footing. A general variational theory of linear dispersion is formulated as an example. In particular, we present a variational formulation for linear geometrical optics in a general dissipative medium, which is allowed to be nonstationary, inhomogeneous, nonisotropic, and exhibit both temporal and spatial dispersion simultaneously.
Dodin, I. Y.; Zhmoginov, A. I.; Ruiz, D. E.
2017-04-01
Applications of variational methods are typically restricted to conservative systems. Some extensions to dissipative systems have been reported too but require ad hoc techniques such as the artificial doubling of the dynamical variables. Here, a different approach is proposed. We show that, for a broad class of dissipative systems of practical interest, variational principles can be formulated using constant Lagrange multipliers and Lagrangians nonlocal in time, which allow treating reversible and irreversible dynamics on the same footing. A general variational theory of linear dispersion is formulated as an example. In particular, we present a variational formulation for linear geometrical optics in a general dissipative medium, which is allowed to be nonstationary, inhomogeneous, anisotropic, and exhibit both temporal and spatial dispersion simultaneously.
Aiyoshi, Eitaro; Masuda, Kazuaki
On the basis of market fundamentalism, new types of social systems based on the market mechanism, such as electricity trading markets and carbon dioxide (CO2) emission trading markets, have been developed. However, few science and technology textbooks explain that Lagrange multipliers can be interpreted as market prices. This tutorial paper explains that (1) the steepest descent method for dual problems in optimization, and (2) the Gauss-Seidel method for solving the stationarity conditions of Lagrange problems with market principles, can formulate the mechanism of market pricing, which works even in the information-oriented modern society. The authors expect readers to acquire basic knowledge of optimization theory and algorithms related to economics and to utilize them for designing the mechanisms of more complicated markets.
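A minimal numerical sketch of the multiplier-as-price interpretation described above (the quadratic utilities, numbers, and step size are invented for illustration, not taken from the paper): a steepest-ascent update on the dual variable of a supply constraint converges to the market-clearing price.

```python
# Sketch (not from the paper): gradient ascent on the dual of a small
# market allocation problem.  The Lagrange multiplier on the supply
# constraint converges to the market-clearing price.
# Utilities u_i(x) = a_i*x - 0.5*b_i*x^2 are hypothetical.

def demand(price, a, b):
    # Consumer i maximizes u_i(x) - price*x  =>  x_i = (a_i - price)/b_i
    return [max((ai - price) / bi, 0.0) for ai, bi in zip(a, b)]

def market_price(a, b, supply, step=0.05, iters=5000):
    price = 0.0  # initial guess for the multiplier/price
    for _ in range(iters):
        excess = sum(demand(price, a, b)) - supply
        price += step * excess          # dual (steepest-ascent) update
        price = max(price, 0.0)
    return price

a, b, supply = [10.0, 8.0], [1.0, 2.0], 6.0
p = market_price(a, b, supply)
# Analytic clearing price solves (10-p)/1 + (8-p)/2 = 6, i.e. p = 16/3
print(round(p, 3))
```

The price update `price += step * excess` is exactly the steepest-ascent step on the dual function that the tutorial describes: excess demand raises the price, excess supply lowers it.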
An emergency department patient flow model based on queueing theory principles.
Wiler, Jennifer L; Bolandifar, Ehsan; Griffey, Richard T; Poirier, Robert F; Olsen, Tava
2013-09-01
The objective was to derive and validate a novel queueing theory-based model that predicts the effect of various patient crowding scenarios on left-without-being-seen (LWBS) rates. Retrospective data were collected from all patient presentations to triage at an urban, academic, adult-only emergency department (ED) with 87,705 visits in calendar year 2008. Data from specific time windows during the day were divided into derivation and validation sets based on odd or even days. Patient records with incomplete time data were excluded. With an established call center queueing model, input variables were modified to adapt this model to the ED setting, while satisfying the underlying assumptions of queueing theory. The primary aim was the derivation and validation of an ED flow model. Chi-square and Student's t-tests were used for model derivation and validation. The secondary aim was estimating the effect of varying ED patient arrival and boarding scenarios on LWBS rates using this model. The assumption of stationarity of the model was validated for three time periods (peak arrival rate = 10:00 a.m. to 12:00 p.m.; a moderate arrival rate = 8:00 a.m. to 10:00 a.m.; and lowest arrival rate = 4:00 a.m. to 6:00 a.m.) and for different days of the week and month. Between 10:00 a.m. and 12:00 p.m., defined as the primary study period representing peak arrivals, 3.9% (n = 4,038) of patients left without being seen. Using the derived model, the predicted LWBS rate was 4%. LWBS rates increased as the rate of ED patient arrivals, treatment times, and ED boarding times increased. A 10% increase in hourly ED patient arrivals from the observed average arrival rate increased the predicted LWBS rate to 10.8%; a 10% decrease in hourly ED patient arrivals from the observed average arrival rate predicted a 1.6% LWBS rate. A 30-minute decrease in treatment time from the observed average treatment time predicted a 1.4% LWBS rate. A 1% increase in patient arrivals has the same effect on LWBS rates as a 1
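As a stylized illustration of the queueing logic (not the authors' validated call-center model), one can treat the M/M/1 waiting-time tail probability as a proxy for the LWBS rate: a patient "leaves" if the wait would exceed a patience threshold. All rates below are invented.

```python
import math

# Stylized proxy (not the paper's model): in a stable M/M/1 queue the
# probability that a patient's wait exceeds a patience threshold t is
#   P(W > t) = rho * exp(-(mu - lam) * t),  rho = lam/mu < 1.
# Treating this tail probability as the LWBS rate shows how sharply
# LWBS grows with the arrival rate.  All numbers are illustrative.

def lwbs_rate(lam, mu, patience):
    rho = lam / mu
    assert rho < 1, "queue must be stable"
    return rho * math.exp(-(mu - lam) * patience)

mu, patience = 10.0, 0.5           # service rate (pts/hr), patience (hr)
for lam in (7.0, 7.7, 8.4):        # +10% steps in hourly arrivals
    print(lam, round(lwbs_rate(lam, mu, patience), 3))
```

The disproportionate growth of the tail probability with small increases in the arrival rate mirrors the paper's finding that a 10% rise in arrivals multiplies the predicted LWBS rate severalfold.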
GAO Xue; ZHANG Yue; SHANG Jia-Xiang
2011-01-01
We choose a Si/Ge interface as a research object to investigate the influence of interface disorder on thermal boundary conductance. In the calculations, the diffuse mismatch model is used to study thermal boundary conductance between two non-metallic materials, while the phonon dispersion relationship is calculated by first-principles density functional perturbation theory. The results show that interface disorder limits thermal transport. The increase of atomic spacing at the interface results in weakly coupled interfaces and a decrease in the thermal boundary conductance. This approach provides a simple method to investigate the relationship between microstructure and thermal conductivity. It is well known that interfaces can play a dominant role in the overall thermal transport characteristics of structures whose length scale is less than the phonon mean free path. When heat flows across an interface between two different materials, there exists a temperature jump at the interface. Thermal boundary conductance (TBC), which describes the efficiency of heat flow at material interfaces, plays an important role in the transport of thermal energy in nanometer-scale devices, semiconductor superlattices, thin film multilayers and nanocrystalline materials.
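The diffuse mismatch model mentioned above has a particularly compact form in the Debye approximation, where the phonon transmission coefficient depends only on the acoustic velocities of the two sides. A sketch (the velocities are rough literature-style values, not the paper's first-principles dispersions):

```python
# Diffuse mismatch model (DMM) in the Debye approximation: the
# transmission from side 1 to side 2 is
#   tau_{1->2} = sum_j c_{2,j}^-2 / (sum_j c_{1,j}^-2 + sum_j c_{2,j}^-2),
# where j runs over one longitudinal and two transverse branches.
# The sound velocities below are illustrative assumptions, not values
# taken from the paper.

def dmm_transmission(c1, c2):
    s1 = sum(c ** -2 for c in c1)
    s2 = sum(c ** -2 for c in c2)
    return s2 / (s1 + s2)

si = (8433.0, 5843.0, 5843.0)   # Si: longitudinal + 2 transverse (m/s)
ge = (4914.0, 3542.0, 3542.0)   # Ge
tau = dmm_transmission(si, ge)
print(round(tau, 3))            # transmission probability Si -> Ge
```

Note the built-in detailed-balance property of the DMM: the two transmission coefficients sum to one, reflecting the assumption that phonons completely lose memory of their origin at a diffusely scattering interface.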
[The system theory of aging: methodological principles, basic tenets and applications].
Krut'ko, V N; Dontsov, V I; Zakhar'iashcheva, O V
2009-01-01
The paper deals with the system theory of aging constructed on the basis of present-day scientific methodology, the system approach. The fundamental cause of aging is the discrete existence of individual life forms, i.e. living organisms, which, from the thermodynamic point of view, are not completely open systems. The primary aging process (build-up of chaos and system disintegration of the aging organism) obeys the second law of thermodynamics, or the law of entropy increase in individual, partly open systems. In living organisms the law is exhibited as the synergy of four main aging mechanisms: system "pollution" of the organism; loss of non-regenerating elements; accumulation of damage and deformations; and generation of variability at all levels, with consequent negative changes in regulation processes and degradation of the organism's systemic character. These are the general aging mechanisms; however, the regulatory mechanisms may be equally important for organism aging and for the search for ways to prolong active life.
RENEWAL OF BASIC LAWS AND PRINCIPLES FOR POLAR CONTINUUM THEORIES (Ⅸ)-THERMOMECHANICS
DAI Tian-min
2005-01-01
The existing fundamental laws of thermodynamics for micropolar continuum field theories are restudied and their incompleteness is pointed out. New first and second fundamental laws for thermostatics and thermodynamics of micropolar continua are postulated. From them, all equilibrium equations and the entropy inequality of thermostatics, as well as all balance equations and the entropy rate inequalities, are naturally and simultaneously deduced. Comparisons between the new results presented here and the corresponding results in existing monographs and textbooks on micropolar continuum mechanics are made throughout. It should be emphasized that the problem of why the local balance equation of energy and the local entropy inequality could not be obtained from the existing fundamental laws of thermodynamics for micropolar continua is believed to be clarified.
The d'Alembert-Lagrange principle for gradient theories and boundary conditions
Gouin, Henri
2007-01-01
Motions of continuous media presenting singularities are associated with phenomena involving shocks, interfaces or material surfaces. The equations representing evolutions of these media are irregular through geometrical manifolds. A unique continuous medium is conceptually simpler than several media with surfaces of singularity. To avoid surfaces of discontinuity in the theory, we transform the model by considering a continuous medium taking into account more complete internal energies expressed in gradient developments associated with the variables of state. Nevertheless, the resulting equations of motion are of a higher order than those of the classical models: they lead to non-linear models associated with more complex integration processes on the mathematical level as well as from the numerical point of view. In fact, such models allow a precise study of singular zones when they have a non-negligible physical thickness. This is typically the case for capillarity phenomena in fluids or mixtures of fluids in...
孙宗颀
2001-01-01
When a crack is subjected to shear force, crack branching usually occurs. Theoretical study shows that crack branching under shear loading is caused by tensile stress, not by shear fracture. Co-planar shear fracture can be obtained if compressive stress with a given direction is applied to the specimen; the shear fracture toughness calculated in this way, KⅡC, is larger than KⅠC. A prerequisite for the possible occurrence of mode Ⅱ fracture is proposed. The study of shear fracture shows that the maximum circumferential stress theory, which treats its criterion as a parametric equation of a curve in the (KⅠ, KⅡ) plane, is incorrect; the predicted ratio KⅡC/KⅠC = 0.866 is incorrect as well.
The rudiments of a theory of solar wind/magnetosphere coupling derived from first principles
Borovsky, Joseph E.
2008-08-01
A formula that expresses the dayside reconnection rate in terms of upstream solar wind parameters is derived and tested. The derivation is based on the hypothesis that dayside reconnection is governed by local plasma parameters and that whatever controls those parameters controls the reconnection rate. The starting point of the derivation is the Cassak-Shay formula (from energy conservation principles), which expresses the dayside reconnection rate in terms of four parameters: the magnetic field strengths Bm and Bs in the magnetosphere and magnetosheath and the plasma mass densities ρm and ρs in the magnetosphere and magnetosheath. Using the Rankine-Hugoniot relations at the bow shock and an analysis of the magnetosheath flow, three of these parameters are expressed in terms of upstream solar wind parameters. These three expressions are then used in the Cassak-Shay formula to obtain the "solar wind control function." The interpretation of the control function is that solar wind pressure largely sets the reconnection rate. The solar wind magnetic field enters into the control function because of a bow shock Mach number dependence. The onset of a "plasmasphere effect" occurs when ρm > MA^0.87 ρsolarwind, wherein the magnetosphere begins to exert control over solar wind/magnetosphere coupling. Using the OMNI2 data set and seven geomagnetic indices, the solar wind control function is tested on its ability to describe the variance in the geomagnetic indices. The control function is found to be successful, statistically as good as the best "solar wind driver function" in the literature. This picture opens a new pathway to understanding and calculating solar wind/magnetosphere coupling.
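The Cassak-Shay starting point can be sketched numerically. Assuming the standard asymmetric-reconnection scaling (the 0.1 aspect-ratio factor and all input values are illustrative, not taken from this paper):

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability (SI)

# Hedged sketch of the Cassak-Shay scaling named in the abstract.  With
# magnetosphere (m) and magnetosheath (s) field strengths and mass
# densities, the hybrid outflow speed is
#   v_out^2 = Bm*Bs*(Bm + Bs) / (mu0*(rho_m*Bs + rho_s*Bm)),
# and the reconnection electric field scales as
#   R ~ aspect * v_out * Bm*Bs/(Bm + Bs).
# The aspect-ratio factor 0.1 and the inputs below are illustrative.

def cassak_shay_rate(Bm, Bs, rho_m, rho_s, aspect=0.1):
    v_out = math.sqrt(Bm * Bs * (Bm + Bs) / (MU0 * (rho_m * Bs + rho_s * Bm)))
    return aspect * v_out * Bm * Bs / (Bm + Bs)

mp = 1.67e-27  # proton mass (kg); densities below are invented
rate = cassak_shay_rate(Bm=60e-9, Bs=30e-9,
                        rho_m=0.1e6 * mp, rho_s=20e6 * mp)
print(rate)    # reconnection electric field scale, V/m
```

Raising `rho_m` while holding everything else fixed lowers the rate, which is the "plasmasphere effect" described in the abstract: a mass-loaded magnetosphere throttles the coupling.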
Kevin J. Black
2013-08-01
Pharmacological challenge imaging has mapped, but rarely quantified, the sensitivity of a biological system to a given drug. We describe a novel method called rapid quantitative pharmacodynamic imaging. This method combines pharmacokinetic-pharmacodynamic modeling, repeated small doses of a challenge drug over a short time scale, and functional imaging to rapidly provide quantitative estimates of drug sensitivity including EC50 (the concentration of drug that produces half the maximum possible effect). We first test the method with simulated data, assuming a typical sigmoidal dose-response curve and assuming imperfect imaging that includes artifactual baseline signal drift and random error. With these few assumptions, rapid quantitative pharmacodynamic imaging reliably estimates EC50 from the simulated data, except when noise overwhelms the drug effect or when the effect occurs only at high doses. In preliminary fMRI studies of primate brain using a dopamine agonist, the observed noise level is modest compared with observed drug effects, and a quantitative EC50 can be obtained from some regional time-signal curves. Taken together, these results suggest that research and clinical applications for rapid quantitative pharmacodynamic imaging are realistic.
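The core idea can be sketched with simulated data, as in the abstract's first test: generate noisy sigmoidal (Emax-model) responses with a slow baseline drift, then recover EC50 by least squares. Everything here (model, parameter values, noise levels, and the simplification that Emax is known) is a hypothetical illustration, not the authors' code.

```python
import random

# Illustrative sketch: simulate noisy responses from a sigmoidal Emax
# model E(c) = Emax*c/(EC50 + c) with slow baseline drift, then recover
# EC50 by a least-squares grid search.  All values are made up.

def emax_model(c, emax, ec50):
    return emax * c / (ec50 + c)

def fit_ec50(doses, responses, emax, grid):
    def sse(ec50):
        return sum((r - emax_model(c, emax, ec50)) ** 2
                   for c, r in zip(doses, responses))
    return min(grid, key=sse)       # exhaustive search over candidates

random.seed(0)
true_emax, true_ec50, drift = 100.0, 2.0, 0.05
doses = [0.25 * k for k in range(1, 41)]        # repeated small doses
responses = [emax_model(c, true_emax, true_ec50)
             + drift * k                         # artifactual baseline drift
             + random.gauss(0.0, 1.0)            # imaging noise
             for k, c in enumerate(doses)]
grid = [0.05 * k for k in range(1, 201)]         # EC50 candidates 0.05..10
est = fit_ec50(doses, responses, true_emax, grid)
print(est)
```

Consistent with the abstract, the estimate degrades if the noise term is made much larger than the drug effect, or if the dose range is truncated well below EC50.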
David Owen
2012-09-01
This essay considers the role of the ‘all affected interests’ principle in democratic theory, focusing on debates concerning its form, substance and relationship to the resolution of the democratic boundary problem. It begins by defending an ‘all actually affected’ formulation of the principle against Goodin's ‘incoherence argument’ critique of this formulation, before addressing issues concerning how to specify the choice set appropriate to the principle. Turning to the substance of the principle, the argument rejects Nozick's dismissal of its intuitive appeal and considers the two arguments advanced in favour of the principle as a criterion of democratic inclusion: the interlinked interests argument and the tracking power argument. It is shown that neither of these arguments can substantiate a view of the principle as a criterion of democratic inclusion, although both ground a constitutional understanding of the principle as specifying the scope of a duty of justification. It is then proposed that the principle can play an important role in a two-stage resolution of the democratic boundary problem in which it addresses the question of who is entitled to inclusion in the ‘pre-political’ demos that determines whether to constitute a polity. The second stage of this resolution requires an answer to the question of who should constitute the ‘political demos’, that is, the demos of a constituted polity and it is argued that a version of the “all subjected persons” principle can appropriately play this role.
柳益君; 朱明放; 习海旭; 朱广萍; 蒋红芬; 陈丹
2012-01-01
The paper proposes a classification method based on Gene Expression Programming (GEP) and the principle of maximum degree of membership, named MDM-GEP. Describing the fuzziness of classification by the membership degree of fuzzy sets, a GEP classifier approximating the membership functions is obtained on the training data set. For an instance to be classified, the method computes its membership degree in each fuzzy set and determines the final class by the principle of maximum degree of membership. Experiments carried out on three datasets from the UCI machine learning repository show that MDM-GEP is not only effective for classification, but also resolves the unclassifiable-region problem present in the conventional simple GEP classification strategy.
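The final decision rule, the maximum-degree-of-membership principle, is easy to illustrate. In this sketch the GEP-evolved membership functions are replaced by fixed triangular ones, and the class names and numbers are invented:

```python
# Minimal illustration of the maximum-degree-of-membership rule used by
# MDM-GEP for its final decision.  The GEP-evolved membership functions
# are replaced here by fixed triangular ones; classes and numbers are
# invented for the example.

def triangular(x, a, b, c):
    # Triangular membership function peaking at b on support (a, c).
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

classes = {
    "low":    (0.0, 0.0, 5.0),
    "medium": (2.0, 5.0, 8.0),
    "high":   (5.0, 10.0, 10.0),
}

def classify(x):
    # Every instance receives the class with the largest membership
    # degree, so no region of the input space is left unclassified.
    return max(classes, key=lambda k: triangular(x, *classes[k]))

print([classify(x) for x in (1.0, 4.0, 9.0)])
```

Because `max` always returns some class, points that a crisp rule set would leave unassigned (the "unclassifiable region" mentioned above) still receive a label.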
Grigoriev, I. S.; Grigoriev, K. G.
2003-05-01
The necessary first-order conditions of strong local optimality (maximum principle conditions) are considered for problems of optimal control over a set of dynamic systems. To derive them, a method is suggested based on the Lagrange principle of removing constraints in problems on a conditional extremum in a functional space. An algorithm for converting the problem of optimal control of an aggregate of dynamic systems into a multipoint boundary value problem is suggested for a set of systems of ordinary differential equations, with the complete set of conditions necessary for its solution. An example of application of the proposed methods and algorithm is considered: the solution of the problem of constructing the trajectories of a spacecraft flight at a constant altitude above a preset area (or above a preset point) of a planet's surface in a vacuum (for a planet with an atmosphere, beyond the atmosphere). The spacecraft is launched from a certain circular orbit of a planet's satellite. This orbit is to be determined (optimized). Then the satellite is injected into the desired trajectory segment (or desired point) of a flyby above the planet's surface at a specified altitude. After the flyby the satellite is returned to the initial circular orbit. A method is proposed for correctly accounting for constraints imposed on overload (mixed restrictions of inequality type) and on the distance from the planet center: extended (non-pointlike) intermediate (phase) restrictions of the equality type.
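The conversion from an optimal control problem to a boundary value problem can be illustrated on a toy problem (far simpler than the spacecraft application above): minimize J = ∫₀¹ u²/2 dt subject to x' = u, x(0) = 0, x(1) = 1. The maximum principle gives u = -p with a constant costate p, and shooting on p(0) enforces the terminal condition.

```python
# Toy illustration (not the paper's spacecraft problem) of converting an
# optimal control problem into a boundary value problem via the maximum
# principle, then solving it by shooting.  For H = u^2/2 + p*u the
# stationarity condition gives u = -p, and p' = -dH/dx = 0, so we shoot
# on the unknown initial costate p(0) until x(1) = 1 (analytically u = 1).

def x_at_1(p0, steps=1000):
    x, p, dt = 0.0, p0, 1.0 / steps
    for _ in range(steps):
        u = -p          # stationarity of the Hamiltonian in u
        x += dt * u     # state equation x' = u (Euler step)
        # p' = 0, so the costate stays constant
    return x

def shoot(target=1.0, lo=-10.0, hi=10.0):
    # Bisection on p(0); here x(1) = -p0 is decreasing in p0.
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if x_at_1(mid) > target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

p0 = shoot()
print(round(-p0, 6))   # optimal constant control u = -p0, expect ~1
```

The same pattern, integrating state and costate equations and adjusting the unknown boundary values, is what the multipoint boundary value problem in the abstract generalizes to aggregates of systems with intermediate conditions.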
Hamilton-Jacobi Many-Worlds Theory and the Heisenberg Uncertainty Principle
Tipler, Frank J
2010-01-01
I show that the classical Hamilton-Jacobi (H-J) equation can be used as a technique to study quantum mechanical problems. I first show that the Schr\"odinger equation is just the classical H-J equation, constrained by a condition that forces the solutions of the H-J equation to be everywhere $C^2$. That is, quantum mechanics is just classical mechanics constrained to ensure that ``God does not play dice with the universe.'' I show that this condition, which imposes global determinism, strongly suggests that $\psi^*\psi$ measures the density of universes in a multiverse. I show that this interpretation implies the Born Interpretation, and that the function space for $\psi$ is larger than a Hilbert space, with plane waves automatically included. Finally, I use H-J theory to derive the momentum-position uncertainty relation, thus proving that in quantum mechanics, uncertainty arises from the interference of the other universes of the multiverse, not from some intrinsic indeterminism in nature.
Electrolyte decomposition on Li-metal surfaces from first-principles theory
Ebadi, Mahsa; Brandell, Daniel; Araujo, C. Moyses
2016-11-01
An important feature of Li batteries is the formation of a solid electrolyte interphase (SEI) on the surface of the anode. This film can have a profound effect on the stability and the performance of the device. In this work, we have employed density functional theory combined with implicit solvation models to study the inner layer of SEI formation from the reduction of common organic carbonate electrolyte solvents (ethylene carbonate, propylene carbonate, dimethyl carbonate, and diethyl carbonate) on a Li metal anode surface. Their stability and electronic structure on the Li surface have been investigated. It is found that the CO-producing route is energetically more favorable for ethylene and propylene carbonate decomposition. For the two linear solvents, dimethyl and diethyl carbonate, no significant differences are observed between the two considered reduction pathways. Bader charge analyses indicate that 2 e- reductions take place in the decomposition of all studied solvents. Density-of-states calculations demonstrate a correlation between the degree of hybridization of the oxygen of the adsorbed solvents with the uppermost Li atoms on the surface and the trend in solvent adsorption energies.
Greenberg, D E
1990-01-01
The significance of 'Beyond the pleasure principle' (BPP) cannot be understood by focusing solely on its manifest content. BPP is the product of theoretical displacements and compromise formations the motivation for which lies in the innovations introduced in 'On narcissism'. These innovations threatened assumptions about conflict and rationality inherent in Freud's libido theory. In BPP Freud attempts to resolve these questions by recasting primary narcissism as an 'inorganic unity'. The coherence of BPP can be restored if we undo these displacements and read its latent content. BPP now appears as a theory of instinctual conflict developing out of primary narcissism. Such development cannot, however, be organized as Freud originally formulated it; we must revise the static assumptions inherent in Freud's developmental view. Further, the question of how anti-developmental regressive forces are kept in check can now be understood by seeing the fear of death as a defensive negation of primary narcissism. This negation mirrors the theoretical repression at work in BPP.
George, Janine; Deringer, Volker L.; Wang, Ai; Müller, Paul; Englert, Ulli; Dronskowski, Richard
2016-12-01
Thermal properties of solid-state materials are a fundamental topic of study with important practical implications. For example, anisotropic displacement parameters (ADPs) are routinely used in physics, chemistry, and crystallography to quantify the thermal motion of atoms in crystals. ADPs are commonly derived from diffraction experiments, but recent developments have also enabled their first-principles prediction using periodic density-functional theory (DFT). Here, we combine experiments and dispersion-corrected DFT to quantify lattice thermal expansion and ADPs in crystalline α-sulfur (S8), a prototypical elemental solid that is controlled by the interplay of covalent and van der Waals interactions. We begin by reporting on single-crystal and powder X-ray diffraction measurements that provide new and improved reference data from 10 K up to room temperature. We then use several popular dispersion-corrected DFT methods to predict vibrational and thermal properties of α-sulfur, including the anisotropic lattice thermal expansion. Hereafter, ADPs are derived in the commonly used harmonic approximation (in the computed zero-Kelvin structure) and also in the quasi-harmonic approximation (QHA) which takes the predicted lattice thermal expansion into account. At the PBE+D3(BJ) level, the QHA leads to excellent agreement with experiments. Finally, more general implications of this study for theory and experiment are discussed.
Reuter, Matthew G; Harrison, Robert J
2013-09-21
We revisit the derivation of electron transport theories with a focus on the projection operators chosen to partition the system. The prevailing choice of assigning each computational basis function to a region causes two problems. First, this choice generally results in oblique projection operators, which are non-Hermitian and violate implicit assumptions in the derivation. Second, these operators are defined with the physically insignificant basis set and, as such, preclude a well-defined basis set limit. We thus advocate for the selection of physically motivated, orthogonal projection operators (which are Hermitian) and present an operator-based derivation of electron transport theories. Unlike the conventional, matrix-based approaches, this derivation requires no knowledge of the computational basis set. In this process, we also find that common transport formalisms for nonorthogonal basis sets improperly decouple the exterior regions, leading to a short circuit through the system. We finally discuss the implications of these results for first-principles calculations of electron transport.
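The contrast between oblique and orthogonal projection operators is easy to demonstrate numerically; the 2x2 matrices below are illustrative and unrelated to any particular transport calculation:

```python
# Tiny numerical illustration of the abstract's point: partitioning with
# a nonorthogonal basis yields an *oblique* projector (idempotent but
# non-Hermitian), whereas a physically motivated orthogonal projector is
# both idempotent and Hermitian.  2x2 real matrices suffice.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose(A):
    return [[A[j][i] for j in range(2)] for i in range(2)]

oblique = [[1.0, 0.5], [0.0, 0.0]]      # projects along a skewed direction
orthogonal = [[1.0, 0.0], [0.0, 0.0]]   # orthogonal projection onto e1

for name, P in (("oblique", oblique), ("orthogonal", orthogonal)):
    idempotent = matmul(P, P) == P       # P^2 = P holds for both
    hermitian = transpose(P) == P        # P^T = P fails for the oblique one
    print(name, idempotent, hermitian)
```

Both matrices satisfy P² = P, but only the orthogonal one equals its own transpose, which is exactly the implicit assumption the abstract says the prevailing basis-function partitioning violates.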
Kinkhabwala, Ali
2013-01-01
The most fundamental problem in statistics is the inference of an unknown probability distribution from a finite number of samples. For a specific observed data set, answers to the following questions would be desirable: (1) Estimation: Which candidate distribution provides the best fit to the observed data?, (2) Goodness-of-fit: How concordant is this distribution with the observed data?, and (3) Uncertainty: How concordant are other candidate distributions with the observed data? A simple unified approach for univariate data that addresses these traditionally distinct statistical notions is presented, called "maximum fidelity". Maximum fidelity is a strict frequentist approach that is fundamentally based on model concordance with the observed data. The fidelity statistic is a general information measure based on the coordinate-independent cumulative distribution and critical yet previously neglected symmetry considerations. An approximation for the null distribution of the fidelity allows its direct conversi...
Dappiaggi, Claudio [Erwin Schroedinger Institut fuer Mathematische Physik, Wien (Austria); Pinamonti, Nicola [Hamburg Univ. (Germany). 2. Inst. fuer Theoretische Physik; Porrmann, Martin [KwaZulu-Natal Univ. (South Africa). Quantum Research Group, School of Physics; National Institute for Theoretical Physics, Durban (South Africa)
2010-01-15
In the framework of the algebraic formulation, we discuss and analyse some new features of the local structure of a real scalar quantum field theory in a strongly causal spacetime. In particular we use the properties of the exponential map to set up a local version of a bulk-to-boundary correspondence. The bulk is a suitable subset of a geodesic neighbourhood of any but fixed point p of the underlying background, while the boundary is a part of the future light cone having p as its own tip. In this regime, we provide a novel notion for the extended *-algebra of Wick polynomials on the said cone and, on the one hand, we prove that it contains the information of the bulk counterpart via an injective *-homomorphism while, on the other hand, we associate to it a distinguished state whose pull-back in the bulk is of Hadamard form. The main advantage of this point of view arises if one uses the universal properties of the exponential map and of the light cone in order to show that, for any two given backgrounds M and M{sup '} and for any two subsets of geodesic neighbourhoods of two arbitrary points, it is possible to engineer the above procedure such that the boundary extended algebras are related via a restriction homomorphism. This allows for the pull-back of boundary states in both spacetimes and, thus, to set up a machinery which permits the comparison of expectation values of local field observables in M and M{sup '}. (orig.)
Optical properties of orthovanadates, and periodates studied from first principles theory
Shwetha, G. [Department of Physics, Indian Institute of Technology Hyderabad, Ordnance Factory Estate, Yeddumailaram 502 205, Telangana (India); Kanchana, V., E-mail: kanchana@iith.ac.in [Department of Physics, Indian Institute of Technology Hyderabad, Ordnance Factory Estate, Yeddumailaram 502 205, Telangana (India); Vaitheeswaran, G. [Advanced Center of Research in High Energy Materials (ACRHEM), University of Hyderabad, Prof. C. R. Rao Road, Gachibowli, Hyderabad 500 046, Telangana (India)
2015-08-01
Detailed ab-initio studies of electronic structure and optical properties have been carried out for the orthovanadate and periodate compounds ScVO{sub 4}, YVO{sub 4}, LuVO{sub 4}, and NaIO{sub 4}, KIO{sub 4}, RbIO{sub 4}, CsIO{sub 4} based on the full potential linearized augmented plane wave method within the framework of Density Functional Theory using the Tran and Blaha modified Becke–Johnson potential (TB-mBJ). We have compared the optical properties of orthovanadates with periodates, and also with their high-pressure phase. The main difference observed in moving from orthovanadates to periodates is the increase in band gap, and the bands turn out to be less dispersive. Considering all these facts, we predict orthovanadates to be better scintillators than periodates, which is well explained by the band structure and optical properties calculations. In addition, we also compared the optical properties of orthovanadates at ambient and high pressure, and we observed a decrease in the band gap of orthovanadates and an increase in valence band width at high pressure when compared to the ambient phase. Tuning the band gap, which is an important criterion for scintillators, can be achieved in orthovanadates by decreasing the cation size, and also by moving to the high-pressure scheelite phase. The high-pressure phase of orthovanadates might be more favourable as the zircon to scheelite transition is irreversible, and the transition pressure is also low, around 8 GPa. - Highlights: • Orthovanadates and periodates are insulators. • The band gap decreases with decreasing cation size, and also on moving to the high-pressure phase. • Orthovanadates are better host scintillators than periodates. • Orthovanadates can be better used as host scintillators in the high-pressure phase.
Review on Generalized Uncertainty Principle
Tawfik, Abdel Nasser
2015-01-01
Based on string theory, black hole physics, doubly special relativity and some "thought" experiments, minimal distance and/or maximum momentum are proposed. As alternatives to the generalized uncertainty principle (GUP), the modified dispersion relation, the space noncommutativity, the Lorentz invariance violation, and the quantum-gravity-induced birefringence effects are summarized. The origin of minimal measurable quantities and the different GUP approaches are reviewed and the corresponding observations are analysed. Bounds on the GUP parameter are discussed and implemented in understanding recent PLANCK observations on the cosmic inflation. The higher-order GUP approaches predict minimal length uncertainty with and without maximum momenta.
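The minimal-length feature of the quadratic GUP can be checked in a few lines. Assuming the common form Δx Δp ≥ (1/2)(1 + β Δp²) with ħ = 1 (β below is an arbitrary illustrative value), the lower bound on Δx has a nonzero minimum √β:

```python
import math

# Sketch of the minimal-length feature mentioned in the review: with
#   dx * dp >= (1/2) * (1 + beta * dp^2)   (hbar = 1),
# the bound dx(dp) = (1/2)*(1/dp + beta*dp) attains a nonzero minimum
# dx_min = sqrt(beta) at dp = 1/sqrt(beta).  A crude numerical scan
# confirms the analytic value.  beta is an illustrative parameter.

def dx_bound(dp, beta):
    return 0.5 * (1.0 / dp + beta * dp)

beta = 0.04
dps = [0.01 * k for k in range(1, 2001)]       # scan dp = 0.01 .. 20
dx_min = min(dx_bound(dp, beta) for dp in dps)
print(round(dx_min, 6), math.sqrt(beta))
```

Unlike the ordinary Heisenberg relation, where Δx can be squeezed to zero by letting Δp grow, the β-term makes Δx rise again at large Δp, producing the minimal measurable length discussed in the review.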
Kouvaris, Kostas; Clune, Jeff; Kounios, Loizos; Brede, Markus; Watson, Richard A
2017-04-01
One of the most intriguing questions in evolution is how organisms exhibit suitable phenotypic variation to rapidly adapt in novel selective environments. Such variability is crucial for evolvability, but poorly understood. In particular, how can natural selection favour developmental organisations that facilitate adaptive evolution in previously unseen environments? Such a capacity suggests foresight that is incompatible with the short-sighted concept of natural selection. A potential resolution is provided by the idea that evolution may discover and exploit information not only about the particular phenotypes selected in the past, but their underlying structural regularities: new phenotypes, with the same underlying regularities, but novel particulars, may then be useful in new environments. If true, we still need to understand the conditions in which natural selection will discover such deep regularities rather than exploiting 'quick fixes' (i.e., fixes that provide adaptive phenotypes in the short term, but limit future evolvability). Here we argue that the ability of evolution to discover such regularities is formally analogous to learning principles, familiar in humans and machines, that enable generalisation from past experience. Conversely, natural selection that fails to enhance evolvability is directly analogous to the learning problem of over-fitting and the subsequent failure to generalise. We support the conclusion that evolving systems and learning systems are different instantiations of the same algorithmic principles by showing that existing results from the learning domain can be transferred to the evolution domain. Specifically, we show that conditions that alleviate over-fitting in learning systems successfully predict which biological conditions (e.g., environmental variation, regularity, noise or a pressure for developmental simplicity) enhance evolvability. This equivalence provides access to a well-developed theoretical framework from
Eick, Caroline Marie; Ryan, Patrick A.
2014-01-01
This article discusses the relevance of an analytic framework that integrates principles of Catholic social teaching, critical pedagogy, and the theory of intersectionality to explain attitudes toward marginalized youth held by Catholic students preparing to become teachers. The framework emerges from five years of action research data collected…
Castagno, Angelina E.; Lee, Stacey J.
2007-01-01
This article examines one university's policies regarding Native mascots and ethnic fraud through a Tribal Critical Race Theory analytic lens. Using the principle of interest convergence, we argue that institutions of higher education allow and even work actively towards a particular form or level of diversity, but they do not extend it far…
陈海涛; 黄鑫; 邱林; 王文川
2013-01-01
An evaluation index of drought degree, which comprehensively considers the quantitative relationship between the crop growing period and natural factors, is presented in this paper. A distribution density function of drought degree is established based on the maximum-entropy principle; this avoids the arbitrariness of previously constructed probability distributions and achieves the goal of quantitative evaluation of regional agricultural drought. First, the quantitative evaluation index of drought degree was established according to the yield reduction rate under deficit irrigation conditions. Second, a long series of rainfall data was generated by the Monte Carlo method and the drought-degree index was calculated for past years. Finally, the probability density function of the agricultural drought-degree distribution was constructed using the maximum entropy principle. The Qucun irrigation area of Puyang City, Henan Province, was taken as a computational example. The results show that the model provides a good evaluation method, with clear concepts, a simple and practical approach, and results consistent with reality.
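The maximum-entropy construction described in the abstract above, choosing the distribution that maximizes entropy subject to observed moment constraints, can be sketched for a discrete severity index. The support, the single mean constraint, and all numbers below are illustrative assumptions, not the paper's actual data:

```python
import numpy as np

def maxent_pmf(support, target_mean, lo=-50.0, hi=50.0, iters=200):
    """Maximum-entropy pmf on `support` with a fixed mean.

    The maximizer has exponential-family form p_i proportional to
    exp(-lam * x_i); `lam` is found by bisection so that the mean
    constraint holds (the mean decreases monotonically in lam).
    """
    x = np.asarray(support, dtype=float)

    def mean_for(lam):
        w = np.exp(-lam * (x - x.mean()))  # shift exponent for numerical stability
        p = w / w.sum()
        return p @ x, p

    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        m, p = mean_for(mid)
        if m > target_mean:   # need a larger lam to pull the mean down
            lo = mid
        else:
            hi = mid
    return mean_for(0.5 * (lo + hi))[1]

# Illustrative drought-severity index on a 0..10 scale with observed mean 3.2.
p = maxent_pmf(np.arange(11), 3.2)
```

With only a mean constraint the result is a discrete exponential distribution; adding more moment constraints would require jointly solving for several multipliers.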
王璐; 李光春; 乔相伟; 王兆龙; 马涛
2012-01-01
In order to solve the state estimation problem for nonlinear systems when the prior noise statistics are unknown, an adaptive unscented Kalman filter (UKF) based on the maximum likelihood principle and the expectation-maximization algorithm is proposed in this paper. In our algorithm, the maximum likelihood principle is used to construct a log-likelihood function containing the noise statistical characteristics. The noise estimation problem then becomes one of maximizing the mean of the log-likelihood function, which can be achieved with the expectation-maximization algorithm. Finally, an adaptive UKF algorithm with a suboptimal, recursive noise statistics estimator is obtained. Simulation analysis shows that the proposed adaptive UKF overcomes the degradation in filtering accuracy suffered by the traditional UKF in nonlinear filtering with unknown prior noise statistics, and that the algorithm can estimate the noise statistical parameters online.
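The maximum-likelihood/EM idea behind the noise estimator above can be illustrated in a drastically simplified setting: estimating an unknown measurement-noise variance in a scalar linear-Gaussian model rather than the paper's full UKF framework. The model and all numbers are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Scalar linear-Gaussian model: z = x + v, with x ~ N(0, P) for known P
# and v ~ N(0, R) for unknown measurement-noise variance R.
P, R_true, n = 1.0, 0.5, 5000
x = rng.normal(0, np.sqrt(P), n)
z = x + rng.normal(0, np.sqrt(R_true), n)

# EM iteration for R: the E-step computes the posterior of x given z,
# the M-step sets R to the expected squared measurement residual.
R = 2.0  # deliberately poor initial guess
for _ in range(100):
    gain = P / (P + R)          # posterior mean factor
    m = gain * z                # E[x | z]
    V = P * R / (P + R)         # Var[x | z]
    R = np.mean((z - m) ** 2) + V
```

The fixed point of this iteration is the maximum-likelihood estimate R = mean(z^2) - P, mirroring how the paper's recursive estimator maximizes the expected log-likelihood of the noise statistics.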
Hellmich, Christian; Fritsch, Andreas; Dormieux, Luc
Biomimetics deals with the application of nature-made "design solutions" to the realm of engineering. In the quest to understand mechanical implications of structural hierarchies found in biological materials, multiscale mechanics may hold the key to understand "building plans" inherent to entire material classes, here bone and bone replacement materials. Analyzing a multitude of biophysical hierarchical and biomechanical experiments through homogenization theories for upscaling stiffness and strength properties reveals the following design principles: The elementary component "collagen" induces, right at the nanolevel, the mechanical anisotropy of bone materials, which is amplified by fibrillar collagen-based structures at the 100-nm scale, and by pores in the micrometer-to-millimeter regime. Hydroxyapatite minerals are poorly organized, and provide stiffness and strength in a quasi-brittle manner. Water layers between hydroxyapatite crystals govern the inelastic behavior of the nanocomposite, unless the "collagen reinforcement" breaks. Bone replacement materials should mimic these "microstructural mechanics" features as closely as possible if an imitation of the natural form of bone is desired (Gebeshuber et al., Adv Mater Res 74:265-268, 2009).
Anas, M. M.; Othman, A. P.; Gopir, G. [School of Applied Physics, Faculty of Science and Technology, Universiti Kebangsaan Malaysia, 43600, Bangi, Selangor (Malaysia)
2014-09-03
Density functional theory (DFT), as a first-principles approach, has been successfully implemented to study nanoscale materials. Here, DFT with a numerical basis set was used to study the quantum confinement effect as well as the electronic properties of silicon quantum dots (Si-QDs) in the ground state. Quantum dot models were studied intensively before the right structures were chosen for simulation. The computational results were then used to examine and deduce the electronic properties and the density of states (DOS) for 14 spherical Si-QDs ranging in size up to ∼2 nm in diameter. The energy gap was also deduced from the HOMO-LUMO results. The atomistic model of each silicon QD was constructed by repeating its face-centered cubic (FCC) crystal unit cell and reconstructing it until a spherical shape was obtained. The core structure shows tetrahedral (T{sub d}) symmetry. It was found that the models needed to be passivated, after which the confinement effect was noticeably more pronounced. Each Si-QD model was optimized using the quasi-Newton method to obtain a relaxed structure before simulation. In this model, the exchange-correlation potential (V{sub xc}) of the electrons was treated with the Local Density Approximation (LDA) and the Perdew-Zunger (PZ) functional.
Andersen, Steven Arild Wuyts; Mikkelsen, Peter Trier; Konge, Lars
2016-01-01
Background Cognitive overload can inhibit learning, and cognitive load theory-based instructional design principles can be used to optimize learning situations. This study aims to investigate the effect of implementing cognitive load theory-based design principles in virtual reality simulation training of mastoidectomy. Methods Eighteen novice medical students received 1 h of self-directed virtual reality simulation training of the mastoidectomy procedure, randomized for standard instructions (control) or cognitive load theory-based instructions with a worked example followed by a problem completion exercise (intervention). Participants then completed two post-training virtual procedures for assessment and comparison. Cognitive load during the post-training procedures was estimated by reaction time testing on an integrated secondary task. Final-product analysis by two blinded expert raters…
何洋; 纪昌明; 田开华; 张验科; 李传刚
2016-01-01
To study the distribution law of runoff forecast errors in depth, the maximum entropy principle is applied and a maximum entropy model for the distribution of runoff prediction errors is established in this paper. Taking the runoff forecast series of Guandi Reservoir as an example, the probability density functions and distribution curves of the runoff forecast error are calculated for different forecast lead times. The distribution curves are compared with the theoretical normal distribution curves and the sample histograms. The results show that the error distribution obtained by the maximum entropy method better describes the distribution characteristics of runoff forecast errors. Considering the wet-dry variation of basin runoff within the year, the runoff series is divided into dry, flood and transition seasons, the error distribution law of each period is analyzed, and the confidence levels of the forecast error for different confidence intervals are given, so as to better grasp the distribution law of runoff forecast errors and provide a new way to improve the accuracy of runoff forecasting.
史敬涛; 吴臻
2011-01-01
An optimal control problem motivated by a portfolio and consumption choice problem in a financial market, where the expected utility of the investor is assumed to be of Constant Relative Risk Aversion (CRRA) type, is discussed. A local stochastic maximum principle is obtained in the jump-diffusion setting using the classical variational method. The result is applied to derive the optimal portfolio and consumption choice strategy for the problem, and the explicit optimal solution is given in state-feedback form.
Maximum Entropy Estimation of Transition Probabilities of Reversible Markov Chains
Erik Van der Straeten
2009-11-01
Full Text Available In this paper, we develop a general theory for the estimation of the transition probabilities of reversible Markov chains using the maximum entropy principle. A broad range of physical models can be studied within this approach. We use one-dimensional classical spin systems to illustrate the theoretical ideas. The examples studied in this paper are: the Ising model, the Potts model and the Blume-Emery-Griffiths model.
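The approach described above, selecting the transition probabilities of a reversible chain by the maximum entropy principle, can be sketched for the simplest nontrivial case: a two-state chain with a prescribed stationary distribution, where detailed balance leaves one free parameter to optimize. This is an illustrative toy, not the paper's spin-system examples:

```python
import numpy as np

# Two-state reversible chain with prescribed stationary distribution pi.
# Detailed balance pi[0]*a = pi[1]*b leaves one free parameter a = P(0->1);
# we scan it and keep the transition matrix with maximal entropy rate
# H = -sum_i pi_i sum_j P_ij log P_ij.
pi = np.array([0.7, 0.3])

def entropy_rate(P, pi):
    with np.errstate(divide="ignore", invalid="ignore"):
        t = np.where(P > 0, P * np.log(P), 0.0)
    return -(pi[:, None] * t).sum()

best_H, best_P = -np.inf, None
for a in np.linspace(1e-6, min(1.0, pi[1] / pi[0]), 2001):
    b = pi[0] * a / pi[1]              # detailed balance fixes the reverse rate
    P = np.array([[1 - a, a], [b, 1 - b]])
    H = entropy_rate(P, pi)
    if H > best_H:
        best_H, best_P = H, P
```

In higher-dimensional models the same program becomes a constrained optimization over many free rates, which is where the paper's general theory comes in.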
Vignale, Giovanni
2011-01-01
In a paper recently published in Phys. Rev. A [arXiv:1010.4223], Schirmer has criticized an earlier work of mine [arXiv:0803.2727], as well as the foundations of time-dependent density functional theory. In Ref.[2], I showed that the so-called "causality paradox" - i.e., the failure of the exchange-correlation potential derived from the Runge-Gross time-dependent variational principle to satisfy causality requirements - can be solved by a careful reformulation of that variational principle. F...
王光臣; 吴臻
2007-01-01
In this paper, we mainly study a kind of risk-sensitive optimal control problem motivated by a portfolio choice problem in a certain financial market. Using the classical convex variational technique, we obtain the maximum principle for this kind of problem. The form of the maximum principle is similar to its risk-neutral counterpart, but the adjoint equation and the variational inequality depend heavily on the risk-sensitivity parameter γ; this is one of the main differences from the risk-neutral case. We use this result to solve a kind of optimal portfolio choice problem. The optimal portfolio strategy obtained by the Bellman dynamic programming principle is a special case of our result when the investor invests only in the domestic bond and the stock. Computational results and figures explicitly illustrate the relationships between the maximum expected utility and the parameters of the model.
Maximum Likelihood Associative Memories
Gripon, Vincent; Rabbat, Michael
2013-01-01
Associative memories are structures that store data in such a way that it can later be retrieved given only a part of its content -- a sort-of error/erasure-resilience property. They are used in applications ranging from caches and memory management in CPUs to database engines. In this work we study associative memories built on the maximum likelihood principle. We derive minimum residual error rates when the data stored comes from a uniform binary source. Second, we determine the minimum amo...
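For the erasure case sketched above, maximum-likelihood retrieval from a uniform binary source reduces to matching stored words against the observed (non-erased) positions of the query. The following toy sketch illustrates that reduction; the sizes and encoding (with -1 marking an erasure) are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Store M random binary words of length n (uniform source, as in the abstract).
M, n = 8, 16
memory = rng.integers(0, 2, size=(M, n))

def ml_retrieve(memory, query):
    """Maximum-likelihood retrieval from a partially erased query.

    `query` uses -1 for erased positions. For a uniform source with
    erasures, ML reduces to matching the stored words on the observed
    positions; multiple matches mean the content cannot be resolved
    uniquely from the given fragment.
    """
    observed = query >= 0
    matches = (memory[:, observed] == query[observed]).all(axis=1)
    return np.flatnonzero(matches)

# Erase half of one stored word and retrieve it.
target = memory[3].copy()
query = target.copy()
query[: n // 2] = -1
candidates = ml_retrieve(memory, query)
```

With bit-flip noise instead of erasures, the same principle becomes minimum Hamming distance decoding over the stored words.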
Moiseiwitsch, B L
2004-01-01
This graduate-level text's primary objective is to demonstrate the expression of the equations of the various branches of mathematical physics in the succinct and elegant form of variational principles (and thereby illuminate their interrelationship). Its related intentions are to show how variational principles may be employed to determine the discrete eigenvalues for stationary state problems and to illustrate how to find the values of quantities (such as the phase shifts) that arise in the theory of scattering. Chapter-by-chapter treatment consists of analytical dynamics; optics, wave mecha
A review of Principles and Parameters---An Introduction to Syntactic Theory
Peter W.Culicover; 伍雅清
2000-01-01
Principles and Parameters---An Introduction to Syntactic Theory is a syntax textbook written by Peter W. Culicover and published by Oxford University Press in 1997. Compared with syntax works published in earlier years, such as Liliane Haegeman's (1991) introduction to Government and Binding theory, …
Principled Missing Data Treatments.
Lang, Kyle M; Little, Todd D
2016-04-04
We review a number of issues regarding missing data treatments for intervention and prevention researchers. Many of the common missing data practices in prevention research are still, unfortunately, ill-advised (e.g., use of listwise and pairwise deletion, insufficient use of auxiliary variables). Our goal is to promote better practice in the handling of missing data. We review the current state of missing data methodology and recent missing data reporting in prevention research. We describe antiquated, ad hoc missing data treatments and discuss their limitations. We discuss two modern, principled missing data treatments: multiple imputation and full information maximum likelihood, and we offer practical tips on how to best employ these methods in prevention research. The principled missing data treatments that we discuss are couched in terms of how they improve causal and statistical inference in the prevention sciences. Our recommendations are firmly grounded in missing data theory and well-validated statistical principles for handling the missing data issues that are ubiquitous in biosocial and prevention research. We augment our broad survey of missing data analysis with references to more exhaustive resources.
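Multiple imputation, one of the two principled treatments recommended above, can be sketched with stochastic regression imputation and simple pooling of the point estimate. The data-generating model, the missingness rate, and the omission of Rubin's variance pooling are all simplifying assumptions of this sketch:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy bivariate data: y depends on x; y is missing completely at random
# for roughly 30% of cases.
n = 500
x = rng.normal(0, 1, n)
y = 2.0 + 1.5 * x + rng.normal(0, 1, n)
miss = rng.random(n) < 0.3
y_obs = np.where(miss, np.nan, y)

def impute_once(x, y_obs, rng):
    """One stochastic regression imputation of the missing y values."""
    obs = ~np.isnan(y_obs)
    X = np.column_stack([np.ones(obs.sum()), x[obs]])
    beta, *_ = np.linalg.lstsq(X, y_obs[obs], rcond=None)
    resid = y_obs[obs] - X @ beta
    sigma = resid.std(ddof=2)
    y_fill = y_obs.copy()
    Xm = np.column_stack([np.ones((~obs).sum()), x[~obs]])
    # Draw imputations around the regression line to preserve variability.
    y_fill[~obs] = Xm @ beta + rng.normal(0, sigma, (~obs).sum())
    return y_fill

# m completed datasets; pool the estimate of mean(y) by averaging (Rubin's
# rules for the point estimate; the variance combination is omitted here).
m = 20
estimates = [impute_once(x, y_obs, rng).mean() for _ in range(m)]
pooled = float(np.mean(estimates))
```

The random draw in each imputation is what distinguishes this from deterministic regression imputation, which would understate the uncertainty the paper warns about.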
WU Li-Li; WU Ning; HU Juan-Mei; WU Feng-Min
2008-01-01
For a long time, it has been generally believed that spin-spin interactions can only exist in a theory in which Lorentz symmetry is gauged, and that a theory with spin-spin interactions is not perturbatively renormalizable. But this is not true. By studying the motion of a spinning particle in a gravitational field, it is found that spin-spin interactions exist in gauge theory of gravity. The mechanism is that a spinning particle generates a gravitomagnetic field in space-time, and this gravitomagnetic field interacts with the spin of another particle, causing spin-spin interactions. Spin-spin interactions are thus transmitted by the gravitational field. The form of the spin-spin interaction in the post-Newtonian approximation is deduced; this result can also be obtained from the Papapetrou equation. This kind of interaction does not affect the renormalizability of the theory. The spin-spin interaction violates the weak equivalence principle, and the violation effects are detectable. An experiment is proposed to detect the effects of this violation of the weak equivalence principle.
Hes Tomáš
2017-03-01
Full Text Available Microfinance services are essential tools for the formalization of the shadow economy, leveraging immature entrepreneurship with external capital. Given the importance of the shadow economy for the social balance of developing countries, an answer to the question of how microfinance entities come into existence is rather essential. While the decision-making processes leading to entrepreneurship were explained by the effectuation theory developed in the 1990s, those explanations were not concerned with the logic of the creation of microenterprises in developing countries or in microfinance village banks. While the abovementioned theories explain the emergence of companies in developed markets, the importance of a focus on emerging markets, given the large share of human society represented by microfinance clientele, is obvious. The study provides a developmental streak to effectuation theory, adding the musketeer principle to the five effectuation principles proposed by Sarasvathy. Furthermore, the hitherto unconsidered relationship between social capital and effectuation-related concepts is another proposal of the paper, which describes the nature of microfinance clientele from the point of view of effectuation theory and social capital, drawing a comparison of microfinance markets in four countries: Turkey, Sierra Leone, Indonesia and Afghanistan.
Chemical Principles Exemplified
Plumb, Robert C.
1970-01-01
This is the first of a new series of brief anecdotes about materials and phenomena which exemplify chemical principles. Examples include (1) the sea-lab experiment illustrating principles of the kinetic theory of gases, (2) snow-making machines illustrating principles of thermodynamics in gas expansions and phase changes, and (3) sunglasses that…
曾杰; 张永兴; 靳晓光
2011-01-01
Based on an analysis of rock burst prediction criteria at home and abroad, the mechanical, integrity, energy-storage and brittleness conditions required for rock burst occurrence are selected as prediction indexes. The concept of relative membership degree for rock burst prediction is introduced, and the fuzzy matrix of relative membership degrees and the weights of the prediction indexes are calculated. Information entropy is used to describe and compare the uncertainty in rock burst evaluation, and a generalized weighted distance is defined to characterize the differences between rock bursts. A fuzzy optimization model for rock burst prediction is established based on the maximum entropy principle. Several underground rock engineering cases are analyzed; the predictions agree well with the results of other methods and with the actual situations. Finally, the model is applied to rock burst prediction for the Putaoshan tunnel, and the predictions accord well with the actual rock bursts.
Maximum entropy production and the fluctuation theorem
Dewar, R C [Unite EPHYSE, INRA Centre de Bordeaux-Aquitaine, BP 81, 33883 Villenave d' Ornon Cedex (France)
2005-05-27
Recently the author used an information theoretical formulation of non-equilibrium statistical mechanics (MaxEnt) to derive the fluctuation theorem (FT) concerning the probability of second law violating phase-space paths. A less rigorous argument leading to the variational principle of maximum entropy production (MEP) was also given. Here a more rigorous and general mathematical derivation of MEP from MaxEnt is presented, and the relationship between MEP and the FT is thereby clarified. Specifically, it is shown that the FT allows a general orthogonality property of maximum information entropy to be extended to entropy production itself, from which MEP then follows. The new derivation highlights MEP and the FT as generic properties of MaxEnt probability distributions involving anti-symmetric constraints, independently of any physical interpretation. Physically, MEP applies to the entropy production of those macroscopic fluxes that are free to vary under the imposed constraints, and corresponds to selection of the most probable macroscopic flux configuration. In special cases MaxEnt also leads to various upper bound transport principles. The relationship between MaxEnt and previous theories of irreversible processes due to Onsager, Prigogine and Ziegler is also clarified in the light of these results. (letter to the editor)
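The central step above, extending a MaxEnt orthogonality property through an anti-symmetric constraint, can be sketched generically as follows. This is a toy version: the precise exponent and the physical identification of the multiplier differ in the full derivation.

```latex
% MaxEnt path distribution with multiplier \lambda conjugate to the
% entropy production \sigma_\Gamma along phase-space path \Gamma:
\[
  p_\Gamma \;=\; \frac{e^{\lambda \sigma_\Gamma}}{Z(\lambda)},
  \qquad
  Z(\lambda) \;=\; \sum_\Gamma e^{\lambda \sigma_\Gamma}.
\]
% If path reversal \Gamma \mapsto \Gamma^{\dagger} is a bijection with
% \sigma_{\Gamma^{\dagger}} = -\sigma_\Gamma (the anti-symmetric
% constraint), then summing p_\Gamma over paths with a fixed value of
% the entropy production gives an FT-like symmetry:
\[
  \frac{p(\sigma)}{p(-\sigma)} \;=\; e^{2\lambda\sigma}.
\]
```

The ratio depends only on the multiplier enforcing the anti-symmetric constraint, which is why the fluctuation-theorem structure emerges independently of any particular physical interpretation.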
Svenson, Eric Johan
Participants on the Invincible America Assembly in Fairfield, Iowa, and neighboring Maharishi Vedic City, Iowa, practicing Maharishi Transcendental Meditation™ (TM) and the TM-Sidhi™ programs in large groups, submitted written experiences that they had had during, and in some cases shortly after, their daily practice of the TM and TM-Sidhi programs. Participants were instructed to include in their written experiences only what they observed and to leave out interpretation and analysis. These experiences were then read by the author and compared with principles and phenomena of modern physics, particularly with quantum theory, astrophysics, quantum cosmology, and string theory, as well as with defining characteristics of higher states of consciousness as described by Maharishi Vedic Science. In all cases, particular principles or phenomena of physics and qualities of higher states of consciousness appeared qualitatively quite similar to the content of the given experience. These experiences are presented in an Appendix, in which the corresponding principles and phenomena of physics are also presented. These physics "commentaries" on the experiences were written largely in layman's terms, without equations, and, in nearly every case, with clear reference to the corresponding sections of the experiences to which a given principle appears to relate. An abundance of similarities was apparent between the subjective experiences during meditation and principles of modern physics. A theoretic framework for understanding these rich similarities may begin with Maharishi's theory of higher states of consciousness provided herein. We conclude that the consistency and richness of detail found in these abundant similarities warrant the further pursuit and development of such a framework.
de Lusignan, Simon; Krause, Paul
2010-01-01
There has been much criticism of the NHS national programme for information technology (IT); it has been an expensive programme and some elements appear to have achieved little. The Hayes report was written as an independent review of health and social care IT in England. Our aim is to identify key principles for health IT implementation which may have relevance beyond the critique of NHS IT. We elicit ten principles from the Hayes report which, if followed, may result in more effective IT implementation in health care. They divide into patient-centred, subsidiarity and strategic principles. The patient-centred principles are: 1) the patient must be at the centre of all information systems; 2) the provision of patient-level operational data should form the foundation - avoid the dataset mentality; 3) store health data as close to the patient as possible; 4) enable the patient to take a more active role with their health data within a trusted doctor-patient relationship. The subsidiarity principles set out to balance the local and health-system-wide needs: 5) standardise centrally - patients must be able to benefit from interoperability; 6) provide a standard procurement package and an approved process that ensures safety standards and provision of interoperable systems; 7) authorise a range of local suppliers so that health providers can select the system best meeting local needs; 8) allow local migration from legacy systems, as and when improved functionality for patients is available. And finally the strategic principles: 9) evaluate health IT systems in terms of measurable benefits to patients; 10) strategic planning of systems should reflect strategic goals for the health of patients/the population. Had the Hayes principles been embedded within our approach to health IT, and in particular to medical record implementation, we might have avoided many of the costly mistakes with the UK national programme. However, these principles need application within the modern IT
Information theory in computer vision and pattern recognition
Escolano, Francisco; Bonev, Boyan
2009-01-01
Researchers are bringing information theory elements to the computer vision and pattern recognition (CVPR) arena. Among these elements there are measures (entropy, mutual information), principles (maximum entropy, minimax entropy) and theories (rate distortion theory, method of types). This book explores the latter elements.
Hesselink, M.W.
2013-01-01
This short paper contains comments prepared for the 'Foundational Principles of Contract Law Roundtable’ held at Berkeley in January 2013. It discusses the relationships between contract law and democracy, between contract prices and human dignity, and between the American doctrine of
Ozawa, Hisashi; Shimokawa, Shinya; Sakuma, Hirofumi
Turbulence is ubiquitous in nature, yet remains an enigma in many respects. Here we investigate dissipative properties of turbulence so as to find a statistical "law" of turbulence. Two general expressions are derived for the rate of entropy increase due to thermal and viscous dissipation (turbulent dissipation) in a fluid system. It is found with these equations that phenomenological properties of turbulence, such as Malkus's suggestion on maximum heat transport in thermal convection as well as Busse's suggestion on maximum momentum transport in shear turbulence, can rigorously be explained by a unique state in which the rate of entropy increase due to the turbulent dissipation is at a maximum (dS/dt = Max.). It is also shown that the same state corresponds to the maximum entropy climate suggested by Paltridge. The tendency to increase the rate of entropy increase has also been confirmed by our recent GCM experiments. These results suggest the existence of a universal law that manifests itself in the long-term statistics of turbulent fluid systems, from laboratory-scale turbulence to planetary-scale circulations. Ref.) Ozawa, H., Shimokawa, S., and Sakuma, H., Phys. Rev. E 64, 026303, 2001.
Maximum Information and Quantum Prediction Algorithms
McElwaine, J N
1997-01-01
This paper describes an algorithm for selecting a consistent set within the consistent histories approach to quantum mechanics and investigates its properties. The algorithm uses a maximum information principle to select from among the consistent sets formed by projections defined by the Schmidt decomposition. The algorithm unconditionally predicts the possible events in closed quantum systems and ascribes probabilities to these events. A simple spin model is described and a complete classification of all exactly consistent sets of histories formed from Schmidt projections in the model is proved. This result is used to show that for this example the algorithm selects a physically realistic set. Other tentative suggestions in the literature for set selection algorithms using ideas from information theory are discussed.
Kumar, Mohit; Mookerjee, Sumit; Som, Tapobrata
2016-09-01
We demonstrate that the work function of Al-doped ZnO (AZO) can be tuned externally by applying an electric field. Our experimental investigations using Kelvin probe force microscopy show that by applying a positive or negative tip bias, the work function of AZO film can be enhanced or reduced, which corroborates well with the observed charge transport using conductive atomic force microscopy. These findings are further confirmed by calculations based on first-principles theory. Tuning the work function of AZO by applying an external electric field is not only important to control the charge transport across it, but also to design an Ohmic contact for advanced functional devices.
刘同舫
2012-01-01
As a methodological principle, Rawls's theory of education equity is based on the script of his theory of justice and contains insurmountable theoretical difficulties. Its premise is built on a purely idealistic hypothesis about, and understanding of, humans and human nature, and is therefore abstract. Its perspective promotes everyone's fairness and equality under the provisions of a uniform scale, and is therefore one-sided. Its end result pursues a maximum consensus on distributive justice through an egalitarian tendency, and is therefore utopian. We should view Rawls's theory of education equity critically, be vigilant about attachment to it, and avoid judgments at odds with reality made out of blind faith in the theory.
Renewal of basic laws and principles for polar continuum theories (Ⅺ)——consistency problems
DAI Tian-min
2007-01-01
Some consistency problems existing in continuum field theories are briefly reviewed. Three kinds of consistency problems are clarified on the basis of the renewed basic laws for polar continua. The first concerns consistency between the basic laws for polar continua themselves. The second concerns consistency between the basic laws for polar continua and those for other, nonpolar continua. The third concerns consistency between the basic laws for micropolar continuum theories and the dynamical equations for rigid bodies. The results presented here can help us gain a deeper understanding of the structure of the basic laws for various continuum theories and of the interrelations between them. At the same time, these results show clearly that the consistency problems cannot be solved within the framework of the traditional basic laws for continuum field theories.
Generic maximum likely scale selection
Pedersen, Kim Steenstrup; Loog, Marco; Markussen, Bo
2007-01-01
The fundamental problem of local scale selection is addressed by means of a novel principle based on maximum likelihood estimation. The principle is generally applicable to a broad variety of image models and descriptors, and provides a generic scale estimation methodology. The focus is on second-order moments of multiple measurement outputs at a fixed location. These measurements, which reflect local image structure, consist in the cases considered here of Gaussian derivatives taken at several scales and/or having different derivative orders.
Volov, V T
2013-01-01
A theory of resource distribution in self-organizing systems based on the fractal-cluster method is presented. The theory consists of two parts: deterministic and probabilistic. The first part includes the static and dynamic criteria and the fractal-cluster dynamic equations, which are based on the fractal-cluster correlations and the characteristics of the Fibonacci range. The second part includes the foundations of the probabilistic characteristics of fractal-cluster systems, including the dynamic equations of their probable evolution. Using numerical studies of these equations for the stationary case, the random state field of the system in the phase space of the $D$, $H$, $F$ criteria has been obtained. The theory has been tested for socio-economic and biological systems.
Research on Textbook Construction for Application-Oriented Undergraduate Operating Systems Courses (Teaching of Principles of Operating Systems under the Direction of Learning Theories)
范辉; 谢青松; 孙述和; 杜萍; 李晋江
2011-01-01
Learning theories provide a scientific basis for the development of educational theory and for teaching practice, and offer teachers various teaching strategies and skills for promoting learning. Four learning theories, behaviorism, cognitivism, constructivism and humanism, are examined in depth in this paper. Addressing the problems existing in the teaching of operating systems principles, the paper proposes corresponding teaching methods and strategies guided by these learning theories, which can build a bridge between basic learning research and teaching practice and genuinely benefit students through effective teaching activities. Textbook construction is at the core of course construction and talent cultivation: against the background of the Eleventh Five-Year-Plan national planning textbook Principles of Operating Systems with Practical Training (2nd edition), the paper also discusses the course objectives of the application-oriented undergraduate operating systems course, the organization and structure of the textbook, and issues in classroom teaching implementation, in the hope of offering insights to colleagues.
Saposnik, Gustavo; Johnston, S Claiborne
2016-04-01
Acute stroke care represents a challenge for decision makers. Decisions based on erroneous assessments may generate false expectations in patients and their family members, and potentially inappropriate medical advice. Game theory is the analysis of interactions between individuals, used to study how conflict and cooperation affect our decisions. We reviewed principles of game theory that could be applied to medical decisions under uncertainty. Medical decisions in acute stroke care are usually made under constraints: a short period of time, imperfect clinical information, and limited understanding of patients' and families' values and beliefs. Game theory brings strategies to help us manage complex medical situations under uncertainty. For example, it offers a different perspective by encouraging the consideration of different alternatives through the understanding of patients' preferences and the careful evaluation of cognitive distortions when applying 'real-world' data. The stag-hunt game teaches us the importance of trust in strengthening cooperation for a successful patient-physician interaction that goes beyond a good or poor clinical outcome. The application of game theory to stroke care may improve our understanding of complex medical situations and help clinicians make practical decisions under uncertainty. © 2016 World Stroke Organization.
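The stag-hunt game mentioned in this abstract can be made concrete with a small sketch. The payoff numbers below are illustrative assumptions, not from the paper; the point is that the game has two pure-strategy Nash equilibria, mutual cooperation (both hunt the stag) and mutual defection (both settle for hares), and trust determines which one is reached:

```python
# Pure-strategy Nash equilibria of a stag-hunt game (illustrative payoffs):
# two players each choose Stag (cooperate) or Hare (defect); stag hunting
# pays off only if both cooperate, while hare hunting is safe but modest.
PAYOFF = {  # (row_choice, col_choice) -> (row_payoff, col_payoff)
    ("stag", "stag"): (4, 4),
    ("stag", "hare"): (0, 3),
    ("hare", "stag"): (3, 0),
    ("hare", "hare"): (3, 3),
}
ACTIONS = ("stag", "hare")

def is_nash(row, col):
    """A profile is a Nash equilibrium if no player gains by deviating alone."""
    r_pay, c_pay = PAYOFF[(row, col)]
    row_ok = all(PAYOFF[(dev, col)][0] <= r_pay for dev in ACTIONS)
    col_ok = all(PAYOFF[(row, dev)][1] <= c_pay for dev in ACTIONS)
    return row_ok and col_ok

equilibria = [(r, c) for r in ACTIONS for c in ACTIONS if is_nash(r, c)]
print(equilibria)  # -> [('stag', 'stag'), ('hare', 'hare')]
```

Mutual cooperation is payoff-dominant but risky; mutual defection is risk-dominant, which is why trust between the parties decides which equilibrium is played.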
The Aesthetic Principles and Standards of Feng Shui Theory
肖冠兰
2012-01-01
Feng Shui is the most important theoretical system in Chinese traditional culture for evaluating the natural environment in site selection. This paper analyzes the main aesthetic principles and standards of Feng Shui, such as a clear hierarchy between primary and secondary elements, balance and stability, breadth and grandeur, and winding vitality, and shows that these aesthetic principles and standards played a guiding role in the environmental landscape, spatial composition, and building-group organization of classical Chinese architecture.
Bochove, Erik J; Rao Gudimetla, V S
2017-01-01
We propose a self-consistency condition based on the extended Huygens-Fresnel principle, which we apply to the propagation kernel of the mutual coherence function of a partially coherent laser beam propagating through a turbulent atmosphere. The assumption of statistical independence of turbulence in neighboring propagation segments leads to an integral equation in the propagation kernel. This integral equation is satisfied by a Gaussian function, with dependence on the transverse coordinates that is identical to the previous Gaussian formulation by Yura [Appl. Opt.11, 1399 (1972)APOPAI0003-693510.1364/AO.11.001399], but differs in the transverse coherence length's dependence on propagation distance, so that this established version violates our self-consistency principle. Our formulation has one free parameter, which in the context of Kolmogorov's theory is independent of turbulence strength and propagation distance. We determined its value by numerical fitting to the rigorous beam propagation theory of Yura and Hanson [J. Opt. Soc. Am. A6, 564 (1989)JOAOD60740-323210.1364/JOSAA.6.000564], demonstrating in addition a significant improvement over other Gaussian models.
Triadic conceptual structure of the maximum entropy approach to evolution.
Herrmann-Pillath, Carsten; Salthe, Stanley N
2011-03-01
Many problems in evolutionary theory are cast in dyadic terms, such as the polar oppositions of organism and environment. We argue that a triadic conceptual structure offers an alternative perspective under which the information generating role of evolution as a physical process can be analyzed, and propose a new diagrammatic approach. Peirce's natural philosophy was deeply influenced by his reception of both Darwin's theory and thermodynamics. Thus, we elaborate on a new synthesis which puts together his theory of signs and modern Maximum Entropy approaches to evolution in a process discourse. Following recent contributions to the naturalization of Peircean semiosis, pointing towards 'physiosemiosis' or 'pansemiosis', we show that triadic structures involve the conjunction of three different kinds of causality: efficient, formal and final. In this, we accommodate the state-centered thermodynamic framework to a process approach. We apply this to Ulanowicz's analysis of autocatalytic cycles as primordial patterns of life. This paves the way for a semiotic view of thermodynamics which is built on the idea that Peircean interpretants are systems of physical inference devices evolving under natural selection. In this view, the principles of Maximum Entropy, Maximum Power, and Maximum Entropy Production work together to drive the emergence of information-carrying structures, which at the same time maximize information capacity as well as the gradients of energy flows, such that ultimately, contrary to Schrödinger's seminal contribution, the evolutionary process is seen to be a physical expression of the Second Law.
Pardini, Lorenzo; Löffler, Stefan; Biddau, Giulio; Hambach, Ralf; Kaiser, Ute; Draxl, Claudia; Schattschneider, Peter
2016-07-15
Transmission electron microscopy has been a promising candidate for mapping atomic orbitals for a long time. Here, we explore its capabilities by a first-principles approach. For the example of defected graphene, exhibiting either an isolated vacancy or a substitutional nitrogen atom, we show that three different kinds of images are to be expected, depending on the orbital character. To judge the feasibility of visualizing orbitals in a real microscope, the effect of the optics' aberrations is simulated. We demonstrate that, by making use of energy filtering, it should indeed be possible to map atomic orbitals in a state-of-the-art transmission electron microscope.
Maximum Entropy in Drug Discovery
Chih-Yuan Tseng
2014-07-01
Full Text Available Drug discovery applies multidisciplinary approaches, experimental, computational, or both, to identify lead compounds for treating various diseases. While conventional approaches have yielded many US Food and Drug Administration (FDA)-approved drugs, researchers continue investigating and designing better approaches to increase the success rate of the discovery process. In this article, we provide an overview of current strategies and point out where and how the method of maximum entropy has been introduced in this area. The maximum entropy principle has its root in thermodynamics, yet since Jaynes' pioneering work in the 1950s it has been used not only as a physical law but also as a reasoning tool that allows us to process the information in hand with the least bias. Its applicability in various disciplines has been abundantly demonstrated. We give several examples of applications of maximum entropy in different stages of drug discovery. Finally, we discuss a promising new direction in drug discovery that is likely to hinge on the ways of utilizing maximum entropy.
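As a minimal illustration of the maximum entropy principle as a least-bias reasoning tool, the following sketch reproduces Jaynes' classic dice example: given only the mean of a die, the least-biased distribution is exponential in the face value, with the Lagrange multiplier fixed numerically. The bisection solver and the target mean of 4.5 are illustrative choices, not taken from the article:

```python
import math

def maxent_die(mean_target, lo=-10.0, hi=10.0, iters=200):
    """Least-biased distribution over die faces 1..6 with a fixed mean.

    Maximizing entropy subject to the mean constraint yields p_i ∝ exp(lam*i);
    the multiplier lam is found by bisection, since the implied mean is a
    monotonically increasing function of lam.
    """
    faces = range(1, 7)

    def mean_for(lam):
        w = [math.exp(lam * i) for i in faces]
        return sum(i * wi for i, wi in zip(faces, w)) / sum(w)

    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if mean_for(mid) < mean_target:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    w = [math.exp(lam * i) for i in faces]
    z = sum(w)
    return [wi / z for wi in w]

# Jaynes' classic example: a die whose average throw is 4.5 instead of 3.5.
p = maxent_die(4.5)
print([round(x, 4) for x in p])  # probabilities increase with face value
```

With no constraint beyond normalization (mean 3.5), the same routine recovers the uniform distribution, the least-biased assignment in the absence of information.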
Size dependence of efficiency at maximum power of heat engine
Izumida, Y.
2013-10-01
We perform a molecular dynamics computer simulation of a heat engine model to study how the engine size affects its performance. Upon increasing the size of the model anisotropically, we find that there exists an optimum size at which the model attains the maximum power for the shortest working period. This optimum size lies between the ballistic heat transport region and the diffusive one. We also study the size dependence of the efficiency at maximum power. Interestingly, we find that the efficiency at maximum power around the optimum size attains a value that has been proposed as a universal upper bound, and even begins to exceed the bound as the size further increases. We explain this behavior of the efficiency at maximum power by using a linear response theory for a heat engine operating under a finite working period, which naturally extends the low-dissipation Carnot cycle model [M. Esposito, R. Kawai, K. Lindenberg, C. Van den Broeck, Phys. Rev. Lett. 105, 150603 (2010)]. The theory also shows that the efficiency at maximum power under an extreme condition may in principle reach the Carnot efficiency. © EDP Sciences, Società Italiana di Fisica, Springer-Verlag 2013.
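Assuming the universal upper bound referred to here is the low-dissipation result eta_C / (2 - eta_C) of Esposito et al. (cited in the abstract), the relevant quantities can be sketched as follows. The Curzon-Ahlborn value 1 - sqrt(Tc/Th), a standard benchmark for efficiency at maximum power, is included for comparison; the reservoir temperatures are illustrative:

```python
import math

def carnot(t_cold, t_hot):
    """Carnot efficiency, the absolute upper limit for any heat engine."""
    return 1.0 - t_cold / t_hot

def curzon_ahlborn(t_cold, t_hot):
    """Efficiency at maximum power for an endoreversible engine."""
    return 1.0 - math.sqrt(t_cold / t_hot)

def low_dissipation_bounds(t_cold, t_hot):
    """Bounds on efficiency at maximum power in the low-dissipation
    Carnot model (Esposito et al., PRL 105, 150603 (2010)):
        eta_C / 2  <=  eta*  <=  eta_C / (2 - eta_C)
    """
    eta_c = carnot(t_cold, t_hot)
    return eta_c / 2.0, eta_c / (2.0 - eta_c)

t_cold, t_hot = 300.0, 500.0  # illustrative reservoir temperatures (K)
eta_ca = curzon_ahlborn(t_cold, t_hot)
lower, upper = low_dissipation_bounds(t_cold, t_hot)
print(f"Carnot         : {carnot(t_cold, t_hot):.4f}")
print(f"Curzon-Ahlborn : {eta_ca:.4f}")
print(f"Bounds         : [{lower:.4f}, {upper:.4f}]")
```

For these temperatures the Curzon-Ahlborn value falls strictly between the two low-dissipation bounds, consistent with its interpretation as the symmetric-dissipation case.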
Liu, Jefferson Zhe; Zunger, Alex
2009-07-22
Epitaxial growth of semiconductor alloys onto a fixed substrate has become the method of choice to make high quality crystals. In the coherent epitaxial growth, the lattice mismatch between the alloy film and the substrate induces a particular form of strain, adding a strain energy term into the free energy of the alloy system. Such epitaxial strain energy can alter the thermodynamics of the alloy, leading to a different phase diagram and different atomic microstructures. In this paper, we present a general-purpose mixed-basis cluster expansion method to describe the thermodynamics of an epitaxial alloy, where the formation energy of a structure is expressed in terms of pair and many-body interactions. With a finite number of first-principles calculation inputs, our method can predict the energies of various atomic structures with an accuracy comparable to that of first-principles calculations themselves. Epitaxial (In, Ga)N zinc-blende alloy grown on GaN(001) substrate is taken as an example to demonstrate the details of the method. Two (210) superlattice structures, (InN)(2)/(GaN)(2) (at x = 0.50) and (InN)(4)/(GaN)(1) (at x = 0.80), are identified as the ground state structures, in contrast to the phase-separation behavior of the bulk alloy.
Radim Uhlář
2009-09-01
Full Text Available BACKGROUND: There are several factors (the initial body position of the ski jumper and its changes at the transition to the flight phase, the magnitude and direction of the velocity vector of the jumper's center of mass, the magnitude of the aerodynamic drag and lift forces, etc.) which determine the trajectory of the jumper-ski system along with the total distance of the jump. OBJECTIVE: The objective of this paper is to present a method based on Pontryagin's maximum principle which allows us to obtain a solution of the optimization problem for flight style control with three constrained control variables: the angle of attack (α), the body-ski angle (β), and the ski opening angle (V). METHODS: The flight distance was used as the optimality criterion. A borrowed regression function was taken as the source of information about the dependence of the drag (D) and lift (L) areas on the control variables, with tabulated regression coefficients. The trajectories of the reference and optimized jumps were compared on the K = 125 m jumping hill profile in Frenštát pod Radhoštěm (Czech Republic), and the corresponding jump lengths, aerodynamic drag and lift forces, and the magnitude of the velocity vector of the ski jumper system's center of mass, together with its vertical and horizontal components, were evaluated. Admissible control variables were taken at each time from a bounded set, to respect a realistic posture of the ski jumper system in flight. RESULTS: It was found that a ski jumper should, within the bounded set of admissible control variables, minimize the angles α and β, whereas the angle V should be maximized. The length increment due to optimization is 17%. CONCLUSIONS: For future work it is necessary to determine the dependence of the aerodynamic forces acting on the ski jumper system on the flight via regression analysis of experimental data, as well as to apply control variables consistent with the ski jumper's mental and motor abilities.
José Oscar de Almeida Marques
2012-06-01
Full Text Available ABSTRACT: When Hume, in the Treatise of Human Nature, began his examination of the relation of cause and effect, and in particular of the idea of necessary connection which is its essential constituent, he identified two preliminary questions that should guide his research: (1) For what reason do we pronounce it necessary that every thing whose existence has a beginning should also have a cause? and (2) Why do we conclude that such particular causes must necessarily have such particular effects? (1.3.2, 14-15). Hume observes that our belief in these principles can result neither from an intuitive grasp of their truth nor from a reasoning that could establish them by demonstrative means. In particular, with respect to the first, Hume examines and rejects some arguments with which Locke, Hobbes and Clarke tried to demonstrate it, and suggests, by exclusion, that the belief that we place in it can only come from experience. Somewhat surprisingly, however, Hume does not proceed to show how that derivation from experience could be made, but proposes instead to move directly to an examination of the second principle, saying that it will "perhaps be found in the end, that the same answer will serve for both questions" (1.3.3, 9). Hume's answer to the second question is well known, but the first question is never answered in the rest of the Treatise, and it is even doubtful that it could be, which would explain why Hume simply chose to remove any mention of it when he recompiled his theses on causation in the Enquiry concerning Human Understanding. Given this situation, an interesting question that naturally arises is to investigate the relations of logical or conceptual implication between these two principles. Hume seems to have thought that an answer to (2) would also be sufficient to provide an answer to (1). Henry Allison, in his turn, argued (in Custom and Reason in Hume, p. 94-97) that the two questions are logically independent. My proposal here is to try to show
Vera, R A
2004-01-01
Recent astronomical observations verify the new astrophysical scenario resulting from new conservation laws and a new relativity principle, fixed by dual properties of light and by some new gravitational (G) tests. Gravitation turns out to be a refraction phenomenon in which the field does not exchange energy with photons and particles. During a free fall, and during universe expansion, the relative masses of free bodies with respect to the observer remain constant. During universe expansion, the average relative distances are conserved because rods must expand in the same proportion. The entropy of the universe is conserved because the new kind of linear black hole, after absorbing radiation, must explode, regenerating new primeval gas that provides new fuel for dead galaxies. Thus galaxies must evolve indefinitely in closed cycles with luminous and dark periods. All of their phases are to be found anywhere and in any age of the universe, and they account for the recent observations.
Goodwin, Miki; Candela, Lori
2013-06-01
The aim of this qualitative study was to explore if newly practicing nurses benefited from learning holistic comfort theory during their baccalaureate education, and to provide a conceptual framework to support the transition from school to practice. The study was conducted among graduates of an accelerated baccalaureate nursing program where holistic comfort theory was embedded as a learner-centered philosophy across the curriculum. A phenomenological process using van Manen's qualitative methodology in education involving semi-structured interviews and thematic analysis was used. The nurses recalled what holistic comfort meant to them in school, and described the lived experience of assimilating holistic comfort into their attitudes and behaviors in practice. Themes were established and a conceptual framework was developed to better understand the nurses' lived experiences. Results showed that holistic comfort was experienced as a constructive approach to transcend unavoidable difficulties during the transition from school to practice. Participants described meaningful learning and acquisition of self-strengthening behaviors using holistic comfort theory. Holistic comfort principles were credited for easing nurses into the realities of work and advocating for best patient outcomes. Patient safety and pride in patient care were incidental positive outcomes. The study offers new insights about applying holistic comfort to prepare nurses for the realities of practice. Copyright © 2012 Elsevier Ltd. All rights reserved.
A review of the generalized uncertainty principle.
Tawfik, Abdel Nasser; Diab, Abdel Magied
2015-12-01
Based on string theory, black hole physics, doubly special relativity and some 'thought' experiments, a minimal distance and/or a maximum momentum have been proposed. As alternatives to the generalized uncertainty principle (GUP), the modified dispersion relation, space noncommutativity, Lorentz invariance violation, and quantum-gravity-induced birefringence effects are summarized. The origin of minimal measurable quantities and the different GUP approaches are reviewed and the corresponding observations are analysed. Bounds on the GUP parameter are discussed and implemented in the understanding of recent PLANCK observations of cosmic inflation. The higher-order GUP approaches predict a minimal length uncertainty with and without maximum momenta. Possible arguments against the GUP are also discussed; for instance, concerns about its compatibility with the equivalence principle, the universality of gravitational redshift and of free fall, and the law of reciprocal action are addressed.
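One common way a GUP yields a minimal measurable length can be sketched numerically. The quadratic form used below, dx >= (hbar/2)(1/dp + beta*dp), is one standard convention rather than the review's specific formulation, and the value of beta is illustrative:

```python
import math

# A minimal sketch of how a quadratic GUP implies a minimal length.
# Assumed form (one common convention):
#   dx >= (hbar / 2) * (1 / dp + beta * dp)
# Minimizing the right-hand side over dp gives dx_min = hbar * sqrt(beta),
# attained at dp = 1 / sqrt(beta): unlike the Heisenberg relation, the
# position uncertainty cannot be squeezed to zero by letting dp grow.

def gup_bound(dp, beta, hbar=1.0):
    """Lower bound on the position uncertainty for a given momentum spread."""
    return 0.5 * hbar * (1.0 / dp + beta * dp)

def minimal_length(beta, hbar=1.0):
    """Analytic minimum of the bound, attained at dp = 1/sqrt(beta)."""
    return hbar * math.sqrt(beta)

beta = 4.0  # GUP parameter in Planck units (illustrative value)
dp_star = 1.0 / math.sqrt(beta)
print(gup_bound(dp_star, beta))   # bound at the optimal momentum spread
print(minimal_length(beta))       # matches the analytic minimum
```

A scan over momentum spreads confirms that no dp pushes the bound below the analytic minimum, which is the defining feature of a minimal measurable length.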
Joe Rosen
2005-12-01
Full Text Available Abstract: The symmetry principle is described in this paper. The full details are given in the book: J. Rosen, Symmetry in Science: An Introduction to the General Theory (Springer-Verlag, New York, 1995).
Principles of Quantum Mechanics
Landé, Alfred
2013-10-01
Preface; Introduction: 1. Observation and interpretation; 2. Difficulties of the classical theories; 3. The purpose of quantum theory; Part I. Elementary Theory of Observation (Principle of Complementarity): 4. Refraction in inhomogeneous media (force fields); 5. Scattering of charged rays; 6. Refraction and reflection at a plane; 7. Absolute values of momentum and wave length; 8. Double ray of matter diffracting light waves; 9. Double ray of matter diffracting photons; 10. Microscopic observation of ρ (x) and σ (p); 11. Complementarity; 12. Mathematical relation between ρ (x) and σ (p) for free particles; 13. General relation between ρ (q) and σ (p); 14. Crystals; 15. Transition density and transition probability; 16. Resultant values of physical functions; matrix elements; 17. Pulsating density; 18. General relation between ρ (t) and σ (є); 19. Transition density; matrix elements; Part II. The Principle of Uncertainty: 20. Optical observation of density in matter packets; 21. Distribution of momenta in matter packets; 22. Mathematical relation between ρ and σ; 23. Causality; 24. Uncertainty; 25. Uncertainty due to optical observation; 26. Dissipation of matter packets; rays in Wilson Chamber; 27. Density maximum in time; 28. Uncertainty of energy and time; 29. Compton effect; 30. Bothe-Geiger and Compton-Simon experiments; 31. Doppler effect; Raman effect; 32. Elementary bundles of rays; 33. Jeans' number of degrees of freedom; 34. Uncertainty of electromagnetic field components; Part III. The Principle of Interference and Schrödinger's equation: 35. Physical functions; 36. Interference of probabilities for p and q; 37. General interference of probabilities; 38. Differential equations for Ψp (q) and Xq (p); 39. Differential equation for фβ (q); 40. The general probability amplitude Φβ' (Q); 41. Point transformations; 42. General theorem of interference; 43. Conjugate variables; 44. Schrödinger's equation for conservative systems; 45. Schr
Mezzasalma, Stefano A
2007-03-15
The theoretical basis of a recent theory of Brownian relativity for polymer solutions is deepened and reexamined. After the problem of relative diffusion in polymer solutions is addressed, its two postulates are formulated in all generality. The former builds a statistical equivalence between (uncorrelated) timelike and shapelike reference frames, that is, among dynamical trajectories of liquid molecules and static configurations of polymer chains. The latter defines the "diffusive horizon" as the invariant quantity to work with in the special version of the theory. Particularly, the concept of universality in polymer physics corresponds in Brownian relativity to that of covariance in the Einstein formulation. Here, a "universal" law consists of a privileged observation, performed from the laboratory rest frame and agreeing with any diffusive reference system. From the joint lack of covariance and simultaneity implied by the Brownian Lorentz-Poincaré transforms, a relative uncertainty arises, in a certain analogy with quantum mechanics. It is driven by the difference between local diffusion coefficients in the liquid solution. The same transformation class can be used to infer Fick's second law of diffusion, playing here the role of a gauge invariance preserving covariance of the spacetime increments. An overall, noteworthy conclusion emerging from this view concerns the statistics of (i) static macromolecular configurations and (ii) the motion of liquid molecules, which would be much more related than expected.
A new layer compound Nb{sub 4}SiC{sub 3} predicted from first-principles theory
Li Chenliang [School of Astronautics, Harbin Institute of Technology, Harbin (China); Kuo Jeilai [School of Physics and Mathematical Sciences, Nanyang Technological University (Singapore); Wang, Biao [School of Physics and Engineering, Sun Yat-Sen University, Guangzhou (China); Li Yuanshi [Department of Cardiology, First Clinical College, Harbin Medical University, Harbin (China); Wang Rui, E-mail: lichenliang1980@yahoo.com.c [Department of Applied Chemistry, Harbin Institute of Technology, Harbin (China)
2009-04-07
We predicted a new layer compound Nb{sub 4}SiC{sub 3} using the first-principles method. The structural stability and the mechanical, electronic, theoretical-hardness and optical properties of Nb{sub 4}SiC{sub 3} were investigated. A stable Nb{sub 4}SiC{sub 3} phase appears in the {alpha}-type crystal structure. Moreover, the predicted Nb{sub 4}SiC{sub 3} is a metal and exhibits covalent character. Nb{sub 4}SiC{sub 3} has a theoretical hardness of 10.86 GPa, much higher than that of Nb{sub 4}AlC{sub 3}; at the same time, it is more ductile than Nb{sub 4}AlC{sub 3}. The strong covalent bonding in Nb{sub 4}SiC{sub 3} is responsible for its high bulk modulus and hardness. Nb{sub 4}SiC{sub 3} exhibits slightly anisotropic elasticity. Furthermore, its optical properties are analysed in detail. It is shown that Nb{sub 4}SiC{sub 3} might be a better candidate than Ti{sub 4}AlN{sub 3} as a coating material to avoid solar heating.
Comandi, G L; Chiofalo, M L; Nobili, A M; Polacco, E; Toncelli, R
2006-01-01
Recent theoretical work suggests that violation of the Equivalence Principle might be revealed in a measurement of the fractional differential acceleration $\eta$ between two test bodies (of different composition, falling in the gravitational field of a source mass) if the measurement is made to the level of $\eta \simeq 10^{-13}$ or better. This being within the reach of ground-based experiments gives them a new impetus. However, while slowly rotating torsion balances in ground laboratories are close to reaching this level, only an experiment performed in low orbit around the Earth is likely to provide a much better accuracy. We report on the progress made with the "Galileo Galilei on the Ground" (GGG) experiment, which aims to compete with torsion balances using an instrument design also capable of being converted into a much higher sensitivity space test. In the present and following paper (Part I and Part II), we demonstrate that the dynamical response of the GGG differential accelerometer set into superc...
Marzocchi, Warner; Woo, Gordon
2009-03-01
Despite volcanic risk having been defined quantitatively more than 30 years ago, this risk has been managed without being effectively measured. The recent substantial progress in quantifying eruption probability paves the way for a new era of rational science-based volcano risk management, based on what may be termed "volcanic risk metrics" (VRM). In this paper, we propose the basic principles of VRM, based on coupling probabilistic volcanic hazard assessment and eruption forecasting with cost-benefit analysis. The VRM strategy has the potential to rationalize decision making across a broad spectrum of volcanological questions. When should the call for evacuation be made? What early preparations should be made for a volcano crisis? Is it worthwhile waiting longer? What areas should be covered by an emergency plan? During unrest, what areas of a large volcanic field or caldera should be evacuated, and when? The VRM strategy has the paramount advantage of providing a set of quantitative and transparent rules that can be established well in advance of a crisis, optimizing and clarifying decision-making procedures. It enables volcanologists to apply all their scientific knowledge and observational information to assist authorities in quantifying the positive and negative risk implications of any decision.
Translation Principles of International Conference English under Skopos Theory
黄丽奇
2011-01-01
In international exchanges, the international conference is a very important form of communication, and good translation of conference English serves as an important bridge in strengthening China's publicity and communication with the outside world. This paper analyzes the language characteristics of conference English and, applying Skopos theory to concrete examples, puts forward four translation principles to improve the quality of international conference English translation.
杨忠志; 沈尔忠
1995-01-01
On the basis of electronegativity expressed in density functional theory and the electronegativity equalization principle, a new scheme for calculating the atomic charges in a molecule has been proposed and designed. It gives a new scale of atomic electronegativity and hardness in a given molecular environment, and takes the harmonic-mean electronegativity as the reference value of the molecular electronegativity, so that the multiple regression and nonuniform parameters of the original method are avoided. This approach can be easily and widely applied to the calculation of atomic charges for large molecules, and quite good atomic charges are obtained for the illustrative molecules, as compared with those from ab initio STO-3G SCF calculations.
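A minimal sketch of the electronegativity-equalization idea, for the simplest case of a neutral diatomic molecule; the atomic electronegativity and hardness parameters below are illustrative assumptions, not the scales derived in the paper:

```python
# Electronegativity equalization for a neutral diatomic A-B (sketch only;
# the chi and eta values used in the example are illustrative).
# With charges q on A and -q on B, the in-molecule electronegativities
# chi_i + 2*eta_i*q_i are set equal on both atoms:
#   chi_A + 2*eta_A*q = chi_B - 2*eta_B*q
# which gives q = (chi_B - chi_A) / (2 * (eta_A + eta_B)).

def eem_diatomic(chi_a, eta_a, chi_b, eta_b):
    """Charge on atom A (atom B carries -q) after equalization."""
    return (chi_b - chi_a) / (2.0 * (eta_a + eta_b))

# Example: A is less electronegative than B, so A ends up positive.
q = eem_diatomic(chi_a=2.2, eta_a=6.4, chi_b=3.0, eta_b=7.0)
print(round(q, 4))
# Sanity check: the equalized electronegativities match on both atoms.
assert abs((2.2 + 2 * 6.4 * q) - (3.0 - 2 * 7.0 * q)) < 1e-12
```

For a polyatomic molecule the same condition, together with total-charge conservation, becomes a small linear system in the atomic charges, which is why the scheme scales easily to large molecules.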
On the Concurrence and Conflict of the Crime of Whoring with Underage Girls (the Theory of Behavior Concurrence)
安柯颖
2012-01-01
Because the act is not sufficiently independent and its statutory penalty can be disproportionately heavy or light, the rationality of the offence of whoring with girls under the age of 14 has been questioned. According to the theory of behavior concurrence, whoring with a girl under the age of 14 is also a form of sexual intercourse with an underage girl. On the one hand, treating the aggravated form of this act as the consequentially aggravated offence of rape under Article 236, paragraph 3 of the Criminal Law does not violate the principle of legality, and better satisfies the requirement of proportionality between crime and punishment. On the other hand, the rule that a concurrence of provisions is punished under the heavier provision must itself yield to the requirement of proportionality between crime and punishment, and cannot be applied as an absolute rule.
Noguchi, Yoshifumi [Department of Physics, Graduate School of Engineering, Yokohama National University, 79-5 Tokiwadai, Hodogaya-ku, Yokohama 240-8501 (Japan); Computational Materials Science Center, National Institute for Materials Science, 1-2-1 Sengen, Tsukuba, Ibaraki 305-0047 (Japan)], E-mail: NOGUCHI.Yoshifumi@nims.go.jp; Ishii, Soh; Ohno, Kaoru [Department of Physics, Graduate School of Engineering, Yokohama National University, 79-5 Tokiwadai, Hodogaya-ku, Yokohama 240-8501 (Japan)
2007-05-15
Short-range electron correlation plays a very important role in small systems and significantly affects the double ionization energy (DIE) spectra and the two-electron distribution functions of a CO molecule, for example. In our calculations, the local density approximation (LDA) of the density functional theory is chosen as a starting point, the GW approximation (GWA) is performed in a next step, and finally the Bethe-Salpeter equation for the T-matrix, describing the particle-particle ladder diagrams up to the infinite order, is solved via the eigenvalue problem. The calculated DIE spectra, which are directly given by the eigenvalues, reflect the short-range electron correlation and are in good agreement with the experiment. We confirm that the Coulomb hole appears in the two-electron distribution function constructed from the eigenfunction.
Marwaha, Sugandha; Goswami, Mousumi; Vashist, Binny
2017-08-01
Cognitive development is a major area of human development and was extensively studied by Jean Piaget. He proposed that the development of intellectual abilities occurs in a series of relatively distinct stages and that a child's way of thinking and viewing the world is different at different stages. The aim was to assess Piaget's principles of the intuitive stage of the preoperational period among 4-7-year-old children relative to their intelligence quotient (IQ). Various characteristics described by Jean Piaget as specific to the 4-7-year age group, along with those of the preceding (preconceptual stage of the preoperational period) and succeeding (concrete operations) periods, were analysed using various experiments in 300 children. These characteristics included the concepts of perceptual and cognitive egocentrism, centration and reversibility. The IQ of the children was measured using the Seguin form board test. Inferential statistics were performed using the Chi-square test and the Kruskal-Wallis test. The level of statistical significance was set at 0.05. The prevalence of perceptual and cognitive egocentrism was 10.7% and 31.7%, respectively, based on the experiments, and 33% based on the interview question. Centration was present in 96.3% of the children. About 99% of the children lacked the concept of reversibility according to the clay experiment, while 97.7% possessed this concept according to the interview question. The mean IQ score of children who possessed perceptual egocentrism, cognitive egocentrism and egocentrism in the dental setting was significantly higher than that of those who lacked these characteristics. Perceptual egocentrism had almost disappeared, and the prevalence of cognitive egocentrism decreased with increasing age. Centration and lack of reversibility were appreciated in most of the children. There was a gradual reduction in the prevalence of these characteristics with increasing age. The mean IQ score of children who possessed perceptual egocentrism, cognitive egocentrism and egocentrism in the dental setting was
Zhang, Yun; Wang, Zhe [Department of Physics, Xiangtan University, Xiangtan, 411105 Hunan (China); Cao, Juexian, E-mail: jxcao@xtu.edu.cn [Department of Physics, Xiangtan University, Xiangtan, 411105 Hunan (China); Beijing Computational Science Research Center, 100084 Beijing (China)
2014-11-15
Using the first-principles full-potential linearized augmented plane-wave method, we investigated the stability, elastic and magnetostrictive properties of γ-Fe{sub 4}C and its derivatives. From the formation energy, we show that the most preferable configuration for MFe{sub 3}C (M=Pd, Pt, Rh, Ir) is the one in which the M atom occupies the corner 1a position rather than the 3c position. These derivatives are ductile, owing to high B/G values, except for IrFe{sub 3}C. The calculated tetragonal magnetostrictive coefficient λ{sub 001} for γ-Fe{sub 4}C is −380 ppm, larger in magnitude than the value for Fe{sub 83}Ga{sub 17} (+207 ppm). Due to the strong spin-orbit coupling (SOC) strength constant (ξ) of Pt, the calculated λ{sub 001} of PtFe{sub 3}C is −691 ppm, an increase of 80% over that of γ-Fe{sub 4}C. We explain the origin of the giant magnetostriction coefficient in terms of the electronic structures and their responses to the tetragonal lattice distortion. - Highlights: • The most preferable site for the M atom of MFe{sub 3}C (M=Pd, Pt, Rh, Ir) is the corner position. • The magnetostrictive coefficient for γ-Fe{sub 4}C is −380 ppm, larger than the value of Fe{sub 83}Ga{sub 17}. • The calculated λ{sub 001} of PtFe{sub 3}C is −691 ppm, an increase of 80% over that of γ-Fe{sub 4}C.
Hill, Rodney
2013-01-01
Principles of Dynamics presents classical dynamics primarily as an exemplar of scientific theory and method. This book is divided into three major parts concerned with the gravitational theory of planetary systems; general principles of the foundations of mechanics; and the general motion of a rigid body. Some of the specific topics covered are the Keplerian laws of planetary motion; gravitational potential and potential energy; and the fields of axisymmetric bodies. The principles of work and energy, fictitious body-forces, and inertial mass are also examined. Other specific topics examined are kinematics
CHEN, Wen-Kai; LIU, Shu-Hong; CAO, Mei-Juan; LU, Chun-Hai; XU, Ying; LI, Jun-Qian
2006-01-01
Adsorption of methanol and methoxy at four selected sites (top, bridge, hcp, fcc) on the Cu(111) surface has been investigated by the density functional theory method at the generalized gradient approximation (GGA) level. Calculations of the adsorption energies, geometries, electronic structures, Mulliken charges, and vibrational frequencies of CH3OH and CH3O on the clean Cu(111) surface were performed with full geometry optimization, and the obtained results are in agreement with the available experimental data. The most favorable adsorption site for methanol on the Cu(111) surface is the top site, where the C-O axis is tilted with respect to the surface. The preferred adsorption site for methoxy is the fcc site, where it adsorbs in an upright geometry with pseudo-C3v local symmetry. Possible decomposition pathways have also been investigated by transition-state searching methods; the methoxy radical, CH3O, was found to be the decomposition intermediate. Methanol can be adsorbed on the surface with its oxygen atom directly on a Cu atom, and is weakly chemisorbed on Cu(111). In contrast to methanol, methoxy is strongly chemisorbed to the surface.
Powell, B J
2015-01-01
We review theories of phosphorescence in cyclometalated complexes. We focus primarily on pseudooctahedrally coordinated $t_{2g}^6$ metals (e.g., [Os(II)(bpy)$_3$]$^{2+}$, Ir(III)(ppy)$_3$ and Ir(III)(ptz)$_3$) as, for reasons that are explored in detail, these show particularly strong phosphorescence. We discuss both first-principles approaches and semi-empirical models, e.g., ligand field theory. We show that together these provide a clear understanding of the photophysics and, in particular, of the lowest energy triplet excitation, T$_1$. In order to build a good model, relativistic effects need to be included. The role of spin-orbit coupling is well known, but scalar relativistic effects are also large - and are therefore also introduced and discussed. No expertise in special relativity or relativistic quantum mechanics is assumed, and a pedagogical introduction to these subjects is given. It is shown that, once both scalar relativistic effects and spin-orbit coupling are included, time dependent density function...
Whole Language Approach: Theories, Methods, Principles, Advantages and Drawbacks
戴娜莲
2012-01-01
Whole language approach is a teaching mode that combines language with language-teaching theory and practice. It emphasizes that language teaching should proceed from the whole to the parts, that classroom activities be student-centered, that the teaching syllabus meet students' practical needs, that teaching activities have realistic significance, that students' four language skills (listening, speaking, reading and writing) develop simultaneously, and that the native tongue be used to organize classroom teaching. By means of a literature review, this paper clarifies the basis of the whole language approach and summarizes its theories, methods, principles, advantages and drawbacks.
Generalized Uncertainty Principle: Approaches and Applications
Tawfik, Abdel Nasser
2014-01-01
We review some highlights from string theory, black hole physics and doubly special relativity, and some thought experiments which were suggested to probe the shortest distances and/or maximum momentum at the Planck scale. Furthermore, all models developed in order to implement the minimal length scale and/or the maximum momentum in different physical systems are analysed and compared. They entered the literature as the Generalized Uncertainty Principle (GUP), assuming modified dispersion relations, and therefore allow a wide range of applications in estimating, for example, the inflationary parameters, Lorentz invariance violation, black hole thermodynamics, the Salecker-Wigner inequalities, the entropic nature of gravitational laws, the Friedmann equations, minimal time measurement and the thermodynamics of high-energy collisions. One of the higher-order GUP approaches gives predictions for the minimal length uncertainty. A second one predicts a maximum momentum and a minimal length unc...
Optical and terahertz spectra analysis by the maximum entropy method.
Vartiainen, Erik M; Peiponen, Kai-Erik
2013-06-01
Phase retrieval is one of the classical problems in various fields of physics, including x-ray crystallography, astronomy and spectroscopy. It arises when only an amplitude measurement of an electric field can be made, while both the amplitude and phase of the field are needed for obtaining the desired material properties. In optical and terahertz spectroscopies, in particular, phase retrieval is a one-dimensional problem, which is considered unsolvable in general. Nevertheless, an approach utilizing the maximum entropy principle has proven to be a feasible tool in applications of optical spectroscopy, both linear and nonlinear, as well as of terahertz spectroscopy, where the one-dimensional phase retrieval problem arises. In this review, we focus on phase retrieval using the maximum entropy method in various spectroscopic applications. We review the theory behind the method and illustrate through examples why and how the method works, as well as discuss its limitations.
Research on Principles for Features Matching Based on Spectral Graph Theory
于志鹏; 李晓明
2012-01-01
Image matching is an important research area in computer vision, widely applied in industry, agriculture, object recognition, remote sensing, biomedicine, military affairs and many other fields. Recently, image-matching algorithms based on spectral graph theory have attracted growing interest. These algorithms process image features directly, converting highly complex classical algorithms into the simple solution of a combinatorial (discrete) spectral problem, which effectively reduces algorithmic complexity. This paper presents a systematic exploration of feature-matching algorithms based on spectral graph theory and, with the help of mathematical tools, studies the principles and essence of the spectral decomposition used in the different matching algorithms.
Bulmer, M G
1979-01-01
There are many textbooks which describe current methods of statistical analysis, while neglecting related theory. There are equally many advanced textbooks which delve into the far reaches of statistical theory, while bypassing practical applications. But between these two approaches is an unfilled gap, in which theory and practice merge at an intermediate level. Professor M. G. Bulmer's Principles of Statistics, originally published in 1965, was created to fill that need. The new, corrected Dover edition of Principles of Statistics makes this invaluable mid-level text available once again fo
Maximum Autocorrelation Factorial Kriging
Nielsen, Allan Aasbjerg; Conradsen, Knut; Pedersen, John L.
2000-01-01
This paper describes maximum autocorrelation factor (MAF) analysis, maximum autocorrelation factorial kriging, and its application to irregularly sampled stream sediment geochemical data from South Greenland. Kriged MAF images are compared with kriged images of varimax rotated factors from...
Fuzzy tracking algorithm with feedback based on maximum entropy principle
刘智; 陈丰; 黄继平
2012-01-01
Aiming at the high computational overhead and poor sensor extensibility of matrix-weighted fusion algorithms, this paper proposes a multisensor fusion algorithm with feedback based on fuzzy C-means (FCM) clustering and the maximum entropy principle (MEP). The algorithm combines FCM and MEP to calculate the weight of each component of the state vector, considering every component's influence on the fused estimate as a whole while avoiding complex matrix operations, and therefore performs well in real time. Compared with matrix-weighted algorithms, the fusion algorithm is also easy to extend and can be applied directly to tracking systems comprising more than two sensors. Simulation results show that the accuracy of the fused estimate is essentially consistent with that of matrix-weighted fusion methods, verifying the effectiveness of the algorithm.
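The maximum entropy step used for the weights above reduces, in its simplest form, to choosing the least-biased distribution consistent with known moments. A minimal sketch of that core computation (illustrative state space and mean, not the paper's fusion weights), assuming a single mean constraint: among all distributions over the states, entropy is maximized by the Gibbs form p_i ∝ exp(-λ x_i), with λ fixed by the constraint.

```python
import numpy as np
from scipy.optimize import brentq

# Maximum entropy principle, minimal form: among all distributions p over
# states x with prescribed mean m, -sum(p log p) is maximized by the Gibbs
# family p_i ∝ exp(-lam * x_i). The multiplier lam is fixed by the mean
# constraint. States and mean below are illustrative placeholders.
x = np.arange(1, 7)   # six hypothetical states
m = 2.5               # prescribed mean (uniform mean would be 3.5)

def mean_of(lam):
    w = np.exp(-lam * x)
    p = w / w.sum()
    return p @ x

# mean_of is monotone decreasing in lam, so a simple bracketed root-find works
lam = brentq(lambda l: mean_of(l) - m, -40.0, 40.0)
p = np.exp(-lam * x)
p /= p.sum()
```

Since the prescribed mean 2.5 is below the uniform mean 3.5, the solved multiplier is positive, tilting probability toward the small states.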
Theory of Principles and the Dispute over the Concept of Law
雷磊
2012-01-01
The core issue in the jurisprudential dispute over the concept of law is whether there is a necessary conceptual connection between law and morality; in other words, whether there is a necessary connection between legal validity and moral correctness. In order to justify the connection thesis, Robert Alexy advances the argument from principles on the basis of his early theory of principles, comprising the incorporation thesis, the morality thesis and the correctness thesis. This article examines the appropriateness of these three theses and their relevance to the connection thesis step by step, and then argues that the argument from principles is not an appropriate tool to justify the connection thesis. This does not imply the unavoidable failure of the connection thesis, however, since other ways of justification remain possible. The dispute over the concept of law is in fact a dispute over the criteria of legal validity, and ultimately a dispute in political philosophy.
Lek, E; Fairclough, D V; Hall, N G; Hesp, S A; Potter, I C
2012-11-01
The size and age data and patterns of growth of three abundant, reef-dwelling and protogynous labrid species (Coris auricularis, Notolabrus parilus and Ophthalmolepis lineolata) in waters off Perth at c. 32° S and in the warmer waters of the Jurien Bay Marine Park (JBMP) at c. 30° S on the lower west coast of Australia are compared. Using data for the top 10% of values and a randomization procedure, the maximum total length (L(T)) and mass of each species and the maximum age of the first two species were estimated to be significantly greater off Perth than in the JBMP (all P < 0.05). These latitudinal trends, thus, typically conform to those frequently exhibited by fish species and the predictions of the metabolic theory of ecology (MTE). While, in terms of mass, the instantaneous growth rates of each species were similar at both latitudes during early life, they were greater at the higher latitude throughout the remainder and thus much of life, which is broadly consistent with the MTE. When expressed in terms of L(T), however, instantaneous growth rates did not exhibit consistent latitudinal trends across all three species. The above trends with mass, together with those for reproductive variables, demonstrate that a greater amount of energy is directed into somatic growth and gonadal development by each of these species at the higher latitude. The consistency of the direction of the latitudinal trends for maximum body size and age and pattern of growth across all three species implies that each species is responding in a similar manner to differences between the environmental characteristics, such as temperature, at those two latitudes. The individual maximum L(T), mass and age and pattern of growth of O. lineolata at a higher and thus cooler latitude on the eastern Australian coast are consistent with the latitudinal trends exhibited by those characteristics for this species in the two western Australian localities. The implications of using mass rather than
Remizov, Ivan D
2009-01-01
In this note, we represent the subdifferential of the maximum functional defined on the space of all real-valued continuous functions on a given metric compact set. For a given argument $f$, it coincides with the set of all probability measures on the set of points maximizing $f$ on the initial compact set. This complete characterization lies at the heart of several important identities in microeconomics, such as Roy's identity and Shephard's lemma, as well as duality theory in production and linear programming.
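The characterization above (subgradients of the max functional are probability measures supported on the maximizers) has a concrete consequence that can be checked numerically: the one-sided directional derivative of F(f) = max f in a direction g equals the maximum of g over the argmax set of f. A hypothetical finite-grid check, with a discrete grid standing in for the compact set (all values illustrative):

```python
import numpy as np

# F(f) = max_x f(x) on a grid, a discrete stand-in for C(K) on a compact K.
# The subdifferential result implies F'(f; g) = max{ g(x) : x maximizes f },
# i.e. the support functional of probability measures on the argmax set.
x = np.linspace(0.0, 1.0, 1001)

# A function with two global maximizers, at x = 0.25 and x = 0.75
f = np.maximum(1.0 - 4.0 * np.abs(x - 0.25), 1.0 - 4.0 * np.abs(x - 0.75))
g = x  # direction of perturbation

F = lambda h: h.max()

t = 1e-6  # small step for the one-sided finite difference
fd = (F(f + t * g) - F(f)) / t

argmax_pts = x[np.isclose(f, f.max())]
predicted = g[np.isclose(f, f.max())].max()  # derivative sees only maximizers
```

Here the finite-difference value `fd` matches `predicted`: perturbing by g only moves the maximum through the larger of g's values on the two maximizers, exactly as the subdifferential formula dictates.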
Osorio-Guillén, J. M.; Espinosa-García, W. F.; Moyses Araujo, C.
2015-09-01
First-principles quasi-particle theory has been employed to assess the catalytic power of graphitic carbon nitride, g-C3N4, for solar fuel production. A comparative study between g-h-triazine and g-h-heptazine has been carried out, taking into account van der Waals dispersive forces. The band edge potentials have been calculated using a recently developed approach where quasi-particle effects are taken into account through the GW approximation. First, it was found that the description of ground-state properties such as cohesive and surface formation energies requires the proper treatment of dispersive interactions. Furthermore, through the analysis of the calculated band-edge potentials, it is shown that g-h-triazine has high reductive power, reaching the potential to reduce CO2 to formic acid; coplanar g-h-heptazine displays the highest thermodynamic driving force toward the H2O/O2 oxidation reaction; and corrugated g-h-heptazine exhibits a good capacity for both reactions. This rigorous theoretical study shows a route to further improve the catalytic performance of g-C3N4.
Adriano Bressane
2010-01-01
Full Text Available Many studies on noise pollution are reported in the literature, with a wide variety of emphases and approaches. However, such publications do not make explicit the fundamentals of acoustic theory needed for a more appropriate characterization of the sound phenomenon. This paper aims to provide an alternative source of consultation for those faced with fragmented and excessive information, which is at times an obstacle to understanding the main physical aspects of noise pollution. The main concepts concerning sound properties, principles of sound propagation and related sound phenomena are presented and discussed.
Kabita, Kh; Maibam, Jameson; Indrajit Sharma, B.; Brojen Singh, R. K.; Thapa, R. K.
2016-01-01
We report first-principles calculations of the phase transition, elastic properties and electronic structure of cadmium telluride (CdTe) under induced pressure in the framework of density functional theory, using the local density approximation (LDA), the generalised gradient approximation (GGA) and the modified Becke-Johnson (mBJ) potential. The structural phase transition of CdTe from the zinc blende (ZB) to the rock salt (RS) structure occurs at 2.2 GPa within the LDA calculation and at 4 GPa within GGA, with a volume collapse of 20.9%. The elastic constants and derived parameters (Zener anisotropy factor, shear modulus, Poisson's ratio, Young's modulus, Kleinman parameter and Debye temperature) of CdTe at different pressures in both phases have been calculated. The band diagram of the CdTe ZB structure shows a direct band gap of 1.46 eV as predicted by the mBJ calculation, which gives results in closer agreement with experiment than LDA and GGA. An increase in the band gap of the CdTe ZB phase is predicted under induced pressure, while the metallic nature is retained in the CdTe RS phase.
Principles of Fourier analysis
Howell, Kenneth B
2001-01-01
Fourier analysis is one of the most useful and widely employed sets of tools for the engineer, the scientist, and the applied mathematician. As such, students and practitioners in these disciplines need a practical and mathematically solid introduction to its principles. They need straightforward verifications of its results and formulas, and they need clear indications of the limitations of those results and formulas. Principles of Fourier Analysis furnishes all this and more. It provides a comprehensive overview of the mathematical theory of Fourier analysis, including the development of Fourier series, "classical" Fourier transforms, generalized Fourier transforms and analysis, and the discrete theory. Much of the author's development is strikingly different from typical presentations. His approach to defining the classical Fourier transform results in a much cleaner, more coherent theory that leads naturally to a starting point for the generalized theory. He also introduces a new generalized theory based ...
Physical Premium Principle: A New Way for Insurance Pricing
Darooneh, Amir H.
2005-03-01
In our previous work we suggested a way of computing the non-life insurance premium. The probable surplus of the insurer company is assumed to be distributed according to canonical ensemble theory, and the Esscher premium principle appears as a special case. The difference between our method and traditional principles for premium calculation was shown by simulation. Here we construct a theoretical foundation for the main assumption in our method; in this respect, we present a new (physical) definition of economic equilibrium. This approach lets us apply the maximum entropy principle to economic systems. We also extend our method to deal with the problem of premium calculation for correlated risk categories. Like the Bühlmann economic premium principle, our method considers the effect of the market on the premium, but in a different way.
Physical Premium Principle: A New Way for Insurance Pricing
Amir H. Darooneh
2005-02-01
Full Text Available Abstract: In our previous work we suggested a way of computing the non-life insurance premium. The probable surplus of the insurer company is assumed to be distributed according to canonical ensemble theory, and the Esscher premium principle appears as a special case. The difference between our method and traditional principles for premium calculation was shown by simulation. Here we construct a theoretical foundation for the main assumption in our method; in this respect, we present a new (physical) definition of economic equilibrium. This approach lets us apply the maximum entropy principle to economic systems. We also extend our method to deal with the problem of premium calculation for correlated risk categories. Like the Bühlmann economic premium principle, our method considers the effect of the market on the premium, but in a different way.
Developing principles of growth
Neergaard, Helle; Fleck, Emma
of the principles of growth among women-owned firms. Using an in-depth case study methodology, data was collected from women-owned firms in Denmark and Ireland, as these countries are similar in contextual terms, e.g. population and business composition, dominated by micro, small and medium-sized enterprises. Extending principles put forward in effectuation theory, we propose that women grow their firms according to five principles which enable women's enterprises to survive in the face of crises such as the current financial world crisis.
Turner, Charles K.
2017-01-01
The mainstream theories and models of the physical sciences, including neuroscience, are all consistent with the principle of causality. Wholly causal explanations make sense of how things go, but are inherently value-neutral, providing no objective basis for true beliefs being better than false beliefs, nor for it being better to intend wisely than foolishly. Dennett (1987) makes a related point in calling the brain a syntactic (procedure-based) engine. He says that you cannot get to a semantic (meaning-based) engine from there. He suggests that folk psychology revolves around an intentional stance that is independent of the causal theories of the brain, and accounts for constructs such as meanings, agency, true belief, and wise desire. Dennett proposes that the intentional stance is so powerful that it can be developed into a valid intentional theory. This article expands Dennett’s model into a principle of intentionality that revolves around the construct of objective wisdom. This principle provides a structure that can account for all mental processes, and for the scientific understanding of objective value. It is suggested that science can develop a far more complete worldview with a combination of the principles of causality and intentionality than would be possible with scientific theories that are consistent with the principle of causality alone. PMID:28223954
Simenel, Cédric; Golabek, Cédric; Kedziora, David J.
2011-10-01
Collisions of actinide nuclei form, during very short times of a few zs (10^-21 s), the heaviest ensembles of interacting nucleons available on Earth. Such collisions are used to produce super-strong electric fields from the huge number of interacting protons, to test the spontaneous positron-electron pair emission (vacuum decay) predicted by quantum electrodynamics (QED). Multi-nucleon transfer in actinide collisions could also be used as an alternative to fusion for producing neutron-rich heavy and superheavy elements via inverse quasifission mechanisms. Actinide collisions are studied here in a dynamical quantum microscopic approach. The three-dimensional time-dependent Hartree-Fock (TDHF) code tdhf3d is used with a full Skyrme energy density functional to investigate the time evolution of expectation values of one-body operators, such as fragment position and particle number. This code is also used to compute the dispersion of particle numbers (e.g., the widths of fragment mass and charge distributions), from TDHF transfer probabilities on the one hand, and using the Balian-Veneroni variational principle on the other. A first application to test QED is discussed. Collision times in 238U+238U are computed to determine the optimum energy for observation of the vacuum decay. It is shown that the initial orientation strongly affects the collision times and reaction mechanism. The highest collision times predicted by TDHF in this reaction are of the order of ~4 zs at a center-of-mass energy of 1200 MeV. According to modern calculations based on the Dirac equation, the collision times at Ecm > 1 GeV are sufficient to allow spontaneous electron-positron pair emission from QED vacuum decay in the case of bare uranium ion collisions. A second application of actinide collisions, to produce neutron-rich transfermiums, is discussed. A new inverse quasifission mechanism associated with a specific orientation of the nuclei is proposed to produce transfermium
Alternative Multiview Maximum Entropy Discrimination.
Chao, Guoqing; Sun, Shiliang
2016-07-01
Maximum entropy discrimination (MED) is a general framework for discriminative estimation based on the maximum entropy and maximum margin principles, and can produce hard-margin support vector machines under some assumptions. Recently, the multiview version of MED, multiview MED (MVMED), was proposed. In this paper, we explore a more natural MVMED framework by assuming two separate distributions p1(Θ1) over the first-view classifier parameter Θ1 and p2(Θ2) over the second-view classifier parameter Θ2. We name the new framework alternative MVMED (AMVMED); it enforces the posteriors of the two view margins to be equal. The proposed AMVMED is more flexible than the existing MVMED because, compared with MVMED, which optimizes one relative entropy, AMVMED assigns one relative entropy term to each of the two views, thus incorporating a tradeoff between the two views. We give the detailed solving procedure, which can be divided into two steps: the first step solves the optimization problem without considering the equal margin posteriors from the two views, and the second step then enforces the equal posteriors. Experimental results on multiple real-world data sets verify the effectiveness of AMVMED, and comparisons with MVMED are also reported.
Yu, Ching-Feng [Department of Power Mechanical Engineering, National Tsing Hua University, Hsinchu 30013, Taiwan (China); Cheng, Hsien-Chie, E-mail: hccheng@fcu.edu.tw [Department of Aerospace and Systems Engineering, Feng Chia University, Taichung 40724, Taiwan (China); Chen, Wen-Hwa, E-mail: whchen@pme.nthu.edu.tw [Department of Power Mechanical Engineering, National Tsing Hua University, Hsinchu 30013, Taiwan (China)
2015-01-15
Highlights: • The mechanical and thermodynamic properties of AuIn{sub 2} are reported for the first time. • The calculated lattice constants and elastic properties of AuIn{sub 2} are consistent with the literature data. • The results reveal that AuIn{sub 2} demonstrates low elastic anisotropy, low hardness and high ductility. • It is worth noting that the anisotropic AuIn{sub 2} tends to become elastically isotropic as hydrostatic pressure increases. - Abstract: The structural, mechanical and thermodynamic properties of cubic AuIn{sub 2} crystal in the cubic fluorite structure, and also their temperature, hydrostatic pressure and direction dependences are investigated using first-principles calculations based on density functional theory (DFT) within the generalized gradient approximation (GGA). The optimized lattice constants of AuIn{sub 2} single crystal are first evaluated, by which its hydrostatic pressure-dependent elastic constants are also derived. Then, the hydrostatic pressure-dependent mechanical characteristics of the single crystal, including ductile/brittle behavior and elastic anisotropy, are explored according to the characterized angular character of atomic bonding, Zener anisotropy factor and directional Young’s modulus. Moreover, the polycrystalline elastic properties of AuIn{sub 2}, such as bulk modulus, shear modulus and Young’s modulus, and its ductile/brittle and microhardness characteristics are assessed versus hydrostatic pressure. Finally, the temperature-dependent Debye temperature and heat capacity of AuIn{sub 2} single crystal are investigated by quasi-harmonic Debye modeling. The present results reveal that AuIn{sub 2} crystal demonstrates low elastic anisotropy, low hardness and high ductility. Furthermore, its heat capacity strictly follows the Debye T{sup 3}-law at temperatures below the Debye temperature, and reaches the Dulong–Petit limit at temperatures far above the Debye temperature.
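The two limits quoted above (the Debye T^3 law at low temperature and the Dulong-Petit limit 3R at high temperature) follow directly from the Debye-model heat capacity integral and can be checked numerically. A sketch under assumed values; the Debye temperature below is an illustrative placeholder, not the value computed for AuIn2 in the paper:

```python
import numpy as np
from scipy.integrate import quad

R = 8.314462618  # molar gas constant, J/(mol K)

def debye_cv(T, theta_D):
    """Molar heat capacity per mole of atoms in the Debye model:
    C_v = 9 R (T/theta_D)^3 * integral_0^{theta_D/T} x^4 e^x / (e^x - 1)^2 dx
    """
    if T <= 0.0:
        return 0.0
    def integrand(x):
        # x^4 e^x / (e^x - 1)^2 -> x^2 as x -> 0, so guard the origin
        return x**4 * np.exp(x) / np.expm1(x)**2 if x > 0.0 else 0.0
    integral, _ = quad(integrand, 0.0, theta_D / T)
    return 9.0 * R * (T / theta_D)**3 * integral

theta = 180.0  # K; placeholder Debye temperature (NOT a value from the paper)

# Low-T regime: C_v ~ T^3, so doubling T should multiply C_v by ~8
low_ratio = debye_cv(4.0, theta) / debye_cv(2.0, theta)

# High-T regime: C_v approaches the Dulong-Petit limit 3R
high = debye_cv(2000.0, theta)
```

At T well below theta the ratio comes out very close to 8 (the T^3 law), and at T well above theta the heat capacity saturates near 3R ≈ 24.94 J/(mol K), which is the behavior the abstract describes.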
Zhao, Li-Juan; Tian, Wen-Juan; Ou, Ting; Xu, Hong-Guang; Feng, Gang; Xu, Xi-Ling; Zhai, Hua-Jin; Li, Si-Dian; Zheng, Wei-Jun
2016-03-28
We present a combined photoelectron spectroscopy and first-principles theory study on the structural and electronic properties and chemical bonding of B3O3(-/0) and B3O3H(-/0) clusters. The concerted experimental and theoretical data show that the global-minimum structures of B3O3 and B3O3H neutrals are very different from those of their anionic counterparts. The B3O3(-) anion is characterized to possess a V-shaped OB-B-BO chain with overall C2v symmetry (1A), in which the central B atom interacts with two equivalent boronyl (B≡O) terminals via B-B single bonds as well as with one O atom via a B=O double bond. The B3O3H(-) anion has a Cs (2A) structure, containing an asymmetric OB-B-OBO zig-zag chain and a terminal H atom interacting with the central B atom. In contrast, the C2v (1a) global minimum of B3O3 neutral contains a rhombic B2O2 ring with one B atom bonded to a BO terminal, and that of neutral B3O3H (2a) is also of C2v symmetry, being readily constructed from C2v (1a) by attaching a H atom to the opposite side of the BO group. The H atom in B3O3H(-/0) (2A and 2a) prefers to interact terminally with a B atom, rather than with O. Chemical bonding analyses reveal a three-center four-electron (3c-4e) π hyperbond in the B3O3H(-) (2A) cluster and a four-center four-electron (4c-4e) π bond (that is, the so-called o-bond) in B3O3 (1a) and B3O3H (2a) neutral clusters.
Maximum entropy production in environmental and ecological systems.
Kleidon, Axel; Malhi, Yadvinder; Cox, Peter M
2010-05-12
The coupled biosphere-atmosphere system entails a vast range of processes at different scales, from ecosystem exchange fluxes of energy, water and carbon to the processes that drive global biogeochemical cycles, atmospheric composition and, ultimately, the planetary energy balance. These processes are generally complex with numerous interactions and feedbacks, and they are irreversible in their nature, thereby producing entropy. The proposed principle of maximum entropy production (MEP), based on statistical mechanics and information theory, states that thermodynamic processes far from thermodynamic equilibrium will adapt to steady states at which they dissipate energy and produce entropy at the maximum possible rate. This issue focuses on the latest development of applications of MEP to the biosphere-atmosphere system including aspects of the atmospheric circulation, the role of clouds, hydrology, vegetation effects, ecosystem exchange of energy and mass, biogeochemical interactions and the Gaia hypothesis. The examples shown in this special issue demonstrate the potential of MEP to contribute to improved understanding and modelling of the biosphere and the wider Earth system, and also explore limitations and constraints to the application of the MEP principle.
Fábio Portela Lopes de Almeida
2008-12-01
Full Text Available The article discusses the nature of constitutional principles by contrasting two distinct hermeneutic theories: axiology and deontology. The axiological perspective is described through the theory of principles outlined by Robert Alexy in his Theory of Fundamental Rights, and is criticized for being unable to deal democratically with the fact of pluralism, i.e., with the circumstance that contemporary societies are not structured around ethical values shared intersubjectively by all citizens. As an alternative to this model, I suggest, drawing on the works of John Rawls, Ronald Dworkin and Jürgen Habermas, that the adoption of a deontological perspective, which assumes a strict distinction between principles and values, overcomes the difficulties of the axiological theory. By taking as its central premise the possibility of legitimating law through principles justified by criteria acceptable to all citizens, a deontological theory of principles becomes capable of dealing with the plurality of conceptions of the good present in contemporary societies. In this sense, the article belongs to the field of constitutional theory.
Gilleskie, Donna B.; Salemi, Michael K.
2012-01-01
In a typical economics principles course, students encounter a large number of concepts. In a literacy-targeted course, students study a "short list" of concepts that they can use for the rest of their lives. While a literacy-targeted principles course provides better education for nonmajors, it may place economics majors at a…
Maximum entropy principle for stationary states underpinned by stochastic thermodynamics.
Ford, Ian J
2015-11-01
The selection of an equilibrium state by maximizing the entropy of a system, subject to certain constraints, is often powerfully motivated as an exercise in logical inference, a procedure where conclusions are reached on the basis of incomplete information. But such a framework can be more compelling if it is underpinned by dynamical arguments, and we show how this can be provided by stochastic thermodynamics, where an explicit link is made between the production of entropy and the stochastic dynamics of a system coupled to an environment. The separation of entropy production into three components allows us to select a stationary state by maximizing the change, averaged over all realizations of the motion, in the principal relaxational or nonadiabatic component, equivalent to requiring that this contribution to the entropy production should become time independent for all realizations. We show that this recovers the usual equilibrium probability density function (pdf) for a conservative system in an isothermal environment, as well as the stationary nonequilibrium pdf for a particle confined to a potential under nonisothermal conditions, and a particle subject to a constant nonconservative force under isothermal conditions. The two remaining components of entropy production account for a recently discussed thermodynamic anomaly between over- and underdamped treatments of the dynamics in the nonisothermal stationary state.
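The entropy-maximization exercise this abstract refers to can be sketched in a discrete form (this is the generic inference procedure, not the paper's stochastic-thermodynamic derivation; the energy levels and mean-energy constraint below are hypothetical): maximizing the Gibbs entropy subject to normalization and a fixed mean energy yields the Boltzmann form p_i ∝ exp(-βE_i).

```python
import numpy as np
from scipy.optimize import minimize

# Maximise -sum p ln p over a discrete state space, subject to
# normalisation and a fixed mean energy.  The maximiser should take the
# Boltzmann form p_i ∝ exp(-beta*E_i), so ln p is affine in E.
E = np.array([0.0, 1.0, 2.0, 3.0])   # hypothetical energy levels
U = 1.2                              # imposed mean energy

def neg_entropy(p):
    return np.sum(p * np.log(p))

cons = [{"type": "eq", "fun": lambda p: p.sum() - 1.0},
        {"type": "eq", "fun": lambda p: p @ E - U}]
res = minimize(neg_entropy, np.full(4, 0.25),
               constraints=cons, bounds=[(1e-9, 1.0)] * 4)
p = res.x
# Boltzmann check: successive ratios p[i+1]/p[i] = exp(-beta) are constant
ratios = p[1:] / p[:-1]
print(p, ratios)
```

Equal successive ratios confirm that the constrained maximum-entropy solution is exponential in the energy, i.e., the equilibrium pdf the abstract says is recovered for a conservative system in an isothermal environment.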
Maximum Autocorrelation Factorial Kriging
Nielsen, Allan Aasbjerg; Conradsen, Knut; Pedersen, John L.; Steenfelt, Agnete
2000-01-01
This paper describes maximum autocorrelation factor (MAF) analysis, maximum autocorrelation factorial kriging, and its application to irregularly sampled stream sediment geochemical data from South Greenland. Kriged MAF images are compared with kriged images of varimax-rotated factors from an ordinary non-spatial factor analysis, and they are interpreted in a geological context. It is demonstrated that MAF analysis, contrary to ordinary non-spatial factor analysis, gives an objective discrimina...
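The MAF transform underlying this record can be sketched on a synthetic, regularly sampled 1-D transect (illustrative only; the paper works with irregularly sampled geochemical data): MAF factors are the linear combinations w minimizing the Rayleigh quotient w'S_d w / w'S w, where S is the data covariance and S_d the covariance of unit-lag differences, so a low quotient means high spatial autocorrelation.

```python
import numpy as np
from scipy.linalg import eigh

# Synthetic two-variable transect: a smooth (spatially autocorrelated)
# signal and white noise, each mixed slightly into the other variable.
rng = np.random.default_rng(0)
n = 500
smooth = np.cumsum(rng.normal(size=n))
smooth -= smooth.mean()
noise = rng.normal(size=n)
X = np.column_stack([smooth + 0.1 * noise, noise + 0.1 * smooth])
X -= X.mean(axis=0)

S = np.cov(X, rowvar=False)        # data covariance
D = np.diff(X, axis=0)             # unit-lag spatial differences
Sd = np.cov(D, rowvar=False)       # difference covariance

# Generalized eigenproblem Sd w = lambda S w; eigenvalues ascend, so the
# first eigenvector gives the most spatially autocorrelated factor.
vals, vecs = eigh(Sd, S)
maf1 = X @ vecs[:, 0]              # most autocorrelated factor
maf2 = X @ vecs[:, 1]              # least autocorrelated factor

def lag1_corr(s):
    return np.corrcoef(s[:-1], s[1:])[0, 1]

print(lag1_corr(maf1), lag1_corr(maf2))
```

The first factor recovers the smooth component (lag-1 autocorrelation near 1), while the last isolates the noise, which is the objective, spatially aware discrimination the abstract contrasts with ordinary non-spatial factor analysis.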
Principles of Bridge Reliability
Thoft-Christensen, Palle; Nowak, Andrzej S.
The paper gives a brief introduction to the basic principles of structural reliability theory and its application to bridge engineering. Fundamental concepts like failure probability and reliability index are introduced. Ultimate as well as serviceability limit states for bridges are formulated...