WorldWideScience

Sample records for maximum principle theory

  1. The Maximum Entropy Principle and the Modern Portfolio Theory

    Directory of Open Access Journals (Sweden)

    Ailton Cassetari

    2003-12-01

    Full Text Available In this work, a capital allocation methodology based on the Principle of Maximum Entropy is developed, with Shannon's entropy used as the measure. Its connections to the Modern Portfolio Theory are also discussed. In particular, the methodology is tested by a systematic comparison to: 1) the mean-variance (Markowitz) approach and 2) the mean-VaR approach (capital allocation based on the Value at Risk concept). These comparisons show the plausibility and effectiveness of the developed method.
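As a minimal numerical sketch of the idea (not the author's methodology), one can pick portfolio weights by maximizing Shannon entropy subject to a budget constraint and a target expected return; the asset returns and target below are made-up numbers.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical expected returns for four assets and a target return
# (illustrative numbers only, not from the paper).
mu = np.array([0.04, 0.06, 0.08, 0.10])
target = 0.08

def neg_entropy(w):
    return np.sum(w * np.log(w))          # -H(w), minimized below

cons = [
    {"type": "eq", "fun": lambda w: np.sum(w) - 1.0},  # fully invested
    {"type": "eq", "fun": lambda w: w @ mu - target},  # return target
]
res = minimize(neg_entropy, np.full(4, 0.25),
               bounds=[(1e-9, 1.0)] * 4, constraints=cons)
w = res.x   # maximum-entropy allocation, tilted toward higher-return assets
```

With only the budget constraint the maximizer would be the uniform allocation; the return constraint tilts the weights exponentially toward the higher-return assets.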

  2. On the Pontryagin maximum principle for systems with delays. Economic applications

    Science.gov (United States)

    Kim, A. V.; Kormyshev, V. M.; Kwon, O. B.; Mukhametshin, E. R.

    2017-11-01

    The Pontryagin maximum principle [6] is the keystone of finite-dimensional optimal control theory [1, 2, 5], so from its discovery onward it has been important to extend the maximum principle to various classes of dynamical systems. In the paper we consider some aspects of the application of i-smooth analysis [3, 4] in the theory of the Pontryagin maximum principle [6] for systems with delays; the results obtained can be applied in elaborating optimal program controls in economic models with delays.

  3. Weak scale from the maximum entropy principle

    Science.gov (United States)

    Hamada, Yuta; Kawai, Hikaru; Kawana, Kiyoharu

    2015-03-01

    The theory of the multiverse and wormholes suggests that the parameters of the Standard Model (SM) are fixed in such a way that the radiation of the S³ universe at the final stage, S_rad, becomes maximum, which we call the maximum entropy principle. Although it is difficult to confirm this principle generally, for a few parameters of the SM we can check whether S_rad actually becomes maximum at the observed values. In this paper, we regard S_rad at the final stage as a function of the weak scale (the Higgs expectation value) v_h, and show that it becomes maximum around v_h = O(300 GeV) when the dimensionless couplings in the SM, i.e., the Higgs self-coupling, the gauge couplings, and the Yukawa couplings, are fixed. Roughly speaking, we find that the weak scale is given by v_h ∼ T_BBN^2 / (M_pl y_e^5), where y_e is the Yukawa coupling of the electron, T_BBN is the temperature at which Big Bang nucleosynthesis starts, and M_pl is the Planck mass.
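The quoted order-of-magnitude formula can be checked with standard values; the inputs below are textbook estimates, not numbers taken from the paper.

```python
import math

# Order-of-magnitude check of v_h ~ T_BBN^2 / (M_pl * y_e^5).
# All quantities in GeV; inputs are standard estimates.
T_BBN = 1e-3            # ~1 MeV, onset of Big Bang nucleosynthesis
M_pl = 1.2e19           # Planck mass
m_e = 0.511e-3          # electron mass
v_obs = 246.0           # observed Higgs expectation value
y_e = math.sqrt(2) * m_e / v_obs   # electron Yukawa coupling, ~3e-6

v_h = T_BBN**2 / (M_pl * y_e**5)   # comes out at a few hundred GeV
```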

  4. On an Objective Basis for the Maximum Entropy Principle

    Directory of Open Access Journals (Sweden)

    David J. Miller

    2015-01-01

    Full Text Available In this letter, we elaborate on some of the issues raised by a recent paper by Neapolitan and Jiang concerning the maximum entropy (ME) principle and alternative principles for estimating probabilities consistent with known, measured constraint information. We argue that the ME solution for the “problematic” example introduced by Neapolitan and Jiang has a stronger objective basis, rooted in results from information theory, than their proposed alternative solution. We also raise some technical concerns about the Bayesian analysis in their work, which was used to independently support their alternative to the ME solution. The letter concludes by noting some open problems involving maximum entropy statistical inference.

  5. Maximum entropy principle and hydrodynamic models in statistical mechanics

    International Nuclear Information System (INIS)

    Trovato, M.; Reggiani, L.

    2012-01-01

    This review presents the state of the art of the maximum entropy principle (MEP) in its classical and quantum (QMEP) formulation. Within the classical MEP we overview a general theory able to provide, in a dynamical context, the macroscopic relevant variables for carrier transport in the presence of electric fields of arbitrary strength. For the macroscopic variables the linearized maximum entropy approach is developed including full-band effects within a total energy scheme. Under spatially homogeneous conditions, we construct a closed set of hydrodynamic equations for the small-signal (dynamic) response of the macroscopic variables. The coupling between the driving field and the energy dissipation is analyzed quantitatively by using an arbitrary number of moments of the distribution function. Analogously, the theoretical approach is applied to many one-dimensional n⁺nn⁺ submicron Si structures by using different band structure models, different doping profiles and different applied biases, and is validated by comparing numerical calculations with ensemble Monte Carlo simulations and with available experimental data. Within the quantum MEP we introduce a quantum entropy functional of the reduced density matrix, and the principle of quantum maximum entropy is then asserted as a fundamental principle of quantum statistical mechanics. Accordingly, we have developed a comprehensive theoretical formalism to rigorously construct a closed quantum hydrodynamic transport model within a Wigner function approach. The theory is formulated both in thermodynamic equilibrium and nonequilibrium conditions, and the quantum contributions are obtained by only assuming that the Lagrange multipliers can be expanded in powers of ħ², where ħ is the reduced Planck constant. In particular, by using an arbitrary number of moments, we prove that: (i) on a macroscopic scale all nonlocal effects, compatible with the uncertainty principle, are imputable to high-order spatial derivatives both of the

  6. Optimal control of a double integrator a primer on maximum principle

    CERN Document Server

    Locatelli, Arturo

    2017-01-01

    This book provides an introductory yet rigorous treatment of Pontryagin’s Maximum Principle and its application to optimal control problems when simple and complex constraints act on state and control variables, the two classes of variable in such problems. The achievements resulting from first-order variational methods are illustrated with reference to a large number of problems that, almost universally, relate to a particular second-order, linear and time-invariant dynamical system, referred to as the double integrator. The book is ideal for students who have some knowledge of the basics of system and control theory and possess the calculus background typically taught in undergraduate curricula in engineering. Optimal control theory, of which the Maximum Principle must be considered a cornerstone, has been very popular ever since the late 1950s. However, the possibly excessive initial enthusiasm engendered by its perceived capability to solve any kind of problem gave way to its equally unjustified rejecti...

  7. The maximum sizes of large scale structures in alternative theories of gravity

    Energy Technology Data Exchange (ETDEWEB)

    Bhattacharya, Sourav [IUCAA, Pune University Campus, Post Bag 4, Ganeshkhind, Pune, 411 007 India (India); Dialektopoulos, Konstantinos F. [Dipartimento di Fisica, Università di Napoli 'Federico II', Complesso Universitario di Monte S. Angelo, Edificio G, Via Cinthia, Napoli, I-80126 Italy (Italy); Romano, Antonio Enea [Instituto de Física, Universidad de Antioquia, Calle 70 No. 52–21, Medellín (Colombia); Skordis, Constantinos [Department of Physics, University of Cyprus, 1 Panepistimiou Street, Nicosia, 2109 Cyprus (Cyprus); Tomaras, Theodore N., E-mail: sbhatta@iitrpr.ac.in, E-mail: kdialekt@gmail.com, E-mail: aer@phys.ntu.edu.tw, E-mail: skordis@ucy.ac.cy, E-mail: tomaras@physics.uoc.gr [Institute of Theoretical and Computational Physics and Department of Physics, University of Crete, 70013 Heraklion (Greece)

    2017-07-01

    The maximum size of a cosmic structure is given by the maximum turnaround radius—the scale where the attraction due to its mass is balanced by the repulsion due to dark energy. We derive generic formulae for the estimation of the maximum turnaround radius in any theory of gravity obeying the Einstein equivalence principle, in two situations: on a spherically symmetric spacetime and on a perturbed Friedmann-Robertson-Walker spacetime. We show that the two formulae agree. As an application of our formula, we calculate the maximum turnaround radius in the case of the Brans-Dicke theory of gravity. We find that for this theory such maximum sizes always lie above the ΛCDM value, by a factor 1 + 1/(3ω), where ω ≫ 1 is the Brans-Dicke parameter, implying consistency of the theory with current data.
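A quick numerical check of the quoted enhancement factor 1 + 1/(3ω): plugging in a solar-system-scale bound of roughly ω ≳ 4×10⁴ (a standard constraint, assumed here for illustration) shows the deviation from ΛCDM is tiny.

```python
# Brans-Dicke enhancement of the maximum turnaround radius relative to
# LambdaCDM, per the factor 1 + 1/(3*omega) quoted in the abstract.
def turnaround_enhancement(omega):
    return 1.0 + 1.0 / (3.0 * omega)

# Illustrative solar-system-scale bound: omega ~ 4e4.
factor = turnaround_enhancement(4.0e4)   # just above 1
```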

  8. Maximum Principles for Discrete and Semidiscrete Reaction-Diffusion Equation

    Directory of Open Access Journals (Sweden)

    Petr Stehlík

    2015-01-01

    Full Text Available We study reaction-diffusion equations with a general reaction function f on one-dimensional lattices with continuous or discrete time, u_x' (or Δ_t u_x) = k(u_{x-1} - 2u_x + u_{x+1}) + f(u_x), x ∈ ℤ. We prove weak and strong maximum and minimum principles for the corresponding initial-boundary value problems. Whereas the maximum principles in the semidiscrete case (continuous time) exhibit similar features to those of the fully continuous reaction-diffusion model, in the discrete case the weak maximum principle holds for a smaller class of functions and the strong maximum principle is valid in a weaker sense. We describe in detail how the validity of the maximum principles depends on the nonlinearity and the time step. We illustrate our results on the Nagumo equation with the bistable nonlinearity.
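A small simulation illustrates the weak maximum principle on the lattice (all parameters here are made up, and explicit Euler time-stepping is used rather than the paper's setup): for a small enough time step, data starting in [0, 1] stays in [0, 1] under the Nagumo dynamics.

```python
import numpy as np

# Explicit time-stepping of the discrete Nagumo equation on a periodic
# lattice: u_x <- u_x + h*[ k*(u_{x-1} - 2*u_x + u_{x+1}) + f(u_x) ]
# with bistable f(u) = u*(1-u)*(u-a). Illustrative parameters only.
k, a, h = 1.0, 0.3, 0.1
f = lambda u: u * (1 - u) * (u - a)

rng = np.random.default_rng(0)
u = rng.uniform(0.0, 1.0, size=50)        # initial data in [0, 1]
for _ in range(200):
    lap = np.roll(u, 1) - 2 * u + np.roll(u, -1)   # lattice Laplacian
    u = u + h * (k * lap + f(u))
# u remains within the invariant interval [0, 1]
```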

  9. Dynamics of non-stationary processes that follow the maximum of the Rényi entropy principle.

    Science.gov (United States)

    Shalymov, Dmitry S; Fradkov, Alexander L

    2016-01-01

    We propose dynamics equations which describe the behaviour of non-stationary processes that follow the maximum Rényi entropy principle. The equations are derived on the basis of the speed-gradient principle originated in the control theory. The maximum of the Rényi entropy principle is analysed for discrete and continuous cases, and both a discrete random variable and probability density function (PDF) are used. We consider mass conservation and energy conservation constraints and demonstrate the uniqueness of the limit distribution and asymptotic convergence of the PDF for both cases. The coincidence of the limit distribution of the proposed equations with the Rényi distribution is examined.

  10. Analogue of Pontryagin's maximum principle for multiple integrals minimization problems

    OpenAIRE

    Mikhail, Zelikin

    2016-01-01

    A theorem analogous to Pontryagin's maximum principle is proved for multiple integrals. Unlike the usual maximum principle, the maximum should be taken not over all matrices, but only over matrices of rank one. Examples are given.

  11. Discrete maximum principle for the P1 - P0 weak Galerkin finite element approximations

    Science.gov (United States)

    Wang, Junping; Ye, Xiu; Zhai, Qilong; Zhang, Ran

    2018-06-01

    This paper presents two discrete maximum principles (DMP) for the numerical solution of second order elliptic equations arising from the weak Galerkin finite element method. The results are established by assuming an h-acute angle condition for the underlying finite element triangulations. The mathematical theory is based on the well-known De Giorgi technique adapted in the finite element context. Some numerical results are reported to validate the theory of DMP.
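A toy illustration of what a discrete maximum principle asserts, in the simplest possible setting rather than the paper's weak Galerkin method: for central finite differences on -u'' = 0, the system matrix is an M-matrix, so the discrete solution attains its extrema on the boundary.

```python
import numpy as np

# Central finite differences for -u'' = 0 on (0, 1) with Dirichlet data.
# The tridiagonal matrix A is an M-matrix, which yields the discrete
# maximum principle: interior values lie between the boundary values.
n = 49                                    # interior grid points
A = (np.diag(np.full(n, 2.0))
     - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1))
ua, ub = 3.0, 5.0                         # boundary values
b = np.zeros(n)
b[0], b[-1] = ua, ub                      # boundary data enters the rhs
u = np.linalg.solve(A, b)                 # discrete solution
```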

  12. MAXIMUM PRINCIPLE FOR SUBSONIC FLOW WITH VARIABLE ENTROPY

    Directory of Open Access Journals (Sweden)

    Grigory B. Sizykh

    2017-01-01

    Full Text Available The maximum principle for subsonic flow holds for stationary irrotational subsonic gas flows. According to this principle, if the value of the velocity is not constant everywhere, then its maximum is achieved on the boundary, and only on the boundary, of the considered domain. This property is used when designing the form of an aircraft with a maximum critical value of the Mach number: it is believed that if the local Mach number is less than unity in the incoming flow and on the body surface, then the Mach number is less than unity at all points of the flow. The known proof of the maximum principle for subsonic flow is based on the assumption that in the whole considered flow region the pressure is a function of density. For an ideal perfect gas (the role of diffusion is negligible and the Mendeleev-Clapeyron law holds), the pressure is a function of density if the entropy is constant in the entire considered flow region. An example is shown of a stationary subsonic irrotational flow in which the entropy has different values on different streamlines, and the pressure is not a function of density. Applying the maximum principle for subsonic flow to such a flow would be unjustified. This example shows the relevance of the question of where the points of maximum velocity are located when the entropy is not constant. To clarify the regularities of the location of these points, an analysis of the complete Euler equations (without any simplifying assumptions) was performed in the 3-D case. A new proof of the maximum principle for subsonic flow is proposed; this proof does not rely on the assumption that the pressure is a function of density. Thus, it is shown that the maximum principle for subsonic flow holds for stationary subsonic irrotational flows of an ideal perfect gas with variable entropy.

  13. Fundamental Principle for Quantum Theory

    OpenAIRE

    Khrennikov, Andrei

    2002-01-01

    We propose the principle, the law of statistical balance for basic physical observables, which specifies quantum statistical theory among all other statistical theories of measurements. It seems that this principle might play in quantum theory the role that is similar to the role of Einstein's relativity principle.

  14. A maximum principle for time dependent transport in systems with voids

    International Nuclear Information System (INIS)

    Schofield, S.L.; Ackroyd, R.T.

    1996-01-01

    A maximum principle is developed for the first-order time dependent Boltzmann equation. The maximum principle is a generalization of Schofield's κ(θ) principle for the first-order steady state Boltzmann equation, and provides a treatment of time dependent transport in systems with void regions. The formulation comprises a direct least-squares minimization allied with a suitable choice of bilinear functional, and gives rise to a maximum principle whose functional is free of terms that have previously led to difficulties in treating void regions. (Author)

  15. Nonsymmetric entropy and maximum nonsymmetric entropy principle

    International Nuclear Information System (INIS)

    Liu Chengshi

    2009-01-01

    Within the framework of a statistical model, the concept of nonsymmetric entropy, which generalizes the concepts of Boltzmann's entropy and Shannon's entropy, is defined. The maximum nonsymmetric entropy principle is proved. Some important distribution laws, such as the power law, can be derived from this principle naturally. In particular, nonsymmetric entropy is more convenient than other entropies, such as Tsallis's entropy, in deriving power laws.

  16. Maximum Principle for General Controlled Systems Driven by Fractional Brownian Motions

    International Nuclear Information System (INIS)

    Han Yuecai; Hu Yaozhong; Song Jian

    2013-01-01

    We obtain a maximum principle for stochastic control problem of general controlled stochastic differential systems driven by fractional Brownian motions (of Hurst parameter H>1/2). This maximum principle specifies a system of equations that the optimal control must satisfy (necessary condition for the optimal control). This system of equations consists of a backward stochastic differential equation driven by both fractional Brownian motions and the corresponding underlying standard Brownian motions. In addition to this backward equation, the maximum principle also involves the Malliavin derivatives. Our approach is to use conditioning and Malliavin calculus. To arrive at our maximum principle we need to develop some new results of stochastic analysis of the controlled systems driven by fractional Brownian motions via fractional calculus. Our approach of conditioning and Malliavin calculus is also applied to classical system driven by standard Brownian motions while the controller has only partial information. As a straightforward consequence, the classical maximum principle is also deduced in this more natural and simpler way.

  17. Neutron spectra unfolding with maximum entropy and maximum likelihood

    International Nuclear Information System (INIS)

    Itoh, Shikoh; Tsunoda, Toshiharu

    1989-01-01

    A new unfolding theory has been established on the basis of the maximum entropy principle and the maximum likelihood method. This theory correctly embodies the Poisson statistics of neutron detection, and always yields a positive solution over the whole energy range. Moreover, the theory unifies the overdetermined and underdetermined problems. For the latter, the ambiguity in assigning a prior probability, i.e. the initial guess in the Bayesian sense, is eliminated by virtue of the principle. An approximate expression of the covariance matrix for the resultant spectra is also presented. An efficient algorithm to solve the nonlinear system which appears in the present study has been established. Results of computer simulation showed the effectiveness of the present theory. (author)
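The positivity property emphasized above can be seen in a standard multiplicative maximum-likelihood iteration for Poisson counts (the ML-EM, or Richardson-Lucy, update); this is a generic sketch with a made-up response matrix, not the authors' algorithm.

```python
import numpy as np

# Generic ML-EM (Richardson-Lucy) iteration for Poisson counts c = R @ s:
#   s_j <- s_j * [sum_i R_ij * c_i / (R s)_i] / [sum_i R_ij]
# Each update is multiplicative, so a positive initial guess stays
# positive over the whole energy range. R and s_true are made up.
R = np.array([[0.8, 0.2, 0.0],
              [0.1, 0.7, 0.2],
              [0.0, 0.3, 0.9]])
s_true = np.array([100.0, 50.0, 25.0])
c = R @ s_true                            # noiseless "measured" counts

s = np.full(3, c.sum() / 3)               # positive initial guess
for _ in range(500):
    s = s * (R.T @ (c / (R @ s))) / R.sum(axis=0)
```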

  18. Maximum principle and convergence of central schemes based on slope limiters

    KAUST Repository

    Mehmetoglu, Orhan; Popov, Bojan

    2012-01-01

    A maximum principle and convergence of second order central schemes is proven for scalar conservation laws in dimension one. It is well known that to establish a maximum principle a nonlinear piecewise linear reconstruction is needed and a typical choice is the minmod limiter. Unfortunately, this implies that the scheme uses a first order reconstruction at local extrema. The novelty here is that we allow local nonlinear reconstructions which do not reduce to first order at local extrema and still prove maximum principle and convergence. © 2011 American Mathematical Society.
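The classical minmod limiter mentioned above can be stated in a few lines: it takes the smaller of the two one-sided slopes when they agree in sign and zero otherwise, so the reconstruction degenerates to first order at local extrema, which is exactly the restriction the paper's new limiters relax. The cell averages below are made-up numbers.

```python
import numpy as np

# Minmod limiter: smaller one-sided slope if both agree in sign, else 0.
def minmod(a, b):
    return np.where(a * b > 0,
                    np.sign(a) * np.minimum(np.abs(a), np.abs(b)),
                    0.0)

u = np.array([0.0, 1.0, 4.0, 3.0, 3.5])            # cell averages
slopes = np.zeros_like(u)
slopes[1:-1] = minmod(u[1:-1] - u[:-2], u[2:] - u[1:-1])
# cell 2 is a local maximum, so its limited slope is zero
```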

  19. Three faces of entropy for complex systems: Information, thermodynamics, and the maximum entropy principle

    Science.gov (United States)

    Thurner, Stefan; Corominas-Murtra, Bernat; Hanel, Rudolf

    2017-09-01

    There are at least three distinct ways to conceptualize entropy: entropy as an extensive thermodynamic quantity of physical systems (Clausius, Boltzmann, Gibbs), entropy as a measure for information production of ergodic sources (Shannon), and entropy as a means for statistical inference on multinomial processes (Jaynes maximum entropy principle). Even though these notions represent fundamentally different concepts, the functional form of the entropy for thermodynamic systems in equilibrium, for ergodic sources in information theory, and for independent sampling processes in statistical systems, is degenerate, H(p) = -∑_i p_i log p_i. For many complex systems, which are typically history-dependent, nonergodic, and nonmultinomial, this is no longer the case. Here we show that for such processes, the three entropy concepts lead to different functional forms of entropy, which we will refer to as S_EXT for extensive entropy, S_IT for the source information rate in information theory, and S_MEP for the entropy functional that appears in the so-called maximum entropy principle, which characterizes the most likely observable distribution functions of a system. We explicitly compute these three entropy functionals for three concrete examples: for Pólya urn processes, which are simple self-reinforcing processes, for sample-space-reducing (SSR) processes, which are simple history dependent processes that are associated with power-law statistics, and finally for multinomial mixture processes.
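The degenerate functional form shared by the three notions in the multinomial/ergodic case is easy to evaluate directly; for n outcomes it is maximized by the uniform distribution, with value log n. The example distributions below are arbitrary.

```python
import numpy as np

# Shannon form H(p) = -sum_i p_i * log(p_i), shared by the three entropy
# notions in the multinomial/ergodic case.
def shannon_entropy(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                      # convention: 0*log(0) = 0
    return -np.sum(p * np.log(p))

n = 4
H_uniform = shannon_entropy(np.full(n, 1.0 / n))   # equals log(4)
H_skewed = shannon_entropy([0.7, 0.1, 0.1, 0.1])   # strictly smaller
```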

  20. Fundamental principles of quantum theory

    International Nuclear Information System (INIS)

    Bugajski, S.

    1980-01-01

    After introducing general versions of three fundamental quantum postulates - the superposition principle, the uncertainty principle and the complementarity principle - the question of whether the three principles are sufficiently strong to restrict the general Mackey description of quantum systems to the standard Hilbert-space quantum theory is discussed. An example which shows that the answer must be negative is constructed. An abstract version of the projection postulate is introduced and it is demonstrated that it could serve as the missing physical link between the general Mackey description and the standard quantum theory. (author)

  1. A General Stochastic Maximum Principle for SDEs of Mean-field Type

    International Nuclear Information System (INIS)

    Buckdahn, Rainer; Djehiche, Boualem; Li Juan

    2011-01-01

    We study the optimal control for stochastic differential equations (SDEs) of mean-field type, in which the coefficients depend on the state of the solution process as well as on its expected value. Moreover, the cost functional is also of mean-field type. This makes the control problem time inconsistent in the sense that the Bellman optimality principle does not hold. For a general action space a Peng-type stochastic maximum principle (Peng, S.: SIAM J. Control Optim. 28(4), 966–979, 1990) is derived, specifying the necessary conditions for optimality. This maximum principle differs from the classical one in the sense that here the first order adjoint equation turns out to be a linear mean-field backward SDE, while the second order adjoint equation remains the same as in Peng's stochastic maximum principle.

  2. Setting the renormalization scale in QCD: The principle of maximum conformality

    DEFF Research Database (Denmark)

    Brodsky, S. J.; Di Giustino, L.

    2012-01-01

    A key problem in making precise perturbative QCD predictions is the uncertainty in determining the renormalization scale mu of the running coupling alpha(s)(mu(2)). The purpose of the running coupling in any gauge theory is to sum all terms involving the beta function; in fact, when the renormalization scale is set properly, all nonconformal (beta not equal 0) terms in a perturbative expansion arising from renormalization are summed into the running coupling. The remaining terms in the perturbative series are then identical to those of a conformal theory, i.e., the corresponding theory with beta = 0. The resulting scale-fixed predictions using the principle of maximum conformality (PMC) are independent of the choice of renormalization scheme, a key requirement of renormalization group invariance. The results avoid renormalon resummation and agree with QED scale setting in the Abelian limit...

  3. A Maximum Principle for SDEs of Mean-Field Type

    Energy Technology Data Exchange (ETDEWEB)

    Andersson, Daniel, E-mail: danieand@math.kth.se; Djehiche, Boualem, E-mail: boualem@math.kth.se [Royal Institute of Technology, Department of Mathematics (Sweden)

    2011-06-15

    We study the optimal control of a stochastic differential equation (SDE) of mean-field type, where the coefficients are allowed to depend on some functional of the law as well as the state of the process. Moreover the cost functional is also of mean-field type, which makes the control problem time inconsistent in the sense that the Bellman optimality principle does not hold. Under the assumption of a convex action space a maximum principle of local form is derived, specifying the necessary conditions for optimality. These are also shown to be sufficient under additional assumptions. This maximum principle differs from the classical one, where the adjoint equation is a linear backward SDE, since here the adjoint equation turns out to be a linear mean-field backward SDE. As an illustration, we apply the result to the mean-variance portfolio selection problem.

  4. A Maximum Principle for SDEs of Mean-Field Type

    International Nuclear Information System (INIS)

    Andersson, Daniel; Djehiche, Boualem

    2011-01-01

    We study the optimal control of a stochastic differential equation (SDE) of mean-field type, where the coefficients are allowed to depend on some functional of the law as well as the state of the process. Moreover the cost functional is also of mean-field type, which makes the control problem time inconsistent in the sense that the Bellman optimality principle does not hold. Under the assumption of a convex action space a maximum principle of local form is derived, specifying the necessary conditions for optimality. These are also shown to be sufficient under additional assumptions. This maximum principle differs from the classical one, where the adjoint equation is a linear backward SDE, since here the adjoint equation turns out to be a linear mean-field backward SDE. As an illustration, we apply the result to the mean-variance portfolio selection problem.

  5. Twenty-five years of maximum-entropy principle

    Science.gov (United States)

    Kapur, J. N.

    1983-04-01

    The strengths and weaknesses of the maximum entropy principle (MEP) are examined and some challenging problems that remain outstanding at the end of the first quarter century of the principle are discussed. The original formalism of the MEP is presented and its relationship to statistical mechanics is set forth. The use of MEP for characterizing statistical distributions, in statistical inference, nonlinear spectral analysis, transportation models, population density models, models for brand-switching in marketing and vote-switching in elections is discussed. Its application to finance, insurance, image reconstruction, pattern recognition, operations research and engineering, biology and medicine, and nonparametric density estimation is considered.

  6. Maximum neutron flux in thermal reactors

    International Nuclear Information System (INIS)

    Strugar, P.V.

    1968-12-01

    The direct approach to the problem is to calculate the spatial distribution of fuel concentration in the reactor core using the condition of maximum neutron flux while complying with thermal limitations. This paper proves that the problem can be solved by applying the variational calculus, i.e. by using the maximum principle of Pontryagin. The mathematical model of the reactor core is based on two-group neutron diffusion theory with some simplifications which make it suitable for the application of the maximum principle. The solution for the optimum distribution of fuel concentration in the reactor core is obtained in explicit analytical form. The reactor critical dimensions are roots of a system of nonlinear equations, and verification of the optimum conditions can be done only for specific examples.

  7. Can the maximum entropy principle be explained as a consistency requirement?

    NARCIS (Netherlands)

    Uffink, J.

    1997-01-01

    The principle of maximum entropy is a general method to assign values to probability distributions on the basis of partial information. This principle, introduced by Jaynes in 1957, forms an extension of the classical principle of insufficient reason. It has been further generalized, both in

  8. Maximum principles for boundary-degenerate second-order linear elliptic differential operators

    OpenAIRE

    Feehan, Paul M. N.

    2012-01-01

    We prove weak and strong maximum principles, including a Hopf lemma, for smooth subsolutions to equations defined by linear, second-order, partial differential operators whose principal symbols vanish along a portion of the domain boundary. The boundary regularity property of the smooth subsolutions along this boundary vanishing locus ensures that these maximum principles hold irrespective of the sign of the Fichera function. Boundary conditions need only be prescribed on the complement in th...

  9. The constraint rule of the maximum entropy principle

    NARCIS (Netherlands)

    Uffink, J.

    1995-01-01

    The principle of maximum entropy is a method for assigning values to probability distributions on the basis of partial information. In usual formulations of this and related methods of inference one assumes that this partial information takes the form of a constraint on allowed probability

  10. Ethical principles and theories.

    Science.gov (United States)

    Schultz, R C

    1993-01-01

    Ethical theory about what is right and good in human conduct lies behind the issues practitioners face and the codes they turn to for guidance; it also provides guidance for actions, practices, and policies. Principles of obligation, such as egoism, utilitarianism, and deontology, offer general answers to the question, "Which acts/practices are morally right?" A re-emerging alternative to using such principles to assess individual conduct is to center normative theory on personal virtues. For structuring society's institutions, principles of social justice offer alternative answers to the question, "How should social benefits and burdens be distributed?" But human concerns about right and good call for more than just theoretical responses. Some critics (eg, the postmodernists and the feminists) charge that normative ethical theorizing is a misguided enterprise. However, that charge should be taken as a caution and not as a refutation of normative ethical theorizing.

  11. Einstein-Dirac theory in spin maximum I

    International Nuclear Information System (INIS)

    Crumeyrolle, A.

    1975-01-01

    A unitary Einstein-Dirac theory, first in spin maximum 1, is constructed. An original feature of this article is that it is written without any tetrad techniques; only basic notions and existence conditions for spinor structures on pseudo-Riemannian fibre bundles are used. A coupling between gravitation and the electromagnetic field is pointed out, in the geometric setting of the tangent bundle over space-time. Generalized Maxwell equations for inductive media in the presence of a gravitational field are obtained. The enlarged Einstein-Schroedinger theory gives a particular case of this E.D. theory; E.S. theory is a truncated E.D. theory in spin maximum 1. A close relation between the torsion-vector and Schroedinger's potential exists, and the nullity of the torsion-vector has a spinor meaning. Finally the Petiau-Duffin-Kemmer theory is incorporated in this geometric setting [fr

  12. The Independence of Markov's Principle in Type Theory

    DEFF Research Database (Denmark)

    Coquand, Thierry; Mannaa, Bassel

    2017-01-01

    In this paper, we show that Markov's principle is not derivable in dependent type theory with natural numbers and one universe. One way to prove this would be to remark that Markov's principle does not hold in a sheaf model of type theory over Cantor space, since Markov's principle does not hold for the generic point of this model. Instead we design an extension of type theory, which intuitively extends type theory by the addition of a generic point of Cantor space. We then show the consistency of this extension by a normalization argument. Markov's principle does not hold in this extension, and it follows that it cannot be proved in type theory.

  13. Application of the maximum entropy production principle to electrical systems

    International Nuclear Information System (INIS)

    Christen, Thomas

    2006-01-01

    For a simple class of electrical systems, the principle of the maximum entropy production rate (MaxEP) is discussed. First, we compare the MaxEP principle and the principle of the minimum entropy production rate and illustrate the superiority of the MaxEP principle for the example of two parallel constant resistors. Second, we show that the Steenbeck principle for the electric arc as well as the ohmic contact behaviour of space-charge limited conductors follow from the MaxEP principle. In line with work by Dewar, the investigations seem to suggest that the MaxEP principle can also be applied to systems far from equilibrium, provided appropriate information is available that enters the constraints of the optimization problem. Finally, we apply the MaxEP principle to a mesoscopic system and show that the universal conductance quantum, e²/h, of a one-dimensional ballistic conductor can be estimated.

  14. Maximum-principle-satisfying space-time conservation element and solution element scheme applied to compressible multifluids

    KAUST Repository

    Shen, Hua; Wen, Chih-Yung; Parsani, Matteo; Shu, Chi-Wang

    2016-01-01

    A maximum-principle-satisfying space-time conservation element and solution element (CE/SE) scheme is constructed to solve a reduced five-equation model coupled with the stiffened equation of state for compressible multifluids. We first derive a sufficient condition for CE/SE schemes to satisfy the maximum principle when solving a general conservation law. Then we introduce a slope limiter to enforce the sufficient condition, which is applicable to both central and upwind CE/SE schemes. Finally, we implement the upwind maximum-principle-satisfying CE/SE scheme to solve the volume-fraction-based five-equation model for compressible multifluids. Several numerical examples are carried out to carefully examine the accuracy, efficiency, conservativeness and maximum-principle-satisfying property of the proposed approach.
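The role of such a limiter can be illustrated with a generic minmod-limited reconstruction in Python (a sketch of the discrete maximum-principle idea, not the CE/SE limiter of the paper): limited slopes keep reconstructed interface values within the local bounds of the neighbouring cell averages.

```python
import numpy as np

def minmod(a, b):
    """Minmod limiter: pick the smaller-magnitude slope, or 0 on a sign change."""
    return np.where(a * b > 0.0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

def limited_interfaces(u):
    """Limited linear reconstruction of left/right face values for the
    interior cells of a 1-D array of cell averages."""
    s = minmod(u[2:] - u[1:-1], u[1:-1] - u[:-2])  # limited slope per interior cell
    return u[1:-1] - 0.5 * s, u[1:-1] + 0.5 * s    # (left face, right face)

u = np.array([1.0, 4.0, 2.0, 8.0, 3.0, 0.5])       # rough data with extrema
uL, uR = limited_interfaces(u)

# Discrete maximum-principle check: reconstructed face values never leave the
# range spanned by the three local cell averages.
lo = np.minimum.reduce([u[:-2], u[1:-1], u[2:]])
hi = np.maximum.reduce([u[:-2], u[1:-1], u[2:]])
assert np.all(uL >= lo) and np.all(uL <= hi)
assert np.all(uR >= lo) and np.all(uR <= hi)
```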

  15. Maximum-principle-satisfying space-time conservation element and solution element scheme applied to compressible multifluids

    KAUST Repository

    Shen, Hua

    2016-10-19

    A maximum-principle-satisfying space-time conservation element and solution element (CE/SE) scheme is constructed to solve a reduced five-equation model coupled with the stiffened equation of state for compressible multifluids. We first derive a sufficient condition for CE/SE schemes to satisfy the maximum principle when solving a general conservation law. Then we introduce a slope limiter to enforce the sufficient condition, which is applicable to both central and upwind CE/SE schemes. Finally, we implement the upwind maximum-principle-satisfying CE/SE scheme to solve the volume-fraction-based five-equation model for compressible multifluids. Several numerical examples are carried out to carefully examine the accuracy, efficiency, conservativeness and maximum-principle-satisfying property of the proposed approach.

  16. An implementation of the maximum-caliber principle by replica-averaged time-resolved restrained simulations.

    Science.gov (United States)

    Capelli, Riccardo; Tiana, Guido; Camilloni, Carlo

    2018-05-14

    Inferential methods can be used to integrate experimental information and molecular simulations. The maximum entropy principle provides a framework for using equilibrium experimental data, and it has been shown that replica-averaged simulations, restrained using a static potential, are a practical and powerful implementation of such a principle. Here we show that replica-averaged simulations restrained using a time-dependent potential are equivalent to the principle of maximum caliber, the dynamic version of the principle of maximum entropy, and thus may allow us to integrate time-resolved data in molecular dynamics simulations. We provide an analytical proof of the equivalence as well as a computational validation making use of simple models and synthetic data. Some limitations and possible solutions are also discussed.

  17. Combining Experiments and Simulations Using the Maximum Entropy Principle

    DEFF Research Database (Denmark)

    Boomsma, Wouter; Ferkinghoff-Borg, Jesper; Lindorff-Larsen, Kresten

    2014-01-01

    Given the limited accuracy of force fields, macromolecular simulations sometimes produce results that are not in quantitative agreement with experimental data. The principle of maximum entropy is a general procedure for constructing probability distributions in the light of new data, making it a natural tool in cases when an initial model provides results that are at odds with experiments. We describe the maximum entropy approach in the context of a simple example, after which we proceed with a real-world application in the field of molecular simulations, where the maximum entropy procedure has recently provided new insight. Three very recent papers have explored this problem using the maximum entropy approach, providing both new theoretical and practical insights to the problem. We highlight each of these contributions in turn and conclude with a discussion on remaining challenges.
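As a minimal sketch of the procedure (not the implementation of the papers discussed), maximum-entropy reweighting of a simulated ensemble to match one measured average reduces to a one-dimensional search for a Lagrange multiplier: the weights take the form w_i ∝ exp(−λO_i), and λ is tuned so the reweighted average reproduces the experiment. All numbers below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
obs = rng.normal(loc=2.0, scale=1.0, size=5000)  # observable per simulation frame
target = 1.5                                     # hypothetical "experimental" average

def reweighted_mean(lam):
    # Maximum-entropy weights relative to the prior (uniform over frames):
    # w_i proportional to exp(-lam * O_i); the shift improves numerical stability.
    w = np.exp(-lam * (obs - obs.mean()))
    return float(np.dot(w / w.sum(), obs))

# The reweighted mean decreases monotonically in lam, so bisection finds the
# multiplier that reproduces the experimental average.
lo_l, hi_l = -10.0, 10.0
for _ in range(200):
    mid = 0.5 * (lo_l + hi_l)
    if reweighted_mean(mid) > target:
        lo_l = mid
    else:
        hi_l = mid
lam = 0.5 * (lo_l + hi_l)

w = np.exp(-lam * (obs - obs.mean()))
w /= w.sum()
assert abs(np.dot(w, obs) - target) < 1e-6
```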

  18. Applications of the maximum entropy principle in nuclear physics

    International Nuclear Information System (INIS)

    Froehner, F.H.

    1990-01-01

    Soon after the advent of information theory the principle of maximum entropy was recognized as furnishing the missing rationale for the familiar rules of classical thermodynamics. More recently it has also been applied successfully in nuclear physics. As an elementary example we derive a physically meaningful macroscopic description of the spectrum of neutrons emitted in nuclear fission, and compare the well known result with accurate data on ²⁵²Cf. A second example, derivation of an expression for resonance-averaged cross sections for nuclear reactions like scattering or fission, is less trivial. Entropy maximization, constrained by given transmission coefficients, yields probability distributions for the R- and S-matrix elements, from which average cross sections can be calculated. If constrained only by the range of the spectrum of compound-nuclear levels it produces the Gaussian Orthogonal Ensemble (GOE) of Hamiltonian matrices that again yields expressions for average cross sections. Both avenues give practically the same numbers in spite of the quite different cross section formulae. These results were employed in a new model-aided evaluation of the ²³⁸U neutron cross sections in the unresolved resonance region. (orig.) [de
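The macroscopic description referred to here is the familiar Maxwellian fission spectrum, N(E) ∝ √E·exp(−E/T), the maximum-entropy form under a mean-energy constraint. A quick numerical sketch (the temperature parameter below is illustrative, not an evaluated constant) checks its first moment against the analytic value ⟨E⟩ = (3/2)T:

```python
import numpy as np

T = 1.42  # nominal Maxwellian temperature parameter in MeV (illustrative value)
E = np.linspace(1e-6, 40.0 * T, 400_000)
spec = np.sqrt(E) * np.exp(-E / T)  # unnormalized Maxwellian spectrum N(E)

# On a uniform grid the spacing cancels in the ratio of Riemann sums,
# so the mean energy is just the spectrum-weighted average of E.
mean_E = float((E * spec).sum() / spec.sum())

# Analytic first moment of the Maxwellian form: <E> = (3/2) T.
assert abs(mean_E - 1.5 * T) < 1e-3
```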

  19. On a Weak Discrete Maximum Principle for hp-FEM

    Czech Academy of Sciences Publication Activity Database

    Šolín, Pavel; Vejchodský, Tomáš

    -, č. 209 (2007), s. 54-65 ISSN 0377-0427 R&D Projects: GA ČR(CZ) GA102/05/0629 Institutional research plan: CEZ:AV0Z20570509; CEZ:AV0Z10190503 Keywords : discrete maximum principle * hp-FEM Subject RIV: JA - Electronics ; Optoelectronics, Electrical Engineering Impact factor: 0.943, year: 2007

  20. Maximum hardness and minimum polarizability principles through lattice energies of ionic compounds

    International Nuclear Information System (INIS)

    Kaya, Savaş; Kaya, Cemal; Islam, Nazmul

    2016-01-01

    The maximum hardness (MHP) and minimum polarizability (MPP) principles have been analyzed using the relationship between the lattice energies of ionic compounds and their electronegativities, chemical hardnesses and electrophilicities. Lattice energy, electronegativity, chemical hardness and electrophilicity values of the ionic compounds considered in the present study have been calculated using new equations derived by some of the authors in recent years. For four simple reactions, the changes of the hardness (Δη), polarizability (Δα) and electrophilicity index (Δω) were calculated. It is shown that the maximum hardness principle is obeyed by all chemical reactions, but the minimum polarizability and minimum electrophilicity principles are not valid for all reactions. We also propose simple methods to compute the percentage ionic character and internuclear distances of ionic compounds. Comparative studies with experimental sets of data reveal that the proposed methods of computation of the percentage ionic character and internuclear distances of ionic compounds are valid.
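The hardness and electronegativity scales underlying the MHP can be sketched with Pearson's finite-difference formulas (the standard definitions, not the new equations of this paper; the I and A inputs below are rounded literature values, used only for illustration):

```python
# Pearson's finite-difference definitions in terms of the ionization energy I
# and electron affinity A (both in eV): chemical hardness eta = (I - A)/2 and
# electronegativity chi = (I + A)/2.

def hardness(I, A):
    return (I - A) / 2.0

def electronegativity(I, A):
    return (I + A) / 2.0

na = {"I": 5.14, "A": 0.55}   # sodium (eV, approximate)
cl = {"I": 12.97, "A": 3.61}  # chlorine (eV, approximate)

eta_na, eta_cl = hardness(**na), hardness(**cl)

# The soft alkali metal versus the harder, more electronegative halogen:
assert eta_cl > eta_na
assert electronegativity(**cl) > electronegativity(**na)
```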

  1. Maximum hardness and minimum polarizability principles through lattice energies of ionic compounds

    Energy Technology Data Exchange (ETDEWEB)

    Kaya, Savaş, E-mail: savaskaya@cumhuriyet.edu.tr [Department of Chemistry, Faculty of Science, Cumhuriyet University, Sivas 58140 (Turkey); Kaya, Cemal, E-mail: kaya@cumhuriyet.edu.tr [Department of Chemistry, Faculty of Science, Cumhuriyet University, Sivas 58140 (Turkey); Islam, Nazmul, E-mail: nazmul.islam786@gmail.com [Theoretical and Computational Chemistry Research Laboratory, Department of Basic Science and Humanities/Chemistry Techno Global-Balurghat, Balurghat, D. Dinajpur 733103 (India)

    2016-03-15

    The maximum hardness (MHP) and minimum polarizability (MPP) principles have been analyzed using the relationship between the lattice energies of ionic compounds and their electronegativities, chemical hardnesses and electrophilicities. Lattice energy, electronegativity, chemical hardness and electrophilicity values of the ionic compounds considered in the present study have been calculated using new equations derived by some of the authors in recent years. For four simple reactions, the changes of the hardness (Δη), polarizability (Δα) and electrophilicity index (Δω) were calculated. It is shown that the maximum hardness principle is obeyed by all chemical reactions, but the minimum polarizability and minimum electrophilicity principles are not valid for all reactions. We also propose simple methods to compute the percentage ionic character and internuclear distances of ionic compounds. Comparative studies with experimental sets of data reveal that the proposed methods of computation of the percentage ionic character and internuclear distances of ionic compounds are valid.

  2. A maximum principle for the first-order Boltzmann equation, incorporating a potential treatment of voids

    International Nuclear Information System (INIS)

    Schofield, S.L.

    1988-01-01

    Ackroyd's generalized least-squares method for solving the first-order Boltzmann equation is adapted to incorporate a potential treatment of voids. The adaptation comprises a direct least-squares minimization allied with a suitably-defined bilinear functional. The resulting formulation gives rise to a maximum principle whose functional does not contain terms of the type that have previously led to difficulties in treating void regions. The maximum principle is derived without requiring continuity of the flux at interfaces. The functional of the maximum principle is concluded to have an Euler-Lagrange equation given directly by the first-order Boltzmann equation. (author)

  3. Maximum Principles and Boundary Value Problems for First-Order Neutral Functional Differential Equations

    Directory of Open Access Journals (Sweden)

    Domoshnitsky Alexander

    2009-01-01

    Full Text Available We obtain maximum principles for a first-order neutral functional differential equation involving linear continuous operators, where the relevant operators are assumed to be positive; the underlying spaces are the space of continuous functions and the space of essentially bounded functions defined on the interval in question. New tests on positivity of the Cauchy function and its derivative are proposed. Results on existence and uniqueness of solutions for various boundary value problems are obtained on the basis of the maximum principles.

  4. Perspective: Maximum caliber is a general variational principle for dynamical systems.

    Science.gov (United States)

    Dixit, Purushottam D; Wagoner, Jason; Weistuch, Corey; Pressé, Steve; Ghosh, Kingshuk; Dill, Ken A

    2018-01-07

    We review here Maximum Caliber (Max Cal), a general variational principle for inferring distributions of paths in dynamical processes and networks. Max Cal is to dynamical trajectories what the principle of maximum entropy is to equilibrium states or stationary populations. In Max Cal, you maximize a path entropy over all possible pathways, subject to dynamical constraints, in order to predict relative path weights. Many well-known relationships of non-equilibrium statistical physics-such as the Green-Kubo fluctuation-dissipation relations, Onsager's reciprocal relations, and Prigogine's minimum entropy production-are limited to near-equilibrium processes. Max Cal is more general. While it can readily derive these results under those limits, Max Cal is also applicable far from equilibrium. We give examples of Max Cal as a method of inference about trajectory distributions from limited data, finding reaction coordinates in bio-molecular simulations, and modeling the complex dynamics of non-thermal systems such as gene regulatory networks or the collective firing of neurons. We also survey its basis in principle and some limitations.
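A minimal discrete illustration (hypothetical path observables, not an example from the review): maximizing the path entropy −Σ p log p subject to a constrained average of a path observable A yields exponential path weights p_k ∝ exp(−λA_k), with λ fixed by the constraint.

```python
import numpy as np

# Toy "path ensemble": five discrete paths, each carrying a scalar observable
# A_k accumulated along the path (hypothetical values, for illustration only).
A = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
target = 1.2  # constrained ensemble average of A over paths

def caliber_weights(lam):
    # Maximizing path entropy under the average-A constraint gives
    # exponential path weights p_k proportional to exp(-lam * A_k).
    p = np.exp(-lam * A)
    return p / p.sum()

# The constrained average decreases monotonically in lam: bisect for it.
lo, hi = -5.0, 5.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if caliber_weights(mid) @ A > target:
        lo = mid
    else:
        hi = mid
p = caliber_weights(0.5 * (lo + hi))

def entropy(w):
    w = w[w > 0]
    return float(-(w * np.log(w)).sum())

q = np.array([0.4, 0.2, 0.2, 0.2, 0.0])  # another distribution with the same mean
assert abs(p @ A - target) < 1e-9
assert entropy(p) > entropy(q)            # Max Cal weights carry the larger path entropy
```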

  5. Perspective: Maximum caliber is a general variational principle for dynamical systems

    Science.gov (United States)

    Dixit, Purushottam D.; Wagoner, Jason; Weistuch, Corey; Pressé, Steve; Ghosh, Kingshuk; Dill, Ken A.

    2018-01-01

    We review here Maximum Caliber (Max Cal), a general variational principle for inferring distributions of paths in dynamical processes and networks. Max Cal is to dynamical trajectories what the principle of maximum entropy is to equilibrium states or stationary populations. In Max Cal, you maximize a path entropy over all possible pathways, subject to dynamical constraints, in order to predict relative path weights. Many well-known relationships of non-equilibrium statistical physics—such as the Green-Kubo fluctuation-dissipation relations, Onsager's reciprocal relations, and Prigogine's minimum entropy production—are limited to near-equilibrium processes. Max Cal is more general. While it can readily derive these results under those limits, Max Cal is also applicable far from equilibrium. We give examples of Max Cal as a method of inference about trajectory distributions from limited data, finding reaction coordinates in bio-molecular simulations, and modeling the complex dynamics of non-thermal systems such as gene regulatory networks or the collective firing of neurons. We also survey its basis in principle and some limitations.

  6. Sufficient Stochastic Maximum Principle in a Regime-Switching Diffusion Model

    Energy Technology Data Exchange (ETDEWEB)

    Donnelly, Catherine, E-mail: C.Donnelly@hw.ac.uk [Heriot-Watt University, Department of Actuarial Mathematics and Statistics (United Kingdom)

    2011-10-15

    We prove a sufficient stochastic maximum principle for the optimal control of a regime-switching diffusion model. We show the connection to dynamic programming and we apply the result to a quadratic loss minimization problem, which can be used to solve a mean-variance portfolio selection problem.
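The mean-variance application mentioned at the end can be illustrated with the classical single-period Markowitz problem (a hedged sketch with hypothetical numbers; the paper itself works in a continuous-time regime-switching model): minimize portfolio variance wᵀΣw subject to a target return wᵀμ = r and full investment Σw = 1, solved via its KKT system.

```python
import numpy as np

mu = np.array([0.08, 0.12, 0.05])             # hypothetical expected returns
Sigma = np.array([[0.10, 0.02, 0.01],
                  [0.02, 0.12, 0.03],
                  [0.01, 0.03, 0.08]])        # hypothetical covariance matrix
r = 0.09                                      # target portfolio return

# KKT system for the equality-constrained quadratic program:
#   [2*Sigma  mu  1] [w]   [0]
#   [mu'      0   0] [a] = [r]
#   [1'       0   0] [b]   [1]
n = len(mu)
K = np.zeros((n + 2, n + 2))
K[:n, :n] = 2.0 * Sigma
K[:n, n], K[n, :n] = mu, mu
K[:n, n + 1], K[n + 1, :n] = 1.0, 1.0
rhs = np.concatenate([np.zeros(n), [r, 1.0]])
w = np.linalg.solve(K, rhs)[:n]

# Both constraints hold at the optimum, and the variance is positive.
assert abs(w @ mu - r) < 1e-12 and abs(w.sum() - 1.0) < 1e-12
assert w @ Sigma @ w > 0.0
```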

  7. Sufficient Stochastic Maximum Principle in a Regime-Switching Diffusion Model

    International Nuclear Information System (INIS)

    Donnelly, Catherine

    2011-01-01

    We prove a sufficient stochastic maximum principle for the optimal control of a regime-switching diffusion model. We show the connection to dynamic programming and we apply the result to a quadratic loss minimization problem, which can be used to solve a mean-variance portfolio selection problem.

  8. How multiplicity determines entropy and the derivation of the maximum entropy principle for complex systems.

    Science.gov (United States)

    Hanel, Rudolf; Thurner, Stefan; Gell-Mann, Murray

    2014-05-13

    The maximum entropy principle (MEP) is a method for obtaining the most likely distribution functions of observables from statistical systems by maximizing entropy under constraints. The MEP has found hundreds of applications in ergodic and Markovian systems in statistical mechanics, information theory, and statistics. For several decades there has been an ongoing controversy over whether the notion of the maximum entropy principle can be extended in a meaningful way to nonextensive, nonergodic, and complex statistical systems and processes. In this paper we start by reviewing how Boltzmann-Gibbs-Shannon entropy is related to multiplicities of independent random processes. We then show how the relaxation of independence naturally leads to the most general entropies that are compatible with the first three Shannon-Khinchin axioms, the (c,d)-entropies. We demonstrate that the MEP is a perfectly consistent concept for nonergodic and complex statistical systems if their relative entropy can be factored into a generalized multiplicity and a constraint term. The problem of finding such a factorization reduces to finding an appropriate representation of relative entropy in a linear basis. In a particular example we show that path-dependent random processes with memory naturally require specific generalized entropies. The example is to our knowledge the first exact derivation of a generalized entropy from the microscopic properties of a path-dependent random process.

  9. Maximum Entropy and Theory Construction: A Reply to Favretti

    Directory of Open Access Journals (Sweden)

    John Harte

    2018-04-01

    Full Text Available In the maximum entropy theory of ecology (METE, the form of a function describing the distribution of abundances over species and metabolic rates over individuals in an ecosystem is inferred using the maximum entropy inference procedure. Favretti shows that an alternative maximum entropy model exists that assumes the same prior knowledge and makes predictions that differ from METE’s. He shows that both cannot be correct and asserts that his is the correct one because it can be derived from a classic microstate-counting calculation. I clarify here exactly what the core entities and definitions are for METE, and discuss the relevance of two critical issues raised by Favretti: the existence of a counting procedure for microstates and the choices of definition of the core elements of a theory. I emphasize that a theorist controls how the core entities of his or her theory are defined, and that nature is the final arbiter of the validity of a theory.

  10. Completely boundary-free minimum and maximum principles for neutron transport and their least-squares and Galerkin equivalents

    International Nuclear Information System (INIS)

    Ackroyd, R.T.

    1982-01-01

    Some minimum and maximum variational principles for even-parity neutron transport are reviewed and the corresponding principles for odd-parity transport are derived by a simple method to show why the essential boundary conditions associated with these maximum principles have to be imposed. The method also shows why both the essential and some of the natural boundary conditions associated with these minimum principles have to be imposed. These imposed boundary conditions for trial functions in the variational principles limit the choice of the finite element used to represent trial functions. The reasons for the boundary conditions imposed on the principles for even- and odd-parity transport point the way to a treatment of composite neutron transport, for which completely boundary-free maximum and minimum principles are derived from a functional identity. In general a trial function is used for each parity in the composite neutron transport, but this can be reduced to one without any boundary conditions having to be imposed. (author)

  11. Foundations of gravitation theory: the principle of equivalence

    International Nuclear Information System (INIS)

    Haugan, M.P.

    1978-01-01

    A new framework is presented within which to discuss the principle of equivalence and its experimental tests. The framework incorporates a special structure imposed on the equivalence principle by the principle of energy conservation. This structure includes relations among the conceptual components of the equivalence principle as well as quantitative relations among the outcomes of its experimental tests. One of the most striking new results obtained through use of this framework is a connection between the breakdown of local Lorentz invariance and the breakdown of the principle that all bodies fall with the same acceleration in a gravitational field. An extensive discussion of experimental tests of the equivalence principle and their significance is also presented. Within the above framework, theory-independent analyses of a broad range of equivalence principle tests are possible. Gravitational redshift experiments, Doppler-shift experiments, the Turner-Hill and Hughes-Drever experiments, and a number of solar-system tests of gravitation theories are analyzed. Application of the techniques of theoretical nuclear physics to the quantitative interpretation of equivalence principle tests using laboratory materials of different composition yields a number of important results. It is found that current Eotvos experiments significantly demonstrate the compatibility of the weak interactions with the equivalence principle. It is also shown that the Hughes-Drever experiment is the most precise test of local Lorentz invariance yet performed. The work leads to a strong, tightly knit empirical basis for the principle of equivalence, the central pillar of the foundations of gravitation theory.

  12. Bounds and maximum principles for the solution of the linear transport equation

    International Nuclear Information System (INIS)

    Larsen, E.W.

    1981-01-01

    Pointwise bounds are derived for the solution of time-independent linear transport problems with surface sources in convex spatial domains. Under specified conditions, upper bounds are derived which, as a function of position, decrease with distance from the boundary. Also, sufficient conditions are obtained for the existence of maximum and minimum principles, and a counterexample is given which shows that such principles do not always exist

  13. Maximum neutron flux in thermal reactors; Maksimum neutronskog fluksa kod termalnih reaktora

    Energy Technology Data Exchange (ETDEWEB)

    Strugar, P V [Institute of Nuclear Sciences Boris Kidric, Vinca, Beograd (Yugoslavia)

    1968-07-01

    The direct approach to the problem is to calculate the spatial distribution of fuel concentration in the reactor core directly, using the condition of maximum neutron flux while complying with thermal limitations. This paper proves that the problem can be solved by applying the variational calculus, i.e., by using the maximum principle of Pontryagin. The mathematical model of the reactor core is based on two-group neutron diffusion theory with some simplifications which make it appropriate from the maximum principle point of view. The solution for the optimum distribution of fuel concentration in the reactor core is obtained in explicit analytical form. The reactor critical dimensions are roots of a system of nonlinear equations, and verification of the optimality conditions can be done only for specific examples.

  14. The free-energy principle: a unified brain theory?

    Science.gov (United States)

    Friston, Karl

    2010-02-01

    A free-energy principle has been proposed recently that accounts for action, perception and learning. This Review looks at some key brain theories in the biological (for example, neural Darwinism) and physical (for example, information theory and optimal control theory) sciences from the free-energy perspective. Crucially, one key theme runs through each of these theories: optimization. Furthermore, if we look closely at what is optimized, the same quantity keeps emerging, namely value (expected reward, expected utility) or its complement, surprise (prediction error, expected cost). This is the quantity that is optimized under the free-energy principle, which suggests that several global brain theories might be unified within a free-energy framework.

  15. Optimal Control of Polymer Flooding Based on Maximum Principle

    Directory of Open Access Journals (Sweden)

    Yang Lei

    2012-01-01

    Full Text Available Polymer flooding is one of the most important technologies for enhanced oil recovery (EOR). In this paper, an optimal control model of distributed parameter systems (DPSs) for polymer injection strategies is established, in which the performance index is the maximization of profit, the governing equations are the fluid flow equations of polymer flooding, and the inequality constraint is the polymer concentration limitation. To cope with the optimal control problem (OCP) for this DPS, the necessary conditions for optimality are obtained through application of the calculus of variations and Pontryagin’s weak maximum principle. A gradient method is proposed for the computation of optimal injection strategies. The numerical results of an example illustrate the effectiveness of the proposed method.
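The adjoint-based gradient method has the same skeleton on a toy problem. The sketch below uses hypothetical scalar dynamics, not the polymer-flooding model: minimize J = ∫(x² + u²) dt subject to dx/dt = −x + u by integrating the state forward, the adjoint backward, and descending along the Hamiltonian gradient ∂H/∂u = 2u + p.

```python
import numpy as np

# Toy optimal control problem (illustrative only):
#   minimize J(u) = int_0^T (x^2 + u^2) dt,  subject to dx/dt = -x + u, x(0) = 1.
# Hamiltonian H = x^2 + u^2 + p(-x + u); adjoint dp/dt = -dH/dx = -2x + p, p(T) = 0.

T, N = 2.0, 400
dt = T / N
u = np.zeros(N)                      # initial guess: no control

def simulate(u):
    x = np.empty(N + 1)
    x[0] = 1.0
    for k in range(N):               # forward Euler for the state equation
        x[k + 1] = x[k] + dt * (-x[k] + u[k])
    return x

def cost(u):
    x = simulate(u)
    return float(np.sum((x[:-1] ** 2 + u ** 2) * dt))

def gradient(u):
    x = simulate(u)
    p = np.zeros(N + 1)              # adjoint, integrated backward from p(T) = 0
    for k in range(N - 1, -1, -1):
        p[k] = p[k + 1] - dt * (-2.0 * x[k + 1] + p[k + 1])
    return 2.0 * u + p[:-1]          # dH/du evaluated along the trajectory

J0 = cost(u)
for _ in range(200):                 # plain gradient descent on the control
    u -= 0.1 * gradient(u)
assert cost(u) < J0                  # the iteration reduces the objective
```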

  16. Optimal control problems with delay, the maximum principle and necessary conditions

    NARCIS (Netherlands)

    Frankena, J.F.

    1975-01-01

    In this paper we consider a rather general optimal control problem involving ordinary differential equations with delayed arguments and a set of equality and inequality restrictions on state and control variables. For this problem a maximum principle is given in pointwise form, using variational techniques.

  17. Maximum principle for a stochastic delayed system involving terminal state constraints.

    Science.gov (United States)

    Wen, Jiaqiang; Shi, Yufeng

    2017-01-01

    We investigate a stochastic optimal control problem where the controlled system is described by a stochastic differential delayed equation and, at the terminal time, the state is constrained to lie in a convex set. We first introduce an equivalent backward delayed system described by a time-delayed backward stochastic differential equation. Then a stochastic maximum principle is obtained by virtue of Ekeland's variational principle. Finally, applications to a state-constrained stochastic delayed linear-quadratic control model and a production-consumption choice problem are studied to illustrate the main result.

  18. Africa and the Principles and Theories of International Relations ...

    African Journals Online (AJOL)

    To what extent have the principles and theories of international relations (as formulated) accommodated the specific needs and circumstances of Africa? In other words, how can the circumstances and peculiarities of Africa be made to shape and influence the established principles and theories of international relations as ...

  19. The discrete maximum principle for Galerkin solutions of elliptic problems

    Czech Academy of Sciences Publication Activity Database

    Vejchodský, Tomáš

    2012-01-01

    Roč. 10, č. 1 (2012), s. 25-43 ISSN 1895-1074 R&D Projects: GA AV ČR IAA100760702 Institutional research plan: CEZ:AV0Z10190503 Keywords : discrete maximum principle * monotone methods * Galerkin solution Subject RIV: BA - General Mathematics Impact factor: 0.405, year: 2012 http://www.springerlink.com/content/x73624wm23x4wj26

  20. Towards a frequency-dependent discrete maximum principle for the implicit Monte Carlo equations

    Energy Technology Data Exchange (ETDEWEB)

    Wollaber, Allan B [Los Alamos National Laboratory; Larsen, Edward W [Los Alamos National Laboratory; Densmore, Jeffery D [Los Alamos National Laboratory

    2010-12-15

    It has long been known that temperature solutions of the Implicit Monte Carlo (IMC) equations can exceed the external boundary temperatures, a so-called violation of the 'maximum principle.' Previous attempts at prescribing a maximum value of the time-step size Δt that is sufficient to eliminate these violations have recommended a Δt that is typically too small to be used in practice and that appeared to be much too conservative when compared to numerical solutions of the IMC equations for practical problems. In this paper, we derive a new estimator for the maximum time-step size that includes the spatial-grid size Δx. This explicitly demonstrates that the effect of coarsening Δx is to reduce the limitation on Δt, which helps explain the overly conservative nature of the earlier, grid-independent results. We demonstrate that our new time-step restriction is a much more accurate means of predicting violations of the maximum principle. We discuss how the implications of the new, grid-dependent time-step restriction can impact IMC solution algorithms.

  1. Towards a frequency-dependent discrete maximum principle for the implicit Monte Carlo equations

    International Nuclear Information System (INIS)

    Wollaber, Allan B.; Larsen, Edward W.; Densmore, Jeffery D.

    2011-01-01

    It has long been known that temperature solutions of the Implicit Monte Carlo (IMC) equations can exceed the external boundary temperatures, a so-called violation of the 'maximum principle'. Previous attempts at prescribing a maximum value of the time-step size Δt that is sufficient to eliminate these violations have recommended a Δt that is typically too small to be used in practice and that appeared to be much too conservative when compared to numerical solutions of the IMC equations for practical problems. In this paper, we derive a new estimator for the maximum time-step size that includes the spatial-grid size Δx. This explicitly demonstrates that the effect of coarsening Δx is to reduce the limitation on Δt, which helps explain the overly conservative nature of the earlier, grid-independent results. We demonstrate that our new time-step restriction is a much more accurate means of predicting violations of the maximum principle. We discuss how the implications of the new, grid-dependent time-step restriction can impact IMC solution algorithms. (author)

  2. An extension of the maximum principle to multidimensional systems and its application in nuclear engineering problems

    International Nuclear Information System (INIS)

    Gilai, D.

    1976-01-01

    The Maximum Principle deals with optimization problems for systems which are governed by ordinary differential equations and which include constraints on the state and control variables. The development of nuclear engineering confronted the designers of reactors, shielding and other nuclear devices with many demands for optimization and savings, and it was straightforward to use the Maximum Principle for solving optimization problems in nuclear engineering; in fact, it was widely used in both structural concept design and dynamic control of nuclear systems. The main disadvantage of the Maximum Principle is that it is suitable only for systems which may be described by ordinary differential equations, i.e. one-dimensional systems. In the present work, starting from the variational approach, the original Maximum Principle is extended to multidimensional systems; the principle which has been derived is of a more general form and is applicable to any system that can be defined by linear partial differential equations of any order. To check the applicability of the extended principle, two examples are solved: the first in nuclear shield design, where the goal is to construct a shield around a neutron-emitting source, using given materials, so that the total dose outside the shielding boundaries is minimized; the second in material distribution design in the core of a power reactor, so that the power peak is minimized. For the second problem, an iterative method was developed. (B.G.)

  3. Quantum theory from first principles an informational approach

    CERN Document Server

    D'Ariano, Giacomo Mauro; Perinotti, Paolo

    2017-01-01

    Quantum theory is the soul of theoretical physics. It is not just a theory of specific physical systems, but rather a new framework with universal applicability. This book shows how we can reconstruct the theory from six information-theoretical principles, by rebuilding the quantum rules from the bottom up. Step by step, the reader will learn how to master the counterintuitive aspects of the quantum world, and how to efficiently reconstruct quantum information protocols from first principles. Using intuitive graphical notation to represent equations, and with shorter and more efficient derivations, the theory can be understood and assimilated with exceptional ease. Offering a radically new perspective on the field, the book contains an efficient course of quantum theory and quantum information for undergraduates. The book is aimed at researchers, professionals, and students in physics, computer science and philosophy, as well as the curious outsider seeking a deeper understanding of the theory.

  4. The underlying principles of relativistic theory of gravitation

    International Nuclear Information System (INIS)

    Logunov, A.A.

    1989-01-01

    The paper deals with the main statements of the relativistic theory of gravitation, constructed as a result of a critical analysis of the general theory of relativity. The principle of geometrization is formulated

  5. Generalized uncertainty principle as a consequence of the effective field theory

    Energy Technology Data Exchange (ETDEWEB)

    Faizal, Mir, E-mail: mirfaizalmir@gmail.com [Irving K. Barber School of Arts and Sciences, University of British Columbia – Okanagan, Kelowna, British Columbia V1V 1V7 (Canada); Department of Physics and Astronomy, University of Lethbridge, Lethbridge, Alberta T1K 3M4 (Canada); Ali, Ahmed Farag, E-mail: ahmed.ali@fsc.bu.edu.eg [Department of Physics, Faculty of Science, Benha University, Benha, 13518 (Egypt); Netherlands Institute for Advanced Study, Korte Spinhuissteeg 3, 1012 CG Amsterdam (Netherlands); Nassar, Ali, E-mail: anassar@zewailcity.edu.eg [Department of Physics, Zewail City of Science and Technology, 12588, Giza (Egypt)

    2017-02-10

    We will demonstrate that the generalized uncertainty principle exists because of the derivative expansion in the effective field theories. This is because in the framework of the effective field theories, the minimum measurable length scale has to be integrated away to obtain the low energy effective action. We will analyze the deformation of a massive free scalar field theory by the generalized uncertainty principle, and demonstrate that the minimum measurable length scale corresponds to a second more massive scale in the theory, which has been integrated away. We will also analyze CFT operators dual to this deformed scalar field theory, and observe that scaling of the new CFT operators indicates that they are dual to this more massive scale in the theory. We will use holographic renormalization to explicitly calculate the renormalized boundary action with counter terms for this scalar field theory deformed by generalized uncertainty principle, and show that the generalized uncertainty principle contributes to the matter conformal anomaly.

  6. Generalized uncertainty principle as a consequence of the effective field theory

    Directory of Open Access Journals (Sweden)

    Mir Faizal

    2017-02-01

    Full Text Available We will demonstrate that the generalized uncertainty principle exists because of the derivative expansion in the effective field theories. This is because in the framework of the effective field theories, the minimum measurable length scale has to be integrated away to obtain the low energy effective action. We will analyze the deformation of a massive free scalar field theory by the generalized uncertainty principle, and demonstrate that the minimum measurable length scale corresponds to a second more massive scale in the theory, which has been integrated away. We will also analyze CFT operators dual to this deformed scalar field theory, and observe that scaling of the new CFT operators indicates that they are dual to this more massive scale in the theory. We will use holographic renormalization to explicitly calculate the renormalized boundary action with counter terms for this scalar field theory deformed by generalized uncertainty principle, and show that the generalized uncertainty principle contributes to the matter conformal anomaly.

  7. Justifying Design Decisions with Theory-based Design Principles

    OpenAIRE

    Schermann, Michael;Gehlert, Andreas;Pohl, Klaus;Krcmar, Helmut

    2014-01-01

    Although the role of theories in design research is recognized, we show that little attention has been paid to how theories are used when designing new artifacts. We introduce design principles as a new methodological approach to address this problem. Design principles extend the notion of design rationales, which document how a design decision emerged. We extend the concept of design rationales by using theoretical hypotheses to support or object to design decisions. At the example of developing...

  8. Conceptual Models and Theory-Embedded Principles on Effective Schooling.

    Science.gov (United States)

    Scheerens, Jaap

    1997-01-01

    Reviews models and theories on effective schooling. Discusses four rationality-based organization theories and a fifth perspective, chaos theory, as applied to organizational functioning. Discusses theory-embedded principles flowing from these theories: proactive structuring, fit, market mechanisms, cybernetics, and self-organization. The…

  9. Statistical Significance of the Maximum Hardness Principle Applied to Some Selected Chemical Reactions.

    Science.gov (United States)

    Saha, Ranajit; Pan, Sudip; Chattaraj, Pratim K

    2016-11-05

    The validity of the maximum hardness principle (MHP) is tested in the cases of 50 chemical reactions, most of which are organic in nature and exhibit anomeric effect. To explore the effect of the level of theory on the validity of MHP in an exothermic reaction, B3LYP/6-311++G(2df,3pd) and LC-BLYP/6-311++G(2df,3pd) (def2-QZVP for iodine and mercury) levels are employed. Different approximations like the geometric mean of hardness and combined hardness are considered in case there are multiple reactants and/or products. It is observed that, based on the geometric mean of hardness, while 82% of the studied reactions obey the MHP at the B3LYP level, 84% of the reactions follow this rule at the LC-BLYP level. Most of the reactions possess the hardest species on the product side. A 50% null hypothesis is rejected at a 1% level of significance.

  10. Objective Bayesianism and the Maximum Entropy Principle

    Directory of Open Access Journals (Sweden)

    Jon Williamson

    2013-09-01

    Full Text Available Objective Bayesian epistemology invokes three norms: the strengths of our beliefs should be probabilities; they should be calibrated to our evidence of physical probabilities; and they should otherwise equivocate sufficiently between the basic propositions that we can express. The three norms are sometimes explicated by appealing to the maximum entropy principle, which says that a belief function should be a probability function, from all those that are calibrated to evidence, that has maximum entropy. However, the three norms of objective Bayesianism are usually justified in different ways. In this paper, we show that the three norms can all be subsumed under a single justification in terms of minimising worst-case expected loss. This, in turn, is equivalent to maximising a generalised notion of entropy. We suggest that requiring language invariance, in addition to minimising worst-case expected loss, motivates maximisation of standard entropy as opposed to maximisation of other instances of generalised entropy. Our argument also provides a qualified justification for updating degrees of belief by Bayesian conditionalisation. However, conditional probabilities play a less central part in the objective Bayesian account than they do under the subjective view of Bayesianism, leading to a reduced role for Bayes’ Theorem.
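    The entropy-maximization step described above can be made concrete in a toy computation. The sketch below (illustrative only, not from the paper) finds the maximum-entropy distribution on the faces of a die subject to a mean constraint: the solution is an exponential family p_k ∝ exp(λk), and the Lagrange multiplier λ is found by bisection on the mean.

```python
import math

def maxent_die(mu, lo=-10.0, hi=10.0, tol=1e-10):
    """Maximum-entropy distribution on {1,...,6} with mean mu.

    The maximum-entropy solution has the exponential-family form
    p_k proportional to exp(lam * k); the mean is increasing in lam,
    so we locate lam by bisection.
    """
    faces = range(1, 7)

    def mean(lam):
        w = [math.exp(lam * k) for k in faces]
        z = sum(w)
        return sum(k * wk for k, wk in zip(faces, w)) / z

    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mean(mid) < mu:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    w = [math.exp(lam * k) for k in faces]
    z = sum(w)
    return [wk / z for wk in w]

# with mu = 3.5 (the unconstrained mean) the result is uniform,
# i.e. maximum entropy equivocates fully between the faces
p = maxent_die(3.5)
```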

  11. Predicting the Outcome of NBA Playoffs Based on the Maximum Entropy Principle

    OpenAIRE

    Ge Cheng; Zhenyu Zhang; Moses Ntanda Kyebambe; Nasser Kimbugwe

    2016-01-01

    Predicting the outcome of National Basketball Association (NBA) matches poses a challenging problem of interest to the research community as well as the general public. In this article, we formalize the problem of predicting NBA game results as a classification problem and apply the principle of Maximum Entropy to construct an NBA Maximum Entropy (NBAME) model that fits to discrete statistics for NBA games, and then predict the outcomes of NBA playoffs using the model. Our results reveal that...

  12. Maximum principles for boundary-degenerate linear parabolic differential operators

    OpenAIRE

    Feehan, Paul M. N.

    2013-01-01

    We develop weak and strong maximum principles for boundary-degenerate, linear, parabolic, second-order partial differential operators, $Lu := -u_t - \mathrm{tr}(aD^2u) - \langle b, Du\rangle + cu$, with partial Dirichlet boundary conditions. The coefficient $a(t,x)$ is assumed to vanish along a non-empty open subset, $\partial_0 Q$, called the degenerate boundary portion, of the parabolic boundary, $\partial Q$, of the domain $Q \subset \mathbb{R}^{d+1}$, while $a(t,x)$ may be non-zero at po...

  13. Unification of field theory and maximum entropy methods for learning probability densities

    Science.gov (United States)

    Kinney, Justin B.

    2015-09-01

    The need to estimate smooth probability distributions (a.k.a. probability densities) from finite sampled data is ubiquitous in science. Many approaches to this problem have been described, but none is yet regarded as providing a definitive solution. Maximum entropy estimation and Bayesian field theory are two such approaches. Both have origins in statistical physics, but the relationship between them has remained unclear. Here I unify these two methods by showing that every maximum entropy density estimate can be recovered in the infinite smoothness limit of an appropriate Bayesian field theory. I also show that Bayesian field theory estimation can be performed without imposing any boundary conditions on candidate densities, and that the infinite smoothness limit of these theories recovers the most common types of maximum entropy estimates. Bayesian field theory thus provides a natural test of the maximum entropy null hypothesis and, furthermore, returns an alternative (lower entropy) density estimate when the maximum entropy hypothesis is falsified. The computations necessary for this approach can be performed rapidly for one-dimensional data, and software for doing this is provided.

  14. Unification of field theory and maximum entropy methods for learning probability densities.

    Science.gov (United States)

    Kinney, Justin B

    2015-09-01

    The need to estimate smooth probability distributions (a.k.a. probability densities) from finite sampled data is ubiquitous in science. Many approaches to this problem have been described, but none is yet regarded as providing a definitive solution. Maximum entropy estimation and Bayesian field theory are two such approaches. Both have origins in statistical physics, but the relationship between them has remained unclear. Here I unify these two methods by showing that every maximum entropy density estimate can be recovered in the infinite smoothness limit of an appropriate Bayesian field theory. I also show that Bayesian field theory estimation can be performed without imposing any boundary conditions on candidate densities, and that the infinite smoothness limit of these theories recovers the most common types of maximum entropy estimates. Bayesian field theory thus provides a natural test of the maximum entropy null hypothesis and, furthermore, returns an alternative (lower entropy) density estimate when the maximum entropy hypothesis is falsified. The computations necessary for this approach can be performed rapidly for one-dimensional data, and software for doing this is provided.

  15. Principle-theoretic approach of Kondo and construction-theoretic formalism of gauge theories

    International Nuclear Information System (INIS)

    Jain, L.C.

    1986-01-01

    Einstein classified the various theories in physics as principle-theories and constructive-theories. In this lecture, Kondo's approach to microscopic and macroscopic phenomena is analysed for its principle-theoretic pursuit, followed by construction. The fundamentals of his theory may be recalled as the Tristimulus principle, the Observation principle, Kawaguchi spaces, empirical information, the epistemological point of view, unitarity, intrinsicality, and dimensional analysis subject to logical and geometrical achievement. On the other hand, various physicists have evolved constructive gauge theories through a phenomenological, often collective, point of view. Their synthetic method involves fibre bundles and connections, path integrals, as well as other hypothetical structures. These lead towards clarity, completeness and adaptability

  16. Uncertainty estimation of the self-thinning process by Maximum-Entropy Principle

    Science.gov (United States)

    Shoufan Fang; George Z. Gertner

    2000-01-01

    When available information is scarce, the Maximum-Entropy Principle can estimate the distributions of parameters. In our case study, we estimated the distributions of the parameters of the forest self-thinning process based on literature information, and we derived the conditional distribution functions and estimated the 95 percent confidence interval (CI) of the self-...

  17. Maximum principles and sharp constants for solutions of elliptic and parabolic systems

    CERN Document Server

    Kresin, Gershon

    2012-01-01

    The main goal of this book is to present results pertaining to various versions of the maximum principle for elliptic and parabolic systems of arbitrary order. In particular, the authors present necessary and sufficient conditions for validity of the classical maximum modulus principles for systems of second order and obtain sharp constants in inequalities of Miranda-Agmon type and in many other inequalities of a similar nature. Somewhat related to this topic are explicit formulas for the norms and the essential norms of boundary integral operators. The proofs are based on a unified approach using, on one hand, representations of the norms of matrix-valued integral operators whose target spaces are linear and finite dimensional, and, on the other hand, on solving certain finite dimensional optimization problems. This book reflects results obtained by the authors, and can be useful to research mathematicians and graduate students interested in partial differential equations.

  18. Discrete Maximum Principle for Higher-Order Finite Elements in 1D

    Czech Academy of Sciences Publication Activity Database

    Vejchodský, Tomáš; Šolín, Pavel

    2007-01-01

    Roč. 76, č. 260 (2007), s. 1833-1846 ISSN 0025-5718 R&D Projects: GA ČR GP201/04/P021 Institutional research plan: CEZ:AV0Z10190503; CEZ:AV0Z20760514 Keywords : discrete maximum principle * discrete Green's function * higher-order elements Subject RIV: BA - General Mathematics Impact factor: 1.230, year: 2007

  19. Variational principles for collective motion: Relation between invariance principle of the Schroedinger equation and the trace variational principle

    International Nuclear Information System (INIS)

    Klein, A.; Tanabe, K.

    1984-01-01

    The invariance principle of the Schroedinger equation provides a basis for theories of collective motion with the help of the time-dependent variational principle. It is formulated here with maximum generality, requiring only the motion of the intrinsic state in the collective space. Special cases arise when the trial vector is a generalized coherent state and when it is a uniform superposition of collective eigenstates. The latter example yields variational principles uncovered previously only within the framework of the equations of motion method. (orig.)

  20. Gauge theories under incorporation of a generalized uncertainty principle

    International Nuclear Information System (INIS)

    Kober, Martin

    2010-01-01

    We consider an extension of gauge theories based on the assumption of a generalized uncertainty principle that implies a minimal length scale. A modification of the usual uncertainty principle leads to a modified form of matter field equations such as the Dirac equation. If invariance of such a generalized field equation under local gauge transformations is postulated, the usual covariant derivative containing the gauge potential has to be replaced by a generalized covariant derivative. This leads to a generalized interaction between the matter field and the gauge field as well as to an additional self-interaction of the gauge field. Since the existence of a minimal length scale seems to be a necessary assumption of any consistent quantum theory of gravity, the gauge principle is a constitutive ingredient of the standard model, and even gravity can be described as a gauge theory of local translations or Lorentz transformations, the presented extension of gauge theories appears to be a very important consideration.

  1. The renormalized action principle in quantum field theory

    International Nuclear Information System (INIS)

    Balasin, H.

    1990-03-01

    The renormalized action principle holds a central position in field theory, since it offers a variety of applications. The main concern of this work is the proof of the action principle within the so-called BPHZ-scheme of renormalization. Following the classical proof given by Lam and Lowenstein, some loopholes are detected and closed. The second part of the work deals with the application of the action principle to pure Yang-Mills-theories within the axial gauge (n 2 ≠ 0). With the help of the action principle we investigate the decoupling of the Faddeev-Popov-ghost-fields from the gauge field. The consistency of this procedure, suggested by three-graph approximation, is proven to survive quantization. Finally we deal with the breaking of Lorentz-symmetry caused by the presence of the gauge-direction n. Using BRST-like techniques and the semi-simplicity of the Lorentz-group, it is shown that no new breakings arise from quantization. Again the main step of the proof is provided by the action principle. (Author, shortened by G.Q.)

  2. Principles of general relativity theory in terms of the present day physics

    International Nuclear Information System (INIS)

    Pervushin, V.N.

    1986-01-01

    A history of the gradual unification of general relativity theory and quantum field theory on the basis of unified geometrical principles is traced. Gauge invariance principles have become universal for the construction of all physical theories. Quantum mechanics, electrodynamics and Einstein's theory of gravitation were used to form the geometrical principles. The identity of inertial and gravitational masses is the experimental basis of the general relativity theory (GRT). It is shown that a correct understanding of the foundations of GRT is a developing process, related to the development of present-day physics and stimulating that development

  3. Predicting the Outcome of NBA Playoffs Based on the Maximum Entropy Principle

    Directory of Open Access Journals (Sweden)

    Ge Cheng

    2016-12-01

    Full Text Available Predicting the outcome of National Basketball Association (NBA) matches poses a challenging problem of interest to the research community as well as the general public. In this article, we formalize the problem of predicting NBA game results as a classification problem and apply the principle of Maximum Entropy to construct an NBA Maximum Entropy (NBAME) model that fits discrete statistics for NBA games, and then predict the outcomes of NBA playoffs using the model. Our results reveal that the model is able to predict the winning team with 74.4% accuracy, outperforming other classical machine learning algorithms that could only afford a maximum prediction accuracy of 70.6% in the experiments that we performed.
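    For binary outcomes, a maximum-entropy classifier of the kind described coincides with logistic regression. The toy sketch below is not the NBAME model from the article; the single "point differential" feature and the training data are invented for illustration. It trains the classifier by gradient ascent on the log-likelihood.

```python
import math

def train_maxent(xs, ys, lr=0.1, epochs=2000):
    """Tiny binary maximum-entropy classifier (logistic regression).

    Illustrative sketch only; features and data are hypothetical,
    not the discrete NBA statistics used by the NBAME model.
    """
    w = [0.0] * len(xs[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))
            g = y - p  # gradient of the log-likelihood w.r.t. z
            w = [wi + lr * g * xi for wi, xi in zip(w, x)]
            b += lr * g
    return w, b

def predict(w, b, x):
    """Probability of the positive class under the trained model."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

# hypothetical feature: pre-game point differential (positive favours a win)
xs = [[2.0], [1.5], [0.5], [-0.5], [-1.5], [-2.0]]
ys = [1, 1, 1, 0, 0, 0]
w, b = train_maxent(xs, ys)
```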

  4. Deconvolution in the presence of noise using the Maximum Entropy Principle

    International Nuclear Information System (INIS)

    Steenstrup, S.

    1984-01-01

    The main problem in deconvolution in the presence of noise is nonuniqueness. This problem is overcome by application of the Maximum Entropy Principle. The way the noise enters the formulation of the problem is examined in some detail, and the final equations are derived such that the necessary assumptions become explicit. Examples using X-ray diffraction data are shown. (orig.)

  5. Applications of the principle of maximum entropy: from physics to ecology.

    Science.gov (United States)

    Banavar, Jayanth R; Maritan, Amos; Volkov, Igor

    2010-02-17

    There are numerous situations in physics and other disciplines which can be described at different levels of detail in terms of probability distributions. Such descriptions arise either intrinsically as in quantum mechanics, or because of the vast amount of details necessary for a complete description as, for example, in Brownian motion and in many-body systems. We show that an application of the principle of maximum entropy for estimating the underlying probability distribution can depend on the variables used for describing the system. The choice of characterization of the system carries with it implicit assumptions about fundamental attributes such as whether the system is classical or quantum mechanical or equivalently whether the individuals are distinguishable or indistinguishable. We show that the correct procedure entails the maximization of the relative entropy subject to known constraints and, additionally, requires knowledge of the behavior of the system in the absence of these constraints. We present an application of the principle of maximum entropy to understanding species diversity in ecology and introduce a new statistical ensemble corresponding to the distribution of a variable population of individuals into a set of species not defined a priori.
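    The procedure the authors advocate, maximizing relative entropy subject to constraints given the system's unconstrained behaviour, can be illustrated in a small discrete example. The sketch below (illustrative only, not from the paper) tilts two different priors to the closest distribution, in relative entropy, satisfying the same mean constraint; the two answers differ, showing that the result depends on the description of the system and not on the constraint alone.

```python
import math

def min_kl_with_mean(q, values, mu, lo=-20.0, hi=20.0, tol=1e-12):
    """Closest distribution to prior q (in relative entropy) with mean mu.

    The minimizer is an exponential tilt of the prior,
    p_k proportional to q_k * exp(lam * x_k); the mean is increasing
    in lam, so lam is found by bisection.
    """
    def dist(lam):
        w = [qk * math.exp(lam * x) for qk, x in zip(q, values)]
        z = sum(w)
        return [wk / z for wk in w]

    def mean(lam):
        return sum(x * pk for x, pk in zip(values, dist(lam)))

    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mean(mid) < mu:
            lo = mid
        else:
            hi = mid
    return dist(0.5 * (lo + hi))

# same mean constraint, two different priors -> two different answers
uniform = min_kl_with_mean([1/3, 1/3, 1/3], [1, 2, 3], 2.5)
skewed = min_kl_with_mean([0.6, 0.3, 0.1], [1, 2, 3], 2.5)
```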

  6. Applications of the principle of maximum entropy: from physics to ecology

    International Nuclear Information System (INIS)

    Banavar, Jayanth R; Volkov, Igor; Maritan, Amos

    2010-01-01

    There are numerous situations in physics and other disciplines which can be described at different levels of detail in terms of probability distributions. Such descriptions arise either intrinsically as in quantum mechanics, or because of the vast amount of details necessary for a complete description as, for example, in Brownian motion and in many-body systems. We show that an application of the principle of maximum entropy for estimating the underlying probability distribution can depend on the variables used for describing the system. The choice of characterization of the system carries with it implicit assumptions about fundamental attributes such as whether the system is classical or quantum mechanical or equivalently whether the individuals are distinguishable or indistinguishable. We show that the correct procedure entails the maximization of the relative entropy subject to known constraints and, additionally, requires knowledge of the behavior of the system in the absence of these constraints. We present an application of the principle of maximum entropy to understanding species diversity in ecology and introduce a new statistical ensemble corresponding to the distribution of a variable population of individuals into a set of species not defined a priori. (topical review)

  7. Computer-based teaching module design: principles derived from learning theories.

    Science.gov (United States)

    Lau, K H Vincent

    2014-03-01

    The computer-based teaching module (CBTM), which has recently gained prominence in medical education, is a teaching format in which a multimedia program serves as a single source for knowledge acquisition rather than playing an adjunctive role as it does in computer-assisted learning (CAL). Despite empirical validation in the past decade, there is limited research into the optimisation of CBTM design. This review aims to summarise research in classic and modern multimedia-specific learning theories applied to computer learning, and to collapse the findings into a set of design principles to guide the development of CBTMs. Scopus was searched for: (i) studies of classic cognitivism, constructivism and behaviourism theories (search terms: 'cognitive theory' OR 'constructivism theory' OR 'behaviourism theory' AND 'e-learning' OR 'web-based learning') and their sub-theories applied to computer learning, and (ii) recent studies of modern learning theories applied to computer learning (search terms: 'learning theory' AND 'e-learning' OR 'web-based learning') for articles published between 1990 and 2012. The first search identified 29 studies, dominated in topic by the cognitive load, elaboration and scaffolding theories. The second search identified 139 studies, with diverse topics in connectivism, discovery and technical scaffolding. Based on their relative representation in the literature, the applications of these theories were collapsed into a list of CBTM design principles. Ten principles were identified and categorised into three levels of design: the global level (managing objectives, framing, minimising technical load); the rhetoric level (optimising modality, making modality explicit, scaffolding, elaboration, spaced repeating), and the detail level (managing text, managing devices). This review examined the literature on the application of learning theories to CAL to develop a set of principles that guide CBTM design. Further research will enable educators to

  8. Stochastic control theory dynamic programming principle

    CERN Document Server

    Nisio, Makiko

    2015-01-01

    This book offers a systematic introduction to the optimal stochastic control theory via the dynamic programming principle, which is a powerful tool to analyze control problems. First we consider completely observable control problems with finite horizons. Using a time discretization we construct a nonlinear semigroup related to the dynamic programming principle (DPP), whose generator provides the Hamilton–Jacobi–Bellman (HJB) equation, and we characterize the value function via the nonlinear semigroup, besides the viscosity solution theory. When we control not only the dynamics of a system but also the terminal time of its evolution, control-stopping problems arise. This problem is treated in the same frameworks, via the nonlinear semigroup. Its results are applicable to the American option price problem. Zero-sum two-player time-homogeneous stochastic differential games and viscosity solutions of the Isaacs equations arising from such games are studied via a nonlinear semigroup related to DPP (the min-ma...
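    In discrete time with a finite state space, the dynamic programming principle reduces to a backward recursion, of which the semigroup construction above is the continuous-time generalization. The following deterministic toy sketch (illustrative only, not the book's stochastic framework; all names and numbers are invented) computes the value function for a small steer-to-target problem.

```python
def dpp_value(T, states, actions, step, reward, terminal):
    """Backward recursion of the dynamic programming principle:

        V_t(x) = max_a [ reward(x, a) + V_{t+1}(step(x, a)) ],  V_T = terminal.

    A deterministic, discrete stand-in for the continuous-time
    stochastic control setting treated in the book.
    """
    V = {x: terminal(x) for x in states}
    for _ in range(T):
        V = {x: max(reward(x, a) + V[step(x, a)] for a in actions)
             for x in states}
    return V

# steer a point on {0,...,5} to the goal state 3 in T = 3 steps;
# each unit of control costs 0.1 and missing the goal costs 10
states = range(6)
actions = (-1, 0, 1)
step = lambda x, a: min(5, max(0, x + a))
reward = lambda x, a: -0.1 * abs(a)
terminal = lambda x: 0.0 if x == 3 else -10.0
V = dpp_value(3, states, actions, step, reward, terminal)
```

From state 0 the optimal policy moves right for three steps, paying 0.1 per step, so V[0] = -0.3, while V[3] = 0 since staying put is free.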

  9. Relativity and equivalence principles in the gauge theory of gravitation

    International Nuclear Information System (INIS)

    Ivanenko, D.; Sardanashvili, G.

    1981-01-01

    The roles of the relativity principle (RP) and the equivalence principle (EP) in the gauge theory of gravity are shown. In the gravitational theory, in the fibre-bundle formalism, RP can be formulated as the requirement of covariance of the equations with respect to the GL + (4, R)(X) gauge group. In that case RP turns out to be identical to the gauge principle in the gauge theory of a group of external symmetries, and the gravitational theory can be constructed directly as a gauge theory. In general relativity theory the equivalence principle supplements RP and serves to describe the transition to special relativity theory in some reference frame. The approach described takes into account that in a gauge theory, besides gauge fields, under conditions of spontaneous symmetry breaking Goldstone and Higgs fields can also arise, to which the gravitational metric field is related; this is a consequence of taking RP into account in the gauge theory of gravitation [ru]

  10. Quantum maximum-entropy principle for closed quantum hydrodynamic transport within a Wigner function formalism

    International Nuclear Information System (INIS)

    Trovato, M.; Reggiani, L.

    2011-01-01

    By introducing a quantum entropy functional of the reduced density matrix, the principle of quantum maximum entropy is asserted as a fundamental principle of quantum statistical mechanics. Accordingly, we develop a comprehensive theoretical formalism to construct rigorously a closed quantum hydrodynamic transport within a Wigner function approach. The theoretical formalism is formulated in both thermodynamic equilibrium and nonequilibrium conditions, and the quantum contributions are obtained by only assuming that the Lagrange multipliers can be expanded in powers of ℏ². In particular, by using an arbitrary number of moments, we prove that (1) on a macroscopic scale all nonlocal effects, compatible with the uncertainty principle, are imputable to high-order spatial derivatives, both of the numerical density n and of the effective temperature T; (2) the results available from the literature in the framework of both a quantum Boltzmann gas and a degenerate quantum Fermi gas are recovered as a particular case; (3) the statistics for the quantum Fermi and Bose gases at different levels of degeneracy are explicitly incorporated; (4) a set of relevant applications admitting exact analytical equations are explicitly given and discussed; (5) the quantum maximum entropy principle keeps full validity in the classical limit, when ℏ → 0.

  11. The Maximum Entropy Production Principle: Its Theoretical Foundations and Applications to the Earth System

    Directory of Open Access Journals (Sweden)

    Axel Kleidon

    2010-03-01

    Full Text Available The Maximum Entropy Production (MEP principle has been remarkably successful in producing accurate predictions for non-equilibrium states. We argue that this is because the MEP principle is an effective inference procedure that produces the best predictions from the available information. Since all Earth system processes are subject to the conservation of energy, mass and momentum, we argue that in practical terms the MEP principle should be applied to Earth system processes in terms of the already established framework of non-equilibrium thermodynamics, with the assumption of local thermodynamic equilibrium at the appropriate scales.

  12. Maximum-entropy clustering algorithm and its global convergence analysis

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    Constructing a batch of differentiable entropy functions to uniformly approximate an objective function by means of the maximum-entropy principle, a new clustering algorithm, called the maximum-entropy clustering algorithm, is proposed based on optimization theory. This algorithm is a soft generalization of the hard C-means algorithm and possesses global convergence. Its relations with other clustering algorithms are discussed.
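    The soft-assignment structure of such a maximum-entropy clustering scheme can be sketched in a few lines. The one-dimensional example below is illustrative only and is not the algorithm of the paper (whose entropy approximations differ): memberships are Gibbs weights exp(-β·d²), and as the inverse temperature β grows the assignments harden into those of the classical hard C-means algorithm.

```python
import math

def maxent_cluster(points, centers, beta=5.0, iters=50):
    """Soft (maximum-entropy style) clustering sketch in one dimension.

    Alternates Gibbs-weighted memberships with membership-weighted
    center updates; a toy stand-in, not the paper's algorithm.
    """
    for _ in range(iters):
        # E-step: soft memberships from squared distances
        memberships = []
        for x in points:
            w = [math.exp(-beta * (x - c) ** 2) for c in centers]
            z = sum(w)
            memberships.append([wi / z for wi in w])
        # M-step: centers as membership-weighted means
        for j in range(len(centers)):
            num = sum(m[j] * x for m, x in zip(memberships, points))
            den = sum(m[j] for m in memberships)
            centers[j] = num / den
    return centers

# two well-separated 1-D clusters around 0.2 and 5.2
centers = maxent_cluster([0.0, 0.2, 0.4, 5.0, 5.2, 5.4], [1.0, 4.0])
```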

  13. A survey of variational principles

    International Nuclear Information System (INIS)

    Lewins, J.D.

    1993-01-01

    The survey of variational principles has ranged widely from its starting point in the Lagrange multiplier to optimisation principles. In an age of digital computation, these classic methods can be adapted to improve such calculations. We emphasize particularly the advantage of basing finite element methods on variational principles, especially if, as maximum and minimum principles, these can provide bounds and hence estimates of accuracy. The non-symmetric (and hence stationary rather than extremum) principles are seen, however, to play a significant role in optimisation theory. (Orig./A.B.)

  14. Principles of General Systems Theory: Some Implications for Higher Education Administration

    Science.gov (United States)

    Gilliland, Martha W.; Gilliland, J. Richard

    1978-01-01

    Three principles of general systems theory are presented and systems theory is distinguished from systems analysis. The principles state that all systems tend to become more disorderly, that they must be diverse in order to be stable, and that only those maximizing their resource utilization for doing useful work will survive. (Author/LBH)

  15. Strong Maximum Principle for Multi-Term Time-Fractional Diffusion Equations and its Application to an Inverse Source Problem

    OpenAIRE

    Liu, Yikan

    2015-01-01

    In this paper, we establish a strong maximum principle for fractional diffusion equations with multiple Caputo derivatives in time, and investigate a related inverse problem of practical importance. Exploiting the solution properties and the involved multinomial Mittag-Leffler functions, we improve the weak maximum principle for the multi-term time-fractional diffusion equation to a stronger one, which is parallel to that for its single-term counterpart as expected. As a direct application, w...

  16. Generalized uncertainty principle and the maximum mass of ideal white dwarfs

    Energy Technology Data Exchange (ETDEWEB)

    Rashidi, Reza, E-mail: reza.rashidi@srttu.edu

    2016-11-15

    The effects of a generalized uncertainty principle on the structure of an ideal white dwarf star are investigated. The equation describing the equilibrium configuration of the star is a generalized form of the Lane–Emden equation. It is proved that the star always has a finite size. It is then argued that the maximum mass of such an ideal white dwarf tends to infinity, as opposed to the conventional case where it has a finite value.
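
    For orientation, the classical Lane–Emden equation θ'' + (2/ξ)θ' + θⁿ = 0 with θ(0) = 1, θ'(0) = 0 can be integrated numerically; its first zero ξ₁ fixes the dimensionless stellar radius. The sketch below uses RK4 and a series expansion to start just off the regular singular point. Note this is the standard equation, not the GUP-generalized form studied in the record.

    ```python
    import numpy as np

    def lane_emden_first_zero(n, h=5e-4):
        """Integrate theta'' + (2/xi) theta' + theta^n = 0, theta(0)=1,
        theta'(0)=0 with RK4 and return the first zero xi_1."""
        xi = 1e-6                  # start just off the regular singular point
        y = np.array([1.0 - xi**2 / 6.0, -xi / 3.0])  # series: theta, theta'
        def f(xi, y):
            theta, dtheta = y
            # clamp theta at 0 so fractional powers stay real near the zero
            return np.array([dtheta, -max(theta, 0.0)**n - 2.0 / xi * dtheta])
        while y[0] > 0.0:
            k1 = f(xi, y)
            k2 = f(xi + h/2, y + h/2 * k1)
            k3 = f(xi + h/2, y + h/2 * k2)
            k4 = f(xi + h, y + h * k3)
            y = y + h/6 * (k1 + 2*k2 + 2*k3 + k4)
            xi += h
        return xi

    xi1 = lane_emden_first_zero(1.5)   # n = 3/2: non-relativistic degenerate gas
    ```

    For n = 1 the analytic solution is θ = sin(ξ)/ξ with first zero π, which makes a convenient sanity check.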

  17. Peyronie's Reconstruction for Maximum Length and Girth Gain: Geometrical Principles

    Directory of Open Access Journals (Sweden)

    Paulo H. Egydio

    2008-01-01

    Full Text Available Peyronie's disease has been associated with penile shortening and some degree of erectile dysfunction. Surgical reconstruction should aim at a functional penis, that is, one straightened with rigidity sufficient for sexual intercourse. The procedure should be discussed preoperatively in terms of length and girth reconstruction in order to improve patient satisfaction. Tunical reconstruction for maximum restoration of penile length and girth should be based on the maximum possible length of the dissected neurovascular bundle and on the application of geometrical principles to define the precise site and size of the tunical incision and grafting procedure. As penile rectification and rigidity are required to achieve complete functional restoration of the penis, and 20 to 54% of patients experience associated erectile dysfunction, penile straightening alone may not be enough to provide complete functional restoration. Therefore, phosphodiesterase inhibitors, self-injection, or a penile prosthesis may need to be added in some cases.

  18. Maximum Entropy Fundamentals

    Directory of Open Access Journals (Sweden)

    F. Topsøe

    2001-09-01

    Full Text Available Abstract: In its modern formulation, the Maximum Entropy Principle was promoted by E.T. Jaynes, starting in the mid-fifties. The principle dictates that one should look for a distribution, consistent with available information, which maximizes the entropy. However, this principle focuses only on distributions and it appears advantageous to bring information theoretical thinking more prominently into play by also focusing on the "observer" and on coding. This view was brought forward by the second named author in the late seventies and is the view we will follow up on here. It leads to the consideration of a certain game, the Code Length Game and, via standard game theoretical thinking, to a principle of Game Theoretical Equilibrium. This principle is more basic than the Maximum Entropy Principle in the sense that the search for one type of optimal strategies in the Code Length Game translates directly into the search for distributions with maximum entropy. In the present paper we offer a self-contained and comprehensive treatment of fundamentals of both principles mentioned, based on a study of the Code Length Game. Though new concepts and results are presented, the reading should be instructional and accessible to a rather wide audience, at least if certain mathematical details are left aside at a first reading. The most frequently studied instance of entropy maximization pertains to the Mean Energy Model which involves a moment constraint related to a given function, here taken to represent "energy". This type of application is very well known from the literature with hundreds of applications pertaining to several different fields and will also here serve as important illustration of the theory. But our approach reaches further, especially regarding the study of continuity properties of the entropy function, and this leads to new results which allow a discussion of models with so-called entropy loss. These results have tempted us to speculate over
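
    The Mean Energy Model mentioned in the abstract can be illustrated concretely: among all distributions on a finite state space with a prescribed mean energy, the entropy maximizer has Gibbs form p_i ∝ exp(−λE_i), with λ fixed by the constraint. A minimal sketch (all names are illustrative) finds λ by bisection, using the fact that the mean energy is monotone decreasing in λ:

    ```python
    import numpy as np

    def maxent_dist(E, target_mean):
        """Maximum-entropy distribution over states with energies E subject to
        sum(p*E) = target_mean; the maximizer is the Gibbs form p ~ exp(-lam*E)."""
        E = np.asarray(E, dtype=float)
        def mean_energy(lam):
            w = np.exp(-lam * (E - E.min()))   # shift for numerical stability
            p = w / w.sum()
            return (p * E).sum(), p
        lo, hi = -50.0, 50.0                   # mean energy is decreasing in lam
        for _ in range(200):
            mid = 0.5 * (lo + hi)
            m, p = mean_energy(mid)
            if m > target_mean:
                lo = mid                       # need a larger multiplier
            else:
                hi = mid
        return p

    p = maxent_dist([0.0, 1.0, 2.0], 0.5)      # three-level "energy" example
    ```

    With a target mean below the uniform average, λ comes out positive and the probabilities decay with energy, as expected of a Gibbs distribution.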

  19. Hydrodynamic equations for electrons in graphene obtained from the maximum entropy principle

    Energy Technology Data Exchange (ETDEWEB)

    Barletti, Luigi, E-mail: luigi.barletti@unifi.it [Dipartimento di Matematica e Informatica “Ulisse Dini”, Università degli Studi di Firenze, Viale Morgagni 67/A, 50134 Firenze (Italy)

    2014-08-15

    The maximum entropy principle is applied to the formal derivation of isothermal, Euler-like equations for semiclassical fermions (electrons and holes) in graphene. After proving general mathematical properties of the equations so obtained, their asymptotic form corresponding to significant physical regimes is investigated. In particular, the diffusive regime, the Maxwell-Boltzmann regime (high temperature), the collimation regime and the degenerate gas limit (vanishing temperature) are considered.

  20. A maximum-principle preserving finite element method for scalar conservation equations

    KAUST Repository

    Guermond, Jean-Luc; Nazarov, Murtazo

    2014-01-01

    This paper introduces a first-order viscosity method for the explicit approximation of scalar conservation equations with Lipschitz fluxes using continuous finite elements on arbitrary grids in any space dimension. Provided the lumped mass matrix is positive definite, the method is shown to satisfy the local maximum principle under a usual CFL condition. The method is independent of the cell type; for instance, the mesh can be a combination of tetrahedra, hexahedra, and prisms in three space dimensions. © 2014 Elsevier B.V.
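
    A 1-D finite-difference analogue of such a first-order viscosity method (a stand-in, not the authors' finite element scheme) is the Lax–Friedrichs update: under the CFL condition it is a convex combination of neighbouring values, so the discrete maximum principle holds and min(u), max(u) cannot grow.

    ```python
    import numpy as np

    def llf_step(u, flux, dt, dx):
        """One Lax-Friedrichs step with periodic boundaries. Under the CFL
        condition dt*max|f'(u)|/dx <= 1 the update is a convex combination of
        the two neighbours, hence it satisfies the discrete maximum principle."""
        up, um = np.roll(u, -1), np.roll(u, 1)
        return 0.5 * (up + um) - dt / (2.0 * dx) * (flux(up) - flux(um))

    # Burgers' equation u_t + (u^2/2)_x = 0, smooth initial data in [0.1, 0.9]
    N = 200
    x = np.linspace(0.0, 1.0, N, endpoint=False)
    u = 0.5 + 0.4 * np.sin(2.0 * np.pi * x)
    dx = 1.0 / N
    dt = 0.5 * dx / np.abs(u).max()     # CFL number 0.5
    for _ in range(100):
        u = llf_step(u, lambda v: 0.5 * v**2, dt, dx)
    ```

    Since the maximum principle keeps the solution inside the initial range, the CFL condition computed from the initial data remains valid for every subsequent step.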

  1. A maximum-principle preserving finite element method for scalar conservation equations

    KAUST Repository

    Guermond, Jean-Luc

    2014-04-01

    This paper introduces a first-order viscosity method for the explicit approximation of scalar conservation equations with Lipschitz fluxes using continuous finite elements on arbitrary grids in any space dimension. Provided the lumped mass matrix is positive definite, the method is shown to satisfy the local maximum principle under a usual CFL condition. The method is independent of the cell type; for instance, the mesh can be a combination of tetrahedra, hexahedra, and prisms in three space dimensions. © 2014 Elsevier B.V.

  2. Drying principles and theory: An overview

    International Nuclear Information System (INIS)

    Ekechukwu, O.V.

    1995-10-01

    A comprehensive review of the fundamental principles and theories governing the drying process is presented. Basic definitions are given. The development of contemporary models of drying of agricultural products are traced from the earliest reported sorption and moisture equilibrium models, through the single kernel of product models to the thin layer and deep bed drying analysis. (author). 29 refs, 10 figs

  3. Dispersion correction derived from first principles for density functional theory and Hartree-Fock theory.

    Science.gov (United States)

    Guidez, Emilie B; Gordon, Mark S

    2015-03-12

    The modeling of dispersion interactions in density functional theory (DFT) is commonly performed using an energy correction that involves empirically fitted parameters for all atom pairs of the system investigated. In this study, the first-principles-derived dispersion energy from the effective fragment potential (EFP) method is implemented for the density functional theory (DFT-D(EFP)) and Hartree-Fock (HF-D(EFP)) energies. Overall, DFT-D(EFP) performs similarly to the semiempirical DFT-D corrections for the test cases investigated in this work. HF-D(EFP) tends to underestimate binding energies and overestimate intermolecular equilibrium distances, relative to coupled cluster theory, most likely due to incomplete accounting for electron correlation. Overall, this first-principles dispersion correction yields results that are in good agreement with coupled-cluster calculations at a low computational cost.

  4. A Stochastic Maximum Principle for General Mean-Field Systems

    International Nuclear Information System (INIS)

    Buckdahn, Rainer; Li, Juan; Ma, Jin

    2016-01-01

    In this paper we study the optimal control problem for a class of general mean-field stochastic differential equations, in which the coefficients depend nonlinearly on both the state process and its law. In particular, we assume that the control set is a general open set that is not necessarily convex, and the coefficients are only continuous in the control variable without any further regularity or convexity. We validate the approach of Peng (SIAM J Control Optim 2(4):966–979, 1990) by considering the second-order variational equations and the corresponding second-order adjoint process in this setting, and we extend the Stochastic Maximum Principle of Buckdahn et al. (Appl Math Optim 64(2):197–216, 2011) to this general case.

  5. A Stochastic Maximum Principle for General Mean-Field Systems

    Energy Technology Data Exchange (ETDEWEB)

    Buckdahn, Rainer, E-mail: Rainer.Buckdahn@univ-brest.fr [Université de Bretagne-Occidentale, Département de Mathématiques (France); Li, Juan, E-mail: juanli@sdu.edu.cn [Shandong University, Weihai, School of Mathematics and Statistics (China); Ma, Jin, E-mail: jinma@usc.edu [University of Southern California, Department of Mathematics (United States)

    2016-12-15

    In this paper we study the optimal control problem for a class of general mean-field stochastic differential equations, in which the coefficients depend nonlinearly on both the state process and its law. In particular, we assume that the control set is a general open set that is not necessarily convex, and the coefficients are only continuous in the control variable without any further regularity or convexity. We validate the approach of Peng (SIAM J Control Optim 2(4):966–979, 1990) by considering the second-order variational equations and the corresponding second-order adjoint process in this setting, and we extend the Stochastic Maximum Principle of Buckdahn et al. (Appl Math Optim 64(2):197–216, 2011) to this general case.

  6. Variational principle for a prototype Rastall theory of gravitation

    International Nuclear Information System (INIS)

    Smalley, L.L.

    1984-01-01

    A prototype of Rastall's theory of gravity, in which the divergence of the energy-momentum tensor is proportional to the gradient of the scalar curvature, is shown to be derivable from a variational principle. Both the proportionality factor and the unrenormalized gravitational constant are found to be covariantly constant, but not necessarily constant. The prototype theory is, therefore, a gravitational theory with variable gravitational constant

  7. On the use Pontryagin's maximum principle in the reactor profiling problem

    International Nuclear Information System (INIS)

    Silko, P.P.

    1976-01-01

    The problem of approximating a given optimal power profile in nuclear reactors is posed as a physical profiling problem in the framework of the theory of optimal processes. The concentration of the profiling substance must be distributed in the core so that the resulting power profile is as near as possible to the given one. It is suggested that the original system of differential equations describing the behaviour of neutrons in a reactor, together with some applied requirements, may be written as ordinary first-order differential equations. An integral quadratic criterion evaluating the deviation of the obtained power profile from the given one is used as the objective function. The initial state is given, and the control aim is to transfer the control object from the initial state to a given set of final states known as the target set. The class of admissible controls consists of measurable functions in the given range. Pontryagin's maximum principle is used to solve the formulated problem. As an example, the power profile flattening problem is considered, for which a program in Fortran-4 for the 'Minsk-32' computer has been written. The optimal reactor parameters calculated by this program at various boundary values of the control are presented. It is noted that the type of the optimal reactor configuration depends on the boundary values of the control.
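
    For a flavour of how the maximum principle is used computationally, consider a toy problem (far simpler than reactor profiling): minimize ∫₀¹ u²/2 dt subject to ẋ = u, x(0) = 0, x(1) = 1. The Hamiltonian H = pu − u²/2 is maximized at u* = p, the costate equation ṗ = −∂H/∂x = 0 makes p constant, and shooting on its initial value recovers the optimum u ≡ 1:

    ```python
    def shoot(p0, steps=1000):
        """Integrate x' = u with the PMP-optimal control u* = p; here the
        costate equation p' = -dH/dx = 0 makes p constant along the trajectory."""
        x, dt = 0.0, 1.0 / steps
        for _ in range(steps):
            x += dt * p0          # u* = argmax_u [p*u - u^2/2] = p
        return x

    # bisect on the initial costate so that the trajectory hits x(1) = 1
    lo, hi = 0.0, 5.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if shoot(mid) < 1.0:
            lo = mid
        else:
            hi = mid
    p_star = 0.5 * (lo + hi)      # converges to 1.0, i.e. optimal control u = 1
    ```

    The same shoot-and-match structure, with far richer state and costate dynamics, underlies numerical applications of the maximum principle such as the profiling problem above.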

  8. Energy aspect of the correspondence principle in gravitation theory

    International Nuclear Information System (INIS)

    Mitskevich, N.V.; Nesterov, A.I.

    1976-01-01

    The correspondence of different definitions of invariant values in the general relativity theory with the Newton theory is considered. The analysis is carried out in the system of reference of a single Fermi-observer. It turns out that of the values considered the Papapetru pseudotensor only satisfies the correspondence principle

  9. Comments on a derivation and application of the 'maximum entropy production' principle

    International Nuclear Information System (INIS)

    Grinstein, G; Linsker, R

    2007-01-01

    We show that (1) an error invalidates the derivation (Dewar 2005 J. Phys. A: Math. Gen. 38 L371) of the maximum entropy production (MaxEP) principle for systems far from equilibrium, for which the constitutive relations are nonlinear; and (2) the claim (Dewar 2003 J. Phys. A: Math. Gen. 36 631) that the phenomenon of 'self-organized criticality' is a consequence of MaxEP for slowly driven systems is unjustified. (comment)

  10. Discrete maximum principle for FE solutions of the diffusion-reaction problem on prismatic meshes

    Czech Academy of Sciences Publication Activity Database

    Hannukainen, A.; Korotov, S.; Vejchodský, Tomáš

    2009-01-01

    Roč. 226, č. 2 (2009), s. 275-287 ISSN 0377-0427 R&D Projects: GA AV ČR IAA100760702 Institutional research plan: CEZ:AV0Z10190503 Keywords : diffusion-reaction problem * maximum principle * prismatic finite elements Subject RIV: BA - General Mathematics Impact factor: 1.292, year: 2009

  11. A variational principle for Newton-Cartan theory

    International Nuclear Information System (INIS)

    Goenner, H.F.M.

    1984-01-01

    In the framework of a space-time theory of gravitation a variational principle is set up for the gravitational field equations and the equations of motion of matter. The general framework leads to Newton's equations of motion with an unspecified force term and, for irrotational motion, to a restriction on the propagation of the shear tensor along the streamlines of matter. The field equations obtained from the variation are weaker than the standard field equations of Newton-Cartan theory. An application to fluids with shear and bulk viscosity is given. (author)

  12. Topics in Bayesian statistics and maximum entropy

    International Nuclear Information System (INIS)

    Mutihac, R.; Cicuttin, A.; Cerdeira, A.; Stanciulescu, C.

    1998-12-01

    Notions of Bayesian decision theory and maximum entropy methods are reviewed with particular emphasis on probabilistic inference and Bayesian modeling. The axiomatic approach is considered as the best justification of Bayesian analysis and maximum entropy principle applied in natural sciences. Particular emphasis is put on solving the inverse problem in digital image restoration and Bayesian modeling of neural networks. Further topics addressed briefly include language modeling, neutron scattering, multiuser detection and channel equalization in digital communications, genetic information, and Bayesian court decision-making. (author)

  13. Test the principle of maximum entropy in constant sum 2×2 game: Evidence in experimental economics

    International Nuclear Information System (INIS)

    Xu, Bin; Zhang, Hongen; Wang, Zhijian; Zhang, Jianbo

    2012-01-01

    By using laboratory experimental data, we test the uncertainty of strategy types in various competing environments with a two-person constant sum 2×2 game in the social system. It first shows that, in these competing game environments, the outcome of human decision-making obeys the principle of maximum entropy. -- Highlights: ► Test the uncertainty in two-person constant sum games with experimental data. ► At the game level, the constant sum game fits the principle of maximum entropy. ► At the group level, all empirical entropy values are close to theoretical maxima. ► The results can be different for games that are not constant sum.
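
    The entropy comparison reported here can be illustrated on synthetic data (not the authors' experimental data): compute the Shannon entropy of observed action frequencies and compare it with the theoretical maximum log₂ k for k actions.

    ```python
    import numpy as np

    def empirical_entropy(actions, k=2):
        """Shannon entropy (in bits) of observed action frequencies; the
        theoretical maximum for k actions is log2(k)."""
        counts = np.bincount(np.asarray(actions), minlength=k)
        p = counts / counts.sum()
        p = p[p > 0]                       # 0*log(0) = 0 by convention
        return float(-(p * np.log2(p)).sum())

    rng = np.random.default_rng(0)
    H = empirical_entropy(rng.integers(0, 2, 10000))  # near-uniform mixed play
    ```

    For a 2×2 game the maximum is 1 bit; near-uniform mixed play yields an empirical entropy close to that value, while a pure strategy gives entropy 0.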

  14. The argument of the principles in contemporary theory of law: An antipositivist plea

    Directory of Open Access Journals (Sweden)

    José Julián Suárez-Rodríguez

    2012-06-01

    Full Text Available The theory of legal principles today enjoys a resonance unknown in earlier periods of legal science, and several authors have devoted themselves to its formation, each contributing important elements to its configuration. This article presents the characteristics of the contemporary theory of principles and the contributions that the most important authors in the field have made to it. Furthermore, it shows how the theory of principles has developed as an argument against the main theses of legal positivism, the dominant legal culture until the second half of the twentieth century.

  15. Lattice Field Theory with the Sign Problem and the Maximum Entropy Method

    Directory of Open Access Journals (Sweden)

    Masahiro Imachi

    2007-02-01

    Full Text Available Although numerical simulation in lattice field theory is one of the most effective tools to study non-perturbative properties of field theories, it faces serious obstacles coming from the sign problem in some theories such as finite density QCD and lattice field theory with the θ term. We reconsider this problem from the point of view of the maximum entropy method.

  16. Test the principle of maximum entropy in constant sum 2×2 game: Evidence in experimental economics

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Bin, E-mail: xubin211@zju.edu.cn [Experimental Social Science Laboratory, Zhejiang University, Hangzhou, 310058 (China); Public Administration College, Zhejiang Gongshang University, Hangzhou, 310018 (China); Zhang, Hongen, E-mail: hongen777@163.com [Department of Physics, Zhejiang University, Hangzhou, 310027 (China); Wang, Zhijian, E-mail: wangzj@zju.edu.cn [Experimental Social Science Laboratory, Zhejiang University, Hangzhou, 310058 (China); Zhang, Jianbo, E-mail: jbzhang08@zju.edu.cn [Department of Physics, Zhejiang University, Hangzhou, 310027 (China)

    2012-03-19

    By using laboratory experimental data, we test the uncertainty of strategy types in various competing environments with a two-person constant sum 2×2 game in the social system. It first shows that, in these competing game environments, the outcome of human decision-making obeys the principle of maximum entropy. -- Highlights: ► Test the uncertainty in two-person constant sum games with experimental data. ► At the game level, the constant sum game fits the principle of maximum entropy. ► At the group level, all empirical entropy values are close to theoretical maxima. ► The results can be different for games that are not constant sum.

  17. Convex integration theory solutions to the h-principle in geometry and topology

    CERN Document Server

    Spring, David

    1998-01-01

    This book provides a comprehensive study of convex integration theory in immersion-theoretic topology. Convex integration theory, developed originally by M. Gromov, provides general topological methods for solving the h-principle for a wide variety of problems in differential geometry and topology, with applications also to PDE theory and to optimal control theory. Though topological in nature, the theory is based on a precise analytical approximation result for higher order derivatives of functions, proved by M. Gromov. This book is the first to present an exacting record and exposition of all of the basic concepts and technical results of convex integration theory in higher order jet spaces, including the theory of iterated convex hull extensions and the theory of relative h-principles. A second feature of the book is its detailed presentation of applications of the general theory to topics in symplectic topology, divergence free vector fields on 3-manifolds, isometric immersions, totally real embeddings, u...

  18. Principles of Economic Union. An Extension of John Rawls's Theory of Justice

    NARCIS (Netherlands)

    Wolthuis, A.J.

    2017-01-01

    In this article I uncover the principles of justice by which an economic union is to be constituted. For this purpose I extend John Rawls’s constructivist theory of justice to economically integrated societies. With regard to the principles I defend a twofold claim. First, the principles of economic

  19. Discrete maximum principle for Poisson equation with mixed boundary conditions solved by hp-FEM

    Czech Academy of Sciences Publication Activity Database

    Vejchodský, Tomáš; Šolín, P.

    2009-01-01

    Roč. 1, č. 2 (2009), s. 201-214 ISSN 2070-0733 R&D Projects: GA AV ČR IAA100760702; GA ČR(CZ) GA102/07/0496; GA ČR GA102/05/0629 Institutional research plan: CEZ:AV0Z10190503 Keywords : discrete maximum principle * hp-FEM * Poisson equation * mixed boundary conditions Subject RIV: BA - General Mathematics

  20. Principles of the theory of solids

    CERN Document Server

    Ziman, J M

    1972-01-01

    Professor Ziman's classic textbook on the theory of solids was first pulished in 1964. This paperback edition is a reprint of the second edition, which was substantially revised and enlarged in 1972. The value and popularity of this textbook is well attested by reviewers' opinions and by the existence of several foreign language editions, including German, Italian, Spanish, Japanese, Polish and Russian. The book gives a clear exposition of the elements of the physics of perfect crystalline solids. In discussing the principles, the author aims to give students an appreciation of the conditions which are necessary for the appearance of the various phenomena. A self-contained mathematical account is given of the simplest model that will demonstrate each principle. A grounding in quantum mechanics and knowledge of elementary facts about solids is assumed. This is therefore a textbook for advanced undergraduates and is also appropriate for graduate courses.

  1. Two new proofs of the test particle superposition principle of plasma kinetic theory

    International Nuclear Information System (INIS)

    Krommes, J.A.

    1976-01-01

    The test particle superposition principle of plasma kinetic theory is discussed in relation to the recent theory of two-time fluctuations in plasma given by Williams and Oberman. Both a new deductive and a new inductive proof of the principle are presented; the deductive approach appears here for the first time in the literature. The fundamental observation is that two-time expectations of one-body operators are determined completely in terms of the (x,v) phase space density autocorrelation, which to lowest order in the discreteness parameter obeys the linearized Vlasov equation with singular initial condition. For the deductive proof, this equation is solved formally using time-ordered operators, and the solution is then re-arranged into the superposition principle. The inductive proof is simpler than Rostoker's although similar in some ways; it differs in that first-order equations for pair correlation functions need not be invoked. It is pointed out that the superposition principle is also applicable to the short-time theory of neutral fluids

  2. Two new proofs of the test particle superposition principle of plasma kinetic theory

    International Nuclear Information System (INIS)

    Krommes, J.A.

    1975-12-01

    The test particle superposition principle of plasma kinetic theory is discussed in relation to the recent theory of two-time fluctuations in plasma given by Williams and Oberman. Both a new deductive and a new inductive proof of the principle are presented. The fundamental observation is that two-time expectations of one-body operators are determined completely in terms of the (x,v) phase space density autocorrelation, which to lowest order in the discreteness parameter obeys the linearized Vlasov equation with singular initial condition. For the deductive proof, this equation is solved formally using time-ordered operators, and the solution then rearranged into the superposition principle. The inductive proof is simpler than Rostoker's, although similar in some ways; it differs in that first order equations for pair correlation functions need not be invoked. It is pointed out that the superposition principle is also applicable to the short-time theory of neutral fluids

  3. The Nature of the Chemical Process. 1. Symmetry Evolution – Revised Information Theory, Similarity Principle and Ugly Symmetry

    Directory of Open Access Journals (Sweden)

    Shu-Kun Lin

    2001-03-01

    Full Text Available Abstract: Symmetry is a measure of indistinguishability. Similarity is a continuous measure of imperfect symmetry. Lewis' remark that “gain of entropy means loss of information” defines the relationship of entropy and information. Three laws of information theory have been proposed. Labeling by introducing nonsymmetry and formatting by introducing symmetry are defined. The function L (L = ln w, where w is the number of microstates), i.e., the sum of entropy and information, L = S + I, of the universe is a constant (the first law of information theory). The entropy S of the universe tends toward a maximum (the second law of information theory). For a perfectly symmetric static structure, the information is zero and the static entropy is the maximum (the third law of information theory). Based on the Gibbs inequality and the second law of the revised information theory we have proved the similarity principle (a continuous higher similarity−higher entropy relation) after the rejection of the Gibbs paradox, and proved the Curie-Rosen symmetry principle (a higher symmetry−higher stability relation) as a special case of the similarity principle. The principles of information minimization and potential energy minimization are compared. Entropy is the degree of symmetry and information is the degree of nonsymmetry. There are two kinds of symmetries: dynamic and static. Any kind of symmetry will define an entropy and, corresponding to the dynamic and static symmetries, there are static entropy and dynamic entropy. Entropy in thermodynamics is a special kind of dynamic entropy. Any spontaneous process will evolve towards the highest possible symmetry, either dynamic or static or both. Therefore the revised information theory can be applied to characterizing all kinds of structural stability and process spontaneity. Some examples in chemical physics have been given. Spontaneous processes of all kinds of molecular

  4. A Stochastic Maximum Principle for Risk-Sensitive Mean-Field Type Control

    KAUST Repository

    Djehiche, Boualem; Tembine, Hamidou; Tempone, Raul

    2015-01-01

    In this paper we study mean-field type control problems with risk-sensitive performance functionals. We establish a stochastic maximum principle (SMP) for optimal control of stochastic differential equations (SDEs) of mean-field type, in which the drift and the diffusion coefficients as well as the performance functional depend not only on the state and the control but also on the mean of the distribution of the state. Our result extends the risk-sensitive SMP (without mean-field coupling) of Lim and Zhou (2005), derived for feedback (or Markov) type optimal controls, to optimal control problems for non-Markovian dynamics which may be time-inconsistent in the sense that the Bellman optimality principle does not hold. In our approach to the risk-sensitive SMP, the smoothness assumption on the value function imposed in Lim and Zhou (2005) need not be satisfied. For a general action space a Peng-type SMP is derived, specifying the necessary conditions for optimality. Two examples are carried out to illustrate the proposed risk-sensitive mean-field type SMP under linear stochastic dynamics with exponential quadratic cost function. Explicit solutions are given for both mean-field free and mean-field models.

  5. A Stochastic Maximum Principle for Risk-Sensitive Mean-Field Type Control

    KAUST Repository

    Djehiche, Boualem

    2015-02-24

    In this paper we study mean-field type control problems with risk-sensitive performance functionals. We establish a stochastic maximum principle (SMP) for optimal control of stochastic differential equations (SDEs) of mean-field type, in which the drift and the diffusion coefficients as well as the performance functional depend not only on the state and the control but also on the mean of the distribution of the state. Our result extends the risk-sensitive SMP (without mean-field coupling) of Lim and Zhou (2005), derived for feedback (or Markov) type optimal controls, to optimal control problems for non-Markovian dynamics which may be time-inconsistent in the sense that the Bellman optimality principle does not hold. In our approach to the risk-sensitive SMP, the smoothness assumption on the value function imposed in Lim and Zhou (2005) need not be satisfied. For a general action space a Peng-type SMP is derived, specifying the necessary conditions for optimality. Two examples are carried out to illustrate the proposed risk-sensitive mean-field type SMP under linear stochastic dynamics with exponential quadratic cost function. Explicit solutions are given for both mean-field free and mean-field models.

  6. Post-Newtonian approximation of the maximum four-dimensional Yang-Mills gauge theory

    International Nuclear Information System (INIS)

    Smalley, L.L.

    1982-01-01

    We have calculated the post-Newtonian approximation of the maximum four-dimensional Yang-Mills theory proposed by Hsu. The theory contains torsion; however, torsion is not active at the level of the post-Newtonian approximation of the metric. Depending on the nature of the approximation, we obtain the general-relativistic values for the classical Robertson parameters (γ = β = 1), but deviations for the Nordtvedt effect and violations of post-Newtonian conservation laws. We conclude that in its present form the theory is not a viable theory of gravitation

  7. The maximum entropy method of moments and Bayesian probability theory

    Science.gov (United States)

    Bretthorst, G. Larry

    2013-08-01

    The problem of density estimation occurs in many disciplines. For example, in MRI it is often necessary to classify the types of tissues in an image. To perform this classification one must first identify the characteristics of the tissues to be classified. These characteristics might be the intensity of a T1-weighted image; in MRI many other types of characteristic weightings (classifiers) may be generated. In a given tissue type there is no single intensity that characterizes the tissue; rather, there is a distribution of intensities. Often these distributions can be characterized by a Gaussian, but just as often they are much more complicated. Either way, estimating the distribution of intensities is an inference problem. In the case of a Gaussian distribution, one must estimate the mean and standard deviation. However, in the non-Gaussian case the shape of the density function itself must be inferred. Three common techniques for estimating density functions are binned histograms [1, 2], kernel density estimation [3, 4], and the maximum entropy method of moments [5, 6]. In the introduction, the maximum entropy method of moments will be reviewed. Some of its problems and conditions under which it fails will be discussed. Then in later sections, the functional form of the maximum entropy method of moments probability distribution will be incorporated into Bayesian probability theory. It will be shown that Bayesian probability theory solves all of the problems with the maximum entropy method of moments. One gets posterior probabilities for the Lagrange multipliers, and, finally, one can put error bars on the resulting estimated density function.
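
    The maximum entropy method of moments mentioned above can be sketched on a grid: the density matching prescribed moments E[φ_k] = μ_k has the form p ∝ exp(−Σ_k λ_k φ_k(x)), and the multipliers solve a convex dual problem. The sketch below (grid, learning rate, and iteration count are illustrative choices, not from the paper) finds them by plain gradient steps:

    ```python
    import numpy as np

    def maxent_moments(x, features, mu, lr=0.5, iters=5000):
        """Maximum-entropy density on the grid x matching moments E[phi_k] = mu_k.
        The maximizer has the form p ~ exp(-sum_k lam_k phi_k(x)); the multipliers
        are found by gradient steps on the (convex) dual problem."""
        Phi = np.stack([f(x) for f in features], axis=1)  # (n, K) feature matrix
        lam = np.zeros(Phi.shape[1])
        for _ in range(iters):
            w = np.exp(-Phi @ lam)
            p = w / w.sum()
            lam += lr * (p @ Phi - mu)   # raise lam_k while E[phi_k] is too big
        return p

    x = np.linspace(-5.0, 5.0, 201)
    # constrain mean = 0 and second moment = 1: the result approximates a
    # discretized standard Gaussian
    p = maxent_moments(x, [lambda t: t, lambda t: t**2], np.array([0.0, 1.0]))
    ```

    With the first two moments constrained the maximizer is the (discretized) Gaussian; adding higher-order moment constraints is where the failure modes discussed in the record appear.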

  8. Collapsing of multigroup cross sections in optimization problems solved by means of the maximum principle of Pontryagin

    International Nuclear Information System (INIS)

    Anton, V.

    1979-05-01

    A new formulation of multigroup cross-section collapsing, based on the conservation of the point or zone value of the Hamiltonian, is presented. This approach is suited to optimization problems solved by means of the Pontryagin maximum principle. (author)

  9. Noncommutative Common Cause Principles in algebraic quantum field theory

    International Nuclear Information System (INIS)

    Hofer-Szabó, Gábor; Vecsernyés, Péter

    2013-01-01

    States in algebraic quantum field theory “typically” establish correlation between spacelike separated events. Reichenbach's Common Cause Principle, generalized to the quantum field theoretical setting, offers an apt tool to causally account for these superluminal correlations. In the paper we motivate first why commutativity between the common cause and the correlating events should be abandoned in the definition of the common cause. Then we show that the Noncommutative Weak Common Cause Principle holds in algebraic quantum field theory with locally finite degrees of freedom. Namely, for any pair of projections A, B supported in spacelike separated regions V A and V B , respectively, there is a local projection C not necessarily commuting with A and B such that C is supported within the union of the backward light cones of V A and V B and the set {C, C ⊥ } screens off the correlation between A and B.

  10. Ergodicity, Maximum Entropy Production, and Steepest Entropy Ascent in the Proofs of Onsager's Reciprocal Relations

    Science.gov (United States)

    Benfenati, Francesco; Beretta, Gian Paolo

    2018-04-01

    We show that to prove the Onsager relations using the microscopic time reversibility one necessarily has to make an ergodic hypothesis, or a hypothesis closely linked to that. This is true in all the proofs of the Onsager relations in the literature: from the original proof by Onsager, to more advanced proofs in the context of linear response theory and the theory of Markov processes, to the proof in the context of the kinetic theory of gases. The only three proofs that do not require any kind of ergodic hypothesis are based on additional hypotheses on the macroscopic evolution: Ziegler's maximum entropy production principle (MEPP), the principle of time reversal invariance of the entropy production, or the steepest entropy ascent principle (SEAP).

  11. Dynamic Optimization of a Polymer Flooding Process Based on Implicit Discrete Maximum Principle

    Directory of Open Access Journals (Sweden)

    Yang Lei

    2012-01-01

    Full Text Available Polymer flooding is one of the most important technologies for enhanced oil recovery (EOR). In this paper, an optimal control model of distributed parameter systems (DPSs) for polymer injection strategies is established, in which the performance index is the profit to be maximized, the governing equations are the fluid flow equations of polymer flooding, and the inequality constraints limit the polymer concentration and the injection amount. The optimal control model is discretized by a fully implicit finite-difference method. To cope with the discrete optimal control problem (OCP), the necessary conditions for optimality are obtained through application of the calculus of variations and Pontryagin’s discrete maximum principle. A modified gradient method with a new adjoint construction is proposed for the computation of optimal injection strategies. The numerical results of an example illustrate the effectiveness of the proposed method.
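The adjoint-based gradient step used in such methods can be illustrated on a heavily simplified problem. The sketch below applies the discrete maximum principle to a scalar toy system of my own choosing, not the paper's polymer-flooding model: the costates are obtained by a backward sweep, and the control is updated along the gradient of the Hamiltonian.

```python
import numpy as np

# Toy discrete OCP (illustrative, not the reservoir model):
# minimize J = 0.5 * sum_k (x_k^2 + u_k^2), x_{k+1} = x_k + u_k, x_0 = 1.
N, x0, step = 20, 1.0, 0.01
u = np.zeros(N)

def simulate(u):
    x = np.empty(N + 1); x[0] = x0
    for k in range(N):
        x[k + 1] = x[k] + u[k]
    return x

for _ in range(3000):
    x = simulate(u)
    # Backward sweep for the adjoint (costate) variables:
    # p_k = p_{k+1} + x_k, from H_k = 0.5*(x_k^2 + u_k^2) + p_{k+1}(x_k + u_k).
    p = np.empty(N + 1); p[N] = 0.0
    for k in range(N - 1, -1, -1):
        p[k] = p[k + 1] + x[k]
    # Pontryagin stationarity: dH/du_k = u_k + p_{k+1}; descend along it.
    u -= step * (u + p[1:])

x = simulate(u)
J = 0.5 * (np.sum(x[:N]**2) + np.sum(u**2))
```

At convergence the Hamiltonian gradient u_k + p_{k+1} vanishes, which is exactly the discrete maximum principle's first-order condition.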

  12. Weak principle of equivalence and gauge theory of the tetrad gravitational field

    International Nuclear Information System (INIS)

    Tunyak, V.N.

    1978-01-01

    It is shown that, unlike the tetrad formulation of the general relativity theory derived from the requirement of Poincaré group localization, the tetrad gravitation theory corresponding to Treder's formulation of the weak equivalence principle, in which the nongravitational-matter Lagrangian is the direct covariant generalization of the special-relativistic expression to Riemannian space-time, is incompatible with the known method for deriving the gauge theory of the tetrad gravitational field.

  13. Directionality Theory and the Entropic Principle of Natural Selection

    Directory of Open Access Journals (Sweden)

    Lloyd A. Demetrius

    2014-10-01

    Full Text Available Darwinian fitness describes the capacity of an organism to appropriate resources from the environment and to convert these resources into net-offspring production. Studies of competition between related types indicate that fitness is analytically described by entropy, a statistical measure which is positively correlated with population stability, and describes the number of accessible pathways of energy flow between the individuals in the population. Directionality theory is a mathematical model of the evolutionary process based on the concept of evolutionary entropy as the measure of fitness. The theory predicts that the changes which occur as a population evolves from one non-equilibrium steady state to another are described by the following directionality principle, the fundamental theorem of evolution: (a) an increase in evolutionary entropy when resource composition is diverse, and resource abundance constant; (b) a decrease in evolutionary entropy when resource composition is singular, and resource abundance variable. Evolutionary entropy characterizes the dynamics of energy flow between the individual elements in various classes of biological networks: (a) where the units are individuals parameterized by age, and their age-specific fecundity and mortality; (b) where the units are metabolites, and the transitions are the biochemical reactions that convert substrates to products; (c) where the units are social groups, and the forces are the cooperative and competitive interactions between the individual groups. This article reviews the analytical basis of the evolutionary entropic principle, and describes applications of directionality theory to the study of evolutionary dynamics in two biological systems: (i) social networks–the evolution of cooperation; (ii) metabolic networks–the evolution of body size. Statistical thermodynamics is a mathematical model of macroscopic behavior in inanimate matter based on entropy, a statistical measure which

  14. Principle-based concept analysis: intentionality in holistic nursing theories.

    Science.gov (United States)

    Aghebati, Nahid; Mohammadi, Eesa; Ahmadi, Fazlollah; Noaparast, Khosrow Bagheri

    2015-03-01

    This is a report of a principle-based concept analysis of intentionality in holistic nursing theories. A principle-based concept analysis method was used to analyze seven holistic theories. The data included eight books and 31 articles (1998-2011), which were retrieved through MEDLINE and CINAHL. Erickson, Krieger, Parse, Watson, and Zahourek define intentionality as a capacity, a focused consciousness, and a pattern of human being. Rogers and Newman do not explicitly mention intentionality; however, they do explain pattern and consciousness (epistemology). Intentionality has been operationalized as a core concept of nurse-client relationships (pragmatic). The theories are consistent on intentionality as a noun and as an attribute of the person-intentionality is different from intent and intention (linguistic). There is ambiguity concerning the boundaries between intentionality and consciousness (logic). Theoretically, intentionality is an evolutionary capacity to integrate human awareness and experience. Because intentionality is an individualized concept, we introduced it as "a matrix of continuous known changes" that emerges in two forms: as a capacity of human being and as a capacity of transpersonal caring. This study has produced a theoretical definition of intentionality and provides a foundation for future research to further investigate intentionality to better delineate its boundaries. © The Author(s) 2014.

  15. Relationship between Maximum Principle and Dynamic Programming for Stochastic Recursive Optimal Control Problems and Applications

    Directory of Open Access Journals (Sweden)

    Jingtao Shi

    2013-01-01

    Full Text Available This paper is concerned with the relationship between the maximum principle and dynamic programming for stochastic recursive optimal control problems. Under certain differentiability conditions, relations among the adjoint processes, the generalized Hamiltonian function, and the value function are given. A linear-quadratic recursive utility portfolio optimization problem in financial engineering is discussed as an explicit illustration of the main result.
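The deterministic, non-recursive special case of this relationship is easy to verify numerically. In the scalar discrete-time LQ sketch below (my own toy example, not the paper's stochastic recursive setting), the adjoint process of the maximum principle coincides with the derivative of the value function along the optimal trajectory, p_k = V_x(k, x_k) = S_k x_k.

```python
import numpy as np

# Scalar discrete LQ problem (illustrative coefficients):
# x_{k+1} = a x_k + b u_k, cost 0.5*sum(q x_k^2 + r u_k^2) + 0.5*qf x_N^2.
a, b, q, r, qf, N = 0.9, 0.5, 1.0, 0.2, 1.0, 25

# Backward Riccati recursion for the value function V_k(x) = 0.5 * S_k * x^2.
S = np.empty(N + 1); S[N] = qf
K = np.empty(N)
for k in range(N - 1, -1, -1):
    K[k] = a * b * S[k + 1] / (r + b**2 * S[k + 1])   # optimal feedback gain
    S[k] = q + a**2 * S[k + 1] - a * b * S[k + 1] * K[k]

# Forward optimal trajectory, then backward adjoint (costate) sweep
# from the maximum principle: p_N = qf x_N, p_k = q x_k + a p_{k+1}.
x = np.empty(N + 1); x[0] = 1.0
for k in range(N):
    x[k + 1] = (a - b * K[k]) * x[k]
p = np.empty(N + 1); p[N] = qf * x[N]
for k in range(N - 1, -1, -1):
    p[k] = q * x[k] + a * p[k + 1]

# The adjoint equals the value-function derivative along the optimum.
err = np.max(np.abs(p - S * x))
```

This is the textbook relation the paper generalizes; in the stochastic recursive case the same identification holds between the adjoint processes and derivatives of the value function under the differentiability conditions stated above.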

  16. Optimal control and optimal trajectories of regional macroeconomic dynamics based on the Pontryagin maximum principle

    Science.gov (United States)

    Bulgakov, V. K.; Strigunov, V. V.

    2009-05-01

    The Pontryagin maximum principle is used to prove a theorem concerning optimal control in regional macroeconomics. A boundary value problem for optimal trajectories of the state and adjoint variables is formulated, and optimal curves are analyzed. An algorithm is proposed for solving the boundary value problem of optimal control. The performance of the algorithm is demonstrated by computing an optimal control and the corresponding optimal trajectories.
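A minimal sketch of the kind of two-point boundary value problem the maximum principle produces, solved here by simple shooting rather than the authors' algorithm (the model is a scalar LQ toy of my own, not the regional macroeconomic model):

```python
import numpy as np

# Toy problem: minimize  ∫_0^1 (x^2 + u^2)/2 dt,  dx/dt = u, x(0) = 1, x(1) free.
# The maximum principle gives u = -p, so dx/dt = -p, dp/dt = -x, with p(1) = 0.

def integrate(p0, n=1000):
    h = 1.0 / n
    y = np.array([1.0, p0])                 # y = (x, p)
    f = lambda y: np.array([-y[1], -y[0]])
    for _ in range(n):                      # classic RK4 steps
        k1 = f(y); k2 = f(y + h/2*k1); k3 = f(y + h/2*k2); k4 = f(y + h*k3)
        y = y + h/6*(k1 + 2*k2 + 2*k3 + k4)
    return y

def shoot(lo=0.0, hi=2.0, tol=1e-12):
    # Bisection on the unknown initial costate p(0); p(1) is increasing in p(0).
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if integrate(mid)[1] > 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

p0 = shoot()   # analytic answer for this toy problem: p(0) = tanh(1)
```

The boundary value structure (state given at t = 0, costate given at t = T) is exactly what the abstract's algorithm addresses, only in higher dimension.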

  17. Principle of maximum Fisher information from Hardy's axioms applied to statistical systems.

    Science.gov (United States)

    Frieden, B Roy; Gatenby, Robert A

    2013-10-01

    Consider a finite-sized, multidimensional system in parameter state a. The system is either at statistical equilibrium or general nonequilibrium, and may obey either classical or quantum physics. L. Hardy's mathematical axioms provide a basis for the physics obeyed by any such system. One axiom is that the number N of distinguishable states a in the system obeys N=max. This assumes that N is known as deterministic prior knowledge. However, most observed systems suffer statistical fluctuations, for which N is therefore only known approximately. Then what happens if the scope of the axiom N=max is extended to include such observed systems? It is found that the state a of the system must obey a principle of maximum Fisher information, I=I(max). This is important because many physical laws have been derived, assuming as a working hypothesis that I=I(max). These derivations include uses of the principle of extreme physical information (EPI). Examples of such derivations were of the De Broglie wave hypothesis, quantum wave equations, Maxwell's equations, new laws of biology (e.g., of Coulomb force-directed cell development and of in situ cancer growth), and new laws of economic fluctuation and investment. That the principle I=I(max) itself derives from suitably extended Hardy axioms thereby eliminates its need to be assumed in these derivations. Thus, uses of I=I(max) and EPI express physics at its most fundamental level, its axiomatic basis in math.
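For concreteness, the Fisher information appearing in the principle I = I(max) can be computed directly from its definition. The sketch below uses a generic Gaussian location family (an assumption for illustration, not an example from the paper) and recovers the textbook value I = 1/σ².

```python
import numpy as np

# Fisher information of a Gaussian location family p(x|a) = N(x; a, sigma^2),
# I(a) = E[(d/da log p(x|a))^2] = 1/sigma^2, evaluated numerically at a = 0.
sigma = 2.0
x = np.linspace(-20.0, 20.0, 200001)
dx = x[1] - x[0]
p = np.exp(-(x**2) / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
score = x / sigma**2            # d/da log p(x - a) evaluated at a = 0
I = np.sum(score**2 * p) * dx   # numeric Fisher information
```

Broader fluctuations (larger σ) give smaller I, which is why sharply localized statistical systems are the ones singled out by I = I(max).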

  18. Discrete Maximum Principle for a 1D Problem with Piecewise-Constant Coefficients Solved by hp-FEM

    Czech Academy of Sciences Publication Activity Database

    Vejchodský, Tomáš; Šolín, Pavel

    2007-01-01

    Roč. 15, č. 3 (2007), s. 233-243 ISSN 1570-2820 R&D Projects: GA ČR GP201/04/P021; GA ČR GA102/05/0629 Institutional research plan: CEZ:AV0Z10190503; CEZ:AV0Z20570509 Keywords : discrete maximum principle * hp-FEM * Poisson equation Subject RIV: BA - General Mathematics

  19. The canonical equation of adaptive dynamics for life histories: from fitness-returns to selection gradients and Pontryagin's maximum principle.

    Science.gov (United States)

    Metz, Johan A Jacob; Staňková, Kateřina; Johansson, Jacob

    2016-03-01

    This paper should be read as addendum to Dieckmann et al. (J Theor Biol 241:370-389, 2006) and Parvinen et al. (J Math Biol 67: 509-533, 2013). Our goal is, using little more than high-school calculus, to (1) exhibit the form of the canonical equation of adaptive dynamics for classical life history problems, where the examples in Dieckmann et al. (J Theor Biol 241:370-389, 2006) and Parvinen et al. (J Math Biol 67: 509-533, 2013) are chosen such that they avoid a number of the problems that one gets in this most relevant of applications, (2) derive the fitness gradient occurring in the CE from simple fitness return arguments, (3) show explicitly that setting said fitness gradient equal to zero results in the classical marginal value principle from evolutionary ecology, (4) show that the latter in turn is equivalent to Pontryagin's maximum principle, a well known equivalence that however in the literature is given either ex cathedra or is proven with more advanced tools, (5) connect the classical optimisation arguments of life history theory a little better to real biology (Mendelian populations with separate sexes subject to an environmental feedback loop), (6) make a minor improvement to the form of the CE for the examples in Dieckmann et al. and Parvinen et al.

  20. On the role of the equivalence principle in the general relativity theory

    International Nuclear Information System (INIS)

    Gertsenshtein, M.E.; Stanyukovich, K.P.; Pogosyan, V.A.

    1977-01-01

    The conditions under which the solutions of the general relativity theory equations satisfy the correspondence principle are considered. It is shown that in general relativity theory, as in flat space, any system of coordinates satisfying the topological requirements of continuity and uniqueness is admissible. The coordinate transformations must be mutually unique, and the following requirements must be met: the transformations of the coordinates xsup(i)=xsup(i)(anti xsup(k)) must preserve the class of the function, while the transformation Jacobian must be finite and nonzero. The admissible metrics in the Tolman problem for a vacuum are considered. A prohibition of the vacuum solution of the Tolman problem is obtained from the correspondence principle. The correspondence principle is then applied to the solution of the Friedmann problem by constructing a spherically symmetric self-similar solution in which the replacement of compression by expansion occurs at a finite density. The examples adduced show that applying the correspondence principle makes it possible to discard physically inadmissible solutions and to obtain new physical results.

  1. Unification of field theory and maximum entropy methods for learning probability densities

    OpenAIRE

    Kinney, Justin B.

    2014-01-01

    The need to estimate smooth probability distributions (a.k.a. probability densities) from finite sampled data is ubiquitous in science. Many approaches to this problem have been described, but none is yet regarded as providing a definitive solution. Maximum entropy estimation and Bayesian field theory are two such approaches. Both have origins in statistical physics, but the relationship between them has remained unclear. Here I unify these two methods by showing that every maximum entropy de...

  2. On the fundamental principles of the relativistic theory of gravitation

    International Nuclear Information System (INIS)

    Logunov, A.A.; Mestvirishvili, M.A.

    1990-01-01

    This paper expounds consistently, within the framework of the Special Relativity Theory, the fundamental postulates of the Relativistic Theory of Gravitation (RTG), which make it possible to obtain the unique complete system of equations for the gravitational field. Major attention is paid to the analysis of the gauge group and of the causality principle. Some results related to the evolution of the Friedmann Universe, to gravitational collapse, etc., being consequences of the RTG equations, are also presented. 7 refs

  3. Role of Logic and Mentality as the Basics of Wittgenstein's Picture Theory of Language and Extracting Educational Principles and Methods According to This Theory

    Science.gov (United States)

    Heshi, Kamal Nosrati; Nasrabadi, Hassanali Bakhtiyar

    2016-01-01

    The present paper attempts to recognize principles and methods of education based on Wittgenstein's picture theory of language. This qualitative research utilized an inferential analytical approach to review the related literature and extracted a set of principles and methods from his picture theory of language. Findings revealed that Wittgenstein…

  4. Novel theory of the human brain: information-commutation basis of architecture and principles of operation

    Directory of Open Access Journals (Sweden)

    Bryukhovetskiy AS

    2015-02-01

    Full Text Available Andrey S Bryukhovetskiy Center for Biomedical Technologies, Federal Research and Clinical Center for Specialized Types of Medical Assistance and Medical Technologies of the Federal Medical Biological Agency, NeuroVita Clinic of Interventional and Restorative Neurology and Therapy, Moscow, Russia Abstract: Based on the methodology of the informational approach and research of the genome, proteome, and complete transcriptome profiles of different cells in the nervous tissue of the human brain, the author proposes a new theory of information-commutation organization and architecture of the human brain which is an alternative to the conventional systemic connective morphofunctional paradigm of the brain framework. Informational principles of brain operation are defined: the modular principle, holographic principle, principle of systematicity of vertical commutative connection and complexity of horizontal commutative connection, regulatory principle, relay principle, modulation principle, “illumination” principle, principle of personalized memory and intellect, and principle of low energy consumption. The author demonstrates that the cortex functions only as a switchboard and router of information, while information is processed outside the nervous tissue of the brain in the intermeningeal space. The main structural element of information-commutation in the brain is not the neuron, but information-commutation modules that are subdivided into receiver modules, transmitter modules, and subscriber modules, forming a vertical architecture of nervous tissue in the brain as information lines and information channels, and a horizontal architecture as central, intermediate, and peripheral information-commutation platforms. Information in information-commutation modules is transferred by means of the carriers that are characteristic to the specific information level from inductome to genome, transcriptome, proteome, metabolome, secretome, and magnetome

  5. Application of maximum values for radiation exposure and principles for the calculation of radiation doses

    International Nuclear Information System (INIS)

    2007-08-01

    The guide presents the definitions of equivalent dose and effective dose, the principles for calculating these doses, and instructions for applying their maximum values. The limits (Annual Limit on Intake and Derived Air Concentration) derived from dose limits are also presented for the purpose of monitoring exposure to internal radiation. The calculation of radiation doses caused to a patient from medical research and treatment involving exposure to ionizing radiation is beyond the scope of this ST Guide
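The core calculation such guides prescribe is the tissue-weighted sum behind the effective dose. The sketch below uses the standard definition E = Σ_T w_T H_T; the tissue weights and equivalent doses are made-up illustrative numbers, not the regulatory values from the guide itself.

```python
# Effective dose as the tissue-weighted sum of equivalent doses,
# E = sum_T w_T * H_T. Weights w_T must sum to 1; all numbers below
# are hypothetical, for illustration only.
w = {"lung": 0.12, "stomach": 0.12, "liver": 0.04, "remainder": 0.72}
H = {"lung": 2.0, "stomach": 0.5, "liver": 1.0, "remainder": 0.1}  # mSv

E = sum(w[t] * H[t] for t in w)  # effective dose, mSv
```

Comparing E against the applicable annual limit is then what the monitoring quantities (Annual Limit on Intake, Derived Air Concentration) are derived from.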

  6. Experimental verification of the imposing principle for maximum permissible levels of multicolor laser radiation

    Directory of Open Access Journals (Sweden)

    Ivashin V.A.

    2013-12-01

    Full Text Available Aims. The study presents the results of experimental research verifying the imposing (superposition) principle for maximum permissible levels (MPLs) of single exposure of the eye to multicolor laser radiation. This principle, which asserts the independence of the effects of the radiation at each wavelength, was previously established and generalized to a wide range of exposure conditions; however, as the analysis of the literature shows, it had not been verified experimentally for the impact of laser radiation on the tissues of the eye fundus. Material and methods. The experiments used a laser generating radiation at the wavelengths λ1 = 0.532 μm, λ2 = 0.556 to 0.562 μm, and λ3 = 0.619 to 0.621 μm, and were carried out on the eyes of rabbits with an evenly pigmented fundus. Results. Comparison of the processed experimental data with the calculated data shows that the measured levels are close to the calculated ones. Conclusions. These are the first experimental studies in the Russian Federation of the effect of multicolor laser radiation on the organ of vision. Given the objective agreement between the experimental and calculated data, we conclude that the mathematical formulas hold.

  7. Variational principle for the Bloch unified reaction theory

    International Nuclear Information System (INIS)

    MacDonald, W.; Rapheal, R.

    1975-01-01

    The unified reaction theory formulated by Claude Bloch uses a boundary value operator to write the Schroedinger equation for a scattering state as an inhomogeneous equation over the interaction region. As suggested by Lane and Robson, this equation can be solved by using a matrix representation on any set which is complete over the interaction volume. Lane and Robson have proposed, however, that a variational form of the Bloch equation can be used to obtain a ''best'' value for the S-matrix when a finite subset of this basis is used. The variational principle suggested by Lane and Robson, which gives a many-channel S-matrix different from the matrix solution on a finite basis, is considered first, and it is shown that the difference results from the fact that their variational principle is not, in fact, equivalent to the Bloch equation. Then a variational principle is presented which is fully equivalent to the Bloch form of the Schroedinger equation, and it is shown that the resulting S-matrix is the same as that obtained from the matrix solution of this equation. (U.S.)

  8. Designing the Electronic Classroom: Applying Learning Theory and Ergonomic Design Principles.

    Science.gov (United States)

    Emmons, Mark; Wilkinson, Frances C.

    2001-01-01

    Applies learning theory and ergonomic principles to the design of effective learning environments for library instruction. Discusses features of electronic classroom ergonomics, including the ergonomics of physical space, environmental factors, and workstations; and includes classroom layouts. (Author/LRW)

  9. Physical principles, geometrical aspects, and locality properties of gauge field theories

    International Nuclear Information System (INIS)

    Mack, G.; Hamburg Univ.

    1981-01-01

    Gauge field theories, particularly Yang-Mills theories, are discussed at a classical level from a geometrical point of view. The introductory chapters concentrate on physical principles and mathematical tools. The main part is devoted to locality problems in gauge field theories. Examples show that locality problems originate from two sources in pure Yang-Mills theories (without matter fields). One is topological and the other is related to the existence of degenerate field configurations of the infinitesimal holonomy groups on some extended region of space or space-time. Nondegenerate field configurations in theories with semisimple gauge groups can be analysed with the help of the concept of a local gauge. Such gauges play a central role in the discussion. (author)

  10. Hydrodynamic Relaxation of an Electron Plasma to a Near-Maximum Entropy State

    International Nuclear Information System (INIS)

    Rodgers, D. J.; Servidio, S.; Matthaeus, W. H.; Mitchell, T. B.; Aziz, T.; Montgomery, D. C.

    2009-01-01

    Dynamical relaxation of a pure electron plasma in a Malmberg-Penning trap is studied, comparing experiments, numerical simulations and statistical theories of weakly dissipative two-dimensional (2D) turbulence. Simulations confirm that the dynamics are approximated well by a 2D hydrodynamic model. Statistical analysis favors a theoretical picture of relaxation to a near-maximum entropy state with constrained energy, circulation, and angular momentum. This provides evidence that 2D electron fluid relaxation in a turbulent regime is governed by principles of maximum entropy.

  11. Detailed balance principle and finite-difference stochastic equation in a field theory

    International Nuclear Information System (INIS)

    Kozhamkulov, T.A.

    1986-01-01

    A finite-difference equation, which is a generalization of the Langevin equation in field theory, is obtained based on the principle of detailed balance for the Markov chain. Advantages of the present approach as compared with the conventional Parisi-Wu method are shown for the examples of an exactly solvable problem in zero-dimensional quantum theory and a simple numerical simulation.

  12. Classical field theory in the space of reference frames. [Space-time manifold, action principle

    Energy Technology Data Exchange (ETDEWEB)

    Toller, M [Dipartimento di Matematica e Fisica, Libera Universita, Trento (Italy)

    1978-03-11

    The formalism of classical field theory is generalized by replacing the space-time manifold M by the ten-dimensional manifold S of all the local reference frames. The geometry of the manifold S is determined by ten vector fields corresponding to ten operationally defined infinitesimal transformations of the reference frames. The action principle is written in terms of a differential 4-form in the space S (the Lagrangian form). Densities and currents are represented by differential 3-forms in S. The field equations and the connection between symmetries and conservation laws (Noether's theorem) are derived from the action principle. Einstein's theory of gravitation and Maxwell's theory of electromagnetism are reformulated in this language. The general formalism can also be used to formulate theories in which charge, energy and momentum cannot be localized in space-time and even theories in which a space-time manifold cannot be defined exactly in any useful way.

  13. The gauge principle vs. the equivalence principle

    International Nuclear Information System (INIS)

    Gates, S.J. Jr.

    1984-01-01

    Within the context of field theory, it is argued that the role of the equivalence principle may be replaced by the principle of gauge invariance to provide a logical framework for theories of gravitation

  14. Maximum Entropy Estimation of Transition Probabilities of Reversible Markov Chains

    Directory of Open Access Journals (Sweden)

    Erik Van der Straeten

    2009-11-01

    Full Text Available In this paper, we develop a general theory for the estimation of the transition probabilities of reversible Markov chains using the maximum entropy principle. A broad range of physical models can be studied within this approach. We use one-dimensional classical spin systems to illustrate the theoretical ideas. The examples studied in this paper are: the Ising model, the Potts model and the Blume-Emery-Griffiths model.
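A toy two-state version of the estimation problem (my own illustration, not the spin-system calculations of the paper) already shows the mechanics: among reversible chains with a prescribed stationary distribution, the entropy rate is maximized by the memoryless chain with P_ij = π_j.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Two-state reversible Markov chain with assumed stationary distribution
# pi = (0.7, 0.3). Detailed balance pi_1 * q = pi_2 * r leaves one free
# parameter q = P(1->2); we maximize the entropy rate
#   H = -sum_i pi_i sum_j P_ij log P_ij.
pi = np.array([0.7, 0.3])

def neg_entropy_rate(q):
    r = pi[0] * q / pi[1]          # r = P(2->1) fixed by reversibility
    h = lambda s: -(s * np.log(s) + (1 - s) * np.log(1 - s))
    return -(pi[0] * h(q) + pi[1] * h(r))

res = minimize_scalar(neg_entropy_rate,
                      bounds=(1e-6, pi[1] / pi[0] - 1e-6), method="bounded")
q = res.x   # maximizer: the memoryless chain P_ij = pi_j, i.e. q = pi_2 = 0.3
```

At the maximum, the entropy rate equals the entropy of π itself; the moment constraints of the physical models in the paper (Ising, Potts, Blume-Emery-Griffiths) enter as additional Lagrange terms in the same variational problem.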

  15. An Evolutionary Comparison of the Handicap Principle and Hybrid Equilibrium Theories of Signaling

    Science.gov (United States)

    Kane, Patrick; Zollman, Kevin J. S.

    2015-01-01

    The handicap principle has come under significant challenge both from empirical studies and from theoretical work. As a result, a number of alternative explanations for honest signaling have been proposed. This paper compares the evolutionary plausibility of one such alternative, the “hybrid equilibrium,” to the handicap principle. We utilize computer simulations to compare these two theories as they are instantiated in Maynard Smith’s Sir Philip Sidney game. We conclude that, when both types of communication are possible, evolution is unlikely to lead to handicap signaling and is far more likely to result in the partially honest signaling predicted by hybrid equilibrium theory. PMID:26348617

  16. The principles of quantum theory, from Planck's quanta to the Higgs boson the nature of quantum reality and the spirit of Copenhagen

    CERN Document Server

    Plotnitsky, Arkady

    2016-01-01

    The book considers foundational thinking in quantum theory, focusing on the role of fundamental principles and of principle thinking there, including the thinking that leads to the invention of new principles, which the book contends is one of the ultimate achievements of theoretical thinking in physics and beyond. The focus on principles, prominent during the rise and in the immediate aftermath of quantum theory, has been uncommon in more recent discussions and debates concerning it. The book argues, however, that exploring the fundamental principles and principle thinking is exceptionally helpful in addressing the key issues at stake in quantum foundations and the seemingly interminable debates concerning them. Principle thinking led to major breakthroughs throughout the history of quantum theory, beginning with the old quantum theory and quantum mechanics, the first definitive quantum theory, which it remains within its proper (nonrelativistic) scope. It has, the book also argues, been equally important in qua...

  17. Maximum entropy approach to H-theory: Statistical mechanics of hierarchical systems.

    Science.gov (United States)

    Vasconcelos, Giovani L; Salazar, Domingos S P; Macêdo, A M S

    2018-02-01

    A formalism, called H-theory, is applied to the problem of statistical equilibrium of a hierarchical complex system with multiple time and length scales. In this approach, the system is formally treated as being composed of a small subsystem-representing the region where the measurements are made-in contact with a set of "nested heat reservoirs" corresponding to the hierarchical structure of the system, where the temperatures of the reservoirs are allowed to fluctuate owing to the complex interactions between degrees of freedom at different scales. The probability distribution function (pdf) of the temperature of the reservoir at a given scale, conditioned on the temperature of the reservoir at the next largest scale in the hierarchy, is determined from a maximum entropy principle subject to appropriate constraints that describe the thermal equilibrium properties of the system. The marginal temperature distribution of the innermost reservoir is obtained by integrating over the conditional distributions of all larger scales, and the resulting pdf is written in analytical form in terms of certain special transcendental functions, known as the Fox H functions. The distribution of states of the small subsystem is then computed by averaging the quasiequilibrium Boltzmann distribution over the temperature of the innermost reservoir. This distribution can also be written in terms of H functions. The general family of distributions reported here recovers, as particular cases, the stationary distributions recently obtained by Macêdo et al. [Phys. Rev. E 95, 032315 (2017)10.1103/PhysRevE.95.032315] from a stochastic dynamical approach to the problem.
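The final averaging step has a well-known one-scale analogue from superstatistics, which the sketch below illustrates (a generic check of my own, not the multi-scale Fox H-function result of the paper): averaging the Boltzmann factor over a Gamma-distributed inverse temperature produces a power-law factor, verified here against the closed form given by the Gamma moment generating function.

```python
import numpy as np

# One-scale analogue of "averaging the quasiequilibrium Boltzmann
# distribution over the temperature of the innermost reservoir":
# E_beta[exp(-beta * E)] with beta ~ Gamma(k, theta) equals
# (1 + theta * E)^(-k), a power law (q-exponential). All parameter
# values are illustrative assumptions.
rng = np.random.default_rng(0)
k, theta = 3.0, 0.5
beta = rng.gamma(shape=k, scale=theta, size=2_000_000)

E = 1.7
mc = np.mean(np.exp(-beta * E))     # Monte Carlo average over temperatures
exact = (1.0 + theta * E) ** (-k)   # Gamma MGF evaluated at -E
```

In the paper's hierarchical setting this averaging is iterated over nested reservoirs, which is what generates the H-function families instead of a single power law.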

  18. Size dependence of efficiency at maximum power of heat engine

    KAUST Repository

    Izumida, Y.; Ito, N.

    2013-01-01

    We perform a molecular dynamics computer simulation of a heat engine model to study how the engine size affects its performance. Upon tactically increasing the size of the model anisotropically, we determine that there exists an optimum size at which the model attains the maximum power for the shortest working period. This optimum size lies between the ballistic heat transport region and the diffusive one. We also study the size dependence of the efficiency at maximum power. Interestingly, we find that the efficiency at maximum power around the optimum size attains a value that has been proposed as a universal upper bound, and it even begins to exceed the bound as the size further increases. We explain this behavior of the efficiency at maximum power by using a linear response theory for the heat engine operating under a finite working period, which naturally extends the low-dissipation Carnot cycle model [M. Esposito, R. Kawai, K. Lindenberg, C. Van den Broeck, Phys. Rev. Lett. 105, 150603 (2010)]. The theory also shows that the efficiency at maximum power under an extreme condition may in principle reach the Carnot efficiency. © EDP Sciences, Società Italiana di Fisica, Springer-Verlag 2013.
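The bounds at issue are easy to evaluate numerically. The sketch below checks, for one assumed pair of reservoir temperatures, that the Curzon-Ahlborn value lies between η_C/2 and the proposed universal upper bound η_C/(2 - η_C) from the low-dissipation analysis cited above.

```python
import math

# One assumed operating point (temperatures in kelvin are illustrative).
Tc, Th = 300.0, 500.0
eta_C = 1.0 - Tc / Th                 # Carnot efficiency
eta_CA = 1.0 - math.sqrt(Tc / Th)    # Curzon-Ahlborn efficiency at max power
eta_plus = eta_C / (2.0 - eta_C)     # proposed universal upper bound
```

For these numbers η_C = 0.4, η_CA ≈ 0.225, and η⁺ = 0.25; the abstract's finding is that the simulated efficiency at maximum power can reach, and beyond the optimum size exceed, η⁺.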

  20. An extension theory-based maximum power tracker using a particle swarm optimization algorithm

    International Nuclear Information System (INIS)

    Chao, Kuei-Hsiang

    2014-01-01

    Highlights: • We propose an adaptive maximum power point tracking (MPPT) approach for PV systems. • Transient and steady state performances in the tracking process are improved. • The proposed MPPT can automatically tune the tracking step size along a P–V curve. • A PSO algorithm is used to determine the weighting values of extension theory. - Abstract: The aim of this work is to present an adaptive maximum power point tracking (MPPT) approach for photovoltaic (PV) power generation systems. Integrating extension theory with the conventional perturb and observe method, a maximum power point (MPP) tracker is able to automatically tune its tracking step size through category recognition along a P–V characteristic curve. Accordingly, the transient and steady state performances in the tracking process are improved. Furthermore, an optimization approach based on a particle swarm optimization (PSO) algorithm is proposed to reduce the complexity of determining the weighting values. At the end of this work, the simulated improvement in tracking performance is experimentally validated by an MPP tracker with a programmable system-on-chip (PSoC) based controller.
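
    The core behavior the paper targets — a tracker whose perturbation step is large far from the MPP and small near it — can be sketched without the extension-theory categories or the PSO-tuned weights. Below is an illustrative Python sketch of a perturb-and-observe loop whose step is scaled by a locally estimated |dP/dV|; the PV curve, function names, and all gains are invented for the example.

```python
import math

def pv_power(v, v_oc=40.0, i_sc=8.0):
    """Toy PV curve (illustrative, not a real module model): the current drops
    sharply near the open-circuit voltage, producing a single power peak."""
    i = i_sc * (1.0 - math.exp((v - v_oc) / 3.0))
    return max(v * i, 0.0)

def adaptive_po_mppt(v0=5.0, step=2.0, k=0.1, iters=60):
    """Perturb & observe whose step scales with the local |dP/dV| estimate:
    large steps far from the MPP, small steps near it."""
    v, p, direction = v0, pv_power(v0), 1.0
    for _ in range(iters):
        v_new = v + direction * step
        p_new = pv_power(v_new)
        dp = p_new - p
        if dp < 0:                            # overshot the peak: reverse
            direction = -direction
        grad = abs(dp) / step                 # local |dP/dV| estimate
        step = min(2.0, max(0.01, k * grad))  # adaptive step size
        v, p = v_new, p_new
    return v, p

v_mpp, p_mpp = adaptive_po_mppt()
print(v_mpp, p_mpp)   # settles near the peak of the toy curve (v ≈ 32.5)
```

    The extension-theory tracker in the paper plays the role of the `k * grad` rule here: it classifies the operating point into categories along the P–V curve and assigns a step size to each, with the category weights tuned by PSO.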

  1. First-principles theory of inelastic currents in a scanning tunneling microscope

    DEFF Research Database (Denmark)

    Stokbro, Kurt; Hu, Ben Yu-Kuang; Thirstrup, C.

    1998-01-01

    A first-principles theory of inelastic tunneling between a model probe tip and an atom adsorbed on a surface is presented, extending the elastic tunneling theory of Tersoff and Hamann. The inelastic current is proportional to the change in the local density of states at the center of the tip due to the addition of the adsorbate. We use the theory to investigate the vibrational heating of an adsorbate below a scanning tunneling microscopy tip. We calculate the desorption rate of H from Si(100)-H(2 × 1) as a function of the sample bias and tunnel current, and find excellent agreement with recent...

  2. A review of the generalized uncertainty principle

    International Nuclear Information System (INIS)

    Tawfik, Abdel Nasser; Diab, Abdel Magied

    2015-01-01

    Based on string theory, black hole physics, doubly special relativity and some ‘thought’ experiments, minimal distance and/or maximum momentum are proposed. As alternatives to the generalized uncertainty principle (GUP), the modified dispersion relation, the space noncommutativity, the Lorentz invariance violation, and the quantum-gravity-induced birefringence effects are summarized. The origin of minimal measurable quantities and the different GUP approaches are reviewed and the corresponding observations are analysed. Bounds on the GUP parameter are discussed and implemented in the understanding of recent PLANCK observations of cosmic inflation. The higher-order GUP approaches predict minimal length uncertainty with and without maximum momenta. Possible arguments against the GUP are discussed; for instance, the concern about its compatibility with the equivalence principles, the universality of gravitational redshift and the free fall and law of reciprocal action are addressed. (review)

  3. Reconceptualization of the Diffusion Process: An Application of Selected Principles from Modern Systems Theory.

    Science.gov (United States)

    Silver, Wayne

    A description of the communication behaviors in high innovation societies depends on the application of selected principles from modern systems theory. The first is the principle of equifinality which explains the activities of open systems. If the researcher views society as an open system, he frees himself from the client approach since society…

  4. Principle of detailed balance and the finite-difference stochastic equation in field theory

    International Nuclear Information System (INIS)

    Kozhamkulov, T.A.

    1986-01-01

    The principle of detailed balance for the Markov chain is used to obtain a finite-difference equation which generalizes the Langevin equation in field theory. The advantages of using this approach compared to the conventional Parisi-Wu method are demonstrated for the examples of an exactly solvable problem in zero-dimensional quantum theory and a simple numerical simulation

  5. Calculus of variations and optimal control theory a concise introduction

    CERN Document Server

    Liberzon, Daniel

    2011-01-01

    This textbook offers a concise yet rigorous introduction to calculus of variations and optimal control theory, and is a self-contained resource for graduate students in engineering, applied mathematics, and related subjects. Designed specifically for a one-semester course, the book begins with calculus of variations, preparing the ground for optimal control. It then gives a complete proof of the maximum principle and covers key topics such as the Hamilton-Jacobi-Bellman theory of dynamic programming and linear-quadratic optimal control. Calculus of Variations and Optimal Control Theory also traces the historical development of the subject and features numerous exercises, notes and references at the end of each chapter, and suggestions for further study. Offers a concise yet rigorous introduction Requires limited background in control theory or advanced mathematics Provides a complete proof of the maximum principle Uses consistent notation in the exposition of classical and modern topics Traces the h...

  6. The collapsing of multigroup cross sections in optimization problems solved by means of the Pontryagin maximum principle in nuclear reactor dynamics

    International Nuclear Information System (INIS)

    Anton, V.

    1979-12-01

    The collapsing formulae for the optimization problems solved by means of the Pontryagin maximum principle in nuclear reactor dynamics are presented. A comparison with the corresponding formulae of the static case is given too. (author)

  7. Maximum likely scale estimation

    DEFF Research Database (Denmark)

    Loog, Marco; Pedersen, Kim Steenstrup; Markussen, Bo

    2005-01-01

    A maximum likelihood local scale estimation principle is presented. An actual implementation of the estimation principle uses second order moments of multiple measurements at a fixed location in the image. These measurements consist of Gaussian derivatives possibly taken at several scales and/or ...

  8. Variational principles are a powerful tool also for formulating field theories

    OpenAIRE

    Dell'Isola, Francesco; Placidi, Luca

    2012-01-01

    Variational principles and calculus of variations have always been an important tool for formulating mathematical models for physical phenomena. Variational methods give an efficient and elegant way to formulate and solve mathematical problems that are of interest for scientists and engineers and are the main tool for the axiomatization of physical theories

  9. A theory of timing in scintillation counters based on maximum likelihood estimation

    International Nuclear Information System (INIS)

    Tomitani, Takehiro

    1982-01-01

    A theory of timing in scintillation counters based on the maximum likelihood estimation is presented. An optimum filter that minimizes the variance of timing is described. A simple formula to estimate the variance of timing is presented as a function of photoelectron number, scintillation decay constant and the single electron transit time spread in the photomultiplier. The present method was compared with the theory by E. Gatti and V. Svelto. The proposed method was applied to two simple models and rough estimations of potential time resolution of several scintillators are given. The proposed method is applicable to the timing in Cerenkov counters and semiconductor detectors as well. (author)
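
    A toy version of maximum-likelihood timing conveys the scaling involved. In the simplified model below (invented parameters, and no photomultiplier transit-time spread, unlike the paper's treatment), photon times are t0 plus an exponential decay time; the likelihood is then maximized by the earliest photon time, and the timing variance falls roughly as (τ/N)² with the photoelectron number N.

```python
import random

def timing_variance(n_photons, tau=5.0, trials=4000, seed=1):
    """Toy scintillation timing: photon times are t0 + Exp(tau). With no
    transit-time spread, the ML estimate of t0 is the earliest photon time."""
    rng = random.Random(seed)
    t0 = 100.0
    errs = []
    for _ in range(trials):
        first = min(t0 + rng.expovariate(1.0 / tau) for _ in range(n_photons))
        errs.append(first - t0)           # estimator error for this trial
    mean = sum(errs) / trials
    return sum((e - mean) ** 2 for e in errs) / trials

v10, v40 = timing_variance(10), timing_variance(40)
print(v10 / v40)   # roughly 16: variance scales as (tau/N)^2
```

    With transit-time spread included, the optimum filter of the abstract replaces this simple first-photon rule, but the rapid improvement of timing with photoelectron number persists.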

  10. Application of the principle of maximum conformality to the hadroproduction of the Higgs boson at the LHC

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Sheng-Quan; Wu, Xing-Gang; Brodsky, Stanley J.; Mojaza, Matin

    2016-09-09

    We present improved perturbative QCD (pQCD) predictions for Higgs boson hadroproduction at the LHC by applying the principle of maximum conformality (PMC), a procedure which resums the pQCD series using the renormalization group (RG), thereby eliminating the dependence of the predictions on the choice of the renormalization scheme while minimizing sensitivity to the initial choice of the renormalization scale. In previous pQCD predictions for Higgs boson hadroproduction, it has been conventional to assume that the renormalization scale μ_r of the QCD coupling α_s(μ_r) is the Higgs mass and then to vary this choice over the range m_H/2 < μ_r < 2m_H in order to estimate the theory uncertainty. However, this error estimate is only sensitive to the nonconformal β terms in the pQCD series, and thus it fails to correctly estimate the theory uncertainty in cases where a pQCD series has large higher-order contributions, as is the case for Higgs boson hadroproduction. Furthermore, this ad hoc choice of scale and range gives pQCD predictions which depend on the renormalization scheme being used, in contradiction to basic RG principles. In contrast, after applying the PMC, we obtain next-to-next-to-leading-order RG resummed pQCD predictions for Higgs boson hadroproduction which are renormalization-scheme independent and have minimal sensitivity to the choice of the initial renormalization scale. Taking m_H = 125 GeV, the PMC predictions for the pp → HX Higgs inclusive hadroproduction cross sections at various LHC center-of-mass energies are σ_Incl(7 TeV) = 21.21 (+1.36/−1.32) pb, σ_Incl(8 TeV) = 27.37 (+1.65/−1.59) pb, and σ_Incl(13 TeV) = 65.72 (+3.46/−3.0) pb. We also predict the fiducial cross section σ_fid(pp → H → γγ): σ_fid(7 TeV) = 30.1 (+2.3/−2.2) fb, σ_fid(8 TeV) = 38.3 (+2.9/−2.8) fb, and σ_fid(13 TeV) = 85.8 (+5.7/−5.3) fb. The error limits in these predictions include the small residual high...

  11. Einstein's Equivalence Principle and Invalidity of Thorne's Theory for LIGO

    Directory of Open Access Journals (Sweden)

    Lo C. Y.

    2006-04-01

    Full Text Available The theoretical foundation of LIGO's design is based on the equation of motion derived by Thorne. His formula, motivated by Einstein's theory of measurement, shows that the gravitational wave-induced displacement of a mass with respect to an object is proportional to the distance from the object. On the other hand, based on the observed bending of light and Einstein's equivalence principle, it is concluded that such induced displacement has nothing to do with the distance from another object. It is shown that the derivation of Thorne's formula has invalid assumptions that make it inapplicable to LIGO. This is a good counter example for those who claimed that Einstein's equivalence principle is not important or even irrelevant.

  12. A parametrization of two-dimensional turbulence based on a maximum entropy production principle with a local conservation of energy

    International Nuclear Information System (INIS)

    Chavanis, Pierre-Henri

    2014-01-01

    In the context of two-dimensional (2D) turbulence, we apply the maximum entropy production principle (MEPP) by enforcing a local conservation of energy. This leads to an equation for the vorticity distribution that conserves all the Casimirs, the energy, and that increases monotonically the mixing entropy (H-theorem). Furthermore, the equation for the coarse-grained vorticity dissipates monotonically all the generalized enstrophies. These equations may provide a parametrization of 2D turbulence. They do not generally relax towards the maximum entropy state. The vorticity current vanishes for any steady state of the 2D Euler equation. Interestingly, the equation for the coarse-grained vorticity obtained from the MEPP turns out to coincide, after some algebraic manipulations, with the one obtained with the anticipated vorticity method. This shows a connection between these two approaches when the conservation of energy is treated locally. Furthermore, the newly derived equation, which incorporates a diffusion term and a drift term, has a nice physical interpretation in terms of a selective decay principle. This sheds new light on both the MEPP and the anticipated vorticity method. (paper)

  13. Chemical hardness and density functional theory

    Indian Academy of Sciences (India)

    Unknown

    RALPH G PEARSON, Chemistry Department, University of California, Santa Barbara, CA 93106, USA. Abstract. The concept of chemical hardness is reviewed from a personal point of view. Keywords. Hardness; softness; hard & soft acids and bases (HSAB); principle of maximum hardness (PMH); density functional theory (DFT) ...

  14. Introduction to optimal control theory

    International Nuclear Information System (INIS)

    Agrachev, A.A.

    2002-01-01

    These are lecture notes of the introductory course in Optimal Control theory treated from the geometric point of view. Optimal Control Problem is reduced to the study of controls (and corresponding trajectories) leading to the boundary of attainable sets. We discuss Pontryagin Maximum Principle, basic existence results, and apply these tools to concrete simple optimal control problems. Special sections are devoted to the general theory of linear time-optimal problems and linear-quadratic problems. (author)
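
    Of the topics mentioned — the maximum principle, existence results, and linear-quadratic problems — the last is easy to make concrete. The sketch below (a hypothetical scalar example, not taken from the lecture notes) solves a finite-horizon discrete-time linear-quadratic problem by the backward Riccati recursion of dynamic programming; for this problem class the Pontryagin Maximum Principle yields the same optimal feedback law.

```python
def lqr_gains(a, b, q, r, qT, N):
    """Finite-horizon discrete-time LQR (scalar for clarity): the backward
    Riccati recursion yields optimal feedback gains K_t, with u_t = -K_t x_t."""
    P, gains = qT, []
    for _ in range(N):
        K = (b * P * a) / (r + b * P * b)
        P = q + a * P * (a - b * K)
        gains.append(K)
    return list(reversed(gains))   # gains[t] is the gain for time step t

# hypothetical scalar system x_{t+1} = x_t + u_t, cost = sum of x_t^2 + u_t^2
gains = lqr_gains(a=1.0, b=1.0, q=1.0, r=1.0, qT=1.0, N=50)
x, cost = 1.0, 0.0
for K in gains:
    u = -K * x
    cost += x * x + u * u
    x = x + u
print(cost)   # ≈ 1.618 = P * x0^2, with P the fixed point of the Riccati equation
```

    Solving the same problem via the maximum principle gives an adjoint (costate) system whose solution reproduces exactly these feedback gains, which is the connection the lecture notes develop.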

  15. Information theory explanation of the fluctuation theorem, maximum entropy production and self-organized criticality in non-equilibrium stationary states

    CERN Document Server

    Dewar, R

    2003-01-01

    Jaynes' information theory formalism of statistical mechanics is applied to the stationary states of open, non-equilibrium systems. First, it is shown that the probability distribution p_Γ of the underlying microscopic phase space trajectories Γ over a time interval of length τ satisfies p_Γ ∝ exp(τσ_Γ/2k_B), where σ_Γ is the time-averaged rate of entropy production of Γ. Three consequences of this result are then derived: (1) the fluctuation theorem, which describes the exponentially declining probability of deviations from the second law of thermodynamics as τ → ∞; (2) the selection principle of maximum entropy production for non-equilibrium stationary states, empirical support for which has been found in studies of phenomena as diverse as the Earth's climate and crystal growth morphology; and (3) the emergence of self-organized criticality for flux-driven systems in the slowly-driven limit. The explanation of these results on general inf...

  16. On the foundations of special theory of relativity - II. (The principle of covariance and a basic inertial frame)

    International Nuclear Information System (INIS)

    Gulati, S.P.; Gulati, S.

    1979-01-01

    An attempt has been made to replace the principle of relativity with the principle of covariance. This amounts to modification of the theory of relativity based on the two postulates (i) the principle of covariance and (ii) the light principle. Some of the fundamental results and the laws of relativistic mechanics, electromagnetodynamics and quantum mechanics are re-examined. The principle of invariance is questioned. (A.K.)

  17. Physical Premium Principle: A New Way for Insurance Pricing

    Science.gov (United States)

    Darooneh, Amir H.

    2005-03-01

    In our previous work we suggested a way of computing the non-life insurance premium. The probable surplus of the insurer company is assumed to be distributed according to canonical ensemble theory. The Esscher premium principle appears as a special case. The difference between our method and traditional principles for premium calculation was shown by simulation. Here we construct a theoretical foundation for the main assumption in our method; in this respect we present a new (physical) definition of economic equilibrium. This approach lets us apply the maximum entropy principle to economic systems. We also extend our method to deal with the problem of premium calculation for correlated risk categories. Like the Bühlmann economic premium principle, our method considers the effect of the market on the premium, but in a different way.

  18. A Second-Order Maximum Principle Preserving Lagrange Finite Element Technique for Nonlinear Scalar Conservation Equations

    KAUST Repository

    Guermond, Jean-Luc; Nazarov, Murtazo; Popov, Bojan; Yang, Yong

    2014-01-01

    © 2014 Society for Industrial and Applied Mathematics. This paper proposes an explicit, (at least) second-order, maximum principle satisfying, Lagrange finite element method for solving nonlinear scalar conservation equations. The technique is based on a new viscous bilinear form introduced in Guermond and Nazarov [Comput. Methods Appl. Mech. Engrg., 272 (2014), pp. 198-213], a high-order entropy viscosity method, and the Boris-Book-Zalesak flux correction technique. The algorithm works for arbitrary meshes in any space dimension and for all Lipschitz fluxes. The formal second-order accuracy of the method and its convergence properties are tested on a series of linear and nonlinear benchmark problems.

  19. Theory-generating practice. Proposing a principle for learning design

    DEFF Research Database (Denmark)

    Buhl, Mie

    2016-01-01

    This contribution proposes a principle for learning design – Theory-Generating Practice (TGP) – as an alternative to the way university courses are traditionally taught and structured, with a series of theoretical lectures isolated from practical experience and concluding with an exam or a project... building, and takes tacit knowledge into account. The article introduces TGP, contextualizes it to a Danish tradition of didactics, and discusses it in relation to contemporary conceptual currents of didactic design and learning design. This is followed by a theoretical framing of TGP. Finally, three...

  20. An understanding of human dynamics in urban subway traffic from the Maximum Entropy Principle

    Science.gov (United States)

    Yong, Nuo; Ni, Shunjiang; Shen, Shifei; Ji, Xuewei

    2016-08-01

    We studied the distribution of entry time interval in Beijing subway traffic by analyzing the smart card transaction data, and then deduced the probability distribution function of entry time interval based on the Maximum Entropy Principle. Both theoretical derivation and data statistics indicated that the entry time interval obeys power-law distribution with an exponential cutoff. In addition, we pointed out the constraint conditions for the distribution form and discussed how the constraints affect the distribution function. It is speculated that for bursts and heavy tails in human dynamics, when the fitted power exponent is less than 1.0, it cannot be a pure power-law distribution, but with an exponential cutoff, which may be ignored in the previous studies.
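
    The stated result — that maximizing entropy subject to constraints on the mean and the logarithmic mean yields a power law with an exponential cutoff — can be verified numerically. The sketch below (an illustration with invented parameters, not the paper's smart-card data) solves the dual maximum entropy problem by gradient descent on a discretized support and recovers the exponent and cutoff of a known power-law-with-cutoff target.

```python
import numpy as np

x = np.linspace(0.5, 60.0, 2000)      # discretized support (illustrative range)
logx = np.log(x)

def maxent_dual(m1, mlog, steps=20000, lr=0.01):
    """Fit the maximum entropy distribution with constraints E[x] = m1 and
    E[ln x] = mlog by gradient descent on the Lagrange multipliers (the dual
    problem). The solution has the exponential-family form
    p(x) ∝ exp(-l1*x - l2*ln x) = x^(-l2) * exp(-l1*x):
    a power law with an exponential cutoff."""
    l1, l2 = 0.1, 0.0
    for _ in range(steps):
        w = np.exp(-l1 * x - l2 * logx)
        p = w / w.sum()
        l1 -= lr * (m1 - (p * x).sum())        # dual gradient in l1
        l2 -= lr * (mlog - (p * logx).sum())   # dual gradient in l2
    return l1, l2

# constraint targets taken from a known power law with exponential cutoff:
# p*(x) ∝ x^(-0.8) * exp(-0.2 x)
w_true = x**-0.8 * np.exp(-0.2 * x)
p_true = w_true / w_true.sum()
l1, l2 = maxent_dual((p_true * x).sum(), (p_true * logx).sum())
print(l1, l2)   # recovers approximately (0.2, 0.8)
```

    This is the mechanism behind the abstract's observation: which constraints are imposed determines the distribution family, and the mean-log constraint is what turns a pure exponential into a power law with cutoff.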

  1. The generally covariant locality principle - a new paradigm for local quantum field theory

    International Nuclear Information System (INIS)

    Brunetti, R.; Fredenhagen, K.; Verch, R.

    2002-05-01

    A new approach to the model-independent description of quantum field theories will be introduced in the present work. The main feature of this new approach is to incorporate in a local sense the principle of general covariance of general relativity, thus giving rise to the concept of a locally covariant quantum field theory. Such locally covariant quantum field theories will be described mathematically in terms of covariant functors between the categories, on one side, of globally hyperbolic spacetimes with isometric embeddings as morphisms and, on the other side, of *-algebras with unital injective *-endomorphisms as morphisms. Moreover, locally covariant quantum fields can be described in this framework as natural transformations between certain functors. The usual Haag-Kastler framework of nets of operator-algebras over a fixed spacetime background-manifold, together with covariant automorphic actions of the isometry-group of the background spacetime, can be re-gained from this new approach as a special case. Examples of this new approach are also outlined. In case that a locally covariant quantum field theory obeys the time-slice axiom, one can naturally associate to it certain automorphic actions, called ''relative Cauchy-evolutions'', which describe the dynamical reaction of the quantum field theory to a local change of spacetime background metrics. The functional derivative of a relative Cauchy-evolution with respect to the spacetime metric is found to be a divergence-free quantity which has, as will be demonstrated in an example, the significance of an energy-momentum tensor for the locally covariant quantum field theory. Furthermore, we discuss the functorial properties of state spaces of locally covariant quantum field theories that entail the validity of the principle of local definiteness. (orig.)

  2. On the relativity and equivalence principles in the gauge theory of gravitation

    International Nuclear Information System (INIS)

    Ivanenko, D.; Sardanashvily, G.

    1981-01-01

    One sees that the basic ideas of the gauge gravitation theory are still not generally accepted in spite of more than twenty years of its history. The chief reason lies in the fact that the gauge character of gravity is connected with the whole complex of problems of Einstein General Relativity: the definition of reference systems, the (3+1)-splitting, the presence (or absence) of symmetries in GR, the necessity (or triviality) of general covariance, and the meaning of the equivalence principle, which led Einstein from Special to General Relativity [1]. The real actuality of this complex of interconnected problems is demonstrated by the well-known work of V. Fock, who saw no symmetries in General Relativity, declared the Equivalence principle unnecessary, and even proposed to substitute the designation ''chronogeometry'' for ''general relativity'' (see also P. Havas). Developing this line, H. Bondi quite recently also expressed doubts about the ''relativity'' in the Einstein theory of gravitation. All proposed versions of the gauge gravitation theory must clarify the discrepancy between the Einstein gravitational field, which is a pseudo-Riemannian metric field, and the gauge potentials, which represent connections on some fiber bundles; there exists no group whose gauging would lead to the purely gravitational part of the connection (Christoffel symbols or Fock-Ivanenko-Weyl spinorial coefficients). (author)

  3. Principled Missing Data Treatments.

    Science.gov (United States)

    Lang, Kyle M; Little, Todd D

    2018-04-01

    We review a number of issues regarding missing data treatments for intervention and prevention researchers. Many of the common missing data practices in prevention research are still, unfortunately, ill-advised (e.g., use of listwise and pairwise deletion, insufficient use of auxiliary variables). Our goal is to promote better practice in the handling of missing data. We review the current state of missing data methodology and recent missing data reporting in prevention research. We describe antiquated, ad hoc missing data treatments and discuss their limitations. We discuss two modern, principled missing data treatments: multiple imputation and full information maximum likelihood, and we offer practical tips on how to best employ these methods in prevention research. The principled missing data treatments that we discuss are couched in terms of how they improve causal and statistical inference in the prevention sciences. Our recommendations are firmly grounded in missing data theory and well-validated statistical principles for handling the missing data issues that are ubiquitous in biosocial and prevention research. We augment our broad survey of missing data analysis with references to more exhaustive resources.
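
    The contrast drawn above between ad hoc and principled treatments can be made concrete with a toy example. The sketch below (synthetic data, invented parameters) imposes missingness that depends on an observed covariate (MAR) and compares listwise deletion, which is biased here, against a single regression imputation; the latter stands in for the regression machinery underlying multiple imputation, which would additionally propagate imputation uncertainty, as would FIML.

```python
import random

rng = random.Random(7)
n = 20000
# y depends on x; data are MAR: y is missing more often when x is large
data = []
for _ in range(n):
    x = rng.gauss(0.0, 1.0)
    y = 2.0 * x + rng.gauss(0.0, 1.0)          # true mean of y is 0
    missing = rng.random() < (0.8 if x > 0 else 0.1)
    data.append((x, None if missing else y))

obs = [(x, y) for x, y in data if y is not None]
# listwise deletion: observed y's oversample low x, so this mean is biased low
listwise_mean = sum(y for _, y in obs) / len(obs)

# regression imputation from complete cases (a stand-in for MI/FIML machinery)
mx = sum(x for x, _ in obs) / len(obs)
my = sum(y for _, y in obs) / len(obs)
b = sum((x - mx) * (y - my) for x, y in obs) / sum((x - mx) ** 2 for x, _ in obs)
a = my - b * mx
imputed = [y if y is not None else a + b * x for x, y in data]
imputed_mean = sum(imputed) / n

print(listwise_mean, imputed_mean)   # listwise is biased low; imputation ≈ 0
```

    A full analysis would draw several imputations with added residual noise (multiple imputation) or maximize the likelihood over all observed values directly (FIML), both of which restore valid standard errors as well as unbiased point estimates.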

  4. Ground-state densities from the Rayleigh-Ritz variation principle and from density-functional theory.

    Science.gov (United States)

    Kvaal, Simen; Helgaker, Trygve

    2015-11-14

    The relationship between the densities of ground-state wave functions (i.e., the minimizers of the Rayleigh-Ritz variation principle) and the ground-state densities in density-functional theory (i.e., the minimizers of the Hohenberg-Kohn variation principle) is studied within the framework of convex conjugation, in a generic setting covering molecular systems, solid-state systems, and more. Having introduced admissible density functionals as functionals that produce the exact ground-state energy for a given external potential by minimizing over densities in the Hohenberg-Kohn variation principle, necessary and sufficient conditions on such functionals are established to ensure that the Rayleigh-Ritz ground-state densities and the Hohenberg-Kohn ground-state densities are identical. We apply the results to molecular systems in the Born-Oppenheimer approximation. For any given potential v ∈ L(3/2)(ℝ(3)) + L(∞)(ℝ(3)), we establish a one-to-one correspondence between the mixed ground-state densities of the Rayleigh-Ritz variation principle and the mixed ground-state densities of the Hohenberg-Kohn variation principle when the Lieb density-matrix constrained-search universal density functional is taken as the admissible functional. A similar one-to-one correspondence is established between the pure ground-state densities of the Rayleigh-Ritz variation principle and the pure ground-state densities obtained using the Hohenberg-Kohn variation principle with the Levy-Lieb pure-state constrained-search functional. In other words, all physical ground-state densities (pure or mixed) are recovered with these functionals and no false densities (i.e., minimizing densities that are not physical) exist. The importance of topology (i.e., choice of Banach space of densities and potentials) is emphasized and illustrated. The relevance of these results for current-density-functional theory is examined.

  5. A general maximum entropy framework for thermodynamic variational principles

    International Nuclear Information System (INIS)

    Dewar, Roderick C.

    2014-01-01

    Minimum free energy principles are familiar in equilibrium thermodynamics, as expressions of the second law. They also appear in statistical mechanics as variational approximation schemes, such as the mean-field and steepest-descent approximations. These well-known minimum free energy principles are here unified and extended to any system analyzable by MaxEnt, including non-equilibrium systems. The MaxEnt Lagrangian associated with a generic MaxEnt distribution p defines a generalized potential Ψ for an arbitrary probability distribution p-hat, such that Ψ is a minimum at (p-hat) = p. Minimization of Ψ with respect to p-hat thus constitutes a generic variational principle, and is equivalent to minimizing the Kullback-Leibler divergence between p-hat and p. Illustrative examples of min–Ψ are given for equilibrium and non-equilibrium systems. An interpretation of changes in Ψ is given in terms of the second law, although min–Ψ itself is an intrinsic variational property of MaxEnt that is distinct from the second law
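
    The minimization property described here can be checked in a small example: since Ψ differs from the Kullback-Leibler divergence D(p̂‖p) by a constant, both are minimized exactly at p̂ = p. The sketch below (a three-state toy system with invented energies) scans a one-parameter family of trial distributions and confirms where the minimum sits.

```python
import math

def kl(p_hat, p):
    """Kullback-Leibler divergence D(p_hat || p); the generalized potential
    Psi of the abstract differs from this by a constant, so both are
    minimized at p_hat = p."""
    return sum(q * math.log(q / r) for q, r in zip(p_hat, p) if q > 0)

# a MaxEnt (Boltzmann) distribution p on three states with invented energies
energies = [0.0, 1.0, 2.0]
beta = 1.0
w = [math.exp(-beta * e) for e in energies]
Z = sum(w)
p = [v / Z for v in w]

# scan a one-parameter family of trial distributions p_hat(t),
# interpolating between the uniform distribution (t=0) and p itself (t=1)
best_t, best_kl = None, float("inf")
for i in range(101):
    t = i / 100.0
    p_hat = [(1 - t) / 3 + t * pi for pi in p]
    d = kl(p_hat, p)
    if d < best_kl:
        best_t, best_kl = t, d

print(best_t, best_kl)   # minimum at t = 1.0, where D(p_hat || p) = 0
```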

  7. Optimal control of algae growth by controlling CO 2 and nutrition flow using Pontryagin Maximum Principle

    Science.gov (United States)

    Mardlijah; Jamil, Ahmad; Hanafi, Lukman; Sanjaya, Suharmadi

    2017-09-01

    There are many benefits of algae. One of them is their use for renewable and sustainable energy in the future. Greater algae growth increases biodiesel production, and algae growth is influenced by glucose, nutrients and the photosynthesis process. In this paper, the optimal control problem of algae growth is discussed. The objective function is to maximize the concentration of dry algae, while the controls are the flows of carbon dioxide and nutrition. The solution is obtained by applying the Pontryagin Maximum Principle, and the results show that the concentration of algae increases by more than 15%.

  8. Approach to the nonrelativistic scattering theory based on the causality, superposition and unitarity principles

    International Nuclear Information System (INIS)

    Gajnutdinov, R.Kh.

    1983-01-01

    The possibility is studied of building the nonrelativistic scattering theory on the basis of the general physical principles of causality, superposition, and unitarity, making no use of the Schroedinger formalism. The suggested approach is shown to be more general than the nonrelativistic scattering theory based on the Schroedinger equation. The approach is applied to build a model of the scattering theory for a system which consists of heavy nonrelativistic particles and a light relativistic particle.

  9. Transferring and practicing the correspondence principle in the old quantum theory: Franck, Hund and the Ramsauer effect

    Energy Technology Data Exchange (ETDEWEB)

    Jaehnert, Martin [MPIWG, Berlin (Germany)]

    2013-07-01

    In 1922 Niels Bohr wrote a letter to Arnold Sommerfeld complaining that: ''[i]n the last years my attempts to develop the principles of quantum theory were met with very little understanding.'' Looking for the correspondence idea in publications, one finds that the principle was indeed hardly applied by physicists outside of Copenhagen. Only by 1922 did physicists from the wider research networks of quantum theory start to transfer the principle into their research fields, often far removed from its initial realm of atomic spectroscopy. How and why did physicists suddenly become interested in the idea that Bohr's writings had been promoting since 1918? How was the correspondence principle transferred to these fields, and how did its transfer affect these fields and likewise the correspondence principle itself? To discuss these questions, my talk focuses on the work of James Franck and Friedrich Hund on the Ramsauer effect in 1922 and follows the interrelation of the developing understanding of a newly found effect and the adaptation of the correspondence idea in a new conceptual and sociological context.

  10. Application of maximum values for radiation exposure and principles for the calculation of radiation dose

    International Nuclear Information System (INIS)

    2000-01-01

    The guide sets out the mathematical definitions and principles involved in the calculation of the equivalent dose and the effective dose, and the instructions concerning the application of the maximum values of these quantities. Further, for monitoring the dose caused by internal radiation, the guide defines the limits derived from annual dose limits (the Annual Limit on Intake and the Derived Air Concentration). Finally, the guide defines the operational quantities to be used in estimating the equivalent dose and the effective dose, and also sets out the definitions of some other quantities and concepts to be used in monitoring radiation exposure. The guide does not include the calculation of patient doses carried out for the purposes of quality assurance.
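The derived limits named in this record follow from simple arithmetic. A hedged sketch with placeholder numbers, not values from the guide: the Annual Limit on Intake (ALI) divides an annual dose limit by a nuclide's committed-dose coefficient, and the Derived Air Concentration (DAC) spreads the ALI over the conventional working-year breathing volume (2000 h × 1.2 m³/h = 2400 m³).

```python
# Hypothetical inputs -- the dose coefficient below is an illustrative
# placeholder, not a value taken from the guide or from ICRP tables.
annual_dose_limit_Sv = 0.02      # illustrative annual effective dose limit
e50_Sv_per_Bq = 1.0e-8           # hypothetical committed effective dose per Bq inhaled

ALI_Bq = annual_dose_limit_Sv / e50_Sv_per_Bq   # Annual Limit on Intake
DAC_Bq_per_m3 = ALI_Bq / 2400.0                 # Derived Air Concentration
# (2400 m^3 = 2000 working hours x 1.2 m^3/h reference breathing rate)

assert ALI_Bq == 2.0e6
assert abs(DAC_Bq_per_m3 - 833.33) < 0.01
```
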

  11. Application of maximum values for radiation exposure and principles for the calculation of radiation dose

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2000-07-01

    The guide sets out the mathematical definitions and principles involved in the calculation of the equivalent dose and the effective dose, and the instructions concerning the application of the maximum values of these quantities. Further, for monitoring the dose caused by internal radiation, the guide defines the limits derived from annual dose limits (the Annual Limit on Intake and the Derived Air Concentration). Finally, the guide defines the operational quantities to be used in estimating the equivalent dose and the effective dose, and also sets out the definitions of some other quantities and concepts to be used in monitoring radiation exposure. The guide does not include the calculation of patient doses carried out for the purposes of quality assurance.

  12. The principle of general tovariance

    NARCIS (Netherlands)

    Heunen, C.; Landsman, N.P.; Spitters, B.A.W.; Loja Fernandes, R.; Picken, R.

    2008-01-01

    We tentatively propose two guiding principles for the construction of theories of physics, which should be satisfied by a possible future theory of quantum gravity. These principles are inspired by those that led Einstein to his theory of general relativity, viz. his principle of general covariance

  13. Quantum Field Theoretic Derivation of the Einstein Weak Equivalence Principle Using Emqg Theory

    OpenAIRE

    Ostoma, Tom; Trushyk, Mike

    1999-01-01

    We provide a quantum field theoretic derivation of Einstein's Weak Equivalence Principle of general relativity using a new quantum gravity theory proposed by the authors called Electro-Magnetic Quantum Gravity or EMQG (ref. 1). EMQG is based on a new theory of inertia (ref. 5) proposed by R. Haisch, A. Rueda, and H. Puthoff (which we modified and called Quantum Inertia). Quantum Inertia states that classical Newtonian Inertia is a property of matter due to the strictly local electrical force ...

  14. Physical Premium Principle: A New Way for Insurance Pricing

    Directory of Open Access Journals (Sweden)

    Amir H. Darooneh

    2005-02-01

    Full Text Available Abstract: In our previous work we suggested a way of computing the non-life insurance premium. The probable surplus of the insurer company is assumed to be distributed according to canonical ensemble theory; the Esscher premium principle appears as a special case. The difference between our method and traditional principles for premium calculation was shown by simulation. Here we construct a theoretical foundation for the main assumption in our method; in this respect we present a new (physical) definition of economic equilibrium. This approach lets us apply the maximum entropy principle to economic systems. We also extend our method to deal with the problem of premium calculation for correlated risk categories. Like the Bühlmann economic premium principle, our method considers the effect of the market on the premium, but in a different way.

  15. Principle of maximum entropy for reliability analysis in the design of machine components

    Science.gov (United States)

    Zhang, Yimin

    2018-03-01

    We studied the reliability of machine components with parameters that follow an arbitrary statistical distribution using the principle of maximum entropy (PME). We used PME to select the statistical distribution that best fits the available information. We also established a probability density function (PDF) and a failure probability model for the parameters of mechanical components using the concept of entropy and the PME. We obtained the first four moments of the state function for reliability analysis and design. Furthermore, we attained an estimate of the PDF with the fewest human bias factors using the PME. This function was used to calculate the reliability of the machine components, including a connecting rod, a vehicle half-shaft, a front axle, a rear axle housing, and a leaf spring, which have parameters that typically follow a non-normal distribution. Simulations were conducted for comparison. This study provides a design methodology for the reliability of mechanical components for practical engineering projects.
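A quick numerical check of the selection step this abstract describes: among densities sharing the same first two moments, the PME selects the one of maximum entropy, which for mean-variance information is the Gaussian. The comparison densities below are illustrative choices, not distributions used in the paper.

```python
import numpy as np
from scipy.stats import norm, laplace, logistic

# All three densities have mean 0 and variance 1; the PME prefers the Gaussian.
grid = np.linspace(-20.0, 20.0, 200001)
dx = grid[1] - grid[0]

def entropy(pdf):
    """Differential entropy -∫ p ln p dx by direct summation on the grid."""
    p = np.clip(pdf(grid), 1e-300, None)
    return float(-(p * np.log(p)).sum() * dx)

h_gauss = entropy(norm(scale=1.0).pdf)
h_laplace = entropy(laplace(scale=1.0 / np.sqrt(2.0)).pdf)       # var = 2b^2 = 1
h_logistic = entropy(logistic(scale=np.sqrt(3.0) / np.pi).pdf)   # var = (pi*s)^2/3 = 1

assert h_gauss > h_laplace and h_gauss > h_logistic
# closed form for the Gaussian: 0.5 * ln(2*pi*e) ≈ 1.4189
assert abs(h_gauss - 0.5 * np.log(2.0 * np.pi * np.e)) < 1e-3
```
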

  16. The effect of implementing cognitive load theory-based design principles in virtual reality simulation training of surgical skills: a randomized controlled trial

    DEFF Research Database (Denmark)

    Andersen, Steven Arild Wuyts; Mikkelsen, Peter Trier; Konge, Lars

    2016-01-01

    Cognitive overload can inhibit learning, and cognitive load theory-based instructional design principles can be used to optimize learning situations. This study aims to investigate the effect of implementing cognitive load theory-based design principles in virtual reality simulation training...

  17. Maximum Entropy in Drug Discovery

    Directory of Open Access Journals (Sweden)

    Chih-Yuan Tseng

    2014-07-01

    Full Text Available Drug discovery applies multidisciplinary approaches either experimentally, computationally or both ways to identify lead compounds to treat various diseases. While conventional approaches have yielded many US Food and Drug Administration (FDA-approved drugs, researchers continue investigating and designing better approaches to increase the success rate in the discovery process. In this article, we provide an overview of the current strategies and point out where and how the method of maximum entropy has been introduced in this area. The maximum entropy principle has its root in thermodynamics, yet since Jaynes’ pioneering work in the 1950s, the maximum entropy principle has not only been used as a physics law, but also as a reasoning tool that allows us to process information in hand with the least bias. Its applicability in various disciplines has been abundantly demonstrated. We give several examples of applications of maximum entropy in different stages of drug discovery. Finally, we discuss a promising new direction in drug discovery that is likely to hinge on the ways of utilizing maximum entropy.

  18. Schwinger variational principle in the nuclear two-body problem and multichannel theory

    International Nuclear Information System (INIS)

    Zubarev, A.L.; Podkopaev, A.P.

    1978-01-01

    The aim of the investigation is to study the Schwinger variational principle in the nuclear two-body problem and the multichannel theory. An approach is proposed to problems of the potential scattering based on the substitution of the exact potential operator V by the finite rank operator Vsup((n)) with which the dynamic equations are solved exactly. The functionals obtained for observed values coincide with corresponding expressions derived by the Schwinger variational principle with the set of test functions. The determination of the Schwinger variational principle is given. The method is given for finding amplitude of the double-particle scattering with the potential Vsup((n)). The corresponding amplitudes are constructed within the framework of the multichannel potential model. Interpolation formula for determining amplitude, which describes with high accuracy a process of elastic scattering for any energies, is obtained. On the basis of the above method high-energy amplitude may be obtained within the range of small and large scattering angles

  19. Five-dimensional projective unified theory and the principle of equivalence

    International Nuclear Information System (INIS)

    De Sabbata, V.; Gasperini, M.

    1984-01-01

    We investigate the physical consequences of a new five-dimensional projective theory unifying gravitation and electromagnetism. Solving the field equations in the linear approximation and in the static limit, we find that a celestial body would act as a source of a long-range scalar field, and that macroscopic test bodies with different internal structure would accelerate differently in the solar gravitational field; this seems to be in disagreement with the equivalence principle. To avoid this contradiction, we suggest a possible modification of the geometrical structure of the five-dimensional projective space

  20. Least action principle with unilateral constraints on the velocity in the special theory of relativity

    International Nuclear Information System (INIS)

    Blaquiere, Augustin

    1981-01-01

    A least action principle with unilateral constraints on the velocity is applied to an example in the area of the special theory of relativity. Equations obtained for a particle with non-zero rest-mass, and speed c the speed of light, are those which are usually associated with the photon, namely: the equation of eikonale and the wave equation of d'Alembert. Extension of the theory [fr

  1. A Critique of Social Bonding and Control Theory of Delinquency Using the Principles of Psychology of Mind.

    Science.gov (United States)

    Kelley, Thomas M.

    1996-01-01

    Describes the refined principles of Psychology of Mind and shows how their logical interaction can help explain the comparative amounts of deviant and conforming behavior of youthful offenders. The logic of these principles is used to examine the major assumptions of social bonding and control theory of delinquency focusing predominantly on the…

  2. Twist operators in N=4 beta-deformed theory

    NARCIS (Netherlands)

    de Leeuw, M.; Łukowski, T.

    2010-01-01

    In this paper we derive both the leading order finite size corrections for twist-2 and twist-3 operators and the next-to-leading order finite-size correction for twist-2 operators in beta-deformed SYM theory. The obtained results respect the principle of maximum transcendentality as well as

  3. The Principle of General Tovariance

    Science.gov (United States)

    Heunen, C.; Landsman, N. P.; Spitters, B.

    2008-06-01

    We tentatively propose two guiding principles for the construction of theories of physics, which should be satisfied by a possible future theory of quantum gravity. These principles are inspired by those that led Einstein to his theory of general relativity, viz. his principle of general covariance and his equivalence principle, as well as by the two mysterious dogmas of Bohr's interpretation of quantum mechanics, i.e. his doctrine of classical concepts and his principle of complementarity. An appropriate mathematical language for combining these ideas is topos theory, a framework earlier proposed for physics by Isham and collaborators. Our principle of general tovariance states that any mathematical structure appearing in the laws of physics must be definable in an arbitrary topos (with natural numbers object) and must be preserved under so-called geometric morphisms. This principle identifies geometric logic as the mathematical language of physics and restricts the constructions and theorems to those valid in intuitionism: neither Aristotle's principle of the excluded third nor Zermelo's Axiom of Choice may be invoked. Subsequently, our equivalence principle states that any algebra of observables (initially defined in the topos Sets) is empirically equivalent to a commutative one in some other topos.

  4. A finite state, finite memory minimum principle, part 2. [a discussion of game theory, signaling, stochastic processes, and control theory]

    Science.gov (United States)

    Sandell, N. R., Jr.; Athans, M.

    1975-01-01

    The development of the theory of the finite-state, finite-memory (FSFM) stochastic control problem is discussed. The sufficiency of the FSFM minimum principle (which is in general only a necessary condition) was investigated. By introducing the notion of a signaling strategy as defined in the literature on games, conditions under which the FSFM minimum principle is sufficient were determined. This result explicitly interconnects the information structure of the FSFM problem with its optimality conditions. The min-H algorithm for the FSFM problem was studied. It is demonstrated that a version of the algorithm always converges to a particular type of local minimum termed a person-by-person extremal.

  5. Credal Networks under Maximum Entropy

    OpenAIRE

    Lukasiewicz, Thomas

    2013-01-01

    We apply the principle of maximum entropy to select a unique joint probability distribution from the set of all joint probability distributions specified by a credal network. In detail, we start by showing that the unique joint distribution of a Bayesian tree coincides with the maximum entropy model of its conditional distributions. This result, however, does not hold anymore for general Bayesian networks. We thus present a new kind of maximum entropy models, which are computed sequentially. ...

  6. Unifying generative and discriminative learning principles

    Directory of Open Access Journals (Sweden)

    Strickert Marc

    2010-02-01

    Full Text Available Abstract Background The recognition of functional binding sites in genomic DNA remains one of the fundamental challenges of genome research. During the last decades, a plethora of different and well-adapted models has been developed, but only little attention has been paid to the development of different and similarly well-adapted learning principles. Only recently was it noticed that discriminative learning principles can be superior to generative ones in diverse bioinformatics applications, too. Results Here, we propose a generalization of generative and discriminative learning principles containing the maximum likelihood, maximum a posteriori, maximum conditional likelihood, maximum supervised posterior, generative-discriminative trade-off, and penalized generative-discriminative trade-off learning principles as special cases, and we illustrate its efficacy for the recognition of vertebrate transcription factor binding sites. Conclusions We find that the proposed learning principle helps to improve the recognition of transcription factor binding sites, enabling better computational approaches for extracting as much information as possible from valuable wet-lab data. We make all implementations available in the open-source library Jstacs so that this learning principle can be easily applied to other classification problems in the field of genome and epigenome analysis.

  7. The principle of general covariance and the principle of equivalence: two distinct concepts

    International Nuclear Information System (INIS)

    Fagundes, H.V.

    It is shown how to construct a theory with general covariance but without the equivalence principle. Such a theory is in disagreement with experiment, but it serves to illustrate the independence of the former principle from the latter one [pt

  8. Spatio-temporal spike train analysis for large scale networks using the maximum entropy principle and Monte Carlo method

    International Nuclear Information System (INIS)

    Nasser, Hassan; Cessac, Bruno; Marre, Olivier

    2013-01-01

    Understanding the dynamics of neural networks is a major challenge in experimental neuroscience. For that purpose, a modelling of the recorded activity that reproduces the main statistics of the data is required. In the first part, we present a review on recent results dealing with spike train statistics analysis using maximum entropy models (MaxEnt). Most of these studies have focused on modelling synchronous spike patterns, leaving aside the temporal dynamics of the neural activity. However, the maximum entropy principle can be generalized to the temporal case, leading to Markovian models where memory effects and time correlations in the dynamics are properly taken into account. In the second part, we present a new method based on Monte Carlo sampling which is suited for the fitting of large-scale spatio-temporal MaxEnt models. The formalism and the tools presented here will be essential to fit MaxEnt spatio-temporal models to large neural ensembles. (paper)
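A hedged sketch of the modelling step described here, with exact enumeration standing in for sampling: for a handful of neurons, the pairwise (Ising-type) MaxEnt model can be fitted by gradient ascent on the moment-matching conditions. The paper's Monte Carlo method replaces the exact sum over states, which becomes infeasible for large ensembles. The synthetic "recordings" below are illustrative, not data from the paper.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)
N = 4                                              # tiny ensemble: 2^4 = 16 states
states = np.array(list(product([0, 1], repeat=N)), dtype=float)

# Synthetic binary spike patterns standing in for recorded data.
data = rng.integers(0, 2, size=(500, N)).astype(float)
mu_data = data.mean(axis=0)                        # firing rates
C_data = (data[:, :, None] * data[:, None, :]).mean(axis=0)   # pairwise moments

# Fit p(s) ∝ exp(h·s + ½ sᵀJs) by matching first and second moments:
# the likelihood gradient is (data moments - model moments).
h = np.zeros(N)
J = np.zeros((N, N))
for _ in range(20000):
    E = states @ h + 0.5 * np.einsum('si,ij,sj->s', states, J, states)
    p = np.exp(E - E.max())
    p /= p.sum()
    mu = p @ states
    C = np.einsum('s,si,sj->ij', p, states, states)
    h += 0.2 * (mu_data - mu)
    J += 0.2 * (C_data - C)
    np.fill_diagonal(J, 0.0)                       # s_i^2 = s_i, handled by h

offdiag = ~np.eye(N, dtype=bool)
assert np.abs(mu - mu_data).max() < 1e-3           # rates reproduced
assert np.abs(C - C_data)[offdiag].max() < 1e-3    # pairwise moments reproduced
```

In the Monte Carlo variant, the model moments mu and C would be estimated from sampled spike patterns rather than from the full state enumeration.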

  9. COIN Goes GLOCAL: Traditional COIN With a Global Perspective: Does the Current US Strategy Reflect COIN Theory, Doctrine and Principles

    Science.gov (United States)

    2007-05-17

    COIN goes “GLOCAL”: Traditional COIN with a Global Perspective: Does the Current US Strategy Reflect COIN Theory, Doctrine and Principles? A Monograph.

  10. Limitations of Boltzmann's principle

    International Nuclear Information System (INIS)

    Lavenda, B.H.

    1995-01-01

    The usual form of Boltzmann's principle assures that maximum entropy, or entropy reduction, occurs with maximum probability, implying a unimodal distribution. Boltzmann's principle cannot be applied to nonunimodal distributions, like the arcsine law, because the entropy may be concave only over a limited portion of the interval. The method of subordination shows that the arcsine distribution corresponds to a process with a single degree of freedom, thereby confirming the invalidation of Boltzmann's principle. The fractalization of time leads to a new distribution in which arcsine and Cauchy distributions can coexist simultaneously for nonintegral degrees of freedom between √2 and 2

  11. Schwinger's quantum action principle from Dirac’s formulation through Feynman’s path integrals, the Schwinger-Keldysh method, quantum field theory, to source theory

    CERN Document Server

    Milton, Kimball A

    2015-01-01

    Starting from the earlier notions of stationary action principles, these tutorial notes show how Schwinger’s Quantum Action Principle descended from Dirac’s formulation, which independently led Feynman to his path-integral formulation of quantum mechanics. Part I brings out in more detail the connection between the two formulations, and applications are discussed. Then, the Keldysh-Schwinger time-cycle method of extracting matrix elements is described. Part II will discuss the variational formulation of quantum electrodynamics and the development of source theory.

  12. A New Principle in Physics: the Principle 'Finiteness', and Some Consequences

    International Nuclear Information System (INIS)

    Sternlieb, Abraham

    2010-01-01

    In this paper I propose a new principle in physics: the principle of 'finiteness'. It stems from the definition of physics as a science that deals (among other things) with measurable dimensional physical quantities. Since measurement results, including their errors, are always finite, the principle of finiteness postulates that the mathematical formulation of 'legitimate' laws of physics should prevent exactly zero or infinite solutions. Some consequences of the principle of finiteness are discussed, in general, and then more specifically in the fields of special relativity, quantum mechanics, and quantum gravity. The consequences are derived independently of any other theory or principle in physics. I propose 'finiteness' as a postulate (like the constancy of the speed of light in vacuum, 'c'), as opposed to a notion whose validity has to be corroborated by, or derived theoretically or experimentally from other facts, theories, or principles.

  13. Adapting evidence-based interventions using a common theory, practices, and principles.

    Science.gov (United States)

    Rotheram-Borus, Mary Jane; Swendeman, Dallas; Becker, Kimberly D

    2014-01-01

    Hundreds of validated evidence-based intervention programs (EBIP) aim to improve families' well-being; however, most are not broadly adopted. As an alternative diffusion strategy, we created wellness centers to reach families' everyday lives with a prevention framework. At two wellness centers, one in a middle-class neighborhood and one in a low-income neighborhood, popular local activity leaders (instructors of martial arts, yoga, sports, music, dancing, Zumba), and motivated parents were trained to be Family Mentors. Trainings focused on a framework that taught synthesized, foundational prevention science theory, practice elements, and principles, applied to specific content areas (parenting, social skills, and obesity). Family Mentors were then allowed to adapt scripts and activities based on their cultural experiences but were closely monitored and supervised over time. The framework was implemented in a range of activities (summer camps, coaching) aimed at improving social, emotional, and behavioral outcomes. Successes and challenges are discussed for (a) engaging parents and communities; (b) identifying and training Family Mentors to promote children and families' well-being; and (c) gathering data for supervision, outcome evaluation, and continuous quality improvement. To broadly diffuse prevention to families, far more experimentation is needed with alternative and engaging implementation strategies that are enhanced with knowledge harvested from researchers' past 30 years of experience creating EBIP. One strategy is to train local parents and popular activity leaders in applying robust prevention science theory, common practice elements, and principles of EBIP. More systematic evaluation of such innovations is needed.

  14. Generic maximum likely scale selection

    DEFF Research Database (Denmark)

    Pedersen, Kim Steenstrup; Loog, Marco; Markussen, Bo

    2007-01-01

    The fundamental problem of local scale selection is addressed by means of a novel principle, which is based on maximum likelihood estimation. The principle is generally applicable to a broad variety of image models and descriptors, and provides a generic scale estimation methodology. The focus in this work is on applying this selection principle under a Brownian image model. This image model provides a simple scale-invariant prior for natural images, and we provide illustrative examples of the behavior of our scale estimation on such images. In these illustrative examples, estimation is based...

  15. Principles of dynamics

    CERN Document Server

    Hill, Rodney

    2013-01-01

    Principles of Dynamics presents classical dynamics primarily as an exemplar of scientific theory and method. This book is divided into three major parts concerned with gravitational theory of planetary systems; general principles of the foundations of mechanics; and general motion of a rigid body. Some of the specific topics covered are Keplerian Laws of Planetary Motion; gravitational potential and potential energy; and fields of axisymmetric bodies. The principles of work and energy, fictitious body-forces, and inertial mass are also looked into. Other specific topics examined are kinematics

  16. At the frontier of spacetime: scalar-tensor theory, Bell's inequality, Mach's principle, exotic smoothness

    CERN Document Server

    Asselmeyer-Maluga, Torsten

    2016-01-01

    In this book, leading theorists present new contributions and reviews addressing longstanding challenges and ongoing progress in spacetime physics. In the anniversary year of Einstein's General Theory of Relativity, developed 100 years ago, this collection reflects the subsequent and continuing fruitful development of spacetime theories. The volume is published in honour of Carl Brans on the occasion of his 80th birthday. Carl H. Brans, who also contributes personally, is a creative and independent researcher and one of the founders of the scalar-tensor theory, also known as Jordan-Brans-Dicke theory. In the present book, much space is devoted to scalar-tensor theories. Since the beginning of the 1990s, Brans has worked on new models of spacetime, collectively known as exotic smoothness, a field largely established by him. In this Festschrift, one finds an outstanding and unique collection of articles about exotic smoothness. Also featured are Bell's inequality and Mach's principle. Personal memories and hist...

  17. A least squares principle unifying finite element, finite difference and nodal methods for diffusion theory

    International Nuclear Information System (INIS)

    Ackroyd, R.T.

    1987-01-01

    A least squares principle is described which uses a penalty function treatment of boundary and interface conditions. Appropriate choices of the trial functions and vectors employed in a dual representation of an approximate solution established complementary principles for the diffusion equation. A geometrical interpretation of the principles provides weighted residual methods for diffusion theory, thus establishing a unification of least squares, variational and weighted residual methods. The complementary principles are used with either a trial function for the flux or a trial vector for the current to establish for regular meshes a connection between finite element, finite difference and nodal methods, which can be exact if the mesh pitches are chosen appropriately. Whereas the coefficients in the usual nodal equations have to be determined iteratively, those derived via the complementary principles are given explicitly in terms of the data. For the further development of the connection between finite element, finite difference and nodal methods, some hybrid variational methods are described which employ both a trial function and a trial vector. (author)

  18. Developing an Asteroid Rotational Theory

    Science.gov (United States)

    Geis, Gena; Williams, Miguel; Linder, Tyler; Pakey, Donald

    2018-01-01

    The goal of this project is to develop a theoretical asteroid rotational theory from first principles. Starting at first principles provides a firm foundation for computer simulations which can be used to analyze multiple variables at once such as size, rotation period, tensile strength, and density. The initial theory will be presented along with early models of applying the theory to the asteroid population. Early results confirm previous work by Pravec et al. (2002) that show the majority of the asteroids larger than 200m have negligible tensile strength and have spin rates close to their critical breakup point. Additionally, results show that an object with zero tensile strength has a maximum rotational rate determined by the object’s density, not size. Therefore, an iron asteroid with a density of 8000 kg/m^3 would have a minimum spin period of 1.16h if the only forces were gravitational and centrifugal. The short-term goal is to include material forces in the simulations to determine what tensile strength will allow the high spin rates of asteroids smaller than 150m.
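The density-only spin limit quoted in this abstract follows from balancing surface gravity against equatorial centrifugal acceleration for a strengthless sphere, P_crit = sqrt(3π/(Gρ)); the radius cancels. A short check (the formula and constant are standard; the framing is ours, not the project's code):

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def critical_period_hours(density_kg_m3):
    """Breakup spin period of a strengthless spherical body: equatorial
    centrifugal acceleration equals surface gravity at P = sqrt(3*pi/(G*rho)).
    The radius cancels, so only density matters."""
    return math.sqrt(3.0 * math.pi / (G * density_kg_m3)) / 3600.0

# Iron density, as quoted in the abstract: ~1.16 h minimum period.
assert abs(critical_period_hours(8000.0) - 1.16) < 0.01
# Typical rubble-pile density ~2000 kg/m^3 gives the familiar ~2.3 h spin barrier.
assert abs(critical_period_hours(2000.0) - 2.33) < 0.01
```
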

  19. The principle of finiteness – a guideline for physical laws

    International Nuclear Information System (INIS)

    Sternlieb, Abraham

    2013-01-01

    I propose a new principle in physics-the principle of finiteness (FP). It stems from the definition of physics as a science that deals with measurable dimensional physical quantities. Since measurement results including their errors, are always finite, FP postulates that the mathematical formulation of legitimate laws in physics should prevent exactly zero or infinite solutions. I propose finiteness as a postulate, as opposed to a statement whose validity has to be corroborated by, or derived theoretically or experimentally from other facts, theories or principles. Some consequences of FP are discussed, first in general, and then more specifically in the fields of special relativity, quantum mechanics, and quantum gravity. The corrected Lorentz transformations include an additional translation term depending on the minimum length epsilon. The relativistic gamma is replaced by a corrected gamma, that is finite for v=c. To comply with FP, physical laws should include the relevant extremum finite values in their mathematical formulation. An important prediction of FP is that there is a maximum attainable relativistic mass/energy which is the same for all subatomic particles, meaning that there is a maximum theoretical value for cosmic rays energy. The Generalized Uncertainty Principle required by Quantum Gravity is actually a necessary consequence of FP at Planck's scale. Therefore, FP may possibly contribute to the axiomatic foundation of Quantum Gravity.

  20. Principles of statistics

    CERN Document Server

    Bulmer, M G

    1979-01-01

    There are many textbooks which describe current methods of statistical analysis, while neglecting related theory. There are equally many advanced textbooks which delve into the far reaches of statistical theory, while bypassing practical applications. But between these two approaches is an unfilled gap, in which theory and practice merge at an intermediate level. Professor M. G. Bulmer's Principles of Statistics, originally published in 1965, was created to fill that need. The new, corrected Dover edition of Principles of Statistics makes this invaluable mid-level text available once again fo

  1. Introductory remote sensing: principles and concepts

    CERN Document Server

    Gibson, Paul

    2013-01-01

    Introduction to Remote Sensing Principles and Concepts provides a comprehensive student introduction to both the theory and application of remote sensing. This textbook introduces the field of remote sensing and traces its historical development and evolution; presents detailed explanations of core remote sensing principles and concepts, providing the theory required for a clear understanding of remotely sensed images; describes important remote sensing platforms, including Landsat, SPOT and NOAA; and examines and illustrates many of the applications of remotely sensed images in various fields.

  2. From Entropic Dynamics to Quantum Theory

    International Nuclear Information System (INIS)

    Caticha, Ariel

    2009-01-01

    Non-relativistic quantum theory is derived from information codified into an appropriate statistical model. The basic assumption is that there is an irreducible uncertainty in the location of particles so that the configuration space is a statistical manifold. The dynamics then follows from a principle of inference, the method of Maximum Entropy. The concept of time is introduced as a convenient way to keep track of change. The resulting theory resembles both Nelson's stochastic mechanics and general relativity. The statistical manifold is a dynamical entity: its geometry determines the evolution of the probability distribution which, in its turn, reacts back and determines the evolution of the geometry. There is a new quantum version of the equivalence principle: 'osmotic' mass equals inertial mass. Mass and the phase of the wave function are explained as features of purely statistical origin.

  3. Toward Principles of Construct Clarity: Exploring the Usefulness of Facet Theory in Guiding Conceptualization

    Directory of Open Access Journals (Sweden)

    Meng Zhang

    2016-02-01

    Conceptualization in theory development has received limited consideration despite its frequently stressed importance in Information Systems research. This paper focuses on the role of construct clarity in conceptualization, arguing that construct clarity should be considered an essential criterion for evaluating conceptualization and that a focus on construct clarity can advance conceptualization methodology. Drawing from Facet Theory literature, we formulate a set of principles for assessing construct clarity, particularly regarding a construct’s relationships to its extant related constructs. Conscious and targeted attention to this criterion can promote a research ecosystem more supportive of knowledge accumulation.

  4. On the application of motivation theory to human factors/ergonomics: motivational design principles for human-technology interaction.

    Science.gov (United States)

    Szalma, James L

    2014-12-01

    Motivation is a driving force in human-technology interaction. This paper represents an effort to (a) describe a theoretical model of motivation in human-technology interaction, (b) provide design principles and guidelines based on this theory, and (c) describe a sequence of steps for the evaluation of motivational factors in human-technology interaction. Motivation theory has been relatively neglected in human factors/ergonomics (HF/E). In both research and practice, the (implicit) assumption has been that the operator is already motivated or that motivation is an organizational concern and beyond the purview of HF/E. However, technology can induce task-related boredom (e.g., automation) that can be stressful and also increase system vulnerability to performance failures. A theoretical model of motivation in human-technology interaction is proposed, based on extension of the self-determination theory of motivation to HF/E. This model provides the basis both for future research and for development of practical recommendations for design. General principles and guidelines for motivational design are described, as well as a sequence of steps for the design process. Human motivation is an important concern for HF/E research and practice. Procedures in the design of both simple and complex technologies can, and should, include the evaluation of motivational characteristics of the task, interface, or system. In addition, researchers should investigate these factors in specific human-technology domains. The theory, principles, and guidelines described here can be incorporated into existing techniques for task analysis and for interface and system design.

  5. Precautionary discourse. Thinking through the distinction between the precautionary principle and the precautionary approach in theory and practice.

    Science.gov (United States)

    Dinneen, Nathan

    2013-01-01

    This paper addresses the distinction, arising from the different ways the European Union and United States have come to adopt precaution regarding various environmental and health-related risks, between the precautionary principle and the precautionary approach in both theory and practice. First, this paper addresses how the precautionary principle has been variously defined, along with an exploration of some of the concepts with which it has been associated. Next, it addresses how the distinction between the precautionary principle and precautionary approach manifested itself within the political realm. Last, it considers the theoretical foundation of the precautionary principle in the philosophy of Hans Jonas, considering whether the principled-pragmatic distinction regarding precaution does or doesn't hold up in Jonas' thought.

  6. On the invariance principle

    Energy Technology Data Exchange (ETDEWEB)

    Moller-Nielsen, Thomas [University of Oxford (United Kingdom)

    2014-07-01

    Physicists and philosophers have long claimed that the symmetries of our physical theories - roughly speaking, those transformations which map solutions of the theory into solutions - can provide us with genuine insight into what the world is really like. According to this 'Invariance Principle', only those quantities which are invariant under a theory's symmetries should be taken to be physically real, while those quantities which vary under its symmetries should not. Physicists and philosophers, however, are generally divided (or, indeed, silent) when it comes to explaining how such a principle is to be justified. In this paper, I spell out some of the problems inherent in other theorists' attempts to justify this principle, and sketch my own proposed general schema for explaining how - and when - the Invariance Principle can indeed be used as a legitimate tool of metaphysical inference.

  7. First-principles study of thermoelectric properties of CuI

    International Nuclear Information System (INIS)

    Yadav, Manoj K; Sanyal, Biplab

    2014-01-01

    Theoretical investigations of the thermoelectric properties of CuI have been carried out employing first-principles calculations followed by the calculations of transport coefficients based on Boltzmann transport theory. Among the three different phases of CuI, viz. zinc-blende, wurtzite and rock salt, the thermoelectric power factor is found to be the maximum for the rock salt phase. We have analysed the variations of Seebeck coefficients and thermoelectric power factors on the basis of calculated electronic structures near the valence band maxima of these phases. (papers)
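
    The quantity compared across phases above, the thermoelectric power factor, is defined as PF = S²σ, where S is the Seebeck coefficient and σ the electrical conductivity. A minimal sketch of the calculation (the numerical values below are illustrative placeholders, not the paper's data):

```python
def power_factor(seebeck_V_per_K: float, conductivity_S_per_m: float) -> float:
    """Return the thermoelectric power factor PF = S^2 * sigma in W/(m*K^2)."""
    return seebeck_V_per_K ** 2 * conductivity_S_per_m

# Hypothetical example: S = 200 microvolts/K, sigma = 1e5 S/m
pf = power_factor(200e-6, 1e5)
print(f"power factor = {pf:.2e} W/(m*K^2)")  # 4.00e-03
```

    A larger S or σ near the valence band maximum raises PF, which is why the abstract ties the phase comparison to the calculated electronic structures.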

  8. Analytical study of Yang–Mills theory in the infrared from first principles

    Energy Technology Data Exchange (ETDEWEB)

    Siringo, Fabio, E-mail: fabio.siringo@ct.infn.it

    2016-06-15

    Pure Yang–Mills SU(N) theory is studied in the Landau gauge and four dimensional space. While leaving the original Lagrangian unmodified, a double perturbative expansion is devised, based on a massive free-particle propagator. In dimensional regularization, all diverging mass terms cancel exactly in the double expansion, without the need to include mass counterterms that would spoil the symmetry of the Lagrangian. No free parameters are included that were not in the original theory, yielding a fully analytical approach from first principles. The expansion is safe in the infrared and is equivalent to the standard perturbation theory in the UV. At one-loop, explicit analytical expressions are given for the propagators and the running coupling and are found in excellent agreement with the data of lattice simulations. A universal scaling property is predicted for the inverse propagators and shown to be satisfied by the lattice data. Higher loops are found to be negligible in the infrared below 300 MeV where the coupling becomes small and the one-loop approximation is under full control.

  9. On the theory of optimal processes

    International Nuclear Information System (INIS)

    Goldenberg, P.; Provenzano, V.

    1975-01-01

    The theory of optimal processes is a recent mathematical formalism that is used to solve an important class of problems in science and technology that cannot be solved by classical variational techniques. An example of such a process is the control of a nuclear reactor. Certain features of the theory of optimal processes are discussed, emphasizing the central contribution of Pontryagin with his formulation of the maximum principle. An application of the theory of optimal control is presented. The example is a time-optimal problem applied to a simplified model of a nuclear reactor. It deals with the question of changing the equilibrium power level of the reactor in an optimum time.

  10. A Principle of Intentionality.

    Science.gov (United States)

    Turner, Charles K

    2017-01-01

    The mainstream theories and models of the physical sciences, including neuroscience, are all consistent with the principle of causality. Wholly causal explanations make sense of how things go, but are inherently value-neutral, providing no objective basis for true beliefs being better than false beliefs, nor for it being better to intend wisely than foolishly. Dennett (1987) makes a related point in calling the brain a syntactic (procedure-based) engine. He says that you cannot get to a semantic (meaning-based) engine from there. He suggests that folk psychology revolves around an intentional stance that is independent of the causal theories of the brain, and accounts for constructs such as meanings, agency, true belief, and wise desire. Dennett proposes that the intentional stance is so powerful that it can be developed into a valid intentional theory. This article expands Dennett's model into a principle of intentionality that revolves around the construct of objective wisdom. This principle provides a structure that can account for all mental processes, and for the scientific understanding of objective value. It is suggested that science can develop a far more complete worldview with a combination of the principles of causality and intentionality than would be possible with scientific theories that are consistent with the principle of causality alone.

  11. Cognitive Theory of Multimedia Learning, Instructional Design Principles, and Students with Learning Disabilities in Computer-Based and Online Learning Environments

    Science.gov (United States)

    Greer, Diana L.; Crutchfield, Stephen A.; Woods, Kari L.

    2013-01-01

    Struggling learners and students with Learning Disabilities often exhibit unique cognitive processing and working memory characteristics that may not align with instructional design principles developed with typically developing learners. This paper explains the Cognitive Theory of Multimedia Learning and underlying Cognitive Load Theory, and…

  12. Elements of a compatible optimization theory for coupled systems

    Energy Technology Data Exchange (ETDEWEB)

    Bonnemay, A [Commissariat a l' Energie Atomique, Saclay (France). Centre d' Etudes Nucleaires

    1969-07-01

    The first thesis deals with the compatible optimization in coupled systems. A game theory for two players and with a non-zero sum is first developed. The conclusions are then extended to the case of a game with any finite number of players. After this essentially static study, the dynamic aspect of the problem is applied to the case of games which evolve. By applying the PONTRYAGIN maximum principle it is possible to derive a compatible optimisation theorem which constitutes a necessary condition. (author)

  13. Elements of a compatible optimization theory for coupled systems

    International Nuclear Information System (INIS)

    Bonnemay, A.

    1969-01-01

    The first thesis deals with the compatible optimization in coupled systems. A game theory for two players and with a non-zero sum is first developed. The conclusions are then extended to the case of a game with any finite number of players. After this essentially static study, the dynamic aspect of the problem is applied to the case of games which evolve. By applying the PONTRYAGIN maximum principle it is possible to derive a compatible optimisation theorem which constitutes a necessary condition. (author)

  14. The Context and Values Inherent in Human Capital as Core Principles for New Economic Theory

    Directory of Open Access Journals (Sweden)

    Winston P. Nagan

    2018-05-01

    This paper has a specific focus on the core foundation of New Economic Theory. That is, the focus on human capital and its implications for the theory and method of the new form of political economy. The central issue that is underlined is the importance of scientific and technological innovation and its necessary interdependence on global values and value analysis. The paper discusses the issue of scientific consciousness as a generator of technological value, and places scientific process at the heart of human consciousness. It discusses the complex interdependence of human relational subjectivity, scientific consciousness, and modern science. The paper draws attention to the problems of observation and participation, and the influence of modern quantum physics in drawing attention to aspects of human consciousness that go beyond the points of conventional science, and open up concern for the principle of non-locality. It explores human subjectivity in terms of the way in which “emotionalized behaviors” have effects on scientific objectivity. It also briefly touches on consciousness and its observable scientific role in the possible reconstruction of some aspects of reality. Mention is made of the Copenhagen perspective, the Many Worlds perspective, and the Penrose interpretation. These insights challenge us to explore human consciousness and innovation in economic organization. The discussion also brings in the principle of relational inter-subjectivity, emotion, and consciousness as a potential driver of human capital and value. In short, positive emotions can influence economic decision-making, as can negative emotions. These challenges stress the problem of human relational subjectivity, values, and technology as the tools to better understand the conflicts and potentials of human capital for New Economic Theory. The issue of value-analysis has both a descriptive and normative dimension. Both of these aspects raise important challenges

  15. The principles of analysis of competitiveness and control schemes in transport services

    Directory of Open Access Journals (Sweden)

    A. Žvirblis

    2003-04-01

    Under the conditions of constantly growing competition, transportation companies are faced with theoretical and practical problems associated with the quality and stability of transport services, competitiveness on the market, and marketing problems. Road transport services are considered in terms of value analysis, while the assessment of their competitiveness is based on the Pontryagin maximum principle. A model for transport risk analysis is constructed, taking into account the principles of correlation and covariation of transport services. Formalized models of automated control of services in the marketing system are offered, allowing stability and other parameters to be analysed within the framework of automatic control theory.

  16. Equivalence Principle, Higgs Boson and Cosmology

    Directory of Open Access Journals (Sweden)

    Mauro Francaviglia

    2013-05-01

    We discuss here possible tests for Palatini f(R)-theories together with their implications for different formulations of the Equivalence Principle. We shall show that Palatini f(R)-theories obey the Weak Equivalence Principle and violate the Strong Equivalence Principle. The violations of the Strong Equivalence Principle vanish in vacuum (and purely electromagnetic) solutions as well as on short time scales with respect to the age of the universe. However, we suggest that a framework based on Palatini f(R)-theories is more general than standard General Relativity (GR) and sheds light on the interpretation of data and results in a way which is more model independent than standard GR itself.

  17. Gyro precession and Mach's principle

    International Nuclear Information System (INIS)

    Eby, P.

    1979-01-01

    The precession of a gyroscope is calculated in a nonrelativistic theory due to Barbour which satisfies Mach's principle. It is shown that the theory predicts both the geodetic and motional precession of general relativity to within factors of order 1. The significance of the gyro experiment is discussed from the point of view of metric theories of gravity and this is contrasted with its significance from the point of view of Mach's principle. (author)

  18. Combining Experiments and Simulations Using the Maximum Entropy Principle

    DEFF Research Database (Denmark)

    Boomsma, Wouter; Ferkinghoff-Borg, Jesper; Lindorff-Larsen, Kresten

    2014-01-01

    in the context of a simple example, after which we proceed with a real-world application in the field of molecular simulations, where the maximum entropy procedure has recently provided new insight. Given the limited accuracy of force fields, macromolecular simulations sometimes produce results...

  19. Spectral analysis and filter theory in applied geophysics

    CERN Document Server

    Buttkus, Burkhard

    2000-01-01

    This book is intended to be an introduction to the fundamentals and methods of spectral analysis and filter theory and their applications in geophysics. The principles and theoretical basis of the various methods are described, their efficiency and effectiveness evaluated, and instructions provided for their practical application. Besides the conventional methods, newer methods are discussed, such as the spectral analysis of random processes by fitting models to the observed data, maximum-entropy spectral analysis and maximum-likelihood spectral analysis, the Wiener and Kalman filtering methods, homomorphic deconvolution, and adaptive methods for nonstationary processes. Multidimensional spectral analysis and filtering, as well as multichannel filters, are given extensive treatment. The book provides a survey of the state-of-the-art of spectral analysis and filter theory. The importance and possibilities of spectral analysis and filter theory in geophysics for data acquisition, processing an...

  20. Development by Niels Bohr of the quantum theory of the atom and of the correspondence principle

    International Nuclear Information System (INIS)

    El'yashevich, M.A.

    1985-01-01

    Bohr's investigations in 1912-1923 on the quantum theory of atoms are considered. The sources of N. Bohr's works on this subject are analyzed, and the beginning of his quantum research in 1912 is described. A detailed analysis is given of N. Bohr's famous paper on the hydrogen atom theory and on the origin of spectra. The further development of Bohr's ideas on atomic structure is considered, special attention being paid to his postulates of stationary states and of radiation transitions and to the development of the correspondence principle. It is shown how well N. Bohr understood the difficulties of the model theory and how he tried to obtain a deep understanding of quantum phenomena.

  1. Extremum principles for irreversible processes

    International Nuclear Information System (INIS)

    Hillert, M.; Agren, J.

    2006-01-01

    Hamilton's extremum principle is a powerful mathematical tool in classical mechanics. Onsager's extremum principle may play a similar role in irreversible thermodynamics and may also become a valuable tool. His principle may formally be regarded as a principle of maximum rate of entropy production but does not have a clear physical interpretation. Prigogine's principle of minimum rate of entropy production has a physical interpretation when it applies, but is not strictly valid except for a very special case

  2. Maximum Correntropy Unscented Kalman Filter for Ballistic Missile Navigation System based on SINS/CNS Deeply Integrated Mode.

    Science.gov (United States)

    Hou, Bowen; He, Zhangming; Li, Dong; Zhou, Haiyin; Wang, Jiongqi

    2018-05-27

    Strap-down inertial navigation system/celestial navigation system (SINS/CNS) integrated navigation is a high-precision navigation technique for ballistic missiles. The traditional navigation method suffers from divergence in the position error. A deeply integrated mode for the SINS/CNS navigation system is proposed to improve the navigation accuracy of ballistic missiles. The deeply integrated navigation principle is described and the observability of the navigation system is analyzed. Nonlinearity, as well as large outliers and Gaussian mixture noises, often exists during the actual navigation process, leading to divergence of the navigation filter. A new nonlinear Kalman filter based on maximum correntropy theory and the unscented transformation, named the maximum correntropy unscented Kalman filter, is derived, and its computational complexity is analyzed. The unscented transformation is used to handle the nonlinearity of the system equation, and the maximum correntropy criterion is used to deal with the non-Gaussian noises. Finally, numerical simulation illustrates the superiority of the proposed filter compared with the traditional unscented Kalman filter. The comparison results show that the influence of large outliers and non-Gaussian noises on SINS/CNS deeply integrated navigation is significantly reduced by the proposed filter.
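
    The robustness mechanism behind maximum correntropy can be illustrated outside the full UKF setting: a Gaussian kernel assigns a weight near 1 to small residuals and near 0 to large outliers. The sketch below (a correntropy-weighted mean, a hypothetical toy estimator, not the authors' SINS/CNS filter) shows how this suppresses a gross outlier:

```python
import math

def correntropy_weight(residual: float, sigma: float = 2.0) -> float:
    """Gaussian-kernel weight exp(-r^2 / (2 sigma^2)): ~1 for small residuals,
    ~0 for large outliers, which is what de-emphasizes non-Gaussian noise."""
    return math.exp(-residual ** 2 / (2.0 * sigma ** 2))

def robust_mean(measurements, sigma: float = 2.0, iters: int = 10) -> float:
    """Fixed-point iteration: repeatedly re-estimate a weighted mean with
    correntropy weights computed from the current residuals."""
    est = sum(measurements) / len(measurements)
    for _ in range(iters):
        w = [correntropy_weight(z - est, sigma) for z in measurements]
        est = sum(wi * zi for wi, zi in zip(w, measurements)) / sum(w)
    return est

data = [1.0, 1.1, 0.9, 1.05, 10.0]  # last measurement is a gross outlier
print(robust_mean(data))            # close to 1, largely ignoring the outlier
```

    A plain mean of `data` is 2.81; the correntropy-weighted estimate stays near the inlier cluster, which is the same behavior the filter above exploits against outliers in the measurement noise.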

  3. Reinterpreting maximum entropy in ecology: a null hypothesis constrained by ecological mechanism.

    Science.gov (United States)

    O'Dwyer, James P; Rominger, Andrew; Xiao, Xiao

    2017-07-01

    Simplified mechanistic models in ecology have been criticised for the fact that a good fit to data does not imply the mechanism is true: pattern does not equal process. In parallel, the maximum entropy principle (MaxEnt) has been applied in ecology to make predictions constrained by just a handful of state variables, like total abundance or species richness. But an outstanding question remains: what principle tells us which state variables to constrain? Here we attempt to solve both problems simultaneously, by translating a given set of mechanisms into the state variables to be used in MaxEnt, and then using this MaxEnt theory as a null model against which to compare mechanistic predictions. In particular, we identify the sufficient statistics needed to parametrise a given mechanistic model from data and use them as MaxEnt constraints. Our approach isolates exactly what mechanism is telling us over and above the state variables alone. © 2017 John Wiley & Sons Ltd/CNRS.

  4. Effective medium theory principles and applications

    CERN Document Server

    Choy, Tuck C

    2015-01-01

    Effective medium theory dates back to the early days of the theory of electricity. Faraday in 1837 proposed one of the earliest models for a composite metal-insulator dielectric and around 1870 Maxwell and later Garnett (1904) developed models to describe a composite or mixed material medium. The subject has been developed considerably since and while the results are useful for predicting materials performance, the theory can also be used in a wide range of problems in physics and materials engineering. This book develops the topic of effective medium theory by bringing together the essentials of both the static and the dynamical theory. Electromagnetic systems are thoroughly dealt with, as well as related areas such as the CPA theory of alloys, liquids, the density functional theory etc., with applications to ultrasonics, hydrodynamics, superconductors, porous media and others, where the unifying aspects of the effective medium concept are emphasized. In this new second edition two further chapters have been...

  5. Modeling of the Maximum Entropy Problem as an Optimal Control Problem and its Application to Pdf Estimation of Electricity Price

    Directory of Open Access Journals (Sweden)

    M. E. Haji Abadi

    2013-09-01

    In this paper, continuous optimal control theory is used to model and solve the maximum entropy problem for a continuous random variable. The maximum entropy principle provides a method to obtain the least-biased probability density function (Pdf) estimate. To find a closed-form solution for the maximum entropy problem with any number of moment constraints, the entropy is considered as a functional measure and the moment constraints are considered as the state equations. Therefore, the Pdf estimation problem can be reformulated as an optimal control problem. Finally, the proposed method is applied to estimate the Pdf of the hourly electricity prices of the New England and Ontario electricity markets. The obtained results show the efficiency of the proposed method.
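
    As a hedged numerical aside (a textbook illustration, not the paper's optimal-control formulation): with a single mean constraint on [0, ∞), the maximum entropy pdf is the exponential distribution. This can be checked on a grid by comparing its differential entropy against any other candidate with the same mean:

```python
import numpy as np

x = np.linspace(1e-6, 50.0, 200001)   # grid on [0, 50]; tails beyond are negligible
dx = x[1] - x[0]
mu = 2.0                              # the fixed mean constraint

def entropy(p):
    """Riemann-sum approximation of the differential entropy -∫ p ln p dx."""
    p = np.clip(p, 1e-300, None)      # guard log(0)
    return -np.sum(p * np.log(p)) * dx

p_exp = np.exp(-x / mu) / mu          # maxent pdf for mean mu on [0, inf)
p_gamma = x * np.exp(-x)              # Gamma(k=2, theta=1): also has mean 2

print(entropy(p_exp) > entropy(p_gamma))  # True: the exponential wins
```

    Analytically the entropies are 1 + ln 2 ≈ 1.69 for the exponential and 2 − ψ(2) ≈ 1.58 for the Gamma candidate, consistent with the grid check.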

  6. Theory-Generating Practice: Proposing a principle for learning design

    Directory of Open Access Journals (Sweden)

    Mie Buhl

    2016-06-01

    This contribution proposes a principle for learning design: Theory-Generating Practice (TGP), as an alternative to the way university courses are often taught and structured, with a series of theoretical lectures separate from practical experience, concluding with an exam or a project. The aim is to contribute to the development of theoretical frameworks for learning designs by suggesting TGP, which may lead to new practices and turn the traditional dramaturgy of teaching upside down. TGP focuses on embodied experience prior to text reading and lectures to enhance theoretical knowledge building, and takes tacit knowledge into account. The article introduces TGP and contextualizes it within a Danish tradition of didactics, as well as discusses it in relation to contemporary conceptual currents of didactic design and learning design. This is followed by a theoretical framing of TGP, which is discussed through three empirical examples from bachelor and master programs involving technology, showing three ways of practicing it.

  8. Optimal Control of Hypersonic Planning Maneuvers Based on Pontryagin’s Maximum Principle

    Directory of Open Access Journals (Sweden)

    A. Yu. Melnikov

    2015-01-01

    The work objective is the synthesis of a simple analytical formula for the optimal roll angle of hypersonic gliding vehicles under conditions of quasi-horizontal motion, allowing its practical implementation in onboard control algorithms. The introduction justifies relevance, formulates basic control tasks, and describes the history of scientific research and achievements in the field concerned. The author reveals a common disadvantage of other authors' methods, i.e. the problem of practical implementation in onboard control algorithms. The similar tasks of hypersonic maneuvers are systematized according to the type of maneuver, control parameters and limitations. In the statement of the problem, the glider, launched horizontally with a suborbital speed, glides passively in the static atmosphere on a spherical surface of constant radius in the central field of gravitation. The work specifies a system of equations of motion in the inertial spherical coordinate system, sets the limits on the roll angle, and states the optimization criteria at the end of the flight: maximum speed or azimuth, and the minimum distances to specified geocentric points. The solution: 1) The system of equations of motion is transformed by replacing the time argument with another independent argument, the normal equilibrium overload. The Hamiltonian and the equations of the adjoint parameters are obtained using the Pontryagin maximum principle. The number of equations of motion and of the adjoint vector is reduced. 2) The adjoint parameters are expressed by formulas using the current motion parameters. The formulas are proved through differentiation and substitution into the equations of motion. 3) The formula for optimal roll-position control is obtained from the maximum condition. After substitution of the adjoint parameters, insertion of constants, and trigonometric transformations, the formula for the optimal roll angle is obtained as a function of the current parameters of motion. The roll angle is expressed as the ratio
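
    The structure of such maximum-principle solutions can be illustrated on the standard textbook example (not the authors' glider model): time-optimal control of a double integrator x'' = u with |u| ≤ 1, where Pontryagin's maximum principle yields a bang-bang law with switching curve x = -v|v|/2. A minimal simulation sketch:

```python
def bang_bang(x: float, v: float) -> float:
    """Bang-bang control from the maximum principle: switch on the sign of
    s = x + v*|v|/2 (the switching-curve test for the double integrator)."""
    s = x + v * abs(v) / 2.0
    if s > 0.0:
        return -1.0
    if s < 0.0:
        return 1.0
    return 1.0 if v < 0 else (-1.0 if v > 0 else 0.0)

def simulate(x0: float, v0: float, dt: float = 1e-3, t_max: float = 20.0) -> float:
    """Semi-implicit Euler integration of x'' = u until the state reaches a
    small ball around the origin; returns the elapsed time."""
    x, v, t = x0, v0, 0.0
    while (x * x + v * v) > 1e-4 and t < t_max:
        u = bang_bang(x, v)
        v += u * dt
        x += v * dt
        t += dt
    return t

# Starting from rest at x0 = 1, the analytic minimum time is 2*sqrt(x0) = 2.
print(simulate(1.0, 0.0))
```

    The simulated time approaches the analytic optimum of 2 s, with one switch of the control at t = 1, mirroring the paper's pattern of deriving the control law from the maximum condition on the Hamiltonian.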

  9. On the impossibility of a small violation of the Pauli principle within the local quantum field theory

    International Nuclear Information System (INIS)

    Govorkov, A.B.

    1988-01-01

    It is shown that the local quantum field theory of free fields allows only those generalizations of the conventional quantizations (corresponding to the Fermi and Bose statistics) that correspond to the para-Fermi and para-Bose statistics, and does not permit a 'small' violation of the Pauli principle.

  10. [Inheritance and innovation of traditional Chinese medicine (TCM) flavor theory and the TCM flavor standardization principle in Compendium of Materia Medica].

    Science.gov (United States)

    Zhang, Wei; Zhang, Rui-xian; Li, Jian

    2015-12-01

    All previous literature on Chinese herbal medicines shows distinctive traditional Chinese medicine (TCM) flavors. Compendium of Materia Medica is an influential book in TCM history. The TCM flavor theory and flavor standardization principle in this book have important significance for modern TCM flavor standardization. Compendium of Materia Medica pays attention to the flavor theory, explaining the relations between the flavor of a medicine and its therapeutic effects by means of the Neo-Confucianism of the Song and Ming Dynasties. However, the book does not reflect or further develop the systemic theory that originated in the Jin and Yuan dynasties. In Compendium of Materia Medica, flavors are standardized just by tasting medicines, instead of deducing flavors. Therefore, medicine tasting should be adopted as the major method to standardize the flavor of medicines.

  11. The effect of implementing cognitive load theory-based design principles in virtual reality simulation training of surgical skills: a randomized controlled trial

    DEFF Research Database (Denmark)

    Andersen, Steven Arild Wuyts; Mikkelsen, Peter Trier; Konge, Lars

    2016-01-01

    Background Cognitive overload can inhibit learning, and cognitive load theory-based instructional design principles can be used to optimize learning situations. This study aims to investigate the effect of implementing cognitive load theory-based design principles in virtual reality simulation training of mastoidectomy. Methods Eighteen novice medical students received 1 h of self-directed virtual reality simulation training of the mastoidectomy procedure, randomized for standard instructions (control) or cognitive load theory-based instructions with a worked example followed by a problem... Increased cognitive load when part tasks needed to be integrated in the post-training procedures could be a possible explanation for this. Other instructional designs and methods are needed to lower the cognitive load and improve the performance in virtual reality surgical simulation training of novices.

  12. Improving the Quality of Online Discussion: The Effects of Strategies Designed Based on Cognitive Load Theory Principles

    Science.gov (United States)

    Darabi, Aubteen; Jin, Li

    2013-01-01

    This article focuses on heavy cognitive load as the reason for the lack of quality associated with conventional online discussion. Using the principles of cognitive load theory, four online discussion strategies were designed specifically aiming at reducing the discussants' cognitive load and thus enhancing the quality of their online discussion.…

  13. General principles of quantum mechanics

    International Nuclear Information System (INIS)

    Pauli, W.

    1980-01-01

    This book is a textbook for a course in quantum mechanics. Starting from complementarity and the uncertainty principle, Schroedinger's equation is introduced together with the operator calculus. Stationary states are then treated as eigenvalue problems, and matrix mechanics is briefly discussed. Thereafter the theory of measurements is considered. Perturbation theory and the WKB approximation are introduced as approximation methods. Then identical particles, spin, and the exclusion principle are discussed, followed by the semiclassical theory of radiation and the relativistic one-particle problem. Finally an introduction is given into quantum electrodynamics. (HSI)

  14. Experiences from Participants in Large-Scale Group Practice of the Maharishi Transcendental Meditation and TM-Sidhi Programs and Parallel Principles of Quantum Theory, Astrophysics, Quantum Cosmology, and String Theory: Interdisciplinary Qualitative Correspondences

    Science.gov (United States)

    Svenson, Eric Johan

    Participants on the Invincible America Assembly in Fairfield, Iowa, and neighboring Maharishi Vedic City, Iowa, practicing Maharishi Transcendental Meditation (TM) and the TM-Sidhi programs in large groups, submitted written experiences that they had had during, and in some cases shortly after, their daily practice of the TM and TM-Sidhi programs. Participants were instructed to include in their written experiences only what they observed and to leave out interpretation and analysis. These experiences were then read by the author and compared with principles and phenomena of modern physics, particularly with quantum theory, astrophysics, quantum cosmology, and string theory, as well as with defining characteristics of higher states of consciousness as described by Maharishi Vedic Science. In all cases, particular principles or phenomena of physics and qualities of higher states of consciousness appeared qualitatively quite similar to the content of the given experience. These experiences are presented in an Appendix, in which the corresponding principles and phenomena of physics are also presented. These physics "commentaries" on the experiences were written largely in layman's terms, without equations, and, in nearly every case, with clear reference to the corresponding sections of the experiences to which a given principle appears to relate. An abundance of similarities was apparent between the subjective experiences during meditation and principles of modern physics. A theoretic framework for understanding these rich similarities may begin with Maharishi's theory of higher states of consciousness provided herein. We conclude that the consistency and richness of detail found in these abundant similarities warrants the further pursuit and development of such a framework.

  15. Principles of Fourier analysis

    CERN Document Server

    Howell, Kenneth B

    2001-01-01

    Fourier analysis is one of the most useful and widely employed sets of tools for the engineer, the scientist, and the applied mathematician. As such, students and practitioners in these disciplines need a practical and mathematically solid introduction to its principles. They need straightforward verifications of its results and formulas, and they need clear indications of the limitations of those results and formulas. Principles of Fourier Analysis furnishes all this and more. It provides a comprehensive overview of the mathematical theory of Fourier analysis, including the development of Fourier series, "classical" Fourier transforms, generalized Fourier transforms and analysis, and the discrete theory. Much of the author's development is strikingly different from typical presentations. His approach to defining the classical Fourier transform results in a much cleaner, more coherent theory that leads naturally to a starting point for the generalized theory. He also introduces a new generalized theory based ...

  16. Bose-Einstein condensation of light: general theory.

    Science.gov (United States)

    Sob'yanin, Denis Nikolaevich

    2013-08-01

    A theory of Bose-Einstein condensation of light in a dye-filled optical microcavity is presented. The theory is based on the hierarchical maximum entropy principle and allows one to investigate the fluctuating behavior of the photon gas in the microcavity for all numbers of photons, dye molecules, and excitations at all temperatures, including the whole critical region. The master equation describing the interaction between photons and dye molecules in the microcavity is derived and the equivalence between the hierarchical maximum entropy principle and the master equation approach is shown. The cases of a fixed mean total photon number and a fixed total excitation number are considered, and a much sharper, nonparabolic onset of a macroscopic Bose-Einstein condensation of light in the latter case is demonstrated. The theory does not use the grand canonical approximation, takes into account the photon polarization degeneracy, and exactly describes the microscopic, mesoscopic, and macroscopic Bose-Einstein condensation of light. Under certain conditions, it predicts sub-Poissonian statistics of the photon condensate and the polarized photon condensate, and a universal relation takes place between the degrees of second-order coherence for these condensates. In the macroscopic case, there appear a sharp jump in the degrees of second-order coherence, a sharp jump and kink in the reduced standard deviations of the fluctuating numbers of photons in the polarized and whole condensates, and a sharp peak, a cusp, of the Mandel parameter for the whole condensate in the critical region. The possibility of nonclassical light generation in the microcavity with the photon Bose-Einstein condensate is predicted.

  17. J/ψ+χcJ production at the B factories under the principle of maximum conformality

    International Nuclear Information System (INIS)

    Wang, Sheng-Quan; Wu, Xing-Gang; Zheng, Xu-Chang; Shen, Jian-Ming; Zhang, Qiong-Lian

    2013-01-01

    Under the conventional scale setting, the renormalization scale uncertainty usually constitutes a systematic error in a fixed-order perturbative QCD estimation. The recently suggested principle of maximum conformality (PMC) provides a way to eliminate such scale ambiguity in a step-by-step manner. Using the PMC, all non-conformal terms in the perturbative expansion series are summed into the running coupling, and one obtains a unique, scale-fixed, scheme-independent prediction at any finite order. In the paper, we make a detailed PMC analysis of both the polarized and the unpolarized cross sections for the double charmonium production process e⁺ + e⁻ → J/ψ(ψ′) + χ_cJ with J = 0, 1, 2. The running behavior of the coupling constant, governed by the PMC scales, is determined exactly for the specific processes. We compare our predictions with the measurements at the B factories, BaBar and Belle, and with the theoretical estimations obtained in the literature. Because the non-conformal terms differ among the various polarized and unpolarized cross sections, the PMC scales of these cross sections are different in principle. It is found that all the PMC scales are almost independent of the initial choice of renormalization scale. Thus, the large renormalization scale uncertainty usually adopted in the literature, up to ∼40% at the NLO level under the conventional scale setting for both the polarized and the unpolarized cross sections, is greatly suppressed. It is found that the charmonium production is dominated by the J = 0 channel. After PMC scale setting, we obtain σ(J/ψ + χ_c0) = 12.25^{+3.70}_{-3.13} fb and σ(ψ′ + χ_c0) = 5.23^{+1.56}_{-1.32} fb, where the errors are squared averages of those caused by the bound-state parameters m_c, |R_J/ψ(0)| and |R′_χcJ(0)|, which are non-perturbative error sources distinct from the QCD scale-setting problem. In comparison to the experimental data, a more accurate theoretical estimation shall be helpful for a precise

  18. Mach's principle and space-time structure

    International Nuclear Information System (INIS)

    Raine, D.J.

    1981-01-01

    Mach's principle, that inertial forces should be generated by the motion of a body relative to the bulk of matter in the universe, is shown to be related to the structure imposed on space-time by dynamical theories. General relativity theory and Mach's principle are both shown to be well supported by observations. Since Mach's principle is not contained in general relativity this leads to a discussion of attempts to derive Machian theories. The most promising of these appears to be a selection rule for solutions of the general relativistic field equations, in which the space-time metric structure is generated by the matter content of the universe only in a well-defined way. (author)

  19. Evidence of seasonal variation in longitudinal growth of height in a sample of boys from Stuttgart Carlsschule, 1771-1793, using combined principal component analysis and maximum likelihood principle.

    Science.gov (United States)

    Lehmann, A; Scheffler, Ch; Hermanussen, M

    2010-02-01

    Recent progress in modelling individual growth has been achieved by combining principal component analysis and the maximum likelihood principle. This combination models growth even in incomplete sets of data and in data obtained at irregular intervals. We re-analysed late 18th century longitudinal growth of German boys from the boarding school Carlsschule in Stuttgart. The boys, aged 6-23 years, were measured at irregular 3-12 monthly intervals during the period 1771-1793. At the age of 18 years, mean height was 1652 mm, but height variation was large: the shortest boy reached 1474 mm, the tallest 1826 mm. Measured height closely paralleled modelled height, with a mean difference of 4 mm (SD 7 mm). Seasonal height variation was found, with low growth rates in spring and high growth rates in summer and autumn. The present study demonstrates that combining principal component analysis and the maximum likelihood principle enables growth modelling also in historical height data. Copyright (c) 2009 Elsevier GmbH. All rights reserved.
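The combination described, a mean curve plus principal components with component scores fitted by maximum likelihood from incomplete measurements, can be sketched on synthetic data. All numbers, the linear mean curve, and the single component below are invented for illustration; with Gaussian errors, the maximum likelihood fit reduces to least squares on the observed ages only:

```python
import numpy as np

# Model each individual's growth curve as the population mean curve plus a
# few principal components; estimate the individual's component scores from
# whatever irregular measurements exist, then reconstruct the full curve.
ages = np.arange(6, 24)                        # ages 6..23, as in the study
mean_curve = 800.0 + 50.0 * (ages - 6)         # assumed linear mean, in mm
pc = np.sin(np.pi * (ages - 6) / 17)[:, None]  # one assumed component

# Simulate a boy measured only at a few irregular ages (noise-free here).
true_score = 30.0
full = mean_curve + pc @ np.array([true_score])
observed_idx = np.array([0, 3, 7, 12, 16])     # irregular measurement ages
y = full[observed_idx]

# ML / least-squares estimate of the score from the incomplete series:
A = pc[observed_idx, :]
score_hat, *_ = np.linalg.lstsq(A, y - mean_curve[observed_idx], rcond=None)
reconstructed = mean_curve + pc @ score_hat    # modelled height at ALL ages
```

With noise-free synthetic data the score is recovered exactly, so the modelled curve fills in the unmeasured ages; with real, noisy data the same least-squares step yields the maximum likelihood estimate.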

  20. Coexistence of an unstirred chemostat model with B-D functional response by fixed point index theory

    Directory of Open Access Journals (Sweden)

    Xiao-zhou Feng

    2016-11-01

    Full Text Available Abstract This paper deals with an unstirred chemostat model with the Beddington-DeAngelis functional response. First, some prior estimates for positive solutions are proved by the maximum principle and the method of upper and lower solutions. Second, the fixed point index of the chemostat model is calculated by degree theory and the homotopy invariance theorem. Finally, some sufficient conditions for the existence of positive steady-state solutions are established by fixed point index theory and bifurcation theory.

  1. The principle of locality: Effectiveness, fate, and challenges

    International Nuclear Information System (INIS)

    Doplicher, Sergio

    2010-01-01

    The special theory of relativity and quantum mechanics merge in the key principle of quantum field theory, the principle of locality. We review some examples of its 'unreasonable effectiveness' in giving rise to most of the conceptual and structural frame of quantum field theory, especially in the absence of massless particles. This effectiveness shows up best in the formulation of quantum field theory in terms of operator algebras of local observables; this formulation is successful in digging out the roots of global gauge invariance, through the analysis of superselection structure and statistics, in the structure of the local observable quantities alone, at least for purely massive theories; but so far it seems unfit to cope with the principle of local gauge invariance. This problem emerges also if one attempts to figure out the fate of the principle of locality in theories describing the gravitational forces between elementary particles as well. An approach based on the need to keep an operational meaning, in terms of localization of events, of the notion of space-time, shows that, in the small, the latter must lose any meaning as a classical pseudo-Riemannian manifold, locally based on Minkowski space, but should acquire a quantum structure at the Planck scale. We review the geometry of a basic model of quantum space-time and some attempts to formulate interaction of quantum fields on quantum space-time. The principle of locality is necessarily lost at the Planck scale, and it is a crucial open problem to unravel a replacement in such theories which is equally mathematically sharp, namely, a principle where the general theory of relativity and quantum mechanics merge, which reduces to the principle of locality at larger scales. Besides exploring its fate, many challenges for the principle of locality remain; among them, the analysis of superselection structure and statistics also in the presence of massless particles, and to give a precise mathematical

  2. Venus atmosphere profile from a maximum entropy principle

    Directory of Open Access Journals (Sweden)

    L. N. Epele

    2007-10-01

    Full Text Available The variational method with constraints recently developed by Verkley and Gerkema to describe maximum-entropy atmospheric profiles is generalized to ideal gases with temperature-dependent specific heats. In so doing, an extended and non-standard potential temperature is introduced that is well suited for tackling the problem under consideration. This new formalism is successfully applied to the atmosphere of Venus. Three well-defined regions emerge in this atmosphere up to a height of 100 km from the surface: the lowest one, up to about 35 km, is adiabatic; a transition layer is located at the height of the cloud deck; and finally a third region which is practically isothermal.
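The constrained maximum-entropy idea underlying such profiles can be illustrated in a generic discrete setting; this is a textbook Gibbs-distribution example, unrelated to the Verkley and Gerkema formalism, with invented levels and mean value:

```python
import numpy as np

# Maximizing Shannon entropy over levels E_i subject to a fixed mean <E>
# yields the exponential-family form p_i ∝ exp(-b*E_i); the Lagrange
# multiplier b is found here by bisection on the resulting mean.
E = np.arange(5.0)           # assumed levels 0..4
target_mean = 1.0            # assumed constraint <E> = 1

def mean_of(b):
    w = np.exp(-b * E)
    p = w / w.sum()
    return p @ E

lo, hi = 0.0, 50.0           # mean_of(0) = 2 > 1 > mean_of(50) ≈ 0
for _ in range(200):         # bisection: mean_of is decreasing in b
    mid = 0.5 * (lo + hi)
    if mean_of(mid) > target_mean:
        lo = mid
    else:
        hi = mid

b = 0.5 * (lo + hi)
w = np.exp(-b * E)
p = w / w.sum()              # the maximum-entropy distribution
```

Any other distribution satisfying the same mean constraint has strictly lower entropy, which is the defining property the atmospheric-profile construction exploits (with continuous fields and more physical constraints).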

  3. Maximum Correntropy Unscented Kalman Filter for Ballistic Missile Navigation System based on SINS/CNS Deeply Integrated Mode

    Directory of Open Access Journals (Sweden)

    Bowen Hou

    2018-05-01

    Full Text Available Strap-down inertial navigation system/celestial navigation system (SINS/CNS) integrated navigation is a high precision navigation technique for ballistic missiles. The traditional navigation method suffers from divergence of the position error. A deeply integrated mode for the SINS/CNS navigation system is proposed to improve the navigation accuracy of ballistic missiles. The deeply integrated navigation principle is described and the observability of the navigation system is analyzed. Nonlinearity, as well as large outliers and Gaussian mixture noises, often exists during the actual navigation process, leading to divergence of the navigation filter. A new nonlinear Kalman filter based on maximum correntropy theory and the unscented transformation, named the maximum correntropy unscented Kalman filter, is deduced, and its computational complexity is analyzed. The unscented transformation is used to handle the nonlinearity of the system equation, and maximum correntropy theory is used to deal with the non-Gaussian noises. Finally, numerical simulation illustrates the superiority of the proposed filter compared with the traditional unscented Kalman filter. The comparison results show that the influence of large outliers and non-Gaussian noises on SINS/CNS deeply integrated navigation is significantly reduced by the proposed filter.
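The robustifying step of the maximum correntropy criterion can be sketched in a simplified scalar, linear setting; this is an illustrative stand-in, not the deduced MCUKF (the unscented transformation is omitted, and all names and values are invented). The innovation is reweighted by a Gaussian kernel, which inflates the effective measurement noise for outlying measurements:

```python
import numpy as np

def kf_update(x, P, z, H, R):
    # Standard Kalman measurement update for a scalar state/measurement.
    S = H * P * H + R
    K = P * H / S
    return x + K * (z - H * x), (1 - K * H) * P

def mcc_update(x, P, z, H, R, sigma=2.0, iters=10):
    # Maximum-correntropy-style update: the normalized innovation is
    # weighted by a Gaussian kernel exp(-e^2 / (2*sigma^2)); a small weight
    # inflates the effective measurement noise R/w, so gross outliers are
    # largely ignored (simple fixed-point iteration).
    x_new = x
    for _ in range(iters):
        e = (z - H * x_new) / np.sqrt(R)
        w = np.exp(-e**2 / (2 * sigma**2))       # kernel weight in (0, 1]
        x_new, _ = kf_update(x, P, z, H, R / max(w, 1e-12))
    return x_new

x0, P0 = 0.0, 1.0        # prior state and variance
H, R = 1.0, 1.0
z_outlier = 50.0         # grossly outlying measurement

x_kf, _ = kf_update(x0, P0, z_outlier, H, R)   # dragged toward the outlier
x_mcc = mcc_update(x0, P0, z_outlier, H, R)    # stays near the prior
```

For a well-behaved measurement the kernel weight is close to 1 and the update behaves like the ordinary Kalman filter; only outliers are down-weighted, which is the mechanism behind the reduced divergence reported in the abstract.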

  4. Principles of e-learning systems engineering

    CERN Document Server

    Gilbert, Lester

    2008-01-01

    The book integrates the principles of software engineering with the principles of educational theory, and applies them to the problems of e-learning development, thus establishing the discipline of E-learning systems engineering. For the first time, these principles are collected and organised into the coherent framework that this book provides. Both newcomers to and established practitioners in the field are provided with integrated and grounded advice on theory and practice. The book presents strong practical and theoretical frameworks for the design and development of technology-based mater

  5. Principle of coincidence method and application in activity measurement

    International Nuclear Information System (INIS)

    Li Mou; Dai Yihua; Ni Jianzhong

    2008-01-01

    The basic principle of the coincidence method is discussed. The principle is generalized by analysing a concrete example, and the theoretical conditions for the coincidence method are put forward. The causes of variation of the efficiency curve and the effect of dead time in activity measurement are explained using the above principle and conditions. This principle of the coincidence method provides the theoretical foundation for activity measurement. (authors)

  6. String field theory

    International Nuclear Information System (INIS)

    Kaku, M.

    1987-01-01

    In this article, the authors summarize the rapid progress in constructing string field theory actions, such as the development of the covariant BRST theory. They also present the newer geometric formulation of string field theory, from which the BRST theory and the older light-cone theory can be derived from first principles. This geometric formulation allows one to derive the complete field theory of strings from two geometric principles, in the same way that general relativity and Yang-Mills theory can be derived from two principles based on global and local symmetry. The geometric formalism therefore reduces string field theory to the problem of finding an invariant under a new local gauge group the authors call the universal string group (USG). Thus, string field theory is the gauge theory of the universal string group in much the same way that Yang-Mills theory is the gauge theory of SU(N). The geometric formulation places superstring theory on the same rigorous group-theoretical level as general relativity and gauge theory

  7. Elements of a compatible optimization theory for coupled systems; Elements d'une theorie de l'optimisation compatible de systemes couples

    Energy Technology Data Exchange (ETDEWEB)

    Bonnemay, A. [Commissariat a l' Energie Atomique, Saclay (France). Centre d' Etudes Nucleaires

    1969-07-01

    The first thesis deals with compatible optimization in coupled systems. A game theory for two players with a non-zero sum is first developed. Its conclusions are then extended to games with any finite number of players. After this essentially static study, the dynamic aspect of the problem is introduced through evolving games. Applying the Pontryagin maximum principle yields a compatible optimality theorem which constitutes a necessary condition. (author)

  8. Algorithmic Principles of Mathematical Programming

    NARCIS (Netherlands)

    Faigle, Ulrich; Kern, Walter; Still, Georg

    2002-01-01

    Algorithmic Principles of Mathematical Programming investigates the mathematical structures and principles underlying the design of efficient algorithms for optimization problems. Recent advances in algorithmic theory have shown that the traditionally separate areas of discrete optimization, linear

  9. Reformulation of a stochastic action principle for irregular dynamics

    International Nuclear Information System (INIS)

    Wang, Q.A.; Bangoup, S.; Dzangue, F.; Jeatsa, A.; Tsobnang, F.; Le Mehaute, A.

    2009-01-01

    A stochastic action principle for random dynamics is revisited. Numerical diffusion experiments are carried out to show that the diffusion path probability depends exponentially on the Lagrangian action A = ∫_a^b L dt. This result is then used to derive the Shannon measure for path uncertainty. It is shown that the maximum entropy principle and the least action principle of classical mechanics can be unified into δĀ = 0, where the average Ā is calculated over all possible paths of the stochastic motion between two configuration points a and b. It is argued that this action principle and the maximum entropy principle are a consequence of the mechanical equilibrium condition extended to the case of stochastic dynamics.
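The exponential dependence of the path probability on the action A = ∫ L dt can be checked in a minimal discretized sketch; the free-Brownian discretization and all parameters below are assumptions for illustration, not the paper's actual numerical experiment:

```python
import numpy as np

# For a free Brownian particle, the weight of a discrete path x_0..x_n is
# prod_k exp(-(dx_k)^2 / (2*D*dt)), i.e. exp(-A/D) with the discretized
# action A = sum_k 0.5 * (dx_k/dt)^2 * dt  (Lagrangian L = v^2 / 2).
def action(path, dt):
    v = np.diff(path) / dt
    return np.sum(0.5 * v**2 * dt)

def log_path_weight(path, dt, D=1.0):
    return -action(path, dt) / D   # log-probability up to normalization

dt = 0.01
t = np.linspace(0.0, 1.0, 101)

straight = t.copy()                        # uniform motion from 0 to 1
wiggly = t + 0.1 * np.sin(8 * np.pi * t)   # same endpoints, detour added

# The straight path minimizes the action and therefore carries the
# largest weight: least action = most probable path.
```

Between fixed endpoints, the least-action path is exactly the most probable one under this exponential weight, which is the link between the least action principle and the maximum entropy principle that the abstract describes.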

  10. Principles of physics from quantum field theory to classical mechanics

    CERN Document Server

    Jun, Ni

    2014-01-01

    This book starts from a set of common basic principles to establish the formalisms in all areas of fundamental physics, including quantum field theory, quantum mechanics, statistical mechanics, thermodynamics, general relativity, electromagnetic field, and classical mechanics. Instead of the traditional pedagogic way, the author arranges the subjects and formalisms in a logical-sequential way, i.e. all the formulas are derived from the formulas before them. The formalisms are also kept self-contained. Most of the required mathematical tools are also given in the appendices. Although this book covers all the disciplines of fundamental physics, the book is concise and can be treated as an integrated entity. This is consistent with the aphorism that simplicity is beauty, unification is beauty, and thus physics is beauty. The book may be used as an advanced textbook by graduate students. It is also suitable for physicists who wish to have an overview of fundamental physics. Readership: This is an advanced gradua...

  11. Principle of minimum distance in space of states as new principle in quantum physics

    International Nuclear Information System (INIS)

    Ion, D. B.; Ion, M. L. D.

    2007-01-01

    The mathematician Leonhard Euler (1707-1783) appears to have been a philosophical optimist, having written: 'Since the fabric of the universe is the most perfect and is the work of the most wise Creator, nothing whatsoever takes place in this universe in which some relation of maximum or minimum does not appear. Wherefore, there is absolutely no doubt that every effect in the universe can be explained as satisfactorily from final causes, by the aid of the method of maxima and minima, as it can from the effective causes themselves.' With this kind of optimism in mind, in the papers mentioned in this work we introduced and investigated the possibility of constructing a predictive analytic theory of elementary particle interactions based on the principle of minimum distance in the space of quantum states (PMD-SQS). Choosing the partial transition amplitudes as the system variational variables and the distance in the space of the quantum states as a measure of the system effectiveness, we obtained the results presented in this paper. These results prove that the principle of minimum distance in the space of quantum states (PMD-SQS) can be chosen as a variational principle by which one can find the analytic expressions of the partial transition amplitudes. In this paper we present a description of hadron-hadron scattering via the PMD-SQS when the distance in the space of states is minimized with two directional constraints: dσ/dΩ(±1) = fixed. Then, by using the available experimental (pion-nucleon and kaon-nucleon) phase shifts, we obtained not only consistent experimental tests of the PMD-SQS optimality, but also strong experimental evidence for new principles in hadronic physics, such as the principle of nonextensivity conjugation via the Riesz-Thorin relation (1/2p + 1/2q = 1) and a new principle of limited uncertainty in nonextensive quantum physics. The strong experimental evidence obtained here for the nonextensive statistical behavior of the [J,

  12. Right to Place: A Political Theory of Animal Rights in Harmony with Environmental and Ecological Principles

    Directory of Open Access Journals (Sweden)

    Eleni Panagiotarakou

    2014-09-01

    Full Text Available The focus of this paper is on the "right to place" as a political theory of wild animal rights. Out of the debate between terrestrial cosmopolitans inspired by Kant and Arendt and rooted cosmopolitan animal rights theorists, the right to place emerges from the fold of rooted cosmopolitanism in tandem with environmental and ecological principles. Contrary to terrestrial cosmopolitans, who favour extending citizenship rights to wild animals while advocating large-scale humanitarian interventions and unrestricted geographical mobility, I argue that the well-being of wild animals is best served by the right to place theory on account of its sovereignty model. The right to place theory advocates human non-interference in wildlife communities, opposing even humanitarian interventions, which carry the risk of unintended consequences. The right to place theory, with its emphasis on territorial sovereignty, bases its opposition to unrestricted geographical mobility on two considerations: (a) the non-generalist nature of many species and (b) the potential for abuse via human encroachment. In a broader context, the advantage of the right to place theory lies in its implicit environmental demands: human population control and sustainable lifestyles.

  13. The principle of equivalence

    International Nuclear Information System (INIS)

    Unnikrishnan, C.S.

    1994-01-01

    Principle of equivalence was the fundamental guiding principle in the formulation of the general theory of relativity. What are its key elements? What are the empirical observations which establish it? What is its relevance to some new experiments? These questions are discussed in this article. (author). 11 refs., 5 figs

  14. Exact multiple scattering theory of two-nucleus collisions including the Pauli principle

    International Nuclear Information System (INIS)

    Gurvitz, S.A.

    1981-01-01

    Exact equations for two-nucleus scattering are derived in which the effects of the Pauli principle are fully included. Our method exploits a modified equation for the scattering of two identical nucleons, which is obtained first. Considering proton-nucleus scattering, we find that the resulting amplitude has two components: one resembling a multiple scattering series for distinguishable particles, and the other a distorted (A-1)-nucleon cluster exchange. For elastic pA scattering the multiple scattering amplitude is found in the form of an optical potential expansion. We show that the Kerman-McManus-Thaler theory of the optical potential can easily be modified to include the effects of antisymmetrization of the projectile with the target nucleons. Nucleus-nucleus scattering is studied first for distinguishable target and beam nuclei. Afterwards the Pauli principle is included, and only the case of deuteron-nucleus scattering is discussed in detail. The resulting amplitude has four components: two of them correspond to modified multiple scattering expansions, and the others are distorted (A-1)- and (A-2)-nucleon cluster exchanges. The result for d-A scattering is extended to the general case of nucleus-nucleus scattering. The equations are simple to use and as such constitute an improvement over existing schemes.

  15. Variational principles for locally variational forms

    International Nuclear Information System (INIS)

    Brajercik, J.; Krupka, D.

    2005-01-01

    We present the theory of higher order local variational principles in fibered manifolds, in which the fundamental global concept is a locally variational dynamical form. Any two Lepage forms, defining a local variational principle for this form, differ on intersection of their domains, by a variationally trivial form. In this sense, but in a different geometric setting, the local variational principles satisfy analogous properties as the variational functionals of the Chern-Simons type. The resulting theory of extremals and symmetries extends the first order theories of the Lagrange-Souriau form, presented by Grigore and Popp, and closed equivalents of the first order Euler-Lagrange forms of Hakova and Krupkova. Conceptually, our approach differs from Prieto, who uses the Poincare-Cartan forms, which do not have higher order global analogues

  16. Effects of Modality and Redundancy Principles on the Learning and Attitude of a Computer-Based Music Theory Lesson among Jordanian Primary Pupils

    Science.gov (United States)

    Aldalalah, Osamah Ahmad; Fong, Soon Fook

    2010-01-01

    The purpose of this study was to investigate the effects of modality and redundancy principles on the attitude and learning of music theory among primary pupils of different aptitudes in Jordan. The lesson of music theory was developed in three different modes, audio and image (AI), text with image (TI) and audio with image and text (AIT). The…

  17. Density-functional theory for internal magnetic fields

    Science.gov (United States)

    Tellgren, Erik I.

    2018-01-01

    A density-functional theory is developed based on the Maxwell-Schrödinger equation with an internal magnetic field in addition to the external electromagnetic potentials. The basic variables of this theory are the electron density and the total magnetic field, which can equivalently be represented as a physical current density. Hence, the theory can be regarded as a physical current density-functional theory and an alternative to the paramagnetic current density-functional theory due to Vignale and Rasolt. The energy functional has strong enough convexity properties to allow a formulation that generalizes Lieb's convex analysis formulation of standard density-functional theory. Several variational principles as well as a Hohenberg-Kohn-like mapping between potentials and ground-state densities follow from the underlying convex structure. Moreover, the energy functional can be regarded as the result of a standard approximation technique (Moreau-Yosida regularization) applied to the conventional Schrödinger ground-state energy, which imposes limits on the maximum curvature of the energy (with respect to the magnetic field) and enables construction of a (Fréchet) differentiable universal density functional.

  18. The η{sub c} decays into light hadrons using the principle of maximum conformality

    Energy Technology Data Exchange (ETDEWEB)

    Du, Bo-Lun; Wu, Xing-Gang; Zeng, Jun; Bu, Shi; Shen, Jian-Ming [Chongqing University, Department of Physics, Chongqing (China)

    2018-01-15

    In the paper, we analyze the η{sub c} decays into light hadrons with next-to-leading order QCD corrections by applying the principle of maximum conformality (PMC). The relativistic correction at the O(α{sub s}v{sup 2})-order level has been included in the discussion, which contributes about 10% to the ratio R. The PMC, which satisfies renormalization group invariance, is designed to obtain a scale-fixed and scheme-independent prediction at any fixed order. To avoid ambiguity in treating the n{sub f}-terms, we transform the usual MS pQCD series into the one under the minimal momentum space subtraction scheme. Compared with the prediction under conventional scale setting, R{sub Conv,mMOM-r} = (4.12{sup +0.30}{sub -0.28}) x 10{sup 3}, after applying the PMC we obtain R{sub PMC,mMOM-r} = (6.09{sup +0.62}{sub -0.55}) x 10{sup 3}, where the errors are squared averages of the ones caused by m{sub c} and Λ{sub mMOM}. The PMC prediction agrees with the recent PDG value within errors, i.e. R{sup exp} = (6.3 ± 0.5) x 10{sup 3}. Thus we attribute the mismatch between the conventional scale-setting prediction and the data to an improper choice of scale, which can be resolved by using the PMC. (orig.)

  19. Spatial data modelling and maximum entropy theory

    Czech Academy of Sciences Publication Activity Database

    Klimešová, Dana; Ocelíková, E.

    2005-01-01

    Roč. 51, č. 2 (2005), s. 80-83 ISSN 0139-570X Institutional research plan: CEZ:AV0Z10750506 Keywords : spatial data classification * distribution function * error distribution Subject RIV: BD - Theory of Information

  20. The harm principle as a mid-level principle?: three problems from the context of infectious disease control.

    Science.gov (United States)

    Krom, André

    2011-10-01

    Effective infectious disease control may require states to restrict the liberty of individuals. Since preventing harm to others is almost universally accepted as a legitimate (prima facie) reason for restricting the liberty of individuals, it seems plausible to employ a mid-level harm principle in infectious disease control. Moral practices like infectious disease control support - or even require - a certain level of theory-modesty. However, employing a mid-level harm principle in infectious disease control faces at least three problems. First, it is unclear what we gain by attaining convergence on a specific formulation of the harm principle. Likely candidates for convergence, a harm principle aimed at preventing harmful conduct, supplemented by considerations of effectiveness and always choosing the least intrusive means, still leave ample room for normative disagreement. Second, while mid-level principles are sometimes put forward in response to the problem of normative theories attaching different weight to moral principles, employing a mid-level harm principle completely leaves open how to determine what weight to attach to it in application. Third, there appears to be a trade-off between attaining convergence and finding a formulation of the harm principle that can justify liberty-restrictions in all situations of contagion, including interventions that are commonly allowed. These are not reasons to abandon mid-level theorizing altogether. But there is no reason to be too theory-modest in applied ethics. Morally justifying, for example, whether a liberty-restriction in infectious disease control is proportional to the aim of harm prevention quickly requires moving beyond the mid-level harm principle. © 2011 Blackwell Publishing Ltd.

  1. Maximum entropy production: Can it be used to constrain conceptual hydrological models?

    Science.gov (United States)

    M.C. Westhoff; E. Zehe

    2013-01-01

    In recent years, optimality principles have been proposed to constrain hydrological models. The principle of maximum entropy production (MEP) is one of the proposed principles and is subject of this study. It states that a steady state system is organized in such a way that entropy production is maximized. Although successful applications have been reported in...

  2. First-principles Theory of Magnetic Multipoles in Condensed Matter Systems

    Science.gov (United States)

    Suzuki, Michi-To; Ikeda, Hiroaki; Oppeneer, Peter M.

    2018-04-01

    The multipole concept, which characterizes the spatial distribution of scalar and vector objects by their angular dependence, has already become widely used in various areas of physics. In recent years it has become employed to systematically classify the anisotropic distribution of electrons and magnetization around atoms in solid state materials. This has been fuelled by the discovery of several physical phenomena that exhibit unusual higher rank multipole moments, beyond the conventional degrees of freedom such as charge and magnetic dipole moment. Moreover, the higher rank electric/magnetic multipole moments have been suggested as promising order parameters in exotic hidden order phases. While the experimental investigations of such anomalous phases have provided encouraging observations of multipolar order, theoretical approaches have developed at a slower pace. In particular, a materials-specific theory has been missing. The multipole concept has furthermore been recognized as the key quantity which characterizes the resultant configuration of magnetic moments in a cluster of atomic moments. This cluster multipole moment has then been introduced as a macroscopic order parameter for a noncollinear antiferromagnetic structure in crystals that can explain unusual physical phenomena whose appearance is determined by the magnetic point group symmetry. It is the purpose of this review to discuss the recent developments in the first-principles theory investigating multipolar degrees of freedom in condensed matter systems. These recent developments exemplify that ab initio electronic structure calculations can unveil detailed insight into the mechanism of physical phenomena caused by the unconventional, multipole degree of freedom.

  3. On the surprising rigidity of the Pauli exclusion principle

    International Nuclear Information System (INIS)

    Greenberg, O.W.

    1989-01-01

    I review recent attempts to construct a local quantum field theory of small violations of the Pauli exclusion principle and suggest a qualitative reason for the surprising rigidity of the Pauli principle. I suggest that small violations can occur in our four-dimensional world as a consequence of the compactification of a higher-dimensional theory in which the exclusion principle is exactly valid. I briefly mention a recent experiment which places a severe limit on possible violations of the exclusion principle. (orig.)

  4. Gauge theories

    International Nuclear Information System (INIS)

    Kenyon, I.R.

    1986-01-01

    Modern theories of the interactions between fundamental particles are all gauge theories. In the case of gravitation, application of this principle to space-time leads to Einstein's theory of general relativity. All the other interactions involve the application of the gauge principle to internal spaces. Electromagnetism serves to introduce the idea of a gauge field, in this case the electromagnetic field. The next example, the strong force, shows unique features at long and short range which have their origin in the self-coupling of the gauge fields. Finally the unification of the description of the superficially dissimilar electromagnetic and weak nuclear forces completes the picture of successes of the gauge principle. (author)

  5. Mach's holographic principle

    International Nuclear Information System (INIS)

    Khoury, Justin; Parikh, Maulik

    2009-01-01

    Mach's principle is the proposition that inertial frames are determined by matter. We put forth and implement a precise correspondence between matter and geometry that realizes Mach's principle. Einstein's equations are not modified and no selection principle is applied to their solutions; Mach's principle is realized wholly within Einstein's general theory of relativity. The key insight is the observation that, in addition to bulk matter, one can also add boundary matter. Given a space-time, and thus the inertial frames, we can read off both boundary and bulk stress tensors, thereby relating matter and geometry. We consider some global conditions that are necessary for the space-time to be reconstructible, in principle, from bulk and boundary matter. Our framework is similar to that of the black hole membrane paradigm and, in asymptotically anti-de Sitter space-times, is consistent with holographic duality.

  6. Variational principles

    CERN Document Server

    Moiseiwitsch, B L

    2004-01-01

    This graduate-level text's primary objective is to demonstrate the expression of the equations of the various branches of mathematical physics in the succinct and elegant form of variational principles (and thereby illuminate their interrelationship). Its related intentions are to show how variational principles may be employed to determine the discrete eigenvalues for stationary state problems and to illustrate how to find the values of quantities (such as the phase shifts) that arise in the theory of scattering. Chapter-by-chapter treatment consists of analytical dynamics; optics, wave mecha

  7. Using the Music Industry To Teach Economic Principles.

    Science.gov (United States)

    Stamm, K. Brad

    The key purpose of this paper is to provide economics and business professors, particularly those teaching principles courses, with concrete examples of economic theory applied to the music industry. A second objective is to further the interest in economic theory among business majors and expose non-majors to economic principles via real world…

  8. On the maximum entropy distributions of inherently positive nuclear data

    Energy Technology Data Exchange (ETDEWEB)

    Taavitsainen, A., E-mail: aapo.taavitsainen@gmail.com; Vanhanen, R.

    2017-05-11

    The multivariate log-normal distribution is used by many authors and statistical uncertainty propagation programs for inherently positive quantities. Sometimes it is claimed that the log-normal distribution results from the maximum entropy principle, if only means, covariances and inherent positiveness of quantities are known or assumed to be known. In this article we show that this is not true. Assuming a constant prior distribution, the maximum entropy distribution is in fact a truncated multivariate normal distribution – whenever it exists. However, its practical application to multidimensional cases is hindered by lack of a method to compute its location and scale parameters from means and covariances. Therefore, regardless of its theoretical disadvantage, use of other distributions seems to be a practical necessity. - Highlights: • Statistical uncertainty propagation requires a sampling distribution. • The objective distribution of inherently positive quantities is determined. • The objectivity is based on the maximum entropy principle. • The maximum entropy distribution is the truncated normal distribution. • Applicability of log-normal or normal distribution approximation is limited.
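As a one-dimensional illustration of the abstract's point, the location and scale of a normal distribution truncated to [0, ∞) can be recovered numerically from a target mean and variance. The sketch below is our construction, not the authors' method (all function names are hypothetical); it combines the closed-form truncated-normal moments with a crude nested grid search:

```python
import math

def phi(x):                 # standard normal pdf
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def Phi(x):                 # standard normal cdf
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def truncnorm_moments(loc, scale):
    """Mean and variance of a normal(loc, scale) truncated to [0, inf)."""
    alpha = -loc / scale
    tail = 1.0 - Phi(alpha)
    if tail < 1e-12:        # truncation removes essentially all mass
        raise ValueError("degenerate truncation")
    lam = phi(alpha) / tail  # inverse Mills ratio
    mean = loc + scale * lam
    var = scale * scale * (1.0 + alpha * lam - lam * lam)
    return mean, var

def fit_truncnorm(target_mean, target_var):
    """Crude nested grid search for the (loc, scale) matching the targets."""
    best = (float("inf"), target_mean, math.sqrt(target_var))
    loc0, scale0, span = best[1], best[2], 2.0
    for _ in range(6):      # successive grid refinement around the best point
        for i in range(41):
            for j in range(41):
                loc = loc0 + span * (i - 20) / 20.0
                scale = scale0 + span * (j - 20) / 20.0
                if scale <= 1e-6:
                    continue
                try:
                    m, v = truncnorm_moments(loc, scale)
                except ValueError:
                    continue
                err = (m - target_mean) ** 2 + (v - target_var) ** 2
                if err < best[0]:
                    best = (err, loc, scale)
        _, loc0, scale0 = best
        span /= 10.0
    return loc0, scale0

# Match a positive quantity with mean 1.0 and standard deviation 0.6.
loc, scale = fit_truncnorm(1.0, 0.36)
m, v = truncnorm_moments(loc, scale)
```

The fitted location comes out below the target mean, since truncation at zero pushes the mean up. The abstract's reservation concerns the multivariate case, where no comparable method for computing location and scale parameters from means and covariances is available; this 1-D search does not generalize cheaply.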

  9. Developing principles of growth

    DEFF Research Database (Denmark)

    Neergaard, Helle; Fleck, Emma

    of the principles of growth among women-owned firms. Using an in-depth case study methodology, data were collected from women-owned firms in Denmark and Ireland, as these countries are similar in contextual terms, e.g. population and business composition, dominated by micro, small and medium-sized enterprises. Extending the principles put forward in effectuation theory, we propose that women grow their firms according to five principles which enable women's enterprises to survive in the face of crises such as the current world financial crisis.

  10. A new computing principle

    International Nuclear Information System (INIS)

    Fatmi, H.A.; Resconi, G.

    1988-01-01

    In 1954 while reviewing the theory of communication and cybernetics the late Professor Dennis Gabor presented a new mathematical principle for the design of advanced computers. During our work on these computers it was found that the Gabor formulation can be further advanced to include more recent developments in Lie algebras and geometric probability, giving rise to a new computing principle

  11. Engageability: a new sub-principle of the learnability principle in human-computer interaction

    Directory of Open Access Journals (Sweden)

    B Chimbo

    2011-12-01

    Full Text Available The learnability principle relates to improving the usability of software, as well as users’ performance and productivity. A gap has been identified as the current definition of the principle does not distinguish between users of different ages. To determine the extent of the gap, this article compares the ways in which two user groups, adults and children, learn how to use an unfamiliar software application. In doing this, we bring together the research areas of human-computer interaction (HCI), adult and child learning, learning theories and strategies, usability evaluation and interaction design. A literature survey conducted on learnability and learning processes considered the meaning of learnability of software applications across generations. In an empirical investigation, users aged from 9 to 12 and from 35 to 50 were observed in a usability laboratory while learning to use educational software applications. Insights that emerged from data analysis showed different tactics and approaches that children and adults use when learning unfamiliar software. Eye tracking data was also recorded. Findings indicated that subtle re-interpretation of the learnability principle and its associated sub-principles was required. An additional sub-principle, namely engageability, was proposed to incorporate aspects of learnability that are not covered by the existing sub-principles. Our re-interpretation of the learnability principle and the resulting design recommendations should help designers to fulfill the varying needs of different-aged users, and improve the learnability of their designs. Keywords: Child computer interaction, Design principles, Eye tracking, Generational differences, Human-computer interaction, Learning theories, Learnability, Engageability, Software applications, Usability Disciplines: Human-Computer Interaction (HCI) Studies, Computer science, Observational Studies

  12. Toward a Principled Sampling Theory for Quasi-Orders.

    Science.gov (United States)

    Ünlü, Ali; Schrepp, Martin

    2016-01-01

    Quasi-orders, that is, reflexive and transitive binary relations, have numerous applications. In educational theories, the dependencies of mastery among the problems of a test can be modeled by quasi-orders. Methods such as item tree or Boolean analysis that mine for quasi-orders in empirical data are sensitive to the underlying quasi-order structure. These data mining techniques have to be compared based on extensive simulation studies, with unbiased samples of randomly generated quasi-orders at their basis. In this paper, we develop techniques that can provide the required quasi-order samples. We introduce a discrete doubly inductive procedure for incrementally constructing the set of all quasi-orders on a finite item set. A randomization of this deterministic procedure allows us to generate representative samples of random quasi-orders. With an outer level inductive algorithm, we consider the uniform random extensions of the trace quasi-orders to higher dimension. This is combined with an inner level inductive algorithm to correct the extensions that violate the transitivity property. The inner level correction step entails sampling biases. We propose three algorithms for bias correction and investigate them in simulation. It is evident that, on even up to 50 items, the new algorithms create close to representative quasi-order samples within acceptable computing time. Hence, the principled approach is a significant improvement to existing methods that are used to draw quasi-orders uniformly at random but cannot cope with reasonably large item sets.
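The inductive sampler itself is involved, but its target is easy to pin down for very small item sets: a brute-force enumeration of all quasi-orders can serve as ground truth when checking a sampler for uniformity. The sketch below is our baseline illustration, not the paper's algorithm:

```python
from itertools import product

def quasi_orders(n):
    """Enumerate all quasi-orders (reflexive, transitive relations) on n items.

    Brute force over the 2**(n*n - n) choices for the off-diagonal pairs;
    feasible only for very small n, but exact, so its output can serve as
    the reference set when testing whether a sampler is uniform.
    """
    items = range(n)
    off_diag = [(i, j) for i in items for j in items if i != j]
    found = []
    for bits in product([False, True], repeat=len(off_diag)):
        rel = {(i, i) for i in items}                  # reflexivity is forced
        rel.update(p for p, b in zip(off_diag, bits) if b)
        # transitivity: (a, b) and (b, c) in rel must imply (a, c) in rel
        if all((a, c) in rel
               for (a, b) in rel for (b2, c) in rel if b == b2):
            found.append(frozenset(rel))
    return found

counts = [len(quasi_orders(n)) for n in (1, 2, 3)]
```

The counts 1, 4, 29 for one to three items match the known number of preorders (equivalently, of finite topologies) on labeled points; a sampler is representative if its empirical frequencies over these enumerated quasi-orders are close to uniform.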

  13. Toward a Principled Sampling Theory for Quasi-Orders

    Science.gov (United States)

    Ünlü, Ali; Schrepp, Martin

    2016-01-01

    Quasi-orders, that is, reflexive and transitive binary relations, have numerous applications. In educational theories, the dependencies of mastery among the problems of a test can be modeled by quasi-orders. Methods such as item tree or Boolean analysis that mine for quasi-orders in empirical data are sensitive to the underlying quasi-order structure. These data mining techniques have to be compared based on extensive simulation studies, with unbiased samples of randomly generated quasi-orders at their basis. In this paper, we develop techniques that can provide the required quasi-order samples. We introduce a discrete doubly inductive procedure for incrementally constructing the set of all quasi-orders on a finite item set. A randomization of this deterministic procedure allows us to generate representative samples of random quasi-orders. With an outer level inductive algorithm, we consider the uniform random extensions of the trace quasi-orders to higher dimension. This is combined with an inner level inductive algorithm to correct the extensions that violate the transitivity property. The inner level correction step entails sampling biases. We propose three algorithms for bias correction and investigate them in simulation. It is evident that, on even up to 50 items, the new algorithms create close to representative quasi-order samples within acceptable computing time. Hence, the principled approach is a significant improvement to existing methods that are used to draw quasi-orders uniformly at random but cannot cope with reasonably large item sets. PMID:27965601

  14. Principle of least action; some possible generalizations

    International Nuclear Information System (INIS)

    Broucke, R.

    1982-01-01

    In this article we draw attention to an important variational principle in dynamics: the Maupertuis-Jacobi Least Action Principle (MJLAP). This principle compares varied paths with the same energy h. We give two new proofs of the MJLAP (Sections 3 and 8) as well as a new unified variational principle which contains both Hamilton's Principle (HP) and the MJLAP as particular cases (Sections 4 and 9). The article also shows several new methods for the construction of a Lagrangian for a conservative dynamical system. As an example, we illustrate the theory with the classical Harmonic Oscillator Problem (Section 10). Our method is based on the theory of changes of independent variables in a dynamical system. It indirectly shows how a change of independent variable affects the self-adjointness of a dynamical system (Sections 5, 6, 7). Our new Lagrangians contain an arbitrary constant α, whose meaning remains to be studied, possibly in relation to the concepts of quantization or gauge transformations. The two important values of the constant α are 1 (Hamilton's principle) and 1/2 (Maupertuis-Jacobi Least Action Principle)

  15. The maximum entropy production and maximum Shannon information entropy in enzyme kinetics

    Science.gov (United States)

    Dobovišek, Andrej; Markovič, Rene; Brumen, Milan; Fajmut, Aleš

    2018-04-01

    We demonstrate that the maximum entropy production principle (MEPP) serves as a physical selection principle for the description of the most probable non-equilibrium steady states in simple enzymatic reactions. A theoretical approach is developed which enables maximization of the density of entropy production with respect to the enzyme rate constants for an enzyme reaction in a steady state. Conservation of mass and of Gibbs free energy are imposed as optimization constraints. The optimal enzyme rate constants computed in this way also yield the most uniform probability distribution of the enzyme states, which accounts for the maximal Shannon information entropy. By means of stability analysis it is also demonstrated that maximal density of entropy production in the enzyme reaction requires a flexible enzyme structure, which enables rapid transitions between different enzyme states. These results are supported by an example in which the density of entropy production and the Shannon information entropy are numerically maximized for the enzyme glucose isomerase.
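The link between maximal entropy production and a uniform enzyme-state distribution can be seen in a toy model. The sketch below is our reduction, not the authors' glucose-isomerase computation: a two-state cycle E ⇌ ES whose thermodynamic force is held fixed while the split of the rate constants is varied; the parametrization (F, R, t, s) is our own:

```python
import math

# Fixed forward and backward rate products F = k1[S]*k2 and R = k-1*k-2[P];
# holding them fixed pins the thermodynamic force A = ln(F/R) (k_B = T = 1).
F, R = 100.0, 1.0

def cycle(t, s):
    """Steady-state entropy production and state probabilities of E <-> ES.

    t and s split each fixed product between the two steps of the cycle.
    """
    kf1, kf2 = math.sqrt(F) * t, math.sqrt(F) / t   # E->ES binding, ES->E catalysis
    kr1, kr2 = math.sqrt(R) * s, math.sqrt(R) / s   # ES->E unbinding, E->ES reverse
    up, down = kf1 + kr2, kf2 + kr1                 # total E->ES and ES->E rates
    p_es = up / (up + down)                         # steady state of the 2-state chain
    p_e = 1.0 - p_es
    J = p_e * kf1 - p_es * kr1                      # net flux through the binding step
    sigma = J * math.log(F / R)                     # entropy production density
    return sigma, p_e, p_es

# Scan the rate splits; entropy production should peak at t = s = 1.
grid = [0.25 * k for k in range(1, 17)]             # 0.25 .. 4.0
best = max((cycle(t, s)[0], t, s) for t in grid for s in grid)
sigma, t_opt, s_opt = best
_, p_e, p_es = cycle(t_opt, s_opt)
```

At the optimum both split parameters equal 1 and the two enzyme states are equally probable, i.e. the Shannon entropy of the state distribution is maximal, mirroring the paper's result that maximal entropy production selects the most uniform distribution of enzyme states.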

  16. Principlism and its alleged competitors.

    Science.gov (United States)

    Beauchamp, Tom L

    1995-09-01

    Principles that provide general normative frameworks in bioethics have been criticized since the late 1980s, when several different methods and types of moral philosophy began to be proposed as alternatives or substitutes. Several accounts have emerged in recent years, including: (1) Impartial Rule Theory (supported in this issue by K. Danner Clouser), (2) Casuistry (supported in this issue by Albert Jonsen), and (3) Virtue Ethics (supported in this issue by Edmund D. Pellegrino). Although often presented as rival methods or theories, these approaches are consistent with and should not be considered adversaries of a principle-based account.

  17. A survey of variational principles

    International Nuclear Information System (INIS)

    Lewins, J.D.

    1993-01-01

    In this article a survey of variational principles is given. Variational principles play a significant role in mathematical theory, with the emphasis here on their physical aspects. They serve two principal uses: to represent the equations of a system in a succinct way, and to enable a particular computation in the system to be carried out with greater accuracy. The survey ranges widely from its starting point, the Lagrange multiplier, to optimisation principles. In an age of digital computation, these classic methods can be adapted to improve such calculations. We emphasize particularly the advantage of basing finite element methods on variational principles. (A.B.)

  18. Mach's principle and rotating universes

    International Nuclear Information System (INIS)

    King, D.H.

    1990-01-01

    It is shown that the Bianchi 9 model universe satisfies the Mach principle. These closed rotating universes were previously thought to be counter-examples to the principle. The Mach principle is satisfied because the angular momentum of the rotating matter is compensated by the effective angular momentum of gravitational waves. A new formulation of the Mach principle is given that is based on the field theory interpretation of general relativity. Every closed universe with 3-sphere topology is shown to satisfy this formulation of the Mach principle. It is shown that the total angular momentum of the matter and gravitational waves in a closed 3-sphere topology universe is zero

  19. RENEWAL OF BASIC LAWS AND PRINCIPLES FOR POLAR CONTINUUM THEORIES (Ⅵ)-CONSERVATION LAWS OF MASS AND INERTIA

    Institute of Scientific and Technical Information of China (English)

    戴安民

    2003-01-01

    The purpose is to reestablish the coupled conservation laws, the local conservation equations and the jump conditions of mass and inertia for polar continuum theories. In this connection the new material derivatives of the deformation gradient, the line element, the surface element and the volume element are derived and the generalized Reynolds transport theorem is presented. Combining these conservation laws of mass and inertia with the balance laws of momentum, angular momentum and energy derived in our previous papers of this series, a rather complete system of coupled basic laws and principles for polar continuum theories is constituted. From this system the coupled nonlocal balance equations of mass, inertia, momentum, angular momentum and energy may be obtained by the usual localization.

  20. Energy conservation and the principle of equivalence

    International Nuclear Information System (INIS)

    Haugan, M.P.

    1979-01-01

    If the equivalence principle is violated, then observers performing local experiments can detect effects due to their position in an external gravitational environment (preferred-location effects) or can detect effects due to their velocity through some preferred frame (preferred frame effects). We show that the principle of energy conservation implies a quantitative connection between such effects and structure-dependence of the gravitational acceleration of test bodies (violation of the Weak Equivalence Principle). We analyze this connection within a general theoretical framework that encompasses both non-gravitational local experiments and test bodies as well as gravitational experiments and test bodies, and we use it to discuss specific experimental tests of the equivalence principle, including non-gravitational tests such as gravitational redshift experiments, Eoetvoes experiments, the Hughes-Drever experiment, and the Turner-Hill experiment, and gravitational tests such as the lunar-laser-ranging ''Eoetvoes'' experiment, and measurements of anisotropies and variations in the gravitational constant. This framework is illustrated by analyses within two theoretical formalisms for studying gravitational theories: the PPN formalism, which deals with the motion of gravitating bodies within metric theories of gravity, and the THepsilonμ formalism that deals with the motion of charged particles within all metric theories and a broad class of non-metric theories of gravity

  1. Coexisting principles and logics of elder care

    DEFF Research Database (Denmark)

    Dahl, Hanne Marlene; Eskelinen, Leena; Boll Hansen, Eigil

    2015-01-01

    Healthy and active ageing has become an ideal in Western societies. In the Nordic countries, this ideal has been supported through a policy of help to self-help in elder care since the 1980s. However, reforms inspired by New Public Management (NPM) have introduced a new policy principle of consumer-oriented service that stresses the wishes and priorities of older people. We have studied how these two principles are applied by care workers in Denmark. Is one principle or logic replacing the other, or do they coexist? Do they create tensions between professional knowledge and the autonomy of older people? Using neo-institutional theory and feminist care theory, we analysed the articulation of the two policy principles in interviews and their logics in observations in four local authorities. We conclude that help to self-help is the dominant principle, and that it is deeply entrenched in the identity…

  2. Exact thermodynamic principles for dynamic order existence and evolution in chaos

    International Nuclear Information System (INIS)

    Mahulikar, Shripad P.; Herwig, Heinz

    2009-01-01

    The negentropy first proposed by Schroedinger is re-examined, and its conceptual and mathematical definitions are introduced. This re-definition integrates Schroedinger's original intention and the subsequent diverse notions in the literature. The negentropy so defined is further corroborated by its ability to state two exact thermodynamic principles: the negentropy principle for dynamic order existence and the principle of maximum negentropy production (PMNEP) for dynamic order evolution. These principles are the counterparts of the existing entropy principle and the law of maximum entropy production, respectively. The PMNEP encompasses the basic concepts in the evolution postulates of Darwin and de Vries. Perspectives on dynamic order evolution in the literature point to the validity of the PMNEP as the law of evolution. These two additional principles now enable a unified thermodynamic explanation of order creation, existence, evolution, and destruction.

  3. Three principles of competitive nonlinear pricing

    OpenAIRE

    Page Junior, Frank H.; Monteiro, P. K.

    2002-01-01

    We make three contributions to the theory of contracting under asymmetric information. First, we establish a competitive analog to the revelation principle, which we call the implementation principle. This principle provides a complete characterization of all incentive compatible, indirect contracting mechanisms in terms of contract catalogs (or menus), and allows us to conclude that in competitive contracting situations, firms in choosing their contracting strategies can restrict attention, ...

  4. A philosophy of rivers: Equilibrium states, channel evolution, teleomatic change and least action principle

    Science.gov (United States)

    Nanson, Gerald C.; Huang, He Qing

    2018-02-01

    Until recently no universally agreed philosophical or scientific methodological framework had been proposed to guide the study of fluvial geomorphology. An understanding of river form and process requires an understanding of the principles that govern the behaviour and evolution of alluvial rivers at the most fundamental level. To date, investigations of such principles have followed four approaches: develop qualitative unifying theories that are usually untested; collect and examine data visually and statistically to define semi-quantitative relationships among variables; apply Newtonian theoretical and empirical mechanics in a reductionist manner; resolve the primary flow equations theoretically by assuming maximum or minimum outputs. Here we recommend not a fifth but an overarching philosophy to embrace all four: clarifying and formalising an understanding of the evolution of river channels and iterative directional changes in the context of the least action principle (LAP), the theoretical basis of variational mechanics. LAP is exemplified in rivers in the form of maximum flow efficiency (MFE). A sophisticated understanding of evolution in its broadest sense is essential to understand how rivers adjust towards an optimum state rather than towards some other. Because rivers, as dynamic contemporary systems, flow in valleys that are commonly historical landforms and often tectonically determined, we propose that most of the world's alluvial rivers are over-powered for the work they must do. To remain stable they commonly evolve to expend surplus energy via a variety of dynamic equilibrium forms that will further adjust, where possible, to maximise their stability as much less common MFE forms in stationary equilibrium. This paper: 1. Shows that the theory of evolution is derived from, and applicable to, both the physical and biological sciences; 2. Focusses the development of theory in geomorphology on the development of equilibrium theory; 3. Proposes…

  5. Life’s a Gas: A Thermodynamic Theory of Biological Evolution

    Directory of Open Access Journals (Sweden)

    Keith R. Skene

    2015-07-01

    Full Text Available This paper outlines a thermodynamic theory of biological evolution. Beginning with a brief summary of the parallel histories of the modern evolutionary synthesis (MES) and thermodynamics, we use four physical laws and processes (the first and second laws of thermodynamics, diffusion and the maximum entropy production principle) to frame the theory. Given that open systems such as ecosystems will move towards maximizing dispersal of energy, we expect biological diversity to increase towards a level, Dmax, representing maximum entropic production (Smax. Based on this theory, we develop a mathematical model to predict diversity over the last 500 million years. This model combines diversification, post-extinction recovery and likelihood of discovery of the fossil record. We compare the output of this model with that of the observed fossil record. The model predicts that life diffuses into available energetic space (ecospace towards a dynamic equilibrium, driven by increasing entropy within the genetic material. This dynamic equilibrium is punctuated by extinction events, which are followed by restoration of Dmax through diffusion into available ecospace. Finally we compare and contrast our thermodynamic theory with the MES in relation to a number of important characteristics of evolution (progress, evolutionary tempo, form versus function, biosphere architecture, competition and fitness.
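The qualitative dynamics the abstract describes, diffusion of diversity toward the ceiling Dmax punctuated by extinction events and post-extinction recovery, can be sketched with a toy logistic model. This is our illustration under assumed parameter values, not the authors' fitted model of the fossil record:

```python
# Toy sketch: diversity D relaxes logistically toward the entropy-limited
# ceiling Dmax; extinction events knock it down, after which diffusion
# into the freed ecospace restores it. All parameter values are assumed.
Dmax, r, dt = 1000.0, 0.02, 1.0
extinctions = {300: 0.5, 600: 0.8}        # time step -> fraction of diversity lost

D, history = 50.0, []
for step in range(1000):
    if step in extinctions:
        D *= 1.0 - extinctions[step]      # mass extinction event
    D += r * D * (1.0 - D / Dmax) * dt    # diffusion into available ecospace
    history.append(D)
```

The trajectory rises toward Dmax, drops sharply at each extinction, and recovers, reproducing the punctuated-equilibrium shape of the model's predicted diversity curve in caricature.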

  6. Beyond the Virtues-Principles Debate.

    Science.gov (United States)

    Keat, Marilyn S.

    1992-01-01

    Indicates basic ontological assumptions in the virtues-principles debate in moral philosophy, noting Aristotle's and Kant's fundamental ideas about morality and considering a hermeneutic synthesis of theories. The article discusses what acceptance of the synthesis might mean in the theory and practice of moral pedagogy, offering examples of…

  7. High-Performance First-Principles Molecular Dynamics for Predictive Theory and Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Gygi, Francois [Univ. of California, Davis, CA (United States). Dept. of Computer Science; Galli, Giulia [Univ. of Chicago, IL (United States); Schwegler, Eric [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-12-03

    This project focused on developing high-performance software tools for First-Principles Molecular Dynamics (FPMD) simulations, and applying them in investigations of materials relevant to energy conversion processes. FPMD is an atomistic simulation method that combines a quantum-mechanical description of electronic structure with the statistical description provided by molecular dynamics (MD) simulations. This reliance on fundamental principles allows FPMD simulations to provide a consistent description of structural, dynamical and electronic properties of a material. This is particularly useful in systems for which reliable empirical models are lacking. FPMD simulations are increasingly used as a predictive tool for applications such as batteries, solar energy conversion, light-emitting devices, electro-chemical energy conversion devices and other materials. During the course of the project, several new features were developed and added to the open-source Qbox FPMD code. The code was further optimized for scalable operation on large-scale, Leadership-Class DOE computers. When combined with Many-Body Perturbation Theory (MBPT) calculations, this infrastructure was used to investigate structural and electronic properties of liquid water, ice, aqueous solutions, nanoparticles and solid-liquid interfaces. Computing both ionic trajectories and electronic structure in a consistent manner enabled the simulation of several spectroscopic properties, such as Raman spectra, infrared spectra, and sum-frequency generation spectra. The accuracy of the approximations used allowed for direct comparisons of results with experimental data such as optical spectra, X-ray and neutron diffraction spectra. The software infrastructure developed in this project, as applied to various investigations of solids, liquids and interfaces, demonstrates that FPMD simulations can provide a detailed, atomic-scale picture of structural, vibrational and electronic properties of complex systems.

  8. Hamilton principle for the dual electrodynamics

    International Nuclear Information System (INIS)

    Souza Silva, Saulo Carneiro de

    1995-01-01

    The present work discusses the classical electromagnetic theory in the presence of magnetic monopoles. We review the connection between such objects and the long standing problem of charge quantization and the main theoretical difficulties in formulating the classical dual electromagnetic theory in terms of an action principle. We show that a deeper understanding of the source of such difficulties leads naturally to the construction of a variational principle for a non-local Lagrangian from which all the (local) dynamical equations for electric, magnetic charges and fields can be obtained. (author)

  9. Chemical analysis using coincidence Doppler broadening and supporting first-principles theory: Applications to vacancy defects in compound semiconductors

    International Nuclear Information System (INIS)

    Makkonen, I.; Rauch, C.; Mäki, J.-M.; Tuomisto, F.

    2012-01-01

    The Doppler broadening of the positron annihilation radiation contains information on the chemical environment of vacancy defects trapping positrons in solids. The measured signal can, for instance, reveal impurity atoms situated next to vacancies. As compared to integrated quantities such as the positron annihilation rate or the annihilation line shape parameters, the full Doppler spectrum measured in the coincidence mode contains much more useful information for defect identification. This information, however, is indirect and complementary understanding is needed to fully interpret the results. First-principles calculations are a valuable tool in the analysis of measured spectra. One can construct an atomic-scale model for a given candidate defect, calculate from first principles the corresponding Doppler spectrum, and directly compare results between experiment and theory. In this paper we discuss recent examples of successful combinations of coincidence Doppler broadening measurements and supporting first-principles calculations. These demonstrate the predictive power of state-of-the-art calculations and the usefulness of such an approach in the chemical analysis of vacancy defects.

  10. Jarzynski equality in the context of maximum path entropy

    Science.gov (United States)

    González, Diego; Davis, Sergio

    2017-06-01

    In the global framework of finding an axiomatic derivation of nonequilibrium Statistical Mechanics from fundamental principles, such as the maximum path entropy principle, also known as the Maximum Caliber principle, this work proposes an alternative derivation of the well-known Jarzynski equality, a nonequilibrium identity of great importance today due to its applications to irreversible processes: biological systems (protein folding), mechanical systems, among others. This equality relates the free energy difference between two equilibrium thermodynamic states to the work performed when going between those states, through an average over a path ensemble. In this work the analysis of Jarzynski's equality is performed using the formalism of inference over path space. This derivation highlights the wide generality of Jarzynski's original result, which could even be used in non-thermodynamical settings such as social, financial and ecological systems.
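    The identity described above, ⟨exp(-beta*W)⟩ = exp(-beta*dF), can be checked numerically on a toy process. The sketch below (all parameter values are hypothetical choices for illustration, not taken from the paper) instantaneously quenches the stiffness of a harmonic potential and compares the path-ensemble work average against the exact free-energy difference:

    ```python
    import numpy as np

    # Jarzynski check for an instantaneous quench of U(x) = k*x^2/2
    # from stiffness k0 to k1; parameters are illustrative.
    rng = np.random.default_rng(0)
    beta, k0, k1 = 1.0, 1.0, 2.0

    # Sample initial positions from equilibrium at k0: x ~ N(0, 1/(beta*k0)).
    x = rng.normal(0.0, np.sqrt(1.0 / (beta * k0)), size=200_000)

    # For an instantaneous switch, the work is the potential-energy jump.
    W = 0.5 * (k1 - k0) * x**2

    # Exact free-energy difference of the two harmonic equilibria.
    dF = np.log(k1 / k0) / (2.0 * beta)

    lhs = np.mean(np.exp(-beta * W))  # nonequilibrium path-ensemble average
    rhs = np.exp(-beta * dF)          # equals sqrt(k0/k1) for this toy case
    print(lhs, rhs)  # both close to 0.7071
    ```

    Note that the average work ⟨W⟩ here exceeds dF, as the second law requires; the equality holds only for the exponential average.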

  11. The Generalized Principle of the Golden Section and its applications in mathematics, science, and engineering

    International Nuclear Information System (INIS)

    Stakhov, A.P.

    2005-01-01

    The 'Dichotomy Principle' and the classical 'Golden Section Principle' are two of the most important principles of Nature, Science and also Art. The Generalized Principle of the Golden Section that follows from studying the diagonal sums of the Pascal triangle is a sweeping generalization of these important principles. This underlies the foundation of 'Harmony Mathematics', a new proposed mathematical direction. Harmony Mathematics includes a number of new mathematical theories: an algorithmic measurement theory, a new number theory, a new theory of hyperbolic functions based on Fibonacci and Lucas numbers, and a theory of the Fibonacci and 'Golden' matrices. These mathematical theories are the source of many new ideas in mathematics, philosophy, botanic and biology, electrical and computer science and engineering, communication systems, mathematical education as well as theoretical physics and physics of high energy particles

  12. Neural principles of memory and a neural theory of analogical insight

    Science.gov (United States)

    Lawson, David I.; Lawson, Anton E.

    1993-12-01

    Grossberg's principles of neural modeling are reviewed and extended to provide a neural level theory to explain how analogies greatly increase the rate of learning and can, in fact, make learning and retention possible. In terms of memory, the key point is that the mind is able to recognize and recall when it is able to match sensory input from new objects, events, or situations with past memory records of similar objects, events, or situations. When a match occurs, an adaptive resonance is set up in which the synaptic strengths of neurons are increased; thus a long term record of the new input is formed in memory. Systems of neurons called outstars and instars are presumably the underlying units that enable this to occur. Analogies can greatly facilitate learning and retention because they activate the outstars (i.e., the cells that are sampling the to-be-learned pattern) and cause the neural activity to grow exponentially by forming feedback loops. This increased activity ensures the boost in synaptic strengths of neurons, thus causing storage and retention in long-term memory (i.e., learning).
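    The outstar dynamics summarized above can be caricatured as a weight vector that, while the source cell is active, relaxes toward the activity pattern playing on the sampled cells. The sketch below is a schematic rendering under that reading, not Grossberg's actual equations; the learning rate and pattern are hypothetical:

    ```python
    import numpy as np

    # Schematic outstar learning: an active source cell gates a drift of
    # its outgoing synaptic weights toward the sampled activity pattern,
    # forming a long-term memory trace of it.
    pattern = np.array([0.9, 0.1, 0.6, 0.4])  # to-be-learned activity pattern
    w = np.zeros_like(pattern)                # initial synaptic strengths
    lr = 0.1                                  # hypothetical learning rate

    for _ in range(200):
        source_active = 1.0                   # outstar cell firing gates learning
        w += lr * source_active * (pattern - w)

    print(np.round(w, 3))  # weights have converged to the sampled pattern
    ```

    When `source_active` is 0 the weights are untouched, which is the gating property the abstract appeals to: only patterns sampled during resonance leave a trace.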

  13. Principles of quantum chemistry

    CERN Document Server

    George, David V

    2013-01-01

    Principles of Quantum Chemistry focuses on the application of quantum mechanics in physical models and experiments of chemical systems.This book describes chemical bonding and its two specific problems - bonding in complexes and in conjugated organic molecules. The very basic theory of spectroscopy is also considered. Other topics include the early development of quantum theory; particle-in-a-box; general formulation of the theory of quantum mechanics; and treatment of angular momentum in quantum mechanics. The examples of solutions of Schroedinger equations; approximation methods in quantum c

  14. Comparison of four microfinance markets from the point of view of the effectuation theory, complemented by proposed musketeer principle illustrating forces within village banks

    Directory of Open Access Journals (Sweden)

    Hes Tomáš

    2017-03-01

    Full Text Available Microfinance services are essential tools for the formalization of the shadow economy, leveraging immature entrepreneurship with external capital. Given the importance of the shadow economy for the social balance of developing countries, an answer to the question of how microfinance entities come into existence is rather essential. While the decision-making processes leading to entrepreneurship were explained by the effectuation theory developed in the 1990s, these explanations were concerned with the logic of creation of microenterprises neither in developing countries nor in microfinance village banks. While the abovementioned theories explain the nascence of companies in developed markets, a focus on emerging markets is clearly important given the large share of human society represented by microfinance clientele. The study extends the effectuation theory, adding a musketeer principle to the five effectuation principles proposed by Sarasvathy. Furthermore, the hitherto unconsidered relationship between social capital and effectuation-related concepts is another contribution of the paper, which describes the nature of microfinance clientele from the point of view of effectuation theory and social capital, drawing a comparison of microfinance markets in four countries: Turkey, Sierra Leone, Indonesia and Afghanistan.

  15. The effect of implementing cognitive load theory-based design principles in virtual reality simulation training of surgical skills: a randomized controlled trial.

    Science.gov (United States)

    Andersen, Steven Arild Wuyts; Mikkelsen, Peter Trier; Konge, Lars; Cayé-Thomasen, Per; Sørensen, Mads Sølvsten

    2016-01-01

    Cognitive overload can inhibit learning, and cognitive load theory-based instructional design principles can be used to optimize learning situations. This study aims to investigate the effect of implementing cognitive load theory-based design principles in virtual reality simulation training of mastoidectomy. Eighteen novice medical students received 1 h of self-directed virtual reality simulation training of the mastoidectomy procedure, randomized for standard instructions (control) or cognitive load theory-based instructions with a worked example followed by a problem completion exercise (intervention). Participants then completed two post-training virtual procedures for assessment and comparison. Cognitive load during the post-training procedures was estimated by reaction time testing on an integrated secondary task. Final-product analysis by two blinded expert raters was used to assess the virtual mastoidectomy performances. Participants in the intervention group had a significantly increased cognitive load during the post-training procedures compared with the control group (52 vs. 41 %, p = 0.02). This was also reflected in the final product: the intervention group had a significantly lower final-product score than the control group (13.0 vs. 15.4) in virtual reality surgical simulation training of novices.

  16. Critical reflections on the principle of beneficence in biomedicine

    African Journals Online (AJOL)

    raoul

    2012-02-18

    Feb 18, 2012 ... Medical ethics as a scholarly discipline and a system of moral principles ... Attribution License (http://creativecommons.org/licenses/by/2.0), which ..... the principle like other ethical principles is only fine in theory, but putting it.

  17. Principles of chiral perturbation theory

    International Nuclear Information System (INIS)

    Leutwyler, H.

    1995-01-01

    An elementary discussion of the main concepts used in chiral perturbation theory is given in textbooks and a more detailed picture of the applications may be obtained from the reviews. Concerning the foundations of the method, the literature is comparatively scarce. So, I will concentrate on the basic concepts and explain why the method works. (author)

  18. Maximum phytoplankton concentrations in the sea

    DEFF Research Database (Denmark)

    Jackson, G.A.; Kiørboe, Thomas

    2008-01-01

    A simplification of plankton dynamics using coagulation theory provides predictions of the maximum algal concentration sustainable in aquatic systems. These predictions have previously been tested successfully against results from iron fertilization experiments. We extend the test to data collect...

  19. Quantum principles in field interactions

    International Nuclear Information System (INIS)

    Shirkov, D.V.

    1986-01-01

    The concept of a quantum principle is introduced as a principle whose formulation is based on specific quantum ideas and notions. We consider three such principles, viz. those of quantizability, local gauge symmetry, and supersymmetry, and their role in the development of quantum field theory (QFT). Concerning the first of these, we analyze the formal aspects and physical content of the renormalization procedure in QFT and its relation to ultraviolet divergences and the renorm group. The quantizability principle is formulated as an existence condition for a self-consistent quantum version with a given mechanism of the field interaction. It is shown that the consecutive (from a historical point of view) use of these quantum principles places still larger limitations on possible forms of field interactions.

  20. The Generalized Principle of the Golden Section and its applications in mathematics, science, and engineering

    Energy Technology Data Exchange (ETDEWEB)

    Stakhov, A.P. [International Club of the Golden Section, 6 McCreary Trail, Bolton, ON, L7E 2C8 (Canada)] e-mail: goldenmuseum@rogers.com

    2005-10-01

    The 'Dichotomy Principle' and the classical 'Golden Section Principle' are two of the most important principles of Nature, Science and also Art. The Generalized Principle of the Golden Section that follows from studying the diagonal sums of the Pascal triangle is a sweeping generalization of these important principles. This underlies the foundation of 'Harmony Mathematics', a new proposed mathematical direction. Harmony Mathematics includes a number of new mathematical theories: an algorithmic measurement theory, a new number theory, a new theory of hyperbolic functions based on Fibonacci and Lucas numbers, and a theory of the Fibonacci and 'Golden' matrices. These mathematical theories are the source of many new ideas in mathematics, philosophy, botanic and biology, electrical and computer science and engineering, communication systems, mathematical education as well as theoretical physics and physics of high energy particles.

  1. Gyarmati’s Variational Principle of Dissipative Processes

    Directory of Open Access Journals (Sweden)

    József Verhás

    2014-04-01

    Full Text Available As in mechanics and electrodynamics, the fundamental laws of the thermodynamics of dissipative processes can be compressed into Gyarmati’s variational principle. This variational principle, both in its differential (local) and in its integral (global) forms, was formulated by Gyarmati in 1965. The consistent application of both the local and the global forms of Gyarmati’s principle provides, throughout the explication of the theory of irreversible thermodynamics, all the advantages that the corresponding classical variational principles provide in the study of mechanics and electrodynamics, e.g., Gauss’ differential principle of least constraint or Hamilton’s integral principle.

  2. No-Hypersignaling Principle

    Science.gov (United States)

    Dall'Arno, Michele; Brandsen, Sarah; Tosini, Alessandro; Buscemi, Francesco; Vedral, Vlatko

    2017-07-01

    A paramount topic in quantum foundations, rooted in the study of the Einstein-Podolsky-Rosen (EPR) paradox and Bell inequalities, is that of characterizing quantum theory in terms of the spacelike correlations it allows. Here, we show that to focus only on spacelike correlations is not enough: we explicitly construct a toy model theory that, while not contradicting classical and quantum theories at the level of spacelike correlations, still displays an anomalous behavior in its timelike correlations. We call this anomaly, quantified in terms of a specific communication game, the "hypersignaling" phenomenon. We hence conclude that the "principle of quantumness," if it exists, cannot be found in spacelike correlations alone: nontrivial constraints need to be imposed also on timelike correlations, in order to exclude hypersignaling theories.

  3. Interpolation theory

    CERN Document Server

    Lunardi, Alessandra

    2018-01-01

    This book is the third edition of the 1999 lecture notes of the courses on interpolation theory that the author delivered at the Scuola Normale in 1998 and 1999. In the mathematical literature there are many good books on the subject, but none of them is very elementary, and in many cases the basic principles are hidden below great generality. In this book the principles of interpolation theory are illustrated aiming at simplification rather than at generality. The abstract theory is reduced as far as possible, and many examples and applications are given, especially to operator theory and to regularity in partial differential equations. Moreover the treatment is self-contained, the only prerequisite being the knowledge of basic functional analysis.

  4. String theory

    International Nuclear Information System (INIS)

    Chan Hongmo.

    1987-10-01

    The paper traces the development of string theory, and was presented at Professor Sir Rudolf Peierls' 80th Birthday Symposium. String theory is discussed with respect to the interaction of strings, the inclusion of both gauge theory and gravitation, inconsistencies in the theory, and the role of space-time. The physical principles underlying string theory are also outlined. (U.K.)

  5. Communication: Towards first principles theory of relaxation in supercooled liquids formulated in terms of cooperative motion

    Energy Technology Data Exchange (ETDEWEB)

    Freed, Karl F., E-mail: freed@uchicago.edu [James Franck Institute and Department of Chemistry, University of Chicago, 929 East 57 Street, Chicago, Illinois 60637 (United States)

    2014-10-14

    A general theory of the long time, low temperature dynamics of glass-forming fluids remains elusive despite the almost 20 years since the famous pronouncement by the Nobel Laureate P. W. Anderson, “The deepest and most interesting unsolved problem in solid state theory is probably the theory of the nature of glass and the glass transition” [Science 267, 1615 (1995)]. While recent work indicates that Adam-Gibbs theory (AGT) provides a framework for computing the structural relaxation time of supercooled fluids and for analyzing the properties of the cooperatively rearranging dynamical strings observed in low temperature molecular dynamics simulations, the heuristic nature of AGT has impeded general acceptance due to the lack of a first principles derivation [G. Adam and J. H. Gibbs, J. Chem. Phys. 43, 139 (1965)]. This deficiency is rectified here by a statistical mechanical derivation of AGT that uses transition state theory and the assumption that the transition state is composed of elementary excitations of a string-like form. The strings are assumed to form in equilibrium with the mobile particles in the fluid. Hence, transition state theory requires the strings to be in mutual equilibrium and thus to have the size distribution of a self-assembling system, in accord with the simulations and analyses of Douglas and co-workers. The average relaxation rate is computed as a grand canonical ensemble average over all string sizes, and use of the previously determined relation between configurational entropy and the average cluster size in several model equilibrium self-associating systems produces the AGT expression in a manner enabling further extensions and more fundamental tests of the assumptions.

  6. Communication: Towards first principles theory of relaxation in supercooled liquids formulated in terms of cooperative motion.

    Science.gov (United States)

    Freed, Karl F

    2014-10-14

    A general theory of the long time, low temperature dynamics of glass-forming fluids remains elusive despite the almost 20 years since the famous pronouncement by the Nobel Laureate P. W. Anderson, "The deepest and most interesting unsolved problem in solid state theory is probably the theory of the nature of glass and the glass transition" [Science 267, 1615 (1995)]. While recent work indicates that Adam-Gibbs theory (AGT) provides a framework for computing the structural relaxation time of supercooled fluids and for analyzing the properties of the cooperatively rearranging dynamical strings observed in low temperature molecular dynamics simulations, the heuristic nature of AGT has impeded general acceptance due to the lack of a first principles derivation [G. Adam and J. H. Gibbs, J. Chem. Phys. 43, 139 (1965)]. This deficiency is rectified here by a statistical mechanical derivation of AGT that uses transition state theory and the assumption that the transition state is composed of elementary excitations of a string-like form. The strings are assumed to form in equilibrium with the mobile particles in the fluid. Hence, transition state theory requires the strings to be in mutual equilibrium and thus to have the size distribution of a self-assembling system, in accord with the simulations and analyses of Douglas and co-workers. The average relaxation rate is computed as a grand canonical ensemble average over all string sizes, and use of the previously determined relation between configurational entropy and the average cluster size in several model equilibrium self-associating systems produces the AGT expression in a manner enabling further extensions and more fundamental tests of the assumptions.

  7. A Stochastic Maximum Principle for a Stochastic Differential Game of a Mean-Field Type

    Energy Technology Data Exchange (ETDEWEB)

    Hosking, John Joseph Absalom, E-mail: j.j.a.hosking@cma.uio.no [University of Oslo, Centre of Mathematics for Applications (CMA) (Norway)

    2012-12-15

    We construct a stochastic maximum principle (SMP) which provides necessary conditions for the existence of Nash equilibria in a certain form of N-agent stochastic differential game (SDG) of a mean-field type. The information structure considered for the SDG is of a possible asymmetric and partial type. To prove our SMP we take an approach based on spike-variations and adjoint representation techniques, analogous to that of S. Peng (SIAM J. Control Optim. 28(4):966-979, 1990) in the optimal stochastic control context. In our proof we apply adjoint representation procedures at three points. The first-order adjoint processes are defined as solutions to certain mean-field backward stochastic differential equations, and second-order adjoint processes of a first type are defined as solutions to certain backward stochastic differential equations. Second-order adjoint processes of a second type are defined as solutions of certain backward stochastic equations of a type that we introduce in this paper, and which we term conditional mean-field backward stochastic differential equations. From the resulting representations, we show that the terms relating to these second-order adjoint processes of the second type are of an order such that they do not appear in our final SMP equations. A comparable situation exists in an article by R. Buckdahn, B. Djehiche, and J. Li (Appl. Math. Optim. 64(2):197-216, 2011) that constructs a SMP for a mean-field type optimal stochastic control problem; however, the approach we take of using these second-order adjoint processes of a second type to deal with the type of terms that we refer to as the second form of quadratic-type terms represents an alternative to a development, to our setting, of the approach used in their article for their analogous type of term.

  8. A Stochastic Maximum Principle for a Stochastic Differential Game of a Mean-Field Type

    International Nuclear Information System (INIS)

    Hosking, John Joseph Absalom

    2012-01-01

    We construct a stochastic maximum principle (SMP) which provides necessary conditions for the existence of Nash equilibria in a certain form of N-agent stochastic differential game (SDG) of a mean-field type. The information structure considered for the SDG is of a possible asymmetric and partial type. To prove our SMP we take an approach based on spike-variations and adjoint representation techniques, analogous to that of S. Peng (SIAM J. Control Optim. 28(4):966–979, 1990) in the optimal stochastic control context. In our proof we apply adjoint representation procedures at three points. The first-order adjoint processes are defined as solutions to certain mean-field backward stochastic differential equations, and second-order adjoint processes of a first type are defined as solutions to certain backward stochastic differential equations. Second-order adjoint processes of a second type are defined as solutions of certain backward stochastic equations of a type that we introduce in this paper, and which we term conditional mean-field backward stochastic differential equations. From the resulting representations, we show that the terms relating to these second-order adjoint processes of the second type are of an order such that they do not appear in our final SMP equations. A comparable situation exists in an article by R. Buckdahn, B. Djehiche, and J. Li (Appl. Math. Optim. 64(2):197–216, 2011) that constructs a SMP for a mean-field type optimal stochastic control problem; however, the approach we take of using these second-order adjoint processes of a second type to deal with the type of terms that we refer to as the second form of quadratic-type terms represents an alternative to a development, to our setting, of the approach used in their article for their analogous type of term.

  9. Self-Organized Complexity and Coherent Infomax from the Viewpoint of Jaynes’s Probability Theory

    Directory of Open Access Journals (Sweden)

    William A. Phillips

    2012-01-01

    Full Text Available This paper discusses concepts of self-organized complexity and the theory of Coherent Infomax in the light of Jaynes’s probability theory. Coherent Infomax shows, in principle, how adaptively self-organized complexity can be preserved and improved by using probabilistic inference that is context-sensitive. It argues that neural systems do this by combining local reliability with flexible, holistic, context-sensitivity. Jaynes argued that the logic of probabilistic inference shows it to be based upon Bayesian and Maximum Entropy methods or special cases of them. He presented his probability theory as the logic of science; here it is considered as the logic of life. It is concluded that the theory of Coherent Infomax specifies a general objective for probabilistic inference, and that contextual interactions in neural systems perform functions required of the scientist within Jaynes’s theory.
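    The Maximum Entropy method invoked above can be illustrated with Jaynes's classic Brandeis dice problem: among all distributions over the faces 1-6 with a prescribed mean, the entropy-maximizing one is exponential in the face value, with the Lagrange multiplier tuned to hit the mean. A minimal sketch (the target mean of 4.5 is an arbitrary illustrative choice):

    ```python
    import numpy as np

    # Maximum entropy over die faces 1..6 subject to a mean constraint:
    # the solution is p_i proportional to exp(-lam * i), lam fixed by the mean.
    faces = np.arange(1, 7)
    target_mean = 4.5  # hypothetical observed mean

    def mean_for(lam):
        w = np.exp(-lam * faces)
        p = w / w.sum()
        return p @ faces

    # The constrained mean is monotone decreasing in lam, so bisection suffices.
    lo, hi = -10.0, 10.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mean_for(mid) > target_mean:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    w = np.exp(-lam * faces)
    p = w / w.sum()
    print(np.round(p, 4))  # probabilities tilt toward the high faces
    ```

    Any other distribution matching the same mean carries extra, unwarranted assumptions; in Jaynes's terms this `p` is the least committal inference consistent with the constraint.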

  10. Chern-Simons theory from first principles

    International Nuclear Information System (INIS)

    Marino, E.C.

    1994-01-01

    A review is made of the main properties of Chern-Simons field theory. These include the dynamical mass generation for the photon without a Higgs field, the statistical transmutation of charged particles coupled to it, and the natural appearance of a transverse conductivity. A review of standard theories proposed for the Quantum Hall Effect which use the Chern-Simons term is also made, emphasizing the fact that this term is introduced in an artificial manner. A physical origin for the Chern-Simons term is proposed, starting from QED in 3+1 D with the topological term and imposing that the motion of charged matter is restricted to an infinite plane. (author). 12 refs

  11. General proof of the entropy principle for self-gravitating fluid in f(R) gravity

    Energy Technology Data Exchange (ETDEWEB)

    Fang, Xiongjun [Department of Physics and Key Laboratory of Low Dimensional Quantum Structures andQuantum Control of Ministry of Education, Hunan Normal University,Changsha, Hunan 410081 (China); Guo, Minyong [Department of Physics, Beijing Normal University,Beijing 100875 (China); Jing, Jiliang [Department of Physics and Key Laboratory of Low Dimensional Quantum Structures andQuantum Control of Ministry of Education, Hunan Normal University,Changsha, Hunan 410081 (China)

    2016-08-29

    The connection between gravity and thermodynamics has attracted much attention recently. We consider a static self-gravitating perfect fluid system in f(R) gravity, an important theory that could explain the accelerated expansion of the universe. We first show that the Tolman-Oppenheimer-Volkoff equation of f(R) theories can be obtained by a thermodynamical method in spherically symmetric spacetime. Then we prove that the maximum entropy principle is also valid for f(R) gravity in general static spacetimes beyond spherical symmetry. The result shows that if the constraint equation is satisfied and the temperature of the fluid obeys Tolman's law, an extremum of the total entropy implies the other components of the gravitational equations. Conversely, if the f(R) gravitational equations hold, the total entropy of the fluid should be an extremum. Our work suggests a general and solid connection between f(R) gravity and thermodynamics.

  12. Le Chatelier principle in replicator dynamics

    OpenAIRE

    Allahverdyan, Armen E.; Galstyan, Aram

    2011-01-01

    The Le Chatelier principle states that physical equilibria are not only stable, but they also resist external perturbations via short-time negative-feedback mechanisms: a perturbation induces processes tending to diminish its results. The principle has deep roots, e.g., in thermodynamics it is closely related to the second law and the positivity of the entropy production. Here we study the applicability of the Le Chatelier principle to evolutionary game theory, i.e., to perturbations of a Nas...

  13. A Trustworthiness Evaluation Method for Software Architectures Based on the Principle of Maximum Entropy (POME and the Grey Decision-Making Method (GDMM

    Directory of Open Access Journals (Sweden)

    Rong Jiang

    2014-09-01

    Full Text Available As the early design decision-making structure, a software architecture plays a key role in the final software product quality and in the whole project. In the software design and development process, an effective evaluation of the trustworthiness of a software architecture can help in making scientific and reasonable decisions on the architecture, which are necessary for the construction of highly trustworthy software. In view of the lack of trustworthiness evaluation and measurement studies for software architecture, this paper provides a trustworthy attribute model of software architecture. Based on this model, the paper proposes to use the Principle of Maximum Entropy (POME) and the Grey Decision-Making Method (GDMM) as the trustworthiness evaluation method of a software architecture, demonstrates the scientific soundness and rationality of this method, and verifies its feasibility through case analysis.

  14. Maximum entropy production rate in quantum thermodynamics

    Energy Technology Data Exchange (ETDEWEB)

    Beretta, Gian Paolo, E-mail: beretta@ing.unibs.i [Universita di Brescia, via Branze 38, 25123 Brescia (Italy)

    2010-06-01

    In the framework of the recent quest for well-behaved nonlinear extensions of the traditional Schroedinger-von Neumann unitary dynamics that could provide fundamental explanations of recent experimental evidence of loss of quantum coherence at the microscopic level, a recent paper [Gheorghiu-Svirschevski 2001 Phys. Rev. A 63 054102] reproposes the nonlinear equation of motion proposed by the present author [see Beretta G P 1987 Found. Phys. 17 365 and references therein] for quantum (thermo)dynamics of a single isolated indivisible constituent system, such as a single particle, qubit, qudit, spin or atomic system, or a Bose-Einstein or Fermi-Dirac field. As already proved, such nonlinear dynamics entails a fundamental unifying microscopic proof and extension of Onsager's reciprocity and Callen's fluctuation-dissipation relations to all nonequilibrium states, close and far from thermodynamic equilibrium. In this paper we propose a brief but self-contained review of the main results already proved, including the explicit geometrical construction of the equation of motion from the steepest-entropy-ascent ansatz and its exact mathematical and conceptual equivalence with the maximal-entropy-generation variational-principle formulation presented in Gheorghiu-Svirschevski S 2001 Phys. Rev. A 63 022105. Moreover, we show how it can be extended to the case of a composite system to obtain the general form of the equation of motion, consistent with the demanding requirements of strong separability and of compatibility with general thermodynamics principles. The irreversible term in the equation of motion describes the spontaneous attraction of the state operator in the direction of steepest entropy ascent, thus implementing the maximum entropy production principle in quantum theory. The time rate at which the path of steepest entropy ascent is followed has so far been left unspecified. As a step towards the identification of such rate, here we propose a possible

  15. Principles of Uncertainty

    CERN Document Server

    Kadane, Joseph B

    2011-01-01

    An intuitive and mathematical introduction to subjective probability and Bayesian statistics. An accessible, comprehensive guide to the theory of Bayesian statistics, Principles of Uncertainty presents the subjective Bayesian approach, which has played a pivotal role in game theory, economics, and the recent boom in Markov Chain Monte Carlo methods. Both rigorous and friendly, the book contains: Introductory chapters examining each new concept or assumption Just-in-time mathematics -- the presentation of ideas just before they are applied Summary and exercises at the end of each chapter Discus

  16. Maximum Available Accuracy of FM-CW Radars

    Directory of Open Access Journals (Sweden)

    V. Ricny

    2009-12-01

    Full Text Available This article deals with the principles and, above all, with an analysis of the maximum available measuring accuracy of FM-CW (Frequency Modulated Continuous Wave) radars, which are usually employed for distance and velocity measurements of moving objects in road traffic, as well as in air traffic and other applications. These radars often form an important part of the active safety equipment of high-end cars – the so-called anticollision systems. They usually work in the frequency bands of mm waves (24, 35, 77 GHz). Function principles and analyses of the factors that dominantly influence the distance measurement accuracy of this equipment, especially in the modulation and demodulation part, are shown in the paper.
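    The distance measurement principle described in this record can be sketched numerically (an illustrative sketch with assumed parameter values, not taken from the article): a linear frequency sweep of bandwidth B over period T makes a target at range R produce a beat frequency f_b = 2BR/(cT), and the best-case range resolution is c/(2B).

```python
# Hypothetical FM-CW radar range calculation.  A sawtooth chirp of
# bandwidth `bandwidth` swept over `sweep_period` is mixed with the
# received echo; the resulting beat frequency is proportional to range.
C = 3.0e8  # speed of light, m/s

def beat_frequency(r, bandwidth, sweep_period):
    """Beat frequency (Hz) produced by a target at range r (metres)."""
    return 2.0 * bandwidth * r / (C * sweep_period)

def range_from_beat(f_b, bandwidth, sweep_period):
    """Invert the relation to recover range from the measured beat."""
    return C * sweep_period * f_b / (2.0 * bandwidth)

def range_resolution(bandwidth):
    """Best-case range resolution, limited only by sweep bandwidth."""
    return C / (2.0 * bandwidth)

# Assumed 24 GHz automotive-style parameters: 200 MHz sweep in 1 ms
B, T = 200e6, 1e-3
f_b = beat_frequency(75.0, B, T)   # beat produced by a target at 75 m
```

    A 200 MHz sweep thus bounds the resolution at 0.75 m regardless of signal processing, which is why wider sweeps are used when finer distance discrimination is required.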

  17. Application of Maximum Entropy Distribution to the Statistical Properties of Wave Groups

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    New distributions for the statistics of wave groups based on the maximum entropy principle are presented. The maximum entropy distributions appear to be superior to conventional distributions when applied to a limited amount of information. Their application to wave group properties shows the effectiveness of the maximum entropy distribution. An FFT filtering method is employed to obtain the wave envelope quickly and efficiently. Comparisons of both the maximum entropy distribution and the distribution of Longuet-Higgins (1984) with laboratory wind-wave data show that the former gives a better fit.
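    The FFT-based envelope extraction mentioned in this record can be sketched as follows (a minimal illustration assuming numpy is available; the test signal and its parameters are invented for demonstration, not taken from the paper): zeroing the negative-frequency half of the spectrum yields the discrete analytic signal, whose magnitude is the slowly varying group envelope.

```python
import numpy as np

def wave_envelope(eta):
    """Envelope of a zero-mean wave record via the analytic signal.

    The FFT of the record is doubled on positive frequencies and
    zeroed on negative ones (a discrete Hilbert transform); the
    magnitude of the inverse FFT is the wave-group envelope.
    """
    n = len(eta)
    spec = np.fft.fft(eta)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.abs(np.fft.ifft(spec * h))

# Narrow-banded test record: a 1 Hz carrier modulated by a slow group envelope
t = np.linspace(0.0, 100.0, 4096, endpoint=False)
groups = 1.0 + 0.5 * np.cos(2 * np.pi * 0.05 * t)   # slow modulation
eta = groups * np.cos(2 * np.pi * 1.0 * t)           # carrier wave
env = wave_envelope(eta)                              # recovers `groups`
```

    For a narrow-banded record like this one the recovered envelope matches the imposed modulation, which is what makes the subsequent group statistics meaningful.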

  18. Nonlinear classical theory of electromagnetism

    International Nuclear Information System (INIS)

    Pisello, D.

    1977-01-01

    A topological theory of electric charge is given. Einstein's criteria for the completion of classical electromagnetic theory are summarized and their relation to quantum theory and the principle of complementarity is indicated. The inhibiting effect that this principle has had on the development of physical thought is discussed. Developments in the theory of functions on nonlinear spaces provide the conceptual framework required for the completion of electromagnetism. The theory is based on an underlying field which is a continuous mapping of space-time into points on the two-sphere. (author)

  19. Some remarks on general covariance of quantum theory

    International Nuclear Information System (INIS)

    Schmutzer, E.

    1977-01-01

    If one accepts Einstein's general principle of relativity (covariance principle) also for the sphere of microphysics (quantum mechanics, quantum field theory, theory of elementary particles), one has to ask how far the fundamental laws of traditional quantum physics fulfil this principle. Attention is here drawn to a series of papers that have appeared during the last years, in which the author criticized the usual scheme of quantum theory (Heisenberg picture, Schroedinger picture etc.) and presented a new foundation of the basic laws of quantum physics, obeying the 'principle of fundamental covariance' (Einstein's covariance principle in space-time and covariance principle in Hilbert space of quantum operators and states). (author)

  20. Theory of fundamental interactions

    International Nuclear Information System (INIS)

    Pestov, A.B.

    1992-01-01

    In the present article the theory of fundamental interactions is derived in a systematic way from first principles. In the developed theory there is no separation between space-time and the internal gauge space. The main equations for the basic fields are derived. It is shown that the theory satisfies the correspondence principle and gives rise to new notions in the considered region. In particular, the conclusion is made about the existence of particles which are characterized not only by mass, spin and charge but also by a moment of inertia. These are rotating particles, which represent the notion of the rigid body on the microscopic level and give a key for understanding strong interactions. The main concepts and dynamical laws for these particles are formulated. The basic principles of the theory may be examined experimentally in the not too distant future. 29 refs

  1. Developing Soil Moisture Profiles Utilizing Remotely Sensed MW and TIR Based SM Estimates Through Principle of Maximum Entropy

    Science.gov (United States)

    Mishra, V.; Cruise, J. F.; Mecikalski, J. R.

    2015-12-01

    Developing accurate vertical soil moisture profiles with minimum input requirements is important to agricultural as well as land surface modeling. Earlier studies show that the principle of maximum entropy (POME) can be utilized to develop vertical soil moisture profiles with accuracy (MAE of about 1% for a monotonically dry profile; nearly 2% for monotonically wet profiles and 3.8% for mixed profiles) with minimum constraints (surface, mean and bottom soil moisture contents). In this study, the constraints for the vertical soil moisture profiles were obtained from remotely sensed data. Low resolution (25 km) MW soil moisture estimates (AMSR-E) were downscaled to 4 km using a soil evaporation efficiency index based disaggregation approach. The downscaled MW soil moisture estimates served as a surface boundary condition, while 4 km resolution TIR based Atmospheric Land Exchange Inverse (ALEXI) estimates provided the required mean root-zone soil moisture content. Bottom soil moisture content is assumed to be a soil dependent constant. Multi-year (2002-2011) gridded profiles were developed for the southeastern United States using the POME method. The soil moisture profiles were compared to those generated in land surface models (Land Information System (LIS) and an agricultural model DSSAT) along with available NRCS SCAN sites in the study region. The end product, spatial soil moisture profiles, can be assimilated into agricultural and hydrologic models in lieu of precipitation for data scarce regions.
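    The general idea behind a POME-constrained profile can be illustrated with a schematic discrete analogue (this is an invented toy, not the authors' algorithm, and the layer depths and water amounts are assumed values): distribute a fixed water column over layers so that the layer fractions maximise Shannon entropy subject to a prescribed mean depth, which yields the exponential family p_i ∝ exp(-λ z_i) with λ fixed by the constraint.

```python
import math

def maxent_profile(depths, total_water, mean_depth, tol=1e-12):
    """Discrete maximum-entropy allocation of a water column.

    Distributes `total_water` over layers at mid-depths `depths` so
    that the layer fractions p_i maximise Shannon entropy subject to
    a fixed centre of mass `mean_depth`.  The standard solution is
    p_i ∝ exp(-lam * z_i); `lam` is found by bisection.
    """
    def mean_of(lam):
        w = [math.exp(-lam * z) for z in depths]
        s = sum(w)
        return sum(wi * z for wi, z in zip(w, depths)) / s

    lo, hi = -50.0, 50.0          # mean_of is decreasing in lam
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mean_of(mid) > mean_depth:
            lo = mid              # mean too deep: increase lam
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    w = [math.exp(-lam * z) for z in depths]
    s = sum(w)
    return [total_water * wi / s for wi in w]

layers = [0.05, 0.15, 0.30, 0.60, 1.00]        # layer mid-depths, m (assumed)
profile = maxent_profile(layers, 0.30, 0.25)   # 0.30 m of water, mean at 0.25 m
```

    In the study itself the constraints come from the downscaled AMSR-E surface value, the ALEXI root-zone mean, and a fixed bottom value rather than from assumed numbers like these.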

  2. Principles of electrodynamics

    CERN Document Server

    Schwartz, Melvin

    1972-01-01

    This advanced undergraduate- and graduate-level text by the 1988 Nobel Prize winner establishes the subject's mathematical background, reviews the principles of electrostatics, then introduces Einstein's special theory of relativity and applies it throughout the book in topics ranging from Gauss' theorem and Coulomb's law to electric and magnetic susceptibility.

  3. The scope of the LeChatelier Principle

    Science.gov (United States)

    Lady, George M.; Quirk, James P.

    2007-07-01

    LeChatelier [Comptes Rendus 99 (1884) 786; Ann. Mines 13 (2) (1888) 157] showed that a physical system's “adjustment” to a disturbance to its equilibrium tended to be smaller as constraints were added to the adjustment process. Samuelson [Foundations of Economic Analysis, Harvard University Press, Cambridge, 1947] applied this result to economics in the context of the comparative statics of the actions of individual agents characterized as the solutions to optimization problems; and later (1960), extended the application of the Principle to a stable, multi-market equilibrium and the case of all commodities gross substitutes [e.g., L. Metzler, Stability of multiple markets: the hicks conditions. Econometrica 13 (1945) 277-292]. Refinements and alternative routes of derivation have appeared in the literature since then, e.g., Silberberg [The LeChatelier Principle as a corollary to a generalized envelope theorem, J. Econ. Theory 3 (1971) 146-155; A revision of comparative statics methodology in economics, or, how to do comparative statics on the back of an envelope, J. Econ. Theory 7 (1974) 159-172], Milgrom and Roberts [The LeChatelier Principle, Am. Econ. Rev. 86 (1996) 173-179], W. Suen, E. Silberberg, P. Tseng [The LeChatelier Principle: the long and the short of it, Econ. Theory 16 (2000) 471-476], and Chavas [A global analysis of constrained behavior: the LeChatelier Principle ‘in the large’, South. Econ. J. 72 (3) (2006) 627-644]. In this paper, we expand the scope of the Principle in various ways keyed to Samuelson's proposed means of testing comparative statics results (optimization, stability, and qualitative analysis). In the optimization framework, we show that the converse LeChatelier Principle also can be found in constrained optimization problems and for not initially “conjugate” sensitivities. We then show how the Principle and its converse can be found through the qualitative analysis of any linear system. In these terms, the Principle and

  4. Two conceptions of legal principles

    Directory of Open Access Journals (Sweden)

    Spaić Bojan

    2017-01-01

    Full Text Available The paper discusses the classical understanding of legal principles as the most general norms of a legal order, confronting it with Dworkin's and Alexy's understanding of legal principles as prima facie, unconditional commands. The analysis shows that the common, classical conception brings into question the status of legal principles as norms by disregarding their usefulness in judicial reasoning, while, conversely, the latter has significant import for legal practice and consequently for legal dogmatics. It is argued that the heuristic fruitfulness of understanding principles as optimization commands thus becomes apparent. When we understand the relation of principles to the idea of proportionality, as the specific mode of their application, which is different from the subsumptive mode of applying rules, the theory of legal principles advanced by Dworkin and Alexy appears to be descriptively better than others, but not without its flaws.

  5. Cultured Cortical Neurons Can Perform Blind Source Separation According to the Free-Energy Principle

    Science.gov (United States)

    Isomura, Takuya; Kotani, Kiyoshi; Jimbo, Yasuhiko

    2015-01-01

    Blind source separation is the computation underlying the cocktail party effect––a partygoer can distinguish a particular talker’s voice from the ambient noise. Early studies indicated that the brain might use blind source separation as a signal processing strategy for sensory perception and numerous mathematical models have been proposed; however, it remains unclear how the neural networks extract particular sources from a complex mixture of inputs. We discovered that neurons in cultures of dissociated rat cortical cells could learn to represent particular sources while filtering out other signals. Specifically, the distinct classes of neurons in the culture learned to respond to the distinct sources after repeating training stimulation. Moreover, the neural network structures changed to reduce free energy, as predicted by the free-energy principle, a candidate unified theory of learning and memory, and by Jaynes’ principle of maximum entropy. This implicit learning can only be explained by some form of Hebbian plasticity. These results are the first in vitro (as opposed to in silico) demonstration of neural networks performing blind source separation, and the first formal demonstration of neuronal self-organization under the free energy principle. PMID:26690814

  6. Cultured Cortical Neurons Can Perform Blind Source Separation According to the Free-Energy Principle.

    Directory of Open Access Journals (Sweden)

    Takuya Isomura

    2015-12-01

    Full Text Available Blind source separation is the computation underlying the cocktail party effect--a partygoer can distinguish a particular talker's voice from the ambient noise. Early studies indicated that the brain might use blind source separation as a signal processing strategy for sensory perception and numerous mathematical models have been proposed; however, it remains unclear how the neural networks extract particular sources from a complex mixture of inputs. We discovered that neurons in cultures of dissociated rat cortical cells could learn to represent particular sources while filtering out other signals. Specifically, the distinct classes of neurons in the culture learned to respond to the distinct sources after repeating training stimulation. Moreover, the neural network structures changed to reduce free energy, as predicted by the free-energy principle, a candidate unified theory of learning and memory, and by Jaynes' principle of maximum entropy. This implicit learning can only be explained by some form of Hebbian plasticity. These results are the first in vitro (as opposed to in silico demonstration of neural networks performing blind source separation, and the first formal demonstration of neuronal self-organization under the free energy principle.
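    The blind source separation task studied in these two records can be illustrated with a toy numerical sketch (an invented demonstration of the general technique, not the paper's neural-network mechanism or the free-energy formulation): after whitening, two linearly mixed sources differ from the originals only by a rotation, and the separating angle can be found by maximising non-Gaussianity (absolute excess kurtosis).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000
s1 = np.sign(np.sin(2 * np.pi * np.arange(n) / 200.0))  # square wave (sub-Gaussian)
s2 = rng.laplace(size=n)                                 # Laplacian noise (super-Gaussian)
S = np.vstack([s1, s2])
A = np.array([[1.0, 0.6], [0.4, 1.0]])                   # assumed mixing matrix
X = A @ S                                                # observed mixtures

# Whiten: linearly transform the mixtures to unit covariance
Xc = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(Xc @ Xc.T / n)
Z = np.diag(d ** -0.5) @ E.T @ Xc

# After whitening the residual unmixing is a pure rotation; pick the
# angle maximising total non-Gaussianity of the rotated components.
def kurt(y):
    return np.mean(y ** 4) / np.mean(y ** 2) ** 2 - 3.0

best = max(np.linspace(0.0, np.pi / 2, 180),
           key=lambda th: abs(kurt(np.cos(th) * Z[0] + np.sin(th) * Z[1]))
                        + abs(kurt(-np.sin(th) * Z[0] + np.cos(th) * Z[1])))
R = np.array([[np.cos(best), np.sin(best)], [-np.sin(best), np.cos(best)]])
Y = R @ Z   # recovered sources, up to order, sign and scale
```

    Up to permutation, sign and scale, the rows of Y closely match the original square wave and Laplacian source, which is the same recovery (by very different means) that the cultured networks achieve.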

  7. Gauge theory and variational principles

    CERN Document Server

    Bleecker, David

    2005-01-01

    This text provides a framework for describing and organizing the basic forces of nature and the interactions of subatomic particles. A detailed and self-contained mathematical account of gauge theory, it is geared toward beginning graduate students and advanced undergraduates in mathematics and physics. This well-organized treatment supplements its rigor with intuitive ideas.Starting with an examination of principal fiber bundles and connections, the text explores curvature; particle fields, Lagrangians, and gauge invariance; Lagrange's equation for particle fields; and the inhomogeneous field

  8. Islam and the four principles of medical ethics.

    Science.gov (United States)

    Mustafa, Yassar

    2014-07-01

    The principles underpinning Islam's ethical framework applied to routine clinical scenarios remain insufficiently understood by many clinicians, thereby unfortunately permitting the delivery of culturally insensitive healthcare. This paper summarises the foundations of the Islamic ethical theory, elucidating the principles and methodology employed by the Muslim jurist in deriving rulings in the field of medical ethics. The four-principles approach, as espoused by Beauchamp and Childress, is also interpreted through the prism of Islamic ethical theory. Each of the four principles (beneficence, nonmaleficence, justice and autonomy) is investigated in turn, looking in particular at the extent to which each is rooted in the Islamic paradigm. This will provide an important insight into Islamic medical ethics, enabling the clinician to have a better informed discussion with the Muslim patient. It will also allow for a higher degree of concordance in consultations and consequently optimise culturally sensitive healthcare delivery.

  9. Theory and experiment in gravitational physics

    Science.gov (United States)

    Will, C. M.

    New technological advances have made it feasible to conduct measurements with precision levels which are suitable for experimental tests of the theory of general relativity. This book has been designed to fill a new need for a complete treatment of techniques for analyzing gravitation theory and experiment. The Einstein equivalence principle and the foundations of gravitation theory are considered, taking into account the Dicke framework, basic criteria for the viability of a gravitation theory, experimental tests of the Einstein equivalence principle, Schiff's conjecture, and a model theory devised by Lightman and Lee (1973). Gravitation as a geometric phenomenon is considered along with the parametrized post-Newtonian formalism, the classical tests, tests of the strong equivalence principle, gravitational radiation as a tool for testing relativistic gravity, the binary pulsar, and cosmological tests.

  10. Theory of electroelasticity

    CERN Document Server

    Kuang, Zhen-Bang

    2014-01-01

    Theory of Electroelasticity analyzes the stress, strain, electric field and electric displacement in electroelastic structures such as sensors, actuators and other smart materials and structures. This book also describes new theories such as the physical variational principle and the inertial entropy theory. It differs from the traditional method by using the physical variational principle to derive the governing equations of the piezoelectric material, whereas the Maxwell stress is obtained automatically. By using the inertial entropy theory, the temperature wave equation is obtained very easily. The book is intended for scientists, researchers and engineers in the areas of mechanics, physics, smart material and control engineering as well as mechanical, aeronautical and civil engineering, etc. Zhen-Bang Kuang is a professor at Shanghai Jiao Tong University.

  11. Goal-Setting Learning Principles: A Lesson From Practitioner

    OpenAIRE

    Zainudin bin Abu Bakar; Lee Mei Yun; NG Siew Keow; Tan Hui Li

    2014-01-01

    One of the prominent theories is the goal-setting theory, which has been widely used in educational settings. It is an approach that can enhance teaching and learning activities in the classroom. This is a report paper about a simple study of the implementation of the goal-setting principle in the classroom. Clinical data from the teaching and learning session were then analysed to address several issues highlighted. It is found that the goal-setting principles, if understood clearly by the te...

  12. Descent principle in modular Galois theory

    Indian Academy of Sciences (India)

    with Drinfeld module theory see Remark 5.2 at the end of the paper. To describe the ... where the elements X1,...,Xm need not be algebraically independent over kq. When ..... In §5 we shall make some motivational and philosophical remarks.

  13. Principles of Optics

    Science.gov (United States)

    Born, Max; Wolf, Emil

    1999-10-01

    Principles of Optics is one of the classic science books of the twentieth century, and probably the most influential book in optics published in the past forty years. This edition has been thoroughly revised and updated, with new material covering the CAT scan, interference with broad-band light and the so-called Rayleigh-Sommerfeld diffraction theory. This edition also details scattering from inhomogeneous media and presents an account of the principles of diffraction tomography to which Emil Wolf has made a basic contribution. Several new appendices are also included. This new edition will be invaluable to advanced undergraduates, graduate students and researchers working in most areas of optics.

  14. The principle of the Fermionic projector

    CERN Document Server

    Finster, Felix

    2006-01-01

    The "principle of the fermionic projector" provides a new mathematical framework for the formulation of physical theories and is a promising approach for physics beyond the standard model. This book begins with a brief review of relativity, relativistic quantum mechanics, and classical gauge theories, emphasizing the basic physical concepts and mathematical foundations. The external field problem and Klein's paradox are discussed and then resolved by introducing the fermionic projector, a global object in space-time that generalizes the notion of the Dirac sea. At the mathematical core of the book is a precise definition of the fermionic projector and the use of methods of hyperbolic differential equations for detailed analysis. The fermionic projector makes it possible to formulate a new type of variational principle in space-time. The mathematical tools are developed for the analysis of the corresponding Euler-Lagrange equations. A particular variational principle is proposed that gives rise to an effective...

  15. Conscious and unconscious thought in risky choice: testing the capacity principle and the appropriate weighting principle of unconscious thought theory.

    Science.gov (United States)

    Ashby, Nathaniel J S; Glöckner, Andreas; Dickert, Stephan

    2011-01-01

    Daily we make decisions ranging from the mundane to the seemingly pivotal that shape our lives. Assuming rationality, all relevant information about one's options should be thoroughly examined in order to make the best choice. However, some findings suggest that under specific circumstances thinking too much has disadvantageous effects on decision quality and that it might be best to let the unconscious do the busy work. In three studies we test the capacity assumption and the appropriate weighting principle of Unconscious Thought Theory using a classic risky choice paradigm and including a "deliberation with information" condition. Although we replicate an advantage for unconscious thought (UT) over "deliberation without information," we find that "deliberation with information" equals or outperforms UT in risky choices. These results speak against the generality of the assumption that UT has a higher capacity for information integration and show that this capacity assumption does not hold in all domains. Furthermore, we show that "deliberate thought with information" leads to more differentiated knowledge compared to UT which speaks against the generality of the appropriate weighting assumption.

  16. Communication Theory.

    Science.gov (United States)

    Penland, Patrick R.

    Three papers are presented which delineate the foundation of theory and principles which underlie the research and instructional approach to communications at the Graduate School of Library and Information Science, University of Pittsburgh. Cybernetic principles provide the integration, and validation is based in part on a situation-producing…

  17. The Pauli Exclusion Principle

    Indian Academy of Sciences (India)

    his exclusion principle, the quantum theory was a mess. Moreover, it could ... This is a function of all the coordinates and 'internal variables' such as spin, of all the ... must remain basically the same (ie change by a phase factor at most) if we ...

  18. Goal-Setting Learning Principles: A Lesson From Practitioner

    Directory of Open Access Journals (Sweden)

    Zainudin bin Abu Bakar

    2014-02-01

    Full Text Available One of the prominent theories is the goal-setting theory, which has been widely used in educational settings. It is an approach that can enhance teaching and learning activities in the classroom. This is a report paper about a simple study of the implementation of the goal-setting principle in the classroom. Clinical data from the teaching and learning session were then analysed to address several issues highlighted. It is found that the goal-setting principles, if understood clearly by teachers, can enhance teaching and learning activities. Failing to see the needs of the session will diminish the students' learning interest. It is suggested that goal-setting learning principles could become a powerful aid for teachers in the classroom.

  19. Theories of Matter, Space and Time; Classical theories

    Science.gov (United States)

    Evans, N.; King, S. F.

    2017-12-01

    This book and its sequel ('Theories of Matter Space and Time: Quantum Theories') are taken from third and fourth year undergraduate Physics courses at Southampton University, UK. The aim of both books is to move beyond the initial courses in classical mechanics, special relativity, electromagnetism, and quantum theory to more sophisticated views of these subjects and their interdependence. The goal is to guide undergraduates through some of the trickier areas of theoretical physics with concise analysis while revealing the key elegance of each subject. The first chapter introduces the principle of least action, an alternative treatment of Newtonian dynamics that provides new understanding of conservation laws. In particular, it shows how the formalism evolved from Fermat's principle of least time in optics. The second introduces special relativity, leading quickly to the need for and form of four-vectors. It develops four-vectors for all kinematic variables, generalizes Newton's second law to the relativistic environment, and then returns to the principle of least action for a free relativistic particle. The third chapter presents a review of the integral and differential forms of Maxwell's equations before massaging them into four-vector form so that the Lorentz boost properties of electric and magnetic fields are transparent. Again, it then returns to the action principle to formulate minimal substitution for an electrically charged particle.

  20. The Structuring Principle: Political Socialization and Belief Systems

    Science.gov (United States)

    Searing, Donald D.; And Others

    1973-01-01

    Assesses the significance of data on childhood political learning to political theory by testing the "structuring principle," considered one of the central assumptions of political socialization research. This principle asserts that "basic orientations acquired during childhood structure the later learning of specific issue beliefs." The…

  1. Automatic maximum entropy spectral reconstruction in NMR

    International Nuclear Information System (INIS)

    Mobli, Mehdi; Maciejewski, Mark W.; Gryk, Michael R.; Hoch, Jeffrey C.

    2007-01-01

    Developments in superconducting magnets, cryogenic probes, isotope labeling strategies, and sophisticated pulse sequences together have enabled the application, in principle, of high-resolution NMR spectroscopy to biomolecular systems approaching 1 megadalton. In practice, however, conventional approaches to NMR that utilize the fast Fourier transform, which require data collected at uniform time intervals, result in prohibitively lengthy data collection times in order to achieve the full resolution afforded by high field magnets. A variety of approaches that involve nonuniform sampling have been proposed, each utilizing a non-Fourier method of spectrum analysis. A very general non-Fourier method that is capable of utilizing data collected using any of the proposed nonuniform sampling strategies is maximum entropy reconstruction. A limiting factor in the adoption of maximum entropy reconstruction in NMR has been the need to specify non-intuitive parameters. Here we describe a fully automated system for maximum entropy reconstruction that requires no user-specified parameters. A web-accessible script generator provides the user interface to the system

  2. Supersymmetry Breaking as a new source for the Generalized Uncertainty Principle

    OpenAIRE

    Faizal, Mir

    2016-01-01

    In this letter, we will demonstrate that the breaking of supersymmetry by a non-anticommutative deformation can be used to generate the generalized uncertainty principle. We will analyze the physical reasons for this observation, in the framework of string theory. We also discuss the relation between the generalized uncertainty principle and the Lee–Wick field theories.

  3. Supersymmetry breaking as a new source for the generalized uncertainty principle

    Energy Technology Data Exchange (ETDEWEB)

    Faizal, Mir, E-mail: mirfaizalmir@gmail.com

    2016-06-10

    In this letter, we will demonstrate that the breaking of supersymmetry by a non-anticommutative deformation can be used to generate the generalized uncertainty principle. We will analyze the physical reasons for this observation, in the framework of string theory. We also discuss the relation between the generalized uncertainty principle and the Lee–Wick field theories.

  4. Maximum Entropy and Probability Kinematics Constrained by Conditionals

    Directory of Open Access Journals (Sweden)

    Stefan Lukits

    2015-03-01

    Full Text Available Two open questions of inductive reasoning are solved: (1) does the principle of maximum entropy (PME) give a solution to the obverse Majerník problem; and (2) is Wagner correct when he claims that Jeffrey’s updating principle (JUP) contradicts PME? Majerník shows that PME provides unique and plausible marginal probabilities, given conditional probabilities. The obverse problem posed here is whether PME also provides such conditional probabilities, given certain marginal probabilities. The theorem developed to solve the obverse Majerník problem demonstrates that in the special case introduced by Wagner PME does not contradict JUP, but elegantly generalizes it and offers a more integrated approach to probability updating.
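
As a minimal numerical illustration of the kind of constraint problem discussed in this abstract (all numbers hypothetical): when only the marginals of a joint distribution are constrained, the PME solution is the independence product, and any correlated joint with the same marginals has strictly lower entropy.

```python
import math
from itertools import product

def entropy(p):
    """Shannon entropy in nats; zero cells contribute nothing."""
    return -sum(x * math.log(x) for x in p if x > 0)

# Marginals of two ternary variables (hypothetical numbers).
px = [0.5, 0.3, 0.2]
py = [0.6, 0.3, 0.1]

# With only marginal constraints, the maximum-entropy joint is the
# independence product p(x, y) = p(x) * p(y), stored row-major.
pme_joint = [a * b for a, b in product(px, py)]

# A correlated joint with the same marginals: shift a little mass among
# cells (0,0), (0,1), (1,0), (1,1) so both row and column sums are kept.
eps = 0.05
perturbed = list(pme_joint)
perturbed[0] += eps; perturbed[1] -= eps   # row 0: cells (0,0) and (0,1)
perturbed[3] -= eps; perturbed[4] += eps   # row 1: cells (1,0) and (1,1)

# entropy(pme_joint) exceeds entropy(perturbed)
```

Here the PME answer coincides with probabilistic independence; the interest of the Majerník-type problems is precisely what PME returns when conditionals rather than marginals are given.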

  5. Effect of Interface Structure on Thermal Boundary Conductance by using First-principles Density Functional Perturbation Theory

    Institute of Scientific and Technical Information of China (English)

    GAO Xue; ZHANG Yue; SHANG Jia-Xiang

    2011-01-01

    We choose a Si/Ge interface as a research object to investigate the influence of interface disorder on thermal boundary conductance. In the calculations, the diffuse mismatch model is used to study thermal boundary conductance between two non-metallic materials, while the phonon dispersion relationship is calculated by first-principles density functional perturbation theory. The results show that interface disorder limits thermal transport. The increase of atomic spacing at the interface results in weakly coupled interfaces and a decrease in the thermal boundary conductance. This approach offers a simple method to investigate the relationship between microstructure and thermal conductivity. It is well known that interfaces can play a dominant role in the overall thermal transport characteristics of structures whose length scale is less than the phonon mean free path. When heat flows across an interface between two different materials, there exists a temperature jump at the interface. Thermal boundary conductance (TBC), which describes the efficiency of heat flow at material interfaces, plays an important role in the transport of thermal energy in nanometer-scale devices, semiconductor superlattices, thin film multilayers and nanocrystalline materials. [1]

  6. Introduction to maximum entropy

    International Nuclear Information System (INIS)

    Sivia, D.S.

    1989-01-01

    The maximum entropy (MaxEnt) principle has been successfully used in image reconstruction in a wide variety of fields. The author reviews the need for such methods in data analysis and shows, by use of a very simple example, why MaxEnt is to be preferred over other regularizing functions. This leads to a more general interpretation of the MaxEnt method, and its use is illustrated with several different examples. Practical difficulties with non-linear problems still remain, this being highlighted by the notorious phase problem in crystallography. He concludes with an example from neutron scattering, using data from a filter difference spectrometer to contrast MaxEnt with a conventional deconvolution. 12 refs., 8 figs., 1 tab

  7. Introduction to maximum entropy

    International Nuclear Information System (INIS)

    Sivia, D.S.

    1988-01-01

    The maximum entropy (MaxEnt) principle has been successfully used in image reconstruction in a wide variety of fields. We review the need for such methods in data analysis and show, by use of a very simple example, why MaxEnt is to be preferred over other regularizing functions. This leads to a more general interpretation of the MaxEnt method, and its use is illustrated with several different examples. Practical difficulties with non-linear problems still remain, this being highlighted by the notorious phase problem in crystallography. We conclude with an example from neutron scattering, using data from a filter difference spectrometer to contrast MaxEnt with a conventional deconvolution. 12 refs., 8 figs., 1 tab

  8. The Legal Principles In The Democratic State Of Law And The Labor Principle Of Protection: An Analysis Of The Informative, Regulatory And Interpretative Functions Of The Principle Of Protection

    Directory of Open Access Journals (Sweden)

    Nilson Feliciano de Araújo

    2016-12-01

    Full Text Available This article aims to investigate, from the perspective of the material and effective dimensions of the right to work, the scope of the principle of protection in labor law, addressing the issue of principles and their functions. The research is deepened through a descriptive-explanatory study of the documentary-bibliographic type, dedicated to the analytical-conceptual problems of theories of principles. It reveals that principles have broad application in labor law and that these, especially protection, must be present in the labor system in their informative, interpretative and normative functions in order to ensure the effectiveness of fundamental social rights.

  9. Spectrum unfolding, sensitivity analysis and propagation of uncertainties with the maximum entropy deconvolution code MAXED

    CERN Document Server

    Reginatto, M; Neumann, S

    2002-01-01

    MAXED was developed to apply the maximum entropy principle to the unfolding of neutron spectrometric measurements. The approach followed in MAXED has several features that make it attractive: it permits inclusion of a priori information in a well-defined and mathematically consistent way, the algorithm used to derive the solution spectrum is not ad hoc (it can be justified on the basis of arguments that originate in information theory), and the solution spectrum is a non-negative function that can be written in closed form. This last feature permits the use of standard methods for the sensitivity analysis and propagation of uncertainties of MAXED solution spectra. We illustrate its use with unfoldings of NE 213 scintillation detector measurements of photon calibration spectra, and of multisphere neutron spectrometer measurements of cosmic-ray induced neutrons at high altitude (approx 20 km) in the atmosphere.
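
The closed-form structure of the MAXED solution mentioned here can be illustrated with a toy two-detector, three-bin unfolding (all numbers hypothetical; this sketches only the exponential functional form of the MaxEnt spectrum, not the MAXED algorithm itself):

```python
import math

# Toy two-detector, three-bin unfolding (all numbers hypothetical).
R = [[1.0, 0.5, 0.1],      # response of detector 0 to each spectrum bin
     [0.1, 0.5, 1.0]]      # response of detector 1
measured = [0.9, 0.7]      # detector readings
default = [0.5, 0.5, 0.5]  # a priori default spectrum

def spectrum(lam):
    """MaxEnt spectrum in closed form: f_j = f0_j * exp(-sum_i lam_i R_ij)."""
    return [f0 * math.exp(-sum(lam[i] * R[i][j] for i in range(len(R))))
            for j, f0 in enumerate(default)]

def fold(f):
    """Apply the detector response to a candidate spectrum."""
    return [sum(Ri[j] * f[j] for j in range(len(f))) for Ri in R]

# Adjust the Lagrange multipliers until the folded spectrum matches the
# readings (plain fixed-point iteration here; MAXED itself minimizes a dual
# potential and accounts for measurement uncertainty).
lam = [0.0, 0.0]
for _ in range(5000):
    residual = [c - m for c, m in zip(fold(spectrum(lam)), measured)]
    lam = [l + 0.5 * r for l, r in zip(lam, residual)]

f = spectrum(lam)  # non-negative by construction
```

Because the spectrum is an exponential in the multipliers, non-negativity is automatic, which is the property that makes standard sensitivity-analysis and uncertainty-propagation machinery applicable to the solution.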

  10. MWH's water treatment: principles and design

    National Research Council Canada - National Science Library

    Crittenden, John C

    2012-01-01

    ... with additional worked problems and new treatment approaches. It covers both the principles and theory of water treatment as well as the practical considerations of plant design and distribution...

  11. Electromagnetic scattering theory

    Science.gov (United States)

    Bird, J. F.; Farrell, R. A.

    1986-01-01

    Electromagnetic scattering theory is discussed with emphasis on the general stochastic variational principle (SVP) and its applications. The stochastic version of the Schwinger-type variational principle is presented, and explicit expressions for its integrals are considered. Results are summarized for scalar wave scattering from a classic rough-surface model and for vector wave scattering from a random dielectric-body model. Also considered are the selection of trial functions and the variational improvement of the Kirchhoff short-wave approximation appropriate to large size-parameters. Other applications of vector field theory discussed include a general vision theory and the analysis of hydromagnetism induced by ocean motion across the geomagnetic field. Levitational force-torque in the magnetic suspension of the disturbance compensation system (DISCOS), now deployed in NOVA satellites, is also analyzed using the developed theory.

  12. Minimal length, Friedmann equations and maximum density

    Energy Technology Data Exchange (ETDEWEB)

    Awad, Adel [Center for Theoretical Physics, British University of Egypt,Sherouk City 11837, P.O. Box 43 (Egypt); Department of Physics, Faculty of Science, Ain Shams University,Cairo, 11566 (Egypt); Ali, Ahmed Farag [Centre for Fundamental Physics, Zewail City of Science and Technology,Sheikh Zayed, 12588, Giza (Egypt); Department of Physics, Faculty of Science, Benha University,Benha, 13518 (Egypt)

    2014-06-16

    Inspired by Jacobson’s thermodynamic approach, Cai et al. have shown the emergence of Friedmann equations from the first law of thermodynamics. We extend the Akbar–Cai derivation http://dx.doi.org/10.1103/PhysRevD.75.084003 of Friedmann equations to accommodate a general entropy-area law. Studying the resulting Friedmann equations using a specific entropy-area law, which is motivated by the generalized uncertainty principle (GUP), reveals the existence of a maximum energy density close to the Planck density. Allowing for a general continuous pressure p(ρ,a) leads to bounded curvature invariants and a general nonsingular evolution. In this case, the maximum energy density is reached in a finite time and there is no cosmological evolution beyond this point, which leaves the big bang singularity inaccessible from a spacetime perspective. The existence of a maximum energy density and a general nonsingular evolution is independent of the equation of state and the spatial curvature k. As an example we study the evolution of the equation of state p=ωρ through its phase-space diagram to show the existence of a maximum energy density which is reachable in a finite time.

  13. Quasilinear theory of a spin-flip laser

    International Nuclear Information System (INIS)

    Arunasalam, V.

    1973-09-01

    A discussion of the nonlinear electrodynamic behavior of a gas of spin 1/2 particles in a uniform external magnetic field is presented. In particular, the quasilinear time evolution of a spin-flip laser system is examined in detail both from the point of view of the thermodynamics of negative temperature systems and the quantum kinetic methods of nonequilibrium statistical mechanics. It is shown that the quasilinear steady state of a spin-flip laser system is that state at which the populations of the spin-up and the spin-down states are equal to each other, and this quasilinear steady state is the state of minimum entropy production. The maximum output power of the spin-flip laser predicted by the theory presented in this paper is shown to be in reasonably good agreement with experimental results. The method used here is based on the general principles of nonrelativistic quantum theory and takes account of the Doppler broadening, collisional broadening, and Compton recoil effects. 30 refs., 1 fig

  14. Variational formulation of two scalar-tetradic theories of gravitation

    International Nuclear Information System (INIS)

    Saez, D.

    1983-01-01

    In this paper we obtain two scalar-tetradic theories of gravitation (theories A and B) from a variational principle. In these theories the gravitational energy is localized and the principle of equivalence holds. They combine some aspects of the Moller theory and the Brans-Dicke theory. The first-order approximations and an introduction to the study of both theories in the static spherically symmetric case are presented.

  15. The principles of electronic and electromechanic power conversion a systems approach

    CERN Document Server

    Ferreira, Braham

    2013-01-01

    Teaching the principles of power electronics and electromechanical power conversion through a unique top down systems approach, The Principles of Electromechanical Power Conversion takes the role and system context of power conversion functions as the starting point. Following this approach, the text defines the building blocks of the system and describes the theory of how they exchange power with each other. The authors introduce a modern, simple approach to machines, which makes the principles of field oriented control and space vector theory approachable to undergraduate students as well as

  16. Connection between optimal control theory and adiabatic-passage techniques in quantum systems

    Science.gov (United States)

    Assémat, E.; Sugny, D.

    2012-08-01

    This work explores the relationship between optimal control theory and adiabatic passage techniques in quantum systems. The study is based on a geometric analysis of the Hamiltonian dynamics constructed from Pontryagin's maximum principle. In a three-level quantum system, we show that the stimulated Raman adiabatic passage technique can be associated to a peculiar Hamiltonian singularity. One deduces that the adiabatic pulse is solution of the optimal control problem only for a specific cost functional. This analysis is extended to the case of a four-level quantum system.
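
For reference, the finite-dimensional maximum principle invoked in this analysis can be stated in standard form (notation chosen here, not the paper's own):

```latex
% System \dot{x} = f(x,u), control u \in U, cost J[u] = \int_0^T f^0(x,u)\,dt.
% Pre-Hamiltonian with costate p and abnormal multiplier p_0 \le 0:
\[
  H(x,p,u) \;=\; \langle p, f(x,u)\rangle + p_0\, f^0(x,u).
\]
% Along an optimal pair (x^*,u^*) there exists a nontrivial costate p^* with
\[
  \dot{x}^* = \frac{\partial H}{\partial p}, \qquad
  \dot{p}^* = -\frac{\partial H}{\partial x}, \qquad
  H\bigl(x^*(t),p^*(t),u^*(t)\bigr) = \max_{u\in U} H\bigl(x^*(t),p^*(t),u\bigr).
\]
```

The geometric analysis described in the abstract studies the Hamiltonian flow generated by these equations; STIRAP then appears as the solution attached to a singularity of that flow for one particular cost functional.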

  17. Derivation of some new distributions in statistical mechanics using maximum entropy approach

    Directory of Open Access Journals (Sweden)

    Ray Amritansu

    2014-01-01

    Full Text Available The maximum entropy principle has earlier been used to derive the Bose-Einstein (B.E.), Fermi-Dirac (F.D.) and Intermediate Statistics (I.S.) distributions of statistical mechanics. The central idea of these distributions is to predict the distribution of the microstates, which are the particles of the system, on the basis of the knowledge of some macroscopic data. The latter information is specified in the form of some simple moment constraints. One distribution differs from the other in the way in which the constraints are specified. In the present paper, we have derived some new distributions similar to the B.E. and F.D. distributions of statistical mechanics by using the maximum entropy principle. Some proofs of the B.E. and F.D. distributions are shown, and at the end some new results are discussed.
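
The textbook core of such a derivation can be sketched as follows (standard notation, upper signs for bosons and lower signs for fermions; this is not the paper's own proof): maximize the occupation-number entropy subject to fixed particle number and energy,

```latex
\[
  S = \sum_i \Bigl[ \pm (1 \pm n_i)\ln(1 \pm n_i) \;-\; n_i \ln n_i \Bigr],
  \qquad N = \sum_i n_i, \qquad E = \sum_i n_i \varepsilon_i ,
\]
\[
  \frac{\partial}{\partial n_i}\bigl(S - \alpha N - \beta E\bigr) = 0
  \;\Longrightarrow\;
  \ln\frac{1 \pm n_i}{n_i} = \alpha + \beta \varepsilon_i
  \;\Longrightarrow\;
  n_i = \frac{1}{e^{\alpha + \beta \varepsilon_i} \mp 1},
\]
```

which is the Bose-Einstein distribution for the upper signs and the Fermi-Dirac distribution for the lower signs; the constraints alone pick out the statistics.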

  18. The 4th Thermodynamic Principle?

    International Nuclear Information System (INIS)

    Montero Garcia, Jose de la Luz; Novoa Blanco, Jesus Francisco

    2007-01-01

    It should be emphasized that the 4th Principle formulated above is a thermodynamic principle and, at the same time, is quantum-mechanical and relativistic, as it inevitably should be; its absence has been one of the main theoretical limitations of physical theory until today. We show that with the theoretical discovery of the Dimensional Primitive Octet of Matter, the 4th Thermodynamic Principle, the Quantum Hexet of Matter, the Global Hexagonal Subsystem of Fundamental Constants of Energy and the Measurement or Connected Global Scale or Universal Existential Interval of the Matter, it is possible to arrive at a global formulation of the four 'forces' or fundamental interactions of nature. Einstein's golden dream is possible.

  19. On Palacios-Gordon's theory of relativity

    International Nuclear Information System (INIS)

    Gulati, P.S.

    1981-01-01

    Since the early days of Einstein's special theory of relativity (1905), it is known that this theory suffers from some epistemological problems. Over the years, many theoreticians have endeavored to overcome these problems, rejecting either the 'Principle of Relativity' or the 'Light Principle'. Palacios and Gordon rejected the former and advanced an alternative theory governed by Voigt's transformation equations (1887). In the present paper, Palacios-Gordon's theory has been critically examined and some of its drawbacks are discovered. It becomes obvious that neither Einstein's special theory of relativity nor Palacios-Gordon's theory of relativity provides a flawless fit to the real world. It is speculated that suitable synthesis of these two theories might resolve all the controversial issues of special theory of relativity. (author)

  20. Intervention principles: Theory and practice

    International Nuclear Information System (INIS)

    Jensen, P.H.; Crick, M.J.

    2000-01-01

    After the Chernobyl accident, it became clear that some clarification of the basic principles for intervention was necessary as well as more internationally recognised numerical guidance on intervention levels. There was in the former USSR and in Europe much confusion over, and lack of recognition of, the very different origins and purposes of dose limits for controlling deliberate increases in radiation exposure for practices and dose levels at which intervention is prompted to decrease existing radiation exposure. In the latest recommendations from ICRP in its Publication 60, a clear distinction is made between the radiation protection systems for a practice and for intervention. According to ICRP, the protective measures forming a program of intervention, which always have some disadvantages, should each be justified on their own merit in the sense that they should do more good than harm, and their form, scale, and duration should be optimised so as to do the most good. Intervention levels for protective actions can be established for many possible accident scenarios. For planning and preparedness purposes, a generic optimisation based on generic accident scenario calculations, should result in optimised generic intervention levels for each protective measure. The factors entering such an optimisation will on the benefit side include avertable doses and avertable risks as well as reassurance. On the harm side the factors include monetary costs, collective and individual risk for the action itself, social disruption and anxiety. More precise optimisation analyses based on real site and accident specific data can be carried out and result in specific intervention levels. It is desirable that values for easily measurable quantities such as dose rate and surface contamination density be developed as surrogates for intervention levels of avertable dose. However, it is important that these quantities should be used carefully and applied taking account of local

  1. The Principle of Least Action

    Indian Academy of Sciences (India)

    THOLASI

    Reproduced from the book A Survey of Physical Theory (formerly titled: A Survey ... The dynamical laws for physical systems are usually expressed in the form of ... The reason for the difference in the results derived from the two principles lies ...

  2. Testing the quantum superposition principle: matter waves and beyond

    Science.gov (United States)

    Ulbricht, Hendrik

    2015-05-01

    New technological developments make it possible to explore the quantum properties of very complex systems, bringing the question of whether macroscopic systems also share such features within experimental reach. Interest in this question is increased by the fact that, on the theory side, many suggest that the quantum superposition principle is not exact, with departures from it becoming larger the more macroscopic the system. Testing the superposition principle intrinsically also means testing suggested extensions of quantum theory, so-called collapse models. We will report on three new proposals to experimentally test the superposition principle with nanoparticle interferometry, optomechanical devices and by spectroscopic experiments in the frequency domain. We will also report on the status of optical levitation and cooling experiments with nanoparticles in our labs, towards an Earth-bound matter-wave interferometer to test the superposition principle for a particle mass of one million amu (atomic mass units).

  3. New theory of space-time and gravitation

    International Nuclear Information System (INIS)

    Denisov, V.I.; Logunov, A.A.

    1982-01-01

    It is shown that the general theory of relativity is not a satisfactory physical theory, since it contains no laws of conservation for the matter and gravitational field taken together and it does not satisfy the principle of correspondence with Newton's theory. In the present paper, we construct a new theory of gravitation which possesses conservation laws, can describe all the existing gravitational experiments, satisfies the correspondence principle, and predicts a number of fundamental consequences.

  4. Boltzmann, Darwin and Directionality theory

    Energy Technology Data Exchange (ETDEWEB)

    Demetrius, Lloyd A., E-mail: ldemetr@oeb.harvard.edu

    2013-09-01

    Boltzmann’s statistical thermodynamics is a mathematical theory which relates the macroscopic properties of aggregates of interacting molecules with the laws of their interaction. The theory is based on the concept thermodynamic entropy, a statistical measure of the extent to which energy is spread throughout macroscopic matter. Macroscopic evolution of material aggregates is quantitatively explained in terms of the principle: Thermodynamic entropy increases as the composition of the aggregate changes under molecular collision. Darwin’s theory of evolution is a qualitative theory of the origin of species and the adaptation of populations to their environment. A central concept in the theory is fitness, a qualitative measure of the capacity of an organism to contribute to the ancestry of future generations. Macroscopic evolution of populations of living organisms can be qualitatively explained in terms of a neo-Darwinian principle: Fitness increases as the composition of the population changes under variation and natural selection. Directionality theory is a quantitative model of the Darwinian argument of evolution by variation and selection. This mathematical theory is based on the concept evolutionary entropy, a statistical measure which describes the rate at which an organism appropriates energy from the environment and reinvests this energy into survivorship and reproduction. According to directionality theory, microevolutionary dynamics, that is evolution by mutation and natural selection, can be quantitatively explained in terms of a directionality principle: Evolutionary entropy increases when the resources are diverse and of constant abundance; but decreases when the resource is singular and of variable abundance. This report reviews the analytical and empirical support for directionality theory, and invokes the microevolutionary dynamics of variation and selection to delineate the principles which govern macroevolutionary dynamics of speciation and

  5. Boltzmann, Darwin and Directionality theory

    International Nuclear Information System (INIS)

    Demetrius, Lloyd A.

    2013-01-01

    Boltzmann’s statistical thermodynamics is a mathematical theory which relates the macroscopic properties of aggregates of interacting molecules with the laws of their interaction. The theory is based on the concept thermodynamic entropy, a statistical measure of the extent to which energy is spread throughout macroscopic matter. Macroscopic evolution of material aggregates is quantitatively explained in terms of the principle: Thermodynamic entropy increases as the composition of the aggregate changes under molecular collision. Darwin’s theory of evolution is a qualitative theory of the origin of species and the adaptation of populations to their environment. A central concept in the theory is fitness, a qualitative measure of the capacity of an organism to contribute to the ancestry of future generations. Macroscopic evolution of populations of living organisms can be qualitatively explained in terms of a neo-Darwinian principle: Fitness increases as the composition of the population changes under variation and natural selection. Directionality theory is a quantitative model of the Darwinian argument of evolution by variation and selection. This mathematical theory is based on the concept evolutionary entropy, a statistical measure which describes the rate at which an organism appropriates energy from the environment and reinvests this energy into survivorship and reproduction. According to directionality theory, microevolutionary dynamics, that is evolution by mutation and natural selection, can be quantitatively explained in terms of a directionality principle: Evolutionary entropy increases when the resources are diverse and of constant abundance; but decreases when the resource is singular and of variable abundance. This report reviews the analytical and empirical support for directionality theory, and invokes the microevolutionary dynamics of variation and selection to delineate the principles which govern macroevolutionary dynamics of speciation and

  6. The Bohr–Einstein “weighing-of-energy” debate and the principle of equivalence

    International Nuclear Information System (INIS)

    Hughes, R.J.

    1990-01-01

    The Bohr–Einstein debate over the “weighing of energy” and the validity of the time–energy uncertainty relation is reexamined in the context of gravitation theories that do not respect the equivalence principle. Bohr’s use of the equivalence principle is shown to be sufficient, but not necessary, to establish the validity of this uncertainty relation in Einstein’s “weighing-of-energy” gedanken experiment. The uncertainty relation is shown to hold in any energy-conserving theory of gravity, and so a failure of the equivalence principle does not engender a failure of quantum mechanics. The relationship between the gravitational redshift and the equivalence principle is reviewed.

  7. General fluid theories, variational principles and self-organization

    International Nuclear Information System (INIS)

    Mahajan, S.M.

    2002-01-01

    This paper reports two distinct but related advances: (1) The development and application of fluid theories that transcend conventional magnetohydrodynamics (MHD), in particular, theories that are valid in the long-mean-free-path limit and in which pressure anisotropy, heat flow, and arbitrarily strong sheared flows are treated consistently. (2) The discovery of new pressure-confining plasma configurations that are self-organized relaxed states. (author)

  8. Applying principles from the game theory to acute stroke care: Learning from the prisoner's dilemma, stag-hunt, and other strategies.

    Science.gov (United States)

    Saposnik, Gustavo; Johnston, S Claiborne

    2016-04-01

    Acute stroke care represents a challenge for decision makers. Decisions based on erroneous assessments may generate false expectations among patients and their family members, and potentially inappropriate medical advice. Game theory is the analysis of interactions between individuals to study how conflict and cooperation affect our decisions. We reviewed principles of game theory that could be applied to medical decisions under uncertainty. Medical decisions in acute stroke care are usually made under constraints: a short period of time, imperfect clinical information, and limited understanding of patients' and families' values and beliefs. Game theory offers some strategies to help us manage complex medical situations under uncertainty. For example, it offers a different perspective by encouraging the consideration of different alternatives through the understanding of patients' preferences and the careful evaluation of cognitive distortions when applying 'real-world' data. The stag-hunt game teaches us the importance of trust in strengthening cooperation for a successful patient-physician interaction that goes beyond a good or poor clinical outcome. The application of game theory to stroke care may improve our understanding of complex medical situations and help clinicians make practical decisions under uncertainty. © 2016 World Stroke Organization.
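
The stag-hunt game mentioned above can be made concrete with a small sketch (payoff numbers are hypothetical, chosen only to exhibit the game's two equilibria):

```python
# Stag hunt: each player hunts the stag (cooperate) or a hare (play safe).
# payoff[a][b] = (row player's payoff, column player's payoff).
STAG, HARE = 0, 1
payoff = [[(4, 4), (0, 3)],   # row hunts stag
          [(3, 0), (3, 3)]]   # row hunts hare

def pure_nash(payoff):
    """All pure-strategy Nash equilibria of a 2x2 game."""
    eqs = []
    for a in (STAG, HARE):
        for b in (STAG, HARE):
            row_best = all(payoff[a][b][0] >= payoff[a2][b][0]
                           for a2 in (STAG, HARE))
            col_best = all(payoff[a][b][1] >= payoff[a][b2][1]
                           for b2 in (STAG, HARE))
            if row_best and col_best:
                eqs.append((a, b))
    return eqs

# Two equilibria: mutual stag hunting (efficient but requiring trust) and
# mutual hare hunting (safe but inferior) — the analogue of physician and
# family either committing jointly to a shared plan or each defaulting to
# the cautious option.
```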

  9. Greatest Happiness Principle in a Complex System: Maximisation versus Driving Force

    Directory of Open Access Journals (Sweden)

    Katalin Martinás

    2012-06-01

    Full Text Available From a philosophical point of view, micro-founded economic theories depart from the principle of the pursuit of the greatest happiness. From a mathematical point of view, micro-founded economic theories depart from the utility maximisation program. Though economists are aware of the serious limitations of equilibrium analysis, they remain in that framework. We show that the maximisation principle, which implies the equilibrium hypothesis, is responsible for this impasse. We formalise the pursuit of the greatest happiness principle with the help of the driving force postulate: the volumes of activities depend on the expected wealth increase. In that case we can get rid of the equilibrium hypothesis and gain new insights into economic theory. For example, to what extent do standard economic results depend on the equilibrium hypothesis?

  10. Conscious and unconscious thought in risky choice: Testing the capacity principle and the appropriate weighting principle of Unconscious Thought Theory

    Directory of Open Access Journals (Sweden)

    Nathaniel James Siebert Ashby

    2011-10-01

    Full Text Available Daily we make decisions ranging from the mundane to the seemingly pivotal that shape our lives. Assuming rationality, all relevant information about one’s options should be thoroughly examined in order to make the best choice. However, some findings suggest that under specific circumstances thinking too much has disadvantageous effects on decision quality and that it might be best to let the unconscious do the busy work. In three studies we test the capacity assumption and the appropriate weighting principle of unconscious thought theory using a classic risky choice paradigm and including a ‘deliberation with information’ condition. Although we replicate an advantage for unconscious thought over ‘deliberation without information’, we find that ‘deliberation with information’ equals or outperforms unconscious thought in risky choices. These results speak against the generality of the assumption that unconscious thought has a higher capacity for information integration and show that this capacity assumption does not hold in all domains. We furthermore show that ‘deliberate thought with information’ leads to more differentiated knowledge compared to unconscious thought which speaks against the generality of the appropriate weighting assumption.

  11. Theoretical preconditions of the realization of principle of domination of essence over form

    Directory of Open Access Journals (Sweden)

    T.S. Osadcha

    2015-06-01

    Full Text Available Organization and accounting should be based on accounting principles, which must ensure qualitative preparation of the financial statements. An important role here is played by the principle of domination of essence over form. The main factors of its development and dissemination are analyzed, and the role of IFRS in spreading the principle of domination of essence over form in the world is determined, along with its fixation at the level of normative regulation of accounting in certain countries, including Ukraine. Theoretical preconditions are defined for identifying and removing the existing obstacles to the application of the principle of domination of essence over form in the practice of management, which will ensure qualitative preparation of the financial statements. It is proposed to use the property rights theory, the theory of rent and the agency theory to solve the problem of the separation between owner and manager.

  12. Differentiation and Integration: Guiding Principles for Analyzing Cognitive Change

    Science.gov (United States)

    Siegler, Robert S.; Chen, Zhe

    2008-01-01

    Differentiation and integration played large roles within classic developmental theories but have been relegated to obscurity within contemporary theories. However, they may have a useful role to play in modern theories as well, if conceptualized as guiding principles for analyzing change rather than as real-time mechanisms. In the present study,…

  13. Two theorems on flat space-time gravitational theories

    International Nuclear Information System (INIS)

    Castagnino, M.; Chimento, L.

    1980-01-01

    The first theorem states that all flat space-time gravitational theories must have a Lagrangian whose first term is a homogeneous (degree-1) function of the 4-velocity u^i, plus a functional of n_{ij}u^i u^j. The second theorem states that all gravitational theories that satisfy the strong equivalence principle have a Lagrangian with a first term g_{ij}(x)u^i u^j plus an irrelevant term. In both cases the theories must issue from a unique variational principle. Therefore, under this condition it is impossible to find a flat space-time theory that satisfies the strong equivalence principle. (author)

  14. Bayesian or Laplacian inference, entropy and information theory and information geometry in data and signal processing

    Science.gov (United States)

    Mohammad-Djafari, Ali

    2015-01-01

    The main object of this tutorial article is first to review the main inference tools of the Bayesian approach, entropy, information theory and their corresponding geometries. This review is focused mainly on the ways these tools have been used in data, signal and image processing. After a short introduction of the different quantities related to the Bayes rule, the entropy and the Maximum Entropy Principle (MEP), relative entropy and the Kullback-Leibler divergence, and Fisher information, we study their use in different fields of data and signal processing, such as entropy in source separation, Fisher information in model order selection, different Maximum Entropy based methods in time series spectral estimation and, finally, general linear inverse problems.
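The Maximum Entropy Principle reviewed above can be made concrete with a small numerical sketch. Under the simplest setting of a finite support and a single mean constraint (Jaynes' dice setup; the function name and bisection bounds are our own illustrative choices, not from the article), the MEP solution is an exponential family whose Lagrange multiplier can be found by bisection:

```python
import numpy as np

def max_entropy_dist(values, target_mean, lo=-50.0, hi=50.0, iters=200):
    """Maximum-entropy distribution on a finite support subject to a
    mean constraint: p_i proportional to exp(-lam * x_i), with the
    Lagrange multiplier lam found by bisection so that sum_i p_i * x_i
    hits the target mean (MEP in its simplest single-constraint form)."""
    x = np.asarray(values, dtype=float)

    def solve(lam):
        w = np.exp(-lam * (x - x.mean()))  # shifted for numerical stability
        p = w / w.sum()
        return p @ x, p

    p = None
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        m, p = solve(mid)
        if m > target_mean:   # the moment decreases as lam grows
            lo = mid
        else:
            hi = mid
    return p

# Jaynes' dice: faces 1..6 with observed mean 4.5 (uniform would give 3.5);
# the MEP answer tilts probability toward the high faces.
p = max_entropy_dist(np.arange(1, 7), 4.5)
```

Any additional linear constraint adds one more multiplier; the same bisection idea then generalizes to a low-dimensional root-finding problem.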

  15. Maximum Entropy Methods as the Bridge Between Microscopic and Macroscopic Theory

    Science.gov (United States)

    Taylor, Jamie M.

    2016-09-01

    This paper is concerned with an investigation into a function of macroscopic variables known as the singular potential, building on previous work by Ball and Majumdar. The singular potential is a function of the admissible statistical averages of probability distributions on a state space, defined so that it corresponds to the maximum possible entropy given known observed statistical averages, although non-classical entropy-like objective functions will also be considered. First the set of admissible moments must be established, and under the conditions presented in this work the set is open, bounded and convex allowing a description in terms of supporting hyperplanes, which provides estimates on the development of singularities for related probability distributions. Under appropriate conditions it is shown that the singular potential is strictly convex, as differentiable as the microscopic entropy, and blows up uniformly as the macroscopic variable tends to the boundary of the set of admissible moments. Applications of the singular potential are then discussed, and particular consideration will be given to certain free-energy functionals typical in mean-field theory, demonstrating an equivalence between certain microscopic and macroscopic free-energy functionals. This allows statements about L^1-local minimisers of Onsager's free energy to be obtained which cannot be given by two-sided variations, and overcomes the need to ensure local minimisers are bounded away from zero and +∞ before taking L^∞ variations. The analysis also permits the definition of a dual order parameter for which Onsager's free energy allows an explicit representation. Also, the difficulties in approximating the singular potential by everywhere defined functions, in particular by polynomial functions, are addressed, with examples demonstrating the failure of the Taylor approximation to preserve relevant shape properties of the singular potential.

  16. Probing Students' Ideas of the Principle of Equivalence

    Science.gov (United States)

    Bandyopadhyay, Atanu; Kumar, Arvind

    2011-01-01

    The principle of equivalence was the first vital clue to Einstein in his extension of special relativity to general relativity, the modern theory of gravitation. In this paper we investigate in some detail students' understanding of this principle in a variety of contexts, when they are undergoing an introductory course on general relativity. The…

  17. Boltzmann, Darwin and Directionality theory

    Science.gov (United States)

    Demetrius, Lloyd A.

    2013-09-01

    Boltzmann’s statistical thermodynamics is a mathematical theory which relates the macroscopic properties of aggregates of interacting molecules to the laws of their interaction. The theory is based on the concept of thermodynamic entropy, a statistical measure of the extent to which energy is spread throughout macroscopic matter. Macroscopic evolution of material aggregates is quantitatively explained in terms of the principle: Thermodynamic entropy increases as the composition of the aggregate changes under molecular collision. Darwin’s theory of evolution is a qualitative theory of the origin of species and the adaptation of populations to their environment. A central concept in the theory is fitness, a qualitative measure of the capacity of an organism to contribute to the ancestry of future generations. Macroscopic evolution of populations of living organisms can be qualitatively explained in terms of a neo-Darwinian principle: Fitness increases as the composition of the population changes under variation and natural selection. Directionality theory is a quantitative model of the Darwinian argument of evolution by variation and selection. This mathematical theory is based on the concept of evolutionary entropy, a statistical measure which describes the rate at which an organism appropriates energy from the environment and reinvests this energy into survivorship and reproduction. According to directionality theory, microevolutionary dynamics, that is, evolution by mutation and natural selection, can be quantitatively explained in terms of a directionality principle: Evolutionary entropy increases when the resources are diverse and of constant abundance, but decreases when the resource is singular and of variable abundance. This report reviews the analytical and empirical support for directionality theory, and invokes the microevolutionary dynamics of variation and selection to delineate the principles which govern macroevolutionary dynamics of speciation and

  18. Cooling towers principles and practice

    CERN Document Server

    Hill, G B; Osborn, Peter D

    1990-01-01

    Cooling Towers: Principles and Practice, Third Edition, aims to provide the reader with a better understanding of the theory and practice, so that installations are correctly designed and operated. As with all branches of engineering, new technology calls for a level of technical knowledge which becomes progressively higher; this new edition seeks to ensure that the principles and practice of cooling towers are set against a background of up-to-date technology. The book is organized into three sections. Section A on cooling tower practice covers topics such as the design and operation of c

  19. Extended Thermodynamics of Rarefied Polyatomic Gases: 15-Field Theory Incorporating Relaxation Processes of Molecular Rotation and Vibration

    Directory of Open Access Journals (Sweden)

    Takashi Arima

    2018-04-01

    Full Text Available After summarizing the present status of Rational Extended Thermodynamics (RET) of gases, which is an endeavor to generalize the Navier–Stokes and Fourier (NSF) theory of viscous heat-conducting fluids, we develop the molecular RET theory of rarefied polyatomic gases with 15 independent fields. The theory is justified, at the mesoscopic level, by a generalized Boltzmann equation in which the distribution function depends on two internal variables that take into account the energy exchange among the different molecular modes of a gas, that is, the translational, rotational, and vibrational modes. By adopting a generalized Bhatnagar–Gross–Krook (BGK)-type collision term, we derive explicitly the closed system of field equations with the use of the Maximum Entropy Principle (MEP). The NSF theory is derived from the RET theory as a limiting case of small relaxation times via the Maxwellian iteration. The relaxation times introduced in the theory are shown to be related to the shear and bulk viscosities and the heat conductivity.

  20. Dark matter and the equivalence principle

    Science.gov (United States)

    Frieman, Joshua A.; Gradwohl, Ben-Ami

    1993-01-01

    A survey is presented of the current understanding of dark matter invoked by astrophysical theory and cosmology. Einstein's equivalence principle asserts that local measurements cannot distinguish a system at rest in a gravitational field from one that is in uniform acceleration in empty space. Recent test-methods for the equivalence principle are presently discussed as bases for testing of dark matter scenarios involving the long-range forces between either baryonic or nonbaryonic dark matter and ordinary matter.

  1. STUDY ON MAXIMUM SPECIFIC SLUDGE ACTIVITY OF DIFFERENT ANAEROBIC GRANULAR SLUDGE BY BATCH TESTS

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    The maximum specific sludge activity of granular sludge from large-scale UASB, IC and Biobed anaerobic reactors was investigated by batch tests. The limiting factors related to maximum specific sludge activity (diffusion, substrate type, substrate concentration and granule size) were studied. A general principle and procedure for the precise measurement of maximum specific sludge activity are suggested. The potential loading-rate capacities of the IC and Biobed anaerobic reactors were analyzed and compared using the batch test results.

  2. Towards a quantum theory without 'quantization'

    International Nuclear Information System (INIS)

    Deutsch, D.; Texas Univ., Austin

    1984-01-01

    The paper argues the case for a quantum formalism without reference to classical theory, in order to make progress with quantum theory. Quantum theory without classical theory, some elaboration of the pure quantum theory, and perturbation theory and the correspondence principle are all discussed. (U.K.)

  3. Test of the Equivalence Principle in the Dark sector on galactic scales

    International Nuclear Information System (INIS)

    Mohapi, N.; Hees, A.; Larena, J.

    2016-01-01

    The Einstein Equivalence Principle is a fundamental principle of the theory of General Relativity. While this principle has been thoroughly tested with standard matter, the question of its validity in the Dark sector remains open. In this paper, we consider a general tensor-scalar theory that allows us to test the equivalence principle in the Dark sector by introducing two different conformal couplings to standard matter and to Dark matter. We constrain these couplings by considering galactic observations of strong lensing and of velocity dispersion. Our analysis shows that, in the case of a violation of the Einstein Equivalence Principle, the data favour violations through coupling strengths that are of opposite signs for ordinary and Dark matter. At the same time, our analysis does not show any significant deviations from General Relativity.

  4. An energy principle for two-dimensional collisionless relativistic plasmas

    International Nuclear Information System (INIS)

    Otto, A.; Schindler, K.

    1984-01-01

    Using relativistic Vlasov theory an energy principle for two-dimensional plasmas is derived, which provides a sufficient and necessary criterion for the stability of relativistic plasma equilibria. This energy principle includes charge separating effects since the exact Poisson equation was taken into consideration. Applying the variational principle to the case of the relativistic plane plasma sheet, the same marginal wave length is found as in the non-relativistic case. (author)

  5. Introduction to the theory of relativity

    CERN Document Server

    Bergmann, Peter Gabriel

    1976-01-01

    Comprehensive coverage of special theory (frames of reference, Lorentz transformation, more), general theory (principle of equivalence, more) and unified theory (Weyl's gauge-invariant geometry, more.) Foreword by Albert Einstein.

  6. RENEWAL OF BASIC LAWS AND PRINCIPLES FOR POLAR CONTINUUM THEORIES (Ⅱ)-MICROMORPHIC CONTINUUM THEORY AND COUPLE STRESS THEORY

    Institute of Scientific and Technical Information of China (English)

    戴天民

    2003-01-01

    The purpose is to reestablish the balance laws of momentum, angular momentum and energy and to derive the corresponding local and nonlocal balance equations for micromorphic continuum mechanics and couple stress theory. The desired results for micromorphic continuum mechanics and couple stress theory are naturally obtained via direct transitions and reductions from the coupled conservation law of energy for micropolar continuum theory, respectively. The basic balance laws and equations for micromorphic continuum mechanics and couple stress theory are constituted by combining the results derived here with the traditional conservation laws and equations of mass and microinertia and the entropy inequality. The incompleteness of the former related continuum theories is clarified. Finally, some special cases are conveniently derived.

  7. Hidden crossing theory of charge exchange in H+ + He+(1 s) collisions in vicinity of maximum of cross section

    Science.gov (United States)

    Grozdanov, Tasko P.; Solov'ev, Evgeni A.

    2018-04-01

    Within the framework of the dynamical adiabatic approach, the hidden crossing theory of inelastic transitions is applied to charge exchange in H+ + He+(1s) collisions over a wide range of center-of-mass collision energies, E_cm = (1.6-70) keV. Good agreement with experiment and with molecular close-coupling calculations is obtained. At low energies our 4-state results are closest to experiment and correctly reproduce the shoulder in the energy dependence of the cross section around E_cm = 6 keV. The 2-state results correctly predict the position of the maximum of the cross section at E_cm ≈ 40 keV, whereas the 4-state results fail to correctly describe the region around the maximum. The reason is that the adiabatic approximation for a given two-state hidden crossing is applicable for values of the Stueckelberg parameter greater than 1, but with increasing principal quantum number N the Stueckelberg parameter decreases as N^-3. That is why the 4-state approach, which involves higher excited states, fails at smaller collision energies, E_cm ≈ 15 keV, while the 2-state approximation, which involves low-lying states, can be extended to higher collision energies.

  8. Molecular Theory of Detonation Initiation: Insight from First Principles Modeling of the Decomposition Mechanisms of Organic Nitro Energetic Materials.

    Science.gov (United States)

    Tsyshevsky, Roman V; Sharia, Onise; Kuklja, Maija M

    2016-02-19

    This review presents a concept, which assumes that thermal decomposition processes play a major role in defining the sensitivity of organic energetic materials to detonation initiation. As a science and engineering community we are still far away from having a comprehensive molecular detonation initiation theory in a widely agreed upon form. However, recent advances in experimental and theoretical methods allow for a constructive and rigorous approach to design and test the theory or at least some of its fundamental building blocks. In this review, we analyzed a set of select experimental and theoretical articles, which were augmented by our own first principles modeling and simulations, to reveal new trends in energetic materials and to refine known existing correlations between their structures, properties, and functions. Our consideration is intentionally limited to the processes of thermally stimulated chemical reactions at the earliest stage of decomposition of molecules and materials containing defects.

  9. Science 101: What, Exactly, Is the Heisenberg Uncertainty Principle?

    Science.gov (United States)

    Robertson, Bill

    2016-01-01

    Bill Robertson is the author of the NSTA Press book series, "Stop Faking It! Finally Understanding Science So You Can Teach It." In this month's issue, Robertson describes and explains the Heisenberg Uncertainty Principle. The Heisenberg Uncertainty Principle was discussed on "The Big Bang Theory," the lead character in…

  10. Conventional Principles in Science: On the foundations and development of the relativized a priori

    Science.gov (United States)

    Ivanova, Milena; Farr, Matt

    2015-11-01

    The present volume consists of a collection of papers originally presented at the conference Conventional Principles in Science, held at the University of Bristol, August 2011, which featured contributions on the history and contemporary development of the notion of 'relativized a priori' principles in science, from Henri Poincaré's conventionalism to Michael Friedman's contemporary defence of the relativized a priori. In Science and Hypothesis, Poincaré assessed the problematic epistemic status of Euclidean geometry and Newton's laws of motion, famously arguing that each has the status of 'convention' in that their justification is neither analytic nor empirical in nature. In The Theory of Relativity and A Priori Knowledge, Hans Reichenbach, in light of the general theory of relativity, proposed an updated notion of the Kantian synthetic a priori to account for the dynamic inter-theoretic status of geometry and other non-empirical physical principles. Reichenbach noted that one may reject the 'necessarily true' aspect of the synthetic a priori whilst preserving the feature of being constitutive of the object of knowledge. Such constitutive principles are theory-relative, as illustrated by the privileged role of non-Euclidean geometry in general relativity theory. This idea of relativized a priori principles in spacetime physics has been analysed and developed at great length in the modern literature in the work of Michael Friedman, in particular the roles played by the light postulate and the equivalence principle - in special and general relativity respectively - in defining the central terms of their respective theories and connecting the abstract mathematical formalism of the theories with their empirical content. The papers in this volume guide the reader through the historical development of conventional and constitutive principles in science, from the foundational work of Poincaré, Reichenbach and others, to contemporary issues and applications of the

  11. The Principles of Readability

    Science.gov (United States)

    DuBay, William H.

    2004-01-01

    The principles of readability are in every style manual. Readability formulas are in every writing aid. What is missing is the research and theory on which they stand. This short review of readability research spans 100 years. The first part covers the history of adult literacy studies in the U.S., establishing the stratified nature of the adult…

  12. The gravitational exclusion principle and null states in anti-de Sitter space

    International Nuclear Information System (INIS)

    Castro, Alejandra; Maloney, Alexander; Hartman, Thomas

    2011-01-01

    The holographic principle implies a vast reduction in the number of degrees of freedom of quantum gravity. This idea can be made precise in AdS_3, where the stringy or gravitational exclusion principle asserts that certain perturbative excitations are not present in the exact quantum spectrum. We show that this effect is visible directly in the bulk gravity theory: the norm of the offending linearized state is zero or negative. When the norm is negative, the theory is signalling its own breakdown as an effective field theory; this provides a perturbative bulk explanation for the stringy exclusion principle. When the norm vanishes the bulk state is null rather than physical. This implies that certain non-trivial diffeomorphisms must be regarded as gauge symmetries rather than spectrum-generating elements of the asymptotic symmetry group. This leads to subtle effects in the computation of one-loop determinants for Einstein gravity, higher spin theories and topologically massive gravity in AdS_3. In particular, heat kernel methods do not capture the correct spectrum of a theory with null states. Communicated by S Ross

  13. Quantum theory and statistical thermodynamics principles and worked examples

    CERN Document Server

    Hertel, Peter

    2017-01-01

    This textbook presents a concise yet detailed introduction to quantum physics. Concise, because it condenses the essentials to a few principles. Detailed, because these few principles –  necessarily rather abstract – are illustrated by several telling examples. A fairly complete overview of the conventional quantum mechanics curriculum is the primary focus, but the huge field of statistical thermodynamics is covered as well. The text explains why a few key discoveries shattered the prevailing broadly accepted classical view of physics. First, matter appears to consist of particles which, when propagating, resemble waves. Consequently, some observable properties cannot be measured simultaneously with arbitrary precision. Second, events with single particles are not determined, but are more or less probable. The essence of this is that the observable properties of a physical system are to be represented by non-commuting mathematical objects instead of real numbers.  Chapters on exceptionally simple, but h...

  14. The precautionary principle and EMF: from the theory to the practice

    International Nuclear Information System (INIS)

    Lambrozo, J.

    2002-01-01

    In 1992 the United Nations Declaration on the Environment stated that where there are threats of serious or irreversible damage, lack of full scientific certainty will not be used as a reason for postponing cost-effective measures to prevent environmental degradation. Since then this interpretation has been reaffirmed in numerous framework conventions, and national environmental law in a number of countries has begun to incorporate it. The contents of the precautionary principle: there are in fact two completely different ideas about the principle. The absolute idea holds that the precautionary principle should guarantee complete harmlessness; the aim is zero risk, and even a minimal suspicion of risk should result in a moratorium or a definitive ban. The moderate idea makes implementation subject to a scientifically credible statement of a hypothetical risk; it also gives priority to positive measures, particularly research, to provide a better assessment of the risk. In every case, before any decision a statement of costs and advantages should be drawn up. The concept of prudent avoidance, introduced in 1989 by G. Morgan and adopted by some states (Sweden, Australia), seems to be a specific application of the precautionary principle to EMF that takes into account the cost of the policy. The EMF research: after more than 20 years of research (residential and occupational epidemiological studies, in vitro studies and laboratory animal studies), the scientific uncertainty has been considerably reduced, but the possibility of some adverse effects remains. This fact, and the public concern about EMF (partly explained by the ubiquity of exposure), explains the temptation to apply the precautionary principle to the EMF issue.

  15. Le Chatelier's principle in replicator dynamics

    Science.gov (United States)

    Allahverdyan, Armen E.; Galstyan, Aram

    2011-10-01

    The Le Chatelier principle states that physical equilibria are not only stable, but they also resist external perturbations via short-time negative-feedback mechanisms: a perturbation induces processes tending to diminish its results. The principle has deep roots, e.g., in thermodynamics it is closely related to the second law and the positivity of the entropy production. Here we study the applicability of the Le Chatelier principle to evolutionary game theory, i.e., to perturbations of a Nash equilibrium within the replicator dynamics. We show that the principle can be reformulated as a majorization relation. This defines a stability notion that generalizes the concept of evolutionary stability. We determine criteria for a Nash equilibrium to satisfy the Le Chatelier principle and relate them to mutualistic interactions (game-theoretical anticoordination) showing in which sense mutualistic replicators can be more stable than (say) competing ones. There are globally stable Nash equilibria, where the Le Chatelier principle is violated even locally: in contrast to the thermodynamic equilibrium a Nash equilibrium can amplify small perturbations, though both types of equilibria satisfy the detailed balance condition.
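The perturbation-and-relaxation behaviour described above can be illustrated numerically. In a minimal sketch (the payoff matrix, step size and step count are our own illustrative choices), the replicator dynamics for a two-strategy anticoordination game pulls a perturbed population back to the interior Nash equilibrium:

```python
import numpy as np

def replicator_step(x, A, dt=0.01):
    """One explicit Euler step of the replicator dynamics
    dx_i/dt = x_i * ((A x)_i - x . A x)."""
    f = A @ x          # fitness of each pure strategy
    phi = x @ f        # population mean fitness
    return x + dt * x * (f - phi)

# Illustrative anticoordination (hawk-dove-like) payoff matrix:
# off-diagonal payoffs exceed diagonal ones, so the interior Nash
# equilibrium x* = (2/3, 1/3), where (A x)_1 = (A x)_2, is stable.
A = np.array([[0.0, 2.0],
              [1.0, 0.0]])

x = np.array([0.5, 0.5])   # a perturbed population state
for _ in range(5000):      # relax back toward the equilibrium
    x = replicator_step(x, A)
```

Note that the Euler step preserves the simplex exactly (the increments sum to zero), so the state stays a probability vector while it relaxes; a coordination game (diagonal payoffs dominant) would instead amplify the same perturbation, which is the violation of the Le Chatelier principle discussed in the record.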

  16. A Bayes-Maximum Entropy method for multi-sensor data fusion

    Energy Technology Data Exchange (ETDEWEB)

    Beckerman, M.

    1991-01-01

    In this paper we introduce a Bayes-Maximum Entropy formalism for multi-sensor data fusion, and present an application of this methodology to the fusion of ultrasound and visual sensor data as acquired by a mobile robot. In our approach the principle of maximum entropy is applied to the construction of priors and likelihoods from the data. Distances between ultrasound and visual points of interest in a dual representation are used to define Gibbs likelihood distributions. Both one- and two-dimensional likelihoods are presented, and cast into a form which makes explicit their dependence upon the mean. The Bayesian posterior distributions are used to test a null hypothesis, and Maximum Entropy Maps used for navigation are updated using the resulting information from the dual representation. 14 refs., 9 figs.

  17. Optimal decisions principles of programming

    CERN Document Server

    Lange, Oskar

    1971-01-01

    Optimal Decisions: Principles of Programming deals with all important problems related to programming. This book provides a general interpretation of the theory of programming based on the application of Lagrange multipliers, followed by a presentation of marginal and linear programming as special cases of this general theory. The praxeological interpretation of the method of Lagrange multipliers is also discussed. This text covers Koopmans' model of transportation, the geometric interpretation of the programming problem, and the nature of activity analysis. The solution of t
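The Lagrange-multiplier machinery on which this treatment of programming rests can be shown in a few lines. A hedged sketch (the Cobb-Douglas-style objective and the numbers are our own illustrative assumptions, not an example from Lange's book): maximize U(x, y) = x*y subject to a budget constraint, where the multiplier has the usual shadow-price reading.

```python
def lagrange_optimum(m, px, py):
    """Maximize U(x, y) = x * y subject to the budget px*x + py*y = m.
    The first-order conditions grad U = lam * grad g give
    y = lam * px and x = lam * py, hence px*x = py*y = m / 2
    (equal spending on both goods); lam is the marginal utility
    of one extra unit of budget (the shadow price)."""
    x = m / (2.0 * px)
    y = m / (2.0 * py)
    lam = y / px
    return x, y, lam

# Illustrative numbers: budget 100, prices 2 and 5
x, y, lam = lagrange_optimum(m=100.0, px=2.0, py=5.0)
```

With these numbers the optimum is x = 25, y = 10, and the multiplier lam = 5 says the maximal utility rises by about 5 per extra unit of budget.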

  18. Generalized information theory: aims, results, and open problems

    International Nuclear Information System (INIS)

    Klir, George J.

    2004-01-01

    The principal purpose of this paper is to present a comprehensive overview of generalized information theory (GIT): a research program whose objective is to develop a broad treatment of uncertainty-based information, not restricted to classical notions of uncertainty. After a brief overview of classical information theories, a broad framework for formalizing uncertainty and the associated uncertainty-based information of a great spectrum of conceivable types is sketched. The various theories of imprecise probabilities that have already been developed within this framework are then surveyed, focusing primarily on some important unifying principles applying to all these theories. This is followed by introducing two higher levels of the theories of imprecise probabilities: (i) the level of measuring the amount of relevant uncertainty (predictive, retrodictive, prescriptive, diagnostic, etc.) in any situation formalizable in each given theory, and (ii) the level of some methodological principles of uncertainty, which are contingent upon the capability to measure uncertainty and the associated uncertainty-based information. Various issues regarding both the measurement of uncertainty and the uncertainty principles are discussed. Again, the focus is on unifying principles applicable to all the theories. Finally, the current status of GIT is assessed and future research in the area is discussed

  19. The Lorentz Theory of Electrons and Einstein's Theory of Relativity

    Science.gov (United States)

    Goldberg, Stanley

    1969-01-01

    Traces the development of Lorentz's theory of electrons as applied to the problem of the electrodynamics of moving bodies. Presents evidence that the principle of relativity did not play an important role in Lorentz's theory, and that though Lorentz eventually acknowledged Einstein's work, he was unwilling to completely embrace the Einstein…

  20. Thermoelectric cooler concepts and the limit for maximum cooling

    International Nuclear Information System (INIS)

    Seifert, W; Hinsche, N F; Pluschke, V

    2014-01-01

    The conventional analysis of a Peltier cooler approximates the material properties as independent of temperature using a constant properties model (CPM). Alternative concepts have been published by Bian and Shakouri (2006 Appl. Phys. Lett. 89 212101), Bian et al (2007 Phys. Rev. B 75 245208) and Snyder et al (2012 Phys. Rev. B 86 045202). While Snyder's Thomson cooler concept results from a consideration of compatibility, the method of Bian et al focuses on the redistribution of heat; thus the two approaches are based on different principles. In this paper we compare the new concepts to the CPM and reconsider the limit for maximum cooling. The results provide a new perspective on maximum cooling. (paper)
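For orientation, the CPM baseline against which the newer concepts are compared has a classical closed form. Assuming temperature-independent properties (the standard textbook result, stated here for context rather than taken from the paper), the maximum temperature difference of a single-stage Peltier cooler is

```latex
\Delta T_{\max} \;=\; \tfrac{1}{2}\, Z\, T_c^{2},
\qquad
I_{\mathrm{opt}} \;=\; \frac{\alpha\, T_c}{R},
```

where Z is the thermoelectric figure of merit, T_c the cold-side temperature, α the Seebeck coefficient and R the electrical resistance of the couple; the alternative concepts above aim to push past this CPM limit.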

  1. Application of maximum radiation exposure values and monitoring of radiation exposure

    International Nuclear Information System (INIS)

    1996-01-01

    The guide presents the principles to be applied in calculating the equivalent dose and the effective dose, instructions on application of the maximum values for radiation exposure, and instruction on monitoring of radiation exposure. In addition, the measurable quantities to be used in monitoring the radiation exposure are presented. (2 refs.)

  2. Economic Modelling in Institutional Economic Theory

    Directory of Open Access Journals (Sweden)

    Wadim Strielkowski

    2017-06-01

    Full Text Available Our paper is centered on the formation of a theory of institutional modelling that includes principles and ideas reflecting the laws of societal development within the framework of institutional economic theory. We scrutinize and discuss the scientific principles of institutional modelling that are increasingly postulated by the classics of institutional theory and find their way into the basics of institutional economics. We propose scientific ideas concerning new innovative approaches to institutional modelling. These ideas have been devised and developed on the basis of our own original research, as well as on the formalisation and measurement of economic institutions, their functioning and evolution. Moreover, we consider the applied aspects of the institutional theory of modelling and employ them in our research to formalize our results and maximise the practical outcome of our paper. Our results and findings might be useful for researchers and stakeholders searching for a systematic and comprehensive description of institution-level modelling, the principles involved in this process and the main provisions of the institutional theory of economic modelling.

  3. PRINCIPLES OF CONTENT FORMATION EDUCATIONAL ELECTRONIC RESOURCE

    Directory of Open Access Journals (Sweden)

    О Ю Заславская

    2017-12-01

    Full Text Available The article considers modern possibilities of information and communication technologies for the design of electronic educational resources. The conceptual basis of the open educational multimedia system rests on the modular architecture of the electronic educational resource. The content of an electronic training module can be implemented in several types of modules: obtaining information, practical exercises, and control. The regularities of the teaching process in modern pedagogical theory, both general and specific, are considered, and the principles for forming the content of instruction at different levels are defined on the basis of these regularities. On the basis of this analysis, the principles of forming the electronic educational resource are determined, taking into account the general and didactic patterns of teaching. As principles for forming the educational material of the information module, the article considers: the principle of methodological orientation, the principle of general scientific orientation, the principle of systemic nature, the principle of fundamentalization, the principle of accounting for intersubject communications, and the principle of minimization. The principles for forming the practical-exercise module include: the principle of systematic and dosed consistency, the principle of rational use of study time, and the principle of accessibility. The principles for forming the control module can be: the principle of operationalization of goals and the principle of unified identification diagnostics.

  4. The constitutional principles between deontology and axiology: theoretical assumptions towards a democratic hermeneutic theory

    Directory of Open Access Journals (Sweden)

    Fábio Portela Lopes de Almeida

    2008-12-01

    Full Text Available The article discusses the nature of constitutional principles by opposing two distinct hermeneutic theories: axiology and deontology. The theory of principles proposed by Robert Alexy is taken as an ideal example of an axiological theory, and is criticized for being unable to deal democratically with the fact of pluralism, i.e., the fact that contemporary societies are not structured around ethical values shared by all citizens. As an alternative to the axiological model, I suggest, based on a particular reading of the theories of John Rawls, Ronald Dworkin and Jürgen Habermas, that the adoption of a deontological perspective, which assumes a strict distinction between principles and values, overcomes the difficulties of the axiological theory. By taking as its central premise the possibility of legitimating law through principles justified by criteria acceptable to all citizens, a deontological theory of principles becomes capable of dealing with the plurality of conceptions of the good present in contemporary societies. In this sense, the article belongs to the field of constitutional theory.

  5. Optimizing Computer Assisted Instruction By Applying Principles of Learning Theory.

    Science.gov (United States)

    Edwards, Thomas O.

    The development of learning theory and its application to computer-assisted instruction (CAI) are described. Among the early theoretical constructs thought to be important are E. L. Thorndike's concept of learning connectionism, Neal Miller's theory of motivation, and B. F. Skinner's theory of operant conditioning. Early devices incorporating those…

  6. First-principles structures for the close-packed and the 7/2 motif of collagen

    DEFF Research Database (Denmark)

    Jalkanen, Karl J.; Olsen, Kasper; Knapp-Mohammady, Michaela

    2012-01-01

    The newly proposed close-packed motif for collagen and the more established 7/2 structure are investigated and compared. First-principles semi-empirical wave function theory and Kohn-Sham density functional theory are applied in the study of these relatively large and complex structures.

  7. Phase equilibria basic principles, applications, experimental techniques

    CERN Document Server

    Reisman, Arnold

    2013-01-01

    Phase Equilibria: Basic Principles, Applications, Experimental Techniques presents an analytical treatment in the study of the theories and principles of phase equilibria. The book is organized to afford a deep and thorough understanding of such subjects as the method of species model systems; condensed phase-vapor phase equilibria and vapor transport reactions; zone refining techniques; and nonstoichiometry. Physicists, physical chemists, engineers, and materials scientists will find the book a good reference material.

  8. Molecular Theory of Detonation Initiation: Insight from First Principles Modeling of the Decomposition Mechanisms of Organic Nitro Energetic Materials

    Directory of Open Access Journals (Sweden)

    Roman V. Tsyshevsky

    2016-02-01

    Full Text Available This review presents a concept, which assumes that thermal decomposition processes play a major role in defining the sensitivity of organic energetic materials to detonation initiation. As a science and engineering community we are still far away from having a comprehensive molecular detonation initiation theory in a widely agreed upon form. However, recent advances in experimental and theoretical methods allow for a constructive and rigorous approach to design and test the theory or at least some of its fundamental building blocks. In this review, we analyzed a set of select experimental and theoretical articles, which were augmented by our own first principles modeling and simulations, to reveal new trends in energetic materials and to refine known existing correlations between their structures, properties, and functions. Our consideration is intentionally limited to the processes of thermally stimulated chemical reactions at the earliest stage of decomposition of molecules and materials containing defects.

  9. Theory of group extension, Shubnikov-Curie principle and phase transformations

    International Nuclear Information System (INIS)

    Koptsik, V.A.; Talis, A.L.

    1983-01-01

    It is shown that the generalized Curie principle (GCP) is the principle of non-decreasing abstract symmetry under structural transformations in (quasi-)isolated physical systems. Asymmetry of such a system at one structural level is compensated by symmetrization at another, by the transformation of old symmetries and by the appearance of qualitatively new ones. A corresponding situation also holds at the level of description (mathematical modelling) of physical systems. The structural levels of the organization of matter, and the forms of connection between them reflected by the Shubnikov-Curie principle (SCP) and the GCP, are inexhaustible. As new structural levels and new forms of relations between them are discovered, new forms of the SCP may be discovered as well; these cannot be exhausted in the given work

  10. Principles of Bridge Reliability

    DEFF Research Database (Denmark)

    Thoft-Christensen, Palle; Nowak, Andrzej S.

    The paper gives a brief introduction to the basic principles of structural reliability theory and its application to bridge engineering. Fundamental concepts like failure probability and reliability index are introduced. Ultimate as well as serviceability limit states for bridges are formulated, and as an example the reliability profile and a sensitivity analysis for a corroded reinforced concrete bridge are shown.

  11. A Weak Comparison Principle for Reaction-Diffusion Systems

    Directory of Open Access Journals (Sweden)

    José Valero

    2012-01-01

    Full Text Available We prove a weak comparison principle for a reaction-diffusion system without uniqueness of solutions. We apply the abstract results to the Lotka-Volterra system with diffusion, a generalized logistic equation, and to a model of fractional-order chemical autocatalysis with decay. Moreover, in the case of the Lotka-Volterra system a weak maximum principle is given, and a suitable estimate in the space of essentially bounded functions L∞ is proved for at least one solution of the problem.
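
    In its simplest scalar form (a generic textbook statement given here for orientation; the paper proves a system version without uniqueness of solutions), a weak comparison principle reads:

    ```latex
    \left.
    \begin{aligned}
    u_t - \Delta u &\le f(t,x,u)\\
    v_t - \Delta v &\ge f(t,x,v)
    \end{aligned}
    \right\}
    \ \text{in } \Omega\times(0,T),
    \qquad
    u \le v \ \text{on the parabolic boundary}
    \ \Longrightarrow\
    u \le v \ \text{in } \Omega\times(0,T),
    ```

    for f Lipschitz in its last argument; the subsolution u stays below the supersolution v.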

  12. Greatest Happiness Principle in a Complex System Approach

    Directory of Open Access Journals (Sweden)

    Katalin Martinás

    2012-06-01

    Full Text Available The principle of greatest happiness was the basis of ethics in Plato's and Aristotle's work; it served as the basis of the utility principle in economics, and happiness research has recently become a hot topic in the social sciences in Western countries, in particular in economics. Nevertheless, there is considerable scientific pessimism over whether it is even possible to effect sustainable increases in happiness. In this paper we outline an economic theory of decision based on the greatest happiness principle (GHP). Modern equilibrium economics is a simple-system simplification of the GHP; the complex approach outlines a non-equilibrium economic theory. The comparison of the approaches reveals that part of the results, the laws of modern economics, follow from the simplifications and run against economic nature. The most important consequence is that within the free market economy one cannot be sure that the path found by it leads to a beneficial economic system.

  13. Maximum heat flux in boiling in a large volume

    International Nuclear Information System (INIS)

    Bergmans, Dzh.

    1976-01-01

    Relationships are derived for the maximum heat flux q_max without relying on the assumptions of a critical vapor velocity corresponding to zero growth rate or of a planar interface. To this end, a Helmholtz instability analysis of the vapor column has been made. The results of this examination have been used to find the maximum heat flux for spherical, cylindrical and flat-plate heaters. The conventional hydrodynamic theory was found to be incapable of producing a satisfactory explanation of q_max for small heaters. The occurrence of q_max in this case can be explained by inadequate removal of the vapor produced at the heater (by the force of gravity for cylindrical heaters and by surface tension for spherical ones). In the case of a flat-plate heater the q_max value can be explained with the help of the hydrodynamic theory
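
    For reference, the conventional hydrodynamic theory invoked above predicts, for a large horizontal flat plate, Zuber's classical result for the maximum (critical) heat flux:

    ```latex
    q_{\max} \;=\; C\, h_{fg}\, \rho_v^{1/2}\,
    \bigl[\sigma\, g\,(\rho_l - \rho_v)\bigr]^{1/4},
    \qquad C \approx 0.131,
    ```

    where h_fg is the latent heat of vaporization, ρ_v and ρ_l are the vapor and liquid densities, σ is the surface tension and g the gravitational acceleration.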

  14. Geometrical theory of spin motion

    International Nuclear Information System (INIS)

    Halpern, L.

    1983-01-01

    A discussion of the fundamental interrelation of geometry and physical laws with Lie groups leads to a reformulation and heuristic modification of the principle of inertia and the principle of equivalence, which is based on the simple De Sitter group instead of the Poincare group. The resulting law of motion allows a unified formulation for structureless and spinning test particles. A metrical theory of gravitation is constructed with the modified principle, which is structured after the geometry of the manifold of the De Sitter group. The theory is equivalent to a particular Kaluza-Klein theory in ten dimensions with the Lorentz group as gauge group. A restricted version of this theory excludes torsion. It is shown by a reformulation of the energy momentum complex that this version is equivalent to general relativity with a cosmologic term quadratic in the curvature tensor and in which the existence of spinning particle fields is inherent from first principles. The equations of the general theory with torsion are presented and it is shown in a special case how the boundary conditions for the torsion degree of freedom have to be chosen such as to treat orbital and spin angular momenta on an equal footing. The possibility of verification of the resulting anomalous spin-spin interaction is mentioned and a model imposed by the group topology of SO(3, 2) is outlined in which the unexplained discrepancy between the magnitude of the discrete valued coupling constants and the gravitational constant in Kaluza-Klein theories is resolved by the identification of identical fermions as one orbit. The mathematical structure can be adapted to larger groups to include other degrees of freedom. 41 references

  15. [Evolutionary process unveiled by the maximum genetic diversity hypothesis].

    Science.gov (United States)

    Huang, Yi-Min; Xia, Meng-Ying; Huang, Shi

    2013-05-01

    As two major popular theories to explain evolutionary facts, the neutral theory and Neo-Darwinism, despite their proven virtues in certain areas, still fail to offer comprehensive explanations for such fundamental evolutionary phenomena as the genetic equidistance result, abundant overlap sites, the increase in complexity over time, the incomplete understanding of genetic diversity, and inconsistencies with fossil and archaeological records. The maximum genetic diversity hypothesis (MGD), however, constructs a more complete evolutionary genetics theory that incorporates all of the proven virtues of existing theories and adds to them the novel concept of a maximum or optimum limit on genetic distance or diversity. It has yet to meet a contradiction and explains for the first time the half-century-old genetic equidistance phenomenon, as well as most other major evolutionary facts. It provides practical and quantitative ways of studying complexity. Molecular interpretation using MGD-based methods reveals novel insights into the origins of humans and other primates that are consistent with fossil evidence and common sense, and reestablishes the important role of China in the evolution of humans. MGD theory has also uncovered an important genetic mechanism in the construction of complex traits and the pathogenesis of complex diseases. Here we make a series of sequence comparisons among yeasts, fishes and primates to illustrate the concept of a limit on genetic distance. The idea of a limit or optimum is in line with the yin-yang paradigm in the traditional Chinese view of the universal creative law in nature.
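
    The genetic equidistance phenomenon referred to above can be illustrated with a toy calculation: for aligned sequences, sister taxa are observed to be roughly equidistant from an outgroup. The sequences and distances below are invented for illustration only; they are not data from the paper.

    ```python
    def hamming(a: str, b: str) -> int:
        """Number of differing sites between two aligned sequences of equal length."""
        assert len(a) == len(b)
        return sum(x != y for x, y in zip(a, b))

    # Hypothetical aligned 20-site sequences (illustrative only).
    outgroup = "ACGTACGTACGTACGTACGT"   # stands in for a distant taxon, e.g. yeast
    taxon_a  = "ACGTACGAACGTTCGTACCT"   # sister taxon 1
    taxon_b  = "ACGAACGTACCTACGTACGA"   # sister taxon 2

    # Equidistance: both sister taxa show the same distance to the outgroup.
    print(hamming(outgroup, taxon_a))   # -> 3
    print(hamming(outgroup, taxon_b))   # -> 3
    ```

    MGD interprets this equality as a saturated maximum distance rather than a constant substitution rate.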

  16. Application of the principles of Vygotsky's sociocultural theory of ...

    African Journals Online (AJOL)

    Sociocultural theory by Vygotsky (1896-1934) is a theory that has become popular in educational practice in recent years. It is especially important in the instruction of children in the preschool level as it is most suitable for their development and learning, which is more of social interaction. This paper discussed the ...

  17. The roles and uses of design principles for developing the trialogical approach on learning

    Directory of Open Access Journals (Sweden)

    Kari Kosonen

    2011-12-01

    Full Text Available In the present paper, the development and use of a specific set of pedagogical design principles in a large research and development project are analysed. The project (the Knowledge Practices Laboratory) developed technology and a pedagogical approach to support certain kinds of collaborative knowledge creation practices related to the 'trialogical' approach on learning. The design principles for trialogical learning are examined from three main developmental perspectives that were emphasised in the project: theory, pedagogy, and technology. As expected, the design principles had many different roles but not as straightforward or overarching as was planned. In their outer form they were more resistant to big changes than was expected but they were elaborated and specified during the process. How theories change in design-based research is discussed on the basis of the analysis. Design principles are usually seen as providing a bridge between theory and practice, but the present case showed that also complementary, more concrete frameworks are needed for bridging theory to practical pedagogical or technical design solutions.

  18. Consistency of the Mach principle and the gravitational-to-inertial mass equivalence principle

    International Nuclear Information System (INIS)

    Granada, Kh.K.; Chubykalo, A.E.

    1990-01-01

    The kinematics of a system composed of two bodies interacting with each other according to the inverse-square law is investigated. It is shown that the Mach principle, earlier rejected by the general theory of relativity, can be used as an alternative to the concept of absolute space, if it is proposed that the distant-star background dictates both the inertial and the gravitational mass of a body

  19. Music-evoked emotions: principles, brain correlates, and implications for therapy.

    Science.gov (United States)

    Koelsch, Stefan

    2015-03-01

    This paper describes principles underlying the evocation of emotion with music: evaluation, resonance, memory, expectancy/tension, imagination, understanding, and social functions. Each of these principles includes several subprinciples, and the framework on music-evoked emotions emerging from these principles and subprinciples is supposed to provide a starting point for a systematic, coherent, and comprehensive theory on music-evoked emotions that considers both reception and production of music, as well as the relevance of emotion-evoking principles for music therapy. © 2015 New York Academy of Sciences.

  20. Maximum entropy reconstructions for crystallographic imaging

    Energy Technology Data Exchange (ETDEWEB)

    Papoular, R

    1997-07-01

    The Fourier transform is of central importance to crystallography, since it allows the visualization in real space of three-dimensional scattering densities of physical systems from diffraction data (powder or single-crystal diffraction, using x-rays, neutrons, electrons or other probes). In turn, this visualization makes it possible to model and parametrize these systems, the crystal structures of which are eventually refined by least-squares techniques (e.g., the Rietveld method in the case of powder diffraction). The Maximum Entropy Method (sometimes called MEM or MaxEnt) is a general imaging technique, related to solving ill-conditioned inverse problems. It is ideally suited for tackling underdetermined systems of linear equations (for which the number of variables is much larger than the number of equations). It is already being applied successfully in astronomy, radioastronomy and medical imaging. The advantages of using Maximum Entropy over conventional Fourier and 'difference Fourier' syntheses stem from the following facts: MaxEnt takes the experimental error bars into account; MaxEnt incorporates prior knowledge (e.g., the positivity of the scattering density in some instances); MaxEnt allows density reconstructions from incompletely phased data, as well as from overlapping Bragg reflections; MaxEnt substantially reduces the truncation errors to which conventional experimental Fourier reconstructions are usually prone. The principles of Maximum Entropy imaging as applied to crystallography are first presented. The method is then illustrated by a detailed example specific to neutron diffraction: the search for protons in solids. (author). 17 refs.
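
    As a minimal sketch of the underlying idea (not the author's algorithm), the least-biased positive density consistent with a few linear measurements can be computed by maximizing entropy on the dual side: the entropy maximizer is of exponential form, and the Lagrange multipliers are adjusted until the measured constraints hold. The `maxent` routine, the measurement matrix and the data below are invented for illustration.

    ```python
    import numpy as np

    def maxent(A, b, lr=0.5, steps=20000):
        """Entropy-maximizing distribution p (p > 0, sum(p) = 1) with A @ p = b.

        Dual gradient ascent: p_i is proportional to exp((A.T @ lam)_i),
        and lam is updated until the linear constraints are satisfied.
        """
        lam = np.zeros(A.shape[0])
        for _ in range(steps):
            w = np.exp(A.T @ lam)
            p = w / w.sum()                 # positivity is automatic
            lam += lr * (b - A @ p)         # dual gradient step
        return p

    # Toy 1-D "scattering density" on 6 pixels and two linear measurements.
    p_true = np.array([1.0, 2.0, 3.0, 4.0, 3.0, 2.0])
    p_true /= p_true.sum()
    A = np.vstack([np.linspace(0.0, 1.0, 6),                    # moment-like probe
                   np.array([1.0, 1.0, 1.0, 0.0, 0.0, 0.0])])   # partial mask
    b = A @ p_true

    p = maxent(A, b)
    print(np.allclose(A @ p, b, atol=1e-6))  # constraints satisfied
    ```

    Note how positivity is built in and the data enter only as linear constraints, which is exactly the structure exploited in crystallographic MaxEnt.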

  1. Variational principles for Ginzburg-Landau equation by He's semi-inverse method

    International Nuclear Information System (INIS)

    Liu, W.Y.; Yu, Y.J.; Chen, L.D.

    2007-01-01

    Via the semi-inverse method of establishing variational principles proposed by He, a generalized variational principle is established for the Ginzburg-Landau equation. The present theory provides a quite straightforward tool for the search for various variational principles for physical problems. This paper aims at providing a more complete theoretical basis for applications using finite element and other direct variational methods

  2. Nonlinear analysis

    CERN Document Server

    Gasinski, Leszek

    2005-01-01

    Hausdorff Measures and Capacity. Lebesgue-Bochner and Sobolev Spaces. Nonlinear Operators and Young Measures. Smooth and Nonsmooth Analysis and Variational Principles. Critical Point Theory. Eigenvalue Problems and Maximum Principles. Fixed Point Theory.

  3. A cyclic symmetry principle in physics

    International Nuclear Information System (INIS)

    Green, H.S.; Adelaide Univ., SA

    1994-01-01

    Many areas of modern physics are illuminated by the application of a symmetry principle, requiring the invariance of the relevant laws of physics under a group of transformations. This paper examines the implications and some of the applications of the principle of cyclic symmetry, especially in the areas of statistical mechanics and quantum mechanics, including quantized field theory. This principle requires invariance under the transformations of a finite group, which may be a Sylow π-group, a group of Lie type, or a symmetric group. The utility of the principle of cyclic invariance is demonstrated in finding solutions of the Yang-Baxter equation that include and generalize known solutions. It is shown that the Sylow π-groups have other uses, in providing a basis for a type of generalized quantum statistics, and in parametrising a new generalization of Lie groups, with associated algebras that include quantized algebras. 31 refs

  4. Quantum wells and the generalized uncertainty principle

    International Nuclear Information System (INIS)

    Blado, Gardo; Owens, Constance; Meyers, Vincent

    2014-01-01

    The finite and infinite square wells are potentials typically discussed in undergraduate quantum mechanics courses. In this paper, we discuss these potentials in the light of the recent studies of the modification of the Heisenberg uncertainty principle into a generalized uncertainty principle (GUP) as a consequence of attempts to formulate a quantum theory of gravity. The fundamental concepts of the minimal length scale and the GUP are discussed and the modified energy eigenvalues and transmission coefficient are derived. (paper)
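
    The generalized uncertainty principle discussed in this record is commonly written with a single deformation parameter β, which directly implies a minimal length scale:

    ```latex
    \Delta x\,\Delta p \;\ge\; \frac{\hbar}{2}\left(1 + \beta\,(\Delta p)^2\right)
    \qquad\Longrightarrow\qquad
    (\Delta x)_{\min} = \hbar\sqrt{\beta},
    ```

    obtained by minimizing the right-hand side over Δp, at Δp = 1/√β.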

  5. Universal Principles of Media Ethics: South African and German Perspectives

    Directory of Open Access Journals (Sweden)

    Lea-Sophie Borgmann

    2012-11-01

    Full Text Available The increasingly globalised nature of media and journalism has led to a review of ethical standards, mainly to find universal ethical values which are applicable in a world with countless different cultures. This article attempts to address this field of research in comparing South African and German approaches to the topic of media ethics. Firstly, it outlines theories of universal and specific cultural ethical principles in journalism. Secondly, it shows how the conception of universal ethical principles, so-called protonorms, is interpreted differently in the two cultures and how specific cultural values of media ethics are rated among the two cultural frameworks of Germany and South Africa. An online survey conducted among German and South African journalism students found significant differences in the ranking of media ethics principles as well as similarities and differences in the interpretations of protonorms. The results support existing normative theories of universal media ethics, such as the theory of protonorms, in contributing explorative empirical data to this field of mainly theoretical research.

  6. Elements of relativity theory

    International Nuclear Information System (INIS)

    Lawden, D.F.

    1985-01-01

    The book on elements of relativity theory is intended for final-year school students or as an early university course in mathematical physics. The special principle of relativity, the Lorentz transformation, velocity transformations, relativistic mechanics, and the general theory of relativity are all discussed. (U.K.)

  7. MaxEnt-Based Ecological Theory: A Template for Integrated Catchment Theory

    Science.gov (United States)

    Harte, J.

    2017-12-01

    The maximum information entropy procedure (MaxEnt) is both a powerful tool for inferring least-biased probability distributions from limited data and a framework for the construction of complex systems theory. The maximum entropy theory of ecology (METE) describes remarkably well widely observed patterns in the distribution, abundance and energetics of individuals and taxa in relatively static ecosystems. An extension to ecosystems undergoing change in response to disturbance or natural succession (DynaMETE) is in progress. I describe the structure of both the static and the dynamic theory and show a range of comparisons with census data. I then propose a generalization of the MaxEnt approach that could provide a framework for a predictive theory of both static and dynamic, fully-coupled, eco-socio-hydrological catchment systems.
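
    The core MaxEnt calculation underlying such theories can be stated compactly: maximizing Shannon entropy subject to normalization and a mean constraint yields an exponential-family distribution. This is a generic sketch; METE's actual state variables and constraints are richer:

    ```latex
    \max_{p}\; -\sum_{n} p_n \ln p_n
    \quad\text{s.t.}\quad
    \sum_n p_n = 1,\qquad \sum_n n\,p_n = \bar n
    \qquad\Longrightarrow\qquad
    p_n = \frac{e^{-\lambda n}}{Z(\lambda)},\qquad
    Z(\lambda)=\sum_n e^{-\lambda n},
    ```

    with the multiplier λ fixed by the mean constraint. Predicted abundance and energy distributions in METE arise from constraints of exactly this form.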

  8. Assessing item fit for unidimensional item response theory models using residuals from estimated item response functions.

    Science.gov (United States)

    Haberman, Shelby J; Sinharay, Sandip; Chon, Kyong Hee

    2013-07-01

    Residual analysis (e.g. Hambleton & Swaminathan, Item response theory: principles and applications, Kluwer Academic, Boston, 1985; Hambleton, Swaminathan, & Rogers, Fundamentals of item response theory, Sage, Newbury Park, 1991) is a popular method to assess fit of item response theory (IRT) models. We suggest a form of residual analysis that may be applied to assess item fit for unidimensional IRT models. The residual analysis consists of a comparison of the maximum-likelihood estimate of the item characteristic curve with an alternative ratio estimate of the item characteristic curve. The large sample distribution of the residual is proved to be standardized normal when the IRT model fits the data. We compare the performance of our suggested residual to the standardized residual of Hambleton et al. (Fundamentals of item response theory, Sage, Newbury Park, 1991) in a detailed simulation study. We then calculate our suggested residuals using data from an operational test. The residuals appear to be useful in assessing the item fit for unidimensional IRT models.
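
    The flavor of such a residual analysis can be sketched numerically: simulate responses from a known one-parameter (Rasch-type) item, bin examinees by ability, and compare the observed proportion correct in each bin with the model curve. This is an illustrative stand-in under simplified assumptions, not the authors' estimator.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def icc(theta, b=0.0):
        """Rasch item characteristic curve: P(correct | theta) for difficulty b."""
        return 1.0 / (1.0 + np.exp(-(theta - b)))

    # Simulate 2000 examinees answering one item with difficulty b = 0.
    theta = rng.normal(size=2000)
    responses = rng.random(2000) < icc(theta)

    # Bin examinees by ability and form standardized residuals:
    # (observed minus expected proportion correct) / binomial standard error.
    edges = np.linspace(-2.0, 2.0, 9)
    residuals = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (theta >= lo) & (theta < hi)
        n = mask.sum()
        observed = responses[mask].mean()
        expected = icc(theta[mask]).mean()
        residuals.append((observed - expected) / np.sqrt(expected * (1 - expected) / n))
    residuals = np.array(residuals)

    # If the model fits, these residuals are approximately standard normal.
    print(residuals.round(2))
    ```

    Large systematic residuals would flag item misfit; here the generating model is the fitted model, so they should hover near zero.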

  9. Private law principles, pluralism and perfectionism

    NARCIS (Netherlands)

    Hesselink, M.W.; Bernitz, U.; Groussot, X.; Schulyok, F.

    2013-01-01

    This paper discusses the legitimacy of general principles of private law as they have been formulated recently by the Court of Justice of the European Union and proposed by the European Commission. It addresses challenges from different strands in political theory including liberal perfectionism,

  10. Theories Supporting Transfer of Training.

    Science.gov (United States)

    Yamnill, Siriporn; McLean, Gary N.

    2001-01-01

    Reviews theories about factors affecting the transfer of training, including theories on motivation (expectancy, equity, goal setting), training transfer design (identical elements, principle, near and far), and transfer climate (organizational). (Contains 36 references.) (SK)

  11. Unified field theory

    International Nuclear Information System (INIS)

    Vollendorf, F.

    1976-01-01

    A theory is developed in which the gravitational as well as the electromagnetic field is described in a purely geometrical manner. In the case of a static central symmetric field Newton's law of gravitation and Schwarzschild's line element are derived by means of an action principle. The same principle leads to Fermat's law which defines the world lines of photons. (orig.) [de

  12. Maximum Entropy Closure of Balance Equations for Miniband Semiconductor Superlattices

    Directory of Open Access Journals (Sweden)

    Luis L. Bonilla

    2016-07-01

    Full Text Available Charge transport in nanosized electronic systems is described by semiclassical or quantum kinetic equations that are often costly to solve numerically and difficult to reduce systematically to macroscopic balance equations for densities, currents, temperatures and other moments of macroscopic variables. The maximum entropy principle can be used to close the system of equations for the moments but its accuracy or range of validity are not always clear. In this paper, we compare numerical solutions of balance equations for nonlinear electron transport in semiconductor superlattices. The equations have been obtained from Boltzmann–Poisson kinetic equations very far from equilibrium for strong fields, either by the maximum entropy principle or by a systematic Chapman–Enskog perturbation procedure. Both approaches produce the same current-voltage characteristic curve for uniform fields. When the superlattices are DC voltage biased in a region where there are stable time periodic solutions corresponding to recycling and motion of electric field pulses, the differences between the numerical solutions produced by numerically solving both types of balance equations are smaller than the expansion parameter used in the perturbation procedure. These results and possible new research venues are discussed.
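
    The maximum entropy closure used here has a standard generic form: among all distribution functions reproducing the known moments, one selects the entropy maximizer, which is necessarily exponential in the constrained quantities (a sketch; the superlattice-specific entropy functional and moments differ in detail):

    ```latex
    f^{\ast} = \arg\max_{f}\Bigl\{-\!\int f \ln f \, dk\Bigr\}
    \quad\text{s.t.}\quad
    \int \varphi_i(k)\, f(k)\, dk = M_i,\;\; i=1,\dots,N
    \qquad\Longrightarrow\qquad
    f^{\ast}(k) = \exp\Bigl(-1 - \sum_{i=1}^{N} \lambda_i\, \varphi_i(k)\Bigr),
    ```

    with the multipliers λ_i fixed by the moment constraints; substituting f* into the moment equations closes the hierarchy.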

  13. Quantum field theory III. Gauge theory. A bridge between mathematicians and physicists

    International Nuclear Information System (INIS)

    Zeidler, Eberhard

    2011-01-01

    In this third volume of his modern introduction to quantum field theory, Eberhard Zeidler examines the mathematical and physical aspects of gauge theory as a principal tool for describing the four fundamental forces which act in the universe: gravitation, electromagnetism, the weak interaction and the strong interaction. Volume III concentrates on the classical aspects of gauge theory, describing the four fundamental forces by the curvature of appropriate fiber bundles. This must be supplemented by the crucial, but elusive, quantization procedure. The book is arranged in four sections, devoted to realizing the universal principle 'force equals curvature': Part I: The Euclidean Manifold as a Paradigm Part II: Ariadne's Thread in Gauge Theory Part III: Einstein's Theory of Special Relativity Part IV: Ariadne's Thread in Cohomology For students of mathematics the book is designed to demonstrate that detailed knowledge of the physical background helps to reveal interesting interrelationships among diverse mathematical topics. Physics students will be exposed to fairly advanced mathematics, beyond the level covered in the typical physics curriculum. Quantum Field Theory builds a bridge between mathematicians and physicists, based on challenging questions about the fundamental forces in the universe (macrocosmos) and in the world of elementary particles (microcosmos). (orig.)

  14. A Complete Theory of Everything (Will Be Subjective

    Directory of Open Access Journals (Sweden)

    Marcus Hutter

    2010-09-01

    Full Text Available Increasingly encompassing models have been suggested for our world. Theories range from generally accepted to increasingly speculative to apparently bogus. The progression of theories from ego- to geo- to helio-centric models to universe and multiverse theories and beyond was accompanied by a dramatic increase in the sizes of the postulated worlds, with humans being expelled from their center to ever more remote and random locations. Rather than leading to a true theory of everything, this trend faces a turning point after which the predictive power of such theories decreases (actually to zero). Incorporating the location and other capacities of the observer into such theories avoids this problem and allows one to distinguish meaningful from predictively meaningless theories. This also leads to a truly complete theory of everything consisting of a (conventional) objective theory of everything plus a (novel) subjective observer process. The observer localization is neither based on the controversial anthropic principle, nor has it anything to do with the quantum-mechanical observation process. The suggested principle is extended to more practical (partial, approximate, probabilistic, parametric) world models (rather than theories of everything). Finally, I provide a justification of Ockham's razor, and criticize the anthropic principle, the doomsday argument, the no-free-lunch theorem, and the falsifiability dogma.

  15. The Real and the Mathematical in Quantum Modeling: From Principles to Models and from Models to Principles

    Science.gov (United States)

    Plotnitsky, Arkady

    2017-06-01

    The history of mathematical modeling outside physics has been dominated by the use of classical mathematical models, C-models, primarily those of a probabilistic or statistical nature. More recently, however, quantum mathematical models, Q-models, based in the mathematical formalism of quantum theory have become more prominent in psychology, economics, and decision science. The use of Q-models in these fields remains controversial, in part because it is not entirely clear whether Q-models are necessary for dealing with the phenomena in question or whether C-models would still suffice. My aim, however, is not to assess the necessity of Q-models in these fields, but instead to reflect on what the possible applicability of Q-models may tell us about the corresponding phenomena there, vis-à-vis quantum phenomena in physics. In order to do so, I shall first discuss the key reasons for the use of Q-models in physics. In particular, I shall examine the fundamental principles that led to the development of quantum mechanics. Then I shall consider a possible role of similar principles in using Q-models outside physics. Psychology, economics, and decision science borrow already available Q-models from quantum theory, rather than derive them from their own internal principles, while quantum mechanics was derived from such principles, because there was no readily available mathematical model to handle quantum phenomena, although the mathematics ultimately used in quantum mechanics did in fact exist then. I shall argue, however, that the principle perspective on mathematical modeling outside physics might help us to understand better the role of Q-models in these fields and possibly to envision new models, conceptually analogous to but mathematically different from those of quantum theory, helpful or even necessary there or in physics itself. I shall suggest one possible type of such models, singularized probabilistic, SP, models, some of which are time-dependent, TDSP-models. The

  16. MBA theory and application of business and management principles

    CERN Document Server

    Davim, J

    2016-01-01

    This book focuses on the relevant subjects in the curriculum of an MBA program. Covering many different fields within business, this book is ideal for readers who want to prepare for a Master of Business Administration degree. It provides discussions and exchanges of information on principles, strategies, models, techniques, methodologies and applications in the business area.

  17. Revisiting a theory of negotiation: the utility of Markiewicz (2005) proposed six principles.

    Science.gov (United States)

    McDonald, Diane

    2008-08-01

    their differences and be willing to move on. But the problem is that evaluators are not necessarily equipped with the technical or personal skills required for effective negotiation. In addition, the time and effort that are required to undertake this mediating role are often not sufficiently understood by those who commission a review. With such issues in mind Markiewicz, A. [(2005). A balancing act: Resolving multiple stakeholder interests in program evaluation. Evaluation Journal of Australasia, 4(1-2), 13-21] has proposed six principles upon which to build a case for negotiation to be integrated into the evaluation process. This paper critiques each of these principles in the context of an evaluation undertaken of a youth program. In doing so it challenges the view that stakeholder consensus is always possible if program improvement is to be achieved. This has led to some refinement and further extension of the proposed theory of negotiation that is seen to be instrumental to the role of an evaluator.

  18. Infrared behaviour, sources and the Schwinger action principle

    International Nuclear Information System (INIS)

    Burgess, M.

    1994-05-01

    An action principle technique is used to explore some issues concerning the infra-red problem in the effective action for gauge field theories. The relationship between the renormalization group and other non-perturbative resummation schemes is demonstrated by means of a source theory. It is shown that the use of vertex renormalization conditions and other resummation methods (large N expansion) can lead to erroneous conclusions about the phase transitions in the gauge theory, since it corresponds to only a partial resummation of the scalar self-energies at the expense of the gauge sector. The renormalization group as well as the ansatz of non-local sources can be derived from an associated operator problem for the field couplings by use of the Schwinger action principle. This method generalizes to curved spacetime and non-equilibrium models in a straightforward way. Some examples are computed to lowest order and the conclusion is drawn that none of the approximation schemes are able to extract true non-perturbative information from field theory. Only results which rely on the particular recursive structure of the perturbation series are accessible and the main purpose of the investigation is to determine legal ways of regulating the theory in the infrared. 35 refs

  19. Non-equilibrium thermodynamics, maximum entropy production and Earth-system evolution.

    Science.gov (United States)

    Kleidon, Axel

    2010-01-13

    The present-day atmosphere is in a unique state far from thermodynamic equilibrium. This uniqueness is for instance reflected in the high concentration of molecular oxygen and the low relative humidity in the atmosphere. Given that the concentration of atmospheric oxygen has likely increased throughout Earth-system history, we can ask whether this trend can be generalized to a trend of Earth-system evolution that is directed away from thermodynamic equilibrium, why we would expect such a trend to take place and what it would imply for Earth-system evolution as a whole. The justification for such a trend could be found in the proposed general principle of maximum entropy production (MEP), which states that non-equilibrium thermodynamic systems maintain steady states at which entropy production is maximized. Here, I justify and demonstrate this application of MEP to the Earth at the planetary scale. I first describe the non-equilibrium thermodynamic nature of Earth-system processes and distinguish processes that drive the system's state away from equilibrium from those that are directed towards equilibrium. I formulate the interactions among these processes from a thermodynamic perspective and then connect them to a holistic view of the planetary thermodynamic state of the Earth system. In conclusion, non-equilibrium thermodynamics and MEP have the potential to provide a simple and holistic theory of Earth-system functioning. This theory can be used to derive overall evolutionary trends of the Earth's past, identify the role that life plays in driving thermodynamic states far from equilibrium, identify habitability in other planetary environments and evaluate human impacts on Earth-system functioning. This journal is © 2010 The Royal Society

  20. Principles of hyperplasticity an approach to plasticity theory based on thermodynamic principles

    CERN Document Server

    Houlsby, Guy T

    2007-01-01

    A new approach to plasticity theory firmly rooted in and compatible with the laws of thermodynamics. Provides a common basis for the formulation and comparison of many existing plasticity models. Incorporates an introduction to elasticity, plasticity, thermodynamics and their interactions. Shows the reader how to formulate constitutive models completely specified by two scalar potential functions, from which the incremental responses of any hyperplastic model can be derived.

  1. Difference Discrete Variational Principles, Euler-Lagrange Cohomology and Symplectic, Multisymplectic Structures I: Difference Discrete Variational Principle

    Institute of Scientific and Technical Information of China (English)

    GUO Han-Ying; LI Yu-Qi; WU Ke; WANG Shi-Kun

    2002-01-01

    In this first paper of a series, we study the difference discrete variational principle in the framework of the multi-parameter differential approach, regarding the forward difference as an entire geometric object in view of noncommutative differential geometry. From this viewpoint, the difference discrete version of the Legendre transformation can be introduced. By virtue of this variational principle, we can deal discretely with the variation problems in both the Lagrangian and Hamiltonian formalisms to get difference discrete Euler-Lagrange equations and canonical ones for the difference discrete versions of classical mechanics and classical field theory.

  2. The Principle of the Fermionic Projector: An Approach for Quantum Gravity?

    Science.gov (United States)

    Finster, Felix

    In this short article we introduce the mathematical framework of the principle of the fermionic projector and set up a variational principle in discrete space-time. The underlying physical principles are discussed. We outline the connection to the continuum theory and state recent results. In the last two sections, we speculate on how it might be possible to describe quantum gravity within this framework.

  3. VULNERABILITY, AUTHENTICITY, AND INTER-SUBJECTIVE CONTACT: PHILOSOPHICAL PRINCIPLES OF INTEGRATIVE PSYCHOTHERAPY

    OpenAIRE

    Richard G. Erskine

    2013-01-01

    The Philosophical principles of a relationally focused Integrative Psychotherapy are described through the concepts of vulnerability, authenticity, and inter-subjective contact. Eight principles or therapist attitudes are outlined with clinical examples that illustrate the philosophy. These philosophical principles provide the foundation for a theory of methods. This article is based on a keynote address given at the 6th International Integrative Psychotherapy Association Conference, Granth...

  4. Particle structure of gauge theories

    International Nuclear Information System (INIS)

    Fredenhagen, K.

    1985-11-01

    The implications of the principles of quantum field theory for the particle structure of gauge theories are discussed. The general structure which emerges is compared with that of the Z_2 Higgs model on a lattice. The discussion leads to several confinement criteria for gauge theories with matter fields. (orig.)

  5. Principles and theory of resonance power supplies

    International Nuclear Information System (INIS)

    Sreenivas, A.; Karady, G.G.

    1991-01-01

    The resonance power supply is widely used and has proved to be an efficient method of supplying accelerator magnets. The literature describes several power supply circuits, but no comprehensive theory of operation has been presented. This paper presents a mathematical method which describes the operation of the resonance power supply and can be used for the accurate design of components.

  6. Optimal control theory an introduction

    CERN Document Server

    Kirk, Donald E

    2004-01-01

    Optimal control theory is the science of maximizing the returns from and minimizing the costs of the operation of physical, social, and economic processes. Geared toward upper-level undergraduates, this text introduces three aspects of optimal control theory: dynamic programming, Pontryagin's minimum principle, and numerical techniques for trajectory optimization. Chapters 1 and 2 focus on describing systems and evaluating their performances. Chapter 3 deals with dynamic programming. The calculus of variations and Pontryagin's minimum principle are the subjects of chapters 4 and 5, and chapter
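The dynamic programming idea covered in Chapter 3 can be sketched in a few lines. The scalar system, grid, and costs below are invented for illustration and are not taken from the book: backward induction computes the cost-to-go V for x_{t+1} = x_t + u_t with running cost u_t^2 and terminal cost x_T^2.

```python
# Toy backward dynamic programming (Bellman backups) on a discretised
# scalar system.  States, controls, horizon and costs are all assumptions
# made for this sketch, not content from the book.

def backward_dp(states, controls, horizon):
    # Terminal cost x^2, then one Bellman backup per stage.
    V = {x: x * x for x in states}
    for _ in range(horizon):
        V = {x: min(u * u + V.get(x + u, float('inf')) for u in controls)
             for x in states}
    return V

states = range(-4, 5)        # integer grid for the state x
controls = (-1, 0, 1)        # admissible control moves, each costing u^2
V = backward_dp(states, controls, horizon=4)
print(V[4])  # -> 4: drive x from 4 to 0 in four unit steps, cost 4*1
```

The comprehension rebinds V only after evaluating all backups against the previous stage, which is exactly the backward recursion of dynamic programming.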

  7. On the Dirichlet's Box Principle

    Science.gov (United States)

    Poon, Kin-Keung; Shiu, Wai-Chee

    2008-01-01

    In this note, we will focus on several applications of Dirichlet's box principle in Discrete Mathematics lessons and number theory lessons. In addition, the main result is an innovative game on a triangular board developed by the authors. The game has been used in teaching and learning mathematics in Discrete Mathematics and some high schools in…
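A minimal sketch of the box principle the note applies (the example below is ours, not from the paper): placing n+1 objects into n boxes forces two objects into the same box, realized here as 13 integers and only 12 residue classes modulo 12.

```python
# Dirichlet's box (pigeonhole) principle, illustrated: with more items
# than boxes, some box must receive at least two items.

def pigeonhole_pair(items, key):
    """Return two distinct items that land in the same box, or None."""
    seen = {}
    for item in items:
        box = key(item)
        if box in seen:
            return seen[box], item  # two objects forced into one box
        seen[box] = item
    return None

# 13 integers, only 12 residue classes mod 12: a collision is guaranteed.
pair = pigeonhole_pair(range(1, 14), lambda x: x % 12)
print(pair)  # -> (1, 13): both leave remainder 1 on division by 12
```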

  8. Theoretical physics. Field theory

    International Nuclear Information System (INIS)

    Landau, L.; Lifchitz, E.

    2004-01-01

    This book is the fifth French edition of the famous course written by Landau/Lifchitz and devoted to both the theory of electromagnetic fields and the theory of gravity. The treatment of the theory of electromagnetic fields is based on special relativity and concerns only electrodynamics in vacuum and that of pointwise electric charges. On the basis of the fundamental notions of the principle of relativity and of relativistic mechanics, and by using variational principles, the authors develop the fundamental equations of the electromagnetic field, the wave equation and the processes of emission and propagation of light. The theory of gravitational fields, i.e. the general theory of relativity, is exposed in the last five chapters. The fundamentals of tensor calculus and all that is related to it are progressively introduced just when needed (electromagnetic field tensor, energy-impulse tensor, or curvature tensor...). The worldwide reputation of this book is generally attributed to the clearness, simplicity and rigorous logic of the demonstrations. (A.C.)

  9. Enhancing Cognitive Theory of Multimedia Learning through 3D Animation

    Directory of Open Access Journals (Sweden)

    Zeeshan Bhatti

    2017-12-01

    Cognitive theory of multimedia learning has been a widely used principle in education. However, with current technological advancements, the teaching and learning habits of children have also changed, with greater dependence on technology. This research work explores and implements the use of 3D animation as a tool for multimedia learning based on cognitive theory. This new dimension in cognitive learning will foster the latest multimedia tools and applications driven through 3D animation, virtual reality and augmented reality. The three principles that facilitate the cognitive theory of multimedia learning using animation addressed in this research are the temporal contiguity principle (screening matching narration with animation simultaneously rather than successively), the personalization principle (screening text or dialogs in casual form rather than formal style) and finally the multimedia principle (screening animation and audio narration together instead of just narration). The result of this new model would yield a new technique for educating young children through 3D animation and virtual reality. The adaptation of cognitive theory through 3D animation as a source of multimedia learning with various key principles produces a reliable paradigm for educational enhancement.

  10. On minimizers of causal variational principles

    International Nuclear Information System (INIS)

    Schiefeneder, Daniela

    2011-01-01

    Causal variational principles are a class of nonlinear minimization problems which arise in a formulation of relativistic quantum theory referred to as the fermionic projector approach. This thesis is devoted to a numerical and analytic study of the minimizers of a general class of causal variational principles. We begin with a numerical investigation of variational principles for the fermionic projector in discrete space-time. It is shown that for sufficiently many space-time points, the minimizing fermionic projector induces non-trivial causal relations on the space-time points. We then generalize the setting by introducing a class of causal variational principles for measures on a compact manifold. In our main result we prove under general assumptions that the support of a minimizing measure is either completely timelike, or it is singular in the sense that its interior is empty. In the examples of the circle, the sphere and certain flag manifolds, the general results are supplemented by a more detailed analysis of the minimizers. (orig.)

  11. Basic Knowledge for Market Principle: Approaches to the Price Coordination Mechanism by Using Optimization Theory and Algorithm

    Science.gov (United States)

    Aiyoshi, Eitaro; Masuda, Kazuaki

    On the basis of market fundamentalism, new types of social systems with the market mechanism, such as electricity trading markets and carbon dioxide (CO2) emission trading markets, have been developed. However, there are few textbooks in science and technology which present the explanation that Lagrange multipliers can be interpreted as market prices. This tutorial paper explains that (1) the steepest descent method for dual problems in optimization and (2) the Gauss-Seidel method for solving the stationary conditions of Lagrange problems with market principles can formulate the mechanism of market pricing, which works even in the information-oriented modern society. The authors expect readers to acquire basic knowledge of optimization theory and algorithms related to economics and to utilize them for designing the mechanism of more complicated markets.
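The record's central claim, that a Lagrange multiplier behaves as a market price, can be illustrated by gradient iteration on the dual of a toy dispatch problem. The quadratic producer costs and the demand figure below are assumptions made for this sketch only: each producer best-responds to the current price, and the price (the multiplier) moves in proportion to the demand-balance violation.

```python
# Hypothetical example of dual (price) iteration for minimising
# sum_i c_i q_i^2 / 2 subject to sum_i q_i = demand.  The Lagrange
# multiplier of the balance constraint plays the role of the price.

def market_clearing(costs, demand, step=0.2, iters=2000):
    """Price iteration: producers best-respond, price chases the imbalance."""
    price = 0.0
    for _ in range(iters):
        supply = [price / c for c in costs]      # producer i minimises c_i q^2/2 - price*q
        price += step * (demand - sum(supply))   # dual gradient = constraint violation
    return price, supply

price, supply = market_clearing([1.0, 2.0], 10.0)
print(round(price, 3))  # -> 6.667, the market-clearing price lambda*
```

At the fixed point, supply exactly meets demand, so the multiplier found by the iteration is precisely the price at which the market clears.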

  12. Elementary particle theory

    CERN Document Server

    Stefanovich, Eugene

    2018-01-01

    This book introduces notation, terminology, and basic ideas of relativistic quantum theories. The discussion proceeds systematically from the principle of relativity and postulates of quantum logics to the construction of Poincaré invariant few-particle models of interaction and scattering. It is the first of three volumes formulating a consistent relativistic quantum theory of interacting charged particles.

  13. Geophysical Field Theory

    International Nuclear Information System (INIS)

    Eloranta, E.

    2003-11-01

    The geophysical field theory includes the basic principles of electromagnetism, continuum mechanics, and potential theory upon which the computational modelling of geophysical phenomena is based. Vector analysis is the main mathematical tool in the field analyses. Electrostatics, stationary electric current, magnetostatics, and electrodynamics form a central part of electromagnetism in geophysical field theory. Potential theory concerns especially gravity, but also electrostatics and magnetostatics. Solid state mechanics and fluid mechanics are central parts of continuum mechanics. The theories of elastic waves and rock mechanics also belong to geophysical solid state mechanics. The theories of geohydrology and mass transport form one central field theory in geophysical fluid mechanics. Heat transfer is also included in continuum mechanics. (orig.)

  14. A revision of the generalized uncertainty principle

    International Nuclear Information System (INIS)

    Bambi, Cosimo

    2008-01-01

    The generalized uncertainty principle arises from the Heisenberg uncertainty principle when gravity is taken into account, so the leading order correction to the standard formula is expected to be proportional to the gravitational constant G_N = L_Pl^2. On the other hand, the emerging picture suggests a set of departures from the standard theory which demand a revision of all the arguments used to deduce heuristically the new rule. In particular, one can now argue that the leading order correction to the Heisenberg uncertainty principle is proportional to the first power of the Planck length L_Pl. If so, the departures from ordinary quantum mechanics would be much less suppressed than what is commonly thought.

  15. Theory of colours

    CERN Document Server

    Goethe, Johann Wolfgang von

    2006-01-01

    The wavelength theory of light and color had been firmly established by the time the great German poet published his Theory of Colours in 1810. Nevertheless, Goethe believed that the theory derived from a fundamental error, in which an incidental result was mistaken for an elemental principle. Far from affecting a knowledge of physics, he maintained that such a background would inhibit understanding. The conclusions Goethe draws here rest entirely upon his personal observations. This volume does not have to be studied to be appreciated. The author's subjective theory of colors permits him to spe

  16. Statistical theory and inference

    CERN Document Server

    Olive, David J

    2014-01-01

    This text is for a one-semester graduate course in statistical theory and covers minimal and complete sufficient statistics, maximum likelihood estimators, method of moments, bias and mean square error, uniform minimum variance estimators and the Cramer-Rao lower bound, an introduction to large sample theory, likelihood ratio tests and uniformly most powerful tests and the Neyman-Pearson Lemma. A major goal of this text is to make these topics much more accessible to students by using the theory of exponential families. Exponential families, indicator functions and the support of the distribution are used throughout the text to simplify the theory. More than 50 "brand name" distributions are used to illustrate the theory with many examples of exponential families, maximum likelihood estimators and uniformly minimum variance unbiased estimators. There are many homework problems with over 30 pages of solutions.
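As an illustration of the exponential-family approach the text advertises (the data here are invented, not from the book): the Poisson model is a one-parameter exponential family whose maximum likelihood estimator has the closed form of the sample mean, which can be checked numerically against the log-likelihood.

```python
# Poisson MLE sketch: the rate estimate is the sample mean, and the
# log-likelihood is indeed lower at nearby candidate rates.

import math

def poisson_loglik(lam, data):
    """Poisson log-likelihood: sum of x*log(lam) - lam - log(x!)."""
    return sum(x * math.log(lam) - lam - math.lgamma(x + 1) for x in data)

data = [2, 3, 1, 4, 2, 3]          # invented sample
mle = sum(data) / len(data)        # closed-form MLE: the sample mean

# Numerical sanity check: perturbing the rate lowers the log-likelihood.
assert poisson_loglik(mle, data) > poisson_loglik(mle - 0.1, data)
assert poisson_loglik(mle, data) > poisson_loglik(mle + 0.1, data)
print(mle)  # -> 2.5
```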

  17. The twin paradox and the principle of relativity

    International Nuclear Information System (INIS)

    Grøn, Øyvind

    2013-01-01

    The twin paradox is intimately related to the principle of relativity. Two twins A and B meet, travel away from each other and meet again. From the point of view of A, B is the traveller. Thus, A predicts B to be younger than A herself, and vice versa. Both cannot be correct. The special relativistic solution is to say that if one of the twins, say A, was inertial during the separation, she will be the older one. Since the principle of relativity is not valid for accelerated motion according to the special theory of relativity, B cannot consider herself as at rest permanently because she must accelerate in order to return to her sister. A general relativistic solution is to say that due to the principle of equivalence B can consider herself as at rest, but she must invoke the gravitational change of time in order to predict correctly the age of A during their separation. However, one may argue that the fact that B is younger than A shows that B was accelerated, not A, and hence the principle of relativity is not valid for accelerated motion in the general theory of relativity either. I here argue that perfect inertial dragging may save the principle of relativity, and that this requires a new model of the Minkowski spacetime where the cosmic mass is represented by a massive shell with radius equal to its own Schwarzschild radius. (paper)

  18. Overview of Maximum Power Point Tracking Techniques for Photovoltaic Energy Production Systems

    DEFF Research Database (Denmark)

    Koutroulis, Eftichios; Blaabjerg, Frede

    2015-01-01

    A substantial growth of the installed photovoltaic systems capacity has occurred around the world during the last decade, thus enhancing the availability of electric energy in an environmentally friendly way. The maximum power point tracking technique enables maximization of the energy production of photovoltaic sources during stochastically varying solar irradiation and ambient temperature conditions. Thus, the overall efficiency of the photovoltaic energy production system is increased. Numerous techniques have been presented during the last decade for implementing the maximum power point tracking process in a photovoltaic system. This article provides an overview of the operating principles of these techniques, which are suited for either uniform or non-uniform solar irradiation conditions. The operational characteristics and implementation requirements of these maximum power point tracking
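One of the best-known MPPT techniques covered by such overviews is perturb-and-observe hill climbing. The sketch below uses a toy power-voltage curve with a single maximum rather than a real panel model; the controller perturbs the operating voltage and reverses direction whenever the measured power drops.

```python
# Perturb-and-observe MPPT sketch on an invented power-voltage curve.
# Real panels have I-V characteristics; the quadratic below merely gives
# a single maximum (900 W at 30 V) for the tracker to find.

def perturb_and_observe(p_of_v, v0=20.0, dv=0.5, steps=200):
    """Hill-climb the power curve by perturbing the operating voltage."""
    v, p = v0, p_of_v(v0)
    direction = 1.0
    for _ in range(steps):
        v_new = v + direction * dv
        p_new = p_of_v(v_new)
        if p_new < p:              # power dropped: reverse the perturbation
            direction = -direction
        v, p = v_new, p_new
    return v

v_mpp = perturb_and_observe(lambda v: 900.0 - (v - 30.0) ** 2)
print(round(v_mpp, 1))  # -> 30.0, oscillating within one step of the peak
```

Once at the peak the tracker oscillates within one perturbation step of the maximum power point, which is the characteristic steady-state behaviour of this technique.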

  19. Modern actuarial risk theory: using R

    NARCIS (Netherlands)

    Kaas, R.; Goovaerts, M.; Dhaene, J.; Denuit, M.

    2008-01-01

    Modern Actuarial Risk Theory -- Using R contains what every actuary needs to know about non-life insurance mathematics. It starts with the standard material like utility theory, individual and collective model and basic ruin theory. Other topics are risk measures and premium principles, bonus-malus

  20. Maximum neutron flux at thermal nuclear reactors

    International Nuclear Information System (INIS)

    Strugar, P.

    1968-10-01

    Since actual research reactors are technically complicated and expensive facilities, it is important to achieve savings through appropriate reactor lattice configurations. There are a number of papers, and practical examples of reactors with a central reflector, dealing with spatial distributions of fuel elements which would result in higher neutron flux. A common disadvantage of all these solutions is that the choice of the best solution starts from anticipated spatial distributions of fuel elements. The weakness of these approaches is the lack of defined optimization criteria. The direct approach is defined as follows: determine the spatial distribution of fuel concentration starting from the condition of maximum neutron flux while fulfilling the thermal constraints. Thus the problem of determining the maximum neutron flux becomes a variational problem which is beyond the possibilities of classical variational calculus. This variational problem has been successfully solved by applying the maximum principle of Pontryagin. The optimum distribution of fuel concentration was obtained in explicit analytical form. Thus, the spatial distribution of the neutron flux and the critical dimensions of a quite complex reactor system are calculated in a relatively simple way. In addition to the fact that the results are innovative, this approach is interesting because of the optimization procedure itself.

  1. Gravitation: Field theory par excellence Newton, Einstein, and beyond

    International Nuclear Information System (INIS)

    Yilmaz, H.

    1984-01-01

    Newtonian gravity satisfies the two principles of equivalence m_i = m_p (the passive principle) and m_a = m_p (the active principle). A relativistic gauge field concept in D = s+1 dimensional curved space will, in general, violate these two principles as in m_p = α m_i, m_a = λ m_p, where α = D:3 and λ measures the presence of the field stress-energy t^ν_μ in the field equations. It is shown that α = 1, λ = 0 corresponds to general relativity and α = 1, λ = 1 to the theory of the author. It is noted that the correspondence limit of general relativity is not Newton's theory but a theory suggested by Robert Hooke a few years before Newton published his in Principia. The gauge is independent of the two principles but has to do with local special relativistic correspondence and compatibility with quantum mechanics. It is shown that unless α = 1, λ = 1 the generalized theory cannot predict correctly many observable effects, including the 532'' per century Newtonian part of Mercury's perihelion advance

  2. Finite temperature grand canonical ensemble study of the minimum electrophilicity principle.

    Science.gov (United States)

    Miranda-Quintana, Ramón Alain; Chattaraj, Pratim K; Ayers, Paul W

    2017-09-28

    We analyze the minimum electrophilicity principle of conceptual density functional theory using the framework of the finite temperature grand canonical ensemble. We provide support for this principle, both for the cases of systems evolving from a non-equilibrium to an equilibrium state and for the change from one equilibrium state to another. In doing so, we clearly delineate the cases where this principle can, or cannot, be used.

  3. The Haling principle - true or false?

    International Nuclear Information System (INIS)

    Sun, S.X.; Kropaczek, D.J.; Turinsky, P.J.

    1993-01-01

    The Haling principle has long been viewed as an idealized control strategy for maximizing thermal margin over the operating cycle of a nuclear reactor core. In essence, the Haling principle states that the time-dependent power distribution that minimizes power peaking over the cycle is the power distribution which remains invariant throughout the cycle. Applicable but not strictly limited to boiling water reactor (BWR) operation, maintaining a constant Haling power distribution through manipulation of control materials should, in theory, yield an optimum beyond which further reductions in thermal margin remain unachievable, given a particular arrangement of fuel inventory.

  4. Learning theory and gestalt therapy.

    Science.gov (United States)

    Harper, R; Bauer, R; Kannarkat, J

    1976-01-01

    This article discusses the theory and operations of Gestalt Therapy from the viewpoint of learning theory. General comparative issues are elaborated, as well as the concepts of introjection, retroflection, confluence, and projection. Principles and techniques of Gestalt Therapy are discussed in terms of the learning theory paradigm. Practical implications of the various Gestalt techniques are presented.

  5. Continuous quantum measurements and the action uncertainty principle

    Science.gov (United States)

    Mensky, Michael B.

    1992-09-01

    The path-integral approach to the quantum theory of continuous measurements has been developed in preceding works of the author. According to this approach the measurement amplitude determining the probabilities of different outputs of the measurement can be evaluated in the form of a restricted path integral (a path integral "in finite limits"). With the help of the measurement amplitude, the maximum deviation of measurement outputs from the classical one can be easily determined. The aim of the present paper is to express this variance in the simpler and more transparent form of a specific uncertainty principle (called the action uncertainty principle, AUP). The simplest (but weak) form of the AUP is δS ≳ ℏ, where S is the action functional. It can be applied for a simple derivation of the Bohr-Rosenfeld inequality for the measurability of the gravitational field. A stronger (and more widely applicable) form of the AUP (for ideal measurements performed in the quantum regime) is |∫_{t'}^{t''} (δS[q]/δq(t)) Δq(t) dt| ≃ ℏ, where the paths [q] and [Δq] stand correspondingly for the measurement output and for the measurement error. It can also be presented in the symbolic form Δ(Equation) Δ(Path) ≃ ℏ. This means that the deviation of the observed (measured) motion from that obeying the classical equation of motion is reciprocally proportional to the uncertainty in the path (the latter uncertainty resulting from the measurement error). The consequence of the AUP is that improving the measurement precision beyond the threshold of the quantum regime leads to decreasing information resulting from the measurement.

  6. Anomalous singularities in the complex Kohn variational principle of quantum scattering theory

    International Nuclear Information System (INIS)

    Lucchese, R.R.

    1989-01-01

    Variational principles for symmetric complex scattering matrices (e.g., the S matrix or the T matrix) based on the Kohn variational principle have been thought to be free from anomalous singularities. We demonstrate that singularities do exist for these variational principles by considering single and multichannel model problems based on exponential interaction potentials. The singularities are found by considering simultaneous variations in two nonlinear parameters in the variational calculation (e.g., the energy and the cutoff function for the irregular continuum functions). The singularities are found when the cutoff function for the irregular continuum functions extends over a range of the radial coordinate where the square-integrable basis set does not have sufficient flexibility. Effects of these singularities generally should not appear in applications of the complex Kohn method where a fixed variational basis set is considered and only the energy is varied

  7. Concept of a collective subspace associated with the invariance principle of the Schroedinger equation

    International Nuclear Information System (INIS)

    Marumori, Toshio; Hayashi, Akihisa; Tomoda, Toshiaki; Kuriyama, Atsushi; Maskawa, Toshihide

    1980-01-01

    The aim of this series of papers is to propose a microscopic theory to go beyond the situations where collective motions are described by the random phase approximation, i.e., by small amplitude harmonic oscillations about equilibrium. The theory is thus appropriate for the microscopic description of the large amplitude collective motion of soft nuclei. The essential idea is to develop a method to determine the collective subspace (or submanifold) in the many-particle Hilbert space in an optimal way, on the basis of a fundamental principle called the invariance principle of the Schroedinger equation. By using the principle within the framework of the Hartree-Fock theory, it is shown that the theory can clarify the structure of the so-called "phonon bands" by self-consistently deriving the collective Hamiltonian where the number of the "physical phonon" is conserved. The purpose of this paper is not to go into detailed quantitative discussion, but rather to develop the basic idea. (author)

  8. Maximum entropy restoration of laser fusion target x-ray photographs

    International Nuclear Information System (INIS)

    Brolley, J.E.; Lazarus, R.B.; Suydam, B.R.

    1976-01-01

    Maximum entropy principles were used to analyze the microdensitometer traces of a laser-fusion target photograph. The object is a glowing laser-fusion target microsphere 0.95 cm from a pinhole of radius 2 x 10^-4 cm, the image is 7.2 cm from the pinhole, and the photon wavelength is likely to be 6.2 x 10^-8 cm. Some computational aspects of the problem are also considered.

  9. Tests of Cumulative Prospect Theory with graphical displays of probability

    Directory of Open Access Journals (Sweden)

    Michael H. Birnbaum

    2008-10-01

    Recent research reported evidence that contradicts cumulative prospect theory and the priority heuristic. The same body of research also violates two editing principles of original prospect theory: cancellation (the principle that people delete any attribute that is the same in both alternatives before deciding between them) and combination (the principle that people combine branches leading to the same consequence by adding their probabilities). This study was designed to replicate previous results and to test whether the violations of cumulative prospect theory might be eliminated or reduced by using formats for presentation of risky gambles in which cancellation and combination could be facilitated visually. Contrary to the idea that decision behavior contradicting cumulative prospect theory and the priority heuristic would be altered by use of these formats, however, data with two new graphical formats as well as fresh replication data continued to show the patterns of evidence that violate cumulative prospect theory, the priority heuristic, and the editing principles of combination and cancellation. Systematic violations of restricted branch independence also contradicted predictions of "stripped" prospect theory (subjectively weighted additive utility without the editing rules).
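For orientation, the cumulative weighting that "stripped" prospect theory omits can be made concrete. The sketch below is a generic illustration using the standard Tversky-Kahneman (1992) parameterization for gains, not the stimuli or fitted parameters of this study: a gains-only gamble is valued by applying a probability weighting function to decumulative probabilities.

```python
def tk_weight(p, gamma=0.61):
    """Tversky-Kahneman inverse-S probability weighting function."""
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

def cpt_value(gamble, alpha=0.88, gamma=0.61):
    """CPT value of a gains-only gamble given as [(outcome, probability), ...].

    Each branch receives the decision weight w(P(X >= x)) - w(P(X > x)),
    so weights depend on cumulative rank, not on branch probability alone.
    """
    branches = sorted(gamble, reverse=True)  # best outcome first
    total, cum = 0.0, 0.0
    for x, p in branches:
        weight = tk_weight(cum + p, gamma) - tk_weight(cum, gamma)
        total += (x ** alpha) * weight
        cum += p
    return total
```

A sure outcome reduces to its subjective value v(x) = x^alpha, since its decision weight is w(1) - w(0) = 1.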

  10. Modelling information flow along the human connectome using maximum flow.

    Science.gov (United States)

    Lyoo, Youngwook; Kim, Jieun E; Yoon, Sujung

    2018-01-01

    The human connectome is a complex network that transmits information between interlinked brain regions. Using graph theory, previously well-known network measures of integration between brain regions have been constructed under the key assumption that information flows strictly along the shortest paths possible between two nodes. However, it is now apparent that information does flow through non-shortest paths in many real-world networks such as cellular networks, social networks, and the internet. In the current hypothesis, we present a novel framework using the maximum flow to quantify information flow along all possible paths within the brain, so as to implement an analogy to network traffic. We hypothesize that the connection strengths of brain networks represent a limit on the amount of information that can flow through the connections per unit of time. This allows us to compute the maximum amount of information flow between two brain regions along all possible paths. Using this novel framework of maximum flow, previous network topological measures are expanded to account for information flow through non-shortest paths. The most important advantage of the current approach using maximum flow is that it can integrate the weighted connectivity data in a way that better reflects the real information flow of the brain network. The current framework and its concept of maximum flow provide insight into how network structure shapes information flow, in contrast to graph-theoretic shortest-path measures, and suggest future applications such as investigating structural and functional connectomes at a neuronal level. Copyright © 2017 Elsevier Ltd. All rights reserved.
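As a concrete illustration of the analogy (a minimal sketch, not the authors' pipeline; a connectome analysis would take its capacities from a weighted adjacency matrix derived from tractography), the maximum flow between two nodes can be computed with the standard Edmonds-Karp algorithm, with edge capacities playing the role of connection strengths:

```python
from collections import deque

def max_flow(capacity, source, sink):
    """Edmonds-Karp maximum flow.

    `capacity` maps directed edges (u, v) to nonnegative capacities.
    Returns the maximum total flow from source to sink.
    """
    # Build the residual graph, adding zero-capacity reverse edges.
    residual = {}
    for (u, v), c in capacity.items():
        residual.setdefault(u, {})[v] = residual.get(u, {}).get(v, 0) + c
        residual.setdefault(v, {}).setdefault(u, 0)

    flow = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph.
        parent = {source: None}
        queue = deque([source])
        while queue and sink not in parent:
            u = queue.popleft()
            for v, c in residual.get(u, {}).items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if sink not in parent:
            return flow  # no augmenting path left: flow is maximal
        # Recover the path, find its bottleneck capacity, and augment.
        path, v = [], sink
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(residual[u][v] for u, v in path)
        for u, v in path:
            residual[u][v] -= bottleneck
            residual[v][u] += bottleneck
        flow += bottleneck
```

On a small diamond network, flow through the non-shortest path (here via the cross edge a→b) contributes capacity that shortest-path measures would ignore.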

  11. New foundation of quantum theory

    International Nuclear Information System (INIS)

    Schmutzer, E.

    1976-01-01

    A new foundation of quantum theory is given on the basis of the formulated 'Principle of Fundamental Covariance', combining the 'Principle of General Relativity' (coordinate-covariance in space-time) and the 'Principle of Operator-Covariance' (in Hilbert space). The fundamental quantum laws proposed are: (1) time-dependent simultaneous laws of motion for the operators, general states and eigenstates, (2) commutation relations, (3) time-dependent eigenvalue equations. All these laws fulfill the Principle of Fundamental Covariance (in non-relativistic quantum mechanics with restricted coordinate transformations). (author)

  12. Do violations of the axioms of expected utility theory threaten decision analysis?

    Science.gov (United States)

    Nease, R F

    1996-01-01

    Research demonstrates that people violate the independence principle of expected utility theory, raising the question of whether expected utility theory is normative for medical decision making. The author provides three arguments that violations of the independence principle are less problematic than they might first appear. First, the independence principle follows from other more fundamental axioms whose appeal may be more readily apparent than that of the independence principle. Second, the axioms need not be descriptive to be normative, and they need not be attractive to all decision makers for expected utility theory to be useful for some. Finally, by providing a metaphor of decision analysis as a conversation between the actual decision maker and a model decision maker, the author argues that expected utility theory need not be purely normative for decision analysis to be useful. In short, violations of the independence principle do not necessarily represent direct violations of the axioms of expected utility theory; behavioral violations of the axioms of expected utility theory do not necessarily imply that decision analysis is not normative; and full normativeness is not necessary for decision analysis to generate valuable insights.

  13. Quantum field theory III. Gauge theory. A bridge between mathematicians and physicists

    Energy Technology Data Exchange (ETDEWEB)

    Zeidler, Eberhard [Max Planck Institute for Mathematics in the Sciences, Leipzig (Germany)

    2011-07-01

    In this third volume of his modern introduction to quantum field theory, Eberhard Zeidler examines the mathematical and physical aspects of gauge theory as a principal tool for describing the four fundamental forces which act in the universe: gravitational, electromagnetic, weak interaction and strong interaction. Volume III concentrates on the classical aspects of gauge theory, describing the four fundamental forces by the curvature of appropriate fiber bundles. This must be supplemented by the crucial, but elusive, quantization procedure. The book is arranged in four sections, devoted to realizing the universal principle "force equals curvature": Part I: The Euclidean Manifold as a Paradigm; Part II: Ariadne's Thread in Gauge Theory; Part III: Einstein's Theory of Special Relativity; Part IV: Ariadne's Thread in Cohomology. For students of mathematics the book is designed to demonstrate that detailed knowledge of the physical background helps to reveal interesting interrelationships among diverse mathematical topics. Physics students will be exposed to fairly advanced mathematics, beyond the level covered in the typical physics curriculum. Quantum Field Theory builds a bridge between mathematicians and physicists, based on challenging questions about the fundamental forces in the universe (macrocosmos) and in the world of elementary particles (microcosmos). (orig.)

  14. Some Implications of Two Forms of the Generalized Uncertainty Principle

    Directory of Open Access Journals (Sweden)

    Mohammed M. Khalil

    2014-01-01

    Various theories of quantum gravity predict the existence of a minimum length scale, which leads to the modification of the standard uncertainty principle to the Generalized Uncertainty Principle (GUP). In this paper, we study two forms of the GUP and calculate their implications for the energy of the harmonic oscillator and the hydrogen atom more accurately than previous studies. In addition, we show how the GUP modifies the Lorentz force law and the time-energy uncertainty principle.
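For reference, two widely studied forms of the GUP (standard forms from the literature; the paper's exact conventions may differ) are the quadratic commutator of Kempf, Mangano, and Mann and a linear-plus-quadratic form of the kind studied by Ali, Das, and Vagenas:

```latex
% Quadratic GUP, implying a minimal length \Delta x_{\min} = \hbar\sqrt{\beta}
[\hat{x}, \hat{p}] = i\hbar\left(1 + \beta \hat{p}^{2}\right)
\quad\Longrightarrow\quad
\Delta x\,\Delta p \ge \frac{\hbar}{2}\left(1 + \beta\,(\Delta p)^{2}\right)
\quad (\text{taking } \langle \hat{p} \rangle = 0)

% Linear-plus-quadratic GUP (minimal length and maximal momentum)
[\hat{x}, \hat{p}] = i\hbar\left(1 - \alpha \hat{p} + 2\alpha^{2} \hat{p}^{2}\right)
```

In the limit \(\alpha, \beta \to 0\) both forms reduce to the standard Heisenberg relation.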

  15. A note on weighted premium calculation principles

    NARCIS (Netherlands)

    Kaluszka, M.; Laeven, R.J.A.; Okolewski, A.

    2012-01-01

    A prominent problem in actuarial science is to determine premium calculation principles that satisfy certain criteria. Goovaerts et al. [Goovaerts, M. J., De Vylder, F., Haezendonck, J., 1984. Insurance Premiums: Theory and Applications. North-Holland, Amsterdam, p. 84] establish an optimality-type

  16. Complex Correspondence Principle

    International Nuclear Information System (INIS)

    Bender, Carl M.; Meisinger, Peter N.; Hook, Daniel W.; Wang Qinghai

    2010-01-01

    Quantum mechanics and classical mechanics are distinctly different theories, but the correspondence principle states that quantum particles behave classically in the limit of high quantum number. In recent years much research has been done on extending both quantum and classical mechanics into the complex domain. These complex extensions continue to exhibit a correspondence, and this correspondence becomes more pronounced in the complex domain. The association between complex quantum mechanics and complex classical mechanics is subtle and demonstrating this relationship requires the use of asymptotics beyond all orders.

  17. Information theory and rate distortion theory for communications and compression

    CERN Document Server

    Gibson, Jerry

    2013-01-01

    This book is very specifically targeted to problems in communications and compression by providing the fundamental principles and results in information theory and rate distortion theory for these applications and presenting methods that have proved and will prove useful in analyzing and designing real systems. The chapters contain treatments of entropy, mutual information, lossless source coding, channel capacity, and rate distortion theory; however, it is the selection, ordering, and presentation of the topics within these broad categories that is unique to this concise book. While the cover
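The core quantities the book builds on, entropy and mutual information, can be illustrated with a short, generic sketch (textbook definitions, not code from the book):

```python
import math

def entropy(probs):
    """Shannon entropy, in bits, of a discrete distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def mutual_information(joint):
    """I(X;Y) in bits from a joint pmf given as a 2-D list of probabilities.

    Uses the identity I(X;Y) = H(X) + H(Y) - H(X,Y).
    """
    px = [sum(row) for row in joint]                 # marginal of X
    py = [sum(col) for col in zip(*joint)]           # marginal of Y
    joint_flat = [p for row in joint for p in row]
    return entropy(px) + entropy(py) - entropy(joint_flat)
```

A fair coin carries one bit of entropy, and two perfectly correlated fair bits share one bit of mutual information, while independent bits share none.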

  18. A Review of Multimedia Learning Principles: Split-Attention, Modality, and Redundancy Effects

    Directory of Open Access Journals (Sweden)

    Pelin YUKSEL ARSLAN

    2012-02-01

    Full Text Available This study aims to present a literature review on three principles of multimedia learningincluding split attention, modality, and redundancy effects with regard to their contribution to cognitiveload theory. According to cognitive load theory, information should be presented by considering excessiveload on working memory. The first principle states that attending to two distinct sources of informationmay impose a high cognitive load, and this process is referred to as the split-attention effect (Kalyuga,Chandler & Sweller, 1991, 1992. The second principle, Modality effect claims that on-screen text shouldbe presented in an auditory form instead of visually when designing a multimedia environment (Moreno &Mayer, 1999. Using more than one sensory mode augments forceful working memory that producesprogressive effects on learning. The third principle redundancy effect occurs when information presentedrepeatedly interferes with learning. This study provides guidance how to create more effective instructionwith multimedia materials for instructional designers.

  19. Optimal item discrimination and maximum information for logistic IRT models

    NARCIS (Netherlands)

    Veerkamp, W.J.J.; Veerkamp, Wim J.J.; Berger, Martijn P.F.; Berger, Martijn

    1999-01-01

    Items with the highest discrimination parameter values in a logistic item response theory model do not necessarily give maximum information. This paper derives discrimination parameter values, as functions of the guessing parameter and distances between person parameters and item difficulty, that
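The quantities involved can be sketched concretely (a generic three-parameter logistic implementation under the usual parameterization, not the authors' derivation): with a guessing parameter c > 0, the ability level at which an item's information peaks shifts above the item difficulty, so the most discriminating item need not be the most informative one at a given ability level.

```python
import math

def p_3pl(theta, a, b, c):
    """Probability of a correct response under the 3PL model."""
    return c + (1 - c) / (1 + math.exp(-a * (theta - b)))

def item_information(theta, a, b, c=0.0):
    """Fisher information of a 3PL item at ability theta:
    I(theta) = a^2 * (Q/P) * ((P - c) / (1 - c))^2, with Q = 1 - P."""
    p = p_3pl(theta, a, b, c)
    q = 1 - p
    return a ** 2 * (q / p) * ((p - c) / (1 - c)) ** 2
```

With c = 0 the model reduces to the 2PL, whose information a^2 * P * Q peaks at theta = b with value a^2 / 4; with c > 0 a grid search shows the peak lies above b.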

  20. Space and time optimization of nuclear reactors by means of the Pontryagin principle

    International Nuclear Information System (INIS)

    Anton, V.

    1979-01-01

    A numerical method is presented for solving space-dependent optimization problems concerning a functional for one-dimensional geometries in the few-group diffusion approximation. General dimensional analysis was applied to derive relations for the maximum of a functional and the limiting values of the constraints. Two procedures were given for calculating the anisotropic diffusion coefficients in order to improve the results of the diffusion approximation. Two procedures are also presented for collapsing the microscopic multigroup cross sections, one general and one specific to space-dependent optimization problems solved by means of the Pontryagin maximum principle. Neutron spectrum optimization is performed to ensure the burnup of the Pu-239 isotope produced in a thermal nuclear reactor. A procedure is also given for the minimization of a finite set of functionals by means of the Pontryagin maximum principle. A method for determining the characteristics of fission pseudo-products is formulated in the one-group and multigroup cases. This method is applied in the optimization of the burnup in nuclear reactors with fuel electric cells. A procedure to minimize the number of fuel burnup equations is described. The optimization problems presented and solved in this work point to the efficiency of the maximum principle. Each problem or method presented in the various chapters is accompanied by considerations concerning dual problems and possibilities of further research development. (author)
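For orientation, the finite-dimensional maximum principle underlying these procedures can be stated compactly (the standard textbook form; the reactor-specific functionals and constraints are those of the paper itself):

```latex
% Dynamics \dot{x} = f(x, u), cost J = \int_0^T L(x, u)\,dt, control constraint u \in U.
% Pontryagin Hamiltonian with adjoint (costate) \psi:
H(x, u, \psi) = \psi^{\mathsf{T}} f(x, u) - L(x, u)

% Adjoint equation and maximum condition along an optimal pair (x^*, u^*):
\dot{\psi} = -\left.\frac{\partial H}{\partial x}\right|_{(x^*, u^*)},
\qquad
H\big(x^*(t), u^*(t), \psi(t)\big) = \max_{u \in U} H\big(x^*(t), u, \psi(t)\big)
```

The candidate optimal control is thus obtained pointwise in time by maximizing the Hamiltonian over the admissible control set.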