WorldWideScience

Sample records for maximum principle theory

  1. The Maximum Entropy Principle and the Modern Portfolio Theory

    Directory of Open Access Journals (Sweden)

    Ailton Cassetari

    2003-12-01

In this work, a capital allocation methodology based on the Principle of Maximum Entropy is developed. Shannon's entropy is used as the measure; questions concerning Modern Portfolio Theory are also discussed. In particular, the methodology is tested by a systematic comparison with: 1) the mean-variance (Markowitz) approach and 2) the mean-VaR approach (capital allocation based on the Value-at-Risk concept). In principle, such confrontations show the plausibility and effectiveness of the developed method.

  2. Weak scale from the maximum entropy principle

    Science.gov (United States)

    Hamada, Yuta; Kawai, Hikaru; Kawana, Kiyoharu

    2015-03-01

The theory of the multiverse and wormholes suggests that the parameters of the Standard Model (SM) are fixed in such a way that the radiation entropy of the S³ universe at the final stage, S_rad, becomes maximum, which we call the maximum entropy principle. Although it is difficult to confirm this principle generally, for a few parameters of the SM we can check whether S_rad actually becomes maximum at the observed values. In this paper, we regard S_rad at the final stage as a function of the weak scale (the Higgs expectation value) v_h, and show that it becomes maximum around v_h = O(300 GeV) when the dimensionless couplings in the SM, i.e., the Higgs self-coupling, the gauge couplings, and the Yukawa couplings, are fixed. Roughly speaking, we find that the weak scale is given by v_h ∼ T_BBN² / (M_pl y_e⁵), where y_e is the Yukawa coupling of the electron, T_BBN is the temperature at which Big Bang nucleosynthesis starts, and M_pl is the Planck mass.
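
As a rough plausibility check of the quoted scaling (not from the paper's code; the input values below are standard order-of-magnitude assumptions), a few lines of Python reproduce the order of magnitude:

```python
# Back-of-envelope check of v_h ~ T_BBN^2 / (M_pl * y_e^5); all values assumed.
T_BBN = 1e-3    # GeV, rough temperature at the onset of Big Bang nucleosynthesis (~1 MeV)
M_pl  = 1.2e19  # GeV, Planck mass
y_e   = 2.9e-6  # electron Yukawa coupling, ~ sqrt(2)*m_e / (246 GeV)

v_h = T_BBN**2 / (M_pl * y_e**5)
print(f"v_h ~ {v_h:.0f} GeV")  # comes out of order a few hundred GeV
```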

  3. Nonsymmetric entropy and maximum nonsymmetric entropy principle

    International Nuclear Information System (INIS)

    Liu Chengshi

    2009-01-01

Within the framework of a statistical model, the concept of nonsymmetric entropy, which generalizes the concepts of Boltzmann's entropy and Shannon's entropy, is defined. The maximum nonsymmetric entropy principle is proved. Some important distribution laws, such as the power law, can be derived from this principle naturally. In particular, nonsymmetric entropy is more convenient than other entropies, such as Tsallis's entropy, in deriving power laws.

  4. On an Objective Basis for the Maximum Entropy Principle

    Directory of Open Access Journals (Sweden)

    David J. Miller

    2015-01-01

In this letter, we elaborate on some of the issues raised by a recent paper by Neapolitan and Jiang concerning the maximum entropy (ME) principle and alternative principles for estimating probabilities consistent with known, measured constraint information. We argue that the ME solution for the “problematic” example introduced by Neapolitan and Jiang has a stronger objective basis, rooted in results from information theory, than their alternative proposed solution. We also raise some technical concerns about the Bayesian analysis in their work, which was used to independently support their alternative to the ME solution. The letter concludes by noting some open problems involving maximum entropy statistical inference.

  5. Ethical principles and theories.

    Science.gov (United States)

    Schultz, R C

    1993-01-01

Ethical theory about what is right and good in human conduct lies behind the issues practitioners face and the codes they turn to for guidance; it also provides guidance for actions, practices, and policies. Principles of obligation, such as egoism, utilitarianism, and deontology, offer general answers to the question, "Which acts/practices are morally right?" A re-emerging alternative to using such principles to assess individual conduct is to center normative theory on personal virtues. For structuring society's institutions, principles of social justice offer alternative answers to the question, "How should social benefits and burdens be distributed?" But human concerns about right and good call for more than just theoretical responses. Some critics (e.g., the postmodernists and the feminists) charge that normative ethical theorizing is a misguided enterprise. However, that charge should be taken as a caution and not as a refutation of normative ethical theorizing.

  6. Analogue of Pontryagin's maximum principle for multiple integrals minimization problems

    OpenAIRE

Zelikin, Mikhail

    2016-01-01

A theorem analogous to Pontryagin's maximum principle is proved for multiple integrals. Unlike the usual maximum principle, the maximum is taken not over all matrices but only over matrices of rank one. Examples are given.

  7. Objective Bayesianism and the Maximum Entropy Principle

    Directory of Open Access Journals (Sweden)

    Jon Williamson

    2013-09-01

Objective Bayesian epistemology invokes three norms: the strengths of our beliefs should be probabilities; they should be calibrated to our evidence of physical probabilities; and they should otherwise equivocate sufficiently between the basic propositions that we can express. The three norms are sometimes explicated by appealing to the maximum entropy principle, which says that a belief function should be the probability function, from all those calibrated to evidence, that has maximum entropy. However, the three norms of objective Bayesianism are usually justified in different ways. In this paper, we show that the three norms can all be subsumed under a single justification in terms of minimising worst-case expected loss. This, in turn, is equivalent to maximising a generalised notion of entropy. We suggest that requiring language invariance, in addition to minimising worst-case expected loss, motivates maximisation of standard entropy as opposed to maximisation of other instances of generalised entropy. Our argument also provides a qualified justification for updating degrees of belief by Bayesian conditionalisation. However, conditional probabilities play a less central part in the objective Bayesian account than they do under the subjective view of Bayesianism, leading to a reduced role for Bayes’ Theorem.

  8. Fundamental Principle for Quantum Theory

    OpenAIRE

    Khrennikov, Andrei

    2002-01-01

We propose a principle, the law of statistical balance for basic physical observables, which distinguishes quantum statistical theory among all other statistical theories of measurement. It seems that this principle might play a role in quantum theory similar to the role of Einstein's relativity principle.

  9. On the Pontryagin maximum principle for systems with delays. Economic applications

    Science.gov (United States)

    Kim, A. V.; Kormyshev, V. M.; Kwon, O. B.; Mukhametshin, E. R.

    2017-11-01

The Pontryagin maximum principle [6] is the keystone of finite-dimensional optimal control theory [1, 2, 5]. Ever since the maximum principle was discovered, it has been important to extend it to various classes of dynamical systems. In the paper we consider some aspects of the application of i-smooth analysis [3, 4] in the theory of the Pontryagin maximum principle [6] for systems with delays; the obtained results can be applied in elaborating optimal program controls in economic models with delays.

  10. Maximum entropy principle and hydrodynamic models in statistical mechanics

    International Nuclear Information System (INIS)

    Trovato, M.; Reggiani, L.

    2012-01-01

This review presents the state of the art of the maximum entropy principle (MEP) in its classical and quantum (QMEP) formulations. Within the classical MEP we overview a general theory able to provide, in a dynamical context, the macroscopic relevant variables for carrier transport in the presence of electric fields of arbitrary strength. For the macroscopic variables the linearized maximum entropy approach is developed, including full-band effects within a total energy scheme. Under spatially homogeneous conditions, we construct a closed set of hydrodynamic equations for the small-signal (dynamic) response of the macroscopic variables. The coupling between the driving field and the energy dissipation is analyzed quantitatively by using an arbitrary number of moments of the distribution function. Analogously, the theoretical approach is applied to many one-dimensional n⁺nn⁺ submicron Si structures by using different band-structure models, different doping profiles, and different applied biases, and is validated by comparing numerical calculations with ensemble Monte Carlo simulations and with available experimental data. Within the quantum MEP we introduce a quantum entropy functional of the reduced density matrix, and the principle of quantum maximum entropy is then asserted as a fundamental principle of quantum statistical mechanics. Accordingly, we have developed a comprehensive theoretical formalism to construct rigorously a closed quantum hydrodynamic transport model within a Wigner-function approach. The theory is formulated both in thermodynamic equilibrium and nonequilibrium conditions, and the quantum contributions are obtained by only assuming that the Lagrange multipliers can be expanded in powers of ħ², where ħ is the reduced Planck constant. In particular, by using an arbitrary number of moments, we prove that: i) on a macroscopic scale all nonlocal effects, compatible with the uncertainty principle, are imputable to high-order spatial derivatives both of the…

  11. Maximum Principles for Discrete and Semidiscrete Reaction-Diffusion Equation

    Directory of Open Access Journals (Sweden)

    Petr Stehlík

    2015-01-01

We study reaction-diffusion equations with a general reaction function f on one-dimensional lattices with continuous or discrete time, u_x′ (or Δ_t u_x) = k(u_{x−1} − 2u_x + u_{x+1}) + f(u_x), x ∈ ℤ. We prove weak and strong maximum and minimum principles for the corresponding initial-boundary value problems. Whereas the maximum principles in the semidiscrete case (continuous time) exhibit similar features to those of the fully continuous reaction-diffusion model, in the discrete case the weak maximum principle holds for a smaller class of functions and the strong maximum principle is valid in a weaker sense. We describe in detail how the validity of maximum principles depends on the nonlinearity and the time step. We illustrate our results on the Nagumo equation with the bistable nonlinearity.
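
A minimal numerical sketch of the semidiscrete (continuous-time, discrete-space) case, assuming k = 1, the bistable Nagumo nonlinearity f(u) = u(1 − u)(u − a) with a = 0.3, forward-Euler time stepping, and a crude boundary treatment (all choices ours, not the paper's):

```python
import numpy as np

def rhs(u, k=1.0, a=0.3):
    lap = np.zeros_like(u)
    lap[1:-1] = u[:-2] - 2.0*u[1:-1] + u[2:]   # lattice Laplacian u_{x-1} - 2u_x + u_{x+1}
    return k*lap + u*(1.0 - u)*(u - a)         # bistable Nagumo term f(u) = u(1-u)(u-a)

rng = np.random.default_rng(0)
u = rng.random(50)                             # initial data inside the interval [0, 1]
dt = 1e-3
for _ in range(20000):                         # explicit (forward Euler) time stepping;
    u = u + dt*rhs(u)                          # endpoints evolve by the reaction term only

# Weak maximum/minimum principle check: the solution stays inside [0, 1].
print(u.min() >= 0.0, u.max() <= 1.0)
```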

  12. Fundamental principles of quantum theory

    International Nuclear Information System (INIS)

    Bugajski, S.

    1980-01-01

    After introducing general versions of three fundamental quantum postulates - the superposition principle, the uncertainty principle and the complementarity principle - the question of whether the three principles are sufficiently strong to restrict the general Mackey description of quantum systems to the standard Hilbert-space quantum theory is discussed. An example which shows that the answer must be negative is constructed. An abstract version of the projection postulate is introduced and it is demonstrated that it could serve as the missing physical link between the general Mackey description and the standard quantum theory. (author)

  13. Application of the maximum entropy production principle to electrical systems

    International Nuclear Information System (INIS)

    Christen, Thomas

    2006-01-01

For a simple class of electrical systems, the principle of the maximum entropy production rate (MaxEP) is discussed. First, we compare the MaxEP principle and the principle of the minimum entropy production rate and illustrate the superiority of the MaxEP principle for the example of two parallel constant resistors. Secondly, we show that the Steenbeck principle for the electric arc as well as the ohmic contact behaviour of space-charge limited conductors follow from the MaxEP principle. In line with work by Dewar, the investigations seem to suggest that the MaxEP principle can also be applied to systems far from equilibrium, provided appropriate information is available that enters the constraints of the optimization problem. Finally, we apply the MaxEP principle to a mesoscopic system and show that the universal conductance quantum, e²/h, of a one-dimensional ballistic conductor can be estimated.
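
The conductance quantum mentioned in the last sentence is straightforward to evaluate numerically; a tiny sketch using SciPy's CODATA constants (our illustration, not from the paper):

```python
# Numeric value of the conductance quantum e^2/h quoted in the abstract.
from scipy.constants import e, h

G0 = e**2 / h
print(f"e^2/h = {G0:.3e} S")  # ≈ 3.874e-05 siemens
```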

  14. MAXIMUM PRINCIPLE FOR SUBSONIC FLOW WITH VARIABLE ENTROPY

    Directory of Open Access Journals (Sweden)

Sizykh, Grigory B.

    2017-01-01

The maximum principle for subsonic flow holds for stationary irrotational subsonic gas flows. According to this principle, if the value of the velocity is not constant everywhere, then its maximum is achieved on the boundary, and only on the boundary, of the considered domain. This property is used when designing the form of an aircraft with a maximum critical value of the Mach number: it is believed that if the local Mach number is less than unity in the incoming flow and on the body surface, then the Mach number is less than unity at all points of the flow. The known proof of the maximum principle for subsonic flow is based on the assumption that in the whole considered area of the flow the pressure is a function of density. For an ideal and perfect gas (the role of diffusion is negligible, and the Mendeleev-Clapeyron law is fulfilled), the pressure is a function of density if the entropy is constant in the entire considered area of the flow. An example is shown of a stationary subsonic irrotational flow in which the entropy has different values on different streamlines and the pressure is not a function of density. Applying the maximum principle for subsonic flow to such a flow would be unreasonable. This example shows the relevance of the question about the location of the points of maximum velocity when the entropy is not constant. To clarify the regularities of the location of these points, an analysis of the complete Euler equations (without any simplifying assumptions) was performed in the 3-D case. A new proof of the maximum principle for subsonic flow is proposed; this proof does not rely on the assumption that the pressure is a function of density. Thus, it is shown that the maximum principle for subsonic flow is true for stationary subsonic irrotational flows of an ideal perfect gas with variable entropy.

  15. Applications of the maximum entropy principle in nuclear physics

    International Nuclear Information System (INIS)

    Froehner, F.H.

    1990-01-01

Soon after the advent of information theory the principle of maximum entropy was recognized as furnishing the missing rationale for the familiar rules of classical thermodynamics. More recently it has also been applied successfully in nuclear physics. As an elementary example we derive a physically meaningful macroscopic description of the spectrum of neutrons emitted in nuclear fission, and compare the well-known result with accurate data on ²⁵²Cf. A second example, derivation of an expression for resonance-averaged cross sections for nuclear reactions like scattering or fission, is less trivial. Entropy maximization, constrained by given transmission coefficients, yields probability distributions for the R- and S-matrix elements, from which average cross sections can be calculated. If constrained only by the range of the spectrum of compound-nuclear levels it produces the Gaussian Orthogonal Ensemble (GOE) of Hamiltonian matrices that again yields expressions for average cross sections. Both avenues give practically the same numbers in spite of the quite different cross section formulae. These results were employed in a new model-aided evaluation of the ²³⁸U neutron cross sections in the unresolved resonance region. (orig.)

  16. On a Weak Discrete Maximum Principle for hp-FEM

    Czech Academy of Sciences Publication Activity Database

    Šolín, Pavel; Vejchodský, Tomáš

    -, č. 209 (2007), s. 54-65 ISSN 0377-0427 R&D Projects: GA ČR(CZ) GA102/05/0629 Institutional research plan: CEZ:AV0Z20570509; CEZ:AV0Z10190503 Keywords : discrete maximum principle * hp-FEM Subject RIV: JA - Electronics ; Optoelectronics, Electrical Engineering Impact factor: 0.943, year: 2007

  17. The constraint rule of the maximum entropy principle

    NARCIS (Netherlands)

    Uffink, J.

    1995-01-01

The principle of maximum entropy is a method for assigning values to probability distributions on the basis of partial information. In usual formulations of this and related methods of inference one assumes that this partial information takes the form of a constraint on allowed probability distributions.

  18. A Maximum Principle for SDEs of Mean-Field Type

    Energy Technology Data Exchange (ETDEWEB)

    Andersson, Daniel, E-mail: danieand@math.kth.se; Djehiche, Boualem, E-mail: boualem@math.kth.se [Royal Institute of Technology, Department of Mathematics (Sweden)

    2011-06-15

    We study the optimal control of a stochastic differential equation (SDE) of mean-field type, where the coefficients are allowed to depend on some functional of the law as well as the state of the process. Moreover the cost functional is also of mean-field type, which makes the control problem time inconsistent in the sense that the Bellman optimality principle does not hold. Under the assumption of a convex action space a maximum principle of local form is derived, specifying the necessary conditions for optimality. These are also shown to be sufficient under additional assumptions. This maximum principle differs from the classical one, where the adjoint equation is a linear backward SDE, since here the adjoint equation turns out to be a linear mean-field backward SDE. As an illustration, we apply the result to the mean-variance portfolio selection problem.

  19. A Maximum Principle for SDEs of Mean-Field Type

    International Nuclear Information System (INIS)

    Andersson, Daniel; Djehiche, Boualem

    2011-01-01

    We study the optimal control of a stochastic differential equation (SDE) of mean-field type, where the coefficients are allowed to depend on some functional of the law as well as the state of the process. Moreover the cost functional is also of mean-field type, which makes the control problem time inconsistent in the sense that the Bellman optimality principle does not hold. Under the assumption of a convex action space a maximum principle of local form is derived, specifying the necessary conditions for optimality. These are also shown to be sufficient under additional assumptions. This maximum principle differs from the classical one, where the adjoint equation is a linear backward SDE, since here the adjoint equation turns out to be a linear mean-field backward SDE. As an illustration, we apply the result to the mean-variance portfolio selection problem.

  20. Twenty-five years of maximum-entropy principle

    Science.gov (United States)

    Kapur, J. N.

    1983-04-01

    The strengths and weaknesses of the maximum entropy principle (MEP) are examined and some challenging problems that remain outstanding at the end of the first quarter century of the principle are discussed. The original formalism of the MEP is presented and its relationship to statistical mechanics is set forth. The use of MEP for characterizing statistical distributions, in statistical inference, nonlinear spectral analysis, transportation models, population density models, models for brand-switching in marketing and vote-switching in elections is discussed. Its application to finance, insurance, image reconstruction, pattern recognition, operations research and engineering, biology and medicine, and nonparametric density estimation is considered.
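
As a concrete illustration of how the MEP characterizes statistical distributions (Jaynes' classic "Brandeis dice" setting; the target mean 4.5 and the solver bracket below are our assumptions), the exponential-family maximizer can be computed numerically:

```python
# Maximum entropy distribution on {1,...,6} subject to a prescribed mean:
# the maximizer has Gibbs form p_i ∝ exp(lam*i); we solve for lam by root finding.
import numpy as np
from scipy.optimize import brentq

x = np.arange(1, 7)

def mean_for(lam):
    w = np.exp(lam * x)          # unnormalized Gibbs weights
    return (x * w).sum() / w.sum()

lam = brentq(lambda l: mean_for(l) - 4.5, -5.0, 5.0)   # enforce the constraint <x> = 4.5
p = np.exp(lam * x); p /= p.sum()
print(np.round(p, 4), (x * p).sum())                   # maxent distribution, mean ≈ 4.5
```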

  1. Combining Experiments and Simulations Using the Maximum Entropy Principle

    DEFF Research Database (Denmark)

    Boomsma, Wouter; Ferkinghoff-Borg, Jesper; Lindorff-Larsen, Kresten

    2014-01-01

Given the limited accuracy of force fields, macromolecular simulations sometimes produce results that are not in quantitative agreement with experimental data. The principle of maximum entropy is a general procedure for constructing probability distributions in the light of new data, making it a natural tool in cases when an initial model provides results that are at odds with experiments. The number of maximum entropy … in the context of a simple example, after which we proceed with a real-world application in the field of molecular simulations, where the maximum entropy procedure has recently provided new insight. Three very recent papers have explored this problem using the maximum entropy approach, providing both new theoretical and practical insights to the problem. We highlight each of these contributions in turn and conclude with a discussion on remaining challenges.

  2. The discrete maximum principle for Galerkin solutions of elliptic problems

    Czech Academy of Sciences Publication Activity Database

    Vejchodský, Tomáš

    2012-01-01

    Roč. 10, č. 1 (2012), s. 25-43 ISSN 1895-1074 R&D Projects: GA AV ČR IAA100760702 Institutional research plan: CEZ:AV0Z10190503 Keywords : discrete maximum principle * monotone methods * Galerkin solution Subject RIV: BA - General Mathematics Impact factor: 0.405, year: 2012 http://www.springerlink.com/content/x73624wm23x4wj26

  3. Discrete maximum principle for the P1 - P0 weak Galerkin finite element approximations

    Science.gov (United States)

    Wang, Junping; Ye, Xiu; Zhai, Qilong; Zhang, Ran

    2018-06-01

    This paper presents two discrete maximum principles (DMP) for the numerical solution of second order elliptic equations arising from the weak Galerkin finite element method. The results are established by assuming an h-acute angle condition for the underlying finite element triangulations. The mathematical theory is based on the well-known De Giorgi technique adapted in the finite element context. Some numerical results are reported to validate the theory of DMP.

  4. Maximum principles for boundary-degenerate linear parabolic differential operators

    OpenAIRE

    Feehan, Paul M. N.

    2013-01-01

We develop weak and strong maximum principles for boundary-degenerate, linear, parabolic, second-order partial differential operators, Lu := −u_t − tr(a D²u) − ⟨b, Du⟩ + cu, with partial Dirichlet boundary conditions. The coefficient, a(t,x), is assumed to vanish along a non-empty open subset, ∂₀Q, called the degenerate boundary portion, of the parabolic boundary, ∂Q, of the domain Q ⊂ ℝ^{d+1}, while a(t,x) may be non-zero at po…

  5. Peyronie's Reconstruction for Maximum Length and Girth Gain: Geometrical Principles

    Directory of Open Access Journals (Sweden)

    Paulo H. Egydio

    2008-01-01

Peyronie's disease has been associated with penile shortening and some degree of erectile dysfunction. Surgical reconstruction should be based on producing a functional penis, that is, straightening the penis with enough rigidity for sexual intercourse. The procedure should be discussed preoperatively in terms of length and girth reconstruction in order to improve patient satisfaction. Tunical reconstruction for maximum penile length and girth restoration should be based on the maximum possible length of the dissected neurovascular bundle and the application of geometrical principles to define the precise site and size of the tunical incision and grafting procedure. As penile rectification and rigidity are required to achieve complete functional restoration of the penis, and 20 to 54% of patients experience associated erectile dysfunction, penile straightening alone may not be enough to provide complete functional restoration. Therefore, phosphodiesterase inhibitors, self-injection, or a penile prosthesis may need to be added in some cases.

  6. Optimal Control of Polymer Flooding Based on Maximum Principle

    Directory of Open Access Journals (Sweden)

    Yang Lei

    2012-01-01

Polymer flooding is one of the most important technologies for enhanced oil recovery (EOR). In this paper, an optimal control model of distributed parameter systems (DPSs) for polymer injection strategies is established, which involves the performance index as the maximum of the profit, the governing equations as the fluid flow equations of polymer flooding, and the inequality constraint as the polymer concentration limitation. To cope with the optimal control problem (OCP) of this DPS, the necessary conditions for optimality are obtained through application of the calculus of variations and Pontryagin’s weak maximum principle. A gradient method is proposed for the computation of optimal injection strategies. The numerical results of an example illustrate the effectiveness of the proposed method.

  7. A Stochastic Maximum Principle for General Mean-Field Systems

    International Nuclear Information System (INIS)

    Buckdahn, Rainer; Li, Juan; Ma, Jin

    2016-01-01

In this paper we study the optimal control problem for a class of general mean-field stochastic differential equations, in which the coefficients depend, nonlinearly, on both the state process and its law. In particular, we assume that the control set is a general open set that is not necessarily convex, and the coefficients are only continuous in the control variable without any further regularity or convexity. We validate the approach of Peng (SIAM J Control Optim 2(4):966–979, 1990) by considering the second-order variational equations and the corresponding second-order adjoint process in this setting, and we extend the Stochastic Maximum Principle of Buckdahn et al. (Appl Math Optim 64(2):197–216, 2011) to this general case.

  8. A Stochastic Maximum Principle for General Mean-Field Systems

    Energy Technology Data Exchange (ETDEWEB)

    Buckdahn, Rainer, E-mail: Rainer.Buckdahn@univ-brest.fr [Université de Bretagne-Occidentale, Département de Mathématiques (France); Li, Juan, E-mail: juanli@sdu.edu.cn [Shandong University, Weihai, School of Mathematics and Statistics (China); Ma, Jin, E-mail: jinma@usc.edu [University of Southern California, Department of Mathematics (United States)

    2016-12-15

In this paper we study the optimal control problem for a class of general mean-field stochastic differential equations, in which the coefficients depend, nonlinearly, on both the state process and its law. In particular, we assume that the control set is a general open set that is not necessarily convex, and the coefficients are only continuous in the control variable without any further regularity or convexity. We validate the approach of Peng (SIAM J Control Optim 2(4):966–979, 1990) by considering the second-order variational equations and the corresponding second-order adjoint process in this setting, and we extend the Stochastic Maximum Principle of Buckdahn et al. (Appl Math Optim 64(2):197–216, 2011) to this general case.

  9. Optimal control of a double integrator a primer on maximum principle

    CERN Document Server

    Locatelli, Arturo

    2017-01-01

This book provides an introductory yet rigorous treatment of Pontryagin’s Maximum Principle and its application to optimal control problems when simple and complex constraints act on state and control variables, the two classes of variable in such problems. The achievements resulting from first-order variational methods are illustrated with reference to a large number of problems that, almost universally, relate to a particular second-order, linear and time-invariant dynamical system, referred to as the double integrator. The book is ideal for students who have some knowledge of the basics of system and control theory and possess the calculus background typically taught in undergraduate curricula in engineering. Optimal control theory, of which the Maximum Principle must be considered a cornerstone, has been very popular ever since the late 1950s. However, the possibly excessive initial enthusiasm engendered by its perceived capability to solve any kind of problem gave way to its equally unjustified rejection…
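
For a flavor of what the Maximum Principle delivers for the double integrator, here is a minimal sketch (our own illustration, not from the book) of the textbook time-optimal feedback law, which PMP shows to be bang-bang with at most one switch:

```python
import numpy as np

def u_opt(x1, x2):
    """Time-optimal feedback for x1' = x2, x2' = u, |u| <= 1 (classic PMP result)."""
    s = x1 + 0.5 * x2 * abs(x2)       # switching function: zero on the curve x1 = -x2|x2|/2
    if s > 0:
        return -1.0
    if s < 0:
        return 1.0
    return -np.sign(x2)               # on the switching curve: slide along it to the origin

x = np.array([2.0, 0.0])              # start at rest, two units from the origin
dt, t = 1e-3, 0.0
while np.linalg.norm(x) > 1e-2 and t < 10.0:
    u = u_opt(x[0], x[1])
    x = x + dt * np.array([x[1], u])  # forward Euler integration of the dynamics
    t += dt
print(f"reached {x} at t ~ {t:.2f} (theory: 2*sqrt(2) ~ 2.83)")
```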

  10. Spatial data modelling and maximum entropy theory

    Czech Academy of Sciences Publication Activity Database

    Klimešová, Dana; Ocelíková, E.

    2005-01-01

    Roč. 51, č. 2 (2005), s. 80-83 ISSN 0139-570X Institutional research plan: CEZ:AV0Z10750506 Keywords : spatial data classification * distribution function * error distribution Subject RIV: BD - Theory of Information

  11. Principles of chiral perturbation theory

    International Nuclear Information System (INIS)

    Leutwyler, H.

    1995-01-01

    An elementary discussion of the main concepts used in chiral perturbation theory is given in textbooks and a more detailed picture of the applications may be obtained from the reviews. Concerning the foundations of the method, the literature is comparatively scarce. So, I will concentrate on the basic concepts and explain why the method works. (author)

  12. Dynamics of non-stationary processes that follow the maximum of the Rényi entropy principle.

    Science.gov (United States)

    Shalymov, Dmitry S; Fradkov, Alexander L

    2016-01-01

We propose dynamics equations which describe the behaviour of non-stationary processes that follow the maximum Rényi entropy principle. The equations are derived on the basis of the speed-gradient principle originating in control theory. The maximum Rényi entropy principle is analysed for discrete and continuous cases, and both a discrete random variable and a probability density function (PDF) are used. We consider mass conservation and energy conservation constraints and demonstrate the uniqueness of the limit distribution and asymptotic convergence of the PDF for both cases. The coincidence of the limit distribution of the proposed equations with the Rényi distribution is examined.
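
A small sketch of the Rényi entropy itself (notation assumed: H_q(p) = log(Σ_i p_i^q)/(1 − q)), showing that the uniform distribution attains the larger value and that q → 1 recovers the Shannon entropy; this is a toy illustration, not the paper's dynamics:

```python
import numpy as np

def renyi(p, q):
    # Rényi entropy H_q(p) = log(sum_i p_i^q) / (1 - q), for q != 1
    return np.log((p**q).sum()) / (1.0 - q)

p_unif = np.full(4, 0.25)
p_skew = np.array([0.7, 0.1, 0.1, 0.1])
for q in (0.5, 0.999, 2.0):              # q = 0.999 approximates the Shannon limit log 4
    print(q, renyi(p_unif, q), renyi(p_skew, q))
```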

  13. Gauge theory and variational principles

    CERN Document Server

    Bleecker, David

    2005-01-01

This text provides a framework for describing and organizing the basic forces of nature and the interactions of subatomic particles. A detailed and self-contained mathematical account of gauge theory, it is geared toward beginning graduate students and advanced undergraduates in mathematics and physics. This well-organized treatment supplements its rigor with intuitive ideas. Starting with an examination of principal fiber bundles and connections, the text explores curvature; particle fields, Lagrangians, and gauge invariance; Lagrange's equation for particle fields; and the inhomogeneous field equations.

  14. The maximum sizes of large scale structures in alternative theories of gravity

    Energy Technology Data Exchange (ETDEWEB)

    Bhattacharya, Sourav [IUCAA, Pune University Campus, Post Bag 4, Ganeshkhind, Pune, 411 007 India (India); Dialektopoulos, Konstantinos F. [Dipartimento di Fisica, Università di Napoli ' Federico II' , Complesso Universitario di Monte S. Angelo, Edificio G, Via Cinthia, Napoli, I-80126 Italy (Italy); Romano, Antonio Enea [Instituto de Física, Universidad de Antioquia, Calle 70 No. 52–21, Medellín (Colombia); Skordis, Constantinos [Department of Physics, University of Cyprus, 1 Panepistimiou Street, Nicosia, 2109 Cyprus (Cyprus); Tomaras, Theodore N., E-mail: sbhatta@iitrpr.ac.in, E-mail: kdialekt@gmail.com, E-mail: aer@phys.ntu.edu.tw, E-mail: skordis@ucy.ac.cy, E-mail: tomaras@physics.uoc.gr [Institute of Theoretical and Computational Physics and Department of Physics, University of Crete, 70013 Heraklion (Greece)

    2017-07-01

The maximum size of a cosmic structure is given by the maximum turnaround radius: the scale where the attraction due to its mass is balanced by the repulsion due to dark energy. We derive generic formulae for the estimation of the maximum turnaround radius in any theory of gravity obeying the Einstein equivalence principle, in two situations: on a spherically symmetric spacetime and on a perturbed Friedmann-Robertson-Walker spacetime. We show that the two formulae agree. As an application of our formula, we calculate the maximum turnaround radius in the case of the Brans-Dicke theory of gravity. We find that for this theory such maximum sizes always lie above the ΛCDM value, by a factor 1 + 1/(3ω), where ω ≫ 1 is the Brans-Dicke parameter, implying consistency of the theory with current data.

  15. Intervention principles: Theory and practice

    International Nuclear Information System (INIS)

    Jensen, P.H.; Crick, M.J.

    2000-01-01

    After the Chernobyl accident, it became clear that some clarification of the basic principles for intervention was necessary as well as more internationally recognised numerical guidance on intervention levels. There was in the former USSR and in Europe much confusion over, and lack of recognition of, the very different origins and purposes of dose limits for controlling deliberate increases in radiation exposure for practices and dose levels at which intervention is prompted to decrease existing radiation exposure. In the latest recommendations from ICRP in its Publication 60, a clear distinction is made between the radiation protection systems for a practice and for intervention. According to ICRP, the protective measures forming a program of intervention, which always have some disadvantages, should each be justified on their own merit in the sense that they should do more good than harm, and their form, scale, and duration should be optimised so as to do the most good. Intervention levels for protective actions can be established for many possible accident scenarios. For planning and preparedness purposes, a generic optimisation based on generic accident scenario calculations, should result in optimised generic intervention levels for each protective measure. The factors entering such an optimisation will on the benefit side include avertable doses and avertable risks as well as reassurance. On the harm side the factors include monetary costs, collective and individual risk for the action itself, social disruption and anxiety. More precise optimisation analyses based on real site and accident specific data can be carried out and result in specific intervention levels. It is desirable that values for easily measurable quantities such as dose rate and surface contamination density be developed as surrogates for intervention levels of avertable dose. However, it is important that these quantities should be used carefully and applied taking account of local

  16. Three faces of entropy for complex systems: Information, thermodynamics, and the maximum entropy principle

    Science.gov (United States)

    Thurner, Stefan; Corominas-Murtra, Bernat; Hanel, Rudolf

    2017-09-01

There are at least three distinct ways to conceptualize entropy: entropy as an extensive thermodynamic quantity of physical systems (Clausius, Boltzmann, Gibbs), entropy as a measure for information production of ergodic sources (Shannon), and entropy as a means for statistical inference on multinomial processes (Jaynes maximum entropy principle). Even though these notions represent fundamentally different concepts, the functional form of the entropy for thermodynamic systems in equilibrium, for ergodic sources in information theory, and for independent sampling processes in statistical systems, is degenerate, H(p) = −∑_i p_i log p_i. For many complex systems, which are typically history-dependent, nonergodic, and nonmultinomial, this is no longer the case. Here we show that for such processes, the three entropy concepts lead to different functional forms of entropy, which we will refer to as S_EXT for extensive entropy, S_IT for the source information rate in information theory, and S_MEP for the entropy functional that appears in the so-called maximum entropy principle, which characterizes the most likely observable distribution functions of a system. We explicitly compute these three entropy functionals for three concrete examples: for Pólya urn processes, which are simple self-reinforcing processes, for sample-space-reducing (SSR) processes, which are simple history-dependent processes that are associated with power-law statistics, and finally for multinomial mixture processes.
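
A toy illustration of the degenerate functional form quoted above (our example, not the paper's code): evaluating H(p) for the empirical distribution of a self-reinforcing Pólya urn run and for a fair coin:

```python
import numpy as np

def H(p):
    # Boltzmann-Gibbs-Shannon functional H(p) = -sum_i p_i log p_i
    p = p[p > 0]
    return -(p * np.log(p)).sum()

rng = np.random.default_rng(1)
urn = [0, 1]                                  # Pólya urn: draw a ball, return it plus a copy
for _ in range(10000):
    urn.append(rng.choice(urn))
p_urn = np.bincount(urn) / len(urn)           # empirical color distribution after the run
print(H(p_urn), H(np.array([0.5, 0.5])))      # urn entropy vs fair-coin value log 2 ≈ 0.693
```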

  17. Combining Experiments and Simulations Using the Maximum Entropy Principle

    DEFF Research Database (Denmark)

    Boomsma, Wouter; Ferkinghoff-Borg, Jesper; Lindorff-Larsen, Kresten

    2014-01-01

…in the context of a simple example, after which we proceed with a real-world application in the field of molecular simulations, where the maximum entropy procedure has recently provided new insight. Given the limited accuracy of force fields, macromolecular simulations sometimes produce results…

  18. A general maximum entropy framework for thermodynamic variational principles

    International Nuclear Information System (INIS)

    Dewar, Roderick C.

    2014-01-01

Minimum free energy principles are familiar in equilibrium thermodynamics, as expressions of the second law. They also appear in statistical mechanics as variational approximation schemes, such as the mean-field and steepest-descent approximations. These well-known minimum free energy principles are here unified and extended to any system analyzable by MaxEnt, including non-equilibrium systems. The MaxEnt Lagrangian associated with a generic MaxEnt distribution p defines a generalized potential Ψ for an arbitrary probability distribution p̂, such that Ψ is a minimum at p̂ = p. Minimization of Ψ with respect to p̂ thus constitutes a generic variational principle, and is equivalent to minimizing the Kullback-Leibler divergence between p̂ and p. Illustrative examples of min-Ψ are given for equilibrium and non-equilibrium systems. An interpretation of changes in Ψ is given in terms of the second law, although min-Ψ itself is an intrinsic variational property of MaxEnt that is distinct from the second law.

  19. A general maximum entropy framework for thermodynamic variational principles

    Energy Technology Data Exchange (ETDEWEB)

    Dewar, Roderick C., E-mail: roderick.dewar@anu.edu.au [Research School of Biology, The Australian National University, Canberra ACT 0200 (Australia)

    2014-12-05

Minimum free energy principles are familiar in equilibrium thermodynamics, as expressions of the second law. They also appear in statistical mechanics as variational approximation schemes, such as the mean-field and steepest-descent approximations. These well-known minimum free energy principles are here unified and extended to any system analyzable by MaxEnt, including non-equilibrium systems. The MaxEnt Lagrangian associated with a generic MaxEnt distribution p defines a generalized potential Ψ for an arbitrary probability distribution p̂, such that Ψ is a minimum at p̂ = p. Minimization of Ψ with respect to p̂ thus constitutes a generic variational principle, and is equivalent to minimizing the Kullback-Leibler divergence between p̂ and p. Illustrative examples of min-Ψ are given for equilibrium and non-equilibrium systems. An interpretation of changes in Ψ is given in terms of the second law, although min-Ψ itself is an intrinsic variational property of MaxEnt that is distinct from the second law.

  20. Einstein-Dirac theory in spin maximum I

    International Nuclear Information System (INIS)

    Crumeyrolle, A.

    1975-01-01

A unitary Einstein-Dirac theory, first in spin maximum 1, is constructed. An original feature of this article is that it is written without any tetrad techniques; only basic notions and existence conditions for spinor structures on pseudo-Riemannian fibre bundles are used. A coupling between the gravitational and electromagnetic fields is pointed out, in the geometric setting of the tangent bundle over space-time. Generalized Maxwell equations for inductive media in the presence of a gravitational field are obtained. The enlarged Einstein-Schroedinger theory gives a particular case of this E.D. theory; E.S. theory is a truncated E.D. theory in spin maximum 1. A close relation between the torsion-vector and Schroedinger's potential exists, and the nullity of the torsion-vector has a spinor meaning. Finally, the Petiau-Duffin-Kemmer theory is incorporated in this geometric setting.

  1. Drying principles and theory: An overview

    International Nuclear Information System (INIS)

    Ekechukwu, O.V.

    1995-10-01

A comprehensive review of the fundamental principles and theories governing the drying process is presented. Basic definitions are given. The development of contemporary models of drying of agricultural products is traced from the earliest reported sorption and moisture equilibrium models, through single-kernel product models, to thin-layer and deep-bed drying analysis. (author). 29 refs, 10 figs

  2. A maximum principle for time dependent transport in systems with voids

    International Nuclear Information System (INIS)

    Schofield, S.L.; Ackroyd, R.T.

    1996-01-01

    A maximum principle is developed for the first-order time dependent Boltzmann equation. The maximum principle is a generalization of Schofield's κ(θ) principle for the first-order steady state Boltzmann equation, and provides a treatment of time dependent transport in systems with void regions. The formulation comprises a direct least-squares minimization allied with a suitable choice of bilinear functional, and gives rise to a maximum principle whose functional is free of terms that have previously led to difficulties in treating void regions. (Author)

  3. Venus atmosphere profile from a maximum entropy principle

    Directory of Open Access Journals (Sweden)

    L. N. Epele

    2007-10-01

The variational method with constraints recently developed by Verkley and Gerkema to describe maximum-entropy atmospheric profiles is generalized to ideal gases, but with temperature-dependent specific heats. In so doing, an extended and non-standard potential temperature is introduced that is well suited for tackling the problem under consideration. This new formalism is successfully applied to the atmosphere of Venus. Three well-defined regions emerge in this atmosphere up to a height of 100 km from the surface: the lowest one, up to about 35 km, is adiabatic; a transition layer, located at the height of the cloud deck; and finally a third region which is practically isothermal.

  4. Can the maximum entropy principle be explained as a consistency requirement?

    NARCIS (Netherlands)

    Uffink, J.

    1997-01-01

The principle of maximum entropy is a general method to assign values to probability distributions on the basis of partial information. This principle, introduced by Jaynes in 1957, forms an extension of the classical principle of insufficient reason. It has been further generalized, both in…

  5. How multiplicity determines entropy and the derivation of the maximum entropy principle for complex systems.

    Science.gov (United States)

    Hanel, Rudolf; Thurner, Stefan; Gell-Mann, Murray

    2014-05-13

    The maximum entropy principle (MEP) is a method for obtaining the most likely distribution functions of observables from statistical systems by maximizing entropy under constraints. The MEP has found hundreds of applications in ergodic and Markovian systems in statistical mechanics, information theory, and statistics. For several decades there has been an ongoing controversy over whether the notion of the maximum entropy principle can be extended in a meaningful way to nonextensive, nonergodic, and complex statistical systems and processes. In this paper we start by reviewing how Boltzmann-Gibbs-Shannon entropy is related to multiplicities of independent random processes. We then show how the relaxation of independence naturally leads to the most general entropies that are compatible with the first three Shannon-Khinchin axioms, the (c,d)-entropies. We demonstrate that the MEP is a perfectly consistent concept for nonergodic and complex statistical systems if their relative entropy can be factored into a generalized multiplicity and a constraint term. The problem of finding such a factorization reduces to finding an appropriate representation of relative entropy in a linear basis. In a particular example we show that path-dependent random processes with memory naturally require specific generalized entropies. The example is to our knowledge the first exact derivation of a generalized entropy from the microscopic properties of a path-dependent random process.

  6. Setting the renormalization scale in QCD: The principle of maximum conformality

    DEFF Research Database (Denmark)

    Brodsky, S. J.; Di Giustino, L.

    2012-01-01

A key problem in making precise perturbative QCD predictions is the uncertainty in determining the renormalization scale μ of the running coupling α_s(μ²). The purpose of the running coupling in any gauge theory is to sum all terms involving the beta function; in fact, when the renormalization scale is set properly, all nonconformal β ≠ 0 terms in a perturbative expansion arising from renormalization are summed into the running coupling. The remaining terms in the perturbative series are then identical to those of a conformal theory, i.e., the corresponding theory with β = 0. The resulting scale-fixed predictions using the principle of maximum conformality (PMC) are independent of the choice of renormalization scheme, a key requirement of renormalization group invariance. The results avoid renormalon resummation and agree with QED scale setting in the Abelian limit…

  7. Stochastic control theory dynamic programming principle

    CERN Document Server

    Nisio, Makiko

    2015-01-01

This book offers a systematic introduction to optimal stochastic control theory via the dynamic programming principle, which is a powerful tool for analyzing control problems. First we consider completely observable control problems with finite horizons. Using a time discretization we construct a nonlinear semigroup related to the dynamic programming principle (DPP), whose generator provides the Hamilton–Jacobi–Bellman (HJB) equation, and we characterize the value function via the nonlinear semigroup, besides the viscosity solution theory. When we control not only the dynamics of a system but also the terminal time of its evolution, control-stopping problems arise. This problem is treated in the same framework, via the nonlinear semigroup. Its results are applicable to the American option price problem. Zero-sum two-player time-homogeneous stochastic differential games and viscosity solutions of the Isaacs equations arising from such games are studied via a nonlinear semigroup related to DPP (the min-max…

  8. A maximum principle for the first-order Boltzmann equation, incorporating a potential treatment of voids

    International Nuclear Information System (INIS)

    Schofield, S.L.

    1988-01-01

    Ackroyd's generalized least-squares method for solving the first-order Boltzmann equation is adapted to incorporate a potential treatment of voids. The adaptation comprises a direct least-squares minimization allied with a suitably-defined bilinear functional. The resulting formulation gives rise to a maximum principle whose functional does not contain terms of the type that have previously led to difficulties in treating void regions. The maximum principle is derived without requiring continuity of the flux at interfaces. The functional of the maximum principle is concluded to have an Euler-Lagrange equation given directly by the first-order Boltzmann equation. (author)

  9. Maximum principle and convergence of central schemes based on slope limiters

    KAUST Repository

    Mehmetoglu, Orhan; Popov, Bojan

    2012-01-01

    A maximum principle and convergence of second order central schemes is proven for scalar conservation laws in dimension one. It is well known that to establish a maximum principle a nonlinear piecewise linear reconstruction is needed and a typical choice is the minmod limiter. Unfortunately, this implies that the scheme uses a first order reconstruction at local extrema. The novelty here is that we allow local nonlinear reconstructions which do not reduce to first order at local extrema and still prove maximum principle and convergence. © 2011 American Mathematical Society.
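
A minimal MUSCL-type sketch with minmod-limited linear reconstruction and a local Lax-Friedrichs flux (an illustration of the limiter under discussion, not the authors' exact central scheme) for Burgers' equation, with a numerical check that the initial bounds are preserved:

```python
import numpy as np

def minmod(a, b):
    return np.where(a*b > 0, np.sign(a)*np.minimum(np.abs(a), np.abs(b)), 0.0)

N = 200
dx = 1.0 / N
dt = 0.4 * dx                         # CFL ~ 0.4 for |u| <= 1
x = np.arange(N) * dx
u = np.sin(2*np.pi*x)                 # periodic initial data

def dudt(u):
    s  = minmod(np.roll(u, -1) - u, u - np.roll(u, 1))     # limited slopes
    uL = u + 0.5*s                                         # left state at interface i+1/2
    uR = np.roll(u - 0.5*s, -1)                            # right state at interface i+1/2
    a  = np.maximum(np.abs(uL), np.abs(uR))                # local wave speed for f(u) = u^2/2
    F  = 0.5*(0.5*uL**2 + 0.5*uR**2) - 0.5*a*(uR - uL)     # local Lax-Friedrichs flux
    return -(F - np.roll(F, 1)) / dx

m0, M0 = u.min(), u.max()
for _ in range(400):
    u1 = u + dt*dudt(u)               # SSP-RK2 (Heun) time stepping
    u  = 0.5*(u + u1 + dt*dudt(u1))
print(m0 - 1e-12 <= u.min(), u.max() <= M0 + 1e-12)        # maximum principle check
```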

  10. Maximum Principles and Boundary Value Problems for First-Order Neutral Functional Differential Equations

    Directory of Open Access Journals (Sweden)

    Domoshnitsky Alexander

    2009-01-01

We obtain maximum principles for a first-order neutral functional differential equation involving linear continuous operators, two of which are positive, acting between the space of continuous functions and the space of essentially bounded functions defined on a given interval. New tests on positivity of the Cauchy function and its derivative are proposed. Results on existence and uniqueness of solutions for various boundary value problems are obtained on the basis of the maximum principles.

  11. Maximum principles for boundary-degenerate second-order linear elliptic differential operators

    OpenAIRE

    Feehan, Paul M. N.

    2012-01-01

    We prove weak and strong maximum principles, including a Hopf lemma, for smooth subsolutions to equations defined by linear, second-order, partial differential operators whose principal symbols vanish along a portion of the domain boundary. The boundary regularity property of the smooth subsolutions along this boundary vanishing locus ensures that these maximum principles hold irrespective of the sign of the Fichera function. Boundary conditions need only be prescribed on the complement in th...

  12. Principles of the theory of solids

    CERN Document Server

    Ziman, J M

    1972-01-01

Professor Ziman's classic textbook on the theory of solids was first published in 1964. This paperback edition is a reprint of the second edition, which was substantially revised and enlarged in 1972. The value and popularity of this textbook are well attested by reviewers' opinions and by the existence of several foreign-language editions, including German, Italian, Spanish, Japanese, Polish and Russian. The book gives a clear exposition of the elements of the physics of perfect crystalline solids. In discussing the principles, the author aims to give students an appreciation of the conditions which are necessary for the appearance of the various phenomena. A self-contained mathematical account is given of the simplest model that will demonstrate each principle. A grounding in quantum mechanics and knowledge of elementary facts about solids is assumed. This is therefore a textbook for advanced undergraduates and is also appropriate for graduate courses.

  13. Maximum Entropy and Theory Construction: A Reply to Favretti

    Directory of Open Access Journals (Sweden)

    John Harte

    2018-04-01

In the maximum entropy theory of ecology (METE), the form of a function describing the distribution of abundances over species and metabolic rates over individuals in an ecosystem is inferred using the maximum entropy inference procedure. Favretti shows that an alternative maximum entropy model exists that assumes the same prior knowledge and makes predictions that differ from METE’s. He shows that both cannot be correct and asserts that his is the correct one because it can be derived from a classic microstate-counting calculation. I clarify here exactly what the core entities and definitions are for METE, and discuss the relevance of two critical issues raised by Favretti: the existence of a counting procedure for microstates and the choice of definitions of the core elements of a theory. I emphasize that a theorist controls how the core entities of his or her theory are defined, and that nature is the final arbiter of the validity of a theory.

  14. A General Stochastic Maximum Principle for SDEs of Mean-field Type

    International Nuclear Information System (INIS)

    Buckdahn, Rainer; Djehiche, Boualem; Li Juan

    2011-01-01

We study the optimal control for stochastic differential equations (SDEs) of mean-field type, in which the coefficients depend on the state of the solution process as well as on its expected value. Moreover, the cost functional is also of mean-field type. This makes the control problem time inconsistent in the sense that the Bellman optimality principle does not hold. For a general action space a Peng-type stochastic maximum principle (Peng, S.: SIAM J. Control Optim. 2(4), 966–979, 1990) is derived, specifying the necessary conditions for optimality. This maximum principle differs from the classical one in the sense that here the first-order adjoint equation turns out to be a linear mean-field backward SDE, while the second-order adjoint equation remains the same as in Peng’s stochastic maximum principle.

  15. An extension of the maximum principle to multidimensional systems and its application in nuclear engineering problems

    International Nuclear Information System (INIS)

    Gilai, D.

    1976-01-01

The Maximum Principle deals with optimization problems for systems governed by ordinary differential equations, which include constraints on the state and control variables. The development of nuclear engineering confronted the designers of reactors, shielding and other nuclear devices with many requests for optimization and savings, and it was straightforward to use the Maximum Principle for solving optimization problems in nuclear engineering; in fact, it was widely used both in structural concept design and in dynamic control of nuclear systems. The main disadvantage of the Maximum Principle is that it is suitable only for systems which may be described by ordinary differential equations, e.g., one-dimensional systems. In the present work, starting from the variational approach, the original Maximum Principle is extended to multidimensional systems, and the principle which has been derived is of a more general form and is applicable to any system which can be defined by linear partial differential equations of any order. To check the applicability of the extended principle, two examples are solved: the first in nuclear shield design, where the goal is to construct a shield around a neutron-emitting source, using given materials, so that the total dose outside of the shielding boundaries is minimized; the second in material distribution design in the core of a power reactor, so that the power peak is minimized. For the second problem, an iterative method was developed. (B.G.)

  16. Maximum Principle for General Controlled Systems Driven by Fractional Brownian Motions

    International Nuclear Information System (INIS)

    Han Yuecai; Hu Yaozhong; Song Jian

    2013-01-01

    We obtain a maximum principle for stochastic control problem of general controlled stochastic differential systems driven by fractional Brownian motions (of Hurst parameter H>1/2). This maximum principle specifies a system of equations that the optimal control must satisfy (necessary condition for the optimal control). This system of equations consists of a backward stochastic differential equation driven by both fractional Brownian motions and the corresponding underlying standard Brownian motions. In addition to this backward equation, the maximum principle also involves the Malliavin derivatives. Our approach is to use conditioning and Malliavin calculus. To arrive at our maximum principle we need to develop some new results of stochastic analysis of the controlled systems driven by fractional Brownian motions via fractional calculus. Our approach of conditioning and Malliavin calculus is also applied to classical system driven by standard Brownian motions while the controller has only partial information. As a straightforward consequence, the classical maximum principle is also deduced in this more natural and simpler way.

  17. Maximum hardness and minimum polarizability principles through lattice energies of ionic compounds

    Energy Technology Data Exchange (ETDEWEB)

    Kaya, Savaş, E-mail: savaskaya@cumhuriyet.edu.tr [Department of Chemistry, Faculty of Science, Cumhuriyet University, Sivas 58140 (Turkey); Kaya, Cemal, E-mail: kaya@cumhuriyet.edu.tr [Department of Chemistry, Faculty of Science, Cumhuriyet University, Sivas 58140 (Turkey); Islam, Nazmul, E-mail: nazmul.islam786@gmail.com [Theoretical and Computational Chemistry Research Laboratory, Department of Basic Science and Humanities/Chemistry Techno Global-Balurghat, Balurghat, D. Dinajpur 733103 (India)

    2016-03-15

    The maximum hardness (MHP) and minimum polarizability (MPP) principles have been analyzed using the relationship among the lattice energies of ionic compounds and their electronegativities, chemical hardnesses and electrophilicities. The lattice energy, electronegativity, chemical hardness and electrophilicity values of the ionic compounds considered in the present study have been calculated using new equations derived by some of the authors in recent years. For 4 simple reactions, the changes of the hardness (Δη), polarizability (Δα) and electrophilicity index (Δω) were calculated. It is shown that the maximum hardness principle is obeyed by all the chemical reactions considered, but the minimum polarizability and minimum electrophilicity principles are not valid for all reactions. We also propose simple methods to compute the percentage of ionic character and the internuclear distances of ionic compounds. Comparative studies with experimental sets of data reveal that the proposed methods of computation of the percentage of ionic character and the internuclear distances of ionic compounds are valid.
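
    For context, the standard conceptual-DFT working definitions behind these quantities, in the finite-difference approximation with ionization energy I and electron affinity A (one common convention; the paper itself uses its own recently derived equations):

```latex
\chi = \frac{I + A}{2}, \qquad
\eta = \frac{I - A}{2}, \qquad
\omega = \frac{\chi^{2}}{2\eta}
```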

  18. Maximum hardness and minimum polarizability principles through lattice energies of ionic compounds

    International Nuclear Information System (INIS)

    Kaya, Savaş; Kaya, Cemal; Islam, Nazmul

    2016-01-01

    The maximum hardness (MHP) and minimum polarizability (MPP) principles have been analyzed using the relationship among the lattice energies of ionic compounds and their electronegativities, chemical hardnesses and electrophilicities. The lattice energy, electronegativity, chemical hardness and electrophilicity values of the ionic compounds considered in the present study have been calculated using new equations derived by some of the authors in recent years. For 4 simple reactions, the changes of the hardness (Δη), polarizability (Δα) and electrophilicity index (Δω) were calculated. It is shown that the maximum hardness principle is obeyed by all the chemical reactions considered, but the minimum polarizability and minimum electrophilicity principles are not valid for all reactions. We also propose simple methods to compute the percentage of ionic character and the internuclear distances of ionic compounds. Comparative studies with experimental sets of data reveal that the proposed methods of computation of the percentage of ionic character and the internuclear distances of ionic compounds are valid.

  19. Bounds and maximum principles for the solution of the linear transport equation

    International Nuclear Information System (INIS)

    Larsen, E.W.

    1981-01-01

    Pointwise bounds are derived for the solution of time-independent linear transport problems with surface sources in convex spatial domains. Under specified conditions, upper bounds are derived which, as a function of position, decrease with distance from the boundary. Also, sufficient conditions are obtained for the existence of maximum and minimum principles, and a counterexample is given which shows that such principles do not always exist.

  20. The maximum entropy method of moments and Bayesian probability theory

    Science.gov (United States)

    Bretthorst, G. Larry

    2013-08-01

    The problem of density estimation occurs in many disciplines. For example, in MRI it is often necessary to classify the types of tissues in an image. To perform this classification one must first identify the characteristics of the tissues to be classified. These characteristics might be the intensity of a T1-weighted image, and in MRI many other types of characteristic weightings (classifiers) may be generated. In a given tissue type there is no single intensity that characterizes the tissue; rather, there is a distribution of intensities. Often these distributions can be characterized by a Gaussian, but just as often they are much more complicated. Either way, estimating the distribution of intensities is an inference problem. In the case of a Gaussian distribution, one must estimate the mean and standard deviation. However, in the non-Gaussian case the shape of the density function itself must be inferred. Three common techniques for estimating density functions are binned histograms [1, 2], kernel density estimation [3, 4], and the maximum entropy method of moments [5, 6]. In the introduction, the maximum entropy method of moments is reviewed, along with some of its problems and conditions under which it fails. Then, in later sections, the functional form of the maximum entropy method of moments probability distribution is incorporated into Bayesian probability theory. It is shown that Bayesian probability theory solves all of the problems with the maximum entropy method of moments. One gets posterior probabilities for the Lagrange multipliers, and, finally, one can put error bars on the resulting estimated density function.
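
    A minimal numerical sketch of the maximum entropy method of moments itself (the paper embeds this functional form in Bayesian inference; the support, grid resolution, and target moments below are assumptions for illustration):

```python
# Maximum entropy method of moments on a fixed grid: find the density
# p(x) ~ exp(-sum_k lambda_k x^k) whose moments match the targets.
import numpy as np
from scipy.optimize import minimize

x = np.linspace(-4.0, 4.0, 801)              # discretized support (assumed)
dx = x[1] - x[0]
mu = np.array([0.0, 1.0])                    # target moments E[x], E[x^2] (assumed)
k = np.arange(1, len(mu) + 1)                # moment orders

def dual(lam):
    # Convex dual of the maxent problem: log Z(lambda) + lambda . mu.
    # Its minimizer yields the Lagrange multipliers of the maxent density.
    logp = -(x[:, None] ** k[None, :]) @ lam
    return np.log(np.exp(logp).sum() * dx) + lam @ mu

lam = minimize(dual, np.zeros_like(mu), method="BFGS").x
p = np.exp(-(x[:, None] ** k[None, :]) @ lam)
p /= p.sum() * dx                            # normalized maxent density
print("Lagrange multipliers:", lam)          # ~[0, 0.5] -> standard normal
```

    With mean 0 and second moment 1 as the only constraints, the fitted multipliers approach those of a Gaussian, which is the textbook maxent result for these moments.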

  1. Statistical Significance of the Maximum Hardness Principle Applied to Some Selected Chemical Reactions.

    Science.gov (United States)

    Saha, Ranajit; Pan, Sudip; Chattaraj, Pratim K

    2016-11-05

    The validity of the maximum hardness principle (MHP) is tested in the cases of 50 chemical reactions, most of which are organic in nature and exhibit anomeric effect. To explore the effect of the level of theory on the validity of MHP in an exothermic reaction, B3LYP/6-311++G(2df,3pd) and LC-BLYP/6-311++G(2df,3pd) (def2-QZVP for iodine and mercury) levels are employed. Different approximations like the geometric mean of hardness and combined hardness are considered in case there are multiple reactants and/or products. It is observed that, based on the geometric mean of hardness, while 82% of the studied reactions obey the MHP at the B3LYP level, 84% of the reactions follow this rule at the LC-BLYP level. Most of the reactions possess the hardest species on the product side. A 50% null hypothesis is rejected at a 1% level of significance.

  2. On the use of Pontryagin's maximum principle in the reactor profiling problem

    International Nuclear Information System (INIS)

    Silko, P.P.

    1976-01-01

    The problem of approximating a given power profile in nuclear reactors is posed as a physical profiling problem in terms of the theory of optimal processes. It is necessary to distribute the concentration of the profiling substance in a nuclear reactor in such a way that the power profile obtained in the core is as near as possible to the given profile. It is suggested that the original system of differential equations describing the behaviour of neutrons in a reactor, together with some applied requirements, may be written in the form of ordinary first-order differential equations. The integral quadratic criterion evaluating the deviation of the power profile obtained in the reactor from the given one is used as the objective function. The initial state is given, and the control aim is defined as the necessity of transferring the control object from the initial state to a given set of final states known as the target set. The class of admissible controls consists of measurable functions in the given range. Pontryagin's maximum principle is used to solve the formulated problem. As an example, the power profile flattening problem is considered, for which a program in Fortran-4 for the 'Minsk-32' computer has been written. The optimal reactor parameters calculated by this program at various boundary values of the control are presented. It is noted that the type of optimal reactor configuration depends on the boundary values of the control.
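
    For orientation, the finite-dimensional maximum principle invoked here can be stated schematically as follows (generic notation, not the paper's reactor-specific equations): for minimizing the integral of a running cost f_0 along trajectories of the state equation, the optimal control maximizes the Hamiltonian pointwise.

```latex
\dot{x} = f(x, u), \qquad
H(x, \psi, u) = \psi^{\top} f(x, u) - f_0(x, u), \qquad
\dot{\psi} = -\frac{\partial H}{\partial x}, \qquad
H\big(x^{*}(t), \psi(t), u^{*}(t)\big) = \max_{u \in U} H\big(x^{*}(t), \psi(t), u\big).
```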

  3. The Maximum Entropy Production Principle: Its Theoretical Foundations and Applications to the Earth System

    Directory of Open Access Journals (Sweden)

    Axel Kleidon

    2010-03-01

    The Maximum Entropy Production (MEP) principle has been remarkably successful in producing accurate predictions for non-equilibrium states. We argue that this is because the MEP principle is an effective inference procedure that produces the best predictions from the available information. Since all Earth system processes are subject to the conservation of energy, mass and momentum, we argue that in practical terms the MEP principle should be applied to Earth system processes in terms of the already established framework of non-equilibrium thermodynamics, with the assumption of local thermodynamic equilibrium at the appropriate scales.

  4. Predicting the Outcome of NBA Playoffs Based on the Maximum Entropy Principle

    OpenAIRE

    Ge Cheng; Zhenyu Zhang; Moses Ntanda Kyebambe; Nasser Kimbugwe

    2016-01-01

    Predicting the outcome of National Basketball Association (NBA) matches poses a challenging problem of interest to the research community as well as the general public. In this article, we formalize the problem of predicting NBA game results as a classification problem and apply the principle of Maximum Entropy to construct an NBA Maximum Entropy (NBAME) model that fits to discrete statistics for NBA games, and then predict the outcomes of NBA playoffs using the model. Our results reveal that...

  5. Conceptual Models and Theory-Embedded Principles on Effective Schooling.

    Science.gov (United States)

    Scheerens, Jaap

    1997-01-01

    Reviews models and theories on effective schooling. Discusses four rationality-based organization theories and a fifth perspective, chaos theory, as applied to organizational functioning. Discusses theory-embedded principles flowing from these theories: proactive structuring, fit, market mechanisms, cybernetics, and self-organization. The…

  6. Relationship between Maximum Principle and Dynamic Programming for Stochastic Recursive Optimal Control Problems and Applications

    Directory of Open Access Journals (Sweden)

    Jingtao Shi

    2013-01-01

    This paper is concerned with the relationship between maximum principle and dynamic programming for stochastic recursive optimal control problems. Under certain differentiability conditions, relations among the adjoint processes, the generalized Hamiltonian function, and the value function are given. A linear quadratic recursive utility portfolio optimization problem in financial engineering is discussed as an explicitly illustrated example of the main result.
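
    In the classical (non-recursive) diffusion case, the relationship in question is, under smoothness assumptions on the value function V, the identification of the adjoint pair with derivatives of V along the optimal state (schematic form; the recursive-utility setting of the paper generalizes this):

```latex
p(t) = V_x\big(t, x^{*}(t)\big), \qquad
q(t) = V_{xx}\big(t, x^{*}(t)\big)\,\sigma\big(t, x^{*}(t), u^{*}(t)\big).
```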

  7. Discrete maximum principle for FE solutions of the diffusion-reaction problem on prismatic meshes

    Czech Academy of Sciences Publication Activity Database

    Hannukainen, A.; Korotov, S.; Vejchodský, Tomáš

    2009-01-01

    Roč. 226, č. 2 (2009), s. 275-287 ISSN 0377-0427 R&D Projects: GA AV ČR IAA100760702 Institutional research plan: CEZ:AV0Z10190503 Keywords: diffusion-reaction problem * maximum principle * prismatic finite elements Subject RIV: BA - General Mathematics Impact factor: 1.292, year: 2009

  8. Deconvolution in the presence of noise using the Maximum Entropy Principle

    International Nuclear Information System (INIS)

    Steenstrup, S.

    1984-01-01

    The main problem in deconvolution in the presence of noise is the nonuniqueness. This problem is overcome by the application of the Maximum Entropy Principle. The way the noise enters the formulation of the problem is examined in some detail, and the final equations are derived such that the necessary assumptions become explicit. Examples using X-ray diffraction data are shown. (orig.)

  9. Uncertainty estimation of the self-thinning process by Maximum-Entropy Principle

    Science.gov (United States)

    Shoufan Fang; George Z. Gertner

    2000-01-01

    When available information is scarce, the Maximum-Entropy Principle can estimate the distributions of parameters. In our case study, we estimated the distributions of the parameters of the forest self-thinning process based on literature information, and we derived the conditional distribution functions and estimated the 95 percent confidence interval (CI) of the self-...

  10. Optimal control problems with delay, the maximum principle and necessary conditions

    NARCIS (Netherlands)

    Frankena, J.F.

    1975-01-01

    In this paper we consider a rather general optimal control problem involving ordinary differential equations with delayed arguments and a set of equality and inequality restrictions on state- and control variables. For this problem a maximum principle is given in pointwise form, using variational

  11. Sufficient Stochastic Maximum Principle in a Regime-Switching Diffusion Model

    Energy Technology Data Exchange (ETDEWEB)

    Donnelly, Catherine, E-mail: C.Donnelly@hw.ac.uk [Heriot-Watt University, Department of Actuarial Mathematics and Statistics (United Kingdom)

    2011-10-15

    We prove a sufficient stochastic maximum principle for the optimal control of a regime-switching diffusion model. We show the connection to dynamic programming and we apply the result to a quadratic loss minimization problem, which can be used to solve a mean-variance portfolio selection problem.

  12. Sufficient Stochastic Maximum Principle in a Regime-Switching Diffusion Model

    International Nuclear Information System (INIS)

    Donnelly, Catherine

    2011-01-01

    We prove a sufficient stochastic maximum principle for the optimal control of a regime-switching diffusion model. We show the connection to dynamic programming and we apply the result to a quadratic loss minimization problem, which can be used to solve a mean-variance portfolio selection problem.

  13. The underlying principles of relativistic theory of gravitation

    International Nuclear Information System (INIS)

    Logunov, A.A.

    1989-01-01

    The paper deals with the main statements of the relativistic theory of gravitation, constructed as a result of a critical analysis of the general theory of relativity. The principle of geometrization is formulated.

  14. Maximum principle for a stochastic delayed system involving terminal state constraints.

    Science.gov (United States)

    Wen, Jiaqiang; Shi, Yufeng

    2017-01-01

    We investigate a stochastic optimal control problem where the controlled system is described by a stochastic differential delayed equation and, at the terminal time, the state is constrained to lie in a convex set. We first introduce an equivalent backward delayed system described by a time-delayed backward stochastic differential equation. Then a stochastic maximum principle is obtained by virtue of Ekeland's variational principle. Finally, applications to a state-constrained stochastic delayed linear-quadratic control model and a production-consumption choice problem are studied to illustrate the main result.

  15. Africa and the Principles and Theories of International Relations ...

    African Journals Online (AJOL)

    To what extent have the principles and theories of international relations (as formulated) accommodated the specific needs and circumstances of Africa? In other words, how can the circumstances and peculiarities of Africa be made to shape and influence the established principles and theories of international relations as ...

  16. Maximum principles and sharp constants for solutions of elliptic and parabolic systems

    CERN Document Server

    Kresin, Gershon

    2012-01-01

    The main goal of this book is to present results pertaining to various versions of the maximum principle for elliptic and parabolic systems of arbitrary order. In particular, the authors present necessary and sufficient conditions for validity of the classical maximum modulus principles for systems of second order and obtain sharp constants in inequalities of Miranda-Agmon type and in many other inequalities of a similar nature. Somewhat related to this topic are explicit formulas for the norms and the essential norms of boundary integral operators. The proofs are based on a unified approach using, on one hand, representations of the norms of matrix-valued integral operators whose target spaces are linear and finite dimensional, and, on the other hand, on solving certain finite dimensional optimization problems. This book reflects results obtained by the authors, and can be useful to research mathematicians and graduate students interested in partial differential equations.

  17. Comments on a derivation and application of the 'maximum entropy production' principle

    International Nuclear Information System (INIS)

    Grinstein, G; Linsker, R

    2007-01-01

    We show that (1) an error invalidates the derivation (Dewar 2005 J. Phys. A: Math. Gen. 38 L371) of the maximum entropy production (MaxEP) principle for systems far from equilibrium, for which the constitutive relations are nonlinear; and (2) the claim (Dewar 2003 J. Phys. A: Math. Gen. 36 631) that the phenomenon of 'self-organized criticality' is a consequence of MaxEP for slowly driven systems is unjustified. (comment)

  18. Hydrodynamic equations for electrons in graphene obtained from the maximum entropy principle

    Energy Technology Data Exchange (ETDEWEB)

    Barletti, Luigi, E-mail: luigi.barletti@unifi.it [Dipartimento di Matematica e Informatica “Ulisse Dini”, Università degli Studi di Firenze, Viale Morgagni 67/A, 50134 Firenze (Italy)

    2014-08-15

    The maximum entropy principle is applied to the formal derivation of isothermal, Euler-like equations for semiclassical fermions (electrons and holes) in graphene. After proving general mathematical properties of the equations so obtained, their asymptotic form corresponding to significant physical regimes is investigated. In particular, the diffusive regime, the Maxwell-Boltzmann regime (high temperature), the collimation regime and the degenerate gas limit (vanishing temperature) are considered.

  19. Discrete maximum principle for Poisson equation with mixed boundary conditions solved by hp-FEM

    Czech Academy of Sciences Publication Activity Database

    Vejchodský, Tomáš; Šolín, P.

    2009-01-01

    Roč. 1, č. 2 (2009), s. 201-214 ISSN 2070-0733 R&D Projects: GA AV ČR IAA100760702; GA ČR(CZ) GA102/07/0496; GA ČR GA102/05/0629 Institutional research plan: CEZ:AV0Z10190503 Keywords: discrete maximum principle * hp-FEM * Poisson equation * mixed boundary conditions Subject RIV: BA - General Mathematics

  20. Discrete Maximum Principle for Higher-Order Finite Elements in 1D

    Czech Academy of Sciences Publication Activity Database

    Vejchodský, Tomáš; Šolín, Pavel

    2007-01-01

    Roč. 76, č. 260 (2007), s. 1833-1846 ISSN 0025-5718 R&D Projects: GA ČR GP201/04/P021 Institutional research plan: CEZ:AV0Z10190503; CEZ:AV0Z20760514 Keywords: discrete maximum principle * discrete Green's function * higher-order elements Subject RIV: BA - General Mathematics Impact factor: 1.230, year: 2007

  1. Optimal control and optimal trajectories of regional macroeconomic dynamics based on the Pontryagin maximum principle

    Science.gov (United States)

    Bulgakov, V. K.; Strigunov, V. V.

    2009-05-01

    The Pontryagin maximum principle is used to prove a theorem concerning optimal control in regional macroeconomics. A boundary value problem for optimal trajectories of the state and adjoint variables is formulated, and optimal curves are analyzed. An algorithm is proposed for solving the boundary value problem of optimal control. The performance of the algorithm is demonstrated by computing an optimal control and the corresponding optimal trajectories.
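
    A minimal sketch of the kind of two-point boundary value problem that the Pontryagin principle produces, solved by shooting (a toy scalar problem invented here, not the authors' regional macroeconomic model or algorithm):

```python
# Toy Pontryagin BVP: minimize int_0^1 (x^2 + u^2) dt subject to x' = u,
# x(0) = 1, x(1) free, so the adjoint must satisfy p(1) = 0 (transversality).
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

def rhs(t, y):
    x, p = y
    u = -0.5 * p                 # stationarity of H = x^2 + u^2 + p*u
    return [u, -2.0 * x]         # state equation and adjoint equation

def residual(p0):
    sol = solve_ivp(rhs, (0.0, 1.0), [1.0, p0], rtol=1e-10)
    return sol.y[1, -1]          # p(1), which must vanish at the optimum

p0 = brentq(residual, 0.0, 5.0)  # shoot on p(0); bracket found by inspection
print("p(0) =", p0)              # analytic value: 2*tanh(1) ~ 1.523
```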

  2. Generalized uncertainty principle and the maximum mass of ideal white dwarfs

    Energy Technology Data Exchange (ETDEWEB)

    Rashidi, Reza, E-mail: reza.rashidi@srttu.edu

    2016-11-15

    The effects of a generalized uncertainty principle on the structure of an ideal white dwarf star are investigated. The equation describing the equilibrium configuration of the star is a generalized form of the Lane–Emden equation. It is proved that the star always has a finite size. It is then argued that the maximum mass of such an ideal white dwarf tends to infinity, as opposed to the conventional case where it has a finite value.

  3. Application of maximum values for radiation exposure and principles for the calculation of radiation doses

    International Nuclear Information System (INIS)

    2007-08-01

    The guide presents the definitions of equivalent dose and effective dose, the principles for calculating these doses, and instructions for applying their maximum values. The limits (Annual Limit on Intake and Derived Air Concentration) derived from dose limits are also presented for the purpose of monitoring exposure to internal radiation. The calculation of radiation doses caused to a patient from medical research and treatment involving exposure to ionizing radiation is beyond the scope of this ST Guide.

  4. Quantum maximum-entropy principle for closed quantum hydrodynamic transport within a Wigner function formalism

    International Nuclear Information System (INIS)

    Trovato, M.; Reggiani, L.

    2011-01-01

    By introducing a quantum entropy functional of the reduced density matrix, the principle of quantum maximum entropy is asserted as a fundamental principle of quantum statistical mechanics. Accordingly, we develop a comprehensive theoretical formalism to construct rigorously a closed quantum hydrodynamic transport within a Wigner function approach. The theoretical formalism is formulated in both thermodynamic equilibrium and nonequilibrium conditions, and the quantum contributions are obtained by only assuming that the Lagrange multipliers can be expanded in powers of ℏ². In particular, by using an arbitrary number of moments, we prove that (1) on a macroscopic scale all nonlocal effects, compatible with the uncertainty principle, are imputable to high-order spatial derivatives, both of the number density n and of the effective temperature T; (2) the results available from the literature in the framework of both a quantum Boltzmann gas and a degenerate quantum Fermi gas are recovered as a particular case; (3) the statistics for the quantum Fermi and Bose gases at different levels of degeneracy are explicitly incorporated; (4) a set of relevant applications admitting exact analytical equations are explicitly given and discussed; (5) the quantum maximum entropy principle keeps full validity in the classical limit ℏ → 0.

  5. Towards a frequency-dependent discrete maximum principle for the implicit Monte Carlo equations

    Energy Technology Data Exchange (ETDEWEB)

    Wollaber, Allan B [Los Alamos National Laboratory; Larsen, Edward W [Los Alamos National Laboratory; Densmore, Jeffery D [Los Alamos National Laboratory

    2010-12-15

    It has long been known that temperature solutions of the Implicit Monte Carlo (IMC) equations can exceed the external boundary temperatures, a so-called violation of the 'maximum principle'. Previous attempts at prescribing a maximum value of the time-step size Δt that is sufficient to eliminate these violations have recommended a Δt that is typically too small to be used in practice and that appeared to be much too conservative when compared to numerical solutions of the IMC equations for practical problems. In this paper, we derive a new estimator for the maximum time-step size that includes the spatial-grid size Δx. This explicitly demonstrates that the effect of coarsening Δx is to reduce the limitation on Δt, which helps explain the overly conservative nature of the earlier, grid-independent results. We demonstrate that our new time-step restriction is a much more accurate means of predicting violations of the maximum principle. We discuss how the implications of the new, grid-dependent time-step restriction can impact IMC solution algorithms.

  6. Towards a frequency-dependent discrete maximum principle for the implicit Monte Carlo equations

    International Nuclear Information System (INIS)

    Wollaber, Allan B.; Larsen, Edward W.; Densmore, Jeffery D.

    2011-01-01

    It has long been known that temperature solutions of the Implicit Monte Carlo (IMC) equations can exceed the external boundary temperatures, a so-called violation of the 'maximum principle'. Previous attempts at prescribing a maximum value of the time-step size Δt that is sufficient to eliminate these violations have recommended a Δt that is typically too small to be used in practice and that appeared to be much too conservative when compared to numerical solutions of the IMC equations for practical problems. In this paper, we derive a new estimator for the maximum time-step size that includes the spatial-grid size Δx. This explicitly demonstrates that the effect of coarsening Δx is to reduce the limitation on Δt, which helps explain the overly conservative nature of the earlier, grid-independent results. We demonstrate that our new time-step restriction is a much more accurate means of predicting violations of the maximum principle. We discuss how the implications of the new, grid-dependent time-step restriction can impact IMC solution algorithms. (author)

  7. Predicting the Outcome of NBA Playoffs Based on the Maximum Entropy Principle

    Directory of Open Access Journals (Sweden)

    Ge Cheng

    2016-12-01

    Predicting the outcome of National Basketball Association (NBA) matches poses a challenging problem of interest to the research community as well as the general public. In this article, we formalize the problem of predicting NBA game results as a classification problem and apply the principle of Maximum Entropy to construct an NBA Maximum Entropy (NBAME) model that fits discrete statistics for NBA games, and then predict the outcomes of NBA playoffs using the model. Our results reveal that the model is able to predict the winning team with 74.4% accuracy, outperforming other classical machine learning algorithms that could only afford a maximum prediction accuracy of 70.6% in the experiments that we performed.
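
    A sketch of a maximum-entropy classifier of this kind on synthetic "game statistics" (MaxEnt classification under feature-expectation constraints coincides with multinomial logistic regression; the feature meanings, weights, and data below are invented for illustration, whereas the NBAME model fits discretized NBA box-score statistics):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 4))        # e.g. shooting pct, rebounds, assists, turnovers
w = np.array([1.5, 0.8, 0.5, -1.0])  # assumed "true" effect of each feature
y = (X @ w + rng.normal(scale=0.5, size=400) > 0).astype(int)  # 1 = home win

# Logistic regression = maximum-entropy model matching feature expectations
clf = LogisticRegression().fit(X[:300], y[:300])
print("held-out accuracy:", clf.score(X[300:], y[300:]))
```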

  8. Relativity and equivalence principles in the gauge theory of gravitation

    International Nuclear Information System (INIS)

    Ivanenko, D.; Sardanashvili, G.

    1981-01-01

    The roles of the relativity principle (RP) and the equivalence principle (EP) in the gauge theory of gravity are shown. In the gravitational theory in the formalism of laminations, RP can be formulated as the requirement of covariance of the equations relative to the GL + (4, R)(X) gauge group. In such a case RP turns out to be identical to the gauge principle in the gauge theory of a group of external symmetries, and the gravitational theory can be constructed directly as a gauge theory. In the general theory of relativity the equivalence principle supplements RP and is intended to describe the transition to the special theory of relativity in some reference system. The approach described takes into account that in gauge theory, besides gauge fields, under conditions of spontaneous symmetry breaking Goldstone and Higgs fields can also arise; the gravitational metric field is related to these, which is a consequence of taking account of RP in the gauge theory of gravitation [ru]

  9. Effective medium theory principles and applications

    CERN Document Server

    Choy, Tuck C

    2015-01-01

    Effective medium theory dates back to the early days of the theory of electricity. Faraday in 1837 proposed one of the earliest models for a composite metal-insulator dielectric and around 1870 Maxwell and later Garnett (1904) developed models to describe a composite or mixed material medium. The subject has been developed considerably since and while the results are useful for predicting materials performance, the theory can also be used in a wide range of problems in physics and materials engineering. This book develops the topic of effective medium theory by bringing together the essentials of both the static and the dynamical theory. Electromagnetic systems are thoroughly dealt with, as well as related areas such as the CPA theory of alloys, liquids, the density functional theory etc., with applications to ultrasonics, hydrodynamics, superconductors, porous media and others, where the unifying aspects of the effective medium concept are emphasized. In this new second edition two further chapters have been...

  10. The Independence of Markov's Principle in Type Theory

    DEFF Research Database (Denmark)

    Coquand, Thierry; Mannaa, Bassel

    2017-01-01

    In this paper, we show that Markov's principle is not derivable in dependent type theory with natural numbers and one universe. One way to prove this would be to remark that Markov's principle does not hold in a sheaf model of type theory over Cantor space, since Markov's principle does not hold for the generic point of this model. Instead we design an extension of type theory, which intuitively extends type theory by the addition of a generic point of Cantor space. We then show the consistency of this extension by a normalization argument. Markov's principle does not hold in this extension, and it follows that it cannot be proved in type theory.

  11. Perspective: Maximum caliber is a general variational principle for dynamical systems.

    Science.gov (United States)

    Dixit, Purushottam D; Wagoner, Jason; Weistuch, Corey; Pressé, Steve; Ghosh, Kingshuk; Dill, Ken A

    2018-01-07

    We review here Maximum Caliber (Max Cal), a general variational principle for inferring distributions of paths in dynamical processes and networks. Max Cal is to dynamical trajectories what the principle of maximum entropy is to equilibrium states or stationary populations. In Max Cal, you maximize a path entropy over all possible pathways, subject to dynamical constraints, in order to predict relative path weights. Many well-known relationships of non-equilibrium statistical physics, such as the Green-Kubo fluctuation-dissipation relations, Onsager's reciprocal relations, and Prigogine's minimum entropy production, are limited to near-equilibrium processes. Max Cal is more general. While it can readily derive these results under those limits, Max Cal is also applicable far from equilibrium. We give examples of Max Cal as a method of inference about trajectory distributions from limited data, finding reaction coordinates in bio-molecular simulations, and modeling the complex dynamics of non-thermal systems such as gene regulatory networks or the collective firing of neurons. We also survey its basis in principle and some limitations.
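
    Schematically (generic notation, not quoted from the review), Max Cal maximizes a path entropy over trajectories Γ relative to a reference measure Q, subject to constraints on path averages A_i, which yields an exponential-family distribution over paths:

```latex
\max_{P} \; -\sum_{\Gamma} P_{\Gamma} \ln \frac{P_{\Gamma}}{Q_{\Gamma}}
\quad \text{s.t.} \quad \sum_{\Gamma} P_{\Gamma} A_i(\Gamma) = \langle A_i \rangle
\;\;\Longrightarrow\;\;
P_{\Gamma} \propto Q_{\Gamma}\, e^{-\sum_i \lambda_i A_i(\Gamma)}.
```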

  12. Perspective: Maximum caliber is a general variational principle for dynamical systems

    Science.gov (United States)

    Dixit, Purushottam D.; Wagoner, Jason; Weistuch, Corey; Pressé, Steve; Ghosh, Kingshuk; Dill, Ken A.

    2018-01-01

    We review here Maximum Caliber (Max Cal), a general variational principle for inferring distributions of paths in dynamical processes and networks. Max Cal is to dynamical trajectories what the principle of maximum entropy is to equilibrium states or stationary populations. In Max Cal, you maximize a path entropy over all possible pathways, subject to dynamical constraints, in order to predict relative path weights. Many well-known relationships of non-equilibrium statistical physics—such as the Green-Kubo fluctuation-dissipation relations, Onsager's reciprocal relations, and Prigogine's minimum entropy production—are limited to near-equilibrium processes. Max Cal is more general. While it can readily derive these results under those limits, Max Cal is also applicable far from equilibrium. We give examples of Max Cal as a method of inference about trajectory distributions from limited data, finding reaction coordinates in bio-molecular simulations, and modeling the complex dynamics of non-thermal systems such as gene regulatory networks or the collective firing of neurons. We also survey its basis in principle and some limitations.

  13. Foundations of gravitation theory: the principle of equivalence

    International Nuclear Information System (INIS)

    Haugan, M.P.

    1978-01-01

    A new framework is presented within which to discuss the principle of equivalence and its experimental tests. The framework incorporates a special structure imposed on the equivalence principle by the principle of energy conservation. This structure includes relations among the conceptual components of the equivalence principle as well as quantitative relations among the outcomes of its experimental tests. One of the most striking new results obtained through use of this framework is a connection between the breakdown of local Lorentz invariance and the breakdown of the principle that all bodies fall with the same acceleration in a gravitational field. An extensive discussion of experimental tests of the equivalence principle and their significance is also presented. Within the above framework, theory-independent analyses of a broad range of equivalence principle tests are possible. Gravitational redshift experiments, Doppler-shift experiments, the Turner-Hill and Hughes-Drever experiments, and a number of solar-system tests of gravitation theories are analyzed. Application of the techniques of theoretical nuclear physics to the quantitative interpretation of equivalence principle tests using laboratory materials of different composition yields a number of important results. It is found that current Eotvos experiments significantly demonstrate the compatibility of the weak interactions with the equivalence principle. It is also shown that the Hughes-Drever experiment is the most precise test of local Lorentz invariance yet performed. The work leads to a strong, tightly knit empirical basis for the principle of equivalence, the central pillar of the foundations of gravitation theory.

  14. A maximum-principle preserving finite element method for scalar conservation equations

    KAUST Repository

    Guermond, Jean-Luc

    2014-04-01

    This paper introduces a first-order viscosity method for the explicit approximation of scalar conservation equations with Lipschitz fluxes using continuous finite elements on arbitrary grids in any space dimension. Provided the lumped mass matrix is positive definite, the method is shown to satisfy the local maximum principle under a usual CFL condition. The method is independent of the cell type; for instance, the mesh can be a combination of tetrahedra, hexahedra, and prisms in three space dimensions. © 2014 Elsevier B.V.
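
    A 1D finite-difference analogue of such a first-order, maximum-principle-satisfying viscosity scheme (global Lax-Friedrichs viscosity, shown here for Burgers' equation; the paper's method is its finite element generalization on arbitrary meshes in any dimension):

```python
import numpy as np

def step(u, lam, f=lambda v: 0.5 * v ** 2):
    nu = np.abs(u).max()                    # first-order viscosity coefficient
    up, um = np.roll(u, -1), np.roll(u, 1)  # periodic neighbors
    # Monotone update: a convex combination of neighbors when lam * nu <= 1,
    # hence the solution stays within the initial min/max bounds.
    return u - lam * (0.5 * (f(up) - f(um)) - 0.5 * nu * (up - 2 * u + um))

x = np.linspace(0.0, 1.0, 200, endpoint=False)
u = np.where(x < 0.5, 1.0, 0.0)             # Riemann initial data in [0, 1]
for _ in range(100):
    u = step(u, lam=0.4)                    # lam = dt/dx, CFL: lam * nu <= 1
print(u.min(), u.max())                     # remains within [0, 1]
```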

  15. Optimal control of algae growth by controlling CO2 and nutrition flow using the Pontryagin Maximum Principle

    Science.gov (United States)

    Mardlijah; Jamil, Ahmad; Hanafi, Lukman; Sanjaya, Suharmadi

    2017-09-01

    Algae have many benefits; one of them is their use as a renewable and sustainable energy source in the future. Greater algae growth will increase biodiesel production, and the growth of algae is influenced by glucose, nutrients and the photosynthesis process. In this paper, the optimal control problem for algae growth is discussed. The objective function is to maximize the concentration of dry algae, while the controls are the flows of carbon dioxide and nutrition. The solution is obtained by applying the Pontryagin Maximum Principle, and the results show that the concentration of algae increased by more than 15%.

  16. A Second-Order Maximum Principle Preserving Lagrange Finite Element Technique for Nonlinear Scalar Conservation Equations

    KAUST Repository

    Guermond, Jean-Luc; Nazarov, Murtazo; Popov, Bojan; Yang, Yong

    2014-01-01

    © 2014 Society for Industrial and Applied Mathematics. This paper proposes an explicit, (at least) second-order, maximum principle satisfying, Lagrange finite element method for solving nonlinear scalar conservation equations. The technique is based on a new viscous bilinear form introduced in Guermond and Nazarov [Comput. Methods Appl. Mech. Engrg., 272 (2014), pp. 198-213], a high-order entropy viscosity method, and the Boris-Book-Zalesak flux correction technique. The algorithm works for arbitrary meshes in any space dimension and for all Lipschitz fluxes. The formal second-order accuracy of the method and its convergence properties are tested on a series of linear and nonlinear benchmark problems.

  17. A maximum-principle preserving finite element method for scalar conservation equations

    KAUST Repository

    Guermond, Jean-Luc; Nazarov, Murtazo

    2014-01-01

    This paper introduces a first-order viscosity method for the explicit approximation of scalar conservation equations with Lipschitz fluxes using continuous finite elements on arbitrary grids in any space dimension. Provided the lumped mass matrix is positive definite, the method is shown to satisfy the local maximum principle under a usual CFL condition. The method is independent of the cell type; for instance, the mesh can be a combination of tetrahedra, hexahedra, and prisms in three space dimensions. © 2014 Elsevier B.V.

  18. Dynamic Optimization of a Polymer Flooding Process Based on Implicit Discrete Maximum Principle

    Directory of Open Access Journals (Sweden)

    Yang Lei

    2012-01-01

    Polymer flooding is one of the most important technologies for enhanced oil recovery (EOR). In this paper, an optimal control model of distributed parameter systems (DPSs) for polymer injection strategies is established, which involves the performance index as the maximum of the profit, the governing equations as the fluid flow equations of polymer flooding, and some inequality constraints as polymer concentration and injection amount limitations. The optimal control model is discretized by a fully implicit finite-difference method. To cope with the discrete optimal control problem (OCP), the necessary conditions for optimality are obtained through application of the calculus of variations and Pontryagin's discrete maximum principle. A modified gradient method with a new adjoint construction is proposed for the computation of optimal injection strategies. The numerical results of an example illustrate the effectiveness of the proposed method.

  19. Justifying Design Decisions with Theory-based Design Principles

    OpenAIRE

    Schermann, Michael; Gehlert, Andreas; Pohl, Klaus; Krcmar, Helmut

    2014-01-01

    Although the role of theories in design research is recognized, we show that little attention has been paid on how to use theories when designing new artifacts. We introduce design principles as a new methodological approach to address this problem. Design principles extend the notion of design rationales that document how a design decision emerged. We extend the concept of design rationales by using theoretical hypotheses to support or object to design decisions. At the example of developing...

  20. Applications of the principle of maximum entropy: from physics to ecology.

    Science.gov (United States)

    Banavar, Jayanth R; Maritan, Amos; Volkov, Igor

    2010-02-17

    There are numerous situations in physics and other disciplines which can be described at different levels of detail in terms of probability distributions. Such descriptions arise either intrinsically as in quantum mechanics, or because of the vast amount of details necessary for a complete description as, for example, in Brownian motion and in many-body systems. We show that an application of the principle of maximum entropy for estimating the underlying probability distribution can depend on the variables used for describing the system. The choice of characterization of the system carries with it implicit assumptions about fundamental attributes such as whether the system is classical or quantum mechanical or equivalently whether the individuals are distinguishable or indistinguishable. We show that the correct procedure entails the maximization of the relative entropy subject to known constraints and, additionally, requires knowledge of the behavior of the system in the absence of these constraints. We present an application of the principle of maximum entropy to understanding species diversity in ecology and introduce a new statistical ensemble corresponding to the distribution of a variable population of individuals into a set of species not defined a priori.
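
    The procedure described, maximization of the relative entropy with respect to the constraint-free behavior q subject to known constraints, has the familiar closed form (generic notation, supplied here for context):

```latex
\max_{p} \; -\sum_x p(x) \ln \frac{p(x)}{q(x)}
\quad \text{s.t.} \quad \sum_x p(x) f_i(x) = F_i
\;\;\Longrightarrow\;\;
p(x) = \frac{q(x)\, e^{-\sum_i \lambda_i f_i(x)}}{Z(\lambda)}.
```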

  1. Applications of the principle of maximum entropy: from physics to ecology

    International Nuclear Information System (INIS)

    Banavar, Jayanth R; Volkov, Igor; Maritan, Amos

    2010-01-01

    There are numerous situations in physics and other disciplines which can be described at different levels of detail in terms of probability distributions. Such descriptions arise either intrinsically as in quantum mechanics, or because of the vast amount of details necessary for a complete description as, for example, in Brownian motion and in many-body systems. We show that an application of the principle of maximum entropy for estimating the underlying probability distribution can depend on the variables used for describing the system. The choice of characterization of the system carries with it implicit assumptions about fundamental attributes such as whether the system is classical or quantum mechanical or equivalently whether the individuals are distinguishable or indistinguishable. We show that the correct procedure entails the maximization of the relative entropy subject to known constraints and, additionally, requires knowledge of the behavior of the system in the absence of these constraints. We present an application of the principle of maximum entropy to understanding species diversity in ecology and introduce a new statistical ensemble corresponding to the distribution of a variable population of individuals into a set of species not defined a priori. (topical review)

  2. Experimental verification of the imposing principle for maximum permissible levels of multicolor laser radiation

    Directory of Open Access Journals (Sweden)

    Ivashin V.A.

    2013-12-01

    Aims. The study presents the results of experimental research to verify the superposition (imposing) principle for maximum permissible levels (MPL) of single exposure of the eyes to multicolor laser radiation. This principle of the independence of the effects of radiation at each wavelength (the imposing principle) was established and generalized to a wide range of exposure conditions. As an analysis of the literature shows, experimental verification of this approach with respect to the impact of laser radiation on the tissues of the fundus of the eye had not previously been carried out. Material and methods. The experiments used lasers generating radiation with wavelengths λ1 = 0.532 μm, λ2 = 0.556 to 0.562 μm and λ3 = 0.619 to 0.621 μm. Experiments were carried out on the eyes of rabbits with an evenly pigmented fundus. Results. Comparison of the processed experimental data with the calculated data shows that these levels are close in their parameters. Conclusions. For the first time in the Russian Federation, experimental studies of the validity of the imposing principle for multicolor laser radiation exposure of the organ of vision have been performed. In view of the objective coincidence of the experimental data with the calculated data, we can conclude that the mathematical formulas work.

  3. Principle of maximum Fisher information from Hardy's axioms applied to statistical systems.

    Science.gov (United States)

    Frieden, B Roy; Gatenby, Robert A

    2013-10-01

    Consider a finite-sized, multidimensional system in parameter state a. The system is either at statistical equilibrium or general nonequilibrium, and may obey either classical or quantum physics. L. Hardy's mathematical axioms provide a basis for the physics obeyed by any such system. One axiom is that the number N of distinguishable states a in the system obeys N = max. This assumes that N is known as deterministic prior knowledge. However, most observed systems suffer statistical fluctuations, for which N is therefore only known approximately. Then what happens if the scope of the axiom N = max is extended to include such observed systems? It is found that the state a of the system must obey a principle of maximum Fisher information, I = I_max. This is important because many physical laws have been derived assuming as a working hypothesis that I = I_max. These derivations include uses of the principle of extreme physical information (EPI). Examples of such derivations were of the De Broglie wave hypothesis, quantum wave equations, Maxwell's equations, new laws of biology (e.g., of Coulomb force-directed cell development and of in situ cancer growth), and new laws of economic fluctuation and investment. That the principle I = I_max itself derives from suitably extended Hardy axioms thereby eliminates its need to be assumed in these derivations. Thus, uses of I = I_max and EPI express physics at its most fundamental level, its axiomatic basis in mathematics.
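
    For reference, the Fisher information being maximized is the standard scalar-parameter quantity (supplied here for context, in generic notation for a parameter a and data x with likelihood p(x|a)):

```latex
I(a) = \int \frac{1}{p(x \mid a)} \left( \frac{\partial p(x \mid a)}{\partial a} \right)^{2} dx.
```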

  4. Descent principle in modular Galois theory

    Indian Academy of Sciences (India)


    with Drinfeld module theory see Remark 5.2 at the end of the paper. To describe the ... where the elements X1,...,Xm need not be algebraically independent over kq. When ..... In §5 we shall make some motivational and philosophical remarks.

  5. Principles and theory of resonance power supplies

    International Nuclear Information System (INIS)

    Sreenivas, A.; Karady, G.G.

    1991-01-01

    The resonance power supply is widely used and has proved to be an efficient method of supplying accelerator magnets. The literature describes several power supply circuits, but no comprehensive theory of operation is presented. This paper presents a mathematical method which describes the operation of the resonance power supply and can be used for the accurate design of components.

  6. Energy aspect of the correspondence principle in gravitation theory

    International Nuclear Information System (INIS)

    Mitskevich, N.V.; Nesterov, A.I.

    1976-01-01

    The correspondence of different definitions of invariant quantities in general relativity with Newtonian theory is considered. The analysis is carried out in the reference system of a single Fermi observer. It turns out that, of the quantities considered, only the Papapetrou pseudotensor satisfies the correspondence principle.

  7. Chern-Simons theory from first principles

    International Nuclear Information System (INIS)

    Marino, E.C.

    1994-01-01

    A review is made of the main properties of the Chern-Simons field theory. These include the dynamical mass generation for the photon without a Higgs field, the statistical transmutation of charged particles coupled to it, and the natural appearance of a transverse conductivity. A review of standard theories proposed for the Quantum Hall Effect which use the Chern-Simons term is also made, emphasizing the fact that this term is introduced in an artificial manner. A physical origin for the Chern-Simons term is proposed, starting from QED in 3+1 D with the topological term and imposing that the motion of charged matter is restricted to an infinite plane. (author). 12 refs

  8. Quantum theory from first principles an informational approach

    CERN Document Server

    D'Ariano, Giacomo Mauro; Perinotti, Paolo

    2017-01-01

    Quantum theory is the soul of theoretical physics. It is not just a theory of specific physical systems, but rather a new framework with universal applicability. This book shows how we can reconstruct the theory from six information-theoretical principles, by rebuilding the quantum rules from the bottom up. Step by step, the reader will learn how to master the counterintuitive aspects of the quantum world, and how to efficiently reconstruct quantum information protocols from first principles. Using intuitive graphical notation to represent equations, and with shorter and more efficient derivations, the theory can be understood and assimilated with exceptional ease. Offering a radically new perspective on the field, the book contains an efficient course of quantum theory and quantum information for undergraduates. The book is aimed at researchers, professionals, and students in physics, computer science and philosophy, as well as the curious outsider seeking a deeper understanding of the theory.

  9. Gauge theories under incorporation of a generalized uncertainty principle

    International Nuclear Information System (INIS)

    Kober, Martin

    2010-01-01

    An extension of gauge theories is considered, based on the assumption of a generalized uncertainty principle which implies a minimal length scale. A modification of the usual uncertainty principle implies a modified form of the matter field equations, such as the Dirac equation. If invariance of such a generalized field equation under local gauge transformations is postulated, the usual covariant derivative containing the gauge potential has to be replaced by a generalized covariant derivative. This leads to a generalized interaction between the matter field and the gauge field as well as to an additional self-interaction of the gauge field. Since the existence of a minimal length scale seems to be a necessary assumption of any consistent quantum theory of gravity, the gauge principle is a constitutive ingredient of the standard model, and even gravity can be described as a gauge theory of local translations or Lorentz transformations, the presented extension of gauge theories appears as a very important consideration.

  10. Variational principle for a prototype Rastall theory of gravitation

    International Nuclear Information System (INIS)

    Smalley, L.L.

    1984-01-01

    A prototype of Rastall's theory of gravity, in which the divergence of the energy-momentum tensor is proportional to the gradient of the scalar curvature, is shown to be derivable from a variational principle. Both the proportionality factor and the unrenormalized gravitational constant are found to be covariantly constant, but not necessarily constant. The prototype theory is, therefore, a gravitational theory with a variable gravitational constant.

  11. Collapsing of multigroup cross sections in optimization problems solved by means of the maximum principle of Pontryagin

    International Nuclear Information System (INIS)

    Anton, V.

    1979-05-01

    A new formulation of multigroup cross-section collapsing, based on the conservation of the point or zone value of the Hamiltonian, is presented. This approach is suited to optimization problems solved by means of the maximum principle of Pontryagin. (author)

  12. Maximum-principle-satisfying space-time conservation element and solution element scheme applied to compressible multifluids

    KAUST Repository

    Shen, Hua

    2016-10-19

    A maximum-principle-satisfying space-time conservation element and solution element (CE/SE) scheme is constructed to solve a reduced five-equation model coupled with the stiffened equation of state for compressible multifluids. We first derive a sufficient condition for CE/SE schemes to satisfy the maximum principle when solving a general conservation law. We then introduce a slope limiter to ensure the sufficient condition, which is applicable to both central and upwind CE/SE schemes. Finally, we implement the upwind maximum-principle-satisfying CE/SE scheme to solve the volume-fraction-based five-equation model for compressible multifluids. Several numerical examples are carried out to carefully examine the accuracy, efficiency, conservativeness and maximum-principle-satisfying property of the proposed approach.

  13. Maximum-principle-satisfying space-time conservation element and solution element scheme applied to compressible multifluids

    KAUST Repository

    Shen, Hua; Wen, Chih-Yung; Parsani, Matteo; Shu, Chi-Wang

    2016-01-01

    A maximum-principle-satisfying space-time conservation element and solution element (CE/SE) scheme is constructed to solve a reduced five-equation model coupled with the stiffened equation of state for compressible multifluids. We first derive a sufficient condition for CE/SE schemes to satisfy the maximum principle when solving a general conservation law. We then introduce a slope limiter to ensure the sufficient condition, which is applicable to both central and upwind CE/SE schemes. Finally, we implement the upwind maximum-principle-satisfying CE/SE scheme to solve the volume-fraction-based five-equation model for compressible multifluids. Several numerical examples are carried out to carefully examine the accuracy, efficiency, conservativeness and maximum-principle-satisfying property of the proposed approach.

  14. Principle of maximum entropy for reliability analysis in the design of machine components

    Science.gov (United States)

    Zhang, Yimin

    2018-03-01

    We studied the reliability of machine components with parameters that follow an arbitrary statistical distribution using the principle of maximum entropy (PME). We used PME to select the statistical distribution that best fits the available information. We also established a probability density function (PDF) and a failure probability model for the parameters of mechanical components using the concept of entropy and the PME. We obtained the first four moments of the state function for reliability analysis and design. Furthermore, we attained an estimate of the PDF with the fewest human bias factors using the PME. This function was used to calculate the reliability of the machine components, including a connecting rod, a vehicle half-shaft, a front axle, a rear axle housing, and a leaf spring, which have parameters that typically follow a non-normal distribution. Simulations were conducted for comparison. This study provides a design methodology for the reliability of mechanical components for practical engineering projects.
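
    A sketch of the construction in generic notation (the choice of four moments and the convention that failure corresponds to a negative state function g are assumptions here): PME selects the exponential-polynomial density matching the first four moments of g, from which the failure probability follows by integration.

```latex
p(g) = \exp\Big(\lambda_0 + \sum_{k=1}^{4} \lambda_k g^{k}\Big), \qquad
\int g^{k}\, p(g)\,dg = m_k \;\; (k = 1, \dots, 4), \qquad
P_f = \int_{-\infty}^{0} p(g)\,dg.
```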

  15. Application of maximum values for radiation exposure and principles for the calculation of radiation dose

    International Nuclear Information System (INIS)

    2000-01-01

    The guide sets out the mathematical definitions and principles involved in the calculation of the equivalent dose and the effective dose, and the instructions concerning the application of the maximum values of these quantities. Further, for monitoring the dose caused by internal radiation, the guide defines the limits derived from annual dose limits (the Annual Limit on Intake and the Derived Air Concentration). Finally, the guide defines the operational quantities to be used in estimating the equivalent dose and the effective dose, and also sets out the definitions of some other quantities and concepts to be used in monitoring radiation exposure. The guide does not include the calculation of patient doses carried out for the purposes of quality assurance.

  16. Application of maximum values for radiation exposure and principles for the calculation of radiation dose

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2000-07-01

    The guide sets out the mathematical definitions and principles involved in the calculation of the equivalent dose and the effective dose, and the instructions concerning the application of the maximum values of these quantities. Further, for monitoring the dose caused by internal radiation, the guide defines the limits derived from annual dose limits (the Annual Limit on Intake and the Derived Air Concentration). Finally, the guide defines the operational quantities to be used in estimating the equivalent dose and the effective dose, and also sets out the definitions of some other quantities and concepts to be used in monitoring radiation exposure. The guide does not include the calculation of patient doses carried out for the purposes of quality assurance.

  17. An understanding of human dynamics in urban subway traffic from the Maximum Entropy Principle

    Science.gov (United States)

    Yong, Nuo; Ni, Shunjiang; Shen, Shifei; Ji, Xuewei

    2016-08-01

    We studied the distribution of entry time intervals in Beijing subway traffic by analyzing smart card transaction data, and then deduced the probability distribution function of the entry time interval based on the Maximum Entropy Principle. Both theoretical derivation and data statistics indicated that the entry time interval obeys a power-law distribution with an exponential cutoff. In addition, we pointed out the constraint conditions for the distribution form and discussed how the constraints affect the distribution function. It is speculated that for bursts and heavy tails in human dynamics, when the fitted power exponent is less than 1.0, the distribution cannot be a pure power law but must include an exponential cutoff, which may have been ignored in previous studies.
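
    A sketch of fitting such a power law with exponential cutoff, p(t) ~ t^(-alpha) exp(-t/tau) for t >= tmin, by maximum likelihood (synthetic data here; the study uses smart-card entry timestamps, and all numbers below are invented):

```python
import numpy as np
from scipy.integrate import trapezoid
from scipy.optimize import minimize

rng = np.random.default_rng(1)
t = 1.0 + rng.gamma(shape=0.6, scale=50.0, size=5000)   # heavy head, exp tail
grid = np.linspace(1.0, t.max() * 2, 20000)             # for the normalizer

def nll(theta):
    # Negative log-likelihood; the normalizing constant z is computed
    # numerically on a grid over [tmin, ~max(t)].
    alpha, tau = theta
    if alpha <= 0 or tau <= 0:
        return np.inf
    z = trapezoid(grid ** (-alpha) * np.exp(-grid / tau), grid)
    return -np.sum(-alpha * np.log(t) - t / tau - np.log(z))

fit = minimize(nll, x0=[0.5, 100.0], method="Nelder-Mead")
print("alpha, tau =", fit.x)                            # exponent below 1.0
```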

  18. Strong Maximum Principle for Multi-Term Time-Fractional Diffusion Equations and its Application to an Inverse Source Problem

    OpenAIRE

    Liu, Yikan

    2015-01-01

    In this paper, we establish a strong maximum principle for fractional diffusion equations with multiple Caputo derivatives in time, and investigate a related inverse problem of practical importance. Exploiting the solution properties and the involved multinomial Mittag-Leffler functions, we improve the weak maximum principle for the multi-term time-fractional diffusion equation to a stronger one, which is parallel to that for its single-term counterpart as expected. As a direct application, w...

  19. A Stochastic Maximum Principle for Risk-Sensitive Mean-Field Type Control

    KAUST Repository

    Djehiche, Boualem; Tembine, Hamidou; Tempone, Raul

    2015-01-01

    In this paper we study mean-field type control problems with risk-sensitive performance functionals. We establish a stochastic maximum principle (SMP) for optimal control of stochastic differential equations (SDEs) of mean-field type, in which the drift and the diffusion coefficients as well as the performance functional depend not only on the state and the control but also on the mean of the distribution of the state. Our result extends the risk-sensitive SMP (without mean-field coupling) of Lim and Zhou (2005), derived for feedback (or Markov) type optimal controls, to optimal control problems for non-Markovian dynamics which may be time-inconsistent in the sense that the Bellman optimality principle does not hold. In our approach to the risk-sensitive SMP, the smoothness assumption on the value-function imposed in Lim and Zhou (2005) need not be satisfied. For a general action space, a Peng-type SMP is derived, specifying the necessary conditions for optimality. Two examples are carried out to illustrate the proposed risk-sensitive mean-field type SMP under linear stochastic dynamics with an exponential quadratic cost function. Explicit solutions are given for both the mean-field-free and mean-field models.

  20. A Stochastic Maximum Principle for Risk-Sensitive Mean-Field Type Control

    KAUST Repository

    Djehiche, Boualem

    2015-02-24

    In this paper we study mean-field type control problems with risk-sensitive performance functionals. We establish a stochastic maximum principle (SMP) for optimal control of stochastic differential equations (SDEs) of mean-field type, in which the drift and the diffusion coefficients as well as the performance functional depend not only on the state and the control but also on the mean of the distribution of the state. Our result extends the risk-sensitive SMP (without mean-field coupling) of Lim and Zhou (2005), derived for feedback (or Markov) type optimal controls, to optimal control problems for non-Markovian dynamics which may be time-inconsistent in the sense that the Bellman optimality principle does not hold. In our approach to the risk-sensitive SMP, the smoothness assumption on the value-function imposed in Lim and Zhou (2005) need not be satisfied. For a general action space, a Peng-type SMP is derived, specifying the necessary conditions for optimality. Two examples are carried out to illustrate the proposed risk-sensitive mean-field type SMP under linear stochastic dynamics with an exponential quadratic cost function. Explicit solutions are given for both the mean-field-free and mean-field models.

  1. The free-energy principle: a unified brain theory?

    Science.gov (United States)

    Friston, Karl

    2010-02-01

    A free-energy principle has been proposed recently that accounts for action, perception and learning. This Review looks at some key brain theories in the biological (for example, neural Darwinism) and physical (for example, information theory and optimal control theory) sciences from the free-energy perspective. Crucially, one key theme runs through each of these theories - optimization. Furthermore, if we look closely at what is optimized, the same quantity keeps emerging, namely value (expected reward, expected utility) or its complement, surprise (prediction error, expected cost). This is the quantity that is optimized under the free-energy principle, which suggests that several global brain theories might be unified within a free-energy framework.

  2. The canonical equation of adaptive dynamics for life histories: from fitness-returns to selection gradients and Pontryagin's maximum principle.

    Science.gov (United States)

    Metz, Johan A Jacob; Staňková, Kateřina; Johansson, Jacob

    2016-03-01

    This paper should be read as addendum to Dieckmann et al. (J Theor Biol 241:370-389, 2006) and Parvinen et al. (J Math Biol 67: 509-533, 2013). Our goal is, using little more than high-school calculus, to (1) exhibit the form of the canonical equation (CE) of adaptive dynamics for classical life history problems, where the examples in Dieckmann et al. (J Theor Biol 241:370-389, 2006) and Parvinen et al. (J Math Biol 67: 509-533, 2013) are chosen such that they avoid a number of the problems that one gets in this most relevant of applications, (2) derive the fitness gradient occurring in the CE from simple fitness return arguments, (3) show explicitly that setting said fitness gradient equal to zero results in the classical marginal value principle from evolutionary ecology, (4) show that the latter in turn is equivalent to Pontryagin's maximum principle, a well known equivalence that however in the literature is given either ex cathedra or is proven with more advanced tools, (5) connect the classical optimisation arguments of life history theory a little better to real biology (Mendelian populations with separate sexes subject to an environmental feedback loop), and (6) make a minor improvement to the form of the CE for the examples in Dieckmann et al. and Parvinen et al.

  3. Theory and application of maximum magnetic energy in toroidal plasmas

    International Nuclear Information System (INIS)

    Chu, T.K.

    1992-02-01

    The magnetic energy in an inductively driven steady-state toroidal plasma is a maximum for a given rate of dissipation of energy (Poynting flux). A purely resistive steady state of the piecewise force-free configuration, however, cannot exist, as the periodic removal of the excess poloidal flux and pressure, due to heating, intermittently ruptures the static equilibrium of the partitioning rational surfaces. The rupture necessitates that a plasma with a negative q'/q (as in reverse field pinches and spheromaks) have the same α in all its force-free regions, and that a plasma with a positive q'/q (as in tokamaks) have centrally peaked α's.

  4. An implementation of the maximum-caliber principle by replica-averaged time-resolved restrained simulations.

    Science.gov (United States)

    Capelli, Riccardo; Tiana, Guido; Camilloni, Carlo

    2018-05-14

    Inferential methods can be used to integrate experimental information and molecular simulations. The maximum entropy principle provides a framework for using equilibrium experimental data, and it has been shown that replica-averaged simulations, restrained using a static potential, are a practical and powerful implementation of such a principle. Here we show that replica-averaged simulations restrained using a time-dependent potential are equivalent to the principle of maximum caliber, the dynamic version of the principle of maximum entropy, and thus may allow us to integrate time-resolved data into molecular dynamics simulations. We provide an analytical proof of the equivalence as well as a computational validation making use of simple models and synthetic data. Some limitations and possible solutions are also discussed.

  5. On the fundamental principles of the relativistic theory of gravitation

    International Nuclear Information System (INIS)

    Logunov, A.A.; Mestvirishvili, M.A.

    1990-01-01

    This paper expounds, consistently within the framework of the Special Relativity Theory, the fundamental postulates of the Relativistic Theory of Gravitation (RTG) which make it possible to obtain the unique complete system of equations for the gravitational field. Major attention is paid to the analysis of the gauge group and of the causality principle. Some results related to the evolution of the Friedmann Universe, to gravitational collapse, etc., which follow from the RTG equations, are also presented. 7 refs

  6. Completely boundary-free minimum and maximum principles for neutron transport and their least-squares and Galerkin equivalents

    International Nuclear Information System (INIS)

    Ackroyd, R.T.

    1982-01-01

    Some minimum and maximum variational principles for even-parity neutron transport are reviewed and the corresponding principles for odd-parity transport are derived by a simple method to show why the essential boundary conditions associated with these maximum principles have to be imposed. The method also shows why both the essential and some of the natural boundary conditions associated with these minimum principles have to be imposed. These imposed boundary conditions for trial functions in the variational principles limit the choice of the finite element used to represent trial functions. The reasons for the boundary conditions imposed on the principles for even- and odd-parity transport point the way to a treatment of composite neutron transport, for which completely boundary-free maximum and minimum principles are derived from a functional identity. In general a trial function is used for each parity in the composite neutron transport, but this can be reduced to one without any boundary conditions having to be imposed. (author)

  7. The renormalized action principle in quantum field theory

    International Nuclear Information System (INIS)

    Balasin, H.

    1990-03-01

    The renormalized action principle holds a central position in field theory, since it offers a variety of applications. The main concern of this work is the proof of the action principle within the so-called BPHZ-scheme of renormalization. Following the classical proof given by Lam and Lowenstein, some loopholes are detected and closed. The second part of the work deals with the application of the action principle to pure Yang-Mills-theories within the axial gauge (n 2 ≠ 0). With the help of the action principle we investigate the decoupling of the Faddeev-Popov-ghost-fields from the gauge field. The consistency of this procedure, suggested by three-graph approximation, is proven to survive quantization. Finally we deal with the breaking of Lorentz-symmetry caused by the presence of the gauge-direction n. Using BRST-like techniques and the semi-simplicity of the Lorentz-group, it is shown that no new breakings arise from quantization. Again the main step of the proof is provided by the action principle. (Author, shortened by G.Q.)

  8. Optimal Control of Hypersonic Planning Maneuvers Based on Pontryagin’s Maximum Principle

    Directory of Open Access Journals (Sweden)

    A. Yu. Melnikov

    2015-01-01

    Full Text Available The objective of this work is the synthesis of a simple analytical formula for the optimal roll angle of hypersonic gliding vehicles under conditions of quasi-horizontal motion, allowing its practical implementation in onboard control algorithms. The introduction justifies the relevance of the problem, formulates the basic control tasks, and reviews the history of research and achievements in the field. The author notes a common disadvantage of other authors' methods, namely the difficulty of practical implementation in onboard control algorithms. Similar hypersonic maneuver tasks are systematized according to the type of maneuver, the control parameters, and the limitations. In the problem statement, a glider launched horizontally with suborbital speed glides passively in a static atmosphere over a spherical surface of constant radius in a central gravitational field. The work specifies a system of equations of motion in an inertial spherical coordinate system, sets the limits on the roll angle, and states the optimization criteria at the end of the flight: maximum speed or azimuth and minimum distances to specified geocentric points. The solution proceeds in three steps. (1) The system of equations of motion is transformed by replacing the time argument with another independent argument, the normal equilibrium overload. The Hamiltonian and the adjoint (costate) equations are obtained using Pontryagin's maximum principle, and the number of equations of motion and adjoint variables is reduced. (2) The adjoint variables are expressed by formulas in terms of the current motion parameters; the formulas are verified through differentiation and substitution into the equations of motion. (3) The formula for optimal roll-position control is obtained from the maximum condition. After substitution of the adjoint variables, insertion of constants, and trigonometric transformations, the formula for the optimal roll angle is obtained as a function of the current motion parameters. The roll angle is expressed as the ratio
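
Pontryagin's maximum principle, as used in the abstract above, reduces an optimal control problem to a two-point boundary value problem in states and costates. The toy sketch below illustrates the mechanics on a double integrator with quadratic control cost, solved by shooting on the initial costates; the problem, horizon, and target are illustrative assumptions, not the paper's hypersonic model:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import fsolve

# Toy problem: minimize (1/2) * integral of u^2 dt for the double integrator
# x1' = x2, x2' = u, steering (x1, x2) from (0, 0) at t=0 to (1, 0) at t=T.
T, x_target = 1.0, np.array([1.0, 0.0])

def dynamics(t, y):
    x1, x2, p1, p2 = y
    u = -p2                      # Hamiltonian minimization: H = u^2/2 + p1*x2 + p2*u
    return [x2, u, 0.0, -p1]     # state equations plus costate equations p' = -dH/dx

def shoot(p0):
    """Terminal-state mismatch as a function of the initial costates."""
    sol = solve_ivp(dynamics, (0, T), [0.0, 0.0, p0[0], p0[1]],
                    rtol=1e-9, atol=1e-9)
    return sol.y[:2, -1] - x_target

p0 = fsolve(shoot, x0=[1.0, 1.0])
print("initial costates:", p0)   # analytic answer: p1 = -12, p2(0) = -6, so u(t) = 6 - 12t
```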

  9. A Stochastic Maximum Principle for a Stochastic Differential Game of a Mean-Field Type

    Energy Technology Data Exchange (ETDEWEB)

    Hosking, John Joseph Absalom, E-mail: j.j.a.hosking@cma.uio.no [University of Oslo, Centre of Mathematics for Applications (CMA) (Norway)

    2012-12-15

    We construct a stochastic maximum principle (SMP) which provides necessary conditions for the existence of Nash equilibria in a certain form of N-agent stochastic differential game (SDG) of a mean-field type. The information structure considered for the SDG is of a possibly asymmetric and partial type. To prove our SMP we take an approach based on spike-variations and adjoint representation techniques, analogous to that of S. Peng (SIAM J. Control Optim. 28(4):966-979, 1990) in the optimal stochastic control context. In our proof we apply adjoint representation procedures at three points. The first-order adjoint processes are defined as solutions to certain mean-field backward stochastic differential equations, and second-order adjoint processes of a first type are defined as solutions to certain backward stochastic differential equations. Second-order adjoint processes of a second type are defined as solutions of certain backward stochastic equations of a type that we introduce in this paper, and which we term conditional mean-field backward stochastic differential equations. From the resulting representations, we show that the terms relating to these second-order adjoint processes of the second type are of an order such that they do not appear in our final SMP equations. A comparable situation exists in an article by R. Buckdahn, B. Djehiche, and J. Li (Appl. Math. Optim. 64(2):197-216, 2011) that constructs a SMP for a mean-field type optimal stochastic control problem; however, the approach we take of using these second-order adjoint processes of a second type to deal with the type of terms that we refer to as the second form of quadratic-type terms represents an alternative to a development, to our setting, of the approach used in their article for their analogous type of term.

  10. A Stochastic Maximum Principle for a Stochastic Differential Game of a Mean-Field Type

    International Nuclear Information System (INIS)

    Hosking, John Joseph Absalom

    2012-01-01

    We construct a stochastic maximum principle (SMP) which provides necessary conditions for the existence of Nash equilibria in a certain form of N-agent stochastic differential game (SDG) of a mean-field type. The information structure considered for the SDG is of a possibly asymmetric and partial type. To prove our SMP we take an approach based on spike-variations and adjoint representation techniques, analogous to that of S. Peng (SIAM J. Control Optim. 28(4):966–979, 1990) in the optimal stochastic control context. In our proof we apply adjoint representation procedures at three points. The first-order adjoint processes are defined as solutions to certain mean-field backward stochastic differential equations, and second-order adjoint processes of a first type are defined as solutions to certain backward stochastic differential equations. Second-order adjoint processes of a second type are defined as solutions of certain backward stochastic equations of a type that we introduce in this paper, and which we term conditional mean-field backward stochastic differential equations. From the resulting representations, we show that the terms relating to these second-order adjoint processes of the second type are of an order such that they do not appear in our final SMP equations. A comparable situation exists in an article by R. Buckdahn, B. Djehiche, and J. Li (Appl. Math. Optim. 64(2):197–216, 2011) that constructs a SMP for a mean-field type optimal stochastic control problem; however, the approach we take of using these second-order adjoint processes of a second type to deal with the type of terms that we refer to as the second form of quadratic-type terms represents an alternative to a development, to our setting, of the approach used in their article for their analogous type of term.

  11. The η_c decays into light hadrons using the principle of maximum conformality

    Energy Technology Data Exchange (ETDEWEB)

    Du, Bo-Lun; Wu, Xing-Gang; Zeng, Jun; Bu, Shi; Shen, Jian-Ming [Chongqing University, Department of Physics, Chongqing (China)

    2018-01-15

    In the paper, we analyze the η_c decays into light hadrons at the next-to-leading order in QCD by applying the principle of maximum conformality (PMC). The relativistic correction at the O(α_s v²) order has been included in the discussion, which gives about a 10% contribution to the ratio R. The PMC, which satisfies renormalization group invariance, is designed to obtain a scale-fixed and scheme-independent prediction at any fixed order. To avoid confusion in treating the n_f terms, we transform the usual MS-bar pQCD series into the one under the minimal momentum space subtraction (mMOM) scheme. Compared with the prediction under conventional scale setting, R_Conv,mMOM-r = (4.12^{+0.30}_{-0.28}) × 10³, after applying the PMC we obtain R_PMC,mMOM-r = (6.09^{+0.62}_{-0.55}) × 10³, where the errors are squared averages of those caused by m_c and Λ_mMOM. The PMC prediction agrees with the recent PDG value within errors, i.e. R^exp = (6.3 ± 0.5) × 10³. Thus we think the mismatch between the prediction under conventional scale setting and the data is due to an improper choice of scale, which can be solved by using the PMC. (orig.)

  12. Lattice Field Theory with the Sign Problem and the Maximum Entropy Method

    Directory of Open Access Journals (Sweden)

    Masahiro Imachi

    2007-02-01

    Full Text Available Although numerical simulation in lattice field theory is one of the most effective tools to study non-perturbative properties of field theories, it faces serious obstacles coming from the sign problem in some theories such as finite density QCD and lattice field theory with the θ term. We reconsider this problem from the point of view of the maximum entropy method.

  13. Noncommutative Common Cause Principles in algebraic quantum field theory

    International Nuclear Information System (INIS)

    Hofer-Szabó, Gábor; Vecsernyés, Péter

    2013-01-01

    States in algebraic quantum field theory “typically” establish correlation between spacelike separated events. Reichenbach's Common Cause Principle, generalized to the quantum field theoretical setting, offers an apt tool to causally account for these superluminal correlations. In the paper we motivate first why commutativity between the common cause and the correlating events should be abandoned in the definition of the common cause. Then we show that the Noncommutative Weak Common Cause Principle holds in algebraic quantum field theory with locally finite degrees of freedom. Namely, for any pair of projections A, B supported in spacelike separated regions V_A and V_B, respectively, there is a local projection C, not necessarily commuting with A and B, such that C is supported within the union of the backward light cones of V_A and V_B and the set {C, C^⊥} screens off the correlation between A and B.

  14. A variational principle for Newton-Cartan theory

    International Nuclear Information System (INIS)

    Goenner, H.F.M.

    1984-01-01

    In the framework of a space-time theory of gravitation a variational principle is set up for the gravitational field equations and the equations of motion of matter. The general framework leads to Newton's equations of motion with an unspecified force term and, for irrotational motion, to a restriction on the propagation of the shear tensor along the streamlines of matter. The field equations obtained from the variation are weaker than the standard field equations of Newton-Cartan theory. An application to fluids with shear and bulk viscosity is given. (author)

  15. Application of the principle of maximum conformality to the hadroproduction of the Higgs boson at the LHC

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Sheng-Quan; Wu, Xing-Gang; Brodsky, Stanley J.; Mojaza, Matin

    2016-09-09

    We present improved perturbative QCD (pQCD) predictions for Higgs boson hadroproduction at the LHC by applying the principle of maximum conformality (PMC), a procedure which resums the pQCD series using the renormalization group (RG), thereby eliminating the dependence of the predictions on the choice of the renormalization scheme while minimizing sensitivity to the initial choice of the renormalization scale. In previous pQCD predictions for Higgs boson hadroproduction, it has been conventional to assume that the renormalization scale μ_r of the QCD coupling α_s(μ_r) is the Higgs mass and then to vary this choice over the range m_H/2 < μ_r < 2 m_H in order to estimate the theory uncertainty. However, this error estimate is only sensitive to the nonconformal β terms in the pQCD series, and thus it fails to correctly estimate the theory uncertainty in cases where a pQCD series has large higher-order contributions, as is the case for Higgs boson hadroproduction. Furthermore, this ad hoc choice of scale and range gives pQCD predictions which depend on the renormalization scheme being used, in contradiction to basic RG principles. In contrast, after applying the PMC, we obtain next-to-next-to-leading-order RG resummed pQCD predictions for Higgs boson hadroproduction which are renormalization-scheme independent and have minimal sensitivity to the choice of the initial renormalization scale. Taking m_H = 125 GeV, the PMC predictions for the pp → HX Higgs inclusive hadroproduction cross sections for various LHC center-of-mass energies are σ_Incl|_7TeV = 21.21^{+1.36}_{-1.32} pb, σ_Incl|_8TeV = 27.37^{+1.65}_{-1.59} pb, and σ_Incl|_13TeV = 65.72^{+3.46}_{-3.0} pb. We also predict the fiducial cross section σ_fid(pp → H → γγ): σ_fid|_7TeV = 30.1^{+2.3}_{-2.2} fb, σ_fid|_8TeV = 38.3^{+2.9}_{-2.8} fb, and σ_fid|_13TeV = 85.8^{+5.7}_{-5.3} fb. The error limits in these predictions include the small residual high

  16. Principle-based concept analysis: intentionality in holistic nursing theories.

    Science.gov (United States)

    Aghebati, Nahid; Mohammadi, Eesa; Ahmadi, Fazlollah; Noaparast, Khosrow Bagheri

    2015-03-01

    This is a report of a principle-based concept analysis of intentionality in holistic nursing theories. A principle-based concept analysis method was used to analyze seven holistic theories. The data included eight books and 31 articles (1998-2011), which were retrieved through MEDLINE and CINAHL. Erickson, Kriger, Parse, Watson, and Zahourek define intentionality as a capacity, a focused consciousness, and a pattern of human being. Rogers and Newman do not explicitly mention intentionality; however, they do explain pattern and consciousness (epistemology). Intentionality has been operationalized as a core concept of nurse-client relationships (pragmatic). The theories are consistent on intentionality as a noun and as an attribute of the person-intentionality is different from intent and intention (linguistic). There is ambiguity concerning the boundaries between intentionality and consciousness (logic). Theoretically, intentionality is an evolutionary capacity to integrate human awareness and experience. Because intentionality is an individualized concept, we introduced it as "a matrix of continuous known changes" that emerges in two forms: as a capacity of human being and as a capacity of transpersonal caring. This study has produced a theoretical definition of intentionality and provides a foundation for future research to further investigate intentionality to better delineate its boundaries.

  17. Einstein's Equivalence Principle and Invalidity of Thorne's Theory for LIGO

    Directory of Open Access Journals (Sweden)

    Lo C. Y.

    2006-04-01

    Full Text Available The theoretical foundation of LIGO's design is based on the equation of motion derived by Thorne. His formula, motivated by Einstein's theory of measurement, shows that the gravitational wave-induced displacement of a mass with respect to an object is proportional to the distance from the object. On the other hand, based on the observed bending of light and Einstein's equivalence principle, it is concluded that such induced displacement has nothing to do with the distance from another object. It is shown that the derivation of Thorne's formula rests on invalid assumptions that make it inapplicable to LIGO. This is a good counterexample for those who claimed that Einstein's equivalence principle is not important or even irrelevant.

  18. Unification of field theory and maximum entropy methods for learning probability densities

    Science.gov (United States)

    Kinney, Justin B.

    2015-09-01

    The need to estimate smooth probability distributions (a.k.a. probability densities) from finite sampled data is ubiquitous in science. Many approaches to this problem have been described, but none is yet regarded as providing a definitive solution. Maximum entropy estimation and Bayesian field theory are two such approaches. Both have origins in statistical physics, but the relationship between them has remained unclear. Here I unify these two methods by showing that every maximum entropy density estimate can be recovered in the infinite smoothness limit of an appropriate Bayesian field theory. I also show that Bayesian field theory estimation can be performed without imposing any boundary conditions on candidate densities, and that the infinite smoothness limit of these theories recovers the most common types of maximum entropy estimates. Bayesian field theory thus provides a natural test of the maximum entropy null hypothesis and, furthermore, returns an alternative (lower entropy) density estimate when the maximum entropy hypothesis is falsified. The computations necessary for this approach can be performed rapidly for one-dimensional data, and software for doing this is provided.

  19. Unification of field theory and maximum entropy methods for learning probability densities.

    Science.gov (United States)

    Kinney, Justin B

    2015-09-01

    The need to estimate smooth probability distributions (a.k.a. probability densities) from finite sampled data is ubiquitous in science. Many approaches to this problem have been described, but none is yet regarded as providing a definitive solution. Maximum entropy estimation and Bayesian field theory are two such approaches. Both have origins in statistical physics, but the relationship between them has remained unclear. Here I unify these two methods by showing that every maximum entropy density estimate can be recovered in the infinite smoothness limit of an appropriate Bayesian field theory. I also show that Bayesian field theory estimation can be performed without imposing any boundary conditions on candidate densities, and that the infinite smoothness limit of these theories recovers the most common types of maximum entropy estimates. Bayesian field theory thus provides a natural test of the maximum entropy null hypothesis and, furthermore, returns an alternative (lower entropy) density estimate when the maximum entropy hypothesis is falsified. The computations necessary for this approach can be performed rapidly for one-dimensional data, and software for doing this is provided.
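
To make the contrast concrete: under mean and variance constraints alone, the MaxEnt estimate is the moment-matched Gaussian, which can miss structure that a finite-smoothness estimate retains. The sketch below compares the two on bimodal data, using a kernel density estimate purely as a stand-in for the Bayesian field theory estimator discussed in the abstract; the data and bandwidth defaults are assumptions:

```python
import numpy as np
from scipy.stats import norm, gaussian_kde

# Bimodal sample that a two-moment MaxEnt fit cannot capture.
rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(-2, 0.5, 500), rng.normal(2, 0.5, 500)])
x = np.linspace(-5, 5, 400)

# MaxEnt under mean/variance constraints = the moment-matched Gaussian.
maxent_pdf = norm.pdf(x, loc=data.mean(), scale=data.std())

# Finite-smoothness alternative; gaussian_kde is only a stand-in here,
# NOT the Bayesian field theory estimator of the paper.
smooth_pdf = gaussian_kde(data)(x)

# The lower-entropy (more structured) estimate follows the data's two modes.
dx = x[1] - x[0]
for name, p in [("maxent", maxent_pdf), ("smooth", smooth_pdf)]:
    print(name, "entropy:", -np.sum(p * np.log(p + 1e-300)) * dx)
```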

  20. Theory-generating practice. Proposing a principle for learning design

    DEFF Research Database (Denmark)

    Buhl, Mie

    2016-01-01

    This contribution proposes a principle for learning design – Theory-Generating Practice (TGP) – as an alternative to the way university courses are traditionally taught and structured, with a series of theoretical lectures isolated from practical experience and concluding with an exam or a project. TGP focuses on embodied experience prior to text reading and lectures to enhance theoretical knowledge building, and takes tacit knowledge into account. The article introduces TGP, contextualizes it to a Danish tradition of didactics, and discusses it in relation to contemporary conceptual currents of didactic design and learning design. This is followed by a theoretical framing of TGP. Finally, three empirical examples from bachelor and master programs involving technology show three ways of practicing it.

  1. Post-Newtonian approximation of the maximum four-dimensional Yang-Mills gauge theory

    International Nuclear Information System (INIS)

    Smalley, L.L.

    1982-01-01

    We have calculated the post-Newtonian approximation of the maximum four-dimensional Yang-Mills theory proposed by Hsu. The theory contains torsion; however, torsion is not active at the level of the post-Newtonian approximation of the metric. Depending on the nature of the approximation, we obtain the general-relativistic values for the classical Robertson parameters (γ = β = 1), but deviations for the Nordtvedt effect and violations of post-Newtonian conservation laws. We conclude that in its present form the theory is not a viable theory of gravitation

  2. Directionality Theory and the Entropic Principle of Natural Selection

    Directory of Open Access Journals (Sweden)

    Lloyd A. Demetrius

    2014-10-01

    Full Text Available Darwinian fitness describes the capacity of an organism to appropriate resources from the environment and to convert these resources into net-offspring production. Studies of competition between related types indicate that fitness is analytically described by entropy, a statistical measure which is positively correlated with population stability, and describes the number of accessible pathways of energy flow between the individuals in the population. Directionality theory is a mathematical model of the evolutionary process based on the concept of evolutionary entropy as the measure of fitness. The theory predicts that the changes which occur as a population evolves from one non-equilibrium steady state to another are described by the following directionality principle – the fundamental theorem of evolution: (a) an increase in evolutionary entropy when resource composition is diverse and resource abundance constant; (b) a decrease in evolutionary entropy when resource composition is singular and resource abundance variable. Evolutionary entropy characterizes the dynamics of energy flow between the individual elements in various classes of biological networks: (a) where the units are individuals parameterized by age, and their age-specific fecundity and mortality; (b) where the units are metabolites, and the transitions are the biochemical reactions that convert substrates to products; (c) where the units are social groups, and the forces are the cooperative and competitive interactions between the individual groups. This article reviews the analytical basis of the evolutionary entropic principle, and describes applications of directionality theory to the study of evolutionary dynamics in two biological systems: (i) social networks – the evolution of cooperation; (ii) metabolic networks – the evolution of body size. Statistical thermodynamics is a mathematical model of macroscopic behavior in inanimate matter based on entropy, a statistical measure which

  3. Test the principle of maximum entropy in constant sum 2×2 game: Evidence in experimental economics

    International Nuclear Information System (INIS)

    Xu, Bin; Zhang, Hongen; Wang, Zhijian; Zhang, Jianbo

    2012-01-01

    By using laboratory experimental data, we test the uncertainty of strategy type in various competing environments with a two-person constant sum 2×2 game in the social system. It first shows that, in these competing game environments, the outcome of human decision-making obeys the principle of maximum entropy. -- Highlights: ► Test the uncertainty in two-person constant sum games with experimental data. ► On the game level, the constant sum game fits the principle of maximum entropy. ► On the group level, all empirical entropy values are close to the theoretical maxima. ► The results can be different for games that are not constant sum games.

  4. Test the principle of maximum entropy in constant sum 2×2 game: Evidence in experimental economics

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Bin, E-mail: xubin211@zju.edu.cn [Experimental Social Science Laboratory, Zhejiang University, Hangzhou, 310058 (China); Public Administration College, Zhejiang Gongshang University, Hangzhou, 310018 (China); Zhang, Hongen, E-mail: hongen777@163.com [Department of Physics, Zhejiang University, Hangzhou, 310027 (China); Wang, Zhijian, E-mail: wangzj@zju.edu.cn [Experimental Social Science Laboratory, Zhejiang University, Hangzhou, 310058 (China); Zhang, Jianbo, E-mail: jbzhang08@zju.edu.cn [Department of Physics, Zhejiang University, Hangzhou, 310027 (China)

    2012-03-19

    By using laboratory experimental data, we test the uncertainty of strategy type in various competing environments with a two-person constant sum 2×2 game in the social system. It first shows that, in these competing game environments, the outcome of human decision-making obeys the principle of maximum entropy. -- Highlights: ► Test the uncertainty in two-person constant sum games with experimental data. ► On the game level, the constant sum game fits the principle of maximum entropy. ► On the group level, all empirical entropy values are close to the theoretical maxima. ► The results can be different for games that are not constant sum games.
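
As a sketch of the kind of check reported in the highlights above, the snippet below computes the empirical Shannon entropy of an observed action sequence and compares it to the theoretical maximum of log2(2) = 1 bit for a two-action game; the play record here is synthetic, not the papers' experimental data:

```python
import numpy as np

def empirical_entropy(actions, base=2):
    """Shannon entropy (in the given base) of an observed action sequence."""
    _, counts = np.unique(actions, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log(p)) / np.log(base)

# Hypothetical play record of one subject in a 2x2 constant-sum game:
# 0 = first strategy, 1 = second strategy.
rng = np.random.default_rng(7)
plays = rng.integers(0, 2, size=200)
print(empirical_entropy(plays), "bits (theoretical maximum = 1.0)")
```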

  5. Unification of field theory and maximum entropy methods for learning probability densities

    OpenAIRE

    Kinney, Justin B.

    2014-01-01

    The need to estimate smooth probability distributions (a.k.a. probability densities) from finite sampled data is ubiquitous in science. Many approaches to this problem have been described, but none is yet regarded as providing a definitive solution. Maximum entropy estimation and Bayesian field theory are two such approaches. Both have origins in statistical physics, but the relationship between them has remained unclear. Here I unify these two methods by showing that every maximum entropy de...

  6. Discrete Maximum Principle for a 1D Problem with Piecewise-Constant Coefficients Solved by hp-FEM

    Czech Academy of Sciences Publication Activity Database

    Vejchodský, Tomáš; Šolín, Pavel

    2007-01-01

    Vol. 15, No. 3 (2007), pp. 233-243 ISSN 1570-2820 R&D Projects: GA ČR GP201/04/P021; GA ČR GA102/05/0629 Institutional research plan: CEZ:AV0Z10190503; CEZ:AV0Z20570509 Keywords: discrete maximum principle * hp-FEM * Poisson equation Subject RIV: BA - General Mathematics

  7. Maximum entropy approach to H-theory: Statistical mechanics of hierarchical systems.

    Science.gov (United States)

    Vasconcelos, Giovani L; Salazar, Domingos S P; Macêdo, A M S

    2018-02-01

    A formalism, called H-theory, is applied to the problem of statistical equilibrium of a hierarchical complex system with multiple time and length scales. In this approach, the system is formally treated as being composed of a small subsystem-representing the region where the measurements are made-in contact with a set of "nested heat reservoirs" corresponding to the hierarchical structure of the system, where the temperatures of the reservoirs are allowed to fluctuate owing to the complex interactions between degrees of freedom at different scales. The probability distribution function (pdf) of the temperature of the reservoir at a given scale, conditioned on the temperature of the reservoir at the next largest scale in the hierarchy, is determined from a maximum entropy principle subject to appropriate constraints that describe the thermal equilibrium properties of the system. The marginal temperature distribution of the innermost reservoir is obtained by integrating over the conditional distributions of all larger scales, and the resulting pdf is written in analytical form in terms of certain special transcendental functions, known as the Fox H functions. The distribution of states of the small subsystem is then computed by averaging the quasiequilibrium Boltzmann distribution over the temperature of the innermost reservoir. This distribution can also be written in terms of H functions. The general family of distributions reported here recovers, as particular cases, the stationary distributions recently obtained by Macêdo et al. [Phys. Rev. E 95, 032315 (2017)10.1103/PhysRevE.95.032315] from a stochastic dynamical approach to the problem.
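
The nested-reservoir construction has a simple one-level analogue from superstatistics: averaging the quasi-equilibrium Boltzmann distribution over a fluctuating inverse temperature makes the marginal heavy-tailed. The Monte Carlo sketch below illustrates this with a Gamma-distributed inverse temperature, for which the marginal of a conditional Gaussian is a Student-t; the parameters are illustrative, and this is not the paper's full H-function hierarchy:

```python
import numpy as np

# One-level superstatistics sketch: draw an inverse temperature beta from a
# Gamma distribution (the fluctuating reservoir), then draw an observable
# from the quasi-equilibrium Gaussian p(v | beta) ~ exp(-beta * v^2 / 2).
rng = np.random.default_rng(2)
n, shape = 100_000, 3.0
beta = rng.gamma(shape=shape, scale=1.0 / shape, size=n)   # E[beta] = 1
v = rng.normal(0.0, 1.0 / np.sqrt(beta))                   # marginal is a Student-t

# Positive excess kurtosis signals the heavy tails of the marginal
# (a pure Gaussian would give 0).
print("excess kurtosis:", np.mean(v**4) / np.mean(v**2)**2 - 3.0)
```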

  8. Variational principle for the Bloch unified reaction theory

    International Nuclear Information System (INIS)

    MacDonald, W.; Rapheal, R.

    1975-01-01

    The unified reaction theory formulated by Claude Bloch uses a boundary value operator to write the Schroedinger equation for a scattering state as an inhomogeneous equation over the interaction region. As suggested by Lane and Robson, this equation can be solved by using a matrix representation on any set which is complete over the interaction volume. Lane and Robson have proposed, however, that a variational form of the Bloch equation can be used to obtain a ''best'' value for the S-matrix when a finite subset of this basis is used. The variational principle suggested by Lane and Robson, which gives a many-channel S-matrix different from the matrix solution on a finite basis, is considered first, and it is shown that the difference results from the fact that their variational principle is not, in fact, equivalent to the Bloch equation. Then a variational principle is presented which is fully equivalent to the Bloch form of the Schroedinger equation, and it is shown that the resulting S-matrix is the same as that obtained from the matrix solution of this equation. (U.S.)

  9. Dispersion correction derived from first principles for density functional theory and Hartree-Fock theory.

    Science.gov (United States)

    Guidez, Emilie B; Gordon, Mark S

    2015-03-12

    The modeling of dispersion interactions in density functional theory (DFT) is commonly performed using an energy correction that involves empirically fitted parameters for all atom pairs of the system investigated. In this study, the first-principles-derived dispersion energy from the effective fragment potential (EFP) method is implemented for the density functional theory (DFT-D(EFP)) and Hartree-Fock (HF-D(EFP)) energies. Overall, DFT-D(EFP) performs similarly to the semiempirical DFT-D corrections for the test cases investigated in this work. HF-D(EFP) tends to underestimate binding energies and overestimate intermolecular equilibrium distances, relative to coupled cluster theory, most likely due to incomplete accounting for electron correlation. Overall, this first-principles dispersion correction yields results that are in good agreement with coupled-cluster calculations at a low computational cost.
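
For orientation, most pairwise dispersion corrections of the kind discussed above share a damped −C6/R⁶ form. The sketch below implements that generic shape; the combining rule, damping function, and coefficient values are illustrative assumptions, not the EFP-derived quantities of the paper:

```python
import numpy as np

def dispersion_energy(coords, c6, s6=1.0, d=20.0, r0=3.0):
    """Generic pairwise dispersion sum: E = -s6 * sum_ij f_damp(R_ij) * C6_ij / R_ij^6.

    coords: (N, 3) atomic positions; c6: per-atom C6 coefficients.
    s6, d, r0 are illustrative damping/scaling parameters.
    """
    e = 0.0
    n = len(coords)
    for i in range(n):
        for j in range(i + 1, n):
            r = np.linalg.norm(coords[i] - coords[j])
            c6ij = np.sqrt(c6[i] * c6[j])                  # geometric-mean combining rule
            fdamp = 1.0 / (1.0 + np.exp(-d * (r / r0 - 1.0)))  # switch off at short range
            e -= s6 * fdamp * c6ij / r**6
    return e

# Toy usage: two "atoms" 4 units apart with illustrative C6 values.
print(dispersion_energy(np.array([[0.0, 0.0, 0.0], [4.0, 0.0, 0.0]]), c6=[10.0, 10.0]))
```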

  10. Principles of physics from quantum field theory to classical mechanics

    CERN Document Server

    Jun, Ni

    2014-01-01

    This book starts from a set of common basic principles to establish the formalisms in all areas of fundamental physics, including quantum field theory, quantum mechanics, statistical mechanics, thermodynamics, general relativity, electromagnetic field, and classical mechanics. Instead of the traditional pedagogic way, the author arranges the subjects and formalisms in a logical-sequential way, i.e. all the formulas are derived from the formulas before them. The formalisms are also kept self-contained. Most of the required mathematical tools are also given in the appendices. Although this book covers all the disciplines of fundamental physics, the book is concise and can be treated as an integrated entity. This is consistent with the aphorism that simplicity is beauty, unification is beauty, and thus physics is beauty. The book may be used as an advanced textbook by graduate students. It is also suitable for physicists who wish to have an overview of fundamental physics. Readership: This is an advanced gradua...

  11. Theory-Generating Practice: Proposing a principle for learning design

    Directory of Open Access Journals (Sweden)

    Mie Buhl

    2016-06-01

    Full Text Available This contribution proposes a principle for learning design: Theory-Generating Practice (TGP as an alternative to the way university courses often are taught and structured with a series of theoretical lectures separate from practical experience and concluding with an exam or a project. The aim is to contribute to a development of theoretical frameworks for learning designs by suggesting TGP which may lead to new practices and turn the traditional dramaturgy for teaching upside down. TGP focuses on embodied experience prior to text reading and lectures to enhance theoretical knowledge building and takes tacit knowledge into account. The article introduces TGP and contextualizes it to a Danish tradition of didactics as well as discusses it in relation to contemporary conceptual currents of didactic design and learning design. This is followed by a theoretical framing of TGP, and is discussed through three empirical examples from bachelor and master programs involving technology, and showing three ways of practicing it.

  12. Theory-Generating Practice: Proposing a principle for learning design

    Directory of Open Access Journals (Sweden)

    Mie Buhl

    2016-05-01

    Full Text Available This contribution proposes a principle for learning design: Theory-Generating Practice (TGP as an alternative to the way university courses often are taught and structured with a series of theoretical lectures separate from practical experience and concluding with an exam or a project. The aim is to contribute to a development of theoretical frameworks for learning designs by suggesting TGP which may lead to new practices and turn the traditional dramaturgy for teaching upside down. TGP focuses on embodied experience prior to text reading and lectures to enhance theoretical knowledge building and takes tacit knowledge into account. The article introduces TGP and contextualizes it to a Danish tradition of didactics as well as discusses it in relation to contemporary conceptual currents of didactic design and learning design. This is followed by a theoretical framing of TGP, and is discussed through three empirical examples from bachelor and master programs involving technology, and showing three ways of practicing it.

  13. Principle-theoretic approach of kondo and construction-theoretic formalism of gauge theories

    International Nuclear Information System (INIS)

    Jain, L.C.

    1986-01-01

    Einstein classified the various theories in physics as principle-theories and constructive-theories. In this lecture, Kondo's approach to microscopic and macroscopic phenomena is analysed as a principle-theoretic pursuit followed by construction. The fundamentals of his theory may be recalled as the Tristimulus principle, the Observation principle, Kawaguchi spaces, empirical information, the epistemological point of view, unitarity, intrinsicality, and dimensional analysis subject to logical and geometrical achievement. On the other hand, various physicists have evolved constructive gauge theories through the phenomenological point of view, often a collective one. Their synthetic method involves fibre bundles and connections, path integrals, as well as other hypothetical structures. These lead towards clarity, completeness and adaptability.

  14. Spatio-temporal spike train analysis for large scale networks using the maximum entropy principle and Monte Carlo method

    International Nuclear Information System (INIS)

    Nasser, Hassan; Cessac, Bruno; Marre, Olivier

    2013-01-01

    Understanding the dynamics of neural networks is a major challenge in experimental neuroscience. For that purpose, a model of the recorded activity that reproduces the main statistics of the data is required. In the first part, we present a review of recent results dealing with spike train statistics analysis using maximum entropy models (MaxEnt). Most of these studies have focused on modelling synchronous spike patterns, leaving aside the temporal dynamics of the neural activity. However, the maximum entropy principle can be generalized to the temporal case, leading to Markovian models where memory effects and time correlations in the dynamics are properly taken into account. In the second part, we present a new method based on Monte Carlo sampling which is suited to the fitting of large-scale spatio-temporal MaxEnt models. The formalism and the tools presented here will be essential for fitting MaxEnt spatio-temporal models to large neural ensembles. (paper)
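
The simplest synchronous MaxEnt model in this line of work is the pairwise (Ising-like) model fitted to measured firing rates and pairwise correlations. The sketch below fits such a model by exact enumeration, which is feasible only for a handful of neurons; the Monte Carlo sampling method described in the abstract replaces the enumeration step for large networks. The target statistics here are synthetic:

```python
import numpy as np
from itertools import product

# Pairwise MaxEnt model P(s) ~ exp(h.s + (1/2) s.J.s) over binary spike words.
N = 4
states = np.array(list(product([0, 1], repeat=N)), dtype=float)

target_mean = np.full(N, 0.3)              # synthetic firing rates <s_i>
target_corr = np.full((N, N), 0.12)        # synthetic pairwise <s_i s_j>, i != j
np.fill_diagonal(target_corr, target_mean)

h, J = np.zeros(N), np.zeros((N, N))
for _ in range(5000):                      # gradient ascent on the log-likelihood
    E = states @ h + 0.5 * np.einsum('ki,ij,kj->k', states, J, states)
    p = np.exp(E - E.max())
    p /= p.sum()
    model_mean = p @ states
    model_corr = states.T @ (states * p[:, None])
    h += 0.5 * (target_mean - model_mean)
    dJ = 0.5 * (target_corr - model_corr)
    np.fill_diagonal(dJ, 0.0)              # the means are handled by h
    J += dJ

print("fitted rates:", p @ states)         # should approach target_mean
```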

  15. A theory of timing in scintillation counters based on maximum likelihood estimation

    International Nuclear Information System (INIS)

    Tomitani, Takehiro

    1982-01-01

    A theory of timing in scintillation counters based on maximum likelihood estimation is presented. An optimum filter that minimizes the variance of the timing is described. A simple formula to estimate the variance of the timing is presented as a function of the photoelectron number, the scintillation decay constant, and the single-electron transit time spread in the photomultiplier. The present method is compared with the theory of E. Gatti and V. Svelto. The proposed method was applied to two simple models, and rough estimates of the potential time resolution of several scintillators are given. The proposed method is applicable to timing in Cerenkov counters and semiconductor detectors as well. (author)
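
A minimal version of the likelihood model in this setting treats each photoelectron time as the true event time plus an exponential scintillation delay smeared by Gaussian transit-time spread, i.e. an exponentially modified Gaussian. The sketch below maximizes that likelihood over the event time; the decay constant, spread, and data are synthetic assumptions, not the paper's optimum-filter formulation:

```python
import numpy as np
from scipy.stats import exponnorm
from scipy.optimize import minimize_scalar

# Synthetic photoelectron arrival times (ns): true event time t0 plus an
# exponential scintillation delay (tau) and Gaussian transit-time spread (sigma).
tau, sigma, t0_true, n_pe = 30.0, 2.0, 100.0, 50
rng = np.random.default_rng(3)
t = t0_true + rng.exponential(tau, n_pe) + rng.normal(0.0, sigma, n_pe)

def nll(t0):
    # Exponentially modified Gaussian likelihood with shape K = tau / sigma.
    return -np.sum(exponnorm.logpdf(t, K=tau / sigma, loc=t0, scale=sigma))

res = minimize_scalar(nll, bounds=(t.min() - 50, t.min() + 50), method="bounded")
print("estimated t0:", res.x, "(true:", t0_true, ")")
```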

  16. A parametrization of two-dimensional turbulence based on a maximum entropy production principle with a local conservation of energy

    International Nuclear Information System (INIS)

    Chavanis, Pierre-Henri

    2014-01-01

    In the context of two-dimensional (2D) turbulence, we apply the maximum entropy production principle (MEPP) by enforcing a local conservation of energy. This leads to an equation for the vorticity distribution that conserves all the Casimirs, the energy, and that increases monotonically the mixing entropy (H-theorem). Furthermore, the equation for the coarse-grained vorticity dissipates monotonically all the generalized enstrophies. These equations may provide a parametrization of 2D turbulence. They do not generally relax towards the maximum entropy state. The vorticity current vanishes for any steady state of the 2D Euler equation. Interestingly, the equation for the coarse-grained vorticity obtained from the MEPP turns out to coincide, after some algebraic manipulations, with the one obtained with the anticipated vorticity method. This shows a connection between these two approaches when the conservation of energy is treated locally. Furthermore, the newly derived equation, which incorporates a diffusion term and a drift term, has a nice physical interpretation in terms of a selective decay principle. This sheds new light on both the MEPP and the anticipated vorticity method. (paper)

  17. Toward a Principled Sampling Theory for Quasi-Orders.

    Science.gov (United States)

    Ünlü, Ali; Schrepp, Martin

    2016-01-01

    Quasi-orders, that is, reflexive and transitive binary relations, have numerous applications. In educational theories, the dependencies of mastery among the problems of a test can be modeled by quasi-orders. Methods such as item tree or Boolean analysis that mine for quasi-orders in empirical data are sensitive to the underlying quasi-order structure. These data mining techniques have to be compared based on extensive simulation studies, with unbiased samples of randomly generated quasi-orders at their basis. In this paper, we develop techniques that can provide the required quasi-order samples. We introduce a discrete doubly inductive procedure for incrementally constructing the set of all quasi-orders on a finite item set. A randomization of this deterministic procedure allows us to generate representative samples of random quasi-orders. With an outer level inductive algorithm, we consider the uniform random extensions of the trace quasi-orders to higher dimension. This is combined with an inner level inductive algorithm to correct the extensions that violate the transitivity property. The inner level correction step entails sampling biases. We propose three algorithms for bias correction and investigate them in simulation. It is evident that, even on item sets of up to 50 items, the new algorithms create close to representative quasi-order samples within acceptable computing time. Hence, the principled approach is a significant improvement to existing methods that are used to draw quasi-orders uniformly at random but cannot cope with reasonably large item sets.
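
For contrast with the paper's representative-sampling algorithms, the naive baseline below draws a random reflexive relation and repairs transitivity by closure; exactly this kind of repair step introduces the sampling bias the authors set out to correct. The item count and edge probability are illustrative:

```python
import numpy as np

def random_quasi_order(n, p=0.3, rng=None):
    """Draw a random reflexive relation and repair transitivity by closure.

    NOTE: taking the transitive closure over-represents "large" quasi-orders,
    which is the sampling bias the paper's inductive algorithms correct.
    This is only a naive baseline sketch.
    """
    rng = rng or np.random.default_rng()
    R = rng.random((n, n)) < p
    np.fill_diagonal(R, True)                  # enforce reflexivity
    for k in range(n):                         # Warshall transitive closure
        R |= np.outer(R[:, k], R[k, :])
    return R

Q = random_quasi_order(6, rng=np.random.default_rng(4))
print(Q.astype(int))                           # adjacency matrix of a quasi-order
```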

  18. Toward a Principled Sampling Theory for Quasi-Orders

    Science.gov (United States)

    Ünlü, Ali; Schrepp, Martin

    2016-01-01

    Quasi-orders, that is, reflexive and transitive binary relations, have numerous applications. In educational theories, the dependencies of mastery among the problems of a test can be modeled by quasi-orders. Methods such as item tree or Boolean analysis that mine for quasi-orders in empirical data are sensitive to the underlying quasi-order structure. These data mining techniques have to be compared based on extensive simulation studies, with unbiased samples of randomly generated quasi-orders at their basis. In this paper, we develop techniques that can provide the required quasi-order samples. We introduce a discrete doubly inductive procedure for incrementally constructing the set of all quasi-orders on a finite item set. A randomization of this deterministic procedure allows us to generate representative samples of random quasi-orders. With an outer level inductive algorithm, we consider the uniform random extensions of the trace quasi-orders to higher dimension. This is combined with an inner level inductive algorithm to correct the extensions that violate the transitivity property. The inner level correction step entails sampling biases. We propose three algorithms for bias correction and investigate them in simulation. It is evident that, even on item sets of up to 50 items, the new algorithms create close to representative quasi-order samples within acceptable computing time. Hence, the principled approach is a significant improvement to existing methods that are used to draw quasi-orders uniformly at random but cannot cope with reasonably large item sets. PMID:27965601

  19. Principles of general relativity theory in terms of the present day physics

    International Nuclear Information System (INIS)

    Pervushin, V.N.

    1986-01-01

    The history of the gradual unification of general relativity theory and quantum field theory on the basis of unified geometrical principles is traced. Gauge invariance principles have become universal for the construction of all physical theories. Quantum mechanics, electrodynamics and the Einstein gravitation theory were used to form geometrical principles. The identity of inertial and gravitational masses is the experimental basis of general relativity theory (GRT). It is shown that a correct understanding of the GRT foundations is a developing process, related to the development of present-day physics and stimulating this development.

  20. An extension theory-based maximum power tracker using a particle swarm optimization algorithm

    International Nuclear Information System (INIS)

    Chao, Kuei-Hsiang

    2014-01-01

    Highlights: • We propose an adaptive maximum power point tracking (MPPT) approach for PV systems. • Transient and steady state performances in the tracking process are improved. • The proposed MPPT can automatically tune the tracking step size along a P–V curve. • A PSO algorithm is used to determine the weighting values of extension theory. - Abstract: The aim of this work is to present an adaptive maximum power point tracking (MPPT) approach for a photovoltaic (PV) power generation system. Integrating extension theory with the conventional perturb-and-observe method, a maximum power point (MPP) tracker is made able to automatically tune the tracking step size by way of category recognition along a P–V characteristic curve. Accordingly, the transient and steady state performances in the tracking process are improved. Furthermore, an optimization approach is proposed on the basis of a particle swarm optimization (PSO) algorithm for complexity reduction in the determination of the weighting values. At the end of this work, the simulated improvement in the tracking performance is experimentally validated by an MPP tracker with a programmable system-on-chip (PSoC) based controller
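
A bare-bones flavor of the adaptive step idea (without the extension-theory category recognition or the PSO-tuned weights, which are the paper's contribution) can be sketched as a perturb-and-observe update whose step size scales with the local P–V slope. All names and scaling choices below are illustrative assumptions:

```python
def adaptive_po_step(v, p, v_prev, p_prev, base_step=0.5, scale=0.05):
    """One perturb-and-observe iteration with a slope-scaled step size.

    v, p: present PV voltage and power; v_prev, p_prev: previous sample.
    Returns the next voltage reference.
    """
    dv, dp = v - v_prev, p - p_prev
    if dv == 0:
        return v + base_step                         # force a perturbation
    slope = dp / dv                                  # local dP/dV
    step = base_step * min(abs(slope) * scale, 1.0)  # shrink the step near the MPP
    direction = 1.0 if slope > 0 else -1.0           # climb the P-V curve
    return v + direction * step
```

Because dP/dV tends to zero at the maximum power point, the step automatically shrinks near the MPP (reducing steady-state oscillation) while staying large on the steep flanks (fast transients), which is the behavior the abstract describes.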

  1. The collapsing of multigroup cross sections in optimization problems solved by means of the Pontryagin maximum principle in nuclear reactor dynamics

    International Nuclear Information System (INIS)

    Anton, V.

    1979-12-01

    The collapsing formulae for the optimization problems solved by means of the Pontryagin maximum principle in nuclear reactor dynamics are presented. A comparison with the corresponding formulae of the static case is also given. (author)
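
The report's specific collapsing formulae are not reproduced here, but the standard flux-weighted collapse that such formulae generalize is easy to state: each broad-group cross section is the fine-group value averaged with the group fluxes as weights, sigma_G = sum_g(phi_g * sigma_g) / sum_g(phi_g) for fine groups g in broad group G. A minimal sketch with illustrative numbers:

```python
import numpy as np

# Illustrative fine-group data (not from the report).
sigma_fine = np.array([2.1, 1.8, 1.5, 1.2, 0.9, 0.7])   # cross sections, barns
phi_fine   = np.array([0.5, 1.0, 2.0, 2.5, 1.5, 0.5])   # group fluxes
broad_map  = [(0, 3), (3, 6)]                            # fine-group ranges per broad group

sigma_broad = [np.sum(phi_fine[a:b] * sigma_fine[a:b]) / np.sum(phi_fine[a:b])
               for a, b in broad_map]
print(sigma_broad)                                       # flux-weighted broad-group values
```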

  2. Generalized uncertainty principle as a consequence of the effective field theory

    Energy Technology Data Exchange (ETDEWEB)

    Faizal, Mir, E-mail: mirfaizalmir@gmail.com [Irving K. Barber School of Arts and Sciences, University of British Columbia – Okanagan, Kelowna, British Columbia V1V 1V7 (Canada); Department of Physics and Astronomy, University of Lethbridge, Lethbridge, Alberta T1K 3M4 (Canada); Ali, Ahmed Farag, E-mail: ahmed.ali@fsc.bu.edu.eg [Department of Physics, Faculty of Science, Benha University, Benha, 13518 (Egypt); Netherlands Institute for Advanced Study, Korte Spinhuissteeg 3, 1012 CG Amsterdam (Netherlands); Nassar, Ali, E-mail: anassar@zewailcity.edu.eg [Department of Physics, Zewail City of Science and Technology, 12588, Giza (Egypt)

    2017-02-10

    We will demonstrate that the generalized uncertainty principle exists because of the derivative expansion in the effective field theories. This is because in the framework of the effective field theories, the minimum measurable length scale has to be integrated away to obtain the low energy effective action. We will analyze the deformation of a massive free scalar field theory by the generalized uncertainty principle, and demonstrate that the minimum measurable length scale corresponds to a second more massive scale in the theory, which has been integrated away. We will also analyze CFT operators dual to this deformed scalar field theory, and observe that scaling of the new CFT operators indicates that they are dual to this more massive scale in the theory. We will use holographic renormalization to explicitly calculate the renormalized boundary action with counter terms for this scalar field theory deformed by generalized uncertainty principle, and show that the generalized uncertainty principle contributes to the matter conformal anomaly.

  3. Generalized uncertainty principle as a consequence of the effective field theory

    Directory of Open Access Journals (Sweden)

    Mir Faizal

    2017-02-01

    Full Text Available We will demonstrate that the generalized uncertainty principle exists because of the derivative expansion in the effective field theories. This is because in the framework of the effective field theories, the minimum measurable length scale has to be integrated away to obtain the low energy effective action. We will analyze the deformation of a massive free scalar field theory by the generalized uncertainty principle, and demonstrate that the minimum measurable length scale corresponds to a second more massive scale in the theory, which has been integrated away. We will also analyze CFT operators dual to this deformed scalar field theory, and observe that scaling of the new CFT operators indicates that they are dual to this more massive scale in the theory. We will use holographic renormalization to explicitly calculate the renormalized boundary action with counter terms for this scalar field theory deformed by generalized uncertainty principle, and show that the generalized uncertainty principle contributes to the matter conformal anomaly.

  4. Principles of Economic Union. An Extension of John Rawls's Theory of Justice

    NARCIS (Netherlands)

    Wolthuis, A.J.

    2017-01-01

    In this article I uncover the principles of justice by which an economic union is to be constituted. For this purpose I extend John Rawls’s constructivist theory of justice to economically integrated societies. With regard to the principles I defend a twofold claim. First, the principles of economic…

  5. Principles of General Systems Theory: Some Implications for Higher Education Administration

    Science.gov (United States)

    Gilliland, Martha W.; Gilliland, J. Richard

    1978-01-01

    Three principles of general systems theory are presented and systems theory is distinguished from systems analysis. The principles state that all systems tend to become more disorderly, that they must be diverse in order to be stable, and that only those maximizing their resource utilization for doing useful work will survive. (Author/LBH)

  6. Optimizing Computer Assisted Instruction By Applying Principles of Learning Theory.

    Science.gov (United States)

    Edwards, Thomas O.

    The development of learning theory and its application to computer-assisted instruction (CAI) are described. Among the early theoretical constructs thought to be important are E. L. Thorndike's connectionist concept of learning, Neal Miller's theory of motivation, and B. F. Skinner's theory of operant conditioning. Early devices incorporating those…

  7. J/ψ+χcJ production at the B factories under the principle of maximum conformality

    International Nuclear Information System (INIS)

    Wang, Sheng-Quan; Wu, Xing-Gang; Zheng, Xu-Chang; Shen, Jian-Ming; Zhang, Qiong-Lian

    2013-01-01

    Under the conventional scale setting, the renormalization scale uncertainty usually constitutes a systematic error for a fixed-order perturbative QCD estimation. The recently suggested principle of maximum conformality (PMC) provides a principle to eliminate such scale ambiguity in a step-by-step way. Using the PMC, all non-conformal terms in the perturbative expansion series are summed into the running coupling, and one obtains a unique, scale-fixed, scheme-independent prediction at any finite order. In this paper, we make a detailed PMC analysis of both the polarized and the unpolarized cross sections for the double charmonium production process e+ + e− → J/ψ(ψ′) + χ_cJ with (J = 0, 1, 2). The running behavior of the coupling constant, governed by the PMC scales, is determined exactly for the specific processes. We compare our predictions with the measurements at the B factories, BaBar and Belle, and with the theoretical estimations in the literature. Because the non-conformal terms are different for the various polarized and unpolarized cross sections, the PMC scales of these cross sections are in principle different. It is found that all the PMC scales are almost independent of the initial choice of renormalization scale. Thus, the large renormalization scale uncertainty usually found in the literature under the conventional scale setting, up to ∼40% at the NLO level, for both the polarized and the unpolarized cross sections is greatly suppressed. It is found that the charmonium production is dominated by the J=0 channel. After PMC scale setting, we obtain σ(J/ψ + χ_c0) = 12.25 +3.70/−3.13 fb and σ(ψ′ + χ_c0) = 5.23 +1.56/−1.32 fb, where the squared average errors are caused by the bound state parameters m_c, |R_J/ψ(0)| and |R′_χcJ(0)|, which are non-perturbative error sources distinct from the QCD scale-setting problem. In comparison to the experimental data, a more accurate theoretical estimation shall be helpful for a precise…
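
    As a toy illustration of the PMC mechanics described above (not the authors' analysis of this process), the sketch below absorbs the β0-dependent, non-conformal part of a hypothetical NLO coefficient into the running coupling, which fixes the PMC scale; all coefficient values are made up.

        import numpy as np

        # Toy observable: rho = a(mu) + a(mu)**2 * (r_conf + beta0*r_beta),
        # with a = alpha_s/(4*pi) and one-loop running da/dln(mu**2) = -beta0*a**2.
        nf = 4
        beta0 = 11.0 - 2.0 * nf / 3.0
        mu = 10.0                      # initial renormalization scale in GeV (arbitrary)
        r_conf, r_beta = 1.8, -0.7     # hypothetical conformal / non-conformal parts

        # Choosing ln(mu_pmc**2/mu**2) = -r_beta moves the beta0 term into a(mu_pmc),
        # leaving the conformal series rho = a(mu_pmc)*(1 + a*r_conf).
        mu_pmc = mu * np.exp(-r_beta / 2.0)
        print(f"PMC scale: {mu_pmc:.2f} GeV")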

  8. Convex integration theory solutions to the h-principle in geometry and topology

    CERN Document Server

    Spring, David

    1998-01-01

    This book provides a comprehensive study of convex integration theory in immersion-theoretic topology. Convex integration theory, developed originally by M. Gromov, provides general topological methods for solving the h-principle for a wide variety of problems in differential geometry and topology, with applications also to PDE theory and to optimal control theory. Though topological in nature, the theory is based on a precise analytical approximation result for higher order derivatives of functions, proved by M. Gromov. This book is the first to present an exacting record and exposition of all of the basic concepts and technical results of convex integration theory in higher order jet spaces, including the theory of iterated convex hull extensions and the theory of relative h-principles. A second feature of the book is its detailed presentation of applications of the general theory to topics in symplectic topology, divergence free vector fields on 3-manifolds, isometric immersions, totally real embeddings, u...

  9. The argument of the principles in contemporary theory of law: An antipositivist plea

    Directory of Open Access Journals (Sweden)

    José Julián Suárez-Rodríguez

    2012-06-01

    Full Text Available The theory of legal principles enjoys today a resonance unknown in earlier periods of legal science, and several authors have dedicated themselves to its formation, each contributing important elements to its configuration. This article presents the characteristics of the contemporary theory of principles and the contributions that the most important authors in the field have made to it. Furthermore, it shows how the theory of principles has been developed as an argument against the main theses of legal positivism, the dominant legal culture until the second half of the twentieth century.

  10. Application of the principles of Vygotsky's sociocultural theory of ...

    African Journals Online (AJOL)

    Sociocultural theory by Vygotsky (1896-1934) is a theory that has become popular in educational practice in recent years. It is especially important in the instruction of children at the preschool level, as it is most suitable for their development and learning, which occurs largely through social interaction. This paper discussed the ...

  11. Basic economic principles of road pricing: From theory to applications

    NARCIS (Netherlands)

    Rouwendal, J.; Verhoef, E.T.

    2006-01-01

    This paper presents a non-technical introduction to the economic principles relevant for transport pricing design and analysis. We provide the basic rationale behind pricing of externalities, discuss why simple Pigouvian tax rules that equate charges to marginal external costs are not optimal in…

  12. MBA theory and application of business and management principles

    CERN Document Server

    Davim, J

    2016-01-01

    This book focuses on the relevant subjects in the curriculum of an MBA program. Covering many different fields within business, this book is ideal for readers who want to prepare for a Master of Business Administration degree. It provides discussions and exchanges of information on principles, strategies, models, techniques, methodologies and applications in the business area.

  13. Designing the Electronic Classroom: Applying Learning Theory and Ergonomic Design Principles.

    Science.gov (United States)

    Emmons, Mark; Wilkinson, Frances C.

    2001-01-01

    Applies learning theory and ergonomic principles to the design of effective learning environments for library instruction. Discusses features of electronic classroom ergonomics, including the ergonomics of physical space, environmental factors, and workstations; and includes classroom layouts. (Author/LRW)

  14. General fluid theories, variational principles and self-organization

    International Nuclear Information System (INIS)

    Mahajan, S.M.

    2002-01-01

    This paper reports two distinct but related advances: (1) The development and application of fluid theories that transcend conventional magnetohydrodynamics (MHD), in particular, theories that are valid in the long-mean-free-path limit and in which pressure anisotropy, heat flow, and arbitrarily strong sheared flows are treated consistently. (2) The discovery of new pressure-confining plasma configurations that are self-organized relaxed states. (author)

  15. Scattering theory in quantum mechanics. Physical principles and mathematical methods

    International Nuclear Information System (INIS)

    Amrein, W.O.; Jauch, J.M.; Sinha, K.B.

    1977-01-01

    A contemporary approach is given to the classical topics of physics. The purpose is to explain the basic physical concepts of quantum scattering theory, to develop the necessary mathematical tools for their description, to display the interrelation between the three methods (solutions of the Schroedinger equation, stationary scattering theory, and the time-dependent approach), and to derive the properties of various quantities of physical interest with mathematically rigorous methods.

  16. Reconceptualization of the Diffusion Process: An Application of Selected Principles from Modern Systems Theory.

    Science.gov (United States)

    Silver, Wayne

    A description of the communication behaviors in high innovation societies depends on the application of selected principles from modern systems theory. The first is the principle of equifinality which explains the activities of open systems. If the researcher views society as an open system, he frees himself from the client approach since society…

  17. Developing Soil Moisture Profiles Utilizing Remotely Sensed MW and TIR Based SM Estimates Through Principle of Maximum Entropy

    Science.gov (United States)

    Mishra, V.; Cruise, J. F.; Mecikalski, J. R.

    2015-12-01

    Developing accurate vertical soil moisture profiles with minimum input requirements is important to agricultural as well as land surface modeling. Earlier studies show that the principle of maximum entropy (POME) can be utilized to develop vertical soil moisture profiles with accuracy (MAE of about 1% for a monotonically dry profile; nearly 2% for monotonically wet profiles and 3.8% for mixed profiles) with minimum constraints (surface, mean and bottom soil moisture contents). In this study, the constraints for the vertical soil moisture profiles were obtained from remotely sensed data. Low resolution (25 km) MW soil moisture estimates (AMSR-E) were downscaled to 4 km using a soil evaporation efficiency index based disaggregation approach. The downscaled MW soil moisture estimates served as a surface boundary condition, while 4 km resolution TIR based Atmospheric Land Exchange Inverse (ALEXI) estimates provided the required mean root-zone soil moisture content. Bottom soil moisture content is assumed to be a soil dependent constant. Multi-year (2002-2011) gridded profiles were developed for the southeastern United States using the POME method. The soil moisture profiles were compared to those generated in land surface models (the Land Information System (LIS) and the agricultural model DSSAT) along with available NRCS SCAN sites in the study region. The end product, spatial soil moisture profiles, can be assimilated into agricultural and hydrologic models in lieu of precipitation for data-scarce regions.
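
    The paper's closed-form POME profiles are not reproduced in this record. As a numerical sketch of the same idea, the code below maximizes the Shannon entropy of a discretized profile subject to the three constraints named above (surface, root-zone mean, and bottom moisture); all values are hypothetical.

        import numpy as np
        from scipy.optimize import minimize

        # Hypothetical constraints (volumetric soil moisture, cm^3/cm^3)
        theta_surface, theta_mean, theta_bottom = 0.12, 0.22, 0.30
        n = 20                                    # depth bins

        def neg_entropy(theta):
            p = theta / theta.sum()               # profile treated as a distribution over depth
            return np.sum(p * np.log(p))          # negative Shannon entropy

        constraints = [
            {"type": "eq", "fun": lambda t: t[0] - theta_surface},    # surface boundary
            {"type": "eq", "fun": lambda t: t[-1] - theta_bottom},    # bottom boundary
            {"type": "eq", "fun": lambda t: t.mean() - theta_mean},   # root-zone mean
        ]
        theta0 = np.linspace(theta_surface, theta_bottom, n)          # feasible start
        res = minimize(neg_entropy, theta0, bounds=[(1e-6, 0.5)] * n,
                       constraints=constraints)
        profile = res.x                           # maximum-entropy moisture profile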

  18. Quantum theory and statistical thermodynamics principles and worked examples

    CERN Document Server

    Hertel, Peter

    2017-01-01

    This textbook presents a concise yet detailed introduction to quantum physics. Concise, because it condenses the essentials to a few principles. Detailed, because these few principles –  necessarily rather abstract – are illustrated by several telling examples. A fairly complete overview of the conventional quantum mechanics curriculum is the primary focus, but the huge field of statistical thermodynamics is covered as well. The text explains why a few key discoveries shattered the prevailing broadly accepted classical view of physics. First, matter appears to consist of particles which, when propagating, resemble waves. Consequently, some observable properties cannot be measured simultaneously with arbitrary precision. Second, events with single particles are not determined, but are more or less probable. The essence of this is that the observable properties of a physical system are to be represented by non-commuting mathematical objects instead of real numbers.  Chapters on exceptionally simple, but h...

  19. Measurement Invariance: A Foundational Principle for Quantitative Theory Building

    Science.gov (United States)

    Nimon, Kim; Reio, Thomas G., Jr.

    2011-01-01

    This article describes why measurement invariance is a critical issue to quantitative theory building within the field of human resource development. Readers will learn what measurement invariance is and how to test for its presence using techniques that are accessible to applied researchers. Using data from a LibQUAL+[TM] study of user…

  20. Applying Cognitive Load Theory Principles to Library Instructional Guidance

    Science.gov (United States)

    Pickens, Kathleen E.

    2017-01-01

    If the goal of library instructional guidance is to provide students with the knowledge needed to acquire new skills in order to accomplish their learning objectives, then it is prudent to consider factors that impact learning. Cognitive load theory addresses several of these factors and is applicable to a wide-range of instructional devices used…

  1. Two new proofs of the test particle superposition principle of plasma kinetic theory

    International Nuclear Information System (INIS)

    Krommes, J.A.

    1975-12-01

    The test particle superposition principle of plasma kinetic theory is discussed in relation to the recent theory of two-time fluctuations in plasma given by Williams and Oberman. Both a new deductive and a new inductive proof of the principle are presented. The fundamental observation is that two-time expectations of one-body operators are determined completely in terms of the (x,v) phase space density autocorrelation, which to lowest order in the discreteness parameter obeys the linearized Vlasov equation with singular initial condition. For the deductive proof, this equation is solved formally using time-ordered operators, and the solution then rearranged into the superposition principle. The inductive proof is simpler than Rostoker's, although similar in some ways; it differs in that first order equations for pair correlation functions need not be invoked. It is pointed out that the superposition principle is also applicable to the short-time theory of neutral fluids

  2. Detailed balance principle and finite-difference stochastic equation in a field theory

    International Nuclear Information System (INIS)

    Kozhamkulov, T.A.

    1986-01-01

    A finite-difference equation, which is a generalization of the Langevin equation in field theory, has been obtained based on the principle of detailed balance for the Markov chain. Advantages of the present approach as compared with the conventional Parisi-Wu method are shown for the examples of an exactly solvable problem in zero-dimensional quantum theory and a simple numerical simulation.

  3. Principle of detailed balance and the finite-difference stochastic equation in field theory

    International Nuclear Information System (INIS)

    Kozhamkulov, T.A.

    1986-01-01

    The principle of detailed balance for the Markov chain is used to obtain a finite-difference equation which generalizes the Langevin equation in field theory. The advantages of using this approach compared to the conventional Parisi-Wu method are demonstrated for the examples of an exactly solvable problem in zero-dimensional quantum theory and a simple numerical simulation

  4. Directionality Theory and the Entropic Principle of Natural Selection

    OpenAIRE

    Demetrius, Lloyd; Gundlach, Volker

    2014-01-01

    Darwinian fitness describes the capacity of an organism to appropriate resources from the environment and to convert these resources into net-offspring production. Studies of competition between related types indicate that fitness is analytically described by entropy, a statistical measure which is positively correlated with population stability, and describes the number of accessible pathways of energy flow between the individuals in the population. Directionality theory is a mathematical mo...

  5. Approach to the nonrelativistic scattering theory based on the causality, superposition and unitarity principles

    International Nuclear Information System (INIS)

    Gajnutdinov, R.Kh.

    1983-01-01

    The possibility of constructing nonrelativistic scattering theory from the general physical principles of causality, superposition, and unitarity, without use of the Schroedinger formalism, is studied. The suggested approach is shown to be more general than the nonrelativistic scattering theory based on the Schroedinger equation. The approach is applied to build a model of the scattering theory for a system which consists of heavy nonrelativistic particles and a light relativistic particle.

  6. Weak principle of equivalence and gauge theory of the tetrad gravitational field

    International Nuclear Information System (INIS)

    Tunyak, V.N.

    1978-01-01

    It is shown that, unlike the tetrad formulation of general relativity theory derived from the requirement of Poincare group localization, the tetrad gravitation theory corresponding to the Treder formulation of the weak equivalence principle, where the nongravitational-matter Lagrangian is the direct covariant generalization of the special relativistic expression to the Riemann space-time, is incompatible with the known method for deriving the gauge theory of the tetrad gravitational field.

  7. Principles of hyperplasticity an approach to plasticity theory based on thermodynamic principles

    CERN Document Server

    Houlsby, Guy T

    2007-01-01

    A new approach to plasticity theory firmly rooted in and compatible with the laws of thermodynamics. Provides a common basis for the formulation and comparison of many existing plasticity models. Incorporates an introduction to elasticity, plasticity, thermodynamics and their interactions. Shows the reader how to formulate constitutive models completely specified by two scalar potential functions, from which the incremental responses of any hyperplastic model can be derived.

  8. Principle of equivalence and a theory of gravitation

    International Nuclear Information System (INIS)

    Shelupsky, D.

    1985-01-01

    We examine a well-known thought experiment often used to explain why we should expect a ray of light to be bent by gravity; according to this the light bends downward in the gravitational field because this is just what an observer would see if there were no field and he were accelerating upward instead. We show that this description of the action of Newtonian gravity in a flat space-time corresponds to an old two-index symmetric tensor field theory of gravitation

  9. Maximum Entropy Methods as the Bridge Between Microscopic and Macroscopic Theory

    Science.gov (United States)

    Taylor, Jamie M.

    2016-09-01

    This paper is concerned with an investigation into a function of macroscopic variables known as the singular potential, building on previous work by Ball and Majumdar. The singular potential is a function of the admissible statistical averages of probability distributions on a state space, defined so that it corresponds to the maximum possible entropy given known observed statistical averages, although non-classical entropy-like objective functions will also be considered. First the set of admissible moments must be established, and under the conditions presented in this work the set is open, bounded and convex allowing a description in terms of supporting hyperplanes, which provides estimates on the development of singularities for related probability distributions. Under appropriate conditions it is shown that the singular potential is strictly convex, as differentiable as the microscopic entropy, and blows up uniformly as the macroscopic variable tends to the boundary of the set of admissible moments. Applications of the singular potential are then discussed, and particular consideration will be given to certain free-energy functionals typical in mean-field theory, demonstrating an equivalence between certain microscopic and macroscopic free-energy functionals. This allows statements about L^1-local minimisers of Onsager's free energy to be obtained which cannot be given by two-sided variations, and overcomes the need to ensure local minimisers are bounded away from zero and +∞ before taking L^∞ variations. The analysis also permits the definition of a dual order parameter for which Onsager's free energy allows an explicit representation. Also, the difficulties in approximating the singular potential by everywhere defined functions, in particular by polynomial functions, are addressed, with examples demonstrating the failure of the Taylor approximation to preserve relevant shape properties of the singular potential.

  10. Computer-based teaching module design: principles derived from learning theories.

    Science.gov (United States)

    Lau, K H Vincent

    2014-03-01

    The computer-based teaching module (CBTM), which has recently gained prominence in medical education, is a teaching format in which a multimedia program serves as a single source for knowledge acquisition rather than playing an adjunctive role as it does in computer-assisted learning (CAL). Despite empirical validation in the past decade, there is limited research into the optimisation of CBTM design. This review aims to summarise research in classic and modern multimedia-specific learning theories applied to computer learning, and to collapse the findings into a set of design principles to guide the development of CBTMs. Scopus was searched for: (i) studies of classic cognitivism, constructivism and behaviourism theories (search terms: 'cognitive theory' OR 'constructivism theory' OR 'behaviourism theory' AND 'e-learning' OR 'web-based learning') and their sub-theories applied to computer learning, and (ii) recent studies of modern learning theories applied to computer learning (search terms: 'learning theory' AND 'e-learning' OR 'web-based learning') for articles published between 1990 and 2012. The first search identified 29 studies, dominated in topic by the cognitive load, elaboration and scaffolding theories. The second search identified 139 studies, with diverse topics in connectivism, discovery and technical scaffolding. Based on their relative representation in the literature, the applications of these theories were collapsed into a list of CBTM design principles. Ten principles were identified and categorised into three levels of design: the global level (managing objectives, framing, minimising technical load); the rhetoric level (optimising modality, making modality explicit, scaffolding, elaboration, spaced repetition), and the detail level (managing text, managing devices). This review examined the literature in the application of learning theories to CAL to develop a set of principles that guide CBTM design. Further research will enable educators to…

  11. Classical field theory in the space of reference frames. [Space-time manifold, action principle

    Energy Technology Data Exchange (ETDEWEB)

    Toller, M [Dipartimento di Matematica e Fisica, Libera Universita, Trento (Italy)

    1978-03-11

    The formalism of classical field theory is generalized by replacing the space-time manifold M by the ten-dimensional manifold S of all the local reference frames. The geometry of the manifold S is determined by ten vector fields corresponding to ten operationally defined infinitesimal transformations of the reference frames. The action principle is written in terms of a differential 4-form in the space S (the Lagrangian form). Densities and currents are represented by differential 3-forms in S. The field equations and the connection between symmetries and conservation laws (Noether's theorem) are derived from the action principle. Einstein's theory of gravitation and Maxwell's theory of electromagnetism are reformulated in this language. The general formalism can also be used to formulate theories in which charge, energy and momentum cannot be localized in space-time and even theories in which a space-time manifold cannot be defined exactly in any useful way.

  12. An Evolutionary Comparison of the Handicap Principle and Hybrid Equilibrium Theories of Signaling

    Science.gov (United States)

    Kane, Patrick; Zollman, Kevin J. S.

    2015-01-01

    The handicap principle has come under significant challenge both from empirical studies and from theoretical work. As a result, a number of alternative explanations for honest signaling have been proposed. This paper compares the evolutionary plausibility of one such alternative, the “hybrid equilibrium,” to the handicap principle. We utilize computer simulations to compare these two theories as they are instantiated in Maynard Smith’s Sir Philip Sidney game. We conclude that, when both types of communication are possible, evolution is unlikely to lead to handicap signaling and is far more likely to result in the partially honest signaling predicted by hybrid equilibrium theory. PMID:26348617

  13. Novel theory of the human brain: information-commutation basis of architecture and principles of operation

    Directory of Open Access Journals (Sweden)

    Bryukhovetskiy AS

    2015-02-01

    Full Text Available Based on the methodology of the informational approach and research of the genome, proteome, and complete transcriptome profiles of different cells in the nervous tissue of the human brain, the author proposes a new theory of the information-commutation organization and architecture of the human brain, an alternative to the conventional systemic connective morphofunctional paradigm of the brain framework. Informational principles of brain operation are defined: the modular principle, holographic principle, principle of systematicity of vertical commutative connection and complexity of horizontal commutative connection, regulatory principle, relay principle, modulation principle, “illumination” principle, principle of personalized memory and intellect, and principle of low energy consumption. The author demonstrates that the cortex functions only as a switchboard and router of information, while information is processed outside the nervous tissue of the brain in the intermeningeal space. The main structural element of information-commutation in the brain is not the neuron, but information-commutation modules that are subdivided into receiver modules, transmitter modules, and subscriber modules, forming a vertical architecture of nervous tissue in the brain as information lines and information channels, and a horizontal architecture as central, intermediate, and peripheral information-commutation platforms. Information in information-commutation modules is transferred by means of the carriers characteristic of the specific information level, from inductome to genome, transcriptome, proteome, metabolome, secretome, and magnetome…

  14. Quantum Field Theoretic Derivation of the Einstein Weak Equivalence Principle Using Emqg Theory

    OpenAIRE

    Ostoma, Tom; Trushyk, Mike

    1999-01-01

    We provide a quantum field theoretic derivation of Einstein's Weak Equivalence Principle of general relativity using a new quantum gravity theory proposed by the authors called Electro-Magnetic Quantum Gravity or EMQG (ref. 1). EMQG is based on a new theory of inertia (ref. 5) proposed by R. Haisch, A. Rueda, and H. Puthoff (which we modified and called Quantum Inertia). Quantum Inertia states that classical Newtonian Inertia is a property of matter due to the strictly local electrical force ...

  15. Least action principle with unilateral constraints on the velocity in the special theory of relativity

    International Nuclear Information System (INIS)

    Blaquiere, Augustin

    1981-01-01

    A least action principle with unilateral constraints on the velocity is applied to an example in the area of the special theory of relativity. The equations obtained for a particle with non-zero rest mass moving at c, the speed of light, are those usually associated with the photon, namely the eikonal equation and the d'Alembert wave equation. An extension of the theory is discussed. [fr]

  16. A finite state, finite memory minimum principle, part 2. [a discussion of game theory, signaling, stochastic processes, and control theory]

    Science.gov (United States)

    Sandell, N. R., Jr.; Athans, M.

    1975-01-01

    The development of the theory of the finite-state, finite-memory (FSFM) stochastic control problem is discussed. The sufficiency of the FSFM minimum principle (which is in general only a necessary condition) was investigated. By introducing the notion of a signaling strategy as defined in the literature on games, conditions under which the FSFM minimum principle is sufficient were determined. This result explicitly interconnects the information structure of the FSFM problem with its optimality conditions. The min-H algorithm for the FSFM problem was studied. It is demonstrated that a version of the algorithm always converges to a particular type of local minimum termed a person-by-person extremal.

  17. On the foundations of special theory of relativity - II. (The principle of covariance and a basic inertial frame)

    International Nuclear Information System (INIS)

    Gulati, S.P.; Gulati, S.

    1979-01-01

    An attempt has been made to replace the principle of relativity with the principle of covariance. This amounts to a modification of the theory of relativity based on two postulates: (i) the principle of covariance and (ii) the light principle. Some of the fundamental results and laws of relativistic mechanics, electromagnetodynamics and quantum mechanics are re-examined. The principle of invariance is questioned. (A.K.)

  18. Variational principles are a powerful tool also for formulating field theories

    OpenAIRE

    Dell'Isola , Francesco; Placidi , Luca

    2012-01-01

    Variational principles and calculus of variations have always been an important tool for formulating mathematical models for physical phenomena. Variational methods give an efficient and elegant way to formulate and solve mathematical problems that are of interest for scientists and engineers and are the main tool for the axiomatization of physical theories

  19. First-principles theory of inelastic currents in a scanning tunneling microscope

    DEFF Research Database (Denmark)

    Stokbro, Kurt; Hu, Ben Yu-Kuang; Thirstrup, C.

    1998-01-01

    A first-principles theory of inelastic tunneling between a model probe tip and an atom adsorbed on a surface is presented, extending the elastic tunneling theory of Tersoff and Hamann. The inelastic current is proportional to the change in the local density of states at the center of the tip due to the addition of the adsorbate. We use the theory to investigate the vibrational heating of an adsorbate below a scanning tunneling microscopy tip. We calculate the desorption rate of H from Si(100)-H(2×1) as a function of the sample bias and tunnel current, and find excellent agreement with recent…

  20. Neutron spectra unfolding with maximum entropy and maximum likelihood

    International Nuclear Information System (INIS)

    Itoh, Shikoh; Tsunoda, Toshiharu

    1989-01-01

    A new unfolding theory has been established on the basis of the maximum entropy principle and the maximum likelihood method. This theory correctly embodies the Poisson statistics of neutron detection, and always yields a positive solution over the whole energy range. Moreover, the theory unifies both the overdetermined and the underdetermined problems. For the latter, the ambiguity in assigning a prior probability, i.e. the initial guess in the Bayesian sense, is removed by virtue of the principle. An approximate expression of the covariance matrix for the resultant spectra is also presented. An efficient algorithm to solve the nonlinear system which appears in the present study has been established. Results of computer simulation showed the effectiveness of the present theory. (author)
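
    The record does not give the authors' algorithm, so the sketch below only illustrates the generic combination the abstract describes: a Poisson log-likelihood plus a Shannon-entropy term, with positivity guaranteed by a log parametrization. The response matrix and counts are synthetic.

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(0)
        R = rng.uniform(0.0, 1.0, size=(6, 12))          # synthetic response matrix
        true_phi = 100.0 * np.exp(-np.linspace(0.0, 3.0, 12))
        counts = rng.poisson(R @ true_phi)               # simulated detector counts

        def objective(log_phi, beta=1.0):
            phi = np.exp(log_phi)                        # positive over the whole range
            mu = R @ phi
            neg_loglik = np.sum(mu - counts * np.log(mu))    # Poisson statistics
            p = phi / phi.sum()
            neg_entropy = np.sum(p * np.log(p))              # entropy term
            return neg_loglik + beta * neg_entropy

        res = minimize(objective, np.log(np.full(12, counts.mean())), method="BFGS")
        phi_unfolded = np.exp(res.x)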

  1. Physical principles, geometrical aspects, and locality properties of gauge field theories

    International Nuclear Information System (INIS)

    Mack, G.; Hamburg Univ.

    1981-01-01

    Gauge field theories, particularly Yang-Mills theories, are discussed at a classical level from a geometrical point of view. The introductory chapters are concentrated on physical principles and mathematical tools. The main part is devoted to locality problems in gauge field theories. Examples show that locality problems originate from two sources in pure Yang-Mills theories (without matter fields). One is topological and the other is related to the existence of degenerated field configurations of the infinitesimal holonomy groups on some extended region of space or space-time. Nondegenerate field configurations in theories with semisimple gauge groups can be analysed with the help of the concept of a local gauge. Such gauges play a central role in the discussion. (author)

  2. At the frontier of spacetime: scalar-tensor theory, Bell's inequality, Mach's principle, exotic smoothness

    CERN Document Server

    Asselmeyer-Maluga, Torsten

    2016-01-01

    In this book, leading theorists present new contributions and reviews addressing longstanding challenges and ongoing progress in spacetime physics. In the anniversary year of Einstein's General Theory of Relativity, developed 100 years ago, this collection reflects the subsequent and continuing fruitful development of spacetime theories. The volume is published in honour of Carl Brans on the occasion of his 80th birthday. Carl H. Brans, who also contributes personally, is a creative and independent researcher and one of the founders of the scalar-tensor theory, also known as Jordan-Brans-Dicke theory. In the present book, much space is devoted to scalar-tensor theories. Since the beginning of the 1990s, Brans has worked on new models of spacetime, collectively known as exotic smoothness, a field largely established by him. In this Festschrift, one finds an outstanding and unique collection of articles about exotic smoothness. Also featured are Bell's inequality and Mach's principle. Personal memories and hist...

  3. Two new proofs of the test particle superposition principle of plasma kinetic theory

    International Nuclear Information System (INIS)

    Krommes, J.A.

    1976-01-01

    The test particle superposition principle of plasma kinetic theory is discussed in relation to the recent theory of two-time fluctuations in plasma given by Williams and Oberman. Both a new deductive and a new inductive proof of the principle are presented; the deductive approach appears here for the first time in the literature. The fundamental observation is that two-time expectations of one-body operators are determined completely in terms of the (x,v) phase space density autocorrelation, which to lowest order in the discreteness parameter obeys the linearized Vlasov equation with singular initial condition. For the deductive proof, this equation is solved formally using time-ordered operators, and the solution is then re-arranged into the superposition principle. The inductive proof is simpler than Rostoker's although similar in some ways; it differs in that first-order equations for pair correlation functions need not be invoked. It is pointed out that the superposition principle is also applicable to the short-time theory of neutral fluids

  4. The Data-Constrained Generalized Maximum Entropy Estimator of the GLM: Asymptotic Theory and Inference

    Directory of Open Access Journals (Sweden)

    Nicholas Scott Cardell

    2013-05-01

    Full Text Available Maximum entropy methods of parameter estimation are appealing because they impose no additional structure on the data, other than that explicitly assumed by the analyst. In this paper we prove that the data-constrained GME estimator of the general linear model is consistent and asymptotically normal. The approach we take in establishing the asymptotic properties concomitantly identifies a new computationally efficient method for calculating GME estimates. Formulae are developed to compute asymptotic variances and to perform Wald, likelihood ratio, and Lagrange multiplier statistical tests on model parameters. Monte Carlo simulations are provided to assess the performance of the GME estimator in both large and small sample situations. Furthermore, we extend our results to maximum cross-entropy estimators and indicate a variant of the GME estimator that is unbiased. Finally, we discuss the relationship of GME estimators to Bayesian estimators, pointing out the conditions under which an unbiased GME estimator would be efficient.
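
    As a concreteness aid (a direct, deliberately slow formulation, not the computationally efficient method identified in the paper), the sketch below estimates a small linear model by data-constrained GME: each coefficient and each error term is re-expressed as a probability distribution over an analyst-chosen support, and the joint entropy is maximized subject to the data constraints. All supports and data are synthetic.

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(1)
        T, K, M = 30, 2, 3
        X = np.column_stack([np.ones(T), rng.normal(size=T)])
        y = X @ np.array([1.0, 2.0]) + rng.normal(scale=0.5, size=T)

        z = np.array([-10.0, 0.0, 10.0])      # coefficient support (analyst's choice)
        v = np.array([-1.5, 0.0, 1.5])        # error support (analyst's choice)

        def unpack(q):
            return q[:K * M].reshape(K, M), q[K * M:].reshape(T, M)

        def neg_entropy(q):
            return np.sum(q * np.log(q + 1e-12))

        cons = [{"type": "eq",
                 "fun": lambda q, t=t: y[t] - X[t] @ (unpack(q)[0] @ z)
                                            - unpack(q)[1][t] @ v}
                for t in range(T)]                                   # data constraints
        cons += [{"type": "eq", "fun": lambda q, k=k: unpack(q)[0][k].sum() - 1.0}
                 for k in range(K)]                                  # adding-up (coefficients)
        cons += [{"type": "eq", "fun": lambda q, t=t: unpack(q)[1][t].sum() - 1.0}
                 for t in range(T)]                                  # adding-up (errors)

        q0 = np.full(K * M + T * M, 1.0 / M)
        res = minimize(neg_entropy, q0, bounds=[(1e-9, 1.0)] * len(q0),
                       constraints=cons)
        beta_gme = unpack(res.x)[0] @ z       # GME point estimates of the coefficients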

  5. Role of Logic and Mentality as the Basics of Wittgenstein's Picture Theory of Language and Extracting Educational Principles and Methods According to This Theory

    Science.gov (United States)

    Heshi, Kamal Nosrati; Nasrabadi, Hassanali Bakhtiyar

    2016-01-01

    The present paper attempts to identify principles and methods of education based on Wittgenstein's picture theory of language. This qualitative research utilized an inferential analytical approach to review the related literature and extracted a set of principles and methods from his picture theory of language. Findings revealed that Wittgenstein…

  6. Toward Principles of Construct Clarity: Exploring the Usefulness of Facet Theory in Guiding Conceptualization

    Directory of Open Access Journals (Sweden)

    Meng Zhang

    2016-02-01

    Full Text Available Conceptualization in theory development has received limited consideration despite its frequently stressed importance in Information Systems research. This paper focuses on the role of construct clarity in conceptualization, arguing that construct clarity should be considered an essential criterion for evaluating conceptualization and that a focus on construct clarity can advance conceptualization methodology. Drawing from Facet Theory literature, we formulate a set of principles for assessing construct clarity, particularly regarding a construct’s relationships to its extant related constructs. Conscious and targeted attention to this criterion can promote a research ecosystem more supportive of knowledge accumulation.

  7. Scale dependence of the average potential around the maximum in Φ⁴ theories

    International Nuclear Information System (INIS)

    Tetradis, N.; Wetterich, C.

    1992-04-01

    The average potential describes the physics at a length scale k⁻¹ by averaging out the degrees of freedom with characteristic momenta larger than k. The dependence on k can be described by differential evolution equations. We solve these equations for the nonconvex part of the potential around the origin in φ⁴ theories, in the phase with spontaneous symmetry breaking. The average potential is real and approaches the convex effective potential in the limit k → 0. Our calculation is relevant for processes for which the shape of the potential at a given scale is important, such as tunneling phenomena or inflation. (orig.)

  8. On the role of the equivalence principle in the general relativity theory

    International Nuclear Information System (INIS)

    Gertsenshtein, M.E.; Stanyukovich, K.P.; Pogosyan, V.A.

    1977-01-01

    The conditions under which the solutions of the general relativity theory equations satisfy the correspondence principle are considered. It is shown that in general relativity theory, as in flat space, any systems of coordinates satisfying the topological requirements of continuity and uniqueness are admissible. The coordinate transformations must be mutually unique, and the following requirements must be met: the transformations of the coordinates x^i = x^i(x̄^k) must preserve the class of the function, while the transformation Jacobian must be finite and nonzero. The admissible metrics in the Tolman problem for a vacuum are considered. A prohibition of the vacuum solution of the Tolman problem is obtained from the correspondence principle. The correspondence principle is applied to the solution of the Friedmann problem by constructing a spherically symmetric self-similar solution, in which replacement of compression by expansion occurs at a finite density. The examples adduced convince that the application of the correspondence principle makes it possible to discard physically inadmissible solutions and obtain new physical results.

  9. Probability distributions of bed load particle velocities, accelerations, hop distances, and travel times informed by Jaynes's principle of maximum entropy

    Science.gov (United States)

    Furbish, David; Schmeeckle, Mark; Schumer, Rina; Fathel, Siobhan

    2016-01-01

    We describe the most likely forms of the probability distributions of bed load particle velocities, accelerations, hop distances, and travel times, in a manner that formally appeals to inferential statistics while honoring mechanical and kinematic constraints imposed by equilibrium transport conditions. The analysis is based on E. Jaynes's elaboration of the implications of the similarity between the Gibbs entropy in statistical mechanics and the Shannon entropy in information theory. By maximizing the information entropy of a distribution subject to known constraints on its moments, our choice of the form of the distribution is unbiased. The analysis suggests that particle velocities and travel times are exponentially distributed and that particle accelerations follow a Laplace distribution with zero mean. Particle hop distances, viewed alone, ought to be distributed exponentially. However, the covariance between hop distances and travel times precludes this result. Instead, the covariance structure suggests that hop distances follow a Weibull distribution. These distributions are consistent with high-resolution measurements obtained from high-speed imaging of bed load particle motions. The analysis brings us closer to choosing distributions based on our mechanical insight.
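
    As a small numerical check of the exponential result for velocities (not the paper's own computation), the sketch below maximizes the Shannon entropy of a discretized distribution subject only to normalization and a known mean, and recovers the exponential form; the mean and grid are hypothetical.

        import numpy as np
        from scipy.optimize import minimize

        u = np.linspace(0.005, 1.0, 100)      # velocity grid (m/s), hypothetical range
        du = u[1] - u[0]
        u_bar = 0.1                           # known mean velocity (hypothetical)

        def neg_entropy(p):
            return np.sum(p * np.log(p)) * du

        cons = [{"type": "eq", "fun": lambda p: np.sum(p) * du - 1.0},        # normalization
                {"type": "eq", "fun": lambda p: np.sum(p * u) * du - u_bar}]  # mean constraint

        p0 = np.full_like(u, 1.0 / (u[-1] - u[0]))
        res = minimize(neg_entropy, p0, bounds=[(1e-9, None)] * len(u),
                       constraints=cons)
        p_exp = np.exp(-u / u_bar) / u_bar    # Jaynes's prediction: exponential
        print(np.max(np.abs(res.x - p_exp)))  # the numerical solution tracks it closely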

  10. RENEWAL OF BASIC LAWS AND PRINCIPLES FOR POLAR CONTINUUM THEORIES (Ⅱ)-MICROMORPHIC CONTINUUM THEORY AND COUPLE STRESS THEORY

    Institute of Scientific and Technical Information of China (English)

    戴天民

    2003-01-01

    The purpose is to reestablish the balance laws of momentum, angular momentum and energy and to derive the corresponding local and nonlocal balance equations for micromorphic continuum mechanics and couple stress theory. The desired results for micromorphic continuum mechanics and couple stress theory are naturally obtained via direct transitions and reductions from the coupled conservation law of energy for micropolar continuum theory, respectively. The basic balance laws and equations for micromorphic continuum mechanics and couple stress theory are constituted by combining these results derived here and the traditional conservation laws and equations of mass and microinertia and the entropy inequality. The incomplete degrees of the former related continuum theories are clarified. Finally, some special cases are conveniently derived.

  11. A least squares principle unifying finite element, finite difference and nodal methods for diffusion theory

    International Nuclear Information System (INIS)

    Ackroyd, R.T.

    1987-01-01

    A least squares principle is described which uses a penalty function treatment of boundary and interface conditions. Appropriate choices of the trial functions and vectors employed in a dual representation of an approximate solution establish complementary principles for the diffusion equation. A geometrical interpretation of the principles provides weighted residual methods for diffusion theory, thus establishing a unification of least squares, variational and weighted residual methods. The complementary principles are used with either a trial function for the flux or a trial vector for the current to establish for regular meshes a connection between finite element, finite difference and nodal methods, which can be exact if the mesh pitches are chosen appropriately. Whereas the coefficients in the usual nodal equations have to be determined iteratively, those derived via the complementary principles are given explicitly in terms of the data. For the further development of the connection between finite element, finite difference and nodal methods, some hybrid variational methods are described which employ both a trial function and a trial vector. (author)
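
    For concreteness, the sketch below sets up the kind of regular-mesh discrete diffusion equations that the principle connects; it is a generic one-group, one-dimensional finite-difference model with hypothetical data, not the paper's nodal scheme.

        import numpy as np

        # -D*phi'' + Sigma_a*phi = S on a slab, with phi = 0 at both boundaries
        n, width = 50, 10.0              # interior mesh points, slab width (cm)
        h = width / (n + 1)
        D, sigma_a, S = 1.2, 0.05, 1.0   # hypothetical one-group data

        main = np.full(n, 2.0 * D / h**2 + sigma_a)
        off = np.full(n - 1, -D / h**2)
        A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

        phi = np.linalg.solve(A, np.full(n, S))   # discrete flux on the mesh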

  12. Conscious and unconscious thought in risky choice: testing the capacity principle and the appropriate weighting principle of unconscious thought theory.

    Science.gov (United States)

    Ashby, Nathaniel J S; Glöckner, Andreas; Dickert, Stephan

    2011-01-01

    Daily we make decisions ranging from the mundane to the seemingly pivotal that shape our lives. Assuming rationality, all relevant information about one's options should be thoroughly examined in order to make the best choice. However, some findings suggest that under specific circumstances thinking too much has disadvantageous effects on decision quality and that it might be best to let the unconscious do the busy work. In three studies we test the capacity assumption and the appropriate weighting principle of Unconscious Thought Theory using a classic risky choice paradigm and including a "deliberation with information" condition. Although we replicate an advantage for unconscious thought (UT) over "deliberation without information," we find that "deliberation with information" equals or outperforms UT in risky choices. These results speak against the generality of the assumption that UT has a higher capacity for information integration and show that this capacity assumption does not hold in all domains. Furthermore, we show that "deliberate thought with information" leads to more differentiated knowledge compared to UT which speaks against the generality of the appropriate weighting assumption.

  13. Conscious and unconscious thought in risky choice: Testing the capacity principle and the appropriate weighting principle of Unconscious Thought Theory

    Directory of Open Access Journals (Sweden)

    Nathaniel James Siebert Ashby

    2011-10-01

    Full Text Available Daily we make decisions ranging from the mundane to the seemingly pivotal that shape our lives. Assuming rationality, all relevant information about one’s options should be thoroughly examined in order to make the best choice. However, some findings suggest that under specific circumstances thinking too much has disadvantageous effects on decision quality and that it might be best to let the unconscious do the busy work. In three studies we test the capacity assumption and the appropriate weighting principle of unconscious thought theory using a classic risky choice paradigm and including a ‘deliberation with information’ condition. Although we replicate an advantage for unconscious thought over ‘deliberation without information’, we find that ‘deliberation with information’ equals or outperforms unconscious thought in risky choices. These results speak against the generality of the assumption that unconscious thought has a higher capacity for information integration and show that this capacity assumption does not hold in all domains. We furthermore show that ‘deliberate thought with information’ leads to more differentiated knowledge compared to unconscious thought which speaks against the generality of the appropriate weighting assumption.

  14. The generally covariant locality principle - a new paradigm for local quantum field theory

    International Nuclear Information System (INIS)

    Brunetti, R.; Fredenhagen, K.; Verch, R.

    2002-05-01

    A new approach to the model-independent description of quantum field theories will be introduced in the present work. The main feature of this new approach is to incorporate in a local sense the principle of general covariance of general relativity, thus giving rise to the concept of a locally covariant quantum field theory. Such locally covariant quantum field theories will be described mathematically in terms of covariant functors between the categories, on one side, of globally hyperbolic spacetimes with isometric embeddings as morphisms and, on the other side, of *-algebras with unital injective *-endomorphisms as morphisms. Moreover, locally covariant quantum fields can be described in this framework as natural transformations between certain functors. The usual Haag-Kastler framework of nets of operator-algebras over a fixed spacetime background-manifold, together with covariant automorphic actions of the isometry-group of the background spacetime, can be re-gained from this new approach as a special case. Examples of this new approach are also outlined. In case that a locally covariant quantum field theory obeys the time-slice axiom, one can naturally associate to it certain automorphic actions, called ''relative Cauchy-evolutions'', which describe the dynamical reaction of the quantum field theory to a local change of spacetime background metrics. The functional derivative of a relative Cauchy-evolution with respect to the spacetime metric is found to be a divergence-free quantity which has, as will be demonstrated in an example, the significance of an energy-momentum tensor for the locally covariant quantum field theory. Furthermore, we discuss the functorial properties of state spaces of locally covariant quantum field theories that entail the validity of the principle of local definiteness. (orig.)

  15. Development by Niels Bohr of the quantum theory of the atom and of the correspondence principle

    International Nuclear Information System (INIS)

    El'yashevich, M.A.

    1985-01-01

    Bohr's investigations in 1912-1923 on the quantum theory of atoms are considered. The sources of N. Bohr's works on this subject are analyzed, and the beginning of his quantum research in 1912 is described. A detailed analysis is given of N. Bohr's famous paper on the hydrogen atom theory and on the origin of spectra. The further development of Bohr's ideas on atomic structure is considered, special attention being paid to his postulates of stationary states and of radiation transitions and to the development of the correspondence principle. It is shown how well N. Bohr understood the difficulties of the model theory and how he tried to obtain a deep understanding of quantum phenomena.

  16. Maximum neutron flux in thermal reactors

    International Nuclear Information System (INIS)

    Strugar, P.V.

    1968-12-01

    The direct approach to the problem is to calculate the spatial distribution of fuel concentration in the reactor core directly, using the condition of maximum neutron flux while complying with thermal limitations. This paper proves that the problem can be solved by applying the variational calculus, i.e. by using the maximum principle of Pontryagin. The mathematical model of the reactor core is based on the two-group neutron diffusion theory with some simplifications which make it appropriate from the maximum principle point of view. The theory of the maximum principle applied here is suitable for this application. The solution for the optimum distribution of fuel concentration in the reactor core is obtained in explicit analytical form. The reactor critical dimensions are roots of a system of nonlinear equations, and verification of the optimum conditions can be done only for specific examples.

  17. On the relativity and equivalence principles in the gauge theory of gravitation

    International Nuclear Information System (INIS)

    Ivanenko, D.; Sardanashvily, G.

    1981-01-01

    The basic ideas of the gauge gravitation theory are still not generally accepted, in spite of more than twenty years of its history. The chief reason lies in the fact that the gauge character of gravity is connected with the whole complex of problems of Einstein General Relativity: the definition of reference systems, the (3+1)-splitting, the presence (or absence) of symmetries in GR, the necessity (or triviality) of general covariance, and the meaning of the equivalence principle, which led Einstein from Special to General Relativity |1|. The real actuality of this complex of interconnected problems is demonstrated by the well-known work of V. Fock, who saw no symmetries in General Relativity, declared the equivalence principle unnecessary, and even proposed to substitute the designation ''chronogeometry'' for ''general relativity'' (see also P. Havas). Developing this line, H. Bondi quite recently also expressed doubts about the ''relativity'' in Einstein's theory of gravitation. All proposed versions of the gauge gravitation theory must clarify the discrepancy between the Einstein gravitational field, which is a pseudo-Riemannian metric field, and the gauge potentials, which represent connections on some fiber bundles; there exists no group whose gauging would lead to the purely gravitational part of the connection (Christoffel symbols or Fock-Ivanenko-Weyl spinorial coefficients). (author)

  18. A Trustworthiness Evaluation Method for Software Architectures Based on the Principle of Maximum Entropy (POME and the Grey Decision-Making Method (GDMM

    Directory of Open Access Journals (Sweden)

    Rong Jiang

    2014-09-01

    Full Text Available As the early design decision-making structure, a software architecture plays a key role in the quality of the final software product and of the whole project. In the software design and development process, an effective evaluation of the trustworthiness of a software architecture can help in making scientific and reasonable decisions on the architecture, which are necessary for the construction of highly trustworthy software. In view of the lack of trustworthiness evaluation and measurement studies for software architecture, this paper provides a trustworthy attribute model of software architecture. Based on this model, the paper proposes to use the Principle of Maximum Entropy (POME) and the Grey Decision-making Method (GDMM) as the trustworthiness evaluation method of a software architecture, proves the scientific soundness and rationality of this method, and verifies its feasibility through case analysis.
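
    The paper's full attribute model is not reproduced in this record. The sketch below only illustrates the two ingredients on made-up numbers: grey relational coefficients for a candidate architecture against an ideal reference, aggregated with POME attribute weights (uniform when nothing beyond normalization is known).

        import numpy as np

        ref = np.ones(4)                       # ideal scores on four trustworthy attributes
        cand = np.array([0.8, 0.6, 0.9, 0.7])  # candidate architecture (hypothetical)

        delta = np.abs(ref - cand)
        rho = 0.5                              # distinguishing coefficient
        xi = (delta.min() + rho * delta.max()) / (delta + rho * delta.max())

        # POME weights: with only the normalization constraint known, maximum
        # entropy gives uniform weights; additional constraints would shift them.
        w = np.full(4, 0.25)
        grade = float(w @ xi)                  # grey relational grade of the candidate
        print(grade)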

  19. Right to Place: A Political Theory of Animal Rights in Harmony with Environmental and Ecological Principles

    Directory of Open Access Journals (Sweden)

    Eleni Panagiotarakou

    2014-09-01

    Full Text Available The focus of this paper is on the “right to place” as a political theory of wild animal rights. Out of the debate between terrestrial cosmopolitans inspired by Kant and Arendt and rooted cosmopolitan animal right theorists, the right to place emerges from the fold of rooted cosmopolitanism in tandem with environmental and ecological principles. Contrary to terrestrial cosmopolitans—who favour extending citizenship rights to wild animals and advocate at the same time large-scale humanitarian interventions and unrestricted geographical mobility—I argue that the well-being of wild animals is best served by the right to place theory on account of its sovereignty model. The right to place theory advocates human non-interference in wildlife communities, opposing even humanitarian interventions, which carry the risk of unintended consequences. The right to place theory, with its emphasis on territorial sovereignty, bases its opposition to unrestricted geographical mobility on two considerations: (a) the non-generalist nature of many species and (b) the potential for abuse via human encroachment. In a broader context, the advantage of the right to place theory lies in its implicit environmental demands: human population control and sustainable lifestyles.

  20. Five-dimensional projective unified theory and the principle of equivalence

    International Nuclear Information System (INIS)

    De Sabbata, V.; Gasperini, M.

    1984-01-01

    We investigate the physical consequences of a new five-dimensional projective theory unifying gravitation and electromagnetism. Solving the field equations in the linear approximation and in the static limit, we find that a celestial body would act as a source of a long-range scalar field, and that macroscopic test bodies with different internal structure would accelerate differently in the solar gravitational field; this seems to be in disagreement with the equivalence principle. To avoid this contradiction, we suggest a possible modification of the geometrical structure of the five-dimensional projective space.

  1. A Critique of Social Bonding and Control Theory of Delinquency Using the Principles of Psychology of Mind.

    Science.gov (United States)

    Kelley, Thomas M.

    1996-01-01

    Describes the refined principles of Psychology of Mind and shows how their logical interaction can help explain the comparative amounts of deviant and conforming behavior of youthful offenders. The logic of these principles is used to examine the major assumptions of social bonding and control theory of delinquency focusing predominantly on the…

  2. COIN Goes GLOCAL: Traditional COIN With a Global Perspective: Does the Current US Strategy Reflect COIN Theory, Doctrine and Principles

    Science.gov (United States)

    2007-05-17

    COIN goes “GLOCAL”: Traditional COIN with a Global Perspective: Does the Current US Strategy Reflect COIN Theory, Doctrine and Principles? A monograph.

  3. The Nature of the Chemical Process. 1. Symmetry Evolution – Revised Information Theory, Similarity Principle and Ugly Symmetry

    Directory of Open Access Journals (Sweden)

    Shu-Kun Lin

    2001-03-01

    Full Text Available Abstract: Symmetry is a measure of indistinguishability. Similarity is a continuous measure of imperfect symmetry. Lewis' remark that “gain of entropy means loss of information” defines the relationship of entropy and information. Three laws of information theory have been proposed. Labeling by introducing nonsymmetry and formatting by introducing symmetry are defined. The function L (L = ln w, where w is the number of microstates), i.e., the sum of entropy and information, L = S + I, of the universe is a constant (the first law of information theory). The entropy S of the universe tends toward a maximum (the second law of information theory). For a perfect symmetric static structure, the information is zero and the static entropy is the maximum (the third law of information theory). Based on the Gibbs inequality and the second law of the revised information theory we have proved the similarity principle (a continuous higher similarity–higher entropy relation) after the rejection of the Gibbs paradox, and proved the Curie-Rosen symmetry principle (a higher symmetry–higher stability relation) as a special case of the similarity principle. The principles of information minimization and potential energy minimization are compared. Entropy is the degree of symmetry and information is the degree of nonsymmetry. There are two kinds of symmetries: dynamic and static symmetries. Any kind of symmetry will define an entropy and, corresponding to the dynamic and static symmetries, there are static entropy and dynamic entropy. Entropy in thermodynamics is a special kind of dynamic entropy. Any spontaneous process will evolve towards the highest possible symmetry, either dynamic or static or both. Therefore the revised information theory can be applied to characterizing all kinds of structural stability and process spontaneity. Some examples in chemical physics have been given. Spontaneous processes of all kinds of molecular
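
    A toy numerical reading of the laws stated above (the four-microstate system is invented for illustration): fix L = ln w, compute S as the Shannon entropy of the occupation distribution, and read off I = L − S.

        # Toy illustration of L = S + I for an invented system of w microstates.
        import numpy as np

        w = 4
        L = np.log(w)                        # the constant total L = ln w

        def S(p):                            # Shannon entropy, natural log
            p = np.asarray(p, dtype=float)
            p = p[p > 0]
            return -np.sum(p * np.log(p))

        uniform = [0.25, 0.25, 0.25, 0.25]   # perfect symmetry: S = L, I = 0
        labeled = [1.0, 0.0, 0.0, 0.0]       # fully labeled: S = 0, I = L
        for p in (uniform, labeled):
            print(f"S = {S(p):.3f}, I = {L - S(p):.3f}")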

  4. Schwinger variational principle in the nuclear two-body problem and multichannel theory

    International Nuclear Information System (INIS)

    Zubarev, A.L.; Podkopaev, A.P.

    1978-01-01

    The aim of this investigation is to study the Schwinger variational principle in the nuclear two-body problem and in multichannel theory. An approach to potential-scattering problems is proposed, based on the substitution of the exact potential operator V by a finite-rank operator V^(n), with which the dynamic equations are solved exactly. The functionals obtained for observed values coincide with the corresponding expressions derived from the Schwinger variational principle with the set of test functions. The definition of the Schwinger variational principle is given. A method is given for finding the amplitude of two-particle scattering with the potential V^(n). The corresponding amplitudes are constructed within the framework of the multichannel potential model. An interpolation formula is obtained for the amplitude that describes the elastic-scattering process with high accuracy at any energy. On the basis of the above method, the high-energy amplitude may be obtained in the range of both small and large scattering angles.

  5. Communication: Towards first principles theory of relaxation in supercooled liquids formulated in terms of cooperative motion.

    Science.gov (United States)

    Freed, Karl F

    2014-10-14

    A general theory of the long time, low temperature dynamics of glass-forming fluids remains elusive despite the almost 20 years since the famous pronouncement by the Nobel Laureate P. W. Anderson, "The deepest and most interesting unsolved problem in solid state theory is probably the theory of the nature of glass and the glass transition" [Science 267, 1615 (1995)]. While recent work indicates that Adam-Gibbs theory (AGT) provides a framework for computing the structural relaxation time of supercooled fluids and for analyzing the properties of the cooperatively rearranging dynamical strings observed in low temperature molecular dynamics simulations, the heuristic nature of AGT has impeded general acceptance due to the lack of a first principles derivation [G. Adam and J. H. Gibbs, J. Chem. Phys. 43, 139 (1965)]. This deficiency is rectified here by a statistical mechanical derivation of AGT that uses transition state theory and the assumption that the transition state is composed of elementary excitations of a string-like form. The strings are assumed to form in equilibrium with the mobile particles in the fluid. Hence, transition state theory requires the strings to be in mutual equilibrium and thus to have the size distribution of a self-assembling system, in accord with the simulations and analyses of Douglas and co-workers. The average relaxation rate is computed as a grand canonical ensemble average over all string sizes, and use of the previously determined relation between configurational entropy and the average cluster size in several model equilibrium self-associating systems produces the AGT expression in a manner enabling further extensions and more fundamental tests of the assumptions.

  7. Adapting evidence-based interventions using a common theory, practices, and principles.

    Science.gov (United States)

    Rotheram-Borus, Mary Jane; Swendeman, Dallas; Becker, Kimberly D

    2014-01-01

    Hundreds of validated evidence-based intervention programs (EBIP) aim to improve families' well-being; however, most are not broadly adopted. As an alternative diffusion strategy, we created wellness centers to reach families' everyday lives with a prevention framework. At two wellness centers, one in a middle-class neighborhood and one in a low-income neighborhood, popular local activity leaders (instructors of martial arts, yoga, sports, music, dancing, Zumba), and motivated parents were trained to be Family Mentors. Trainings focused on a framework that taught synthesized, foundational prevention science theory, practice elements, and principles, applied to specific content areas (parenting, social skills, and obesity). Family Mentors were then allowed to adapt scripts and activities based on their cultural experiences but were closely monitored and supervised over time. The framework was implemented in a range of activities (summer camps, coaching) aimed at improving social, emotional, and behavioral outcomes. Successes and challenges are discussed for (a) engaging parents and communities; (b) identifying and training Family Mentors to promote children and families' well-being; and (c) gathering data for supervision, outcome evaluation, and continuous quality improvement. To broadly diffuse prevention to families, far more experimentation is needed with alternative and engaging implementation strategies that are enhanced with knowledge harvested from researchers' past 30 years of experience creating EBIP. One strategy is to train local parents and popular activity leaders in applying robust prevention science theory, common practice elements, and principles of EBIP. More systematic evaluation of such innovations is needed.

  8. Hidden crossing theory of charge exchange in H+ + He+(1 s) collisions in vicinity of maximum of cross section

    Science.gov (United States)

    Grozdanov, Tasko P.; Solov'ev, Evgeni A.

    2018-04-01

    Within the framework of the dynamical adiabatic approach, the hidden crossing theory of inelastic transitions is applied to charge exchange in H+ + He+(1s) collisions over a wide range of center-of-mass collision energies, E_cm = 1.6–70 keV. Good agreement with experiment and with molecular close-coupling calculations is obtained. At low energies our 4-state results are closest to the experiment and correctly reproduce the shoulder in the energy dependence of the cross section around E_cm = 6 keV. The 2-state results correctly predict the position of the maximum of the cross section at E_cm ≈ 40 keV, whereas the 4-state results fail to describe the region around the maximum correctly. The reason is that the adiabatic approximation for a given two-state hidden crossing is applicable for values of the Stueckelberg parameter greater than 1, but with increasing principal quantum number N the Stueckelberg parameter decreases as N^-3. That is why the 4-state approach, which involves higher excited states, fails at smaller collision energies, E_cm ≈ 15 keV, while the 2-state approximation, which involves low-lying states, can be extended to higher collision energies.

  9. Maximum Entropy Fundamentals

    Directory of Open Access Journals (Sweden)

    F. Topsøe

    2001-09-01

    Full Text Available Abstract: In its modern formulation, the Maximum Entropy Principle was promoted by E.T. Jaynes, starting in the mid-fifties. The principle dictates that one should look for a distribution, consistent with available information, which maximizes the entropy. However, this principle focuses only on distributions, and it appears advantageous to bring information theoretical thinking more prominently into play by also focusing on the "observer" and on coding. This view was brought forward by the second named author in the late seventies and is the view we will follow up on here. It leads to the consideration of a certain game, the Code Length Game, and, via standard game theoretical thinking, to a principle of Game Theoretical Equilibrium. This principle is more basic than the Maximum Entropy Principle in the sense that the search for one type of optimal strategies in the Code Length Game translates directly into the search for distributions with maximum entropy. In the present paper we offer a self-contained and comprehensive treatment of the fundamentals of both principles mentioned, based on a study of the Code Length Game. Though new concepts and results are presented, the reading should be instructional and accessible to a rather wide audience, at least if certain mathematical details are left aside at a first reading. The most frequently studied instance of entropy maximization pertains to the Mean Energy Model, which involves a moment constraint related to a given function, here taken to represent "energy". This type of application is very well known from the literature, with hundreds of applications pertaining to several different fields, and will also here serve as an important illustration of the theory. But our approach reaches further, especially regarding the study of continuity properties of the entropy function, and this leads to new results which allow a discussion of models with so-called entropy loss. These results have tempted us to speculate over
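
    As a hedged numerical illustration of the Mean Energy Model discussed above (the energy levels and the constraint value are invented): the entropy-maximizing distribution under a fixed mean energy has the Gibbs form p_i ∝ exp(−βE_i), and the multiplier β can be found by one-dimensional root-finding.

        # Mean Energy Model sketch: maximize entropy subject to a fixed mean
        # "energy"; the maximizer is the Gibbs family p_i ∝ exp(-beta*E_i).
        import numpy as np
        from scipy.optimize import brentq

        E = np.array([0.0, 1.0, 2.0, 4.0])   # hypothetical energy levels
        E_mean = 1.2                         # hypothetical moment constraint

        def gibbs(beta):
            w = np.exp(-beta * E)
            return w / w.sum()

        # The mean energy is monotone in beta, so bracket the root and solve.
        beta = brentq(lambda b: gibbs(b) @ E - E_mean, -50.0, 50.0)
        p = gibbs(beta)
        print(beta, p, -(p * np.log(p)).sum())  # multiplier, p, entropy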

  10. Information theory explanation of the fluctuation theorem, maximum entropy production and self-organized criticality in non-equilibrium stationary states

    CERN Document Server

    Dewar, R

    2003-01-01

    Jaynes' information theory formalism of statistical mechanics is applied to the stationary states of open, non-equilibrium systems. First, it is shown that the probability distribution p_Γ of the underlying microscopic phase space trajectories Γ over a time interval of length τ satisfies p_Γ ∝ exp(τσ_Γ/2k_B), where σ_Γ is the time-averaged rate of entropy production of Γ. Three consequences of this result are then derived: (1) the fluctuation theorem, which describes the exponentially declining probability of deviations from the second law of thermodynamics as τ → ∞; (2) the selection principle of maximum entropy production for non-equilibrium stationary states, empirical support for which has been found in studies of phenomena as diverse as the Earth's climate and crystal growth morphology; and (3) the emergence of self-organized criticality for flux-driven systems in the slowly-driven limit. The explanation of these results on general inf...
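
    As a sketch of how consequence (1) follows from the stated distribution (assuming, as in standard derivations, that reversing a trajectory Γ flips the sign of its entropy production σ_Γ):

        \frac{P(\sigma)}{P(-\sigma)}
          = \frac{e^{+\tau\sigma/2k_B}}{e^{-\tau\sigma/2k_B}}
          = e^{\tau\sigma/k_B},

    so trajectories that violate the second law (σ < 0) become exponentially improbable as τ → ∞, which is the fluctuation theorem quoted above.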

  11. The Context and Values Inherent in Human Capital as Core Principles for New Economic Theory

    Directory of Open Access Journals (Sweden)

    Winston P. Nagan

    2018-05-01

    Full Text Available This paper focuses on the core foundation of New Economic Theory: human capital and its implications for the theory and method of the new form of political economy. The central issue underlined is the importance of scientific and technological innovation and its necessary interdependence with global values and value analysis. The paper discusses the issue of scientific consciousness as a generator of technological value, and places the scientific process at the heart of human consciousness. It discusses the complex interdependence of human relational subjectivity, scientific consciousness, and modern science. The paper draws attention to the problems of observation and participation, and to the influence of modern quantum physics in drawing attention to aspects of human consciousness that go beyond the bounds of conventional science and open up concern for the principle of non-locality. It explores human subjectivity in terms of the way in which “emotionalized behaviors” affect scientific objectivity. It also briefly touches on consciousness and its observable scientific role in the possible reconstruction of some aspects of reality. Mention is made of the Copenhagen perspective, the Many Worlds perspective, and the Penrose interpretation. These insights challenge us to explore human consciousness and innovation in economic organization. The discussion also brings in the principle of relational inter-subjectivity, emotion, and consciousness as a potential driver of human capital and value. In short, positive emotions can influence economic decision-making, as can negative emotions. These challenges stress the problem of human relational subjectivity, values, and technology as the tools to better understand the conflicts and potentials of human capital for New Economic Theory. The issue of value-analysis has both a descriptive and a normative dimension. Both of these aspects raise important challenges

  12. Analytical study of Yang–Mills theory in the infrared from first principles

    Energy Technology Data Exchange (ETDEWEB)

    Siringo, Fabio, E-mail: fabio.siringo@ct.infn.it

    2016-06-15

    Pure Yang–Mills SU(N) theory is studied in the Landau gauge and four dimensional space. While leaving the original Lagrangian unmodified, a double perturbative expansion is devised, based on a massive free-particle propagator. In dimensional regularization, all diverging mass terms cancel exactly in the double expansion, without the need to include mass counterterms that would spoil the symmetry of the Lagrangian. No free parameters are included that were not in the original theory, yielding a fully analytical approach from first principles. The expansion is safe in the infrared and is equivalent to the standard perturbation theory in the UV. At one-loop, explicit analytical expressions are given for the propagators and the running coupling and are found in excellent agreement with the data of lattice simulations. A universal scaling property is predicted for the inverse propagators and shown to be satisfied by the lattice data. Higher loops are found to be negligible in the infrared below 300 MeV where the coupling becomes small and the one-loop approximation is under full control.

  13. Exact multiple scattering theory of two-nucleus collisions including the Pauli principle

    International Nuclear Information System (INIS)

    Gurvitz, S.A.

    1981-01-01

    Exact equations for two-nucleus scattering are derived in which the effects of the Pauli principle are fully included. Our method exploits a modified equation for the scattering of two identical nucleons, which is obtained first. Considering proton-nucleus scattering, we find that the resulting amplitude has two components, one resembling a multiple scattering series for distinguishable particles, and the other a distorted (A-1)-nucleon cluster exchange. For elastic pA scattering the multiple scattering amplitude is found in the form of an optical potential expansion. We show that the Kerman-McManus-Thaler theory of the optical potential can easily be modified to include the effects of antisymmetrization of the projectile with the target nucleons. Nucleus-nucleus scattering is studied first for distinguishable target and beam nuclei. Afterwards the Pauli principle is included; only the case of deuteron-nucleus scattering is discussed in detail. The resulting amplitude has four components: two of them correspond to modified multiple scattering expansions, and the others are distorted (A-1)- and (A-2)-nucleon cluster exchanges. The result for d-A scattering is extended to the general case of nucleus-nucleus scattering. The equations are simple to use and as such constitute an improvement over existing schemes.

  14. Transport methods: general. 6. A Flux-Limited Diffusion Theory Derived from the Maximum Entropy Eddington Factor

    International Nuclear Information System (INIS)

    Yin, Chukai; Su, Bingjing

    2001-01-01

    Minerbo's maximum entropy Eddington factor (MEEF) method was proposed as a low-order approximation to transport theory, in which the first two moment equations are closed for the scalar flux φ and the current F through a statistically derived nonlinear Eddington factor f. This closure can handle various degrees of anisotropy of the angular flux and is well justified both numerically and theoretically. Considerable effort has therefore been devoted to using this approximation in transport computations, especially in the radiative transfer and astrophysics communities. However, the method suffers from numerical instability and may lead to anomalous solutions if the equations are solved by certain commonly used (implicit) mesh schemes. Studies of numerical stability in one-dimensional cases show that the MEEF equations can be solved satisfactorily by an implicit scheme if the angular flux is not too anisotropic. Figures 1 and 2 show the classic diffusion solution P1, the MEEF solution f_M obtained by Riemann solvers, and the NFLD solution D_M for the two problems, respectively. In Fig. 1, NFLD and MEEF predict quantitatively very close results; however, the NFLD solution is qualitatively better because it is continuous, while MEEF predicts unphysical jumps near the middle of the slab. In Fig. 2, the NFLD and MEEF solutions are almost identical, except near the material interface. In summary, the flux-limited diffusion theory derived from the MEEF description is quantitatively as accurate as the MEEF method. However, it is more qualitatively correct and user-friendly than the MEEF method and can be applied efficiently to various steady-state problems. Numerical tests show that this method is widely valid and overall predicts better results than other low-order approximations for various kinds of problems, including eigenvalue problems. Thus, it is an appealing approximate solution technique that is fast computationally and yet is accurate enough for a

  15. A survey of variational principles

    International Nuclear Information System (INIS)

    Lewins, J.D.

    1993-01-01

    The survey of variational principles has ranged widely from its starting point in the Lagrange multiplier to optimisation principles. In an age of digital computation, these classic methods can be adapted to improve such calculations. We emphasize particularly the advantage of basing finite element methods on variational principles, especially if, as maximum and minimum principles, these can provide bounds and hence estimates of accuracy. The non-symmetric principles (and hence stationary rather than extremum principles) are seen, however, to play a significant role in optimisation theory. (Orig./A.B.)

  16. First-principles Theory of Magnetic Multipoles in Condensed Matter Systems

    Science.gov (United States)

    Suzuki, Michi-To; Ikeda, Hiroaki; Oppeneer, Peter M.

    2018-04-01

    The multipole concept, which characterizes the spatial distribution of scalar and vector objects by their angular dependence, has already become widely used in various areas of physics. In recent years it has been employed to systematically classify the anisotropic distribution of electrons and magnetization around atoms in solid-state materials. This has been fuelled by the discovery of several physical phenomena that exhibit unusual higher-rank multipole moments, beyond the conventional degrees of freedom such as charge and magnetic dipole moment. Moreover, higher-rank electric/magnetic multipole moments have been suggested as promising order parameters in exotic hidden-order phases. While the experimental investigations of such anomalous phases have provided encouraging observations of multipolar order, theoretical approaches have developed at a slower pace. In particular, a materials-specific theory has been missing. The multipole concept has furthermore been recognized as the key quantity that characterizes the resultant configuration of magnetic moments in a cluster of atomic moments. This cluster multipole moment has been introduced as a macroscopic order parameter for noncollinear antiferromagnetic structures in crystals and can explain unusual physical phenomena whose appearance is determined by the magnetic point group symmetry. It is the purpose of this review to discuss the recent developments in first-principles theory investigating multipolar degrees of freedom in condensed matter systems. These recent developments exemplify that ab initio electronic structure calculations can unveil detailed insight into the mechanisms of physical phenomena caused by the unconventional multipole degrees of freedom.

  17. Precautionary discourse. Thinking through the distinction between the precautionary principle and the precautionary approach in theory and practice.

    Science.gov (United States)

    Dinneen, Nathan

    2013-01-01

    This paper addresses the distinction, arising from the different ways the European Union and United States have come to adopt precaution regarding various environmental and health-related risks, between the precautionary principle and the precautionary approach in both theory and practice. First, this paper addresses how the precautionary principle has been variously defined, along with an exploration of some of the concepts with which it has been associated. Next, it addresses how the distinction between the precautionary principle and precautionary approach manifested itself within the political realm. Last, it considers the theoretical foundation of the precautionary principle in the philosophy of Hans Jonas, considering whether the principled-pragmatic distinction regarding precaution does or doesn't hold up in Jonas' thought.

  18. The principles of quantum theory, from Planck's quanta to the Higgs boson the nature of quantum reality and the spirit of Copenhagen

    CERN Document Server

    Plotnitsky, Arkady

    2016-01-01

    The book considers foundational thinking in quantum theory, focusing on the role of fundamental principles and of principle thinking there, including thinking that leads to the invention of new principles, which is, the book contends, one of the ultimate achievements of theoretical thinking in physics and beyond. The focus on principles, prominent during the rise and in the immediate aftermath of quantum theory, has been uncommon in more recent discussions and debates concerning it. The book argues, however, that exploring the fundamental principles and principle thinking is exceptionally helpful in addressing the key issues at stake in quantum foundations and the seemingly interminable debates concerning them. Principle thinking led to major breakthroughs throughout the history of quantum theory, beginning with the old quantum theory and quantum mechanics, the first definitive quantum theory, which it remains within its proper (nonrelativistic) scope. It has, the book also argues, been equally important in qua...

  19. Development of a mathematical model of the heating phase of rubber mixture and development of the synthesis of the heating control algorithm using the Pontryagin maximum principle

    Directory of Open Access Journals (Sweden)

    V. S. Kudryashov

    2017-01-01

    Full Text Available The article is devoted to the development of a control algorithm for the heating phase of a rubber compound for CJSC “Voronezh tyre plant”. The algorithm is designed for implementation on a Siemens S-300 controller governing the RS-270 mixer. To construct the algorithm, a systematic analysis of the heating process as a control object has been performed, and a mathematical model of the heating phase has been developed on the basis of the heat-balance equation, which describes the heating of the heat-transfer agent in the heat exchanger and the subsequent heating of the mixture in the mixer. The dynamic temperature characteristics of the heat exchanger and the rubber mixer have been obtained. Taking into account the complexity and nonlinearity of the control object, the rubber mixer, as well as the available methods and extensive experience in operating this machine in an industrial environment, the algorithm has been implemented using the Pontryagin maximum principle. The optimization problem reduces to determining the optimal control (the heating-steam supply) and the optimal trajectory of the object's output coordinate (the temperature of the mixture) which ensure the least consumption of steam while heating the rubber compound in a limited time. To this end, the mathematical model of the heating phase has been written in matrix form. Coefficient matrices for each state of the control, together with the control and disturbance vectors, have been created; the Hamiltonian function has been obtained, and the switching times have been found for constructing the optimal control and the optimal trajectory of the object. Analysis of model experiments and of practical results obtained while programming the controller has shown a 24.4% decrease in heating-steam consumption during the heating phase of the rubber compound.
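
    A hedged sketch of the switching-time logic (the first-order plant below is a hypothetical stand-in, not the paper's RS-270 model): with a Hamiltonian linear in the control, the maximum principle yields bang-bang steam supply, and minimizing total steam use subject to reaching the target temperature in the allotted time amounts to switching the steam on as late as possible.

        # Hypothetical plant dT/dt = -a*(T - T_env) + b*u with u in {0, u_max}.
        # Minimizing J = integral of u dt subject to T(t_f) >= T_target gives
        # a single-switch bang-bang control; we locate the switch by shooting.
        import numpy as np
        from scipy.integrate import solve_ivp
        from scipy.optimize import brentq

        a, b, u_max = 0.05, 1.0, 5.0          # invented plant coefficients
        T_env, T0, T_target, t_f = 20.0, 20.0, 90.0, 60.0

        def final_temp_error(t_switch):
            def rhs(t, T):
                u = u_max if t >= t_switch else 0.0
                return [-a * (T[0] - T_env) + b * u]
            sol = solve_ivp(rhs, (0.0, t_f), [T0], rtol=1e-8)
            return sol.y[0, -1] - T_target

        # Latest switch that still meets the terminal temperature constraint.
        t_switch = brentq(final_temp_error, 0.0, t_f - 1e-6)
        print(f"switch steam on at t = {t_switch:.2f}, "
              f"steam use = {u_max * (t_f - t_switch):.2f}")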

  20. High-Performance First-Principles Molecular Dynamics for Predictive Theory and Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Gygi, Francois [Univ. of California, Davis, CA (United States). Dept. of Computer Science; Galli, Giulia [Univ. of Chicago, IL (United States); Schwegler, Eric [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-12-03

    This project focused on developing high-performance software tools for First-Principles Molecular Dynamics (FPMD) simulations, and applying them in investigations of materials relevant to energy conversion processes. FPMD is an atomistic simulation method that combines a quantum-mechanical description of electronic structure with the statistical description provided by molecular dynamics (MD) simulations. This reliance on fundamental principles allows FPMD simulations to provide a consistent description of structural, dynamical and electronic properties of a material. This is particularly useful in systems for which reliable empirical models are lacking. FPMD simulations are increasingly used as a predictive tool for applications such as batteries, solar energy conversion, light-emitting devices, electro-chemical energy conversion devices and other materials. During the course of the project, several new features were developed and added to the open-source Qbox FPMD code. The code was further optimized for scalable operation of large-scale, Leadership-Class DOE computers. When combined with Many-Body Perturbation Theory (MBPT) calculations, this infrastructure was used to investigate structural and electronic properties of liquid water, ice, aqueous solutions, nanoparticles and solid-liquid interfaces. Computing both ionic trajectories and electronic structure in a consistent manner enabled the simulation of several spectroscopic properties, such as Raman spectra, infrared spectra, and sum-frequency generation spectra. The accuracy of the approximations used allowed for direct comparisons of results with experimental data such as optical spectra, X-ray and neutron diffraction spectra. The software infrastructure developed in this project, as applied to various investigations of solids, liquids and interfaces, demonstrates that FPMD simulations can provide a detailed, atomic-scale picture of structural, vibrational and electronic properties of complex systems

  1. Bonding in Heavier Group 14 Zero-Valent Complexes-A Combined Maximum Probability Domain and Valence Bond Theory Approach.

    Science.gov (United States)

    Turek, Jan; Braïda, Benoît; De Proft, Frank

    2017-10-17

    The bonding in heavier Group 14 zero-valent complexes of a general formula L₂E (E=Si-Pb; L=phosphine, N-heterocyclic and acyclic carbene, cyclic tetrylene and carbon monoxide) is probed by combining valence bond (VB) theory and maximum probability domain (MPD) approaches. All studied complexes are initially evaluated on the basis of the structural parameters and the shape of frontier orbitals revealing a bent structural motif and the presence of two lone pairs at the central E atom. For the VB calculations three resonance structures are suggested, representing the "ylidone", "ylidene" and "bent allene" structures, respectively. The influence of both ligands and central atoms on the bonding situation is clearly expressed in different weights of the resonance structures for the particular complexes. In general, the bonding in the studied E⁰ compounds, the tetrylones, is best described as a resonating combination of "ylidone" and "ylidene" structures with a minor contribution of the "bent allene" structure. Moreover, the VB calculations allow for a straightforward assessment of the π-backbonding (E→L) stabilization energy. The validity of the suggested resonance model is further confirmed by the complementary MPD calculations focusing on the E lone pair region as well as the E-L bonding region. Likewise, the MPD method reveals a strong influence of the σ-donating and π-accepting properties of the ligand. In particular, either one single domain or two symmetrical domains are found in the lone pair region of the central atom, supporting the predominance of either the "ylidene" or "ylidone" structures having one or two lone pairs at the central atom, respectively. Furthermore, the calculated average populations in the lone pair MPDs correlate very well with the natural bond orbital (NBO) populations, and can be related to the average number of electrons that is backdonated to the ligands. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  2. Maximum-entropy clustering algorithm and its global convergence analysis

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    Constructing a batch of differentiable entropy functions to uniformly approximate an objective function by means of the maximum-entropy principle, a new clustering algorithm, called the maximum-entropy clustering algorithm, is proposed based on optimization theory. This algorithm is a soft generalization of the hard C-means algorithm and possesses global convergence. Its relations with other clustering algorithms are discussed.
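
    A hedged sketch of the idea (invented data; not the authors' exact iteration): soft memberships take a Gibbs/maximum-entropy form with a temperature-like parameter T, and the hard C-means assignment is recovered as T → 0.

        # Maximum-entropy (soft) clustering sketch: Gibbs-form memberships.
        import numpy as np

        rng = np.random.default_rng(0)
        X = np.vstack([rng.normal(0, 0.3, (50, 2)),
                       rng.normal(3, 0.3, (50, 2))])
        C = X[rng.choice(len(X), 2, replace=False)]   # initial centers
        T = 0.5                                       # "temperature"

        for _ in range(50):
            d2 = ((X[:, None, :] - C[None, :, :]) ** 2).sum(-1)
            W = np.exp(-(d2 - d2.min(1, keepdims=True)) / T)  # stable weights
            W /= W.sum(1, keepdims=True)              # soft memberships
            C = (W.T @ X) / W.sum(0)[:, None]         # weighted center update
        print(C)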

  3. On the impossibility of a small violation of the Pauli principle within the local quantum field theory

    International Nuclear Information System (INIS)

    Govorkov, A.B.

    1988-01-01

    It is shown that the local quantum field theory of free fields allows only the generalizations of the conventional quantizations (corresponding to the Fermi and Bose statistics) that correspond to the para-Fermi and para-Bose statistics and does not permit ''small'' violation of the Pauli principle

  4. Improving the Quality of Online Discussion: The Effects of Strategies Designed Based on Cognitive Load Theory Principles

    Science.gov (United States)

    Darabi, Aubteen; Jin, Li

    2013-01-01

    This article focuses on heavy cognitive load as the reason for the lack of quality associated with conventional online discussion. Using the principles of cognitive load theory, four online discussion strategies were designed specifically aiming at reducing the discussants' cognitive load and thus enhancing the quality of their online discussion.…

  5. Neural principles of memory and a neural theory of analogical insight

    Science.gov (United States)

    Lawson, David I.; Lawson, Anton E.

    1993-12-01

    Grossberg's principles of neural modeling are reviewed and extended to provide a neural level theory to explain how analogies greatly increase the rate of learning and can, in fact, make learning and retention possible. In terms of memory, the key point is that the mind is able to recognize and recall when it is able to match sensory input from new objects, events, or situations with past memory records of similar objects, events, or situations. When a match occurs, an adaptive resonance is set up in which the synaptic strengths of neurons are increased; thus a long term record of the new input is formed in memory. Systems of neurons called outstars and instars are presumably the underlying units that enable this to occur. Analogies can greatly facilitate learning and retention because they activate the outstars (i.e., the cells that are sampling the to-be-learned pattern) and cause the neural activity to grow exponentially by forming feedback loops. This increased activity insures the boost in synaptic strengths of neurons, thus causing storage and retention in long-term memory (i.e., learning).

  6. Algorithm Preserving Mass Fraction Maximum Principle for Multi-component Flows

    Institute of Scientific and Technical Information of China (English)

    唐维军; 蒋浪; 程军波

    2014-01-01

    We propose a new method for compressible multi-component flows with the Mie-Gruneisen equation of state, based on mass fraction. The model preserves the conservation laws of mass, momentum and total energy for the mixture flow, and it also preserves conservation of mass for each single component. Moreover, it prevents pressure and velocity from jumping across interfaces that separate regions of different fluid components. The wave propagation method is used to discretize this quasi-conservative system. A modification of the numerical method is adopted for the conservative equation of the mass fraction, which preserves the maximum principle for the mass fraction. The unmodified wave propagation method, applied to the conservation equations of the component masses, cannot keep the mass fraction within the interval [0,1]. Numerical results confirm the validity of the method.
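
    A hedged one-dimensional analogue (first-order upwind, not the paper's wave-propagation scheme) of why discretizing the mixture density and the partial density with the same flux keeps the mass fraction within [0, 1]: each updated cell value of ρY/ρ is a convex combination of neighboring mass fractions.

        # Advect rho and rho*Y with the SAME upwind flux; Y = rho*Y / rho
        # then satisfies a discrete maximum principle (stays in [0, 1]).
        import numpy as np

        nx, u, cfl = 200, 1.0, 0.5
        dx = 1.0 / nx
        dt = cfl * dx / u
        x = (np.arange(nx) + 0.5) * dx
        rho = 1.0 + 0.5 * np.sin(2 * np.pi * x)   # smooth positive density
        Y = np.where(x < 0.5, 1.0, 0.0)           # sharp mass-fraction front
        rhoY = rho * Y

        def upwind(q):
            # periodic first-order upwind for constant positive velocity u
            return q - (u * dt / dx) * (q - np.roll(q, 1))

        for _ in range(200):
            rho, rhoY = upwind(rho), upwind(rhoY)

        Y = rhoY / rho
        print(Y.min(), Y.max())                   # remains within [0, 1]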

  7. Setting the renormalization scale in pQCD: Comparisons of the principle of maximum conformality with the sequential extended Brodsky-Lepage-Mackenzie approach

    Energy Technology Data Exchange (ETDEWEB)

    Ma, Hong -Hao [Chongqing Univ., Chongqing (People' s Republic of China); Wu, Xing -Gang [Chongqing Univ., Chongqing (People' s Republic of China); Ma, Yang [Chongqing Univ., Chongqing (People' s Republic of China); Brodsky, Stanley J. [Stanford Univ., Stanford, CA (United States); Mojaza, Matin [KTH Royal Inst. of Technology and Stockholm Univ., Stockholm (Sweden)

    2015-05-26

    A key problem in making precise perturbative QCD (pQCD) predictions is how to set the renormalization scale of the running coupling unambiguously at each finite order. The elimination of the uncertainty in setting the renormalization scale in pQCD will greatly increase the precision of collider tests of the Standard Model and the sensitivity to new phenomena. Renormalization group invariance requires that predictions for observables must also be independent on the choice of the renormalization scheme. The well-known Brodsky-Lepage-Mackenzie (BLM) approach cannot be easily extended beyond next-to-next-to-leading order of pQCD. Several suggestions have been proposed to extend the BLM approach to all orders. In this paper we discuss two distinct methods. One is based on the “Principle of Maximum Conformality” (PMC), which provides a systematic all-orders method to eliminate the scale and scheme ambiguities of pQCD. The PMC extends the BLM procedure to all orders using renormalization group methods; as an outcome, it significantly improves the pQCD convergence by eliminating renormalon divergences. An alternative method is the “sequential extended BLM” (seBLM) approach, which has been primarily designed to improve the convergence of pQCD series. The seBLM, as originally proposed, introduces auxiliary fields and follows the pattern of the β0-expansion to fix the renormalization scale. However, the seBLM requires a recomputation of pQCD amplitudes including the auxiliary fields; due to the limited availability of calculations using these auxiliary fields, the seBLM has only been applied to a few processes at low orders. In order to avoid the complications of adding extra fields, we propose a modified version of seBLM which allows us to apply this method to higher orders. As a result, we then perform detailed numerical comparisons of the two alternative scale-setting approaches by investigating their predictions for the annihilation cross section ratio R

  8. Transferring and practicing the correspondence principle in the old quantum theory: Franck, Hund and the Ramsauer effect

    Energy Technology Data Exchange (ETDEWEB)

    Jaehnert, Martin [MPIWG, Berlin (Germany)

    2013-07-01

    In 1922 Niels Bohr wrote a letter to Arnold Sommerfeld complaining that: ''[i]n the last years my attempts to develop the principles of quantum theory were met with very little understanding.'' Looking for the correspondence idea in publications, one finds that the principle was indeed hardly applied by physicists outside of Copenhagen. Only by 1922 did physicists from the wider research networks of quantum theory start to transfer the principle into their research fields, often far removed from its initial realm of atomic spectroscopy. How and why did physicists suddenly become interested in the idea that Bohr's writings had been promoting since 1918? How was the correspondence principle transferred to these fields, and how did its transfer affect these fields and likewise the correspondence principle itself? To discuss these questions, my talk focuses on the work of James Franck and Friedrich Hund on the Ramsauer effect in 1922 and follows the interrelation of the developing understanding of a newly found effect and the adaptation of the correspondence idea in a new conceptual and sociological context.

  9. Schwinger's quantum action principle from Dirac’s formulation through Feynman’s path integrals, the Schwinger-Keldysh method, quantum field theory, to source theory

    CERN Document Server

    Milton, Kimball A

    2015-01-01

    Starting from the earlier notions of stationary action principles, these tutorial notes show how Schwinger’s Quantum Action Principle descended from Dirac’s formulation, which independently led Feynman to his path-integral formulation of quantum mechanics. Part I brings out in more detail the connection between the two formulations, and applications are discussed. Then, the Keldysh-Schwinger time-cycle method of extracting matrix elements is described. Part II discusses the variational formulation of quantum electrodynamics and the development of source theory.

  10. Revisiting a theory of negotiation: the utility of Markiewicz (2005) proposed six principles.

    Science.gov (United States)

    McDonald, Diane

    2008-08-01

    their differences and be willing to move on. But the problem is that evaluators are not necessarily equipped with the technical or personal skills required for effective negotiation. In addition, the time and effort that are required to undertake this mediating role are often not sufficiently understood by those who commission a review. With such issues in mind Markiewicz, A. [(2005). A balancing act: Resolving multiple stakeholder interests in program evaluation. Evaluation Journal of Australasia, 4(1-2), 13-21] has proposed six principles upon which to build a case for negotiation to be integrated into the evaluation process. This paper critiques each of these principles in the context of an evaluation undertaken of a youth program. In doing so it challenges the view that stakeholder consensus is always possible if program improvement is to be achieved. This has led to some refinement and further extension of the proposed theory of negotiation that is seen to be instrumental to the role of an evaluator.

  11. Ground-state densities from the Rayleigh-Ritz variation principle and from density-functional theory.

    Science.gov (United States)

    Kvaal, Simen; Helgaker, Trygve

    2015-11-14

    The relationship between the densities of ground-state wave functions (i.e., the minimizers of the Rayleigh-Ritz variation principle) and the ground-state densities in density-functional theory (i.e., the minimizers of the Hohenberg-Kohn variation principle) is studied within the framework of convex conjugation, in a generic setting covering molecular systems, solid-state systems, and more. Having introduced admissible density functionals as functionals that produce the exact ground-state energy for a given external potential by minimizing over densities in the Hohenberg-Kohn variation principle, necessary and sufficient conditions on such functionals are established to ensure that the Rayleigh-Ritz ground-state densities and the Hohenberg-Kohn ground-state densities are identical. We apply the results to molecular systems in the Born-Oppenheimer approximation. For any given potential v ∈ L^{3/2}(ℝ³) + L^∞(ℝ³), we establish a one-to-one correspondence between the mixed ground-state densities of the Rayleigh-Ritz variation principle and the mixed ground-state densities of the Hohenberg-Kohn variation principle when the Lieb density-matrix constrained-search universal density functional is taken as the admissible functional. A similar one-to-one correspondence is established between the pure ground-state densities of the Rayleigh-Ritz variation principle and the pure ground-state densities obtained using the Hohenberg-Kohn variation principle with the Levy-Lieb pure-state constrained-search functional. In other words, all physical ground-state densities (pure or mixed) are recovered with these functionals and no false densities (i.e., minimizing densities that are not physical) exist. The importance of topology (i.e., choice of Banach space of densities and potentials) is emphasized and illustrated. The relevance of these results for current-density-functional theory is examined.
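
    For orientation, a sketch of the convex-conjugation framework referred to above, in standard Lieb-style notation (a paraphrase, not the paper's exact statement): the ground-state energy and the universal density functional form a conjugate pair,

        E(v) = \inf_{\rho} \Big( F(\rho) + \int v(\mathbf{r})\,\rho(\mathbf{r})\,\mathrm{d}\mathbf{r} \Big),
        \qquad
        F(\rho) = \sup_{v} \Big( E(v) - \int v(\mathbf{r})\,\rho(\mathbf{r})\,\mathrm{d}\mathbf{r} \Big),

    so a density is a Hohenberg-Kohn ground-state density for v precisely when it attains the infimum; the admissibility conditions of the paper ask when these minimizing densities coincide with the Rayleigh-Ritz ground-state densities.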

  12. The Form of Law: Practical Principles and the Foundations of Kant’s Moral Theory

    OpenAIRE

    Reckner, William Leland

    2017-01-01

    Immanuel Kant argued that morality requires us to act on principles that we can will as universal laws. However, there has always been profound disagreement about how to apply this requirement, and about why this demand should be morally fundamental. This dissertation offers new answers to these questions by developing a deeper understanding of the “practical” principles that Kant wants us to be able to will as universal laws.My primary thesis is that practical principles state three things: ...

  13. The effect of implementing cognitive load theory-based design principles in virtual reality simulation training of surgical skills: a randomized controlled trial

    DEFF Research Database (Denmark)

    Andersen, Steven Arild Wuyts; Mikkelsen, Peter Trier; Konge, Lars

    2016-01-01

    Cognitive overload can inhibit learning, and cognitive load theory-based instructional design principles can be used to optimize learning situations. This study aims to investigate the effect of implementing cognitive load theory-based design principles in virtual reality simulation training...

  14. Maximum likely scale estimation

    DEFF Research Database (Denmark)

    Loog, Marco; Pedersen, Kim Steenstrup; Markussen, Bo

    2005-01-01

    A maximum likelihood local scale estimation principle is presented. An actual implementation of the estimation principle uses second order moments of multiple measurements at a fixed location in the image. These measurements consist of Gaussian derivatives possibly taken at several scales and/or ...

  15. Computing maximum likelihood estimates of loglinear models from marginal sums with special attention to loglinear item response theory

    NARCIS (Netherlands)

    Kelderman, Henk

    1991-01-01

    In this paper, algorithms are described for obtaining the maximum likelihood estimates of the parameters in log-linear models. Modified versions of the iterative proportional fitting and Newton-Raphson algorithms are described that work on the minimal sufficient statistics rather than on the usual

  16. Computing maximum likelihood estimates of loglinear models from marginal sums with special attention to loglinear item response theory

    NARCIS (Netherlands)

    Kelderman, Henk

    1992-01-01

    In this paper algorithms are described for obtaining the maximum likelihood estimates of the parameters in loglinear models. Modified versions of the iterative proportional fitting and Newton-Raphson algorithms are described that work on the minimal sufficient statistics rather than on the usual

  17. The Precautionary Principle, Evidence-Based Medicine, and Decision Theory in Public Health Evaluation

    Science.gov (United States)

    Fischer, Alastair J.; Ghelardi, Gemma

    2016-01-01

    The precautionary principle (PP) has been used in the evaluation of the effectiveness and/or cost-effectiveness of interventions designed to prevent future harms in a range of activities, particularly in the area of the environment. Here, we provide details of circumstances under which the PP can be applied to the topic of harm reduction in Public Health. The definition of PP that we use says that the PP reverses the onus of proof of effectiveness between an intervention and its comparator when the intervention has been designed to reduce harm. We first describe the two frameworks used for health-care evaluation: evidence-based medicine (EBM) and decision theory (DT). EBM is usually used in treatment effectiveness evaluation, while either EBM or DT may be used in evaluating the effectiveness of the prevention of illness. For cost-effectiveness, DT is always used. The expectation in Public Health is that interventions employed to reduce harm will not actually increase harm, where “harm” in this context does not include opportunity cost. That implies that an intervention’s effectiveness can often be assumed. Attention should therefore focus on its cost-effectiveness. This view is consistent with the conclusions of DT. It is also very close to the PP notion of reversing the onus of proof, but is not consistent with EBM as normally practiced, where the onus is on showing a new practice to be superior to usual practice with a sufficiently high degree of certainty. Under our definitions, we show that where DT and the PP differ in their evaluation is in cost-effectiveness, but only for decisions that involve potential catastrophic circumstances, where the nation-state will act as if it is risk-averse. In those cases, it is likely that the state will pay more, and possibly much more, than DT would allow, in an attempt to mitigate impending disaster. That is, the rules that until now have governed all cost-effectiveness analyses are shown not to apply to catastrophic

  18. Design Principles for Serious Video Games in Mathematics Education: From Theory to Practice

    OpenAIRE

    Konstantinos Chorianopoulos; Michail Giannakos

    2014-01-01

    There is growing interest in the employment of serious video games in science education, but there are no clear design principles. After surveying previous work in serious video game design, we highlighted the following design principles: 1) engage the students with narrative (hero, story), 2) employ familiar gameplay mechanics from popular video games, 3) engage students into constructive trial and error game-play and 4) situate collaborative learning. As illustrated examples we designed two...

  19. Revisiting maximum-a-posteriori estimation in log-concave models: from differential geometry to decision theory

    OpenAIRE

    Pereyra, Marcelo

    2016-01-01

    Maximum-a-posteriori (MAP) estimation is the main Bayesian estimation methodology in many areas of data science such as mathematical imaging and machine learning, where high dimensionality is addressed by using models that are log-concave and whose posterior mode can be computed efficiently by using convex optimisation algorithms. However, despite its success and rapid adoption, MAP estimation is not theoretically well understood yet, and the prevalent view is that it is generally not proper ...

  20. [Inheritance on and innovation of traditional Chinese medicine (TCM) flavor theory and TCM flavor standardization principle flavor theory in Compendium of Materia Medica].

    Science.gov (United States)

    Zhang, Wei; Zhang, Rui-xian; Li, Jian

    2015-12-01

    The previous literature on Chinese herbal medicines records distinctive traditional Chinese medicine (TCM) flavors. Compendium of Materia Medica is an influential book in TCM history. The TCM flavor theory and the flavor standardization principle in this book have important significance for modern TCM flavor standardization. Compendium of Materia Medica pays attention to the flavor theory, explaining the relations between the flavor of a medicine and its therapeutic effects by means of the Neo-Confucianism of the Song and Ming Dynasties. However, the book did not reflect or further develop the systemic theory that originated in the Jin and Yuan dynasties. In Compendium of Materia Medica, flavors are standardized simply by tasting medicines, rather than by deducing flavors. Therefore, medicine tasting should be adopted as the major method to standardize the flavor of medicine.

  1. On the application of motivation theory to human factors/ergonomics: motivational design principles for human-technology interaction.

    Science.gov (United States)

    Szalma, James L

    2014-12-01

    Motivation is a driving force in human-technology interaction. This paper represents an effort to (a) describe a theoretical model of motivation in human-technology interaction, (b) provide design principles and guidelines based on this theory, and (c) describe a sequence of steps for the evaluation of motivational factors in human-technology interaction. Motivation theory has been relatively neglected in human factors/ergonomics (HF/E). In both research and practice, the (implicit) assumption has been that the operator is already motivated or that motivation is an organizational concern and beyond the purview of HF/E. However, technology can induce task-related boredom (e.g., automation) that can be stressful and also increase system vulnerability to performance failures. A theoretical model of motivation in human-technology interaction is proposed, based on extension of the self-determination theory of motivation to HF/E. This model provides the basis both for future research and for the development of practical recommendations for design. General principles and guidelines for motivational design are described, as well as a sequence of steps for the design process. Human motivation is an important concern for HF/E research and practice. Procedures in the design of both simple and complex technologies can, and should, include the evaluation of motivational characteristics of the task, interface, or system. In addition, researchers should investigate these factors in specific human-technology domains. The theory, principles, and guidelines described here can be incorporated into existing techniques for task analysis and for interface and system design.

  2. Anomalous singularities in the complex Kohn variational principle of quantum scattering theory

    International Nuclear Information System (INIS)

    Lucchese, R.R.

    1989-01-01

    Variational principles for symmetric complex scattering matrices (e.g., the S matrix or the T matrix) based on the Kohn variational principle have been thought to be free from anomalous singularities. We demonstrate that singularities do exist for these variational principles by considering single and multichannel model problems based on exponential interaction potentials. The singularities are found by considering simultaneous variations in two nonlinear parameters in the variational calculation (e.g., the energy and the cutoff function for the irregular continuum functions). The singularities are found when the cutoff function for the irregular continuum functions extends over a range of the radial coordinate where the square-integrable basis set does not have sufficient flexibility. Effects of these singularities generally should not appear in applications of the complex Kohn method where a fixed variational basis set is considered and only the energy is varied

  3. The gauge principle vs. the equivalence principle

    International Nuclear Information System (INIS)

    Gates, S.J. Jr.

    1984-01-01

    Within the context of field theory, it is argued that the role of the equivalence principle may be replaced by the principle of gauge invariance to provide a logical framework for theories of gravitation

  4. Scaling theory put into practice: First-principles modeling of transport in doped silicon nanowires

    DEFF Research Database (Denmark)

    Markussen, Troels; Rurali, R.; Jauho, Antti-Pekka

    2007-01-01

    We combine the ideas of scaling theory and universal conductance fluctuations with density-functional theory to analyze the conductance properties of doped silicon nanowires. Specifically, we study the crossover from ballistic to diffusive transport in boron or phosphorus doped Si nanowires...

  5. Andragogy and Motivation: An Examination of the Principles of Andragogy through Two Motivation Theories

    Science.gov (United States)

    Houde, Joseph

    2006-01-01

    Andragogy, originally proposed by Malcolm Knowles, has been criticized as an atheoretical model. Validation of andragogy has been advocated by scholars, and this paper explores one method for that process. Current motivation theory, specifically socioemotional selectivity and self-determination theory correspond with aspects of andragogy. In…

  6. The complex itinerary of Leibniz’s planetary theory: physical convictions, metaphysical principles and Keplerian inspiration

    CERN Document Server

    Bussotti, Paolo

    2015-01-01

    This book presents new insights into Leibniz’s research on planetary theory and his system of pre-established harmony. Although some aspects of this theory have been explored in the literature, others are less well known. In particular, the book offers new contributions on the connection between the planetary theory and the theory of gravitation. It also provides an in-depth discussion of Kepler’s influence on Leibniz’s planetary theory and, more generally, on Leibniz’s concept of pre-established harmony. Three initial chapters presenting the mathematical and physical details of Leibniz’s works provide a frame of reference. The book then goes on to discuss research on Leibniz’s conception of gravity and the connection between Leibniz and Kepler.

  7. Theory of quasi-Chaplygin unstable media and evolutionary principle for selecting spontaneous solutions

    International Nuclear Information System (INIS)

    Zhdanov, S.K.; Trubnikov, B.A. (Institut Atomnoi Energii, Moscow, USSR)

    1986-01-01

    A one-dimensional ideal gas with negative compressibility described by quasi-Chaplygin equations is discussed. Its reduction to a Laplace equation is shown, and an evolutionary principle for selecting spontaneous solutions is summarized. Three extremely simple spontaneous solutions are obtained along with multidimensional self-similar solutions. The Buneman instability in a plasma is considered as an example. 17 references

  8. Reframing the Principle of Specialisation in Legitimation Code Theory: A Blended Learning Perspective

    Science.gov (United States)

    Owusu-Agyeman, Yaw; Larbi-Siaw, Otu

    2017-01-01

    This study argues that in developing a robust framework for students in a blended learning environment, Structural Alignment (SA) becomes the third principle of specialisation in addition to Epistemic Relation (ER) and Social Relation (SR). We provide an extended code: (ER+/-, SR+/-, SA+/-) that presents strong classification and framing to the…

  9. Hamilton-Ostrogradsky principle in the theory of nonlinear elasticity with the combined approach

    International Nuclear Information System (INIS)

    Sporykhin, A.N.

    1995-01-01

    The assignment of a portion of the edge conditions in the deformed state and a portion of them in the initial state, so that the initial and deformed states of the body are unknowns, is a characteristic feature of the statement of a number of technological problems. Haber, and Haber and Abel, have performed studies in this direction, where constitutive relationships have been constructed within the framework of a linearly elastic material. Use of the displacements of individual particles as variable parameters in these relationships has required additional conditions that do not follow from the formulated problem. Use of familiar variational principles described in Euler coordinates is rendered difficult by the complexity of edge-condition formulation in the special case when the initial state is unknown. The latter is governed by the fact that variational principles are derived from the initial formulations "in Lagrangian coordinates" by recalculating the action functional. Using Lagrange's principle, Novikov and Sporykhin constructed constitutive equations in the general case of a nonlinearly elastic body with edge conditions assigned in different configurations. An analogous problem is solved in this paper using the Hamilton-Ostrogradsky principle

  10. Organizing principles as tools for bridging the gap between system theory and biological experimentation.

    Science.gov (United States)

    Mekios, Constantinos

    2016-04-01

    Twentieth-century theoretical efforts towards the articulation of general system properties came short of having the significant impact on biological practice that their proponents envisioned. Although the latter did arrive at preliminary mathematical formulations of such properties, they had little success in showing how these could be productively incorporated into the research agenda of biologists. Consequently, the gap that kept system-theoretic principles cut-off from biological experimentation persisted. More recently, however, simple theoretical tools have proved readily applicable within the context of systems biology. In particular, examples reviewed in this paper suggest that rigorous mathematical expressions of design principles, imported primarily from engineering, could produce experimentally confirmable predictions of the regulatory properties of small biological networks. But this is not enough for contemporary systems biologists who adopt the holistic aspirations of early systemologists, seeking high-level organizing principles that could provide insights into problems of biological complexity at the whole-system level. While the presented evidence is not conclusive about whether this strategy could lead to the realization of the lofty goal of a comprehensive explanatory integration, it suggests that the ongoing quest for organizing principles is pragmatically advantageous for systems biologists. The formalisms postulated in the course of this process can serve as bridges between system-theoretic concepts and the results of molecular experimentation: they constitute theoretical tools for generalizing molecular data, thus producing increasingly accurate explanations of system-wide phenomena.

  11. Design Principles for Serious Video Games in Mathematics Education: From Theory to Practice

    Directory of Open Access Journals (Sweden)

    Konstantinos Chorianopoulos

    2014-09-01

    There is growing interest in the employment of serious video games in science education, but there are no clear design principles. After surveying previous work in serious video game design, we highlighted the following design principles: (1) engage the students with narrative (hero, story), (2) employ familiar gameplay mechanics from popular video games, (3) engage students in constructive trial-and-error gameplay, and (4) situate collaborative learning. As illustrated examples we designed two math video games targeted at primary education students. The gameplay of the math video games embeds addition operations in a seamless way, inspired by that of classic platform games. In this way, the students are adding numbers as part of popular gameplay mechanics and as a means to reach the video game objective, rather than as an end in itself. The employment of well-defined principles in the design of math video games should facilitate the evaluation of learning effectiveness by researchers. Moreover, educators can deploy alternative versions of the games in order to engage students with diverse learning styles. For example, some students might be motivated and benefited by narrative, while others by collaboration, because it is unlikely that one type of serious video game might fit all learning styles. The proposed principles are not meant to be an exhaustive list, but a starting point for extending the list and applying them in other cases of serious video games beyond mathematics and learning.

  12. The precautionary principle and EMF: from the theory to the practice

    International Nuclear Information System (INIS)

    Lambrozo, J.

    2002-01-01

    In 1992 the United Nations Declaration on the Environment stated that where there are threats of serious or irreversible damage, lack of full scientific certainty will not be used as a reason for postponing cost-effective measures to prevent environmental degradation. Since then this interpretation has been reaffirmed within numerous framework conventions, and national environmental law in a number of countries has begun to incorporate it. The contents of the precautionary principle: there are in fact two completely different ideas about the principle. The absolute: the precautionary principle would aim to guarantee complete harmlessness; the aim is zero risk, and even a minimal suspicion of risk should result in a moratorium or a definitive ban. The moderate: its implementation is subject to a scientifically credible statement of hypothetical risk; it also gives priority to positive measures, particularly research to provide a better assessment of the risk. In every case, before any decision, a statement of costs and advantages should be drawn up. The concept of prudent avoidance, introduced in 1989 by G. Morgan and adopted by some states (Sweden, Australia), seems to be a specific application of the precautionary principle to EMF, taking into account the cost of the policy. The EMF research: after more than 20 years of research (residential and occupational epidemiological studies, in vitro studies and laboratory animal studies) the scientific uncertainty has been considerably reduced, but the possibility of some adverse effects remains. This fact and the public concern about EMF (partly explained by the ubiquity of exposure) explain the temptation to apply the precautionary principle to the EMF issue

  13. Topics in Bayesian statistics and maximum entropy

    International Nuclear Information System (INIS)

    Mutihac, R.; Cicuttin, A.; Cerdeira, A.; Stanciulescu, C.

    1998-12-01

    Notions of Bayesian decision theory and maximum entropy methods are reviewed with particular emphasis on probabilistic inference and Bayesian modeling. The axiomatic approach is considered as the best justification of Bayesian analysis and maximum entropy principle applied in natural sciences. Particular emphasis is put on solving the inverse problem in digital image restoration and Bayesian modeling of neural networks. Further topics addressed briefly include language modeling, neutron scattering, multiuser detection and channel equalization in digital communications, genetic information, and Bayesian court decision-making. (author)
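
    As a concrete illustration of the maximum entropy principle reviewed here (my example, not taken from the record): among all distributions on the faces of a die with a prescribed mean, the entropy maximiser has the exponential (Gibbs) form p_i proportional to exp(lambda * x_i), and the multiplier lambda is fixed by a one-dimensional root solve.

        # Maximum-entropy distribution on {1,...,6} with prescribed mean 4.5.
        import numpy as np
        from scipy.optimize import brentq

        x = np.arange(1, 7)
        target_mean = 4.5

        def mean_given_lam(lam):
            w = np.exp(lam * x)        # Gibbs weights
            return (w / w.sum()) @ x

        lam = brentq(lambda l: mean_given_lam(l) - target_mean, -10.0, 10.0)
        p = np.exp(lam * x); p /= p.sum()
        print(p, -(p @ np.log(p)))     # maxent probabilities and their entropy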

  14. To the elementary theory of critical (maximum) flow rate of two-phase mixture in channels with various sections

    International Nuclear Information System (INIS)

    Nigmatulin, B.I.; Soplenkov, K.I.

    1978-01-01

    On the basis of the concepts of two-phase dispersive flow with various structures (bubble, vapour-drop, etc.), in the framework of a two-speed, two-temperature, one-dimensional stationary flow model with provision for phase transitions, the conditions under which a critical (maximum) flow rate of a two-phase mixture is achieved during its outflow from a channel of given geometry have been determined. It is shown that, for the chosen set of two-phase flow equations with known deceleration and structure parameters, one of the critical conditions is satisfied: either the solution of the set of equations corresponding to a critical flow rate is a special one, i.e. it passes through a special point located between the minimum and outlet channel sections where the carrying-phase velocity approaches the decelerated sound speed in the mixture, or the determinant of the initial set of equations equals zero at the outlet channel section, i.e. gradients of the main flow parameters tend to +-infinity in this section, and the carrying-phase velocity again approaches the decelerated sound velocity in the mixture

  15. In search of a principled theory of the 'value' of knowledge.

    Science.gov (United States)

    Castelfranchi, Cristiano

    2016-01-01

    A theory of the Value/Utility of information and knowledge (K) is not really available. This would require a theory of the centrality of Goals in minds, and of the role of K relative to Goals and their dynamics. K value is a notion relative to Goal value. Information/K is precisely a resource, a means, and the value of means depends on the value of their possible functions and uses. The claim of this paper is that Ks have a Value and Utility, and can be more or less 'precious'; they have a cost and imply some risks; they can be not only useful but negative and dangerous. We also examine the 'quality' of this resource: its reliability, and its crucial role in goal processing: activating goals, abandoning them, choosing, planning, formulating intentions, deciding to act. 'Relevance theory', information theory, epistemic utility theory, etc. are not enough to provide a theory of the Value/Utility of K. And truthfulness alone is not 'the' Value of K. Even true information can be noxious for the subject.

  16. Geometric derivation of string field theory from first principles: Closed strings and modular invariance

    International Nuclear Information System (INIS)

    Kaku, M.

    1988-01-01

    We present an entirely new approach to closed-string field theory, called "geometric string field theory", which avoids the complications found in Becchi-Rouet-Stora-Tyutin string field theory (e.g., ghost counting, infinite overcounting of diagrams, midpoints, lack of modular invariance). Following the analogy with general relativity and Yang-Mills theory, we define a new infinite-dimensional local gauge group, called the unified string group, which uniquely specifies the connection fields, the curvature tensor, the measure and tensor calculus, and finally the action itself. Geometric field theory, when gauge fixed, yields an entirely new class of gauges called the interpolating gauge which allows us to smoothly interpolate between the midpoint gauge and the end-point gauge (''covariantized light-cone gauge''). We can show that geometric string field theory reproduces one copy of the Shapiro-Virasoro model. Surprisingly, after the gauge is broken, a new "closed four-string interaction" emerges as the counterpart of the instantaneous four-fermion Coulomb term in QED. This term restores modular invariance and precisely fills the missing region of the complex plane

  17. The psychological behaviorism theory of pain and the placebo: its principles and results of research application.

    Science.gov (United States)

    Staats, Peter S; Hekmat, Hamid; Staats, Arthur W

    2004-01-01

    The psychological behaviorism theory of pain unifies biological, behavioral, and cognitive-behavioral theories of pain and facilitates development of a common vocabulary for pain research across disciplines. Pain investigation proceeds in seven interacting realms: basic biology, conditioned learning, language cognition, personality differences, pain behavior, the social environment, and emotions. Because pain is an emotional response, examining the bidirectional impact of emotion is pivotal to understanding pain. Emotion influences each of the other areas of interest and causes the impact of each factor to amplify or diminish in an additive fashion. Research based on this theory of pain has revealed the ameliorating impact on pain of (1) improving mood by engaging in pleasant sexual fantasies, (2) reducing anxiety, and (3) reducing anger through various techniques. Application of the theory to therapy improved the results of treatment of osteoarthritic pain. The psychological behaviorism theory of the placebo considers the placebo a stimulus conditioned to elicit a positive emotional response, a response that is most powerful if elicited by conditioned language. Research based on this theory of the placebo shows that pain is ameliorated by a placebo suggestion and augmented by a nocebo suggestion, and that pain sensitivity and pain anxiety increase susceptibility to a placebo.

  18. Theory of group extension, Shubnikov-Curie principle and phase transformations

    International Nuclear Information System (INIS)

    Koptsik, V.A.; Talis, A.L.

    1983-01-01

    It is shown that the generalized Curie principle (GCP) is the principle of nondecreasing abstract symmetry under structural transformations in (quasi-)isolated physical systems. Asymmetry of such systems at any structural level is compensated by their symmetrization at another one, by transformation of old symmetries and the appearance of qualitatively new ones. A corresponding situation is preserved also at the level of description (mathematical simulation) of physical systems. The structural levels of the arrangement of matter, and the forms of connection between them reflected by the Shubnikov-Curie principle (SCP) and the GCP, are inexhaustible. As new structural levels and new forms of relations between them are discovered, new forms of the SCP may be discovered as well; these cannot be exhausted in the given work

  19. Microbial control and food preservation: Theory and practice: Principles of food preservation

    Science.gov (United States)

    Food preservation is an action or method used to maintain foods at a desired level of properties or quality so as to obtain maximum benefit. A good method of food preservation is one that slows down or prevents altogether the action of the agents of spoilage without damaging the food. To achieve this, cert...

  20. Using the IRPA Guiding Principles on Stakeholder Engagement: putting theory into practice.

    Science.gov (United States)

    Jones, C Rick

    2011-11-01

    The International Radiation Protection Association (IRPA) published their Guiding Principles for Radiation Protection Professionals on Stakeholder Engagement in February 2009. The publication of this document is the culmination of four years of work by the Spanish Society for Radiological Protection, the French Society of Radioprotection, the United Kingdom Society of Radiological Protection, and the IRPA organization, with full participation by the Italian Associate Society and the Nuclear Energy Agency's Committee on Radiation Protection and Public Health. The Guiding Principles provide field-tested and sound counsel to the radiation protection profession to aid it in successfully engaging with stakeholders in decision-making processes that result in mutually agreeable and sustainable decisions. Stakeholders in the radiation protection decision making process are now being recognized as a spectrum of individuals and organizations specific to the situation. It is also important to note that stakeholder engagement is not needed or advised in all decision making situations, although it has been shown to be a tool of first choice in dealing with such topics as intervention and chronic exposure situations, as well as situations that have reached an impasse using traditional approaches to decision-making. To enhance the contribution of the radiation protection profession, it is important for radiation protection professionals and their national professional societies to embrace and implement the IRPA Guiding Principles in a sustainable way by making them a cornerstone of their operations and an integral part of day-to-day activities.

  1. Molecular Theory of Detonation Initiation: Insight from First Principles Modeling of the Decomposition Mechanisms of Organic Nitro Energetic Materials.

    Science.gov (United States)

    Tsyshevsky, Roman V; Sharia, Onise; Kuklja, Maija M

    2016-02-19

    This review presents a concept, which assumes that thermal decomposition processes play a major role in defining the sensitivity of organic energetic materials to detonation initiation. As a science and engineering community we are still far away from having a comprehensive molecular detonation initiation theory in a widely agreed upon form. However, recent advances in experimental and theoretical methods allow for a constructive and rigorous approach to design and test the theory or at least some of its fundamental building blocks. In this review, we analyzed a set of select experimental and theoretical articles, which were augmented by our own first principles modeling and simulations, to reveal new trends in energetic materials and to refine known existing correlations between their structures, properties, and functions. Our consideration is intentionally limited to the processes of thermally stimulated chemical reactions at the earliest stage of decomposition of molecules and materials containing defects.

  2. Molecular Theory of Detonation Initiation: Insight from First Principles Modeling of the Decomposition Mechanisms of Organic Nitro Energetic Materials

    Directory of Open Access Journals (Sweden)

    Roman V. Tsyshevsky

    2016-02-01

    This review presents a concept, which assumes that thermal decomposition processes play a major role in defining the sensitivity of organic energetic materials to detonation initiation. As a science and engineering community we are still far away from having a comprehensive molecular detonation initiation theory in a widely agreed upon form. However, recent advances in experimental and theoretical methods allow for a constructive and rigorous approach to design and test the theory or at least some of its fundamental building blocks. In this review, we analyzed a set of select experimental and theoretical articles, which were augmented by our own first principles modeling and simulations, to reveal new trends in energetic materials and to refine known existing correlations between their structures, properties, and functions. Our consideration is intentionally limited to the processes of thermally stimulated chemical reactions at the earliest stage of decomposition of molecules and materials containing defects.

  3. RENEWAL OF BASIC LAWS AND PRINCIPLES FOR POLAR CONTINUUM THEORIES (Ⅵ)-CONSERVATION LAWS OF MASS AND INERTIA

    Institute of Scientific and Technical Information of China (English)

    戴安民

    2003-01-01

    The purpose is to reestablish the coupled conservation laws, the local conservation equations and the jump conditions of mass and inertia for polar continuum theories. In this connection the new material derivatives of the deformation gradient, the line element, the surface element and the volume element were derived and the generalized Reynolds transport theorem was presented. Combining these conservation laws of mass and inertia with the balance laws of momentum, angular momentum and energy derived in our previous papers of this series, a rather complete system of coupled basic laws and principles for polar continuum theories is constituted on the whole. From this system the coupled nonlocal balance equations of mass, inertia, momentum, angular momentum and energy may be obtained by the usual localization.
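
    For orientation, the classical Reynolds transport theorem that the generalized version extends reads (standard notation, not the paper's polar-continuum form):

        \frac{d}{dt} \int_{V(t)} \phi \, dV
          = \int_{V(t)} \left( \frac{\partial \phi}{\partial t}
          + \nabla \cdot (\phi \, \mathbf{v}) \right) dV ,

    where \phi is any density carried by the material volume V(t) and \mathbf{v} is the velocity field; applying it to the mass and inertia densities produces the local conservation equations and jump conditions mentioned above.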

  4. Lattice instability and martensitic transformation in LaAg predicted from first-principles theory

    DEFF Research Database (Denmark)

    Vaitheeswaran, G.; Kanchana, V.; Zhang, X.

    2012-01-01

    The electronic structure, elastic constants and lattice dynamics of the B2 type intermetallic compound LaAg are studied by means of density functional theory calculations with the generalized gradient approximation for exchange and correlation. The calculated equilibrium properties and elastic constants ... The phonon dispersions, calculated using density functional perturbation theory, are in good agreement with available inelastic neutron scattering data. Under pressure, the phonon dispersions develop imaginary frequencies, starting at around 2.3 GPa, in good accordance with the martensitic instability observed above 3.4 GPa...

  5. Chemical analysis using coincidence Doppler broadening and supporting first-principles theory: Applications to vacancy defects in compound semiconductors

    International Nuclear Information System (INIS)

    Makkonen, I.; Rauch, C.; Mäki, J.-M.; Tuomisto, F.

    2012-01-01

    The Doppler broadening of the positron annihilation radiation contains information on the chemical environment of vacancy defects trapping positrons in solids. The measured signal can, for instance, reveal impurity atoms situated next to vacancies. As compared to integrated quantities such as the positron annihilation rate or the annihilation line shape parameters, the full Doppler spectrum measured in the coincidence mode contains much more useful information for defect identification. This information, however, is indirect and complementary understanding is needed to fully interpret the results. First-principles calculations are a valuable tool in the analysis of measured spectra. One can construct an atomic-scale model for a given candidate defect, calculate from first principles the corresponding Doppler spectrum, and directly compare results between experiment and theory. In this paper we discuss recent examples of successful combinations of coincidence Doppler broadening measurements and supporting first-principles calculations. These demonstrate the predictive power of state-of-the-art calculations and the usefulness of such an approach in the chemical analysis of vacancy defects.

  6. Coexistence of different vacua in the effective quantum field theory and multiple point principle

    International Nuclear Information System (INIS)

    Volovik, G.E.

    2004-01-01

    According to the multiple point principle, our Universe is on the coexistence curve of two or more phases of the quantum vacuum. The coexistence of different quantum vacua can be regulated by the exchange of global fermionic charges between the vacua. If the coexistence is regulated by the baryonic charge, all the coexisting vacua exhibit baryonic asymmetry. Due to the exchange of baryonic charge between the vacuum and matter, which occurs above the electroweak transition, the baryonic asymmetry of the vacuum induces the baryonic asymmetry of matter in our Standard-Model phase of the quantum vacuum

  7. Focal Points Revisited: Team Reasoning, the Principle of Insufficient Reason and Cognitive Hierarchy Theory

    NARCIS (Netherlands)

    Bardsley, N.; Ule, A.

    It is well-established that people can coordinate their behaviour on focal points in games with multiple equilibria, but it is not firmly established how. Much coordination game data might be explained by team reasoning, a departure from individualistic choice theory. However, a less exotic

  8. Stir Bar Sorptive Extraction (SBSE), a novel extraction technique for aqueous samples: theory and principles

    NARCIS (Netherlands)

    Baltussen, H.A.; Sandra, P.J.F.; David, F.; Cramers, C.A.M.G.

    1999-01-01

    The theory and practice of a novel approach for sample enrichment, namely the application of stir bars coated with the sorbent polydimethylsiloxane (PDMS), referred to as stir bar sorptive extraction (SBSE), are presented. Stir bars with a length of 10 and 40 mm coated with 55 and 219 µL of PDMS

  9. Aspects of psychoanalytic theory: drives, defense, and the pleasure-unpleasure principle.

    Science.gov (United States)

    Brenner, Charles

    2008-07-01

    Freud explained certain fundamentally important aspects of mental motivation by assuming the existence of two drives, one libidinal and the other aggressive/destructive. Elements of this theory that seem invalid are identified and discussed, and revisions are proposed that appear to have more validity and greater clinical usefulness.

  10. Geodesign From Theory to Practice: In the Search for Geodesign Principles in Italian Planning Regulations

    Directory of Open Access Journals (Sweden)

    Michele Campagna

    2014-05-01

    Geodesign is a trans-disciplinary concept emerging in a growing debate among scholars in North America, Europe and Asia with the aim of bridging the gap between landscape architecture, spatial planning and design, and Geographic Information Science. The concept entails the application of methods and techniques for planning sustainable development in an integrated process, from project conceptualization to analysis, simulation and evaluation, and from scenario design to impact assessment, in a process including stakeholder participation and collaboration in decision-making and relying strongly on the use of digital information technologies. As such, the concept may be not entirely new. However, it is argued here, its application has not reached the expected results so far. Hence, more research is needed in order to better understand the methodological, technical, organizational, professional and institutional issues involved in a fruitful application of Geodesign principles and methods in practice. In line with the above assumptions, this paper is aimed at supplying early critical insights as a contribution towards a clearer understanding of the relationships between Geodesign concepts and planning regulations. The hope with this first endeavour along this research line is to make a more explicit and robust link between policy principles and planning, design and decision-making methods and tools, possibly as a small contribution to bringing innovation to planning education, governance and practice.

  11. Effects of Modality and Redundancy Principles on the Learning and Attitude of a Computer-Based Music Theory Lesson among Jordanian Primary Pupils

    Science.gov (United States)

    Aldalalah, Osamah Ahmad; Fong, Soon Fook

    2010-01-01

    The purpose of this study was to investigate the effects of modality and redundancy principles on the attitude and learning of music theory among primary pupils of different aptitudes in Jordan. The lesson of music theory was developed in three different modes, audio and image (AI), text with image (TI) and audio with image and text (AIT). The…

  12. Cognitive Theory of Multimedia Learning, Instructional Design Principles, and Students with Learning Disabilities in Computer-Based and Online Learning Environments

    Science.gov (United States)

    Greer, Diana L.; Crutchfield, Stephen A.; Woods, Kari L.

    2013-01-01

    Struggling learners and students with Learning Disabilities often exhibit unique cognitive processing and working memory characteristics that may not align with instructional design principles developed with typically developing learners. This paper explains the Cognitive Theory of Multimedia Learning and underlying Cognitive Load Theory, and…

  13. Inelastic transport theory from first principles: Methodology and application to nanoscale devices

    DEFF Research Database (Denmark)

    Frederiksen, Thomas; Paulsson, Magnus; Brandbyge, Mads

    2007-01-01

    the density-functional codes SIESTA and TRANSIESTA that use atomic basis sets. The inelastic conductance characteristics are calculated using the nonequilibrium Green’s function formalism, and the electron-phonon interaction is addressed with perturbation theory up to the level of the self-consistent Born approximation. While these calculations often are computationally demanding, we show how they can be approximated by a simple and efficient lowest order expansion. Our method also addresses effects of energy dissipation and local heating of the junction via detailed calculations of the power flow. We ... the inelastic current through different hydrocarbon molecules between gold electrodes. Both for the wires and the molecules our theory is in quantitative agreement with experiments, and characterizes the system-specific mode selectivity and local heating...

  14. The application of sustainable development principles to the theory and practice of property valuation

    OpenAIRE

    Lorenz, David Philipp

    2006-01-01

    This dissertation is an exploration into the fields of sustainable development, property investment and valuation. It investigates the rationale for immediately and rigorously integrating sustainability issues into property valuation theory and practice and proposes theoretical and practical options for valuers on how to address sustainability issues within valuation reports. It is argued that the perception of property as a commodity is changing to emphasize sustainable design features and p...

  15. Evidence of seasonal variation in longitudinal growth of height in a sample of boys from Stuttgart Carlsschule, 1771-1793, using combined principal component analysis and maximum likelihood principle.

    Science.gov (United States)

    Lehmann, A; Scheffler, Ch; Hermanussen, M

    2010-02-01

    Recent progress in modelling individual growth has been achieved by combining the principal component analysis and the maximum likelihood principle. This combination models growth even in incomplete sets of data and in data obtained at irregular intervals. We re-analysed late 18th century longitudinal growth of German boys from the boarding school Carlsschule in Stuttgart. The boys, aged 6-23 years, were measured at irregular 3-12 monthly intervals during the period 1771-1793. At the age of 18 years, mean height was 1652 mm, but height variation was large. The shortest boy reached 1474 mm, the tallest 1826 mm. Measured height closely paralleled modelled height, with mean difference of 4 mm, SD 7 mm. Seasonal height variation was found. Low growth rates occurred in spring and high growth rates in summer and autumn. The present study demonstrates that combining the principal component analysis and the maximum likelihood principle enables growth modelling in historic height data also.
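
    The idea can be illustrated with a toy sketch (my construction on synthetic numbers, not the authors' implementation): estimate the mean curve and principal component curves from complete records, then fit each sparsely measured individual's component scores by least squares, which is the maximum likelihood estimate under i.i.d. Gaussian measurement noise.

        # Toy sketch: growth modelling via principal components + ML scores.
        import numpy as np

        rng = np.random.default_rng(1)
        ages = np.linspace(6, 23, 35)                    # common age grid (years)
        n, t = 200, ages.size

        # Synthetic complete records (all numbers purely illustrative):
        mean_curve = 800.0 + 50.0 * ages                 # crude mean height (mm)
        basis = np.vstack([np.ones(t), ages - ages.mean()])
        scores_true = rng.standard_normal((n, 2)) * np.array([40.0, 3.0])
        data = mean_curve + scores_true @ basis + 5 * rng.standard_normal((n, t))

        # Principal component curves of the centred records:
        mu = data.mean(axis=0)
        _, _, Vt = np.linalg.svd(data - mu, full_matrices=False)
        pcs = Vt[:2]

        # One boy measured only at a few irregular ages:
        obs = np.array([2, 9, 20, 31])                   # observed grid indices
        y = data[0, obs]

        # Least-squares (= Gaussian ML) fit of his scores, then reconstruction:
        A = pcs[:, obs].T
        scores_ml, *_ = np.linalg.lstsq(A, y - mu[obs], rcond=None)
        modelled = mu + scores_ml @ pcs                  # completed growth curve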

  16. A New Monotone Iteration Principle in the Theory of Nonlinear Fractional Differential Equations

    Directory of Open Access Journals (Sweden)

    Bapurao C. Dhage

    2015-08-01

    In this paper the author proves algorithms for the existence as well as approximation of solutions to initial value problems of nonlinear fractional differential equations, using operator-theoretic techniques in a partially ordered metric space. The main results rely on the Dhage iteration principle embodied in the recent hybrid fixed point theorems of Dhage (2014) in a partially ordered normed linear space, and the existence and approximation of solutions of the considered nonlinear fractional differential equations are obtained under weak mixed partial continuity and partial Lipschitz conditions. Our hypotheses and our existence and approximation results are also well illustrated by some numerical examples.

  17. First-principles many-body theory for ultra-cold atoms

    International Nuclear Information System (INIS)

    Drummond, Peter D.; Hu Hui; Liu Xiaji

    2010-01-01

    Recent breakthroughs in the creation of ultra-cold atoms in the laboratory have ushered in unprecedented changes in physical science. These enormous changes in the coldest temperatures available in the laboratory mean that many novel experiments are possible. There is unprecedented control and simplicity in these novel systems, meaning that quantum many-body theory is now facing severe challenges in quantitatively understanding these new results. We discuss some of the new experiments and recently developed theoretical techniques required to predict the results obtained.

  18. First principles density functional theory study of Pb doped α-MnO2 catalytic materials

    Science.gov (United States)

    Song, Zilin; Yan, Zhiguo; Yang, Xiaojun; Bai, Hang; Duan, Yuhua; Yang, Bin; Leng, Li

    2018-03-01

    The impact of Pb in the tunnels of manganese oxide octahedral molecular sieves on the chemical state of Mn species and lattice oxygen was investigated utilizing density functional theory calculations. The results show that Pb dopants in the tunnels of OMS-2 can reduce the average valence states of Mn. The lower energy required for bulk oxygen defect formation in Pb-OMS-2 validates the activation of lattice oxygen by inclusion of the tunnel dopant. The inclusion of Pb promotes the catalytic oxidation activity of OMS-2 by reducing the energy required for surface lattice oxygen migration during the Mars-van Krevelen oxidation process.

  19. Minimum current principle and variational method in theory of space charge limited flow

    Energy Technology Data Exchange (ETDEWEB)

    Rokhlenko, A. [Department of Mathematics, Rutgers University, Piscataway, New Jersey 08854-8019 (United States)]

    2015-10-21

    In the spirit of the principle of least action: when a perturbation is applied to a physical system, it reacts by modifying its state to "agree" with the perturbation through a "minimal" change of its initial state. In particular, electron field emission should produce the minimum current consistent with the boundary conditions. This current can be found theoretically by solving the corresponding equations using different techniques. We apply here the variational method for the current calculation, which can be quite effective even with a short set of trial functions. The approach to a better result can be monitored by the total current, which should decrease when we are on the right track. Here, we present only an illustration for simple geometries of devices with electron flow. The development of these methods can be useful when the emitter and/or anode shapes make the use of standard approaches difficult. Direct numerical calculations, including the particle-in-cell technique, are very effective, but theoretical calculations can provide important insight for understanding the general features of flow formation and can sometimes even be realized by simpler routines.

  20. Second Harmonic Correlation Spectroscopy: Theory and Principles for Determining Surface Binding Kinetics.

    Science.gov (United States)

    Sly, Krystal L; Conboy, John C

    2017-06-01

    A novel application of second harmonic correlation spectroscopy (SHCS) for the direct determination of molecular adsorption and desorption kinetics to a surface is discussed in detail. The surface-specific nature of second harmonic generation (SHG) provides an efficient means to determine the kinetic rates of adsorption and desorption of molecular species to an interface without interference from bulk diffusion, which is a significant limitation of fluorescence correlation spectroscopy (FCS). The underlying principles of SHCS for the determination of surface binding kinetics are presented, including the role of optical coherence and optical heterodyne mixing. These properties of SHCS are extremely advantageous and lead to an increase in the signal-to-noise (S/N) of the correlation data, increasing the sensitivity of the technique. The influence of experimental parameters, including the uniformity of the TEM00 laser beam, the overall photon flux, and collection time are also discussed, and are shown to significantly affect the S/N of the correlation data. Second harmonic correlation spectroscopy is a powerful, surface-specific, and label-free alternative to other correlation spectroscopic methods for examining surface binding kinetics.

  1. Basic Knowledge for Market Principle: Approaches to the Price Coordination Mechanism by Using Optimization Theory and Algorithm

    Science.gov (United States)

    Aiyoshi, Eitaro; Masuda, Kazuaki

    On the basis of market fundamentalism, new types of social systems with the market mechanism, such as electricity trading markets and carbon dioxide (CO2) emission trading markets, have been developed. However, there are few textbooks in science and technology which explain that Lagrange multipliers can be interpreted as market prices. This tutorial paper explains that (1) the steepest descent method for dual problems in optimization and (2) the Gauss-Seidel method for solving the stationary conditions of Lagrange problems with market principles can formulate the mechanism of market pricing, which works even in the information-oriented modern society. The authors expect readers to acquire basic knowledge on optimization theory and algorithms related to economics and to utilize them for designing the mechanisms of more complicated markets.
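
    Point (1) can be made concrete with a small sketch (my illustration under assumed quadratic costs, not the paper's code): when total production cost is minimised subject to meeting demand, the Lagrange multiplier of the demand constraint acts as a price; steepest ascent on the dual raises the price while demand exceeds supply, until the market clears.

        # Dual (steepest-ascent) price coordination for a toy market.
        import numpy as np

        a = np.array([1.0, 2.0, 4.0])   # producers' cost curvatures, c_i(q) = a_i q^2 / 2
        d = 10.0                        # total demand to be met
        price, eta = 0.0, 0.3           # initial price and ascent step size

        for _ in range(200):
            q = price / a                  # each producer: marginal cost = price
            price += eta * (d - q.sum())   # price rises with excess demand

        print(price, d / (1.0 / a).sum())  # converges to the analytic clearing price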

  2. The Cooperative Principle: Is Grice’s Theory Suitable to Indonesian Language Culture?

    Directory of Open Access Journals (Sweden)

    Agnes Herawati

    2013-05-01

    This article discusses how native speakers of Indonesian observe Grice’s maxims. One hundred conversations from live talk shows on various Indonesian television channels were analysed. The results show that Grice’s maxims are fulfilled in many conversations. Nevertheless, in other situations, two kinds of non-fulfilment of the maxims are observed. First, the speaker deliberately exploits a maxim, which fits Grice’s theory. Second, the speaker fails to observe a maxim without exploiting it, which leads to some interpretations of the cultural patterns of the Indonesian language: communicative politeness, high-context culture and the need for harmony in communication, which are considered manifestations of Indonesian culture.

  3. [The system theory of aging: methodological principles, basic tenets and applications].

    Science.gov (United States)

    Krut'ko, V N; Dontsov, V I; Zakhar'iashcheva, O V

    2009-01-01

    The paper deals with the system theory of aging constructed on the basis of present-day scientific methodology, the system approach. The fundamental cause of aging is the discrete existence of individual life forms, i.e. living organisms, which, from the thermodynamic point of view, are not completely open systems. The primary aging process (build-up of chaos and system disintegration of the aging organism) obeys the second law of thermodynamics, the law of entropy increase in individual, partly open systems. In living organisms the law is exhibited as the synergy of four main aging mechanisms: system "pollution" of the organism; loss of non-regenerative elements; accumulation of damages and deformations and generation of variability at all levels; and negative changes in regulation processes with consequent degradation of the organism's systemic character. These are the general aging mechanisms; however, the regulatory mechanisms may be equally important for organism aging and for the search for ways to prolong active life.

  4. Effect of Interface Structure on Thermal Boundary Conductance by using First-principles Density Functional Perturbation Theory

    Institute of Scientific and Technical Information of China (English)

    GAO Xue; ZHANG Yue; SHANG Jia-Xiang

    2011-01-01

    We choose a Si/Ge interface as a research object to investigate the influence of interface disorder on thermal boundary conductance. In the calculations, the diffuse mismatch model is used to study thermal boundary conductance between two non-metallic materials, while the phonon dispersion relationship is calculated by first-principles density functional perturbation theory. The results show that interface disorder limits thermal transport. The increase of atomic spacing at the interface results in weakly coupled interfaces and a decrease in the thermal boundary conductance. This approach shows a simplistic method to investigate the relationship between microstructure and thermal conductivity.
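
    For context, the diffuse mismatch model evaluates a phonon transmission probability and the resulting boundary conductance from the phonon properties of the two sides; in its standard form (generic notation, not quoted from the paper):

        \alpha_{1 \to 2}(\omega)
          = \frac{\sum_j v_{2,j} \, D_{2,j}(\omega)}
                 {\sum_j v_{1,j} \, D_{1,j}(\omega) + \sum_j v_{2,j} \, D_{2,j}(\omega)} ,

        G = \frac{1}{4} \sum_j \int v_{1,j} \, D_{1,j}(\omega) \, \hbar\omega \,
            \frac{\partial n(\omega, T)}{\partial T} \, \alpha_{1 \to 2}(\omega) \, d\omega ,

    where v_j and D_j(\omega) are the group velocities and densities of states of phonon branch j (here obtained from density functional perturbation theory) and n is the Bose-Einstein occupation.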

  5. Local causal structures, Hadamard states and the principle of local covariance in quantum field theory

    Energy Technology Data Exchange (ETDEWEB)

    Dappiaggi, Claudio [Erwin Schroedinger Institut fuer Mathematische Physik, Wien (Austria); Pinamonti, Nicola [Hamburg Univ. (Germany). 2. Inst. fuer Theoretische Physik; Porrmann, Martin [KwaZulu-Natal Univ. (South Africa). Quantum Research Group, School of Physics; National Institute for Theoretical Physics, Durban (South Africa)]

    2010-01-15

    In the framework of the algebraic formulation, we discuss and analyse some new features of the local structure of a real scalar quantum field theory in a strongly causal spacetime. In particular we use the properties of the exponential map to set up a local version of a bulk-to-boundary correspondence. The bulk is a suitable subset of a geodesic neighbourhood of any but fixed point p of the underlying background, while the boundary is a part of the future light cone having p as its own tip. In this regime, we provide a novel notion for the extended *-algebra of Wick polynomials on the said cone and, on the one hand, we prove that it contains the information of the bulk counterpart via an injective *-homomorphism while, on the other hand, we associate to it a distinguished state whose pull-back in the bulk is of Hadamard form. The main advantage of this point of view arises if one uses the universal properties of the exponential map and of the light cone in order to show that, for any two given backgrounds M and M' and for any two subsets of geodesic neighbourhoods of two arbitrary points, it is possible to engineer the above procedure such that the boundary extended algebras are related via a restriction homomorphism. This allows for the pull-back of boundary states in both spacetimes and, thus, to set up a machinery which permits the comparison of expectation values of local field observables in M and M'. (orig.)

  6. Local causal structures, Hadamard states and the principle of local covariance in quantum field theory

    International Nuclear Information System (INIS)

    Dappiaggi, Claudio; Pinamonti, Nicola

    2010-01-01

    In the framework of the algebraic formulation, we discuss and analyse some new features of the local structure of a real scalar quantum field theory in a strongly causal spacetime. In particular we use the properties of the exponential map to set up a local version of a bulk-to-boundary correspondence. The bulk is a suitable subset of a geodesic neighbourhood of any but fixed point p of the underlying background, while the boundary is a part of the future light cone having p as its own tip. In this regime, we provide a novel notion for the extended *-algebra of Wick polynomials on the said cone and, on the one hand, we prove that it contains the information of the bulk counterpart via an injective *-homomorphism while, on the other hand, we associate to it a distinguished state whose pull-back in the bulk is of Hadamard form. The main advantage of this point of view arises if one uses the universal properties of the exponential map and of the light cone in order to show that, for any two given backgrounds M and M' and for any two subsets of geodesic neighbourhoods of two arbitrary points, it is possible to engineer the above procedure such that the boundary extended algebras are related via a restriction homomorphism. This allows for the pull-back of boundary states in both spacetimes and, thus, to set up a machinery which permits the comparison of expectation values of local field observables in M and M'. (orig.)

  7. Maximum neutron flux in thermal reactors; Maksimum neutronskog fluksa kod termalnih reaktora

    Energy Technology Data Exchange (ETDEWEB)

    Strugar, P V [Institute of Nuclear Sciences Boris Kidric, Vinca, Beograd (Yugoslavia)

    1968-07-01

    The direct approach to the problem is to calculate the spatial distribution of fuel concentration in the reactor core directly, using the condition of maximum neutron flux while complying with the thermal limitations. This paper proves that the problem can be solved by applying the variational calculus, i.e. by using the maximum principle of Pontryagin. The mathematical model of the reactor core is based on two-group neutron diffusion theory with some simplifications which make it suitable for application of the maximum principle. The optimum distribution of fuel concentration in the reactor core is obtained in explicit analytical form. The reactor critical dimensions are roots of a system of nonlinear equations, and verification of the optimum conditions can be done only for specific examples.
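
    For reference, the maximum principle invoked here can be stated schematically as follows (generic notation, mine rather than the paper's; the spatial coordinate z plays the role of time, and the control u is the fuel concentration profile constrained to an admissible set U by the thermal limitations):

        H(x, u, \psi) = \psi^{\mathsf{T}} f(x, u), \qquad
        \frac{dx}{dz} = \frac{\partial H}{\partial \psi}, \qquad
        \frac{d\psi}{dz} = -\frac{\partial H}{\partial x},

        u^{*}(z) = \arg\max_{u \in U} H\big(x(z), u, \psi(z)\big),

    where x collects the two-group fluxes and currents and \psi the adjoint (costate) variables; the optimal concentration profile maximizes the Hamiltonian pointwise along the core.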

  8. Hand rim wheelchair propulsion training using biomechanical real-time visual feedback based on motor learning theory principles.

    Science.gov (United States)

    Rice, Ian; Gagnon, Dany; Gallagher, Jere; Boninger, Michael

    2010-01-01

    As considerable progress has been made in laboratory-based assessment of manual wheelchair propulsion biomechanics, it becomes imperative to translate this knowledge into new clinical tools and treatment programs. The objective of this study was to describe the development of a manual wheelchair propulsion training program aimed at promoting the development of an efficient propulsion technique among long-term manual wheelchair users. Motor learning theory principles were applied to the design of biomechanical feedback-based learning software, which allows for random discontinuous real-time visual presentation of key spatiotemporal and kinetic parameters. This software was used to train a long-term wheelchair user on a dynamometer during 3 low-intensity wheelchair propulsion training sessions over a 3-week period. Biomechanical measures were recorded with a SmartWheel during overground propulsion on a 50-m level tile surface at baseline and 3 months after baseline. The training software was refined and administered to a participant who was able to improve his propulsion technique by increasing contact angle while simultaneously reducing stroke cadence, mean resultant force, peak and mean moment out of plane, and peak rate of rise of force applied to the pushrim after training. The proposed propulsion training protocol may lead to favorable changes in manual wheelchair propulsion technique. These changes could limit or prevent upper limb injuries among manual wheelchair users. In addition, many of the motor learning theory-based techniques examined in this study could be applied to training individuals in various stages of rehabilitation to optimize propulsion early on.

  9. First-principle study of quantum confinement effect on small sized silicon quantum dots using density-functional theory

    International Nuclear Information System (INIS)

    Anas, M. M.; Othman, A. P.; Gopir, G.

    2014-01-01

    Density functional theory (DFT), as a first-principles approach, has successfully been implemented to study nanoscale materials. Here, DFT with a numerical basis set was used to study the quantum confinement effect as well as the electronic properties of silicon quantum dots (Si-QDs) in the ground state. Candidate quantum dot models were studied intensively before choosing the right structure for simulation. Next, the computational results were used to examine and deduce the electronic properties and the density of states (DOS) for 14 spherical Si-QDs ranging in size up to ∼ 2 nm in diameter. The energy gap was also deduced from the HOMO-LUMO results. The atomistic model of each silicon QD was constructed by repeating its face-centered cubic (FCC) crystal unit cell and reconstructing it until the spherical shape was obtained. The core structure shows tetrahedral (Td) symmetry. It was found that the model needed to be passivated, and with passivation the confinement effect was more pronounced. The model was optimized using the quasi-Newton method for each size of Si-QD to obtain a relaxed structure before simulation. In this model the exchange-correlation potential (Vxc) of the electrons was treated with the Local Density Approximation (LDA) using the Perdew-Zunger (PZ) functional

  10. Anisotropic thermal expansion of SnSe from first-principles calculations based on Grüneisen's theory.

    Science.gov (United States)

    Liu, Gang; Zhou, Jian; Wang, Hui

    2017-06-14

    Based on Grüneisen's theory, the elastic properties and thermal expansion of bulk SnSe in the Pnma phase are investigated using first-principles calculations. Our numerical results indicate that the linear thermal expansion coefficient along the a direction is smaller than that along the b direction, while the coefficient along the c direction takes a significantly negative value, even at high temperature. The numerical results are in good accordance with experimental results. In addition, generalized and macroscopic Grüneisen parameters are also presented. It is also found that SnSe possesses a negative Poisson's ratio. The contributions of different phonon modes to the negative thermal expansion (NTE) along the c direction are investigated, and it is found that the two modes making the most important contributions to the NTE are transverse vibrations perpendicular to the c direction. Finally, we analyze the relation of the elastic constants to negative thermal expansion, and demonstrate that negative thermal expansion can occur even with all-positive macroscopic Grüneisen parameters.
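
    For orientation, the Grüneisen relation behind such calculations can be written in its standard isotropic form (generic notation, not quoted from the paper):

        \gamma_i = -\frac{\partial \ln \omega_i}{\partial \ln V}, \qquad
        \alpha_V(T) = \frac{1}{B \, V} \sum_i \gamma_i \, c_i(T),

    where \omega_i are the phonon mode frequencies, c_i(T) the mode heat capacities, B the bulk modulus and V the cell volume. In the anisotropic treatment needed for SnSe, B is replaced by the elastic-constant tensor, and coupling through off-diagonal elastic constants can produce a negative axial expansion even when the macroscopic Grüneisen parameters are all positive, as the abstract notes.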

  11. Human disease mortality kinetics are explored through a chain model embodying principles of extreme value theory and competing risks.

    Science.gov (United States)

    Juckett, D A; Rosenberg, B

    1992-04-21

    The distributions for human disease-specific mortality exhibit two striking characteristics: survivorship curves that intersect near the longevity limit; and the clustering of best-fitting Weibull shape parameter values into groups centered on integers. Correspondingly, we have hypothesized that the distribution intersections result from either competitive processes or population partitioning, and that the integral clustering in the shape parameter results from the occurrence of a small number of rare, rate-limiting events in disease progression. In this report we initiate a theoretical examination of these questions by exploring serial chain model dynamics and parametric competing risks theory. The links in our chain models are composed of more than one bond, where the number of bonds in a link is denoted the link size and is the number of events necessary to break the link and, hence, the chain. We explored chains with all links of the same size or with segments of the chain composed of different size links (competition). Simulations showed that chain breakage dynamics depended on the weakest-link principle and followed extreme-value kinetics very similar to human mortality kinetics. In particular, failure distributions for simple chains were Weibull-type extreme-value distributions with shape parameter values identifiable with the integral link size in the limit of infinite chain length. Furthermore, for chains composed of several segments of differing link size, the survival distributions for the various segments converged at a point in the S(t) tails indistinguishable from human data. This was also predicted by parametric competing risks theory using underlying Weibull distributions. In both the competitive chain simulations and the parametric competing risks theory, however, the shape values for the intersecting distributions deviated from the integer values typical of human data. We conclude that rare events can be the source of
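
    The chain dynamics described above are easy to reproduce numerically. The following Monte Carlo sketch (my construction following the abstract, with arbitrary sizes) breaks each link after k exponential bond-failure events and breaks the chain at its weakest link; the fitted Weibull shape parameter then clusters near the integer k.

        # Weakest-link chain model: Weibull shape ~ integral link size k.
        import numpy as np

        rng = np.random.default_rng(2)
        n_chains, n_links, k = 2000, 1000, 3          # illustrative sizes

        bond_times = rng.exponential(1.0, size=(n_chains, n_links, k))
        link_times = bond_times.max(axis=2)           # link fails with its last bond
        chain_times = link_times.min(axis=1)          # weakest-link principle

        # Weibull shape from the log-log slope of the empirical cumulative hazard:
        t = np.sort(chain_times)
        F = (np.arange(1, t.size + 1) - 0.5) / t.size
        shape = np.polyfit(np.log(t), np.log(-np.log(1.0 - F)), 1)[0]
        print(shape)                                  # close to the integer k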

  12. Principles of nucleation theory

    International Nuclear Information System (INIS)

    Clement, C.F.; Wood, M.H.

    1980-01-01

    The nucleation of small stable species is described in the problem of void growth by discrete rate equations. When gas is being produced, the problem reduces to one of calculating the incubation dose for the gas-bubble-to-void transition. A general expression for the steady-state nucleation rate is derived for the case when voids are formed by vacancy fluctuations which enable an effective nucleation barrier to be crossed. (author)

  13. Variational principles for collective motion: Relation between invariance principle of the Schroedinger equation and the trace variational principle

    International Nuclear Information System (INIS)

    Klein, A.; Tanabe, K.

    1984-01-01

    The invariance principle of the Schroedinger equation provides a basis for theories of collective motion with the help of the time-dependent variational principle. It is formulated here with maximum generality, requiring only the motion of an intrinsic state in the collective space. Special cases arise when the trial vector is a generalized coherent state and when it is a uniform superposition of collective eigenstates. The latter example yields variational principles uncovered previously only within the framework of the equations-of-motion method. (orig.)

  14. Credal Networks under Maximum Entropy

    OpenAIRE

    Lukasiewicz, Thomas

    2013-01-01

    We apply the principle of maximum entropy to select a unique joint probability distribution from the set of all joint probability distributions specified by a credal network. In detail, we start by showing that the unique joint distribution of a Bayesian tree coincides with the maximum entropy model of its conditional distributions. This result, however, does not hold anymore for general Bayesian networks. We thus present a new kind of maximum entropy models, which are computed sequentially. ...

  15. The effect of implementing cognitive load theory-based design principles in virtual reality simulation training of surgical skills: a randomized controlled trial

    DEFF Research Database (Denmark)

    Andersen, Steven Arild Wuyts; Mikkelsen, Peter Trier; Konge, Lars

    2016-01-01

    Background: Cognitive overload can inhibit learning, and cognitive load theory-based instructional design principles can be used to optimize learning situations. This study aims to investigate the effect of implementing cognitive load theory-based design principles in virtual reality simulation training of mastoidectomy. Methods: Eighteen novice medical students received 1 h of self-directed virtual reality simulation training of the mastoidectomy procedure, randomized for standard instructions (control) or cognitive load theory-based instructions with a worked example followed by a problem completion exercise (intervention). Conclusions: Increased cognitive load when part tasks needed to be integrated in the post-training procedures could be a possible explanation for this. Other instructional designs and methods are needed to lower the cognitive load and improve performance in virtual reality surgical simulation training of novices.

  16. First-principles density functional theory (DFT) study of gold nanorod and its interaction with alkanethiol ligands.

    Science.gov (United States)

    Hu, Hang; Reven, Linda; Rey, Alejandro

    2013-10-17

    The structure and mechanical properties of gold nanorods and their interactions with alkanethiolate self-assembled monolayers have been determined using a novel first-principles density functional theory simulation approach. The multifaceted, one-dimensional, octagonal nanorod has alternating Au(100) and Au(110) surfaces. The structural optimization of the gold nanorods was performed with a mixed basis: the outermost layer of gold atoms used double-ζ plus polarization (DZP), the layer below used double-ζ (DZ), and the inner layers used single-ζ (SZ). The final structure compares favorably with simulations using DZP for all atoms. Phonon dispersion calculations and ab initio molecular dynamics (AIMD) were used to establish the dynamic and thermal stability of the system. From the AIMD simulations it was found that the nanorod system undergoes significant surface reconstruction at 300 K. In addition, when subjected to mechanical stress in the axial direction, the nanorod responds as an orthotropic material, with uniform expansion along the radial direction. The Young's moduli are 207 kbar in the axial direction and 631 kbar in the radial direction. The binding of alkanethiolates, ranging from methanethiol to pentanethiol, caused the formation of surface point defects on the Au(110) surfaces. On the Au(100) surfaces, the defects occurred in the inner layer, creating a small surface island. These defects create positive and negative concavities on the gold nanorod surface, which help the ligand achieve a more stable state. The simulation results narrowed significant knowledge gaps concerning the alkanethiolate adsorption process and the ligands' mutual interactions on gold nanorods. The mechanical characterization offers a new dimension to understanding the physical chemistry of these complex nanoparticles.

  17. Maximum Entropy Estimation of Transition Probabilities of Reversible Markov Chains

    Directory of Open Access Journals (Sweden)

    Erik Van der Straeten

    2009-11-01

    Full Text Available In this paper, we develop a general theory for the estimation of the transition probabilities of reversible Markov chains using the maximum entropy principle. A broad range of physical models can be studied within this approach. We use one-dimensional classical spin systems to illustrate the theoretical ideas. The examples studied in this paper are: the Ising model, the Potts model and the Blume-Emery-Griffiths model.
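
As a concrete illustration of this estimation idea (a minimal sketch of my own, not the authors' code), take the smallest case: a two-state chain, which is automatically reversible and can be read as a single spin, and maximize its entropy rate subject to one measured constraint. The target stationary mean of 0.3 is an assumed placeholder.

```python
# Maximum-entropy estimation of a two-state Markov chain
# P = [[1-a, a], [b, 1-b]] (states -1 and +1), subject to a fixed
# stationary mean; the constraint value 0.3 is an assumption.
import numpy as np
from scipy.optimize import minimize

def entropy_rate(x):
    a, b = x
    pi = np.array([b, a]) / (a + b)                    # stationary distribution
    h = lambda p: -p*np.log(p) - (1-p)*np.log(1-p)     # binary entropy
    return pi[0]*h(a) + pi[1]*h(b)

cons = {"type": "eq",                                  # mean spin = (a-b)/(a+b)
        "fun": lambda x: (x[0] - x[1])/(x[0] + x[1]) - 0.3}
res = minimize(lambda x: -entropy_rate(x), x0=[0.5, 0.5],
               bounds=[(1e-6, 1 - 1e-6)]*2, constraints=cons)
a, b = res.x
print("max-entropy transition matrix:\n", np.array([[1-a, a], [b, 1-b]]))
```

The same scheme extends to larger spin chains by adding one constraint per measured observable.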

  18. Variational principles

    CERN Document Server

    Moiseiwitsch, B L

    2004-01-01

    This graduate-level text's primary objective is to demonstrate the expression of the equations of the various branches of mathematical physics in the succinct and elegant form of variational principles (and thereby illuminate their interrelationship). Its related intentions are to show how variational principles may be employed to determine the discrete eigenvalues for stationary state problems and to illustrate how to find the values of quantities (such as the phase shifts) that arise in the theory of scattering. Chapter-by-chapter treatment consists of analytical dynamics; optics, wave mecha

  19. Analyzing implementation dynamics using theory-driven evaluation principles: lessons learnt from a South African centralized chronic dispensing model.

    Science.gov (United States)

    Magadzire, Bvudzai Priscilla; Marchal, Bruno; Mathys, Tania; Laing, Richard O; Ward, Kim

    2017-12-04

    Centralized dispensing of essential medicines is one of South Africa's strategies to address the shortage of pharmacists, reduce patients' waiting times and reduce over-crowding at public sector healthcare facilities. This article reports findings of an evaluation of the Chronic Dispensing Unit (CDU) in one province. The objectives of this process evaluation were to: (1) compare what was planned versus the actual implementation and (2) establish the causal elements and contextual factors influencing implementation. This qualitative study employed key informant interviews with the intervention's implementers (clinicians, managers and the service provider) [N = 40], and a review of policy and program documents. Data were thematically analyzed by identifying the main influences shaping the implementation process. Theory-driven evaluation principles were applied as a theoretical framework to explain implementation dynamics. The overall participants' response about the CDU was positive and the majority of informants concurred that the establishment of the CDU to dispense large volumes of medicines is a beneficial strategy to address healthcare barriers because mechanical functions are automated and distribution of medicines much quicker. However, implementation was influenced by the context and discrepancies between planned activities and actual implementation were noted. Procurement inefficiencies at central level caused medicine stock-outs and affected CDU activities. At the frontline, actors were aware of the CDU's implementation guidelines regarding patient selection, prescription validity and management of non-collected medicines but these were adapted to accommodate practical realities and to meet performance targets attached to the intervention. Implementation success was a result of a combination of 'hardware' (e.g. training, policies, implementation support and appropriate infrastructure) and 'software' (e.g. ownership, cooperation between healthcare

  20. The Entropy Principle from Continuum Mechanics to Hyperbolic Systems of Balance Laws: The Modern Theory of Extended Thermodynamics

    Directory of Open Access Journals (Sweden)

    Tommaso Ruggeri

    2008-09-01

    Full Text Available We discuss the different roles of the entropy principle in modern thermodynamics. We start with the approach of rational thermodynamics, in which the entropy principle becomes a selection rule for physical constitutive equations. Then we discuss the entropy principle for selecting admissible discontinuous weak solutions and for symmetrizing general systems of hyperbolic balance laws. Particular attention is given to the local and global well-posedness of the corresponding Cauchy problem for smooth solutions. Examples are given in the case of extended thermodynamics for rarefied gases and in the case of a multi-temperature mixture of fluids.

  1. Generic maximum likely scale selection

    DEFF Research Database (Denmark)

    Pedersen, Kim Steenstrup; Loog, Marco; Markussen, Bo

    2007-01-01

    The fundamental problem of local scale selection is addressed by means of a novel principle, which is based on maximum likelihood estimation. The principle is generally applicable to a broad variety of image models and descriptors, and provides a generic scale estimation methodology. The focus in this work is on applying this selection principle under a Brownian image model. This image model provides a simple scale-invariant prior for natural images, and we provide illustrative examples of the behavior of our scale estimation on such images. In these illustrative examples, estimation is based...
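
For orientation, the classical baseline that such likelihood-based selection generalizes is extremum detection over scale-normalized derivative responses; below is a minimal sketch of that baseline (my own illustration on an assumed synthetic Gaussian blob, not the authors' Brownian-model estimator).

```python
# Baseline scale selection: pick the scale maximizing the magnitude of the
# scale-normalized Laplacian response at a point of interest.
import numpy as np
from scipy.ndimage import gaussian_laplace

rng = np.random.default_rng(0)
x, y = np.mgrid[0:64, 0:64]
img = np.exp(-((x - 32)**2 + (y - 32)**2) / (2 * 6.0**2))   # blob, scale ~6
img += 0.01 * rng.normal(size=img.shape)                    # mild noise

scales = np.linspace(2.0, 12.0, 41)
resp = [abs(s**2 * gaussian_laplace(img, s))[32, 32] for s in scales]
print("selected scale:", scales[int(np.argmax(resp))])      # close to 6
```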

  2. Comparison of four microfinance markets from the point of view of the effectuation theory, complemented by proposed musketeer principle illustrating forces within village banks

    Directory of Open Access Journals (Sweden)

    Hes Tomáš

    2017-03-01

    Full Text Available Microfinance services are essential tools for the formalization of the shadow economy, leveraging immature entrepreneurship with external capital. Given the importance of the shadow economy for the social balance of developing countries, an answer to the question of how microfinance entities come into existence is rather essential. While the decision-making processes leading to entrepreneurship have been explained by effectuation theory, developed in the 1990s, these explanations were concerned neither with the logic of the creation of microenterprises in developing countries nor with microfinance village banks. While the abovementioned theories explain the emergence of companies in developed markets, the importance of a focus on emerging markets, given the large share of human society represented by microfinance clientele, is obvious. The study extends effectuation theory by adding a musketeer principle to the five effectuation principles proposed by Sarasvathy. Furthermore, the hitherto unconsidered relationship between social capital and effectuation-related concepts is another proposal of the paper, which describes the nature of microfinance clientele from the point of view of effectuation theory and social capital, drawing a comparison of microfinance markets in four countries: Turkey, Sierra Leone, Indonesia and Afghanistan.

  3. The principle of general tovariance

    NARCIS (Netherlands)

    Heunen, C.; Landsman, N.P.; Spitters, B.A.W.; Loja Fernandes, R.; Picken, R.

    2008-01-01

    We tentatively propose two guiding principles for the construction of theories of physics, which should be satisfied by a possible future theory of quantum gravity. These principles are inspired by those that led Einstein to his theory of general relativity, viz. his principle of general covariance

  4. Principled Missing Data Treatments.

    Science.gov (United States)

    Lang, Kyle M; Little, Todd D

    2018-04-01

    We review a number of issues regarding missing data treatments for intervention and prevention researchers. Many of the common missing data practices in prevention research are still, unfortunately, ill-advised (e.g., use of listwise and pairwise deletion, insufficient use of auxiliary variables). Our goal is to promote better practice in the handling of missing data. We review the current state of missing data methodology and recent missing data reporting in prevention research. We describe antiquated, ad hoc missing data treatments and discuss their limitations. We discuss two modern, principled missing data treatments: multiple imputation and full information maximum likelihood, and we offer practical tips on how to best employ these methods in prevention research. The principled missing data treatments that we discuss are couched in terms of how they improve causal and statistical inference in the prevention sciences. Our recommendations are firmly grounded in missing data theory and well-validated statistical principles for handling the missing data issues that are ubiquitous in biosocial and prevention research. We augment our broad survey of missing data analysis with references to more exhaustive resources.
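
A minimal sketch of the multiple-imputation workflow the authors advocate, using scikit-learn's IterativeImputer as a stand-in imputation engine; the synthetic data, the number of imputations (m = 20), and the analysis model are illustrative assumptions, and only the pooled point estimate (not Rubin's full variance combination) is shown.

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=200)
X[rng.random(X.shape) < 0.15] = np.nan      # ~15% of values missing at random

coefs = []
for seed in range(20):                      # m = 20 imputed datasets
    Xc = IterativeImputer(sample_posterior=True,
                          random_state=seed).fit_transform(X)
    coefs.append(LinearRegression().fit(Xc, y).coef_)
print("pooled coefficients:", np.mean(coefs, axis=0))
```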

  5. Experiences from Participants in Large-Scale Group Practice of the Maharishi Transcendental Meditation and TM-Sidhi Programs and Parallel Principles of Quantum Theory, Astrophysics, Quantum Cosmology, and String Theory: Interdisciplinary Qualitative Correspondences

    Science.gov (United States)

    Svenson, Eric Johan

    Participants on the Invincible America Assembly in Fairfield, Iowa, and neighboring Maharishi Vedic City, Iowa, practicing Maharishi Transcendental Meditation (TM) and the TM-Sidhi programs in large groups, submitted written experiences that they had had during, and in some cases shortly after, their daily practice of the TM and TM-Sidhi programs. Participants were instructed to include in their written experiences only what they observed and to leave out interpretation and analysis. These experiences were then read by the author and compared with principles and phenomena of modern physics, particularly with quantum theory, astrophysics, quantum cosmology, and string theory, as well as with defining characteristics of higher states of consciousness as described by Maharishi Vedic Science. In all cases, particular principles or phenomena of physics and qualities of higher states of consciousness appeared qualitatively quite similar to the content of the given experience. These experiences are presented in an Appendix, in which the corresponding principles and phenomena of physics are also presented. These physics "commentaries" on the experiences were written largely in layman's terms, without equations, and, in nearly every case, with clear reference to the corresponding sections of the experiences to which a given principle appears to relate. An abundance of similarities was apparent between the subjective experiences during meditation and principles of modern physics. A theoretic framework for understanding these rich similarities may begin with Maharishi's theory of higher states of consciousness provided herein. We conclude that the consistency and richness of detail found in these abundant similarities warrant the further pursuit and development of such a framework.

  6. The Hayes principles: learning from the national pilot of information technology and core generalisable theory in informatics.

    Science.gov (United States)

    de Lusignan, Simon; Krause, Paul

    2010-01-01

    There has been much criticism of the NHS national programme for information technology (IT); it has been an expensive programme and some elements appear to have achieved little. The Hayes report was written as an independent review of health and social care IT in England. Our objective was to identify key principles for health IT implementation which may have relevance beyond the critique of NHS IT. We elicit ten principles from the Hayes report, which if followed may result in more effective IT implementation in health care. They divide into patient-centred, subsidiarity and strategic principles. The patient-centred principles are: 1) the patient must be at the centre of all information systems; 2) the provision of patient-level operational data should form the foundation - avoid the dataset mentality; 3) store health data as close to the patient as possible; 4) enable the patient to take a more active role with their health data within a trusted doctor-patient relationship. The subsidiarity principles set out to balance the local and health-system-wide needs: 5) standardise centrally - patients must be able to benefit from interoperability; 6) provide a standard procurement package and an approved process that ensures safety standards and provision of interoperable systems; 7) authorise a range of local suppliers so that health providers can select the system best meeting local needs; 8) allow local migration from legacy systems, as and when improved functionality for patients is available. And finally the strategic principles: 9) evaluate health IT systems in terms of measurable benefits to patients; 10) strategic planning of systems should reflect strategic goals for the health of patients/the population. Had the Hayes principles been embedded within our approach to health IT, and in particular to medical record implementation, we might have avoided many of the costly mistakes with the UK national programme. However, these principles need application within the modern IT

  7. The Hayes principles: learning from the national pilot of information technology and core generalisable theory in informatics

    Directory of Open Access Journals (Sweden)

    Simon de Lusignan

    2010-06-01

    Conclusions Had the Hayes principles been embedded within our approach to health IT, and in particular to medical record implementation, we might have avoided many of the costly mistakes with the UK national programme. However, these principles need application within the modern IT environment. Closeness to the patient must not be interpreted as physical but instead as a virtual patient-centred space; data will be secure within the cloud and we should dump the vault and infrastructure mentality. Health IT should be developed as an adaptive ecosystem.

  8. Johnson-Laird's mental models theory and its principles: an application with cell mental models of high school students

    OpenAIRE

    Mª Luz Rodríguez Palmero; Javier Marrero Acosta; Marco Antonio Moreira

    2001-01-01

    Following a discussion of Johnson-Laird's mental models theory, we report a study regarding high school students' mental representations of the cell, understood as mental models. Research findings suggest the appropriateness of such a theory as a framework to interpret students' representations.

  9. Unconscionability, unfair exploitation and the nature of contract theory: comments on Melvin Eisenberg's ‘Foundational Principles of Contract Law'

    NARCIS (Netherlands)

    Hesselink, M.W.

    2013-01-01

    This short paper contains comments prepared for the 'Foundational Principles of Contract Law Roundtable’ held at Berkeley in January 2013. It discusses the relationships between contract law and democracy, between contract prices and human dignity, and between the American doctrine of

  10. The effect of implementing cognitive load theory-based design principles in virtual reality simulation training of surgical skills: a randomized controlled trial.

    Science.gov (United States)

    Andersen, Steven Arild Wuyts; Mikkelsen, Peter Trier; Konge, Lars; Cayé-Thomasen, Per; Sørensen, Mads Sølvsten

    2016-01-01

    Cognitive overload can inhibit learning, and cognitive load theory-based instructional design principles can be used to optimize learning situations. This study aims to investigate the effect of implementing cognitive load theory-based design principles in virtual reality simulation training of mastoidectomy. Eighteen novice medical students received 1 h of self-directed virtual reality simulation training of the mastoidectomy procedure, randomized for standard instructions (control) or cognitive load theory-based instructions with a worked example followed by a problem completion exercise (intervention). Participants then completed two post-training virtual procedures for assessment and comparison. Cognitive load during the post-training procedures was estimated by reaction time testing on an integrated secondary task. Final-product analysis by two blinded expert raters was used to assess the virtual mastoidectomy performances. Participants in the intervention group had a significantly increased cognitive load during the post-training procedures compared with the control group (52 vs. 41 %, p = 0.02). This was also reflected in the final-product performance: the intervention group had a significantly lower final-product score than the control group (13.0 vs. 15.4). Increased cognitive load when part tasks needed to be integrated in the post-training procedures could be a possible explanation for this. Other instructional designs and methods are needed to lower the cognitive load and improve performance in virtual reality surgical simulation training of novices.

  11. Introduction to maximum entropy

    International Nuclear Information System (INIS)

    Sivia, D.S.

    1988-01-01

    The maximum entropy (MaxEnt) principle has been successfully used in image reconstruction in a wide variety of fields. We review the need for such methods in data analysis and show, by use of a very simple example, why MaxEnt is to be preferred over other regularizing functions. This leads to a more general interpretation of the MaxEnt method, and its use is illustrated with several different examples. Practical difficulties with non-linear problems still remain, this being highlighted by the notorious phase problem in crystallography. We conclude with an example from neutron scattering, using data from a filter difference spectrometer to contrast MaxEnt with a conventional deconvolution. 12 refs., 8 figs., 1 tab

  12. Introduction to maximum entropy

    International Nuclear Information System (INIS)

    Sivia, D.S.

    1989-01-01

    The maximum entropy (MaxEnt) principle has been successfully used in image reconstruction in a wide variety of fields. The author reviews the need for such methods in data analysis and shows, by use of a very simple example, why MaxEnt is to be preferred over other regularizing functions. This leads to a more general interpretation of the MaxEnt method, and its use is illustrated with several different examples. Practical difficulties with non-linear problems still remain, this being highlighted by the notorious phase problem in crystallography. He concludes with an example from neutron scattering, using data from a filter difference spectrometer to contrast MaxEnt with a conventional deconvolution. 12 refs., 8 figs., 1 tab
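
To make the MaxEnt idea concrete, here is a 1-D toy deconvolution (my own sketch, not Sivia's examples; the kernel, noise level, multiplier and damping are assumed tuning choices). With the entropy S = Σ_j (f_j − m_j − f_j ln(f_j/m_j)) relative to a default model m, the stationarity condition of S − λχ²/2 is f = m·exp(−λ∇χ²/2), which can be iterated with damping.

```python
import numpy as np

n = 64
x = np.arange(n)
truth = np.exp(-0.5*((x - 20)/2.0)**2) + 0.6*np.exp(-0.5*((x - 40)/3.0)**2)
kernel = np.exp(-0.5*(np.arange(-8, 9)/3.0)**2)
kernel /= kernel.sum()
blur = lambda f: np.convolve(f, kernel, mode="same")   # symmetric kernel

rng = np.random.default_rng(1)
sigma = 0.05
data = blur(truth) + rng.normal(scale=sigma, size=n)

m = np.full(n, truth.mean())      # flat default model
f = m.copy()
lam = 0.02                        # assumed multiplier; in practice tuned so
for _ in range(1000):             # that chi^2 ends up near the data count
    g = blur(blur(f) - data) / sigma**2      # gradient of chi^2 / 2
    f = 0.7*f + 0.3*m*np.exp(-lam*g)         # damped fixed-point update
print("chi^2 per point:", np.mean((blur(f) - data)**2) / sigma**2)
```

The exponential update keeps the image positive automatically, which is one of the practical attractions of the entropy regularizer over quadratic ones.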

  13. Applying principles from the game theory to acute stroke care: Learning from the prisoner's dilemma, stag-hunt, and other strategies.

    Science.gov (United States)

    Saposnik, Gustavo; Johnston, S Claiborne

    2016-04-01

    Acute stroke care represents a challenge for decision makers. Decisions based on erroneous assessments may generate false expectations of patients and their family members, and potentially inappropriate medical advice. Game theory is the analysis of interactions between individuals to study how conflict and cooperation affect our decisions. We reviewed principles of game theory that could be applied to medical decisions under uncertainty. Medical decisions in acute stroke care are usually made under constraints: a short period of time, imperfect clinical information, and limited understanding of patients' and families' values and beliefs. Game theory brings some strategies to help us manage complex medical situations under uncertainty. For example, it offers a different perspective by encouraging the consideration of different alternatives through the understanding of patients' preferences and the careful evaluation of cognitive distortions when applying 'real-world' data. The stag-hunt game teaches us the importance of trust in strengthening cooperation for a successful patient-physician interaction that goes beyond a good or poor clinical outcome. The application of game theory to stroke care may improve our understanding of complex medical situations and help clinicians make practical decisions under uncertainty. © 2016 World Stroke Organization.
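
As a toy version of the stag-hunt game invoked above (the payoff numbers are my own illustrative choice, not taken from the paper), a brute-force search recovers its two pure-strategy Nash equilibria: mutual cooperation and mutual risk-avoidance.

```python
import itertools
import numpy as np

# Strategies: 0 = hunt stag (cooperate), 1 = hunt hare (play safe).
A = np.array([[4, 0],
              [3, 3]])     # row player's payoffs; column player's are A.T

def pure_nash(A, B):
    """Return all pure-strategy profiles where neither player can gain
    by deviating unilaterally."""
    return [(r, c) for r, c in itertools.product(range(2), repeat=2)
            if A[r, c] >= A[1 - r, c] and B[r, c] >= B[r, 1 - c]]

print(pure_nash(A, A.T))    # [(0, 0), (1, 1)]: both stag, or both hare
```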

  14. Maximum Entropy in Drug Discovery

    Directory of Open Access Journals (Sweden)

    Chih-Yuan Tseng

    2014-07-01

    Full Text Available Drug discovery applies multidisciplinary approaches, either experimentally, computationally or both, to identify lead compounds to treat various diseases. While conventional approaches have yielded many US Food and Drug Administration (FDA)-approved drugs, researchers continue investigating and designing better approaches to increase the success rate in the discovery process. In this article, we provide an overview of the current strategies and point out where and how the method of maximum entropy has been introduced in this area. The maximum entropy principle has its roots in thermodynamics, yet since Jaynes' pioneering work in the 1950s, it has been used not only as a physical law, but also as a reasoning tool that allows us to process the information in hand with the least bias. Its applicability in various disciplines has been abundantly demonstrated. We give several examples of applications of maximum entropy in different stages of drug discovery. Finally, we discuss a promising new direction in drug discovery that is likely to hinge on the ways of utilizing maximum entropy.

  15. The relation between the general maxim of causality and the principle of uniformity in hume's theory of knowledge

    Directory of Open Access Journals (Sweden)

    José Oscar de Almeida Marques

    2012-06-01

    Full Text Available When Hume, in the Treatise on Human Nature, began his examination of the relation of cause and effect, in particular of the idea of necessary connection which is its essential constituent, he identified two preliminary questions that should guide his research: (1) For what reason do we pronounce it necessary that every thing whose existence has a beginning should also have a cause? and (2) Why do we conclude that such particular causes must necessarily have such particular effects? (1.3.2, 14-15). Hume observes that our belief in these principles can result neither from an intuitive grasp of their truth nor from a reasoning that could establish them by demonstrative means. In particular, with respect to the first, Hume examines and rejects some arguments with which Locke, Hobbes and Clarke tried to demonstrate it, and suggests, by exclusion, that the belief that we place in it can only come from experience. Somewhat surprisingly, however, Hume does not proceed to show how that derivation from experience could be made, but proposes instead to move directly to an examination of the second principle, saying that it will "perhaps, be found in the end, that the same answer will serve for both questions" (1.3.3, 9). Hume's answer to the second question is well known, but the first question is never answered in the rest of the Treatise, and it is even doubtful that it could be, which would explain why Hume simply chose to remove any mention of it when he recompiled his theses on causation in the Enquiry concerning Human Understanding. Given this situation, an interesting question that naturally arises is to investigate the relations of logical or conceptual implication between these two principles. Hume seems to have thought that an answer to (2) would also be sufficient to provide an answer to (1). Henry Allison, in his turn, argued (in Custom and Reason in Hume, p. 94-97) that the two questions are logically independent. My proposal here is to try to show

  16. Size dependence of efficiency at maximum power of heat engine

    KAUST Repository

    Izumida, Y.; Ito, N.

    2013-01-01

    We perform a molecular dynamics computer simulation of a heat engine model to study how the engine size affects its performance. Upon tactically increasing the size of the model anisotropically, we determine that there exists an optimum size at which the model attains the maximum power for the shortest working period. This optimum size lies between the ballistic heat transport region and the diffusive one. We also study the size dependence of the efficiency at the maximum power. Interestingly, we find that the efficiency at the maximum power around the optimum size attains a value that has been proposed as a universal upper bound, and it even begins to exceed the bound as the size further increases. We explain this behavior of the efficiency at maximum power by using a linear response theory for the heat engine operating under a finite working period, which naturally extends the low-dissipation Carnot cycle model [M. Esposito, R. Kawai, K. Lindenberg, C. Van den Broeck, Phys. Rev. Lett. 105, 150603 (2010)]. The theory also shows that the efficiency at the maximum power under an extreme condition may in principle reach the Carnot efficiency. © EDP Sciences, Società Italiana di Fisica, Springer-Verlag 2013.
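
For context, the standard reference points from finite-time thermodynamics (textbook results, not derived in the record itself) are the Curzon-Ahlborn efficiency and the low-dissipation bounds of Esposito et al., the upper one being the "universal upper bound" referred to above:

```latex
% Curzon--Ahlborn efficiency and low-dissipation bounds on the
% efficiency at maximum power \eta^{*}, with \eta_C the Carnot efficiency.
\[
  \eta_{\mathrm{CA}} = 1 - \sqrt{\tfrac{T_c}{T_h}}, \qquad
  \frac{\eta_C}{2} \;\le\; \eta^{*} \;\le\; \frac{\eta_C}{2 - \eta_C},
  \qquad \eta_C = 1 - \frac{T_c}{T_h}.
\]
```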

  17. Size dependence of efficiency at maximum power of heat engine

    KAUST Repository

    Izumida, Y.

    2013-10-01

    We perform a molecular dynamics computer simulation of a heat engine model to study how the engine size affects its performance. Upon tactically increasing the size of the model anisotropically, we determine that there exists an optimum size at which the model attains the maximum power for the shortest working period. This optimum size lies between the ballistic heat transport region and the diffusive one. We also study the size dependence of the efficiency at the maximum power. Interestingly, we find that the efficiency at the maximum power around the optimum size attains a value that has been proposed as a universal upper bound, and it even begins to exceed the bound as the size further increases. We explain this behavior of the efficiency at maximum power by using a linear response theory for the heat engine operating under a finite working period, which naturally extends the low-dissipation Carnot cycle model [M. Esposito, R. Kawai, K. Lindenberg, C. Van den Broeck, Phys. Rev. Lett. 105, 150603 (2010)]. The theory also shows that the efficiency at the maximum power under an extreme condition may in principle reach the Carnot efficiency. © EDP Sciences, Società Italiana di Fisica, Springer-Verlag 2013.

  18. Pontryagin's maximum principle and optimization of the flight phase in ski jumping [Pontrjaginův princip maxima a optimalizace stylu letu ve skoku na lyžích

    Directory of Open Access Journals (Sweden)

    Radim Uhlář

    2009-09-01

    Full Text Available BACKGROUND: There are several factors (the initial ski jumper's body position and its changes at the transition to the flight phase, the magnitude and the direction of the velocity vector of the jumper's center of mass, the magnitude of the aerodynamic drag and lift forces, etc.) which determine the trajectory of the jumper-ski system along with the total distance of the jump. OBJECTIVE: The objective of this paper is to present a method based on Pontryagin's maximum principle, which allows us to obtain a solution of the optimization problem for flight style control with three constrained control variables: the angle of attack (a), the body-ski angle (b), and the ski opening angle (V). METHODS: The flight distance was used as the optimality criterion. A borrowed regression function was taken as the source of information about the dependence of the drag (D) and lift (L) areas on the control variables, with tabulated regression coefficients. The trajectories of the reference and optimized jumps were compared with the K = 125 m jumping hill profile in Frenštát pod Radhoštěm (Czech Republic), and the corresponding lengths of the jumps, aerodynamic drag and lift forces, and magnitudes of the ski jumper system's center-of-mass velocity vector and its vertical and horizontal components were evaluated. Admissible control variables were taken at each time from a bounded set, to respect a realistic posture of the ski jumper system in flight. RESULTS: It was found that a ski jumper should, within the bounded set of admissible control variables, minimize the angles (a) and (b), whereas the angle (V) should be maximized. The length increment due to optimization is 17%. CONCLUSIONS: For future work it is necessary to determine the dependence of the aerodynamic forces acting on the ski jumper system on the flight via regression analysis of experimental data, as well as the application of control variables related to the ski jumper's mental and motor abilities.
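
A drastically simplified sketch of the underlying flight-optimization problem (the point-mass dynamics, aerodynamic-area functions, takeoff state, and landing slope below are invented placeholders, not the paper's regression model or the Frenštát hill profile). A full Pontryagin treatment lets the controls vary along the trajectory; here the control is frozen to one constant angle of attack so that a bounded scalar search suffices.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize_scalar

g, mass, rho = 9.81, 70.0, 1.2               # gravity, jumper mass, air density
cdA = lambda a: 0.2 + 0.7*a                  # assumed drag area vs. alpha [rad]
clA = lambda a: 0.9*a                        # assumed lift area vs. alpha [rad]

def distance(alpha):
    def rhs(t, s):
        x, y, vx, vy = s
        v = np.hypot(vx, vy)
        D = 0.5*rho*cdA(alpha)*v**2          # drag, anti-parallel to velocity
        L = 0.5*rho*clA(alpha)*v**2          # lift, perpendicular to velocity
        return [vx, vy, (-D*vx - L*vy)/(mass*v), (-D*vy + L*vx)/(mass*v) - g]
    hill = lambda t, s: s[1] + 0.6*s[0]      # landing slope y = -0.6 x
    hill.terminal, hill.direction = True, -1
    sol = solve_ivp(rhs, [0, 20], [0.0, 1.0, 25.0, -1.0],
                    events=hill, max_step=0.05)
    return sol.y_events[0][0][0]             # x-coordinate at landing

res = minimize_scalar(lambda a: -distance(a), bounds=(0.0, 0.6), method="bounded")
print("best constant alpha [rad]:", res.x, " distance [m]:", distance(res.x))
```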

  19. Ergodicity, Maximum Entropy Production, and Steepest Entropy Ascent in the Proofs of Onsager's Reciprocal Relations

    Science.gov (United States)

    Benfenati, Francesco; Beretta, Gian Paolo

    2018-04-01

    We show that to prove the Onsager relations using the microscopic time reversibility one necessarily has to make an ergodic hypothesis, or a hypothesis closely linked to that. This is true in all the proofs of the Onsager relations in the literature: from the original proof by Onsager, to more advanced proofs in the context of linear response theory and the theory of Markov processes, to the proof in the context of the kinetic theory of gases. The only three proofs that do not require any kind of ergodic hypothesis are based on additional hypotheses on the macroscopic evolution: Ziegler's maximum entropy production principle (MEPP), the principle of time reversal invariance of the entropy production, or the steepest entropy ascent principle (SEAP).
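
For reference, the objects whose proofs are being examined (standard statements, not results of the paper): the linear force-flux relations, the non-negative entropy production, and the Onsager symmetry.

```latex
% Linear response: fluxes J_i, thermodynamic forces X_j, kinetic
% coefficients L_{ij}; Onsager reciprocity asserts the symmetry of L.
\[
  J_i = \sum_j L_{ij} X_j, \qquad
  \sigma = \sum_i J_i X_i \ge 0, \qquad
  L_{ij} = L_{ji}.
\]
```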

  20. Introduction to optimal control theory

    International Nuclear Information System (INIS)

    Agrachev, A.A.

    2002-01-01

    These are lecture notes for an introductory course in optimal control theory, treated from the geometric point of view. The optimal control problem is reduced to the study of controls (and corresponding trajectories) leading to the boundary of attainable sets. We discuss the Pontryagin Maximum Principle and basic existence results, and apply these tools to concrete simple optimal control problems. Special sections are devoted to the general theory of linear time-optimal problems and linear-quadratic problems. (author)
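
The linear-quadratic problems mentioned in the notes have a closed-form solution through the algebraic Riccati equation; below is a minimal sketch (standard LQR material, my own example rather than anything from the notes) for a double integrator.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0],
              [0.0, 0.0]])                # double integrator: x'' = u
B = np.array([[0.0], [1.0]])
Q = np.eye(2)                             # state cost
R = np.array([[1.0]])                     # control cost

P = solve_continuous_are(A, B, Q, R)      # solve the Riccati equation
K = np.linalg.solve(R, B.T @ P)           # optimal feedback u = -K x
print("K =", K)                           # closed loop A - B K is stable
```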

  1. Outcomes of newly practicing nurses who applied principles of holistic comfort theory during the transition from school to practice: a qualitative study.

    Science.gov (United States)

    Goodwin, Miki; Candela, Lori

    2013-06-01

    The aim of this qualitative study was to explore if newly practicing nurses benefited from learning holistic comfort theory during their baccalaureate education, and to provide a conceptual framework to support the transition from school to practice. The study was conducted among graduates of an accelerated baccalaureate nursing program where holistic comfort theory was embedded as a learner-centered philosophy across the curriculum. A phenomenological process using van Manen's qualitative methodology in education involving semi-structured interviews and thematic analysis was used. The nurses recalled what holistic comfort meant to them in school, and described the lived experience of assimilating holistic comfort into their attitudes and behaviors in practice. Themes were established and a conceptual framework was developed to better understand the nurses' lived experiences. Results showed that holistic comfort was experienced as a constructive approach to transcend unavoidable difficulties during the transition from school to practice. Participants described meaningful learning and acquisition of self-strengthening behaviors using holistic comfort theory. Holistic comfort principles were credited for easing nurses into the realities of work and advocating for best patient outcomes. Patient safety and pride in patient care were incidental positive outcomes. The study offers new insights about applying holistic comfort to prepare nurses for the realities of practice. Copyright © 2012 Elsevier Ltd. All rights reserved.

  2. Limitations of Boltzmann's principle

    International Nuclear Information System (INIS)

    Lavenda, B.H.

    1995-01-01

    The usual form of Boltzmann's principle assures that maximum entropy, or entropy reduction, occurs with maximum probability, implying a unimodal distribution. Boltzmann's principle cannot be applied to nonunimodal distributions, like the arcsine law, because the entropy may be concave only over a limited portion of the interval. The method of subordination shows that the arcsine distribution corresponds to a process with a single degree of freedom, thereby confirming the invalidation of Boltzmann's principle. The fractalization of time leads to a new distribution in which arcsine and Cauchy distributions can coexist simultaneously for nonintegral degrees of freedom between √2 and 2

  3. THE THEORY OF ESSENTIAL FACILITIES. THE PRINCIPLE OF ACCESS TO INVENTION IN CASE OF ABUSIVE REFUSAL TO LICENSE

    Directory of Open Access Journals (Sweden)

    Irina CUCER LISNIC

    2015-07-01

    Full Text Available Essential facilities are specific inputs that are indispensable for the production of downstream goods. These inputs are situated upstream and are eligible for intellectual property protection. In order to foster competition downstream, holders of these inputs should be forced to give access to potential users by offering them operating licenses. In other words, must the exclusive right of the intellectual property holder to freely exploit his invention be respected, or must it be sacrificed in favor of downstream competition? In the present analysis we examine some of the controversial or lesser-known judicial aspects of the theory of essential facilities.

  4. Localized surface plasmon resonance in silver nanoparticles: Atomistic first-principles time-dependent density-functional theory calculations

    OpenAIRE

    Kuisma, Mikael; Sakko, Arto; Rossi, Tuomas P.; Larsen, Ask H.; Enkovaara, Jussi; Lehtovaara, Lauri; Rantala, Tapio T.

    2015-01-01

    We observe using ab initio methods that localized surface plasmon resonances in icosahedral silver nanoparticles enter the asymptotic region already between diameters of 1 and 2 nm, converging close to the classical quasistatic limit around 3.4 eV. We base the observation on time-dependent density-functional theory simulations of the icosahedral silver clusters Ag$_{55}$ (1.06 nm), Ag$_{147}$ (1.60 nm), Ag$_{309}$ (2.14 nm), and Ag$_{561}$ (2.68 nm). The simulation method combines the adiabat...

  5. Extremum principles for irreversible processes

    International Nuclear Information System (INIS)

    Hillert, M.; Agren, J.

    2006-01-01

    Hamilton's extremum principle is a powerful mathematical tool in classical mechanics. Onsager's extremum principle may play a similar role in irreversible thermodynamics and may also become a valuable tool. His principle may formally be regarded as a principle of maximum rate of entropy production but does not have a clear physical interpretation. Prigogine's principle of minimum rate of entropy production has a physical interpretation when it applies, but is not strictly valid except for a very special case

  6. Cosmological principles. II. Physical principles

    International Nuclear Information System (INIS)

    Harrison, E.R.

    1974-01-01

    The discussion of cosmological principle covers the uniformity principle of the laws of physics, the gravitation and cognizability principles, and the Dirac creation, chaos, and bootstrap principles. (U.S.)

  7. GPU-Accelerated Large-Scale Electronic Structure Theory on Titan with a First-Principles All-Electron Code

    Science.gov (United States)

    Huhn, William Paul; Lange, Björn; Yu, Victor; Blum, Volker; Lee, Seyong; Yoon, Mina

    Density-functional theory has been well established as the dominant quantum-mechanical computational method in the materials community. Large, accurate simulations become very challenging on small to mid-scale computers and require high-performance compute platforms to succeed. GPU acceleration is one promising approach. In this talk, we present a first implementation of all-electron density-functional theory in the FHI-aims code for massively parallel GPU-based platforms. Special attention is paid to the update of the density and to the integration of the Hamiltonian and overlap matrices, realized in a domain decomposition scheme on non-uniform grids. The initial implementation scales well across nodes on ORNL's Titan Cray XK7 supercomputer (8 to 64 nodes, 16 MPI ranks/node) and shows an overall runtime speedup of 1.4x from utilization of the K20X Tesla GPU on each Titan node, with the charge density update showing a speedup of 2x. Further acceleration opportunities will be discussed. Work supported by the LDRD Program of ORNL managed by UT-Battelle, LLC, for the U.S. DOE and by the Oak Ridge Leadership Computing Facility, which is a DOE Office of Science User Facility supported under Contract DE-AC05-00OR22725.

  8. The special theory of Brownian relativity: equivalence principle for dynamic and static random paths and uncertainty relation for diffusion.

    Science.gov (United States)

    Mezzasalma, Stefano A

    2007-03-15

    The theoretical basis of a recent theory of Brownian relativity for polymer solutions is deepened and reexamined. After the problem of relative diffusion in polymer solutions is addressed, its two postulates are formulated in all generality. The former builds a statistical equivalence between (uncorrelated) timelike and shapelike reference frames, that is, among dynamical trajectories of liquid molecules and static configurations of polymer chains. The latter defines the "diffusive horizon" as the invariant quantity to work with in the special version of the theory. Particularly, the concept of universality in polymer physics corresponds in Brownian relativity to that of covariance in the Einstein formulation. Here, a "universal" law consists of a privileged observation, performed from the laboratory rest frame and agreeing with any diffusive reference system. From the joint lack of covariance and simultaneity implied by the Brownian Lorentz-Poincaré transforms, a relative uncertainty arises, in a certain analogy with quantum mechanics. It is driven by the difference between local diffusion coefficients in the liquid solution. The same transformation class can be used to infer Fick's second law of diffusion, playing here the role of a gauge invariance preserving covariance of the spacetime increments. An overall, noteworthy conclusion emerging from this view concerns the statistics of (i) static macromolecular configurations and (ii) the motion of liquid molecules, which would be much more related than expected.

  9. Algorithmic Principles of Mathematical Programming

    NARCIS (Netherlands)

    Faigle, Ulrich; Kern, Walter; Still, Georg

    2002-01-01

    Algorithmic Principles of Mathematical Programming investigates the mathematical structures and principles underlying the design of efficient algorithms for optimization problems. Recent advances in algorithmic theory have shown that the traditionally separate areas of discrete optimization, linear

  10. Use of the Principles of Effectuation and Self-Organization Theories in Realization of Economic Capacity of Sports Organizations

    Directory of Open Access Journals (Sweden)

    S. A. Ostroukhov

    2017-01-01

    Full Text Available Purpose: the aim of the article is to study the mechanisms of forming, developing and using the economic capacity that increases the competitiveness of entrepreneurial sports organizations. Ensuring the efficiency of sports activities for all participants is the main objective of sports management. Interest in sport in Russia has increased considerably in recent years, making sports management an increasingly relevant field. Sport, like any other activity, needs effective management. Methods: the research is based on the following methods: systematization, analysis and synthesis, generalization, the method of analogies, comparative analysis, classification, scientific abstraction, induction and deduction; and methods of observation, together with graphic and tabular methods. Results: the author develops a management mechanism for entrepreneurial sports organizations. The mechanism is based on the principles of effectuation and self-organization. The practical importance of the work is that the results can be applied as a methodological basis for developing the strategic and tactical plans that provide for the sustainable development of sports organizations. Conclusions and Relevance: the proposed mechanism assumes that a fiducial (trust-based) component is the main component in the management of entrepreneurial sports organizations. The mechanism is oriented toward minimizing the risk of the main business processes and toward using the uncertainty of development as a source of competitive advantages.

  11. First-principles theory of anharmonicity and the inverse isotope effect in superconducting palladium-hydride compounds.

    Science.gov (United States)

    Errea, Ion; Calandra, Matteo; Mauri, Francesco

    2013-10-25

    Palladium hydrides display the largest isotope effect anomaly known in the literature. Replacement of hydrogen with the heavier isotopes leads to higher superconducting temperatures, a behavior inconsistent with harmonic theory. Solving the self-consistent harmonic approximation by a stochastic approach, we obtain the anharmonic free energy, the thermal expansion, and the superconducting properties fully ab initio. We find that the phonon spectra are strongly renormalized by anharmonicity far beyond the perturbative regime. Superconductivity is phonon mediated, but the harmonic approximation largely overestimates the superconducting critical temperatures. We explain the inverse isotope effect, obtaining a -0.38 value for the isotope coefficient in good agreement with experiments, hydrogen anharmonicity being mainly responsible for the isotope anomaly.

  12. A review of the generalized uncertainty principle

    International Nuclear Information System (INIS)

    Tawfik, Abdel Nasser; Diab, Abdel Magied

    2015-01-01

    Based on string theory, black hole physics, doubly special relativity and some ‘thought’ experiments, minimal distance and/or maximum momentum are proposed. As alternatives to the generalized uncertainty principle (GUP), the modified dispersion relation, the space noncommutativity, the Lorentz invariance violation, and the quantum-gravity-induced birefringence effects are summarized. The origin of minimal measurable quantities and the different GUP approaches are reviewed and the corresponding observations are analysed. Bounds on the GUP parameter are discussed and implemented in the understanding of recent PLANCK observations of cosmic inflation. The higher-order GUP approaches predict minimal length uncertainty with and without maximum momenta. Possible arguments against the GUP are discussed; for instance, the concern about its compatibility with the equivalence principles, the universality of gravitational redshift and the free fall and law of reciprocal action are addressed. (review)
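
For reference, the quadratic GUP form most often used in this literature, and the minimal length it implies (a standard expression, with β the GUP parameter to which the review's bounds refer):

```latex
% Quadratic GUP: minimizing the right-hand side over \Delta p gives a
% smallest resolvable length \Delta x_{\min} = \hbar\sqrt{\beta}.
\[
  \Delta x \,\Delta p \;\ge\; \frac{\hbar}{2}
  \left[ 1 + \beta\,(\Delta p)^2 \right],
  \qquad
  \Delta x_{\min} = \hbar\sqrt{\beta}.
\]
```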

  13. The Greater Involvement of People Living with AIDS principle: theory versus practice in Ontario's HIV/AIDS community-based research sector.

    Science.gov (United States)

    Travers, R; Wilson, M G; Flicker, S; Guta, A; Bereket, T; McKay, C; van der Meulen, A; Cleverly, S; Dickie, M; Globerman, J; Rourke, S B

    2008-07-01

    Drawing on the Greater Involvement of People with HIV/AIDS (GIPA) principle, the HIV/AIDS movement began to "democratize" research in Canada in the mid-1990s. To date, there is little evidence about the success of the community-based research (CBR) movement in relation to the implementation of GIPA. We draw on findings from a larger study examining barriers and facilitating factors in relation to HIV-related CBR in Ontario, Canada. An online survey was completed by 39 senior managers in Ontario AIDS service organizations (ASOs). Twenty-five in-depth, semi-structured interviews were then conducted to further explore the survey findings. Survey respondents reported that, compared to researchers and frontline service providers, people living with HIV/AIDS (PLWHA) tended to be the least involved in all stages (input, process and outcome) of CBR projects. AIDS service organizations with a mandate that included serving rural and urban communities reported even lower levels of PLWHA involvement in CBR. Qualitative data reveal complex barriers that make meaningful PLWHA engagement in CBR difficult, including: HIV-related stigma; health-related challenges; "credentialism"; lack of capacity to engage in research; other issues taking priority; and mistrust of researchers. Facilitating factors included valuing lived experience; training and mentoring opportunities; financial compensation; trust building; and accommodating PLWHA's needs. While there is strong support for the GIPA principles in theory, practice lags far behind.

  14. Polarization-dependent force driving the Eg mode in bismuth under optical excitation: comparison of first-principles theory with ultra-fast x-ray experiments

    Science.gov (United States)

    Fahy, Stephen; Murray, Eamonn

    2015-03-01

    Using first principles electronic structure methods, we calculate the induced force on the Eg (zone centre transverse optical) phonon mode in bismuth immediately after absorption of a ultrafast pulse of polarized light. To compare the results with recent ultra-fast, time-resolved x-ray diffraction experiments, we include the decay of the force due to carrier scattering, as measured in optical Raman scattering experiments, and simulate the optical absorption process, depth-dependent atomic driving forces, and x-ray diffraction in the experimental geometry. We find excellent agreement between the theoretical predictions and the observed oscillations of the x-ray diffraction signal, indicating that first-principles theory of optical absorption is well suited to the calculation of initial atomic driving forces in photo-excited materials following ultrafast excitation. This work is supported by Science Foundation Ireland (Grant No. 12/IA/1601) and EU Commission under the Marie Curie Incoming International Fellowships (Grant No. PIIF-GA-2012-329695).

  15. Prevalence of Principles of Piaget's Theory Among 4-7-year-old Children and their Correlation with IQ.

    Science.gov (United States)

    Marwaha, Sugandha; Goswami, Mousumi; Vashist, Binny

    2017-08-01

    Cognitive development is a major area of human development and was extensively studied by Jean Piaget. He proposed that the development of intellectual abilities occurs in a series of relatively distinct stages and that a child's way of thinking and viewing the world is different at different stages. To assess Piaget's principles of the intuitive stage of the preoperational period among 4-7-year-old children relative to their intelligence quotient (IQ). Various characteristics described by Jean Piaget as specific to the age group of 4-7 years, along with those of the preceding (preconceptual stage of the preoperational period) and succeeding (concrete operations) periods, were analysed using various experiments in 300 children. These characteristics included the concepts of perceptual and cognitive egocentrism, centration and reversibility. The IQ of the children was measured using the Seguin form board test. Inferential statistics were performed using the Chi-square test and the Kruskal-Wallis test. The level of statistical significance was set at 0.05. The prevalence of perceptual and cognitive egocentrism was 10.7% and 31.7%, respectively, based on the experiments, and 33% based on the interview question. Centration was present in 96.3% of the children. About 99% of the children lacked the concept of reversibility according to the clay experiment, while 97.7% possessed this concept according to the interview question. The mean IQ score of children who possessed perceptual egocentrism, cognitive egocentrism and egocentrism in the dental setting was significantly higher than that of those who lacked these characteristics. Perceptual egocentrism had almost disappeared, and the prevalence of cognitive egocentrism decreased with increasing age. Centration and lack of reversibility were appreciated in most of the children. There was a gradual reduction in the prevalence of these characteristics with increasing age. The mean IQ score of children who possessed perceptual egocentrism, cognitive egocentrism and egocentrism in the dental setting was

  16. Stability, elastic and magnetostrictive properties of γ-Fe{sub 4}C and its derivatives from first principles theory

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Yun; Wang, Zhe [Department of Physics, Xiangtan University, Xiangtan, 411105 Hunan (China); Cao, Juexian, E-mail: jxcao@xtu.edu.cn [Department of Physics, Xiangtan University, Xiangtan, 411105 Hunan (China); Beijing Computational Science Reasearch Center, 100084 Beijing (China)

    2014-11-15

    Using the first-principles full-potential linearized augmented plane-wave method, we investigated the stability, elastic and magnetostrictive properties of γ-Fe4C and its derivatives. From the formation energy, we show that the most preferable configuration for MFe3C (M = Pd, Pt, Rh, Ir) is that in which the M atom occupies the corner 1a position rather than the 3c position. These derivatives are ductile, owing to high B/G values, except for IrFe3C. The calculated tetragonal magnetostrictive coefficient λ001 for γ-Fe4C is −380 ppm, which is larger than the value for Fe83Ga17 (+207 ppm). Owing to the strong spin-orbit coupling (SOC) strength constant (ξ) of Pt, the calculated λ001 of PtFe3C is −691 ppm, an increase of 80% compared to that of γ-Fe4C. We demonstrate the origin of the giant magnetostriction coefficient in terms of electronic structures and their responses to the tetragonal lattice distortion. - Highlights: • The most preferable site for the M atom of MFe3C (M = Pd, Pt, Rh, Ir) is the corner position. • The magnetostrictive coefficient for γ-Fe4C is −380 ppm, larger than the value for Fe83Ga17. • The calculated λ001 of PtFe3C is −691 ppm, an increase of 80% compared to that of γ-Fe4C.

  17. The principle of equivalence

    International Nuclear Information System (INIS)

    Unnikrishnan, C.S.

    1994-01-01

    Principle of equivalence was the fundamental guiding principle in the formulation of the general theory of relativity. What are its key elements? What are the empirical observations which establish it? What is its relevance to some new experiments? These questions are discussed in this article. (author). 11 refs., 5 figs

  18. A new computing principle

    International Nuclear Information System (INIS)

    Fatmi, H.A.; Resconi, G.

    1988-01-01

    In 1954 while reviewing the theory of communication and cybernetics the late Professor Dennis Gabor presented a new mathematical principle for the design of advanced computers. During our work on these computers it was found that the Gabor formulation can be further advanced to include more recent developments in Lie algebras and geometric probability, giving rise to a new computing principle

  19. Principles of dynamics

    CERN Document Server

    Hill, Rodney

    2013-01-01

    Principles of Dynamics presents classical dynamics primarily as an exemplar of scientific theory and method. This book is divided into three major parts concerned with gravitational theory of planetary systems; general principles of the foundations of mechanics; and general motion of a rigid body. Some of the specific topics covered are Keplerian Laws of Planetary Motion; gravitational potential and potential energy; and fields of axisymmetric bodies. The principles of work and energy, fictitious body-forces, and inertial mass are also looked into. Other specific topics examined are kinematics

  20. On the invariance principle

    Energy Technology Data Exchange (ETDEWEB)

    Moller-Nielsen, Thomas [University of Oxford (United Kingdom)

    2014-07-01

    Physicists and philosophers have long claimed that the symmetries of our physical theories - roughly speaking, those transformations which map solutions of the theory into solutions - can provide us with genuine insight into what the world is really like. According to this 'Invariance Principle', only those quantities which are invariant under a theory's symmetries should be taken to be physically real, while those quantities which vary under its symmetries should not. Physicists and philosophers, however, are generally divided (or, indeed, silent) when it comes to explaining how such a principle is to be justified. In this paper, I spell out some of the problems inherent in other theorists' attempts to justify this principle, and sketch my own proposed general schema for explaining how - and when - the Invariance Principle can indeed be used as a legitimate tool of metaphysical inference.

  1. Approximate maximum parsimony and ancestral maximum likelihood.

    Science.gov (United States)

    Alon, Noga; Chor, Benny; Pardi, Fabio; Rapoport, Anat

    2010-01-01

    We explore the maximum parsimony (MP) and ancestral maximum likelihood (AML) criteria in phylogenetic tree reconstruction. Both problems are NP-hard, so we seek approximate solutions. We formulate the two problems as Steiner tree problems under appropriate distances. The gist of our approach is the succinct characterization of Steiner trees for a small number of leaves for the two distances. This enables the use of known Steiner tree approximation algorithms. The approach leads to a 16/9 approximation ratio for AML and asymptotically to a 1.55 approximation ratio for MP.
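
    As background for the MP criterion, a minimal sketch of Fitch's classical small-parsimony algorithm, which scores one character on a fixed rooted binary tree; this illustrates the objective only, not the authors' Steiner-tree approximation, and the tree encoding and names are illustrative assumptions.

        # Minimal sketch of Fitch's small-parsimony algorithm for a single
        # character on a fixed rooted binary tree (illustrative encoding).

        def fitch_score(tree, states):
            """tree: dict node -> (left, right) for internal nodes.
            states: dict leaf -> observed character state.
            Returns the minimum number of state changes on this tree."""
            score = 0

            def visit(node):
                nonlocal score
                if node not in tree:          # leaf
                    return {states[node]}
                left, right = tree[node]
                a, b = visit(left), visit(right)
                common = a & b
                if common:                    # children can agree: no change
                    return common
                score += 1                    # disagreement costs one mutation
                return a | b

            visit("root")
            return score

        # Example: the character pattern ((A,C),(A,A)) needs exactly one change.
        tree = {"root": ("n1", "n2"), "n1": ("l1", "l2"), "n2": ("l3", "l4")}
        states = {"l1": "A", "l2": "C", "l3": "A", "l4": "A"}
        print(fitch_score(tree, states))  # -> 1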

  2. Principles of statistics

    CERN Document Server

    Bulmer, M G

    1979-01-01

    There are many textbooks which describe current methods of statistical analysis, while neglecting related theory. There are equally many advanced textbooks which delve into the far reaches of statistical theory, while bypassing practical applications. But between these two approaches is an unfilled gap, in which theory and practice merge at an intermediate level. Professor M. G. Bulmer's Principles of Statistics, originally published in 1965, was created to fill that need. The new, corrected Dover edition of Principles of Statistics makes this invaluable mid-level text available once again fo

  3. Maximum permissible dose

    International Nuclear Information System (INIS)

    Anon.

    1979-01-01

    This chapter presents a historic overview of the establishment of radiation guidelines by various national and international agencies. The use of maximum permissible dose and maximum permissible body burden limits to derive working standards is discussed

  4. Bernoulli's Principle

    Science.gov (United States)

    Hewitt, Paul G.

    2004-01-01

    Some teachers have difficulty understanding Bernoulli's principle particularly when the principle is applied to the aerodynamic lift. Some teachers favor using Newton's laws instead of Bernoulli's principle to explain the physics behind lift. Some also consider Bernoulli's principle too difficult to explain to students and avoid teaching it…

  5. Fuzzy tracking algorithm with feedback based on maximum entropy principle

    Institute of Scientific and Technical Information of China (English)

    刘智; 陈丰; 黄继平

    2012-01-01

    Aiming at the high computational cost and poor extensibility of matrix-weighted fusion algorithms, this paper proposes a fuzzy maximum entropy fusion algorithm with feedback. The algorithm uses fuzzy C-means (FCM) clustering and the maximum entropy principle (MEP) to compute the weight of each component of the state vector; it not only considers the influence of every component on the fused estimate as a whole, but also reduces complex matrix operations, giving good real-time performance. Compared with matrix-weighted algorithms, the fusion algorithm is also easy to extend and can be applied directly to tracking systems comprising more than two sensors. Simulation experiments show that the accuracy of the fused estimate is essentially consistent with that of matrix-weighted fusion methods, validating the algorithm.
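
    As an illustration of the general idea (a sketch, not the authors' exact FCM+MEP algorithm): componentwise fusion of several sensor estimates, with weights given a maximum-entropy (softmax) form over per-component error measures; all data and names are assumptions.

        import numpy as np

        # Each sensor i supplies a state estimate x_i and a per-component
        # error measure e_i (e.g., innovation magnitude). Weights take a
        # maximum-entropy (softmax) form, so no matrix inverses are needed
        # and adding a sensor just adds a row.

        def fuse(estimates, errors, beta=1.0):
            x = np.asarray(estimates)          # shape (n_sensors, n_components)
            e = np.asarray(errors)             # same shape
            w = np.exp(-beta * e)              # smaller error -> larger weight
            w /= w.sum(axis=0, keepdims=True)  # normalize per component
            return (w * x).sum(axis=0)         # componentwise weighted fusion

        estimates = [[1.02, 0.48], [0.97, 0.55], [1.10, 0.40]]
        errors    = [[0.10, 0.30], [0.05, 0.10], [0.40, 0.50]]
        print(fuse(estimates, errors))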

  6. Principles of Activity Theory in analysing the process of construction of pedagogic activities with the use of mobile devices in the Chemistry learning

    Directory of Open Access Journals (Sweden)

    Liliane da Silva Coelho Jacon

    2014-06-01

    Full Text Available Mobile devices emerge as major players in ensuring a favourable resource to connect people, minimizing space-time constraints and enabling emerging mobile learning (m-learning). The use of mobile devices in pedagogic praxis implies a closer link between teachers in their initial development and their teacher educators, in order to enable the incorporation of this mobile technology in undergraduate courses. This "approach" means facilitating meetings to discuss, reflect and talk about the incorporation of this technology in the teaching-learning process. In this research, two professors held meetings to discuss and reflect on the employment of this mobile technology in the undergraduate course: one a Chemistry teacher educator, the other a computers-and-education teacher-researcher. The methodological approach is based on a qualitative method with some elements of action-research, grounded in the theoretical assumptions of Activity Theory (ENGESTRÖM, 1999). The study, based on the debates over the use of mobile devices in the teaching of chemistry, was developed as part of the undergraduate course in Chemistry at the Federal University of Rondonia. For a set of activities in which students and professors took part, each with the objects of their specific activities, the Activity System related to the construction of those activities was presented. The analysis of this Activity System from the perspective of the five principles of Activity Theory points out that the process of collaborative participation in the meetings, the implementation of activities with the students of the degree course, and the preparation of scientific papers demonstrated the qualitative evolution of the chemistry teacher educator.

  7. Chemical hardness and density functional theory

    Indian Academy of Sciences (India)

    Unknown

    RALPH G PEARSON, Chemistry Department, University of California, Santa Barbara, CA 93106, USA. Abstract: The concept of chemical hardness is reviewed from a personal point of view. Keywords: hardness; softness; hard and soft acids and bases (HSAB); principle of maximum hardness (PMH); density functional theory (DFT).

  8. Maximum entropy production rate in quantum thermodynamics

    Energy Technology Data Exchange (ETDEWEB)

    Beretta, Gian Paolo, E-mail: beretta@ing.unibs.i [Universita di Brescia, via Branze 38, 25123 Brescia (Italy)

    2010-06-01

    In the framework of the recent quest for well-behaved nonlinear extensions of the traditional Schroedinger-von Neumann unitary dynamics that could provide fundamental explanations of recent experimental evidence of loss of quantum coherence at the microscopic level, a recent paper [Gheorghiu-Svirschevski 2001 Phys. Rev. A 63 054102] reproposes the nonlinear equation of motion proposed by the present author [see Beretta G P 1987 Found. Phys. 17 365 and references therein] for quantum (thermo)dynamics of a single isolated indivisible constituent system, such as a single particle, qubit, qudit, spin or atomic system, or a Bose-Einstein or Fermi-Dirac field. As already proved, such nonlinear dynamics entails a fundamental unifying microscopic proof and extension of Onsager's reciprocity and Callen's fluctuation-dissipation relations to all nonequilibrium states, close and far from thermodynamic equilibrium. In this paper we propose a brief but self-contained review of the main results already proved, including the explicit geometrical construction of the equation of motion from the steepest-entropy-ascent ansatz and its exact mathematical and conceptual equivalence with the maximal-entropy-generation variational-principle formulation presented in Gheorghiu-Svirschevski S 2001 Phys. Rev. A 63 022105. Moreover, we show how it can be extended to the case of a composite system to obtain the general form of the equation of motion, consistent with the demanding requirements of strong separability and of compatibility with general thermodynamics principles. The irreversible term in the equation of motion describes the spontaneous attraction of the state operator in the direction of steepest entropy ascent, thus implementing the maximum entropy production principle in quantum theory. The time rate at which the path of steepest entropy ascent is followed has so far been left unspecified. As a step towards the identification of such rate, here we propose a possible

  9. Assessing photocatalytic power of g-C3N4 for solar fuel production: A first-principles study involving quasi-particle theory and dispersive forces.

    Science.gov (United States)

    Osorio-Guillén, J M; Espinosa-García, W F; Moyses Araujo, C

    2015-09-07

    First-principles quasi-particle theory has been employed to assess the catalytic power of graphitic carbon nitride, g-C3N4, for solar fuel production. A comparative study between g-h-triazine and g-h-heptazine has been carried out, taking into account van der Waals dispersive forces. The band edge potentials have been calculated using a recently developed approach where quasi-particle effects are taken into account through the GW approximation. First, it was found that the description of ground state properties such as cohesive and surface formation energies requires proper treatment of the dispersive interaction. Furthermore, through the analysis of calculated band-edge potentials, it is shown that g-h-triazine has high reductive power, reaching the potential to reduce CO2 to formic acid; coplanar g-h-heptazine displays the highest thermodynamic driving force toward the H2O/O2 oxidation reaction; and corrugated g-h-heptazine exhibits a good capacity for both reactions. This rigorous theoretical study shows a route to further improve the catalytic performance of g-C3N4.

  10. Hydrodynamic Relaxation of an Electron Plasma to a Near-Maximum Entropy State

    International Nuclear Information System (INIS)

    Rodgers, D. J.; Servidio, S.; Matthaeus, W. H.; Mitchell, T. B.; Aziz, T.; Montgomery, D. C.

    2009-01-01

    Dynamical relaxation of a pure electron plasma in a Malmberg-Penning trap is studied, comparing experiments, numerical simulations and statistical theories of weakly dissipative two-dimensional (2D) turbulence. Simulations confirm that the dynamics are approximated well by a 2D hydrodynamic model. Statistical analysis favors a theoretical picture of relaxation to a near-maximum entropy state with constrained energy, circulation, and angular momentum. This provides evidence that 2D electron fluid relaxation in a turbulent regime is governed by principles of maximum entropy.

  11. Maximum phytoplankton concentrations in the sea

    DEFF Research Database (Denmark)

    Jackson, G.A.; Kiørboe, Thomas

    2008-01-01

    A simplification of plankton dynamics using coagulation theory provides predictions of the maximum algal concentration sustainable in aquatic systems. These predictions have previously been tested successfully against results from iron fertilization experiments. We extend the test to data collect...

  12. Principles of Fourier analysis

    CERN Document Server

    Howell, Kenneth B

    2001-01-01

    Fourier analysis is one of the most useful and widely employed sets of tools for the engineer, the scientist, and the applied mathematician. As such, students and practitioners in these disciplines need a practical and mathematically solid introduction to its principles. They need straightforward verifications of its results and formulas, and they need clear indications of the limitations of those results and formulas.Principles of Fourier Analysis furnishes all this and more. It provides a comprehensive overview of the mathematical theory of Fourier analysis, including the development of Fourier series, "classical" Fourier transforms, generalized Fourier transforms and analysis, and the discrete theory. Much of the author's development is strikingly different from typical presentations. His approach to defining the classical Fourier transform results in a much cleaner, more coherent theory that leads naturally to a starting point for the generalized theory. He also introduces a new generalized theory based ...

  13. The principle of general covariance and the principle of equivalence: two distinct concepts

    International Nuclear Information System (INIS)

    Fagundes, H.V.

    It is shown how to construct a theory with general covariance but without the equivalence principle. Such a theory is in disagreement with experiment, but it serves to illustrate the independence of the former principle from the latter one.

  14. Physical Premium Principle: A New Way for Insurance Pricing

    Directory of Open Access Journals (Sweden)

    Amir H. Darooneh

    2005-02-01

    Full Text Available Abstract: In our previous work we suggested a way for computing the non-life insurance premium. The probable surplus of the insurer company is assumed to be distributed according to canonical ensemble theory. The Esscher premium principle appears as a special case. The difference between our method and traditional principles for premium calculation was shown by simulation. Here we construct a theoretical foundation for the main assumption in our method; in this respect we present a new (physical) definition of economic equilibrium. This approach lets us apply the maximum entropy principle to economic systems. We also extend our method to deal with the problem of premium calculation for correlated risk categories. Like the Buhlman economic premium principle, our method considers the effect of the market on the premium, but in a different way.
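
    As an illustration, a minimal Monte Carlo sketch of the Esscher premium mentioned above, pi_h(X) = E[X e^{hX}] / E[e^{hX}]; the gamma loss model and the tilt parameter h are assumptions for the demo only.

        import numpy as np

        # Monte Carlo estimate of the Esscher premium from a sample of losses.
        # A gamma loss model is used so the exponential moment exists for h < 1.

        rng = np.random.default_rng(0)
        losses = rng.gamma(shape=2.0, scale=1.0, size=100_000)

        def esscher_premium(x, h):
            w = np.exp(h * x)                 # canonical-ensemble style tilt
            return np.average(x, weights=w)   # = E[X e^{hX}] / E[e^{hX}]

        print(esscher_premium(losses, 0.0))   # h = 0 reduces to the pure mean
        print(esscher_premium(losses, 0.5))   # h > 0 loads the premium for risk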

  16. Mach's holographic principle

    International Nuclear Information System (INIS)

    Khoury, Justin; Parikh, Maulik

    2009-01-01

    Mach's principle is the proposition that inertial frames are determined by matter. We put forth and implement a precise correspondence between matter and geometry that realizes Mach's principle. Einstein's equations are not modified and no selection principle is applied to their solutions; Mach's principle is realized wholly within Einstein's general theory of relativity. The key insight is the observation that, in addition to bulk matter, one can also add boundary matter. Given a space-time, and thus the inertial frames, we can read off both boundary and bulk stress tensors, thereby relating matter and geometry. We consider some global conditions that are necessary for the space-time to be reconstructible, in principle, from bulk and boundary matter. Our framework is similar to that of the black hole membrane paradigm and, in asymptotically anti-de Sitter space-times, is consistent with holographic duality.

  17. A Principle of Intentionality.

    Science.gov (United States)

    Turner, Charles K

    2017-01-01

    The mainstream theories and models of the physical sciences, including neuroscience, are all consistent with the principle of causality. Wholly causal explanations make sense of how things go, but are inherently value-neutral, providing no objective basis for true beliefs being better than false beliefs, nor for it being better to intend wisely than foolishly. Dennett (1987) makes a related point in calling the brain a syntactic (procedure-based) engine. He says that you cannot get to a semantic (meaning-based) engine from there. He suggests that folk psychology revolves around an intentional stance that is independent of the causal theories of the brain, and accounts for constructs such as meanings, agency, true belief, and wise desire. Dennett proposes that the intentional stance is so powerful that it can be developed into a valid intentional theory. This article expands Dennett's model into a principle of intentionality that revolves around the construct of objective wisdom. This principle provides a structure that can account for all mental processes, and for the scientific understanding of objective value. It is suggested that science can develop a far more complete worldview with a combination of the principles of causality and intentionality than would be possible with scientific theories that are consistent with the principle of causality alone.

  18. Gyro precession and Mach's principle

    International Nuclear Information System (INIS)

    Eby, P.

    1979-01-01

    The precession of a gyroscope is calculated in a nonrelativistic theory due to Barbour which satisfies Mach's principle. It is shown that the theory predicts both the geodetic and motional precession of general relativity to within factors of order 1. The significance of the gyro experiment is discussed from the point of view of metric theories of gravity and this is contrasted with its significance from the point of view of Mach's principle. (author)

  19. Developing principles of growth

    DEFF Research Database (Denmark)

    Neergaard, Helle; Fleck, Emma

    The paper develops an understanding of the principles of growth among women-owned firms. Using an in-depth case study methodology, data was collected from women-owned firms in Denmark and Ireland, as these countries are similar in contextual terms, e.g. population and business composition, dominated by micro, small and medium-sized enterprises. Extending principles put forward in effectuation theory, we propose that women grow their firms according to five principles which enable women's enterprises to survive in the face of crises such as the current world financial crisis.

  20. Quasiparticles and phonon satellites in spectral functions of semiconductors and insulators: Cumulants applied to the full first-principles theory and the Fröhlich polaron

    Science.gov (United States)

    Nery, Jean Paul; Allen, Philip B.; Antonius, Gabriel; Reining, Lucia; Miglio, Anna; Gonze, Xavier

    2018-03-01

    The electron-phonon interaction causes thermal and zero-point motion shifts of electron quasiparticle (QP) energies ε_k(T). Other consequences of interactions, visible in angle-resolved photoemission spectroscopy (ARPES) experiments, are broadening of QP peaks and the appearance of sidebands, contained in the electron spectral function A(k, ω) = −Im G^R(k, ω)/π, where G^R is the retarded Green's function. Electronic structure codes (e.g., using density-functional theory) are now available that compute the shifts and start to address broadening and sidebands. Here we consider MgO and LiF, and determine their nonadiabatic Migdal self-energy. The spectral function obtained from the Dyson equation makes errors in the weight and energy of the QP peak and in the position and weight of the phonon-induced sidebands. Only one phonon satellite appears, with an unphysically large energy difference (larger than the highest phonon energy) with respect to the QP peak. By contrast, the spectral function from a cumulant treatment of the same self-energy is physically better, giving a quite accurate QP energy and several satellites approximately spaced by the LO phonon energy. In particular, the positions of the QP peak and first satellite agree closely with those found for the Fröhlich Hamiltonian by Mishchenko et al. [Phys. Rev. B 62, 6317 (2000), 10.1103/PhysRevB.62.6317] using diagrammatic Monte Carlo. We provide a detailed comparison between the first-principles MgO and LiF results and those of the Fröhlich Hamiltonian. Such an analysis applies widely to materials with infrared (IR)-active phonons.
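
    As background for the cumulant treatment described above, a minimal sketch of the standard cumulant ansatz (conventions vary between references; notation assumed):

        \[
          G^{R}(k,t) = G^{R}_{0}(k,t)\, e^{C(k,t)}, \qquad
          A(k,\omega) = -\frac{1}{\pi}\,\operatorname{Im} G^{R}(k,\omega),
        \]

    where C(k,t) is built from the same self-energy. Expanding e^C = 1 + C + C²/2! + ... generates the quasiparticle peak plus a series of satellites spaced by roughly the LO phonon energy, which is why the cumulant spectral function shows several phonon sidebands where the Dyson solution shows only one.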

  1. Maximum Acceleration Recording Circuit

    Science.gov (United States)

    Bozeman, Richard J., Jr.

    1995-01-01

    Coarsely digitized maximum levels recorded in blown fuses. Circuit feeds power to accelerometer and makes nonvolatile record of maximum level to which output of accelerometer rises during measurement interval. In comparison with inertia-type single-preset-trip-point mechanical maximum-acceleration-recording devices, circuit weighs less, occupies less space, and records accelerations within narrower bands of uncertainty. In comparison with prior electronic data-acquisition systems designed for same purpose, circuit simpler, less bulky, consumes less power, costs less, and eliminates need for playback and analysis of data recorded in magnetic or electronic memory devices. Circuit used, for example, to record accelerations to which commodities subjected during transportation on trucks.

  2. Os princípios constitucionais entre deontologia e axiologia: pressupostos para uma teoria hermenêutica democrática The constitutional principles between deontology and axiology: theoretical assumptions towards a democratic hermeneutic theory

    Directory of Open Access Journals (Sweden)

    Fábio Portela Lopes de Almeida

    2008-12-01

    Full Text Available The article discusses the nature of the constitutional principles by opposing two distinct hermeneutic theories: axiology and deontology. The theory of principles proposed by Robert Alexy in his Theory of Fundamental Rights is assumed as an ideal example of an axiological theory, and criticized for being unable to deal democratically with the fact of pluralism, i.e., the fact that contemporary societies are not structured on ethical values shared intersubjectively by all citizens. As an alternative to the axiological model, I suggest, based on a particular reading of the theories of John Rawls, Ronald Dworkin and Jürgen Habermas, that the adoption of a deontological perspective, which assumes a strict distinction between principles and values, overcomes the difficulties of the axiological theory. By assuming as its central premise the possibility of legitimating law through principles justified by criteria acceptable to all citizens, a deontological theory of principles becomes capable of dealing with the plurality of conceptions of the good present in contemporary societies. In this sense, the article falls within the field of constitutional theory.

  3. The Cost of Economic Literacy: How Well Does a Literacy-Targeted Principles of Economics Course Prepare Students for Intermediate Theory Courses?

    Science.gov (United States)

    Gilleskie, Donna B.; Salemi, Michael K.

    2012-01-01

    In a typical economics principles course, students encounter a large number of concepts. In a literacy-targeted course, students study a "short list" of concepts that they can use for the rest of their lives. While a literacy-targeted principles course provides better education for nonmajors, it may place economic majors at a…

  4. On discrete maximum principles for nonlinear elliptic problems

    Czech Academy of Sciences Publication Activity Database

    Karátson, J.; Korotov, S.; Křížek, Michal

    2007-01-01

    Roč. 76, č. 1 (2007), s. 99-108 ISSN 0378-4754 R&D Projects: GA MŠk 1P05ME749; GA AV ČR IAA1019201 Institutional research plan: CEZ:AV0Z10190503 Keywords : nonlinear elliptic problem * mixed boundary conditions * finite element method Subject RIV: BA - General Mathematics Impact factor: 0.738, year: 2007

  5. Maximum Quantum Entropy Method

    OpenAIRE

    Sim, Jae-Hoon; Han, Myung Joon

    2018-01-01

    Maximum entropy method for analytic continuation is extended by introducing quantum relative entropy. This new method is formulated in terms of matrix-valued functions and therefore invariant under arbitrary unitary transformation of input matrix. As a result, the continuation of off-diagonal elements becomes straightforward. Without introducing any further ambiguity, the Bayesian probabilistic interpretation is maintained just as in the conventional maximum entropy method. The applications o...

  6. Maximum power demand cost

    International Nuclear Information System (INIS)

    Biondi, L.

    1998-01-01

    The charging for a service is a supplier's remuneration for the expenses incurred in providing it. There are currently two charges for electricity: consumption and maximum demand. While no problem arises about the former, the issue is more complicated for the latter and the analysis in this article tends to show that the annual charge for maximum demand arbitrarily discriminates among consumer groups, to the disadvantage of some.

  7. String theory

    International Nuclear Information System (INIS)

    Chan Hongmo.

    1987-10-01

    The paper traces the development of the String Theory, and was presented at Professor Sir Rudolf Peierls' 80th Birthday Symposium. The String theory is discussed with respect to the interaction of strings, the inclusion of both gauge theory and gravitation, inconsistencies in the theory, and the role of space-time. The physical principles underlying string theory are also outlined. (U.K.)

  8. Actinide collisions for QED and superheavy elements with the time-dependent Hartree-Fock theory and the Balian-Vénéroni variational principle

    Directory of Open Access Journals (Sweden)

    Kedziora David J.

    2011-10-01

    Full Text Available Collisions of actinide nuclei form, during very short times of a few zs (10⁻²¹ s), the heaviest ensembles of interacting nucleons available on Earth. Such collisions are used to produce super-strong electric fields by the huge number of interacting protons, to test spontaneous positron-electron pair emission (vacuum decay) predicted by quantum electrodynamics (QED). Multi-nucleon transfer in actinide collisions could also be used as an alternative to fusion in order to produce neutron-rich heavy and superheavy elements, thanks to inverse quasifission mechanisms. Actinide collisions are studied in a dynamical quantum microscopic approach. The three-dimensional time-dependent Hartree-Fock (TDHF) code tdhf3d is used with a full Skyrme energy density functional to investigate the time evolution of expectation values of one-body operators, such as fragment position and particle number. This code is also used to compute the dispersion of the particle numbers (e.g., widths of fragment mass and charge distributions) from TDHF transfer probabilities, on the one hand, and using the Balian-Vénéroni variational principle, on the other hand. A first application to test QED is discussed. Collision times in 238U+238U are computed to determine the optimum energy for the observation of the vacuum decay. It is shown that the initial orientation strongly affects the collision times and reaction mechanism. The highest collision times predicted by TDHF in this reaction are of the order of ~4 zs at a center of mass energy of 1200 MeV. According to modern calculations based on the Dirac equation, the collision times at Ecm > 1 GeV are sufficient to allow spontaneous electron-positron pair emission from QED vacuum decay, in the case of bare uranium ion collisions. A second application of actinide collisions, to produce neutron-rich transfermiums, is discussed. A new inverse quasifission mechanism associated with a specific orientation of the nuclei is proposed to produce neutron-rich transfermium nuclei.

  9. The Principles of Readability

    Science.gov (United States)

    DuBay, William H.

    2004-01-01

    The principles of readability are in every style manual. Readability formulas are in every writing aid. What is missing is the research and theory on which they stand. This short review of readability research spans 100 years. The first part covers the history of adult literacy studies in the U.S., establishing the stratified nature of the adult…

  10. Principles of electrodynamics

    CERN Document Server

    Schwartz, Melvin

    1972-01-01

    This advanced undergraduate- and graduate-level text by the 1988 Nobel Prize winner establishes the subject's mathematical background, reviews the principles of electrostatics, then introduces Einstein's special theory of relativity and applies it throughout the book in topics ranging from Gauss' theorem and Coulomb's law to electric and magnetic susceptibility.

  11. Principles of Bridge Reliability

    DEFF Research Database (Denmark)

    Thoft-Christensen, Palle; Nowak, Andrzej S.

    The paper gives a brief introduction to the basic principles of structural reliability theory and its application to bridge engineering. Fundamental concepts like failure probability and reliability index are introduced. Ultimate as well as serviceability limit states for bridges are formulated, and as an example the reliability profile and a sensitivity analysis for a corroded reinforced concrete bridge are shown.

  12. The Pauli Exclusion Principle

    Indian Academy of Sciences (India)

    his exclusion principle, the quantum theory was a mess. Moreover, it could ... This is a function of all the coordinates and 'internal variables' such as spin, of all the ... must remain basically the same (i.e., change by a phase factor at most) if we ...

  13. The Principle of General Tovariance

    Science.gov (United States)

    Heunen, C.; Landsman, N. P.; Spitters, B.

    2008-06-01

    We tentatively propose two guiding principles for the construction of theories of physics, which should be satisfied by a possible future theory of quantum gravity. These principles are inspired by those that led Einstein to his theory of general relativity, viz. his principle of general covariance and his equivalence principle, as well as by the two mysterious dogmas of Bohr's interpretation of quantum mechanics, i.e. his doctrine of classical concepts and his principle of complementarity. An appropriate mathematical language for combining these ideas is topos theory, a framework earlier proposed for physics by Isham and collaborators. Our principle of general tovariance states that any mathematical structure appearing in the laws of physics must be definable in an arbitrary topos (with natural numbers object) and must be preserved under so-called geometric morphisms. This principle identifies geometric logic as the mathematical language of physics and restricts the constructions and theorems to those valid in intuitionism: neither Aristotle's principle of the excluded third nor Zermelo's Axiom of Choice may be invoked. Subsequently, our equivalence principle states that any algebra of observables (initially defined in the topos Sets) is empirically equivalent to a commutative one in some other topos.

  14. A survey of variational principles

    International Nuclear Information System (INIS)

    Lewins, J.D.

    1993-01-01

    This article gives a survey of variational principles. Variational principles play a significant role in mathematical theory, with emphasis on the physical aspects. They have two principal uses: to represent the equations of a system in a succinct way, and to enable a particular computation in the system to be carried out with greater accuracy. The survey of variational principles ranges widely, from its starting point in the Lagrange multiplier to optimisation principles. In an age of digital computation, these classic methods can be adapted to improve such calculations. We emphasize particularly the advantage of basing finite element methods on variational principles. (A.B.)

  15. Mach's principle and rotating universes

    International Nuclear Information System (INIS)

    King, D.H.

    1990-01-01

    It is shown that the Bianchi 9 model universe satisfies the Mach principle. These closed rotating universes were previously thought to be counter-examples to the principle. The Mach principle is satisfied because the angular momentum of the rotating matter is compensated by the effective angular momentum of gravitational waves. A new formulation of the Mach principle is given that is based on the field theory interpretation of general relativity. Every closed universe with 3-sphere topology is shown to satisfy this formulation of the Mach principle. It is shown that the total angular momentum of the matter and gravitational waves in a closed 3-sphere topology universe is zero

  16. Principles of quantum chemistry

    CERN Document Server

    George, David V

    2013-01-01

    Principles of Quantum Chemistry focuses on the application of quantum mechanics in physical models and experiments of chemical systems.This book describes chemical bonding and its two specific problems - bonding in complexes and in conjugated organic molecules. The very basic theory of spectroscopy is also considered. Other topics include the early development of quantum theory; particle-in-a-box; general formulation of the theory of quantum mechanics; and treatment of angular momentum in quantum mechanics. The examples of solutions of Schroedinger equations; approximation methods in quantum c

  17. Introductory remote sensing principles and concepts

    CERN Document Server

    Gibson, Paul

    2013-01-01

    Introduction to Remote Sensing Principles and Concepts provides a comprehensive student introduction to both the theory and application of remote sensing. This textbook introduces the field of remote sensing and traces its historical development and evolution; presents detailed explanations of core remote sensing principles and concepts, providing the theory required for a clear understanding of remotely sensed images; describes important remote sensing platforms, including Landsat, SPOT and NOAA; and examines and illustrates many of the applications of remotely sensed images in various fields.

  18. Robust Maximum Association Estimators

    NARCIS (Netherlands)

    A. Alfons (Andreas); C. Croux (Christophe); P. Filzmoser (Peter)

    2017-01-01

    The maximum association between two multivariate variables X and Y is defined as the maximal value that a bivariate association measure between one-dimensional projections α'X and β'Y can attain. Taking the Pearson correlation as projection index results in the first canonical correlation coefficient.

  19. Twist operators in N=4 beta-deformed theory

    NARCIS (Netherlands)

    de Leeuw, M.; Łukowski, T.

    2010-01-01

    In this paper we derive both the leading order finite size corrections for twist-2 and twist-3 operators and the next-to-leading order finite-size correction for twist-2 operators in beta-deformed SYM theory. The obtained results respect the principle of maximum transcendentality as well as

  20. Maximum power point tracking

    International Nuclear Information System (INIS)

    Enslin, J.H.R.

    1990-01-01

    A well engineered renewable remote energy system, utilizing the principle of Maximum Power Point Tracking, can be more cost-effective, has a higher reliability and can improve the quality of life in remote areas. This paper reports that a high-efficiency power electronic converter, for converting the output voltage of a solar panel or wind generator to the required DC battery bus voltage, has been realized. The converter is controlled to track the maximum power point of the input source under varying input and output parameters. Maximum power point tracking for relatively small systems is achieved by maximization of the output current in a battery charging regulator, using an optimized hill-climbing, inexpensive microprocessor-based algorithm. Through practical field measurements it is shown that a minimum input source saving of 15% on 3-5 kWh/day systems can easily be achieved. A total cost saving of at least 10-15% on the capital cost of these systems is achievable for relatively small Remote Area Power Supply systems. The advantages are much greater for larger temperature variations and higher-power systems. Other advantages include optimal sizing and system monitoring and control.
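
    As an illustration of the hill-climbing tracking the abstract refers to, a minimal "perturb and observe" sketch; the toy panel curve, step size and names are assumptions for the demo.

        # Hill-climbing ("perturb and observe") maximum power point tracking:
        # perturb the operating voltage, keep going while power rises,
        # reverse direction when power drops.

        def panel_power(v):
            # Toy photovoltaic curve with a single maximum near 16 V.
            i = max(0.0, 5.0 * (1.0 - (v / 21.0) ** 8))  # crude I-V curve
            return v * i

        def mppt(v=12.0, step=0.2, iterations=200):
            p_prev = panel_power(v)
            direction = +1.0
            for _ in range(iterations):
                v += direction * step
                p = panel_power(v)
                if p < p_prev:                # power dropped: reverse
                    direction = -direction
                p_prev = p
            return v, p_prev                  # oscillates around the maximum

        v_mp, p_mp = mppt()
        print(f"operating point ~{v_mp:.1f} V, {p_mp:.1f} W")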

  1. Principles of Optics

    Science.gov (United States)

    Born, Max; Wolf, Emil

    1999-10-01

    Principles of Optics is one of the classic science books of the twentieth century, and probably the most influential book in optics published in the past forty years. This edition has been thoroughly revised and updated, with new material covering the CAT scan, interference with broad-band light and the so-called Rayleigh-Sommerfeld diffraction theory. This edition also details scattering from inhomogeneous media and presents an account of the principles of diffraction tomography to which Emil Wolf has made a basic contribution. Several new appendices are also included. This new edition will be invaluable to advanced undergraduates, graduate students and researchers working in most areas of optics.

  2. Communication Theory.

    Science.gov (United States)

    Penland, Patrick R.

    Three papers are presented which delineate the foundation of theory and principles which underlie the research and instructional approach to communications at the Graduate School of Library and Information Science, University of Pittsburgh. Cybernetic principles provide the integration, and validation is based in part on a situation-producing…

  3. On the theory of optimal processes

    International Nuclear Information System (INIS)

    Goldenberg, P.; Provenzano, V.

    1975-01-01

    The theory of optimal processes is a recent mathematical formalism that is used to solve an important class of problems in science and in technology, that cannot be solved by classical variational techniques. An example of such processes would be the control of a nuclear reactor. Certain features of the theory of optimal processes are discussed, emphasizing the central contribution of Pontryagin with his formulation of the maximum principle. An application of the theory of optimum control is presented. The example is a time optimum problem applied to a simplified model of a nuclear reactor. It deals with the question of changing the equilibrium power level of the reactor in an optimum time
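
    For reference, Pontryagin's maximum principle in its standard textbook form, for dynamics dx/dt = f(x, u) and cost J = ∫ L(x, u) dt to be minimized (sign conventions vary between references):

        \[
          H(x,\psi,u) = \psi^{\mathsf T} f(x,u) - L(x,u), \qquad
          \dot{\psi} = -\frac{\partial H}{\partial x},
        \]
        \[
          u^{*}(t) = \arg\max_{u \in U} H\bigl(x^{*}(t), \psi(t), u\bigr)
          \quad \text{along the optimal trajectory } x^{*}.
        \]

    In time-optimal problems such as the reactor power-level change mentioned above, this maximization typically yields bang-bang controls that switch between the admissible extremes.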

  4. Schaum's outline of theory and problems of Lagrangian dynamics with a treatment of Euler's equations of motion, Hamilton's equations and Hamilton's principle

    CERN Document Server

    Wells, Dare A

    1967-01-01

    The book clearly and concisely explains the basic principles of Lagrangian dynamics and provides training in the actual physical and mathematical techniques of applying Lagrange's equations, laying the foundation for a later study of topics that bridge the gap between classical and quantum physics, engineering, chemistry and applied mathematics, and for practicing scientists and engineers.

  5. Calculus of variations and optimal control theory a concise introduction

    CERN Document Server

    Liberzon, Daniel

    2011-01-01

    This textbook offers a concise yet rigorous introduction to calculus of variations and optimal control theory, and is a self-contained resource for graduate students in engineering, applied mathematics, and related subjects. Designed specifically for a one-semester course, the book begins with calculus of variations, preparing the ground for optimal control. It then gives a complete proof of the maximum principle and covers key topics such as the Hamilton-Jacobi-Bellman theory of dynamic programming and linear-quadratic optimal control. Calculus of Variations and Optimal Control Theory also traces the historical development of the subject and features numerous exercises, notes and references at the end of each chapter, and suggestions for further study. Offers a concise yet rigorous introduction Requires limited background in control theory or advanced mathematics Provides a complete proof of the maximum principle Uses consistent notation in the exposition of classical and modern topics Traces the h...
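
    As a small companion to the linear-quadratic material mentioned above, a sketch of continuous-time LQR via the algebraic Riccati equation; the double-integrator plant and weights are assumed toy choices.

        import numpy as np
        from scipy.linalg import solve_continuous_are

        # Continuous-time LQR for x' = Ax + Bu, minimizing the integral of
        # x'Qx + u'Ru, solved through the algebraic Riccati equation.

        A = np.array([[0.0, 1.0],
                      [0.0, 0.0]])   # double integrator
        B = np.array([[0.0],
                      [1.0]])
        Q = np.eye(2)                # state cost
        R = np.array([[1.0]])        # control cost

        P = solve_continuous_are(A, B, Q, R)   # A'P + PA - PBR^{-1}B'P + Q = 0
        K = np.linalg.solve(R, B.T @ P)        # optimal feedback u = -Kx
        print(K)                               # here K = [[1.0, 1.7320...]]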

  6. Maximum entropy methods

    International Nuclear Information System (INIS)

    Ponman, T.J.

    1984-01-01

    For some years now two different expressions have been in use for maximum entropy image restoration and there has been some controversy over which one is appropriate for a given problem. Here two further entropies are presented and it is argued that there is no single correct algorithm. The properties of the four different methods are compared using simple 1D simulations with a view to showing how they can be used together to gain as much information as possible about the original object. (orig.)
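
    For reference, the two expressions most commonly met in maximum entropy image restoration, for an image with positive pixel values f_j, are (a sketch; normalizations and default models omitted):

        \[
          S_{1} = -\sum_{j} f_{j} \ln f_{j}
          \qquad\text{and}\qquad
          S_{2} = \sum_{j} \ln f_{j},
        \]

    each maximized subject to a data-fidelity constraint such as \chi^{2}(f) \le \chi^{2}_{0}; the article's point is that no single such functional is universally correct.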

  7. The last glacial maximum

    Science.gov (United States)

    Clark, P.U.; Dyke, A.S.; Shakun, J.D.; Carlson, A.E.; Clark, J.; Wohlfarth, B.; Mitrovica, J.X.; Hostetler, S.W.; McCabe, A.M.

    2009-01-01

    We used 5704 ¹⁴C, ¹⁰Be, and ³He ages that span the interval from 10,000 to 50,000 years ago (10 to 50 ka) to constrain the timing of the Last Glacial Maximum (LGM) in terms of global ice-sheet and mountain-glacier extent. Growth of the ice sheets to their maximum positions occurred between 33.0 and 26.5 ka in response to climate forcing from decreases in northern summer insolation, tropical Pacific sea surface temperatures, and atmospheric CO2. Nearly all ice sheets were at their LGM positions from 26.5 ka to 19 to 20 ka, corresponding to minima in these forcings. The onset of Northern Hemisphere deglaciation 19 to 20 ka was induced by an increase in northern summer insolation, providing the source for an abrupt rise in sea level. The onset of deglaciation of the West Antarctic Ice Sheet occurred between 14 and 15 ka, consistent with evidence that this was the primary source for an abrupt rise in sea level ~14.5 ka.

  8. No-Hypersignaling Principle

    Science.gov (United States)

    Dall'Arno, Michele; Brandsen, Sarah; Tosini, Alessandro; Buscemi, Francesco; Vedral, Vlatko

    2017-07-01

    A paramount topic in quantum foundations, rooted in the study of the Einstein-Podolsky-Rosen (EPR) paradox and Bell inequalities, is that of characterizing quantum theory in terms of the spacelike correlations it allows. Here, we show that to focus only on spacelike correlations is not enough: we explicitly construct a toy model theory that, while not contradicting classical and quantum theories at the level of spacelike correlations, still displays an anomalous behavior in its timelike correlations. We call this anomaly, quantified in terms of a specific communication game, the "hypersignaling" phenomena. We hence conclude that the "principle of quantumness," if it exists, cannot be found in spacelike correlations alone: nontrivial constraints need to be imposed also on timelike correlations, in order to exclude hypersignaling theories.

  9. Principles of Uncertainty

    CERN Document Server

    Kadane, Joseph B

    2011-01-01

    An intuitive and mathematical introduction to subjective probability and Bayesian statistics. An accessible, comprehensive guide to the theory of Bayesian statistics, Principles of Uncertainty presents the subjective Bayesian approach, which has played a pivotal role in game theory, economics, and the recent boom in Markov Chain Monte Carlo methods. Both rigorous and friendly, the book contains: Introductory chapters examining each new concept or assumption Just-in-time mathematics -- the presentation of ideas just before they are applied Summary and exercises at the end of each chapter Discus

  10. Principles of artificial intelligence

    CERN Document Server

    Nilsson, Nils J

    1980-01-01

    A classic introduction to artificial intelligence intended to bridge the gap between theory and practice, Principles of Artificial Intelligence describes fundamental AI ideas that underlie applications such as natural language processing, automatic programming, robotics, machine vision, automatic theorem proving, and intelligent data retrieval. Rather than focusing on the subject matter of the applications, the book is organized around general computational concepts involving the kinds of data structures used, the types of operations performed on the data structures, and the properties of th

  11. General principles of quantum mechanics

    International Nuclear Information System (INIS)

    Pauli, W.

    1980-01-01

    This book is a textbook for a course in quantum mechanics. Starting from complementarity and the uncertainty principle, Schroedinger's equation is introduced together with the operator calculus. Then stationary states are treated as eigenvalue problems. Furthermore, matrix mechanics is briefly discussed. Thereafter the theory of measurements is considered. As approximation methods, perturbation theory and the WKB approximation are introduced. Then identical particles, spin, and the exclusion principle are discussed. Thereafter the semiclassical theory of radiation and the relativistic one-particle problem are discussed. Finally an introduction is given to quantum electrodynamics. (HSI)

  12. Probable maximum flood control

    International Nuclear Information System (INIS)

    DeGabriele, C.E.; Wu, C.L.

    1991-11-01

    This study proposes preliminary design concepts to protect the waste-handling facilities and all shaft and ramp entries to the underground from the probable maximum flood (PMF) in the current design configuration for the proposed Nevada Nuclear Waste Storage Investigation (NNWSI) repository. Flood protection provisions were furnished by the United States Bureau of Reclamation (USBR) or developed from USBR data. Proposed flood protection provisions include site grading, drainage channels, and diversion dikes. Figures are provided to show these proposed flood protection provisions at each area investigated. These areas are the central surface facilities (including the waste-handling building and waste treatment building), tuff ramp portal, waste ramp portal, men-and-materials shaft, emplacement exhaust shaft, and exploratory shafts facility.

  13. Solar maximum observatory

    International Nuclear Information System (INIS)

    Rust, D.M.

    1984-01-01

    The successful retrieval and repair of the Solar Maximum Mission (SMM) satellite by Shuttle astronauts in April 1984 permitted continuance of solar flare observations that began in 1980. The SMM carries a soft X ray polychromator, gamma ray, UV and hard X ray imaging spectrometers, a coronagraph/polarimeter and particle counters. The data gathered thus far indicated that electrical potentials of 25 MeV develop in flares within 2 sec of onset. X ray data show that flares are composed of compressed magnetic loops that have come too close together. Other data have been taken on mass ejection, impacts of electron beams and conduction fronts with the chromosphere and changes in the solar radiant flux due to sunspots. 13 references

  14. Functional Maximum Autocorrelation Factors

    DEFF Research Database (Denmark)

    Larsen, Rasmus; Nielsen, Allan Aasbjerg

    2005-01-01

    Purpose. We aim at data where samples of an underlying function are observed in a spatial or temporal layout. Examples of underlying functions are reflectance spectra and biological shapes. We apply functional models based on smoothing splines and generalize the functional PCA of Ramsay (1997) to functional maximum autocorrelation factors (MAF) (Switzer, 1985; Larsen, 2001). We apply the method to biological shapes as well as reflectance spectra. Methods. MAF seeks linear combinations of the original variables that maximize autocorrelation between... Conclusions. Functional MAF analysis is a useful method for extracting low-dimensional models of temporally or spatially varying data. MAF outperforms the functional PCA in concentrating the 'interesting' spectra/shape variation in one end of the eigenvalue spectrum and allows for easier interpretation of effects.
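
    As an illustration of the multivariate computation underlying MAF (a sketch, not the authors' functional version): the autocorrelation of a projection w'x equals 1 − w'S_Δ w / (2 w'S w), where S is the data covariance and S_Δ the covariance of one-step differences, so MAF directions solve the generalized eigenproblem S_Δ w = λ S w taken in ascending order of λ. Toy data assumed.

        import numpy as np
        from scipy.linalg import eigh

        # Build toy multivariate signals: smooth random walks plus white noise.
        rng = np.random.default_rng(1)
        x = np.cumsum(rng.standard_normal((500, 6)), axis=0)
        x += 0.1 * rng.standard_normal(x.shape)

        S = np.cov(x, rowvar=False)                  # data covariance
        Sd = np.cov(np.diff(x, axis=0), rowvar=False)  # difference covariance

        eigvals, W = eigh(Sd, S)   # generalized eigenproblem, ascending order
        maf1 = x @ W[:, 0]         # most autocorrelated linear combination
        print(eigvals[:3])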

  15. Regularized maximum correntropy machine

    KAUST Repository

    Wang, Jim Jing-Yan; Wang, Yunji; Jing, Bing-Yi; Gao, Xin

    2015-01-01

    In this paper we investigate the usage of the regularized correntropy framework for learning classifiers from noisy labels. The class label predictors learned by minimizing traditional loss functions are sensitive to the noisy and outlying labels of training samples, because the traditional loss functions are applied equally to all the samples. To solve this problem, we propose to learn the class label predictors by maximizing the correntropy between the predicted labels and the true labels of the training samples, under the regularized Maximum Correntropy Criterion (MCC) framework. Moreover, we regularize the predictor parameter to control the complexity of the predictor. The learning problem is formulated by an objective function considering the parameter regularization and MCC simultaneously. By optimizing the objective function alternately, we develop a novel predictor learning algorithm. The experiments on two challenging pattern classification tasks show that it significantly outperforms the machines with traditional loss functions.
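
    As an illustration of the regularized MCC idea (a sketch for a linear predictor, not the authors' alternating algorithm): maximize sum_i exp(-(y_i - w'x_i)^2 / (2 sigma^2)) - c ||w||^2 by gradient ascent, so grossly mislabeled samples get exponentially small influence. Data, kernel width and step size are assumptions for the demo.

        import numpy as np

        rng = np.random.default_rng(2)
        X = rng.standard_normal((200, 3))
        w_true = np.array([1.0, -2.0, 0.5])
        y = X @ w_true + 0.1 * rng.standard_normal(200)
        y[:10] += 8.0                                  # corrupt a few labels

        def mcc_fit(X, y, sigma=1.0, c=1e-3, lr=0.5, epochs=500):
            w = np.linalg.lstsq(X, y, rcond=None)[0]   # least-squares warm start
            for _ in range(epochs):
                r = y - X @ w                          # residuals
                k = np.exp(-r**2 / (2 * sigma**2))     # correntropy weights,
                                                       # ~0 for outliers
                grad = X.T @ (k * r) / sigma**2 - 2 * c * w
                w += lr * grad / len(y)                # gradient ascent step
            return w

        print(mcc_fit(X, y))   # close to w_true despite the corrupted labels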

  17. Safety Principles

    Directory of Open Access Journals (Sweden)

    V. A. Grinenko

    2011-06-01

    Full Text Available The material in the article is arranged so that the reader can form a complete picture of the concept of "safety", its intrinsic characteristics, and the possibilities for its formalization. Principles and possible safety strategies are considered. The article is intended for experts dealing with problems of safety.

  18. Maquet principle

    Energy Technology Data Exchange (ETDEWEB)

    Levine, R.B.; Stassi, J.; Karasick, D.

    1985-04-01

    Anterior displacement of the tibial tubercle is a well-accepted orthopedic procedure in the treatment of certain patellofemoral disorders. The radiologic appearance of surgical procedures utilizing the Maquet principle has not been described in the radiologic literature. Familiarity with the physiologic and biomechanical basis for the procedure and its postoperative appearance is necessary for appropriate roentgenographic evaluation and the radiographic recognition of complications.

  19. Cosmological principle

    International Nuclear Information System (INIS)

    Wesson, P.S.

    1979-01-01

    The Cosmological Principle states: the universe looks the same to all observers regardless of where they are located. To most astronomers today the Cosmological Principle means the universe looks the same to all observers because the density of galaxies is the same in all places. A new Cosmological Principle is proposed. It is called the Dimensional Cosmological Principle. It uses the properties of matter in the universe: density (ρ), pressure (p), and mass (m) within some region of space of length (l). The laws of physics require incorporation of constants for gravity (G) and the speed of light (c). After combining the six parameters into dimensionless numbers, the best choices are: 8πGl²ρ/c², 8πGl²p/c⁴, and 2Gm/(c²l) (the Schwarzschild factor). The Dimensional Cosmological Principle came about because old ideas conflicted with the rapidly-growing body of observational evidence indicating that galaxies in the universe have a clumpy rather than uniform distribution.

  20. The principle of finiteness – a guideline for physical laws

    International Nuclear Information System (INIS)

    Sternlieb, Abraham

    2013-01-01

    I propose a new principle in physics: the principle of finiteness (FP). It stems from the definition of physics as a science that deals with measurable dimensional physical quantities. Since measurement results, including their errors, are always finite, FP postulates that the mathematical formulation of legitimate laws in physics should prevent exactly zero or infinite solutions. I propose finiteness as a postulate, as opposed to a statement whose validity has to be corroborated by, or derived theoretically or experimentally from, other facts, theories or principles. Some consequences of FP are discussed, first in general, and then more specifically in the fields of special relativity, quantum mechanics, and quantum gravity. The corrected Lorentz transformations include an additional translation term depending on the minimum length epsilon. The relativistic gamma is replaced by a corrected gamma, which is finite for v=c. To comply with FP, physical laws should include the relevant extremum finite values in their mathematical formulation. An important prediction of FP is that there is a maximum attainable relativistic mass/energy which is the same for all subatomic particles, meaning that there is a maximum theoretical value for cosmic ray energy. The Generalized Uncertainty Principle required by Quantum Gravity is actually a necessary consequence of FP at the Planck scale. Therefore, FP may possibly contribute to the axiomatic foundation of Quantum Gravity.

  1. Solar maximum mission

    International Nuclear Information System (INIS)

    Ryan, J.

    1981-01-01

    By understanding the sun, astrophysicists hope to expand this knowledge to understanding other stars. To study the sun, NASA launched a satellite on February 14, 1980. The project is named the Solar Maximum Mission (SMM). The satellite conducted detailed observations of the sun in collaboration with other satellites and ground-based optical and radio observations until its failure 10 months into the mission. The main objective of the SMM was to investigate one aspect of solar activity: solar flares. A brief description of the flare mechanism is given. The SMM satellite was valuable in providing information on where and how a solar flare occurs. A sequence of photographs of a solar flare taken from SMM satellite shows how a solar flare develops in a particular layer of the solar atmosphere. Two flares especially suitable for detailed observations by a joint effort occurred on April 30 and May 21 of 1980. These flares and observations of the flares are discussed. Also discussed are significant discoveries made by individual experiments

  2. Dynamical response of the Galileo Galilei on the ground rotor to test the equivalence principle: Theory, simulation, and experiment. II. The rejection of common mode forces

    International Nuclear Information System (INIS)

    Comandi, G.L.; Toncelli, R.; Chiofalo, M.L.; Bramanti, D.; Nobili, A.M.

    2006-01-01

    'Galileo Galilei on the ground' (GGG) is a fast rotating differential accelerometer designed to test the equivalence principle (EP). Its sensitivity to differential effects, such as the effect of an EP violation, depends crucially on the capability of the accelerometer to reject all effects acting in common mode. By applying the theoretical and simulation methods reported in Part I of this work, and tested therein against experimental data, we predict the occurrence of an enhanced common mode rejection of the GGG accelerometer. We demonstrate that the best rejection of common mode disturbances can be tuned in a controlled way by varying the spin frequency of the GGG rotor

  3. First principles based transport theory. Report on the IAEA technical committee meeting held at Kloster Seeon, Germany, 21-23 June 1999

    International Nuclear Information System (INIS)

    Biskamp, D.; Nuehrenberg, J.; Diamond, P.H.; Garbet, X.; Lin, Z.; Rogers, R.N.

    2000-01-01

    This IAEA Technical Committee Meeting on plasma transport theory was organized jointly by the Max-Planck-Institute for Plasma Physics, Garching, and the IAEA, Vienna. It took place on 21-23 June 1999 in Kloster Seeon, Germany. The topics were: 1. Turbulent transport in the tokamak core plasma; 2. Turbulence suppression, shear amplification and transport bifurcation dynamics; 3. Turbulent transport in the tokamak edge plasma; 4. Global aspects of turbulent transport in tokamak plasmas; 5. Neoclassical transport, in particular in stellarators

  4. Applying Psychosocial Theories of Terrorism to the Radicalization Process: A Mapping of De La Corte’s Seven Principles to Homegrown Radicals

    Science.gov (United States)

    2011-06-01

    tradition continues unabated in mosques today (Joseph & Najmabadi, 2005). To note this fact is by no means to suggest that gender segregation is unique...Raja Dahir because there came to him news of a Muslim women who was raped!!! and today our beloved Prophet (Katimun Nabieen Mohammad al-Ameen) PBUH... Feminism meets queer theory. Bloomington, IN: Indiana University Press. White, J. R. (2011). Terrorism and homeland security. New York: Cengage

  5. Maximum entropy principle for transportation

    International Nuclear Information System (INIS)

    Bilich, F.; Da Silva, R.

    2008-01-01

    In this work we deal with modeling of the transportation phenomenon for use in the transportation planning process and policy-impact studies. The model developed is based on the dependence concept, i.e., the notion that the probability of a trip starting at origin i is dependent on the probability of a trip ending at destination j given that the factors (such as travel time, cost, etc.) which affect travel between origin i and destination j assume some specific values. The derivation of the solution of the model employs the maximum entropy principle combining a priori multinomial distribution with a trip utility concept. This model is utilized to forecast trip distributions under a variety of policy changes and scenarios. The dependence coefficients are obtained from a regression equation where the functional form is derived based on conditional probability and perception of factors from experimental psychology. The dependence coefficients encode all the information that was previously encoded in the form of constraints. In addition, the dependence coefficients encode information that cannot be expressed in the form of constraints for practical reasons, namely, computational tractability. The equivalence between the standard formulation (i.e., objective function with constraints) and the dependence formulation (i.e., without constraints) is demonstrated. The parameters of the dependence-based trip-distribution model are estimated, and the model is also validated using commercial air travel data in the U.S. In addition, policy impact analyses (such as allowance of supersonic flights inside the U.S. and user surcharge at noise-impacted airports) on air travel are performed.

  6. Iron on GaN(0001) pseudo-1×1 (1+1/12) investigated by scanning tunneling microscopy and first-principles theory

    Energy Technology Data Exchange (ETDEWEB)

    Lin, Wenzhi; Mandru, Andrada-Oana; Smith, Arthur R., E-mail: smitha2@ohio.edu [Department of Physics and Astronomy, Nanoscale and Quantum Phenomena Institute, Ohio University, Athens, Ohio 45701 (United States); Takeuchi, Noboru [Centro de Nanociencias y Nanotecnologia, Universidad Nacional Autonoma de Mexico Apartado Postal 14, Ensenada Baja California, Codigo Postal 22800 (Mexico); Al-Brithen, Hamad A. H. [Physics and Astronomy Department, King Abdulah Institute for Nanotechnology, King Saud University, Riyadh, Saudi Arabia, and National Center for Nano Technology, KACST, Riyadh (Saudi Arabia)

    2014-04-28

    We have investigated sub-monolayer iron deposition on atomically smooth GaN(0001) pseudo-1×1 (1+1/12). The iron is deposited at a substrate temperature of 360 °C, upon which reflection high energy electron diffraction shows a transformation to a √3×√3-R30° pattern. After cooling to room temperature, the pattern transforms to a 6 × 6, and scanning tunneling microscopy reveals 6 × 6 reconstructed regions decorating the GaN step edges. First-principles theoretical calculations have been carried out for a range of possible structural models, one of the best being a Ga dimer model consisting of 2/9 monolayer of Fe incorporated into 7/3 monolayer of Ga in a relaxed but distorted structure.

  7. Maximum entropy and Bayesian methods

    International Nuclear Information System (INIS)

    Smith, C.R.; Erickson, G.J.; Neudorfer, P.O.

    1992-01-01

    Bayesian probability theory and Maximum Entropy methods are at the core of a new view of scientific inference. These 'new' ideas, along with the revolution in computational methods afforded by modern computers, allow astronomers, electrical engineers, image processors of any type, NMR chemists and physicists, and anyone at all who has to deal with incomplete and noisy data, to take advantage of methods that, in the past, have been applied only in some areas of theoretical physics. The workshops of the title have been the focus of a group of researchers from many different fields, and this diversity is evident in this book. There are tutorial and theoretical papers, and applications in a very wide variety of fields. Almost any instance of dealing with incomplete and noisy data can be usefully treated by these methods, and many areas of theoretical research are being enhanced by the thoughtful application of Bayes' theorem. Contributions contained in this volume present a state-of-the-art overview that will be influential and useful for many years to come

  8. The principles of analysis of competitiveness and control schemes in transport services

    Directory of Open Access Journals (Sweden)

    A. Žvirblis

    2003-04-01

    Full Text Available Under conditions of constantly growing competition, transportation companies are faced with theoretical and practical problems associated with the quality and stability of transport services, competitiveness on the market and marketing problems. Road transport services are considered in terms of value analysis, while the assessment of their competitiveness is based on the Pontryagin maximum principle. A model for transport risk analysis is constructed, taking into account the principles of correlation and co-variation of transport services. Formalized models of automated control of services in the system of marketing, allowing the analysis of stability and other parameters to be made within the framework of automatic control theory, are offered.

  9. Zymography Principles.

    Science.gov (United States)

    Wilkesman, Jeff; Kurz, Liliana

    2017-01-01

    Zymography, the detection, identification, and even quantification of enzyme activity fractionated by gel electrophoresis, has received increasing attention in recent years, as revealed by the number of articles published. A number of enzymes are routinely detected by zymography, especially ones of clinical interest. This introductory chapter reviews the major principles behind zymography. New advances in this method are focused mainly on two-dimensional zymography and transfer zymography, as will be explained in the rest of the chapters. Some general considerations for performing the experiments are outlined, as well as the major troubleshooting and safety issues necessary for correct execution of the electrophoresis.

  10. Basic principles

    International Nuclear Information System (INIS)

    Wilson, P.D.

    1996-01-01

    Some basic explanations are given of the principles underlying the nuclear fuel cycle, starting with the physics of atomic and nuclear structure and continuing with nuclear energy and reactors, fuel and waste management and finally a discussion of economics and the future. An important aspect of the fuel cycle concerns the possibility of ''closing the back end'', i.e. reprocessing the waste or unused fuel in order to re-use it in reactors of various kinds. The alternative, the ''once-through'' cycle, discards the discharged fuel completely. An interim measure involves the prolonged storage of highly radioactive waste fuel. (UK)

  11. PRINCIPLES OF CONTENT FORMATION EDUCATIONAL ELECTRONIC RESOURCE

    Directory of Open Access Journals (Sweden)

    О Ю Заславская

    2017-12-01

    Full Text Available The article considers the possibilities of modern information and communication technologies for the design of electronic educational resources. The conceptual basis of an open educational multimedia system is the modular architecture of the electronic educational resource, in which the content of each training module can be implemented in several variants: obtaining information, practical exercises, and control. The regularities of the teaching process in modern pedagogical theory, both general and specific, are considered, and the principles for forming the content of instruction at different levels are defined on the basis of these regularities. On the basis of this analysis, the principles for forming the electronic educational resource are determined, taking into account the general and didactic patterns of teaching. As principles for forming the educational material of the information modules, the article considers: the principle of methodological orientation, the principle of general scientific orientation, the principle of systemic nature, the principle of fundamentalization, the principle of accounting for intersubject connections, and the principle of minimization. The principles for forming the practical-exercise modules include: the principle of systematic and dosed consistency, the principle of rational use of study time, and the principle of accessibility. The principles for forming the control module can be: the principle of the operationalization of goals and the principle of unified identification diagnosis.

  12. Two conceptions of legal principles

    Directory of Open Access Journals (Sweden)

    Spaić Bojan

    2017-01-01

    Full Text Available The paper discusses the classical understanding of legal principles as the most general norms of a legal order, confronting it with Dworkin's and Alexy's understanding of legal principles as prima facie, unconditional commands. The analysis shows that the common, classical conception brings into question the status of legal principles as norms by disregarding their usefulness in judicial reasoning, while, conversely, the latter has significant import for legal practice and consequently for legal dogmatics. It is argued that the heuristic fruitfulness of understanding principles as optimization commands thus becomes apparent. When we understand the relation of principles to the idea of proportionality, as the specific mode of their application, which is different from the subsumptive mode of applying rules, the theory of legal principles advanced by Dworkin and Alexy appears to be descriptively better than others, though not without its flaws.

  13. Quantum principles in field interactions

    International Nuclear Information System (INIS)

    Shirkov, D.V.

    1986-01-01

    The concept of a quantum principle is introduced as a principle whose formulation is based on specific quantum ideas and notions. We consider three such principles, viz. those of quantizability, local gauge symmetry, and supersymmetry, and their role in the development of quantum field theory (QFT). Concerning the first of these, we analyze the formal aspects and physical content of the renormalization procedure in QFT and its relation to ultraviolet divergences and the renormalization group. The quantizability principle is formulated as an existence condition for a self-consistent quantum version of a given mechanism of field interaction. It is shown that the successive (from a historical point of view) use of these quantum principles places ever larger limitations on the possible forms of field interactions

  14. Kinetic theory of weakly ionized dilute gas of hydrogen-like atoms of the first principles of quantum statistics and dispersion laws of eigenwaves

    Science.gov (United States)

    Slyusarenko, Yurii V.; Sliusarenko, Oleksii Yu.

    2017-11-01

    We develop a microscopic approach to the construction of the kinetic theory of a dilute weakly ionized gas of hydrogen-like atoms. The approach is based on the statements of the second quantization method in the presence of bound states of particles. The basis for the derivation of the kinetic equations is the method of reduced description of relaxation processes. Within the framework of the proposed approach, a coupled system of kinetic equations for the Wigner distribution functions of free oppositely charged fermions of two kinds (electrons and cores) and their bound states, hydrogen-like atoms, is obtained. The kinetic equations are used to study the spectra of elementary excitations in the system when all its components are non-degenerate. It is shown that in such a system, in addition to the typical plasma waves, there are longitudinal waves of matter polarization and transverse ones with a behavior characteristic of plasmon polaritons. Expressions for the dependence of the frequencies and Landau damping coefficients on the wave vector are obtained for all branches of the oscillations discovered. A numerical evaluation of the elementary perturbation parameters in the system is given for the example of a weakly ionized dilute gas of ²³Na atoms, using the D2-line characteristics of the sodium atom. We note the possibility of using the results of the developed theory to describe the properties of a Bose condensate of photons in a diluted weakly ionized gas of hydrogen-like atoms.

  15. Principlism and its alleged competitors.

    Science.gov (United States)

    Beauchamp, Tom L

    1995-09-01

    Principles that provide general normative frameworks in bioethics have been criticized since the late 1980s, when several different methods and types of moral philosophy began to be proposed as alternatives or substitutes. Several accounts have emerged in recent years, including: (1) Impartial Rule Theory (supported in this issue by K. Danner Clouser), (2) Casuistry (supported in this issue by Albert Jonsen), and (3) Virtue Ethics (supported in this issue by Edmund D. Pellegrino). Although often presented as rival methods or theories, these approaches are consistent with and should not be considered adversaries of a principle-based account.

  16. Gauge theories

    International Nuclear Information System (INIS)

    Kenyon, I.R.

    1986-01-01

    Modern theories of the interactions between fundamental particles are all gauge theories. In the case of gravitation, application of this principle to space-time leads to Einstein's theory of general relativity. All the other interactions involve the application of the gauge principle to internal spaces. Electromagnetism serves to introduce the idea of a gauge field, in this case the electromagnetic field. The next example, the strong force, shows unique features at long and short range which have their origin in the self-coupling of the gauge fields. Finally the unification of the description of the superficially dissimilar electromagnetic and weak nuclear forces completes the picture of successes of the gauge principle. (author)

  17. Gesture & Principle

    DEFF Research Database (Denmark)

    Hvejsel, Marie Frier

    2018-01-01

    … installations, and equipment threaten to undermine the primary spatial purpose and quality of architecture as a sensuous enrichment of everyday life. This calls for continuous critical positioning within the field as well as a systematic method for acquiring knowledge about an architectural problem, whether as a student, professional, or architectural researcher. It is the hypothesis of this paper that, in its initial questioning of the task of the Greek tekton (as a masterbuilder) capable of bringing together aesthetics and technique in a given context, tectonic theory has unique potential in this matter. This potential is investigated through a rereading of the development of tectonic theory in architecture, which is done in relation to the present conditions and methodological challenges facing the discipline. As a result, this paper outlines a direction for the repositioning, development, and application…

  18. Optical Properties of Gallium-Doped Zinc Oxide—A Low-Loss Plasmonic Material: First-Principles Theory and Experiment

    Directory of Open Access Journals (Sweden)

    Jongbum Kim

    2013-12-01

    Full Text Available Searching for better materials for plasmonic and metamaterial applications is an inverse design problem where theoretical studies are necessary. Using basic models of impurity doping in semiconductors, transparent conducting oxides (TCOs are identified as low-loss plasmonic materials in the near-infrared wavelength range. A more sophisticated theoretical study would help not only to improve the properties of TCOs but also to design further lower-loss materials. In this study, optical functions of one such TCO, gallium-doped zinc oxide (GZO, are studied both experimentally and by first-principles density-functional calculations. Pulsed-laser-deposited GZO films are studied by X-ray diffraction and generalized spectroscopic ellipsometry. Theoretical studies are performed by the total-energy-minimization method for the equilibrium atomic structure of GZO and random phase approximation with the quasiparticle gap correction. Plasma excitation effects are also included for optical functions. This study identifies mechanisms other than doping, such as alloying effects, that significantly influence the optical properties of GZO films. It also indicates that ultraheavy Ga doping of ZnO results in a new alloy material, rather than just degenerately doped ZnO. This work is the first step to achieve a fundamental understanding of the connection between material, structural, and optical properties of highly doped TCOs to tailor those materials for various plasmonic applications.

  19. Modeling of the Maximum Entropy Problem as an Optimal Control Problem and its Application to Pdf Estimation of Electricity Price

    Directory of Open Access Journals (Sweden)

    M. E. Haji Abadi

    2013-09-01

    Full Text Available In this paper, continuous optimal control theory is used to model and solve the maximum entropy problem for a continuous random variable. The maximum entropy principle provides a method to obtain the least-biased probability density function (Pdf) estimate. In this paper, to find a closed-form solution for the maximum entropy problem with any number of moment constraints, the entropy is considered as a functional measure and the moment constraints are considered as the state equations. Therefore, the Pdf estimation problem can be reformulated as an optimal control problem. Finally, the proposed method is applied to estimate the Pdf of the hourly electricity prices of the New England and Ontario electricity markets. The obtained results show the efficiency of the proposed method.
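
    As a minimal illustration of the underlying maximum entropy machinery (the standard Lagrange-multiplier route, not the authors' optimal-control reformulation), the sketch below recovers the maximum entropy pdf on [0, 1] subject to a single, invented mean constraint by minimizing the convex dual:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.integrate import quad

# Minimal sketch: maximum entropy pdf on [0, 1] with one mean constraint.
# The solution has the form p(x) = exp(-lambda*x)/Z; the multiplier is
# found by minimizing the dual log Z(lambda) + lambda*mu. Target mean is
# invented for illustration.
target_mean = 0.3

def dual(lmbda):
    lam = lmbda[0]
    Z, _ = quad(lambda x: np.exp(-lam * x), 0.0, 1.0)
    return np.log(Z) + lam * target_mean

res = minimize(dual, x0=[0.0])
lam = res.x[0]
Z, _ = quad(lambda x: np.exp(-lam * x), 0.0, 1.0)
mean, _ = quad(lambda x: x * np.exp(-lam * x) / Z, 0.0, 1.0)
print(f"lambda = {lam:.4f}, achieved mean = {mean:.4f}")  # mean ~ 0.3
```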

  20. The interaction of psychological and physiological homeostatic drives and role of general control principles in the regulation of physiological systems, exercise and the fatigue process - The Integrative Governor theory.

    Science.gov (United States)

    St Clair Gibson, A; Swart, J; Tucker, R

    2018-02-01

    Either central (brain) or peripheral (body physiological system) control mechanisms, or a combination of these, have been championed in the last few decades in the field of Exercise Sciences as the way physiological activity and fatigue processes are regulated. In this review, we suggest that the concepts of 'central' and 'peripheral' mechanisms are both artificial constructs that have 'straight-jacketed' research in the field, and rather that competition between psychological and physiological homeostatic drives is central to the regulation of both, and that governing principles, rather than distinct physical processes, underpin all physical system and exercise regulation. As part of the Integrative Governor theory we develop in this review, we suggest that both psychological and physiological drives and requirements are underpinned by homeostatic principles, and that the relative activity of each is regulated by dynamic negative feedback activity, as the fundamental general operational controller. Because of this competitive, dynamic interplay, we propose that the activity in all systems will oscillate, that these oscillations create information, and that comparison of this oscillatory information with prior information, current activity, or activity templates creates efferent responses that change the activity in the different systems in a similarly dynamic manner. Changes in a particular system are always the result of perturbations occurring outside the system itself, the behavioural causative 'history' of this external activity will be evident in the pattern of the oscillations, and awareness of change occurs as a result of unexpected rather than planned change in physiological activity or psychological state.

  1. Complex Correspondence Principle

    International Nuclear Information System (INIS)

    Bender, Carl M.; Meisinger, Peter N.; Hook, Daniel W.; Wang Qinghai

    2010-01-01

    Quantum mechanics and classical mechanics are distinctly different theories, but the correspondence principle states that quantum particles behave classically in the limit of high quantum number. In recent years much research has been done on extending both quantum and classical mechanics into the complex domain. These complex extensions continue to exhibit a correspondence, and this correspondence becomes more pronounced in the complex domain. The association between complex quantum mechanics and complex classical mechanics is subtle and demonstrating this relationship requires the use of asymptotics beyond all orders.

  2. Principles of thermodynamics

    CERN Document Server

    Kaufman, Myron

    2002-01-01

    Ideal for one- or two-semester courses that assume elementary knowledge of calculus, this text presents the fundamental concepts of thermodynamics and applies them to problems dealing with properties of materials, phase transformations, chemical reactions, solutions and surfaces. The author utilizes principles of statistical mechanics to illustrate key concepts from a microscopic perspective, as well as to develop equations of kinetic theory. The book provides end-of-chapter question and problem sets, some using Mathcad™ and Mathematica™; a useful glossary containing important symbols, definitions, and units; and appendices covering multivariable calculus and valuable numerical methods.

  3. Prevalence of Principles of Piaget’s Theory Among 4-7-year-old Children and their Correlation with IQ

    Science.gov (United States)

    Marwaha, Sugandha; Vashist, Binny

    2017-01-01

    Introduction Cognitive development is a major area of human development and was extensively studied by Jean Piaget. He proposed that the development of intellectual abilities occurs in a series of relatively distinct stages and that a child’s way of thinking and viewing the world is different at different stages. Aim To assess Piaget’s principles of the intuitive stage of the preoperational period among 4-7-year-old children relative to their intelligence quotient (IQ). Materials and Methods Various characteristics described by Jean Piaget as specific to the age group of 4-7 years, along with those of the preceding (preconceptual stage of the preoperational period) and succeeding (concrete operations) periods, were analysed using various experiments in 300 children. These characteristics included the concepts of perceptual and cognitive egocentrism, centration and reversibility. IQ of the children was measured using the Seguin form board test. Inferential statistics were performed using the Chi-square test and Kruskal-Wallis test. The level of statistical significance was set at 0.05. Results The prevalence of perceptual and cognitive egocentrism was 10.7% and 31.7% based on the experiments and 33% based on the interview question. Centration was present in 96.3% of the children. About 99% of children lacked the concept of reversibility according to the clay experiment, while 97.7% possessed this concept according to the interview question. The mean IQ score of children who possessed perceptual egocentrism, cognitive egocentrism and egocentrism in a dental setting was significantly higher than that of those who lacked these characteristics. Conclusion Perceptual egocentrism had almost disappeared and the prevalence of cognitive egocentrism decreased with increase in age. Centration and lack of reversibility were appreciated in most of the children. There was a gradual reduction in the prevalence of these characters with increasing age. Mean IQ score of children who possessed perceptual egocentrism

  4. Dynamical response of the Galileo Galilei on the ground rotor to test the equivalence principle: Theory, simulation, and experiment. I. The normal modes

    International Nuclear Information System (INIS)

    Comandi, G.L.; Chiofalo, M.L.; Toncelli, R.; Bramanti, D.; Polacco, E.; Nobili, A.M.

    2006-01-01

    Recent theoretical work suggests that a violation of the equivalence principle might be revealed in a measurement of the fractional differential acceleration η between two test bodies (of different compositions) falling in the gravitational field of a source mass, if the measurement is made at the level of η ≅ 10⁻¹³ or better. This being within the reach of ground-based experiments gives them a new impetus. However, while slowly rotating torsion balances in ground laboratories are close to reaching this level, only an experiment performed in a low orbit around the Earth is likely to provide a much better accuracy. We report on the progress made with the 'Galileo Galilei on the ground' (GGG) experiment, which aims to compete with torsion balances using an instrument design also capable of being converted into a much higher sensitivity space test. In the present and following articles (Part I and Part II), we demonstrate that the dynamical response of the GGG differential accelerometer set into supercritical rotation, in particular its normal modes (Part I) and rejection of common mode effects (Part II), can be predicted by means of a simple but effective model that embodies all the relevant physics. Analytical solutions are obtained under special limits, which provide the theoretical understanding. A simulation environment is set up, obtaining quantitative agreement with the available experimental data on the frequencies of the normal modes and on the whirling behavior. This is a needed and reliable tool for controlling and separating perturbative effects from the expected signal, as well as for planning the optimization of the apparatus

  5. Minimal length, Friedmann equations and maximum density

    Energy Technology Data Exchange (ETDEWEB)

    Awad, Adel [Center for Theoretical Physics, British University of Egypt,Sherouk City 11837, P.O. Box 43 (Egypt); Department of Physics, Faculty of Science, Ain Shams University,Cairo, 11566 (Egypt); Ali, Ahmed Farag [Centre for Fundamental Physics, Zewail City of Science and Technology,Sheikh Zayed, 12588, Giza (Egypt); Department of Physics, Faculty of Science, Benha University,Benha, 13518 (Egypt)

    2014-06-16

    Inspired by Jacobson’s thermodynamic approach, Cai et al. have shown the emergence of the Friedmann equations from the first law of thermodynamics. We extend the Akbar-Cai derivation http://dx.doi.org/10.1103/PhysRevD.75.084003 of the Friedmann equations to accommodate a general entropy-area law. Studying the resulting Friedmann equations using a specific entropy-area law, which is motivated by the generalized uncertainty principle (GUP), reveals the existence of a maximum energy density close to the Planck density. Allowing for a general continuous pressure p(ρ,a) leads to bounded curvature invariants and a general nonsingular evolution. In this case, the maximum energy density is reached in a finite time and there is no cosmological evolution beyond this point, which leaves the big bang singularity inaccessible from a spacetime perspective. The existence of a maximum energy density and a general nonsingular evolution is independent of the equation of state and the spatial curvature k. As an example we study the evolution of the equation of state p=ωρ through its phase-space diagram to show the existence of a maximum energy density which is reachable in a finite time.

  6. Principles of modern physics

    CERN Document Server

    Saxena, A K

    2014-01-01

    Principles of Modern Physics, divided into twenty-one chapters, begins with quantum ideas followed by discussions on special relativity, atomic structure, basic quantum mechanics, the hydrogen atom (and the Schrodinger equation) and the periodic table, the three statistical distributions, X-rays, physics of solids, imperfections in crystals, magnetic properties of materials, superconductivity, the Zeeman, Stark and Paschen-Back effects, lasers, nuclear physics (Yukawa's meson theory and various nuclear models), radioactivity and nuclear reactions, nuclear fission, fusion and plasma, particle accelerators and detectors, the universe, elementary particles (classification, the eightfold way and the quark model, the standard model and fundamental interactions), cosmic rays, the deuteron problem in nuclear physics, and the cathode ray oscilloscope. NEW TO THE FOURTH EDITION: the CO2 laser; the theory of magnetic moments on the basis of the shell model; geological dating; laser-induced fusion and the laser fusion reactor; Hawking radiation; the cosmological red ...

  7. First-principles theory of short-range order in size-mismatched metal alloys: Cu-Au, Cu-Ag, and Ni-Au

    International Nuclear Information System (INIS)

    Wolverton, C.; Ozolins, V.; Zunger, A.

    1998-01-01

    We describe a first-principles technique for calculating the short-range order (SRO) in disordered alloys, even in the presence of large anharmonic atomic relaxations. The technique is applied to several alloys possessing large size mismatch: Cu-Au, Cu-Ag, Ni-Au, and Cu-Pd. We find the following: (i) The calculated SRO in Cu-Au alloys peaks at (or near) the ⟨100⟩ point for all compositions studied, in agreement with diffuse scattering measurements. (ii) A fourfold splitting of the X-point SRO exists in both Cu0.75Au0.25 and Cu0.70Pd0.30, although qualitative differences in the calculated energetics for these two alloys demonstrate that the splitting in Cu0.70Pd0.30 may be accounted for by T=0 K energetics, while T≠0 K configurational entropy is necessary to account for the splitting in Cu0.75Au0.25. Cu0.75Au0.25 shows a significant temperature dependence of the splitting, in agreement with recent in situ measurements, while the splitting in Cu0.70Pd0.30 is predicted to have a much smaller temperature dependence. (iii) Although no measurements exist, the SRO of Cu-Ag alloys is predicted to be of clustering type with peaks at the ⟨000⟩ point. Streaking of the SRO peaks in the ⟨100⟩ and ⟨1 1/2 0⟩ directions for Ag- and Cu-rich compositions, respectively, is correlated with the elastically soft directions for these compositions. (iv) Even though Ni-Au phase separates at low temperatures, the calculated SRO pattern in Ni0.4Au0.6, like the measured data, shows a peak along the ⟨ζ00⟩ direction, away from the typical clustering-type ⟨000⟩ point. (v) The explicit effect of atomic relaxation on SRO is investigated and it is found that atomic relaxation can produce significant qualitative changes in the SRO pattern, changing the pattern from ordering to clustering type, as in the case of Cu-Ag.

  8. Teorias políticas medievais e a construção do conceito de unidade (Medieval political theories and the construction of the concept of unity)

    Directory of Open Access Journals (Sweden)

    Fátima Regina Fernandes

    2009-01-01

    Full Text Available In this work we analyze the meaning of medieval theoretical constructions in defining the limits of royal authority, and their function in the institutionalization of the monarchy. In this light, we highlight the reception of classical values and traditions in late-medieval doctrinal treatises, and the updating mediation performed by the agents who produced these works. These were initiatives aimed at eliminating the potential dangers linked to an excessive concentration of power, in a debate between theories defending a collegiate basis of power and those favoring monarchic centralization.

  9. Thermodynamic and redox properties of graphene oxides for lithium-ion battery applications: a first principles density functional theory modeling approach.

    Science.gov (United States)

    Kim, Sunghee; Kim, Ki Chul; Lee, Seung Woo; Jang, Seung Soon

    2016-07-27

    Understanding the thermodynamic stability and redox properties of oxygen functional groups on graphene is critical to the systematic design of stable graphene-based positive electrode materials with high potential for lithium-ion battery applications. In this work, we study the thermodynamic and redox properties of graphene functionalized with carbonyl and hydroxyl groups, and the evolution of these properties with the number, type and distribution of functional groups, by employing the density functional theory method. It is found that the redox potential of the functionalized graphene is sensitive to the type, number, and distribution of oxygen functional groups. First, the carbonyl group induces a higher redox potential than the hydroxyl group. Second, a larger number of carbonyl groups results in a higher redox potential. Lastly, a locally concentrated distribution of carbonyl groups yields a higher redox potential than a uniformly dispersed distribution. In contrast, the distribution of the hydroxyl group does not affect the redox potential significantly. Thermodynamic investigation demonstrates that the incorporation of carbonyl groups at the edge of graphene is a promising strategy for designing thermodynamically stable positive electrode materials with high redox potentials.

  10. Interpolation theory

    CERN Document Server

    Lunardi, Alessandra

    2018-01-01

    This book is the third edition of the 1999 lecture notes of the courses on interpolation theory that the author delivered at the Scuola Normale in 1998 and 1999. In the mathematical literature there are many good books on the subject, but none of them is very elementary, and in many cases the basic principles are hidden below great generality. In this book the principles of interpolation theory are illustrated aiming at simplification rather than at generality. The abstract theory is reduced as far as possible, and many examples and applications are given, especially to operator theory and to regularity in partial differential equations. Moreover the treatment is self-contained, the only prerequisite being the knowledge of basic functional analysis.

  11. Design principles and theory of paramagnetic fluorine-labelled lanthanide complexes as probes for (19)F magnetic resonance: a proof-of-concept study.

    Science.gov (United States)

    Chalmers, Kirsten H; De Luca, Elena; Hogg, Naomi H M; Kenwright, Alan M; Kuprov, Ilya; Parker, David; Botta, Mauro; Wilson, J Ian; Blamire, Andrew M

    2010-01-04

    The synthesis and spectroscopic properties of a series of CF₃-labelled lanthanide(III) complexes (Ln = Gd, Tb, Dy, Ho, Er, Tm) with amide-substituted ligands based on 1,4,7,10-tetraazacyclododecane are described. The theoretical contributions of the ¹⁹F magnetic relaxation processes in these systems are critically assessed and selected volumetric plots are presented. These plots allow an accurate estimation of the increase in the rates of longitudinal and transverse relaxation as a function of the distance between the Ln(III) ion and the fluorine nucleus, the applied magnetic field, and the rotational correlation time of the complex, for a given Ln(III) ion. Selected complexes exhibit pH-dependent chemical shift behaviour, and a pKa of 7.0 was determined in one example based on the holmium complex of an ortho-cyano DO3A-monoamide ligand, which allowed the pH to be assessed by measuring the difference in chemical shift (varying by over 14 ppm) between two ¹⁹F resonances. Relaxation analyses of variable-temperature and variable-field ¹⁹F, ¹⁷O and ¹H NMR spectroscopy experiments are reported, aided by identification of salient low-energy conformers using density functional theory. The study of fluorine relaxation rates over a field range of 4.7 to 16.5 T allowed precise computation of the distance between the Ln(III) ion and the CF₃ reporter group by using global fitting methods. The sensitivity benefits of using such paramagnetic fluorinated probes in ¹⁹F NMR spectroscopic studies are quantified in preliminary spectroscopic and imaging experiments with respect to a diamagnetic yttrium(III) analogue.

  12. Measuring and managing radiologist workload: application of lean and constraint theories and production planning principles to planning radiology services in a major tertiary hospital.

    Science.gov (United States)

    MacDonald, Sharyn L S; Cowan, Ian A; Floyd, Richard; Mackintosh, Stuart; Graham, Rob; Jenkins, Emma; Hamilton, Richard

    2013-10-01

    We describe how techniques traditionally used in the manufacturing industry (lean management, the theory of constraints and production planning) can be applied to planning radiology services to reduce the impact of constraints such as limited radiologist hours, and to subsequently reduce delays in accessing imaging and in report turnaround. Targets for imaging and reporting were set aligned with clinical needs. Capacity was quantified for each modality and for radiologists and recorded in activity lists. Demand was quantified and forecasting commenced based on historical referral rates. To try to mitigate the impact of radiologists as a constraint, lean management processes were applied to radiologist workflows. A production planning process was implemented. Outpatient waiting times to access imaging steadily decreased. Report turnaround times improved, with the percentage of overnight/on-call reports completed by the 10:30 target time increasing from approximately 30% to 80-90%. The percentage of emergency and inpatient reports completed within one hour increased from approximately 15% to approximately 50%, with 80-90% available within 4 hours. The number of unreported cases on the radiologist work-list at the end of the working day was reduced. The average weekly accuracy of demand forecasts for emergency and inpatient CT, MRI and plain film imaging was 91%, 83% and 92% respectively. For outpatient CT, MRI and plain film imaging the accuracy was 60%, 55% and 77% respectively. Reliable routine weekly and medium- to longer-term service planning is now possible. Tools from industry can be successfully applied to diagnostic imaging services to improve performance. They allow an accurate understanding of the demands on a service and its capacity, and can reliably predict the impact of changes in demand or capacity on service delivery. © 2013 The Royal Australian and New Zealand College of Radiologists.

  13. First-principles calculation of the transition temperature Tc for HgBa2CuO4+δ high-temperature superconductors via dipolon theory

    International Nuclear Information System (INIS)

    Downs, D.; Sharma, R.R.

    1995-01-01

    First numerical evaluations of T c for oxygenated and argon-reduced single-layered HgBa 2 CuO 4+δ superconductors have been presented. Our calculations are based on the dipolon theory and are found to provide an explanation for the occurrence of superconductivity in single-layered high-T c superconductors. Relevant expressions useful for the evaluation of T c have been given. Since the polarizabilities of the ions are not known exactly for the present systems we have performed calculations making use of Pauling's as well as Tessman, Kahn, and Shockley's polarizabilities in order to estimate the uncertainties in the calculated values of T c associated with uncertainties in the polarizabilities. The effective charges on the ions required for the evaluation of dipoles and dipolon frequencies have been obtained by means of the bond-valence sums. Without fitting with any parameters, our calculations yield T c values equal to 80±21 K for the oxygenated and 50±27 K for the argon-reduced HgBa 2 CuO 4+δ superconductors, in agreement with the corresponding experimental values 95 and 59 K. The uncertainties in the calculated values of T c arise because of the uncertainties in various physical parameters (including polarizabilities) used and due to errors involved in the calculations. The present results are consistent with the observed electronic Raman-scattering intensities which show anomalously broad peaks extended up to several electron volts in cuprate high-T c superconductors. Our calculated dipolon density of states predict four optical absorption peaks at about 77 cm -1 , 195 cm -1 , 1.6 eV, and 2.5 eV

  15. The Principle of Least Action

    Indian Academy of Sciences (India)

    THOLASI

    Reproduced from the book A Survey of Physical Theory (formerly titled: A Survey ... The dynamical laws for physical systems are usually expressed in the form of ... The reason for the difference in the results derived from the two principles lies ...

  16. On the Dirichlet's Box Principle

    Science.gov (United States)

    Poon, Kin-Keung; Shiu, Wai-Chee

    2008-01-01

    In this note, we focus on several applications of Dirichlet's box principle in discrete mathematics and number theory lessons. In addition, the main result is an innovative game on a triangular board developed by the authors. The game has been used in teaching and learning mathematics in Discrete Mathematics and some high schools in…

  17. Principle of minimum distance in space of states as new principle in quantum physics

    International Nuclear Information System (INIS)

    Ion, D. B.; Ion, M. L. D.

    2007-01-01

    The mathematician Leonhard Euler (1707-1783) appears to have been a philosophical optimist, having written: 'Since the fabric of the universe is the most perfect and is the work of the most wise Creator, nothing whatsoever takes place in this universe in which some relation of maximum or minimum does not appear. Wherefore, there is absolutely no doubt that every effect in the universe can be explained as satisfactorily from final causes, by the aid of the method of maxima and minima, as it can from the effective causes themselves'. With this kind of optimism in mind, in the papers mentioned in this work we introduced and investigated the possibility of constructing a predictive analytic theory of the elementary particle interaction based on the principle of minimum distance in the space of quantum states (PMD-SQS). So, choosing the partial transition amplitudes as the system variational variables and the distance in the space of the quantum states as a measure of the system effectiveness, we obtained the results presented in this paper. These results proved that the principle of minimum distance in space of quantum states (PMD-SQS) can be chosen as a variational principle by which we can find the analytic expressions of the partial transition amplitudes. In this paper we present a description of hadron-hadron scattering via the PMD-SQS when the distance in space of states is minimized with two directional constraints: dσ/dΩ(±1) = fixed. Then, by using the available experimental (pion-nucleon and kaon-nucleon) phase shifts, we obtained not only consistent experimental tests of the PMD-SQS optimality, but also strong experimental evidence for new principles in hadronic physics, such as: the principle of nonextensivity conjugation via the Riesz-Thorin relation (1/2p + 1/2q = 1) and a new principle of limited uncertainty in nonextensive quantum physics. The strong experimental evidence obtained here for the nonextensive statistical behavior of the [J,

  18. Maximum entropy production: Can it be used to constrain conceptual hydrological models?

    Science.gov (United States)

    M.C. Westhoff; E. Zehe

    2013-01-01

    In recent years, optimality principles have been proposed to constrain hydrological models. The principle of maximum entropy production (MEP) is one of the proposed principles and is the subject of this study. It states that a steady-state system is organized in such a way that entropy production is maximized. Although successful applications have been reported in...

  19. Superstring theory

    International Nuclear Information System (INIS)

    Schwarz, J.H.

    1985-01-01

    Dual string theories, initially developed as phenomenological models of hadrons, now appear more promising as candidates for a unified theory of fundamental interactions. Type I superstring theory (SST I) is a ten-dimensional theory of interacting open and closed strings, with one supersymmetry, that is free from ghosts and tachyons. It requires that an SO(n) or Sp(2n) gauge group be used. A light-cone-gauge string action with space-time supersymmetry automatically incorporates the superstring restrictions and leads to the discovery of type II superstring theory (SST II). SST II is an interacting theory of closed strings only, with two D=10 supersymmetries, that is also free from ghosts and tachyons. By taking six of the spatial dimensions to form a compact space, it becomes possible to reconcile the models with our four-dimensional perception of spacetime and to define low-energy limits in which SST I reduces to N=4, D=4 super Yang-Mills theory and SST II reduces to N=8, D=4 supergravity theory. The superstring theories can be described by a light-cone-gauge action principle based on fields that are functionals of string coordinates. With this formalism any physical quantity should be calculable. There is some evidence that, unlike any conventional field theory, the superstring theories provide perturbatively renormalizable (SST I) or finite (SST II) unifications of gravity with other interactions

  20. From Entropic Dynamics to Quantum Theory

    International Nuclear Information System (INIS)

    Caticha, Ariel

    2009-01-01

    Non-relativistic quantum theory is derived from information codified into an appropriate statistical model. The basic assumption is that there is an irreducible uncertainty in the location of particles so that the configuration space is a statistical manifold. The dynamics then follows from a principle of inference, the method of Maximum Entropy. The concept of time is introduced as a convenient way to keep track of change. The resulting theory resembles both Nelson's stochastic mechanics and general relativity. The statistical manifold is a dynamical entity: its geometry determines the evolution of the probability distribution which, in its turn, reacts back and determines the evolution of the geometry. There is a new quantum version of the equivalence principle: 'osmotic' mass equals inertial mass. Mass and the phase of the wave function are explained as features of purely statistical origin.

  1. Automatic maximum entropy spectral reconstruction in NMR

    International Nuclear Information System (INIS)

    Mobli, Mehdi; Maciejewski, Mark W.; Gryk, Michael R.; Hoch, Jeffrey C.

    2007-01-01

    Developments in superconducting magnets, cryogenic probes, isotope labeling strategies, and sophisticated pulse sequences together have enabled the application, in principle, of high-resolution NMR spectroscopy to biomolecular systems approaching 1 megadalton. In practice, however, conventional approaches to NMR that utilize the fast Fourier transform, which require data collected at uniform time intervals, result in prohibitively lengthy data collection times in order to achieve the full resolution afforded by high field magnets. A variety of approaches that involve nonuniform sampling have been proposed, each utilizing a non-Fourier method of spectrum analysis. A very general non-Fourier method that is capable of utilizing data collected using any of the proposed nonuniform sampling strategies is maximum entropy reconstruction. A limiting factor in the adoption of maximum entropy reconstruction in NMR has been the need to specify non-intuitive parameters. Here we describe a fully automated system for maximum entropy reconstruction that requires no user-specified parameters. A web-accessible script generator provides the user interface to the system

  2. Maximum neutron flux at thermal nuclear reactors

    International Nuclear Information System (INIS)

    Strugar, P.

    1968-10-01

    Since research reactors are technically complex and expensive facilities, it is important to achieve savings through appropriate reactor lattice configurations. A number of papers, and practical examples of reactors with a central reflector, deal with spatial distributions of fuel elements that result in a higher neutron flux. A common disadvantage of all these solutions is that the best configuration is chosen from among anticipated spatial distributions of fuel elements; the weakness of these approaches is the lack of defined optimization criteria. The direct approach is defined as follows: determine the spatial distribution of fuel concentration from the condition of maximum neutron flux while fulfilling the thermal constraints. The problem of determining the maximum neutron flux is thus a variational problem beyond the reach of the classical calculus of variations. This variational problem has been successfully solved by applying Pontryagin's maximum principle. The optimum distribution of fuel concentration was obtained in explicit analytical form, so the spatial distribution of the neutron flux and the critical dimensions of a rather complex reactor system are calculated in a relatively simple way. In addition to the fact that the results are innovative, this approach is interesting because of the optimization procedure itself
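
    For orientation, the tool used here can be stated compactly. This is the textbook form of Pontryagin's maximum principle for a generic control problem, not the reactor-specific functional of this record:

```latex
% Textbook form of Pontryagin's maximum principle (generic statement,
% not the reactor-specific functional used in this record).
\begin{aligned}
&\text{maximize } J = \int_{0}^{T} f_{0}\bigl(x(t),u(t)\bigr)\,dt
  \quad \text{subject to } \dot{x} = f\bigl(x(t),u(t)\bigr),\ u(t)\in U,\\
&H(x,u,\lambda) = f_{0}(x,u) + \lambda^{\mathsf{T}} f(x,u),\qquad
  \dot{\lambda} = -\frac{\partial H}{\partial x},\\
&u^{*}(t) = \arg\max_{u \in U} H\bigl(x^{*}(t),u,\lambda(t)\bigr)
  \quad \text{for almost every } t.
\end{aligned}
```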

  3. MWH's water treatment: principles and design

    National Research Council Canada - National Science Library

    Crittenden, John C

    2012-01-01

    ... with additional worked problems and new treatment approaches. It covers both the principles and theory of water treatment as well as the practical considerations of plant design and distribution...

  4. Pareto versus lognormal: a maximum entropy test.

    Science.gov (United States)

    Bee, Marco; Riccaboni, Massimo; Schiavo, Stefano

    2011-08-01

    It is commonly found that distributions that seem to be lognormal over a broad range change to a power-law (Pareto) distribution for the last few percentiles. The distributions of many physical, natural, and social events (earthquake size, species abundance, income and wealth, as well as file, city, and firm sizes) display this structure. We present a test for the occurrence of power-law tails in statistical distributions based on maximum entropy. This methodology allows one to identify the true data-generating processes even in the case when it is neither lognormal nor Pareto. The maximum entropy approach is then compared with other widely used methods and applied to different levels of aggregation of complex systems. Our results provide support for the theory that distributions with lognormal body and Pareto tail can be generated as mixtures of lognormally distributed units.
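
    To make the body-versus-tail issue concrete, here is a rough, self-contained sketch (our construction, not the authors' maximum entropy test): it draws a lognormal sample, then compares Pareto and lognormal likelihoods on the upper tail; the threshold and sample size are arbitrary choices.

```python
import numpy as np
from scipy import stats

# Compare lognormal and Pareto fits on the upper tail of a synthetic
# lognormal sample (illustration only, not the authors' test).
rng = np.random.default_rng(0)
x = rng.lognormal(mean=0.0, sigma=1.0, size=100_000)

u = np.quantile(x, 0.95)          # tail threshold: top 5% of the sample
tail = x[x > u]

# Pareto tail: Hill-type MLE of the index alpha for exceedances over u
alpha = len(tail) / np.sum(np.log(tail / u))
ll_pareto = np.sum(stats.pareto.logpdf(tail / u, b=alpha) - np.log(u))

# Lognormal fitted to the full sample, evaluated on the same tail
# (conditioned on exceeding u so the two likelihoods are comparable)
mu, sigma = np.mean(np.log(x)), np.std(np.log(x))
ll_logn = np.sum(stats.lognorm.logpdf(tail, s=sigma, scale=np.exp(mu))
                 - np.log(stats.lognorm.sf(u, s=sigma, scale=np.exp(mu))))

print(f"alpha_hat = {alpha:.2f}")
print(f"tail log-lik: Pareto {ll_pareto:.1f} vs lognormal {ll_logn:.1f}")
```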

  5. Le Chatelier principle in replicator dynamics

    OpenAIRE

    Allahverdyan, Armen E.; Galstyan, Aram

    2011-01-01

    The Le Chatelier principle states that physical equilibria are not only stable, but they also resist external perturbations via short-time negative-feedback mechanisms: a perturbation induces processes tending to diminish its results. The principle has deep roots, e.g., in thermodynamics it is closely related to the second law and the positivity of the entropy production. Here we study the applicability of the Le Chatelier principle to evolutionary game theory, i.e., to perturbations of a Nash equilibrium.
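
    As a concrete toy setting (our construction, not the authors' analysis), the sketch below integrates replicator dynamics for a Hawk-Dove-like game, perturbs the interior equilibrium, and watches the state relax back, which is the kind of negative-feedback response the Le Chatelier principle describes:

```python
import numpy as np

# Replicator dynamics for a symmetric 2x2 Hawk-Dove-like game (assumed
# payoffs) with a stable interior equilibrium, perturbed to observe the
# relaxation back toward equilibrium.
A = np.array([[0.0, 2.0],
              [1.0, 0.0]])            # payoff matrix (assumed)

def replicator_step(x, dt=0.01):
    fitness = A @ x                   # payoff of each strategy
    avg = x @ fitness                 # population-average payoff
    return x + dt * x * (fitness - avg)

x = np.array([2.0 / 3.0, 1.0 / 3.0])  # interior equilibrium of this game
x = x + np.array([0.05, -0.05])       # small external perturbation
for _ in range(2000):
    x = replicator_step(x)
print(x.round(4))                     # relaxes back toward [0.6667, 0.3333]
```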

  6. Three principles of competitive nonlinear pricing

    OpenAIRE

    Page Junior, Frank H.; Monteiro, P. K.

    2002-01-01

    We make three contributions to the theory of contracting under asymmetric information. First, we establish a competitive analog of the revelation principle, which we call the implementation principle. This principle provides a complete characterization of all incentive compatible, indirect contracting mechanisms in terms of contract catalogs (or menus), and allows us to conclude that in competitive contracting situations, firms in choosing their contracting strategies can restrict attention, ...

  7. Principle of coincidence method and application in activity measurement

    International Nuclear Information System (INIS)

    Li Mou; Dai Yihua; Ni Jianzhong

    2008-01-01

    The basic principle of the coincidence method is discussed. The principle is generalized by analysing a practical example, and the theoretical conditions for the validity of the coincidence method are put forward. Using this principle and these conditions, the variation of the efficiency curve and the effect of dead time in activity measurement are explained. The principle of the coincidence method thus provides the theoretical foundation for activity measurement. (authors)
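
    The core relation behind coincidence counting in the idealized 4π β-γ case (a standard result, stated here for context): the single-channel rates are N_beta = N_0 ε_beta and N_gamma = N_0 ε_gamma, the coincidence rate is N_c = N_0 ε_beta ε_gamma, and the unknown efficiencies cancel:

      N_0 = \frac{N_\beta \, N_\gamma}{N_c}.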

  8. Pattern formation, logistics, and maximum path probability

    Science.gov (United States)

    Kirkaldy, J. S.

    1985-05-01

    ... sufficiently strong interpretations of the second law of thermodynamics to define the approach to and the nature of patterned stable steady states. For many pattern-forming systems these principles define quantifiable stable states as maxima or minima (or both) in the dissipation. An elementary statistical-mechanical proof is offered. To turn the argument full circle, the transformations of the partitions and classes which are predicated upon such minimax entropic paths can through digital modeling be directly identified with the syntactic and inferential elements of deductive logic. It follows therefore that all self-organizing or pattern-forming systems which possess stable steady states approach these states according to the imperatives of formal logic, the optimum pattern with its rich endowment of equivalence relations representing the central theorem of the associated calculus. Logic is thus "the stuff of the universe," and biological evolution with its culmination in the human brain is the most significant example of all the irreversible pattern-forming processes. We thus conclude with a few remarks on the relevance of the contribution to the theory of evolution and to research on artificial intelligence.

  9. The maximum entropy production and maximum Shannon information entropy in enzyme kinetics

    Science.gov (United States)

    Dobovišek, Andrej; Markovič, Rene; Brumen, Milan; Fajmut, Aleš

    2018-04-01

    We demonstrate that the maximum entropy production principle (MEPP) serves as a physical selection principle for the description of the most probable non-equilibrium steady states in simple enzymatic reactions. A theoretical approach is developed which enables maximization of the density of entropy production with respect to the enzyme rate constants for an enzyme reaction in a steady state. Mass and Gibbs free energy conservation are imposed as optimization constraints. The optimal enzyme rate constants computed in this way also yield the most uniform probability distribution over the enzyme states, which corresponds to the maximal Shannon information entropy. By means of a stability analysis it is also demonstrated that maximal density of entropy production in the enzyme reaction requires a flexible enzyme structure, which enables rapid transitions between different enzyme states. These results are supported by an example in which the density of entropy production and the Shannon information entropy are numerically maximized for the enzyme glucose isomerase.
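
    A minimal sketch of the Shannon-entropy side of the argument (illustrative only; the paper optimizes over rate constants with mass and Gibbs-energy constraints): maximizing Shannon entropy over enzyme-state probabilities under normalization alone recovers the uniform distribution.

      import numpy as np
      from scipy.optimize import minimize

      n = 4  # number of enzyme states (assumed for illustration)

      def neg_shannon(p):
          p = np.clip(p, 1e-12, None)
          return np.sum(p * np.log(p))

      res = minimize(neg_shannon,
                     x0=np.random.default_rng(1).dirichlet(np.ones(n)),
                     bounds=[(0, 1)] * n,
                     constraints=[{'type': 'eq', 'fun': lambda p: p.sum() - 1}],
                     method='SLSQP')
      print(res.x)  # ~[0.25, 0.25, 0.25, 0.25]: the most uniform distribution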

  10. Equivalence principles and electromagnetism

    Science.gov (United States)

    Ni, W.-T.

    1977-01-01

    The implications of the weak equivalence principles are investigated in detail for electromagnetic systems in a general framework. In particular, it is shown that the universality of free-fall trajectories (Galileo weak equivalence principle) does not imply the validity of the Einstein equivalence principle. However, the Galileo principle plus the universality of free-fall rotation states does imply the Einstein principle.

  11. Cooling towers principles and practice

    CERN Document Server

    Hill, G B; Osborn, Peter D

    1990-01-01

    Cooling Towers: Principles and Practice, Third Edition, aims to provide the reader with a better understanding of the theory and practice, so that installations are correctly designed and operated. As with all branches of engineering, new technology calls for a level of technical knowledge which becomes progressively higher; this new edition seeks to ensure that the principles and practice of cooling towers are set against a background of up-to-date technology. The book is organized into three sections. Section A on cooling tower practice covers topics such as the design and operation of c

  12. Floyd's principle, correctness theories and program equivalence

    NARCIS (Netherlands)

    Bergstra, J.A.; Tiuryn, J.; Tucker, J.V.

    1982-01-01

    A programming system is a language made from a fixed class of data abstractions and a selection of familiar deterministic control and assignment constructs. It is shown that the sets of all ‘before-after’ first-order assertions which are true of programs in any such language can uniquely determine

  13. Principles of Transactional Memory The Theory

    CERN Document Server

    Guerraoui, Rachid

    2010-01-01

    Transactional memory (TM) is an appealing paradigm for concurrent programming on shared memory architectures. With a TM, threads of an application communicate, and synchronize their actions, via in-memory transactions. Each transaction can perform any number of operations on shared data, and then either commit or abort. When the transaction commits, the effects of all its operations become immediately visible to other transactions; when it aborts, however, those effects are entirely discarded. Transactions are atomic: programmers get the illusion that every transaction executes all its operations atomically.
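
    A toy sketch of the commit/abort semantics described above (an illustrative model, not any real TM implementation): writes are buffered per transaction and only become visible to others on commit.

      class Store:
          def __init__(self):
              self.data = {}

      class Transaction:
          def __init__(self, store):
              self.store = store
              self.writes = {}  # buffered updates, invisible to other threads

          def read(self, key):
              return self.writes.get(key, self.store.data.get(key))

          def write(self, key, value):
              self.writes[key] = value

          def commit(self):
              self.store.data.update(self.writes)  # all effects appear at once

          def abort(self):
              self.writes.clear()  # all effects are discarded

      store = Store()
      t = Transaction(store)
      t.write("x", 1)
      print(store.data.get("x"))  # None: uncommitted effects are invisible
      t.commit()
      print(store.data["x"])      # 1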

  14. Unifying generative and discriminative learning principles

    Directory of Open Access Journals (Sweden)

    Strickert Marc

    2010-02-01

    Background: The recognition of functional binding sites in genomic DNA remains one of the fundamental challenges of genome research. During the last decades, a plethora of different and well-adapted models has been developed, but only little attention has been paid to the development of different and similarly well-adapted learning principles. Only recently was it noticed that discriminative learning principles can also be superior to generative ones in diverse bioinformatics applications. Results: Here, we propose a generalization of generative and discriminative learning principles containing the maximum likelihood, maximum a posteriori, maximum conditional likelihood, maximum supervised posterior, generative-discriminative trade-off, and penalized generative-discriminative trade-off learning principles as special cases, and we illustrate its efficacy for the recognition of vertebrate transcription factor binding sites. Conclusions: We find that the proposed learning principle helps to improve the recognition of transcription factor binding sites, enabling better computational approaches for extracting as much information as possible from valuable wet-lab data. We make all implementations available in the open-source library Jstacs so that this learning principle can be easily applied to other classification problems in the field of genome and epigenome analysis.
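
    One common way to write such a unified objective (a sketch of the general idea; the paper's exact parameterization may differ): for data (x, y), parameters θ, and prior p(θ),

      \hat{\theta} = \arg\max_{\theta} \; \alpha \log p(y \mid x, \theta) + (1 - \alpha) \log p(x, y \mid \theta) + \beta \log p(\theta),

    so that α = 0, β = 0 recovers maximum likelihood; α = 0, β = 1 maximum a posteriori; α = 1, β = 0 the discriminative maximum conditional likelihood; α = 1, β = 1 maximum supervised posterior; and intermediate α the generative-discriminative trade-off.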

  15. Developing an Asteroid Rotational Theory

    Science.gov (United States)

    Geis, Gena; Williams, Miguel; Linder, Tyler; Pakey, Donald

    2018-01-01

    The goal of this project is to develop a theoretical asteroid rotational theory from first principles. Starting at first principles provides a firm foundation for computer simulations which can be used to analyze multiple variables at once, such as size, rotation period, tensile strength, and density. The initial theory will be presented along with early models of applying the theory to the asteroid population. Early results confirm previous work by Pravec et al. (2002), showing that the majority of asteroids larger than 200 m have negligible tensile strength and spin rates close to their critical breakup point. Additionally, the results show that an object with zero tensile strength has a maximum rotational rate determined by the object's density, not its size. An iron asteroid with a density of 8000 kg/m^3 would therefore have a minimum spin period of 1.16 h if the only forces were gravitational and centrifugal. The short-term goal is to include material forces in the simulations to determine what tensile strength will allow the high spin rates of asteroids smaller than 150 m.
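
    The quoted figure follows from balancing gravity and centrifugal force for a strengthless sphere: ω²R = GM/R² with M = (4/3)πρR³ gives a size-independent critical period P = sqrt(3π / (Gρ)). A quick check in code:

      import math

      G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2
      rho = 8000.0   # density of an iron asteroid, kg/m^3

      P = math.sqrt(3 * math.pi / (G * rho))  # critical breakup period, s
      print(P / 3600)  # ~1.17 h, consistent with the 1.16 h quoted above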

  16. Principles of e-learning systems engineering

    CERN Document Server

    Gilbert, Lester

    2008-01-01

    The book integrates the principles of software engineering with the principles of educational theory, and applies them to the problems of e-learning development, thus establishing the discipline of E-learning systems engineering. For the first time, these principles are collected and organised into the coherent framework that this book provides. Both newcomers to and established practitioners in the field are provided with integrated and grounded advice on theory and practice. The book presents strong practical and theoretical frameworks for the design and development of technology-based mater

  17. Spectrum unfolding, sensitivity analysis and propagation of uncertainties with the maximum entropy deconvolution code MAXED

    CERN Document Server

    Reginatto, M; Neumann, S

    2002-01-01

    MAXED was developed to apply the maximum entropy principle to the unfolding of neutron spectrometric measurements. The approach followed in MAXED has several features that make it attractive: it permits inclusion of a priori information in a well-defined and mathematically consistent way, the algorithm used to derive the solution spectrum is not ad hoc (it can be justified on the basis of arguments that originate in information theory), and the solution spectrum is a non-negative function that can be written in closed form. This last feature permits the use of standard methods for the sensitivity analysis and propagation of uncertainties of MAXED solution spectra. We illustrate its use with unfoldings of NE 213 scintillation detector measurements of photon calibration spectra, and of multisphere neutron spectrometer measurements of cosmic-ray induced neutrons at high altitude (approx 20 km) in the atmosphere.
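
    A stripped-down illustration of entropy-based unfolding (toy numbers, absolute entropy; MAXED itself maximizes entropy relative to an a priori default spectrum and has a closed-form solution): with fewer measurements than spectrum bins, the maximum entropy principle selects one spectrum among the many consistent with the data.

      import numpy as np
      from scipy.optimize import minimize

      # 2 detector readings, 3 spectrum bins: an underdetermined unfolding problem
      R = np.array([[1.0, 1.0, 0.0],   # toy response matrix (assumed)
                    [0.0, 1.0, 1.0]])
      m = np.array([0.7, 0.8])         # toy measurements

      def neg_entropy(f):
          f = np.clip(f, 1e-12, None)
          return np.sum(f * np.log(f))

      res = minimize(neg_entropy, x0=np.full(3, 0.5),
                     bounds=[(0, None)] * 3,
                     constraints=[{'type': 'eq', 'fun': lambda f: R @ f - m}],
                     method='SLSQP')
      print(res.x)  # the maximum entropy spectrum consistent with both readings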

  18. Coexisting principles and logics of elder care

    DEFF Research Database (Denmark)

    Dahl, Hanne Marlene; Eskelinen, Leena; Boll Hansen, Eigil

    2015-01-01

    Healthy and active ageing has become an ideal in Western societies. In the Nordic countries, this ideal has been supported through a policy of help to self-help in elder care since the 1980s. However, reforms inspired by New Public Management (NPM) have introduced a new policy principle of consumer-oriented service that stresses the wishes and priorities of older people. We have studied how these two principles are applied by care workers in Denmark. Is one principle or logic replacing the other, or do they coexist? Do they create tensions between professional knowledge and the autonomy of older people? Using neo-institutional theory and feminist care theory, we analysed the articulation of the two policy principles in interviews and their logics in observations in four local authorities. We conclude that help to self-help is the dominant principle, that it is deeply entrenched in the identity...

  19. Equivalence Principle, Higgs Boson and Cosmology

    Directory of Open Access Journals (Sweden)

    Mauro Francaviglia

    2013-05-01

    We discuss here possible tests for Palatini f(R)-theories together with their implications for different formulations of the Equivalence Principle. We shall show that Palatini f(R)-theories obey the Weak Equivalence Principle and violate the Strong Equivalence Principle. The violations of the Strong Equivalence Principle vanish in vacuum (and purely electromagnetic) solutions, as well as on short time scales with respect to the age of the universe. However, we suggest that a framework based on Palatini f(R)-theories is more general than standard General Relativity (GR) and sheds light on the interpretation of data and results in a way which is more model-independent than standard GR itself.

  20. The 4th Thermodynamic Principle?

    International Nuclear Information System (INIS)

    Montero Garcia, Jose de la Luz; Novoa Blanco, Jesus Francisco

    2007-01-01

    It should be emphasized that the 4th Principle formulated above is a thermodynamic principle and, at the same time, a quantum-mechanical and relativistic one, as it inevitably should be; its absence has been one of the main theoretical limitations of physical theory until today. We show that, with the theoretical discovery of the Dimensional Primitive Octet of Matter, the 4th Thermodynamic Principle, the Quantum Hexet of Matter, the Global Hexagonal Subsystem of Fundamental Constants of Energy, and the Measurement or Connected Global Scale or Universal Existential Interval of Matter, it is possible to arrive at a global formulation of the four 'forces' or fundamental interactions of nature. Einstein's golden dream is possible.

  1. Principles of neural information processing

    CERN Document Server

    Seelen, Werner v

    2016-01-01

    In this fundamental book the authors devise a framework that describes the working of the brain as a whole. It presents a comprehensive introduction to the principles of neural information processing as well as recent and authoritative research. The book's guiding principles are the main purpose of neural activity, namely to organize behavior so as to ensure survival, and the understanding of the evolutionary genesis of the brain. The principles and strategies developed include the self-organization of neural systems, flexibility, the active interpretation of the world by means of construction and prediction, and the embedding of neural systems into the world, all of which form the framework of the presented description. Since, in brains, partial self-organization, lifelong adaptation and the use of various methods of processing incoming information are all interconnected, the authors have chosen not only neurobiology and evolution theory as a basis for the elaboration of such a framework, but also syst...

  2. Beyond the Virtues-Principles Debate.

    Science.gov (United States)

    Keat, Marilyn S.

    1992-01-01

    Indicates basic ontological assumptions in the virtues-principles debate in moral philosophy, noting Aristotle's and Kant's fundamental ideas about morality and considering a hermeneutic synthesis of theories. The article discusses what acceptance of the synthesis might mean in the theory and practice of moral pedagogy, offering examples of…

  3. Optimal decisions principles of programming

    CERN Document Server

    Lange, Oskar

    1971-01-01

    Optimal Decisions: Principles of Programming deals with all important problems related to programming. This book provides a general interpretation of the theory of programming based on the application of Lagrange multipliers, followed by a presentation of marginal and linear programming as special cases of this general theory. The praxeological interpretation of the method of Lagrange multipliers is also discussed. This text covers Koopmans' model of transportation, the geometric interpretation of the programming problem, and the nature of activity analysis. The solution of t
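
    For context, the mechanics of the Lagrange multiplier method on a toy problem (an illustration, not an example from the book): maximize f(x, y) = xy subject to x + y = m.

      \mathcal{L}(x, y, \lambda) = xy + \lambda (m - x - y), \qquad
      \partial_x \mathcal{L} = y - \lambda = 0, \quad
      \partial_y \mathcal{L} = x - \lambda = 0
      \;\Rightarrow\; x^* = y^* = m/2, \quad \lambda^* = m/2,

    with the multiplier λ measuring the marginal value of relaxing the constraint.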

  4. Maximum stellar iron core mass

    Indian Academy of Sciences (India)

    Pramana, Journal of Physics, Vol. 60, No. 3, March 2003, pp. 415–422. F W Giacobbe (Chicago Research Center/American Air Liquide). Abstract fragments: iron core compression due to the weight of non-ferrous matter overlying the iron cores within large stars ... thermal equilibrium velocities will tend to be non-relativistic.

  5. Maximum entropy beam diagnostic tomography

    International Nuclear Information System (INIS)

    Mottershead, C.T.

    1985-01-01

    This paper reviews the formalism of maximum entropy beam diagnostic tomography as applied to the Fusion Materials Irradiation Test (FMIT) prototype accelerator. The same formalism has also been used with streak camera data to produce an ultrahigh speed movie of the beam profile of the Experimental Test Accelerator (ETA) at Livermore. 11 refs., 4 figs

  6. A portable storage maximum thermometer

    International Nuclear Information System (INIS)

    Fayart, Gerard.

    1976-01-01

    A clinical thermometer that stores the voltage corresponding to the maximum temperature in an analog memory is described. The end of the measurement is indicated by a lamp switching off. The measurement time is shortened by means of a low-thermal-inertia platinum probe. This portable thermometer is fitted with a cell-test and calibration system [fr]

  7. Principles of digital image synthesis

    CERN Document Server

    Glassner, Andrew S

    1995-01-01

    Image synthesis, or rendering, is a field of transformation: it changes geometry and physics into meaningful images. Because the most popular algorithms frequently change, it is increasingly important for researchers and implementors to have a basic understanding of the principles of image synthesis. Focusing on theory, Andrew Glassner provides a comprehensive explanation of the three core fields of study that come together to form digital image synthesis: the human visual system, digital signal processing, and the interaction of matter and light. Assuming no more than a basic background in calculus,

  8. Basic Principles of Wastewater Treatment

    OpenAIRE

    Von Sperling, Marcos

    2007-01-01

    "Basic Principles of Wastewater Treatment is the second volume in the series Biological Wastewater Treatment, and focusses on the unit operations and processes associated with biological wastewater treatment. The major topics covered are: microbiology and ecology of wastewater treatment reaction kinetics and reactor hydraulics conversion of organic and inorganic matter sedimentation aeration The theory presented in this volume forms the basis upon which the other books...

  9. String field theory

    International Nuclear Information System (INIS)

    Kaku, M.

    1987-01-01

    In this article, the authors summarize the rapid progress in constructing string field theory actions, such as the development of the covariant BRST theory. They also present the newer geometric formulation of string field theory, from which the BRST theory and the older light cone theory can be derived from first principles. This geometric formulation allows us to derive the complete field theory of strings from two geometric principles, in the same way that general relativity and Yang-Mills theory can be derived from two principles based on global and local symmetry. The geometric formalism therefore reduces string field theory to a problem of finding an invariant under a new local gauge group they call the universal string group (USG). Thus, string field theory is the gauge theory of the universal string group in much the same way that Yang-Mills theory is the gauge theory of SU(N). The geometric formulation places superstring theory on the same rigorous group theoretical level as general relativity and gauge theory

  10. A New Principle in Physics: the Principle 'Finiteness', and Some Consequences

    International Nuclear Information System (INIS)

    Sternlieb, Abraham

    2010-01-01

    In this paper I propose a new principle in physics: the principle of 'finiteness'. It stems from the definition of physics as a science that deals (among other things) with measurable dimensional physical quantities. Since measurement results, including their errors, are always finite, the principle of finiteness postulates that the mathematical formulation of 'legitimate' laws of physics should prevent exactly zero or infinite solutions. Some consequences of the principle of finiteness are discussed, in general, and then more specifically in the fields of special relativity, quantum mechanics, and quantum gravity. The consequences are derived independently of any other theory or principle in physics. I propose 'finiteness' as a postulate (like the constancy of the speed of light in vacuum, 'c'), as opposed to a notion whose validity has to be corroborated by, or derived theoretically or experimentally from other facts, theories, or principles.

  11. Spectral analysis and filter theory in applied geophysics

    CERN Document Server

    Buttkus, Burkhard

    2000-01-01

    This book is intended to be an introduction to the fundamentals and methods of spectral analysis and filter theory and their applications in geophysics. The principles and theoretical basis of the various methods are described, their efficiency and effectiveness evaluated, and instructions provided for their practical application. Besides the conventional methods, newer methods are discussed, such as the spectral analysis of random processes by fitting models to the observed data, maximum-entropy spectral analysis and maximum-likelihood spectral analysis, the Wiener and Kalman filtering methods, homomorphic deconvolution, and adaptive methods for nonstationary processes. Multidimensional spectral analysis and filtering, as well as multichannel filters, are given extensive treatment. The book provides a survey of the state-of-the-art of spectral analysis and filter theory. The importance and possibilities of spectral analysis and filter theory in geophysics for data acquisition, processing an...
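
    A compact sketch of the maximum-entropy (autoregressive) spectral estimate mentioned above (a toy implementation under simplifying assumptions, using the Yule-Walker equations rather than Burg's recursion):

      import numpy as np

      def maxent_psd(x, order, nfft=512):
          """Maximum-entropy (AR) PSD estimate via the Yule-Walker equations."""
          x = np.asarray(x, dtype=float) - np.mean(x)
          n = len(x)
          # Biased autocorrelation estimates r[0..order]
          r = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(order + 1)])
          # Solve R a = -r[1:] for the AR coefficients
          R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
          a = np.linalg.solve(R, -r[1:])
          sigma2 = r[0] + np.dot(a, r[1:])  # driving-noise variance
          # PSD of the AR model: sigma2 / |1 + sum_k a_k e^{-2*pi*i*f*k}|^2
          freqs = np.linspace(0, 0.5, nfft)
          e = np.exp(-2j * np.pi * np.outer(freqs, np.arange(1, order + 1)))
          return freqs, sigma2 / np.abs(1 + e @ a) ** 2

      # Toy usage: a noisy sinusoid should give a spectral peak near 0.1
      rng = np.random.default_rng(0)
      t = np.arange(1024)
      x = np.sin(2 * np.pi * 0.1 * t) + 0.5 * rng.standard_normal(t.size)
      f, p = maxent_psd(x, order=8)
      print(f[np.argmax(p)])  # ≈ 0.1 cycles/sample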

  12. Elements of a compatible optimization theory for coupled systems

    International Nuclear Information System (INIS)

    Bonnemay, A.

    1969-01-01

    This work deals with compatible optimization in coupled systems. A game theory for two players with a non-zero sum is first developed. The conclusions are then extended to the case of a game with any finite number of players. After this essentially static study, the dynamic aspect of the problem is applied to the case of games which evolve. By applying Pontryagin's maximum principle, it is possible to derive a compatible optimization theorem which constitutes a necessary condition. (author) [fr]

  13. On Maximum Entropy and Inference

    Directory of Open Access Journals (Sweden)

    Luigi Gresele

    2017-11-01

    Maximum entropy is a powerful concept that entails a sharp separation between relevant and irrelevant variables. It is typically invoked in inference, once an assumption is made on what the relevant variables are, in order to estimate a model from data that affords predictions on all other (dependent) variables. Conversely, maximum entropy can be invoked to retrieve the relevant variables (sufficient statistics) directly from the data, once a model is identified by Bayesian model selection. We explore this approach in the case of spin models with interactions of arbitrary order, and we discuss how relevant interactions can be inferred. In this perspective, the dimensionality of the inference problem is not set by the number of parameters in the model, but by the frequency distribution of the data. We illustrate the method by showing its ability to recover the correct model in a few prototype cases and discuss its application to a real dataset.
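
    The structural fact behind this separation (the standard maximum entropy result, not specific to the paper): the distribution of maximum entropy consistent with expectation constraints on observables φ_k is the exponential family

      p(s) = \frac{1}{Z(\lambda)} \exp\Big( \sum_k \lambda_k \phi_k(s) \Big),

    so the constrained observables φ_k are exactly the sufficient statistics of the inferred model.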

  14. Theories Supporting Transfer of Training.

    Science.gov (United States)

    Yamnill, Siriporn; McLean, Gary N.

    2001-01-01

    Reviews theories about factors affecting the transfer of training, including theories on motivation (expectancy, equity, goal setting), training transfer design (identical elements, principle, near and far), and transfer climate (organizational). (Contains 36 references.) (SK)

  15. Maximum Water Hammer Sensitivity Analysis

    OpenAIRE

    Jalil Emadi; Abbas Solemani

    2011-01-01

    Pressure waves and water hammer occur in a pumping system when valves are closed or opened suddenly, or in the case of a sudden failure of pumps. Determining the maximum water hammer is considered one of the most important technical and economic tasks that engineers and designers of pumping stations and conveyance pipelines must address. Hammer Software is a recent application used to simulate water hammer. The present study focuses on determining the significance of ...
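
    The classical first estimate of maximum water hammer is the Joukowsky surge Δp = ρ a Δv (a standard formula; the pipe and flow values below are illustrative assumptions, not from the study):

      rho = 1000.0  # water density, kg/m^3
      a = 1200.0    # pressure-wave speed in the pipe, m/s (assumed)
      dv = 2.0      # instantaneous change in flow velocity, m/s (assumed)

      dp = rho * a * dv  # Joukowsky pressure rise, Pa
      print(dp / 1e5)    # 24.0 bar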

  16. Maximum Gene-Support Tree

    Directory of Open Access Journals (Sweden)

    Yunfeng Shan

    2008-01-01

    Genomes and genes diversify during evolution; however, it is unclear to what extent genes still retain the relationships among species. Model species for molecular phylogenetic studies include yeasts and viruses, whose genomes have been sequenced, as well as plants that have fossil-supported true phylogenetic trees available. In this study, we generated single gene trees of seven yeast species as well as single gene trees of nine baculovirus species using all the orthologous genes among the species compared. Homologous genes among seven known plants were used for validation of the finding. Four algorithms were used: maximum parsimony (MP), minimum evolution (ME), maximum likelihood (ML), and neighbor-joining (NJ). Trees were reconstructed before and after weighting the DNA and protein sequence lengths among genes. Rarely can a single gene always generate the "true tree" by all four algorithms. However, the most frequent gene tree, termed the "maximum gene-support tree" (MGS tree, or WMGS tree for the weighted one), in yeasts, baculoviruses, or plants was consistently found to be the "true tree" among the species. The results provide insights into the overall degree of divergence of orthologous genes of the genomes analyzed and suggest the following: (1) the true tree relationship among the species studied is still maintained by the largest group of orthologous genes; (2) there are usually more orthologous genes with higher similarities between genetically closer species than between genetically more distant ones; and (3) the maximum gene-support tree reflects the phylogenetic relationship among species in comparison.
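
    The "maximum gene-support" selection step reduces to a frequency count over per-gene tree topologies (a schematic sketch with made-up topology labels, not the study's pipeline):

      from collections import Counter

      # One inferred topology string per orthologous gene (placeholder labels)
      gene_trees = [
          "((A,B),(C,D))", "((A,B),(C,D))", "((A,C),(B,D))",
          "((A,B),(C,D))", "((A,D),(B,C))", "((A,B),(C,D))",
      ]

      topology, support = Counter(gene_trees).most_common(1)[0]
      print(topology, f"supported by {support}/{len(gene_trees)} genes")
      # the maximum gene-support tree is the topology recovered most often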

  17. LCLS Maximum Credible Beam Power

    International Nuclear Information System (INIS)

    Clendenin, J.

    2005-01-01

    The maximum credible beam power is defined as the highest credible average beam power that the accelerator can deliver to the point in question, given the laws of physics, the beam line design, and assuming all protection devices have failed. For a new accelerator project, the official maximum credible beam power is determined by project staff in consultation with the Radiation Physics Department, after examining the arguments and evidence presented by the appropriate accelerator physicist(s) and beam line engineers. The definitive parameter becomes part of the project's safety envelope. This technical note will first review the studies that were done for the Gun Test Facility (GTF) at SSRL, where a photoinjector similar to the one proposed for the LCLS is being tested. In Section 3 the maximum charge out of the gun for a single rf pulse is calculated. In Section 4, PARMELA simulations are used to track the beam from the gun to the end of the photoinjector. Finally in Section 5 the beam through the matching section and injected into Linac-1 is discussed
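
    The underlying arithmetic is simple: average beam power is energy per particle times average current, i.e. charge per pulse times repetition rate (the numbers below are placeholders, not the official LCLS envelope):

      E_eV = 250e6   # beam energy per electron, eV (assumed)
      Q = 1e-9       # charge per pulse, C (assumed)
      f_rep = 120.0  # pulse repetition rate, Hz (assumed)

      pulse_energy = E_eV * Q        # joules per pulse (eV x C gives J directly)
      power = pulse_energy * f_rep   # average beam power, W
      print(power)                   # ~30 W under these assumptions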

  18. Critical reflections on the principle of beneficence in biomedicine

    African Journals Online (AJOL)


    2012-02-18

    Medical ethics as a scholarly discipline and a system of moral principles ... the principle, like other ethical principles, is only fine in theory, but putting it ...

  19. Using the Music Industry To Teach Economic Principles.

    Science.gov (United States)

    Stamm, K. Brad

    The key purpose of this paper is to provide economics and business professors, particularly those teaching principles courses, with concrete examples of economic theory applied to the music industry. A second objective is to further the interest in economic theory among business majors and expose non-majors to economic principles via real world…