WorldWideScience

Sample records for equivalence principle implies

  1. Equivalence principles and electromagnetism

    Science.gov (United States)

    Ni, W.-T.

    1977-01-01

    The implications of the weak equivalence principles are investigated in detail for electromagnetic systems in a general framework. In particular, it is shown that the universality of free-fall trajectories (Galileo weak equivalence principle) does not imply the validity of the Einstein equivalence principle. However, the Galileo principle plus the universality of free-fall rotation states does imply the Einstein principle.

  2. Energy conservation and the principle of equivalence

    International Nuclear Information System (INIS)

    Haugan, M.P.

    1979-01-01

    If the equivalence principle is violated, then observers performing local experiments can detect effects due to their position in an external gravitational environment (preferred-location effects) or can detect effects due to their velocity through some preferred frame (preferred-frame effects). We show that the principle of energy conservation implies a quantitative connection between such effects and structure-dependence of the gravitational acceleration of test bodies (violation of the Weak Equivalence Principle). We analyze this connection within a general theoretical framework that encompasses both non-gravitational local experiments and test bodies as well as gravitational experiments and test bodies, and we use it to discuss specific experimental tests of the equivalence principle, including non-gravitational tests such as gravitational redshift experiments, Eötvös experiments, the Hughes-Drever experiment, and the Turner-Hill experiment, and gravitational tests such as the lunar-laser-ranging "Eötvös" experiment, and measurements of anisotropies and variations in the gravitational constant. This framework is illustrated by analyses within two theoretical formalisms for studying gravitational theories: the PPN formalism, which deals with the motion of gravitating bodies within metric theories of gravity, and the THεμ formalism, which deals with the motion of charged particles within all metric theories and a broad class of non-metric theories of gravity.

  3. Neutrino oscillations in non-inertial frames and the violation of the equivalence principle: neutrino mixing induced by the equivalence principle violation

    International Nuclear Information System (INIS)

    Lambiase, G.

    2001-01-01

    Neutrino oscillations are analyzed in an accelerating and rotating reference frame, assuming that the gravitational coupling of neutrinos is flavor dependent, which implies a violation of the equivalence principle. Unlike the usual studies in which a constant gravitational field is considered, such frames could represent a more suitable framework for testing whether a breakdown of the equivalence principle occurs, owing to the possibility of modulating the (simulated) gravitational field. The violation of the equivalence principle implies, for the case of a maximal gravitational mixing angle, the presence of an off-diagonal term in the mass matrix. The consequences of such a term for the evolution of flavor (mass) eigenstates are analyzed for solar (vacuum-oscillation) and atmospheric neutrinos. We calculate the flavor oscillation probability in the non-inertial frame, which depends on its angular velocity and linear acceleration, as well as on the energy of the neutrinos, the mass-squared difference between the two mass eigenstates, and the measure of the degree of violation of the equivalence principle (Δγ). In particular, we find that the energy dependence disappears for vanishing mass-squared difference, unlike the result obtained by Gasperini, Halprin, and Leung and unlike other physical mechanisms proposed as viable explanations of neutrino oscillations. Estimates of the upper values of Δγ are inferred for a rotating observer (with vanishing linear acceleration) comoving with the Earth, hence ω ≈ 7×10^-5 rad/sec, with all other alternative mechanisms generating the oscillation phenomena neglected. In this case we find that the constraints on Δγ are Δγ ≤ 10^2 for solar neutrinos and Δγ ≤ 10^6 for atmospheric neutrinos. (orig.)
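
    For orientation, the standard constant-potential description that this record contrasts with (e.g. the Gasperini and Halprin-Leung treatments) can be summarized by the two-flavor oscillation phases below; this is an illustrative sketch, not the non-inertial-frame result of the paper, and conventions for the numerical factors vary.

    ```latex
    % Schematic two-flavor oscillation phases over a baseline L in a
    % constant gravitational potential \phi (natural units, \hbar = c = 1):
    % mass-driven oscillations scale as 1/E, VEP-driven ones grow with E.
    \Phi_{\rm mass} \sim \frac{\Delta m^{2} L}{2E},
    \qquad
    \Phi_{\rm VEP} \sim 2\,|\phi|\,\Delta\gamma\,E\,L .
    ```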

  4. Gravitational Lagrangians, Mach's Principle, and the Equivalence Principle in an Expanding Universe

    Science.gov (United States)

    Essén, Hanno

    2014-08-01

    Gravitational Lagrangians as derived by Fock for the Einstein-Infeld-Hoffmann approach, and by Kennedy assuming only a fourth rank tensor interaction, contain long-range interactions. Here we investigate how these affect the local dynamics when integrated over an expanding universe out to the Hubble radius. Taking the cosmic expansion velocity into account in a heuristic manner, it is found that these long-range interactions imply Mach's principle, provided the universe has the critical density, and that mass is renormalized. Suitable higher order additions to the Lagrangians make the formalism consistent with the equivalence principle.

  5. Quantification of the equivalence principle

    International Nuclear Information System (INIS)

    Epstein, K.J.

    1978-01-01

    Quantitative relationships illustrate Einstein's equivalence principle, relating it to Newton's "fictitious" forces arising from the use of noninertial frames, and to the form of the relativistic time dilatation in local Lorentz frames. The equivalence principle can be interpreted as the equivalence of general covariance to local Lorentz covariance, in a manner which is characteristic of Riemannian and pseudo-Riemannian geometries.
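
    As a minimal illustration of the kind of quantitative relationship the abstract refers to (not taken from the paper), the weak-field clock-rate formulas below show how a uniform acceleration a mimics a uniform gravitational field of strength g = a.

    ```latex
    % Illustrative weak-field rates for clocks separated by a height h:
    % a frame accelerating uniformly at a, versus a uniform field g.
    \left.\frac{d\tau}{dt}\right|_{\rm accel} \simeq 1 + \frac{a\,h}{c^{2}}
    \qquad\longleftrightarrow\qquad
    \left.\frac{d\tau}{dt}\right|_{\rm grav} \simeq 1 + \frac{g\,h}{c^{2}} .
    ```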

  6. Equivalence Principle, Higgs Boson and Cosmology

    Directory of Open Access Journals (Sweden)

    Mauro Francaviglia

    2013-05-01

    We discuss here possible tests for Palatini f(R)-theories together with their implications for different formulations of the Equivalence Principle. We shall show that Palatini f(R)-theories obey the Weak Equivalence Principle and violate the Strong Equivalence Principle. The violations of the Strong Equivalence Principle vanish in vacuum (and in purely electromagnetic solutions) as well as on short time scales with respect to the age of the universe. However, we suggest that a framework based on Palatini f(R)-theories is more general than standard General Relativity (GR) and sheds light on the interpretation of data and results in a way which is more model independent than standard GR itself.

  7. The gauge principle vs. the equivalence principle

    International Nuclear Information System (INIS)

    Gates, S.J. Jr.

    1984-01-01

    Within the context of field theory, it is argued that the role of the equivalence principle may be replaced by the principle of gauge invariance to provide a logical framework for theories of gravitation

  8. The principle of equivalence reconsidered: assessing the relevance of the principle of equivalence in prison medicine.

    Science.gov (United States)

    Jotterand, Fabrice; Wangmo, Tenzin

    2014-01-01

    In this article we critically examine the principle of equivalence of care in prison medicine. First, we provide an overview of how the principle of equivalence is utilized in various national and international guidelines on health care provision to prisoners. Second, we outline some of the problems associated with its applications, and argue that the principle of equivalence should go beyond equivalence to access and include equivalence of outcomes. However, because of the particular context of the prison environment, third, we contend that the concept of "health" in equivalence of health outcomes needs conceptual clarity; otherwise, it fails to provide a threshold for healthy states among inmates. We accomplish this by examining common understandings of the concepts of health and disease. We conclude our article by showing why the conceptualization of diseases as clinical problems provides a helpful approach in the delivery of health care in prison.

  9. Quantum equivalence principle without mass superselection

    International Nuclear Information System (INIS)

    Hernandez-Coronado, H.; Okon, E.

    2013-01-01

    The standard argument for the validity of Einstein's equivalence principle in a non-relativistic quantum context involves the application of a mass superselection rule. The objective of this work is to show that, contrary to widespread opinion, the compatibility between the equivalence principle and quantum mechanics does not depend on the introduction of such a restriction. For this purpose, we develop a formalism based on the extended Galileo group, which allows for a consistent handling of superpositions of different masses, and show that, within such scheme, mass superpositions behave as they should in order to obey the equivalence principle. - Highlights: • We propose a formalism for consistently handling, within a non-relativistic quantum context, superpositions of states with different masses. • The formalism utilizes the extended Galileo group, in which mass is a generator. • The proposed formalism allows for the equivalence principle to be satisfied without the need of imposing a mass superselection rule

  10. The principle of equivalence

    International Nuclear Information System (INIS)

    Unnikrishnan, C.S.

    1994-01-01

    The principle of equivalence was the fundamental guiding principle in the formulation of the general theory of relativity. What are its key elements? What are the empirical observations which establish it? What is its relevance to some new experiments? These questions are discussed in this article. (author). 11 refs., 5 figs.

  11. Foundations of gravitation theory: the principle of equivalence

    International Nuclear Information System (INIS)

    Haugan, M.P.

    1978-01-01

    A new framework is presented within which to discuss the principle of equivalence and its experimental tests. The framework incorporates a special structure imposed on the equivalence principle by the principle of energy conservation. This structure includes relations among the conceptual components of the equivalence principle as well as quantitative relations among the outcomes of its experimental tests. One of the most striking new results obtained through use of this framework is a connection between the breakdown of local Lorentz invariance and the breakdown of the principle that all bodies fall with the same acceleration in a gravitational field. An extensive discussion of experimental tests of the equivalence principle and their significance is also presented. Within the above framework, theory-independent analyses of a broad range of equivalence principle tests are possible. Gravitational redshift experiments, Doppler-shift experiments, the Turner-Hill and Hughes-Drever experiments, and a number of solar-system tests of gravitation theories are analyzed. Application of the techniques of theoretical nuclear physics to the quantitative interpretation of equivalence principle tests using laboratory materials of different composition yields a number of important results. It is found that current Eötvös experiments significantly demonstrate the compatibility of the weak interactions with the equivalence principle. It is also shown that the Hughes-Drever experiment is the most precise test of local Lorentz invariance yet performed. The work leads to a strong, tightly knit empirical basis for the principle of equivalence, the central pillar of the foundations of gravitation theory.

  12. Supersymmetric QED at finite temperature and the principle of equivalence

    International Nuclear Information System (INIS)

    Robinett, R.W.

    1985-01-01

    Unbroken supersymmetric QED is examined at finite temperature and it is shown that the scalar and spinor members of a chiral superfield acquire different temperature-dependent inertial masses. By considering the renormalization of the energy-momentum tensor it is also shown that the T-dependent scalar and spinor gravitational masses are no longer degenerate and, moreover, are different from their T-dependent inertial mass shifts, implying a violation of the equivalence principle. The temperature-dependent corrections to the spinor (g−2) are also calculated and found not to vanish.

  13. Quantum mechanics and the equivalence principle

    International Nuclear Information System (INIS)

    Davies, P C W

    2004-01-01

    A quantum particle moving in a gravitational field may penetrate the classically forbidden region of the gravitational potential. This raises the question of whether the time of flight of a quantum particle in a gravitational field might deviate systematically from that of a classical particle due to tunnelling delay, representing a violation of the weak equivalence principle. I investigate this using a model quantum clock to measure the time of flight of a quantum particle in a uniform gravitational field, and show that a violation of the equivalence principle does not occur when the measurement is made far from the turning point of the classical trajectory. The results are then confirmed using the so-called dwell time definition of quantum tunnelling. I conclude with some remarks about the strong equivalence principle in quantum mechanics

  14. Higher-order gravity and the classical equivalence principle

    Science.gov (United States)

    Accioly, Antonio; Herdy, Wallace

    2017-11-01

    As is well known, the deflection of any particle by a gravitational field within the context of Einstein’s general relativity — which is a geometrical theory — is, of course, nondispersive. Nevertheless, as we shall show in this paper, the mentioned result will change totally if the bending is analyzed — at the tree level — in the framework of higher-order gravity. Indeed, to first order, the deflection angle corresponding to the scattering of different quantum particles by the gravitational field mentioned above is not only spin dependent, it is also dispersive (energy-dependent). Consequently, it violates the classical equivalence principle (universality of free fall, or equality of inertial and gravitational masses) which is a nonlocal principle. However, contrary to popular belief, it is in agreement with the weak equivalence principle which is nothing but a statement about purely local effects. It is worthy of note that the weak equivalence principle encompasses the classical equivalence principle locally. We also show that the claim that there exists an incompatibility between quantum mechanics and the weak equivalence principle, is incorrect.

  15. Attainment of radiation equivalency principle

    International Nuclear Information System (INIS)

    Shmelev, A.N.; Apseh, V.A.

    2004-01-01

    Problems connected with the prospects for long-term development of nuclear energy are discussed. Basic principles of a future large-scale nuclear power industry are listed, with primary attention given to the safety of radioactive waste management. The radiation equivalence principle implies closure of the fuel cycle and management of nuclear materials transportation with low losses in spent fuel and waste processing. Two aspects are considered: radiation equivalence in the global and in the local sense. The necessity of looking for other fuel-cycle management strategies for radioactive waste management in full-scale nuclear energy is supported. [ru]

  16. Dark matter and the equivalence principle

    Science.gov (United States)

    Frieman, Joshua A.; Gradwohl, Ben-Ami

    1993-01-01

    A survey is presented of the current understanding of dark matter invoked by astrophysical theory and cosmology. Einstein's equivalence principle asserts that local measurements cannot distinguish a system at rest in a gravitational field from one that is in uniform acceleration in empty space. Recent test-methods for the equivalence principle are presently discussed as bases for testing of dark matter scenarios involving the long-range forces between either baryonic or nonbaryonic dark matter and ordinary matter.

  17. Equivalence principle violations and couplings of a light dilaton

    International Nuclear Information System (INIS)

    Damour, Thibault; Donoghue, John F.

    2010-01-01

    We consider possible violations of the equivalence principle through the exchange of a light 'dilaton-like' scalar field. Using recent work on the quark-mass dependence of nuclear binding, we find that the dilaton-quark-mass coupling induces significant equivalence-principle-violating effects varying like the inverse cube root of the atomic number, A^(-1/3). We provide a general parametrization of the scalar couplings, but argue that two parameters are likely to dominate the equivalence-principle phenomenology. We indicate the implications of this framework for comparing the sensitivities of current and planned experimental tests of the equivalence principle.

  18. The equivalence principle in classical mechanics and quantum mechanics

    OpenAIRE

    Mannheim, Philip D.

    1998-01-01

    We discuss our understanding of the equivalence principle in both classical mechanics and quantum mechanics. We show that not only does the equivalence principle hold for the trajectories of quantum particles in a background gravitational field, but also that it is only because of this that the equivalence principle is even to be expected to hold for classical particles at all.

  19. Quantum mechanics from an equivalence principle

    International Nuclear Information System (INIS)

    Faraggi, A.E.

    1997-01-01

    The authors show that requiring diffeomorphic equivalence for one-dimensional stationary states implies that the reduced action S_0 satisfies the quantum Hamilton-Jacobi equation with the Planck constant playing the role of a covariantizing parameter. The construction shows the existence of a fundamental initial condition which is strictly related to the Möbius symmetry of the Legendre transform and to its involutive character. The universal nature of the initial condition implies the Schrödinger equation in any dimension.
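
    For reference, the one-dimensional quantum stationary Hamilton-Jacobi equation mentioned in the abstract is usually quoted in the schematic form below (Faraggi-Matone); details and conventions may differ from the paper.

    ```latex
    % Quantum stationary Hamilton-Jacobi equation for the reduced action S_0;
    % \{S_0; q\} denotes the Schwarzian derivative of S_0 with respect to q.
    \frac{1}{2m}\left(\frac{\partial S_0}{\partial q}\right)^{2}
      + V(q) - E
      + \frac{\hbar^{2}}{4m}\,\{S_0; q\} = 0 .
    ```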

  20. Cryogenic test of the equivalence principle

    International Nuclear Information System (INIS)

    Worden, P.W. Jr.

    1976-01-01

    The weak equivalence principle is the hypothesis that the ratio of inertial and passive gravitational mass is the same for all bodies. A greatly improved test of this principle is possible in an orbiting satellite. The most promising experiments for an orbital test are adaptations of the Galilean free-fall experiment and the Eötvös balance. Sensitivity to gravity gradient noise, both from the earth and from the spacecraft, defines a limit to the sensitivity in each case. This limit is generally much worse for an Eötvös balance than for a properly designed free-fall experiment. The difference is related to the difficulty of making a balance sufficiently isoinertial. Cryogenic technology is desirable to take full advantage of the potential sensitivity, but tides in the liquid helium refrigerant may produce a gravity gradient that seriously degrades the ultimate sensitivity. The Eötvös balance appears to have a limiting sensitivity to the relative difference of rate of fall of about 2×10^-14 in orbit. The free-fall experiment is limited by the helium tide to about 10^-15; if the tide can be controlled or eliminated the limit may approach 10^-18. Other limitations to equivalence principle experiments are discussed. An experimental test of some of the concepts involved in the orbital free-fall experiment is continuing. The experiment consists of comparing the motions of test masses levitated in a superconducting magnetic bearing, and is itself a sensitive test of the equivalence principle. At present the levitation magnets, position monitors and control coils have been tested and major noise sources identified. A measurement of the equivalence principle is postponed pending development of a system for digitizing data. The experiment and preliminary results are described.

  1. Principle of natural and artificial radioactive series equivalency

    International Nuclear Information System (INIS)

    Vasilyeva, A.N.; Starkov, O.V.

    2001-01-01

    In the present paper one approach used in the development of a radioactive waste management conception is considered. This approach is based on the principle of radiotoxic equivalence of natural and artificial radioactive series. The radioactivity of the natural and artificial radioactive series has been calculated over a 10^9-year period. A toxicity evaluation for the natural and artificial series has also been made. The correlation between the natural radioactive series and their predecessors - actinides produced in thermal and fast reactors - has been considered. It is shown that systematized reactor series data have great scientific significance and that differential calculation of radiotoxicity is necessary to realize the conception of radiotoxic equivalence between long-lived radioactive waste and uranium and thorium ores. The calculations show that fulfilment of the equivalency principle is possible for the uranium series (4n+2, 4n+1); it is problematic for the thorium series and impracticable for the neptunium series. (author)

  2. Testing the principle of equivalence by solar neutrinos

    International Nuclear Information System (INIS)

    Minakata, Hisakazu; Washington Univ., Seattle, WA; Nunokawa, Hiroshi; Washington Univ., Seattle, WA

    1994-04-01

    We discuss the possibility of testing the principle of equivalence with solar neutrinos. If there exists a violation of the equivalence principle, quarks and leptons with different flavors may not universally couple with gravity. The method we discuss employs the quantum mechanical phenomenon of neutrino oscillation to probe the nonuniversality of the gravitational couplings of neutrinos. We develop an appropriate formalism to deal with neutrino propagation under the weak gravitational fields of the sun in the presence of flavor mixing. We point out that solar neutrino observation by the next generation water Cherenkov detectors can improve the existing bound on violation of the equivalence principle by 3-4 orders of magnitude if the nonadiabatic Mikheyev-Smirnov-Wolfenstein mechanism is the solution to the solar neutrino problem.

  3. Testing the principle of equivalence by solar neutrinos

    International Nuclear Information System (INIS)

    Minakata, H.; Nunokawa, H.

    1995-01-01

    We discuss the possibility of testing the principle of equivalence with solar neutrinos. If there exists a violation of the equivalence principle, quarks and leptons with different flavors may not universally couple with gravity. The method we discuss employs the quantum mechanical phenomenon of neutrino oscillation to probe into the nonuniversality of the gravitational couplings of neutrinos. We develop an appropriate formalism to deal with neutrino propagation under the weak gravitational fields of the Sun in the presence of the flavor mixing. We point out that solar neutrino observation by the next generation water Cherenkov detectors can place stringent bounds on the violation of the equivalence principle to 1 part in 10^15-10^16 if the nonadiabatic Mikheyev-Smirnov-Wolfenstein mechanism is the solution to the solar neutrino problem.

  4. Comments on field equivalence principles

    DEFF Research Database (Denmark)

    Appel-Hansen, Jørgen

    1987-01-01

    It is pointed out that often-used arguments based on a short-circuit concept in presentations of field equivalence principles are not correct. An alternative presentation based on the uniqueness theorem is given. It does not contradict the results obtained by using the short-circuit concept...

  5. A Note on the Equivalence between the Normal and the Lognormal Implied Volatility : A Model Free Approach

    OpenAIRE

    Grunspan, Cyril

    2011-01-01

    First, we show that the implied normal volatility is intimately linked with the incomplete Gamma function. We then deduce an expansion of the implied normal volatility in terms of the time-value of a European call option. We then formulate an equivalence between the implied normal volatility and the lognormal implied volatility for any strike and any model. This generalizes a known result for the SABR model. Finally, we address the issue of the "breakeven move" of a delta-hedged portfolio.
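
    A much weaker but easy-to-check special case of such an equivalence is the leading-order at-the-money (K = F) relation between the two volatilities; the sketch below is illustrative only (the function name and numbers are mine), whereas the paper derives a model-free relation valid for any strike.

    ```python
    def atm_normal_vol(sigma_ln: float, forward: float, expiry: float) -> float:
        """Leading-order at-the-money conversion from lognormal (Black)
        implied volatility to normal (Bachelier) implied volatility,
        obtained by matching the two ATM call prices to third order."""
        return forward * sigma_ln * (1.0 - sigma_ln ** 2 * expiry / 24.0)

    # Example: F = 100, 20% lognormal vol, 1-year expiry -> ~19.97 normal vol
    print(round(atm_normal_vol(0.20, 100.0, 1.0), 2))
    ```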

  6. Apparent violation of the principle of equivalence and Killing horizons

    International Nuclear Information System (INIS)

    Zimmerman, R.L.; Farhoosh, H.; Oregon Univ., Eugene

    1980-01-01

    By means of the principle of equivalence, the qualitative behavior of the Schwarzschild horizon about a uniformly accelerating particle is deduced. This result is confirmed for an exact solution of a uniformly accelerating object in the limit of small accelerations. For large accelerations the Schwarzschild horizon appears to violate the qualitative behavior established via the principle of equivalence. When similar arguments are extended to an observable such as the red shift between two observers, there is no departure from the results expected from the principle of equivalence. The resolution of the paradox is brought about by a compensating effect due to the Rindler horizon. (author)

  7. The principle of general covariance and the principle of equivalence: two distinct concepts

    International Nuclear Information System (INIS)

    Fagundes, H.V.

    It is shown how to construct a theory with general covariance but without the equivalence principle. Such a theory is in disagreement with experiment, but it serves to illustrate the independence of the former principle from the latter one. [pt]

  8. The Bohr-Einstein "weighing-of-energy" debate and the principle of equivalence

    International Nuclear Information System (INIS)

    Hughes, R.J.

    1990-01-01

    The Bohr-Einstein debate over the "weighing of energy" and the validity of the time-energy uncertainty relation is reexamined in the context of gravitation theories that do not respect the equivalence principle. Bohr's use of the equivalence principle is shown to be sufficient, but not necessary, to establish the validity of this uncertainty relation in Einstein's "weighing-of-energy" gedanken experiment. The uncertainty relation is shown to hold in any energy-conserving theory of gravity, and so a failure of the equivalence principle does not engender a failure of quantum mechanics. The relationship between the gravitational redshift and the equivalence principle is reviewed.

  9. Extended Equivalence Principle: Implications for Gravity, Geometry and Thermodynamics

    OpenAIRE

    Sivaram, C.; Arun, Kenath

    2012-01-01

    The equivalence principle was formulated by Einstein in an attempt to extend the concept of inertial frames to accelerated frames, thereby bringing in gravity. In recent decades, it has been realised that gravity is linked not only with geometry of space-time but also with thermodynamics especially in connection with black hole horizons, vacuum fluctuations, dark energy, etc. In this work we look at how the equivalence principle manifests itself in these different situations where we have str...

  10. Cosmological equivalence principle and the weak-field limit

    International Nuclear Information System (INIS)

    Wiltshire, David L.

    2008-01-01

    The strong equivalence principle is extended in application to averaged dynamical fields in cosmology to include the role of the average density in the determination of inertial frames. The resulting cosmological equivalence principle is applied to the problem of synchronization of clocks in the observed universe. Once density perturbations grow to give density contrasts of order 1 on scales of tens of megaparsecs, the integrated deceleration of the local background regions of voids relative to galaxies must be accounted for in the relative synchronization of clocks of ideal observers who measure an isotropic cosmic microwave background. The relative deceleration of the background can be expected to represent a scale at which weak-field Newtonian dynamics should be modified to account for dynamical gradients in the Ricci scalar curvature of space. This acceleration scale is estimated using the best-fit nonlinear bubble model of the universe with backreaction. Although the relative deceleration of the background, typically of order 10^-10 m s^-2, is small, when integrated over the lifetime of the universe it amounts to an accumulated relative difference of 38% in the rate of average clocks in galaxies as compared to volume-average clocks in the emptiness of voids. A number of foundational aspects of the cosmological equivalence principle are also discussed, including its relation to Mach's principle, the Weyl curvature hypothesis, and the initial conditions of the universe.

  11. A Technique of Teaching the Principle of Equivalence at Ground Level

    Science.gov (United States)

    Lubrica, Joel V.

    2016-01-01

    This paper presents one way of demonstrating the Principle of Equivalence in the classroom. Teaching the Principle of Equivalence involves someone experiencing acceleration through empty space, juxtaposed with the daily encounter with gravity. This classroom activity is demonstrated with a water-filled bottle containing glass marbles and…

  12. Probing Students' Ideas of the Principle of Equivalence

    Science.gov (United States)

    Bandyopadhyay, Atanu; Kumar, Arvind

    2011-01-01

    The principle of equivalence was the first vital clue to Einstein in his extension of special relativity to general relativity, the modern theory of gravitation. In this paper we investigate in some detail students' understanding of this principle in a variety of contexts, when they are undergoing an introductory course on general relativity. The…

  13. Can quantum probes satisfy the weak equivalence principle?

    International Nuclear Information System (INIS)

    Seveso, Luigi; Paris, Matteo G.A.

    2017-01-01

    We address the question whether quantum probes in a gravitational field can be considered as test particles obeying the weak equivalence principle (WEP). A formulation of the WEP is proposed which applies also in the quantum regime, while maintaining the physical content of its classical counterpart. Such formulation requires the introduction of a gravitational field not to modify the Fisher information about the mass of a freely-falling probe, extractable through measurements of its position. We discover that, while in a uniform field quantum probes satisfy our formulation of the WEP exactly, gravity gradients can encode nontrivial information about the particle’s mass in its wavefunction, leading to violations of the WEP. - Highlights: • Can quantum probes under gravity be approximated as test-bodies? • A formulation of the weak equivalence principle for quantum probes is proposed. • Quantum probes are found to violate it as a matter of principle.

  14. Can quantum probes satisfy the weak equivalence principle?

    Energy Technology Data Exchange (ETDEWEB)

    Seveso, Luigi, E-mail: luigi.seveso@unimi.it [Quantum Technology Lab, Dipartimento di Fisica, Università degli Studi di Milano, I-20133 Milano (Italy); Paris, Matteo G.A. [Quantum Technology Lab, Dipartimento di Fisica, Università degli Studi di Milano, I-20133 Milano (Italy); INFN, Sezione di Milano, I-20133 Milano (Italy)

    2017-05-15

    We address the question whether quantum probes in a gravitational field can be considered as test particles obeying the weak equivalence principle (WEP). A formulation of the WEP is proposed which applies also in the quantum regime, while maintaining the physical content of its classical counterpart. Such formulation requires the introduction of a gravitational field not to modify the Fisher information about the mass of a freely-falling probe, extractable through measurements of its position. We discover that, while in a uniform field quantum probes satisfy our formulation of the WEP exactly, gravity gradients can encode nontrivial information about the particle’s mass in its wavefunction, leading to violations of the WEP. - Highlights: • Can quantum probes under gravity be approximated as test-bodies? • A formulation of the weak equivalence principle for quantum probes is proposed. • Quantum probes are found to violate it as a matter of principle.

  15. Quantum mechanics in noninertial reference frames: Violations of the nonrelativistic equivalence principle

    International Nuclear Information System (INIS)

    Klink, W.H.; Wickramasekara, S.

    2014-01-01

    In previous work we have developed a formulation of quantum mechanics in non-inertial reference frames. This formulation is grounded in a class of unitary cocycle representations of what we have called the Galilean line group, the generalization of the Galilei group that includes transformations amongst non-inertial reference frames. These representations show that in quantum mechanics, just as is the case in classical mechanics, the transformations to accelerating reference frames give rise to fictitious forces. A special feature of these previously constructed representations is that they all respect the non-relativistic equivalence principle, wherein the fictitious forces associated with linear acceleration can equivalently be described by gravitational forces. In this paper we exhibit a large class of cocycle representations of the Galilean line group that violate the equivalence principle. Nevertheless the classical mechanics analogue of these cocycle representations all respect the equivalence principle. -- Highlights: •A formulation of Galilean quantum mechanics in non-inertial reference frames is given. •The key concept is the Galilean line group, an infinite dimensional group. •A large class of general cocycle representations of the Galilean line group is constructed. •These representations show violations of the equivalence principle at the quantum level. •At the classical limit, no violations of the equivalence principle are detected

  16. A weak equivalence principle test on a suborbital rocket

    Energy Technology Data Exchange (ETDEWEB)

    Reasenberg, Robert D; Phillips, James D, E-mail: reasenberg@cfa.harvard.ed [Smithsonian Astrophysical Observatory, Harvard-Smithsonian Center for Astrophysics, Cambridge, MA 02138 (United States)

    2010-05-07

    We describe a Galilean test of the weak equivalence principle, to be conducted during the free fall portion of a sounding rocket flight. The test of a single pair of substances is aimed at a measurement uncertainty of σ(η) < 10^-16 after averaging the results of eight separate drops. The weak equivalence principle measurement is made with a set of four laser gauges that are expected to achieve 0.1 pm Hz^-1/2. The discovery of a violation (η ≠ 0) would have profound implications for physics, astrophysics and cosmology.
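
    A back-of-the-envelope reading of the quoted goal, assuming the eight drops give statistically independent measurements of η with equal uncertainty, is sketched below (illustrative only; the variable names are mine).

    ```python
    import math

    # If N independent drops each yield eta with 1-sigma uncertainty
    # sigma_drop, the average has uncertainty sigma_drop / sqrt(N).
    N_DROPS = 8
    TARGET_SIGMA_ETA = 1e-16  # mission goal quoted in the abstract

    sigma_per_drop = TARGET_SIGMA_ETA * math.sqrt(N_DROPS)
    print(f"required per-drop uncertainty ~ {sigma_per_drop:.1e}")  # ~2.8e-16
    ```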

  17. Test of the Equivalence Principle in the Dark sector on galactic scales

    International Nuclear Information System (INIS)

    Mohapi, N.; Hees, A.; Larena, J.

    2016-01-01

    The Einstein Equivalence Principle is a fundamental principle of the theory of General Relativity. While this principle has been thoroughly tested with standard matter, the question of its validity in the Dark sector remains open. In this paper, we consider a general tensor-scalar theory that allows the equivalence principle to be tested in the Dark sector by introducing two different conformal couplings to standard matter and to Dark matter. We constrain these couplings by considering galactic observations of strong lensing and of velocity dispersion. Our analysis shows that, in the case of a violation of the Einstein Equivalence Principle, the data favour violations through coupling strengths that are of opposite signs for ordinary and Dark matter. At the same time, our analysis does not show any significant deviations from General Relativity.

  18. Violation of Equivalence Principle and Solar Neutrinos

    International Nuclear Information System (INIS)

    Gago, A.M.; Nunokawa, H.; Zukanovich Funchal, R.

    2001-01-01

    We have updated the analysis for the solution to the solar neutrino problem by the long-wavelength neutrino oscillations induced by a tiny breakdown of the weak equivalence principle of general relativity, and obtained a very good fit to all the solar neutrino data

  19. On experimental testing of the weak equivalence principle for the neutron

    International Nuclear Information System (INIS)

    Pokotilovskij, Yu.N.

    1994-01-01

    Considerations are presented on the experimental situation regarding the verification of the weak equivalence principle for the neutron. A direct method is proposed to significantly increase (to ~10^-6) the precision of the test of the equivalence principle for the neutron in a Galileo-type experiment, which uses a thin-film Fabry-Perot interferometer and precise time-of-flight spectrometry of ultracold neutrons.

  20. Einstein's equivalence principle instead of the inertia forces

    International Nuclear Information System (INIS)

    Herreros Mateos, F.

    1997-01-01

    In this article I intend to show that Einstein's equivalence principle advantageously replaces the inertial forces in the study and resolution of problems in which non-inertial systems appear. (Author) 13 refs.

  1. The principle of equivalence and the Trojan asteroids

    International Nuclear Information System (INIS)

    Orellana, R.; Vucetich, H.

    1986-05-01

    An analysis of the Trojan asteroids motion has been carried out in order to set limits to possible violations to the principle of equivalence. Preliminary results, in agreement with general relativity, are reported. (author)

  2. The equivalence principle in a quantum world

    DEFF Research Database (Denmark)

    Bjerrum-Bohr, N. Emil J.; Donoghue, John F.; El-Menoufi, Basem Kamal

    2015-01-01

    We show how modern methods can be applied to quantum gravity at low energy. We test how quantum corrections challenge the classical framework behind the equivalence principle (EP), for instance through introduction of nonlocality from quantum physics, embodied in the uncertainty principle. When the energy is small, we now have the tools to address this conflict explicitly. Despite the violation of some classical concepts, the EP continues to provide the core of the quantum gravity framework through the symmetry - general coordinate invariance - that is used to organize the effective field theory...

  3. Free Fall and the Equivalence Principle Revisited

    Science.gov (United States)

    Pendrill, Ann-Marie

    2017-01-01

    Free fall is commonly discussed as an example of the equivalence principle, in the context of a homogeneous gravitational field, which is a reasonable approximation for small test masses falling moderate distances. Newton's law of gravity provides a generalisation to larger distances, and also brings in an inhomogeneity in the gravitational field.…

  4. Tests of the equivalence principle with neutral kaons

    CERN Document Server

    Apostolakis, Alcibiades J; Backenstoss, Gerhard; Bargassa, P; Behnke, O; Benelli, A; Bertin, V; Blanc, F; Bloch, P; Carlson, P J; Carroll, M; Cawley, E; Chardin, G; Chertok, M B; Danielsson, M; Dejardin, M; Derré, J; Ealet, A; Eleftheriadis, C; Faravel, L; Fetscher, W; Fidecaro, Maria; Filipcic, A; Francis, D; Fry, J; Gabathuler, Erwin; Gamet, R; Gerber, H J; Go, A; Haselden, A; Hayman, P J; Henry-Coüannier, F; Hollander, R W; Jon-And, K; Kettle, P R; Kokkas, P; Kreuger, R; Le Gac, R; Leimgruber, F; Mandic, I; Manthos, N; Marel, Gérard; Mikuz, M; Miller, J; Montanet, François; Müller, A; Nakada, Tatsuya; Pagels, B; Papadopoulos, I M; Pavlopoulos, P; Polivka, G; Rickenbach, R; Roberts, B L; Ruf, T; Sakelliou, L; Schäfer, M; Schaller, L A; Schietinger, T; Schopper, A; Tauscher, Ludwig; Thibault, C; Touchard, F; Touramanis, C; van Eijk, C W E; Vlachos, S; Weber, P; Wigger, O; Wolter, M; Zavrtanik, D; Zimmerman, D; Ellis, Jonathan Richard; Mavromatos, Nikolaos E; Nanopoulos, Dimitri V

    1999-01-01

    We test the Principle of Equivalence for particles and antiparticles, using CPLEAR data on tagged K⁰ and K̄⁰ decays into π⁺π⁻. For the first time, we search for possible annual, monthly and diurnal modulations of the observables |η+-| and φ+-, that could be correlated with variations in astrophysical potentials. Within the accuracy of CPLEAR, the measured values of |η+-| and φ+- are found not to be correlated with changes of the gravitational potential. We analyze data assuming effective scalar, vector and tensor interactions, and we conclude that the Principle of Equivalence between particles and antiparticles holds to a level of 6.5, 4.3 and 1.8 × 10^-9, respectively, for scalar, vector and tensor potentials originating from the Sun with a range much greater than the distance Earth-Sun. We also study energy-dependent effects that might arise from vector or tensor interactions. Finally, we compile upper limits on the gravitational coupling difference betwee...

  5. High energy cosmic neutrinos and the equivalence principle

    International Nuclear Information System (INIS)

    Minakata, H.

    1996-01-01

    Observation of ultra-high energy neutrinos, in particular detection of ν_τ, from cosmologically distant sources like active galactic nuclei (AGN) opens new possibilities to search for neutrino flavor conversion. We consider the effects of violation of the equivalence principle (VEP) on the propagation of these cosmic neutrinos. In particular, we discuss two effects: (1) the oscillations of neutrinos due to VEP in the gravitational field of our Galaxy and in intergalactic space; (2) resonance flavor conversion driven by the gravitational potential of the AGN. We show that the ultra-high energies of the neutrinos, as well as the cosmological distances to AGN or the strong AGN gravitational potential, allow the accuracy of tests of the equivalence principle to be improved by 25 orders of magnitude for massless neutrinos (Δf ~ 10^-41) and by 11 orders of magnitude for massive neutrinos (Δf ~ 10^-28 × (Δm²/1 eV²)). The experimental signatures of the transitions induced by VEP are discussed. (author). 17 refs.
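
    The origin of the quoted gains can be seen from a rough scaling argument (illustrative only; numerical factors and conventions vary): in the constant-potential picture the VEP oscillation phase grows linearly with energy, potential and baseline, so the smallest detectable coupling difference falls accordingly.

    ```latex
    % Rough sensitivity scaling for VEP-driven oscillations:
    % \Phi_{\rm VEP} \sim 2\,E\,|\phi|\,\Delta f\,L \gtrsim 1 implies
    \Delta f_{\min} \;\sim\; \frac{1}{2\,E\,|\phi|\,L},
    % so PeV-scale energies and cosmological baselines gain many orders
    % of magnitude over laboratory or solar-system tests.
    ```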

  6. Do positrons and antiprotons respect the weak equivalence principle?

    International Nuclear Information System (INIS)

    Hughes, R.J.

    1990-01-01

    We resolve the difficulties which Morrison identified with energy conservation and the gravitational red-shift when particles of antimatter, such as the positron and antiproton, do not respect the weak equivalence principle. 13 refs

  7. The c equivalence principle and the correct form of writing Maxwell's equations

    International Nuclear Information System (INIS)

    Heras, Jose A

    2010-01-01

    It is well known that the speed c_u = 1/√(ε₀μ₀) is obtained in the process of defining SI units via action-at-a-distance forces, like the force between two static charges and the force between two long and parallel currents. The speed c_u is then physically different from the observed speed of propagation c associated with electromagnetic waves in vacuum. However, repeated experiments have led to the numerical equality c_u = c, which we have called the c equivalence principle. In this paper we point out that ∇×E = −[1/(ε₀μ₀c²)]∂B/∂t is the correct form of writing Faraday's law when the c equivalence principle is not assumed. We also discuss the covariant form of Maxwell's equations without assuming the c equivalence principle.
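
    The role of the c equivalence principle in this form of Faraday's law can be seen at a glance: the prefactor is just c_u²/c², which collapses to unity once c_u = c is assumed.

    ```latex
    % With c_u \equiv 1/\sqrt{\varepsilon_0\mu_0} kept distinct from the
    % observed wave speed c, Faraday's law carries the factor
    % 1/(\varepsilon_0\mu_0 c^2) = c_u^2/c^2:
    \nabla\times\mathbf{E}
      = -\frac{c_u^{2}}{c^{2}}\,\frac{\partial\mathbf{B}}{\partial t}
    \;\;\xrightarrow{\;c_u\,=\,c\;}\;\;
    \nabla\times\mathbf{E} = -\frac{\partial\mathbf{B}}{\partial t}.
    ```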

  8. Uniformly accelerating charged particles. A threat to the equivalence principle

    International Nuclear Information System (INIS)

    Lyle, Stephen N.

    2008-01-01

    There has been a long debate about whether uniformly accelerated charges should radiate electromagnetic energy and how one should describe their worldline through a flat spacetime, i.e., whether the Lorentz-Dirac equation is right. There are related questions in curved spacetimes, e.g., do different varieties of equivalence principle apply to charged particles, and can a static charge in a static spacetime radiate electromagnetic energy? The problems with the LD equation in flat spacetime are spelt out in some detail here, and its extension to curved spacetime is discussed. Different equivalence principles are compared and some vindicated. The key papers are discussed in detail and many of their conclusions are significantly revised by the present solution. (orig.)

  9. Einstein's Equivalence Principle and Invalidity of Thorne's Theory for LIGO

    Directory of Open Access Journals (Sweden)

    Lo C. Y.

    2006-04-01

    The theoretical foundation of LIGO's design is based on the equation of motion derived by Thorne. His formula, motivated by Einstein's theory of measurement, shows that the gravitational wave-induced displacement of a mass with respect to an object is proportional to the distance from the object. On the other hand, based on the observed bending of light and Einstein's equivalence principle, it is concluded that such induced displacement has nothing to do with the distance from another object. It is shown that the derivation of Thorne's formula has invalid assumptions that make it inapplicable to LIGO. This is a good counterexample for those who claimed that Einstein's equivalence principle is not important or even irrelevant.

  10. Mars seasonal polar caps as a test of the equivalence principle

    International Nuclear Information System (INIS)

    Rubincam, David Parry

    2011-01-01

    The seasonal polar caps of Mars can be used to test the equivalence principle in general relativity. The north and south caps, which are composed of carbon dioxide, wax and wane with the seasons. If the ratio of the inertial (passive) to gravitational (active) masses of the caps differs from the same ratio for the rest of Mars, then the equivalence principle fails, Newton's third law fails, and the caps will pull Mars one way and then the other with a force aligned with the planet's spin axis. This leads to a secular change in Mars's along-track position in its orbit about the Sun, and to a secular change in the orbit's semimajor axis. The caps are a poor Eötvös test of the equivalence principle, being 4 orders-of-magnitude weaker than laboratory tests and 7 orders-of-magnitude weaker than that found by lunar laser ranging; the reason is the small mass of the caps compared to Mars as a whole. The principal virtue of using Mars is that the caps contain carbon, an element not normally considered in such experiments. The Earth with its seasonal snow cover can also be used for a similar test.

  11. Mars Seasonal Polar Caps as a Test of the Equivalence Principle

    Science.gov (United States)

    Rubincam, Daivd Parry

    2011-01-01

    The seasonal polar caps of Mars can be used to test the equivalence principle in general relativity. The north and south caps, which are composed of carbon dioxide, wax and wane with the seasons. If the ratio of the inertial to gravitational masses of the caps differs from the same ratio for the rest of Mars, then the equivalence principle fails, Newton's third law fails, and the caps will pull Mars one way and then the other with a force aligned with the planet's spin axis. This leads to a secular change in Mars's along-track position in its orbit about the Sun, and to a secular change in the orbit's semimajor axis. The caps are a poor Eötvös test of the equivalence principle, being 4 orders-of-magnitude weaker than laboratory tests and 7 orders-of-magnitude weaker than that found by lunar laser ranging; the reason is the small mass of the caps compared to Mars as a whole. The principal virtue of using Mars is that the caps contain carbon, an element not normally considered in such experiments. The Earth with its seasonal snow cover can also be used for a similar test.

  12. Cosmology with equivalence principle breaking in the dark sector

    International Nuclear Information System (INIS)

    Keselman, Jose Ariel; Nusser, Adi; Peebles, P. J. E.

    2010-01-01

    A long-range force acting only between nonbaryonic particles would be associated with a large violation of the weak equivalence principle. We explore cosmological consequences of this idea, which we label ReBEL (daRk Breaking Equivalence principLe). A high resolution hydrodynamical simulation of the distributions of baryons and dark matter confirms our previous findings that a ReBEL force of comparable strength to gravity on comoving scales of about 1 h^-1 Mpc causes voids between the concentrations of large galaxies to be more nearly empty, suppresses accretion of intergalactic matter onto galaxies at low redshift, and produces an early generation of dense dark-matter halos. A preliminary analysis indicates the ReBEL scenario is consistent with the one-dimensional power spectrum of the Lyman-alpha forest and the three-dimensional galaxy autocorrelation function. Segregation of baryons and DM in galaxies and systems of galaxies is a strong prediction of ReBEL. ReBEL naturally correlates the baryon mass fraction in groups and clusters of galaxies with the system mass, in agreement with recent measurements.

  13. The Equivalence Principle and Anomalous Magnetic Moment Experiments

    OpenAIRE

    Alvarez, C.; Mann, R. B.

    1995-01-01

    We investigate the possibility of testing the Einstein Equivalence Principle (EEP) using measurements of anomalous magnetic moments of elementary particles. We compute the one-loop correction for the g−2 anomaly within the class of nonmetric theories of gravity described by the THεμ formalism. We find several novel mechanisms for breaking the EEP whose origin is due purely to radiative corrections. We discuss the possibilities of setting new empirical constraints on these effects.

  14. Phenomenology of the Equivalence Principle with Light Scalars

    OpenAIRE

    Damour, Thibault; Donoghue, John F.

    2010-01-01

    Light scalar particles with couplings of sub-gravitational strength, which can generically be called 'dilatons', can produce violations of the equivalence principle. However, in order to understand experimental sensitivities one must know the coupling of these scalars to atomic systems. We report here on a study of the required couplings. We give a general Lagrangian with five independent dilaton parameters and calculate the "dilaton charge" of atomic systems for each of these. Two combinatio...

  15. Test masses for the G-POEM test of the weak equivalence principle

    International Nuclear Information System (INIS)

    Reasenberg, Robert D; Phillips, James D; Popescu, Eugeniu M

    2011-01-01

    We describe the design of the test masses that are used in the 'ground-based principle of equivalence measurement' test of the weak equivalence principle. The main features of the design are the incorporation of corner cubes and the use of mass removal and replacement to create pairs of test masses with different test substances. The corner cubes allow for the vertical separation of the test masses to be measured with picometer accuracy by SAO's unique tracking frequency laser gauge, while the mass removal and replacement operations are arranged so that the test masses incorporating different test substances have nominally identical gravitational properties. (papers)

  16. Relativity and equivalence principles in the gauge theory of gravitation

    International Nuclear Information System (INIS)

    Ivanenko, D.; Sardanashvili, G.

    1981-01-01

    The roles of the relativity principle (RP) and the equivalence principle (EP) in the gauge theory of gravity are shown. In the fibre bundle (fibration) formalism, RP in gravitational theory can be formulated as the requirement of covariance of the equations relative to the GL⁺(4,R)(X) gauge group. In that case RP turns out to be identical to the gauge principle in the gauge theory of a group of external symmetries, and the gravitational theory can be constructed directly as a gauge theory. In general relativity the equivalence principle supplements RP and is intended to describe the transition to special relativity in some reference frame. The approach described takes into account that in gauge theory, besides gauge fields, Goldstone and Higgs fields can also arise under conditions of spontaneous symmetry breaking; the gravitational metric field is related to the latter, which is a consequence of taking account of RP in the gauge theory of gravitation.

  17. Equivalence of Dirac quantization and Schwinger's action principle quantization

    International Nuclear Information System (INIS)

    Das, A.; Scherer, W.

    1987-01-01

    We show that the method of Dirac quantization is equivalent to Schwinger's action principle quantization. The relation between the Lagrange undetermined multipliers in Schwinger's method and Dirac's constraint bracket matrix is established and it is explicitly shown that the two methods yield identical (anti)commutators. This is demonstrated in the non-trivial example of supersymmetric quantum mechanics in superspace. (orig.)

  18. Possible test of the strong principle of equivalence

    International Nuclear Information System (INIS)

    Brecher, K.

    1978-01-01

    We suggest that redshift determinations of X-ray and γ-ray lines produced near the surface of neutron stars which arise from different physical processes could provide a significant test of the strong principle of equivalence for strong gravitational fields. As a complement to both the high-precision weak-field solar-system experiments and the cosmological time variation searches, such observations could further test the hypothesis that physics is locally the same at all times and in all places

  19. Strong quantum violation of the gravitational weak equivalence principle by a non-Gaussian wave packet

    International Nuclear Information System (INIS)

    Chowdhury, P; Majumdar, A S; Sinha, S; Home, D; Mousavi, S V; Mozaffari, M R

    2012-01-01

    The weak equivalence principle of gravity is examined at the quantum level in two ways. First, the position detection probabilities of particles described by a non-Gaussian wave packet projected upwards against gravity around the classical turning point and also around the point of initial projection are calculated. These probabilities exhibit mass dependence at both these points, thereby reflecting the quantum violation of the weak equivalence principle. Second, the mean arrival time of freely falling particles is calculated using the quantum probability current, which also turns out to be mass dependent. Such a mass dependence is shown to be enhanced by increasing the non-Gaussianity parameter of the wave packet, thus signifying a stronger violation of the weak equivalence principle through a greater departure from Gaussianity of the initial wave packet. The mass dependence of both the position detection probabilities and the mean arrival time vanishes in the limit of large mass. Thus, compatibility between the weak equivalence principle and quantum mechanics is recovered in the macroscopic limit of the latter. A selection of Bohm trajectories is exhibited to illustrate these features in the free fall case. (paper)
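
    The large-mass limit referred to at the end of the abstract follows a generic pattern: quantum corrections enter through ħ/m. The Gaussian free-packet formula below is a simpler illustration of that scaling, not the paper's non-Gaussian calculation.

    ```latex
    % Spreading of a free Gaussian packet of initial width \sigma_0:
    \sigma(t) = \sigma_0\,\sqrt{1 + \left(\frac{\hbar\,t}{2 m \sigma_0^{2}}\right)^{2}},
    % so mass-dependent corrections to detection probabilities and arrival
    % times are controlled by \hbar/m and vanish as m \to \infty.
    ```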

  20. MICROSCOPE Mission: First Results of a Space Test of the Equivalence Principle.

    Science.gov (United States)

    Touboul, Pierre; Métris, Gilles; Rodrigues, Manuel; André, Yves; Baghi, Quentin; Bergé, Joël; Boulanger, Damien; Bremer, Stefanie; Carle, Patrice; Chhun, Ratana; Christophe, Bruno; Cipolla, Valerio; Damour, Thibault; Danto, Pascale; Dittus, Hansjoerg; Fayet, Pierre; Foulon, Bernard; Gageant, Claude; Guidotti, Pierre-Yves; Hagedorn, Daniel; Hardy, Emilie; Huynh, Phuong-Anh; Inchauspe, Henri; Kayser, Patrick; Lala, Stéphanie; Lämmerzahl, Claus; Lebat, Vincent; Leseur, Pierre; Liorzou, Françoise; List, Meike; Löffler, Frank; Panet, Isabelle; Pouilloux, Benjamin; Prieur, Pascal; Rebray, Alexandre; Reynaud, Serge; Rievers, Benny; Robert, Alain; Selig, Hanns; Serron, Laura; Sumner, Timothy; Tanguy, Nicolas; Visser, Pieter

    2017-12-08

    According to the weak equivalence principle, all bodies should fall at the same rate in a gravitational field. The MICROSCOPE satellite, launched in April 2016, aims to test its validity at the 10^{-15} precision level, by measuring the force required to maintain two test masses (of titanium and platinum alloys) exactly in the same orbit. A nonvanishing result would correspond to a violation of the equivalence principle, or to the discovery of a new long-range force. Analysis of the first data gives δ(Ti,Pt)=[-1±9(stat)±9(syst)]×10^{-15} (1σ statistical uncertainty) for the titanium-platinum Eötvös parameter characterizing the relative difference in their free-fall accelerations.

  1. MICROSCOPE Mission: First Results of a Space Test of the Equivalence Principle

    Science.gov (United States)

    Touboul, Pierre; Métris, Gilles; Rodrigues, Manuel; André, Yves; Baghi, Quentin; Bergé, Joël; Boulanger, Damien; Bremer, Stefanie; Carle, Patrice; Chhun, Ratana; Christophe, Bruno; Cipolla, Valerio; Damour, Thibault; Danto, Pascale; Dittus, Hansjoerg; Fayet, Pierre; Foulon, Bernard; Gageant, Claude; Guidotti, Pierre-Yves; Hagedorn, Daniel; Hardy, Emilie; Huynh, Phuong-Anh; Inchauspe, Henri; Kayser, Patrick; Lala, Stéphanie; Lämmerzahl, Claus; Lebat, Vincent; Leseur, Pierre; Liorzou, Françoise; List, Meike; Löffler, Frank; Panet, Isabelle; Pouilloux, Benjamin; Prieur, Pascal; Rebray, Alexandre; Reynaud, Serge; Rievers, Benny; Robert, Alain; Selig, Hanns; Serron, Laura; Sumner, Timothy; Tanguy, Nicolas; Visser, Pieter

    2017-12-01

    According to the weak equivalence principle, all bodies should fall at the same rate in a gravitational field. The MICROSCOPE satellite, launched in April 2016, aims to test its validity at the 10^{-15} precision level, by measuring the force required to maintain two test masses (of titanium and platinum alloys) exactly in the same orbit. A nonvanishing result would correspond to a violation of the equivalence principle, or to the discovery of a new long-range force. Analysis of the first data gives δ(Ti,Pt)=[-1±9(stat)±9(syst)]×10^{-15} (1σ statistical uncertainty) for the titanium-platinum Eötvös parameter characterizing the relative difference in their free-fall accelerations.

  2. Test of the Equivalence Principle in an Einstein Elevator

    Science.gov (United States)

    Shapiro, Irwin I.; Glashow, S.; Lorenzini, E. C.; Cosmo, M. L.; Cheimets, P. N.; Finkelstein, N.; Schneps, M.

    2005-01-01

    This Annual Report illustrates the work carried out during the last grant-year activity on the Test of the Equivalence Principle in an Einstein Elevator. The activity focused on the following main topics: (1) analysis and conceptual design of a detector configuration suitable for the flight tests; (2) development of techniques for extracting a small signal from data strings with colored and white noise; (3) design of the mechanism that spins and releases the instrument package inside the cryostat; and (4) experimental activity carried out by our non-US partners (a summary is shown in this report). The analysis and conceptual design of the flight detector (point 1) was focused on studying the response of the differential accelerometer during free fall, in the presence of errors and precession dynamics, for various detector configurations. The goal was to devise a detector configuration in which an Equivalence Principle violation (EPV) signal at the sensitivity threshold level can be successfully measured and resolved out of a much stronger dynamics-related noise and gravity gradient. A detailed analysis and comprehensive simulation effort led us to a detector design that can accomplish that goal successfully.

  3. The c equivalence principle and the correct form of writing Maxwell's equations

    Energy Technology Data Exchange (ETDEWEB)

    Heras, Jose A, E-mail: herasgomez@gmail.co [Universidad Autonoma Metropolitana Unidad Azcapotzalco, Av. San Pablo No. 180, Col. Reynosa, 02200, Mexico DF (Mexico)

    2010-09-15

    It is well known that the speed c_u = 1/√(ε₀μ₀) is obtained in the process of defining SI units via action-at-a-distance forces, like the force between two static charges and the force between two long and parallel currents. The speed c_u is then physically different from the observed speed of propagation c associated with electromagnetic waves in vacuum. However, repeated experiments have led to the numerical equality c_u = c, which we have called the c equivalence principle. In this paper we point out that ∇×E = -[1/(ε₀μ₀c²)]∂B/∂t is the correct form of writing Faraday's law when the c equivalence principle is not assumed. We also discuss the covariant form of Maxwell's equations without assuming the c equivalence principle.
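
    In clean notation, the relation quoted in the abstract distinguishes the unit-defining speed c_u = 1/√(ε₀μ₀) from the measured propagation speed c; Faraday's law then carries this combination explicitly and reduces to the familiar form only once the c equivalence principle c_u = c is assumed (a typeset restatement of the abstract, not an addition to it):

        \nabla \times \mathbf{E} \;=\; -\,\frac{1}{\epsilon_0 \mu_0 c^{2}}\,\frac{\partial \mathbf{B}}{\partial t}
        \quad\longrightarrow\quad
        \nabla \times \mathbf{E} \;=\; -\,\frac{\partial \mathbf{B}}{\partial t}
        \qquad \text{when } c_u \equiv 1/\sqrt{\epsilon_0 \mu_0} = c .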

  4. Faster than light motion does not imply time travel

    International Nuclear Information System (INIS)

    Andréka, Hajnal; Madarász, Judit X; Németi, István; Székely, Gergely; Stannett, Mike

    2014-01-01

    Seeing the many examples in the literature of causality violations based on faster than light (FTL) signals one naturally thinks that FTL motion leads inevitably to the possibility of time travel. We show that this logical inference is invalid by demonstrating a model, based on (3+1)-dimensional Minkowski spacetime, in which FTL motion is permitted (in every direction without any limitation on speed) yet which does not admit time travel. Moreover, the Principle of Relativity is true in this model in the sense that all observers are equivalent. In short, FTL motion does not imply time travel after all. (paper)

  5. Violation of the equivalence principle for stressed bodies in asynchronous relativity

    Energy Technology Data Exchange (ETDEWEB)

    Andrade Martins, R. de (Centro de Logica, Epistemologia e Historia da Ciencia, Campinas (Brazil))

    1983-12-11

    In the recently developed asynchronous formulation of the relativistic theory of extended bodies, the inertial mass of a body does not explicitly depend on its pressure or stress. The detailed analysis of the weight of a box filled with a gas and placed in a weak gravitational field shows that this feature of asynchronous relativity implies a breakdown of the equivalence between inertial and passive gravitational mass for stressed systems.

  6. Equivalence principle and the baryon acoustic peak

    Science.gov (United States)

    Baldauf, Tobias; Mirbabayi, Mehrdad; Simonović, Marko; Zaldarriaga, Matias

    2015-08-01

    We study the dominant effect of a long wavelength density perturbation δ(λ_L) on short distance physics. In the nonrelativistic limit, the result is a uniform acceleration, fixed by the equivalence principle, and typically has no effect on statistical averages due to translational invariance. This same reasoning has been formalized to obtain a "consistency condition" on the cosmological correlation functions. In the presence of a feature, such as the acoustic peak at ℓ_BAO, this naive expectation breaks down for λ_L < ℓ_BAO; the resulting effect of the long mode is explicitly applied to the one-loop calculation of the power spectrum. Finally, the success of baryon acoustic oscillation reconstruction schemes is argued to be another empirical evidence for the validity of the results.

  7. The equivalence principle

    International Nuclear Information System (INIS)

    Smorodinskij, Ya.A.

    1980-01-01

    The prerelativistic history of the equivalence principle (EP) is presented briefly, and its role in the discovery of the general relativity theory (GRT) is elucidated. According to modern measurements, the ratio of inertial and gravitational masses does not differ from 1 by more than one part in 10^12. Attention is paid to the difference between the gravitational field and the electromagnetic one: the energy of the gravitational field distributed in space is itself a source of the field, so gravitational fields always interact at superposition, whereas electromagnetic fields from different sources are simply added together. On the basis of the EP it is established that the Sun's field interacts with the Earth's gravitational energy in the same way as with any other energy; this demonstrates that the gravitational field itself gravitates towards a heavy body. The motion of a gyroscope in the Earth's gravitational field is presented as a paradox: the calculation shows that a gyroscope on a satellite undergoes a positive precession, its axis turning by an angle equal to α during one revolution of the satellite round the Earth and, because of the space curvature, by an additional angle twice as large as α, so that the resulting turn equals 3α. It is shown on the basis of the EP that the polarization plane does not turn, in any coordinate system, when a ray of light passes through the gravitational field. Together with the historical value of the EP, the necessity of taking into account the requirements imposed by the EP in describing the physical world is noted.

  8. Equivalence principle and quantum mechanics: quantum simulation with entangled photons.

    Science.gov (United States)

    Longhi, S

    2018-01-15

    Einstein's equivalence principle (EP) states the complete physical equivalence of a gravitational field and corresponding inertial field in an accelerated reference frame. However, to what extent the EP remains valid in non-relativistic quantum mechanics is a controversial issue. To avoid violation of the EP, Bargmann's superselection rule forbids a coherent superposition of states with different masses. Here we suggest a quantum simulation of non-relativistic Schrödinger particle dynamics in non-inertial reference frames, which is based on the propagation of polarization-entangled photon pairs in curved and birefringent optical waveguides and Hong-Ou-Mandel quantum interference measurement. The photonic simulator can emulate superposition of mass states, which would lead to violation of the EP.

  9. Theoretical aspects of the equivalence principle

    International Nuclear Information System (INIS)

    Damour, Thibault

    2012-01-01

    We review several theoretical aspects of the equivalence principle (EP). We emphasize the unsatisfactory fact that the EP maintains the absolute character of the coupling constants of physics, while general relativity and its generalizations (Kaluza–Klein, …, string theory) suggest that all absolute structures should be replaced by dynamical entities. We discuss the EP-violation phenomenology of dilaton-like models, which is likely to be dominated by the linear superposition of two effects: a signal proportional to the nuclear Coulomb energy, related to the variation of the fine-structure constant, and a signal proportional to the surface nuclear binding energy, related to the variation of the light quark masses. We recall various theoretical arguments (including a recently proposed anthropic argument) suggesting that the EP be violated at a small, but not unmeasurably small level. This motivates the need for improved tests of the EP. These tests are probing new territories in physics that are related to deep, and mysterious, issues in fundamental physics. (paper)

  10. Equivalence principle implications of modified gravity models

    International Nuclear Information System (INIS)

    Hui, Lam; Nicolis, Alberto; Stubbs, Christopher W.

    2009-01-01

    Theories that attempt to explain the observed cosmic acceleration by modifying general relativity all introduce a new scalar degree of freedom that is active on large scales, but is screened on small scales to match experiments. We demonstrate that if such screening occurs via the chameleon mechanism, such as in f(R) theory, it is possible to have order unity violation of the equivalence principle, despite the absence of explicit violation in the microscopic action. Namely, extended objects such as galaxies or constituents thereof do not all fall at the same rate. The chameleon mechanism can screen the scalar charge for large objects but not for small ones (large/small is defined by the depth of the gravitational potential and is controlled by the scalar coupling). This leads to order one fluctuations in the ratio of the inertial mass to gravitational mass. We provide derivations in both Einstein and Jordan frames. In Jordan frame, it is no longer true that all objects move on geodesics; only unscreened ones, such as test particles, do. In contrast, if the scalar screening occurs via strong coupling, such as in the Dvali-Gabadadze-Porrati braneworld model, equivalence principle violation occurs at a much reduced level. We propose several observational tests of the chameleon mechanism: 1. small galaxies should accelerate faster than large galaxies, even in environments where dynamical friction is negligible; 2. voids defined by small galaxies would appear larger compared to standard expectations; 3. stars and diffuse gas in small galaxies should have different velocities, even if they are on the same orbits; 4. lensing and dynamical mass estimates should agree for large galaxies but disagree for small ones. We discuss possible pitfalls in some of these tests. The cleanest is the third one where the mass estimate from HI rotational velocity could exceed that from stars by 30% or more. To avoid blanket screening of all objects, the most promising place to look is in

  11. Acceleration Measurements Using Smartphone Sensors: Dealing with the Equivalence Principle

    OpenAIRE

    Monteiro, Martín; Cabeza, Cecilia; Martí, Arturo C.

    2014-01-01

    Acceleration sensors built into smartphones, i-pads or tablets can conveniently be used in the physics laboratory. By virtue of the equivalence principle, a sensor fixed in a non-inertial reference frame cannot discern between a gravitational field and an accelerated system. Accordingly, acceleration values read by these sensors must be corrected for the gravitational component. A physical pendulum was studied by way of example, and absolute acceleration and rotation angle values were derived...

  12. Equivalence principle, CP violations, and the Higgs-like boson mass

    International Nuclear Information System (INIS)

    Bellucci, S.; Faraoni, V.

    1994-01-01

    We consider the violation of the equivalence principle induced by a massive gravivector, i.e., the partner of the graviton in N>1 supergravity. The present limits on this violation allow us to obtain a lower bound on the vacuum expectation value of the scalar field that gives the gravivector its mass. We consider also the effective neutral kaon mass difference induced by the gravivector and compare the result with the experimental data on the CP-violation parameter ε

  13. Density matrix in quantum electrodynamics, equivalence principle and Hawking effect

    International Nuclear Information System (INIS)

    Frolov, V.P.; Gitman, D.M.

    1978-01-01

    The expression for the density matrix describing particles of one sort (electrons or positrons) created by an external electromagnetic field from the vacuum is obtained. The explicit form of the density matrix is found for the case of constant and uniform electric field. Arguments are given for the presence of a connection between the thermal nature of the density matrix describing particles created by the gravitational field of a black hole and the equivalence principle. (author)

  14. Effective Inertial Frame in an Atom Interferometric Test of the Equivalence Principle

    Science.gov (United States)

    Overstreet, Chris; Asenbaum, Peter; Kovachy, Tim; Notermans, Remy; Hogan, Jason M.; Kasevich, Mark A.

    2018-05-01

    In an ideal test of the equivalence principle, the test masses fall in a common inertial frame. A real experiment is affected by gravity gradients, which introduce systematic errors by coupling to initial kinematic differences between the test masses. Here we demonstrate a method that reduces the sensitivity of a dual-species atom interferometer to initial kinematics by using a frequency shift of the mirror pulse to create an effective inertial frame for both atomic species. Using this method, we suppress the gravity-gradient-induced dependence of the differential phase on initial kinematic differences by 2 orders of magnitude and precisely measure these differences. We realize a relative precision of Δg/g ≈ 6×10^{-11} per shot, which improves on the best previous result for a dual-species atom interferometer by more than 3 orders of magnitude. By reducing gravity gradient systematic errors to one part in 10^{13}, these results pave the way for an atomic test of the equivalence principle at an accuracy comparable with state-of-the-art classical tests.

  15. Handicap principle implies emergence of dimorphic ornaments.

    Science.gov (United States)

    Clifton, Sara M; Braun, Rosemary I; Abrams, Daniel M

    2016-11-30

    Species spanning the animal kingdom have evolved extravagant and costly ornaments to attract mating partners. Zahavi's handicap principle offers an elegant explanation for this: ornaments signal individual quality, and must be costly to ensure honest signalling, making mate selection more efficient. Here, we incorporate the assumptions of the handicap principle into a mathematical model and show that they are sufficient to explain the heretofore puzzling observation of bimodally distributed ornament sizes in a variety of species. © 2016 The Author(s).

  16. Test of the Equivalence Principle in an Einstein Elevator

    Science.gov (United States)

    Shapiro, Irwin I.; Glashow, S.; Lorenzini, E. C.; Cosmo, M. L.; Cheimets, P. N.; Finkelstein, N.; Schneps, M.

    2004-01-01

    The scientific goal of the experiment is to test the equality of gravitational and inertial mass (i.e., to test the Principle of Equivalence) by measuring the independence of the rate of fall of bodies from their compositions. The measurement is accomplished by measuring the relative displacement (or equivalently acceleration) of two falling bodies of different materials which are the proof masses of a differential accelerometer spinning about a horizontal axis to modulate a possible violation signal. A non-zero differential acceleration appearing at the signal frequency will indicate a violation of the Equivalence Principle. The goal of the experiment is to measure the Eotvos ratio δg/g (differential acceleration/common acceleration) with a targeted accuracy that is about two orders of magnitude better than the state of the art (presently at several parts in 10^13). The analyses carried out during this first grant year have focused on: (1) evaluation of possible shapes for the proof masses to meet the requirements on the higher-order mass moment disturbances generated by the falling capsule; (2) dynamics of the instrument package and differential acceleration measurement in the presence of errors and imperfections; (3) computation of the inertia characteristics of the instrument package that enable a separation of the signal from the dynamics-related noise; (4) a revised thermal analysis of the instrument package in light of the new conceptual design of the cryostat; (5) the development of a dynamic and control model of the capsule attached to the gondola and balloon to define the requirements for the leveling mechanism; (6) a conceptual design of the leveling mechanism that keeps the capsule aligned before release from the balloon; and (7) a new conceptual design of the customized cryostat and a preliminary evaluation of its cost. The project also involves an international cooperation with the Institute of Space Physics (IFSI) in Rome, Italy. The group at IFSI

  17. Weak principle of equivalence and gauge theory of tetrad gravitational field

    International Nuclear Information System (INIS)

    Tunyak, V.N.

    1978-01-01

    It is shown that, unlike the tetrad formulation of the general relativity theory derived from the requirement of Poincaré group localization, the tetrad gravitation theory corresponding to Treder's formulation of the weak equivalence principle, in which the nongravitational-matter Lagrangian is the direct covariant generalization of the special relativistic expression to the Riemann space-time, is incompatible with the known method for deriving the gauge theory of the tetrad gravitational field.

  18. Five-dimensional projective unified theory and the principle of equivalence

    International Nuclear Information System (INIS)

    De Sabbata, V.; Gasperini, M.

    1984-01-01

    We investigate the physical consequences of a new five-dimensional projective theory unifying gravitation and electromagnetism. Solving the field equations in the linear approximation and in the static limit, we find that a celestial body would act as a source of a long-range scalar field, and that macroscopic test bodies with different internal structure would accelerate differently in the solar gravitational field; this seems to be in disagreement with the equivalence principle. To avoid this contradiction, we suggest a possible modification of the geometrical structure of the five-dimensional projective space

  19. Testing the equivalence principle on a trampoline

    Science.gov (United States)

    Reasenberg, Robert D.; Phillips, James D.

    2001-07-01

    We are developing a Galilean test of the equivalence principle in which two pairs of test mass assemblies (TMA) are in free fall in a comoving vacuum chamber for about 0.9 s. The TMA are tossed upward, and the process repeats at 1.2 s intervals. Each TMA carries a solid quartz retroreflector and a payload mass of about one-third of the total TMA mass. The relative vertical motion of the TMA of each pair is monitored by a laser gauge working in an optical cavity formed by the retroreflectors. Single-toss precision of the relative acceleration of a single pair of TMA is 3.5×10^{-12} g. The project goal of Δg/g = 10^{-13} can be reached in a single night's run, but repetition with altered configurations will be required to ensure the correction of systematic error to the nominal accuracy level. Because the measurements can be made quickly, we plan to study several pairs of materials.
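
    The quoted single-toss precision and project goal are consistent with simple statistical averaging over repeated tosses; a back-of-the-envelope sketch, assuming independent tosses and purely statistical 1/√N scaling (which ignores the systematic-error studies mentioned in the abstract):

        # Number of tosses needed to average 3.5e-12 g per toss down to 1e-13 g,
        # and the corresponding run time at one toss every 1.2 s.
        single_toss = 3.5e-12
        target = 1.0e-13
        n_tosses = (single_toss / target) ** 2       # ~1225 tosses
        print(n_tosses, n_tosses * 1.2 / 60.0)       # ~1225 tosses, ~25 minutes,
        # which is consistent with reaching the goal "in a single night's run".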

  20. Testing the equivalence principle on cosmological scales

    Science.gov (United States)

    Bonvin, Camille; Fleury, Pierre

    2018-05-01

    The equivalence principle, which is one of the main pillars of general relativity, is very well tested in the Solar system; however, its validity is more uncertain on cosmological scales, or where dark matter is concerned. This article shows that relativistic effects in the large-scale structure can be used to directly test whether dark matter satisfies Euler's equation, i.e. whether its free fall is characterised by geodesic motion, just like baryons and light. After having proposed a general parametrisation for deviations from Euler's equation, we perform Fisher-matrix forecasts for future surveys like DESI and the SKA, and show that such deviations can be constrained with a precision of order 10%. Deviations from Euler's equation cannot be tested directly with standard methods like redshift-space distortions and gravitational lensing, since these observables are not sensitive to the time component of the metric. Our analysis shows therefore that relativistic effects bring new and complementary constraints to alternative theories of gravity.

  1. Solar system and equivalence principle constraints on f(R) gravity by the chameleon approach

    International Nuclear Information System (INIS)

    Capozziello, Salvatore; Tsujikawa, Shinji

    2008-01-01

    We study constraints on f(R) dark energy models from solar system experiments combined with experiments on the violation of the equivalence principle. When the mass of an equivalent scalar field degree of freedom is heavy in a region with high density, a spherically symmetric body has a thin shell so that an effective coupling of the fifth force is suppressed through a chameleon mechanism. We place experimental bounds on the cosmologically viable models recently proposed in the literature that have an asymptotic form f(R) = R - λR_c[1-(R_c/R)^{2n}] in the regime R >> R_c. From the solar system constraints on the post-Newtonian parameter γ, we derive the bound n > 0.5, whereas the constraints from the violations of the weak and strong equivalence principles give the bound n > 0.9. This allows a possibility to find the deviation from the Λ-cold dark matter (ΛCDM) cosmological model. For the model f(R) = R - λR_c(R/R_c)^p with 0 < p < 1, the severest constraint is found to be p < 10^{-10}, which shows that this model is hardly distinguishable from the ΛCDM cosmology
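
    The two model families and the quoted bounds, restated in clean notation (with the constraint on p as reconstructed above):

        f(R) = R - \lambda R_c\left[1 - \left(R_c/R\right)^{2n}\right] \quad (R \gg R_c): \qquad n > 0.5 \ \text{(solar system)}, \quad n > 0.9 \ \text{(equivalence principle)}

        f(R) = R - \lambda R_c\left(R/R_c\right)^{p} \quad (0 < p < 1): \qquad p < 10^{-10}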

  2. On the relativity and equivalence principles in the gauge theory of gravitation

    International Nuclear Information System (INIS)

    Ivanenko, D.; Sardanashvily, G.

    1981-01-01

    The basic ideas of the gauge gravitation theory are still not generally accepted in spite of more than twenty years of its history. The chief reason lies in the fact that the gauge character of gravity is connected with the whole complex of problems of Einstein's General Relativity: the definition of reference systems, the (3+1)-splitting, the presence (or absence) of symmetries in GR, the necessity (or triviality) of general covariance, and the meaning of the equivalence principle, which led Einstein from Special to General Relativity |1|. The real actuality of this complex of interconnected problems is demonstrated by the well-known work of V. Fock, who saw no symmetries in General Relativity, declared the equivalence principle unnecessary, and even proposed to substitute the designation ''chronogeometry'' for ''general relativity'' (see also P. Havas). Developing this line, H. Bondi quite recently also expressed doubts about the ''relativity'' in Einstein's theory of gravitation. All proposed versions of the gauge gravitation theory must clarify the discrepancy between the Einstein gravitational field, which is a pseudo-Riemannian metric field, and the gauge potentials, which represent connections on some fiber bundles; moreover, there exists no group whose gauging would lead to the purely gravitational part of the connection (Christoffel symbols or Fock-Ivanenko-Weyl spinorial coefficients). (author)

  3. Testing Einstein's Equivalence Principle With Fast Radio Bursts.

    Science.gov (United States)

    Wei, Jun-Jie; Gao, He; Wu, Xue-Feng; Mészáros, Peter

    2015-12-31

    The accuracy of Einstein's equivalence principle (EEP) can be tested with the observed time delays between correlated particles or photons that are emitted from astronomical sources. Assuming as a lower limit that the time delays are caused mainly by the gravitational potential of the Milky Way, we prove that fast radio bursts (FRBs) of cosmological origin can be used to constrain the EEP with high accuracy. Taking FRB 110220 and two possible FRB/gamma-ray burst (GRB) association systems (FRB/GRB 101011A and FRB/GRB 100704A) as examples, we obtain a strict upper limit on the differences of the parametrized post-Newtonian parameter γ values as low as [γ(1.23 GHz)-γ(1.45 GHz)] < 4.36×10^{-9}. This provides the most stringent limit to date on the EEP through the relative differential variations of the γ parameter at radio energies, improving by 1 to 2 orders of magnitude the previous results at other energies based on supernova 1987A and GRBs.

  4. The kernel G1(x,x') and the quantum equivalence principle

    International Nuclear Information System (INIS)

    Ceccatto, H.; Foussats, A.; Giacomini, H.; Zandron, O.

    1981-01-01

    In this paper the formulation of the quantum equivalence principle (QEP) is re-examined and its compatibility with the conditions which must be fulfilled by the kernel G_1(x,x') is discussed. The basis of solutions which gives the particle model in a curved space-time is also determined in terms of Cauchy data for such a kernel. Finally, the creation of particles in this model is analyzed by studying the time evolution of creation and annihilation operators. This method is an alternative to one that uses Bogoliubov's transformation as a mechanism of creation. (author)

  5. The short-circuit concept used in field equivalence principles

    DEFF Research Database (Denmark)

    Appel-Hansen, Jørgen

    1990-01-01

    In field equivalence principles, electric and magnetic surface currents are specified and considered as impressed currents. Often the currents are placed on perfect conductors. It is shown that these currents can be treated through two approaches. The first approach is decomposition of the total field into partial fields caused by the individual impressed currents. When this approach is used, it is shown that, on a perfect electric (magnetic) conductor, impressed electric (magnetic) surface currents are short-circuited. The second approach is to note that, since Maxwell's equations and the boundary conditions are satisfied, none of the impressed currents is short-circuited and no currents are induced on the perfect conductors. Since all currents and field quantities are considered at the same time, this approach is referred to as the total-field approach. The partial-field approach leads...

  6. Tidal tails test the equivalence principle in the dark-matter sector

    International Nuclear Information System (INIS)

    Kesden, Michael; Kamionkowski, Marc

    2006-01-01

    Satellite galaxies currently undergoing tidal disruption offer a unique opportunity to constrain an effective violation of the equivalence principle in the dark sector. While dark matter in the standard scenario interacts solely through gravity on large scales, a new long-range force between dark-matter particles may naturally arise in theories in which the dark matter couples to a light scalar field. An inverse-square-law force of this kind would manifest itself as a violation of the equivalence principle in the dynamics of dark matter compared to baryons in the form of gas or stars. In a previous paper, we showed that an attractive force would displace stars outwards from the bottom of the satellite's gravitational potential well, leading to a higher fraction of stars being disrupted from the tidal bulge further from the Galactic center. Since stars disrupted from the far (near) side of the satellite go on to form the trailing (leading) tidal stream, an attractive dark-matter force will produce a relative enhancement of the trailing stream compared to the leading stream. This distinctive signature of a dark-matter force might be detected through detailed observations of the tidal tails of a disrupting satellite, such as those recently performed by the Two-Micron All-Sky Survey (2MASS) and Sloan Digital Sky Survey (SDSS) on the Sagittarius (Sgr) dwarf galaxy. Here we show that this signature is robust to changes in our models for both the satellite and Milky Way, suggesting that we might hope to search for a dark-matter force in the tidal features of other recently discovered satellite galaxies in addition to the Sgr dwarf.

  7. Testing Einstein's Equivalence Principle With Fast Radio Bursts

    Science.gov (United States)

    Wei, Jun-Jie; Gao, He; Wu, Xue-Feng; Mészáros, Peter

    2015-12-01

    The accuracy of Einstein's equivalence principle (EEP) can be tested with the observed time delays between correlated particles or photons that are emitted from astronomical sources. Assuming as a lower limit that the time delays are caused mainly by the gravitational potential of the Milky Way, we prove that fast radio bursts (FRBs) of cosmological origin can be used to constrain the EEP with high accuracy. Taking FRB 110220 and two possible FRB/gamma-ray burst (GRB) association systems (FRB/GRB 101011A and FRB/GRB 100704A) as examples, we obtain a strict upper limit on the differences of the parametrized post-Newtonian parameter γ values as low as [γ(1.23 GHz)-γ(1.45 GHz)] < 4.36×10^{-9}. This provides the most stringent limit to date on the EEP through the relative differential variations of the γ parameter at radio energies, improving by 1 to 2 orders of magnitude the previous results at other energies based on supernova 1987A and GRBs.

  8. Quantum Field Theoretic Derivation of the Einstein Weak Equivalence Principle Using Emqg Theory

    OpenAIRE

    Ostoma, Tom; Trushyk, Mike

    1999-01-01

    We provide a quantum field theoretic derivation of Einstein's Weak Equivalence Principle of general relativity using a new quantum gravity theory proposed by the authors called Electro-Magnetic Quantum Gravity or EMQG (ref. 1). EMQG is based on a new theory of inertia (ref. 5) proposed by R. Haisch, A. Rueda, and H. Puthoff (which we modified and called Quantum Inertia). Quantum Inertia states that classical Newtonian Inertia is a property of matter due to the strictly local electrical force ...

  9. Ecological aspects of the radiation-migration equivalence principle in a closed fuel cycle and its comparative assessment with the ALARA principle

    International Nuclear Information System (INIS)

    Poluektov, P.P.; Lopatkin, A.V.; Nikipelov, B.V.; Rachkov, V.I.; Sukhanov, L.P.; Voloshin, S.V.

    2005-01-01

    The errors and uncertainties arising in the determination of radionuclide escape from the RW burial require the use of extremely conservative estimates. In the limit, the nuclide concentrations in the waste may be used as estimates of their concentrations in underground waters. On this basis, it is possible to evaluate the corresponding radio-toxicities (by normalizing to the intervention levels) of individual components and radioactive waste as a whole, or the effective radio-toxicities (by dividing the radionuclide radio-toxicities by the retardation factors for the nuclide transfer with underground waters). This completely coincides with the procedure of performing the limiting conservative estimate according to the traditional approach with the use of scenarios, escape models, and the corresponding codes. A comparison of radio-toxicities for waste with those for natural uranium consumed for producing the required fuel results in the notion of radiation-migration equivalence for individual waste components and radioactive waste as a whole. Therefore, the radiation-migration equivalence corresponds to the limiting conservative estimate in the traditional approach to the determination of RW disposal safety in comparison with the radiotoxicity of natural uranium. The amounts of radionuclides in fragments (and actinides) and the corresponding weight of heavy metal in the fuel are compared with due regard for the hazard (according to the NRB-99 standards), the nuclide mobility (through the sorption retardation factors), the retention of radioactive waste by the solid matrix, and the contribution from the chains of uranium fission products. It was noted above that the RME principle is aimed at ensuring the radiological safety of the present and future generations and the environment through the minimization of radioactive waste upon reprocessing. This is accompanied by reaching a reasonably achievable, low level of radiological action in the context of modern science, i

  10. On return rate implied by behavioural present value

    OpenAIRE

    Piasecki, Krzysztof

    2013-01-01

    The future value of a security is described as a random variable. The distribution of this random variable is the formal image of risk uncertainty. On the other side, any present value is defined as a value equivalent to the given future value. This equivalence relationship is subjective. It thus follows that the present value is described as a fuzzy number, which depends on the investor's susceptibility to behavioural factors. All the above reasons imply that the return rate is given as a fuzzy probabili...

  11. Determination of dose equivalent with tissue-equivalent proportional counters

    International Nuclear Information System (INIS)

    Dietze, G.; Schuhmacher, H.; Menzel, H.G.

    1989-01-01

    Low pressure tissue-equivalent proportional counters (TEPC) are instruments based on the cavity chamber principle and provide spectral information on the energy loss of single charged particles crossing the cavity. Hence such detectors measure absorbed dose or kerma and are able to provide estimates on radiation quality. During recent years TEPC based instruments have been developed for radiation protection applications in photon and neutron fields. This was mainly based on the expectation that the energy dependence of their dose equivalent response is smaller than that of other instruments in use. Recently, such instruments have been investigated by intercomparison measurements in various neutron and photon fields. Although their principles of measurements are more closely related to the definition of dose equivalent quantities than those of other existing dosemeters, there are distinct differences and limitations with respect to the irradiation geometry and the determination of the quality factor. The application of such instruments for measuring ambient dose equivalent is discussed. (author)

  12. Null result for violation of the equivalence principle with free-fall rotating gyroscopes

    International Nuclear Information System (INIS)

    Luo, J.; Zhou, Z.B.; Nie, Y.X.; Zhang, Y.Z.

    2002-01-01

    The differential acceleration between a rotating mechanical gyroscope and a nonrotating one is directly measured by using a double free-fall interferometer, and no apparent differential acceleration has been observed at the relative level of 2×10^{-6}. This means that the equivalence principle is still valid for rotating extended bodies, i.e., the spin-gravity interaction between the extended bodies has not been observed at this level. Also, to the limit of our experimental sensitivity, there is no observed asymmetrical effect or antigravity of the rotating gyroscopes as reported by Hayasaka et al.

  13. MICROSCOPE Mission: First Constraints on the Violation of the Weak Equivalence Principle by a Light Scalar Dilaton

    Science.gov (United States)

    Bergé, Joel; Brax, Philippe; Métris, Gilles; Pernot-Borràs, Martin; Touboul, Pierre; Uzan, Jean-Philippe

    2018-04-01

    The existence of a light or massive scalar field with a coupling to matter weaker than gravitational strength is a possible source of violation of the weak equivalence principle. We use the first results on the Eötvös parameter by the MICROSCOPE experiment to set new constraints on such scalar fields. For a massive scalar field of mass smaller than 10^{-12} eV (i.e., range larger than a few 10^5 m), we improve existing constraints on |α| by one order of magnitude, both for a coupling to the baryon number and for a coupling to the difference between the baryon and the lepton numbers. We also consider a model describing the coupling of a generic dilaton to the standard matter fields with five parameters, for a light field: we find that, for masses smaller than 10^{-12} eV, the constraints on the dilaton coupling parameters are improved by one order of magnitude compared to previous equivalence principle tests.

  14. MICROSCOPE Mission: First Constraints on the Violation of the Weak Equivalence Principle by a Light Scalar Dilaton.

    Science.gov (United States)

    Bergé, Joel; Brax, Philippe; Métris, Gilles; Pernot-Borràs, Martin; Touboul, Pierre; Uzan, Jean-Philippe

    2018-04-06

    The existence of a light or massive scalar field with a coupling to matter weaker than gravitational strength is a possible source of violation of the weak equivalence principle. We use the first results on the Eötvös parameter by the MICROSCOPE experiment to set new constraints on such scalar fields. For a massive scalar field of mass smaller than 10^{-12} eV (i.e., range larger than a few 10^{5} m), we improve existing constraints on |α| by one order of magnitude, both for a coupling to the baryon number and for a coupling to the difference between the baryon and the lepton numbers. We also consider a model describing the coupling of a generic dilaton to the standard matter fields with five parameters, for a light field: We find that, for masses smaller than 10^{-12} eV, the constraints on the dilaton coupling parameters are improved by one order of magnitude compared to previous equivalence principle tests.

  15. Galactic Shapiro delay to the Crab pulsar and limit on weak equivalence principle violation

    Energy Technology Data Exchange (ETDEWEB)

    Desai, Shantanu [Indian Institute of Technology, Department of Physics, Hyderabad, Telangana (India); Kahya, Emre [Istanbul Technical University, Department of Physics, Istanbul (Turkey)

    2018-02-15

    We calculate the total galactic Shapiro delay to the Crab pulsar by including the contributions from the dark matter as well as baryonic matter along the line of sight. The total delay due to dark matter potential is about 3.4 days. For baryonic matter, we included the contributions from both the bulge and the disk, which are approximately 0.12 and 0.32 days respectively. The total delay from all the matter distribution is therefore 3.84 days. We also calculate the limit on violations of Weak equivalence principle by using observations of "nano-shot" giant pulses from the Crab pulsar with time-delay < 0.4 ns, as well as using time differences between radio and optical photons observed from this pulsar. Using the former, we obtain a limit on violation of Weak equivalence principle in terms of the PPN parameter Δγ < 2.41 × 10^{-15}. From the time-difference between simultaneous optical and radio observations, we get Δγ < 1.54 × 10^{-9}. We also point out differences in our calculation of Shapiro delay and that from two recent papers (Yang and Zhang, Phys Rev D 94(10):101501, 2016; Zhang and Gong, Astrophys J 837:134, 2017), which used the same observations to obtain a corresponding limit on Δγ. (orig.)
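
    The arithmetic behind the quoted PPN limit follows directly from the numbers in the abstract, assuming the standard relation Δγ ≲ 2Δt_obs/Δt_Shapiro between an observed time delay and the line-of-sight Shapiro delay:

        # Line-of-sight Shapiro delay and the resulting bound on Delta_gamma
        dt_shapiro_days = 3.4 + 0.12 + 0.32        # dark matter + bulge + disk (days)
        dt_shapiro = dt_shapiro_days * 86400.0     # seconds
        dt_obs = 0.4e-9                            # "nano-shot" giant-pulse delay, s
        print(2 * dt_obs / dt_shapiro)             # ~2.4e-15, matching the quoted 2.41e-15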

  16. Testing the Equivalence Principle and Lorentz Invariance with PeV Neutrinos from Blazar Flares.

    Science.gov (United States)

    Wang, Zi-Yi; Liu, Ruo-Yu; Wang, Xiang-Yu

    2016-04-15

    It was recently proposed that a giant flare of the blazar PKS B1424-418 at redshift z=1.522 is in association with a PeV-energy neutrino event detected by IceCube. Based on this association we here suggest that the flight time difference between the PeV neutrino and gamma-ray photons from blazar flares can be used to constrain the violations of equivalence principle and the Lorentz invariance for neutrinos. From the calculated Shapiro delay due to clusters or superclusters in the nearby universe, we find that violation of the equivalence principle for neutrinos and photons is constrained to an accuracy of at least 10^{-5}, which is 2 orders of magnitude tighter than the constraint placed by MeV neutrinos from supernova 1987A. Lorentz invariance violation (LIV) arises in various quantum-gravity theories, which predicts an energy-dependent velocity of propagation in vacuum for particles. We find that the association of the PeV neutrino with the gamma-ray outburst set limits on the energy scale of possible LIV to >0.01E_{pl} for linear LIV models and >6×10^{-8}E_{pl} for quadratic order LIV models, where E_{pl} is the Planck energy scale. These are the most stringent constraints on neutrino LIV for subluminal neutrinos.

  17. Galactic Shapiro delay to the Crab pulsar and limit on weak equivalence principle violation

    International Nuclear Information System (INIS)

    Desai, Shantanu; Kahya, Emre

    2018-01-01

    We calculate the total galactic Shapiro delay to the Crab pulsar by including the contributions from the dark matter as well as baryonic matter along the line of sight. The total delay due to dark matter potential is about 3.4 days. For baryonic matter, we included the contributions from both the bulge and the disk, which are approximately 0.12 and 0.32 days respectively. The total delay from all the matter distribution is therefore 3.84 days. We also calculate the limit on violations of Weak equivalence principle by using observations of "nano-shot" giant pulses from the Crab pulsar with time-delay < 0.4 ns, as well as using time differences between radio and optical photons observed from this pulsar. Using the former, we obtain a limit on violation of Weak equivalence principle in terms of the PPN parameter Δγ < 2.41 × 10^{-15}. From the time-difference between simultaneous optical and radio observations, we get Δγ < 1.54 × 10^{-9}. We also point out differences in our calculation of Shapiro delay and that from two recent papers (Yang and Zhang, Phys Rev D 94(10):101501, 2016; Zhang and Gong, Astrophys J 837:134, 2017), which used the same observations to obtain a corresponding limit on Δγ. (orig.)

  18. The equivalence principle and the gravitational constant in experimental relativity

    International Nuclear Information System (INIS)

    Spallicci, A.D.A.M.

    1988-01-01

    Fischbach's analysis of the Eotvos experiment, showing an embedded fifth force, has stressed the importance of further tests of the Equivalence Principle (EP). From Galilei and Newton, the EP played the role of a postulate for all gravitational physics and mechanics (weak EP), until Einstein, who extended the validity of the EP to all physics (strong EP). After Fischbach's publication on the fifth force, several experiments have been performed or simply proposed to test the WEP. They are concerned with possible gravitational potential anomalies, depending upon distances or matter composition. While the low level of accuracy with which the gravitational constant G is known has been recognized, experiments have been proposed to test G in the range from a few cm up to 200 m. This paper highlights the different features of the proposed space experiments. Possible implications for the metric formalism for objects in low potential and slow motion are briefly indicated.

  19. Equivalence principle for quantum systems: dephasing and phase shift of free-falling particles

    Science.gov (United States)

    Anastopoulos, C.; Hu, B. L.

    2018-02-01

    We ask the question of how the (weak) equivalence principle established in classical gravitational physics should be reformulated and interpreted for massive quantum objects that may also have internal degrees of freedom (dof). This inquiry is necessary because even elementary concepts like a classical trajectory are not well defined in quantum physics—trajectories originating from quantum histories become viable entities only under stringent decoherence conditions. From this investigation we posit two logically and operationally distinct statements of the equivalence principle for quantum systems. Version A: the probability distribution of position for a free-falling particle is the same as the probability distribution of a free particle, modulo a mass-independent shift of its mean. Version B: any two particles with the same velocity wave-function behave identically in free fall, irrespective of their masses. Both statements apply to all quantum states, including those without a classical correspondence, and also for composite particles with quantum internal dof. We also investigate the consequences of the interaction between internal and external dof induced by free fall. For a class of initial states, we find dephasing occurs for the translational dof, namely, the suppression of the off-diagonal terms of the density matrix, in the position basis. We also find a gravitational phase shift in the reduced density matrix of the internal dof that does not depend on the particle’s mass. For classical states, the phase shift has a natural classical interpretation in terms of gravitational red-shift and special relativistic time-dilation.
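
    Version A can be written compactly for a uniform field g (a schematic restatement; here x is measured upward and ψ_free denotes the evolution of the same initial state with g switched off):

        \left|\psi_g(x,t)\right|^{2} \;=\; \left|\psi_{\mathrm{free}}\!\left(x + \tfrac{1}{2} g t^{2},\, t\right)\right|^{2},

    so the position distribution of the free-falling particle is that of a free particle, shifted by the classical, mass-independent displacement -gt²/2.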

  20. Some algorithms for reordering a sequence of objects, with application to E. Sparre Andersen's principle of equivalence in mathematical statistics

    NARCIS (Netherlands)

    Bruijn, de N.G.

    1972-01-01

    Recently A. W. Joseph described an algorithm providing combinatorial insight into E. Sparre Andersen's so-called Principle of Equivalence in mathematical statistics. In the present paper such algorithms are discussed systematically.

  1. A homogeneous static gravitational field and the principle of equivalence

    International Nuclear Information System (INIS)

    Chernikov, N.A.

    2001-01-01

    In this paper any gravitational field (both in the Einsteinian case and in the Newtonian case) is described by a connection, called the gravitational connection. A homogeneous static gravitational field is considered in the four-dimensional region z>0 of a space-time with Cartesian coordinates x, y, z, and t. Such a field can be created by masses disposed outside the region z>0 with a density distribution independent of x, y, and t. Remarkably, in the four-dimensional region z>0, the primitive gravitational connection has been derived together with the primitive background connection. In concordance with the Principle of Equivalence, all components of this gravitational connection are equal to zero in the uniformly accelerated frame system, in which the gravitational force of attraction is balanced by the inertial force. All components of the background connection, however, are equal to zero in the resting frame system, but not in the accelerated frame system.

  2. Violations of the equivalence principle in a dilaton-runaway scenario

    CERN Document Server

    Damour, Thibault Marie Alban Guillaume; Veneziano, Gabriele

    2002-01-01

    We explore a version of the cosmological dilaton-fixing and decoupling mechanism in which the dilaton-dependence of the low-energy effective action is extremized for infinitely large values of the bare string coupling $g_s^2 = e^{\phi}$. We study the efficiency with which the dilaton $\phi$ runs away towards its ``fixed point'' at infinity during a primordial inflationary stage, and thereby approximately decouples from matter. The residual dilaton couplings are found to be related to the amplitude of the density fluctuations generated during inflation. For the simplest inflationary potential, $V(\chi) = \frac{1}{2} m_{\chi}^2(\phi)\,\chi^2$, the residual dilaton couplings are shown to predict violations of the universality of gravitational acceleration near the $\Delta a/a \sim 10^{-12}$ level. This suggests that a modest improvement in the precision of equivalence principle tests might be able to detect the effect of such a runaway dilaton. Under some assumptions about the coupling of the dilaton to dark matter...

  3. The chain rule implies Tsirelson's bound: an approach from generalized mutual information

    International Nuclear Information System (INIS)

    Wakakuwa, Eyuri; Murao, Mio

    2012-01-01

    In order to analyze an information theoretical derivation of Tsirelson's bound based on information causality, we introduce a generalized mutual information (GMI), defined as the optimal coding rate of a channel with classical inputs and general probabilistic outputs. In the case where the outputs are quantum, the GMI coincides with the quantum mutual information. In general, the GMI does not necessarily satisfy the chain rule. We prove that Tsirelson's bound can be derived by imposing the chain rule on the GMI. We formulate a principle, which we call the no-supersignaling condition, which states that the assistance of nonlocal correlations does not increase the capability of classical communication. We prove that this condition is equivalent to the no-signaling condition. As a result, we show that Tsirelson's bound is implied by the nonpositivity of the quantitative difference between information causality and no-supersignaling. (paper)
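
    The bound in question is the quantum maximum 2√2 of the CHSH combination; the following minimal numerical check uses the standard optimal measurement settings and is independent of the paper's GMI construction:

        # Max quantum value of the CHSH operator = 2*sqrt(2) (Tsirelson's bound)
        import numpy as np

        X = np.array([[0., 1.], [1., 0.]])
        Z = np.array([[1., 0.], [0., -1.]])

        def spin(theta):                  # spin observable in the x-z plane
            return np.cos(theta) * Z + np.sin(theta) * X

        A0, A1 = spin(0.0), spin(np.pi / 2)
        B0, B1 = spin(np.pi / 4), spin(-np.pi / 4)
        CHSH = (np.kron(A0, B0) + np.kron(A0, B1)
                + np.kron(A1, B0) - np.kron(A1, B1))
        print(np.max(np.linalg.eigvalsh(CHSH)))   # 2.828... = 2*sqrt(2)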

  4. [Equivalent Lever Principle of Ossicular Chain and Amplitude Reduction Effect of Internal Ear Lymph].

    Science.gov (United States)

    Zhao, Xiaoyan; Qin, Renjia

    2015-04-01

    This paper offers demonstrations concerning some problems with the treatment of the human ear's sound-transmission principle in existing physiological textbooks and reference books, and puts forward the authors' view to supplement that literature. Using the lever concept from physics together with acoustic theory, an equivalent simplified model of the manubrium mallei is proposed that meets the requirements for the long arm of the lever, and an equivalent simplified model of the ossicular chain, a combination of levers, is set up. The model is disassembled into two simple levers, which are analysed and demonstrated in full. Through the calculation and comparison of displacement amplitudes in the external auditory canal air and in the internal ear lymph, it is concluded that the key reason the sound displacement amplitude is decreased, so as to stay within the endurance limit of the basilar membrane, is that the density and sound speed in lymph are much higher than those in air.
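
    The amplitude-reduction argument can be illustrated with the plane-wave relation ξ = p/(Zω), where Z = ρc is the characteristic acoustic impedance; in the sketch below lymph is approximated as water, and the frequency and pressure amplitude are assumed illustrative values:

        # Displacement amplitude for the same pressure amplitude in air vs. water-like lymph
        import math

        f = 1000.0                        # assumed frequency, Hz
        omega = 2.0 * math.pi * f
        p = 0.02                          # assumed pressure amplitude, Pa (~60 dB SPL)

        Z_air = 1.2 * 343.0               # rho*c for air, ~4.1e2 Pa*s/m
        Z_lymph = 1000.0 * 1480.0         # rho*c for water-like lymph, ~1.5e6 Pa*s/m

        xi_air = p / (Z_air * omega)
        xi_lymph = p / (Z_lymph * omega)
        print(xi_air, xi_lymph, xi_air / xi_lymph)   # displacement ~3.6e3 times smaller in lymph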

  5. Comment on ''Modified photon equation of motion as a test for the principle of equivalence''

    International Nuclear Information System (INIS)

    Nityananda, R.

    1992-01-01

    In a recent paper, a modification of the geodesic equation was proposed for spinning photons containing a spin-curvature coupling term. The difference in arrival times of opposite circular polarizations starting simultaneously from a source was computed, obtaining a result linear in the coupling parameter. It is pointed out here that this linear term violates causality and, more generally, Fermat's principle, implying calculational errors. Even if these are corrected, there is a violation of covariance in the way the photon spin was introduced. Rectifying this makes the effect computed vanish entirely

  6. Significance and principles of the calculation of the effective dose equivalent for radiological protection of personnel and patients

    International Nuclear Information System (INIS)

    Drexler, G.; Williams, G.

    1985-01-01

    The application of the concept of the effective dose equivalent, H_E, for radiological protection assessments of occupationally exposed persons is justified by the practicability thus achieved with regard to the limiting principles. Nevertheless, it would be more consistent to retain as the basic limiting quantity the real physical dose equivalent of homogeneous whole-body exposure, and to use for inhomogeneous whole-body irradiation the H_E value calculated by means of the effective dose equivalent concept; the required concepts, models and calculations would then not be tied to a basic radiation protection quantity. Application of the effective dose equivalent to radiation protection assessments for patients is misleading and is not practical for assessing an individual or collective radiation risk of patients; the quantity of expected harm would be better suited to this purpose. There is no need to express the radiation risk by a dose quantity, which amounts to careless handling of good information. (orig./WU) [de

  7. Adaptation of the TH Epsilon Mu formalism for the analysis of the equivalence principle in the presence of the weak and electroweak interaction

    Science.gov (United States)

    Fennelly, A. J.

    1981-01-01

    The TH epsilon mu formalism, used in analyzing equivalence principle experiments of metric and nonmetric gravity theories, is adapted to the description of the electroweak interaction using the Weinberg-Salam unified SU(2) x U(1) model. The use of the TH epsilon mu formalism is thereby extended to the weak interactions, showing how the gravitational field affects W_μ^(±) and Z_μ^(0) boson propagation and the rates of interactions mediated by them. The possibility of a similar extension to the strong interactions via SU(5) grand unified theories is briefly discussed. Also, using the effects of the potentials on the baryon and lepton wave functions, the effects of gravity on transitions in high-A atoms that are electromagnetically forbidden are considered. Three technologically feasible experiments to test the equivalence principle in the presence of the weak interactions are then briefly outlined: (1) K-capture by the Fe nucleus (counting the emitted X-ray); (2) forbidden absorption transitions in the vapor of high-A atoms; and (3) counting the relative beta-decay rates in a suitable alpha-beta decay chain, assuming the strong interactions obey the equivalence principle.

  8. Gravitational quadrupolar coupling to equivalence principle test masses: the general case

    CERN Document Server

    Lockerbie, N A

    2002-01-01

    This paper discusses the significance of the quadrupolar gravitational force in the context of test masses destined for use in equivalence principle (EP) experiments, such as STEP and MICROSCOPE. The relationship between quadrupolar gravity and rotational inertia for an arbitrary body is analysed, and the special, gravitational, role of a body's principal axes of inertia is revealed. From these considerations the gravitational quadrupolar force acting on a cylindrically symmetrical body, due to a point-like attracting source mass, is derived in terms of the body's mass quadrupole tensor. The result is shown to be in agreement with that obtained from MacCullagh's formula (as the starting point). The theory is then extended to cover the case of a completely arbitrary solid body, and a compact formulation for the quadrupolar force on such a body is derived. A numerical example of a dumb-bell's attraction to a local point-like gravitational source is analysed using this theory. Close agreement is found between th...

  9. Equivalent Lagrangians

    International Nuclear Information System (INIS)

    Hojman, S.

    1982-01-01

    We present a review of the inverse problem of the Calculus of Variations, emphasizing the ambiguities which appear due to the existence of equivalent Lagrangians for a given classical system. In particular, we analyze the properties of equivalent Lagrangians in the multidimensional case; we study the conditions for the existence of a variational principle for (second as well as first order) equations of motion and their solutions; we consider the inverse problem of the Calculus of Variations for singular systems; we state the ambiguities which emerge in the relationship between symmetries and conserved quantities in the case of equivalent Lagrangians; we discuss the problems which appear in trying to quantize classical systems which have different equivalent Lagrangians; we describe the situation which arises in the study of equivalent Lagrangians in field theory; and finally, we present some unsolved problems and discussion topics related to the content of this article. (author)

  10. Dependence on age at intake of committed dose equivalents from radionuclides

    International Nuclear Information System (INIS)

    Adams, N.

    1981-01-01

    The dependence of committed dose equivalents on age at intake is needed to assess the significance of exposures of young persons among the general public resulting from inhaled or ingested radionuclides. The committed dose equivalents, evaluated using ICRP principles, depend on the body dimensions of the young person at the time of intake of a radionuclide and on subsequent body growth. Representation of growth by a series of exponential segments facilitates the derivation of general expressions for the age dependence of committed dose equivalents if metabolic models do not change with age. The additional assumption that intakes of radionuclides in air or food are proportional to a person's energy expenditure (implying age-independent dietary composition) enables the demonstration that the age of the most highly exposed 'critical groups' of the general public from these radionuclides is either about 1 year or 17 years. With the above assumptions the exposure of the critical group is less than three times the exposure of adult members of the general public. Approximate values of committed dose equivalents which avoid both underestimation and excessive overestimation are shown to be obtainable by simplified procedures. Modified procedures are suggested for use if metabolic models change with age. (author)

  11. Expanded solar-system limits on violations of the equivalence principle

    International Nuclear Information System (INIS)

    Overduin, James; Mitcham, Jack; Warecki, Zoey

    2014-01-01

    Most attempts to unify general relativity with the standard model of particle physics predict violations of the equivalence principle associated in some way with the composition of the test masses. We test this idea by using observational uncertainties in the positions and motions of solar-system bodies to set upper limits on the relative difference Δ between gravitational and inertial mass for each body. For suitable pairs of objects, it is possible to constrain three different linear combinations of Δ using Kepler’s third law, the migration of stable Lagrange points, and orbital polarization (the Nordtvedt effect). Limits of order 10⁻¹⁰–10⁻⁶ on Δ for individual bodies can then be derived from planetary and lunar ephemerides, Cassini observations of the Saturn system, and observations of Jupiter’s Trojan asteroids as well as recently discovered Trojan companions around the Earth, Mars, Neptune, and Saturnian moons. These results can be combined with models for elemental abundances in each body to test for composition-dependent violations of the universality of free fall in the solar system. The resulting limits are weaker than those from laboratory experiments, but span a larger volume in composition space. (paper)
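
    As a rough illustration of how a composition-dependent Δ enters the first of these combinations (a schematic sketch in our own notation, not taken from the paper, with order-unity factors suppressed): a body whose gravitational-to-inertial mass ratio is 1 + Δ feels a rescaled solar attraction, so its mean motion n and semimajor axis a obey a body-dependent version of Kepler's third law.

      % Sketch (assumed notation): test body with m_g/m_i = 1 + \Delta, m << M_sun.
      \[
        m_i\, n^{2} a \;=\; m_g\,\frac{G M_{\odot}}{a^{2}}
        \quad\Longrightarrow\quad
        n^{2} a^{3} \;=\; (1+\Delta)\, G M_{\odot},
      \]
      % so comparing the observed n^2 a^3 of two bodies A and B constrains the
      % difference \Delta_A - \Delta_B, one of the linear combinations of \Delta
      % referred to in the abstract.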

  12. Risk measurement with equivalent utility principles

    NARCIS (Netherlands)

    Denuit, M.; Dhaene, J.; Goovaerts, M.; Kaas, R.; Laeven, R.

    2006-01-01

    Risk measures have been studied for several decades in the actuarial literature, where they appeared under the guise of premium calculation principles. Risk measures and properties that risk measures should satisfy have recently received considerable attention in the financial mathematics

  13. Einstein's Elevator in Class: A Self-Construction by Students for the Study of the Equivalence Principle

    Science.gov (United States)

    Kapotis, Efstratios; Kalkanis, George

    2016-10-01

    According to the principle of equivalence, it is impossible to distinguish between gravity and inertial forces that a noninertial observer experiences in his own frame of reference. For example, let's consider an elevator in space that is being accelerated in one direction. An observer inside it would feel as if there were a gravitational force pulling him in the opposite direction. The same holds for a person in a stationary elevator located in Earth's gravitational field. No experiment enables us to distinguish between the accelerating elevator in space and the motionless elevator near Earth's surface. Strictly speaking, when the gravitational field is non-uniform (like Earth's), the equivalence principle holds only for experiments in elevators that are small enough and that take place over a short enough period of time (Fig. 1). However, performing an experiment in an elevator in space is impractical. On the other hand, it is easy to combine both forces on the same observer, i.e., gravity and a fictitious inertial force due to acceleration. Imagine an observer in an elevator that falls freely within Earth's gravitational field. The observer experiences gravity pulling him down, while it might be said that the inertial force due to the gravitational acceleration g pulls him up. Gravity and the inertial force cancel each other, (mis)leading the observer to believe there is no gravitational field. This study outlines our implementation of a self-construction idea that we have found useful in teaching introductory physics students (undergraduate, non-majors).

  14. Decompositional equivalence: A fundamental symmetry underlying quantum theory

    OpenAIRE

    Fields, Chris

    2014-01-01

    Decompositional equivalence is the principle that there is no preferred decomposition of the universe into subsystems. It is shown here, by using simple thought experiments, that quantum theory follows from decompositional equivalence together with Landauer's principle. This demonstration raises within physics a question previously left to psychology: how do human - or any - observers agree about what constitutes a "system of interest"?

  15. The Thermodynamical Arrow and the Historical Arrow; Are They Equivalent?

    Directory of Open Access Journals (Sweden)

    Martin Tamm

    2017-08-01

    In this paper, the relationship between the thermodynamic and historical arrows of time is studied. In the context of a simple combinatorial model, their definitions are made more precise and in particular strong versions (which are not compatible with time-symmetric microscopic laws) and weak versions (which can be compatible with time-symmetric microscopic laws) are given. This is part of a larger project that aims to explain the arrows as consequences of a common time-symmetric principle in the set of all possible universes. However, even if we accept that both arrows may have the same origin, this does not imply that they are equivalent, and it is argued that there can be situations where one arrow may be well-defined but the other is not.

  16. Development of dose equivalent meters based on microdosimetric principles

    International Nuclear Information System (INIS)

    Booz, J.

    1984-01-01

    In this paper, the employment of microdosimetric dose-equivalent meters in radiation protection is described considering the advantages of introducing microdosimetric methods into radiation protection, the technical suitability of such instruments for measuring dose equivalent, and finally technical requirements, constraints and solutions together with some examples of instruments and experimental results. The advantage of microdosimetric methods in radiation protection is illustrated with the evaluation of dose-mean quality factors in radiation fields of unknown composition and with the methods of evaluating neutron- and gamma-dose fractions. - It is shown that there is good correlation between the dose-mean lineal energy, ȳ_D, and the ICRP quality factor. - Neutron- and gamma-dose fractions of unknown radiation fields can be evaluated with microdosimetric proportional counters without recourse to other instruments and methods. The problems of separation are discussed. The technical suitability of microdosimetric instruments for measuring dose equivalent is discussed considering the energy response to neutrons and photons and the sensitivity in terms of dose-equivalent rate. Then, considering technical requirements, constraints, and solutions, the problem of the large dynamic range in LET, the large dynamic range in pulse rate, geometry of sensitive volume and electrodes, evaluation of dose-mean quality factors, calibration methods, and uncertainties are discussed. (orig.)

  17. Conditions needed to give meaning to rad-equivalence principle

    International Nuclear Information System (INIS)

    Latarjet, R.

    1980-01-01

    To legislate on mutagenic chemical pollution, the problem to be faced is similar to that tackled about 30 years ago regarding pollution by ionizing radiations. It would be useful to benefit from the work of these 30 years by establishing equivalences, if possible, between chemical mutagens and radiations. Inevitable mutagenic pollution is considered here, especially that associated with fuel-based energy production. As with radiations, the legislation must derive from a compromise between the harmful and beneficial effects of the polluting system. When deciding on tolerance doses it is necessary to safeguard the biosphere without inflicting excessive restrictions on industry and on the economy. The present article discusses the conditions needed to give meaning to the notion of rad-equivalence. Some examples of already established equivalences are given, together with the first practical consequences which emerge.

  18. The Principle of General Tovariance

    Science.gov (United States)

    Heunen, C.; Landsman, N. P.; Spitters, B.

    2008-06-01

    We tentatively propose two guiding principles for the construction of theories of physics, which should be satisfied by a possible future theory of quantum gravity. These principles are inspired by those that led Einstein to his theory of general relativity, viz. his principle of general covariance and his equivalence principle, as well as by the two mysterious dogmas of Bohr's interpretation of quantum mechanics, i.e. his doctrine of classical concepts and his principle of complementarity. An appropriate mathematical language for combining these ideas is topos theory, a framework earlier proposed for physics by Isham and collaborators. Our principle of general tovariance states that any mathematical structure appearing in the laws of physics must be definable in an arbitrary topos (with natural numbers object) and must be preserved under so-called geometric morphisms. This principle identifies geometric logic as the mathematical language of physics and restricts the constructions and theorems to those valid in intuitionism: neither Aristotle's principle of the excluded third nor Zermelo's Axiom of Choice may be invoked. Subsequently, our equivalence principle states that any algebra of observables (initially defined in the topos Sets) is empirically equivalent to a commutative one in some other topos.

  19. Nonextensive entropies derived from Gauss' principle

    International Nuclear Information System (INIS)

    Wada, Tatsuaki

    2011-01-01

    Gauss' principle in statistical mechanics is generalized for a q-exponential distribution in nonextensive statistical mechanics. It determines the associated stochastic and statistical nonextensive entropies, which satisfy the Greene-Callen principle concerning the equivalence between microcanonical and canonical ensembles. - Highlights: → Nonextensive entropies are derived from Gauss' principle and ensemble equivalence. → Gauss' principle is generalized for a q-exponential distribution. → The condition for satisfying the Greene-Callen principle is found. → The associated statistical q-entropy is found to be the normalized Tsallis entropy.

  20. A Pontryagin Minimum Principle-Based Adaptive Equivalent Consumption Minimum Strategy for a Plug-in Hybrid Electric Bus on a Fixed Route

    Directory of Open Access Journals (Sweden)

    Shaobo Xie

    2017-09-01

    When developing a real-time energy management strategy for a plug-in hybrid electric vehicle, it is still a challenge for the Equivalent Consumption Minimum Strategy to achieve near-optimal energy consumption, because the optimal equivalence factor is not readily available without the trip information. With the help of realistic speed profiles sampled from a plug-in hybrid electric bus running on a fixed commuting line, this paper proposes a convenient and effective approach for determining the equivalence factor of an adaptive Equivalent Consumption Minimum Strategy. First, with an adaptive law based on the feedback of battery SOC, the equivalence factor is described as a combination of a major component and a tuning component. In particular, the major part, defined as a constant, exploits the inherent consistency of the regular speed profiles, while the second part, comprising a proportional and an integral term, slightly tunes the equivalence factor to accommodate the disparity of daily running cycles. Moreover, Pontryagin’s Minimum Principle is employed and solved using the shooting method to capture the co-state dynamics, with the Secant method introduced to adjust the initial co-state value. The initial co-state value from the last shooting iteration is then taken as the optimal stable constant of the equivalence factor. Finally, ten successive driving profiles are selected with different initial SOC levels to evaluate the proposed method, and the results demonstrate excellent fuel economy compared with the dynamic programming and PMP methods.
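
    The shooting loop described above can be sketched compactly. The snippet below is a minimal illustration and not the authors' implementation: the SOC dynamics, the control law and every numerical value are hypothetical placeholders; only the structure (simulate the terminal SOC for a trial initial co-state, then update that co-state with the Secant method until the terminal SOC hits its target) follows the abstract.

      # Minimal sketch of secant-based shooting for the initial co-state (toy model only).
      def simulate_final_soc(costate0, soc0=0.9, steps=1000, dt=1.0):
          """Integrate a toy SOC trajectory for a given initial co-state.

          A real PMP solver would minimize the Hamiltonian at every step; here a
          simple monotone surrogate stands in so the shooting logic can be shown.
          """
          soc = soc0
          for _ in range(steps):
              electric_share = 1.0 / (1.0 + abs(costate0))  # placeholder control law
              soc -= 0.001 * electric_share * dt            # placeholder SOC dynamics
          return soc

      def shoot_for_costate(soc_target=0.3, lam0=1.0, lam1=2.0, tol=1e-4, max_iter=50):
          """Secant iteration on the initial co-state so the final SOC hits its target."""
          f0 = simulate_final_soc(lam0) - soc_target
          f1 = simulate_final_soc(lam1) - soc_target
          for _ in range(max_iter):
              if abs(f1) < tol:
                  break
              # Secant update; the converged value plays the role of the constant
              # component of the equivalence factor in the adaptive ECMS.
              lam0, lam1 = lam1, lam1 - f1 * (lam1 - lam0) / (f1 - f0)
              f0, f1 = f1, simulate_final_soc(lam1) - soc_target
          return lam1

      print("converged initial co-state:", shoot_for_costate())

    With these placeholder dynamics the loop converges to an initial co-state of about 0.67; with a realistic powertrain model the same structure would return the constant part of the equivalence factor.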

  1. Gravitational quadrupolar coupling to equivalence principle test masses: the general case

    International Nuclear Information System (INIS)

    Lockerbie, N A

    2002-01-01

    This paper discusses the significance of the quadrupolar gravitational force in the context of test masses destined for use in equivalence principle (EP) experiments, such as STEP and MICROSCOPE. The relationship between quadrupolar gravity and rotational inertia for an arbitrary body is analysed, and the special, gravitational, role of a body's principal axes of inertia is revealed. From these considerations the gravitational quadrupolar force acting on a cylindrically symmetrical body, due to a point-like attracting source mass, is derived in terms of the body's mass quadrupole tensor. The result is shown to be in agreement with that obtained from MacCullagh's formula (as the starting point). The theory is then extended to cover the case of a completely arbitrary solid body, and a compact formulation for the quadrupolar force on such a body is derived. A numerical example of a dumb-bell's attraction to a local point-like gravitational source is analysed using this theory. Close agreement is found between the resulting quadrupolar force on the body and the difference between the net and the monopolar forces acting on it, underscoring the utility of the approach. A dynamical technique for experimentally obtaining the mass quadrupole tensors of EP test masses is discussed, and a means of validating the results is noted

  2. Single-field consistency relations of large scale structure part III: test of the equivalence principle

    Energy Technology Data Exchange (ETDEWEB)

    Creminelli, Paolo [Abdus Salam International Centre for Theoretical Physics, Strada Costiera 11, Trieste, 34151 (Italy); Gleyzes, Jérôme; Vernizzi, Filippo [CEA, Institut de Physique Théorique, Gif-sur-Yvette cédex, F-91191 France (France); Hui, Lam [Physics Department and Institute for Strings, Cosmology and Astroparticle Physics, Columbia University, New York, NY, 10027 (United States); Simonović, Marko, E-mail: creminel@ictp.it, E-mail: jerome.gleyzes@cea.fr, E-mail: lhui@astro.columbia.edu, E-mail: msimonov@sissa.it, E-mail: filippo.vernizzi@cea.fr [SISSA, via Bonomea 265, Trieste, 34136 (Italy)

    2014-06-01

    The recently derived consistency relations for Large Scale Structure do not hold if the Equivalence Principle (EP) is violated. We show it explicitly in a toy model with two fluids, one of which is coupled to a fifth force. We explore the constraints that galaxy surveys can set on EP violation looking at the squeezed limit of the 3-point function involving two populations of objects. We find that one can explore EP violations of order 10⁻³–10⁻⁴ on cosmological scales. Chameleon models are already very constrained by the requirement of screening within the Solar System and only a very tiny region of the parameter space can be explored with this method. We show that no violation of the consistency relations is expected in Galileon models.

  3. Quantum Action Principle with Generalized Uncertainty Principle

    OpenAIRE

    Gu, Jie

    2013-01-01

    One of the common features in all promising candidates of quantum gravity is the existence of a minimal length scale, which naturally emerges with a generalized uncertainty principle, or equivalently a modified commutation relation. Schwinger's quantum action principle was modified to incorporate this modification, and was applied to the calculation of the kernel of a free particle, partly recovering the result previously studied using path integral.

  4. Experimental constraints on a minimal and nonminimal violation of the equivalence principle in the oscillations of massive neutrinos

    International Nuclear Information System (INIS)

    Gasperini, M.; Istituto Nazionale di Fisica Nucleare, Sezione di Torino, Torino, Italy)

    1989-01-01

    The negative results of the oscillation experiments are discussed under the hypothesis that the various neutrino types are not universally coupled to gravity. In this case the transition probability between two different flavor eigenstates may be affected by the local gravitational field present in a terrestrial laboratory, and the contribution of gravity can interfere, in general, with the mass contribution to the oscillation process. In particular it is shown that even a strong violation of the equivalence principle could be compatible with the experimental data, provided the gravity-induced energy splitting is balanced by a suitable neutrino mass difference.

  5. Bilingual Dictionaries and Communicative Equivalence for a ...

    African Journals Online (AJOL)

    This implies that a bilingual dictionary becomes a polyfunctional instrument, presenting more information than just translation equivalents. ... With the emphasis on the user perspective, metalexicographical criteria are used to investigate problems regarding the access structure and the addressing procedures in Afrikaans ...

  6. Implied Volatility of Interest Rate Options: An Empirical Investigation of the Market Model

    DEFF Research Database (Denmark)

    Christiansen, Charlotte; Hansen, Charlotte Strunk

    2002-01-01

    We analyze the empirical properties of the volatility implied in options on the 13-week US Treasury bill rate. These options have not been studied previously. It is shown that a European style put option on the interest rate is equivalent to a call option on a zero-coupon bond. We apply the LIBOR...

  7. Experimental method research on neutron equal dose-equivalent detection

    International Nuclear Information System (INIS)

    Ji Changsong

    1995-10-01

    The design principles of neutron dose-equivalent meters for neutron biological equi-effect detection are studied. Two traditional principles, the 'absorption net principle' and the 'multi-detector principle', are discussed, and on that basis a new theoretical principle for neutron biological equi-effect detection, the 'absorption stick principle', is put forward with the aim of both increasing the neutron sensitivity of this type of meter and overcoming the shortcomings of the two traditional methods. In accordance with this new principle, a new model of neutron dose-equivalent meter, BH3105, has been developed. Its neutron sensitivity reaches 10 cps/(μSv·h⁻¹), 18∼40 times higher than the 0.23∼0.56 cps/(μSv·h⁻¹) of comparable meters available today at home and abroad, and its specifications reach or surpass the levels of meters of the same kind. The experiments therefore show the new 'absorption stick principle' of neutron biological equi-effect detection to be scientific, advanced and useful. (3 refs., 3 figs., 2 tabs.)

  8. Current research efforts at JILA to test the equivalence principle at short ranges

    International Nuclear Information System (INIS)

    Faller, J.E.; Niebauer, T.M.; McHugh, M.P.; Van Baak, D.A.

    1988-01-01

    We are presently engaged in three different experiments to search for a possible breakdown of the equivalence principle at short ranges. The first of these experiments, which has been completed, is our so-called Galilean test in which the differential free-fall of two objects of differing composition was measured using laser interferometry. We observed that the differential acceleration of two test bodies was less than 5 parts in 10 billion. This experiment set new limits on a suggested baryon dependent ''Fifth Force'' at ranges longer than 1 km. With a second experiment, we are investigating substance dependent interactions primarily for ranges up to 10 meters using a fluid supported torsion balance; this apparatus has been built and is now undergoing laboratory tests. Finally, a proposal has been made to measure the gravitational signal associated with the changing water level at a large pumped storage facility in Ludington, Michigan. Measuring the gravitational signal above and below the pond will yield the value of the gravitational constant, G, at ranges from 10-100 m. These measurements will serve as an independent check on other geophysical measurements of G

  9. Principle of space existence and De Sitter metric

    International Nuclear Information System (INIS)

    Mal'tsev, V.K.

    1990-01-01

    The selection principle for the solutions of the Einstein equations suggested in a series of papers implies the existence of space (g_ik ≠ 0) only in the presence of matter (T_ik ≠ 0). This selection principle (principle of space existence, in the Markov terminology) implies, in the general case, the absence of the cosmological solution with the De Sitter metric. On the other hand, the De Sitter metric is necessary for describing both inflation and deflation periods of the Universe. It is shown that the De Sitter metric is also allowed by the selection principle under discussion if the metric experiences the evolution into the Friedmann metric.

  10. Allocentrically implied target locations are updated in an eye-centred reference frame.

    Science.gov (United States)

    Thompson, Aidan A; Glover, Christopher V; Henriques, Denise Y P

    2012-04-18

    When reaching to remembered target locations following an intervening eye movement a systematic pattern of error is found indicating eye-centred updating of visuospatial memory. Here we investigated if implicit targets, defined only by allocentric visual cues, are also updated in an eye-centred reference frame as explicit targets are. Participants viewed vertical bars separated by varying distances, and horizontal lines of equivalently varying lengths, implying a "target" location at the midpoint of the stimulus. After determining the implied "target" location from only the allocentric stimuli provided, participants saccaded to an eccentric location, and reached to the remembered "target" location. Irrespective of the type of stimulus reaching errors to these implicit targets are gaze-dependent, and do not differ from those found when reaching to remembered explicit targets. Implicit target locations are coded and updated as a function of relative gaze direction with respect to those implied locations just as explicit targets are, even though no target is specifically represented. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  11. Testing the Equivalence Principle in an Einstein Elevator: Detector Dynamics and Gravity Perturbations

    Science.gov (United States)

    Hubbard, Dorthy (Technical Monitor); Lorenzini, E. C.; Shapiro, I. I.; Cosmo, M. L.; Ashenberg, J.; Parzianello, G.; Iafolla, V.; Nozzoli, S.

    2003-01-01

    We discuss specific, recent advances in the analysis of an experiment to test the Equivalence Principle (EP) in free fall. A differential accelerometer detector with two proof masses of different materials free falls inside an evacuated capsule previously released from a stratospheric balloon. The detector spins slowly about its horizontal axis during the fall. An EP violation signal (if present) will manifest itself at the rotational frequency of the detector. The detector operates in a quiet environment as it slowly moves with respect to the co-moving capsule. There are, however, gravitational and dynamical noise contributions that need to be evaluated in order to define key requirements for this experiment. Specifically, higher-order mass moments of the capsule contribute errors to the differential acceleration output with components at the spin frequency which need to be minimized. The dynamics of the free falling detector (in its present design) has been simulated in order to estimate the tolerable errors at release which, in turn, define the release mechanism requirements. Moreover, the study of the higher-order mass moments for a worst-case position of the detector package relative to the cryostat has led to the definition of requirements on the shape and size of the proof masses.

  12. Solar system expansion and strong equivalence principle as seen by the NASA MESSENGER mission

    Science.gov (United States)

    Genova, Antonio; Mazarico, Erwan; Goossens, Sander; Lemoine, Frank G.; Neumann, Gregory A.; Smith, David E.; Zuber, Maria T.

    2018-01-01

    The NASA MESSENGER mission explored the innermost planet of the solar system and obtained a rich data set of range measurements for the determination of Mercury's ephemeris. Here we use these precise data collected over 7 years to estimate parameters related to general relativity and the evolution of the Sun. These results confirm the validity of the strong equivalence principle with a significantly refined uncertainty of the Nordtvedt parameter η = (-6.6 ± 7.2) × 10⁻⁵. By assuming a metric theory of gravitation, we retrieved the post-Newtonian parameter β = 1 + (-1.6 ± 1.8) × 10⁻⁵ and the Sun's gravitational oblateness, J₂⊙ = (2.246 ± 0.022) × 10⁻⁷. Finally, we obtain an estimate of the time variation of the Sun's gravitational parameter, (dGM⊙/dt)/GM⊙ = (-6.13 ± 1.47) × 10⁻¹⁴, which is consistent with the expected solar mass loss due to the solar wind and interior processes. This measurement allows us to constrain |dG/dt|/G to be <4 × 10⁻¹⁴ per year.

  13. Normalization for Implied Volatility

    OpenAIRE

    Fukasawa, Masaaki

    2010-01-01

    We study specific nonlinear transformations of the Black-Scholes implied volatility to show remarkable properties of the volatility surface. Model-free bounds on the implied volatility skew are given. Pricing formulas for the European options which are written in terms of the implied volatility are given. In particular, we prove elegant formulas for the fair strikes of the variance swap and the gamma swap.

  14. Forecasting with Option-Implied Information

    DEFF Research Database (Denmark)

    Christoffersen, Peter; Jacobs, Kris; Chang, Bo Young

    2013-01-01

    This chapter surveys the methods available for extracting information from option prices that can be used in forecasting. We consider option-implied volatilities, skewness, kurtosis, and densities. More generally, we discuss how any forecasting object that is a twice differentiable function of the future realization of the underlying risky asset price can utilize option-implied information in a well-defined manner. Going beyond the univariate option-implied density, we also consider results on option-implied covariance, correlation and beta forecasting, as well as the use of option-implied information in cross-sectional forecasting of equity returns. We discuss how option-implied information can be adjusted for risk premia to remove biases in forecasting regressions.

  15. Consistency of the Mach principle and the gravitational-to-inertial mass equivalence principle

    International Nuclear Information System (INIS)

    Granada, Kh.K.; Chubykalo, A.E.

    1990-01-01

    The kinematics of a system composed of two bodies interacting according to the inverse-square law is investigated. It is shown that the Mach principle, earlier rejected by the general relativity theory, can be used as an alternative to the absolute space concept if it is proposed that the distant star background dictates both the inertial and the gravitational mass of a body.

  16. How to estimate the differential acceleration in a two-species atom interferometer to test the equivalence principle

    Energy Technology Data Exchange (ETDEWEB)

    Varoquaux, G; Nyman, R A; Geiger, R; Cheinet, P; Bouyer, P [Laboratoire Charles Fabry de l' Institut d' Optique, Campus Polytechnique, RD 128, 91127 Palaiseau (France); Landragin, A [LNE-SYRTE, UMR8630, UPMC, Observatoire de Paris, 61 avenue de l' Observatoire, 75014 Paris (France)], E-mail: philippe.bouyer@institutoptique.fr

    2009-11-15

    We propose a scheme for testing the weak equivalence principle (universality of free-fall (UFF)) using an atom-interferometric measurement of the local differential acceleration between two atomic species with a large mass ratio as test masses. An apparatus in free fall can be used to track atomic free-fall trajectories over large distances. We show how the differential acceleration can be extracted from the interferometric signal using Bayesian statistical estimation, even in the case of a large mass and laser wavelength difference. We show that this statistical estimation method does not suffer from acceleration noise of the platform and does not require repeatable experimental conditions. We specialize our discussion to a dual potassium/rubidium interferometer and extend our protocol with other atomic mixtures. Finally, we discuss the performance of the UFF test developed for the free-fall (zero-gravity) airplane in the ICE project (http://www.ice-space.fr)
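
    As a very rough illustration of the kind of Bayesian estimation referred to above (this is not the authors' algorithm: the two-species fringe model, the noise level and the grid resolution are all assumptions made for the example), one can marginalise an unknown common vibration phase shot by shot and accumulate a posterior for the differential phase, from which the differential acceleration would follow through the interferometer scale factor.

      # Toy Bayesian estimate of a differential interferometer phase (assumed model).
      # Each shot i has an unknown common vibration phase phi_v[i]; the two species give
      #   yA = 0.5*(1 - cos(phi_v + phi_d)) + noise,   yB = 0.5*(1 - cos(phi_v)) + noise,
      # and the differential phase phi_d is estimated by marginalising phi_v on a grid.
      import numpy as np

      rng = np.random.default_rng(0)
      phi_d_true, sigma, n_shots = 0.7, 0.03, 200

      phi_v = rng.uniform(0.0, 2.0 * np.pi, n_shots)  # platform vibration, shot to shot
      yA = 0.5 * (1 - np.cos(phi_v + phi_d_true)) + sigma * rng.standard_normal(n_shots)
      yB = 0.5 * (1 - np.cos(phi_v)) + sigma * rng.standard_normal(n_shots)

      phi_d_grid = np.linspace(0.0, np.pi, 400)        # candidate differential phases
      phi_v_grid = np.linspace(0.0, 2.0 * np.pi, 200)  # marginalisation grid

      log_post = np.zeros_like(phi_d_grid)
      for k, phi_d in enumerate(phi_d_grid):
          mA = 0.5 * (1 - np.cos(phi_v_grid[None, :] + phi_d))
          mB = 0.5 * (1 - np.cos(phi_v_grid[None, :]))
          loglik = -((yA[:, None] - mA) ** 2 + (yB[:, None] - mB) ** 2) / (2 * sigma ** 2)
          peak = loglik.max(axis=1, keepdims=True)     # numerically safe log-mean-exp
          log_post[k] = np.sum(peak.ravel() + np.log(np.mean(np.exp(loglik - peak), axis=1)))

      print("estimated differential phase:", phi_d_grid[np.argmax(log_post)])
      # The differential acceleration would then be delta_a = phi_d / (k_eff * T**2) for an
      # effective wavenumber k_eff and interrogation time T (not specified in this sketch).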

  17. How to estimate the differential acceleration in a two-species atom interferometer to test the equivalence principle

    International Nuclear Information System (INIS)

    Varoquaux, G; Nyman, R A; Geiger, R; Cheinet, P; Bouyer, P; Landragin, A

    2009-01-01

    We propose a scheme for testing the weak equivalence principle (universality of free-fall (UFF)) using an atom-interferometric measurement of the local differential acceleration between two atomic species with a large mass ratio as test masses. An apparatus in free fall can be used to track atomic free-fall trajectories over large distances. We show how the differential acceleration can be extracted from the interferometric signal using Bayesian statistical estimation, even in the case of a large mass and laser wavelength difference. We show that this statistical estimation method does not suffer from acceleration noise of the platform and does not require repeatable experimental conditions. We specialize our discussion to a dual potassium/rubidium interferometer and extend our protocol with other atomic mixtures. Finally, we discuss the performance of the UFF test developed for the free-fall (zero-gravity) airplane in the ICE project (http://www.ice-space.fr).

  18. Solar system expansion and strong equivalence principle as seen by the NASA MESSENGER mission.

    Science.gov (United States)

    Genova, Antonio; Mazarico, Erwan; Goossens, Sander; Lemoine, Frank G; Neumann, Gregory A; Smith, David E; Zuber, Maria T

    2018-01-18

    The NASA MESSENGER mission explored the innermost planet of the solar system and obtained a rich data set of range measurements for the determination of Mercury's ephemeris. Here we use these precise data collected over 7 years to estimate parameters related to general relativity and the evolution of the Sun. These results confirm the validity of the strong equivalence principle with a significantly refined uncertainty of the Nordtvedt parameter η = (-6.6 ± 7.2) × 10⁻⁵. By assuming a metric theory of gravitation, we retrieved the post-Newtonian parameter β = 1 + (-1.6 ± 1.8) × 10⁻⁵ and the Sun's gravitational oblateness, J₂⊙ = (2.246 ± 0.022) × 10⁻⁷. Finally, we obtain an estimate of the time variation of the Sun's gravitational parameter, (dGM⊙/dt)/GM⊙ = (-6.13 ± 1.47) × 10⁻¹⁴, which is consistent with the expected solar mass loss due to the solar wind and interior processes. This measurement allows us to constrain |dG/dt|/G to be <4 × 10⁻¹⁴ per year.

  19. The Application of Equivalence Theory to Advertising Translation

    Institute of Scientific and Technical Information of China (English)

    张颖

    2017-01-01

    Through analyzing equivalence theory, the author tries to find a solution to the problems arising in the process of advertising translation. These problems include cultural diversity, language diversity and the special requirements of advertisements. The author argues that Nida's functional equivalence is one of the most appropriate theories to deal with these problems. In this paper, the author introduces the principles of advertising translation and the cultural divergences in advertising translation, and then gives some advertising translation examples to explain and analyze how to create good advertising translations by using functional equivalence. Finally, the author introduces some strategies for advertising translation.

  20. Limitations of Boltzmann's principle

    International Nuclear Information System (INIS)

    Lavenda, B.H.

    1995-01-01

    The usual form of Boltzmann's principle assures that maximum entropy, or entropy reduction, occurs with maximum probability, implying a unimodal distribution. Boltzmann's principle cannot be applied to nonunimodal distributions, like the arcsine law, because the entropy may be concave only over a limited portion of the interval. The method of subordination shows that the arcsine distribution corresponds to a process with a single degree of freedom, thereby confirming the invalidation of Boltzmann's principle. The fractalization of time leads to a new distribution in which arcsine and Cauchy distributions can coexist simultaneously for nonintegral degrees of freedom between √2 and 2

  1. The Satellite Test of the Equivalence Principle (STEP)

    Science.gov (United States)

    2004-01-01

    STEP will carry concentric test masses to Earth orbit to test a fundamental assumption underlying Einstein's theory of general relativity: that gravitational mass is equivalent to inertial mass. STEP is a 21st-century version of the test that Galileo is said to have performed by dropping a cannon ball and a musket ball simultaneously from the top of the Leaning Tower of Pisa to compare their accelerations. During the STEP experiment, four pairs of test masses will be falling around the Earth, and their accelerations will be measured by superconducting quantum interference devices (SQUIDs). The extended time sensitivity of the instruments will allow the measurements to be a million times more accurate than those made in modern ground-based tests.

  2. Formal structures, the concepts of covariance, invariance, equivalent reference frames, and the principle of relativity

    Science.gov (United States)

    Rodrigues, W. A.; Scanavini, M. E. F.; de Alcantara, L. P.

    1990-02-01

    In this paper a given spacetime theory T is characterized as the theory of a certain species of structure in the sense of Bourbaki [1]. It is then possible to clarify in a rigorous way the concepts of passive and active covariance of T under the action of the manifold mapping group G_M. For each T, we also define an invariance group G_I^T and, in general, G_I^T ≠ G_M. This group is defined once we realize that, for each τ ∈ Mod T, each explicit geometrical object defining the structure can be classified as absolute or dynamical [2]. All spacetime theories also possess implicit geometrical objects that do not appear explicitly in the structure. These implicit objects are neither absolute nor dynamical. Among them are the reference frame fields, i.e., “timelike” vector fields X ∈ TU, U ⊆ M, where M is a manifold which is part of ST, a substructure for each τ ∈ Mod T, called spacetime. We give a physically motivated definition of equivalent reference frames and introduce the concept of the equivalence group of a class of reference frames of kind X according to T, G_X^T. We define that T admits a weak principle of relativity (WPR) only if G_X^T ≠ identity for some X. If G_X^T = G_I^T for some X, we say that T admits a strong principle of relativity (PR). The results of this paper generalize and clarify several results obtained by Anderson [2], Scheibe [3], Hiskes [4], Recami and Rodrigues [5], Friedman [6], Fock [7], and Scanavini [8]. Among the novelties here is the realization that the definitions of G_I^T and G_X^T can be given only when certain boundary conditions for the equations of motion of T can be physically realizable in the domain U ⊆ M where a given reference frame is defined. The existence of physically realizable boundary conditions for each τ ∈ Mod T (in ∂U), in contrast with the mathematically possible boundary conditions, is then seen to be essential for the validity of a principle of relativity for T.

  3. Superstring field theory equivalence: Ramond sector

    International Nuclear Information System (INIS)

    Kroyter, Michael

    2009-01-01

    We prove that the finite gauge transformation of the Ramond sector of the modified cubic superstring field theory is ill-defined due to collisions of picture changing operators. Despite this problem we study to what extent could a bijective classical correspondence between this theory and the (presumably consistent) non-polynomial theory exist. We find that the classical equivalence between these two theories can almost be extended to the Ramond sector: We construct mappings between the string fields (NS and Ramond, including Chan-Paton factors and the various GSO sectors) of the two theories that send solutions to solutions in a way that respects the linearized gauge symmetries in both sides and keeps the action of the solutions invariant. The perturbative spectrum around equivalent solutions is also isomorphic. The problem with the cubic theory implies that the correspondence of the linearized gauge symmetries cannot be extended to a correspondence of the finite gauge symmetries. Hence, our equivalence is only formal, since it relates a consistent theory to an inconsistent one. Nonetheless, we believe that the fact that the equivalence formally works suggests that a consistent modification of the cubic theory exists. We construct a theory that can be considered as a first step towards a consistent RNS cubic theory.

  4. Construction principles and design rules in the case of circular design

    NARCIS (Netherlands)

    Romme, A.G.L.; Endenburg, G.

    2006-01-01

    This paper proposes science-based organization design that uses construction principles and design rules to guide practitioner-academic projects. Organization science implies construction principles for creating and implementing designs. These principles serve to construct design rules that are

  5. A Community Standard: Equivalency of Healthcare in Australian Immigration Detention.

    Science.gov (United States)

    Essex, Ryan

    2017-08-01

    The Australian government has long maintained that the standard of healthcare provided in its immigration detention centres is broadly comparable with health services available within the Australian community. Drawing on the literature from prison healthcare, this article examines (1) whether the principle of equivalency is being applied in Australian immigration detention and (2) whether this standard of care is achievable given Australia's current policies. This article argues that the principle of equivalency is not being applied and that this standard of health and healthcare will remain unachievable in Australian immigration detention without significant reform. Alternate approaches to addressing the well documented issues related to health and healthcare in Australian immigration detention are discussed.

  6. A Cp-theory problem book functional equivalencies

    CERN Document Server

    Tkachuk, Vladimir V

    2016-01-01

    This fourth volume in Vladimir Tkachuk's series on Cp-theory gives reasonably complete coverage of the theory of functional equivalencies through 500 carefully selected problems and exercises. By systematically introducing each of the major topics of Cp-theory, the book is intended to bring a dedicated reader from basic topological principles to the frontiers of modern research. The book presents complete and up-to-date information on the preservation of topological properties by homeomorphisms of function spaces. An exhaustive theory of t-equivalent, u-equivalent and l-equivalent spaces is developed from scratch. The reader will also find introductions to the theory of uniform spaces, the theory of locally convex spaces, as well as the theory of inverse systems and dimension theory. Moreover, Kolmogorov's solution of Hilbert's Problem 13 is included as it is needed for the presentation of the theory of l-equivalent spaces. This volume contains the most important classical re...

  7. On the role of the equivalence principle in the general relativity theory

    International Nuclear Information System (INIS)

    Gertsenshtein, M.E.; Stanyukovich, K.P.; Pogosyan, V.A.

    1977-01-01

    The conditions under which the solutions of the general relativity equations satisfy the correspondence principle are considered. It is shown that in general relativity, as in flat space, any system of coordinates satisfying the topological requirements of continuity and uniqueness is admissible. The coordinate transformations must be mutually unique, and the following requirements must be met: the transformations of the coordinates x^i = x^i(x̄^k) must preserve the class of the function, while the transformation Jacobian must be finite and nonzero. The admissible metrics in the Tolman problem for a vacuum are considered. A prohibition of the vacuum solution of the Tolman problem is obtained from the correspondence principle. The correspondence principle is applied to the solution of the Friedmann problem by constructing a spherically symmetric self-similar solution, in which the replacement of compression by expansion occurs at a finite density. The examples adduced show that the application of the correspondence principle makes it possible to discard physically inadmissible solutions and to obtain new physical results.

  8. Dark energy and equivalence principle constraints from astrophysical tests of the stability of the fine-structure constant

    Energy Technology Data Exchange (ETDEWEB)

    Martins, C.J.A.P.; Pinho, A.M.M.; Alves, R.F.C. [Centro de Astrofísica da Universidade do Porto, Rua das Estrelas, 4150-762 Porto (Portugal); Pino, M. [Institut Domènech i Montaner, C/Maspujols 21-23, 43206 Reus (Spain); Rocha, C.I.S.A. [Externato Ribadouro, Rua de Santa Catarina 1346, 4000-447 Porto (Portugal); Wietersheim, M. von, E-mail: Carlos.Martins@astro.up.pt, E-mail: Ana.Pinho@astro.up.pt, E-mail: up201106579@fc.up.pt, E-mail: mpc_97@yahoo.com, E-mail: cisar97@hotmail.com, E-mail: maxivonw@gmail.com [Institut Manuel Sales i Ferré, Avinguda de les Escoles 6, 43550 Ulldecona (Spain)

    2015-08-01

    Astrophysical tests of the stability of fundamental couplings, such as the fine-structure constant α, are becoming an increasingly powerful probe of new physics. Here we discuss how these measurements, combined with local atomic clock tests and Type Ia supernova and Hubble parameter data, constrain the simplest class of dynamical dark energy models where the same degree of freedom is assumed to provide both the dark energy and (through a dimensionless coupling, ζ, to the electromagnetic sector) the α variation. Specifically, current data tightly constrains a combination of ζ and the present dark energy equation of state w₀. Moreover, in these models the new degree of freedom inevitably couples to nucleons (through the α dependence of their masses) and leads to violations of the Weak Equivalence Principle. We obtain indirect bounds on the Eötvös parameter η that are typically stronger than the current direct ones. We discuss the model-dependence of our results and briefly comment on how the forthcoming generation of high-resolution ultra-stable spectrographs will enable significantly tighter constraints.

  9. Dark energy and equivalence principle constraints from astrophysical tests of the stability of the fine-structure constant

    International Nuclear Information System (INIS)

    Martins, C.J.A.P.; Pinho, A.M.M.; Alves, R.F.C.; Pino, M.; Rocha, C.I.S.A.; Wietersheim, M. von

    2015-01-01

    Astrophysical tests of the stability of fundamental couplings, such as the fine-structure constant α, are becoming an increasingly powerful probe of new physics. Here we discuss how these measurements, combined with local atomic clock tests and Type Ia supernova and Hubble parameter data, constrain the simplest class of dynamical dark energy models where the same degree of freedom is assumed to provide both the dark energy and (through a dimensionless coupling, ζ, to the electromagnetic sector) the α variation. Specifically, current data tightly constrains a combination of ζ and the present dark energy equation of state w₀. Moreover, in these models the new degree of freedom inevitably couples to nucleons (through the α dependence of their masses) and leads to violations of the Weak Equivalence Principle. We obtain indirect bounds on the Eötvös parameter η that are typically stronger than the current direct ones. We discuss the model-dependence of our results and briefly comment on how the forthcoming generation of high-resolution ultra-stable spectrographs will enable significantly tighter constraints

  10. Scalar utility theory and proportional processing: What does it actually imply?

    Science.gov (United States)

    Rosenström, Tom; Wiesner, Karoline; Houston, Alasdair I

    2016-09-07

    Scalar Utility Theory (SUT) is a model used to predict animal and human choice behaviour in the context of reward amount, delay to reward, and variability in these quantities (risk preferences). This article reviews and extends SUT, deriving novel predictions. We show that, contrary to what has been implied in the literature, (1) SUT can predict both risk averse and risk prone behaviour for both reward amounts and delays to reward depending on experimental parameters, (2) SUT implies violations of several concepts of rational behaviour (e.g. it violates strong stochastic transitivity and its equivalents, and leads to probability matching) and (3) SUT can predict, but does not always predict, a linear relationship between risk sensitivity in choices and coefficient of variation in the decision-making experiment. SUT derives from Scalar Expectancy Theory which models uncertainty in behavioural timing using a normal distribution. We show that the above conclusions also hold for other distributions, such as the inverse Gaussian distribution derived from drift-diffusion models. A straightforward way to test the key assumptions of SUT is suggested and possible extensions, future prospects and mechanistic underpinnings are discussed. Copyright © 2016 Elsevier Ltd. All rights reserved.
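
    A toy simulation of the scalar-noise choice mechanism behind SUT makes the risk-sensitivity claims above concrete. The sketch below is ours and purely illustrative: the Weber fraction, the reward values and the decision rule (sample each remembered magnitude once and choose the larger sample) are assumptions of the example, not parameters taken from the article.

      # Toy Monte Carlo: choice between a fixed amount and a 50/50 variable amount with the
      # same mean, when remembered magnitudes carry scalar noise (sd proportional to size).
      import numpy as np

      rng = np.random.default_rng(1)
      weber = 0.25          # assumed coefficient of variation of remembered magnitudes
      n_trials = 100_000

      fixed = np.full(n_trials, 5.0)                    # fixed option: always 5 units
      variable = rng.choice([1.0, 9.0], size=n_trials)  # variable option: 50/50, mean 5

      sample_fixed = rng.normal(fixed, weber * fixed)   # scalar (proportional) memory noise
      sample_variable = rng.normal(variable, weber * variable)

      p_variable = np.mean(sample_variable > sample_fixed)
      print(f"P(choose variable amount) = {p_variable:.3f}")
      # A value below 0.5 means risk aversion for amounts under this rule, even though the
      # two options have equal means; if the same magnitudes are read as delays (where the
      # smaller sample wins), the preference flips toward risk proneness, illustrating the
      # parameter dependence discussed in the abstract.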

  11. Elementary physical approach to Mach's principle and its observational basis

    International Nuclear Information System (INIS)

    Horak, Z.

    1979-01-01

    It is shown that Mach's principle and the general principle of relativity are logical consequences of a 'materialistic postulate' and that general relativity implies the validity of Mach's principle for a static (or quasistatic) homogeneous and isotropic universe, spatially self-enclosed. The finite velocity of propagation of the gravitational field does not imply a retardation of the inertial forces due to the distant masses and therefore does not exclude the validity of Mach's principle. Similarly, the experimentally verified isotropy of inertia is compatible with this principle. The recent observational evidence of very high isotropy of the actual universe proves that the 'anti-Machian' Gödel world model must be rejected as a nonphysical one. This suggests the possibility of a renaissance of Einstein's first cosmological model by considering, in the spirit of an older idea of Herbert Dingle, a super-large-scale quasistatic universe consisting of an unknown number of statistically oscillating regions similar to our own, momentarily expanding, metagalaxy. (author)

  12. Quantum correlations are tightly bound by the exclusivity principle.

    Science.gov (United States)

    Yan, Bin

    2013-06-28

    It is a fundamental problem in physics what principle limits the correlations predicted by our current description of nature, based on quantum mechanics. One possible explanation is the "global exclusivity" principle recently discussed in Phys. Rev. Lett. 110, 060402 (2013). In this work we show that this principle actually imposes a much stronger restriction on the probability distribution. We provide a tight constraint inequality imposed by this principle and prove that this principle singles out quantum correlations in scenarios represented by any graph. Our result implies that the exclusivity principle might be one of the fundamental principles of nature.

  13. Negative masses, even if isolated, imply self-acceleration, hence a catastrophic world

    International Nuclear Information System (INIS)

    Cavalleri, G.; Tonni, E.

    1997-01-01

    The conjecture that negative masses exist together with ordinary positive masses leads to runaway motions even if no self-reaction is considered. Pollard and Dunning-Davies have shown further constraints, namely a modification of the principle of least action and the requirements that negative masses can only exist at negative temperature and must be kept adiabatically separate from positive masses. The authors show here that the self-reaction on a single isolated negative mass implies a runaway motion. Consequently, the consideration of self-fields and the relevant self-reaction excludes negative masses even if isolated.

  14. Equivalent and Alternative Forms for BF Gravity with Immirzi Parameter

    Directory of Open Access Journals (Sweden)

    Merced Montesinos

    2011-11-01

    A detailed analysis of the BF formulation for general relativity given by Capovilla, Montesinos, Prieto, and Rojas is performed. The action principle of this formulation is written in an equivalent form by doing a transformation of the fields on which the action depends functionally. The transformed action principle involves two BF terms and the two Lorentz invariants that appear in the original action principle generically. As an application of this formalism, the action principle used by Engle, Pereira, and Rovelli in their spin foam model for gravity is recovered and the coupling of the cosmological constant in such a formulation is obtained.

  15. 32 CFR 634.8 - Implied consent.

    Science.gov (United States)

    2010-07-01

    ... 32 National Defense 4 2010-07-01 2010-07-01 true Implied consent. 634.8 Section 634.8 National Defense Department of Defense (Continued) DEPARTMENT OF THE ARMY (CONTINUED) LAW ENFORCEMENT AND CRIMINAL INVESTIGATIONS MOTOR VEHICLE TRAFFIC SUPERVISION Driving Privileges § 634.8 Implied consent. (a) Implied consent to blood, breath, or urine tests....

  16. Human perception and the uncertainty principle

    International Nuclear Information System (INIS)

    Harney, R.C.

    1976-01-01

    The concept of the uncertainty principle that position and momentum cannot be simultaneously specified to arbitrary accuracy is somewhat difficult to reconcile with experience. This note describes order-of-magnitude calculations which quantify the inadequacy of human perception with regard to direct observation of the breakdown of the trajectory concept implied by the uncertainty principle. Even with the best optical microscope, human vision is inadequate by three orders of magnitude. 1 figure

  17. Fractional Black–Scholes option pricing, volatility calibration and implied Hurst exponents in South African context

    Directory of Open Access Journals (Sweden)

    Emlyn Flint

    2017-03-01

    least-squares optimisation routine. Results: It is shown that a fractional Black–Scholes model always admits a non-constant implied volatility term structure when the Hurst exponent is not 0.5, and that 1-year implied volatility is independent of the Hurst exponent and equivalent to fractional volatility. Furthermore, we show that the FBSI model fits the equity index implied volatility data very well but that a more flexible Hurst exponent parameterisation is required to fit accurately the currency implied volatility data. Conclusion: The FBSI model is an arbitrage-free deterministic volatility model that can accurately model equity index implied volatility. It also provides one with an estimate of the implied Hurst exponent, which could be very useful in derivatives trading and delta hedging.
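
    The term-structure claim in the results above can be made concrete with a one-line calculation. The sketch below assumes the simplest fractional Black-Scholes setting, in which the total variance of log-returns over a horizon T scales as σ²T^(2H); the notation is ours, not the article's.

      % Sketch (assumed fractional scaling of total variance):
      \[
        \sigma_{\mathrm{imp}}^{2}(T)\,T \;=\; \sigma^{2}\,T^{2H}
        \quad\Longrightarrow\quad
        \sigma_{\mathrm{imp}}(T) \;=\; \sigma\,T^{\,H-\frac{1}{2}} .
      \]
      % The term structure is therefore non-constant whenever H differs from 1/2, while at
      % T = 1 (year) \sigma_{\mathrm{imp}}(1) = \sigma independently of the Hurst exponent;
      % these are the two properties quoted in the results above.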

  18. Mathematically equivalent hydraulic analogue for teaching the pharmcokinetics and pharmacodynamics of atracurium

    NARCIS (Netherlands)

    Nikkelen, A.L.J.M.; Meurs, van W.L.; Ohrn, M.A.K.

    1997-01-01

    We evaluated the mathematical equivalence between the two-compartment pharmacokinetic model of the neuromuscular blocking agent atracurium and a hydraulic analogue that includes pharmacodynamic principles.

  19. [Discussion on ideological concept implied in traditional reinforcing and reducing method of acupuncture].

    Science.gov (United States)

    Li, Suyun; Zhao, Jingsheng

    2017-11-12

    The formation and development of the traditional reinforcing and reducing method of acupuncture was rooted in the traditional culture of China and was based on the ancients' particular understanding of nature, life and disease; its principles and methods were therefore inevitably influenced by the philosophical culture and medical concepts of the time. Through a close study of the Inner Canon of Huangdi and of representative reinforcing and reducing methods of acupuncture, the implied ideological concepts, including the views of contradiction and of profit and loss in ancient dialectics, the theory of yin-yang balance, the concept of life flow, the monophyletic theory of qi, the theory of the existence of disease-evil, the yin-yang astrology theory and the theory of the inter-promotion of the five elements, are summarized and analyzed. A clarified and systematic understanding of the guiding ideology of the reinforcing and reducing method of acupuncture could significantly promote the understanding of its principles, methods, content and manipulation.

  20. Solar neutrino results and Violation of the Equivalence Principle An analysis of the existing data and predictions for SNO

    CERN Document Server

    Majumdar, D; Sil, A; Majumdar, Debasish; Raychaudhuri, Amitava; Sil, Arunansu

    2001-01-01

    Violation of the Equivalence Principle (VEP) can lead to neutrino oscillation through the non-diagonal coupling of neutrino flavor eigenstates with the gravitational field. The neutrino energy dependence of this oscillation probability is different from that of the usual mass-mixing neutrino oscillations. In this work we explore, in detail, the viability of the VEP hypothesis as a solution to the solar neutrino problem in a two-generation scenario with both the active and sterile neutrino alternatives, choosing these states to be massless. To obtain the best-fit values of the oscillation parameters we perform a chi-square analysis for the total rates of solar neutrinos seen at the Chlorine (Homestake), Gallium (Gallex and SAGE), Kamiokande, and SuperKamiokande (SK) experiments. We find that the goodness of these fits is never satisfactory. It markedly improves if the Chlorine data is excluded from the analysis, especially for VEP transformation to sterile neutrinos. The 1117-day SK data for recoil electron sp...
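
    In the usual two-flavour parameterization (a hedged sketch of the standard formulas, with numerical factors depending on conventions), the contrast in energy dependence noted above can be written as

      \[
      P^{\text{mass}}_{\nu_e\to\nu_x} = \sin^2 2\theta \,\sin^2\!\Big(\frac{\Delta m^2 L}{4E}\Big),
      \qquad
      P^{\text{VEP}}_{\nu_e\to\nu_x} = \sin^2 2\theta_G \,\sin^2\!\Big(\frac{\pi L}{\lambda_G}\Big),
      \qquad
      \lambda_G \propto \frac{1}{E\,|\phi|\,\Delta\gamma},
      \]

    where θ_G is the gravitational mixing angle, φ the ambient gravitational potential and Δγ the degree of equivalence-principle violation: the VEP oscillation length shrinks with increasing neutrino energy, whereas the mass-mixing length grows with it.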

  1. An Abstract Approach to Process Equivalence and a Coinduction Principle for Traces

    DEFF Research Database (Denmark)

    Klin, Bartek

    2004-01-01

    An abstract coalgebraic approach to well-structured relations on processes is presented, based on notions of tests and test suites. Preorders and equivalences on processes are modelled as coalgebras for behaviour endofunctors lifted to a category of test suites. The general framework is specializ...

  2. Local unitary versus local Clifford equivalence of stabilizer and graph states

    International Nuclear Information System (INIS)

    Zeng, Bei; Chung, Hyeyoun; Cross, Andrew W.; Chuang, Isaac L.

    2007-01-01

    The equivalence of stabilizer states under local transformations is of fundamental interest in understanding properties and uses of entanglement. Two stabilizer states are equivalent under the usual stochastic local operations and classical communication criterion if and only if they are equivalent under local unitary (LU) operations. More surprisingly, under certain conditions, two LU-equivalent stabilizer states are also equivalent under local Clifford (LC) operations, as was shown by Van den Nest et al. [Phys. Rev. A 71, 062323 (2005)]. Here, we broaden the class of stabilizer states for which LU equivalence implies LC equivalence (LU ⇒ LC) to include all stabilizer states represented by graphs with cycles of length neither 3 nor 4. To compare our result with Van den Nest et al.'s, we show that any stabilizer state of distance δ=2 is beyond their criterion. We then further prove that LU ⇒ LC holds for a more general class of stabilizer states of δ=2. We also explicitly construct graphs representing δ>2 stabilizer states which are beyond their criterion: we identify all 58 graphs with up to 11 vertices and construct graphs with 2^m − 1 (m ≥ 4) vertices using quantum error-correcting codes which have non-Clifford transversal gates

  3. On the Bourbaki-Witt principle in toposes

    Science.gov (United States)

    Bauer, Andrej; Lumsdaine, Peter Lefanu

    2013-07-01

    The Bourbaki-Witt principle states that any progressive map on a chain-complete poset has a fixed point above every point. It is provable classically, but not intuitionistically. We study this and related principles in an intuitionistic setting. Among other things, we show that Bourbaki-Witt fails exactly when the trichotomous ordinals form a set, but does not imply that fixed points can always be found by transfinite iteration. Meanwhile, on the side of models, we see that the principle fails in realisability toposes, and does not hold in the free topos, but does hold in all cocomplete toposes.

  4. The Principle of Energetic Consistency

    Science.gov (United States)

    Cohn, Stephen E.

    2009-01-01

    A basic result in estimation theory is that the minimum variance estimate of the dynamical state, given the observations, is the conditional mean estimate. This result holds independently of the specifics of any dynamical or observation nonlinearity or stochasticity, requiring only that the probability density function of the state, conditioned on the observations, has two moments. For nonlinear dynamics that conserve a total energy, this general result implies the principle of energetic consistency: if the dynamical variables are taken to be the natural energy variables, then the sum of the total energy of the conditional mean and the trace of the conditional covariance matrix (the total variance) is constant between observations. Ensemble Kalman filtering methods are designed to approximate the evolution of the conditional mean and covariance matrix. For them the principle of energetic consistency holds independently of ensemble size, even with covariance localization. However, full Kalman filter experiments with advection dynamics have shown that a small amount of numerical dissipation can cause a large, state-dependent loss of total variance, to the detriment of filter performance. The principle of energetic consistency offers a simple way to test whether this spurious loss of variance limits ensemble filter performance in full-blown applications. The classical second-moment closure (third-moment discard) equations also satisfy the principle of energetic consistency, independently of the rank of the conditional covariance matrix. Low-rank approximation of these equations offers an energetically consistent, computationally viable alternative to ensemble filtering. Current formulations of long-window, weak-constraint, four-dimensional variational methods are designed to approximate the conditional mode rather than the conditional mean. Thus they neglect the nonlinear bias term in the second-moment closure equation for the conditional mean. The principle of

  5. Gauge theories under incorporation of a generalized uncertainty principle

    International Nuclear Information System (INIS)

    Kober, Martin

    2010-01-01

    An extension of gauge theories is considered under the assumption of a generalized uncertainty principle which implies a minimal length scale. A modification of the usual uncertainty principle implies an extended shape of matter field equations such as the Dirac equation. If invariance of such a generalized field equation under local gauge transformations is postulated, the usual covariant derivative containing the gauge potential has to be replaced by a generalized covariant derivative. This leads to a generalized interaction between the matter field and the gauge field as well as to an additional self-interaction of the gauge field. Since the existence of a minimal length scale seems to be a necessary assumption of any consistent quantum theory of gravity, since the gauge principle is a constitutive ingredient of the standard model, and since even gravity can be described as a gauge theory of local translations or Lorentz transformations, the presented extension of gauge theories appears to be an important consideration.
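
    A commonly used form of such a generalized uncertainty principle (a standard textbook choice, not necessarily the exact deformation adopted in this work) and the minimal length it implies is

      \[
      [\hat{x},\hat{p}] = i\hbar\left(1 + \beta \hat{p}^{\,2}\right)
      \;\Longrightarrow\;
      \Delta x\,\Delta p \ge \frac{\hbar}{2}\left(1 + \beta\,(\Delta p)^2\right)
      \;\Longrightarrow\;
      \Delta x_{\min} = \hbar\sqrt{\beta},
      \]

    so that no state can be localized below Δx_min; it is this feature that propagates into the modified matter field equations and covariant derivatives.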

  6. Planck Constant Determination from Power Equivalence

    Science.gov (United States)

    Newell, David B.

    2000-04-01

    Equating mechanical to electrical power links the kilogram, the meter, and the second to the practical realizations of the ohm and the volt derived from the quantum Hall and the Josephson effects, yielding an SI determination of the Planck constant. The NIST watt balance uses this power equivalence principle, and in 1998 measured the Planck constant with a combined relative standard uncertainty of 8.7 × 10⁻⁸, the most accurate determination to date. The next generation of the NIST watt balance is now being assembled. Modifications to the experimental facilities have been made to reduce the uncertainty components from vibrations and electromagnetic interference. A vacuum chamber has been installed to reduce the uncertainty components associated with performing the experiment in air. Most of the apparatus is in place and diagnostic testing of the balance should begin this year. Once a combined relative standard uncertainty of one part in 10⁸ has been reached, the power equivalence principle can be used to monitor the possible drift in the artifact mass standard, the kilogram, and provide an accurate alternative definition of mass in terms of fundamental constants. *Electricity Division, Electronics and Electrical Engineering Laboratory, Technology Administration, U.S. Department of Commerce. Contribution of the National Institute of Standards and Technology, not subject to copyright in the U.S.
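
    In schematic form (standard watt-balance relations with notation assumed here, not NIST's detailed error budget), the weighing and moving modes combine into a virtual power balance in which the Planck constant enters through the Josephson and quantum Hall effects:

      \[
      m g v = U I, \qquad
      U = n_1 f_1 \frac{h}{2e}, \qquad
      I = \frac{U_2}{R_K/i} = n_2 f_2 \frac{h}{2e}\cdot\frac{i e^2}{h}
      \;\Longrightarrow\;
      h = \frac{4\, m g v}{n_1 n_2\, i\, f_1 f_2},
      \]

    so h follows from a traceable mass m, the local gravitational acceleration g, a coil velocity v and measured Josephson frequencies f_1, f_2.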

  7. Diabetes and hypertension care among male prisoners in Mexico City: exploring transition of care and the equivalence principle.

    Science.gov (United States)

    Silverman-Retana, Omar; Servan-Mori, Edson; Lopez-Ridaura, Ruy; Bautista-Arredondo, Sergio

    2016-07-01

    To document the performance of diabetes and hypertension care in two large male prisons in Mexico City. We analyzed data from a cross-sectional study carried out during July-September 2010, including 496 prisoners with hypertension or diabetes in Mexico City. Bivariate and multivariable logistic regressions were used to assess process-of-care indicators and disease control status. Hypertension and diabetes prevalence were estimated at 2.1% and 1.4%, respectively. Among prisoners with diabetes, 22.7% (n = 62) had hypertension as a comorbidity. Lower achievement of process-of-care indicators (follow-up visits, blood pressure and laboratory assessments) was observed during incarceration than for the same prisoners in the year prior to incarceration. Compared with the nonimprisoned diabetes population from Mexico City and with that from the lowest quintile of socioeconomic status at the national level, prisoners with diabetes had the lowest performance on process-of-care indicators. Continuity of care for chronic diseases, coupled with the equivalence of care principle, should provide the basis for designing chronic disease health policy for prisoners, with the goal of consistent transition of care from community to prison and vice versa.

  8. Option-implied measures of equity risk

    DEFF Research Database (Denmark)

    Chang, Bo-Young; Christoffersen, Peter; Vainberg, Gregory

    2012-01-01

    Equity risk measured by beta is of great interest to both academics and practitioners. Existing estimates of beta use historical returns. Many studies have found option-implied volatility to be a strong predictor of future realized volatility. We find that option-implied volatility and skewness...... are also good predictors of future realized beta. Motivated by this finding, we establish a set of assumptions needed to construct a beta estimate from option-implied return moments using equity and index options. This beta can be computed using only option data on a single day. It is therefore potentially...
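
    The flavour of the construction can be sketched as follows (a hedged illustration of how single-day, option-implied moments may be combined into a beta estimate; the paper's exact assumptions and formula are not reproduced here):

      \[
      \beta_i \;\approx\;
      \left(\frac{\mathrm{SKEW}^Q_i}{\mathrm{SKEW}^Q_m}\right)^{1/3}
      \left(\frac{\mathrm{VAR}^Q_i}{\mathrm{VAR}^Q_m}\right)^{1/2},
      \]

    where VAR^Q and SKEW^Q denote risk-neutral variance and skewness extracted from equity (i) and index (m) options.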

  9. Assaults by Mentally Disordered Offenders in Prison: Equity and Equivalence.

    Science.gov (United States)

    Hales, Heidi; Dixon, Amy; Newton, Zoe; Bartlett, Annie

    2016-06-01

    Managing the violent behaviour of mentally disordered offenders (MDO) is challenging in all jurisdictions. We describe the ethical framework and practical management of MDOs in England and Wales in the context of the move to equivalence of healthcare between hospital and prison. We consider the similarities and differences between prison and hospital management of the violent and challenging behaviours of MDOs. We argue that both types of institution can learn from each other and that equivalence of care should extend to equivalence of criminal proceedings in court and prisons for MDOs. We argue that any adjudication process in prison for MDOs is enhanced by the relevant involvement of mental health professionals and the articulation of the ethical principles underpinning health and criminal justice practices.

  10. Equivalent Circuit Modeling of Hysteresis Motors

    Energy Technology Data Exchange (ETDEWEB)

    Nitao, J J; Scharlemann, E T; Kirkendall, B A

    2009-08-31

    We performed a literature review and found that many equivalent circuit models of hysteresis motors in use today are incorrect. The model by Miyairi and Kataoka (1965) is the correct one. We extended the model by transforming it to quadrature coordinates, amenable to circuit or digital simulation. 'Hunting' is an oscillatory phenomenon often observed in hysteresis motors. While several works have attempted to model the phenomenon with some partial success, we present a new complete model that predicts hunting from first principles.

  11. Conditions for equivalence of statistical ensembles in nuclear multifragmentation

    International Nuclear Information System (INIS)

    Mallik, Swagata; Chaudhuri, Gargi

    2012-01-01

    Statistical models based on canonical and grand canonical ensembles are extensively used to study intermediate energy heavy-ion collisions. The underlying physical assumptions behind the canonical and grand canonical models are fundamentally different, and in principle the two agree only in the thermodynamic limit, when the number of particles becomes infinite. Nevertheless, we show that these models are equivalent in the sense that they predict similar results if certain conditions are met, even for finite nuclei. In particular, the results converge when nuclear multifragmentation leads to the formation of predominantly nucleons and low mass clusters. The conditions under which the equivalence holds are amenable to present day experiments.

  12. Progress in classical and quantum variational principles

    International Nuclear Information System (INIS)

    Gray, C G; Karl, G; Novikov, V A

    2004-01-01

    We review the development and practical uses of a generalized Maupertuis least action principle in classical mechanics in which the action is varied under the constraint of fixed mean energy for the trial trajectory. The original Maupertuis (Euler-Lagrange) principle constrains the energy at every point along the trajectory. The generalized Maupertuis principle is equivalent to Hamilton's principle. Reciprocal principles are also derived for both the generalized Maupertuis and the Hamilton principles. The reciprocal Maupertuis principle is the classical limit of Schroedinger's variational principle of wave mechanics and is also very useful to solve practical problems in both classical and semiclassical mechanics, in complete analogy with the quantum Rayleigh-Ritz method. Classical, semiclassical and quantum variational calculations are carried out for a number of systems, and the results are compared. Pedagogical as well as research problems are used as examples, which include nonconservative as well as relativistic systems. '... the most beautiful and important discovery of Mechanics.' Lagrange to Maupertuis (November 1756)
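
    In compact form (notation assumed; a sketch of the statements reviewed rather than their derivation), the constrained variations compared above read

      \[
      \text{Maupertuis: } (\delta W)_{E}=0, \qquad
      \text{generalized Maupertuis: } (\delta W)_{\bar E}=0, \qquad
      \text{reciprocal: } (\delta \bar E)_{W}=0,
      \qquad
      W=\int p\,\mathrm{d}q, \quad \bar E=\frac{1}{T}\int_0^{T} H\,\mathrm{d}t,
      \]

    where the subscript denotes the quantity held fixed along the trial trajectory.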

  13. Babinet's principle in double-refraction systems

    Science.gov (United States)

    Ropars, Guy; Le Floch, Albert

    2014-06-01

    Babinet's principle applied to systems with double refraction is shown to involve spatial interchanges between the ordinary and extraordinary patterns observed through two complementary screens. Since, as in the case of metamaterials, the extraordinary beam does not follow the Snell-Descartes refraction law, the superposition principle has to be applied simultaneously at two points. Surprisingly, and contrary to intuition, in the presence of the screen with an opaque region we observe that the emerging extraordinary photon pattern, which has nevertheless undergone a deviation, remains fixed when a natural birefringent crystal is rotated, while the ordinary one rotates with the crystal. The twofold application of Babinet's principle implies not only intensity and polarization interchanges but also spatial and dynamic interchanges, which should occur in birefringent metamaterials.

  14. Multifractal analysis of implied volatility in index options

    Science.gov (United States)

    Oh, GabJin

    2014-06-01

    In this paper, we analyze the statistical and the non-linear properties of the log-variations in implied volatility for the CAC40, DAX and S&P 500 daily index options. The price of an index option is generally represented by its implied volatility surface, including its smile and skew properties. We utilize a Lévy process model as the underlying asset to deepen our understanding of the intrinsic property of the implied volatility in the index options and estimate the implied volatility surface. We find that the options pricing models with the exponential Lévy model can reproduce the smile or sneer features of the implied volatility that are observed in real options markets. We study the variation in the implied volatility for at-the-money index call and put options, and we find that the distribution function follows a power-law distribution with an exponent of 3.5 ≤ γ ≤ 4.5. Especially, the variation in the implied volatility exhibits multifractal spectral characteristics, and the global financial crisis has influenced the complexity of the option markets.

  15. On the equivalence of the Clauser–Horne and Eberhard inequality based tests

    International Nuclear Information System (INIS)

    Khrennikov, Andrei; Ramelow, Sven; Ursin, Rupert; Wittmann, Bernhard; Kofler, Johannes; Basieva, Irina

    2014-01-01

    Recently, the results of the first experimental test for entangled photons closing the detection loophole (also referred to as the fair sampling loophole) were published (Vienna, 2013). From the theoretical viewpoint, the main distinguishing feature of this long-aspired-to experiment was that the Eberhard inequality was used. Almost simultaneously another experiment closing this loophole was performed (Urbana-Champaign, 2013), and it was based on the Clauser–Horne inequality (for probabilities). The aim of this note is to analyze the mathematical and experimental equivalence of tests based on the Eberhard inequality and various forms of the Clauser–Horne inequality. The structure of the mathematical equivalence is nontrivial. In particular, it is necessary to distinguish between algebraic and statistical equivalence. Although the tests based on these inequalities are algebraically equivalent, they need not be equivalent statistically, i.e., theoretically the level of statistical significance can drop under transition from one test to another (at least for finite samples). Nevertheless, the data collected in the Vienna test imply not only a statistically significant violation of the Eberhard inequality, but also of the Clauser–Horne inequality (in the ratio-rate form): in both cases a violation of more than 60σ. (paper)

  16. Dynamical response of the Galileo Galilei on the ground rotor to test the equivalence principle: Theory, simulation, and experiment. II. The rejection of common mode forces

    International Nuclear Information System (INIS)

    Comandi, G.L.; Toncelli, R.; Chiofalo, M.L.; Bramanti, D.; Nobili, A.M.

    2006-01-01

    'Galileo Galilei on the ground' (GGG) is a fast rotating differential accelerometer designed to test the equivalence principle (EP). Its sensitivity to differential effects, such as the effect of an EP violation, depends crucially on the capability of the accelerometer to reject all effects acting in common mode. By applying the theoretical and simulation methods reported in Part I of this work, and tested therein against experimental data, we predict the occurrence of an enhanced common mode rejection of the GGG accelerometer. We demonstrate that the best rejection of common mode disturbances can be tuned in a controlled way by varying the spin frequency of the GGG rotor

  17. The special theory of Brownian relativity: equivalence principle for dynamic and static random paths and uncertainty relation for diffusion.

    Science.gov (United States)

    Mezzasalma, Stefano A

    2007-03-15

    The theoretical basis of a recent theory of Brownian relativity for polymer solutions is deepened and reexamined. After the problem of relative diffusion in polymer solutions is addressed, its two postulates are formulated in all generality. The former builds a statistical equivalence between (uncorrelated) timelike and shapelike reference frames, that is, among dynamical trajectories of liquid molecules and static configurations of polymer chains. The latter defines the "diffusive horizon" as the invariant quantity to work with in the special version of the theory. Particularly, the concept of universality in polymer physics corresponds in Brownian relativity to that of covariance in the Einstein formulation. Here, a "universal" law consists of a privileged observation, performed from the laboratory rest frame and agreeing with any diffusive reference system. From the joint lack of covariance and simultaneity implied by the Brownian Lorentz-Poincaré transforms, a relative uncertainty arises, in a certain analogy with quantum mechanics. It is driven by the difference between local diffusion coefficients in the liquid solution. The same transformation class can be used to infer Fick's second law of diffusion, playing here the role of a gauge invariance preserving covariance of the spacetime increments. An overall, noteworthy conclusion emerging from this view concerns the statistics of (i) static macromolecular configurations and (ii) the motion of liquid molecules, which would be much more related than expected.

  18. Derivation of the blackbody radiation spectrum from the equivalence principle in classical physics with classical electromagnetic zero-point radiation

    International Nuclear Information System (INIS)

    Boyer, T.H.

    1984-01-01

    A derivation of Planck's spectrum including zero-point radiation is given within classical physics from recent results involving the thermal effects of acceleration through classical electromagnetic zero-point radiation. A harmonic electric-dipole oscillator undergoing a uniform acceleration a through classical electromagnetic zero-point radiation responds as would the same oscillator in an inertial frame when not in zero-point radiation but in a different spectrum of random classical radiation. Since the equivalence principle tells us that the oscillator supported in a gravitational field g = -a will respond in the same way, we see that in a gravitational field we can construct a perpetual-motion machine based on this different spectrum unless the different spectrum corresponds to that of thermal equilibrium at a finite temperature. Therefore, assuming the absence of perpetual-motion machines of the first kind in a gravitational field, we conclude that the response of an oscillator accelerating through classical zero-point radiation must be that of a thermal system. This then determines the blackbody radiation spectrum in an inertial frame which turns out to be exactly Planck's spectrum including zero-point radiation

  19. BH3105 type neutron dose equivalent meter of high sensitivity

    International Nuclear Information System (INIS)

    Ji Changsong; Zhang Enshan; Yang Jianfeng; Zhang Hong; Huang Jiling

    1995-10-01

    It is noted that designing a neutron dose meter of high sensitivity is almost impossible within the frame of the traditional design principle, the 'absorption net principle'. Based on a newly proposed principle for obtaining the neutron dose equi-biological effect adjustment, the 'absorption stick principle', a new neutron dose-equivalent meter of high neutron sensitivity, BH3105, has been developed. Its sensitivity reaches 10 cps/(μSv·h⁻¹), which is 18 to 40 times higher than that of foreign products of the same kind and 10⁴ times higher than that of the domestic FJ342 neutron rem-meter. BH3105 has a measurement range from 0.1 μSv/h to 1 Sv/h, which is 1 to 2 orders of magnitude wider than that of the others. It has advanced properties in terms of gamma resistance, energy response, orientation, etc. (6 tabs., 5 figs.)

  20. Completely boundary-free minimum and maximum principles for neutron transport and their least-squares and Galerkin equivalents

    International Nuclear Information System (INIS)

    Ackroyd, R.T.

    1982-01-01

    Some minimum and maximum variational principles for even-parity neutron transport are reviewed and the corresponding principles for odd-parity transport are derived by a simple method to show why the essential boundary conditions associated with these maximum principles have to be imposed. The method also shows why both the essential and some of the natural boundary conditions associated with these minimum principles have to be imposed. These imposed boundary conditions for trial functions in the variational principles limit the choice of the finite element used to represent trial functions. The reasons for the boundary conditions imposed on the principles for even- and odd-parity transport point the way to a treatment of composite neutron transport, for which completely boundary-free maximum and minimum principles are derived from a functional identity. In general a trial function is used for each parity in the composite neutron transport, but this can be reduced to one without any boundary conditions having to be imposed. (author)

  1. Implied terms in English and Romanian law

    OpenAIRE

    Stefan Dinu

    2015-01-01

    This study analyses the matter of implied terms from the point of view of both English and Romanian law. First, the introductory section provides a brief overview of implied terms, by defining this class of contractual clauses and by providing their general features. Second, the English law position is analysed, where it is generally recognised that a term may be implied in one of three manners, which are described in turn. An emphasis is made on the Privy Council’s decision in Attorney G...

  2. The Forecast Performance of Competing Implied Volatility Measures

    DEFF Research Database (Denmark)

    Tsiaras, Leonidas

    This study examines the information content of alternative implied volatility measures for the 30 components of the Dow Jones Industrial Average Index from 1996 until 2007. Along with the popular Black-Scholes and "model-free" implied volatility expectations, the recently proposed corridor implie......, volatility definitions, loss functions and forecast evaluation settings....

  3. An implementation of the maximum-caliber principle by replica-averaged time-resolved restrained simulations.

    Science.gov (United States)

    Capelli, Riccardo; Tiana, Guido; Camilloni, Carlo

    2018-05-14

    Inferential methods can be used to integrate experimental information and molecular simulations. The maximum entropy principle provides a framework for using equilibrium experimental data, and it has been shown that replica-averaged simulations, restrained using a static potential, are a practical and powerful implementation of such a principle. Here we show that replica-averaged simulations restrained using a time-dependent potential are equivalent to the principle of maximum caliber, the dynamic version of the principle of maximum entropy, and thus may allow us to integrate time-resolved data in molecular dynamics simulations. We provide an analytical proof of the equivalence as well as a computational validation making use of simple models and synthetic data. Some limitations and possible solutions are also discussed.
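
    A minimal sketch of the key ingredient, a replica-averaged restraint whose target changes in time (names and the synthetic target below are assumptions for illustration, not the paper's implementation), is:

      # Replica-averaged, time-dependent harmonic restraint: the penalty acts on
      # the AVERAGE of the observable over replicas, not on each replica separately.
      import numpy as np

      def restraint_energy_and_gradient(observables, target, k):
          avg = observables.mean()
          energy = 0.5 * k * (avg - target) ** 2
          # dE/d(obs_r) = k * (avg - target) / n_replicas, identical for every replica
          grad = np.full_like(observables, k * (avg - target) / observables.size)
          return energy, grad

      if __name__ == "__main__":
          rng = np.random.default_rng(0)
          obs = rng.normal(1.0, 0.1, size=8)              # one observable per replica
          targets = np.linspace(1.0, 2.0, 5)              # synthetic time-resolved data (assumed)
          for step, target in enumerate(targets):
              e, g = restraint_energy_and_gradient(obs, target, k=100.0)
              obs -= 0.01 * g                             # toy relaxation standing in for MD
              print(f"step {step}: restraint energy = {e:.4f}, replica average = {obs.mean():.3f}")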

  4. Implied terms in English and Romanian law

    Directory of Open Access Journals (Sweden)

    Stefan Dinu

    2015-12-01

    Full Text Available This study analyses the matter of implied terms from the point of view of both English and Romanian law. First, the introductory section provides a brief overview of implied terms, by defining this class of contractual clauses and by providing their general features. Second, the English law position is analysed, where it is generally recognised that a term may be implied in one of three manners, which are described in turn. An emphasis is made on the Privy Council’s decision in Attorney General of Belize v Belize Telecom Ltd and its impact. Third, the Romanian law position is described, the starting point of the discussion being represented by the provisions of Article 1272 of the 2009 Civil Code. Fourth, the study ends by mentioning some points of comparison between the two legal systems regarding the approach to implied terms.

  5. Heisenberg's principle of uncertainty and the uncertainty relations

    International Nuclear Information System (INIS)

    Redei, Miklos

    1987-01-01

    The usual verbal form of the Heisenberg uncertainty principle and the usual mathematical formulation (the so-called uncertainty theorem) are not equivalent. The meaning of the concept 'uncertainty' is not unambiguous and different interpretations are used in the literature. Recently, renewed interest has appeared in reinterpreting and reformulating the precise meaning of Heisenberg's principle and in finding an adequate mathematical form. The suggested new theorems are surveyed and critically analyzed. (D.Gy.) 20 refs

  6. The role of general relativity in the uncertainty principle

    International Nuclear Information System (INIS)

    Padmanabhan, T.

    1986-01-01

    The role played by general relativity in quantum mechanics (especially as regards the uncertainty principle) is investigated. It is confirmed that the validity of the time-energy uncertainty relation does depend on gravitational time dilation. It is also shown that there exists an intrinsic lower bound to the accuracy with which acceleration due to gravity can be measured. The notion of the equivalence principle in quantum mechanics is clarified. (author)

  7. Double meanings will not save the principle of double effect.

    Science.gov (United States)

    Douglas, Charles D; Kerridge, Ian H; Ankeny, Rachel A

    2014-06-01

    In an article somewhat ironically entitled "Disambiguating Clinical Intentions," Lynn Jansen promotes an idea that should be bewildering to anyone familiar with the literature on the intention/foresight distinction. According to Jansen, "intention" has two commonsense meanings, one of which is equivalent to "foresight." Consequently, questions about intention are "infected" with ambiguity-people cannot tell what they mean and do not know how to answer them. This hypothesis is unsupported by evidence, but Jansen states it as if it were accepted fact. In this reply, we make explicit the multiple misrepresentations she has employed to make her hypothesis seem plausible. We also point out the ways in which it defies common sense. In particular, Jansen applies her thesis only to recent empirical research on the intentions of doctors, totally ignoring the widespread confusion that her assertion would imply in everyday life, in law, and indeed in religious and philosophical writings concerning the intention/foresight distinction and the Principle of Double Effect. © The Author 2014. Published by Oxford University Press, on behalf of the Journal of Medicine and Philosophy Inc. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  8. Testing the strong equivalence principle with the triple pulsar PSR J0337+1715

    Science.gov (United States)

    Shao, Lijing

    2016-04-01

    Three conceptually different masses appear in equations of motion for objects under gravity, namely, the inertial mass, mI, the passive gravitational mass, mP, and the active gravitational mass, mA. It is assumed that, for any object, mI=mP=mA in Newtonian gravity and mI=mP in Einsteinian gravity, oblivious to the object's sophisticated internal structure. Empirical examination of the equivalence probes deep into gravity theories. We study the possibility of carrying out new tests based on pulsar timing of the stellar triple system PSR J0337+1715. Various machine-precision three-body simulations are performed, from which the equivalence-violating parameters are extracted with Markov chain Monte Carlo sampling that takes full correlations into account. We show that the difference in masses could be probed to 3 × 10⁻⁸, improving the current constraints from lunar laser ranging on the post-Newtonian parameters that govern violations of mP=mI and mA=mP by thousands and millions, respectively. The test of mP=mA would represent the first test of Newton's third law with compact objects.

  9. Principles or imagination? Two approaches to global justice.

    NARCIS (Netherlands)

    Coeckelbergh, Mark

    2007-01-01

    What does it mean to introduce the notion of imagination in the discussion about global justice? What is gained by studying the role of imagination in thinking about global justice? Does a focus on imagination imply that we must replace existing influential principle-centred approaches such as that

  10. Interpretation of Ukrainian and Polish Adverbial Word Equivalents Form and Meaning Interaction in National Explanatory Lexicography

    Directory of Open Access Journals (Sweden)

    Alla Luchyk

    2015-06-01

    Full Text Available The article proves the necessity and possibility of compiling dictionaries of glossary units with an intermediate existence status, to which word equivalents belong. In order to form the glossary of a Ukrainian-Polish dictionary of this type, a form and meaning analysis of Ukrainian and Polish word equivalents is carried out, the common and distinctive features of these language system elements are described, and the compiling principles of such a dictionary are clarified.

  11. A study of principle and testing of piezoelectric transformer

    International Nuclear Information System (INIS)

    Liu Weiyue; Wang Yanfang; Huang Yihua; Shi Jun

    2002-01-01

    The operating principle and structure of a kind of piezoelectric transformer which can be used in a particle accelerator are investigated. The properties of the piezoelectric transformer are tested through an equivalent circuit combined with experiment.

  12. On organizing principles of discrete differential geometry. Geometry of spheres

    International Nuclear Information System (INIS)

    Bobenko, Alexander I; Suris, Yury B

    2007-01-01

    Discrete differential geometry aims to develop discrete equivalents of the geometric notions and methods of classical differential geometry. This survey contains a discussion of the following two fundamental discretization principles: the transformation group principle (smooth geometric objects and their discretizations are invariant with respect to the same transformation group) and the consistency principle (discretizations of smooth parametrized geometries can be extended to multidimensional consistent nets). The main concrete geometric problem treated here is discretization of curvature-line parametrized surfaces in Lie geometry. Systematic use of the discretization principles leads to a discretization of curvature-line parametrization which unifies circular and conical nets.

  13. Electrostatic Positioning System for a free fall test at drop tower Bremen and an overview of tests for the Weak Equivalence Principle in past, present and future

    Science.gov (United States)

    Sondag, Andrea; Dittus, Hansjörg

    2016-08-01

    The Weak Equivalence Principle (WEP) is at the basis of General Relativity, the best theory of gravitation today. It has been and still is tested with different methods and accuracies. In this paper an overview of tests of the Weak Equivalence Principle done in the past, developed in the present and planned for the future is given. The best result up to now is derived from the data of torsion balance experiments by Schlamminger et al. (2008). An intuitive test of the WEP consists of the comparison of the accelerations of two free falling test masses of different composition. This has been carried out by Kuroda & Mio (1989, 1990) with the most precise result to date for this setup. There is still more potential in this method, especially with a longer free fall time and sensors with a higher resolution. Providing a free fall time of 4.74 s (9.3 s using the catapult), the drop tower of the Center of Applied Space Technology and Microgravity (ZARM) at the University of Bremen is a perfect facility for further improvements. In 2001 a free fall experiment with highly sensitive SQUID (Superconducting QUantum Interference Device) sensors tested the WEP with an accuracy of 10⁻⁷ (Nietzsche, 2001). For optimal conditions one could reach an accuracy of 10⁻¹³ with this setup (Vodel et al., 2001). A description of this experiment and its results is given in the next part of this paper. For the free fall of macroscopic test masses it is important to start with precisely defined starting conditions concerning the positions and velocities of the test masses. An Electrostatic Positioning System (EPS) has been developed for this purpose. It is described in the last part of this paper.

  14. Maximum principle and convergence of central schemes based on slope limiters

    KAUST Repository

    Mehmetoglu, Orhan; Popov, Bojan

    2012-01-01

    A maximum principle and convergence of second order central schemes is proven for scalar conservation laws in dimension one. It is well known that to establish a maximum principle a nonlinear piecewise linear reconstruction is needed and a typical choice is the minmod limiter. Unfortunately, this implies that the scheme uses a first order reconstruction at local extrema. The novelty here is that we allow local nonlinear reconstructions which do not reduce to first order at local extrema and still prove maximum principle and convergence. © 2011 American Mathematical Society.
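
    A minimal sketch of the minmod-limited piecewise-linear reconstruction mentioned above (generic textbook form, not the paper's relaxed reconstruction) makes the behaviour at extrema explicit:

      # Minmod limiter: the reconstructed slope vanishes wherever the one-sided
      # differences disagree in sign, i.e. at local extrema the scheme drops to
      # first order -- exactly the restriction the paper's analysis relaxes.
      import numpy as np

      def minmod(a, b):
          s = 0.5 * (np.sign(a) + np.sign(b))      # zero when the signs differ
          return s * np.minimum(np.abs(a), np.abs(b))

      def limited_slopes(u):
          """Limited slopes for the interior cells of a 1-D array of cell averages."""
          return minmod(u[1:-1] - u[:-2], u[2:] - u[1:-1])

      if __name__ == "__main__":
          u = np.array([0.0, 1.0, 3.0, 2.0, 2.5, 4.0])
          print(limited_slopes(u))   # slopes are 0 at the local max u[2] and local min u[3]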

  15. Variational principle for the Bloch unified reaction theory

    International Nuclear Information System (INIS)

    MacDonald, W.; Rapheal, R.

    1975-01-01

    The unified reaction theory formulated by Claude Bloch uses a boundary value operator to write the Schroedinger equation for a scattering state as an inhomogeneous equation over the interaction region. As suggested by Lane and Robson, this equation can be solved by using a matrix representation on any set which is complete over the interaction volume. Lane and Robson have proposed, however, that a variational form of the Bloch equation can be used to obtain a ''best'' value for the S-matrix when a finite subset of this basis is used. The variational principle suggested by Lane and Robson, which gives a many-channel S-matrix different from the matrix solution on a finite basis, is considered first, and it is shown that the difference results from the fact that their variational principle is not, in fact, equivalent to the Bloch equation. Then a variational principle is presented which is fully equivalent to the Bloch form of the Schroedinger equation, and it is shown that the resulting S-matrix is the same as that obtained from the matrix solution of this equation. (U.S.)

  16. Variational principles for locally variational forms

    International Nuclear Information System (INIS)

    Brajercik, J.; Krupka, D.

    2005-01-01

    We present the theory of higher order local variational principles in fibered manifolds, in which the fundamental global concept is a locally variational dynamical form. Any two Lepage forms, defining a local variational principle for this form, differ on intersection of their domains, by a variationally trivial form. In this sense, but in a different geometric setting, the local variational principles satisfy analogous properties as the variational functionals of the Chern-Simons type. The resulting theory of extremals and symmetries extends the first order theories of the Lagrange-Souriau form, presented by Grigore and Popp, and closed equivalents of the first order Euler-Lagrange forms of Hakova and Krupkova. Conceptually, our approach differs from Prieto, who uses the Poincare-Cartan forms, which do not have higher order global analogues

  17. The equivalent energy method: an engineering approach to fracture

    International Nuclear Information System (INIS)

    Witt, F.J.

    1981-01-01

    The equivalent energy method for elastic-plastic fracture evaluations was developed around 1970 for determining realistic engineering estimates of the maximum load-displacement or stress-strain conditions for fracture of flawed structures. The basic principles were summarized, but the supporting experimental data, most of which were obtained after the method was proposed, have never been collated. This paper restates the original bases more explicitly and presents the validating data in graphical form. Extensive references are given. The volumetric energy ratio, a modelling parameter encompassing both size and temperature, is the fundamental parameter of the equivalent energy method. It is demonstrated that, in an engineering sense, the volumetric energy ratio is a unique material characteristic for a steel, much like a material property except that size must be taken into account. With this as a proposition, the basic formula of the equivalent energy method is derived. Sufficient information is presented so that investigators and analysts may judge the viability and applicability of the method to their areas of interest. (author)

  18. Designing sustainable concrete on the basis of equivalence performance: assessment criteria for safety

    NARCIS (Netherlands)

    Visser, J.H.M.; Bigaj, A.J.

    2014-01-01

    In order not to hamper innovation, the Dutch National Building Regulations (NBR) allow an alternative approval route for new building materials. It is based on the principle of equivalent performance, which states that if the solution proposed can be proven to have the same level of safety,

  19. THE CONSTITUTIONAL PRINCIPLE OF EQUALITY - LEGAL SIGNIFICANCE AND SOCIAL IMPLICATIONS -

    Directory of Open Access Journals (Sweden)

    Marius ANDREESCU

    2017-12-01

    Full Text Available The equality in human rights and obligations and the equality of citizens before the law are fundamental categories of theories of social democracy, as well as conditions of the lawful state, without which constitutional democracy cannot be conceived. In the Romanian Constitution, this principle is consecrated in the form of the equality of citizens before the law and public authorities. Particular aspects of this principle are also consecrated in the Constitution. The constitutional principle of equality requires that equal treatment be applied to equal situations. This social and legal reality implies numerous interferences between the principle of equality and other constitutional principles. In this study, by using theoretical and jurisprudential arguments, we intend to demonstrate that, in relation to contemporary social reality, equality, as a constitutional principle, is a particular aspect of the principle of proportionality. The latter expresses in essence the ideas of fairness, justice, reasonableness and the fair appropriateness of state decisions to the facts and legitimate aims pursued.

  20. Entropy-based implied volatility and its information content

    NARCIS (Netherlands)

    X. Xiao (Xiao); C. Zhou (Chen)

    2016-01-01

    This paper investigates the maximum entropy approach to estimating implied volatility. The entropy approach also allows one to measure option-implied skewness and kurtosis nonparametrically, and to construct confidence intervals. Simulations show that the entropy approach outperforms
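
    One classical way to make the idea concrete (a sketch in the spirit of maximum-entropy density estimation from option prices; the paper's own estimator may differ in its constraints and implementation) is to pick the risk-neutral density q that maximizes entropy subject to the observed call prices:

      \[
      \max_{q}\; -\!\int q(s)\ln q(s)\,\mathrm{d}s
      \quad\text{s.t.}\quad
      \int q(s)\,\mathrm{d}s = 1, \qquad
      e^{-rT}\!\int (s-K_j)^{+}\, q(s)\,\mathrm{d}s = C_j,\quad j=1,\dots,n,
      \]
      \[
      \Longrightarrow\quad
      q^{*}(s) \;\propto\; \exp\!\Big(\sum_{j=1}^{n}\lambda_j\,(s-K_j)^{+}\Big),
      \]

    a piecewise-exponential density whose multipliers λ_j are fitted to the quoted prices; implied volatility, skewness and kurtosis can then be read off from q*.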

  1. Uniform bounds for Black--Scholes implied volatility

    OpenAIRE

    Tehranchi, Michael R.

    2015-01-01

    In this note, Black--Scholes implied volatility is expressed in terms of various optimisation problems. From these representations, upper and lower bounds are derived which hold uniformly across moneyness and call price. Various symmetries of the Black--Scholes formula are exploited to derive new bounds from old. These bounds are used to reprove asymptotic formulae for implied volatility at extreme strikes and/or maturities.
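
    For concreteness, the quantity whose bounds are studied above can be computed with a short, self-contained solver (a plain bisection sketch, not the note's analytic approach):

      # Black-Scholes implied volatility by bisection; the call price is strictly
      # increasing in sigma, so bisection converges to the unique root.
      from math import erf, exp, log, sqrt

      def norm_cdf(x):
          return 0.5 * (1.0 + erf(x / sqrt(2.0)))

      def bs_call(S, K, T, r, sigma):
          if sigma <= 0.0:
              return max(S - K * exp(-r * T), 0.0)
          d1 = (log(S / K) + (r + 0.5 * sigma * sigma) * T) / (sigma * sqrt(T))
          d2 = d1 - sigma * sqrt(T)
          return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

      def implied_vol(price, S, K, T, r, lo=1e-9, hi=5.0, tol=1e-10):
          for _ in range(200):
              mid = 0.5 * (lo + hi)
              if bs_call(S, K, T, r, mid) > price:
                  hi = mid
              else:
                  lo = mid
              if hi - lo < tol:
                  break
          return 0.5 * (lo + hi)

      if __name__ == "__main__":
          price = bs_call(100.0, 110.0, 0.5, 0.01, 0.25)
          print(round(implied_vol(price, 100.0, 110.0, 0.5, 0.01), 6))   # ~0.25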

  2. Politico-economic equivalence

    DEFF Research Database (Denmark)

    Gonzalez Eiras, Martin; Niepelt, Dirk

    2015-01-01

    Traditional "economic equivalence'' results, like the Ricardian equivalence proposition, define equivalence classes over exogenous policies. We derive "politico-economic equivalence" conditions that apply in environments where policy is endogenous and chosen sequentially. A policy regime and a st......Traditional "economic equivalence'' results, like the Ricardian equivalence proposition, define equivalence classes over exogenous policies. We derive "politico-economic equivalence" conditions that apply in environments where policy is endogenous and chosen sequentially. A policy regime...... their use in the context of several applications, relating to social security reform, tax-smoothing policies and measures to correct externalities....

  3. Development of a superconducting position sensor for the Satellite Test of the Equivalence Principle

    Science.gov (United States)

    Clavier, Odile Helene

    The Satellite Test of the Equivalence Principle (STEP) is a joint NASA/ESA mission that proposes to measure the differential acceleration of two cylindrical test masses orbiting the earth in a drag-free satellite to a precision of 10-18 g. Such an experiment would conceptually reproduce Galileo's tower of Pisa experiment with a much longer time of fall and greatly reduced disturbances. The superconducting test masses are constrained in all degrees of freedom except their axial direction (the sensitive axis) using superconducting bearings. The STEP accelerometer measures the differential position of the masses in their sensitive direction using superconducting inductive pickup coils coupled to an extremely sensitive magnetometer called a DC-SQUID (Superconducting Quantum Interference Device). Position sensor development involves the design, manufacture and calibration of pickup coils that will meet the acceleration sensitivity requirement. Acceleration sensitivity depends on both the displacement sensitivity and stiffness of the position sensor. The stiffness must kept small while maintaining stability of the accelerometer. Using a model for the inductance of the pickup coils versus displacement of the test masses, a computer simulation calculates the sensitivity and stiffness of the accelerometer in its axial direction. This simulation produced a design of pickup coils for the four STEP accelerometers. Manufacture of the pickup coils involves standard photolithography techniques modified for superconducting thin-films. A single-turn pickup coil was manufactured and produced a successful superconducting coil using thin-film Niobium. A low-temperature apparatus was developed with a precision position sensor to measure the displacement of a superconducting plate (acting as a mock test mass) facing the coil. The position sensor was designed to detect five degrees of freedom so that coupling could be taken into account when measuring the translation of the plate

  4. Framing Climate Goals in Terms of Cumulative CO2-Forcing-Equivalent Emissions

    Science.gov (United States)

    Jenkins, S.; Millar, R. J.; Leach, N.; Allen, M. R.

    2018-03-01

    The relationship between cumulative CO2 emissions and CO2-induced warming is determined by the Transient Climate Response to Emissions (TCRE), but total anthropogenic warming also depends on non-CO2 forcing, complicating the interpretation of emissions budgets based on CO2 alone. An alternative is to frame emissions budgets in terms of CO2-forcing-equivalent (CO2-fe) emissions—the CO2 emissions that would yield a given total anthropogenic radiative forcing pathway. Unlike conventional "CO2-equivalent" emissions, these are directly related to warming by the TCRE and need to fall to zero to stabilize warming: hence, CO2-fe emissions generalize the concept of a cumulative carbon budget to multigas scenarios. Cumulative CO2-fe emissions from 1870 to 2015 inclusive are found to be 2,900 ± 600 GtCO2-fe, increasing at a rate of 67 ± 9.5 GtCO2-fe/yr. A TCRE range of 0.8-2.5°C per 1,000 GtC implies a total budget for 0.6°C of additional warming above the present decade of 880-2,750 GtCO2-fe, with 1,290 GtCO2-fe implied by the Coupled Model Intercomparison Project Phase 5 median response, corresponding to 19 years' CO2-fe emissions at the current rate.
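
    The quoted budget range follows from a short back-of-the-envelope calculation (assuming, as in the framing above, that the TCRE stated per 1,000 GtC can be applied directly to cumulative CO2-fe emissions):

      # Consistency check of the 880-2,750 GtCO2-fe budget for 0.6 degC of warming.
      GTCO2_PER_1000GTC = 1000.0 * 44.01 / 12.011      # ~3664 GtCO2 per 1000 GtC

      def budget_gtco2fe(warming_c, tcre_c_per_1000gtc):
          """Cumulative CO2-fe budget consistent with a given additional warming."""
          return warming_c / tcre_c_per_1000gtc * GTCO2_PER_1000GTC

      if __name__ == "__main__":
          for tcre in (2.5, 0.8):                      # upper and lower end of the quoted TCRE range
              print(tcre, round(budget_gtco2fe(0.6, tcre)))
          # -> roughly 880 and 2750 GtCO2-fe, bracketing the range in the abstract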

  5. What is correct: equivalent dose or dose equivalent

    International Nuclear Information System (INIS)

    Franic, Z.

    1994-01-01

    In the Croatian language, some physical quantities in radiation protection dosimetry do not have precise names. Consequently, in practice either English terms or mathematical formulas are used. The situation is made worse by the fact that only a limited number of textbooks, reference books and other papers are available in Croatian. This paper compares the concept of 'dose equivalent' as outlined in International Commission on Radiological Protection (ICRP) Recommendations No. 26 with the newer, conceptually different concept of 'equivalent dose' introduced in ICRP 60. It was found that Croatian terminology is neither uniform nor precise. Under the influence of the Russian and Serbian languages, the term 'equivalent dose' was often used in place of 'dose equivalent', which was not justified even from the point of view of the ICRP 26 recommendations. Unfortunately, even now the legal quantity in Croatia is still the 'dose equivalent' defined as in ICRP 26, but the term used for it is 'equivalent dose'. Therefore, the modified set of quantities introduced in ICRP 60 should be incorporated into Croatian legislation as soon as possible

  6. Uniform Bounds for Black--Scholes Implied Volatility

    OpenAIRE

    Tehranchi, Michael Rummine

    2016-01-01

    In this note, Black--Scholes implied volatility is expressed in terms of various optimization problems. From these representations, upper and lower bounds are derived which hold uniformly across moneyness and call price. Various symmetries of the Black--Scholes formula are exploited to derive new bounds from old. These bounds are used to reprove asymptotic formulas for implied volatility at extreme strikes and/or maturities. the Society for Industrial and Applied Mathematics 10.1137/14095248X

  7. The gravitational exclusion principle and null states in anti-de Sitter space

    International Nuclear Information System (INIS)

    Castro, Alejandra; Maloney, Alexander; Hartman, Thomas

    2011-01-01

    The holographic principle implies a vast reduction in the number of degrees of freedom of quantum gravity. This idea can be made precise in AdS₃, where the stringy or gravitational exclusion principle asserts that certain perturbative excitations are not present in the exact quantum spectrum. We show that this effect is visible directly in the bulk gravity theory: the norm of the offending linearized state is zero or negative. When the norm is negative, the theory is signalling its own breakdown as an effective field theory; this provides a perturbative bulk explanation for the stringy exclusion principle. When the norm vanishes the bulk state is null rather than physical. This implies that certain non-trivial diffeomorphisms must be regarded as gauge symmetries rather than spectrum-generating elements of the asymptotic symmetry group. This leads to subtle effects in the computation of one-loop determinants for Einstein gravity, higher spin theories and topologically massive gravity in AdS₃. In particular, heat kernel methods do not capture the correct spectrum of a theory with null states. Communicated by S Ross

  8. The energy price equivalence of carbon taxes and emissions trading—Theory and evidence

    International Nuclear Information System (INIS)

    Chiu, Fan-Ping; Kuo, Hsiao-I.; Chen, Chi-Chung; Hsu, Chia-Sheng

    2015-01-01

    Highlights: • Theoretical and empirical models of the price equivalence of carbon taxes and emissions trading are developed. • The theoretical findings show that the price effects of these two schemes depend on the market structure. • Energy prices under a carbon tax are lower than under emissions trading in an imperfectly competitive market. • A case study of the Taiwan gasoline market is presented. - Abstract: The main purpose of this study is to estimate the energy price equivalence of carbon taxes and emissions trading in an energy market. To this end, both the carbon tax and emissions trading systems are designed in the theoretical model, while alternative market structures are taken into consideration. The theoretical findings show that the economic effects of these two schemes on energy prices depend on the market structure. Energy prices are equivalent between these two schemes given the same amount of greenhouse gas emissions (GHGE) reduction when the market structure is characterized by perfect competition. However, energy prices will be lower when a carbon tax is introduced than when emissions trading is implemented in an imperfectly competitive market, which implies that the price effects of a carbon tax and emissions trading depend on the energy market structure. This theoretical basis is applied to the market for gasoline in Taiwan. The empirical results indicate that gasoline prices under a carbon tax are lower than under emissions trading. This implies that the structure of the energy market needs to be examined when a country seeks to reduce its GHGE through the implementation of either a carbon tax or emissions trading.

  9. Option-implied term structures

    OpenAIRE

    Vogt, Erik

    2014-01-01

    The illiquidity of long-maturity options has made it difficult to study the term structures of option spanning portfolios. This paper proposes a new estimation and inference framework for these option-implied term structures that addresses long-maturity illiquidity. By building a sieve estimator around the risk-neutral valuation equation, the framework theoretically justifies (fat-tailed) extrapolations beyond truncated strikes and between observed maturities while remaining nonparametric. Ne...

  10. A Brief Talk of Functional Equivalence Used in Chinese Translation of English Lyrics

    Institute of Scientific and Technical Information of China (English)

    吴晨

    2015-01-01

    With the development of cultural exchanges between China and foreign countries, a great number of English songs, serving as an important part of cultural exchange, have become part of Chinese people's daily life. However, barriers are always encountered in translating those lyrics into Chinese. Thus, how to realize functional equivalence between a Chinese translation and the English song lyrics is a tough problem that cannot be neglected. Looking at Chinese translations of English song lyrics, this study applies functional equivalence to solve such problems; it covers the principles for producing functional equivalence and adjustment. The key to realizing functional equivalence in the Chinese translation of English song lyrics is to balance rhythm and tones with the style of the English songs and to minimize the loss of meaning in the Chinese translation, so that Chinese music lovers can understand the meaning of the English song lyrics.

  11. Thermal versus high pressure processing of carrots: A comparative pilot-scale study on equivalent basis

    NARCIS (Netherlands)

    Vervoort, L.; Plancken, Van der L.; Grauwet, T.; Verlinde, P.; Matser, A.M.; Hendrickx, M.; Loey, van A.

    2012-01-01

    This report describes the first study comparing different high pressure (HP) and thermal treatments at intensities ranging from mild pasteurization to sterilization conditions. To allow a fair comparison, the processing conditions were selected based on the principles of equivalence. Moreover,

  12. Framework for assessing causality in disease management programs: principles.

    Science.gov (United States)

    Wilson, Thomas; MacDowell, Martin

    2003-01-01

    To credibly state that a disease management (DM) program "caused" a specific outcome it is required that metrics observed in the DM population be compared with metrics that would have been expected in the absence of a DM intervention. That requirement can be very difficult to achieve, and epidemiologists and others have developed guiding principles of causality by which credible estimates of DM impact can be made. This paper introduces those key principles. First, DM program metrics must be compared with metrics from a "reference population." This population should be "equivalent" to the DM intervention population on all factors that could independently impact the outcome. In addition, the metrics used in both groups should use the same defining criteria (ie, they must be "comparable" to each other). The degree to which these populations fulfill the "equivalent" assumption and metrics fulfill the "comparability" assumption should be stated. Second, when "equivalence" or "comparability" is not achieved, the DM managers should acknowledge this fact and, where possible, "control" for those factors that may impact the outcome(s). Finally, it is highly unlikely that one study will provide definitive proof of any specific DM program value for all time; thus, we strongly recommend that studies be ongoing, at multiple points in time, and at multiple sites, and, when observational study designs are employed, that more than one type of study design be utilized. Methodologically sophisticated studies that follow these "principles of causality" will greatly enhance the reputation of the important and growing efforts in DM.

  13. Matter tensor from the Hilbert variational principle

    International Nuclear Information System (INIS)

    Pandres, D. Jr.

    1976-01-01

    We consider the Hilbert variational principle which is conventionally used to derive Einstein's equations for the source-free gravitational field. We show that at least one version of the equivalence principle suggests an alternative way of performing the variation, resulting in a different set of Einstein equations with sources automatically present. This illustrates a technique which may be applied to any theory that is derived from a variational principle and that admits a gauge group. The essential point is that, if one first imposes a gauge condition and then performs the variation, one obtains field equations with source terms which do not appear if one first performs the variation and then imposes the gauge condition. A second illustration is provided by the variational principle conventionally used to derive Maxwell's equations for the source-free electromagnetic field. If one first imposes the Lorentz gauge condition and then performs the variation, one obtains Maxwell's equations with sources present

  14. Equivalent Modeling of DFIG-Based Wind Power Plant Considering Crowbar Protection

    Directory of Open Access Journals (Sweden)

    Qianlong Zhu

    2016-01-01

    Crowbar conduction has an impact on the transient characteristics of a doubly fed induction generator (DFIG) in the short-circuit fault condition. But crowbar protection is seldom considered in the aggregation method for equivalent modeling of DFIG-based wind power plants (WPPs). In this paper, the relationship between the growth of postfault rotor current and the amplitude of the terminal voltage dip is studied by analyzing the rotor current characteristics of a DFIG during the fault process. Then, a terminal voltage dip criterion which can identify crowbar conduction is proposed. Considering the different grid connection structures for single DFIG and WPP, the criterion is revised and the crowbar conduction is judged depending on the revised criterion. Furthermore, an aggregation model of the WPP is established based on the division principle of crowbar conduction. Finally, the proposed equivalent WPP is simulated on a DIgSILENT PowerFactory platform and the results are compared with those of the traditional equivalent WPPs and the detailed WPP. The simulation results show the effectiveness of the method for equivalent modeling of DFIG-based WPP when crowbar protection is also taken into account.

  15. Money market rates and implied CCAPM rates: some international evidence

    OpenAIRE

    Yamin Ahmad

    2004-01-01

    New Neoclassical Synthesis models equate the instrument of monetary policy to the implied CCAPM rate arising from an Euler equation. This paper identifies monetary policy shocks within six of the G7 countries and examines the movement of money market and implied CCAPM rates. The key result is that an increase in the nominal interest rate leads to a fall in the implied CCAPM rate. Incorporating habit still yields the same result. The findings suggest that the movement of these two rates implie...

  16. Implied Terms: The Foundation in Good Faith and Fair Dealing

    OpenAIRE

    2014-01-01

    With the aim of clarifying English law of implied terms in contracts and explaining their basis in the idea of good faith in performance, it is argued first that two, but no more, types of implied terms can be distinguished (terms implied in fact and terms implied by law), though it is explained why these types are frequently confused. Second, the technique of implication of terms is distinguished in most instances from the task of interpretation of contracts. Third, it is a...

  17. Gravitational leptogenesis, C, CP and strong equivalence

    International Nuclear Information System (INIS)

    McDonald, Jamie I.; Shore, Graham M.

    2015-01-01

    The origin of matter-antimatter asymmetry is one of the most important outstanding problems at the interface of particle physics and cosmology. Gravitational leptogenesis (baryogenesis) provides a possible mechanism through explicit couplings of spacetime curvature to appropriate lepton (or baryon) currents. In this paper, the idea that these strong equivalence principle violating interactions could be generated automatically through quantum loop effects in curved spacetime is explored, focusing on the realisation of the discrete symmetries C, CP and CPT which must be broken to induce matter-antimatter asymmetry. The related issue of quantum corrections to the dispersion relation for neutrino propagation in curved spacetime is considered within a fully covariant framework.

  18. Gravitational leptogenesis, C, CP and strong equivalence

    Energy Technology Data Exchange (ETDEWEB)

    McDonald, Jamie I.; Shore, Graham M. [Department of Physics, Swansea University,Swansea, SA2 8PP (United Kingdom)

    2015-02-12

    The origin of matter-antimatter asymmetry is one of the most important outstanding problems at the interface of particle physics and cosmology. Gravitational leptogenesis (baryogenesis) provides a possible mechanism through explicit couplings of spacetime curvature to appropriate lepton (or baryon) currents. In this paper, the idea that these strong equivalence principle violating interactions could be generated automatically through quantum loop effects in curved spacetime is explored, focusing on the realisation of the discrete symmetries C, CP and CPT which must be broken to induce matter-antimatter asymmetry. The related issue of quantum corrections to the dispersion relation for neutrino propagation in curved spacetime is considered within a fully covariant framework.

  19. Reciprocity principle in duct acoustics

    Science.gov (United States)

    Cho, Y.-C.

    1979-01-01

    Various reciprocity relations in duct acoustics have been derived on the basis of the spatial reciprocity principle implied in Green's functions for linear waves. The derivation includes the reciprocity relations between mode conversion coefficients for reflection and transmission in nonuniform ducts, and the relation between the radiation of a mode from an arbitrarily terminated duct and the absorption of an externally incident plane wave by the duct. Such relations are well defined as long as the systems remain linear, regardless of acoustic properties of duct nonuniformities which cause the mode conversions.
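
    The "spatial reciprocity principle implied in Green's functions" referred to above can be stated compactly; as a sketch in generic notation (not taken from the paper), for a linear acoustic system with reciprocal boundary conditions the frequency-domain Green's function is symmetric under exchange of source and receiver positions:

      $$ G(\mathbf{x}_A, \mathbf{x}_B; \omega) \;=\; G(\mathbf{x}_B, \mathbf{x}_A; \omega) $$

    A point source at x_B therefore produces the same field at x_A as an identical source at x_A produces at x_B, which is the property the mode-conversion and radiation-absorption relations in the abstract exploit.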

  20. The Didactic Principles and Their Applications in the Didactic Activity

    Science.gov (United States)

    Marius-Costel, Esi

    2010-01-01

    The evaluation and reevaluation of the fundamental didactic principles suppose the acceptance at the level of an instructive-educative activity of a new educational paradigm. Thus, its understanding implies an assumption at a conceptual-theoretical level of some approaches where the didactic aspects find their usefulness by relating to value…

  1. Data fusion according to the principle of polyrepresentation

    DEFF Research Database (Denmark)

    Larsen, Birger; Ingwersen, Peter; Lund, Berit

    2009-01-01

    logical data fusion combinations compared to the performance of the four individual models and their intermediate fusions when following the principle of polyrepresentation. This principle is based on the cognitive IR perspective (Ingwersen & Järvelin, 2005) and implies that each retrieval model is regarded...... that only the inner disjoint overlap documents between fused models are ranked. The second set of experiments was based on traditional data fusion methods. The experiments involved the 30 TREC 5 topics that contain more than 44 relevant documents. In all tests, the Borda and CombSUM scoring methods were...... the individual models at DCV100. At DCV15, however, the results of polyrepresentative fusion were less predictable. The traditional fusion method based on polyrepresentation principles demonstrates a clear picture of performance at both DCV levels and verifies the polyrepresentation predictions for data fusion...

  2. Modelling of Airship Flight Mechanics by the Projection Equivalent Method

    OpenAIRE

    Frantisek Jelenciak; Michael Gerke; Ulrich Borgolte

    2015-01-01

    This article describes the projection equivalent method (PEM) as a specific and relatively simple approach for the modelling of aircraft dynamics. By the PEM it is possible to obtain a mathematical model of the aerodynamic forces and momentums acting on different kinds of aircraft during flight. A characteristic of the PEM is that - in principle - it provides an acceptable regression model of aerodynamic forces and momentums which exhibits reasonable and plausible behaviour from a...

  3. Principles of Economic Rationality in Mice.

    Science.gov (United States)

    Rivalan, Marion; Winter, York; Nachev, Vladislav

    2017-12-12

    Humans and non-human animals frequently violate principles of economic rationality, such as transitivity, independence of irrelevant alternatives, and regularity. The conditions that lead to these violations are not completely understood. Here we report a study on mice tested in automated home-cage setups using rewards of drinking water. Rewards differed in one of two dimensions, volume or probability. Our results suggest that mouse choice conforms to the principles of economic rationality for options that differ along a single reward dimension. A psychometric analysis of mouse choices further revealed that mice responded more strongly to differences in probability than to differences in volume, despite equivalence in return rates. This study also demonstrates the synergistic effect between the principles of economic rationality and psychophysics in making quantitative predictions about choices of healthy laboratory mice. This opens up new possibilities for the analyses of multi-dimensional choice and the use of mice with cognitive impairments that may violate economic rationality.

  4. Electrodermal responses to implied versus actual violence on television.

    Science.gov (United States)

    Kalamas, A D; Gruber, M L

    1998-01-01

    The electrodermal response (EDR) of children watching a violent show was measured. Particular attention was paid to the type of violence (actual or implied) that prompted an EDR. In addition, the impact of the auditory component (sounds associated with violence) of the show was evaluated. Implied violent stimuli, such as the villain's face, elicited the strongest EDR. The elements that elicited the weakest responses were the actual violent stimuli, such as stabbing. The background noise and voices of the sound track enhanced the total number of EDRs. The results suggest that implied violence may elicit more fear (as measured by EDRs) than actual violence does and that sounds alone contribute significantly to the emotional response to television violence. One should not, therefore, categorically assume that a show with mostly actual violence evokes less fear than one with mostly implied violence.

  5. Statistical analogues of thermodynamic extremum principles

    Science.gov (United States)

    Ramshaw, John D.

    2018-05-01

    As shown by Jaynes, the canonical and grand canonical probability distributions of equilibrium statistical mechanics can be simply derived from the principle of maximum entropy, in which the statistical entropy S = −k_B ∑_i p_i log p_i is maximised subject to constraints on the mean values of the energy E and/or number of particles N in a system of fixed volume V. The Lagrange multipliers associated with those constraints are then found to be simply related to the temperature T and chemical potential μ. Here we show that the constrained maximisation of S is equivalent to, and can therefore be replaced by, the essentially unconstrained minimisation of the obvious statistical analogues of the Helmholtz free energy F = E − TS and the grand potential J = F − μN. Those minimisations are more easily performed than the maximisation of S because they formally eliminate the constraints on the mean values of E and N and their associated Lagrange multipliers. This procedure significantly simplifies the derivation of the canonical and grand canonical probability distributions, and shows that the well known extremum principles for the various thermodynamic potentials possess natural statistical analogues which are equivalent to the constrained maximisation of S.
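
    As a minimal worked version of the equivalence claimed above (generic notation, not the paper's), keeping only the normalization constraint and minimising the statistical Helmholtz free energy reproduces the canonical distribution that constrained entropy maximisation gives:

      $$ F[p] = \sum_i p_i E_i + k_{\mathrm{B}} T \sum_i p_i \ln p_i , \qquad \sum_i p_i = 1 , $$
      $$ \frac{\partial}{\partial p_i}\Big( F[p] - \lambda \sum_j p_j \Big) = E_i + k_{\mathrm{B}} T (\ln p_i + 1) - \lambda = 0 \;\Longrightarrow\; p_i = \frac{e^{-E_i/k_{\mathrm{B}} T}}{Z}, \quad Z = \sum_i e^{-E_i/k_{\mathrm{B}} T} . $$

    The energy constraint and its Lagrange multiplier never appear explicitly; the temperature enters directly through F, which is the simplification described in the abstract.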

  6. Detecting overpressure using the Eaton and Equivalent Depth methods in Offshore Nova Scotia, Canada

    Science.gov (United States)

    Ernanda; Primasty, A. Q. T.; Akbar, K. A.

    2018-03-01

    Overpressure is an abnormally high subsurface pressure of any fluid which exceeds the hydrostatic pressure of a column of water or formation brine. In offshore Nova Scotia, Canada, the values and depth of the overpressure zone are determined using the Eaton and equivalent depth methods, based on well data and normal compaction trend analysis; the equivalent depth method uses the effective vertical stress principle, while the Eaton method considers a physical property ratio (velocity). In this research, the pressure evaluation is only applicable to the Penobscot L-30 well. An abnormal pressure is detected at a depth of 11804 feet as a possible overpressure zone, based on the pressure gradient curve and calculations with the Eaton method (7241.3 psi) and the Equivalent Depth method (6619.4 psi). Shales within the Abenaki Formation, especially the Baccaro Member, are estimated to be a possible overpressure zone due to a hydrocarbon generation mechanism.
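
    For orientation, the two estimators named above can be written in their textbook forms; the sketch below is hypothetical (the input pressures, velocities and the Eaton exponent of 3 are illustrative assumptions, not the Penobscot L-30 data or this study's calibration).

      # Textbook forms of the Eaton and equivalent-depth pore-pressure methods.
      # All numerical inputs are hypothetical, not the Penobscot L-30 well data.

      def eaton_pressure(s_v, p_hydro, v_obs, v_normal, exponent=3.0):
          """Eaton: scale the normal effective stress by an observed/normal velocity ratio."""
          return s_v - (s_v - p_hydro) * (v_obs / v_normal) ** exponent

      def equivalent_depth_pressure(s_v_a, s_v_b, p_hydro_b):
          """Equivalent depth: the effective stress at the depth of interest (A) equals the
          normal effective stress at the shallower depth (B) with the same log response."""
          sigma_eff_b = s_v_b - p_hydro_b   # normal effective stress at depth B
          return s_v_a - sigma_eff_b        # pore pressure at depth A

      # Hypothetical inputs (psi); both estimates exceed the hydrostatic pressure.
      print(eaton_pressure(11000.0, 5300.0, 9500.0, 11000.0))
      print(equivalent_depth_pressure(11000.0, 7600.0, 3700.0))

    Both methods return a pore pressure above hydrostatic when the compaction-sensitive measurement departs from its normal trend, which is the signature reported near 11804 feet in the abstract.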

  7. Evaluating the Quality of Transfer versus Nontransfer Accounting Principles Grades.

    Science.gov (United States)

    Colley, J. R.; And Others

    1996-01-01

    Using 1989-92 student records from three colleges accepting large numbers of transfers from junior schools into accounting, regression analyses compared grades of transfer and nontransfer students. Quality of accounting principle grades of transfer students was not equivalent to that of nontransfer students. (SK)

  8. Equivalence between short-time biphasic and incompressible elastic material responses.

    Science.gov (United States)

    Ateshian, Gerard A; Ellis, Benjamin J; Weiss, Jeffrey A

    2007-06-01

    Porous-permeable tissues have often been modeled using porous media theories such as the biphasic theory. This study examines the equivalence of the short-time biphasic and incompressible elastic responses for arbitrary deformations and constitutive relations from first principles. This equivalence is illustrated in problems of unconfined compression of a disk, and of articular contact under finite deformation, using two different constitutive relations for the solid matrix of cartilage, one of which accounts for the large disparity observed between the tensile and compressive moduli in this tissue. Demonstrating this equivalence under general conditions provides a rationale for using available finite element codes for incompressible elastic materials as a practical substitute for biphasic analyses, so long as only the short-time biphasic response is sought. In practice, an incompressible elastic analysis is representative of a biphasic analysis over the short-term response Δt ≪ Δx²/(CK), where Δx is a characteristic dimension, C is the elasticity tensor, and K is the hydraulic permeability tensor of the solid matrix. Certain notes of caution are provided with regard to implementation issues, particularly when finite element formulations of incompressible elasticity employ an uncoupled strain energy function consisting of additive deviatoric and volumetric components.

  9. Equity-regarding poverty measures: differences in needs and the role of equivalence scales

    OpenAIRE

    Udo Ebert

    2010-01-01

    The paper investigates the definition of equity-regarding poverty measures when there are different household types in the population. It demonstrates the implications of a between-type regressive transfer principle for poverty measures, for the choice of poverty lines, and for the measurement of living standard. The role of equivalence scales, which are popular in empirical work on poverty measurement, is clarified.

  10. The DINA model as a constrained general diagnostic model: Two variants of a model equivalency.

    Science.gov (United States)

    von Davier, Matthias

    2014-02-01

    The 'deterministic-input noisy-AND' (DINA) model is one of the more frequently applied diagnostic classification models for binary observed responses and binary latent variables. The purpose of this paper is to show that the model is equivalent to a special case of a more general compensatory family of diagnostic models. Two equivalencies are presented. Both project the original DINA skill space and design Q-matrix using mappings into a transformed skill space as well as a transformed Q-matrix space. Both variants of the equivalency produce a compensatory model that is mathematically equivalent to the (conjunctive) DINA model. This equivalency holds for all DINA models with any type of Q-matrix, not only for trivial (simple-structure) cases. The two versions of the equivalency presented in this paper are not implied by the recently suggested log-linear cognitive diagnosis model or the generalized DINA approach. The equivalencies presented here exist independent of these recently derived models since they solely require a linear - compensatory - general diagnostic model without any skill interaction terms. Whenever it can be shown that one model can be viewed as a special case of another more general one, conclusions derived from any particular model-based estimates are drawn into question. It is widely known that multidimensional models can often be specified in multiple ways while the model-based probabilities of observed variables stay the same. This paper goes beyond this type of equivalency by showing that a conjunctive diagnostic classification model can be expressed as a constrained special case of a general compensatory diagnostic modelling framework. © 2013 The British Psychological Society.

  11. Variational principles of fluid mechanics and electromagnetism: imposition and neglect of the Lin constraint

    International Nuclear Information System (INIS)

    Allen, R.R. Jr.

    1987-01-01

    The Lin constraint has been utilized by a number of authors who have sought to develop Eulerian variational principles in both fluid mechanics and electromagnetics (or plasmadynamics). This dissertation first reviews the work of earlier authors concerning the development of variational principles in both the Eulerian and Lagrangian nomenclatures. In the process, it is shown whether or not the Euler-Lagrange equations that result from the variational principles are equivalent to the generally accepted equations of motion. In particular, it is shown in the case of several Eulerian variational principles that imposition of the Lin constraint results in Euler-Lagrange equations equivalent to the generally accepted equations of motion, whereas neglect of the Lin constraint results in restrictive Euler-Lagrange equations. In an effort to improve the physical motivation behind the introduction of the Lin constraint, a new variational constraint is developed based on the concept of surface forces within a fluid. Additionally, it is shown that a quantity often referred to as the canonical momentum of a charged fluid is not always a constant of the motion of the fluid; and it is demonstrated that there does not exist an unconstrained Eulerian variational principle giving rise to the generally accepted equations of motion for both a perfect fluid and a cold, electromagnetic fluid.

  12. Automatically extracting functionally equivalent proteins from SwissProt

    Directory of Open Access Journals (Sweden)

    Martin Andrew CR

    2008-10-01

    Background: There is a frequent need to obtain sets of functionally equivalent homologous proteins (FEPs) from different species. While it is usually the case that orthology implies functional equivalence, this is not always true; therefore datasets of orthologous proteins are not appropriate. The information relevant to extracting FEPs is contained in databanks such as UniProtKB/Swiss-Prot, and a manual analysis of these data allows FEPs to be extracted on a one-off basis. However there has been no resource allowing the easy, automatic extraction of groups of FEPs – for example, all instances of protein C. We have developed FOSTA, an automatically generated database of FEPs annotated as having the same function in UniProtKB/Swiss-Prot, which can be used for large-scale analysis. The method builds a candidate list of homologues and filters out functionally diverged proteins on the basis of functional annotations using a simple text mining approach. Results: Large scale evaluation of our FEP extraction method is difficult as there is no gold-standard dataset against which the method can be benchmarked. However, a manual analysis of five protein families confirmed a high level of performance. A more extensive comparison with two manually verified functional equivalence datasets also demonstrated very good performance. Conclusion: In summary, FOSTA provides an automated analysis of annotations in UniProtKB/Swiss-Prot to enable groups of proteins already annotated as functionally equivalent to be extracted. Our results demonstrate that the vast majority of UniProtKB/Swiss-Prot functional annotations are of high quality, and that FOSTA can interpret annotations successfully. Where FOSTA is not successful, we are able to highlight inconsistencies in UniProtKB/Swiss-Prot annotation. Most of these would have presented equal difficulties for manual interpretation of annotations. We discuss limitations and possible future extensions to FOSTA, and
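
    The "simple text mining approach" is, in spirit, a comparison of functional description strings between a query protein and its homologues; the toy filter below is a hypothetical illustration of that idea only (the normalisation rules, the threshold and the accession Q9XXX1 are invented; this is not the FOSTA pipeline).

      # Toy annotation-based filter: keep homologues whose functional description
      # overlaps strongly with the query's. Hypothetical sketch, not FOSTA itself.

      def normalise(description: str) -> set:
          drop = {"putative", "probable", "protein", "fragment"}
          return {w for w in description.lower().replace(",", " ").split() if w not in drop}

      def functionally_equivalent(query_desc: str, hit_desc: str, min_overlap: float = 0.8) -> bool:
          q, h = normalise(query_desc), normalise(hit_desc)
          return bool(q) and len(q & h) / len(q) >= min_overlap

      homologues = {
          "P00742": "Coagulation factor X",
          "P00734": "Prothrombin",
          "Q9XXX1": "Putative coagulation factor X, fragment",  # invented accession
      }
      query = "Coagulation factor X"
      print([acc for acc, desc in homologues.items() if functionally_equivalent(query, desc)])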

  13. Predicting Agency Rating Migrations with Spread Implied Ratings

    OpenAIRE

    Jianming Kou; Dr Simone Varotto

    2005-01-01

    Investors traditionally rely on credit ratings to price debt instruments. However, rating agencies are known to be prudent in their approach to rating revisions, which results in delayed ratings adjustments to mutating credit conditions. For a large set of eurobonds we derive credit spread implied ratings and compare them with the ratings issued by rating agencies. Our results indicate that spread implied ratings often anticipate future movement of agency ratings and hence could help track cr...

  14. Clifford Algebra Implying Three Fermion Generations Revisited

    International Nuclear Information System (INIS)

    Krolikowski, W.

    2002-01-01

    The author's idea of algebraic compositeness of fundamental particles, allowing to understand the existence in Nature of three fermion generations, is revisited. It is based on two postulates. Primo, for all fundamental particles of matter the Dirac square-root procedure √(p²) → Γ^(N)·p works, leading to a sequence N = 1, 2, 3, ... of Dirac-type equations, where four Dirac-type matrices Γ^(N)_μ are embedded into a Clifford algebra via a Jacobi definition introducing four "centre-of-mass" and (N − 1) × four "relative" Dirac-type matrices. These define one "centre-of-mass" and N − 1 "relative" Dirac bispinor indices. Secundo, the "centre-of-mass" Dirac bispinor index is coupled to the Standard Model gauge fields, while N − 1 "relative" Dirac bispinor indices are all free indistinguishable physical objects obeying Fermi statistics along with the Pauli principle which requires the full antisymmetry with respect to "relative" Dirac indices. This allows only for three Dirac-type equations with N = 1, 3, 5 in the case of N odd, and two with N = 2, 4 in the case of N even. The first of these results implies unavoidably the existence of three and only three generations of fundamental fermions, namely leptons and quarks, as labelled by the Standard Model signature. At the end, a comment is added on the possible shape of Dirac 3 × 3 mass matrices for four sorts of spin-1/2 fundamental fermions appearing in three generations. For charged leptons a prediction is m_τ = 1776.80 MeV, when the input of experimental m_e and m_μ is used. (author)

  15. Clifford Algebra Implying Three Fermion Generations Revisited

    Science.gov (United States)

    Krolikowski, Wojciech

    2002-09-01

    The author's idea of algebraic compositeness of fundamental particles, allowing to understand the existence in Nature of three fermion generations, is revisited. It is based on two postulates. Primo, for all fundamental particles of matter the Dirac square-root procedure √(p²) → Γ^(N)·p works, leading to a sequence N = 1, 2, 3, ... of Dirac-type equations, where four Dirac-type matrices Γ^(N)_μ are embedded into a Clifford algebra via a Jacobi definition introducing four "centre-of-mass" and (N − 1) × four "relative" Dirac-type matrices. These define one "centre-of-mass" and (N − 1) "relative" Dirac bispinor indices. Secundo, the "centre-of-mass" Dirac bispinor index is coupled to the Standard Model gauge fields, while (N − 1) "relative" Dirac bispinor indices are all free indistinguishable physical objects obeying Fermi statistics along with the Pauli principle which requires the full antisymmetry with respect to "relative" Dirac indices. This allows only for three Dirac-type equations with N = 1, 3, 5 in the case of N odd, and two with N = 2, 4 in the case of N even. The first of these results implies unavoidably the existence of three and only three generations of fundamental fermions, namely leptons and quarks, as labelled by the Standard Model signature. At the end, a comment is added on the possible shape of Dirac 3 × 3 mass matrices for four sorts of spin-1/2 fundamental fermions appearing in three generations. For charged leptons a prediction is m_τ = 1776.80 MeV, when the input of experimental m_e and m_μ is used.

  16. Estimating implied rates of discount in healthcare decision-making.

    Science.gov (United States)

    West, R R; McNabb, R; Thompson, A G H; Sheldon, T A; Grimley Evans, J

    2003-01-01

    To consider whether implied rates of discounting from the perspectives of individual and society differ, and whether implied rates of discounting in health differ from those implied in choices involving finance or "goods". The study comprised first a review of economics, health economics and social science literature and then an empirical estimate of implied rates of discounting in four fields: personal financial, personal health, public financial and public health, in representative samples of the public and of healthcare professionals. Samples were drawn in the former county and health authority district of South Glamorgan, Wales. The public sample was a representative random sample of men and women, aged over 18 years and drawn from electoral registers. The health professional sample was drawn at random with the cooperation of professional leads to include doctors, nurses, professions allied to medicine, public health, planners and administrators. The literature review revealed few empirical studies in representative samples of the population, few direct comparisons of public with private decision-making and few direct comparisons of health with financial discounting. Implied rates of discounting varied widely and studies suggested that discount rates are higher the smaller the value of the outcome and the shorter the period considered. The relationship between implied discount rates and personal attributes was mixed, possibly reflecting the limited nature of the samples. Although there were few direct comparisons, some studies found that individuals apply different rates of discount to social compared with private comparisons and health compared with financial. The present study also found a wide range of implied discount rates, with little systematic effect of age, gender, educational level or long-term illness. There was evidence, in both samples, that people chose a lower rate of discount in comparisons made on behalf of society than in comparisons made for

  17. Implied liquidity : towards stochastic liquidity modeling and liquidity trading

    NARCIS (Netherlands)

    Corcuera, J.M.; Guillaume, F.M.Y.; Madan, D.B.; Schoutens, W.

    2010-01-01

    In this paper we introduce the concept of implied (il)liquidity of vanilla options. Implied liquidity is based on the fundamental theory of conic finance, in which the one-price model is abandoned and replaced by a two-price model giving bid and ask prices for traded assets. The pricing is done by

  18. Kinetic energy principle and neoclassical toroidal torque in tokamaks

    International Nuclear Information System (INIS)

    Park, Jong-Kyu

    2011-01-01

    It is shown that when tokamaks are perturbed, the kinetic energy principle is closely related to the neoclassical toroidal torque by the action invariance of particles. Especially when tokamaks are perturbed from scalar pressure equilibria, the imaginary part of the potential energy in the kinetic energy principle is equivalent to the toroidal torque by the neoclassical toroidal viscosity. A unified description therefore should be made for both physics. It is also shown in this case that the potential energy operator can be self-adjoint and thus the stability calculation can be simplified by minimizing the potential energy.

  19. Effectance, committed effective dose equivalent and annual limits on intake: what are the changes?

    International Nuclear Information System (INIS)

    Kendall, G.M.; Stather, J.W.; Phipps, A.W.

    1990-01-01

    This paper outlines the concept of effectance, compares committed effectance with the old committed effective dose equivalent and goes on to discuss changes in the annual limits on intakes and the maximum organ doses which would result from an intake of an ALI (Annual Limit on Intake). It is shown that committed effectance is usually, but not always, higher than committed effective dose equivalent. ALIs are usually well below those resulting from the ICRP Publication 30 scheme. However, if the ALI were based only on a limit on effectance it would imply a high dose to specific organs for certain nuclides. In order to control maximum organ doses an explicit limit could be introduced. However, this would destroy some of the attractive features of the new scheme. An alternative would be a slight modification to some of the weighting factors. (author)

  20. Asymptotic formulae for implied volatility in the Heston model

    OpenAIRE

    Forde, Martin; Jacquier, Antoine; Mijatovic, Aleksandar

    2009-01-01

    In this paper we prove an approximate formula expressed in terms of elementary functions for the implied volatility in the Heston model. The formula consists of the constant and first order terms in the large maturity expansion of the implied volatility function. The proof is based on saddlepoint methods and classical properties of holomorphic functions.
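
    The saddlepoint expansion itself is beyond a short example, but the object being approximated is easy to state: the implied volatility is the number that, substituted into the Black-Scholes formula, reproduces an observed option price. A minimal bisection-based inversion is sketched below under plain Black-Scholes assumptions; the market data are arbitrary illustrations, and none of this reproduces the Heston formula of the paper.

      # Back out a Black-Scholes implied volatility from a call price by bisection.
      from math import exp, log, sqrt
      from statistics import NormalDist

      N = NormalDist().cdf  # standard normal CDF

      def bs_call(spot, strike, rate, maturity, vol):
          """Black-Scholes price of a European call option."""
          d1 = (log(spot / strike) + (rate + 0.5 * vol ** 2) * maturity) / (vol * sqrt(maturity))
          d2 = d1 - vol * sqrt(maturity)
          return spot * N(d1) - strike * exp(-rate * maturity) * N(d2)

      def implied_vol(price, spot, strike, rate, maturity, lo=1e-6, hi=5.0, tol=1e-8):
          """Bisection works because the call price is strictly increasing in volatility."""
          for _ in range(200):
              mid = 0.5 * (lo + hi)
              if bs_call(spot, strike, rate, maturity, mid) > price:
                  hi = mid
              else:
                  lo = mid
              if hi - lo < tol:
                  break
          return 0.5 * (lo + hi)

      # Illustrative inputs: observed price, spot, strike, rate, maturity.
      print(round(implied_vol(10.5, 100.0, 105.0, 0.01, 1.0), 4))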

  1. A variational principle for the plasma centrifuge

    International Nuclear Information System (INIS)

    Ludwig, G.O.

    1986-09-01

    A variational principle is derived which describes the stationary state of the plasma column in a plasma centrifuge. Starting with the fluid equations in a rotating frame the theory is developed using the method of irreversible thermodynamics. This formulation easily leads to an expression for the density distribution of the l-species at sedimentation equilibrium, taking into account the effect of the electric and magnetic forces. Assuming stationary boundary conditions and rigid rotation nonequilibrium states the condition for thermodynamic stability integrated over the volume of the system reduces, under certain restrictions, to the principle of minimum entropy production in the stationary state. This principle yields a variational problem which is equivalent to the original problem posed by the stationary fluid equations. The variational method is useful in achieving approximate solutions that give the electric potential and current distributions in the rotating plasma column consistent with an assumed plasma density profile. (Author) [pt

  2. Small vacuum energy from small equivalence violation in scalar gravity

    International Nuclear Information System (INIS)

    Agrawal, Prateek; Sundrum, Raman

    2017-01-01

    The theory of scalar gravity proposed by Nordström, and refined by Einstein and Fokker, provides a striking analogy to general relativity. In its modern form, scalar gravity appears as the low-energy effective field theory of the spontaneous breaking of conformal symmetry within a CFT, and is AdS/CFT dual to the original Randall-Sundrum I model, but without a UV brane. Scalar gravity faithfully exhibits several qualitative features of the cosmological constant problem of standard gravity coupled to quantum matter, and the Weinberg no-go theorem can be extended to this case as well. Remarkably, a solution to the scalar gravity cosmological constant problem has been proposed, where the key is a very small violation of the scalar equivalence principle, which can be elegantly formulated as a particular type of deformation of the CFT. In the dual AdS picture this involves implementing Goldberger-Wise radion stabilization where the Goldberger-Wise field is a pseudo-Nambu Goldstone boson. In quantum gravity however, global symmetries protecting pNGBs are not expected to be fundamental. We provide a natural six-dimensional gauge theory origin for this global symmetry and show that the violation of the equivalence principle and the size of the vacuum energy seen by scalar gravity can naturally be exponentially small. Our solution may be of interest for study of non-supersymmetric CFTs in the spontaneously broken phase.

  3. Small vacuum energy from small equivalence violation in scalar gravity

    Energy Technology Data Exchange (ETDEWEB)

    Agrawal, Prateek [Department of Physics, Harvard University,Cambridge, MA 02138 (United States); Sundrum, Raman [Department of Physics, University of Maryland,College Park, MD 20742 (United States)

    2017-05-29

    The theory of scalar gravity proposed by Nordström, and refined by Einstein and Fokker, provides a striking analogy to general relativity. In its modern form, scalar gravity appears as the low-energy effective field theory of the spontaneous breaking of conformal symmetry within a CFT, and is AdS/CFT dual to the original Randall-Sundrum I model, but without a UV brane. Scalar gravity faithfully exhibits several qualitative features of the cosmological constant problem of standard gravity coupled to quantum matter, and the Weinberg no-go theorem can be extended to this case as well. Remarkably, a solution to the scalar gravity cosmological constant problem has been proposed, where the key is a very small violation of the scalar equivalence principle, which can be elegantly formulated as a particular type of deformation of the CFT. In the dual AdS picture this involves implementing Goldberger-Wise radion stabilization where the Goldberger-Wise field is a pseudo-Nambu Goldstone boson. In quantum gravity however, global symmetries protecting pNGBs are not expected to be fundamental. We provide a natural six-dimensional gauge theory origin for this global symmetry and show that the violation of the equivalence principle and the size of the vacuum energy seen by scalar gravity can naturally be exponentially small. Our solution may be of interest for study of non-supersymmetric CFTs in the spontaneously broken phase.

  4. Limits on rare B decays B → μ+μ-K± and B → μ+μ-K*

    International Nuclear Information System (INIS)

    Anway-Wiese, C.

    1995-01-01

    We report on a search for flavor-changing neutral current decays of B mesons into μμK* and μμK± using data obtained in the Collider Detector at Fermilab (CDF) 1992–1993 data taking run. To reduce the amount of background in our data we use precise tracking information from the CDF silicon vertex detector to pinpoint the location of the decay vertex of the B candidate, and accept only events which have a large decay time. We compare these data to a B meson signal obtained in a similar fashion, but where the muon pairs originate from ψ decays, and calculate the relative branching ratios. In the absence of any indication of flavor-changing neutral current decays we set upper limits of BR(B → μμK±) < 3.5×10⁻⁵ and BR(B → μμK*) < 5.1×10⁻⁵ at 90% confidence level, which are consistent with Standard Model expectations but leave little room for non-standard physics. copyright 1995 American Institute of Physics

  5. A method to obtain new cross-sections transport equivalent

    International Nuclear Information System (INIS)

    Palmiotti, G.

    1988-01-01

    We present a method that allows the calculation, by means of a variational principle, of equivalent cross-sections in order to take into account transport and mesh size effects in reactivity variation calculations. The method has been validated in two- and three-dimensional geometries. The reactivity variations calculated in three-dimensional hexagonal geometry with seven points per subassembly, using two sets of equivalent cross-sections for control rods, are in very good agreement with those of a transport calculation extrapolated to zero mesh size. The difficulty encountered in obtaining a good flux distribution has led to the use of a single set of equivalent cross-sections calculated starting from an appropriate R-Z model that also takes into account the axial transport effects for the control rod followers. The global results for reactivity variations are still satisfactory, with good performance for the flux distribution. The main interest of the proposed method is the possibility of simulating a full 3D transport calculation with fine mesh size using a 3D diffusion code with a larger mesh size. The results obtained should be affected by uncertainties which do not exceed ± 4% for a large LMFBR control rod worth and for very different rod configurations. This uncertainty is by far smaller than the experimental uncertainties. (author). 5 refs, 8 figs, 9 tabs

  6. Framework model and principles for trusted information sharing in pervasive health.

    Science.gov (United States)

    Ruotsalainen, Pekka; Blobel, Bernd; Nykänen, Pirkko; Seppälä, Antto; Sorvari, Hannu

    2011-01-01

    Trustfulness (i.e. health and wellness information is processed ethically, and privacy is guaranteed) is one of the cornerstones for future Personal Health Systems, ubiquitous healthcare and pervasive health. Trust in today's healthcare is organizational, static and predefined. Pervasive health takes place in an open and untrusted information space where person's lifelong health and wellness information together with contextual data are dynamically collected and used by many stakeholders. This generates new threats that do not exist in today's eHealth systems. Our analysis shows that the way security and trust are implemented in today's healthcare cannot guarantee information autonomy and trustfulness in pervasive health. Based on a framework model of pervasive health and risks analysis of ubiquitous information space, we have formulated principles which enable trusted information sharing in pervasive health. Principles imply that the data subject should have the right to dynamically verify trust and to control the use of her health information, as well as the right to set situation based context-aware personal policies. Data collectors and processors have responsibilities including transparency of information processing, and openness of interests, policies and environmental features. Our principles create a base for successful management of privacy and information autonomy in pervasive health. They also imply that it is necessary to create new data models for personal health information and new architectures which support situation depending trust and privacy management.

  7. Biophysical modelling of phytoplankton communities from first principles using two-layered spheres: Equivalent Algal Populations (EAP) model

    CSIR Research Space (South Africa)

    Robertson Lain, L

    2014-07-01

    … (PFT) analysis. To these ends, an initial validation of a new model of Equivalent Algal Populations (EAP) is presented here. This paper makes a first order comparison of two prominent phytoplankton Inherent Optical Property (IOP) models with the EAP...

  8. Application of a value-based equivalency method to assess environmental damage compensation under the European Environmental Liability Directive

    NARCIS (Netherlands)

    Martin-Ortega, J.; Brouwer, R.; Aiking, H.

    2011-01-01

    The Environmental Liability Directive (ELD) establishes a framework of liability based on the 'polluter-pays' principle to prevent and remedy environmental damage. The ELD requires the testing of appropriate equivalency methods to assess the scale of compensatory measures needed to offset damage.

  9. Cellular gauge symmetry and the Li organization principle: General considerations.

    Science.gov (United States)

    Tozzi, Arturo; Peters, James F; Navarro, Jorge; Kun, Wu; Lin, Bi; Marijuán, Pedro C

    2017-12-01

    Based on novel topological considerations, we postulate a gauge symmetry for living cells and proceed to interpret it from a consistent Eastern perspective: the li organization principle. In our framework, the reference system is the living cell, equipped with general symmetries and energetic constraints standing for the intertwined biochemical, metabolic and signaling pathways that allow the global homeostasis of the system. Environmental stimuli stand for forces able to locally break the symmetry of metabolic/signaling pathways, while the species-specific DNA is the gauge field that restores the global homeostasis after external perturbations. We apply the Borsuk-Ulam Theorem (BUT) to operationalize a methodology in terms of topology/gauge fields and subsequently inquire about the evolution from inorganic to organic structures and to the prokaryotic and eukaryotic modes of organization. We converge on the strategic role that second messengers have played regarding the emergence of a unitary gauge field with profound evolutionary implications. A new avenue for a deeper investigation of biological complexity looms. Philosophically, we might be reminded of the duality between two essential concepts proposed by the great Chinese synthesizer Zhu Xi (in the XIII Century). On the one side the li organization principle, equivalent to the dynamic interplay between symmetry and information; and on the other side the qi principle, equivalent to the energy participating in the process-both always interlinked with each other. In contemporary terms, it would mean the required interconnection between information and energy, and the necessity to revise essential principles of information philosophy. Copyright © 2017 Elsevier Ltd. All rights reserved.

  10. US Implied Volatility as A predictor of International Returns

    Directory of Open Access Journals (Sweden)

    Mehmet F. Dicle

    2017-12-01

    This study provides evidence of the US implied volatility’s effect on international equity markets’ returns. This evidence has two main implications: (i) investors may find that foreign equity returns adjusting to US implied volatility may not provide true diversification benefits, and (ii) foreign equity returns may be predicted using US implied volatility. Our sample includes the US volatility index (VIX) and major equity indexes in twenty countries for the period from January 2000 through July 2017. VIX leads eighteen of the international markets and Granger causes seventeen of the markets after controlling for the S&P-500 index returns and the 2007/2008 US financial crisis. US investors looking to diversify US risk may find that international equities may not provide intended diversification benefits. Our evidence provides support for predictability of international equity returns based on US volatility.
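
    As a hedged illustration of the kind of lead-lag test described above (not the authors' exact specification, which also controls for S&P-500 returns and the 2007/2008 crisis), a Granger-causality check can be run with statsmodels on a two-column array; the data below are synthetic.

      # Does a VIX-like series Granger-cause an equity-return series? Synthetic data only.
      import numpy as np
      from statsmodels.tsa.stattools import grangercausalitytests

      rng = np.random.default_rng(0)
      n = 1000
      vix_like = rng.normal(size=n)          # stand-in for US implied volatility changes
      noise = rng.normal(size=n)
      returns = np.empty(n)
      returns[0] = noise[0]
      for t in range(1, n):                  # returns respond to yesterday's "VIX"
          returns[t] = -0.3 * vix_like[t - 1] + noise[t]

      # Column order matters: the test asks whether the second column Granger-causes the first.
      data = np.column_stack([returns, vix_like])
      results = grangercausalitytests(data, maxlag=2)
      print(results[1][0]["ssr_ftest"])      # (F statistic, p-value, df_denom, df_num) at lag 1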

  11. Greatest Happiness Principle in a Complex System: Maximisation versus Driving Force

    Directory of Open Access Journals (Sweden)

    Katalin Martinás

    2012-06-01

    From a philosophical point of view, micro-founded economic theories depart from the principle of the pursuit of the greatest happiness. From a mathematical point of view, micro-founded economic theories depart from the utility maximisation program. Though economists are aware of the serious limitations of equilibrium analysis, they remain in that framework. We show that the maximisation principle, which implies the equilibrium hypothesis, is responsible for this impasse. We formalise the pursuit of the greatest happiness principle with the help of the driving force postulate: the volumes of activities depend on the expected wealth increase. In that case we can get rid of the equilibrium hypothesis and gain new insights into economic theory. For example, to what extent do standard economic results depend on the equilibrium hypothesis?

  12. Maximum principle for a stochastic delayed system involving terminal state constraints.

    Science.gov (United States)

    Wen, Jiaqiang; Shi, Yufeng

    2017-01-01

    We investigate a stochastic optimal control problem where the controlled system is depicted as a stochastic differential delayed equation; however, at the terminal time, the state is constrained in a convex set. We firstly introduce an equivalent backward delayed system depicted as a time-delayed backward stochastic differential equation. Then a stochastic maximum principle is obtained by virtue of Ekeland's variational principle. Finally, applications to a state constrained stochastic delayed linear-quadratic control model and a production-consumption choice problem are studied to illustrate the main obtained result.

  13. Relativistic transformation law of quantum fields: A slight generalization consistent with the equivalence of all Lorentz frames

    International Nuclear Information System (INIS)

    Ingraham, R.L.

    1985-01-01

    The well-known relativistic transformation law of quantum fields satisfies the relativity principle, which asserts the complete equivalence of all Lorentz (inertial) frames as far as physical measurements go. We point out a slight generalization which is allowed by the relativity principle, but violates a further, tacit assumption usually made in connection with it but which is actually logically independent of it and subject to a feasible experimental test. The interest of the generalization is that it permits the incorporation of an ultraviolet cutoff in a simple, direct way which avoids the usual difficulties

  14. High-Resolution Near-Infrared Spectroscopy of an Equivalent Width-Selected Sample of Starbursting Dwarf Galaxies

    Science.gov (United States)

    Maseda, Michael V.; van der Wel, Arjen; da Cunha, Elisabete; Rix, Hans-Walter; Pacifici, Camilla; Momcheva, Ivelina; Brammer, Gabriel B.; Franx, Marijn; van Dokkum, Pieter; Bell, Eric F.

    2013-01-01

    Spectroscopic observations from the Large Binocular Telescope and the Very Large Telescope reveal kinematically narrow lines (approx. 50 km/s) for a sample of 14 Extreme Emission Line Galaxies (EELGs) at redshifts 1.4 < z < 2.3. These measurements imply that the total dynamical masses of these systems are low (≲ 3 × 10⁹ M☉). Their large [O III] λ5007 equivalent widths (500–1100 Å) and faint blue continuum emission imply young ages of 10–100 Myr and stellar masses of 10⁸–10⁹ M☉, confirming the presence of a violent starburst. The stellar mass formed in this vigorous starburst phase thus represents a large fraction of the total (dynamical) mass, without a significantly massive underlying population of older stars. The occurrence of such intense events in shallow potentials strongly suggests that supernova-driven winds must be of critical importance in the subsequent evolution of these systems.

  15. Principle and methods for measurement of snow water equivalent by detection of natural gamma radiation

    Energy Technology Data Exchange (ETDEWEB)

    Endrestoel, G O [Institutt for Atomenergi, Kjeller (Norway)

    1979-01-01

    The underlying principles for snow cover determination by use of terrestrial gamma radiation are presented. Several of the methods that have been proposed to exploit the effect are discussed, and some of the more important error sources for the different methods are listed. In conclusion an estimate of the precision that can be obtained by these methods is given.

  16. Principle and methods for measurement of snow water equivalent by detection of natural gamma radiation

    Energy Technology Data Exchange (ETDEWEB)

    Endrestol, G O

    1979-01-01

    The underlying principles for snow cover determination by use of terrestrial gamma radiation are presented. Several of the methods that have been proposed to exploit the effect are discussed, and some of the more important error sources for the different methods are listed. In conclusion estimates of the precision that can be obtained by these methods are given.

  17. Implied adjusted volatility functions: Empirical evidence from Australian index option market

    Science.gov (United States)

    Harun, Hanani Farhah; Hafizah, Mimi

    2015-02-01

    This study aims to investigate implied adjusted volatility functions using different Leland option pricing models and to assess whether the use of a specified implied adjusted volatility function can lead to an improvement in option valuation accuracy. The implied adjusted volatility is investigated in the context of Standard and Poor's/Australian Stock Exchange (S&P/ASX) 200 index options over the course of 2001-2010, which covers the global financial crisis from mid-2007 until the end of 2008. Both in-sample and out-of-sample tests resulted in approximately similar pricing errors across the different Leland models. Results indicate that symmetric and asymmetric models of both the moneyness ratio and the logarithmic transformation of moneyness provide the overall best results in both the crisis and post-crisis periods. We find that each interval (pre-, during and post-crisis) is subject to a different implied adjusted volatility function that best explains the index options. Hence, it is tremendously important to identify the intervals beforehand when investigating the implied adjusted volatility function.

  18. Estimation of equivalent dose to the extremities of hemodynamic physicians during neurological procedures

    International Nuclear Information System (INIS)

    Squair, Peterson L.; Souza, Luiz C. de; Oliveira, Paulo Marcio C. de

    2005-01-01

    The estimation of doses to the hands of physicians during hemodynamic procedures is important to verify the application of the radiation protection principles of optimization and dose limitation required by Portaria 453/98 of the Ministry of Health/ANVISA, Brazil. The exposure levels of the doctors' hands during the use of the equipment in hemodynamic neurological procedures were checked using dosimetric rings with LiF:Mg,Ti (TLD-100) thermoluminescent detectors, calibrated in terms of the personal dose equivalent Hp(0.07). The average equivalent dose to the extremity was 41.12 μSv per scan, with an expanded uncertainty of 20% for k = 2. This value refers to hemodynamic neurology procedures using the radiological protection measures available to minimize the dose.

  19. Level Shifts in Volatility and the Implied-Realized Volatility Relation

    DEFF Research Database (Denmark)

    Christensen, Bent Jesper; de Magistris, Paolo Santucci

    We propose a simple model in which realized stock market return volatility and implied volatility backed out of option prices are subject to common level shifts corresponding to movements between bull and bear markets. The model is estimated using the Kalman filter in a generalization to the multivariate case of the univariate level shift technique by Lu and Perron (2008). An application to the S&P500 index and a simulation experiment show that the recently documented empirical properties of strong persistence in volatility and forecastability of future realized volatility from current implied volatility, which have been interpreted as long memory (or fractional integration) in volatility and fractional cointegration between implied and realized volatility, are accounted for by occasional common level shifts.

  20. Capital taxation : principles , properties and optimal taxation issues

    OpenAIRE

    Antonin, Céline; Touze, Vincent

    2017-01-01

    This article addresses the issue of capital taxation relying on three levels of analysis. The first level deals with the multiple ways to tax capital (income or value, proportional or progressive taxation, and the temporality of the taxation) and presents some of France's particular features within a heterogeneous European context. The second area of investigation focuses on the main dynamic properties generated by capital taxation: the principle of equivalence with a tax on consu...

  1. Test of the Weak Equivalence Principle using LIGO observations of GW150914 and Fermi observations of GBM transient 150914

    Directory of Open Access Journals (Sweden)

    Molin Liu

    2017-07-01

    About 0.4 s after the Laser Interferometer Gravitational-Wave Observatory (LIGO) detected a transient gravitational-wave (GW) signal, GW150914, the Fermi Gamma-ray Burst Monitor (GBM) also found a weak electromagnetic transient (GBM transient 150914). Time and location coincidences favor a possible association between GW150914 and GBM transient 150914. Under this possible association, we adopt Fermi's electromagnetic (EM) localization and derive constraints on possible violations of the Weak Equivalence Principle (WEP) from the observations of the two events. Our calculations are based on four comparisons: (1) The first is the comparison of the initial GWs detected at the two LIGO sites. From the different polarizations of these initial GWs, we obtain a limit on any difference in the parametrized post-Newtonian (PPN) parameter Δγ ≲ 10⁻¹⁰. (2) The second is a comparison of GWs and possible EM waves. Using a traditional super-Eddington accretion model for GBM transient 150914, we again obtain an upper limit Δγ ≲ 10⁻¹⁰. Compared with previous results for photons and neutrinos, our limits are five orders of magnitude stronger than those from PeV neutrinos in blazar flares, and seven orders stronger than those from MeV neutrinos in SN1987A. (3) The third is a comparison of GWs with different frequencies in the range [35 Hz, 250 Hz]. (4) The fourth is a comparison of EM waves with different energies in the range [1 keV, 10 MeV]. These last two comparisons lead to an even stronger limit, Δγ ≲ 10⁻⁸. Our results highlight the potential of multi-messenger signals exploiting different emission channels to strengthen existing tests of the WEP.
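
    For context, comparisons of this kind generally rest on the Shapiro delay accumulated in the gravitational potential U along the propagation path; in parametrized post-Newtonian form (a sketch of the standard argument, with the choice of potential, e.g. that of the Milky Way, being a modelling assumption), an observed arrival-time difference Δt between two messengers, frequencies or energy bands bounds the difference in their PPN parameters:

      $$ \delta t_{\mathrm{gra}} = -\,\frac{1+\gamma}{c^{3}} \int_{r_o}^{r_e} U\big(r(l)\big)\, \mathrm{d}l , \qquad |\Delta\gamma| \;\lesssim\; \frac{c^{3}\,\Delta t}{\left|\int_{r_o}^{r_e} U\big(r(l)\big)\, \mathrm{d}l\right|} . $$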

  2. Electric cars : The climate impact of electric cars, focusing on carbon dioxide equivalent emissions

    OpenAIRE

    Ly, Sandra; Sundin, Helena; Thell, Linda

    2012-01-01

    This bachelor thesis examines and models the emissions of carbon dioxide equivalents of the composition of automobiles in Sweden 2012. The report will be based on three scenarios of electricity valuation principles, which are a snapshot perspective, a retrospective perspective and a future perspective. The snapshot perspective includes high and low values for electricity on the margin, the retrospective perspective includes Nordic and European electricity mix and the future perspective includ...

  3. “Stringy” coherent states inspired by generalized uncertainty principle

    Science.gov (United States)

    Ghosh, Subir; Roy, Pinaki

    2012-05-01

    Coherent States with Fractional Revival property, that explicitly satisfy the Generalized Uncertainty Principle (GUP), have been constructed in the context of Generalized Harmonic Oscillator. The existence of such states is essential in motivating the GUP based phenomenological results present in the literature which otherwise would be of purely academic interest. The effective phase space is Non-Canonical (or Non-Commutative in popular terminology). Our results have a smooth commutative limit, equivalent to Heisenberg Uncertainty Principle. The Fractional Revival time analysis yields an independent bound on the GUP parameter. Using this and similar bounds obtained here, we derive the largest possible value of the (GUP induced) minimum length scale. Mandel parameter analysis shows that the statistics is Sub-Poissonian. Correspondence Principle is deformed in an interesting way. Our computational scheme is very simple as it requires only first order corrected energy values and undeformed basis states.
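
    For readers unfamiliar with the GUP, one commonly used deformation (a generic textbook form with ⟨p̂⟩ = 0 taken for simplicity; the paper's specific construction may differ) modifies the canonical commutator and immediately produces the minimal length that the fractional-revival analysis bounds:

      $$ [\hat{x}, \hat{p}] = i\hbar\,\big(1 + \beta \hat{p}^{2}\big) \;\Longrightarrow\; \Delta x\, \Delta p \ge \frac{\hbar}{2}\big(1 + \beta (\Delta p)^{2}\big) \;\Longrightarrow\; (\Delta x)_{\min} = \hbar \sqrt{\beta} . $$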

  4. “Stringy” coherent states inspired by generalized uncertainty principle

    International Nuclear Information System (INIS)

    Ghosh, Subir; Roy, Pinaki

    2012-01-01

    Coherent States with Fractional Revival property, that explicitly satisfy the Generalized Uncertainty Principle (GUP), have been constructed in the context of Generalized Harmonic Oscillator. The existence of such states is essential in motivating the GUP based phenomenological results present in the literature which otherwise would be of purely academic interest. The effective phase space is Non-Canonical (or Non-Commutative in popular terminology). Our results have a smooth commutative limit, equivalent to Heisenberg Uncertainty Principle. The Fractional Revival time analysis yields an independent bound on the GUP parameter. Using this and similar bounds obtained here, we derive the largest possible value of the (GUP induced) minimum length scale. Mandel parameter analysis shows that the statistics is Sub-Poissonian. Correspondence Principle is deformed in an interesting way. Our computational scheme is very simple as it requires only first order corrected energy values and undeformed basis states.

  5. Equivalent Dynamic Models.

    Science.gov (United States)

    Molenaar, Peter C M

    2017-01-01

    Equivalences of two classes of dynamic models for weakly stationary multivariate time series are discussed: dynamic factor models and autoregressive models. It is shown that exploratory dynamic factor models can be rotated, yielding an infinite set of equivalent solutions for any observed series. It also is shown that dynamic factor models with lagged factor loadings are not equivalent to the currently popular state-space models, and that restriction of attention to the latter type of models may yield invalid results. The known equivalent vector autoregressive model types, standard and structural, are given a new interpretation in which they are conceived of as the extremes of an innovating type of hybrid vector autoregressive models. It is shown that consideration of hybrid models solves many problems, in particular with Granger causality testing.

  6. On Equivalence of Nonequilibrium Thermodynamic and Statistical Entropies

    Directory of Open Access Journals (Sweden)

    Purushottam D. Gujrati

    2015-02-01

    Full Text Available We review the concept of nonequilibrium thermodynamic entropy and observables and internal variables as state variables, introduced recently by us, and provide a simple first principle derivation of additive statistical entropy, applicable to all nonequilibrium states by treating thermodynamics as an experimental science. We establish their numerical equivalence in several cases, which includes the most important case when the thermodynamic entropy is a state function. We discuss various interesting aspects of the two entropies and show that the number of microstates in the Boltzmann entropy includes all possible microstates of non-zero probabilities even if the system is trapped in a disjoint component of the microstate space. We show that negative thermodynamic entropy can appear from nonnegative statistical entropy.

  7. Equivalent formulations of “the equation of life”

    International Nuclear Information System (INIS)

    Ao Ping

    2014-01-01

    Motivated by progress in theoretical biology a recent proposal on a general and quantitative dynamical framework for nonequilibrium processes and dynamics of complex systems is briefly reviewed. It is nothing but the evolutionary process discovered by Charles Darwin and Alfred Wallace. Such general and structured dynamics may be tentatively named “the equation of life”. Three equivalent formulations are discussed, and it is also pointed out that such a quantitative dynamical framework leads naturally to the powerful Boltzmann-Gibbs distribution and the second law in physics. In this way, the equation of life provides a logically consistent foundation for thermodynamics. This view clarifies a particular outstanding problem and further suggests a unifying principle for physics and biology. (topical review - statistical physics and complex systems)

  8. Principle-based concept analysis: Caring in nursing education.

    Science.gov (United States)

    Salehian, Maryam; Heydari, Abbas; Aghebati, Nahid; Karimi Moonaghi, Hossein; Mazloom, Seyed Reza

    2016-03-01

    The aim of this principle-based concept analysis was to analyze caring in nursing education and to explain the current state of the science based on epistemologic, pragmatic, linguistic, and logical philosophical principles. A principle-based concept analysis method was used to analyze the nursing literature. The dataset included 46 English language studies, published from 2005 to 2014, and they were retrieved through PROQUEST, MEDLINE, CINAHL, ERIC, SCOPUS, and SID scientific databases. The key dimensions of the data were collected using a validated data-extraction sheet. The four principles of assessing pragmatic utility were used to analyze the data. The data were managed by using MAXQDA 10 software. The scientific literature that deals with caring in nursing education relies on implied meaning. Caring in nursing education refers to student-teacher interactions that are formed on the basis of human values and focused on the unique needs of the students (epistemological principle). The result of student-teacher interactions is the development of both the students and the teachers. Numerous applications of the concept of caring in nursing education are available in the literature (pragmatic principle). There is consistency in the meaning of the concept, as a central value of the faculty-student interaction (linguistic principle). Compared with other related concepts, such as "caring pedagogy," "value-based education," and "teaching excellence," caring in nursing education does not have exact and clear conceptual boundaries (logic principle). Caring in nursing education was identified as an approach to teaching and learning, and it is formed based on teacher-student interactions and sustainable human values. A greater understanding of the conceptual basis of caring in nursing education will improve the caring behaviors of teachers, create teaching-learning environments, and help experts in curriculum development.

  9. A review of the generalized uncertainty principle

    International Nuclear Information System (INIS)

    Tawfik, Abdel Nasser; Diab, Abdel Magied

    2015-01-01

    Based on string theory, black hole physics, doubly special relativity and some ‘thought’ experiments, minimal distance and/or maximum momentum are proposed. As alternatives to the generalized uncertainty principle (GUP), the modified dispersion relation, the space noncommutativity, the Lorentz invariance violation, and the quantum-gravity-induced birefringence effects are summarized. The origin of minimal measurable quantities and the different GUP approaches are reviewed and the corresponding observations are analysed. Bounds on the GUP parameter are discussed and implemented in the understanding of recent PLANCK observations of cosmic inflation. The higher-order GUP approaches predict minimal length uncertainty with and without maximum momenta. Possible arguments against the GUP are discussed; for instance, the concern about its compatibility with the equivalence principles, the universality of gravitational redshift and the free fall and law of reciprocal action are addressed. (review)

  10. The meaning and the principle of determination of the effective dose equivalent in radiation protection

    International Nuclear Information System (INIS)

    Drexler, G.; Williams, G.; Zankl, M.

    1985-01-01

    Since the introduction of the quantity ''effective dose equivalent'' within the framework of the new radiation protection concepts, the meaning and interpretation of the quantity have often been discussed and debated. Because of its adoption as a limiting quantity in many international and national laws, it is necessary to be able to interpret this main radiation protection quantity. Examples of organ doses and the related H_E values in occupational and medical exposures are presented, and the meaning of the quantity is considered for whole-body exposures to external and internal photon sources, as well as for partial-body external exposures to photons. (author)

  11. The Precautionary Principle and the Tolerability of Blood Transfusion Risks.

    Science.gov (United States)

    Kramer, Koen; Zaaijer, Hans L; Verweij, Marcel F

    2017-03-01

    Tolerance for blood transfusion risks is very low, as evidenced by the implementation of expensive blood tests and the rejection of gay men as blood donors. Is this low risk tolerance supported by the precautionary principle, as defenders of such policies claim? We discuss three constraints on applying (any version of) the precautionary principle and show that respecting these implies tolerating certain risks. Consistency means that the precautionary principle cannot prescribe precautions that it must simultaneously forbid taking, considering the harms they might cause. Avoiding counterproductivity requires rejecting precautions that cause more harm than they prevent. Proportionality forbids taking precautions that are more harmful than adequate alternatives. When applying these constraints, we argue, attention should not be restricted to harms that are human caused or that affect human health or the environment. Tolerating transfusion risks can be justified if available precautions have serious side effects, such as high social or economic costs.

  12. Fundamental Tactical Principles of Soccer: A Comparison of Different Age Groups.

    Science.gov (United States)

    Borges, Paulo Henrique; Guilherme, José; Rechenchosky, Leandro; da Costa, Luciane Cristina Arantes; Rinadi, Wilson

    2017-09-01

    The fundamental tactical principles of the game of soccer represent a set of action rules that guide behaviours related to the management of game space. The aim of this study was to compare the performance of fundamental offensive and defensive tactical principles among youth soccer players from 12 to 17 years old. The sample consisted of 3689 tactical actions performed by 48 soccer players in three age categories: under 13 (U-13), under 15 (U-15), and under 17 (U-17). Tactical performance was measured using the System of Tactical Assessment in Soccer (FUT-SAT). The Kruskal Wallis, Mann-Whitney U, Friedman, Wilcoxon, and Cohen's Kappa tests were used in the study analysis. The results showed that the principles of "offensive coverage" (p = 0.01) and "concentration" (p = 0.04) were performed more frequently by the U-17 players than the U-13 players. The tactical principles "width and length" (p principles are performed varies between the gaming categories, which implies that there is valuation of defensive security and a progressive increase in "offensive coverage" caused by increased confidence and security in offensive actions.

  13. Motor mapping of implied actions during perception of emotional body language.

    Science.gov (United States)

    Borgomaneri, Sara; Gazzola, Valeria; Avenanti, Alessio

    2012-04-01

    Perceiving and understanding emotional cues is critical for survival. Using the International Affective Picture System (IAPS) previous TMS studies have found that watching humans in emotional pictures increases motor excitability relative to seeing landscapes or household objects, suggesting that emotional cues may prime the body for action. Here we tested whether motor facilitation to emotional pictures may reflect the simulation of the human motor behavior implied in the pictures occurring independently of its emotional valence. Motor-evoked potentials (MEPs) to single-pulse TMS of the left motor cortex were recorded from hand muscles during observation and categorization of emotional and neutral pictures. In experiment 1 participants watched neutral, positive and negative IAPS stimuli, while in experiment 2, they watched pictures depicting human emotional (joyful, fearful), neutral body movements and neutral static postures. Experiment 1 confirms the increase in excitability for emotional IAPS stimuli found in previous research and shows, however, that more implied motion is perceived in emotional relative to neutral scenes. Experiment 2 shows that motor excitability and implied motion scores for emotional and neutral body actions were comparable and greater than for static body postures. In keeping with embodied simulation theories, motor response to emotional pictures may reflect the simulation of the action implied in the emotional scenes. Action simulation may occur independently of whether the observed implied action carries emotional or neutral meanings. Our study suggests the need of controlling implied motion when exploring motor response to emotional pictures of humans. Copyright © 2012 Elsevier Inc. All rights reserved.

  14. Quantitative Boltzmann-Gibbs Principles via Orthogonal Polynomial Duality

    Science.gov (United States)

    Ayala, Mario; Carinci, Gioia; Redig, Frank

    2018-06-01

    We study fluctuation fields of orthogonal polynomials in the context of particle systems with duality. We thereby obtain a systematic orthogonal decomposition of the fluctuation fields of local functions, where the order of every term can be quantified. This implies a quantitative generalization of the Boltzmann-Gibbs principle. In the context of independent random walkers, we complete this program, including also fluctuation fields in non-stationary context (local equilibrium). For other interacting particle systems with duality such as the symmetric exclusion process, similar results can be obtained, under precise conditions on the n particle dynamics.

  15. A general maximum entropy framework for thermodynamic variational principles

    International Nuclear Information System (INIS)

    Dewar, Roderick C.

    2014-01-01

    Minimum free energy principles are familiar in equilibrium thermodynamics, as expressions of the second law. They also appear in statistical mechanics as variational approximation schemes, such as the mean-field and steepest-descent approximations. These well-known minimum free energy principles are here unified and extended to any system analyzable by MaxEnt, including non-equilibrium systems. The MaxEnt Lagrangian associated with a generic MaxEnt distribution p defines a generalized potential Ψ for an arbitrary probability distribution p-hat, such that Ψ is a minimum at (p-hat) = p. Minimization of Ψ with respect to p-hat thus constitutes a generic variational principle, and is equivalent to minimizing the Kullback-Leibler divergence between p-hat and p. Illustrative examples of min–Ψ are given for equilibrium and non-equilibrium systems. An interpretation of changes in Ψ is given in terms of the second law, although min–Ψ itself is an intrinsic variational property of MaxEnt that is distinct from the second law
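
    The equivalence to Kullback-Leibler minimization quoted above is easy to make concrete with the standard definition of the divergence (a textbook fact, not a result specific to this paper):

```latex
D_{\mathrm{KL}}(\hat{p}\,\|\,p) \;=\; \sum_i \hat{p}_i \ln\frac{\hat{p}_i}{p_i} \;\ge\; 0,
\qquad
D_{\mathrm{KL}}(\hat{p}\,\|\,p) = 0 \;\Longleftrightarrow\; \hat{p} = p,
```

    so any potential Ψ that agrees with D_KL(p̂‖p) up to an additive constant is automatically minimized at p̂ = p, the MaxEnt distribution.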

  16. A general maximum entropy framework for thermodynamic variational principles

    Energy Technology Data Exchange (ETDEWEB)

    Dewar, Roderick C., E-mail: roderick.dewar@anu.edu.au [Research School of Biology, The Australian National University, Canberra ACT 0200 (Australia)

    2014-12-05

    Minimum free energy principles are familiar in equilibrium thermodynamics, as expressions of the second law. They also appear in statistical mechanics as variational approximation schemes, such as the mean-field and steepest-descent approximations. These well-known minimum free energy principles are here unified and extended to any system analyzable by MaxEnt, including non-equilibrium systems. The MaxEnt Lagrangian associated with a generic MaxEnt distribution p defines a generalized potential Ψ for an arbitrary probability distribution p-hat, such that Ψ is a minimum at (p-hat) = p. Minimization of Ψ with respect to p-hat thus constitutes a generic variational principle, and is equivalent to minimizing the Kullback-Leibler divergence between p-hat and p. Illustrative examples of min–Ψ are given for equilibrium and non-equilibrium systems. An interpretation of changes in Ψ is given in terms of the second law, although min–Ψ itself is an intrinsic variational property of MaxEnt that is distinct from the second law.

  17. Aplikasi Algoritma Biseksi dan Newton-Raphson dalam Menaksir Nilai Volatilitas Implied

    Directory of Open Access Journals (Sweden)

    Komang Dharmawan

    2012-11-01

    Full Text Available Volatility is a quantity that measures how far a stock price moves over a given period; it can also be interpreted as the percentage standard deviation of the daily price changes of a stock. According to the theory developed by Black and Scholes in 1973, all option prices on the same underlying asset and with the same time to maturity, but with different exercise prices, should have the same implied volatility. The Black-Scholes model can be used to estimate the implied volatility of a stock by finding a numerical solution of the inverse of the Black-Scholes pricing equation. This paper demonstrates how to compute the implied volatility of a stock by assuming that the Black-Scholes model is correct and that option contracts with the same time to maturity have consistent prices. Using option price data for Sony Corporation (SNE), Cisco Systems, Inc. (CSCO), and Canon, Inc. (CNJ), we find that implied volatility yields cheaper option prices than those computed from volatility estimated on historical data. In addition, the iteration results show that the Newton-Raphson method converges faster than the bisection method.
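
    As a companion to the abstract, here is a minimal sketch of the two root-finding schemes it compares, applied to inverting the Black-Scholes call price for σ. The market inputs at the bottom are invented for illustration; they are not the SNE/CSCO/CNJ data used in the paper.

```python
import math

def bs_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call."""
    N = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * N(d1) - K * math.exp(-r * T) * N(d2)

def vega(S, K, T, r, sigma):
    """Derivative of the call price with respect to sigma."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    return S * math.sqrt(T) * math.exp(-0.5 * d1**2) / math.sqrt(2.0 * math.pi)

def implied_vol_newton(price, S, K, T, r, sigma0=0.2, tol=1e-8, max_iter=100):
    """Newton-Raphson: quadratic convergence when vega is not too small."""
    sigma = sigma0
    for _ in range(max_iter):
        diff = bs_call(S, K, T, r, sigma) - price
        if abs(diff) < tol:
            break
        sigma -= diff / vega(S, K, T, r, sigma)
    return sigma

def implied_vol_bisection(price, S, K, T, r, lo=1e-6, hi=5.0, tol=1e-8):
    """Bisection: slower (linear) convergence, but guaranteed on a bracketing interval."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if bs_call(S, K, T, r, mid) > price:
            hi = mid
        else:
            lo = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# Made-up market data: spot 100, strike 100, 3 months to maturity, 1% rate, call price 4.0
print(implied_vol_newton(4.0, 100.0, 100.0, 0.25, 0.01))
print(implied_vol_bisection(4.0, 100.0, 100.0, 0.25, 0.01))
```

    Newton-Raphson converges quadratically whenever vega is not too small, which matches the abstract's observation that it needs fewer iterations; bisection, in exchange, is guaranteed to converge on any interval that brackets the root.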

  18. Correlation Structures of Correlated Binomial Models and Implied Default Distribution

    OpenAIRE

    S. Mori; K. Kitsukawa; M. Hisakado

    2006-01-01

    We show how to analyze and interpret the correlation structures, the conditional expectation values and correlation coefficients of exchangeable Bernoulli random variables. We study implied default distributions for the iTraxx-CJ tranches and some popular probabilistic models, including the Gaussian copula model, Beta binomial distribution model and long-range Ising model. We interpret the differences in their profiles in terms of the correlation structures. The implied default distribution h...

  19. Long memory and the relation between implied and realized volatility

    OpenAIRE

    Federico Bandi; Benoit Perron

    2003-01-01

    We argue that the conventional predictive regression between implied volatility (regressor) and realized volatility over the remaining life of the option (regressand) is likely to be a fractional cointegrating relation. Since cointegration is associated with long-run comovements, this finding modifies the usual interpretation of such regression as a study towards assessing option market efficiency (given a certain option pricing model) and/or short-term unbiasedness of implied volatility as a...

  20. Design principles for data- and change-oriented organisational analysis in workplace health promotion.

    Science.gov (United States)

    Inauen, A; Jenny, G J; Bauer, G F

    2012-06-01

    This article focuses on organizational analysis in workplace health promotion (WHP) projects. It shows how this analysis can be designed such that it provides rational data relevant to the further context-specific and goal-oriented planning of WHP and equally supports individual and organizational change processes implied by WHP. Design principles for organizational analysis were developed on the basis of a narrative review of the guiding principles of WHP interventions and organizational change as well as the scientific principles of data collection. Further, the practical experience of WHP consultants who routinely conduct organizational analysis was considered. This resulted in a framework with data-oriented and change-oriented design principles, addressing the following elements of organizational analysis in WHP: planning the overall procedure, data content, data-collection methods and information processing. Overall, the data-oriented design principles aim to produce valid, reliable and representative data, whereas the change-oriented design principles aim to promote motivation, coherence and a capacity for self-analysis. We expect that the simultaneous consideration of data- and change-oriented design principles for organizational analysis will strongly support the WHP process. We finally illustrate the applicability of the design principles to health promotion within a WHP case study.

  1. EQUIVALENCE VERSUS NON-EQUIVALENCE IN ECONOMIC TRANSLATION

    Directory of Open Access Journals (Sweden)

    Cristina, Chifane

    2012-01-01

    Full Text Available This paper aims at highlighting the fact that “equivalence” represents a concept worth revisiting and detailing upon when tackling the translation process of economic texts both from English into Romanian and from Romanian into English. Far from being exhaustive, our analysis will focus upon the problems arising from the lack of equivalence at the word level. Consequently, relevant examples from the economic field will be provided to account for the following types of non-equivalence at word level: culture-specific concepts; the source language concept is not lexicalised in the target language; the source language word is semantically complex; differences in physical and interpersonal perspective; differences in expressive meaning; differences in form; differences in frequency and purpose of using specific forms and the use of loan words in the source text. Likewise, we shall illustrate a number of translation strategies necessary to deal with the afore-mentioned cases of non-equivalence: translation by a more general word (superordinate); translation by a more neutral/less expressive word; translation by cultural substitution; translation using a loan word or loan word plus explanation; translation by paraphrase using a related word; translation by paraphrase using unrelated words; translation by omission and translation by illustration.

  2. Radioactive waste equivalence

    International Nuclear Information System (INIS)

    Orlowski, S.; Schaller, K.H.

    1990-01-01

    The report reviews, for the Member States of the European Community, possible situations in which an equivalence concept for radioactive waste may be used, analyses the various factors involved, and suggests guidelines for the implementation of such a concept. Only safety and technical aspects are covered. Other aspects such as commercial ones are excluded. Situations where the need for an equivalence concept has been identified are processes where impurities are added as a consequence of the treatment and conditioning process, the substitution of wastes from similar waste streams due to the treatment process, and exchange of waste belonging to different waste categories. The analysis of factors involved and possible ways for equivalence evaluation, taking into account in particular the chemical, physical and radiological characteristics of the waste package, and the potential risks of the waste form, shows that no simple all-encompassing equivalence formula may be derived. Consequently, a step-by-step approach is suggested, which avoids complex evaluations in the case of simple exchanges

  3. New recommendations for dose equivalent

    International Nuclear Information System (INIS)

    Bengtsson, G.

    1985-01-01

    In its report 39, the International Commission on Radiation Units and Measurements (ICRU), has defined four new quantities for the determination of dose equivalents from external sources: the ambient dose equivalent, the directional dose equivalent, the individual dose equivalent, penetrating and the individual dose equivalent, superficial. The rationale behind these concepts and their practical application are discussed. Reference is made to numerical values of these quantities which will be the subject of a coming publication from the International Commission on Radiological Protection, ICRP. (Author)

  4. Equivalent models of wind farms by using aggregated wind turbines and equivalent winds

    International Nuclear Information System (INIS)

    Fernandez, L.M.; Garcia, C.A.; Saenz, J.R.; Jurado, F.

    2009-01-01

    As a result of the increasing penetration of wind farms in power systems, wind farms have begun to influence power system behaviour, and the modeling of wind farms has therefore become an interesting research topic. In this paper, new equivalent models of wind farms equipped with wind turbines based on squirrel-cage induction generators and doubly-fed induction generators are proposed to represent their collective behavior in large power system simulations, instead of using a complete model of the wind farm in which all the wind turbines are modeled. The models proposed here are based on aggregating wind turbines into an equivalent wind turbine which receives an equivalent wind derived from the winds incident on the aggregated wind turbines. The equivalent wind turbine has a re-scaled power capacity and the same complete model as the individual wind turbines, which constitutes the main feature of the proposed equivalent models. Two equivalent winds are evaluated in this work: (1) the average of the winds incident on aggregated wind turbines with similar winds, and (2) an equivalent incoming wind derived from the power curve and the wind incident on each wind turbine. The effectiveness of the equivalent models in representing the collective response of the wind farm at the point of common coupling to the grid is demonstrated by comparison with the wind farm response obtained from the detailed model during power system dynamic simulations, such as wind fluctuations and a grid disturbance. The proposed models can be used for grid integration studies of large power systems with an important reduction in model order and computation time.
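
    A minimal sketch of the first aggregation strategy described above (equivalent wind = average of the incident winds; equivalent machine = same per-unit model with re-scaled capacity). The turbine data are invented for illustration, and the second strategy, which goes through the power curve, is omitted.

```python
import numpy as np

def aggregate_wind_farm(winds, rated_power_mw, n_turbines=None):
    """
    Collapse a group of turbines with similar incident winds into one equivalent
    turbine: same per-unit model, re-scaled power capacity, driven by the average
    ("equivalent") wind of the group.
    """
    winds = np.asarray(winds, dtype=float)
    if n_turbines is None:
        n_turbines = winds.size
    v_eq = winds.mean()                        # equivalent incoming wind (strategy 1)
    p_eq_rated = n_turbines * rated_power_mw   # re-scaled capacity of the equivalent machine
    return v_eq, p_eq_rated

# Illustrative group of four turbines with similar winds, 2 MW rated power each.
v_eq, p_eq = aggregate_wind_farm([7.8, 8.1, 8.4, 7.9], rated_power_mw=2.0)
print(f"equivalent wind = {v_eq:.2f} m/s, equivalent rated power = {p_eq:.1f} MW")
```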

  5. Implied and realized volatility in the cross-section of equity options

    DEFF Research Database (Denmark)

    Ammann, Manuel; Skovmand, David; Verhofen, Michael

    2009-01-01

    Using a complete sample of US equity options, we analyze patterns of implied volatility in the cross-section of equity options with respect to stock characteristics. We find that high-beta stocks, small stocks, stocks with a low-market-to-book ratio, and non-momentum stocks trade at higher implied...

  6. The Mayer-Joule Principle: The Foundation of the First Law of Thermodynamics

    Science.gov (United States)

    Newburgh, Ronald; Leff, Harvey S.

    2011-01-01

    To most students today the mechanical equivalent of heat, called the Mayer-Joule principle, is simply a way to convert from calories to joules and vice versa. However, in linking work and heat--once thought to be disjointed concepts--it goes far beyond unit conversion. Heat had eluded understanding for two centuries after Galileo Galilei…

  7. A generalized Principle of Relativity

    International Nuclear Information System (INIS)

    Felice, Fernando de; Preti, Giovanni

    2009-01-01

    The Theory of Relativity stands as a firm groundstone on which modern physics is founded. In this paper we bring to light an hitherto undisclosed richness of this theory, namely its admitting a consistent reformulation which is able to provide a unified scenario for all kinds of particles, be they lightlike or not. This result hinges on a generalized Principle of Relativity which is intrinsic to Einstein's theory - a fact which went completely unnoticed before. The road leading to this generalization starts, in the very spirit of Relativity, from enhancing full equivalence between the four spacetime directions by requiring full equivalence between the motions along these four spacetime directions as well. So far, no measurable spatial velocity in the direction of the time axis has ever been defined, on the same footing of the usual velocities - the 'space-velocities' - in the local three-space of a given observer. In this paper, we show how Relativity allows such a 'time-velocity' to be defined in a very natural way, for any particle and in any reference frame. As a consequence of this natural definition, it also follows that the time- and space-velocity vectors sum up to define a spacelike 'world-velocity' vector, the modulus of which - the world-velocity - turns out to be equal to the Maxwell's constant c, irrespective of the observer who measures it. This measurable world-velocity (not to be confused with the space-velocities we are used to deal with) therefore represents the speed at which all kinds of particles move in spacetime, according to any observer. As remarked above, the unifying scenario thus emerging is intrinsic to Einstein's Theory; it extends the role traditionally assigned to Maxwell's constant c, and can therefore justly be referred to as 'a generalized Principle of Relativity'.

  8. Large-scale subduction of continental crust implied by India-Asia mass-balance calculation

    Science.gov (United States)

    Ingalls, Miquela; Rowley, David B.; Currie, Brian; Colman, Albert S.

    2016-11-01

    Continental crust is buoyant compared with its oceanic counterpart and resists subduction into the mantle. When two continents collide, the mass balance for the continental crust is therefore assumed to be maintained. Here we use estimates of pre-collisional crustal thickness and convergence history derived from plate kinematic models to calculate the crustal mass balance in the India-Asia collisional system. Using the current best estimates for the timing of the diachronous onset of collision between India and Eurasia, we find that about 50% of the pre-collisional continental crustal mass cannot be accounted for in the crustal reservoir preserved at Earth's surface today--represented by the mass preserved in the thickened crust that makes up the Himalaya, Tibet and much of adjacent Asia, as well as southeast Asian tectonic escape and exported eroded sediments. This implies large-scale subduction of continental crust during the collision, with a mass equivalent to about 15% of the total oceanic crustal subduction flux since 56 million years ago. We suggest that similar contamination of the mantle by direct input of radiogenic continental crustal materials during past continent-continent collisions is reflected in some ocean crust and ocean island basalt geochemistry. The subduction of continental crust may therefore contribute significantly to the evolution of mantle geochemistry.

  9. Dextran derivatives modulate collagen matrix organization in dermal equivalent.

    Science.gov (United States)

    Frank, Laetitia; Lebreton-Decoster, Corinne; Godeau, Gaston; Coulomb, Bernard; Jozefonvicz, Jacqueline

    2006-01-01

    Dextran derivatives can protect heparin-binding growth factors involved in wound healing, such as transforming growth factor-beta1 (TGF-beta1) and fibroblast growth factor-2 (FGF-2). The first aim of this study was to investigate the effect of these compounds on human dermal fibroblasts in culture with or without TGF-beta1. Several dextran derivatives obtained by substitution of methylcarboxylate (MC), benzylamide (B) and sulphate (Su) groups were used to determine the effects of each compound on fibroblast growth in vitro. The data indicate that sulphate groups are essential for acting on fibroblast proliferation. The dextran derivative LS21 DMCBSu was chosen to investigate its effect on the dermal wound healing process. Fibroblasts cultured in collagenous matrices, termed dermal equivalents, were treated with the bioactive polymer alone or in association with TGF-beta1 or FGF-2. Cross-sections of the dermal equivalents, observed by histology or immunohistochemistry, demonstrated that the bioactive polymer accelerates the organization of the collagen matrices and stimulates the expression of human type-III collagen. This bioactive polymer induces apoptosis of myofibroblasts, a property which may be beneficial in the treatment of hypertrophic scars. Culture media analyzed by zymography and Western blot showed that this polymer significantly increases the secretion of the zymogen and active forms of matrix metalloproteinase-2 (MMP-2), which is involved in granulation tissue formation. These data suggest that this bioactive polymer has properties which may be beneficial in promoting wound healing.

  10. Long memory persistence in the factor of Implied volatility dynamics

    OpenAIRE

    Härdle, Wolfgang Karl; Mungo, Julius

    2007-01-01

    The volatility implied by observed market prices, as a function of strike and time to maturity, forms an Implied Volatility Surface (IVS). Practical applications require reducing the dimension and characterizing its dynamics through a small number of factors. Such dimension reduction is summarized by a Dynamic Semiparametric Factor Model (DSFM) that characterizes the IVS itself and its movements across time by a multivariate time series of factor loadings. This paper focuses on investigati...

  11. The monetary value of the collective dose equivalent unit (person-rem)

    International Nuclear Information System (INIS)

    Rodgers, Reginald C.

    1978-01-01

    In the design and operation of nuclear power reactor facilities, it is recommended that radiation exposures to the workers and the general public be kept as 'low as reasonably achievable' (ALARA). In the process of implementing this principle cost-benefit evaluations are part of the decision making process. For this reason a monetary value has to be assigned to the collective dose equivalent unit (person-rem). The various factors such as medical health care, societal penalty and manpower replacement/saving are essential ingredients to determine a monetary value for the person-rem. These factors and their dependence on the level of risk (or exposure level) are evaluated. Monetary values of well under $100 are determined for the public dose equivalent unit. The occupational worker person-rem value is determined to be in the range of $500 to about $5000 depending on the exposure level and the type of worker and his affiliation, i.e., temporary or permanent. A discussion of the variability and the range of the monetary values will be presented. (author)

  12. Taxation of Outbound Direct Investment: Economic Principles and Tax Policy Considerations

    OpenAIRE

    Michael P Devereux

    2008-01-01

    This paper reviews economic principles for optimality of the taxation of international profit, from both a global and national perspective. It argues that for traditional systems based on the residence of the investor or the source of the income, nothing less than full harmonization across countries can achieve global optimality. The conditions for national optimality are more difficult to identify, but are most likely to imply source-based taxation. However, source-based taxation requires an...

  13. Cosmological principles. II. Physical principles

    International Nuclear Information System (INIS)

    Harrison, E.R.

    1974-01-01

    The discussion of cosmological principle covers the uniformity principle of the laws of physics, the gravitation and cognizability principles, and the Dirac creation, chaos, and bootstrap principles. (U.S.)

  14. Correspondences. Equivalence relations

    International Nuclear Information System (INIS)

    Bouligand, G.M.

    1978-03-01

    We comment on sections paragraph 3 'Correspondences' and paragraph 6 'Equivalence Relations' in chapter II of 'Elements de mathematique' by N. Bourbaki in order to simplify their comprehension. Paragraph 3 exposes the ideas of a graph, correspondence and map or of function, and their composition laws. We draw attention to the following points: 1) Adopting the convention of writting from left to right, the composition law for two correspondences (A,F,B), (U,G,V) of graphs F, G is written in full generality (A,F,B)o(U,G,V) = (A,FoG,V). It is not therefore assumed that the co-domain B of the first correspondence is identical to the domain U of the second (EII.13 D.7), (1970). 2) The axiom of choice consists of creating the Hilbert terms from the only relations admitting a graph. 3) The statement of the existence theorem of a function h such that f = goh, where f and g are two given maps having the same domain (of definition), is completed if h is more precisely an injection. Paragraph 6 considers the generalisation of equality: First, by 'the equivalence relation associated with a map f of a set E identical to (x is a member of the set E and y is a member of the set E and x:f = y:f). Consequently, every relation R(x,y) which is equivalent to this is an equivalence relation in E (symmetrical, transitive, reflexive); then R admits a graph included in E x E, etc. Secondly, by means of the Hilbert term of a relation R submitted to the equivalence. In this last case, if R(x,y) is separately collectivizing in x and y, theta(x) is not the class of objects equivalent to x for R (EII.47.9), (1970). The interest of bringing together these two subjects, apart from this logical order, resides also in the fact that the theorem mentioned in 3) can be expressed by means of the equivalence relations associated with the functions f and g. The solutions of the examples proposed reveal their simplicity [fr

  15. The cosmological principle is not in the sky

    Science.gov (United States)

    Park, Chan-Gyung; Hyun, Hwasu; Noh, Hyerim; Hwang, Jai-chan

    2017-08-01

    The homogeneity of matter distribution at large scales, known as the cosmological principle, is a central assumption in the standard cosmological model. The case is testable though, thus no longer needs to be a principle. Here we perform a test for spatial homogeneity using the Sloan Digital Sky Survey Luminous Red Galaxies (LRG) sample by counting galaxies within a specified volume with the radius scale varying up to 300 h⁻¹ Mpc. We directly confront the large-scale structure data with the definition of spatial homogeneity by comparing the averages and dispersions of galaxy number counts with allowed ranges of the random distribution with homogeneity. The LRG sample shows significantly larger dispersions of number counts than the random catalogues up to 300 h⁻¹ Mpc scale, and even the average is located far outside the range allowed in the random distribution; the deviations are statistically impossible to be realized in the random distribution. This implies that the cosmological principle does not hold even at such large scales. The same analysis of mock galaxies derived from the N-body simulation, however, suggests that the LRG sample is consistent with the current paradigm of cosmology, thus the simulation is also not homogeneous in that scale. We conclude that the cosmological principle is neither in the observed sky nor demanded to be there by the standard cosmological world model. This reveals the nature of the cosmological principle adopted in the modern cosmology paradigm, and opens a new field of research in theoretical cosmology.
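
    The statistic at the heart of this test, counts-in-spheres compared against a homogeneous random catalogue, is easy to sketch. The toy example below draws a uniform mock "galaxy" field in a box and checks that the variance of the counts is close to the mean, as expected for a homogeneous (Poisson-like) distribution; the paper's analysis of the LRG sample and N-body mocks is of course far more careful (survey mask, weights, many random catalogues).

```python
import numpy as np

def counts_in_spheres(points, centres, radius):
    """Number of points within `radius` of each test centre (brute force)."""
    counts = np.empty(len(centres), dtype=int)
    for i, c in enumerate(centres):
        counts[i] = np.count_nonzero(((points - c) ** 2).sum(axis=1) < radius ** 2)
    return counts

rng = np.random.default_rng(42)
box = 1000.0                                         # toy box size, in h^-1 Mpc
galaxies = rng.uniform(0.0, box, size=(20000, 3))    # homogeneous mock catalogue
centres = rng.uniform(300.0, 700.0, size=(200, 3))   # keep spheres away from the edges
counts = counts_in_spheres(galaxies, centres, radius=150.0)

# For a homogeneous field the dispersion of counts should be close to the Poisson
# expectation (variance ~ mean); a strong excess signals inhomogeneity at this scale.
print(f"mean = {counts.mean():.1f}, variance = {counts.var():.1f}")
```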

  16. The twin paradox and the principle of relativity

    International Nuclear Information System (INIS)

    Grøn, Øyvind

    2013-01-01

    The twin paradox is intimately related to the principle of relativity. Two twins A and B meet, travel away from each other and meet again. From the point of view of A, B is the traveller. Thus, A predicts B to be younger than A herself, and vice versa. Both cannot be correct. The special relativistic solution is to say that if one of the twins, say A, was inertial during the separation, she will be the older one. Since the principle of relativity is not valid for accelerated motion according to the special theory of relativity B cannot consider herself as at rest permanently because she must accelerate in order to return to her sister. A general relativistic solution is to say that due to the principle of equivalence B can consider herself as at rest, but she must invoke the gravitational change of time in order to predict correctly the age of A during their separation. However one may argue that the fact that B is younger than A shows that B was accelerated, not A, and hence the principle of relativity is not valid for accelerated motion in the general theory of relativity either. I here argue that perfect inertial dragging may save the principle of relativity, and that this requires a new model of the Minkowski spacetime where the cosmic mass is represented by a massive shell with radius equal to its own Schwarzschild radius. (paper)
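
    For concreteness, the special-relativistic bookkeeping behind "she will be the older one" is the standard proper-time comparison for an idealized out-and-back trip at constant speed v with an instantaneous turnaround (a textbook illustration, not taken from the paper):

```latex
\tau_B \;=\; \int \sqrt{1-\frac{v^{2}}{c^{2}}}\;dt
       \;=\; \tau_A\,\sqrt{1-\frac{v^{2}}{c^{2}}} \;<\; \tau_A ,
```

    so the inertial twin A ages more; the article's point is how B, treating herself as at rest, must invoke either her own non-inertiality or gravitational time dilation (and, possibly, perfect inertial dragging) to reproduce the same answer.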

  17. The Principle of Will Autonomy in the Obligatory Law

    Directory of Open Access Journals (Sweden)

    MA. Shyhrete Kastrati

    2015-06-01

    Full Text Available The principle of autonomy of will is laid down in Article 2 of the Law no. 04/L–077 on Obligational Relationships, thereby providing the legal grounds for the regulation of legal relations between parties in an obligational relationship. This study aims to contribute to theory and practice, and to make a modest contribution to the obligational law doctrine in Kosovo. The purpose of the paper is to explore the gaps and weaknesses in the practical implementation of the principle, which represents the main pillar of obligational law. In this paper, combined methods were used, including research and descriptive methods, analysis and synthesis, and comparative and normative methods. The research method was used throughout the paper, and entails the collection of hard-copy and electronic materials. The descriptive method implies a description of concepts and important ideas of legal science, in this case on the principle of autonomy of will, using the literature of various authors. The analytical and synthetic methods are aimed at achieving the study objectives: an understanding of the principle of autonomy of will, its practical implementation, and the conclusions drawn. The comparative method was applied in comparing the implementation of the principle in the Law on Obligational Relationships of Kosovo, the Law on Obligational Relationships of the former Socialist Federal Republic of Kosovo, and the Civil Code of the Republic of Albania. The normative method was necessary, since the topic of the study concerns legal norms.

  18. Calculation methods for determining dose equivalent

    International Nuclear Information System (INIS)

    Endres, G.W.R.; Tanner, J.E.; Scherpelz, R.I.; Hadlock, D.E.

    1987-11-01

    A series of calculations of neutron fluence as a function of energy in an anthropomorphic phantom was performed to develop a system for determining effective dose equivalent for external radiation sources. Critical organ dose equivalents are calculated and effective dose equivalents are determined using ICRP-26 [1] methods. Quality factors based on both present definitions and ICRP-40 definitions are used in the analysis. The results of these calculations are presented and discussed. The effective dose equivalent determined using ICRP-26 methods is significantly smaller than the dose equivalent determined by traditional methods. No existing personnel dosimeter or health physics instrument can determine effective dose equivalent. At the present time, the conversion of dosimeter response to dose equivalent is based on calculations for maximal or ''cap'' values using homogeneous spherical or cylindrical phantoms. The evaluated dose equivalent is, therefore, a poor approximation of the effective dose equivalent as defined by ICRP Publication 26. 3 refs., 2 figs., 1 tab
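
    The ICRP-26 step referred to above is a weighted sum of organ dose equivalents, H_E = Σ_T w_T H_T. The sketch below uses the ICRP-26 tissue weighting factors as commonly tabulated (they should be checked against the publication itself) and invented organ dose values purely for illustration.

```python
# ICRP-26 tissue weighting factors as commonly tabulated (verify against ICRP 26);
# "remainder" is shared among the five most highly exposed remaining organs.
ICRP26_WEIGHTS = {
    "gonads": 0.25,
    "breast": 0.15,
    "red_bone_marrow": 0.12,
    "lung": 0.12,
    "thyroid": 0.03,
    "bone_surfaces": 0.03,
    "remainder": 0.30,
}

def effective_dose_equivalent(organ_dose_equivalents_sv):
    """H_E = sum over tissues T of w_T * H_T, in sievert."""
    return sum(ICRP26_WEIGHTS[t] * h for t, h in organ_dose_equivalents_sv.items())

# Invented organ dose equivalents (Sv) for a purely illustrative external exposure.
example = {
    "gonads": 1.0e-3,
    "breast": 0.8e-3,
    "red_bone_marrow": 0.9e-3,
    "lung": 0.9e-3,
    "thyroid": 1.1e-3,
    "bone_surfaces": 0.9e-3,
    "remainder": 0.9e-3,
}
print(f"H_E = {effective_dose_equivalent(example):.2e} Sv")
```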

  19. Effective dose equivalent

    International Nuclear Information System (INIS)

    Huyskens, C.J.; Passchier, W.F.

    1988-01-01

    The effective dose equivalent is a quantity which is used in the daily practice of radiation protection, as well as in radiation protection regulations, as a measure of health risk. This contribution sets out the assumptions on which this quantity is based and the cases in which the effective dose equivalent is a more or less suitable measure. (H.W.)

  20. Equivalence relations of AF-algebra extensions

    Indian Academy of Sciences (India)

    In this paper, we consider equivalence relations of *-algebra extensions and describe the relationship between the isomorphism equivalence and the unitary equivalence. We also show that a certain group homomorphism is the obstruction for these equivalence relations to be the same.

  1. Implied reading direction and prioritization of letter encoding.

    Science.gov (United States)

    Holcombe, Alex O; Nguyen, Elizabeth H L; Goodbourn, Patrick T

    2017-10-01

    Capacity limits hinder processing of multiple stimuli, contributing to poorer performance for identifying two briefly presented letters than for identifying a single letter. Higher accuracy is typically found for identifying the letter on the left, which has been attributed to a right-hemisphere dominance for selective attention. Here, we use rapid serial visual presentation (RSVP) of letters in two locations at once. The letters to be identified are simultaneous and cued by rings. In the first experiment, we manipulated implied reading direction by rotating or mirror-reversing the letters to face to the left rather than to the right. The left-side performance advantage was eliminated. In the second experiment, letters were positioned above and below fixation, oriented such that they appeared to face downward (90° clockwise rotation) or upward (90° counterclockwise rotation). Again consistent with an effect of implied reading direction, performance was better for the top position in the downward condition, but not in the upward condition. In both experiments, mixture modeling of participants' report errors revealed that attentional sampling from the two locations was approximately simultaneous, ruling out the theory that the letter on one side was processed first, followed by a shift of attention to sample the other letter. Thus, the orientation of the letters apparently controls not when the letters are sampled from the scene, but rather the dynamics of a subsequent process, such as tokenization or memory consolidation. Implied reading direction appears to determine the letter prioritized at a high-level processing bottleneck. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  2. Equivalent physical models and formulation of equivalent source layer in high-resolution EEG imaging

    International Nuclear Information System (INIS)

    Yao Dezhong; He Bin

    2003-01-01

    In high-resolution EEG imaging, both equivalent dipole layer (EDL) and equivalent charge layer (ECL) assumed to be located just above the cortical surface have been proposed as high-resolution imaging modalities or as intermediate steps to estimate the epicortical potential. Presented here are the equivalent physical models of these two equivalent source layers (ESL) which show that the strength of EDL is proportional to the surface potential of the layer when the outside of the layer is filled with an insulator, and that the strength of ECL is the normal current of the layer when the outside is filled with a perfect conductor. Based on these equivalent physical models, closed solutions of ECL and EDL corresponding to a dipole enclosed by a spherical layer are given. These results provide the theoretical basis of ESL applications in high-resolution EEG mapping

  3. Equivalent physical models and formulation of equivalent source layer in high-resolution EEG imaging

    Energy Technology Data Exchange (ETDEWEB)

    Yao Dezhong [School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu City, 610054, Sichuan Province (China); He Bin [The University of Illinois at Chicago, IL (United States)

    2003-11-07

    In high-resolution EEG imaging, both equivalent dipole layer (EDL) and equivalent charge layer (ECL) assumed to be located just above the cortical surface have been proposed as high-resolution imaging modalities or as intermediate steps to estimate the epicortical potential. Presented here are the equivalent physical models of these two equivalent source layers (ESL) which show that the strength of EDL is proportional to the surface potential of the layer when the outside of the layer is filled with an insulator, and that the strength of ECL is the normal current of the layer when the outside is filled with a perfect conductor. Based on these equivalent physical models, closed solutions of ECL and EDL corresponding to a dipole enclosed by a spherical layer are given. These results provide the theoretical basis of ESL applications in high-resolution EEG mapping.

  4. The transnational ne bis in idem principle in the EU. Mutual recognition and equivalent protection of human rights

    Directory of Open Access Journals (Sweden)

    John A.E. Vervaele

    2005-12-01

    Full Text Available The deepening and widening of European integration has led to an increase in transborder crime. Concurrent prosecution and sanctioning by several Member States is not only a problem in inter-state relations and an obstacle in the European integration process, but also a violation of the ne bis in idem principle, defined as a transnational human right in a common judicial area. This article analyzes whether and to what extent the ECHR has contributed and may continue to contribute to the development of such a common ne bis in idem standard in Europe. It is also examined whether the application of the ne bis in idem principle in classic inter-state judicial cooperation in criminal matters in the framework of the Council of Europe may make such a contribution as well. The transnational function of the ne bis in idem principle is discussed in the light of the Court of Justice’s case law on ne bis in idem in the framework of the area of Freedom, Security and Justice. Finally the inherent tension between mutual recognition and the protection of human rights in transnational justice is analyzed by looking at the insertion of the ne bis in idem principle in the Framework Decision on the European arrest warrant.

  5. An Invariance Principle to Ferrari-Spohn Diffusions

    Science.gov (United States)

    Ioffe, Dmitry; Shlosman, Senya; Velenik, Yvan

    2015-06-01

    We prove an invariance principle for a class of tilted 1 + 1-dimensional SOS models or, equivalently, for a class of tilted random walk bridges in . The limiting objects are stationary reversible ergodic diffusions with drifts given by the logarithmic derivatives of the ground states of associated singular Sturm-Liouville operators. In the case of a linear area tilt, we recover the Ferrari-Spohn diffusion with log-Airy drift, which was derived in Ferrari and Spohn (Ann Probab 33(4):1302—1325, 2005) in the context of Brownian motions conditioned to stay above circular and parabolic barriers.

  6. Correlation Structures of Correlated Binomial Models and Implied Default Distribution

    Science.gov (United States)

    Mori, Shintaro; Kitsukawa, Kenji; Hisakado, Masato

    2008-11-01

    We show how to analyze and interpret the correlation structures, the conditional expectation values and correlation coefficients of exchangeable Bernoulli random variables. We study implied default distributions for the iTraxx-CJ tranches and some popular probabilistic models, including the Gaussian copula model, Beta binomial distribution model and long-range Ising model. We interpret the differences in their profiles in terms of the correlation structures. The implied default distribution has singular correlation structures, reflecting the credit market implications. We point out two possible origins of the singular behavior.
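
    Of the models compared above, the beta-binomial is the easiest to write down: exchangeable obligors whose common default probability is mixed over a Beta distribution, which gives a one-parameter handle on pairwise default correlation. The sketch below is a generic illustration with invented parameters, not the iTraxx-CJ calibration from the paper.

```python
import numpy as np
from scipy.special import comb, betaln

def beta_binomial_pmf(n, p, rho):
    """
    Loss distribution of n exchangeable obligors under the beta-binomial model.
    p   : marginal default probability
    rho : pairwise default correlation, equal to 1 / (alpha + beta + 1)
    """
    s = 1.0 / rho - 1.0              # alpha + beta
    a, b = p * s, (1.0 - p) * s
    k = np.arange(n + 1)
    logpmf = np.log(comb(n, k)) + betaln(k + a, n - k + b) - betaln(a, b)
    return np.exp(logpmf)

# Invented portfolio: 50 names, 2% default probability, 10% default correlation.
pmf = beta_binomial_pmf(n=50, p=0.02, rho=0.10)
print(pmf[:5])   # probabilities of 0..4 defaults
```

    Here ρ = 1/(α+β+1) is the pairwise correlation of the exchangeable Bernoulli indicators; ρ → 0 recovers the independent binomial, while larger ρ fattens the tail of the loss distribution.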

  7. The action principle for a system of differential equations

    International Nuclear Information System (INIS)

    Gitman, D M; Kupriyanov, V G

    2007-01-01

    We consider the problem of constructing an action functional for physical systems whose classical equations of motion cannot be directly identified with Euler-Lagrange equations for an action principle. Two ways of constructing the action principle are presented. From simple consideration, we derive the necessary and sufficient conditions for the existence of a multiplier matrix which can endow a prescribed set of second-order differential equations with the structure of the Euler-Lagrange equations. An explicit form of the action is constructed if such a multiplier exists. If a given set of differential equations cannot be derived from an action principle, one can reformulate such a set in an equivalent first-order form which can always be treated as the Euler-Lagrange equations of a certain action. We construct such an action explicitly. There exists an ambiguity (not reduced to a total time derivative) in associating a Lagrange function with a given set of equations. We present a complete description of this ambiguity. The general procedure is illustrated by several examples
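
    The first-order reformulation mentioned at the end of the abstract can be illustrated with the standard auxiliary-variable construction (a well-known device, written here in generic notation rather than the authors'): given equations ẋⁱ = fⁱ(x), take

```latex
S[x,\lambda] \;=\; \int dt\;\lambda_i\,\bigl(\dot{x}^{i} - f^{i}(x)\bigr),
\qquad
\frac{\delta S}{\delta \lambda_i}=0 \;\Rightarrow\; \dot{x}^{i}=f^{i}(x),
\qquad
\frac{\delta S}{\delta x^{i}}=0 \;\Rightarrow\; \dot{\lambda}_i=-\lambda_j\,\partial_i f^{j}(x),
```

    so any first-order system can be recovered from an action at the price of adjoining auxiliary variables λ_i, whose own evolution equations come along for free.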

  8. The action principle for a system of differential equations

    Energy Technology Data Exchange (ETDEWEB)

    Gitman, D M [Instituto de FIsica, Universidade de Sao Paulo (Brazil); Kupriyanov, V G [Instituto de FIsica, Universidade de Sao Paulo (Brazil)

    2007-08-17

    We consider the problem of constructing an action functional for physical systems whose classical equations of motion cannot be directly identified with Euler-Lagrange equations for an action principle. Two ways of constructing the action principle are presented. From simple consideration, we derive the necessary and sufficient conditions for the existence of a multiplier matrix which can endow a prescribed set of second-order differential equations with the structure of the Euler-Lagrange equations. An explicit form of the action is constructed if such a multiplier exists. If a given set of differential equations cannot be derived from an action principle, one can reformulate such a set in an equivalent first-order form which can always be treated as the Euler-Lagrange equations of a certain action. We construct such an action explicitly. There exists an ambiguity (not reduced to a total time derivative) in associating a Lagrange function with a given set of equations. We present a complete description of this ambiguity. The general procedure is illustrated by several examples.

  9. Asset allocation using option-implied moments

    Science.gov (United States)

    Bahaludin, H.; Abdullah, M. H.; Tolos, S. M.

    2017-09-01

    This study uses an option-implied distribution as the input in asset allocation. The computation of risk-neutral densities (RND) are based on the Dow Jones Industrial Average (DJIA) index option and its constituents. Since the RNDs estimation does not incorporate risk premium, the conversion of RND into risk-world density (RWD) is required. The RWD is obtained through parametric calibration using the beta distributions. The mean, volatility, and covariance are then calculated to construct the portfolio. The performance of the portfolio is evaluated by using portfolio volatility and Sharpe ratio.
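
    Once the implied mean vector and covariance matrix have been extracted, the portfolio-construction step is ordinary mean-variance optimization. The sketch below is a generic, unconstrained version with made-up moment inputs; it is not the paper's calibration on the DJIA constituents and it omits the RND-to-RWD conversion entirely.

```python
import numpy as np

def mean_variance_weights(mu, cov):
    """Tangency-style weights w proportional to inv(cov) @ mu, normalized to sum to one."""
    w = np.linalg.solve(cov, mu)
    return w / w.sum()

def portfolio_stats(w, mu, cov, rf=0.0):
    """Expected return, volatility and Sharpe ratio of a weight vector."""
    ret = float(w @ mu)
    vol = float(np.sqrt(w @ cov @ w))
    return ret, vol, (ret - rf) / vol

# Illustrative option-implied moments (made up, not the paper's estimates).
mu = np.array([0.06, 0.04, 0.05])
cov = np.array([[0.040, 0.010, 0.000],
                [0.010, 0.030, 0.010],
                [0.000, 0.010, 0.050]])
w = mean_variance_weights(mu, cov)
print("weights:", np.round(w, 3))
print("return, vol, Sharpe:", portfolio_stats(w, mu, cov))
```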

  10. Testing statistical hypotheses of equivalence

    CERN Document Server

    Wellek, Stefan

    2010-01-01

    Equivalence testing has grown significantly in importance over the last two decades, especially as its relevance to a variety of applications has become understood. Yet published work on the general methodology remains scattered in specialists' journals, and for the most part, it focuses on the relatively narrow topic of bioequivalence assessment. With a far broader perspective, Testing Statistical Hypotheses of Equivalence provides the first comprehensive treatment of statistical equivalence testing. The author addresses a spectrum of specific, two-sided equivalence testing problems, from the
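
    The workhorse procedure in this literature is the two one-sided tests (TOST) scheme: declare two means equivalent when their difference is shown to lie strictly inside a pre-specified interval (low, high). A minimal two-sample sketch, with a crude pooled-degrees-of-freedom approximation and simulated data, is given below; the book covers far more general settings.

```python
import numpy as np
from scipy import stats

def tost_two_sample(x, y, low, high, alpha=0.05):
    """
    Two one-sided tests (TOST) for equivalence of two means: equivalence is declared
    if the difference (mean_x - mean_y) is significantly above `low` AND below `high`.
    """
    nx, ny = len(x), len(y)
    diff = np.mean(x) - np.mean(y)
    se = np.sqrt(np.var(x, ddof=1) / nx + np.var(y, ddof=1) / ny)
    df = nx + ny - 2                          # simple pooled-df approximation
    t_low = (diff - low) / se
    t_high = (diff - high) / se
    p_low = 1.0 - stats.t.cdf(t_low, df)      # H0: diff <= low
    p_high = stats.t.cdf(t_high, df)          # H0: diff >= high
    p_value = max(p_low, p_high)
    return p_value, p_value < alpha

rng = np.random.default_rng(0)
x, y = rng.normal(0.0, 1.0, 80), rng.normal(0.1, 1.0, 80)
print(tost_two_sample(x, y, low=-0.5, high=0.5))
```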

  11. PRINCIPLE ON THE LAND REGISTER IN THE INTERPRETATION OF JURISPRUDENCE

    Directory of Open Access Journals (Sweden)

    Hamid Mutapčić

    2016-04-01

    Full Text Available For a long period of time, land registers in Bosnia and Herzegovina have not reflected the actual situation regarding property rights. The reasons should be sought in poor-quality and inconsistent legislation that allowed the non-registered acquisition of real property rights. On the basis of such legislation, earlier Yugoslav jurisprudence consistently denied the acquisition of property rights based on the principle of trust in the land registry. A new definition of the principle of trust, which implies the protection of rights acquired on the basis of an incorrect or incomplete land registry status, was introduced with the entry into force of the new entity laws on land registers. The main intention of the legislature is the reaffirmation of the land registry and its basic principles, which is a precondition for faster and easier real estate transactions. However, the new laws contain solutions that prevent the full application of the principle of trust, which results in the adoption of divergent and unequal judicial decisions. The paper presents an analysis of such legal solutions, identifies the defects that give rise to divergent legal interpretations, and lists proposals de lege ferenda aimed at creating the legal conditions for uniform jurisprudence.

  12. Modelling of Airship Flight Mechanics by the Projection Equivalent Method

    Directory of Open Access Journals (Sweden)

    Frantisek Jelenciak

    2015-12-01

    Full Text Available This article describes the projection equivalent method (PEM) as a specific and relatively simple approach to the modelling of aircraft dynamics. With the PEM it is possible to obtain a mathematical model of the aerodynamic forces and moments acting on different kinds of aircraft during flight. A characteristic of the PEM is that, in principle, it provides an acceptable regression model of the aerodynamic forces and moments which exhibits reasonable and plausible behaviour from a dynamics viewpoint. The principle of the method is based on applying Newtonian mechanics, combined with a specific form of the finite element method to cover additional effects. The main advantage of the PEM is that it is not necessary to carry out measurements in a wind tunnel to identify the model's parameters. The plausible dynamical behaviour of the model can be achieved by specific correction parameters, which can be determined on the basis of experimental data obtained during the flight of the aircraft. In this article, we present the PEM as applied to an airship as well as a comparison of the data calculated by the PEM with experimental flight data.

  13. Information Leakage from Logically Equivalent Frames

    Science.gov (United States)

    Sher, Shlomi; McKenzie, Craig R. M.

    2006-01-01

    Framing effects are said to occur when equivalent frames lead to different choices. However, the equivalence in question has been incompletely conceptualized. In a new normative analysis of framing effects, we complete the conceptualization by introducing the notion of information equivalence. Information equivalence obtains when no…

  14. 21 CFR 26.9 - Equivalence determination.

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 1 2010-04-01 2010-04-01 false Equivalence determination. 26.9 Section 26.9 Food... Specific Sector Provisions for Pharmaceutical Good Manufacturing Practices § 26.9 Equivalence determination... document insufficient evidence of equivalence, lack of opportunity to assess equivalence or a determination...

  15. Modeling and Forecasting the Implied Volatility of the WIG20 Index

    OpenAIRE

    Buszkowska-Khemissi, Eliza; Płuciennik, Piotr

    2007-01-01

    The implied volatility is one of the most important notions in the financial market. It reflects the volatility forecast by the participants of the market. In this paper we calculate the daily implied volatility from options on the WIG20 index. First we test the long-memory property of the time series obtained in this way, and then we model and forecast it as an ARFIMA process.
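
    As background to the calculation step mentioned above, a Black-Scholes implied volatility is commonly backed out of an observed option price by a one-dimensional root search; the sketch below is generic, and the spot, strike, rate and price are made-up numbers rather than WIG20 data.

        import numpy as np
        from scipy.optimize import brentq
        from scipy.stats import norm

        def bs_call(S, K, T, r, sigma):
            """Black-Scholes price of a European call."""
            d1 = (np.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * np.sqrt(T))
            d2 = d1 - sigma * np.sqrt(T)
            return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

        def implied_vol(price, S, K, T, r):
            """Implied volatility via a bracketed root search on the pricing error."""
            return brentq(lambda s: bs_call(S, K, T, r, s) - price, 1e-6, 5.0)

        # Illustrative numbers only.
        print(implied_vol(price=120.0, S=2400.0, K=2400.0, T=0.25, r=0.04))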

  16. Editorial: New operational dose equivalent quantities

    International Nuclear Information System (INIS)

    Harvey, J.R.

    1985-01-01

    The ICRU Report 39 entitled ''Determination of Dose Equivalents Resulting from External Radiation Sources'' is briefly discussed. Four new operational dose equivalent quantities have been recommended in ICRU 39. The 'ambient dose equivalent' and the 'directional dose equivalent' are applicable to environmental monitoring and the 'individual dose equivalent, penetrating' and the 'individual dose equivalent, superficial' are applicable to individual monitoring. The quantities should meet the needs of day-to-day operational practice, while being acceptable to those concerned with metrological precision, and at the same time be used to give effective control consistent with current perceptions of the risks associated with exposure to ionizing radiations. (U.K.)

  17. Implied Volatility Surface: Construction Methodologies and Characteristics

    OpenAIRE

    Cristian Homescu

    2011-01-01

    The implied volatility surface (IVS) is a fundamental building block in computational finance. We provide a survey of methodologies for constructing such surfaces. We also discuss various topics which can influence the successful construction of IVS in practice: arbitrage-free conditions in both strike and time, how to perform extrapolation outside the core region, choice of calibrating functional and selection of numerical optimization algorithms, volatility surface dynamics and asymptotics.
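
    One of the arbitrage-free conditions in strike mentioned in the survey (absence of butterfly arbitrage) amounts to call prices being non-increasing and convex in strike. A minimal numerical check of that condition on a hypothetical price slice, with invented numbers, might look like this:

        import numpy as np

        def butterfly_arbitrage_free(strikes, call_prices):
            """Check that a call-price slice is non-increasing and convex in strike."""
            K = np.asarray(strikes, dtype=float)
            C = np.asarray(call_prices, dtype=float)
            slopes = np.diff(C) / np.diff(K)
            decreasing = np.all(slopes <= 1e-12)          # prices fall with strike
            convex = np.all(np.diff(slopes) >= -1e-12)    # slopes are non-decreasing
            return decreasing and convex

        # Illustrative slice (not real market data).
        K = [80, 90, 100, 110, 120]
        C = [21.0, 13.0, 7.0, 3.5, 1.8]
        print(butterfly_arbitrage_free(K, C))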

  18. Mixed field dose equivalent measuring instruments

    International Nuclear Information System (INIS)

    Brackenbush, L.W.; McDonald, J.C.; Endres, G.W.R.; Quam, W.

    1985-01-01

    In the past, separate instruments have been used to monitor dose equivalent from neutrons and gamma rays. It has been demonstrated that it is now possible to measure neutron and gamma dose simultaneously with a single instrument, the tissue equivalent proportional counter (TEPC). With appropriate algorithms, dose equivalent can also be determined from the TEPC. A simple "pocket rem meter" for measuring neutron dose equivalent has already been developed. Improved algorithms for determining dose equivalent for mixed fields are presented. (author)

  19. Characterization of revenue equivalence

    NARCIS (Netherlands)

    Heydenreich, B.; Müller, R.; Uetz, Marc Jochen; Vohra, R.

    2009-01-01

    The property of an allocation rule to be implementable in dominant strategies by a unique payment scheme is called revenue equivalence. We give a characterization of revenue equivalence based on a graph theoretic interpretation of the incentive compatibility constraints. The characterization holds

  20. Characterization of Revenue Equivalence

    NARCIS (Netherlands)

    Heydenreich, Birgit; Müller, Rudolf; Uetz, Marc Jochen; Vohra, Rakesh

    2008-01-01

    The property of an allocation rule to be implementable in dominant strategies by a unique payment scheme is called revenue equivalence. In this paper we give a characterization of revenue equivalence based on a graph theoretic interpretation of the incentive compatibility constraints. The

  1. Market-implied risk-neutral probabilities, actual probabilities, credit risk and news

    Directory of Open Access Journals (Sweden)

    Shashidhar Murthy

    2011-09-01

    Full Text Available Motivated by the credit crisis, this paper investigates links between risk-neutral probabilities of default implied by markets (e.g. from yield spreads and their actual counterparts (e.g. from ratings. It discusses differences between the two and clarifies underlying economic intuition using simple representations of credit risk pricing. Observed large differences across bonds in the ratio of the two probabilities are shown to imply that apparently safer securities can be more sensitive to news.
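
    A back-of-the-envelope version of the intuition described above, with purely illustrative numbers: for a recovery rate R, a credit spread s of roughly q(1-R) per year implies a market-implied risk-neutral default probability of

        \[
        q \;\approx\; \frac{s}{1-R} \;=\; \frac{0.02}{1-0.4} \;\approx\; 3.3\%\ \text{per year},
        \qquad
        \frac{q}{p} \;\approx\; \frac{3.3\%}{0.5\%} \;\approx\; 6.7,
        \]

    where p ≈ 0.5% per year stands in for a rating-implied actual default probability; it is the large variation of such ratios across bonds that the paper links to differing sensitivity to news.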

  2. Dynamical response of the Galileo Galilei on the ground rotor to test the equivalence principle: Theory, simulation, and experiment. I. The normal modes

    International Nuclear Information System (INIS)

    Comandi, G.L.; Chiofalo, M.L.; Toncelli, R.; Bramanti, D.; Polacco, E.; Nobili, A.M.

    2006-01-01

    Recent theoretical work suggests that violation of the equivalence principle might be revealed in a measurement of the fractional differential acceleration η between two test bodies of different compositions, falling in the gravitational field of a source mass, if the measurement is made to the level of η ≅ 10^-13 or better. This being within the reach of ground-based experiments gives them a new impetus. However, while slowly rotating torsion balances in ground laboratories are close to reaching this level, only an experiment performed in a low orbit around the Earth is likely to provide a much better accuracy. We report on the progress made with the 'Galileo Galilei on the ground' (GGG) experiment, which aims to compete with torsion balances using an instrument design also capable of being converted into a much higher sensitivity space test. In the present and following articles (Part I and Part II), we demonstrate that the dynamical response of the GGG differential accelerometer set into supercritical rotation, in particular its normal modes (Part I) and rejection of common mode effects (Part II), can be predicted by means of a simple but effective model that embodies all the relevant physics. Analytical solutions are obtained under special limits, which provide the theoretical understanding. A simulation environment is set up, obtaining a quantitative agreement with the available experimental data on the frequencies of the normal modes and on the whirling behavior. This is a needed and reliable tool for controlling and separating perturbative effects from the expected signal, as well as for planning the optimization of the apparatus.

  3. Verification of the weak equivalence principle of inertial and gravitational mass using SQUIDs; Ueberpruefung des schwachen Aequivalenzprinzips von Traegern und schwerer Masse mittels Squids

    Energy Technology Data Exchange (ETDEWEB)

    Vodel, W.; Nietzsche, S.; Neubert, R. [Friedrich-Schiller-Universitaet Jena (Germany). Inst. fuer Festkoerperphysik; Dittus, H. [Univ. Bremen (Germany). Zentrum fuer angewandte Raumfahrttechnologie und Mikrogravitation

    2003-07-01

    The weak equivalence principle is one of the fundamental hypotheses of general relativity and one of the key elements of our physical picture of the world, but since Galileo there has been no satisfactory way of verifying it. The new SQUID technology may offer a solution. The contribution presents the experiments of Jena University. Applications are envisaged, e.g., in the STEP space mission of the NASA/ESA. [Translated from the German original] The weak equivalence principle is one of the fundamental hypotheses of the general theory of relativity and thus one of the cornerstones of our physical picture of the world. Although there have been numerous and ever more precise measurements of the equivalence of gravitational and inertial mass since Galileo Galilei's first experiments at the Leaning Tower of Pisa in 1638, the strict validity of this fundamental principle remains, by comparison, insufficiently constrained experimentally. Newer methods, such as the use of SQUID-based measurement techniques and the performance of experiments on satellites, promise improvements in the near future, so that theoretical attempts to unify all known physical interactions, which predict a violation of the weak equivalence principle, could be constrained experimentally. The contribution gives an overview of the SQUID-based measurement technique developed at the University of Jena for testing the equivalence principle and summarizes the experimental results obtained so far in free-fall tests at the Bremen drop tower. An outlook on the planned NASA/ESA space mission STEP for a precision test of the weak equivalence principle concludes the contribution. (orig.)

  4. Necessary and sufficient conditions for non-perturbative equivalences of large Nc orbifold gauge theories

    International Nuclear Information System (INIS)

    Kovtun, Pave; Uensal, Mithat; Yaffe, Laurence G.

    2005-01-01

    Large N coherent state methods are used to study the relation between U(N_c) gauge theories containing adjoint representation matter fields and their orbifold projections. The classical dynamical systems which reproduce the large N_c limits of the quantum dynamics in parent and daughter orbifold theories are compared. We demonstrate that the large N_c dynamics of the parent theory, restricted to the subspace invariant under the orbifold projection symmetry, and the large N_c dynamics of the daughter theory, restricted to the untwisted sector invariant under 'theory space' permutations, coincide. This implies equality, in the large N_c limit, between appropriately identified connected correlation functions in parent and daughter theories, provided the orbifold projection symmetry is not spontaneously broken in the parent theory and the theory space permutation symmetry is not spontaneously broken in the daughter. The necessity of these symmetry realization conditions for the validity of the large N_c equivalence is unsurprising, but demonstrating the sufficiency of these conditions is new. This work extends an earlier proof of non-perturbative large N_c equivalence which was only valid in the phase of the (lattice regularized) theories continuously connected to large mass and strong coupling.

  5. Elaboration of the recently proposed test of Pauli's principle under strong interactions

    International Nuclear Information System (INIS)

    Ktorides, C.N.; Myung, H.C.; Santilli, R.M.

    1980-01-01

    The primary objective of this paper is to stimulate the experimental verification of the validity or invalidity of Pauli's principle under strong interactions. We first outline the most relevant steps in the evolution of the notion of particle. The spin as well as other intrinsic characteristics of extended, massive, particles under electromagnetic interactions at large distances might be subjected to a mutation under additional strong interactions at distances smaller than their charge radius. These dynamical effects can apparently be conjectured to account for the nonpointlike nature of the particles, their necessary state of penetration to activate the strong interactions, and the consequential emergence of broader forces which imply the breaking of the SU(2)-spin symmetry. We study a characterization of the mutated value of the spin via the transition from the associative enveloping algebra of SU(2) to a nonassociative Lie-admissible form. The departure from the original associative product then becomes directly representative of the breaking of the SU(2)-spin symmetry, the presence of forces more general than those derivable from a potential, and the mutated value of the spin. In turn, such a departure of the spin from conventional quantum-mechanical values implies the inapplicability of Pauli's exclusion principle under strong interactions, because, according to this hypothesis, particles that are fermions under long-range electromagnetic interactions are no longer fermions under these broader, short-range, forces. In nuclear physics possible deviations from Pauli's exclusion principle can at most be very small. These experimental data establish that, for the nuclei considered, nucleons are in a partial state of penetration of their charge volumes although of small statistical character

  6. Analysing Discursive Practices in Legal Research: How a Single Remark Implies a Paradigm

    Directory of Open Access Journals (Sweden)

    Paul van den Hoven

    2017-12-01

    Full Text Available Different linguistic theories of meaning (semantic theories imply different methods to discuss meaning. Discussing meaning is what legal practitioners frequently do to decide legal issues and, subsequently, legal scholars analyse in their studies these discursive practices of parties, judges and legal experts. Such scholarly analysis reveals a methodical choice on how to discuss meaning and therefore implies positioning oneself towards a semantic theory of meaning, whether the scholar is aware of this or not. Legal practitioners may not be bound to be consistent in their commitment to semantic theories, as their task is to decide legal issues. Legal scholars, however, should be consistent because commitment to a semantic theory implies a distinct position towards important legal theoretical doctrines. In this paper three examples are discussed that require an articulated position of the legal scholar because the discursive practices of legal practitioners show inconsistencies. For each of these examples it can be shown that a scholar’s methodic choice implies commitment to a specific semantic theory, and that adopting such a theory implies a distinct position towards the meaning of the Rule of Law, the separation of powers doctrine and the institutional position of the judge.

  7. On the operator equivalents

    International Nuclear Information System (INIS)

    Grenet, G.; Kibler, M.

    1978-06-01

    A closed polynomial formula for the qth component of the diagonal operator equivalent of order k is derived in terms of angular momentum operators. The interest in various fields of molecular and solid state physics of using such a formula in connection with symmetry adapted operator equivalents is outlined

  8. Degeneracy relations in QCD and the equivalence of two systematic all-orders methods for setting the renormalization scale

    Directory of Open Access Journals (Sweden)

    Huan-Yu Bi

    2015-09-01

    Full Text Available The Principle of Maximum Conformality (PMC) eliminates QCD renormalization scale-setting uncertainties using fundamental renormalization group methods. The resulting scale-fixed pQCD predictions are independent of the choice of renormalization scheme and show rapid convergence. The coefficients of the scale-fixed couplings are identical to the corresponding conformal series with zero β-function. Two all-orders methods for systematically implementing the PMC scale-setting procedure for existing high order calculations are discussed in this article. One implementation is based on the PMC-BLM correspondence (PMC-I); the other, more recent, method (PMC-II) uses the R_δ-scheme, a systematic generalization of the minimal subtraction renormalization scheme. Both approaches satisfy all of the principles of the renormalization group and lead to scale-fixed and scheme-independent predictions at each finite order. In this work, we show that the PMC-I and PMC-II scale-setting methods are in practice equivalent to each other. We illustrate this equivalence for the four-loop calculations of the annihilation ratio R_{e+e-} and the Higgs partial width Γ(H→bb̄). Both methods lead to the same resummed ('conformal') series up to all orders. The small scale differences between the two approaches are reduced as additional renormalization group {β_i}-terms in the pQCD expansion are taken into account. We also show that special degeneracy relations, which underlie the equivalence of the two PMC approaches and the resulting conformal features of the pQCD series, are in fact general properties of non-Abelian gauge theory.

  9. 78 FR 67360 - Ambient Air Monitoring Reference and Equivalent Methods: Designation of Five New Equivalent Methods

    Science.gov (United States)

    2013-11-12

    ... Methods: Designation of Five New Equivalent Methods AGENCY: Office of Research and Development; Environmental Protection Agency (EPA). ACTION: Notice of the designation of five new equivalent methods for...) has designated, in accordance with 40 CFR Part 53, five new equivalent methods, one for measuring...

  10. 77 FR 60985 - Ambient Air Monitoring Reference and Equivalent Methods: Designation of Three New Equivalent Methods

    Science.gov (United States)

    2012-10-05

    ... Methods: Designation of Three New Equivalent Methods AGENCY: Environmental Protection Agency. ACTION: Notice of the designation of three new equivalent methods for monitoring ambient air quality. SUMMARY... equivalent methods, one for measuring concentrations of PM 2.5 , one for measuring concentrations of PM 10...

  11. Strong equivalence, Lorentz and CPT violation, anti-hydrogen spectroscopy and gamma-ray burst polarimetry

    International Nuclear Information System (INIS)

    Shore, Graham M.

    2005-01-01

    The strong equivalence principle, local Lorentz invariance and CPT symmetry are fundamental ingredients of the quantum field theories used to describe elementary particle physics. Nevertheless, each may be violated by simple modifications to the dynamics while apparently preserving the essential fundamental structure of quantum field theory itself. In this paper, we analyse the construction of strong equivalence, Lorentz and CPT violating Lagrangians for QED and review and propose some experimental tests in the fields of astrophysical polarimetry and precision atomic spectroscopy. In particular, modifications of the Maxwell action predict a birefringent rotation of the direction of linearly polarised radiation from synchrotron emission which may be studied using radio galaxies or, potentially, gamma-ray bursts. In the Dirac sector, changes in atomic energy levels are predicted which may be probed in precision spectroscopy of hydrogen and anti-hydrogen atoms, notably in the Doppler-free, two-photon 1s-2s and 2s-nd (n∼10) transitions

  12. Tachyons imply the existence of a privileged frame

    Energy Technology Data Exchange (ETDEWEB)

    Sjoedin, T.; Heylighen, F.

    1985-12-16

    It is shown that the existence of faster-than-light signals (tachyons) would imply the existence (and detectability) of a privileged inertial frame and that one can avoid all problems with reversed-time order only by using absolute synchronization instead of the standard one. The connection between these results and the EPR-paradox is discussed.

  13. Investigation of Equivalent Circuit for PEMFC Assessment

    International Nuclear Information System (INIS)

    Myong, Kwang Jae

    2011-01-01

    Chemical reactions occurring in a PEMFC are dominated by the physical conditions and interface properties, and the reactions are expressed in terms of impedance. The performance of a PEMFC can be simply diagnosed by examining the impedance because impedance characteristics can be expressed by an equivalent electrical circuit. In this study, the characteristics of a PEMFC are assessed using the AC impedance and various equivalent circuits such as a simple equivalent circuit, equivalent circuit with a CPE, equivalent circuit with two RCs, and equivalent circuit with two CPEs. It was found in this study that the characteristics of a PEMFC could be assessed using impedance and an equivalent circuit, and the accuracy was highest for an equivalent circuit with two CPEs
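
    To make the equivalent-circuit idea concrete, the sketch below evaluates the impedance of a series resistance in series with a parallel R-CPE element, one of the circuit families named above, using the usual constant-phase-element definition Z_CPE = 1/(Q (j omega)^alpha); the element values are invented and not fitted to any real cell.

        import numpy as np

        def z_cpe(omega, Q, alpha):
            """Constant phase element: Z = 1 / (Q * (j*omega)**alpha)."""
            return 1.0 / (Q * (1j * omega) ** alpha)

        def z_r_cpe_circuit(omega, R_s, R_ct, Q, alpha):
            """Series resistance R_s in series with (R_ct parallel to a CPE)."""
            z_par = 1.0 / (1.0 / R_ct + 1.0 / z_cpe(omega, Q, alpha))
            return R_s + z_par

        freqs = np.logspace(-1, 4, 6)                # Hz, illustrative sweep
        omega = 2 * np.pi * freqs
        Z = z_r_cpe_circuit(omega, R_s=0.01, R_ct=0.1, Q=0.5, alpha=0.85)
        for f, z in zip(freqs, Z):
            print(f"{f:10.2f} Hz  Re(Z)={z.real:.4f}  -Im(Z)={-z.imag:.4f}")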

  14. Language comprehenders retain implied shape and orientation of objects.

    Science.gov (United States)

    Pecher, Diane; van Dantzig, Saskia; Zwaan, Rolf A; Zeelenberg, René

    2009-06-01

    According to theories of embodied cognition, language comprehenders simulate sensorimotor experiences to represent the meaning of what they read. Previous studies have shown that picture recognition is better if the object in the picture matches the orientation or shape implied by a preceding sentence. In order to test whether strategic imagery may explain previous findings, language comprehenders first read a list of sentences in which objects were mentioned. Only once the complete list had been read was recognition memory tested with pictures. Recognition performance was better if the orientation or shape of the object matched that implied by the sentence, both immediately after reading the complete list of sentences and after a 45-min delay. These results suggest that previously found match effects were not due to strategic imagery and show that details of sensorimotor simulations are retained over longer periods.

  15. GAUGE PRINCIPLE AND VARIATIONAL FORMULATION FOR FLOWS OF AN IDEAL FLUID

    Institute of Scientific and Technical Information of China (English)

    KAMBE Tsutomu

    2003-01-01

    A gauge principle is applied to mass flows of an ideal compressible fluid subject to Galilei transformation. A free-field Lagrangian defined at the outset is invariant with respect to global SO(3) gauge transformations as well as Galilei transformations. The action principle leads to the equation of potential flows under the constraint of a continuity equation. However, the irrotational flow is not invariant with respect to local SO(3) gauge transformations. According to the gauge principle, a gauge-covariant derivative is defined by introducing a new gauge field. Galilei invariance of the derivative requires the gauge field to coincide with the vorticity, i.e. the curl of the velocity field. A full gauge-covariant variational formulation is proposed on the basis of Hamilton's principle and an associated Lagrangian. By means of an isentropic material variation taking into account individual particle motion, Euler's equation of motion is derived for isentropic flows by using the covariant derivative. Noether's theorem associated with global SO(3) gauge invariance leads to the conservation of total angular momentum. In addition, the Lagrangian has a local symmetry of particle permutation which results in a local conservation law equivalent to the vorticity equation.

  16. Logically automorphically equivalent knowledge bases

    OpenAIRE

    Aladova, Elena; Plotkin, Tatjana

    2017-01-01

    Knowledge base theory provides an important example of a field in which applications of universal algebra and algebraic logic look very natural, and their interaction with practical problems arising in computer science might be very productive. In this paper we study the equivalence problem for knowledge bases. Our interest is to find out how informational equivalence is related to the logical description of knowledge. Studying various equivalences of knowledge bases allows us to compare d...

  17. Trust and Reciprocity: Are Effort and Money Equivalent?

    Science.gov (United States)

    Vilares, Iris; Dam, Gregory; Kording, Konrad

    2011-01-01

    Trust and reciprocity facilitate cooperation and are relevant to virtually all human interactions. They are typically studied using trust games: one subject gives (entrusts) money to another subject, which may return some of the proceeds (reciprocate). Currently, however, it is unclear whether trust and reciprocity in monetary transactions are similar in other settings, such as physical effort. Trust and reciprocity of physical effort are important as many everyday decisions imply an exchange of physical effort, and such exchange is central to labor relations. Here we studied a trust game based on physical effort and compared the results with those of a computationally equivalent monetary trust game. We found no significant difference between effort and money conditions in both the amount trusted and the quantity reciprocated. Moreover, there is a high positive correlation in subjects' behavior across conditions. This suggests that trust and reciprocity may be character traits: subjects that are trustful/trustworthy in monetary settings behave similarly during exchanges of physical effort. Our results validate the use of trust games to study exchanges in physical effort and to characterize inter-subject differences in trust and reciprocity, and also suggest a new behavioral paradigm to study these differences. PMID:21364931

  18. Ambient dose equivalent H*(d) - an appropriate philosophy for radiation monitoring onboard aircraft and in space?

    International Nuclear Information System (INIS)

    Vana, N.; Hajek, M.; Berger, T.

    2003-01-01

    In this paper the authors deal with the ambient dose equivalent H*(d) and its application onboard aircraft and on space stations. The discussion and the experiments carried out demonstrate that the philosophy of H*(10) leads to an underestimation of the whole-body radiation exposure when applied onboard aircraft and in space. The introduction of a new concept therefore has to be considered, one that could be based on microdosimetric principles, offering the unique potential of a more direct correlation to radiobiological parameters.

  19. Applicability of ambient dose equivalent H*(d) in mixed radiation fields - a critical discussion

    International Nuclear Information System (INIS)

    Hajek, M.; Vana, N.

    2004-01-01

    skin dose equivalent may well be used as a conservative estimate for the whole body effective dose. The intention of ICRU in introducing H*(d) was that this quantity should be suitable for metrology, and be unified, i.e. the same for all radiation fields. As was demonstrated by our experiments, this demand can only be satisfied for radiation fields encountered in terrestrial standard dosimetry, but it will certainly fail for complexly mixed fields. It therefore has to be discussed to substitute the philosophy of ambient dose equivalent by a new concept that could be based on microdosimetric principles, offering the unique potential of a more direct correlation with radiobiological parameters. (author)

  20. Applicability of Ambient Dose Equivalent H (d) in Mixed Radiation Fields - A Critical Discussion

    International Nuclear Information System (INIS)

    Vana, R.; Hajek, M.; Bergerm, T.

    2004-01-01

    dose equivalent may well be used as a conservative estimate for the whole body effective dose. The intention of ICRU in introducing H(d) was that this quantity should (i) be suitable for metrology, and (ii) be unified, i.e. the same for all radiation fields. As was demonstrated by our experiments, this demand can only be satisfied for radiation fields encountered in terrestrial standard dosimetry, but it will certainly fail for complexly mixed fields. It therefore has to be discussed to substitute the philosophy of ambient dose equivalent by a new concept that could be based on microdosimetric principles, offering the unique potential of a more direct correlation with radiobiological parameters. (Author)

  1. The Impact of Jump Distributions on the Implied Volatility of Variance

    DEFF Research Database (Denmark)

    Nicolato, Elisa; Pisani, Camilla; Pedersen, David Sloth

    2017-01-01

    We consider a tractable affine stochastic volatility model that generalizes the seminal Heston (1993) model by augmenting it with jumps in the instantaneous variance process. In this framework, we consider both realized variance options and VIX options, and we examine the impact of the distribution of jumps on the associated implied volatility smile. We provide sufficient conditions for the asymptotic behavior of the implied volatility of variance for small and large strikes. In particular, by selecting alternative jump distributions, we show that one can obtain fundamentally different shapes...

  2. The implied volatility of U.S. interest rates: evidence from callable U. S. Treasuries

    OpenAIRE

    Robert R. Bliss; Ehud I. Ronn

    1995-01-01

    The prices for callable U.S. Treasury securities provide the sole source of evidence concerning the implied volatility of interest rates over the extended 1926-1994 period. This paper uses the prices of callable as well as non-callable Treasury instruments to estimate implied interest rate volatilities for the past sixty years, and, for the more recent 1989-1994 period, the cross-sectional term structures of implied interest rate volatility. We utilize these estimates to perform cross-section...

  3. Intuitive understanding of nonlocality as implied by quantum theory

    International Nuclear Information System (INIS)

    Bohm, D.G.; Hiley, B.J.

    1975-01-01

    The fact is brought out that the essential new quality implied by the quantum theory is nonlocality; i.e., that a system cannot be analyzed into parts whose basic properties do not depend on the state of the whole system. This is done in terms of the causal interpretation of the quantum theory, proposed by one of us (D.B.) in 1952, involving the introduction of the "quantum potential". It is shown that this approach implies a new universal type of description, in which the standard or canonical form is always supersystem-system-subsystem; and this leads to the radically new notion of unbroken wholeness of the entire universe. Finally, some of the implications of extending these notions to the relativity domain are indicated, and in so doing a novel concept of time is suggested, in terms of which relativity and quantum theory may eventually be brought together.

  4. Implied Movement in Static Images Reveals Biological Timing Processing

    Directory of Open Access Journals (Sweden)

    Francisco Carlos Nather

    2015-08-01

    Full Text Available Visual perception is adapted toward a better understanding of our own movements than those of non-conspecifics. The present study determined whether time perception is affected by pictures of different species by considering the evolutionary scale. Static (“S”) and implied movement (“M”) images of a dog, cheetah, chimpanzee, and man were presented to undergraduate students. S and M images of the same species were presented in random order or one after the other (S-M or M-S) for two groups of participants. Movement, Velocity, and Arousal semantic scales were used to characterize some properties of the images. Implied movement affected time perception, in which M images were overestimated. The results are discussed in terms of visual motion perception related to biological timing processing that could be established early in terms of the adaptation of humankind to the environment.

  5. 46 CFR 175.540 - Equivalents.

    Science.gov (United States)

    2010-10-01

    ... Safety Management (ISM) Code (IMO Resolution A.741(18)) for the purpose of determining that an equivalent... Organization (IMO) “Code of Safety for High Speed Craft” as an equivalent to compliance with applicable...

  6. Wijsman Orlicz Asymptotically Ideal -Statistical Equivalent Sequences

    Directory of Open Access Journals (Sweden)

    Bipan Hazarika

    2013-01-01

    in Wijsman sense and present some definitions which are the natural combination of the definition of asymptotic equivalence, statistical equivalent, -statistical equivalent sequences in Wijsman sense. Finally, we introduce the notion of Cesaro Orlicz asymptotically -equivalent sequences in Wijsman sense and establish their relationship with other classes.

  7. The performance of low pressure tissue-equivalent chambers and a new method for parameterising the dose equivalent

    International Nuclear Information System (INIS)

    Eisen, Y.

    1986-01-01

    The performance of Rossi-type spherical tissue-equivalent chambers with equivalent diameters between 0.5 μm and 2 μm was tested experimentally using monoenergetic and polyenergetic neutron sources in the energy region of 10 keV to 14.5 MeV. In agreement with theoretical predictions, both chambers failed to provide LET information at low neutron energies. A dose equivalent algorithm was derived that utilises the event distribution but does not attempt to correlate event size with LET. The algorithm was predicted theoretically and confirmed by experiment. The algorithm that was developed determines the neutron dose equivalent, from the data of the 0.5 μm chamber, to better than ±20% over the energy range of 30 keV to 14.5 MeV. The same algorithm also determines the dose equivalent from the data of the 2 μm chamber to better than ±20% over the energy range of 60 keV to 14.5 MeV. The efficiency of the chambers is 33 counts per μSv, or equivalently about 10 counts s^-1 per mSv h^-1. This efficiency enables the measurement of dose equivalent rates above 1 mSv h^-1 for an integration period of 3 s. Integrated dose equivalents can be measured as low as 1 μSv. (author)
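
    The quoted efficiency figures are consistent with a simple unit conversion, shown here only as a check rather than as additional data from the paper:

        \[
        33\ \frac{\text{counts}}{\mu\text{Sv}} \times 1\ \frac{\text{mSv}}{\text{h}}
        = 33\ \frac{\text{counts}}{\mu\text{Sv}} \times \frac{1000\ \mu\text{Sv}}{3600\ \text{s}}
        \approx 9\ \text{counts s}^{-1},
        \]

    i.e. roughly the stated 10 counts s^-1 per mSv h^-1.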

  8. Do flow principles of operations management apply to computing centres?

    CERN Document Server

    Abaunza, Felipe; Hameri, Ari-Pekka; Niemi, Tapio

    2014-01-01

    By analysing large data-sets on jobs processed in major computing centres, we study how operations management principles apply to these modern-day processing plants. We show that Little's Law on long-term performance averages holds for computing centres, i.e. work-in-progress equals the throughput rate multiplied by the process lead time. Contrary to traditional manufacturing principles, the law of variation does not hold for computing centres: the more variation in job lead times, the better the throughput and utilisation of the system. We also show that as the utilisation of the system increases, lead times and work-in-progress increase, which complies with traditional manufacturing. In comparison with current computing centre operations these results imply that better allocation of jobs could increase throughput and utilisation, while less computing resources are needed, thus increasing the overall efficiency of the centre. From a theoretical point of view, in a system with close to zero set-up times, as in the c...
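
    A minimal sketch of the Little's Law bookkeeping used in this kind of analysis, applied to a made-up job log with arrival and completion timestamps in hours; none of the numbers come from the computing centres studied in the paper.

        import numpy as np

        # Hypothetical job log: (arrival_time, completion_time) in hours.
        jobs = np.array([(0.0, 2.0), (0.5, 3.5), (1.0, 2.5), (2.0, 6.0), (3.0, 5.0)])
        arrivals, completions = jobs[:, 0], jobs[:, 1]

        horizon = completions.max() - arrivals.min()      # observation window [h]
        throughput = len(jobs) / horizon                  # completed jobs per hour
        lead_time = np.mean(completions - arrivals)       # mean time in system [h]
        wip = throughput * lead_time                      # Little's Law: average work-in-progress

        print(f"throughput = {throughput:.2f} jobs/h")
        print(f"lead time  = {lead_time:.2f} h")
        print(f"average WIP = {wip:.2f} jobs")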

  9. The Principle of Polyrepresentation: Document Representations Created by Different Agents

    Directory of Open Access Journals (Sweden)

    Dora Rubinić

    2015-03-01

    Full Text Available Abstract Purpose: The paper gives a review of the literature on the principle of polyrepresentation formulated by Ingwersen in the nineties of the 20th century. The principle of polyrepresentation points out the necessity of the existence of different cognitive and functional representations of the same document, created by different agents in order to answer different representations of users' needs. The main goal of the paper is to give an overview of the principle of polyrepresentation as well as a translation of terms into Croatian, which provides an opportunity for further development of the terminology of related areas, e.g. information retrieval, subject indexing etc. Methodology/approach: The method used in this paper is the analysis of selected research papers on the development of the principle of polyrepresentation. The literature was selected due to its importance and approach to the topic and was limited to papers mostly dealing with subject access to documents. Research limitation: The review was limited to just one aspect of the model, the representations of documents. The second part of the model, the cognitive sphere and its application in IR systems, was excluded from this paper. Originality/practical implications: The paper implies the importance of the principle of polyrepresentation in the context of current trends in subject indexing in online systems. Although there are a number of articles referring to the principle, as well as some empirical research using elements of the principle, its subject-access aspect is often not included in them. This paper emphasizes the importance of the principle primarily in the context of current trends in subject indexing used in online systems (e.g. the use of subject headings or access points in online catalogues, social tagging, the involvement of different agents in subject indexing online, etc.). It also recommends translations of selected terms into Croatian and invites researchers to discuss

  10. Equivalence in Bilingual Lexicography: Criticism and Suggestions*

    Directory of Open Access Journals (Sweden)

    Herbert Ernst Wiegand

    2011-10-01

    Full Text Available

    Abstract: A reminder of general problems in the formation of terminology, as illustrated by the German Äquivalenz (Eng. equivalence) and äquivalent (Eng. equivalent), is followed by a critical discussion of the concept of equivalence in contrastive lexicology. It is shown that especially the concept of partial equivalence is contradictory in its different manifestations. Consequently attempts are made to give a more precise indication of the concept of equivalence in metalexicography, with regard to the domain of the nominal lexicon. The problems of especially the metalexicographic concept of partial equivalence as well as that of divergence are fundamentally expounded. In conclusion the direction is indicated to find more appropriate metalexicographic versions of the concept of equivalence.

    Keywords: EQUIVALENCE, LEXICOGRAPHIC EQUIVALENT, PARTIAL EQUIVALENCE, CONGRUENCE, DIVERGENCE, CONVERGENCE, POLYDIVERGENCE, SYNTAGM-EQUIVALENCE, ZERO EQUIVALENCE, CORRESPONDENCE

    Abstract (translated from the German): Equivalence in bilingual lexicography: criticism and suggestions. After recalling general problems of concept formation, using the example of the German Äquivalenz and äquivalent, concepts of equivalence in contrastive lexicology are first discussed critically. It is shown that, in particular, the concept of partial equivalence is contradictory in its various manifestations. More precise formulations of the concepts of equivalence in metalexicography are then attempted, with reference to the domain of the nominal lexicon. In particular, the metalexicographic concept of partial equivalence, as well as that of divergence, is fundamentally called into question. In which direction one can go in order to find more appropriate metalexicographic versions of the concept of equivalence is indicated in conclusion.

    Keywords (translated from the German): EQUIVALENCE, LEXICOGRAPHIC EQUIVALENT, PARTIAL EQUIVALENCE, CONGRUENCE, DIVERGENCE, CONVERGENCE, POLYDIVERGENCE

  11. Least-entropy generation: Variational principle of Onsager's type for transient hyperbolic heat and mass transfer

    International Nuclear Information System (INIS)

    Sieniutycz, S.; Berry, R.S.

    1992-01-01

    For coupled transfer of the energy and mass in a multicomponent system at mechanical equilibrium a simple thermodynamic theory is developed, and the damped wave equations of change are derived. We show that under nonstationary conditions, where relaxation of diffusive fluxes is essential, the evolution of the distributed coupled transfer of the energy and mass follows the path that minimizes the difference between the total entropy generated within the system and that exchanged by the system. The principle is also valid in the limit in which flux relaxation effects are negligible and the heat and mass transfer, whether steady or not, obeys Onsager's generalization of the Fourier and Fick laws. For coupled steady-state processes the principle goes into that of Onsager, yielding his phenomenological equations. In contrast to the local steady-state nature of Onsager's principle the new principle is global, valid for both stationary and transient situations, and requires no frozen fields. For an isolated, distributed system, in which transient relaxation to equilibrium is the only possible process, the principle implies the least possible increase of the system entropy between any two successive configurations

  12. Selecting the Best Forecasting-Implied Volatility Model Using Genetic Programming

    Directory of Open Access Journals (Sweden)

    Wafa Abdelmalek

    2009-01-01

    Full Text Available The volatility is a crucial variable in option pricing and hedging strategies. The aim of this paper is to provide some initial evidence of the empirical relevance of genetic programming to volatility forecasting. Using real data on S&P500 index options, the ability of genetic programming to forecast Black-Scholes implied volatility is compared between time series samples and moneyness-time-to-maturity classes. Total and out-of-sample mean squared errors are used as measures of forecasting performance. The comparisons reveal that the time series model seems to be more accurate in forecasting implied volatility than the moneyness-time-to-maturity models. Overall, the results are strongly encouraging and suggest that the genetic programming approach works well in solving financial problems.

  13. Non-equivalent stringency of ethical review in the Baltic States: a sign of a systematic problem in Europe?

    Science.gov (United States)

    Gefenas, E; Dranseika, V; Cekanauskaite, A; Hug, K; Mezinska, S; Peicius, E; Silis, V; Soosaar, A; Strosberg, M

    2011-01-01

    We analyse the system of ethical review of human research in the Baltic States by introducing the principle of equivalent stringency of ethical review, that is, research projects imposing equal risks and inconveniences on research participants should be subjected to equally stringent review procedures. We examine several examples of non-equivalence or asymmetry in the system of ethical review of human research: (1) the asymmetry between rather strict regulations of clinical drug trials and relatively weaker regulations of other types of clinical biomedical research and (2) gaps in ethical review in the area of non-biomedical human research where some sensitive research projects are not reviewed by research ethics committees at all. We conclude that non-equivalent stringency of ethical review is at least partly linked to the differences in scope and binding character of various international legal instruments that have been shaping the system of ethical review in the Baltic States. Therefore, the Baltic example could also serve as an object lesson to other European countries which might be experiencing similar problems. PMID:20606000

  14. SAPONIFICATION EQUIVALENT OF DASAMULA TAILA

    OpenAIRE

    Saxena, R. B.

    1994-01-01

    Saponification equivalent values of Dasamula taila are very useful for the technical and analytical work. It gives the mean molecular weight of the glycerides and acids present in Dasamula Taila. Saponification equivalent values of Dasamula taila are reported in different packings.

  15. Saponification equivalent of dasamula taila.

    Science.gov (United States)

    Saxena, R B

    1994-07-01

    Saponification equivalent values of Dasamula taila are very useful for the technical and analytical work. It gives the mean molecular weight of the glycerides and acids present in Dasamula Taila. Saponification equivalent values of Dasamula taila are reported in different packings.

  16. Some spectral equivalences between Schroedinger operators

    International Nuclear Information System (INIS)

    Dunning, C; Hibberd, K E; Links, J

    2008-01-01

    Spectral equivalences of the quasi-exactly solvable sectors of two classes of Schroedinger operators are established, using Gaudin-type Bethe ansatz equations. In some instances the results can be extended leading to full isospectrality. In this manner we obtain equivalences between PT-symmetric problems and Hermitian problems. We also find equivalences between some classes of Hermitian operators

  17. Seismic equivalents of volcanic jet scaling laws and multipoles in acoustics

    Science.gov (United States)

    Haney, Matthew M.; Matoza, Robin S.; Fee, David; Aldridge, David F.

    2018-04-01

    We establish analogies between equivalent source theory in seismology (moment-tensor and single-force sources) and acoustics (monopoles, dipoles and quadrupoles) in the context of volcanic eruption signals. Although the infrasound (acoustic waves below about 20 Hz) radiated by volcanic eruptions may be more complex than a simple monopole, dipole or quadrupole assumption, these elementary acoustic sources are a logical place to begin exploring relations with seismic sources. By considering the radiated power of a harmonic force source at the surface of an elastic half-space, we show that a volcanic jet or plume modelled as a seismic force has similar scaling with respect to eruption parameters (e.g. exit velocity and vent area) as an acoustic dipole. We support this by demonstrating, from first principles, a fundamental relationship that ties together explosion, torque and force sources in seismology and highlights the underlying dipole nature of seismic forces. This forges a connection between the multipole expansion of equivalent sources in acoustics and the use of forces and moments as equivalent sources in seismology. We further show that volcanic infrasound monopole and quadrupole sources exhibit scalings similar to seismicity radiated by volume injection and moment sources, respectively. We describe a scaling theory for seismic tremor during volcanic eruptions that agrees with observations showing a linear relation between the radiated power of tremor and the eruption rate. Volcanic tremor over the first 17 hr of the 2016 eruption at Pavlof Volcano, Alaska, obeyed the linear relation. Subsequent tremor during the main phase of the eruption did not obey the linear relation, demonstrating that volcanic eruption tremor can exhibit other scalings even during the same eruption.

  18. Gauge equivalence of the Gross-Pitaevskii equation and the equivalent Heisenberg spin chain

    Science.gov (United States)

    Radha, R.; Kumar, V. Ramesh

    2007-11-01

    In this paper, we construct an equivalent spin chain for the Gross-Pitaevskii equation with quadratic potential and exponentially varying scattering lengths using gauge equivalence. We have then generated the soliton solutions for the spin components S_3 and S_-. We find that the spin solitons for S_3 and S_- can be compressed for exponentially growing eigenvalues while they broaden out for decaying eigenvalues.

  19. A study on lead equivalent

    International Nuclear Information System (INIS)

    Lin Guanxin

    1991-01-01

    A study of the way in which the lead equivalent of lead glass changes with the energy of X rays or γ rays is described. The reason for this change is discussed and a new method of testing lead equivalent is suggested.

  20. Analytical and numerical construction of equivalent cables.

    Science.gov (United States)

    Lindsay, K A; Rosenberg, J R; Tucker, G

    2003-08-01

    The mathematical complexity experienced when applying cable theory to arbitrarily branched dendrites has led to the development of a simple representation of any branched dendrite called the equivalent cable. The equivalent cable is an unbranched model of a dendrite together with a one-to-one mapping of potentials and currents on the branched model to those on the unbranched model, and vice versa. The piecewise uniform cable, with a symmetrised tri-diagonal system matrix, is shown to represent the canonical form for an equivalent cable. Through a novel application of the Laplace transform it is demonstrated that an arbitrary branched model of a dendrite can be transformed to the canonical form of an equivalent cable. The characteristic properties of the equivalent cable are extracted from the matrix for the transformed branched model. The one-to-one mapping follows automatically from the construction of the equivalent cable. The equivalent cable is used to provide a new procedure for characterising the location of synaptic contacts on spinal interneurons.
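
    The symmetrisation step mentioned in this abstract can be illustrated with a generic diagonal similarity transform: a tridiagonal matrix whose paired off-diagonal entries have a positive product can always be rescaled into symmetric form. The sketch below is a standard construction given for orientation and is not taken from the paper's derivation.

        import numpy as np

        def symmetrize_tridiagonal(A):
            """Diagonal similarity S = D^{-1} A D that makes a tridiagonal A symmetric.

            Requires A[i, i+1] * A[i+1, i] > 0 for every off-diagonal pair.
            """
            n = A.shape[0]
            d = np.ones(n)
            for i in range(n - 1):
                d[i + 1] = d[i] * np.sqrt(A[i + 1, i] / A[i, i + 1])
            D = np.diag(d)
            return np.linalg.inv(D) @ A @ D

        # Illustrative matrix, not a real cable matrix.
        A = np.array([[-2.0, 0.50, 0.00],
                      [ 2.0, -3.0, 1.00],
                      [ 0.0, 0.25, -1.0]])
        S = symmetrize_tridiagonal(A)
        print(np.round(S, 6))
        print("symmetric:", np.allclose(S, S.T))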

  1. Establishing Substantial Equivalence: Transcriptomics

    Science.gov (United States)

    Baudo, María Marcela; Powers, Stephen J.; Mitchell, Rowan A. C.; Shewry, Peter R.

    Regulatory authorities in Western Europe require transgenic crops to be substantially equivalent to conventionally bred forms if they are to be approved for commercial production. One way to establish substantial equivalence is to compare the transcript profiles of developing grain and other tissues of transgenic and conventionally bred lines, in order to identify any unintended effects of the transformation process. We present detailed protocols for transcriptomic comparisons of developing wheat grain and leaf material, and illustrate their use by reference to our own studies of lines transformed to express additional gluten protein genes controlled by their own endosperm-specific promoters. The results show that the transgenes present in these lines (which included those encoding marker genes) did not have any significant unpredicted effects on the expression of endogenous genes and that the transgenic plants were therefore substantially equivalent to the corresponding parental lines.

  2. On uncertainties in definition of dose equivalent

    International Nuclear Information System (INIS)

    Oda, Keiji

    1995-01-01

    The author has always entertained the doubt that, in a neutron field, if the value of the absorbed dose measured with a tissue-equivalent ionization chamber is 1.02±0.01 mGy, the dose equivalent may be taken as 10.2±0.1 mSv. Should it be quoted as 10.2 or 11? The author considers that only 10 or 20 can be justified. Even if effort is exerted on the precision measurement of the absorbed dose, if the coefficient by which it is multiplied is not precise, the effort is meaningless. [Absorbed dose] x [Radiation quality factor] = [Dose equivalent] seems peculiar. How accurately can dose equivalent be evaluated? The descriptions related to uncertainties in the publications of ICRU and ICRP are introduced, which concern the radiation quality factor, the accuracy of measuring dose equivalent and so on. Dose equivalent indicates the criterion for the degree of risk, or it is considered only as a controlling quantity. The description in the 1973 ICRU report related to dose equivalent and its unit is cited. It was concluded that dose equivalent can be considered only as the absorbed dose multiplied by a dimensionless factor. The author presented these questions. (K.I.)

  3. DC cancellation as a method of generating a t2-response and of solving the radial position error in a concentric free-falling two-sphere equivalence-principle experiment in a drag-free satellite

    International Nuclear Information System (INIS)

    Lange, Benjamin

    2010-01-01

    This paper presents a new method for doing a free-fall equivalence-principle (EP) experiment in a satellite at ambient temperature which solves two problems that have previously blocked this approach. By using large masses to change the gravity gradient at the proof masses, the orbit dynamics of a drag-free satellite may be changed in such a way that the experiment can mimic a free-fall experiment in a constant gravitational field on the earth. An experiment using a sphere surrounded by a spherical shell both completely unsupported and free falling has previously been impractical because (1) it is not possible to distinguish between a small EP violation and a slight difference in the semi-major axes of the orbits of the two proof masses and (2) the position difference in the orbit due to an EP violation only grows as t whereas the largest disturbance grows as t^(3/2). Furthermore, it has not been known how to independently measure the positions of a shell and a solid sphere with sufficient accuracy. The measurement problem can be solved by using a two-color transcollimator (see the main text), and since the radial-position-error and t-response problems arise from the earth's gravity gradient and not from its gravity field, one solution is to modify the earth's gravity gradient with local masses fixed in the satellite. Since the gravity gradient at the surface of a sphere, for example, depends only on its density, the gravity gradients of laboratory masses and of the earth, unlike their fields, are of the same order of magnitude. In a drag-free satellite spinning perpendicular to the orbit plane, two fixed spherical masses whose connecting line parallels the satellite spin axis can generate a dc gravity gradient at test masses located between them which cancels the combined gravity gradient of the earth and the differential centrifugal force. With perfect cancellation, the position-error problem vanishes and the response grows as t^2 along a line which always points toward
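
    The claim that the gravity gradient at the surface of a sphere depends only on its density can be checked directly; this is a standard result reproduced here as a worked line, not additional material from the paper:

        \[
        \left.\frac{\partial g_r}{\partial r}\right|_{r=R}
        = \left.\frac{\partial}{\partial r}\left(\frac{GM}{r^{2}}\right)\right|_{r=R}
        = -\,\frac{2GM}{R^{3}}
        = -\,\frac{8\pi}{3}\,G\rho ,
        \]

    which is why laboratory masses of ordinary density can produce gradients of the same order as the Earth's even though their fields are tiny by comparison.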

  4. LEFT-WING ASYMPTOTICS OF THE IMPLIED VOLATILITY IN THE PRESENCE OF ATOMS

    OpenAIRE

    ARCHIL GULISASHVILI

    2015-01-01

    The paper considers the asymptotic behavior of the implied volatility in stochastic asset price models with atoms. In such models, the asset price distribution has a singular component at zero. Examples of models with atoms include the constant elasticity of variance (CEV) model, jump-to-default models, and stochastic models described by processes stopped at the first hitting time of zero. For models with atoms, the behavior of the implied volatility at large strikes is similar to that in mod...

  5. Comprehending Implied Meaning in English as a Foreign Language

    Science.gov (United States)

    Taguchi, Naoko

    2005-01-01

    This study investigated whether second language (L2) proficiency affects pragmatic comprehension, namely the ability to comprehend implied meaning in spoken dialogues, in terms of accuracy and speed of comprehension. Participants included 46 native English speakers at a U.S. university and 160 Japanese students of English in a college in Japan who…

  6. The equivalence of information-theoretic and likelihood-based methods for neural dimensionality reduction.

    Directory of Open Access Journals (Sweden)

    Ross S Williamson

    2015-04-01

    Full Text Available Stimulus dimensionality-reduction methods in neuroscience seek to identify a low-dimensional space of stimulus features that affect a neuron's probability of spiking. One popular method, known as maximally informative dimensions (MID, uses an information-theoretic quantity known as "single-spike information" to identify this space. Here we examine MID from a model-based perspective. We show that MID is a maximum-likelihood estimator for the parameters of a linear-nonlinear-Poisson (LNP model, and that the empirical single-spike information corresponds to the normalized log-likelihood under a Poisson model. This equivalence implies that MID does not necessarily find maximally informative stimulus dimensions when spiking is not well described as Poisson. We provide several examples to illustrate this shortcoming, and derive a lower bound on the information lost when spiking is Bernoulli in discrete time bins. To overcome this limitation, we introduce model-based dimensionality reduction methods for neurons with non-Poisson firing statistics, and show that they can be framed equivalently in likelihood-based or information-theoretic terms. Finally, we show how to overcome practical limitations on the number of stimulus dimensions that MID can estimate by constraining the form of the non-parametric nonlinearity in an LNP model. We illustrate these methods with simulations and data from primate visual cortex.
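
    For orientation, the linear-nonlinear-Poisson (LNP) ingredients referred to in this abstract are standard: the conditional intensity is a nonlinearity applied to a low-dimensional projection of the stimulus, and the fit is judged by the Poisson log-likelihood. A generic statement of the model, in our notation rather than the authors', is

        \[
        \lambda(t) = f\!\left(K\,\mathbf{x}(t)\right),
        \qquad
        \log \mathcal{L} = \sum_{t}\Big[\, y_t \log\!\big(\lambda(t)\,\Delta\big) - \lambda(t)\,\Delta \,\Big] + \text{const},
        \]

    where K projects the stimulus x(t) onto the feature subspace, f is the nonlinearity, y_t are binned spike counts and Δ is the bin width; the paper's result is that maximising single-spike information (MID) coincides with maximising this likelihood over K and f.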

  7. Runaway dilaton and equivalence principle violations

    CERN Document Server

    Damour, Thibault Marie Alban Guillaume; Veneziano, Gabriele; Damour, Thibault; Piazza, Federico; Veneziano, Gabriele

    2002-01-01

    In a recently proposed scenario, where the dilaton decouples while cosmologically attracted towards infinite bare string coupling, its residual interactions can be related to the amplitude of density fluctuations generated during inflation, and are large enough to be detectable through a modest improvement on present tests of free-fall universality. Provided it has significant couplings to either dark matter or dark energy, a runaway dilaton can also induce time-variations of the natural "constants" within the reach of near-future experiments.

  8. XBRL How It Implies The Audit Process

    Directory of Open Access Journals (Sweden)

    Sepky Mardian

    2015-08-01

    Full Text Available This article aimed to explain what XBRL is, how it works, and what it implies for the audit process. XBRL, as a new tool, was expected to produce timely, reliable, and credible financial reporting. With its real-time and interactive data, XBRL will help investors and other stakeholders in receiving, storing, and analyzing information quickly. In the audit profession, XBRL will speed up the audit process, save audit costs, and increase revenue. In fact, however, XBRL will only make this happen if it is implemented and integrated into the information systems owned by data/information providers.

  9. [The precautionary principle: advantages and risks].

    Science.gov (United States)

    Tubiana, M

    2001-04-01

    innovation, highly restrictive administrative procedures, and a waste of funds on the search for the utopian goal of zero risk. Other drawbacks are more insidious. The precautionary principle could contribute to a general feeling of anxiety and unease in the population. It could be used by campaigns to manipulate public opinion in favor of a particular commercial interest or ideology. Furthermore, practitioners and public policy makers could be led to make choices not dictated by a search for the optimal solution but rather a solution that would protect them from future accusations (the so-called umbrella phenomenon). On the international level, the precautionary principle must not be used to mask protectionism. Nevertheless, a clear advantage of the precautionary principle is that it requires decision-makers to explain the rationale behind their decisions, to quantify the risks, and to provide objective information. However, the physician must not be tempted to make patients sign documents certifying that they have been given all relevant information on his or her diagnosis and treatment. This example underlines the role of legal texts and jurisprudence in the application of the precautionary principle. Finally, the precautionary principle implies new obligations for the State. In the field of health and healthcare, the State must undertake actions based on fully open and undisguised decision-making and provide complete information to the public. Application of the precautionary principle requires much discernment because the final outcome can be beneficial or harmful, depending on the way it is implemented. The precautionary principle, and its applications, must be precise and detailed within a well-defined framework.

  10. Interpreting the implied meridional oceanic energy transport in AMIP

    International Nuclear Information System (INIS)

    Randall, D.A.; Gleckler, P.J.

    1993-09-01

    The Atmospheric Model Intercomparison Project (AMIP) was outlined in Paper No. CLIM VAR 2.3 (entitled 'The validation of ocean surface heat fluxes in AMIP') of these proceedings. Preliminary results of AMIP subproject No. 5 were also summarized. In particular, zonally averaged ocean surface heat fluxes resulting from various AMIP simulations were intercompared, and to the extent possible they were validated with uncertainties in observationally-based estimates of surface heat fluxes. The intercomparison is continued in this paper by examining the Oceanic Meridional Energy Transport (OMET) implied by the net surface heat fluxes of the AMIP simulations. As with the surface heat fluxes, the perspective here will be very cursory. The annual mean implied ocean heat transport can be estimated by integrating the zonally averaged net ocean surface heat flux, N_sfc, from one pole to the other. In AGCM simulations (and perhaps reality), the global mean N_sfc is typically not in exact balance when averaged over one or more years. Because of this, an important assumption must be made about changes in the distribution of energy in the oceans. Otherwise, the integration will yield a non-zero transport at the endpoint of integration (pole) which is not physically realistic. Here the authors will only look at 10-year means of the AMIP runs, and for simplicity they assume that any long term imbalance in the global averaged N_sfc will be sequestered (or released) over the global ocean. Tests have demonstrated that the treatment of how the global average energy imbalance is assumed to be distributed is important, especially when the long term imbalances are in excess of 10 W m^-2. However, this has not had a substantial impact on the qualitative features of the implied heat transport of the AMIP simulations examined thus far.
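
    The integration step described above can be sketched in a few lines. The zonal-mean flux profile, grid, and sign convention (flux positive into the ocean) below are synthetic assumptions, not AMIP output; the point is only the mechanics of removing the global imbalance and accumulating the transport from pole to pole.

```python
# Schematic calculation (synthetic zonal-mean fluxes): implied northward ocean
# heat transport from the zonally averaged net surface heat flux, after
# removing the long-term global-mean imbalance.
import numpy as np

a = 6.371e6                                  # earth radius, m
lat = np.linspace(-89.5, 89.5, 180)          # zonal-band centres, deg
phi = np.deg2rad(lat)
dphi = np.deg2rad(1.0)

N_sfc = 30.0 * np.cos(3 * phi)               # toy zonal-mean net flux, W m^-2
band_area = 2 * np.pi * a**2 * np.cos(phi) * dphi   # area of each band, m^2

# Remove the global-mean imbalance so the transport closes at the far pole.
imbalance = np.sum(N_sfc * band_area) / band_area.sum()
N_adj = N_sfc - imbalance

# Northward transport: cumulative area-weighted integral from the south pole.
omet = np.cumsum(N_adj * band_area)          # W
print(f"global imbalance removed: {imbalance:.2f} W m^-2")
print(f"peak implied transport  : {omet.max() / 1e15:.2f} PW")
```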

  11. Equivalent Simplification Method of Micro-Grid

    OpenAIRE

    Cai Changchun; Cao Xiangqin

    2013-01-01

    The paper concentrates on an equivalent simplification method for connecting a micro-grid system into a distribution network. The method is proposed for studying the interaction between the micro-grid and the distribution network. The micro-grid network, composite load, gas-turbine synchronous generation, and wind generation are reduced to equivalents and connected in parallel at the point of common coupling. A micro-grid system is built and three-phase and single-phase grounded faults are per...

  12. Application of maximum values for radiation exposure and principles for the calculation of radiation dose

    International Nuclear Information System (INIS)

    2000-01-01

    The guide sets out the mathematical definitions and principles involved in the calculation of the equivalent dose and the effective dose, and the instructions concerning the application of the maximum values of these quantities. Further, for monitoring the dose caused by internal radiation, the guide defines the limits derived from annual dose limits (the Annual Limit on Intake and the Derived Air Concentration). Finally, the guide defines the operational quantities to be used in estimating the equivalent dose and the effective dose, and also sets out the definitions of some other quantities and concepts to be used in monitoring radiation exposure. The guide does not include the calculation of patient doses carried out for the purposes of quality assurance

  14. A Technique to Estimate the Equivalent Loss Resistance of Grid-Tied Converters for Current Control Analysis and Design

    DEFF Research Database (Denmark)

    Vidal, Ana; Yepes, Alejandro G.; Fernandez, Francisco Daniel Freijedo

    2015-01-01

    Rigorous analysis and design of the current control loop in voltage source converters (VSCs) requires accurate modeling. The loop behavior can be significantly influenced by the VSC working conditions. To account for such effects, converter losses should be included in the model, which can be done by means of an equivalent series resistance. This paper proposes a method to identify the VSC equivalent loss resistance for the proper tuning of the current control loop. It is based on analysis of the closed-loop transient response provided by a synchronous proportional-integral current controller, according to the internal model principle. The method gives a set of loss resistance values linked to working conditions, which can be used to improve the tuning of the current controllers, either by online adaptation of the controller gains or by open-loop adaptive adjustment of them according to prestored...

  15. Equivalence relations and the reinforcement contingency.

    Science.gov (United States)

    Sidman, M

    2000-07-01

    Where do equivalence relations come from? One possible answer is that they arise directly from the reinforcement contingency. That is to say, a reinforcement contingency produces two types of outcome: (a) 2-, 3-, 4-, 5-, or n-term units of analysis that are known, respectively, as operant reinforcement, simple discrimination, conditional discrimination, second-order conditional discrimination, and so on; and (b) equivalence relations that consist of ordered pairs of all positive elements that participate in the contingency. This conception of the origin of equivalence relations leads to a number of new and verifiable ways of conceptualizing equivalence relations and, more generally, the stimulus control of operant behavior. The theory is also capable of experimental disproof.

  16. Advising Students or Practicing Law: The Formation of Implied Attorney-Client Relationships with Students

    Science.gov (United States)

    Sheridan, Patricia M.

    2014-01-01

    An attorney-client relationship is traditionally created when both parties formally enter into an express agreement regarding the terms of representation and the payment of fees. There are certain circumstances, however, where the attorney-client relationship can be implied from the parties' conduct. An implied attorney-client relationship may…

  17. Nuclear detectors: principles and applications

    International Nuclear Information System (INIS)

    Belhadj, Marouane

    1999-01-01

    Nuclear technology is a vast domain with several applications; in hydrology, for instance, it is used in the analysis of underground water and in carbon-14 dating. Our study consists in presenting nuclear detectors on the basis of their principle of operation and their electronic constitution. However, because of some technical problems, we have not made an in-depth study of their applications, which could certainly have given strong support to our subject. In spite of the existence of high-performance equipment in the centre, the problem of instrument control remains to be solved. Therefore, the calibration of this equipment remains the best guarantee of good counting quality. Besides, it allows us to assess the influence of external and internal parameters on the equipment and the sources of measurement error, and to introduce equivalent corrections. (author). 22 refs

  18. Symmetries of dynamically equivalent theories

    Energy Technology Data Exchange (ETDEWEB)

    Gitman, D.M.; Tyutin, I.V. [Sao Paulo Univ., SP (Brazil). Inst. de Fisica; Lebedev Physics Institute, Moscow (Russian Federation)

    2006-03-15

    A natural and very important development of constrained system theory is a detailed study of the relation between the constraint structure in the Hamiltonian formulation and specific features of the theory in the Lagrangian formulation, especially the relation between the constraint structure and the symmetries of the Lagrangian action. An important preliminary step in this direction is a strict demonstration, and this is the aim of the present article, that the symmetry structures of the Hamiltonian action and of the Lagrangian action are the same. Once this is proved, it is sufficient to consider the symmetry structure of the Hamiltonian action. The latter problem is, in some sense, simpler because the Hamiltonian action is a first-order action. At the same time, the study of the symmetry of the Hamiltonian action naturally involves Hamiltonian constraints as basic objects. One can see that the Lagrangian and Hamiltonian actions are dynamically equivalent. This is why, in the present article, we consider from the very beginning a more general problem: how the symmetry structures of dynamically equivalent actions are related. First, we present some necessary notions and relations concerning infinitesimal symmetries in general, as well as a strict definition of dynamically equivalent actions. Finally, we demonstrate that there exists an isomorphism between classes of equivalent symmetries of dynamically equivalent actions. (author)

  19. The Functional Principle in Gramatica limbii române (Grammar of the Romanian Language – GALR

    Directory of Open Access Journals (Sweden)

    Simona Redeş

    2010-09-01

    Full Text Available This article is intended to be a brief presentation of the main approaches to functionalism in Western linguistics and of how these concepts and principles are reflected in Romanian linguistics, especially since the new Grammar of the Romanian Language, 2005 edition. We focus here on the new taxonomy of word classes and on the functional-syntactic organization, which implies some distinctions between the specific functions of words.

  20. Implied Materiality and Material Disclosures of Credit Ratings

    OpenAIRE

    Eccles, Robert G; Youmans, Timothy John

    2015-01-01

    This first of three papers in our series on materiality in credit ratings will examine the materiality of credit ratings from an “implied materiality” and governance disclosure perspective. In the second paper, we will explore the materiality of environmental, social, and governance (ESG) factors in credit ratings’ methodologies and introduce the concept of “layered materiality.” In the third paper, we will evaluate current and potential credit rating agency (CRA) business models based on our...

  1. Calculation methods for determining dose equivalent

    International Nuclear Information System (INIS)

    Endres, G.W.R.; Tanner, J.E.; Scherpelz, R.I.; Hadlock, D.E.

    1988-01-01

    A series of calculations of neutron fluence as a function of energy in an anthropomorphic phantom was performed to develop a system for determining effective dose equivalent for external radiation sources. Critical organ dose equivalents are calculated and effective dose equivalents are determined using ICRP-26 methods. Quality factors based on both present definitions and ICRP-40 definitions are used in the analysis. The results of these calculations are presented and discussed.
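
    The ICRP-26 procedure the abstract refers to is, at its core, a weighted sum of organ dose equivalents. The sketch below shows that step only; the tissue weighting factors are the ICRP-26 values as commonly quoted (verify against the publication), and the organ doses are hypothetical, not results from this work.

```python
# Illustrative sketch of the ICRP-26 weighted-sum step:
# effective dose equivalent H_E = sum_T w_T * H_T over organ dose equivalents.
W_T_ICRP26 = {
    "gonads": 0.25, "breast": 0.15, "red_bone_marrow": 0.12,
    "lung": 0.12, "thyroid": 0.03, "bone_surfaces": 0.03, "remainder": 0.30,
}

def effective_dose_equivalent(organ_dose_sv, weights=W_T_ICRP26):
    """Weighted sum of organ dose equivalents (Sv)."""
    return sum(weights[t] * organ_dose_sv.get(t, 0.0) for t in weights)

# Hypothetical organ doses in Sv, for illustration only.
example_doses = {"lung": 2.0e-3, "red_bone_marrow": 1.5e-3, "gonads": 0.5e-3}
print(f"H_E = {effective_dose_equivalent(example_doses):.2e} Sv")
```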

  2. The one-dimensional normalised generalised equivalence theory (NGET) for generating equivalent diffusion theory group constants for PWR reflector regions

    International Nuclear Information System (INIS)

    Mueller, E.Z.

    1991-01-01

    An equivalent diffusion theory PWR reflector model is presented, which has as its basis Smith's generalisation of Koebke's Equivalent Theory. This method is an adaptation, in one-dimensional slab geometry, of the Generalised Equivalence Theory (GET). Since the method involves the renormalisation of the GET discontinuity factors at nodal interfaces, it is called the Normalised Generalised Equivalence Theory (NGET) method. The advantages of the NGET method for modelling the ex-core nodes of a PWR are summarized. 23 refs

  3. Wave function collapse implies divergence of average displacement

    OpenAIRE

    Marchewka, A.; Schuss, Z.

    2005-01-01

    We show that propagating a truncated discontinuous wave function by Schrödinger's equation, as asserted by the collapse axiom, gives rise to non-existence of the average displacement of the particle on the line. It also implies that there is no Zeno effect. On the other hand, if the truncation is done so that the reduced wave function is continuous, the average coordinate is finite and there is a Zeno effect. Therefore the collapse axiom of measurement needs to be revised.

  4. The definition of the individual dose equivalent

    International Nuclear Information System (INIS)

    Ehrlich, Margarete

    1986-01-01

    A brief note examines the choice of the present definition of the individual dose equivalent, the new operational dosimetry quantity for external exposure. The consequences of the use of the individual dose equivalent and the danger facing the individual dose equivalent, as currently defined, are briefly discussed. (UK)

  5. Seismic design principles for the German fast breeder reactor SNR2

    International Nuclear Information System (INIS)

    Rangette, A.M.; Peters, K.A.

    1988-01-01

    The leading aim of a seismic design is, besides protection against seismic impacts, not to enhance the overall risk in the absence of seismic vibrations and, secondly, to avoid competition between operational needs and a seismic structural design. This approach is supported by avoiding overconservatism in the assumption of seismic loads and in the calculation of the structural response. Accordingly the seismic principles are stated as follows: restriction to German or equivalent low-seismicity sites with intensities (SSE) lower than VIII at a frequency lower than 10^-4/year; best estimate of seismic input data without further conservatism; no consideration of OBE. The structural design principles are: 1. The secondary character of the seismic excitation is explicitly accounted for; 2. Energy absorption is allowed for by ductility of materials and construction. Accordingly strain criteria are used for failure predictions instead of stress criteria. (author). 1 fig

  6. On the variational principle for the equations of perfect fluid dynamics

    International Nuclear Information System (INIS)

    Serre, D.

    1993-01-01

    One gives a new version of the variational principle δL = 0, L being the usual Lagrangian, for the perfect fluid mechanics. It is formally equivalent to the well-known principle but it gives the first rigorous derivation of the conservation laws (momentum and energy), including the discontinuous case (shock waves, contact discontinuities). Thanks to a new formulation of the constraints, we do not involve any Lagrange multiplier, which in previous works were neither physically relevant, since they do not appear in the Euler equations, nor mathematically relevant. We even give a variational interpretation of the entropy inequality when shock waves occur. Our method covers all aspects of the perfect fluids, including stationary and unstationary motion, compressible and incompressible fluids, axisymmetric case. When the velocity field admits a stream function, the variational principle gives rise to extremal points of the Lagrangian on various infinite dimensional manifolds. For a suitable choice of this manifold, the flow is itself periodic, that is all the fluid particles have a periodic motion with the same period. The flow describes a closed geodesic on some group of diffeomorphisms. (author). 10 refs

  7. 77 FR 32632 - Ambient Air Monitoring Reference and Equivalent Methods: Designation of Three New Equivalent Methods

    Science.gov (United States)

    2012-06-01

    ... Methods: Designation of Three New Equivalent Methods AGENCY: Environmental Protection Agency. ACTION... accordance with 40 CFR Part 53, three new equivalent methods: One for measuring concentrations of nitrogen... INFORMATION: In accordance with regulations at 40 CFR Part 53, the EPA evaluates various methods for...

  8. Beyond Language Equivalence on Visibly Pushdown Automata

    DEFF Research Database (Denmark)

    Srba, Jiri

    2009-01-01

    We study (bi)simulation-like preorder/equivalence checking on the class of visibly pushdown automata and its natural subclasses visibly BPA (Basic Process Algebra) and visibly one-counter automata. We describe generic methods for proving complexity upper and lower bounds for a number of studied preorders and equivalences like simulation, completed simulation, ready simulation, 2-nested simulation preorders/equivalences and bisimulation equivalence. Our main results are that all the mentioned equivalences and preorders are EXPTIME-complete on visibly pushdown automata, PSPACE-complete on visibly one-counter automata and P-complete on visibly BPA. Our PSPACE lower bound for visibly one-counter automata also improves the previously known DP-hardness results for ordinary one-counter automata and one-counter nets. Finally, we study regularity checking problems for visibly pushdown automata...

  9. Bridging the knowledge gap: An analysis of Albert Einstein's popularized presentation of the equivalence of mass and energy.

    Science.gov (United States)

    Kapon, Shulamit

    2014-11-01

    This article presents an analysis of a scientific article written by Albert Einstein in 1946 for the general public that explains the equivalence of mass and energy and discusses the implications of this principle. It is argued that an intelligent popularization of many advanced ideas in physics requires more than the simple elimination of mathematical formalisms and complicated scientific conceptions. Rather, it is shown that Einstein developed an alternative argument for the general public that bypasses the core of the formal derivation of the equivalence of mass and energy to provide a sense of derivation based on the history of science and the nature of scientific inquiry. This alternative argument is supported and enhanced by a variety of explanatory devices orchestrated to coherently support and promote the reader's understanding. The discussion centers on comparisons to other scientific expositions written by Einstein for the general public. © The Author(s) 2013.

  10. The equivalence problem for LL- and LR-regular grammars

    NARCIS (Netherlands)

    Nijholt, Antinus; Gecsec, F.

    It will be shown that the equivalence problem for LL-regular grammars is decidable. Apart from extending the known result for LL(k) grammar equivalence to LL-regular grammar equivalence, we obtain an alternative proof of the decidability of LL(k) equivalence. The equivalence problem for LL-regular

  11. Priority-setting in New Zealand: translating principles into practice.

    Science.gov (United States)

    Ashton, T; Cumming, J; Devlin, N

    2000-07-01

    In May 1998 the New Zealand Health Funding Authority released a discussion paper which proposed a principles-based approach to setting purchasing priorities that incorporates the economic methods of programme budgeting and marginal analysis, and cost-utility analysis. The principles upon which the process was to be based are effectiveness, cost, equity of health outcomes, Maori health and acceptability. This essay describes and critiques issues associated with translating the principles into practice, most particularly the proposed methods for evaluating the effectiveness and measuring the cost of services. It is argued that the proposals make an important contribution towards the development of a method for prioritizing services which challenges our thinking about those services and their goals, and which is systematic, explicit, and transparent. The shift towards 'thinking at the margin' and systematically reviewing the value for money of competing claims on resources is likely to improve the quality of decision-making compared with the status quo. This does not imply that prioritization can, or should, be undertaken by means of any simple formula. Any prioritization process should always be guided by informed judgement. The approach is more appropriate for some services than for others. Key methodological issues that need further consideration include the choice of instrument for measuring health gains, the identification of marginal services, how to combine qualitative and quantitative information, and how to ensure consistency across different levels of decision-making.

  12. Monitoring of Non-Ferrous Wear Debris in Hydraulic Oil by Detecting the Equivalent Resistance of Inductive Sensors

    Directory of Open Access Journals (Sweden)

    Lin Zeng

    2018-03-01

    Full Text Available Wear debris in hydraulic oil contains important information on the operation of equipment, which is important for condition monitoring and fault diagnosis in mechanical equipment. A micro inductive sensor based on the inductive coulter principle is presented in this work. It consists of a straight micro-channel and a 3-D solenoid coil wound on the micro-channel. Instead of detecting the inductance change of the inductive sensor, the equivalent resistance change of the inductive sensor is detected for non-ferrous particle (copper particle monitoring. The simulation results show that the resistance change rate caused by the presence of copper particles is greater than the inductance change rate. Copper particles with sizes ranging from 48 μm to 150 μm were used in the experiment, and the experimental results are in good agreement with the simulation results. By detecting the inductive change of the micro inductive sensor, the detection limit of the copper particles only reaches 70 μm. However, the detection limit can be improved to 48 μm by detecting the equivalent resistance of the inductive sensor. The equivalent resistance method was demonstrated to have a higher detection accuracy than conventional inductive detection methods for non-ferrous particle detection in hydraulic oil.

  13. The biologically equivalent dose BED - Is the approach for calculation of this factor really a reliable basis?

    International Nuclear Information System (INIS)

    Jensen, J.M.; Zimmermann, J.

    2000-01-01

    To predict the effect on tumours in radiotherapy, especially with regard to irreversible effects, but also to allow retrospective assessment, the so-called L-Q model is relied on at present. Internal organ-specific parameters, such as α, β, γ, T_p, T_k, and ρ, as well as external parameters such as D, d, n, V, and V_ref, were used for the determination of the biologically equivalent dose BED. While the external parameters are determinable with small deviations, the internal parameters are subject to biological variety and dispersion: in some cases the lowest assumed value is Δ = ±25%. This margin of error propagates to the biologically equivalent dose by means of the principle of superposition of errors. In some selected cases (lung, kidney, skin, rectum) these margins of error were calculated as examples. The input errors, especially those of the internal parameters, cause a mean error Δ on the biologically equivalent dose, and a dispersion of the single fraction dose d, depending on the organ under consideration, of approximately 8-30%. Hence only a very critical and cautious application of such L-Q algorithms in expert proceedings follows, and in radiotherapy more experience-based decisions are recommended, instead of acting only upon simple two-dimensional mechanistic ideas. (orig.) [de
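
    As a concrete illustration of how parameter uncertainty propagates, the sketch below uses the standard LQ-model expression for the biologically effective dose, BED = n·d·(1 + d/(α/β)), and first-order error propagation for the α/β ratio alone; the numerical values are illustrative and are not taken from the paper.

```python
# Minimal sketch (standard LQ-model formula, not the authors' full model):
# BED = n*d*(1 + d/(alpha/beta)) and first-order propagation of an
# uncertainty in alpha/beta onto BED. Numbers are illustrative.
def bed(n, d, ab):
    """BED in Gy for n fractions of d Gy and alpha/beta ratio ab (Gy)."""
    return n * d * (1.0 + d / ab)

def bed_uncertainty(n, d, ab, rel_err_ab):
    """First-order error on BED from a relative error on alpha/beta alone."""
    dBED_dab = -n * d**2 / ab**2          # partial derivative of BED w.r.t. alpha/beta
    return abs(dBED_dab) * ab * rel_err_ab

n, d, ab = 30, 2.0, 3.0                    # 30 x 2 Gy, late-responding tissue (assumed)
b = bed(n, d, ab)
db = bed_uncertainty(n, d, ab, 0.25)       # +-25% on alpha/beta, as quoted in the abstract
print(f"BED = {b:.1f} Gy, uncertainty from alpha/beta alone ~ {db:.1f} Gy ({db/b:.0%})")
```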

  14. On the connection between complementarity and uncertainty principles in the Mach–Zehnder interferometric setting

    International Nuclear Information System (INIS)

    Bosyk, G M; Portesi, M; Holik, F; Plastino, A

    2013-01-01

    We revisit the connection between the complementarity and uncertainty principles of quantum mechanics within the framework of Mach–Zehnder interferometry. We focus our attention on the trade-off relation between complementary path information and fringe visibility. This relation is equivalent to the uncertainty relation of Schrödinger and Robertson for a suitably chosen pair of observables. We show that it is equivalent as well to the uncertainty inequality provided by Landau and Pollak. We also study the relationship of this trade-off relation with a family of entropic uncertainty relations based on Rényi entropies. There is no equivalence in this case, but the different values of the entropic parameter do define regimes that provide us with a tool to discriminate between non-trivial states of minimum uncertainty. The existence of such regimes agrees with previous results of Luis (2011 Phys. Rev. A 84 034101), although their meaning was not sufficiently clear. We discuss the origin of these regimes with the intention of gaining a deeper understanding of entropic measures. (paper)
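
    The path-information/visibility trade-off the abstract builds on can be checked numerically. The sketch below verifies the textbook predictability-visibility relation P^2 + V^2 <= 1 on randomly generated path states; it illustrates that standard relation only, not the paper's entropic analysis.

```python
# Toy check (standard textbook relation): for a single particle in a
# Mach-Zehnder interferometer, path predictability P and fringe visibility V
# satisfy P**2 + V**2 <= 1, with equality for pure states.
import numpy as np

rng = np.random.default_rng(1)

def predictability_visibility(rho):
    """P = |rho00 - rho11|, V = 2|rho01| for a 2x2 path density matrix."""
    P = abs(rho[0, 0] - rho[1, 1])
    V = 2 * abs(rho[0, 1])
    return float(P), float(V)

for _ in range(3):
    psi = rng.standard_normal(2) + 1j * rng.standard_normal(2)   # random pure state
    psi /= np.linalg.norm(psi)
    rho = np.outer(psi, psi.conj())
    P, V = predictability_visibility(rho)
    print(f"pure : P^2 + V^2 = {P**2 + V**2:.6f}")   # equals 1

# Mixing two states pushes the sum strictly below 1.
rho_mixed = 0.5 * np.diag([1.0, 0.0]) + 0.5 * np.full((2, 2), 0.5)
P, V = predictability_visibility(rho_mixed)
print(f"mixed: P^2 + V^2 = {P**2 + V**2:.6f}")       # < 1
```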

  15. Equivalent damage of loads on pavements

    CSIR Research Space (South Africa)

    Prozzi, JA

    2009-05-26

    Full Text Available This report describes a new methodology for the determination of Equivalent Damage Factors (EDFs) of vehicles with multiple axle and wheel configurations on pavements. The basic premise of this new procedure is that "equivalent pavement response...

  16. The Rethinking of the Economic Activity Based on Principles of Eco-Efficiency

    Directory of Open Access Journals (Sweden)

    Daniela VÎRJAN

    2011-07-01

    Full Text Available Drought, floods, damaging storms, heat waves, acid rain, climate changes are but a few of the consequences of human action upon the environment. Can we possibly live against the environment? The answer is NO, and as such we must run an economy which respects the principles of eco-efficiency because only so can economic progress go on. Green economy is a great opportunity for all of the world’s countries, and is a real economy which keeps the resource-needs and environment relation in balance, aims towards quality and not quantity, lays emphasis on regeneration, recycling, reuse and creativity. Eco-efficiency implies both innovation towards a high degree of product dematerialization, services and systems, alongside with greatly changing current production and consumption practices. If we produce based on the principle of eco-efficiency we can reduce the effects of the profound economic, ecological, socio-political and cultural-spiritual crisis which marks our planet and countries.

  17. Receipt or Implied Withdrawal of Tacit in Criminal Procedure as Violation of Circumstances Due to the Principles of Lawsuit and Motivation of Decisions

    Directory of Open Access Journals (Sweden)

    Marcelo Serrano Souza

    2016-10-01

    Full Text Available This article analyzes the implicit or tacit receipt of the complaint in criminal proceedings in the light of constitutional principles of due process and the reasons for decisions. The study shows the need for a rational discourse to legitimate judicial decisions in relation to the claimants. The issue is addressed by the deductive method, through doctrinal and jurisprudential research. The article aims to answer whether and to what extent the implicit or tacit receipt of the complaint involves a violation of due process and the reasons for decisions from the perspective of a coherent legal system.

  18. Retina as Reciprocal Spatial Fourier Transform Space Implies "Wave-transformation" Functions, String Theory, the Inappropriate Uncertainty Principle, and Predicts "Quarked" Protons.

    Science.gov (United States)

    Mc Leod, Roger David; Mc Leod, David M.

    2007-10-01

    Vision, via transform space: "Nature behaves in a reciprocal way;" also, Rect x pressure-input sense-reports as Sinc p, indicating brain interprets reciprocal "p" space as object space. Use Mott's and Sneddon's Wave Mechanics and Its Applications. Wave transformation functions are strings of positron, electron, proton, and neutron; uncertainty is a semantic artifact. Neutrino-string de Broglie-Schrödinger wave-function models for electron, positron, suggest three-quark models for protons, neutrons. Variably vibrating neutrino-quills of this model, with appropriate mass-energy, can be a vertical proton string, quills leftward; thread string circumferentially, forming three interlinked circles with "overpasses". Diameters are 2:1:2, center circle has quills radially outward; call it a down quark, charge -1/3, charge 2/3 for outward quills, the up quarks of outer circles. String overlap summations are nodes; nodes also far left and right. Strong nuclear forces may be -px. "Dislodging" positron with neutrino switches quark-circle configuration to 1:2:1, "downers" outside. Unstable neutron charge is 0. Atoms build. With scale factors, retinal/vision's, and quantum mechanics', spatial Fourier transforms/inverses are equivalent.

  19. Implosion scaling and hydro dynamically equivalent target design - Strategy for proof of principle of high gain inertial fusion

    International Nuclear Information System (INIS)

    Murakami, M.; Nishihara, K.; Azechi, H.; Nakatsuka, M.; Kanabe, T.; Miyanaga, N.

    2003-01-01

    Scaling laws for hydro dynamically similar implosions are derived by applying Lie group analysis to the set of partial differential equations for the hydrodynamic system. Physically this implies that any fluid system belonging to a common similarity group evolves quite in the same manner including hydrodynamic instabilities. The scalings strongly depend on the description of the energy transport, i.e., whether the fluid system is heat conductive or adiabatic. Under a fully specified group transformation including prescriptions on the laser wavelength and the ionization state, the hydrodynamic similarity can still be preserved even when the system is cooperated with such other energy sources as classical laser absorption, hot electrons, local alpha heating, and bremsstrahlung loss. The results are expected to give the basis of target design and diagnostics for scaled high gain experiments in future. (author)

  20. 76 FR 55903 - Missisquoi River Technologies; Notice of Termination of Exemption By Implied Surrender and...

    Science.gov (United States)

    2011-09-09

    ... Technologies; Notice of Termination of Exemption By Implied Surrender and Soliciting Comments, Protests, and... Commission: a. Type of Proceeding: Termination of exemption by implied surrender. b. Project No.: 10172-038..., among other things, that the Commission reserves the right to revoke an exemption if any term or...

  1. Measurement invariance within and between individuals: a distinct problem in testing the equivalence of intra- and inter-individual model structures.

    Science.gov (United States)

    Adolf, Janne; Schuurman, Noémi K; Borkenau, Peter; Borsboom, Denny; Dolan, Conor V

    2014-01-01

    We address the question of equivalence between modeling results obtained on intra-individual and inter-individual levels of psychometric analysis. Our focus is on the concept of measurement invariance and the role it may play in this context. We discuss this in general against the background of the latent variable paradigm, complemented by an operational demonstration in terms of a linear state-space model, i.e., a time series model with latent variables. Implemented in a multiple-occasion and multiple-subject setting, the model simultaneously accounts for intra-individual and inter-individual differences. We consider the conditions-in terms of invariance constraints-under which modeling results are generalizable (a) over time within subjects, (b) over subjects within occasions, and (c) over time and subjects simultaneously thus implying an equivalence-relationship between both dimensions. Since we distinguish the measurement model from the structural model governing relations between the latent variables of interest, we decompose the invariance constraints into those that involve structural parameters and those that involve measurement parameters and relate to measurement invariance. Within the resulting taxonomy of models, we show that, under the condition of measurement invariance over time and subjects, there exists a form of structural equivalence between levels of analysis that is distinct from full structural equivalence, i.e., ergodicity. We demonstrate how measurement invariance between and within subjects can be tested in the context of high-frequency repeated measures in personality research. Finally, we relate problems of measurement variance to problems of non-ergodicity as currently discussed and approached in the literature.

  2. 7 CFR 1030.54 - Equivalent price.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 9 2010-01-01 2009-01-01 true Equivalent price. 1030.54 Section 1030.54 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements... Handling Class Prices § 1030.54 Equivalent price. See § 1000.54. ...

  3. Bernoulli's Principle

    Science.gov (United States)

    Hewitt, Paul G.

    2004-01-01

    Some teachers have difficulty understanding Bernoulli's principle particularly when the principle is applied to the aerodynamic lift. Some teachers favor using Newton's laws instead of Bernoulli's principle to explain the physics behind lift. Some also consider Bernoulli's principle too difficult to explain to students and avoid teaching it…

  4. An Hilbert space approach for a class of arbitrage free implied volatilities models

    OpenAIRE

    Brace, A.; Fabbri, G.; Goldys, B.

    2007-01-01

    We present a Hilbert space formulation for a set of implied volatility models introduced in [BraceGoldys01] in which the authors studied conditions for a family of European call options, varying the maturity time and the strike price $T$ and $K$, to be arbitrage free. The arbitrage-free conditions give a system of stochastic PDEs for the evolution of the implied volatility surface $\hat\sigma_t(T,K)$. We will focus on the family obtained fixing a strike $K$ and varying $T$. In order to...

  5. 77 FR 2057 - Aquamac Corporation; Notice of Termination of License by Implied Surrender and Soliciting...

    Science.gov (United States)

    2012-01-13

    ... Corporation; Notice of Termination of License by Implied Surrender and Soliciting Comments and Motions To.... Type of Proceeding: Termination of License by Implied Surrender. b. Project No.: 2927-006. c. Date... authorized, the licensee is in violation of the terms and conditions of the license. l. This notice is...

  6. Equivalence in Ventilation and Indoor Air Quality

    Energy Technology Data Exchange (ETDEWEB)

    Sherman, Max; Walker, Iain; Logue, Jennifer

    2011-08-01

    We ventilate buildings to provide acceptable indoor air quality (IAQ). Ventilation standards (such as American Society of Heating, Refrigerating, and Air-Conditioning Engineers [ASHRAE] Standard 62) specify minimum ventilation rates without taking into account the impact of those rates on IAQ. Innovative ventilation management is often a desirable element of reducing energy consumption or improving IAQ or comfort. Variable ventilation is one innovative strategy. To use variable ventilation in a way that meets standards, it is necessary to have a method for determining equivalence in terms of either ventilation or indoor air quality. This study develops methods to calculate either equivalent ventilation or equivalent IAQ. We demonstrate that equivalent ventilation can be used as the basis for dynamic ventilation control, reducing peak load and infiltration of outdoor contaminants. We also show that equivalent IAQ could allow some contaminants to exceed current standards if other contaminants are more stringently controlled.
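
    One way to picture the equivalence idea is to compare a variable ventilation schedule with a constant reference rate by the time-averaged concentration of a continuously emitted contaminant. The toy single-zone model below is only a sketch of such a comparison; the volume, source strength, schedules, and reference rate are assumed values, not figures or methods from the report.

```python
# Rough sketch of "equivalent ventilation" (toy single-zone model): a variable
# schedule is compared with a constant rate via the time-averaged concentration
# of a continuously emitted contaminant.
import numpy as np

V = 300.0                     # zone volume, m^3 (assumed)
S = 5.0                       # contaminant source, mg/h (assumed)
dt = 0.1                      # time step, h
t = np.arange(0, 24, dt)

def mean_concentration(ach):
    """Time-averaged concentration for an air-change-rate schedule ach(t) [1/h]."""
    C, out = 0.0, []
    for a in ach:
        C += dt * (S / V - a * C)          # dC/dt = S/V - (Q/V) C
        out.append(C)
    return np.mean(out[len(out) // 2:])    # average after spin-up

ach_const = np.full_like(t, 0.35)                           # constant reference rate
ach_var = np.where((t % 24 > 8) & (t % 24 < 20), 0.6, 0.1)  # ventilate more by day

print(f"constant schedule mean C : {mean_concentration(ach_const):.3f} mg/m^3")
print(f"variable schedule mean C : {mean_concentration(ach_var):.3f} mg/m^3")
# In this sense the variable schedule is "equivalent" if its mean concentration
# (exposure) does not exceed that of the constant reference rate.
```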

  7. Energy, Metaphysics, and Space: Ernst Mach's Interpretation of Energy Conservation as the Principle of Causality

    Science.gov (United States)

    Guzzardi, Luca

    2014-06-01

    This paper discusses Ernst Mach's interpretation of the principle of energy conservation (EC) in the context of the development of energy concepts and ideas about causality in nineteenth-century physics and theory of science. In doing this, it focuses on the close relationship between causality, energy conservation and space in Mach's antireductionist view of science. Mach expounds his thesis about EC in his first historical-epistemological essay, Die Geschichte und die Wurzel des Satzes von der Erhaltung der Arbeit (1872): far from being a new principle, it is used from the early beginnings of mechanics independently from other principles; in fact, EC is a pre-mechanical principle which is generally applied in investigating nature: it is, indeed, nothing but a form of the principle of causality. The paper focuses on the scientific-historical premises and philosophical underpinnings of Mach's thesis, beginning with the classic debate on the validity and limits of the notion of cause by Hume, Kant, and Helmholtz. Such reference also implies a discussion of the relationship between causality on the one hand and space and time on the other. This connection plays a major role for Mach, and in the final paragraphs its importance is argued in order to understand his antireductionist perspective, i.e. the rejection of any attempt to give an ultimate explanation of the world via reduction of nature to one fundamental set of phenomena.

  8. Orientifold Planar Equivalence: The Chiral Condensate

    DEFF Research Database (Denmark)

    Armoni, Adi; Lucini, Biagio; Patella, Agostino

    2008-01-01

    The recently introduced orientifold planar equivalence is a promising tool for solving non-perturbative problems in QCD. One of the predictions of orientifold planar equivalence is that the chiral condensates of a theory with $N_f$ flavours of Dirac fermions in the symmetric (or antisymmetric...

  9. 7 CFR 1005.54 - Equivalent price.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 9 2010-01-01 2009-01-01 true Equivalent price. 1005.54 Section 1005.54 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements... Handling Class Prices § 1005.54 Equivalent price. See § 1000.54. Uniform Prices ...

  10. 7 CFR 1126.54 - Equivalent price.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 9 2010-01-01 2009-01-01 true Equivalent price. 1126.54 Section 1126.54 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements... Handling Class Prices § 1126.54 Equivalent price. See § 1000.54. Producer Price Differential ...

  11. 7 CFR 1001.54 - Equivalent price.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 9 2010-01-01 2009-01-01 true Equivalent price. 1001.54 Section 1001.54 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements... Handling Class Prices § 1001.54 Equivalent price. See § 1000.54. Producer Price Differential ...

  12. 7 CFR 1032.54 - Equivalent price.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 9 2010-01-01 2009-01-01 true Equivalent price. 1032.54 Section 1032.54 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements... Handling Class Prices § 1032.54 Equivalent price. See § 1000.54. Producer Price Differential ...

  13. 7 CFR 1033.54 - Equivalent price.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 9 2010-01-01 2009-01-01 true Equivalent price. 1033.54 Section 1033.54 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements... Handling Class Prices § 1033.54 Equivalent price. See § 1000.54. Producer Price Differential ...

  14. 7 CFR 1131.54 - Equivalent price.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 9 2010-01-01 2009-01-01 true Equivalent price. 1131.54 Section 1131.54 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements... Handling Class Prices § 1131.54 Equivalent price. See § 1000.54. Uniform Prices ...

  15. 7 CFR 1006.54 - Equivalent price.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 9 2010-01-01 2009-01-01 true Equivalent price. 1006.54 Section 1006.54 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements... Handling Class Prices § 1006.54 Equivalent price. See § 1000.54. Uniform Prices ...

  16. 7 CFR 1007.54 - Equivalent price.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 9 2010-01-01 2009-01-01 true Equivalent price. 1007.54 Section 1007.54 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements... Handling Class Prices § 1007.54 Equivalent price. See § 1000.54. Uniform Prices ...

  17. Revisiting the long memory dynamics of implied-realized volatility relation: A new evidence from wavelet band spectrum regression

    OpenAIRE

    Barunik, Jozef; Barunikova, Michaela

    2015-01-01

    This paper revisits the fractional co-integrating relationship between ex-ante implied volatility and ex-post realized volatility. Previous studies on stock index options have found biases and inefficiencies in implied volatility as a forecast of future volatility. It is argued that the concept of corridor implied volatility (CIV) should be used instead of the popular model-free option-implied volatility (MFIV) when assessing the relation as the latter may introduce bias to the estimation. In...
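
    For context, the model-free and corridor implied variances mentioned above are both strike integrals of out-of-the-money option prices; the corridor version simply truncates the integration range. The sketch below evaluates both on synthetic Black-Scholes prices with a flat 20% volatility; all inputs are assumptions, and this is not the paper's estimator or data.

```python
# Exploratory sketch (standard static-replication formulas with assumed flat
# Black-Scholes inputs): model-free implied variance integrates OTM option
# prices over all strikes; corridor implied variance restricts the strikes.
import math

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_price(F, K, T, sigma, call=True):
    """Black-76 style price with zero rates (forward = spot)."""
    d1 = (math.log(F / K) + 0.5 * sigma**2 * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    if call:
        return F * norm_cdf(d1) - K * norm_cdf(d2)
    return K * norm_cdf(-d2) - F * norm_cdf(-d1)

def implied_variance(F, T, sigma, k_lo, k_hi, n=4000):
    """2/T * integral of OTM option price / K^2 over strikes [k_lo, k_hi]."""
    dk = (k_hi - k_lo) / n
    total = 0.0
    for i in range(n):
        K = k_lo + (i + 0.5) * dk
        otm = bs_price(F, K, T, sigma, call=(K >= F))
        total += otm / K**2 * dk
    return 2.0 / T * total

F, T, sigma = 100.0, 0.5, 0.20
mfiv = implied_variance(F, T, sigma, 1.0, 1000.0)   # effectively all strikes
civ = implied_variance(F, T, sigma, 70.0, 130.0)    # strike corridor
print(f"MFIV vol ~ {math.sqrt(mfiv):.4f} (input 0.20)")
print(f"CIV  vol ~ {math.sqrt(civ):.4f} (corridor 70-130)")
```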

  18. The Complexity of Identifying Large Equivalence Classes

    DEFF Research Database (Denmark)

    Skyum, Sven; Frandsen, Gudmund Skovbjerg; Miltersen, Peter Bro

    1999-01-01

    We prove that at least ((3k−4)/(k(2k−3)))·C(n,2) − O(k) equivalence tests and no more than (2/k)·C(n,2) + O(n) equivalence tests are needed in the worst case to identify the equivalence classes with at least k members in a set of n elements. The upper bound is an improvement by a factor of 2 compared to known res...

  19. The role of implied motion in engaging audiences for health promotion: encouraging naps on a college campus.

    Science.gov (United States)

    Mackert, Michael; Lazard, Allison; Guadagno, Marie; Hughes Wagner, Jessica

    2014-01-01

    Lack of sleep among college students negatively impacts health and academic outcomes. Building on research that implied motion imagery increases brain activity, this project tested visual design strategies to increase viewers' engagement with a health communication campaign promoting napping to improve sleep habits. PARTICIPANTS (N = 194) were recruited from a large southwestern university in October 2012. Utilizing an experimental design, participants were assigned to 1 of 3 conditions: an implied motion superhero spokes-character, a static superhero spokes-character, and a control group. The use of implied motion did not achieve the hypothesized effect on message elaboration, but superheroes are a promising persuasive tool for health promotion campaigns for college audiences. Implications for sleep health promotion campaigns and the role of implied motion in message design strategies are discussed, as well as future directions for research on the depiction of implied motion as it relates to theoretical development.

  20. 7 CFR 1124.54 - Equivalent price.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 9 2010-01-01 2009-01-01 true Equivalent price. 1124.54 Section 1124.54 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements... Regulating Handling Class Prices § 1124.54 Equivalent price. See § 1000.54. Producer Price Differential ...

  1. EQUIVALENT MODELS IN COVARIANCE STRUCTURE-ANALYSIS

    NARCIS (Netherlands)

    LUIJBEN, TCW

    1991-01-01

    Defining equivalent models as those that reproduce the same set of covariance matrices, necessary and sufficient conditions are stated for the local equivalence of two expanded identified models M1 and M2 when fitting the more restricted model M0. Assuming several regularity conditions, the rank

  2. Averaging principle for second-order approximation of heterogeneous models with homogeneous models.

    Science.gov (United States)

    Fibich, Gadi; Gavious, Arieh; Solan, Eilon

    2012-11-27

    Typically, models with a heterogeneous property are considerably harder to analyze than the corresponding homogeneous models, in which the heterogeneous property is replaced by its average value. In this study we show that any outcome of a heterogeneous model that satisfies the two properties of differentiability and symmetry is O(ε^2) equivalent to the outcome of the corresponding homogeneous model, where ε is the level of heterogeneity. We then use this averaging principle to obtain new results in queuing theory, game theory (auctions), and social networks (marketing).
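
    A minimal numerical illustration of the stated O(ε^2) equivalence, using a generic symmetric and differentiable outcome rather than any specific application from the paper (the outcome function and heterogeneity pattern below are assumptions).

```python
# Numerical illustration: a symmetric, differentiable outcome of a
# heterogeneous model deviates from the homogeneous one by O(eps^2).
import numpy as np

rng = np.random.default_rng(2)
n = 10
delta = rng.standard_normal(n)
delta -= delta.mean()               # zero-mean heterogeneity pattern
mu = 2.0

def outcome(rates):
    """A symmetric, smooth outcome: mean of 1/rate across agents."""
    return np.mean(1.0 / rates)

for eps in (0.2, 0.1, 0.05, 0.025):
    het = outcome(mu * (1.0 + eps * delta))   # heterogeneous model
    hom = outcome(np.full(n, mu))             # homogeneous counterpart
    print(f"eps={eps:5.3f}  |difference| = {abs(het - hom):.2e}")
# Halving eps divides the gap by roughly 4, i.e. the difference scales as eps^2.
```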

  3. On a variational principle for shape optimization and elliptic free boundary problems

    Directory of Open Access Journals (Sweden)

    Raúl B. González De Paz

    2009-02-01

    Full Text Available A variational principle for several free boundary value problems using a relaxation approach is presented. The relaxed energy functional is concave and it is defined on a convex set, so that the minimizing points are characteristic functions of sets. As a consequence of the first order optimality conditions, it is shown that the corresponding sets are domains bounded by free boundaries, so that the equivalence of the solution of the relaxed problem with the solutions of several free boundary value problems is proved. Keywords: Calculus of variations, optimization, free boundary problems.

  5. Does “quorum sensing” imply a new type of biological information?

    DEFF Research Database (Denmark)

    Bruni, Luis Emilio

    2002-01-01

    When dealing with biological communication and information, unifying concepts are necessary in order to couple the different “codes” that are being inductively “cracked” and defined at different emergent and “de-emergent” levels of the biological hierarchy. In this paper I compare the type of biological information implied by genetic information with that implied in the concept of “quorum sensing” (which refers to a prokaryotic cell-to-cell communication system) in order to explore if such integration is being achieved. I use the Lux operon paradigm and the Vibrio fischeri – Euprymna scolopes symbiotic partnership to exemplify the emergence of informational contexts along the biological hierarchy (from molecules to ecologies). I suggest that the biosemiotic epistemological framework can play an integrative role to overcome the limits of dyadic mechanistic descriptions when relating...

  6. Individual chaos implies collective chaos for weakly mixing discrete dynamical systems

    International Nuclear Information System (INIS)

    Liao Gongfu; Ma Xianfeng; Wang Lidong

    2007-01-01

    Let (X,d) be a metric space and (X,f) a discrete dynamical system, where f:X->X is a continuous function. Let f-bar denote the natural extension of f to the space of all non-empty compact subsets of X endowed with the Hausdorff metric induced by d. In this paper we investigate some dynamical properties of f and f-bar. It is proved that f is weakly mixing (mixing) if and only if f-bar is weakly mixing (mixing, respectively). From this, we deduce that weak-mixing of f implies transitivity of f-bar; further, if f is mixing or weakly mixing, then chaoticity of f (individual chaos) implies chaoticity of f-bar (collective chaos), and if X is a closed interval then f-bar is chaotic (in the sense of Devaney) if and only if f is weakly mixing.

  7. Covariance of time-ordered products implies local commutativity of fields

    International Nuclear Information System (INIS)

    Greenberg, O.W.

    2006-01-01

    We formulate Lorentz covariance of a quantum field theory in terms of covariance of time-ordered products (or other Green's functions). This formulation of Lorentz covariance implies spacelike local commutativity or anticommutativity of fields, sometimes called microscopic causality or microcausality. With this formulation microcausality does not have to be taken as a separate assumption

  8. THE LEVEL OF PROCESS MANAGEMENT PRINCIPLES APPLICATION IN SMEs IN THE SELECTED REGION OF THE CZECH REPUBLIC

    Directory of Open Access Journals (Sweden)

    Ladislav Rolínek

    2014-10-01

    Full Text Available This paper presents a methodology for calculating an indicator of the implementation of process management in SMEs (MPP) and an analysis of the use of process management principles based on the number of employees. The data come from a questionnaire survey conducted in 2011 among 187 small and medium-sized enterprises operating in the South Bohemian Region of the Czech Republic. The level of process management implementation in enterprises can be determined by evaluating the application of its principles (Truneček, 2003; Rolínek et al., 2012). The designed composite indicator MPP reflects the degree of implementation of the principles of process management. MPP is the sum of the points assigned to the individual principles of process management, with a maximum score of 21. Enterprises rated 16-21 points are considered process managed, 6-15 points partially process managed, and less than 6 points procedurally unmanaged. Based on this indicator, process management principles are applied most in medium-sized enterprises and least in micro-enterprises, which implies that utilization of the principles of process management increases with the number of employees. The results were tested using a chi-square goodness-of-fit test and the correlation coefficient.
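
    The scoring rule in the abstract translates directly into a small classification function. The thresholds below are those stated above; the per-principle points in the example are hypothetical inputs, not survey data.

```python
# Sketch of the MPP scoring rule: sum the points awarded for the individual
# process-management principles (maximum 21) and classify the enterprise.
def classify_mpp(principle_points):
    mpp = sum(principle_points)
    if not 0 <= mpp <= 21:
        raise ValueError("MPP must lie between 0 and 21")
    if mpp >= 16:
        return mpp, "process managed"
    if mpp >= 6:
        return mpp, "partially process managed"
    return mpp, "procedurally unmanaged"

# Hypothetical firm: seven principle scores summing to 16.
print(classify_mpp([3, 2, 3, 1, 2, 3, 2]))   # -> (16, 'process managed')
```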

  9. Basic humanitarian principles applicable to non-nationals.

    Science.gov (United States)

    Goodwin-gill, G S; Jenny, R K; Perruchoud, R

    1985-01-01

    This article examines the general status in international law of certain fundamental human rights to determine the minimum "no derogation" standards, and then surveys a number of formal agreements between states governing migration matters, while examining some of the standard-setting work undertaken by the International Labor Organization (ILO) and other institutions. Article 13 of the Universal Declaration of Human Rights proclaims the right of everyone to leave any country, including his or her own. The anti-discrimination provision is widely drawn and includes national or social origin, birth, or other status. Non-discrimination is frequently the core issue in migration matters; it offers the basis for a principles approach to questions involving non-nationals and their methodological analysis, as well as a standard for the progressive elaboration of institutions and practices. As a general rule, ILO conventions give particular importance to the principle of choice of methods by states for the implementation of standards, as well as to the principle of progressive implementation. Non-discrimination implies equality of opportunity in the work field, in remuneration, job opportunity, trade union rights and benefits, social security, taxation, medical treatment, and accommodation; basic legal guarantees are also matters of concern to migrant workers, including termination of employment, non-renewal of work permits, and expulsion. The generality of human rights holds not because the individual is or is not a member of a particular group: claims to such rights are not determinable according to membership, but according to the character of the right in question. The individualized aspect of fundamental human rights requires a case-by-case consideration of claims, and the recognition that to all persons now certain special duties are owed.

  10. Problems of Equivalence in Shona-English Bilingual Dictionaries

    African Journals Online (AJOL)

    rbr

    ... translation equivalents in Shona-English dictionaries where lexicographers will be dealing with divergent languages and cultures, traditional practices of lexicography and the absence of reliable ... ideal in translation is to achieve structural and semantic equivalence. Absolute equivalence between any two ...

  11. 10 CFR 474.3 - Petroleum-equivalent fuel economy calculation.

    Science.gov (United States)

    2010-01-01

    ... 10 Energy 3 2010-01-01 2010-01-01 false Petroleum-equivalent fuel economy calculation. 474.3..., DEVELOPMENT, AND DEMONSTRATION PROGRAM; PETROLEUM-EQUIVALENT FUEL ECONOMY CALCULATION § 474.3 Petroleum-equivalent fuel economy calculation. (a) The petroleum-equivalent fuel economy for an electric vehicle is...

  12. The equivalence of a human observer and an ideal observer in binary diagnostic tasks

    Science.gov (United States)

    He, Xin; Samuelson, Frank; Gallas, Brandon D.; Sahiner, Berkman; Myers, Kyle

    2013-03-01

    The Ideal Observer (IO) is "ideal" for given data populations. In the image perception process, as the raw images are degraded by factors such as display and eye optics, there is an equivalent IO (EIO). The EIO uses as its data the statistical information that exits the perception/cognitive degradations. We assume a human observer who has received sufficient training, e.g., a radiologist, and hypothesize that such a human observer can be modeled as if he were an EIO. To measure the likelihood ratio (LR) distributions of an EIO, we formalize experimental design principles that encourage rationality based on von Neumann and Morgenstern's (vNM) axioms. We present examples to show that many observer study design refinements, although motivated explicitly by empirical principles, implicitly encourage rationality. Our hypothesis is supported by a recent review paper on ROC curve convexity by Pesce, Metz, and Berbaum. We also provide additional evidence based on a collection of observer studies in medical imaging. EIO theory shows that the "sub-optimal" performance of a human observer can be mathematically formalized in the form of an IO, and measured through rationality encouragement.

  13. Can we replace CAPM and the Three-Factor model with Implied Cost of Capital?

    OpenAIRE

    Löthman, Robert; Pettersson, Eric

    2014-01-01

    Researchers criticize predominant expected return models for being imprecise and based on fundamentally flawed assumptions. This dissertation evaluates the abilities of Implied Cost of Capital, CAPM and the Three-Factor model to estimate returns. We study each model's expected return association with realized returns and test for abnormal returns. Our sample covers the period 2000 to 2012 and includes 2916 US firms. We find that Implied Cost of Capital has a stronger association with realized returns th...

  14. Course design via Equivalency Theory supports equivalent student grades and satisfaction in online and face-to-face psychology classes

    Directory of Open Access Journals (Sweden)

    David eGarratt-Reed

    2016-05-01

    Full Text Available There has been a recent rapid growth in the number of psychology courses offered online through institutions of higher education. The American Psychological Association (APA) has highlighted the importance of ensuring the effectiveness of online psychology courses. Despite this, there have been inconsistent findings regarding student grades, satisfaction, and retention in online psychology units. Equivalency Theory posits that online and classroom-based learners will attain equivalent learning outcomes when equivalent learning experiences are provided. We present a case study of an online introductory psychology unit designed to provide equivalent learning experiences to the pre-existing face-to-face version of the unit. Academic performance, student feedback, and retention data from 866 Australian undergraduate psychology students were examined to assess whether the online unit produced comparable outcomes to the ‘traditional’ unit delivered face-to-face. Student grades did not significantly differ between modes of delivery, except for a group-work based assessment where online students performed more poorly. Student satisfaction was generally high in both modes of the unit, with group-work the key source of dissatisfaction in the online unit. The results provide partial support for Equivalency Theory. The group-work based assessment did not provide an equivalent learning experience for students in the online unit, highlighting the need for further research to determine effective methods of engaging students in online group activities. Consistent with previous research, retention rates were significantly lower in the online unit, indicating the need to develop effective strategies to increase online retention rates. While this study demonstrates successes in presenting online students with an equivalent learning experience, we recommend that future research investigates means of successfully facilitating collaborative group-work assessment

  15. No-Arbitrage Condition of Option Implied Volatility and Bandwidth Selection

    Czech Academy of Sciences Publication Activity Database

    Kopa, Miloš; Tichý, T.

    2014-01-01

    Vol. 17, No. 3 (2014), pp. 751-755 ISSN 0972-0073 R&D Projects: GA ČR(CZ) GA13-25911S Institutional support: RVO:67985556 Keywords: Option Pricing * Implied Volatility * DAX Index * Local polynomial smoothing Subject RIV: AH - Economics Impact factor: 0.222, year: 2014 http://library.utia.cas.cz/separaty/2014/E/kopa-0429805.pdf

  16. Matching of equivalent field regions

    DEFF Research Database (Denmark)

    Appel-Hansen, Jørgen; Rengarajan, S.B.

    2005-01-01

    In aperture problems, integral equations for equivalent currents are often found by enforcing matching of equivalent fields. The enforcement is made in the aperture surface region adjoining the two volumes on each side of the aperture. In the case of an aperture in a planar perfectly conducting...... screen, having the same homogeneous medium on both sides and an impressed current on one side, an alternative procedure is relevant. We make use of the fact that in the aperture the tangential component of the magnetic field due to the induced currents in the screen is zero. The use of such a procedure...... shows that equivalent currents can be found by a consideration of only one of the two volumes into which the aperture plane divides the space. Furthermore, from a consideration of an automatic matching at the aperture, additional information about tangential as well as normal field components...

  17. Life cycle costing of waste management systems: Overview, calculation principles and case studies

    DEFF Research Database (Denmark)

    Martinez Sanchez, Veronica; Kromann, Mikkel A.; Astrup, Thomas Fruergaard

    2015-01-01

    This paper provides a detailed and comprehensive cost model for the economic assessment of solid waste management systems. The model was based on the principles of Life Cycle Costing (LCC) and followed a bottom-up calculation approach providing detailed cost items for all key technologies within...... regarding the cost assessment of waste management, namely system boundary equivalency, accounting for temporally distributed emissions and impacts, inclusions of transfers, the internalisation of environmental impacts and the coverage of shadow prices, and there was also significant confusion regarding...

  18. Vertebrate Fossils Imply Paleo-elevations of the Tibetan Plateau

    Science.gov (United States)

    Deng, T.; Wang, X.; Li, Q.; Wu, F.; Wang, S.; Hou, S.

    2017-12-01

    The uplift history of the Tibetan Plateau remains unclear, and its paleo-elevation reconstructions are crucial to interpreting the geodynamic evolution and understanding the climatic changes in Asia. Uplift histories of the Tibetan Plateau based on different proxies differ considerably, and two sharply opposing viewpoints exist on the paleo-elevation estimates of the Tibetan Plateau. One viewpoint is that the Tibetan Plateau did not strongly uplift to reach its modern elevation until the Late Miocene, while the other, mainly based on stable isotopes, argues that the Tibetan Plateau formed early during the Indo-Asian collision and reached its modern elevation in the Paleogene or by the Middle Miocene. In 1839, Hugh Falconer first reported rhinocerotid fossils collected from the Zanda Basin in Tibet, China and indicated that the Himalayas have uplifted by more than 2,000 m over the past several million years. In recent years, vertebrate fossils discovered on the Tibetan Plateau and in its surrounding areas have implied a high plateau since the late Early Miocene. During the Oligocene, giant rhinos lived in northwestern China to the north of the Tibetan Plateau, while they were also distributed in the Indo-Pakistan subcontinent to the south of the plateau, which indicates that the elevation of the Tibetan Plateau was not yet high enough to prevent exchanges of large mammals; giant rhinos, the rhinocerotid Aprotodon, and chalicotheres still dispersed north and south of the Tibetan Plateau. A tropical-subtropical lowland fish fauna was also present in the central part of the plateau during the Late Oligocene, in which Eoanabas thibetana was inferred to be closely related to extant climbing perches from South Asia and Sub-Saharan Africa. In contrast, during the Middle Miocene, the shovel-tusked elephant Platybelodon was found at many localities north of the Tibetan Plateau, while its trace was absent in the Siwaliks of the subcontinent, which implies that the Tibetan Plateau had

  19. Noninvadability implies noncoexistence for a class of cancellative systems

    Czech Academy of Sciences Publication Activity Database

    Swart, Jan M.

    2013-01-01

    Vol. 18, No. 38 (2013), pp. 1-12 ISSN 1083-589X R&D Projects: GA ČR GAP201/10/0752 Institutional support: RVO:67985556 Keywords: cancellative system * interface tightness * duality * coexistence * Neuhauser-Pacala model * affine voter model * rebellious voter model * balancing selection * branching * annihilation * parity preservation Subject RIV: BA - General Mathematics Impact factor: 0.627, year: 2013 http://library.utia.cas.cz/separaty/2013/SI/swart-noninvadability implies noncoexistence for a class of cancellative systems.pdf

  20. Behavioural equivalence for infinite systems - Partially decidable!

    DEFF Research Database (Denmark)

    Sunesen, Kim; Nielsen, Mogens

    1996-01-01

    languages with two generalizations based on traditional approaches capturing non-interleaving behaviour, pomsets representing global causal dependency, and locality representing spatial distribution of events. We first study equivalences on Basic Parallel Processes, BPP, a process calculus equivalent...... of processes between BPP and TCSP, not only are the two equivalences different, but one (locality) is decidable whereas the other (pomsets) is not. The decidability result for locality is proved by a reduction to the reachability problem for Petri nets....

  1. Relations of equivalence of conditioned radioactive waste

    International Nuclear Information System (INIS)

    Kumer, L.; Szeless, A.; Oszuszky, F.

    1982-01-01

    Compensation for the wastes remaining with the operator of a waste management center, to be given by the agent having caused the waste, may be assured by effecting a financial valuation (equivalence) of the wastes. Technically and logically, this equivalence between wastes (or, specifically, between different waste categories) and their financial valuation has been established as reasonable. In this paper, the possibility of establishing such equivalences is developed, and their suitability for waste management concepts is quantitatively expressed.

  2. Equivalences of real submanifolds in complex space.

    OpenAIRE

    ZAITSEV, DMITRI

    2001-01-01

    We show that for any real-analytic submanifold M in C^N there is a proper real-analytic subvariety V contained in M such that for any p ∈ M \ V, any real-analytic submanifold M′ in C^N, and any p′ ∈ M′, the germs of the submanifolds M and M′ at p and p′ respectively are formally equivalent if and only if they are biholomorphically equivalent. More general results for k-equivalences are also stated and proved.

  3. Equivalence relations for the 9972-9975 SARP

    International Nuclear Information System (INIS)

    Niemer, K.A.; Frost, R.L.

    1994-10-01

    Equivalence relations required to determine mass limits for mixtures of nuclides for the Safety Analysis Report for Packaging (SARP) of the Savannah River Site 9972, 9973, 9974, and 9975 shipping casks were calculated. The systems analyzed included aqueous spheres, homogeneous metal spheres, and metal ball-and-shell configurations, all surrounded by an effectively infinite stainless steel or water reflector. Comparison of the equivalence calculations with the rule-of-fractions showed conservative agreement for aqueous solutions, both conservative and non-conservative agreement for the homogeneous metal sphere systems, and non-conservative agreement for the majority of metal ball-and-shell systems. Equivalence factors for the aqueous solutions and homogeneous metal spheres were calculated. The equivalence factors for the non-conservative homogeneous metal sphere systems were adjusted so that they were conservative. No equivalence factors were calculated for the ball-and-shell systems, since the SARP assumes that only homogeneous or uniformly distributed material will be shipped in the 9972-9975 shipping casks, and an unnecessarily conservative critical mass may result if the ball-and-shell configurations are included
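
    For reference, the rule-of-fractions mentioned above is commonly stated (in criticality safety practice; the SARP-specific limits are not reproduced here) as the requirement that a mixture of fissile materials with present masses m_i and individual mass limits M_i satisfy

        \sum_i \frac{m_i}{M_i} \le 1 .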

  4. Equivalence of Szegedy's and coined quantum walks

    Science.gov (United States)

    Wong, Thomas G.

    2017-09-01

    Szegedy's quantum walk is a quantization of a classical random walk or Markov chain, where the walk occurs on the edges of the bipartite double cover of the original graph. To search, one can simply quantize a Markov chain with absorbing vertices. Recently, Santos proposed two alternative search algorithms that instead utilize the sign-flip oracle in Grover's algorithm rather than absorbing vertices. In this paper, we show that these two algorithms are exactly equivalent to two algorithms involving coined quantum walks, which are walks on the vertices of the original graph with an internal degree of freedom. The first scheme is equivalent to a coined quantum walk with one walk step per query of Grover's oracle, and the second is equivalent to a coined quantum walk with two walk steps per query of Grover's oracle. These equivalences lie outside the previously known equivalence of Szegedy's quantum walk with absorbing vertices and the coined quantum walk with the negative identity operator as the coin for marked vertices, whose precise relationships we also investigate.

  5. 7 CFR 1000.54 - Equivalent price.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 9 2010-01-01 2009-01-01 true Equivalent price. 1000.54 Section 1000.54 Agriculture... Prices § 1000.54 Equivalent price. If for any reason a price or pricing constituent required for computing the prices described in § 1000.50 is not available, the market administrator shall use a price or...

  6. A Bayesian equivalency test for two independent binomial proportions.

    Science.gov (United States)

    Kawasaki, Yohei; Shimokawa, Asanao; Yamada, Hiroshi; Miyaoka, Etsuo

    2016-01-01

    In clinical trials, it is often necessary to perform an equivalence study. An equivalence study requires actively demonstrating the equivalence of two different drugs or treatments. Since equivalence cannot be asserted merely because a difference is not detected by a superiority test, statistical methods known as equivalency tests have been suggested. These methods are based on the frequentist framework; however, there are few such methods in the Bayesian framework. Hence, this article proposes a new index for the equivalence of two binomial proportions, constructed within the Bayesian framework. In this study, we provide two methods for calculating the index and compare the probabilities calculated by these two methods. Moreover, we apply this index to the results of actual clinical trials to demonstrate its utility.
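
    The paper's index itself is not reproduced here; as an illustrative stand-in, a posterior probability of equivalence for two binomial proportions can be computed from independent Beta posteriors by Monte Carlo, as in the Python sketch below (the equivalence margin delta and the uniform priors are assumptions of the sketch):

        import numpy as np

        def posterior_prob_equivalence(x1, n1, x2, n2, delta=0.1,
                                       a=1.0, b=1.0, draws=100_000, seed=0):
            """Estimate P(|p1 - p2| < delta | data) under independent Beta(a, b) priors."""
            rng = np.random.default_rng(seed)
            p1 = rng.beta(a + x1, b + n1 - x1, draws)   # posterior draws, arm 1
            p2 = rng.beta(a + x2, b + n2 - x2, draws)   # posterior draws, arm 2
            return np.mean(np.abs(p1 - p2) < delta)

        # Example: 45/60 responders versus 42/60 responders.
        print(posterior_prob_equivalence(45, 60, 42, 60))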

  7. An explicit solution for a renewal process with waiting time and its variational principle

    International Nuclear Information System (INIS)

    Lewins, J.D.

    2001-01-01

    The forward and backward equations for the conditional probability density are derived for a reliability system consisting of a single component whose repair is subject to a delay time in providing a spare part, but whose mean rate of repair is otherwise constant and whose time to failure is exponentially distributed. Exact solutions are quoted. These equations are then shown to be an adjoint pair that provide stationary conditions for a variational principle, in elementary form, from which all properties of the system can be predicted with an accuracy greater than that implied by the trial functions or approximations used. A second, or specific, form of the variational principle provides specific estimates for the questions at hand. The second or adjoint field in the first, elementary principle is the backward Kolmogorov solution, and in the specific form it is the importance function, as used in nuclear reactor theory. The solutions are given for long times and in a recurrence-relation form valid for all times, so that approximate solutions can be checked. Approximations suitable for variational trial functions are given. Two examples give the effect of a change of delay time for a steady state and an initial transient, respectively
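
    For orientation only (the paper additionally models the spare-part delay, which is not captured here), the forward Kolmogorov equations for a single component with constant failure rate \lambda and constant repair rate \mu are

        \frac{dP_1(t)}{dt} = -\lambda P_1(t) + \mu P_0(t), \qquad \frac{dP_0(t)}{dt} = \lambda P_1(t) - \mu P_0(t),

    with P_0(t) + P_1(t) = 1 and steady-state availability P_1(\infty) = \mu/(\lambda + \mu); the backward equations are the adjoints of these, which is the adjoint-pair structure that the variational principle exploits.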

  8. The equivalence theorem

    International Nuclear Information System (INIS)

    Veltman, H.

    1990-01-01

    The equivalence theorem states that, at an energy E much larger than the vector-boson mass M, the leading order of the amplitude with longitudinally polarized vector bosons on mass shell is given by the amplitude in which these vector bosons are replaced by the corresponding Higgs ghosts. We prove the equivalence theorem and show its validity in every order in perturbation theory. We first derive the renormalized Ward identities by using the diagrammatic method. Only the Feynman-'t Hooft gauge is discussed. The last step of the proof includes the power-counting method evaluated in the large-Higgs-boson-mass limit, needed to estimate the leading energy behavior of the amplitudes involved. We derive expressions for the amplitudes involving longitudinally polarized vector bosons for all orders in perturbation theory. The fermion mass has not been neglected and everything is evaluated in the region $m_f \sim M \ll E \ll m_{\mathrm{Higgs}}$

  9. Lorentz covariance ‘almost’ implies electromagnetism and more

    International Nuclear Information System (INIS)

    Sobouti, Y

    2015-01-01

    Beginning from two simple assumptions, (i) the speed of light is a universal constant, or equivalently, spacetime intervals are Lorentz invariant, and (ii) there are mutually interacting particles with a covariant ‘source-field’ equation, one arrives at a class of field equations of which standard electromagnetism (EM) and electrodynamics are special cases. The formalism, depending on how one formulates the source-field equation, allows one to speculate about magnetic monopoles, massive photons, nonlinear EMs, and more. (paper)

  10. Implied Reading in the Unforgettable Stories of Language Learners

    Directory of Open Access Journals (Sweden)

    Feryal ÇUBUKÇU

    2017-09-01

    Full Text Available Iser was a literary theoretician and co-founder of the Constance School of Reception Aesthetics, and professor Emeritus of English and Comparative Literature at the University of Constance and the University of California, Irvine. When Iser died in 2007 in his eighty-first year, he was one of the most widely known literary theoreticians in the world. His "implied reading" theory claims that texts can themselves awaken false expectations, alternately bringing about surprise, joy and frustration, which can enlarge experience. The indeterminacy of the text might yield different responses from different readers. To show that each implied reading is based on the schemata of the readers, this study analyses the stories told by language learners of Turkish who come from 20 countries and whose ages vary between 18 and 32. The participants are 65 undergraduate and graduate university students from African, Asian and Balkan countries who, upon watching "Cinderella", were asked to write about an unforgettable folk story or fairy tale. When their stories are item-analysed, the results show that the schemata of the learners shape the way they choose and recount the stories. Learners of Turkish fill in the gaps throughout the story, form a meaningful bond by pulling information from it, participating in a reciprocal relationship, creating and deriving meaning in an extravaganza of interpretation.

  11. Developing Countries and Copyright in the Information Age - The Functional Equivalent Implementation of the WCT

    Directory of Open Access Journals (Sweden)

    T Pistorius

    2006-01-01

    Full Text Available Digital technology has had a profound impact on copyright law. The implementation of the WIPO Copyright Treaty (WCT and the enforcement of technological protection measures have led to disparate forms of copyright protection for digital and analogue media. The balance between authors’ rights and the right of the public to access copyright works has been distorted. Copyright law is playing an increasingly crucial role in the Information Society. Developing countries are especially disadvantaged by diminished access to works. In this article it is argued that adherence to the principle of functional equivalence in implementing the anti-circumvention provisions of the WCT will ensure that the copyright balance is maintained and will advance the development agenda.

  12. Stem cell bioprocessing: fundamentals and principles.

    Science.gov (United States)

    Placzek, Mark R; Chung, I-Ming; Macedo, Hugo M; Ismail, Siti; Mortera Blanco, Teresa; Lim, Mayasari; Cha, Jae Min; Fauzi, Iliana; Kang, Yunyi; Yeo, David C L; Ma, Chi Yip Joan; Polak, Julia M; Panoskaltsis, Nicki; Mantalaris, Athanasios

    2009-03-06

    In recent years, the potential of stem cell research for tissue engineering-based therapies and regenerative medicine clinical applications has become well established. In 2006, Chung pioneered the first entire organ transplant using adult stem cells and a scaffold for clinical evaluation. With this a new milestone was achieved, with seven patients with myelomeningocele receiving stem cell-derived bladder transplants, resulting in substantial improvements in their quality of life. While a bladder is a relatively simple organ, the breakthrough highlights the incredible benefits that can be gained from the cross-disciplinary nature of tissue engineering and regenerative medicine (TERM) that encompasses stem cell research and stem cell bioprocessing. Unquestionably, the development of bioprocess technologies for the transfer of the current laboratory-based practice of stem cell tissue culture to the clinic as therapeutics necessitates the application of engineering principles and practices to achieve control, reproducibility, automation, validation and safety of the process and the product. The successful translation will require contributions from fundamental research (from developmental biology to the 'omics' technologies and advances in immunology) and from existing industrial practice (biologics), especially on automation, quality assurance and regulation. The timely development, integration and execution of various components will be critical: failures of the past (such as in the commercialization of skin equivalents) in marketing, pricing, production and advertising should not be repeated. This review aims to address the principles required for successful stem cell bioprocessing so that they can be applied deftly to clinical applications.

  13. A contribution to the solution of the problems raised by the application of the principle of protection optimization to nuclear plants

    International Nuclear Information System (INIS)

    Lacourly, G.; Demerle, P.

    1975-01-01

    The radiological protection of populations and the environment rests on two main principles: the dose delivered to the individuals of the most exposed population group must remain below the dose limits set up by regulations, and the doses must be kept as low as reasonably achievable, social and economic considerations being taken into account. While the application of the former principle has now become routine work, the application of the latter, which implies optimization calculations, raises a number of difficult problems. In order to decide whether an exposure can be easily reduced, both the benefits of the reduction and its cost must be considered, which leads to undertaking a differential analysis [fr]

  14. [Euthanasia, assisted suicide, and the principle of double effect: a reply to Rodolfo Figueroa].

    Science.gov (United States)

    Miranda, Alejandro M

    2012-02-01

    The purpose of this paper is to defend the traditional application of the principle of double effect as a criterion for assessing the permissibility of actions whose common aim is to end the suffering of seriously ill patients. According to this principle, euthanasia and physician-assisted suicide are always illicit acts, while the same is not said of other actions that bring about the patient's death as a foreseen effect, namely palliative treatments that hasten death and the withholding or interruption of life support. The reason for this difference is that, in the first two cases, the patient's death is intended as a means of pain relief, whereas in the latter two, death is only a side effect of a medical act, an act justifiable if it is necessary to achieve a proportionate good. In a recent issue of this Journal, Professor Rodolfo Figueroa denied the soundness of the principle of double effect and maintained that all the actions described above should be treated as equivalent in the application of the law. Here, the author presents a reply to that argument, and also offers a justification of the aforesaid principle's core, that is, the moral and legal relevance of the distinction between intended effects and foreseen side effects.

  15. 76 FR 7840 - American Hydro Power Company; Notice of Termination of Exemption by Implied Surrender and...

    Science.gov (United States)

    2011-02-11

    ... Power Company; Notice of Termination of Exemption by Implied Surrender and Soliciting Comments, Protests... the Commission: a. Type of Proceeding: Termination of exemption by implied surrender. b. Project No... if any term or condition of the exemption is violated. The project has not operated since 2004, and...

  16. Application of maximum values for radiation exposure and principles for the calculation of radiation doses

    International Nuclear Information System (INIS)

    2007-08-01

    The guide presents the definitions of equivalent dose and effective dose, the principles for calculating these doses, and instructions for applying their maximum values. The limits (Annual Limit on Intake and Derived Air Concentration) derived from dose limits are also presented for the purpose of monitoring exposure to internal radiation. The calculation of radiation doses caused to a patient from medical research and treatment involving exposure to ionizing radiation is beyond the scope of this ST Guide
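
    For reference, the standard ICRP relationships behind these quantities (the weighting factors themselves are tabulated in the guide) are

        H_T = \sum_R w_R \, D_{T,R}, \qquad E = \sum_T w_T \, H_T ,

    where D_{T,R} is the mean absorbed dose to tissue or organ T from radiation type R, w_R is the radiation weighting factor, H_T is the equivalent dose, and w_T is the tissue weighting factor entering the effective dose E.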

  17. Moment generating functions and Normalized implied volatilities: unification and extension via Fukasawa's pricing formula

    OpenAIRE

    De Marco, Stefano; Martini, Claude

    2017-01-01

    We extend the model-free formula of [Fukasawa 2012] for $\mathbb{E}[\Psi(X_T)]$, where $X_T=\log S_T/F$ is the log-price of an asset, to functions $\Psi$ of exponential growth. The resulting integral representation is written in terms of normalized implied volatilities. Just as Fukasawa's work provides rigorous ground for Chriss and Morokoff's (1999) model-free formula for the log-contract (related to the variance swap implied variance), we prove an expression for the moment generating functi...

  18. Limitations of the equivalence between spatial and ensemble estimators in the case of a single-tone excitation.

    Science.gov (United States)

    Monsef, Florian; Cozza, Andrea

    2011-10-01

    The ensemble-average value of the mean-square pressure is often assessed by using the spatial-average technique, a practice that relies on an equivalence principle between spatial and ensemble estimators. Using the ideal-diffuse-field model, the accuracy of the spatial-average method was studied theoretically forty years ago for the case of a single-tone excitation. That study is revisited in the present work on the basis of a more realistic description of the sound field accounting for a finite number of plane waves. The analysis of the spatial-average estimator is based on the study of its convergence rate. Using experimental data from practical examples, it is shown that the classical expression underestimates the estimator uncertainty even for frequencies greater than Schroeder's frequency, and that the number of plane waves may act as a lower bound on the spatial-average estimator accuracy. The comparison of the convergence rate with an ensemble estimator shows that the two statistics cannot be regarded as equivalent in the general case. © 2011 Acoustical Society of America

  19. On Dual Phase-Space Relativity, the Machian Principle and Modified Newtonian Dynamics

    CERN Document Server

    Castro, C

    2004-01-01

    We investigate the consequences of Mach's principle of inertia within the context of Dual Phase Space Relativity, which is compatible with the Eddington-Dirac large-number coincidences and may provide a physical reason behind the observed anomalous Pioneer acceleration and a solution to the riddle of the cosmological constant problem (Nottale). The cosmological implications of Non-Archimedean Geometry, obtained by assigning an upper impassible scale in Nature, and the cosmological variations of the fundamental constants are also discussed. We study the corrections to Newtonian dynamics resulting from Dual Phase Space Relativity by analyzing the behavior of a test particle in a modified Schwarzschild geometry (due to the effects of the maximal acceleration) that leads, in the weak-field approximation, to essential modifications of the Newtonian dynamics and to violations of the equivalence principle. Finally we follow another avenue and find modified Newtonian dynamics induced by the Yang's Noncommut...

  20. How "ought" exceeds but implies "can": Description and encouragement in moral judgment.

    Science.gov (United States)

    Turri, John

    2017-11-01

    This paper tests a theory about the relationship between two important topics in moral philosophy and psychology. One topic is the function of normative language, specifically claims that one "ought" to do something. Do these claims function to describe moral responsibilities, encourage specific behavior, or both? The other topic is the relationship between saying that one "ought" to do something and one's ability to do it. In what respect, if any, does what one "ought" to do exceed what one "can" do? The theory tested here has two parts: (1) "ought" claims function to both describe responsibilities and encourage people to fulfill them (the dual-function hypothesis); (2) the two functions relate differently to ability, because the encouragement function is limited by the person's ability, but the descriptive function is not (the interaction hypothesis). If this theory is correct, then in one respect "ought implies can" is false because people have responsibilities that exceed their abilities. But in another respect "ought implies can" is legitimate because it is not worthwhile to encourage people to do things that exceed their ability. Results from two behavioral experiments support the theory that "ought" exceeds but implies "can." Results from a third experiment provide further evidence regarding an "ought" claim's primary function and how contextual features can affect the interpretation of its functions. Copyright © 2017 Elsevier B.V. All rights reserved.

  1. Short-Term Market Risks Implied by Weekly Options

    DEFF Research Database (Denmark)

    Andersen, Torben Gustav; Fusari, Nicola; Todorov, Viktor

    We study short-term market risks implied by weekly S&P 500 index options. The introduction of weekly options has dramatically shifted the maturity profile of traded options over the last five years, with a substantial proportion now having expiry within one week. Such short-dated options provide a direct way to study volatility and jump risks. Unlike longer-dated options, they are largely insensitive to the risk of intertemporal shifts in the economic environment. Adopting a novel semi-nonparametric approach, we uncover variation in the negative jump tail risk which is not spanned by market...... by the level of market volatility and elude standard asset pricing models....

  2. Likelihood ratio decisions in memory: three implied regularities.

    Science.gov (United States)

    Glanzer, Murray; Hilford, Andrew; Maloney, Laurence T

    2009-06-01

    We analyze four general signal detection models for recognition memory that differ in their distributional assumptions. Our analyses show that a basic assumption of signal detection theory, the likelihood ratio decision axis, implies three regularities in recognition memory: (1) the mirror effect, (2) the variance effect, and (3) the z-ROC length effect. For each model, we present the equations that produce the three regularities and show, in computed examples, how they do so. We then show that the regularities appear in data from a range of recognition studies. The analyses and data in our study support the following generalization: Individuals make efficient recognition decisions on the basis of likelihood ratios.
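
    As a minimal, generic illustration of a likelihood-ratio decision axis (an equal-variance Gaussian signal detection model with an assumed d' = 1; this is not the authors' code), the Python sketch below shows how an unbiased criterion on the likelihood ratio yields mirror-symmetric hit and false-alarm rates:

        import numpy as np
        from scipy.stats import norm

        d_prime = 1.0      # separation of the "old" and "new" strength distributions
        log_beta = 0.0     # unbiased criterion on the log likelihood ratio

        # For equal-variance Gaussians, log LR(x) = d'*x - d'**2/2 is monotone in the
        # strength x, so a criterion on log LR maps to a criterion on x.
        x_c = (log_beta + d_prime**2 / 2) / d_prime

        hit_rate = 1 - norm.cdf(x_c, loc=d_prime, scale=1)   # P(x > x_c | old)
        fa_rate = 1 - norm.cdf(x_c, loc=0, scale=1)          # P(x > x_c | new)
        print(hit_rate, fa_rate)   # with log_beta = 0: hit_rate equals 1 - fa_rate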

  3. IMPLIED AUTHOR IN PHILOSOPHICAL NOVELS

    Directory of Open Access Journals (Sweden)

    Olga Senkāne

    2014-10-01

    Full Text Available The present article belongs to a series of papers on the specification of philosophical novels. The aim of this article is to analyze the author's function as a narrative category in classical philosophical novels (Franz Kafka's "The Trial" (1925) and "The Castle" (1926), Jean-Paul Sartre's "Nausea" (1938), Hermann Hesse's "The Glass Bead Game" (1943), Albert Camus's "The Plague" (1947)) and a novel by the Latvian prose writer Ilze Šķipsna, "Neapsolītās zemes" ["Un-Promised Lands"] (1970). The analysis is based on the theoretical ideas of the structural narratologists Gerard Genette, William Labov, Seymour Chatman and Wolf Schmid, as well as the philosophers Edmund Husserl, Jean-Paul Sartre and Paul Ricoeur and the semioticians Yuri Lotman (Юрий Лотман) and Umberto Eco. The real author can "enter" the text only indirectly, as an image, with the help of the storyteller, and the way this "entry" happens is determined by the narrative (communication) skills of the real author. Thus, the author and the implied author are functionally different concepts: the author as a real person develops the concept idea, and his intention is to define the concept under his original vision; the narrator, in turn, communicates with the reader, representing the concept, and his aim is to select appropriate means of communication with regard to the reader's perceptual abilities.

  4. General Dynamic Equivalent Modeling of Microgrid Based on Physical Background

    Directory of Open Access Journals (Sweden)

    Changchun Cai

    2015-11-01

    Full Text Available A microgrid is a new power system concept consisting of small-scale distributed energy resources, storage devices and loads. It is necessary to employ a simplified model of a microgrid in the simulation of a distribution network integrating large-scale microgrids. Based on detailed models of the components, an equivalent model of a microgrid is proposed in this paper. The equivalent model comprises two parts, namely an equivalent machine component and an equivalent static component. The equivalent machine component describes the dynamics of the synchronous generator, asynchronous wind turbine and induction motor, while the equivalent static component describes the dynamics of the photovoltaic units, storage and static load. The trajectory sensitivities of the equivalent model parameters with respect to the output variables are analyzed. The key parameters that play important roles in the dynamics of the output variables of the equivalent model are identified and included in further parameter estimation. Particle Swarm Optimization (PSO) is improved for the parameter estimation of the equivalent model. Simulations are performed under different microgrid operating conditions to evaluate the effectiveness of the equivalent model of the microgrid.
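
    A generic sketch of PSO-based parameter estimation of the kind described (written in Python; the objective, parameter bounds and PSO constants are placeholders rather than the authors' improved variant):

        import numpy as np

        def pso_fit(objective, bounds, n_particles=30, iters=200,
                    w=0.7, c1=1.5, c2=1.5, seed=0):
            """Minimize objective(theta) over the box given by bounds = [(lo, hi), ...]."""
            rng = np.random.default_rng(seed)
            lo, hi = np.array(bounds, dtype=float).T
            dim = len(bounds)
            x = rng.uniform(lo, hi, (n_particles, dim))              # positions
            v = np.zeros_like(x)                                     # velocities
            pbest = x.copy()
            pbest_f = np.array([objective(p) for p in x])
            g = pbest[np.argmin(pbest_f)].copy()                     # global best
            for _ in range(iters):
                r1, r2 = rng.random((2, n_particles, dim))
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
                x = np.clip(x + v, lo, hi)
                f = np.array([objective(p) for p in x])
                better = f < pbest_f
                pbest[better], pbest_f[better] = x[better], f[better]
                g = pbest[np.argmin(pbest_f)].copy()
            return g, pbest_f.min()

        # Example use (hypothetical): fit equivalent-model parameters theta by matching
        # a measured output trajectory y_meas with a simulator simulate(theta):
        # theta_hat, err = pso_fit(lambda th: np.mean((simulate(th) - y_meas) ** 2),
        #                          bounds=[(0.1, 10.0)] * 4)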

  5. 49 CFR 391.33 - Equivalent of road test.

    Science.gov (United States)

    2010-10-01

    ... 49 Transportation 5 2010-10-01 2010-10-01 false Equivalent of road test. 391.33 Section 391.33... AND LONGER COMBINATION VEHICLE (LCV) DRIVER INSTRUCTORS Tests § 391.33 Equivalent of road test. (a) In place of, and as equivalent to, the road test required by § 391.31, a person who seeks to drive a...

  6. Explaining the level of credit spreads: Option-implied jump risk premia in a firm value model

    NARCIS (Netherlands)

    Cremers, K.J.M.; Driessen, J.; Maenhout, P.

    2008-01-01

    We study whether option-implied jump risk premia can explain the high observed level of credit spreads. We use a structural jump-diffusion firm value model to assess the level of credit spreads generated by option-implied jump risk premia. Prices and returns of equity index and individual options

  7. 76 FR 55904 - Michael J. Donahue; Notice of Termination of Exemption By Implied Surrender and Soliciting...

    Science.gov (United States)

    2011-09-09

    .... Donahue; Notice of Termination of Exemption By Implied Surrender and Soliciting Comments, Protests, and... Commission: a. Type of Proceeding: Termination of exemption by implied surrender. b. Project No.: 6649-008. c... Commission reserves the right to revoke an exemption if any term or condition of the exemption is violated...

  8. 76 FR 58264 - Michael J. Donahue; Notice of Termination of Exemption by Implied Surrender and Soliciting...

    Science.gov (United States)

    2011-09-20

    .... Donahue; Notice of Termination of Exemption by Implied Surrender and Soliciting Comments, Protests, and... Commission: a. Type of Proceeding: Termination of exemption by implied surrender. b. Project No.: 6649-008. c... Commission reserves the right to revoke an exemption if any term or condition of the exemption is violated...

  9. 77 FR 73653 - Milburnie Hydro Inc.; Notice of Termination of Exemption by Implied Surrender and Soliciting...

    Science.gov (United States)

    2012-12-11

    ... Inc.; Notice of Termination of Exemption by Implied Surrender and Soliciting Comments, Protests, and... Commission: a. Type of Proceeding: Termination of exemption by implied surrender. b. Project No.: 7910-006. c... Commission reserves the right to revoke an exemption if any term or condition of the exemption is violated...

  10. Mathematical simulation of biologically equivalent doses for LDR-HDR

    International Nuclear Information System (INIS)

    Slosarek, K.; Zajusz, A.

    1996-01-01

    Based on the LQ model, examples of biologically equivalent doses for LDR, HDR and external beams were calculated. The biologically equivalent doses for LDR were calculated by adding to the LQ model a correction for the repair time of sublethal radiation damage. For radiation continuously delivered at a low dose rate, the influence of changes in the sublethal damage repair time on the biologically equivalent doses was analysed. For fractionated treatment at a high dose rate, the biologically equivalent doses were calculated by adding to the LQ model a formula for accelerated repopulation. For the calculation of the total biologically equivalent dose for combined LDR-HDR-Tele irradiation, examples are presented using different parameters for the sublethal damage repair time and accelerated repopulation. The calculations performed show that the same biologically equivalent doses can be obtained for different parameters of cell kinetics changes during radiation treatment. They also show that, in biologically equivalent dose calculations for different radiotherapy schedules, ignorance of the cell kinetics parameters can lead to relevant errors
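
    The LQ-based quantity underlying such comparisons is usually written as a biologically effective dose; a common textbook form (the authors' specific repair corrector and repopulation term are not reproduced here) is

        \mathrm{BED} = n d \left( 1 + \frac{d}{\alpha/\beta} \right)

    for n well-separated acute fractions of dose d, and more generally \mathrm{BED} = D \left( 1 + \frac{g\,D}{\alpha/\beta} \right), where 0 < g \le 1 is the dose-protraction (incomplete-repair) factor determined by the sublethal-damage repair rate and the irradiation time, so that continuous low-dose-rate delivery corresponds to a reduced g.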

  11. Implied and Local Volatility Surfaces for South African Index and Foreign Exchange Options

    Directory of Open Access Journals (Sweden)

    Antonie Kotzé

    2015-01-01

    Full Text Available Certain exotic options cannot be valued using closed-form solutions or even by numerical methods assuming constant volatility. Many exotics are priced in a local volatility framework. Pricing under local volatility has become a field of extensive research in finance, and various models have been proposed in order to overcome the shortcomings of the Black-Scholes model, which assumes a constant volatility. The Johannesburg Stock Exchange (JSE) lists exotic options on its Can-Do platform. Most exotic options listed on the JSE's derivative exchanges are valued by local volatility models. These models need a local volatility surface. Dupire derived a mapping from implied volatilities to local volatilities. The JSE uses this mapping to generate the relevant local volatility surfaces and further uses Monte Carlo and finite difference methods when pricing exotic options. In this document we discuss various practical issues that influence the successful construction of implied and local volatility surfaces such that pricing engines can be implemented successfully. We focus on arbitrage-free conditions and the choice of calibrating functionals. We illustrate our methodologies by studying the implied and local volatility surfaces of South African equity index and foreign exchange options.
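
    Dupire's mapping referred to above is commonly written (here with zero rates and dividends for brevity; this is the textbook form, not the JSE's implementation) as

        \sigma_{\mathrm{loc}}^2(K, T) = \frac{\partial C / \partial T}{\tfrac{1}{2}\, K^2 \, \partial^2 C / \partial K^2},

    where C(K, T) is the call price surface; equivalent expressions exist directly in terms of the implied total variance and its strike and maturity derivatives, which is the form typically used when working from an implied volatility surface.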

  12. Impacts of generalized uncertainty principle on black hole thermodynamics and Salecker-Wigner inequalities

    International Nuclear Information System (INIS)

    Tawfik, A.

    2013-01-01

    We investigate the impacts of the Generalized Uncertainty Principle (GUP), proposed by some approaches to quantum gravity such as String Theory and Doubly Special Relativity, on black hole thermodynamics and the Salecker-Wigner inequalities. Utilizing the Heisenberg uncertainty principle, the Hawking temperature, Bekenstein entropy, specific heat, emission rate and decay time are calculated. As the evaporation entirely eats up the black hole mass, the specific heat vanishes and the temperature approaches infinity with an infinite radiation rate. It is found that the GUP approach prevents the black hole from evaporating entirely. It implies the existence of remnants at which the specific heat vanishes. The same role is played by the Heisenberg uncertainty principle in constructing the hydrogen atom. We discuss how the linear GUP approach solves the entire-evaporation problem. Furthermore, the black hole lifetime can be estimated using another approach: the Salecker-Wigner inequalities. Assuming that the quantum position uncertainty is limited to the minimum wavelength of the measuring signal, Wigner's second inequality can be obtained. If the spread of the quantum clock is limited to some minimum value, then the modified black hole lifetime can be deduced. Based on the linear GUP approach, the resulting lifetime difference depends on the black hole's relative mass, and the difference between the black hole mass with and without GUP is not negligible
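
    For reference, the uncorrected expressions that the GUP terms modify are the standard Hawking temperature and Bekenstein-Hawking entropy,

        T_H = \frac{\hbar c^3}{8 \pi G M k_B}, \qquad S_{BH} = \frac{k_B c^3 A}{4 G \hbar} = \frac{4 \pi G k_B M^2}{\hbar c};

    the GUP-corrected versions add mass-dependent correction terms to these, which is what produces the finite-mass remnant at which the specific heat vanishes.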

  13. Spaces of homotopy self-equivalences a survey

    CERN Document Server

    Rutter, John W

    1997-01-01

    This survey covers groups of homotopy self-equivalence classes of topological spaces, and the homotopy type of spaces of homotopy self-equivalences. For manifolds, the full group of equivalences and the mapping class group are compared, as are the corresponding spaces. Included are methods of calculation, numerous calculations, finite generation results, Whitehead torsion and other areas. Some 330 references are given. The book assumes familiarity with cell complexes, homology and homotopy. Graduate students and established researchers can use it for learning, for reference, and to determine the current state of knowledge.

  14. VaR and CVaR Implied in Option Prices

    Directory of Open Access Journals (Sweden)

    Giovanni Barone Adesi

    2016-02-01

    Full Text Available VaR (Value at Risk) and CVaR (Conditional Value at Risk) are implied by option prices. Their relationships to option prices are derived initially under the pricing measure. The derivation does not require assumptions about the distribution of portfolio returns. The effects of changes of measure are modest at the short horizons typically used in applications. The computation of CVaR from option prices is very convenient, because this measure is not elicitable, making direct comparisons of statistical inferences from market data problematic.
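
    The paper's exact representation is not reproduced here; the Python sketch below only illustrates the general idea under the pricing measure, using the textbook identities F(K) = e^{rT} dP/dK and f(K) = e^{rT} d^2P/dK^2 to read the alpha-quantile of S_T (the VaR level) and the conditional expectation below it (the CVaR level) off a dense, arbitrage-free grid of put prices (the grid, rate and alpha are assumptions; sign and loss conventions vary):

        import numpy as np

        def risk_neutral_var_cvar(strikes, put_prices, r, T, alpha=0.05):
            """Approximate the alpha-quantile of S_T and E[S_T | S_T <= quantile]."""
            disc = np.exp(r * T)
            cdf = disc * np.gradient(put_prices, strikes)                        # F(K)
            pdf = disc * np.gradient(np.gradient(put_prices, strikes), strikes)  # f(K)
            i_var = np.searchsorted(cdf, alpha)       # first strike with F(K) >= alpha
            var_level = strikes[i_var]
            mask = strikes <= var_level
            cvar_level = np.trapz(strikes[mask] * pdf[mask], strikes[mask]) / alpha
            return var_level, cvar_level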

  15. Equivalent nozzle in thermomechanical problems

    International Nuclear Information System (INIS)

    Cesari, F.

    1977-01-01

    When analyzing nuclear vessels, it is most important to study the behavior of the nozzle cylinder-cylinder intersection. For the elastic field, this analysis in three dimensions is quite easy using the finite element method. The same analysis in the non-linear field becomes difficult for 3-D designs. It is therefore necessary to resolve a nozzle in two dimensions equivalent to a 3-D nozzle. The purpose of the present work is to find an equivalent nozzle under both mechanical and thermal loads. This has been achieved by the three-dimensional analysis of a nozzle and of nozzle cylinder-sphere intersections of different radii. The equivalent nozzle will be a nozzle whose sphere radius is in a given ratio to the radius of the cylinder; thus, the maximum equivalent stress is the same in both 2-D and 3-D. The nozzle examined derived from the intersection of a cylindrical vessel of radius R=191.4 mm and thickness T=6.7 mm with a cylindrical nozzle of radius r=24.675 mm and thickness t=1.350 mm, for which the experimental results for an internal pressure load are known. The structure was subdivided into 96 finite, three-dimensional, isoparametric elements with 60 degrees of freedom and 661 total nodes. Both the analysis with a mechanical load and the analysis with a thermal load were carried out on this structure according to the Bersafe system. The thermal load consisted of a transient typical of an accident occurring in a sodium-cooled fast reactor, with a temperature peak (540 °C) for the sodium inside the vessel and an insulating argon temperature constant at 525 °C. The maximum value of the equivalent stress was found in the internal area at the junction towards the vessel side. The 2-D analysis of the nozzle consists in schematizing the structure as a cylinder-sphere intersection, where the sphere has a given relation to the

  16. Weak circulation theorems as a way of distinguishing between generalized gravitation theories

    International Nuclear Information System (INIS)

    Enosh, M.

    1980-01-01

    It was proved in a previous paper that a generalized circulation theorem characterizes Einstein's theory of gravitation as a special case of a more general theory of gravitation, which is also based on the principle of equivalence. Here the question is posed of whether it is possible to weaken this circulation theorem in such a way that it would imply more general theories than Einstein's. This problem is solved. In principle, there are two possibilities. One of them is essentially Weyl's theory. (author)

  17. Bayesian Forecasting of Options Prices: A Natural Framework for Pooling Historical and Implied Volatiltiy Information

    OpenAIRE

    Darsinos, T.; Satchell, S.E.

    2001-01-01

    Bayesian statistical methods are naturally oriented towards pooling in a rigorous way information from separate sources. It has been suggested that both historical and implied volatilities convey information about future volatility. However, typically in the literature implied and return volatility series are fed separately into models to provide rival forecasts of volatility or options prices. We develop a formal Bayesian framework where we can merge the backward looking information as r...
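
    As a toy illustration of the pooling idea only (a conjugate normal-normal, precision-weighted combination of one historical and one implied volatility estimate; this is not the authors' model, and the numbers are made up):

        # Pool two noisy volatility estimates by precision weighting.
        hist_vol, hist_se = 0.22, 0.04    # historical estimate and its standard error
        impl_vol, impl_se = 0.25, 0.02    # implied estimate and its standard error

        w_hist, w_impl = 1 / hist_se**2, 1 / impl_se**2       # precisions
        pooled_vol = (w_hist * hist_vol + w_impl * impl_vol) / (w_hist + w_impl)
        pooled_se = (w_hist + w_impl) ** -0.5
        print(pooled_vol, pooled_se)      # posterior mean and standard deviation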

  18. Biophysical modelling of phytoplankton communities from first principles using two-layered spheres: Equivalent Algal Populations (EAP) model.

    Science.gov (United States)

    Robertson Lain, L; Bernard, S; Evers-King, H

    2014-07-14

    There is a pressing need for improved bio-optical models of high biomass waters as eutrophication of coastal and inland waters becomes an increasing problem. Seasonal bloom conditions in the Southern Benguela and persistent harmful algal production in various inland waters in Southern Africa present valuable opportunities for the development of such modelling capabilities. The phytoplankton-dominated signal of these waters additionally addresses an increased interest in Phytoplankton Functional Type (PFT) analysis. To these ends, an initial validation of a new model of Equivalent Algal Populations (EAP) is presented here. This paper makes a first-order comparison of two prominent phytoplankton Inherent Optical Property (IOP) models with the EAP model, which places emphasis on explicit bio-physical modelling of the phytoplankton population as a holistic determinant of inherent optical properties. This emphasis is shown to have an impact on the ability to retrieve the detailed phytoplankton spectral scattering information necessary for PFT applications and to successfully simulate reflectance across wide ranges of physical environments, biomass, and assemblage characteristics.

  19. The strong equivalence principle and its violation

    International Nuclear Information System (INIS)

    Canuto, V.M.; Goldman, I.

    1983-01-01

    In this paper, the authors discuss theoretical and observational aspects of an SEP violation. They present a two-times theory as a possible framework to handle an SEP violation and summarize the tests performed to check the compatibility of such violation with a host of data ranging from nucleosynthesis to geophysics. They also discuss the dynamical equations needed to analyze radar ranging data to reveal an SEP violation and in particular the method employed by Shapiro and Reasenberg. (Auth.)

  20. Rotating model for the equivalence principle paradox

    International Nuclear Information System (INIS)

    Wilkins, D.C.

    1975-01-01

    An idealized system is described in which two inertial frames rotate relative to one another. When a (scalar) dipole is locally at rest in one frame, a paradox arises as to whether or not it will radiate. Fluxes of energy and angular momentum and the time development of the system are discussed. Resolution of the paradox involves several unusual features, including (i) radiation by an unmoving charge, an effect discussed by Chitre, Price, and Sandberg, (ii) different power seen by relatively accelerated inertial observers, and (iii) radiation reaction due to gravitational backscattering of radiation, in agreement with the work of C. and B. DeWitt. These results are obtained, for the most part, without the complications of curved space--time

  1. Floyd's principle, correctness theories and program equivalence

    NARCIS (Netherlands)

    Bergstra, J.A.; Tiuryn, J.; Tucker, J.V.

    1982-01-01

    A programming system is a language made from a fixed class of data abstractions and a selection of familiar deterministic control and assignment constructs. It is shown that the sets of all ‘before-after’ first-order assertions which are true of programs in any such language can uniquely determine

  2. Equivalence of truncated count mixture distributions and mixtures of truncated count distributions.

    Science.gov (United States)

    Böhning, Dankmar; Kuhnert, Ronny

    2006-12-01

    This article is about modeling count data with zero truncation. A parametric count density family is considered. The truncated mixture of densities from this family is different from the mixture of truncated densities from the same family. Whereas the former model is more natural to formulate and to interpret, the latter model is theoretically easier to treat. It is shown that for any mixing distribution leading to a truncated mixture, a (usually different) mixing distribution can be found so that the associated mixture of truncated densities equals the truncated mixture, and vice versa. This implies that the likelihood surfaces for both situations agree, and in this sense both models are equivalent. Zero-truncated count data models are used frequently in the capture-recapture setting to estimate population size, and it can be shown that the two Horvitz-Thompson estimators, associated with the two models, agree. In particular, it is possible to achieve strong results for mixtures of truncated Poisson densities, including reliable, global construction of the unique NPMLE (nonparametric maximum likelihood estimator) of the mixing distribution, implying a unique estimator for the population size. The benefit of these results lies in the fact that it is valid to work with the mixture of truncated count densities, which is less appealing for the practitioner but theoretically easier. Mixtures of truncated count densities form a convex linear model, for which a developed theory exists, including global maximum likelihood theory as well as algorithmic approaches. Once the problem has been solved in this class, it might readily be transformed back to the original problem by means of an explicitly given mapping. Applications of these ideas are given, particularly in the case of the truncated Poisson family.
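
    A minimal sketch of the Horvitz-Thompson population-size estimate in the simplest case covered by this setting, a single zero-truncated Poisson with no mixing (the function name and toy data are illustrative):

        import numpy as np
        from scipy.optimize import brentq

        def zt_poisson_population_size(counts):
            """counts: observed (nonzero) capture counts for the n detected units."""
            counts = np.asarray(counts, dtype=float)
            n, xbar = len(counts), counts.mean()
            # The MLE of lambda solves lambda / (1 - exp(-lambda)) = xbar (needs xbar > 1).
            lam = brentq(lambda l: l / (1 - np.exp(-l)) - xbar, 1e-8, 50.0)
            p_detect = 1 - np.exp(-lam)        # probability of being observed at least once
            return n / p_detect                # Horvitz-Thompson estimate of N

        # Toy data: 100 observed units, each with at least one capture.
        rng = np.random.default_rng(1)
        print(zt_poisson_population_size(rng.poisson(1.2, size=100) + 1))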

  3. Equivalency of two-dimensional algebras

    International Nuclear Information System (INIS)

    Santos, Gildemar Carneiro dos; Pomponet Filho, Balbino Jose S.

    2011-01-01

    Full text: Let us consider a vector z = xi + yj over the field of real numbers, whose basis (i,j) satisfies a given algebra. Any property of this algebra will be reflected in any function of z, so we can state that knowledge of the properties of an algebra leads to more general conclusions than knowledge of the properties of a function. However, the structural properties of an algebra do not change when the algebra undergoes a linear transformation, though the structural constants defining the algebra do change. We say that two algebras are equivalent to each other whenever they are related by a linear transformation. In this case, we have found that some relations between the structural constants are sufficient to recognize whether or not an algebra is equivalent to another. Although the basis transforms linearly, the structural constants change like a third-order tensor, but some combinations of these tensors result in a linear transformation, allowing the entries of the transformation matrix to be written as functions of the structural constants. Eventually, a systematic way to find the transformation matrix between these equivalent algebras is obtained. In this sense, we have performed a thorough classification of associative commutative two-dimensional algebras, and find that even non-division algebras may be helpful in solving non-linear dynamic systems. The Mandelbrot set was used to obtain a pictorial view of each algebra, since equivalent algebras result in the same pattern. Presently we have succeeded in classifying some non-associative two-dimensional algebras, a task more difficult than for the associative ones. (author)
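
    As an illustration of the "pictorial view" mentioned above, the quadratic Mandelbrot iteration can be run in any two-dimensional algebra specified by its structure constants; in the Python sketch below, the tensor C encodes ordinary complex multiplication in the basis (1, i), and swapping in other structure constants produces the picture associated with a different (possibly equivalent) algebra:

        import numpy as np

        # Structure constants C[a, b, k]: e_a * e_b = sum_k C[a, b, k] e_k.
        # This choice reproduces complex multiplication with e_0 = 1, e_1 = i.
        C = np.zeros((2, 2, 2))
        C[0, 0, 0] = 1.0                   # 1 * 1 = 1
        C[0, 1, 1] = C[1, 0, 1] = 1.0      # 1 * i = i * 1 = i
        C[1, 1, 0] = -1.0                  # i * i = -1

        def mult(u, v):
            """Product of u and v in the algebra defined by C."""
            return np.einsum('a,b,abk->k', u, v, C)

        def escape_time(c, max_iter=50, bailout=4.0):
            z = np.zeros(2)
            for n in range(max_iter):
                z = mult(z, z) + c         # the iteration z -> z*z + c
                if z @ z > bailout:
                    return n               # diverged: point is outside the set
            return max_iter                # treated as inside the set

        # Coarse ASCII rendering of the resulting set for this algebra.
        for y in np.linspace(1.2, -1.2, 24):
            print(''.join('#' if escape_time(np.array([x, y])) == 50 else '.'
                          for x in np.linspace(-2.0, 0.8, 60)))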

  4. On the equivalence between the minimum entropy generation rate and the maximum conversion rate for a reactive system

    International Nuclear Information System (INIS)

    Bispo, Heleno; Silva, Nilton; Brito, Romildo; Manzi, João

    2013-01-01

    Highlights: • The minimum entropy generation (MEG) principle improved the reaction performance. • The equivalence of the MEG rate and the maximum conversion rate has been analyzed. • Temperature and residence time are used to establish the validity domain of MEG. • Satisfying the temperature-residence time relationship results in optimal performance. - Abstract: The analysis of the equivalence between the minimum entropy generation (MEG) rate and the maximum conversion rate for a reactive system is the main purpose of this paper. Used as an optimization strategy, minimum entropy production was applied to the production of propylene glycol in a Continuous Stirred-Tank Reactor (CSTR) with a view to determining the best operating conditions, and under such conditions a high conversion rate was found. The effects of the key variables and the restrictions on the validity domain of MEG were investigated, which raises issues addressed in a broad discussion. The simulation results indicate that, from the chemical reaction standpoint, a maximum conversion rate can be considered equivalent to MEG. This result can be explained by examining the classical Maxwell–Boltzmann distribution: under the MEG condition, the molecules of the reactive system present an energy distribution with reduced dispersion, resulting in better-quality collisions between molecules and a higher conversion rate.
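
    For orientation only, the sketch below shows the kind of conversion calculation that sits on the "maximum conversion" side of the comparison: steady-state conversion of a first-order reaction in a CSTR, scanned over temperature and residence time. The Arrhenius parameters are invented, and the entropy-generation side of the analysis is not modelled here.

        # Illustrative sketch (hypothetical kinetics, not the paper's model):
        # steady-state conversion of a first-order reaction in a CSTR,
        # X = k*tau / (1 + k*tau), with an Arrhenius rate constant.
        import numpy as np

        A, Ea, R = 4.7e9, 7.5e4, 8.314     # 1/s, J/mol, J/(mol K) -- assumed values

        def conversion(T, tau):
            k = A * np.exp(-Ea / (R * T))  # first-order rate constant at temperature T
            return k * tau / (1.0 + k * tau)

        for T in (300.0, 320.0, 340.0):    # reactor temperature, K
            for tau in (60.0, 300.0):      # residence time, s
                print(f"T = {T:5.1f} K, tau = {tau:5.0f} s -> X = {conversion(T, tau):.3f}")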

  5. Time-to-contact estimation modulated by implied friction.

    Science.gov (United States)

    Yamada, Yuki; Sasaki, Kyoshiro; Miura, Kayo

    2014-01-01

    The present study demonstrated that friction cues for target motion affect time-to-contact (TTC) estimation. A circular target moved along a linear path with constant velocity and was gradually occluded by a static rectangle. The target moved with forward spin, with backward spin, or without spin. Observers were asked to respond at the time when the moving target appeared to pass the occluder. The results showed that TTC was significantly longer in the backward-spin condition than in the forward-spin and no-spin conditions. Moreover, similar results were obtained when a sound was used to imply friction. Our findings indicate that observers' experiential knowledge of motion coupled with friction intuitively modulated their TTC estimation.

  6. The equivalence of gravitational potential and rechargeable battery for high-altitude long-endurance solar-powered aircraft on energy storage

    International Nuclear Information System (INIS)

    Gao, Xian-Zhong; Hou, Zhong-Xi; Guo, Zheng; Fan, Rong-Fei; Chen, Xiao-Qian

    2013-01-01

    Highlights: • The scope of this paper is to apply solar energy to achieve high-altitude long-endurance flight. • The equivalence of gravitational potential and a rechargeable battery is discussed. • Four factors are discussed to compare the two methods of energy storage. • This work can provide some governing principles for the application of solar-powered aircraft. - Abstract: Applying solar energy is one of the most promising ways to achieve High-Altitude Long-Endurance (HALE) flight, and solar-powered aircraft are usually adopted by research groups to develop HALE aircraft. However, the crucial factor constraining solar-powered aircraft from achieving HALE flight is how to fulfil the power requirement under the weight constraint of rechargeable batteries. Motivated by birds that store energy from thermals by gaining height, the method of storing energy as gravitational potential in solar-powered aircraft has attracted great attention in recent years. In order to make this method more practical, the equivalence of gravitational potential and a rechargeable battery for energy storage on aircraft has been analyzed, and four factors are discussed in this paper: the duration of solar irradiation, the charging rate, the energy density of the rechargeable battery and the initial altitude of the aircraft. This work can provide some governing principles for solar-powered aircraft aiming at unlimited-endurance flight, and the endurance performance of solar-powered aircraft may be greatly improved by using gravitational potential for energy storage.
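
    A back-of-the-envelope comparison of the two storage methods (my own numbers, not the paper's): climbing dh metres stores g*dh joules per kilogram of total aircraft mass, whereas a battery's specific energy applies per kilogram of battery mass; the 200 Wh/kg figure below is an assumed, typical value.

        # Sketch: specific energy of altitude gain versus an assumed battery
        # specific energy of 200 Wh/kg. Note the gravitational figure is per kg
        # of the whole aircraft, the battery figure per kg of battery.
        g = 9.81                                  # m/s^2
        battery_j_per_kg = 200.0 * 3600.0         # 200 Wh/kg expressed in J/kg

        for dh in (1000.0, 5000.0, 10000.0):      # altitude gain in metres
            e_grav = g * dh                       # J per kg of aircraft mass
            ratio = e_grav / battery_j_per_kg
            print(f"dh = {dh:7.0f} m: {e_grav/1000:6.1f} kJ/kg "
                  f"({100*ratio:5.2f}% of the battery's specific energy)")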

  7. Size Does Matter: Implied Object Size is Mentally Simulated During Language Comprehension

    NARCIS (Netherlands)

    de Koning, Bjorn B.; Wassenburg, Stephanie I.; Bos, Lisanne T.; Van der Schoot, Menno

    2017-01-01

    Embodied theories of language comprehension propose that readers construct a mental simulation of described objects that contains perceptual characteristics of their real-world referents. The present study is the first to investigate directly whether implied object size is mentally simulated during language comprehension.

  8. How "moral" are the principles of biomedical ethics?--a cross-domain evaluation of the common morality hypothesis.

    Science.gov (United States)

    Christen, Markus; Ineichen, Christian; Tanner, Carmen

    2014-06-17

    The principles of biomedical ethics - autonomy, non-maleficence, beneficence, and justice - are of paradigmatic importance for framing ethical problems in medicine and for teaching ethics to medical students and professionals. In order to underline this significance, Tom L. Beauchamp and James F. Childress base the principles in the common morality, i.e. they claim that the principles represent basic moral values shared by all persons committed to morality and are thus grounded in human moral psychology. We empirically investigated the relationship of the principles to other moral and non-moral values that provide orientations in medicine. By way of comparison, we performed a similar analysis for the business & finance domain. We evaluated the perceived degree of "morality" of 14 values relevant to medicine (n1 = 317, students and professionals) and 14 values relevant to business & finance (n2 = 247, students and professionals). Ratings were made along four dimensions intended to characterize different aspects of morality. We found that compared to other values, the principles-related values received lower ratings across several dimensions that characterize morality. By interpreting our finding using a clustering and a network analysis approach, we suggest that the principles can be understood as "bridge values" that are connected both to moral and non-moral aspects of ethical dilemmas in medicine. We also found that the social domain (medicine vs. business & finance) influences the degree of perceived morality of values. Our results are in conflict with the common morality hypothesis of Beauchamp and Childress, which would imply domain-independent high morality ratings of the principles. Our findings support the suggestions by other scholars that the principles of biomedical ethics serve primarily as instruments in deliberated justifications, but lack grounding in a universal "common morality". We propose that the specific manner in which the principles

  9. Equivalent drawbead performance in deep drawing simulations

    NARCIS (Netherlands)

    Meinders, Vincent T.; Geijselaers, Hubertus J.M.; Huetink, Han

    1999-01-01

    Drawbeads are applied in the deep drawing process to improve the control of the material flow during the forming operation. In simulations of the deep drawing process these drawbeads can be replaced by an equivalent drawbead model. In this paper the usage of an equivalent drawbead model in the

  10. S-equivalent Lagrangians in generalized mechanics

    International Nuclear Information System (INIS)

    Negri, L.J.; Silva, Edna G. da.

    1985-01-01

    The problem of s-equivalent Lagrangians is considered in the realm of generalized mechanics. Some results from ordinary (non-generalized) mechanics are extended to the generalized case. A theorem for the reduction of the higher-order Lagrangian description to the usual order is found to be useful for the analysis of generalized mechanical systems and leads to a new class of equivalence between Lagrangian functions. Some new perspectives are pointed out. (Author)

  11. Relationship of the change in implied volatility with the underlying equity index return in Thailand

    OpenAIRE

    Thakolsri, Supachock; Sethapramote, Yuthana; Jiranyakul, Komain

    2016-01-01

    In this study, we examine the relationship between the change in the implied volatility index and the underlying stock index return in the Thai stock market. The data used are daily observations from November 2010 to December 2013. The regression analysis is performed on stationary series. The empirical results reveal that there is evidence of a significantly negative and asymmetric relationship between the underlying stock index return and the change in implied volatility. The finding in this study gi...
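
    A common way to capture the negative and asymmetric relation described above is to regress the daily change in the implied volatility index on the index return plus a separate term for negative returns. The sketch below fits such a specification to simulated data with statsmodels; the model form and the data are illustrative assumptions, not the study's exact setup.

        # Illustrative sketch (simulated data): OLS of the daily change in an
        # implied volatility index on the underlying return, with an extra
        # regressor for negative returns to capture asymmetry.
        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(0)
        ret = rng.normal(0.0, 0.01, 750)                       # daily index returns
        d_iv = -2.0 * ret - 3.0 * np.minimum(ret, 0.0) + rng.normal(0.0, 0.5, 750)

        X = sm.add_constant(np.column_stack([ret, np.minimum(ret, 0.0)]))
        result = sm.OLS(d_iv, X).fit()
        print(result.params)   # negative slope on returns, extra response to losses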

  12. Dynamic equivalence relation on the fuzzy measure algebras

    Directory of Open Access Journals (Sweden)

    Roya Ghasemkhani

    2017-04-01

    The main goal of the present paper is to extend classical results from measure theory and dynamical systems to the fuzzy subset setting. In this paper, the notion of a dynamic equivalence relation is introduced and it is proved that this relation is an equivalence relation. Also, a new metric on the collection of all equivalence classes is introduced and it is proved that this metric is complete.

  13. Construction of a self-supporting tissue-equivalent dividing wall and operational characteristics of a coaxial double-cylindrical tissue-equivalent proportional counter

    International Nuclear Information System (INIS)

    Saion, E.B.; Watt, D.E.

    1994-01-01

    An additional feature incorporated in a coaxial double-cylindrical tissue-equivalent proportional counter is a common tissue-equivalent dividing wall between the inner and outer counters, with a thickness equivalent to the maximum range of protons at the energy of interest. By appropriate use of an anti-coincidence arrangement with the outer counter, the inner counter could be used to discriminate microdosimetric spectra of neutrons in the desired low-energy range from those of faster neutrons. The construction of an A-150 self-supporting tissue-equivalent dividing wall and of an anti-coincidence unit is described. Some operational characteristic tests have been performed to characterize the operation of the new microdosimeter. (author)

  14. Geometry of the local equivalence of states

    Energy Technology Data Exchange (ETDEWEB)

    Sawicki, A; Kus, M, E-mail: assawi@cft.edu.pl, E-mail: marek.kus@cft.edu.pl [Center for Theoretical Physics, Polish Academy of Sciences, Al Lotnikow 32/46, 02-668 Warszawa (Poland)

    2011-12-09

    We present a description of locally equivalent states in terms of symplectic geometry. Using the moment map between local orbits in the space of states and coadjoint orbits of the local unitary group, we reduce the problem of local unitary equivalence to an easy part consisting of identifying the proper coadjoint orbit and a harder problem of the geometry of fibers of the moment map. We give a detailed analysis of the properties of orbits of 'equally entangled states'. In particular, we show connections between certain symplectic properties of orbits such as their isotropy and coisotropy with effective criteria of local unitary equivalence. (paper)

  15. Expanding Uncertainty Principle to Certainty-Uncertainty Principles with Neutrosophy and Quad-stage Method

    Directory of Open Access Journals (Sweden)

    Fu Yuhua

    2015-03-01

    The most famous contribution of Heisenberg is the uncertainty principle, but the original uncertainty principle is improper. Considering all possible situations (including the case that people can create laws) and applying Neutrosophy and the Quad-stage Method, this paper presents "certainty-uncertainty principles" in a general form and a variable-dimension fractal form. According to the classification of Neutrosophy, the "certainty-uncertainty principles" can be divided into three principles under different conditions: the "certainty principle", namely that a particle's position and momentum can be known simultaneously; the "uncertainty principle", namely that a particle's position and momentum cannot be known simultaneously; and the neutral (fuzzy) "indeterminacy principle", namely that whether or not a particle's position and momentum can be known simultaneously is undetermined. The special cases of the "certainty-uncertainty principles" include the original uncertainty principle and the Ozawa inequality. In addition, according to the original uncertainty principle, discussing a high-speed particle's speed and track with Newtonian mechanics is unreasonable; but according to the "certainty-uncertainty principles", Newtonian mechanics can be used to discuss the problem of the gravitational deflection of a photon orbit around the Sun (it gives the same deflection angle as general relativity). Finally, because principles and laws in physics that disregard the principle (law) of conservation of energy may be invalid, the "certainty-uncertainty principles" should be restricted (or constrained) by the principle (law) of conservation of energy, so that they satisfy it.
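
    For reference, the two standard relations the abstract refers to can be written as follows (textbook forms, quoted here for orientation rather than taken from the paper): the Heisenberg-Robertson preparation uncertainty relation and Ozawa's error-disturbance inequality.

        % Heisenberg-Robertson relation for position Q and momentum P
        \sigma(Q)\,\sigma(P) \;\ge\; \frac{\hbar}{2}
        % Ozawa inequality: \epsilon(Q) is the measurement error of Q, \eta(P) the
        % disturbance of P, and \sigma(\cdot) the standard deviation in the state
        \epsilon(Q)\,\eta(P) + \epsilon(Q)\,\sigma(P) + \sigma(Q)\,\eta(P) \;\ge\; \frac{\hbar}{2}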

  16. Gyrokinetic equivalence

    International Nuclear Information System (INIS)

    Parra, Felix I; Catto, Peter J

    2009-01-01

    We compare two different derivations of the gyrokinetic equation: the Hamiltonian approach in Dubin D H E et al (1983 Phys. Fluids 26 3524) and the recursive methodology in Parra F I and Catto P J (2008 Plasma Phys. Control. Fusion 50 065014). We prove that both approaches yield the same result at least to second order in a Larmor radius over macroscopic length expansion. There are subtle differences in the definitions of some of the functions that need to be taken into account to prove the equivalence.

  17. Thévenin equivalent based static contingency assessment

    DEFF Research Database (Denmark)

    2015-01-01

    of the determined present state of the power system and determining a first representation of the network based on the determined Thevenin equivalents, determining a modified representation of the network, wherein the modified representation is a representation of the network having at least one contingency, wherein at least one Thevenin equivalent of at least one voltage-controlled node is modified due to the at least one contingency, the modified network representation being determined on the basis of the modified Thevenin equivalents, calculating voltage angles of the modified Thevenin equivalents, and evaluating the voltage angles to determine whether the network having at least one contingency admits a steady state. A method of providing information on a real-time static security assessment of a power system is also disclosed.
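
    A minimal sketch of the angle calculation behind such an assessment (the lossless, purely reactive Thevenin impedance and all numbers are my simplifying assumptions, not the patent's): for a voltage-controlled node of voltage V injecting power P towards a Thevenin source E through reactance X, P = E*V*sin(delta)/X, so a steady-state angle exists only if |P*X/(E*V)| <= 1.

        # Sketch: existence of a steady-state voltage angle for a voltage-controlled
        # node behind a (modified) Thevenin equivalent. A contingency that raises
        # the Thevenin reactance can push the node past the solvability limit.
        import math

        def thevenin_angle(P, E, V, X):
            """Voltage angle in radians, or None if no steady state exists."""
            s = P * X / (E * V)
            if abs(s) > 1.0:
                return None
            return math.asin(s)

        for X in (0.30, 0.60):   # pre- and post-contingency Thevenin reactance (pu)
            delta = thevenin_angle(P=2.8, E=1.0, V=1.0, X=X)
            print(f"X = {X:.2f} pu ->",
                  "no steady state" if delta is None else f"delta = {math.degrees(delta):.1f} deg")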

  18. Teleparallel equivalent of Lovelock gravity

    Science.gov (United States)

    González, P. A.; Vásquez, Yerko

    2015-12-01

    There is a growing interest in modified gravity theories based on torsion, as these theories exhibit interesting cosmological implications. In this work, inspired by the teleparallel formulation of general relativity, we present its extension to Lovelock gravity, known as the most natural extension of general relativity in higher-dimensional space-times. First, we review the teleparallel equivalents of general relativity and of Gauss-Bonnet gravity, and then we construct the teleparallel equivalent of Lovelock gravity. In order to achieve this goal, we use the vielbein and the connection without imposing the Weitzenböck connection. Then, we extract the teleparallel formulation of the theory by setting the curvature to null.
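
    For orientation, the teleparallel equivalent of general relativity mentioned above is usually written in terms of the torsion scalar T built from the vielbein; the standard textbook definitions (quoted here for context, not verbatim from this paper) are:

        % Torsion of the Weitzenboeck connection, contortion, superpotential,
        % torsion scalar and the TEGR action (e = det e^A_mu = sqrt(-g))
        T^{\rho}{}_{\mu\nu} = e_A{}^{\rho}\left(\partial_{\mu} e^{A}{}_{\nu} - \partial_{\nu} e^{A}{}_{\mu}\right)
        K^{\mu\nu}{}_{\rho} = -\tfrac{1}{2}\left(T^{\mu\nu}{}_{\rho} - T^{\nu\mu}{}_{\rho} - T_{\rho}{}^{\mu\nu}\right)
        S_{\rho}{}^{\mu\nu} = \tfrac{1}{2}\left(K^{\mu\nu}{}_{\rho} + \delta^{\mu}_{\rho}\,T^{\alpha\nu}{}_{\alpha} - \delta^{\nu}_{\rho}\,T^{\alpha\mu}{}_{\alpha}\right)
        T = S_{\rho}{}^{\mu\nu}\,T^{\rho}{}_{\mu\nu}, \qquad
        S_{\mathrm{TEGR}} = \frac{1}{2\kappa}\int d^{4}x\; e\, T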

  19. Implied motion language can influence visual spatial memory.

    Science.gov (United States)

    Vinson, David W; Engelen, Jan; Zwaan, Rolf A; Matlock, Teenie; Dale, Rick

    2017-07-01

    How do language and vision interact? Specifically, what impact can language have on visual processing, especially related to spatial memory? What are typically considered errors in visual processing, such as remembering the location of an object to be farther along its motion trajectory than it actually is, can be explained as perceptual achievements that are driven by our ability to anticipate future events. In two experiments, we tested whether the prior presentation of motion language influences visual spatial memory in ways that afford greater perceptual prediction. Experiment 1 showed that motion language influenced judgments for the spatial memory of an object beyond the known effects of implied motion present in the image itself. Experiment 2 replicated this finding. Our findings support a theory of perception as prediction.

  20. Equivalence of several Chern-Simons matter models

    International Nuclear Information System (INIS)

    Chen, W.; Itoi, C.

    1994-01-01

    Chern-Simons (CS) coupling characterizes not only statistics, but also the spin and scaling dimension of matter fields. We demonstrate spin transmutation in relativistic CS matter theory, and moreover show the equivalence of several models. We study the CS vector model in some detail, which provides a consistent check of the assertion of equivalence.