WorldWideScience

Sample records for weak equivalence principle

  1. A weak equivalence principle test on a suborbital rocket

    Energy Technology Data Exchange (ETDEWEB)

    Reasenberg, Robert D; Phillips, James D, E-mail: reasenberg@cfa.harvard.ed [Smithsonian Astrophysical Observatory, Harvard-Smithsonian Center for Astrophysics, Cambridge, MA 02138 (United States)

    2010-05-07

    We describe a Galilean test of the weak equivalence principle, to be conducted during the free-fall portion of a sounding rocket flight. The test of a single pair of substances is aimed at a measurement uncertainty of σ(η) < 10^{-16} after averaging the results of eight separate drops. The weak equivalence principle measurement is made with a set of four laser gauges that are expected to achieve 0.1 pm Hz^{-1/2}. The discovery of a violation (η ≠ 0) would have profound implications for physics, astrophysics and cosmology.
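
    The quoted target refers to the Eötvös parameter η, the fractional difference between the free-fall accelerations of the two test substances. As a purely illustrative sketch (hypothetical per-drop uncertainty, not the authors' analysis pipeline), the following shows how η is formed from two accelerations and how averaging eight independent drops tightens the statistical error:

```python
import math

def eotvos_parameter(a1, a2):
    """Eotvos parameter: fractional difference of two free-fall accelerations."""
    return 2.0 * (a1 - a2) / (a1 + a2)

# Hypothetical single-drop uncertainty on eta (illustrative value only).
sigma_per_drop = 2.8e-16

# Averaging N statistically independent drops scales the error down by sqrt(N).
n_drops = 8
sigma_combined = sigma_per_drop / math.sqrt(n_drops)

print(f"eta for identical accelerations: {eotvos_parameter(9.81, 9.81):.1e}")
print(f"combined statistical uncertainty after {n_drops} drops: {sigma_combined:.1e}")
```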

  2. Can quantum probes satisfy the weak equivalence principle?

    International Nuclear Information System (INIS)

    Seveso, Luigi; Paris, Matteo G.A.

    2017-01-01

    We address the question of whether quantum probes in a gravitational field can be considered as test particles obeying the weak equivalence principle (WEP). A formulation of the WEP is proposed which applies also in the quantum regime, while maintaining the physical content of its classical counterpart. Such a formulation requires that the introduction of a gravitational field should not modify the Fisher information about the mass of a freely-falling probe that is extractable through measurements of its position. We discover that, while in a uniform field quantum probes satisfy our formulation of the WEP exactly, gravity gradients can encode nontrivial information about the particle’s mass in its wavefunction, leading to violations of the WEP. - Highlights: • Can quantum probes under gravity be approximated as test-bodies? • A formulation of the weak equivalence principle for quantum probes is proposed. • Quantum probes are found to violate it as a matter of principle.

  3. Can quantum probes satisfy the weak equivalence principle?

    Energy Technology Data Exchange (ETDEWEB)

    Seveso, Luigi, E-mail: luigi.seveso@unimi.it [Quantum Technology Lab, Dipartimento di Fisica, Università degli Studi di Milano, I-20133 Milano (Italy); Paris, Matteo G.A. [Quantum Technology Lab, Dipartimento di Fisica, Università degli Studi di Milano, I-20133 Milano (Italy); INFN, Sezione di Milano, I-20133 Milano (Italy)

    2017-05-15

    We address the question of whether quantum probes in a gravitational field can be considered as test particles obeying the weak equivalence principle (WEP). A formulation of the WEP is proposed which applies also in the quantum regime, while maintaining the physical content of its classical counterpart. Such a formulation requires that the introduction of a gravitational field should not modify the Fisher information about the mass of a freely-falling probe that is extractable through measurements of its position. We discover that, while in a uniform field quantum probes satisfy our formulation of the WEP exactly, gravity gradients can encode nontrivial information about the particle’s mass in its wavefunction, leading to violations of the WEP. - Highlights: • Can quantum probes under gravity be approximated as test-bodies? • A formulation of the weak equivalence principle for quantum probes is proposed. • Quantum probes are found to violate it as a matter of principle.

  4. Do positrons and antiprotons respect the weak equivalence principle?

    International Nuclear Information System (INIS)

    Hughes, R.J.

    1990-01-01

    We resolve the difficulties which Morrison identified with energy conservation and the gravitational red-shift when particles of antimatter, such as the positron and antiproton, do not respect the weak equivalence principle. 13 refs

  5. On experimental testing of the weak equivalence principle for the neutron

    International Nuclear Information System (INIS)

    Pokotilovskij, Yu.N.

    1994-01-01

    The experimental situation regarding verification of the weak equivalence principle for the neutron is reviewed. A direct method is proposed to increase significantly (to ∼ 10^{-6}) the precision of the equivalence-principle test for the neutron in a Galileo-type experiment, which uses a thin-film Fabry-Perot interferometer and precise time-of-flight spectrometry of ultracold neutrons

  6. Equivalence principles and electromagnetism

    Science.gov (United States)

    Ni, W.-T.

    1977-01-01

    The implications of the weak equivalence principles are investigated in detail for electromagnetic systems in a general framework. In particular, it is shown that the universality of free-fall trajectories (Galileo weak equivalence principle) does not imply the validity of the Einstein equivalence principle. However, the Galileo principle plus the universality of free-fall rotation states does imply the Einstein principle.

  7. Cosmological equivalence principle and the weak-field limit

    International Nuclear Information System (INIS)

    Wiltshire, David L.

    2008-01-01

    The strong equivalence principle is extended in application to averaged dynamical fields in cosmology to include the role of the average density in the determination of inertial frames. The resulting cosmological equivalence principle is applied to the problem of synchronization of clocks in the observed universe. Once density perturbations grow to give density contrasts of order 1 on scales of tens of megaparsecs, the integrated deceleration of the local background regions of voids relative to galaxies must be accounted for in the relative synchronization of clocks of ideal observers who measure an isotropic cosmic microwave background. The relative deceleration of the background can be expected to represent a scale in which weak-field Newtonian dynamics should be modified to account for dynamical gradients in the Ricci scalar curvature of space. This acceleration scale is estimated using the best-fit nonlinear bubble model of the universe with backreaction. Although this acceleration, of order 10^{-10} m s^{-2}, is small, when integrated over the lifetime of the universe it amounts to an accumulated relative difference of 38% in the rate of average clocks in galaxies as compared to volume-average clocks in the emptiness of voids. A number of foundational aspects of the cosmological equivalence principle are also discussed, including its relation to Mach's principle, the Weyl curvature hypothesis, and the initial conditions of the universe.

  8. Strong quantum violation of the gravitational weak equivalence principle by a non-Gaussian wave packet

    International Nuclear Information System (INIS)

    Chowdhury, P; Majumdar, A S; Sinha, S; Home, D; Mousavi, S V; Mozaffari, M R

    2012-01-01

    The weak equivalence principle of gravity is examined at the quantum level in two ways. First, the position detection probabilities of particles described by a non-Gaussian wave packet projected upwards against gravity around the classical turning point and also around the point of initial projection are calculated. These probabilities exhibit mass dependence at both these points, thereby reflecting the quantum violation of the weak equivalence principle. Second, the mean arrival time of freely falling particles is calculated using the quantum probability current, which also turns out to be mass dependent. Such a mass dependence is shown to be enhanced by increasing the non-Gaussianity parameter of the wave packet, thus signifying a stronger violation of the weak equivalence principle through a greater departure from Gaussianity of the initial wave packet. The mass dependence of both the position detection probabilities and the mean arrival time vanishes in the limit of large mass. Thus, compatibility between the weak equivalence principle and quantum mechanics is recovered in the macroscopic limit of the latter. A selection of Bohm trajectories is exhibited to illustrate these features in the free fall case. (paper)
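
    The large-mass limit mentioned above can be made concrete through the ħ/m scaling of wave-packet spreading, which controls how strongly detection probabilities and arrival times depend on mass. A minimal illustration for a plain Gaussian packet (not the non-Gaussian packets or Bohm trajectories analysed in the paper):

```python
import numpy as np

HBAR = 1.054571817e-34  # reduced Planck constant, J s

def width_after_time(m, sigma0, t):
    """Width of a freely evolving Gaussian packet; the spreading term scales as hbar/(m*sigma0)."""
    return sigma0 * np.sqrt(1.0 + (HBAR * t / (2.0 * m * sigma0**2))**2)

g = 9.81                       # m/s^2
t_fall = np.sqrt(2 * 1.0 / g)  # classical time to fall 1 m
sigma0 = 1e-6                  # initial packet width, m

# The mass dependence of the quantum correction fades as m grows (classical limit).
for m in (1e-27, 1e-25, 1e-23):  # illustrative masses, kg
    print(f"m = {m:.0e} kg -> packet width after the fall: {width_after_time(m, sigma0, t_fall):.3e} m")
```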

  9. Test masses for the G-POEM test of the weak equivalence principle

    International Nuclear Information System (INIS)

    Reasenberg, Robert D; Phillips, James D; Popescu, Eugeniu M

    2011-01-01

    We describe the design of the test masses that are used in the 'ground-based principle of equivalence measurement' test of the weak equivalence principle. The main features of the design are the incorporation of corner cubes and the use of mass removal and replacement to create pairs of test masses with different test substances. The corner cubes allow for the vertical separation of the test masses to be measured with picometer accuracy by SAO's unique tracking frequency laser gauge, while the mass removal and replacement operations are arranged so that the test masses incorporating different test substances have nominally identical gravitational properties. (papers)

  10. Weak principle of equivalence and gauge theory of the tetrad gravitational field

    International Nuclear Information System (INIS)

    Tunyak, V.N.

    1978-01-01

    It is shown that, unlike the tetrad formulation of general relativity derived from the requirement of Poincaré group localization, the tetrad gravitation theory corresponding to the Treder formulation of the weak equivalence principle, in which the nongravitational-matter Lagrangian is the direct covariant generalization of the special-relativistic expression to Riemannian space-time, is incompatible with the known method for deriving a gauge theory of the tetrad gravitational field

  11. Higher-order gravity and the classical equivalence principle

    Science.gov (United States)

    Accioly, Antonio; Herdy, Wallace

    2017-11-01

    As is well known, the deflection of any particle by a gravitational field within the context of Einstein’s general relativity — which is a geometrical theory — is, of course, nondispersive. Nevertheless, as we shall show in this paper, the mentioned result will change totally if the bending is analyzed — at the tree level — in the framework of higher-order gravity. Indeed, to first order, the deflection angle corresponding to the scattering of different quantum particles by the gravitational field mentioned above is not only spin dependent, it is also dispersive (energy-dependent). Consequently, it violates the classical equivalence principle (universality of free fall, or equality of inertial and gravitational masses) which is a nonlocal principle. However, contrary to popular belief, it is in agreement with the weak equivalence principle which is nothing but a statement about purely local effects. It is worthy of note that the weak equivalence principle encompasses the classical equivalence principle locally. We also show that the claim that there exists an incompatibility between quantum mechanics and the weak equivalence principle, is incorrect.

  12. Equivalence Principle, Higgs Boson and Cosmology

    Directory of Open Access Journals (Sweden)

    Mauro Francaviglia

    2013-05-01

    We discuss here possible tests for Palatini f(R)-theories together with their implications for different formulations of the Equivalence Principle. We shall show that Palatini f(R)-theories obey the Weak Equivalence Principle and violate the Strong Equivalence Principle. The violations of the Strong Equivalence Principle vanish in vacuum (and purely electromagnetic) solutions as well as on short time scales with respect to the age of the universe. However, we suggest that a framework based on Palatini f(R)-theories is more general than standard General Relativity (GR) and it sheds light on the interpretation of data and results in a way which is more model independent than standard GR itself.

  13. Quantum Field Theoretic Derivation of the Einstein Weak Equivalence Principle Using Emqg Theory

    OpenAIRE

    Ostoma, Tom; Trushyk, Mike

    1999-01-01

    We provide a quantum field theoretic derivation of Einstein's Weak Equivalence Principle of general relativity using a new quantum gravity theory proposed by the authors called Electro-Magnetic Quantum Gravity or EMQG (ref. 1). EMQG is based on a new theory of inertia (ref. 5) proposed by R. Haisch, A. Rueda, and H. Puthoff (which we modified and called Quantum Inertia). Quantum Inertia states that classical Newtonian Inertia is a property of matter due to the strictly local electrical force ...

  14. Galactic Shapiro delay to the Crab pulsar and limit on weak equivalence principle violation

    Energy Technology Data Exchange (ETDEWEB)

    Desai, Shantanu [Indian Institute of Technology, Department of Physics, Hyderabad, Telangana (India); Kahya, Emre [Istanbul Technical University, Department of Physics, Istanbul (Turkey)

    2018-02-15

    We calculate the total galactic Shapiro delay to the Crab pulsar by including the contributions from the dark matter as well as baryonic matter along the line of sight. The total delay due to dark matter potential is about 3.4 days. For baryonic matter, we included the contributions from both the bulge and the disk, which are approximately 0.12 and 0.32 days respectively. The total delay from all the matter distribution is therefore 3.84 days. We also calculate the limit on violations of the weak equivalence principle by using observations of ''nano-shot'' giant pulses from the Crab pulsar with time-delay < 0.4 ns, as well as using time differences between radio and optical photons observed from this pulsar. Using the former, we obtain a limit on violation of the weak equivalence principle in terms of the PPN parameter Δγ < 2.41 x 10^{-15}. From the time-difference between simultaneous optical and radio observations, we get Δγ < 1.54 x 10^{-9}. We also point out differences in our calculation of Shapiro delay and that from two recent papers (Yang and Zhang, Phys Rev D 94(10):101501, 2016; Zhang and Gong, Astrophys J 837:134, 2017), which used the same observations to obtain a corresponding limit on Δγ. (orig.)

  15. Galactic Shapiro delay to the Crab pulsar and limit on weak equivalence principle violation

    International Nuclear Information System (INIS)

    Desai, Shantanu; Kahya, Emre

    2018-01-01

    We calculate the total galactic Shapiro delay to the Crab pulsar by including the contributions from the dark matter as well as baryonic matter along the line of sight. The total delay due to dark matter potential is about 3.4 days. For baryonic matter, we included the contributions from both the bulge and the disk, which are approximately 0.12 and 0.32 days respectively. The total delay from all the matter distribution is therefore 3.84 days. We also calculate the limit on violations of the weak equivalence principle by using observations of ''nano-shot'' giant pulses from the Crab pulsar with time-delay < 0.4 ns, as well as using time differences between radio and optical photons observed from this pulsar. Using the former, we obtain a limit on violation of the weak equivalence principle in terms of the PPN parameter Δγ < 2.41 x 10^{-15}. From the time-difference between simultaneous optical and radio observations, we get Δγ < 1.54 x 10^{-9}. We also point out differences in our calculation of Shapiro delay and that from two recent papers (Yang and Zhang, Phys Rev D 94(10):101501, 2016; Zhang and Gong, Astrophys J 837:134, 2017), which used the same observations to obtain a corresponding limit on Δγ. (orig.)
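
    The limits quoted in the two Crab-pulsar records above follow from the standard Shapiro-delay argument: the gravitational time delay carries a (1 + γ)/2 prefactor, so if two signals arrive within Δt of each other, the difference in their PPN parameters is bounded by Δγ ≲ 2Δt / δt_Shapiro. A short back-of-the-envelope check using the numbers quoted in the abstract (not the authors' line-of-sight potential integration):

```python
SECONDS_PER_DAY = 86400.0

def delta_gamma_limit(dt_obs_s, shapiro_delay_s):
    """Bound on the PPN difference: the Shapiro delay enters with a (1+gamma)/2
    prefactor, so |Delta gamma| < 2 * dt_obs / shapiro_delay."""
    return 2.0 * dt_obs_s / shapiro_delay_s

shapiro_total = 3.84 * SECONDS_PER_DAY  # total galactic delay quoted above

# Nano-shot giant pulses: arrival-time spread < 0.4 ns.
print(f"nano-shot limit: Delta gamma < {delta_gamma_limit(0.4e-9, shapiro_total):.2e}")
```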

  16. Quantum mechanics and the equivalence principle

    International Nuclear Information System (INIS)

    Davies, P C W

    2004-01-01

    A quantum particle moving in a gravitational field may penetrate the classically forbidden region of the gravitational potential. This raises the question of whether the time of flight of a quantum particle in a gravitational field might deviate systematically from that of a classical particle due to tunnelling delay, representing a violation of the weak equivalence principle. I investigate this using a model quantum clock to measure the time of flight of a quantum particle in a uniform gravitational field, and show that a violation of the equivalence principle does not occur when the measurement is made far from the turning point of the classical trajectory. The results are then confirmed using the so-called dwell time definition of quantum tunnelling. I conclude with some remarks about the strong equivalence principle in quantum mechanics

  17. Foundations of gravitation theory: the principle of equivalence

    International Nuclear Information System (INIS)

    Haugan, M.P.

    1978-01-01

    A new framework is presented within which to discuss the principle of equivalence and its experimental tests. The framework incorporates a special structure imposed on the equivalence principle by the principle of energy conservation. This structure includes relations among the conceptual components of the equivalence principle as well as quantitative relations among the outcomes of its experimental tests. One of the most striking new results obtained through use of this framework is a connection between the breakdown of local Lorentz invariance and the breakdown of the principle that all bodies fall with the same acceleration in a gravitational field. An extensive discussion of experimental tests of the equivalence principle and their significance is also presented. Within the above framework, theory-independent analyses of a broad range of equivalence principle tests are possible. Gravitational redshift experiments, Doppler-shift experiments, the Turner-Hill and Hughes-Drever experiments, and a number of solar-system tests of gravitation theories are analyzed. Application of the techniques of theoretical nuclear physics to the quantitative interpretation of equivalence principle tests using laboratory materials of different composition yields a number of important results. It is found that current Eötvös experiments significantly demonstrate the compatibility of the weak interactions with the equivalence principle. It is also shown that the Hughes-Drever experiment is the most precise test of local Lorentz invariance yet performed. The work leads to a strong, tightly knit empirical basis for the principle of equivalence, the central pillar of the foundations of gravitation theory

  18. MICROSCOPE Mission: First Constraints on the Violation of the Weak Equivalence Principle by a Light Scalar Dilaton

    Science.gov (United States)

    Bergé, Joel; Brax, Philippe; Métris, Gilles; Pernot-Borràs, Martin; Touboul, Pierre; Uzan, Jean-Philippe

    2018-04-01

    The existence of a light or massive scalar field with a coupling to matter weaker than gravitational strength is a possible source of violation of the weak equivalence principle. We use the first results on the Eötvös parameter by the MICROSCOPE experiment to set new constraints on such scalar fields. For a massive scalar field of mass smaller than 10^{-12} eV (i.e., range larger than a few 10^{5} m), we improve existing constraints on the coupling |α| by one order of magnitude, both for a coupling to the baryon number and for a coupling to the difference between the baryon and the lepton numbers. We also consider a model describing the coupling of a generic dilaton to the standard matter fields with five parameters, for a light field: we find that, for masses smaller than 10^{-12} eV, the constraints on the dilaton coupling parameters are improved by one order of magnitude compared to previous equivalence principle tests.

  19. MICROSCOPE Mission: First Constraints on the Violation of the Weak Equivalence Principle by a Light Scalar Dilaton.

    Science.gov (United States)

    Bergé, Joel; Brax, Philippe; Métris, Gilles; Pernot-Borràs, Martin; Touboul, Pierre; Uzan, Jean-Philippe

    2018-04-06

    The existence of a light or massive scalar field with a coupling to matter weaker than gravitational strength is a possible source of violation of the weak equivalence principle. We use the first results on the Eötvös parameter by the MICROSCOPE experiment to set new constraints on such scalar fields. For a massive scalar field of mass smaller than 10^{-12} eV (i.e., range larger than a few 10^{5} m), we improve existing constraints on the coupling |α| by one order of magnitude, both for a coupling to the baryon number and for a coupling to the difference between the baryon and the lepton numbers. We also consider a model describing the coupling of a generic dilaton to the standard matter fields with five parameters, for a light field: we find that, for masses smaller than 10^{-12} eV, the constraints on the dilaton coupling parameters are improved by one order of magnitude compared to previous equivalence principle tests.

  20. Adaptation of the TH Epsilon Mu formalism for the analysis of the equivalence principle in the presence of the weak and electroweak interaction

    Science.gov (United States)

    Fennelly, A. J.

    1981-01-01

    The TH epsilon mu formalism, used in analyzing equivalence principle experiments of metric and nonmetric gravity theories, is adapted to the description of the electroweak interaction using the Weinberg-Salam unified SU(2) x U(1) model. The use of the TH epsilon mu formalism is thereby extended to the weak interactions, showing how the gravitational field affects W(±) and Z(0) boson propagation and the rates of interactions mediated by them. The possibility of a similar extension to the strong interactions via SU(5) grand unified theories is briefly discussed. Also, using the effects of the potentials on the baryon and lepton wave functions, the effects of gravity on weak-interaction-mediated transitions in high-A atoms, which are electromagnetically forbidden, are examined. Three possible experiments to test the equivalence principle in the presence of the weak interactions, which are technologically feasible, are then briefly outlined: (1) K-capture by the Fe nucleus (counting the emitted X-ray); (2) forbidden absorption transitions in high-A atoms' vapor; and (3) counting the relative beta-decay rates in a suitable alpha-beta decay chain, assuming the strong interactions obey the equivalence principle.

  1. Violation of Equivalence Principle and Solar Neutrinos

    International Nuclear Information System (INIS)

    Gago, A.M.; Nunokawa, H.; Zukanovich Funchal, R.

    2001-01-01

    We have updated the analysis of the solution to the solar neutrino problem based on long-wavelength neutrino oscillations induced by a tiny breakdown of the weak equivalence principle of general relativity, and obtained a very good fit to all the solar neutrino data

  2. Testing the principle of equivalence by solar neutrinos

    International Nuclear Information System (INIS)

    Minakata, Hisakazu; Washington Univ., Seattle, WA; Nunokawa, Hiroshi; Washington Univ., Seattle, WA

    1994-04-01

    We discuss the possibility of testing the principle of equivalence with solar neutrinos. If there exists a violation of the equivalence principle, quarks and leptons with different flavors may not universally couple with gravity. The method we discuss employs the quantum mechanical phenomenon of neutrino oscillation to probe the non-universality of the gravitational couplings of neutrinos. We develop an appropriate formalism to deal with neutrino propagation under the weak gravitational fields of the sun in the presence of the flavor mixing. We point out that solar neutrino observation by the next generation water Cherenkov detectors can improve the existing bound on violation of the equivalence principle by 3-4 orders of magnitude if the nonadiabatic Mikheyev-Smirnov-Wolfenstein mechanism is the solution to the solar neutrino problem

  3. Testing the principle of equivalence by solar neutrinos

    International Nuclear Information System (INIS)

    Minakata, H.; Nunokawa, H.

    1995-01-01

    We discuss the possibility of testing the principle of equivalence with solar neutrinos. If there exists a violation of the equivalence principle, quarks and leptons with different flavors may not universally couple with gravity. The method we discuss employs the quantum mechanical phenomenon of neutrino oscillation to probe into the nonuniversality of the gravitational couplings of neutrinos. We develop an appropriate formalism to deal with neutrino propagation under the weak gravitational fields of the Sun in the presence of the flavor mixing. We point out that solar neutrino observation by the next generation water Cherenkov detectors can place stringent bounds on the violation of the equivalence principle to 1 part in 10^{15}-10^{16} if the nonadiabatic Mikheyev-Smirnov-Wolfenstein mechanism is the solution to the solar neutrino problem
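
    The signature these analyses exploit is the opposite energy dependence of the two mechanisms: the mass-mixing phase falls as Δm²L/E, while a gravitationally induced (VEP) phase grows roughly as E|φ|ΔγL (conventions for factors of 2 differ between papers). A schematic comparison with illustrative parameter values, not the bounds derived in the papers:

```python
import numpy as np

HBAR_C_EV_M = 1.973269804e-7  # hbar*c in eV*m, to convert lengths to natural units

def phase_mass_mixing(dm2_eV2, E_eV, L_m):
    """Standard mass-mixing oscillation phase, dm^2 L / (4E): falls with energy."""
    return dm2_eV2 * (L_m / HBAR_C_EV_M) / (4.0 * E_eV)

def phase_vep(phi, delta_gamma, E_eV, L_m):
    """Schematic VEP-induced phase ~ E |phi| Delta-gamma L: grows with energy."""
    return E_eV * abs(phi) * delta_gamma * (L_m / HBAR_C_EV_M)

L_sun_earth = 1.496e11   # m
phi = 1e-5               # illustrative dimensionless gravitational potential
for E_MeV in (1.0, 10.0):
    E = E_MeV * 1e6
    print(E_MeV, "MeV:",
          f"mass phase = {phase_mass_mixing(7.5e-5, E, L_sun_earth):.2e} rad,",
          f"VEP phase = {phase_vep(phi, 1e-15, E, L_sun_earth):.2e} rad")
```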

  4. Energy conservation and the principle of equivalence

    International Nuclear Information System (INIS)

    Haugan, M.P.

    1979-01-01

    If the equivalence principle is violated, then observers performing local experiments can detect effects due to their position in an external gravitational environment (preferred-location effects) or can detect effects due to their velocity through some preferred frame (preferred frame effects). We show that the principle of energy conservation implies a quantitative connection between such effects and structure-dependence of the gravitational acceleration of test bodies (violation of the Weak Equivalence Principle). We analyze this connection within a general theoretical framework that encompasses both non-gravitational local experiments and test bodies as well as gravitational experiments and test bodies, and we use it to discuss specific experimental tests of the equivalence principle, including non-gravitational tests such as gravitational redshift experiments, Eötvös experiments, the Hughes-Drever experiment, and the Turner-Hill experiment, and gravitational tests such as the lunar-laser-ranging ''Eötvös'' experiment, and measurements of anisotropies and variations in the gravitational constant. This framework is illustrated by analyses within two theoretical formalisms for studying gravitational theories: the PPN formalism, which deals with the motion of gravitating bodies within metric theories of gravity, and the THεμ formalism that deals with the motion of charged particles within all metric theories and a broad class of non-metric theories of gravity

  5. Cryogenic test of the equivalence principle

    International Nuclear Information System (INIS)

    Worden, P.W. Jr.

    1976-01-01

    The weak equivalence principle is the hypothesis that the ratio of inertial and passive gravitational mass is the same for all bodies. A greatly improved test of this principle is possible in an orbiting satellite. The most promising experiments for an orbital test are adaptations of the Galilean free-fall experiment and the Eötvös balance. Sensitivity to gravity gradient noise, both from the earth and from the spacecraft, defines a limit to the sensitivity in each case. This limit is generally much worse for an Eötvös balance than for a properly designed free-fall experiment. The difference is related to the difficulty of making a balance sufficiently isoinertial. Cryogenic technology is desirable to take full advantage of the potential sensitivity, but tides in the liquid helium refrigerant may produce a gravity gradient that seriously degrades the ultimate sensitivity. The Eötvös balance appears to have a limiting sensitivity to relative difference of rate of fall of about 2 x 10^{-14} in orbit. The free-fall experiment is limited by helium tide to about 10^{-15}; if the tide can be controlled or eliminated the limit may approach 10^{-18}. Other limitations to equivalence principle experiments are discussed. An experimental test of some of the concepts involved in the orbital free-fall experiment is continuing. The experiment consists in comparing the motions of test masses levitated in a superconducting magnetic bearing, and is itself a sensitive test of the equivalence principle. At present the levitation magnets, position monitors and control coils have been tested and major noise sources identified. A measurement of the equivalence principle is postponed pending development of a system for digitizing data. The experiment and preliminary results are described

  6. Quantification of the equivalence principle

    International Nuclear Information System (INIS)

    Epstein, K.J.

    1978-01-01

    Quantitative relationships illustrate Einstein's equivalence principle, relating it to Newton's ''fictitious'' forces arising from the use of noninertial frames, and to the form of the relativistic time dilatation in local Lorentz frames. The equivalence principle can be interpreted as the equivalence of general covariance to local Lorentz covariance, in a manner which is characteristic of Riemannian and pseudo-Riemannian geometries

  7. Possible test of the strong principle of equivalence

    International Nuclear Information System (INIS)

    Brecher, K.

    1978-01-01

    We suggest that redshift determinations of X-ray and γ-ray lines produced near the surface of neutron stars which arise from different physical processes could provide a significant test of the strong principle of equivalence for strong gravitational fields. As a complement to both the high-precision weak-field solar-system experiments and the cosmological time variation searches, such observations could further test the hypothesis that physics is locally the same at all times and in all places

  8. The gauge principle vs. the equivalence principle

    International Nuclear Information System (INIS)

    Gates, S.J. Jr.

    1984-01-01

    Within the context of field theory, it is argued that the role of the equivalence principle may be replaced by the principle of gauge invariance to provide a logical framework for theories of gravitation

  9. The principle of equivalence reconsidered: assessing the relevance of the principle of equivalence in prison medicine.

    Science.gov (United States)

    Jotterand, Fabrice; Wangmo, Tenzin

    2014-01-01

    In this article we critically examine the principle of equivalence of care in prison medicine. First, we provide an overview of how the principle of equivalence is utilized in various national and international guidelines on health care provision to prisoners. Second, we outline some of the problems associated with its applications, and argue that the principle of equivalence should go beyond equivalence to access and include equivalence of outcomes. However, because of the particular context of the prison environment, third, we contend that the concept of "health" in equivalence of health outcomes needs conceptual clarity; otherwise, it fails to provide a threshold for healthy states among inmates. We accomplish this by examining common understandings of the concepts of health and disease. We conclude our article by showing why the conceptualization of diseases as clinical problems provides a helpful approach in the delivery of health care in prison.

  10. MICROSCOPE Mission: First Results of a Space Test of the Equivalence Principle.

    Science.gov (United States)

    Touboul, Pierre; Métris, Gilles; Rodrigues, Manuel; André, Yves; Baghi, Quentin; Bergé, Joël; Boulanger, Damien; Bremer, Stefanie; Carle, Patrice; Chhun, Ratana; Christophe, Bruno; Cipolla, Valerio; Damour, Thibault; Danto, Pascale; Dittus, Hansjoerg; Fayet, Pierre; Foulon, Bernard; Gageant, Claude; Guidotti, Pierre-Yves; Hagedorn, Daniel; Hardy, Emilie; Huynh, Phuong-Anh; Inchauspe, Henri; Kayser, Patrick; Lala, Stéphanie; Lämmerzahl, Claus; Lebat, Vincent; Leseur, Pierre; Liorzou, Françoise; List, Meike; Löffler, Frank; Panet, Isabelle; Pouilloux, Benjamin; Prieur, Pascal; Rebray, Alexandre; Reynaud, Serge; Rievers, Benny; Robert, Alain; Selig, Hanns; Serron, Laura; Sumner, Timothy; Tanguy, Nicolas; Visser, Pieter

    2017-12-08

    According to the weak equivalence principle, all bodies should fall at the same rate in a gravitational field. The MICROSCOPE satellite, launched in April 2016, aims to test its validity at the 10^{-15} precision level, by measuring the force required to maintain two test masses (of titanium and platinum alloys) exactly in the same orbit. A nonvanishing result would correspond to a violation of the equivalence principle, or to the discovery of a new long-range force. Analysis of the first data gives δ(Ti,Pt)=[-1±9(stat)±9(syst)]×10^{-15} (1σ statistical uncertainty) for the titanium-platinum Eötvös parameter characterizing the relative difference in their free-fall accelerations.

  11. MICROSCOPE Mission: First Results of a Space Test of the Equivalence Principle

    Science.gov (United States)

    Touboul, Pierre; Métris, Gilles; Rodrigues, Manuel; André, Yves; Baghi, Quentin; Bergé, Joël; Boulanger, Damien; Bremer, Stefanie; Carle, Patrice; Chhun, Ratana; Christophe, Bruno; Cipolla, Valerio; Damour, Thibault; Danto, Pascale; Dittus, Hansjoerg; Fayet, Pierre; Foulon, Bernard; Gageant, Claude; Guidotti, Pierre-Yves; Hagedorn, Daniel; Hardy, Emilie; Huynh, Phuong-Anh; Inchauspe, Henri; Kayser, Patrick; Lala, Stéphanie; Lämmerzahl, Claus; Lebat, Vincent; Leseur, Pierre; Liorzou, Françoise; List, Meike; Löffler, Frank; Panet, Isabelle; Pouilloux, Benjamin; Prieur, Pascal; Rebray, Alexandre; Reynaud, Serge; Rievers, Benny; Robert, Alain; Selig, Hanns; Serron, Laura; Sumner, Timothy; Tanguy, Nicolas; Visser, Pieter

    2017-12-01

    According to the weak equivalence principle, all bodies should fall at the same rate in a gravitational field. The MICROSCOPE satellite, launched in April 2016, aims to test its validity at the 10^{-15} precision level, by measuring the force required to maintain two test masses (of titanium and platinum alloys) exactly in the same orbit. A nonvanishing result would correspond to a violation of the equivalence principle, or to the discovery of a new long-range force. Analysis of the first data gives δ(Ti,Pt) = [-1 ± 9(stat) ± 9(syst)] × 10^{-15} (1σ statistical uncertainty) for the titanium-platinum Eötvös parameter characterizing the relative difference in their free-fall accelerations.
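
    As a minimal numerical companion to the result quoted in the two MICROSCOPE listings above (a simple quadrature combination of the two error terms is chosen here purely for illustration, not the mission's error budget):

```python
import math

def eotvos(a_1, a_2):
    """Eotvos parameter: relative difference of two measured free-fall accelerations."""
    return 2.0 * (a_1 - a_2) / (a_1 + a_2)

# Values quoted in the abstract, in units of 1e-15.
central, stat, syst = -1.0, 9.0, 9.0

# One simple convention: add statistical and systematic errors in quadrature.
total = math.hypot(stat, syst)

print(f"delta(Ti,Pt) = ({central:+.0f} +/- {total:.1f}) x 1e-15 -> consistent with zero")
print(f"sanity check, identical accelerations: {eotvos(9.8, 9.8):.1e}")
```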

  12. Quantum equivalence principle without mass superselection

    International Nuclear Information System (INIS)

    Hernandez-Coronado, H.; Okon, E.

    2013-01-01

    The standard argument for the validity of Einstein's equivalence principle in a non-relativistic quantum context involves the application of a mass superselection rule. The objective of this work is to show that, contrary to widespread opinion, the compatibility between the equivalence principle and quantum mechanics does not depend on the introduction of such a restriction. For this purpose, we develop a formalism based on the extended Galileo group, which allows for a consistent handling of superpositions of different masses, and show that, within such scheme, mass superpositions behave as they should in order to obey the equivalence principle. - Highlights: • We propose a formalism for consistently handling, within a non-relativistic quantum context, superpositions of states with different masses. • The formalism utilizes the extended Galileo group, in which mass is a generator. • The proposed formalism allows for the equivalence principle to be satisfied without the need of imposing a mass superselection rule

  13. The principle of equivalence

    International Nuclear Information System (INIS)

    Unnikrishnan, C.S.

    1994-01-01

    Principle of equivalence was the fundamental guiding principle in the formulation of the general theory of relativity. What are its key elements? What are the empirical observations which establish it? What is its relevance to some new experiments? These questions are discussed in this article. (author). 11 refs., 5 figs

  14. Cosmology with equivalence principle breaking in the dark sector

    International Nuclear Information System (INIS)

    Keselman, Jose Ariel; Nusser, Adi; Peebles, P. J. E.

    2010-01-01

    A long-range force acting only between nonbaryonic particles would be associated with a large violation of the weak equivalence principle. We explore cosmological consequences of this idea, which we label ReBEL (daRk Breaking Equivalence principLe). A high resolution hydrodynamical simulation of the distributions of baryons and dark matter confirms our previous findings that a ReBEL force of comparable strength to gravity on comoving scales of about 1 h^{-1} Mpc causes voids between the concentrations of large galaxies to be more nearly empty, suppresses accretion of intergalactic matter onto galaxies at low redshift, and produces an early generation of dense dark-matter halos. A preliminary analysis indicates the ReBEL scenario is consistent with the one-dimensional power spectrum of the Lyman-Alpha forest and the three-dimensional galaxy autocorrelation function. Segregation of baryons and DM in galaxies and systems of galaxies is a strong prediction of ReBEL. ReBEL naturally correlates the baryon mass fraction in groups and clusters of galaxies with the system mass, in agreement with recent measurements.

  15. Attainment of radiation equivalency principle

    International Nuclear Information System (INIS)

    Shmelev, A.N.; Apseh, V.A.

    2004-01-01

    Problems connected with the prospects for long-term development of nuclear energetics are discussed. Basic principles of future large-scale nuclear energetics are listed, with primary attention given to the safety of radioactive waste management. The radiation equivalence principle implies closure of the fuel cycle and management of nuclear materials transportation with low losses in spent fuel and waste processing. Two aspects are considered: radiation equivalence in the global and in the local sense. The necessity of looking for other strategies of fuel cycle and radioactive waste management in full-scale nuclear energy is supported [ru]

  16. Dark matter and the equivalence principle

    Science.gov (United States)

    Frieman, Joshua A.; Gradwohl, Ben-Ami

    1993-01-01

    A survey is presented of the current understanding of dark matter invoked by astrophysical theory and cosmology. Einstein's equivalence principle asserts that local measurements cannot distinguish a system at rest in a gravitational field from one that is in uniform acceleration in empty space. Recent test-methods for the equivalence principle are presently discussed as bases for testing of dark matter scenarios involving the long-range forces between either baryonic or nonbaryonic dark matter and ordinary matter.

  17. Equivalence principle violations and couplings of a light dilaton

    International Nuclear Information System (INIS)

    Damour, Thibault; Donoghue, John F.

    2010-01-01

    We consider possible violations of the equivalence principle through the exchange of a light 'dilaton-like' scalar field. Using recent work on the quark-mass dependence of nuclear binding, we find that the dilaton-quark-mass coupling induces significant equivalence-principle-violating effects varying like the inverse cubic root of the atomic number, ~ A^{-1/3}. We provide a general parametrization of the scalar couplings, but argue that two parameters are likely to dominate the equivalence-principle phenomenology. We indicate the implications of this framework for comparing the sensitivities of current and planned experimental tests of the equivalence principle.
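
    The A^{-1/3} scaling means the dilaton-induced differential acceleration between two test bodies tracks the difference of their inverse cube-root mass numbers. A schematic estimate with a purely hypothetical coupling strength (the actual parametrization and bounds are those of the paper):

```python
def composition_factor(mass_number):
    """Leading composition dependence from nuclear binding, ~ A^(-1/3)."""
    return mass_number ** (-1.0 / 3.0)

def eotvos_estimate(coupling, A1, A2):
    """Schematic Eotvos parameter: coupling strength times the A^(-1/3) difference."""
    return coupling * (composition_factor(A1) - composition_factor(A2))

# Titanium (A ~ 48) vs platinum (A ~ 195) with an illustrative coupling of 1e-12.
print(f"schematic eta(Ti, Pt) ~ {eotvos_estimate(1e-12, 48, 195):.2e}")
```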

  18. The equivalence principle in classical mechanics and quantum mechanics

    OpenAIRE

    Mannheim, Philip D.

    1998-01-01

    We discuss our understanding of the equivalence principle in both classical mechanics and quantum mechanics. We show that not only does the equivalence principle hold for the trajectories of quantum particles in a background gravitational field, but also that it is only because of this that the equivalence principle is even to be expected to hold for classical particles at all.

  19. Weak equivalence classes of complex vector bundles

    Czech Academy of Sciences Publication Activity Database

    Le, Hong-Van

    LXXVII, č. 1 (2008), s. 23-30 ISSN 0862-9544 R&D Projects: GA AV ČR IAA100190701 Institutional research plan: CEZ:AV0Z10190503 Keywords: Chern classes * complex Grassmannians * weak equivalence Subject RIV: BA - General Mathematics

  20. Principle of natural and artificial radioactive series equivalency

    International Nuclear Information System (INIS)

    Vasilyeva, A.N.; Starkov, O.V.

    2001-01-01

    In the present paper, one approach used in the development of a radioactive waste management conception is considered. This approach is based on the principle of radiotoxic equivalency between natural and artificial radioactive series. The radioactivity of natural and artificial radioactive series has been calculated over a 10^{9}-year period. A toxicity evaluation for natural and artificial series has also been made. The correlation between natural radioactive series and their predecessors - actinides produced in thermal and fast reactors - has been considered. It is shown that systematized reactor series data have great scientific significance and that the principle of differential calculation of radiotoxicity is necessary to realize the conception of radiotoxicity equivalency between long-lived radioactive waste and uranium and thorium ores. The calculations show that fulfilment of the equivalency principle is possible for the uranium series (4n+2, 4n+1). It remains a problem for the thorium series, and the principle is impracticable for the neptunium series. (author)

  1. Comments on field equivalence principles

    DEFF Research Database (Denmark)

    Appel-Hansen, Jørgen

    1987-01-01

    It is pointed out that often-used arguments based on a short-circuit concept in presentations of field equivalence principles are not correct. An alternative presentation based on the uniqueness theorem is given. It does not contradict the results obtained by using the short-circuit concept...

  2. Solar system and equivalence principle constraints on f(R) gravity by the chameleon approach

    International Nuclear Information System (INIS)

    Capozziello, Salvatore; Tsujikawa, Shinji

    2008-01-01

    We study constraints on f(R) dark energy models from solar system experiments combined with experiments on the violation of the equivalence principle. When the mass of an equivalent scalar field degree of freedom is heavy in a region with high density, a spherically symmetric body has a thin shell so that an effective coupling of the fifth force is suppressed through a chameleon mechanism. We place experimental bounds on the cosmologically viable models recently proposed in the literature that have an asymptotic form f(R) = R - λR_c[1 - (R_c/R)^{2n}] in the regime R >> R_c. From the solar system constraints on the post-Newtonian parameter γ, we derive the bound n > 0.5, whereas the constraints from the violations of the weak and strong equivalence principles give the bound n > 0.9. This allows a possibility to find the deviation from the Λ-cold dark matter (ΛCDM) cosmological model. For the model f(R) = R - λR_c(R/R_c)^{p} with 0 < p < 1, the constraints push p below about 10^{-10}, which shows that this model is hardly distinguishable from the ΛCDM cosmology

  3. Electrostatic Positioning System for a free fall test at drop tower Bremen and an overview of tests for the Weak Equivalence Principle in past, present and future

    Science.gov (United States)

    Sondag, Andrea; Dittus, Hansjörg

    2016-08-01

    The Weak Equivalence Principle (WEP) is at the basis of General Relativity - the best theory for gravitation today. It has been and still is tested with different methods and accuracies. In this paper an overview of tests of the Weak Equivalence Principle done in the past, developed in the present and planned for the future is given. The best result up to now is derived from the data of torsion balance experiments by Schlamminger et al. (2008). An intuitive test of the WEP consists of the comparison of the accelerations of two free falling test masses of different composition. This was carried out by Kuroda & Mio (1989, 1990), with the most precise result to date for this setup. There is still more potential in this method, especially with a longer free fall time and sensors with a higher resolution. Providing a free fall time of 4.74 s (9.3 s using the catapult), the drop tower of the Center of Applied Space Technology and Microgravity (ZARM) at the University of Bremen is a perfect facility for further improvements. In 2001 a free fall experiment with highly sensitive SQUID (Superconductive QUantum Interference Device) sensors tested the WEP with an accuracy of 10^{-7} (Nietzsche, 2001). Under optimal conditions one could reach an accuracy of 10^{-13} with this setup (Vodel et al., 2001). A description of this experiment and its results is given in the next part of this paper. For the free fall of macroscopic test masses it is important to start with precisely defined starting conditions concerning the positions and velocities of the test masses. An Electrostatic Positioning System (EPS) has been developed for this purpose. It is described in the last part of this paper.

  4. Test of the Weak Equivalence Principle using LIGO observations of GW150914 and Fermi observations of GBM transient 150914

    Directory of Open Access Journals (Sweden)

    Molin Liu

    2017-07-01

    About 0.4 s after the Laser Interferometer Gravitational-Wave Observatory (LIGO) detected a transient gravitational-wave (GW) signal, GW150914, the Fermi Gamma-ray Burst Monitor (GBM) also found a weak electromagnetic transient (GBM transient 150914). Time and location coincidences favor a possible association between GW150914 and GBM transient 150914. Under this possible association, we adopt Fermi's electromagnetic (EM) localization and derive constraints on possible violations of the Weak Equivalence Principle (WEP) from the observations of the two events. Our calculations are based on four comparisons: (1) The first is the comparison of the initial GWs detected at the two LIGO sites. From the different polarizations of these initial GWs, we obtain a limit on any difference in the parametrized post-Newtonian (PPN) parameter Δγ ≲ 10^{-10}. (2) The second is a comparison of GWs and possible EM waves. Using a traditional super-Eddington accretion model for GBM transient 150914, we again obtain an upper limit Δγ ≲ 10^{-10}. Compared with previous results for photons and neutrinos, our limits are five orders of magnitude stronger than those from PeV neutrinos in blazar flares, and seven orders stronger than those from MeV neutrinos in SN1987A. (3) The third is a comparison of GWs with different frequencies in the range [35 Hz, 250 Hz]. (4) The fourth is a comparison of EM waves with different energies in the range [1 keV, 10 MeV]. These last two comparisons lead to an even stronger limit, Δγ ≲ 10^{-8}. Our results highlight the potential of multi-messenger signals exploiting different emission channels to strengthen existing tests of the WEP.

  5. Apparent violation of the principle of equivalence and Killing horizons

    International Nuclear Information System (INIS)

    Zimmerman, R.L.; Farhoosh, H.; Oregon Univ., Eugene

    1980-01-01

    By means of the principle of equivalence, the qualitative behavior of the Schwarzschild horizon about a uniformly accelerating particle is deduced. This result is confirmed for an exact solution of a uniformly accelerating object in the limit of small accelerations. For large accelerations the Schwarzschild horizon appears to violate the qualitative behavior established via the principle of equivalence. When similar arguments are extended to an observable such as the red shift between two observers, there is no departure from the results expected from the principle of equivalence. The resolution of the paradox is brought about by a compensating effect due to the Rindler horizon. (author)

  6. Is a weak violation of the Pauli principle possible?

    International Nuclear Information System (INIS)

    Ignat'ev, A.Y.; Kuz'min, V.A.

    1987-01-01

    We examine models in which there is a weak violation of the Pauli principle. A simple algebra of creation and annihilation operators is constructed which contains a parameter β and describes a weak violation of the Pauli principle (when β = 0 the Pauli principle is satisfied exactly). The commutation relations in this algebra turn out to be trilinear. A model based on this algebra is described. It allows transitions in which the Pauli principle is violated, but the probability of these transitions is suppressed by the quantity β^{2} (even though the interaction Hamiltonian does not contain small parameters)

  7. The principle of general covariance and the principle of equivalence: two distinct concepts

    International Nuclear Information System (INIS)

    Fagundes, H.V.

    It is shown how to construct a theory with general covariance but without the equivalence principle. Such a theory is in disagreement with experiment, but it serves to illustrate the independence of the former principle from the latter one [pt]

  8. The Bohr--Einstein ''weighing-of-energy'' debate and the principle of equivalence

    International Nuclear Information System (INIS)

    Hughes, R.J.

    1990-01-01

    The Bohr--Einstein debate over the ''weighing of energy'' and the validity of the time--energy uncertainty relation is reexamined in the context of gravitation theories that do not respect the equivalence principle. Bohr's use of the equivalence principle is shown to be sufficient, but not necessary, to establish the validity of this uncertainty relation in Einstein's ''weighing-of-energy'' gedanken experiment. The uncertainty relation is shown to hold in any energy-conserving theory of gravity, and so a failure of the equivalence principle does not engender a failure of quantum mechanics. The relationship between the gravitational redshift and the equivalence principle is reviewed

  9. A Weak Comparison Principle for Reaction-Diffusion Systems

    Directory of Open Access Journals (Sweden)

    José Valero

    2012-01-01

    We prove a weak comparison principle for a reaction-diffusion system without uniqueness of solutions. We apply the abstract results to the Lotka-Volterra system with diffusion, a generalized logistic equation, and to a model of fractional-order chemical autocatalysis with decay. Moreover, in the case of the Lotka-Volterra system a weak maximum principle is given, and a suitable estimate in the space of essentially bounded functions L∞ is proved for at least one solution of the problem.

  10. Extended Equivalence Principle: Implications for Gravity, Geometry and Thermodynamics

    OpenAIRE

    Sivaram, C.; Arun, Kenath

    2012-01-01

    The equivalence principle was formulated by Einstein in an attempt to extend the concept of inertial frames to accelerated frames, thereby bringing in gravity. In recent decades, it has been realised that gravity is linked not only with geometry of space-time but also with thermodynamics especially in connection with black hole horizons, vacuum fluctuations, dark energy, etc. In this work we look at how the equivalence principle manifests itself in these different situations where we have str...

  11. A Technique of Teaching the Principle of Equivalence at Ground Level

    Science.gov (United States)

    Lubrica, Joel V.

    2016-01-01

    This paper presents one way of demonstrating the Principle of Equivalence in the classroom. Teaching the Principle of Equivalence involves someone experiencing acceleration through empty space, juxtaposed with the daily encounter with gravity. This classroom activity is demonstrated with a water-filled bottle containing glass marbles and…

  12. Probing Students' Ideas of the Principle of Equivalence

    Science.gov (United States)

    Bandyopadhyay, Atanu; Kumar, Arvind

    2011-01-01

    The principle of equivalence was the first vital clue to Einstein in his extension of special relativity to general relativity, the modern theory of gravitation. In this paper we investigate in some detail students' understanding of this principle in a variety of contexts, when they are undergoing an introductory course on general relativity. The…

  13. Quantum mechanics in noninertial reference frames: Violations of the nonrelativistic equivalence principle

    International Nuclear Information System (INIS)

    Klink, W.H.; Wickramasekara, S.

    2014-01-01

    In previous work we have developed a formulation of quantum mechanics in non-inertial reference frames. This formulation is grounded in a class of unitary cocycle representations of what we have called the Galilean line group, the generalization of the Galilei group that includes transformations amongst non-inertial reference frames. These representations show that in quantum mechanics, just as is the case in classical mechanics, the transformations to accelerating reference frames give rise to fictitious forces. A special feature of these previously constructed representations is that they all respect the non-relativistic equivalence principle, wherein the fictitious forces associated with linear acceleration can equivalently be described by gravitational forces. In this paper we exhibit a large class of cocycle representations of the Galilean line group that violate the equivalence principle. Nevertheless the classical mechanics analogue of these cocycle representations all respect the equivalence principle. -- Highlights: •A formulation of Galilean quantum mechanics in non-inertial reference frames is given. •The key concept is the Galilean line group, an infinite dimensional group. •A large class of general cocycle representations of the Galilean line group is constructed. •These representations show violations of the equivalence principle at the quantum level. •At the classical limit, no violations of the equivalence principle are detected

  14. Neutrino oscillations in non-inertial frames and the violation of the equivalence principle: neutrino mixing induced by the equivalence principle violation

    International Nuclear Information System (INIS)

    Lambiase, G.

    2001-01-01

    Neutrino oscillations are analyzed in an accelerating and rotating reference frame, assuming that the gravitational coupling of neutrinos is flavor dependent, which implies a violation of the equivalence principle. Unlike the usual studies in which a constant gravitational field is considered, such frames could represent a more suitable framework for testing if a breakdown of the equivalence principle occurs, due to the possibility to modulate the (simulated) gravitational field. The violation of the equivalence principle implies, for the case of a maximal gravitational mixing angle, the presence of an off-diagonal term in the mass matrix. The consequences on the evolution of flavor (mass) eigenstates of such a term are analyzed for solar (oscillations in the vacuum) and atmospheric neutrinos. We calculate the flavor oscillation probability in the non-inertial frame, which does depend on its angular velocity and linear acceleration, as well as on the energy of neutrinos, the mass-squared difference between two mass eigenstates, and on the measure of the degree of violation of the equivalence principle (Δγ). In particular, we find that the energy dependence disappears for vanishing mass-squared difference, unlike the result obtained by Gasperini, Halprin, Leung, and other physical mechanisms proposed as a viable explanation of neutrino oscillations. Estimations on the upper values of Δγ are inferred for a rotating observer (with vanishing linear acceleration) comoving with the earth, hence ω ≈ 7 x 10^{-5} rad/sec, and all other alternative mechanisms generating the oscillation phenomena have been neglected. In this case we find that the constraints on Δγ are given by Δγ ≤ 10^{2} for solar neutrinos and Δγ ≤ 10^{6} for atmospheric neutrinos. (orig.)

  15. Test of the Equivalence Principle in the Dark sector on galactic scales

    International Nuclear Information System (INIS)

    Mohapi, N.; Hees, A.; Larena, J.

    2016-01-01

    The Einstein Equivalence Principle is a fundamental principle of the theory of General Relativity. While this principle has been thoroughly tested with standard matter, the question of its validity in the Dark sector remains open. In this paper, we consider a general tensor-scalar theory that allows to test the equivalence principle in the Dark sector by introducing two different conformal couplings to standard matter and to Dark matter. We constrain these couplings by considering galactic observations of strong lensing and of velocity dispersion. Our analysis shows that, in the case of a violation of the Einstein Equivalence Principle, data favour violations through coupling strengths that are of opposite signs for ordinary and Dark matter. At the same time, our analysis does not show any significant deviations from General Relativity

  16. The equivalence principle and the gravitational constant in experimental relativity

    International Nuclear Information System (INIS)

    Spallicci, A.D.A.M.

    1988-01-01

    Fischbach's analysis of the Eötvös experiment, showing an embedded fifth force, has stressed the importance of further tests of the Equivalence Principle (EP). From Galilei and Newton, the EP played the role of a postulate for all gravitational physics and mechanics (weak EP), until Einstein, who extended the validity of the EP to all physics (strong EP). After Fischbach's publication on the fifth force, several experiments have been performed or simply proposed to test the WEP. They are concerned with possible gravitational potential anomalies, depending upon distances or matter composition. While the low level of accuracy with which the gravitational constant G is known has been recognized, experiments have been proposed to test G in the range from a few cm up to 200 m. This paper highlights the different features of the proposed space experiments. Possible implications for the metric formalism for objects in low potential and slow motion are briefly indicated

  17. Einstein's equivalence principle instead of the inertia forces

    International Nuclear Information System (INIS)

    Herreros Mateos, F.

    1997-01-01

    In this article I intend to show that Einstein's equivalence principle advantageously replaces the inertial forces in the study and solution of problems involving non-inertial systems. (Author) 13 refs

  18. On Weak Markov's Principle

    DEFF Research Database (Denmark)

    Kohlenbach, Ulrich Wilhelm

    2002-01-01

    We show that the so-called weak Markov's principle (WMP), which states that every pseudo-positive real number is positive, is underivable in E-HA + AC. Since this system allows one to formalize (at least large parts of) Bishop's constructive mathematics, this makes it unlikely that WMP can be proved within...... the framework of Bishop-style mathematics (which has been open for about 20 years). The underivability even holds if the ineffective schema of full comprehension (in all types) for negated formulas (in particular for -free formulas) is added, which allows one to derive the law of excluded middle...

  19. The principle of equivalence and the Trojan asteroids

    International Nuclear Information System (INIS)

    Orellana, R.; Vucetich, H.

    1986-05-01

    An analysis of the Trojan asteroids' motion has been carried out in order to set limits on possible violations of the principle of equivalence. Preliminary results, in agreement with general relativity, are reported. (author)

  20. The equivalence principle in a quantum world

    DEFF Research Database (Denmark)

    Bjerrum-Bohr, N. Emil J.; Donoghue, John F.; El-Menoufi, Basem Kamal

    2015-01-01

    We show how modern methods can be applied to quantum gravity at low energy. We test how quantum corrections challenge the classical framework behind the equivalence principle (EP), for instance through introduction of nonlocality from quantum physics, embodied in the uncertainty principle. When the energy is small, we now have the tools to address this conflict explicitly. Despite the violation of some classical concepts, the EP continues to provide the core of the quantum gravity framework through the symmetry - general coordinate invariance - that is used to organize the effective field theory......

  1. Free Fall and the Equivalence Principle Revisited

    Science.gov (United States)

    Pendrill, Ann-Marie

    2017-01-01

    Free fall is commonly discussed as an example of the equivalence principle, in the context of a homogeneous gravitational field, which is a reasonable approximation for small test masses falling moderate distances. Newton's law of gravity provides a generalisation to larger distances, and also brings in an inhomogeneity in the gravitational field.…

  2. Infrared equivalence of strongly and weakly coupled gauge theories

    International Nuclear Information System (INIS)

    Olesen, P.

    1975-10-01

    Using the decoupling theorem of Appelquist and Carazzone, it is shown that in terms of Feynman diagrams the pure Yang-Mills theory is equivalent in the infrared limit to a theory (zero-mass renormalized) where the vector mesons are coupled to fermions, and where the fermions do not decouple. By taking enough fermions it is then shown that even though the pure Yang-Mills theory is characterized by the lack of applicability of perturbation theory, nevertheless the effective coupling in the equivalent fermion description is very weak. The effective mass in the zero-mass renormalization blows up. In the fermion description, diagrams involving only vector mesons are suppressed relative to diagrams containing at least one fermion loop. (Auth.)

  3. Gravitational Lagrangians, Mach's Principle, and the Equivalence Principle in an Expanding Universe

    Science.gov (United States)

    Essén, Hanno

    2014-08-01

    Gravitational Lagrangians as derived by Fock for the Einstein-Infeld-Hoffmann approach, and by Kennedy assuming only a fourth rank tensor interaction, contain long range interactions. Here we investigate how these affect the local dynamics when integrated over an expanding universe out to the Hubble radius. Taking the cosmic expansion velocity into account in a heuristic manner it is found that these long range interactions imply Mach's principle, provided the universe has the critical density, and that mass is renormalized. Suitable higher order additions to the Lagrangians make the formalism consistent with the equivalence principle.

  4. Weak interaction: past answers, present questions

    International Nuclear Information System (INIS)

    Ne'eman, Y.

    1977-02-01

    A historical sketch of the weak interaction is presented. From beta ray to pion decay, the V-A theory of Marshak and Sudarshan, CVC, the principle of equivalence, universality as an algebraic condition, PCAC, the renormalized weak Hamiltonian in the rehabilitation of field theory, and some current issues are considered in this review. 47 references

  5. Tests of the equivalence principle with neutral kaons

    CERN Document Server

    Apostolakis, Alcibiades J; Backenstoss, Gerhard; Bargassa, P; Behnke, O; Benelli, A; Bertin, V; Blanc, F; Bloch, P; Carlson, P J; Carroll, M; Cawley, E; Chardin, G; Chertok, M B; Danielsson, M; Dejardin, M; Derré, J; Ealet, A; Eleftheriadis, C; Faravel, L; Fetscher, W; Fidecaro, Maria; Filipcic, A; Francis, D; Fry, J; Gabathuler, Erwin; Gamet, R; Gerber, H J; Go, A; Haselden, A; Hayman, P J; Henry-Coüannier, F; Hollander, R W; Jon-And, K; Kettle, P R; Kokkas, P; Kreuger, R; Le Gac, R; Leimgruber, F; Mandic, I; Manthos, N; Marel, Gérard; Mikuz, M; Miller, J; Montanet, François; Müller, A; Nakada, Tatsuya; Pagels, B; Papadopoulos, I M; Pavlopoulos, P; Polivka, G; Rickenbach, R; Roberts, B L; Ruf, T; Sakelliou, L; Schäfer, M; Schaller, L A; Schietinger, T; Schopper, A; Tauscher, Ludwig; Thibault, C; Touchard, F; Touramanis, C; van Eijk, C W E; Vlachos, S; Weber, P; Wigger, O; Wolter, M; Zavrtanik, D; Zimmerman, D; Ellis, Jonathan Richard; Mavromatos, Nikolaos E; Nanopoulos, Dimitri V

    1999-01-01

    We test the Principle of Equivalence for particles and antiparticles, using CPLEAR data on tagged K⁰ and K̄⁰ decays into π⁺π⁻. For the first time, we search for possible annual, monthly and diurnal modulations of the observables |η₊₋| and φ₊₋, that could be correlated with variations in astrophysical potentials. Within the accuracy of CPLEAR, the measured values of |η₊₋| and φ₊₋ are found not to be correlated with changes of the gravitational potential. We analyze data assuming effective scalar, vector and tensor interactions, and we conclude that the Principle of Equivalence between particles and antiparticles holds to a level of 6.5, 4.3 and 1.8 × 10⁻⁹, respectively, for scalar, vector and tensor potentials originating from the Sun with a range much greater than the distance Earth-Sun. We also study energy-dependent effects that might arise from vector or tensor interactions. Finally, we compile upper limits on the gravitational coupling difference betwee...

  6. High energy cosmic neutrinos and the equivalence principle

    International Nuclear Information System (INIS)

    Minakata, H.

    1996-01-01

    Observation of ultra-high energy neutrinos, in particular detection of ν_τ, from cosmologically distant sources like active galactic nuclei (AGN) opens new possibilities to search for neutrino flavor conversion. We consider the effects of violation of the equivalence principle (VEP) on the propagation of these cosmic neutrinos. In particular, we discuss two effects: (1) the oscillations of neutrinos due to VEP in the gravitational field of our Galaxy and in intergalactic space; (2) resonance flavor conversion driven by the gravitational potential of AGN. We show that the ultra-high energies of the neutrinos as well as the cosmological distances to AGN, or the strong AGN gravitational potential, allow one to improve the accuracy of testing of the equivalence principle by 25 orders of magnitude for massless neutrinos (Δf ∼ 10⁻⁴¹) and by 11 orders of magnitude for massive neutrinos (Δf ∼ 10⁻²⁸ × (Δm²/1 eV²)). The experimental signatures of the transitions induced by VEP are discussed. (author). 17 refs

  7. Weak scale from the maximum entropy principle

    Science.gov (United States)

    Hamada, Yuta; Kawai, Hikaru; Kawana, Kiyoharu

    2015-03-01

    The theory of the multiverse and wormholes suggests that the parameters of the Standard Model (SM) are fixed in such a way that the radiation of the S³ universe at the final stage, S_rad, becomes maximum, which we call the maximum entropy principle. Although it is difficult to confirm this principle generally, for a few parameters of the SM we can check whether S_rad actually becomes maximum at the observed values. In this paper, we regard S_rad at the final stage as a function of the weak scale (the Higgs expectation value) v_h, and show that it becomes maximum around v_h = O(300 GeV) when the dimensionless couplings in the SM, i.e., the Higgs self-coupling, the gauge couplings, and the Yukawa couplings, are fixed. Roughly speaking, we find that the weak scale is given by v_h ∼ T_BBN²/(M_pl y_e⁵), where y_e is the electron Yukawa coupling, T_BBN is the temperature at which Big Bang nucleosynthesis starts, and M_pl is the Planck mass.
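    As a rough numerical illustration of the quoted relation (a back-of-the-envelope sketch with assumed round numbers, not values taken from the paper), one can check that v_h ∼ T_BBN²/(M_pl y_e⁵) indeed lands at a few hundred GeV:

      # Assumed inputs: T_BBN ~ 1 MeV, the (non-reduced) Planck mass, and the
      # electron Yukawa coupling obtained from m_e and the observed v_h = 246 GeV.
      import math

      T_BBN = 1e-3            # BBN temperature in GeV (~1 MeV, assumed)
      M_pl = 1.22e19          # Planck mass in GeV (assumed non-reduced)
      m_e = 0.511e-3          # electron mass in GeV
      v_obs = 246.0           # observed Higgs vacuum expectation value in GeV

      y_e = math.sqrt(2) * m_e / v_obs          # electron Yukawa coupling, ~2.9e-6
      v_h = T_BBN**2 / (M_pl * y_e**5)          # weak scale from the quoted relation

      print(f"y_e ~ {y_e:.2e}")
      print(f"v_h ~ {v_h:.0f} GeV")             # a few hundred GeV for these inputs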

  8. The c equivalence principle and the correct form of writing Maxwell's equations

    International Nuclear Information System (INIS)

    Heras, Jose A

    2010-01-01

    It is well known that the speed c_u = 1/√(ε₀μ₀) is obtained in the process of defining SI units via action-at-a-distance forces, like the force between two static charges and the force between two long and parallel currents. The speed c_u is then physically different from the observed speed of propagation c associated with electromagnetic waves in vacuum. However, repeated experiments have led to the numerical equality c_u = c, which we have called the c equivalence principle. In this paper we point out that ∇×E = −[1/(ε₀μ₀c²)] ∂B/∂t is the correct form of writing Faraday's law when the c equivalence principle is not assumed. We also discuss the covariant form of Maxwell's equations without assuming the c equivalence principle.
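    A minimal numerical check of the "c equivalence principle" as stated above (a sketch using standard CODATA values, which are assumptions here rather than quantities from the paper): the unit-defining speed c_u = 1/√(ε₀μ₀) agrees with the measured speed of light to within the experimental precision of ε₀ and μ₀.

      import math

      eps0 = 8.8541878128e-12   # vacuum permittivity, F/m (assumed CODATA value)
      mu0 = 1.25663706212e-6    # vacuum permeability, N/A^2 (assumed CODATA value)
      c = 299_792_458.0         # defined speed of light, m/s

      c_u = 1.0 / math.sqrt(eps0 * mu0)
      print(f"c_u = {c_u:.6e} m/s")
      print(f"relative difference from c: {abs(c_u - c) / c:.2e}")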

  9. Uniformly accelerating charged particles. A threat to the equivalence principle

    International Nuclear Information System (INIS)

    Lyle, Stephen N.

    2008-01-01

    There has been a long debate about whether uniformly accelerated charges should radiate electromagnetic energy and how one should describe their worldline through a flat spacetime, i.e., whether the Lorentz-Dirac equation is right. There are related questions in curved spacetimes, e.g., do different varieties of equivalence principle apply to charged particles, and can a static charge in a static spacetime radiate electromagnetic energy? The problems with the LD equation in flat spacetime are spelt out in some detail here, and its extension to curved spacetime is discussed. Different equivalence principles are compared and some vindicated. The key papers are discussed in detail and many of their conclusions are significantly revised by the present solution. (orig.)

  10. Equivalence principle for quantum systems: dephasing and phase shift of free-falling particles

    Science.gov (United States)

    Anastopoulos, C.; Hu, B. L.

    2018-02-01

    We ask the question of how the (weak) equivalence principle established in classical gravitational physics should be reformulated and interpreted for massive quantum objects that may also have internal degrees of freedom (dof). This inquiry is necessary because even elementary concepts like a classical trajectory are not well defined in quantum physics—trajectories originating from quantum histories become viable entities only under stringent decoherence conditions. From this investigation we posit two logically and operationally distinct statements of the equivalence principle for quantum systems. Version A: the probability distribution of position for a free-falling particle is the same as the probability distribution of a free particle, modulo a mass-independent shift of its mean. Version B: any two particles with the same velocity wave-function behave identically in free fall, irrespective of their masses. Both statements apply to all quantum states, including those without a classical correspondence, and also for composite particles with quantum internal dof. We also investigate the consequences of the interaction between internal and external dof induced by free fall. For a class of initial states, we find dephasing occurs for the translational dof, namely, the suppression of the off-diagonal terms of the density matrix, in the position basis. We also find a gravitational phase shift in the reduced density matrix of the internal dof that does not depend on the particle’s mass. For classical states, the phase shift has a natural classical interpretation in terms of gravitational red-shift and special relativistic time-dilation.

  11. Einstein's Equivalence Principle and Invalidity of Thorne's Theory for LIGO

    Directory of Open Access Journals (Sweden)

    Lo C. Y.

    2006-04-01

    The theoretical foundation of LIGO's design is based on the equation of motion derived by Thorne. His formula, motivated by Einstein's theory of measurement, shows that the gravitational wave-induced displacement of a mass with respect to an object is proportional to the distance from the object. On the other hand, based on the observed bending of light and Einstein's equivalence principle, it is concluded that such induced displacement has nothing to do with the distance from another object. It is shown that the derivation of Thorne's formula has invalid assumptions that make it inapplicable to LIGO. This is a good counterexample for those who claimed that Einstein's equivalence principle is not important or even irrelevant.

  12. Mars seasonal polar caps as a test of the equivalence principle

    International Nuclear Information System (INIS)

    Rubincam, David Parry

    2011-01-01

    The seasonal polar caps of Mars can be used to test the equivalence principle in general relativity. The north and south caps, which are composed of carbon dioxide, wax and wane with the seasons. If the ratio of the inertial (passive) to gravitational (active) masses of the caps differs from the same ratio for the rest of Mars, then the equivalence principle fails, Newton's third law fails, and the caps will pull Mars one way and then the other with a force aligned with the planet's spin axis. This leads to a secular change in Mars's along-track position in its orbit about the Sun, and to a secular change in the orbit's semimajor axis. The caps are a poor Eötvös test of the equivalence principle, being 4 orders-of-magnitude weaker than laboratory tests and 7 orders-of-magnitude weaker than that found by lunar laser ranging; the reason is the small mass of the caps compared to Mars as a whole. The principal virtue of using Mars is that the caps contain carbon, an element not normally considered in such experiments. The Earth with its seasonal snow cover can also be used for a similar test.

  13. Mars Seasonal Polar Caps as a Test of the Equivalence Principle

    Science.gov (United States)

    Rubincam, David Parry

    2011-01-01

    The seasonal polar caps of Mars can be used to test the equivalence principle in general relativity. The north and south caps, which are composed of carbon dioxide, wax and wane with the seasons. If the ratio of the inertial to gravitational masses of the caps differs from the same ratio for the rest of Mars, then the equivalence principle fails, Newton's third law fails, and the caps will pull Mars one way and then the other with a force aligned with the planet's spin axis. This leads to a secular change in Mars's along-track position in its orbit about the Sun, and to a secular change in the orbit's semimajor axis. The caps are a poor Eötvös test of the equivalence principle, being 4 orders-of-magnitude weaker than laboratory tests and 7 orders-of-magnitude weaker than that found by lunar laser ranging; the reason is the small mass of the caps compared to Mars as a whole. The principal virtue of using Mars is that the caps contain carbon, an element not normally considered in such experiments. The Earth with its seasonal snow cover can also be used for a similar test.

  14. The Equivalence Principle and Anomalous Magnetic Moment Experiments

    OpenAIRE

    Alvarez, C.; Mann, R. B.

    1995-01-01

    We investigate the possibility of testing the Einstein Equivalence Principle (EEP) using measurements of anomalous magnetic moments of elementary particles. We compute the one-loop correction for the g-2 anomaly within the class of nonmetric theories of gravity described by the THεμ formalism. We find several novel mechanisms for breaking the EEP whose origin is due purely to radiative corrections. We discuss the possibilities of setting new empirical constraints on these effects.

  15. Phenomenology of the Equivalence Principle with Light Scalars

    OpenAIRE

    Damour, Thibault; Donoghue, John F.

    2010-01-01

    Light scalar particles with couplings of sub-gravitational strength, which can generically be called 'dilatons', can produce violations of the equivalence principle. However, in order to understand experimental sensitivities one must know the coupling of these scalars to atomic systems. We report here on a study of the required couplings. We give a general Lagrangian with five independent dilaton parameters and calculate the "dilaton charge" of atomic systems for each of these. Two combinatio...

  16. Relativity and equivalence principles in the gauge theory of gravitation

    International Nuclear Information System (INIS)

    Ivanenko, D.; Sardanashvili, G.

    1981-01-01

    The roles of the relativity principle (RP) and the equivalence principle (EP) in the gauge theory of gravity are shown. In the gravitational theory formulated in the fibre-bundle formalism, the RP can be stated as the requirement that the equations be covariant with respect to the GL⁺(4,R)(X) gauge group. In that case the RP turns out to be identical to the gauge principle of a gauge theory of external symmetries, and the gravitational theory can be constructed directly as a gauge theory. In general relativity the equivalence principle supplements the RP and is intended for describing the transition to special relativity in some reference system. The approach described takes into account that in a gauge theory, besides the gauge fields, Goldstone and Higgs fields can also arise under conditions of spontaneous symmetry breaking; the gravitational metric field is related to these fields, which is a consequence of taking the RP into account in the gauge theory of gravitation [ru]

  17. Equivalence of Dirac quantization and Schwinger's action principle quantization

    International Nuclear Information System (INIS)

    Das, A.; Scherer, W.

    1987-01-01

    We show that the method of Dirac quantization is equivalent to Schwinger's action principle quantization. The relation between the Lagrange undetermined multipliers in Schwinger's method and Dirac's constraint bracket matrix is established and it is explicitly shown that the two methods yield identical (anti)commutators. This is demonstrated in the non-trivial example of supersymmetric quantum mechanics in superspace. (orig.)

  18. Verification of the weak equivalence principle of inertial and gravitational mass using SQUIDs; Ueberpruefung des schwachen Aequivalenzprinzips von traeger und schwerer Masse mittels SQUIDs

    Energy Technology Data Exchange (ETDEWEB)

    Vodel, W.; Nietzsche, S.; Neubert, R. [Friedrich-Schiller-Universitaet Jena (Germany). Inst. fuer Festkoerperphysik; Dittus, H. [Univ. Bremen (Germany). Zentrum fuer angewandte Raumfahrttechnologie und Mikrogravitation

    2003-07-01

    The weak equivalence principle is one of the fundamental hypotheses of general relativity and one of the key elements of our physical picture of the world, but despite numerous and ever more precise measurements since Galileo its strict validity remains comparatively poorly established experimentally. The new SQUID technology may offer a solution. The contribution presents the experiments of Jena University. Applications are envisaged, e.g., in the STEP space mission of the NASA/ESA. [Translated from the German original] The weak equivalence principle is one of the fundamental hypotheses of the general theory of relativity and thus one of the cornerstones of our physical picture of the world. Although, from the first experiments of Galileo Galilei at the Leaning Tower of Pisa in 1638 up to the present day, there have been numerous and ever more precise measurements testing the equivalence of gravitational and inertial mass, the strict validity of this fundamental principle is, by comparison, experimentally rather poorly established. Newer methods, such as the use of SQUID-based measurement techniques and the performance of experiments on satellites, promise improvements in the near future, so that theoretical approaches to the unification of all known physical interactions which predict a violation of the weak equivalence principle could be constrained experimentally. The contribution gives an overview of the SQUID-based measurement technique developed at the University of Jena for testing the equivalence principle and summarizes the experimental results obtained so far in free-fall experiments at the Bremen drop tower. An outlook on the planned NASA/ESA space mission STEP for a precision test of the weak equivalence principle concludes the contribution. (orig.)

  19. Test of the Equivalence Principle in an Einstein Elevator

    Science.gov (United States)

    Shapiro, Irwin I.; Glashow, S.; Lorenzini, E. C.; Cosmo, M. L.; Cheimets, P. N.; Finkelstein, N.; Schneps, M.

    2005-01-01

    This Annual Report illustrates the work carried out during the last grant-year activity on the Test of the Equivalence Principle in an Einstein Elevator. The activity focused on the following main topics: (1) analysis and conceptual design of a detector configuration suitable for the flight tests; (2) development of techniques for extracting a small signal from data strings with colored and white noise; (3) design of the mechanism that spins and releases the instrument package inside the cryostat; and (4) experimental activity carried out by our non-US partners (a summary is shown in this report). The analysis and conceptual design of the flight detector (point 1) were focused on studying the response of the differential accelerometer during free fall, in the presence of errors and precession dynamics, for various detector configurations. The goal was to devise a detector configuration in which an Equivalence Principle violation (EPV) signal at the sensitivity threshold level can be successfully measured and resolved out of a much stronger dynamics-related noise and gravity gradient. A detailed analysis and comprehensive simulation effort led us to a detector design that can accomplish that goal successfully.

  20. The c equivalence principle and the correct form of writing Maxwell's equations

    Energy Technology Data Exchange (ETDEWEB)

    Heras, Jose A, E-mail: herasgomez@gmail.co [Universidad Autonoma Metropolitana Unidad Azcapotzalco, Av. San Pablo No. 180, Col. Reynosa, 02200, Mexico DF (Mexico)

    2010-09-15

    It is well known that the speed c_u = 1/√(ε₀μ₀) is obtained in the process of defining SI units via action-at-a-distance forces, like the force between two static charges and the force between two long and parallel currents. The speed c_u is then physically different from the observed speed of propagation c associated with electromagnetic waves in vacuum. However, repeated experiments have led to the numerical equality c_u = c, which we have called the c equivalence principle. In this paper we point out that ∇×E = −[1/(ε₀μ₀c²)] ∂B/∂t is the correct form of writing Faraday's law when the c equivalence principle is not assumed. We also discuss the covariant form of Maxwell's equations without assuming the c equivalence principle.

  1. Equivalence principle and the baryon acoustic peak

    Science.gov (United States)

    Baldauf, Tobias; Mirbabayi, Mehrdad; Simonović, Marko; Zaldarriaga, Matias

    2015-08-01

    We study the dominant effect of a long wavelength density perturbation δ(λ_L) on short distance physics. In the nonrelativistic limit, the result is a uniform acceleration, fixed by the equivalence principle, and typically has no effect on statistical averages due to translational invariance. This same reasoning has been formalized to obtain a "consistency condition" on the cosmological correlation functions. In the presence of a feature, such as the acoustic peak at ℓ_BAO, this naive expectation breaks down for λ_L … explicitly applied to the one-loop calculation of the power spectrum. Finally, the success of baryon acoustic oscillation reconstruction schemes is argued to be another empirical evidence for the validity of the results.

  2. The equivalence principle

    International Nuclear Information System (INIS)

    Smorodinskij, Ya.A.

    1980-01-01

    The pre-relativistic history of the equivalence principle (EP) is presented briefly, and its role in the history of the discovery of the general theory of relativity (G.R.T.) is elucidated. According to modern measurements, the ratio of inertial to gravitational mass does not differ from 1 by more than one part in 10¹². Attention is paid to the difference between the gravitational field and the electromagnetic one: the energy of the gravitational field distributed in space is itself a source of the field, so gravitational fields always interact upon superposition, whereas electromagnetic fields from different sources simply add. On the basis of the EP it is established that the Sun's field interacts with the Earth's gravitational energy in the same way as with any other energy; this proves that the gravitational field itself gravitates toward a heavy body. The motion of a gyroscope in the Earth's gravitational field is presented as a paradox: the calculation shows that a gyroscope on a satellite undergoes a positive precession, its axis turning by an angle α during one revolution of the satellite around the Earth and, because of the curvature of space, by an additional angle twice as large as α, so that the resulting turn is 3α. It is shown on the basis of the EP that the polarization plane of a ray of light does not turn, in any coordinate system, when the ray passes through a gravitational field. Alongside the historical value of the EP, the necessity of taking its requirements into account in describing the physical world is noted.

  3. Equivalence principle and quantum mechanics: quantum simulation with entangled photons.

    Science.gov (United States)

    Longhi, S

    2018-01-15

    Einstein's equivalence principle (EP) states the complete physical equivalence of a gravitational field and corresponding inertial field in an accelerated reference frame. However, to what extent the EP remains valid in non-relativistic quantum mechanics is a controversial issue. To avoid violation of the EP, Bargmann's superselection rule forbids a coherent superposition of states with different masses. Here we suggest a quantum simulation of non-relativistic Schrödinger particle dynamics in non-inertial reference frames, which is based on the propagation of polarization-entangled photon pairs in curved and birefringent optical waveguides and Hong-Ou-Mandel quantum interference measurement. The photonic simulator can emulate superposition of mass states, which would lead to violation of the EP.

  4. Theoretical aspects of the equivalence principle

    International Nuclear Information System (INIS)

    Damour, Thibault

    2012-01-01

    We review several theoretical aspects of the equivalence principle (EP). We emphasize the unsatisfactory fact that the EP maintains the absolute character of the coupling constants of physics, while general relativity and its generalizations (Kaluza–Klein, …, string theory) suggest that all absolute structures should be replaced by dynamical entities. We discuss the EP-violation phenomenology of dilaton-like models, which is likely to be dominated by the linear superposition of two effects: a signal proportional to the nuclear Coulomb energy, related to the variation of the fine-structure constant, and a signal proportional to the surface nuclear binding energy, related to the variation of the light quark masses. We recall various theoretical arguments (including a recently proposed anthropic argument) suggesting that the EP be violated at a small, but not unmeasurably small level. This motivates the need for improved tests of the EP. These tests are probing new territories in physics that are related to deep, and mysterious, issues in fundamental physics. (paper)

  5. Equivalence principle implications of modified gravity models

    International Nuclear Information System (INIS)

    Hui, Lam; Nicolis, Alberto; Stubbs, Christopher W.

    2009-01-01

    Theories that attempt to explain the observed cosmic acceleration by modifying general relativity all introduce a new scalar degree of freedom that is active on large scales, but is screened on small scales to match experiments. We demonstrate that if such screening occurs via the chameleon mechanism, such as in f(R) theory, it is possible to have order unity violation of the equivalence principle, despite the absence of explicit violation in the microscopic action. Namely, extended objects such as galaxies or constituents thereof do not all fall at the same rate. The chameleon mechanism can screen the scalar charge for large objects but not for small ones (large/small is defined by the depth of the gravitational potential and is controlled by the scalar coupling). This leads to order one fluctuations in the ratio of the inertial mass to gravitational mass. We provide derivations in both Einstein and Jordan frames. In Jordan frame, it is no longer true that all objects move on geodesics; only unscreened ones, such as test particles, do. In contrast, if the scalar screening occurs via strong coupling, such as in the Dvali-Gabadadze-Porrati braneworld model, equivalence principle violation occurs at a much reduced level. We propose several observational tests of the chameleon mechanism: 1. small galaxies should accelerate faster than large galaxies, even in environments where dynamical friction is negligible; 2. voids defined by small galaxies would appear larger compared to standard expectations; 3. stars and diffuse gas in small galaxies should have different velocities, even if they are on the same orbits; 4. lensing and dynamical mass estimates should agree for large galaxies but disagree for small ones. We discuss possible pitfalls in some of these tests. The cleanest is the third one where the mass estimate from HI rotational velocity could exceed that from stars by 30% or more. To avoid blanket screening of all objects, the most promising place to look is in

  6. Violation of the equivalence principle for stressed bodies in asynchronous relativity

    Energy Technology Data Exchange (ETDEWEB)

    Andrade Martins, R. de (Centro de Logica, Epistemologia e Historia da Ciencia, Campinas (Brazil))

    1983-12-11

    In the recently developed asynchronous formulation of the relativistic theory of extended bodies, the inertial mass of a body does not explicitly depend on its pressure or stress. The detailed analysis of the weight of a box filled with a gas and placed in a weak gravitational field shows that this feature of asynchronous relativity implies a breakdown of the equivalence between inertial and passive gravitational mass for stressed systems.

  7. Acceleration Measurements Using Smartphone Sensors: Dealing with the Equivalence Principle

    OpenAIRE

    Monteiro, Martín; Cabeza, Cecilia; Martí, Arturo C.

    2014-01-01

    Acceleration sensors built into smartphones, iPads or tablets can conveniently be used in the physics laboratory. By virtue of the equivalence principle, a sensor fixed in a non-inertial reference frame cannot discern between a gravitational field and an accelerated system. Accordingly, acceleration values read by these sensors must be corrected for the gravitational component. A physical pendulum was studied by way of example, and absolute acceleration and rotation angle values were derived...
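    The correction described above can be illustrated with a small toy calculation (not the authors' code; the pendulum length, initial angle and point-pendulum model are illustrative assumptions). By the equivalence principle the sensor reads the proper acceleration a_kin − g, so the kinematic acceleration is recovered by adding back gravity projected onto the sensor axes:

      import math

      g = 9.81                    # m/s^2
      L = 0.5                     # assumed distance from pivot to sensor, m
      theta, omega = 0.3, 0.0     # initial angle from vertical (rad) and angular velocity
      dt = 1e-4

      for _ in range(5000):       # integrate a simple point pendulum for 0.5 s
          alpha = -(g / L) * math.sin(theta)
          omega += alpha * dt
          theta += omega * dt
      alpha = -(g / L) * math.sin(theta)

      # True kinematic acceleration of the sensor in its rotating (radial, tangential) frame
      a_rad_true = -L * omega**2              # centripetal component
      a_tan_true = L * alpha                  # tangential component

      # What an ideal accelerometer fixed to the pendulum would read (proper acceleration)
      f_rad = a_rad_true - g * math.cos(theta)
      f_tan = a_tan_true + g * math.sin(theta)

      # Correction: add the gravity components back to recover the kinematic values
      a_rad = f_rad + g * math.cos(theta)
      a_tan = f_tan - g * math.sin(theta)
      print(f"radial: {a_rad:.4f} (true {a_rad_true:.4f})  "
            f"tangential: {a_tan:.4f} (true {a_tan_true:.4f})")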

  8. Equivalence principle, CP violations, and the Higgs-like boson mass

    International Nuclear Information System (INIS)

    Bellucci, S.; Faraoni, V.

    1994-01-01

    We consider the violation of the equivalence principle induced by a massive gravivector, i.e., the partner of the graviton in N>1 supergravity. The present limits on this violation allow us to obtain a lower bound on the vacuum expectation value of the scalar field that gives the gravivector its mass. We consider also the effective neutral kaon mass difference induced by the gravivector and compare the result with the experimental data on the CP-violation parameter ε

  9. Density matrix in quantum electrodynamics, equivalence principle and Hawking effect

    International Nuclear Information System (INIS)

    Frolov, V.P.; Gitman, D.M.

    1978-01-01

    The expression for the density matrix describing particles of one sort (electrons or positrons) created by an external electromagnetic field from the vacuum is obtained. The explicit form of the density matrix is found for the case of constant and uniform electric field. Arguments are given for the presence of a connection between the thermal nature of the density matrix describing particles created by the gravitational field of a black hole and the equivalence principle. (author)

  10. How to estimate the differential acceleration in a two-species atom interferometer to test the equivalence principle

    Energy Technology Data Exchange (ETDEWEB)

    Varoquaux, G; Nyman, R A; Geiger, R; Cheinet, P; Bouyer, P [Laboratoire Charles Fabry de l' Institut d' Optique, Campus Polytechnique, RD 128, 91127 Palaiseau (France); Landragin, A [LNE-SYRTE, UMR8630, UPMC, Observatoire de Paris, 61 avenue de l' Observatoire, 75014 Paris (France)], E-mail: philippe.bouyer@institutoptique.fr

    2009-11-15

    We propose a scheme for testing the weak equivalence principle (universality of free-fall (UFF)) using an atom-interferometric measurement of the local differential acceleration between two atomic species with a large mass ratio as test masses. An apparatus in free fall can be used to track atomic free-fall trajectories over large distances. We show how the differential acceleration can be extracted from the interferometric signal using Bayesian statistical estimation, even in the case of a large mass and laser wavelength difference. We show that this statistical estimation method does not suffer from acceleration noise of the platform and does not require repeatable experimental conditions. We specialize our discussion to a dual potassium/rubidium interferometer and extend our protocol with other atomic mixtures. Finally, we discuss the performance of the UFF test developed for the free-fall (zero-gravity) airplane in the ICE project (http://www.ice-space.fr)

  11. How to estimate the differential acceleration in a two-species atom interferometer to test the equivalence principle

    International Nuclear Information System (INIS)

    Varoquaux, G; Nyman, R A; Geiger, R; Cheinet, P; Bouyer, P; Landragin, A

    2009-01-01

    We propose a scheme for testing the weak equivalence principle (universality of free-fall (UFF)) using an atom-interferometric measurement of the local differential acceleration between two atomic species with a large mass ratio as test masses. An apparatus in free fall can be used to track atomic free-fall trajectories over large distances. We show how the differential acceleration can be extracted from the interferometric signal using Bayesian statistical estimation, even in the case of a large mass and laser wavelength difference. We show that this statistical estimation method does not suffer from acceleration noise of the platform and does not require repeatable experimental conditions. We specialize our discussion to a dual potassium/rubidium interferometer and extend our protocol with other atomic mixtures. Finally, we discuss the performance of the UFF test developed for the free-fall (zero-gravity) airplane in the ICE project (http://www.ice-space.fr).
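    The flavour of the Bayesian extraction described in the abstracts above can be conveyed with a toy model (an illustrative sketch only: the two-fringe response, the scale-factor ratio kappa, the noise level and the flat priors are assumptions, not the authors' actual analysis). Each shot has an unknown common vibration phase; marginalizing over it per shot still lets the differential phase be recovered from many shots:

      import numpy as np

      rng = np.random.default_rng(1)
      phi_d_true = 0.3        # differential phase to recover (rad)
      kappa = 1.05            # assumed ratio of the two interferometer scale factors
      sigma = 0.02            # assumed detection noise on the normalized populations
      n_shots = 300

      # Simulate shots: the common vibration phase is scrambled uniformly over [0, 2*pi)
      phi_vib = rng.uniform(0, 2 * np.pi, n_shots)
      p1 = 0.5 * (1 - np.cos(phi_vib)) + rng.normal(0, sigma, n_shots)
      p2 = 0.5 * (1 - np.cos(kappa * phi_vib + phi_d_true)) + rng.normal(0, sigma, n_shots)

      # Grid-based Bayesian estimate: for each candidate phi_d, marginalize the unknown
      # vibration phase of every shot with a flat prior.
      phi_d_grid = np.linspace(-np.pi, np.pi, 721)
      phi_vib_grid = np.linspace(0, 2 * np.pi, 720, endpoint=False)
      log_post = np.zeros_like(phi_d_grid)

      for i, phi_d in enumerate(phi_d_grid):
          m1 = 0.5 * (1 - np.cos(phi_vib_grid))                  # model fringe, species 1
          m2 = 0.5 * (1 - np.cos(kappa * phi_vib_grid + phi_d))  # model fringe, species 2
          chi2 = ((p1[:, None] - m1) ** 2 + (p2[:, None] - m2) ** 2) / (2 * sigma**2)
          log_post[i] = np.sum(np.log(np.mean(np.exp(-chi2), axis=1) + 1e-300))

      phi_d_est = phi_d_grid[np.argmax(log_post)]
      print(f"estimated differential phase: {phi_d_est:.3f} rad (true {phi_d_true})")

    Note that the common vibration phase never needs to be repeatable from shot to shot, which is the point emphasized in the abstract; with kappa different from 1 the sign ambiguity of a single fringe is also lifted.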

  12. Effective Inertial Frame in an Atom Interferometric Test of the Equivalence Principle

    Science.gov (United States)

    Overstreet, Chris; Asenbaum, Peter; Kovachy, Tim; Notermans, Remy; Hogan, Jason M.; Kasevich, Mark A.

    2018-05-01

    In an ideal test of the equivalence principle, the test masses fall in a common inertial frame. A real experiment is affected by gravity gradients, which introduce systematic errors by coupling to initial kinematic differences between the test masses. Here we demonstrate a method that reduces the sensitivity of a dual-species atom interferometer to initial kinematics by using a frequency shift of the mirror pulse to create an effective inertial frame for both atomic species. Using this method, we suppress the gravity-gradient-induced dependence of the differential phase on initial kinematic differences by 2 orders of magnitude and precisely measure these differences. We realize a relative precision of Δg/g ≈ 6×10⁻¹¹ per shot, which improves on the best previous result for a dual-species atom interferometer by more than 3 orders of magnitude. By reducing gravity gradient systematic errors to one part in 10¹³, these results pave the way for an atomic test of the equivalence principle at an accuracy comparable with state-of-the-art classical tests.

  13. A new electrophoretic focusing principle: focusing of nonamphoteric weak ionogenic analytes using inverse electromigration dispersion profiles.

    Science.gov (United States)

    Gebauer, Petr; Malá, Zdena; Bocek, Petr

    2010-03-01

    This contribution introduces a new separation principle in CE which offers focusing of weak nonamphoteric ionogenic species and their inherent transport to the detector. The prerequisite condition for application of this principle is the existence of an inverse electromigration dispersion profile, i.e. a profile where pH is decreasing toward the anode or cathode for focusing of anionic or cationic weak analytes, respectively. The theory presented defines the principal conditions under which an analyte is focused on a profile of this type. Since electromigration dispersion profiles are migrating ones, the new principle offers inherent transport of focused analytes into the detection cell. The focusing principle described utilizes a mechanism different from both CZE (where separation is based on the difference in mobilities) and IEF (where separation is based on difference in pI), and hence, offers another separation dimension in CE. The new principle and its theory presented here are supplemented by convincing experiments as their proof.

  14. Test of the Equivalence Principle in an Einstein Elevator

    Science.gov (United States)

    Shapiro, Irwin I.; Glashow, S.; Lorenzini, E. C.; Cosmo, M. L.; Cheimets, P. N.; Finkelstein, N.; Schneps, M.

    2004-01-01

    The scientific goal of the experiment is to test the equality of gravitational and inertial mass (i.e., to test the Principle of Equivalence) by measuring the independence of the rate of fall of bodies from their compositions. The measurement is accomplished by measuring the relative displacement (or equivalently acceleration) of two falling bodies of different materials which are the proof masses of a differential accelerometer spinning about a horizontal axis to modulate a possible violation signal. A non-zero differential acceleration appearing at the signal frequency will indicate a violation of the Equivalence Principle. The goal of the experiment is to measure the Eötvös ratio δg/g (differential acceleration/common acceleration) with a targeted accuracy that is about two orders of magnitude better than the state of the art (presently at several parts in 10¹³). The analyses carried out during this first grant year have focused on: (1) evaluation of possible shapes for the proof masses to meet the requirements on the higher-order mass moment disturbances generated by the falling capsule; (2) dynamics of the instrument package and differential acceleration measurement in the presence of errors and imperfections; (3) computation of the inertia characteristics of the instrument package that enable a separation of the signal from the dynamics-related noise; (4) a revised thermal analysis of the instrument package in light of the new conceptual design of the cryostat; (5) the development of a dynamic and control model of the capsule attached to the gondola and balloon to define the requirements for the leveling mechanism; (6) a conceptual design of the leveling mechanism that keeps the capsule aligned before release from the balloon; and (7) a new conceptual design of the customized cryostat and a preliminary evaluation of its cost. The project also involves an international cooperation with the Institute of Space Physics (IFSI) in Rome, Italy. The group at IFSI

  15. Supersymmetric QED at finite temperature and the principle of equivalence

    International Nuclear Information System (INIS)

    Robinett, R.W.

    1985-01-01

    Unbroken supersymmetric QED is examined at finite temperature and it is shown that the scalar and spinor members of a chiral superfield acquire different temperature-dependent inertial masses. By considering the renormalization of the energy-momentum tensor it is also shown that the T-dependent scalar-spinor gravitational masses are also no longer degenerate and, moreover, are different from their T-dependent inertial mass shifts implying a violation of the equivalence principle. The temperature-dependent corrections to the spinor (g-2) are also calculated and found not to vanish

  16. Five-dimensional projective unified theory and the principle of equivalence

    International Nuclear Information System (INIS)

    De Sabbata, V.; Gasperini, M.

    1984-01-01

    We investigate the physical consequences of a new five-dimensional projective theory unifying gravitation and electromagnetism. Solving the field equations in the linear approximation and in the static limit, we find that a celestial body would act as a source of a long-range scalar field, and that macroscopic test bodies with different internal structure would accelerate differently in the solar gravitational field; this seems to be in disagreement with the equivalence principle. To avoid this contradiction, we suggest a possible modification of the geometrical structure of the five-dimensional projective space

  17. Testing the equivalence principle on a trampoline

    Science.gov (United States)

    Reasenberg, Robert D.; Phillips, James D.

    2001-07-01

    We are developing a Galilean test of the equivalence principle in which two pairs of test mass assemblies (TMA) are in free fall in a comoving vacuum chamber for about 0.9 s. The TMA are tossed upward, and the process repeats at 1.2 s intervals. Each TMA carries a solid quartz retroreflector and a payload mass of about one-third of the total TMA mass. The relative vertical motion of the TMA of each pair is monitored by a laser gauge working in an optical cavity formed by the retroreflectors. Single-toss precision of the relative acceleration of a single pair of TMA is 3.5×10⁻¹² g. The project goal of Δg/g = 10⁻¹³ can be reached in a single night's run, but repetition with altered configurations will be required to ensure the correction of systematic error to the nominal accuracy level. Because the measurements can be made quickly, we plan to study several pairs of materials.
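    A quick consistency check of the quoted numbers (a sketch assuming uncorrelated tosses and pure 1/√N averaging, which ignores systematics): reaching the 10⁻¹³ goal from a 3.5×10⁻¹² single-toss precision requires on the order of a thousand tosses, i.e. well under one night at the stated 1.2 s cycle time.

      single_toss = 3.5e-12     # per-toss precision on the relative acceleration
      goal = 1e-13              # project goal
      cycle = 1.2               # seconds between tosses

      n_tosses = (single_toss / goal) ** 2                 # 1/sqrt(N) averaging
      print(f"tosses needed: {n_tosses:.0f}")              # ~1225
      print(f"measurement time: {n_tosses * cycle / 60:.0f} minutes")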

  18. Testing the equivalence principle on cosmological scales

    Science.gov (United States)

    Bonvin, Camille; Fleury, Pierre

    2018-05-01

    The equivalence principle, that is one of the main pillars of general relativity, is very well tested in the Solar system; however, its validity is more uncertain on cosmological scales, or when dark matter is concerned. This article shows that relativistic effects in the large-scale structure can be used to directly test whether dark matter satisfies Euler's equation, i.e. whether its free fall is characterised by geodesic motion, just like baryons and light. After having proposed a general parametrisation for deviations from Euler's equation, we perform Fisher-matrix forecasts for future surveys like DESI and the SKA, and show that such deviations can be constrained with a precision of order 10%. Deviations from Euler's equation cannot be tested directly with standard methods like redshift-space distortions and gravitational lensing, since these observables are not sensitive to the time component of the metric. Our analysis shows therefore that relativistic effects bring new and complementary constraints to alternative theories of gravity.
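    A stripped-down Fisher-matrix forecast of the kind mentioned above might look as follows (every number and the one-parameter observable model are illustrative assumptions, not the survey specifications used by the authors). The deviation from Euler's equation is encoded in a single parameter theta, with theta = 0 in general relativity:

      import numpy as np

      z = np.linspace(0.5, 1.5, 5)               # assumed redshift bins
      f_fid = 0.55 * np.ones_like(z)             # crude fiducial growth-rate-like observable
      sigma_d = 0.2 * f_fid                      # assumed 20% errors per bin

      # Observable model d(z) = f(z) * (1 + theta); derivative at the fiducial theta = 0
      dmu_dtheta = f_fid

      # One-parameter Fisher "matrix" and the forecast 1-sigma error on theta
      F = np.sum((dmu_dtheta / sigma_d) ** 2)
      print(f"forecast error on the deviation parameter: {1 / np.sqrt(F):.2f}")  # ~0.09

    For these made-up inputs the forecast comes out at roughly 0.1, i.e. the order-10% precision quoted in the abstract; real forecasts marginalize over many cosmological parameters rather than a single one.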

  19. On the relativity and equivalence principles in the gauge theory of gravitation

    International Nuclear Information System (INIS)

    Ivanenko, D.; Sardanashvily, G.

    1981-01-01

    The basic ideas of the gauge gravitation theory are still not generally accepted, in spite of more than twenty years of its history. The chief reason lies in the fact that the gauge character of gravity is connected with a whole complex of problems of Einstein's General Relativity: the definition of reference systems, the (3+1)-splitting, the presence (or absence) of symmetries in GR, the necessity (or triviality) of general covariance, and the meaning of the equivalence principle, which led Einstein from Special to General Relativity [1]. The continuing relevance of this complex of interconnected problems is demonstrated by the well-known work of V. Fock, who saw no symmetries in General Relativity, declared the equivalence principle unnecessary, and even proposed to substitute the designation ''chronogeometry'' for ''general relativity'' (see also P. Havas). Developing this line, H. Bondi quite recently also expressed doubts about the ''relativity'' in Einstein's theory of gravitation. All proposed versions of the gauge gravitation theory must clarify the discrepancy between the Einstein gravitational field, which is a pseudo-Riemannian metric field, and the gauge potentials, which represent connections on fiber bundles; there exists no group whose gauging would lead to the purely gravitational part of the connection (the Christoffel symbols or the Fock-Ivanenko-Weyl spinorial coefficients). (author)

  20. Testing Einstein's Equivalence Principle With Fast Radio Bursts.

    Science.gov (United States)

    Wei, Jun-Jie; Gao, He; Wu, Xue-Feng; Mészáros, Peter

    2015-12-31

    The accuracy of Einstein's equivalence principle (EEP) can be tested with the observed time delays between correlated particles or photons that are emitted from astronomical sources. Assuming as a lower limit that the time delays are caused mainly by the gravitational potential of the Milky Way, we prove that fast radio bursts (FRBs) of cosmological origin can be used to constrain the EEP with high accuracy. Taking FRB 110220 and two possible FRB/gamma-ray burst (GRB) association systems (FRB/GRB 101011A and FRB/GRB 100704A) as examples, we obtain a strict upper limit on the differences of the parametrized post-Newtonian parameter γ values as low as [γ(1.23 GHz) - γ(1.45 GHz)] < 4.36×10⁻⁹. This provides the most stringent limit up to date on the EEP through the relative differential variations of the γ parameter at radio energies, improving by 1 to 2 orders of magnitude the previous results at other energies based on supernova 1987A and GRBs.

  1. Dark energy and equivalence principle constraints from astrophysical tests of the stability of the fine-structure constant

    Energy Technology Data Exchange (ETDEWEB)

    Martins, C.J.A.P.; Pinho, A.M.M.; Alves, R.F.C. [Centro de Astrofísica da Universidade do Porto, Rua das Estrelas, 4150-762 Porto (Portugal); Pino, M. [Institut Domènech i Montaner, C/Maspujols 21-23, 43206 Reus (Spain); Rocha, C.I.S.A. [Externato Ribadouro, Rua de Santa Catarina 1346, 4000-447 Porto (Portugal); Wietersheim, M. von, E-mail: Carlos.Martins@astro.up.pt, E-mail: Ana.Pinho@astro.up.pt, E-mail: up201106579@fc.up.pt, E-mail: mpc_97@yahoo.com, E-mail: cisar97@hotmail.com, E-mail: maxivonw@gmail.com [Institut Manuel Sales i Ferré, Avinguda de les Escoles 6, 43550 Ulldecona (Spain)

    2015-08-01

    Astrophysical tests of the stability of fundamental couplings, such as the fine-structure constant α, are becoming an increasingly powerful probe of new physics. Here we discuss how these measurements, combined with local atomic clock tests and Type Ia supernova and Hubble parameter data, constrain the simplest class of dynamical dark energy models where the same degree of freedom is assumed to provide both the dark energy and (through a dimensionless coupling, ζ, to the electromagnetic sector) the α variation. Specifically, current data tightly constrains a combination of ζ and the present dark energy equation of state w₀. Moreover, in these models the new degree of freedom inevitably couples to nucleons (through the α dependence of their masses) and leads to violations of the Weak Equivalence Principle. We obtain indirect bounds on the Eötvös parameter η that are typically stronger than the current direct ones. We discuss the model-dependence of our results and briefly comment on how the forthcoming generation of high-resolution ultra-stable spectrographs will enable significantly tighter constraints.

  2. Dark energy and equivalence principle constraints from astrophysical tests of the stability of the fine-structure constant

    International Nuclear Information System (INIS)

    Martins, C.J.A.P.; Pinho, A.M.M.; Alves, R.F.C.; Pino, M.; Rocha, C.I.S.A.; Wietersheim, M. von

    2015-01-01

    Astrophysical tests of the stability of fundamental couplings, such as the fine-structure constant α, are becoming an increasingly powerful probe of new physics. Here we discuss how these measurements, combined with local atomic clock tests and Type Ia supernova and Hubble parameter data, constrain the simplest class of dynamical dark energy models where the same degree of freedom is assumed to provide both the dark energy and (through a dimensionless coupling, ζ, to the electromagnetic sector) the α variation. Specifically, current data tightly constrains a combination of ζ and the present dark energy equation of state w 0 . Moreover, in these models the new degree of freedom inevitably couples to nucleons (through the α dependence of their masses) and leads to violations of the Weak Equivalence Principle. We obtain indirect bounds on the Eötvös parameter η that are typically stronger than the current direct ones. We discuss the model-dependence of our results and briefly comment on how the forthcoming generation of high-resolution ultra-stable spectrographs will enable significantly tighter constraints

  3. The kernel G1(x,x') and the quantum equivalence principle

    International Nuclear Information System (INIS)

    Ceccatto, H.; Foussats, A.; Giacomini, H.; Zandron, O.

    1981-01-01

    In this paper the formulation of the quantum equivalence principle (QEP) is re-examined, and its compatibility with the conditions which must be fulfilled by the kernel G₁(x,x') is discussed. The basis of solutions which gives the particle model in a curved space-time is also determined in terms of Cauchy data for such a kernel. Finally, the creation of particles in this model is analyzed by studying the time evolution of creation and annihilation operators. This method is an alternative to one that uses Bogoliubov's transformation as a mechanism of creation. (author)

  4. The short-circuit concept used in field equivalence principles

    DEFF Research Database (Denmark)

    Appel-Hansen, Jørgen

    1990-01-01

    In field equivalence principles, electric and magnetic surface currents are specified and considered as impressed currents. Often the currents are placed on perfect conductors. It is shown that these currents can be treated through two approaches. The first approach is decomposition of the total...... field into partial fields caused by the individual impressed currents. When this approach is used, it is shown that, on a perfect electric (magnetic) conductor, impressed electric (magnetic) surface currents are short-circuited. The second approach is to note that, since Maxwell's equations...... and the boundary conditions are satisfied, none of the impressed currents is short-circuited and no currents are induced on the perfect conductors. Since all currents and field quantities are considered at the same time, this approach is referred to as the total-field approach. The partial-field approach leads...

  5. Tidal tails test the equivalence principle in the dark-matter sector

    International Nuclear Information System (INIS)

    Kesden, Michael; Kamionkowski, Marc

    2006-01-01

    Satellite galaxies currently undergoing tidal disruption offer a unique opportunity to constrain an effective violation of the equivalence principle in the dark sector. While dark matter in the standard scenario interacts solely through gravity on large scales, a new long-range force between dark-matter particles may naturally arise in theories in which the dark matter couples to a light scalar field. An inverse-square-law force of this kind would manifest itself as a violation of the equivalence principle in the dynamics of dark matter compared to baryons in the form of gas or stars. In a previous paper, we showed that an attractive force would displace stars outwards from the bottom of the satellite's gravitational potential well, leading to a higher fraction of stars being disrupted from the tidal bulge further from the Galactic center. Since stars disrupted from the far (near) side of the satellite go on to form the trailing (leading) tidal stream, an attractive dark-matter force will produce a relative enhancement of the trailing stream compared to the leading stream. This distinctive signature of a dark-matter force might be detected through detailed observations of the tidal tails of a disrupting satellite, such as those recently performed by the Two-Micron All-Sky Survey (2MASS) and Sloan Digital Sky Survey (SDSS) on the Sagittarius (Sgr) dwarf galaxy. Here we show that this signature is robust to changes in our models for both the satellite and Milky Way, suggesting that we might hope to search for a dark-matter force in the tidal features of other recently discovered satellite galaxies in addition to the Sgr dwarf

  6. Testing Einstein's Equivalence Principle With Fast Radio Bursts

    Science.gov (United States)

    Wei, Jun-Jie; Gao, He; Wu, Xue-Feng; Mészáros, Peter

    2015-12-01

    The accuracy of Einstein's equivalence principle (EEP) can be tested with the observed time delays between correlated particles or photons that are emitted from astronomical sources. Assuming as a lower limit that the time delays are caused mainly by the gravitational potential of the Milky Way, we prove that fast radio bursts (FRBs) of cosmological origin can be used to constrain the EEP with high accuracy. Taking FRB 110220 and two possible FRB/gamma-ray burst (GRB) association systems (FRB/GRB 101011A and FRB/GRB 100704A) as examples, we obtain a strict upper limit on the differences of the parametrized post-Newtonian parameter γ values as low as [γ(1.23 GHz) - γ(1.45 GHz)] < 4.36×10⁻⁹. This provides the most stringent limit up to date on the EEP through the relative differential variations of the γ parameter at radio energies, improving by 1 to 2 orders of magnitude the previous results at other energies based on supernova 1987A and GRBs.
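    The logic of the bound can be sketched numerically (an order-of-magnitude toy only: the Milky Way is modelled as a point mass, and the mass, distances, impact parameter and residual delay are assumed values, not those of the paper). The Shapiro delay of each photon scales with (1 + γ), so a residual delay between two bands bounds Δγ:

      import math

      G = 6.674e-11            # m^3 kg^-1 s^-2
      c = 2.998e8              # m/s
      M_sun = 1.989e30         # kg
      kpc = 3.086e19           # m

      M_MW = 6e11 * M_sun      # assumed Milky Way mass
      d = 1e6 * kpc            # assumed source distance (~1 Gpc)
      b = 10 * kpc             # assumed impact parameter w.r.t. the Galactic centre
      delta_t_obs = 0.1        # s, assumed residual delay between the two radio bands

      # Point-mass Shapiro delay up to O(1) geometric factors:
      # t_gra ~ (1 + gamma) * (G*M/c^3) * ln(d/b), so Delta-gamma <~ delta_t / [(G*M/c^3) * ln(d/b)]
      t_unit = (G * M_MW / c**3) * math.log(d / b)
      print(f"Delta-gamma <~ {delta_t_obs / t_unit:.1e}")   # a few times 1e-9 for these inputs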

  7. Ecological aspects of the radiation-migration equivalence principle in a closed fuel cycle and its comparative assessment with the ALARA principle

    International Nuclear Information System (INIS)

    Poluektov, P.P.; Lopatkin, A.V.; Nikipelov, B.V.; Rachkov, V.I.; Sukhanov, L.P.; Voloshin, S.V.

    2005-01-01

    The errors and uncertainties arising in the determination of radionuclide escape from the RW burial require the use of extremely conservative estimates. In the limit, the nuclide concentrations in the waste may be used as estimates of their concentrations in underground waters. On this basis, it is possible to evaluate the corresponding radio-toxicities (by normalizing to the interference levels) of individual components and radioactive waste as a whole, or the effective radio-toxicities (by dividing the radionuclide radio-toxicities by the retardation factors for the nuclide transfer with underground waters). This completely coincides with the procedure of performing the limiting conservative estimate according to the traditional approach with the use of scenarios, escape models, and the corresponding codes. A comparison of radio-toxicities for waste with those for the natural uranium consumed for producing the required fuel results in the notion of radiation-migration equivalence for individual waste components and radioactive waste as a whole. Therefore, the radiation-migration equivalence corresponds to the limiting conservative estimate in the traditional approach to the determination of RW disposal safety in comparison with the radiotoxicity of natural uranium. The amounts of radionuclides in fragments (and actinides) and the corresponding weight of heavy metal in the fuel are compared with due regard for the hazard (according to the NRB-99 standards), the nuclide mobility (through the sorption retardation factors), the retention of radioactive waste by the solid matrix, and the contribution from the chains of uranium fission products. It was noted above that the RME principle is aimed at ensuring the radiological safety of the present and future generations and the environment through the minimization of radioactive waste upon reprocessing. This is accompanied by reaching a reasonably achievable, low level of radiological impact in the context of modern science, i

  8. Determination of dose equivalent with tissue-equivalent proportional counters

    International Nuclear Information System (INIS)

    Dietze, G.; Schuhmacher, H.; Menzel, H.G.

    1989-01-01

    Low pressure tissue-equivalent proportional counters (TEPC) are instruments based on the cavity chamber principle and provide spectral information on the energy loss of single charged particles crossing the cavity. Hence such detectors measure absorbed dose or kerma and are able to provide estimates on radiation quality. During recent years TEPC based instruments have been developed for radiation protection applications in photon and neutron fields. This was mainly based on the expectation that the energy dependence of their dose equivalent response is smaller than that of other instruments in use. Recently, such instruments have been investigated by intercomparison measurements in various neutron and photon fields. Although their principles of measurements are more closely related to the definition of dose equivalent quantities than those of other existing dosemeters, there are distinct differences and limitations with respect to the irradiation geometry and the determination of the quality factor. The application of such instruments for measuring ambient dose equivalent is discussed. (author)

  9. Null result for violation of the equivalence principle with free-fall rotating gyroscopes

    International Nuclear Information System (INIS)

    Luo, J.; Zhou, Z.B.; Nie, Y.X.; Zhang, Y.Z.

    2002-01-01

    The differential acceleration between a rotating mechanical gyroscope and a nonrotating one is directly measured by using a double free-fall interferometer, and no apparent differential acceleration has been observed at the relative level of 2×10^-6. It means that the equivalence principle is still valid for rotating extended bodies, i.e., the spin-gravity interaction between the extended bodies has not been observed at this level. Also, to the limit of our experimental sensitivity, there is no observed asymmetrical effect or antigravity of the rotating gyroscopes as reported by Hayasaka et al

  10. Discrete maximum principle for the P1 - P0 weak Galerkin finite element approximations

    Science.gov (United States)

    Wang, Junping; Ye, Xiu; Zhai, Qilong; Zhang, Ran

    2018-06-01

    This paper presents two discrete maximum principles (DMP) for the numerical solution of second order elliptic equations arising from the weak Galerkin finite element method. The results are established by assuming an h-acute angle condition for the underlying finite element triangulations. The mathematical theory is based on the well-known De Giorgi technique adapted in the finite element context. Some numerical results are reported to validate the theory of DMP.
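    As a point of reference (a generic statement of a discrete maximum principle, not the paper's precise theorem), for the model problem -Δu = f with f ≤ 0 a DMP requires the numerical solution to attain its maximum on the boundary of the domain,

        \max_{x \in \overline{\Omega}} u_h(x) \;=\; \max_{x \in \partial\Omega} u_h(x) \quad \text{whenever } f \le 0 \text{ in } \Omega ,

    mirroring the continuous maximum principle; the h-acute angle condition on the triangulation mentioned above is the mesh assumption under which the authors establish this property for the weak Galerkin scheme.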

  11. Testing the Equivalence Principle and Lorentz Invariance with PeV Neutrinos from Blazar Flares.

    Science.gov (United States)

    Wang, Zi-Yi; Liu, Ruo-Yu; Wang, Xiang-Yu

    2016-04-15

    It was recently proposed that a giant flare of the blazar PKS B1424-418 at redshift z=1.522 is in association with a PeV-energy neutrino event detected by IceCube. Based on this association we here suggest that the flight time difference between the PeV neutrino and gamma-ray photons from blazar flares can be used to constrain violations of the equivalence principle and of Lorentz invariance for neutrinos. From the calculated Shapiro delay due to clusters or superclusters in the nearby universe, we find that violation of the equivalence principle for neutrinos and photons is constrained to an accuracy of at least 10^{-5}, which is 2 orders of magnitude tighter than the constraint placed by MeV neutrinos from supernova 1987A. Lorentz invariance violation (LIV) arises in various quantum-gravity theories, which predict an energy-dependent velocity of propagation in vacuum for particles. We find that the association of the PeV neutrino with the gamma-ray outburst sets limits on the energy scale of possible LIV to >0.01E_{pl} for linear LIV models and >6×10^{-8}E_{pl} for quadratic order LIV models, where E_{pl} is the Planck energy scale. These are the most stringent constraints on neutrino LIV for subluminal neutrinos.
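    For context (a commonly used lowest-order parametrization in the LIV literature, sketched here for a subluminal correction and not quoted from the paper), the energy-dependent propagation speed and the resulting time delay over a distance D are often written as

        v(E) \simeq c\left[1 - \frac{n+1}{2}\left(\frac{E}{E_{\rm QG,n}}\right)^{n}\right] ,
        \qquad
        \Delta t_{\rm LIV} \simeq \frac{n+1}{2}\,\frac{E^{n}}{E_{\rm QG,n}^{n}}\,\frac{D}{c} ,

    with n = 1 (linear) or n = 2 (quadratic) and E_{\rm QG,n} the quantum-gravity energy scale; cosmological redshift factors are omitted in this sketch.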

  12. Some algorithms for reordering a sequence of objects, with application to E. Sparre Andersen's principle of equivalence in mathematical statistics

    NARCIS (Netherlands)

    Bruijn, de N.G.

    1972-01-01

    Recently A. W. Joseph described an algorithm providing combinatorial insight into E. Sparre Andersen's so-called Principle of Equivalence in mathematical statistics. In the present paper such algorithms are discussed systematically.

  13. A homogeneous static gravitational field and the principle of equivalence

    International Nuclear Information System (INIS)

    Chernikov, N.A.

    2001-01-01

    In this paper any gravitational field (both in the Einsteinian case and in the Newtonian case) is described by the connection, called gravitational. A homogeneous static gravitational field is considered in the four-dimensional area z>0 of a space-time with Cartesian coordinates x, y, z, and t. Such a field can be created by masses disposed outside the area z>0 with a density distribution independent of x, y, and t. Remarkably, in the four-dimensional area z>0, the primitive gravitational connection has been derived together with the primitive background connection. In concordance with the Principle of Equivalence, all components of such a gravitational connection are equal to zero in the uniformly accelerated frame system, in which the gravitational force of attraction is balanced by the inertial force. However, all components of such a background connection are equal to zero in the resting frame system, but not in the accelerated frame system

  14. Violations of the equivalence principle in a dilaton-runaway scenario

    CERN Document Server

    Damour, Thibault Marie Alban Guillaume; Veneziano, Gabriele

    2002-01-01

    We explore a version of the cosmological dilaton-fixing and decoupling mechanism in which the dilaton-dependence of the low-energy effective action is extremized for infinitely large values of the bare string coupling $g_s^2 = e^{\\phi}$. We study the efficiency with which the dilaton $\\phi$ runs away towards its ``fixed point'' at infinity during a primordial inflationary stage, and thereby approximately decouples from matter. The residual dilaton couplings are found to be related to the amplitude of the density fluctuations generated during inflation. For the simplest inflationary potential, $V (\\chi) = {1/2} m_{\\chi}^2 (\\phi) \\chi^2$, the residual dilaton couplings are shown to predict violations of the universality of gravitational acceleration near the $\\Delta a / a \\sim 10^{-12}$ level. This suggests that a modest improvement in the precision of equivalence principle tests might be able to detect the effect of such a runaway dilaton. Under some assumptions about the coupling of the dilaton to dark matter...

  15. Induction, bounding, weak combinatorial principles, and the homogeneous model theorem

    CERN Document Server

    Hirschfeldt, Denis R; Shore, Richard A

    2017-01-01

    Goncharov and Peretyat'kin independently gave necessary and sufficient conditions for when a set of types of a complete theory T is the type spectrum of some homogeneous model of T. Their result can be stated as a principle of second order arithmetic, which is called the Homogeneous Model Theorem (HMT), and analyzed from the points of view of computability theory and reverse mathematics. Previous computability theoretic results by Lange suggested a close connection between HMT and the Atomic Model Theorem (AMT), which states that every complete atomic theory has an atomic model. The authors show that HMT and AMT are indeed equivalent in the sense of reverse mathematics, as well as in a strong computability theoretic sense and do the same for an analogous result of Peretyat'kin giving necessary and sufficient conditions for when a set of types is the type spectrum of some model.

  16. Weak Galilean invariance as a selection principle for coarse-grained diffusive models.

    Science.gov (United States)

    Cairoli, Andrea; Klages, Rainer; Baule, Adrian

    2018-05-29

    How does the mathematical description of a system change in different reference frames? Galilei first addressed this fundamental question by formulating the famous principle of Galilean invariance. It prescribes that the equations of motion of closed systems remain the same in different inertial frames related by Galilean transformations, thus imposing strong constraints on the dynamical rules. However, real world systems are often described by coarse-grained models integrating complex internal and external interactions indistinguishably as friction and stochastic forces. Since Galilean invariance is then violated, there is seemingly no alternative principle to assess a priori the physical consistency of a given stochastic model in different inertial frames. Here, starting from the Kac-Zwanzig Hamiltonian model generating Brownian motion, we show how Galilean invariance is broken during the coarse-graining procedure when deriving stochastic equations. Our analysis leads to a set of rules characterizing systems in different inertial frames that have to be satisfied by general stochastic models, which we call "weak Galilean invariance." Several well-known stochastic processes are invariant in these terms, except the continuous-time random walk for which we derive the correct invariant description. Our results are particularly relevant for the modeling of biological systems, as they provide a theoretical principle to select physically consistent stochastic models before a validation against experimental data.

  17. [Equivalent Lever Principle of Ossicular Chain and Amplitude Reduction Effect of Internal Ear Lymph].

    Science.gov (United States)

    Zhao, Xiaoyan; Qin, Renjia

    2015-04-01

    This paper examines some problems in the treatment of the human ear's sound-transmission principle in existing physiological textbooks and reference books, and puts forward the authors' view in order to supplement that literature. Applying the physics of the lever and acoustic theory, we propose an equivalent simplified model of the manubrium mallei that meets the requirements of the long arm of the lever. We also set up an equivalent simplified model of the ossicular chain as a combination of levers, disassemble the model into two simple levers, and analyze and discuss them in detail. By calculating and comparing the displacement amplitudes in the air of the external auditory canal and in the inner-ear lymph, we conclude that the key reason the sound displacement amplitude is decreased, so as to adapt to the endurance limit of the basilar membrane, is that the density and sound speed in lymph are much higher than those in air.

  18. Significance and principles of the calculation of the effective dose equivalent for radiological protection of personnel and patients

    International Nuclear Information System (INIS)

    Drexler, G.; Williams, G.

    1985-01-01

    The application of the effective dose equivalent concept, H_E, to radiological protection assessments of occupationally exposed persons is justified by the practicability thereby achieved with regard to the limiting principles. Nevertheless, it would be more logical to continue using the real physical dose equivalent of homogeneous whole-body exposure as the basic limiting quantity and, for inhomogeneous whole-body irradiation, the H_E value calculated by means of the effective dose equivalent concept; the required concepts, models and calculations would then not be tied to a basic radiation protection quantity. Application of the effective dose equivalent to radiation protection assessments for patients is misleading and is not practical with regard to assessing an individual or collective radiation risk of patients. The quantity of expected harm would be better suited to this purpose. There is no need to express the radiation risk by a dose quantity, which amounts to careless handling of good information. (orig./WU) [de

  19. Gravitational quadrupolar coupling to equivalence principle test masses: the general case

    CERN Document Server

    Lockerbie, N A

    2002-01-01

    This paper discusses the significance of the quadrupolar gravitational force in the context of test masses destined for use in equivalence principle (EP) experiments, such as STEP and MICROSCOPE. The relationship between quadrupolar gravity and rotational inertia for an arbitrary body is analysed, and the special, gravitational, role of a body's principal axes of inertia is revealed. From these considerations the gravitational quadrupolar force acting on a cylindrically symmetrical body, due to a point-like attracting source mass, is derived in terms of the body's mass quadrupole tensor. The result is shown to be in agreement with that obtained from MacCullagh's formula (as the starting point). The theory is then extended to cover the case of a completely arbitrary solid body, and a compact formulation for the quadrupolar force on such a body is derived. A numerical example of a dumb-bell's attraction to a local point-like gravitational source is analysed using this theory. Close agreement is found between th...

  20. Reducing Weak to Strong Bisimilarity in CCP

    Directory of Open Access Journals (Sweden)

    Andrés Aristizábal

    2012-12-01

    Concurrent constraint programming (ccp) is a well-established model for concurrency that singles out the fundamental aspects of asynchronous systems whose agents (or processes) evolve by posting and querying (partial) information in a global medium. Bisimilarity is a standard behavioural equivalence in concurrency theory. However, only recently have a well-behaved notion of bisimilarity for ccp, and a ccp partition refinement algorithm for deciding the strong version of this equivalence, been proposed. Weak bisimilarity is a central behavioural equivalence in process calculi and it is obtained from the strong case by taking into account only the actions that are observable in the system. Typically, the standard partition refinement can also be used for deciding weak bisimilarity simply by using Milner's reduction from weak to strong bisimilarity, a technique referred to as saturation. In this paper we demonstrate that, because of its involved labeled transitions, the above-mentioned saturation technique does not work for ccp. We give an alternative reduction from weak ccp bisimilarity to the strong one that allows us to use the ccp partition refinement algorithm for deciding this equivalence.

  1. Equivalent Lagrangians

    International Nuclear Information System (INIS)

    Hojman, S.

    1982-01-01

    We present a review of the inverse problem of the Calculus of Variations, emphasizing the ambiguities which appear due to the existence of equivalent Lagrangians for a given classical system. In particular, we analyze the properties of equivalent Lagrangians in the multidimensional case, we study the conditions for the existence of a variational principle for (second as well as first order) equations of motion and their solutions, we consider the inverse problem of the Calculus of Variations for singular systems, we state the ambiguities which emerge in the relationship between symmetries and conserved quantities in the case of equivalent Lagrangians, we discuss the problems which appear in trying to quantize classical systems which have different equivalent Lagrangians, we describe the situation which arises in the study of equivalent Lagrangians in field theory and finally, we present some unsolved problems and discussion topics related to the content of this article. (author)

  2. Expanded solar-system limits on violations of the equivalence principle

    International Nuclear Information System (INIS)

    Overduin, James; Mitcham, Jack; Warecki, Zoey

    2014-01-01

    Most attempts to unify general relativity with the standard model of particle physics predict violations of the equivalence principle associated in some way with the composition of the test masses. We test this idea by using observational uncertainties in the positions and motions of solar-system bodies to set upper limits on the relative difference Δ between gravitational and inertial mass for each body. For suitable pairs of objects, it is possible to constrain three different linear combinations of Δ using Kepler's third law, the migration of stable Lagrange points, and orbital polarization (the Nordtvedt effect). Limits of order 10^-10–10^-6 on Δ for individual bodies can then be derived from planetary and lunar ephemerides, Cassini observations of the Saturn system, and observations of Jupiter's Trojan asteroids as well as recently discovered Trojan companions around the Earth, Mars, Neptune, and Saturnian moons. These results can be combined with models for elemental abundances in each body to test for composition-dependent violations of the universality of free fall in the solar system. The resulting limits are weaker than those from laboratory experiments, but span a larger volume in composition space. (paper)
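    To illustrate the Kepler's-third-law test mentioned above (a first-order sketch under the stated definition of Δ, not the authors' full treatment): if a planet's gravitational-to-inertial mass ratio is 1 + Δ_p, its acceleration toward the Sun is rescaled by the same factor, so its mean motion n and semimajor axis a obey

        n^2 a^3 \simeq G M_\odot \left(1 + \Delta_p\right) ,

    and comparing the measured n^2 a^3 of two bodies of different composition therefore bounds the difference of their Δ values.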

  3. Formal structures, the concepts of covariance, invariance, equivalent reference frames, and the principle Relativity

    Science.gov (United States)

    Rodrigues, W. A.; Scanavini, M. E. F.; de Alcantara, L. P.

    1990-02-01

    In this paper a given spacetime theory T is characterized as the theory of a certain species of structure in the sense of Bourbaki [1]. It is then possible to clarify in a rigorous way the concepts of passive and active covariance of T under the action of the manifold mapping group G_M. For each T, we also define an invariance group G_I^T and, in general, G_I^T ≠ G_M. This group is defined once we realize that, for each τ ∈ Mod T, each explicit geometrical object defining the structure can be classified as absolute or dynamical [2]. All spacetime theories also possess implicit geometrical objects that do not appear explicitly in the structure. These implicit objects are neither absolute nor dynamical. Among them there are the reference frame fields, i.e., “timelike” vector fields X ∈ TU, U ⊆ M, where M is a manifold which is part of ST, a substructure for each τ ∈ Mod T, called spacetime. We give a physically motivated definition of equivalent reference frames and introduce the concept of the equivalence group of a class of reference frames of kind X according to T, G_X^T. We define that T admits a weak principle of relativity (WPR) only if G_X^T ≠ identity for some X. If G_X^T = G_I^T for some X, we say that T admits a strong principle of relativity (PR). The results of this paper generalize and clarify several results obtained by Anderson [2], Scheibe [3], Hiskes [4], Recami and Rodrigues [5], Friedman [6], Fock [7], and Scanavini [8]. Among the novelties here, there is the realization that the definitions of G_I^T and G_X^T can be given only when certain boundary conditions for the equations of motion of T can be physically realized in the domain U ⊆ M where a given reference frame is defined. The existence of physically realizable boundary conditions for each τ ∈ Mod T (in ∂U), in contrast with the mathematically possible boundary conditions, is then seen to be essential for the validity of a principle of relativity for T

  4. Is weak violation of the Pauli principle possible?

    International Nuclear Information System (INIS)

    Ignat'ev, A.Yu.; Kuz'min, V.A.

    1987-01-01

    The question considered in this work is whether there are models which can account for a small violation of the Pauli principle. A simple algebra is constructed for the creation-annihilation operators, which contains a parameter β and describes a small violation of the Pauli principle (the Pauli principle is valid exactly for β=0). The commutation relations in this algebra are trilinear. A model is presented, based upon this commutator algebra, which allows transitions violating the Pauli principle, their probability being suppressed by a factor of β² (even though the Hamiltonian does not contain small parameters)

  5. Equivalent Dynamic Models.

    Science.gov (United States)

    Molenaar, Peter C M

    2017-01-01

    Equivalences of two classes of dynamic models for weakly stationary multivariate time series are discussed: dynamic factor models and autoregressive models. It is shown that exploratory dynamic factor models can be rotated, yielding an infinite set of equivalent solutions for any observed series. It also is shown that dynamic factor models with lagged factor loadings are not equivalent to the currently popular state-space models, and that restriction of attention to the latter type of models may yield invalid results. The known equivalent vector autoregressive model types, standard and structural, are given a new interpretation in which they are conceived of as the extremes of an innovating type of hybrid vector autoregressive models. It is shown that consideration of hybrid models solves many problems, in particular with Granger causality testing.
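    For concreteness (standard textbook forms of the two model classes, written here as background rather than quoted from the article), the representations being compared can be sketched as

        y_t = \sum_{u=0}^{s} \Lambda_u\, \eta_{t-u} + \varepsilon_t \qquad \text{(dynamic factor model with lagged loadings)} ,
        \qquad
        y_t = \sum_{u=1}^{p} A_u\, y_{t-u} + u_t \qquad \text{(vector autoregressive model)} ,

    where the η_t are latent factor series; the rotation indeterminacy noted above concerns the loadings Λ_u, and the equivalences in question concern when one representation can be rewritten as the other.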

  6. Risk measurement with equivalent utility principles

    NARCIS (Netherlands)

    Denuit, M.; Dhaene, J.; Goovaerts, M.; Kaas, R.; Laeven, R.

    2006-01-01

    Risk measures have been studied for several decades in the actuarial literature, where they appeared under the guise of premium calculation principles. Risk measures and properties that risk measures should satisfy have recently received considerable attention in the financial mathematics

  7. Einstein's Elevator in Class: A Self-Construction by Students for the Study of the Equivalence Principle

    Science.gov (United States)

    Kapotis, Efstratios; Kalkanis, George

    2016-10-01

    According to the principle of equivalence, it is impossible to distinguish between gravity and inertial forces that a noninertial observer experiences in his own frame of reference. For example, let's consider an elevator in space that is being accelerated in one direction. An observer inside it would feel as if there was gravity force pulling him toward the opposite direction. The same holds for a person in a stationary elevator located in Earth's gravitational field. No experiment enables us to distinguish between the accelerating elevator in space and the motionless elevator near Earth's surface. Strictly speaking, when the gravitational field is non-uniform (like Earth's), the equivalence principle holds only for experiments in elevators that are small enough and that take place over a short enough period of time (Fig. 1). However, performing an experiment in an elevator in space is impractical. On the other hand, it is easy to combine both forces on the same observer, i.e., gravity and a fictitious inertial force due to acceleration. Imagine an observer in an elevator that falls freely within Earth's gravitational field. The observer experiences gravity pulling him down while it might be said that the inertial force due to gravity acceleration g pulls him up. Gravity and inertial force cancel each other, (mis)leading the observer to believe there is no gravitational field. This study outlines our implementation of a self-construction idea that we have found useful in teaching introductory physics students (undergraduate, non-majors).

  8. Quantum mechanics from an equivalence principle

    International Nuclear Information System (INIS)

    Faraggi, A.E.

    1997-01-01

    The authors show that requiring diffeomorphic equivalence for one-dimensional stationary states implies that the reduced action S 0 satisfies the quantum Hamilton-Jacobi equation with the Planck constant playing the role of a covariantizing parameter. The construction shows the existence of a fundamental initial condition which is strictly related to the Moebius symmetry of the Legendre transform and to its involutive character. The universal nature of the initial condition implies the Schroedinger equation in any dimension
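    As background for the statement above (the quantum stationary Hamilton-Jacobi equation written in a commonly quoted form; the exact conventions here are our assumption rather than taken from the paper), the reduced action S_0 obeys

        \frac{1}{2m}\left(\frac{\partial S_0}{\partial q}\right)^2 + V(q) - E + \frac{\hbar^2}{4m}\{S_0; q\} = 0 ,

    where \{S_0; q\} denotes a Schwarzian-derivative term; the \hbar^2 contribution plays the role of the covariantizing correction referred to in the abstract and disappears in the classical limit.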

  9. Decompositional equivalence: A fundamental symmetry underlying quantum theory

    OpenAIRE

    Fields, Chris

    2014-01-01

    Decompositional equivalence is the principle that there is no preferred decomposition of the universe into subsystems. It is shown here, by using simple thought experiments, that quantum theory follows from decompositional equivalence together with Landauer's principle. This demonstration raises within physics a question previously left to psychology: how do human - or any - observers agree about what constitutes a "system of interest"?

  10. Development of dose equivalent meters based on microdosimetric principles

    International Nuclear Information System (INIS)

    Booz, J.

    1984-01-01

    In this paper, the employment of microdosimetric dose-equivalent meters in radiation protection is described, considering the advantages of introducing microdosimetric methods into radiation protection, the technical suitability of such instruments for measuring dose equivalent, and finally technical requirements, constraints and solutions together with some examples of instruments and experimental results. The advantage of microdosimetric methods in radiation protection is illustrated with the evaluation of dose-mean quality factors in radiation fields of unknown composition and with the methods of evaluating neutron- and gamma-dose fractions. - It is shown that there is good correlation between the dose-mean lineal energy, ȳ_D, and the ICRP quality factor. - Neutron- and gamma-dose fractions of unknown radiation fields can be evaluated with microdosimetric proportional counters without recourse to other instruments and methods. The problems of separation are discussed. The technical suitability of microdosimetric instruments for measuring dose equivalent is discussed considering the energy response to neutrons and photons and the sensitivity in terms of dose-equivalent rate. Then, considering technical requirements, constraints, and solutions, the problem of the large dynamic range in LET, the large dynamic range in pulse rate, the geometry of the sensitive volume and electrodes, the evaluation of dose-mean quality factors, calibration methods, and uncertainties are discussed. (orig.)
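    For reference (standard microdosimetric definitions, included here as background rather than quoted from the paper), the dose-mean lineal energy and the dose equivalent estimated from a TEPC spectrum are

        \bar{y}_D = \frac{\int y^2 f(y)\, dy}{\int y\, f(y)\, dy} , \qquad H = \bar{Q}\, D ,

    where f(y) is the measured distribution of lineal energy y, D is the absorbed dose, and \bar{Q} is a mean quality factor inferred from the measured spectrum; the correlation noted above relates \bar{y}_D to the ICRP quality factor.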

  11. Conditions needed to give meaning to rad-equivalence principle

    International Nuclear Information System (INIS)

    Latarjet, R.

    1980-01-01

    To legislate on mutagenic chemical pollution, the problem to be faced is similar to that tackled about 30 years ago regarding pollution by ionizing radiations. It would be useful to benefit from the work of these 30 years by establishing equivalences, if possible, between chemical mutagens and radiations. Unavoidable mutagenic pollution is considered here, especially that associated with fuel-based energy production. As with radiations, the legislation must derive from a compromise between the harmful and beneficial effects of the polluting system. When deciding on tolerance doses it is necessary to safeguard the biosphere without inflicting excessive restrictions on industry and on the economy. The present article discusses the conditions needed to give meaning to the notion of rad-equivalence. Some examples of already established equivalences are given, together with the first practical consequences which emerge [fr

  12. Introduction to weak interactions

    International Nuclear Information System (INIS)

    Leite Lopes, J.

    An account is first given of the electromagnetic interactions of complex scalar, vector and spinor fields. It is shown that the electromagnetic field may be considered as a gauge field. Yang-Mills fields and the field theory invariant with respect to the non-Abelian gauge transformation group are then described. The construction, owing to this invariance principle, of conserved isospin currents associated with gauge fields is also demonstrated. This is followed by a historical survey of the development of the weak interaction theory, established at first to describe beta disintegration processes by analogy with electrodynamics. The various stages are mentioned, from the discovery of principles and rules and violations of principles, such as those of invariance with respect to spatial reflection and charge conjugation, to the formulation of the effective current-current Lagrangian and research on the structure of weak currents [fr

  13. The Principle of General Tovariance

    Science.gov (United States)

    Heunen, C.; Landsman, N. P.; Spitters, B.

    2008-06-01

    We tentatively propose two guiding principles for the construction of theories of physics, which should be satisfied by a possible future theory of quantum gravity. These principles are inspired by those that led Einstein to his theory of general relativity, viz. his principle of general covariance and his equivalence principle, as well as by the two mysterious dogmas of Bohr's interpretation of quantum mechanics, i.e. his doctrine of classical concepts and his principle of complementarity. An appropriate mathematical language for combining these ideas is topos theory, a framework earlier proposed for physics by Isham and collaborators. Our principle of general tovariance states that any mathematical structure appearing in the laws of physics must be definable in an arbitrary topos (with natural numbers object) and must be preserved under so-called geometric morphisms. This principle identifies geometric logic as the mathematical language of physics and restricts the constructions and theorems to those valid in intuitionism: neither Aristotle's principle of the excluded third nor Zermelo's Axiom of Choice may be invoked. Subsequently, our equivalence principle states that any algebra of observables (initially defined in the topos Sets) is empirically equivalent to a commutative one in some other topos.

  14. Nonextensive entropies derived from Gauss' principle

    International Nuclear Information System (INIS)

    Wada, Tatsuaki

    2011-01-01

    Gauss' principle in statistical mechanics is generalized for a q-exponential distribution in nonextensive statistical mechanics. It determines the associated stochastic and statistical nonextensive entropies, which satisfy the Greene-Callen principle concerning the equivalence between microcanonical and canonical ensembles. - Highlights: → Nonextensive entropies are derived from Gauss' principle and ensemble equivalence. → Gauss' principle is generalized for a q-exponential distribution. → The condition for satisfying the Greene-Callen principle is found. → The associated statistical q-entropy is found to be the normalized Tsallis entropy.
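    For context (the standard definition used in nonextensive statistical mechanics, not a formula taken from the abstract), the q-exponential generalizes the ordinary exponential as

        e_q(x) \equiv \left[1 + (1-q)\,x\right]_{+}^{1/(1-q)} , \qquad \lim_{q \to 1} e_q(x) = e^{x} ,

    so a q-exponential distribution p(x) \propto e_q(-\beta x) reduces to the Boltzmann-Gibbs form as q \to 1.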

  15. A Pontryagin Minimum Principle-Based Adaptive Equivalent Consumption Minimum Strategy for a Plug-in Hybrid Electric Bus on a Fixed Route

    Directory of Open Access Journals (Sweden)

    Shaobo Xie

    2017-09-01

    When developing a real-time energy management strategy for a plug-in hybrid electric vehicle, it is still a challenge for the Equivalent Consumption Minimum Strategy to achieve near-optimal energy consumption, because the optimal equivalence factor is not readily available without the trip information. With the help of realistic speed profiles sampled from a plug-in hybrid electric bus running on a fixed commuting line, this paper proposes a convenient and effective approach to determining the equivalence factor for an adaptive Equivalent Consumption Minimum Strategy. Firstly, with the adaptive law based on the feedback of battery SOC, the equivalence factor is described as a combination of a major component and a tuning component. In particular, the major part, defined as a constant, reflects the inherent consistency of regular speed profiles, while the second part, consisting of a proportional and an integral term, can slightly tune the equivalence factor to accommodate the disparity of daily running cycles. Moreover, Pontryagin's Minimum Principle is employed and solved by using the shooting method to capture the co-state dynamics, in which the Secant method is introduced to adjust the initial co-state value. The initial co-state value in the last shooting is then taken as the optimal stable constant of the equivalence factor. Finally, altogether ten successive driving profiles are selected with different initial SOC levels to evaluate the proposed method, and the results demonstrate excellent fuel economy compared with the dynamic programming and PMP methods.
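    A minimal sketch, in Python, of the adaptive law described above: a constant major component plus a proportional-integral tuning term driven by battery SOC feedback. The constants, signal names and update interval are illustrative assumptions, not values from the paper.

        # Sketch only: constants and signal names are hypothetical, not taken from the paper.
        def make_adaptive_equivalence_factor(s0, k_p, k_i, soc_ref, dt):
            """Return an update function for the ECMS equivalence factor s(t)."""
            integral = 0.0

            def update(soc_measured):
                nonlocal integral
                error = soc_ref - soc_measured      # SOC feedback error
                integral += error * dt              # accumulated (integral) term
                return s0 + k_p * error + k_i * integral

            return update

        # Example with made-up numbers: the factor falls while SOC sits above its reference
        # and rises once SOC drops below it.
        s_factor = make_adaptive_equivalence_factor(s0=2.8, k_p=5.0, k_i=0.05, soc_ref=0.30, dt=1.0)
        for soc in (0.32, 0.31, 0.30, 0.29):
            print(round(s_factor(soc), 4))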

  16. Gravitational quadrupolar coupling to equivalence principle test masses: the general case

    International Nuclear Information System (INIS)

    Lockerbie, N A

    2002-01-01

    This paper discusses the significance of the quadrupolar gravitational force in the context of test masses destined for use in equivalence principle (EP) experiments, such as STEP and MICROSCOPE. The relationship between quadrupolar gravity and rotational inertia for an arbitrary body is analysed, and the special, gravitational, role of a body's principal axes of inertia is revealed. From these considerations the gravitational quadrupolar force acting on a cylindrically symmetrical body, due to a point-like attracting source mass, is derived in terms of the body's mass quadrupole tensor. The result is shown to be in agreement with that obtained from MacCullagh's formula (as the starting point). The theory is then extended to cover the case of a completely arbitrary solid body, and a compact formulation for the quadrupolar force on such a body is derived. A numerical example of a dumb-bell's attraction to a local point-like gravitational source is analysed using this theory. Close agreement is found between the resulting quadrupolar force on the body and the difference between the net and the monopolar forces acting on it, underscoring the utility of the approach. A dynamical technique for experimentally obtaining the mass quadrupole tensors of EP test masses is discussed, and a means of validating the results is noted

  17. The Thermodynamical Arrow and the Historical Arrow; Are They Equivalent?

    Directory of Open Access Journals (Sweden)

    Martin Tamm

    2017-08-01

    In this paper, the relationship between the thermodynamic and historical arrows of time is studied. In the context of a simple combinatorial model, their definitions are made more precise; in particular, strong versions (which are not compatible with time-symmetric microscopic laws) and weak versions (which can be compatible with time-symmetric microscopic laws) are given. This is part of a larger project that aims to explain the arrows as consequences of a common time-symmetric principle in the set of all possible universes. However, even if we accept that both arrows may have the same origin, this does not imply that they are equivalent, and it is argued that there can be situations where one arrow may be well-defined but the other is not.

  18. Single-field consistency relations of large scale structure part III: test of the equivalence principle

    Energy Technology Data Exchange (ETDEWEB)

    Creminelli, Paolo [Abdus Salam International Centre for Theoretical Physics, Strada Costiera 11, Trieste, 34151 (Italy); Gleyzes, Jérôme; Vernizzi, Filippo [CEA, Institut de Physique Théorique, Gif-sur-Yvette cédex, F-91191 France (France); Hui, Lam [Physics Department and Institute for Strings, Cosmology and Astroparticle Physics, Columbia University, New York, NY, 10027 (United States); Simonović, Marko, E-mail: creminel@ictp.it, E-mail: jerome.gleyzes@cea.fr, E-mail: lhui@astro.columbia.edu, E-mail: msimonov@sissa.it, E-mail: filippo.vernizzi@cea.fr [SISSA, via Bonomea 265, Trieste, 34136 (Italy)

    2014-06-01

    The recently derived consistency relations for Large Scale Structure do not hold if the Equivalence Principle (EP) is violated. We show it explicitly in a toy model with two fluids, one of which is coupled to a fifth force. We explore the constraints that galaxy surveys can set on EP violation looking at the squeezed limit of the 3-point function involving two populations of objects. We find that one can explore EP violations of order 10^-3–10^-4 on cosmological scales. Chameleon models are already very constrained by the requirement of screening within the Solar System and only a very tiny region of the parameter space can be explored with this method. We show that no violation of the consistency relations is expected in Galileon models.
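    Schematically (a heavily simplified form of the relations being tested; normalization and time dependence are omitted and the notation is ours, not the authors'), an EP violation parametrized by ε shows up as a pole in the squeezed limit of the equal-time correlator of two tracer populations A and B,

        \lim_{q \to 0}\,\langle \delta_A(\mathbf{q})\,\delta_B(\mathbf{k})\,\delta_B(-\mathbf{k}-\mathbf{q})\rangle' \;\propto\; \epsilon\,\frac{\hat{\mathbf{q}}\cdot\mathbf{k}}{q}\,P(q)\,P(k) ,

    which vanishes when ε = 0, i.e. when all species fall in the same way; this is the sense in which the squeezed 3-point function probes the EP.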

  19. The Nome law compromise: the limits of a market system with weak economic principles

    International Nuclear Information System (INIS)

    Finon, D.

    2010-01-01

    The NOME law aims at two principal objectives in terms of competition: one objective is to increase the share of the market for the rivals of the historic suppliers, and the other is to develop retail competition that will lead to competitive prices, consistent with the current cost of the nuclear kWh. This is brought about through a regulation of prices and of the quantity of wholesale trades by allocating drawing rights on nuclear power to alternative suppliers, and through control mechanisms that dissuade the buyers of these rights from arbitraging on the European wholesale market. We then show that it is necessary to leave the canonic running of the electricity retail market to succeed in decoupling retail prices from wholesale prices. We identify the importance of the historic suppliers' role as a linchpin that in practice defines retail prices, as well as handling market distribution between themselves and the alternative suppliers. We note the special nature of retail prices as coming not from market balance, but rather as being a price defined under political injunction, which is therefore implicitly regulated. With weak economic foundations, the system can be pushed off course by the effect of competition alone, in particular when we reach the allocation limit of a quarter of nuclear energy production. It has an equally weak legal basis with respect to European case law. That raises doubts about its sustainability. (author)

  20. Quantum Action Principle with Generalized Uncertainty Principle

    OpenAIRE

    Gu, Jie

    2013-01-01

    One of the common features in all promising candidates of quantum gravity is the existence of a minimal length scale, which naturally emerges with a generalized uncertainty principle, or equivalently a modified commutation relation. Schwinger's quantum action principle was modified to incorporate this modification, and was applied to the calculation of the kernel of a free particle, partly recovering the result previously studied using path integral.

  1. Limiting absorption principle at low energies for a mathematical model of weak interaction: the decay of a boson

    International Nuclear Information System (INIS)

    Barbarouxa, J.M.; Guillot, J.C.

    2009-01-01

    We study the spectral properties of a Hamiltonian describing the weak decay of spin 1 massive bosons into the full family of leptons. We prove that the considered Hamiltonian is self-adjoint, with a unique ground state and we derive a Mourre estimate and a limiting absorption principle above the ground state energy and below the first threshold, for a sufficiently small coupling constant. As a corollary, we prove absence of eigenvalues and absolute continuity of the energy spectrum in the same spectral interval. (authors)

  2. Experimental constraints on a minimal and nonminimal violation of the equivalence principle in the oscillations of massive neutrinos

    International Nuclear Information System (INIS)

    Gasperini, M.; Istituto Nazionale di Fisica Nucleare, Sezione di Torino, Torino, Italy)

    1989-01-01

    The negative results of the oscillation experiments are discussed under the hypothesis that the various neutrino types are not universally coupled to gravity. In this case the transition probability between two different flavor eigenstates may be affected by the local gravitational field present in a terrestrial laboratory, and the contribution of gravity can interfere, in general, with the mass contribution to the oscillation process. In particular it is shown that even a strong violation of the equivalence principle could be compatible with the experimental data, provided the gravity-induced energy splitting is balanced by a suitable neutrino mass difference

  3. Dispersion-corrected first-principles calculation of terahertz vibration, and evidence for weak hydrogen bond formation

    Science.gov (United States)

    Takahashi, Masae; Ishikawa, Yoichi; Ito, Hiromasa

    2013-03-01

    A weak hydrogen bond (WHB) such as CH-O is very important for the structure, function, and dynamics of chemical and biological systems. The WHB stretching vibration lies in the terahertz (THz) frequency region. Very recently, the reasonable performance of dispersion-corrected first-principles methods for WHBs has been proven. In this lecture, we report dispersion-corrected first-principles calculations of the vibrational absorption of some organic crystals, together with low-temperature THz spectral measurements, in order to clarify the WHB stretching vibration. The THz frequency calculation of a WHB crystal is greatly improved by dispersion correction. Moreover, the discrepancy in frequency between experiment and calculation is 10 cm^-1 or less. Dispersion correction is especially effective for intermolecular modes. The very sharp peak appearing at 4 K is assigned to the intermolecular translational mode that corresponds to the WHB stretching vibration. It is difficult to detect and control WHB formation in a crystal because the binding energy is very small. With the help of the latest intense development of experimental and theoretical techniques and their careful use, we reveal the solid-state WHB stretching vibration as evidence for WHB formation that differs in the respective WHB networks. The research was supported by the Ministry of Education, Culture, Sports, Science and Technology of Japan (Grant No. 22550003).

  4. Experimental method research on neutron equal dose-equivalent detection

    International Nuclear Information System (INIS)

    Ji Changsong

    1995-10-01

    The design principles of neutron dose-equivalent meters for neutron biological equi-effect detection are studied. Two traditional principles, the 'absorption net principle' and the 'multi-detector principle', are discussed, and on this basis a new theoretical principle for neutron biological equi-effect detection, the 'absorption stick principle', is put forward with the aim of both increasing the neutron sensitivity of this type of meter and overcoming the shortcomings of the two traditional methods. In accordance with this new principle a brand-new model of neutron dose-equivalent meter, BH3105, has been developed. Its neutron sensitivity reaches 10 cps/(μSv·h^-1), 18∼40 times higher than that of all meters of the same kind available today at home and abroad, 0.23∼0.56 cps/(μSv·h^-1), and the specifications of the newly developed meter reach or surpass the levels of meters of the same kind. Therefore the new theoretical principle of neutron biological equi-effect detection, the 'absorption stick principle', is proved by experiment to be scientific, advanced and useful. (3 refs., 3 figs., 2 tabs.)

  5. Current research efforts at JILA to test the equivalence principle at short ranges

    International Nuclear Information System (INIS)

    Faller, J.E.; Niebauer, T.M.; McHugh, M.P.; Van Baak, D.A.

    1988-01-01

    We are presently engaged in three different experiments to search for a possible breakdown of the equivalence principle at short ranges. The first of these experiments, which has been completed, is our so-called Galilean test in which the differential free-fall of two objects of differing composition was measured using laser interferometry. We observed that the differential acceleration of two test bodies was less than 5 parts in 10 billion. This experiment set new limits on a suggested baryon dependent ''Fifth Force'' at ranges longer than 1 km. With a second experiment, we are investigating substance dependent interactions primarily for ranges up to 10 meters using a fluid supported torsion balance; this apparatus has been built and is now undergoing laboratory tests. Finally, a proposal has been made to measure the gravitational signal associated with the changing water level at a large pumped storage facility in Ludington, Michigan. Measuring the gravitational signal above and below the pond will yield the value of the gravitational constant, G, at ranges from 10-100 m. These measurements will serve as an independent check on other geophysical measurements of G

  6. Testing the Equivalence Principle in an Einstein Elevator: Detector Dynamics and Gravity Perturbations

    Science.gov (United States)

    Hubbard, Dorthy (Technical Monitor); Lorenzini, E. C.; Shapiro, I. I.; Cosmo, M. L.; Ashenberg, J.; Parzianello, G.; Iafolla, V.; Nozzoli, S.

    2003-01-01

    We discuss specific, recent advances in the analysis of an experiment to test the Equivalence Principle (EP) in free fall. A differential accelerometer detector with two proof masses of different materials free falls inside an evacuated capsule previously released from a stratospheric balloon. The detector spins slowly about its horizontal axis during the fall. An EP violation signal (if present) will manifest itself at the rotational frequency of the detector. The detector operates in a quiet environment as it slowly moves with respect to the co-moving capsule. There are, however, gravitational and dynamical noise contributions that need to be evaluated in order to define key requirements for this experiment. Specifically, higher-order mass moments of the capsule contribute errors to the differential acceleration output with components at the spin frequency which need to be minimized. The dynamics of the free falling detector (in its present design) has been simulated in order to estimate the tolerable errors at release which, in turn, define the release mechanism requirements. Moreover, the study of the higher-order mass moments for a worst-case position of the detector package relative to the cryostat has led to the definition of requirements on the shape and size of the proof masses.

  7. Solar system expansion and strong equivalence principle as seen by the NASA MESSENGER mission

    Science.gov (United States)

    Genova, Antonio; Mazarico, Erwan; Goossens, Sander; Lemoine, Frank G.; Neumann, Gregory A.; Smith, David E.; Zuber, Maria T.

    2018-01-01

    The NASA MESSENGER mission explored the innermost planet of the solar system and obtained a rich data set of range measurements for the determination of Mercury's ephemeris. Here we use these precise data collected over 7 years to estimate parameters related to general relativity and the evolution of the Sun. These results confirm the validity of the strong equivalence principle with a significantly refined uncertainty of the Nordtvedt parameter η = (-6.6 ± 7.2) × 10^-5. By assuming a metric theory of gravitation, we retrieved the post-Newtonian parameter β = 1 + (-1.6 ± 1.8) × 10^-5 and the Sun's gravitational oblateness, J_2⊙ = (2.246 ± 0.022) × 10^-7. Finally, we obtain an estimate of the time variation of the Sun's gravitational parameter, (d/dt)(GM_⊙)/(GM_⊙) = (-6.13 ± 1.47) × 10^-14, which is consistent with the expected solar mass loss due to the solar wind and interior processes. This measurement allows us to constrain |dG/dt|/G to be <4 × 10^-14 per year.

  8. Is the Strong Anthropic Principle too weak?

    International Nuclear Information System (INIS)

    Feoli, A.; Rampone, S.

    1999-01-01

    The authors discuss Carter's formula for the probability of mankind's evolution, following the derivation proposed by Barrow and Tipler. The authors stress the relation between the existence of billions of galaxies and the evolution, somewhere in the Universe, of at least one intelligent life whose lifetime is not trivial. The authors show that the existence probability and the lifetime of a civilization depend not only on the evolutionary critical steps, but also on the number of places where life can arise. In the light of these results, a stronger version of the Anthropic Principle is proposed

  9. Weak relativity

    CERN Document Server

    Selleri, Franco

    2015-01-01

    Weak Relativity is a theory equivalent to Special Relativity according to Reichenbach's definition, with the parameter epsilon equal to 0. It formulates a Neo-Lorentzian approach by replacing the Lorentz transformations with a new set named “Inertial Transformations”, thus explaining the Sagnac effect, the twin paradox and the trip from the future to the past in an easy and elegant way. The cosmic microwave background is suggested as a possible privileged reference system. Most importantly, being a theory based on experimental proofs, rather than mutual consensus, it offers a physical description of reality independent of human observation.

  10. Consistency of the Mach principle and the gravitational-to-inertial mass equivalence principle

    International Nuclear Information System (INIS)

    Granada, Kh.K.; Chubykalo, A.E.

    1990-01-01

    The kinematics of a system composed of two bodies interacting with each other according to the inverse-square law is investigated. It is shown that the Mach principle, earlier rejected by the general relativity theory, can be used as an alternative to the absolute space concept, if it is proposed that the distant star background dictates both the inertial and the gravitational mass of a body

  11. On Dual Phase-Space Relativity, the Machian Principle and Modified Newtonian Dynamics

    CERN Document Server

    Castro, C

    2004-01-01

    We investigate the consequences of Mach's principle of inertia within the context of the Dual Phase Space Relativity, which is compatible with the Eddington-Dirac large-number coincidences and may provide a physical reason behind the observed anomalous Pioneer acceleration and a solution to the riddle of the cosmological constant problem (Nottale). The cosmological implications of Non-Archimedean Geometry, obtained by assigning an upper impassable scale in Nature, and the cosmological variations of the fundamental constants are also discussed. We study the corrections to Newtonian dynamics resulting from the Dual Phase Space Relativity by analyzing the behavior of a test particle in a modified Schwarzschild geometry (due to the effects of the maximal acceleration) that leads in the weak-field approximation to essential modifications of the Newtonian dynamics and to violations of the equivalence principle. Finally we follow another avenue and find modified Newtonian dynamics induced by Yang's Noncommut...

  12. Introduction to unification of electromagnetic and weak interactions

    International Nuclear Information System (INIS)

    Martin, F.

    1980-01-01

    After reviewing the present status of weak interaction phenomenology we discuss the basic principles of gauge theories. Then we show how Higgs mechanism can give massive quanta of interaction. The so-called 'Weinberg-Salam' model, which unifies electromagnetic and weak interactions, is described. We conclude with a few words on unification with strong interactions and gravity [fr

  13. Solar system expansion and strong equivalence principle as seen by the NASA MESSENGER mission.

    Science.gov (United States)

    Genova, Antonio; Mazarico, Erwan; Goossens, Sander; Lemoine, Frank G; Neumann, Gregory A; Smith, David E; Zuber, Maria T

    2018-01-18

    The NASA MESSENGER mission explored the innermost planet of the solar system and obtained a rich data set of range measurements for the determination of Mercury's ephemeris. Here we use these precise data collected over 7 years to estimate parameters related to general relativity and the evolution of the Sun. These results confirm the validity of the strong equivalence principle with a significantly refined uncertainty of the Nordtvedt parameter η = (-6.6 ± 7.2) × 10^-5. By assuming a metric theory of gravitation, we retrieved the post-Newtonian parameter β = 1 + (-1.6 ± 1.8) × 10^-5 and the Sun's gravitational oblateness, J_2⊙ = (2.246 ± 0.022) × 10^-7. Finally, we obtain an estimate of the time variation of the Sun's gravitational parameter, (d/dt)(GM_⊙)/(GM_⊙) = (-6.13 ± 1.47) × 10^-14, which is consistent with the expected solar mass loss due to the solar wind and interior processes. This measurement allows us to constrain |dG/dt|/G to be <4 × 10^-14 per year.

  14. The Application of Equivalence Theory to Advertising Translation

    Institute of Scientific and Technical Information of China (English)

    张颖

    2017-01-01

    Through analyzing equivalence theory, the author tries to find a solution to the problems arising in the process of advertising translation. These problems include cultural diversity, language diversity and the special requirements of advertisements. The author argues that Nida's functional equivalence is one of the most appropriate theories for dealing with these problems. In this paper, the author introduces the principles of advertising translation and the cultural divergences involved in advertising translation, and then gives some advertising translation examples to explain and analyze how to create good advertising translation by using functional equivalence. Finally, the author introduces some strategies in advertising translation.

  15. Bisimulation Meet PCTL Equivalences for Probabilistic Automata (Journal Version)

    DEFF Research Database (Denmark)

    Song, Lei; Zhang, Lijun; Godskesen, Jens Christian

    2013-01-01

    Probabilistic automata (PA) have been successfully applied in the formal verification of concurrent and stochastic systems. Efficient model checking algorithms have been studied, where the most often used logics for expressing properties are based on PCTL and its extension PCTL*. Various behavioral equivalences have been proposed for PAs, as a powerful tool for abstraction and compositional minimization of PAs. Unfortunately, the behavioral equivalences are well known to be strictly stronger than the logical equivalences induced by PCTL or PCTL*. This paper introduces novel notions of strong bisimulation relations, which characterize PCTL and PCTL* exactly. We also extend these to weak bisimulations characterizing PCTL and PCTL* without the next operator, respectively. Thus, our paper bridges the gap between logical and behavioral equivalences in this setting.

  16. Bayesian Markov Chain Monte Carlo inversion for weak anisotropy parameters and fracture weaknesses using azimuthal elastic impedance

    Science.gov (United States)

    Chen, Huaizhen; Pan, Xinpeng; Ji, Yuxin; Zhang, Guangzhi

    2017-08-01

    A system of aligned vertical fractures and fine horizontal shale layers combine to form equivalent orthorhombic media. Weak anisotropy parameters and fracture weaknesses play an important role in the description of orthorhombic anisotropy (OA). We propose a novel approach of utilizing seismic reflection amplitudes to estimate weak anisotropy parameters and fracture weaknesses from observed seismic data, based on azimuthal elastic impedance (EI). We first propose perturbation in stiffness matrix in terms of weak anisotropy parameters and fracture weaknesses, and using the perturbation and scattering function, we derive PP-wave reflection coefficient and azimuthal EI for the case of an interface separating two OA media. Then we demonstrate an approach to first use a model constrained damped least-squares algorithm to estimate azimuthal EI from partially incidence-phase-angle-stack seismic reflection data at different azimuths, and then extract weak anisotropy parameters and fracture weaknesses from the estimated azimuthal EI using a Bayesian Markov Chain Monte Carlo inversion method. In addition, a new procedure to construct rock physics effective model is presented to estimate weak anisotropy parameters and fracture weaknesses from well log interpretation results (minerals and their volumes, porosity, saturation, fracture density, etc.). Tests on synthetic and real data indicate that unknown parameters including elastic properties (P- and S-wave impedances and density), weak anisotropy parameters and fracture weaknesses can be estimated stably in the case of seismic data containing a moderate noise, and our approach can make a reasonable estimation of anisotropy in a fractured shale reservoir.
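
    As a rough illustration of the second inversion stage described above (not the authors' implementation), a minimal Metropolis-Hastings sampler might look like the following Python sketch; the forward model ei_forward, the noise level sigma_noise and the implicit flat prior are placeholder assumptions.

        import numpy as np

        def metropolis_inversion(ei_obs, ei_forward, sigma_noise, theta0,
                                 step=0.05, n_iter=20000, rng=None):
            """Minimal Metropolis-Hastings sampler for model parameters theta
            (e.g. weak anisotropy parameters and fracture weaknesses) given
            observed azimuthal elastic impedance ei_obs.  ei_forward(theta)
            must return the predicted EI for a trial parameter vector."""
            rng = np.random.default_rng() if rng is None else rng
            theta = np.asarray(theta0, dtype=float)

            def log_likelihood(t):
                resid = ei_obs - ei_forward(t)
                return -0.5 * np.sum((resid / sigma_noise) ** 2)

            logp = log_likelihood(theta)
            samples = []
            for _ in range(n_iter):
                proposal = theta + step * rng.standard_normal(theta.size)
                logp_new = log_likelihood(proposal)
                # Accept with the usual Metropolis ratio (flat prior assumed).
                if np.log(rng.uniform()) < logp_new - logp:
                    theta, logp = proposal, logp_new
                samples.append(theta.copy())
            return np.array(samples)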

  17. Algorithms for singularities and real structures of weak Del Pezzo surfaces

    KAUST Repository

    Lubbes, Niels

    2014-08-01

    In this paper, we consider the classification of singularities [P. Du Val, On isolated singularities of surfaces which do not affect the conditions of adjunction. I, II, III, Proc. Camb. Philos. Soc. 30 (1934) 453-491] and real structures [C. T. C. Wall, Real forms of smooth del Pezzo surfaces, J. Reine Angew. Math. 1987(375/376) (1987) 47-66, ISSN 0075-4102] of weak Del Pezzo surfaces from an algorithmic point of view. It is well-known that the singularities of weak Del Pezzo surfaces correspond to root subsystems. We present an algorithm which computes the classification of these root subsystems. We represent equivalence classes of root subsystems by unique labels. These labels allow us to construct examples of weak Del Pezzo surfaces with the corresponding singularity configuration. Equivalence classes of real structures of weak Del Pezzo surfaces are also represented by root subsystems. We present an algorithm which computes the classification of real structures. This leads to an alternative proof of the known classification for Del Pezzo surfaces and extends this classification to singular weak Del Pezzo surfaces. As an application we classify families of real conics on cyclides. © World Scientific Publishing Company.

  18. The Satellite Test of the Equivalence Principle (STEP)

    Science.gov (United States)

    2004-01-01

    STEP will carry concentric test masses to Earth orbit to test a fundamental assumption underlying Einstein's theory of general relativity: that gravitational mass is equivalent to inertial mass. STEP is a 21st-century version of the test that Galileo is said to have performed by dropping a cannon ball and a musket ball simultaneously from the top of the Leaning Tower of Pisa to compare their accelerations. During the STEP experiment, four pairs of test masses will be falling around the Earth, and their accelerations will be measured by superconducting quantum interference devices (SQUIDs). The extended time sensitivity of the instruments will allow the measurements to be a million times more accurate than those made in modern ground-based tests.

  19. Evaluation of 1 cm dose equivalent rate using a NaI(Tl) scintillation spectrometer

    International Nuclear Information System (INIS)

    Matsuda, Hideharu

    1990-01-01

    A method for evaluating 1 cm dose equivalent rates from the pulse height distribution obtained by a 76.2 mm φ spherical NaI(Tl) scintillation spectrometer is described. Weak leakage radiation from nuclear facilities was also measured, and the dose equivalent conversion factor and effective energy of the leakage radiation were evaluated from the 1 cm dose equivalent rate and the exposure rate. (author)
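
    A minimal sketch of the spectrum-to-dose idea behind such an evaluation, assuming a set of per-channel conversion factors (a 'G-function' style operator); the names and factors are placeholders rather than values from the paper, and a real evaluation also requires unfolding the detector response.

        import numpy as np

        def ambient_dose_equivalent_rate(counts, g_factors, live_time_s):
            """Fold a measured pulse-height distribution with per-channel
            conversion factors g_factors (pSv per count, instrument specific)
            and return an approximate 1 cm dose equivalent rate in uSv/h."""
            dose_pSv = np.sum(np.asarray(counts, float) * np.asarray(g_factors, float))
            return dose_pSv * 1e-6 * 3600.0 / live_time_s  # pSv -> uSv, per hour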

  20. Weak MSO: automata and expressiveness modulo bisimilarity

    NARCIS (Netherlands)

    Carreiro, F.; Facchini, A.; Venema, Y.; Zanasi, F.

    2014-01-01

    We prove that the bisimulation-invariant fragment of weak monadic second-order logic (WMSO) is equivalent to the fragment of the modal μ-calculus where the application of the least fixpoint operator μp.φ is restricted to formulas φ that are continuous in p. Our proof is automata-theoretic in nature;

  1. A Community Standard: Equivalency of Healthcare in Australian Immigration Detention.

    Science.gov (United States)

    Essex, Ryan

    2017-08-01

    The Australian government has long maintained that the standard of healthcare provided in its immigration detention centres is broadly comparable with health services available within the Australian community. Drawing on the literature from prison healthcare, this article examines (1) whether the principle of equivalency is being applied in Australian immigration detention and (2) whether this standard of care is achievable given Australia's current policies. This article argues that the principle of equivalency is not being applied and that this standard of health and healthcare will remain unachievable in Australian immigration detention without significant reform. Alternate approaches to addressing the well documented issues related to health and healthcare in Australian immigration detention are discussed.

  2. Copernicus, Kant, and the anthropic cosmological principles

    Science.gov (United States)

    Roush, Sherrilyn

    In the last three decades several cosmological principles and styles of reasoning termed 'anthropic' have been introduced into physics research and popular accounts of the universe and human beings' place in it. I discuss the circumstances of 'fine tuning' that have motivated this development, and what is common among the principles. I examine the two primary principles, and find a sharp difference between these 'Weak' and 'Strong' varieties: contrary to the view of the progenitors that all anthropic principles represent a departure from Copernicanism in cosmology, the Weak Anthropic Principle is an instance of Copernicanism. It has close affinities with the step of Copernicus that Immanuel Kant took himself to be imitating in the 'critical' turn that gave rise to the Critique of Pure Reason. I conclude that the fact that a way of going about natural science mentions human beings is not sufficient reason to think that it is a subjective approach; in fact, it may need to mention human beings in order to be objective.

  3. On a Weak Discrete Maximum Principle for hp-FEM

    Czech Academy of Sciences Publication Activity Database

    Šolín, Pavel; Vejchodský, Tomáš

    -, č. 209 (2007), s. 54-65 ISSN 0377-0427 R&D Projects: GA ČR(CZ) GA102/05/0629 Institutional research plan: CEZ:AV0Z20570509; CEZ:AV0Z10190503 Keywords : discrete maximum principle * hp-FEM Subject RIV: JA - Electronics ; Optoelectronics, Electrical Engineering Impact factor: 0.943, year: 2007

  4. A Cp-theory problem book functional equivalencies

    CERN Document Server

    Tkachuk, Vladimir V

    2016-01-01

    This fourth volume in Vladimir Tkachuk's series on Cp-theory gives reasonably complete coverage of the theory of functional equivalencies through 500 carefully selected problems and exercises. By systematically introducing each of the major topics of Cp-theory, the book is intended to bring a dedicated reader from basic topological principles to the frontiers of modern research. The book presents complete and up-to-date information on the preservation of topological properties by homeomorphisms of function spaces. An exhaustive theory of t-equivalent, u-equivalent and l-equivalent spaces is developed from scratch. The reader will also find introductions to the theory of uniform spaces, the theory of locally convex spaces, as well as the theory of inverse systems and dimension theory. Moreover, Kolmogorov's solution of Hilbert's Problem 13 is included, as it is needed for the presentation of the theory of l-equivalent spaces. This volume contains the most important classical re...

  5. On the role of the equivalence principle in the general relativity theory

    International Nuclear Information System (INIS)

    Gertsenshtein, M.E.; Stanyukovich, K.P.; Pogosyan, V.A.

    1977-01-01

    The conditions under which the solutions of the general relativity equations satisfy the correspondence principle are considered. It is shown that in general relativity, as in flat space, any system of coordinates satisfying the topological requirements of continuity and uniqueness is admissible. The coordinate transformations must be mutually unique, and the following requirements must be met: the transformations of the coordinates x^i = x^i(x̄^k) must preserve the class of the function, while the transformation Jacobian must be finite and nonzero. The admissible metrics in the Tolman problem for a vacuum are considered. A prohibition of the vacuum solution of the Tolman problem is obtained from the correspondence principle. The correspondence principle is then applied to the solution of the Friedmann problem by constructing a spherically symmetric self-similar solution in which compression is replaced by expansion at a finite density. The examples adduced show that applying the correspondence principle makes it possible to discard physically inadmissible solutions and to obtain new physical results.

  6. On Independence of Variants of the Weak Pigeonhole Principle

    Czech Academy of Sciences Publication Activity Database

    Jeřábek, Emil

    2007-01-01

    Roč. 17, č. 3 (2007), s. 587-604 ISSN 0955-792X R&D Projects: GA AV ČR IAA1019401; GA MŠk(CZ) 1M0545 Institutional research plan: CEZ:AV0Z10190503 Keywords : bounded arithmetic * pigeonhole principle * KPT witnessing Subject RIV: BA - General Mathematics Impact factor: 0.821, year: 2007

  7. Maximum Principles for Discrete and Semidiscrete Reaction-Diffusion Equation

    Directory of Open Access Journals (Sweden)

    Petr Stehlík

    2015-01-01

    We study reaction-diffusion equations with a general reaction function f on one-dimensional lattices with continuous or discrete time, u_x' (or Δ_t u_x) = k(u_{x-1} - 2u_x + u_{x+1}) + f(u_x), x ∈ Z. We prove weak and strong maximum and minimum principles for the corresponding initial-boundary value problems. Whereas the maximum principles in the semidiscrete case (continuous time) exhibit similar features to those of the fully continuous reaction-diffusion model, in the discrete case the weak maximum principle holds for a smaller class of functions and the strong maximum principle is valid in a weaker sense. We describe in detail how the validity of the maximum principles depends on the nonlinearity and the time step. We illustrate our results on the Nagumo equation with the bistable nonlinearity.
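
    A minimal numerical sketch of the fully discrete case discussed above, assuming an explicit Euler time discretization, fixed (Dirichlet) boundary nodes and a sufficiently small time step; the paper's precise admissibility conditions differ and depend on the nonlinearity.

        import numpy as np

        def nagumo_step(u, k, dt, a=0.3):
            """One explicit Euler step of the lattice Nagumo equation
            u_x' = k(u_{x-1} - 2 u_x + u_{x+1}) + f(u_x),  f(u) = u(1-u)(u-a),
            with the two boundary nodes held fixed."""
            f = u * (1.0 - u) * (u - a)
            unew = u.copy()
            unew[1:-1] = u[1:-1] + dt * (k * (u[:-2] - 2.0 * u[1:-1] + u[2:]) + f[1:-1])
            return unew

        # Illustrative check of the weak maximum/minimum principle: starting in
        # [0, 1], the solution stays in [0, 1] for a small enough time step (here
        # dt*(2k + L) <= 1 with L a Lipschitz bound of f on [0, 1]); a large dt
        # can violate the bounds.
        rng = np.random.default_rng(0)
        u = rng.random(50)
        k, dt = 1.0, 0.2
        for _ in range(200):
            u = nagumo_step(u, k, dt)
        print(u.min() >= 0.0 and u.max() <= 1.0)   # expected: True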

  8. The equivalence of the Dekel-Fudenberg iterative procedure and weakly perfect rationalizability

    OpenAIRE

    HERINGS, P. J.-J.; VANNETELBOSCH, Vincent J.

    1998-01-01

    Two approaches have been proposed in the literature to refine the rationalizability solution concept: either assuming that a player believes that with small probability her opponents choose strategies that are irrational, or assuming that there is a small amount of payoff uncertainty. We show that both approaches lead to the same refinement if strategy perturbations are made according to the concept of weakly perfect rationalizability, and if there is payoff uncertainty as in Dekel and Fudenberg.

  9. Equivalent and Alternative Forms for BF Gravity with Immirzi Parameter

    Directory of Open Access Journals (Sweden)

    Merced Montesinos

    2011-11-01

    A detailed analysis of the BF formulation for general relativity given by Capovilla, Montesinos, Prieto, and Rojas is performed. The action principle of this formulation is written in an equivalent form by transforming the fields on which the action depends functionally. The transformed action principle involves two BF terms and the two Lorentz invariants that appear in the original action principle generically. As an application of this formalism, the action principle used by Engle, Pereira, and Rovelli in their spin foam model for gravity is recovered, and the coupling of the cosmological constant in such a formulation is obtained.

  10. Physics with ultra-low energy antiprotons

    International Nuclear Information System (INIS)

    Holtkamp, D.B.; Holzscheiter, M.H.; Hughes, R.J.

    1989-01-01

    The experimental observation that all forms of matter experience the same gravitational acceleration is embodied in the weak equivalence principle of gravitational physics. However no experiment has tested this principle for particles of antimatter such as the antiproton or the antihydrogen atom. Clearly the question of whether antimatter is in compliance with weak equivalence is a fundamental experimental issue, which can best be addressed at an ultra-low energy antiproton facility. This paper addresses the issue. 20 refs

  11. Mathematically equivalent hydraulic analogue for teaching the pharmacokinetics and pharmacodynamics of atracurium

    NARCIS (Netherlands)

    Nikkelen, A.L.J.M.; Meurs, van W.L.; Ohrn, M.A.K.

    1997-01-01

    We evaluated the mathematical equivalence between the two-compartment pharmacokinetic model of the neuromuscular blocking agent atracurium and a hydraulic analogue that includes pharmacodynamic principles.

  12. Solar neutrino results and Violation of the Equivalence Principle An analysis of the existing data and predictions for SNO

    CERN Document Server

    Majumdar, D; Sil, A; Majumdar, Debasish; Raychaudhuri, Amitava; Sil, Arunansu

    2001-01-01

    Violation of the Equivalence Principle (VEP) can lead to neutrino oscillation through the non-diagonal coupling of neutrino flavor eigenstates with the gravitational field. The neutrino energy dependence of this oscillation probability is different from that of the usual mass-mixing neutrino oscillations. In this work we explore, in detail, the viability of the VEP hypothesis as a solution to the solar neutrino problem in a two generation scenario with both the active and sterile neutrino alternatives, choosing these states to be massless. To obtain the best-fit values of the oscillation parameters we perform a chi square analysis for the total rates of solar neutrinos seen at the Chlorine (Homestake), Gallium (Gallex and SAGE), Kamiokande, and SuperKamiokande (SK) experiments. We find that the goodness of these fits is never satisfactory. It markedly improves if the Chlorine data is excluded from the analysis, especially for VEP transformation to sterile neutrinos. The 1117-day SK data for recoil electron sp...
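
    For context, the different energy dependence referred to above is usually summarized in two-flavour form as follows (conventions and numerical factors vary between papers; the notation is assumed here rather than quoted from the record):

        \[
          P(\nu_{e}\to\nu_{x}) \;=\; \sin^{2}2\theta\,\sin^{2}\!\left(\frac{\pi L}{\lambda}\right),
          \qquad
          \frac{\pi L}{\lambda} \;\sim\;
          \begin{cases}
            \dfrac{\Delta m^{2} L}{4E} & \text{mass-mixing oscillations},\\[1.5ex]
            |\phi|\,\Delta\gamma\, E\, L & \text{VEP-induced oscillations},
          \end{cases}
        \]
        % phi is the ambient gravitational potential and Delta gamma the difference
        % of effective gravitational couplings: the VEP phase grows with E, whereas
        % the mass-mixing phase falls as 1/E.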

  13. An Abstract Approach to Process Equivalence and a Coinduction Principle for Traces

    DEFF Research Database (Denmark)

    Klin, Bartek

    2004-01-01

    An abstract coalgebraic approach to well-structured relations on processes is presented, based on notions of tests and test suites. Preorders and equivalences on processes are modelled as coalgebras for behaviour endofunctors lifted to a category of test suites. The general framework is specializ...

  14. Applicability of ambient dose equivalent H*(d) in mixed radiation fields - a critical discussion

    International Nuclear Information System (INIS)

    Hajek, M.; Vana, N.

    2004-01-01

    For purposes of routine radiation protection, it is desirable to characterize the potential irradiation of individuals in terms of a single dose equivalent quantity that would exist in a phantom approximating the human body. The phantom of choice is the ICRU sphere, 30 cm in diameter, made of tissue-equivalent plastic with a density of 1 g·cm⁻³ and a mass composition of 76.2% O, 11.1% C, 10.1% H and 2.6% N. Ambient dose equivalent, H*(d), was defined in ICRU Report 51 as the dose equivalent that would be produced by an expanded and aligned radiation field at a depth d in the ICRU sphere. The recommended reference depths are 10 mm for strongly penetrating radiation and 0.07 mm for weakly penetrating radiation, respectively. As an operational quantity in radiation protection, H*(d) shall serve as a conservative and directly measurable estimate of protection quantities, e.g. the effective dose E, which in turn are intended to give an indication of the risk associated with radiation exposure. The situation becomes more complex in radiation environments composed of a variety of charged and uncharged particles with a broad energy spectrum. Radiation fields of similarly complex nature are, for example, encountered onboard aircraft and in space. Dose equivalent was assessed as a function of depth in quasi-tissue-equivalent spheres by means of thermoluminescent dosemeters evaluated according to the high-temperature ratio (HTR) method. The presented experiments were performed both onboard aircraft and on the Russian space station Mir. As a result of interaction processes within the phantom body, the incident primary spectrum may be significantly modified with increasing depth. For the radiation field at aviation altitudes we found the maximum of dose equivalent at a depth of 60 mm, which conflicts with the 10 mm value recommended by ICRU. In contrast, for the space radiation environment the maximum dose equivalent was found at the surface of the sphere. This suggests that

  16. Strengthening and Stabilization of the Weak Water Saturated Soils Using Stone Columns

    Directory of Open Access Journals (Sweden)

    Sinyakov Leonid

    2016-01-01

    The article considers innovative modern materials and structures for strengthening weak soils and describes a method of strengthening weak saturated soils using stone columns. A method for calculating the physical-mechanical characteristics of the reinforced soil mass is presented. Two approaches to determining the stress-strain state and the consolidation time of the strengthened soil foundation using the finite element technique in a two-dimensional formulation are proposed. The first approach models the reinforced soil mass directly, with each column represented as a separate 2D strip. The second approach simulates the strengthened mass as an equivalent composite block with improved physical-mechanical characteristics. The use of the equivalent composite block can significantly reduce the time spent preparing a design scheme. The results of the two calculations were compared; the divergence between the two methods is within allowable limits, and the efficiency of strengthening weak water-saturated soils with stone columns is demonstrated.

  17. Potentiometric titration and equivalent weight of humic acid

    Science.gov (United States)

    Pommer, A.M.; Breger, I.A.

    1960-01-01

    The "acid nature" of humic acid has been controversial for many years. Some investigators claim that humic acid is a true weak acid, while others feel that its behaviour during potentiometric titration can be accounted for by colloidal adsorption of hydrogen ions. The acid character of humic acid has been reinvestigated using newly-derived relationships for the titration of weak acids with strong base. Re-interpreting the potentiometric titration data published by Thiele and Kettner in 1953, it was found that Merck humic acid behaves as a weak polyelectrolytic acid having an equivalent weight of 150, a pKa of 6.8 to 7.0, and a titration exponent of about 4.8. Interdretation of similar data pertaining to the titration of phenol-formaldehyde and pyrogallol-formaldehyde resins, considered to be analogs for humic acid by Thiele and Kettner, leads to the conclusion that it is not possible to differentiate between adsorption and acid-base reaction for these substances. ?? 1960.

  18. Planck Constant Determination from Power Equivalence

    Science.gov (United States)

    Newell, David B.

    2000-04-01

    Equating mechanical to electrical power links the kilogram, the meter, and the second to the practical realizations of the ohm and the volt derived from the quantum Hall and the Josephson effects, yielding an SI determination of the Planck constant. The NIST watt balance uses this power equivalence principle, and in 1998 measured the Planck constant with a combined relative standard uncertainty of 8.7 × 10⁻⁸, the most accurate determination to date. The next generation of the NIST watt balance is now being assembled. Modifications to the experimental facilities have been made to reduce the uncertainty components from vibrations and electromagnetic interference. A vacuum chamber has been installed to reduce the uncertainty components associated with performing the experiment in air. Most of the apparatus is in place and diagnostic testing of the balance should begin this year. Once a combined relative standard uncertainty of one part in 10⁸ has been reached, the power equivalence principle can be used to monitor the possible drift of the artifact mass standard, the kilogram, and provide an accurate alternative definition of mass in terms of fundamental constants. *Electricity Division, Electronics and Electrical Engineering Laboratory, Technology Administration, U.S. Department of Commerce. Contribution of the National Institute of Standards and Technology, not subject to copyright in the U.S.
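
    In simplified form, the power-equivalence relation behind the watt balance can be written as follows; the scaling factors n1, n2 and r are schematic placeholders for the actual calibration chain.

        \[
          m g v \;=\; U_{1} I, \qquad
          U_{1} = \frac{n_{1} f_{1}}{K_{J}}, \qquad
          I = \frac{U_{2}}{R} = \frac{n_{2} f_{2}}{K_{J}\, r R_{K}},
        \]
        \[
          K_{J} = \frac{2e}{h},\quad R_{K} = \frac{h}{e^{2}}
          \;\Longrightarrow\;
          K_{J}^{2} R_{K} = \frac{4}{h}
          \;\Longrightarrow\;
          h = \frac{4\, r\, m g v}{n_{1} n_{2} f_{1} f_{2}},
        \]
        % i.e. the mechanical power m g v, measured against the Josephson and
        % quantum Hall effects (frequencies f1, f2), determines h directly.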

  19. An upper bound on right-chiral weak interactions

    International Nuclear Information System (INIS)

    Stephenson, G.J.; Goldman, T.; Maltman, K.

    1990-01-01

    Weak vertex corrections to the quark-gluon vertex functions produce differing form-factor corrections for quarks of differing chiralities. These differences grow with increasing four-momentum transfer in the gluon leg. Consequently, inclusive polarized proton--proton scattering to a final state jet should show a large parity-violating asymmetry at high energies. The absence of large signals at sufficiently high energies can be interpreted as being due to balancing vertex corrections from a right-handed weak vector boson of limited mass, and limits on the strength of such signals can, in principle, give upper bounds on that mass. 2 refs

  1. Diabetes and hypertension care among male prisoners in Mexico City: exploring transition of care and the equivalence principle.

    Science.gov (United States)

    Silverman-Retana, Omar; Servan-Mori, Edson; Lopez-Ridaura, Ruy; Bautista-Arredondo, Sergio

    2016-07-01

    To document the performance of diabetes and hypertension care in two large male prisons in Mexico City, we analyzed data from a cross-sectional study carried out during July-September 2010, including 496 prisoners with hypertension or diabetes in Mexico City. Bivariate and multivariable logistic regressions were used to assess process-of-care indicators and disease control status. Hypertension and diabetes prevalence were estimated at 2.1% and 1.4%, respectively. Among prisoners with diabetes, 22.7% (n = 62) had hypertension as a comorbidity. Low achievement of process-of-care indicators (follow-up visits, blood pressure and laboratory assessments) was observed during incarceration compared with the same prisoners in the year prior to incarceration. Compared with the nonimprisoned diabetes population from Mexico City and with the lowest quintile of socioeconomic status at the national level, prisoners with diabetes had the lowest performance on process-of-care indicators. Continuity of care for chronic diseases, coupled with the equivalence of care principle, should provide the basis for designing chronic disease health policy for prisoners, with the goal of a consistent transition of care from community to prison and vice versa.

  2. Classical and Weak Solutions for Two Models in Mathematical Finance

    Science.gov (United States)

    Gyulov, Tihomir B.; Valkov, Radoslav L.

    2011-12-01

    We study two mathematical models arising in financial mathematics. These models are one-dimensional analogues of the famous Black-Scholes equation on a finite interval. The main difficulty is the degeneracy at both ends of the space interval. First, classical solutions are studied. Positivity and convexity properties of the solutions are discussed. A variational formulation in weighted Sobolev spaces is introduced and existence and uniqueness of the weak solution are proved. A maximum principle for the weak solution is discussed.

  3. Fundamental symmetry tests with antihydrogen

    International Nuclear Information System (INIS)

    Hughes, R.J.

    1992-01-01

    The prospects for testing CPT invariance and the weak equivalence principle (WEP) for antimatter with spectroscopic measurements on antihydrogen are discussed. The potential precisions of these tests are compared with those from other measurements. The arguments involving energy conservation, the behavior of neutral kaons in a gravitational field and the equivalence principle for antiparticles are reviewed in detail

  4. Assaults by Mentally Disordered Offenders in Prison: Equity and Equivalence.

    Science.gov (United States)

    Hales, Heidi; Dixon, Amy; Newton, Zoe; Bartlett, Annie

    2016-06-01

    Managing the violent behaviour of mentally disordered offenders (MDO) is challenging in all jurisdictions. We describe the ethical framework and practical management of MDOs in England and Wales in the context of the move to equivalence of healthcare between hospital and prison. We consider the similarities and differences between prison and hospital management of the violent and challenging behaviours of MDOs. We argue that both types of institution can learn from each other and that equivalence of care should extend to equivalence of criminal proceedings in court and prisons for MDOs. We argue that any adjudication process in prison for MDOs is enhanced by the relevant involvement of mental health professionals and the articulation of the ethical principles underpinning health and criminal justice practices.

  5. Equivalent Circuit Modeling of Hysteresis Motors

    Energy Technology Data Exchange (ETDEWEB)

    Nitao, J J; Scharlemann, E T; Kirkendall, B A

    2009-08-31

    We performed a literature review and found that many equivalent circuit models of hysteresis motors in use today are incorrect. The model by Miyairi and Kataoka (1965) is the correct one. We extended the model by transforming it to quadrature coordinates, amenable to circuit or digital simulation. 'Hunting' is an oscillatory phenomenon often observed in hysteresis motors. While several works have attempted to model the phenomenon with some partial success, we present a new complete model that predicts hunting from first principles.

  6. Conditions for equivalence of statistical ensembles in nuclear multifragmentation

    International Nuclear Information System (INIS)

    Mallik, Swagata; Chaudhuri, Gargi

    2012-01-01

    Statistical models based on canonical and grand canonical ensembles are extensively used to study intermediate energy heavy-ion collisions. The underlying physical assumptions behind the canonical and grand canonical models are fundamentally different, and in principle the two agree only in the thermodynamic limit, when the number of particles becomes infinite. Nevertheless, we show that these models are equivalent in the sense that they predict similar results if certain conditions are met, even for finite nuclei. In particular, the results converge when nuclear multifragmentation leads to the formation of predominantly nucleons and low mass clusters. The conditions under which the equivalence holds are amenable to present day experiments.

  7. Progress in classical and quantum variational principles

    International Nuclear Information System (INIS)

    Gray, C G; Karl, G; Novikov, V A

    2004-01-01

    We review the development and practical uses of a generalized Maupertuis least action principle in classical mechanics in which the action is varied under the constraint of fixed mean energy for the trial trajectory. The original Maupertuis (Euler-Lagrange) principle constrains the energy at every point along the trajectory. The generalized Maupertuis principle is equivalent to Hamilton's principle. Reciprocal principles are also derived for both the generalized Maupertuis and the Hamilton principles. The reciprocal Maupertuis principle is the classical limit of Schroedinger's variational principle of wave mechanics and is also very useful to solve practical problems in both classical and semiclassical mechanics, in complete analogy with the quantum Rayleigh-Ritz method. Classical, semiclassical and quantum variational calculations are carried out for a number of systems, and the results are compared. Pedagogical as well as research problems are used as examples, which include nonconservative as well as relativistic systems. '... the most beautiful and important discovery of Mechanics.' Lagrange to Maupertuis (November 1756)

  8. Why is the correlation between gene importance and gene evolutionary rate so weak?

    Science.gov (United States)

    Wang, Zhi; Zhang, Jianzhi

    2009-01-01

    One of the few commonly believed principles of molecular evolution is that functionally more important genes (or DNA sequences) evolve more slowly than less important ones. This principle is widely used by molecular biologists in daily practice. However, recent genomic analysis of a diverse array of organisms found only weak, negative correlations between the evolutionary rate of a gene and its functional importance, typically measured under a single benign lab condition. A frequently suggested cause of the above finding is that gene importance determined in the lab differs from that in an organism's natural environment. Here, we test this hypothesis in yeast using gene importance values experimentally determined in 418 lab conditions or computationally predicted for 10,000 nutritional conditions. In no single condition or combination of conditions did we find a much stronger negative correlation, which is explainable by our subsequent finding that always-essential (enzyme) genes do not evolve significantly more slowly than sometimes-essential or always-nonessential ones. Furthermore, we verified that functional density, approximated by the fraction of amino acid sites within protein domains, is uncorrelated with gene importance. Thus, neither the lab-nature mismatch nor a potentially biased among-gene distribution of functional density explains the observed weakness of the correlation between gene importance and evolutionary rate. We conclude that the weakness is factual, rather than artifactual. In addition to being weakened by population genetic reasons, the correlation is likely to have been further weakened by the presence of multiple nontrivial rate determinants that are independent from gene importance. These findings notwithstanding, we show that the principle of slower evolution of more important genes does have some predictive power when genes with vastly different evolutionary rates are compared, explaining why the principle can be practically useful

  9. Major strengths and weaknesses of the lod score method.

    Science.gov (United States)

    Ott, J

    2001-01-01

    Strengths and weaknesses of the lod score method for human genetic linkage analysis are discussed. The main weakness is its requirement for the specification of a detailed inheritance model for the trait. Various strengths are identified. For example, the lod score (likelihood) method has optimality properties when the trait to be studied is known to follow a Mendelian mode of inheritance. The ELOD is a useful measure for information content of the data. The lod score method can emulate various "nonparametric" methods, and this emulation is equivalent to the nonparametric methods. Finally, the possibility of building errors into the analysis will prove to be essential for the large amount of linkage and disequilibrium data expected in the near future.

  10. Dynamical response of the Galileo Galilei on the ground rotor to test the equivalence principle: Theory, simulation, and experiment. II. The rejection of common mode forces

    International Nuclear Information System (INIS)

    Comandi, G.L.; Toncelli, R.; Chiofalo, M.L.; Bramanti, D.; Nobili, A.M.

    2006-01-01

    'Galileo Galilei on the ground' (GGG) is a fast rotating differential accelerometer designed to test the equivalence principle (EP). Its sensitivity to differential effects, such as the effect of an EP violation, depends crucially on the capability of the accelerometer to reject all effects acting in common mode. By applying the theoretical and simulation methods reported in Part I of this work, and tested therein against experimental data, we predict the occurrence of an enhanced common mode rejection of the GGG accelerometer. We demonstrate that the best rejection of common mode disturbances can be tuned in a controlled way by varying the spin frequency of the GGG rotor

  11. The four variational principles of mechanics

    International Nuclear Information System (INIS)

    Gray, C.G.; Karl, G.; Novikov, V.A.

    1996-01-01

    We argue that there are four basic forms of the variational principles of mechanics: Hamilton's least action principle (HP), the generalized Maupertuis principle (MP), and their two reciprocal principles, RHP and RMP. This set is invariant under reciprocity and Legendre transformations. One of these forms (HP) is in the literature: only special cases of the other three are known. The generalized MP has a weaker constraint compared to the traditional formulation: only the mean energy Ē is kept fixed between virtual paths. This reformulation of MP alleviates several weaknesses of the old version. The reciprocal Maupertuis principle (RMP) is the classical limit of Schroedinger's variational principle of quantum mechanics, and this connection emphasizes the importance of the reciprocity transformation for variational principles. Two unconstrained formulations (UHP and UMP) of these four principles are also proposed, with completely specified Lagrange multipliers. Percival's variational principle for invariant tori and variational principles for scattering orbits are derived from the RMP. The RMP is very convenient for approximate variational solutions to problems in mechanics using Ritz-type methods. Examples are provided. Copyright © 1996 Academic Press, Inc.

  12. Derivation of the blackbody radiation spectrum from the equivalence principle in classical physics with classical electromagnetic zero-point radiation

    International Nuclear Information System (INIS)

    Boyer, T.H.

    1984-01-01

    A derivation of Planck's spectrum including zero-point radiation is given within classical physics from recent results involving the thermal effects of acceleration through classical electromagnetic zero-point radiation. A harmonic electric-dipole oscillator undergoing a uniform acceleration a through classical electromagnetic zero-point radiation responds as would the same oscillator in an inertial frame when not in zero-point radiation but in a different spectrum of random classical radiation. Since the equivalence principle tells us that the oscillator supported in a gravitational field g = -a will respond in the same way, we see that in a gravitational field we can construct a perpetual-motion machine based on this different spectrum unless the different spectrum corresponds to that of thermal equilibrium at a finite temperature. Therefore, assuming the absence of perpetual-motion machines of the first kind in a gravitational field, we conclude that the response of an oscillator accelerating through classical zero-point radiation must be that of a thermal system. This then determines the blackbody radiation spectrum in an inertial frame which turns out to be exactly Planck's spectrum including zero-point radiation
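
    For reference, the spectrum that this argument arrives at can be written per unit volume and angular frequency as (standard form, supplied here for readability):

        \[
          \rho(\omega, T) \;=\; \frac{\omega^{2}}{\pi^{2} c^{3}}
          \left[ \frac{\hbar\omega}{e^{\hbar\omega/k_{B}T} - 1} + \frac{\hbar\omega}{2} \right],
        \]
        % Planck's law plus the temperature-independent zero-point energy
        % (hbar omega / 2 per normal mode).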

  13. BH3105 type neutron dose equivalent meter of high sensitivity

    International Nuclear Information System (INIS)

    Ji Changsong; Zhang Enshan; Yang Jianfeng; Zhang Hong; Huang Jiling

    1995-10-01

    It is noted that designing a neutron dose meter of high sensitivity is almost impossible within the frame of the traditional design principle, the 'absorption net principle'. Based on a newly proposed principle for obtaining the neutron dose equi-biological effect adjustment, the 'absorption stick principle', a brand-new neutron dose-equivalent meter with high neutron sensitivity, BH3105, has been developed. Its sensitivity reaches 10 cps/(μSv·h⁻¹), which is 18-40 times higher than that of foreign products of the same kind and 10⁴ times higher than that of the domestic FJ342 neutron rem-meter. BH3105 has a measurement range from 0.1 μSv/h to 1 Sv/h, which is 1 or 2 orders of magnitude wider than that of the others. It has advanced properties of gamma resistance, energy response, orientation, etc. (6 tabs., 5 figs.)

  14. Weak values in a classical theory with an epistemic restriction

    International Nuclear Information System (INIS)

    Karanjai, Angela; Cavalcanti, Eric G; Bartlett, Stephen D; Rudolph, Terry

    2015-01-01

    Weak measurement of a quantum system followed by postselection based on a subsequent strong measurement gives rise to a quantity called the weak value: a complex number whose interpretation has long been debated. We analyse the procedure of weak measurement and postselection, and the interpretation of the associated weak value, using a theory of classical mechanics supplemented by an epistemic restriction that is known to be operationally equivalent to a subtheory of quantum mechanics. Both the real and imaginary components of the weak value appear as phase space displacements in the postselected expectation values of the measurement device's position and momentum distributions, and we recover the same displacements as in the quantum case by studying the corresponding evolution in our theory of classical mechanics with an epistemic restriction. By using this epistemically restricted theory, we gain insight into the appearance of the weak value as a result of the statistical effects of postselection, and this provides us with an operational interpretation of the weak value, both its real and imaginary parts. We find that the imaginary part of the weak value is a measure of how much postselection biases the mean phase space distribution for a given amount of measurement disturbance. All such biases proportional to the imaginary part of the weak value vanish in the limit where disturbance due to measurement goes to zero. Our analysis also offers intuitive insight into how measurement disturbance can be minimized and the limits of weak measurement. (paper)
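
    For readers unfamiliar with the term, the weak value discussed above is conventionally defined as follows (standard definition, with |ψ⟩ the preselected and |φ⟩ the postselected state):

        \[
          A_{w} \;=\; \frac{\langle \phi \vert \hat{A} \vert \psi \rangle}{\langle \phi \vert \psi \rangle},
        \]
        % its real and imaginary parts show up, respectively, as position- and
        % momentum-type displacements of the measurement pointer after postselection.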

  15. How the Weak Variance of Momentum Can Turn Out to be Negative

    Science.gov (United States)

    Feyereisen, M. R.

    2015-05-01

    Weak values are average quantities, therefore investigating their associated variance is crucial in understanding their place in quantum mechanics. We develop the concept of a position-postselected weak variance of momentum as cohesively as possible, building primarily on material from Moyal (Mathematical Proceedings of the Cambridge Philosophical Society, Cambridge University Press, Cambridge, 1949) and Sonego (Found Phys 21(10):1135, 1991) . The weak variance is defined in terms of the Wigner function, using a standard construction from probability theory. We show this corresponds to a measurable quantity, which is not itself a weak value. It also leads naturally to a connection between the imaginary part of the weak value of momentum and the quantum potential. We study how the negativity of the Wigner function causes negative weak variances, and the implications this has on a class of `subquantum' theories. We also discuss the role of weak variances in studying determinism, deriving the classical limit from a variational principle.

  16. Measurement of weak radioactivity

    CERN Document Server

    Theodorsson , P

    1996-01-01

    This book is intended for scientists engaged in the measurement of weak alpha, beta, and gamma active samples; in health physics, environmental control, nuclear geophysics, tracer work, radiocarbon dating etc. It describes the underlying principles of radiation measurement and the detectors used. It also covers the sources of background, analyzes their effect on the detector and discusses economic ways to reduce the background. The most important types of low-level counting systems and the measurement of some of the more important radioisotopes are described here. In cases where more than one type can be used, the selection of the most suitable system is shown.

  17. Completely boundary-free minimum and maximum principles for neutron transport and their least-squares and Galerkin equivalents

    International Nuclear Information System (INIS)

    Ackroyd, R.T.

    1982-01-01

    Some minimum and maximum variational principles for even-parity neutron transport are reviewed and the corresponding principles for odd-parity transport are derived by a simple method to show why the essential boundary conditions associated with these maximum principles have to be imposed. The method also shows why both the essential and some of the natural boundary conditions associated with these minimum principles have to be imposed. These imposed boundary conditions for trial functions in the variational principles limit the choice of the finite element used to represent trial functions. The reasons for the boundary conditions imposed on the principles for even- and odd-parity transport point the way to a treatment of composite neutron transport, for which completely boundary-free maximum and minimum principles are derived from a functional identity. In general a trial function is used for each parity in the composite neutron transport, but this can be reduced to one without any boundary conditions having to be imposed. (author)

  18. An implementation of the maximum-caliber principle by replica-averaged time-resolved restrained simulations.

    Science.gov (United States)

    Capelli, Riccardo; Tiana, Guido; Camilloni, Carlo

    2018-05-14

    Inferential methods can be used to integrate experimental information and molecular simulations. The maximum entropy principle provides a framework for using equilibrium experimental data, and it has been shown that replica-averaged simulations, restrained using a static potential, are a practical and powerful implementation of such a principle. Here we show that replica-averaged simulations restrained using a time-dependent potential are equivalent to the principle of maximum caliber, the dynamic version of the principle of maximum entropy, and thus may allow us to integrate time-resolved data in molecular dynamics simulations. We provide an analytical proof of the equivalence as well as a computational validation making use of simple models and synthetic data. Some limitations and possible solutions are also discussed.

  19. HIGH-REDSHIFT SDSS QUASARS WITH WEAK EMISSION LINES

    International Nuclear Information System (INIS)

    Diamond-Stanic, Aleksandar M.; Fan Xiaohui; Jiang Linhua; Kim, J. Serena; Schmidt, Gary D.; Smith, Paul S.; Vestergaard, Marianne; Young, Jason E.; Brandt, W. N.; Shemmer, Ohad; Gibson, Robert R.; Schneider, Donald P.; Strauss, Michael A.; Shen Yue; Anderson, Scott F.; Carilli, Christopher L.; Richards, Gordon T.

    2009-01-01

    We identify a sample of 74 high-redshift quasars (z > 3) with weak emission lines from the Fifth Data Release of the Sloan Digital Sky Survey and present infrared, optical, and radio observations of a subsample of four objects at z > 4. These weak emission-line quasars (WLQs) constitute a prominent tail of the Lyα + N v equivalent width distribution, and we compare them to quasars with more typical emission-line properties and to low-redshift active galactic nuclei with weak/absent emission lines, namely BL Lac objects. We find that WLQs exhibit hot (T ∼ 1000 K) thermal dust emission and have rest-frame 0.1-5 μm spectral energy distributions that are quite similar to those of normal quasars. The variability, polarization, and radio properties of WLQs are also different from those of BL Lacs, making continuum boosting by a relativistic jet an unlikely physical interpretation. The most probable scenario for WLQs involves broad-line region properties that are physically distinct from those of normal quasars.

  20. Heisenberg's principle of uncertainty and the uncertainty relations

    International Nuclear Information System (INIS)

    Redei, Miklos

    1987-01-01

    The usual verbal form of the Heisenberg uncertainty principle and the usual mathematical formulation (the so-called uncertainty theorem) are not equivalent. The meaning of the concept 'uncertainty' is not unambiguous and different interpretations are used in the literature. Recently a renewed interest has appeared to reinterpret and reformulate the precise meaning of Heisenberg's principle and to find adequate mathematical form. The suggested new theorems are surveyed and critically analyzed. (D.Gy.) 20 refs

  1. The role of general relativity in the uncertainty principle

    International Nuclear Information System (INIS)

    Padmanabhan, T.

    1986-01-01

    The role played by general relativity in quantum mechanics (especially as regards the uncertainty principle) is investigated. It is confirmed that the validity of time-energy uncertainty does depend on gravitational time dilation. It is also shown that there exists an intrinsic lower bound to the accuracy with which acceleration due to gravity can be measured. The notion of the equivalence principle in quantum mechanics is clarified. (author)

  2. The design of high performance weak current integrated amplifier

    International Nuclear Information System (INIS)

    Chen Guojie; Cao Hui

    2005-01-01

    A design method for a high-performance weak current integrated amplifier using the ICL7650 operational amplifier is introduced. The operating principle of the circuits and the steps taken to improve the amplifier's performance are illustrated. Finally, the experimental results are given. The amplifier has a programmable measurement range of 10⁻⁹-10⁻¹² A, automatic zero correction, accurate measurement, and good stability. (authors)

  3. Extended Huygens-Fresnel principle and optical waves propagation in turbulence: discussion.

    Science.gov (United States)

    Charnotskii, Mikhail

    2015-07-01

    The extended Huygens-Fresnel principle (EHF) is currently the most common technique used in theoretical studies of optical propagation in turbulence. A recent review paper [J. Opt. Soc. Am. A 31, 2038 (2014), doi:10.1364/JOSAA.31.002038] cites several dozen papers that are based exclusively on the EHF principle. We revisit the foundations of the EHF and show that it is burdened by very restrictive assumptions that make it valid only under weak scintillation conditions. We compare the EHF to the less restrictive Markov approximation and show that both theories deliver identical results for the second moment of the field, rendering the EHF essentially worthless. For the fourth moment of the field, the EHF principle is accurate under weak scintillation conditions, but is known to provide erroneous results for strong scintillation conditions. In addition, since the EHF does not obey the energy conservation principle, its results cannot be accurate for scintillations of partially coherent beam waves.

  4. Testing the strong equivalence principle with the triple pulsar PSR J0337+1715

    Science.gov (United States)

    Shao, Lijing

    2016-04-01

    Three conceptually different masses appear in the equations of motion for objects under gravity, namely, the inertial mass m_I, the passive gravitational mass m_P, and the active gravitational mass m_A. It is assumed that, for any object, m_I = m_P = m_A in Newtonian gravity, and m_I = m_P in Einsteinian gravity, oblivious to the object's sophisticated internal structure. Empirical examination of the equivalence probes deep into gravity theories. We study the possibility of carrying out new tests based on pulsar timing of the stellar triple system PSR J0337+1715. Various machine-precision three-body simulations are performed, from which the equivalence-violating parameters are extracted with Markov chain Monte Carlo sampling that takes full correlations into account. We show that the difference in masses could be probed to 3 × 10⁻⁸, improving the current constraints from lunar laser ranging on the post-Newtonian parameters that govern violations of m_P = m_I and m_A = m_P by thousands and millions, respectively. The test of m_P = m_A would represent the first test of Newton's third law with compact objects.
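
    The quantity constrained by such a test is often parametrized as follows (usual notation, assumed here rather than quoted from the abstract):

        \[
          \Delta \;\equiv\; \left(\frac{m_{P}}{m_{I}}\right)_{\!\rm pulsar}
                        - \left(\frac{m_{P}}{m_{I}}\right)_{\!\rm companion},
          \qquad
          \delta\mathbf{a} \;=\; \Delta\,\mathbf{g}_{\rm ext},
        \]
        % a nonzero Delta makes the neutron star and the inner white dwarf fall
        % with slightly different accelerations toward the outer companion, which
        % pulsar timing of the inner orbit can detect.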

  5. Interpretation of Ukrainian and Polish Adverbial Word Equivalents Form and Meaning Interaction in National Explanatory Lexicography

    Directory of Open Access Journals (Sweden)

    Alla Luchyk

    2015-06-01

    The article proves the necessity and possibility of compiling dictionaries of glossary units with intermediate existence status, to which the word equivalents belong. In order to form the glossary of a Ukrainian-Polish dictionary of this type, a form and meaning analysis of Ukrainian and Polish word equivalents is carried out, the common and distinctive features of these language system elements are described, and the compiling principles of such a dictionary are clarified.

  6. From the Neutral Theory to a Comprehensive and Multiscale Theory of Ecological Equivalence.

    Science.gov (United States)

    Munoz, François; Huneman, Philippe

    2016-09-01

    The neutral theory of biodiversity assumes that coexisting organisms are equally able to survive, reproduce, and disperse (ecological equivalence), but predicts that stochastic fluctuations of these abilities drive diversity dynamics. It predicts remarkably well many biodiversity patterns, although substantial evidence for the role of niche variation across organisms seems contradictory. Here, we discuss this apparent paradox by exploring the meaning and implications of ecological equivalence. We address the question whether neutral theory provides an explanation for biodiversity patterns and acknowledges causal processes. We underline that ecological equivalence, although central to neutral theory, can emerge at local and regional scales from niche-based processes through equalizing and stabilizing mechanisms. Such emerging equivalence corresponds to a weak conception of neutral theory, as opposed to the assumption of strict equivalence at the individual level in strong conception. We show that this duality is related to diverging views on hypothesis testing and modeling in ecology. In addition, the stochastic dynamics exposed in neutral theory are pervasive in ecological systems and, rather than a null hypothesis, ecological equivalence is best understood as a parsimonious baseline to address biodiversity dynamics at multiple scales.

  7. Deep inelastic inclusive weak and electromagnetic interactions

    International Nuclear Information System (INIS)

    Adler, S.L.

    1976-01-01

    The theory of deep inelastic inclusive interactions is reviewed, emphasizing applications to electromagnetic and weak charged current processes. The following reactions are considered: e + N → e + X, ν + N → μ⁻ + X, ν̄ + N → μ⁺ + X, where X denotes a summation over all final state hadrons and the ν's are muon neutrinos. After a discussion of scaling, the quark-parton model is invoked to explain the principal experimental features of deep inelastic inclusive reactions.

  8. Quantum Groups, Property (T), and Weak Mixing

    Science.gov (United States)

    Brannan, Michael; Kerr, David

    2018-06-01

    For second countable discrete quantum groups, and more generally second countable locally compact quantum groups with trivial scaling group, we show that property (T) is equivalent to every weakly mixing unitary representation not having almost invariant vectors. This is a generalization of a theorem of Bekka and Valette from the group setting and was previously established in the case of low dual by Daws, Skalski, and Viselter. Our approach uses spectral techniques and is completely different from those of Bekka-Valette and Daws-Skalski-Viselter. By a separate argument we furthermore extend the result to second countable nonunimodular locally compact quantum groups, which are shown in particular not to have property (T), generalizing a theorem of Fima from the discrete setting. We also obtain quantum group versions of characterizations of property (T) of Kerr and Pichot in terms of the Baire category theory of weak mixing representations and of Connes and Weiss in terms of the prevalence of strongly ergodic actions.

  9. A study of principle and testing of piezoelectric transformer

    International Nuclear Information System (INIS)

    Liu Weiyue; Wang Yanfang; Huang Yihua; Shi Jun

    2002-01-01

    The operating principle and structure of a kind of piezoelectric transformer which can be used in a particle accelerator are investigated. The properties of the piezoelectric transformer are tested through an equivalent-circuit model combined with experiment.

  10. On organizing principles of discrete differential geometry. Geometry of spheres

    International Nuclear Information System (INIS)

    Bobenko, Alexander I; Suris, Yury B

    2007-01-01

    Discrete differential geometry aims to develop discrete equivalents of the geometric notions and methods of classical differential geometry. This survey contains a discussion of the following two fundamental discretization principles: the transformation group principle (smooth geometric objects and their discretizations are invariant with respect to the same transformation group) and the consistency principle (discretizations of smooth parametrized geometries can be extended to multidimensional consistent nets). The main concrete geometric problem treated here is discretization of curvature-line parametrized surfaces in Lie geometry. Systematic use of the discretization principles leads to a discretization of curvature-line parametrization which unifies circular and conical nets.

  11. Gluon Bremsstrahlung in Weakly-Coupled Plasmas

    International Nuclear Information System (INIS)

    Arnold, Peter

    2009-01-01

    I report on some theoretical progress concerning the calculation of gluon bremsstrahlung for very high energy particles crossing a weakly-coupled quark-gluon plasma. (i) I advertise that two of the several formalisms used to study this problem, the BDMPS-Zakharov formalism and the AMY formalism (the latter used only for infinite, uniform media), can be made equivalent when appropriately formulated. (ii) A standard technique to simplify calculations is to expand in inverse powers of logarithms ln(E/T). I give an example where such expansions are found to work well for ω/T ≥ 10, where ω is the bremsstrahlung gluon energy. (iii) Finally, I report on perturbative calculations of q̂.

  12. Unifying weak and electromagnetic forces in Weinberg-Salam theory

    International Nuclear Information System (INIS)

    Savoy, C.A.

    1978-01-01

    In this introduction to the ideas related to the unified gauge theories of the weak and electromagnetic interactions, we begin with the motivations for their basic principles. Then, the formalism is briefly developed, in particular the so-called Higgs mechanism. The advantages and the consequences of the (non-abelian) gauge invariance are emphasized, together with the experimental tests of the theory.

  13. Variational principle for the Bloch unified reaction theory

    International Nuclear Information System (INIS)

    MacDonald, W.; Rapheal, R.

    1975-01-01

    The unified reaction theory formulated by Claude Bloch uses a boundary value operator to write the Schroedinger equation for a scattering state as an inhomogeneous equation over the interaction region. As suggested by Lane and Robson, this equation can be solved by using a matrix representation on any set which is complete over the interaction volume. Lane and Robson have proposed, however, that a variational form of the Bloch equation can be used to obtain a ''best'' value for the S-matrix when a finite subset of this basis is used. The variational principle suggested by Lane and Robson, which gives a many-channel S-matrix different from the matrix solution on a finite basis, is considered first, and it is shown that the difference results from the fact that their variational principle is not, in fact, equivalent to the Bloch equation. Then a variational principle is presented which is fully equivalent to the Bloch form of the Schroedinger equation, and it is shown that the resulting S-matrix is the same as that obtained from the matrix solution of this equation. (U.S.)

  14. Variational principles for locally variational forms

    International Nuclear Information System (INIS)

    Brajercik, J.; Krupka, D.

    2005-01-01

    We present the theory of higher order local variational principles in fibered manifolds, in which the fundamental global concept is a locally variational dynamical form. Any two Lepage forms, defining a local variational principle for this form, differ on intersection of their domains, by a variationally trivial form. In this sense, but in a different geometric setting, the local variational principles satisfy analogous properties as the variational functionals of the Chern-Simons type. The resulting theory of extremals and symmetries extends the first order theories of the Lagrange-Souriau form, presented by Grigore and Popp, and closed equivalents of the first order Euler-Lagrange forms of Hakova and Krupkova. Conceptually, our approach differs from Prieto, who uses the Poincare-Cartan forms, which do not have higher order global analogues

  15. Reconstructing weak values without weak measurements

    International Nuclear Information System (INIS)

    Johansen, Lars M.

    2007-01-01

    I propose a scheme for reconstructing the weak value of an observable without the need for weak measurements. The post-selection in weak measurements is replaced by an initial projector measurement. The observable can be measured using any form of interaction, including projective measurements. The reconstruction is effected by measuring the change in the expectation value of the observable due to the projector measurement. The weak value may take nonclassical values if the projector measurement disturbs the expectation value of the observable
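    For orientation, the quantity being reconstructed is the standard weak value A_w = ⟨φ|A|ψ⟩/⟨φ|ψ⟩. The following minimal sketch (a made-up qubit example, not Johansen's reconstruction protocol itself) shows how nearly orthogonal pre- and post-selection drives the weak value outside the eigenvalue range:

```python
import numpy as np

# Pauli-z observable for a single qubit
A = np.array([[1, 0], [0, -1]], dtype=complex)

# Pre-selected state |psi> and post-selected state |phi>; the nearly
# orthogonal choice below amplifies the weak value (illustrative only).
psi = np.array([np.cos(0.1), np.sin(0.1)], dtype=complex)
phi = np.array([np.sin(0.2), -np.cos(0.2)], dtype=complex)

# Standard (generally complex) weak value A_w = <phi|A|psi> / <phi|psi>
A_w = np.vdot(phi, A @ psi) / np.vdot(phi, psi)
print("weak value:", A_w)   # ~2.96, outside the eigenvalue range [-1, 1]
```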

  16. On Resolution Complexity of Matching Principles

    DEFF Research Database (Denmark)

    Dantchev, Stefan S.

    proof system. The results in the thesis fall in this category. We study the Resolution complexity of some Matching Principles. The three major contributions of the thesis are as follows. Firstly, we develop a general technique of proving resolution lower bounds for the perfect matching principles based... Chessboard as well as for Tseitin tautologies based on rectangular grid graph. We reduce these problems to Tiling games, a concept introduced by us, which may be of interest on its own. Secondly, we find the exact Tree-Resolution complexity of the Weak Pigeon-Hole Principle. It is the most studied...

  17. The equivalent energy method: an engineering approach to fracture

    International Nuclear Information System (INIS)

    Witt, F.J.

    1981-01-01

    The equivalent energy method for elastic-plastic fracture evaluations was developed around 1970 for determining realistic engineering estimates for the maximum load-displacement or stress-strain conditions for fracture of flawed structures. The basic principles were summarized but the supporting experimental data, most of which were obtained after the method was proposed, have never been collated. This paper restates the original bases more explicitly and presents the validating data in graphical form. Extensive references are given. The volumetric energy ratio, a modelling parameter encompassing both size and temperature, is the fundamental parameter of the equivalent energy method. It is demonstrated that, in an engineering sense, the volumetric energy ratio is a unique material characteristic for a steel, much like a material property except that size must be taken into account. With this as a proposition, the basic formula of the equivalent energy method is derived. Sufficient information is presented so that investigators and analysts may judge the viability and applicability of the method to their areas of interest. (author)

  18. Designing sustainable concrete on the basis of equivalence performance: assessment criteria for safety

    NARCIS (Netherlands)

    Visser, J.H.M.; Bigaj, A.J.

    2014-01-01

    In order not to hamper innovation, the Dutch National Building Regulations (NBR) allow an alternative approval route for new building materials. It is based on the principle of equivalent performance, which states that if the solution proposed can be proven to have the same level of safety,

  19. Politico-economic equivalence

    DEFF Research Database (Denmark)

    Gonzalez Eiras, Martin; Niepelt, Dirk

    2015-01-01

    Traditional "economic equivalence'' results, like the Ricardian equivalence proposition, define equivalence classes over exogenous policies. We derive "politico-economic equivalence" conditions that apply in environments where policy is endogenous and chosen sequentially. A policy regime and a st......Traditional "economic equivalence'' results, like the Ricardian equivalence proposition, define equivalence classes over exogenous policies. We derive "politico-economic equivalence" conditions that apply in environments where policy is endogenous and chosen sequentially. A policy regime...... their use in the context of several applications, relating to social security reform, tax-smoothing policies and measures to correct externalities....

  20. Development of a superconducting position sensor for the Satellite Test of the Equivalence Principle

    Science.gov (United States)

    Clavier, Odile Helene

    The Satellite Test of the Equivalence Principle (STEP) is a joint NASA/ESA mission that proposes to measure the differential acceleration of two cylindrical test masses orbiting the earth in a drag-free satellite to a precision of 10⁻¹⁸ g. Such an experiment would conceptually reproduce Galileo's tower of Pisa experiment with a much longer time of fall and greatly reduced disturbances. The superconducting test masses are constrained in all degrees of freedom except their axial direction (the sensitive axis) using superconducting bearings. The STEP accelerometer measures the differential position of the masses in their sensitive direction using superconducting inductive pickup coils coupled to an extremely sensitive magnetometer called a DC-SQUID (Superconducting Quantum Interference Device). Position sensor development involves the design, manufacture and calibration of pickup coils that will meet the acceleration sensitivity requirement. Acceleration sensitivity depends on both the displacement sensitivity and stiffness of the position sensor. The stiffness must be kept small while maintaining stability of the accelerometer. Using a model for the inductance of the pickup coils versus displacement of the test masses, a computer simulation calculates the sensitivity and stiffness of the accelerometer in its axial direction. This simulation produced a design of pickup coils for the four STEP accelerometers. Manufacture of the pickup coils involves standard photolithography techniques modified for superconducting thin-films. A single-turn pickup coil was manufactured, producing a successful superconducting coil using thin-film niobium. A low-temperature apparatus was developed with a precision position sensor to measure the displacement of a superconducting plate (acting as a mock test mass) facing the coil. The position sensor was designed to detect five degrees of freedom so that coupling could be taken into account when measuring the translation of the plate
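    As a rough illustration of how an inductance-versus-displacement model yields both displacement sensitivity and magnetic stiffness, the sketch below evaluates a toy single-coil, flux-conserving model; all numbers are invented placeholders rather than STEP design values, and the real sensor couples several pickup coils to a DC-SQUID.

```python
import numpy as np

# Toy model: single superconducting pickup coil whose inductance varies with
# test-mass displacement x.  Values are illustrative only, not STEP numbers.
L0 = 1.0e-6          # coil inductance at x = 0 [H]
dLdx = 5.0e-4        # assumed inductance gradient [H/m]
Phi = 1.0e-9         # trapped flux in the superconducting loop [Wb]

def L(x):
    return L0 + dLdx * x

def energy(x):
    # Magnetic energy of a loop with conserved flux: E = Phi^2 / (2 L(x))
    return Phi**2 / (2.0 * L(x))

x = 0.0
h = 1.0e-9           # finite-difference step [m]

# Current responsivity dI/dx (I = Phi / L), the quantity read out by the SQUID
dIdx = (Phi / L(x + h) - Phi / L(x - h)) / (2 * h)

# Magnetic spring constant (stiffness) k = d^2 E / dx^2
k = (energy(x + h) - 2 * energy(x) + energy(x - h)) / h**2

print(f"current responsivity dI/dx = {dIdx:.3e} A/m")
print(f"magnetic stiffness k       = {k:.3e} N/m")
```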

  1. On Uniform Weak König's Lemma

    DEFF Research Database (Denmark)

    Kohlenbach, Ulrich

    2002-01-01

    The so-called weak König's lemma WKL asserts the existence of an infinite path b in any infinite binary tree (given by a representing function f). Based on this principle one can formulate subsystems of higher-order arithmetic which allow to carry out very substantial parts of classical mathematics... which, relative to PRA, implies the schema of Σ⁰₁-induction). In this setting one can consider also a uniform version UWKL of WKL which asserts the existence of a functional which selects uniformly in a given infinite binary tree f an infinite path f of that tree. This uniform version of WKL...

  2. What is correct: equivalent dose or dose equivalent

    International Nuclear Information System (INIS)

    Franic, Z.

    1994-01-01

    In the Croatian language some physical quantities in radiation protection dosimetry do not have precise names. Consequently, in practice either the English terms or mathematical formulas are used. The situation is made worse by the fact that only a limited number of textbooks, reference books and other papers are available in Croatian. This paper compares the concept of ''dose equivalent'' as outlined in International Commission on Radiological Protection (ICRP) recommendations No. 26 with the newer, conceptually different concept of ''equivalent dose'' introduced in ICRP 60. It was found that Croatian terminology is neither uniform nor precise. Under the influence of the Russian and Serbian languages, the term ''equivalent dose'' was often used for ''dose equivalent'', which was not justified even from the point of view of the ICRP 26 recommendations. Unfortunately, even now the legal quantity in Croatia is still the ''dose equivalent'' as defined in ICRP 26, but the term used for it is ''equivalent dose''. Therefore, a modified set of quantities, introduced in ICRP 60, should be incorporated into Croatian legislation as soon as possible.

  3. Hypernuclear weak decay puzzle

    International Nuclear Information System (INIS)

    Barbero, C.; Horvat, D.; Narancic, Z.; Krmpotic, F.; Kuo, T.T.S.; Tadic, D.

    2002-01-01

    A general shell model formalism for the nonmesonic weak decay of the hypernuclei has been developed. It involves a partial wave expansion of the emitted nucleon waves, preserves naturally the antisymmetrization between the escaping particles and the residual core, and contains as a particular case the weak Λ-core coupling formalism. The extreme particle-hole model and the quasiparticle Tamm-Dancoff approximation are explicitly worked out. It is shown that the nuclear structure manifests itself basically through the Pauli principle, and a very simple expression is derived for the neutron- and proton-induced decay rates Γ_n and Γ_p, which does not involve the spectroscopic factors. We use the standard strangeness-changing weak ΛN→NN transition potential which comprises the exchange of the complete pseudoscalar and vector meson octets (π, η, K, ρ, ω, K*), taking into account some important parity-violating transition operators that are systematically omitted in the literature. The interplay between different mesons in the decay of ^12_Λ C is carefully analyzed. With the commonly used parametrization in the one-meson-exchange model (OMEM), the calculated rate Γ_NM = Γ_n + Γ_p is of the order of the free Λ decay rate Γ_0 (Γ_NM^th ≅ Γ_0) and is consistent with experiments. Yet the measurements of Γ_{n/p} = Γ_n/Γ_p and of Γ_p are not well accounted for by the theory (Γ_{n/p}^th < …, Γ_p^th ≳ 0.60 Γ_0). It is suggested that, unless additional degrees of freedom are incorporated, the OMEM parameters should be radically modified.

  4. Equivalence between the Energy Stable Flux Reconstruction and Filtered Discontinuous Galerkin Schemes

    Science.gov (United States)

    Zwanenburg, Philip; Nadarajah, Siva

    2016-02-01

    The aim of this paper is to demonstrate the equivalence between filtered Discontinuous Galerkin (DG) schemes and the Energy Stable Flux Reconstruction (ESFR) schemes, expanding on previous demonstrations in 1D [1] and for straight-sided elements in 3D [2]. We first derive the DG and ESFR schemes in strong form and compare the respective flux penalization terms while highlighting the implications of the fundamental assumptions for stability in the ESFR formulations, notably that all ESFR scheme correction fields can be interpreted as modally filtered DG correction fields. We present the result in the general context of all higher dimensional curvilinear element formulations. Through a demonstration that there exists a weak form of the ESFR schemes which is both discretely and analytically equivalent to the strong form, we then extend the results obtained for the strong formulations to demonstrate that ESFR schemes can be interpreted as a DG scheme in weak form where discontinuous edge flux is substituted for numerical edge flux correction. Theoretical derivations are then verified with numerical results obtained from a 2D Euler testcase with curved boundaries. Given the current choice of high-order DG-type schemes and the question as to which might be best to use for a specific application, the main significance of this work is the bridge that it provides between them. Clearly outlining the similarities between the schemes results in the important conclusion that it is always less efficient to use ESFR schemes, as opposed to the weak DG scheme, when solving problems implicitly.

  5. A Brief Talk of Functional Equivalence Used in Chinese Translation of English Lyrics

    Institute of Scientific and Technical Information of China (English)

    吴晨

    2015-01-01

    With the development of cultural exchanges between China and foreign countries, a great number of English songs, serving as one important part of cultural exchanges, have become an important part of Chinese people's daily life. However, barriers are often encountered in translating those lyrics into Chinese. Thus, how to realize functional equivalence between a Chinese translation and the English song lyrics is a tough problem that cannot be neglected. By looking at Chinese translations of English song lyrics, this paper studies how functional equivalence can be applied to solve such problems, including the principles for producing functional equivalence and adjustment. The key to realizing functional equivalence in the Chinese translation of English song lyrics is to balance rhythm and tones with the style of the English songs and to minimize the loss of meaning in the Chinese translation, so that Chinese music lovers can understand the meaning of the English lyrics.

  6. Thermal versus high pressure processing of carrots: A comparative pilot-scale study on equivalent basis

    NARCIS (Netherlands)

    Vervoort, L.; Plancken, Van der L.; Grauwet, T.; Verlinde, P.; Matser, A.M.; Hendrickx, M.; Loey, van A.

    2012-01-01

    This report describes the first study comparing different high pressure (HP) and thermal treatments at intensities ranging from mild pasteurization to sterilization conditions. To allow a fair comparison, the processing conditions were selected based on the principles of equivalence. Moreover,

  7. Framework for assessing causality in disease management programs: principles.

    Science.gov (United States)

    Wilson, Thomas; MacDowell, Martin

    2003-01-01

    To credibly state that a disease management (DM) program "caused" a specific outcome it is required that metrics observed in the DM population be compared with metrics that would have been expected in the absence of a DM intervention. That requirement can be very difficult to achieve, and epidemiologists and others have developed guiding principles of causality by which credible estimates of DM impact can be made. This paper introduces those key principles. First, DM program metrics must be compared with metrics from a "reference population." This population should be "equivalent" to the DM intervention population on all factors that could independently impact the outcome. In addition, the metrics used in both groups should use the same defining criteria (ie, they must be "comparable" to each other). The degree to which these populations fulfill the "equivalent" assumption and metrics fulfill the "comparability" assumption should be stated. Second, when "equivalence" or "comparability" is not achieved, the DM managers should acknowledge this fact and, where possible, "control" for those factors that may impact the outcome(s). Finally, it is highly unlikely that one study will provide definitive proof of any specific DM program value for all time; thus, we strongly recommend that studies be ongoing, at multiple points in time, and at multiple sites, and, when observational study designs are employed, that more than one type of study design be utilized. Methodologically sophisticated studies that follow these "principles of causality" will greatly enhance the reputation of the important and growing efforts in DM.

  8. Matter tensor from the Hilbert variational principle

    International Nuclear Information System (INIS)

    Pandres, D. Jr.

    1976-01-01

    We consider the Hilbert variational principle which is conventionally used to derive Einstein's equations for the source-free gravitational field. We show that at least one version of the equivalence principle suggests an alternative way of performing the variation, resulting in a different set of Einstein equations with sources automatically present. This illustrates a technique which may be applied to any theory that is derived from a variational principle and that admits a gauge group. The essential point is that, if one first imposes a gauge condition and then performs the variation, one obtains field equations with source terms which do not appear if one first performs the variation and then imposes the gauge condition. A second illustration is provided by the variational principle conventionally used to derive Maxwell's equations for the source-free electromagnetic field. If one first imposes the Lorentz gauge condition and then performs the variation, one obtains Maxwell's equations with sources present

  9. Twenty-five years of maximum-entropy principle

    Science.gov (United States)

    Kapur, J. N.

    1983-04-01

    The strengths and weaknesses of the maximum entropy principle (MEP) are examined and some challenging problems that remain outstanding at the end of the first quarter century of the principle are discussed. The original formalism of the MEP is presented and its relationship to statistical mechanics is set forth. The use of MEP for characterizing statistical distributions, in statistical inference, nonlinear spectral analysis, transportation models, population density models, models for brand-switching in marketing and vote-switching in elections is discussed. Its application to finance, insurance, image reconstruction, pattern recognition, operations research and engineering, biology and medicine, and nonparametric density estimation is considered.

  10. Equivalent Modeling of DFIG-Based Wind Power Plant Considering Crowbar Protection

    Directory of Open Access Journals (Sweden)

    Qianlong Zhu

    2016-01-01

    Full Text Available Crowbar conduction has an impact on the transient characteristics of a doubly fed induction generator (DFIG in the short-circuit fault condition. But crowbar protection is seldom considered in the aggregation method for equivalent modeling of DFIG-based wind power plants (WPPs. In this paper, the relationship between the growth of postfault rotor current and the amplitude of the terminal voltage dip is studied by analyzing the rotor current characteristics of a DFIG during the fault process. Then, a terminal voltage dip criterion which can identify crowbar conduction is proposed. Considering the different grid connection structures for single DFIG and WPP, the criterion is revised and the crowbar conduction is judged depending on the revised criterion. Furthermore, an aggregation model of the WPP is established based on the division principle of crowbar conduction. Finally, the proposed equivalent WPP is simulated on a DIgSILENT PowerFactory platform and the results are compared with those of the traditional equivalent WPPs and the detailed WPP. The simulation results show the effectiveness of the method for equivalent modeling of DFIG-based WPP when crowbar protection is also taken into account.
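    A minimal sketch of the division idea is given below; the 0.55 p.u. dip threshold and the turbine data are hypothetical placeholders, not the revised criterion derived in the paper.

```python
# Minimal sketch of the clustering idea: divide the DFIGs into "crowbar fires" /
# "crowbar does not fire" groups from their terminal voltage dips, then build
# one aggregated machine per group.  The 0.55 p.u. dip threshold and the
# turbine data are hypothetical placeholders, not the paper's revised criterion.
turbines = [
    # (name, pre-fault voltage [p.u.], fault voltage [p.u.], rated power [MW])
    ("WT01", 1.00, 0.30, 1.5),
    ("WT02", 1.00, 0.42, 1.5),
    ("WT03", 0.98, 0.55, 1.5),
    ("WT04", 0.97, 0.70, 1.5),
]

DIP_THRESHOLD = 0.55   # assumed critical voltage dip for crowbar conduction

def crowbar_fires(v_pre: float, v_fault: float) -> bool:
    """Assume the crowbar conducts when the terminal voltage dip exceeds the threshold."""
    return (v_pre - v_fault) > DIP_THRESHOLD

groups = {"crowbar": [], "no_crowbar": []}
for name, v_pre, v_fault, p_mw in turbines:
    key = "crowbar" if crowbar_fires(v_pre, v_fault) else "no_crowbar"
    groups[key].append((name, p_mw))

for key, members in groups.items():
    if members:
        total_mw = sum(p for _, p in members)
        names = [n for n, _ in members]
        print(f"{key}: {names} -> one equivalent DFIG rated {total_mw} MW")
```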

  11. Gravitational leptogenesis, C, CP and strong equivalence

    International Nuclear Information System (INIS)

    McDonald, Jamie I.; Shore, Graham M.

    2015-01-01

    The origin of matter-antimatter asymmetry is one of the most important outstanding problems at the interface of particle physics and cosmology. Gravitational leptogenesis (baryogenesis) provides a possible mechanism through explicit couplings of spacetime curvature to appropriate lepton (or baryon) currents. In this paper, the idea that these strong equivalence principle violating interactions could be generated automatically through quantum loop effects in curved spacetime is explored, focusing on the realisation of the discrete symmetries C, CP and CPT which must be broken to induce matter-antimatter asymmetry. The related issue of quantum corrections to the dispersion relation for neutrino propagation in curved spacetime is considered within a fully covariant framework.

  12. Gravitational leptogenesis, C, CP and strong equivalence

    Energy Technology Data Exchange (ETDEWEB)

    McDonald, Jamie I.; Shore, Graham M. [Department of Physics, Swansea University,Swansea, SA2 8PP (United Kingdom)

    2015-02-12

    The origin of matter-antimatter asymmetry is one of the most important outstanding problems at the interface of particle physics and cosmology. Gravitational leptogenesis (baryogenesis) provides a possible mechanism through explicit couplings of spacetime curvature to appropriate lepton (or baryon) currents. In this paper, the idea that these strong equivalence principle violating interactions could be generated automatically through quantum loop effects in curved spacetime is explored, focusing on the realisation of the discrete symmetries C, CP and CPT which must be broken to induce matter-antimatter asymmetry. The related issue of quantum corrections to the dispersion relation for neutrino propagation in curved spacetime is considered within a fully covariant framework.

  13. PLASMA EMISSION BY WEAK TURBULENCE PROCESSES

    Energy Technology Data Exchange (ETDEWEB)

    Ziebell, L. F.; Gaelzer, R. [Instituto de Física, UFRGS, Porto Alegre, RS (Brazil); Yoon, P. H. [Institute for Physical Science and Technology, University of Maryland, College Park, MD (United States); Pavan, J., E-mail: luiz.ziebell@ufrgs.br, E-mail: rudi.gaelzer@ufrgs.br, E-mail: yoonp@umd.edu, E-mail: joel.pavan@ufpel.edu.br [Instituto de Física e Matemática, UFPel, Pelotas, RS (Brazil)

    2014-11-10

    The plasma emission is the radiation mechanism responsible for solar type II and type III radio bursts. The first theory of plasma emission was put forth in the 1950s, but the rigorous demonstration of the process based upon first principles had been lacking. The present Letter reports the first complete numerical solution of electromagnetic weak turbulence equations. It is shown that the fundamental emission is dominant and unless the beam speed is substantially higher than the electron thermal speed, the harmonic emission is not likely to be generated. The present findings may be useful for validating reduced models and for interpreting particle-in-cell simulations.

  14. Experimental studies of gravitation and feebler forces

    International Nuclear Information System (INIS)

    Cowsik, R.

    1993-01-01

    The theoretical motivations and the experimental context pertaining to the recent experimental studies of the Weak Equivalence Principle and the "Fifth Force" are reviewed briefly. With such a backdrop, the innovative design and the technical details of the several new experiments in this area are presented, with a special emphasis on the experiments underway at Gauribidanur, situated on the Deccan Plateau. These experiments jointly rule out the existence of any new forces coupling to baryon or lepton number with a coupling greater than about 10⁻⁴ of gravitation per a.m.u. at ranges of about 0.5 m and longer. In a few years the author hopes to test the weak equivalence principle with sensitivity exceeding 10⁻¹³.

  15. Modelling of Airship Flight Mechanics by the Projection Equivalent Method

    OpenAIRE

    Frantisek Jelenciak; Michael Gerke; Ulrich Borgolte

    2015-01-01

    This article describes the projection equivalent method (PEM) as a specific and relatively simple approach for the modelling of aircraft dynamics. By the PEM it is possible to obtain a mathematical model of the aerodynamic forces and moments acting on different kinds of aircraft during flight. A characteristic of the PEM is that, in principle, it provides an acceptable regression model of aerodynamic forces and moments which exhibits reasonable and plausible behaviour from a...

  16. The one-shot deviation principle for sequential rationality

    DEFF Research Database (Denmark)

    Hendon, Ebbe; Whitta-Jacobsen, Hans Jørgen; Sloth, Birgitte

    1996-01-01

    We present a decentralization result which is useful for practical and theoretical work with sequential equilibrium, perfect Bayesian equilibrium, and related equilibrium concepts for extensive form games. A weak consistency condition is sufficient to obtain an analogy to the well known One-Stage-Deviation Principle for subgame perfect equilibrium

  17. A new mathematical formulation of the line-by-line method in case of weak line overlapping

    Science.gov (United States)

    Ishov, Alexander G.; Krymova, Natalie V.

    1994-01-01

    A rigorous mathematical proof is presented for the multiline representation of the equivalent width of a molecular band which consists in the general case of n overlapping spectral lines. The multiline representation includes a principal term and terms of minor significance. The principal term is the equivalent width of the molecular band consisting of the same n nonoverlapping spectral lines. The terms of minor significance take into consideration the overlapping of two, three and more spectral lines. They are small in the case of weak overlapping of spectral lines in the molecular band. The multiline representation can be easily generalized for optically inhomogeneous gas media and holds true for combinations of molecular bands. If the band lines overlap weakly, the standard formulation of the line-by-line method becomes too labor-consuming. In this case the multiline representation permits line-by-line calculations to be performed more effectively. Other useful properties of the multiline representation are pointed out.
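    The sketch below illustrates the principal term numerically for a toy band of three Lorentzian lines: when the lines overlap weakly, the exact band equivalent width stays close to the sum of the single-line equivalent widths. All line parameters are arbitrary.

```python
import numpy as np

# Illustrative check of the "principal term": for weakly overlapping lines the
# band equivalent width is close to the sum of single-line equivalent widths.
# Line positions, strengths and widths below are arbitrary toy values.
nu = np.linspace(0.0, 100.0, 200001)          # frequency grid (arbitrary units)
dnu = nu[1] - nu[0]
centers   = [20.0, 45.0, 72.0]                # line centers
strengths = [1.5, 1.0, 2.0]                   # integrated line strengths
gamma = 0.5                                   # Lorentz half-width

def tau_line(nu, nu0, S):
    """Optical depth profile of a single Lorentzian line."""
    return S * gamma / np.pi / ((nu - nu0) ** 2 + gamma ** 2)

def eq_width(tau):
    """Equivalent width W = integral of (1 - exp(-tau)) over the band."""
    return np.sum(1.0 - np.exp(-tau)) * dnu

tau_total = sum(tau_line(nu, nu0, S) for nu0, S in zip(centers, strengths))

W_band = eq_width(tau_total)                                  # exact band value
W_sum = sum(eq_width(tau_line(nu, nu0, S))                    # principal term
            for nu0, S in zip(centers, strengths))

print(f"band equivalent width          = {W_band:.4f}")
print(f"principal (nonoverlapping) sum = {W_sum:.4f}")
print(f"overlap correction             = {W_sum - W_band:.2e}")
```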

  18. Principles of Economic Rationality in Mice.

    Science.gov (United States)

    Rivalan, Marion; Winter, York; Nachev, Vladislav

    2017-12-12

    Humans and non-human animals frequently violate principles of economic rationality, such as transitivity, independence of irrelevant alternatives, and regularity. The conditions that lead to these violations are not completely understood. Here we report a study on mice tested in automated home-cage setups using rewards of drinking water. Rewards differed in one of two dimensions, volume or probability. Our results suggest that mouse choice conforms to the principles of economic rationality for options that differ along a single reward dimension. A psychometric analysis of mouse choices further revealed that mice responded more strongly to differences in probability than to differences in volume, despite equivalence in return rates. This study also demonstrates the synergistic effect between the principles of economic rationality and psychophysics in making quantitative predictions about choices of healthy laboratory mice. This opens up new possibilities for the analyses of multi-dimensional choice and the use of mice with cognitive impairments that may violate economic rationality.

  19. Statistical analogues of thermodynamic extremum principles

    Science.gov (United States)

    Ramshaw, John D.

    2018-05-01

    As shown by Jaynes, the canonical and grand canonical probability distributions of equilibrium statistical mechanics can be simply derived from the principle of maximum entropy, in which the statistical entropy S = -k_B Σ_i p_i log p_i is maximised subject to constraints on the mean values of the energy E and/or number of particles N in a system of fixed volume V. The Lagrange multipliers associated with those constraints are then found to be simply related to the temperature T and chemical potential μ. Here we show that the constrained maximisation of S is equivalent to, and can therefore be replaced by, the essentially unconstrained minimisation of the obvious statistical analogues of the Helmholtz free energy F = E - TS and the grand potential J = F - μN. Those minimisations are more easily performed than the maximisation of S because they formally eliminate the constraints on the mean values of E and N and their associated Lagrange multipliers. This procedure significantly simplifies the derivation of the canonical and grand canonical probability distributions, and shows that the well known extremum principles for the various thermodynamic potentials possess natural statistical analogues which are equivalent to the constrained maximisation of S.
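    The equivalence can be checked numerically on a toy spectrum: unconstrained minimisation of the statistical analogue F[p] = Σ_i p_i E_i - T S[p] over the probability simplex reproduces the Boltzmann distribution obtained from constrained entropy maximisation. The energy levels and temperature below are arbitrary.

```python
import numpy as np
from scipy.optimize import minimize

# Toy check: minimising the statistical analogue F[p] = sum_i p_i E_i - T S[p],
# with S[p] = -sum_i p_i ln p_i, reproduces the canonical (Boltzmann)
# distribution obtained from constrained entropy maximisation.
E = np.array([0.0, 1.0, 2.0, 5.0])   # arbitrary toy energy levels
T = 1.3                              # temperature (k_B = 1)

def F(theta):
    # Parametrise p on the simplex via a softmax, so the minimisation is unconstrained.
    p = np.exp(theta - theta.max())
    p /= p.sum()
    entropy = -np.sum(p * np.log(p))
    return np.sum(p * E) - T * entropy

res = minimize(F, x0=np.zeros_like(E), method="Nelder-Mead",
               options={"xatol": 1e-10, "fatol": 1e-12, "maxiter": 20000})
p_min = np.exp(res.x - res.x.max())
p_min /= p_min.sum()

p_boltz = np.exp(-E / T)
p_boltz /= p_boltz.sum()

print("free-energy minimiser :", np.round(p_min, 4))
print("Boltzmann distribution:", np.round(p_boltz, 4))
```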

  20. Detecting overpressure using the Eaton and Equivalent Depth methods in Offshore Nova Scotia, Canada

    Science.gov (United States)

    Ernanda; Primasty, A. Q. T.; Akbar, K. A.

    2018-03-01

    Overpressure is an abnormally high subsurface pressure of any fluid which exceeds the hydrostatic pressure of a column of water or formation brine. In Offshore Nova Scotia, Canada, the values and depth of the overpressure zone are determined using the Eaton and equivalent depth methods, based on well data and normal compaction trend analysis. The equivalent depth method uses the effective vertical stress principle, whereas the Eaton method considers a physical property ratio (velocity). In this research, pressure evaluation was applicable only to the Penobscot L-30 well. An abnormal pressure is detected at a depth of 11804 feet as a possible overpressure zone, based on the pressure gradient curve and the calculations from the Eaton method (7241.3 psi) and the equivalent depth method (6619.4 psi). Shales within the Abenaki Formation, especially the Baccaro Member, are estimated to be a possible overpressure zone due to a hydrocarbon generation mechanism.
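    For reference, a minimal sketch of the commonly used Eaton sonic relation is given below; the exponent 3 is the usual default, and every input value is a placeholder rather than the Penobscot L-30 data.

```python
# Minimal sketch of the Eaton sonic (velocity) method for pore-pressure
# prediction.  The exponent 3 is the commonly used default, and every input
# value below is a placeholder -- these are not the Penobscot L-30 well data.
def eaton_pore_pressure(overburden_psi, hydrostatic_psi,
                        dt_normal_us_ft, dt_observed_us_ft, exponent=3.0):
    """Eaton's relation: P = S - (S - P_hyd) * (dt_normal / dt_observed)^n."""
    ratio = dt_normal_us_ft / dt_observed_us_ft
    return overburden_psi - (overburden_psi - hydrostatic_psi) * ratio ** exponent

depth_ft = 11804.0
overburden = 1.0 * depth_ft          # assumed 1.0 psi/ft overburden gradient
hydrostatic = 0.465 * depth_ft       # assumed 0.465 psi/ft brine gradient

# Observed sonic slowness above the normal-compaction trend signals undercompaction.
p = eaton_pore_pressure(overburden, hydrostatic,
                        dt_normal_us_ft=80.0, dt_observed_us_ft=95.0)
print(f"estimated pore pressure at {depth_ft:.0f} ft: {p:.0f} psi")
```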

  1. Evaluating the Quality of Transfer versus Nontransfer Accounting Principles Grades.

    Science.gov (United States)

    Colley, J. R.; And Others

    1996-01-01

    Using 1989-92 student records from three colleges accepting large numbers of transfers from junior schools into accounting, regression analyses compared grades of transfer and nontransfer students. Quality of accounting principle grades of transfer students was not equivalent to that of nontransfer students. (SK)

  2. Dependence on age at intake of committed dose equivalents from radionuclides

    International Nuclear Information System (INIS)

    Adams, N.

    1981-01-01

    The dependence of committed dose equivalents on age at intake is needed to assess the significance of exposures of young persons among the general public resulting from inhaled or ingested radionuclides. The committed dose equivalents, evaluated using ICRP principles, depend on the body dimensions of the young person at the time of intake of a radionuclide and on subsequent body growth. Representation of growth by a series of exponential segments facilitates the derivation of general expressions for the age dependence of committed dose equivalents if metabolic models do not change with age. The additional assumption that intakes of radionuclides in air or food are proportional to a person's energy expenditure (implying age-independent dietary composition) enables the demonstration that the age of the most highly exposed 'critical groups' of the general public from these radionuclides is either about 1 year or 17 years. With the above assumptions the exposure of the critical group is less than three times the exposure of adult members of the general public. Approximate values of committed dose equivalents which avoid both underestimation and excessive overestimation are shown to be obtainable by simplified procedures. Modified procedures are suggested for use if metabolic models change with age. (author)

  3. VizieR Online Data Catalog: Blazars equivalent widths and radio luminosity (Landt+, 2004)

    Science.gov (United States)

    Landt, H.; Padovani, P.; Perlman, E. S.; Giommi, P.

    2004-07-01

    Blazars are currently separated into BL Lacertae objects (BL Lacs) and flat spectrum radio quasars based on the strength of their emission lines. This is performed rather arbitrarily by defining a diagonal line in the Ca H&K break value - equivalent width plane, following Marcha et al. (1996MNRAS.281..425M). We readdress this problem and put the classification scheme for blazars on firm physical grounds. We study ~100 blazars and radio galaxies from the Deep X-ray Radio Blazar Survey (DXRBS, Cat. and ) and the 2-Jy radio survey and find a significant bimodality for the narrow emission line [OIII]λ5007. This suggests the presence of two physically distinct classes of radio-loud active galactic nuclei (AGN). We show that all radio-loud AGN, blazars and radio galaxies, can be effectively separated into weak- and strong-lined sources using the [OIII]λ5007-[OII]λ3727 equivalent width plane. This plane allows one to disentangle orientation effects from intrinsic variations in radio-loud AGN. Based on DXRBS, the strongly beamed sources of the new class of weak-lined radio-loud AGN are made up of BL Lacs at the ~75 per cent level, whereas those of the strong-lined radio-loud AGN include mostly (~97 per cent) quasars. (4 data files).

  4. Symmetry Principles and Conservation Laws in Atomic and ...

    Indian Academy of Sciences (India)

    Symmetry Principles and Conservation Laws in Atomic and Subatomic Physics – 2. P C Deshmukh ... indicated that parity conservation, though often assumed, had not been verified in weak interactions. Acting on ... The gauge bosons W± have a charge of +1 and −1 unit, but the Z⁰ boson of the standard model is neutral.

  5. Equivalence between short-time biphasic and incompressible elastic material responses.

    Science.gov (United States)

    Ateshian, Gerard A; Ellis, Benjamin J; Weiss, Jeffrey A

    2007-06-01

    Porous-permeable tissues have often been modeled using porous media theories such as the biphasic theory. This study examines the equivalence of the short-time biphasic and incompressible elastic responses for arbitrary deformations and constitutive relations from first principles. This equivalence is illustrated in problems of unconfined compression of a disk, and of articular contact under finite deformation, using two different constitutive relations for the solid matrix of cartilage, one of which accounts for the large disparity observed between the tensile and compressive moduli in this tissue. Demonstrating this equivalence under general conditions provides a rationale for using available finite element codes for incompressible elastic materials as a practical substitute for biphasic analyses, so long as only the short-time biphasic response is sought. In practice, an incompressible elastic analysis is representative of a biphasic analysis over the short-term response Δt < …, where the elided expression involves the elasticity tensor, and K is the hydraulic permeability tensor of the solid matrix. Certain notes of caution are provided with regard to implementation issues, particularly when finite element formulations of incompressible elasticity employ an uncoupled strain energy function consisting of additive deviatoric and volumetric components.

  6. Equity-regarding poverty measures: differences in needs and the role of equivalence scales

    OpenAIRE

    Udo Ebert

    2010-01-01

    The paper investigates the definition of equity-regarding poverty measures when there are different household types in the population. It demonstrates the implications of a between-type regressive transfer principle for poverty measures, for the choice of poverty lines, and for the measurement of living standard. The role of equivalence scales, which are popular in empirical work on poverty measurement, is clarified.

  7. Maximum principles for boundary-degenerate second-order linear elliptic differential operators

    OpenAIRE

    Feehan, Paul M. N.

    2012-01-01

    We prove weak and strong maximum principles, including a Hopf lemma, for smooth subsolutions to equations defined by linear, second-order, partial differential operators whose principal symbols vanish along a portion of the domain boundary. The boundary regularity property of the smooth subsolutions along this boundary vanishing locus ensures that these maximum principles hold irrespective of the sign of the Fichera function. Boundary conditions need only be prescribed on the complement in th...

  8. Weak measurements and quantum weak values for NOON states

    Science.gov (United States)

    Rosales-Zárate, L.; Opanchuk, B.; Reid, M. D.

    2018-03-01

    Quantum weak values arise when the mean outcome of a weak measurement made on certain preselected and postselected quantum systems goes beyond the eigenvalue range for a quantum observable. Here, we propose how to determine quantum weak values for superpositions of states with a macroscopically or mesoscopically distinct mode number, that might be realized as two-mode Bose-Einstein condensate or photonic NOON states. Specifically, we give a model for a weak measurement of the Schwinger spin of a two-mode NOON state, for arbitrary N . The weak measurement arises from a nondestructive measurement of the two-mode occupation number difference, which for atomic NOON states might be realized via phase contrast imaging and the ac Stark effect using an optical meter prepared in a coherent state. The meter-system coupling results in an entangled cat-state. By subsequently evolving the system under the action of a nonlinear Josephson Hamiltonian, we show how postselection leads to quantum weak values, for arbitrary N . Since the weak measurement can be shown to be minimally invasive, the weak values provide a useful strategy for a Leggett-Garg test of N -scopic realism.

  9. Variational principles of fluid mechanics and electromagnetism: imposition and neglect of the Lin constraint

    International Nuclear Information System (INIS)

    Allen, R.R. Jr.

    1987-01-01

    The Lin constraint has been utilized by a number of authors who have sought to develop Eulerian variational principles in both fluid mechanics and electromagnetics (or plasmadynamics). This dissertation first reviews the work of earlier authors concerning the development of variational principles in both the Eulerian and Lagrangian nomenclatures. In the process, it is shown whether or not the Euler-Lagrange equations that result from the variational principles are equivalent to the generally accepted equations of motion. In particular, it is shown in the case of several Eulerian variational principles that imposition of the Lin constraint results in Euler-Lagrange equations equivalent to the generally accepted equations of motion, whereas neglect of the Lin constraint results in restrictive Euler-Lagrange equations. In an effort to improve the physical motivation behind introduction of the Lin constraint, a new variational constraint is developed based on the concept of surface forces within a fluid. Additionally, it is shown that a quantity often referred to as the canonical momentum of a charged fluid is not always a constant of the motion of the fluid; and it is demonstrated that there does not exist an unconstrained Eulerian variational principle giving rise to the generally accepted equations of motion for both a perfect fluid and a cold, electromagnetic fluid.

  10. Kinetic energy principle and neoclassical toroidal torque in tokamaks

    International Nuclear Information System (INIS)

    Park, Jong-Kyu

    2011-01-01

    It is shown that when tokamaks are perturbed, the kinetic energy principle is closely related to the neoclassical toroidal torque by the action invariance of particles. Especially when tokamaks are perturbed from scalar pressure equilibria, the imaginary part of the potential energy in the kinetic energy principle is equivalent to the toroidal torque by the neoclassical toroidal viscosity. A unified description therefore should be made for both physics. It is also shown in this case that the potential energy operator can be self-adjoint and thus the stability calculation can be simplified by minimizing the potential energy.

  11. Breakdown of the equivalence between gravitational mass and energy for a composite quantum body

    International Nuclear Information System (INIS)

    Lebed, Andrei G

    2014-01-01

    The simplest quantum composite body, a hydrogen atom, is considered in the presence of a weak external gravitational field. We define an operator for the passive gravitational mass of the atom in the post-Newtonian approximation of the general relativity and show that it does not commute with its energy operator. Nevertheless, the equivalence between the expectation values of the mass and energy is shown to survive at a macroscopic level for stationary quantum states. Breakdown of the equivalence between passive gravitational mass and energy at a microscopic level for stationary quantum states can be experimentally detected by studying unusual electromagnetic radiation, emitted by the atoms, supported by and moving in the Earth's gravitational field with constant velocity, using spacecraft or satellite

  12. A variational principle for the plasma centrifuge

    International Nuclear Information System (INIS)

    Ludwig, G.O.

    1986-09-01

    A variational principle is derived which describes the stationary state of the plasma column in a plasma centrifuge. Starting with the fluid equations in a rotating frame the theory is developed using the method of irreversible thermodynamics. This formulation easily leads to an expression for the density distribution of the l-species at sedimentation equilibrium, taking into account the effect of the electric and magnetic forces. Assuming stationary boundary conditions and rigid rotation nonequilibrium states the condition for thermodynamic stability integrated over the volume of the system reduces, under certain restrictions, to the principle of minimum entropy production in the stationary state. This principle yields a variational problem which is equivalent to the original problem posed by the stationary fluid equations. The variational method is useful in achieving approximate solutions that give the electric potential and current distributions in the rotating plasma column consistent with an assumed plasma density profile. (Author) [pt
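    A minimal sketch of the sedimentation-equilibrium density profile is given below, keeping only the centrifugal term n(r) = n(0) exp(m ω² r² / 2kT); the electric and magnetic corrections treated in the paper are omitted, and all numbers are illustrative.

```python
import numpy as np

# Radial density at sedimentation equilibrium for a rigidly rotating species,
# keeping only the centrifugal term n(r) = n(0) exp(m w^2 r^2 / (2 k T)).
# The electric and magnetic corrections treated in the paper are omitted here,
# and all numbers are illustrative.
k_B = 1.380649e-23          # J/K
amu = 1.66053906660e-27     # kg

omega = 2 * np.pi * 5.0e3   # assumed rotation rate, rad/s
T = 5.0e4                   # assumed plasma temperature, K
r_wall = 0.05               # column radius, m

def density_ratio(mass_amu, r):
    """n(r)/n(0) for a species of the given mass."""
    m = mass_amu * amu
    return np.exp(m * omega**2 * r**2 / (2 * k_B * T))

for mass in (235.0, 238.0):     # e.g. uranium isotopes
    print(f"m = {mass} u : n(r_wall)/n(0) = {density_ratio(mass, r_wall):.3f}")

# Elementary single-stage separation factor between the two isotopes at the wall
alpha = density_ratio(238.0, r_wall) / density_ratio(235.0, r_wall)
print(f"separation factor alpha = {alpha:.4f}")
```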

  13. Small vacuum energy from small equivalence violation in scalar gravity

    International Nuclear Information System (INIS)

    Agrawal, Prateek; Sundrum, Raman

    2017-01-01

    The theory of scalar gravity proposed by Nordström, and refined by Einstein and Fokker, provides a striking analogy to general relativity. In its modern form, scalar gravity appears as the low-energy effective field theory of the spontaneous breaking of conformal symmetry within a CFT, and is AdS/CFT dual to the original Randall-Sundrum I model, but without a UV brane. Scalar gravity faithfully exhibits several qualitative features of the cosmological constant problem of standard gravity coupled to quantum matter, and the Weinberg no-go theorem can be extended to this case as well. Remarkably, a solution to the scalar gravity cosmological constant problem has been proposed, where the key is a very small violation of the scalar equivalence principle, which can be elegantly formulated as a particular type of deformation of the CFT. In the dual AdS picture this involves implementing Goldberger-Wise radion stabilization where the Goldberger-Wise field is a pseudo-Nambu Goldstone boson. In quantum gravity however, global symmetries protecting pNGBs are not expected to be fundamental. We provide a natural six-dimensional gauge theory origin for this global symmetry and show that the violation of the equivalence principle and the size of the vacuum energy seen by scalar gravity can naturally be exponentially small. Our solution may be of interest for study of non-supersymmetric CFTs in the spontaneously broken phase.

  14. Small vacuum energy from small equivalence violation in scalar gravity

    Energy Technology Data Exchange (ETDEWEB)

    Agrawal, Prateek [Department of Physics, Harvard University,Cambridge, MA 02138 (United States); Sundrum, Raman [Department of Physics, University of Maryland,College Park, MD 20742 (United States)

    2017-05-29

    The theory of scalar gravity proposed by Nordström, and refined by Einstein and Fokker, provides a striking analogy to general relativity. In its modern form, scalar gravity appears as the low-energy effective field theory of the spontaneous breaking of conformal symmetry within a CFT, and is AdS/CFT dual to the original Randall-Sundrum I model, but without a UV brane. Scalar gravity faithfully exhibits several qualitative features of the cosmological constant problem of standard gravity coupled to quantum matter, and the Weinberg no-go theorem can be extended to this case as well. Remarkably, a solution to the scalar gravity cosmological constant problem has been proposed, where the key is a very small violation of the scalar equivalence principle, which can be elegantly formulated as a particular type of deformation of the CFT. In the dual AdS picture this involves implementing Goldberger-Wise radion stabilization where the Goldberger-Wise field is a pseudo-Nambu Goldstone boson. In quantum gravity however, global symmetries protecting pNGBs are not expected to be fundamental. We provide a natural six-dimensional gauge theory origin for this global symmetry and show that the violation of the equivalence principle and the size of the vacuum energy seen by scalar gravity can naturally be exponentially small. Our solution may be of interest for study of non-supersymmetric CFTs in the spontaneously broken phase.

  15. Precision measurement with atom interferometry

    International Nuclear Information System (INIS)

    Wang Jin

    2015-01-01

    The development of atom interferometry and its application in precision measurement are reviewed in this paper. The principle, features and implementation of atom interferometers are introduced, and the recent progress of precision measurement with atom interferometry is reviewed in detail, including determination of the gravitational constant and the fine structure constant; measurement of gravity, gravity gradient and rotation; tests of the weak equivalence principle; proposals for gravitational wave detection; and measurement of the quadratic Zeeman shift. Determination of the gravitational redshift, the new definition of the kilogram, and measurement of weak forces with atom interferometry are also briefly introduced. (topical review)
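    As a reminder of the scale involved, the sketch below evaluates the standard Mach-Zehnder atom-gravimeter relation Δφ = k_eff g T² (gravity gradients and other corrections neglected); the numbers are illustrative values for a rubidium Raman interferometer, not results from the review.

```python
import numpy as np

# Standard Mach-Zehnder atom-gravimeter relation: the interferometer phase is
# dphi = k_eff * g * T^2 (neglecting gravity gradients and other corrections).
# Numbers below are illustrative, roughly for a Rb two-photon Raman setup.
wavelength = 780.24e-9                  # Rb D2 line, m
k_eff = 2 * (2 * np.pi / wavelength)    # two-photon Raman effective wave number, 1/m
T = 0.1                                 # pulse separation time, s
g = 9.81                                # local gravity, m/s^2

dphi = k_eff * g * T**2
print(f"interferometer phase: {dphi:.1f} rad")

# Inverting the relation: a 1 mrad phase resolution corresponds to
dg = 1e-3 / (k_eff * T**2)
print(f"gravity resolution for 1 mrad phase noise: {dg:.2e} m/s^2")
```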

  16. Measurement equivalence of patient safety climate in Chinese hospitals: can we compare across physicians and nurses?

    Science.gov (United States)

    Zhu, Junya

    2018-06-11

    Self-report instruments have been widely used to better understand variations in patient safety climate between physicians and nurses. Research is needed to determine whether differences in patient safety climate reflect true differences in the underlying concepts. This is known as measurement equivalence, which is a prerequisite for meaningful group comparisons. This study aims to examine the degree of measurement equivalence of the responses to a patient safety climate survey of Chinese hospitals and to demonstrate how the measurement equivalence method can be applied to self-report climate surveys for patient safety research. Using data from the Chinese Hospital Survey of Patient Safety Climate from six Chinese hospitals in 2011, we constructed two groups: physicians and nurses (346 per group). We used multiple-group confirmatory factor analyses to examine progressively more stringent restrictions for measurement equivalence. We identified weak factorial equivalence across the two groups. Strong factorial equivalence was found for Organizational Learning, Unit Management Support for Safety, Adequacy of Safety Arrangements, Institutional Commitment to Safety, Error Reporting and Teamwork. Strong factorial equivalence, however, was not found for Safety System, Communication and Peer Support and Staffing. Nevertheless, further analyses suggested that nonequivalence did not meaningfully affect the conclusions regarding physician-nurse differences in patient safety climate. Our results provide evidence of at least partial equivalence of the survey responses between nurses and physicians, supporting mean comparisons of its constructs between the two groups. The measurement equivalence approach is essential to ensure that conclusions about group differences are valid.

  17. A method to obtain new cross-sections transport equivalent

    International Nuclear Information System (INIS)

    Palmiotti, G.

    1988-01-01

    We present a method that allows the calculation, by means of a variational principle, of equivalent cross-sections in order to take into account transport and mesh-size effects on reactivity variation calculations. The method has been validated in two- and three-dimensional geometries. The reactivity variations calculated in three-dimensional hexagonal geometry with seven points per subassembly, using two sets of equivalent cross-sections for control rods, are in very good agreement with those of a transport calculation extrapolated to zero mesh size. The difficulty encountered in obtaining a good flux distribution has led to the use of a single set of equivalent cross-sections calculated starting from an appropriate R-Z model that also takes into account the axial transport effects for the control rod followers. The global results in reactivity variations are still satisfactory, with a good performance for the flux distribution. The main interest of the proposed method is the possibility to simulate a full 3D transport calculation with fine mesh size using a 3D diffusion code with a larger mesh size. The results obtained should be affected by uncertainties which do not exceed ±4% for a large LMFBR control rod worth and for very different rod configurations. This uncertainty is by far smaller than the experimental uncertainties. (author). 5 refs, 8 figs, 9 tabs

  18. Further investigation on the precise formulation of the equivalence theorem

    International Nuclear Information System (INIS)

    He, H.; Kuang, Y.; Li, X.

    1994-01-01

    Based on a systematic analysis of the renormalization schemes in the general R_ξ gauge, the precise formulation of the equivalence theorem for longitudinal weak boson scatterings is given both in the SU(2)_L Higgs theory and in the realistic SU(2)×U(1) electroweak theory, to all orders in perturbation theory and for an arbitrary Higgs boson mass m_H. It is shown that there is generally a renormalization-scheme- and ξ-dependent modification factor C_mod, and a simple formula for C_mod is obtained. Furthermore, a convenient particular renormalization scheme is proposed in which C_mod is exactly unity. Results for C_mod in other currently used schemes are also discussed, especially their ξ and m_H dependence, through explicit one-loop calculations. It is shown that in some currently used schemes the deviation of C_mod from unity and the ξ dependence of C_mod are significant even in the large-m_H limit. Therefore care should be taken when applying the equivalence theorem

  19. Modified physiologically equivalent temperature—basics and applications for western European climate

    Science.gov (United States)

    Chen, Yung-Chang; Matzarakis, Andreas

    2018-05-01

    A new thermal index, the modified physiologically equivalent temperature (mPET), has been developed for universal application in different climate zones. The mPET improves on the weaknesses of the original physiologically equivalent temperature (PET) by enhancing the evaluation of humidity and clothing variability. The principles of mPET and the differences between the original PET and mPET are introduced and discussed in this study. Furthermore, this study demonstrates the usability of mPET with climatic data from Freiburg, which is located in Western Europe. Comparisons of PET, mPET, and the Universal Thermal Climate Index (UTCI) have shown that mPET gives a more realistic estimation of human thermal sensation than the other two thermal indices (PET, UTCI) for the thermal conditions in Freiburg. Additionally, a comparison of physiological parameters between the mPET model and the PET model (the Munich Energy Balance Model for Individuals, MEMI) is presented. The core and skin temperatures of the PET model drop more sharply to low values during cold stress than those of the mPET model, so the mPET model can be regarded as giving a more realistic core temperature and mean skin temperature than the PET model. A statistical regression analysis of mPET based on air temperature, mean radiant temperature, vapor pressure, and wind speed has been carried out. The R-squared value (0.995) indicates a strong relationship between the human-biometeorological factors and mPET. The regression coefficient of each factor represents the influence of that factor on mPET (e.g., a change of ±1 °C in T_a corresponds to ±0.54 °C in mPET). The first-order regression is considered to predict mPET at Freiburg during 2003 more realistically than higher-order regression models, because its predictions differ less from the mPET calculated from measurement data. Statistical tests recognize that mPET can effectively evaluate the
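    The first-order regression described above can be sketched as follows; the data are synthetic and the underlying coefficients are invented for illustration (only the 0.54 weight on T_a echoes the value quoted in the abstract).

```python
import numpy as np

# Sketch of the first-order regression mPET ~ Ta + Tmrt + VP + v used in the
# study.  The data below are synthetic, and the "true" coefficients (apart
# from the ~0.54 weight on Ta quoted in the abstract) are invented.
rng = np.random.default_rng(0)
n = 500
Ta   = rng.uniform(-5, 35, n)        # air temperature, deg C
Tmrt = Ta + rng.uniform(-5, 25, n)   # mean radiant temperature, deg C
VP   = rng.uniform(5, 25, n)         # vapour pressure, hPa
v    = rng.uniform(0.5, 5, n)        # wind speed, m/s

# Hypothetical "true" relation plus noise, standing in for mPET computed from
# the full thermo-physiological model.
mPET = 0.54 * Ta + 0.30 * Tmrt + 0.25 * VP - 1.5 * v - 2.0 + rng.normal(0, 0.5, n)

X = np.column_stack([Ta, Tmrt, VP, v, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, mPET, rcond=None)
pred = X @ coef

ss_res = np.sum((mPET - pred) ** 2)
ss_tot = np.sum((mPET - mPET.mean()) ** 2)
print("coefficients [Ta, Tmrt, VP, v, intercept]:", np.round(coef, 3))
print("R^2 =", round(1 - ss_res / ss_tot, 4))
```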

  20. Stochastic Resonance in an Underdamped System with Pinning Potential for Weak Signal Detection

    Directory of Open Access Journals (Sweden)

    Haibin Zhang

    2015-08-01

    Full Text Available Stochastic resonance (SR) has been proved to be an effective approach for weak sensor signal detection. This study presents a new weak signal detection method based on SR in an underdamped system, which consists of a pinning potential model. The model was first derived from magnetic domain wall (DW) motion in ferromagnetic strips. We analyze the principle of the proposed underdamped pinning SR (UPSR) system, the detailed numerical simulation, and the system performance. We also propose a strategy for selecting the proper damping factor and other system parameters to match a weak signal and input noise and to generate the highest output signal-to-noise ratio (SNR). Finally, we have verified its effectiveness with both simulated and experimental input signals. Results indicate that the UPSR performs better in weak signal detection than conventional SR (CSR), with the merits of higher output SNR and better anti-noise and frequency-response capability. Besides, the system can be designed accurately and efficiently owing to the sensitivity of the parameters and the diversity of the potential. These features also relax the small-parameter limitation of SR systems.

  1. Stochastic Resonance in an Underdamped System with Pinning Potential for Weak Signal Detection.

    Science.gov (United States)

    Zhang, Haibin; He, Qingbo; Kong, Fanrang

    2015-08-28

    Stochastic resonance (SR) has been proved to be an effective approach for weak sensor signal detection. This study presents a new weak signal detection method based on SR in an underdamped system, which consists of a pinning potential model. The model was first derived from magnetic domain wall (DW) motion in ferromagnetic strips. We analyze the principle of the proposed underdamped pinning SR (UPSR) system, the detailed numerical simulation, and the system performance. We also propose a strategy for selecting the proper damping factor and other system parameters to match a weak signal and input noise and to generate the highest output signal-to-noise ratio (SNR). Finally, we have verified its effectiveness with both simulated and experimental input signals. Results indicate that the UPSR performs better in weak signal detection than conventional SR (CSR), with the merits of higher output SNR and better anti-noise and frequency-response capability. Besides, the system can be designed accurately and efficiently owing to the sensitivity of the parameters and the diversity of the potential. These features also relax the small-parameter limitation of SR systems.
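
    The following is a minimal numerical sketch of the underdamped SR idea, integrated with an Euler-Maruyama scheme. A generic bistable double-well potential is used as a stand-in for the paper's pinning potential, and all parameter values are hypothetical; the paper's parameter-matching strategy is not reproduced.

```python
import numpy as np

# Underdamped stochastic resonance sketch:
#   x'' = -gamma*x' - dU/dx + A*sin(2*pi*f*t) + sqrt(2*D)*xi(t)
# with U(x) = -a*x^2/2 + b*x^4/4 a generic bistable potential (assumption,
# standing in for the pinning potential model of the paper).
gamma, a, b = 0.5, 1.0, 1.0   # damping and potential parameters (hypothetical)
A, f = 0.3, 0.01              # weak periodic signal amplitude and frequency
D = 0.5                       # noise intensity
dt, n_steps = 1e-2, 200_000

rng = np.random.default_rng(1)
x, v = 0.0, 0.0
xs = np.empty(n_steps)
for i in range(n_steps):
    t = i * dt
    force = a * x - b * x**3 + A * np.sin(2.0 * np.pi * f * t)  # -dU/dx + drive
    v += (-gamma * v + force) * dt + np.sqrt(2.0 * D * dt) * rng.standard_normal()
    x += v * dt
    xs[i] = x

# Crude output SNR estimate: spectral power at the drive frequency over the
# local noise floor.
spec = np.abs(np.fft.rfft(xs - xs.mean())) ** 2
freqs = np.fft.rfftfreq(n_steps, dt)
k = int(np.argmin(np.abs(freqs - f)))
noise_floor = np.median(spec[max(k - 50, 1):k + 50])
print("output SNR estimate:", spec[k] / noise_floor)
```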

  2. Biophysical modelling of phytoplankton communities from first principles using two-layered spheres: Equivalent Algal Populations (EAP) model

    CSIR Research Space (South Africa)

    Robertson Lain, L

    2014-07-01

    Full Text Available (PFT) analysis. To these ends, an initial validation of a new model of Equivalent Algal Populations (EAP) is presented here. This paper makes a first order comparison of two prominent phytoplankton Inherent Optical Property (IOP) models with the EAP...

  3. Useful Principles in Plant Excellence Promotion

    International Nuclear Information System (INIS)

    Kavsek, D.; Bozin, B.

    2002-01-01

    This presentation offers a discussion of some principles identified from a review of significant industry events that affected the safety or reliability of a large number of nuclear power plants worldwide. Over the years of operation, a number of events have occurred in nuclear power plants that have involved problems in human performance. A review of these and other significant events has identified recurring weaknesses in plant safety culture and in policies and procedures. Focusing attention on strengthening relevant processes can help plants avoid similar significant events. Events continue to occur because the lessons learned from industry and plant operating experience are not used effectively. In some cases, industry events have been communicated to the personnel of the plant in question, but without a thorough explanation of the lessons learned and their applicability to the plant. The corrective actions identified have sometimes been limited in scope and have not fully addressed generic issues. The review and implementation of corrective actions for in-house events have sometimes been inadequate to prevent recurrences. An effective operating experience program can significantly reduce the potential for recurring events. The value of learning and applying the knowledge gained from operating experience should be an integral part of plant culture and promoted as an expectation. When operating experience is reviewed, generic issues and causal factors should be explored, rather than focusing on the unique problems that led to a specific event. Lessons learned and applicability to the plant must be clearly identified, corrective action taken, and changes thoroughly communicated to the plant personnel. Their understanding of the changes and the reasons for them should then be confirmed. The following principles will be discussed in this presentation: recognizing conditions during evolutions, promoting teamwork, recognizing fundamental knowledge weaknesses

  4. Application of a value-based equivalency method to assess environmental damage compensation under the European Environmental Liability Directive

    NARCIS (Netherlands)

    Martin-Ortega, J.; Brouwer, R.; Aiking, H.

    2011-01-01

    The Environmental Liability Directive (ELD) establishes a framework of liability based on the 'polluter-pays' principle to prevent and remedy environmental damage. The ELD requires the testing of appropriate equivalency methods to assess the scale of compensatory measures needed to offset damage.

  5. Cellular gauge symmetry and the Li organization principle: General considerations.

    Science.gov (United States)

    Tozzi, Arturo; Peters, James F; Navarro, Jorge; Kun, Wu; Lin, Bi; Marijuán, Pedro C

    2017-12-01

    Based on novel topological considerations, we postulate a gauge symmetry for living cells and proceed to interpret it from a consistent Eastern perspective: the li organization principle. In our framework, the reference system is the living cell, equipped with general symmetries and energetic constraints standing for the intertwined biochemical, metabolic and signaling pathways that allow the global homeostasis of the system. Environmental stimuli stand for forces able to locally break the symmetry of metabolic/signaling pathways, while the species-specific DNA is the gauge field that restores the global homeostasis after external perturbations. We apply the Borsuk-Ulam Theorem (BUT) to operationalize a methodology in terms of topology/gauge fields and subsequently inquire about the evolution from inorganic to organic structures and to the prokaryotic and eukaryotic modes of organization. We converge on the strategic role that second messengers have played regarding the emergence of a unitary gauge field with profound evolutionary implications. A new avenue for a deeper investigation of biological complexity looms. Philosophically, we might be reminded of the duality between two essential concepts proposed by the great Chinese synthesizer Zhu Xi (in the XIII Century). On the one side the li organization principle, equivalent to the dynamic interplay between symmetry and information; and on the other side the qi principle, equivalent to the energy participating in the process-both always interlinked with each other. In contemporary terms, it would mean the required interconnection between information and energy, and the necessity to revise essential principles of information philosophy. Copyright © 2017 Elsevier Ltd. All rights reserved.

  6. Meta-Analyses of Seven of NIDA’s Principles of Drug Addiction Treatment

    Science.gov (United States)

    Pearson, Frank S.; Prendergast, Michael L.; Podus, Deborah; Vazan, Peter; Greenwell, Lisa; Hamilton, Zachary

    2011-01-01

    Seven of the 13 Principles of Drug Addiction Treatment disseminated by the National Institute on Drug Abuse (NIDA) were meta-analyzed as part of the Evidence-based Principles of Treatment (EPT) project. By averaging outcomes over the diverse programs included in EPT, we found that five of the NIDA principles examined are supported: matching treatment to the client’s needs; attending to the multiple needs of clients; behavioral counseling interventions; treatment plan reassessment; and counseling to reduce risk of HIV. Two of the NIDA principles are not supported: remaining in treatment for an adequate period of time and frequency of testing for drug use. These weak effects could be the result of the principles being stated too generally to apply to the diverse interventions and programs that exist or of unmeasured moderator variables being confounded with the moderators that measured the principles. Meta-analysis should be a standard tool for developing principles of effective treatment for substance use disorders. PMID:22119178

  7. Retrieval-based Face Annotation by Weak Label Regularized Local Coordinate Coding.

    Science.gov (United States)

    Wang, Dayong; Hoi, Steven C H; He, Ying; Zhu, Jianke; Mei, Tao; Luo, Jiebo

    2013-08-02

    Retrieval-based face annotation is a promising paradigm for mining massive web facial images for automated face annotation. This paper addresses a critical problem of this paradigm, i.e., how to effectively perform annotation by exploiting similar facial images and their weak labels, which are often noisy and incomplete. In particular, we propose an effective Weak Label Regularized Local Coordinate Coding (WLRLCC) technique, which exploits the principle of local coordinate coding to learn sparse features, and employs the idea of graph-based weak label regularization to enhance the weak labels of the similar facial images. We present an efficient optimization algorithm to solve the WLRLCC task. We conduct extensive empirical studies on two large-scale web facial image databases: (i) a Western celebrity database with a total of 6,025 persons and 714,454 web facial images, and (ii) an Asian celebrity database with 1,200 persons and 126,070 web facial images. The encouraging results validate the efficacy of the proposed WLRLCC algorithm. To further improve efficiency and scalability, we also propose a PCA-based approximation scheme and an offline approximation scheme (AWLRLCC), which generally maintain comparable results while significantly reducing the time cost. Finally, we show that WLRLCC can also tackle two existing face annotation tasks with promising performance.
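
    As a highly simplified sketch of the retrieval-plus-weak-label idea (not the authors' WLRLCC optimization, and with hypothetical data), one can retrieve the most similar faces, compute non-negative reconstruction coefficients for the query as rough local coordinates, and rank candidate names by coefficient-weighted votes over the noisy weak labels:

```python
import numpy as np
from scipy.optimize import nnls

def annotate(query_feat, db_feats, db_labels, k=10):
    """Toy retrieval-based annotation (illustrative only, not WLRLCC itself).

    query_feat : (d,)  query face feature
    db_feats   : (N, d) features of the database faces
    db_labels  : (N, L) weak (noisy/incomplete) 0-1 name-label matrix
    Returns candidate name indices ranked by score.
    """
    # 1. Retrieve the k most similar faces (Euclidean distance).
    idx = np.argsort(np.linalg.norm(db_feats - query_feat, axis=1))[:k]

    # 2. Stand-in for local coordinate coding: non-negative least-squares
    #    reconstruction of the query from its retrieved neighbours.
    coeffs, _ = nnls(db_feats[idx].T, query_feat)

    # 3. Aggregate the neighbours' weak labels, weighted by the coefficients.
    scores = coeffs @ db_labels[idx]
    return np.argsort(scores)[::-1]

# Tiny synthetic usage example (hypothetical features and labels).
rng = np.random.default_rng(0)
feats = rng.normal(size=(100, 32))
labels = (rng.random((100, 5)) > 0.8).astype(float)
print(annotate(feats[0] + 0.01 * rng.normal(size=32), feats, labels)[:3])
```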

  8. Maximum principle for a stochastic delayed system involving terminal state constraints.

    Science.gov (United States)

    Wen, Jiaqiang; Shi, Yufeng

    2017-01-01

    We investigate a stochastic optimal control problem in which the controlled system is described by a stochastic differential delayed equation and, at the terminal time, the state is constrained to a convex set. We first introduce an equivalent backward delayed system described by a time-delayed backward stochastic differential equation. Then a stochastic maximum principle is obtained by virtue of Ekeland's variational principle. Finally, applications to a state-constrained stochastic delayed linear-quadratic control model and a production-consumption choice problem are studied to illustrate the main result.

  9. Relativistic transformation law of quantum fields: A slight generalization consistent with the equivalence of all Lorentz frames

    International Nuclear Information System (INIS)

    Ingraham, R.L.

    1985-01-01

    The well-known relativistic transformation law of quantum fields satisfies the relativity principle, which asserts the complete equivalence of all Lorentz (inertial) frames as far as physical measurements go. We point out a slight generalization which is allowed by the relativity principle, but violates a further, tacit assumption usually made in connection with it but which is actually logically independent of it and subject to a feasible experimental test. The interest of the generalization is that it permits the incorporation of an ultraviolet cutoff in a simple, direct way which avoids the usual difficulties

  10. Weak interactions

    International Nuclear Information System (INIS)

    Ogava, S.; Savada, S.; Nakagava, M.

    1983-01-01

    The problem of using the laws of weak interactions to study models of elementary particles is discussed. The most typical examples of weak interactions are the beta-decay of nucleons and muons. The beta-interaction is represented by quark currents in the form of a universal interaction of the V-A type. The universality of weak interactions is well confirmed using the e- and μ-channels of pion decay as examples. The hypothesis of the partial conservation of the axial current is applicable to the analysis of processes involving pions. In the framework of the four-flavour model, leptonic decays of hadrons are considered. Weak interactions without lepton participation are also considered. The properties of neutral currents are described briefly

  11. Weakly clopen functions

    International Nuclear Information System (INIS)

    Son, Mi Jung; Park, Jin Han; Lim, Ki Moon

    2007-01-01

    We introduce a new class of functions called weakly clopen functions, which includes the class of almost clopen functions due to Ekici [Ekici E. Generalization of perfectly continuous, regular set-connected and clopen functions. Acta Math Hungar 2005;107:193-206] and is included in the class of weakly continuous functions due to Levine [Levine N. A decomposition of continuity in topological spaces. Am Math Mon 1961;68:44-6]. Some characterizations and several properties concerning weak clopenness are obtained. Furthermore, relationships among weak clopenness, almost clopenness, clopenness and weak continuity are investigated

  12. First results of the CERN Resonant Weakly Interacting sub-eV Particle Search (CROWS)

    CERN Document Server

    Betz, M; Gasior, M; Thumm, M; Rieger, S W

    2013-01-01

    The CERN Resonant Weakly Interacting sub-eV Particle Search probes the existence of weakly interacting sub-eV particles like axions or hidden sector photons. It is based on the principle of an optical light-shining-through-a-wall experiment, adapted to microwaves. Critical aspects of the experiment are electromagnetic shielding, the design and operation of low-loss cavity resonators, and the detection of weak sinusoidal microwave signals. Upper bounds are set on the coupling constant g = 4.5 × 10^-8 GeV^-1 for axionlike particles with a mass of m_a = 7.2 μeV. For hidden sector photons, upper bounds are set on the coupling constant χ = 4.1 × 10^-9 at a mass of m_γ = 10.8 μeV. For the latter we are probing a previously unexplored region of the parameter space.

  13. Existence and multiplicity of weak solutions for a class of degenerate nonlinear elliptic equations

    Directory of Open Access Journals (Sweden)

    Mihăilescu Mihai

    2006-01-01

    Full Text Available The goal of this paper is to study the existence and the multiplicity of non-trivial weak solutions for some degenerate nonlinear elliptic equations on the whole space ℝ^N. The solutions will be obtained in a subspace of the Sobolev space W^{1,p}(ℝ^N). The proofs rely essentially on the Mountain Pass theorem and on Ekeland's Variational principle.

  14. Principle and methods for measurement of snow water equivalent by detection of natural gamma radiation

    Energy Technology Data Exchange (ETDEWEB)

    Endrestoel, G O [Institutt for Atomenergi, Kjeller (Norway)

    1979-01-01

    The underlying principles for snow cover determination by use of terrestrial gamma radiation are presented. Several of the methods that have been proposed to exploit the effect are discussed, and some of the more important error sources for the different methods are listed. In conclusion an estimate of the precision that can be obtained by these methods is given.
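
    As a hedged illustration of the attenuation principle behind these methods (the report's actual procedures include further corrections, e.g. for soil moisture), the snow water equivalent follows from the reduction of the terrestrial gamma count rate by the overlying snow:

```latex
% Simplified attenuation relation (illustrative assumption):
% C_0 : count rate over bare ground,  C : count rate over snow,
% mu_w: effective mass attenuation coefficient of water at the
%       relevant gamma energies.
C \;=\; C_0\, e^{-\mu_w\,\mathrm{SWE}}
\qquad\Longrightarrow\qquad
\mathrm{SWE} \;=\; \frac{1}{\mu_w}\,\ln\frac{C_0}{C}
```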

  15. Principle and methods for measurement of snow water equivalent by detection of natural gamma radiation

    Energy Technology Data Exchange (ETDEWEB)

    Endrestol, G O

    1979-01-01

    The underlying principles for snow cover determination by use of terrestrial gamma radiation are presented. Several of the methods that have been proposed to exploit the effect are discussed, and some of the more important error sources for the different methods are listed. In conclusion estimates of the precision that can be obtained by these methods are given.

  16. Weak value controversy

    Science.gov (United States)

    Vaidman, L.

    2017-10-01

    Recent controversy regarding the meaning and usefulness of weak values is reviewed. It is argued that in spite of recent statistical arguments by Ferrie and Combes, experiments with anomalous weak values provide useful amplification techniques for precision measurements of small effects in many realistic situations. The statistical nature of weak values is questioned. Although measuring weak values requires an ensemble, it is argued that the weak value, similarly to an eigenvalue, is a property of a single pre- and post-selected quantum system. This article is part of the themed issue `Second quantum revolution: foundational questions'.

  17. Estimation of equivalent dose to the extremities of hemodynamic physicians during neurological procedures

    International Nuclear Information System (INIS)

    Squair, Peterson L.; Souza, Luiz C. de; Oliveira, Paulo Marcio C. de

    2005-01-01

    The estimation of doses to the hands of physicians during hemodynamic procedures is important to verify the application of the radiation protection principles of optimization and dose limitation required by Portaria 453/98 of the Ministry of Health/ANVISA, Brazil. The exposure levels of the physicians' hands during the use of the equipment in hemodynamic neurological procedures were checked with dosimetric rings containing LiF:Mg,Ti (TLD-100) thermoluminescent detectors, calibrated in terms of the personal dose equivalent Hp(0.07). The average equivalent dose to the extremities was 41.12 μSv per scan, with an expanded uncertainty of 20% for k = 2. This value refers to hemodynamic neurology procedures carried out using the available radiological protection measures to minimize the dose

  18. The meaning of “anomalous weak values” in quantum and classical theories

    International Nuclear Information System (INIS)

    Sokolovski, D.

    2015-01-01

    The readings of a highly inaccurate “weak” quantum meter, employed to determine the value of a dichotomous variable S without destroying the interference between the alternatives, may take arbitrary values. We show that the expected values of its readings may take any real value, depending on the choice of the states in which the system is pre- and post-selected. Some of these values must fall outside the range of eigenvalues of S, in which case they may be expressed as “anomalous” averages obtained with negative probability weights, constructed from the available probability amplitudes. This behaviour is a natural consequence of the Uncertainty Principle. The phenomenon of “anomalous weak values” has no non-trivial analogue in classical statistics. - Highlights: • The average reading of a weak meter can take any value, depending on the transition. • It carries no information about intrinsic properties of the measured system, e.g., the size of a spin. • This is a direct consequence of the Uncertainty Principle, which forbids distinguishing between interfering alternatives. • Some of the averages have to lie outside the spectrum of the measured operator, i.e., be “anomalous”. • There can be no anomalous averages in a purely classical theory
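
    For reference, the textbook expression for the weak value of S, whose real part governs the mean pointer reading for a system pre-selected in |ψ⟩ and post-selected in |φ⟩, is quoted here as an illustration of how the average can leave the eigenvalue range:

```latex
S_w \;=\; \frac{\langle\phi|\hat{S}|\psi\rangle}{\langle\phi|\psi\rangle},
\qquad \text{mean pointer shift} \;\propto\; \operatorname{Re} S_w
```

    Because ⟨φ|ψ⟩ can be made arbitrarily small, Re S_w can lie far outside the spectrum of the measured operator, which is the "anomalous" case discussed above.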

  19. Quantum theory and Einstein's general relativity

    International Nuclear Information System (INIS)

    Borzeszkowski, H. von; Treder, H.

    1982-01-01

    We discuss the meaning of, and prove the accordance between, general relativity, wave mechanics, and the quantization of Einstein's gravitation equations themselves. First, there is the problem of the influence of gravitational fields on de Broglie waves; this influence is in accordance with Einstein's weak principle of equivalence and with the limitation of measurements given by Heisenberg's uncertainty relations. Second, the quantization of the gravitational fields is a ''quantization of geometry.'' However, classical and quantum gravitation have the same physical meaning according to the limitations of measurements given by Einstein's strong principle of equivalence and the Heisenberg uncertainties for the mechanics of test bodies

  20. Photodetectors for weak-signal detection fabricated from ZnO:(Li,N) films

    Energy Technology Data Exchange (ETDEWEB)

    He, G.H. [State Key Laboratory of Luminescence and Applications, Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences Changchun 130033 (China); University of Chinese Academy of Sciences, Beijing 100049 (China); Zhou, H. [Key Laboratory of Semiconductors and Applications of Fujian Province, Collaborative Innovation Center for Optoelectronic Semiconductors and Efficient Devices, Department of Physics, Xiamen University, Xiamen 361005 (China); Shen, H. [State Key Laboratory of Luminescence and Applications, Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences Changchun 130033 (China); Lu, Y.J. [Henan Key Laboratory of Diamond Optoelectronic Materials and Devices, School of Physics and Engineering, Zhengzhou University, Zhengzhou 450001 (China); Wang, H.Q.; Zheng, J.C. [Key Laboratory of Semiconductors and Applications of Fujian Province, Collaborative Innovation Center for Optoelectronic Semiconductors and Efficient Devices, Department of Physics, Xiamen University, Xiamen 361005 (China); Li, B.H. [State Key Laboratory of Luminescence and Applications, Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences Changchun 130033 (China); Shan, C.X., E-mail: shancx@ciomp.ac.cn [State Key Laboratory of Luminescence and Applications, Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences Changchun 130033 (China); Henan Key Laboratory of Diamond Optoelectronic Materials and Devices, School of Physics and Engineering, Zhengzhou University, Zhengzhou 450001 (China); Shen, D.Z. [State Key Laboratory of Luminescence and Applications, Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences Changchun 130033 (China)

    2017-08-01

    Highlights: • ZnO films with a carrier concentration as low as 5.0 × 10^13 cm^-3 have been prepared via a lithium and nitrogen codoping method. • Ultraviolet photodetectors that can detect weak signals with a power density as low as 20 nW/cm^2 have been fabricated from the ZnO:(Li,N) films. • The detectivity and noise equivalent power of the photodetector reach 3.60 × 10^15 cmHz^1/2/W and 6.67 × 10^-18 W^-1, both of which are among the best values ever reported for ZnO photodetectors. - Abstract: ZnO films with a carrier concentration as low as 5.0 × 10^13 cm^-3 have been prepared via a lithium and nitrogen codoping method, and ultraviolet photodetectors have been fabricated from the films. The photodetectors can be used to detect weak signals with a power density as low as 20 nW/cm^2, and the detectivity and noise equivalent power of the photodetector reach 3.60 × 10^15 cmHz^1/2/W and 6.67 × 10^-18 W^-1, respectively, both of which are among the best values ever reported for ZnO-based photodetectors. The high performance of the photodetector can be attributed to the relatively low carrier concentration of the ZnO:(Li,N) films.

  1. Capital taxation: principles, properties and optimal taxation issues

    OpenAIRE

    Antonin, Céline; Touze, Vincent

    2017-01-01

    This article addresses the issue of capital taxation relying on three levels of analysis. The first level deals with the multiple ways to tax capital (income or value, proportional or progressive taxation, and the temporality of the taxation) and presents some of France's particular features within a heterogeneous European context. The second area of investigation focuses on the main dynamic properties generated by capital taxation: the principle of equivalence with a tax on consu...

  2. Weak Acid Ionization Constants and the Determination of Weak Acid-Weak Base Reaction Equilibrium Constants in the General Chemistry Laboratory

    Science.gov (United States)

    Nyasulu, Frazier; McMills, Lauren; Barlag, Rebecca

    2013-01-01

    A laboratory experiment to determine the equilibrium constants of weak acid-weak base reactions is described. The equilibrium constants of the component reactions, when multiplied together, equal the numerical value of the equilibrium constant of the summative reaction. The component reactions are weak acid ionization reactions, weak base hydrolysis…
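
    The multiplicative relation referred to above can be made explicit for a generic weak acid HA and weak base B (a standard result, shown here as a worked illustration):

```latex
\mathrm{HA} \rightleftharpoons \mathrm{H^+} + \mathrm{A^-}
  \qquad K_a \\
\mathrm{B} + \mathrm{H_2O} \rightleftharpoons \mathrm{BH^+} + \mathrm{OH^-}
  \qquad K_b \\
\mathrm{H^+} + \mathrm{OH^-} \rightleftharpoons \mathrm{H_2O}
  \qquad 1/K_w \\[4pt]
\text{sum:}\quad
\mathrm{HA} + \mathrm{B} \rightleftharpoons \mathrm{A^-} + \mathrm{BH^+}
  \qquad K \;=\; \frac{K_a\,K_b}{K_w}
```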

  3. Electric cars : The climate impact of electric cars, focusing on carbon dioxide equivalent emissions

    OpenAIRE

    Ly, Sandra; Sundin, Helena; Thell, Linda

    2012-01-01

    This bachelor thesis examines and models the carbon dioxide equivalent emissions of the automobile fleet in Sweden in 2012. The report is based on three scenarios of electricity valuation principles: a snapshot perspective, a retrospective perspective and a future perspective. The snapshot perspective includes high and low values for electricity on the margin, the retrospective perspective includes the Nordic and European electricity mixes, and the future perspective includ...

  4. “Stringy” coherent states inspired by generalized uncertainty principle

    Science.gov (United States)

    Ghosh, Subir; Roy, Pinaki

    2012-05-01

    Coherent States with Fractional Revival property, that explicitly satisfy the Generalized Uncertainty Principle (GUP), have been constructed in the context of Generalized Harmonic Oscillator. The existence of such states is essential in motivating the GUP based phenomenological results present in the literature which otherwise would be of purely academic interest. The effective phase space is Non-Canonical (or Non-Commutative in popular terminology). Our results have a smooth commutative limit, equivalent to Heisenberg Uncertainty Principle. The Fractional Revival time analysis yields an independent bound on the GUP parameter. Using this and similar bounds obtained here, we derive the largest possible value of the (GUP induced) minimum length scale. Mandel parameter analysis shows that the statistics is Sub-Poissonian. Correspondence Principle is deformed in an interesting way. Our computational scheme is very simple as it requires only first order corrected energy values and undeformed basis states.

  5. “Stringy” coherent states inspired by generalized uncertainty principle

    International Nuclear Information System (INIS)

    Ghosh, Subir; Roy, Pinaki

    2012-01-01

    Coherent States with Fractional Revival property, that explicitly satisfy the Generalized Uncertainty Principle (GUP), have been constructed in the context of Generalized Harmonic Oscillator. The existence of such states is essential in motivating the GUP based phenomenological results present in the literature which otherwise would be of purely academic interest. The effective phase space is Non-Canonical (or Non-Commutative in popular terminology). Our results have a smooth commutative limit, equivalent to Heisenberg Uncertainty Principle. The Fractional Revival time analysis yields an independent bound on the GUP parameter. Using this and similar bounds obtained here, we derive the largest possible value of the (GUP induced) minimum length scale. Mandel parameter analysis shows that the statistics is Sub-Poissonian. Correspondence Principle is deformed in an interesting way. Our computational scheme is very simple as it requires only first order corrected energy values and undeformed basis states.

  6. On Equivalence of Nonequilibrium Thermodynamic and Statistical Entropies

    Directory of Open Access Journals (Sweden)

    Purushottam D. Gujrati

    2015-02-01

    Full Text Available We review the concept of nonequilibrium thermodynamic entropy and observables and internal variables as state variables, introduced recently by us, and provide a simple first principle derivation of additive statistical entropy, applicable to all nonequilibrium states by treating thermodynamics as an experimental science. We establish their numerical equivalence in several cases, which includes the most important case when the thermodynamic entropy is a state function. We discuss various interesting aspects of the two entropies and show that the number of microstates in the Boltzmann entropy includes all possible microstates of non-zero probabilities even if the system is trapped in a disjoint component of the microstate space. We show that negative thermodynamic entropy can appear from nonnegative statistical entropy.

  7. Equivalent formulations of “the equation of life”

    International Nuclear Information System (INIS)

    Ao Ping

    2014-01-01

    Motivated by progress in theoretical biology a recent proposal on a general and quantitative dynamical framework for nonequilibrium processes and dynamics of complex systems is briefly reviewed. It is nothing but the evolutionary process discovered by Charles Darwin and Alfred Wallace. Such general and structured dynamics may be tentatively named “the equation of life”. Three equivalent formulations are discussed, and it is also pointed out that such a quantitative dynamical framework leads naturally to the powerful Boltzmann-Gibbs distribution and the second law in physics. In this way, the equation of life provides a logically consistent foundation for thermodynamics. This view clarifies a particular outstanding problem and further suggests a unifying principle for physics and biology. (topical review - statistical physics and complex systems)

  8. Modeling Electric Discharges with Entropy Production Rate Principles

    Directory of Open Access Journals (Sweden)

    Thomas Christen

    2009-12-01

    Full Text Available Under which circumstances are variational principles based on the entropy production rate useful tools for modeling steady states of electric (gas) discharge systems far from equilibrium? It is first shown how various different approaches, such as Steenbeck's minimum voltage and Prigogine's minimum entropy production rate principles, are related to the maximum entropy production rate principle (MEPP). Secondly, three typical examples are discussed, which provide a certain insight into the structure of the models that are candidates for MEPP application. Thirdly, it is argued that MEPP, although not an exact physical law, may provide reasonable model parameter estimates, provided the constraints contain the relevant (nonlinear) physical effects and the parameters to be determined are related to disregarded weak constraints that affect mainly the global entropy production. Finally, it is additionally conjectured that a further reason for the success of MEPP in certain far-from-equilibrium systems might be a hidden linearity of the underlying kinetic equation(s).

  9. Existence and multiplicity of weak solutions for a class of degenerate nonlinear elliptic equations

    Directory of Open Access Journals (Sweden)

    Mihai Mihăilescu

    2006-02-01

    Full Text Available The goal of this paper is to study the existence and the multiplicity of non-trivial weak solutions for some degenerate nonlinear elliptic equations on the whole space ℝ^N. The solutions will be obtained in a subspace of the Sobolev space W^{1,p}(ℝ^N). The proofs rely essentially on the Mountain Pass theorem and on Ekeland's Variational principle.

  10. The Problem of Weak Governments and Weak Societies in Eastern Europe

    Directory of Open Access Journals (Sweden)

    Marko Grdešić

    2008-01-01

    Full Text Available This paper argues that, for Eastern Europe, the simultaneous presence of weak governments and weak societies is a crucial obstacle which must be faced by analysts and reformers. The understanding of other normatively significant processes will be deficient without a consciousness-raising deliberation on this problem and its implications. This paper seeks to articulate the “relational” approach to state and society. In addition, the paper lays out a typology of possible patterns of relationship between state and society, dependent on whether the state is weak or strong and whether society is weak or strong. Comparative data are presented in order to provide an empirical support for the theses. Finally, the paper outlines two reform approaches which could enable breaking the vicious circle emerging in the context of weak governments and weak societies.

  11. A review of the generalized uncertainty principle

    International Nuclear Information System (INIS)

    Tawfik, Abdel Nasser; Diab, Abdel Magied

    2015-01-01

    Based on string theory, black hole physics, doubly special relativity and some ‘thought’ experiments, a minimal distance and/or a maximum momentum are proposed. As alternatives to the generalized uncertainty principle (GUP), the modified dispersion relation, space noncommutativity, Lorentz invariance violation, and quantum-gravity-induced birefringence effects are summarized. The origin of minimal measurable quantities and the different GUP approaches are reviewed and the corresponding observations are analysed. Bounds on the GUP parameter are discussed and implemented in the understanding of recent PLANCK observations of cosmic inflation. The higher-order GUP approaches predict a minimal length uncertainty with and without maximum momenta. Possible arguments against the GUP are discussed; for instance, the concerns about its compatibility with the equivalence principles, with the universality of gravitational redshift and of free fall, and with the law of reciprocal action are addressed. (review)

  12. Electro-weak theory

    International Nuclear Information System (INIS)

    Deshpande, N.G.

    1980-01-01

    By electro-weak theory is meant the unified field theory that describes both weak and electro-magnetic interactions. The development of a unified electro-weak theory is certainly the most dramatic achievement in theoretical physics to occur in the second half of this century. It puts weak interactions on the same sound theoretical footing as quantum electrodynamics. Many theorists have contributed to this development, which culminated in the works of Glashow, Weinberg and Salam, who were jointly awarded the 1979 Nobel Prize in physics. Some of the important ideas that contributed to this development are the theory of beta decay formulated by Fermi, and parity violation, suggested by Lee and Yang and incorporated into the immensely successful V-A theory of weak interactions by Sudarshan and Marshak. At the same time, ideas of gauge invariance were applied to weak interactions by Schwinger, Bludman and Glashow. Weinberg and Salam then went one step further and wrote a theory that is renormalizable, i.e., all higher-order corrections are finite, no mean feat for a quantum field theory. The theory had to await the development of the quark model of hadrons for its completion. A description of the electro-weak theory is given

  13. The meaning and the principle of determination of the effective dose equivalent in radiation protection

    International Nuclear Information System (INIS)

    Drexler, G.; Williams, G.; Zankl, M.

    1985-01-01

    Since the introduction of the quantity ''effective dose equivalent'' within the framework of the new radiation protection concepts, the meaning and interpretation of this quantity have often been discussed and debated. Because of its adoption as a limiting quantity in many international and national laws, it is necessary to be able to interpret this main radiation protection quantity. Examples of organ doses and the related H_E values in occupational and medical exposures are presented, and the meaning of the quantity is considered for whole-body exposures to external and internal photon sources, as well as for partial-body external exposures to photons. (author)
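
    For reference, the effective dose equivalent combines the organ (tissue) dose equivalents H_T through the ICRP tissue weighting factors w_T; the standard defining relation is:

```latex
H_E \;=\; \sum_T w_T\, H_T ,
\qquad \sum_T w_T \;=\; 1
```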

  14. Weak decays

    International Nuclear Information System (INIS)

    Wojcicki, S.

    1978-11-01

    Lectures are given on weak decays from a phenomenological point of view, emphasizing new results and ideas and the relation of recent results to the new standard theoretical model. The general framework within which weak decay is viewed and the relevant fundamental questions, weak decays of noncharmed hadrons, decays of muons and the tau, and the decays of charmed particles are covered. The discussion is limited to those topics that have either received recent experimental attention or are relevant to the new physics. (JFP) 178 references

  15. Weak currents

    International Nuclear Information System (INIS)

    Leite Lopes, J.

    1976-01-01

    A survey of the fundamental ideas on weak currents such as CVC and PCAC and a presentation of the Cabibbo current and the neutral weak currents according to the Salam-Weinberg model and the Glashow-Iliopoulos-Miami model are given [fr

  16. Homological properties of modules with finite weak injective and weak flat dimensions

    OpenAIRE

    Zhao, Tiwei

    2017-01-01

    In this paper, we define a class of relative derived functors in terms of left or right weak flat resolutions to compute the weak flat dimension of modules. Moreover, we investigate two classes of modules larger than that of weak injective and weak flat modules, study the existence of covers and preenvelopes, and give some applications.

  17. Effect of Pauli principle accounting on the two-phonon states of spherical nuclei

    International Nuclear Information System (INIS)

    Solov'ev, V.G.; Stoyanov, Ch.; Nikolaeva, R.

    1983-01-01

    The effect of accounting for the Pauli principle in the two-phonon components of the wave functions on the low-lying collective states of even-even spherical nuclei is investigated. The calculations are performed for ^{114,116}Sn and ^{142,144,146,148}Sm. Accounting for the Pauli principle is shown to have a weak effect on states with large one-phonon or two-phonon components. It is concluded that in some spherical nuclei sufficiently pure two-phonon states may exist

  18. Weak circulation theorems as a way of distinguishing between generalized gravitation theories

    International Nuclear Information System (INIS)

    Enosh, M.

    1980-01-01

    It was proved in a previous paper that a generalized circulation theorem characterizes Einstein's theory of gravitation as a special case of a more general theory of gravitation which is also based on the principle of equivalence. Here we pose the question of whether it is possible to weaken this circulation theorem in such a way that it would imply theories more general than Einstein's. This problem is solved. In principle, there are two possibilities; one of them is essentially Weyl's theory. (author)

  19. Strong and weak adsorption of CO2 on PuO2 (1 1 0) surfaces from first-principles calculations

    Energy Technology Data Exchange (ETDEWEB)

    Yu, H.L. [Science and Technology on Surface Physics and Chemistry Laboratory, P.O. Box 718-35, Mianyang 621907 (China); Department of Engineering Physics, Tsinghua University, Beijing 100084 (China); Deng, X.D. [Department of Engineering Physics, Tsinghua University, Beijing 100084 (China); Li, G.; Lai, X.C. [Science and Technology on Surface Physics and Chemistry Laboratory, P.O. Box 718-35, Mianyang 621907 (China); Meng, D.Q., E-mail: yuhuilong2002@126.com [Science and Technology on Surface Physics and Chemistry Laboratory, P.O. Box 718-35, Mianyang 621907 (China)

    2014-10-15

    Highlights: • The CO2 adsorption on the PuO2 (1 1 0) surface was studied by GGA + U. • Both weak and strong adsorption exist between CO2 and the PuO2 (1 1 0) surface. • Electrostatic interactions are involved in the weak interactions. • Covalent bonding is developed in the strong adsorptions. - Abstract: The CO2 adsorption on the plutonium dioxide (PuO2) (1 1 0) surface was studied using the projector-augmented wave (PAW) method based on density-functional theory corrected for on-site Coulombic interactions (GGA + U). It is found that CO2 has several different adsorption features on the PuO2 (1 1 0) surface. Both weak and strong adsorption exist between CO2 and the PuO2 (1 1 0) surface. Further investigation of the partial density of states (PDOS) and the charge density difference at two typical adsorption sites reveals that electrostatic interactions are involved in the weak interactions, while covalent bonding is developed in the strong adsorptions.

  20. A general maximum entropy framework for thermodynamic variational principles

    International Nuclear Information System (INIS)

    Dewar, Roderick C.

    2014-01-01

    Minimum free energy principles are familiar in equilibrium thermodynamics, as expressions of the second law. They also appear in statistical mechanics as variational approximation schemes, such as the mean-field and steepest-descent approximations. These well-known minimum free energy principles are here unified and extended to any system analyzable by MaxEnt, including non-equilibrium systems. The MaxEnt Lagrangian associated with a generic MaxEnt distribution p defines a generalized potential Ψ for an arbitrary probability distribution p-hat, such that Ψ is a minimum at (p-hat) = p. Minimization of Ψ with respect to p-hat thus constitutes a generic variational principle, and is equivalent to minimizing the Kullback-Leibler divergence between p-hat and p. Illustrative examples of min–Ψ are given for equilibrium and non-equilibrium systems. An interpretation of changes in Ψ is given in terms of the second law, although min–Ψ itself is an intrinsic variational property of MaxEnt that is distinct from the second law
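
    A compact way to see the Kullback-Leibler statement, written here under the usual MaxEnt assumptions (p of exponential form with multipliers λ, constraint functions f and partition function Z; the notation is assumed rather than taken from the paper):

```latex
p(x) \;=\; \frac{e^{-\lambda\cdot f(x)}}{Z},
\qquad
\Psi[\hat{p}] \;\equiv\; \sum_x \hat{p}(x)\bigl[\ln\hat{p}(x) + \lambda\cdot f(x)\bigr]
\;=\; D_{\mathrm{KL}}(\hat{p}\,\|\,p) \;-\; \ln Z
```

    so Ψ is minimized precisely at p-hat = p, where it takes the free-energy-like value −ln Z.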

  1. A general maximum entropy framework for thermodynamic variational principles

    Energy Technology Data Exchange (ETDEWEB)

    Dewar, Roderick C., E-mail: roderick.dewar@anu.edu.au [Research School of Biology, The Australian National University, Canberra ACT 0200 (Australia)

    2014-12-05

    Minimum free energy principles are familiar in equilibrium thermodynamics, as expressions of the second law. They also appear in statistical mechanics as variational approximation schemes, such as the mean-field and steepest-descent approximations. These well-known minimum free energy principles are here unified and extended to any system analyzable by MaxEnt, including non-equilibrium systems. The MaxEnt Lagrangian associated with a generic MaxEnt distribution p defines a generalized potential Ψ for an arbitrary probability distribution p-hat, such that Ψ is a minimum at (p-hat) = p. Minimization of Ψ with respect to p-hat thus constitutes a generic variational principle, and is equivalent to minimizing the Kullback-Leibler divergence between p-hat and p. Illustrative examples of min–Ψ are given for equilibrium and non-equilibrium systems. An interpretation of changes in Ψ is given in terms of the second law, although min–Ψ itself is an intrinsic variational property of MaxEnt that is distinct from the second law.

  2. The Principle of Energetic Consistency

    Science.gov (United States)

    Cohn, Stephen E.

    2009-01-01

    A basic result in estimation theory is that the minimum variance estimate of the dynamical state, given the observations, is the conditional mean estimate. This result holds independently of the specifics of any dynamical or observation nonlinearity or stochasticity, requiring only that the probability density function of the state, conditioned on the observations, has two moments. For nonlinear dynamics that conserve a total energy, this general result implies the principle of energetic consistency: if the dynamical variables are taken to be the natural energy variables, then the sum of the total energy of the conditional mean and the trace of the conditional covariance matrix (the total variance) is constant between observations. Ensemble Kalman filtering methods are designed to approximate the evolution of the conditional mean and covariance matrix. For them the principle of energetic consistency holds independently of ensemble size, even with covariance localization. However, full Kalman filter experiments with advection dynamics have shown that a small amount of numerical dissipation can cause a large, state-dependent loss of total variance, to the detriment of filter performance. The principle of energetic consistency offers a simple way to test whether this spurious loss of variance limits ensemble filter performance in full-blown applications. The classical second-moment closure (third-moment discard) equations also satisfy the principle of energetic consistency, independently of the rank of the conditional covariance matrix. Low-rank approximation of these equations offers an energetically consistent, computationally viable alternative to ensemble filtering. Current formulations of long-window, weak-constraint, four-dimensional variational methods are designed to approximate the conditional mode rather than the conditional mean. Thus they neglect the nonlinear bias term in the second-moment closure equation for the conditional mean. The principle of
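
    In symbols, with natural energy variables s, conditional mean s̄ and conditional covariance P, and the total energy taken as the quadratic form E(s) = sᵀs (the setting sketched above; the notation here is assumed rather than the author's):

```latex
\big\langle E(s)\big\rangle
\;=\; \mathbb{E}\!\left[\,s^{\mathsf T}s \mid \text{observations}\,\right]
\;=\; \bar{s}^{\mathsf T}\bar{s} \;+\; \operatorname{tr} P
\;=\; E(\bar{s}) \;+\; \operatorname{tr} P
```

    so conservation of total energy by the dynamics implies that E(s̄) + tr P is constant between observations.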

  3. EQUIVALENCE VERSUS NON-EQUIVALENCE IN ECONOMIC TRANSLATION

    Directory of Open Access Journals (Sweden)

    Cristina, Chifane

    2012-01-01

    Full Text Available This paper aims at highlighting the fact that “equivalence” represents a concept worth revisiting and detailing upon when tackling the translation process of economic texts both from English into Romanian and from Romanian into English. Far from being exhaustive, our analysis will focus upon the problems arising from the lack of equivalence at the word level. Consequently, relevant examples from the economic field will be provided to account for the following types of non-equivalence at word level: culture-specific concepts; the source language concept is not lexicalised in the target language; the source language word is semantically complex; differences in physical and interpersonal perspective; differences in expressive meaning; differences in form; differences in frequency and purpose of using specific forms; and the use of loan words in the source text. Likewise, we shall illustrate a number of translation strategies necessary to deal with the afore-mentioned cases of non-equivalence: translation by a more general word (superordinate); translation by a more neutral/less expressive word; translation by cultural substitution; translation using a loan word or loan word plus explanation; translation by paraphrase using a related word; translation by paraphrase using unrelated words; translation by omission; and translation by illustration.

  4. Radioactive waste equivalence

    International Nuclear Information System (INIS)

    Orlowski, S.; Schaller, K.H.

    1990-01-01

    The report reviews, for the Member States of the European Community, possible situations in which an equivalence concept for radioactive waste may be used, analyses the various factors involved, and suggests guidelines for the implementation of such a concept. Only safety and technical aspects are covered. Other aspects such as commercial ones are excluded. Situations where the need for an equivalence concept has been identified are processes where impurities are added as a consequence of the treatment and conditioning process, the substitution of wastes from similar waste streams due to the treatment process, and exchange of waste belonging to different waste categories. The analysis of factors involved and possible ways for equivalence evaluation, taking into account in particular the chemical, physical and radiological characteristics of the waste package, and the potential risks of the waste form, shows that no simple all-encompassing equivalence formula may be derived. Consequently, a step-by-step approach is suggested, which avoids complex evaluations in the case of simple exchanges

  5. New recommendations for dose equivalent

    International Nuclear Information System (INIS)

    Bengtsson, G.

    1985-01-01

    In its report 39, the International Commission on Radiation Units and Measurements (ICRU), has defined four new quantities for the determination of dose equivalents from external sources: the ambient dose equivalent, the directional dose equivalent, the individual dose equivalent, penetrating and the individual dose equivalent, superficial. The rationale behind these concepts and their practical application are discussed. Reference is made to numerical values of these quantities which will be the subject of a coming publication from the International Commission on Radiological Protection, ICRP. (Author)

  6. Equivalent models of wind farms by using aggregated wind turbines and equivalent winds

    International Nuclear Information System (INIS)

    Fernandez, L.M.; Garcia, C.A.; Saenz, J.R.; Jurado, F.

    2009-01-01

    As a result of the increasing penetration of wind farms in power systems, wind farms begin to influence the power system, and therefore the modeling of wind farms has become an interesting research topic. In this paper, new equivalent models of wind farms equipped with wind turbines based on squirrel-cage induction generators and doubly-fed induction generators are proposed to represent their collective behavior in large power system simulations, instead of using a complete model of the wind farm in which all the wind turbines are modeled. The models proposed here are based on aggregating wind turbines into an equivalent wind turbine which receives an equivalent wind derived from the winds incident on the aggregated wind turbines. The equivalent wind turbine has a re-scaled power capacity and the same complete model as the individual wind turbines, which constitutes the main feature of the present equivalent models. Two equivalent winds are evaluated in this work: (1) the average of the winds incident on aggregated wind turbines with similar winds, and (2) an equivalent incoming wind derived from the power curve and the wind incident on each wind turbine. The effectiveness of the equivalent models in representing the collective response of the wind farm at the point of common coupling to the grid is demonstrated by comparison with the wind farm response obtained from the detailed model during power system dynamic simulations, such as wind fluctuations and a grid disturbance. The present models can be used for grid integration studies of large power systems with an important reduction of the model order and the computation time
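
    A minimal sketch of the aggregation idea follows, with a toy power curve and hypothetical parameter values (the paper's equivalent turbine retains the complete machine dynamics, which are not reproduced here); it computes both candidate equivalent winds and the re-scaled capacity:

```python
import numpy as np

RATED_POWER_W = 2.0e6  # single-turbine rated power (hypothetical)

def power_curve(v, v_cut_in=3.0, v_rated=12.0, v_cut_out=25.0):
    """Toy wind-turbine power curve in watts (assumption, not from the paper)."""
    v = np.asarray(v, dtype=float)
    partial = RATED_POWER_W * ((v - v_cut_in) / (v_rated - v_cut_in)) ** 3
    return np.where(
        (v >= v_cut_in) & (v < v_rated), partial,
        np.where((v >= v_rated) & (v <= v_cut_out), RATED_POWER_W, 0.0),
    )

# Winds incident on the individual turbines to be aggregated (m/s)
winds = np.array([8.2, 8.6, 9.1, 8.9, 8.4])
n = len(winds)

# Equivalent wind, option (1): average of similar incident winds
v_eq_avg = winds.mean()

# Equivalent wind, option (2): wind that reproduces the total power through
# the power curve (inverted here by a coarse numerical search)
p_total = power_curve(winds).sum()
grid = np.linspace(0.0, 25.0, 2501)
v_eq_power = grid[np.argmin(np.abs(n * power_curve(grid) - p_total))]

# The equivalent turbine keeps the single-machine model but re-scales capacity
print(f"equivalent wind (average)    : {v_eq_avg:.2f} m/s")
print(f"equivalent wind (power-based): {v_eq_power:.2f} m/s")
print(f"equivalent rated power       : {n * RATED_POWER_W / 1e6:.1f} MW")
```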

  7. The Mayer-Joule Principle: The Foundation of the First Law of Thermodynamics

    Science.gov (United States)

    Newburgh, Ronald; Leff, Harvey S.

    2011-01-01

    To most students today the mechanical equivalent of heat, called the Mayer-Joule principle, is simply a way to convert from calories to joules and vice versa. However, in linking work and heat--once thought to be disjointed concepts--it goes far beyond unit conversion. Heat had eluded understanding for two centuries after Galileo Galilei…
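
    As a reminder of the conversion the principle underwrites, with the modern value of the mechanical equivalent of heat:

```latex
W \;=\; J\,Q,
\qquad J \;\approx\; 4.186\ \mathrm{J/cal},
\qquad \text{e.g.}\quad 100\ \mathrm{cal} \;\approx\; 418.6\ \mathrm{J}
```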

  8. A generalized Principle of Relativity

    International Nuclear Information System (INIS)

    Felice, Fernando de; Preti, Giovanni

    2009-01-01

    The Theory of Relativity stands as a firm groundstone on which modern physics is founded. In this paper we bring to light a hitherto undisclosed richness of this theory, namely that it admits a consistent reformulation which is able to provide a unified scenario for all kinds of particles, be they lightlike or not. This result hinges on a generalized Principle of Relativity which is intrinsic to Einstein's theory - a fact which went completely unnoticed before. The road leading to this generalization starts, in the very spirit of Relativity, from enhancing full equivalence between the four spacetime directions by requiring full equivalence between the motions along these four spacetime directions as well. So far, no measurable spatial velocity in the direction of the time axis has ever been defined on the same footing as the usual velocities - the 'space-velocities' - in the local three-space of a given observer. In this paper, we show how Relativity allows such a 'time-velocity' to be defined in a very natural way, for any particle and in any reference frame. As a consequence of this natural definition, it also follows that the time- and space-velocity vectors sum up to define a spacelike 'world-velocity' vector, the modulus of which - the world-velocity - turns out to be equal to Maxwell's constant c, irrespective of the observer who measures it. This measurable world-velocity (not to be confused with the space-velocities we are used to dealing with) therefore represents the speed at which all kinds of particles move in spacetime, according to any observer. As remarked above, the unifying scenario thus emerging is intrinsic to Einstein's Theory; it extends the role traditionally assigned to Maxwell's constant c, and can therefore justly be referred to as 'a generalized Principle of Relativity'.

  9. Toward a measurement of weak magnetism in {sup 6}He decay

    Energy Technology Data Exchange (ETDEWEB)

    Huyan, X.; Naviliat-Cuncic, O., E-mail: naviliat@nscl.msu.edu; Bazin, D.; Gade, A.; Hughes, M.; Liddick, S.; Minamisono, K.; Noji, S.; Paulauskas, S. V.; Simon, A. [Michigan State University, National Superconducting Cyclotron Laboratory (United States); Voytas, P. [Wittenberg University, Department of Physics (United States); Weisshaar, D. [Michigan State University, National Superconducting Cyclotron Laboratory (United States)

    2016-12-15

    Sensitive searches for exotic scalar and tensor couplings in nuclear and neutron decays involve precision measurements of the shape of the β-energy spectrum. We have performed a high statistics measurement of the β-energy spectrum in the allowed Gamow-Teller decay of {sup 6}He with the aim to first find evidence of the contribution due to the weak magnetism form factor. We review here the motivation, describe the principle of the measurement, summarize the theoretical corrections to the allowed phase space, and anticipate the expected statistical precision.

  10. Tests of fundamental symmetries with trapped antihydrogen

    DEFF Research Database (Denmark)

    Rasmussen, Chris Ørum

    2016-01-01

    Antihydrogen is the simplest pure antimatter atomic system, and it allows for direct tests of CPT symmetry as well as the weak equivalence principle. Furthermore, the study of antihydrogen may provide clues to the matter-antimatter asymmetry observed in the universe - one of the major unanswered...

  11. Statistical inference using weak chaos and infinite memory

    International Nuclear Information System (INIS)

    Welling, Max; Chen Yutian

    2010-01-01

    We describe a class of deterministic weakly chaotic dynamical systems with infinite memory. These 'herding systems' combine learning and inference into one algorithm, where moments or data-items are converted directly into an arbitrarily long sequence of pseudo-samples. This sequence has infinite range correlations and as such is highly structured. We show that its information content, as measured by sub-extensive entropy, can grow as fast as K log T, which is faster than the usual 1/2 K log T for exchangeable sequences generated by random posterior sampling from a Bayesian model. In one dimension we prove that herding sequences are equivalent to Sturmian sequences which have complexity exactly log(T + 1). More generally, we advocate the application of the rich theoretical framework around nonlinear dynamical systems, chaos theory and fractal geometry to statistical learning.

  12. Statistical inference using weak chaos and infinite memory

    Energy Technology Data Exchange (ETDEWEB)

    Welling, Max; Chen Yutian, E-mail: welling@ics.uci.ed, E-mail: yutian.chen@uci.ed [Donald Bren School of Information and Computer Science, University of California Irvine CA 92697-3425 (United States)

    2010-06-01

    We describe a class of deterministic weakly chaotic dynamical systems with infinite memory. These 'herding systems' combine learning and inference into one algorithm, where moments or data-items are converted directly into an arbitrarily long sequence of pseudo-samples. This sequence has infinite range correlations and as such is highly structured. We show that its information content, as measured by sub-extensive entropy, can grow as fast as K log T, which is faster than the usual 1/2 K log T for exchangeable sequences generated by random posterior sampling from a Bayesian model. In one dimension we prove that herding sequences are equivalent to Sturmian sequences which have complexity exactly log(T + 1). More generally, we advocate the application of the rich theoretical framework around nonlinear dynamical systems, chaos theory and fractal geometry to statistical learning.
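
    To make the herding construction described above concrete, the following minimal sketch implements the standard herding recursion (a deterministic maximization step followed by a weight update with the target moments). The toy state space, feature map, and target distribution are invented for illustration and are not taken from the paper.

      import numpy as np

      rng = np.random.default_rng(0)

      # Toy setup: 6 discrete states, 4 binary features (all values invented).
      phi = rng.integers(0, 2, size=(6, 4)).astype(float)   # feature map phi(s)
      p_true = rng.dirichlet(np.ones(6))                     # an arbitrary target distribution
      mu = p_true @ phi                                      # target moments E[phi(s)]

      w = np.zeros(4)                        # herding weight vector
      samples = []
      T = 10000
      for t in range(T):
          s = int(np.argmax(phi @ w))        # deterministic maximization step
          samples.append(s)
          w += mu - phi[s]                   # add target moment, subtract realized feature

      emp = phi[samples].mean(axis=0)        # empirical moments of the pseudo-samples
      print("target moments :", np.round(mu, 4))
      print("herded moments :", np.round(emp, 4))   # approach mu at a fast O(1/T) rate

    The pseudo-sample sequence produced by this deterministic recursion is exactly the kind of highly structured, infinite-memory sequence the abstract refers to.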

  13. The monetary value of the collective dose equivalent unit (person-rem)

    International Nuclear Information System (INIS)

    Rodgers, Reginald C.

    1978-01-01

    In the design and operation of nuclear power reactor facilities, it is recommended that radiation exposures to the workers and the general public be kept 'as low as reasonably achievable' (ALARA). In the process of implementing this principle, cost-benefit evaluations are part of the decision-making process. For this reason a monetary value has to be assigned to the collective dose equivalent unit (person-rem). The various factors such as medical health care, societal penalty and manpower replacement/saving are essential ingredients in determining a monetary value for the person-rem. These factors and their dependence on the level of risk (or exposure level) are evaluated. Monetary values of well under $100 are determined for the public dose equivalent unit. The occupational worker person-rem value is determined to be in the range of $500 to about $5000, depending on the exposure level and the type of worker and his affiliation, i.e., temporary or permanent. A discussion of the variability and the range of the monetary values will be presented. (author)
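
    Such a monetary value is typically used in ALARA cost-benefit comparisons of the kind described above. The toy comparison below is purely illustrative: the protection options, costs, and averted collective doses are invented, and the $1000 per person-rem figure is simply a value placed inside the occupational range quoted in the record.

      # Toy ALARA cost-benefit comparison of the kind this monetary value feeds into.
      # The options, costs and averted doses are invented; the $/person-rem figure is
      # an assumed value within the occupational range quoted above.
      options = {
          # name: (cost in $, collective dose averted in person-rem)
          "extra temporary shielding": (20000, 30),
          "remote handling tooling": (90000, 120),
      }
      value_per_person_rem = 1000    # assumed occupational value within the $500-$5000 range

      for name, (cost, averted) in options.items():
          benefit = value_per_person_rem * averted
          verdict = "justified" if benefit >= cost else "not justified"
          print(f"{name:28s} cost ${cost:>6}  benefit ${benefit:>6}  -> {verdict}")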

  14. Tests of gravity with future space-based experiments

    Science.gov (United States)

    Sakstein, Jeremy

    2018-03-01

    Future space-based tests of relativistic gravitation—laser ranging to Phobos, accelerometers in orbit, and optical networks surrounding Earth—will constrain the theory of gravity with unprecedented precision by testing the inverse-square law, the strong and weak equivalence principles, and the deflection and time delay of light by massive bodies. In this paper, we estimate the bounds that could be obtained on alternative gravity theories that use screening mechanisms to suppress deviations from general relativity in the Solar System: chameleon, symmetron, and Galileon models. We find that space-based tests of the parametrized post-Newtonian parameter γ will constrain chameleon and symmetron theories to new levels, and that tests of the inverse-square law using laser ranging to Phobos will provide the most stringent constraints on Galileon theories to date. We end by discussing the potential for constraining these theories using upcoming tests of the weak equivalence principle, and conclude that further theoretical modeling is required in order to fully utilize the data.

  15. Cosmological principles. II. Physical principles

    International Nuclear Information System (INIS)

    Harrison, E.R.

    1974-01-01

    The discussion of cosmological principles covers the uniformity principle of the laws of physics, the gravitation and cognizability principles, and the Dirac creation, chaos, and bootstrap principles. (U.S.)

  16. Correspondences. Equivalence relations

    International Nuclear Information System (INIS)

    Bouligand, G.M.

    1978-03-01

    We comment on paragraph 3 'Correspondences' and paragraph 6 'Equivalence Relations' in chapter II of 'Elements de mathematique' by N. Bourbaki in order to simplify their comprehension. Paragraph 3 presents the ideas of a graph, a correspondence, and a map (or function), and their composition laws. We draw attention to the following points: 1) Adopting the convention of writing from left to right, the composition law for two correspondences (A,F,B), (U,G,V) of graphs F, G is written in full generality (A,F,B)o(U,G,V) = (A,FoG,V). It is not therefore assumed that the co-domain B of the first correspondence is identical to the domain U of the second (EII.13 D.7), (1970). 2) The axiom of choice consists of creating the Hilbert terms from only those relations admitting a graph. 3) The statement of the existence theorem of a function h such that f = goh, where f and g are two given maps having the same domain (of definition), is completed if h is more precisely an injection. Paragraph 6 considers the generalisation of equality: First, by 'the equivalence relation associated with a map f of a set E', identical to (x is a member of the set E and y is a member of the set E and x:f = y:f). Consequently, every relation R(x,y) which is equivalent to this is an equivalence relation in E (symmetrical, transitive, reflexive); then R admits a graph included in E x E, etc. Secondly, by means of the Hilbert term of a relation R submitted to the equivalence. In this last case, if R(x,y) is separately collectivizing in x and y, theta(x) is not the class of objects equivalent to x for R (EII.47.9), (1970). The interest of bringing together these two subjects, apart from this logical order, resides also in the fact that the theorem mentioned in 3) can be expressed by means of the equivalence relations associated with the functions f and g. The solutions of the examples proposed reveal their simplicity. [fr]

  17. Hartman effect and weak measurements that are not really weak

    International Nuclear Information System (INIS)

    Sokolovski, D.; Akhmatskaya, E.

    2011-01-01

    We show that in wave packet tunneling, localization of the transmitted particle amounts to a quantum measurement of the delay it experiences in the barrier. With no external degree of freedom involved, the envelope of the wave packet plays the role of the initial pointer state. Under tunneling conditions such ''self-measurement'' is necessarily weak, and the Hartman effect just reflects the general tendency of weak values to diverge, as postselection in the final state becomes improbable. We also demonstrate that it is a good-precision, or 'not really weak', quantum measurement: no matter how wide the barrier d, it is possible to transmit a wave packet with a width σ small compared to the observed advancement. As is the case with all weak measurements, the probability of transmission rapidly decreases with the ratio σ/d.

  18. A Weak Solution of a Stochastic Nonlinear Problem

    Directory of Open Access Journals (Sweden)

    M. L. Hadji

    2015-01-01

    Full Text Available We consider a problem modeling a porous medium with a random perturbation. This model occurs in many applications such as biology, medical sciences, oil exploitation, and chemical engineering. Many authors have focused their study mostly on the deterministic case. The most classical treatment is due to Biot in the 1950s, who suggested ignoring everything that happens at the microscopic level and applying the principles of continuum mechanics at the macroscopic level. Here we consider a stochastic problem, that is, a problem with a random perturbation. First we prove a result on the existence and uniqueness of the solution by making use of the weak formulation. Furthermore, we use a numerical scheme based on finite differences to present numerical results.
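
    The record does not spell out the numerical scheme, but a finite-difference treatment of a porous-medium-type equation with a random perturbation can be sketched as follows. The equation, parameters, and boundary conditions here are assumptions chosen purely for illustration (explicit central differences in space, Euler-Maruyama in time), not the scheme used by the authors.

      import numpy as np

      # Illustrative 1D porous-medium-type equation with a random perturbation,
      #   du/dt = d^2(u^m)/dx^2 + sigma dW/dt,   u = 0 at both boundaries,
      # discretized with explicit central differences and Euler-Maruyama time stepping.
      m, sigma = 2.0, 0.05                   # assumed model parameters
      nx, L, T = 101, 1.0, 0.05
      dx = L / (nx - 1)
      dt = 0.1 * dx**2                       # small step for stability of the explicit scheme
      steps = int(T / dt)

      x = np.linspace(0.0, L, nx)
      u = np.exp(-100.0 * (x - 0.5) ** 2)    # initial bump
      rng = np.random.default_rng(1)

      for _ in range(steps):
          v = u ** m
          lap = (v[2:] - 2.0 * v[1:-1] + v[:-2]) / dx**2
          noise = sigma * np.sqrt(dt) * rng.standard_normal(nx - 2)
          u[1:-1] += dt * lap + noise        # update interior points only
          u = np.clip(u, 0.0, None)          # keep the solution non-negative

      print("total mass after integration:", u.sum() * dx)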

  19. Variational energy principle for compressible, baroclinic flow. 2: Free-energy form of Hamilton's principle

    Science.gov (United States)

    Schmid, L. A.

    1977-01-01

    The first and second variations are calculated for the irreducible form of Hamilton's Principle that involves the minimum number of dependent variables necessary to describe the kinematics and thermodynamics of inviscid, compressible, baroclinic flow in a specified gravitational field. The form of the second variation shows that, in the neighborhood of a stationary point that corresponds to physically stable flow, the action integral is a complex saddle surface in parameter space. There exists a form of Hamilton's Principle for which a direct solution of a flow problem is possible. This second form is related to the first by a Friedrichs transformation of the thermodynamic variables. This introduces an extra dependent variable, but the first and second variations are shown to have direct physical significance, namely they are equal to the free energy of fluctuations about the equilibrium flow that satisfies the equations of motion. If this equilibrium flow is physically stable, and if a very weak second order integral constraint on the correlation between the fluctuations of otherwise independent variables is satisfied, then the second variation of the action integral for this free energy form of Hamilton's Principle is positive-definite, so the action integral is a minimum, and can serve as the basis for a direct trial and error solution. The second order integral constraint states that the unavailable energy must be maximum at equilibrium, i.e. the fluctuations must be so correlated as to produce a second order decrease in the total unavailable energy.

  20. The twin paradox and the principle of relativity

    International Nuclear Information System (INIS)

    Grøn, Øyvind

    2013-01-01

    The twin paradox is intimately related to the principle of relativity. Two twins A and B meet, travel away from each other and meet again. From the point of view of A, B is the traveller. Thus, A predicts B to be younger than A herself, and vice versa. Both cannot be correct. The special relativistic solution is to say that if one of the twins, say A, was inertial during the separation, she will be the older one. Since the principle of relativity is not valid for accelerated motion according to the special theory of relativity B cannot consider herself as at rest permanently because she must accelerate in order to return to her sister. A general relativistic solution is to say that due to the principle of equivalence B can consider herself as at rest, but she must invoke the gravitational change of time in order to predict correctly the age of A during their separation. However one may argue that the fact that B is younger than A shows that B was accelerated, not A, and hence the principle of relativity is not valid for accelerated motion in the general theory of relativity either. I here argue that perfect inertial dragging may save the principle of relativity, and that this requires a new model of the Minkowski spacetime where the cosmic mass is represented by a massive shell with radius equal to its own Schwarzschild radius. (paper)

  1. Calculation methods for determining dose equivalent

    International Nuclear Information System (INIS)

    Endres, G.W.R.; Tanner, J.E.; Scherpelz, R.I.; Hadlock, D.E.

    1987-11-01

    A series of calculations of neutron fluence as a function of energy in an anthropomorphic phantom was performed to develop a system for determining effective dose equivalent for external radiation sources. Critical organ dose equivalents are calculated and effective dose equivalents are determined using ICRP-26 [1] methods. Quality factors based on both present definitions and ICRP-40 definitions are used in the analysis. The results of these calculations are presented and discussed. The effective dose equivalent determined using ICRP-26 methods is significantly smaller than the dose equivalent determined by traditional methods. No existing personnel dosimeter or health physics instrument can determine effective dose equivalent. At the present time, the conversion of dosimeter response to dose equivalent is based on calculations for maximal or ''cap'' values using homogeneous spherical or cylindrical phantoms. The evaluated dose equivalent is, therefore, a poor approximation of the effective dose equivalent as defined by ICRP Publication 26. 3 refs., 2 figs., 1 tab
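
    As a rough illustration of the ICRP-26 procedure referred to above, the effective dose equivalent is the weighted sum of organ dose equivalents over the specified tissues. The sketch below uses the published ICRP Publication 26 tissue weighting factors, but the organ dose equivalents themselves are made-up inputs standing in for the fluence-to-dose calculations described in the record.

      # Sketch of the ICRP-26 style weighted sum: H_E = sum over tissues of w_T * H_T.
      # The tissue weighting factors are the ICRP Publication 26 values; the organ dose
      # equivalents are hypothetical inputs standing in for a fluence-to-dose calculation.
      w_T = {
          "gonads": 0.25, "breast": 0.15, "red bone marrow": 0.12, "lung": 0.12,
          "thyroid": 0.03, "bone surfaces": 0.03, "remainder": 0.30,
      }
      H_T = {  # hypothetical organ dose equivalents (mSv)
          "gonads": 0.8, "breast": 1.0, "red bone marrow": 1.1, "lung": 1.2,
          "thyroid": 0.9, "bone surfaces": 1.0, "remainder": 1.0,
      }
      H_E = sum(w_T[t] * H_T[t] for t in w_T)
      print(f"effective dose equivalent = {H_E:.3f} mSv")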

  2. Weak KAM theory for a weakly coupled system of Hamilton–Jacobi equations

    KAUST Repository

    Figalli, Alessio; Gomes, Diogo A.; Marcon, Diego

    2016-01-01

    Here, we extend the weak KAM and Aubry–Mather theories to optimal switching problems. We consider three issues: the analysis of the calculus of variations problem, the study of a generalized weak KAM theorem for solutions of weakly coupled systems of Hamilton–Jacobi equations, and the long-time behavior of time-dependent systems. We prove the existence and regularity of action minimizers, obtain necessary conditions for minimality, extend Fathi’s weak KAM theorem, and describe the asymptotic limit of the generalized Lax–Oleinik semigroup. © 2016, Springer-Verlag Berlin Heidelberg.

  3. Weak KAM theory for a weakly coupled system of Hamilton–Jacobi equations

    KAUST Repository

    Figalli, Alessio

    2016-06-23

    Here, we extend the weak KAM and Aubry–Mather theories to optimal switching problems. We consider three issues: the analysis of the calculus of variations problem, the study of a generalized weak KAM theorem for solutions of weakly coupled systems of Hamilton–Jacobi equations, and the long-time behavior of time-dependent systems. We prove the existence and regularity of action minimizers, obtain necessary conditions for minimality, extend Fathi’s weak KAM theorem, and describe the asymptotic limit of the generalized Lax–Oleinik semigroup. © 2016, Springer-Verlag Berlin Heidelberg.

  4. Effective dose equivalent

    International Nuclear Information System (INIS)

    Huyskens, C.J.; Passchier, W.F.

    1988-01-01

    The effective dose equivalent is a quantity which is used in the daily practice of radiation protection, as well as in radiation hygiene rules, as a measure of health risks. In this contribution it is worked out which assumptions this quantity is based upon and in which cases the effective dose equivalent can be used more or less well. (H.W.)

  5. Henry Fayol’s 14 Principles of Management: Implications for Libraries and Information Centres

    Directory of Open Access Journals (Sweden)

    Uzuegbu, C. P.

    2015-06-01

    Full Text Available This paper focuses generally on the ‘fourteen principles of management’ by Henri Fayol. However, it specifically analyses their application to and implications for libraries and information centres. An extensive review of published works on management generally, and library management in particular, was conducted. This yielded vital insights into the original meaning and later modifications of these principles, as well as their application in the management of various organisations. Consequently, the strengths and weaknesses of these principles were examined to determine their suitability in libraries and information centres. Inferences, illustrations, and examples were drawn from both developed and developing countries, which gives the paper a global perspective. Based on available literature, it was concluded that Fayol’s principles of management are as relevant to libraries as they are in other organisations. The paper, therefore, recommends that in addition to modifying some aspects to make these principles more responsive to the peculiar needs of libraries, further research should be undertaken to expand the breadth of these principles and ascertain their impacts on the management of information organisations.

  6. Comment on ''Modified photon equation of motion as a test for the principle of equivalence''

    International Nuclear Information System (INIS)

    Nityananda, R.

    1992-01-01

    In a recent paper, a modification of the geodesic equation was proposed for spinning photons containing a spin-curvature coupling term. The difference in arrival times of opposite circular polarizations starting simultaneously from a source was computed, obtaining a result linear in the coupling parameter. It is pointed out here that this linear term violates causality and, more generally, Fermat's principle, implying calculational errors. Even if these are corrected, there is a violation of covariance in the way the photon spin was introduced. Rectifying this makes the effect computed vanish entirely

  7. Dark-Matter Particles without Weak-Scale Masses or Weak Interactions

    International Nuclear Information System (INIS)

    Feng, Jonathan L.; Kumar, Jason

    2008-01-01

    We propose that dark matter is composed of particles that naturally have the correct thermal relic density, but have neither weak-scale masses nor weak interactions. These models emerge naturally from gauge-mediated supersymmetry breaking, where they elegantly solve the dark-matter problem. The framework accommodates single or multiple component dark matter, dark-matter masses from 10 MeV to 10 TeV, and interaction strengths from gravitational to strong. These candidates enhance many direct and indirect signals relative to weakly interacting massive particles and have qualitatively new implications for dark-matter searches and cosmological implications for colliders

  8. Equivalence relations of AF-algebra extensions

    Indian Academy of Sciences (India)

    In this paper, we consider equivalence relations of *-algebra extensions and describe the relationship between the isomorphism equivalence and the unitary equivalence. We also show that a certain group homomorphism is the obstruction for these equivalence relations to be the same.

  9. Symmetries of the triple degenerate DNLS equations for weakly nonlinear dispersive MHD waves

    International Nuclear Information System (INIS)

    Webb, G. M.; Brio, M.; Zank, G. P.

    1996-01-01

    A formulation of Hamiltonian and Lagrangian variational principles, Lie point symmetries and conservation laws for the triple degenerate DNLS equations describing the propagation of weakly nonlinear dispersive MHD waves along the ambient magnetic field, in β∼1 plasmas is given. The equations describe the interaction of the Alfven and magnetoacoustic modes near the triple umbilic point, where the fast magnetosonic, slow magnetosonic and Alfven speeds coincide and a{sub g}{sup 2} = V{sub A}{sup 2}, where a{sub g} is the gas sound speed and V{sub A} is the Alfven speed. A discussion is given of the travelling wave similarity solutions of the equations, which include solitary wave and periodic travelling waves. Strongly compressible solutions indicate the necessity for the insertion of shocks in the flow, whereas weakly compressible, near Alfvenic solutions resemble similar, shock free travelling wave solutions of the DNLS equation.

  10. Detection of light-matter interaction in the weak-coupling regime by quantum light

    Science.gov (United States)

    Bin, Qian; Lü, Xin-You; Zheng, Li-Li; Bin, Shang-Wu; Wu, Ying

    2018-04-01

    "Mollow spectroscopy" is a photon statistics spectroscopy, obtained by scanning the quantum light scattered from a source system. Here, we apply this technique to detect the weak light-matter interaction between the cavity and atom (or a mechanical oscillator) when the strong system dissipation is included. We find that the weak interaction can be measured with high accuracy when exciting the target cavity by quantum light scattered from the source halfway between the central peak and each side peak. This originally comes from the strong correlation of the injected quantum photons. In principle, our proposal can be applied into the normal cavity quantum electrodynamics system described by the Jaynes-Cummings model and an optomechanical system. Furthermore, it is state of the art for experiment even when the interaction strength is reduced to a very small value.

  11. Equivalent physical models and formulation of equivalent source layer in high-resolution EEG imaging

    International Nuclear Information System (INIS)

    Yao Dezhong; He Bin

    2003-01-01

    In high-resolution EEG imaging, both equivalent dipole layer (EDL) and equivalent charge layer (ECL) assumed to be located just above the cortical surface have been proposed as high-resolution imaging modalities or as intermediate steps to estimate the epicortical potential. Presented here are the equivalent physical models of these two equivalent source layers (ESL) which show that the strength of EDL is proportional to the surface potential of the layer when the outside of the layer is filled with an insulator, and that the strength of ECL is the normal current of the layer when the outside is filled with a perfect conductor. Based on these equivalent physical models, closed solutions of ECL and EDL corresponding to a dipole enclosed by a spherical layer are given. These results provide the theoretical basis of ESL applications in high-resolution EEG mapping

  12. Equivalent physical models and formulation of equivalent source layer in high-resolution EEG imaging

    Energy Technology Data Exchange (ETDEWEB)

    Yao Dezhong [School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu City, 610054, Sichuan Province (China); He Bin [The University of Illinois at Chicago, IL (United States)

    2003-11-07

    In high-resolution EEG imaging, both equivalent dipole layer (EDL) and equivalent charge layer (ECL) assumed to be located just above the cortical surface have been proposed as high-resolution imaging modalities or as intermediate steps to estimate the epicortical potential. Presented here are the equivalent physical models of these two equivalent source layers (ESL) which show that the strength of EDL is proportional to the surface potential of the layer when the outside of the layer is filled with an insulator, and that the strength of ECL is the normal current of the layer when the outside is filled with a perfect conductor. Based on these equivalent physical models, closed solutions of ECL and EDL corresponding to a dipole enclosed by a spherical layer are given. These results provide the theoretical basis of ESL applications in high-resolution EEG mapping.

  13. The transnational ne bis in idem principle in the EU. Mutual recognition and equivalent protection of human rights

    Directory of Open Access Journals (Sweden)

    John A.E. Vervaele

    2005-12-01

    Full Text Available The deepening and widening of European integration has led to an increase in transborder crime. Concurrent prosecution and sanctioning by several Member States is not only a problem in inter-state relations and an obstacle in the European integration process, but also a violation of the ne bis in idem principle, defined as a transnational human right in a common judicial area. This article analyzes whether and to what extent the ECHR has contributed and may continue to contribute to the development of such a common ne bis in idem standard in Europe. It is also examined whether the application of the ne bis in idem principle in classic inter-state judicial cooperation in criminal matters in the framework of the Council of Europe may make such a contribution as well. The transnational function of the ne bis in idem principle is discussed in the light of the Court of Justice’s case law on ne bis in idem in the framework of the area of Freedom, Security and Justice. Finally the inherent tension between mutual recognition and the protection of human rights in transnational justice is analyzed by looking at the insertion of the ne bis in idem principle in the Framework Decision on the European arrest warrant.

  14. Coexistence of weak ferromagnetism and ferroelectricity in the high pressure LiNbO3-type phase of FeTiO3.

    Science.gov (United States)

    Varga, T; Kumar, A; Vlahos, E; Denev, S; Park, M; Hong, S; Sanehira, T; Wang, Y; Fennie, C J; Streiffer, S K; Ke, X; Schiffer, P; Gopalan, V; Mitchell, J F

    2009-07-24

    We report the magnetic and electrical characteristics of polycrystalline FeTiO_{3} synthesized at high pressure that is isostructural with acentric LiNbO_{3} (LBO). Piezoresponse force microscopy, optical second harmonic generation, and magnetometry demonstrate ferroelectricity at and below room temperature and weak ferromagnetism below approximately 120 K. These results validate symmetry-based criteria and first-principles calculations of the coexistence of ferroelectricity and weak ferromagnetism in a series of transition metal titanates crystallizing in the LBO structure.

  15. An Invariance Principle to Ferrari-Spohn Diffusions

    Science.gov (United States)

    Ioffe, Dmitry; Shlosman, Senya; Velenik, Yvan

    2015-06-01

    We prove an invariance principle for a class of tilted 1 + 1-dimensional SOS models or, equivalently, for a class of tilted random walk bridges in . The limiting objects are stationary reversible ergodic diffusions with drifts given by the logarithmic derivatives of the ground states of associated singular Sturm-Liouville operators. In the case of a linear area tilt, we recover the Ferrari-Spohn diffusion with log-Airy drift, which was derived in Ferrari and Spohn (Ann Probab 33(4):1302—1325, 2005) in the context of Brownian motions conditioned to stay above circular and parabolic barriers.

  16. An alternative treatment of phenomenological higher-order strain-gradient plasticity theory

    DEFF Research Database (Denmark)

    Kuroda, Mitsutoshi; Tvergaard, Viggo

    2010-01-01

    strain is discussed, applying a dislocation theory-based consideration. Then, a differential equation for the equivalent plastic strain-gradient is introduced as an additional governing equation. Its weak form makes it possible to deduce and impose extra boundary conditions for the equivalent plastic...... strain. A connection between the present treatment and strain-gradient theories based on an extended virtual work principle is discussed. Furthermore, a numerical implementation and analysis of constrained simple shear of a thin strip are presented....

  17. The action principle for a system of differential equations

    International Nuclear Information System (INIS)

    Gitman, D M; Kupriyanov, V G

    2007-01-01

    We consider the problem of constructing an action functional for physical systems whose classical equations of motion cannot be directly identified with Euler-Lagrange equations for an action principle. Two ways of constructing the action principle are presented. From simple consideration, we derive the necessary and sufficient conditions for the existence of a multiplier matrix which can endow a prescribed set of second-order differential equations with the structure of the Euler-Lagrange equations. An explicit form of the action is constructed if such a multiplier exists. If a given set of differential equations cannot be derived from an action principle, one can reformulate such a set in an equivalent first-order form which can always be treated as the Euler-Lagrange equations of a certain action. We construct such an action explicitly. There exists an ambiguity (not reduced to a total time derivative) in associating a Lagrange function with a given set of equations. We present a complete description of this ambiguity. The general procedure is illustrated by several examples

  18. The action principle for a system of differential equations

    Energy Technology Data Exchange (ETDEWEB)

    Gitman, D M [Instituto de FIsica, Universidade de Sao Paulo (Brazil); Kupriyanov, V G [Instituto de FIsica, Universidade de Sao Paulo (Brazil)

    2007-08-17

    We consider the problem of constructing an action functional for physical systems whose classical equations of motion cannot be directly identified with Euler-Lagrange equations for an action principle. Two ways of constructing the action principle are presented. From simple consideration, we derive the necessary and sufficient conditions for the existence of a multiplier matrix which can endow a prescribed set of second-order differential equations with the structure of the Euler-Lagrange equations. An explicit form of the action is constructed if such a multiplier exists. If a given set of differential equations cannot be derived from an action principle, one can reformulate such a set in an equivalent first-order form which can always be treated as the Euler-Lagrange equations of a certain action. We construct such an action explicitly. There exists an ambiguity (not reduced to a total time derivative) in associating a Lagrange function with a given set of equations. We present a complete description of this ambiguity. The general procedure is illustrated by several examples.
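
    A minimal sketch of the first-order construction mentioned at the end of the abstract is the following standard device (the notation is ours, not necessarily that of the authors): any first-order system can be derived from an action that is linear in auxiliary multiplier variables.

      % Any first-order system \dot x^a = f^a(x,t) follows from an action that is
      % linear in auxiliary variables \lambda_a:
      S[x,\lambda] = \int \mathrm{d}t \; \lambda_a \left( \dot x^a - f^a(x,t) \right),
      %
      % whose Euler-Lagrange equations reproduce the original system and supply
      % linear equations for the multipliers:
      \frac{\delta S}{\delta \lambda_a} = 0 \;\Rightarrow\; \dot x^a = f^a(x,t),
      \qquad
      \frac{\delta S}{\delta x^a} = 0 \;\Rightarrow\;
      \dot\lambda_a = - \lambda_b \, \frac{\partial f^b}{\partial x^a}.

    The original equations are thus recovered as Euler-Lagrange equations, at the price of doubling the number of variables, which is the trade-off the abstract alludes to.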

  19. Evaluation of the efficacy of four weak acids as antifungal preservatives in low-acid intermediate moisture model food systems.

    Science.gov (United States)

    Huang, Yang; Wilson, Mark; Chapman, Belinda; Hocking, Ailsa D

    2010-02-01

    The potential efficacy of four weak acids as preservatives in low-acid intermediate moisture foods was assessed using a glycerol-based agar medium. The minimum inhibitory concentrations (MIC, % wt./wt.) of each acid were determined at two pH values (pH 5.0, pH 6.0) and two a(w) values (0.85, 0.90) for five food spoilage fungi, Eurotium herbariorum, Eurotium rubrum, Aspergillus niger, Aspergillus flavus and Penicillium roqueforti. Sorbic acid, a preservative commonly used to control fungal growth in low-acid intermediate moisture foods, was included as a reference. The MIC values of the four acids were lower at pH 5.0 than pH 6.0 at equivalent a(w) values, and lower at 0.85 a(w) than 0.90 a(w) at equivalent pH values. By comparison with the MIC values of sorbic acid, those of caprylic acid and dehydroacetic acid were generally lower, whereas those for caproic acid were generally higher. No general observation could be made in the case of capric acid. The antifungal activities of all five weak acids appeared related not only to the undissociated form, but also to the dissociated form, of each acid.
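
    The pH dependence noted above follows the usual weak-acid dissociation picture. The sketch below evaluates the undissociated fraction from the Henderson-Hasselbalch relation; the pKa values are approximate literature values inserted for illustration and are not taken from the study.

      # Undissociated (protonated) fraction of a weak acid from Henderson-Hasselbalch:
      #   f_HA = 1 / (1 + 10**(pH - pKa)).
      # The pKa values are approximate literature values, used only for illustration.
      pKa = {"sorbic": 4.76, "caproic": 4.88, "caprylic": 4.89, "capric": 4.90}

      def undissociated_fraction(pka, pH):
          return 1.0 / (1.0 + 10.0 ** (pH - pka))

      for acid, pka in pKa.items():
          f5 = undissociated_fraction(pka, 5.0)
          f6 = undissociated_fraction(pka, 6.0)
          print(f"{acid:9s} undissociated at pH 5.0: {f5:.2f}   at pH 6.0: {f6:.2f}")

    The larger undissociated fraction at pH 5.0 is consistent with the lower MICs observed at that pH, although the study also points to a contribution from the dissociated form.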

  20. Testing statistical hypotheses of equivalence

    CERN Document Server

    Wellek, Stefan

    2010-01-01

    Equivalence testing has grown significantly in importance over the last two decades, especially as its relevance to a variety of applications has become understood. Yet published work on the general methodology remains scattered in specialists' journals, and for the most part, it focuses on the relatively narrow topic of bioequivalence assessment. With a far broader perspective, Testing Statistical Hypotheses of Equivalence provides the first comprehensive treatment of statistical equivalence testing. The author addresses a spectrum of specific, two-sided equivalence testing problems, from the
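
    As a concrete illustration of the kind of problem treated in such a book, the following sketch implements a generic two-one-sided-tests (TOST) procedure for mean equivalence of two samples. The equivalence margin and the data are invented, and the code is a textbook-style sketch, not drawn from the book itself.

      import numpy as np
      from scipy import stats

      # Generic two-one-sided-tests (TOST) for equivalence of two means within +/- delta.
      def tost_p_value(x, y, delta):
          nx, ny = len(x), len(y)
          diff = np.mean(x) - np.mean(y)
          s2 = ((nx - 1) * np.var(x, ddof=1) + (ny - 1) * np.var(y, ddof=1)) / (nx + ny - 2)
          se = np.sqrt(s2 * (1.0 / nx + 1.0 / ny))                # pooled standard error
          df = nx + ny - 2
          p_lower = 1.0 - stats.t.cdf((diff + delta) / se, df)    # H0: diff <= -delta
          p_upper = stats.t.cdf((diff - delta) / se, df)          # H0: diff >= +delta
          return max(p_lower, p_upper)   # equivalence is declared if this is below alpha

      rng = np.random.default_rng(0)
      x = rng.normal(0.0, 1.0, 50)
      y = rng.normal(0.1, 1.0, 50)
      print("TOST p-value:", round(tost_p_value(x, y, delta=0.5), 4))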

  1. Weakly sheared active suspensions: hydrodynamics, stability, and rheology.

    Science.gov (United States)

    Cui, Zhenlu

    2011-03-01

    We present a kinetic model for flowing active suspensions and analyze the behavior of a suspension subjected to a weak steady shear. Asymptotic solutions are sought in Deborah number expansions. At the leading order, we explore the steady states and perform their stability analysis. We predict the rheology of active systems including an activity thickening or thinning behavior of the apparent viscosity and a negative apparent viscosity depending on the particle type, flow alignment, and the anchoring conditions, which can be tested on bacterial suspensions. We find remarkable dualities that show that flow-aligning rodlike contractile (extensile) particles are dynamically and rheologically equivalent to flow-aligning discoid extensile (contractile) particles for both tangential and homeotropic anchoring conditions. Another key prediction of this work is the role of the concentration of active suspensions in controlling the rheological behavior: the apparent viscosity may decrease with the increase of the concentration.

  2. Modelling of Airship Flight Mechanics by the Projection Equivalent Method

    Directory of Open Access Journals (Sweden)

    Frantisek Jelenciak

    2015-12-01

    Full Text Available This article describes the projection equivalent method (PEM) as a specific and relatively simple approach for the modelling of aircraft dynamics. By the PEM it is possible to obtain a mathematical model of the aerodynamic forces and moments acting on different kinds of aircraft during flight. A characteristic of the PEM is that - in principle - it provides an acceptable regression model of the aerodynamic forces and moments which exhibits reasonable and plausible behaviour from a dynamics viewpoint. The principle of this method is based on applying Newton's mechanics, which is then combined with a specific form of the finite element method to cover additional effects. The main advantage of the PEM is that it is not necessary to carry out measurements in a wind tunnel for the identification of the model's parameters. The plausible dynamical behaviour of the model can be achieved by specific correction parameters, which can be determined on the basis of experimental data obtained during the flight of the aircraft. In this article, we present the PEM as applied to an airship as well as a comparison of the data calculated by the PEM and experimental flight data.

  3. Information Leakage from Logically Equivalent Frames

    Science.gov (United States)

    Sher, Shlomi; McKenzie, Craig R. M.

    2006-01-01

    Framing effects are said to occur when equivalent frames lead to different choices. However, the equivalence in question has been incompletely conceptualized. In a new normative analysis of framing effects, we complete the conceptualization by introducing the notion of information equivalence. Information equivalence obtains when no…

  4. 21 CFR 26.9 - Equivalence determination.

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 1 2010-04-01 2010-04-01 false Equivalence determination. 26.9 Section 26.9 Food... Specific Sector Provisions for Pharmaceutical Good Manufacturing Practices § 26.9 Equivalence determination... document insufficient evidence of equivalence, lack of opportunity to assess equivalence or a determination...

  5. Ultra-Weak Fiber Bragg Grating Sensing Network Coated with Sensitive Material for Multi-Parameter Measurements

    OpenAIRE

    Bai, Wei; Yang, Minghong; Hu, Chenyuan; Dai, Jixiang; Zhong, Xuexiang; Huang, Shuai; Wang, Gaopeng

    2017-01-01

    A multi-parameter measurement system based on ultra-weak fiber Bragg grating (UFBG) array with sensitive material was proposed and experimentally demonstrated. The UFBG array interrogation principle is time-division multiplexing, with two semiconductor optical amplifiers as timing units. Experimental results showed that the performance of the proposed UFBG system is almost equal to that of traditional FBG, while the UFBG array system has obvious superiority with potential multiplexing ...
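
    For orientation, the underlying FBG sensing relation that such a system interrogates can be sketched as follows. The photo-elastic and thermal coefficients below are typical textbook values at 1550 nm, not the coefficients of the coated UFBG array reported in the paper.

      # Standard FBG sensing relation with typical textbook coefficients at 1550 nm
      # (illustrative only, not the coated UFBG array of the paper):
      #   dLambda / Lambda = (1 - p_e) * strain + (alpha + xi) * dT
      lambda_B = 1550.0                    # Bragg wavelength, nm
      p_e = 0.22                           # effective photo-elastic coefficient (typical)
      alpha_plus_xi = 0.55e-6 + 6.7e-6     # thermal expansion + thermo-optic, per K (typical)

      def bragg_shift_nm(strain_microstrain, delta_T_kelvin):
          strain = strain_microstrain * 1e-6
          return lambda_B * ((1.0 - p_e) * strain + alpha_plus_xi * delta_T_kelvin)

      print(f"{bragg_shift_nm(100, 0):.3f} nm for 100 microstrain")   # about 0.12 nm
      print(f"{bragg_shift_nm(0, 10):.3f} nm for a 10 K change")      # about 0.11 nm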

  6. Editorial: New operational dose equivalent quantities

    International Nuclear Information System (INIS)

    Harvey, J.R.

    1985-01-01

    The ICRU Report 39 entitled ''Determination of Dose Equivalents Resulting from External Radiation Sources'' is briefly discussed. Four new operational dose equivalent quantities have been recommended in ICRU 39. The 'ambient dose equivalent' and the 'directional dose equivalent' are applicable to environmental monitoring and the 'individual dose equivalent, penetrating' and the 'individual dose equivalent, superficial' are applicable to individual monitoring. The quantities should meet the needs of day-to-day operational practice, while being acceptable to those concerned with metrological precision, and at the same time be used to give effective control consistent with current perceptions of the risks associated with exposure to ionizing radiations. (U.K.)

  7. Long-range correlations, geometrical structure, and transport properties of macromolecular solutions. The equivalence of configurational statistics and geometrodynamics of large molecules.

    Science.gov (United States)

    Mezzasalma, Stefano A

    2007-12-04

    A special theory of Brownian relativity was previously proposed to describe the universal picture arising in ideal polymer solutions. In brief, it redefines a Gaussian macromolecule in a 4-dimensional diffusive spacetime, establishing a (weak) Lorentz-Poincaré invariance between liquid and polymer Einstein's laws for Brownian movement. Here, aimed at inquiring into the effect of correlations, we deepen the extension of the special theory to a general formulation. The previous statistical equivalence, for dynamic trajectories of liquid molecules and static configurations of macromolecules, and rather obvious in uncorrelated systems, is enlarged by a more general principle of equivalence, for configurational statistics and geometrodynamics. Accordingly, the three geodesic motion, continuity, and field equations could be rewritten, and a number of scaling behaviors were recovered in a spacetime endowed with general static isotropic metric (i.e., for equilibrium polymer solutions). We also dealt with universality in the volume fraction and, unexpectedly, found that a hyperscaling relation of the form (average size) x (diffusivity) x (viscosity){sup 1/2} ~ f(N0, phi0) is fulfilled in several regimes, both in the chain monomer number (N) and polymer volume fraction (phi). Entangled macromolecular dynamics was treated as a geodesic light deflection, entanglements acting in close analogy to the field generated by a spherically symmetric mass source, where length fluctuations of the chain primitive path behave as azimuth fluctuations of its shape. Finally, the general transformation rule for translational and diffusive frames gives a coordinate gauge invariance, suggesting a widened Lorentz-Poincaré symmetry for Brownian statistics. We expect this approach to find effective applications to solutions of arbitrarily large molecules displaying a variety of structures, where the effect of geometry is more explicit and significant in itself (e.g., surfactants, lipids, proteins).

  8. Laser-beam scintillations for weak and moderate turbulence

    Science.gov (United States)

    Baskov, R. A.; Chumak, O. O.

    2018-04-01

    The scintillation index is obtained for the practically important range of weak and moderate atmospheric turbulence. To study this challenging range, the Boltzmann-Langevin kinetic equation, describing light propagation, is derived from first principles of quantum optics based on the technique of the photon distribution function (PDF) [Berman et al., Phys. Rev. A 74, 013805 (2006), 10.1103/PhysRevA.74.013805]. The paraxial approximation for laser beams reduces the collision integral for the PDF to a two-dimensional operator in the momentum space. Analytical solutions for the average value of PDF as well as for its fluctuating constituent are obtained using an iterative procedure. The calculated scintillation index is considerably greater than that obtained within the Rytov approximation even at moderate turbulence strength. The relevant explanation is proposed.

  9. Mixed field dose equivalent measuring instruments

    International Nuclear Information System (INIS)

    Brackenbush, L.W.; McDonald, J.C.; Endres, G.W.R.; Quam, W.

    1985-01-01

    In the past, separate instruments have been used to monitor dose equivalent from neutrons and gamma rays. It has been demonstrated that it is now possible to simultaneously measure neutron and gamma dose with a single instrument, the tissue equivalent proportional counter (TEPC). With appropriate algorithms, dose equivalent can also be determined from the TEPC. A simple ''pocket rem meter'' for measuring neutron dose equivalent has already been developed. Improved algorithms for determining dose equivalent for mixed fields are presented. (author)
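
    One illustrative version of such an algorithm estimates the dose equivalent as the absorbed dose times a dose-mean quality factor computed from the measured lineal-energy spectrum, treating lineal energy as a proxy for LET. The sketch below uses the later ICRP-60 Q(L) relation purely for illustration (the record refers to earlier ICRP quality factor definitions) and a made-up spectrum.

      import numpy as np

      # Dose equivalent from a TEPC lineal-energy spectrum as H = D * Qbar, treating
      # lineal energy y as a proxy for LET. The Q(L) relation is the later ICRP-60
      # form, used here only for illustration; the spectrum d(y) is made up.
      def Q(L):                              # L in keV/um
          if L < 10.0:
              return 1.0
          if L <= 100.0:
              return 0.32 * L - 2.2
          return 300.0 / np.sqrt(L)

      y = np.logspace(-1, 3, 400)                           # lineal energy grid, keV/um
      d = np.exp(-0.5 * (np.log(y / 20.0)) ** 2)            # fake dose distribution d(y)
      d /= np.trapz(d, y)                                   # normalize: integral of d(y) dy = 1

      D = 1.0e-4                                            # absorbed dose, Gy (hypothetical)
      q_bar = np.trapz(np.array([Q(v) for v in y]) * d, y)  # dose-mean quality factor
      print(f"mean quality factor = {q_bar:.2f}, dose equivalent = {D * q_bar * 1e3:.3f} mSv")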

  10. Characterization of revenue equivalence

    NARCIS (Netherlands)

    Heydenreich, B.; Müller, R.; Uetz, Marc Jochen; Vohra, R.

    2009-01-01

    The property of an allocation rule to be implementable in dominant strategies by a unique payment scheme is called revenue equivalence. We give a characterization of revenue equivalence based on a graph theoretic interpretation of the incentive compatibility constraints. The characterization holds

  11. Weak mixing below the weak scale in dark-matter direct detection

    Science.gov (United States)

    Brod, Joachim; Grinstein, Benjamin; Stamou, Emmanuel; Zupan, Jure

    2018-02-01

    If dark matter couples predominantly to the axial-vector currents with heavy quarks, the leading contribution to dark-matter scattering on nuclei is either due to one-loop weak corrections or due to the heavy-quark axial charges of the nucleons. We calculate the effects of Higgs and weak gauge-boson exchanges for dark matter coupling to heavy-quark axial-vector currents in an effective theory below the weak scale. By explicit computation, we show that the leading-logarithmic QCD corrections are important, and thus resum them to all orders using the renormalization group.

  12. Bagging Weak Predictors

    DEFF Research Database (Denmark)

    Lukas, Manuel; Hillebrand, Eric

    Relations between economic variables can often not be exploited for forecasting, suggesting that predictors are weak in the sense that estimation uncertainty is larger than bias from ignoring the relation. In this paper, we propose a novel bagging predictor designed for such weak predictor variab...
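
    A generic sketch of bagging applied to a weak predictor (not necessarily the estimator proposed in the paper, whose details are cut off above) combines bootstrap resampling with a pre-test: the predictor is used in a resample only when its t-statistic clears a threshold, and the resulting forecasts are averaged.

      import numpy as np

      # Bagging a pre-test forecast: in each bootstrap resample the predictor is kept
      # only if its t-statistic clears a threshold, and the forecasts are averaged.
      def bagged_pretest_forecast(x, y, x_new, n_boot=500, t_crit=1.96, seed=0):
          rng = np.random.default_rng(seed)
          n = len(y)
          forecasts = []
          for _ in range(n_boot):
              idx = rng.integers(0, n, n)                   # bootstrap resample
              xb, yb = x[idx], y[idx]
              X = np.column_stack([np.ones(n), xb])
              beta, *_ = np.linalg.lstsq(X, yb, rcond=None)
              resid = yb - X @ beta
              sigma2 = resid @ resid / (n - 2)
              cov = sigma2 * np.linalg.inv(X.T @ X)
              t_stat = beta[1] / np.sqrt(cov[1, 1])
              if abs(t_stat) > t_crit:                      # predictor deemed strong enough
                  forecasts.append(beta[0] + beta[1] * x_new)
              else:                                         # otherwise fall back to the mean
                  forecasts.append(np.mean(yb))
          return float(np.mean(forecasts))

      rng = np.random.default_rng(1)
      x = rng.normal(size=200)
      y = 0.1 * x + rng.normal(size=200)                    # deliberately weak relation
      print(bagged_pretest_forecast(x, y, x_new=1.0))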

  13. Characterization of Revenue Equivalence

    NARCIS (Netherlands)

    Heydenreich, Birgit; Müller, Rudolf; Uetz, Marc Jochen; Vohra, Rakesh

    2008-01-01

    The property of an allocation rule to be implementable in dominant strategies by a unique payment scheme is called 'revenue equivalence'. In this paper we give a characterization of revenue equivalence based on a graph theoretic interpretation of the incentive compatibility constraints. The

  14. Limiting absorption principle at low energies for a mathematical model of weak interaction: the decay of a boson; Proprietes spectrales et principe d'absorption limite a faible energie pour un modele mathematique d'interaction faible: la desintegration d'un boson

    Energy Technology Data Exchange (ETDEWEB)

    Barbaroux, J.M. [Centre de Physique Theorique, 13 - Marseille (France); Toulon-Var Univ. du Sud, Dept. de Mathematiques, 83 - La Garde (France); Guillot, J.C. [Centre de Mathematiques Appliquees, UMR 7641, Ecole Polytechnique - CNRS, 91 - Palaiseau (France)

    2009-09-15

    We study the spectral properties of a Hamiltonian describing the weak decay of spin 1 massive bosons into the full family of leptons. We prove that the considered Hamiltonian is self-adjoint, with a unique ground state and we derive a Mourre estimate and a limiting absorption principle above the ground state energy and below the first threshold, for a sufficiently small coupling constant. As a corollary, we prove absence of eigenvalues and absolute continuity of the energy spectrum in the same spectral interval. (authors)

  15. Dynamical response of the Galileo Galilei on the ground rotor to test the equivalence principle: Theory, simulation, and experiment. I. The normal modes

    International Nuclear Information System (INIS)

    Comandi, G.L.; Chiofalo, M.L.; Toncelli, R.; Bramanti, D.; Polacco, E.; Nobili, A.M.

    2006-01-01

    Recent theoretical work suggests that violation of the equivalence principle might be revealed in a measurement of the fractional differential acceleration η between two test bodies--of different compositions--falling in the gravitational field of a source mass--if the measurement is made to the level of η≅10{sup -13} or better. This being within the reach of ground based experiments gives them a new impetus. However, while slowly rotating torsion balances in ground laboratories are close to reaching this level, only an experiment performed in a low orbit around the Earth is likely to provide a much better accuracy. We report on the progress made with the 'Galileo Galilei on the ground' (GGG) experiment, which aims to compete with torsion balances using an instrument design also capable of being converted into a much higher sensitivity space test. In the present and following articles (Part I and Part II), we demonstrate that the dynamical response of the GGG differential accelerometer set into supercritical rotation--in particular, its normal modes (Part I) and rejection of common mode effects (Part II)--can be predicted by means of a simple but effective model that embodies all the relevant physics. Analytical solutions are obtained under special limits, which provide the theoretical understanding. A simulation environment is set up, obtaining a quantitative agreement with the available experimental data on the frequencies of the normal modes and on the whirling behavior. This is a needed and reliable tool for controlling and separating perturbative effects from the expected signal, as well as for planning the optimization of the apparatus.

  16. Weakly interacting topological insulators: Quantum criticality and the renormalization group approach

    Science.gov (United States)

    Chen, Wei

    2018-03-01

    For D-dimensional weakly interacting topological insulators in certain symmetry classes, the topological invariant can be calculated from a D- or (D+1)-dimensional integration over a certain curvature function that is expressed in terms of single-particle Green's functions. Based on the divergence of the curvature function at the topological phase transition, we demonstrate how a renormalization group approach circumvents these integrations and reduces the necessary calculation to that for the Green's function alone, rendering a numerically efficient tool to identify topological phase transitions in a large parameter space. The method further unveils a number of statistical aspects related to the quantum criticality in weakly interacting topological insulators, including correlation function, critical exponents, and scaling laws, that can be used to characterize the topological phase transitions driven by either interacting or noninteracting parameters. We use 1D class BDI and 2D class A Dirac models with electron-electron and electron-phonon interactions to demonstrate these principles and find that interactions may change the critical exponents of the topological insulators.

  17. Detection of radionuclides from weak and poorly resolved spectra using Lasso and subsampling techniques

    Energy Technology Data Exchange (ETDEWEB)

    Bai, Er-Wei, E-mail: er-wei-bai@uiowa.edu [Department of Electrical and Computer Engineering, University of Iowa, Iowa City, IA 52242 (United States); Chan, Kung-sik, E-mail: kung-sik-chan@uiowa.edu [Department of Statistical and Actuarial Science, University of Iowa, Iowa City, IA 52242 (United States); Eichinger, William, E-mail: william-eichinger@uiowa.edu [Department of Civil and Environmental Engineering, University of Iowa, Iowa City, IA 52242 (United States); Kump, Paul [Department of Electrical and Computer Engineering, University of Iowa, Iowa City, IA 52242 (United States)

    2011-10-15

    We consider the problem of identifying nuclides from weak and poorly resolved spectra. A two-stage algorithm is proposed and tested based on the principle of majority voting. The idea is to model gamma-ray counts as Poisson processes. The average part is taken to be the model, and the difference between the observed gamma-ray counts and the average is treated as random noise. In the linear part, the unknown coefficients indicate whether the isotopes of interest are present or absent. Lasso-type algorithms are applied to find the non-vanishing coefficients. Since Lasso, or any prediction-error-based algorithm, is not consistent for variable selection at finite data length, an estimate of the parameter distribution based on subsampling techniques is added on top of the Lasso. Simulation examples are provided in which traditional peak detection algorithms fail to work while the proposed two-stage algorithm performs well in terms of both the false negative and false positive errors. - Highlights: > Identification of nuclides from weak and poorly resolved spectra. > An algorithm is proposed and tested based on the principle of majority voting. > Lasso-type algorithms are applied to find non-vanishing coefficients. > An estimate of the parameter distribution based on subsampling techniques is included. > Simulations compare the results of the proposed method with those of peak detection.

  18. Detection of radionuclides from weak and poorly resolved spectra using Lasso and subsampling techniques

    International Nuclear Information System (INIS)

    Bai, Er-Wei; Chan, Kung-sik; Eichinger, William; Kump, Paul

    2011-01-01

    We consider the problem of identifying nuclides from weak and poorly resolved spectra. A two-stage algorithm is proposed and tested based on the principle of majority voting. The idea is to model gamma-ray counts as Poisson processes. The average part is taken to be the model, and the difference between the observed gamma-ray counts and the average is treated as random noise. In the linear part, the unknown coefficients indicate whether the isotopes of interest are present or absent. Lasso-type algorithms are applied to find the non-vanishing coefficients. Since Lasso, or any prediction-error-based algorithm, is not consistent for variable selection at finite data length, an estimate of the parameter distribution based on subsampling techniques is added on top of the Lasso. Simulation examples are provided in which traditional peak detection algorithms fail to work while the proposed two-stage algorithm performs well in terms of both the false negative and false positive errors. - Highlights: → Identification of nuclides from weak and poorly resolved spectra. → An algorithm is proposed and tested based on the principle of majority voting. → Lasso-type algorithms are applied to find non-vanishing coefficients. → An estimate of the parameter distribution based on subsampling techniques is included. → Simulations compare the results of the proposed method with those of peak detection.
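
    The following sketch illustrates a two-stage scheme in the spirit described above: a non-negative Lasso regression of the measured spectrum on a library of isotope templates, followed by refits on random subsamples of the spectral channels and a majority vote on which coefficients are non-vanishing. The library, spectrum, regularization strength, and voting threshold are all synthetic and illustrative, not the authors' settings.

      import numpy as np
      from sklearn.linear_model import Lasso

      # Stage 1: non-negative Lasso of the spectrum on library templates.
      # Stage 2: refit on random channel subsamples and take a majority vote.
      rng = np.random.default_rng(0)
      n_channels, n_isotopes = 512, 8
      library = rng.gamma(2.0, 1.0, size=(n_channels, n_isotopes))      # fake template spectra
      truth = np.zeros(n_isotopes)
      truth[[1, 4]] = [0.6, 0.3]                                        # isotopes 1 and 4 present
      counts = rng.poisson(library @ truth + 0.5)                       # weak, noisy spectrum

      def selected(idx):
          model = Lasso(alpha=0.05, positive=True, max_iter=10000)
          model.fit(library[idx], counts[idx])
          return model.coef_ > 1e-6

      n_sub = 200
      votes = np.zeros(n_isotopes)
      for _ in range(n_sub):
          idx = rng.choice(n_channels, size=n_channels // 2, replace=False)   # subsample channels
          votes += selected(idx)

      present = votes / n_sub > 0.5                                     # majority voting
      print("declared present:", np.where(present)[0])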

  19. On the operator equivalents

    International Nuclear Information System (INIS)

    Grenet, G.; Kibler, M.

    1978-06-01

    A closed polynomial formula for the qth component of the diagonal operator equivalent of order k is derived in terms of angular momentum operators. The interest in various fields of molecular and solid state physics of using such a formula in connection with symmetry adapted operator equivalents is outlined

  20. Degeneracy relations in QCD and the equivalence of two systematic all-orders methods for setting the renormalization scale

    Directory of Open Access Journals (Sweden)

    Huan-Yu Bi

    2015-09-01

    Full Text Available The Principle of Maximum Conformality (PMC) eliminates QCD renormalization scale-setting uncertainties using fundamental renormalization group methods. The resulting scale-fixed pQCD predictions are independent of the choice of renormalization scheme and show rapid convergence. The coefficients of the scale-fixed couplings are identical to the corresponding conformal series with zero β-function. Two all-orders methods for systematically implementing the PMC-scale setting procedure for existing high order calculations are discussed in this article. One implementation is based on the PMC-BLM correspondence (PMC-I); the other, more recent, method (PMC-II) uses the Rδ-scheme, a systematic generalization of the minimal subtraction renormalization scheme. Both approaches satisfy all of the principles of the renormalization group and lead to scale-fixed and scheme-independent predictions at each finite order. In this work, we show that the PMC-I and PMC-II scale-setting methods are in practice equivalent to each other. We illustrate this equivalence for the four-loop calculations of the annihilation ratio Re+e− and the Higgs partial width Γ(H→bb¯). Both methods lead to the same resummed (‘conformal’) series up to all orders. The small scale differences between the two approaches are reduced as additional renormalization group {βi}-terms in the pQCD expansion are taken into account. We also show that special degeneracy relations, which underlie the equivalence of the two PMC approaches and the resulting conformal features of the pQCD series, are in fact general properties of non-Abelian gauge theory.

  1. 78 FR 67360 - Ambient Air Monitoring Reference and Equivalent Methods: Designation of Five New Equivalent Methods

    Science.gov (United States)

    2013-11-12

    ... Methods: Designation of Five New Equivalent Methods AGENCY: Office of Research and Development; Environmental Protection Agency (EPA). ACTION: Notice of the designation of five new equivalent methods for...) has designated, in accordance with 40 CFR Part 53, five new equivalent methods, one for measuring...

  2. 77 FR 60985 - Ambient Air Monitoring Reference and Equivalent Methods: Designation of Three New Equivalent Methods

    Science.gov (United States)

    2012-10-05

    ... Methods: Designation of Three New Equivalent Methods AGENCY: Environmental Protection Agency. ACTION: Notice of the designation of three new equivalent methods for monitoring ambient air quality. SUMMARY... equivalent methods, one for measuring concentrations of PM 2.5 , one for measuring concentrations of PM 10...

  3. Legitimacy of the Restorative Justice Principle in the Context of Criminal Law Enforcement

    Directory of Open Access Journals (Sweden)

    - Sukardi

    2014-10-01

    Full Text Available This research reviews the essence of the restorative justice principle as an approach to the settlement of criminal cases, and it aims to provide an overview of the construction of the restorative justice principle in criminal law enforcement. The outcomes of the research indicate that the restorative justice principle has frequently been studied as an alternative method of criminal case settlement positioned outside the criminal judiciary system. As it turns out in practice, however, it has certain weaknesses, particularly in view of the accountability and legitimacy aspects of its establishment. Therefore, there is a need for a scientific investigation process for the purpose of determining the status of the parties involved in a case, as well as for positioning the case concerned. Based on this view, the restorative justice principle appears to be the ideal approach to be applied in the criminal judiciary system.

  4. Strong equivalence, Lorentz and CPT violation, anti-hydrogen spectroscopy and gamma-ray burst polarimetry

    International Nuclear Information System (INIS)

    Shore, Graham M.

    2005-01-01

    The strong equivalence principle, local Lorentz invariance and CPT symmetry are fundamental ingredients of the quantum field theories used to describe elementary particle physics. Nevertheless, each may be violated by simple modifications to the dynamics while apparently preserving the essential fundamental structure of quantum field theory itself. In this paper, we analyse the construction of strong equivalence, Lorentz and CPT violating Lagrangians for QED and review and propose some experimental tests in the fields of astrophysical polarimetry and precision atomic spectroscopy. In particular, modifications of the Maxwell action predict a birefringent rotation of the direction of linearly polarised radiation from synchrotron emission which may be studied using radio galaxies or, potentially, gamma-ray bursts. In the Dirac sector, changes in atomic energy levels are predicted which may be probed in precision spectroscopy of hydrogen and anti-hydrogen atoms, notably in the Doppler-free, two-photon 1s-2s and 2s-nd (n∼10) transitions

  5. Major concerns in developing countries: applications of the Precautionary Principle in Ecuador.

    Science.gov (United States)

    Harari, Raúl; Freire Morales, Rocío; Harari, Homero

    2004-01-01

    Ecuador is a Latin American country with one of the greatest levels of biodiversity. At the same time, its social and environmental problems are also large. Poverty and other political and social problems, together with issues such as outdated transport systems, hazards imported from industrialized countries, a lack of information, and weak health care systems, form the framework of this situation. The most common problems are the use of heavy metals in many activities without safety and health protection, two decades of low-technology oil production, the intensive use of pesticides in agriculture, and other chemical risks. A limited capacity to develop prevention strategies, limited technical and scientific skills, and the absence of a reliable information and control system lead to a weak response mechanism. The Precautionary Principle could help to stimulate prevention and protection and provide a new tool for raising interest in environmental and health problems. Reinforcing the presence of international organizations such as the World Health Organization and the International Labour Organization, establishing bridges between scientific organizations from developed and developing countries, and introducing the Precautionary Principle into the legislation and daily practices of industry and agriculture could lead to an improvement in our environment and health.

  6. Investigation of Equivalent Circuit for PEMFC Assessment

    International Nuclear Information System (INIS)

    Myong, Kwang Jae

    2011-01-01

    Chemical reactions occurring in a PEMFC are dominated by the physical conditions and interface properties, and the reactions are expressed in terms of impedance. The performance of a PEMFC can be simply diagnosed by examining the impedance because impedance characteristics can be expressed by an equivalent electrical circuit. In this study, the characteristics of a PEMFC are assessed using the AC impedance and various equivalent circuits such as a simple equivalent circuit, equivalent circuit with a CPE, equivalent circuit with two RCs, and equivalent circuit with two CPEs. It was found in this study that the characteristics of a PEMFC could be assessed using impedance and an equivalent circuit, and the accuracy was highest for an equivalent circuit with two CPEs
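
    To make the equivalent-circuit comparison above concrete, the short Python sketch below evaluates the complex impedance of one plausible topology: a series ohmic resistance followed by two R||CPE arcs. The topology and all component values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def z_cpe(omega, Q, n):
    """Constant-phase element impedance: Z = 1 / (Q * (j*omega)**n)."""
    return 1.0 / (Q * (1j * omega) ** n)

def z_parallel(z1, z2):
    """Impedance of two elements in parallel."""
    return z1 * z2 / (z1 + z2)

def equivalent_circuit(omega, R_ohm, R1, Q1, n1, R2, Q2, n2):
    """Series ohmic resistance plus two R||CPE arcs (an 'equivalent circuit with two CPEs')."""
    return (R_ohm
            + z_parallel(R1, z_cpe(omega, Q1, n1))
            + z_parallel(R2, z_cpe(omega, Q2, n2)))

# Frequency sweep from 10 mHz to 10 kHz with made-up parameter values.
f = np.logspace(-2, 4, 181)
omega = 2.0 * np.pi * f
Z = equivalent_circuit(omega, R_ohm=0.01, R1=0.05, Q1=0.8, n1=0.85,
                       R2=0.10, Q2=1.5, n2=0.90)

# Nyquist coordinates (Re Z vs. -Im Z), the representation usually fitted to AC impedance data.
for re, im in zip(Z.real[::45], -Z.imag[::45]):
    print(f"Re(Z) = {re:.4f} ohm, -Im(Z) = {im:.4f} ohm")
```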

  7. A POPULATION OF X-RAY WEAK QUASARS: PHL 1811 ANALOGS AT HIGH REDSHIFT

    International Nuclear Information System (INIS)

    Wu Jianfeng; Brandt, W. N.; Schneider, Donald P.; Hall, Patrick B.; Gibson, Robert R.; Schmidt, Sarah J.; Richards, Gordon T.; Shemmer, Ohad; Just, Dennis W.

    2011-01-01

    We report the results from Chandra and XMM-Newton observations of a sample of 10 type 1 quasars selected to have unusual UV emission-line properties (weak and blueshifted high-ionization lines; strong UV Fe emission) similar to those of PHL 1811, a confirmed intrinsically X-ray weak quasar. These quasars were identified by the Sloan Digital Sky Survey at high redshift (z ∼ 2.2); eight are radio quiet while two are radio intermediate. All of the radio-quiet PHL 1811 analogs, without exception, are notably X-ray weak by a mean factor of ∼13. These sources lack broad absorption lines and have blue UV/optical continua, supporting the hypothesis that they are intrinsically X-ray weak like PHL 1811 itself. However, their average X-ray spectrum appears to be harder than those of typical quasars, which may indicate the presence of heavy intrinsic X-ray absorption. Our sample of radio-quiet PHL 1811 analogs supports a connection between an X-ray weak spectral energy distribution and PHL 1811-like UV emission lines; this connection provides an economical way to identify X-ray weak type 1 quasars. The fraction of radio-quiet PHL 1811 analogs in the radio-quiet quasar population is estimated to be ∼< 1.2%. We have investigated correlations between relative X-ray brightness and UV emission-line properties (e.g., C IV equivalent width and blueshift) for a sample combining our radio-quiet PHL 1811 analogs, PHL 1811 itself, and typical type 1 quasars. These correlation analyses suggest that PHL 1811 analogs may have extreme wind-dominated broad emission-line regions. Observationally, the radio-quiet PHL 1811 analogs appear to be a subset (∼30%) of radio-quiet weak-line quasars (WLQs). The existence of a subset of quasars in which high-ionization 'shielding gas' covers most of the broad emission-line region (BELR), but little more than the BELR, could potentially unify the PHL 1811 analogs and WLQs. The two radio-intermediate PHL 1811 analogs are X-ray bright. X

  8. Underwater electric field detection system based on weakly electric fish

    Science.gov (United States)

    Xue, Wei; Wang, Tianyu; Wang, Qi

    2018-04-01

    Weakly electric fish sense their surroundings in complete darkness with their active electric field detection system. However, because the detection capability of the electric field is limited, the detection distance is short and the detection accuracy is low. In this paper, a method of underwater detection based on rotating current field theory is proposed to improve the performance of underwater electric field detection systems. First, building on the results of previous researchers, we constructed a mathematical model of an underwater detection system based on rotating current field theory. We then completed a principle prototype and carried out detection experiments on metal objects in a water environment, laying the foundation for further experiments.

  9. Compatibility between weak gel and microorganisms in weak gel-assisted microbial enhanced oil recovery.

    Science.gov (United States)

    Qi, Yi-Bin; Zheng, Cheng-Gang; Lv, Cheng-Yuan; Lun, Zeng-Min; Ma, Tao

    2018-03-20

    To investigate weak gel-assisted microbial flooding in Block Wang Long Zhuang in the Jiangsu Oilfield, the compatibility of weak gel and microbe was evaluated using laboratory experiments. Bacillus sp. W5 was isolated from the formation water in Block Wang Long Zhuang. The rate of oil degradation reached 178 mg/day, and the rate of viscosity reduction reached 75.3%. Strain W5 could produce lipopeptide with a yield of 1254 mg/L. Emulsified crude oil was dispersed in the microbial degradation system, and the average diameter of the emulsified oil particles was 18.54 μm. Bacillus sp. W5 did not affect the rheological properties of the weak gel, and the presence of the weak gel did not significantly affect bacterial reproduction (as indicated by an unchanged microbial biomass), emulsification (surface tension is 35.56 mN/m and average oil particles size is 21.38 μm), oil degradation (162 mg/day) and oil viscosity reduction (72.7%). Core-flooding experiments indicated oil recovery of 23.6% when both weak gel and Bacillus sp. W5 were injected into the system, 14.76% when only the weak gel was injected, and 9.78% with strain W5 was injected without the weak gel. The results demonstrate good compatibility between strains W5 and the weak gel and highlight the application potential of weak gel-assisted microbial flooding. Copyright © 2018 The Society for Biotechnology, Japan. Published by Elsevier B.V. All rights reserved.

  10. The Principle of Will Autonomy in the Obligatory Law

    Directory of Open Access Journals (Sweden)

    MA. Shyhrete Kastrati

    2015-06-01

    Full Text Available The principle of autonomy of will is legislated in Article 2 of the Law no. 04/L–077 on Obligational Relationships, thereby providing the legal grounds for the regulation of legal relations between parties in an obligational relationship. This study aims to provide a contribution to the theory and practice, and also aims at providing a modest contribution to the obligational law doctrine in Kosovo. The purpose of the paper is to explore the gaps and weaknesses in the practical implementation of the principle, which represents the main pillar of obligational law. In this paper, combined methods were used, including research and descriptive methods, analysis and synthesis, comparative and normative methods. The exploration method was used throughout the paper, and entails the collection of hard-copy and electronic materials. The descriptive method implies a description of concepts and important thoughts of legal science, in this case on the principle of autonomy of will, thereby using literature of various authors. The analytical and synthetic methodology is aimed at achieving the study objectives, the recognition of the principle of autonomy of will, its practical implementation, and conclusions. The comparative method was applied in comparing the implementation of the principle in the Law on Obligational Relationships of Kosovo and the Law on Obligational Relationships of the former Socialist Federal Republic of Kosovo, and the Civil Code of the Republic of Albania. The normative method was necessary, since the topic of the study is about legal norms.

  11. Strong Maximum Principle for Multi-Term Time-Fractional Diffusion Equations and its Application to an Inverse Source Problem

    OpenAIRE

    Liu, Yikan

    2015-01-01

    In this paper, we establish a strong maximum principle for fractional diffusion equations with multiple Caputo derivatives in time, and investigate a related inverse problem of practical importance. Exploiting the solution properties and the involved multinomial Mittag-Leffler functions, we improve the weak maximum principle for the multi-term time-fractional diffusion equation to a stronger one, which is parallel to that for its single-term counterpart as expected. As a direct application, w...
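
    For orientation, a standard form of the multi-term time-fractional diffusion problem discussed above is sketched below (generic notation, not necessarily that of the paper):

```latex
% Multi-term time-fractional diffusion equation (generic form)
\sum_{j=1}^{m} q_j\,\partial_t^{\alpha_j} u(x,t) = \Delta u(x,t) + F(x,t),
\qquad 0 < \alpha_m \le \dots \le \alpha_1 < 1,\quad q_j > 0,

% where each \partial_t^{\alpha} denotes the Caputo derivative
\partial_t^{\alpha} u(x,t) = \frac{1}{\Gamma(1-\alpha)}
  \int_0^{t} (t-s)^{-\alpha}\,\frac{\partial u}{\partial s}(x,s)\,ds .
```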

  12. GAUGE PRINCIPLE AND VARIATIONAL FORMULATION FOR FLOWS OF AN IDEAL FLUID

    Institute of Scientific and Technical Information of China (English)

    KAMBE Tsutomu

    2003-01-01

    A gauge principle is applied to mass flows of an ideal compressible fluid subject to Galilei transformation. A free-field Lagrangian defined at the outset is invariant with respect to global SO(3) gauge transformations as well as Galilei transformations. The action principle leads to the equation of potential flows under constraint of a continuity equation. However, the irrotational flow is not invariant with respect to local SO(3) gauge transformations. According to the gauge principle, a gauge-covariant derivative is defined by introducing a new gauge field. Galilei invariance of the derivative requires the gauge field to coincide with the vorticity, i.e. the curl of the velocity field. A full gauge-covariant variational formulation is proposed on the basis of Hamilton's principle and an associated Lagrangian. By means of an isentropic material variation taking into account individual particle motion, Euler's equation of motion is derived for isentropic flows by using the covariant derivative. Noether's law associated with global SO(3) gauge invariance leads to the conservation of total angular momentum. In addition, the Lagrangian has a local symmetry of particle permutation which results in a local conservation law equivalent to the vorticity equation.
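
    For reference, the gauge-covariant (material) derivative and the equations the abstract refers to take the following standard forms; the notation here is generic and may differ from the author's:

```latex
% Material (convective) derivative used as the gauge-covariant derivative
\frac{D}{Dt} = \partial_t + (\mathbf{v}\cdot\nabla)

% Euler equation of motion for isentropic (barotropic) flow
\partial_t \mathbf{v} + (\mathbf{v}\cdot\nabla)\mathbf{v} = -\frac{1}{\rho}\,\nabla p

% Local conservation law equivalent to the vorticity equation, with the vorticity as the curl of the velocity
\partial_t \boldsymbol{\omega} = \nabla\times(\mathbf{v}\times\boldsymbol{\omega}),
\qquad \boldsymbol{\omega} = \nabla\times\mathbf{v}
```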

  13. The Entropy Principle from Continuum Mechanics to Hyperbolic Systems of Balance Laws: The Modern Theory of Extended Thermodynamics

    Directory of Open Access Journals (Sweden)

    Tommaso Ruggeri

    2008-09-01

    Full Text Available We discuss the different roles of the entropy principle in modern thermodynamics. We start with the approach of rational thermodynamics, in which the entropy principle becomes a selection rule for physical constitutive equations. Then we discuss the entropy principle as a means of selecting admissible discontinuous weak solutions and of symmetrizing general systems of hyperbolic balance laws. Particular attention is given to the local and global well-posedness of the relative Cauchy problem for smooth solutions. Examples are given in the case of extended thermodynamics for rarefied gases and in the case of a multi-temperature mixture of fluids.

  14. History of Weak Interactions

    Science.gov (United States)

    Lee, T. D.

    1970-07-01

    While the phenomenon of beta-decay was discovered near the end of the last century, the notion that the weak interaction forms a separate field of physical forces evolved rather gradually. This became clear only after the experimental discoveries of other weak reactions such as muon-decay, muon-capture, etc., and the theoretical observation that all these reactions can be described by approximately the same coupling constant, thus giving rise to the notion of a universal weak interaction. Only then did one slowly recognize that the weak interaction force forms an independent field, perhaps on the same footing as the gravitational force, the electromagnetic force, and the strong nuclear and sub-nuclear forces.

  15. Maximum principles for boundary-degenerate linear parabolic differential operators

    OpenAIRE

    Feehan, Paul M. N.

    2013-01-01

    We develop weak and strong maximum principles for boundary-degenerate, linear, parabolic, second-order partial differential operators, $Lu := -u_t - \mathrm{tr}(aD^2u) - \langle b, Du\rangle + cu$, with partial Dirichlet boundary conditions. The coefficient $a(t,x)$ is assumed to vanish along a non-empty open subset, $\partial_0 Q$, called the degenerate boundary portion, of the parabolic boundary, $\partial Q$, of the domain $Q \subset \mathbb{R}^{d+1}$, while $a(t,x)$ may be non-zero at po...

  16. Logically automorphically equivalent knowledge bases

    OpenAIRE

    Aladova, Elena; Plotkin, Tatjana

    2017-01-01

    Knowledge base theory provides an important example of a field where applications of universal algebra and algebraic logic look very natural, and where their interaction with practical problems arising in computer science might be very productive. In this paper we study the equivalence problem for knowledge bases. Our interest is to find out how informational equivalence is related to the logical description of knowledge. Studying various equivalences of knowledge bases allows us to compare d...

  17. Introduction to First-Principles Electronic Structure Methods: Application to Actinide Materials

    International Nuclear Information System (INIS)

    Klepeis, J E

    2005-01-01

    The purpose of this paper is to provide an introduction for non-experts to first-principles electronic structure methods that are widely used in the field of condensed-matter physics, including applications to actinide materials. The methods I describe are based on density functional theory (DFT) within the local density approximation (LDA) and the generalized gradient approximation (GGA). In addition to explaining the meaning of this terminology I also describe the underlying theory itself in some detail in order to enable a better understanding of the relative strengths and weaknesses of the methods. I briefly mention some particular numerical implementations of DFT, including the linear muffin-tin orbital (LMTO), linear augmented plane wave (LAPW), and pseudopotential methods, as well as general methodologies that go beyond DFT and specifically address some of the weaknesses of the theory. The last third of the paper is devoted to a few selected applications that illustrate the ideas discussed in the first two-thirds. In particular, I conclude by addressing the current controversy regarding magnetic DFT calculations for actinide materials. Throughout this paper particular emphasis is placed on providing the appropriate background to enable the non-expert to gain a better appreciation of the application of first-principles electronic structure methods to the study of actinide and other materials
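
    As a concrete illustration of the "local density approximation" terminology used above, the following minimal Python sketch evaluates the LDA exchange energy for a toy electron density on a grid (Hartree atomic units). This is only the exchange piece of an LDA functional, not a DFT code, and the density is an arbitrary assumption for demonstration.

```python
import numpy as np

def lda_exchange_energy_density(n):
    """LDA exchange energy per electron: eps_x(n) = -(3/4) * (3/pi)**(1/3) * n**(1/3)."""
    return -0.75 * (3.0 / np.pi) ** (1.0 / 3.0) * np.cbrt(n)

def lda_exchange_energy(n, dv):
    """Total exchange energy E_x = sum_i n_i * eps_x(n_i) * dV on a uniform grid."""
    return float(np.sum(n * lda_exchange_energy_density(n)) * dv)

# Toy density: a Gaussian normalized to 2 electrons in a cubic box (bohr).
L, N = 10.0, 32
x = np.linspace(-L / 2, L / 2, N)
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
n = np.exp(-(X**2 + Y**2 + Z**2))
dv = (L / (N - 1)) ** 3
n *= 2.0 / (np.sum(n) * dv)          # enforce the electron count

print("electrons    :", round(float(np.sum(n) * dv), 6))
print("E_x (LDA, Ha):", lda_exchange_energy(n, dv))
```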

  18. Implication of two-coupled differential Van der Pol Duffing oscillator in weak signal detection

    International Nuclear Information System (INIS)

    Peng Hanghang; Xu Xuemei; Yang Bingchu; Yin Linzi

    2016-01-01

    The principle of the Van der Pol Duffing oscillator for state transition and for determining critical value is described, which has been studied to indicate that the application of the Van der Pol Duffing oscillator in weak signal detection is feasible. On the basis of this principle, an improved two-coupled differential Van der Pol Duffing oscillator is proposed which can identify signals under any frequency and ameliorate signal-to-noise ratio (SNR). The analytical methods of the proposed model and the construction of the proposed oscillator are introduced in detail. Numerical experiments on the properties of the proposed oscillator compared with those of the Van der Pol Duffing oscillator are carried out. Our numerical simulations have confirmed the analytical treatment. The results demonstrate that this novel oscillator has better detection performance than the Van der Pol Duffing oscillator. (author)
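
    A rough sketch of the detection principle described above is given below: a Van der Pol-Duffing oscillator is driven by a reference force near its critical amplitude, and a weak sinusoid is added to the drive; a change in the steady-state oscillation indicates the presence of the weak signal. The generic single-oscillator form and all parameter values are assumptions for illustration (the reference drive would need tuning to sit just below the critical value), not the authors' improved two-coupled system.

```python
import numpy as np
from scipy.integrate import solve_ivp

def vdp_duffing(t, y, mu, alpha, beta, F, omega, a_weak, omega_weak):
    """x'' - mu*(1 - x**2)*x' + alpha*x + beta*x**3 = F*cos(omega*t) + a_weak*cos(omega_weak*t)."""
    x, v = y
    drive = F * np.cos(omega * t) + a_weak * np.cos(omega_weak * t)
    return [v, mu * (1.0 - x**2) * v - alpha * x - beta * x**3 + drive]

def late_time_amplitude(a_weak):
    """Integrate past the transient and return the late-time peak-to-peak amplitude."""
    sol = solve_ivp(vdp_duffing, (0.0, 200.0), [0.0, 0.0],
                    args=(0.2, -1.0, 1.0, 0.35, 1.0, a_weak, 1.0), max_step=0.05)
    tail = sol.y[0][sol.t > 150.0]
    return tail.max() - tail.min()

print("reference drive only :", late_time_amplitude(0.0))
print("with weak signal     :", late_time_amplitude(0.01))
```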

  19. Implication of Two-Coupled Differential Van der Pol Duffing Oscillator in Weak Signal Detection

    Science.gov (United States)

    Peng, Hang-hang; Xu, Xue-mei; Yang, Bing-chu; Yin, Lin-zi

    2016-04-01

    The principle of the Van der Pol Duffing oscillator for state transition and for determining critical value is described, which has been studied to indicate that the application of the Van der Pol Duffing oscillator in weak signal detection is feasible. On the basis of this principle, an improved two-coupled differential Van der Pol Duffing oscillator is proposed which can identify signals under any frequency and ameliorate signal-to-noise ratio (SNR). The analytical methods of the proposed model and the construction of the proposed oscillator are introduced in detail. Numerical experiments on the properties of the proposed oscillator compared with those of the Van der Pol Duffing oscillator are carried out. Our numerical simulations have confirmed the analytical treatment. The results demonstrate that this novel oscillator has better detection performance than the Van der Pol Duffing oscillator.

  20. Weakly nonlocal symplectic structures, Whitham method and weakly nonlocal symplectic structures of hydrodynamic type

    International Nuclear Information System (INIS)

    Maltsev, A Ya

    2005-01-01

    We consider the special type of field-theoretical symplectic structures called weakly nonlocal. The structures of this type are, in particular, very common for integrable systems such as KdV or NLS. We introduce here the special class of weakly nonlocal symplectic structures which we call weakly nonlocal symplectic structures of hydrodynamic type. We investigate then the connection of such structures with the Whitham averaging method and propose the procedure of 'averaging' the weakly nonlocal symplectic structures. The averaging procedure gives the weakly nonlocal symplectic structure of hydrodynamic type for the corresponding Whitham system. The procedure also gives 'action variables' corresponding to the wave numbers of m-phase solutions of the initial system which give the additional conservation laws for the Whitham system

  1. The Method of Calculating the Settlement of Weak Ground Strengthened with the Reinforced Sandy Piles

    Directory of Open Access Journals (Sweden)

    Maltseva Tatyana

    2016-01-01

    Full Text Available The paper presents an engineering method for calculating the settlement of a weak clay base strengthened with sandy piles reinforced along the contour. The method is based on the principle of layer-by-layer summation, which is used when designing bases and foundations; a sketch of this principle is given below. The novelty of the suggested method lies in taking into account the soil reaction along the lateral surface of the pile and the impact of external vertical loads on the vertical displacement of the base.
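
    The layer-by-layer summation principle mentioned above can be written in a few lines of Python (generic form only; it omits the authors' corrections for the soil reaction along the pile surface, and the numbers and the coefficient beta = 0.8 are illustrative assumptions):

```python
def settlement_layer_summation(sigma_z, thickness, E, beta=0.8):
    """Settlement s = beta * sum_i (sigma_z_i * h_i / E_i), where sigma_z_i is the additional
    vertical stress at mid-depth of layer i (kPa), h_i its thickness (m) and E_i its
    deformation modulus (kPa); beta is a dimensionless stress-distribution coefficient."""
    return beta * sum(s * h / e for s, h, e in zip(sigma_z, thickness, E))

# Illustrative three-layer profile of a weak clay base (made-up numbers).
sigma_z   = [120.0, 80.0, 40.0]       # kPa
thickness = [1.0, 1.0, 1.5]           # m
E         = [4000.0, 5000.0, 7000.0]  # kPa
print("settlement = %.3f m" % settlement_layer_summation(sigma_z, thickness, E))
```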

  2. Ambient dose equivalent H*(d) - an appropriate philosophy for radiation monitoring onboard aircraft and in space?

    International Nuclear Information System (INIS)

    Vana, N.; Hajek, M.; Berger, T.

    2003-01-01

    In this paper the authors deal with the ambient dose equivalent H*(d) and its application onboard aircraft and space stations. The discussion and the experiments carried out demonstrate that the philosophy of H*(10) leads to an underestimation of the whole-body radiation exposure when applied onboard aircraft and in space. Consideration therefore has to be given to introducing a new concept that could be based on microdosimetric principles, offering the unique potential of a more direct correlation to radiobiological parameters

  3. Wormholes, emergent gauge fields, and the weak gravity conjecture

    Energy Technology Data Exchange (ETDEWEB)

    Harlow, Daniel [Center for the Fundamental Laws of Nature, Physics Department, Harvard University,Cambridge MA, 02138 (United States)

    2016-01-20

    This paper revisits the question of reconstructing bulk gauge fields as boundary operators in AdS/CFT. In the presence of the wormhole dual to the thermofield double state of two CFTs, the existence of bulk gauge fields is in some tension with the microscopic tensor factorization of the Hilbert space. I explain how this tension can be resolved by splitting the gauge field into charged constituents, and I argue that this leads to a new argument for the “principle of completeness”, which states that the charge lattice of a gauge theory coupled to gravity must be fully populated. I also claim that it leads to a new motivation for (and a clarification of) the “weak gravity conjecture”, which I interpret as a strengthening of this principle. This setup gives a simple example of a situation where describing low-energy bulk physics in CFT language requires knowledge of high-energy bulk physics. This contradicts to some extent the notion of “effective conformal field theory”, but in fact is an expected feature of the resolution of the black hole information problem. An analogous factorization issue exists also for the gravitational field, and I comment on several of its implications for reconstructing black hole interiors and the emergence of spacetime more generally.

  4. DETECTION OF REST-FRAME OPTICAL LINES FROM X-SHOOTER SPECTROSCOPY OF WEAK EMISSION-LINE QUASARS

    International Nuclear Information System (INIS)

    Plotkin, Richard M.; Gallo, Elena; Shemmer, Ohad; Trakhtenbrot, Benny; Anderson, Scott F.; Brandt, W. N.; Luo, Bin; Schneider, Donald P.; Fan, Xiaohui; Lira, Paulina; Richards, Gordon T.; Strauss, Michael A.; Wu, Jianfeng

    2015-01-01

    Over the past 15 yr, examples of exotic radio-quiet quasars with intrinsically weak or absent broad emission line regions (BELRs) have emerged from large-scale spectroscopic sky surveys. Here, we present spectroscopy of seven such weak emission line quasars (WLQs) at moderate redshifts (z = 1.4–1.7) using the X-shooter spectrograph, which provides simultaneous optical and near-infrared spectroscopy covering the rest-frame ultraviolet (UV) through optical. These new observations effectively double the number of WLQs with spectroscopy in the optical rest-frame, and they allow us to compare the strengths of (weak) high-ionization emission lines (e.g., C iv) to low-ionization lines (e.g., Mg ii, Hβ, Hα) in individual objects. We detect broad Hβ and Hα emission in all objects, and these lines are generally toward the weaker end of the distribution expected for typical quasars (e.g., Hβ has rest-frame equivalent widths ranging from 15–40 Å). However, these low-ionization lines are not exceptionally weak, as is the case for high-ionization lines in WLQs. The X-shooter spectra also display relatively strong optical Fe ii emission, Hβ FWHM ≲ 4000 km s −1 , and significant C iv blueshifts (≈1000–5500 km s −1 ) relative to the systemic redshift; two spectra also show elevated UV Fe ii emission, and an outflowing component to their (weak) Mg ii emission lines. These properties suggest that WLQs are exotic versions of “wind-dominated” quasars. Their BELRs either have unusual high-ionization components, or their BELRs are in an atypical photoionization state because of an unusually soft continuum

  5. The Legal Cause of Unfair Terms

    Directory of Open Access Journals (Sweden)

    Maximiliano Arango Grajales

    2016-01-01

    Full Text Available Unfair terms lie outside the field of abuse: there is no potential risk of damage and no injury caused. Unfair terms belong instead to the field of the principle of equivalence of the contract, and it is through this principle that the criterion of regulatory imbalance of the contract takes on meaning. The correction of such unfair clauses does not depend on weaker parties or on abuse, but rather on the existence of a breach of equivalence: an absence of consideration in the contract.

  6. 46 CFR 175.540 - Equivalents.

    Science.gov (United States)

    2010-10-01

    ... Safety Management (ISM) Code (IMO Resolution A.741(18)) for the purpose of determining that an equivalent... Organization (IMO) “Code of Safety for High Speed Craft” as an equivalent to compliance with applicable...

  7. Weak interactions

    International Nuclear Information System (INIS)

    Bjorken, J.D.

    1978-01-01

    Weak interactions are studied from a phenomenological point of view, using a minimal number of theoretical hypotheses. Charged-current phenomenology and then neutral-current phenomenology are discussed. All of this is described in terms of a global SU(2) symmetry plus an electromagnetic correction. The intermediate-boson hypothesis is introduced and lower bounds on the range of the weak force are inferred. This phenomenology does not yet reconstruct all the predictions of the conventional SU(2)xU(1) gauge theory. To do that requires an additional assumption of restoration of SU(2) symmetry at asymptotic energies

  8. Wijsman Orlicz Asymptotically Ideal -Statistical Equivalent Sequences

    Directory of Open Access Journals (Sweden)

    Bipan Hazarika

    2013-01-01

    in Wijsman sense and present some definitions which are the natural combination of the definition of asymptotic equivalence, statistical equivalent, -statistical equivalent sequences in Wijsman sense. Finally, we introduce the notion of Cesaro Orlicz asymptotically -equivalent sequences in Wijsman sense and establish their relationship with other classes.

  9. The performance of low pressure tissue-equivalent chambers and a new method for parameterising the dose equivalent

    International Nuclear Information System (INIS)

    Eisen, Y.

    1986-01-01

    The performance of Rossi-type spherical tissue-equivalent chambers with equivalent diameters between 0.5 μm and 2 μm was tested experimentally using monoenergetic and polyenergetic neutron sources in the energy region of 10 keV to 14.5 MeV. In agreement with theoretical predictions both chambers failed to provide LET information at low neutron energies. A dose equivalent algorithm was derived that utilises the event distribution but does not attempt to correlate event size with LET. The algorithm was predicted theoretically and confirmed by experiment. The algorithm that was developed determines the neutron dose equivalent, from the data of the 0.5 μm chamber, to better than ±20% over the energy range of 30 keV to 14.5 MeV. The same algorithm also determines the dose equivalent from the data of the 2 μm chamber to better than ±20% over the energy range of 60 keV to 14.5 MeV. The efficiency of the chambers is 33 counts per μSv, or equivalently about 10 counts s⁻¹ per mSv·h⁻¹. This efficiency enables the measurement of dose equivalent rates above 1 mSv·h⁻¹ for an integration period of 3 s. Integrated dose equivalents can be measured as low as 1 μSv. (author)
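
    The quoted efficiency figures are mutually consistent, as a line of arithmetic confirms (a consistency check added here, not part of the original work):

```python
# 33 counts per microsievert; a rate of 1 mSv/h corresponds to 1000 uSv per 3600 s.
counts_per_uSv = 33.0
uSv_per_s_at_1mSv_per_h = 1000.0 / 3600.0

count_rate = counts_per_uSv * uSv_per_s_at_1mSv_per_h
print(f"count rate at 1 mSv/h : {count_rate:.1f} counts/s")   # ~9.2, i.e. about 10 counts/s
print(f"counts in a 3 s window: {count_rate * 3:.0f}")        # ~28 counts per integration period
```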

  10. Systematic review: role of acid, weakly acidic and weakly alkaline reflux in gastro-oesophageal reflux disease

    NARCIS (Netherlands)

    Boeckxstaens, G. E.; Smout, A.

    2010-01-01

    The importance of weakly acidic and weakly alkaline reflux in gastro-oesophageal reflux disease (GERD) is gaining recognition. To quantify the proportions of reflux episodes that are acidic (pH <4), weakly acidic (pH 4-7) and weakly alkaline (pH >7) in adult patients with GERD, and to evaluate their

  11. Noncommutative Common Cause Principles in algebraic quantum field theory

    International Nuclear Information System (INIS)

    Hofer-Szabó, Gábor; Vecsernyés, Péter

    2013-01-01

    States in algebraic quantum field theory “typically” establish correlation between spacelike separated events. Reichenbach's Common Cause Principle, generalized to the quantum field theoretical setting, offers an apt tool to causally account for these superluminal correlations. In the paper we motivate first why commutativity between the common cause and the correlating events should be abandoned in the definition of the common cause. Then we show that the Noncommutative Weak Common Cause Principle holds in algebraic quantum field theory with locally finite degrees of freedom. Namely, for any pair of projections A, B supported in spacelike separated regions V_A and V_B, respectively, there is a local projection C not necessarily commuting with A and B such that C is supported within the union of the backward light cones of V_A and V_B and the set {C, C^⊥} screens off the correlation between A and B.
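
    For readers unfamiliar with the terminology, the classical (commutative) Reichenbach conditions that a common cause C must satisfy for correlated events A and B are listed below in ordinary probabilistic form; the paper works with their noncommutative, algebraic generalization, where the expectations are taken in a state on the local algebras:

```latex
% Screening-off conditions
P(A \wedge B \mid C) = P(A \mid C)\,P(B \mid C), \qquad
P(A \wedge B \mid C^{\perp}) = P(A \mid C^{\perp})\,P(B \mid C^{\perp}),

% Positive statistical relevance
P(A \mid C) > P(A \mid C^{\perp}), \qquad
P(B \mid C) > P(B \mid C^{\perp}).
```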

  12. Weakly dynamic dark energy via metric-scalar couplings with torsion

    Energy Technology Data Exchange (ETDEWEB)

    Sur, Sourav; Bhatia, Arshdeep Singh, E-mail: sourav.sur@gmail.com, E-mail: arshdeepsb@gmail.com [Department of Physics and Astrophysics, University of Delhi, New Delhi, 110 007 (India)

    2017-07-01

    We study the dynamical aspects of dark energy in the context of a non-minimally coupled scalar field with curvature and torsion. Whereas the scalar field acts as the source of the trace mode of torsion, a suitable constraint on the torsion pseudo-trace provides a mass term for the scalar field in the effective action. In the equivalent scalar-tensor framework, we find explicit cosmological solutions representing dark energy in both Einstein and Jordan frames. We demand the dynamical evolution of the dark energy to be weak enough, so that the present-day values of the cosmological parameters could be estimated keeping them within the confidence limits set for the standard LCDM model from recent observations. For such estimates, we examine the variations of the effective matter density and the dark energy equation of state parameters over different redshift ranges. In spite of being weakly dynamic, the dark energy component differs significantly from the cosmological constant, both in characteristics and features, for e.g. it interacts with the cosmological (dust) fluid in the Einstein frame, and crosses the phantom barrier in the Jordan frame. We also obtain the upper bounds on the torsion mode parameters and the lower bound on the effective Brans-Dicke parameter. The latter turns out to be fairly large, and in agreement with the local gravity constraints, which therefore come in support of our analysis.

  13. Equivalence in Bilingual Lexicography: Criticism and Suggestions*

    Directory of Open Access Journals (Sweden)

    Herbert Ernst Wiegand

    2011-10-01

    Full Text Available

    Abstract: A reminder of general problems in the formation of terminology, as illustrated by the German Äquivalenz (Eng. equivalence) and äquivalent (Eng. equivalent), is followed by a critical discussion of the concept of equivalence in contrastive lexicology. It is shown that especially the concept of partial equivalence is contradictory in its different manifestations. Consequently, attempts are made to give a more precise indication of the concept of equivalence in metalexicography, with regard to the domain of the nominal lexicon. The problems of especially the metalexicographic concept of partial equivalence, as well as that of divergence, are fundamentally expounded. In conclusion, the direction is indicated in which more appropriate metalexicographic versions of the concept of equivalence may be found.

    Keywords: EQUIVALENCE, LEXICOGRAPHIC EQUIVALENT, PARTIAL EQUIVALENCE, CONGRUENCE, DIVERGENCE, CONVERGENCE, POLYDIVERGENCE, SYNTAGM-EQUIVALENCE, ZERO EQUIVALENCE, CORRESPONDENCE

    Abstract (translated from German): Equivalence in bilingual lexicography: criticism and suggestions. After recalling general problems of concept formation, using the example of German Äquivalenz and German äquivalent, concepts of equivalence in contrastive lexicology are first discussed critically. It is shown that in particular the concept of partial equivalence is contradictory in its various manifestations. Refinements of the concepts of equivalence in metalexicography are then attempted, relating to the domain of the nominal lexicon. In particular, the metalexicographic concept of partial equivalence, as well as that of divergence, is fundamentally problematized. In conclusion, an indication is given of the direction in which one can go to find more appropriate metalexicographic versions of the concept of equivalence.

    Keywords (translated from German): EQUIVALENCE, LEXICOGRAPHIC EQUIVALENT, PARTIAL EQUIVALENCE, CONGRUENCE, DIVERGENCE, CONVERGENCE, POLYDIVERGENCE

  14. Controllable surfaces of path interference in the multiphoton ionization of atoms by a weak trichromatic field

    International Nuclear Information System (INIS)

    Mercouris, Theodoros; Nicolaides, Cleanthes A

    2005-01-01

    Multiphoton detachment rates for the H⁻ 1S ground state irradiated by a weak trichromatic ac field consisting of the fundamental frequency ω = 0.272 eV and its second, third or fourth higher harmonics were computed from first principles. The weak intensities are in the range of 10⁷–10⁸ W cm⁻². The calculations incorporated systematically electronic structure and electron correlation effects. They were done by implementing a time-independent, nonperturbative many-electron, many-photon theory (MEMPT) which obtains cycle-averaged complex eigenvalues, whose real part gives the field-induced energy shift, Δ, and the imaginary part is the multiphoton ionization rate, Γ. Through analysis, plausible arguments and computation, we show that when the intensities are weak the dependence of Γ on phase differences is simple. Specifically, Γs are depicted in the form of plane surfaces, with minor ripples due to higher order ionization paths, in terms of trigonometric functions of the phase differences. This dependence is likely to be applicable to other atomic systems as well, and to provide a definition of the weak field regime in the trichromatic case. When the field intensities are such that higher order ionization paths become important, these dependences break down and we reach the strong field regime

  15. Weakly infinite-dimensional spaces

    International Nuclear Information System (INIS)

    Fedorchuk, Vitalii V

    2007-01-01

    In this survey article two new classes of spaces are considered: m-C-spaces and w-m-C-spaces, m=2,3,...,∞. They are intermediate between the class of weakly infinite-dimensional spaces in the Alexandroff sense and the class of C-spaces. The classes of 2-C-spaces and w-2-C-spaces coincide with the class of weakly infinite-dimensional spaces, while the compact ∞-C-spaces are exactly the C-compact spaces of Haver. The main results of the theory of weakly infinite-dimensional spaces, including classification via transfinite Lebesgue dimensions and Luzin-Sierpinsky indices, extend to these new classes of spaces. Weak m-C-spaces are characterised by means of essential maps to Henderson's m-compacta. The existence of hereditarily m-strongly infinite-dimensional spaces is proved.

  16. Acute muscular weakness in children

    Directory of Open Access Journals (Sweden)

    Ricardo Pablo Javier Erazo Torricelli

    Full Text Available ABSTRACT Acute muscle weakness in children is a pediatric emergency. During the diagnostic approach, it is crucial to obtain a detailed case history, including: onset of weakness, history of associated febrile states, ingestion of toxic substances/toxins, immunizations, and family history. Neurological examination must be meticulous as well. In this review, we describe the most common diseases related to acute muscle weakness, grouped into the site of origin (from the upper motor neuron to the motor unit. Early detection of hyperCKemia may lead to a myositis diagnosis, and hypokalemia points to the diagnosis of periodic paralysis. Ophthalmoparesis, ptosis and bulbar signs are suggestive of myasthenia gravis or botulism. Distal weakness and hyporeflexia are clinical features of Guillain-Barré syndrome, the most frequent cause of acute muscle weakness. If all studies are normal, a psychogenic cause should be considered. Finding the etiology of acute muscle weakness is essential to execute treatment in a timely manner, improving the prognosis of affected children.

  17. Contesting the Equivalency of Continuous Sedation until Death and Physician-assisted Suicide/Euthanasia: A Commentary on LiPuma.

    Science.gov (United States)

    Raho, Joseph A; Miccinesi, Guido

    2015-10-01

    Patients who are imminently dying sometimes experience symptoms refractory to traditional palliative interventions, and in rare cases, continuous sedation is offered. Samuel H. LiPuma, in a recent article in this Journal, argues that continuous sedation until death is equivalent to physician-assisted suicide/euthanasia based on a higher brain neocortical definition of death. We contest his position that continuous sedation involves killing and offer four objections to the equivalency thesis. First, sedation practices are proportional in a way that physician-assisted suicide/euthanasia is not. Second, continuous sedation may not entirely abolish consciousness. Third, LiPuma's particular version of higher brain neocortical death relies on an implausibly weak construal of irreversibility--a position that is especially problematic in the case of continuous sedation. Finally, we explain why continuous sedation until death is not functionally equivalent to neocortical death and, hence, physician-assisted suicide/euthanasia. Concluding remarks review the differences between these two end-of-life practices. © The Author 2015. Published by Oxford University Press, on behalf of the Journal of Medicine and Philosophy Inc. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  18. Non-equivalent stringency of ethical review in the Baltic States: a sign of a systematic problem in Europe?

    Science.gov (United States)

    Gefenas, E; Dranseika, V; Cekanauskaite, A; Hug, K; Mezinska, S; Peicius, E; Silis, V; Soosaar, A; Strosberg, M

    2011-01-01

    We analyse the system of ethical review of human research in the Baltic States by introducing the principle of equivalent stringency of ethical review, that is, research projects imposing equal risks and inconveniences on research participants should be subjected to equally stringent review procedures. We examine several examples of non-equivalence or asymmetry in the system of ethical review of human research: (1) the asymmetry between rather strict regulations of clinical drug trials and relatively weaker regulations of other types of clinical biomedical research and (2) gaps in ethical review in the area of non-biomedical human research where some sensitive research projects are not reviewed by research ethics committees at all. We conclude that non-equivalent stringency of ethical review is at least partly linked to the differences in scope and binding character of various international legal instruments that have been shaping the system of ethical review in the Baltic States. Therefore, the Baltic example could also serve as an object lesson to other European countries which might be experiencing similar problems. PMID:20606000

  19. Weak openness and almost openness

    Directory of Open Access Journals (Sweden)

    David A. Rose

    1984-01-01

    Full Text Available Weak openness and almost openness for arbitrary functions between topological spaces are defined as duals to the weak continuity of Levine and the almost continuity of Husain respectively. Independence of these two openness conditions is noted and comparison is made between these and the almost openness of Singal and Singal. Some results dual to those known for weak continuity and almost continuity are obtained. Nearly almost openness is defined and used to obtain an improved link from weak continuity to almost continuity.

  20. Weak values in collision theory

    Science.gov (United States)

    de Castro, Leonardo Andreta; Brasil, Carlos Alexandre; Napolitano, Reginaldo de Jesus

    2018-05-01

    Weak measurements have an increasing number of applications in contemporary quantum mechanics. They were originally described as a weak interaction that slightly entangled the translational degrees of freedom of a particle to its spin, yielding surprising results after post-selection. That description often ignores the kinetic energy of the particle and its movement in three dimensions. Here, we include these elements and re-obtain the weak values within the context of collision theory by two different approaches, and prove that the results are compatible with each other and with the results from the traditional approach. To provide a more complete description, we generalize weak values into weak tensors and use them to provide a more realistic description of the Stern-Gerlach apparatus.

  1. Electromagnetic current in weak interactions

    International Nuclear Information System (INIS)

    Ma, E.

    1983-01-01

    In gauge models which unify weak and electromagnetic interactions, the weak neutral-current interaction also involves the electromagnetic current. The exact nature of such a component can be explored using e + e - experimental data. In recent years, the existence of a new component of the weak interaction has become firmly established, i.e., the neutral-current interaction. As such, it competes with the electromagnetic interaction whenever the particles involved are also charged, but at a very much lower rate because its effective strength is so small. Hence neutrino processes are best for the detection of the neutral-current interaction. However, in any gauge model which unifies weak and electromagnetic interactions, the weak neutral-current interaction also involves the electromagnetic current

  2. SAPONIFICATION EQUIVALENT OF DASAMULA TAILA

    OpenAIRE

    Saxena, R. B.

    1994-01-01

    Saponification equivalent values of Dasamula taila are very useful for the technical and analytical work. It gives the mean molecular weight of the glycerides and acids present in Dasamula Taila. Saponification equivalent values of Dasamula taila are reported in different packings.

  3. Saponification equivalent of dasamula taila.

    Science.gov (United States)

    Saxena, R B

    1994-07-01

    Saponification equivalent values of Dasamula taila are very useful for the technical and analytical work. It gives the mean molecular weight of the glycerides and acids present in Dasamula Taila. Saponification equivalent values of Dasamula taila are reported in different packings.

  4. Some spectral equivalences between Schroedinger operators

    International Nuclear Information System (INIS)

    Dunning, C; Hibberd, K E; Links, J

    2008-01-01

    Spectral equivalences of the quasi-exactly solvable sectors of two classes of Schroedinger operators are established, using Gaudin-type Bethe ansatz equations. In some instances the results can be extended leading to full isospectrality. In this manner we obtain equivalences between PT-symmetric problems and Hermitian problems. We also find equivalences between some classes of Hermitian operators

  5. First principles investigation of interaction between impurity atom (Si, Ge, Sn) and carbon atom in diamond-like carbon system

    International Nuclear Information System (INIS)

    Li, Xiaowei; Wang, Aiying; Lee, Kwang-Ryeol

    2012-01-01

    The interaction between impurity atoms (Si, Ge, and Sn) and carbon atoms in a diamond-like carbon (DLC) system was investigated by a first-principles simulation method based on density functional theory. The tetrahedral configuration was selected as the calculation model for simplicity. When the bond angle varied in a range of 90°–130° from the equilibrium state of 109.471°, the distortion energy and the electronic structures, including the charge density of the highest occupied molecular orbital (HOMO) and the partial density of states (PDOS), in the different systems were calculated. The results showed that the addition of Si, Ge and Sn atoms into the amorphous carbon matrix significantly decreased the distortion energy of the system as the bond angles deviated from the equilibrium one. Further studies of the HOMO and PDOS indicated that a weak covalent bond between Si (Ge, Sn) and C atoms was formed with decreased strength and directionality, which were influenced by the difference in electronegativity. These results implied that the electron transfer behavior at the junction of carbon nano-devices could be tailored by the impurity element, and the compressive stress in DLC films could be reduced by the incorporation of Si, Ge and Sn because of the formation of weaker covalent bonds. - Highlights: (1) The distortion energy after bond-angle distortion was decreased compared with the C-C unit. (2) A weak covalent bond was formed between impurity atoms and corner carbon atoms. (3) The observed electron transfer behavior affected the strength and directionality of the bond. (4) The reduction in the strength and directionality of the bond contributed to the small energy change.

  6. Seismic equivalents of volcanic jet scaling laws and multipoles in acoustics

    Science.gov (United States)

    Haney, Matthew M.; Matoza, Robin S.; Fee, David; Aldridge, David F.

    2018-04-01

    We establish analogies between equivalent source theory in seismology (moment-tensor and single-force sources) and acoustics (monopoles, dipoles and quadrupoles) in the context of volcanic eruption signals. Although infrasound (acoustic waves below 20 Hz) from volcanic eruptions may be more complex than a simple monopole, dipole or quadrupole assumption, these elementary acoustic sources are a logical place to begin exploring relations with seismic sources. By considering the radiated power of a harmonic force source at the surface of an elastic half-space, we show that a volcanic jet or plume modelled as a seismic force has similar scaling with respect to eruption parameters (e.g. exit velocity and vent area) as an acoustic dipole. We support this by demonstrating, from first principles, a fundamental relationship that ties together explosion, torque and force sources in seismology and highlights the underlying dipole nature of seismic forces. This forges a connection between the multipole expansion of equivalent sources in acoustics and the use of forces and moments as equivalent sources in seismology. We further show that volcanic infrasound monopole and quadrupole sources exhibit scalings similar to seismicity radiated by volume injection and moment sources, respectively. We describe a scaling theory for seismic tremor during volcanic eruptions that agrees with observations showing a linear relation between radiated power of tremor and eruption rate. Volcanic tremor over the first 17 hr of the 2016 eruption at Pavlof Volcano, Alaska, obeyed the linear relation. Subsequent tremor during the main phase of the eruption did not obey the linear relation and demonstrates that volcanic eruption tremor can exhibit other scalings even during the same eruption.

  7. Weak interactions with nuclei

    International Nuclear Information System (INIS)

    Walecka, J.D.

    1983-01-01

    Nuclei provide systems where the strong, electromagnetic, and weak interactions are all present. The current picture of the strong interactions is based on quarks and quantum chromodynamics (QCD). The symmetry structure of this theory is SU(3)_C x SU(2)_W x U(1)_W. The electroweak interactions in nuclei can be used to probe this structure. Semileptonic weak interactions are considered. The processes under consideration include beta decay, neutrino scattering and weak neutral-current interactions. The starting point in the analysis is the effective Lagrangian of the Standard Model

  8. Gauge equivalence of the Gross Pitaevskii equation and the equivalent Heisenberg spin chain

    Science.gov (United States)

    Radha, R.; Kumar, V. Ramesh

    2007-11-01

    In this paper, we construct an equivalent spin chain for the Gross-Pitaevskii equation with quadratic potential and exponentially varying scattering lengths using gauge equivalence. We have then generated the soliton solutions for the spin components S3 and S-. We find that the spin solitons for S3 and S- can be compressed for exponentially growing eigenvalues while they broaden out for decaying eigenvalues.

  9. Spectroscopic and polarimetric study of radio-quiet weak emission line quasars

    Science.gov (United States)

    Kumar, Parveen; Chand, Hum; Gopal-Krishna; Srianand, Raghunathan; Stalin, Chelliah Subramonian; Petitjean, Patrick

    2018-04-01

    A small subset of optically selected radio-quiet QSOs with weak or no emission lines may turn out to be the elusive radio-quiet BL Lac objects, or simply be radio-quiet QSOs with an infant/shielded broad line region (BLR). High polarisation (p > 3-4%), a hallmark of BL Lacs, can be used to test whether some optically selected ‘radio-quiet weak emission line QSOs’ (RQWLQs) show a fractional polarisation high enough to qualify as radio-quiet analogues of BL Lac objects. To check this possibility, we have made optical spectral and polarisation measurements of a sample of 19 RQWLQs. Out of these, only 9 sources show a non-significant proper motion (hence very likely extragalactic) and only two of them are found to have p > 1%. For these two RQWLQs, namely J142505.59+035336.2 and J154515.77+003235.2, we found the highest polarization to be 1.59±0.53%, which is again too low to classify them as (radio-quiet) BL Lacs, although one may recall that even genuine BL Lacs sometimes appear weakly polarised. We also present a statistical comparison of the optical spectral index, for a sample of 45 RQWLQs with redshift-luminosity matched control samples of 900 QSOs and an equivalent sample of 120 blazars, assembled from the literature. The spectral index distribution of RQWLQs is found to differ, at a high significance level, from that of blazars. This, too, is consistent with the common view that the mechanism of the central engine in RQWLQs, as a population, is close to that operating in normal QSOs and the primary difference between them is related to the BLR.

  10. Lorentz invariance on trial in the weak decay of polarized atoms

    Energy Technology Data Exchange (ETDEWEB)

    Mueller, Stefan E., E-mail: s.mueller@kvi.nl [Kernfysisch Versneller Instituut (Netherlands)

    2013-03-15

    One of the most fundamental principles underlying our current understanding of nature is the invariance of the laws of physics under Lorentz transformations. Theories trying to unify the Standard Model with quantum gravity suggest that this invariance may be broken by the presence of Lorentz-violating background fields. Dedicated high-precision experiments at low energies could observe such suppressed signals from the Planck scale. At KVI, a test on Lorentz invariance of the weak interaction is performed searching for a dependence of the decay rate of spin-polarized nuclei on the orientation of their spin with respect to a fixed absolute galactical reference frame. An observation of such a dependence would imply a violation of Lorentz invariance.

  11. Weak C* Hopf Symmetry

    OpenAIRE

    Rehren, K. -H.

    1996-01-01

    Weak C* Hopf algebras can act as global symmetries in low-dimensional quantum field theories, when braid group statistics prevents group symmetries. Possibilities to construct field algebras with weak C* Hopf symmetry from a given theory of local observables are discussed.

  12. A study on lead equivalent

    International Nuclear Information System (INIS)

    Lin Guanxin

    1991-01-01

    A study of the rules by which the lead equivalent of lead glass changes with the energy of X rays or γ rays is described. The reason for this change is discussed and a new testing method for lead equivalent is suggested

  13. Governance of nanotechnology and nanomaterials: principles, regulation, and renegotiating the social contract.

    Science.gov (United States)

    Kimbrell, George A

    2009-01-01

    Good governance for nanotechnology and nanomaterials is predicated on principles of general good governance. This paper discusses what lessons we can learn from the oversight of past emerging technologies in formulating these principles. Nanotechnology provides us with a valuable opportunity to apply these lessons and a duty to avoid repeating past mistakes. Doing so will require mandatory regulation, grounded in precaution, that takes into account the uniqueness of nanomaterials. Moreover, this policy dialogue is not taking place in a vacuum. In applying the lessons of the past, nanotechnology provides a window to renegotiate our public's social contract on chemicals, health, the environment, and risks. Emerging technologies illuminate structural weaknesses, providing a crucial chance to ameliorate lingering regulatory inadequacies and provide much needed updates of existing laws.

  14. Does the Equivalence between Gravitational Mass and Energy Survive for a Quantum Body?

    Directory of Open Access Journals (Sweden)

    Lebed A. G.

    2012-10-01

    Full Text Available We consider the simplest quantum composite body, a hydrogen atom, in the presence of a weak external gravitational field. We show that the passive gravitational mass operator of the atom in the post-Newtonian approximation of general relativity does not commute with its energy operator, taken in the absence of the field. Nevertheless, the equivalence between the expectation values of passive gravitational mass and energy is shown to survive at a macroscopic level for stationary quantum states. Breakdown of the equivalence between passive gravitational mass and energy at a microscopic level for stationary quantum states can be experimentally detected by studying unusual electromagnetic radiation, emitted by atoms supported and moved in the Earth's gravitational field with constant velocity, using a spacecraft or satellite.

  15. Analytical and numerical construction of equivalent cables.

    Science.gov (United States)

    Lindsay, K A; Rosenberg, J R; Tucker, G

    2003-08-01

    The mathematical complexity experienced when applying cable theory to arbitrarily branched dendrites has led to the development of a simple representation of any branched dendrite called the equivalent cable. The equivalent cable is an unbranched model of a dendrite and a one-to-one mapping of potentials and currents on the branched model to those on the unbranched model, and vice versa. The piecewise uniform cable, with a symmetrised tri-diagonal system matrix, is shown to represent the canonical form for an equivalent cable. Through a novel application of the Laplace transform it is demonstrated that an arbitrary branched model of a dendrite can be transformed to the canonical form of an equivalent cable. The characteristic properties of the equivalent cable are extracted from the matrix for the transformed branched model. The one-to-one mapping follows automatically from the construction of the equivalent cable. The equivalent cable is used to provide a new procedure for characterising the location of synaptic contacts on spinal interneurons.
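
    A minimal numerical sketch of the canonical form referred to above is given below: a uniform passive cable discretized into compartments yields a symmetric tridiagonal system matrix. The discretization, boundary treatment and conductance values are illustrative assumptions, not the authors' construction of the equivalent cable.

```python
import numpy as np

def uniform_cable_matrix(n, g_axial=1.0, g_leak=0.1):
    """Symmetric tridiagonal matrix A of a uniform passive cable with sealed ends,
    written as dV/dt = A V + input (membrane capacitance normalized to 1)."""
    A = np.zeros((n, n))
    for i in range(n):
        A[i, i] = -g_leak
        if i > 0:
            A[i, i - 1] = g_axial
            A[i, i] -= g_axial
        if i < n - 1:
            A[i, i + 1] = g_axial
            A[i, i] -= g_axial
    return A

A = uniform_cable_matrix(6)
print("symmetric tridiagonal:", np.allclose(A, A.T))
print("eigenvalues (all negative for a passive cable):")
print(np.sort(np.linalg.eigvalsh(A)))
```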

  16. Establishing Substantial Equivalence: Transcriptomics

    Science.gov (United States)

    Baudo, María Marcela; Powers, Stephen J.; Mitchell, Rowan A. C.; Shewry, Peter R.

    Regulatory authorities in Western Europe require transgenic crops to be substantially equivalent to conventionally bred forms if they are to be approved for commercial production. One way to establish substantial equivalence is to compare the transcript profiles of developing grain and other tissues of transgenic and conventionally bred lines, in order to identify any unintended effects of the transformation process. We present detailed protocols for transcriptomic comparisons of developing wheat grain and leaf material, and illustrate their use by reference to our own studies of lines transformed to express additional gluten protein genes controlled by their own endosperm-specific promoters. The results show that the transgenes present in these lines (which included those encoding marker genes) did not have any significant unpredicted effects on the expression of endogenous genes and that the transgenic plants were therefore substantially equivalent to the corresponding parental lines.

  17. On uncertainties in definition of dose equivalent

    International Nuclear Information System (INIS)

    Oda, Keiji

    1995-01-01

    The author has always entertained the doubt whether, in a neutron field, if the value of the absorbed dose measured with a tissue equivalent ionization chamber is 1.02±0.01 mGy, the dose equivalent may be taken as 10.2±0.1 mSv. Should it be 10.2 or 11? The author considers it to be 10 or 20. Even if effort is exerted on the precision measurement of the absorbed dose, if the coefficient by which it is multiplied is not precise, the result is meaningless. [Absorbed dose] x [Radiation quality factor] = [Dose equivalent] seems peculiar. How accurately can the dose equivalent be evaluated? The descriptions related to uncertainties in the publications of ICRU and ICRP are introduced, which concern the radiation quality factor, the accuracy of measuring dose equivalent and so on. Dose equivalent gives a criterion for the degree of risk, or it is considered only as a controlling quantity. The description in the 1973 ICRU report related to dose equivalent and its unit is cited. It was concluded that dose equivalent can be considered only as the absorbed dose multiplied by a dimensionless factor. The author presented these questions. (K.I.)
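
    The author's point, that a precisely measured absorbed dose is of limited value when the quality factor is poorly known, can be illustrated by simple propagation of relative uncertainties (the numbers assumed for the quality factor are for illustration only):

```python
import math

D, sigma_D = 1.02, 0.01    # absorbed dose (mGy), roughly 1% uncertainty
Q, sigma_Q = 10.0, 5.0     # quality factor, assumed here to be uncertain by ~50%

H = D * Q                  # dose equivalent (mSv)
rel_H = math.sqrt((sigma_D / D) ** 2 + (sigma_Q / Q) ** 2)
print(f"H = {H:.1f} mSv, relative uncertainty = {rel_H:.0%}")
# The ~1% precision on the absorbed dose is swamped by the uncertainty in Q.
```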

  18. DC cancellation as a method of generating a t2-response and of solving the radial position error in a concentric free-falling two-sphere equivalence-principle experiment in a drag-free satellite

    International Nuclear Information System (INIS)

    Lange, Benjamin

    2010-01-01

    This paper presents a new method for doing a free-fall equivalence-principle (EP) experiment in a satellite at ambient temperature which solves two problems that have previously blocked this approach. By using large masses to change the gravity gradient at the proof masses, the orbit dynamics of a drag-free satellite may be changed in such a way that the experiment can mimic a free-fall experiment in a constant gravitational field on the earth. An experiment using a sphere surrounded by a spherical shell both completely unsupported and free falling has previously been impractical because (1) it is not possible to distinguish between a small EP violation and a slight difference in the semi-major axes of the orbits of the two proof masses and (2) the position difference in the orbit due to an EP violation only grows as t whereas the largest disturbance grows as t^(3/2). Furthermore, it has not been known how to independently measure the positions of a shell and a solid sphere with sufficient accuracy. The measurement problem can be solved by using a two-color transcollimator (see the main text), and since the radial-position-error and t-response problems arise from the earth's gravity gradient and not from its gravity field, one solution is to modify the earth's gravity gradient with local masses fixed in the satellite. Since the gravity gradient at the surface of a sphere, for example, depends only on its density, the gravity gradients of laboratory masses and of the earth, unlike their fields, are of the same order of magnitude. In a drag-free satellite spinning perpendicular to the orbit plane, two fixed spherical masses whose connecting line parallels the satellite spin axis can generate a dc gravity gradient at test masses located between them which cancels the combined gravity gradient of the earth and differential centrifugal force. With perfect cancellation, the position-error problem vanishes and the response grows as t^2 along a line which always points toward
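
    The order-of-magnitude claim about gravity gradients made above can be checked with the elementary result that the radial field gradient at the surface of a uniform sphere is (8π/3)Gρ, so it depends only on the density; the densities below are assumed round numbers used purely for illustration.

```python
import math

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2

def surface_gravity_gradient(density):
    """Radial field gradient |dg/dr| at the surface of a uniform sphere: (8*pi/3)*G*rho."""
    return (8 * math.pi / 3) * G * density

print(f"Earth (mean density ~5514 kg/m^3): {surface_gravity_gradient(5514):.2e} s^-2")
print(f"tungsten sphere (~19300 kg/m^3):   {surface_gravity_gradient(19300):.2e} s^-2")
```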

  19. Spatial evolutionary games with weak selection.

    Science.gov (United States)

    Nanda, Mridu; Durrett, Richard

    2017-06-06

    Recently, a rigorous mathematical theory has been developed for spatial games with weak selection, i.e., when the payoff differences between strategies are small. The key to the analysis is that when space and time are suitably rescaled, the spatial model converges to the solution of a partial differential equation (PDE). This approach can be used to analyze all 2×2 games, but there are a number of 3×3 games for which the behavior of the limiting PDE is not known. In this paper, we give rules for determining the behavior of a large class of 3×3 games and check their validity using simulation. In words, the effect of space is equivalent to making changes in the payoff matrix, and once this is done, the behavior of the spatial game can be predicted from the behavior of the replicator equation for the modified game. We say predicted here because in some cases the behavior of the spatial game is different from that of the replicator equation for the modified game. For example, if a rock-paper-scissors game has a replicator equation that spirals out to the boundary, space stabilizes the system and produces an equilibrium.
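
    As a generic illustration of the last point (not the authors' specific rules for modifying the payoff matrix), the sketch below integrates replicator dynamics for an assumed generalised rock-paper-scissors payoff matrix in which wins outweigh losses, so the interior equilibrium is attracting.

```python
import numpy as np

# Assumed generalised rock-paper-scissors payoff matrix (win = 2, loss = -1); in the
# paper's recipe the spatial effect would first be folded into these entries.
A = np.array([[0.0, -1.0, 2.0],
              [2.0, 0.0, -1.0],
              [-1.0, 2.0, 0.0]])

def replicator_step(x, A, dt=0.01):
    """One Euler step of the replicator equation dx_i/dt = x_i * ((A x)_i - x . A x)."""
    f = A @ x
    phi = x @ f
    return x + dt * x * (f - phi)

x = np.array([0.6, 0.3, 0.1])   # initial strategy frequencies
for _ in range(20000):
    x = replicator_step(x, A)
print(np.round(x, 3))            # spirals in toward the interior equilibrium (1/3, 1/3, 1/3)
```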

  20. Weak incidence algebra and maximal ring of quotients

    Directory of Open Access Journals (Sweden)

    Surjeet Singh

    2004-01-01

    Full Text Available Let X, X′ be two locally finite, preordered sets and let R be any indecomposable commutative ring. The incidence algebra I(X,R), in a sense, represents X, because of the well-known result that if the rings I(X,R) and I(X′,R) are isomorphic, then X and X′ are isomorphic. In this paper, we consider a preordered set X that need not be locally finite but has the property that each of its equivalence classes of equivalent elements is finite. Define I*(X,R) to be the set of all those functions f:X×X→R such that f(x,y)=0 whenever x⩽̸y, and the set Sf of ordered pairs (x,y) with x<y and f(x,y)≠0 is finite; I*(X,R) is called the weak incidence algebra of X over R. In the first part of the paper it is shown that indeed I*(X,R) represents X. After this all the essential one-sided ideals of I*(X,R) are determined and the maximal right (left) ring of quotients of I*(X,R) is discussed. It is shown that the results proved can give a large class of rings whose maximal right ring of quotients need not be isomorphic to its maximal left ring of quotients.

  1. A Universe without Weak Interactions

    Energy Technology Data Exchange (ETDEWEB)

    Harnik, Roni; Kribs, Graham D.; Perez, Gilad

    2006-04-07

    A universe without weak interactions is constructed that undergoes big-bang nucleosynthesis, matter domination, structure formation, and star formation. The stars in this universe are able to burn for billions of years, synthesize elements up to iron, and undergo supernova explosions, dispersing heavy elements into the interstellar medium. These definitive claims are supported by a detailed analysis where this hypothetical ''Weakless Universe'' is matched to our Universe by simultaneously adjusting Standard Model and cosmological parameters. For instance, chemistry and nuclear physics are essentially unchanged. The apparent habitability of the Weakless Universe suggests that the anthropic principle does not determine the scale of electroweak breaking, or even require that it be smaller than the Planck scale, so long as technically natural parameters may be suitably adjusted. Whether the multi-parameter adjustment is realized or probable is dependent on the ultraviolet completion, such as the string landscape. Considering a similar analysis for the cosmological constant, however, we argue that no adjustments of other parameters are able to allow the cosmological constant to raise up even remotely close to the Planck scale while obtaining macroscopic structure. The fine-tuning problems associated with the electroweak breaking scale and the cosmological constant therefore appear to be qualitatively different from the perspective of obtaining a habitable universe.

  2. A Universe without Weak Interactions

    International Nuclear Information System (INIS)

    Harnik, R

    2006-01-01

    A universe without weak interactions is constructed that undergoes big-bang nucleosynthesis, matter domination, structure formation, and star formation. The stars in this universe are able to burn for billions of years, synthesize elements up to iron, and undergo supernova explosions, dispersing heavy elements into the interstellar medium. These definitive claims are supported by a detailed analysis where this hypothetical ''Weakless Universe'' is matched to our Universe by simultaneously adjusting Standard Model and cosmological parameters. For instance, chemistry and nuclear physics are essentially unchanged. The apparent habitability of the Weakless Universe suggests that the anthropic principle does not determine the scale of electroweak breaking, or even require that it be smaller than the Planck scale, so long as technically natural parameters may be suitably adjusted. Whether the multi-parameter adjustment is realized or probable is dependent on the ultraviolet completion, such as the string landscape. Considering a similar analysis for the cosmological constant, however, we argue that no adjustments of other parameters are able to allow the cosmological constant to raise up even remotely close to the Planck scale while obtaining macroscopic structure. The fine-tuning problems associated with the electroweak breaking scale and the cosmological constant therefore appear to be qualitatively different from the perspective of obtaining a habitable universe

  3. A universe without weak interactions

    International Nuclear Information System (INIS)

    Harnik, Roni; Kribs, Graham D.; Perez, Gilad

    2006-01-01

    A universe without weak interactions is constructed that undergoes big-bang nucleosynthesis, matter domination, structure formation, and star formation. The stars in this universe are able to burn for billions of years, synthesize elements up to iron, and undergo supernova explosions, dispersing heavy elements into the interstellar medium. These definitive claims are supported by a detailed analysis where this hypothetical ''weakless universe'' is matched to our Universe by simultaneously adjusting standard model and cosmological parameters. For instance, chemistry and nuclear physics are essentially unchanged. The apparent habitability of the weakless universe suggests that the anthropic principle does not determine the scale of electroweak breaking, or even require that it be smaller than the Planck scale, so long as technically natural parameters may be suitably adjusted. Whether the multiparameter adjustment is realized or probable is dependent on the ultraviolet completion, such as the string landscape. Considering a similar analysis for the cosmological constant, however, we argue that no adjustments of other parameters are able to allow the cosmological constant to raise up even remotely close to the Planck scale while obtaining macroscopic structure. The fine-tuning problems associated with the electroweak breaking scale and the cosmological constant therefore appear to be qualitatively different from the perspective of obtaining a habitable universe

  4. Weak Solution and Weakly Uniformly Bounded Solution of Impulsive Heat Equations Containing “Maximum” Temperature

    Directory of Open Access Journals (Sweden)

    Oyelami, Benjamin Oyediran

    2013-09-01

    Full Text Available In this paper, criteria for the existence of weak solutions and of uniformly weakly bounded solutions of an impulsive heat equation containing a maximum temperature are investigated and results are obtained. An example is given for a heat flow system with impulsive temperature using a maximum temperature simulator, and criteria for the uniform weak boundedness of solutions of the system are obtained.

  5. Runaway dilaton and equivalence principle violations

    CERN Document Server

    Damour, Thibault Marie Alban Guillaume; Veneziano, Gabriele; Damour, Thibault; Piazza, Federico; Veneziano, Gabriele

    2002-01-01

    In a recently proposed scenario, where the dilaton decouples while cosmologically attracted towards infinite bare string coupling, its residual interactions can be related to the amplitude of density fluctuations generated during inflation, and are large enough to be detectable through a modest improvement on present tests of free-fall universality. Provided it has significant couplings to either dark matter or dark energy, a runaway dilaton can also induce time-variations of the natural "constants" within the reach of near-future experiments.

  6. Weak hard X-ray emission from broad absorption line quasars: evidence for intrinsic X-ray weakness

    International Nuclear Information System (INIS)

    Luo, B.; Brandt, W. N.; Scott, A. E.; Alexander, D. M.; Gandhi, P.; Stern, D.; Teng, S. H.; Arévalo, P.; Bauer, F. E.; Boggs, S. E.; Craig, W. W.; Christensen, F. E.; Comastri, A.; Farrah, D.; Hailey, C. J.; Harrison, F. A.; Koss, M.; Ogle, P.; Puccetti, S.; Saez, C.

    2014-01-01

    We report NuSTAR observations of a sample of six X-ray weak broad absorption line (BAL) quasars. These targets, at z = 0.148-1.223, are among the optically brightest and most luminous BAL quasars known at z < 1.3. However, their rest-frame ≈2 keV luminosities are 14 to >330 times weaker than expected for typical quasars. Our results from a pilot NuSTAR study of two low-redshift BAL quasars, a Chandra stacking analysis of a sample of high-redshift BAL quasars, and a NuSTAR spectral analysis of the local BAL quasar Mrk 231 have already suggested the existence of intrinsically X-ray weak BAL quasars, i.e., quasars not emitting X-rays at the level expected from their optical/UV emission. The aim of the current program is to extend the search for such extraordinary objects. Three of the six new targets are weakly detected by NuSTAR with ≲ 45 counts in the 3-24 keV band, and the other three are not detected. The hard X-ray (8-24 keV) weakness observed by NuSTAR requires Compton-thick absorption if these objects have nominal underlying X-ray emission. However, a soft stacked effective photon index (Γ_eff ≈ 1.8) for this sample disfavors Compton-thick absorption in general. The uniform hard X-ray weakness observed by NuSTAR for this and the pilot samples selected with <10 keV weakness also suggests that the X-ray weakness is intrinsic in at least some of the targets. We conclude that the NuSTAR observations have likely discovered a significant population (≳ 33%) of intrinsically X-ray weak objects among the BAL quasars with significantly weak <10 keV emission. We suggest that intrinsically X-ray weak quasars might be preferentially observed as BAL quasars.

  7. A weak balance: the contribution of muscle weakness to postural instability and falls.

    NARCIS (Netherlands)

    Horlings, G.C.; Engelen, B.G.M. van; Allum, J.H.J.; Bloem, B.R.

    2008-01-01

    Muscle strength is a potentially important factor contributing to postural control. In this article, we consider the influence of muscle weakness on postural instability and falling. We searched the literature for research evaluating muscle weakness as a risk factor for falls in community-dwelling

  8. Equivalent Simplification Method of Micro-Grid

    OpenAIRE

    Cai Changchun; Cao Xiangqin

    2013-01-01

    The paper concentrates on an equivalent simplification method for connecting a micro-grid system into a distribution network. The equivalent simplification method is proposed for studying the interaction between the micro-grid and the distribution network. The micro-grid network, composite load, gas-turbine synchronous generation and wind generation are reduced to simplified equivalents and connected in parallel at the point of common coupling. A micro-grid system is built and three-phase and single-phase grounded faults are per...

  9. Application of maximum values for radiation exposure and principles for the calculation of radiation dose

    International Nuclear Information System (INIS)

    2000-01-01

    The guide sets out the mathematical definitions and principles involved in the calculation of the equivalent dose and the effective dose, and the instructions concerning the application of the maximum values of these quantities. Further, for monitoring the dose caused by internal radiation, the guide defines the limits derived from annual dose limits (the Annual Limit on Intake and the Derived Air Concentration). Finally, the guide defines the operational quantities to be used in estimating the equivalent dose and the effective dose, and also sets out the definitions of some other quantities and concepts to be used in monitoring radiation exposure. The guide does not include the calculation of patient doses carried out for the purposes of quality assurance.

  10. Application of maximum values for radiation exposure and principles for the calculation of radiation dose

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2000-07-01

    The guide sets out the mathematical definitions and principles involved in the calculation of the equivalent dose and the effective dose, and the instructions concerning the application of the maximum values of these quantities. Further, for monitoring the dose caused by internal radiation, the guide defines the limits derived from annual dose limits (the Annual Limit on Intake and the Derived Air Concentration). Finally, the guide defines the operational quantities to be used in estimating the equivalent dose and the effective dose, and also sets out the definitions of some other quantities and concepts to be used in monitoring radiation exposure. The guide does not include the calculation of patient doses carried out for the purposes of quality assurance.

  11. A Technique to Estimate the Equivalent Loss Resistance of Grid-Tied Converters for Current Control Analysis and Design

    DEFF Research Database (Denmark)

    Vidal, Ana; Yepes, Alejandro G.; Fernandez, Francisco Daniel Freijedo

    2015-01-01

    Rigorous analysis and design of the current control loop in voltage source converters (VSCs) requires an accurate modeling. The loop behavior can be significantly influenced by the VSC working conditions. To consider such effect, converter losses should be included in the model, which can be done by means of an equivalent series resistance. This paper proposes a method to identify the VSC equivalent loss resistance for the proper tuning of the current control loop. It is based on analysis of the closed-loop transient response provided by a synchronous proportional-integral current controller, according to the internal model principle. The method gives a set of loss resistance values linked to working conditions, which can be used to improve the tuning of the current controllers, either by online adaptation of the controller gains or by open-loop adaptive adjustment of them according to prestored...
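
    The following toy simulation is only a hedged sketch of why the closed-loop transient carries information about the loss resistance: an RL plant is controlled by a synchronous-frame PI whose gains are set with a common loop-shaping choice (kp = α·L, ki = α·R); when the resistance assumed for tuning differs from the actual one, the step response deviates from the ideal first-order shape. All parameter values are assumptions, and this is not the identification algorithm of the paper.

```python
import numpy as np

# Illustrative parameters (assumptions, not values from the paper)
L = 5e-3                  # filter inductance, H
R_true = 0.5              # actual equivalent loss resistance, ohm
R_assumed = 0.1           # resistance assumed when tuning the controller, ohm
alpha = 2 * np.pi * 200   # target closed-loop bandwidth, rad/s

kp, ki = alpha * L, alpha * R_assumed   # loop-shaping PI gains for the RL plant

dt, T = 1e-5, 0.02
i_ref, i, integ = 1.0, 0.0, 0.0
trace = []
for _ in range(int(T / dt)):
    e = i_ref - i
    integ += ki * e * dt
    v = kp * e + integ               # PI control voltage
    i += dt * (v - R_true * i) / L   # RL plant: L di/dt = v - R i
    trace.append(i)

t = np.arange(len(trace)) * dt
ideal = 1.0 - np.exp(-alpha * t)     # response expected if R_assumed matched R_true
dev = np.max(np.abs(np.array(trace) - ideal))
print(f"max deviation from ideal first-order response: {dev:.3f} A")
```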

  12. Equivalence relations and the reinforcement contingency.

    Science.gov (United States)

    Sidman, M

    2000-07-01

    Where do equivalence relations come from? One possible answer is that they arise directly from the reinforcement contingency. That is to say, a reinforcement contingency produces two types of outcome: (a) 2-, 3-, 4-, 5-, or n-term units of analysis that are known, respectively, as operant reinforcement, simple discrimination, conditional discrimination, second-order conditional discrimination, and so on; and (b) equivalence relations that consist of ordered pairs of all positive elements that participate in the contingency. This conception of the origin of equivalence relations leads to a number of new and verifiable ways of conceptualizing equivalence relations and, more generally, the stimulus control of operant behavior. The theory is also capable of experimental disproof.

  13. Nuclear detectors: principles and applications

    International Nuclear Information System (INIS)

    Belhadj, Marouane

    1999-01-01

    Nuclear technology is a vast domain with several applications; in hydrology, for instance, it is used in the analysis of underground water and in carbon-14 dating. Our study consists of presenting nuclear detectors in terms of their principle of operation and their electronic constitution. However, because of some technical problems, we have not made an in-depth study of their applications, which could certainly have given strong support to our subject. In spite of the existence of high-performance, high-technology equipment in the centre, the problem of instrument control remains to be solved. The calibration of this equipment therefore remains the best guarantee of good counting quality. Besides, it allows us to assess the influence of external and internal parameters on the equipment and the sources of measurement errors, in order to introduce equivalent corrections. (author). 22 refs

  14. Charged weak currents

    International Nuclear Information System (INIS)

    Turlay, R.

    1979-01-01

    In this review of charged weak currents I shall concentrate on inclusive high energy neutrino physics. There are surely still things to learn from the low energy weak interaction but I will not discuss it here. Furthermore, B. Tallini will discuss the hadronic final state of neutrino interactions. Since the Tokyo conference a few experimental results on charged-current interactions have appeared; I will present them and will also comment on important topics which have been published during the past year. (orig.)

  15. Symmetries of dynamically equivalent theories

    Energy Technology Data Exchange (ETDEWEB)

    Gitman, D.M.; Tyutin, I.V. [Sao Paulo Univ., SP (Brazil). Inst. de Fisica; Lebedev Physics Institute, Moscow (Russian Federation)

    2006-03-15

    A natural and very important development of constrained system theory is a detailed study of the relation between the constraint structure in the Hamiltonian formulation and specific features of the theory in the Lagrangian formulation, especially the relation between the constraint structure and the symmetries of the Lagrangian action. An important preliminary step in this direction is a strict demonstration, and this is the aim of the present article, that the symmetry structures of the Hamiltonian action and of the Lagrangian action are the same. Once this is proved, it is sufficient to consider the symmetry structure of the Hamiltonian action. The latter problem is, in some sense, simpler because the Hamiltonian action is a first-order action. At the same time, the study of the symmetry of the Hamiltonian action naturally involves Hamiltonian constraints as basic objects. One can see that the Lagrangian and Hamiltonian actions are dynamically equivalent. This is why, in the present article, we consider from the very beginning a more general problem: how the symmetry structures of dynamically equivalent actions are related. First, we present some necessary notions and relations concerning infinitesimal symmetries in general, as well as a strict definition of dynamically equivalent actions. Finally, we demonstrate that there exists an isomorphism between classes of equivalent symmetries of dynamically equivalent actions. (author)

  16. Weak-interacting holographic QCD

    International Nuclear Information System (INIS)

    Gazit, D.; Yee, H.-U.

    2008-06-01

    We propose a simple prescription for including low-energy weak interactions into the framework of holographic QCD, based on the standard AdS/CFT dictionary of double-trace deformations. As our proposal enables us to calculate various electro-weak observables involving strongly coupled QCD, it opens a new perspective on phenomenological applications of holographic QCD. We illustrate the efficiency and usefulness of our method by performing a few example calculations: neutron beta decay, charged pion weak decay, and meson-nucleon parity non-conserving (PNC) couplings. The idea is general enough to be implemented in both Sakai-Sugimoto as well as Hard/Soft Wall models. (author)

  17. Second class weak currents

    International Nuclear Information System (INIS)

    Delorme, J.

    1978-01-01

    The definition and general properties of weak second class currents are recalled and various detection possibilities briefly reviewed. It is shown that the existing data on nuclear beta decay can be consistently analysed in terms of a phenomenological model. Their implication on the fundamental structure of weak interactions is discussed [fr

  18. Calculation methods for determining dose equivalent

    International Nuclear Information System (INIS)

    Endres, G.W.R.; Tanner, J.E.; Scherpelz, R.I.; Hadlock, D.E.

    1988-01-01

    A series of calculations of neutron fluence as a function of energy in an anthropomorphic phantom was performed to develop a system for determining effective dose equivalent for external radiation sources. Critical organ dose equivalents are calculated and effective dose equivalents are determined using ICRP-26 methods. Quality factors based on both present definitions and ICRP-40 definitions are used in the analysis. The results of these calculations are presented and discussed.
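
    For orientation only, the snippet below combines assumed organ dose equivalents with the ICRP-26 tissue weighting factors to form an effective dose equivalent; the organ doses are made-up numbers, not results of the cited calculations.

```python
# ICRP-26 tissue weighting factors
w_T = {"gonads": 0.25, "breast": 0.15, "red bone marrow": 0.12, "lung": 0.12,
       "thyroid": 0.03, "bone surfaces": 0.03, "remainder": 0.30}

# assumed organ dose equivalents in mSv (illustrative values only)
H_T = {"gonads": 0.8, "breast": 1.0, "red bone marrow": 1.2, "lung": 1.5,
       "thyroid": 0.9, "bone surfaces": 1.1, "remainder": 1.0}

H_E = sum(w_T[t] * H_T[t] for t in w_T)   # effective dose equivalent
print(f"effective dose equivalent: {H_E:.2f} mSv")
```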

  19. Standard and Null Weak Values

    OpenAIRE

    Zilberberg, Oded; Romito, Alessandro; Gefen, Yuval

    2013-01-01

    Weak value (WV) is a quantum mechanical measurement protocol, proposed by Aharonov, Albert, and Vaidman. It consists of a weak measurement, which is weighed in, conditional on the outcome of a later, strong measurement. Here we define another two-step measurement protocol, the null weak value (NWV), and point out its advantages as compared to the WV. We present two alternative derivations of NWVs and compare them to the corresponding derivations of WVs.

  20. Le Chatelier Principle for Out-of-Equilibrium and Boundary-Driven Systems: Application to Dynamical Phase Transitions.

    Science.gov (United States)

    Shpielberg, O; Akkermans, E

    2016-06-17

    A stability analysis is presented for boundary-driven and out-of-equilibrium systems in the framework of the hydrodynamic macroscopic fluctuation theory. A Hamiltonian description is proposed which allows us to thermodynamically interpret the additivity principle. A necessary and sufficient condition for the validity of the additivity principle is obtained as an extension of the Le Chatelier principle. These stability conditions result from a diagonal quadratic form obtained using the cumulant generating function. This approach allows us to provide a proof for the stability of the weakly asymmetric exclusion process and to reduce the search for stability to the solution of two coupled linear ordinary differential equations instead of nonlinear partial differential equations. Additional potential applications of these results are discussed in the realm of classical and quantum systems.

  1. Le Chatelier Principle for Out-of-Equilibrium and Boundary-Driven Systems: Application to Dynamical Phase Transitions

    Science.gov (United States)

    Shpielberg, O.; Akkermans, E.

    2016-06-01

    A stability analysis is presented for boundary-driven and out-of-equilibrium systems in the framework of the hydrodynamic macroscopic fluctuation theory. A Hamiltonian description is proposed which allows us to thermodynamically interpret the additivity principle. A necessary and sufficient condition for the validity of the additivity principle is obtained as an extension of the Le Chatelier principle. These stability conditions result from a diagonal quadratic form obtained using the cumulant generating function. This approach allows us to provide a proof for the stability of the weakly asymmetric exclusion process and to reduce the search for stability to the solution of two coupled linear ordinary differential equations instead of nonlinear partial differential equations. Additional potential applications of these results are discussed in the realm of classical and quantum systems.

  2. The one-dimensional normalised generalised equivalence theory (NGET) for generating equivalent diffusion theory group constants for PWR reflector regions

    International Nuclear Information System (INIS)

    Mueller, E.Z.

    1991-01-01

    An equivalent diffusion theory PWR reflector model is presented, which has as its basis Smith's generalisation of Koebke's Equivalence Theory. This method is an adaptation, in one-dimensional slab geometry, of the Generalised Equivalence Theory (GET). Since the method involves the renormalisation of the GET discontinuity factors at nodal interfaces, it is called the Normalised Generalised Equivalence Theory (NGET) method. The advantages of the NGET method for modelling the ex-core nodes of a PWR are summarized. 23 refs

  3. First-principles molecular dynamics study of Al/Alq3 interfaces

    Directory of Open Access Journals (Sweden)

    Kousuke Takeuchi et al

    2007-01-01

    Full Text Available We have carried out first-principles molecular dynamics simulations of Al deposition on tris(8-hydroxyquinoline) aluminum (Alq3) layers to investigate atomic geometries and electronic properties of Al/Alq3 interfaces. Al atoms were ejected toward Alq3 one by one with a kinetic energy of 37.4 kJ/mol, which approximately corresponds to the average kinetic energy of Al at the boiling temperature of metallic Al. The first Al atom interacts with two of the three O atoms of meridional Alq3. Following Al atoms interact with Alq3 rather weakly and tend to aggregate with each other to form Al clusters. During the deposition process, Alq3 was not broken and its molecular structure remained essentially intact. At the interface, weak bonds between deposited Al atoms and N and C atoms were formed. The projected density of states (PDOS) onto the Alq3 molecular orbitals shows gap states in between the highest occupied molecular orbitals (HOMOs) and the lowest unoccupied molecular orbitals (LUMOs), which were experimentally observed by ultraviolet photoelectron spectroscopy (UPS) and metastable atom electron spectroscopy (MAES). Our results show that even though the Alq3 molecular structure is retained, weak N–Al and C–Al bonds induce gap states.

  4. Weak interactions

    International Nuclear Information System (INIS)

    Chanda, R.

    1981-01-01

    The theoretical and experimental evidence forming a basis for a Lagrangian quantum field theory of weak interactions is discussed. In this context, gauge-invariance aspects of such interactions are shown. (L.C.) [pt

  5. The definition of the individual dose equivalent

    International Nuclear Information System (INIS)

    Ehrlich, Margarete

    1986-01-01

    A brief note examines the choice of the present definition of the individual dose equivalent, the new operational dosimetry quantity for external exposure. The consequences of the use of the individual dose equivalent, and the danger facing it as currently defined, are briefly discussed. (UK)

  6. ISSUES OF THE ACCOUNTING OF A WEAK NEUROTRANSMITTER COMPONENT IN THE PHARMACOTHERAPY OF POSTCOMATOSE STATES

    Directory of Open Access Journals (Sweden)

    O. S. Zaitsev

    2016-01-01

    Full Text Available The principle of accounting for a weak neurotransmitter component is considered one of the most specific and promising approaches for the study and practical introduction of therapy for postcomatose states. The paper outlines problems in the accurate determination of the lack and excess of neurotransmitters by up-to-date techniques (biochemical and neurophysiological tests, magnetic resonance spectroscopy). It gives the reasons for clinical doubts and difficulties in the practical use of ideas about the relationship of the clinical picture to one or another disorder of neurotransmitter metabolism and about the feasibility of its effective correction. It is concluded that the main method for the individualized therapy of postcomatose states is the clinical analysis of neurological and psychiatric symptoms; only after this is completed can a weak neurotransmitter component be taken into account. The main possible and currently preferable ways to correct cholinergic and GABAergic deficiency and redundancy, as well as deficiency in glutamate and dopamine, are considered.

  7. Cosmology with weak lensing surveys

    International Nuclear Information System (INIS)

    Munshi, Dipak; Valageas, Patrick; Waerbeke, Ludovic van; Heavens, Alan

    2008-01-01

    Weak gravitational lensing is responsible for the shearing and magnification of the images of high-redshift sources due to the presence of intervening matter. The distortions are due to fluctuations in the gravitational potential, and are directly related to the distribution of matter and to the geometry and dynamics of the Universe. As a consequence, weak gravitational lensing offers unique possibilities for probing the Dark Matter and Dark Energy in the Universe. In this review, we summarise the theoretical and observational state of the subject, focussing on the statistical aspects of weak lensing, and consider the prospects for weak lensing surveys in the future. Weak gravitational lensing surveys are complementary to both galaxy surveys and cosmic microwave background (CMB) observations as they probe the unbiased non-linear matter power spectrum at modest redshifts. Most of the cosmological parameters are accurately estimated from CMB and large-scale galaxy surveys, so the focus of attention is shifting to understanding the nature of Dark Matter and Dark Energy. On the theoretical side, recent advances in the use of 3D information of the sources from photometric redshifts promise greater statistical power, and these are further enhanced by the use of statistics beyond two-point quantities such as the power spectrum. The use of 3D information also alleviates difficulties arising from physical effects such as the intrinsic alignment of galaxies, which can mimic weak lensing to some extent. On the observational side, in the next few years weak lensing surveys such as CFHTLS, VST-KIDS and Pan-STARRS, and the planned Dark Energy Survey, will provide the first weak lensing surveys covering very large sky areas and depth. In the long run even more ambitious programmes such as DUNE, the Supernova Anisotropy Probe (SNAP) and Large-aperture Synoptic Survey Telescope (LSST) are planned. Weak lensing of diffuse components such as the CMB and 21 cm emission can also

  8. Cosmology with weak lensing surveys

    Energy Technology Data Exchange (ETDEWEB)

    Munshi, Dipak [Institute of Astronomy, Madingley Road, Cambridge, CB3 OHA (United Kingdom); Astrophysics Group, Cavendish Laboratory, Madingley Road, Cambridge CB3 OHE (United Kingdom)], E-mail: munshi@ast.cam.ac.uk; Valageas, Patrick [Service de Physique Theorique, CEA Saclay, 91191 Gif-sur-Yvette (France); Waerbeke, Ludovic van [University of British Columbia, Department of Physics and Astronomy, 6224 Agricultural Road, Vancouver, BC V6T 1Z1 (Canada); Heavens, Alan [SUPA - Scottish Universities Physics Alliance, Institute for Astronomy, University of Edinburgh, Blackford Hill, Edinburgh EH9 3HJ (United Kingdom)

    2008-06-15

    Weak gravitational lensing is responsible for the shearing and magnification of the images of high-redshift sources due to the presence of intervening matter. The distortions are due to fluctuations in the gravitational potential, and are directly related to the distribution of matter and to the geometry and dynamics of the Universe. As a consequence, weak gravitational lensing offers unique possibilities for probing the Dark Matter and Dark Energy in the Universe. In this review, we summarise the theoretical and observational state of the subject, focussing on the statistical aspects of weak lensing, and consider the prospects for weak lensing surveys in the future. Weak gravitational lensing surveys are complementary to both galaxy surveys and cosmic microwave background (CMB) observations as they probe the unbiased non-linear matter power spectrum at modest redshifts. Most of the cosmological parameters are accurately estimated from CMB and large-scale galaxy surveys, so the focus of attention is shifting to understanding the nature of Dark Matter and Dark Energy. On the theoretical side, recent advances in the use of 3D information of the sources from photometric redshifts promise greater statistical power, and these are further enhanced by the use of statistics beyond two-point quantities such as the power spectrum. The use of 3D information also alleviates difficulties arising from physical effects such as the intrinsic alignment of galaxies, which can mimic weak lensing to some extent. On the observational side, in the next few years weak lensing surveys such as CFHTLS, VST-KIDS and Pan-STARRS, and the planned Dark Energy Survey, will provide the first weak lensing surveys covering very large sky areas and depth. In the long run even more ambitious programmes such as DUNE, the Supernova Anisotropy Probe (SNAP) and Large-aperture Synoptic Survey Telescope (LSST) are planned. Weak lensing of diffuse components such as the CMB and 21 cm emission can also

  9. Seismic design principles for the German fast breeder reactor SNR2

    International Nuclear Information System (INIS)

    Rangette, A.M.; Peters, K.A.

    1988-01-01

    The leading aim of a seismic design is, besides protection against seismic impacts, not to enhance the overall risk in the absence of seismic vibrations and, secondly, to avoid competition between operational needs and a seismic structural design. This approach is supported by avoiding overconservatism in the assumption of seismic loads and in the calculation of the structural response. Accordingly the seismic principles are stated as follows: restriction to German or equivalent low-seismicity sites with intensities (SSE) lower than VIII at a frequency lower than 10^-4/year; best estimate of seismic input data without further conservatism; no consideration of OBE. The structural design principles are: 1. The secondary character of the seismic excitation is explicitly accounted for; 2. Energy absorption is allowed for by ductility of materials and construction. Accordingly strain criteria are used for failure predictions instead of stress criteria. (author). 1 fig

  10. On the variational principle for the equations of perfect fluid dynamics

    International Nuclear Information System (INIS)

    Serre, D.

    1993-01-01

    We give a new version of the variational principle δL = 0, L being the usual Lagrangian, for perfect fluid mechanics. It is formally equivalent to the well-known principle but it gives the first rigorous derivation of the conservation laws (momentum and energy), including the discontinuous case (shock waves, contact discontinuities). Thanks to a new formulation of the constraints, we do not involve any Lagrange multipliers, which in previous works were neither physically relevant, since they do not appear in the Euler equations, nor mathematically relevant. We even give a variational interpretation of the entropy inequality when shock waves occur. Our method covers all aspects of perfect fluids, including stationary and non-stationary motion, compressible and incompressible fluids, and the axisymmetric case. When the velocity field admits a stream function, the variational principle gives rise to extremal points of the Lagrangian on various infinite-dimensional manifolds. For a suitable choice of this manifold, the flow is itself periodic, that is, all the fluid particles have a periodic motion with the same period. The flow describes a closed geodesic on some group of diffeomorphisms. (author). 10 refs

  11. 77 FR 32632 - Ambient Air Monitoring Reference and Equivalent Methods: Designation of Three New Equivalent Methods

    Science.gov (United States)

    2012-06-01

    ... Methods: Designation of Three New Equivalent Methods AGENCY: Environmental Protection Agency. ACTION... accordance with 40 CFR Part 53, three new equivalent methods: One for measuring concentrations of nitrogen... INFORMATION: In accordance with regulations at 40 CFR Part 53, the EPA evaluates various methods for...

  12. Beyond Language Equivalence on Visibly Pushdown Automata

    DEFF Research Database (Denmark)

    Srba, Jiri

    2009-01-01

    We study (bi)simulation-like preorder/equivalence checking on the class of visibly pushdown automata and its natural subclasses visibly BPA (Basic Process Algebra) and visibly one-counter automata. We describe generic methods for proving complexity upper and lower bounds for a number of studied preorders and equivalences like simulation, completed simulation, ready simulation, 2-nested simulation preorders/equivalences and bisimulation equivalence. Our main results are that all the mentioned equivalences and preorders are EXPTIME-complete on visibly pushdown automata, PSPACE-complete on visibly one-counter automata and P-complete on visibly BPA. Our PSPACE lower bound for visibly one-counter automata improves also the previously known DP-hardness results for ordinary one-counter automata and one-counter nets. Finally, we study regularity checking problems for visibly pushdown automata...

  13. Three-dimensional attached viscous flow basic principles and theoretical foundations

    CERN Document Server

    Hirschel, Ernst Heinrich; Kordulla, Wilhelm

    2014-01-01

    Viscous flow is usually treated in the frame of boundary-layer theory and as a two-dimensional flow. At best, books on boundary layers provide the describing equations for three-dimensional boundary layers, and solutions only for certain special cases.   This book presents the basic principles and theoretical foundations of three-dimensional attached viscous flows as they apply to aircraft of all kinds. Though the primary flight speed range is that of civil air transport vehicles, flows past other flying vehicles up to hypersonic speeds are also considered. Emphasis is put on general three-dimensional attached viscous flows and not on three-dimensional boundary layers, as this wider scope is necessary in view of the theoretical and practical problems that have to be overcome in practice.   The specific topics covered include weak, strong, and global interaction; the locality principle; properties of three-dimensional viscous flows; thermal surface effects; characteristic properties; wall compatibility con...

  14. Bridging the knowledge gap: An analysis of Albert Einstein's popularized presentation of the equivalence of mass and energy.

    Science.gov (United States)

    Kapon, Shulamit

    2014-11-01

    This article presents an analysis of a scientific article written by Albert Einstein in 1946 for the general public that explains the equivalence of mass and energy and discusses the implications of this principle. It is argued that an intelligent popularization of many advanced ideas in physics requires more than the simple elimination of mathematical formalisms and complicated scientific conceptions. Rather, it is shown that Einstein developed an alternative argument for the general public that bypasses the core of the formal derivation of the equivalence of mass and energy to provide a sense of derivation based on the history of science and the nature of scientific inquiry. This alternative argument is supported and enhanced by variety of explanatory devices orchestrated to coherently support and promote the reader's understanding. The discussion centers on comparisons to other scientific expositions written by Einstein for the general public. © The Author(s) 2013.

  15. Feasibility of isotachochromatography as a method for the preparative separation of weak acids and weak bases. I. Theoretical considerations

    NARCIS (Netherlands)

    Kooistra, C.; Sluyterman, L.A.A.E.

    1988-01-01

    The fundamental equation of isotachochromatography, i.e., isotachophoresis translated into ion-exchange chromatography, has been derived for weak acids and weak bases. Weak acids are separated on strong cation exchangers and weak bases on strong anion exchangers. According to theory, the elution

  16. The equivalence problem for LL- and LR-regular grammars

    NARCIS (Netherlands)

    Nijholt, Antinus; Gecsec, F.

    It will be shown that the equivalence problem for LL-regular grammars is decidable. Apart from extending the known result for LL(k) grammar equivalence to LL-regular grammar equivalence, we obtain an alternative proof of the decidability of LL(k) equivalence. The equivalence problem for LL-regular

  17. Implementing the Precautionary Principle through Stakeholder Engagement for Product and Service Development

    Directory of Open Access Journals (Sweden)

    Pierre de Coninck

    2007-05-01

    Full Text Available The precautionary principle is a sustainable development principle that attempts to articulate an ethic in decision making since it deals with the notion of uncertainty of harm. Uncertainty becomes a weakness when it has to serve as a predictor by which to take action. Since humans are responsible for their actions, and ethics is based in action, then decisions based in uncertainty require an ethical framework. Beyond the professional deontological responsibility, there is a need to consider the process of conception based on an ethic of the future and therefore to develop a new ethical framework which is more global and fundamental. This will expose the justifications for choices, present these in debates with other stakeholders, and ultimately adopt an axiology of decision making for conception. Responsibility and participative discourse for an equal justice among actors are a basis of such an ethic. When the ethical framework of this principle is understood and this knowledge is applied to design or innovation, the precautionary principle becomes operational. This paper suggests that to move towards sustainability, stakeholders must adopt decision making processes that are precautionary. A commitment to precaution encourages a global perspective and the search for alternatives. Methods such as alternative assessment and precautionary deliberation through stakeholder engagement can assist in this shift towards sustainability.

  18. Monitoring of Non-Ferrous Wear Debris in Hydraulic Oil by Detecting the Equivalent Resistance of Inductive Sensors

    Directory of Open Access Journals (Sweden)

    Lin Zeng

    2018-03-01

    Full Text Available Wear debris in hydraulic oil contains important information on the operation of equipment, which is valuable for condition monitoring and fault diagnosis of mechanical equipment. A micro inductive sensor based on the inductive Coulter principle is presented in this work. It consists of a straight micro-channel and a 3-D solenoid coil wound on the micro-channel. Instead of detecting the inductance change of the inductive sensor, the equivalent resistance change of the inductive sensor is detected for non-ferrous particle (copper particle) monitoring. The simulation results show that the resistance change rate caused by the presence of copper particles is greater than the inductance change rate. Copper particles with sizes ranging from 48 μm to 150 μm were used in the experiment, and the experimental results are in good agreement with the simulation results. By detecting the inductive change of the micro inductive sensor, the detection limit of the copper particles only reaches 70 μm. However, the detection limit can be improved to 48 μm by detecting the equivalent resistance of the inductive sensor. The equivalent resistance method was demonstrated to have a higher detection accuracy than conventional inductive detection methods for non-ferrous particle detection in hydraulic oil.

  19. The biologically equivalent dose BED - Is the approach for calculation of this factor really a reliable basis?

    International Nuclear Information System (INIS)

    Jensen, J.M.; Zimmermann, J.

    2000-01-01

    To predict the effect on tumours in radiotherapy, especially relating to irreversible effects, but also to allow retrospective assessment, the so-called L-Q model is relied on at present. Internal organ-specific parameters, such as α, β, γ, T_p, T_k, and ρ, as well as external parameters, such as D, d, n, V, and V_ref, are used for the determination of the biologically equivalent dose BED. While the external parameters are determinable with small deviations, the internal parameters depend on biological variability and dispersion: in some cases the lowest value is assumed to be Δ=±25%. This margin of error is passed on to the biologically equivalent dose by means of the principle of superposition of errors. In some selected cases (lung, kidney, skin, rectum) these margins of error were calculated exemplarily. The input errors, especially of the internal parameters, cause a mean error Δ on the biologically equivalent dose, and a dispersion of the single fraction dose d depending on the organ taken into consideration, of approximately 8-30%. Hence only a very critical and cautious application of such L-Q algorithms in expert proceedings follows, and in radiotherapy more experience-based decisions are recommended, instead of acting only upon simple two-dimensional mechanistic ideas. (orig.) [de
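
    For reference, the basic linear-quadratic expression for the biologically effective dose (without time or repopulation factors) and a first-order propagation of an uncertainty in α/β read as follows; this is a generic textbook relation, not the authors' specific error model.

```latex
\mathrm{BED} = n\,d\left(1 + \frac{d}{\alpha/\beta}\right),
\qquad
\frac{\Delta \mathrm{BED}}{\mathrm{BED}} \approx
\frac{d}{(\alpha/\beta) + d}\,\frac{\Delta(\alpha/\beta)}{\alpha/\beta} .
```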

  20. On the connection between complementarity and uncertainty principles in the Mach–Zehnder interferometric setting

    International Nuclear Information System (INIS)

    Bosyk, G M; Portesi, M; Holik, F; Plastino, A

    2013-01-01

    We revisit the connection between the complementarity and uncertainty principles of quantum mechanics within the framework of Mach–Zehnder interferometry. We focus our attention on the trade-off relation between complementary path information and fringe visibility. This relation is equivalent to the uncertainty relation of Schrödinger and Robertson for a suitably chosen pair of observables. We show that it is equivalent as well to the uncertainty inequality provided by Landau and Pollak. We also study the relationship of this trade-off relation with a family of entropic uncertainty relations based on Rényi entropies. There is no equivalence in this case, but the different values of the entropic parameter do define regimes that provide us with a tool to discriminate between non-trivial states of minimum uncertainty. The existence of such regimes agrees with previous results of Luis (2011 Phys. Rev. A 84 034101), although their meaning was not sufficiently clear. We discuss the origin of these regimes with the intention of gaining a deeper understanding of entropic measures. (paper)
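
    The trade-off relation referred to here is commonly written as the predictability-visibility duality (quoted in its standard textbook form for orientation, with P the a priori path predictability and V the fringe visibility; equality holds for pure states):

```latex
\mathcal{P}^{2} + \mathcal{V}^{2} \le 1 .
```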

  1. Equivalent damage of loads on pavements

    CSIR Research Space (South Africa)

    Prozzi, JA

    2009-05-26

    Full Text Available This report describes a new methodology for the determination of Equivalent Damage Factors (EDFs) of vehicles with multiple axle and wheel configurations on pavements. The basic premise of this new procedure is that "equivalent pavement response...
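
    As background, and distinct from the new methodology described in the report, relative pavement damage is often approximated by the classical fourth-power law referenced to a standard 80 kN axle; the sketch below uses that textbook approximation with illustrative axle loads.

```python
def equivalent_damage_factor(axle_load_kN, ref_load_kN=80.0, exponent=4.0):
    """Classical fourth-power approximation of relative pavement damage."""
    return (axle_load_kN / ref_load_kN) ** exponent

for load in (40, 80, 100, 130):
    print(f"{load:>4} kN axle -> EDF ~ {equivalent_damage_factor(load):.2f}")
```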

  2. Diagonal form factors and heavy-heavy-light three-point functions at weak coupling

    Energy Technology Data Exchange (ETDEWEB)

    Hollo, Laszlo [MTA Lendület Holographic QFT Group, Wigner Research Centre for Physics,H-1525 Budapest 114, P.O.B. 49 (Hungary); Jiang, Yunfeng; Petrovskii, Andrei [Institut de Physique Théorique, DSM, CEA, URA2306 CNRS,Saclay, F-91191 Gif-sur-Yvette (France)

    2015-09-18

    In this paper we consider a special kind of three-point functions of HHL type at weak coupling in N=4 SYM theory and analyze its volume dependence. At strong coupling this kind of three-point functions were studied recently by Bajnok, Janik and Wereszczynski http://dx.doi.org/10.1007/JHEP09(2014)050. The authors considered some cases of HHL correlator in the su(2) sector and, relying on their explicit results, formulated a conjecture about the form of the volume dependence of the symmetric HHL structure constant to be valid at any coupling up to wrapping corrections. In order to test this hypothesis we considered the HHL correlator in su(2) sector at weak coupling and directly showed that, up to one loop, the finite volume dependence has exactly the form proposed in http://dx.doi.org/10.1007/JHEP09(2014)050. Another side of the conjecture suggests that computation of the symmetric structure constant is equivalent to computing the corresponding set of infinite volume form factors, which can be extracted as the coefficients of finite volume expansion. In this sense, extracting appropriate coefficients from our result gives a prediction for the corresponding infinite volume form factors.

  3. Diagonal form factors and heavy-heavy-light three-point functions at weak coupling

    International Nuclear Information System (INIS)

    Hollo, Laszlo; Jiang, Yunfeng; Petrovskii, Andrei

    2015-01-01

    In this paper we consider a special kind of three-point functions of HHL type at weak coupling in N=4 SYM theory and analyze its volume dependence. At strong coupling this kind of three-point functions were studied recently by Bajnok, Janik and Wereszczynski http://dx.doi.org/10.1007/JHEP09(2014)050. The authors considered some cases of HHL correlator in the su(2) sector and, relying on their explicit results, formulated a conjecture about the form of the volume dependence of the symmetric HHL structure constant to be valid at any coupling up to wrapping corrections. In order to test this hypothesis we considered the HHL correlator in su(2) sector at weak coupling and directly showed that, up to one loop, the finite volume dependence has exactly the form proposed in http://dx.doi.org/10.1007/JHEP09(2014)050. Another side of the conjecture suggests that computation of the symmetric structure constant is equivalent to computing the corresponding set of infinite volume form factors, which can be extracted as the coefficients of finite volume expansion. In this sense, extracting appropriate coefficients from our result gives a prediction for the corresponding infinite volume form factors.

  4. AEGIS at CERN: Measuring Antihydrogen Fall

    CERN Document Server

    Giammarchi, Marco G.

    2011-01-01

    The main goal of the AEGIS experiment at the CERN Antiproton Decelerator is the test of fundamental laws such as the Weak Equivalence Principle (WEP) and CPT symmetry. In the first phase of AEGIS, a beam of antihydrogen will be formed whose fall in the gravitational field is measured in a Moiré deflectometer; this will constitute the first test of the WEP with antimatter.

  5. Lexicographic Path Induction

    DEFF Research Database (Denmark)

    Schürmann, Carsten; Sarnat, Jeffrey

    2009-01-01

    Programming languages theory is full of problems that reduce to proving the consistency of a logic, such as the normalization of typed lambda-calculi, the decidability of equality in type theory, equivalence testing of traces in security, etc. Although the principle of transfinite induction......, and weak normalization for Gödel’s T follows indirectly; both have been formalized in a prototypical extension of Twelf....

  6. Riemann Geometric Color-Weak Compensationfor Individual Observers

    OpenAIRE

    Kojima, Takanori; Mochizuki, Rika; Lenz, Reiner; Chao, Jinhui

    2014-01-01

    We extend a method for color weak compensation based on the criterion of preservation of subjective color differences between color normal and color weak observers presented in [2]. We introduce a new algorithm for color weak compensation using local affine maps between color spaces of color normal and color weak observers. We show how to estimate the local affine map and how to determine correspondences between the origins of local coordinates in color spaces of color normal and color weak ob...

  7. A chiral sensor based on weak measurement for the determination of Proline enantiomers in diverse measuring circumstances.

    Science.gov (United States)

    Li, Dongmei; Guan, Tian; He, Yonghong; Liu, Fang; Yang, Anping; He, Qinghua; Shen, Zhiyuan; Xin, Meiguo

    2018-07-01

    A new chiral sensor based on weak measurement has been developed to accurately measure the optical rotation (OR) for the estimation of trace amounts of chiral molecules. With the principle of optical weak measurement in the frequency domain, the central wavelength shift of the output spectra is quantitatively related to the angle of preselected polarization. Hence, a chiral molecule (e.g., an L-amino acid or a D-amino acid) can be enantioselectively determined by modifying the preselection angle with the OR, which causes the rotation of the polarization plane. The concentration of the chiral sample, corresponding to its optical activity, is quantitatively analyzed from the central wavelength shift of the output spectra, which can be collected in real time. Immune to refractive index changes, the proposed chiral sensor is valid in complicated measuring circumstances. Detections of Proline enantiomer concentrations in different solvents were implemented. The results demonstrated that weak measurement acts as a reliable method for chiral recognition of Proline enantiomers in diverse circumstances, with the merits of high precision and good robustness. In addition, this real-time monitoring approach plays a crucial part in asymmetric synthesis and biological systems. Copyright © 2018. Published by Elsevier B.V.
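
    For a sense of scale, Biot's law α = [α]·l·c links the measured rotation to concentration; the short calculation below uses an approximate literature value for the specific rotation of L-proline together with an assumed path length and rotation (illustrative numbers, not data from the study).

```python
specific_rotation = -85.0    # deg.mL/(g.dm), approximate literature value for L-proline
path_length_dm = 1.0         # 10 cm polarimeter cell (assumed)
measured_rotation = -0.085   # observed rotation in degrees (assumed)

# Biot's law: alpha = [alpha] * l * c, with c in g/mL
c = measured_rotation / (specific_rotation * path_length_dm)
print(f"estimated L-proline concentration: {c * 1000:.2f} mg/mL")   # ~1 mg/mL
```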

  8. 7 CFR 1030.54 - Equivalent price.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 9 2010-01-01 2009-01-01 true Equivalent price. 1030.54 Section 1030.54 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements... Handling Class Prices § 1030.54 Equivalent price. See § 1000.54. ...

  9. Fundamental quark, lepton correspondence and dynamics with weak decay interactions

    International Nuclear Information System (INIS)

    Van der Spuy, E.

    1977-10-01

    A nonlinear fermion-field equation of motion and its (in principle) exact solutions, making use of the previously developed technique of infinite component free spinor fields, are discussed. It is shown to be essential for the existence of the solutions to introduce the isosymmetry breaking mechanism by coupling the isospin polarization of the domain of the universe of such particle fields to the field isospin. The essential trigger for the isosymmetry breaking mechanism is the existence of the electromagnetic interaction and the photon fields, carrying an infinite range isospin polarization change in the domain. A quartet of proton, neutron, lambda and charmed quark field solutions, with their respective characteristic Regge trajectories and primary isospin quantum numbers, and a quartet of lepton fields (electron neutrino, electron, muon, muon neutrino) are shown to emerge naturally. The equations of motion of the quark and lepton propagators are deduced. The complicated charge nature of the quarks and the need for quark confinement are discussed, and a correspondence principle is established between the quark and lepton field solutions. The correspondence is such that the dynamics of the leptons on their own appears to be compatible with quantum electrodynamics on the one hand, and on the other hand permits a natural GIM-Cabibbo weak decay interaction with a Cabibbo angle equal to the domain isospin polarization-change phase angle.

  10. Bernoulli's Principle

    Science.gov (United States)

    Hewitt, Paul G.

    2004-01-01

    Some teachers have difficulty understanding Bernoulli's principle, particularly when the principle is applied to aerodynamic lift. Some teachers favor using Newton's laws instead of Bernoulli's principle to explain the physics behind lift. Some also consider Bernoulli's principle too difficult to explain to students and avoid teaching it…

  11. Equivalence in Ventilation and Indoor Air Quality

    Energy Technology Data Exchange (ETDEWEB)

    Sherman, Max; Walker, Iain; Logue, Jennifer

    2011-08-01

    We ventilate buildings to provide acceptable indoor air quality (IAQ). Ventilation standards (such as American Society of Heating, Refrigerating, and Air-Conditioning Engineers [ASHRAE] Standard 62) specify minimum ventilation rates without taking into account the impact of those rates on IAQ. Innovative ventilation management is often a desirable element of reducing energy consumption or improving IAQ or comfort. Variable ventilation is one innovative strategy. To use variable ventilation in a way that meets standards, it is necessary to have a method for determining equivalence in terms of either ventilation or indoor air quality. This study develops methods to calculate either equivalent ventilation or equivalent IAQ. We demonstrate that equivalent ventilation can be used as the basis for dynamic ventilation control, reducing peak load and infiltration of outdoor contaminants. We also show that equivalent IAQ could allow some contaminants to exceed current standards if other contaminants are more stringently controlled.
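
    One plausible way to make "equivalent ventilation" concrete, sketched below, is to find the constant air-change rate whose steady-state contaminant concentration matches the time-averaged concentration produced by a time-varying schedule in a single-zone mass-balance model; this is an illustrative assumption of ours, not necessarily the definition used in the report or in ASHRAE Standard 62, and all numbers are hypothetical.

      # Hedged sketch: one way to compute an "equivalent" constant ventilation rate
      # for a time-varying schedule, using a single-zone mass balance
      #   dC/dt = S/V - A(t) * C
      # for a continuously emitted contaminant. Illustrative definition only.

      def mean_concentration(schedule_ach, source_mg_per_h, volume_m3,
                             burn_in_h=120.0, avg_h=120.0, dt_h=0.01):
          """Time-averaged indoor concentration (mg/m3) under a periodic schedule.
          schedule_ach maps hour-of-day (0-24) to air changes per hour."""
          c, total, t = 0.0, 0.0, 0.0
          steps = int((burn_in_h + avg_h) / dt_h)
          for _ in range(steps):
              a = schedule_ach(t % 24.0)
              c += dt_h * (source_mg_per_h / volume_m3 - a * c)   # forward Euler step
              if t >= burn_in_h:
                  total += c * dt_h
              t += dt_h
          return total / avg_h

      def equivalent_constant_ach(schedule_ach, source_mg_per_h, volume_m3):
          """Constant ACH whose steady-state concentration equals the schedule's mean."""
          c_avg = mean_concentration(schedule_ach, source_mg_per_h, volume_m3)
          return source_mg_per_h / (volume_m3 * c_avg)

      if __name__ == "__main__":
          # Hypothetical schedule: low ventilation overnight, higher during the day.
          schedule = lambda hour: 0.15 if hour < 8 or hour > 20 else 0.60
          ach_eq = equivalent_constant_ach(schedule, source_mg_per_h=10.0, volume_m3=250.0)
          print(f"equivalent constant ventilation = {ach_eq:.3f} ACH")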

  12. Weak boson emission in hadron collider processes

    International Nuclear Information System (INIS)

    Baur, U.

    2007-01-01

    The O(α) virtual weak radiative corrections to many hadron collider processes are known to become large and negative at high energies, due to the appearance of Sudakov-like logarithms. At the same order in perturbation theory, weak boson emission diagrams contribute. Since the W and Z bosons are massive, the O(α) virtual weak radiative corrections and the contributions from weak boson emission are separately finite. Thus, unlike in QED or QCD calculations, there is no technical reason for including gauge boson emission diagrams in calculations of electroweak radiative corrections. In most calculations of the O(α) electroweak radiative corrections, weak boson emission diagrams are therefore not taken into account. Another reason for not including these diagrams is that they lead to final states which differ from that of the original process. However, in experiment, one usually considers partially inclusive final states. Weak boson emission diagrams thus should be included in calculations of electroweak radiative corrections. In this paper, I examine the role of weak boson emission in those processes at the Fermilab Tevatron and the CERN LHC for which the one-loop electroweak radiative corrections are known to become large at high energies (inclusive jet, isolated photon, Z+1 jet, Drell-Yan, di-boson, ttbar, and single top production). In general, I find that the cross section for weak boson emission is substantial at high energies and that weak boson emission and the O(α) virtual weak radiative corrections partially cancel.

  13. Orientifold Planar Equivalence: The Chiral Condensate

    DEFF Research Database (Denmark)

    Armoni, Adi; Lucini, Biagio; Patella, Agostino

    2008-01-01

    The recently introduced orientifold planar equivalence is a promising tool for solving non-perturbative problems in QCD. One of the predictions of orientifold planar equivalence is that the chiral condensates of a theory with $N_f$ flavours of Dirac fermions in the symmetric (or antisymmetric...

  14. 7 CFR 1005.54 - Equivalent price.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 9 2010-01-01 2009-01-01 true Equivalent price. 1005.54 Section 1005.54 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements... Handling Class Prices § 1005.54 Equivalent price. See § 1000.54. Uniform Prices ...

  15. 7 CFR 1126.54 - Equivalent price.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 9 2010-01-01 2009-01-01 true Equivalent price. 1126.54 Section 1126.54 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements... Handling Class Prices § 1126.54 Equivalent price. See § 1000.54. Producer Price Differential ...

  16. 7 CFR 1001.54 - Equivalent price.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 9 2010-01-01 2009-01-01 true Equivalent price. 1001.54 Section 1001.54 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements... Handling Class Prices § 1001.54 Equivalent price. See § 1000.54. Producer Price Differential ...

  17. 7 CFR 1032.54 - Equivalent price.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 9 2010-01-01 2009-01-01 true Equivalent price. 1032.54 Section 1032.54 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements... Handling Class Prices § 1032.54 Equivalent price. See § 1000.54. Producer Price Differential ...

  18. 7 CFR 1033.54 - Equivalent price.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 9 2010-01-01 2009-01-01 true Equivalent price. 1033.54 Section 1033.54 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements... Handling Class Prices § 1033.54 Equivalent price. See § 1000.54. Producer Price Differential ...

  19. 7 CFR 1131.54 - Equivalent price.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 9 2010-01-01 2009-01-01 true Equivalent price. 1131.54 Section 1131.54 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements... Handling Class Prices § 1131.54 Equivalent price. See § 1000.54. Uniform Prices ...

  20. 7 CFR 1006.54 - Equivalent price.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 9 2010-01-01 2009-01-01 true Equivalent price. 1006.54 Section 1006.54 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements... Handling Class Prices § 1006.54 Equivalent price. See § 1000.54. Uniform Prices ...

  1. 7 CFR 1007.54 - Equivalent price.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 9 2010-01-01 2009-01-01 true Equivalent price. 1007.54 Section 1007.54 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements... Handling Class Prices § 1007.54 Equivalent price. See § 1000.54. Uniform Prices ...

  2. The Complexity of Identifying Large Equivalence Classes

    DEFF Research Database (Denmark)

    Skyum, Sven; Frandsen, Gudmund Skovbjerg; Miltersen, Peter Bro

    1999-01-01

    We prove that at least ((3k−4)/(k(2k−3))) · (n choose 2) − O(k) equivalence tests and no more than (2/k) · (n choose 2) + O(n) equivalence tests are needed in the worst case to identify the equivalence classes with at least k members in a set of n elements. The upper bound is an improvement by a factor 2 compared to known res...
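
    For readers unfamiliar with the model, the following naive baseline (ours, not the bound-achieving algorithm of the paper) shows the setting: elements are grouped using only calls to a pairwise equivalence-test oracle, classes with at least k members are reported, and the number of tests is counted.

      # Naive baseline (ours, not the paper's algorithm) for the problem studied in
      # the abstract: identify the equivalence classes with at least k members
      # using only a pairwise equivalence-test oracle, counting the tests used.

      def large_classes(elements, equivalent, k):
          """Return (classes with >= k members, number of equivalence tests used)."""
          classes = []     # one list of members per discovered class
          tests = 0
          for x in elements:
              for cls in classes:
                  tests += 1
                  if equivalent(cls[0], x):   # compare against one representative
                      cls.append(x)
                      break
              else:
                  classes.append([x])
          return [c for c in classes if len(c) >= k], tests

      if __name__ == "__main__":
          data = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5]
          big, used = large_classes(data, lambda a, b: a == b, k=2)
          print(big, "found using", used, "equivalence tests")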

  3. Physical acoustics principles and methods

    CERN Document Server

    Mason, Warren P

    2012-01-01

    Physical Acoustics: Principles and Methods, Volume IV, Part B: Applications to Quantum and Solid State Physics provides an introduction to the various applications of quantum mechanics to acoustics by describing several processes for which such considerations are essential. This book discusses the transmission of sound waves in molten metals. Comprised of seven chapters, this volume starts with an overview of the interactions that can happen between electrons and acoustic waves when magnetic fields are present. This text then describes acoustic and plasma waves in ionized gases wherein oscillations are subject to hydrodynamic as well as electromagnetic forces. Other chapters examine the resonances and relaxations that can take place in polymer systems. This book discusses as well the general theory of the interaction of a weak sinusoidal field with matter. The final chapter describes the sound velocities in the rocks composing the Earth. This book is a valuable resource for physicists and engineers.

  4. Can we observationally test the weak cosmic censorship conjecture?

    International Nuclear Information System (INIS)

    Kong, Lingyao; Malafarina, Daniele; Bambi, Cosimo

    2014-01-01

    In general relativity, gravitational collapse of matter fields ends with the formation of a spacetime singularity, where the matter density becomes infinite and standard physics breaks down. According to the weak cosmic censorship conjecture, singularities produced in the gravitational collapse cannot be seen by distant observers and must be hidden within black holes. The validity of this conjecture is still controversial and at present we cannot exclude that naked singularities can be created in our Universe from regular initial data. In this paper, we study the radiation emitted by a collapsing cloud of dust and check whether it is possible to distinguish the birth of a black hole from that of a naked singularity. In our simple dust model, we find that the properties of the radiation emitted in the two scenarios are qualitatively similar. That suggests that observational tests of the cosmic censorship conjecture may be very difficult, even in principle. (orig.)

  5. Can we observationally test the weak cosmic censorship conjecture?

    Energy Technology Data Exchange (ETDEWEB)

    Kong, Lingyao; Malafarina, Daniele; Bambi, Cosimo [Fudan University, Department of Physics, Center for Field Theory and Particle Physics, Shanghai (China)

    2014-08-15

    In general relativity, gravitational collapse of matter fields ends with the formation of a spacetime singularity, where the matter density becomes infinite and standard physics breaks down. According to the weak cosmic censorship conjecture, singularities produced in the gravitational collapse cannot be seen by distant observers and must be hidden within black holes. The validity of this conjecture is still controversial and at present we cannot exclude that naked singularities can be created in our Universe from regular initial data. In this paper, we study the radiation emitted by a collapsing cloud of dust and check whether it is possible to distinguish the birth of a black hole from that of a naked singularity. In our simple dust model, we find that the properties of the radiation emitted in the two scenarios are qualitatively similar. That suggests that observational tests of the cosmic censorship conjecture may be very difficult, even in principle. (orig.)

  6. Classical field approach to quantum weak measurements.

    Science.gov (United States)

    Dressel, Justin; Bliokh, Konstantin Y; Nori, Franco

    2014-03-21

    By generalizing the quantum weak measurement protocol to the case of quantum fields, we show that weak measurements probe an effective classical background field that describes the average field configuration in the spacetime region between pre- and postselection boundary conditions. The classical field is itself a weak value of the corresponding quantum field operator and satisfies equations of motion that extremize an effective action. Weak measurements perturb this effective action, producing measurable changes to the classical field dynamics. As such, weakly measured effects always correspond to an effective classical field. This general result explains why these effects appear to be robust for pre- and postselected ensembles, and why they can also be measured using classical field techniques that are not weak for individual excitations of the field.

  7. 7 CFR 1124.54 - Equivalent price.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 9 2010-01-01 2009-01-01 true Equivalent price. 1124.54 Section 1124.54 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements... Regulating Handling Class Prices § 1124.54 Equivalent price. See § 1000.54. Producer Price Differential ...

  8. EQUIVALENT MODELS IN COVARIANCE STRUCTURE-ANALYSIS

    NARCIS (Netherlands)

    LUIJBEN, TCW

    1991-01-01

    Defining equivalent models as those that reproduce the same set of covariance matrices, necessary and sufficient conditions are stated for the local equivalence of two expanded identified models M1 and M2 when fitting the more restricted model M0. Assuming several regularity conditions, the rank

  9. Averaging principle for second-order approximation of heterogeneous models with homogeneous models.

    Science.gov (United States)

    Fibich, Gadi; Gavious, Arieh; Solan, Eilon

    2012-11-27

    Typically, models with a heterogeneous property are considerably harder to analyze than the corresponding homogeneous models, in which the heterogeneous property is replaced by its average value. In this study we show that any outcome of a heterogeneous model that satisfies the two properties of differentiability and symmetry is O(ε²) equivalent to the outcome of the corresponding homogeneous model, where ε is the level of heterogeneity. We then use this averaging principle to obtain new results in queuing theory, game theory (auctions), and social networks (marketing).
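
    A small numerical check (ours, not from the paper) of the O(ε²) claim: for a symmetric, differentiable toy outcome with mean-zero heterogeneity, halving ε should shrink the gap between the heterogeneous and homogeneous outcomes by roughly a factor of four.

      # Toy numerical check (ours, not from the paper) of the O(eps^2) equivalence:
      # a symmetric, smooth outcome of heterogeneous parameters is compared with
      # the outcome of the homogeneous (averaged) model.

      def outcome(rates):
          """A symmetric, differentiable outcome: the mean of 1/rate
          (e.g., a mean service time)."""
          return sum(1.0 / r for r in rates) / len(rates)

      def heterogeneity_gap(eps, base=1.0):
          """|outcome(heterogeneous) - outcome(homogeneous)| for mean-zero
          perturbations of relative size eps around the common value `base`."""
          deltas = [+1.0, -1.0, +0.5, -0.5]          # perturbations average to zero
          rates = [base * (1.0 + eps * d) for d in deltas]
          return abs(outcome(rates) - outcome([base] * len(deltas)))

      if __name__ == "__main__":
          g1, g2 = heterogeneity_gap(0.10), heterogeneity_gap(0.05)
          # Halving eps should shrink the gap by roughly a factor of 4 (second order).
          print(f"gap(0.10) = {g1:.3e}, gap(0.05) = {g2:.3e}, ratio = {g1/g2:.2f}")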

  10. On a variational principle for shape optimization and elliptic free boundary problems

    Directory of Open Access Journals (Sweden)

    Raúl B. González De Paz

    2009-02-01

    A variational principle for several free boundary value problems using a relaxation approach is presented. The relaxed energy functional is concave and is defined on a convex set, so that the minimizing points are characteristic functions of sets. As a consequence of the first-order optimality conditions, it is shown that the corresponding sets are domains bounded by free boundaries, which proves the equivalence of the solution of the relaxed problem with the solutions of several free boundary value problems. Keywords: Calculus of variations, optimization, free boundary problems.

  11. Averaging principle for second-order approximation of heterogeneous models with homogeneous models

    Science.gov (United States)

    Fibich, Gadi; Gavious, Arieh; Solan, Eilon

    2012-01-01

    Typically, models with a heterogeneous property are considerably harder to analyze than the corresponding homogeneous models, in which the heterogeneous property is replaced by its average value. In this study we show that any outcome of a heterogeneous model that satisfies the two properties of differentiability and symmetry is O(ε²) equivalent to the outcome of the corresponding homogeneous model, where ε is the level of heterogeneity. We then use this averaging principle to obtain new results in queuing theory, game theory (auctions), and social networks (marketing). PMID:23150569

  12. Peripheral facial weakness (Bell's palsy).

    Science.gov (United States)

    Basić-Kes, Vanja; Dobrota, Vesna Dermanović; Cesarik, Marijan; Matovina, Lucija Zadro; Madzar, Zrinko; Zavoreo, Iris; Demarin, Vida

    2013-06-01

    Peripheral facial weakness is facial nerve damage that results in muscle weakness on one side of the face. It may be idiopathic (Bell's palsy) or may have a detectable cause. Almost 80% of peripheral facial weakness cases are primary and the rest are secondary. The most frequent causes of secondary peripheral facial weakness are systemic viral infections, trauma, surgery, diabetes, local infections, tumor, immune disorders, drugs, degenerative diseases of the central nervous system, etc. The diagnosis relies upon the presence of typical signs and symptoms, blood chemistry tests, cerebrospinal fluid investigations, nerve conduction studies and neuroimaging methods (cerebral MRI, x-ray of the skull and mastoid). Treatment of secondary peripheral facial weakness is based on therapy for the underlying disorder, unlike the treatment of Bell's palsy, which remains controversial due to the lack of large, randomized, controlled, prospective studies. There are some indications that steroids or antiviral agents are beneficial, but there are also studies that show no beneficial effect. Additional treatments include eye protection, physiotherapy, acupuncture, botulinum toxin, or surgery. Bell's palsy has a benign prognosis, with complete recovery in about 80% of patients; 15% experience some degree of permanent nerve damage, and severe consequences remain in 5% of patients.

  13. Weakly oval electron lens

    International Nuclear Information System (INIS)

    Daumenov, T.D.; Alizarovskaya, I.M.; Khizirova, M.A.

    2001-01-01

    A method is described for generating a weakly oval electrical field from an axially symmetric field. Such a system may be designed using cylindrical coaxial electrodes with a built-in quadrupole doublet. A distinctive feature of this weakly oval lens is that it allows both mechanical and electronic adjustment. Such a lens can be useful for eliminating near-axis astigmatism in electron-optical systems.

  14. In-orbit calibration approach of the MICROSCOPE experiment for the test of the equivalence principle at 10^-15

    International Nuclear Information System (INIS)

    Pradels, Gregory; Touboul, Pierre

    2003-01-01

    The MICROSCOPE mission is a space experiment of fundamental physics which aims to test the equality between the gravitational and inertial mass with a 10^-15 accuracy. Considering these scientific objectives, very weak accelerations have to be controlled and measured in orbit. By modelling the expected acceleration signals applied to the MICROSCOPE instrument in orbit, the developed analytic model of the mission measurement shows the requirements for instrument calibration. Because of on-ground perturbations, the instrument cannot be calibrated in the laboratory and an in-orbit procedure has to be defined. The proposed approach exploits the drag-free system of the satellite and is an important element of the future data analysis of the MICROSCOPE space experiment

  15. Summary of session C1: experimental gravitation

    International Nuclear Information System (INIS)

    Laemmerzahl, C

    2008-01-01

    The fact that gravity is a metric theory follows from the Einstein equivalence principle. This principle consists of (i) the universality of free fall, (ii) the universality of the gravitational redshift and (iii) the local validity of Lorentz invariance. Many experiments searching for deviations from standard general relativity test the various aspects of the Einstein equivalence principle. Here we report on experiments covering the whole Einstein equivalence principle. Until now all experiments have been in agreement with the Einstein equivalence principle. As a consequence, gravity has to be described by a metric theory. Any metric theory of gravity leads to effects such as perihelion shift, deflection of light, gravitational redshift, gravitational time delay, Lense-Thirring effect, Schiff effect, etc. A particular theory of that sort is Einstein's general relativity. For weak gravitational fields which are asymptotically flat any deviation from Einstein's general relativity can be parametrized by a few constants, the PPN parameters. Many astrophysical observations and space experiments are devoted to a better measurement of the effects and, thus, of the PPN parameters. It is clear that gravity is best tested for intermediate ranges, that is, for distances between 1 m and several astronomical units. It is highly interesting to push forward our domain of experience and to strengthen the experimental foundation of gravity also beyond these scales. This point is underlined by the fact that many quantum gravity and unification-inspired theories suggest deviation from the standard laws of gravity at very small or very large scales. In this session summary we briefly outline the status and report on the talks presented in session C1 about experimental gravitation

  16. Problems of Equivalence in Shona- English Bilingual Dictionaries

    African Journals Online (AJOL)

    rbr

    ... translation equivalents in Shona-English dictionaries where lexicographers will be dealing with divergent languages and cultures, traditional practices of lexicography and the absence of reliable ... ideal in translation is to achieve structural and semantic equivalence. Absolute equivalence between any two ...

  17. Weak decays of stable particles

    International Nuclear Information System (INIS)

    Brown, R.M.

    1988-09-01

    In this article we review recent advances in the field of weak decays and consider their implications for quantum chromodynamics (the theory of strong interactions) and electroweak theory (the combined theory of electromagnetic and weak interactions), which together form the ''Standard Model'' of elementary particles. (author)

  18. 10 CFR 474.3 - Petroleum-equivalent fuel economy calculation.

    Science.gov (United States)

    2010-01-01

    ... 10 Energy 3 2010-01-01 2010-01-01 false Petroleum-equivalent fuel economy calculation. 474.3..., DEVELOPMENT, AND DEMONSTRATION PROGRAM; PETROLEUM-EQUIVALENT FUEL ECONOMY CALCULATION § 474.3 Petroleum-equivalent fuel economy calculation. (a) The petroleum-equivalent fuel economy for an electric vehicle is...

  19. Simultaneous determination of equivalence volumes and acid dissociation constants from potentiometric titration data.

    Science.gov (United States)

    Papanastasiou, G; Ziogas, I

    1995-06-01

    New iterative methods are presented for the analysis of potentiometric titration data of (a) mixtures of weak monoprotic acids with their conjugate bases, (b) solutions of polyprotic (di- and triprotic) acids, and (c) mixtures of two diprotic acids. These methods, which use data exclusively from the acidic region of the titration curve, permit accurate determination of the analytical concentration of one or more acids even if the titration is stopped well before the end point. For the titration of a solution containing a conjugate acid/base pair, the proposed procedure yields the initial composition of the mixture as well as the dissociation constant of the acid concerned. Thus, this type of analysis makes it possible to distinguish whether a weak acid has been contaminated by a strong base and to define the extent of the contamination. For the titration of polyprotic acids, the proposed approach yields accurate values of the equivalence volume and the dissociation constants K(i) even when the ionization stages overlap. Finally, for the titration of a mixture of two diprotic acids, the proposed procedure determines the composition of the mixture even if the sum of the concentrations of the acids is not known. This method can be used in the analysis of solutions containing two diastereoisomeric forms of a weak diprotic acid. Tests of the proposed procedures on ideal and Monte Carlo simulated data showed that these methods remain applicable even when the titration data are considerably obscured by 'noise' or contain a significant systematic error. The proposed procedures were also successfully applied to experimental titration data.
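
    The sketch below is not the authors' iterative procedure, but it illustrates the underlying idea for the simplest case of a single weak monoprotic acid: pre-equivalence (Vb, pH) data are fitted to the charge-balance model by least squares (SciPy assumed available), recovering the analytical concentration, and hence the equivalence volume, together with Ka; the acid, concentrations and data are synthetic placeholders.

      # Hedged sketch (not the authors' procedure): least-squares fit of acidic-branch
      # titration data (Vb, pH) for a single weak monoprotic acid HA titrated with a
      # strong base, recovering the acid concentration Ca (hence the equivalence
      # volume Ve = Ca*Va/Cb) and Ka simultaneously. All values are placeholders.

      import numpy as np
      from scipy.optimize import least_squares

      KW = 1.0e-14

      def predicted_vb(pH, Ca, Ka, Va, Cb):
          """Base volume implied by the charge balance
          Cb*Vb/(Va+Vb) + [H+] - [OH-] = [A-], solved for Vb."""
          h = 10.0 ** (-pH)
          w = h - KW / h                 # net proton excess [H+] - [OH-]
          f = Ka / (Ka + h)              # fraction of the acid present as A-
          return Va * (f * Ca - w) / (Cb + w)

      def fit(vb_data, pH_data, Va, Cb, guess=(0.02, 1e-4)):
          """Fit (Ca, Ka) to the data; also return the equivalence volume."""
          residuals = lambda p: predicted_vb(pH_data, p[0], p[1], Va, Cb) - vb_data
          sol = least_squares(residuals, x0=np.array(guess),
                              bounds=([0.0, 0.0], [np.inf, 1.0]))
          Ca, Ka = sol.x
          return Ca, Ka, Ca * Va / Cb

      if __name__ == "__main__":
          Va, Cb = 50.0, 0.100                      # mL of acid sample, mol/L of titrant
          true_Ca, true_Ka = 0.040, 1.8e-5          # hypothetical acid
          pH = np.linspace(3.2, 5.2, 9)             # acidic-branch readings only
          vb = predicted_vb(pH, true_Ca, true_Ka, Va, Cb)   # synthetic "measurements"
          Ca, Ka, Ve = fit(vb, pH, Va, Cb)
          print(f"Ca = {Ca:.4f} M, pKa = {-np.log10(Ka):.2f}, Ve = {Ve:.2f} mL")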

  20. Weak lensing and dark energy

    International Nuclear Information System (INIS)

    Huterer, Dragan

    2002-01-01

    We study the power of upcoming weak lensing surveys to probe dark energy. Dark energy modifies the distance-redshift relation as well as the matter power spectrum, both of which affect the weak lensing convergence power spectrum. Some dark-energy models predict additional clustering on very large scales, but this probably cannot be detected by weak lensing alone due to cosmic variance. With reasonable prior information on other cosmological parameters, we find that a survey covering 1000 sq deg down to a limiting magnitude of R=27 can impose constraints comparable to those expected from upcoming type Ia supernova and number-count surveys. This result, however, is contingent on the control of both observational and theoretical systematics. Concentrating on the latter, we find that the nonlinear power spectrum of matter perturbations and the redshift distribution of source galaxies both need to be determined accurately in order for weak lensing to achieve its full potential. Finally, we discuss the sensitivity of the three-point statistics to dark energy