Do positrons and antiprotons respect the weak equivalence principle?
International Nuclear Information System (INIS)
Hughes, R.J.
1990-01-01
We resolve the difficulties which Morrison identified with energy conservation and the gravitational red-shift when particles of antimatter, such as the positron and antiproton, do not respect the weak equivalence principle. 13 refs
A weak equivalence principle test on a suborbital rocket
Energy Technology Data Exchange (ETDEWEB)
Reasenberg, Robert D; Phillips, James D, E-mail: reasenberg@cfa.harvard.ed [Smithsonian Astrophysical Observatory, Harvard-Smithsonian Center for Astrophysics, Cambridge, MA 02138 (United States)
2010-05-07
We describe a Galilean test of the weak equivalence principle, to be conducted during the free fall portion of a sounding rocket flight. The test of a single pair of substances is aimed at a measurement uncertainty of σ(η) < 10^{-16} after averaging the results of eight separate drops. The weak equivalence principle measurement is made with a set of four laser gauges that are expected to achieve 0.1 pm Hz^{-1/2}. The discovery of a violation (η ≠ 0) would have profound implications for physics, astrophysics and cosmology.
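As a numerical aside (ours, not from the abstract): the Eötvös parameter η compares the accelerations of the two substances, and averaging N statistically independent drops reduces the uncertainty by √N. A minimal sketch, assuming eight independent drops of equal weight:

```python
import math

def eotvos_parameter(a1, a2):
    """Eötvös parameter: fractional differential acceleration of two test bodies."""
    return 2.0 * (a1 - a2) / (a1 + a2)

# If sigma(eta) < 1e-16 is the goal after averaging eight independent
# drops, the per-drop uncertainty budget is roughly sqrt(8) times larger.
target_sigma = 1e-16
n_drops = 8
per_drop_sigma = target_sigma * math.sqrt(n_drops)
print(f"per-drop budget ≈ {per_drop_sigma:.2e}")  # ≈ 2.83e-16
```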
Can quantum probes satisfy the weak equivalence principle?
Energy Technology Data Exchange (ETDEWEB)
Seveso, Luigi, E-mail: luigi.seveso@unimi.it [Quantum Technology Lab, Dipartimento di Fisica, Università degli Studi di Milano, I-20133 Milano (Italy); Paris, Matteo G.A. [Quantum Technology Lab, Dipartimento di Fisica, Università degli Studi di Milano, I-20133 Milano (Italy); INFN, Sezione di Milano, I-20133 Milano (Italy)
2017-05-15
We address the question of whether quantum probes in a gravitational field can be considered as test particles obeying the weak equivalence principle (WEP). A formulation of the WEP is proposed which applies also in the quantum regime, while maintaining the physical content of its classical counterpart. This formulation requires that the introduction of a gravitational field not modify the Fisher information about the mass of a freely-falling probe, extractable through measurements of its position. We discover that, while in a uniform field quantum probes satisfy our formulation of the WEP exactly, gravity gradients can encode nontrivial information about the particle’s mass in its wavefunction, leading to violations of the WEP. - Highlights: • Can quantum probes under gravity be approximated as test-bodies? • A formulation of the weak equivalence principle for quantum probes is proposed. • Quantum probes are found to violate it as a matter of principle.
Can quantum probes satisfy the weak equivalence principle?
International Nuclear Information System (INIS)
Seveso, Luigi; Paris, Matteo G.A.
2017-01-01
We address the question of whether quantum probes in a gravitational field can be considered as test particles obeying the weak equivalence principle (WEP). A formulation of the WEP is proposed which applies also in the quantum regime, while maintaining the physical content of its classical counterpart. This formulation requires that the introduction of a gravitational field not modify the Fisher information about the mass of a freely-falling probe, extractable through measurements of its position. We discover that, while in a uniform field quantum probes satisfy our formulation of the WEP exactly, gravity gradients can encode nontrivial information about the particle’s mass in its wavefunction, leading to violations of the WEP. - Highlights: • Can quantum probes under gravity be approximated as test-bodies? • A formulation of the weak equivalence principle for quantum probes is proposed. • Quantum probes are found to violate it as a matter of principle.
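The information-theoretic criterion in the records above can be stated compactly. A standard form of the classical Fisher information about the mass m carried by position measurements is (our paraphrase of the usual notation, not necessarily the authors' exact expressions):

```latex
F(m) = \int \mathrm{d}x\, \frac{1}{p(x \mid m)}
       \left( \frac{\partial p(x \mid m)}{\partial m} \right)^{2},
\qquad
p(x \mid m) = |\psi_{m}(x)|^{2}.
```

The proposed WEP then requires that switching on the gravitational field leave F(m) unchanged. A uniform field satisfies this exactly, while gravity gradients imprint mass dependence on ψ_m and hence on F(m), which is the violation the authors report.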
Cosmological equivalence principle and the weak-field limit
International Nuclear Information System (INIS)
Wiltshire, David L.
2008-01-01
The strong equivalence principle is extended in application to averaged dynamical fields in cosmology to include the role of the average density in the determination of inertial frames. The resulting cosmological equivalence principle is applied to the problem of synchronization of clocks in the observed universe. Once density perturbations grow to give density contrasts of order 1 on scales of tens of megaparsecs, the integrated deceleration of the local background regions of voids relative to galaxies must be accounted for in the relative synchronization of clocks of ideal observers who measure an isotropic cosmic microwave background. The relative deceleration of the background can be expected to represent a scale at which weak-field Newtonian dynamics should be modified to account for dynamical gradients in the Ricci scalar curvature of space. This acceleration scale is estimated using the best-fit nonlinear bubble model of the universe with backreaction. Although this acceleration, of order 10^{-10} m s^{-2}, is small, when integrated over the lifetime of the universe it amounts to an accumulated relative difference of 38% in the rate of average clocks in galaxies as compared to volume-average clocks in the emptiness of voids. A number of foundational aspects of the cosmological equivalence principle are also discussed, including its relation to Mach's principle, the Weyl curvature hypothesis, and the initial conditions of the universe.
Equivalence principles and electromagnetism
Ni, W.-T.
1977-01-01
The implications of the weak equivalence principles are investigated in detail for electromagnetic systems in a general framework. In particular, it is shown that the universality of free-fall trajectories (Galileo weak equivalence principle) does not imply the validity of the Einstein equivalence principle. However, the Galileo principle plus the universality of free-fall rotation states does imply the Einstein principle.
On experimental testing of the weak equivalence principle for the neutron
International Nuclear Information System (INIS)
Pokotilovskij, Yu.N.
1994-01-01
The experimental situation concerning verification of the weak equivalence principle for the neutron is reviewed. A direct method is proposed to significantly increase (to ∼10^{-6}) the precision of the equivalence principle test for the neutron in a Galilei-type experiment, which uses a thin-film Fabry-Pérot interferometer and precise time-of-flight spectrometry of ultracold neutrons
International Nuclear Information System (INIS)
Chowdhury, P; Majumdar, A S; Sinha, S; Home, D; Mousavi, S V; Mozaffari, M R
2012-01-01
The weak equivalence principle of gravity is examined at the quantum level in two ways. First, the position detection probabilities of particles described by a non-Gaussian wave packet projected upwards against gravity around the classical turning point and also around the point of initial projection are calculated. These probabilities exhibit mass dependence at both these points, thereby reflecting the quantum violation of the weak equivalence principle. Second, the mean arrival time of freely falling particles is calculated using the quantum probability current, which also turns out to be mass dependent. Such a mass dependence is shown to be enhanced by increasing the non-Gaussianity parameter of the wave packet, thus signifying a stronger violation of the weak equivalence principle through a greater departure from Gaussianity of the initial wave packet. The mass dependence of both the position detection probabilities and the mean arrival time vanishes in the limit of large mass. Thus, compatibility between the weak equivalence principle and quantum mechanics is recovered in the macroscopic limit of the latter. A selection of Bohm trajectories is exhibited to illustrate these features in the free fall case. (paper)
Test masses for the G-POEM test of the weak equivalence principle
International Nuclear Information System (INIS)
Reasenberg, Robert D; Phillips, James D; Popescu, Eugeniu M
2011-01-01
We describe the design of the test masses that are used in the 'ground-based principle of equivalence measurement' test of the weak equivalence principle. The main features of the design are the incorporation of corner cubes and the use of mass removal and replacement to create pairs of test masses with different test substances. The corner cubes allow for the vertical separation of the test masses to be measured with picometer accuracy by SAO's unique tracking frequency laser gauge, while the mass removal and replacement operations are arranged so that the test masses incorporating different test substances have nominally identical gravitational properties. (paper)
Quantum Field Theoretic Derivation of the Einstein Weak Equivalence Principle Using Emqg Theory
Ostoma, Tom; Trushyk, Mike
1999-01-01
We provide a quantum field theoretic derivation of Einstein's Weak Equivalence Principle of general relativity using a new quantum gravity theory proposed by the authors called Electro-Magnetic Quantum Gravity or EMQG (ref. 1). EMQG is based on a new theory of inertia (ref. 5) proposed by R. Haisch, A. Rueda, and H. Puthoff (which we modified and called Quantum Inertia). Quantum Inertia states that classical Newtonian Inertia is a property of matter due to the strictly local electrical force ...
Weak principle of equivalence and gauge theory of tetrad gravitational field
International Nuclear Information System (INIS)
Tunyak, V.N.
1978-01-01
It is shown that, unlike the tetrad formulation of general relativity derived from the requirement of Poincaré group localization, the tetrad gravitation theory corresponding to Treder's formulation of the weak equivalence principle, in which the nongravitational-matter Lagrangian is the direct covariant generalization of the special-relativistic expression to Riemannian space-time, is incompatible with the known method for deriving the gauge theory of the tetrad gravitational field
Galactic Shapiro delay to the Crab pulsar and limit on weak equivalence principle violation
International Nuclear Information System (INIS)
Desai, Shantanu; Kahya, Emre
2018-01-01
We calculate the total galactic Shapiro delay to the Crab pulsar by including the contributions from the dark matter as well as baryonic matter along the line of sight. The total delay due to the dark matter potential is about 3.4 days. For baryonic matter, we included the contributions from both the bulge and the disk, which are approximately 0.12 and 0.32 days respectively. The total delay from all the matter distribution is therefore 3.84 days. We also calculate the limit on violations of the weak equivalence principle by using observations of ''nano-shot'' giant pulses from the Crab pulsar with time-delay < 0.4 ns, as well as using time differences between radio and optical photons observed from this pulsar. Using the former, we obtain a limit on violation of the weak equivalence principle in terms of the PPN parameter Δγ < 2.41 × 10^{-15}. From the time-difference between simultaneous optical and radio observations, we get Δγ < 1.54 × 10^{-9}. We also point out differences in our calculation of Shapiro delay and that from two recent papers (Yang and Zhang, Phys Rev D 94(10):101501, 2016; Zhang and Gong, Astrophys J 837:134, 2017), which used the same observations to obtain a corresponding limit on Δγ. (orig.)
Galactic Shapiro delay to the Crab pulsar and limit on weak equivalence principle violation
Energy Technology Data Exchange (ETDEWEB)
Desai, Shantanu [Indian Institute of Technology, Department of Physics, Hyderabad, Telangana (India); Kahya, Emre [Istanbul Technical University, Department of Physics, Istanbul (Turkey)
2018-02-15
We calculate the total galactic Shapiro delay to the Crab pulsar by including the contributions from the dark matter as well as baryonic matter along the line of sight. The total delay due to the dark matter potential is about 3.4 days. For baryonic matter, we included the contributions from both the bulge and the disk, which are approximately 0.12 and 0.32 days respectively. The total delay from all the matter distribution is therefore 3.84 days. We also calculate the limit on violations of the weak equivalence principle by using observations of ''nano-shot'' giant pulses from the Crab pulsar with time-delay < 0.4 ns, as well as using time differences between radio and optical photons observed from this pulsar. Using the former, we obtain a limit on violation of the weak equivalence principle in terms of the PPN parameter Δγ < 2.41 × 10^{-15}. From the time-difference between simultaneous optical and radio observations, we get Δγ < 1.54 × 10^{-9}. We also point out differences in our calculation of Shapiro delay and that from two recent papers (Yang and Zhang, Phys Rev D 94(10):101501, 2016; Zhang and Gong, Astrophys J 837:134, 2017), which used the same observations to obtain a corresponding limit on Δγ. (orig.)
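As a sanity check on the numbers quoted above (our arithmetic, not the paper's): the component delays sum to 3.84 days, and since the Shapiro delay scales as (1 + γ)/2 in the PPN framework, an arrival-time difference Δt between two messengers bounds Δγ by roughly 2Δt divided by the total delay:

```python
# Component Shapiro delays quoted in the abstract, in days.
dark_matter, bulge, disk = 3.4, 0.12, 0.32
total_days = dark_matter + bulge + disk          # 3.84 days
total_s = total_days * 86400.0                   # convert to seconds

# 'Nano-shot' giant-pulse time delay < 0.4 ns; the factor 2 reflects the
# (1 + gamma)/2 scaling of the Shapiro delay.
dt_obs = 0.4e-9
delta_gamma = 2.0 * dt_obs / total_s
print(f"total delay = {total_days:.2f} days; Delta-gamma < {delta_gamma:.2e}")
```

This matches the quoted bound Δγ < 2.41 × 10^{-15} to the precision given.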
Bergé, Joel; Brax, Philippe; Métris, Gilles; Pernot-Borràs, Martin; Touboul, Pierre; Uzan, Jean-Philippe
2018-04-01
The existence of a light or massive scalar field with a coupling to matter weaker than gravitational strength is a possible source of violation of the weak equivalence principle. We use the first results on the Eötvös parameter by the MICROSCOPE experiment to set new constraints on such scalar fields. For a massive scalar field of mass smaller than 10^{-12} eV (i.e., range larger than a few 10^{5} m), we improve existing constraints on the coupling |α| by one order of magnitude, both for a coupling to the baryon number and for a coupling to the difference between the baryon and the lepton numbers. We also consider a model describing the coupling of a generic dilaton to the standard matter fields with five parameters, for a light field: we find that, for masses smaller than 10^{-12} eV, the constraints on the dilaton coupling parameters are improved by one order of magnitude compared to previous equivalence principle tests.
Bergé, Joel; Brax, Philippe; Métris, Gilles; Pernot-Borràs, Martin; Touboul, Pierre; Uzan, Jean-Philippe
2018-04-06
The existence of a light or massive scalar field with a coupling to matter weaker than gravitational strength is a possible source of violation of the weak equivalence principle. We use the first results on the Eötvös parameter by the MICROSCOPE experiment to set new constraints on such scalar fields. For a massive scalar field of mass smaller than 10^{-12} eV (i.e., range larger than a few 10^{5} m), we improve existing constraints on the coupling |α| by one order of magnitude, both for a coupling to the baryon number and for a coupling to the difference between the baryon and the lepton numbers. We also consider a model describing the coupling of a generic dilaton to the standard matter fields with five parameters, for a light field: we find that, for masses smaller than 10^{-12} eV, the constraints on the dilaton coupling parameters are improved by one order of magnitude compared to previous equivalence principle tests.
Directory of Open Access Journals (Sweden)
Molin Liu
2017-07-01
About 0.4 s after the Laser Interferometer Gravitational-Wave Observatory (LIGO) detected a transient gravitational-wave (GW) signal, GW150914, the Fermi Gamma-ray Burst Monitor (GBM) also found a weak electromagnetic transient (GBM transient 150914). Time and location coincidences favor a possible association between GW150914 and GBM transient 150914. Under this possible association, we adopt Fermi's electromagnetic (EM) localization and derive constraints on possible violations of the Weak Equivalence Principle (WEP) from the observations of the two events. Our calculations are based on four comparisons: (1) The first is the comparison of the initial GWs detected at the two LIGO sites. From the different polarizations of these initial GWs, we obtain a limit on any difference in the parametrized post-Newtonian (PPN) parameter Δγ ≲ 10^{-10}. (2) The second is a comparison of GWs and possible EM waves. Using a traditional super-Eddington accretion model for GBM transient 150914, we again obtain an upper limit Δγ ≲ 10^{-10}. Compared with previous results for photons and neutrinos, our limits are five orders of magnitude stronger than those from PeV neutrinos in blazar flares, and seven orders stronger than those from MeV neutrinos in SN1987A. (3) The third is a comparison of GWs with different frequencies in the range [35 Hz, 250 Hz]. (4) The fourth is a comparison of EM waves with different energies in the range [1 keV, 10 MeV]. These last two comparisons lead to an even stronger limit, Δγ ≲ 10^{-8}. Our results highlight the potential of multi-messenger signals exploiting different emission channels to strengthen existing tests of the WEP.
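For context, multi-messenger comparisons of this kind rest on the standard parametrized post-Newtonian Shapiro delay; in the usual textbook convention (our summary, not the paper's exact expressions),

```latex
\delta t_{\mathrm{gra}} = \frac{1+\gamma}{c^{3}} \int U\big(r(l)\big)\,\mathrm{d}l ,
\qquad
\Delta t = \frac{\Delta\gamma}{c^{3}} \int U\big(r(l)\big)\,\mathrm{d}l ,
```

so an observed arrival-time difference Δt between two messengers traversing the same gravitational potential U bounds the PPN difference as Δγ ≲ 2Δt/δt_gra.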
International Nuclear Information System (INIS)
Unnikrishnan, C.S.
1994-01-01
Principle of equivalence was the fundamental guiding principle in the formulation of the general theory of relativity. What are its key elements? What are the empirical observations which establish it? What is its relevance to some new experiments? These questions are discussed in this article. (author). 11 refs., 5 figs
Fennelly, A. J.
1981-01-01
The THεμ formalism, used in analyzing equivalence principle experiments of metric and nonmetric gravity theories, is adapted to the description of the electroweak interaction using the Weinberg-Salam unified SU(2) x U(1) model. The use of the THεμ formalism is thereby extended to the weak interactions, showing how the gravitational field affects W^{±} and Z^{0} boson propagation and the rates of interactions mediated by them. The possibility of a similar extension to the strong interactions via SU(5) grand unified theories is briefly discussed. Also considered, using the effects of the potentials on the baryon and lepton wave functions, are the effects of gravity on weakly mediated transitions in high-A atoms which are electromagnetically forbidden. Three technologically feasible experiments to test the equivalence principle in the presence of the weak interactions are then briefly outlined: (1) K-capture by the Fe nucleus (counting the emitted X-ray); (2) forbidden absorption transitions in the vapor of high-A atoms; and (3) counting the relative beta-decay rates in a suitable alpha-beta decay chain, assuming the strong interactions obey the equivalence principle.
DEFF Research Database (Denmark)
Kohlenbach, Ulrich Wilhelm
2002-01-01
We show that the so-called weak Markov's principle (WMP), which states that every pseudo-positive real number is positive, is underivable in E-HA + AC. Since this system allows one to formalize (at least large parts of) Bishop's constructive mathematics, this makes it unlikely that WMP can be proved within the framework of Bishop-style mathematics (which has been open for about 20 years). The underivability even holds if the ineffective schema of full comprehension (in all types) for negated formulas (in particular for -free formulas) is added, which allows one to derive the law of excluded middle…
Quantification of the equivalence principle
International Nuclear Information System (INIS)
Epstein, K.J.
1978-01-01
Quantitative relationships illustrate Einstein's equivalence principle, relating it to Newton's ''fictitious'' forces arising from the use of noninertial frames, and to the form of the relativistic time dilatation in local Lorentz frames. The equivalence principle can be interpreted as the equivalence of general covariance to local Lorentz covariance, in a manner which is characteristic of Riemannian and pseudo-Riemannian geometries
The gauge principle vs. the equivalence principle
International Nuclear Information System (INIS)
Gates, S.J. Jr.
1984-01-01
Within the context of field theory, it is argued that the role of the equivalence principle may be replaced by the principle of gauge invariance to provide a logical framework for theories of gravitation
International Nuclear Information System (INIS)
Smorodinskij, Ya.A.
1980-01-01
The prerelativistic history of the equivalence principle (EP) is presented briefly, and its role in the discovery of the general theory of relativity (GRT) is elucidated. According to modern measurements, the ratio of inertial and gravitational masses does not differ from 1 to at least 12 decimal places. Attention is paid to the difference between the gravitational field and the electromagnetic one: the energy of the gravitational field distributed in space is itself a source of the field, so gravitational fields always interact upon superposition, whereas electromagnetic fields from different sources simply add. On the basis of the EP it is established that the Sun's field interacts with the Earth's gravitational energy in the same way as with any other energy, which demonstrates that the gravitational field itself gravitates toward a heavy body. The problem of gyroscope motion in the Earth's gravitational field is presented as a paradox: the calculation shows that a gyroscope on a satellite undergoes a positive precession, its axis turning through an angle α per orbit around the Earth, plus, because of the spatial curvature, an additional angle twice as large as α, for a resulting turn of 3α. It is also shown on the basis of the EP that the polarization plane of a ray of light passing through a gravitational field does not rotate in any coordinate system. Along with the historical value of the EP, the necessity of taking its requirements into account in the description of the physical world is noted
Sondag, Andrea; Dittus, Hansjörg
2016-08-01
The Weak Equivalence Principle (WEP) is at the basis of General Relativity, the best theory of gravitation today. It has been, and still is, tested with different methods and accuracies. In this paper an overview is given of tests of the Weak Equivalence Principle done in the past, developed in the present, and planned for the future. The best result up to now is derived from the data of torsion balance experiments by Schlamminger et al. (2008). An intuitive test of the WEP consists of comparing the accelerations of two free-falling test masses of different composition. This has been carried out by Kuroda & Mio (1989, 1990) with, to date, the most precise result for this setup. There is still more potential in this method, especially with a longer free-fall time and sensors with a higher resolution. Providing a free-fall time of 4.74 s (9.3 s using the catapult), the drop tower of the Center of Applied Space Technology and Microgravity (ZARM) at the University of Bremen is a perfect facility for further improvements. In 2001 a free-fall experiment with highly sensitive SQUID (Superconducting QUantum Interference Device) sensors tested the WEP with an accuracy of 10^{-7} (Nietzsche, 2001). Under optimal conditions one could reach an accuracy of 10^{-13} with this setup (Vodel et al., 2001). A description of this experiment and its results is given in the next part of this paper. For the free fall of macroscopic test masses it is important to start with precisely defined starting conditions for the positions and velocities of the test masses. An Electrostatic Positioning System (EPS) has been developed for this purpose. It is described in the last part of this paper.
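As a quick plausibility check (ours, not from the paper), the quoted 4.74 s free-fall time corresponds to the drop distance of the ZARM tower under standard gravity:

```python
g = 9.81          # m/s^2, standard gravity
t_drop = 4.74     # s, free-fall time quoted for the ZARM drop tower
height = 0.5 * g * t_drop**2
print(f"implied drop distance ≈ {height:.0f} m")  # ≈ 110 m
```

The catapult mode roughly doubles the usable time (9.3 s) by first launching the capsule upward through the same distance; a longer free-fall time directly strengthens the differential-displacement signal, which grows as ½Δa·t².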
Equivalence Principle, Higgs Boson and Cosmology
Directory of Open Access Journals (Sweden)
Mauro Francaviglia
2013-05-01
We discuss here possible tests for Palatini f(R)-theories together with their implications for different formulations of the Equivalence Principle. We shall show that Palatini f(R)-theories obey the Weak Equivalence Principle and violate the Strong Equivalence Principle. The violations of the Strong Equivalence Principle vanish in vacuum (and in purely electromagnetic solutions), as well as on short time scales with respect to the age of the universe. However, we suggest that a framework based on Palatini f(R)-theories is more general than standard General Relativity (GR) and that it sheds light on the interpretation of data and results in a way which is more model-independent than standard GR itself.
Violation of Equivalence Principle and Solar Neutrinos
International Nuclear Information System (INIS)
Gago, A.M.; Nunokawa, H.; Zukanovich Funchal, R.
2001-01-01
We have updated the analysis for the solution to the solar neutrino problem by the long-wavelength neutrino oscillations induced by a tiny breakdown of the weak equivalence principle of general relativity, and obtained a very good fit to all the solar neutrino data
Energy Technology Data Exchange (ETDEWEB)
Vodel, W.; Nietzsche, S.; Neubert, R. [Friedrich-Schiller-Universitaet Jena (Germany). Inst. fuer Festkoerperphysik; Dittus, H. [Univ. Bremen (Germany). Zentrum fuer angewandte Raumfahrttechnologie und Mikrogravitation
2003-07-01
The weak equivalence principle is one of the fundamental hypotheses of general relativity and thus one of the cornerstones of our physical picture of the world. Although there have been numerous and ever more precise measurements testing the equivalence of gravitational and inertial mass since Galileo Galilei's first experiments at the Leaning Tower of Pisa in 1638, the strict validity of this fundamental principle remains comparatively poorly constrained experimentally. Newer methods, such as SQUID-based measurement techniques and experiments on satellites, promise improvements in the near future, so that theoretical approaches to unifying all known physical interactions, which predict a violation of the weak equivalence principle, could be constrained experimentally. The contribution surveys the SQUID-based measurement technique developed at Jena University for testing the equivalence principle and summarizes the experimental results obtained so far in free-fall tests at the Bremen drop tower. It concludes with an outlook on the planned NASA/ESA space mission STEP for a precision test of the weak equivalence principle. (orig.)
Attainment of radiation equivalency principle
International Nuclear Information System (INIS)
Shmelev, A.N.; Apseh, V.A.
2004-01-01
Problems connected with the prospects for long-term development of nuclear energy are discussed. Basic principles of a future large-scale nuclear energy system are listed; primary attention is paid to the safety of radioactive waste management. The radiation equivalence principle implies closure of the fuel cycle and management of nuclear materials transportation with low losses in spent fuel and waste processing. Two aspects are considered: radiation equivalence in the global and in the local sense. The necessity of looking for other fuel cycle management strategies with respect to radioactive waste management in full-scale nuclear energy is supported [ru]
Quantum mechanics and the equivalence principle
International Nuclear Information System (INIS)
Davies, P C W
2004-01-01
A quantum particle moving in a gravitational field may penetrate the classically forbidden region of the gravitational potential. This raises the question of whether the time of flight of a quantum particle in a gravitational field might deviate systematically from that of a classical particle due to tunnelling delay, representing a violation of the weak equivalence principle. I investigate this using a model quantum clock to measure the time of flight of a quantum particle in a uniform gravitational field, and show that a violation of the equivalence principle does not occur when the measurement is made far from the turning point of the classical trajectory. The results are then confirmed using the so-called dwell time definition of quantum tunnelling. I conclude with some remarks about the strong equivalence principle in quantum mechanics
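For reference, the "dwell time" invoked above is commonly defined (standard notation; the paper's details may differ) as

```latex
\tau_{\mathrm{D}} \;=\; \frac{1}{j_{\mathrm{in}}} \int_{x_{1}}^{x_{2}} |\psi(x)|^{2}\,\mathrm{d}x ,
```

i.e., the probability stored in the region [x₁, x₂] (here the classically forbidden region near the turning point) divided by the incident probability flux j_in.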
Comments on field equivalence principles
DEFF Research Database (Denmark)
Appel-Hansen, Jørgen
1987-01-01
It is pointed out that often-used arguments based on a short-circuit concept in presentations of field equivalence principles are not correct. An alternative presentation based on the uniqueness theorem is given. It does not contradict the results obtained by using the short-circuit concept...
Weak equivalence classes of complex vector bundles
Czech Academy of Sciences Publication Activity Database
Le, Hong-Van
LXXVII, č. 1 (2008), s. 23-30 ISSN 0862-9544 R&D Projects: GA AV ČR IAA100190701 Institutional research plan: CEZ:AV0Z10190503 Keywords: Chern classes; complex Grassmannians; weak equivalence Subject RIV: BA - General Mathematics
Energy conservation and the principle of equivalence
International Nuclear Information System (INIS)
Haugan, M.P.
1979-01-01
If the equivalence principle is violated, then observers performing local experiments can detect effects due to their position in an external gravitational environment (preferred-location effects) or can detect effects due to their velocity through some preferred frame (preferred-frame effects). We show that the principle of energy conservation implies a quantitative connection between such effects and structure-dependence of the gravitational acceleration of test bodies (violation of the Weak Equivalence Principle). We analyze this connection within a general theoretical framework that encompasses both non-gravitational local experiments and test bodies as well as gravitational experiments and test bodies, and we use it to discuss specific experimental tests of the equivalence principle, including non-gravitational tests such as gravitational redshift experiments, Eötvös experiments, the Hughes-Drever experiment, and the Turner-Hill experiment, and gravitational tests such as the lunar-laser-ranging ''Eötvös'' experiment, and measurements of anisotropies and variations in the gravitational constant. This framework is illustrated by analyses within two theoretical formalisms for studying gravitational theories: the PPN formalism, which deals with the motion of gravitating bodies within metric theories of gravity, and the THεμ formalism, which deals with the motion of charged particles within all metric theories and a broad class of non-metric theories of gravity
Cryogenic test of the equivalence principle
International Nuclear Information System (INIS)
Worden, P.W. Jr.
1976-01-01
The weak equivalence principle is the hypothesis that the ratio of inertial and passive gravitational mass is the same for all bodies. A greatly improved test of this principle is possible in an orbiting satellite. The most promising experiments for an orbital test are adaptations of the Galilean free-fall experiment and the Eötvös balance. Sensitivity to gravity gradient noise, both from the earth and from the spacecraft, defines a limit to the sensitivity in each case. This limit is generally much worse for an Eötvös balance than for a properly designed free-fall experiment. The difference is related to the difficulty of making a balance sufficiently isoinertial. Cryogenic technology is desirable to take full advantage of the potential sensitivity, but tides in the liquid helium refrigerant may produce a gravity gradient that seriously degrades the ultimate sensitivity. The Eötvös balance appears to have a limiting sensitivity to a relative difference in rate of fall of about 2 × 10^{-14} in orbit. The free-fall experiment is limited by the helium tide to about 10^{-15}; if the tide can be controlled or eliminated the limit may approach 10^{-18}. Other limitations to equivalence principle experiments are discussed. An experimental test of some of the concepts involved in the orbital free-fall experiment is continuing. The experiment consists of comparing the motions of test masses levitated in a superconducting magnetic bearing, and is itself a sensitive test of the equivalence principle. At present the levitation magnets, position monitors and control coils have been tested and major noise sources identified. A measurement of the equivalence principle is postponed pending development of a system for digitizing data. The experiment and preliminary results are described
Foundations of gravitation theory: the principle of equivalence
International Nuclear Information System (INIS)
Haugan, M.P.
1978-01-01
A new framework is presented within which to discuss the principle of equivalence and its experimental tests. The framework incorporates a special structure imposed on the equivalence principle by the principle of energy conservation. This structure includes relations among the conceptual components of the equivalence principle as well as quantitative relations among the outcomes of its experimental tests. One of the most striking new results obtained through use of this framework is a connection between the breakdown of local Lorentz invariance and the breakdown of the principle that all bodies fall with the same acceleration in a gravitational field. An extensive discussion of experimental tests of the equivalence principle and their significance is also presented. Within the above framework, theory-independent analyses of a broad range of equivalence principle tests are possible. Gravitational redshift experiments, Doppler-shift experiments, the Turner-Hill and Hughes-Drever experiments, and a number of solar-system tests of gravitation theories are analyzed. Application of the techniques of theoretical nuclear physics to the quantitative interpretation of equivalence principle tests using laboratory materials of different composition yields a number of important results. It is found that current Eötvös experiments significantly demonstrate the compatibility of the weak interactions with the equivalence principle. It is also shown that the Hughes-Drever experiment is the most precise test of local Lorentz invariance yet performed. The work leads to a strong, tightly knit empirical basis for the principle of equivalence, the central pillar of the foundations of gravitation theory.
Higher-order gravity and the classical equivalence principle
Accioly, Antonio; Herdy, Wallace
2017-11-01
As is well known, the deflection of any particle by a gravitational field within the context of Einstein's general relativity (a geometrical theory) is, of course, nondispersive. Nevertheless, as we shall show in this paper, this result changes totally if the bending is analyzed, at the tree level, in the framework of higher-order gravity. Indeed, to first order, the deflection angle corresponding to the scattering of different quantum particles by the gravitational field mentioned above is not only spin dependent but also dispersive (energy dependent). Consequently, it violates the classical equivalence principle (universality of free fall, or equality of inertial and gravitational masses), which is a nonlocal principle. However, contrary to popular belief, it is in agreement with the weak equivalence principle, which is nothing but a statement about purely local effects. It is worthy of note that the weak equivalence principle encompasses the classical equivalence principle locally. We also show that the claim that there exists an incompatibility between quantum mechanics and the weak equivalence principle is incorrect.
Quantum equivalence principle without mass superselection
International Nuclear Information System (INIS)
Hernandez-Coronado, H.; Okon, E.
2013-01-01
The standard argument for the validity of Einstein's equivalence principle in a non-relativistic quantum context involves the application of a mass superselection rule. The objective of this work is to show that, contrary to widespread opinion, the compatibility between the equivalence principle and quantum mechanics does not depend on the introduction of such a restriction. For this purpose, we develop a formalism based on the extended Galileo group, which allows for a consistent handling of superpositions of different masses, and show that, within such a scheme, mass superpositions behave as they should in order to obey the equivalence principle. - Highlights: • We propose a formalism for consistently handling, within a non-relativistic quantum context, superpositions of states with different masses. • The formalism utilizes the extended Galileo group, in which mass is a generator. • The proposed formalism allows the equivalence principle to be satisfied without the need to impose a mass superselection rule.
Dark matter and the equivalence principle
Frieman, Joshua A.; Gradwohl, Ben-Ami
1993-01-01
A survey is presented of the current understanding of dark matter invoked by astrophysical theory and cosmology. Einstein's equivalence principle asserts that local measurements cannot distinguish a system at rest in a gravitational field from one that is in uniform acceleration in empty space. Recent test methods for the equivalence principle are discussed as bases for testing dark matter scenarios involving long-range forces between either baryonic or nonbaryonic dark matter and ordinary matter.
Testing the principle of equivalence by solar neutrinos
International Nuclear Information System (INIS)
Minakata, Hisakazu; Washington Univ., Seattle, WA; Nunokawa, Hiroshi; Washington Univ., Seattle, WA
1994-04-01
We discuss the possibility of testing the principle of equivalence with solar neutrinos. If there exists a violation of the equivalence principle, quarks and leptons with different flavors may not universally couple with gravity. The method we discuss employs the quantum mechanical phenomenon of neutrino oscillation to probe the nonuniversality of the gravitational couplings of neutrinos. We develop an appropriate formalism to deal with neutrino propagation under the weak gravitational fields of the Sun in the presence of flavor mixing. We point out that solar neutrino observation by the next generation of water Cherenkov detectors can improve the existing bound on violation of the equivalence principle by 3-4 orders of magnitude if the nonadiabatic Mikheyev-Smirnov-Wolfenstein mechanism is the solution to the solar neutrino problem.
Testing the principle of equivalence by solar neutrinos
International Nuclear Information System (INIS)
Minakata, H.; Nunokawa, H.
1995-01-01
We discuss the possibility of testing the principle of equivalence with solar neutrinos. If there exists a violation of the equivalence principle, quarks and leptons with different flavors may not universally couple with gravity. The method we discuss employs the quantum mechanical phenomenon of neutrino oscillation to probe the nonuniversality of the gravitational couplings of neutrinos. We develop an appropriate formalism to deal with neutrino propagation under the weak gravitational fields of the Sun in the presence of flavor mixing. We point out that solar neutrino observation by the next generation of water Cherenkov detectors can place stringent bounds on the violation of the equivalence principle, to 1 part in 10¹⁵-10¹⁶, if the nonadiabatic Mikheyev-Smirnov-Wolfenstein mechanism is the solution to the solar neutrino problem.
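The leverage of solar neutrinos here comes from energy dependence: a gravity-induced (VEP) oscillation phase grows linearly with neutrino energy E, while the familiar mass-mixing phase falls as Δm²L/4E. A minimal sketch of this scaling in natural units; the numeric values are placeholders, not taken from the paper, and only the functional forms are the standard ones:

```python
# Illustrative scaling of the two oscillation mechanisms (natural units).

def phase_mass(E, L, dm2):
    """Standard mass-mixing oscillation phase: dm2 * L / (4E), falls as 1/E."""
    return dm2 * L / (4.0 * E)

def phase_vep(E, L, phi_dgamma):
    """VEP-induced phase: |phi * dgamma| * E * L, grows linearly with E."""
    return phi_dgamma * E * L

# Doubling the energy halves the mass-mixing phase but doubles the
# VEP phase -- this opposite energy dependence is what lets solar
# neutrino data separate the two effects.
E, L = 1.0, 1.0
assert phase_mass(2 * E, L, dm2=1.0) == phase_mass(E, L, dm2=1.0) / 2
assert phase_vep(2 * E, L, phi_dgamma=1.0) == 2 * phase_vep(E, L, phi_dgamma=1.0)
```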
Free Fall and the Equivalence Principle Revisited
Pendrill, Ann-Marie
2017-01-01
Free fall is commonly discussed as an example of the equivalence principle, in the context of a homogeneous gravitational field, which is a reasonable approximation for small test masses falling moderate distances. Newton's law of gravity provides a generalisation to larger distances, and also brings in an inhomogeneity in the gravitational field.…
Jotterand, Fabrice; Wangmo, Tenzin
2014-01-01
In this article we critically examine the principle of equivalence of care in prison medicine. First, we provide an overview of how the principle of equivalence is utilized in various national and international guidelines on health care provision to prisoners. Second, we outline some of the problems associated with its applications, and argue that the principle of equivalence should go beyond equivalence to access and include equivalence of outcomes. However, because of the particular context of the prison environment, third, we contend that the concept of "health" in equivalence of health outcomes needs conceptual clarity; otherwise, it fails to provide a threshold for healthy states among inmates. We accomplish this by examining common understandings of the concepts of health and disease. We conclude our article by showing why the conceptualization of diseases as clinical problems provides a helpful approach in the delivery of health care in prison.
The equivalence principle in a quantum world
DEFF Research Database (Denmark)
Bjerrum-Bohr, N. Emil J.; Donoghue, John F.; El-Menoufi, Basem Kamal
2015-01-01
We show how modern methods can be applied to quantum gravity at low energy. We test how quantum corrections challenge the classical framework behind the equivalence principle (EP), for instance through the introduction of nonlocality from quantum physics, embodied in the uncertainty principle. When the energy is small, we now have the tools to address this conflict explicitly. Despite the violation of some classical concepts, the EP continues to provide the core of the quantum gravity framework through the symmetry - general coordinate invariance - that is used to organize the effective field theory.
The equivalence principle in classical mechanics and quantum mechanics
Mannheim, Philip D.
1998-01-01
We discuss our understanding of the equivalence principle in both classical mechanics and quantum mechanics. We show that not only does the equivalence principle hold for the trajectories of quantum particles in a background gravitational field, but also that it is only because of this that the equivalence principle is even to be expected to hold for classical particles at all.
Equivalence principle and the baryon acoustic peak
Baldauf, Tobias; Mirbabayi, Mehrdad; Simonović, Marko; Zaldarriaga, Matias
2015-08-01
We study the dominant effect of a long-wavelength density perturbation δ(λL) on short-distance physics. In the nonrelativistic limit, the result is a uniform acceleration, fixed by the equivalence principle, which typically has no effect on statistical averages due to translational invariance. This same reasoning has been formalized to obtain a "consistency condition" on the cosmological correlation functions. In the presence of a feature, such as the acoustic peak at ℓBAO, this naive expectation breaks down for λL ≲ ℓBAO; this is explicitly applied to the one-loop calculation of the power spectrum. Finally, the success of baryon acoustic oscillation reconstruction schemes is argued to be another piece of empirical evidence for the validity of the results.
Equivalence principle implications of modified gravity models
International Nuclear Information System (INIS)
Hui, Lam; Nicolis, Alberto; Stubbs, Christopher W.
2009-01-01
Theories that attempt to explain the observed cosmic acceleration by modifying general relativity all introduce a new scalar degree of freedom that is active on large scales, but is screened on small scales to match experiments. We demonstrate that if such screening occurs via the chameleon mechanism, such as in f(R) theory, it is possible to have order unity violation of the equivalence principle, despite the absence of explicit violation in the microscopic action. Namely, extended objects such as galaxies or constituents thereof do not all fall at the same rate. The chameleon mechanism can screen the scalar charge for large objects but not for small ones (large/small is defined by the depth of the gravitational potential and is controlled by the scalar coupling). This leads to order one fluctuations in the ratio of the inertial mass to gravitational mass. We provide derivations in both Einstein and Jordan frames. In Jordan frame, it is no longer true that all objects move on geodesics; only unscreened ones, such as test particles, do. In contrast, if the scalar screening occurs via strong coupling, such as in the Dvali-Gabadadze-Porrati braneworld model, equivalence principle violation occurs at a much reduced level. We propose several observational tests of the chameleon mechanism: 1. small galaxies should accelerate faster than large galaxies, even in environments where dynamical friction is negligible; 2. voids defined by small galaxies would appear larger compared to standard expectations; 3. stars and diffuse gas in small galaxies should have different velocities, even if they are on the same orbits; 4. lensing and dynamical mass estimates should agree for large galaxies but disagree for small ones. We discuss possible pitfalls in some of these tests. The cleanest is the third one where the mass estimate from HI rotational velocity could exceed that from stars by 30% or more. To avoid blanket screening of all objects, the most promising place to look is in
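The screening logic described above reduces to a one-line criterion: in the standard thin-shell estimate, an object whose Newtonian potential has depth Φ_N is screened when φ_env/(6βM_pl|Φ_N|) < 1. A hedged sketch of that classification; the threshold form is the usual thin-shell estimate, and every numeric value below is made up for illustration, not taken from the paper:

```python
# Thin-shell screening criterion for a chameleon scalar (sketch).
# Phi_N: depth of the object's Newtonian potential; phi_env: ambient
# scalar field value; beta: scalar-matter coupling. Units chosen so
# that M_pl = 1; all numbers are illustrative.

def is_screened(Phi_N, phi_env, beta, M_pl=1.0):
    """True when the thin-shell factor phi_env/(6*beta*M_pl*|Phi_N|) < 1."""
    return phi_env / (6.0 * beta * M_pl * abs(Phi_N)) < 1.0

# A deep potential well (large galaxy) is screened and falls at the
# standard rate; a shallow well (dwarf galaxy) is unscreened and also
# feels the extra scalar force -- hence the proposed observational tests.
assert is_screened(Phi_N=1e-4, phi_env=1e-6, beta=1.0)
assert not is_screened(Phi_N=1e-8, phi_env=1e-6, beta=1.0)
```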
Possible test of the strong principle of equivalence
International Nuclear Information System (INIS)
Brecher, K.
1978-01-01
We suggest that redshift determinations of X-ray and γ-ray lines produced near the surface of neutron stars which arise from different physical processes could provide a significant test of the strong principle of equivalence for strong gravitational fields. As a complement to both the high-precision weak-field solar-system experiments and the cosmological time variation searches, such observations could further test the hypothesis that physics is locally the same at all times and in all places
Weak scale from the maximum entropy principle
Hamada, Yuta; Kawai, Hikaru; Kawana, Kiyoharu
2015-03-01
The theory of the multiverse and wormholes suggests that the parameters of the Standard Model (SM) are fixed in such a way that the radiation of the S³ universe at the final stage, S_rad, becomes maximum, which we call the maximum entropy principle. Although it is difficult to confirm this principle generally, for a few parameters of the SM we can check whether S_rad actually becomes maximum at the observed values. In this paper, we regard S_rad at the final stage as a function of the weak scale (the Higgs expectation value) v_h, and show that it becomes maximum around v_h = O(300 GeV) when the dimensionless couplings in the SM, i.e., the Higgs self-coupling, the gauge couplings, and the Yukawa couplings, are fixed. Roughly speaking, we find that the weak scale is given by v_h ∼ T_BBN² / (M_pl y_e⁵), where y_e is the Yukawa coupling of the electron, T_BBN is the temperature at which Big Bang nucleosynthesis starts, and M_pl is the Planck mass.
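As a quick numerical check of the quoted scaling v_h ∼ T_BBN²/(M_pl y_e⁵), a sketch using textbook values for the constants (the GeV values below are standard numbers, not taken from the paper):

```python
import math

# Order-of-magnitude check of v_h ~ T_BBN^2 / (M_pl * y_e^5).
# All quantities in GeV; values are standard, not from the paper.
M_pl = 1.22e19            # Planck mass
T_BBN = 1e-3              # BBN temperature, ~1 MeV
m_e = 0.511e-3            # electron mass
v_obs = 246.0             # observed Higgs expectation value

y_e = math.sqrt(2) * m_e / v_obs     # electron Yukawa coupling, ~3e-6
v_h = T_BBN**2 / (M_pl * y_e**5)     # comes out at a few hundred GeV

print(f"y_e ~ {y_e:.2e}, v_h ~ {v_h:.0f} GeV")
```

With these inputs the formula indeed lands at a few hundred GeV, consistent with the O(300 GeV) quoted in the abstract.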
Theoretical aspects of the equivalence principle
International Nuclear Information System (INIS)
Damour, Thibault
2012-01-01
We review several theoretical aspects of the equivalence principle (EP). We emphasize the unsatisfactory fact that the EP maintains the absolute character of the coupling constants of physics, while general relativity and its generalizations (Kaluza–Klein, …, string theory) suggest that all absolute structures should be replaced by dynamical entities. We discuss the EP-violation phenomenology of dilaton-like models, which is likely to be dominated by the linear superposition of two effects: a signal proportional to the nuclear Coulomb energy, related to the variation of the fine-structure constant, and a signal proportional to the surface nuclear binding energy, related to the variation of the light quark masses. We recall various theoretical arguments (including a recently proposed anthropic argument) suggesting that the EP be violated at a small, but not unmeasurably small level. This motivates the need for improved tests of the EP. These tests are probing new territories in physics that are related to deep, and mysterious, issues in fundamental physics. (paper)
Testing the equivalence principle on a trampoline
Reasenberg, Robert D.; Phillips, James D.
2001-07-01
We are developing a Galilean test of the equivalence principle in which two pairs of test mass assemblies (TMA) are in free fall in a comoving vacuum chamber for about 0.9 s. The TMA are tossed upward, and the process repeats at 1.2 s intervals. Each TMA carries a solid quartz retroreflector and a payload mass of about one-third of the total TMA mass. The relative vertical motion of the TMA of each pair is monitored by a laser gauge working in an optical cavity formed by the retroreflectors. Single-toss precision of the relative acceleration of a single pair of TMA is 3.5 × 10⁻¹² g. The project goal of Δg/g = 10⁻¹³ can be reached in a single night's run, but repetition with altered configurations will be required to ensure the correction of systematic error to the nominal accuracy level. Because the measurements can be made quickly, we plan to study several pairs of materials.
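The quoted figures are mutually consistent: averaging N independent tosses improves precision by √N, so the goal follows from the single-toss precision in well under a night. A quick arithmetic check:

```python
import math

# sqrt(N) averaging: how many tosses at single-toss precision
# 3.5e-12 g are needed to reach the project goal of 1e-13?
single_toss = 3.5e-12     # single-toss precision (units of g)
goal = 1e-13              # project goal, Delta g / g

N = math.ceil((single_toss / goal) ** 2)   # tosses required
minutes = N * 1.2 / 60                     # at one toss per 1.2 s

print(f"N = {N} tosses, about {minutes:.1f} minutes")  # under half an hour
```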
Testing the equivalence principle on cosmological scales
Bonvin, Camille; Fleury, Pierre
2018-05-01
The equivalence principle, which is one of the main pillars of general relativity, is very well tested in the Solar system; however, its validity is more uncertain on cosmological scales, or where dark matter is concerned. This article shows that relativistic effects in the large-scale structure can be used to directly test whether dark matter satisfies Euler's equation, i.e. whether its free fall is characterised by geodesic motion, just like baryons and light. After proposing a general parametrisation for deviations from Euler's equation, we perform Fisher-matrix forecasts for future surveys like DESI and the SKA, and show that such deviations can be constrained with a precision of order 10%. Deviations from Euler's equation cannot be tested directly with standard methods like redshift-space distortions and gravitational lensing, since these observables are not sensitive to the time component of the metric. Our analysis therefore shows that relativistic effects bring new and complementary constraints on alternative theories of gravity.
Equivalence principle violations and couplings of a light dilaton
International Nuclear Information System (INIS)
Damour, Thibault; Donoghue, John F.
2010-01-01
We consider possible violations of the equivalence principle through the exchange of a light 'dilaton-like' scalar field. Using recent work on the quark-mass dependence of nuclear binding, we find that the dilaton-quark-mass coupling induces significant equivalence-principle-violating effects varying like the inverse cube root of the atomic number, A⁻¹/³. We provide a general parametrization of the scalar couplings, but argue that two parameters are likely to dominate the equivalence-principle phenomenology. We indicate the implications of this framework for comparing the sensitivities of current and planned experimental tests of the equivalence principle.
Infrared equivalence of strongly and weakly coupled gauge theories
International Nuclear Information System (INIS)
Olesen, P.
1975-10-01
Using the decoupling theorem of Appelquist and Carazzone, it is shown that in terms of Feynman diagrams the pure Yang-Mills theory is equivalent in the infrared limit to a (zero-mass renormalized) theory in which the vector mesons are coupled to fermions, and in which the fermions do not decouple. By taking enough fermions it is then shown that even though the pure Yang-Mills theory is characterized by the inapplicability of perturbation theory, the effective coupling in the equivalent fermion description is nevertheless very weak. The effective mass in the zero-mass renormalization blows up. In the fermion description, diagrams involving only vector mesons are suppressed relative to diagrams containing at least one fermion loop. (Auth.)
Cosmology with equivalence principle breaking in the dark sector
International Nuclear Information System (INIS)
Keselman, Jose Ariel; Nusser, Adi; Peebles, P. J. E.
2010-01-01
A long-range force acting only between nonbaryonic particles would be associated with a large violation of the weak equivalence principle. We explore cosmological consequences of this idea, which we label ReBEL (daRk Breaking Equivalence principLe). A high-resolution hydrodynamical simulation of the distributions of baryons and dark matter confirms our previous findings that a ReBEL force of comparable strength to gravity on comoving scales of about 1 h⁻¹ Mpc causes voids between the concentrations of large galaxies to be more nearly empty, suppresses accretion of intergalactic matter onto galaxies at low redshift, and produces an early generation of dense dark-matter halos. A preliminary analysis indicates the ReBEL scenario is consistent with the one-dimensional power spectrum of the Lyman-alpha forest and the three-dimensional galaxy autocorrelation function. Segregation of baryons and DM in galaxies and systems of galaxies is a strong prediction of ReBEL. ReBEL naturally correlates the baryon mass fraction in groups and clusters of galaxies with the system mass, in agreement with recent measurements.
The equivalence principle and the gravitational constant in experimental relativity
International Nuclear Information System (INIS)
Spallicci, A.D.A.M.
1988-01-01
Fischbach's analysis of the Eötvös experiment, showing an embedded fifth force, has stressed the importance of further tests of the equivalence principle (EP). From Galilei and Newton onward, the EP played the role of a postulate for all gravitational physics and mechanics (weak EP), until Einstein, who extended the validity of the EP to all physics (strong EP). After Fischbach's publication on the fifth force, several experiments have been performed or proposed to test the WEP. They are concerned with possible gravitational potential anomalies depending upon distance or matter composition. While the low accuracy with which the gravitational constant G is known has been recognized, experiments have been proposed to test G in the range from a few cm up to 200 m. This paper highlights the different features of the proposed space experiments. Possible implications for the metric formalism for objects in low potential and slow motion are briefly indicated.
Risk measurement with equivalent utility principles
Denuit, M.; Dhaene, J.; Goovaerts, M.; Kaas, R.; Laeven, R.
2006-01-01
Risk measures have been studied for several decades in the actuarial literature, where they appeared under the guise of premium calculation principles. Risk measures and properties that risk measures should satisfy have recently received considerable attention in the financial mathematics literature.
Quantum mechanics from an equivalence principle
International Nuclear Information System (INIS)
Faraggi, A.E.
1997-01-01
The authors show that requiring diffeomorphic equivalence for one-dimensional stationary states implies that the reduced action S₀ satisfies the quantum Hamilton-Jacobi equation, with the Planck constant playing the role of a covariantizing parameter. The construction shows the existence of a fundamental initial condition which is strictly related to the Möbius symmetry of the Legendre transform and to its involutive character. The universal nature of the initial condition implies the Schrödinger equation in any dimension.
Apparent violation of the principle of equivalence and Killing horizons
International Nuclear Information System (INIS)
Zimmerman, R.L.; Farhoosh, H.; Oregon Univ., Eugene
1980-01-01
By means of the principle of equivalence, the qualitative behavior of the Schwarzschild horizon about a uniformly accelerating particle is deduced. This result is confirmed for an exact solution of a uniformly accelerating object in the limit of small accelerations. For large accelerations the Schwarzschild horizon appears to violate the qualitative behavior established via the principle of equivalence. When similar arguments are extended to an observable such as the redshift between two observers, there is no departure from the results expected from the principle of equivalence. The resolution of the paradox is brought about by a compensating effect due to the Rindler horizon. (author)
The principle of equivalence and the Trojan asteroids
International Nuclear Information System (INIS)
Orellana, R.; Vucetich, H.
1986-05-01
An analysis of the Trojan asteroids motion has been carried out in order to set limits to possible violations to the principle of equivalence. Preliminary results, in agreement with general relativity, are reported. (author)
Einstein's equivalence principle instead of the inertia forces
International Nuclear Information System (INIS)
Herreros Mateos, F.
1997-01-01
In this article I intend to show that Einstein's equivalence principle advantageously replaces the inertial forces in the study and resolution of problems in which non-inertial systems appear. (Author) 13 refs
Extended Equivalence Principle: Implications for Gravity, Geometry and Thermodynamics
Sivaram, C.; Arun, Kenath
2012-01-01
The equivalence principle was formulated by Einstein in an attempt to extend the concept of inertial frames to accelerated frames, thereby bringing in gravity. In recent decades, it has been realised that gravity is linked not only with geometry of space-time but also with thermodynamics especially in connection with black hole horizons, vacuum fluctuations, dark energy, etc. In this work we look at how the equivalence principle manifests itself in these different situations where we have str...
Probing Students' Ideas of the Principle of Equivalence
Bandyopadhyay, Atanu; Kumar, Arvind
2011-01-01
The principle of equivalence was the first vital clue to Einstein in his extension of special relativity to general relativity, the modern theory of gravitation. In this paper we investigate in some detail students' understanding of this principle in a variety of contexts, when they are undergoing an introductory course on general relativity. The…
The principle of general covariance and the principle of equivalence: two distinct concepts
International Nuclear Information System (INIS)
Fagundes, H.V.
It is shown how to construct a theory with general covariance but without the equivalence principle. Such a theory is in disagreement with experiment, but it serves to illustrate the independence of the former principle from the latter one [pt
Principle of natural and artificial radioactive series equivalency
International Nuclear Information System (INIS)
Vasilyeva, A.N.; Starkov, O.V.
2001-01-01
In the present paper one approach used in the development of a radioactive waste management conception is considered. This approach is based on the principle of radiotoxic equivalency of natural and artificial radioactive series. The radioactivity of natural and artificial radioactive series has been calculated for a 10⁹-year period. The toxicity evaluation for natural and artificial series has also been made. The correlation between natural radioactive series and their predecessors - actinides produced in thermal and fast reactors - has been considered. It is shown that systematized reactor series data have great scientific significance and that the principle of differential calculation of radiotoxicity is necessary to realize the conception of radiotoxic equivalency between long-lived radioactive waste and uranium and thorium ores. The calculations show that the equivalency principle can be fulfilled for the uranium series (4n+2, 4n+1). It is a problem for the thorium series. This principle is impracticable for the neptunium series. (author)
Is a weak violation of the Pauli principle possible?
International Nuclear Information System (INIS)
Ignat'ev, A.Y.; Kuz'min, V.A.
1987-01-01
We examine models in which there is a weak violation of the Pauli principle. A simple algebra of creation and annihilation operators is constructed which contains a parameter β and describes a weak violation of the Pauli principle (when β = 0 the Pauli principle is satisfied exactly). The commutation relations in this algebra turn out to be trilinear. A model based on this algebra is described. It allows transitions in which the Pauli principle is violated, but the probability of these transitions is suppressed by the quantity β² (even though the interaction Hamiltonian does not contain small parameters)
Is the Strong Anthropic Principle too weak?
International Nuclear Information System (INIS)
Feoli, A.; Rampone, S.
1999-01-01
The authors discuss Carter's formula for the probability of the evolution of mankind, following the derivation proposed by Barrow and Tipler. They stress the relation between the existence of billions of galaxies and the evolution, anywhere in the Universe, of at least one intelligent life form whose lifetime is nontrivial. They show that the existence probability and the lifetime of a civilization depend not only on the evolutionary critical steps, but also on the number of places where life can arise. In the light of these results, a stronger version of the Anthropic Principle is proposed.
Equivalence of Dirac quantization and Schwinger's action principle quantization
International Nuclear Information System (INIS)
Das, A.; Scherer, W.
1987-01-01
We show that the method of Dirac quantization is equivalent to Schwinger's action principle quantization. The relation between the Lagrange undetermined multipliers in Schwinger's method and Dirac's constraint bracket matrix is established and it is explicitly shown that the two methods yield identical (anti)commutators. This is demonstrated in the non-trivial example of supersymmetric quantum mechanics in superspace. (orig.)
A Weak Comparison Principle for Reaction-Diffusion Systems
Directory of Open Access Journals (Sweden)
José Valero
2012-01-01
We prove a weak comparison principle for a reaction-diffusion system without uniqueness of solutions. We apply the abstract results to the Lotka-Volterra system with diffusion, a generalized logistic equation, and to a model of fractional-order chemical autocatalysis with decay. Moreover, in the case of the Lotka-Volterra system a weak maximum principle is given, and a suitable estimate in the space of essentially bounded functions L∞ is proved for at least one solution of the problem.
Uniformly accelerating charged particles. A threat to the equivalence principle
International Nuclear Information System (INIS)
Lyle, Stephen N.
2008-01-01
There has been a long debate about whether uniformly accelerated charges should radiate electromagnetic energy and how one should describe their worldline through a flat spacetime, i.e., whether the Lorentz-Dirac equation is right. There are related questions in curved spacetimes, e.g., do different varieties of equivalence principle apply to charged particles, and can a static charge in a static spacetime radiate electromagnetic energy? The problems with the LD equation in flat spacetime are spelt out in some detail here, and its extension to curved spacetime is discussed. Different equivalence principles are compared and some vindicated. The key papers are discussed in detail and many of their conclusions are significantly revised by the present solution. (orig.)
Tests of the equivalence principle with neutral kaons
Apostolakis, Alcibiades J; Backenstoss, Gerhard; Bargassa, P; Behnke, O; Benelli, A; Bertin, V; Blanc, F; Bloch, P; Carlson, P J; Carroll, M; Cawley, E; Chardin, G; Chertok, M B; Danielsson, M; Dejardin, M; Derré, J; Ealet, A; Eleftheriadis, C; Faravel, L; Fetscher, W; Fidecaro, Maria; Filipcic, A; Francis, D; Fry, J; Gabathuler, Erwin; Gamet, R; Gerber, H J; Go, A; Haselden, A; Hayman, P J; Henry-Coüannier, F; Hollander, R W; Jon-And, K; Kettle, P R; Kokkas, P; Kreuger, R; Le Gac, R; Leimgruber, F; Mandic, I; Manthos, N; Marel, Gérard; Mikuz, M; Miller, J; Montanet, François; Müller, A; Nakada, Tatsuya; Pagels, B; Papadopoulos, I M; Pavlopoulos, P; Polivka, G; Rickenbach, R; Roberts, B L; Ruf, T; Sakelliou, L; Schäfer, M; Schaller, L A; Schietinger, T; Schopper, A; Tauscher, Ludwig; Thibault, C; Touchard, F; Touramanis, C; van Eijk, C W E; Vlachos, S; Weber, P; Wigger, O; Wolter, M; Zavrtanik, D; Zimmerman, D; Ellis, Jonathan Richard; Mavromatos, Nikolaos E; Nanopoulos, Dimitri V
1999-01-01
We test the Principle of Equivalence for particles and antiparticles, using CPLEAR data on tagged K⁰ and K̄⁰ decays into π⁺π⁻. For the first time, we search for possible annual, monthly and diurnal modulations of the observables |η₊₋| and φ₊₋ that could be correlated with variations in astrophysical potentials. Within the accuracy of CPLEAR, the measured values of |η₊₋| and φ₊₋ are found not to be correlated with changes of the gravitational potential. We analyze data assuming effective scalar, vector and tensor interactions, and we conclude that the Principle of Equivalence between particles and antiparticles holds to a level of 6.5, 4.3 and 1.8 × 10⁻⁹, respectively, for scalar, vector and tensor potentials originating from the Sun with a range much greater than the Earth-Sun distance. We also study energy-dependent effects that might arise from vector or tensor interactions. Finally, we compile upper limits on the gravitational coupling difference betwee...
Einstein's Equivalence Principle and Invalidity of Thorne's Theory for LIGO
Directory of Open Access Journals (Sweden)
Lo C. Y.
2006-04-01
The theoretical foundation of LIGO's design is based on the equation of motion derived by Thorne. His formula, motivated by Einstein's theory of measurement, shows that the gravitational wave-induced displacement of a mass with respect to an object is proportional to the distance from the object. On the other hand, based on the observed bending of light and Einstein's equivalence principle, it is concluded that such an induced displacement has nothing to do with the distance from another object. It is shown that the derivation of Thorne's formula rests on invalid assumptions that make it inapplicable to LIGO. This is a good counterexample for those who claim that Einstein's equivalence principle is unimportant or even irrelevant.
Relativity and equivalence principles in the gauge theory of gravitation
International Nuclear Information System (INIS)
Ivanenko, D.; Sardanashvili, G.
1981-01-01
The roles of the relativity principle (RP) and the equivalence principle (EP) in the gauge theory of gravitation are examined. In a gravitational theory formulated in the formalism of laminations, RP can be stated as the requirement that the equations be covariant under the GL+(4, R)(X) gauge group. In that case RP is identical to the gauge principle of a gauge theory of external symmetries, and the gravitational theory can be constructed directly as a gauge theory. In general relativity, EP supplements RP and serves to describe the transition to special relativity in some reference frame. The approach described takes into account that in a gauge theory, besides the gauge fields, Goldstone and Higgs fields can also arise under spontaneous symmetry breaking; the gravitational metric field is related to these fields, which is a consequence of taking RP into account in the gauge theory of gravitation [ru
Density matrix in quantum electrodynamics, equivalence principle and Hawking effect
International Nuclear Information System (INIS)
Frolov, V.P.; Gitman, D.M.
1978-01-01
The expression for the density matrix describing particles of one sort (electrons or positrons) created from the vacuum by an external electromagnetic field is obtained. The explicit form of the density matrix is found for the case of a constant and uniform electric field. Arguments are given for a connection between the thermal nature of the density matrix describing particles created by the gravitational field of a black hole and the equivalence principle. (author)
Acceleration Measurements Using Smartphone Sensors: Dealing with the Equivalence Principle
Monteiro, Martín; Cabeza, Cecilia; Martí, Arturo C.
2014-01-01
Acceleration sensors built into smartphones, iPads or tablets can conveniently be used in the physics laboratory. By virtue of the equivalence principle, a sensor fixed in a non-inertial reference frame cannot discern between a gravitational field and an accelerated system. Accordingly, acceleration values read by these sensors must be corrected for the gravitational component. A physical pendulum was studied by way of example, and absolute acceleration and rotation angle values were derived...
Phenomenology of the Equivalence Principle with Light Scalars
Damour, Thibault; Donoghue, John F.
2010-01-01
Light scalar particles with couplings of sub-gravitational strength, which can generically be called 'dilatons', can produce violations of the equivalence principle. However, in order to understand experimental sensitivities one must know the coupling of these scalars to atomic systems. We report here on a study of the required couplings. We give a general Lagrangian with five independent dilaton parameters and calculate the "dilaton charge" of atomic systems for each of these. Two combinatio...
The Equivalence Principle and Anomalous Magnetic Moment Experiments
Alvarez, C.; Mann, R. B.
1995-01-01
We investigate the possibility of testing the Einstein Equivalence Principle (EEP) using measurements of anomalous magnetic moments of elementary particles. We compute the one-loop correction to the $g-2$ anomaly within the class of nonmetric theories of gravity described by the $TH\epsilon\mu$ formalism. We find several novel mechanisms for breaking the EEP whose origin is due purely to radiative corrections. We discuss the possibilities of setting new empirical constraints on these effects.
Equivalence principle and quantum mechanics: quantum simulation with entangled photons.
Longhi, S
2018-01-15
Einstein's equivalence principle (EP) states the complete physical equivalence of a gravitational field and corresponding inertial field in an accelerated reference frame. However, to what extent the EP remains valid in non-relativistic quantum mechanics is a controversial issue. To avoid violation of the EP, Bargmann's superselection rule forbids a coherent superposition of states with different masses. Here we suggest a quantum simulation of non-relativistic Schrödinger particle dynamics in non-inertial reference frames, which is based on the propagation of polarization-entangled photon pairs in curved and birefringent optical waveguides and Hong-Ou-Mandel quantum interference measurement. The photonic simulator can emulate superposition of mass states, which would lead to violation of the EP.
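The Hong-Ou-Mandel interference measurement mentioned above can be illustrated with a minimal textbook sketch (my own illustration under an assumed Gaussian-spectrum model, not the paper's waveguide simulator):

```python
import numpy as np

# Illustrative sketch: Hong-Ou-Mandel two-photon interference at a 50/50
# beam splitter. For photons with identical Gaussian spectral amplitudes of
# rms width sigma, the coincidence probability P_c = (1 - |overlap|^2)/2
# dips to zero at zero relative delay. Assumed textbook model only.
def hom_coincidence(delay: float, sigma: float) -> float:
    """Coincidence probability versus relative delay between the photons."""
    return 0.5 * (1.0 - np.exp(-(sigma * delay) ** 2))

for tau in (-2.0, -1.0, 0.0, 1.0, 2.0):
    print(f"tau = {tau:+.1f}  P_c = {hom_coincidence(tau, 1.0):.3f}")
```

The depth of the dip measures the indistinguishability of the two photons, which is what makes it a sensitive probe of the simulated mass-state superpositions.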
Supersymmetric QED at finite temperature and the principle of equivalence
International Nuclear Information System (INIS)
Robinett, R.W.
1985-01-01
Unbroken supersymmetric QED is examined at finite temperature, and it is shown that the scalar and spinor members of a chiral superfield acquire different temperature-dependent inertial masses. By considering the renormalization of the energy-momentum tensor it is also shown that the T-dependent scalar and spinor gravitational masses are no longer degenerate and, moreover, differ from their T-dependent inertial mass shifts, implying a violation of the equivalence principle. The temperature-dependent corrections to the spinor $g-2$ are also calculated and found not to vanish.
MICROSCOPE Mission: First Results of a Space Test of the Equivalence Principle.
Touboul, Pierre; Métris, Gilles; Rodrigues, Manuel; André, Yves; Baghi, Quentin; Bergé, Joël; Boulanger, Damien; Bremer, Stefanie; Carle, Patrice; Chhun, Ratana; Christophe, Bruno; Cipolla, Valerio; Damour, Thibault; Danto, Pascale; Dittus, Hansjoerg; Fayet, Pierre; Foulon, Bernard; Gageant, Claude; Guidotti, Pierre-Yves; Hagedorn, Daniel; Hardy, Emilie; Huynh, Phuong-Anh; Inchauspe, Henri; Kayser, Patrick; Lala, Stéphanie; Lämmerzahl, Claus; Lebat, Vincent; Leseur, Pierre; Liorzou, Françoise; List, Meike; Löffler, Frank; Panet, Isabelle; Pouilloux, Benjamin; Prieur, Pascal; Rebray, Alexandre; Reynaud, Serge; Rievers, Benny; Robert, Alain; Selig, Hanns; Serron, Laura; Sumner, Timothy; Tanguy, Nicolas; Visser, Pieter
2017-12-08
According to the weak equivalence principle, all bodies should fall at the same rate in a gravitational field. The MICROSCOPE satellite, launched in April 2016, aims to test its validity at the 10^{-15} precision level, by measuring the force required to maintain two test masses (of titanium and platinum alloys) exactly in the same orbit. A nonvanishing result would correspond to a violation of the equivalence principle, or to the discovery of a new long-range force. Analysis of the first data gives δ(Ti,Pt)=[-1±9(stat)±9(syst)]×10^{-15} (1σ statistical uncertainty) for the titanium-platinum Eötvös parameter characterizing the relative difference in their free-fall accelerations.
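For reference, the Eötvös parameter quoted above is simply the relative difference of the two free-fall accelerations. A minimal sketch (illustrative numbers of my own, not mission data):

```python
# Minimal sketch of the Eotvos parameter: the relative difference of the
# free-fall accelerations of two test masses. The numbers below are
# illustrative assumptions, not MICROSCOPE measurements.
def eotvos_parameter(a1: float, a2: float) -> float:
    """eta = 2*(a1 - a2)/(a1 + a2)."""
    return 2.0 * (a1 - a2) / (a1 + a2)

g_orbit = 7.9                             # m/s^2, rough value at ~700 km altitude
a_titanium = g_orbit
a_platinum = g_orbit * (1.0 - 1.0e-15)    # a hypothetical violation at the 1e-15 level
eta = eotvos_parameter(a_titanium, a_platinum)
print(f"eta ~ {eta:.1e}")
```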
Test of the Equivalence Principle in an Einstein Elevator
Shapiro, Irwin I.; Glashow, S.; Lorenzini, E. C.; Cosmo, M. L.; Cheimets, P. N.; Finkelstein, N.; Schneps, M.
2005-01-01
This Annual Report illustrates the work carried out during the last grant-year activity on the Test of the Equivalence Principle in an Einstein Elevator. The activity focused on the following main topics: (1) analysis and conceptual design of a detector configuration suitable for the flight tests; (2) development of techniques for extracting a small signal from data strings with colored and white noise; (3) design of the mechanism that spins and releases the instrument package inside the cryostat; and (4) experimental activity carried out by our non-US partners (a summary is shown in this report). The analysis and conceptual design of the flight-detector (point 1) was focused on studying the response of the differential accelerometer during free fall, in the presence of errors and precession dynamics, for various detector's configurations. The goal was to devise a detector configuration in which an Equivalence Principle violation (EPV) signal at the sensitivity threshold level can be successfully measured and resolved out of a much stronger dynamics-related noise and gravity gradient. A detailed analysis and comprehensive simulation effort led us to a detector's design that can accomplish that goal successfully.
High energy cosmic neutrinos and the equivalence principle
International Nuclear Information System (INIS)
Minakata, H.
1996-01-01
Observation of ultra-high-energy neutrinos, in particular detection of $\nu_\tau$, from cosmologically distant sources like active galactic nuclei (AGN) opens new possibilities to search for neutrino flavor conversion. We consider the effects of violation of the equivalence principle (VEP) on the propagation of these cosmic neutrinos. In particular, we discuss two effects: (1) the oscillations of neutrinos due to VEP in the gravitational field of our Galaxy and in intergalactic space; (2) resonance flavor conversion driven by the gravitational potential of the AGN. We show that the ultra-high energies of the neutrinos, as well as the cosmological distances to AGN or the strong AGN gravitational potential, allow one to improve the accuracy of tests of the equivalence principle by 25 orders of magnitude for massless neutrinos ($\Delta f \sim 10^{-41}$) and by 11 orders of magnitude for massive neutrinos ($\Delta f \sim 10^{-28} \times (\Delta m^2/1\,\mathrm{eV}^2)$). The experimental signatures of the transitions induced by VEP are discussed. (author). 17 refs
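The energy scaling behind these bounds (a VEP-driven oscillation phase grows with energy, while the mass-driven phase falls as 1/E) can be made concrete with an order-of-magnitude sketch; every number below is an illustrative assumption, not a value from the paper:

```python
import math

# Schematic estimate of why ultra-high-energy neutrinos over cosmological
# baselines probe tiny equivalence-principle violations Delta_f: the
# gravitationally induced phase scales as phi_grav * Delta_f * E * L / (hbar*c),
# i.e. it GROWS with energy, unlike the mass-driven phase Delta m^2 * L / 4E.
HBAR_C = 3.16e-26   # J*m
EV = 1.602e-19      # J per eV
MPC = 3.086e22      # m per megaparsec

def vep_phase(phi_grav, delta_f, energy_eV, baseline_m):
    """Dimensionless VEP oscillation phase (order-of-magnitude sketch)."""
    return phi_grav * delta_f * (energy_eV * EV) * baseline_m / HBAR_C

phase = vep_phase(phi_grav=3e-5,        # assumed dimensionless potential
                  delta_f=1e-41,        # EP-violation parameter being probed
                  energy_eV=1e15,       # a 1 PeV neutrino
                  baseline_m=100 * MPC) # assumed distance to an AGN
print(f"VEP phase ~ {phase:.1f}")       # of order unity -> sensitivity near 1e-41
```

A phase of order unity for Δf ≈ 10⁻⁴¹ at PeV energies over ~100 Mpc is consistent with the sensitivity scale the abstract quotes.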
A homogeneous static gravitational field and the principle of equivalence
International Nuclear Information System (INIS)
Chernikov, N.A.
2001-01-01
In this paper any gravitational field (both in the Einsteinian case and in the Newtonian case) is described by a connection, called gravitational. A homogeneous static gravitational field is considered in the four-dimensional region z>0 of a space-time with Cartesian coordinates x, y, z, and t. Such a field can be created by masses disposed outside the region z>0 with a density distribution independent of x, y, and t. Remarkably, in the four-dimensional region z>0, the primitive gravitational connection has been derived together with the primitive background connection. In accordance with the Principle of Equivalence, all components of this gravitational connection vanish in the uniformly accelerated frame, in which the gravitational force of attraction is balanced by the inertial force. The components of the background connection, however, vanish in the resting frame but not in the accelerated frame
The short-circuit concept used in field equivalence principles
DEFF Research Database (Denmark)
Appel-Hansen, Jørgen
1990-01-01
In field equivalence principles, electric and magnetic surface currents are specified and considered as impressed currents. Often the currents are placed on perfect conductors. It is shown that these currents can be treated through two approaches. The first approach is decomposition of the total...... field into partial fields caused by the individual impressed currents. When this approach is used, it is shown that, on a perfect electric (magnetic) conductor, impressed electric (magnetic) surface currents are short-circuited. The second approach is to note that, since Maxwell's equations...... and the boundary conditions are satisfied, none of the impressed currents is short-circuited and no currents are induced on the perfect conductors. Since all currents and field quantities are considered at the same time, this approach is referred to as the total-field approach. The partial-field approach leads...
Test of the Equivalence Principle in an Einstein Elevator
Shapiro, Irwin I.; Glashow, S.; Lorenzini, E. C.; Cosmo, M. L.; Cheimets, P. N.; Finkelstein, N.; Schneps, M.
2004-01-01
The scientific goal of the experiment is to test the equality of gravitational and inertial mass (i.e., to test the Principle of Equivalence) by measuring the independence of the rate of fall of bodies from their compositions. The measurement is accomplished by measuring the relative displacement (or, equivalently, acceleration) of two falling bodies of different materials which are the proof masses of a differential accelerometer spinning about a horizontal axis to modulate a possible violation signal. A non-zero differential acceleration appearing at the signal frequency will indicate a violation of the Equivalence Principle. The goal of the experiment is to measure the Eotvos ratio δg/g (differential acceleration/common acceleration) with a targeted accuracy about two orders of magnitude better than the state of the art (presently at several parts in 10^{13}). The analyses carried out during this first grant year have focused on: (1) evaluation of possible shapes for the proof masses to meet the requirements on the higher-order mass-moment disturbances generated by the falling capsule; (2) dynamics of the instrument package and differential acceleration measurement in the presence of errors and imperfections; (3) computation of the inertia characteristics of the instrument package that enable a separation of the signal from the dynamics-related noise; (4) a revised thermal analysis of the instrument package in light of the new conceptual design of the cryostat; (5) the development of a dynamics and control model of the capsule attached to the gondola and balloon to define the requirements for the leveling mechanism; (6) a conceptual design of the leveling mechanism that keeps the capsule aligned before release from the balloon; and (7) a new conceptual design of the customized cryostat and a preliminary evaluation of its cost. The project also involves an international cooperation with the Institute of Space Physics (IFSI) in Rome, Italy. The group at IFSI
Gravitational Lagrangians, Mach's Principle, and the Equivalence Principle in an Expanding Universe
Essén, Hanno
2014-08-01
Gravitational Lagrangians as derived by Fock for the Einstein-Infeld-Hoffmann approach, and by Kennedy assuming only a fourth rank tensor interaction, contain long range interactions. Here we investigate how these affect the local dynamics when integrated over an expanding universe out to the Hubble radius. Taking the cosmic expansion velocity into account in a heuristic manner it is found that these long range interactions imply Mach's principle, provided the universe has the critical density, and that mass is renormalized. Suitable higher order additions to the Lagrangians make the formalism consistent with the equivalence principle.
Testing Einstein's Equivalence Principle With Fast Radio Bursts
Wei, Jun-Jie; Gao, He; Wu, Xue-Feng; Mészáros, Peter
2015-12-01
The accuracy of Einstein's equivalence principle (EEP) can be tested with the observed time delays between correlated particles or photons that are emitted from astronomical sources. Assuming as a lower limit that the time delays are caused mainly by the gravitational potential of the Milky Way, we prove that fast radio bursts (FRBs) of cosmological origin can be used to constrain the EEP with high accuracy. Taking FRB 110220 and two possible FRB/gamma-ray burst (GRB) association systems (FRB/GRB 101011A and FRB/GRB 100704A) as examples, we obtain a strict upper limit on the differences of the parametrized post-Newtonian parameter γ values as low as [γ(1.23 GHz) − γ(1.45 GHz)] < 4.36×10^{-9}. This provides the most stringent limit to date on the EEP through the relative differential variations of the γ parameter at radio energies, improving by 1 to 2 orders of magnitude the previous results at other energies based on supernova 1987A and GRBs.
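A back-of-the-envelope sketch of the method, under a point-mass Shapiro-delay approximation for the Milky Way; the input values are assumed for illustration and are not the paper's exact choices:

```python
import math

# Bound Delta-gamma by attributing an observed frequency-dependent arrival-time
# residual entirely to the Shapiro delay in the Galaxy's potential:
#   Delta_t >= Delta_gamma * (G*M/c^3) * ln(d/b)   =>   solve for Delta_gamma.
# Point-mass approximation; all inputs below are illustrative assumptions.
GM_SUN_OVER_C3 = 4.925e-6  # seconds, G*M_sun/c^3

def delta_gamma_bound(delta_t_s, mass_msun, dist_over_impact):
    """Upper limit on the PPN gamma difference from a time residual delta_t_s."""
    shapiro_scale = GM_SUN_OVER_C3 * mass_msun * math.log(dist_over_impact)
    return delta_t_s / shapiro_scale

# e.g. a 100 ms residual, an assumed 6e11 solar-mass Galaxy, a source at
# ~1 Gpc with a ~10 kpc impact parameter (d/b ~ 1e5):
bound = delta_gamma_bound(0.1, 6e11, 1e5)
print(f"Delta gamma < {bound:.2e}")
```

With these assumed inputs the bound comes out at the 10⁻⁹ scale, the same order as the limit quoted in the abstract.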
Induction, bounding, weak combinatorial principles, and the homogeneous model theorem
Hirschfeldt, Denis R; Shore, Richard A
2017-01-01
Goncharov and Peretyat'kin independently gave necessary and sufficient conditions for when a set of types of a complete theory T is the type spectrum of some homogeneous model of T. Their result can be stated as a principle of second order arithmetic, which is called the Homogeneous Model Theorem (HMT), and analyzed from the points of view of computability theory and reverse mathematics. Previous computability theoretic results by Lange suggested a close connection between HMT and the Atomic Model Theorem (AMT), which states that every complete atomic theory has an atomic model. The authors show that HMT and AMT are indeed equivalent in the sense of reverse mathematics, as well as in a strong computability theoretic sense and do the same for an analogous result of Peretyat'kin giving necessary and sufficient conditions for when a set of types is the type spectrum of some model.
Solar system and equivalence principle constraints on f(R) gravity by the chameleon approach
International Nuclear Information System (INIS)
Capozziello, Salvatore; Tsujikawa, Shinji
2008-01-01
We study constraints on f(R) dark energy models from solar system experiments combined with experiments on the violation of the equivalence principle. When the mass of an equivalent scalar field degree of freedom is heavy in a region of high density, a spherically symmetric body acquires a thin shell so that the effective coupling of the fifth force is suppressed through a chameleon mechanism. We place experimental bounds on the cosmologically viable models recently proposed in the literature that have the asymptotic form $f(R)=R-\lambda R_c[1-(R_c/R)^{2n}]$ in the regime $R \gg R_c$. From the solar system constraints on the post-Newtonian parameter γ, we derive the bound n > 0.5, whereas the constraints from violations of the weak and strong equivalence principles give the bound n > 0.9. This allows a possibility to find a deviation from the Λ-cold dark matter (ΛCDM) cosmological model. For the model $f(R)=R-\lambda R_c(R/R_c)^p$ with $0 < p < 1$, the severest constraint is found to be $p < 10^{-10}$, which shows that this model is hardly distinguishable from the ΛCDM cosmology
The Bohr--Einstein ''weighing-of-energy'' debate and the principle of equivalence
International Nuclear Information System (INIS)
Hughes, R.J.
1990-01-01
The Bohr--Einstein debate over the ''weighing of energy'' and the validity of the time--energy uncertainty relation is reexamined in the context of gravitation theories that do not respect the equivalence principle. Bohr's use of the equivalence principle is shown to be sufficient, but not necessary, to establish the validity of this uncertainty relation in Einstein's ''weighing-of-energy'' gedanken experiment. The uncertainty relation is shown to hold in any energy-conserving theory of gravity, and so a failure of the equivalence principle does not engender a failure of quantum mechanics. The relationship between the gravitational redshift and the equivalence principle is reviewed
A Technique of Teaching the Principle of Equivalence at Ground Level
Lubrica, Joel V.
2016-01-01
This paper presents one way of demonstrating the Principle of Equivalence in the classroom. Teaching the Principle of Equivalence involves someone experiencing acceleration through empty space, juxtaposed with the daily encounter with gravity. This classroom activity is demonstrated with a water-filled bottle containing glass marbles and…
Conditions needed to give meaning to rad-equivalence principle
International Nuclear Information System (INIS)
Latarjet, R.
1980-01-01
To legislate on mutagenic chemical pollution, the problem to be faced is similar to that tackled about 30 years ago regarding pollution by ionizing radiations. It would be useful to benefit from the work of those 30 years by establishing equivalences, if possible, between chemical mutagens and radiations. Unavoidable mutagenic pollution is considered here, especially that associated with fuel-based energy production. As with radiations, the legislation must derive from a compromise between the harmful and beneficial effects of the polluting system. When deciding on tolerance doses it is necessary to safeguard the biosphere without inflicting excessive restrictions on industry and on the economy. The present article discusses the conditions needed to give meaning to the notion of rad-equivalence. Some examples of already established equivalences are given, together with the first practical consequences which emerge [fr
Development of dose equivalent meters based on microdosimetric principles
International Nuclear Information System (INIS)
Booz, J.
1984-01-01
In this paper, the employment of microdosimetric dose-equivalent meters in radiation protection is described, considering the advantages of introducing microdosimetric methods into radiation protection, the technical suitability of such instruments for measuring dose equivalent, and finally technical requirements, constraints and solutions, together with some examples of instruments and experimental results. The advantage of microdosimetric methods in radiation protection is illustrated with the evaluation of dose-mean quality factors in radiation fields of unknown composition and with the methods of evaluating neutron and gamma dose fractions. It is shown that there is good correlation between the dose-mean lineal energy, $\bar{y}_D$, and the ICRP quality factor. Neutron and gamma dose fractions of unknown radiation fields can be evaluated with microdosimetric proportional counters without recourse to other instruments and methods; the problems of separation are discussed. The technical suitability of microdosimetric instruments for measuring dose equivalent is discussed considering the energy response to neutrons and photons and the sensitivity in terms of dose-equivalent rate. Then, considering technical requirements, constraints, and solutions, the problems of the large dynamic range in LET, the large dynamic range in pulse rate, the geometry of the sensitive volume and electrodes, the evaluation of dose-mean quality factors, calibration methods, and uncertainties are discussed. (orig.)
Is weak violation of the Pauli principle possible?
International Nuclear Information System (INIS)
Ignat'ev, A.Yu.; Kuz'min, V.A.
1987-01-01
The question considered in this work is whether there are models which can account for a small violation of the Pauli principle. A simple algebra is constructed for the creation-annihilation operators, which contains a parameter β and describes a small violation of the Pauli principle (the Pauli principle holds exactly for β=0). The commutation relations in this algebra are trilinear. A model is presented, based upon this commutator algebra, which allows transitions violating the Pauli principle, their probability being suppressed by a factor of $\beta^2$ (even though the Hamiltonian does not contain small parameters)
Equivalence principle for quantum systems: dephasing and phase shift of free-falling particles
Anastopoulos, C.; Hu, B. L.
2018-02-01
We ask the question of how the (weak) equivalence principle established in classical gravitational physics should be reformulated and interpreted for massive quantum objects that may also have internal degrees of freedom (dof). This inquiry is necessary because even elementary concepts like a classical trajectory are not well defined in quantum physics—trajectories originating from quantum histories become viable entities only under stringent decoherence conditions. From this investigation we posit two logically and operationally distinct statements of the equivalence principle for quantum systems. Version A: the probability distribution of position for a free-falling particle is the same as the probability distribution of a free particle, modulo a mass-independent shift of its mean. Version B: any two particles with the same velocity wave-function behave identically in free fall, irrespective of their masses. Both statements apply to all quantum states, including those without a classical correspondence, and also for composite particles with quantum internal dof. We also investigate the consequences of the interaction between internal and external dof induced by free fall. For a class of initial states, we find dephasing occurs for the translational dof, namely, the suppression of the off-diagonal terms of the density matrix, in the position basis. We also find a gravitational phase shift in the reduced density matrix of the internal dof that does not depend on the particle’s mass. For classical states, the phase shift has a natural classical interpretation in terms of gravitational red-shift and special relativistic time-dilation.
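Version A above can be checked numerically with a standard split-step Fourier evolution of a Gaussian wavepacket, once freely and once in a uniform field V = mgx. This is a sketch in illustrative units (ħ = m = g = 1), not the authors' formalism:

```python
import numpy as np

# Numerical check of "Version A": the position distribution of a particle
# falling in a uniform field equals the free distribution translated by the
# classical drop g*t^2/2, independent of mass. Units with hbar = 1;
# all parameter values are illustrative.
hbar, m, g = 1.0, 1.0, 1.0
N, L_box = 2048, 400.0
x = np.linspace(-L_box / 2, L_box / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L_box / N)
dt, steps = 0.01, 500          # total evolution time T = 5
T = dt * steps

def evolve(include_gravity: bool):
    """Strang-split evolution of a Gaussian packet; returns |psi|^2."""
    psi = (1 / np.pi) ** 0.25 * np.exp(-x**2 / 2)   # zero mean velocity
    v_half = np.exp(-1j * (m * g * x if include_gravity else 0.0) * dt / (2 * hbar))
    kin = np.exp(-1j * hbar * k**2 * dt / (2 * m))
    for _ in range(steps):
        psi = v_half * psi
        psi = np.fft.ifft(kin * np.fft.fft(psi))
        psi = v_half * psi
    return np.abs(psi) ** 2

p_free = evolve(False)
p_fall = evolve(True)
drop = 0.5 * g * T**2                                # classical free-fall shift
shift_bins = int(round(drop / (L_box / N)))
# falling distribution = free distribution translated toward negative x:
err = np.max(np.abs(np.roll(p_free, -shift_bins) - p_fall))
print(f"max |difference| after shift: {err:.2e}")
```

The residual is at numerical-noise level because for a linear potential the evolved density is exactly the free density displaced by the classical drop, the mass-independent shift that Version A asserts.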
The Satellite Test of the Equivalence Principle (STEP)
2004-01-01
STEP will carry concentric test masses to Earth orbit to test a fundamental assumption underlying Einstein's theory of general relativity: that gravitational mass is equivalent to inertial mass. STEP is a 21st-century version of the test that Galileo is said to have performed by dropping a cannon ball and a musket ball simultaneously from the top of the Leaning Tower of Pisa to compare their accelerations. During the STEP experiment, four pairs of test masses will be falling around the Earth, and their accelerations will be measured by superconducting quantum interference devices (SQUIDs). The extended time sensitivity of the instruments will allow the measurements to be a million times more accurate than those made in modern ground-based tests.
Test of the Equivalence Principle in the Dark sector on galactic scales
International Nuclear Information System (INIS)
Mohapi, N.; Hees, A.; Larena, J.
2016-01-01
The Einstein Equivalence Principle is a fundamental principle of the theory of General Relativity. While this principle has been thoroughly tested with standard matter, the question of its validity in the Dark sector remains open. In this paper, we consider a general tensor-scalar theory that allows one to test the equivalence principle in the Dark sector by introducing two different conformal couplings to standard matter and to Dark matter. We constrain these couplings by considering galactic observations of strong lensing and of velocity dispersion. Our analysis shows that, in the case of a violation of the Einstein Equivalence Principle, the data favour violations through coupling strengths of opposite signs for ordinary and Dark matter. At the same time, our analysis does not show any significant deviations from General Relativity
On a Weak Discrete Maximum Principle for hp-FEM
Czech Academy of Sciences Publication Activity Database
Šolín, Pavel; Vejchodský, Tomáš
-, č. 209 (2007), s. 54-65 ISSN 0377-0427 R&D Projects: GA ČR(CZ) GA102/05/0629 Institutional research plan: CEZ:AV0Z20570509; CEZ:AV0Z10190503 Keywords : discrete maximum principle * hp-FEM Subject RIV: JA - Electronics ; Optoelectronics, Electrical Engineering Impact factor: 0.943, year: 2007
On Independence of Variants of the Weak Pigeonhole Principle
Czech Academy of Sciences Publication Activity Database
Jeřábek, Emil
2007-01-01
Roč. 17, č. 3 (2007), s. 587-604 ISSN 0955-792X R&D Projects: GA AV ČR IAA1019401; GA MŠk(CZ) 1M0545 Institutional research plan: CEZ:AV0Z10190503 Keywords : bounded arithmetic * pigeonhole principle * KPT witnessing Subject RIV: BA - General Mathematics Impact factor: 0.821, year: 2007
International Nuclear Information System (INIS)
Klink, W.H.; Wickramasekara, S.
2014-01-01
In previous work we have developed a formulation of quantum mechanics in non-inertial reference frames. This formulation is grounded in a class of unitary cocycle representations of what we have called the Galilean line group, the generalization of the Galilei group that includes transformations amongst non-inertial reference frames. These representations show that in quantum mechanics, just as is the case in classical mechanics, the transformations to accelerating reference frames give rise to fictitious forces. A special feature of these previously constructed representations is that they all respect the non-relativistic equivalence principle, wherein the fictitious forces associated with linear acceleration can equivalently be described by gravitational forces. In this paper we exhibit a large class of cocycle representations of the Galilean line group that violate the equivalence principle. Nevertheless the classical mechanics analogue of these cocycle representations all respect the equivalence principle. -- Highlights: •A formulation of Galilean quantum mechanics in non-inertial reference frames is given. •The key concept is the Galilean line group, an infinite dimensional group. •A large class of general cocycle representations of the Galilean line group is constructed. •These representations show violations of the equivalence principle at the quantum level. •At the classical limit, no violations of the equivalence principle are detected
International Nuclear Information System (INIS)
Lambiase, G.
2001-01-01
Neutrino oscillations are analyzed in an accelerating and rotating reference frame, assuming that the gravitational coupling of neutrinos is flavor dependent, which implies a violation of the equivalence principle. Unlike the usual studies in which a constant gravitational field is considered, such frames could represent a more suitable framework for testing whether a breakdown of the equivalence principle occurs, owing to the possibility of modulating the (simulated) gravitational field. The violation of the equivalence principle implies, for the case of a maximal gravitational mixing angle, the presence of an off-diagonal term in the mass matrix. The consequences of such a term for the evolution of flavor (mass) eigenstates are analyzed for solar (vacuum oscillations) and atmospheric neutrinos. We calculate the flavor oscillation probability in the non-inertial frame, which depends on its angular velocity and linear acceleration, as well as on the energy of the neutrinos, the mass-squared difference between two mass eigenstates, and the measure of the degree of violation of the equivalence principle (Δγ). In particular, we find that the energy dependence disappears for vanishing mass-squared difference, unlike the result obtained by Gasperini, Halprin, Leung, and other physical mechanisms proposed as a viable explanation of neutrino oscillations. Estimates of the upper values of Δγ are inferred for a rotating observer (with vanishing linear acceleration) comoving with the Earth, hence $\omega \approx 7\times 10^{-5}$ rad/sec, with all other alternative mechanisms generating the oscillation phenomena neglected. In this case we find that the constraints on Δγ are given by $\Delta\gamma \leq 10^{2}$ for solar neutrinos and $\Delta\gamma \leq 10^{6}$ for atmospheric neutrinos. (orig.)
The c equivalence principle and the correct form of writing Maxwell's equations
International Nuclear Information System (INIS)
Heras, Jose A
2010-01-01
It is well known that the speed c_u = 1/√(ε₀μ₀) is obtained in the process of defining SI units via action-at-a-distance forces, like the force between two static charges and the force between two long and parallel currents. The speed c_u is then physically different from the observed speed of propagation c associated with electromagnetic waves in vacuum. However, repeated experiments have led to the numerical equality c_u = c, which we have called the c equivalence principle. In this paper we point out that ∇×E = −[1/(ε₀μ₀c²)]∂B/∂t is the correct form of writing Faraday's law when the c equivalence principle is not assumed. We also discuss the covariant form of Maxwell's equations without assuming the c equivalence principle.
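The numerical content of the c equivalence principle can be checked directly from tabulated constants. A minimal sketch, using CODATA values (in the revised SI, c is exact while ε₀ and μ₀ carry experimental uncertainty):

```python
import math

# CODATA 2018 values; c is defined exactly, eps0 and mu0 are measured.
eps0 = 8.8541878128e-12  # vacuum permittivity, F/m
mu0 = 1.25663706212e-6   # vacuum permeability, N/A^2
c = 299792458.0          # defined speed of light, m/s

# "Unit" speed arising from the static force laws used to define SI units.
c_u = 1.0 / math.sqrt(eps0 * mu0)

# The c equivalence principle is the empirical statement c_u == c.
rel_diff = abs(c_u - c) / c
print(f"c_u = {c_u:.6e} m/s, relative difference from c: {rel_diff:.1e}")
```

The two speeds agree to within the uncertainty of ε₀ and μ₀, which is the experimental fact the paper's principle encodes.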
Consistency of the Mach principle and the gravitational-to-inertial mass equivalence principle
International Nuclear Information System (INIS)
Granada, Kh.K.; Chubykalo, A.E.
1990-01-01
Kinematics of a system composed of two bodies, interacting with each other according to the inverse-square law, was investigated. It is shown that the Mach principle, earlier rejected by general relativity theory, can be used as an alternative to the absolute space concept, if it is proposed that the distant star background dictates both the inertial and gravitational mass of a body.
Gebauer, Petr; Malá, Zdena; Bocek, Petr
2010-03-01
This contribution introduces a new separation principle in CE which offers focusing of weak nonamphoteric ionogenic species and their inherent transport to the detector. The prerequisite condition for application of this principle is the existence of an inverse electromigration dispersion profile, i.e. a profile where pH is decreasing toward the anode or cathode for focusing of anionic or cationic weak analytes, respectively. The theory presented defines the principal conditions under which an analyte is focused on a profile of this type. Since electromigration dispersion profiles are migrating ones, the new principle offers inherent transport of focused analytes into the detection cell. The focusing principle described utilizes a mechanism different from both CZE (where separation is based on the difference in mobilities) and IEF (where separation is based on difference in pI), and hence, offers another separation dimension in CE. The new principle and its theory presented here are supplemented by convincing experiments as their proof.
Rodrigues, W. A.; Scanavini, M. E. F.; de Alcantara, L. P.
1990-02-01
In this paper a given spacetime theory T is characterized as the theory of a certain species of structure in the sense of Bourbaki [1]. It is then possible to clarify in a rigorous way the concepts of passive and active covariance of T under the action of the manifold mapping group G_M. For each T, we also define an invariance group G_I^T and, in general, G_I^T ≠ G_M. This group is defined once we realize that, for each τ ∈ Mod T, each explicit geometrical object defining the structure can be classified as absolute or dynamical [2]. All spacetime theories also possess implicit geometrical objects that do not appear explicitly in the structure. These implicit objects are neither absolute nor dynamical. Among them are the reference frame fields, i.e., “timelike” vector fields X ∈ TU, U ⊆ M, where M is a manifold which is part of ST, a substructure for each τ ∈ Mod T, called spacetime. We give a physically motivated definition of equivalent reference frames and introduce the concept of the equivalence group of a class of reference frames of kind X according to T, G_X^T. We define that T admits a weak principle of relativity (WPR) only if G_X^T ≠ identity for some X. If G_X^T = G_I^T for some X, we say that T admits a strong principle of relativity (PR). The results of this paper generalize and clarify several results obtained by Anderson [2], Scheibe [3], Hiskes [4], Recami and Rodrigues [5], Friedman [6], Fock [7], and Scanavini [8]. Among the novelties here is the realization that the definitions of G_I^T and G_X^T can be given only when certain boundary conditions for the equations of motion of T are physically realizable in the domain U ⊆ M where a given reference frame is defined. The existence of physically realizable boundary conditions for each τ ∈ Mod T (in ∂U), in contrast with the mathematically possible boundary conditions, is then seen to be essential for the validity of a principle of relativity for T.
Violation of the equivalence principle for stressed bodies in asynchronous relativity
Energy Technology Data Exchange (ETDEWEB)
Andrade Martins, R. de (Centro de Logica, Epistemologia e Historia da Ciencia, Campinas (Brazil))
1983-12-11
In the recently developed asynchronous formulation of the relativistic theory of extended bodies, the inertial mass of a body does not explicitly depend on its pressure or stress. The detailed analysis of the weight of a box filled with a gas and placed in a weak gravitational field shows that this feature of asynchronous relativity implies a breakdown of the equivalence between inertial and passive gravitational mass for stressed systems.
Energy Technology Data Exchange (ETDEWEB)
Varoquaux, G; Nyman, R A; Geiger, R; Cheinet, P; Bouyer, P [Laboratoire Charles Fabry de l' Institut d' Optique, Campus Polytechnique, RD 128, 91127 Palaiseau (France); Landragin, A [LNE-SYRTE, UMR8630, UPMC, Observatoire de Paris, 61 avenue de l' Observatoire, 75014 Paris (France)], E-mail: philippe.bouyer@institutoptique.fr
2009-11-15
We propose a scheme for testing the weak equivalence principle (universality of free fall (UFF)) using an atom-interferometric measurement of the local differential acceleration between two atomic species with a large mass ratio as test masses. An apparatus in free fall can be used to track atomic free-fall trajectories over large distances. We show how the differential acceleration can be extracted from the interferometric signal using Bayesian statistical estimation, even in the case of a large mass and laser wavelength difference. We show that this statistical estimation method does not suffer from acceleration noise of the platform and does not require repeatable experimental conditions. We specialize our discussion to a dual potassium/rubidium interferometer and extend our protocol to other atomic mixtures. Finally, we discuss the performance of the UFF test developed for the free-fall (zero-gravity) airplane in the ICE project (http://www.ice-space.fr).
International Nuclear Information System (INIS)
Varoquaux, G; Nyman, R A; Geiger, R; Cheinet, P; Bouyer, P; Landragin, A
2009-01-01
We propose a scheme for testing the weak equivalence principle (universality of free fall (UFF)) using an atom-interferometric measurement of the local differential acceleration between two atomic species with a large mass ratio as test masses. An apparatus in free fall can be used to track atomic free-fall trajectories over large distances. We show how the differential acceleration can be extracted from the interferometric signal using Bayesian statistical estimation, even in the case of a large mass and laser wavelength difference. We show that this statistical estimation method does not suffer from acceleration noise of the platform and does not require repeatable experimental conditions. We specialize our discussion to a dual potassium/rubidium interferometer and extend our protocol to other atomic mixtures. Finally, we discuss the performance of the UFF test developed for the free-fall (zero-gravity) airplane in the ICE project (http://www.ice-space.fr).
Equivalence principle, CP violations, and the Higgs-like boson mass
International Nuclear Information System (INIS)
Bellucci, S.; Faraoni, V.
1994-01-01
We consider the violation of the equivalence principle induced by a massive gravivector, i.e., the partner of the graviton in N>1 supergravity. The present limits on this violation allow us to obtain a lower bound on the vacuum expectation value of the scalar field that gives the gravivector its mass. We consider also the effective neutral kaon mass difference induced by the gravivector and compare the result with the experimental data on the CP-violation parameter ε
The c equivalence principle and the correct form of writing Maxwell's equations
Energy Technology Data Exchange (ETDEWEB)
Heras, Jose A, E-mail: herasgomez@gmail.co [Universidad Autonoma Metropolitana Unidad Azcapotzalco, Av. San Pablo No. 180, Col. Reynosa, 02200, Mexico DF (Mexico)
2010-09-15
It is well known that the speed c{sub u}=1/{radical}({epsilon}{sub 0{mu}0}) is obtained in the process of defining SI units via action-at-a-distance forces, like the force between two static charges and the force between two long and parallel currents. The speed c{sub u} is then physically different from the observed speed of propagation c associated with electromagnetic waves in vacuum. However, repeated experiments have led to the numerical equality c{sub u} = c, which we have called the c equivalence principle. In this paper we point out that {nabla}xE=-[1/({epsilon}{sub 0}{mu}{sub 0}c{sup 2})]{partial_derivative}B/{partial_derivative}t is the correct form of writing Faraday's law when the c equivalence principle is not assumed. We also discuss the covariant form of Maxwell's equations without assuming the c equivalence principle.
Mars Seasonal Polar Caps as a Test of the Equivalence Principle
Rubincam, David Parry
2011-01-01
The seasonal polar caps of Mars can be used to test the equivalence principle in general relativity. The north and south caps, which are composed of carbon dioxide, wax and wane with the seasons. If the ratio of the inertial to gravitational masses of the caps differs from the same ratio for the rest of Mars, then the equivalence principle fails, Newton's third law fails, and the caps will pull Mars one way and then the other with a force aligned with the planet's spin axis. This leads to a secular change in Mars's along-track position in its orbit about the Sun, and to a secular change in the orbit's semimajor axis. The caps are a poor Eötvös test of the equivalence principle, being 4 orders of magnitude weaker than laboratory tests and 7 orders of magnitude weaker than that found by lunar laser ranging; the reason is the small mass of the caps compared to Mars as a whole. The principal virtue of using Mars is that the caps contain carbon, an element not normally considered in such experiments. The Earth with its seasonal snow cover can also be used for a similar test.
Mars seasonal polar caps as a test of the equivalence principle
International Nuclear Information System (INIS)
Rubincam, David Parry
2011-01-01
The seasonal polar caps of Mars can be used to test the equivalence principle in general relativity. The north and south caps, which are composed of carbon dioxide, wax and wane with the seasons. If the ratio of the inertial (passive) to gravitational (active) masses of the caps differs from the same ratio for the rest of Mars, then the equivalence principle fails, Newton's third law fails, and the caps will pull Mars one way and then the other with a force aligned with the planet's spin axis. This leads to a secular change in Mars's along-track position in its orbit about the Sun, and to a secular change in the orbit's semimajor axis. The caps are a poor Eötvös test of the equivalence principle, being 4 orders of magnitude weaker than laboratory tests and 7 orders of magnitude weaker than that found by lunar laser ranging; the reason is the small mass of the caps compared to Mars as a whole. The principal virtue of using Mars is that the caps contain carbon, an element not normally considered in such experiments. The Earth with its seasonal snow cover can also be used for a similar test.
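A rough sense of why the caps make a weak Eötvös test comes from the cap-to-planet mass ratio, which dilutes any equivalence-principle signal. The cap mass below is an order-of-magnitude assumption for illustration, not a value from the abstract:

```python
# Illustrative order-of-magnitude estimate (assumed values): the
# equivalence-principle force on Mars scales with the mass of the
# seasonal caps relative to the planet as a whole.
m_caps = 4.0e15    # kg, seasonal CO2 cap mass (assumption, order of magnitude)
m_mars = 6.42e23   # kg, mass of Mars

ratio = m_caps / m_mars
print(f"cap-to-planet mass ratio ~ {ratio:.1e}")
```

A dilution factor of order 10⁻⁸–10⁻⁹ is consistent with the abstract's statement that the test is several orders of magnitude weaker than laboratory experiments.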
The equivalence of the Dekel-Fudenberg iterative procedure and weakly perfect rationalizability
HERINGS, P. J.-J.; VANNETELBOSCH, Vincent J.
1998-01-01
Two approaches have been proposed in the literature to refine the rationalizability solution concept: either assuming that a player believes that with small probability her opponents choose strategies that are irrational, or assuming that there is a small amount of payoff uncertainty. We show that both approaches lead to the same refinement if strategy perturbations are made according to the concept of weakly perfect rationalizability, and if there is payoff uncertainty as in Dekel and Fudenb...
Discrete maximum principle for the P1 - P0 weak Galerkin finite element approximations
Wang, Junping; Ye, Xiu; Zhai, Qilong; Zhang, Ran
2018-06-01
This paper presents two discrete maximum principles (DMP) for the numerical solution of second order elliptic equations arising from the weak Galerkin finite element method. The results are established by assuming an h-acute angle condition for the underlying finite element triangulations. The mathematical theory is based on the well-known De Giorgi technique adapted in the finite element context. Some numerical results are reported to validate the theory of DMP.
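As a toy illustration of what a discrete maximum principle asserts (in the simplest finite-difference setting, not the weak Galerkin P1-P0 scheme of the paper): for -u'' = f on (0,1) with f ≤ 0, the discrete solution should attain its maximum on the boundary.

```python
import numpy as np

def solve_poisson_1d(n, f, a, b):
    """Standard second-order FD discretization of -u'' = f on (0,1)
    with Dirichlet data u(0) = a, u(1) = b; returns u on all nodes."""
    h = 1.0 / (n + 1)
    A = (np.diag(2.0 * np.ones(n))
         - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h ** 2
    x = np.linspace(h, 1.0 - h, n)
    rhs = f(x)
    rhs[0] += a / h ** 2       # fold boundary values into the right-hand side
    rhs[-1] += b / h ** 2
    u = np.linalg.solve(A, rhs)
    return np.concatenate(([a], u, [b]))

# f = -1 <= 0, so the DMP predicts the maximum sits on the boundary.
u = solve_poisson_1d(50, lambda x: -np.ones_like(x), 1.0, 2.0)
assert u.max() <= max(1.0, 2.0) + 1e-10
```

The weak Galerkin results of the paper establish the same qualitative property for P1-P0 elements under an h-acute angle condition on the triangulation; this sketch only shows the property being checked.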
Five-dimensional projective unified theory and the principle of equivalence
International Nuclear Information System (INIS)
De Sabbata, V.; Gasperini, M.
1984-01-01
We investigate the physical consequences of a new five-dimensional projective theory unifying gravitation and electromagnetism. Solving the field equations in the linear approximation and in the static limit, we find that a celestial body would act as a source of a long-range scalar field, and that macroscopic test bodies with different internal structure would accelerate differently in the solar gravitational field; this seems to be in disagreement with the equivalence principle. To avoid this contradiction, we suggest a possible modification of the geometrical structure of the five-dimensional projective space
The kernel G1(x,x') and the quantum equivalence principle
International Nuclear Information System (INIS)
Ceccatto, H.; Foussats, A.; Giacomini, H.; Zandron, O.
1981-01-01
In this paper we re-examine the formulation of the quantum equivalence principle (QEP) and discuss its compatibility with the conditions which must be fulfilled by the kernel G₁(x,x'). We also determine the basis of solutions which gives the particle model in a curved space-time in terms of Cauchy data for such a kernel. Finally, we analyze the creation of particles in this model by studying the time evolution of creation and annihilation operators. This method is an alternative to one that uses Bogoliubov's transformation as the mechanism of creation. (author)
Null result for violation of the equivalence principle with free-fall rotating gyroscopes
International Nuclear Information System (INIS)
Luo, J.; Zhou, Z.B.; Nie, Y.X.; Zhang, Y.Z.
2002-01-01
The differential acceleration between a rotating mechanical gyroscope and a nonrotating one is directly measured by using a double free-fall interferometer, and no apparent differential acceleration has been observed at the relative level of 2×10⁻⁶. This means that the equivalence principle is still valid for rotating extended bodies, i.e., the spin-gravity interaction between the extended bodies has not been observed at this level. Also, to the limit of our experimental sensitivity, there is no observed asymmetrical effect or antigravity of the rotating gyroscopes as reported by Hayasaka et al.
Effective Inertial Frame in an Atom Interferometric Test of the Equivalence Principle
Overstreet, Chris; Asenbaum, Peter; Kovachy, Tim; Notermans, Remy; Hogan, Jason M.; Kasevich, Mark A.
2018-05-01
In an ideal test of the equivalence principle, the test masses fall in a common inertial frame. A real experiment is affected by gravity gradients, which introduce systematic errors by coupling to initial kinematic differences between the test masses. Here we demonstrate a method that reduces the sensitivity of a dual-species atom interferometer to initial kinematics by using a frequency shift of the mirror pulse to create an effective inertial frame for both atomic species. Using this method, we suppress the gravity-gradient-induced dependence of the differential phase on initial kinematic differences by 2 orders of magnitude and precisely measure these differences. We realize a relative precision of Δg/g ≈ 6×10⁻¹¹ per shot, which improves on the best previous result for a dual-species atom interferometer by more than 3 orders of magnitude. By reducing gravity gradient systematic errors to one part in 10¹³, these results pave the way for an atomic test of the equivalence principle at an accuracy comparable with state-of-the-art classical tests.
Testing the Equivalence Principle and Lorentz Invariance with PeV Neutrinos from Blazar Flares.
Wang, Zi-Yi; Liu, Ruo-Yu; Wang, Xiang-Yu
2016-04-15
It was recently proposed that a giant flare of the blazar PKS B1424-418 at redshift z=1.522 is associated with a PeV-energy neutrino event detected by IceCube. Based on this association we here suggest that the flight-time difference between the PeV neutrino and gamma-ray photons from blazar flares can be used to constrain violations of the equivalence principle and of Lorentz invariance for neutrinos. From the calculated Shapiro delay due to clusters or superclusters in the nearby universe, we find that violation of the equivalence principle for neutrinos and photons is constrained to an accuracy of at least 10⁻⁵, which is 2 orders of magnitude tighter than the constraint placed by MeV neutrinos from supernova 1987A. Lorentz invariance violation (LIV) arises in various quantum-gravity theories, which predict an energy-dependent velocity of propagation in vacuum for particles. We find that the association of the PeV neutrino with the gamma-ray outburst sets limits on the energy scale of possible LIV to >0.01E_pl for linear LIV models and >6×10⁻⁸E_pl for quadratic LIV models, where E_pl is the Planck energy scale. These are the most stringent constraints on neutrino LIV for subluminal neutrinos.
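The order of magnitude behind the LIV limit can be sketched with the usual delay formula Δt ≈ (D/c)(E/E_QG)ⁿ, neglecting cosmological-expansion factors. The distance below is an assumed round number for z ≈ 1.5, not the paper's computation:

```python
C_M_S = 299792458.0   # speed of light, m/s
E_PL_GEV = 1.22e19    # Planck energy, GeV
GPC_M = 3.086e25      # 1 Gpc in meters

def liv_delay_s(E_GeV, D_m, E_QG_GeV, n=1):
    """Flight-time delay of a subluminal particle relative to light,
    for linear (n=1) or quadratic (n=2) LIV, ignoring redshift factors."""
    return (D_m / C_M_S) * (E_GeV / E_QG_GeV) ** n

E_nu = 1.0e6              # 1 PeV expressed in GeV
D = 3.7 * GPC_M           # assumed rough distance for z ~ 1.5
dt = liv_delay_s(E_nu, D, 0.01 * E_PL_GEV, n=1)
print(f"linear-LIV delay at E_QG = 0.01 E_pl: {dt / 86400:.0f} days")
```

A delay of this size is comparable to the duration of a months-long flare, which is why the neutrino-flare association can probe E_QG near 0.01 E_pl in the linear case.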
On the relativity and equivalence principles in the gauge theory of gravitation
International Nuclear Information System (INIS)
Ivanenko, D.; Sardanashvily, G.
1981-01-01
The basic ideas of the gauge gravitation theory are still not generally accepted, despite more than twenty years of history. The chief reason lies in the fact that the gauge character of gravity is connected with the whole complex of problems of Einstein's General Relativity: the definition of reference systems, the (3+1)-splitting, the presence (or absence) of symmetries in GR, the necessity (or triviality) of general covariance, and the meaning of the equivalence principle, which led Einstein from Special to General Relativity [1]. The continuing relevance of this complex of interconnected problems is demonstrated by the well-known work of V. Fock, who saw no symmetries in General Relativity, declared the equivalence principle unnecessary, and even proposed to substitute the designation ''chronogeometry'' for ''general relativity'' (see also P. Havas). Developing this line, H. Bondi quite recently also expressed doubts about the ''relativity'' in Einstein's theory of gravitation. Any proposed version of the gauge gravitation theory must clarify the discrepancy between the Einstein gravitational field, which is a pseudo-Riemannian metric field, and the gauge potentials, which represent connections on fiber bundles; there exists no group whose gauging would lead to the purely gravitational part of the connection (Christoffel symbols or Fock-Ivanenko-Weyl spinorial coefficients). (author)
Bruijn, de N.G.
1972-01-01
Recently A. W. Joseph described an algorithm providing combinatorial insight into E. Sparre Andersen's so-called Principle of Equivalence in mathematical statistics. In the present paper such algorithms are discussed systematically.
Energy Technology Data Exchange (ETDEWEB)
Creminelli, Paolo [Abdus Salam International Centre for Theoretical Physics, Strada Costiera 11, Trieste, 34151 (Italy); Gleyzes, Jérôme; Vernizzi, Filippo [CEA, Institut de Physique Théorique, Gif-sur-Yvette cédex, F-91191 France (France); Hui, Lam [Physics Department and Institute for Strings, Cosmology and Astroparticle Physics, Columbia University, New York, NY, 10027 (United States); Simonović, Marko, E-mail: creminel@ictp.it, E-mail: jerome.gleyzes@cea.fr, E-mail: lhui@astro.columbia.edu, E-mail: msimonov@sissa.it, E-mail: filippo.vernizzi@cea.fr [SISSA, via Bonomea 265, Trieste, 34136 (Italy)
2014-06-01
The recently derived consistency relations for Large Scale Structure do not hold if the Equivalence Principle (EP) is violated. We show it explicitly in a toy model with two fluids, one of which is coupled to a fifth force. We explore the constraints that galaxy surveys can set on EP violation by looking at the squeezed limit of the 3-point function involving two populations of objects. We find that one can explore EP violations of order 10⁻³–10⁻⁴ on cosmological scales. Chameleon models are already very constrained by the requirement of screening within the Solar System and only a very tiny region of the parameter space can be explored with this method. We show that no violation of the consistency relations is expected in Galileon models.
Tidal tails test the equivalence principle in the dark-matter sector
International Nuclear Information System (INIS)
Kesden, Michael; Kamionkowski, Marc
2006-01-01
Satellite galaxies currently undergoing tidal disruption offer a unique opportunity to constrain an effective violation of the equivalence principle in the dark sector. While dark matter in the standard scenario interacts solely through gravity on large scales, a new long-range force between dark-matter particles may naturally arise in theories in which the dark matter couples to a light scalar field. An inverse-square-law force of this kind would manifest itself as a violation of the equivalence principle in the dynamics of dark matter compared to baryons in the form of gas or stars. In a previous paper, we showed that an attractive force would displace stars outwards from the bottom of the satellite's gravitational potential well, leading to a higher fraction of stars being disrupted from the tidal bulge further from the Galactic center. Since stars disrupted from the far (near) side of the satellite go on to form the trailing (leading) tidal stream, an attractive dark-matter force will produce a relative enhancement of the trailing stream compared to the leading stream. This distinctive signature of a dark-matter force might be detected through detailed observations of the tidal tails of a disrupting satellite, such as those recently performed by the Two-Micron All-Sky Survey (2MASS) and Sloan Digital Sky Survey (SDSS) on the Sagittarius (Sgr) dwarf galaxy. Here we show that this signature is robust to changes in our models for both the satellite and Milky Way, suggesting that we might hope to search for a dark-matter force in the tidal features of other recently discovered satellite galaxies in addition to the Sgr dwarf
Zhao, Xiaoyan; Qin, Renjia
2015-04-01
This paper critically examines some problems with the treatment of the human ear's sound-transmission principle in existing physiological textbooks and reference books, and puts forward the authors' view to supplement that literature. Using the lever principle from physics together with acoustic theory, we develop an equivalent simplified model of the manubrium mallei, which serves as the long arm of the lever, and an equivalent simplified model of the ossicular chain as a combination of levers. We disassemble the model into two simple levers and analyze and demonstrate each in detail. By calculating and comparing the displacement amplitudes in the external-auditory-canal air and the inner-ear lymph, we conclude that the sound displacement amplitude must be reduced to suit the endurance limit of the basilar membrane, chiefly because the density and sound speed of lymph are much higher than those of air.
International Nuclear Information System (INIS)
Martins, C.J.A.P.; Pinho, A.M.M.; Alves, R.F.C.; Pino, M.; Rocha, C.I.S.A.; Wietersheim, M. von
2015-01-01
Astrophysical tests of the stability of fundamental couplings, such as the fine-structure constant α, are becoming an increasingly powerful probe of new physics. Here we discuss how these measurements, combined with local atomic clock tests and Type Ia supernova and Hubble parameter data, constrain the simplest class of dynamical dark energy models where the same degree of freedom is assumed to provide both the dark energy and (through a dimensionless coupling, ζ, to the electromagnetic sector) the α variation. Specifically, current data tightly constrains a combination of ζ and the present dark energy equation of state w₀. Moreover, in these models the new degree of freedom inevitably couples to nucleons (through the α dependence of their masses) and leads to violations of the Weak Equivalence Principle. We obtain indirect bounds on the Eötvös parameter η that are typically stronger than the current direct ones. We discuss the model-dependence of our results and briefly comment on how the forthcoming generation of high-resolution ultra-stable spectrographs will enable significantly tighter constraints.
Energy Technology Data Exchange (ETDEWEB)
Martins, C.J.A.P.; Pinho, A.M.M.; Alves, R.F.C. [Centro de Astrofísica da Universidade do Porto, Rua das Estrelas, 4150-762 Porto (Portugal); Pino, M. [Institut Domènech i Montaner, C/Maspujols 21-23, 43206 Reus (Spain); Rocha, C.I.S.A. [Externato Ribadouro, Rua de Santa Catarina 1346, 4000-447 Porto (Portugal); Wietersheim, M. von, E-mail: Carlos.Martins@astro.up.pt, E-mail: Ana.Pinho@astro.up.pt, E-mail: up201106579@fc.up.pt, E-mail: mpc_97@yahoo.com, E-mail: cisar97@hotmail.com, E-mail: maxivonw@gmail.com [Institut Manuel Sales i Ferré, Avinguda de les Escoles 6, 43550 Ulldecona (Spain)
2015-08-01
Astrophysical tests of the stability of fundamental couplings, such as the fine-structure constant α, are becoming an increasingly powerful probe of new physics. Here we discuss how these measurements, combined with local atomic clock tests and Type Ia supernova and Hubble parameter data, constrain the simplest class of dynamical dark energy models where the same degree of freedom is assumed to provide both the dark energy and (through a dimensionless coupling, ζ, to the electromagnetic sector) the α variation. Specifically, current data tightly constrains a combination of ζ and the present dark energy equation of state w₀. Moreover, in these models the new degree of freedom inevitably couples to nucleons (through the α dependence of their masses) and leads to violations of the Weak Equivalence Principle. We obtain indirect bounds on the Eötvös parameter η that are typically stronger than the current direct ones. We discuss the model-dependence of our results and briefly comment on how the forthcoming generation of high-resolution ultra-stable spectrographs will enable significantly tighter constraints.
Weak Galilean invariance as a selection principle for coarse-grained diffusive models.
Cairoli, Andrea; Klages, Rainer; Baule, Adrian
2018-05-29
How does the mathematical description of a system change in different reference frames? Galilei first addressed this fundamental question by formulating the famous principle of Galilean invariance. It prescribes that the equations of motion of closed systems remain the same in different inertial frames related by Galilean transformations, thus imposing strong constraints on the dynamical rules. However, real world systems are often described by coarse-grained models integrating complex internal and external interactions indistinguishably as friction and stochastic forces. Since Galilean invariance is then violated, there is seemingly no alternative principle to assess a priori the physical consistency of a given stochastic model in different inertial frames. Here, starting from the Kac-Zwanzig Hamiltonian model generating Brownian motion, we show how Galilean invariance is broken during the coarse-graining procedure when deriving stochastic equations. Our analysis leads to a set of rules characterizing systems in different inertial frames that have to be satisfied by general stochastic models, which we call "weak Galilean invariance." Several well-known stochastic processes are invariant in these terms, except the continuous-time random walk for which we derive the correct invariant description. Our results are particularly relevant for the modeling of biological systems, as they provide a theoretical principle to select physically consistent stochastic models before a validation against experimental data.
Expanded solar-system limits on violations of the equivalence principle
International Nuclear Information System (INIS)
Overduin, James; Mitcham, Jack; Warecki, Zoey
2014-01-01
Most attempts to unify general relativity with the standard model of particle physics predict violations of the equivalence principle associated in some way with the composition of the test masses. We test this idea by using observational uncertainties in the positions and motions of solar-system bodies to set upper limits on the relative difference Δ between gravitational and inertial mass for each body. For suitable pairs of objects, it is possible to constrain three different linear combinations of Δ using Kepler’s third law, the migration of stable Lagrange points, and orbital polarization (the Nordtvedt effect). Limits of order 10⁻¹⁰–10⁻⁶ on Δ for individual bodies can then be derived from planetary and lunar ephemerides, Cassini observations of the Saturn system, and observations of Jupiter’s Trojan asteroids as well as recently discovered Trojan companions around the Earth, Mars, Neptune, and Saturnian moons. These results can be combined with models for elemental abundances in each body to test for composition-dependent violations of the universality of free fall in the solar system. The resulting limits are weaker than those from laboratory experiments, but span a larger volume in composition space. (paper)
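The Kepler's-third-law test mentioned above can be sketched as follows: if a body's gravitational-to-inertial mass ratio is 1 + Δ, the effective attraction becomes GM(1 + Δ) and the orbital period at fixed semimajor axis shifts by about −Δ/2, so ephemeris residuals in the period bound Δ. This toy calculation uses standard constants and an assumed Δ:

```python
import math

GM_SUN = 1.32712440018e20  # heliocentric gravitational parameter, m^3 s^-2
AU = 1.495978707e11        # astronomical unit, m

def period(a_m, delta=0.0):
    """Orbital period when the body's gravitational-to-inertial mass
    ratio is 1 + delta, so the effective attraction is GM*(1 + delta)."""
    return 2.0 * math.pi * math.sqrt(a_m ** 3 / (GM_SUN * (1.0 + delta)))

T0 = period(AU)                 # nominal one-year period
T1 = period(AU, delta=1e-6)     # assumed, deliberately large Delta
frac = (T1 - T0) / T0           # fractional period shift, ~ -delta/2
print(f"T0 = {T0 / 86400:.2f} days, fractional shift for delta=1e-6: {frac:.2e}")
```

Since planetary periods are known to far better than a part in 10⁶, even this crude comparison already excludes Δ at that level; the ephemeride-based limits quoted in the abstract are correspondingly much tighter.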
Current research efforts at JILA to test the equivalence principle at short ranges
International Nuclear Information System (INIS)
Faller, J.E.; Niebauer, T.M.; McHugh, M.P.; Van Baak, D.A.
1988-01-01
We are presently engaged in three different experiments to search for a possible breakdown of the equivalence principle at short ranges. The first of these experiments, which has been completed, is our so-called Galilean test in which the differential free-fall of two objects of differing composition was measured using laser interferometry. We observed that the differential acceleration of two test bodies was less than 5 parts in 10 billion. This experiment set new limits on a suggested baryon dependent ''Fifth Force'' at ranges longer than 1 km. With a second experiment, we are investigating substance dependent interactions primarily for ranges up to 10 meters using a fluid supported torsion balance; this apparatus has been built and is now undergoing laboratory tests. Finally, a proposal has been made to measure the gravitational signal associated with the changing water level at a large pumped storage facility in Ludington, Michigan. Measuring the gravitational signal above and below the pond will yield the value of the gravitational constant, G, at ranges from 10-100 m. These measurements will serve as an independent check on other geophysical measurements of G
Hubbard, Dorthy (Technical Monitor); Lorenzini, E. C.; Shapiro, I. I.; Cosmo, M. L.; Ashenberg, J.; Parzianello, G.; Iafolla, V.; Nozzoli, S.
2003-01-01
We discuss specific, recent advances in the analysis of an experiment to test the Equivalence Principle (EP) in free fall. A differential accelerometer detector with two proof masses of different materials free falls inside an evacuated capsule previously released from a stratospheric balloon. The detector spins slowly about its horizontal axis during the fall. An EP violation signal (if present) will manifest itself at the rotational frequency of the detector. The detector operates in a quiet environment as it slowly moves with respect to the co-moving capsule. There are, however, gravitational and dynamical noise contributions that need to be evaluated in order to define key requirements for this experiment. Specifically, higher-order mass moments of the capsule contribute errors to the differential acceleration output with components at the spin frequency which need to be minimized. The dynamics of the free falling detector (in its present design) has been simulated in order to estimate the tolerable errors at release which, in turn, define the release mechanism requirements. Moreover, the study of the higher-order mass moments for a worst-case position of the detector package relative to the cryostat has led to the definition of requirements on the shape and size of the proof masses.
Solar system expansion and strong equivalence principle as seen by the NASA MESSENGER mission
Genova, Antonio; Mazarico, Erwan; Goossens, Sander; Lemoine, Frank G.; Neumann, Gregory A.; Smith, David E.; Zuber, Maria T.
2018-01-01
The NASA MESSENGER mission explored the innermost planet of the solar system and obtained a rich data set of range measurements for the determination of Mercury's ephemeris. Here we use these precise data collected over 7 years to estimate parameters related to general relativity and the evolution of the Sun. These results confirm the validity of the strong equivalence principle with a significantly refined uncertainty of the Nordtvedt parameter η = (-6.6 ± 7.2) × 10^-5. By assuming a metric theory of gravitation, we retrieved the post-Newtonian parameter β = 1 + (-1.6 ± 1.8) × 10^-5 and the Sun's gravitational oblateness, J_2⊙ = (2.246 ± 0.022) × 10^-7. Finally, we obtain an estimate of the time variation of the Sun's gravitational parameter, d(GM⊙)/dt / GM⊙ = (-6.13 ± 1.47) × 10^-14 per year, which is consistent with the expected solar mass loss due to the solar wind and interior processes. This measurement allows us to constrain |dG/dt|/G to be <4 × 10^-14 per year.
Violations of the equivalence principle in a dilaton-runaway scenario
Damour, Thibault Marie Alban Guillaume; Veneziano, Gabriele
2002-01-01
We explore a version of the cosmological dilaton-fixing and decoupling mechanism in which the dilaton-dependence of the low-energy effective action is extremized for infinitely large values of the bare string coupling $g_s^2 = e^{\phi}$. We study the efficiency with which the dilaton $\phi$ runs away towards its ``fixed point'' at infinity during a primordial inflationary stage, and thereby approximately decouples from matter. The residual dilaton couplings are found to be related to the amplitude of the density fluctuations generated during inflation. For the simplest inflationary potential, $V(\chi) = \frac{1}{2} m_{\chi}^2(\phi)\, \chi^2$, the residual dilaton couplings are shown to predict violations of the universality of gravitational acceleration near the $\Delta a / a \sim 10^{-12}$ level. This suggests that a modest improvement in the precision of equivalence principle tests might be able to detect the effect of such a runaway dilaton. Under some assumptions about the coupling of the dilaton to dark matter...
Gravitational quadrupolar coupling to equivalence principle test masses: the general case
International Nuclear Information System (INIS)
Lockerbie, N A
2002-01-01
This paper discusses the significance of the quadrupolar gravitational force in the context of test masses destined for use in equivalence principle (EP) experiments, such as STEP and MICROSCOPE. The relationship between quadrupolar gravity and rotational inertia for an arbitrary body is analysed, and the special, gravitational, role of a body's principal axes of inertia is revealed. From these considerations the gravitational quadrupolar force acting on a cylindrically symmetrical body, due to a point-like attracting source mass, is derived in terms of the body's mass quadrupole tensor. The result is shown to be in agreement with that obtained from MacCullagh's formula (as the starting point). The theory is then extended to cover the case of a completely arbitrary solid body, and a compact formulation for the quadrupolar force on such a body is derived. A numerical example of a dumb-bell's attraction to a local point-like gravitational source is analysed using this theory. Close agreement is found between the resulting quadrupolar force on the body and the difference between the net and the monopolar forces acting on it, underscoring the utility of the approach. A dynamical technique for experimentally obtaining the mass quadrupole tensors of EP test masses is discussed, and a means of validating the results is noted
The Nome law compromise: the limits of a market system with weak economic principles
International Nuclear Information System (INIS)
Finon, D.
2010-01-01
The NOME law aims at two principal objectives in terms of competition: one objective is to increase the market share of the rivals of the historic suppliers, and the other is to develop retail competition that will lead to competitive prices, consistent with the current cost of the nuclear kWh. This is brought about through a regulation of prices and of the quantity of wholesale trades by allocating drawing rights on nuclear power to alternative suppliers, and through control mechanisms that dissuade the buyers of these rights from arbitraging on the European wholesale market. We then show that it is necessary to depart from the canonical running of the electricity retail market to succeed in decoupling retail prices from wholesale prices. We identify the importance of the historic suppliers' role as a linchpin that in practice defines retail prices, as well as handling market distribution between themselves and the alternative suppliers. We note the special nature of retail prices as coming not from market balance, but rather as being a price defined under political injunction, which is therefore implicitly regulated. With weak economic foundations, the system can be pushed off course by the effect of competition alone, in particular when the allocation limit of a quarter of nuclear energy production is reached. It has an equally weak legal basis with respect to European case law. That raises doubt about its sustainability. (author)
Development of a superconducting position sensor for the Satellite Test of the Equivalence Principle
Clavier, Odile Helene
The Satellite Test of the Equivalence Principle (STEP) is a joint NASA/ESA mission that proposes to measure the differential acceleration of two cylindrical test masses orbiting the earth in a drag-free satellite to a precision of 10-18 g. Such an experiment would conceptually reproduce Galileo's tower of Pisa experiment with a much longer time of fall and greatly reduced disturbances. The superconducting test masses are constrained in all degrees of freedom except their axial direction (the sensitive axis) using superconducting bearings. The STEP accelerometer measures the differential position of the masses in their sensitive direction using superconducting inductive pickup coils coupled to an extremely sensitive magnetometer called a DC-SQUID (Superconducting Quantum Interference Device). Position sensor development involves the design, manufacture and calibration of pickup coils that will meet the acceleration sensitivity requirement. Acceleration sensitivity depends on both the displacement sensitivity and stiffness of the position sensor. The stiffness must be kept small while maintaining stability of the accelerometer. Using a model for the inductance of the pickup coils versus displacement of the test masses, a computer simulation calculates the sensitivity and stiffness of the accelerometer in its axial direction. This simulation produced a design of pickup coils for the four STEP accelerometers. Manufacture of the pickup coils involves standard photolithography techniques modified for superconducting thin films. A single-turn pickup coil was manufactured using thin-film niobium and successfully operated as a superconducting coil. A low-temperature apparatus was developed with a precision position sensor to measure the displacement of a superconducting plate (acting as a mock test mass) facing the coil. The position sensor was designed to detect five degrees of freedom so that coupling could be taken into account when measuring the translation of the plate
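The sensitivity-versus-stiffness trade described in this abstract can be sketched with a toy persistent-current model. The linear inductance law, the parameter values, and the function names below are illustrative assumptions, not STEP's actual coil design: trapped flux fixes the circulating current, the stored magnetic energy sets a spring stiffness, and the current's dependence on displacement is what the SQUID reads out.

```python
# Toy model: a superconducting pickup loop with trapped flux PHI couples to a
# test mass whose displacement x modulates the coil inductance L(x).
# Flux conservation (PHI = L * I) fixes the circulating current, the stored
# energy E = PHI**2 / (2 * L(x)) sets the magnetic spring stiffness
# k = d2E/dx2, and dI/dx is the displacement-to-current sensitivity.

L0 = 1e-6      # H, nominal inductance (illustrative value)
DLDX = 1e-3    # H/m, inductance gradient (illustrative value)
PHI = 1e-9     # Wb, trapped flux (illustrative value)

def inductance(x):
    """Assumed linear inductance vs test-mass displacement (m)."""
    return L0 + DLDX * x

def energy(x):
    return PHI**2 / (2.0 * inductance(x))

def current(x):
    return PHI / inductance(x)

def stiffness_and_sensitivity(x=0.0, h=1e-9):
    # central finite differences for d2E/dx2 and dI/dx
    k = (energy(x + h) - 2.0 * energy(x) + energy(x - h)) / h**2
    didx = (current(x + h) - current(x - h)) / (2.0 * h)
    return k, didx

k, didx = stiffness_and_sensitivity()
# Analytically k = PHI**2 * DLDX**2 / L0**3 and dI/dx = -PHI * DLDX / L0**2:
# both grow with the inductance gradient, which is the design tension
# (high sensitivity vs low stiffness) the abstract refers to.
```

Because both quantities scale with the same gradient dL/dx, lowering stiffness without losing sensitivity requires shaping L(x) or the operating point rather than simply weakening the coupling.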
An Abstract Approach to Process Equivalence and a Coinduction Principle for Traces
DEFF Research Database (Denmark)
Klin, Bartek
2004-01-01
An abstract coalgebraic approach to well-structured relations on processes is presented, based on notions of tests and test suites. Preorders and equivalences on processes are modelled as coalgebras for behaviour endofunctors lifted to a category of test suites. The general framework is specializ...
Energy Technology Data Exchange (ETDEWEB)
Endrestoel, G O [Institutt for Atomenergi, Kjeller (Norway)
1979-01-01
The underlying principles for snow cover determination by use of terrestrial gamma radiation are presented. Several of the methods that have been proposed to exploit the effect are discussed, and some of the more important error sources for the different methods are listed. In conclusion an estimate of the precision that can be obtained by these methods is given.
Energy Technology Data Exchange (ETDEWEB)
Endrestol, G O
1979-01-01
The underlying principles for snow cover determination by use of terrestrial gamma radiation are presented. Several of the methods that have been proposed to exploit the effect are discussed, and some of the more important error sources for the different methods are listed. In conclusion estimates of the precision that can be obtained by these methods are given.
On the role of the equivalence principle in the general relativity theory
International Nuclear Information System (INIS)
Gertsenshtein, M.E.; Stanyukovich, K.P.; Pogosyan, V.A.
1977-01-01
The conditions under which the solutions of the general relativity theory equations satisfy the correspondence principle are considered. It is shown that in general relativity theory, as in a plane space, any system of coordinates satisfying the topological requirements of continuity and uniqueness is admissible. The coordinate transformations must be mutually unique, and the following requirements must be met: the transformations of the coordinates x^i = x^i(x̄^k) must preserve the class of the function, while the transformation jacobian must be finite and nonzero. The admissible metrics in the Tolman problem for a vacuum are considered. A prohibition of the vacuum solution of the Tolman problem is obtained from the correspondence principle. The correspondence principle is applied to the solution of the Friedmann problem by constructing a spherically symmetric self-similar solution, in which replacement of compression by expansion occurs at a finite density. The examples adduced show that the application of the correspondence principle makes it possible to discard physically inadmissible solutions and to obtain new physical results.
International Nuclear Information System (INIS)
Drexler, G.; Williams, G.
1985-01-01
The application of the effective dose equivalent, H_E, concept for radiological protection assessments of occupationally exposed persons is justifiable by the practicability thus achieved with regard to the limiting principles. Nevertheless, it would be more logical to further use as the basic limiting quantity the real physical dose equivalent of homogeneous whole-body exposure, and for inhomogeneous whole-body irradiation the H_E value, calculated by means of the concept of the effective dose equivalent. For then the required concepts, models and calculations would not be connected with a basic radiation protection quantity. Application of the effective dose equivalent for radiation protection assessments for patients is misleading and is not practical with regard to assessing an individual or collective radiation risk of patients. The quantity of expected harm would be better suited to this purpose. There is no need to express the radiation risk by a dose quantity, which means careless handling of good information. (orig./WU)
International Nuclear Information System (INIS)
Gasperini, M. (Istituto Nazionale di Fisica Nucleare, Sezione di Torino, Torino, Italy)
1989-01-01
The negative results of the oscillation experiments are discussed under the hypothesis that the various neutrino types are not universally coupled to gravity. In this case the transition probability between two different flavor eigenstates may be affected by the local gravitational field present in a terrestrial laboratory, and the contribution of gravity can interfere, in general, with the mass contribution to the oscillation process. In particular it is shown that even a strong violation of the equivalence principle could be compatible with the experimental data, provided the gravity-induced energy splitting is balanced by a suitable neutrino mass difference
International Nuclear Information System (INIS)
Drexler, G.; Williams, G.; Zankl, M.
1985-01-01
Since the introduction of the quantity ''effective dose equivalent'' within the framework of new radiation protection concepts, the meaning and interpretation of the quantity is often discussed and debated. Because of its adoption as a limiting quantity in many international and national laws, it is necessary to be able to interpret this main radiation protection quantity. Examples of organ doses and the related H_E values in occupational and medical exposures are presented, and the meaning of the quantity is considered for whole-body exposures to external and internal photon sources, as well as for partial-body external exposures to photons. (author)
Testing the strong equivalence principle with the triple pulsar PSR J0337+1715
Shao, Lijing
2016-04-01
Three conceptually different masses appear in the equations of motion for objects under gravity, namely, the inertial mass, mI, the passive gravitational mass, mP, and the active gravitational mass, mA. It is assumed that, for any objects, mI = mP = mA in Newtonian gravity, and mI = mP in Einsteinian gravity, oblivious to objects' sophisticated internal structure. Empirical examination of the equivalence probes deep into gravity theories. We study the possibility of carrying out new tests based on pulsar timing of the stellar triple system PSR J0337+1715. Various machine-precision three-body simulations are performed, from which the equivalence-violating parameters are extracted with Markov chain Monte Carlo sampling that takes full correlations into account. We show that the difference in masses could be probed to 3 × 10^-8, improving the current constraints from lunar laser ranging on the post-Newtonian parameters that govern violations of mP = mI and mA = mP by thousands and millions, respectively. The test of mP = mA would represent the first test of Newton's third law with compact objects.
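The Newton's-third-law aspect (mA vs. mP) can be illustrated with a toy one-dimensional two-body force balance; the masses, separation, and violation size below are arbitrary choices for illustration, not the paper's pulsar-timing analysis.

```python
# If body i sources gravity with active mass mA_i but responds with passive
# mass mP_i, the two mutual forces need not be equal and opposite, and the
# pair's centre of mass self-accelerates: a Newton's-third-law violation.

G = 6.674e-11  # m^3 kg^-1 s^-2

def mutual_forces(mP1, mA1, mP2, mA2, r):
    """1-D forces: on body 1 from body 2's active mass, and vice versa."""
    f_on_1 = G * mP1 * mA2 / r**2    # toward body 2 (positive direction)
    f_on_2 = -G * mP2 * mA1 / r**2   # toward body 1 (negative direction)
    return f_on_1, f_on_2

# Equal active and passive masses: the forces cancel.
f1, f2 = mutual_forces(1e30, 1e30, 2e30, 2e30, 1e9)
balanced_net = f1 + f2

# A 1e-8 fractional mA/mP mismatch on body 1 leaves a net force on the pair,
# about 1e-8 of the individual force magnitudes.
f1v, f2v = mutual_forces(1e30, 1e30 * (1.0 + 1e-8), 2e30, 2e30, 1e9)
violating_net = f1v + f2v
```

In a timing experiment this uncancelled force shows up as an anomalous acceleration of the system's barycentre, which is why compact-object timing can bound mA = mP at all.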
Comment on ''Modified photon equation of motion as a test for the principle of equivalence''
International Nuclear Information System (INIS)
Nityananda, R.
1992-01-01
In a recent paper, a modification of the geodesic equation was proposed for spinning photons containing a spin-curvature coupling term. The difference in arrival times of opposite circular polarizations starting simultaneously from a source was computed, obtaining a result linear in the coupling parameter. It is pointed out here that this linear term violates causality and, more generally, Fermat's principle, implying calculational errors. Even if these are corrected, there is a violation of covariance in the way the photon spin was introduced. Rectifying this makes the effect computed vanish entirely
Kapotis, Efstratios; Kalkanis, George
2016-10-01
According to the principle of equivalence, it is impossible to distinguish between gravity and inertial forces that a noninertial observer experiences in his own frame of reference. For example, let's consider an elevator in space that is being accelerated in one direction. An observer inside it would feel as if there was gravity force pulling him toward the opposite direction. The same holds for a person in a stationary elevator located in Earth's gravitational field. No experiment enables us to distinguish between the accelerating elevator in space and the motionless elevator near Earth's surface. Strictly speaking, when the gravitational field is non-uniform (like Earth's), the equivalence principle holds only for experiments in elevators that are small enough and that take place over a short enough period of time (Fig. 1). However, performing an experiment in an elevator in space is impractical. On the other hand, it is easy to combine both forces on the same observer, i.e., gravity and a fictitious inertial force due to acceleration. Imagine an observer in an elevator that falls freely within Earth's gravitational field. The observer experiences gravity pulling him down while it might be said that the inertial force due to gravity acceleration g pulls him up. Gravity and inertial force cancel each other, (mis)leading the observer to believe there is no gravitational field. This study outlines our implementation of a self-construction idea that we have found useful in teaching introductory physics students (undergraduate, non-majors).
International Nuclear Information System (INIS)
Poluektov, P.P.; Lopatkin, A.V.; Nikipelov, B.V.; Rachkov, V.I.; Sukhanov, L.P.; Voloshin, S.V.
2005-01-01
The errors and uncertainties arising in the determination of radionuclide escape from the RW burial require the use of extremely conservative estimates. In the limit, the nuclide concentrations in the waste may be used as estimates of their concentrations in underground waters. On this basis, it is possible to evaluate the corresponding radio-toxicities (by normalizing to the interference levels) of individual components and radioactive waste as a whole, or the effective radio-toxicities (by dividing the radionuclide radio-toxicities by the retardation factors for the nuclide transfer with underground waters). This completely coincides with the procedure of performing the limiting conservative estimate according to the traditional approach with the use of scenarios, escape models, and the corresponding codes. A comparison of radio-toxicities for waste with those for natural uranium consumed for producing a required fuel results in the notion of radiation-migration equivalence for individual waste components and radioactive waste as a whole. Therefore, the radiation-migration equivalence corresponds to the limiting conservative estimate in the traditional approach to the determination of RW disposal safety in comparison with the radiotoxicity of natural uranium. The amounts of radionuclides in fragments (and actinides) and the corresponding weight of heavy metal in the fuel are compared with due regard for the hazard (according to the NRB-99 standards), the nuclide mobility (through the sorption retardation factors), the retention of radioactive waste by the solid matrix, and the contribution from the chains of uranium fission products. It was noted above that the RME principle is aimed at ensuring the radiological safety of the present and future generations and the environment through the minimization of radioactive waste upon reprocessing. This is attended by reaching a reasonably achievable, low level of radiological action in the context of modern science, i
International Nuclear Information System (INIS)
Barbarouxa, J.M.; Guillot, J.C.
2009-01-01
We study the spectral properties of a Hamiltonian describing the weak decay of spin 1 massive bosons into the full family of leptons. We prove that the considered Hamiltonian is self-adjoint, with a unique ground state and we derive a Mourre estimate and a limiting absorption principle above the ground state energy and below the first threshold, for a sufficiently small coupling constant. As a corollary, we prove absence of eigenvalues and absolute continuity of the energy spectrum in the same spectral interval. (authors)
Mohindra, R
2009-05-01
The difficulty in discovering a difference between killing and letting die has led many philosophers to deny the distinction. This paper seeks to develop an argument defending the distinction between killing and letting die. In relation to Rachels' cases, the argument is that (a) even accepting that Smith and Jones may select equally heinous options from the choices they have available to them, (b) the fact that the choices available to them are different is morally relevant, and (c) this difference in available choices can be used to distinguish between the agents in certain circumstances. It is the principle of justice, as espoused by Aristotle, which requires that equal things are treated equally and that unequal things are treated unequally that creates a presumption that Smith and Jones should be treated differently. The magnitude of this difference can be amplified by other premises, making the distinction morally relevant in practical reality.
Weak interaction: past answers, present questions
International Nuclear Information System (INIS)
Ne'eman, Y.
1977-02-01
A historical sketch of the weak interaction is presented. From beta ray to pion decay, the V-A theory of Marshak and Sudarshan, CVC principle of equivalence, universality as an algebraic condition, PCAC, renormalized weak Hamiltonian in the rehabilitation of field theory, and some current issues are considered in this review. 47 references
International Nuclear Information System (INIS)
Boyer, T.H.
1984-01-01
A derivation of Planck's spectrum including zero-point radiation is given within classical physics from recent results involving the thermal effects of acceleration through classical electromagnetic zero-point radiation. A harmonic electric-dipole oscillator undergoing a uniform acceleration a through classical electromagnetic zero-point radiation responds as would the same oscillator in an inertial frame when not in zero-point radiation but in a different spectrum of random classical radiation. Since the equivalence principle tells us that the oscillator supported in a gravitational field g = -a will respond in the same way, we see that in a gravitational field we can construct a perpetual-motion machine based on this different spectrum unless the different spectrum corresponds to that of thermal equilibrium at a finite temperature. Therefore, assuming the absence of perpetual-motion machines of the first kind in a gravitational field, we conclude that the response of an oscillator accelerating through classical zero-point radiation must be that of a thermal system. This then determines the blackbody radiation spectrum in an inertial frame which turns out to be exactly Planck's spectrum including zero-point radiation
Majumdar, D; Sil, A; Majumdar, Debasish; Raychaudhuri, Amitava; Sil, Arunansu
2001-01-01
Violation of the Equivalence Principle (VEP) can lead to neutrino oscillation through the non-diagonal coupling of neutrino flavor eigenstates with the gravitational field. The neutrino energy dependence of this oscillation probability is different from that of the usual mass-mixing neutrino oscillations. In this work we explore, in detail, the viability of the VEP hypothesis as a solution to the solar neutrino problem in a two generation scenario with both the active and sterile neutrino alternatives, choosing these states to be massless. To obtain the best-fit values of the oscillation parameters we perform a chi square analysis for the total rates of solar neutrinos seen at the Chlorine (Homestake), Gallium (Gallex and SAGE), Kamiokande, and SuperKamiokande (SK) experiments. We find that the goodness of these fits is never satisfactory. It markedly improves if the Chlorine data is excluded from the analysis, especially for VEP transformation to sterile neutrinos. The 1117-day SK data for recoil electron sp...
Silverman-Retana, Omar; Servan-Mori, Edson; Lopez-Ridaura, Ruy; Bautista-Arredondo, Sergio
2016-07-01
To document the performance of diabetes and hypertension care in two large male prisons in Mexico City. We analyzed data from a cross-sectional study carried out during July-September 2010, including 496 prisoners with hypertension or diabetes in Mexico City. Bivariate and multivariable logistic regressions were used to assess process-of-care indicators and disease control status. Hypertension and diabetes prevalence were estimated at 2.1% and 1.4%, respectively. Among prisoners with diabetes, 22.7% (n = 62) had hypertension as a comorbidity. Low achievement of process-of-care indicators (follow-up visits, blood pressure and laboratory assessments) was observed during incarceration compared to the same prisoners in the year prior to incarceration. In contrast to the nonimprisoned diabetes population from Mexico City and from the lowest quintile of socioeconomic status at the national level, prisoners with diabetes had the lowest performance on process-of-care indicators. Continuity of care for chronic diseases, coupled with the equivalence-of-care principle, should provide the basis for designing chronic disease health policy for prisoners, with the goal of a consistent transition of care from community to prison and vice versa.
Takahashi, Masae; Ishikawa, Yoichi; Ito, Hiromasa
2013-03-01
A weak hydrogen bond (WHB) such as CH-O is very important for the structure, function, and dynamics of chemical and biological systems. WHB stretching vibrations lie in the terahertz (THz) frequency region. Very recently, the reasonable performance of dispersion-corrected first-principles methods for WHBs has been demonstrated. In this lecture, we report dispersion-corrected first-principles calculations of the vibrational absorption of some organic crystals, together with low-temperature THz spectral measurements, in order to clarify WHB stretching vibrations. The THz frequency calculation of a WHB crystal is greatly improved by dispersion correction. Moreover, the discrepancy in frequency between experiment and calculation is 10 cm^-1 or less. Dispersion correction is especially effective for intermolecular modes. The very sharp peak appearing at 4 K is assigned to the intermolecular translational mode that corresponds to the WHB stretching vibration. It is difficult to detect and control WHB formation in a crystal because the binding energy is very small. With the help of the latest intense development of experimental and theoretical techniques and their careful use, we reveal solid-state WHB stretching vibration as evidence for the WHB formation that differs in the respective WHB networks. The research was supported by the Ministry of Education, Culture, Sports, Science and Technology of Japan (Grant No. 22550003).
Directory of Open Access Journals (Sweden)
Shaobo Xie
2017-09-01
When developing a real-time energy management strategy for a plug-in hybrid electric vehicle, it is still a challenge for the Equivalent Consumption Minimization Strategy to achieve near-optimal energy consumption, because the optimal equivalence factor is not readily available without the trip information. With the help of realistic speed profiles sampled from a plug-in hybrid electric bus running on a fixed commuting line, this paper proposes a convenient and effective approach for determining the equivalence factor for an adaptive Equivalent Consumption Minimization Strategy. Firstly, with the adaptive law based on the feedback of battery SOC, the equivalence factor is described as a combination of a major component and a tuning component. In particular, the major part, defined as a constant, reflects the inherent consistency of regular speed profiles, while the second part, including a proportional and an integral term, can slightly tune the equivalence factor to accommodate the disparity of daily running cycles. Moreover, Pontryagin's Minimum Principle is employed and solved by using the shooting method to capture the co-state dynamics, in which the Secant method is introduced to adjust the initial co-state value. The initial co-state value from the last shooting is then taken as the optimal constant component of the equivalence factor. Finally, ten successive driving profiles are selected with different initial SOC levels to evaluate the proposed method, and the results demonstrate excellent fuel economy compared with the dynamic programming and PMP methods.
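The shooting-plus-secant idea in this abstract can be sketched on a toy problem. The SOC model and the 1/(1+λ) power split below are stand-ins invented for illustration (a real PMP controller derives the split from minimizing the Hamiltonian, not from this closed form); what the sketch shows is the structure: forward-simulate with a guessed initial co-state, then let the secant method drive the terminal-SOC error to zero.

```python
# Toy shooting problem: choose the initial co-state lambda0 so that the final
# battery SOC after a synthetic drive cycle lands on a target value.  A larger
# co-state penalizes battery use more, so less charge is drained.

def final_soc(lambda0, soc0=0.9, steps=100):
    """Forward-simulate SOC; the 1/(1+lambda0) split is an illustrative stand-in."""
    soc = soc0
    for t in range(steps):
        demand = 0.002 * (1.0 + 0.5 * (t % 10) / 10.0)  # synthetic power demand
        battery_share = 1.0 / (1.0 + lambda0)           # toy PMP-like control
        soc -= demand * battery_share
    return soc

def secant_shoot(target=0.7, l0=0.0, l1=1.0, tol=1e-9, max_iter=50):
    """Secant iteration on lambda0 until the terminal SOC condition is met."""
    f0 = final_soc(l0) - target
    f1 = final_soc(l1) - target
    for _ in range(max_iter):
        if abs(f1) < tol or f1 == f0:
            break
        l0, l1, f0 = l1, l1 - f1 * (l1 - l0) / (f1 - f0), f1
        f1 = final_soc(l1) - target
    return l1

lam = secant_shoot()
# lam plays the role of the constant (major) component of the equivalence
# factor in the abstract's scheme; the PI tuning term would then correct for
# day-to-day deviations around it.
```

The secant update needs no derivative of the simulated trajectory, which is why it pairs naturally with a black-box forward simulation in shooting methods.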
International Nuclear Information System (INIS)
Hojman, S.
1982-01-01
We present a review of the inverse problem of the Calculus of Variations, emphasizing the ambiguities which appear due to the existence of equivalent Lagrangians for a given classical system. In particular, we analyze the properties of equivalent Lagrangians in the multidimensional case, we study the conditions for the existence of a variational principle for (second as well as first order) equations of motion and their solutions, we consider the inverse problem of the Calculus of Variations for singular systems, we state the ambiguities which emerge in the relationship between symmetries and conserved quantities in the case of equivalent Lagrangians, we discuss the problems which appear in trying to quantize classical systems which have different equivalent Lagrangians, we describe the situation which arises in the study of equivalent Lagrangians in field theory and finally, we present some unsolved problems and discussion topics related to the content of this article. (author)
International Nuclear Information System (INIS)
Ackroyd, R.T.
1982-01-01
Some minimum and maximum variational principles for even-parity neutron transport are reviewed and the corresponding principles for odd-parity transport are derived by a simple method to show why the essential boundary conditions associated with these maximum principles have to be imposed. The method also shows why both the essential and some of the natural boundary conditions associated with these minimum principles have to be imposed. These imposed boundary conditions for trial functions in the variational principles limit the choice of the finite element used to represent trial functions. The reasons for the boundary conditions imposed on the principles for even- and odd-parity transport point the way to a treatment of composite neutron transport, for which completely boundary-free maximum and minimum principles are derived from a functional identity. In general a trial function is used for each parity in the composite neutron transport, but this can be reduced to one without any boundary conditions having to be imposed. (author)
CSIR Research Space (South Africa)
Robertson Lain, L
2014-07-01
Full Text Available (PFT) analysis. To these ends, an initial validation of a new model of Equivalent Algal Populations (EAP) is presented here. This paper makes a first order comparison of two prominent phytoplankton Inherent Optical Property (IOP) models with the EAP...
International Nuclear Information System (INIS)
Comandi, G.L.; Toncelli, R.; Chiofalo, M.L.; Bramanti, D.; Nobili, A.M.
2006-01-01
'Galileo Galilei on the ground' (GGG) is a fast rotating differential accelerometer designed to test the equivalence principle (EP). Its sensitivity to differential effects, such as the effect of an EP violation, depends crucially on the capability of the accelerometer to reject all effects acting in common mode. By applying the theoretical and simulation methods reported in Part I of this work, and tested therein against experimental data, we predict the occurrence of an enhanced common mode rejection of the GGG accelerometer. We demonstrate that the best rejection of common mode disturbances can be tuned in a controlled way by varying the spin frequency of the GGG rotor
International Nuclear Information System (INIS)
Pradels, Gregory; Touboul, Pierre
2003-01-01
The MICROSCOPE mission is a space experiment of fundamental physics which aims to test the equality of gravitational and inertial mass with an accuracy of 10⁻¹⁵. Given these scientific objectives, very weak accelerations have to be controlled and measured in orbit. By modelling the expected acceleration signals applied to the MICROSCOPE instrument in orbit, the developed analytic model of the mission measurement establishes the requirements for instrument calibration. Because of on-ground perturbations, the instrument cannot be calibrated in the laboratory, and an in-orbit procedure has to be defined. The proposed approach exploits the drag-free system of the satellite and is an important element of the future data analysis of the MICROSCOPE space experiment.
Mezzasalma, Stefano A
2007-03-15
The theoretical basis of a recent theory of Brownian relativity for polymer solutions is deepened and reexamined. After the problem of relative diffusion in polymer solutions is addressed, its two postulates are formulated in all generality. The former builds a statistical equivalence between (uncorrelated) timelike and shapelike reference frames, that is, among dynamical trajectories of liquid molecules and static configurations of polymer chains. The latter defines the "diffusive horizon" as the invariant quantity to work with in the special version of the theory. Particularly, the concept of universality in polymer physics corresponds in Brownian relativity to that of covariance in the Einstein formulation. Here, a "universal" law consists of a privileged observation, performed from the laboratory rest frame and agreeing with any diffusive reference system. From the joint lack of covariance and simultaneity implied by the Brownian Lorentz-Poincaré transforms, a relative uncertainty arises, in a certain analogy with quantum mechanics. It is driven by the difference between local diffusion coefficients in the liquid solution. The same transformation class can be used to infer Fick's second law of diffusion, playing here the role of a gauge invariance preserving covariance of the spacetime increments. An overall, noteworthy conclusion emerging from this view concerns the statistics of (i) static macromolecular configurations and (ii) the motion of liquid molecules, which would be much more related than expected.
Robertson Lain, L; Bernard, S; Evers-King, H
2014-07-14
There is a pressing need for improved bio-optical models of high biomass waters as eutrophication of coastal and inland waters becomes an increasing problem. Seasonal boom conditions in the Southern Benguela and persistent harmful algal production in various inland waters in Southern Africa present valuable opportunities for the development of such modelling capabilities. The phytoplankton-dominated signal of these waters additionally addresses an increased interest in Phytoplankton Functional Type (PFT) analysis. To these ends, an initial validation of a new model of Equivalent Algal Populations (EAP) is presented here. This paper makes a first order comparison of two prominent phytoplankton Inherent Optical Property (IOP) models with the EAP model, which places emphasis on explicit bio-physical modelling of the phytoplankton population as a holistic determinant of inherent optical properties. This emphasis is shown to have an impact on the ability to retrieve the detailed phytoplankton spectral scattering information necessary for PFT applications and to successfully simulate reflectance across wide ranges of physical environments, biomass, and assemblage characteristics.
Selleri, Franco
2015-01-01
Weak Relativity is a theory equivalent to Special Relativity according to Reichenbach’s definition, with the synchronization parameter epsilon equal to 0. It formulates a Neo-Lorentzian approach by replacing the Lorentz transformations with a new set named “Inertial Transformations”, thus explaining the Sagnac effect, the twin paradox and travel from the future to the past in an easy and elegant way. The cosmic microwave background is suggested as a possible privileged reference system. Most importantly, being a theory based on experimental proofs rather than mutual consensus, it offers a physical description of reality independent of human observation.
International Nuclear Information System (INIS)
Comandi, G.L.; Chiofalo, M.L.; Toncelli, R.; Bramanti, D.; Polacco, E.; Nobili, A.M.
2006-01-01
Recent theoretical work suggests that violation of the equivalence principle might be revealed in a measurement of the fractional differential acceleration η between two test bodies of different composition falling in the gravitational field of a source mass, if the measurement is made to the level of η ≈ 10⁻¹³ or better. That this is within the reach of ground-based experiments gives them new impetus. However, while slowly rotating torsion balances in ground laboratories are close to reaching this level, only an experiment performed in a low orbit around the Earth is likely to provide much better accuracy. We report on the progress made with the 'Galileo Galilei on the ground' (GGG) experiment, which aims to compete with torsion balances using an instrument design that can also be converted into a much higher-sensitivity space test. In the present and following articles (Part I and Part II), we demonstrate that the dynamical response of the GGG differential accelerometer set into supercritical rotation, in particular its normal modes (Part I) and rejection of common-mode effects (Part II), can be predicted by means of a simple but effective model that embodies all the relevant physics. Analytical solutions are obtained in special limits, which provide the theoretical understanding. A simulation environment is set up, obtaining quantitative agreement with the available experimental data on the frequencies of the normal modes and on the whirling behavior. This is a needed and reliable tool for controlling and separating perturbative effects from the expected signal, as well as for planning the optimization of the apparatus.
Directory of Open Access Journals (Sweden)
John A.E. Vervaele
2005-12-01
Full Text Available The deepening and widening of European integration has led to an increase in transborder crime. Concurrent prosecution and sanctioning by several Member States is not only a problem in inter-state relations and an obstacle in the European integration process, but also a violation of the ne bis in idem principle, defined as a transnational human right in a common judicial area. This article analyzes whether and to what extent the ECHR has contributed and may continue to contribute to the development of such a common ne bis in idem standard in Europe. It is also examined whether the application of the ne bis in idem principle in classic inter-state judicial cooperation in criminal matters in the framework of the Council of Europe may make such a contribution as well. The transnational function of the ne bis in idem principle is discussed in the light of the Court of Justice’s case law on ne bis in idem in the framework of the area of Freedom, Security and Justice. Finally the inherent tension between mutual recognition and the protection of human rights in transnational justice is analyzed by looking at the insertion of the ne bis in idem principle in the Framework Decision on the European arrest warrant.
Molenaar, Peter C M
2017-01-01
Equivalences of two classes of dynamic models for weakly stationary multivariate time series are discussed: dynamic factor models and autoregressive models. It is shown that exploratory dynamic factor models can be rotated, yielding an infinite set of equivalent solutions for any observed series. It also is shown that dynamic factor models with lagged factor loadings are not equivalent to the currently popular state-space models, and that restriction of attention to the latter type of models may yield invalid results. The known equivalent vector autoregressive model types, standard and structural, are given a new interpretation in which they are conceived of as the extremes of an innovating type of hybrid vector autoregressive models. It is shown that consideration of hybrid models solves many problems, in particular with Granger causality testing.
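The rotational indeterminacy of exploratory dynamic factor models noted above can be checked numerically: post-multiplying the loadings by any orthogonal matrix leaves the model-implied covariance unchanged, so the solutions are observationally equivalent. A minimal sketch with arbitrary illustrative matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
Lam = rng.normal(size=(4, 2))              # factor loadings: 4 observed series, 2 factors
Psi = np.diag(rng.uniform(0.5, 1.0, 4))    # diagonal unique variances

theta = 0.7                                # arbitrary rotation angle
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])  # orthogonal rotation, R @ R.T = I

Sigma1 = Lam @ Lam.T + Psi                 # implied covariance, original loadings
Sigma2 = (Lam @ R) @ (Lam @ R).T + Psi     # implied covariance, rotated loadings

assert np.allclose(Sigma1, Sigma2)         # rotated solutions fit the data equally well
```

Because R R' = I, the rotation cancels inside Λ R R' Λ', which is exactly why an infinite set of equivalent factor solutions exists for any observed series.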
Sundar, Bhuvanesh; Hamilton, Alasdair C; Courtial, Johannes
2009-02-01
We derive a formal description of local light-ray rotation in terms of complex refractive indices. We show that Fermat's principle holds, and we derive an extended Snell's law. The change in the angle of a light ray with respect to the normal of a refractive index interface is described by the modulus of the refractive index ratio; the rotation around the interface normal is described by the argument of the refractive index ratio.
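A minimal numerical sketch of the extended Snell's law described above, assuming the convention that the modulus of the complex index ratio governs the refraction angle with respect to the interface normal while its argument gives the rotation about that normal (the sign conventions here are assumptions, not taken from the paper):

```python
import cmath
import math

def refract(theta_in, n_ratio):
    """Extended Snell's law for a complex refractive-index ratio n_ratio.

    Returns (theta_out, rotation): theta_out from sin(theta_in) = |n| sin(theta_out),
    and the rotation about the interface normal given by arg(n).
    Sign conventions are illustrative assumptions.
    """
    s = math.sin(theta_in) / abs(n_ratio)
    if abs(s) > 1.0:
        return None, None                # total internal reflection
    return math.asin(s), cmath.phase(n_ratio)

# Purely imaginary ratio: |n| = 1, so the angle to the normal is unchanged,
# but the ray is rotated by 90 degrees around the normal.
theta_out, rot = refract(math.radians(30.0), 1j)
```

This separates the two effects cleanly: an ordinary refractive medium is the special case arg(n) = 0, while a pure ray-rotator has |n| = 1.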
Energy Technology Data Exchange (ETDEWEB)
Yu, H.L. [Science and Technology on Surface Physics and Chemistry Laboratory, P.O. Box 718-35, Mianyang 621907 (China); Department of Engineering Physics, Tsinghua University, Beijing 100084 (China); Deng, X.D. [Department of Engineering Physics, Tsinghua University, Beijing 100084 (China); Li, G.; Lai, X.C. [Science and Technology on Surface Physics and Chemistry Laboratory, P.O. Box 718-35, Mianyang 621907 (China); Meng, D.Q., E-mail: yuhuilong2002@126.com [Science and Technology on Surface Physics and Chemistry Laboratory, P.O. Box 718-35, Mianyang 621907 (China)
2014-10-15
Highlights: • The CO₂ adsorption on the PuO₂ (1 1 0) surface was studied by GGA + U. • Both weak and strong adsorption modes exist between CO₂ and the PuO₂ (1 1 0) surface. • Electrostatic interactions are involved in the weak adsorptions. • Covalent bonding develops in the strong adsorptions. - Abstract: The adsorption of CO₂ on the plutonium dioxide (PuO₂) (1 1 0) surface was studied using the projector-augmented wave (PAW) method based on density-functional theory corrected for on-site Coulombic interactions (GGA + U). It is found that CO₂ exhibits several different adsorption features on the PuO₂ (1 1 0) surface: both weak and strong adsorption modes exist between CO₂ and the surface. Further investigation of the partial density of states (PDOS) and the charge density difference at two typical adsorption sites reveals that electrostatic interactions are involved in the weak adsorptions, while covalent bonding develops in the strong adsorptions.
Slyusarenko, Yurii V.; Sliusarenko, Oleksii Yu.
2017-11-01
We develop a microscopic approach to the construction of the kinetic theory of a dilute weakly ionized gas of hydrogen-like atoms. The approach is based on the second quantization method in the presence of bound states of particles. The derivation of the kinetic equations rests on the method of reduced description of relaxation processes. Within the framework of the proposed approach, a system of coupled kinetic equations for the Wigner distribution functions of free oppositely charged fermions of two kinds (electrons and cores) and their bound states, hydrogen-like atoms, is obtained. The kinetic equations are used to study the spectra of elementary excitations in the system when all its components are non-degenerate. It is shown that in such a system, in addition to the typical plasma waves, there are longitudinal waves of matter polarization and transverse ones with behavior characteristic of plasmon polaritons. Expressions for the dependence of the frequencies and Landau damping coefficients on the wave vector are obtained for all branches of the discovered oscillations. A numerical evaluation of the elementary perturbation parameters in the system is given for the example of a weakly ionized dilute gas of ²³Na atoms, using the D2-line characteristics of the sodium atom. We note the possibility of using the results of the developed theory to describe the properties of a Bose condensate of photons in a diluted weakly ionized gas of hydrogen-like atoms.
Energy Technology Data Exchange (ETDEWEB)
Speranskaya, G.V.; Tyutyunnikov, Yu.B.; Erkin, L.I.; Nefedov, P.Ya.; Sheptovitskii, M.S.; Toryanik, E.I.
1987-01-01
Coking coal resources of the USSR are nonuniformly distributed among major coal basins (Donetsk 26%, Pechora 7%, Kizelovsk and L'vov-Volyn' 0.5% each, Kuznetsk 48.2%, Karaganda 7%, South Yakutiya 4.4%, others 7.4%). Only one-third of the resources are available in the European area of the USSR, where the demand for blast-furnace coke is greater. The use of weakly baking and nonbaking coals for the production of metallurgy-grade formed coke has been found to be the simplest way to avoid transportation of fat components of coking blends and to cut the cost of pig iron production under Soviet conditions. Commercial production of formed coke should enable blast-furnace coke production to be raised by 15 million t/a now and by 20-22 million t/a in the near future without the structure of Soviet coal production being significantly affected. The book describes technical properties of the gas, weakly baking and long-flame coals (G, SS and D types, respectively) from the Donetsk, Kuznetsk, Irkutsk and Karaganda coal basins used as coking blend components, discusses many scientific and technological aspects of the industrial-scale process (i.e. thermal pretreatment of coal with a gaseous heat-carrier, effect of pressure on plastic layer formation in weakly baking coal blends, coke oven construction), and reviews technical properties of formed coke (shape and size of coke lumps, drum strength, macro- and microstructure, thermal stability, reactivity) used in the blast-furnace process. 122 refs., 118 figs., 87 tabs.
International Nuclear Information System (INIS)
Leite Lopes, J.
1976-01-01
A survey of the fundamental ideas on weak currents, such as CVC and PCAC, and a presentation of the Cabibbo current and the neutral weak currents according to the Salam-Weinberg model and the Glashow-Iliopoulos-Maiani model are given [fr
Introduction to weak interactions
International Nuclear Information System (INIS)
Leite Lopes, J.
An account is first given of the electromagnetic interactions of complex, scalar, vector and spinor fields. It is shown that the electromagnetic field may be considered as a gauge field. Yang-Mills fields and the field theory invariant with respect to the non-Abelian gauge transformation group are then described. The construction, owing to this invariance principle, of conserved isospin currents associated with gauge fields is also demonstrated. This is followed by a historical survey of the development of the weak interaction theory, established at first to describe beta disintegration processes by analogy with electrodynamics. The various stages are mentioned from the discovery of principles and rules and violation of principles, such as those of invariance with respect to spatial reflection and charge conjugation to the formulation of the effective current-current Lagrangian and research on the structure of weak currents [fr
International Nuclear Information System (INIS)
Wojcicki, S.
1978-11-01
Lectures are given on weak decays from a phenomenological point of view, emphasizing new results and ideas and the relation of recent results to the new standard theoretical model. The general framework within which the weak decay is viewed and relevant fundamental questions, weak decays of noncharmed hadrons, decays of muons and the tau, and the decays of charmed particles are covered. Limitation is made to the discussion of those topics that either have received recent experimental attention or are relevant to the new physics. (JFP) 178 references
International Nuclear Information System (INIS)
Parra, Felix I; Catto, Peter J
2009-01-01
We compare two different derivations of the gyrokinetic equation: the Hamiltonian approach in Dubin D H E et al (1983 Phys. Fluids 26 3524) and the recursive methodology in Parra F I and Catto P J (2008 Plasma Phys. Control. Fusion 50 065014). We prove that both approaches yield the same result at least to second order in a Larmor radius over macroscopic length expansion. There are subtle differences in the definitions of some of the functions that need to be taken into account to prove the equivalence.
International Nuclear Information System (INIS)
Ogava, S.; Savada, S.; Nakagava, M.
1983-01-01
The use of weak interaction laws to study models of elementary particles is discussed. The most typical examples of weak interaction are the beta-decay of nucleons and muons. The beta-interaction is represented by quark currents in the form of a universal interaction of the V-A type. The universality of weak interactions is well confirmed using the e- and μ-channels of pion decay as examples. The hypothesis of partial conservation of the axial current is applicable to the analysis of processes involving pions. In the framework of the four-flavour model, leptonic decays of hadrons are considered. Weak interactions without lepton participation are also considered. The properties of neutral currents are described briefly
International Nuclear Information System (INIS)
Chanda, R.
1981-01-01
The theoretical and experimental evidence forming a basis for a Lagrangian quantum field theory of weak interactions is discussed. In this context, gauge-invariance aspects of such interactions are shown. (L.C.) [pt
Determination of dose equivalent with tissue-equivalent proportional counters
International Nuclear Information System (INIS)
Dietze, G.; Schuhmacher, H.; Menzel, H.G.
1989-01-01
Low-pressure tissue-equivalent proportional counters (TEPC) are instruments based on the cavity chamber principle that provide spectral information on the energy loss of single charged particles crossing the cavity. Hence such detectors measure absorbed dose or kerma and are able to provide estimates of radiation quality. During recent years, TEPC-based instruments have been developed for radiation protection applications in photon and neutron fields, mainly in the expectation that the energy dependence of their dose equivalent response is smaller than that of other instruments in use. Recently, such instruments have been investigated through intercomparison measurements in various neutron and photon fields. Although their measurement principles are more closely related to the definition of the dose equivalent quantities than those of other existing dosemeters, there are distinct differences and limitations with respect to the irradiation geometry and the determination of the quality factor. The application of such instruments for measuring ambient dose equivalent is discussed. (author)
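Instruments of this kind derive dose equivalent by weighting measured energy-deposition events with a quality factor. As a rough illustration (not the procedure of this 1989 paper, which predates it), the quality factor Q as a function of unrestricted LET L later standardized in ICRP Publication 60 can be sketched as:

```python
import math

def quality_factor(L):
    """ICRP Publication 60 quality factor Q(L), L = unrestricted LET in keV/um."""
    if L < 10.0:
        return 1.0
    if L <= 100.0:
        return 0.32 * L - 2.2
    return 300.0 / math.sqrt(L)

# Dose equivalent is then the LET-weighted absorbed dose, H = sum(Q(L_i) * D_i),
# over the measured single-event spectrum (a sketch of the weighting step only).
q_low, q_mid, q_high = quality_factor(5.0), quality_factor(50.0), quality_factor(400.0)
```

The piecewise form is continuous at L = 10 keV/µm, which is where densely ionizing radiation starts to receive extra weight relative to photons and electrons.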
Rotating model for the equivalence principle paradox
International Nuclear Information System (INIS)
Wilkins, D.C.
1975-01-01
An idealized system is described in which two inertial frames rotate relative to one another. When a (scalar) dipole is locally at rest in one frame, a paradox arises as to whether or not it will radiate. Fluxes of energy and angular momentum and the time development of the system are discussed. Resolution of the paradox involves several unusual features, including (i) radiation by an unmoving charge, an effect discussed by Chitre, Price, and Sandberg, (ii) different power seen by relatively accelerated inertial observers, and (iii) radiation reaction due to gravitational backscattering of radiation, in agreement with the work of C. and B. DeWitt. These results are obtained, for the most part, without the complications of curved space-time
The strong equivalence principle and its violation
International Nuclear Information System (INIS)
Canuto, V.M.; Goldman, I.
1983-01-01
In this paper, the authors discuss theoretical and observational aspects of an SEP violation. They present a two-times theory as a possible framework to handle an SEP violation and summarize the tests performed to check the compatibility of such violation with a host of data ranging from nucleosynthesis to geophysics. They also discuss the dynamical equations needed to analyze radar ranging data to reveal an SEP violation and in particular the method employed by Shapiro and Reasenberg. (Auth.)
Runaway dilaton and equivalence principle violations
Damour, Thibault Marie Alban Guillaume; Veneziano, Gabriele; Damour, Thibault; Piazza, Federico; Veneziano, Gabriele
2002-01-01
In a recently proposed scenario, where the dilaton decouples while cosmologically attracted towards infinite bare string coupling, its residual interactions can be related to the amplitude of density fluctuations generated during inflation, and are large enough to be detectable through a modest improvement on present tests of free-fall universality. Provided it has significant couplings to either dark matter or dark energy, a runaway dilaton can also induce time-variations of the natural "constants" within the reach of near-future experiments.
Floyd's principle, correctness theories and program equivalence
Bergstra, J.A.; Tiuryn, J.; Tucker, J.V.
1982-01-01
A programming system is a language made from a fixed class of data abstractions and a selection of familiar deterministic control and assignment constructs. It is shown that the sets of all ‘before-after’ first-order assertions which are true of programs in any such language can uniquely determine
International Nuclear Information System (INIS)
Bjorken, J.D.
1978-01-01
Weak interactions are studied from a phenomenological point of view, using a minimal number of theoretical hypotheses. Charged-current phenomenology is discussed, followed by neutral-current phenomenology. All of this is described in terms of a global SU(2) symmetry plus an electromagnetic correction. The intermediate-boson hypothesis is introduced and lower bounds on the range of the weak force are inferred. This phenomenology does not yet reconstruct all the predictions of the conventional SU(2)xU(1) gauge theory; to do that requires the additional assumption of restoration of SU(2) symmetry at asymptotic energies
International Nuclear Information System (INIS)
Lange, Benjamin
2010-01-01
This paper presents a new method for doing a free-fall equivalence-principle (EP) experiment in a satellite at ambient temperature which solves two problems that have previously blocked this approach. By using large masses to change the gravity gradient at the proof masses, the orbit dynamics of a drag-free satellite may be changed in such a way that the experiment can mimic a free-fall experiment in a constant gravitational field on the earth. An experiment using a sphere surrounded by a spherical shell, both completely unsupported and free falling, has previously been impractical because (1) it is not possible to distinguish between a small EP violation and a slight difference in the semi-major axes of the orbits of the two proof masses and (2) the position difference in the orbit due to an EP violation only grows as t, whereas the largest disturbance grows as t^(3/2). Furthermore, it has not been known how to independently measure the positions of a shell and a solid sphere with sufficient accuracy. The measurement problem can be solved by using a two-color transcollimator (see the main text), and since the radial-position-error and t-response problems arise from the earth's gravity gradient and not from its gravity field, one solution is to modify the earth's gravity gradient with local masses fixed in the satellite. Since the gravity gradient at the surface of a sphere, for example, depends only on its density, the gravity gradients of laboratory masses and of the earth, unlike their fields, are of the same order of magnitude. In a drag-free satellite spinning perpendicular to the orbit plane, two fixed spherical masses whose connecting line parallels the satellite spin axis can generate a dc gravity gradient at test masses located between them which cancels the combined gravity gradient of the earth and differential centrifugal force. With perfect cancellation, the position-error problem vanishes and the response grows as t^2 along a line which always points toward
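The claim that laboratory masses can cancel the orbital gravity gradient rests on the fact that the radial tidal gradient 2GM/r³ at the surface of a uniform sphere equals (8π/3)Gρ, independent of the sphere's size. A quick numerical check (the orbit radius and the lead density are illustrative values, not parameters from the paper):

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def sphere_surface_gradient(rho):
    """Radial tidal gradient 2GM/r^3 at a uniform sphere's surface.

    With M = (4/3) * pi * rho * r^3, this reduces to (8*pi/3) * G * rho,
    so it depends only on the density rho (kg/m^3), not on the radius.
    """
    return (8.0 * math.pi / 3.0) * G * rho

lab_gradient = sphere_surface_gradient(11340.0)   # lead sphere, any size

GM_earth = 3.986e14                               # Earth's GM, m^3 s^-2
r_orbit = 7.0e6                                   # assumed low-orbit radius, m
earth_gradient = 2.0 * GM_earth / r_orbit**3      # Earth's tidal gradient at orbit
```

Both gradients come out at a few times 10⁻⁶ s⁻², confirming that fixed local masses are strong enough in principle to cancel the earth's gradient at the test masses.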
Measurement of weak radioactivity
Theodorsson, P
1996-01-01
This book is intended for scientists engaged in the measurement of weak alpha, beta, and gamma active samples; in health physics, environmental control, nuclear geophysics, tracer work, radiocarbon dating etc. It describes the underlying principles of radiation measurement and the detectors used. It also covers the sources of background, analyzes their effect on the detector and discusses economic ways to reduce the background. The most important types of low-level counting systems and the measurement of some of the more important radioisotopes are described here. In cases where more than one type can be used, the selection of the most suitable system is shown.
DEFF Research Database (Denmark)
Gonzalez Eiras, Martin; Niepelt, Dirk
2015-01-01
Traditional "economic equivalence" results, like the Ricardian equivalence proposition, define equivalence classes over exogenous policies. We derive "politico-economic equivalence" conditions that apply in environments where policy is endogenous and chosen sequentially. A policy regime and a st... ...their use in the context of several applications, relating to social security reform, tax-smoothing policies and measures to correct externalities....
Quantum Action Principle with Generalized Uncertainty Principle
Gu, Jie
2013-01-01
One of the common features of all promising candidates for quantum gravity is the existence of a minimal length scale, which naturally emerges with a generalized uncertainty principle, or equivalently a modified commutation relation. Schwinger's quantum action principle is modified to incorporate this deformation and applied to the calculation of the kernel of a free particle, partly recovering the result previously obtained using the path integral.
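For orientation, a commonly used form of the modified commutation relation and the minimal length it implies (this specific deformation is an assumption for illustration; the paper's exact form may differ):

```latex
[\hat{x},\hat{p}] = i\hbar\left(1+\beta \hat{p}^{2}\right)
\;\Longrightarrow\;
\Delta x\,\Delta p \ge \frac{\hbar}{2}\left(1+\beta(\Delta p)^{2}\right)
\;\Longrightarrow\;
\Delta x_{\min} = \hbar\sqrt{\beta}.
```

Minimizing the right-hand side over Δp (at Δp = 1/√β) gives the minimal position uncertainty ħ√β, the minimal length scale referred to in the abstract.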
Reconstructing weak values without weak measurements
International Nuclear Information System (INIS)
Johansen, Lars M.
2007-01-01
I propose a scheme for reconstructing the weak value of an observable without the need for weak measurements. The post-selection in weak measurements is replaced by an initial projector measurement. The observable can be measured using any form of interaction, including projective measurements. The reconstruction is effected by measuring the change in the expectation value of the observable due to the projector measurement. The weak value may take nonclassical values if the projector measurement disturbs the expectation value of the observable
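The quantity being reconstructed is the weak value A_w = ⟨φ|A|ψ⟩/⟨φ|ψ⟩. The following sketch evaluates it for a qubit observable and exhibits the nonclassical amplification mentioned above; the particular states are arbitrary illustrations, not Johansen's reconstruction protocol itself.

```python
import numpy as np

sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)   # observable A

psi = np.array([1, 1], dtype=complex) / np.sqrt(2)     # pre-selected state |psi>
phi = np.array([np.cos(0.1), -np.sin(0.1)],            # projector state |phi>,
               dtype=complex)                          # nearly orthogonal to |psi>

# Weak value A_w = <phi|A|psi> / <phi|psi>
weak_value = (phi.conj() @ sigma_z @ psi) / (phi.conj() @ psi)
```

Because ⟨φ|ψ⟩ is small, the weak value lies outside the eigenvalue range [-1, 1] of σ_z, the "nonclassical values" the abstract refers to.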
Hypernuclear weak decay puzzle
International Nuclear Information System (INIS)
Barbero, C.; Horvat, D.; Narancic, Z.; Krmpotic, F.; Kuo, T.T.S.; Tadic, D.
2002-01-01
A general shell model formalism for the nonmesonic weak decay of hypernuclei has been developed. It involves a partial wave expansion of the emitted nucleon waves, naturally preserves antisymmetrization between the escaping particles and the residual core, and contains as a particular case the weak Λ-core coupling formalism. The extreme particle-hole model and the quasiparticle Tamm-Dancoff approximation are explicitly worked out. It is shown that the nuclear structure manifests itself basically through the Pauli principle, and a very simple expression is derived for the neutron- and proton-induced decay rates Γ_n and Γ_p which does not involve spectroscopic factors. We use the standard strangeness-changing weak ΛN→NN transition potential, which comprises the exchange of the complete pseudoscalar and vector meson octets (π, η, K, ρ, ω, K*), taking into account some important parity-violating transition operators that are systematically omitted in the literature. The interplay between different mesons in the decay of the ¹²C hypernucleus is carefully analyzed. With the commonly used parametrization in the one-meson-exchange model (OMEM), the calculated rate Γ_NM = Γ_n + Γ_p is of the order of the free Λ decay rate Γ_0 (Γ_NM^th ≅ Γ_0) and is consistent with experiments. Yet the measurements of Γ_n/p = Γ_n/Γ_p and of Γ_p are not well accounted for by the theory (Γ_n/p^th is too small, while Γ_p^th ≳ 0.60 Γ_0). It is suggested that, unless additional degrees of freedom are incorporated, the OMEM parameters should be radically modified
The Thermodynamical Arrow and the Historical Arrow; Are They Equivalent?
Directory of Open Access Journals (Sweden)
Martin Tamm
2017-08-01
Full Text Available In this paper, the relationship between the thermodynamic and historical arrows of time is studied. In the context of a simple combinatorial model, their definitions are made more precise; in particular, strong versions (which are not compatible with time-symmetric microscopic laws) and weak versions (which can be compatible with time-symmetric microscopic laws) are given. This is part of a larger project that aims to explain the arrows as consequences of a common time-symmetric principle in the set of all possible universes. However, even if we accept that both arrows may have the same origin, this does not imply that they are equivalent, and it is argued that there can be situations where one arrow may be well-defined but the other is not.
Decompositional equivalence: A fundamental symmetry underlying quantum theory
Fields, Chris
2014-01-01
Decompositional equivalence is the principle that there is no preferred decomposition of the universe into subsystems. It is shown here, by using simple thought experiments, that quantum theory follows from decompositional equivalence together with Landauer's principle. This demonstration raises within physics a question previously left to psychology: how do human (or any) observers agree about what constitutes a "system of interest"?
International Nuclear Information System (INIS)
Huyskens, C.J.; Passchier, W.F.
1988-01-01
The effective dose equivalent is a quantity which is used in the daily practice of radiation protection, as well as in radiation hygiene regulations, as a measure of health risks. In this contribution it is worked out upon which assumptions this quantity is based and in which cases the effective dose equivalent can be used more or less well. (H.W.)
Characterization of revenue equivalence
Heydenreich, B.; Müller, R.; Uetz, Marc Jochen; Vohra, R.
2009-01-01
The property of an allocation rule to be implementable in dominant strategies by a unique payment scheme is called revenue equivalence. We give a characterization of revenue equivalence based on a graph theoretic interpretation of the incentive compatibility constraints. The characterization holds
Characterization of Revenue Equivalence
Heydenreich, Birgit; Müller, Rudolf; Uetz, Marc Jochen; Vohra, Rakesh
2008-01-01
The property of an allocation rule to be implementable in dominant strategies by a unique payment scheme is called revenue equivalence. In this paper we give a characterization of revenue equivalence based on a graph theoretic interpretation of the incentive compatibility constraints. The
International Nuclear Information System (INIS)
Grenet, G.; Kibler, M.
1978-06-01
A closed polynomial formula for the qth component of the diagonal operator equivalent of order k is derived in terms of angular momentum operators. The interest in various fields of molecular and solid state physics of using such a formula in connection with symmetry adapted operator equivalents is outlined
Cosmological principles. II. Physical principles
International Nuclear Information System (INIS)
Harrison, E.R.
1974-01-01
The discussion of cosmological principle covers the uniformity principle of the laws of physics, the gravitation and cognizability principles, and the Dirac creation, chaos, and bootstrap principles. (U.S.)
Energy Technology Data Exchange (ETDEWEB)
Barbarouxa, J.M. [Centre de Physique Theorique, 13 - Marseille (France); Toulon-Var Univ. du Sud, Dept. de Mathematiques, 83 - La Garde (France); Guillot, J.C. [Centre de Mathematiques Appliquees, UMR 7641, Ecole Polytechnique - CNRS, 91 - Palaiseau (France)
2009-09-15
We study the spectral properties of a Hamiltonian describing the weak decay of spin 1 massive bosons into the full family of leptons. We prove that the considered Hamiltonian is self-adjoint, with a unique ground state and we derive a Mourre estimate and a limiting absorption principle above the ground state energy and below the first threshold, for a sufficiently small coupling constant. As a corollary, we prove absence of eigenvalues and absolute continuity of the energy spectrum in the same spectral interval. (authors)
Fleischer, Christian; Waag, Wladislaw; Heyn, Hans-Martin; Sauer, Dirk Uwe
2014-08-01
Lithium-ion battery systems employed in high-power-demanding systems such as electric vehicles require a sophisticated monitoring system to ensure safe and reliable operation. Three major states of the battery are of special interest and need to be constantly monitored; these include: battery state of charge (SoC), battery state of health (capacity fade determination, SoH), and state of function (power fade determination, SoF). In a series of two papers, we propose a system of algorithms based on a weighted recursive least quadratic squares parameter estimator that is able to determine the battery impedance and diffusion parameters for accurate state estimation. The functionality was proven on different battery chemistries with different aging conditions. The first paper investigates the general requirements on BMS for HEV/EV applications. In parallel, the commonly used methods for battery monitoring are reviewed to elaborate their strengths and weaknesses in terms of the identified requirements for on-line applications. Special emphasis will be placed on real-time capability and memory-optimized code for cost-sensitive industrial or automotive applications in which low-cost microcontrollers must be used. Therefore, a battery model is presented which includes the influence of the Butler-Volmer kinetics on the charge-transfer process. Lastly, the mass transport process inside the battery is modeled in a novel state-space representation.
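The recursive estimator at the heart of the proposed monitoring system can be illustrated in miniature. The following is a minimal sketch, not the authors' implementation: a scalar weighted recursive least squares update with forgetting factor, estimating a single ohmic-resistance parameter. The data values and the resistance are invented for the demonstration.

```python
def rls_scalar(data, lam=0.99, theta0=0.0, p0=1000.0):
    """Scalar weighted recursive least squares with forgetting factor lam."""
    theta, p = theta0, p0
    for x, y in data:
        k = p * x / (lam + x * p * x)   # gain
        theta += k * (y - x * theta)    # parameter update
        p = (p - k * x * p) / lam       # covariance update
    return theta

# Synthetic ohmic cell: voltage drop y = R * current x with R = 0.05 ohm
data = [(i, 0.05 * i) for i in (1.0, 2.0, 1.5, 3.0, 2.5) * 10]
print(rls_scalar(data))  # converges to ~0.05
```

The forgetting factor lam < 1 weights recent samples more heavily, which is what allows the estimator to track slowly drifting battery parameters on-line.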
Weak MSO: automata and expressiveness modulo bisimilarity
Carreiro, F.; Facchini, A.; Venema, Y.; Zanasi, F.
2014-01-01
We prove that the bisimulation-invariant fragment of weak monadic second-order logic (WMSO) is equivalent to the fragment of the modal μ-calculus where the application of the least fixpoint operator μp.φ is restricted to formulas φ that are continuous in p. Our proof is automata-theoretic in nature;
Reducing Weak to Strong Bisimilarity in CCP
Directory of Open Access Journals (Sweden)
Andrés Aristizábal
2012-12-01
Concurrent constraint programming (ccp) is a well-established model for concurrency that singles out the fundamental aspects of asynchronous systems whose agents (or processes) evolve by posting and querying (partial) information in a global medium. Bisimilarity is a standard behavioural equivalence in concurrency theory. However, only recently a well-behaved notion of bisimilarity for ccp, and a ccp partition refinement algorithm for deciding the strong version of this equivalence, have been proposed. Weak bisimilarity is a central behavioural equivalence in process calculi and it is obtained from the strong case by taking into account only the actions that are observable in the system. Typically, the standard partition refinement can also be used for deciding weak bisimilarity simply by using Milner's reduction from weak to strong bisimilarity; a technique referred to as saturation. In this paper we demonstrate that, because of its involved labeled transitions, the above-mentioned saturation technique does not work for ccp. We give an alternative reduction from weak ccp bisimilarity to the strong one that allows us to use the ccp partition refinement algorithm for deciding this equivalence.
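Milner's saturation, mentioned above, can be sketched for a plain labelled transition system. The following Python fragment is illustrative only (it is not the paper's ccp algorithm): it computes weak transitions by closing the strong ones under internal τ moves. State and action names are invented.

```python
def tau_closure(states, trans):
    """Reflexive-transitive closure of the internal (tau) moves."""
    reach = {s: {s} for s in states}
    changed = True
    while changed:
        changed = False
        for (s, a, t) in trans:
            if a != 'tau':
                continue
            for src in states:
                if s in reach[src] and t not in reach[src]:
                    reach[src].add(t)
                    changed = True
    return reach

def saturate(states, trans):
    """Milner saturation: weak transitions s =a=> t, represented as strong ones."""
    reach = tau_closure(states, trans)
    weak = {(s, 'tau', t) for s in states for t in reach[s]}
    for (p, a, q) in trans:
        if a == 'tau':
            continue
        for s in states:
            if p in reach[s]:
                for t in reach[q]:
                    weak.add((s, a, t))
    return weak

states = {'p0', 'p1', 'p2'}
trans = {('p0', 'tau', 'p1'), ('p1', 'a', 'p2')}
print(sorted(saturate(states, trans)))
```

Running strong partition refinement on the saturated system then decides weak bisimilarity; the paper's point is that this recipe fails for ccp's labelled transitions and must be replaced by a different reduction.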
International Nuclear Information System (INIS)
Son, Mi Jung; Park, Jin Han; Lim, Ki Moon
2007-01-01
We introduce a new class of functions called weakly clopen functions, which includes the class of almost clopen functions due to Ekici [Ekici E. Generalization of perfectly continuous, regular set-connected and clopen functions. Acta Math Hungar 2005;107:193-206] and is included in the class of weakly continuous functions due to Levine [Levine N. A decomposition of continuity in topological spaces. Am Math Mon 1961;68:44-6]. Some characterizations and several properties concerning weak clopenness are obtained. Furthermore, relationships among weak clopenness, almost clopenness, clopenness and weak continuity are investigated.
Vaidman, L.
2017-10-01
Recent controversy regarding the meaning and usefulness of weak values is reviewed. It is argued that in spite of recent statistical arguments by Ferrie and Combes, experiments with anomalous weak values provide useful amplification techniques for precision measurements of small effects in many realistic situations. The statistical nature of weak values is questioned. Although measuring weak values requires an ensemble, it is argued that the weak value, similarly to an eigenvalue, is a property of a single pre- and post-selected quantum system. This article is part of the themed issue `Second quantum revolution: foundational questions'.
Establishing Substantial Equivalence: Transcriptomics
Baudo, María Marcela; Powers, Stephen J.; Mitchell, Rowan A. C.; Shewry, Peter R.
Regulatory authorities in Western Europe require transgenic crops to be substantially equivalent to conventionally bred forms if they are to be approved for commercial production. One way to establish substantial equivalence is to compare the transcript profiles of developing grain and other tissues of transgenic and conventionally bred lines, in order to identify any unintended effects of the transformation process. We present detailed protocols for transcriptomic comparisons of developing wheat grain and leaf material, and illustrate their use by reference to our own studies of lines transformed to express additional gluten protein genes controlled by their own endosperm-specific promoters. The results show that the transgenes present in these lines (which included those encoding marker genes) did not have any significant unpredicted effects on the expression of endogenous genes and that the transgenic plants were therefore substantially equivalent to the corresponding parental lines.
Hewitt, Paul G.
2004-01-01
Some teachers have difficulty understanding Bernoulli's principle, particularly when the principle is applied to aerodynamic lift. Some teachers favor using Newton's laws instead of Bernoulli's principle to explain the physics behind lift. Some also consider Bernoulli's principle too difficult to explain to students and avoid teaching it…
The Principle of General Tovariance
Heunen, C.; Landsman, N. P.; Spitters, B.
2008-06-01
We tentatively propose two guiding principles for the construction of theories of physics, which should be satisfied by a possible future theory of quantum gravity. These principles are inspired by those that led Einstein to his theory of general relativity, viz. his principle of general covariance and his equivalence principle, as well as by the two mysterious dogmas of Bohr's interpretation of quantum mechanics, i.e. his doctrine of classical concepts and his principle of complementarity. An appropriate mathematical language for combining these ideas is topos theory, a framework earlier proposed for physics by Isham and collaborators. Our principle of general tovariance states that any mathematical structure appearing in the laws of physics must be definable in an arbitrary topos (with natural numbers object) and must be preserved under so-called geometric morphisms. This principle identifies geometric logic as the mathematical language of physics and restricts the constructions and theorems to those valid in intuitionism: neither Aristotle's principle of the excluded third nor Zermelo's Axiom of Choice may be invoked. Subsequently, our equivalence principle states that any algebra of observables (initially defined in the topos Sets) is empirically equivalent to a commutative one in some other topos.
International Nuclear Information System (INIS)
Orlowski, S.; Schaller, K.H.
1990-01-01
The report reviews, for the Member States of the European Community, possible situations in which an equivalence concept for radioactive waste may be used, analyses the various factors involved, and suggests guidelines for the implementation of such a concept. Only safety and technical aspects are covered. Other aspects such as commercial ones are excluded. Situations where the need for an equivalence concept has been identified are processes where impurities are added as a consequence of the treatment and conditioning process, the substitution of wastes from similar waste streams due to the treatment process, and exchange of waste belonging to different waste categories. The analysis of factors involved and possible ways for equivalence evaluation, taking into account in particular the chemical, physical and radiological characteristics of the waste package, and the potential risks of the waste form, shows that no simple all-encompassing equivalence formula may be derived. Consequently, a step-by-step approach is suggested, which avoids complex evaluations in the case of simple exchanges
Equivalent Colorings with "Maple"
Cecil, David R.; Wang, Rongdong
2005-01-01
Many counting problems can be modeled as "colorings" and solved by considering symmetries and Polya's cycle index polynomial. This paper presents a "Maple 7" program link http://users.tamuk.edu/kfdrc00/ that, given Polya's cycle index polynomial, determines all possible associated colorings and their partitioning into equivalence classes. These…
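For a concrete instance of Polya-style counting, the cycle index of the cyclic group C_n yields the number of necklace colorings up to rotation. A minimal sketch, independent of the Maple program referenced above:

```python
from math import gcd

def rotation_colorings(n, k):
    """Number of k-colorings of an n-bead necklace up to rotation,
    via the cycle index of the cyclic group C_n (Burnside/Polya):
    (1/n) * sum over rotations i of k^gcd(i, n)."""
    return sum(k ** gcd(i, n) for i in range(n)) // n

print(rotation_colorings(4, 2))  # 6 distinct 2-colorings of a square under rotation
```

Substituting the number of colors for each cycle-index variable is exactly the specialization step that a program such as the one described above automates, before refining the count into explicit equivalence classes.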
Correspondences. Equivalence relations
International Nuclear Information System (INIS)
Bouligand, G.M.
1978-03-01
We comment on sections paragraph 3 'Correspondences' and paragraph 6 'Equivalence Relations' in chapter II of 'Elements de mathematique' by N. Bourbaki in order to simplify their comprehension. Paragraph 3 exposes the ideas of a graph, correspondence and map (or function), and their composition laws. We draw attention to the following points: 1) Adopting the convention of writing from left to right, the composition law for two correspondences (A,F,B), (U,G,V) of graphs F, G is written in full generality (A,F,B)o(U,G,V) = (A,FoG,V). It is not therefore assumed that the co-domain B of the first correspondence is identical to the domain U of the second (EII.13 D.7), (1970). 2) The axiom of choice consists of creating the Hilbert terms from the only relations admitting a graph. 3) The statement of the existence theorem of a function h such that f = goh, where f and g are two given maps having the same domain (of definition), is completed if h is more precisely an injection. Paragraph 6 considers the generalisation of equality: First, by 'the equivalence relation associated with a map f of a set E' identical to (x is a member of the set E and y is a member of the set E and x:f = y:f). Consequently, every relation R(x,y) which is equivalent to this is an equivalence relation in E (symmetrical, transitive, reflexive); then R admits a graph included in E x E, etc. Secondly, by means of the Hilbert term of a relation R submitted to the equivalence. In this last case, if R(x,y) is separately collectivizing in x and y, theta(x) is not the class of objects equivalent to x for R (EII.47.9), (1970). The interest of bringing together these two subjects, apart from this logical order, resides also in the fact that the theorem mentioned in 3) can be expressed by means of the equivalence relations associated with the functions f and g. The solutions of the examples proposed reveal their simplicity [fr
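The equivalence relation associated with a map f, discussed above, can be made concrete: its equivalence classes are exactly the fibers of f. A small illustrative sketch (the domain and map are invented for the demonstration):

```python
def kernel_partition(domain, f):
    """Equivalence classes of the relation x ~ y  iff  f(x) == f(y),
    i.e. the partition of the domain into fibers of f."""
    classes = {}
    for x in domain:
        classes.setdefault(f(x), []).append(x)
    return list(classes.values())

print(kernel_partition(range(6), lambda x: x % 3))  # [[0, 3], [1, 4], [2, 5]]
```

Every equivalence relation on a set arises this way from some map (for instance, from the quotient map onto its own classes), which is the sense in which paragraph 6 treats it as a generalisation of equality.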
International Nuclear Information System (INIS)
Delorme, J.
1978-01-01
The definition and general properties of weak second class currents are recalled and various detection possibilities briefly reviewed. It is shown that the existing data on nuclear beta decay can be consistently analysed in terms of a phenomenological model. Their implication on the fundamental structure of weak interactions is discussed [fr
Rehren, K. -H.
1996-01-01
Weak C* Hopf algebras can act as global symmetries in low-dimensional quantum field theories, when braid group statistics prevents group symmetries. Possibilities to construct field algebras with weak C* Hopf symmetry from a given theory of local observables are discussed.
DEFF Research Database (Denmark)
Lukas, Manuel; Hillebrand, Eric
Relations between economic variables often cannot be exploited for forecasting, suggesting that predictors are weak in the sense that estimation uncertainty is larger than the bias from ignoring the relation. In this paper, we propose a novel bagging predictor designed for such weak predictor variab...
Applicability of ambient dose equivalent H*(d) in mixed radiation fields - a critical discussion
International Nuclear Information System (INIS)
Hajek, M.; Vana, N.
2004-01-01
For purposes of routine radiation protection, it is desirable to characterize the potential irradiation of individuals in terms of a single dose equivalent quantity that would exist in a phantom approximating the human body. The phantom of choice is the ICRU sphere made of 30 cm diameter tissue-equivalent plastic with a density of 1 g/cm³ and a mass composition of 76.2 % O, 11.1 % C, 10.1 % H and 2.6 % N. Ambient dose equivalent, H*(d), was defined in ICRU report 51 as the dose equivalent that would be produced by an expanded and aligned radiation field at a depth d in the ICRU sphere. The recommended reference depths are 10 mm for strongly penetrating radiation and 0.07 mm for weakly penetrating radiation, respectively. As an operational quantity in radiation protection, H*(d) shall serve as a conservative and directly measurable estimate of protection quantities, e.g. effective dose E, which in turn are intended to give an indication of the risk associated with radiation exposure. The situation attains increased complexity in radiation environments composed of a variety of charged and uncharged particles in a broad energetic spectrum. Radiation fields of similarly complex nature are, for example, encountered onboard aircraft and in space. Dose equivalent was assessed as a function of depth in quasi tissue-equivalent spheres by means of thermoluminescent dosemeters evaluated according to the high-temperature ratio (HTR) method. The presented experiments were performed both onboard aircraft and the Russian space station Mir. As a result of interaction processes within the phantom body, the incident primary spectrum may be significantly modified with increasing depth. For the radiation field at aviation altitudes we found the maximum of dose equivalent at a depth of 60 mm, which conflicts with the 10 mm value recommended by ICRU. In contrast, for the space radiation environment the maximum dose equivalent was found at the surface of the sphere. This suggests that
Applicability of Ambient Dose Equivalent H*(d) in Mixed Radiation Fields - A Critical Discussion
International Nuclear Information System (INIS)
Vana, R.; Hajek, M.; Bergerm, T.
2004-01-01
For purposes of routine radiation protection, it is desirable to characterize the potential irradiation of individuals in terms of a single dose equivalent quantity that would exist in a phantom approximating the human body. The phantom of choice is the ICRU sphere made of 30 cm diameter tissue-equivalent plastic with a density of 1 g/cm3 and a mass composition of 76.2% O, 11.1% C, 10.1% H and 2.6% N. Ambient dose equivalent, H*(d), was defined in ICRU report 51 as the dose equivalent that would be produced by an expanded and aligned radiation field at a depth d in the ICRU sphere. The recommended reference depths are 10 mm for strongly penetrating radiation and 0.07 mm for weakly penetrating radiation, respectively. As an operational quantity in radiation protection, H*(d) shall serve as a conservative and directly measurable estimate of protection quantities, e.g. effective dose E, which in turn are intended to give an indication of the risk associated with radiation exposure. The situation attains increased complexity in radiation environments composed of a variety of charged and uncharged particles in a broad energetic spectrum. Radiation fields of similarly complex nature are, for example, encountered onboard aircraft and in space. Dose equivalent was assessed as a function of depth in quasi tissue-equivalent spheres by means of thermoluminescent dosemeters evaluated according to the high-temperature ratio (HTR) method. The presented experiments were performed both onboard aircraft and the Russian space station Mir. As a result of interaction processes within the phantom body, the incident primary spectrum may be significantly modified with increasing depth. For the radiation field at aviation altitudes we found the maximum of dose equivalent at a depth of 60 mm, which conflicts with the 10 mm value recommended by ICRU. In contrast, for the space radiation environment the maximum dose equivalent was found at the surface of the sphere. This suggests that skin
Nikkelen, A.L.J.M.; Meurs, van W.L.; Ohrn, M.A.K.
1997-01-01
We evaluated the mathematical equivalence between the two-compartment pharmacokinetic model of the neuromuscular blocking agent atracurium and a hydraulic analogue that includes pharmacodynamic principles.
Nonextensive entropies derived from Gauss' principle
International Nuclear Information System (INIS)
Wada, Tatsuaki
2011-01-01
Gauss' principle in statistical mechanics is generalized for a q-exponential distribution in nonextensive statistical mechanics. It determines the associated stochastic and statistical nonextensive entropies, which satisfy the Greene-Callen principle concerning the equivalence between microcanonical and canonical ensembles. - Highlights: → Nonextensive entropies are derived from Gauss' principle and ensemble equivalence. → Gauss' principle is generalized for a q-exponential distribution. → I have found the condition for satisfying the Greene-Callen principle. → The associated statistical q-entropy is found to be the normalized Tsallis entropy.
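The Tsallis q-entropy mentioned in the highlights has a compact standard form, S_q = (1 − Σ_i p_i^q)/(q − 1), which recovers the Shannon entropy in the limit q → 1. A minimal numerical sketch (illustrative only, not the paper's derivation):

```python
from math import log

def tsallis_entropy(p, q):
    """Tsallis q-entropy S_q = (1 - sum_i p_i^q) / (q - 1).
    The q -> 1 limit recovers the Shannon entropy -sum p_i ln p_i."""
    if abs(q - 1.0) < 1e-12:
        return -sum(pi * log(pi) for pi in p if pi > 0)
    return (1.0 - sum(pi ** q for pi in p)) / (q - 1.0)

p = [0.5, 0.5]
print(tsallis_entropy(p, 2.0))  # (1 - 0.5) / 1 = 0.5
print(tsallis_entropy(p, 1.0))  # Shannon: ln 2 ~ 0.693
```

The nonextensivity shows up in how S_q composes over independent subsystems, which is where the microcanonical/canonical equivalence condition of the paper enters.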
International Nuclear Information System (INIS)
Veltman, H.
1990-01-01
The equivalence theorem states that, at an energy E much larger than the vector-boson mass M, the leading order of the amplitude with longitudinally polarized vector bosons on mass shell is given by the amplitude in which these vector bosons are replaced by the corresponding Higgs ghosts. We prove the equivalence theorem and show its validity in every order in perturbation theory. We first derive the renormalized Ward identities by using the diagrammatic method. Only the Feynman-'t Hooft gauge is discussed. The last step of the proof includes the power-counting method evaluated in the large-Higgs-boson-mass limit, needed to estimate the leading energy behavior of the amplitudes involved. We derive expressions for the amplitudes involving longitudinally polarized vector bosons for all orders in perturbation theory. The fermion mass has not been neglected and everything is evaluated in the region m_f ∼ M ≪ E ≪ m_Higgs.
International Nuclear Information System (INIS)
Deshpande, N.G.
1980-01-01
By electro-weak theory is meant the unified field theory that describes both weak and electro-magnetic interactions. The development of a unified electro-weak theory is certainly the most dramatic achievement in theoretical physics to occur in the second half of this century. It puts weak interactions on the same sound theoretical footing as quantum electrodynamics. Many theorists have contributed to this development, which culminated in the works of Glashow, Weinberg and Salam, who were jointly awarded the 1979 Nobel Prize in physics. Some of the important ideas that contributed to this development are the theory of beta decay formulated by Fermi, parity violation suggested by Lee and Yang and incorporated into the immensely successful V-A theory of weak interactions by Sudarshan and Marshak. At the same time, ideas of gauge invariance were applied to weak interactions by Schwinger, Bludman and Glashow. Weinberg and Salam then went one step further and wrote a theory that is renormalizable, i.e., all higher order corrections are finite, no mean feat for a quantum field theory. The theory had to await the development of the quark model of hadrons for its completion. A description of the electro-weak theory is given.
International Nuclear Information System (INIS)
Walecka, J.D.
1983-01-01
Nuclei provide systems where the strong, electromagnetic, and weak interactions are all present. The current picture of the strong interactions is based on quarks and quantum chromodynamics (QCD). The symmetry structure of this theory is SU(3)_C × SU(2)_W × U(1)_W. The electroweak interactions in nuclei can be used to probe this structure. Semileptonic weak interactions are considered. The processes under consideration include beta decay, neutrino scattering and weak neutral-current interactions. The starting point in the analysis is the effective Lagrangian of the Standard Model.
New test of the equivalence principle from lunar laser ranging
Williams, J. G.; Dicke, R. H.; Bender, P. L.; Alley, C. O.; Currie, D. G.; Carter, W. E.; Eckhardt, D. H.
1976-01-01
An analysis of six years of lunar-laser-ranging data gives a zero amplitude for the Nordtvedt term in the earth-moon distance yielding the Nordtvedt parameter eta = 0.00 plus or minus 0.03. Thus, earth's gravitational self-energy contributes equally, plus or minus 3%, to its inertial mass and passive gravitational mass. At the 70% confidence level this result is only consistent with the Brans-Dicke theory for omega greater than 29. We obtain the absolute value of beta - 1 less than about 0.02 to 0.05 for five-parameter parametrized post-Newtonian theories of gravitation with energy-momentum conservation.
Ambiguity of the equivalence principle and Hawking's temperature
Hooft, G. 't
1984-01-01
There are two inequivalent ways in which the laws of physics in a gravitational field can be related to the laws in an inertial frame, when quantum mechanical effects are taken into account. This leads to an ambiguity in the derivation of Hawking's radiation temperature for a black hole: it could be
Principle of equivalence and a theory of gravitation
International Nuclear Information System (INIS)
Shelupsky, D.
1985-01-01
We examine a well-known thought experiment often used to explain why we should expect a ray of light to be bent by gravity; according to this the light bends downward in the gravitational field because this is just what an observer would see if there were no field and he were accelerating upward instead. We show that this description of the action of Newtonian gravity in a flat space-time corresponds to an old two-index symmetric tensor field theory of gravitation
Lee, T. D.
1970-07-01
While the phenomenon of beta-decay was discovered near the end of the last century, the notion that the weak interaction forms a separate field of physical forces evolved rather gradually. This became clear only after the experimental discoveries of other weak reactions such as muon-decay, muon-capture, etc., and the theoretical observation that all these reactions can be described by approximately the same coupling constant, thus giving rise to the notion of a universal weak interaction. Only then did one slowly recognize that the weak interaction force forms an independent field, perhaps on the same footing as the gravitational force, the electromagnetic force, and the strong nuclear and sub-nuclear forces.
International Nuclear Information System (INIS)
Anon.
1979-01-01
The possibility of the production of weak bosons in the proton-antiproton colliding beam facilities which are currently being developed, is discussed. The production, decay and predicted properties of these particles are described. (W.D.L.).
International Nuclear Information System (INIS)
Turlay, R.
1979-01-01
In this review of charged weak currents I shall concentrate on inclusive high energy neutrino physics. There are surely still things to learn from the low energy weak interaction, but I will not discuss it here. Furthermore, B. Tallini will discuss the hadronic final state of neutrino interactions. Since the Tokyo conference a few experimental results have appeared on charged current interactions; I will present them and will also comment on important topics which have been published during the past year. (orig.)
International Nuclear Information System (INIS)
Daumenov, T.D.; Alizarovskaya, I.M.; Khizirova, M.A.
2001-01-01
A method of generating a weakly oval electrical field from an axially symmetric field is shown. Such a system may be designed using coaxial electrodes of cylindrical form with a built-in quadrupole doublet. A distinctive feature of this weakly oval lens is that it allows both mechanical and electronic adjustment. Such a lens can be useful for eliminating near-axis astigmatism in electron-optical systems.
Equivalence, commensurability, value
DEFF Research Database (Denmark)
Albertsen, Niels
2017-01-01
In deriving value in Capital, Marx uses three commensurability arguments (CA1-3). CA1 establishes equivalence in exchange as exchangeability with the same third commodity. CA2 establishes value as a common denominator in commodities: embodied abstract labour. CA3 establishes value substance as commonality of labour: physiological labour. Tensions between these logics have permeated Marxist interpretations of value. Some have supported value as embodied labour (CA2, 3), others a monetary theory of value and value as 'pure' societal abstraction (ultimately CA1). They are all grounded in Marx.
Evaluation of 1 cm dose equivalent rate using a NaI(Tl) scintillation spectrometer
International Nuclear Information System (INIS)
Matsuda, Hideharu
1990-01-01
A method for evaluating 1 cm dose equivalent rates from a pulse height distribution obtained with a 76.2 mm φ spherical NaI(Tl) scintillation spectrometer is described. Weak leakage radiation from nuclear facilities was also measured, and the dose equivalent conversion factor and effective energy of the leakage radiation were evaluated from the 1 cm dose equivalent rate and exposure rate. (author)
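Evaluating a dose equivalent rate from a pulse-height distribution amounts to folding the per-channel counts with energy-dependent conversion factors (the spectrum-to-dose G-function approach used with NaI(Tl) spectrometers). A schematic sketch; the channel counts and conversion factors below are purely illustrative, not ICRU or calibration data:

```python
def dose_equivalent_rate(counts, conv, live_time_s):
    """Fold a pulse-height distribution with per-channel dose-conversion
    factors: H = sum_i n_i * g_i / t.
    The g_i values used below are hypothetical, for illustration only."""
    total = sum(n * g for n, g in zip(counts, conv))
    return total / live_time_s

counts = [1200, 800, 150]           # counts in three energy channels
conv = [1.0e-5, 4.0e-5, 9.0e-5]     # hypothetical uSv per count, per channel
print(dose_equivalent_rate(counts, conv, live_time_s=600.0))  # uSv/s
```

In practice the conversion factors come from a calibration of the spectrometer's response, and a real spectrum has hundreds of channels; the folding step itself is exactly this weighted sum.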
Introduction to unification of electromagnetic and weak interactions
International Nuclear Information System (INIS)
Martin, F.
1980-01-01
After reviewing the present status of weak interaction phenomenology we discuss the basic principles of gauge theories. Then we show how Higgs mechanism can give massive quanta of interaction. The so-called 'Weinberg-Salam' model, which unifies electromagnetic and weak interactions, is described. We conclude with a few words on unification with strong interactions and gravity [fr
Waste Determination Equivalency - 12172
Energy Technology Data Exchange (ETDEWEB)
Freeman, Rebecca D. [Savannah River Remediation (United States)
2012-07-01
by the Secretary of Energy in January of 2006 based on proposed processing techniques with the expectation that it could be revised as new processing capabilities became viable. Once signed, however, it became evident that any changes would require lengthy review and another determination signed by the Secretary of Energy. With the maturation of additional salt removal technologies and the extension of the SWPF start-up date, it becomes necessary to define 'equivalency' to the processes laid out in the original determination. For the purposes of SRS, any waste not processed through Interim Salt Processing must be processed through SWPF or an equivalent process, and therefore a clear statement of the requirements for a process to be equivalent to SWPF becomes necessary. (authors)
International Nuclear Information System (INIS)
Roberts, B.L.; Booth, E.C.; Gall, K.P.; McIntyre, E.K.; Miller, J.P.; Whitehouse, D.A.; Bassalleck, B.; Hall, J.R.; Larson, K.D.; Wolfe, D.M.; Fickinger, W.J.; Robinson, D.K.; Hallin, A.L.; Hasinoff, M.D.; Measday, D.F.; Noble, A.J.; Waltham, C.E.; Hessey, N.P.; Lowe, J.; Horvath, D.; Salomon, M.
1990-01-01
New measurements of the Σ⁺ and Λ weak radiative decays are discussed. The hyperons were produced at rest by the reaction K⁻p → Yπ, where Y = Σ⁺ or Λ. The monoenergetic pion was used to tag the hyperon production, and the branching ratios were determined from the relative amplitudes of Σ⁺ → pγ to Σ⁺ → pπ⁰ and Λ → nγ to Λ → nπ⁰. The photons from weak radiative decays and from π⁰ decays were detected with modular NaI arrays. (orig.)
Establishing Substantial Equivalence: Proteomics
Lovegrove, Alison; Salt, Louise; Shewry, Peter R.
Wheat is a major crop in world agriculture and is consumed after processing into a range of food products. It is therefore of great importance to determine the consequences (intended and unintended) of transgenesis in wheat and whether genetically modified lines are substantially equivalent to those produced by conventional plant breeding. Proteomic analysis is one of several approaches which can be used to address these questions. Two-dimensional PAGE (2D PAGE) remains the most widely available method for proteomic analysis, but is notoriously difficult to reproduce between laboratories. We therefore describe methods which have been developed as standard operating procedures in our laboratory to ensure the reproducibility of proteomic analyses of wheat using 2D PAGE analysis of grain proteins.
Tree Resolution Proofs of the Weak Pigeon-Hole Principle
DEFF Research Database (Denmark)
Dantchev, Stefan Stajanov; Riis, Søren
2001-01-01
We prove that any optimal tree resolution proof of PHP^m_n is of size 2^Θ(n log n), independently of m, even if it is infinite. So far, only a 2^Ω(n) lower bound has been known in the general case. We also show that any, not necessarily optimal, regular tree resolution proof of PHP^m_n is bound...
Equivalent and Alternative Forms for BF Gravity with Immirzi Parameter
Directory of Open Access Journals (Sweden)
Merced Montesinos
2011-11-01
Full Text Available A detailed analysis of the BF formulation for general relativity given by Capovilla, Montesinos, Prieto, and Rojas is performed. The action principle of this formulation is written in an equivalent form by performing a transformation of the fields on which the action depends functionally. The transformed action principle involves two BF terms and the two Lorentz invariants that appear generically in the original action principle. As an application of this formalism, the action principle used by Engle, Pereira, and Rovelli in their spin foam model for gravity is recovered, and the coupling of the cosmological constant in such a formulation is obtained.
Startpoints via weak contractions
Agyingi, Collins Amburo; Gaba, Yaé Ulrich
2018-01-01
Startpoints (resp. endpoints) can be defined as "oriented fixed points". They arise naturally in the study of fixed points for multi-valued maps defined on quasi-metric spaces. In this article, we give a new result in the startpoint theory for quasi-pseudometric spaces. The result we present is obtained via a generalized weakly contractive set-valued map.
Hadi, Inaam M. A.; Al-aeashi, Shukur N.
2018-05-01
Let R be a ring with identity and M a unitary right R-module. Here we introduce the class of weakly coretractable modules. Some basic properties are investigated and some relationships between these modules and other related ones are presented.
Deep inelastic inclusive weak and electromagnetic interactions
International Nuclear Information System (INIS)
Adler, S.L.
1976-01-01
The theory of deep inelastic inclusive interactions is reviewed, emphasizing applications to electromagnetic and weak charged current processes. The following reactions are considered: e + N → e + X, ν + N → μ⁻ + X, ν̄ + N → μ⁺ + X, where X denotes a summation over all final state hadrons and the ν's are muon neutrinos. After a discussion of scaling, the quark-parton model is invoked to explain the principal experimental features of deep inelastic inclusive reactions.
Chen, Huaizhen; Pan, Xinpeng; Ji, Yuxin; Zhang, Guangzhi
2017-08-01
A system of aligned vertical fractures and fine horizontal shale layers combines to form an equivalent orthorhombic medium. Weak anisotropy parameters and fracture weaknesses play an important role in the description of orthorhombic anisotropy (OA). We propose a novel approach that uses seismic reflection amplitudes to estimate weak anisotropy parameters and fracture weaknesses from observed seismic data, based on azimuthal elastic impedance (EI). We first express the perturbation in the stiffness matrix in terms of weak anisotropy parameters and fracture weaknesses, and using this perturbation and the scattering function, we derive the PP-wave reflection coefficient and azimuthal EI for the case of an interface separating two OA media. We then demonstrate an approach that first uses a model-constrained damped least-squares algorithm to estimate azimuthal EI from partial incidence-phase-angle-stacked seismic reflection data at different azimuths, and then extracts weak anisotropy parameters and fracture weaknesses from the estimated azimuthal EI using a Bayesian Markov chain Monte Carlo inversion method. In addition, a new procedure for constructing a rock physics effective model is presented to estimate weak anisotropy parameters and fracture weaknesses from well log interpretation results (minerals and their volumes, porosity, saturation, fracture density, etc.). Tests on synthetic and real data indicate that the unknown parameters, including elastic properties (P- and S-wave impedances and density), weak anisotropy parameters and fracture weaknesses, can be estimated stably when the seismic data contain moderate noise, and that our approach yields a reasonable estimate of anisotropy in a fractured shale reservoir.
New recommendations for dose equivalent
International Nuclear Information System (INIS)
Bengtsson, G.
1985-01-01
In its report 39, the International Commission on Radiation Units and Measurements (ICRU), has defined four new quantities for the determination of dose equivalents from external sources: the ambient dose equivalent, the directional dose equivalent, the individual dose equivalent, penetrating and the individual dose equivalent, superficial. The rationale behind these concepts and their practical application are discussed. Reference is made to numerical values of these quantities which will be the subject of a coming publication from the International Commission on Radiological Protection, ICRP. (Author)
System equivalent model mixing
Klaassen, Steven W. B.; van der Seijs, Maarten V.; de Klerk, Dennis
2018-05-01
This paper introduces SEMM: a method based on Frequency Based Substructuring (FBS) techniques that enables the construction of hybrid dynamic models. With System Equivalent Model Mixing (SEMM) frequency based models, either of numerical or experimental nature, can be mixed to form a hybrid model. This model follows the dynamic behaviour of a predefined weighted master model. A large variety of applications can be thought of, such as the DoF-space expansion of relatively small experimental models using numerical models, or the blending of different models in the frequency spectrum. SEMM is outlined, both mathematically and conceptually, based on a notation commonly used in FBS. A critical physical interpretation of the theory is provided next, along with a comparison to similar techniques; namely DoF expansion techniques. SEMM's concept is further illustrated by means of a numerical example. It will become apparent that the basic method of SEMM has some shortcomings which warrant a few extensions to the method. One of the main applications is tested in a practical case, performed on a validated benchmark structure; it will emphasize the practicality of the method.
Experimental method research on neutron equal dose-equivalent detection
International Nuclear Information System (INIS)
Ji Changsong
1995-10-01
The design principles of neutron dose-equivalent meters for neutron biological equi-effect detection are studied. Two traditional principles, the 'absorption net principle' and the 'multi-detector principle', are discussed, and on that basis a new theoretical principle for neutron biological equi-effect detection, the 'absorption stick principle', is put forward, aiming both to increase the neutron sensitivity of this type of meter and to overcome the shortcomings of the two traditional methods. In accordance with this new principle a brand-new model of neutron dose-equivalent meter, BH3105, has been developed. Its neutron sensitivity reaches 10 cps/(μSv·h⁻¹), 18∼40 times higher than that of comparable meters available today at home and abroad (0.23∼0.56 cps/(μSv·h⁻¹)), and its specifications reach or surpass those of meters of the same kind. The new theoretical principle of neutron biological equi-effect detection, the 'absorption stick principle', is thereby shown by experiment to be scientifically sound, advanced and useful. (3 refs., 3 figs., 2 tabs.)
Gravitational leptogenesis, C, CP and strong equivalence
International Nuclear Information System (INIS)
McDonald, Jamie I.; Shore, Graham M.
2015-01-01
The origin of matter-antimatter asymmetry is one of the most important outstanding problems at the interface of particle physics and cosmology. Gravitational leptogenesis (baryogenesis) provides a possible mechanism through explicit couplings of spacetime curvature to appropriate lepton (or baryon) currents. In this paper, the idea that these strong equivalence principle violating interactions could be generated automatically through quantum loop effects in curved spacetime is explored, focusing on the realisation of the discrete symmetries C, CP and CPT which must be broken to induce matter-antimatter asymmetry. The related issue of quantum corrections to the dispersion relation for neutrino propagation in curved spacetime is considered within a fully covariant framework.
Equivalent Circuit Modeling of Hysteresis Motors
Energy Technology Data Exchange (ETDEWEB)
Nitao, J J; Scharlemann, E T; Kirkendall, B A
2009-08-31
We performed a literature review and found that many equivalent circuit models of hysteresis motors in use today are incorrect. The model by Miyairi and Kataoka (1965) is the correct one. We extended the model by transforming it to quadrature coordinates, amenable to circuit or digital simulation. 'Hunting' is an oscillatory phenomenon often observed in hysteresis motors. While several works have attempted to model the phenomenon with some partial success, we present a new complete model that predicts hunting from first principles.
Gravitational leptogenesis, C, CP and strong equivalence
Energy Technology Data Exchange (ETDEWEB)
McDonald, Jamie I.; Shore, Graham M. [Department of Physics, Swansea University,Swansea, SA2 8PP (United Kingdom)
2015-02-12
The origin of matter-antimatter asymmetry is one of the most important outstanding problems at the interface of particle physics and cosmology. Gravitational leptogenesis (baryogenesis) provides a possible mechanism through explicit couplings of spacetime curvature to appropriate lepton (or baryon) currents. In this paper, the idea that these strong equivalence principle violating interactions could be generated automatically through quantum loop effects in curved spacetime is explored, focusing on the realisation of the discrete symmetries C, CP and CPT which must be broken to induce matter-antimatter asymmetry. The related issue of quantum corrections to the dispersion relation for neutrino propagation in curved spacetime is considered within a fully covariant framework.
A Community Standard: Equivalency of Healthcare in Australian Immigration Detention.
Essex, Ryan
2017-08-01
The Australian government has long maintained that the standard of healthcare provided in its immigration detention centres is broadly comparable with health services available within the Australian community. Drawing on the literature from prison healthcare, this article examines (1) whether the principle of equivalency is being applied in Australian immigration detention and (2) whether this standard of care is achievable given Australia's current policies. This article argues that the principle of equivalency is not being applied and that this standard of health and healthcare will remain unachievable in Australian immigration detention without significant reform. Alternate approaches to addressing the well documented issues related to health and healthcare in Australian immigration detention are discussed.
Rakipi, Albert
2006-01-01
Cataloged from PDF version of article. Although weak/failing states have often been described as the single most important problem for the international order since the end of the Cold War (F. Fukuyama 2004:92), several dimensions of this phenomenon still remain unexplored. While this phenomenon has been present in international politics even earlier, only the post-Cold War period accentuated its relationship with security issues. Following the Cold War's "peacef...
Energy Technology Data Exchange (ETDEWEB)
Suzuki, M.
1988-04-01
The dynamical mechanism of composite W and Z is studied in a 1/N field theory model with four-fermion interactions in which global weak SU(2) symmetry is broken explicitly by the electromagnetic interaction. Issues involved in such a model are discussed in detail. Deviations from gauge coupling due to compositeness and higher-order loop corrections are examined to show that this class of models is consistent not only theoretically but also experimentally.
Nee, Sean
2018-05-01
Survival analysis in biology and reliability theory in engineering concern the dynamical functioning of bio/electro/mechanical units. Here we incorporate effects of chaotic dynamics into the classical theory. Dynamical systems theory now distinguishes strong and weak chaos. Strong chaos generates Type II survivorship curves entirely as a result of the internal operation of the system, without any age-independent, external, random forces of mortality. Weak chaos exhibits (a) intermittency and (b) Type III survivorship, defined as a decreasing per capita mortality rate: engineering explicitly defines this pattern of decreasing hazard as 'infant mortality'. Weak chaos generates two phenomena from the normal functioning of the same system. First, infant mortality (sensu engineering) without any external explanatory factors, such as manufacturing defects, which is followed by increased average longevity of survivors. Second, sudden failure of units during their normal period of operation, before the onset of age-dependent mortality arising from senescence. The relevance of these phenomena encompasses, for example: no-fault-found failure of electronic devices; high rates of human early spontaneous miscarriage/abortion; runaway pacemakers; sudden cardiac death in young adults; bipolar disorder; and epilepsy.
The Application of Equivalence Theory to Advertising Translation
Institute of Scientific and Technical Information of China (English)
张颖
2017-01-01
Through analyzing equivalence theory, the author tries to find a solution to the problems arising in the process of advertising translation. These problems include cultural diversity, language diversity and the special requirements of advertisement. The author argues that Nida's functional equivalence is one of the most appropriate theories for dealing with these problems. In this paper, the author introduces the principles of advertising translation and cultural divergences in advertising translation, and then gives some advertising translation practices to explain and analyze how to create good advertising translation by using functional equivalence. At last, the author introduces some strategies in advertising translation.
Logically automorphically equivalent knowledge bases
Aladova, Elena; Plotkin, Tatjana
2017-01-01
Knowledge base theory provides an important example of a field where applications of universal algebra and algebraic logic look very natural, and where their interaction with practical problems arising in computer science can be very productive. In this paper we study the equivalence problem for knowledge bases. Our interest is to find out how informational equivalence is related to the logical description of knowledge. Studying various equivalences of knowledge bases allows us to compare d...
Testing statistical hypotheses of equivalence
Wellek, Stefan
2010-01-01
Equivalence testing has grown significantly in importance over the last two decades, especially as its relevance to a variety of applications has become understood. Yet published work on the general methodology remains scattered in specialists' journals, and for the most part, it focuses on the relatively narrow topic of bioequivalence assessment. With a far broader perspective, Testing Statistical Hypotheses of Equivalence provides the first comprehensive treatment of statistical equivalence testing. The author addresses a spectrum of specific, two-sided equivalence testing problems, from the
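The core device of the equivalence testing the book surveys is the two one-sided tests (TOST) procedure: equivalence to a target is declared only when both one-sided null hypotheses, "mean at or below the lower margin" and "mean at or above the upper margin", are rejected. A minimal sketch (the margins, sample, and normal approximation to the t distribution are illustrative assumptions, not taken from the book):

```python
import math
import random

def tost_one_sample(x, low, high, alpha=0.05):
    """Two one-sided tests (TOST): declare the mean of x equivalent to the
    target when it lies inside (low, high). A normal approximation to the
    t distribution is used, adequate for the large sample below."""
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x) / (n - 1)
    se = math.sqrt(var / n)
    z_low = (mean - low) / se                         # H0a: mean <= low
    z_high = (mean - high) / se                       # H0b: mean >= high
    p_low = 0.5 * math.erfc(z_low / math.sqrt(2))     # P(Z >= z_low)
    p_high = 0.5 * math.erfc(-z_high / math.sqrt(2))  # P(Z <= z_high)
    p = max(p_low, p_high)   # both one-sided tests must reject
    return p, p < alpha

random.seed(1)
x = [random.gauss(0.0, 1.0) for _ in range(200)]  # centred well inside margins
p, equivalent = tost_one_sample(x, low=-0.5, high=0.5)
print(p, equivalent)
```

Note the asymmetry with ordinary significance testing: a non-significant difference test is not evidence of equivalence, whereas TOST controls the error of falsely declaring equivalence at level alpha.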
Quantum Groups, Property (T), and Weak Mixing
Brannan, Michael; Kerr, David
2018-06-01
For second countable discrete quantum groups, and more generally second countable locally compact quantum groups with trivial scaling group, we show that property (T) is equivalent to every weakly mixing unitary representation not having almost invariant vectors. This is a generalization of a theorem of Bekka and Valette from the group setting and was previously established in the case of low dual by Daws, Skalski, and Viselter. Our approach uses spectral techniques and is completely different from those of Bekka-Valette and Daws-Skalski-Viselter. By a separate argument we furthermore extend the result to second countable nonunimodular locally compact quantum groups, which are shown in particular not to have property (T), generalizing a theorem of Fima from the discrete setting. We also obtain quantum group versions of characterizations of property (T) of Kerr and Pichot in terms of the Baire category theory of weak mixing representations and of Connes and Weiss in terms of the prevalence of strongly ergodic actions.
International Nuclear Information System (INIS)
Sugarbaker, E.
1995-01-01
I review available techniques for extraction of weak interaction rates in nuclei. The case for using hadron charge exchange reactions to estimate such rates is presented and contrasted with alternate methods. Limitations of the (p,n) reaction as a probe of Gamow-Teller strength are considered. A review of recent comparisons between beta-decay studies and (p,n) is made, leading to cautious optimism regarding the final usefulness of (p,n)-derived GT strengths to the field of astrophysics. copyright 1995 American Institute of Physics
A Universe without Weak Interactions
Energy Technology Data Exchange (ETDEWEB)
Harnik, Roni; Kribs, Graham D.; Perez, Gilad
2006-04-07
A universe without weak interactions is constructed that undergoes big-bang nucleosynthesis, matter domination, structure formation, and star formation. The stars in this universe are able to burn for billions of years, synthesize elements up to iron, and undergo supernova explosions, dispersing heavy elements into the interstellar medium. These definitive claims are supported by a detailed analysis where this hypothetical ''Weakless Universe'' is matched to our Universe by simultaneously adjusting Standard Model and cosmological parameters. For instance, chemistry and nuclear physics are essentially unchanged. The apparent habitability of the Weakless Universe suggests that the anthropic principle does not determine the scale of electroweak breaking, or even require that it be smaller than the Planck scale, so long as technically natural parameters may be suitably adjusted. Whether the multi-parameter adjustment is realized or probable is dependent on the ultraviolet completion, such as the string landscape. Considering a similar analysis for the cosmological constant, however, we argue that no adjustments of other parameters are able to allow the cosmological constant to raise up even remotely close to the Planck scale while obtaining macroscopic structure. The fine-tuning problems associated with the electroweak breaking scale and the cosmological constant therefore appear to be qualitatively different from the perspective of obtaining a habitable universe.
A Universe without Weak Interactions
International Nuclear Information System (INIS)
Harnik, R
2006-01-01
A universe without weak interactions is constructed that undergoes big-bang nucleosynthesis, matter domination, structure formation, and star formation. The stars in this universe are able to burn for billions of years, synthesize elements up to iron, and undergo supernova explosions, dispersing heavy elements into the interstellar medium. These definitive claims are supported by a detailed analysis where this hypothetical ''Weakless Universe'' is matched to our Universe by simultaneously adjusting Standard Model and cosmological parameters. For instance, chemistry and nuclear physics are essentially unchanged. The apparent habitability of the Weakless Universe suggests that the anthropic principle does not determine the scale of electroweak breaking, or even require that it be smaller than the Planck scale, so long as technically natural parameters may be suitably adjusted. Whether the multi-parameter adjustment is realized or probable is dependent on the ultraviolet completion, such as the string landscape. Considering a similar analysis for the cosmological constant, however, we argue that no adjustments of other parameters are able to allow the cosmological constant to raise up even remotely close to the Planck scale while obtaining macroscopic structure. The fine-tuning problems associated with the electroweak breaking scale and the cosmological constant therefore appear to be qualitatively different from the perspective of obtaining a habitable universe
A universe without weak interactions
International Nuclear Information System (INIS)
Harnik, Roni; Kribs, Graham D.; Perez, Gilad
2006-01-01
A universe without weak interactions is constructed that undergoes big-bang nucleosynthesis, matter domination, structure formation, and star formation. The stars in this universe are able to burn for billions of years, synthesize elements up to iron, and undergo supernova explosions, dispersing heavy elements into the interstellar medium. These definitive claims are supported by a detailed analysis where this hypothetical ''weakless universe'' is matched to our Universe by simultaneously adjusting standard model and cosmological parameters. For instance, chemistry and nuclear physics are essentially unchanged. The apparent habitability of the weakless universe suggests that the anthropic principle does not determine the scale of electroweak breaking, or even require that it be smaller than the Planck scale, so long as technically natural parameters may be suitably adjusted. Whether the multiparameter adjustment is realized or probable is dependent on the ultraviolet completion, such as the string landscape. Considering a similar analysis for the cosmological constant, however, we argue that no adjustments of other parameters are able to allow the cosmological constant to raise up even remotely close to the Planck scale while obtaining macroscopic structure. The fine-tuning problems associated with the electroweak breaking scale and the cosmological constant therefore appear to be qualitatively different from the perspective of obtaining a habitable universe
Weak values in a classical theory with an epistemic restriction
International Nuclear Information System (INIS)
Karanjai, Angela; Cavalcanti, Eric G; Bartlett, Stephen D; Rudolph, Terry
2015-01-01
Weak measurement of a quantum system followed by postselection based on a subsequent strong measurement gives rise to a quantity called the weak value: a complex number for which the interpretation has long been debated. We analyse the procedure of weak measurement and postselection, and the interpretation of the associated weak value, using a theory of classical mechanics supplemented by an epistemic restriction that is known to be operationally equivalent to a subtheory of quantum mechanics. Both the real and imaginary components of the weak value appear as phase space displacements in the postselected expectation values of the measurement device's position and momentum distributions, and we recover the same displacements as in the quantum case by studying the corresponding evolution in our theory of classical mechanics with an epistemic restriction. By using this epistemically restricted theory, we gain insight into the appearance of the weak value as a result of the statistical effects of postselection, and this provides us with an operational interpretation of the weak value, both its real and imaginary parts. We find that the imaginary part of the weak value is a measure of how much postselection biases the mean phase space distribution for a given amount of measurement disturbance. All such biases proportional to the imaginary part of the weak value vanish in the limit where disturbance due to measurement goes to zero. Our analysis also offers intuitive insight into how measurement disturbance can be minimized and the limits of weak measurement. (paper)
Planck Constant Determination from Power Equivalence
Newell, David B.
2000-04-01
Equating mechanical to electrical power links the kilogram, the meter, and the second to the practical realizations of the ohm and the volt derived from the quantum Hall and the Josephson effects, yielding an SI determination of the Planck constant. The NIST watt balance uses this power equivalence principle, and in 1998 measured the Planck constant with a combined relative standard uncertainty of 8.7 × 10⁻⁸, the most accurate determination to date. The next generation of the NIST watt balance is now being assembled. Modifications to the experimental facilities have been made to reduce the uncertainty components from vibrations and electromagnetic interference. A vacuum chamber has been installed to reduce the uncertainty components associated with performing the experiment in air. Most of the apparatus is in place and diagnostic testing of the balance should begin this year. Once a combined relative standard uncertainty of one part in 10⁸ has been reached, the power equivalence principle can be used to monitor the possible drift in the artifact mass standard, the kilogram, and provide an accurate alternative definition of mass in terms of fundamental constants. *Electricity Division, Electronics and Electrical Engineering Laboratory, Technology Administration, U.S. Department of Commerce. Contribution of the National Institute of Standards and Technology, not subject to copyright in the U.S.
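The link between electrical power and the Planck constant rests on two quantum-electrical relations: the Josephson constant K_J = 2e/h and the von Klitzing constant R_K = h/e², which combine to give K_J²·R_K = 4/h. A minimal numerical sketch, using the 1990 conventional values of the constants (illustrative, not the values of the watt-balance measurement described above):

```python
# Conventional 1990 electrical constants (illustrative values).
K_J = 483597.9e9   # Josephson constant K_J = 2e/h, in Hz/V
R_K = 25812.807    # von Klitzing constant R_K = h/e**2, in ohm

# (2e/h)**2 * (h/e**2) = 4/h, so the Planck constant follows directly:
h = 4.0 / (K_J**2 * R_K)
print(h)  # ~6.626e-34 J s
```

This is why a watt balance, which measures mechanical power in terms of Josephson voltages and quantum Hall resistances, amounts to a measurement of h in SI units.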
Weakly Supervised Dictionary Learning
You, Zeyu; Raich, Raviv; Fern, Xiaoli Z.; Kim, Jinsub
2018-05-01
We present a probabilistic modeling and inference framework for discriminative analysis dictionary learning under a weak supervision setting. Dictionary learning approaches have been widely used for tasks such as low-level signal denoising and restoration as well as high-level classification tasks, which can be applied to audio and image analysis. Synthesis dictionary learning aims at jointly learning a dictionary and corresponding sparse coefficients to provide accurate data representation. This approach is useful for denoising and signal restoration, but may lead to sub-optimal classification performance. By contrast, analysis dictionary learning provides a transform that maps data to a sparse discriminative representation suitable for classification. We consider the problem of analysis dictionary learning for time-series data under a weak supervision setting in which signals are assigned with a global label instead of an instantaneous label signal. We propose a discriminative probabilistic model that incorporates both label information and sparsity constraints on the underlying latent instantaneous label signal using cardinality control. We present the expectation maximization (EM) procedure for maximum likelihood estimation (MLE) of the proposed model. To facilitate a computationally efficient E-step, we propose both a chain and a novel tree graph reformulation of the graphical model. The performance of the proposed model is demonstrated on both synthetic and real-world data.
Moiseiwitsch, B L
2004-01-01
This graduate-level text's primary objective is to demonstrate the expression of the equations of the various branches of mathematical physics in the succinct and elegant form of variational principles (and thereby illuminate their interrelationship). Its related intentions are to show how variational principles may be employed to determine the discrete eigenvalues for stationary state problems and to illustrate how to find the values of quantities (such as the phase shifts) that arise in the theory of scattering. Chapter-by-chapter treatment consists of analytical dynamics; optics, wave mecha
SAPONIFICATION EQUIVALENT OF DASAMULA TAILA
Saxena, R. B.
1994-01-01
Saponification equivalent values of Dasamula taila are very useful for technical and analytical work. They give the mean molecular weight of the glycerides and acids present in Dasamula taila. Saponification equivalent values of Dasamula taila are reported in different packings.
Saponification equivalent of dasamula taila.
Saxena, R B
1994-07-01
Saponification equivalent values of Dasamula taila are very useful for technical and analytical work. They give the mean molecular weight of the glycerides and acids present in Dasamula taila. Saponification equivalent values of Dasamula taila are reported in different packings.
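The saponification equivalent is conventionally obtained from the saponification value (mg of KOH consumed per g of oil) via the molar mass of KOH, 56.1 g/mol. A small sketch of that conversion; the numeric saponification value below is hypothetical, since the abstracts report no data:

```python
# Hypothetical saponification value for illustration (mg KOH per g of oil).
sap_value = 190.0

# Molar mass of KOH is 56.1 g/mol = 56100 mg/mol, so the saponification
# equivalent (grams of oil per mole of KOH) is:
sap_equivalent = 56100.0 / sap_value
print(sap_equivalent)  # ≈ 295.3
```

For a triglyceride, three moles of KOH saponify one mole of fat, so the mean molecular weight of the glycerides is roughly three times this equivalent.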
International Nuclear Information System (INIS)
Lin Guanxin
1991-01-01
A study of how the lead equivalent of lead glass changes with the energy of X rays or γ rays is described. The reason for this change is discussed and a new testing method for lead equivalent is suggested
Directory of Open Access Journals (Sweden)
V. A. Grinenko
2011-06-01
Full Text Available The material in this article is arranged so that the reader can form a complete picture of the concept of "safety", its intrinsic characteristics and the possibilities for its formalization. Principles and possible safety strategies are considered. The article is intended for specialists dealing with safety problems.
Energy Technology Data Exchange (ETDEWEB)
Levine, R.B.; Stassi, J.; Karasick, D.
1985-04-01
Anterior displacement of the tibial tubercle is a well-accepted orthopedic procedure in the treatment of certain patellofemoral disorders. The radiologic appearance of surgical procedures utilizing the Maquet principle has not been described in the radiologic literature. Familiarity with the physiologic and biomechanical basis for the procedure and its postoperative appearance is necessary for appropriate roentgenographic evaluation and the radiographic recognition of complications.
International Nuclear Information System (INIS)
Wesson, P.S.
1979-01-01
The Cosmological Principle states: the universe looks the same to all observers regardless of where they are located. To most astronomers today the Cosmological Principle means the universe looks the same to all observers because the density of the galaxies is the same in all places. A new Cosmological Principle is proposed, called the Dimensional Cosmological Principle. It uses the properties of matter in the universe: density (ρ), pressure (p), and mass (m) within some region of space of length (l). The laws of physics require incorporation of constants for gravity (G) and the speed of light (c). After combining the six parameters into dimensionless numbers, the best choices are: 8πGl²ρ/c², 8πGl²p/c⁴, and 2Gm/c²l (the Schwarzschild factor). The Dimensional Cosmological Principle came about because old ideas conflicted with the rapidly-growing body of observational evidence indicating that galaxies in the universe have a clumpy rather than uniform distribution.
Bisimulation Meet PCTL Equivalences for Probabilistic Automata (Journal Version)
DEFF Research Database (Denmark)
Song, Lei; Zhang, Lijun; Godskesen, Jens Christian
2013-01-01
Probabilistic automata (PA) have been successfully applied in the formal verification of concurrent and stochastic systems. Efficient model checking algorithms have been studied, where the most often used logics for expressing properties are based on PCTL and its extension PCTL*. Various behavioral equivalences have been proposed for PAs, as a powerful tool for abstraction and compositional minimization of PAs. Unfortunately, the behavioral equivalences are well known to be strictly stronger than the logical equivalences induced by PCTL or PCTL*. This paper introduces novel notions of strong bisimulation relations, which characterize PCTL and PCTL* exactly. We also extend weak bisimulations, characterizing PCTL and PCTL* without the next operator, respectively. Thus, our paper bridges the gap between logical and behavioral equivalences in this setting.
Progress in classical and quantum variational principles
International Nuclear Information System (INIS)
Gray, C G; Karl, G; Novikov, V A
2004-01-01
We review the development and practical uses of a generalized Maupertuis least action principle in classical mechanics in which the action is varied under the constraint of fixed mean energy for the trial trajectory. The original Maupertuis (Euler-Lagrange) principle constrains the energy at every point along the trajectory. The generalized Maupertuis principle is equivalent to Hamilton's principle. Reciprocal principles are also derived for both the generalized Maupertuis and the Hamilton principles. The reciprocal Maupertuis principle is the classical limit of Schrödinger's variational principle of wave mechanics and is also very useful for solving practical problems in both classical and semiclassical mechanics, in complete analogy with the quantum Rayleigh-Ritz method. Classical, semiclassical and quantum variational calculations are carried out for a number of systems, and the results are compared. Pedagogical as well as research problems are used as examples, which include nonconservative as well as relativistic systems. '... the most beautiful and important discovery of Mechanics.' Lagrange to Maupertuis (November 1756)
Classical and Weak Solutions for Two Models in Mathematical Finance
Gyulov, Tihomir B.; Valkov, Radoslav L.
2011-12-01
We study two mathematical models arising in financial mathematics. These models are one-dimensional analogues of the famous Black-Scholes equation on a finite interval. The main difficulty is the degeneracy at both ends of the space interval. First, classical solutions are studied, and positivity and convexity properties of the solutions are discussed. A variational formulation in weighted Sobolev spaces is introduced, and existence and uniqueness of the weak solution are proved. A maximum principle for the weak solution is discussed.
What is correct: equivalent dose or dose equivalent
International Nuclear Information System (INIS)
Franic, Z.
1994-01-01
In the Croatian language, some physical quantities in radiation protection dosimetry do not have precise names. Consequently, in practice either English terms or mathematical formulas are used. The situation is made worse by the fact that only a limited number of textbooks, reference books and other papers are available in Croatian. This paper compares the concept of ''dose equivalent'' as outlined in International Commission on Radiological Protection (ICRP) recommendation No. 26 with the newer, conceptually different concept of ''equivalent dose'' introduced in ICRP 60. It was found that Croatian terminology is neither uniform nor precise. Under the influence of the Russian and Serbian languages, the term ''equivalent dose'' was often used for ''dose equivalent'', which was not justified even from the point of view of the ICRP 26 recommendations. Unfortunately, even now the legal unit in Croatia is still the ''dose equivalent'' as defined in ICRP 26, but the term used for it is ''equivalent dose''. Therefore, a modified set of the quantities introduced in ICRP 60 should be incorporated into Croatian legislation as soon as possible
On Dual Phase-Space Relativity, the Machian Principle and Modified Newtonian Dynamics
Castro, C
2004-01-01
We investigate the consequences of Mach's principle of inertia within the context of Dual Phase Space Relativity, which is compatible with the Eddington-Dirac large-number coincidences and may provide a physical reason behind the observed anomalous Pioneer acceleration and a solution to the riddle of the cosmological constant problem (Nottale). The cosmological implications of Non-Archimedean Geometry, obtained by assigning an upper impassible scale in Nature, and the cosmological variations of the fundamental constants are also discussed. We study the corrections to Newtonian dynamics resulting from Dual Phase Space Relativity by analyzing the behavior of a test particle in a modified Schwarzschild geometry (due to the effects of the maximal acceleration) that leads, in the weak-field approximation, to essential modifications of the Newtonian dynamics and to violations of the equivalence principle. Finally we follow another avenue and find modified Newtonian dynamics induced by the Yang's Noncommut...
PLASMA EMISSION BY WEAK TURBULENCE PROCESSES
Energy Technology Data Exchange (ETDEWEB)
Ziebell, L. F.; Gaelzer, R. [Instituto de Física, UFRGS, Porto Alegre, RS (Brazil); Yoon, P. H. [Institute for Physical Science and Technology, University of Maryland, College Park, MD (United States); Pavan, J., E-mail: luiz.ziebell@ufrgs.br, E-mail: rudi.gaelzer@ufrgs.br, E-mail: yoonp@umd.edu, E-mail: joel.pavan@ufpel.edu.br [Instituto de Física e Matemática, UFPel, Pelotas, RS (Brazil)
2014-11-10
The plasma emission is the radiation mechanism responsible for solar type II and type III radio bursts. The first theory of plasma emission was put forth in the 1950s, but the rigorous demonstration of the process based upon first principles had been lacking. The present Letter reports the first complete numerical solution of electromagnetic weak turbulence equations. It is shown that the fundamental emission is dominant and unless the beam speed is substantially higher than the electron thermal speed, the harmonic emission is not likely to be generated. The present findings may be useful for validating reduced models and for interpreting particle-in-cell simulations.
DEFF Research Database (Denmark)
Kohlenbach, Ulrich
2002-01-01
The so-called weak König's lemma (WKL) asserts the existence of an infinite path b in any infinite binary tree (given by a representing function f). Based on this principle one can formulate subsystems of higher-order arithmetic which allow one to carry out very substantial parts of classical mathematics ... which, relative to PRA, implies the schema of Σ⁰₁-induction). In this setting one can also consider a uniform version UWKL of WKL, which asserts the existence of a functional that selects, uniformly in a given infinite binary tree f, an infinite path b of that tree. This uniform version of WKL ...
Contextuality under weak assumptions
International Nuclear Information System (INIS)
Simmons, Andrew W; Rudolph, Terry; Wallman, Joel J; Pashayan, Hakop; Bartlett, Stephen D
2017-01-01
The presence of contextuality in quantum theory was first highlighted by Bell, Kochen and Specker, who discovered that for quantum systems of three or more dimensions, measurements could not be viewed as deterministically revealing pre-existing properties of the system. More precisely, no model can assign deterministic outcomes to the projectors of a quantum measurement in a way that depends only on the projector and not the context (the full set of projectors) in which it appeared, despite the fact that the Born rule probabilities associated with projectors are independent of the context. A more general, operational definition of contextuality introduced by Spekkens, which we will term ‘probabilistic contextuality’, drops the assumption of determinism and allows for operations other than measurements to be considered contextual. Even two-dimensional quantum mechanics can be shown to be contextual under this generalised notion. Probabilistic noncontextuality represents the postulate that elements of an operational theory that cannot be distinguished from each other based on the statistics of arbitrarily many repeated experiments (they give rise to the same operational probabilities) are ontologically identical. In this paper, we introduce a framework that enables us to distinguish between different noncontextuality assumptions in terms of the relationships between the ontological representations of objects in the theory given a certain relation between their operational representations. This framework can be used to motivate and define a ‘possibilistic’ analogue, encapsulating the idea that elements of an operational theory that cannot be unambiguously distinguished operationally can also not be unambiguously distinguished ontologically. We then prove that possibilistic noncontextuality is equivalent to an alternative notion of noncontextuality proposed by Hardy. Finally, we demonstrate that these weaker noncontextuality assumptions are sufficient to prove
Wilkesman, Jeff; Kurz, Liliana
2017-01-01
Zymography, the detection, identification, and even quantification of enzyme activity fractionated by gel electrophoresis, has received increasing attention in recent years, as revealed by the number of articles published. A number of enzymes, especially those of clinical interest, are routinely detected by zymography. This introductory chapter reviews the major principles behind zymography. New advances in this method are basically focused on two-dimensional zymography and transfer zymography, as will be explained in the rest of the chapters. Some general considerations when performing the experiments are outlined, as well as the major troubleshooting and safety issues necessary for correct development of the electrophoresis.
International Nuclear Information System (INIS)
Wilson, P.D.
1996-01-01
Some basic explanations are given of the principles underlying the nuclear fuel cycle, starting with the physics of atomic and nuclear structure and continuing with nuclear energy and reactors, fuel and waste management and finally a discussion of economics and the future. An important aspect of the fuel cycle concerns the possibility of ''closing the back end'', i.e. reprocessing the waste or unused fuel in order to re-use it in reactors of various kinds. The alternative, the ''once-through'' cycle, discards the discharged fuel completely. An interim measure involves the prolonged storage of highly radioactive waste fuel. (UK)
Vervoort, L.; Plancken, Van der L.; Grauwet, T.; Verlinde, P.; Matser, A.M.; Hendrickx, M.; Loey, van A.
2012-01-01
This report describes the first study comparing different high pressure (HP) and thermal treatments at intensities ranging from mild pasteurization to sterilization conditions. To allow a fair comparison, the processing conditions were selected based on the principles of equivalence. Moreover,
Symmetries of dynamically equivalent theories
Energy Technology Data Exchange (ETDEWEB)
Gitman, D.M.; Tyutin, I.V. [Sao Paulo Univ., SP (Brazil). Inst. de Fisica; Lebedev Physics Institute, Moscow (Russian Federation)
2006-03-15
A natural and very important development of constrained system theory is a detailed study of the relation between the constraint structure in the Hamiltonian formulation and specific features of the theory in the Lagrangian formulation, especially the relation between the constraint structure and the symmetries of the Lagrangian action. An important preliminary step in this direction is a strict demonstration, and this is the aim of the present article, that the symmetry structures of the Hamiltonian action and of the Lagrangian action are the same. Once this is proved, it is sufficient to consider the symmetry structure of the Hamiltonian action. The latter problem is, in some sense, simpler because the Hamiltonian action is a first-order action. At the same time, the study of the symmetry of the Hamiltonian action naturally involves Hamiltonian constraints as basic objects. One can see that the Lagrangian and Hamiltonian actions are dynamically equivalent. This is why, in the present article, we consider from the very beginning a more general problem: how the symmetry structures of dynamically equivalent actions are related. First, we present some necessary notions and relations concerning infinitesimal symmetries in general, as well as a strict definition of dynamically equivalent actions. Finally, we demonstrate that there exists an isomorphism between classes of equivalent symmetries of dynamically equivalent actions. (author)
Zilberberg, Oded; Romito, Alessandro; Gefen, Yuval
2013-01-01
Weak value (WV) is a quantum mechanical measurement protocol, proposed by Aharonov, Albert, and Vaidman. It consists of a weak measurement, which is weighed in, conditional on the outcome of a later, strong measurement. Here we define another two-step measurement protocol, the null weak value (NWV), and point out its advantages as compared to the WV. We present two alternative derivations of NWVs and compare them to the corresponding derivations of WVs.
Weak openness and almost openness
Directory of Open Access Journals (Sweden)
David A. Rose
1984-01-01
Weak openness and almost openness for arbitrary functions between topological spaces are defined as duals to the weak continuity of Levine and the almost continuity of Husain respectively. Independence of these two openness conditions is noted and comparison is made between these and the almost openness of Singal and Singal. Some results dual to those known for weak continuity and almost continuity are obtained. Nearly almost openness is defined and used to obtain an improved link from weak continuity to almost continuity.
Weak measurements and quantum weak values for NOON states
Rosales-Zárate, L.; Opanchuk, B.; Reid, M. D.
2018-03-01
Quantum weak values arise when the mean outcome of a weak measurement made on certain preselected and postselected quantum systems goes beyond the eigenvalue range for a quantum observable. Here, we propose how to determine quantum weak values for superpositions of states with a macroscopically or mesoscopically distinct mode number, that might be realized as two-mode Bose-Einstein condensate or photonic NOON states. Specifically, we give a model for a weak measurement of the Schwinger spin of a two-mode NOON state, for arbitrary N . The weak measurement arises from a nondestructive measurement of the two-mode occupation number difference, which for atomic NOON states might be realized via phase contrast imaging and the ac Stark effect using an optical meter prepared in a coherent state. The meter-system coupling results in an entangled cat-state. By subsequently evolving the system under the action of a nonlinear Josephson Hamiltonian, we show how postselection leads to quantum weak values, for arbitrary N . Since the weak measurement can be shown to be minimally invasive, the weak values provide a useful strategy for a Leggett-Garg test of N -scopic realism.
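The anomalous weak values discussed in these records follow from the standard Aharonov-Albert-Vaidman expression A_w = ⟨f|A|i⟩/⟨f|i⟩. A minimal sketch for a two-level system and σ_z (illustrative pre- and postselected states of my choosing, not the NOON-state protocol of the paper) shows the weak value escaping the eigenvalue range [-1, 1] as the two states approach orthogonality:

```python
import math

theta = math.radians(40.0)
# preselected |i> = (cos θ, sin θ), postselected |f> = (cos θ, -sin θ);
# the overlap <f|i> = cos 2θ shrinks as θ approaches 45°
ci, si = math.cos(theta), math.sin(theta)

# observable σ_z = diag(+1, -1), eigenvalues ±1
numerator = ci * ci + (-si) * (-1.0) * si   # <f|σ_z|i> = cos²θ + sin²θ = 1
denominator = ci * ci + (-si) * si          # <f|i> = cos 2θ
wv = numerator / denominator                # weak value of σ_z

print(wv)  # ≈ 5.76, far outside the eigenvalue range [-1, 1]
```

The divergence of the weak value as the postselection probability |⟨f|i⟩|² shrinks is exactly the tendency invoked in the Hartman-effect record below.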
Matching of equivalent field regions
DEFF Research Database (Denmark)
Appel-Hansen, Jørgen; Rengarajan, S.B.
2005-01-01
In aperture problems, integral equations for equivalent currents are often found by enforcing matching of equivalent fields. The enforcement is made in the aperture surface region adjoining the two volumes on each side of the aperture. In the case of an aperture in a planar perfectly conducting screen, having the same homogeneous medium on both sides and an impressed current on one side, an alternative procedure is relevant. We make use of the fact that in the aperture the tangential component of the magnetic field due to the induced currents in the screen is zero. The use of such a procedure shows that equivalent currents can be found by a consideration of only one of the two volumes into which the aperture plane divides the space. Furthermore, from a consideration of an automatic matching at the aperture, additional information about tangential as well as normal field components ...
Conditions for equivalence of statistical ensembles in nuclear multifragmentation
International Nuclear Information System (INIS)
Mallik, Swagata; Chaudhuri, Gargi
2012-01-01
Statistical models based on canonical and grand canonical ensembles are extensively used to study intermediate energy heavy-ion collisions. The underlying physical assumptions behind the canonical and grand canonical models are fundamentally different, and in principle the models agree only in the thermodynamic limit, when the number of particles becomes infinite. Nevertheless, we show that these models are equivalent in the sense that they predict similar results if certain conditions are met, even for finite nuclei. In particular, the results converge when nuclear multifragmentation leads to the formation of predominantly nucleons and low mass clusters. The conditions under which the equivalence holds are amenable to present day experiments.
The design of high performance weak current integrated amplifier
International Nuclear Information System (INIS)
Chen Guojie; Cao Hui
2005-01-01
A design method for a high-performance weak-current integrated amplifier using the ICL7650 operational amplifier is introduced. The operating principle of the circuits and the steps for improving the amplifier's performance are illustrated. Finally, the experimental results are given. The amplifier has a programmable measurement range of 10⁻⁹-10⁻¹² A, automatic zero-correction, accurate measurement, and good stability. (authors)
Unifying weak and electromagnetic forces in Weinberg-Salam theory
International Nuclear Information System (INIS)
Savoy, C.A.
1978-01-01
In this introduction to the ideas related to the unified gauge theories of the weak and electromagnetic interactions, we begin with the motivations for its basic principles. Then, the formalism is briefly developed, in particular the so-called Higgs mechanism. The advantages and the consequences of the (non-abelian) gauge invariance are emphasized, together with the experimental tests of the theory
BH3105 type neutron dose equivalent meter of high sensitivity
International Nuclear Information System (INIS)
Ji Changsong; Zhang Enshan; Yang Jianfeng; Zhang Hong; Huang Jiling
1995-10-01
It is noted that it is almost impossible to design a neutron dose meter of high sensitivity within the frame of the traditional design principle, the 'absorption net principle'. Based on a newly proposed principle for obtaining neutron dose equi-biological-effect adjustment, the 'absorption stick principle', a brand-new neutron dose-equivalent meter with high neutron sensitivity, BH3105, has been developed. Its sensitivity reaches 10 cps/(μSv·h⁻¹), which is 18-40 times higher than that of foreign products of the same kind and 10⁴ times higher than that of the domestic FJ342 neutron rem-meter. BH3105 has a measurement range from 0.1 μSv/h to 1 Sv/h, which is 1 or 2 orders of magnitude wider than that of the others. It has advanced properties of gamma resistance, energy response, orientation, etc. (6 tabs., 5 figs.)
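From the quoted sensitivity, a count rate converts to a dose-equivalent rate by simple division. A minimal sketch (the sensitivity figure is from the abstract; treating it as linear over the whole quoted range is my assumption):

```python
SENSITIVITY_CPS_PER_USV_H = 10.0  # quoted for BH3105: 10 cps per (μSv/h)

def dose_rate_usv_per_h(count_rate_cps: float) -> float:
    """Estimate dose-equivalent rate (μSv/h) from a measured count rate (cps)."""
    return count_rate_cps / SENSITIVITY_CPS_PER_USV_H

# at the lower end of the quoted range, 0.1 μSv/h corresponds to just 1 cps,
# which is why the high sensitivity matters for low-dose-rate measurements
print(dose_rate_usv_per_h(1.0))  # → 0.1
```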
Teleparallel equivalent of Lovelock gravity
González, P. A.; Vásquez, Yerko
2015-12-01
There is a growing interest in modified gravity theories based on torsion, as these theories exhibit interesting cosmological implications. In this work, inspired by the teleparallel formulation of general relativity, we present its extension to Lovelock gravity, known as the most natural extension of general relativity in higher-dimensional space-times. First, we review the teleparallel equivalent of general relativity and Gauss-Bonnet gravity, and then we construct the teleparallel equivalent of Lovelock gravity. In order to achieve this goal, we use the vielbein and the connection without imposing the Weitzenböck connection. Then, we extract the teleparallel formulation of the theory by setting the curvature to zero.
Weak decays of stable particles
International Nuclear Information System (INIS)
Brown, R.M.
1988-09-01
In this article we review recent advances in the field of weak decays and consider their implications for quantum chromodynamics (the theory of strong interactions) and electroweak theory (the combined theory of electromagnetic and weak interactions), which together form the ''Standard Model'' of elementary particles. (author)
Electromagnetic current in weak interactions
International Nuclear Information System (INIS)
Ma, E.
1983-01-01
In gauge models which unify weak and electromagnetic interactions, the weak neutral-current interaction also involves the electromagnetic current. The exact nature of such a component can be explored using e⁺e⁻ experimental data. In recent years, the existence of a new component of the weak interaction has become firmly established, i.e., the neutral-current interaction. As such, it competes with the electromagnetic interaction whenever the particles involved are also charged, but at a very much lower rate because its effective strength is so small. Hence neutrino processes are best for the detection of the neutral-current interaction. However, in any gauge model which unifies weak and electromagnetic interactions, the weak neutral-current interaction also involves the electromagnetic current
Weak values in collision theory
de Castro, Leonardo Andreta; Brasil, Carlos Alexandre; Napolitano, Reginaldo de Jesus
2018-05-01
Weak measurements have an increasing number of applications in contemporary quantum mechanics. They were originally described as a weak interaction that slightly entangled the translational degrees of freedom of a particle to its spin, yielding surprising results after post-selection. That description often ignores the kinetic energy of the particle and its movement in three dimensions. Here, we include these elements and re-obtain the weak values within the context of collision theory by two different approaches, and prove that the results are compatible with each other and with the results from the traditional approach. To provide a more complete description, we generalize weak values into weak tensors and use them to provide a more realistic description of the Stern-Gerlach apparatus.
Gluon Bremsstrahlung in Weakly-Coupled Plasmas
International Nuclear Information System (INIS)
Arnold, Peter
2009-01-01
I report on some theoretical progress concerning the calculation of gluon bremsstrahlung for very high energy particles crossing a weakly-coupled quark-gluon plasma. (i) I advertise that two of the several formalisms used to study this problem, the BDMPS-Zakharov formalism and the AMY formalism (the latter used only for infinite, uniform media), can be made equivalent when appropriately formulated. (ii) A standard technique to simplify calculations is to expand in inverse powers of logarithms ln(E/T). I give an example where such expansions are found to work well for ω/T≥10 where ω is the bremsstrahlung gluon energy. (iii) Finally, I report on perturbative calculations of q.
EQUIVALENCE VERSUS NON-EQUIVALENCE IN ECONOMIC TRANSLATION
Directory of Open Access Journals (Sweden)
Cristina, Chifane
2012-01-01
This paper aims at highlighting the fact that “equivalence” represents a concept worth revisiting and detailing upon when tackling the translation process of economic texts both from English into Romanian and from Romanian into English. Far from being exhaustive, our analysis will focus upon the problems arising from the lack of equivalence at the word level. Consequently, relevant examples from the economic field will be provided to account for the following types of non-equivalence at word level: culture-specific concepts; the source language concept is not lexicalised in the target language; the source language word is semantically complex; differences in physical and interpersonal perspective; differences in expressive meaning; differences in form; differences in frequency and purpose of using specific forms; and the use of loan words in the source text. Likewise, we shall illustrate a number of translation strategies necessary to deal with the aforementioned cases of non-equivalence: translation by a more general word (superordinate); translation by a more neutral/less expressive word; translation by cultural substitution; translation using a loan word or loan word plus explanation; translation by paraphrase using a related word; translation by paraphrase using unrelated words; translation by omission; and translation by illustration.
Hartman effect and weak measurements that are not really weak
International Nuclear Information System (INIS)
Sokolovski, D.; Akhmatskaya, E.
2011-01-01
We show that in wave packet tunneling, localization of the transmitted particle amounts to a quantum measurement of the delay it experiences in the barrier. With no external degree of freedom involved, the envelope of the wave packet plays the role of the initial pointer state. Under tunneling conditions such ''self-measurement'' is necessarily weak, and the Hartman effect just reflects the general tendency of weak values to diverge as postselection in the final state becomes improbable. We also demonstrate that it is a good-precision, or ''not really weak'', quantum measurement: no matter how wide the barrier d, it is possible to transmit a wave packet with a width σ small compared to the observed advancement. As is the case with all weak measurements, the probability of transmission rapidly decreases with the ratio σ/d.
Modelling of Airship Flight Mechanics by the Projection Equivalent Method
Frantisek Jelenciak; Michael Gerke; Ulrich Borgolte
2015-01-01
This article describes the projection equivalent method (PEM) as a specific and relatively simple approach for the modelling of aircraft dynamics. By the PEM it is possible to obtain a mathematical model of the aerodynamic forces and moments acting on different kinds of aircraft during flight. A characteristic of the PEM is that, in principle, it provides an acceptable regression model of aerodynamic forces and moments which exhibits reasonable and plausible behaviour from a...
Assaults by Mentally Disordered Offenders in Prison: Equity and Equivalence.
Hales, Heidi; Dixon, Amy; Newton, Zoe; Bartlett, Annie
2016-06-01
Managing the violent behaviour of mentally disordered offenders (MDO) is challenging in all jurisdictions. We describe the ethical framework and practical management of MDOs in England and Wales in the context of the move to equivalence of healthcare between hospital and prison. We consider the similarities and differences between prison and hospital management of the violent and challenging behaviours of MDOs. We argue that both types of institution can learn from each other and that equivalence of care should extend to equivalence of criminal proceedings in court and prisons for MDOs. We argue that any adjudication process in prison for MDOs is enhanced by the relevant involvement of mental health professionals and the articulation of the ethical principles underpinning health and criminal justice practices.
The Source Equivalence Acceleration Method
International Nuclear Information System (INIS)
Everson, Matthew S.; Forget, Benoit
2015-01-01
Highlights: • We present a new acceleration method, the Source Equivalence Acceleration Method. • SEAM forms an equivalent coarse group problem for any spatial method. • Equivalence is also formed across different spatial methods and angular quadratures. • Testing is conducted using OpenMOC and performance is compared with CMFD. • Results show that SEAM is preferable for very expensive transport calculations. - Abstract: Fine-group whole-core reactor analysis remains one of the long sought goals of the reactor physics community. Such a detailed analysis is typically too computationally expensive to be realized on anything except the largest of supercomputers. Recondensation using the Discrete Generalized Multigroup (DGM) method, though, offers a relatively cheap alternative to solving the fine group transport problem. DGM, however, suffered from inconsistencies when applied to high-order spatial methods. While an exact spatial recondensation method was developed and provided full spatial consistency with the fine group problem, this approach substantially increased memory requirements for realistic problems. The method described in this paper, called the Source Equivalence Acceleration Method (SEAM), forms a coarse-group problem which preserves the fine-group problem even when using higher order spatial methods. SEAM allows recondensation to converge to the fine-group solution with minimal memory requirements and little additional overhead. This method also provides for consistency when using different spatial methods and angular quadratures between the coarse group and fine group problems. SEAM was implemented in OpenMOC, a 2D MOC code developed at MIT, and its performance tested against Coarse Mesh Finite Difference (CMFD) acceleration on the C5G7 benchmark problem and on a 361 group version of the problem. For extremely expensive transport calculations, SEAM was able to outperform CMFD, resulting in speed-ups of 20–45 relative to the normal power
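SEAM's coarse-group construction is not spelled out in the abstract, but the flux-weighted condensation that any such coarse-group method builds on is standard: coarse-group cross sections are flux-averaged so that group-wise reaction rates are preserved. A minimal sketch with hypothetical numbers (not taken from OpenMOC or the paper):

```python
# fine-group cross sections and scalar fluxes: hypothetical illustrative values
sigma = [1.2, 1.5, 2.0, 3.5]   # fine-group cross sections
phi = [0.4, 0.3, 0.2, 0.1]     # fine-group scalar fluxes
coarse = [(0, 2), (2, 4)]      # fine-group index ranges forming each coarse group

sigma_c, phi_c = [], []
for a, b in coarse:
    phi_sum = sum(phi[a:b])
    # flux-weighted average: preserves the reaction rate sum(sigma*phi) per group
    sigma_c.append(sum(s * f for s, f in zip(sigma[a:b], phi[a:b])) / phi_sum)
    phi_c.append(phi_sum)

fine_rate = sum(s * f for s, f in zip(sigma, phi))
coarse_rate = sum(s * f for s, f in zip(sigma_c, phi_c))
print(fine_rate, coarse_rate)  # identical by construction
```

The point of methods like SEAM is to form such an equivalent coarse problem that also remains consistent with higher-order spatial discretizations, which plain condensation alone does not guarantee.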
Equivalent statistics and data interpretation.
Francis, Gregory
2017-08-01
Recent reform efforts in psychological science have led to a plethora of choices for scientists to analyze their data. A scientist making an inference about their data must now decide whether to report a p value, summarize the data with a standardized effect size and its confidence interval, report a Bayes Factor, or use other model comparison methods. To make good choices among these options, it is necessary for researchers to understand the characteristics of the various statistics used by the different analysis frameworks. Toward that end, this paper makes two contributions. First, it shows that for the case of a two-sample t test with known sample sizes, many different summary statistics are mathematically equivalent in the sense that they are based on the very same information in the data set. When the sample sizes are known, the p value provides as much information about a data set as the confidence interval of Cohen's d or a JZS Bayes factor. Second, this equivalence means that different analysis methods differ only in their interpretation of the empirical data. At first glance, it might seem that mathematical equivalence of the statistics suggests that it does not matter much which statistic is reported, but the opposite is true because the appropriateness of a reported statistic is relative to the inference it promotes. Accordingly, scientists should choose an analysis method appropriate for their scientific investigation. A direct comparison of the different inferential frameworks provides some guidance for scientists to make good choices and improve scientific practice.
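The claimed equivalence can be made concrete for the two-sample t statistic and Cohen's d: with known sample sizes, each is an exact transformation of the other, d = t·√(1/n₁ + 1/n₂). A minimal sketch with made-up data (any equal-variance two-sample setting would do):

```python
import math

x = [5.1, 4.9, 6.0, 5.5, 5.8, 5.2]  # made-up sample 1
y = [4.2, 4.8, 4.5, 4.0, 4.9, 4.4]  # made-up sample 2
n1, n2 = len(x), len(y)
m1, m2 = sum(x) / n1, sum(y) / n2

# pooled standard deviation
ss = sum((v - m1) ** 2 for v in x) + sum((v - m2) ** 2 for v in y)
sp = math.sqrt(ss / (n1 + n2 - 2))

d = (m1 - m2) / sp                                  # Cohen's d
t = (m1 - m2) / (sp * math.sqrt(1 / n1 + 1 / n2))   # two-sample t statistic

# same information: d is recoverable from t (and the sample sizes) exactly
print(abs(d - t * math.sqrt(1 / n1 + 1 / n2)))
```

Because the two statistics are deterministic transforms of each other given the sample sizes, reporting one rather than the other changes only the interpretation, not the information conveyed.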
Weak Measurement and Quantum Correlation
Indian Academy of Sciences (India)
Arun Kumar Pati
Entanglement: Two quantum systems can be in a strongly correlated state even if ... These are resources which can be used to design quantum computers, quantum ... Weak measurements have found numerous applications starting from the ...
Weakly infinite-dimensional spaces
International Nuclear Information System (INIS)
Fedorchuk, Vitalii V
2007-01-01
In this survey article two new classes of spaces are considered: m-C-spaces and w-m-C-spaces, m=2,3,...,∞. They are intermediate between the class of weakly infinite-dimensional spaces in the Alexandroff sense and the class of C-spaces. The classes of 2-C-spaces and w-2-C-spaces coincide with the class of weakly infinite-dimensional spaces, while the compact ∞-C-spaces are exactly the C-compact spaces of Haver. The main results of the theory of weakly infinite-dimensional spaces, including classification via transfinite Lebesgue dimensions and Luzin-Sierpiński indices, extend to these new classes of spaces. Weak m-C-spaces are characterised by means of essential maps to Henderson's m-compacta. The existence of hereditarily m-strongly infinite-dimensional spaces is proved.
Weak interactions and presupernova evolution
International Nuclear Information System (INIS)
Aufderheide, M.B.; State Univ. of New York
1991-01-01
The role of weak interactions, particularly electron capture and β⁻ decay, in presupernova evolution is discussed. The present uncertainty in these rates is examined and the possibility of improving the situation is addressed. 12 refs., 4 figs
Weak Deeply Virtual Compton Scattering
International Nuclear Information System (INIS)
Ales Psaker; Wolodymyr Melnitchouk; Anatoly Radyushkin
2006-01-01
We extend the analysis of the deeply virtual Compton scattering process to the weak interaction sector in the generalized Bjorken limit. The virtual Compton scattering amplitudes for the weak neutral and charged currents are calculated at leading twist within the framework of the nonlocal light-cone expansion via coordinate space QCD string operators. Using a simple model, we estimate cross sections for neutrino scattering off the nucleon, relevant for future high-intensity neutrino beam facilities
Weakly compact operators and interpolation
Maligranda, Lech
1992-01-01
The class of weakly compact operators is, like the class of compact operators, a fundamental operator ideal. It has been investigated intensively over the last twenty years. In this survey, we have collected and ordered some of this (partly very new) knowledge. We have also included some comments, remarks and examples.
Equivalent nozzle in thermomechanical problems
International Nuclear Information System (INIS)
Cesari, F.
1977-01-01
When analyzing nuclear vessels, it is most important to study the behavior of the nozzle cylinder-cylinder intersection. In the elastic field, this analysis in three dimensions is quite easy using the finite element method. The same analysis in the non-linear field becomes difficult for 3-D designs. It is therefore necessary to resolve a nozzle in two dimensions equivalent to a 3-D nozzle. The purpose of the present work is to find an equivalent nozzle under both mechanical and thermal load. This has been achieved by the three-dimensional analysis of a nozzle and of a nozzle cylinder-sphere intersection of a different radius. The equivalent nozzle is a nozzle with a sphere radius in a given ratio to the radius of the cylinder, such that the maximum equivalent stress is the same in both 2-D and 3-D. The nozzle examined derived from the intersection of a cylindrical vessel of radius R=191.4 mm and thickness T=6.7 mm with a cylindrical nozzle of radius r=24.675 mm and thickness t=1.350 mm, for which the experimental results for an internal pressure load are known. The structure was subdivided into 96 three-dimensional isoparametric finite elements with 60 degrees of freedom and 661 total nodes. Both the analysis with a mechanical load and the analysis with a thermal load were carried out on this structure with the Bersafe system. The thermal load consisted of a transient typical of an accident occurring in a sodium-cooled fast reactor, with a peak sodium temperature of 540 °C inside the vessel and the insulating argon temperature constant at 525 °C. The maximum value of the equivalent stress was found in the internal area at the junction towards the vessel side. The analysis of the nozzle in 2-D consists in schematizing the structure as a cylinder-sphere intersection, where the sphere has a given relation to the
21 CFR 26.9 - Equivalence determination.
2010-04-01
... 21 Food and Drugs 1 2010-04-01 2010-04-01 false Equivalence determination. 26.9 Section 26.9 Food... Specific Sector Provisions for Pharmaceutical Good Manufacturing Practices § 26.9 Equivalence determination... document insufficient evidence of equivalence, lack of opportunity to assess equivalence or a determination...
Information Leakage from Logically Equivalent Frames
Sher, Shlomi; McKenzie, Craig R. M.
2006-01-01
Framing effects are said to occur when equivalent frames lead to different choices. However, the equivalence in question has been incompletely conceptualized. In a new normative analysis of framing effects, we complete the conceptualization by introducing the notion of information equivalence. Information equivalence obtains when no…
Wijsman Orlicz Asymptotically Ideal -Statistical Equivalent Sequences
Directory of Open Access Journals (Sweden)
Bipan Hazarika
2013-01-01
in Wijsman sense and present some definitions which are the natural combination of the definition of asymptotic equivalence, statistical equivalent, -statistical equivalent sequences in Wijsman sense. Finally, we introduce the notion of Cesaro Orlicz asymptotically -equivalent sequences in Wijsman sense and establish their relationship with other classes.
Equivalence relations of AF-algebra extensions
Indian Academy of Sciences (India)
In this paper, we consider equivalence relations of *-algebra extensions and describe the relationship between the isomorphism equivalence and the unitary equivalence. We also show that a certain group homomorphism is the obstruction for these equivalence relations to be the same.
Acute muscular weakness in children
Directory of Open Access Journals (Sweden)
Ricardo Pablo Javier Erazo Torricelli
Acute muscle weakness in children is a pediatric emergency. During the diagnostic approach, it is crucial to obtain a detailed case history, including: onset of weakness, history of associated febrile states, ingestion of toxic substances/toxins, immunizations, and family history. Neurological examination must be meticulous as well. In this review, we describe the most common diseases related to acute muscle weakness, grouped into the site of origin (from the upper motor neuron to the motor unit). Early detection of hyperCKemia may lead to a myositis diagnosis, and hypokalemia points to the diagnosis of periodic paralysis. Ophthalmoparesis, ptosis and bulbar signs are suggestive of myasthenia gravis or botulism. Distal weakness and hyporeflexia are clinical features of Guillain-Barré syndrome, the most frequent cause of acute muscle weakness. If all studies are normal, a psychogenic cause should be considered. Finding the etiology of acute muscle weakness is essential to execute treatment in a timely manner, improving the prognosis of affected children.
A study of principle and testing of piezoelectric transformer
International Nuclear Information System (INIS)
Liu Weiyue; Wang Yanfang; Huang Yihua; Shi Jun
2002-01-01
The operating principle and structure of a kind of piezoelectric transformer which can be used in a particle accelerator are investigated. The properties of piezoelectric transformer are tested through equivalent circuit combined with experiment
Mixed field dose equivalent measuring instruments
International Nuclear Information System (INIS)
Brackenbush, L.W.; McDonald, J.C.; Endres, G.W.R.; Quam, W.
1985-01-01
In the past, separate instruments have been used to monitor dose equivalent from neutrons and gamma rays. It has been demonstrated that it is now possible to measure neutron and gamma dose simultaneously with a single instrument, the tissue equivalent proportional counter (TEPC). With appropriate algorithms, dose equivalent can also be determined from the TEPC. A simple ''pocket rem meter'' for measuring neutron dose equivalent has already been developed. Improved algorithms for determining dose equivalent in mixed fields are presented. (author)
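The folding of a measured spectrum through a quality factor can be sketched in a few lines. This is a minimal illustration, not the authors' algorithm: the ICRP 60 Q(L) form and the sample spectrum are assumptions, and lineal energy is crudely identified with LET.

```python
import math

def quality_factor(L):
    """ICRP 60 quality factor as a function of unrestricted LET (keV/um)."""
    if L < 10:
        return 1.0
    if L <= 100:
        return 0.32 * L - 2.2
    return 300.0 / math.sqrt(L)

def dose_equivalent(dose_spectrum):
    """Fold an absorbed-dose distribution (dose per lineal-energy bin)
    through Q, approximating lineal energy y by LET."""
    return sum(quality_factor(y) * d for y, d in dose_spectrum)

# hypothetical mixed-field spectrum: (lineal energy in keV/um, absorbed dose in Gy)
spectrum = [(0.3, 2e-5), (2.0, 1e-5), (20.0, 5e-6), (150.0, 1e-6)]
H = dose_equivalent(spectrum)  # Sv; exceeds total absorbed dose since Q >= 1
```

Since Q ≥ 1 everywhere, the resulting dose equivalent is always at least the total absorbed dose, with the high-LET (neutron-dominated) bins contributing disproportionately.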
A Cp-Theory Problem Book: Functional Equivalencies
Tkachuk, Vladimir V
2016-01-01
This fourth volume in Vladimir Tkachuk's series on Cp-theory gives reasonably complete coverage of the theory of functional equivalencies through 500 carefully selected problems and exercises. By systematically introducing each of the major topics of Cp-theory, the book is intended to bring a dedicated reader from basic topological principles to the frontiers of modern research. The book presents complete and up-to-date information on the preservation of topological properties by homeomorphisms of function spaces. An exhaustive theory of t-equivalent, u-equivalent and l-equivalent spaces is developed from scratch. The reader will also find introductions to the theory of uniform spaces, the theory of locally convex spaces, as well as the theory of inverse systems and dimension theory. Moreover, Kolmogorov's solution of Hilbert's Problem 13 is included, as it is needed for the presentation of the theory of l-equivalent spaces. This volume contains the most important classical re...
An upper bound on right-chiral weak interactions
International Nuclear Information System (INIS)
Stephenson, G.J.; Goldman, T.; Maltman, K.
1990-01-01
Weak vertex corrections to the quark-gluon vertex functions produce differing form-factor corrections for quarks of differing chiralities. These differences grow with increasing four-momentum transfer in the gluon leg. Consequently, inclusive polarized proton--proton scattering to a final state jet should show a large parity-violating asymmetry at high energies. The absence of large signals at sufficiently high energies can be interpreted as being due to balancing vertex corrections from a right-handed weak vector boson of limited mass, and limits on the strength of such signals can, in principle, give upper bounds on that mass. 2 refs
Derived equivalences for group rings
König, Steffen
1998-01-01
A self-contained introduction is given to J. Rickard's Morita theory for derived module categories and its recent applications in representation theory of finite groups. In particular, Broué's conjecture is discussed, giving a structural explanation for relations between the p-modular character table of a finite group and that of its "p-local structure". The book is addressed to researchers or graduate students and can serve as material for a seminar. It surveys the current state of the field, and it also provides a "user's guide" to derived equivalences and tilting complexes. Results and proofs are presented in the generality needed for group theoretic applications.
Cosmology with weak lensing surveys
International Nuclear Information System (INIS)
Munshi, Dipak; Valageas, Patrick; Waerbeke, Ludovic van; Heavens, Alan
2008-01-01
Weak gravitational lensing is responsible for the shearing and magnification of the images of high-redshift sources due to the presence of intervening matter. The distortions are due to fluctuations in the gravitational potential, and are directly related to the distribution of matter and to the geometry and dynamics of the Universe. As a consequence, weak gravitational lensing offers unique possibilities for probing the Dark Matter and Dark Energy in the Universe. In this review, we summarise the theoretical and observational state of the subject, focussing on the statistical aspects of weak lensing, and consider the prospects for weak lensing surveys in the future. Weak gravitational lensing surveys are complementary to both galaxy surveys and cosmic microwave background (CMB) observations as they probe the unbiased non-linear matter power spectrum at modest redshifts. Most of the cosmological parameters are accurately estimated from CMB and large-scale galaxy surveys, so the focus of attention is shifting to understanding the nature of Dark Matter and Dark Energy. On the theoretical side, recent advances in the use of 3D information of the sources from photometric redshifts promise greater statistical power, and these are further enhanced by the use of statistics beyond two-point quantities such as the power spectrum. The use of 3D information also alleviates difficulties arising from physical effects such as the intrinsic alignment of galaxies, which can mimic weak lensing to some extent. On the observational side, in the next few years weak lensing surveys such as CFHTLS, VST-KIDS and Pan-STARRS, and the planned Dark Energy Survey, will provide the first weak lensing surveys covering very large sky areas and depth. In the long run even more ambitious programmes such as DUNE, the Supernova Anisotropy Probe (SNAP) and Large-aperture Synoptic Survey Telescope (LSST) are planned. Weak lensing of diffuse components such as the CMB and 21 cm emission can also
Peripheral facial weakness (Bell's palsy).
Basić-Kes, Vanja; Dobrota, Vesna Dermanović; Cesarik, Marijan; Matovina, Lucija Zadro; Madzar, Zrinko; Zavoreo, Iris; Demarin, Vida
2013-06-01
Peripheral facial weakness is facial nerve damage that results in muscle weakness on one side of the face. It may be idiopathic (Bell's palsy) or may have a detectable cause. Almost 80% of peripheral facial weakness cases are primary and the rest are secondary. The most frequent causes of secondary peripheral facial weakness are systemic viral infections, trauma, surgery, diabetes, local infections, tumor, immune disorders, drugs, degenerative diseases of the central nervous system, etc. The diagnosis relies upon the presence of typical signs and symptoms, blood chemistry tests, cerebrospinal fluid investigations, nerve conduction studies and neuroimaging methods (cerebral MRI, x-ray of the skull and mastoid). Treatment of secondary peripheral facial weakness is based on therapy for the underlying disorder, unlike the treatment of Bell's palsy, which is controversial due to the lack of large, randomized, controlled, prospective studies. There are some indications that steroids or antiviral agents are beneficial, but there are also studies that show no beneficial effect. Additional treatments include eye protection, physiotherapy, acupuncture, botulinum toxin, or surgery. Bell's palsy has a benign prognosis with complete recovery in about 80% of patients; 15% experience some degree of permanent nerve damage and severe consequences remain in 5% of patients.
Quantum discord with weak measurements
International Nuclear Information System (INIS)
Singh, Uttam; Pati, Arun Kumar
2014-01-01
Weak measurements cause small change to quantum states, thereby opening up the possibility of new ways of manipulating and controlling quantum systems. We ask: can weak measurements reveal more quantum correlation in a composite quantum state? We prove that the weak measurement induced quantum discord, called the “super quantum discord”, is always larger than the quantum discord captured by the strong measurement. Moreover, we prove the monotonicity of the super quantum discord as a function of the measurement strength; in the limit of strong projective measurement, the super quantum discord reduces to the normal quantum discord. We find that unlike the normal discord, for pure entangled states, the super quantum discord can exceed the quantum entanglement. Our results provide new insights into the nature of quantum correlation and suggest that the notion of quantum correlation is not only observer dependent but also depends on how weakly one perturbs the composite system. We illustrate the key results for pure as well as mixed entangled states. -- Highlights: •Introduced the role of weak measurements in quantifying quantum correlation. •We have introduced the notion of the super quantum discord (SQD). •For pure entangled state, we show that the SQD exceeds the entanglement entropy. •This shows that quantum correlation depends not only on observer but also on measurement strength
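A common dichotomic weak-measurement construction on a qubit (the Oreshkov-Brun form, assumed here; the abstract does not specify the operators) can be written down and checked for completeness directly. The operators are diagonal in the measurement basis, so two numbers per operator suffice:

```python
import math

def weak_measurement_pair(x):
    """Dichotomic weak-measurement operators on a qubit, represented by their
    diagonals in the measurement basis. x -> 0 measures nothing (both
    operators approach I/sqrt(2) up to phase conventions); x -> infinity
    recovers the projective measurement (P_plus -> |1><1|, P_minus -> |0><0|)."""
    t = math.tanh(x)
    p_plus = (math.sqrt((1 - t) / 2), math.sqrt((1 + t) / 2))
    p_minus = (math.sqrt((1 + t) / 2), math.sqrt((1 - t) / 2))
    return p_plus, p_minus

pp, pm = weak_measurement_pair(0.7)
# completeness: P_plus^2 + P_minus^2 = I, for every measurement strength x
```

The completeness relation holds identically in x because (1 − tanh x)/2 + (1 + tanh x)/2 = 1, which is what makes the measurement strength a tunable parameter in the super quantum discord.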
The four variational principles of mechanics
International Nuclear Information System (INIS)
Gray, C.G.; Karl, G.; Novikov, V.A.
1996-01-01
We argue that there are four basic forms of the variational principles of mechanics: Hamilton's least action principle (HP), the generalized Maupertuis principle (MP), and their two reciprocal principles, RHP and RMP. This set is invariant under reciprocity and Legendre transformations. One of these forms (HP) is in the literature; only special cases of the other three are known. The generalized MP has a weaker constraint than the traditional formulation: only the mean energy Ē is kept fixed between virtual paths. This reformulation of MP alleviates several weaknesses of the old version. The reciprocal Maupertuis principle (RMP) is the classical limit of Schrödinger's variational principle of quantum mechanics, and this connection emphasizes the importance of the reciprocity transformation for variational principles. Two unconstrained formulations (UHP and UMP) of these four principles are also proposed, with completely specified Lagrange multipliers. Percival's variational principle for invariant tori and variational principles for scattering orbits are derived from the RMP. The RMP is very convenient for approximate variational solutions to problems in mechanics using Ritz-type methods; examples are provided. Copyright 1996 Academic Press, Inc
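The four forms can be summarized schematically (a sketch based on the abstract; subscripts denote the quantity held fixed between virtual paths, with S the Hamilton action, W the Maupertuis action, T the transit time and Ē the mean energy):

```latex
% Hamilton's principle and its reciprocal
(\delta S)_{T} = 0, \qquad (\delta T)_{S} = 0,
% generalized Maupertuis principle and its reciprocal
(\delta W)_{\bar{E}} = 0, \qquad (\delta \bar{E})_{W} = 0,
\quad\text{where}\quad
S = \int L\,dt, \qquad W = \int \mathbf{p}\cdot d\mathbf{q}.
```

Reciprocity swaps the varied quantity with the constrained one, which is why the set of four is closed under it.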
Calculation methods for determining dose equivalent
International Nuclear Information System (INIS)
Endres, G.W.R.; Tanner, J.E.; Scherpelz, R.I.; Hadlock, D.E.
1987-11-01
A series of calculations of neutron fluence as a function of energy in an anthropomorphic phantom was performed to develop a system for determining effective dose equivalent for external radiation sources. Critical organ dose equivalents are calculated and effective dose equivalents are determined using ICRP-26 [1] methods. Quality factors based on both present definitions and ICRP-40 definitions are used in the analysis. The results of these calculations are presented and discussed. The effective dose equivalent determined using ICRP-26 methods is significantly smaller than the dose equivalent determined by traditional methods. No existing personnel dosimeter or health physics instrument can determine effective dose equivalent. At the present time, the conversion of dosimeter response to dose equivalent is based on calculations for maximal or ''cap'' values using homogeneous spherical or cylindrical phantoms. The evaluated dose equivalent is, therefore, a poor approximation of the effective dose equivalent as defined by ICRP Publication 26. 3 refs., 2 figs., 1 tab
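The ICRP-26 recipe referred to above is a weighted sum of organ dose equivalents. A minimal sketch, with the tissue weighting factors as given in ICRP Publication 26, the remainder treated as a single lump, and illustrative doses invented for the example:

```python
# ICRP-26 tissue weighting factors; the remainder (0.30) is in practice split
# among the five most highly exposed remaining organs, lumped here for brevity
W_T = {"gonads": 0.25, "breast": 0.15, "red bone marrow": 0.12,
       "lung": 0.12, "thyroid": 0.03, "bone surface": 0.03, "remainder": 0.30}

def effective_dose_equivalent(organ_doses):
    """H_E = sum over tissues of w_T * H_T (ICRP-26)."""
    return sum(W_T[t] * h for t, h in organ_doses.items())

doses = {t: 1.0 for t in W_T}           # uniform 1 mSv to every tissue
H_E = effective_dose_equivalent(doses)  # weights sum to 1, so H_E = 1 mSv
```

For a uniform whole-body field the weighted sum reproduces the uniform dose, which is the sanity check behind the weighting scheme; the non-uniform organ doses computed in the phantom calculations are what make H_E differ from the traditional "cap" values.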
Weak-interacting holographic QCD
International Nuclear Information System (INIS)
Gazit, D.; Yee, H.-U.
2008-06-01
We propose a simple prescription for including low-energy weak interactions into the framework of holographic QCD, based on the standard AdS/CFT dictionary of double-trace deformations. As our proposal enables us to calculate various electroweak observables involving strongly coupled QCD, it opens a new perspective on phenomenological applications of holographic QCD. We illustrate the efficiency and usefulness of our method by performing a few exemplar calculations: neutron beta decay, charged pion weak decay, and meson-nucleon parity non-conserving (PNC) couplings. The idea is general enough to be implemented in both the Sakai-Sugimoto and Hard/Soft Wall models. (author)
Plane waves with weak singularities
International Nuclear Information System (INIS)
David, Justin R.
2003-03-01
We study a class of time dependent solutions of the vacuum Einstein equations which are plane waves with weak null singularities. This singularity is weak in the sense that though the tidal forces diverge at the singularity, the rate of divergence is such that the distortion suffered by a freely falling observer remains finite. Among such weak singular plane waves there is a sub-class which does not exhibit large back reaction in the presence of test scalar probes. String propagation in these backgrounds is smooth and there is a natural way to continue the metric beyond the singularity. This continued metric admits string propagation without the string becoming infinitely excited. We construct a one parameter family of smooth metrics which are at a finite distance in the space of metrics from the extended metric and a well defined operator in the string sigma model which resolves the singularity. (author)
Cosmology and the weak interaction
Energy Technology Data Exchange (ETDEWEB)
Schramm, D.N. (Fermi National Accelerator Lab., Batavia, IL (USA)):(Chicago Univ., IL (USA))
1989-12-01
The weak interaction plays a critical role in modern Big Bang cosmology. This review will emphasize two of its most publicized cosmological connections: Big Bang nucleosynthesis and Dark Matter. The first of these is connected to the cosmological prediction of neutrino flavours, N_ν ≈ 3, which is now being confirmed at SLC and LEP. The second is interrelated to the whole problem of galaxy and structure formation in the universe. This review will demonstrate the role of the weak interaction both for dark matter candidates and for the problem of generating seeds to form structure. 87 refs., 3 figs., 5 tabs.
Nonlinear waves and weak turbulence
Zakharov, V E
1997-01-01
This book is a collection of papers on dynamical and statistical theory of nonlinear wave propagation in dispersive conservative media. Emphasis is on waves on the surface of an ideal fluid and on Rossby waves in the atmosphere. Although the book deals mainly with weakly nonlinear waves, it is more than simply a description of standard perturbation techniques. The goal is to show that the theory of weakly interacting waves is naturally related to such areas of mathematics as Diophantine equations, differential geometry of waves, Poincaré normal forms, and the inverse scattering method.
Weak disorder in Fibonacci sequences
Energy Technology Data Exchange (ETDEWEB)
Ben-Naim, E [Theoretical Division and Center for Nonlinear Studies, Los Alamos National Laboratory, Los Alamos, NM 87545 (United States); Krapivsky, P L [Department of Physics and Center for Molecular Cybernetics, Boston University, Boston, MA 02215 (United States)
2006-05-19
We study how weak disorder affects the growth of the Fibonacci series. We introduce a family of stochastic sequences that grow by the normal Fibonacci recursion with probability 1 − ε, but follow a different recursion rule with a small probability ε. We focus on the weak disorder limit and obtain the Lyapunov exponent that characterizes the typical growth of the sequence elements, using perturbation theory. The limiting distribution for the ratio of consecutive sequence elements is obtained as well. A number of variations to the basic Fibonacci recursion including shift, doubling and copying are considered. (letter to the editor)
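The setup simulates directly. A sketch assuming one particular perturbed rule (a sign flip, x_n = x_{n-1} − x_{n-2}, chosen for illustration; the letter considers shift, doubling and copying variants), with renormalization to avoid overflow:

```python
import math
import random

def lyapunov_estimate(eps, n=4000, seed=1):
    """Estimate the Lyapunov exponent (typical growth rate of log x_n) of a
    stochastic Fibonacci-like sequence: with probability 1 - eps apply
    x_n = x_{n-1} + x_{n-2}, with probability eps apply the perturbed rule
    x_n = x_{n-1} - x_{n-2}. The pair is rescaled each step and the log of
    the scale factor accumulated, so the elements never overflow."""
    rng = random.Random(seed)
    a, b = 1.0, 1.0
    log_growth = 0.0
    for _ in range(n):
        c = b + a if rng.random() >= eps else b - a
        s = max(abs(c), 1e-12)   # scale factor; guard against exact zeros
        log_growth += math.log(s)
        a, b = b / s, c / s
    return log_growth / n

# eps = 0 recovers the pure Fibonacci growth rate, log of the golden ratio
```

For ε = 0 the estimate converges quickly to ln φ ≈ 0.4812, and switching on the subtractive disorder pulls the typical growth rate below that value.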
Weak interactions at high energies
International Nuclear Information System (INIS)
Ellis, J.
1978-08-01
Review lectures are presented on the phenomenological implications of the modern spontaneously broken gauge theories of the weak and electromagnetic interactions, and some observations are made about which high energy experiments probe what aspects of gauge theories. Basic quantum chromodynamics phenomenology is covered including momentum dependent effective quark distributions, the transverse momentum cutoff, search for gluons as sources of hadron jets, the status and prospects for the spectroscopy of fundamental fermions and how fermions may be used to probe aspects of the weak and electromagnetic gauge theory, studies of intermediate vector bosons, and miscellaneous possibilities suggested by gauge theories from the Higgs bosons to speculations about proton decay. 187 references
Editorial: New operational dose equivalent quantities
International Nuclear Information System (INIS)
Harvey, J.R.
1985-01-01
ICRU Report 39, entitled ''Determination of Dose Equivalents Resulting from External Radiation Sources'', is briefly discussed. Four new operational dose equivalent quantities have been recommended in ICRU 39. The 'ambient dose equivalent' and the 'directional dose equivalent' are applicable to environmental monitoring, while the 'individual dose equivalent, penetrating' and the 'individual dose equivalent, superficial' are applicable to individual monitoring. The quantities should meet the needs of day-to-day operational practice, while being acceptable to those concerned with metrological precision, and at the same time be used to give effective control consistent with current perceptions of the risks associated with exposure to ionizing radiations. (U.K.)
Foreword: Biomonitoring Equivalents special issue.
Meek, M E; Sonawane, B; Becker, R A
2008-08-01
The challenge of interpreting results of biomonitoring for environmental chemicals in humans is highlighted in this Foreword to the Biomonitoring Equivalents (BEs) special issue of Regulatory Toxicology and Pharmacology. There is a pressing need to develop risk-based tools in order to empower scientists and health professionals to interpret and communicate the significance of human biomonitoring data. The BE approach, which integrates dosimetry and risk assessment methods, represents an important advancement on the path toward achieving this objective. The articles in this issue, developed as a result of an expert panel meeting, present guidelines for derivation of BEs, guidelines for communication using BEs and several case studies illustrating application of the BE approach for specific substances.
Radiological equivalent of chemical pollutants
International Nuclear Information System (INIS)
Medina, V.O.
1982-01-01
The development of the peaceful uses of nuclear energy has driven a continued effort toward public safety through radiation health protection measures and nuclear management practices. Less attention, however, has been paid to the operation of chemical and petrochemical industries and to other industrial processes brought about by technological advancement. This article presents a comparison of the risks of radiation and chemicals. The methods used for comparing the risks of late effects of radiation and chemicals are considered at three levels: (a) as a frame of reference to give an impression of the resolving power of biological tests; (b) as methods to quantify risks; (c) as instruments for an epidemiological survey of human populations. There are marked dissimilarities between chemicals and radiation, and a radiation-equivalent interpretation of chemical activity may not be achievable. Applicability of the concept of rad equivalence has many restrictions and, as pointed out, this approach is not an established one. (RTD)
Tissue equivalence in neutron dosimetry
International Nuclear Information System (INIS)
Nutton, D.H.; Harris, S.J.
1980-01-01
A brief review is presented of the essential features of neutron tissue equivalence for radiotherapy, together with the results of a computation of relative absorbed dose for 14 MeV neutrons using various tissue models. It is concluded that, for the Bragg-Gray equation for ionometric dosimetry, it is not sufficient to define the value of W to high accuracy; for dosimetric measurements to be applicable to real body tissue to an accuracy of better than several per cent, a correction to the total absorbed dose must be made according to the test and tissue atomic composition, although variations in patient anatomy and other radiotherapy parameters will often limit the benefits of such detailed dosimetry. (U.K.)
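The ionometric chain referred to here can be sketched as follows. This is a schematic of the Bragg-Gray relation only, not the paper's computation; the W/e value for tissue-equivalent gas in the usage line is an assumed round number:

```python
def absorbed_dose_bragg_gray(charge_C, gas_mass_kg, W_per_e_J_per_C, sp_ratio):
    """Bragg-Gray ionometric dosimetry: dose to the gas cavity is
    (Q/m) * (W/e), and dose to the surrounding medium follows by
    multiplying with the medium-to-gas mass stopping power ratio."""
    dose_gas = (charge_C / gas_mass_kg) * W_per_e_J_per_C  # Gy in the gas
    return dose_gas * sp_ratio                             # Gy in the medium

# illustrative numbers: 1 nC collected in 1 mg of gas, W/e ~ 29 J/C (assumed),
# stopping power ratio near unity for tissue-equivalent gas
dose = absorbed_dose_bragg_gray(1e-9, 1e-6, 29.0, 1.02)
```

Both W/e and the stopping power ratio depend on the gas and tissue composition, which is exactly why the abstract argues that pinning down W alone is not enough.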
Weak localization of seismic waves
International Nuclear Information System (INIS)
Larose, E.; Margerin, L.; Tiggelen, B.A. van; Campillo, M.
2004-01-01
We report the observation of weak localization of seismic waves in a natural environment. It emerges as a doubling of the seismic energy around the source within a spot of the width of a wavelength, which is several tens of meters in our case. The characteristic time for its onset is the scattering mean-free time that quantifies the internal heterogeneity
Weak-BCC-algebras
Thomys, Janus; Zhang, Xiaohong
2013-01-01
We describe weak-BCC-algebras (also called BZ-algebras) in which the condition (x∗y)∗z = (x∗z)∗y is satisfied only in the case when elements x, y belong to the same branch. We also characterize ideals, nilradicals, and nilpotent elements of such algebras. PMID:24311983
Voltage Weak DC Distribution Grids
Hailu, T.G.; Mackay, L.J.; Ramirez Elizondo, L.M.; Ferreira, J.A.
2017-01-01
This paper describes the behavior of voltage weak DC distribution systems. These systems have relatively small system capacitance. The size of system capacitance, which stores energy, has a considerable effect on the value of fault currents, control complexity, and system reliability. A number of
The structure of weak interaction
International Nuclear Information System (INIS)
Zee, A.
1977-01-01
The effect of introducing right-handed currents on the structure of the weak interaction is discussed. The ΔI=1/2 rule is in the spotlight. The discussion provides an interesting example in which the so-called Iizuka-Okubo-Zweig rule is not only evaded, but completely negated
Coverings, Networks and Weak Topologies
Czech Academy of Sciences Publication Activity Database
Dow, A.; Junnila, H.; Pelant, Jan
2006-01-01
Roč. 53, č. 2 (2006), s. 287-320 ISSN 0025-5793 R&D Projects: GA ČR GA201/97/0216 Institutional research plan: CEZ:AV0Z10190503 Keywords : Banach spaces * weak topologies * networks topologies Subject RIV: BA - General Mathematics
Weak differentiability of product measures
Heidergott, B.F.; Leahu, H.
2010-01-01
In this paper, we study cost functions over a finite collection of random variables. For these types of models, a calculus of differentiation is developed that allows us to obtain a closed-form expression for derivatives where "differentiation" has to be understood in the weak sense. The technique
Weak lensing and dark energy
International Nuclear Information System (INIS)
Huterer, Dragan
2002-01-01
We study the power of upcoming weak lensing surveys to probe dark energy. Dark energy modifies the distance-redshift relation as well as the matter power spectrum, both of which affect the weak lensing convergence power spectrum. Some dark-energy models predict additional clustering on very large scales, but this probably cannot be detected by weak lensing alone due to cosmic variance. With reasonable prior information on other cosmological parameters, we find that a survey covering 1000 sq deg down to a limiting magnitude of R=27 can impose constraints comparable to those expected from upcoming type Ia supernova and number-count surveys. This result, however, is contingent on the control of both observational and theoretical systematics. Concentrating on the latter, we find that the nonlinear power spectrum of matter perturbations and the redshift distribution of source galaxies both need to be determined accurately in order for weak lensing to achieve its full potential. Finally, we discuss the sensitivity of the three-point statistics to dark energy
Weak pion production from nuclei
Indian Academy of Sciences (India)
effect of Pauli blocking, Fermi motion and renormalization of weak Δ properties ... Furthermore, the angular distribution and the energy distribution of ... Here ψ_α(p) and u(p) are the Rarita-Schwinger and Dirac spinors for the Δ and the nucleon.
International Nuclear Information System (INIS)
Tauhata, L.; Marques, A.
1972-01-01
Energy levels and gamma-radiation transitions of 44Ca are experimentally determined, mainly the weak transitions at 564 keV and 728 keV. The decay scheme and the method used (coincidence with a Ge(Li) detector) are also presented [pt]
Expanding the Interaction Equivalency Theorem
Directory of Open Access Journals (Sweden)
Brenda Cecilia Padilla Rodriguez
2015-06-01
Although interaction is recognised as a key element for learning, its incorporation in online courses can be challenging. The interaction equivalency theorem provides guidelines: Meaningful learning can be supported as long as one of three types of interactions (learner-content, learner-teacher and learner-learner) is present at a high level. This study sought to apply this theorem to the corporate sector, and to expand it to include other indicators of course effectiveness: satisfaction, knowledge transfer, business results and return on expectations. A large Mexican organisation participated in this research, with 146 learners, 30 teachers and 3 academic assistants. Three versions of an online course were designed, each emphasising a different type of interaction. Data were collected through surveys, exams, observations, activity logs, think aloud protocols and sales records. All course versions yielded high levels of effectiveness, in terms of satisfaction, learning and return on expectations. Yet, course design did not dictate the types of interactions in which students engaged within the courses. Findings suggest that the interaction equivalency theorem can be reformulated as follows: In corporate settings, an online course can be effective in terms of satisfaction, learning, knowledge transfer, business results and return on expectations, as long as (a) at least one of three types of interaction (learner-content, learner-teacher or learner-learner) features prominently in the design of the course, and (b) course delivery is consistent with the chosen type of interaction. Focusing on only one type of interaction carries a high risk of confusion, disengagement or missed learning opportunities, which can be managed by incorporating other forms of interactions.
Equivalent damage of loads on pavements
CSIR Research Space (South Africa)
Prozzi, JA
2009-05-26
This report describes a new methodology for the determination of Equivalent Damage Factors (EDFs) of vehicles with multiple axle and wheel configurations on pavements. The basic premise of this new procedure is that "equivalent pavement response...
Investigation of Equivalent Circuit for PEMFC Assessment
International Nuclear Information System (INIS)
Myong, Kwang Jae
2011-01-01
Chemical reactions occurring in a PEMFC are dominated by the physical conditions and interface properties, and the reactions are expressed in terms of impedance. The performance of a PEMFC can be simply diagnosed by examining the impedance because impedance characteristics can be expressed by an equivalent electrical circuit. In this study, the characteristics of a PEMFC are assessed using the AC impedance and various equivalent circuits such as a simple equivalent circuit, equivalent circuit with a CPE, equivalent circuit with two RCs, and equivalent circuit with two CPEs. It was found in this study that the characteristics of a PEMFC could be assessed using impedance and an equivalent circuit, and the accuracy was highest for an equivalent circuit with two CPEs
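The two-CPE equivalent circuit described above can be sketched numerically. A minimal illustration, assuming a series resistance R0 followed by two parallel R-CPE branches, with the standard constant-phase-element impedance Z_CPE = 1/(Q(jω)^n); all component values below are hypothetical, not taken from the study:

```python
import numpy as np

def z_cpe(omega, Q, n):
    # Constant-phase-element impedance: Z = 1 / (Q * (j*omega)**n)
    return 1.0 / (Q * (1j * omega) ** n)

def z_fuel_cell(omega, R0, R1, Q1, n1, R2, Q2, n2):
    # Series resistance plus two parallel R-CPE branches (two-CPE model)
    z1 = 1.0 / (1.0 / R1 + 1.0 / z_cpe(omega, Q1, n1))
    z2 = 1.0 / (1.0 / R2 + 1.0 / z_cpe(omega, Q2, n2))
    return R0 + z1 + z2

freqs = np.logspace(-1, 4, 200)          # 0.1 Hz to 10 kHz
omega = 2 * np.pi * freqs
Z = z_fuel_cell(omega, R0=0.01, R1=0.05, Q1=0.5, n1=0.9,
                R2=0.1, Q2=1.0, n2=0.8)
# At low frequency the real part approaches R0 + R1 + R2;
# at high frequency the CPE branches short out and only R0 remains.
```

Plotting −Im(Z) against Re(Z) gives the familiar Nyquist arcs from which the circuit parameters are fitted to measured AC impedance data.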
2010-10-01
... Safety Management (ISM) Code (IMO Resolution A.741(18)) for the purpose of determining that an equivalent... Organization (IMO) “Code of Safety for High Speed Craft” as an equivalent to compliance with applicable...
How the Weak Variance of Momentum Can Turn Out to be Negative
Feyereisen, M. R.
2015-05-01
Weak values are average quantities, therefore investigating their associated variance is crucial in understanding their place in quantum mechanics. We develop the concept of a position-postselected weak variance of momentum as cohesively as possible, building primarily on material from Moyal (Mathematical Proceedings of the Cambridge Philosophical Society, Cambridge University Press, Cambridge, 1949) and Sonego (Found Phys 21(10):1135, 1991). The weak variance is defined in terms of the Wigner function, using a standard construction from probability theory. We show this corresponds to a measurable quantity, which is not itself a weak value. It also leads naturally to a connection between the imaginary part of the weak value of momentum and the quantum potential. We study how the negativity of the Wigner function causes negative weak variances, and the implications this has on a class of `subquantum' theories. We also discuss the role of weak variances in studying determinism, deriving the classical limit from a variational principle.
Some spectral equivalences between Schroedinger operators
International Nuclear Information System (INIS)
Dunning, C; Hibberd, K E; Links, J
2008-01-01
Spectral equivalences of the quasi-exactly solvable sectors of two classes of Schroedinger operators are established, using Gaudin-type Bethe ansatz equations. In some instances the results can be extended leading to full isospectrality. In this manner we obtain equivalences between PT-symmetric problems and Hermitian problems. We also find equivalences between some classes of Hermitian operators
The definition of the individual dose equivalent
International Nuclear Information System (INIS)
Ehrlich, Margarete
1986-01-01
A brief note examines the choice of the present definition of the individual dose equivalent, the new operational dosimetry quantity for external exposure. The consequences of using the individual dose equivalent, and the problems facing it as currently defined, are briefly discussed. (UK)
Light weakly interacting massive particles
Gelmini, Graciela B.
2017-08-01
Light weakly interacting massive particles (WIMPs) are dark matter particle candidates with weak scale interaction with the known particles, and mass in the GeV to tens of GeV range. Hints of light WIMPs have appeared in several dark matter searches in the last decade. The unprecedented possible coincidence into tantalizingly close regions of mass and cross section of four separate direct detection experimental hints and a potential indirect detection signal in gamma rays from the galactic center, aroused considerable interest in our field. Even if these hints did not so far result in a discovery, they have had a significant impact in our field. Here we review the evidence for and against light WIMPs as dark matter candidates and discuss future relevant experiments and observations.
(Weakly) three-dimensional caseology
International Nuclear Information System (INIS)
Pomraning, G.C.
1996-01-01
The singular eigenfunction technique of Case for solving one-dimensional planar symmetry linear transport problems is extended to a restricted class of three-dimensional problems. This class involves planar geometry, but with forcing terms (either boundary conditions or internal sources) which are weakly dependent upon the transverse spatial variables. Our analysis involves a singular perturbation about the classic planar analysis, and leads to the usual Case discrete and continuum modes, but modulated by weakly dependent three-dimensional spatial functions. These functions satisfy parabolic differential equations, with a different diffusion coefficient for each mode. Representative one-speed time-independent transport problems are solved in terms of these generalised Case eigenfunctions. Our treatment is very heuristic, but may provide an impetus for more rigorous analysis. (author)
Spatial evolutionary games with weak selection.
Nanda, Mridu; Durrett, Richard
2017-06-06
Recently, a rigorous mathematical theory has been developed for spatial games with weak selection, i.e., when the payoff differences between strategies are small. The key to the analysis is that when space and time are suitably rescaled, the spatial model converges to the solution of a partial differential equation (PDE). This approach can be used to analyze all [Formula: see text] games, but there are a number of [Formula: see text] games for which the behavior of the limiting PDE is not known. In this paper, we give rules for determining the behavior of a large class of [Formula: see text] games and check their validity using simulation. In words, the effect of space is equivalent to making changes in the payoff matrix, and once this is done, the behavior of the spatial game can be predicted from the behavior of the replicator equation for the modified game. We say predicted here because in some cases the behavior of the spatial game is different from that of the replicator equation for the modified game. For example, if a rock-paper-scissors game has a replicator equation that spirals out to the boundary, space stabilizes the system and produces an equilibrium.
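The prediction step described above (reading the spatial game's behavior off the replicator equation for a modified payoff matrix) can be illustrated with a minimal Euler-integration sketch of the replicator dynamics. The payoff matrix below is the standard zero-sum rock-paper-scissors matrix, chosen purely for illustration; it is not the modified matrix from the paper:

```python
import numpy as np

def replicator_step(x, A, dt):
    # Replicator equation: dx_i/dt = x_i * ((A x)_i - x . A x)
    f = A @ x              # fitness of each strategy
    phi = x @ f            # mean population fitness
    return x + dt * x * (f - phi)

# Rock-paper-scissors payoffs (win = 1, loss = -1, tie = 0), a zero-sum game
A = np.array([[ 0.0, -1.0,  1.0],
              [ 1.0,  0.0, -1.0],
              [-1.0,  1.0,  0.0]])

x = np.array([0.5, 0.3, 0.2])  # initial strategy frequencies
for _ in range(10000):
    x = replicator_step(x, A, dt=0.01)
# For zero-sum RPS the interior fixed point (1/3, 1/3, 1/3) is a center:
# the continuous dynamics cycle around it rather than converging.
```

Replacing A with a modified matrix (e.g. one whose replicator flow spirals outward) is how, per the paper, the spatial game's behavior is predicted, although space can stabilize cases where the replicator equation alone spirals to the boundary.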
History of the weak interactions
International Nuclear Information System (INIS)
Lee, T.D.
1987-01-01
At the 'Jackfest' marking the 65th birthday of Jack Steinberger (see July/August 1986 issue, page 29), T.D. Lee gave an account of the history of the weak interactions. This edited version omits some of Lee's tributes to Steinberger, but retains the impressive insight into the subtleties of a key area of modern physics by one who played a vital role in its development. (orig./HSI).
Weak neutral-current interactions
International Nuclear Information System (INIS)
Barnett, R.M.
1978-08-01
The roles of each type of experiment in establishing uniquely the values of the neutral-current couplings of u and d quarks are analyzed, together with their implications for gauge models of the weak and electromagnetic interactions. An analysis of the neutral-current couplings of electrons, and of the data based on the assumption that only one Z0 boson exists, is given. A model-independent analysis of parity-violation experiments is also discussed. 85 references
Submanifolds weakly associated with graphs
Indian Academy of Sciences (India)
A. CARRIAZO, L. M. FERNÁNDEZ and A. RODRÍGUEZ-HIDALGO. Department of Geometry and Topology, ..... by means of trees (connected graphs without cycles) and forests (disjoint unions of trees, see [6]) given in [3], by extending it to weak ... CR-submanifold. In this case, every tree is a K2. Finally, Theorem 3.8 of [3] can ...
Zhu, Junya
2018-06-11
Self-report instruments have been widely used to better understand variations in patient safety climate between physicians and nurses. Research is needed to determine whether differences in patient safety climate reflect true differences in the underlying concepts. This is known as measurement equivalence, which is a prerequisite for meaningful group comparisons. This study aims to examine the degree of measurement equivalence of the responses to a patient safety climate survey of Chinese hospitals and to demonstrate how the measurement equivalence method can be applied to self-report climate surveys for patient safety research. Using data from the Chinese Hospital Survey of Patient Safety Climate from six Chinese hospitals in 2011, we constructed two groups: physicians and nurses (346 per group). We used multiple-group confirmatory factor analyses to examine progressively more stringent restrictions for measurement equivalence. We identified weak factorial equivalence across the two groups. Strong factorial equivalence was found for Organizational Learning, Unit Management Support for Safety, Adequacy of Safety Arrangements, Institutional Commitment to Safety, Error Reporting and Teamwork. Strong factorial equivalence, however, was not found for Safety System, Communication and Peer Support and Staffing. Nevertheless, further analyses suggested that nonequivalence did not meaningfully affect the conclusions regarding physician-nurse differences in patient safety climate. Our results provide evidence of at least partial equivalence of the survey responses between nurses and physicians, supporting mean comparisons of its constructs between the two groups. The measurement equivalence approach is essential to ensure that conclusions about group differences are valid.
Matter tensor from the Hilbert variational principle
International Nuclear Information System (INIS)
Pandres, D. Jr.
1976-01-01
We consider the Hilbert variational principle which is conventionally used to derive Einstein's equations for the source-free gravitational field. We show that at least one version of the equivalence principle suggests an alternative way of performing the variation, resulting in a different set of Einstein equations with sources automatically present. This illustrates a technique which may be applied to any theory that is derived from a variational principle and that admits a gauge group. The essential point is that, if one first imposes a gauge condition and then performs the variation, one obtains field equations with source terms which do not appear if one first performs the variation and then imposes the gauge condition. A second illustration is provided by the variational principle conventionally used to derive Maxwell's equations for the source-free electromagnetic field. If one first imposes the Lorentz gauge condition and then performs the variation, one obtains Maxwell's equations with sources present
The Principle of Energetic Consistency
Cohn, Stephen E.
2009-01-01
A basic result in estimation theory is that the minimum variance estimate of the dynamical state, given the observations, is the conditional mean estimate. This result holds independently of the specifics of any dynamical or observation nonlinearity or stochasticity, requiring only that the probability density function of the state, conditioned on the observations, has two moments. For nonlinear dynamics that conserve a total energy, this general result implies the principle of energetic consistency: if the dynamical variables are taken to be the natural energy variables, then the sum of the total energy of the conditional mean and the trace of the conditional covariance matrix (the total variance) is constant between observations. Ensemble Kalman filtering methods are designed to approximate the evolution of the conditional mean and covariance matrix. For them the principle of energetic consistency holds independently of ensemble size, even with covariance localization. However, full Kalman filter experiments with advection dynamics have shown that a small amount of numerical dissipation can cause a large, state-dependent loss of total variance, to the detriment of filter performance. The principle of energetic consistency offers a simple way to test whether this spurious loss of variance limits ensemble filter performance in full-blown applications. The classical second-moment closure (third-moment discard) equations also satisfy the principle of energetic consistency, independently of the rank of the conditional covariance matrix. Low-rank approximation of these equations offers an energetically consistent, computationally viable alternative to ensemble filtering. Current formulations of long-window, weak-constraint, four-dimensional variational methods are designed to approximate the conditional mode rather than the conditional mean. Thus they neglect the nonlinear bias term in the second-moment closure equation for the conditional mean. The principle of
Gossip and Distributed Kalman Filtering: Weak Consensus Under Weak Detectability
Kar, Soummya; Moura, José M. F.
2011-04-01
The paper presents the gossip interactive Kalman filter (GIKF) for distributed Kalman filtering for networked systems and sensor networks, where inter-sensor communication and observations occur at the same time-scale. The communication among sensors is random; each sensor occasionally exchanges its filtering state information with a neighbor depending on the availability of the appropriate network link. We show that under a weak distributed detectability condition: 1. the GIKF error process remains stochastically bounded, irrespective of the instability properties of the random process dynamics; and 2. the network achieves weak consensus, i.e., the conditional estimation error covariance at a (uniformly) randomly selected sensor converges in distribution to a unique invariant measure on the space of positive semi-definite matrices (independent of the initial state). To prove these results, we interpret the filtered states (estimates and error covariances) at each node in the GIKF as stochastic particles with local interactions. We analyze the asymptotic properties of the error process by studying as a random dynamical system the associated switched (random) Riccati equation, the switching being dictated by a non-stationary Markov chain on the network graph.
Directory of Open Access Journals (Sweden)
Alla Luchyk
2015-06-01
Interpretation of Ukrainian and Polish Adverbial Word Equivalents: Form and Meaning Interaction in National Explanatory Lexicography. The article argues for the necessity and feasibility of compiling dictionaries of glossary units with intermediate existence status, among which word equivalents belong. To form the glossary of a Ukrainian-Polish dictionary of this type, an analysis of the form and meaning of Ukrainian and Polish word equivalents is carried out, the common and distinctive features of these language-system elements are described, and the principles of compiling such a dictionary are clarified.
Major strengths and weaknesses of the lod score method.
Ott, J
2001-01-01
Strengths and weaknesses of the lod score method for human genetic linkage analysis are discussed. The main weakness is its requirement for the specification of a detailed inheritance model for the trait. Various strengths are identified. For example, the lod score (likelihood) method has optimality properties when the trait to be studied is known to follow a Mendelian mode of inheritance. The ELOD is a useful measure for information content of the data. The lod score method can emulate various "nonparametric" methods, and this emulation is equivalent to the nonparametric methods. Finally, the possibility of building errors into the analysis will prove to be essential for the large amount of linkage and disequilibrium data expected in the near future.
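As a concrete illustration of the likelihood machinery behind the lod score, here is a minimal sketch for the simplest, phase-known case, where L(θ) = θ^r (1−θ)^(n−r) for r recombinants in n informative meioses; the counts below are invented for illustration:

```python
import math

def lod_score(theta, recombinants, meioses):
    # Phase-known LOD: log10 of L(theta) / L(0.5),
    # with L(theta) = theta**r * (1 - theta)**(n - r)
    n, r = meioses, recombinants
    log_l_theta = r * math.log10(theta) + (n - r) * math.log10(1 - theta)
    log_l_null = n * math.log10(0.5)     # no linkage: theta = 1/2
    return log_l_theta - log_l_null

# 2 recombinants observed in 10 informative meioses,
# evaluated at the maximum-likelihood value theta = r/n = 0.2
z = lod_score(0.2, 2, 10)   # approximately 0.84
```

A lod of 3 (odds of 1000:1 in favor of linkage) is the conventional significance threshold, which this small pedigree cannot reach; in practice lod scores from independent families are summed.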
International Nuclear Information System (INIS)
Goradia, S.G.
2006-01-01
Why is gravity weak? Gravity is plagued with this and many other questions. After decades of exhausting work we do not have a clear answer. In view of this fact, it will be shown in the following pages that there are reasons for thinking that gravity is just a composite force consisting of the long-range manifestations of short-range nuclear forces that are too tiny to be measured at illuminated or long ranges by particle colliders. This is consistent with Einstein's proposal of 1919
International Nuclear Information System (INIS)
Gaillard, M.K.
1978-08-01
The properties that may help to identify the two additional quark flavors expected to be discovered are considered: lifetime, branching ratios, selection rules, and lepton decay spectra. It is also noted that CP violation may manifest itself more strongly in heavy-particle decays than elsewhere, providing a new probe of its origin. Also treated are the theoretical progress in the understanding of nonleptonic transitions among lighter quarks, nonleptonic K and hyperon decay amplitudes, omega-minus and charmed-particle decay predictions, and lastly the Kobayashi-Maskawa model for the weak coupling of heavy quarks, together with the details of its implications for topology and bottomology. 48 references
Weak consistency and strong paraconsistency
Directory of Open Access Journals (Sweden)
Gemma Robles
2009-11-01
Full Text Available In a standard sense, consistency and paraconsistency are understood as, respectively, the absence of any contradiction and as the absence of the ECQ (“E contradictione quodlibet” rule that allows us to conclude any well formed formula from any contradiction. The aim of this paper is to explain the concepts of weak consistency alternative to the standard one, the concepts of paraconsistency related to them and the concept of strong paraconsistency, all of which have been defined by the author together with José M. Méndez.
Electromagnetic weak turbulence theory revisited
Energy Technology Data Exchange (ETDEWEB)
Yoon, P. H. [IPST, University of Maryland, College Park, Maryland 20742 (United States); Ziebell, L. F. [Instituto de Fisica, UFRGS, Porto Alegre, RS (Brazil); Gaelzer, R.; Pavan, J. [Instituto de Fisica e Matematica, UFPel, Pelotas, RS (Brazil)
2012-10-15
The statistical mechanical reformulation of weak turbulence theory for unmagnetized plasmas including fully electromagnetic effects was carried out by Yoon [Phys. Plasmas 13, 022302 (2006)]. However, the wave kinetic equation for the transverse wave ignores the nonlinear three-wave interaction that involves two transverse waves and a Langmuir wave, the incoherent analogue of the so-called Raman scattering process, which may account for the third and higher-harmonic plasma emissions. The present paper extends the previous formalism by including such a term.
Relaxion monodromy and the Weak Gravity Conjecture
International Nuclear Information System (INIS)
Ibáñez, L.E.; Montero, M.; Uranga, A.M.; Valenzuela, I.
2016-01-01
The recently proposed relaxion models require extremely large trans-Planckian axion excursions as well as a potential explicitly violating the axion shift symmetry. The latter property is however inconsistent with the axion periodicity, which corresponds to a gauged discrete shift symmetry. A way to make things consistent is to use monodromy, i.e. both the axion and the potential parameters transform under the discrete shift symmetry. The structure is better described in terms of a 3-form field C_μ_ν_ρ coupling to the SM Higgs through its field strength F_4. The 4-form also couples linearly to the relaxion, in the Kaloper-Sorbo fashion. The extremely small relaxion-Higgs coupling arises in a see-saw fashion as g≃F_4/f, with f being the axion decay constant. We discuss constraints on this type of constructions from membrane nucleation and the Weak Gravity Conjecture. The latter requires the existence of membranes, whose too fast nucleation could in principle drive the theory out of control, unless the cut-off scale is lowered. This allows to rule out the simplest models with the QCD axion as relaxion candidate on purely theoretical grounds. We also discuss possible avenues to embed this structure into string theory.
The Complexity of Identifying Large Equivalence Classes
DEFF Research Database (Denmark)
Skyum, Sven; Frandsen, Gudmund Skovbjerg; Miltersen, Peter Bro
1999-01-01
We prove that at least (3k−4)/(k(2k−3)) · (n choose 2) − O(k) equivalence tests and no more than (2/k) · (n choose 2) + O(n) equivalence tests are needed in the worst case to identify the equivalence classes with at least k members in a set of n elements. The upper bound is an improvement by a factor of 2 compared to known results.
Equivalent Simplification Method of Micro-Grid
Cai Changchun; Cao Xiangqin
2013-01-01
The paper concentrates on the equivalent simplification method for connecting a micro-grid system into a distribution network. The method is proposed for studying the interaction between the micro-grid and the distribution network. The micro-grid network, composite load, gas-turbine synchronous generation and wind generation are reduced to equivalents and connected in parallel at the point of common coupling. A micro-grid system is built, and three-phase and single-phase grounded faults are per...
Calculation methods for determining dose equivalent
International Nuclear Information System (INIS)
Endres, G.W.R.; Tanner, J.E.; Scherpelz, R.I.; Hadlock, D.E.
1988-01-01
A series of calculations of neutron fluence as a function of energy in an anthropomorphic phantom was performed to develop a system for determining effective dose equivalent for external radiation sources. Critical-organ dose equivalents are calculated and effective dose equivalents are determined using ICRP-26 methods. Quality factors based on both present definitions and ICRP-40 definitions are used in the analysis. The results of these calculations are presented and discussed
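The ICRP-26 scheme referred to above combines organ dose equivalents into an effective dose equivalent as H_E = Σ_T w_T H_T. A minimal sketch, using the standard ICRP-26 tissue weighting factors; the organ dose values below are hypothetical, not taken from the study:

```python
# ICRP-26 tissue weighting factors (they sum to 1.0)
WEIGHTS = {
    "gonads": 0.25, "breast": 0.15, "red_bone_marrow": 0.12,
    "lung": 0.12, "thyroid": 0.03, "bone_surfaces": 0.03,
    "remainder": 0.30,
}

def effective_dose_equivalent(organ_doses_msv):
    # H_E = sum over tissues T of w_T * H_T
    return sum(WEIGHTS[t] * h for t, h in organ_doses_msv.items())

# Hypothetical organ dose equivalents (mSv) from an external neutron field
doses = {"gonads": 1.0, "breast": 0.8, "red_bone_marrow": 1.2,
         "lung": 1.1, "thyroid": 0.9, "bone_surfaces": 1.0,
         "remainder": 1.0}
h_e = effective_dose_equivalent(doses)   # weighted sum, about 1.0 mSv here
```

In the study's workflow, the organ dose equivalents themselves come from the phantom fluence calculations folded with energy-dependent quality factors.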
Equivalences of real submanifolds in complex space.
ZAITSEV, DMITRI
2001-01-01
We show that for any real-analytic submanifold M in C^N there is a proper real-analytic subvariety V contained in M such that for any p ∈ M \ V, any real-analytic submanifold M′ in C^N, and any p′ ∈ M′, the germs of the submanifolds M and M′ at p and p′ respectively are formally equivalent if and only if they are biholomorphically equivalent. More general results for k-equivalences are also stated and proved.
Relations of equivalence of conditioned radioactive waste
International Nuclear Information System (INIS)
Kumer, L.; Szeless, A.; Oszuszky, F.
1982-01-01
Compensation for the wastes remaining with the operator of a waste management center, to be paid by the agent having caused the waste, may be assured by establishing a financial valuation (equivalence) of the wastes. Technically and logically, this equivalence between wastes (or specifically between different waste categories) and financial valuation has been established as reasonable. In this paper, the possibility of establishing such equivalences is developed, and their suitability for waste management concepts is quantitatively expressed
Behavioural equivalence for infinite systems - Partially decidable!
DEFF Research Database (Denmark)
Sunesen, Kim; Nielsen, Mogens
1996-01-01
languages with two generalizations based on traditional approaches capturing non-interleaving behaviour, pomsets representing global causal dependency, and locality representing spatial distribution of events. We first study equivalences on Basic Parallel Processes, BPP, a process calculus equivalent...... of processes between BPP and TCSP, not only are the two equivalences different, but one (locality) is decidable whereas the other (pomsets) is not. The decidability result for locality is proved by a reduction to the reachability problem for Petri nets....
Equivalence in Bilingual Lexicography: Criticism and Suggestions*
Directory of Open Access Journals (Sweden)
Herbert Ernst Wiegand
2011-10-01
Abstract: A reminder of general problems in the formation of terminology, as illustrated by the German Äquivalenz (Eng. equivalence) and äquivalent (Eng. equivalent), is followed by a critical discussion of the concept of equivalence in contrastive lexicology. It is shown that especially the concept of partial equivalence is contradictory in its different manifestations. Consequently attempts are made to give a more precise indication of the concept of equivalence in metalexicography, with regard to the domain of the nominal lexicon. The problems of especially the metalexicographic concept of partial equivalence, as well as that of divergence, are fundamentally expounded. In conclusion the direction is indicated in which more appropriate metalexicographic versions of the concept of equivalence may be found.
Keywords: EQUIVALENCE, LEXICOGRAPHIC EQUIVALENT, PARTIAL EQUIVALENCE, CONGRUENCE, DIVERGENCE, CONVERGENCE, POLYDIVERGENCE, SYNTAGM-EQUIVALENCE, ZERO EQUIVALENCE, CORRESPONDENCE
Potentiometric titration and equivalent weight of humic acid
Pommer, A.M.; Breger, I.A.
1960-01-01
The "acid nature" of humic acid has been controversial for many years. Some investigators claim that humic acid is a true weak acid, while others feel that its behaviour during potentiometric titration can be accounted for by colloidal adsorption of hydrogen ions. The acid character of humic acid has been reinvestigated using newly-derived relationships for the titration of weak acids with strong base. Re-interpreting the potentiometric titration data published by Thiele and Kettner in 1953, it was found that Merck humic acid behaves as a weak polyelectrolytic acid having an equivalent weight of 150, a pKa of 6.8 to 7.0, and a titration exponent of about 4.8. Interpretation of similar data pertaining to the titration of phenol-formaldehyde and pyrogallol-formaldehyde resins, considered to be analogs for humic acid by Thiele and Kettner, leads to the conclusion that it is not possible to differentiate between adsorption and acid-base reaction for these substances. © 1960.
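The weak-polyelectrolyte interpretation above can be sketched with the standard empirical titration relation pH = pKa + n·log10(α/(1−α)); the parameter values 150, 6.9 and 4.8 come from the abstract, while the function names are our own illustrative choices:

```python
import math

def titration_pH(alpha, pKa=6.9, n=4.8):
    """Empirical polyelectrolyte titration relation
    pH = pKa + n * log10(alpha / (1 - alpha)),
    where alpha is the degree of neutralization and n is the titration
    exponent (n = 1 recovers the ideal Henderson-Hasselbalch case)."""
    return pKa + n * math.log10(alpha / (1.0 - alpha))

def equivalents_of_base(mass_g, equivalent_weight=150.0):
    """Equivalents of strong base consumed at the end point for a given
    mass of humic acid, using the reported equivalent weight of 150."""
    return mass_g / equivalent_weight

# at half-neutralization (alpha = 0.5) the pH equals the pKa
assert abs(titration_pH(0.5) - 6.9) < 1e-12
```

A titration exponent n well above 1 flattens the curve around the half-neutralization point, which is how a polyelectrolyte is distinguished from an ideal monoprotic weak acid.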
International Nuclear Information System (INIS)
Chanowitz, M.S.
1986-03-01
Prospects for the study of standard model weak interactions at the SSC are reviewed, with emphasis on the unique capability of the SSC to study the mechanism of electroweak symmetry breaking whether the associated new quanta are at the TeV scale or higher. Symmetry breaking by the minimal Higgs mechanism and by related strong interaction dynamical variants is summarized. A set of measurements is outlined that would calibrate the proton structure functions and the backgrounds to new physics. The ability to measure the three weak gauge boson vertex is found to complement LEP II, with measurements extending to larger Q² at a comparable statistical level in detectable decays. B factory physics is briefly reviewed as one example of a possible broad program of high statistics studies of sub-TeV scale phenomena. The largest section of the talk is devoted to the possible manifestations of symmetry breaking in the WW and ZZ production cross sections. Some new results are presented bearing on the ability to detect high mass WW and ZZ pairs. The principal conclusion is that although nonstandard model scenarios are typically more forgiving, the capability to study symmetry breaking in the standard model (and in related strong interaction dynamical variants) requires achieving the SSC design goals of √s = 40 TeV and L = 10³³ cm⁻² sec⁻¹. 28 refs., 5 figs
Probing supervoids with weak lensing
Higuchi, Yuichi; Inoue, Kaiki Taro
2018-05-01
The cosmic microwave background (CMB) has non-Gaussian features in its temperature fluctuations. An anomalous cold spot surrounded by a hot ring, called the Cold Spot, is one such feature. If a large underdense region (supervoid) resides towards the Cold Spot, we would be able to detect a systematic shape distortion in the images of background source galaxies via the weak lensing effect. In order to estimate the detectability of such signals, we used the data of N-body simulations to simulate full-sky ray-tracing of source galaxies. We searched for the most prominent underdense region using the simulated convergence maps smoothed at a scale of 20° and obtained tangential shears around it. The lensing signal expected in a concordant Λ cold dark matter model can be detected at a signal-to-noise ratio S/N ˜ 3. If a supervoid with a radius of ˜200 h⁻¹ Mpc and a density contrast δ0 ˜ -0.3 at the centre resides at a redshift z ˜ 0.2, on-going and near-future weak gravitational lensing surveys would detect a lensing signal with S/N ≳ 4 without resorting to stacking. From the tangential shear profile, we can obtain a constraint on the projected mass distribution of the supervoid.
Copernicus, Kant, and the anthropic cosmological principles
Roush, Sherrilyn
In the last three decades several cosmological principles and styles of reasoning termed 'anthropic' have been introduced into physics research and popular accounts of the universe and human beings' place in it. I discuss the circumstances of 'fine tuning' that have motivated this development, and what is common among the principles. I examine the two primary principles, and find a sharp difference between these 'Weak' and 'Strong' varieties: contrary to the view of the progenitors that all anthropic principles represent a departure from Copernicanism in cosmology, the Weak Anthropic Principle is an instance of Copernicanism. It has close affinities with the step of Copernicus that Immanuel Kant took himself to be imitating in the 'critical' turn that gave rise to the Critique of Pure Reason. I conclude that the fact that a way of going about natural science mentions human beings is not sufficient reason to think that it is a subjective approach; in fact, it may need to mention human beings in order to be objective.
On Resolution Complexity of Matching Principles
DEFF Research Database (Denmark)
Dantchev, Stefan S.
proof system. The results in the thesis fall in this category. We study the Resolution complexity of some Matching Principles. The three major contributions of the thesis are as follows. Firstly, we develop a general technique of proving resolution lower bounds for the perfect matching principles based...... Chessboard as well as for Tseitin tautologies based on rectangular grid graph. We reduce these problems to Tiling games, a concept introduced by us, which may be of interest on its own. Secondly, we find the exact Tree-Resolution complexity of the Weak Pigeon-Hole Principle. It is the most studied
Radiation protection principles
International Nuclear Information System (INIS)
Ismail Bahari
2007-01-01
The presentation outlines the aspects of radiation protection principles. It discusses the following subjects: radiation hazards and risk; the objectives of radiation protection; and the three principles of the system - justification of practice, optimization of protection and safety, and dose limits.
Principles of project management
1982-01-01
The basic principles of project management as practiced by NASA management personnel are presented. These principles are given as ground rules and guidelines to be used in the performance of research, development, construction or operational assignments.
Small vacuum energy from small equivalence violation in scalar gravity
International Nuclear Information System (INIS)
Agrawal, Prateek; Sundrum, Raman
2017-01-01
The theory of scalar gravity proposed by Nordström, and refined by Einstein and Fokker, provides a striking analogy to general relativity. In its modern form, scalar gravity appears as the low-energy effective field theory of the spontaneous breaking of conformal symmetry within a CFT, and is AdS/CFT dual to the original Randall-Sundrum I model, but without a UV brane. Scalar gravity faithfully exhibits several qualitative features of the cosmological constant problem of standard gravity coupled to quantum matter, and the Weinberg no-go theorem can be extended to this case as well. Remarkably, a solution to the scalar gravity cosmological constant problem has been proposed, where the key is a very small violation of the scalar equivalence principle, which can be elegantly formulated as a particular type of deformation of the CFT. In the dual AdS picture this involves implementing Goldberger-Wise radion stabilization where the Goldberger-Wise field is a pseudo-Nambu Goldstone boson. In quantum gravity however, global symmetries protecting pNGBs are not expected to be fundamental. We provide a natural six-dimensional gauge theory origin for this global symmetry and show that the violation of the equivalence principle and the size of the vacuum energy seen by scalar gravity can naturally be exponentially small. Our solution may be of interest for study of non-supersymmetric CFTs in the spontaneously broken phase.
Small vacuum energy from small equivalence violation in scalar gravity
Energy Technology Data Exchange (ETDEWEB)
Agrawal, Prateek [Department of Physics, Harvard University,Cambridge, MA 02138 (United States); Sundrum, Raman [Department of Physics, University of Maryland,College Park, MD 20742 (United States)
2017-05-29
The theory of scalar gravity proposed by Nordström, and refined by Einstein and Fokker, provides a striking analogy to general relativity. In its modern form, scalar gravity appears as the low-energy effective field theory of the spontaneous breaking of conformal symmetry within a CFT, and is AdS/CFT dual to the original Randall-Sundrum I model, but without a UV brane. Scalar gravity faithfully exhibits several qualitative features of the cosmological constant problem of standard gravity coupled to quantum matter, and the Weinberg no-go theorem can be extended to this case as well. Remarkably, a solution to the scalar gravity cosmological constant problem has been proposed, where the key is a very small violation of the scalar equivalence principle, which can be elegantly formulated as a particular type of deformation of the CFT. In the dual AdS picture this involves implementing Goldberger-Wise radion stabilization where the Goldberger-Wise field is a pseudo-Nambu Goldstone boson. In quantum gravity however, global symmetries protecting pNGBs are not expected to be fundamental. We provide a natural six-dimensional gauge theory origin for this global symmetry and show that the violation of the equivalence principle and the size of the vacuum energy seen by scalar gravity can naturally be exponentially small. Our solution may be of interest for study of non-supersymmetric CFTs in the spontaneously broken phase.
Maximum Principles for Discrete and Semidiscrete Reaction-Diffusion Equation
Directory of Open Access Journals (Sweden)
Petr Stehlík
2015-01-01
Full Text Available We study reaction-diffusion equations with a general reaction function f on one-dimensional lattices with continuous or discrete time, u′_x (or Δ_t u_x) = k(u_{x−1} − 2u_x + u_{x+1}) + f(u_x), x ∈ Z. We prove weak and strong maximum and minimum principles for the corresponding initial-boundary value problems. Whereas the maximum principles in the semidiscrete case (continuous time) exhibit similar features to those of the fully continuous reaction-diffusion model, in the discrete case the weak maximum principle holds for a smaller class of functions and the strong maximum principle is valid in a weaker sense. We describe in detail how the validity of maximum principles depends on the nonlinearity and the time step. We illustrate our results on the Nagumo equation with the bistable nonlinearity.
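As a numerical sketch of the discrete-time case: explicit-Euler time-stepping of the lattice Nagumo equation stays within the interval spanned by the two stable states when the time step is small, consistent with a weak maximum principle. The parameter values below are illustrative choices, not taken from the paper:

```python
import numpy as np

def nagumo_step(u, k, dt, a=0.3):
    """One explicit-Euler step of the lattice reaction-diffusion equation
    du_x/dt = k*(u_{x-1} - 2*u_x + u_{x+1}) + f(u_x)
    with the bistable Nagumo nonlinearity f(u) = u*(1 - u)*(u - a)."""
    lap = np.roll(u, 1) - 2.0 * u + np.roll(u, -1)  # periodic lattice Laplacian
    return u + dt * (k * lap + u * (1.0 - u) * (u - a))

# initial data jumping between the two stable states 0 and 1
u = np.where(np.arange(50) < 25, 1.0, 0.0)
for _ in range(1000):
    u = nagumo_step(u, k=1.0, dt=0.01)
# for this small time step the solution stays in [0, 1],
# consistent with a weak maximum principle
assert u.min() >= 0.0 and u.max() <= 1.0
```

Raising dt until k*dt exceeds 1/2 breaks the inequality 1 − 2k·dt ≥ 0 behind the discrete comparison argument, and overshoots outside [0, 1] appear, which is the time-step dependence the abstract refers to.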
The certainty principle (review)
Arbatsky, D. A.
2006-01-01
The certainty principle (2005) made it possible to conceptualize, on more fundamental grounds, both the Heisenberg uncertainty principle (1927) and the Mandelshtam-Tamm relation (1945). In this review I give a detailed explanation and discussion of the certainty principle, oriented to all physicists, both theorists and experimenters.
The equivalent energy method: an engineering approach to fracture
International Nuclear Information System (INIS)
Witt, F.J.
1981-01-01
The equivalent energy method for elastic-plastic fracture evaluations was developed around 1970 for determining realistic engineering estimates for the maximum load-displacement or stress-strain conditions for fracture of flawed structures. The basic principles were summarized, but the supporting experimental data, most of which were obtained after the method was proposed, have never been collated. This paper restates the original bases more explicitly and presents the validating data in graphical form. Extensive references are given. The volumetric energy ratio, a modelling parameter encompassing both size and temperature, is the fundamental parameter of the equivalent energy method. It is demonstrated that, in an engineering sense, the volumetric energy ratio is a unique material characteristic for a steel, much like a material property except that size must be taken into account. With this as a proposition, the basic formula of the equivalent energy method is derived. Sufficient information is presented so that investigators and analysts may judge the viability and applicability of the method to their areas of interest. (author)
A method to obtain new cross-sections transport equivalent
International Nuclear Information System (INIS)
Palmiotti, G.
1988-01-01
We present a method that allows the calculation, by means of a variational principle, of equivalent cross-sections in order to take into account the transport and mesh-size effects on reactivity variation calculations. The method validation has been made in two- and three-dimensional geometries. The reactivity variations calculated in three-dimensional hexagonal geometry with seven points per subassembly, using two sets of equivalent cross-sections for control rods, are in very good agreement with those of a transport calculation extrapolated to zero mesh size. The difficulty encountered in obtaining a good flux distribution has led to the use of a single set of equivalent cross-sections calculated starting from an appropriate R-Z model that also takes into account the axial transport effects for the control rod followers. The global results in reactivity variations are still satisfactory, with a good performance for the flux distribution. The main interest of the proposed method is the possibility to simulate a full 3D transport calculation with fine mesh size using a 3D diffusion code with a larger mesh size. The results obtained should be affected by uncertainties which do not exceed ± 4% for a large LMFBR control rod worth and for very different rod configurations. This uncertainty is by far smaller than the experimental uncertainties. (author). 5 refs, 8 figs, 9 tabs
On organizing principles of discrete differential geometry. Geometry of spheres
International Nuclear Information System (INIS)
Bobenko, Alexander I; Suris, Yury B
2007-01-01
Discrete differential geometry aims to develop discrete equivalents of the geometric notions and methods of classical differential geometry. This survey contains a discussion of the following two fundamental discretization principles: the transformation group principle (smooth geometric objects and their discretizations are invariant with respect to the same transformation group) and the consistency principle (discretizations of smooth parametrized geometries can be extended to multidimensional consistent nets). The main concrete geometric problem treated here is discretization of curvature-line parametrized surfaces in Lie geometry. Systematic use of the discretization principles leads to a discretization of curvature-line parametrization which unifies circular and conical nets.
Weak KAM for commuting Hamiltonians
International Nuclear Information System (INIS)
Zavidovique, M
2010-01-01
For two commuting Tonelli Hamiltonians, we recover the commutation of the Lax–Oleinik semi-groups, a result of Barles and Tourin (2001 Indiana Univ. Math. J. 50 1523–44), using a direct geometrical method (Stokes' theorem). We also obtain a 'generalization' of a theorem of Maderna (2002 Bull. Soc. Math. France 130 493–506). More precisely, we prove that if the phase space is the cotangent of a compact manifold then the weak KAM solutions (or viscosity solutions of the critical stationary Hamilton–Jacobi equation) for G and for H are the same. As a corollary we obtain the equality of the Aubry sets and of the Peierls barrier. This is also related to works of Sorrentino (2009 On the Integrability of Tonelli Hamiltonians Preprint) and Bernard (2007 Duke Math. J. 136 401–20)
Equivalent drawbead performance in deep drawing simulations
Meinders, Vincent T.; Geijselaers, Hubertus J.M.; Huetink, Han
1999-01-01
Drawbeads are applied in the deep drawing process to improve the control of the material flow during the forming operation. In simulations of the deep drawing process these drawbeads can be replaced by an equivalent drawbead model. In this paper the usage of an equivalent drawbead model in the
On uncertainties in definition of dose equivalent
International Nuclear Information System (INIS)
Oda, Keiji
1995-01-01
The author has always entertained the doubt: in a neutron field, if the measured value of the absorbed dose with a tissue-equivalent ionization chamber is 1.02±0.01 mGy, may the dose equivalent be taken as 10.2±0.1 mSv? Should it be 10.2 or 11? The author considers it is rather 10 or 20. Even if effort is exerted on the precision measurement of absorbed dose, if the coefficient by which it is multiplied is not precise, this is meaningless. [Absorbed dose] x [Radiation quality factor] = [Dose equivalent] seems peculiar. How accurately can dose equivalent be evaluated? The descriptions related to uncertainties in the publications of ICRU and ICRP are introduced, concerning the radiation quality factor, the accuracy of measuring dose equivalent and so on. Dose equivalent shows the criterion for the degree of risk, or it is considered only as a controlling quantity. The description in the ICRU report of 1973 related to dose equivalent and its unit is cited. It was concluded that dose equivalent can be considered only as the absorbed dose multiplied by a dimensionless factor. The author presented the questions. (K.I.)
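The author's point, that the poorly known quality factor swamps a 1% absorbed-dose measurement, can be seen with standard first-order error propagation for a product. The numbers below are illustrative, with Q = 10 assumed uncertain by ±5 to mirror the "10 or 20" remark:

```python
import math

def dose_equivalent(D, sigma_D, Q, sigma_Q):
    """H = Q * D with first-order Gaussian error propagation for a product:
    (sigma_H / H)^2 = (sigma_D / D)^2 + (sigma_Q / Q)^2."""
    H = Q * D
    sigma_H = H * math.hypot(sigma_D / D, sigma_Q / Q)
    return H, sigma_H

# absorbed dose known to about 1%, quality factor assumed uncertain by 50%
H, sigma_H = dose_equivalent(D=1.02, sigma_D=0.01, Q=10.0, sigma_Q=5.0)
# the 1% dose precision is swamped: sigma_H is about 5 mSv on H = 10.2 mSv
assert sigma_H > 5.0
```

Under these assumptions the quoted 10.2±0.1 mSv overstates the precision by a factor of about fifty, which is the substance of the author's objection.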
Orientifold Planar Equivalence: The Chiral Condensate
DEFF Research Database (Denmark)
Armoni, Adi; Lucini, Biagio; Patella, Agostino
2008-01-01
The recently introduced orientifold planar equivalence is a promising tool for solving non-perturbative problems in QCD. One of the predictions of orientifold planar equivalence is that the chiral condensates of a theory with $N_f$ flavours of Dirac fermions in the symmetric (or antisymmetric...
7 CFR 1005.54 - Equivalent price.
2010-01-01
... 7 Agriculture 9 2010-01-01 2009-01-01 true Equivalent price. 1005.54 Section 1005.54 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements... Handling Class Prices § 1005.54 Equivalent price. See § 1000.54. Uniform Prices ...
7 CFR 1126.54 - Equivalent price.
2010-01-01
... 7 Agriculture 9 2010-01-01 2009-01-01 true Equivalent price. 1126.54 Section 1126.54 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements... Handling Class Prices § 1126.54 Equivalent price. See § 1000.54. Producer Price Differential ...
7 CFR 1001.54 - Equivalent price.
2010-01-01
... 7 Agriculture 9 2010-01-01 2009-01-01 true Equivalent price. 1001.54 Section 1001.54 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements... Handling Class Prices § 1001.54 Equivalent price. See § 1000.54. Producer Price Differential ...
7 CFR 1032.54 - Equivalent price.
2010-01-01
... 7 Agriculture 9 2010-01-01 2009-01-01 true Equivalent price. 1032.54 Section 1032.54 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements... Handling Class Prices § 1032.54 Equivalent price. See § 1000.54. Producer Price Differential ...
7 CFR 1124.54 - Equivalent price.
2010-01-01
... 7 Agriculture 9 2010-01-01 2009-01-01 true Equivalent price. 1124.54 Section 1124.54 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements... Regulating Handling Class Prices § 1124.54 Equivalent price. See § 1000.54. Producer Price Differential ...
7 CFR 1030.54 - Equivalent price.
2010-01-01
... 7 Agriculture 9 2010-01-01 2009-01-01 true Equivalent price. 1030.54 Section 1030.54 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements... Handling Class Prices § 1030.54 Equivalent price. See § 1000.54. ...
7 CFR 1033.54 - Equivalent price.
2010-01-01
... 7 Agriculture 9 2010-01-01 2009-01-01 true Equivalent price. 1033.54 Section 1033.54 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements... Handling Class Prices § 1033.54 Equivalent price. See § 1000.54. Producer Price Differential ...
7 CFR 1131.54 - Equivalent price.
2010-01-01
... 7 Agriculture 9 2010-01-01 2009-01-01 true Equivalent price. 1131.54 Section 1131.54 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements... Handling Class Prices § 1131.54 Equivalent price. See § 1000.54. Uniform Prices ...
7 CFR 1006.54 - Equivalent price.
2010-01-01
... 7 Agriculture 9 2010-01-01 2009-01-01 true Equivalent price. 1006.54 Section 1006.54 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements... Handling Class Prices § 1006.54 Equivalent price. See § 1000.54. Uniform Prices ...
7 CFR 1007.54 - Equivalent price.
2010-01-01
... 7 Agriculture 9 2010-01-01 2009-01-01 true Equivalent price. 1007.54 Section 1007.54 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements... Handling Class Prices § 1007.54 Equivalent price. See § 1000.54. Uniform Prices ...
7 CFR 1000.54 - Equivalent price.
2010-01-01
... 7 Agriculture 9 2010-01-01 2009-01-01 true Equivalent price. 1000.54 Section 1000.54 Agriculture... Prices § 1000.54 Equivalent price. If for any reason a price or pricing constituent required for computing the prices described in § 1000.50 is not available, the market administrator shall use a price or...
Finding small equivalent decision trees is hard
Zantema, H.; Bodlaender, H.L.
2000-01-01
Two decision trees are called decision equivalent if they represent the same function, i.e., they yield the same result for every possible input. We prove that, given a decision tree and a number, deciding whether there is a decision-equivalent decision tree of size at most that number is NP-complete. As
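For contrast with the hardness result: merely *checking* decision equivalence of two given trees over n boolean variables is straightforward by exhausting all 2^n inputs (exponential in n, but conceptually simple); what the paper shows is hard is finding a small equivalent tree. A minimal sketch, with a tuple encoding of trees chosen for illustration:

```python
from itertools import product

# A tree is either ("leaf", value) or ("node", var, low, high), where the
# "high" branch is taken when input bit x[var] is 1.
def evaluate(tree, x):
    if tree[0] == "leaf":
        return tree[1]
    _, var, low, high = tree
    return evaluate(high if x[var] else low, x)

def decision_equivalent(t1, t2, n_vars):
    """Check decision equivalence by exhaustive comparison on all 2^n inputs."""
    return all(evaluate(t1, x) == evaluate(t2, x)
               for x in product((0, 1), repeat=n_vars))

# two structurally different trees, both computing x0 AND x1
t1 = ("node", 0, ("leaf", 0), ("node", 1, ("leaf", 0), ("leaf", 1)))
t2 = ("node", 1, ("leaf", 0), ("node", 0, ("leaf", 0), ("leaf", 1)))
assert decision_equivalent(t1, t2, 2)
```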
What is Metaphysical Equivalence? | Miller | Philosophical Papers
African Journals Online (AJOL)
Theories are metaphysically equivalent just if there is no fact of the matter that could render one theory true and the other false. In this paper I argue that if we are judiciously to resolve disputes about whether theories are equivalent or not, we need to develop testable criteria that will give us epistemic access to the obtaining ...
EQUIVALENT MODELS IN COVARIANCE STRUCTURE-ANALYSIS
LUIJBEN, TCW
1991-01-01
Defining equivalent models as those that reproduce the same set of covariance matrices, necessary and sufficient conditions are stated for the local equivalence of two expanded identified models M1 and M2 when fitting the more restricted model M0. Assuming several regularity conditions, the rank
Fermi and the Theory of Weak Interactions
Indian Academy of Sciences (India)
IAS Admin
Quantum Field Theory created by Dirac and used by Fermi to describe weak ... of classical electrodynamics (from which the electric field and magnetic field can be obtained .... Universe. However, thanks to weak interactions, this can be done.
Nuclear beta decay and the weak interaction
International Nuclear Information System (INIS)
Kean, D.C.
1975-11-01
Short notes are presented on various aspects of nuclear beta decay and weak interactions including: super-allowed transitions, parity violation, interaction strengths, coupling constants, and the current-current formalism of weak interaction. (R.L.)
Nuclear detectors: principles and applications
International Nuclear Information System (INIS)
Belhadj, Marouane
1999-01-01
Nuclear technology is a vast domain with several applications; in hydrology, for instance, it is used in the analysis of underground water and in carbon-14 dating. Our study consists of presenting nuclear detectors in terms of their principle of operation and their electronic constitution. However, because of some technical problems, we have not made an in-depth study of their applications, which could certainly have given strong support to our subject. In spite of the existence of high-performance, technologically advanced equipment in the centre, the problem of instrument control remains to be solved. Therefore, the calibration of this equipment remains the best guarantee of good counting quality. Besides, it allows us to examine the influence of external and internal parameters on the equipment and the sources of measurement errors, in order to introduce the corresponding corrections. (author). 22 refs
Equivalence in Ventilation and Indoor Air Quality
Energy Technology Data Exchange (ETDEWEB)
Sherman, Max; Walker, Iain; Logue, Jennifer
2011-08-01
We ventilate buildings to provide acceptable indoor air quality (IAQ). Ventilation standards (such as American Society of Heating, Refrigerating, and Air-Conditioning Engineers [ASHRAE] Standard 62) specify minimum ventilation rates without taking into account the impact of those rates on IAQ. Innovative ventilation management is often a desirable element of reducing energy consumption or improving IAQ or comfort. Variable ventilation is one innovative strategy. To use variable ventilation in a way that meets standards, it is necessary to have a method for determining equivalence in terms of either ventilation or indoor air quality. This study develops methods to calculate either equivalent ventilation or equivalent IAQ. We demonstrate that equivalent ventilation can be used as the basis for dynamic ventilation control, reducing peak load and infiltration of outdoor contaminants. We also show that equivalent IAQ could allow some contaminants to exceed current standards if other contaminants are more stringently controlled.
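Why equivalence must be defined in exposure terms rather than airflow terms can be sketched with a well-mixed single-zone mass balance: two schedules with the same *average* air-change rate generally do not produce the same average concentration. All numbers below are illustrative; this is not the ASHRAE 62 procedure or the report's actual method:

```python
import numpy as np

def mean_concentration(ach, source, dt=0.01, hours=240.0):
    """Explicit-Euler integration of the well-mixed mass balance
    dC/dt = source - A(t) * C, returning the time-averaged concentration
    (a proxy for occupant exposure). A(t) is the air-change rate in 1/h."""
    n = int(hours / dt)
    t = np.arange(n) * dt
    A = ach(t)
    C = np.zeros(n)
    for i in range(1, n):
        C[i] = C[i - 1] + dt * (source - A[i - 1] * C[i - 1])
    return C.mean()

S = 1.0  # constant emission rate per unit volume (arbitrary units per hour)
# constant ventilation at 0.5 air changes per hour
base = mean_concentration(lambda t: np.full_like(t, 0.5), S)
# variable schedule with the *same* average airflow: 0.1/h at night, 0.9/h by day
var = mean_concentration(lambda t: np.where((t % 24.0) < 12.0, 0.1, 0.9), S)
# equal average airflow does not give equal average exposure
assert var > base
```

Because steady-state concentration scales as 1/A, averaging airflow understates exposure for any schedule that spends time at low rates, so an equivalent variable schedule must be found by matching the concentration average itself.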
Beyond Language Equivalence on Visibly Pushdown Automata
DEFF Research Database (Denmark)
Srba, Jiri
2009-01-01
We study (bi)simulation-like preorder/equivalence checking on the class of visibly pushdown automata and its natural subclasses visibly BPA (Basic Process Algebra) and visibly one-counter automata. We describe generic methods for proving complexity upper and lower bounds for a number of studied...... preorders and equivalences like simulation, completed simulation, ready simulation, 2-nested simulation preorders/equivalences and bisimulation equivalence. Our main results are that all the mentioned equivalences and preorders are EXPTIME-complete on visibly pushdown automata, PSPACE-complete on visibly...... one-counter automata and P-complete on visibly BPA. Our PSPACE lower bound for visibly one-counter automata improves also the previously known DP-hardness results for ordinary one-counter automata and one-counter nets. Finally, we study regularity checking problems for visibly pushdown automata...
Equity-regarding poverty measures: differences in needs and the role of equivalence scales
Udo Ebert
2010-01-01
The paper investigates the definition of equity-regarding poverty measures when there are different household types in the population. It demonstrates the implications of a between-type regressive transfer principle for poverty measures, for the choice of poverty lines, and for the measurement of living standard. The role of equivalence scales, which are popular in empirical work on poverty measurement, is clarified.
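A concrete example of how equivalence scales enter the measurement of living standard: the widely used OECD-modified scale weights the first adult 1.0, each further adult 0.5, and each child 0.3. The function name is our own; the scale is one common choice among many, which is exactly the dependence the paper examines:

```python
def equivalised_income(income, adults, children):
    """Household income divided by the OECD-modified equivalence scale:
    1.0 for the first adult, 0.5 per additional adult, 0.3 per child."""
    scale = 1.0 + 0.5 * (adults - 1) + 0.3 * children
    return income / scale

# a couple with two children needs 2.1 times a single adult's income
# to reach the same equivalised living standard under this scale
assert abs(equivalised_income(42000.0, 2, 2) - 20000.0) < 1e-9
```

A between-type regressive transfer principle then constrains how a poverty measure may trade off equivalised incomes across household types, which is why the choice of scale interacts with the choice of poverty lines.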
Visser, J.H.M.; Bigaj, A.J.
2014-01-01
In order not to hamper innovation, the Dutch National Building Regulations (NBR) allow an alternative approval route for new building materials. It is based on the principle of equivalent performance, which states that if the solution proposed can be proven to have the same level of safety,
Dimensional cosmological principles
International Nuclear Information System (INIS)
Chi, L.K.
1985-01-01
The dimensional cosmological principles proposed by Wesson require that the density, pressure, and mass of cosmological models be functions of the dimensionless variables which are themselves combinations of the gravitational constant, the speed of light, and the spacetime coordinates. The space coordinate is not the comoving coordinate. In this paper, the dimensional cosmological principle and the dimensional perfect cosmological principle are reformulated by using the comoving coordinate. The dimensional perfect cosmological principle is further modified to allow the possibility that mass creation may occur. Self-similar spacetimes are found to be models obeying the new dimensional cosmological principle
Magnified Weak Lensing Cross Correlation Tomography
Energy Technology Data Exchange (ETDEWEB)
Ulmer, Melville P., Clowe, Douglas I.
2010-11-30
This project carried out a weak lensing tomography (WLT) measurement around rich clusters of galaxies. This project used ground-based photometric redshift data combined with HST archived cluster images that provide the WLT and cluster mass modeling. The technique has already produced interesting results (Guennou et al., 2010, Astronomy & Astrophysics, Vol. 523, p. 21, and Clowe et al., 2011, to be submitted). Guennou et al. have validated that the necessary accuracy can be achieved with photometric redshifts for our purposes. Clowe et al., in the paper titled "The DAFT/FADA survey. II. Tomographic weak lensing signal from 10 high redshift clusters," have shown for the **first time**, via this purely geometrical technique, which does not assume a standard rod or candle, that a cosmological constant is **required** for flat cosmologies. The intent of this project is not to produce the best constraint on the value of the dark energy equation of state, w. Rather, this project is to carry out a sustained effort of weak lensing tomography that will naturally feed into the near-term Dark Energy Survey (DES) and to provide invaluable mass calibration for that project. These results will greatly advance a key cosmological method which will be applied to the top-rated ground-based project in the Astro2010 decadal survey, LSST. Weak lensing tomography is one of the key science drivers behind LSST. CO-I Clowe is on the weak lensing LSST committee, and the senior scientist on this project at FNAL, James Annis, plays a leading role in the DES. This project has built on successful proposals to obtain ground-based imaging for the cluster sample. By 1 Jan, it is anticipated the project will have accumulated complete 5-color photometry on 30 (or about 1/3) of the targeted cluster sample (a public webpage for the survey is available at http://cencos.oamp.fr/DAFT/ and has a current summary of the observational status of various clusters). In all, the project has now been awarded the equivalent of over 60
A generalized Principle of Relativity
International Nuclear Information System (INIS)
Felice, Fernando de; Preti, Giovanni
2009-01-01
The Theory of Relativity stands as a firm cornerstone on which modern physics is founded. In this paper we bring to light a hitherto undisclosed richness of this theory, namely its admitting a consistent reformulation which is able to provide a unified scenario for all kinds of particles, be they lightlike or not. This result hinges on a generalized Principle of Relativity which is intrinsic to Einstein's theory - a fact which went completely unnoticed before. The road leading to this generalization starts, in the very spirit of Relativity, from enhancing full equivalence between the four spacetime directions by requiring full equivalence between the motions along these four spacetime directions as well. So far, no measurable spatial velocity in the direction of the time axis has ever been defined, on the same footing as the usual velocities - the 'space-velocities' - in the local three-space of a given observer. In this paper, we show how Relativity allows such a 'time-velocity' to be defined in a very natural way, for any particle and in any reference frame. As a consequence of this natural definition, it also follows that the time- and space-velocity vectors sum up to define a spacelike 'world-velocity' vector, the modulus of which - the world-velocity - turns out to be equal to Maxwell's constant c, irrespective of the observer who measures it. This measurable world-velocity (not to be confused with the space-velocities we are used to dealing with) therefore represents the speed at which all kinds of particles move in spacetime, according to any observer. As remarked above, the unifying scenario thus emerging is intrinsic to Einstein's Theory; it extends the role traditionally assigned to Maxwell's constant c, and can therefore justly be referred to as 'a generalized Principle of Relativity'.
Weakly distributive modules. Applications to supplement submodules
Indian Academy of Sciences (India)
Abstract. In this paper, we define and study weakly distributive modules as a proper generalization of distributive modules. We prove that weakly distributive supplemented modules are amply supplemented, and that in a weakly distributive supplemented module every submodule has a unique coclosure. This generalizes a result of ...
Physical acoustics principles and methods
Mason, Warren P
2012-01-01
Physical Acoustics: Principles and Methods, Volume IV, Part B: Applications to Quantum and Solid State Physics provides an introduction to the various applications of quantum mechanics to acoustics by describing several processes for which such considerations are essential. This book discusses the transmission of sound waves in molten metals. Comprised of seven chapters, this volume starts with an overview of the interactions that can happen between electrons and acoustic waves when magnetic fields are present. This text then describes acoustic and plasma waves in ionized gases wherein oscillations are subject to hydrodynamic as well as electromagnetic forces. Other chapters examine the resonances and relaxations that can take place in polymer systems. This book discusses as well the general theory of the interaction of a weak sinusoidal field with matter. The final chapter describes the sound velocities in the rocks composing the Earth. This book is a valuable resource for physicists and engineers.
2013-11-12
... Methods: Designation of Five New Equivalent Methods AGENCY: Office of Research and Development; Environmental Protection Agency (EPA). ACTION: Notice of the designation of five new equivalent methods for...) has designated, in accordance with 40 CFR Part 53, five new equivalent methods, one for measuring...
2012-10-05
... Methods: Designation of Three New Equivalent Methods AGENCY: Environmental Protection Agency. ACTION: Notice of the designation of three new equivalent methods for monitoring ambient air quality. SUMMARY... equivalent methods, one for measuring concentrations of PM 2.5 , one for measuring concentrations of PM 10...
Variational principles for locally variational forms
International Nuclear Information System (INIS)
Brajercik, J.; Krupka, D.
2005-01-01
We present the theory of higher order local variational principles in fibered manifolds, in which the fundamental global concept is a locally variational dynamical form. Any two Lepage forms defining a local variational principle for this form differ, on the intersection of their domains, by a variationally trivial form. In this sense, though in a different geometric setting, the local variational principles satisfy properties analogous to those of the variational functionals of the Chern-Simons type. The resulting theory of extremals and symmetries extends the first order theories of the Lagrange-Souriau form, presented by Grigore and Popp, and the closed equivalents of the first order Euler-Lagrange forms of Hakova and Krupkova. Conceptually, our approach differs from that of Prieto, who uses the Poincare-Cartan forms, which do not have higher order global analogues
Principles of Economic Rationality in Mice.
Rivalan, Marion; Winter, York; Nachev, Vladislav
2017-12-12
Humans and non-human animals frequently violate principles of economic rationality, such as transitivity, independence of irrelevant alternatives, and regularity. The conditions that lead to these violations are not completely understood. Here we report a study on mice tested in automated home-cage setups using rewards of drinking water. Rewards differed in one of two dimensions, volume or probability. Our results suggest that mouse choice conforms to the principles of economic rationality for options that differ along a single reward dimension. A psychometric analysis of mouse choices further revealed that mice responded more strongly to differences in probability than to differences in volume, despite equivalence in return rates. This study also demonstrates the synergistic effect between the principles of economic rationality and psychophysics in making quantitative predictions about choices of healthy laboratory mice. This opens up new possibilities for the analyses of multi-dimensional choice and the use of mice with cognitive impairments that may violate economic rationality.
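The transitivity principle tested in such studies lends itself to a direct check: if option A is chosen over B and B over C, economic rationality requires A to be chosen over C. A minimal sketch of such a check (not code from the study; the option labels and data layout are invented for illustration):

```python
from itertools import permutations

def violates_transitivity(prefers):
    """prefers[(a, b)] is True when option a was chosen over option b.

    Returns the first triple (a, b, c) with a > b and b > c but not a > c,
    or None if the recorded preferences contain no such violation.
    """
    options = {x for pair in prefers for x in pair}
    for a, b, c in permutations(options, 3):
        if prefers.get((a, b)) and prefers.get((b, c)) and prefers.get((a, c)) is False:
            return (a, b, c)
    return None

# Consistent preferences: A > B > C, and A > C as transitivity requires.
assert violates_transitivity({("A", "B"): True, ("B", "C"): True, ("A", "C"): True}) is None

# Intransitive cycle: A > B and B > C, yet C was chosen over A.
assert violates_transitivity({("A", "B"): True, ("B", "C"): True, ("A", "C"): False}) == ("A", "B", "C")
```

Pairs absent from the dictionary are treated as untested rather than as violations, which matches the partial choice data a home-cage experiment typically yields.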
A variational principle for the plasma centrifuge
International Nuclear Information System (INIS)
Ludwig, G.O.
1986-09-01
A variational principle is derived which describes the stationary state of the plasma column in a plasma centrifuge. Starting from the fluid equations in a rotating frame, the theory is developed using the method of irreversible thermodynamics. This formulation leads easily to an expression for the density distribution of the l-species at sedimentation equilibrium, taking into account the effect of the electric and magnetic forces. Assuming stationary boundary conditions and rigid-rotation nonequilibrium states, the condition for thermodynamic stability, integrated over the volume of the system, reduces, under certain restrictions, to the principle of minimum entropy production in the stationary state. This principle yields a variational problem which is equivalent to the original problem posed by the stationary fluid equations. The variational method is useful for obtaining approximate solutions that give the electric potential and current distributions in the rotating plasma column consistent with an assumed plasma density profile. (Author) [pt
Analytical and numerical construction of equivalent cables.
Lindsay, K A; Rosenberg, J R; Tucker, G
2003-08-01
The mathematical complexity experienced when applying cable theory to arbitrarily branched dendrites has led to the development of a simple representation of any branched dendrite called the equivalent cable. The equivalent cable is an unbranched model of a dendrite together with a one-to-one mapping of potentials and currents on the branched model to those on the unbranched model, and vice versa. The piecewise uniform cable, with a symmetrised tri-diagonal system matrix, is shown to represent the canonical form for an equivalent cable. Through a novel application of the Laplace transform it is demonstrated that an arbitrary branched model of a dendrite can be transformed to the canonical form of an equivalent cable. The characteristic properties of the equivalent cable are extracted from the matrix for the transformed branched model. The one-to-one mapping follows automatically from the construction of the equivalent cable. The equivalent cable is used to provide a new procedure for characterising the location of synaptic contacts on spinal interneurons.
Modified physiologically equivalent temperature—basics and applications for western European climate
Chen, Yung-Chang; Matzarakis, Andreas
2018-05-01
A new thermal index, the modified physiologically equivalent temperature (mPET), has been developed for universal application in different climate zones. The mPET improves on the weaknesses of the original physiologically equivalent temperature (PET) by enhancing the evaluation of humidity and clothing variability. The principles of mPET and the differences between the original PET and mPET are introduced and discussed in this study. Furthermore, this study demonstrates the usability of mPET with climatic data from Freiburg, in Western Europe. Comparisons of PET, mPET, and the Universal Thermal Climate Index (UTCI) show that mPET gives a more realistic estimation of human thermal sensation than the other two thermal indices (PET, UTCI) for the thermal conditions in Freiburg. Additionally, a comparison of physiological parameters between the mPET model and the PET model (the Munich Energy Balance Model for Individuals, MEMI) is presented. The core and skin temperatures of the PET model drop more sharply to low values during cold stress than those of the mPET model, indicating that the mPET model gives a more realistic core temperature and mean skin temperature. A statistical regression analysis of mPET on air temperature, mean radiant temperature, vapor pressure, and wind speed has been carried out. The R-squared value (0.995) shows a strong relationship between the human-biometeorological factors and mPET, and the regression coefficient of each factor represents that factor's influence on mPET (e.g., ±1 °C of T a = ±0.54 °C of mPET). The first-order regression predicts mPET at Freiburg during 2003 more realistically than higher-order regression models, because the mPET predicted from the first-order regression differs less from the mPET calculated from measurement data. Statistical tests confirm that mPET can effectively evaluate the
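The first-order regression described in the abstract can be sketched as an ordinary least-squares fit of mPET on the four human-biometeorological factors. The data below are synthetic placeholders rather than the Freiburg measurements, and only the air-temperature coefficient (0.54) echoes a value reported in the abstract; the other coefficients are invented:

```python
import numpy as np

# Synthetic stand-ins for air temperature Ta [deg C], mean radiant temperature
# Tmrt [deg C], vapour pressure VP [hPa], and wind speed v [m/s].
rng = np.random.default_rng(0)
n = 200
Ta = rng.uniform(-5, 30, n)
Tmrt = Ta + rng.uniform(0, 20, n)
VP = rng.uniform(5, 25, n)
v = rng.uniform(0.5, 5, n)

# Assumed linear model; 0.54 deg C of mPET per deg C of Ta as in the abstract,
# the remaining coefficients are illustrative only.
mpet = 0.54 * Ta + 0.30 * Tmrt - 0.05 * VP - 1.2 * v + 2.0

# First-order (linear) OLS fit: design matrix with an intercept column.
X = np.column_stack([Ta, Tmrt, VP, v, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, mpet, rcond=None)
print(np.round(coef, 2))  # ~ [0.54, 0.30, -0.05, -1.2, 2.0] on this noise-free data
```

On noise-free data the fit recovers the assumed coefficients exactly; with real measurements the same design matrix yields the coefficient-per-factor interpretation used in the study.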
Measurements of weak conversion lines
International Nuclear Information System (INIS)
Feoktistov, A.I.; Frantsev, Yu.E.
1979-01-01
Described is a new method for measuring weak conversion lines with the help of a β spectrometer of the π√2 type, which increases the reliability of the results obtained. In this method the measurements are carried out in short series, with the information obtained stored on punched tape. The spectrometer magnetic field was stabilized during the measurement of the conversion spectra with the help of three NMR recorders. Instead of the dependence of the pulse counting rate on the magnetic field, the dependence of the counting rate on the voltage applied between the source and the spectrometer chamber was measured. A short description of the automatic set-up for measuring conversion lines according to the proposed method is given. The main elements of the set-up are the voltage multiplexer, timer, printer, scaler and pulse analyzer. With this method the K 1035.8 keV 182 Ta line was obtained, as the sum of 96 measurement series. Each measurement took 640 s, and 12 points were taken on the line
Methodology for analyzing weak spectra
International Nuclear Information System (INIS)
Yankovich, T.L.; Swainson, I.P.
2000-02-01
There is considerable interest in quantifying radionuclide transfer between environmental compartments. However, in many cases, it can be a challenge to detect concentrations of gamma-emitting radionuclides due to their low levels in environmental samples. As a result, it is valuable to develop analytical protocols to ensure consistent analysis of the areas under weak peaks. The current study has focused on testing how reproducibly peak areas and baselines can be determined using two analytical approaches. The first approach, which can be carried out using Maestro software, involves extracting net counts under a curve without fitting a functional form to the peak, whereas the second approach, which is used by most other peak fitting programs, determines net counts from spectra by fitting a Gaussian form to the data. It was found that the second approach produces more consistent peak area and baseline measurements, with the ability to de-convolute multiple, overlapping peaks. In addition, programs, such as Peak Fit, which can be used to fit a form to spectral data, often provide goodness of fit analyses, since the Gaussian form can be described using a characteristic equation against which peak data can be tested for their statistical significance. (author)
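The functional-form approach described above can be sketched with SciPy's curve_fit: a Gaussian on a linear baseline is fitted to a peak, and the net peak area follows from the fitted amplitude and width. The spectrum below is synthetic and the peak parameters are invented; this illustrates the general technique, not the Maestro or PeakFit software mentioned in the abstract:

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss_on_baseline(x, amp, mu, sigma, b0, b1):
    """Gaussian peak of amplitude amp, centre mu, width sigma,
    sitting on a linear baseline b0 + b1*x."""
    return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2) + b0 + b1 * x

# Synthetic weak peak on a sloping baseline (channel number vs counts).
x = np.linspace(0, 100, 201)
true_params = (50.0, 48.0, 3.0, 20.0, 0.1)
y = gauss_on_baseline(x, *true_params)

# Fit starting from a rough initial guess.
p0 = (30.0, 45.0, 5.0, 10.0, 0.0)
popt, pcov = curve_fit(gauss_on_baseline, x, y, p0=p0)
amp, mu, sigma = popt[:3]

# Net area under a Gaussian peak, baseline excluded: amp * sigma * sqrt(2*pi).
area = amp * sigma * np.sqrt(2 * np.pi)
```

Because the baseline is fitted simultaneously with the peak, the extracted area is the net count, and the covariance matrix pcov supports the kind of goodness-of-fit reasoning the abstract attributes to fitting programs.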
Geometry of the local equivalence of states
Energy Technology Data Exchange (ETDEWEB)
Sawicki, A; Kus, M, E-mail: assawi@cft.edu.pl, E-mail: marek.kus@cft.edu.pl [Center for Theoretical Physics, Polish Academy of Sciences, Al Lotnikow 32/46, 02-668 Warszawa (Poland)
2011-12-09
We present a description of locally equivalent states in terms of symplectic geometry. Using the moment map between local orbits in the space of states and coadjoint orbits of the local unitary group, we reduce the problem of local unitary equivalence to an easy part consisting of identifying the proper coadjoint orbit and a harder problem of the geometry of fibers of the moment map. We give a detailed analysis of the properties of orbits of 'equally entangled states'. In particular, we show connections between certain symplectic properties of orbits such as their isotropy and coisotropy with effective criteria of local unitary equivalence. (paper)
HIGH-REDSHIFT SDSS QUASARS WITH WEAK EMISSION LINES
International Nuclear Information System (INIS)
Diamond-Stanic, Aleksandar M.; Fan Xiaohui; Jiang Linhua; Kim, J. Serena; Schmidt, Gary D.; Smith, Paul S.; Vestergaard, Marianne; Young, Jason E.; Brandt, W. N.; Shemmer, Ohad; Gibson, Robert R.; Schneider, Donald P.; Strauss, Michael A.; Shen Yue; Anderson, Scott F.; Carilli, Christopher L.; Richards, Gordon T.
2009-01-01
We identify a sample of 74 high-redshift quasars (z > 3) with weak emission lines from the Fifth Data Release of the Sloan Digital Sky Survey and present infrared, optical, and radio observations of a subsample of four objects at z > 4. These weak emission-line quasars (WLQs) constitute a prominent tail of the Lyα + N v equivalent width distribution, and we compare them to quasars with more typical emission-line properties and to low-redshift active galactic nuclei with weak/absent emission lines, namely BL Lac objects. We find that WLQs exhibit hot (T ∼ 1000 K) thermal dust emission and have rest-frame 0.1-5 μm spectral energy distributions that are quite similar to those of normal quasars. The variability, polarization, and radio properties of WLQs are also different from those of BL Lacs, making continuum boosting by a relativistic jet an unlikely physical interpretation. The most probable scenario for WLQs involves broad-line region properties that are physically distinct from those of normal quasars.
On the equivalence of chaos control systems
International Nuclear Information System (INIS)
Wang Xiaofan
2003-01-01
For a given chaotic system, different control systems can be constructed depending on which parameter is tuned or where the external input is added. We prove that two different feedback control systems are qualitatively equivalent if they are feedback linearizable
Equivalence relations and the reinforcement contingency.
Sidman, M
2000-07-01
Where do equivalence relations come from? One possible answer is that they arise directly from the reinforcement contingency. That is to say, a reinforcement contingency produces two types of outcome: (a) 2-, 3-, 4-, 5-, or n-term units of analysis that are known, respectively, as operant reinforcement, simple discrimination, conditional discrimination, second-order conditional discrimination, and so on; and (b) equivalence relations that consist of ordered pairs of all positive elements that participate in the contingency. This conception of the origin of equivalence relations leads to a number of new and verifiable ways of conceptualizing equivalence relations and, more generally, the stimulus control of operant behavior. The theory is also capable of experimental disproof.
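The closure of such ordered pairs into equivalence classes can be sketched with a union-find structure; the stimulus labels (A1, B1, ...) follow the usual matching-to-sample convention but are hypothetical:

```python
def equivalence_classes(pairs):
    """Merge ordered pairs into equivalence classes, i.e. the reflexive,
    symmetric, transitive closure of the given relation, via union-find."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    for a, b in pairs:
        parent[find(a)] = find(b)  # union the two classes

    classes = {}
    for x in list(parent):
        classes.setdefault(find(x), set()).add(x)
    return sorted(sorted(c) for c in classes.values())

# Trained pairs A1-B1 and B1-C1 yield the derived class {A1, B1, C1};
# A2-B2 stays in its own class.
print(equivalence_classes([("A1", "B1"), ("B1", "C1"), ("A2", "B2")]))
# [['A1', 'B1', 'C1'], ['A2', 'B2']]
```

The derived membership of A1 and C1 in one class, never directly trained, is the computational analogue of the emergent relations the theory describes.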
REFractions: The Representing Equivalent Fractions Game
Tucker, Stephen I.
2014-01-01
Stephen Tucker presents a fractions game that addresses a range of fraction concepts including equivalence and computation. The REFractions game also improves students' fluency with representing, comparing and adding fractions.
ON THE EQUIVALENCE OF THE ABEL EQUATION
Institute of Scientific and Technical Information of China (English)
Anonymous
2006-01-01
This article uses the reflecting function of Mironenko to study some complicated differential equations which are equivalent to the Abel equation. The results are applied to discuss the behavior of solutions of these complicated differential equations.
Interpretation of equivalences and cultural untranslatability
African Journals Online (AJOL)
jgmweri
translatability in cultural diversity in terms of equivalences such as vocabulary or lexical ..... A KSL interpreter who does not understand this English idiom may literally interpret it .... Nida, E.A. (1958) Analysis of meaning and dictionary making.
Biomechanics principles and practices
Peterson, Donald R
2014-01-01
Presents Current Principles and ApplicationsBiomedical engineering is considered to be the most expansive of all the engineering sciences. Its function involves the direct combination of core engineering sciences as well as knowledge of nonengineering disciplines such as biology and medicine. Drawing on material from the biomechanics section of The Biomedical Engineering Handbook, Fourth Edition and utilizing the expert knowledge of respected published scientists in the application and research of biomechanics, Biomechanics: Principles and Practices discusses the latest principles and applicat
Fusion Research, Volume I: Principles
Dolan, Thomas James
2013-01-01
Fusion Research, Volume I: Principles provides a general description of the methods and problems of fusion research. The book contains three main parts: Principles, Experiments, and Technology. The Principles part describes the conditions necessary for a fusion reaction, as well as the fundamentals of plasma confinement, heating, and diagnostics. The Experiments part details about forty plasma confinement schemes and experiments. The last part explores various engineering problems associated with reactor design, vacuum and magnet systems, materials, plasma purity, fueling, blankets, neutronics
Homological properties of modules with finite weak injective and weak flat dimensions
Zhao, Tiwei
2017-01-01
In this paper, we define a class of relative derived functors in terms of left or right weak flat resolutions to compute the weak flat dimension of modules. Moreover, we investigate two classes of modules larger than that of weak injective and weak flat modules, study the existence of covers and preenvelopes, and give some applications.
S-equivalent lagrangians in generalized mechanics
International Nuclear Information System (INIS)
Negri, L.J.; Silva, Edna G. da.
1985-01-01
The problem of s-equivalent lagrangians is considered in the realm of generalized mechanics. Some results from ordinary (non-generalized) mechanics are extended to the generalized case. A theorem for the reduction of the higher order lagrangian description to the usual order is found to be useful for the analysis of generalized mechanical systems and leads to a new class of equivalence between lagrangian functions. Some new perspectives are pointed out. (Author) [pt
Database principles programming performance
O'Neil, Patrick
2014-01-01
Database: Principles Programming Performance provides an introduction to the fundamental principles of database systems. This book focuses on database programming and the relationships between principles, programming, and performance.Organized into 10 chapters, this book begins with an overview of database design principles and presents a comprehensive introduction to the concepts used by a DBA. This text then provides grounding in many abstract concepts of the relational model. Other chapters introduce SQL, describing its capabilities and covering the statements and functions of the programmi
National Research Council Canada - National Science Library
Walker, C. H
2012-01-01
"Now in its fourth edition, this exceptionally accessible text provides students with a multidisciplinary perspective and a grounding in the fundamental principles required for research in toxicology today...
Weak boson emission in hadron collider processes
International Nuclear Information System (INIS)
Baur, U.
2007-01-01
The O(α) virtual weak radiative corrections to many hadron collider processes are known to become large and negative at high energies, due to the appearance of Sudakov-like logarithms. At the same order in perturbation theory, weak boson emission diagrams contribute. Since the W and Z bosons are massive, the O(α) virtual weak radiative corrections and the contributions from weak boson emission are separately finite. Thus, unlike in QED or QCD calculations, there is no technical reason for including gauge boson emission diagrams in calculations of electroweak radiative corrections. In most calculations of the O(α) electroweak radiative corrections, weak boson emission diagrams are therefore not taken into account. Another reason for not including these diagrams is that they lead to final states which differ from that of the original process. However, in experiment, one usually considers partially inclusive final states. Weak boson emission diagrams thus should be included in calculations of electroweak radiative corrections. In this paper, I examine the role of weak boson emission in those processes at the Fermilab Tevatron and the CERN LHC for which the one-loop electroweak radiative corrections are known to become large at high energies (inclusive jet, isolated photon, Z+1 jet, Drell-Yan, di-boson, tt, and single top production). In general, I find that the cross section for weak boson emission is substantial at high energies and that weak boson emission and the O(α) virtual weak radiative corrections partially cancel
Heisenberg's principle of uncertainty and the uncertainty relations
International Nuclear Information System (INIS)
Redei, Miklos
1987-01-01
The usual verbal form of the Heisenberg uncertainty principle and the usual mathematical formulation (the so-called uncertainty theorem) are not equivalent. The meaning of the concept 'uncertainty' is not unambiguous, and different interpretations are used in the literature. Recently, renewed interest has appeared in reinterpreting and reformulating the precise meaning of Heisenberg's principle and in finding an adequate mathematical form. The suggested new theorems are surveyed and critically analyzed. (D.Gy.) 20 refs
The role of general relativity in the uncertainty principle
International Nuclear Information System (INIS)
Padmanabhan, T.
1986-01-01
The role played by general relativity in quantum mechanics (especially as regards the uncertainty principle) is investigated. It is confirmed that the validity of the time-energy uncertainty relation does depend on gravitational time dilation. It is also shown that there exists an intrinsic lower bound to the accuracy with which the acceleration due to gravity can be measured. The notion of the equivalence principle in quantum mechanics is clarified. (author)
Further investigation on the precise formulation of the equivalence theorem
International Nuclear Information System (INIS)
He, H.; Kuang, Y.; Li, X.
1994-01-01
Based on a systematic analysis of the renormalization schemes in the general R ξ gauge, the precise formulation of the equivalence theorem for longitudinal weak boson scatterings is given both in the SU(2) L Higgs theory and in the realistic SU(2)xU(1) electroweak theory to all orders in the perturbation for an arbitrary Higgs boson mass m H . It is shown that there is generally a renormalization-scheme- and ξ-dependent modification factor C mod and a simple formula for C mod is obtained. Furthermore, a convenient particular renormalization scheme is proposed in which C mod is exactly unity. Results of C mod in other currently used schemes are also discussed especially on their ξ and m H dependence through explicit one-loop calculations. It is shown that in some currently used schemes the deviation of C mod from unity and the ξ dependence of C mod are significant even in the large-m H limit. Therefore care should be taken when applying the equivalence theorem
APPLYING THE PRINCIPLES OF ACCOUNTING IN
NAGY CRISTINA MIHAELA; SABĂU CRĂCIUN; ”Tibiscus” University of Timişoara, Faculty of Economic Science
2015-01-01
The application of accounting principles (the accrual-basis accounting principle; the business-continuity principle; the method-consistency principle; the prudence principle; the independence principle; the principle of separate valuation of assets and liabilities; the intangibility principle; the non-compensation principle; the principle of substance over form; the threshold-significance principle) to companies in bankruptcy proceedings has a number of particularities. Thus, some principl...
Classical field approach to quantum weak measurements.
Dressel, Justin; Bliokh, Konstantin Y; Nori, Franco
2014-03-21
By generalizing the quantum weak measurement protocol to the case of quantum fields, we show that weak measurements probe an effective classical background field that describes the average field configuration in the spacetime region between pre- and postselection boundary conditions. The classical field is itself a weak value of the corresponding quantum field operator and satisfies equations of motion that extremize an effective action. Weak measurements perturb this effective action, producing measurable changes to the classical field dynamics. As such, weakly measured effects always correspond to an effective classical field. This general result explains why these effects appear to be robust for pre- and postselected ensembles, and why they can also be measured using classical field techniques that are not weak for individual excitations of the field.
Instrumental systematics and weak gravitational lensing
International Nuclear Information System (INIS)
Mandelbaum, R.
2015-01-01
We present a pedagogical review of the weak gravitational lensing measurement process and its connection to major scientific questions such as dark matter and dark energy. Then we describe common ways of parametrizing systematic errors and understanding how they affect weak lensing measurements. Finally, we discuss several instrumental systematics and how they fit into this context, and conclude with some future perspective on how progress can be made in understanding the impact of instrumental systematics on weak lensing measurements
Fixed points of occasionally weakly biased mappings
Y. Mahendra Singh, M. R. Singh
2012-01-01
Common fixed point results due to Pant et al. [Pant et al., Weak reciprocal continuity and fixed point theorems, Ann Univ Ferrara, 57(1), 181-190 (2011)] are extended to a class of non-commuting operators called occasionally weakly biased pairs [N. Hussain, M. A. Khamsi, A. Latif, Common fixed points for JH-operators and occasionally weakly biased pairs under relaxed conditions, Nonlinear Analysis, 74, 2133-2140 (2011)]. We also provide illustrative examples to justify the improvements.
Robust weak measurements on finite samples
International Nuclear Information System (INIS)
Tollaksen, Jeff
2007-01-01
A new weak measurement procedure is introduced for finite samples which yields accurate weak values that are outside the range of eigenvalues and which do not require an exponentially rare ensemble. This procedure provides a unique advantage in the amplification of small nonrandom signals by minimizing uncertainties in determining the weak value and by minimizing sample size. This procedure can also extend the strength of the coupling between the system and measuring device to a new regime
Spin effects in the weak interaction
International Nuclear Information System (INIS)
Freedman, S.J.; Chicago Univ., IL; Chicago Univ., IL
1990-01-01
Modern experiments investigating the beta decay of the neutron and light nuclei are still providing important constraints on the theory of the weak interaction. Beta decay experiments are yielding more precise values for allowed and induced weak coupling constants and putting constraints on possible extensions to the standard electroweak model. Here we emphasize the implications of recent experiments to pin down the strengths of the weak vector and axial vector couplings of the nucleon
The genetic difference principle.
Farrelly, Colin
2004-01-01
In the newly emerging debates about genetics and justice three distinct principles have begun to emerge concerning what the distributive aim of genetic interventions should be. These principles are: genetic equality, a genetic decent minimum, and the genetic difference principle. In this paper, I examine the rationale of each of these principles and argue that genetic equality and a genetic decent minimum are ill-equipped to tackle what I call the currency problem and the problem of weight. The genetic difference principle is the most promising of the three principles and I develop this principle so that it takes seriously the concerns of just health care and distributive justice in general. Given the strains on public funds for other important social programmes, the costs of pursuing genetic interventions and the nature of genetic interventions, I conclude that a more lax interpretation of the genetic difference principle is appropriate. This interpretation stipulates that genetic inequalities should be arranged so that they are to the greatest reasonable benefit of the least advantaged. Such a proposal is consistent with prioritarianism and provides some practical guidance for non-ideal societies--that is, societies that do not have the endless amount of resources needed to satisfy every requirement of justice.
van Heerwaarden, A.E.; Kaas, R.
1992-01-01
A premium principle is derived, in which the loading for a risk is the reinsurance loading for an excess-of-loss cover. It is shown that the principle is well-behaved in the sense that it results in larger premiums for risks that are larger in stop-loss order or in stochastic dominance.
International Nuclear Information System (INIS)
Fatmi, H.A.; Resconi, G.
1988-01-01
In 1954 while reviewing the theory of communication and cybernetics the late Professor Dennis Gabor presented a new mathematical principle for the design of advanced computers. During our work on these computers it was found that the Gabor formulation can be further advanced to include more recent developments in Lie algebras and geometric probability, giving rise to a new computing principle
International Nuclear Information System (INIS)
Carr, B.J.
1982-01-01
The anthropic principle (the conjecture that certain features of the world are determined by the existence of Man) is discussed, the objections to it are listed, and it is stated that nearly all the constants of nature may be determined by the anthropic principle, which does not give exact values for the constants but only their orders of magnitude. (J.T.)
On Equivalence of Nonequilibrium Thermodynamic and Statistical Entropies
Directory of Open Access Journals (Sweden)
Purushottam D. Gujrati
2015-02-01
We review the concepts of nonequilibrium thermodynamic entropy and of observables and internal variables as state variables, introduced recently by us, and provide a simple first-principles derivation of the additive statistical entropy, applicable to all nonequilibrium states, by treating thermodynamics as an experimental science. We establish their numerical equivalence in several cases, including the most important case, when the thermodynamic entropy is a state function. We discuss various interesting aspects of the two entropies and show that the number of microstates in the Boltzmann entropy includes all possible microstates of non-zero probability, even if the system is trapped in a disjoint component of the microstate space. We show that negative thermodynamic entropy can appear from nonnegative statistical entropy.
Equivalent formulations of “the equation of life”
International Nuclear Information System (INIS)
Ao Ping
2014-01-01
Motivated by progress in theoretical biology a recent proposal on a general and quantitative dynamical framework for nonequilibrium processes and dynamics of complex systems is briefly reviewed. It is nothing but the evolutionary process discovered by Charles Darwin and Alfred Wallace. Such general and structured dynamics may be tentatively named “the equation of life”. Three equivalent formulations are discussed, and it is also pointed out that such a quantitative dynamical framework leads naturally to the powerful Boltzmann-Gibbs distribution and the second law in physics. In this way, the equation of life provides a logically consistent foundation for thermodynamics. This view clarifies a particular outstanding problem and further suggests a unifying principle for physics and biology. (topical review - statistical physics and complex systems)
Weak strange particle production: advantages and difficulties
International Nuclear Information System (INIS)
Angelescu, Tatiana; Baker, O.K.
2002-01-01
Electromagnetic strange particle production studied at Jefferson Laboratory has been an important source of information on strange-particle electromagnetic form factors and on induced and transferred polarization. The high quality of the beam and the detection techniques involved could be an argument for detecting strange particles in weak interactions and for answering questions about cross sections, weak form factors, and neutrino properties, which have not yet been investigated. The paper analyses some aspects related to weak lambda production and detection with the Hall C facilities at Jefferson Laboratory, and the limitations in measuring the weak-interaction quantities. (authors)
International Nuclear Information System (INIS)
Khoury, Justin; Parikh, Maulik
2009-01-01
Mach's principle is the proposition that inertial frames are determined by matter. We put forth and implement a precise correspondence between matter and geometry that realizes Mach's principle. Einstein's equations are not modified and no selection principle is applied to their solutions; Mach's principle is realized wholly within Einstein's general theory of relativity. The key insight is the observation that, in addition to bulk matter, one can also add boundary matter. Given a space-time, and thus the inertial frames, we can read off both boundary and bulk stress tensors, thereby relating matter and geometry. We consider some global conditions that are necessary for the space-time to be reconstructible, in principle, from bulk and boundary matter. Our framework is similar to that of the black hole membrane paradigm and, in asymptotically anti-de Sitter space-times, is consistent with holographic duality.
Variational principles in physics
Basdevant, Jean-Louis
2007-01-01
Optimization under constraints is an essential part of everyday life. Indeed, we routinely solve problems by striking a balance between contradictory interests, individual desires and material contingencies. This notion of equilibrium was dear to thinkers of the enlightenment, as illustrated by Montesquieu’s famous formulation: "In all magistracies, the greatness of the power must be compensated by the brevity of the duration." Astonishingly, natural laws are guided by a similar principle. Variational principles have proven to be surprisingly fertile. For example, Fermat used variational methods to demonstrate that light follows the fastest route from one point to another, an idea which came to be known as Fermat’s principle, a cornerstone of geometrical optics. Variational Principles in Physics explains variational principles and charts their use throughout modern physics. The heart of the book is devoted to the analytical mechanics of Lagrange and Hamilton, the basic tools of any physicist. Prof. Basdev...
Kohn's theorem, Larmor's equivalence principle and the Newton-Hooke group
International Nuclear Information System (INIS)
Gibbons, G.W.; Pope, C.N.
2011-01-01
Highlights: → We show that non-relativistic electrons moving in a magnetic field with a trapping potential admit the Newton-Hooke group as relativity group. → We use this fact to give a group-theoretic interpretation of Kohn's theorem and to obtain the spectrum. → We obtain the lightlike lift of the system, showing that it coincides with the Nappi-Witten spacetime. - Abstract: We consider non-relativistic electrons, each of the same charge-to-mass ratio, moving in an external magnetic field with an interaction potential depending only on the mutual separations, possibly confined by a harmonic trapping potential. We show that the system admits a 'relativity group' which is a one-parameter family of deformations of the standard Galilei group to the Newton-Hooke group, a Wigner-Inönü contraction of the de Sitter group. This allows a group-theoretic interpretation of Kohn's theorem and related results. Larmor's theorem is used to show that the one-parameter family of deformations are all isomorphic. We study the 'Eisenhart' or 'lightlike' lift of the system, exhibiting it as a pp-wave. In the planar case, the Eisenhart lift is the Brdicka-Eardley-Nappi-Witten pp-wave solution of Einstein-Maxwell theory, which may also be regarded as a bi-invariant metric on the Cangemi-Jackiw group.
The MICROSCOPE experiment, ready for the in-orbit test of the equivalence principle
Touboul, P.; Métris, G.; Lebat, V.; Robert, A.
2012-09-01
Deviations from standard general relativity are being intensively tested in various aspects. The MICROSCOPE space mission, which has recently been approved to be launched in 2016, aims at testing the universality of free fall with an accuracy better than 10^-15. The instrument has been developed and most of the sub-systems have been tested to the level required for the detection of accelerations lower than one tenth of a femto-g. Two concentric test masses are electrostatically levitated inside the same silica structure and controlled to follow the same trajectory to better than 0.1 Å. Any asymmetry in the measured electrostatic pressures shall be analysed with respect to the Earth's gravity field. The dedicated satellite, weighing nearly 300 kg, is designed to provide a very steady environment for the experiment and fine control of its attitude and of its drag-free motion along the orbit. Evaluations of the performance of both the instrument and the satellite demonstrate the expected test accuracy. A detailed description of the experiment and the major driving parameters of the instrument, the satellite and the data processing are provided in this paper.
Oscillations of neutral mesons and the equivalence principle for particles and antiparticles
International Nuclear Information System (INIS)
Karshenboim, S.G.
2009-01-01
The K⁰-K̄⁰, D⁰-D̄⁰, and B⁰-B̄⁰ oscillations are extremely sensitive to the K⁰ and K̄⁰ energy at rest. The energy is determined by the value mc², with the related mass, as well as by the energy of the gravitational interaction. Assuming the CPT theorem for the inertial masses and estimating the gravitational potential through the dominant contribution of the gravitational potential of our Galaxy center, we obtain from the experimental data on K⁰-K̄⁰ oscillations the following constraint: |(m_g/m_i)_{K⁰} - (m_g/m_i)_{K̄⁰}| ≤ 8·10⁻¹³, CL = 90%. This estimation is model dependent; in particular, it depends on the way we estimate the gravitational potential. Examining the K⁰-K̄⁰, B⁰-B̄⁰, and D⁰-D̄⁰ oscillations also provides weaker, but model-independent, constraints, which in particular rule out the very possibility of antigravity for antimatter
Existence and multiplicity of weak solutions for a class of degenerate nonlinear elliptic equations
Directory of Open Access Journals (Sweden)
Mihai Mihăilescu
2006-02-01
Full Text Available The goal of this paper is to study the existence and the multiplicity of non-trivial weak solutions for some degenerate nonlinear elliptic equations on the whole space RN. The solutions will be obtained in a subspace of the Sobolev space W1/p(RN. The proofs rely essentially on the Mountain Pass theorem and on Ekeland's Variational principle.
Equivalence of Szegedy's and coined quantum walks
Wong, Thomas G.
2017-09-01
Szegedy's quantum walk is a quantization of a classical random walk or Markov chain, where the walk occurs on the edges of the bipartite double cover of the original graph. To search, one can simply quantize a Markov chain with absorbing vertices. Recently, Santos proposed two alternative search algorithms that instead utilize the sign-flip oracle in Grover's algorithm rather than absorbing vertices. In this paper, we show that these two algorithms are exactly equivalent to two algorithms involving coined quantum walks, which are walks on the vertices of the original graph with an internal degree of freedom. The first scheme is equivalent to a coined quantum walk with one walk step per query of Grover's oracle, and the second is equivalent to a coined quantum walk with two walk steps per query of Grover's oracle. These equivalences lie outside the previously known equivalence of Szegedy's quantum walk with absorbing vertices and the coined quantum walk with the negative identity operator as the coin for marked vertices, whose precise relationships we also investigate.
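The coined walk referred to above can be sketched numerically. The following is a minimal illustrative simulation of a coined quantum walk on a cycle with a Hadamard coin (a standard textbook choice, not the specific Grover-oracle construction analysed in the paper); it only shows the basic coin-then-shift structure and checks that the evolution is unitary.

```python
import numpy as np

# Minimal coined quantum walk on an N-cycle (illustrative sketch, not the
# Szegedy/Grover construction from the paper). State space: coin (2) x position (N).
N, steps = 21, 10
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)       # Hadamard coin operator
shift = np.zeros((2 * N, 2 * N))
for x in range(N):
    shift[0 * N + (x + 1) % N, 0 * N + x] = 1      # coin state 0 moves right
    shift[1 * N + (x - 1) % N, 1 * N + x] = 1      # coin state 1 moves left
U = shift @ np.kron(H, np.eye(N))                  # one walk step: coin, then shift

psi = np.zeros(2 * N, dtype=complex)
psi[N // 2] = 1                                    # start at centre, coin state 0
for _ in range(steps):
    psi = U @ psi

prob = np.abs(psi[:N]) ** 2 + np.abs(psi[N:]) ** 2  # marginal over the coin
print(round(prob.sum(), 6))                         # 1.0 (unitarity preserved)
```

Since the shift is a permutation matrix and the coin is unitary, `U` is unitary and the position distribution always sums to one.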
From the Neutral Theory to a Comprehensive and Multiscale Theory of Ecological Equivalence.
Munoz, François; Huneman, Philippe
2016-09-01
The neutral theory of biodiversity assumes that coexisting organisms are equally able to survive, reproduce, and disperse (ecological equivalence), but predicts that stochastic fluctuations of these abilities drive diversity dynamics. It predicts remarkably well many biodiversity patterns, although substantial evidence for the role of niche variation across organisms seems contradictory. Here, we discuss this apparent paradox by exploring the meaning and implications of ecological equivalence. We address the question whether neutral theory provides an explanation for biodiversity patterns and acknowledges causal processes. We underline that ecological equivalence, although central to neutral theory, can emerge at local and regional scales from niche-based processes through equalizing and stabilizing mechanisms. Such emerging equivalence corresponds to a weak conception of neutral theory, as opposed to the assumption of strict equivalence at the individual level in strong conception. We show that this duality is related to diverging views on hypothesis testing and modeling in ecology. In addition, the stochastic dynamics exposed in neutral theory are pervasive in ecological systems and, rather than a null hypothesis, ecological equivalence is best understood as a parsimonious baseline to address biodiversity dynamics at multiple scales.
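The stochastic fluctuations under ecological equivalence can be illustrated with a toy zero-sum drift model (a Hubbell-style sketch under assumed parameters, not a model from the paper): every individual is demographically identical, and diversity nonetheless changes purely by chance.

```python
import random

# Zero-sum neutral drift (illustrative): each event kills one random individual
# and replaces it with the offspring of another random individual. All species
# are ecologically equivalent, yet richness erodes by drift alone.
random.seed(1)
J, S, steps = 200, 20, 20_000          # community size, initial species, death events
community = [i % S for i in range(J)]  # J individuals tagged by species id

for _ in range(steps):
    dead = random.randrange(J)
    parent = random.randrange(J)
    community[dead] = community[parent]

richness = len(set(community))
print(richness <= S)   # True: without speciation/immigration, richness never grows
```

Adding an immigration or speciation probability per replacement event turns this sketch into the classic neutral community model with a stationary diversity.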
Water equivalence of polymer gel dosimeters
International Nuclear Information System (INIS)
Sellakumar, P.; James Jebaseelan Samuel, E.; Supe, Sanjay S.
2007-01-01
To evaluate the water equivalence and radiation transport properties of polymer gel dosimeters over a wide range of photon and electron energies, 14 different types of polymer gels were considered. Their water equivalence was evaluated in terms of effective atomic number (Z_eff), electron density (ρ_e), photon mass attenuation coefficient (μ/ρ), photon mass energy absorption coefficient (μ_en/ρ) and total stopping power (S/ρ)_tot of electrons, using the XCOM and ESTAR databases. The study showed that the effective atomic numbers of the polymer gels were very close to that of water, that μ_en/ρ for all polymer gels was in close agreement with that of water, and that (S/ρ)_tot of electrons in the polymer gel dosimeters agreed with that of water to within 1%. From the study we conclude that at lower energies (<80 keV) the polymer gel dosimeters cannot be considered water equivalent, and a study has to be carried out before using a polymer gel for clinical application
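One of the quantities above, the effective atomic number, can be computed from elemental composition with the Mayneord power-law formula (exponent 2.94 is a common convention; the paper's exact recipe may differ). As a sketch, here it is applied to water itself:

```python
# Effective atomic number via the Mayneord formula
# Z_eff = (sum_i a_i * Z_i**2.94)**(1/2.94),
# where a_i is the fractional electron contribution of element i.
Z = {"H": 1, "O": 8}
A = {"H": 1.008, "O": 15.999}
w = {"H": 0.1119, "O": 0.8881}             # mass fractions in H2O

electrons = {e: w[e] * Z[e] / A[e] for e in Z}
total = sum(electrons.values())
a = {e: electrons[e] / total for e in Z}   # fractional electron contributions

z_eff = sum(a[e] * Z[e] ** 2.94 for e in Z) ** (1 / 2.94)
print(round(z_eff, 2))                     # ~7.42, the commonly quoted value for water
```

Running the same calculation on a gel's elemental composition and comparing against 7.42 is the kind of check the abstract describes.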
Using frequency equivalency in stability calculations
Energy Technology Data Exchange (ETDEWEB)
Gruzdev, I.A.; Temirbulatov, R.A.; Tereshko, L.A.
1981-01-01
A methodology for calculating oscillatory instability that involves using frequency equivalency is employed in carrying out the following procedures: dividing an electric power system into subsystems; determining the adjustments to the automatic excitation control in each subsystem; simplifying the mathematical definition of the separate subsystems by using frequency equivalency; gradually re-tuning the automatic excitation control in the separate subsystems to account for neighboring subsystems by using their equivalent frequency characteristics. The methodology is to be used with a computer program to determine the gains in the stabilization channels of the automatic excitation control unit with which static stability of the entire aggregate of normal and post-breakdown conditions and acceptable damping of transient processes are provided. The possibility of reducing the equation series to apply to chosen regions of the existing range of frequencies is demonstrated. The use of the methodology is illustrated in a sample study on stability in a Siberian unified power system.
The equivalence problem for LL- and LR-regular grammars
Nijholt, Antinus; Gecsec, F.
It will be shown that the equivalence problem for LL-regular grammars is decidable. Apart from extending the known result for LL(k) grammar equivalence to LL-regular grammar equivalence, we obtain an alternative proof of the decidability of LL(k) equivalence. The equivalence problem for LL-regular
S-parameters for weakly excited slots
DEFF Research Database (Denmark)
Albertsen, Niels Christian
1999-01-01
A simple approach to account for parasitic effects in weakly excited slots cut in the broad wall of a rectangular waveguide is proposed.
Low-energy Electro-weak Reactions
International Nuclear Information System (INIS)
Gazit, Doron
2012-01-01
Chiral effective field theory (EFT) provides a systematic and controlled approach to low-energy nuclear physics. Here, we use chiral EFT to calculate low-energy weak Gamow-Teller transitions. We put special emphasis on the role of two-body (2b) weak currents within the nucleus and discuss their applications in predicting physical observables.
Staggering towards a calculation of weak amplitudes
Energy Technology Data Exchange (ETDEWEB)
Sharpe, S.R.
1988-09-01
An explanation is given of the methods required to calculate hadronic matrix elements of the weak Hamiltonians using lattice QCD with staggered fermions. New results are presented for the 1-loop perturbative mixing of the weak interaction operators. New numerical techniques designed for staggered fermions are described. A preliminary result for the kaon B parameter is presented. 24 refs., 3 figs.
Weak measurements with a qubit meter
DEFF Research Database (Denmark)
Wu, Shengjun; Mølmer, Klaus
2009-01-01
We derive schemes to measure the so-called weak values of quantum system observables by coupling of the system to a qubit meter system. We highlight, in particular, the meaning of the imaginary part of the weak values, and show how it can be measured directly on equal footing with the real part...
Optimization of strong and weak coordinates
Swart, M.; Bickelhaupt, F.M.
2006-01-01
We present a new scheme for the geometry optimization of equilibrium and transition state structures that can be used for both strong and weak coordinates. We use a screening function that depends on atom-pair distances to differentiate strong coordinates from weak coordinates. This differentiation
Evaluating the Quality of Transfer versus Nontransfer Accounting Principles Grades.
Colley, J. R.; And Others
1996-01-01
Using 1989-92 student records from three colleges accepting large numbers of transfer students from junior colleges into accounting, regression analyses compared the grades of transfer and nontransfer students. The quality of accounting principles grades of transfer students was not equivalent to that of nontransfer students. (SK)
Limitations of Boltzmann's principle
International Nuclear Information System (INIS)
Lavenda, B.H.
1995-01-01
The usual form of Boltzmann's principle assures that maximum entropy, or entropy reduction, occurs with maximum probability, implying a unimodal distribution. Boltzmann's principle cannot be applied to nonunimodal distributions, like the arcsine law, because the entropy may be concave only over a limited portion of the interval. The method of subordination shows that the arcsine distribution corresponds to a process with a single degree of freedom, thereby confirming the invalidation of Boltzmann's principle. The fractalization of time leads to a new distribution in which arcsine and Cauchy distributions can coexist simultaneously for nonintegral degrees of freedom between √2 and 2
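The nonunimodal character of the arcsine law mentioned above is easy to see by sampling (an illustrative sketch, not a computation from the paper): the density 1/(π√(x(1-x))) on [0,1] piles probability mass at the endpoints rather than at a single interior mode.

```python
import math
import random

# Sample the arcsine law via X = sin^2(pi*U/2) with U uniform on [0,1),
# and compare the mass near the edges with the mass near the centre.
random.seed(0)
n = 100_000
xs = [math.sin(math.pi * random.random() / 2) ** 2 for _ in range(n)]

near_edges = sum(x < 0.05 or x > 0.95 for x in xs) / n   # exact value ~0.287
middle = sum(0.45 < x < 0.55 for x in xs) / n            # exact value ~0.065
print(near_edges > middle)   # True: the distribution is bimodal at the endpoints
```

This bimodality is exactly what places the arcsine law outside the reach of the unimodal form of Boltzmann's principle.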
Biomedical engineering principles
Ritter, Arthur B; Valdevit, Antonio; Ascione, Alfred N
2011-01-01
Introduction: Modeling of Physiological Processes; Cell Physiology and Transport; Principles and Biomedical Applications of Hemodynamics; A Systems Approach to Physiology; The Cardiovascular System; Biomedical Signal Processing; Signal Acquisition and Processing; Techniques for Physiological Signal Processing; Examples of Physiological Signal Processing; Principles of Biomechanics; Practical Applications of Biomechanics; Biomaterials; Principles of Biomedical Capstone Design; Unmet Clinical Needs; Entrepreneurship: Reasons why Most Good Designs Never Get to Market; An Engineering Solution in Search of a Biomedical Problem
Modern electronic maintenance principles
Garland, DJ
2013-01-01
Modern Electronic Maintenance Principles reviews the principles of maintaining modern, complex electronic equipment, with emphasis on preventive and corrective maintenance. Unfamiliar subjects such as the half-split method of fault location, functional diagrams, and fault finding guides are explained. This book consists of 12 chapters and begins by stressing the need for maintenance principles and discussing the problem of complexity as well as the requirements for a maintenance technician. The next chapter deals with the connection between reliability and maintenance and defines the terms fai
Pérez-Soba Díez del Corral, Juan José
2008-01-01
Bioethics emerges around the technological problems of intervening in human life, and with it the problem of determining moral limits, since these seem exterior to the practice itself. The Bioethics of Principles takes its rationality from teleological thinking and from autonomism. This divergence manifests the epistemological fragility and the great difficulty of 'moral' thinking. This is evident in the determination of the principle of autonomy, which does not have the ethical content of Kant's proposal. We need a new ethical rationality, with a new reflection on principles that emerge from basic ethical experiences.
Principles of dynamics
Hill, Rodney
2013-01-01
Principles of Dynamics presents classical dynamics primarily as an exemplar of scientific theory and method. This book is divided into three major parts concerned with gravitational theory of planetary systems; general principles of the foundations of mechanics; and general motion of a rigid body. Some of the specific topics covered are Keplerian Laws of Planetary Motion; gravitational potential and potential energy; and fields of axisymmetric bodies. The principles of work and energy, fictitious body-forces, and inertial mass are also looked into. Other specific topics examined are kinematics
Hamilton's principle for beginners
International Nuclear Information System (INIS)
Brun, J L
2007-01-01
I find that students have difficulty with Hamilton's principle, at least the first time they come into contact with it, and therefore it is worth designing some examples to help students grasp its complex meaning. This paper supplies the simplest example to consolidate the learning of the quoted principle: that of a free particle moving along a line. Next, students are challenged to add gravity to reinforce the argument and, finally, a two-dimensional motion in a vertical plane is considered. Furthermore these examples force us to be very clear about such an abstract principle
Developing principles of growth
DEFF Research Database (Denmark)
Neergaard, Helle; Fleck, Emma
of the principles of growth among women-owned firms. Using an in-depth case study methodology, data was collected from women-owned firms in Denmark and Ireland, as these countries are similar in contextual terms, e.g. population and business composition, dominated by micro, small and medium-sized enterprises. Extending principles put forward in effectuation theory, we propose that women grow their firms according to five principles which enable women's enterprises to survive in the face of crises such as the current world financial crisis.
Fiscal adjustments in Europe and Ricardian equivalence
Directory of Open Access Journals (Sweden)
V. DE BONIS
1998-09-01
Full Text Available According to the ‘Ricardian’ equivalence hypothesis, consumption is dependent on permanent disposable income and current deficits are equivalent to future tax payments. This hypothesis is tested on 14 European countries in the 1990s. The relationships between private sector savings and general government deficit, and the GDP growth rate and the unemployment rate are determined. The results show the change in consumers' behaviour with respect to government deficit, and that expectations of an increase in future wealth are no longer associated with a decrease in deficit.
Equivalent circuit analysis of terahertz metamaterial filters
Zhang, Xueqian
2011-01-01
An equivalent circuit model for the analysis and design of terahertz (THz) metamaterial filters is presented. The proposed model, derived from LMC equivalent circuits, takes into account the detailed geometrical parameters and the presence of a dielectric substrate, with existing analytic expressions for self-inductance, mutual inductance, and capacitance. The model is in good agreement with experimental measurements and full-wave simulations. Exploiting the circuit model makes it possible to predict accurately the resonance frequency of the proposed structures, thus offering a quick and accurate process for designing THz devices from artificial metamaterials. ©2011 Chinese Optics Letters.
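The core of such an LC-type circuit model is the resonance condition f₀ = 1/(2π√(LC)). The values below are illustrative placeholders chosen to land near 1 THz, not parameters from the paper:

```python
import math

# LC resonance frequency, f0 = 1 / (2*pi*sqrt(L*C)).
# L and C are assumed example values: metamaterial unit cells at THz
# frequencies typically have effective L of tens of pH and C near a fF.
L = 25.3e-12   # henry (assumed)
C = 1.0e-15    # farad (assumed)

f0 = 1 / (2 * math.pi * math.sqrt(L * C))
print(f"{f0 / 1e12:.2f} THz")   # 1.00 THz
```

In the model described by the abstract, L and C would instead be derived from the geometrical parameters and the substrate via the analytic inductance and capacitance expressions.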
Topological equivalence of nonlinear autonomous dynamical systems
International Nuclear Information System (INIS)
Nguyen Huynh Phan; Tran Van Nhung
1995-12-01
We show in this paper that the autonomous nonlinear dynamical system Σ(A,B,F): x' = Ax + Bu + F(x) is topologically equivalent to the linear dynamical system Σ(A,B,0): x' = Ax + Bu if the projection of A on the complement in R^n of the controllable vectorial subspace is hyperbolic, if the Lipschitz constant of F is sufficiently small (*), and if F(x) = 0 when ||x|| is sufficiently large (**). In particular, if Σ(A,B,0) is controllable, it is topologically equivalent to Σ(A,B,F) when F satisfies only (**). (author). 18 refs
A neutron dose equivalent meter at CAEP
International Nuclear Information System (INIS)
Tian Shihai; Lu Yan; Wang Heyi; Yuan Yonggang; Chen Xu
2012-01-01
The measurement of neutron dose equivalent has been a widespread need in industry and research. In this paper, aimed at improving the accuracy of a neutron dose equivalent meter, a neutron dose counter is simulated with MCNP5 and the energy response curve is optimized. The results show that the energy response factor ranges from 0.2 to 1.8 for neutrons in the energy range of 2.53×10⁻⁸ MeV to 10 MeV. Compared with other related meters, it turns out that the design of this meter is right. (authors)
Measurements of the personal dose equivalent
International Nuclear Information System (INIS)
Scarlat, F.; Scarisoreanu, A.; Badita, E.; Oane, M.; Mitru, E.
2008-01-01
Full text: The paper presents the results of measurements of the personal dose equivalent in the rooms adjacent to the NILPRP 7 MeV linear accelerator, by means of the secondary standard chamber T34035 Hp(10). The chamber was calibrated by PTB with a ¹³⁷Cs source (E_av = 661.6 keV, T_1/2 = 11050 days) and has an N_H = 3.17×10⁶ Sv/C calibration factor for the personal dose equivalent, Hp(10), at a depth of 10 mm in climatic reference conditions. The measurements were made for the two operation modes of the 7 MeV linac: electrons and bremsstrahlung
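The conversion underlying such a measurement is simply dose equivalent = calibration factor × collected charge. A minimal sketch, using the calibration factor quoted in the abstract but an assumed example charge reading (air-density and other correction factors omitted):

```python
# Personal dose equivalent from ionisation-chamber charge: Hp(10) = N_H * Q.
# N_H is the calibration factor quoted in the abstract; Q is a hypothetical
# collected charge, not a value from the paper. Correction factors omitted.
N_H = 3.17e6   # Sv/C, calibration factor for Hp(10)
Q = 2.0e-9     # C, assumed example reading

Hp10 = N_H * Q
print(f"{Hp10 * 1e3:.2f} mSv")   # 6.34 mSv
```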
Modelling of Airship Flight Mechanics by the Projection Equivalent Method
Directory of Open Access Journals (Sweden)
Frantisek Jelenciak
2015-12-01
Full Text Available This article describes the projection equivalent method (PEM) as a specific and relatively simple approach to the modelling of aircraft dynamics. By the PEM it is possible to obtain a mathematical model of the aerodynamic forces and moments acting on different kinds of aircraft during flight. A characteristic of the PEM is that, in principle, it provides an acceptable regression model of aerodynamic forces and moments which exhibits reasonable and plausible behaviour from a dynamics viewpoint. The principle of the method is based on applying Newton's mechanics, combined with a specific form of the finite element method to cover additional effects. The main advantage of the PEM is that it is not necessary to carry out wind tunnel measurements to identify the model's parameters. Plausible dynamical behaviour of the model can be achieved by specific correction parameters, which can be determined on the basis of experimental data obtained during flight. In this article, we present the PEM as applied to an airship, as well as a comparison between data calculated by the PEM and experimental flight data.
Statistical inference using weak chaos and infinite memory
International Nuclear Information System (INIS)
Welling, Max; Chen Yutian
2010-01-01
We describe a class of deterministic weakly chaotic dynamical systems with infinite memory. These 'herding systems' combine learning and inference into one algorithm, where moments or data-items are converted directly into an arbitrarily long sequence of pseudo-samples. This sequence has infinite range correlations and as such is highly structured. We show that its information content, as measured by sub-extensive entropy, can grow as fast as K log T, which is faster than the usual 1/2 K log T for exchangeable sequences generated by random posterior sampling from a Bayesian model. In one dimension we prove that herding sequences are equivalent to Sturmian sequences which have complexity exactly log(T + 1). More generally, we advocate the application of the rich theoretical framework around nonlinear dynamical systems, chaos theory and fractal geometry to statistical learning.
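The Sturmian sequences invoked above have the lowest possible subword complexity for a non-eventually-periodic sequence: exactly n + 1 distinct factors of length n. A short sketch (standard construction from an irrational rotation, not code from the paper) verifies this numerically:

```python
import math

# Sturmian sequence from an irrational rotation: s_k = floor((k+1)*a) - floor(k*a).
# Its subword complexity p(n) equals n + 1 for every n.
a = (math.sqrt(5) - 1) / 2   # golden-ratio conjugate, irrational slope
T = 2000
s = [math.floor((k + 1) * a) - math.floor(k * a) for k in range(T)]

def complexity(seq, n):
    """Number of distinct factors (contiguous subwords) of length n."""
    return len({tuple(seq[i:i + n]) for i in range(len(seq) - n + 1)})

print([complexity(s, n) for n in range(1, 6)])   # [2, 3, 4, 5, 6]
```

The log(T + 1) complexity quoted in the abstract is the logarithm of this linear factor count, contrasted with the faster entropy growth of exchangeable random sequences.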
Weakly sheared active suspensions: hydrodynamics, stability, and rheology.
Cui, Zhenlu
2011-03-01
We present a kinetic model for flowing active suspensions and analyze the behavior of a suspension subjected to a weak steady shear. Asymptotic solutions are sought in Deborah number expansions. At the leading order, we explore the steady states and perform their stability analysis. We predict the rheology of active systems including an activity thickening or thinning behavior of the apparent viscosity and a negative apparent viscosity depending on the particle type, flow alignment, and the anchoring conditions, which can be tested on bacterial suspensions. We find remarkable dualities that show that flow-aligning rodlike contractile (extensile) particles are dynamically and rheologically equivalent to flow-aligning discoid extensile (contractile) particles for both tangential and homeotropic anchoring conditions. Another key prediction of this work is the role of the concentration of active suspensions in controlling the rheological behavior: the apparent viscosity may decrease with the increase of the concentration.
Algorithms for singularities and real structures of weak Del Pezzo surfaces
Lubbes, Niels
2014-08-01
In this paper, we consider the classification of singularities [P. Du Val, On isolated singularities of surfaces which do not affect the conditions of adjunction. I, II, III, Proc. Camb. Philos. Soc. 30 (1934) 453-491] and real structures [C. T. C. Wall, Real forms of smooth del Pezzo surfaces, J. Reine Angew. Math. 1987(375/376) (1987) 47-66, ISSN 0075-4102] of weak Del Pezzo surfaces from an algorithmic point of view. It is well-known that the singularities of weak Del Pezzo surfaces correspond to root subsystems. We present an algorithm which computes the classification of these root subsystems. We represent equivalence classes of root subsystems by unique labels. These labels allow us to construct examples of weak Del Pezzo surfaces with the corresponding singularity configuration. Equivalence classes of real structures of weak Del Pezzo surfaces are also represented by root subsystems. We present an algorithm which computes the classification of real structures. This leads to an alternative proof of the known classification for Del Pezzo surfaces and extends this classification to singular weak Del Pezzo surfaces. As an application we classify families of real conics on cyclides. © World Scientific Publishing Company.
Vaccinology: principles and practice
National Research Council Canada - National Science Library
Morrow, John
2012-01-01
... principles to implementation. This is an authoritative textbook that details a comprehensive and systematic approach to the science of vaccinology focusing on not only basic science, but the many stages required to commercialize...
Energy Technology Data Exchange (ETDEWEB)
Moller-Nielsen, Thomas [University of Oxford (United Kingdom)
2014-07-01
Physicists and philosophers have long claimed that the symmetries of our physical theories - roughly speaking, those transformations which map solutions of the theory into solutions - can provide us with genuine insight into what the world is really like. According to this 'Invariance Principle', only those quantities which are invariant under a theory's symmetries should be taken to be physically real, while those quantities which vary under its symmetries should not. Physicists and philosophers, however, are generally divided (or, indeed, silent) when it comes to explaining how such a principle is to be justified. In this paper, I spell out some of the problems inherent in other theorists' attempts to justify this principle, and sketch my own proposed general schema for explaining how - and when - the Invariance Principle can indeed be used as a legitimate tool of metaphysical inference.
Principles of applied statistics
National Research Council Canada - National Science Library
Cox, D. R; Donnelly, Christl A
2011-01-01
.... David Cox and Christl Donnelly distil decades of scientific experience into usable principles for the successful application of statistics, showing how good statistical strategy shapes every stage of an investigation...
Minimum entropy production principle
Czech Academy of Sciences Publication Activity Database
Maes, C.; Netočný, Karel
2013-01-01
Vol. 8, No. 7 (2013), pp. 9664-9677. ISSN 1941-6016. Institutional support: RVO:68378271. Keywords: MINEP. Subject RIV: BE - Theoretical Physics. http://www.scholarpedia.org/article/Minimum_entropy_production_principle
Global ethics and principlism.
Gordon, John-Stewart
2011-09-01
This article examines the special relation between common morality and particular moralities in the four-principles approach and its use for global ethics. It is argued that the special dialectical relation between common morality and particular moralities is the key to bridging the gap between ethical universalism and relativism. The four-principles approach is a good model for a global bioethics by virtue of its ability to mediate successfully between universal demands and cultural diversity. The principle of autonomy (i.e., the idea of individual informed consent), however, does need to be revised so as to make it compatible with alternatives such as family- or community-informed consent. The upshot is that the contribution of the four-principles approach to global ethics lies in the so-called dialectical process and its power to deal with cross-cultural issues against the background of universal demands by joining them together.
2012-06-01
... Methods: Designation of Three New Equivalent Methods AGENCY: Environmental Protection Agency. ACTION... accordance with 40 CFR Part 53, three new equivalent methods: One for measuring concentrations of nitrogen... INFORMATION: In accordance with regulations at 40 CFR Part 53, the EPA evaluates various methods for...
Microprocessors principles and applications
Debenham, Michael J
1979-01-01
Microprocessors: Principles and Applications deals with the principles and applications of microprocessors and covers topics ranging from computer architecture and programmed machines to microprocessor programming, support systems and software, and system design. A number of microprocessor applications are considered, including data processing, process control, and telephone switching. This book is comprised of 10 chapters and begins with a historical overview of computers and computing, followed by a discussion on computer architecture and programmed machines, paying particular attention to t
Electrical and electronic principles
Knight, S A
1991-01-01
Electrical and Electronic Principles, 2, Second Edition covers the syllabus requirements of BTEC Unit U86/329, including the principles of control systems and elements of data transmission. The book first tackles series and parallel circuits, electrical networks, and capacitors and capacitance. Discussions focus on flux density, electric force, permittivity, Kirchhoff's laws, superposition theorem, arrangement of resistors, internal resistance, and powers in a circuit. The text then takes a look at capacitors in circuit, magnetism and magnetization, electromagnetic induction, and alternating v
Microwave system engineering principles
Raff, Samuel J
1977-01-01
Microwave System Engineering Principles focuses on the calculus, differential equations, and transforms of microwave systems. This book discusses the basic nature and principles that can be derived from thermal noise; statistical concepts and binomial distribution; incoherent signal processing; basic properties of antennas; and beam widths and useful approximations. The fundamentals of propagation; Laplace's equation and transmission-line (TEM) waves; interfaces between homogeneous media; modulation, bandwidth, and noise; and communications satellites are also discussed in this text. This bo
Electrical and electronic principles
Knight, SA
1988-01-01
Electrical and Electronic Principles, 3 focuses on the principles involved in electrical and electronic circuits, including impedance, inductance, capacitance, and resistance. The book first deals with circuit elements and theorems, D.C. transients, and the series circuits of alternating current. Discussions focus on inductance and resistance in series, resistance and capacitance in series, power factor, impedance, circuit magnification, equation of charge, discharge of a capacitor, transfer of power, and decibels and attenuation. The manuscript then examines the parallel circuits of alternatin
Remark on Heisenberg's principle
International Nuclear Information System (INIS)
Noguez, G.
1988-01-01
Application of Heisenberg's principle to inertial frame transformations allows a distinction between three commutative groups of reciprocal transformations along one direction: Galilean transformations, dual transformations, and Lorentz transformations. These are three conjugate groups, and for a given direction the related commutators are all proportional to a single conjugation transformation which compensates for uniform rectilinear motion. The three transformation groups correspond to three complementary ways of measuring space-time as a whole. Heisenberg's principle thereby admits an alternative interpretation. (Original in French.)