WorldWideScience

Sample records for superposition principle measurements

  1. A superposition principle in quantum logics

    International Nuclear Information System (INIS)

    Pulmannova, S.

    1976-01-01

A new definition of the superposition principle in quantum logics is given which enables us to define the sectors. It is shown that the superposition principle holds only in the irreducible quantum logics. (orig.)

  2. On the superposition principle and its physics content

    International Nuclear Information System (INIS)

    Roos, M.

    1984-01-01

    What is commonly denoted the superposition principle is shown to consist of three different physical assumptions: conservation of probability, completeness, and some phase conditions. The latter conditions form the physical assumptions of the superposition principle. These phase conditions are exemplified by the Kobayashi-Maskawa matrix. Some suggestions for testing the superposition principle are given. (Auth.)

  3. Testing the quantum superposition principle: matter waves and beyond

    Science.gov (United States)

    Ulbricht, Hendrik

    2015-05-01

New technological developments allow us to explore the quantum properties of very complex systems, bringing the question of whether macroscopic systems also share such features within experimental reach. Interest in this question is heightened by the fact that, on the theory side, many suggest that the quantum superposition principle is not exact, with departures growing as the system becomes more macroscopic. Testing the superposition principle therefore also means testing suggested extensions of quantum theory, so-called collapse models. We report on three new proposals to experimentally test the superposition principle with nanoparticle interferometry, optomechanical devices, and spectroscopic experiments in the frequency domain. We also report on the status of optical levitation and cooling experiments with nanoparticles in our labs, towards an Earth-bound matter-wave interferometer to test the superposition principle for a particle mass of one million amu (atomic mass units).

  4. The general use of the time-temperature-pressure superposition principle

    DEFF Research Database (Denmark)

    Rasmussen, Henrik Koblitz

This note is a supplement to Dynamics of Polymeric Liquids (DPL), section 3.6(a). DPL concerns only material functions and only the effect of temperature on them. This is a short introduction to the general use of the time-temperature-pressure superposition principle.

  5. The principle of superposition in human prehension.

    Science.gov (United States)

    Zatsiorsky, Vladimir M; Latash, Mark L; Gao, Fan; Shim, Jae Kun

    2004-03-01

    The experimental evidence supports the validity of the principle of superposition for multi-finger prehension in humans. Forces and moments of individual digits are defined by two independent commands: "Grasp the object stronger/weaker to prevent slipping" and "Maintain the rotational equilibrium of the object". The effects of the two commands are summed up.
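
The additive two-command scheme described in this abstract can be illustrated with a toy calculation; all digit forces below are hypothetical numbers chosen for illustration, not data from the study:

```python
# Hypothetical digit forces (N); illustration only, not data from the study.
grasp = {'thumb': 8.0, 'index': 2.0, 'middle': 2.5, 'ring': 2.0, 'little': 1.5}
torque = {'thumb': 0.0, 'index': 1.0, 'middle': 0.2, 'ring': -0.4, 'little': -0.8}

# Command 1 sets slip-preventing normal forces (thumb balances the fingers);
# command 2 redistributes finger forces to create a moment. Its adjustments
# sum to zero, so adding the two command outputs leaves the grasp intact.
total = {d: grasp[d] + torque[d] for d in grasp}
print(total['index'])   # 3.0
```

Because the second command's adjustments cancel across digits, the summed output changes the moment on the object without altering the total grip force set by the first command.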

  6. Two new proofs of the test particle superposition principle of plasma kinetic theory

    International Nuclear Information System (INIS)

    Krommes, J.A.

    1976-01-01

The test particle superposition principle of plasma kinetic theory is discussed in relation to the recent theory of two-time fluctuations in plasma given by Williams and Oberman. Both a new deductive and a new inductive proof of the principle are presented; the deductive approach appears here for the first time in the literature. The fundamental observation is that two-time expectations of one-body operators are determined completely in terms of the (x,v) phase space density autocorrelation, which to lowest order in the discreteness parameter obeys the linearized Vlasov equation with singular initial condition. For the deductive proof, this equation is solved formally using time-ordered operators, and the solution is then re-arranged into the superposition principle. The inductive proof is simpler than Rostoker's, although similar in some ways; it differs in that first-order equations for pair correlation functions need not be invoked. It is pointed out that the superposition principle is also applicable to the short-time theory of neutral fluids.

  7. Two new proofs of the test particle superposition principle of plasma kinetic theory

    International Nuclear Information System (INIS)

    Krommes, J.A.

    1975-12-01

The test particle superposition principle of plasma kinetic theory is discussed in relation to the recent theory of two-time fluctuations in plasma given by Williams and Oberman. Both a new deductive and a new inductive proof of the principle are presented. The fundamental observation is that two-time expectations of one-body operators are determined completely in terms of the (x,v) phase space density autocorrelation, which to lowest order in the discreteness parameter obeys the linearized Vlasov equation with singular initial condition. For the deductive proof, this equation is solved formally using time-ordered operators, and the solution then rearranged into the superposition principle. The inductive proof is simpler than Rostoker's, although similar in some ways; it differs in that first-order equations for pair correlation functions need not be invoked. It is pointed out that the superposition principle is also applicable to the short-time theory of neutral fluids.

  8. Superposition Principle in Auger Recombination of Charged and Neutral Multicarrier States in Semiconductor Quantum Dots.

    Science.gov (United States)

    Wu, Kaifeng; Lim, Jaehoon; Klimov, Victor I

    2017-08-22

    Application of colloidal semiconductor quantum dots (QDs) in optical and optoelectronic devices is often complicated by unintentional generation of extra charges, which opens fast nonradiative Auger recombination pathways whereby the recombination energy of an exciton is quickly transferred to the extra carrier(s) and ultimately dissipated as heat. Previous studies of Auger recombination have primarily focused on neutral and, more recently, negatively charged multicarrier states. Auger dynamics of positively charged species remains more poorly explored due to difficulties in creating, stabilizing, and detecting excess holes in the QDs. Here we apply photochemical doping to prepare both negatively and positively charged CdSe/CdS QDs with two distinct core/shell interfacial profiles ("sharp" versus "smooth"). Using neutral and charged QD samples we evaluate Auger lifetimes of biexcitons, negative and positive trions (an exciton with an extra electron or a hole, respectively), and multiply negatively charged excitons. Using these measurements, we demonstrate that Auger decay of both neutral and charged multicarrier states can be presented as a superposition of independent elementary three-particle Auger events. As one of the manifestations of the superposition principle, we observe that the biexciton Auger decay rate can be presented as a sum of the Auger rates for independent negative and positive trion pathways. By comparing the measurements on the QDs with the "sharp" versus "smooth" interfaces, we also find that while affecting the absolute values of Auger lifetimes, manipulation of the shape of the confinement potential does not lead to violation of the superposition principle, which still allows us to accurately predict the biexciton Auger lifetimes based on the measured negative and positive trion dynamics. These findings indicate considerable robustness of the superposition principle as applied to Auger decay of charged and neutral multicarrier states
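
The rate-additivity form of the superposition principle described here can be sketched as simple arithmetic. The trion lifetimes below and the factor-of-2 statistical multiplicities (the counting commonly used in the QD literature for the two equivalent electrons and two equivalent holes of a biexciton) are illustrative assumptions, not values from this study:

```python
# Assumed trion Auger lifetimes (ns); illustrative values only.
tau_neg = 1.2    # negative trion (exciton plus extra electron)
tau_pos = 0.4    # positive trion (exciton plus extra hole)

# Superposition of independent three-particle pathways: the biexciton Auger
# rate is a sum of trion-pathway rates. The factor-of-2 multiplicities are
# the statistical convention of the QD literature, not a number taken from
# this abstract.
rate_xx = 2.0 / tau_neg + 2.0 / tau_pos    # 1/ns
tau_xx = 1.0 / rate_xx
print(round(tau_xx, 3))    # 0.15 (ns)
```

Under these assumed inputs, the faster (positive-trion) pathway dominates the predicted biexciton lifetime, which is the kind of prediction the abstract says survives changes to the confinement-potential shape.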

  9. Collapsing a perfect superposition to a chosen quantum state without measurement.

    Directory of Open Access Journals (Sweden)

    Ahmed Younes

Full Text Available Given a perfect superposition of [Formula: see text] states on a quantum system of [Formula: see text] qubits, we propose a fast quantum algorithm for collapsing the perfect superposition to a chosen quantum state [Formula: see text] without applying any measurements. The basic idea is to use a phase-destruction mechanism. Two operators are used: the first applies a phase shift and a temporary entanglement to mark [Formula: see text] in the superposition, and the second applies selective phase shifts on the states in the superposition according to their Hamming distance from [Formula: see text]. The generated state can be used as an excellent input state for testing quantum memories and linear-optics quantum computers. We make no assumptions about the operators used and the quantum gates applied, but our result implies that for this purpose the number of qubits in the quantum register offers no advantage, in principle, over the obvious measurement-based feedback protocol.
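
One ingredient of the described phase-destruction mechanism, the selective phase shift keyed to Hamming distance from the target state, can be sketched on a statevector. The phase parameter phi and the register size are arbitrary choices here, and the temporary-entanglement marking operator of the full algorithm is not reproduced:

```python
import numpy as np

n = 3
target = 0b101          # chosen target basis state (arbitrary)
phi = np.pi / n         # illustrative phase step, not a value from the paper

def hamming(x, y):
    """Hamming distance between two basis-state indices."""
    return bin(x ^ y).count('1')

# Perfect (uniform) superposition over all 2**n basis states.
state = np.full(2**n, 1 / np.sqrt(2**n), dtype=complex)

# Selective phase shifts keyed to Hamming distance from the target:
# the target itself acquires no phase, and all magnitudes are untouched.
mask = np.array([np.exp(1j * phi * hamming(k, target)) for k in range(2**n)])
state = mask * state
print(np.angle(state[target]))   # 0.0
```

The mask alone is unitary and diagonal, so by itself it never changes measurement probabilities; the collapse in the paper comes from combining such phases with the marking operator.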

  10. A multidimensional superposition principle and wave switching in integrable and nonintegrable soliton models

    Energy Technology Data Exchange (ETDEWEB)

    Alexeyev, Alexander A [Laboratory of Computer Physics and Mathematical Simulation, Research Division, Room 247, Faculty of Phys.-Math. and Natural Sciences, Peoples' Friendship University of Russia, 6 Miklukho-Maklaya street, Moscow 117198 (Russian Federation) and Department of Mathematics 1, Faculty of Cybernetics, Moscow State Institute of Radio Engineering, Electronics and Automatics, 78 Vernadskogo Avenue, Moscow 117454 (Russian Federation)

    2004-11-26

    In the framework of a multidimensional superposition principle a series of computer experiments with integrable and nonintegrable models are carried out with the goal of verifying the existence of switching effect and superposition in soliton-perturbation interactions for a wide class of nonlinear PDEs. (letter to the editor)

  11. On the superposition principle in interference experiments.

    Science.gov (United States)

    Sinha, Aninda; H Vijay, Aravind; Sinha, Urbasi

    2015-05-14

The superposition principle is usually incorrectly applied in interference experiments. This has recently been investigated through numerics based on Finite Difference Time Domain (FDTD) methods as well as the Feynman path integral formalism. In the current work, we have derived an analytic formula for the Sorkin parameter which can be used to determine the deviation from the application of the principle. We have found excellent agreement between the analytic distribution and those that have been earlier estimated by numerical integration as well as resource-intensive FDTD simulations. The analytic handle would be useful for comparing theory with future experiments. It is applicable both to physics based on classical wave equations and to the non-relativistic Schrödinger equation.
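
The combination the Sorkin parameter is built from is easy to verify numerically: when every probability is the squared modulus of a sum of path amplitudes (the naive superposition rule), the three-path combination cancels identically, so any measured deviation quantifies corrections to that rule. The amplitudes below are arbitrary illustrative values:

```python
import numpy as np

# Hypothetical per-slit amplitudes at one detector point (arbitrary values).
a = 1.0 + 0.0j
b = 0.6 * np.exp(1j * 0.8)
c = 0.3 * np.exp(1j * 2.1)

def P(*amps):
    """Born-rule probability: squared modulus of the coherent amplitude sum."""
    return abs(sum(amps)) ** 2

# Sorkin combination: identically zero under the naive superposition rule,
# for any choice of the three amplitudes.
epsilon = P(a, b, c) - P(a, b) - P(a, c) - P(b, c) + P(a) + P(b) + P(c)
print(abs(epsilon) < 1e-12)   # True
```

Expanding each squared modulus shows the single-amplitude and cross terms cancel exactly, which is why a nonzero measured value signals physics beyond the naive two-path bookkeeping (e.g. the boundary and path-integral corrections the paper computes).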

  12. Superposition and macroscopic observation

    International Nuclear Information System (INIS)

    Cartwright, N.D.

    1976-01-01

The principle of superposition has long plagued the quantum mechanics of macroscopic bodies. In at least one well-known situation - that of measurement - quantum mechanics predicts a superposition. It is customary to try to reconcile macroscopic reality and quantum mechanics by reducing the superposition to a mixture. To establish consistency with quantum mechanics, values for the apparatus after a measurement are to be distributed in the way predicted by the superposition. The distributions observed, however, are those of the mixture. The statistical predictions of quantum mechanics, it appears, are not borne out by observation in macroscopic situations. It has been shown that, insofar as specific ergodic hypotheses apply to the apparatus after the interaction, the superposition which evolves is experimentally indistinguishable from the corresponding mixture. In this paper an idealized model of the measuring situation is presented in which this consistency can be demonstrated. It includes a simplified version of the measurement solution proposed by Daneri, Loinger, and Prosperi (1962). The model should make clear the kind of statistical evidence required to carry out this approach, and the role of the ergodic hypotheses assumed. (Auth.)

  13. Superposition Quantification

    Science.gov (United States)

    Chang, Li-Na; Luo, Shun-Long; Sun, Yuan

    2017-11-01

The principle of superposition is universal and lies at the heart of quantum theory. Although superposition has occupied a central and pivotal place ever since the inception of quantum mechanics a century ago, rigorous and systematic studies of the quantification issue have attracted significant interest only in recent years, and many related problems remain to be investigated. In this work we introduce a figure of merit which quantifies superposition from an intuitive and direct perspective, investigate its fundamental properties, connect it to some coherence measures, illustrate it through several examples, and apply it to analyze wave-particle duality. Supported by Science Challenge Project under Grant No. TZ2016002, Laboratory of Computational Physics, Institute of Applied Physics and Computational Mathematics, Beijing, Key Laboratory of Random Complex Structures and Data Science, Chinese Academy of Sciences, Grant under No. 2008DP173182

  14. Long-term creep modeling of wood using time temperature superposition principle

    OpenAIRE

    Gamalath, Sandhya Samarasinghe

    1991-01-01

Long-term creep and recovery models (master curves) were developed from short-term data using the time temperature superposition principle (TTSP) for kiln-dried southern pine loaded in compression parallel-to-grain and exposed to constant environmental conditions (~70°F, ~9% EMC). Short-term accelerated creep (17 hour) and recovery (35 hour) data were collected for each specimen at a range of temperatures (70°F-150°F) and a constant moisture condition of 9%. The compressive stra...

15. Approach to the nonrelativistic scattering theory based on the causality, superposition, and unitarity principles

    International Nuclear Information System (INIS)

    Gajnutdinov, R.Kh.

    1983-01-01

The possibility is studied of building the nonrelativistic scattering theory on the basis of the general physical principles of causality, superposition, and unitarity, making no use of the Schroedinger formalism. The suggested approach is shown to be more general than the nonrelativistic scattering theory based on the Schroedinger equation. The approach is applied to build a model of the scattering theory for a system which consists of heavy nonrelativistic particles and a light relativistic particle.

  16. Linear superposition solutions to nonlinear wave equations

    International Nuclear Information System (INIS)

    Liu Yu

    2012-01-01

    The solutions to a linear wave equation can satisfy the principle of superposition, i.e., the linear superposition of two or more known solutions is still a solution of the linear wave equation. We show in this article that many nonlinear wave equations possess exact traveling wave solutions involving hyperbolic, triangle, and exponential functions, and the suitable linear combinations of these known solutions can also constitute linear superposition solutions to some nonlinear wave equations with special structural characteristics. The linear superposition solutions to the generalized KdV equation K(2,2,1), the Oliver water wave equation, and the k(n, n) equation are given. The structure characteristic of the nonlinear wave equations having linear superposition solutions is analyzed, and the reason why the solutions with the forms of hyperbolic, triangle, and exponential functions can form the linear superposition solutions is also discussed
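
The distinction drawn here can be checked symbolically: a linear combination of solutions always solves a linear wave equation, while for a generic nonlinear equation it fails. The sketch below uses the inviscid Burgers equation as the nonlinear example, an illustrative stand-in rather than one of the equations treated in the paper:

```python
import sympy as sp

x, t = sp.symbols('x t')

def wave_residual(u):
    # Residual of the linear wave equation u_tt - u_xx = 0.
    return sp.simplify(sp.diff(u, t, 2) - sp.diff(u, x, 2))

u1, u2 = sp.sin(x - t), sp.exp(x + t)
print(wave_residual(u1), wave_residual(u2))   # 0 0: both are solutions
print(wave_residual(3*u1 - 2*u2))             # 0: superposition survives

def burgers_residual(u):
    # Residual of the nonlinear inviscid Burgers equation u_t + u*u_x = 0.
    return sp.simplify(sp.diff(u, t) + u * sp.diff(u, x))

v1 = x / (t + 1)                  # a known Burgers solution
print(burgers_residual(v1))       # 0
print(burgers_residual(2*v1))     # nonzero: naive superposition fails
```

The paper's point is that for equations with special structure, particular combinations of hyperbolic, triangle, and exponential solutions can nevertheless make the nonlinear cross terms cancel, so the combination remains an exact solution.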

  17. Evaluation of Class II treatment by cephalometric regional superpositions versus conventional measurements.

    Science.gov (United States)

    Efstratiadis, Stella; Baumrind, Sheldon; Shofer, Frances; Jacobsson-Hunt, Ulla; Laster, Larry; Ghafari, Joseph

    2005-11-01

    The aims of this study were (1) to evaluate cephalometric changes in subjects with Class II Division 1 malocclusion who were treated with headgear (HG) or Fränkel function regulator (FR) and (2) to compare findings from regional superpositions of cephalometric structures with those from conventional cephalometric measurements. Cephalographs were taken at baseline, after 1 year, and after 2 years of 65 children enrolled in a prospective randomized clinical trial. The spatial location of the landmarks derived from regional superpositions was evaluated in a coordinate system oriented on natural head position. The superpositions included the best anatomic fit of the anterior cranial base, maxillary base, and mandibular structures. Both the HG and the FR were effective in correcting the distoclusion, and they generated enhanced differential growth between the jaws. Differences between cranial and maxillary superpositions regarding mandibular displacement (Point B, pogonion, gnathion, menton) were noted: the HG had a more horizontal vector on maxillary superposition that was also greater (.0001 < P < .05) than the horizontal displacement observed with the FR. This discrepancy appeared to be related to (1) the clockwise (backward) rotation of the palatal and mandibular planes observed with the HG; the palatal plane's rotation, which was transferred through the occlusion to the mandibular plane, was factored out on maxillary superposition; and (2) the interaction between the inclination of the maxillary incisors and the forward movement of the mandible during growth. Findings from superpositions agreed with conventional angular and linear measurements regarding the basic conclusions for the primary effects of HG and FR. However, the results suggest that inferences of mandibular displacement are more reliable from maxillary than cranial superposition when evaluating occlusal changes during treatment.

  18. Projective measurement onto arbitrary superposition of weak coherent state bases

    DEFF Research Database (Denmark)

    Izumi, Shuro; Takeoka, Masahiro; Wakui, Kentaro

    2018-01-01

    One of the peculiar features in quantum mechanics is that a superposition of macroscopically distinct states can exist. In optical system, this is highlighted by a superposition of coherent states (SCS), i.e. a superposition of classical states. Recently this highly nontrivial quantum state and i...

  19. Chaos and Complexities Theories. Superposition and Standardized Testing: Are We Coming or Going?

    Science.gov (United States)

    Erwin, Susan

    2005-01-01

    The purpose of this paper is to explore the possibility of using the principle of "superposition of states" (commonly illustrated by Schrodinger's Cat experiment) to understand the process of using standardized testing to measure a student's learning. Comparisons from literature, neuroscience, and Schema Theory will be used to expound upon the…

  20. Decoherence bypass of macroscopic superpositions in quantum measurement

    International Nuclear Information System (INIS)

    Spehner, Dominique; Haake, Fritz

    2008-01-01

    We study a class of quantum measurement models. A microscopic object is entangled with a macroscopic pointer such that a distinct pointer position is tied to each eigenvalue of the measured object observable. Those different pointer positions mutually decohere under the influence of an environment. Overcoming limitations of previous approaches we (i) cope with initial correlations between pointer and environment by considering them initially in a metastable local thermal equilibrium, (ii) allow for object-pointer entanglement and environment-induced decoherence of distinct pointer readouts to proceed simultaneously, such that mixtures of macroscopically distinct object-pointer product states arise without intervening macroscopic superpositions, and (iii) go beyond the Markovian treatment of decoherence. (fast track communication)

  1. Engineering mesoscopic superpositions of superfluid flow

    International Nuclear Information System (INIS)

    Hallwood, D. W.; Brand, J.

    2011-01-01

    Modeling strongly correlated atoms demonstrates the possibility to prepare quantum superpositions that are robust against experimental imperfections and temperature. Such superpositions of vortex states are formed by adiabatic manipulation of interacting ultracold atoms confined to a one-dimensional ring trapping potential when stirred by a barrier. Here, we discuss the influence of nonideal experimental procedures and finite temperature. Adiabaticity conditions for changing the stirring rate reveal that superpositions of many atoms are most easily accessed in the strongly interacting, Tonks-Girardeau, regime, which is also the most robust at finite temperature. NOON-type superpositions of weakly interacting atoms are most easily created by adiabatically decreasing the interaction strength by means of a Feshbach resonance. The quantum dynamics of small numbers of particles is simulated and the size of the superpositions is calculated based on their ability to make precision measurements. The experimental creation of strongly correlated and NOON-type superpositions with about 100 atoms seems feasible in the near future.

  2. The superposition of the states and the logic approach to quantum mechanics

    International Nuclear Information System (INIS)

    Zecca, A.

    1981-01-01

An axiomatic approach to quantum mechanics is proposed in terms of a 'logic' scheme satisfying a suitable set of axioms. In this context the notions of pure, maximal, and characteristic states as well as the superposition relation and the superposition principle for the states are studied. The role the superposition relation plays in the reversible and in the irreversible dynamics is investigated and its connection with the tensor product is studied. Throughout the paper, the W*-algebra model is used to exemplify results and properties of the general scheme. (author)

  3. Measurement-Induced Macroscopic Superposition States in Cavity Optomechanics

    DEFF Research Database (Denmark)

    Hoff, Ulrich Busk; Kollath-Bönig, Johann; Neergaard-Nielsen, Jonas Schou

    2016-01-01

    A novel protocol for generating quantum superpositions of macroscopically distinct states of a bulk mechanical oscillator is proposed, compatible with existing optomechanical devices operating in the bad-cavity limit. By combining a pulsed optomechanical quantum nondemolition (QND) interaction...

  4. Lifetime Prediction of Nano-Silica based Glass Fibre/Epoxy composite by Time Temperature Superposition Principle

    Science.gov (United States)

    Anand, Abhijeet; Banerjee, Poulami; Prusty, Rajesh Kumar; Ray, Bankin Chandra

    2018-03-01

The incorporation of nano fillers in fibre reinforced polymer (FRP) composites has been a source of experimentation for researchers. Addition of nano fillers has been found to improve the mechanical, thermal, and electrical properties of glass fibre reinforced polymer (GFRP) composites. The in-plane mechanical properties of a GFRP composite are mainly controlled by the fibers and therefore exhibit good values. However, the composite exhibits poor through-thickness properties, for which the matrix and interface are the dominant factors. Therefore, it is conducive to modify the matrix through dispersion of nano fillers. Creep is the plastic deformation experienced by a material at a given temperature under constant stress over a prolonged period of time. Determination of a master curve using the time-temperature superposition principle is useful for predicting the lifetime of materials in naval and structural applications, because such materials remain in service for a prolonged period before failure, which is difficult to monitor directly. The failure behaviour can instead be extrapolated from behaviour over a shorter time at an elevated temperature, as is done in master-curve creep analysis. The present research work dealt with time-temperature analysis of 0.1% SiO2-based GFRP composites fabricated through the hand-layup method. A composition of 0.1% SiO2 nano fillers with respect to the weight of the fibers was observed to provide optimized flexural properties. The time and temperature dependence of the flexural properties of GFRP composites with and without nano SiO2 was determined by conducting 3-point bend flexural creep tests over a range of temperatures. Stepwise isothermal creep tests from room temperature (30°C) to the glass transition temperature Tg (120°C) were performed with alternating creep/relaxation periods of 1 hour at each temperature. A constant stress of 40 MPa was applied during the creep tests. The time-temperature superposition principle was
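
The mechanics of the master-curve construction can be sketched in a few lines. The Arrhenius form of the shift factor and the activation energy below are illustrative assumptions, not parameters fitted to this study's data:

```python
import numpy as np

R, Ea, T_ref = 8.314, 90e3, 303.15   # gas constant; assumed Ea (J/mol); 30 degC

def log_aT(T):
    """log10 of an Arrhenius time-temperature shift factor a_T (assumed form)."""
    return (Ea / (2.303 * R)) * (1.0 / T - 1.0 / T_ref)

# One isothermal creep segment: 1 s to 1 h of laboratory time.
t = np.logspace(0.0, np.log10(3600.0), 50)

# Dividing by a_T maps each segment onto the reduced-time axis at T_ref;
# hotter segments land at longer reduced times, extending the master curve
# far beyond the 1 h actually measured.
segments = {T: t / 10 ** log_aT(T) for T in (303.15, 333.15, 363.15)}
print(segments[363.15][-1] > segments[303.15][-1])   # True
```

This is why a stepwise 1-hour-per-temperature test can stand in for years of service: each higher-temperature segment, once shifted, represents creep at the reference temperature over a much longer reduced time.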

  5. Geometric measure of pairwise quantum discord for superpositions of multipartite generalized coherent states

    International Nuclear Information System (INIS)

    Daoud, M.; Ahl Laamara, R.

    2012-01-01

We give the explicit expressions of the pairwise quantum correlations present in superpositions of multipartite coherent states. Special attention is devoted to the evaluation of the geometric quantum discord. The dynamics of quantum correlations under a dephasing channel is analyzed. A comparison of the geometric measure of quantum discord with that of concurrence shows that quantum discord in multipartite coherent states is more resilient to dissipative environments than is quantum entanglement. To illustrate our results, we consider some special superpositions of Weyl–Heisenberg, SU(2) and SU(1,1) coherent states which interpolate between Werner and Greenberger–Horne–Zeilinger states. -- Highlights: ► Pairwise quantum correlations in multipartite coherent states. ► Explicit expression of the geometric quantum discord. ► Entanglement sudden death and quantum discord robustness. ► Generalized coherent states interpolating between Werner and Greenberger–Horne–Zeilinger states

  6. Geometric measure of pairwise quantum discord for superpositions of multipartite generalized coherent states

    Energy Technology Data Exchange (ETDEWEB)

    Daoud, M., E-mail: m_daoud@hotmail.com [Department of Physics, Faculty of Sciences, University Ibnou Zohr, Agadir (Morocco); Ahl Laamara, R., E-mail: ahllaamara@gmail.com [LPHE-Modeling and Simulation, Faculty of Sciences, University Mohammed V, Rabat (Morocco); Centre of Physics and Mathematics, CPM, CNESTEN, Rabat (Morocco)

    2012-07-16

We give the explicit expressions of the pairwise quantum correlations present in superpositions of multipartite coherent states. Special attention is devoted to the evaluation of the geometric quantum discord. The dynamics of quantum correlations under a dephasing channel is analyzed. A comparison of the geometric measure of quantum discord with that of concurrence shows that quantum discord in multipartite coherent states is more resilient to dissipative environments than is quantum entanglement. To illustrate our results, we consider some special superpositions of Weyl–Heisenberg, SU(2) and SU(1,1) coherent states which interpolate between Werner and Greenberger–Horne–Zeilinger states. -- Highlights: ► Pairwise quantum correlations in multipartite coherent states. ► Explicit expression of the geometric quantum discord. ► Entanglement sudden death and quantum discord robustness. ► Generalized coherent states interpolating between Werner and Greenberger–Horne–Zeilinger states.

  7. Macroscopicity of quantum superpositions on a one-parameter unitary path in Hilbert space

    Science.gov (United States)

    Volkoff, T. J.; Whaley, K. B.

    2014-12-01

    We analyze quantum states formed as superpositions of an initial pure product state and its image under local unitary evolution, using two measurement-based measures of superposition size: one based on the optimal quantum binary distinguishability of the branches of the superposition and another based on the ratio of the maximal quantum Fisher information of the superposition to that of its branches, i.e., the relative metrological usefulness of the superposition. A general formula for the effective sizes of these states according to the branch-distinguishability measure is obtained and applied to superposition states of N quantum harmonic oscillators composed of Gaussian branches. Considering optimal distinguishability of pure states on a time-evolution path leads naturally to a notion of distinguishability time that generalizes the well-known orthogonalization times of Mandelstam and Tamm and Margolus and Levitin. We further show that the distinguishability time provides a compact operational expression for the superposition size measure based on the relative quantum Fisher information. By restricting the maximization procedure in the definition of this measure to an appropriate algebra of observables, we show that the superposition size of, e.g., NOON states and hierarchical cat states, can scale linearly with the number of elementary particles comprising the superposition state, implying precision scaling inversely with the total number of photons when these states are employed as probes in quantum parameter estimation of a 1-local Hamiltonian in this algebra.

  8. Quantum equivalence principle without mass superselection

    International Nuclear Information System (INIS)

    Hernandez-Coronado, H.; Okon, E.

    2013-01-01

    The standard argument for the validity of Einstein's equivalence principle in a non-relativistic quantum context involves the application of a mass superselection rule. The objective of this work is to show that, contrary to widespread opinion, the compatibility between the equivalence principle and quantum mechanics does not depend on the introduction of such a restriction. For this purpose, we develop a formalism based on the extended Galileo group, which allows for a consistent handling of superpositions of different masses, and show that, within such scheme, mass superpositions behave as they should in order to obey the equivalence principle. - Highlights: • We propose a formalism for consistently handling, within a non-relativistic quantum context, superpositions of states with different masses. • The formalism utilizes the extended Galileo group, in which mass is a generator. • The proposed formalism allows for the equivalence principle to be satisfied without the need of imposing a mass superselection rule

  9. Macroscopic superposition states and decoherence by quantum telegraph noise

    Energy Technology Data Exchange (ETDEWEB)

    Abel, Benjamin Simon

    2008-12-19

In the first part of the present thesis we address the question about the size of superpositions of macroscopically distinct quantum states. We propose a measure for the "size" of a Schroedinger cat state, i.e. a quantum superposition of two many-body states with (supposedly) macroscopically distinct properties, by counting how many single-particle operations are needed to map one state onto the other. We apply our measure to a superconducting three-junction flux qubit put into a superposition of clockwise and counterclockwise circulating supercurrent states and find this Schroedinger cat to be surprisingly small. The unavoidable coupling of any quantum system to many environmental degrees of freedom leads to an irreversible loss of information about an initially prepared superposition of quantum states. This phenomenon, commonly referred to as decoherence or dephasing, is the subject of the second part of the thesis. We have studied the time evolution of the reduced density matrix of a two-level system (qubit) subject to quantum telegraph noise which is the major source of decoherence in Josephson charge qubits. We are able to derive an exact expression for the time evolution of the reduced density matrix. (orig.)

  10. Macroscopic superposition states and decoherence by quantum telegraph noise

    International Nuclear Information System (INIS)

    Abel, Benjamin Simon

    2008-01-01

    In the first part of the present thesis we address the question of the size of superpositions of macroscopically distinct quantum states. We propose a measure for the "size" of a Schrödinger cat state, i.e. a quantum superposition of two many-body states with (supposedly) macroscopically distinct properties, by counting how many single-particle operations are needed to map one state onto the other. We apply our measure to a superconducting three-junction flux qubit put into a superposition of clockwise and counterclockwise circulating supercurrent states and find this Schrödinger cat to be surprisingly small. The unavoidable coupling of any quantum system to many environmental degrees of freedom leads to an irreversible loss of information about an initially prepared superposition of quantum states. This phenomenon, commonly referred to as decoherence or dephasing, is the subject of the second part of the thesis. We have studied the time evolution of the reduced density matrix of a two-level system (qubit) subject to quantum telegraph noise, which is the major source of decoherence in Josephson charge qubits. We are able to derive an exact expression for the time evolution of the reduced density matrix. (orig.)

  11. Generalization of Abel's mechanical problem: The extended isochronicity condition and the superposition principle

    Energy Technology Data Exchange (ETDEWEB)

    Kinugawa, Tohru, E-mail: kinugawa@phoenix.kobe-u.ac.jp [Institute for Promotion of Higher Education, Kobe University, Kobe 657-8501 (Japan)

    2014-02-15

    This paper presents a simple but nontrivial generalization of Abel's mechanical problem, based on the extended isochronicity condition and the superposition principle. There are two primary aims. The first is to reveal the linear relation between the transit time T and the travel length X hidden behind the isochronicity problem, which is usually discussed in terms of the nonlinear equation of motion d²X/dt² + dU/dX = 0 with U(X) being an unknown potential. Second, the isochronicity condition is extended for a possible Abel-transform approach to designing the isochronous trajectories of charged particles in spectrometers and/or accelerators for time-resolving experiments. Our approach is based on the integral formula for oscillatory motion by Landau and Lifshitz [Mechanics (Pergamon, Oxford, 1976), pp. 27-29]. The same formula is used to treat the non-periodic motion driven by U(X). Specifically, this unknown potential is determined by the (linear) Abel transform X(U) ∝ A[T(E)], where X(U) is the inverse function of U(X), A = (1/√π)∫₀^E dU/√(E−U) is the so-called Abel operator, and T(E) is the prescribed transit time for a particle with energy E to spend in the region of interest. Based on this Abel-transform approach, we introduce the extended isochronicity condition: typically, τ = T_A(E) + T_N(E), where τ is a constant period, T_A(E) is the transit time in the Abel-type [A-type] region spanning X > 0 and T_N(E) is that in the non-Abel-type [N-type] region covering X < 0. In the A-type region X > 0, the unknown inverse function X_A(U) is determined from T_A(E) via the Abel-transform relation X_A(U) ∝ A[T_A(E)]. In contrast, the N-type region X < 0 does not ensure this linear relation: the region is covered with a predetermined potential U_N(X) of some arbitrary choice, not necessarily obeying the Abel-transform relation.
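The Abel operator defined in the abstract can be sketched numerically. In this minimal Python sketch the constant transit time T(E) = τ is an illustrative assumption, not the paper's data; the substitution E = U sin²θ removes the square-root singularity, and a constant T(E) recovers X(U) ∝ √U, i.e. the harmonic potential U ∝ X² of the classic isochrone:

```python
import numpy as np

def abel_transform(T, U, n=2001):
    """Apply the Abel operator A[T](U) = (1/sqrt(pi)) * integral_0^U T(E) dE / sqrt(U - E).
    The substitution E = U*sin(theta)**2 removes the singularity at E = U:
    the integral becomes 2*sqrt(U) * integral_0^{pi/2} T(U sin^2 theta) sin(theta) dtheta."""
    theta = np.linspace(0.0, np.pi / 2, n)
    f = T(U * np.sin(theta) ** 2) * np.sin(theta)
    # trapezoid rule on the regularized integrand
    integral = np.sum((f[1:] + f[:-1]) / 2) * (theta[1] - theta[0])
    return 2.0 * np.sqrt(U / np.pi) * integral

# Isochronous test case (illustrative assumption): constant transit time tau
tau = 1.0
X = [abel_transform(lambda E: tau * np.ones_like(E), U) for U in (0.25, 1.0, 4.0)]
# X(U) grows as sqrt(U); inverting gives U(X) proportional to X^2 (harmonic)
print(X[1] / X[0], X[2] / X[1])  # both ratios ≈ 2
```

For a constant T(E) the exact result is A[T](U) = 2τ√(U/π), which the quadrature reproduces to high accuracy.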

  12. Interplay of gravitation and linear superposition of different mass eigenstates

    International Nuclear Information System (INIS)

    Ahluwalia, D.V.

    1998-01-01

    The interplay of gravitation and the quantum-mechanical principle of linear superposition induces a new set of neutrino oscillation phases. These ensure that the flavor-oscillation clocks, inherent in the phenomenon of neutrino oscillations, redshift precisely as required by Einstein's theory of gravitation. The physical observability of these phases in the context of the solar neutrino anomaly, type-II supernova, and certain atomic systems is briefly discussed. copyright 1998 The American Physical Society

  13. Active measurement-based quantum feedback for preparing and stabilizing superpositions of two cavity photon number states

    Science.gov (United States)

    Berube-Lauziere, Yves

    The measurement-based quantum feedback scheme developed and implemented by Haroche and collaborators to actively prepare and stabilize specific photon number states in cavity quantum electrodynamics (CQED) is a milestone achievement in the active protection of quantum states from decoherence. This feat was achieved by injecting, after each weak dispersive measurement of the cavity state via Rydberg atoms serving as cavity sensors, a classical field (coherent state) with a low average photon number to steer the cavity towards the targeted number state. This talk will present the generalization of the theory developed for targeting number states in order to prepare and stabilize desired superpositions of two cavity photon number states. Results from realistic simulations taking into account decoherence and imperfections in a CQED set-up will be presented. These demonstrate the validity of the generalized theory and point to the experimental feasibility of preparing and stabilizing such superpositions. This is a further step towards the active protection of quantum states more complex than number states. This work, cast in the context of CQED, is also almost readily applicable to circuit QED. YBL acknowledges financial support from the Institut Quantique through a Canada First Research Excellence Fund.

  14. Variational principles for collective motion: Relation between invariance principle of the Schroedinger equation and the trace variational principle

    International Nuclear Information System (INIS)

    Klein, A.; Tanabe, K.

    1984-01-01

    The invariance principle of the Schrödinger equation provides a basis for theories of collective motion with the help of the time-dependent variational principle. It is formulated here with maximum generality, requiring only the motion of the intrinsic state in the collective space. Special cases arise when the trial vector is a generalized coherent state and when it is a uniform superposition of collective eigenstates. The latter example yields variational principles uncovered previously only within the framework of the equations-of-motion method. (orig.)

  15. Single-Atom Gating of Quantum State Superpositions

    Energy Technology Data Exchange (ETDEWEB)

    Moon, Christopher

    2010-04-28

    The ultimate miniaturization of electronic devices will likely require local and coherent control of single electronic wavefunctions. Wavefunctions exist within both physical real space and an abstract state space with a simple geometric interpretation: this state space - or Hilbert space - is spanned by mutually orthogonal state vectors corresponding to the quantized degrees of freedom of the real-space system. Measurement of superpositions is akin to accessing the direction of a vector in Hilbert space, determining an angle of rotation equivalent to quantum phase. Here we show that an individual atom inside a designed quantum corral can control this angle, producing arbitrary coherent superpositions of spatial quantum states. Using scanning tunnelling microscopy and nanostructures assembled atom-by-atom we demonstrate how single spins and quantum mirages can be harnessed to image the superposition of two electronic states. We also present a straightforward method to determine the atom path enacting phase rotations between any desired state vectors. A single atom thus becomes a real-space handle for an abstract Hilbert space, providing a simple technique for coherent quantum state manipulation at the spatial limit of condensed matter.

  16. Complementary Huygens Principle for Geometrical and Nongeometrical Optics

    Science.gov (United States)

    Luis, Alfredo

    2007-01-01

    We develop a fundamental principle depicting the generalized ray formulation of optics provided by the Wigner function. This principle is formally identical to the Huygens-Fresnel principle but in terms of opposite concepts, rays instead of waves, and incoherent superpositions instead of coherent ones. This ray picture naturally includes…

  17. Student Ability to Distinguish between Superposition States and Mixed States in Quantum Mechanics

    Science.gov (United States)

    Passante, Gina; Emigh, Paul J.; Shaffer, Peter S.

    2015-01-01

    Superposition gives rise to the probabilistic nature of quantum mechanics and is therefore one of the concepts at the heart of quantum mechanics. Although we have found that many students can successfully use the idea of superposition to calculate the probabilities of different measurement outcomes, they are often unable to identify the…

  18. Exclusion of identification by negative superposition

    Directory of Open Access Journals (Sweden)

    Takač Šandor

    2012-01-01

    Full Text Available The paper represents the first report of negative superposition in our country. A photograph of a randomly selected young living woman was superimposed on a previously discovered female skull. The computer program Adobe Photoshop 7.0 was used in this work. Digitized photographs of the skull and face, after being uploaded to a computer, were superimposed on each other and displayed on the monitor in order to assess their possible similarities or differences. Special attention was paid to matching the same anthropometric points of the skull and face, as well as to following their contours. The process of fitting the skull to the photograph usually starts by setting the eyes in the correct position relative to the orbits. In this case, the gonions of the lower jaw go beyond the face contour and the gnathion is placed too high. The chin, mouth and nose cannot be brought into their correct anatomical positions. All difficulties associated with the superposition were recorded, with special emphasis on critical evaluation of the results in a negative superposition. Negative superposition has greater probative value (exclusion of identification) than positive superposition (possible identification). A 100% negative superposition is easily achieved, but a 100% positive one almost never. 'Each skull is unique and viewed from different perspectives is always a new challenge.' From this point of view, identification can be negative or of high probability.

  19. On-line and real-time diagnosis method for proton exchange membrane fuel cell (PEMFC) stack by the superposition principle

    Science.gov (United States)

    Lee, Young-Hyun; Kim, Jonghyeon; Yoo, Seungyeol

    2016-09-01

    A critical cell voltage drop in a stack can be followed by a stack defect. One method of detecting a defective cell is cell voltage monitoring; other methods are based on the nonlinear frequency response. In this paper, the superposition principle (SPP) is introduced for the diagnosis of a PEMFC stack. If critical cell voltage drops exist, the stack behaves as a nonlinear system. This nonlinearity can appear explicitly in the ohmic overpotential region of the voltage-current curve. To detect a critical cell voltage drop, the stack is excited by two DC test currents that have a smaller amplitude than the operating stack current and are spaced equally above and below the operating current. If the difference between the voltage excited by one test current and the voltage excited by the load current is not equal to the difference between the other voltage response and the voltage excited by the load current, the stack acts as a nonlinear system, which means that there is a critical cell voltage drop. The deviation of this difference from zero reflects the degree of the system's nonlinearity. A simulation model for stack diagnosis is developed based on the SPP and experimentally validated.
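The linearity check described above can be illustrated with a toy model. The polarization curves below are hypothetical illustrations, not measured stack data; the check compares the two voltage differences produced by the paired test currents, which agree for a linear (healthy) stack and disagree when a nonlinear (defective) term is present:

```python
# Hypothetical polarization curves (illustrative only, not measured data)
def v_healthy(i):
    # purely ohmic region: voltage is linear in current
    return 0.95 - 0.05 * i

def v_defective(i):
    # a defective cell adds a nonlinear term to the stack voltage
    return 0.95 - 0.05 * i - 0.01 * i ** 2

def nonlinearity(v, i_op, di):
    """Superposition check: compare the voltage differences for the two
    test currents i_op - di and i_op + di against the load-current voltage."""
    d_lo = v(i_op - di) - v(i_op)
    d_hi = v(i_op) - v(i_op + di)
    return d_lo - d_hi  # zero for a linear stack, nonzero otherwise

print(nonlinearity(v_healthy, 1.0, 0.2))    # ≈ 0: no critical voltage drop
print(nonlinearity(v_defective, 1.0, 0.2))  # nonzero: nonlinearity detected
```

For a quadratic defect term c·i², the indicator equals 2cΔi², so its magnitude grows with both the defect strength and the test-current spacing.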

  20. Equivalence principle and quantum mechanics: quantum simulation with entangled photons.

    Science.gov (United States)

    Longhi, S

    2018-01-15

    Einstein's equivalence principle (EP) states the complete physical equivalence of a gravitational field and corresponding inertial field in an accelerated reference frame. However, to what extent the EP remains valid in non-relativistic quantum mechanics is a controversial issue. To avoid violation of the EP, Bargmann's superselection rule forbids a coherent superposition of states with different masses. Here we suggest a quantum simulation of non-relativistic Schrödinger particle dynamics in non-inertial reference frames, which is based on the propagation of polarization-entangled photon pairs in curved and birefringent optical waveguides and Hong-Ou-Mandel quantum interference measurement. The photonic simulator can emulate superposition of mass states, which would lead to violation of the EP.

  1. Fundamental principles of quantum theory

    International Nuclear Information System (INIS)

    Bugajski, S.

    1980-01-01

    After introducing general versions of three fundamental quantum postulates - the superposition principle, the uncertainty principle and the complementarity principle - the question of whether the three principles are sufficiently strong to restrict the general Mackey description of quantum systems to the standard Hilbert-space quantum theory is discussed. An example which shows that the answer must be negative is constructed. An abstract version of the projection postulate is introduced and it is demonstrated that it could serve as the missing physical link between the general Mackey description and the standard quantum theory. (author)

  2. A method to study the characteristics of 3D dose distributions created by superposition of many intensity-modulated beams delivered via a slit aperture with multiple absorbing vanes

    International Nuclear Information System (INIS)

    Webb, S.; Oldham, M.

    1996-01-01

    Highly conformal dose distributions can be created by the superposition of many radiation fields from different directions, each with its intensity spatially modulated by the method known as tomotherapy. At the planning stage, the intensity of radiation of each beam element (or bixel) is determined by working out the effect of superposing the radiation through all bixels with the elemental dose distribution specified as that from a single bixel with all its neighbours closed (the 'independent-vane' (IV) model). However, at treatment-delivery stage, neighbouring bixels may not be closed. Instead the slit beam is delivered with parts of the beam closed for different periods of time to create the intensity modulation. As a result, the 3D dose distribution actually delivered will differ from that determined at the planning stage if the elemental beams do not obey the superposition principle. The purpose of this paper is to present a method to investigate and quantify the relation between planned and delivered 3D dose distributions. Two modes of inverse planning have been performed: (i) with a fit to the measured elemental dose distribution and (ii) with a 'stretched fit' obeying the superposition principle as in the PEACOCK 3D planning system. The actual delivery has been modelled as a series of component deliveries (CDs). The algorithm for determining the component intensities and the appropriate collimation conditions is specified. The elemental beam from the NOMOS MIMiC collimator is too narrow to obey the superposition principle although it can be 'stretched' and fitted to a superposition function. Hence there are differences between the IV plans made using modes (i) and (ii) and the raw and the stretched elemental beam, and also differences with CD delivery. This study shows that the differences between IV and CD dose distributions are smaller for mode (ii) inverse planning than for mode (i), somewhat justifying the way planning is done within PEACOCK. Using a

  3. Optimal simultaneous superpositioning of multiple structures with missing data.

    Science.gov (United States)

    Theobald, Douglas L; Steindel, Phillip A

    2012-08-01

    Superpositioning is an essential technique in structural biology that facilitates the comparison and analysis of conformational differences among topologically similar structures. Performing a superposition requires a one-to-one correspondence, or alignment, of the point sets in the different structures. However, in practice, some points are usually 'missing' from several structures, for example, when the alignment contains gaps. Current superposition methods deal with missing data simply by superpositioning a subset of points that are shared among all the structures. This practice is inefficient, as it ignores important data, and it fails to satisfy the common least-squares criterion. In the extreme, disregarding missing positions prohibits the calculation of a superposition altogether. Here, we present a general solution for determining an optimal superposition when some of the data are missing. We use the expectation-maximization algorithm, a classic statistical technique for dealing with incomplete data, to find both maximum-likelihood solutions and the optimal least-squares solution as a special case. The methods presented here are implemented in THESEUS 2.0, a program for superpositioning macromolecular structures. ANSI C source code and selected compiled binaries for various computing platforms are freely available under the GNU open source license from http://www.theseus3d.org. dtheobald@brandeis.edu Supplementary data are available at Bioinformatics online.
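The ordinary least-squares criterion that this paper generalizes can be sketched with the standard Kabsch SVD superposition. This is the complete-data building block only; the EM treatment of missing points is beyond the sketch, and the toy coordinates below are synthetic:

```python
import numpy as np

def kabsch(P, Q):
    """Least-squares rotation aligning point set P onto Q (rows are points)."""
    Pc = P - P.mean(axis=0)          # center both point sets
    Qc = Q - Q.mean(axis=0)
    H = Pc.T @ Qc                    # 3x3 covariance matrix
    U, S, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against an improper rotation
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, Pc @ R.T, Qc

# Rotate a toy 'structure' and recover the rotation
rng = np.random.default_rng(0)
Q = rng.normal(size=(10, 3))
a = 0.7
Rz = np.array([[np.cos(a), -np.sin(a), 0.0],
               [np.sin(a),  np.cos(a), 0.0],
               [0.0, 0.0, 1.0]])
P = Q @ Rz.T                         # P is Q rotated about z
R, P_fit, Qc = kabsch(P, Q)
rmsd = np.sqrt(((P_fit - Qc) ** 2).sum() / len(Q))
print(rmsd)  # ≈ 0: exact recovery for noise-free data
```

With gapped alignments, restricting P and Q to the shared rows before calling `kabsch` is exactly the subset practice the abstract criticizes.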

  4. Complementary Huygens principle for geometrical and nongeometrical optics

    International Nuclear Information System (INIS)

    Luis, Alfredo

    2007-01-01

    We develop a fundamental principle depicting the generalized ray formulation of optics provided by the Wigner function. This principle is formally identical to the Huygens-Fresnel principle but in terms of opposite concepts, rays instead of waves, and incoherent superpositions instead of coherent ones. This ray picture naturally includes diffraction and interference, and provides a geometrical picture of the degree of coherence

  5. Long-distance measurement-device-independent quantum key distribution with coherent-state superpositions.

    Science.gov (United States)

    Yin, H-L; Cao, W-F; Fu, Y; Tang, Y-L; Liu, Y; Chen, T-Y; Chen, Z-B

    2014-09-15

    Measurement-device-independent quantum key distribution (MDI-QKD) with the decoy-state method is believed to be secure against various hacking attacks on practical quantum key distribution systems. Recently, coherent-state superpositions (CSS) have emerged as an alternative to single-photon qubits for quantum information processing and metrology. In this Letter, CSS are exploited as the source in MDI-QKD. We present an analytical method that gives two tight formulas to estimate the lower bound of the yield and the upper bound of the bit error rate. We exploit standard statistical analysis and the Chernoff bound to perform the parameter estimation; the Chernoff bound provides good bounds in long-distance MDI-QKD. Our results show that with CSS, both the secure transmission distance and the secure key rate are significantly improved compared with those of weak coherent states in the finite-data case.

  6. THESEUS: maximum likelihood superpositioning and analysis of macromolecular structures.

    Science.gov (United States)

    Theobald, Douglas L; Wuttke, Deborah S

    2006-09-01

    THESEUS is a command line program for performing maximum likelihood (ML) superpositions and analysis of macromolecular structures. While conventional superpositioning methods use ordinary least-squares (LS) as the optimization criterion, ML superpositions provide substantially improved accuracy by down-weighting variable structural regions and by correcting for correlations among atoms. ML superpositioning is robust and insensitive to the specific atoms included in the analysis, and thus it does not require subjective pruning of selected variable atomic coordinates. Output includes both likelihood-based and frequentist statistics for accurate evaluation of the adequacy of a superposition and for reliable analysis of structural similarities and differences. THESEUS performs principal components analysis for analyzing the complex correlations found among atoms within a structural ensemble. ANSI C source code and selected binaries for various computing platforms are available under the GNU open source license from http://monkshood.colorado.edu/theseus/ or http://www.theseus3d.org.

  7. Non-coaxial superposition of vector vortex beams.

    Science.gov (United States)

    Aadhi, A; Vaity, Pravin; Chithrabhanu, P; Reddy, Salla Gangi; Prabakar, Shashi; Singh, R P

    2016-02-10

    Vector vortex beams are classified into four types depending upon the spatial variation of their polarization vector. We have generated all four types of vector vortex beams by using a modified polarization Sagnac interferometer with a vortex lens. Further, we have studied the non-coaxial superposition of two vector vortex beams. It is observed that the superposition of two vector vortex beams with the same polarization singularity leads to a beam with another kind of polarization singularity in their interaction region. The results may be of importance for the ultrahigh security of polarization-encrypted data that utilizes vector vortex beams and for multiple optical trapping with the non-coaxial superposition of vector vortex beams. We verified our experimental results with theory.

  8. Electrical and electronic principles

    CERN Document Server

    Knight, S A

    1991-01-01

    Electrical and Electronic Principles, 2, Second Edition covers the syllabus requirements of BTEC Unit U86/329, including the principles of control systems and elements of data transmission. The book first tackles series and parallel circuits, electrical networks, and capacitors and capacitance. Discussions focus on flux density, electric force, permittivity, Kirchhoff's laws, superposition theorem, arrangement of resistors, internal resistance, and powers in a circuit. The text then takes a look at capacitors in circuit, magnetism and magnetization, electromagnetic induction, and alternating v
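The superposition theorem covered in this syllabus can be illustrated with a two-source resistive network (the component values below are arbitrary illustrative choices): the node voltage from direct nodal analysis equals the sum of the contributions computed with one source active at a time, the other replaced by a short circuit.

```python
def parallel(a, b):
    """Equivalent resistance of two resistors in parallel."""
    return a * b / (a + b)

# Hypothetical network: V1-R1 and V2-R2 both feed node A, with R3 from A to ground
R1, R2, R3 = 10.0, 20.0, 30.0   # ohms
V1, V2 = 12.0, 9.0              # volts

# Direct nodal analysis at node A (Kirchhoff's current law)
VA_direct = (V1 / R1 + V2 / R2) / (1 / R1 + 1 / R2 + 1 / R3)

# Superposition theorem: one source at a time, the other shorted
VA_from_V1 = V1 * parallel(R2, R3) / (R1 + parallel(R2, R3))
VA_from_V2 = V2 * parallel(R1, R3) / (R2 + parallel(R1, R3))
VA_super = VA_from_V1 + VA_from_V2

print(VA_direct, VA_super)  # both 9.0 V
```

The theorem applies because the resistive network is linear; a circuit with nonlinear elements would not satisfy this check.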

  9. Point kernels and superposition methods for scatter dose calculations in brachytherapy

    International Nuclear Information System (INIS)

    Carlsson, A.K.

    2000-01-01

    Point kernels have been generated and applied for calculation of scatter dose distributions around monoenergetic point sources for photon energies ranging from 28 to 662 keV. Three different approaches for dose calculations have been compared: a single-kernel superposition method, a single-kernel superposition method where the point kernels are approximated as isotropic and a novel 'successive-scattering' superposition method for improved modelling of the dose from multiply scattered photons. An extended version of the EGS4 Monte Carlo code was used for generating the kernels and for benchmarking the absorbed dose distributions calculated with the superposition methods. It is shown that dose calculation by superposition at and below 100 keV can be simplified by using isotropic point kernels. Compared to the assumption of full in-scattering made by algorithms currently in clinical use, the single-kernel superposition method improves dose calculations in a half-phantom consisting of air and water. Further improvements are obtained using the successive-scattering superposition method, which reduces the overestimates of dose close to the phantom surface usually associated with kernel superposition methods at brachytherapy photon energies. It is also shown that scatter dose point kernels can be parametrized to biexponential functions, making them suitable for use with an effective implementation of the collapsed cone superposition algorithm. (author)
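The single-kernel superposition idea can be sketched as a sum of point-kernel contributions over source elements. The biexponential kernel parameters and the line-source geometry below are hypothetical illustrations, not the fitted values from this work:

```python
import numpy as np

def kernel(r, A=1.0, a=2.0, B=0.3, b=0.5):
    """Hypothetical biexponential scatter-dose point kernel (illustrative parameters),
    combined with inverse-square geometric falloff."""
    return (A * np.exp(-a * r) + B * np.exp(-b * r)) / r ** 2

# A short line source along z, sampled at 11 points with equal weights
source = np.linspace(-0.5, 0.5, 11)[:, None] * np.array([0.0, 0.0, 1.0])
weights = np.full(len(source), 1.0 / len(source))

def dose(point):
    """Superposition: sum the kernel over all source elements."""
    r = np.linalg.norm(point - source, axis=1)
    return float(np.sum(weights * kernel(r)))

print(dose(np.array([2.0, 0.0, 0.0])), dose(np.array([3.0, 0.0, 0.0])))
```

The biexponential form is what makes the kernel convenient here: each exponential term can be accumulated recursively along a ray, which is the property exploited by collapsed-cone implementations.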

  10. Thermalization as an Invisibility Cloak for Fragile Quantum Superpositions

    OpenAIRE

    Hahn, Walter; Fine, Boris V.

    2017-01-01

    We propose a method for protecting fragile quantum superpositions in many-particle systems from dephasing by external classical noise. We call superpositions "fragile" if dephasing occurs particularly fast, because the noise couples very differently to the superposed states. The method consists of letting a quantum superposition evolve under the internal thermalization dynamics of the system, followed by a time reversal manipulation known as Loschmidt echo. The thermalization dynamics makes t...

  11. Toward quantum superposition of living organisms

    International Nuclear Information System (INIS)

    Romero-Isart, Oriol; Cirac, J Ignacio; Juan, Mathieu L; Quidant, Romain

    2010-01-01

    The most striking feature of quantum mechanics is the existence of superposition states, where an object appears to be in different situations at the same time. The existence of such states has been previously tested with small objects, such as atoms, ions, electrons and photons (Zoller et al 2005 Eur. Phys. J. D 36 203-28), and even with molecules (Arndt et al 1999 Nature 401 680-2). More recently, it has been shown that it is possible to create superpositions of collections of photons (Deleglise et al 2008 Nature 455 510-14), atoms (Hammerer et al 2008 arXiv:0807.3358) or Cooper pairs (Friedman et al 2000 Nature 406 43-6). Very recent progress in optomechanical systems may soon allow us to create superpositions of even larger objects, such as micro-sized mirrors or cantilevers (Marshall et al 2003 Phys. Rev. Lett. 91 130401; Kippenberg and Vahala 2008 Science 321 1172-6; Marquardt and Girvin 2009 Physics 2 40; Favero and Karrai 2009 Nature Photon. 3 201-5), and thus to test quantum mechanical phenomena at larger scales. Here we propose a method to cool down and create quantum superpositions of the motion of sub-wavelength, arbitrarily shaped dielectric objects trapped inside a high-finesse cavity at a very low pressure. Our method is ideally suited for the smallest living organisms, such as viruses, which survive under low-vacuum pressures (Rothschild and Mancinelli 2001 Nature 406 1092-101) and optically behave as dielectric objects (Ashkin and Dziedzic 1987 Science 235 1517-20). This opens up the possibility of testing the quantum nature of living organisms by creating quantum superposition states in very much the same spirit as the original Schroedinger's cat 'gedanken' paradigm (Schroedinger 1935 Naturwissenschaften 23 807-12, 823-8, 844-9). We anticipate that our paper will be a starting point for experimentally addressing fundamental questions, such as the role of life and consciousness in quantum mechanics.

  12. Toward quantum superposition of living organisms

    Energy Technology Data Exchange (ETDEWEB)

    Romero-Isart, Oriol; Cirac, J Ignacio [Max-Planck-Institut fuer Quantenoptik, Hans-Kopfermann-Strasse 1, D-85748, Garching (Germany); Juan, Mathieu L; Quidant, Romain [ICFO-Institut de Ciencies Fotoniques, Mediterranean Technology Park, Castelldefels, Barcelona 08860 (Spain)], E-mail: oriol.romero-isart@mpq.mpg.de

    2010-03-15

    The most striking feature of quantum mechanics is the existence of superposition states, where an object appears to be in different situations at the same time. The existence of such states has been previously tested with small objects, such as atoms, ions, electrons and photons (Zoller et al 2005 Eur. Phys. J. D 36 203-28), and even with molecules (Arndt et al 1999 Nature 401 680-2). More recently, it has been shown that it is possible to create superpositions of collections of photons (Deleglise et al 2008 Nature 455 510-14), atoms (Hammerer et al 2008 arXiv:0807.3358) or Cooper pairs (Friedman et al 2000 Nature 406 43-6). Very recent progress in optomechanical systems may soon allow us to create superpositions of even larger objects, such as micro-sized mirrors or cantilevers (Marshall et al 2003 Phys. Rev. Lett. 91 130401; Kippenberg and Vahala 2008 Science 321 1172-6; Marquardt and Girvin 2009 Physics 2 40; Favero and Karrai 2009 Nature Photon. 3 201-5), and thus to test quantum mechanical phenomena at larger scales. Here we propose a method to cool down and create quantum superpositions of the motion of sub-wavelength, arbitrarily shaped dielectric objects trapped inside a high-finesse cavity at a very low pressure. Our method is ideally suited for the smallest living organisms, such as viruses, which survive under low-vacuum pressures (Rothschild and Mancinelli 2001 Nature 406 1092-101) and optically behave as dielectric objects (Ashkin and Dziedzic 1987 Science 235 1517-20). This opens up the possibility of testing the quantum nature of living organisms by creating quantum superposition states in very much the same spirit as the original Schroedinger's cat 'gedanken' paradigm (Schroedinger 1935 Naturwissenschaften 23 807-12, 823-8, 844-9). We anticipate that our paper will be a starting point for experimentally addressing fundamental questions, such as the role of life and consciousness in quantum mechanics.

  13. Entanglement of arbitrary superpositions of modes within two-dimensional orbital angular momentum state spaces

    International Nuclear Information System (INIS)

    Jack, B.; Leach, J.; Franke-Arnold, S.; Ireland, D. G.; Padgett, M. J.; Yao, A. M.; Barnett, S. M.; Romero, J.

    2010-01-01

    We use spatial light modulators (SLMs) to measure correlations between arbitrary superpositions of orbital angular momentum (OAM) states generated by spontaneous parametric down-conversion. Our technique allows us to fully access a two-dimensional OAM subspace described by a Bloch sphere, within the higher-dimensional OAM Hilbert space. We quantify the entanglement through violations of a Bell-type inequality for pairs of modal superpositions that lie on equatorial, polar, and arbitrary great circles of the Bloch sphere. Our work shows that SLMs can be used to measure arbitrary spatial states with a fidelity sufficient for appropriate quantum information processing systems.

  14. Thermalization as an invisibility cloak for fragile quantum superpositions

    Science.gov (United States)

    Hahn, Walter; Fine, Boris V.

    2017-07-01

    We propose a method for protecting fragile quantum superpositions in many-particle systems from dephasing by external classical noise. We call superpositions "fragile" if dephasing occurs particularly fast, because the noise couples very differently to the superposed states. The method consists of letting a quantum superposition evolve under the internal thermalization dynamics of the system, followed by a time-reversal manipulation known as Loschmidt echo. The thermalization dynamics makes the superposed states almost indistinguishable during most of the above procedure. We validate the method by applying it to a cluster of spins ½.
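The echo step of the protocol can be illustrated in miniature: evolve a state forward under a fixed Hamiltonian, apply the reversed evolution, and check that the initial state returns. The random Hamiltonian below is an illustrative stand-in for the internal thermalization dynamics, and real Loschmidt echoes degrade under imperfections that this ideal sketch omits:

```python
import numpy as np

# Random Hermitian matrix as a stand-in 'internal' Hamiltonian
rng = np.random.default_rng(1)
n = 8
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
H = (A + A.conj().T) / 2

# Exact unitary evolution exp(-iHt) via the eigendecomposition of H
w, V = np.linalg.eigh(H)
def evolve(psi, t):
    return V @ (np.exp(-1j * w * t) * (V.conj().T @ psi))

psi0 = np.zeros(n, complex)
psi0[0] = 1.0                       # initial state to be protected
psi_fwd = evolve(psi0, t=5.0)       # forward 'thermalization' dynamics
psi_echo = evolve(psi_fwd, t=-5.0)  # time-reversal (Loschmidt echo) step
fidelity = abs(np.vdot(psi0, psi_echo)) ** 2
print(fidelity)  # ≈ 1 for perfect reversal
```

During the forward evolution the state spreads over the basis, which is the sense in which the superposition becomes hard for external noise to distinguish; the reversal then restores it.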

  15. Intra-cavity generation of superpositions of Laguerre-Gaussian beams

    CSIR Research Space (South Africa)

    Naidoo, Darryl

    2012-01-01

    Full Text Available In this paper we demonstrate experimentally the intra-cavity generation of a coherent superposition of Laguerre–Gaussian modes of zero radial order but opposite azimuthal order. The superposition is created with a simple intra-cavity stop...

  16. Teleportation of Unknown Superpositions of Collective Atomic Coherent States

    Institute of Scientific and Technical Information of China (English)

    ZHENG ShiBiao

    2001-01-01

    We propose a scheme to teleport an unknown superposition of two atomic coherent states with different phases. Our scheme is based on resonant and dispersive atom-field interaction. It provides a possibility of teleporting macroscopic superposition states of many atoms for the first time.

  17. Quantum State Engineering Via Coherent-State Superpositions

    Science.gov (United States)

    Janszky, Jozsef; Adam, P.; Szabo, S.; Domokos, P.

    1996-01-01

    The quantum interference between the two parts of the optical Schrodinger-cat state makes it possible to construct a wide class of quantum states via discrete superpositions of coherent states. Even a small number of coherent states can approximate a given quantum state to high accuracy when the distance between the coherent states is optimized; e.g., a nearly perfect Fock state can be constructed from a discrete superposition of n + 1 coherent states lying in the vicinity of the vacuum state.
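
The closing claim can be checked numerically: an odd superposition of two weak coherent states, (|α⟩ − |−α⟩)/N, tends to the one-photon Fock state as α → 0. A small sketch in a truncated Fock space, with an illustrative α:

```python
import numpy as np
from math import factorial

def coherent(alpha, dim=20):
    # Fock-basis expansion of |alpha>, truncated at `dim` photon numbers
    n = np.arange(dim)
    amps = alpha ** n / np.sqrt([factorial(k) for k in n])
    return np.exp(-abs(alpha) ** 2 / 2) * amps

alpha = 0.3
cat = coherent(alpha) - coherent(-alpha)   # odd coherent-state superposition
cat /= np.linalg.norm(cat)

fidelity = abs(cat[1]) ** 2   # overlap with the Fock state |1>
print(fidelity)  # ≈ 0.9987: two coherent states already give a near-perfect |1>
```

Only odd photon numbers survive the subtraction, and for small α the n = 1 term dominates, which is exactly the "coherent states near the vacuum" construction mentioned in the abstract.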

  18. Experimental superposition of orders of quantum gates

    Science.gov (United States)

    Procopio, Lorenzo M.; Moqanaki, Amir; Araújo, Mateus; Costa, Fabio; Alonso Calafell, Irati; Dowd, Emma G.; Hamel, Deny R.; Rozema, Lee A.; Brukner, Časlav; Walther, Philip

    2015-01-01

    Quantum computers achieve a speed-up by placing quantum bits (qubits) in superpositions of different states. However, it has recently been appreciated that quantum mechanics also allows one to ‘superimpose different operations'. Furthermore, it has been shown that using a qubit to coherently control the gate order allows one to accomplish a task—determining if two gates commute or anti-commute—with fewer gate uses than any known quantum algorithm. Here we experimentally demonstrate this advantage, in a photonic context, using a second qubit to control the order in which two gates are applied to a first qubit. We create the required superposition of gate orders by using additional degrees of freedom of the photons encoding our qubits. The new resource we exploit can be interpreted as a superposition of causal orders, and could allow quantum algorithms to be implemented with an efficiency unlikely to be achieved on a fixed-gate-order quantum computer. PMID:26250107
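
The commute-vs-anticommute task can be sketched in a few lines of linear algebra. Below is a toy simulation (not the photonic implementation): the control qubit starts in |+⟩, the two branches apply the gates in opposite orders, and an X-basis measurement of the control reveals the (anti)commutativity with a single use of each gate:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1.0, -1.0]).astype(complex)

def switch_minus_probability(U, V, psi):
    """Probability of finding the control in |-> after the quantum switch:
    the joint state is (|0>UV|psi> + |1>VU|psi>)/sqrt(2) for a |+> control."""
    branch0 = U @ V @ psi            # control |0>: V first, then U
    branch1 = V @ U @ psi            # control |1>: U first, then V
    minus = (branch0 - branch1) / 2  # target amplitude after projecting onto |->
    return np.linalg.norm(minus) ** 2

psi = np.array([1, 0], dtype=complex)
print(switch_minus_probability(X, Z, psi))  # anticommuting gates -> 1.0
print(switch_minus_probability(X, X, psi))  # commuting gates     -> 0.0
```

If UV = ±VU, the |−⟩ outcome occurs with probability 1 or 0 respectively, so one run decides the question, whereas a fixed-gate-order circuit needs an extra use of one of the gates.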

  19. Deterministic preparation of superpositions of vacuum plus one photon by adaptive homodyne detection: experimental considerations

    International Nuclear Information System (INIS)

    Pozza, Nicola Dalla; Wiseman, Howard M; Huntington, Elanor H

    2015-01-01

    The preparation stage of optical qubits is an essential task in all the experimental setups employed for the test and demonstration of quantum optics principles. We consider a deterministic protocol for the preparation of qubits as a superposition of vacuum and one-photon number states, which has the advantage of reducing the amount of resources required, via phase-sensitive measurements using a local oscillator ('dyne detection'). We investigate the performance of the protocol using different phase measurement schemes: homodyne, heterodyne, and adaptive dyne detection (involving a feedback loop). First, we define a suitable figure of merit for the prepared state and obtain an analytical expression for it in terms of the phase measurement considered. Further, we study limitations that the phase measurement can exhibit, such as delay or limited resources in the feedback strategy. Finally, we evaluate the figure of merit of the protocol for different mode shapes readily available in an experimental setup. We show that even in the presence of such limitations, simple feedback algorithms can perform surprisingly well, outperforming the protocols employing simple homodyne or heterodyne schemes.

  20. Robust mesoscopic superposition of strongly correlated ultracold atoms

    International Nuclear Information System (INIS)

    Hallwood, David W.; Ernst, Thomas; Brand, Joachim

    2010-01-01

    We propose a scheme to create coherent superpositions of annular flow of strongly interacting bosonic atoms in a one-dimensional ring trap. The nonrotating ground state is coupled to a vortex state with mesoscopic angular momentum by means of a narrow potential barrier and an applied phase that originates from either rotation or a synthetic magnetic field. We show that superposition states in the Tonks-Girardeau regime are robust against single-particle loss due to the effects of strong correlations. The coupling between the mesoscopically distinct states scales much more favorably with particle number than in schemes relying on weak interactions, thus making particle numbers of hundreds or thousands feasible. Coherent oscillations induced by time variation of parameters may serve as a 'smoking gun' signature for detecting superposition states.

  1. Empirical Evaluation of Superposition Coded Multicasting for Scalable Video

    KAUST Repository

    Chun Pong Lau

    2013-03-01

    In this paper we investigate cross-layer superposition coded multicast (SCM). Previous studies have proven its effectiveness in exploiting better channel capacity and service granularities via both analytical and simulation approaches. However, it has never been practically implemented using a commercial 4G system. This paper demonstrates our prototype achieving SCM using a standard 802.16-based testbed for scalable video transmissions. In particular, to implement superposition coded (SPC) modulation, we take advantage of a novel software approach, namely logical SPC (L-SPC), which aims to mimic physical-layer superposition coded modulation. The emulation results show improved throughput compared with the generic multicast method.
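
To make the layering idea concrete, here is a minimal baseband sketch of superposition coded modulation with successive interference cancellation (SIC). It is an idealized, noiseless illustration with an assumed 4:1 power split, not the L-SPC implementation from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def qpsk(bits):
    # Gray-mapped QPSK: two bits -> one complex symbol
    return ((1 - 2 * bits[0::2]) + 1j * (1 - 2 * bits[1::2])) / np.sqrt(2)

def qpsk_demod(sym):
    # Sign decisions on I and Q recover the two bits per symbol
    bits = np.empty(2 * len(sym), dtype=int)
    bits[0::2] = (sym.real < 0).astype(int)
    bits[1::2] = (sym.imag < 0).astype(int)
    return bits

# Base layer (high power) superposed with enhancement layer (low power)
b_base = rng.integers(0, 2, 200)
b_enh = rng.integers(0, 2, 200)
p_base, p_enh = 0.8, 0.2
tx = np.sqrt(p_base) * qpsk(b_base) + np.sqrt(p_enh) * qpsk(b_enh)

# Receiver: decode the base layer first (enhancement treated as interference),
# then subtract it and decode the enhancement layer (SIC).
base_hat = qpsk_demod(tx)
residual = tx - np.sqrt(p_base) * qpsk(base_hat)
enh_hat = qpsk_demod(residual / np.sqrt(p_enh))
print((base_hat == b_base).all(), (enh_hat == b_enh).all())  # True True
```

A far receiver would stop after the base layer (the scalable-video base quality), while a near receiver runs the full SIC chain for the enhancement layer.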

  2. Superposition Attacks on Cryptographic Protocols

    DEFF Research Database (Denmark)

    Damgård, Ivan Bjerre; Funder, Jakob Løvstad; Nielsen, Jesper Buus

    2011-01-01

    Attacks on classical cryptographic protocols are usually modeled by allowing an adversary to ask queries from an oracle. Security is then defined by requiring that as long as the queries satisfy some constraint, there is some problem the adversary cannot solve, such as compute a certain piece of information. In this paper, we introduce a fundamentally new model of quantum attacks on classical cryptographic protocols, where the adversary is allowed to ask several classical queries in quantum superposition. This is a strictly stronger attack than the standard one, and we consider the security of several primitives in this model. We show that a secret-sharing scheme that is secure with threshold $t$ in the standard model is secure against superposition attacks if and only if the threshold is lowered to $t/2$. We use this result to give zero-knowledge proofs for all of NP in the common reference...

  3. Linear dynamic analysis of arbitrary thin shells modal superposition by using finite element method

    International Nuclear Information System (INIS)

    Goncalves Filho, O.J.A.

    1978-11-01

    The linear dynamic behaviour of arbitrary thin shells is studied by the finite element method. Plane triangular elements with eighteen degrees of freedom each are used. The general equations of motion are obtained from Hamilton's principle and solved by the modal superposition method. Viscous damping can be taken into account by means of percentages of critical damping. An automatic computer program was developed to provide the vibratory properties and the dynamic response to several types of deterministic loadings, including temperature effects. The program was written in FORTRAN IV for the Burroughs B-6700 computer. (author)
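
The modal superposition method reduces M ẍ + K x = F(t) to independent single-mode oscillators. A minimal numeric sketch (a 3-DOF spring-mass stand-in, not the shell element model) verifying that the superposed modal responses satisfy the original equations of motion:

```python
import numpy as np

# 3-DOF spring-mass chain: M x'' + K x = F (step load)
M = np.diag([2.0, 1.0, 1.0])
K = np.array([[ 4.0, -2.0,  0.0],
              [-2.0,  4.0, -2.0],
              [ 0.0, -2.0,  2.0]])
F = np.array([0.0, 0.0, 1.0])

# Generalized eigenproblem K phi = w^2 M phi via Cholesky whitening
L = np.linalg.cholesky(M)
Linv = np.linalg.inv(L)
w2, Q = np.linalg.eigh(Linv @ K @ Linv.T)
Phi = Linv.T @ Q           # mass-normalized mode shapes: Phi^T M Phi = I
f = Phi.T @ F              # modal forces

# Undamped modal response to a step load: q_i(t) = f_i (1 - cos w_i t)/w_i^2
t = 0.7
w = np.sqrt(w2)
q = f * (1 - np.cos(w * t)) / w2
a = f * np.cos(w * t)      # modal accelerations q''_i(t) = f_i cos(w_i t)
x, xdd = Phi @ q, Phi @ a  # superpose the modes

residual = np.linalg.norm(M @ xdd + K @ x - F)
print(residual)  # ≈ 0: the superposed modal solution satisfies M x'' + K x = F
```

Truncating the modal sum to the lowest modes, as the abstract describes, simply drops the highest-w2 columns of Phi; damping enters as a fraction of critical damping in each scalar modal equation.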

  4. Superposition in quantum and relativity physics: an interaction interpretation of special relativity theory. III

    International Nuclear Information System (INIS)

    Schlegel, R.

    1975-01-01

    With the interaction interpretation, the Lorentz transformation of a system arises with selection from a superposition of its states in an observation-interaction. Integration of momentum states of a mass over all possible velocities gives the rest-mass energy. Static electrical and magnetic fields are not found to form such a superposition and are to be taken as irreducible elements. The external superposition consists of those states that are reached only by change of state of motion, whereas the internal superposition contains all the states available to an observer in a single inertial coordinate system. The conjecture is advanced that states of superposition may only be those related by space-time transformations (Lorentz transformations plus space inversion and charge conjugation). The continuum of external and internal superpositions is examined for various masses, and an argument for the unity of the superpositions is presented

  5. On the L-characteristic of nonlinear superposition operators in lp-spaces

    International Nuclear Information System (INIS)

    Dedagic, F.

    1995-04-01

    In this paper we describe the L-characteristic of the nonlinear superposition operator F(x)(s) = f(s, x(s)) between two Banach spaces of functions x from N to R. It was previously shown that the L-characteristic of the nonlinear superposition operator acting between two Lebesgue spaces has the so-called Σ-convexity property. In this paper we show that the L-characteristic of the operator F (between two Banach spaces) has the convexity property. This means that the classical Riesz-Thorin interpolation theorem for linear operators holds for the nonlinear superposition operator acting between two Banach spaces of sequences. Moreover, we consider the growth function of the superposition operator in the mentioned spaces and show that it has the logarithmic convexity property. (author). 7 refs

  6. Entanglement and quantum superposition induced by a single photon

    Science.gov (United States)

    Lü, Xin-You; Zhu, Gui-Lei; Zheng, Li-Li; Wu, Ying

    2018-03-01

    We predict the occurrence of single-photon-induced entanglement and quantum superposition in a hybrid quantum model, introducing an optomechanical coupling into the Rabi model. Originally, it comes from the photon-dependent quantum property of the ground state featured by the proposed hybrid model. It is associated with a single-photon-induced quantum phase transition, and is immune to the A2 term of the spin-field interaction. Moreover, the obtained quantum superposition state is actually a squeezed cat state, which can significantly enhance precision in quantum metrology. This work offers an approach to manipulate entanglement and quantum superposition with a single photon, which might have potential applications in the engineering of new single-photon quantum devices, and also fundamentally broaden the regime of cavity QED.

  7. Generation of optical coherent state superpositions for quantum information processing

    DEFF Research Database (Denmark)

    Tipsmark, Anders

    2012-01-01

    In this project, entitled “Generation of optical coherent state superpositions for quantum information processing”, the goal has been to generate optical cat states: quantum-mechanical superposition states of two coherent states of large amplitude. Such a state is...

  8. Experimental Demonstration of Capacity-Achieving Phase-Shifted Superposition Modulation

    DEFF Research Database (Denmark)

    Estaran Tolosa, Jose Manuel; Zibar, Darko; Caballero Jambrina, Antonio

    2013-01-01

    We report on the first experimental demonstration of phase-shifted superposition modulation (PSM) for optical links. Successful demodulation and decoding is obtained after 240 km transmission for 16-, 32- and 64-PSM.

  9. Measuring the band structures of periodic beams using the wave superposition method

    Science.gov (United States)

    Junyi, L.; Ruffini, V.; Balint, D.

    2016-11-01

    Phononic crystals and elastic metamaterials are artificially engineered periodic structures that have several interesting properties, such as negative effective stiffness in certain frequency ranges. An interesting property of phononic crystals and elastic metamaterials is the presence of band gaps, which are bands of frequencies where elastic waves cannot propagate. The presence of band gaps gives this class of materials the potential to be used as vibration isolators. In many studies, the band structures were used to evaluate the band gaps. The presence of band gaps in a finite structure is commonly validated by measuring the frequency response, as there are no direct methods of measuring the band structures. In this study, an experiment was conducted to determine the band structure of one-dimensional phononic crystals with two wave modes, such as a bi-material beam, using the frequency response at only 6 points, to validate the wave superposition method (WSM) introduced in a previous study. A bi-material beam and an aluminium beam with varying geometry were studied. The experiment was performed by hanging the beams freely, exciting one end of the beams, and measuring the acceleration at consecutive unit cells. The measured transfer functions of the beams agree with the analytical solutions, with minor discrepancies. The band structure was then determined using WSM, and the band structure of one set of the waves was found to agree well with the analytical solutions. The measurements taken for the other set of waves, which are the evanescent waves in the bi-material beams, were inaccurate and noisy. The transfer functions at additional points of one of the beams were calculated from the measured band structure using WSM. The calculated transfer functions agree with the measured results except at the frequencies where the band structure was inaccurate. Lastly, a study of the potential sources of errors was also conducted using finite element modelling and the errors in

  10. Effects of Heat-Treated Wood Particles on the Physico-Mechanical Properties and Extended Creep Behavior of Wood/Recycled-HDPE Composites Using the Time–Temperature Superposition Principle

    Directory of Open Access Journals (Sweden)

    Teng-Chun Yang

    2017-03-01

    Full Text Available This study investigated the effectiveness of heat-treated wood particles for improving the physico-mechanical properties and creep performance of wood/recycled-HDPE composites. The results reveal that the composites with heat-treated wood particles had significantly decreased moisture content, water absorption, and thickness swelling, while no improvements of the flexural properties or the wood screw holding strength were observed, except for the internal bond strength. Additionally, creep tests were conducted at a series of elevated temperatures using the time–temperature superposition principle (TTSP, and the TTSP-predicted creep compliance curves fit well with the experimental data. The creep resistance values of composites with heat-treated wood particles were greater than those having untreated wood particles due to the hydrophobic character of the treated wood particles and improved interfacial compatibility between the wood particles and polymer matrix. At a reference temperature of 20 °C, the improvement of creep resistance (ICR of composites with heat-treated wood particles reached approximately 30% over a 30-year period, and it increased significantly with increasing reference temperature.

  11. Effects of Heat-Treated Wood Particles on the Physico-Mechanical Properties and Extended Creep Behavior of Wood/Recycled-HDPE Composites Using the Time–Temperature Superposition Principle

    Science.gov (United States)

    Yang, Teng-Chun; Chien, Yi-Chi; Wu, Tung-Lin; Hung, Ke-Chang; Wu, Jyh-Horng

    2017-01-01

    This study investigated the effectiveness of heat-treated wood particles for improving the physico-mechanical properties and creep performance of wood/recycled-HDPE composites. The results reveal that the composites with heat-treated wood particles had significantly decreased moisture content, water absorption, and thickness swelling, while no improvements of the flexural properties or the wood screw holding strength were observed, except for the internal bond strength. Additionally, creep tests were conducted at a series of elevated temperatures using the time–temperature superposition principle (TTSP), and the TTSP-predicted creep compliance curves fit well with the experimental data. The creep resistance values of composites with heat-treated wood particles were greater than those having untreated wood particles due to the hydrophobic character of the treated wood particles and improved interfacial compatibility between the wood particles and polymer matrix. At a reference temperature of 20 °C, the improvement of creep resistance (ICR) of composites with heat-treated wood particles reached approximately 30% over a 30-year period, and it increased significantly with increasing reference temperature. PMID:28772726
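
The TTSP construction itself is easy to demonstrate on synthetic data: creep curves measured at elevated temperatures are shifted along the log-time axis by a factor a_T to extend the master curve at the reference temperature. The sketch below uses an assumed Arrhenius shift and a made-up power-law compliance; the paper's composites would need fitted shift factors (e.g. WLF or Arrhenius):

```python
import numpy as np

R, Ea, Tref = 8.314, 60e3, 293.15   # gas constant, assumed activation energy, 20 degC

def aT(T):
    # Arrhenius shift factor: log10 a_T = (Ea / 2.303 R) (1/T - 1/Tref)
    return 10.0 ** ((Ea / (2.303 * R)) * (1.0 / T - 1.0 / Tref))

def master(t):
    # Made-up power-law creep compliance at Tref (arbitrary units)
    return 0.5 + 0.1 * t ** 0.25

t = np.logspace(0, 3, 40)            # 1 s .. 1000 s short-term test window
temps = [303.15, 318.15, 333.15]
measured = {T: master(t / aT(T)) for T in temps}   # synthetic short-term data

# Plotting J_T(t) against reduced time t/a_T collapses every curve onto the
# master curve (exactly here, since the data are synthetic by construction)
for T in temps:
    assert np.allclose(measured[T], master(t / aT(T)))

extension = 1.0 / aT(max(temps))
print(f"highest temperature extends the time window by ~{extension:.0f}x")
```

This is how a short elevated-temperature test predicts decades of creep at the reference temperature; the paper's 30-year compliance predictions follow the same shifting logic with experimentally fitted shift factors.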

  12. The study on the Sensorless PMSM Control using the Superposition Theory

    Energy Technology Data Exchange (ETDEWEB)

    Hong, Joung Pyo [Changwon National University, Changwon (Korea); Kwon, Soon Jae [Pukung National University, Seoul (Korea); Kim, Gyu Seob; Sohn, Mu Heon; Kim, Jong Dal [Dongmyung College, Pusan (Korea)

    2002-07-01

    This study presents a solution for controlling a permanent magnet synchronous motor without sensors. The control method is based on the superposition principle. This sensorless method makes it very simple to compute the estimated angle, so the computing time to estimate the angle is shorter than for other sensorless methods. The use of this system yields enhanced operation, fewer system components, lower system cost, energy-efficient control system design, and increased efficiency. A practical solution is described and results are given in this study. The performance of a sensorless architecture allows an intelligent approach to reducing the overall system cost of digital motion control applications, using cheaper electrical motors without sensors. This paper gives an overview of sensorless solutions in PMSM control applications, with the focus on the new sensorless controller and its applications. (author). 6 refs., 16 figs., 1 tab.

  13. Babinet's principle in double-refraction systems

    Science.gov (United States)

    Ropars, Guy; Le Floch, Albert

    2014-06-01

    Babinet's principle applied to systems with double refraction is shown to involve spatial interchanges between the ordinary and extraordinary patterns observed through two complementary screens. As in the case of metamaterials, the extraordinary beam does not follow the Snell-Descartes refraction law, and the superposition principle has to be applied simultaneously at two points. Surprisingly, and contrary to intuition, in the presence of the screen with an opaque region we observe that the emerging extraordinary photon pattern, which has undergone a deviation, remains fixed when a natural birefringent crystal is rotated, while the ordinary one rotates with the crystal. The twofold application of Babinet's principle implies not only intensity and polarization interchanges but also spatial and dynamic interchanges, which should occur in birefringent metamaterials.

  14. Quantum superposition of massive objects and collapse models

    International Nuclear Information System (INIS)

    Romero-Isart, Oriol

    2011-01-01

    We analyze the requirements to test some of the most paradigmatic collapse models with a protocol that prepares quantum superpositions of massive objects. This consists of coherently expanding the wave function of a ground-state-cooled mechanical resonator, performing a squared position measurement that acts as a double slit, and observing interference after further evolution. The analysis is performed in a general framework and takes into account only unavoidable sources of decoherence: blackbody radiation and scattering of environmental particles. We also discuss the limitations imposed by the experimental implementation of this protocol using cavity quantum optomechanics with levitating dielectric nanospheres.

  15. Quantum superposition of massive objects and collapse models

    Energy Technology Data Exchange (ETDEWEB)

    Romero-Isart, Oriol [Max-Planck-Institut fuer Quantenoptik, Hans-Kopfermann-Str. 1, D-85748 Garching (Germany)

    2011-11-15

    We analyze the requirements to test some of the most paradigmatic collapse models with a protocol that prepares quantum superpositions of massive objects. This consists of coherently expanding the wave function of a ground-state-cooled mechanical resonator, performing a squared position measurement that acts as a double slit, and observing interference after further evolution. The analysis is performed in a general framework and takes into account only unavoidable sources of decoherence: blackbody radiation and scattering of environmental particles. We also discuss the limitations imposed by the experimental implementation of this protocol using cavity quantum optomechanics with levitating dielectric nanospheres.

  16. Principle of coincidence method and application in activity measurement

    International Nuclear Information System (INIS)

    Li Mou; Dai Yihua; Ni Jianzhong

    2008-01-01

    The basic principle of the coincidence method is discussed. The principle is generalized by analysing a practical example, and the theoretical conditions for the coincidence method are put forward. The cause of the variation of the efficiency curve and the effect of dead time in activity measurement are explained using the above principle and conditions. This principle of the coincidence method provides the theoretical foundation for activity measurement. (authors)
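
The core arithmetic of coincidence counting can be stated in three lines and checked with a Monte Carlo toy (activity and detection efficiencies below are assumed, illustrative values): for independent beta and gamma detection efficiencies, N_beta = A·eps_b, N_gamma = A·eps_g and N_c = A·eps_b·eps_g, so A = N_beta·N_gamma/N_c regardless of the (unknown) efficiencies.

```python
import numpy as np

rng = np.random.default_rng(1)

A_true = 5.0e4          # Bq (assumed)
eps_b, eps_g = 0.62, 0.18
T = 100.0               # live time, s

decays = int(A_true * T)
hit_b = rng.random(decays) < eps_b           # beta channel fires
hit_g = rng.random(decays) < eps_g           # gamma channel fires

N_b, N_g, N_c = hit_b.sum(), hit_g.sum(), (hit_b & hit_g).sum()
A_est = (N_b * N_g) / (N_c * T)              # efficiencies cancel out
print(A_est)  # close to the true 5.0e4 Bq
```

This toy ignores dead time and accidental coincidences, which are exactly the corrections the abstract's efficiency-curve and dead-time discussion addresses.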

  17. Risk measurement with equivalent utility principles

    NARCIS (Netherlands)

    Denuit, M.; Dhaene, J.; Goovaerts, M.; Kaas, R.; Laeven, R.

    2006-01-01

    Risk measures have been studied for several decades in the actuarial literature, where they appeared under the guise of premium calculation principles. Risk measures and properties that risk measures should satisfy have recently received considerable attention in the financial mathematics

  18. Superposition of helical beams by using a Michelson interferometer.

    Science.gov (United States)

    Gao, Chunqing; Qi, Xiaoqing; Liu, Yidong; Weber, Horst

    2010-01-04

    The orbital angular momentum (OAM) of a helical beam is of great interest for high-density optical communication due to its infinite number of eigenstates. In this paper, an experimental setup is realized for information encoding and decoding on the OAM eigenstates. A hologram designed by the iterative method is used to generate the helical beams, and a Michelson interferometer with two Porro prisms is used for the superposition of two helical beams. The experimental results of the collinear superposition of helical beams and the detection of their OAM eigenstates are presented.
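
The collinear superposition of opposite-helicity beams produces the well-known petal pattern, which is what makes the OAM eigenstates detectable. A minimal sketch (scalar LG_0l amplitudes with constants dropped, assumed l = 3):

```python
import numpy as np

# Superposing LG_{+l} and LG_{-l}: the azimuthal phases exp(+/- i l phi)
# interfere to cos^2(l phi), giving 2l intensity petals around the ring.
l, w = 3, 1.0
phi = np.linspace(0, 2 * np.pi, 720, endpoint=False)
r0 = w * np.sqrt(l / 2)            # radius of peak intensity for LG_0l

def lg(r, phi_, l_):
    # Simplified LG_0l amplitude (normalization constants dropped)
    return (np.sqrt(2) * r / w) ** abs(l_) * np.exp(-r**2 / w**2) * np.exp(1j * l_ * phi_)

I = np.abs(lg(r0, phi, l) + lg(r0, phi, -l)) ** 2

# Count the petals: strict local maxima of the intensity around the ring
peaks = ((I > np.roll(I, 1)) & (I > np.roll(I, -1))).sum()
print(peaks)  # 2*l = 6 petals
```

Counting petals in the interference pattern is a standard way to read off |l| after a superposition stage like the Michelson interferometer described above.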

  19. Practical purification scheme for decohered coherent-state superpositions via partial homodyne detection

    International Nuclear Information System (INIS)

    Suzuki, Shigenari; Takeoka, Masahiro; Sasaki, Masahide; Andersen, Ulrik L.; Kannari, Fumihiko

    2006-01-01

    We present a simple protocol to purify a coherent-state superposition that has undergone a linear lossy channel. The scheme requires only a single beam splitter and a homodyne detector, and thus is experimentally feasible. In practice, a superposition of coherent states is transformed into a classical mixture of coherent states by linear loss, which is usually the dominant decoherence mechanism in optical systems. We also address the possibility of producing a larger-amplitude superposition state from decohered states, and show that in most cases the decoherence of the states is amplified along with the amplitude.

  20. Decoherence of superposition states in trapped ions

    CSIR Research Space (South Africa)

    Uys, H

    2010-09-01

    Full Text Available This paper investigates the decoherence of superpositions of hyperfine states of 9Be+ ions due to spontaneous scattering of off-resonant light. It was found that, contrary to conventional wisdom, elastic Rayleigh scattering can have major...

  1. Towards quantum superposition of a levitated nanodiamond with a NV center

    Science.gov (United States)

    Li, Tongcang

    2015-05-01

    Creating large Schrödinger's cat states with massive objects is one of the most challenging goals in quantum mechanics. We have previously achieved an important step of this goal by cooling the center-of-mass motion of a levitated microsphere from room temperature to millikelvin temperatures with feedback cooling. To generate spatial quantum superposition states with an optical cavity, however, requires a very strong quadratic coupling that is difficult to achieve. We proposed to optically trap a nanodiamond with a nitrogen-vacancy (NV) center in vacuum, and generate large spatial superposition states using the NV spin-optomechanical coupling in a strong magnetic gradient field. The large spatial superposition states can be used to study objective collapse theories of quantum mechanics. We have optically trapped nanodiamonds in air and are working towards this goal.

  2. Measurement of the quantum superposition state of an imaging ensemble of photons prepared in orbital angular momentum states using a phase-diversity method

    International Nuclear Information System (INIS)

    Uribe-Patarroyo, Nestor; Alvarez-Herrero, Alberto; Belenguer, Tomas

    2010-01-01

    We propose the use of a phase-diversity technique to estimate the orbital angular momentum (OAM) superposition state of an ensemble of photons that passes through an optical system, proceeding from an extended object. The phase-diversity technique permits the estimation of the optical transfer function (OTF) of an imaging optical system. As the OTF is derived directly from the wave-front characteristics of the observed light, we redefine the phase-diversity technique in terms of a superposition of OAM states. We test this new technique experimentally and find consistent results across different tests, which gives us confidence in the estimation of the photon ensemble state. We find that this technique not only allows us to estimate the square of the amplitude of each OAM state, but also the relative phases among all states, thus providing complete information about the quantum state of the photons. This technique could be used to measure the OAM spectrum of extended objects in astronomy or in an optical communication scheme using OAM states. In this sense, the use of extended images could lead to new techniques in which the communication is further multiplexed along the field.

  3. Improving the Yule-Nielsen modified Neugebauer model by dot surface coverages depending on the ink superposition conditions

    Science.gov (United States)

    Hersch, Roger David; Crete, Frederique

    2005-01-01

    Dot gain is different when dots are printed alone, printed in superposition with one ink or printed in superposition with two inks. In addition, the dot gain may also differ depending on which solid ink the considered halftone layer is superposed. In a previous research project, we developed a model for computing the effective surface coverage of a dot according to its superposition conditions. In the present contribution, we improve the Yule-Nielsen modified Neugebauer model by integrating into it our effective dot surface coverage computation model. Calibration of the reproduction curves mapping nominal to effective surface coverages in every superposition condition is carried out by fitting effective dot surfaces which minimize the sum of square differences between the measured reflection density spectra and reflection density spectra predicted according to the Yule-Nielsen modified Neugebauer model. In order to predict the reflection spectrum of a patch, its known nominal surface coverage values are converted into effective coverage values by weighting the contributions from different reproduction curves according to the weights of the contributing superposition conditions. We analyze the colorimetric prediction improvement brought by our extended dot surface coverage model for clustered-dot offset prints, thermal transfer prints and ink-jet prints. The color differences induced by the differences between measured reflection spectra and reflection spectra predicted according to the new dot surface estimation model are quantified on 729 different cyan, magenta, yellow patches covering the full color gamut. As a reference, these differences are also computed for the classical Yule-Nielsen modified spectral Neugebauer model incorporating a single halftone reproduction curve for each ink. Taking into account dot surface coverages according to different superposition conditions considerably improves the predictions of the Yule-Nielsen modified Neugebauer model. 
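
The Yule-Nielsen modified spectral Neugebauer prediction underlying this work can be sketched compactly. The code below uses random synthetic primary spectra and single-valued nominal coverages, i.e. it omits the paper's superposition-condition-dependent reproduction curves, which would replace the nominal c, m, y with effective coverages before the weighting step:

```python
import numpy as np

def demichel_weights(c, m, y):
    # Fractional areas of the 8 Neugebauer primaries (independent ink layers)
    return np.array([
        (1-c) * (1-m) * (1-y),   # white (paper)
        c     * (1-m) * (1-y),   # cyan
        (1-c) * m     * (1-y),   # magenta
        (1-c) * (1-m) * y,       # yellow
        c     * m     * (1-y),   # blue  (c+m)
        c     * (1-m) * y,       # green (c+y)
        (1-c) * m     * y,       # red   (m+y)
        c     * m     * y,       # black (c+m+y)
    ])

def ynsn(c, m, y, primaries, n=2.0):
    # Yule-Nielsen modified spectral Neugebauer: R = (sum_i w_i R_i^(1/n))^n
    w = demichel_weights(c, m, y)
    return (w @ primaries ** (1.0 / n)) ** n

# Synthetic primary reflectance spectra (illustrative, 31 bands 400-700 nm)
rng = np.random.default_rng(2)
primaries = rng.uniform(0.05, 0.95, (8, 31))

R = ynsn(0.3, 0.5, 0.1, primaries)
print(R[:3])  # predicted reflectance in the first three wavelength bands
```

The prediction is a weighted power mean of the primary reflectances, so it always lies between the smallest and largest primary reflectance at each wavelength; the paper's improvement consists of choosing the surface coverages fed to `demichel_weights` per superposition condition rather than using one reproduction curve per ink.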

  4. Partial Measurements and the Realization of Quantum-Mechanical Counterfactuals

    Science.gov (United States)

    Paraoanu, G. S.

    2011-07-01

    We propose partial measurements as a conceptual tool to understand how to operate with counterfactual claims in quantum physics. Indeed, unlike standard von Neumann measurements, partial measurements can be reversed probabilistically. We first analyze the consequences of this rather unusual feature for the principle of superposition, for the complementarity principle, and for the issue of hidden variables. Then we move on to exploring non-local contexts, by reformulating the EPR paradox, the quantum teleportation experiment, and the entanglement-swapping protocol for the situation in which one uses partial measurements followed by their stochastic reversal. This leads to a number of counter-intuitive results, which are shown to be resolved if we give up the idea of attributing reality to the wavefunction of a single quantum system.

  5. Universal uncertainty principle in the measurement operator formalism

    International Nuclear Information System (INIS)

    Ozawa, Masanao

    2005-01-01

    Heisenberg's uncertainty principle has been understood to set a limitation on measurements; however, the long-standing mathematical formulation established by Heisenberg, Kennard, and Robertson does not allow such an interpretation. Recently, a new relation was found to give a universally valid relation between noise and disturbance in general quantum measurements, and it has become clear that the new relation plays a role of the first principle to derive various quantum limits on measurement and information processing in a unified treatment. This paper examines the above development on the noise-disturbance uncertainty principle in the model-independent approach based on the measurement operator formalism, which is widely accepted to describe a class of generalized measurements in the field of quantum information. We obtain explicit formulae for the noise and disturbance of measurements given by measurement operators, and show that projective measurements do not satisfy the Heisenberg-type noise-disturbance relation that is typical in the gamma-ray microscope thought experiments. We also show that the disturbance on a Pauli operator of a projective measurement of another Pauli operator constantly equals √2, and examine how this measurement violates the Heisenberg-type relation but satisfies the new noise-disturbance relation
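
The abstract's statement that a projective measurement of one Pauli operator disturbs another by exactly √2 can be verified directly, using one common form of the root-mean-square disturbance for a projective measurement (a sketch of the formalism, not the paper's full derivation):

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
P0 = np.diag([1.0, 0.0]).astype(complex)   # projectors of a sharp Z measurement
P1 = np.diag([0.0, 1.0]).astype(complex)

def disturbance(B, projectors, psi):
    # RMS disturbance of observable B caused by a projective measurement
    # {P_k}: eta(B)^2 = sum_k || [B, P_k] |psi> ||^2
    return np.sqrt(sum(np.linalg.norm((B @ P - P @ B) @ psi) ** 2
                       for P in projectors))

rng = np.random.default_rng(3)
v = rng.normal(size=2) + 1j * rng.normal(size=2)
psi = v / np.linalg.norm(v)   # a random pure state

print(disturbance(X, (P0, P1), psi))  # sqrt(2) ≈ 1.414, for every input state
```

Working out the commutators by hand gives ||[X, P0]ψ||² + ||[X, P1]ψ||² = 2 for any normalized ψ, which is the state-independent √2 quoted above.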

  6. Improved superposition schemes for approximate multi-caloron configurations

    International Nuclear Information System (INIS)

    Gerhold, P.; Ilgenfritz, E.-M.; Mueller-Preussker, M.

    2007-01-01

    Two improved superposition schemes for the construction of approximate multi-caloron-anti-caloron configurations, using exact single (anti-)caloron gauge fields as underlying building blocks, are introduced in this paper. The first improvement deals with possible monopole-Dirac string interactions between different calorons with non-trivial holonomy. The second one, based on the ADHM formalism, improves the (anti-)selfduality in the case of small caloron separations. It conforms with Shuryak's well-known ratio-ansatz when applied to instantons. Both superposition techniques provide a higher degree of (anti-)selfduality than the widely used sum-ansatz, which simply adds the (anti)caloron vector potentials in an appropriate gauge. Furthermore, the improved configurations (when discretized onto a lattice) are characterized by a higher stability when they are exposed to lattice cooling techniques

  7. Decision principles derived from risk measures

    NARCIS (Netherlands)

    Goovaerts, M.J.; Kaas, R.; Laeven, R.J.A.

    2010-01-01

    In this paper, we argue that a distinction exists between risk measures and decision principles. Though both are functionals assigning a real number to a random variable, we think there is a hierarchy between the two concepts. Risk measures operate on the first "level", quantifying the risk in the

  8. Use of the modal superposition technique for piping system blowdown analyses

    International Nuclear Information System (INIS)

    Ware, A.G.; Macek, R.W.

    1983-01-01

    A standard method of solving for the seismic response of piping systems is the modal superposition technique. Only a limited number of structural modes are considered (typically those up to 33 Hz in the U.S.), since the effect of higher modes on the calculated response is generally small, and the method can yield considerable computer cost savings over the direct integration method. The modal superposition technique has also been applied to piping response problems in which the forcing functions are due to fluid excitation. Application of the technique to this case is somewhat more difficult, because a well-defined cutoff frequency for determining the structural modes to be included has not been established. This paper outlines a method for higher-mode corrections and suggests methods to determine suitable cutoff frequencies for piping system blowdown analyses. A numerical example illustrates how uncorrected modal superposition results can produce erroneous stress results.
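    As a minimal illustration of the truncation issue discussed above (not the paper's correction method), the following sketch computes the static response of a hypothetical 5-DOF spring-mass chain by modal superposition and shows that the truncation error vanishes only when all modes are retained:

```python
import numpy as np

n = 5
# Fixed-fixed chain with unit springs and unit masses (illustrative values).
K = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)  # stiffness matrix
M = np.eye(n)                                            # mass matrix

# Mass-normalized modes: K phi = omega^2 M phi (M = I here, so eigh suffices).
omega_sq, Phi = np.linalg.eigh(K)

f = np.zeros(n)
f[-1] = 1.0  # load applied at the last degree of freedom

def modal_response(n_modes):
    """Static response summed over the lowest n_modes modes:
    u = sum_i (phi_i . f / omega_i^2) phi_i."""
    u = np.zeros(n)
    for i in range(n_modes):
        u += (Phi[:, i] @ f) / omega_sq[i] * Phi[:, i]
    return u

u_exact = np.linalg.solve(K, f)
for m in (2, 5):
    err = np.linalg.norm(modal_response(m) - u_exact) / np.linalg.norm(u_exact)
    print(m, err)  # error is nonzero for 2 modes, vanishes with all 5
```

    A higher-mode (static) correction term would recover most of the residual without integrating the high-frequency modes dynamically, which is the spirit of the method the paper outlines.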

  9. GPU-Based Point Cloud Superpositioning for Structural Comparisons of Protein Binding Sites.

    Science.gov (United States)

    Leinweber, Matthias; Fober, Thomas; Freisleben, Bernd

    2018-01-01

    In this paper, we present a novel approach to solve the labeled point cloud superpositioning problem for performing structural comparisons of protein binding sites. The solution is based on a parallel evolution strategy that operates on large populations and runs on GPU hardware. The proposed evolution strategy reduces the likelihood of getting stuck in a local optimum of the multimodal real-valued optimization problem represented by labeled point cloud superpositioning. The performance of the GPU-based parallel evolution strategy is compared to a previously proposed CPU-based sequential approach for labeled point cloud superpositioning, indicating that the GPU-based parallel evolution strategy leads to qualitatively better results and significantly shorter runtimes, with speed improvements of up to a factor of 1,500 for large populations. Binary classification tests based on the ATP, NADH, and FAD protein subsets of CavBase, a database containing putative binding sites, show average classification rate improvements from about 92 percent (CPU) to 96 percent (GPU). Further experiments indicate that the proposed GPU-based labeled point cloud superpositioning approach can be superior to traditional protein comparison approaches based on sequence alignments.

  10. About simple nonlinear and linear superpositions of special exact solutions of Veselov-Novikov equation

    International Nuclear Information System (INIS)

    Dubrovsky, V. G.; Topovsky, A. V.

    2013-01-01

    New exact solutions, nonstationary and stationary, of the Veselov-Novikov (VN) equation in the forms of simple nonlinear and linear superpositions of an arbitrary number N of exact special solutions u^(n), n = 1, …, N are constructed via the Zakharov-Manakov ∂-dressing method. Simple nonlinear superpositions are represented up to a constant by the sums of solutions u^(n) and calculated by ∂-dressing on a nonzero energy level of the first auxiliary linear problem, i.e., the 2D stationary Schrödinger equation. It is remarkable that in the zero-energy limit simple nonlinear superpositions convert to linear ones in the form of the sums of special solutions u^(n). It is shown that the sums u = u^(k_1) + … + u^(k_m), 1 ≤ k_1 < k_2 < … < k_m ≤ N, of arbitrary subsets of these solutions are also exact solutions of the VN equation. The presented exact solutions include superpositions of special line solitons as well as superpositions of plane-wave-type singular periodic solutions. By construction these exact solutions also represent new exact transparent potentials of the 2D stationary Schrödinger equation and can serve as model potentials for electrons in planar structures of modern electronics.

  11. Generation of picosecond pulsed coherent state superpositions

    DEFF Research Database (Denmark)

    Dong, Ruifang; Tipsmark, Anders; Laghaout, Amine

    2014-01-01

    We present the generation of approximated coherent state superpositions, referred to as Schrödinger cat states, by the process of subtracting single photons from picosecond pulsed squeezed states of light. The squeezed vacuum states are produced by spontaneous parametric down-conversion (SPDC) ... which exhibit non-Gaussian behavior. (C) 2014 Optical Society of America

  12. Synthetic Elucidation of Design Principles for Molecular Qubits

    Science.gov (United States)

    Graham, Michael James

    Quantum information processing (QIP) is an emerging computational paradigm with the potential to enable a vast increase in computational power, fundamentally transforming fields from structural biology to finance. QIP employs qubits, or quantum bits, as its fundamental units of information, which can exist not just in the classical states of 0 or 1, but in a superposition of the two. In order to successfully perform QIP, this superposition state must be sufficiently long-lived. One promising paradigm for the implementation of QIP involves employing unpaired electrons in coordination complexes as qubits. This architecture is highly tunable and scalable; however, coordination complexes frequently suffer from short superposition lifetimes, or T2. In order to capitalize on the promise of molecular qubits, it is necessary to develop a set of design principles that allow the rational synthesis of complexes with sufficiently long values of T2. In this dissertation, I report efforts to use the synthesis of series of complexes to elucidate design principles for molecular qubits. Chapter 1 details previous work by our group and others in the field. Chapter 2 details the first efforts of our group to determine the impact of varying spin and spin-orbit coupling on T2. Chapter 3 examines the effect of removing nuclear spins on coherence time, and reports a series of vanadyl bis(dithiolene) complexes which exhibit extremely long coherence lifetimes, in excess of the 100 μs threshold for qubit viability. Chapters 4 and 5 form two complementary halves of a study to determine the exact relationship between electronic spin-nuclear spin distance and the effect of the nuclear spins on T2. Finally, chapter 6 suggests next directions for the field as a whole, including the potential for work in this field to impact the development of other technologies as diverse as quantum sensors and magnetic resonance imaging contrast agents.

  13. Empirical Evaluation of Superposition Coded Multicasting for Scalable Video

    KAUST Repository

    Chun Pong Lau; Shihada, Basem; Pin-Han Ho

    2013-01-01

    In this paper we investigate cross-layer superposition coded multicast (SCM). Previous studies have proven its effectiveness in exploiting better channel capacity and service granularities via both analytical and simulation approaches. However

  14. Intercomparison of the GOS approach, superposition T-matrix method, and laboratory measurements for black carbon optical properties during aging

    International Nuclear Information System (INIS)

    He, Cenlin; Takano, Yoshi; Liou, Kuo-Nan; Yang, Ping; Li, Qinbin; Mackowski, Daniel W.

    2016-01-01

    We perform a comprehensive intercomparison of the geometric-optics surface-wave (GOS) approach, the superposition T-matrix method, and laboratory measurements for optical properties of fresh and coated/aged black carbon (BC) particles with complex structures. GOS and T-matrix calculations capture the measured optical (i.e., extinction, absorption, and scattering) cross sections of fresh BC aggregates, with 5–20% differences depending on particle size. We find that the T-matrix results tend to be lower than the measurements, due to uncertainty in theoretical approximations of realistic BC structures, particle property measurements, and numerical computations in the method. On the contrary, the GOS results are higher than the measurements (hence the T-matrix results) for BC radii 100 nm. We find good agreement (differences 100 nm. We find small deviations (≤10%) in asymmetry factors computed from the two methods for most BC coating structures and sizes, but several complex structures have 10–30% differences. This study provides the foundation for downstream application of the GOS approach in radiative transfer and climate studies. - Highlights: • The GOS and T-matrix methods capture laboratory measurements of BC optical properties. • The GOS results are consistent with the T-matrix results for BC optical properties. • BC optical properties vary remarkably with coating structures and sizes during aging.

  15. Coherent inflation for large quantum superpositions of levitated microspheres

    Science.gov (United States)

    Romero-Isart, Oriol

    2017-12-01

    We show that coherent inflation (CI), namely quantum dynamics generated by inverted conservative potentials acting on the center of mass of a massive object, is an enabling tool to prepare large spatial quantum superpositions in a double-slit experiment. Combined with cryogenic, extreme high vacuum, and low-vibration environments, we argue that it is experimentally feasible to exploit CI to prepare the center of mass of a micrometer-sized object in a spatial quantum superposition comparable to its size. In such a hitherto unexplored parameter regime gravitationally-induced decoherence could be unambiguously falsified. We present a protocol to implement CI in a double-slit experiment by letting a levitated microsphere traverse a static potential landscape. Such a protocol could be experimentally implemented with an all-magnetic scheme using superconducting microspheres.

  16. Sagnac interferometry with coherent vortex superposition states in exciton-polariton condensates

    Science.gov (United States)

    Moxley, Frederick Ira; Dowling, Jonathan P.; Dai, Weizhong; Byrnes, Tim

    2016-05-01

    We investigate prospects of using counter-rotating vortex superposition states in nonequilibrium exciton-polariton Bose-Einstein condensates for the purposes of Sagnac interferometry. We first investigate the stability of vortex-antivortex superposition states, and show that they survive at steady state in a variety of configurations. Counter-rotating vortex superpositions are of potential interest to gyroscope and seismometer applications for detecting rotations. Methods of improving the sensitivity are investigated by targeting high momentum states via metastable condensation, and the application of periodic lattices. The sensitivity of the polariton gyroscope is compared to its optical and atomic counterparts. Due to the large interferometer areas in optical systems and small de Broglie wavelengths for atomic BECs, the sensitivity per detected photon is found to be considerably less for the polariton gyroscope than with competing methods. However, polariton gyroscopes have an advantage over atomic BECs in a high signal-to-noise ratio, and have other practical advantages such as room-temperature operation, area independence, and robust design. We estimate that the final sensitivities including signal-to-noise aspects are competitive with existing methods.

  17. About simple nonlinear and linear superpositions of special exact solutions of Veselov-Novikov equation

    Energy Technology Data Exchange (ETDEWEB)

    Dubrovsky, V. G.; Topovsky, A. V. [Novosibirsk State Technical University, Karl Marx prosp. 20, Novosibirsk 630092 (Russian Federation)

    2013-03-15

    New exact solutions, nonstationary and stationary, of the Veselov-Novikov (VN) equation in the forms of simple nonlinear and linear superpositions of an arbitrary number N of exact special solutions u^(n), n = 1, …, N are constructed via the Zakharov-Manakov ∂-dressing method. Simple nonlinear superpositions are represented up to a constant by the sums of solutions u^(n) and calculated by ∂-dressing on a nonzero energy level of the first auxiliary linear problem, i.e., the 2D stationary Schrödinger equation. It is remarkable that in the zero-energy limit simple nonlinear superpositions convert to linear ones in the form of the sums of special solutions u^(n). It is shown that the sums u = u^(k_1) + … + u^(k_m), 1 ≤ k_1 < k_2 < … < k_m ≤ N, of arbitrary subsets of these solutions are also exact solutions of the VN equation. The presented exact solutions include superpositions of special line solitons as well as superpositions of plane-wave-type singular periodic solutions. By construction these exact solutions also represent new exact transparent potentials of the 2D stationary Schrödinger equation and can serve as model potentials for electrons in planar structures of modern electronics.

  18. Logarithmic superposition of force response with rapid length changes in relaxed porcine airway smooth muscle.

    Science.gov (United States)

    Ijpma, G; Al-Jumaily, A M; Cairns, S P; Sieck, G C

    2010-12-01

    We present a systematic quantitative analysis of power-law force relaxation and investigate logarithmic superposition of force response in relaxed porcine airway smooth muscle (ASM) strips in vitro. The term logarithmic superposition describes linear superposition on a logarithmic scale, which is equivalent to multiplication on a linear scale. Additionally, we examine whether the dynamic response of contracted and relaxed muscles is dominated by cross-bridge cycling or passive dynamics. The study shows the following main findings. For relaxed ASM, the force response to length steps of varying amplitude (0.25-4% of reference length, both lengthening and shortening) is well fitted with power-law functions over several decades of time (10⁻² to 10³ s), and the force response after consecutive length changes is more accurately fitted assuming logarithmic superposition rather than linear superposition. Furthermore, for sinusoidal length oscillations in contracted and relaxed muscles, increasing the oscillation amplitude induces greater hysteresivity and asymmetry of force-length relationships, whereas increasing the frequency dampens hysteresivity but increases asymmetry. We conclude that logarithmic superposition is an important feature of relaxed ASM, which may facilitate a more accurate prediction of force responses in the continuous dynamic environment of the respiratory system. In addition, the single power-function response to length changes shows that the dynamics of cross-bridge cycling can be ignored in relaxed muscle. The similarity in response between relaxed and contracted states implies that the investigated passive dynamics play an important role in both states and should be taken into account.
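    A minimal numerical sketch of the distinction drawn above, using an illustrative power-law exponent and step times rather than fitted values from the study: after two identical length steps, linear superposition adds the normalized relaxations while logarithmic superposition multiplies them.

```python
import numpy as np

# Power-law relaxation following each length step, normalized to 1 at t0.
# Exponent and time scale are illustrative assumptions.
k, t0 = 0.1, 1.0
def g(t):
    return (t / t0) ** (-k)

F0 = 1.0          # force scale set by the first step at t = 0
t_step = 10.0     # time of the second, identical step
t = np.linspace(20.0, 1000.0, 5)  # evaluation times after both steps

# Linear superposition: responses add on a linear scale.
F_linear = F0 * (g(t) + g(t - t_step))

# Logarithmic superposition: responses add on a log scale,
# i.e. the normalized relaxations multiply.
F_log = F0 * g(t) * g(t - t_step)

print(F_linear)
print(F_log)
```

    Because each normalized relaxation is below 1 after the reference time, the multiplicative (logarithmic) prediction always lies below the additive one here, which is the kind of difference the fits in the paper discriminate between.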

  19. Generating superpositions of higher order bessel beams [Conference paper

    CSIR Research Space (South Africa)

    Vasilyeu, R

    2009-10-01

    Full Text Available An experimental setup to generate a superposition of higher-order Bessel beams by means of a spatial light modulator and ring aperture is presented. The experimentally produced fields are in good agreement with those calculated theoretically....

  20. Measuring multiple residual-stress components using the contour method and multiple cuts

    Energy Technology Data Exchange (ETDEWEB)

    Prime, Michael B [Los Alamos National Laboratory; Swenson, Hunter [Los Alamos National Laboratory; Pagliaro, Pierluigi [U. PALERMO; Zuccarello, Bernardo [U. PALERMO

    2009-01-01

    The conventional contour method determines one component of stress over the cross section of a part. The part is cut into two, the contour of the exposed surface is measured, and Bueckner's superposition principle is analytically applied to calculate stresses. In this paper, the contour method is extended to the measurement of multiple stress components by making multiple cuts with subsequent applications of superposition. The theory and limitations are described. The theory is experimentally tested on a 316L stainless steel disk with residual stresses induced by plastically indenting the central portion of the disk. The stress results are validated against independent measurements using neutron diffraction. The theory has implications beyond just multiple cuts. The contour method measurements and calculations for the first cut reveal how the residual stresses have changed throughout the part. Subsequent measurements of partially relaxed stresses by other techniques, such as laboratory x-rays, hole drilling, or neutron or synchrotron diffraction, can be superimposed back to the original state of the body.

  1. Linear Plasma Oscillation Described by Superposition of Normal Modes

    DEFF Research Database (Denmark)

    Pécseli, Hans

    1974-01-01

    The existence of steady‐state solutions to the linearized ion and electron Vlasov equation is demonstrated for longitudinal waves in an initially stable plasma. The evolution of an arbitrary initial perturbation can be described by superposition of these solutions. Some common approximations...

  2. Generating superpositions of higher–order Bessel beams [Journal article

    CSIR Research Space (South Africa)

    Vasilyeu, R

    2009-12-01

    Full Text Available The authors report the first experimental generation of the superposition of higher-order Bessel beams, by means of a spatial light modulator (SLM) and a ring slit aperture. They present illuminating a ring slit aperture with light which has...

  3. Spectral properties of superpositions of Ornstein-Uhlenbeck type processes

    DEFF Research Database (Denmark)

    Barndorff-Nielsen, Ole Eiler; Leonenko, N.N.

    2005-01-01

    Stationary processes with prescribed one-dimensional marginal laws and long-range dependence are constructed. The asymptotic properties of the spectral densities are studied. The possibility of Mittag-Leffler decay in the autocorrelation function of superpositions of Ornstein-Uhlenbeck type processes is proved.

  4. The measure and significance of Bateman's principles.

    Science.gov (United States)

    Collet, Julie M; Dean, Rebecca F; Worley, Kirsty; Richardson, David S; Pizzari, Tommaso

    2014-05-07

    Bateman's principles explain sex roles and sexual dimorphism through sex-specific variance in mating success, reproductive success and their relationships within sexes (Bateman gradients). Empirical tests of these principles, however, have come under intense scrutiny. Here, we experimentally show that in replicate groups of red junglefowl, Gallus gallus, mating and reproductive successes were more variable in males than in females, resulting in a steeper male Bateman gradient, consistent with Bateman's principles. However, we use novel quantitative techniques to reveal that current methods typically overestimate Bateman's principles because they (i) infer mating success indirectly from offspring parentage, and thus miss matings that fail to result in fertilization, and (ii) measure Bateman gradients through the univariate regression of reproductive over mating success, without considering the substantial influence of other components of male reproductive success, namely female fecundity and paternity share. We also find a significant female Bateman gradient but show that this likely emerges as a spurious consequence of male preference for fecund females, emphasizing the need for experimental approaches to establish the causal relationship between reproductive and mating success. While providing qualitative support for Bateman's principles, our study demonstrates how current approaches can generate a misleading view of sex differences and roles.

  5. Nonclassical thermal-state superpositions: Analytical evolution law and decoherence behavior

    Science.gov (United States)

    Meng, Xiang-guo; Goan, Hsi-Sheng; Wang, Ji-suo; Zhang, Ran

    2018-03-01

    Employing the integration technique within normal products of bosonic operators, we present normal product representations of thermal-state superpositions and investigate their nonclassical features, such as quadrature squeezing, sub-Poissonian distribution, and partial negativity of the Wigner function. We also analytically and numerically investigate their evolution law and decoherence characteristics in an amplitude-decay model via the variations of the probability distributions and the negative volumes of Wigner functions in phase space. The results indicate that the evolution formulas of the two thermal component states under amplitude decay can be viewed as the same integral form as a displaced thermal state ρ(V, d), but governed by the combined action of photon loss and thermal noise. In addition, larger values of the displacement d and noise V lead to faster decoherence for thermal-state superpositions.

  6. Superposition approach for description of electrical conductivity in sheared MWNT/polycarbonate melts

    Directory of Open Access Journals (Sweden)

    M. Saphiannikova

    2012-06-01

    Full Text Available The theoretical description of the electrical properties of polymer melts filled with attractively interacting conductive particles represents a great challenge. Such filler particles tend to build a network-like structure which is very fragile and can easily be broken in a shear flow with shear rates of about 1 s⁻¹. In this study, measured shear-induced changes in the electrical conductivity of polymer composites are described using a superposition approach, in which the filler particles are separated into a highly conductive percolating phase and a low-conductivity non-percolating phase. The latter is represented by separated, well-dispersed filler particles. It is assumed that these phases determine the effective electrical properties of the composite through a type of mixing rule involving the phase volume fractions. The conductivity of the percolating phase is described with the help of classical percolation theory, while the conductivity of the non-percolating phase is given by the matrix conductivity enhanced by the presence of separate filler particles. The percolation theory is coupled with a kinetic equation for a scalar structural parameter which describes the current state of the filler network under particular flow conditions. The superposition approach is applied to transient shear experiments carried out on polycarbonate composites filled with multi-wall carbon nanotubes.
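    A minimal sketch of the described superposition (mixing-rule) idea; the functional forms, percolation threshold, and parameter values below are illustrative assumptions, not the fitted model of the study:

```python
def composite_conductivity(phi_perc, phi_total, sigma_matrix=1e-13,
                           phi_c=0.002, t=2.0, sigma_0=1e2, k_matrix=10.0):
    """Effective conductivity as a sum of a percolating-phase term and a
    non-percolating (matrix + dispersed filler) term. All parameters are
    hypothetical placeholders."""
    # Percolating phase: classical percolation scaling above the threshold.
    sigma_p = sigma_0 * (phi_perc - phi_c) ** t if phi_perc > phi_c else 0.0
    # Non-percolating phase: matrix conductivity, slightly enhanced by the
    # well-dispersed (non-networked) filler fraction.
    phi_dispersed = phi_total - phi_perc
    sigma_np = sigma_matrix * (1.0 + k_matrix * phi_dispersed)
    return sigma_p + sigma_np

# Shear breaks the network: as the percolating fraction drops (tracked by the
# kinetic structural parameter in the paper), conductivity falls sharply.
for phi_perc in (0.01, 0.005, 0.0):
    print(composite_conductivity(phi_perc, phi_total=0.01))
```

    The steep drop once the percolating fraction approaches the threshold mirrors the fragility of the filler network under shear described above.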

  7. Adiabatic rotation, quantum search, and preparation of superposition states

    International Nuclear Information System (INIS)

    Siu, M. Stewart

    2007-01-01

    We introduce the idea of using adiabatic rotation to generate superpositions of a large class of quantum states. For quantum computing this is an interesting alternative to the well-studied 'straight line' adiabatic evolution. In ways that complement recent results, we show how to efficiently prepare three types of states: Kitaev's toric code state, the cluster state of the measurement-based computation model, and the history state used in the adiabatic simulation of a quantum circuit. We also show that the method, when adapted for quantum search, provides quadratic speedup as other optimal methods do, with the advantages that the problem Hamiltonian is time independent and that the energy gap above the ground state is strictly nondecreasing with time. Likewise the method can be used for optimization as an alternative to the standard adiabatic algorithm.

  8. Entanglement and discord of the superposition of Greenberger-Horne-Zeilinger states

    International Nuclear Information System (INIS)

    Parashar, Preeti; Rana, Swapan

    2011-01-01

    We calculate the analytic expression for the geometric measure of entanglement for an arbitrary superposition of two N-qubit canonical orthonormal Greenberger-Horne-Zeilinger (GHZ) states, and the same for two W states. In the course of characterizing all kinds of nonclassical correlations, an explicit formula for quantum discord (via relative entropy) for the former class of states has been presented. Contrary to the GHZ state, the closest separable state to the W state is not classical. Therefore, in this case, the discord is different from the relative entropy of entanglement. We conjecture that the discord for the N-qubit W state is log₂ N.

  9. Superposition of configurations in semiempirical calculation of iron group ion spectra

    International Nuclear Information System (INIS)

    Kantseryavichyus, A.Yu.; Ramonas, A.A.

    1976-01-01

    The energy spectra of ions of the iron group in the d^N, d^N s, and d^N p configurations are studied. A semiempirical method is used in which the effective Hamiltonian contains configuration superposition. The quasidegenerate configurations sd^(N+1) and p^4 d^(N+2), as well as configurations which differ by one electron, are taken as correction configurations. It follows from the calculations that the most important role among the quasidegenerate configurations is played by the sd^(N+1) correction configuration. When it is taken into account, the introduction of the p^4 d^(N+2) correction configuration has practically no effect on the results. Account of the d^(N-1)s configuration in the second order of perturbation theory is equivalent to that of sd^(N+1) in the sense that it results in an identical mean-square deviation. As follows from the comparison of the results of the approximate and complete accounts of configuration superposition, in many cases one can be satisfied with the approximate version. The results are presented in the form of tables including the values of empirical parameters, radial integrals, mean-square errors, etc.

  10. Estimating Concentrations of Road-Salt Constituents in Highway-Runoff from Measurements of Specific Conductance

    Science.gov (United States)

    Granato, Gregory E.; Smith, Kirk P.

    1999-01-01

    Discrete or composite samples of highway runoff may not adequately represent in-storm water-quality fluctuations because continuous records of water stage, specific conductance, pH, and temperature of the runoff indicate that these properties fluctuate substantially during a storm. Continuous records of water-quality properties can be used to maximize the information obtained about the stormwater runoff system being studied and can provide the context needed to interpret analyses of water samples. Concentrations of the road-salt constituents calcium, sodium, and chloride in highway runoff were estimated from theoretical and empirical relations between specific conductance and the concentrations of these ions. These relations were examined using the analysis of 233 highway-runoff samples collected from August 1988 through March 1995 at four highway-drainage monitoring stations along State Route 25 in southeastern Massachusetts. Theoretically, the specific conductance of a water sample is the sum of the individual conductances attributed to each ionic species in solution: the product of the concentration of each ion in milliequivalents per liter (meq/L) multiplied by the equivalent ionic conductance at infinite dilution, thereby establishing the principle of superposition. Superposition provides an estimate of actual specific conductance that is within measurement error throughout the conductance range of many natural waters, with errors of less than ±5 percent below 1,000 microsiemens per centimeter (µS/cm) and ±10 percent between 1,000 and 4,000 µS/cm if all major ionic constituents are accounted for. A semi-empirical method (adjusted superposition) was used to adjust for concentration effects (superposition-method prediction errors at high and low concentrations) and to relate measured specific conductance to that calculated using superposition.
The adjusted superposition method, which was developed to interpret the State Route 25 highway-runoff records, accounts for
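    The superposition calculation described above reduces to a one-line sum. In the sketch below, the equivalent ionic conductances are textbook infinite-dilution values at 25 °C and the sample concentrations are illustrative, not data from the study:

```python
# Equivalent ionic conductances at infinite dilution, S*cm^2 per equivalent
# (textbook values; Ca2+ is given per equivalent, i.e. per 1/2 Ca2+).
LAMBDA_0 = {"Na+": 50.1, "Ca2+": 59.5, "Cl-": 76.4}

def estimate_conductance(meq_per_L):
    """Estimated specific conductance in microsiemens per centimeter:
    sum over ions of concentration (meq/L) times equivalent conductance."""
    return sum(meq_per_L[ion] * LAMBDA_0[ion] for ion in meq_per_L)

# Hypothetical road-salt-dominated runoff sample (charge-balanced, meq/L).
sample = {"Na+": 4.0, "Ca2+": 1.0, "Cl-": 5.0}
print(estimate_conductance(sample))  # ~642 uS/cm
```

    The adjusted-superposition step in the report would then correct this raw estimate for the concentration-dependent prediction errors noted above.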

  11. Noise-based logic hyperspace with the superposition of 2^N states in a single wire

    Science.gov (United States)

    Kish, Laszlo B.; Khatri, Sunil; Sethuraman, Swaminathan

    2009-05-01

    In the introductory paper [L.B. Kish, Phys. Lett. A 373 (2009) 911] about noise-based logic, we showed how simple superpositions of single logic basis vectors can be achieved in a single wire. The superposition components were the N orthogonal logic basis vectors. Supposing that the different logic values have “on/off” states only, the resultant discrete superposition state represents a single number with N-bit accuracy in a single wire, where N is the number of orthogonal logic vectors in the base. In the present Letter, we show that the logic hyperspace (product) vectors defined in the introductory paper can be generalized to provide the discrete superposition of 2^N orthogonal system states. This is equivalent to a multi-valued logic system with 2^N logic values per wire. This is a similar situation to quantum informatics with N qubits, and hence we introduce the notion of noise-bit. This system has major differences compared to quantum informatics. The noise-based logic system is deterministic and each superposition element is instantly accessible with high digital accuracy, via real hardware parallelism, without decoherence and error correction, and without the requirement of repeating the logic operation many times to extract the probabilistic information. Moreover, the states in noise-based logic do not have to be normalized, and non-unitary operations can also be used. As an example, we introduce a string search algorithm which is O(√M) times faster than Grover's quantum algorithm (where M is the number of string entries), while it has the same hardware complexity class as the quantum algorithm.
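    A toy model of the single-wire superposition idea summarized above (an illustration, not the authors' circuit): approximate the N orthogonal noise carriers by independent random ±1 sequences, encode an N-bit number as a sum of "on" carriers on one wire, and read each bit back by correlating against its reference carrier.

```python
import numpy as np

rng = np.random.default_rng(42)
N, L = 8, 200_000  # number of logic basis vectors, sequence length

# Independent random +/-1 sequences are nearly orthogonal for large L.
carriers = rng.choice([-1.0, 1.0], size=(N, L))

bits = np.array([1, 0, 1, 1, 0, 0, 1, 0])  # the N-bit number to encode
wire = bits @ carriers                      # single-wire superposition signal

# Readout: time-averaged correlation of the wire with each reference carrier
# is ~1 for "on" carriers and ~0 for "off" ones; threshold at 0.5.
recovered = (carriers @ wire / L > 0.5).astype(int)
print(recovered)  # [1 0 1 1 0 0 1 0]
```

    Each bit is recovered deterministically in one correlation step, which is the "instant access without repetition" property the abstract contrasts with quantum measurement.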

  12. Analysis of magnetic damping problem by the coupled mode superposition method

    International Nuclear Information System (INIS)

    Horie, Tomoyoshi; Niho, Tomoya

    1997-01-01

    In this paper we describe the coupled mode superposition method for the magnetic damping problem, which arises from the coupling between the deformation and the induced eddy currents in structures for future fusion reactors and magnetically levitated vehicles. The formulation of the coupled mode superposition method is based on the matrix equation for the eddy current and the structure using the coupled mode vectors. A symmetric form of the coupled matrix equation is obtained. Coupled problems of a thin plate are solved to verify the formulation and the computer code. These problems are solved efficiently by this method using only a few coupled modes. Consideration of the coupled mode vectors shows that the coupled effects are included completely in each coupled mode. (author)

  13. Superposition as a logical glue

    Directory of Open Access Journals (Sweden)

    Andrea Asperti

    2011-03-01

    Full Text Available The typical mathematical language systematically exploits notational and logical abuses whose resolution requires not just knowledge of domain-specific notation and conventions, but also nontrivial skills in the given mathematical discipline. A large part of this background knowledge is expressed in the form of equalities and isomorphisms, allowing mathematicians to freely move between different incarnations of the same entity without even mentioning the transformation. Providing ITP systems with similar capabilities seems to be a major way to improve their intelligence and to ease the communication between the user and the machine. The present paper discusses our experience of integrating a superposition calculus within the Matita interactive prover, providing in particular a very flexible, "smart" application tactic and a simple, innovative approach to automation.

  14. Superposition Enhanced Nested Sampling

    Directory of Open Access Journals (Sweden)

    Stefano Martiniani

    2014-08-01

Full Text Available The theoretical analysis of many problems in physics, astronomy, and applied mathematics requires an efficient numerical exploration of multimodal parameter spaces that exhibit broken ergodicity. Monte Carlo methods are widely used to deal with these classes of problems, but such simulations suffer from a ubiquitous sampling problem: the probability of sampling a particular state is proportional to its entropic weight. Devising an algorithm capable of efficiently sampling the full phase space is a long-standing problem. Here, we report a new hybrid method for the exploration of multimodal parameter spaces exhibiting broken ergodicity. Superposition enhanced nested sampling combines the strengths of global optimization with the unbiased or athermal sampling of nested sampling, greatly enhancing its efficiency with no additional parameters. We report extensive tests of this new approach for atomic clusters that are known to have energy landscapes for which conventional sampling schemes suffer from broken ergodicity. We also introduce a novel parallelization algorithm for nested sampling.
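For readers unfamiliar with the base algorithm, a minimal plain nested-sampling loop (without the superposition enhancement described above) fits in a few lines. Everything here is an illustrative assumption: a toy likelihood L(θ) = θ² on a uniform prior over [0, 1], whose evidence is exactly 1/3.

```python
import math
import random

# Minimal plain nested-sampling sketch (no superposition enhancement).
# Toy problem: uniform prior on [0, 1], likelihood L(t) = t**2, so the
# evidence Z = integral of t**2 over [0, 1] = 1/3 exactly.
random.seed(1)
N = 500                                   # number of live points
L = lambda t: t * t
live = [random.random() for _ in range(N)]

Z, X_prev = 0.0, 1.0
for i in range(1, 2501):
    worst = min(live, key=L)
    X = math.exp(-i / N)                  # expected prior-volume shrinkage
    Z += L(worst) * (X_prev - X)          # accumulate the evidence
    X_prev = X
    # replace the worst point by rejection-sampling the constrained prior
    while True:
        t = random.random()
        if L(t) > L(worst):
            break
    live[live.index(worst)] = t
Z += sum(map(L, live)) / N * X_prev       # remaining live-point contribution
print(round(Z, 3))                        # close to the exact 1/3
```

Rejection sampling of the constrained prior works here only because the toy problem is so simple; the broken-ergodicity landscapes in the abstract are exactly the cases where this replacement step fails and enhancements are needed.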

  15. Exponential Communication Complexity Advantage from Quantum Superposition of the Direction of Communication

    Science.gov (United States)

    Guérin, Philippe Allard; Feix, Adrien; Araújo, Mateus; Brukner, Časlav

    2016-09-01

    In communication complexity, a number of distant parties have the task of calculating a distributed function of their inputs, while minimizing the amount of communication between them. It is known that with quantum resources, such as entanglement and quantum channels, one can obtain significant reductions in the communication complexity of some tasks. In this work, we study the role of the quantum superposition of the direction of communication as a resource for communication complexity. We present a tripartite communication task for which such a superposition allows for an exponential saving in communication, compared to one-way quantum (or classical) communication; the advantage also holds when we allow for protocols with bounded error probability.

  16. Noise-based logic hyperspace with the superposition of 2^N states in a single wire

    International Nuclear Information System (INIS)

    Kish, Laszlo B.; Khatri, Sunil; Sethuraman, Swaminathan

    2009-01-01

In the introductory paper [L.B. Kish, Phys. Lett. A 373 (2009) 911] about noise-based logic, we showed how simple superpositions of single logic basis vectors can be achieved in a single wire. The superposition components were the N orthogonal logic basis vectors. Supposing that the different logic values have 'on/off' states only, the resultant discrete superposition state represents a single number with N-bit accuracy in a single wire, where N is the number of orthogonal logic vectors in the base. In the present Letter, we show that the logic hyperspace (product) vectors defined in the introductory paper can be generalized to provide the discrete superposition of 2^N orthogonal system states. This is equivalent to a multi-valued logic system with 2^(2^N) logic values per wire. This situation is similar to quantum informatics with N qubits, and hence we introduce the notion of the noise-bit. This system has major differences compared to quantum informatics. The noise-based logic system is deterministic, and each superposition element is instantly accessible with high digital accuracy, via real hardware parallelism, without decoherence and error correction, and without the requirement of repeating the logic operation many times to extract the probabilistic information. Moreover, the states in noise-based logic do not have to be normalized, and non-unitary operations can also be used. As an example, we introduce a string search algorithm which is O(√M) times faster than Grover's quantum algorithm (where M is the number of string entries), while it has the same hardware complexity class as the quantum algorithm.
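A simplified numerical sketch of the single-wire superposition idea (our toy construction, not the Letter's full hyperspace scheme): independent ±1 noise sequences act as orthogonal basis vectors, their sum travels on one "wire", and each component is recovered deterministically by correlation against the known reference noise.

```python
import numpy as np

# Toy sketch of noise-based logic superposition on a single wire.
# Independent +/-1 noise sequences serve as (nearly) orthogonal basis
# vectors; components are recovered by correlation with the references.
rng = np.random.default_rng(42)
N, T = 8, 4096
carriers = rng.choice([-1.0, 1.0], size=(N, T))   # reference noises

subset = [0, 3, 5]                     # logic values switched "on"
wire = carriers[subset].sum(axis=0)    # single-wire superposition signal

detected = [i for i in range(N)
            if abs(np.dot(wire, carriers[i]) / T) > 0.5]
print(detected)                        # recovers [0, 3, 5]
```

Each correlation is close to 1 for an "on" component and close to 0 otherwise, so the readout is effectively deterministic, in the spirit of the Letter's claim of instant access without probabilistic repetition.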

  17. Superpositions of higher-order bessel beams and nondiffracting speckle fields

    CSIR Research Space (South Africa)

    Dudley, Angela L

    2009-08-01

    Full Text Available speckle fields. The paper reports on illuminating a ring slit aperture with light which has an azimuthal phase dependence, such that the field produced is a superposition of two higher-order Bessel beams. In the case that the phase dependence of the light...

  18. Transforming spatial point processes into Poisson processes using random superposition

    DEFF Research Database (Denmark)

    Møller, Jesper; Berthelsen, Kasper Klitgaard

    with a complementary spatial point process Y to obtain a Poisson process X∪Y with intensity function β. Underlying this is a bivariate spatial birth-death process (Xt,Yt) which converges towards the distribution of (X,Y). We study the joint distribution of X and Y, and their marginal and conditional distributions. In particular, we introduce a fast and easy simulation procedure for Y conditional on X. This may be used for model checking: given a model for the Papangelou intensity of the original spatial point process, this model is used to generate the complementary process, and the resulting superposition is a Poisson process with intensity function β if and only if the true Papangelou intensity is used. Whether the superposition is actually such a Poisson process can easily be examined using well known results and fast simulation procedures for Poisson processes. We illustrate this approach to model checking
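A much simpler fact underlies constructions of this kind: the superposition of independent Poisson processes is again Poisson with summed intensity. The sketch below checks this numerically via count statistics; the intensities are arbitrary illustrative values.

```python
import numpy as np

# Sketch of the elementary fact underlying such superposition
# constructions: merging independent Poisson processes yields a Poisson
# process with summed intensity.  Checked here via count statistics.
rng = np.random.default_rng(0)
lam_x, lam_y, reps = 40.0, 60.0, 2000
counts = rng.poisson(lam_x, reps) + rng.poisson(lam_y, reps)

mean, var = counts.mean(), counts.var()
print(mean, var)   # both near lam_x + lam_y = 100 (Poisson: mean = variance)
```

The equality of mean and variance is the Poisson signature that the paper's model-checking procedure ultimately exploits, applied there to the spatial superposition X∪Y rather than to plain counts.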

  19. Principles in selecting human capital measurements and metrics

    Directory of Open Access Journals (Sweden)

    Pharny D. Chrysler-Fox

    2014-09-01

    Research purpose: The study explored principles in selecting human capital measurements, drawing on the views and recommendations of human resource management professionals, all experts in human capital measurement. Motivation for the study: The motivation was to advance the understanding of selecting appropriate and strategically valid measurements, in order for human resource practitioners to contribute to creating value and driving strategic change. Research design, approach and method: A qualitative approach, with purposively selected cases from a selected panel of human capital measurement experts, generated a dataset through unstructured interviews, which were analysed thematically. Main findings: Nineteen themes were found. They represent a process that considers the centrality of the business strategy and a systemic integration across multiple value chains in the organisation through business partnering, in order to select measurements and generate management level-appropriate information. Practical/managerial implications: Measurement practitioners, in partnership with management from other functions, should integrate the business strategy across multiple value chains in order to select measurements. Analytics becomes critical in discovering relationships and formulating hypotheses to understand value creation. Higher education institutions should produce graduates able to deal with systems thinking and to operate within complexity. Contribution: This study identified principles to select measurements and metrics. Noticeable is the move away from the interrelated scorecard perspectives to a systemic view of the organisation in order to understand value creation. In addition, the findings may help to position the human resource management function as a strategic asset.

  20. Continuous quantum measurements and the action uncertainty principle

    Science.gov (United States)

    Mensky, Michael B.

    1992-09-01

    The path-integral approach to the quantum theory of continuous measurements has been developed in preceding works of the author. According to this approach the measurement amplitude determining probabilities of different outputs of the measurement can be evaluated in the form of a restricted path integral (a path integral "in finite limits"). With the help of the measurement amplitude, the maximum deviation of measurement outputs from the classical one can be easily determined. The aim of the present paper is to express this variance in a simpler and more transparent form of a specific uncertainty principle (called the action uncertainty principle, AUP). The simplest (but weakest) form of the AUP is δS ≳ ℏ, where S is the action functional. It can be applied for a simple derivation of the Bohr-Rosenfeld inequality for the measurability of the gravitational field. A stronger form of the AUP (for ideal measurements performed in the quantum regime), with wider application, is |∫_{t'}^{t''} (δS[q]/δq(t)) Δq(t) dt| ≃ ℏ, where the paths [q] and [Δq] stand correspondingly for the measurement output and for the measurement error. It can also be presented in the symbolic form Δ(Equation) Δ(Path) ≃ ℏ. This means that the deviation of the observed (measured) motion from that obeying the classical equation of motion is reciprocally proportional to the uncertainty in the path (the latter uncertainty resulting from the measurement error). The consequence of the AUP is that improving the measurement precision beyond the threshold of the quantum regime leads to decreasing information resulting from the measurement.

  1. Implementation and validation of collapsed cone superposition for radiopharmaceutical dosimetry of photon emitters

    Science.gov (United States)

    Sanchez-Garcia, Manuel; Gardin, Isabelle; Lebtahi, Rachida; Dieudonné, Arnaud

    2015-10-01

    Two collapsed cone (CC) superposition algorithms have been implemented for radiopharmaceutical dosimetry of photon emitters. The straight CC (SCC) superposition method uses a water energy deposition kernel (EDKw) for each electron, positron and photon component, while the primary and scatter CC (PSCC) superposition method uses different EDKw for primary and once-scattered photons. PSCC was implemented only for photons originating from the nucleus, precluding its application to positron emitters. EDKw are linearly scaled by radiological distance, taking into account tissue density heterogeneities. The implementation was tested on 100, 300 and 600 keV mono-energetic photons and on 18F, 99mTc, 131I and 177Lu. The kernels were generated using the Monte Carlo codes MCNP and EGSnrc. The validation was performed on 6 phantoms representing interfaces between soft tissue, lung and bone. The figures of merit were the γ (3%, 3 mm) and γ (5%, 5 mm) criteria, evaluated at 80 absorbed dose (AD) points per phantom by comparing the CC algorithms against Monte Carlo simulations. PSCC gave better results than SCC for the lowest photon energy (100 keV). For the 3 isotopes computed with PSCC, the percentage of AD points satisfying the γ (5%, 5 mm) criterion was always over 99%. Results with SCC were still good but worse: at least 97% of AD values satisfied the γ (5%, 5 mm) criterion, except for a value of 57% for 99mTc with the lung/bone interface. The CC superposition method for radiopharmaceutical dosimetry is a good alternative to Monte Carlo simulations while reducing computational complexity.
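The core idea, superposing energy-deposition kernels evaluated at radiological rather than geometric distance, can be illustrated in one dimension. This is a deliberately crude sketch (toy exponential kernel and made-up densities, not the collapsed-cone implementation described above):

```python
import numpy as np

# Crude 1D sketch of kernel superposition with radiological-distance
# scaling (toy exponential kernel and made-up densities; not the
# collapsed-cone implementation described in the abstract).
def dose(source, density, mu=1.0, dx=0.1):
    rad_depth = np.cumsum(density) * dx       # radiological depth grid
    d = np.zeros(len(source))
    for i, s in enumerate(source):            # superpose one kernel per voxel
        if s > 0:
            r = np.abs(rad_depth - rad_depth[i])
            d += s * np.exp(-mu * r)          # water kernel at scaled distance
    return d

src = np.zeros(100); src[10] = 1.0            # point activity at voxel 10
water = np.ones(100)
lung = np.ones(100); lung[20:60] = 0.3        # low-density insert

d_w, d_l = dose(src, water), dose(src, lung)
print(d_l[50] > d_w[50])   # dose penetrates farther through low density
```

Scaling the kernel by radiological rather than geometric distance is what lets a single water kernel handle the lung and bone heterogeneities mentioned in the abstract.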

  2. Implementation and validation of collapsed cone superposition for radiopharmaceutical dosimetry of photon emitters.

    Science.gov (United States)

    Sanchez-Garcia, Manuel; Gardin, Isabelle; Lebtahi, Rachida; Dieudonné, Arnaud

    2015-10-21

    Two collapsed cone (CC) superposition algorithms have been implemented for radiopharmaceutical dosimetry of photon emitters. The straight CC (SCC) superposition method uses a water energy deposition kernel (EDKw) for each electron, positron and photon component, while the primary and scatter CC (PSCC) superposition method uses different EDKw for primary and once-scattered photons. PSCC was implemented only for photons originating from the nucleus, precluding its application to positron emitters. EDKw are linearly scaled by radiological distance, taking into account tissue density heterogeneities. The implementation was tested on 100, 300 and 600 keV mono-energetic photons and on (18)F, (99m)Tc, (131)I and (177)Lu. The kernels were generated using the Monte Carlo codes MCNP and EGSnrc. The validation was performed on 6 phantoms representing interfaces between soft tissue, lung and bone. The figures of merit were the γ (3%, 3 mm) and γ (5%, 5 mm) criteria, evaluated at 80 absorbed dose (AD) points per phantom by comparing the CC algorithms against Monte Carlo simulations. PSCC gave better results than SCC for the lowest photon energy (100 keV). For the 3 isotopes computed with PSCC, the percentage of AD points satisfying the γ (5%, 5 mm) criterion was always over 99%. Results with SCC were still good but worse: at least 97% of AD values satisfied the γ (5%, 5 mm) criterion, except for a value of 57% for (99m)Tc with the lung/bone interface. The CC superposition method for radiopharmaceutical dosimetry is a good alternative to Monte Carlo simulations while reducing computational complexity.

  3. Resilience to decoherence of the macroscopic quantum superpositions generated by universally covariant optimal quantum cloning

    International Nuclear Information System (INIS)

    Spagnolo, Nicolo; Sciarrino, Fabio; De Martini, Francesco

    2010-01-01

    We show that the quantum states generated by universal optimal quantum cloning of a single photon represent a universal set of quantum superpositions resilient to decoherence. We adopt the Bures distance as a tool to investigate the persistence of the quantum coherence of these quantum states. According to this analysis, the process of universal cloning realizes a class of quantum superpositions that exhibits a covariance property in a lossy configuration over the complete set of polarization states in the Bloch sphere.

  4. Relaxation Behavior by Time-Salt and Time-Temperature Superpositions of Polyelectrolyte Complexes from Coacervate to Precipitate

    Directory of Open Access Journals (Sweden)

    Samim Ali

    2018-01-01

    Full Text Available Complexation between anionic and cationic polyelectrolytes results in solid-like precipitates or liquid-like coacervates depending on the added salt in the aqueous medium. However, the boundary between these polymer-rich phases is quite broad, and the associated changes in polymer relaxation in the complexes across the transition regime are poorly understood. In this work, the relaxation dynamics of complexes across this transition is probed over a wide timescale by measuring viscoelastic spectra and zero-shear viscosities at varying temperatures and salt concentrations for two different salt types. We find that the complexes exhibit time-temperature superposition (TTS) at all salt concentrations, while the range of overlapped frequencies for time-temperature-salt superposition (TTSS) strongly depends on the salt concentration (Cs) and gradually shifts to higher frequencies as Cs is decreased. The sticky-Rouse model describes the relaxation behavior at all Cs. However, the collective relaxation of polyelectrolyte complexes gradually approaches a rubbery regime and eventually exhibits a gel-like response as Cs is decreased, limiting the validity of TTSS.
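The mechanics of a horizontal time-temperature shift can be sketched with synthetic single-Maxwell-mode data; the relaxation times (and hence the shift factor a_T) below are illustrative assumptions, not the paper's measurements.

```python
import numpy as np

# Time-temperature superposition on synthetic single-Maxwell-mode data.
# The relaxation times (and hence the shift factor a_T) are illustrative
# assumptions, not the paper's measured values.
def G_storage(omega, tau):
    """Storage modulus of one Maxwell mode (normalized)."""
    return (omega * tau) ** 2 / (1 + (omega * tau) ** 2)

omega = np.logspace(-2, 2, 50)           # measured frequency window
tau_ref, tau_hot = 1.0, 0.1              # relaxation speeds up when hot
a_T = tau_hot / tau_ref                  # horizontal shift factor

G_ref = G_storage(omega, tau_ref)
G_hot_shifted = G_storage(omega / a_T, tau_hot)   # shift hot data back
mismatch = np.max(np.abs(G_hot_shifted - G_ref))
print(mismatch)          # ~0: shifted data collapse onto the master curve
```

The collapse is exact here because a single relaxation mode shifts rigidly; the paper's point is that real complexes obey such superposition only over a salt-dependent frequency range.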

  5. On some properties of the superposition operator on topological manifolds

    Directory of Open Access Journals (Sweden)

    Janusz Dronka

    2010-01-01

    Full Text Available In this paper the superposition operator in the space of vector-valued, bounded and continuous functions on a topological manifold is considered. The acting conditions and criteria of continuity and compactness are established. As an application, an existence result for the nonlinear Hammerstein integral equation is obtained.

  6. On a computational method for modelling complex ecosystems by superposition procedure

    International Nuclear Information System (INIS)

    He Shanyu.

    1986-12-01

    In this paper, the Superposition Procedure is concisely described, and a computational method for modelling a complex ecosystem is proposed. With this method, the information contained in acceptable submodels and observed data can be utilized to the maximal degree. (author). 1 ref

  7. SUPERPOSITION OF STOCHASTIC PROCESSES AND THE RESULTING PARTICLE DISTRIBUTIONS

    International Nuclear Information System (INIS)

    Schwadron, N. A.; Dayeh, M. A.; Desai, M.; Fahr, H.; Jokipii, J. R.; Lee, M. A.

    2010-01-01

    Many observations of suprathermal and energetic particles in the solar wind and the inner heliosheath show that distribution functions scale approximately with the inverse of particle speed (v) to the fifth power. Although there are exceptions to this behavior, there is a growing need to understand why this type of distribution function appears so frequently. This paper develops the concept that a superposition of exponential and Gaussian distributions with different characteristic speeds and temperatures shows power-law tails. The particular type of distribution function, f ∝ v⁻⁵, appears in a number of different ways: (1) a series of Poisson-like processes where entropy is maximized with the rates of individual processes inversely proportional to the characteristic exponential speed, (2) a series of Gaussian distributions where the entropy is maximized with the rates of individual processes inversely proportional to temperature and the density of individual Gaussian distributions proportional to temperature, and (3) a series of different diffusively accelerated energetic particle spectra with individual spectra derived from observations (1997-2002) of a multiplicity of different shocks. Thus, we develop a proof-of-concept for the superposition of stochastic processes that give rise to power-law distribution functions.
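The central claim, that a weighted superposition of exponential distributions develops a power-law tail, is easy to verify numerically. In the sketch below the weight w(v0) ∝ v0⁻⁵ is our illustrative choice, tuned to produce an f ∝ v⁻⁵ tail; it is not the paper's specific entropy-maximizing construction.

```python
import numpy as np

# Numerical check that a weighted superposition of exponentials develops
# a power-law tail.  The weight w(v0) ~ v0**-5 is an illustrative choice
# that yields a tail f(v) ~ v**-5 like the observed spectra.
v0 = np.logspace(-2, 2, 400)            # characteristic speeds
dv0 = np.gradient(v0)                   # quadrature weights
w = v0 ** -5.0                          # superposition weights

v = np.logspace(0.5, 1.5, 30)           # speeds probing the tail
f = np.array([(w * np.exp(-x / v0) / v0 * dv0).sum() for x in v])

slope = np.polyfit(np.log(v), np.log(f), 1)[0]
print(round(slope, 2))                  # close to -5
```

Substituting u = v/v0 in the integral shows why: the v-dependence factors out as v⁻⁵ times a v-independent gamma integral, which is exactly the log-log slope the fit recovers.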

  8. Network class superposition analyses.

    Directory of Open Access Journals (Sweden)

    Carl A B Pearson

    Full Text Available Networks are often used to understand a whole system by modeling the interactions among its pieces. Examples include biomolecules in a cell interacting to provide some primary function, or species in an environment forming a stable community. However, these interactions are often unknown; instead, the pieces' dynamic states are known, and network structure must be inferred. Because observed function may be explained by many different networks (e.g., ≈ 10^30 for the yeast cell cycle process), considering dynamics beyond this primary function means picking a single network or a suitable sample: measuring over all networks exhibiting the primary function is computationally infeasible. We circumvent that obstacle by calculating the network class ensemble. We represent the ensemble by a stochastic matrix T, which is a transition-by-transition superposition of the system dynamics for each member of the class. We present concrete results for T derived from boolean time series dynamics on networks obeying the Strong Inhibition rule, by applying T to several traditional questions about network dynamics. We show that the distribution of the number of point attractors can be accurately estimated with T. We show how to generate Derrida plots based on T. We show that T-based Shannon entropy outperforms other methods at selecting experiments to further narrow the network structure. We also outline an experimental test of predictions based on T. We motivate all of these results in terms of a popular molecular biology boolean network model for the yeast cell cycle, but the methods and analyses we introduce are general. We conclude with open questions for T, for example, application to other models, computational considerations when scaling up to larger systems, and other potential analyses.
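The construction of T can be sketched at toy scale: each member network contributes a deterministic transition matrix over the state space, and T is their superposition. The two-node "networks" below are hypothetical stand-ins, not Strong Inhibition models.

```python
import numpy as np

# Toy network-class ensemble matrix T: every member network defines a
# deterministic transition matrix over the 2**2 = 4 states of two
# boolean nodes, and T is their uniform superposition.
def transition_matrix(update, n_states=4):
    M = np.zeros((n_states, n_states))
    for s in range(n_states):
        M[s, update(s)] = 1.0            # deterministic: one 1 per row
    return M

net_a = lambda s: ((s & 1) << 1) | (s >> 1)   # swap the two bits
net_b = lambda s: s                           # identity: every state fixed

T = 0.5 * (transition_matrix(net_a) + transition_matrix(net_b))
print(T.sum(axis=1))                     # rows still sum to 1 (stochastic)
```

Because each member matrix is row-stochastic, so is any convex superposition of them, which is what lets T be analyzed with the standard machinery of stochastic matrices (attractors, entropy, and so on).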

  9. Using Musical Intervals to Demonstrate Superposition of Waves and Fourier Analysis

    Science.gov (United States)

    LoPresto, Michael C.

    2013-01-01

    What follows is a description of a demonstration of superposition of waves and Fourier analysis using a set of four tuning forks mounted on resonance boxes and oscilloscope software to create, capture and analyze the waveforms and Fourier spectra of musical intervals.
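The same demonstration translates directly into code: superpose two tones a perfect fifth apart (frequency ratio 3:2) and let an FFT separate them again. The sampling rate and pitches below are arbitrary choices.

```python
import numpy as np

# Superpose two tones a perfect fifth apart (3:2 frequency ratio) and
# recover both components from the Fourier spectrum of the sum.
fs, dur = 8192, 1.0
t = np.arange(0, dur, 1 / fs)
f1, f2 = 256.0, 384.0                     # a 3:2 musical interval
signal = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(t), 1 / fs)
peaks = freqs[np.argsort(spectrum)[-2:]]  # two strongest components
print(sorted(peaks))                      # recovers [256.0, 384.0]
```

This mirrors the classroom setup: the summed waveform looks complicated in the time domain, but the Fourier spectrum cleanly resolves the two tuning-fork frequencies.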

  10. Noise-based logic hyperspace with the superposition of 2^N states in a single wire

    Energy Technology Data Exchange (ETDEWEB)

    Kish, Laszlo B. [Texas A and M University, Department of Electrical and Computer Engineering, College Station, TX 77843-3128 (United States)], E-mail: laszlo.kish@ece.tamu.edu; Khatri, Sunil; Sethuraman, Swaminathan [Texas A and M University, Department of Electrical and Computer Engineering, College Station, TX 77843-3128 (United States)

    2009-05-11

    In the introductory paper [L.B. Kish, Phys. Lett. A 373 (2009) 911] about noise-based logic, we showed how simple superpositions of single logic basis vectors can be achieved in a single wire. The superposition components were the N orthogonal logic basis vectors. Supposing that the different logic values have 'on/off' states only, the resultant discrete superposition state represents a single number with N-bit accuracy in a single wire, where N is the number of orthogonal logic vectors in the base. In the present Letter, we show that the logic hyperspace (product) vectors defined in the introductory paper can be generalized to provide the discrete superposition of 2^N orthogonal system states. This is equivalent to a multi-valued logic system with 2^(2^N) logic values per wire. This situation is similar to quantum informatics with N qubits, and hence we introduce the notion of the noise-bit. This system has major differences compared to quantum informatics. The noise-based logic system is deterministic, and each superposition element is instantly accessible with high digital accuracy, via real hardware parallelism, without decoherence and error correction, and without the requirement of repeating the logic operation many times to extract the probabilistic information. Moreover, the states in noise-based logic do not have to be normalized, and non-unitary operations can also be used. As an example, we introduce a string search algorithm which is O(√M) times faster than Grover's quantum algorithm (where M is the number of string entries), while it has the same hardware complexity class as the quantum algorithm.

  11. A comparison of two different sound intensity measurement principles

    DEFF Research Database (Denmark)

    Jacobsen, Finn; de Bree, Hans-Elias

    2005-01-01

    , and compares the two measurement principles with particular regard to the sources of error in sound power determination. It is shown that the phase calibration of intensity probes that combine different transducers is very critical below 500 Hz if the measurement surface is very close to the source under test...

  12. New principle for measuring arterial blood oxygenation, enabling motion-robust remote monitoring.

    Science.gov (United States)

    van Gastel, Mark; Stuijk, Sander; de Haan, Gerard

    2016-12-07

    Finger-oximeters are ubiquitously used for patient monitoring in hospitals worldwide. Recently, remote measurement of arterial blood oxygenation (SpO2) with a camera has been demonstrated. Both contact and remote measurements, however, require the subject to remain static for accurate SpO2 values. This is due to the use of the common ratio-of-ratios measurement principle that measures the relative pulsatility at different wavelengths. Since the amplitudes are small, they are easily corrupted by motion-induced variations. We introduce a new principle that allows accurate remote measurements even during significant subject motion. We demonstrate the main advantage of the principle, i.e. that the optimal signature remains the same even when the SNR of the PPG signal drops significantly due to motion or a limited measurement area. The evaluation uses recordings with breath-holding events, which induce hypoxemia in healthy moving subjects. The events lead to clinically relevant SpO2 levels in the range 80-100%. The new principle is shown to greatly outperform current remote ratio-of-ratios based methods. The mean absolute SpO2 error (MAE) is about 2 percentage points during head movements, where the benchmark method shows a MAE of 24 percentage points. Consequently, we claim ours to be the first method to reliably measure SpO2 remotely during significant subject motion.
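For context, the conventional ratio-of-ratios principle that the paper improves upon can be sketched as follows; the waveforms and the linear calibration coefficients are illustrative placeholders, not clinical values.

```python
import numpy as np

# Sketch of the conventional ratio-of-ratios principle the paper
# improves upon.  The waveforms and the linear calibration are
# illustrative placeholders, not clinical values.
def ratio_of_ratios(red, ir):
    ac = lambda x: x.max() - x.min()     # pulsatile amplitude
    dc = lambda x: x.mean()              # baseline level
    return (ac(red) / dc(red)) / (ac(ir) / dc(ir))

t = np.linspace(0.0, 10.0, 1000)
pulse = np.sin(2 * np.pi * 1.2 * t)      # ~72 bpm cardiac pulsation
red = 1.0 + 0.010 * pulse                # weak relative pulsatility (red)
ir = 1.0 + 0.020 * pulse                 # stronger pulsatility (infrared)

R = ratio_of_ratios(red, ir)
spo2 = 110.0 - 25.0 * R                  # hypothetical calibration line
print(round(R, 2), round(spo2, 1))       # R = 0.5 maps to SpO2 = 97.5
```

Because R is built from tiny AC amplitudes, any motion artifact that distorts either waveform corrupts the estimate, which is precisely the weakness the abstract's new principle addresses.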

  13. On Multiple Users Scheduling Using Superposition Coding over Rayleigh Fading Channels

    KAUST Repository

    Zafar, Ammar

    2013-02-20

    In this letter, numerical results are provided to analyze the gains of multiple-user scheduling via superposition coding with successive interference cancellation, in comparison with conventional single-user scheduling in Rayleigh block-fading broadcast channels. The information-theoretic optimal power, rate and decoding order allocation for the superposition coding scheme are considered, and the corresponding histogram for the optimal number of scheduled users is evaluated. Results show that at optimality there is a high probability that only two or three users are scheduled per channel transmission block. Numerical results for the gains of multiple-user scheduling in terms of the long-term throughput under hard and proportional fairness, as well as for fixed merit weights for the users, are also provided. These results show that the performance gain of multiple-user scheduling over single-user scheduling increases when the total number of users in the network increases, and it can exceed 10% for a high number of users.
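A minimal two-user rate computation shows why superposition coding with successive interference cancellation pays off; the channel gains, noise power and power split below are made-up numbers for illustration.

```python
import math

# Two-user AWGN broadcast sketch of superposition coding with SIC.
# Channel gains, noise power and the power split are made-up numbers.
P, g_strong, g_weak, noise = 10.0, 1.0, 0.2, 1.0
alpha = 0.2                      # fraction of power given to the strong user

# The weak user decodes its own layer, treating the strong user's
# layer as interference.
r_weak = math.log2(1 + (1 - alpha) * P * g_weak
                   / (alpha * P * g_weak + noise))
# The strong user first cancels the weak user's layer (SIC), then
# decodes its own layer interference-free.
r_strong = math.log2(1 + alpha * P * g_strong / noise)

print(round(r_weak, 2), round(r_strong, 2))
```

Both users obtain a nonzero rate in the same transmission block, which is the mechanism behind the multiple-user scheduling gains reported in the letter.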

  14. Composite and case study analyses of the large-scale environments associated with West Pacific Polar and subtropical vertical jet superposition events

    Science.gov (United States)

    Handlos, Zachary J.

    Though considerable research attention has been devoted to examination of the Northern Hemispheric polar and subtropical jet streams, relatively little has been directed toward understanding the circumstances that conspire to produce the relatively rare vertical superposition of these usually separate features. This dissertation investigates the structure and evolution of large-scale environments associated with jet superposition events in the northwest Pacific. An objective identification scheme, using NCEP/NCAR Reanalysis 1 data, is employed to identify all jet superpositions in the west Pacific (30-40°N, 135-175°E) for boreal winters (DJF) between 1979/80 - 2009/10. The analysis reveals that environments conducive to west Pacific jet superposition share several large-scale features usually associated with East Asian Winter Monsoon (EAWM) northerly cold surges, including the presence of an enhanced Hadley Cell-like circulation within the jet entrance region. It is further demonstrated that several EAWM indices are statistically significantly correlated with jet superposition frequency in the west Pacific. The life cycle of EAWM cold surges promotes interaction between tropical convection and internal jet dynamics. Low-potential-vorticity (PV), high-θe tropical boundary layer air, exhausted by anomalous convection in the west Pacific lower latitudes, is advected poleward towards the equatorward side of the jet in upper-tropospheric isentropic layers, resulting in anomalous anticyclonic wind shear that accelerates the jet. This, along with geostrophic cold air advection in the left jet entrance region that drives the polar tropopause downward through the jet core, promotes the development of the deep, vertical PV wall characteristic of superposed jets. West Pacific jet superpositions preferentially form within an environment favoring the aforementioned characteristics regardless of EAWM seasonal strength. Post-superposition, it is shown that the west Pacific

  15. Environmental policy in brown coal mining in accordance with the precautionary measures principle and polluter pays principle

    International Nuclear Information System (INIS)

    Hamann, R.; Wacker, H.

    1993-01-01

    The precautionary measures principle and the polluter pays principle in brown coal mining are discussed. Ground water subsidence and landscape destruction are local or regional problems and thus easily detectable. If damage cannot be avoided, those responsible are known and will pay. In spite of all this, the German brown coal industry is well able to compete on the world market with others who do not care about the environmental damage they may cause. (orig./HS) [de

  16. Measuring coherence with entanglement concurrence

    Science.gov (United States)

    Qi, Xianfei; Gao, Ting; Yan, Fengli

    2017-07-01

    Quantum coherence is a fundamental manifestation of the quantum superposition principle. Recently, Baumgratz et al (2014 Phys. Rev. Lett. 113 140401) presented a rigorous framework to quantify coherence from the viewpoint of the theory of physical resources. Here we propose a new valid quantum coherence measure, which is a convex roof measure, for a quantum system of arbitrary dimension, essentially using the generalized Gell-Mann matrices. A rigorous proof shows that the proposed coherence measure, coherence concurrence, fulfills all the requirements dictated by the resource theory of quantum coherence measures. Moreover, strong links between the resource frameworks of coherence concurrence and entanglement concurrence are derived, which shows that any degree of coherence with respect to some reference basis can be converted to entanglement via incoherent operations. Our work provides a clear quantitative and operational connection between coherence and entanglement based on two kinds of concurrence. This new coherence measure, coherence concurrence, may also be beneficial to the study of quantum coherence.
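As a concrete entry point to this resource framework, the sketch below computes a standard coherence quantifier from the Baumgratz et al framework, the l1-norm of coherence, which is simpler than the paper's coherence concurrence, for two qubit states.

```python
import numpy as np

# The l1-norm of coherence from the Baumgratz et al framework (a
# simpler quantifier than the paper's coherence concurrence): the sum
# of off-diagonal magnitudes of the density matrix in a fixed basis.
def l1_coherence(rho):
    return np.abs(rho).sum() - np.trace(np.abs(rho)).real

plus = np.array([[0.5, 0.5], [0.5, 0.5]])   # |+><+|, maximally coherent
mixed = np.eye(2) / 2                        # incoherent maximally mixed

print(l1_coherence(plus), l1_coherence(mixed))   # 1.0 and 0.0
```

The measure vanishes exactly on diagonal (incoherent) states and peaks on equal superpositions, the two defining behaviors any valid coherence measure in this framework must reproduce.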

  17. Introductory review on `Flying Triangulation': a motion-robust optical 3D measurement principle

    Science.gov (United States)

    Ettl, Svenja

    2015-04-01

    'Flying Triangulation' (FlyTri) is a recently developed principle which allows for a motion-robust optical 3D measurement of rough surfaces. It combines a simple sensor with sophisticated algorithms: a single-shot sensor acquires 2D camera images. From each camera image, a 3D profile is generated. The series of 3D profiles generated are aligned to one another by algorithms, without relying on any external tracking device. It delivers real-time feedback of the measurement process which enables an all-around measurement of objects. The principle has great potential for small-space acquisition environments, such as the measurement of the interior of a car, and motion-sensitive measurement tasks, such as the intraoral measurement of teeth. This article gives an overview of the basic ideas and applications of FlyTri. The main challenges and their solutions are discussed. Measurement examples are also given to demonstrate the potential of the measurement principle.

  18. Quantum-mechanical Green's functions and nonlinear superposition law

    International Nuclear Information System (INIS)

    Nassar, A.B.; Bassalo, J.M.F.; Antunes Neto, H.S.; Alencar, P. de T.S.

    1986-01-01

    The quantum-mechanical Green's function is derived for the problem of a time-dependent variable-mass particle subject to a time-dependent forced harmonic-oscillator potential by direct recourse to the corresponding Schroedinger equation. Through the use of the nonlinear superposition law of Ray and Reid, it is shown that such a Green's function can be obtained from that for the problem of a particle with unit (constant) mass subject either to a forced harmonic potential with constant frequency or only to a time-dependent linear field. (Author) [pt

  19. Quantum-mechanical Green's function and nonlinear superposition law

    International Nuclear Information System (INIS)

    Nassar, A.B.; Bassalo, J.M.F.; Antunes Neto, H.S.; Alencar, P.T.S.

    1986-01-01

    It is derived the quantum-mechanical Green's function for the problem of a time-dependent variable mass particle subject to a time-dependent forced harmonic-oscillator potential by taking direct recourse of the corresponding Schroedinger equation. Through the usage of the nonlinear superposition law of Ray and Reid, it is shown that such a Green's function can be obtained from that for the problem of a particle with unit (constant) mass subject to either a forced harmonic potential with constant frequency or only to a time-dependent linear field

  20. Efficient Power Allocation for Video over Superposition Coding

    KAUST Repository

    Lau, Chun Pong

    2013-03-01

In this paper we consider a wireless multimedia system that maps a scalable video coded (SVC) bit stream onto superposition coded (SPC) signals, referred to as the SVC-SPC architecture. Empirical experiments using a software-defined radio (SDR) emulator are conducted to gain a better understanding of its efficiency, specifically the impact of different power-allocation ratios on the received signal. Our experimental results show that to maintain high video quality, the power allocated to the base layer should be approximately four times higher than the power allocated to the enhancement layer.
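The base/enhancement power split described above can be sketched numerically. This is a minimal illustration of superposition coding with a 4:1 power ratio, not the paper's SVC-SPC emulator; the BPSK symbols and the `superpose` helper are assumptions made for the example.

```python
import numpy as np

def superpose(base_syms, enh_syms, ratio=4.0):
    """Superpose two unit-power symbol streams with total power 1.

    ratio = P_base / P_enh; with ratio=4, P_base=0.8 and P_enh=0.2.
    """
    p_enh = 1.0 / (1.0 + ratio)
    p_base = ratio * p_enh
    tx = np.sqrt(p_base) * base_syms + np.sqrt(p_enh) * enh_syms
    return tx, p_base, p_enh

# Illustrative BPSK symbols for both layers
rng = np.random.default_rng(0)
base = rng.choice([-1.0, 1.0], size=8)
enh = rng.choice([-1.0, 1.0], size=8)
tx, p_b, p_e = superpose(base, enh)
print(round(p_b, 6), round(p_e, 6))  # 0.8 0.2
```

With `ratio=4` the base layer receives 80% of the transmit power, matching the roughly four-to-one allocation the abstract reports as necessary for high video quality.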

  1. Push-pull optical pumping of pure superposition states

    International Nuclear Information System (INIS)

    Jau, Y.-Y.; Miron, E.; Post, A.B.; Kuzma, N.N.; Happer, W.

    2004-01-01

    A new optical pumping method, 'push-pull pumping', can produce very nearly pure, coherent superposition states between the initial and the final sublevels of the important field-independent 0-0 clock resonance of alkali-metal atoms. The key requirement for push-pull pumping is the use of D1 resonant light which alternates between left and right circular polarization at the Bohr frequency of the state. The new pumping method works for a wide range of conditions, including atomic beams with almost no collisions, and atoms in buffer gases with pressures of many atmospheres

  2. A millimeter wave linear superposition oscillator in 0.18 μm CMOS technology

    International Nuclear Information System (INIS)

    Yan Dong; Mao Luhong; Su Qiujie; Xie Sheng; Zhang Shilin

    2014-01-01

This paper presents a millimeter wave (mm-wave) oscillator that generates a signal at 36.56 GHz. The mm-wave oscillator is realized in a UMC 0.18 μm CMOS process. The linear superposition (LS) technique breaks through the limit of the cut-off frequency (f_T) and realizes oscillation well above f_T. Measurement results show that the LS oscillator produces a calibrated −37.17 dBm output power when biased at 1.8 V; the output power of the fundamental signal is −10.85 dBm after calibration. The measured phase noise at 1 MHz frequency offset is −112.54 dBc/Hz at the frequency of 9.14 GHz. This circuit can be properly applied to mm-wave communication systems with the advantages of low cost and high integration density. (semiconductor integrated circuits)

  3. Unveiling the curtain of superposition: Recent gedanken and laboratory experiments

    Science.gov (United States)

    Cohen, E.; Elitzur, A. C.

    2017-08-01

    What is the true meaning of quantum superposition? Can a particle genuinely reside in several places simultaneously? These questions lie at the heart of this paper which presents an updated survey of some important stages in the evolution of the three-boxes paradox, as well as novel conclusions drawn from it. We begin with the original thought experiment of Aharonov and Vaidman, and proceed to its non-counterfactual version. The latter was recently realized by Okamoto and Takeuchi using a quantum router. We then outline a dynamic version of this experiment, where a particle is shown to “disappear” and “re-appear” during the time evolution of the system. This surprising prediction based on self-cancellation of weak values is directly related to our notion of Quantum Oblivion. Finally, we present the non-counterfactual version of this disappearing-reappearing experiment. Within the near future, this last version of the experiment is likely to be realized in the lab, proving the existence of exotic hitherto unknown forms of superposition. With the aid of Bell’s theorem, we prove the inherent nonlocality and nontemporality underlying such pre- and post-selected systems, rendering anomalous weak values ontologically real.

  4. First-Principles Definition and Measurement of Planetary Electromagnetic-Energy Budget

    Science.gov (United States)

    Mishchenko, Michael I.; Lock, James A.; Lacis, Andrew A.; Travis, Larry D.; Cairns, Brian

    2016-01-01

The imperative to quantify the Earth's electromagnetic-energy budget with an extremely high accuracy has been widely recognized but has never been formulated in the framework of fundamental physics. In this paper we give a first-principles definition of the planetary electromagnetic-energy budget using the Poynting-vector formalism and discuss how it can, in principle, be measured. Our derivation is based on an absolute minimum of theoretical assumptions, is free of outdated notions of phenomenological radiometry, and naturally leads to the conceptual formulation of an instrument called the double hemispherical cavity radiometer (DHCR). The practical measurement of the planetary energy budget would require flying a constellation of several dozen planet-orbiting satellites hosting identical well-calibrated DHCRs.

  5. The action uncertainty principle for continuous measurements

    Science.gov (United States)

    Mensky, Michael B.

    1996-02-01

The action uncertainty principle (AUP) for the specification of the most probable readouts of continuous quantum measurements is proved, formulated in different forms and analyzed (for nonlinear as well as linear systems). Continuous monitoring of an observable A(p,q,t) with resolution Δa(t) is considered. The influence of the measurement process on the evolution of the measured system (quantum measurement noise) is represented by an additional term δF(t) A(p,q,t) in the Hamiltonian, where the function δF (generalized fictitious force) is restricted by the AUP ∫|δF(t)| Δa(t) dt ≲ ℏ and arbitrary otherwise. Quantum-nondemolition (QND) measurements are analyzed with the help of the AUP. A simple uncertainty relation for continuous quantum measurements is derived. It states that the area of a certain band in the phase space should be of the order of ℏ. The width of the band depends on the measurement resolution, while its length is determined by the deviation of the system, due to the measurement, from classical behavior.

  6. The action uncertainty principle for continuous measurements

    International Nuclear Information System (INIS)

    Mensky, M.B.

    1996-01-01

The action uncertainty principle (AUP) for the specification of the most probable readouts of continuous quantum measurements is proved, formulated in different forms and analyzed (for nonlinear as well as linear systems). Continuous monitoring of an observable A(p,q,t) with resolution Δa(t) is considered. The influence of the measurement process on the evolution of the measured system (quantum measurement noise) is represented by an additional term δF(t) A(p,q,t) in the Hamiltonian, where the function δF (generalized fictitious force) is restricted by the AUP ∫|δF(t)| Δa(t) dt ≲ ℏ and arbitrary otherwise. Quantum-nondemolition (QND) measurements are analyzed with the help of the AUP. A simple uncertainty relation for continuous quantum measurements is derived. It states that the area of a certain band in the phase space should be of the order of ℏ. The width of the band depends on the measurement resolution, while its length is determined by the deviation of the system, due to the measurement, from classical behavior. (orig.)
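In standard notation, the AUP constraint described in both records above reads:

```latex
% Action uncertainty principle: the generalized fictitious force \delta F(t),
% entering the Hamiltonian as \delta F(t)\,A(p,q,t), is bounded by the
% measurement resolution \Delta a(t):
\int \lvert \delta F(t) \rvert \, \Delta a(t) \, \mathrm{d}t \;\lesssim\; \hbar
% The associated band in phase space correspondingly has area of order \hbar.
```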

  7. Optical information encryption based on incoherent superposition with the help of the QR code

    Science.gov (United States)

    Qin, Yi; Gong, Qiong

    2014-01-01

In this paper, a novel optical information encryption approach is proposed with the help of the QR code. The method is based on the concept of incoherent superposition, which we introduce for the first time. The information to be encrypted is first transformed into the corresponding QR code, and thereafter the QR code is encrypted analytically into two phase-only masks by use of the intensity superposition of two diffraction wave fields. The proposed method has several advantages over previous interference-based methods, such as a higher security level, better robustness against noise attack, and a more relaxed working condition. Numerical simulation results and results collected with an actual smartphone are shown to validate our proposal.

  8. Optical threshold secret sharing scheme based on basic vector operations and coherence superposition

    Science.gov (United States)

    Deng, Xiaopeng; Wen, Wei; Mi, Xianwu; Long, Xuewen

    2015-04-01

We propose, to our knowledge for the first time, a simple optical algorithm for secret image sharing with the (2,n) threshold scheme based on basic vector operations and coherence superposition. The secret image to be shared is first divided into n shadow images by use of basic vector operations. In the reconstruction stage, the secret image can be retrieved by recording the intensity of the coherent superposition of any two shadow images. Compared with published encryption techniques, which focus narrowly on information encryption, the proposed method can realize information encryption as well as secret sharing, which further ensures the safety and integrity of the secret information and prevents power from being kept centralized and abused. The feasibility and effectiveness of the proposed method are demonstrated by numerical results.
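As a rough numerical analogue of the coherence-superposition idea, a secret image can be split into two random-looking phase masks whose coherent-superposition intensity recovers it. This is a sketch of the (2,2) case only, not the paper's full (2,n) optical scheme; the phase-mask construction below is an assumption made for the illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
secret = rng.random((4, 4))           # stand-in secret image, values in [0, 1)

# Encode: one random phase mask, plus a second mask offset by a phase
# difference that carries the secret: |e^{i p1} + e^{i p2}|^2 / 4 = secret.
phi1 = rng.uniform(0, 2 * np.pi, secret.shape)   # random-looking shadow 1
dphi = np.arccos(2.0 * secret - 1.0)             # phase difference encoding
phi2 = phi1 + dphi                               # random-looking shadow 2

# Decode: intensity of the coherent superposition of the two phase masks
recon = np.abs(np.exp(1j * phi1) + np.exp(1j * phi2)) ** 2 / 4.0
print(np.allclose(recon, secret))  # True
```

Each shadow alone is a uniformly random phase field and reveals nothing; only the interference term of the superposition, 2 + 2 cos(Δφ), exposes the secret.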

  9. Transient change in the shape of premixed burner flame with the superposition of pulsed dielectric barrier discharge

    OpenAIRE

    Zaima, Kazunori; Sasaki, Koichi

    2016-01-01

    We investigated the transient phenomena in a premixed burner flame with the superposition of a pulsed dielectric barrier discharge (DBD). The length of the flame was shortened by the superposition of DBD, indicating the activation of combustion chemical reactions with the help of the plasma. In addition, we observed the modulation of the top position of the unburned gas region and the formations of local minimums in the axial distribution of the optical emission intensity of OH. These experim...

  10. Automatic superposition of drug molecules based on their common receptor site

    Science.gov (United States)

    Kato, Yuichi; Inoue, Atsushi; Yamada, Miho; Tomioka, Nobuo; Itai, Akiko

    1992-10-01

We have previously developed a new rational method for superposing molecules in terms of submolecular physical and chemical properties, rather than in terms of atom positions or chemical structures as has been done in conventional methods. The program was originally developed for interactive use on a three-dimensional graphic display, providing goodness-of-fit indices on molecular shape, hydrogen bonds, electrostatic interactions and others. Here, we report a new unbiased searching method for the best superposition of molecules, covering all superposing modes and conformational freedom, as an additional function of the program. The function is based on a novel least-squares method which superposes the expected positions and orientations of hydrogen bonding partners in the receptor that are deduced from both molecules. The method not only gives reliability and reproducibility to the result of the superposition, but also allows us to save labor and time. It is demonstrated that this method is very efficient for finding the correct superposing mode in systems where hydrogen bonds play important roles.
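The least-squares superposition step that such programs build on can be illustrated with the standard Kabsch algorithm (a generic sketch, not the paper's hydrogen-bond-based method; the point sets below are fabricated for the example):

```python
import numpy as np

def kabsch(P, Q):
    """Least-squares rotation R aligning centered point set P onto Q (Nx3)."""
    H = P.T @ Q                       # covariance of the two point clouds
    U, S, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])        # guard against improper rotations
    return Vt.T @ D @ U.T

rng = np.random.default_rng(2)
Q = rng.standard_normal((5, 3))
Q -= Q.mean(axis=0)                   # center the reference points

# Fabricate P by rotating Q with a known rotation about the z-axis
theta = 0.7
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
P = Q @ R_true.T                      # rows p_i = R_true q_i

R = kabsch(P, Q)                      # recovers R_true^T
rmsd = np.sqrt(np.mean(np.sum((P @ R.T - Q) ** 2, axis=1)))
print(round(rmsd, 6))  # 0.0
```

The RMSD after superposition is zero up to floating-point error, confirming the recovered rotation undoes the fabricated one.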

  11. The four principles: can they be measured and do they predict ethical decision making?

    Science.gov (United States)

    Page, Katie

    2012-05-20

The four principles of Beauchamp and Childress--autonomy, non-maleficence, beneficence and justice--have been extremely influential in the field of medical ethics, and are fundamental for understanding the current approach to ethical assessment in health care. This study tests whether these principles can be quantitatively measured on an individual level, and subsequently whether they are used in the decision-making process when individuals are faced with ethical dilemmas. The Analytic Hierarchy Process was used as a tool for the measurement of the principles. Four scenarios, which involved conflicts between the medical ethical principles, were presented to participants, who then made judgments about the ethicality of the action in the scenario and their intentions to act in the same manner if they were in the situation. Individual preferences for these medical ethical principles can be measured using the Analytic Hierarchy Process. This technique provides a useful tool with which to highlight individual medical ethical values. On average, individuals have a significant preference for non-maleficence over the other principles; however, and perhaps counter-intuitively, this preference does not seem to relate to applied ethical judgements in specific ethical dilemmas. People state that they value these medical ethical principles but they do not actually seem to use them directly in the decision-making process. The reasons for this are explained through the lack of a behavioural model to account for the relevant situational factors not captured by the principles. The limitations of the principles in predicting ethical decision making are discussed.
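The priority-weight computation at the core of the Analytic Hierarchy Process can be sketched as the principal eigenvector of a reciprocal pairwise-comparison matrix. The 4x4 matrix below is illustrative (chosen so that non-maleficence dominates, echoing the study's finding), not data from the study itself.

```python
import numpy as np

labels = ["autonomy", "non-maleficence", "beneficence", "justice"]

# Reciprocal pairwise-comparison matrix: A[i, j] is how strongly
# principle i is preferred over principle j (Saaty-style scale).
A = np.array([
    [1.0, 1/3, 1.0, 2.0],
    [3.0, 1.0, 3.0, 4.0],
    [1.0, 1/3, 1.0, 2.0],
    [1/2, 1/4, 1/2, 1.0],
])

# AHP priorities: normalized principal eigenvector of A
vals, vecs = np.linalg.eig(A)
k = np.argmax(vals.real)              # index of the principal eigenvalue
w = np.abs(vecs[:, k].real)
w /= w.sum()                          # normalize so the weights sum to 1
for name, weight in zip(labels, w):
    print(f"{name}: {weight:.3f}")
```

For a perfectly consistent matrix the principal eigenvalue equals the matrix dimension; the gap between them is what AHP uses as its consistency index.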

  12. The four principles: Can they be measured and do they predict ethical decision making?

    Directory of Open Access Journals (Sweden)

    Page Katie

    2012-05-01

    Full Text Available Abstract Background The four principles of Beauchamp and Childress - autonomy, non-maleficence, beneficence and justice - have been extremely influential in the field of medical ethics, and are fundamental for understanding the current approach to ethical assessment in health care. This study tests whether these principles can be quantitatively measured on an individual level, and then subsequently if they are used in the decision making process when individuals are faced with ethical dilemmas. Methods The Analytic Hierarchy Process was used as a tool for the measurement of the principles. Four scenarios, which involved conflicts between the medical ethical principles, were presented to participants who then made judgments about the ethicality of the action in the scenario, and their intentions to act in the same manner if they were in the situation. Results Individual preferences for these medical ethical principles can be measured using the Analytic Hierarchy Process. This technique provides a useful tool in which to highlight individual medical ethical values. On average, individuals have a significant preference for non-maleficence over the other principles, however, and perhaps counter-intuitively, this preference does not seem to relate to applied ethical judgements in specific ethical dilemmas. Conclusions People state they value these medical ethical principles but they do not actually seem to use them directly in the decision making process. The reasons for this are explained through the lack of a behavioural model to account for the relevant situational factors not captured by the principles. The limitations of the principles in predicting ethical decision making are discussed.

  13. The four principles: Can they be measured and do they predict ethical decision making?

    Science.gov (United States)

    2012-01-01

    Background The four principles of Beauchamp and Childress - autonomy, non-maleficence, beneficence and justice - have been extremely influential in the field of medical ethics, and are fundamental for understanding the current approach to ethical assessment in health care. This study tests whether these principles can be quantitatively measured on an individual level, and then subsequently if they are used in the decision making process when individuals are faced with ethical dilemmas. Methods The Analytic Hierarchy Process was used as a tool for the measurement of the principles. Four scenarios, which involved conflicts between the medical ethical principles, were presented to participants who then made judgments about the ethicality of the action in the scenario, and their intentions to act in the same manner if they were in the situation. Results Individual preferences for these medical ethical principles can be measured using the Analytic Hierarchy Process. This technique provides a useful tool in which to highlight individual medical ethical values. On average, individuals have a significant preference for non-maleficence over the other principles, however, and perhaps counter-intuitively, this preference does not seem to relate to applied ethical judgements in specific ethical dilemmas. Conclusions People state they value these medical ethical principles but they do not actually seem to use them directly in the decision making process. The reasons for this are explained through the lack of a behavioural model to account for the relevant situational factors not captured by the principles. The limitations of the principles in predicting ethical decision making are discussed. PMID:22606995

  14. Superpositions of higher-order bessel beams and nondiffracting speckle fields - (SAIP 2009)

    CSIR Research Space (South Africa)

    Dudley, Angela L

    2009-07-01

    Full Text Available speckle fields. The paper reports on illuminating a ring slit aperture with light which has an azimuthal phase dependence, such that the field produced is a superposition of two higher-order Bessel beams. In the case that the phase dependence of the light...

  15. An acute and highly contrast-sensitive superposition eye: The diurnal owlfly Libelloides macaronius

    NARCIS (Netherlands)

    Belušič, Gregor; Pirih, Primož; Stavenga, Doekele G.

    The owlfly Libelloides macaronius (Insecta: Neuroptera) has large bipartite eyes of the superposition type. The spatial resolution and sensitivity of the photoreceptor array in the dorsofrontal eye part was studied with optical and electrophysiological methods. Using structured illumination

  16. Variational principle for the Pareto power law.

    Science.gov (United States)

    Chakraborti, Anirban; Patriarca, Marco

    2009-11-27

    A mechanism is proposed for the appearance of power-law distributions in various complex systems. It is shown that in a conservative mechanical system composed of subsystems with different numbers of degrees of freedom a robust power-law tail can appear in the equilibrium distribution of energy as a result of certain superpositions of the canonical equilibrium energy densities of the subsystems. The derivation only uses a variational principle based on the Boltzmann entropy, without assumptions outside the framework of canonical equilibrium statistical mechanics. Two examples are discussed, free diffusion on a complex network and a kinetic model of wealth exchange. The mechanism is illustrated in the general case through an exactly solvable mechanical model of a dimensionally heterogeneous system.

  17. Variability of residual stresses and superposition effect in multipass grinding of high-carbon high-chromium steel

    Science.gov (United States)

    Karabelchtchikova, Olga; Rivero, Iris V.

    2005-02-01

The distribution of residual stresses (RS) and surface integrity generated in heat treatment and subsequent multipass grinding was investigated in this experimental study to examine the sources of variability and the nature of the interactions of the experimental factors. A nested experimental design was implemented (a) to compare the sources of RS variability, (b) to examine the RS distribution and tensile-peak location due to the experimental factors, and (c) to analyze the superposition relationship in the RS distribution due to the multipass grinding technique. To characterize the material responses, several techniques were used, including microstructural analysis, hardness-toughness and roughness examinations, and retained-austenite and RS measurements using x-ray diffraction. The causality of the RS was explained through the strong correlation between the surface integrity characteristics and the RS patterns. The main sources of variation were the depth of the RS distribution and the multipass grinding technique. The grinding effect on the RS was statistically significant; however, it was mostly predetermined by the preexisting RS induced in heat treatment. Regardless of the preceding treatments, the multipass grinding technique exhibited similar RS patterns, which suggests the existence of a superposition relationship and orthogonal memory between the passes of the grinding operation.

  18. On Kolmogorov's superpositions and Boolean functions

    Energy Technology Data Exchange (ETDEWEB)

    Beiu, V.

    1998-12-31

The paper overviews results dealing with the approximation capabilities of neural networks, as well as bounds on the size of threshold gate circuits. Based on an explicit numerical (i.e., constructive) algorithm for Kolmogorov's superpositions, it is shown that for obtaining minimum-size neural networks implementing any Boolean function, the activation function of the neurons is the identity function. Because classical AND-OR implementations, as well as threshold gate implementations, require exponential size in the worst case, it follows that size-optimal solutions for implementing arbitrary Boolean functions require analog circuitry. The paper ends with conclusions and several comments on the required precision.

  19. Nuclear grade cable thermal life model by time temperature superposition algorithm based on Matlab GUI

    International Nuclear Information System (INIS)

    Lu Yanyun; Gu Shenjie; Lou Tianyang

    2014-01-01

Background: As nuclear grade cable must endure a harsh environment within its design life, it is critical to predict cable thermal life accurately, since thermal aging is one of the dominant aging mechanisms. Purpose: Using the time temperature superposition (TTS) method, the aim is to construct a nuclear grade cable thermal life model, predict cable residual life and develop a life-model interactive interface under Matlab GUI. Methods: According to TTS, the nuclear grade cable thermal life model can be constructed by shifting data groups at various temperatures to a preset reference temperature with a translation factor determined by nonlinear programming optimization. The interactive interface of the cable thermal life model developed under Matlab GUI consists of a superposition mode and a standard mode, which include features such as optimization of the translation factor, calculation of the activation energy, construction of the thermal aging curve and analysis of the aging mechanism. Results: Comparison of the calculated results shows that the TTS method has better accuracy than the standard method. Furthermore, the confidence level of the nuclear grade cable thermal life obtained with TTS is higher than that obtained with the standard method. Conclusion: The results show that the TTS methodology is applicable to thermal life prediction of nuclear grade cable. The interactive interface under Matlab GUI achieves the anticipated functionality. (authors)
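The translation-factor idea behind TTS can be sketched with an Arrhenius shift: times measured at elevated aging temperatures are mapped onto a reference temperature. This is a hedged illustration; the activation energy, reference temperature and aging data below are assumptions, not the paper's cable measurements.

```python
import math

R = 8.314          # gas constant, J/(mol K)
Ea = 100e3         # assumed activation energy, J/mol
T_ref = 363.15     # assumed reference temperature, K (90 degC)

def shift_factor(T):
    """Arrhenius translation factor a_T: t_at_T_ref = a_T * t_measured_at_T."""
    return math.exp(Ea / R * (1.0 / T_ref - 1.0 / T))

# Illustrative times-to-endpoint (hours) at accelerated aging temperatures (K)
data = {403.15: 500.0, 393.15: 1200.0, 383.15: 3000.0}

# Shift every data point onto the reference temperature (master curve)
master = {T: shift_factor(T) * t for T, t in data.items()}
for T, t in sorted(master.items()):
    print(f"{T:.2f} K -> {t:.0f} h at T_ref")
```

Because the aging temperatures lie above T_ref, each shift factor exceeds 1, so the shifted times extrapolate the short accelerated tests to the longer life expected at the reference temperature.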

  20. Authentication Protocol using Quantum Superposition States

    Energy Technology Data Exchange (ETDEWEB)

    Kanamori, Yoshito [University of Alaska; Yoo, Seong-Moo [University of Alabama, Huntsville; Gregory, Don A. [University of Alabama, Huntsville; Sheldon, Frederick T [ORNL

    2009-01-01

When it became known that quantum computers could break the RSA (named for its creators - Rivest, Shamir, and Adleman) encryption algorithm in polynomial time, quantum cryptography began to be actively studied. Other classical cryptographic algorithms are secure only when malicious users do not have sufficient computational power to break the security within a practical amount of time. Recently, many quantum authentication protocols sharing quantum entangled particles between communicators have been proposed, providing unconditional security. An issue caused by sharing quantum entangled particles is that it may not be simple to apply these protocols to authenticate a specific user in a group of many users. An authentication protocol using quantum superposition states instead of quantum entangled particles is proposed. The random number shared between a sender and a receiver can be used for classical encryption after the authentication has succeeded. The proposed protocol can be implemented with the current technologies we introduce in this paper.

  1. Integral superposition of paraxial Gaussian beams in inhomogeneous anisotropic layered structures in Cartesian coordinates

    Czech Academy of Sciences Publication Activity Database

    Červený, V.; Pšenčík, Ivan

    2015-01-01

Roč. 25, - (2015), s. 109-155 ISSN 2336-3827 Institutional support: RVO:67985530 Keywords: integral superposition of paraxial Gaussian beams * inhomogeneous anisotropic media * S waves in weakly anisotropic media Subject RIV: DC - Seismology, Volcanology, Earth Structure

  2. Evolution of superpositions of quantum states through a level crossing

    International Nuclear Information System (INIS)

    Torosov, B. T.; Vitanov, N. V.

    2011-01-01

    The Landau-Zener-Stueckelberg-Majorana (LZSM) model is widely used for estimating transition probabilities in the presence of crossing energy levels in quantum physics. This model, however, makes the unphysical assumption of an infinitely long constant interaction, which introduces a divergent phase in the propagator. This divergence remains hidden when estimating output probabilities for a single input state insofar as the divergent phase cancels out. In this paper we show that, because of this divergent phase, the LZSM model is inadequate to describe the evolution of pure or mixed superposition states across a level crossing. The LZSM model can be used only if the system is initially in a single state or in a completely mixed superposition state. To this end, we show that the more realistic Demkov-Kunike model, which assumes a hyperbolic-tangent level crossing and a hyperbolic-secant interaction envelope, is free of divergences and is a much more adequate tool for describing the evolution through a level crossing for an arbitrary input state. For multiple crossing energies which are reducible to one or more effective two-state systems (e.g., by the Majorana and Morris-Shore decompositions), similar conclusions apply: the LZSM model does not produce definite values of the populations and the coherences, and one should use the Demkov-Kunike model instead.

  3. Superposition of Planckian spectra and the distortions of the cosmic microwave background radiation

    International Nuclear Information System (INIS)

    Alexanian, M.

    1982-01-01

A fit of the spectrum of the cosmic microwave background radiation (CMB) by means of a positive linear superposition of Planckian spectra implies an upper bound on the photon spectrum. The observed spectrum of the CMB gives a weighting function with a normalization greater than unity
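A positive linear superposition of Planckian spectra, as considered above, is straightforward to evaluate numerically. The temperatures and weights below are arbitrary examples chosen near the CMB temperature, not a fit to CMB data.

```python
import numpy as np

h = 6.62607015e-34   # Planck constant, J s
c = 2.99792458e8     # speed of light, m/s
kB = 1.380649e-23    # Boltzmann constant, J/K

def planck(nu, T):
    """Planck spectral radiance B_nu(T) in W m^-2 Hz^-1 sr^-1."""
    x = h * nu / (kB * T)
    return 2.0 * h * nu**3 / c**2 / np.expm1(x)

nu = np.linspace(1e10, 6e11, 200)             # 10 GHz .. 600 GHz

# Positive weights summing to 1, one per blackbody temperature (K)
weights = {2.6: 0.5, 2.725: 0.3, 2.8: 0.2}
mixed = sum(w * planck(nu, T) for T, w in weights.items())

# Since B_nu(T) grows with T at fixed nu, the superposition is bounded
# above by the hottest component's spectrum
print(bool(np.all(mixed <= planck(nu, max(weights)))))  # True
```

The same bounding argument is what turns a positive superposition fit into an upper bound on the photon spectrum.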

  4. THE DEVELOPMENT OF AN INSTRUMENT FOR MEASURING THE UNDERSTANDING OF PROFIT-MAXIMIZING PRINCIPLES.

    Science.gov (United States)

    MCCORMICK, FLOYD G.

The purpose of the study was to develop an instrument for measuring the understanding of profit-maximizing principles in farm management, with implications for vocational agriculture. Principles were identified from literature selected by agricultural economists. Forty-five multiple-choice questions were refined on the basis of the results of three pretests and…

  5. A numerical dressing method for the nonlinear superposition of solutions of the KdV equation

    International Nuclear Information System (INIS)

    Trogdon, Thomas; Deconinck, Bernard

    2014-01-01

    In this paper we present the unification of two existing numerical methods for the construction of solutions of the Korteweg–de Vries (KdV) equation. The first method is used to solve the Cauchy initial-value problem on the line for rapidly decaying initial data. The second method is used to compute finite-genus solutions of the KdV equation. The combination of these numerical methods allows for the computation of exact solutions that are asymptotically (quasi-)periodic finite-gap solutions and are a nonlinear superposition of dispersive, soliton and (quasi-)periodic solutions in the finite (x, t)-plane. Such solutions are referred to as superposition solutions. We compute these solutions accurately for all values of x and t. (paper)

  6. Polyphony: superposition independent methods for ensemble-based drug discovery.

    Science.gov (United States)

    Pitt, William R; Montalvão, Rinaldo W; Blundell, Tom L

    2014-09-30

Structure-based drug design is an iterative process, following cycles of structural biology, computer-aided design, synthetic chemistry and bioassay. In favorable circumstances, this process can yield hundreds of protein-ligand crystal structures. In addition, molecular dynamics simulations are increasingly being used to further explore the conformational landscape of these complexes. Currently, methods capable of analyzing ensembles of crystal structures and MD trajectories are limited and usually rely upon least-squares superposition of coordinates. Novel methodologies are described for the analysis of multiple structures of a protein. Statistical approaches that rely upon residue equivalence, but not superposition, are developed. Tasks that can be performed include the identification of hinge regions, allosteric conformational changes and transient binding sites. The approaches are tested on crystal structures of CDK2 and other CMGC protein kinases and a simulation of p38α. Known interaction-conformational-change relationships are highlighted, and new ones are revealed. A transient but druggable allosteric pocket in CDK2 is predicted to occur under the CMGC insert. Furthermore, an evolutionarily conserved conformational link from the location of this pocket, via the αEF-αF loop, to phosphorylation sites on the activation loop is discovered. New methodologies are described and validated for the superposition-independent conformational analysis of large collections of structures or simulation snapshots of the same protein. The methodologies are encoded in a Python package called Polyphony, which is released as open source to accompany this paper [http://wrpitt.bitbucket.org/polyphony/].

  7. The Features of Moessbauer Spectra of Hemoglobins: Approximation by Superposition of Quadrupole Doublets or by Quadrupole Splitting Distribution?

    International Nuclear Information System (INIS)

    Oshtrakh, M. I.; Semionkin, V. A.

    2004-01-01

Moessbauer spectra of hemoglobins show characteristic features at liquid-nitrogen temperature: a non-Lorentzian asymmetric line shape for oxyhemoglobins and a symmetric Lorentzian line shape for deoxyhemoglobins. A comparison of approximations of the hemoglobin Moessbauer spectra by a superposition of two quadrupole doublets and by a distribution of the quadrupole splitting demonstrates that the superposition of two quadrupole doublets is more reliable and may reflect the non-equivalent iron electronic structure and stereochemistry in the α- and β-subunits of hemoglobin tetramers.

  8. Nucleus-Nucleus Collision as Superposition of Nucleon-Nucleus Collisions

    International Nuclear Information System (INIS)

    Orlova, G.I.; Adamovich, M.I.; Aggarwal, M.M.; Alexandrov, Y.A.; Andreeva, N.P.; Badyal, S.K.; Basova, E.S.; Bhalla, K.B.; Bhasin, A.; Bhatia, V.S.; Bradnova, V.; Bubnov, V.I.; Cai, X.; Chasnikov, I.Y.; Chen, G.M.; Chernova, L.P.; Chernyavsky, M.M.; Dhamija, S.; Chenawi, K.El; Felea, D.; Feng, S.Q.; Gaitinov, A.S.; Ganssauge, E.R.; Garpman, S.; Gerassimov, S.G.; Gheata, A.; Gheata, M.; Grote, J.; Gulamov, K.G.; Gupta, S.K.; Gupta, V.K.; Henjes, U.; Jakobsson, B.; Kanygina, E.K.; Karabova, M.; Kharlamov, S.P.; Kovalenko, A.D.; Krasnov, S.A.; Kumar, V.; Larionova, V.G.; Li, Y.X.; Liu, L.S.; Lokanathan, S.; Lord, J.J.; Lukicheva, N.S.; Lu, Y.; Luo, S.B.; Mangotra, L.K.; Manhas, I.; Mittra, I.S.; Musaeva, A.K.; Nasyrov, S.Z.; Navotny, V.S.; Nystrand, J.; Otterlund, I.; Peresadko, N.G.; Qian, W.Y.; Qin, Y.M.; Raniwala, R.; Rao, N.K.; Roeper, M.; Rusakova, V.V.; Saidkhanov, N.; Salmanova, N.A.; Seitimbetov, A.M.; Sethi, R.; Singh, B.; Skelding, D.; Soderstrem, K.; Stenlund, E.; Svechnikova, L.N.; Svensson, T.; Tawfik, A.M.; Tothova, M.; Tretyakova, M.I.; Trofimova, T.P.; Tuleeva, U.I.; Vashisht, Vani; Vokal, S.; Vrlakova, J.; Wang, H.Q.; Wang, X.R.; Weng, Z.Q.; Wilkes, R.J.; Yang, C.B.; Yin, Z.B.; Yu, L.Z.; Zhang, D.H.; Zheng, P.Y.; Zhokhova, S.I.; Zhou, D.C.

    1999-01-01

    Angular distributions of charged particles produced in ^16O and ^32S collisions with nuclear track emulsion were studied at momenta of 4.5 and 200 A GeV/c. Comparison with the angular distributions of charged particles produced in proton-nucleus collisions at the same momentum leads to the conclusion that the angular distributions in nucleus-nucleus collisions can be seen as a superposition of the angular distributions in nucleon-nucleus collisions taken at the same impact parameter b_NA, that is, the mean impact parameter between the participating projectile nucleons and the center of the target nucleus.

  9. Nucleus-Nucleus Collision as Superposition of Nucleon-Nucleus Collisions

    Energy Technology Data Exchange (ETDEWEB)

    Orlova, G I; Adamovich, M I; Aggarwal, M M; Alexandrov, Y A; Andreeva, N P; Badyal, S K; Basova, E S; Bhalla, K B; Bhasin, A; Bhatia, V S; Bradnova, V; Bubnov, V I; Cai, X; Chasnikov, I Y; Chen, G M; Chernova, L P; Chernyavsky, M M; Dhamija, S; Chenawi, K El; Felea, D; Feng, S Q; Gaitinov, A S; Ganssauge, E R; Garpman, S; Gerassimov, S G; Gheata, A; Gheata, M; Grote, J; Gulamov, K G; Gupta, S K; Gupta, V K; Henjes, U; Jakobsson, B; Kanygina, E K; Karabova, M; Kharlamov, S P; Kovalenko, A D; Krasnov, S A; Kumar, V; Larionova, V G; Li, Y X; Liu, L S; Lokanathan, S; Lord, J J; Lukicheva, N S; Lu, Y; Luo, S B; Mangotra, L K; Manhas, I; Mittra, I S; Musaeva, A K; Nasyrov, S Z; Navotny, V S; Nystrand, J; Otterlund, I; Peresadko, N G; Qian, W Y; Qin, Y M; Raniwala, R; Rao, N K; Roeper, M; Rusakova, V V; Saidkhanov, N; Salmanova, N A; Seitimbetov, A M; Sethi, R; Singh, B; Skelding, D; Soderstrem, K; Stenlund, E; Svechnikova, L N; Svensson, T; Tawfik, A M; Tothova, M; Tretyakova, M I; Trofimova, T P; Tuleeva, U I; Vashisht, Vani; Vokal, S; Vrlakova, J; Wang, H Q; Wang, X R; Weng, Z Q; Wilkes, R J; Yang, C B; Yin, Z B; Yu, L Z; Zhang, D H; Zheng, P Y; Zhokhova, S I; Zhou, D C

    1999-03-01

    Angular distributions of charged particles produced in ^16O and ^32S collisions with nuclear track emulsion were studied at momenta of 4.5 and 200 A GeV/c. Comparison with the angular distributions of charged particles produced in proton-nucleus collisions at the same momentum leads to the conclusion that the angular distributions in nucleus-nucleus collisions can be seen as a superposition of the angular distributions in nucleon-nucleus collisions taken at the same impact parameter b_NA, that is, the mean impact parameter between the participating projectile nucleons and the center of the target nucleus.

  10. Nucleus-nucleus collision as superposition of nucleon-nucleus collisions

    International Nuclear Information System (INIS)

    Orlova, G.I.; Adamovich, M.I.; Aggarwal, M.M.

    1999-01-01

    Angular distributions of charged particles produced in ^16O and ^32S collisions with nuclear track emulsion were studied at momenta of 4.5 and 200 A GeV/c. Comparison with the angular distributions of charged particles produced in proton-nucleus collisions at the same momentum leads to the conclusion that the angular distributions in nucleus-nucleus collisions can be seen as a superposition of the angular distributions in nucleon-nucleus collisions taken at the same impact parameter b_NA, that is, the mean impact parameter between the participating projectile nucleons and the center of the target nucleus. (orig.)

  11. A principle for the noninvasive measurement of steady-state heat transfer parameters in living tissues

    Directory of Open Access Journals (Sweden)

    S. Yu. Makarov

    2014-01-01

    Measuring the parameters of biological tissues (including in vivo) is of great importance for medical diagnostics. For example, the value of the blood perfusion parameter is associated with the state of the blood microcirculation system, whose functioning affects the tissues of almost all organs. This work describes a previously proposed principle [1] in generalized terms. The principle is intended for noninvasive measurement of the parameters of stationary heat transfer in biological tissues. The results of some experiments (physical and numerical) are also presented. For noninvasive measurement of thermophysical parameters, a number of techniques have been developed that use a non-stationary thermal process in biological tissue [2][3]. These techniques, however, require collecting a large amount of data to represent the time-dependent thermal signal, and subsequent processing with specialized algorithms is needed for optimal selection of the parameters. The goal of this research is to develop an alternative approach that uses a stationary thermal process for noninvasive measurement of the parameters of stationary heat transfer in living tissues. A general principle can be formulated for the measurement methods based on this approach. Namely, the variations (changes) of two physical quantities are measured in the experiment at the transition from one thermal stationary state to another. One of these two quantities unambiguously determines the stationary thermal field in the biological tissue under the specified experimental conditions, while the other is unambiguously determined by the thermal field. The parameters can then be found from the numerical (or analytical) functional dependencies linking the measured variations, because the dependencies contain the unknown parameters. The dependencies are expressed by the formula: dq_i = f_i({p_j}, U_i) dU_i. Here dq_i is a variation of a physical quantity q which is unambiguously determined from the
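
A relation of the form dq_i = f_i({p_j}, U_i) dU_i can be inverted for the unknown parameters once the functional dependencies are known. A minimal sketch with a single unknown parameter and a purely hypothetical model function f(p, U) = p * U (not from the paper), solved by one-dimensional root finding:

```python
from scipy.optimize import brentq

def model_variation(p, U, dU):
    """Hypothetical dependency dq = f(p, U) * dU linking the variations.

    f(p, U) = p * U is purely illustrative; in practice this would be
    a numerically computed function of the stationary thermal field.
    """
    return p * U * dU

def recover_parameter(dq_measured, U, dU, p_lo=1e-6, p_hi=1e3):
    """Solve model_variation(p, U, dU) = dq_measured for the parameter p."""
    return brentq(lambda p: model_variation(p, U, dU) - dq_measured,
                  p_lo, p_hi)
```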

  12. Reciprocity principle for scattered fields from discontinuities in waveguides.

    Science.gov (United States)

    Pau, Annamaria; Capecchi, Danilo; Vestroni, Fabrizio

    2015-01-01

    This study investigates the scattering of guided waves from a discontinuity exploiting the principle of reciprocity in elastodynamics, written in a form that applies to waveguides. The coefficients of reflection and transmission for an arbitrary mode can be derived as long as the principle of reciprocity is satisfied at the discontinuity. Two elastodynamic states are related by the reciprocity. One is the response of the waveguide in the presence of the discontinuity, with the scattered fields expressed as a superposition of wave modes. The other state is the response of the waveguide in the absence of the discontinuity oscillating according to an arbitrary mode. The semi-analytical finite element method is applied to derive the needed dispersion relation and wave mode shapes. An application to a solid cylinder with a symmetric double change of cross-section is presented. This model is assumed to be representative of a damaged rod. The coefficients of reflection and transmission of longitudinal waves are investigated for selected values of notch length and varying depth. Copyright © 2014 Elsevier B.V. All rights reserved.
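
The paper's coefficients come from a semi-analytical finite element computation; as a much simpler hedged analogue, elementary one-dimensional rod theory gives reflection and transmission at a single cross-section change through the mechanical impedance Z = rho * c * A, and power conservation provides a reciprocity-style check:

```python
def step_coefficients(Z1, Z2):
    """Velocity-amplitude reflection and transmission coefficients for a
    longitudinal wave incident from impedance Z1 onto Z2 (1-D rod theory)."""
    R = (Z1 - Z2) / (Z1 + Z2)
    T = 2 * Z1 / (Z1 + Z2)
    return R, T

def power_balance(Z1, Z2):
    """Reflected plus transmitted power fraction; equals 1 for any step."""
    R, T = step_coefficients(Z1, Z2)
    return R ** 2 + (Z2 / Z1) * T ** 2
```

For a notch, the two steps (reduction then restoration of the cross-section) would be cascaded, which is where the full waveguide treatment of the paper becomes necessary.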

  13. JaSTA-2: Second version of the Java Superposition T-matrix Application

    Science.gov (United States)

    Halder, Prithish; Das, Himadri Sekhar

    2017-12-01

    In this article, we announce the development of a new version of the Java Superposition T-matrix App (JaSTA-2), to study the light scattering properties of porous aggregate particles. It has been developed using Netbeans 7.1.2, a Java integrated development environment (IDE). JaSTA uses the double-precision superposition T-matrix codes for multi-sphere clusters in random orientation developed by Mackowski and Mishchenko (1996). The new version offers two input options: (i) single wavelength and (ii) multiple wavelengths. The first option (which retains the applicability of the older version of JaSTA) calculates the light scattering properties of aggregates of spheres for a single wavelength at a time, whereas the second option can execute the code for multiple wavelengths in a single run. JaSTA-2 provides convenient and quicker data analysis, which can be used in diverse fields like planetary science, atmospheric physics, nanoscience, etc. This version of the software is developed for the Linux platform only, and it can be operated over all the cores of a processor using the multi-threading option.

  14. Teleportation of a Superposition of Three Orthogonal States of an Atom via Photon Interference

    Institute of Scientific and Technical Information of China (English)

    ZHENG Shi-Biao

    2006-01-01

    We propose a scheme to teleport a superposition of three states of an atom trapped in a cavity to a second atom trapped in a remote cavity. The scheme is based on the detection of photons leaking from the cavities after the atom-cavity interaction.

  15. Capacity-Approaching Superposition Coding for Optical Fiber Links

    DEFF Research Database (Denmark)

    Estaran Tolosa, Jose Manuel; Zibar, Darko; Tafur Monroy, Idelfonso

    2014-01-01

    We report on the first experimental demonstration of superposition coded modulation (SCM) for polarization-multiplexed coherent-detection optical fiber links. The proposed coded modulation scheme is combined with phase-shifted bit-to-symbol mapping (PSM) in order to achieve geometric and passive......-SCM) is employed in the framework of bit-interleaved coded modulation with iterative decoding (BICM-ID) for forward error correction. The fiber transmission system is characterized in terms of signal-to-noise ratio for back-to-back case and correlated with simulated results for ideal transmission over additive...... white Gaussian noise channel. Thereafter, successful demodulation and decoding after dispersion-unmanaged transmission over 240-km standard single mode fiber of dual-polarization 6-Gbaud 16-, 32- and 64-ary SCM-PSM is experimentally demonstrated....
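
The layering idea behind superposition coded modulation can be sketched generically (a two-layer amplitude example, not the paper's 16-, 32- or 64-ary SCM-PSM scheme): two BPSK layers with a 2:1 amplitude ratio superpose into a uniform 4-level constellation, and the receiver peels the layers off successively.

```python
import numpy as np

def scm_map(bits_base, bits_ref):
    """Superpose a strong (amplitude 2) and a weak (amplitude 1) BPSK layer.

    Bits are in {0, 1}; the summed symbol levels are {-3, -1, +1, +3}.
    """
    l1 = 2 * (2 * np.asarray(bits_base) - 1)   # strong (base) layer
    l0 = 2 * np.asarray(bits_ref) - 1          # weak (refinement) layer
    return l1 + l0

def scm_demap(symbols):
    """Successive decoding: detect the strong layer, subtract it, detect the weak."""
    s = np.asarray(symbols)
    bits_base = (s > 0).astype(int)
    residual = s - 2 * (2 * bits_base - 1)
    bits_ref = (residual > 0).astype(int)
    return bits_base, bits_ref
```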

  16. Double-contrast examination of the gastric antrum without Duodenal superposition

    International Nuclear Information System (INIS)

    Treugut, H.; Isper, J.

    1980-01-01

    Using a modified technique for double-contrast examination of the stomach, a study without superposition of the duodenum and jejunum on the distal stomach was possible in 75% of cases, compared to 36% with the usual method. In this technique, a small amount (50 ml) of barium suspension is given to the patient in the left decubitus position through a straw or gastric tube after antiperistaltic medication. There was no difference in the quality of mucosal coating compared to the technique using higher volumes of barium. (orig.) [de

  17. The role and production of polar/subtropical jet superpositions in two high-impact weather events over North America

    Science.gov (United States)

    Winters, Andrew C.

    Careful observational work has demonstrated that the tropopause is typically characterized by a three-step pole-to-equator structure, with each break between steps in the tropopause height associated with a jet stream. While the two jet streams, the polar and subtropical jets, typically occupy different latitude bands, their separation can occasionally vanish, resulting in a vertical superposition of the two jets. A cursory examination of a number of historical and recent high-impact weather events over North America and the North Atlantic indicates that superposed jets can be an important component of their evolution. Consequently, this dissertation examines two recent jet superposition cases, the 18--20 December 2009 Mid-Atlantic Blizzard and the 1--3 May 2010 Nashville Flood, in an effort (1) to determine the specific influence that a superposed jet can have on the development of a high-impact weather event and (2) to illuminate the processes that facilitated the production of a superposition in each case. An examination of these cases from a basic-state variable and PV inversion perspective demonstrates that elements of both the remote and local synoptic environment are important to consider while diagnosing the development of a jet superposition. Specifically, the process of jet superposition begins with the remote production of a cyclonic (anticyclonic) tropopause disturbance at high (low) latitudes. The cyclonic circulation typically originates at polar latitudes, while organized tropical convection can encourage the development of an anticyclonic circulation anomaly within the tropical upper-troposphere. The concurrent advection of both anomalies towards middle latitudes subsequently allows their individual circulations to laterally displace the location of the individual tropopause breaks. 
Once the two circulation anomalies position the polar and subtropical tropopause breaks in close proximity to one another, elements within the local environment, such as

  18. PL-1 program system for generalized Patterson superpositions. [PL1GEN, SYMPL1, and ALSPL1, in PL/1 for IBM 360/65 computer

    Energy Technology Data Exchange (ETDEWEB)

    Hubbard, C.R.; Babich, M.W.; Jacobson, R.A.

    1977-01-01

    A new system of three programs written in PL/1 can calculate symmetry and Patterson superposition maps for triclinic, monoclinic, and orthorhombic space groups, as well as any space group reducible to one of these three. These programs are based on a system of FORTRAN programs developed at Ames Laboratory, but are more general and have expanded utility, especially with regard to large unit cells. The program PL1GEN calculates a direct-access data set, SYMPL1 calculates a direct-access symmetry map, and ALSPL1 calculates a superposition map using one or multiple superpositions. A detailed description of the use of these programs, including symbolic program listings, is included. 2 tables.
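
The core superposition operation can be sketched in a few lines (a generic minimum-function superposition on a periodic grid, not the PL/1 implementation): shift the Patterson map by a trial interatomic vector, respecting the cell's periodicity, and take the pointwise minimum; multiple superpositions simply repeat the reduction.

```python
import numpy as np

def superposition_map(patterson, shift):
    """Minimum-function superposition: min(P(r), P(r - s)) on a periodic grid.

    patterson: 3-D array sampled over the unit cell.
    shift: trial interatomic vector in grid steps (three ints).
    """
    shifted = np.roll(patterson, shift=shift, axis=(0, 1, 2))
    return np.minimum(patterson, shifted)

def multiple_superposition(patterson, shifts):
    """Apply several superpositions in sequence, progressively sharpening the map."""
    out = patterson
    for s in shifts:
        out = superposition_map(out, s)
    return out
```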

  19. Theoretical aspects of the equivalence principle

    International Nuclear Information System (INIS)

    Damour, Thibault

    2012-01-01

    We review several theoretical aspects of the equivalence principle (EP). We emphasize the unsatisfactory fact that the EP maintains the absolute character of the coupling constants of physics, while general relativity and its generalizations (Kaluza–Klein, …, string theory) suggest that all absolute structures should be replaced by dynamical entities. We discuss the EP-violation phenomenology of dilaton-like models, which is likely to be dominated by the linear superposition of two effects: a signal proportional to the nuclear Coulomb energy, related to the variation of the fine-structure constant, and a signal proportional to the surface nuclear binding energy, related to the variation of the light quark masses. We recall various theoretical arguments (including a recently proposed anthropic argument) suggesting that the EP be violated at a small, but not unmeasurably small level. This motivates the need for improved tests of the EP. These tests are probing new territories in physics that are related to deep, and mysterious, issues in fundamental physics. (paper)

  20. New principle for measuring arterial blood oxygenation, enabling motion-robust remote monitoring

    OpenAIRE

    Mark van Gastel; Sander Stuijk; Gerard de Haan

    2016-01-01

    Finger-oximeters are ubiquitously used for patient monitoring in hospitals worldwide. Recently, remote measurement of arterial blood oxygenation (SpO2) with a camera has been demonstrated. Both contact and remote measurements, however, require the subject to remain static for accurate SpO2 values. This is due to the use of the common ratio-of-ratios measurement principle that measures the relative pulsatility at different wavelengths. Since the amplitudes are small, they are easily corrupted ...

  1. Transient change in the shape of premixed burner flame with the superposition of pulsed dielectric barrier discharge

    Science.gov (United States)

    Zaima, Kazunori; Sasaki, Koichi

    2016-08-01

    We investigated the transient phenomena in a premixed burner flame with the superposition of a pulsed dielectric barrier discharge (DBD). The length of the flame was shortened by the superposition of the DBD, indicating the activation of combustion chemical reactions with the help of the plasma. In addition, we observed modulation of the top position of the unburned gas region and the formation of local minima in the axial distribution of the optical emission intensity of OH. These experimental results reveal an oscillation of the rates of combustion chemical reactions in response to the activation by the pulsed DBD. The period of the oscillation was 0.18-0.2 ms, which can be understood as the eigenfrequency of the plasma-assisted combustion reaction system.

  2. The denoising of Monte Carlo dose distributions using convolution superposition calculations

    International Nuclear Information System (INIS)

    El Naqa, I; Cui, J; Lindsay, P; Olivera, G; Deasy, J O

    2007-01-01

    Monte Carlo (MC) dose calculations can be accurate but are also computationally intensive. In contrast, convolution superposition (CS) offers faster and smoother results but by making approximations. We investigated MC denoising techniques, which use available convolution superposition results and new noise filtering methods to guide and accelerate MC calculations. Two main approaches were developed to combine CS information with MC denoising. In the first approach, the denoising result is iteratively updated by adding the denoised residual difference between the result and the MC image. Multi-scale methods were used (wavelets or contourlets) for denoising the residual. The iterations are initialized by the CS data. In the second approach, we used a frequency splitting technique by quadrature filtering to combine low frequency components derived from MC simulations with high frequency components derived from CS components. The rationale is to take the scattering tails as well as dose levels in the high-dose region from the MC calculations, which presumably more accurately incorporates scatter; high-frequency details are taken from CS calculations. 3D Butterworth filters were used to design the quadrature filters. The methods were demonstrated using anonymized clinical lung and head and neck cases. The MC dose distributions were calculated by the open-source dose planning method MC code with varying noise levels. Our results indicate that the frequency-splitting technique for incorporating CS-guided MC denoising is promising in terms of computational efficiency and noise reduction. (note)
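
The frequency-splitting step can be sketched with NumPy alone (a hedged 2D illustration; a radially symmetric Butterworth pair on the FFT grid stands in for the paper's 3D quadrature filter design). Because the two filters are complementary (H + (1 - H) = 1), blending a dose distribution with itself returns it unchanged, which makes the construction easy to verify.

```python
import numpy as np

def butterworth_lowpass(shape, cutoff, order=3):
    """Radially symmetric Butterworth low-pass on the FFT frequency grid."""
    grids = np.meshgrid(*[np.fft.fftfreq(n) for n in shape], indexing="ij")
    r = np.sqrt(sum(g ** 2 for g in grids))
    return 1.0 / (1.0 + (r / cutoff) ** (2 * order))

def frequency_split_blend(mc_dose, cs_dose, cutoff=0.1, order=3):
    """Take low frequencies (dose levels, scatter tails) from the MC dose
    and high-frequency detail from the CS dose via complementary filters."""
    H = butterworth_lowpass(mc_dose.shape, cutoff, order)
    blended = H * np.fft.fftn(mc_dose) + (1.0 - H) * np.fft.fftn(cs_dose)
    return np.fft.ifftn(blended).real
```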

  3. Superposition of Stress Fields in Diametrically Compressed Cylinders

    Directory of Open Access Journals (Sweden)

    João Augusto de Lima Rocha

    The theoretical analysis of the Brazilian test is a classical plane-stress problem of elasticity theory, in which a vertical force is applied to a horizontal plane, the boundary of a semi-infinite medium. Under the hypothesis of a purely radial normal stress field, the results of that model are correct. Nevertheless, the superposition of three stress fields, two based on prior results and the third on a hydrostatic stress field, is incorrect. Indeed, this work shows that the Cauchy vectors (tractions) are non-vanishing on the parallel planes in which the two opposing vertical forces are applied. The aim of this work is to detail the construction of the theoretical model for the three stress fields used, with the objective of demonstrating the inconsistency often stated in the literature.

  4. Joint formation of dissimilar steels in pressure welding with superposition of ultrasonic oscillations

    Energy Technology Data Exchange (ETDEWEB)

    Surovtsev, A P; Golovanenko, S A; Sukhanov, V E; Kazantsev, V F

    1983-12-01

    Results are given of an investigation of the kinetics and quality of joints between carbon steel and steel 12Kh18N10T obtained by pressure welding with the superposition of ultrasonic oscillations at a frequency of 16.5-18.0 kHz. The effect of the ultrasonic oscillations on the development of physical contact between the welded surfaces, the formation of the microstructure, and the impact toughness of the joint is shown.

  5. Simulation Analysis of DC and Switching Impulse Superposition Circuit

    Science.gov (United States)

    Zhang, Chenmeng; Xie, Shijun; Zhang, Yu; Mao, Yuxiang

    2018-03-01

    Surge capacitors connected between the neutral bus and ground are subjected to superimposed DC and impulse voltages during operation in a converter station. This paper analyses a simulated aging circuit for surge capacitors using the PSCAD electromagnetic transient simulation software, including the effect of the DC voltage on the waveform produced by the impulse voltage generator and the effect of the coupling capacitor on the test voltage waveform. The results show that the DC voltage has little effect on the waveform of the output of the surge voltage generator, and that the value of the coupling capacitor has little effect on the voltage waveform across the sample. The simulations indicate that a combined DC and impulse aging test for surge capacitors is feasible.
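
The composite stress can be sketched with a standard double-exponential surrogate for the switching impulse superimposed on the DC bias (illustrative time constants approximating a 250/2500 microsecond wave shape; this is not the paper's PSCAD circuit):

```python
import numpy as np

def switching_impulse(t, v_peak, tau1=2500e-6, tau2=68e-6):
    """Double-exponential impulse, normalised so its sampled maximum is v_peak."""
    shape = np.exp(-t / tau1) - np.exp(-t / tau2)
    return v_peak * shape / shape.max()

def superimposed_voltage(t, v_dc, v_peak):
    """DC stress plus switching impulse, as applied across the surge capacitor."""
    return v_dc + switching_impulse(t, v_peak)
```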

  6. The measurement of principled morality by the Kohlberg Moral Dilemma Questionnaire.

    Science.gov (United States)

    Heilbrun, A B; Georges, M

    1990-01-01

    The four stages preceding the postconventional level in the Kohlberg (1958, 1971, 1976) system of moral development are described as involving moral judgments that conform to external conditions of punishment, reward, social expectation, and conformity to the law. No special level of self-control seems necessary to behave in keeping with these conditions of external reinforcement. In contrast, the two stages of postconventional (principled) morality involve defiance of majority opinion and defiance of the law--actions that would seem to require greater self-control. This study was concerned with whether postconventional moral reasoning, as measured by the Kohlberg Moral Dilemma Questionnaire (MDQ), can be associated with higher self-control. If so, prediction of principled moral behavior from the MDQ would be based not only on postconventional moral reasoning but would be bolstered by the necessary level of self-control as well. College students who came closest to postconventional moral reasoning showed better self-control than college students who were more conventional or preconventional in their moral judgments. These results support the validity of the MDQ for predicting principled moral behavior.

  7. Fundamental Safety Principles

    International Nuclear Information System (INIS)

    Abdelmalik, W.E.Y.

    2011-01-01

    This work presents a summary of the IAEA Safety Standards Series publication No. SF-1, entitled Fundamental Safety Principles, published in 2006. This publication states the fundamental safety objective and ten associated safety principles, and briefly describes their intent and purpose. Safety measures and security measures have in common the aim of protecting human life and health and the environment. The safety principles are: 1) Responsibility for safety, 2) Role of the government, 3) Leadership and management for safety, 4) Justification of facilities and activities, 5) Optimization of protection, 6) Limitation of risks to individuals, 7) Protection of present and future generations, 8) Prevention of accidents, 9) Emergency preparedness and response and 10) Protective action to reduce existing or unregulated radiation risks. The safety principles concern the security of facilities and activities to the extent that they apply to measures that contribute to both safety and security. Safety measures and security measures must be designed and implemented in an integrated manner so that security measures do not compromise safety and safety measures do not compromise security.

  8. Constructing petal modes from the coherent superposition of Laguerre-Gaussian modes

    Science.gov (United States)

    Naidoo, Darryl; Forbes, Andrew; Ait-Ameur, Kamel; Brunel, Marc

    2011-03-01

    An experimental approach to generating petal-like transverse modes, similar to those seen in Porro-prism resonators, has been successfully demonstrated. We hypothesize that the petal-like structures are generated from a coherent superposition of Laguerre-Gaussian modes of zero radial order and opposite azimuthal order. To verify this hypothesis, visual comparisons such as petal peak-to-peak diameter and the angle between adjacent petals are drawn between experimental data and simulated data. The beam quality factor of the petal-like transverse modes and an inner-product interaction are also compared experimentally with numerical results.
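
The hypothesized superposition is easy to check numerically (a hedged sketch with unnormalised LG modes of radial order p = 0): adding LG_{0,+l} and LG_{0,-l} gives a field proportional to cos(l*phi), i.e. 2l intensity petals around the ring.

```python
import numpy as np

def lg0l(r, phi, l, w0=1.0):
    """Unnormalised Laguerre-Gaussian mode, radial order p=0, azimuthal order l."""
    return ((np.sqrt(2) * r / w0) ** abs(l)
            * np.exp(-(r / w0) ** 2)
            * np.exp(1j * l * phi))

def petal_intensity(r, phi, l, w0=1.0):
    """Coherent superposition LG_{0,+l} + LG_{0,-l}: intensity ~ cos^2(l*phi)."""
    field = lg0l(r, phi, l, w0) + lg0l(r, phi, -l, w0)
    return np.abs(field) ** 2
```

For l = 3 the intensity vanishes at phi = pi/6 and the azimuthal profile shows exactly six petals.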

  9. Noise-based logic: Binary, multi-valued, or fuzzy, with optional superposition of logic states

    Energy Technology Data Exchange (ETDEWEB)

    Kish, Laszlo B. [Texas A and M University, Department of Electrical and Computer Engineering, College Station, TX 77843-3128 (United States)], E-mail: laszlo.kish@ece.tamu.edu

    2009-03-02

    A new type of deterministic (non-probabilistic) computer logic system inspired by the stochasticity of brain signals is shown. The distinct values are represented by independent stochastic processes: independent voltage (or current) noises. The orthogonality of these processes provides a natural way to construct binary or multi-valued logic circuitry with an arbitrary number N of logic values by using analog circuitry. Moreover, the logic values on a single wire can be made a (weighted) superposition of the N distinct logic values. Fuzzy logic is also naturally represented by a two-component superposition within the binary case (N=2). Error propagation and accumulation are suppressed. Other relevant advantages are reduced energy dissipation and leakage current problems, and robustness against circuit noise and background noises such as 1/f, Johnson, shot and crosstalk noise. Variability problems are also non-existent because the logic value is an AC signal. A similar logic system can be built with orthogonal sinusoidal signals (different frequency or orthogonal phase); however, that system suffers an extra 1/N-type slowdown compared to the noise-based logic system with increasing N, and it is less robust against time-delay effects than the noise-based counterpart.
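
The orthogonality-by-independence idea can be sketched as follows (a toy NumPy illustration, not the paper's analog circuitry): each logic value gets an independent Gaussian reference noise, a wire carries a weighted superposition of them, and correlating the wire against each reference recovers the weights.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_references(n_values, n_samples):
    """Independent unit-variance Gaussian noise processes, one per logic value."""
    return rng.normal(size=(n_values, n_samples))

def superpose(refs, weights):
    """Wire signal: weighted superposition of the reference noises."""
    return np.asarray(weights) @ refs

def detect(wire, refs):
    """Correlate the wire with each reference; independence makes the
    cross terms vanish for long records, leaving the weights."""
    return refs @ wire / refs.shape[1]
```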

  10. Noise-based logic: Binary, multi-valued, or fuzzy, with optional superposition of logic states

    International Nuclear Information System (INIS)

    Kish, Laszlo B.

    2009-01-01

    A new type of deterministic (non-probabilistic) computer logic system inspired by the stochasticity of brain signals is shown. The distinct values are represented by independent stochastic processes: independent voltage (or current) noises. The orthogonality of these processes provides a natural way to construct binary or multi-valued logic circuitry with an arbitrary number N of logic values by using analog circuitry. Moreover, the logic values on a single wire can be made a (weighted) superposition of the N distinct logic values. Fuzzy logic is also naturally represented by a two-component superposition within the binary case (N=2). Error propagation and accumulation are suppressed. Other relevant advantages are reduced energy dissipation and leakage current problems, and robustness against circuit noise and background noises such as 1/f, Johnson, shot and crosstalk noise. Variability problems are also non-existent because the logic value is an AC signal. A similar logic system can be built with orthogonal sinusoidal signals (different frequency or orthogonal phase); however, that system suffers an extra 1/N-type slowdown compared to the noise-based logic system with increasing N, and it is less robust against time-delay effects than the noise-based counterpart.

  11. Noise-based logic: Binary, multi-valued, or fuzzy, with optional superposition of logic states

    Science.gov (United States)

    Kish, Laszlo B.

    2009-03-01

    A new type of deterministic (non-probabilistic) computer logic system inspired by the stochasticity of brain signals is shown. The distinct values are represented by independent stochastic processes: independent voltage (or current) noises. The orthogonality of these processes provides a natural way to construct binary or multi-valued logic circuitry with an arbitrary number N of logic values by using analog circuitry. Moreover, the logic values on a single wire can be made a (weighted) superposition of the N distinct logic values. Fuzzy logic is also naturally represented by a two-component superposition within the binary case (N=2). Error propagation and accumulation are suppressed. Other relevant advantages are reduced energy dissipation and leakage current problems, and robustness against circuit noise and background noises such as 1/f, Johnson, shot and crosstalk noise. Variability problems are also non-existent because the logic value is an AC signal. A similar logic system can be built with orthogonal sinusoidal signals (different frequency or orthogonal phase); however, that system suffers an extra 1/N-type slowdown compared to the noise-based logic system with increasing N, and it is less robust against time-delay effects than the noise-based counterpart.

  12. Experimental generation and application of the superposition of higher-order Bessel beams

    CSIR Research Space (South Africa)

    Dudley, Angela L

    2009-07-01

    Conference presentation (slides). Presented at the 2009 South African Institute of Physics Annual Conference, University of KwaZulu-Natal, Durban, South Africa, 6-10 July 2009. The slides describe two methods for generating Bessel fields: (1) a ring slit aperture and (2) an axicon, with the ring-slit method adapted to produce superpositions of higher-order Bessel beams. Reference: J. Durnin, J.J. Miceli and J.H. Eberly, Phys. Rev. Lett. 58 1499.
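
The superposed field can be sketched numerically (a hedged transverse-plane illustration using scipy.special.jv, not the CSIR ring-slit experiment): for integer l, J_{-l}(x) = (-1)^l J_l(x), so summing Bessel beams of azimuthal order +l and -l yields a cos(l*phi) or sin(l*phi) modulation with 2l intensity lobes.

```python
import numpy as np
from scipy.special import jv  # Bessel function of the first kind

def bessel_superposition(r, phi, l, kr=1.0):
    """Coherent superposition of +l and -l order Bessel beams (transverse field)."""
    return (jv(l, kr * r) * np.exp(1j * l * phi)
            + jv(-l, kr * r) * np.exp(-1j * l * phi))

def intensity(r, phi, l, kr=1.0):
    """Transverse intensity pattern of the superposition."""
    return np.abs(bessel_superposition(r, phi, l, kr)) ** 2
```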

  13. A New Principle in Physics: the Principle 'Finiteness', and Some Consequences

    International Nuclear Information System (INIS)

    Sternlieb, Abraham

    2010-01-01

    In this paper I propose a new principle in physics: the principle of 'finiteness'. It stems from the definition of physics as a science that deals (among other things) with measurable dimensional physical quantities. Since measurement results, including their errors, are always finite, the principle of finiteness postulates that the mathematical formulation of 'legitimate' laws of physics should prevent exactly zero or infinite solutions. Some consequences of the principle of finiteness are discussed, in general, and then more specifically in the fields of special relativity, quantum mechanics, and quantum gravity. The consequences are derived independently of any other theory or principle in physics. I propose 'finiteness' as a postulate (like the constancy of the speed of light in vacuum, 'c'), as opposed to a notion whose validity has to be corroborated by, or derived theoretically or experimentally from other facts, theories, or principles.

  14. Violation of a Leggett–Garg inequality with ideal non-invasive measurements

    Science.gov (United States)

    Knee, George C.; Simmons, Stephanie; Gauger, Erik M.; Morton, John J.L.; Riemann, Helge; Abrosimov, Nikolai V.; Becker, Peter; Pohl, Hans-Joachim; Itoh, Kohei M.; Thewalt, Mike L.W.; Briggs, G. Andrew D.; Benjamin, Simon C.

    2012-01-01

    The quantum superposition principle states that an entity can exist in two different states simultaneously, counter to our 'classical' intuition. Is it possible to understand a given system's behaviour without such a concept? A test designed by Leggett and Garg can rule out this possibility. The test, originally intended for macroscopic objects, has been implemented in various systems. However, to date no experiment has employed the 'ideal negative result' measurements that are required for the most robust test. Here we introduce a general protocol for these special measurements using an ancillary system, which acts as a local measuring device but which need not be perfectly prepared. We report an experimental realization using spin-bearing phosphorus impurities in silicon. The results demonstrate the necessity of a non-classical picture for this class of microscopic system. Our procedure can be applied to systems of any size, whether individually controlled or in a spatial ensemble. PMID:22215081

  15. Field testing, comparison, and discussion of five aeolian sand transport measuring devices operating on different measuring principles

    NARCIS (Netherlands)

    Goossens, Dirk; Nolet, Corjan; Etyemezian, Vicken; Duarte-campos, Leonardo; Bakker, Gerben; Riksen, Michel

    2018-01-01

    Five types of sediment samplers designed to measure aeolian sand transport were tested during a wind erosion event on the Sand Motor, an area on the west coast of the Netherlands prone to severe wind erosion. Each of the samplers operates on a different principle. The MWAC (Modified Wilson And

  16. Adaptive phase measurements in linear optical quantum computation

    International Nuclear Information System (INIS)

    Ralph, T C; Lund, A P; Wiseman, H M

    2005-01-01

    Photon counting induces an effective non-linear optical phase shift in certain states derived by linear optics from single photons. Although this non-linearity is non-deterministic, it is sufficient in principle to allow scalable linear optics quantum computation (LOQC). The most obvious way to encode a qubit optically is as a superposition of the vacuum and a single photon in one mode, so-called 'single-rail' logic. Until now this approach was thought to be prohibitively expensive (in resources) compared to 'dual-rail' logic, where a qubit is stored by a photon across two modes. Here we attack this problem with real-time feedback control, which can realize a quantum-limited phase measurement on a single mode, as has been recently demonstrated experimentally. We show that with this added measurement resource, the resource requirements for single-rail LOQC are not substantially different from those of dual-rail LOQC. In particular, with adaptive phase measurements an arbitrary qubit state α|0⟩ + β|1⟩ can be prepared deterministically.

  17. NOTE: The denoising of Monte Carlo dose distributions using convolution superposition calculations

    Science.gov (United States)

    El Naqa, I.; Cui, J.; Lindsay, P.; Olivera, G.; Deasy, J. O.

    2007-09-01

    Monte Carlo (MC) dose calculations can be accurate but are also computationally intensive. In contrast, convolution superposition (CS) offers faster and smoother results but by making approximations. We investigated MC denoising techniques, which use available convolution superposition results and new noise filtering methods to guide and accelerate MC calculations. Two main approaches were developed to combine CS information with MC denoising. In the first approach, the denoising result is iteratively updated by adding the denoised residual difference between the result and the MC image. Multi-scale methods were used (wavelets or contourlets) for denoising the residual. The iterations are initialized by the CS data. In the second approach, we used a frequency splitting technique by quadrature filtering to combine low frequency components derived from MC simulations with high frequency components derived from CS components. The rationale is to take the scattering tails as well as dose levels in the high-dose region from the MC calculations, which presumably more accurately incorporates scatter; high-frequency details are taken from CS calculations. 3D Butterworth filters were used to design the quadrature filters. The methods were demonstrated using anonymized clinical lung and head and neck cases. The MC dose distributions were calculated by the open-source dose planning method MC code with varying noise levels. Our results indicate that the frequency-splitting technique for incorporating CS-guided MC denoising is promising in terms of computational efficiency and noise reduction.
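    The frequency-splitting step can be illustrated in one dimension. This is a schematic reconstruction, not the authors' code: the cutoff, filter order and toy "dose profiles" are our own choices, and a pair of power-complementary Butterworth-style responses stands in for the paper's 3D quadrature filters:

```python
import numpy as np

def butterworth_lowpass_power(freqs, cutoff, order=4):
    """Squared-magnitude Butterworth low-pass response on a frequency grid."""
    return 1.0 / (1.0 + (np.abs(freqs) / cutoff) ** (2 * order))

def frequency_split(mc, cs, cutoff=0.05, order=4):
    """Take low frequencies from `mc`, high frequencies from `cs`.
    The two filters are complementary in power: |H_lo|^2 + |H_hi|^2 = 1."""
    freqs = np.fft.fftfreq(len(mc))          # cycles per sample
    h_lo = np.sqrt(butterworth_lowpass_power(freqs, cutoff, order))
    h_hi = np.sqrt(1.0 - h_lo ** 2)
    spectrum = h_lo * np.fft.fft(mc) + h_hi * np.fft.fft(cs)
    return np.fft.ifft(spectrum).real

# Toy 1D profiles: MC is noisy but unbiased, CS is smooth but slightly biased.
x = np.linspace(0, 1, 512)
truth = np.exp(-((x - 0.5) / 0.15) ** 2)
rng = np.random.default_rng(1)
mc = truth + 0.05 * rng.standard_normal(x.size)
cs = 0.97 * truth
combined = frequency_split(mc, cs)
print(np.abs(combined - truth).mean())   # below the error of mc alone
```

    The combined profile keeps the (unbiased) low-frequency dose levels of the MC data while its broadband noise is suppressed in the band handed over to the smooth CS data.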

  18. Principles of safety measures of sports events organizers without the involvement of police

    OpenAIRE

    Buchalová, Kateřina

    2013-01-01

    Title: Principles of safety measures of sports events organizers without the involvement of police Objectives: The aim of this thesis is to describe the security measures taken by organizers of sporting events. Methods: The theoretical thesis is based on a search of the available literature and research, summarizing and comparing the safety measures taken by organizers. Results: The thesis describes the activities of organizers of sports events and the precautions that must be provided...

  19. Teleportation of a Coherent Superposition State Via a Nonmaximally Entangled Coherent Channel

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    We investigate the problem of teleportation of a superposition coherent state with a nonmaximally entangled coherent channel. Two strategies are considered to complete the task. The first one uses entanglement concentration to purify the channel to a maximally entangled one. The second one teleports the state through the nonmaximally entangled coherent channel directly. We find that the probabilities of successful teleportation for the two strategies depend on the amplitudes of the coherent states, and that the mean fidelity of teleportation using the first strategy is always less than that of the second strategy.

  20. Probing the conductance superposition law in single-molecule circuits with parallel paths.

    Science.gov (United States)

    Vazquez, H; Skouta, R; Schneebeli, S; Kamenetska, M; Breslow, R; Venkataraman, L; Hybertsen, M S

    2012-10-01

    According to Kirchhoff's circuit laws, the net conductance of two parallel components in an electronic circuit is the sum of the individual conductances. However, when the circuit dimensions are comparable to the electronic phase coherence length, quantum interference effects play a critical role, as exemplified by the Aharonov-Bohm effect in metal rings. At the molecular scale, interference effects dramatically reduce the electron transfer rate through a meta-connected benzene ring when compared with a para-connected benzene ring. For longer conjugated and cross-conjugated molecules, destructive interference effects have been observed in the tunnelling conductance through molecular junctions. Here, we investigate the conductance superposition law for parallel components in single-molecule circuits, particularly the role of interference. We synthesize a series of molecular systems that contain either one backbone or two backbones in parallel, bonded together cofacially by a common linker on each end. Single-molecule conductance measurements and transport calculations based on density functional theory show that the conductance of a double-backbone molecular junction can be more than twice that of a single-backbone junction, providing clear evidence for constructive interference.
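    The headline effect, a coherent two-path junction conducting more than twice a one-path junction, follows directly from adding transmission amplitudes rather than conductances in the Landauer picture. A minimal numerical illustration (the amplitude value is an arbitrary assumption, not a measured one):

```python
# Landauer picture: a single coherent channel conducts G = G0 * |t|^2,
# where G0 = 2e^2/h is the conductance quantum.
G0 = 7.748e-5  # siemens

t_one = 0.1                      # transmission amplitude of one backbone (assumed)
G_one = G0 * abs(t_one) ** 2

# Coherent parallel paths: amplitudes add BEFORE squaring.
G_two_coherent = G0 * abs(t_one + t_one) ** 2

# Classical Kirchhoff addition: conductances add.
G_two_kirchhoff = 2 * G_one

print(round(G_two_coherent / G_one, 6))    # 4.0 for fully constructive interference
print(round(G_two_kirchhoff / G_one, 6))   # 2.0
```

    A relative phase between the two amplitudes interpolates between the constructive factor of 4 and complete destructive cancellation, which is the interference physics the measurements probe.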

  1. Green function as an integral superposition of Gaussian beams in inhomogeneous anisotropic layered structures in Cartesian coordinates

    Czech Academy of Sciences Publication Activity Database

    Červený, V.; Pšenčík, Ivan

    2016-01-01

    Roč. 26 (2016), s. 131-153 ISSN 2336-3827 R&D Projects: GA ČR(CZ) GA16-05237S Institutional support: RVO:67985530 Keywords: elastodynamic Green function * inhomogeneous anisotropic media * integral superposition of Gaussian beams Subject RIV: DC - Seismology, Volcanology, Earth Structure

  2. Psychometric Principles in Measurement for Geoscience Education Research: A Climate Change Example

    Science.gov (United States)

    Libarkin, J. C.; Gold, A. U.; Harris, S. E.; McNeal, K.; Bowles, R.

    2015-12-01

    Understanding learning in geoscience classrooms requires that we use valid and reliable instruments aligned with intended learning outcomes. Nearly one hundred instruments assessing conceptual understanding in undergraduate science and engineering classrooms (often called concept inventories) have been published and are actively being used to investigate learning. The techniques used to develop these instruments vary widely, often with little attention to psychometric principles of measurement. This paper will discuss the importance of using psychometric principles to design, evaluate, and revise research instruments, with particular attention to the validity and reliability steps that must be undertaken to ensure that research instruments are providing meaningful measurement. An example from a climate change inventory developed by the authors will be used to exemplify the importance of validity and reliability, including the value of item response theory for instrument development. A 24-item instrument was developed based on published items, conceptions research, and instructor experience. Rasch analysis of over 1000 responses provided evidence for the removal of 5 items for misfit and one item for potential bias as measured via differential item functioning. The resulting 18-item instrument can be considered a valid and reliable measure based on pre- and post-implementation metrics. Consideration of the relationship between respondent demographics and concept inventory scores provides unique insight into the relationship between gender, religiosity, values and climate change understanding.
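    As a minimal illustration of the Rasch model underlying that item analysis (the ability and difficulty values below are invented for illustration, not taken from the climate instrument):

```python
import math

def rasch_probability(theta, b):
    """Rasch (one-parameter logistic) model: probability that a respondent
    with ability theta answers an item of difficulty b correctly."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# For an average respondent (theta = 0): an easy item vs a hard item.
p_easy = rasch_probability(0.0, -1.0)   # item difficulty below average
p_hard = rasch_probability(0.0, 2.0)    # item difficulty well above average

print(round(p_easy, 2), round(p_hard, 2))   # 0.73 0.12
# When ability equals difficulty, the success probability is exactly 0.5.
```

    Fit statistics and differential item functioning, as used to remove items here, compare observed response patterns against the probabilities this curve predicts.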

  3. Seismic analysis of structures of nuclear power plants by Lanczos mode superposition method

    International Nuclear Information System (INIS)

    Coutinho, A.L.G.A.; Alves, J.L.D.; Landau, L.; Lima, E.C.P. de; Ebecken, N.F.F.

    1986-01-01

    The Lanczos mode superposition method is applied to the seismic analysis of nuclear power plants. The coordinate transformation matrix is generated by the Lanczos algorithm. It is shown that, through a convenient choice of the starting vector of the algorithm, modes with significant participation factors are automatically selected. A response spectrum analysis of a typical reactor building is performed. The obtained results are compared with those determined by the classical approach, stressing the remarkable computational effectiveness of the proposed methodology. (Author) [pt
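    The core of the method, a Lanczos iteration whose tridiagonal projection captures the extreme (dominant) modes first, can be sketched as follows. This is a generic textbook implementation run on a random symmetric matrix, not the authors' structural code:

```python
import numpy as np

def lanczos_eigvals(A, v0, m):
    """Lanczos iteration: project symmetric A onto an m-dimensional Krylov
    space built from starting vector v0; the eigenvalues of the resulting
    tridiagonal matrix approximate A's extreme eigenvalues first."""
    n = len(v0)
    Q = np.zeros((n, m))
    alpha = np.zeros(m)
    beta = np.zeros(m - 1)
    q = v0 / np.linalg.norm(v0)
    q_prev = np.zeros(n)
    b = 0.0
    for j in range(m):
        Q[:, j] = q
        w = A @ q - b * q_prev
        alpha[j] = q @ w
        w = w - alpha[j] * q
        # Full reorthogonalization keeps the basis numerically orthogonal.
        w = w - Q[:, : j + 1] @ (Q[:, : j + 1].T @ w)
        if j < m - 1:
            b = np.linalg.norm(w)
            beta[j] = b
            q_prev, q = q, w / b
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    return np.linalg.eigvalsh(T)

# Random symmetric matrix standing in for a structural operator.
rng = np.random.default_rng(2)
M = rng.standard_normal((200, 200))
A = M + M.T

ritz = lanczos_eigvals(A, rng.standard_normal(200), m=60)
exact = np.linalg.eigvalsh(A)
print(abs(ritz[-1] - exact[-1]))   # extreme eigenvalues converge first
```

    In a mode superposition setting, the choice of v0 plays the role the abstract describes: it biases which modes enter the reduced basis first.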

  4. Evaluation of collapsed cone convolution superposition (CCCS) algorithms in Prowess treatment planning system for calculating symmetric and asymmetric field sizes

    Directory of Open Access Journals (Sweden)

    Tamer Dawod

    2015-01-01

    Full Text Available Purpose: This work investigated the accuracy of the Prowess treatment planning system (TPS) in dose calculation in a homogeneous phantom for symmetric and asymmetric field sizes using the collapsed cone convolution/superposition (CCCS) algorithm. Methods: The measurements were carried out at a source-to-surface distance (SSD) of 100 cm for 6 and 10 MV photon beams. A full set of measurements for symmetric and asymmetric fields, including in-plane and cross-plane profiles at various depths and percentage depth doses (PDDs), was obtained on the linear accelerator. Results: The results showed that asymmetric collimation can lead to significant dose-calculation errors (up to approximately 7%) if changes in primary beam intensity and beam quality are not accounted for. The largest differences in the isodose curves were found in the buildup and penumbra regions. Conclusion: The results showed that dose calculation using the Prowess TPS based on the CCCS algorithm is generally in excellent agreement with measurements.

  5. Microwave measurement of electrical fields in different media – principles, methods and instrumentation

    International Nuclear Information System (INIS)

    Dankov, Plamen I (Sofia University St. Kliment Ohridski, Faculty of Physics, James Bourchier blvd., Sofia 1164, Bulgaria)

    2014-01-01

    This paper, presented in the frame of the 4th International Workshop and Summer School on Plasma Physics (IWSSPP'2010, Kiten, Bulgaria), is a brief review of the principles, methods and instrumentation of microwave measurements of electrical fields in different media. The main part of the paper is connected with the description of the basic features of many field sensors and antennas – narrow-, broadband and ultra-wide band, miniaturized, reconfigurable and active sensors, etc. The main features and applicability of these sensors for determination of electric fields in different media are discussed. The last part of the paper presents the basic principles for utilization of electromagnetic 3-D simulators for E-field measurement purposes. Two illustrative examples are given – the determination of the dielectric anisotropy of multi-layer materials and a discussion of the selectivity of a hairpin probe for determination of the electron density in dense gaseous plasmas.

  6. Quantum tele-amplification with a continuous-variable superposition state

    DEFF Research Database (Denmark)

    Neergaard-Nielsen, Jonas S.; Eto, Yujiro; Lee, Chang-Woo

    2013-01-01

    Optical coherent states are classical light fields with high purity, and are essential carriers of information in optical networks. If these states could be controlled in the quantum regime, allowing for their quantum superposition (referred to as a Schrödinger-cat state), then novel quantum-enhanced functions such as coherent-state quantum computing (CSQC), quantum metrology and a quantum repeater could be realized in the networks. Optical cat states are now routinely generated in laboratories. An important next challenge is to use them for implementing the aforementioned functions. Here, we demonstrate a basic CSQC protocol, where a cat state is used as an entanglement resource for teleporting a coherent state with an amplitude gain. We also show how this can be extended to a loss-tolerant quantum relay of multi-ary phase-shift keyed coherent states. These protocols could be useful in both...

  7. Multiparticle quantum superposition and stimulated entanglement by parity selective amplification of entangled states

    International Nuclear Information System (INIS)

    Martini, F. de; Giuseppe, G. di

    2001-01-01

    A multiparticle quantum superposition state has been generated by a novel phase-selective parametric amplifier of an entangled two-photon state. This realization is expected to open a new field of investigations on the persistence of the validity of the standard quantum theory for systems of increasing complexity, in a quasi decoherence-free environment. Because of its nonlocal structure the new system is expected to play a relevant role in the modern endeavor on quantum information and in the basic physics of entanglement. (orig.)

  8. Coherent population transfer and superposition of atomic states via stimulated Raman adiabatic passage using an excited-doublet four-level atom

    International Nuclear Information System (INIS)

    Jin Shiqi; Gong Shangqing; Li Ruxin; Xu Zhizhan

    2004-01-01

    Coherent population transfer and superposition of atomic states via a technique of stimulated Raman adiabatic passage in an excited-doublet four-level atomic system have been analyzed. It is shown that the behavior of adiabatic passage in this system depends crucially on the detunings between the laser frequencies and the corresponding atomic transition frequencies. Particularly, if both fields are tuned to the center of the two upper levels, the four-level system has two degenerate dark states, although one of them contains a contribution from the excited atomic states. The nonadiabatic coupling of the two degenerate dark states is intrinsic; it originates from the energy difference of the two upper levels. An arbitrary superposition of atomic states can be prepared due to such a nonadiabatic coupling effect.

  9. Measures of Coupling between Neural Populations Based on Granger Causality Principle.

    Science.gov (United States)

    Kaminski, Maciej; Brzezicka, Aneta; Kaminski, Jan; Blinowska, Katarzyna J

    2016-01-01

    This paper shortly reviews the measures used to estimate neural synchronization in experimental settings. Our focus is on multivariate measures of dependence based on the Granger causality (G-causality) principle, their applications and performance in respect of robustness to noise, volume conduction, common driving, and presence of a "weak node." Application of G-causality measures to EEG, intracranial signals and fMRI time series is addressed. G-causality based measures defined in the frequency domain allow the synchronization between neural populations and the directed propagation of their electrical activity to be determined. The time-varying G-causality based measure Short-time Directed Transfer Function (SDTF) supplies information on the dynamics of synchronization and the organization of neural networks. Inspection of effective connectivity patterns indicates a modular structure of neural networks, with a stronger coupling within modules than between them. The hypothetical plausible mechanism of information processing, suggested by the identified synchronization patterns, is communication between tightly coupled modules intermitted by sparser interactions providing synchronization of distant structures.
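    The time-domain G-causality idea can be illustrated with a toy bivariate example: compare the prediction error of an autoregressive model of x with and without the past of y. The lag order, coupling coefficients and log variance-ratio score below are our illustrative choices, not the SDTF estimator itself:

```python
import numpy as np

def ar_residual_var(target, X):
    """Mean squared residual of a least-squares AR fit."""
    coef, *_ = np.linalg.lstsq(X, target, rcond=None)
    resid = target - X @ coef
    return resid @ resid / len(resid)

def granger_score(x, y, p=2):
    """log(var_restricted / var_full) for predicting x; > 0 means the past
    of y helps predict x beyond x's own past (Granger causality y -> x)."""
    n = len(x)
    target = x[p:]
    own = np.column_stack([x[p - k: n - k] for k in range(1, p + 1)])
    other = np.column_stack([y[p - k: n - k] for k in range(1, p + 1)])
    full = np.hstack([own, other])
    return np.log(ar_residual_var(target, own) / ar_residual_var(target, full))

# Simulate a unidirectional system: y drives x with a one-sample delay.
rng = np.random.default_rng(3)
n = 5000
y = rng.standard_normal(n)
x = np.empty(n)
x[0] = 0.0
for t in range(1, n):
    x[t] = 0.8 * y[t - 1] + 0.3 * rng.standard_normal()

s_xy = granger_score(x, y)   # clearly positive: y -> x
s_yx = granger_score(y, x)   # near zero: no influence x -> y
print(s_xy, s_yx)
```

    Frequency-domain measures such as the DTF generalize this comparison by decomposing the fitted multivariate AR model across frequencies.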

  10. Classification of high-resolution remote sensing images based on multi-scale superposition

    Science.gov (United States)

    Wang, Jinliang; Gao, Wenjie; Liu, Guangjie

    2017-07-01

    Landscape structures and processes on different scales show different characteristics. In the study of specific target landmarks, the most appropriate scale for images can be attained by scale conversion, which improves the accuracy and efficiency of feature identification and classification. In this paper, the authors carried out experiments on multi-scale classification by taking the Shangri-La area in the north-western Yunnan province as the research area and images from the SPOT5 HRG and GF-1 satellites as data sources. Firstly, the authors upscaled the two images by cubic convolution and calculated the optimal scale for the different ground objects shown in the images by variation functions. Then the authors conducted multi-scale superposition classification using Maximum Likelihood and evaluated the classification accuracy. The results indicate that: (1) for most ground objects, the optimal scale is larger than the original one. Specifically, water has the largest optimal scale, i.e. around 25-30 m; farmland, grassland, brushwood, roads, settlements and woodland follow with 20-24 m. The optimal scale for shadows and flooded land is basically the same as the original one, i.e. 8 m and 10 m respectively. (2) Regarding the classification of the multi-scale superposed images, the overall accuracy of those from SPOT5 HRG and GF-1 is 12.84% and 14.76% higher than that of the original multi-spectral images, respectively, and the Kappa coefficient is 0.1306 and 0.1419 higher, respectively. Hence, the multi-scale superposition classification applied in the research area can enhance the classification accuracy of remote sensing images.
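    The Maximum Likelihood decision rule used in the classification step can be sketched for a single band with Gaussian class models. The class statistics below are invented for illustration and are unrelated to the SPOT5/GF-1 data:

```python
import numpy as np

# Toy single-band "image": two surface classes with different radiometry.
rng = np.random.default_rng(4)
water = rng.normal(30.0, 4.0, size=500)
forest = rng.normal(80.0, 10.0, size=500)
pixels = np.r_[water, forest]
truth = np.r_[np.zeros(500, int), np.ones(500, int)]

# Class statistics estimated from training samples.
means = np.array([water.mean(), forest.mean()])
stds = np.array([water.std(), forest.std()])

# Maximum-likelihood decision for Gaussian classes: pick the class with
# the highest log-density at each pixel value.
log_dens = (
    -np.log(stds)[None, :]
    - 0.5 * ((pixels[:, None] - means[None, :]) / stds[None, :]) ** 2
)
labels = log_dens.argmax(axis=1)
accuracy = (labels == truth).mean()
print(accuracy)   # well above 0.95 for these well-separated classes
```

    Multi-band imagery replaces the per-class mean and standard deviation with a mean vector and covariance matrix, but the decision rule is the same.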

  11. Lasers: principles, applications and energetic measures

    International Nuclear Information System (INIS)

    Subran, C.; Sagaut, J.; Lapointe, S.

    2009-01-01

    After having recalled the principles of a laser and the properties of the laser beam, the authors describe the following different types of lasers: solid state lasers, fiber lasers, semiconductor lasers, dye lasers and gas lasers. Then, their applications are given. Very high energy lasers can reproduce the phenomenon of nuclear fusion of hydrogen atoms. (O.M.)

  12. Maximum coherent superposition state achievement using a non-resonant pulse train in non-degenerate three-level atoms

    International Nuclear Information System (INIS)

    Deng, Li; Niu, Yueping; Jin, Luling; Gong, Shangqing

    2010-01-01

    The coherent superposition state of the lower two levels in non-degenerate three-level Λ atoms is investigated using the accumulative effects of non-resonant pulse trains when the repetition period is smaller than the decay time of the upper level. First, using a rectangular pulse train, the accumulative effects are re-examined in the non-resonant two-level atoms and the modified constructive accumulation equation is analytically given. The equation shows that the relative phase and the repetition period are important in the accumulative effect. Next, under the modified equation in the non-degenerate three-level Λ atoms, we show that besides the constructive accumulation effect, the use of the partial constructive accumulation effect can also achieve the steady state of the maximum coherent superposition state of the lower two levels and the latter condition is relatively easier to manipulate. The analysis is verified by numerical calculations. The influence of the external levels in such a case is also considered and we find that it can be avoided effectively. The above analysis is also applicable to pulse trains with arbitrary envelopes.

  13. Magnetic Barkhausen Noise Measurements Using Tetrapole Probe Designs

    Science.gov (United States)

    McNairnay, Paul

    A magnetic Barkhausen noise (MBN) testing system was developed for Defence Research and Development Canada (DRDC) to perform MBN measurements on the Royal Canadian Navy's Victoria class submarine hulls that can be correlated with material properties, including residual stress. The DRDC system was based on the design of a MBN system developed by Steven White at Queen's University, which was capable of performing rapid angular dependent measurements through the implementation of a flux controlled tetrapole probe. In tetrapole probe designs, the magnetic excitation field is rotated in the surface plane of the sample under the assumption of linear superposition of two orthogonal magnetic fields. During the course of this work, however, the validity of flux superposition in ferromagnetic materials, for the purpose of measuring MBN, was brought into question. Consequently, a study of MBN anisotropy using tetrapole probes was performed. Results indicate that MBN anisotropy measured under flux superposition does not simulate MBN anisotropy data obtained through manual rotation of a single dipole excitation field. It is inferred that MBN anisotropy data obtained with tetrapole probes is the result of the magnetic domain structure's response to an orthogonal magnetization condition and not necessarily to any bulk superposition magnetization in the sample. A qualitative model for the domain configuration under two orthogonal magnetic fields is proposed to describe the results. An empirically derived fitting equation, that describes tetrapole MBN anisotropy data, is presented. The equation describes results in terms of two largely independent orthogonal fields, and includes interaction terms arising due to competing orthogonally magnetized domain structures and interactions with the sample's magnetic easy axis. 
The equation is used to fit results obtained from a number of samples and tetrapole orientations and in each case correctly identifies the samples' magnetic easy axis.

  14. Measures of coupling between neural populations based on Granger causality principle

    Directory of Open Access Journals (Sweden)

    Maciej Kaminski

    2016-10-01

    Full Text Available This paper shortly reviews the measures used to estimate neural synchronization in experimental settings. Our focus is on multivariate measures of dependence based on the Granger causality (G-causality) principle, their applications and performance in respect of robustness to noise, volume conduction, common driving, and presence of a 'weak node'. Application of G-causality measures to EEG, intracranial signals and fMRI time series is addressed. G-causality based measures defined in the frequency domain allow the synchronization between neural populations and the directed propagation of their electrical activity to be determined. The time-varying G-causality based measure Short-time Directed Transfer Function (SDTF) supplies information on the dynamics of synchronization and the organization of neural networks. Inspection of effective connectivity patterns indicates a modular structure of neural networks, with a stronger coupling within modules than between them. The hypothetical plausible mechanism of information processing, suggested by the identified synchronization patterns, is communication between tightly coupled modules intermitted by sparser interactions providing synchronization of distant structures.

  15. Principles of the measurement of residual stress by neutron diffraction

    Energy Technology Data Exchange (ETDEWEB)

    Webster, G A; Ezeilo, A N [Imperial Coll. of Science and Technology, London (United Kingdom). Dept. of Mechanical Engineering

    1996-11-01

    The presence of residual stresses in engineering components can significantly affect their load carrying capacity and resistance to fracture. In order to quantify their effect it is necessary to know their magnitude and distribution. Neutron diffraction is the most suitable method of obtaining these stresses non-destructively in the interior of components. In this paper the principles of the technique are described. A monochromatic beam of neutrons, or time of flight measurements, can be employed. In each case, components of strain are determined directly from changes in the lattice spacings between crystals. Residual stresses can then be calculated from these strains. The experimental procedures for making the measurements are described and precautions for achieving reliable results discussed. These include choice of crystal planes on which to make measurements, extent of masking needed to identify a suitable sampling volume, type of detector and alignment procedure. Methods of achieving a stress free reference are also considered. A selection of practical examples is included to demonstrate the success of the technique. (author) 14 figs., 1 tab., 18 refs.

  16. Principles of the measurement of residual stress by neutron diffraction

    International Nuclear Information System (INIS)

    Webster, G.A.; Ezeilo, A.N.

    1996-01-01

    The presence of residual stresses in engineering components can significantly affect their load carrying capacity and resistance to fracture. In order to quantify their effect it is necessary to know their magnitude and distribution. Neutron diffraction is the most suitable method of obtaining these stresses non-destructively in the interior of components. In this paper the principles of the technique are described. A monochromatic beam of neutrons, or time of flight measurements, can be employed. In each case, components of strain are determined directly from changes in the lattice spacings between crystals. Residual stresses can then be calculated from these strains. The experimental procedures for making the measurements are described and precautions for achieving reliable results discussed. These include choice of crystal planes on which to make measurements, extent of masking needed to identify a suitable sampling volume, type of detector and alignment procedure. Methods of achieving a stress free reference are also considered. A selection of practical examples is included to demonstrate the success of the technique. (author) 14 figs., 1 tab., 18 refs

  17. Reducing Uncertainty: Implementation of Heisenberg Principle to Measure Company Performance

    Directory of Open Access Journals (Sweden)

    Anna Svirina

    2015-08-01

    Full Text Available The paper addresses the problem of uncertainty reduction in estimating future company performance, which results from the wide range of probable efficiencies of an enterprise's intangible assets. To reduce this uncertainty, the paper suggests using quantum economy principles, i.e. implementation of the Heisenberg principle to measure the efficiency and potential of a company's intangible assets. It is proposed that for intangibles it is not possible to estimate both potential and efficiency at a certain time point. To support this thesis, data on resource potential and efficiency from mid-Russian companies was evaluated within a deterministic approach, which did not allow the probability of achieving a certain resource efficiency to be evaluated, and a quantum approach, which allowed estimation of the central point around which the probable efficiency of resources is concentrated. Visualization of these approaches was performed by means of LabView software. It was shown that for tangible assets a deterministic approach to performance estimation should be used, while for intangible assets the quantum approach allows better prediction of future performance. On the basis of these findings we propose a holistic approach towards estimation of company resource efficiency in order to reduce uncertainty in modeling company performance.

  18. Fundamental Principle for Quantum Theory

    OpenAIRE

    Khrennikov, Andrei

    2002-01-01

    We propose the principle, the law of statistical balance for basic physical observables, which specifies quantum statistical theory among all other statistical theories of measurements. It seems that this principle might play in quantum theory the role that is similar to the role of Einstein's relativity principle.

  19. Final Aperture Superposition Technique applied to fast calculation of electron output factors and depth dose curves

    International Nuclear Information System (INIS)

    Faddegon, B.A.; Villarreal-Barajas, J.E.

    2005-01-01

    The Final Aperture Superposition Technique (FAST) is described and applied to accurate, near-instantaneous calculation of the relative output factor (ROF) and central-axis percentage depth dose curve (PDD) for clinical electron beams used in radiotherapy. FAST is based on precalculation of dose at select points for the two extreme situations of a fully open final aperture and a final aperture with no opening (fully shielded). This technique differs from conventional superposition of dose deposition kernels: the precalculated dose is differential in position of the electron or photon at the downstream surface of the insert. The calculation for a particular aperture (x-ray jaws or MLC, insert in electron applicator) is done by superposition of the precalculated dose data, using the open-field data over the open part of the aperture and the fully shielded data over the remainder. The calculation takes explicit account of all interactions in the shielded region of the aperture except the collimator effect: particles that pass from the open part into the shielded part, or vice versa. For the clinical demonstration, FAST was compared to full Monte Carlo simulation of 10×10, 2.5×2.5, and 2×8 cm² inserts. Dose was calculated to 0.5% precision in 0.4×0.4×0.2 cm³ voxels, spaced at 0.2 cm depth intervals along the central axis, using detailed Monte Carlo simulation of the treatment head of a commercial linear accelerator for six different electron beams with energies of 6-21 MeV. Each simulation took several hours on a personal computer with a 1.7 GHz processor. The calculation for the individual inserts, done with superposition, was completed in under a second on the same PC. Since simulations for the precalculation are only performed once, higher precision and resolution can be obtained without increasing the calculation time for individual inserts. Fully shielded contributions were largest for small fields and high beam energy, at the surface, reaching a maximum
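
The superposition step can be sketched as follows (a minimal illustrative model with made-up arrays and dose values, not the actual FAST implementation): for each element of the aperture plane, the contribution is taken from the precalculated open-field data where the aperture is open and from the fully shielded data elsewhere, and the contributions are summed.

```python
import numpy as np

def fast_dose(open_dose, shielded_dose, aperture_open):
    """Superpose precalculated dose contributions for an arbitrary aperture.

    open_dose, shielded_dose: dose differential in particle position at the
    aperture plane, precalculated once per beam energy (shape ny x nx).
    aperture_open: boolean mask, True where the aperture is open.
    Returns the total dose as a sum over the aperture plane.
    """
    return np.sum(np.where(aperture_open, open_dose, shielded_dose))

# Toy example: a 10x10 aperture grid with a centered 4x4 opening.
open_dose = np.full((10, 10), 2.0)      # contribution per open element
shielded_dose = np.full((10, 10), 0.1)  # contribution per shielded element
mask = np.zeros((10, 10), dtype=bool)
mask[3:7, 3:7] = True
total = fast_dose(open_dose, shielded_dose, mask)
```

Because the expensive Monte Carlo precalculation of `open_dose` and `shielded_dose` happens once, this per-insert sum is essentially instantaneous, which is the point of the technique.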

  20. Basing of principles and methods of operation of radiometric control and measurement systems

    International Nuclear Information System (INIS)

    Onishchenko, A.M.

    1995-01-01

    Six basic stages of optimization of radiometric systems, the methods of defining the preset components of the total error, and the choice of principles and methods of measurement are described in succession. The possibility of simultaneously optimizing several stages, with returns to stages already passed, is shown. It is suggested that the components of the total error (methodological, instrumental, random, and representativity errors) should be preset as equal, and that the largest of the components should be reduced first. A comparative table of 64 radiometric measurement methods, rated by 11 indices of method quality, is presented. 2 refs., 1 tab

  1. Ground movement and deformation due to dewatering and open pit excavation

    International Nuclear Information System (INIS)

    Liu, B.; Yang, J.; Zhang, J.

    1996-01-01

    In the application of stochastic medium theory, it is assumed that the ground movement process has the Markov property. Based on the superposition principle and the rock consolidation principle, the ground movement and deformation due to dewatering and open pit excavation can be calculated. Comparison between field measurements at the Morwell Open Pit, Latrobe Valley (Victoria, Australia) and the calculated results shows the validity of the method presented in this paper. 5 refs
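
The superposition idea can be sketched in a few lines (the Gaussian influence function and all names below are illustrative assumptions, not taken from the paper): each excavated or dewatered element contributes independently to surface subsidence, and the contributions add linearly.

```python
import math

def subsidence(x, elements, r):
    """Total surface subsidence at x by linear superposition.

    Each excavated element i contributes an influence-function term
    (w_i / r) * exp(-pi * (x - x_i)**2 / r**2), in the spirit of stochastic
    medium theory; contributions add linearly. elements: list of (x_i, w_i)
    pairs; r: radius of influence. The exact influence-function form here
    is an illustrative assumption.
    """
    return sum(w / r * math.exp(-math.pi * (x - xi) ** 2 / r ** 2)
               for xi, w in elements)

# Two unit-weight elements placed symmetrically about the origin.
profile = subsidence(0.0, [(-5.0, 1.0), (5.0, 1.0)], r=10.0)
```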

  2. Are electrostatic potentials between regions of different chemical composition measurable? The Gibbs-Guggenheim Principle reconsidered, extended and its consequences revisited.

    Science.gov (United States)

    Pethica, Brian A

    2007-12-21

    As indicated by Gibbs and made explicit by Guggenheim, the electrical potential difference between two regions of different chemical composition cannot be measured. The Gibbs-Guggenheim Principle restricts the use of classical electrostatics in electrochemical theories as thermodynamically unsound, with a few approximate exceptions, notably for dilute electrolyte solutions and concomitant low potentials where the linear limit for the exponential of the relevant Boltzmann distribution applies. The Principle invalidates the widespread use of forms of the Poisson-Boltzmann equation which do not include the non-electrostatic components of the chemical potentials of the ions. From a thermodynamic analysis of the parallel-plate electrical condenser, employing only measurable electrical quantities and taking into account the chemical potentials of the components of the dielectric and their adsorption at the surfaces of the condenser plates, an experimental procedure to provide exceptions to the Principle has been proposed. This procedure is now reconsidered and rejected. No other related experimental procedures circumvent the Principle. Widely used theoretical descriptions of electrolyte solutions, charged surfaces and colloid dispersions which neglect the Principle are briefly discussed. MD methods avoid the limitations of the Poisson-Boltzmann equation. Theoretical models which include the non-electrostatic components of the inter-ion and ion-surface interactions in solutions and colloid systems assume the additivity of dispersion and electrostatic forces. An experimental procedure to test this assumption is identified from the thermodynamics of condensers at microscopic plate separations. The available experimental data from Kelvin probe studies are preliminary, but tend against additivity. 
A corollary to the Gibbs-Guggenheim Principle is enunciated, and the Principle is restated that for any charged species, neither the difference in electrostatic potential nor the

  3. Strategies for reducing basis set superposition error (BSSE) in O/Au and O/Ni

    KAUST Repository

    Shuttleworth, I.G.

    2015-01-01

    The effect of basis set superposition error (BSSE) and effective strategies for its minimisation have been investigated using the SIESTA-LCAO DFT package. Variation of the energy shift parameter ΔEPAO has been shown to reduce BSSE for bulk Au and Ni and across their oxygenated surfaces. Alternative strategies based on either expansion or contraction of the basis set have been shown to be ineffective in reducing BSSE. Binding energies for the surface systems obtained using LCAO were compared with BSSE-free plane-wave energies.
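
For context (standard background, not stated in the abstract): BSSE is commonly estimated with the Boys-Bernardi counterpoise correction, in which each fragment is recomputed in the full dimer basis,

```latex
\Delta E_{\mathrm{int}}^{\mathrm{CP}} \;=\; E_{AB}^{AB} \;-\; E_{A}^{AB} \;-\; E_{B}^{AB},
```

where subscripts denote the system and superscripts the basis set in which it is computed; the difference between this and the uncorrected interaction energy $E_{AB}^{AB} - E_{A}^{A} - E_{B}^{B}$ measures the BSSE. Whether the paper uses this estimator is not stated; plane-wave references are BSSE-free by construction because the basis does not depend on atomic positions.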

  5. What happens to linear properties as we move from the Klein-Gordon equation to the sine-Gordon equation

    International Nuclear Information System (INIS)

    Kovalyov, Mikhail

    2010-01-01

    In this article the sets of solutions of the sine-Gordon equation and of its linearization, the Klein-Gordon equation, are discussed and compared. It is shown that the set of solutions of the sine-Gordon equation possesses a richer structure, part of which disappears under linearization. Just as the solutions of the Klein-Gordon equation satisfy the linear superposition principle, the solutions of the sine-Gordon equation satisfy a nonlinear superposition principle.
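
The failure of linear superposition for the sine-Gordon equation is easy to check numerically. The sketch below (illustrative, not from the article) evaluates a finite-difference residual of u_tt - u_xx + sin(u) = 0: an exact kink solution gives a residual at the level of discretization error, while a naive sum of two kinks does not.

```python
import math

def kink(x, t, v):
    """Exact sine-Gordon kink: u = 4*atan(exp((x - v*t)/sqrt(1 - v**2)))."""
    g = 1.0 / math.sqrt(1.0 - v * v)
    return 4.0 * math.atan(math.exp(g * (x - v * t)))

def sg_residual(u, x, t, h=1e-3):
    """Central-difference residual of u_tt - u_xx + sin(u) at (x, t)."""
    utt = (u(x, t + h) - 2.0 * u(x, t) + u(x, t - h)) / h**2
    uxx = (u(x + h, t) - 2.0 * u(x, t) + u(x - h, t)) / h**2
    return utt - uxx + math.sin(u(x, t))

one = lambda x, t: kink(x, t, 0.3)                           # exact solution
two = lambda x, t: kink(x, t, 0.3) + kink(x - 2.0, t, -0.3)  # naive sum

r_one = abs(sg_residual(one, 0.5, 0.0))  # ~ discretization error only
r_two = abs(sg_residual(two, 0.5, 0.0))  # order one: superposition fails
```

For the linear Klein-Gordon equation the analogous residual of a sum of solutions vanishes identically, which is exactly the linear superposition principle the article contrasts with the nonlinear one.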

  6. Decoherence, environment-induced superselection, and classicality of a macroscopic quantum superposition generated by quantum cloning

    International Nuclear Information System (INIS)

    De Martini, Francesco; Sciarrino, Fabio; Spagnolo, Nicolo

    2009-01-01

    The high resilience to decoherence shown by a recently discovered macroscopic quantum superposition (MQS) generated by a quantum-injected optical parametric amplifier and involving a number of photons in excess of 5×10⁴ motivates the present theoretical and numerical investigation. The results are analyzed in comparison with the properties of the MQS based on coherent states |α⟩ and N-photon maximally entangled states (NOON), in the perspective of the comprehensive theory of the subject by Zurek. In that perspective the concepts of 'pointer state' and 'environment-induced superselection' are applied to the new scheme.

  7. Proportional fair scheduling with superposition coding in a cellular cooperative relay system

    DEFF Research Database (Denmark)

    Kaneko, Megumi; Hayashi, Kazunori; Popovski, Petar

    2013-01-01

    Many works have tackled the problem of throughput and fairness optimization in cellular cooperative relaying systems. Considering first a two-user relay broadcast channel, we design a scheme based on superposition coding (SC) which maximizes the achievable sum-rate under a proportional... fairness constraint. Unlike most relaying schemes where users are allocated orthogonally, our scheme serves the two users simultaneously on the same time-frequency resource unit by superposing their messages into three SC layers. The optimal power allocation parameters of each SC layer are derived... by analysis. Next, we consider the general multi-user case in a cellular relay system, for which we design resource allocation algorithms based on proportional fair scheduling exploiting the proposed SC-based scheme. Numerical results show that the proposed algorithms allowing simultaneous user allocation...

  8. Mechanical effects of strong measurement: back-action noise and cooling

    Science.gov (United States)

    Schwab, Keith

    2007-03-01

    Our recent experiments show that it is now possible to prepare and measure mechanical systems with thermal occupation factors of N ≈ 25 and perform continuous position measurements close to the limits required by the Heisenberg Uncertainty Principle (1). I will discuss our back-action measurements with nanomechanical structures strongly coupled to single electron transistors. We have been able to observe the stochastic back-action forces exerted by the SET as well as a cooling effect which has analogies to cooling in optical cavities. Furthermore, I will discuss progress using optical fields coupled to mechanical modes which show substantial cooling using the ponderomotive effects of the photons impacting a flexible dielectric mirror (2). Both of these techniques pave the way to demonstrating the true quantum properties of a mechanical device: squeezed states, superposition states, and entangled states. (1) ``Quantum Measurement Backaction and Cooling Observed with a Nanomechanical Resonator,'' A. Naik, O. Buu, M.D. LaHaye, M.P. Blencowe, A.D. Armour, A.A. Clerk, K.C. Schwab, Nature 443, 193 (2006). (2) ``Self-cooling of a micro-mirror by radiation pressure,'' S. Gigan, H.R. Boehm, M. Paternostro, F. Blaser, G. Langer, J. Hertzberg, K. Schwab, D. Baeuerle, M. Aspelmeyer, A. Zeilinger, Nature 444, 67 (2006).

  9. Evaluation of fine ceramics raw powders with particle size analyzers having different measuring principle and its problem

    International Nuclear Information System (INIS)

    Hayakawa, Osamu; Nakahira, Kenji; Tsubaki, Junichiro.

    1995-01-01

    Many kinds of analyzers based on various principles have been developed for measuring the particle size distribution of fine ceramics powders. However, the reproducibility of the results, the interchangeability of models, and the reliability of the tails of the measured distribution have not been investigated for each principle. In this paper, these important points for particle size analysis were clarified by measuring raw material powders of fine ceramics. (1) In the case of the laser diffraction and scattering method, reproducibility within the same model is good; however, interchangeability between different models is not so good, especially at the tails of the distribution. Submicron powders with high refractive index show this tendency markedly. (2) The photosedimentation method has some problems to be overcome, especially in measuring submicron powders with high refractive index or flaky-shaped particles. The reproducibility of the X-ray sedimentation method is much better than that of photosedimentation. (3) The light obscuration and electrical sensing zone methods show good reproducibility; however, interchangeability is sometimes poor, affected by calibration and other factors. (author)

  10. Superposition of two optical vortices with opposite integer or non-integer orbital angular momentum

    Directory of Open Access Journals (Sweden)

    Carlos Fernando Díaz Meza

    2016-01-01

    This work develops a brief proposal to achieve the superposition of two opposite vortex beams, both with integer or non-integer mean value of the orbital angular momentum. The first part concerns the generation of this kind of spatial light distribution through a modified Brown and Lohmann hologram. The inclusion of a simple mathematical expression into the pixelated grid's transmittance function, based on Fourier-domain properties, shifts the diffraction orders counterclockwise and clockwise to the same point and allows the addition of different modes. The strategy is theoretically and experimentally validated for the case of two helical wavefronts of opposite rotation.
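
The superposition of two opposite integer-charge vortices produces the familiar "petal" interference pattern, which a short sketch can illustrate (a toy scalar model on a ring, not the hologram procedure of the paper): the field exp(ilφ) + exp(-ilφ) has intensity 4cos²(lφ), i.e. 2l azimuthal lobes.

```python
import cmath, math

def petal_intensity(l, phi):
    """Intensity of the superposition of opposite vortices exp(+i*l*phi)
    and exp(-i*l*phi) on a ring: |e^{il phi} + e^{-il phi}|^2 = 4 cos^2(l phi).
    Illustrative scalar model only (radial profile ignored)."""
    field = cmath.exp(1j * l * phi) + cmath.exp(-1j * l * phi)
    return abs(field) ** 2

# Count intensity maxima around the ring for l = 3: expect 2*l = 6 petals.
l = 3
samples = [petal_intensity(l, 2 * math.pi * k / 720) for k in range(720)]
peaks = sum(1 for k in range(720)
            if samples[k] > samples[k - 1]
            and samples[k] >= samples[(k + 1) % 720])
```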

  11. The equivalence principle

    International Nuclear Information System (INIS)

    Smorodinskij, Ya.A.

    1980-01-01

    The prerelativistic history of the equivalence principle (EP) is presented briefly, and its role in the discovery of the general theory of relativity is elucidated. Modern measurements show that the ratio of inertial and gravitational masses does not differ from 1 to within at least 12 decimal places. Attention is paid to the difference between the gravitational field and the electromagnetic one: the energy of the gravitational field distributed in space is itself a source of the field, so gravitational fields always interact under superposition, whereas electromagnetic fields from different sources simply add. On the basis of the EP it is established that the Sun's field interacts with the Earth's gravitational energy in the same way as with any other energy; this proves that the gravitational field itself gravitates toward a heavy body. The problem of gyroscope motion in the Earth's gravitational field is presented as a paradox. Calculation shows that a gyroscope on a satellite undergoes a positive precession: its axis turns by an angle α during one revolution of the satellite around the Earth, plus, because of the space curvature, an angle twice as large as α, for a resulting turn of 3α. It is shown on the basis of the EP that the plane of polarization does not turn, in any coordinate system, when a ray of light passes through a gravitational field. Along with the historical value of the EP, the necessity of taking its requirements into account in describing the physical world is noted

  12. The balance principle in scientific research.

    Science.gov (United States)

    Hu, Liang-ping; Bao, Xiao-lei; Wang, Qi

    2012-05-01

    The principles of balance, randomization, control and repetition, which are closely related, constitute the four principles of scientific research. The balance principle is the kernel of the four and runs through the other three. In scientific research, however, the balance principle is often overlooked. If it is not well observed, the research conclusion is easily challenged, which may lead to the failure of the whole study. It is therefore essential to have a good command of the balance principle in scientific research. This article stresses the definition and function of the balance principle, the strategies and detailed measures for improving balance, and an analysis of common mistakes in the use of the balance principle in scientific research.

  13. Scattering of an attractive Bose-Einstein condensate from a barrier: Formation of quantum superposition states

    International Nuclear Information System (INIS)

    Streltsov, Alexej I.; Alon, Ofir E.; Cederbaum, Lorenz S.

    2009-01-01

    Scattering in one dimension of an attractive ultracold bosonic cloud from a barrier can lead to the formation of two nonoverlapping clouds. Once formed, the clouds travel with constant velocity, in general different in magnitude from that of the incoming cloud, and do not disperse. The phenomenon and its mechanism - transformation of kinetic energy to internal energy of the scattered cloud - are obtained by solving the time-dependent many-boson Schroedinger equation. The analysis of the wave function shows that the object formed corresponds to a quantum superposition state of two distinct wave packets traveling through real space.

  14. Relativistic Inverse Scattering Problem for a Superposition of a Nonlocal Separable and a Local Quasipotential

    International Nuclear Information System (INIS)

    Chernichenko, Yu.D.

    2005-01-01

    Within the relativistic quasipotential approach to quantum field theory, the relativistic inverse scattering problem is solved for the case where the total quasipotential describing the interaction of two relativistic spinless particles having different masses is a superposition of a nonlocal separable and a local quasipotential. It is assumed that the local component of the total quasipotential is known and that there exist bound states in this local component. It is shown that the nonlocal separable component of the total interaction can be reconstructed provided that the local component, an increment of the phase shift, and the energies of bound states are known

  15. The Principle of Advertising as a Measure of the Essential Control of State Acts

    Directory of Open Access Journals (Sweden)

    Osvaldo Resende Neto

    2016-10-01

    Brazilian citizens have seen several corruption scandals, leading to an outcry for the adoption of effective measures to combat impunity. The principle of publicity emerges as an important tool for democratic control, extending far beyond the limits of public administration into management and procedural contexts. The goal undertaken here is to outline the importance of publicity to the effectiveness of legal measures for the prevention and repression of misuse of public funds. Using the inductive method, a systematic review of the national bibliography was conducted, covering both current and repealed legislation on the subject.

  16. Cosmological principles. II. Physical principles

    International Nuclear Information System (INIS)

    Harrison, E.R.

    1974-01-01

    The discussion of cosmological principles covers the uniformity principle of the laws of physics, the gravitation and cognizability principles, and the Dirac creation, chaos, and bootstrap principles. (U.S.)

  17. Some kinematics and dynamics from a superposition of two axisymmetric stellar systems

    International Nuclear Information System (INIS)

    Cubarsi i Morera, R.

    1990-01-01

    Some kinematic and dynamic implications of a superposition of two stellar systems are studied. For the general case of a stellar system in a nonsteady state, Chandrasekhar's axially symmetric model is adopted for each of the subsystems. The solution obtained for the potential function provides kinematic constraints between the subsystems. These relationships are derived using the partial centered moments of the velocity distribution and the subcentroid velocities in order to study the velocity distribution, and they are used to prove that only in a stellar system with a stationary potential function can the relative motion of the local subcentroids (not only in rotation), the vertex-deviation phenomenon, and the whole set of second-order centered moments be explained. A qualitative verification with three stellar samples in the solar neighborhood is carried out. 41 refs

  18. Subpixelic measurement of large 1D displacements: principle, processing algorithms, performances and software.

    Science.gov (United States)

    Guelpa, Valérian; Laurent, Guillaume J; Sandoz, Patrick; Zea, July Galeano; Clévy, Cédric

    2014-03-12

    This paper presents a visual measurement method able to sense 1D rigid-body displacements with very high resolution, large range and high processing rate. Sub-pixelic resolution is obtained thanks to a structured pattern placed on the target. The pattern is made of twin periodic grids with slightly different periods. The periodic frames are suited for Fourier-like phase calculations, leading to high resolution, while the period difference allows the removal of phase ambiguity and thus a high range-to-resolution ratio. The paper presents the measurement principle as well as the processing algorithms (source files are provided as supplementary materials). The theoretical and experimental performances are also discussed. The processing time is around 3 µs for a line of 780 pixels, which means that the measurement rate is mostly limited by the image acquisition frame rate. A 3-σ repeatability of 5 nm is experimentally demonstrated, which has to be compared with the 168 µm measurement range.
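
The range-to-resolution trick can be sketched as a vernier calculation (an illustrative model, not the authors' algorithm): each grid of period p_i yields a phase φ_i = 2πx/p_i measured modulo 2π; the difference of the two phases varies with the much longer beat period p1·p2/(p2 - p1), which removes the fringe-order ambiguity.

```python
import math

def wrap(phase):
    """Wrap a phase to (-pi, pi]."""
    return math.atan2(math.sin(phase), math.cos(phase))

def displacement(phi1, phi2, p1, p2):
    """Recover an unambiguous displacement from two wrapped phases.

    phi_i = wrap(2*pi*x/p_i) come from twin grids of periods p1 < p2 (um).
    The phase difference varies with the beat period p1*p2/(p2 - p1), which
    fixes the integer fringe order of grid 1. Illustrative vernier model
    only, valid for 0 <= x < beat/2 here; the paper's processing differs.
    """
    beat = p1 * p2 / (p2 - p1)
    coarse = wrap(phi1 - phi2) / (2.0 * math.pi) * beat  # coarse position
    order = round((coarse - phi1 / (2.0 * math.pi) * p1) / p1)
    return (phi1 / (2.0 * math.pi) + order) * p1

p1, p2 = 10.0, 11.0              # hypothetical grid periods in um
x_true = 37.3                    # displacement to recover, um
phi1 = wrap(2.0 * math.pi * x_true / p1)
phi2 = wrap(2.0 * math.pi * x_true / p2)
x_hat = displacement(phi1, phi2, p1, p2)
```

The fine result inherits the resolution of the single-grid phase measurement while the ambiguity range grows from p1 to the beat period, which is the high range-to-resolution ratio described above.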

  19. Physical principles of thermoluminescence and recent developments in its measurement

    International Nuclear Information System (INIS)

    Levy, P.W.

    1974-01-01

    The physical principles which are the basis of thermoluminescence techniques for dating and authenticating archaeological and fine art objects are described in non-technical terms. Included is a discussion of the interaction of alpha particles, beta rays, i.e., energetic electrons, and gamma rays with solids, particularly electron-hole and ion pair formation, and the trapping of charges by crystal imperfections. Also described is the charge-release process induced by heating and the accompanying emission of luminescence resulting from charge recombination and retrapping. The basic procedure for dating and/or authenticating an artifact is described in a ''how it is done'' manner. Lastly, recently developed apparatus is described for simultaneously measuring luminescent light intensity and wavelength and sample temperature. Examples of studies made with this ''3-D'' apparatus are given and applications to dating and authenticating are described. (U.S.)

  20. Java application for the superposition T-matrix code to study the optical properties of cosmic dust aggregates

    Science.gov (United States)

    Halder, P.; Chakraborty, A.; Deb Roy, P.; Das, H. S.

    2014-09-01

    In this paper, we report the development of a java application for the superposition T-matrix code, JaSTA (Java Superposition T-matrix App), to study the light scattering properties of aggregate structures. It has been developed using Netbeans 7.1.2, a java integrated development environment (IDE). JaSTA uses the double precision superposition codes for multi-sphere clusters in random orientation developed by Mackowski and Mishchenko (1996). It consists of a graphical user interface (GUI) at the front end and a database of related data at the back end. Both the interactive GUI and the database package directly enable a user to set the respective input parameters (namely, wavelength, complex refractive indices, grain size, etc.) to study the related optical properties of cosmic dust (namely, extinction, polarization, etc.) instantly, i.e., with zero computational time. This increases the efficiency of the user. The database of JaSTA is currently created for a few sets of input parameters, with a plan to create a large database in future. This application also has an option where users can compile and run the scattering code directly for aggregates in the GUI environment. JaSTA aims to provide convenient and quicker data analysis of the optical properties, which can be used in different fields like planetary science, atmospheric science, nano science, etc. The current version of this software is developed for the Linux and Windows platforms to study the light scattering properties of small aggregates, and will be extended to larger aggregates using parallel codes in future. Catalogue identifier: AETB_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AETB_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 571570 No. 
of bytes in distributed program

  1. On the uncertainty principle. V

    International Nuclear Information System (INIS)

    Halpern, O.

    1976-01-01

    The treatment of ideal experiments connected with the uncertainty principle is continued. The author successively analyzes measurements of momentum and position, and discusses the common reason why the results in all cases differ from the conventional ones. A similar difference exists for the measurement of field strengths. The interpretation given by Weizsaecker, who tried to interpret Bohr's complementarity principle by introducing a multi-valued logic, is analyzed. The treatment of the uncertainty principle ΔE Δt is deferred to a later paper, as is the interpretation of the method of variation of constants. Every ideal experiment discussed shows various lower limits for the value of the uncertainty product, limits which depend on the experimental arrangement and are always (considerably) larger than h. (Auth.)

  2. Quantum Image Encryption Algorithm Based on Image Correlation Decomposition

    Science.gov (United States)

    Hua, Tianxiang; Chen, Jiamin; Pei, Dongju; Zhang, Wenquan; Zhou, Nanrun

    2015-02-01

    A novel quantum gray-level image encryption and decryption algorithm based on image correlation decomposition is proposed. The correlation among image pixels is established by utilizing the superposition and measurement principles of quantum states, and a whole quantum image is divided into a series of sub-images. These sub-images are stored in a complete binary tree array constructed previously and then randomly subjected to one of the operations of quantum random-phase gate, quantum rotation gate and Hadamard transform. The encrypted image is obtained by superimposing the resulting sub-images with the superposition principle of quantum states. For the encryption algorithm, the keys are the parameters of the random-phase gate, the rotation angle, the binary sequence and the orthonormal basis states. The security and the computational complexity of the proposed algorithm are analyzed. The proposed encryption algorithm can resist brute force attack due to its very large key space and has lower computational complexity than its classical counterparts.

  3. Measuring and slowing decoherence in Electromagnetically induced transparency medium

    International Nuclear Information System (INIS)

    Shuker, M.; Firstenberg, O.; Sagi, Y.; Ben-Kish, A.; Fisher, A.; Ron, A.; Davidson, N.

    2005-01-01

    Electromagnetically induced transparency is a unique light-matter interaction that exhibits extremely narrow-band spectroscopic features along with low absorption. Recent interest in this phenomenon is driven by its possible applications in quantum information (slow light, storage of light), atomic clocks and precise magnetometers. The electromagnetically induced transparency phenomenon takes place when an atomic ensemble is driven into a coherent superposition of its ground-state sub-levels by two phase-coherent radiation fields. A key parameter of the electromagnetically induced transparency medium, which limits its applicability, is the coherence lifetime of this superposition (the decoherence rate). We have developed a simple technique to measure decay rates within the ground state of an atomic ensemble, and specifically the decoherence rate of the electromagnetically induced transparency coherent superposition. Detailed measurements were performed in a rubidium vapor cell at 60 - 80 with 30 Torr of neon buffer gas. We found that the electromagnetically induced transparency decoherence is dominated by spin-exchange collisions between rubidium atoms. We discuss the sensitivity of various quantum states of the atomic ensemble to spin-exchange decoherence, and find a set of quantum states that minimizes this effect. Finally, we demonstrate a unique quantum state which is both insensitive to spin-exchange decoherence and constitutes an electromagnetically induced transparency state of the medium

  4. Quantum Experiments and Graphs: Multiparty States as Coherent Superpositions of Perfect Matchings

    Science.gov (United States)

    Krenn, Mario; Gu, Xuemei; Zeilinger, Anton

    2017-12-01

    We show a surprising link between experimental setups to realize high-dimensional multipartite quantum states and graph theory. In these setups, the paths of photons are identified such that the photon-source information is never created. We find that each of these setups corresponds to an undirected graph, and every undirected graph corresponds to an experimental setup. Every term in the emerging quantum superposition corresponds to a perfect matching in the graph. Calculating the final quantum state is in the #P-complete complexity class, thus it cannot be done efficiently. To strengthen the link further, theorems from graph theory—such as Hall's marriage problem—are rephrased in the language of pair creation in quantum experiments. We show explicitly how this link allows one to answer questions about quantum experiments (such as which classes of entangled states can be created) with graph theoretical methods, and how to potentially simulate properties of graphs and networks with quantum experiments (such as critical exponents and phase transitions).
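
The matching-to-superposition correspondence can be made concrete with a brute-force enumeration (an exponential-time sketch for tiny graphs; counting perfect matchings is #P-complete in general, as the abstract notes):

```python
from itertools import combinations

def perfect_matchings(vertices, edges):
    """Enumerate all perfect matchings of an undirected graph by brute force.

    In the graph-experiment correspondence, each perfect matching maps to
    one term of the emerging quantum superposition. Illustrative only:
    checks every (n/2)-subset of edges for full vertex coverage.
    """
    edges = [tuple(sorted(e)) for e in edges]
    n = len(vertices)
    result = []
    for subset in combinations(edges, n // 2):
        covered = [v for e in subset for v in e]
        if len(set(covered)) == n:   # every vertex covered exactly once
            result.append(subset)
    return result

# A 4-cycle a-b-c-d-a has two perfect matchings, i.e. a two-term superposition.
ms = perfect_matchings(['a', 'b', 'c', 'd'],
                       [('a', 'b'), ('b', 'c'), ('c', 'd'), ('d', 'a')])
```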

  6. AFG-MONSU. A program for calculating axial heterogeneities in cylindrical pin cells

    International Nuclear Information System (INIS)

    Neltrup, H.; Kirkegaard, P.

    1978-08-01

    The AFG-MONSU program complex is designed to calculate the flux in cylindrical fuel pin cells into which heterogeneities are introduced in a regular array. The theory, integral transport theory combined with Monte Carlo by means of a superposition principle, is described in some detail. A detailed derivation of the superposition principle as well as the formulas used in the DIT (Discrete Integral Transport) method are given in the appendices, along with a description of the input structure of the AFG-MONSU program complex. (author)

  7. Measurement of carotid bifurcation pressure gradients using the Bernoulli principle.

    Science.gov (United States)

    Illig, K A; Ouriel, K; DeWeese, J A; Holen, J; Green, R M

    1996-04-01

    Current randomized prospective studies suggest that the degree of carotid stenosis is a critical element in deciding whether surgical or medical treatment is appropriate. Of potential interest is the actual pressure drop caused by the blockage, but no direct non-invasive means of quantifying the hemodynamic consequences of carotid artery stenoses currently exists. The present prospective study examined whether preoperative pulsed-Doppler duplex ultrasonographic velocity (v) measurements could be used to predict pressure gradients (ΔP) caused by carotid artery stenoses, and whether such measurements could be used to predict angiographic percent diameter reduction. Preoperative Doppler velocity and intraoperative direct pressure measurements were obtained, and percent diameter angiographic stenosis was measured, in 76 consecutive patients who underwent 77 elective carotid endarterectomies. Using the Bernoulli principle (ΔP = 4v²), pressure gradients across the stenoses were calculated. The predicted ΔP, as well as absolute velocities and internal carotid artery/common carotid artery velocity ratios, were compared with the actual ΔP measured intraoperatively and with preoperative angiography and oculopneumoplethysmography (OPG) results. An end-diastolic velocity of > or = 1 m/s and an end-diastolic internal carotid artery/common carotid artery velocity ratio of > or = 10 predicted a 50% diameter angiographic stenosis with 100% specificity. Although statistical significance was reached, preoperative pressure gradients derived from the Bernoulli equation could not predict actual individual intraoperative pressure gradients with enough accuracy to allow decision making on an individual basis. Velocity measurements were as specific as and more sensitive than OPG results. ΔP as predicted by the Bernoulli equation is not sufficiently accurate at the carotid bifurcation to be useful for clinical decision making on an individual basis. However, end
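
The simplified Bernoulli relation used here is a one-liner in code (a direct transcription of ΔP = 4v²; the constant 4 is the standard clinical shorthand that folds together blood density and the Pa-to-mmHg conversion):

```python
def bernoulli_gradient(v_mps):
    """Simplified clinical Bernoulli equation: delta P (mmHg) = 4 * v**2,
    with v the Doppler velocity in m/s. Assumes the proximal velocity is
    negligible, as in the standard clinical shortcut."""
    return 4.0 * v_mps ** 2

dp = bernoulli_gradient(1.0)   # the study's 1 m/s threshold -> 4 mmHg
```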

  8. Strong-field effects in Rabi oscillations between a single state and a superposition of states

    International Nuclear Information System (INIS)

    Zhdanovich, S.; Milner, V.; Hepburn, J. W.

    2011-01-01

    Rabi oscillations of quantum population are known to occur in two-level systems driven by spectrally narrow laser fields. In this work we study Rabi oscillations induced by shaped broadband femtosecond laser pulses. Due to the broad spectral width of the driving field, the oscillations are initiated between a ground state and a coherent superposition of excited states, or a ''wave packet,'' rather than a single excited state. Our experiments reveal an intricate dependence of the wave-packet phase on the intensity of the laser field. We confirm numerically that the effect is associated with the strong-field nature of the interaction and provide a qualitative picture by invoking a simple theoretical model.

  9. Thermographic Phosphors for High Temperature Measurements: Principles, Current State of the Art and Recent Applications

    Directory of Open Access Journals (Sweden)

    Konstantinos Kontis

    2008-09-01

    This paper reviews the state of phosphor thermometry, focusing on developments in the past 15 years. The fundamental principles and theory are presented, and the various spectral and temporal modes, including the lifetime decay, rise time and intensity ratio, are discussed. The entire phosphor measurement system, including relative advantages to conventional methods, choice of phosphors, bonding techniques, excitation sources and emission detection, is reviewed. Special attention is given to issues that may arise at high temperatures. A number of recent developments and applications are surveyed, with examples including: measurements in engines, hypersonic wind tunnel experiments, pyrolysis studies and droplet/spray/gas temperature determination. They show the technique is flexible and successful in measuring temperatures where conventional methods may prove to be unsuitable.
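    Of the temporal modes mentioned above, the lifetime-decay mode infers temperature from the exponential decay time of the phosphor emission after pulsed excitation; the decay time is then mapped to temperature via a calibration curve. A minimal sketch of the decay-time extraction step, using a log-linear least-squares fit on synthetic data (function and variable names are illustrative, not any specific instrument's algorithm):

```python
import math

def fit_lifetime(times, intensities):
    """Estimate the decay time tau from I(t) = I0 * exp(-t / tau)
    by a least-squares line fit to ln(I) versus t; slope = -1/tau."""
    n = len(times)
    logs = [math.log(i) for i in intensities]
    t_mean = sum(times) / n
    l_mean = sum(logs) / n
    slope = sum((t - t_mean) * (l - l_mean) for t, l in zip(times, logs)) / \
            sum((t - t_mean) ** 2 for t in times)
    return -1.0 / slope

# Synthetic decay trace with tau = 2.0 (arbitrary time units)
ts = [0.1 * k for k in range(50)]
ys = [5.0 * math.exp(-t / 2.0) for t in ts]
print(round(fit_lifetime(ts, ys), 3))  # -> 2.0
```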

  10. SU-F-T-49: Dosimetry Parameters and TPS Commissioning for the CivaSheet Directional Pd-103 Brachytherapy Source

    Energy Technology Data Exchange (ETDEWEB)

    Rivard, MJ [Tufts University School of Medicine, Boston, MA (United States)

    2016-06-15

    Purpose: The CivaSheet is a new LDR Pd-103 brachytherapy device offering directional radiation for preferentially irradiating malignancies with healthy-tissue sparing. Observations are presented on dosimetric characterization, TPS commissioning, and evaluation of the dose-superposition principle for summing individual elements comprising a planar CivaSheet. Methods: The CivaSheet comprises individual sources (CivaDots, 0.05 cm thick and 0.25 cm diam.) inside a flexible bioabsorbable substrate with a 0.8 cm center-to-center rectangular array. All non-radioactive components were measured to ensure accuracy of manufacturer-provided dimensional information. The Pd spatial distribution was gleaned from radioactive and inert samples, then modeled with the MCNP6 radiation-transport code. A 6×6 array CivaSheet was modeled to evaluate the dose-superposition principle for treatment planning. Air-kerma strength was estimated using the NIST WAFAC geometry. Absorbed dose was estimated in water with polar sampling covering 0.05≤r≤15 cm in 0.05 cm increments and 0°≤θ≤180° in 1° increments. These data were entered into VariSeed 9.0 and tested for the dose-superposition principle. Results: The dose-rate constant was 0.579 cGy/h/U with g(r) determined along the rotational axis of symmetry (0°) instead of 90°. gP(r) values at 0.1, 0.5, 2, 5, and 10 cm were 1.884, 1.344, 0.558, 0.088, and 0.0046. F(r,θ) decreased between 0° and 180° by factors of 270, 23, and 5.1 at 0.1, 1, and 10 cm. The highest dose gradient was at 92°, changing by a factor of 3 within 1° due to Au-foil shielding. TPS commissioning from 0.1≤r≤11 cm and 0°≤θ≤180° demonstrated 2% reproducibility of input data except at the high dose gradient, where interpolations caused 3% differences. Dose superposition of CivaDots replicated a multi-source CivaSheet array within 2% except where another CivaDot was present. Following implantation, the device is not perfectly planar. TPS accuracy utilizing the dose-superposition principle
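    The dose-superposition principle tested above amounts to summing a single-source dose table over the array geometry. A toy sketch for a 6×6 planar array on the 0.8 cm pitch, with a purely inverse-square single-source dose standing in for the real data (a real TPS would use the tabulated gP(r) and F(r, theta); all names and values here are illustrative):

```python
import math

PITCH = 0.8  # cm, center-to-center spacing of the source array

def single_source_dose(r_cm):
    """Toy radial dose (arbitrary units): inverse-square fall-off only,
    clamped near r = 0 to avoid the point-source singularity."""
    return 1.0 / max(r_cm, 0.05) ** 2

def sheet_dose(x, y, z, n=6):
    """Dose-superposition principle: sum single-source doses over an
    n x n planar array of sources lying in the z = 0 plane."""
    total = 0.0
    for i in range(n):
        for j in range(n):
            r = math.sqrt((x - i * PITCH) ** 2 + (y - j * PITCH) ** 2 + z ** 2)
            total += single_source_dose(r)
    return total

# Dose 1 cm above the sheet: higher over the center than over a corner.
center = sheet_dose(2.5 * PITCH, 2.5 * PITCH, 1.0)
corner = sheet_dose(0.0, 0.0, 1.0)
print(center > corner)  # -> True
```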

  11. Regulating food law : risk analysis and the precautionary principle as general principles of EU food law

    NARCIS (Netherlands)

    Szajkowska, A.

    2012-01-01

    In food law scientific evidence occupies a central position. This study offers a legal insight into risk analysis and the precautionary principle, positioned in the EU as general principles applicable to all food safety measures, both national and EU. It develops a new method of looking at these

  12. Density measurement by means of once scattered gamma radiation the ETG probe, principles and equipment

    International Nuclear Information System (INIS)

    Joergensen, J.L.; Oelgaard, P.L.; Berg, F.

    1987-01-01

    The Department of Electrophysics at the Technical University of Denmark and the Danish National Road Laboratory have together developed a new patent-claimed device for measurement of the in situ density of materials. This report describes the principles of the system and some experimental results. The system is based on once-scattered gamma radiation. In a totally non-destructive and fast way it is possible to measure the density of layers up to 25 cm thick. Furthermore, an estimate of the density variation through the layer may be obtained. Thus the gauge represents a new generation of equipment for, e.g., compaction control of road constructions. (author)

  13. Effect of the superposition of a dielectric barrier discharge onto a premixed gas burner flame

    Science.gov (United States)

    Zaima, Kazunori; Takada, Noriharu; Sasaki, Koichi

    2011-10-01

    We are investigating combustion control with the help of nonequilibrium plasma. In this work, we examined the effect of a dielectric barrier discharge (DBD) on a premixed burner flame with a CH4/O2/Ar gas mixture. The premixed burner flame was covered with a quartz tube. A copper electrode was attached on the outside of the quartz tube and connected to a high-voltage power supply. DBD inside the quartz tube was obtained between the copper electrode and the grounded nozzle of the burner, which was placed at the bottom of the quartz tube. We clearly observed that the flame length was shortened by superposing DBD onto the bottom part of the flame; the shortened flame length indicates an enhancement of the burning velocity. We measured the optical emission spectra from the bottom region of the flame and observed clear line emissions from Ar, which were never observed from the flame without DBD. We evaluated the rotational temperatures of OH and CH radicals by spectral fitting: the rotational temperature of CH was unchanged, while the rotational temperature of OH was decreased by the superposition of DBD. According to these results, we consider that the enhancement of the burning velocity is not caused by gas heating. New reaction pathways are suggested.

  14. Field testing, comparison, and discussion of five aeolian sand transport measuring devices operating on different measuring principles

    Science.gov (United States)

    Goossens, Dirk; Nolet, Corjan; Etyemezian, Vicken; Duarte-Campos, Leonardo; Bakker, Gerben; Riksen, Michel

    2018-06-01

    Five types of sediment samplers designed to measure aeolian sand transport were tested during a wind erosion event on the Sand Motor, an area on the west coast of the Netherlands prone to severe wind erosion. Each of the samplers operates on a different principle. The MWAC (Modified Wilson And Cooke) is a passive segmented trap. The modified Leatherman sampler is a passive vertically integrating trap. The Saltiphone is an acoustic sampler that registers grain impacts on a microphone. The Wenglor sampler is an optical sensor that detects particles as they pass through a laser beam. The SANTRI (Standalone AeoliaN Transport Real-time Instrument) detects particles travelling through an infrared beam, in different channels each associated with a particular grain size spectrum. A procedure is presented to transform the data output, which differs for each sampler, to a common standard so that the samplers can be objectively compared and their relative efficiency calculated. Results show that the efficiency of the samplers is comparable despite the differences in operating principle and the instrumental and environmental uncertainties associated with working with particle samplers in field conditions. The ability of the samplers to register the temporal evolution of a wind erosion event is investigated. The strengths and weaknesses of the samplers are discussed. Some problems inherent to optical sensors are examined in more detail. Finally, suggestions are made for further improvement of the samplers.
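    The common-standard transformation referred to above can be thought of as converting each device's raw output to a mass flux and then ratioing against a reference device. A schematic sketch (the calibration factors, areas, and masses below are placeholders, not the paper's calibration):

```python
def to_mass_flux(raw_output, calibration, area_m2, duration_s):
    """Convert a sampler's raw output (e.g. trapped mass in kg, or a
    calibrated count) to a mass flux in kg m^-2 s^-1."""
    return raw_output * calibration / (area_m2 * duration_s)

def relative_efficiency(sampler_flux, reference_flux):
    """Efficiency of a sampler relative to a chosen reference device."""
    return sampler_flux / reference_flux

# Placeholder numbers: 12 g vs. 15 g caught over a 5 cm^2 inlet in 10 min.
mwac = to_mass_flux(12.0e-3, 1.0, 5.0e-4, 600.0)
ref = to_mass_flux(15.0e-3, 1.0, 5.0e-4, 600.0)
print(round(relative_efficiency(mwac, ref), 2))  # -> 0.8
```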

  15. Observing the Progressive Decoherence of the "Meter" in a Quantum Measurement

    International Nuclear Information System (INIS)

    Brune, M.; Hagley, E.; Dreyer, J.; Maitre, X.; Maali, A.; Wunderlich, C.; Raimond, J.M.; Haroche, S.

    1996-01-01

    A mesoscopic superposition of quantum states involving radiation fields with classically distinct phases was created and its progressive decoherence observed. The experiment involved Rydberg atoms interacting one at a time with a few-photon coherent field trapped in a high-Q microwave cavity. The mesoscopic superposition was the equivalent of an "atom + measuring apparatus" system in which the "meter" was pointing simultaneously towards two different directions: a "Schroedinger cat." The decoherence phenomenon transforming this superposition into a statistical mixture was observed while it unfolded, providing a direct insight into a process at the heart of quantum measurement. copyright 1996 The American Physical Society

  16. Remark on Heisenberg's principle

    International Nuclear Information System (INIS)

    Noguez, G.

    1988-01-01

    Application of Heisenberg's principle to inertial frame transformations allows a distinction between three commutative groups of reciprocal transformations along one direction: Galilean transformations, dual transformations, and Lorentz transformations. These are three conjugate groups and for a given direction, the related commutators are all proportional to one single conjugation transformation which compensates for uniform and rectilinear motions. The three transformation groups correspond to three complementary ways of measuring space-time as a whole. Heisenberg's Principle then gets another explanation [fr

  17. On minimizers of causal variational principles

    International Nuclear Information System (INIS)

    Schiefeneder, Daniela

    2011-01-01

    Causal variational principles are a class of nonlinear minimization problems which arise in a formulation of relativistic quantum theory referred to as the fermionic projector approach. This thesis is devoted to a numerical and analytic study of the minimizers of a general class of causal variational principles. We begin with a numerical investigation of variational principles for the fermionic projector in discrete space-time. It is shown that for sufficiently many space-time points, the minimizing fermionic projector induces non-trivial causal relations on the space-time points. We then generalize the setting by introducing a class of causal variational principles for measures on a compact manifold. In our main result we prove under general assumptions that the support of a minimizing measure is either completely timelike, or it is singular in the sense that its interior is empty. In the examples of the circle, the sphere and certain flag manifolds, the general results are supplemented by a more detailed analysis of the minimizers. (orig.)

  18. Quantum properties of a superposition of squeezed displaced two-mode vacuum and single-photon states

    International Nuclear Information System (INIS)

    El-Orany, Faisal A A; Obada, A-S F; M Asker, Zafer; Perina, J

    2009-01-01

    In this paper, we study some quantum properties of a superposition of displaced squeezed two-mode vacuum and single-photon states, such as the second-order correlation function, the Cauchy-Schwarz inequality, quadrature squeezing, quasiprobability distribution functions and purity. These types of states involve two mechanisms, namely interference in phase space and entanglement. We show that these states can exhibit sub-Poissonian statistics and squeezing, and can violate the classical Cauchy-Schwarz inequality. Moreover, the amount of entanglement in the system can be increased by increasing the squeezing mechanism. In the framework of the quasiprobability distribution functions, we show that the single-mode state can tend to the thermal state based on the correlation mechanism. A generation scheme for such states is given.

  19. The base rate principle and the fairness principle in social judgment.

    Science.gov (United States)

    Cao, Jack; Banaji, Mahzarin R

    2016-07-05

    Meet Jonathan and Elizabeth. One person is a doctor and the other is a nurse. Who is the doctor? When nothing else is known, the base rate principle favors Jonathan to be the doctor and the fairness principle favors both individuals equally. However, when individuating facts reveal who is actually the doctor, base rates and fairness become irrelevant, as the facts make the correct answer clear. In three experiments, explicit and implicit beliefs were measured before and after individuating facts were learned. These facts were either stereotypic (e.g., Jonathan is the doctor, Elizabeth is the nurse) or counterstereotypic (e.g., Elizabeth is the doctor, Jonathan is the nurse). Results showed that before individuating facts were learned, explicit beliefs followed the fairness principle, whereas implicit beliefs followed the base rate principle. After individuating facts were learned, explicit beliefs correctly aligned with stereotypic and counterstereotypic facts. Implicit beliefs, however, were immune to counterstereotypic facts and continued to follow the base rate principle. Having established the robustness and generality of these results, a fourth experiment verified that gender stereotypes played a causal role: when both individuals were male, explicit and implicit beliefs alike correctly converged with individuating facts. Taken together, these experiments demonstrate that explicit beliefs uphold fairness and incorporate obvious and relevant facts, but implicit beliefs uphold base rates and appear relatively impervious to counterstereotypic facts.

  20. Generation and measurement of nonclassical states by quantum Fock filter

    International Nuclear Information System (INIS)

    D'Ariano, G.M.; Maccone, L.; Paris, M.G.A.; Sacchi, M.F.

    1999-01-01

    We study a novel optical setup which selects a specific Fock component from a generic input state. The device allows one to synthesize number states and superpositions of a few number states, and to measure the photon distribution and the density matrix of a generic signal. (Authors)

  1. Experimental generation of optical coherence lattices

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Yahong; Cai, Yangjian, E-mail: serpo@dal.ca, E-mail: yangjiancai@suda.edu.cn [College of Physics, Optoelectronics and Energy and Collaborative Innovation Center of Suzhou Nano Science and Technology, Soochow University, Suzhou 215006 (China); Key Lab of Advanced Optical Manufacturing Technologies of Jiangsu Province and Key Lab of Modern Optical Technologies of Education Ministry of China, Soochow University, Suzhou 215006 (China); Ponomarenko, Sergey A., E-mail: serpo@dal.ca, E-mail: yangjiancai@suda.edu.cn [Department of Electrical and Computer Engineering, Dalhousie University, Halifax, Nova Scotia B3J 2X4 (Canada)

    2016-08-08

    We report experimental generation and measurement of recently introduced optical coherence lattices. The presented optical coherence lattice realization technique hinges on a superposition of mutually uncorrelated partially coherent Schell-model beams with tailored coherence properties. We show theoretically that information can be encoded into and, in principle, recovered from the lattice degree of coherence. Our results can find applications to image transmission and optical encryption.

  2. Motion Estimation Using the Single-row Superposition-type Planar Compound-like Eye

    Directory of Open Access Journals (Sweden)

    Gwo-Long Lin

    2007-06-01

    How can the compound eye of insects capture prey so accurately and quickly? This interesting issue is explored from the perspective of computer vision instead of from the viewpoint of biology. The focus is on performance evaluation of noise immunity for motion recovery using the single-row superposition-type planar compound-like eye (SPCE). The SPCE owns a special symmetrical framework with a tremendous amount of ommatidia, inspired by the compound eye of insects. The noise simulates possible ambiguity of image patterns caused by either environmental uncertainty or low resolution of CCD devices. Results of extensive simulations indicate that this special visual configuration provides excellent motion estimation performance regardless of the magnitude of the noise. Even when the noise interference is serious, the SPCE is able to dramatically reduce errors of motion recovery of the ego-translation without any type of filters. In other words, the symmetrical, regular, and multiple vision sensing devices of the compound-like eye have a statistical averaging advantage that suppresses possible noise. This discovery lays the basic foundation, in terms of engineering approaches, for the secret of the compound eye of insects.

  3. Performance Analysis of Diversity-Controlled Multi-User Superposition Transmission for 5G Wireless Networks.

    Science.gov (United States)

    Yeom, Jeong Seon; Chu, Eunmi; Jung, Bang Chul; Jin, Hu

    2018-02-10

    In this paper, we propose a novel low-complexity multi-user superposition transmission (MUST) technique for 5G downlink networks, which allows multiple cell-edge users to be multiplexed with a single cell-center user. We call the proposed technique diversity-controlled MUST technique since the cell-center user enjoys the frequency diversity effect via signal repetition over multiple orthogonal frequency division multiplexing (OFDM) sub-carriers. We assume that a base station is equipped with a single antenna but users are equipped with multiple antennas. In addition, we assume that the quadrature phase shift keying (QPSK) modulation is used for users. We mathematically analyze the bit error rate (BER) of both cell-edge users and cell-center users, which is the first theoretical result in the literature to the best of our knowledge. The mathematical analysis is validated through extensive link-level simulations.

  4. Resorting the NIST undulator using simulated annealing for field error reduction

    International Nuclear Information System (INIS)

    Denbeaux, Greg; Johnson, Lewis E.; Madey, John M.J.

    2000-01-01

    We have used a simulated annealing algorithm to sort the samarium cobalt blocks and vanadium permendur poles in the hybrid NIST undulator to optimize the spectrum of the emitted light. While simulated annealing has proven highly effective in sorting the SmCo blocks in pure REC undulators, the reliance on magnetically 'soft' poles operating near saturation to concentrate the flux in hybrid undulators introduces a pair of additional variables - the permeability and saturation induction of the poles - which limit the utility of the assumption of superposition on which most simulated annealing codes rely. Detailed magnetic measurements clearly demonstrated the failure of the superposition principle due to random variations in the permeability in the 'unsorted' NIST undulator. To deal with the issue, we measured both the magnetization of the REC blocks and the permeability of the NIST's integrated vanadium permendur poles, and implemented a sorting criterion that minimized the pole-to-pole variations in permeability, to satisfy the conditions for realization of superposition on a nearest-neighbor basis. Though still imperfect, the computed spectrum of the radiation from the re-sorted and annealed NIST undulator is significantly superior to that of the original, unsorted device.
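    A minimal version of the annealing step described above, sorting poles so that nearest-neighbour permeability differences are smoothed, might look like the following (the cost function, cooling schedule, and synthetic permeability values are illustrative, not the authors' code):

```python
import math
import random

def cost(order, mu):
    """Sum of squared pole-to-pole permeability differences between
    neighbouring positions: the quantity the sort tries to minimize."""
    return sum((mu[a] - mu[b]) ** 2 for a, b in zip(order, order[1:]))

def anneal(mu, steps=20000, t0=1.0, cooling=0.9995, seed=1):
    """Simulated annealing over pole orderings via random pair swaps."""
    rng = random.Random(seed)
    order = list(range(len(mu)))
    current = cost(order, mu)
    best, best_order = current, order[:]
    temp = t0
    for _ in range(steps):
        i, j = rng.sample(range(len(mu)), 2)
        order[i], order[j] = order[j], order[i]
        trial = cost(order, mu)
        # Metropolis criterion: always accept improvements, sometimes accept
        # worse orderings while the temperature is still high.
        if trial < current or rng.random() < math.exp((current - trial) / temp):
            current = trial
            if current < best:
                best, best_order = current, order[:]
        else:
            order[i], order[j] = order[j], order[i]  # revert the swap
        temp *= cooling
    return best_order, best

rng = random.Random(0)
mus = [1.0 + 0.05 * rng.random() for _ in range(20)]  # synthetic pole data
_, annealed = anneal(mus)
print(annealed <= cost(list(range(len(mus))), mus))  # -> True
```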

  5. Characterization of invariant measures at the leading edge for competing particle systems

    CERN Document Server

    Ruzmaikina, A

    2004-01-01

    We study systems of particles on a line which have a maximum, are locally finite, and evolve with independent increments. 'Quasi-stationary states' are defined as probability measures, on the $\sigma$-algebra generated by the gap variables, for which the joint distribution of the gaps is invariant under the time evolution. Examples are provided by Poisson processes with densities of the form $\rho(dx) = e^{-sx}\, s\, dx$, with $s > 0$, and linear superpositions of such measures. We show that, conversely, any quasi-stationary state for the independent dynamics, with an exponentially bounded integrated density of particles, corresponds to a superposition of the above described probability measures, restricted to the relevant $\sigma$-algebra. Among the systems for which this question is of some relevance are spin-glass models of statistical mechanics, where the point process represents the collection of the free energies of distinct ``pure states'', the time evolution corresponds to the addition of a spi...

  6. Nomenclature and principle of neutron humidistats design and methods of their checking

    International Nuclear Information System (INIS)

    Chaladze, A.P.; Melkumyan, V.E.

    1980-01-01

    The state of neutron hydrometry in ferrous metallurgy is reviewed. The nomenclature and technical characteristics of neutron humidistats are presented, together with the methods for testing them, the local testing scheme for imitator certification, and the testing of the devices. By design, neutron humidistats fall into two-channel and three-channel types. By structural realization, humidistats are classified into devices of the external type, designed for measuring humidity in technological vessels, and devices of the superposition type, designed for measuring the humidity of material on a moving conveyor. The design of imitators for all types of humidistats is similar, namely the use of neutron moderators and absorbers displaced relative to each other [ru

  7. EPR, optical and superposition model study of Mn2+ doped L+ glutamic acid

    Science.gov (United States)

    Kripal, Ram; Singh, Manju

    2015-12-01

    Electron paramagnetic resonance (EPR) study of Mn2+ doped L+ glutamic acid single crystal is done at room temperature. Four interstitial sites are observed and the spin Hamiltonian parameters are calculated with the help of large number of resonant lines for various angular positions of external magnetic field. The optical absorption study is also done at room temperature. The energy values for different orbital levels are calculated, and observed bands are assigned as transitions from 6A1g(s) ground state to various excited states. With the help of these assigned bands, Racah inter-electronic repulsion parameters B = 869 cm-1, C = 2080 cm-1 and cubic crystal field splitting parameter Dq = 730 cm-1 are calculated. Zero field splitting (ZFS) parameters D and E are calculated by the perturbation formulae and crystal field parameters obtained using superposition model. The calculated values of ZFS parameters are in good agreement with the experimental values obtained by EPR.

  8. Quantum mechanics and the equivalence principle

    International Nuclear Information System (INIS)

    Davies, P C W

    2004-01-01

    A quantum particle moving in a gravitational field may penetrate the classically forbidden region of the gravitational potential. This raises the question of whether the time of flight of a quantum particle in a gravitational field might deviate systematically from that of a classical particle due to tunnelling delay, representing a violation of the weak equivalence principle. I investigate this using a model quantum clock to measure the time of flight of a quantum particle in a uniform gravitational field, and show that a violation of the equivalence principle does not occur when the measurement is made far from the turning point of the classical trajectory. The results are then confirmed using the so-called dwell time definition of quantum tunnelling. I conclude with some remarks about the strong equivalence principle in quantum mechanics

  9. Ultrafast convolution/superposition using tabulated and exponential kernels on GPU

    Energy Technology Data Exchange (ETDEWEB)

    Chen Quan; Chen Mingli; Lu Weiguo [TomoTherapy Inc., 1240 Deming Way, Madison, Wisconsin 53717 (United States)

    2011-03-15

    Purpose: Collapsed-cone convolution/superposition (CCCS) dose calculation is the workhorse for IMRT dose calculation. The authors present a novel algorithm for computing CCCS dose on the modern graphic processing unit (GPU). Methods: The GPU algorithm includes a novel TERMA calculation that has no write-conflicts and has linear computation complexity. The CCCS algorithm uses either tabulated or exponential cumulative-cumulative kernels (CCKs) as reported in literature. The authors have demonstrated that the use of exponential kernels can reduce the computation complexity by order of a dimension and achieve excellent accuracy. Special attentions are paid to the unique architecture of GPU, especially the memory accessing pattern, which increases performance by more than tenfold. Results: As a result, the tabulated kernel implementation in GPU is two to three times faster than other GPU implementations reported in literature. The implementation of CCCS showed significant speedup on GPU over single core CPU. On tabulated CCK, speedups as high as 70 are observed; on exponential CCK, speedups as high as 90 are observed. Conclusions: Overall, the GPU algorithm using exponential CCK is 1000-3000 times faster over a highly optimized single-threaded CPU implementation using tabulated CCK, while the dose differences are within 0.5% and 0.5 mm. This ultrafast CCCS algorithm will allow many time-sensitive applications to use accurate dose calculation.
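    The convolution/superposition idea itself is compact: compute TERMA from primary-beam attenuation, then spread each TERMA element through an energy-deposition kernel. A 1-D toy sketch with a normalized exponential kernel standing in for the cumulative-cumulative kernel of the paper (the attenuation coefficient and kernel width are illustrative):

```python
import math

def terma(z_cm, mu=0.05):
    """Toy TERMA: exponential attenuation of the primary beam (arb. units)."""
    return math.exp(-mu * z_cm)

def dose_1d(depths, kernel_tau=1.5):
    """1-D convolution/superposition: each TERMA element deposits dose
    around itself through a normalized exponential spread kernel."""
    dz = depths[1] - depths[0]
    weights = {k: math.exp(-abs(k) * dz / kernel_tau) for k in range(-20, 21)}
    norm = sum(weights.values())
    dose = [0.0] * len(depths)
    for i, z in enumerate(depths):
        t = terma(z)
        for k, w in weights.items():
            if 0 <= i + k < len(depths):
                dose[i + k] += t * w / norm  # superpose the scaled kernel
    return dose

depths = [0.1 * n for n in range(201)]  # 0 to 20 cm in 1 mm steps
dose = dose_1d(depths)
print(len(dose), max(dose) <= 1.0)  # -> 201 True
```

With the normalized kernel, the dose nowhere exceeds the peak TERMA, and energy is redistributed around each voxel rather than created, which is the essence of the superposition step.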

  10. A convolution-superposition dose calculation engine for GPUs

    Energy Technology Data Exchange (ETDEWEB)

    Hissoiny, Sami; Ozell, Benoit; Despres, Philippe [Departement de genie informatique et genie logiciel, Ecole polytechnique de Montreal, 2500 Chemin de Polytechnique, Montreal, Quebec H3T 1J4 (Canada); Departement de radio-oncologie, CRCHUM-Centre hospitalier de l' Universite de Montreal, 1560 rue Sherbrooke Est, Montreal, Quebec H2L 4M1 (Canada)

    2010-03-15

    Purpose: Graphic processing units (GPUs) are increasingly used for scientific applications, where their parallel architecture and unprecedented computing power density can be exploited to accelerate calculations. In this paper, a new GPU implementation of a convolution/superposition (CS) algorithm is presented. Methods: This new GPU implementation has been designed from the ground-up to use the graphics card's strengths and to avoid its weaknesses. The CS GPU algorithm takes into account beam hardening, off-axis softening, kernel tilting, and relies heavily on raytracing through patient imaging data. Implementation details are reported as well as a multi-GPU solution. Results: An overall single-GPU acceleration factor of 908x was achieved when compared to a nonoptimized version of the CS algorithm implemented in PlanUNC in single threaded central processing unit (CPU) mode, resulting in approximately 2.8 s per beam for a 3D dose computation on a 0.4 cm grid. A comparison to an established commercial system leads to an acceleration factor of approximately 29x, or 0.58 versus 16.6 s per beam in single threaded mode. An acceleration factor of 46x has been obtained for the total energy released per mass (TERMA) calculation and a 943x acceleration factor for the CS calculation compared to PlanUNC. Dose distributions have also been obtained for a simple water-lung phantom to verify that the implementation gives accurate results. Conclusions: These results suggest that GPUs are an attractive solution for radiation therapy applications and that careful design, taking the GPU architecture into account, is critical in obtaining significant acceleration factors. These results potentially can have a significant impact on complex dose delivery techniques requiring intensive dose calculations such as intensity-modulated radiation therapy (IMRT) and arc therapy. They are also relevant for adaptive radiation therapy where dose results must be obtained rapidly.

  11. Update heat exchanger designing principles

    International Nuclear Information System (INIS)

    Lipets, A.U.; Yampol'skij, A.E.

    1985-01-01

    Updated heat exchanger design principles are analysed. Different coolant flow patterns in a heat exchanger are considered. It is suggested that flow-rate irregularity in the exchanger be organized rationally. Taking measures at the design stage to exploit the temperature and flow-rate irregularities that actually exist will permit improvement of heat exchanger efficiency. In some cases it is expedient to produce irregularities artificially. In this connection, some heat exchanger design principles must now be reviewed.

  12. Correlation between mean transverse momentum and charged particle multiplicity based on geometrical superposition of p-Pb collisions

    Energy Technology Data Exchange (ETDEWEB)

    Jung, Jerome [Institut fuer Kernphysik, Goethe-Universitaet Frankfurt (Germany); Collaboration: ALICE-Collaboration

    2015-07-01

    The mean transverse momentum ⟨p_T⟩ as a function of the charged-particle multiplicity N_ch in pp, p-Pb and Pb-Pb collisions was recently published by ALICE. While in pp and in p-Pb collisions a strong increase of ⟨p_T⟩ with N_ch is observed, Pb-Pb collisions show a saturation at a much lower ⟨p_T⟩. Efforts to reproduce this behaviour in Pb-Pb with a superposition of nucleon-nucleon interactions have not succeeded. A superposition of p-Pb collisions seems more promising, since the p-Pb data show characteristics of both pp and Pb-Pb collisions. The geometric distribution of the p-Pb impact parameters is based on the Woods-Saxon density distribution. Using the correlation between the impact parameter and the multiplicity N_ch in p-Pb collisions, a multiplicity spectrum was generated. Combining this spectrum with experimental p-Pb data, we present ⟨p_T⟩ as a function of N_ch in simulated Pb-Pb collisions and compare it to the correlation measured in Pb-Pb by ALICE.

  13. Measurement of the orbital angular momentum density of Bessel beams by projection into a Laguerre–Gaussian basis

    CSIR Research Space (South Africa)

    Schulze, C

    2014-09-01

    We present the measurement of the orbital angular momentum (OAM) density of Bessel beams and superpositions thereof by projection into a Laguerre–Gaussian basis. This projection is performed by an all-optical inner product measurement performed...

  14. Is the Precautionary Principle Really Incoherent?

    Science.gov (United States)

    Boyer-Kassem, Thomas

    2017-11-01

    The Precautionary Principle has been an increasingly important principle in international treaties since the 1980s. Through varying formulations, it states that when an activity can lead to a catastrophe for human health or the environment, measures should be taken to prevent it even if the cause-and-effect relationship is not fully established scientifically. The Precautionary Principle has been critically discussed from many sides. This article concentrates on a theoretical argument by Peterson (2006) according to which the Precautionary Principle is incoherent with other desiderata of rational decision making, and thus cannot be used as a decision rule that selects an action among several. I claim here that Peterson's argument fails to establish the incoherence of the Precautionary Principle, by attacking three of its premises. I argue (i) that Peterson's treatment of uncertainties lacks generality, (ii) that his Archimedean condition is problematic for incommensurability reasons, and (iii) that his explication of the Precautionary Principle is not adequate. This leads me to conjecture that the Precautionary Principle can once again be envisaged as a coherent decision rule. © 2017 Society for Risk Analysis.

  15. Feedback Control of a Solid-State Qubit Using High-Fidelity Projective Measurement

    NARCIS (Netherlands)

    Riste, D.; Bultink, C.C.; Lehnert, K.W.; DiCarlo, L.

    2012-01-01

    We demonstrate feedback control of a superconducting transmon qubit using discrete, projective measurement and conditional coherent driving. Feedback realizes a fast and deterministic qubit reset to a target state with 2.4% error averaged over input superposition states, and allows concatenating

  16. Principles and applications of measurement and uncertainty analysis in research and calibration

    Energy Technology Data Exchange (ETDEWEB)

    Wells, C.V.

    1992-11-01

    Interest in Measurement Uncertainty Analysis has grown in the past several years as it has spread to new fields of application, and research and development of uncertainty methodologies have continued. This paper discusses the subject from the perspectives of both research and calibration environments. It presents a history of the development and an overview of the principles of uncertainty analysis embodied in the United States National Standard, ANSI/ASME PTC 19.1-1985, Measurement Uncertainty. Examples are presented in which uncertainty analysis was utilized or is needed to gain further knowledge of a particular measurement process and to characterize final results. Measurement uncertainty analysis provides a quantitative estimate of the interval about a measured value or an experiment result within which the true value of that quantity is expected to lie. Years ago, Harry Ku of the United States National Bureau of Standards stated that "The informational content of the statement of uncertainty determines, to a large extent, the worth of the calibrated value." Today, that statement is just as true about calibration or research results as it was in 1968. Why is that true? What kind of information should we include in a statement of uncertainty accompanying a calibrated value? How and where do we get the information to include in an uncertainty statement? How should we interpret and use measurement uncertainty information? This discussion will provide answers to these and other questions about uncertainty in research and in calibration. The methodology to be described has been developed by national and international groups over the past nearly thirty years, and individuals were publishing information even earlier. Yet the work is largely unknown in many science and engineering arenas. I will illustrate various aspects of uncertainty analysis with some examples drawn from the radiometry measurement and calibration discipline from research activities.

  18. Enhancing quantum entanglement for continuous variables by a coherent superposition of photon subtraction and addition

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Su-Yong; Kim, Ho-Joon [Department of Physics, Texas A and M University at Qatar, P.O. Box 23874, Doha (Qatar); Ji, Se-Wan [School of Computational Sciences, Korea Institute for Advanced Study, Seoul 130-012 (Korea, Republic of); Nha, Hyunchul [Department of Physics, Texas A and M University at Qatar, P.O. Box 23874, Doha (Qatar); Institute fuer Quantenphysik, Universitaet Ulm, D-89069 Ulm (Germany)

    2011-07-15

    We investigate how the entanglement properties of a two-mode state can be improved by performing a coherent superposition operation t·a + r·a† of photon subtraction and addition, proposed by Lee and Nha [Phys. Rev. A 82, 053812 (2010)], on each mode. We show that the degree of entanglement, the Einstein-Podolsky-Rosen-type correlation, and the performance of quantum teleportation can all be enhanced for the output state when the coherent operation is applied to a two-mode squeezed state. The effects of the coherent operation are more prominent than those of mere photon subtraction a and addition a†, particularly in the small-squeezing regime, whereas the optimal operation becomes photon subtraction (the case r = 0) in the large-squeezing regime.
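
The enhancement can be checked numerically in a truncated Fock basis. The sketch below (pure NumPy; the squeezing parameter, truncation dimension, and the t, r values are arbitrary illustrative choices, not the optimal values from the paper) builds a two-mode squeezed vacuum, applies t·a + r·a† to each mode, and compares the entanglement entropy before and after.

```python
import numpy as np

dim = 30                     # Fock-space truncation (adequate for small lam)
lam = 0.2                    # tanh of the squeezing; small-squeezing regime

# Two-mode squeezed vacuum: coefficient lam**n on |n>|n>
C = np.diag(lam ** np.arange(dim))

# Mode operators in the truncated Fock basis
a = np.diag(np.sqrt(np.arange(1, dim)), k=1)     # annihilation
adag = a.T                                       # creation

def entanglement_entropy(M):
    # Schmidt decomposition of the coefficient matrix via SVD,
    # then the von Neumann entropy (in bits) of the reduced state.
    s = np.linalg.svd(M, compute_uv=False)
    p = s**2 / np.sum(s**2)
    p = p[p > 1e-15]
    return float(-np.sum(p * np.log2(p)))

def coherent_op(M, t, r):
    # Apply (t*a + r*a_dagger) to each mode of the two-mode state.
    O = t * a + r * adag
    return O @ M @ O.T

before = entanglement_entropy(C)
after = entanglement_entropy(coherent_op(C, t=0.9, r=np.sqrt(1 - 0.9**2)))
print(before, after)         # entanglement grows under the coherent operation
```

For this small squeezing the entropy of the output state exceeds that of the input two-mode squeezed vacuum, consistent with the claimed enhancement.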

  19. Neural Network Molecule: a Solution of the Inverse Biometry Problem through Software Support of Quantum Superposition on Outputs of the Network of Artificial Neurons

    Directory of Open Access Journals (Sweden)

    Vladimir I. Volchikhin

    2017-12-01

    Full Text Available Introduction: The aim of the study is to accelerate the solution of the neural-network biometrics inverse problem on an ordinary desktop computer. Materials and Methods: To speed up the calculations, the artificial neural network is put into a dynamic mode of “jittering” of the states of all 256 output bits. At the same time, the very large number of output states of the neural network is logarithmically folded by transitioning to the space of Hamming distances between the code of the image “Own” and the codes of the images “Alien”. From the database of “Alien” images, the 2.5 % most similar images are selected. In the next generation, the 97.5 % of discarded images are restored with GOST R 52633.2-2010 procedures by crossing parent images and obtaining descendant images from them. Results: Over a period of about 10 minutes, 60 generations of directed search for the solution of the inverse problem can be realized, which allows inverting matrices of neural-network functionals of dimension 416 inputs by 256 outputs with restoration of up to 97 % of the information on the unknown biometric parameters of the image “Own”. Discussion and Conclusions: Supporting the 256-qubit quantum superposition for 10 minutes of computer time allows a conventional computer to bypass an actual infinity of analyzed states, 5050 (50 to 50) times more than the same computer could process with ordinary calculations. An increase in the length of the supported quantum superposition by 40 qubits is equivalent to increasing the processor clock speed by about a billion times. It is for this reason that it is more profitable to increase the number of quantum superpositions supported by the software emulator than to create a more powerful processor.

  20. Can Topology and Geometry be Measured by an Operator Measurement in Quantum Gravity?

    Science.gov (United States)

    Berenstein, David; Miller, Alexandra

    2017-06-30

    In the context of Lin-Lunin-Maldacena geometries, we show that superpositions of classical coherent states of trivial topology can give rise to new classical limits where the topology of spacetime has changed. We argue that this phenomenon implies that neither the topology nor the geometry of spacetime can be the result of an operator measurement. We address how to reconcile these statements with the usual semiclassical analysis of low energy effective field theory for gravity.

  1. The 4th Thermodynamic Principle?

    International Nuclear Information System (INIS)

    Montero Garcia, Jose de la Luz; Novoa Blanco, Jesus Francisco

    2007-01-01

    It should be emphasized that the 4th Principle formulated above is a thermodynamic principle and, at the same time, is quantum-mechanical and relativistic, as it inevitably should be; its absence has been one of the main theoretical limitations of physical theory until today. We show that the theoretical discovery of the Dimensional Primitive Octet of Matter, the 4th Thermodynamic Principle, the Quantum Hexet of Matter, the Global Hexagonal Subsystem of Fundamental Constants of Energy, and the Measurement or Connected Global Scale or Universal Existential Interval of Matter makes it possible to arrive at a global formulation of the four 'forces' or fundamental interactions of nature. Einstein's golden dream is possible.

  2. Probing Nuclear Spin Effects on Electronic Spin Coherence via EPR Measurements of Vanadium(IV) Complexes.

    Science.gov (United States)

    Graham, Michael J; Krzyaniak, Matthew D; Wasielewski, Michael R; Freedman, Danna E

    2017-07-17

    Quantum information processing (QIP) has the potential to transform numerous fields from cryptography, to finance, to the simulation of quantum systems. A promising implementation of QIP employs unpaired electronic spins as qubits, the fundamental units of information. Though molecular electronic spins offer many advantages, including chemical tunability and facile addressability, the development of design principles for the synthesis of complexes that exhibit long qubit superposition lifetimes (also known as coherence times, or T2) remains a challenge. As nuclear spins in the local qubit environment are a primary cause of shortened superposition lifetimes, we recently conducted a study which employed a modular spin-free ligand scaffold to place a spin-laden propyl moiety at a series of fixed distances from an S = 1/2 vanadium(IV) ion in a series of vanadyl complexes. We found that, within a radius of 4.0(4)-6.6(6) Å from the metal center, nuclei did not contribute to decoherence. To assess the generality of this important design principle and test its efficacy in a different coordination geometry, we synthesized and investigated three vanadium tris(dithiolene) complexes with the same ligand set employed in our previous study: K2[V(C5H6S4)3] (1), K2[V(C7H6S6)3] (2), and K2[V(C9H6S8)3] (3). We specifically interrogated solutions of these complexes in DMF-d7/toluene-d8 with pulsed electron paramagnetic resonance spectroscopy and electron nuclear double resonance spectroscopy and found that the distance dependence present in the previously synthesized vanadyl complexes holds true in this series. We further examined the coherence properties of the series in a different solvent, MeCN-d3/toluene-d8, and found that an additional property, the charge density of the complex, also affects decoherence across the series. These results highlight a previously unknown design principle for augmenting T2 and open new pathways for the

  3. The action uncertainty principle and quantum gravity

    Science.gov (United States)

    Mensky, Michael B.

    1992-02-01

    Results of the path-integral approach to the quantum theory of continuous measurements were formulated in a preceding paper in the form of an inequality of the type of the uncertainty principle. The new inequality was called the action uncertainty principle, AUP. It was shown that the AUP allows one to find in a simple way which outputs of the continuous measurements will occur with high probability. Here a simpler form of the AUP is formulated: δS ≳ ħ. When applied to quantum gravity, it leads in a very simple way to the Rosenfeld inequality for the measurability of the average curvature.

  4. Quantum measure of nonclassical light

    International Nuclear Information System (INIS)

    Kim, Ki Sik

    2003-01-01

    The nonclassical light and its properties are reviewed in the phase-space representation. The quantitative measure of nonclassicality for the single-mode case is introduced and its physical significance is discussed in terms of the environmental effects on nonclassicality. The quantitative measure of the nonclassical property is defined and used to classify the different nonclassical properties. The nonclassical measure is also extended to the multi-mode case. One of the distinctive features of multi-mode nonclassical light is entanglement, which cannot be possessed by a single-mode light, and the multi-mode nonclassical measure may reflect the content of entanglement. The multi-mode nonclassical measure is calculated for the superposition through a beam splitter and compared with the single-mode nonclassical measure.

  5. A systematic methodology for creep master curve construction using the stepped isostress method (SSM): a numerical assessment

    Science.gov (United States)

    Miranda Guedes, Rui

    2018-02-01

    Long-term creep of viscoelastic materials is experimentally inferred through accelerated techniques based on the time-temperature superposition principle (TTSP) or on the time-stress superposition principle (TSSP). According to these principles, a given property measured over short times at a higher temperature or higher stress level remains the same as that obtained over longer times at a lower temperature or lower stress level, except that the curves are shifted parallel to the horizontal axis, matching a master curve. These procedures enable the construction of creep master curves from short-term experimental tests. The Stepped Isostress Method (SSM) is an evolution of the classical TSSP method. A greater reduction of the required number of test specimens is achieved by the SSM technique, since only one specimen is necessary; the classical approach, using creep tests, demands at least one specimen per stress level to produce the set of creep curves upon which TSSP is applied to obtain the master curve. This work proposes an analytical method to process the SSM raw data. The method is validated using numerical simulations that reproduce the SSM tests based on two different viscoelastic models. One model represents the viscoelastic behavior of a graphite/epoxy laminate and the other represents an adhesive based on epoxy resin.
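
The horizontal-shifting step behind TTSP/TSSP master-curve construction can be sketched numerically. Below, synthetic creep-compliance curves that differ only by a known shift factor are generated, and the shift factors are recovered by a grid search over the log-time axis (the power-law material model, temperature levels, and shift values are hypothetical illustrations, not data from the paper).

```python
import numpy as np

# Synthetic creep compliance obeying time-temperature/stress superposition:
# D(t; T) = D0 + D1 * (t / a_T)**n, i.e. each curve is the reference curve
# shifted horizontally by log10(a_T) on the log-time axis.
D0, D1, n = 1.0, 0.5, 0.3
true_log_a = {25: 0.0, 40: -1.2, 55: -2.5}       # reference level: 25

t = np.logspace(0, 3, 60)                        # short-term test window [s]
log_t = np.log10(t)

def compliance(time, log_a):
    return D0 + D1 * (time / 10.0**log_a)**n

curves = {T: compliance(t, la) for T, la in true_log_a.items()}
ref = curves[25]

def find_log_shift(curve):
    # Grid-search the horizontal shift that best overlaps `curve`
    # with the reference curve, comparing only in the overlap region.
    best_la, best_err = 0.0, np.inf
    for la in np.arange(-4.0, 0.5, 0.01):
        lt = log_t - la                          # position on reference axis
        mask = (lt >= log_t[0]) & (lt <= log_t[-1])
        if mask.sum() < 5:                       # require some overlap
            continue
        err = np.mean((np.interp(lt[mask], log_t, ref) - curve[mask])**2)
        if err < best_err:
            best_la, best_err = la, err
    return best_la

shifts = {T: find_log_shift(c) for T, c in curves.items()}
print(shifts)    # recovered log10(a_T) values, close to true_log_a
```

Plotting each curve against log10(t) - shifts[T] would collapse all three onto a single master curve spanning far more decades of time than the original test window.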

  6. Study of principal error sources in gamma spectrometry. Application to cross sections measurement

    International Nuclear Information System (INIS)

    Majah, M. Ibn.

    1985-01-01

    The principal error sources in gamma spectrometry have been studied with the aim of measuring cross sections with high precision. Three error sources have been studied: dead time and pile-up, which depend on the counting rate, and the coincidence effect, which depends on the disintegration scheme of the radionuclide in question. A constant-frequency pulse generator has been used to correct the counting loss due to dead time and pile-up in cases of long and short disintegration periods. The loss due to the coincidence effect can reach 25% or more, depending on the disintegration scheme and on the source-detector distance. After establishing the correction formula and verifying its validity for four examples: iron-56, scandium-48, antimony-120 and gold-196m, an application was made by measuring cross sections of nuclear reactions that lead to long disintegration periods, which require counting at short source-detector distance and thus correction of the losses due to dead time, pile-up and the coincidence effect. 16 refs., 45 figs., 25 tabs. (author)
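
The counting-loss corrections discussed in the abstract can be summarized by two textbook formulas: the non-paralyzable dead-time correction and the pulser (constant-frequency generator) method. The sketch below uses standard forms of these corrections with made-up numbers; it is not the specific correction formula derived in the thesis.

```python
# Two textbook counting-loss corrections (illustrative numbers).

def nonparalyzable_correction(measured_rate, dead_time):
    # Non-paralyzable dead-time model: m = n / (1 + n*tau),
    # inverted to recover the true rate n = m / (1 - m*tau).
    return measured_rate / (1.0 - measured_rate * dead_time)

def pulser_correction(peak_counts, pulser_counted, pulser_injected):
    # Pulser method: a constant-frequency generator feeds the chain; the
    # surviving fraction of pulser events measures the combined
    # dead-time + pile-up loss and rescales the peak counts.
    return peak_counts * pulser_injected / pulser_counted

m, tau = 9000.0, 5e-6         # observed counts/s, dead time per event [s]
print(nonparalyzable_correction(m, tau))       # ~9424 counts/s

# 10000 pulser pulses injected, 9500 recorded => ~5 % loss
print(pulser_correction(47500, 9500, 10000))   # 50000.0
```

The pulser method has the advantage of correcting dead time and pile-up together, since injected pulses suffer the same losses as real events in the amplifier chain.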

  7. Limited entropic uncertainty as new principle of quantum physics

    International Nuclear Information System (INIS)

    Ion, D.B.; Ion, M.L.

    2001-01-01

    The Uncertainty Principle (UP) of quantum mechanics discovered by Heisenberg, which constitutes the corner-stone of quantum physics, asserts that there is an irreducible lower bound on the uncertainty in the result of a simultaneous measurement of non-commuting observables. In order to avoid this state-dependence, many authors have proposed to use the information entropy as a measure of the uncertainty instead of the standard quantitative formulation of the Heisenberg uncertainty principle above. In this paper the Principle of Limited Entropic Uncertainty (LEU-Principle), as a new principle in quantum physics, is proved. Then, consistent experimental tests of the LEU-Principle, obtained by using the available 49 sets of pion-nucleus phase shifts, are presented for both the extensive (q=1) and nonextensive (q=0.5 and q=2.0) cases. Some results obtained by applying the LEU-Principle to diffraction phenomena are also discussed. The main results and conclusions of our paper can be summarized as follows: (i) We introduced a new principle in quantum physics, namely the Principle of Limited Entropic Uncertainty (LEU-Principle). This new principle includes in a more general and exact form not only the old Heisenberg uncertainty principle but also introduces an upper limit on the magnitude of the uncertainty in quantum physics. The LEU-Principle asserts that 'there is an irreducible lower bound as well as an upper bound on the uncertainty in the result of a simultaneous measurement of non-commuting observables for any extensive and nonextensive (q ≥ 0) quantum systems'; (ii) Two important concrete realizations of the LEU-Principle are explicitly obtained in this paper, namely: (a) the LEU-inequalities for the quantum scattering of spinless particles and (b) the LEU-inequalities for diffraction on a single slit of width 2a. In particular, from our general results in the limit y → +1 we recover in an exact form all the results previously reported. In our paper an

  8. Level crossings and excess times due to a superposition of uncorrelated exponential pulses

    Science.gov (United States)

    Theodorsen, A.; Garcia, O. E.

    2018-01-01

    A well-known stochastic model for intermittent fluctuations in physical systems is investigated. The model is given by a superposition of uncorrelated exponential pulses, and the degree of pulse overlap is interpreted as an intermittency parameter. Expressions for excess time statistics, that is, the rate of level crossings above a given threshold and the average time spent above the threshold, are derived from the joint distribution of the process and its derivative. Limits of both high and low intermittency are investigated and compared to previously known results. In the case of a strongly intermittent process, the distribution of times spent above threshold is obtained analytically. This expression is verified numerically, and the distribution of times above threshold is explored for other intermittency regimes. The numerical simulations compare favorably to known results for the distribution of times above the mean threshold for an Ornstein-Uhlenbeck process. This contribution generalizes the excess time statistics for the stochastic model, which find applications in a wide diversity of natural and technological systems.
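
A minimal simulation of such a process is straightforward: superpose exponential pulses arriving at random times and count upward crossings of a threshold. The sketch below uses illustrative parameter values (unit pulse duration, intermittency parameter γ = 5, threshold at the mean) and a simple Euler-style time grid; it is not the analytical treatment of the paper.

```python
import math
import random

random.seed(2)

# Discrete-time shot noise: one-sided exponential pulses with unit duration
# time, exponentially distributed amplitudes, and Poisson arrivals.
tau_d = 1.0                  # pulse duration time
gamma = 5.0                  # intermittency parameter (pulse rate * tau_d)
dt = 0.01
n_steps = 200_000

x, signal = 0.0, []
for _ in range(n_steps):
    x *= math.exp(-dt / tau_d)               # all pulses decay exponentially
    if random.random() < gamma * dt / tau_d: # Poisson pulse arrival
        x += random.expovariate(1.0)         # unit-mean amplitude
    signal.append(x)

mean = sum(signal) / len(signal)             # stationary mean is ~gamma
upcrossings = sum(1 for a, b in zip(signal, signal[1:]) if a < mean <= b)
total_time = n_steps * dt
print(mean, upcrossings / total_time)        # rate of up-crossings of the mean
```

Lowering γ makes the signal strongly intermittent (long quiet stretches punctuated by isolated bursts), which changes both the crossing rate and the distribution of times spent above threshold.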

  9. Who has to pay for measures in the field of water management? A proposal for applying the polluter pays principle.

    Science.gov (United States)

    Grünebaum, Thomas; Schweder, Heinrich; Weyand, Michael

    2009-01-01

    There is no doubt about the fact that the implementation of the European Water Framework Directive (WFD) and the pursuit of its goal of good ecological status will give rise to measures in different fields of water management. However, a conclusive and transparent method of financing these measures is still missing up to now. Measures in the water management sector are no mere end in themselves; instead, they serve specific ends directed at human activities or they serve general environment objectives. Following the integrative approach of the WFD on looking upon river basins as a whole and its requirement to observe the polluter pays principle, all different groups within a river basin should contribute to the costs according to their cost-bearer roles as polluters, stakeholders with vested interests or beneficiaries via relevant yardsticks. In order to quantify the financial expenditure of each cost bearer, a special algorithm was developed and tested in the river basin of a small tributary of the Ruhr River. It was proved to be generally practicable with regard to its handling and the comprehension of the results. Therefore, the application of a cost bearer system based on the polluter-pays principle and thus in correspondence with the WFD's requirements should appear possible in order to finance future measures.

  10. Oscillatory Dynamics of One-Dimensional Homogeneous Granular Chains

    Science.gov (United States)

    Starosvetsky, Yuli; Jayaprakash, K. R.; Hasan, Md. Arif; Vakakis, Alexander F.

    The acoustics of homogeneous granular chains has been studied extensively, both numerically and experimentally, in the references cited in the previous chapters. This chapter focuses on the oscillatory behavior of finite-dimensional homogeneous granular chains. It is well known that normal vibration modes are the building blocks of the vibrations of linear systems due to the applicability of the principle of superposition. On the other hand, nonlinear theory is deprived of such a general superposition principle (although special cases of nonlinear superposition do exist), but nonlinear normal modes ‒ NNMs ‒ still play an important role in the forced and resonance dynamics of these systems. In their basic definition [1], NNMs were defined as time-periodic nonlinear oscillations of discrete or continuous dynamical systems where all coordinates (degrees-of-freedom) oscillate in unison with the same frequency; further extensions of this definition have been considered to account for NNMs of systems with internal resonances [2]...

  11. Measurement theory and the Schroedinger equation

    International Nuclear Information System (INIS)

    Schwarz, A.S.; Tyupkin, Yu.S.

    1987-01-01

    The paper is an analysis of the measuring process in quantum mechanics based on the Schroedinger equation. The arguments employed use an assumption reflecting, to some extent, the statistical properties of the vacuum. A description is given of the cases in which different incoherent superpositions of pure states in quantum mechanics are physically equivalent. The fundamental difference between quantum and classical mechanics as explained by the existence of unobservable variables is discussed. (U.K.)

  12. Itch Management: General Principles.

    Science.gov (United States)

    Misery, Laurent

    2016-01-01

    Like pain, itch is a challenging condition that needs to be managed. Within this setting, the first principle of itch management is to get an appropriate diagnosis to perform an etiology-oriented therapy. In several cases it is not possible to treat the cause, the etiology is undetermined, there are several causes, or the etiological treatment is not effective enough to alleviate itch completely. This is also why there is need for symptomatic treatment. In all patients, psychological support and associated pragmatic measures might be helpful. General principles and guidelines are required, yet patient-centered individual care remains fundamental. © 2016 S. Karger AG, Basel.

  13. Measuring Pancharatnam's relative phase for SO(3) evolutions using spin polarimetry

    International Nuclear Information System (INIS)

    Larsson, Peter; Sjoeqvist, Erik

    2003-01-01

    In polarimetry, a superposition of internal quantal states is exposed to a single Hamiltonian and information about the evolution of the quantal states is inferred from projection measurements on the final superposition. In this framework, we here extend the polarimetric test of Pancharatnam's relative phase for spin-1/2 proposed by Wagh and Rakhecha [Phys. Lett. A 197, 112 (1995)] to spin j ≥ 1 undergoing noncyclic SO(3) evolution. We demonstrate that the output intensity for higher spin values is a polynomial function of the corresponding spin-1/2 intensity. We further propose a general method to extract the noncyclic SO(3) phase and visibility by rigid translation of two π/2 spin flippers. Polarimetry on higher spin states may in practice be done with spin-polarized atomic beams.

  14. Fundamental uncertainty limit of optical flow velocimetry according to Heisenberg's uncertainty principle.

    Science.gov (United States)

    Fischer, Andreas

    2016-11-01

    Optical flow velocity measurements are important for understanding the complex behavior of flows. Although a huge variety of methods exist, they are all based on either a Doppler or a time-of-flight measurement principle. Doppler velocimetry evaluates the velocity-dependent frequency shift of light scattered at a moving particle, whereas time-of-flight velocimetry evaluates the traveled distance of a scattering particle per time interval. Regarding the aim of achieving a minimal measurement uncertainty, it is unclear whether one principle allows lower uncertainties to be achieved or whether both principles can achieve equal uncertainties. For this reason, the natural, fundamental uncertainty limit according to Heisenberg's uncertainty principle is derived for the Doppler and time-of-flight measurement principles, respectively. The obtained limits of the velocity uncertainty are qualitatively identical, showing, e.g., a direct proportionality to the absolute value of the velocity to the power of 3/2 and an inverse proportionality to the square root of the scattered light power. Hence, both measurement principles have identical potentials regarding the fundamental uncertainty limit due to the quantum mechanical behavior of photons. This fundamental limit can be attained (at least asymptotically) in reality with either Doppler or time-of-flight methods, because the respective Cramér-Rao bounds for dominating photon shot noise, which is modeled as white Poissonian noise, are identical with the conclusions from Heisenberg's uncertainty principle.

  15. Bernoulli's Principle

    Science.gov (United States)

    Hewitt, Paul G.

    2004-01-01

    Some teachers have difficulty understanding Bernoulli's principle particularly when the principle is applied to the aerodynamic lift. Some teachers favor using Newton's laws instead of Bernoulli's principle to explain the physics behind lift. Some also consider Bernoulli's principle too difficult to explain to students and avoid teaching it…

  16. Principles and applications of tribology

    CERN Document Server

    Moore, Desmond F

    1975-01-01

    Principles and Applications of Tribology provides a mechanical engineering perspective of the fundamental understanding and applications of tribology. This book is organized into two parts encompassing 16 chapters that cover the principles of friction and different types of lubrication. Chapter 1 deals with the immense scope of tribology and the range of applications in the existing technology, and Chapter 2 is devoted entirely to the evaluation and measurement of surface texture. Chapters 3 to 5 present the fundamental concepts underlying the friction of metals, elastomers, and other material

  17. General principles of quantum mechanics

    International Nuclear Information System (INIS)

    Pauli, W.

    1980-01-01

    This book is a textbook for a course in quantum mechanics. Starting from complementarity and the uncertainty principle, Schroedinger's equation is introduced together with the operator calculus. Then stationary states are treated as eigenvalue problems. Furthermore, matrix mechanics is briefly discussed. Thereafter the theory of measurement is considered. Then, as approximation methods, perturbation theory and the WKB approximation are introduced. Then identical particles, spin, and the exclusion principle are discussed. Thereafter the semiclassical theory of radiation and the relativistic one-particle problem are discussed. Finally an introduction is given to quantum electrodynamics. (HSI)

  18. The precautionary principle in international environmental law and international jurisprudence

    OpenAIRE

    Tubić, Bojan

    2014-01-01

    This paper analyses the international regulation of the precautionary principle as one of the environmental principles. This principle envisages that when there are threats of serious and irreparable harm, as a consequence of a certain economic activity, the lack of scientific evidence and full certainty cannot be used as a reason for postponing efficient measures for preventing environmental harm. From an economic point of view, the application of the precautionary principle is problematic, because it creates...

  19. Berman-Konsowa principle for reversible Markov jump processes

    NARCIS (Netherlands)

    Hollander, den W.Th.F.; Jansen, S.

    2013-01-01

    In this paper we prove a version of the Berman-Konsowa principle for reversible Markov jump processes on Polish spaces. The Berman-Konsowa principle provides a variational formula for the capacity of a pair of disjoint measurable sets. There are two versions, one involving a class of probability

  20. Dynamic principle for ensemble control tools.

    Science.gov (United States)

    Samoletov, A; Vasiev, B

    2017-11-28

    Dynamical equations describing physical systems in contact with a thermal bath are commonly extended by mathematical tools called "thermostats." These tools are designed for sampling ensembles in statistical mechanics. Here we propose a dynamic principle underlying a range of thermostats which is derived using fundamental laws of statistical physics and ensures invariance of the canonical measure. The principle covers both stochastic and deterministic thermostat schemes. Our method has a clear advantage over a range of proposed and widely used thermostat schemes that are based on formal mathematical reasoning. Following the derivation of the proposed principle, we show its generality and illustrate its applications including design of temperature control tools that differ from the Nosé-Hoover-Langevin scheme.
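
A canonical concrete example of such a tool is the Langevin thermostat. The sketch below (a standard BAOAB-type splitting on a 1-D harmonic oscillator, with illustrative parameters; it is not the scheme proposed in the paper) checks that the sampled kinetic energy matches the target temperature, i.e. that the canonical measure is preserved.

```python
import math
import random

random.seed(3)

# Langevin thermostat (BAOAB splitting) on a 1-D harmonic oscillator,
# in units with m = k = kB = 1; all parameter values are illustrative.
kT = 1.5
gamma = 1.0
dt = 0.05
c1 = math.exp(-gamma * dt)
c2 = math.sqrt(kT * (1.0 - c1 * c1))

x, v, samples = 0.0, 0.0, []
for step in range(200_000):
    v += 0.5 * dt * (-x)                      # B: half kick (force = -x)
    x += 0.5 * dt * v                         # A: half drift
    v = c1 * v + c2 * random.gauss(0.0, 1.0)  # O: exact Ornstein-Uhlenbeck step
    x += 0.5 * dt * v                         # A: half drift
    v += 0.5 * dt * (-x)                      # B: half kick
    if step >= 10_000:                        # discard equilibration
        samples.append(v * v)

print(sum(samples) / len(samples))            # ~ kT (equipartition: <v^2> = kT)
```

The O-step is the exact solution of the Ornstein-Uhlenbeck process, which is what makes the canonical distribution invariant under that sub-step; the deterministic A and B sub-steps introduce only a small O(dt²) sampling bias.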

  1. Dark matter and the equivalence principle

    Science.gov (United States)

    Frieman, Joshua A.; Gradwohl, Ben-Ami

    1993-01-01

    A survey is presented of the current understanding of dark matter invoked by astrophysical theory and cosmology. Einstein's equivalence principle asserts that local measurements cannot distinguish a system at rest in a gravitational field from one that is in uniform acceleration in empty space. Recent test-methods for the equivalence principle are presently discussed as bases for testing of dark matter scenarios involving the long-range forces between either baryonic or nonbaryonic dark matter and ordinary matter.

  2. Difference Principle and Black-hole Thermodynamics

    OpenAIRE

    Martin, Pete

    2009-01-01

    The heuristic principle that constructive dynamics may arise wherever there exists a difference, or gradient, is discussed. Consideration of black-hole entropy appears to provide a clue for setting a lower bound on any extensive measure of such collective system difference, or potential to give rise to constructive dynamics. It is seen that the second-power dependence of black-hole entropy on mass is consistent with the difference principle, while consideration of Hawking radiation forces one...

  3. Recent status of numerical simulation studies for zeolites as highly-selective cesium adsorbents by first-principles calculation and Monte Carlo method

    International Nuclear Information System (INIS)

    Nakamura, Hiroki; Okumura, Masahiko; Machida, Masahiko

    2015-01-01

    Based on first-principles calculations, the authors examined the mechanism by which mordenite, a species of zeolite, shows high adsorption selectivity for Cs, with a focus on the pores as adsorption sites. Three conditions were proposed for mordenite to increase its adsorption selectivity for Cs: (1) many pores with a radius of about 3 Å, (2) a relatively small ratio of Al to Si, and (3) a uniform distribution of Al atoms around the Cs-adsorbing pores. The superposition of the interactions obtained when a pore fully embraces the positive ion was shown to be important, demonstrating the value of computational science. The Monte Carlo method also successfully reproduced ion-exchange isotherms at the thermodynamic level, which serve as engineering metrics otherwise obtained by actual measurement. This method reproduced the differences in properties among different zeolites, and also explained the dependence of the adsorption performance on the Al/Si ratio, hitherto known only empirically, by relating the results to microscopic factors. Based on these results, this paper discusses how far materials development can be driven by computational science, and what kinds of research and development will be required in the future. (A.O)

  4. Identification of distant drug off-targets by direct superposition of binding pocket surfaces.

    Science.gov (United States)

    Schumann, Marcel; Armen, Roger S

    2013-01-01

    Correctly predicting off-targets for a given molecular target, i.e. other proteins able to bind the same range of ligands, is both particularly difficult and particularly important when those proteins share no significant sequence or fold similarity with the target ("distant off-targets"). A novel approach for the identification of off-targets by direct superposition of protein binding pocket surfaces is presented and applied to a set of well-studied and highly relevant drug targets, including representative kinases and nuclear hormone receptors. The entire Protein Data Bank is searched for similar binding pockets, and convincing distant off-target candidates are identified that share no significant sequence or fold similarity with the respective target structure. These putative target/off-target pairs are further supported by the existence of compounds of high topological similarity that bind strongly to both and, in some cases, by literature examples of individual compounds that bind to both. Our results clearly show that binding pockets can exhibit a striking surface similarity while the respective off-target shares neither significant sequence nor significant fold similarity with the molecular target ("distant off-target").

  5. Superposition of elliptic functions as solutions for a large number of nonlinear equations

    International Nuclear Information System (INIS)

    Khare, Avinash; Saxena, Avadh

    2014-01-01

    For a large number of nonlinear equations, both discrete and continuum, we demonstrate a kind of linear superposition. We show that whenever a nonlinear equation admits solutions in terms of both Jacobi elliptic functions cn(x, m) and dn(x, m) with modulus m, then it also admits solutions in terms of their sum as well as their difference. We have checked this in the case of several nonlinear equations such as the nonlinear Schrödinger equation, MKdV, a mixed KdV-MKdV system, a mixed quadratic-cubic nonlinear Schrödinger equation, the Ablowitz-Ladik equation, the saturable nonlinear Schrödinger equation, λφ⁴, the discrete MKdV, as well as several coupled field equations. Further, for a large number of nonlinear equations, we show that whenever a nonlinear equation admits a periodic solution in terms of dn²(x, m), it also admits solutions in terms of dn²(x, m) ± √m cn(x, m) dn(x, m), even though cn(x, m) dn(x, m) is not by itself a solution of these nonlinear equations. Finally, we also obtain superposed solutions of various forms for several coupled nonlinear equations.
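As a quick numerical companion to the abstract (our own sanity check, not part of the paper), the Jacobi elliptic functions cn and dn that enter the superposed solutions obey the standard identities sn² + cn² = 1 and dn² + m·sn² = 1, which SciPy can verify directly:

```python
import numpy as np
from scipy.special import ellipj

# scipy.special.ellipj(u, m) returns sn, cn, dn and the amplitude ph for
# parameter m. The identities below underlie the cn/dn combinations used
# in the superposed solutions.
m = 0.5
x = np.linspace(-5.0, 5.0, 201)
sn, cn, dn, _ = ellipj(x, m)

print(np.max(np.abs(sn**2 + cn**2 - 1)))       # ~ machine precision
print(np.max(np.abs(dn**2 + m * sn**2 - 1)))   # ~ machine precision
```

Note that SciPy follows the parameter convention m (not the modulus k, where m = k²), matching the notation of the abstract.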

  6. General principles governing sampling and measurement techniques for monitoring radioactive effluents from nuclear facilities

    International Nuclear Information System (INIS)

    Fitoussi, L.

    1978-01-01

    An explanation is given of the need to monitor the release of radioactive gases and liquid effluents from nuclear facilities, with particular emphasis on the ICRP recommendations and on the interest in this problem shown by the larger international organizations. This is followed by a description of the classes of radionuclides that are normally monitored in this way. The characteristics of monitoring 'in line' and 'by sample taking' are described; the disadvantages of in line monitoring and the problem of sample representativity are discussed. There follows an account of the general principles for measuring gaseous and liquid effluents that are applied in the techniques normally employed at nuclear facilities. Standards relating to the specifications for monitoring instruments are at present being devised by the International Electrotechnical Commission, and there are still major differences in national practices, at least as far as measurement thresholds are concerned. In conclusion, it is shown that harmonization of practices and standardization of equipment would probably help to make international relations in the field more productive. (author)

  7. Multimodality 3D Superposition and Automated Whole Brain Tractography: Comprehensive Printing of the Functional Brain.

    Science.gov (United States)

    Konakondla, Sanjay; Brimley, Cameron J; Sublett, Jesna Mathew; Stefanowicz, Edward; Flora, Sarah; Mongelluzzo, Gino; Schirmer, Clemens M

    2017-09-29

    Whole brain tractography using diffusion tensor imaging (DTI) sequences can be used to map cerebral connectivity; however, this can be time-consuming due to the manual image manipulation required, calling for a standardized, automated, and accurate fiber tracking protocol with automatic whole brain tractography (AWBT). Interpreting conventional two-dimensional (2D) images, such as computed tomography (CT) and magnetic resonance imaging (MRI), as an intraoperative three-dimensional (3D) environment is a difficult task with recognized inter-operator variability. Three-dimensional printing in neurosurgery has gained significant traction in the past decade, and as software, equipment, and practices become more refined, trainee education, surgical skills, research endeavors, innovation, patient education, and outcomes via valued care are projected to improve. We describe a novel multimodality 3D superposition (MMTS) technique, which fuses multiple imaging sequences alongside cerebral tractography into one patient-specific 3D printed model. Inferences on cost and improved outcomes fueled by encouraging patient engagement are explored.

  8. Quantum-phase dynamics of two-component Bose-Einstein condensates: Collapse-revival of macroscopic superposition states

    International Nuclear Information System (INIS)

    Nakano, Masayoshi; Kishi, Ryohei; Ohta, Suguru; Takahashi, Hideaki; Furukawa, Shin-ichi; Yamaguchi, Kizashi

    2005-01-01

    We investigate the long-time dynamics of two-component dilute gas Bose-Einstein condensates with relatively different two-body interactions and Josephson couplings between the two components. Although in certain parameter regimes the quantum state of the system is known to evolve into macroscopic superposition, i.e., Schroedinger cat state, of two states with relative atom number differences between the two components, the Schroedinger cat state is also found to repeat the collapse and revival behavior in the long-time region. The dynamical behavior of the Pegg-Barnett phase difference between the two components is shown to be closely connected with the dynamics of the relative atom number difference for different parameters. The variation in the relative magnitude between the Josephson coupling and intra- and inter-component two-body interaction difference turns out to significantly change not only the size of the Schroedinger cat state but also its collapse-revival period, i.e., the lifetime of the Schroedinger cat state

  9. Modeling and Simulation of Voids in Composite Tape Winding Process Based on Domain Superposition Technique

    Science.gov (United States)

    Deng, Bo; Shi, Yaoyao

    2017-11-01

    Tape winding technology is an effective way to fabricate rotationally symmetric composite products. Nevertheless, some inevitable defects seriously influence the performance of winding products. One of the crucial ways to assess the quality of fiber-reinforced composite products is to examine their void content, and a significant improvement in mechanical properties can be achieved by minimizing void defects. Two methods were applied in this study, finite element analysis and experimental testing, to investigate how voids form during composite tape winding. Based on the theory of interlayer intimate contact and the Domain Superposition Technique (DST), a three-dimensional model of prepreg tape voids was built in SolidWorks. The ABAQUS simulation software was then used to simulate how the void content changes with pressure and temperature. Finally, a series of experiments was performed to determine the accuracy of the model-based predictions. The results showed that the model is effective for predicting the void content in the composite tape winding process.

  10. Digital coherent superposition of optical OFDM subcarrier pairs with Hermitian symmetry for phase noise mitigation.

    Science.gov (United States)

    Yi, Xingwen; Chen, Xuemei; Sharma, Dinesh; Li, Chao; Luo, Ming; Yang, Qi; Li, Zhaohui; Qiu, Kun

    2014-06-02

    Digital coherent superposition (DCS) provides an approach to combat fiber nonlinearities by trading off the spectrum efficiency. In analogy, we extend the concept of DCS to the optical OFDM subcarrier pairs with Hermitian symmetry to combat the linear and nonlinear phase noise. At the transmitter, we simply use a real-valued OFDM signal to drive a Mach-Zehnder (MZ) intensity modulator biased at the null point and the so-generated OFDM signal is Hermitian in the frequency domain. At receiver, after the conventional OFDM signal processing, we conduct DCS of the optical OFDM subcarrier pairs, which requires only conjugation and summation. We show that the inter-carrier-interference (ICI) due to phase noise can be reduced because of the Hermitain symmetry. In a simulation, this method improves the tolerance to the laser phase noise. In a nonlinear WDM transmission experiment, this method also achieves better performance under the influence of cross phase modulation (XPM).
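The "conjugation and summation" step can be sketched in a few lines. The toy example below is our own construction: it models only a constant common phase error on a single OFDM symbol, not the time-varying laser phase noise or the MZ modulator of the experiment. It shows that averaging a subcarrier with the conjugate of its Hermitian partner turns a phase rotation e^{jθ} into a real scaling cos θ, shrinking the demodulation error.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 64

# Hermitian-symmetric OFDM symbol: X[N-k] = conj(X[k]) makes the time-domain
# signal real-valued, as produced by a null-biased intensity modulator.
data = (2 * rng.integers(0, 2, N // 2 - 1) - 1) + 1j * (2 * rng.integers(0, 2, N // 2 - 1) - 1)
X = np.zeros(N, dtype=complex)
X[1:N // 2] = data
X[N // 2 + 1:] = np.conj(data[::-1])
x = np.fft.ifft(X)                        # real up to rounding

theta = 0.3                               # common phase error (toy stand-in for phase noise)
Y = np.fft.fft(x * np.exp(1j * theta))

naive = Y[1:N // 2]                                             # ordinary demodulation
combined = 0.5 * (Y[1:N // 2] + np.conj(Y[N // 2 + 1:][::-1]))  # DCS: conjugate + sum

err_naive = np.abs(naive - data).mean()       # residual rotation e^{j*theta}
err_combined = np.abs(combined - data).mean() # residual real scaling cos(theta)
print(err_naive, err_combined)
```

The combined error is markedly smaller, at the spectral-efficiency cost the abstract notes: each data value occupies two subcarriers.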

  11. Dynamic properties of human incudostapedial joint-Experimental measurement and finite element modeling.

    Science.gov (United States)

    Jiang, Shangyuan; Gan, Rong Z

    2018-04-01

    The incudostapedial joint (ISJ) is a synovial joint connecting the incus and stapes in the middle ear. Mechanical properties of the ISJ directly affect sound transmission from the tympanic membrane to the cochlea. However, how ISJ properties change with frequency has not been investigated. In this paper, we report the dynamic properties of the human ISJ measured in eight samples using a dynamic mechanical analyzer (DMA) for frequencies from 1 to 80 Hz at three temperatures of 5, 25 and 37 °C. The frequency-temperature superposition (FTS) principle was used to extrapolate the results to 8 kHz. The complex modulus of ISJ was measured with a mean storage modulus of 1.14 MPa at 1 Hz that increased to 3.01 MPa at 8 kHz, and a loss modulus that increased from 0.07 to 0.47 MPa. A 3-dimensional finite element (FE) model consisting of the articular cartilage, joint capsule and synovial fluid was then constructed to derive mechanical properties of ISJ components by matching the model results to experimental data. Modeling results showed that mechanical properties of the joint capsule and synovial fluid affected the dynamic behavior of the joint. This study contributes to a better understanding of the structure-function relationship of the ISJ for sound transmission. Copyright © 2018. Published by Elsevier Ltd.
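The frequency-temperature superposition step can be sketched with the WLF form of the shift factor. The constants C1 and C2 below are illustrative placeholders, not values fitted in the paper; the point is only how measurements limited to 80 Hz map to equivalent kilohertz-range frequencies at the reference temperature.

```python
# Frequency-temperature superposition sketch (hypothetical WLF constants):
# moduli measured at temperature T and frequency f are shifted to an
# equivalent frequency a_T * f at a reference temperature T_ref, building a
# master curve over a much wider band than the instrument covers.
C1, C2, T_ref = 10.0, 100.0, 37.0          # illustrative values only

def log10_shift_factor(T):
    """WLF equation: log10(a_T) = -C1 (T - T_ref) / (C2 + T - T_ref)."""
    return -C1 * (T - T_ref) / (C2 + (T - T_ref))

for T in (5.0, 25.0, 37.0):
    aT = 10 ** log10_shift_factor(T)
    # data up to 80 Hz at temperature T map to aT * 80 Hz at T_ref
    print(f"T = {T:4.1f} C  ->  a_T = {aT:9.3g},  80 Hz maps to {aT * 80:.3g} Hz")
```

Lower test temperatures give a_T > 1, which is why DMA data taken at 5 °C can stand in for kilohertz-range behaviour at body temperature.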

  12. QUANTUM COMPUTING: Quantum Entangled Bits Step Closer to IT.

    Science.gov (United States)

    Zeilinger, A

    2000-07-21

    In contrast to today's computers, quantum computers and information technologies may in future be able to store and transmit information not only in the state "0" or "1," but also in superpositions of the two; information will then be stored and transmitted in entangled quantum states. Zeilinger discusses recent advances toward using this principle for quantum cryptography and highlights studies into the entanglement (or controlled superposition) of several photons, atoms, or ions.

  13. [Application criteria of the precautionary principle].

    Science.gov (United States)

    Moccaldi, R

    2011-01-01

    The precautionary principle, according to the European Commission (February 2, 2000), must be applied when there is a possibility of danger to human, animal and/or environmental health, i.e. when the potential harmful effects have been identified by a scientific and objective evaluation, but this evaluation does not allow the risk to be determined with sufficient certainty. However, this principle has been invoked, without even partial identification of harmful effects, to justify preventive and protective measures deemed necessary by policy makers, mainly due to a high (but unjustified) risk perception among the population. We analyze as examples the limits imposed by Italian legislation for protection from EMF, and the measures of "prudent avoidance" in the use of mobile phones.

  14. Reformulation of a stochastic action principle for irregular dynamics

    International Nuclear Information System (INIS)

    Wang, Q.A.; Bangoup, S.; Dzangue, F.; Jeatsa, A.; Tsobnang, F.; Le Mehaute, A.

    2009-01-01

    A stochastic action principle for random dynamics is revisited. Numerical diffusion experiments are carried out to show that the diffusion path probability depends exponentially on the Lagrangian action A = ∫_a^b L dt. This result is then used to derive the Shannon measure for path uncertainty. It is shown that the maximum entropy principle and the least action principle of classical mechanics can be unified into δĀ = 0, where the average Ā is calculated over all possible paths of the stochastic motion between two configuration points a and b. It is argued that this action principle and the maximum entropy principle are a consequence of the mechanical equilibrium condition extended to the case of stochastic dynamics.

  15. The gauge principle vs. the equivalence principle

    International Nuclear Information System (INIS)

    Gates, S.J. Jr.

    1984-01-01

    Within the context of field theory, it is argued that the role of the equivalence principle may be replaced by the principle of gauge invariance to provide a logical framework for theories of gravitation

  16. The normative basis of the Precautionary Principle

    Energy Technology Data Exchange (ETDEWEB)

    Schomberg, Rene von [European Commission, Directorate General for Research, Brussels (Belgium)

    2006-09-15

    Precautionary measures are provisional measures by nature, and need to be regularly reviewed when scientific information calls for either relaxation or strengthening of those measures. Within the EU context, these provisional measures do not have a prefixed 'expiry' date: one can only lift precautionary measures if scientific knowledge has progressed to a point where one is able to translate (former) uncertainties into risk and adverse effects into defined, consensual levels of harm/damage. Precautionary frameworks facilitate in particular deliberation at the science/policy/society interfaces to which risk management is fully connected. Applying the precautionary principle is to be seen as a normative risk management exercise which builds upon scientific risk assessments. An ongoing scientific and normative deliberation at the science/policy interface involves a shift from science-centred debates on the probability of risks towards a science-informed debate on uncertainties and plausible adverse effects: this means that decisions should not only be based on available data but on a broad scientific knowledge base including a variety of scientific disciplines. The invocation, implementation and application of the precautionary principle follows a progressive line of different levels of deliberation (which can obviously be interconnected, but are distinguished here for analytical purposes). I have listed these levels of deliberation in a table. The table provides a model for guiding all the relevant normative levels of deliberation, all of which are needed in order to eventually draw legitimate conclusions on the acceptability of products or processes. The table traces those levels of deliberation from the initial invocation of the precautionary principle at the political level down to the level of risk management decisions, but at the same time shows their interrelatedness. Although the table may suggest a...

  17. The normative basis of the Precautionary Principle

    International Nuclear Information System (INIS)

    Schomberg, Rene von

    2006-01-01

    Precautionary measures are provisional measures by nature, and need to be regularly reviewed when scientific information calls for either relaxation or strengthening of those measures. Within the EU context, these provisional measures do not have a prefixed 'expiry' date: one can only lift precautionary measures if scientific knowledge has progressed to a point where one is able to translate (former) uncertainties into risk and adverse effects into defined, consensual levels of harm/damage. Precautionary frameworks facilitate in particular deliberation at the science/policy/society interfaces to which risk management is fully connected. Applying the precautionary principle is to be seen as a normative risk management exercise which builds upon scientific risk assessments. An ongoing scientific and normative deliberation at the science/policy interface involves a shift from science-centred debates on the probability of risks towards a science-informed debate on uncertainties and plausible adverse effects: this means that decisions should not only be based on available data but on a broad scientific knowledge base including a variety of scientific disciplines. The invocation, implementation and application of the precautionary principle follows a progressive line of different levels of deliberation (which can obviously be interconnected, but are distinguished here for analytical purposes). I have listed these levels of deliberation in a table. The table provides a model for guiding all the relevant normative levels of deliberation, all of which are needed in order to eventually draw legitimate conclusions on the acceptability of products or processes. The table traces those levels of deliberation from the initial invocation of the precautionary principle at the political level down to the level of risk management decisions, but at the same time shows their interrelatedness. Although the table may suggest a particular...

  18. Characterization of quantum logics

    International Nuclear Information System (INIS)

    Lahti, P.J.

    1980-01-01

    The quantum logic approach to axiomatic quantum mechanics is used to analyze the conceptual foundations of the traditional quantum theory. The universal quantum of action h>0 is incorporated into the theory by introducing the uncertainty principle, the complementarity principle, and the superposition principle into the framework. A characterization of those quantum logics (L,S) which may provide quantum descriptions is then given. (author)

  19. Acceleration Measurements Using Smartphone Sensors: Dealing with the Equivalence Principle

    OpenAIRE

    Monteiro, Martín; Cabeza, Cecilia; Martí, Arturo C.

    2014-01-01

    Acceleration sensors built into smartphones, iPads or tablets can conveniently be used in the physics laboratory. By virtue of the equivalence principle, a sensor fixed in a non-inertial reference frame cannot discern between a gravitational field and an accelerated system. Accordingly, acceleration values read by these sensors must be corrected for the gravitational component. A physical pendulum was studied by way of example, and absolute acceleration and rotation angle values were derived...

  20. Calculation of media temperatures for nuclear sources in geologic depositories by a finite-length line source superposition model (FLLSSM)

    Energy Technology Data Exchange (ETDEWEB)

    Kays, W M; Hossaini-Hashemi, F [Stanford Univ., Palo Alto, CA (USA). Dept. of Mechanical Engineering; Busch, J S [Kaiser Engineers, Oakland, CA (USA)

    1982-02-01

    A linearized transient thermal conduction model was developed to economically determine media temperatures in geologic repositories for nuclear wastes. Individual canisters containing either high-level waste or spent fuel assemblies are represented as finite-length line sources in a continuous medium. The combined effect of multiple canisters in a representative storage pattern can be established at selected points of interest in the medium by superposition of the temperature rises calculated for each canister. A mathematical solution for each separate source is given in this article, permitting a (slow) hand calculation. The full report, ONWI-94, contains the details of the computer code FLLSSM and its use, yielding the total solution in one computer output.
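A minimal sketch of the superposition step follows. This is our own illustration, not the FLLSSM code: all material properties, heat loads, and geometry below are invented, and decay of the canister heat output over time is ignored. Each finite-length line source is approximated as a stack of continuous point sources using the infinite-medium conduction solution, and the rises from several canisters are summed at the observation point.

```python
import numpy as np
from math import erfc, pi, sqrt

# Illustrative rock properties (NOT from the report)
k_th = 2.5        # thermal conductivity, W/(m K)
alpha = 1.1e-6    # thermal diffusivity, m^2/s

def line_source_rise(q_per_m, length, r, z, t, n=200):
    """Temperature rise at (r, z) from a finite line source on the z-axis,
    discretized into n continuous point sources:
      dT = q' dz / (4 pi k R) * erfc(R / (2 sqrt(alpha t)))."""
    dz = length / n
    dT = 0.0
    for z_src in np.linspace(-length / 2, length / 2, n):
        R = sqrt(r**2 + (z - z_src)**2)
        dT += q_per_m * dz / (4 * pi * k_th * R) * erfc(R / (2 * sqrt(alpha * t)))
    return dT

# Superpose three canisters in a row, 10 m apart, after 10 years,
# observed 5 m from the central canister in the mid-plane.
t = 10 * 365.25 * 24 * 3600.0
obs_x = 5.0
total = sum(line_source_rise(500.0, 3.0, abs(obs_x - xc), 0.0, t)
            for xc in (-10.0, 0.0, 10.0))
print(f"temperature rise at the observation point: {total:.1f} K")
```

Because the conduction model is linearized, the rise from each canister can be computed once and summed, which is exactly what makes the hand calculation described in the article feasible.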

  1. Principle of accelerator mass spectrometry

    International Nuclear Information System (INIS)

    Matsuzaki, Hiroyuki

    2007-01-01

    The principle of accelerator mass spectrometry (AMS) is described, mainly from the technical standpoint: hardware construction of AMS, measurement of the isotope ratio, sensitivity of measurement (detection limit), measurement accuracy, and application of the data. The content may be summarized as follows: a rare isotope (often a long-lived radioactive isotope) can be detected through various uses of the ion energy obtained by accelerating the ions; the measurable isotope ratio is that of a rare isotope to an abundant one; and a measured isotope ratio carries uncertainty relative to the true value. These facts must be kept in mind when using AMS data in applied research. (M.H.)

  2. Fundamental principles of a new EM tool for in-situ resistivity measurement. 2; Denji yudoho ni yoru gen`ichi hiteiko sokutei sochi no kento. 2

    Energy Technology Data Exchange (ETDEWEB)

    Noguchi, K; Aoki, H [Waseda University, Tokyo (Japan). School of Science and Engineering; Saito, A [Mitsui Mineral Development Engineering Co. Ltd., Tokyo (Japan)

    1997-10-22

    In-situ resistivity measuring devices are tested for performance in relation to the principle of focusing. Numerical calculation shows that in the absence of focusing the primary magnetic field prevails, and that changes in the separate-mode component are difficult to detect in actual measurement because the in-phase component assumes a value far larger than the out-of-phase component. Concerning the transmission loop radius, the study reveals that a larger radius yields a stronger response and removes the influence of near-surface layers. Two types of devices are constructed, one applying the principle of focusing and the other not, and both are used to measure the response from a saline solution medium. A comparison of the results shows that focusing eliminates the influence of the primary magnetic field and enables the measurement of changes in the resistivity of the medium which cannot be detected without focusing. 3 refs., 9 figs.

  3. Investigation of the dual-gauge principle for eliminating measurement interference in nuclear density and moisture gauges

    International Nuclear Information System (INIS)

    Dunn, W.L.

    1974-07-01

    Mathematical models for an application of the dual-gauge principle to surface neutron moisture content gauges were developed under an Agency co-ordinated research programme. The response of a detector (such as a BF3 proportional counter) to low-energy neutrons depends on the hydrogen present in the sample in the form of water. Other factors affecting the gauge response are sample density, composition (particularly the presence of strong thermal neutron absorbers), and bound hydrogen content. In this work, mathematical models for epicadmium and bare BF3 detector response were developed for surface neutron moisture content gauges. These models are based on epithermal and thermal line and area flux models obtained from diffusion theory and transport theory, where the flux as a function of radial distance r from the source is φ(r), the line flux is ∫φ(r)dr, and the area flux is ∫φ(r)r dr. All models have been checked by calculation and comparison with experimental results, except for the transport-theory thermal flux models. The computer calculations were made on an IBM 370/165 system. In addition, the dual-gauge principle was applied and demonstrated as a means of minimizing the composition measurement interference.

  4. Modeling decoherence with qubits

    Science.gov (United States)

    Heusler, Stefan; Dür, Wolfgang

    2018-03-01

    Quantum effects like the superposition principle contradict our experience of daily life. Decoherence can be viewed as a possible explanation why we do not observe quantum superposition states in the macroscopic world. In this article, we use the qubit ansatz to discuss decoherence in the simplest possible model system and propose a visualization for the microscopic origin of decoherence, and the emergence of a so-called pointer basis. Finally, we discuss the possibility of ‘macroscopic’ quantum effects.
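The qubit ansatz for decoherence can be reduced to one line of arithmetic. In the toy model below (our own sketch in the spirit of the article), each of n environment qubits becomes correlated with the system qubit, and the coherence of the reduced state shrinks by the environment-state overlap per qubit; the states that imprint themselves on the environment form the pointer basis.

```python
# System qubit prepared in (|0> + |1>)/sqrt(2); each environment qubit ends in
# |e0> or |e1> depending on the system state. For a product-state environment,
# the off-diagonal (coherence) element of the reduced density matrix is
#   rho_01 = 0.5 * <e0|e1>^n_env,
# so any overlap |<e0|e1>| < 1 suppresses coherence exponentially in n_env.
def coherence_after(n_env, overlap=0.9):
    return 0.5 * overlap ** n_env

for n in (0, 5, 20, 50):
    print(n, coherence_after(n))
```

Even a weak per-qubit overlap of 0.9 drives the coherence far below its initial value of 0.5 after a few dozen environment qubits, which is the sense in which macroscopic superpositions become unobservable.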

  5. Improving ability measurement in surveys by following the principles of IRT: The Wordsum vocabulary test in the General Social Survey.

    Science.gov (United States)

    Cor, M Ken; Haertel, Edward; Krosnick, Jon A; Malhotra, Neil

    2012-09-01

    Survey researchers often administer batteries of questions to measure respondents' abilities, but these batteries are not always designed in keeping with the principles of optimal test construction. This paper illustrates one instance in which following these principles can improve a measurement tool used widely in the social and behavioral sciences: the GSS's vocabulary test called "Wordsum". This ten-item test is composed of very difficult items and very easy items, and item response theory (IRT) suggests that the omission of moderately difficult items is likely to have handicapped Wordsum's effectiveness. Analyses of data from national samples of thousands of American adults show that after adding four moderately difficult items to create a 14-item battery, "Wordsumplus" (1) outperformed the original battery in terms of quality indicators suggested by classical test theory; (2) reduced the standard error of IRT ability estimates in the middle of the latent ability dimension; and (3) exhibited higher concurrent validity. These findings show how to improve Wordsum and suggest that analysts should use a score based on all 14 items instead of using the summary score provided by the GSS, which is based on only the original 10 items. These results also show more generally how surveys measuring abilities (and other constructs) can benefit from careful application of insights from the contemporary educational testing literature. Copyright © 2012 Elsevier Inc. All rights reserved.
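The IRT argument can be made concrete with the 2PL model. The item parameters below are hypothetical, not the actual Wordsum items: the point is that item information a²·P·(1−P) peaks where difficulty matches ability, so adding moderately difficult items raises total information near the middle of the ability scale and lowers the standard error there, the effect reported for Wordsumplus.

```python
import numpy as np

# 2PL item response model: P(correct | theta) for discrimination a, difficulty b.
def prob(theta, a, b):
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

# Standard error of the ability estimate: 1 / sqrt(total test information),
# where each item contributes a^2 * P * (1 - P).
def se(theta, items):
    info = sum(a**2 * prob(theta, a, b) * (1 - prob(theta, a, b)) for a, b in items)
    return 1.0 / np.sqrt(info)

easy_hard = [(1.5, -2.5)] * 5 + [(1.5, 2.5)] * 5   # only very easy + very hard items
with_mid = easy_hard + [(1.5, 0.0)] * 4            # add 4 moderately difficult items

print(se(0.0, easy_hard), se(0.0, with_mid))       # SE drops mid-scale
```

At the extremes of the ability scale the two batteries perform similarly; the gain from the moderate items is concentrated where most respondents sit.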

  6. Uncertainty principle for angular position and angular momentum

    International Nuclear Information System (INIS)

    Franke-Arnold, Sonja; Barnett, Stephen M; Yao, Eric; Leach, Jonathan; Courtial, Johannes; Padgett, Miles

    2004-01-01

    The uncertainty principle places fundamental limits on the accuracy with which we are able to measure the values of different physical quantities (Heisenberg 1949 The Physical Principles of the Quantum Theory (New York: Dover); Robertson 1929 Phys. Rev. 34 127). This has profound effects not only on the microscopic but also on the macroscopic level of physical systems. The most familiar form of the uncertainty principle relates the uncertainties in position and linear momentum. Other manifestations include those relating uncertainty in energy to uncertainty in time duration, phase of an electromagnetic field to photon number and angular position to angular momentum (Vaccaro and Pegg 1990 J. Mod. Opt. 37 17; Barnett and Pegg 1990 Phys. Rev. A 41 3427). In this paper, we report the first observation of the last of these uncertainty relations and derive the associated states that satisfy the equality in the uncertainty relation. We confirm the form of these states by detailed measurement of the angular momentum of a light beam after passage through an appropriate angular aperture. The angular uncertainty principle applies to all physical systems and is particularly important for systems with cylindrical symmetry
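For a periodic angular variable the Robertson bound acquires a boundary term. As we recall the relation reported by Franke-Arnold et al. (a hedged paraphrase; θ_b denotes the boundary of the chosen 2π window and P the angular probability density there):

```latex
\Delta\phi\,\Delta L_z \;\ge\; \frac{\hbar}{2}\,\bigl|\,1 - 2\pi P(\theta_b)\,\bigr|
```

For states with negligible probability density at the window boundary this reduces to the familiar Δφ ΔL_z ≥ ħ/2, while for strongly confined angular apertures the boundary term lowers the bound itself below ħ/2.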

  7. Soft magnetic tweezers: a proof of principle.

    Science.gov (United States)

    Mosconi, Francesco; Allemand, Jean François; Croquette, Vincent

    2011-03-01

    We present here the principle of soft magnetic tweezers which improve the traditional magnetic tweezers allowing the simultaneous application and measurement of an arbitrary torque to a deoxyribonucleic acid (DNA) molecule. They take advantage of a nonlinear coupling regime that appears when a fast rotating magnetic field is applied to a superparamagnetic bead immersed in a viscous fluid. In this work, we present the development of the technique and we compare it with other techniques capable of measuring the torque applied to the DNA molecule. In this proof of principle, we use standard electromagnets to achieve our experiments. Despite technical difficulties related to the present implementation of these electromagnets, the agreement of measurements with previous experiments is remarkable. Finally, we propose a simple way to modify the experimental design of electromagnets that should bring the performances of the device to a competitive level.

  8. Thermodynamics of Weakly Measured Quantum Systems.

    Science.gov (United States)

    Alonso, Jose Joaquin; Lutz, Eric; Romito, Alessandro

    2016-02-26

    We consider continuously monitored quantum systems and introduce definitions of work and heat along individual quantum trajectories that are valid for coherent superposition of energy eigenstates. We use these quantities to extend the first and second laws of stochastic thermodynamics to the quantum domain. We illustrate our results with the case of a weakly measured driven two-level system and show how to distinguish between quantum work and heat contributions. We finally employ quantum feedback control to suppress detector backaction and determine the work statistics.

  9. Principles of electromigration measurements

    International Nuclear Information System (INIS)

    Roesch, F.

    1988-01-01

Based on experimental applications of a modified version of on-line electromigration measurements of γ-emitting radionuclides in homogeneous aqueous electrolytes free of supporting materials, concepts for the calculation of stoichiometric and thermodynamic stability constants are developed. Normalized ion mobilities are discussed; these reflect changes of the overall ion mobility of the radioelement in its equilibrium reaction relative to the individual ion mobilities of the central ion at identical electrolyte parameters (temperature, overall ionic strength). Model reactions, as well as the complex formation of Tl(I) with bromide and sulfate, respectively, serve as examples of the practical realization of these concepts. (author)

  10. Principles of development of the industry of technogenic waste processing

    Directory of Open Access Journals (Sweden)

    Maria A. Bayeva

    2014-01-01

Full Text Available Objective: to identify and substantiate the principles of development of the industry of technogenic waste processing. Methods: systemic analysis and synthesis; method of analogy. Results: basing on the analysis of the Russian and foreign experience in the field of waste management and environmental protection, the basic principles of development of activities on technogenic waste processing are formulated: the principle of legal regulation; the principle of efficient technologies; the principle of ecological safety; the principle of economic support. The importance of each principle is substantiated by a description of the situation in this area, identifying the main problems and the ways of their solution. Scientific novelty: the fundamental principles of development of the industry of technogenic waste processing are revealed; measures of state support are proposed. Practical value: the presented theoretical conclusions and proposals are aimed primarily at the theoretical and methodological substantiation and practical solution of modern problems in the sphere of development of the industry of technogenic waste processing.

  11. [The General Principles of Suicide Prevention Policy from the perspective of clinical psychiatry].

    Science.gov (United States)

    Cho, Yoshinori; Inagaki, Masatoshi

    2014-01-01

    In view of the fact that the suicide rate in Japan has remained high since 1998, the Basic Act on Suicide Prevention was implemented in 2006 with the objective of comprehensively promoting suicide prevention measures on a national scale. Based on this Basic Act, in 2007, the Japanese government formulated the General Principles of Suicide Prevention Policy as a guideline for recommended suicide prevention measures. These General Principles were revised in 2012 in accordance with the initial plan of holding a review after five years. The Basic Act places an emphasis on the various social factors that underlie suicides and takes the perspective that suicide prevention measures are also social measures. The slogan of the revised General Principles is "Toward Realization of a Society in which Nobody is Driven to Commit Suicide". The General Principles list various measures that are able to be used universally. These contents would be sufficient if the objective of the General Principles were "realization of a society that is easy to live in"; however, the absence of information on the effectiveness and order of priority for each measure may limit the specific effectiveness of the measures in relation to the actual prevention of suicide. In addition, considering that nearly 90% of suicide victims are in a state at the time of committing suicide in which a psychiatric disorder would be diagnosed, it would appear from a psychiatric standpoint that measures related to mental health, including expansion of psychiatric services, should be the top priority in suicide prevention measures. However, this is not the case in the General Principles, in either its original or revised form. Revisions to the General Principles related to clinical psychiatry provide more detailed descriptions of measures for individuals who unsuccessfully attempt suicide and identify newly targeted mental disorders other than depression; however, the overall proportion of contents relating to

  12. Creep investigation of GFRP RC Beams - Part B: a theoretical framework

    Directory of Open Access Journals (Sweden)

    masmoudi abdelmonem

    2014-11-01

Full Text Available This paper presents an analytical study of the viscoelastic time-dependent (creep) behavior of pultruded GFRP elements made of polyester and E-glass fibres. The experimental results reported in Part A are first used for material characterization by means of empirical and phenomenological formulations. The superposition principle, adopting the creep law recommended by Eurocode 2, is also investigated. The analytical study covers creep under constant stress; successions of increasing stress (superposition principle, equivalent time); and creep recovery upon reloading. The results of this study reveal that beams reinforced with GFRP are less affected by the creep phenomenon. This investigation should help the civil engineer/designer better understand creep in GFRP-reinforced concrete members.
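The "successions of increasing stress" analysis mentioned above rests on Boltzmann's superposition principle: under a staircase stress history, the total strain is the sum of the creep responses to the individual stress increments. A minimal sketch, using an assumed power-law creep compliance whose parameters are hypothetical, not the Eurocode 2 creep law:

```python
import numpy as np

def creep_compliance(t, J0=1.0, a=0.3, n=0.4):
    """Assumed power-law creep compliance J(t) = J0*(1 + a*t**n) for t >= 0
    (illustrative parameters, not the Eurocode 2 values)."""
    t = np.asarray(t, dtype=float)
    tp = np.maximum(t, 0.0)  # no response before the load is applied
    return np.where(t >= 0.0, J0 * (1.0 + a * tp ** n), 0.0)

def strain_superposition(t, steps):
    """Boltzmann superposition: strain(t) = sum_i dsigma_i * J(t - t_i)
    for stress increments dsigma_i applied at times t_i."""
    return sum(ds * creep_compliance(t - ti) for ti, ds in steps)

t = np.linspace(0.0, 10.0, 101)
# A single 2 MPa step equals two simultaneous 1 MPa steps (linearity):
e_single = strain_superposition(t, [(0.0, 2.0)])
e_double = strain_superposition(t, [(0.0, 1.0), (0.0, 1.0)])
assert np.allclose(e_single, e_double)
```

Staircase histories follow the same pattern: a later increment simply contributes `J(t - t_i)` with less elapsed time, which is how the "equivalent time" analysis is built up.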

  13. A scheme of quantum state discrimination over specified states via weak-value measurement

    Science.gov (United States)

    Chen, Xi; Dai, Hong-Yi; Liu, Bo-Yang; Zhang, Ming

    2018-04-01

The commonly adopted projective measurements fail in the specific task of quantum state discrimination when the discriminated states are superpositions of planar-position basis states whose complex probability amplitudes have the same magnitude but different phases. We therefore propose a corresponding scheme via weak-value measurement and examine its feasibility. Furthermore, the role of the weak-value measurement in quantum state discrimination is analyzed and compared with its role in quantum state tomography in this Letter.

  14. Improvement of ozone yield by a multi-discharge type ozonizer using superposition of silent discharge plasma

    International Nuclear Information System (INIS)

    Song, Hyun-Jig; Chun, Byung-Joon; Lee, Kwang-Sik

    2004-01-01

In order to improve ozone generation, we experimentally investigated the silent-discharge plasma and ozone generation characteristics of a multi-discharge type ozonizer. Ozone in a multi-discharge type ozonizer is generated by superposition of silent-discharge plasmas that are generated simultaneously in separated discharge spaces. A multi-discharge type ozonizer is composed of three different kinds of superposed silent-discharge type ozonizers, depending on the method of applying power to each electrode. We observed that the discharge period of the current pulse for a multi-discharge type ozonizer can be longer than that of a silent-discharge type ozonizer with two electrodes and one gap. Ozone generation is thereby improved, up to 17185 ppm and 783 g/kWh in the case of the superposed silent-discharge type ozonizer in which AC high voltages with a 180° phase difference were applied to the internal and external electrodes, respectively, with the central electrode grounded.

  15. A Bethe ansatz solvable model for superpositions of Cooper pairs and condensed molecular bosons

    International Nuclear Information System (INIS)

    Hibberd, K.E.; Dunning, C.; Links, J.

    2006-01-01

We introduce a general Hamiltonian describing coherent superpositions of Cooper pairs and condensed molecular bosons. For particular choices of the coupling parameters, the model is integrable. One integrable manifold, as well as the Bethe ansatz solution, was found by Dukelsky et al. [J. Dukelsky, G.G. Dussel, C. Esebbag, S. Pittel, Phys. Rev. Lett. 93 (2004) 050403]. Here we show that there is a second integrable manifold, established using the boundary quantum inverse scattering method. In this manner we obtain the exact solution by means of the algebraic Bethe ansatz. In the case where the Cooper pair energies are degenerate we examine the relationship between the spectrum of these integrable Hamiltonians and the quasi-exactly solvable spectrum of particular Schrodinger operators. For the solution we derive here, the potential of the Schrodinger operator is given in terms of hyperbolic functions. For the solution derived by Dukelsky et al., loc. cit., the potential is sextic and the wavefunctions obey PT-symmetric boundary conditions. This latter case provides a novel example of an integrable Hermitian Hamiltonian acting on a Fock space whose states map into a Hilbert space of PT-symmetric wavefunctions defined on a contour in the complex plane.

  16. Experimental observation of constructive superposition of wakefields generated by electron bunches in a dielectric-lined waveguide

    Directory of Open Access Journals (Sweden)

    S. V. Shchelkunov

    2006-01-01

Full Text Available We report results from an experiment that demonstrates the successful superposition of wakefields excited by 50 MeV bunches which travel ∼50 cm along the axis of a cylindrical waveguide lined with alumina. The bunches are prepared by splitting a single laser pulse into two pulses, prior to focusing onto the cathode of an rf gun, and inserting an optical delay in the path of one of them. Wakefields from two short (5–6 psec, 0.15–0.35 nC) bunches are superimposed, and the energy loss of each bunch is measured as the separation between the bunches is varied so as to encompass approximately one wakefield period (∼21 cm). A spectrum of ∼40 TM_{0m} eigenmodes is excited by the bunch. A substantial retarding wakefield (2.65 MV/m·nC for just the first bunch) develops because of the short bunch length and the narrow vacuum channel diameter (3 mm) through which the bunches move. The energy loss of the second bunch exhibits a narrow peak when the bunch spacing is varied by only 4 mm (13.5 psec). This experiment is compared with a related experiment reported by a group at the Argonne National Laboratory, where the bunch spacing was not varied and a much weaker retarding wakefield (∼0.1 MV/m·nC for the first bunch), comprising only about 10 eigenmodes, was excited by a train of long (∼9 mm) bunches.
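The constructive superposition measured in this experiment can be sketched as a sum of waveguide eigenmodes: the wake behind the first bunch is modelled as a superposition of cosines at the mode wavenumbers, and the retarding field seen by the trailing bunch peaks when the spacing matches the wake period. The mode amplitudes, the 10-mode truncation, and the simplification of treating the modes as exact harmonics of the fundamental are all illustrative assumptions, not the experiment's ∼40-mode spectrum:

```python
import numpy as np

def wake(z, k_modes, amps):
    """Wake potential at distance z behind the drive bunch, modelled as a
    superposition of cosine eigenmodes (amplitudes are illustrative)."""
    z = np.asarray(z, dtype=float)[:, None]
    return np.sum(amps * np.cos(k_modes * z), axis=1)

period = 0.21                              # wake period, m (~21 cm)
m = np.arange(1, 11)                       # 10 modes for illustration
k_modes = 2.0 * np.pi / period * m         # harmonic mode wavenumbers (simplification)
amps = 1.0 / m                             # assumed falling mode spectrum

spacing = np.linspace(0.15, 0.27, 241)     # bunch-spacing scan, m
w = wake(spacing, k_modes, amps)
# Constructive superposition: the retarding field seen by the trailing
# bunch peaks when the spacing matches the wake period.
assert abs(spacing[np.argmax(w)] - period) < 0.005
```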

  17. Basic principles

    International Nuclear Information System (INIS)

    Wilson, P.D.

    1996-01-01

    Some basic explanations are given of the principles underlying the nuclear fuel cycle, starting with the physics of atomic and nuclear structure and continuing with nuclear energy and reactors, fuel and waste management and finally a discussion of economics and the future. An important aspect of the fuel cycle concerns the possibility of ''closing the back end'' i.e. reprocessing the waste or unused fuel in order to re-use it in reactors of various kinds. The alternative, the ''oncethrough'' cycle, discards the discharged fuel completely. An interim measure involves the prolonged storage of highly radioactive waste fuel. (UK)

  18. Expanding Uncertainty Principle to Certainty-Uncertainty Principles with Neutrosophy and Quad-stage Method

    Directory of Open Access Journals (Sweden)

    Fu Yuhua

    2015-03-01

Full Text Available The most famous contribution of Heisenberg is the uncertainty principle. But the original uncertainty principle is improper. Considering all the possible situations (including the case that people can create laws) and applying Neutrosophy and the Quad-stage Method, this paper presents "certainty-uncertainty principles" with a general form and a variable-dimension fractal form. According to the classification of Neutrosophy, the "certainty-uncertainty principles" can be divided into three principles holding under different conditions: the "certainty principle", namely that a particle's position and momentum can be known simultaneously; the "uncertainty principle", namely that a particle's position and momentum cannot be known simultaneously; and the neutral (fuzzy) "indeterminacy principle", namely that whether or not a particle's position and momentum can be known simultaneously is undetermined. The special cases of the "certainty-uncertainty principles" include the original uncertainty principle and the Ozawa inequality. In addition, in accordance with the original uncertainty principle, discussing a high-speed particle's speed and track with Newtonian mechanics is unreasonable; but according to the "certainty-uncertainty principles", Newtonian mechanics can be used to discuss the problem of the gravitational deflection of a photon orbit around the Sun (it gives the same deflection angle as general relativity). Finally, because in physics principles, laws and the like that disregard the principle (law) of conservation of energy may be invalid, the "certainty-uncertainty principles" should be restricted (or constrained) by the principle (law) of conservation of energy, so that they satisfy the principle (law) of conservation of energy.

  19. The Uncertainty Principle in the Presence of Quantum Memory

    Science.gov (United States)

    Renes, Joseph M.; Berta, Mario; Christandl, Matthias; Colbeck, Roger; Renner, Renato

    2010-03-01

One consequence of Heisenberg's uncertainty principle is that no observer can predict the outcomes of two incompatible measurements performed on a system to arbitrary precision. However, this implication is invalid if the observer possesses a quantum memory, a distinct possibility in light of recent technological advances. Entanglement between the system and the memory is responsible for the breakdown of the uncertainty principle, as illustrated by the EPR paradox. In this work we present an improved uncertainty principle which takes this entanglement into account. By quantifying uncertainty using entropy, we show that the sum of the entropies associated with incompatible measurements must exceed a quantity which depends on the degree of incompatibility and the amount of entanglement between system and memory. Apart from its foundational significance, the uncertainty principle motivated the first proposals for quantum cryptography, though the possibility of an eavesdropper having a quantum memory rules out using the original version to argue that these proposals are secure. The uncertainty relation introduced here alleviates this problem and paves the way for its widespread use in quantum cryptography.
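The entropic relation described in this abstract can be written compactly. Writing H(X|B) for the conditional von Neumann entropy of the outcome of measurement X given the quantum memory B, the bound takes the form (the notation here is a common convention, not quoted verbatim from the paper):

```latex
H(X|B) + H(Z|B) \;\ge\; \log_2\frac{1}{c} + H(A|B),
\qquad
c = \max_{x,z}\,\bigl|\langle \psi_x | \phi_z \rangle\bigr|^2 ,
```

where c quantifies the incompatibility of the two measurements X and Z, and H(A|B), which can be negative for entangled states, captures how entanglement with the memory weakens the uncertainty bound.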

  20. A weak equivalence principle test on a suborbital rocket

    Energy Technology Data Exchange (ETDEWEB)

    Reasenberg, Robert D; Phillips, James D, E-mail: reasenberg@cfa.harvard.ed [Smithsonian Astrophysical Observatory, Harvard-Smithsonian Center for Astrophysics, Cambridge, MA 02138 (United States)

    2010-05-07

We describe a Galilean test of the weak equivalence principle, to be conducted during the free-fall portion of a sounding rocket flight. The test of a single pair of substances is aimed at a measurement uncertainty of σ(η) < 10^(-16) after averaging the results of eight separate drops. The weak equivalence principle measurement is made with a set of four laser gauges that are expected to achieve 0.1 pm Hz^(-1/2). The discovery of a violation (η ≠ 0) would have profound implications for physics, astrophysics and cosmology.

  1. MCNP Techniques for Modeling Sodium Iodide Spectra of Kiwi Surveys

    International Nuclear Information System (INIS)

    Robert B Hayes

    2007-01-01

This work demonstrates how MCNP can be used to predict the response of mobile search and survey equipment from base principles. The instrumentation evaluated comes from the U.S. Department of Energy's Aerial Measurement Systems. By reconstructing detector responses to various point-source measurements, detector responses to distributed sources can be estimated through superposition. Use of this methodology for currently deployed systems allows predictive determination of activity levels and distributions for common configurations of interest. This work helps determine the quality and efficacy of certain surveys in fully characterizing an affected site following a radiological event of national interest.
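The superposition step described above, building distributed-source responses from point-source measurements, is linear algebra: stack the per-unit-activity point-source spectra as rows and weight them by the source activities. A minimal sketch; the response matrix and activities are made-up numbers, not Aerial Measurement Systems data:

```python
import numpy as np

def distributed_response(point_responses, activities):
    """Estimate a detector spectrum for a distributed source as the
    activity-weighted superposition of point-source responses.
    point_responses: (n_points, n_channels) array, response per unit activity.
    activities: (n_points,) source activities."""
    point_responses = np.asarray(point_responses, dtype=float)
    activities = np.asarray(activities, dtype=float)
    return activities @ point_responses

# Two hypothetical point-source spectra (3 energy channels each):
R = np.array([[10.0, 5.0, 1.0],
              [2.0, 8.0, 4.0]])
spec = distributed_response(R, [1.0, 2.0])
assert np.allclose(spec, [14.0, 21.0, 9.0])
```

The same weighted sum, refined to a spatial grid of MCNP-computed point responses, is how a distributed-source survey spectrum can be predicted without re-running the transport calculation for every source shape.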

  2. Principle of minimum distance in space of states as new principle in quantum physics

    International Nuclear Information System (INIS)

    Ion, D. B.; Ion, M. L. D.

    2007-01-01

The mathematician Leonhard Euler (1707-1783) appears to have been a philosophical optimist, having written: 'Since the fabric of the universe is the most perfect and is the work of the most wise Creator, nothing whatsoever takes place in this universe in which some relation of maximum or minimum does not appear. Wherefore, there is absolutely no doubt that every effect in the universe can be explained as satisfactorily from final causes, by the aid of the method of maxima and minima, as it can from the effective causes themselves.' Having in mind this kind of optimism, in the papers mentioned in this work we introduced and investigated the possibility of constructing a predictive analytic theory of elementary particle interactions based on the principle of minimum distance in the space of quantum states (PMD-SQS). Choosing the partial transition amplitudes as the system's variational variables and the distance in the space of quantum states as a measure of the system's effectiveness, we obtained the results presented in this paper. These results prove that the principle of minimum distance in the space of quantum states (PMD-SQS) can be chosen as a variational principle by which the analytic expressions of the partial transition amplitudes can be found. In this paper we present a description of hadron-hadron scattering via the principle of minimum distance PMD-SQS when the distance in the space of states is minimized with two directional constraints: dσ/dΩ(±1) = fixed. Then, by using the available experimental (pion-nucleon and kaon-nucleon) phase shifts, we obtained not only consistent experimental tests of the PMD-SQS optimality, but also strong experimental evidence for new principles in hadronic physics, such as: the principle of nonextensivity conjugation via the Riesz-Thorin relation (1/2p + 1/2q = 1) and a new principle of limited uncertainty in nonextensive quantum physics. The strong experimental evidence obtained here for the nonextensive statistical behavior of the [J,

  3. Dimensional cosmological principles

    International Nuclear Information System (INIS)

    Chi, L.K.

    1985-01-01

    The dimensional cosmological principles proposed by Wesson require that the density, pressure, and mass of cosmological models be functions of the dimensionless variables which are themselves combinations of the gravitational constant, the speed of light, and the spacetime coordinates. The space coordinate is not the comoving coordinate. In this paper, the dimensional cosmological principle and the dimensional perfect cosmological principle are reformulated by using the comoving coordinate. The dimensional perfect cosmological principle is further modified to allow the possibility that mass creation may occur. Self-similar spacetimes are found to be models obeying the new dimensional cosmological principle

  4. Metaphysics of the principle of least action

    Science.gov (United States)

    Terekhovich, Vladislav

    2018-05-01

    Despite the importance of the variational principles of physics, there have been relatively few attempts to consider them for a realistic framework. In addition to the old teleological question, this paper continues the recent discussion regarding the modal involvement of the principle of least action and its relations with the Humean view of the laws of nature. The reality of possible paths in the principle of least action is examined from the perspectives of the contemporary metaphysics of modality and Leibniz's concept of essences or possibles striving for existence. I elaborate a modal interpretation of the principle of least action that replaces a classical representation of a system's motion along a single history in the actual modality by simultaneous motions along an infinite set of all possible histories in the possible modality. This model is based on an intuition that deep ontological connections exist between the possible paths in the principle of least action and possible quantum histories in the Feynman path integral. I interpret the action as a physical measure of the essence of every possible history. Therefore only one actual history has the highest degree of the essence and minimal action. To address the issue of necessity, I assume that the principle of least action has a general physical necessity and lies between the laws of motion with a limited physical necessity and certain laws with a metaphysical necessity.

  5. Demonstrating Fermat's Principle in Optics

    Science.gov (United States)

    Paleiov, Orr; Pupko, Ofir; Lipson, S. G.

    2011-01-01

    We demonstrate Fermat's principle in optics by a simple experiment using reflection from an arbitrarily shaped one-dimensional reflector. We investigated a range of possible light paths from a lamp to a fixed slit by reflection in a curved reflector and showed by direct measurement that the paths along which light is concentrated have either…

  6. Superconducting analogs of quantum optical phenomena: Macroscopic quantum superpositions and squeezing in a superconducting quantum-interference device ring

    International Nuclear Information System (INIS)

    Everitt, M.J.; Clark, T.D.; Stiffell, P.B.; Prance, R.J.; Prance, H.; Vourdas, A.; Ralph, J.F.

    2004-01-01

    In this paper we explore the quantum behavior of a superconducting quantum-interference device (SQUID) ring which has a significant Josephson coupling energy. We show that the eigenfunctions of the Hamiltonian for the ring can be used to create macroscopic quantum superposition states of the ring. We also show that the ring potential may be utilized to squeeze coherent states. With the SQUID ring as a strong contender as a device for manipulating quantum information, such properties may be of great utility in the future. However, as with all candidate systems for quantum technologies, decoherence is a fundamental problem. In this paper we apply an open systems approach to model the effect of coupling a quantum-mechanical SQUID ring to a thermal bath. We use this model to demonstrate the manner in which decoherence affects the quantum states of the ring

  7. Are Quantum Models for Order Effects Quantum?

    Science.gov (United States)

    Moreira, Catarina; Wichert, Andreas

    2017-12-01

The application of principles of Quantum Mechanics in areas outside of physics has been getting increasing attention in the scientific community in an emergent discipline called Quantum Cognition. These principles have been applied to explain paradoxical situations that cannot be easily explained through classical theory. In quantum probability, events are characterised by a superposition state, which is represented by a state vector in an N-dimensional vector space. The probability of an event is given by the squared magnitude of the projection of this superposition state onto the desired subspace. This geometric approach is very useful for explaining paradoxical findings that involve order effects, but do we really need quantum principles for models that only involve projections? This work has two main goals. First, it is still not clear in the literature whether a quantum projection model has any advantage over a classical projection model. We compared both models and concluded that the quantum projection model achieves the same results as its classical counterpart, because quantum interference effects play no role in the computation of the probabilities. Second, we propose an alternative relativistic interpretation of the rotation parameters that are involved in both classical and quantum models. In the end, instead of interpreting these parameters as a similarity measure between questions, we propose that they emerge due to a lack of knowledge concerning a personal basis state, and due to uncertainties about the state of the world and the context of the questions.
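The projection calculation at the heart of both models can be sketched directly: answering question A "yes" and then question B "yes" corresponds to applying the two projectors in sequence, and the probability is the squared norm of the projected state. When the projectors do not commute, the order matters. A minimal sketch in a real 2-D space; the angles are arbitrary illustrations:

```python
import numpy as np

def projector(angle):
    """Rank-1 projector onto the direction `angle` in a real 2-D space."""
    u = np.array([np.cos(angle), np.sin(angle)])
    return np.outer(u, u)

def sequential_yes_prob(state, P1, P2):
    """P(yes to Q1, then yes to Q2) = ||P2 P1 |s>||^2 for a unit state |s>."""
    v = P2 @ (P1 @ state)
    return float(np.dot(v, v))

s = np.array([1.0, 0.0])                  # initial belief state
Pa, Pb = projector(np.pi / 6), projector(np.pi / 3)
p_ab = sequential_yes_prob(s, Pa, Pb)     # ask A first, then B
p_ba = sequential_yes_prob(s, Pb, Pa)     # ask B first, then A
assert abs(p_ab - p_ba) > 0.1             # non-commuting projectors give an order effect
```

Note that nothing here uses complex amplitudes; this is exactly why, as the abstract concludes, the classical projection model reproduces the quantum one when no interference terms enter.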

  8. Topological Principles of Borosilicate Glass Chemistry

    DEFF Research Database (Denmark)

    Smedskjær, Morten Mattrup; Mauro, J. C.; Youngman, R. E.

    2011-01-01

    and laboratory glassware to high-tech applications such as liquid crystal displays. In this paper, we investigate the topological principles of borosilicate glass chemistry covering the extremes from pure borate to pure silicate end members. Based on NMR measurements, we present a two-state statistical...

  9. Near-field interferometry of a free-falling nanoparticle from a point-like source

    Science.gov (United States)

    Bateman, James; Nimmrichter, Stefan; Hornberger, Klaus; Ulbricht, Hendrik

    2014-09-01

    Matter-wave interferometry performed with massive objects elucidates their wave nature and thus tests the quantum superposition principle at large scales. Whereas standard quantum theory places no limit on particle size, alternative, yet untested theories—conceived to explain the apparent quantum to classical transition—forbid macroscopic superpositions. Here we propose an interferometer with a levitated, optically cooled and then free-falling silicon nanoparticle in the mass range of one million atomic mass units, delocalized over >150 nm. The scheme employs the near-field Talbot effect with a single standing-wave laser pulse as a phase grating. Our analysis, which accounts for all relevant sources of decoherence, indicates that this is a viable route towards macroscopic high-mass superpositions using available technology.
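The scale of such an experiment can be checked with back-of-the-envelope numbers: the de Broglie wavelength λ = h/(mv) of a 10⁶ amu particle is far below atomic dimensions, which is why a near-field (Talbot) scheme with Talbot length L_T = d²/λ is used rather than a far-field double slit. The velocity and grating period below are assumed round numbers, not the paper's parameters:

```python
# Order-of-magnitude check for near-field interferometry with a 1e6 amu particle.
h = 6.62607015e-34        # Planck constant, J s
amu = 1.66053906660e-27   # atomic mass unit, kg

m = 1.0e6 * amu           # particle mass, kg
v = 1.0                   # assumed particle velocity, m/s

lam_dB = h / (m * v)      # de Broglie wavelength, m
d = 100e-9                # assumed grating period, m
L_T = d ** 2 / lam_dB     # Talbot length, m

# The wavelength is far below atomic dimensions, which is why a near-field
# (Talbot) scheme is needed for such massive particles.
assert 1e-13 < lam_dB < 1e-12
```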

  10. Multidimensional Heat Conduction

    DEFF Research Database (Denmark)

    Rode, Carsten

    1998-01-01

Analytical theory of multidimensional heat conduction. General heat conduction equation in three dimensions. Steady state, analytical solutions. The Laplace equation. Method of separation of variables. Principle of superposition. Shape factors. Transient, multidimensional heat conduction.
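The separation-of-variables and superposition entries above combine in the textbook steady-state problem: a rectangular plate with three sides at zero temperature and one heated side, solved as a sine series whose separable terms are summed by superposition. A minimal sketch; the geometry and boundary values are the standard textbook choice, not taken from the course material:

```python
import numpy as np

def plate_temperature(x, y, W=1.0, H=1.0, T_top=1.0, n_terms=200):
    """Steady 2-D conduction in a W x H plate with three sides at 0 and the
    top side (y = H) at T_top: the classical separation-of-variables series,
    summed by superposition of the individual separable solutions."""
    T = np.zeros_like(np.asarray(x, dtype=float))
    for n in range(1, n_terms + 1):
        k = n * np.pi / W
        # Fourier sine coefficient of the constant boundary value T_top:
        b = 2.0 * T_top * (1 - (-1) ** n) / (n * np.pi)
        T += b * np.sin(k * x) * np.sinh(k * y) / np.sinh(k * H)
    return T

# Midpoint of a square plate: the exact value is T_top/4, since superposing
# four rotated copies of the problem (one hot side each) gives uniform T_top.
T_mid = plate_temperature(0.5, 0.5)
assert abs(float(T_mid) - 0.25) < 1e-3
```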

  11. Physical acoustics v.8 principles and methods

    CERN Document Server

    Mason, Warren P

    1971-01-01

    Physical Acoustics: Principles and Methods, Volume VIII discusses a number of themes on physical acoustics that are divided into seven chapters. Chapter 1 describes the principles and applications of a tool for investigating phonons in dielectric crystals, the spin phonon spectrometer. The next chapter discusses the use of ultrasound in investigating Landau quantum oscillations in the presence of a magnetic field and their relation to the strain dependence of the Fermi surface of metals. The third chapter focuses on the ultrasonic measurements that are made by pulsing methods with velo

  12. Safety principles for nuclear power plants

    International Nuclear Information System (INIS)

    Vuorinen, A.

    1993-01-01

The role and purpose of safety principles for nuclear power plants are discussed. Brief information is presented on the safety objectives as given in the INSAG documents. The possible linkage between the two mentioned elements of nuclear safety and safety culture is discussed. Safety culture is a rather new concept, and there is more than one interpretation of the definition given by INSAG. Defence in depth is defined by INSAG as a fundamental principle of the safety technology of nuclear power. Also discussed are the overall strategy for safety measures and the features of nuclear power plants provided by the defence-in-depth concept. (Z.S.) 7 refs

  13. Principles of modern radar systems

    CERN Document Server

    Carpentier, Michel H

    1988-01-01

    Introduction to random functions ; signal and noise : the ideal receiver ; performance of radar systems equipped with ideal receivers ; analysis of the operating principles of some types of radar ; behavior of real targets, fluctuation of targets ; angle measurement using radar ; data processing of radar information, radar coverage ; applications to electronic scanning antennas to radar ; introduction to Hilbert spaces.

  14. A first-principles approach to finite temperature elastic constants

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Y; Wang, J J; Zhang, H; Manga, V R; Shang, S L; Chen, L-Q; Liu, Z-K [Department of Materials Science and Engineering, Pennsylvania State University, University Park, PA 16802 (United States)

    2010-06-09

A first-principles approach to calculating the elastic stiffness coefficients at finite temperatures was proposed. It is based on the assumption that the temperature dependence of elastic stiffness coefficients mainly results from volume change as a function of temperature; it combines the first-principles calculations of elastic constants at 0 K and the first-principles phonon theory of thermal expansion. Its applications to elastic constants of Al, Cu, Ni, Mo, Ta, NiAl, and Ni₃Al from 0 K up to their respective melting points show excellent agreement between the predicted values and existing experimental measurements.
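The quasistatic approximation described in the abstract separates cleanly into two ingredients: elastic constants as a function of volume at 0 K, and volume as a function of temperature from phonon theory, composed as c_ij(T) = c_ij(V(T)). A minimal sketch with invented linear fits; all numbers are hypothetical illustrations, not first-principles results:

```python
def c11_of_volume(V):
    """Assumed linear fit of a 0 K elastic constant versus volume, in GPa
    (hypothetical coefficients standing in for 0 K first-principles data)."""
    V0 = 16.5  # assumed equilibrium volume per atom, Angstrom^3
    return 110.0 - 400.0 * (V - V0) / V0

def volume_of_T(T, V0=16.5, alpha=5e-5):
    """Assumed thermal expansion V(T) = V0*(1 + alpha*T), standing in for
    the phonon-theory result (hypothetical expansion coefficient)."""
    return V0 * (1.0 + alpha * T)

def c11_of_T(T):
    """Quasistatic composition: the T dependence enters only through V(T)."""
    return c11_of_volume(volume_of_T(T))

# Softening with temperature follows from thermal expansion alone here:
assert c11_of_T(300.0) < c11_of_T(0.0)
```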

  15. A first-principles approach to finite temperature elastic constants

    International Nuclear Information System (INIS)

    Wang, Y; Wang, J J; Zhang, H; Manga, V R; Shang, S L; Chen, L-Q; Liu, Z-K

    2010-01-01

A first-principles approach to calculating the elastic stiffness coefficients at finite temperatures was proposed. It is based on the assumption that the temperature dependence of elastic stiffness coefficients mainly results from volume change as a function of temperature; it combines the first-principles calculations of elastic constants at 0 K and the first-principles phonon theory of thermal expansion. Its applications to elastic constants of Al, Cu, Ni, Mo, Ta, NiAl, and Ni₃Al from 0 K up to their respective melting points show excellent agreement between the predicted values and existing experimental measurements.

  16. The demonstration of nonlinear analytic model for the strain field induced by thermal copper filled TSVs (through silicon vias)

    Directory of Open Access Journals (Sweden)

    M. H. Liao

    2013-08-01

Full Text Available The thermo-elastic strain is induced by through-silicon vias (TSVs) due to the difference in thermal expansion coefficients between copper (∼18 ppm/°C) and silicon (∼2.8 ppm/°C) when the structure is exposed to a thermal ramp budget in the three-dimensional integrated circuit (3DIC) process. These thermal expansion stresses are high enough to introduce delamination at the interfaces between the copper, the silicon, and the isolating dielectric. A compact analytic model for the strain field induced by different layouts of thermal copper-filled TSVs based on the linear superposition principle is found to have large errors due to the strong stress interaction between TSVs. In this work, a nonlinear stress analytic model for different TSV layouts is demonstrated by the finite element method and Mohr's circle analysis. The stress characteristics are also measured by the atomic force microscope-Raman technique with nanometer-level spatial resolution. The change of the electron mobility with this nonlinear stress model, which accounts for the strong interactions between TSVs, is ∼2–6% smaller in comparison with that obtained from the linear stress superposition principle alone.

  17. The principles of measuring forest fire danger

    Science.gov (United States)

    H. T. Gisborne

    1936-01-01

    Research in fire danger measurement was commenced in 1922 at the Northern Rocky Mountain Forest and Range Experiment Station of the U. S. Forest Service, with headquarters at Missoula, Mont. Since then investigations have been made concerning (1) what to measure, (2) how to measure, and (3) field use of these measurements. In all cases the laboratory or restricted...

  18. Electricity electron measurement

    International Nuclear Information System (INIS)

    Kim, Sang Jin; Sung, Rak Jin

    1985-11-01

    This book deals with the measurement of electricity and electronics. It is divided into fourteen chapters, covering the basics of electrical measurement; units and standards; important electronic circuits for measurement; electrical instruments; impedance measurement; measurement of power and energy; frequency and time measurement; waveform measurement; recording and direct-viewing instruments; super-high-frequency measurement; digital measurement and analog-to-digital conversion; magnetic measurement classified by measurement principle; applied electrical measurement with sensors; and the systematization of measurement.

  19. Equivalence principles and electromagnetism

    Science.gov (United States)

    Ni, W.-T.

    1977-01-01

    The implications of the weak equivalence principles are investigated in detail for electromagnetic systems in a general framework. In particular, it is shown that the universality of free-fall trajectories (Galileo weak equivalence principle) does not imply the validity of the Einstein equivalence principle. However, the Galileo principle plus the universality of free-fall rotation states does imply the Einstein principle.

  20. Quantum Action Principle with Generalized Uncertainty Principle

    OpenAIRE

    Gu, Jie

    2013-01-01

    One of the common features of all promising candidates for quantum gravity is the existence of a minimal length scale, which naturally emerges with a generalized uncertainty principle, or equivalently a modified commutation relation. Schwinger's quantum action principle was modified to incorporate this modification, and was applied to the calculation of the kernel of a free particle, partly recovering results previously obtained using the path integral.
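
For concreteness, the modified commutation relation mentioned in the abstract is commonly written in the literature (a standard form, not quoted from this work) as

```latex
[\hat{x},\hat{p}] = i\hbar\left(1+\beta \hat{p}^{2}\right),
\qquad
\Delta x\,\Delta p \;\ge\; \frac{\hbar}{2}\left(1+\beta(\Delta p)^{2}\right),
```

which implies a minimal position uncertainty \(\Delta x_{\min} = \hbar\sqrt{\beta}\), i.e. a minimal length scale set by the deformation parameter \(\beta\).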

  1. The role of general relativity in the uncertainty principle

    International Nuclear Information System (INIS)

    Padmanabhan, T.

    1986-01-01

    The role played by general relativity in quantum mechanics (especially as regards the uncertainty principle) is investigated. It is confirmed that the validity of the time-energy uncertainty relation does depend on gravitational time dilation. It is also shown that there exists an intrinsic lower bound to the accuracy with which the acceleration due to gravity can be measured. The notion of the equivalence principle in quantum mechanics is clarified. (author)

  2. Principles of fluid mechanics

    International Nuclear Information System (INIS)

    Kreider, J.F.

    1985-01-01

    This book is an introduction to fluid mechanics incorporating computer applications. Topics covered are as follows: brief history; what is a fluid; two classes of fluids: liquids and gases; the continuum model of a fluid; methods of analyzing fluid flows; important characteristics of fluids; fundamentals and equations of motion; fluid statics; dimensional analysis and the similarity principle; laminar internal flows; ideal flow; external laminar and channel flows; turbulent flow; compressible flow; fluid flow measurements

  3. Atoms in the secondary school

    International Nuclear Information System (INIS)

    Marx, G.

    1976-01-01

    A basic nuclear physics teaching programme, at present being used in a number of Hungarian secondary schools, is described in this and a previous article (Marx. Phys. Educ.; 11: 409 (1976)). Simple notions of quantum theory and the general principles of superposition and the de Broglie wavelength, the uncertainty relation and the exclusion principle are used. Using these principles, the teaching of concepts concerning the excited states of the hydrogen atom, Pauli's exclusion principle, ion formation, covalent bonding and the shape of molecules is discussed. (U.K.)

  4. PRINCIPLES OF THE SUPPLY CHAIN PERFORMANCE MEASUREMENT

    OpenAIRE

    BEATA ŒLUSARCZYK; SEBASTIAN KOT

    2012-01-01

    Measurement of performance in any business is a crucial activity for increasing effectiveness. The lack of suitable performance measurement is especially noticeable in complex systems such as supply chains. Responsible persons cannot manage effectively without a suitable set of measures that serve as a basis for comparison with previous data or with the performance of other supply chains. The analysis shows that it is very hard to find a balanced set of supply chain performance measures that sh...

  5. Fundamental safety principles. Safety fundamentals

    International Nuclear Information System (INIS)

    2007-01-01

    This publication states the fundamental safety objective and ten associated safety principles, and briefly describes their intent and purpose. The fundamental safety objective - to protect people and the environment from harmful effects of ionizing radiation - applies to all circumstances that give rise to radiation risks. The safety principles are applicable, as relevant, throughout the entire lifetime of all facilities and activities - existing and new - utilized for peaceful purposes, and to protective actions to reduce existing radiation risks. They provide the basis for requirements and measures for the protection of people and the environment against radiation risks and for the safety of facilities and activities that give rise to radiation risks, including, in particular, nuclear installations and uses of radiation and radioactive sources, the transport of radioactive material and the management of radioactive waste

  6. Fundamental safety principles. Safety fundamentals

    International Nuclear Information System (INIS)

    2006-01-01

    This publication states the fundamental safety objective and ten associated safety principles, and briefly describes their intent and purpose. The fundamental safety objective - to protect people and the environment from harmful effects of ionizing radiation - applies to all circumstances that give rise to radiation risks. The safety principles are applicable, as relevant, throughout the entire lifetime of all facilities and activities - existing and new - utilized for peaceful purposes, and to protective actions to reduce existing radiation risks. They provide the basis for requirements and measures for the protection of people and the environment against radiation risks and for the safety of facilities and activities that give rise to radiation risks, including, in particular, nuclear installations and uses of radiation and radioactive sources, the transport of radioactive material and the management of radioactive waste

  7. Separability criteria and method of measurement for entanglement

    Science.gov (United States)

    Mohd, Siti Munirah; Idrus, Bahari; Mukhtar, Muriati

    2014-06-01

    Quantum computers have the potential to solve certain problems faster than classical computers. In a quantum computer, entanglement is one of the key elements besides superposition. Recently, with the advent of quantum information theory, entanglement has become an important resource for Quantum Information and Computation. The purpose of this paper is to discuss separability criteria and methods of measurement for entanglement. The paper reviews the methods proposed in previous works on bipartite and multipartite entanglement. The outcome is a classification of the different methods used to measure entanglement in the bipartite and multipartite cases, including the advantages and disadvantages of each method.

  8. Separability criteria and method of measurement for entanglement

    Energy Technology Data Exchange (ETDEWEB)

    Mohd, Siti Munirah; Idrus, Bahari; Mukhtar, Muriati [Industrial Computing Research Group, Faculty of Information Science and Technology, Universiti Kebangsaan Malaysia, 43600 Bangi, Selangor (Malaysia)

    2014-06-19

    Quantum computers have the potential to solve certain problems faster than classical computers. In a quantum computer, entanglement is one of the key elements besides superposition. Recently, with the advent of quantum information theory, entanglement has become an important resource for Quantum Information and Computation. The purpose of this paper is to discuss separability criteria and methods of measurement for entanglement. The paper reviews the methods proposed in previous works on bipartite and multipartite entanglement. The outcome is a classification of the different methods used to measure entanglement in the bipartite and multipartite cases, including the advantages and disadvantages of each method.

  9. Separability criteria and method of measurement for entanglement

    International Nuclear Information System (INIS)

    Mohd, Siti Munirah; Idrus, Bahari; Mukhtar, Muriati

    2014-01-01

    Quantum computers have the potential to solve certain problems faster than classical computers. In a quantum computer, entanglement is one of the key elements besides superposition. Recently, with the advent of quantum information theory, entanglement has become an important resource for Quantum Information and Computation. The purpose of this paper is to discuss separability criteria and methods of measurement for entanglement. The paper reviews the methods proposed in previous works on bipartite and multipartite entanglement. The outcome is a classification of the different methods used to measure entanglement in the bipartite and multipartite cases, including the advantages and disadvantages of each method
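
One concrete separability criterion of the kind surveyed in these papers is the Peres-Horodecki positive-partial-transpose (PPT) test: for two-qubit (and qubit-qutrit) systems, a density matrix is separable exactly when its partial transpose has no negative eigenvalue. A minimal NumPy sketch (function names are our own):

```python
import numpy as np

def partial_transpose(rho):
    """Partial transpose over the second qubit of a 4x4 density matrix.
    After reshape, r[i, k, j, l] = <ik|rho|jl>; transposing subsystem B
    swaps the k and l indices."""
    r = rho.reshape(2, 2, 2, 2)
    return r.transpose(0, 3, 2, 1).reshape(4, 4)

def is_entangled(rho, tol=1e-12):
    """Peres-Horodecki criterion: a two-qubit state is entangled iff its
    partial transpose has a negative eigenvalue."""
    return bool(np.linalg.eigvalsh(partial_transpose(rho)).min() < -tol)

# Bell state |Phi+> = (|00> + |11>)/sqrt(2): maximally entangled.
phi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2.0)
bell = np.outer(phi, phi)

# The maximally mixed state is separable.
mixed = np.eye(4) / 4.0

print(is_entangled(bell), is_entangled(mixed))
```

For larger systems PPT remains necessary but no longer sufficient, which is one reason the multipartite case needs the additional measures the papers classify.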

  10. Thermionics basic principles of electronics

    CERN Document Server

    Jenkins, J; Ashhurst, W

    2013-01-01

    Basic Principles of Electronics, Volume I : Thermionics serves as a textbook for students in physics. It focuses on thermionic devices. The book covers topics on electron dynamics, electron emission, and the themionic vacuum diode and triode. Power amplifiers, oscillators, and electronic measuring equipment are studied as well. The text will be of great use to physics and electronics students, and inventors.

  11. Generalized uncertainty principle as a consequence of the effective field theory

    Energy Technology Data Exchange (ETDEWEB)

    Faizal, Mir, E-mail: mirfaizalmir@gmail.com [Irving K. Barber School of Arts and Sciences, University of British Columbia – Okanagan, Kelowna, British Columbia V1V 1V7 (Canada); Department of Physics and Astronomy, University of Lethbridge, Lethbridge, Alberta T1K 3M4 (Canada); Ali, Ahmed Farag, E-mail: ahmed.ali@fsc.bu.edu.eg [Department of Physics, Faculty of Science, Benha University, Benha, 13518 (Egypt); Netherlands Institute for Advanced Study, Korte Spinhuissteeg 3, 1012 CG Amsterdam (Netherlands); Nassar, Ali, E-mail: anassar@zewailcity.edu.eg [Department of Physics, Zewail City of Science and Technology, 12588, Giza (Egypt)

    2017-02-10

    We will demonstrate that the generalized uncertainty principle exists because of the derivative expansion in the effective field theories. This is because in the framework of the effective field theories, the minimum measurable length scale has to be integrated away to obtain the low energy effective action. We will analyze the deformation of a massive free scalar field theory by the generalized uncertainty principle, and demonstrate that the minimum measurable length scale corresponds to a second more massive scale in the theory, which has been integrated away. We will also analyze CFT operators dual to this deformed scalar field theory, and observe that scaling of the new CFT operators indicates that they are dual to this more massive scale in the theory. We will use holographic renormalization to explicitly calculate the renormalized boundary action with counter terms for this scalar field theory deformed by generalized uncertainty principle, and show that the generalized uncertainty principle contributes to the matter conformal anomaly.

  12. Generalized uncertainty principle as a consequence of the effective field theory

    Directory of Open Access Journals (Sweden)

    Mir Faizal

    2017-02-01

    Full Text Available We will demonstrate that the generalized uncertainty principle exists because of the derivative expansion in the effective field theories. This is because in the framework of the effective field theories, the minimum measurable length scale has to be integrated away to obtain the low energy effective action. We will analyze the deformation of a massive free scalar field theory by the generalized uncertainty principle, and demonstrate that the minimum measurable length scale corresponds to a second more massive scale in the theory, which has been integrated away. We will also analyze CFT operators dual to this deformed scalar field theory, and observe that scaling of the new CFT operators indicates that they are dual to this more massive scale in the theory. We will use holographic renormalization to explicitly calculate the renormalized boundary action with counter terms for this scalar field theory deformed by generalized uncertainty principle, and show that the generalized uncertainty principle contributes to the matter conformal anomaly.

  13. Experimental application of design principles in corrosion research

    International Nuclear Information System (INIS)

    Smyrl, W.H.; Pohlman, S.L.

    1977-01-01

    Experimental design criteria for corrosion investigations are based on established principles for systems that have uniform, or nearly uniform, corrosive attack. Scale-up or scale-down may be accomplished by proper use of dimensionless groups that measure the relative importance of interfacial kinetics, solution conductivity, and mass transfer. These principles have been applied to different fields of corrosion which include materials selection testing and protection; and to a specific corrosion problem involving attack of a substrate through holes in a protective overplate

  14. Three Principles of Water Flow in Soils

    Science.gov (United States)

    Guo, L.; Lin, H.

    2016-12-01

    ...-based dynamics of water flow, and the third principle combines macroscopic and microscopic considerations to explain a mosaic-like flow regime in soils. Integration of the above principles can advance flow theory, measurement, and modeling and can improve management of soil and water resources.

  15. Nuclear detectors. Physical principles of operation

    International Nuclear Information System (INIS)

    Pochet, Th.

    2005-01-01

    Nuclear detection is used in several domains of activity, from physics research to the nuclear industry, the medical and industrial sectors, security, etc. The particles of interest are α and β particles, X and γ rays, and neutrons. This article treats the basic physical properties of radiation detection, the general characteristics of the different classes of existing detectors, and particle/matter interactions: 1 - general considerations; 2 - measurement types and definitions: pulse mode, current mode, definitions; 3 - physical principles of direct detection: introduction and general problem, materials used in detection, simple device, junction semiconductor device, charge generation and transport inside matter, signal generation; 4 - physical principles of indirect detection: introduction, scintillation mechanisms, definition and properties of scintillators. (J.S.)

  16. Behavior Modification: Basic Principles. Third Edition

    Science.gov (United States)

    Lee, David L.; Axelrod, Saul

    2005-01-01

    This classic book presents the basic principles of behavior emphasizing the use of preventive techniques as well as consequences naturally available in the home, business, or school environment to change important behaviors. This book, and its companion piece, "Measurement of Behavior," represents more than 30 years of research and strategies in…

  17. APPLYING THE PRINCIPLES OF ACCOUNTING IN

    OpenAIRE

    NAGY CRISTINA MIHAELA; SABĂU CRĂCIUN; ”Tibiscus” University of Timişoara, Faculty of Economic Science

    2015-01-01

    The application of accounting principles (the accrual-basis principle; the business-continuity (going-concern) principle; the method-consistency principle; the prudence principle; the independence principle; the principle of separate valuation of assets and liabilities; the intangibility principle; the non-compensation principle; the principle of substance over form; the materiality-threshold principle) to companies in bankruptcy proceedings has a number of particularities. Thus, some principl...

  18. Dynamics and Rheology of Soft Colloidal Glasses

    KAUST Repository

    Wen, Yu Ho; Schaefer, Jennifer L.; Archer, Lynden A.

    2015-01-01

    © 2015 American Chemical Society. The linear viscoelastic (LVE) spectrum of a soft colloidal glass is accessed with the aid of a time-concentration superposition (TCS) principle, which unveils the glassy particle dynamics from in-cage rattling
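
The idea behind time-concentration superposition parallels the familiar time-temperature superposition: moduli measured at different concentrations collapse onto one master curve once the frequency axis is scaled by a concentration-dependent shift factor a_c. A toy illustration with a single-mode Maxwell model (the model and numbers are our own, not from the paper):

```python
import numpy as np

def g_prime(omega, tau):
    """Storage modulus of a single-mode Maxwell element (illustrative model)."""
    return (omega * tau) ** 2 / (1.0 + (omega * tau) ** 2)

omega = np.logspace(-2, 2, 41)      # measurement frequencies (rad/s)

# Two "concentrations" whose relaxation times differ by a factor of 10.
g_ref = g_prime(omega, tau=1.0)     # reference concentration
g_high = g_prime(omega, tau=10.0)   # higher concentration: slower dynamics

# Superposition: scaling the frequency axis by the shift factor
# a_c = tau_high / tau_ref maps the high-concentration data exactly
# onto the reference (master) curve.
a_c = 10.0
collapse_error = np.max(np.abs(g_prime(omega * a_c, tau=1.0) - g_high))
print(f"max deviation after shifting by a_c = {a_c}: {collapse_error:.2e}")
```

In practice a_c is fitted from overlapping data windows; the collapse is exact here only because the toy curves are generated from the same model.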

  19. Models for universal reduction of macroscopic quantum fluctuations

    International Nuclear Information System (INIS)

    Diosi, L.

    1988-10-01

    If quantum mechanics is universal, then macroscopic bodies would, in principle, possess macroscopic quantum fluctuations (MQF) in their positions, orientations, densities etc. Such MQF, however, are not observed in nature. The hypothesis is adopted that the absence of MQF is due to a certain universal mechanism. Gravitational measures were applied for reducing MQF of the mass density. This model leads to classical trajectories in the macroscopic limit of translational motion. For massive objects, unwanted macroscopic superpositions of quantum states will be destroyed within short times. (R.P.) 34 refs

  20. Narrowing of the balance function with centrality in Au + Au collisions at √sNN = 130 GeV

    International Nuclear Information System (INIS)

    Adams, J.; Alder, C.; Ahammed, Z.; Allgower, C.; Amonett, J.; Anderson, B.D.; Anderson, M.; Averichev, G.S.; Balewski, J.; Barannikova, O.; Barnby, L.S.; Baudot, J.; Bekele, S.; Belaga, V.V.; Bellwied, R.; Berger, J.; Bichsel, H.; Billmeier, A.; Bland, L.C.; Blyth, C.O.; Bonner, B.E.; Boucham, A.; Brandin, A.; Bravar, A.; Cadman, R.V.; Caines, H.; Calderonde la Barca Sanchez, M.; Cardenas, A.; Carroll, J.; Castillo, J.; Castro, M.; Cebra, D.; Chaloupka, P.; Chattopadhyay, S.; Chen, Y.; Chernenko, S.P.; Cherney, M.; Chikanian, A.; Choi, B.; Christie, W.; Coffin, J.P.; Cormier, T.M.; Corral, M.M.; Cramer, J.G.; Crawford, H.J.; Derevschikov, A.A.; Didenko, L.; Dietel, T.; Draper, J.E.; Dunin, V.B.; Dunlop, J.C.; Eckardt, V.; Efimov, L.G.; Emelianov, V.; Engelage, J.; Eppley, G.; Erazmus, B.; Fachini, P.; Faine, V.; Faivre, J.; Fatemi, R.; Filimonov, K.; Finch, E.; Fisyak, Y.; Flierl, D.; Foley, K.J.; Fu, J.; Gagliardi, C.A.; Gagunashvili, N.; Gans, J.; Gaudichet, L.; Germain, M.; Geurts, F.; Ghazikhanian, V.; Grachov, O.; Grigoriev, V.; Guedon, M.; Guertin, S.M.; Gushin, E.; Hallman, T.J.; Hardtke, D.; Harris, J.W.; Heinz, M.; Henry, T.W.; Heppelmann, S.; Herston, T.; Hippolyte, B.; Hirsch, A.; Hjort, E.; Hoffmann, G.W.; Horsley, M.; Huang, H.Z.; Humanic, T.J.; Igo, G.; Ishihara, A.; Ivanshin, Yu.I.; Jacobs, P.; Jacobs, W.W.; Janik, M.; Johnson, I.; Jones, P.G.; Judd, E.G.; Kaneta, M.; Kaplan, M.; Keane, D.; Kiryluk, J.; Kisiel, A.; Klay, J.; Klein, S.R.; Klyachko, A.; Kollegger, T.; Konstantinov, A.S.; Kopytine, M.; Kotchenda, L.; Kovalenko, A.D.; Kramer, M.; Kravtsov, P.; Krueger, K.; Kuhn, C.; Kulikov, A.I.; Kunde, G.J.; Kunz, C.L.; Kutuev, R.Kh.; Kuznetsov, A.A.; Lamont, M.A.C.; Landgraf, J.M.; Lange, S.; Lansdell, C.P.; Lasiuk, B.; Laue, F.; Lauret, J.; Lebedev, A.; Lednicky, R.; Leontiev, V.M.; LeVine, M.J.; Li, Q.; Lindenbaum, S.J.; Lisa, M.A.; Liu, F.; Liu, L.; Liu, Z.; Liu, Q.J.; Ljubicic, T.; Llope, W.J.; Long, H.

    2003-01-01

    The balance function is a new observable based on the principle that charge is locally conserved when particles are pair produced. Balance functions have been measured for charged particle pairs and identified charged pion pairs in Au + Au collisions at √sNN = 130 GeV at the Relativistic Heavy Ion Collider using STAR. Balance functions for peripheral collisions have widths consistent with model predictions based on a superposition of nucleon-nucleon scattering. Widths in central collisions are smaller, consistent with trends predicted by models incorporating late hadronization

  1. Narrowing of the balance function with centrality in Au+Au collisions at the square root of SNN = 130 GeV.

    Science.gov (United States)

    Adams, J; Adler, C; Ahammed, Z; Allgower, C; Amonett, J; Anderson, B D; Anderson, M; Averichev, G S; Balewski, J; Barannikova, O; Barnby, L S; Baudot, J; Bekele, S; Belaga, V V; Bellwied, R; Berger, J; Bichsel, H; Billmeier, A; Bland, L C; Blyth, C O; Bonner, B E; Boucham, A; Brandin, A; Bravar, A; Cadman, R V; Caines, H; Calderónde la Barca Sánchez, M; Cardenas, A; Carroll, J; Castillo, J; Castro, M; Cebra, D; Chaloupka, P; Chattopadhyay, S; Chen, Y; Chernenko, S P; Cherney, M; Chikanian, A; Choi, B; Christie, W; Coffin, J P; Cormier, T M; Corral, M M; Cramer, J G; Crawford, H J; Derevschikov, A A; Didenko, L; Dietel, T; Draper, J E; Dunin, V B; Dunlop, J C; Eckardt, V; Efimov, L G; Emelianov, V; Engelage, J; Eppley, G; Erazmus, B; Fachini, P; Faine, V; Faivre, J; Fatemi, R; Filimonov, K; Finch, E; Fisyak, Y; Flierl, D; Foley, K J; Fu, J; Gagliardi, C A; Gagunashvili, N; Gans, J; Gaudichet, L; Germain, M; Geurts, F; Ghazikhanian, V; Grachov, O; Grigoriev, V; Guedon, M; Guertin, S M; Gushin, E; Hallman, T J; Hardtke, D; Harris, J W; Heinz, M; Henry, T W; Heppelmann, S; Herston, T; Hippolyte, B; Hirsch, A; Hjort, E; Hoffmann, G W; Horsley, M; Huang, H Z; Humanic, T J; Igo, G; Ishihara, A; Ivanshin, Yu I; Jacobs, P; Jacobs, W W; Janik, M; Johnson, I; Jones, P G; Judd, E G; Kaneta, M; Kaplan, M; Keane, D; Kiryluk, J; Kisiel, A; Klay, J; Klein, S R; Klyachko, A; Kollegger, T; Konstantinov, A S; Kopytine, M; Kotchenda, L; Kovalenko, A D; Kramer, M; Kravtsov, P; Krueger, K; Kuhn, C; Kulikov, A I; Kunde, G J; Kunz, C L; Kutuev, R Kh; Kuznetsov, A A; Lamont, M A C; Landgraf, J M; Lange, S; Lansdell, C P; Lasiuk, B; Laue, F; Lauret, J; Lebedev, A; Lednický, R; Leontiev, V M; LeVine, M J; Li, Q; Lindenbaum, S J; Lisa, M A; Liu, F; Liu, L; Liu, Z; Liu, Q J; Ljubicic, T; Llope, W J; Long, H; Longacre, R S; Lopez-Noriega, M; Love, W A; Ludlam, T; Lynn, D; Ma, J; Magestro, D; Majka, R; Margetis, S; Markert, C; Martin, L; Marx, J; Matis, H S; Matulenko, Yu A; McShane, T S; 
Meissner, F; Melnick, Yu; Meschanin, A; Messer, M; Miller, M L; Milosevich, Z; Minaev, N G; Mitchell, J; Moore, C F; Morozov, V; de Moura, M M; Munhoz, M G; Nelson, J M; Nevski, P; Nikitin, V A; Nogach, L V; Norman, B; Nurushev, S B; Odyniec, G; Ogawa, A; Okorokov, V; Oldenburg, M; Olson, D; Paic, G; Pandey, S U; Panebratsev, Y; Panitkin, S Y; Pavlinov, A I; Pawlak, T; Perevoztchikov, V; Peryt, W; Petrov, V A; Planinic, M; Pluta, J; Porile, N; Porter, J; Poskanzer, A M; Potrebenikova, E; Prindle, D; Pruneau, C; Putschke, J; Rai, G; Rakness, G; Ravel, O; Ray, R L; Razin, S V; Reichhold, D; Reid, J G; Renault, G; Retiere, F; Ridiger, A; Ritter, H G; Roberts, J B; Rogachevski, O V; Romero, J L; Rose, A; Roy, C; Rykov, V; Sakrejda, I; Salur, S; Sandweiss, J; Savin, I; Schambach, J; Scharenberg, R P; Schmitz, N; Schroeder, L S; Schüttauf, A; Schweda, K; Seger, J; Seliverstov, D; Seyboth, P; Shahaliev, E; Shestermanov, K E; Shimanskii, S S; Simon, F; Skoro, G; Smirnov, N; Snellings, R; Sorensen, P; Sowinski, J; Spinka, H M; Srivastava, B; Stephenson, E J; Stock, R; Stolpovsky, A; Strikhanov, M; Stringfellow, B; Struck, C; Suaide, A A P; Sugarbaker, E; Suire, C; Sumbera, M; Surrow, B; Symons, T J M; de Toledo, A Szanto; Szarwas, P; Tai, A; Takahashi, J; Tang, A H; Thein, D; Thomas, J H; Thompson, M; Tikhomirov, V; Tokarev, M; Tonjes, M B; Trainor, T A; Trentalange, S; Tribble, R E; Trofimov, V; Tsai, O; Ullrich, T; Underwood, D G; Van Buren, G; Vander Molen, A M; Vasilevski, I M; Vasiliev, A N; Vigdor, S E; Voloshin, S A; Wang, F; Ward, H; Watson, J W; Wells, R; Westfall, G D; Whitten, C; Wieman, H; Willson, R; Wissink, S W; Witt, R; Wood, J; Xu, N; Xu, Z; Yakutin, A E; Yamamoto, E; Yang, J; Yepes, P; Yurevich, V I; Zanevski, Y V; Zborovský, I; Zhang, H; Zhang, W M; Zoulkarneev, R; Zubarev, A N

    2003-05-02

    The balance function is a new observable based on the principle that charge is locally conserved when particles are pair produced. Balance functions have been measured for charged particle pairs and identified charged pion pairs in Au+Au collisions at the square root of SNN = 130 GeV at the Relativistic Heavy Ion Collider using STAR. Balance functions for peripheral collisions have widths consistent with model predictions based on a superposition of nucleon-nucleon scattering. Widths in central collisions are smaller, consistent with trends predicted by models incorporating late hadronization.
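
For reference, the balance function measured in these analyses is commonly defined in the literature (a standard definition, not quoted from the abstract) in terms of conditional yields of opposite- and same-sign pairs as a function of relative rapidity:

```latex
B(\Delta y) \;=\; \frac{1}{2}\left\{
\frac{N_{+-}(\Delta y) - N_{++}(\Delta y)}{N_{+}}
\;+\;
\frac{N_{-+}(\Delta y) - N_{--}(\Delta y)}{N_{-}}
\right\},
```

where \(N_{+-}(\Delta y)\) counts oppositely charged pairs separated by \(\Delta y\) and \(N_{\pm}\) are the total charged-particle yields. A narrower balance function means balancing charges are created closer together in rapidity, consistent with later hadronization.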

  2. Mach's holographic principle

    International Nuclear Information System (INIS)

    Khoury, Justin; Parikh, Maulik

    2009-01-01

    Mach's principle is the proposition that inertial frames are determined by matter. We put forth and implement a precise correspondence between matter and geometry that realizes Mach's principle. Einstein's equations are not modified and no selection principle is applied to their solutions; Mach's principle is realized wholly within Einstein's general theory of relativity. The key insight is the observation that, in addition to bulk matter, one can also add boundary matter. Given a space-time, and thus the inertial frames, we can read off both boundary and bulk stress tensors, thereby relating matter and geometry. We consider some global conditions that are necessary for the space-time to be reconstructible, in principle, from bulk and boundary matter. Our framework is similar to that of the black hole membrane paradigm and, in asymptotically anti-de Sitter space-times, is consistent with holographic duality.

  3. Cosmological principle

    International Nuclear Information System (INIS)

    Wesson, P.S.

    1979-01-01

    The Cosmological Principle states: the universe looks the same to all observers regardless of where they are located. To most astronomers today the Cosmological Principle means the universe looks the same to all observers because the density of the galaxies is the same in all places. A new Cosmological Principle is proposed. It is called the Dimensional Cosmological Principle. It uses the properties of matter in the universe: density (ρ), pressure (p), and mass (m) within some region of space of length (l). The laws of physics require incorporation of constants for gravity (G) and the speed of light (c). After combining the six parameters into dimensionless numbers, the best choices are: 8πGl²ρ/c², 8πGl²p/c⁴, and 2Gm/c²l (the Schwarzschild factor). The Dimensional Cosmological Principle came about because old ideas conflicted with the rapidly-growing body of observational evidence indicating that galaxies in the universe have a clumpy rather than uniform distribution

  4. A comparison between anisotropic analytical and multigrid superposition dose calculation algorithms in radiotherapy treatment planning

    International Nuclear Information System (INIS)

    Wu, Vincent W.C.; Tse, Teddy K.H.; Ho, Cola L.M.; Yeung, Eric C.Y.

    2013-01-01

    Monte Carlo (MC) simulation is currently the most accurate dose calculation algorithm in radiotherapy planning but requires a relatively long processing time. Faster model-based algorithms such as the anisotropic analytical algorithm (AAA) of the Eclipse treatment planning system and multigrid superposition (MGS) of the XiO treatment planning system are 2 commonly used algorithms. This study compared AAA and MGS against MC, as the gold standard, on brain, nasopharynx, lung, and prostate cancer patients. Computed tomography scans of 6 patients of each cancer type were used. The same hypothetical treatment plan using the same machine and treatment prescription was computed for each case by each planning system using its respective dose calculation algorithm. The doses at reference points including (1) soft tissues only, (2) bones only, (3) air cavities only, (4) the soft tissue-bone boundary (Soft/Bone), (5) the soft tissue-air boundary (Soft/Air), and (6) the bone-air boundary (Bone/Air) were measured and compared using the mean absolute percentage error (MAPE), a function of the percentage dose deviations from MC. In addition, the computation time of each treatment plan was recorded and compared. The MAPEs of MGS were significantly lower than those of AAA in all types of cancers (p<0.001). With regard to body density combinations, the MAPE of AAA ranged from 1.8% (soft tissue) to 4.9% (Bone/Air), whereas that of MGS ranged from 1.6% (air cavities) to 2.9% (Soft/Bone). The MAPEs of MGS (2.6% ± 2.1%) were significantly lower than those of AAA (3.7% ± 2.5%) in all tissue density combinations (p<0.001). The mean computation time of AAA for all treatment plans was significantly lower than that of MGS (p<0.001). Both the AAA and MGS algorithms demonstrated dose deviations of less than 4.0% in most clinical cases, and their performance was better in homogeneous tissues than at tissue boundaries. In general, MGS demonstrated relatively smaller dose deviations than AAA but required longer computation time
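
The comparison metric used in the study, the mean absolute percentage error of an algorithm's point doses against the Monte Carlo reference, is straightforward to compute. A sketch (the dose values below are invented placeholders, not numbers from the paper):

```python
import numpy as np

def mape(doses, reference):
    """Mean absolute percentage error of calculated doses against a
    reference (e.g. Monte Carlo) at the same measurement points."""
    doses = np.asarray(doses, dtype=float)
    reference = np.asarray(reference, dtype=float)
    return 100.0 * np.mean(np.abs(doses - reference) / reference)

# Illustrative point doses in Gy (hypothetical numbers, not from the study).
mc = [2.00, 1.80, 1.50, 0.90]    # Monte Carlo reference
aaa = [2.06, 1.74, 1.44, 0.93]   # model-based algorithm A
mgs = [2.03, 1.82, 1.47, 0.91]   # model-based algorithm B

print(f"AAA MAPE: {mape(aaa, mc):.1f}%")
print(f"MGS MAPE: {mape(mgs, mc):.1f}%")
```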

  5. Precautionary Principles: General Definitions and Specific Applications to Genetically Modified Organisms

    Science.gov (United States)

    Lofstedt, Ragnar E.; Fischhoff, Baruch; Fischhoff, Ilya R.

    2002-01-01

    Precautionary principles have been proposed as a fundamental element of sound risk management. Their advocates see them as guiding action in the face of uncertainty, encouraging the adoption of measures that reduce serious risks to health, safety, and the environment. Their opponents may reject the very idea of precautionary principles, find…

  6. Executive Financial Reporting: Seven Principles to Use in Developing Effective Reports.

    Science.gov (United States)

    Jenkins, William A.; Fischer, Mary

    1991-01-01

    Higher education institution business officers need to follow principles of presentation, judgment, and measurement in developing effective executive financial reports. Principles include (1) keep the statement simple; (2) be consistent in reporting from year to year; (3) determine user needs and interests; (4) limit data; (5) provide trend lines;…

  7. The precautionary principle in international environmental law and international jurisprudence

    Directory of Open Access Journals (Sweden)

    Tubić Bojan

    2014-01-01

    Full Text Available This paper analyses the international regulation of the precautionary principle as one of the principles of environmental law. This principle envisages that when there are threats of serious and irreparable harm as a consequence of certain economic activity, the lack of scientific evidence and full certainty cannot be used as a reason for postponing efficient measures to prevent environmental harm. From an economic point of view, the application of the precautionary principle is problematic, because it creates greater responsibility for those who create possible risks than in the previous period. The precautionary principle can be found in numerous international treaties in this field, which regulate it in a very similar manner. There is no consensus in the doctrine on whether this principle has reached the level of international customary law, because it has been interpreted differently and has not been accepted by a large number of countries in their national legislation. It represents a developing concept consisting of changing positions on the adequate roles of science, economy, politics and law in the field of environmental protection. This principle has been discussed in several cases before the International Court of Justice and the International Tribunal for the Law of the Sea.

  8. On vector analogs of the modified Volterra lattice

    Energy Technology Data Exchange (ETDEWEB)

    Adler, V E; Postnikov, V V [L D Landau Institute for Theoretical Physics, 1a Semenov pr, 142432 Chernogolovka (Russian Federation); Sochi Branch of Peoples' Friendship University of Russia, 32 Kuibyshev str, 354000 Sochi (Russian Federation)], E-mail: adler@itp.ac.ru, E-mail: postnikovvv@rambler.ru

    2008-11-14

    The zero curvature representations, Baecklund transformations, nonlinear superposition principle and the simplest explicit solutions of soliton and breather type are presented for two vector generalizations of modified Volterra lattice. The relations with some other integrable equations are established.

  9. Radioactivity measurements principles and practice

    CERN Document Server

    Mann, W B; Spernol, A

    2012-01-01

    The authors have addressed the basic need for internationally consistent standards and methods demanded by the new and increasing use of radioactive materials, radiopharmaceuticals and labelled compounds. Particular emphasis is given to the basic and practical problems that may be encountered in measuring radioactivity. The text provides information and recommendations in the areas of radiation protection, focusing on quality control and the precautions necessary for the preparation and handling of radioactive substances. New information is also presented on the applications of both traditiona

  10. The main goals and principles of nuclear and radiation safety

    International Nuclear Information System (INIS)

    Huseynov, V.

    2015-01-01

    The use of modern radiation technology is expanding in various fields of human activity. Advanced approaches, methods and radiation technologies are of great importance in the industrial, medical, agricultural, construction, science, education and other areas of the rapidly developing Republic of Azerbaijan. Ensuring nuclear and radiation safety, safety standards, and the main principles and conception of safety play a crucial role. The following ten principles are taken as the basis for ensuring safety: 1. Responsibility for safety; 2. The role of government; 3. Leadership and management for safety; 4. Justification of facilities and activities; 5. Optimization of protection; 6. Limitation of risks to individuals; 7. Protection of present and future generations; 8. Prevention of accidents; 9. Emergency preparedness and response; 10. Reduction of risks from existing and unregulated radiation sources. The safety principles are applied together.

  11. High-Order Hamilton's Principle and the Hamilton's Principle of High-Order Lagrangian Function

    International Nuclear Information System (INIS)

    Zhao Hongxia; Ma Shanjun

    2008-01-01

    In this paper, based on the theorem of high-order velocity energy and the integration and variation principle, the high-order Hamilton's principle of general holonomic systems is given. Then, third-order and fourth-order Lagrangian equations are obtained from the high-order Hamilton's principle. Finally, the Hamilton's principle of high-order Lagrangian functions is given.
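    For orientation, the first-order principle that this record's high-order version generalizes can be written in standard textbook form (our notation, not the paper's):

```latex
% First-order Hamilton's principle and the Euler-Lagrange equation:
\delta S = \delta \int_{t_1}^{t_2} L(q,\dot q,t)\,dt = 0
\;\Longrightarrow\;
\frac{\partial L}{\partial q} - \frac{d}{dt}\frac{\partial L}{\partial \dot q} = 0 .
% For a Lagrangian depending on derivatives up to order n,
% L = L(q, \dot q, \ldots, q^{(n)}), stationarity yields the
% generalized (Euler-Poisson) equations:
\sum_{k=0}^{n} (-1)^k \frac{d^k}{dt^k}\,\frac{\partial L}{\partial q^{(k)}} = 0 .
```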

  12. DETERMINATION OF QMS PRINCIPLE COEFFICIENTS OF SIGNIFICANCE IN ACHIEVING BUSINESS EXCELLENCE

    Directory of Open Access Journals (Sweden)

    Aleksandar Vujovic

    2008-03-01

    Full Text Available This paper grew out of the efforts of researchers at the Center for Quality, Faculty of Mechanical Engineering in Podgorica, to establish a model for improving the performance of business processes based on the quality management system, through comparison with top organizational performances characterized by the criteria, i.e. particularities, of the business excellence model. To that end, the principles of the quality management system were correlated with the criteria of the business excellence model. Weight coefficients were also determined for each principle individually. Thereby the key principles were identified, namely the priorities in achieving business excellence, i.e. the areas (principles) that play the biggest part in achieving business excellence. In this way, preconditions were created to define preventive measures of a certain intensity, depending on the weight coefficients, with the goal of improving the performance of a certified, process-modelled quality management system towards top organizational performance.

  13. A Variation on Uncertainty Principle and Logarithmic Uncertainty Principle for Continuous Quaternion Wavelet Transforms

    Directory of Open Access Journals (Sweden)

    Mawardi Bahri

    2017-01-01

    Full Text Available The continuous quaternion wavelet transform (CQWT) is a generalization of the classical continuous wavelet transform within the context of quaternion algebra. First of all, we show that the directional quaternion Fourier transform (QFT) uncertainty principle can be obtained using the component-wise QFT uncertainty principle. Based on this method, the directional QFT uncertainty principle in the polar coordinate representation is easily derived. We derive a variation on the uncertainty principle related to the QFT. We state that the CQWT of a quaternion function can be written in terms of the QFT and obtain a variation on the uncertainty principle related to the CQWT. Finally, we apply the extended uncertainty principles and properties of the CQWT to establish logarithmic uncertainty principles related to the generalized transform.
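    For context, the classical Heisenberg-Weyl inequality that the QFT/CQWT statements generalize reads, for a mean-centered f and a unitary Fourier transform (standard form; the quaternionic versions replace the transform and add directional and component-wise structure):

```latex
\left(\int_{\mathbb{R}} x^{2}\,\lvert f(x)\rvert^{2}\,dx\right)
\left(\int_{\mathbb{R}} \omega^{2}\,\lvert \hat f(\omega)\rvert^{2}\,d\omega\right)
\;\ge\; \tfrac{1}{4}\,\lVert f\rVert_{2}^{4} .
```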

  14. Thermography. Principles and measurements; Thermographie. Principes et mesure

    Energy Technology Data Exchange (ETDEWEB)

    Pajani, D. [Ecole Centrale de Lyon, 69 - Ecully (France)

    2001-09-01

    Thermography is a technique which allows to obtain the thermal image of a given scene and for a determined spectral domain. Infrared thermography is the most well-known and used technique of thermography, but this article deals with the thermographic measurements in general and for a wider part of the radiation spectrum: 1 - general considerations: terminology, fluxes and temperatures measurement; 2 - radiations (emission and reception), radiative properties of materials: basic notions, simplified radiometer, radiative properties of materials; 3 - thermographic measurements: general considerations, calibration, radiometric measurement situation, from the radiometric measurement to the thermometric measurement and to the thermographic measurement, measurement uncertainties. (J.S.)

  15. Biomedical engineering principles

    CERN Document Server

    Ritter, Arthur B; Valdevit, Antonio; Ascione, Alfred N

    2011-01-01

    Introduction: Modeling of Physiological Processes; Cell Physiology and Transport; Principles and Biomedical Applications of Hemodynamics; A Systems Approach to Physiology; The Cardiovascular System; Biomedical Signal Processing; Signal Acquisition and Processing; Techniques for Physiological Signal Processing; Examples of Physiological Signal Processing; Principles of Biomechanics; Practical Applications of Biomechanics; Biomaterials; Principles of Biomedical Capstone Design; Unmet Clinical Needs; Entrepreneurship: Reasons why Most Good Designs Never Get to Market; An Engineering Solution in Search of a Biomedical Problem

  16. Mechanical spectra of glass-forming liquids. I. Low-frequency bulk and shear moduli of DC704 and 5-PPE measured by piezoceramic transducers

    DEFF Research Database (Denmark)

    Hecksher, Tina; Olsen, Niels Boye; Nelson, Keith Adam

    2013-01-01

    We present dynamic shear and bulk modulus measurements of supercooled tetraphenyl-tetramethyl-trisiloxane (DC704) and 5-phenyl-4-ether over a range of temperatures close to their glass transition. The data are analyzed and compared in terms of time-temperature superposition (TTS), the relaxation ...
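    The TTS analysis mentioned here rests on shifting isothermal modulus curves onto a master curve at a reference temperature with a shift factor a_T. A minimal Python sketch using the WLF form, with its "universal" constants as assumed placeholder values (real materials, including DC704 and 5-PPE, require fitted constants):

```python
import numpy as np

def wlf_shift_factor(T, T_ref, C1=17.44, C2=51.6):
    """Williams-Landel-Ferry horizontal shift factor log10(a_T).

    C1, C2 are the so-called universal constants, used here only as
    illustrative defaults; in practice they are fitted per material.
    """
    return -C1 * (T - T_ref) / (C2 + (T - T_ref))

def build_master_curve(freq, modulus, T, T_ref):
    """Map an isothermal modulus curve onto reduced frequency f * a_T."""
    log_aT = wlf_shift_factor(T, T_ref)
    return freq * 10.0 ** log_aT, modulus

# A curve measured 10 K above T_ref has a_T < 1, so it shifts to
# lower reduced frequency on the master curve.
f = np.logspace(-2, 2, 5)        # Hz
G = np.linspace(1e6, 1e9, 5)     # Pa (placeholder modulus data)
f_red, _ = build_master_curve(f, G, T=220.0, T_ref=210.0)
print(f_red[0] < f[0])  # True
```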

  17. A study of radiative properties of fractal soot aggregates using the superposition T-matrix method

    International Nuclear Information System (INIS)

    Li Liu; Mishchenko, Michael I.; Patrick Arnott, W.

    2008-01-01

    We employ the numerically exact superposition T-matrix method to perform extensive computations of scattering and absorption properties of soot aggregates with varying state of compactness and size. The fractal dimension, D_f, is used to quantify the geometrical mass dispersion of the clusters. The optical properties of soot aggregates for a given fractal dimension are complex functions of the refractive index of the material m, the number of monomers N_S, and the monomer radius a. It is shown that for smaller values of a, the absorption cross section tends to be relatively constant when D_f < 2 but increases with increasing D_f when D_f > 2. However, a systematic reduction in light absorption with D_f is observed for clusters with sufficiently large N_S, m, and a. The scattering cross section and single-scattering albedo increase monotonically as fractals evolve from chain-like to more densely packed morphologies, which is a strong manifestation of the increasing importance of scattering interaction among spherules. Overall, the results for soot fractals differ profoundly from those calculated for the respective volume-equivalent soot spheres as well as for the respective external mixtures of soot monomers under the assumption that there are no electromagnetic interactions between the monomers. The climate-research implications of our results are discussed
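    The quantities D_f, N_S, and a in this abstract are tied together by the standard fractal scaling law for aggregates, N_S = k_f (R_g / a)^{D_f}. A small sketch (the prefactor k_f = 1.3 is a typical soot-like value assumed here, not taken from the paper):

```python
def radius_of_gyration(n_s, a, d_f, k_f=1.3):
    """Gyration radius R_g from the fractal scaling law
    N_s = k_f * (R_g / a)**D_f.

    k_f = 1.3 is an assumed illustrative prefactor for soot-like clusters.
    """
    return a * (n_s / k_f) ** (1.0 / d_f)

# Same monomer count and radius: a compact cluster (D_f = 2.8) is
# geometrically smaller than a chain-like one (D_f = 1.8).
a = 20e-9  # 20 nm monomer radius
print(radius_of_gyration(100, a, 1.8) > radius_of_gyration(100, a, 2.8))  # True
```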

  18. The genetic difference principle.

    Science.gov (United States)

    Farrelly, Colin

    2004-01-01

    In the newly emerging debates about genetics and justice three distinct principles have begun to emerge concerning what the distributive aim of genetic interventions should be. These principles are: genetic equality, a genetic decent minimum, and the genetic difference principle. In this paper, I examine the rationale of each of these principles and argue that genetic equality and a genetic decent minimum are ill-equipped to tackle what I call the currency problem and the problem of weight. The genetic difference principle is the most promising of the three principles and I develop this principle so that it takes seriously the concerns of just health care and distributive justice in general. Given the strains on public funds for other important social programmes, the costs of pursuing genetic interventions and the nature of genetic interventions, I conclude that a more lax interpretation of the genetic difference principle is appropriate. This interpretation stipulates that genetic inequalities should be arranged so that they are to the greatest reasonable benefit of the least advantaged. Such a proposal is consistent with prioritarianism and provides some practical guidance for non-ideal societies--that is, societies that do not have the endless amount of resources needed to satisfy every requirement of justice.

  19. Principles of planar near-field antenna measurements

    CERN Document Server

    Gregson, Stuart; Parini, Clive

    2007-01-01

    This single volume provides a comprehensive introduction and explanation of both the theory and practice of 'Planar Near-Field Antenna Measurement' from its basic postulates and assumptions, to the intricacies of its deployment in complex and demanding measurement scenarios.

  20. The Fourier transform of tubular densities

    KAUST Repository

    Prior, C B; Goriely, A

    2012-01-01

    molecules. We consider tubes of both finite radii and unrestricted radius. When there is overlap of the tube structure the net density is calculated using the superposition principle. The Fourier transform of this density is composed of two expressions, one

  1. Analytical prediction model for non-symmetric fatigue crack growth in Fibre Metal Laminates

    NARCIS (Netherlands)

    Wang, W.; Rans, C.D.; Benedictus, R.

    2017-01-01

    This paper proposes an analytical model for predicting the non-symmetric crack growth and accompanying delamination growth in FMLs. The general approach of this model applies Linear Elastic Fracture Mechanics, the principle of superposition, and displacement compatibility based on the

  2. Meta-Analyses of Seven of NIDA’s Principles of Drug Addiction Treatment

    Science.gov (United States)

    Pearson, Frank S.; Prendergast, Michael L.; Podus, Deborah; Vazan, Peter; Greenwell, Lisa; Hamilton, Zachary

    2011-01-01

    Seven of the 13 Principles of Drug Addiction Treatment disseminated by the National Institute on Drug Abuse (NIDA) were meta-analyzed as part of the Evidence-based Principles of Treatment (EPT) project. By averaging outcomes over the diverse programs included in EPT, we found that five of the NIDA principles examined are supported: matching treatment to the client’s needs; attending to the multiple needs of clients; behavioral counseling interventions; treatment plan reassessment; and counseling to reduce risk of HIV. Two of the NIDA principles are not supported: remaining in treatment for an adequate period of time and frequency of testing for drug use. These weak effects could be the result of the principles being stated too generally to apply to the diverse interventions and programs that exist or of unmeasured moderator variables being confounded with the moderators that measured the principles. Meta-analysis should be a standard tool for developing principles of effective treatment for substance use disorders. PMID:22119178

  3. Core principles of evolutionary medicine

    Science.gov (United States)

    Grunspan, Daniel Z; Nesse, Randolph M; Barnes, M Elizabeth; Brownell, Sara E

    2018-01-01

    Abstract Background and objectives Evolutionary medicine is a rapidly growing field that uses the principles of evolutionary biology to better understand, prevent and treat disease, and that uses studies of disease to advance basic knowledge in evolutionary biology. Over-arching principles of evolutionary medicine have been described in publications, but our study is the first to systematically elicit core principles from a diverse panel of experts in evolutionary medicine. These principles should be useful to advance recent recommendations made by The Association of American Medical Colleges and the Howard Hughes Medical Institute to make evolutionary thinking a core competency for pre-medical education. Methodology The Delphi method was used to elicit and validate a list of core principles for evolutionary medicine. The study included four surveys administered in sequence to 56 expert panelists. The initial open-ended survey created a list of possible core principles; the three subsequent surveys winnowed the list and assessed the accuracy and importance of each principle. Results Fourteen core principles elicited at least 80% of the panelists to agree or strongly agree that they were important core principles for evolutionary medicine. These principles overlapped with concepts discussed in other articles on key concepts in evolutionary medicine. Conclusions and implications This set of core principles will be helpful for researchers and instructors in evolutionary medicine. We recommend that evolutionary medicine instructors use the list of core principles to construct learning goals. Evolutionary medicine is a young field, so this list of core principles will likely change as the field develops further. PMID:29493660

  4. The Maximum Entropy Principle and the Modern Portfolio Theory

    Directory of Open Access Journals (Sweden)

    Ailton Cassetari

    2003-12-01

    Full Text Available In this work, a capital allocation methodology based on the Principle of Maximum Entropy was developed. Shannon's entropy is used as the measure; questions concerning the Modern Portfolio Theory are also discussed. In particular, the methodology is tested by a systematic comparison to: (1) the mean-variance (Markowitz) approach and (2) the mean-VaR approach (capital allocation based on the Value at Risk concept). In principle, such comparisons show the plausibility and effectiveness of the developed method.
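    A minimal sketch of maximum-entropy capital allocation: under linear constraints (budget and, optionally, a target expected return), the entropy-maximizing weights take the standard Gibbs/softmax form. This closed form is a textbook result for entropy maximization, not necessarily the paper's exact procedure; the returns and the tilt parameter theta are made-up illustrative values.

```python
import numpy as np

def entropy(w):
    """Shannon entropy of a weight vector."""
    w = np.asarray(w, dtype=float)
    return -np.sum(w * np.log(w))

def max_entropy_weights(expected_returns, theta=0.0):
    """Gibbs/softmax allocation w_i proportional to exp(theta * r_i).

    theta = 0 recovers the unconstrained maximum-entropy (uniform)
    portfolio; theta > 0 tilts weights toward higher expected returns
    while keeping entropy as high as the return constraint allows.
    """
    r = np.asarray(expected_returns, dtype=float)
    z = np.exp(theta * r)
    return z / z.sum()

r = np.array([0.05, 0.08, 0.12])
w0 = max_entropy_weights(r)             # uniform: maximal entropy
w1 = max_entropy_weights(r, theta=10)   # tilted toward higher returns
print(np.round(w0, 3))                  # [0.333 0.333 0.333]
print(entropy(w0) > entropy(w1))        # True: any tilt costs entropy
```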

  5. Energy conservation and the principle of equivalence

    International Nuclear Information System (INIS)

    Haugan, M.P.

    1979-01-01

    If the equivalence principle is violated, then observers performing local experiments can detect effects due to their position in an external gravitational environment (preferred-location effects) or can detect effects due to their velocity through some preferred frame (preferred frame effects). We show that the principle of energy conservation implies a quantitative connection between such effects and structure-dependence of the gravitational acceleration of test bodies (violation of the Weak Equivalence Principle). We analyze this connection within a general theoretical framework that encompasses both non-gravitational local experiments and test bodies as well as gravitational experiments and test bodies, and we use it to discuss specific experimental tests of the equivalence principle, including non-gravitational tests such as gravitational redshift experiments, Eoetvoes experiments, the Hughes-Drever experiment, and the Turner-Hill experiment, and gravitational tests such as the lunar-laser-ranging ''Eoetvoes'' experiment, and measurements of anisotropies and variations in the gravitational constant. This framework is illustrated by analyses within two theoretical formalisms for studying gravitational theories: the PPN formalism, which deals with the motion of gravitating bodies within metric theories of gravity, and the THepsilonμ formalism that deals with the motion of charged particles within all metric theories and a broad class of non-metric theories of gravity

  6. The Basic Principles and Methods of the System Approach to Compression of Telemetry Data

    Science.gov (United States)

    Levenets, A. V.

    2018-01-01

    The task of compressing measurement data remains urgent for information-measurement systems. In this paper, the basic principles necessary for designing highly effective systems for the compression of telemetric information are offered. The basis of the offered principles is the representation of a telemetric frame as a whole information space in which existing correlations can be found. The methods of data transformation and the compression algorithms realizing the offered principles are described. The compression ratio of the offered algorithm is about 1.8 times higher than that of a classic algorithm. The results of the research thus show the good prospects of these methods and algorithms.
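    The idea of exploiting correlation within a telemetry frame can be sketched as follows: delta-encode slowly varying channels along the time axis so that inter-sample correlation becomes long runs of near-zero values, then apply a generic entropy coder. This is an illustrative sketch of the principle only, not the paper's algorithm; the channel data are made up.

```python
import zlib
import numpy as np

def compress_frame(frame):
    """Delta-encode an int16 telemetry frame along time, then zlib-code it."""
    deltas = np.diff(frame, axis=1, prepend=0).astype(np.int16)
    return zlib.compress(deltas.tobytes())

def decompress_frame(blob, n_channels):
    """Lossless inverse: cumulative sum restores the original samples."""
    deltas = np.frombuffer(zlib.decompress(blob), dtype=np.int16)
    return np.cumsum(deltas.reshape(n_channels, -1), axis=1)

# Two slowly drifting sensor channels, 1000 samples each (4000 raw bytes).
t = np.arange(1000)
frame = np.vstack([1000 + t // 50, 2000 + t // 50]).astype(np.int16)
blob = compress_frame(frame)
print(len(blob) < frame.nbytes)                           # True
print(np.array_equal(decompress_frame(blob, 2), frame))   # True: lossless
```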

  7. Principle and application of ion mobility spectroscopy

    International Nuclear Information System (INIS)

    Adler, J.; Arnold, G.; Baumbach, J.I.; Doering, H.R.

    1990-01-01

    An outline is given of the principle and application of ion mobility spectroscopy to the selective measurement of single substances in a substance matrix, including advantages and disadvantages of ion mobility detectors for solving analytical problems in the fields of environment, microelectronics, medicine, and military engineering. (orig.) [de

  8. The certainty principle (review)

    OpenAIRE

    Arbatsky, D. A.

    2006-01-01

    The certainty principle (2005) made it possible to conceptualize, on more fundamental grounds, both the Heisenberg uncertainty principle (1927) and the Mandelshtam-Tamm relation (1945). In this review I give a detailed explanation and discussion of the certainty principle, oriented to all physicists, both theorists and experimenters.

  9. ANALYSIS OF FUZZY QUEUES: PARAMETRIC PROGRAMMING APPROACH BASED ON RANDOMNESS - FUZZINESS CONSISTENCY PRINCIPLE

    OpenAIRE

    Dhruba Das; Hemanta K. Baruah

    2015-01-01

    In this article, based on Zadeh's extension principle, we apply the parametric programming approach to construct the membership functions of the performance measures when the interarrival time and the service time are fuzzy numbers, following Baruah's Randomness-Fuzziness Consistency Principle. The Randomness-Fuzziness Consistency Principle leads to defining a normal law of fuzziness using two different laws of randomness. In this article, two fuzzy queues FM...
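    The extension-principle/parametric-programming approach can be sketched for the simplest case, a fuzzy M/M/1 queue: at each alpha level the fuzzy rates become intervals, and the bounds of a performance measure come from a pair of parametric programs. Here the measure L = rho/(1 - rho) is monotone in both rates, so the programs reduce to evaluating extreme combinations. The triangular numbers are made-up illustrative values; the paper treats more general FM queues.

```python
def tri_cut(a, b, c, alpha):
    """Alpha-cut [lo, hi] of a triangular fuzzy number (a, b, c)."""
    return a + alpha * (b - a), c - alpha * (c - b)

def fuzzy_mean_customers(lam_tri, mu_tri, alpha):
    """Alpha-cut of L = rho/(1-rho), rho = lam/mu, for a fuzzy M/M/1 queue.

    L is increasing in lam and decreasing in mu, so the interval
    endpoints come from the extreme rate combinations.
    """
    lam_lo, lam_hi = tri_cut(*lam_tri, alpha)
    mu_lo, mu_hi = tri_cut(*mu_tri, alpha)
    rho_lo, rho_hi = lam_lo / mu_hi, lam_hi / mu_lo
    return rho_lo / (1 - rho_lo), rho_hi / (1 - rho_hi)

# Arrival rate "about 3", service rate "about 6" (per unit time).
lo, hi = fuzzy_mean_customers((2, 3, 4), (5, 6, 7), alpha=1.0)
print(round(lo, 2), round(hi, 2))  # 1.0 1.0  (alpha = 1 collapses to crisp 3/6)
```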

  10. Fusion research principles

    CERN Document Server

    Dolan, Thomas James

    2013-01-01

    Fusion Research, Volume I: Principles provides a general description of the methods and problems of fusion research. The book contains three main parts: Principles, Experiments, and Technology. The Principles part describes the conditions necessary for a fusion reaction, as well as the fundamentals of plasma confinement, heating, and diagnostics. The Experiments part details about forty plasma confinement schemes and experiments. The last part explores various engineering problems associated with reactor design, vacuum and magnet systems, materials, plasma purity, fueling, blankets, neutronics

  11. Database principles programming performance

    CERN Document Server

    O'Neil, Patrick

    2014-01-01

    Database: Principles Programming Performance provides an introduction to the fundamental principles of database systems. This book focuses on database programming and the relationships between principles, programming, and performance.Organized into 10 chapters, this book begins with an overview of database design principles and presents a comprehensive introduction to the concepts used by a DBA. This text then provides grounding in many abstract concepts of the relational model. Other chapters introduce SQL, describing its capabilities and covering the statements and functions of the programmi

  12. Insights into the ammonia synthesis from first-principles

    DEFF Research Database (Denmark)

    Hellmann, A.; Honkala, Johanna Karoliina; Remediakis, Ioannis

    2006-01-01

    A new set of measurements is used to further test a recently published first-principles model for the ammonia (NH3) synthesis on an unpromoted Ru-based catalyst. A direct comparison shows an overall good agreement in NH3 productivity between the model and the experiment. In addition, macro-properties, such as apparent activation energies and reaction orders, are calculated from the first-principles model. Our analysis shows that the reaction order of N2 is unity under all considered conditions, whereas the reaction orders of H2 and NH3 depend on reaction conditions. (c) 2006 Elsevier B.V. All rights reserved.

  13. NUMERICAL ANALYSIS OF MATHEMATICAL MODELS OF THE FACTUAL CONTRIBUTION DISTRIBUTION IN ASYMMETRY AND DEVIATION OF VOLTAGE AT THE COMMON COUPLING POINTS OF ENERGY SUPPLY SYSTEMS

    Directory of Open Access Journals (Sweden)

    Yu.L. Sayenko

    2016-05-01

    Full Text Available Purpose. To perform numerical analysis of the distribution of the factual contributions of line sources of distortion to the voltage distortion at the point of common coupling, based on the principles of superposition and exclusion. Methodology. Numerical analysis was performed on the results of simulating the steady-state operation of a power supply system with seven electricity consumers. Results. The mathematical models for determining the factual contribution of line sources of distortion to the voltage distortion at the point of common coupling, based on the principles of superposition and exclusion, are equivalent. To assess the degree of participation of each source of distortion in the voltage distortion at the point of common coupling, and to distribute financial compensation to the injured party among all sources of distortion, a one-dimensional criterion based on the scalar product of vectors was developed. Groups of distortion sources belonging to a single energy-market entity need not be accounted individually: their total factual contribution is determined as the residual of the factual contribution over all sources of distortion. Originality. The simulation of the power supply system's operating mode was carried out in the phase-components space, taking into account the distributed character of the distortion sources. Practical value. The results of this research can be used to develop methods and tools for distributed measurement and analytics systems for power quality assessment.
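    The scalar-product criterion mentioned in the abstract can be sketched as follows: by superposition, the total distortion phasor at the point of common coupling is the sum of the individual source contributions, and each source's share is the projection of its phasor onto the total, normalized by the squared magnitude of the total. Shares are signed and sum to 1 by construction. The phasor values below are made up for illustration.

```python
import numpy as np

def contribution_shares(contributions):
    """Signed participation shares of distortion sources at the PCC.

    share_i = Re(V_i * conj(V_total)) / |V_total|**2, where
    V_total = sum(V_i) by the superposition principle.
    """
    V = np.asarray(contributions, dtype=complex)
    V_tot = V.sum()
    return (V * np.conj(V_tot)).real / abs(V_tot) ** 2

shares = contribution_shares([2 + 1j, 1 - 0.5j, 0.5 + 0.2j])
print(np.round(shares, 3))       # [0.604 0.247 0.148]
print(round(shares.sum(), 6))    # 1.0
```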

  14. Basic principles for measurement of intramuscular pressure

    Science.gov (United States)

    Hargens, A. R.; Ballard, R. E.

    1995-01-01

    We review historical and methodological approaches to measurements of intramuscular pressure (IMP) in humans. These techniques provide valuable measures of muscle tone and activity as well as diagnostic criteria for evaluation of exertional compartment syndrome. Although the wick and catheter techniques provide accurate measurements of IMP at rest, their value for exercise studies and diagnosis of exertional compartment syndrome is limited because of low frequency response and hydrostatic (static and inertial) pressure artifacts. Presently, most information on diagnosis of exertional compartment syndromes during dynamic exercise is available using the Myopress catheter. However, future research and clinical diagnosis using IMP can be optimized by the use of a miniature transducer-tipped catheter such as the Millar Mikro-tip.

  15. Cryogenic test of the equivalence principle

    International Nuclear Information System (INIS)

    Worden, P.W. Jr.

    1976-01-01

    The weak equivalence principle is the hypothesis that the ratio of inertial and passive gravitational mass is the same for all bodies. A greatly improved test of this principle is possible in an orbiting satellite. The most promising experiments for an orbital test are adaptations of the Galilean free-fall experiment and the Eotvos balance. Sensitivity to gravity gradient noise, both from the earth and from the spacecraft, defines a limit to the sensitivity in each case. This limit is generally much worse for an Eotvos balance than for a properly designed free-fall experiment. The difference is related to the difficulty of making a balance sufficiently isoinertial. Cryogenic technology is desirable to take full advantage of the potential sensitivity, but tides in the liquid helium refrigerant may produce a gravity gradient that seriously degrades the ultimate sensitivity. The Eotvos balance appears to have a limiting sensitivity to relative difference of rate of fall of about 2 x 10^-14 in orbit. The free-fall experiment is limited by helium tide to about 10^-15; if the tide can be controlled or eliminated the limit may approach 10^-18. Other limitations to equivalence principle experiments are discussed. An experimental test of some of the concepts involved in the orbital free-fall experiment is continuing. The experiment consists of comparing the motions of test masses levitated in a superconducting magnetic bearing, and is itself a sensitive test of the equivalence principle. At present the levitation magnets, position monitors and control coils have been tested and major noise sources identified. A measurement of the equivalence principle is postponed pending development of a system for digitizing data. The experiment and preliminary results are described
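    The sensitivity figures quoted above compare the relative difference in the rate of fall of two test bodies, conventionally expressed through the dimensionless Eotvos ratio (standard definition, not specific to this record; the numbers below are illustrative):

```python
def eotvos_parameter(a1, a2):
    """Eotvos ratio eta = 2|a1 - a2| / (a1 + a2) comparing the free-fall
    accelerations of two test bodies; zero if the weak equivalence
    principle holds exactly."""
    return 2.0 * abs(a1 - a2) / (a1 + a2)

# Two bodies whose measured accelerations differ fractionally by 1e-15,
# i.e. at the quoted limit of the helium-tide-limited free-fall experiment.
g = 9.80665
print(f"{eotvos_parameter(g, g * (1 + 1e-15)):.1e}")
```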

  16. Relative and center-of-mass motion in the attractive Bose-Hubbard model

    DEFF Research Database (Denmark)

    Sørensen, Ole Søe; Gammelmark, Søren; Mølmer, Klaus

    2012-01-01

    We present first-principles numerical calculations for few-particle solutions of the attractive Bose-Hubbard model with periodic boundary conditions. We show that the low-energy many-body states found by numerical diagonalization can be written as translational superposition states of compact...

  17. On the general procedure for modelling complex ecological systems

    International Nuclear Information System (INIS)

    He Shanyu.

    1987-12-01

    In this paper, the principle of a general procedure for modelling complex ecological systems, the Adaptive Superposition Procedure (ASP), is briefly stated. The results of applying ASP in a national project for ecological regionalization are also described. (author). 3 refs

  18. Principles to promote physician satisfaction and work-life balance.

    Science.gov (United States)

    Shanafelt, Tait D; West, Colin P; Poland, Gregory A; LaRusso, Nicolas F; Menaker, Ronald; Bahn, Rebecca S

    2008-12-01

    Substantial evidence suggests that difficulty balancing their personal and professional life is a major contributor to physician distress. Limited evidence suggests that the mission and policies of health care organizations may relate to physician satisfaction. In this article, we describe principles to promote professional satisfaction and work-life integration developed by the Mayo Clinic department of medicine. These principles can be used to measure and align policies. It is hoped they will serve as a model that can be used by other health care organizations.

  19. LEGAL PRINCIPLES IN FUNCTION AND PERFORMANCE OF BOT CONTRACT

    Directory of Open Access Journals (Sweden)

    Reifon Cristabella Eventia

    2017-09-01

    Full Text Available Build, Operate and Transfer (BOT) represents a long-term partnership of the government and the private sector. In a BOT project, either the government or a private party identifies a need for a development project. The philosophy behind the BOT contract stems from growing infrastructural needs in all areas: with a limited budget, governments must still carry out the duties and functions of state governance, and the BOT concept offers a solution through partnership with the private sector. The government then grants a concession to the private sector to build the project and operate it for a fixed period of years; after the period has ended, the facility is transferred to the government. Through BOT, the state is able to gain an asset without government spending while maintaining a measure of regulatory control over the project. BOT permits the government to use private-sector funds to finance public infrastructure development. The main issues elaborated in this article are the legal principles in the formation of a BOT contract and the legal principles in its performance. There are two results: first, in the formation of a BOT contract, the principles of partnership and transparency should be emphasized; second, in the performance of a BOT contract, the principles of risk management and proportionality should be clearly stated in the rules and legal norms.

  20. Nonlinear optomechanical measurement of mechanical motion

    DEFF Research Database (Denmark)

    Brawley, G.A.; Vanner, M R; Larsen, Peter Emil

    2016-01-01

    Precision measurement of nonlinear observables is an important goal in all facets of quantum optics. This allows measurement-based non-classical state preparation, which has been applied to great success in various physical systems, and provides a route for quantum information processing with otherwise linear interactions. In cavity optomechanics much progress has been made using linear interactions and measurement, but observation of nonlinear mechanical degrees-of-freedom remains outstanding. Here we report the observation of displacement-squared thermal motion of a micro-mechanical resonator by exploiting the intrinsic nonlinearity of the radiation-pressure interaction. Using this measurement we generate bimodal mechanical states of motion with separations and feature sizes well below 100 pm. Future improvements to this approach will allow the preparation of quantum superposition states, which can ...

  1. Throughput Maximization for Cognitive Radio Networks Using Active Cooperation and Superposition Coding

    KAUST Repository

    Hamza, Doha R.

    2015-02-13

    We propose a three-message superposition coding scheme in a cognitive radio relay network exploiting active cooperation between primary and secondary users. The primary user is motivated to cooperate by the substantial benefits it can reap from this access scenario. Specifically, the time resource is split into three transmission phases: the first two phases are dedicated to primary communication, while the third phase is for the secondary's transmission. We formulate two throughput maximization problems for the secondary network, subject to primary user rate constraints and per-node power constraints, with respect to the time durations of primary transmission and the transmit powers of the primary and secondary users. The first throughput maximization problem assumes a partial power constraint such that the secondary power dedicated to primary cooperation, i.e. for the first two communication phases, is fixed a priori. In the second throughput maximization problem, a total power constraint is assumed over the three phases of communication. The two problems are difficult to solve analytically when the two relaying channel gains differ and both strictly exceed the direct-link channel gain. However, mathematically tractable lower-bound and upper-bound solutions can be attained for both problems. For both problems, using only the lower-bound solution, we demonstrate significant throughput gains for both the primary and the secondary users through this active cooperation scheme. We find that most of the throughput gains come from minimizing the second-phase transmission time, since the secondary nodes assist the primary communication during this phase. Finally, we demonstrate the superiority of our proposed scheme compared to a number of reference schemes that include best relay selection, dual-hop routing, and an interference channel model.
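The three-phase time split under a primary rate constraint can be sketched numerically. This is a hedged toy model, not the authors' optimization: it assumes unit-bandwidth Shannon rates, a simple decode-and-forward second phase, and made-up channel gains, powers and primary rate target, then grid-searches the phase durations.

```python
import numpy as np

# Illustrative gains: primary->dest, primary->relay, relay->dest, secondary link
g_pd, g_pr, g_rd, g_s = 0.3, 1.5, 2.0, 1.0    # assumed values
P_p, P_s = 1.0, 1.0                            # per-node transmit powers (assumed)
R_p_min = 0.4                                  # primary rate constraint, bit/s/Hz (assumed)

def rate(snr):                                 # Shannon rate for unit bandwidth
    return np.log2(1.0 + snr)

best = (0.0, None)
for t1 in np.linspace(0.01, 0.98, 98):
    for t2 in np.linspace(0.01, 0.99 - t1, 50):
        t3 = 1.0 - t1 - t2
        # phase 1: primary broadcasts; phase 2: secondary relays for the primary
        R_p = min(t1 * rate(g_pr * P_p),       # relay must be able to decode
                  t1 * rate(g_pd * P_p) + t2 * rate(g_rd * P_s))
        if R_p < R_p_min:
            continue                           # primary rate constraint violated
        R_s = t3 * rate(g_s * P_s)             # phase 3: secondary's own traffic
        if R_s > best[0]:
            best = (R_s, (t1, t2, t3))

print(best)                                    # best secondary rate and time split
```

As the abstract suggests, the search pushes the cooperative second phase to the minimum that still satisfies the primary constraint, freeing time for the secondary's own transmission.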

  2. Throughput Maximization for Cognitive Radio Networks Using Active Cooperation and Superposition Coding

    KAUST Repository

    Hamza, Doha R.; Park, Kihong; Alouini, Mohamed-Slim; Aissa, Sonia

    2015-01-01

    We propose a three-message superposition coding scheme in a cognitive radio relay network exploiting active cooperation between primary and secondary users. The primary user is motivated to cooperate by the substantial benefits it can reap from this access scenario. Specifically, the time resource is split into three transmission phases: the first two phases are dedicated to primary communication, while the third phase is for the secondary's transmission. We formulate two throughput maximization problems for the secondary network, subject to primary user rate constraints and per-node power constraints, with respect to the time durations of primary transmission and the transmit powers of the primary and secondary users. The first throughput maximization problem assumes a partial power constraint such that the secondary power dedicated to primary cooperation, i.e. for the first two communication phases, is fixed a priori. In the second throughput maximization problem, a total power constraint is assumed over the three phases of communication. The two problems are difficult to solve analytically when the two relaying channel gains differ and both strictly exceed the direct-link channel gain. However, mathematically tractable lower-bound and upper-bound solutions can be attained for both problems. For both problems, using only the lower-bound solution, we demonstrate significant throughput gains for both the primary and the secondary users through this active cooperation scheme. We find that most of the throughput gains come from minimizing the second-phase transmission time, since the secondary nodes assist the primary communication during this phase. Finally, we demonstrate the superiority of our proposed scheme compared to a number of reference schemes that include best relay selection, dual-hop routing, and an interference channel model.

  3. Ehrenfest's principle in quantum gravity

    International Nuclear Information System (INIS)

    Greensite, J.

    1991-01-01

    The Ehrenfest principle, which requires expectation values to follow the classical equations of motion, is proposed as (part of) a definition of the time variable in canonical quantum gravity. This principle selects a time direction in superspace, and provides a conserved, positive-definite probability measure. An exact solution of the Ehrenfest condition is obtained, which leads to constant-time surfaces in superspace generated by the operator d/dτ = (Λθ)·Λ, where Λ is the gradient operator in superspace, and θ is the phase of the Wheeler-DeWitt wavefunction Φ; the constant-time surfaces are determined by this solution up to a choice of initial t=0 surface. This result holds throughout superspace, including classically forbidden regions and in the neighborhood of caustics; it also leads to ordinary quantum field theory and classical gravity in regions of superspace where the phase satisfies |d_t θ| >> |d_t ln(Φ*Φ)| and (d_t θ)² >> |d_t² θ|. (orig.)

  4. Radiation protection principles

    International Nuclear Information System (INIS)

    Ismail Bahari

    2007-01-01

    The presentation outlines the aspects of radiation protection principles. It discusses the following subjects: radiation hazards and risk; the objectives of radiation protection; and the three principles of the system - justification of practice, optimization of protection and safety, and dose limits.

  5. The principle(s) of co-existence in Europe: Social, economic and legal avenues

    NARCIS (Netherlands)

    Purnhagen, K.; Wesseler, J.H.H.

    2015-01-01

    The European policy of coexistence follows a number of well-established social, economic and legal principles. Applying these principles in practice has resulted in a complex “rag rug” of coexistence policies in Europe. This rag rug makes enforcement of these principles difficult, at times even

  6. The Principle and the Method of the Radioimmunoassay

    Energy Technology Data Exchange (ETDEWEB)

    Kurata, Kunio [Dainabot Radioisotope Laboratory, Tokyo (Japan)

    1970-03-15

    The measurement of the amounts of various hormones in the body is one of the most important subjects in the field of endocrinology. The results obtained are not only helpful for basic studies, such as studies of the function of each organ or of their interrelationships, but also valuable for routine clinical diagnosis. For most peptide or protein hormones, chemical measurement is very difficult. Biological methods are mostly used for this purpose but are not always satisfactory. The use of labeled hormone in combination with its antiserum led to a highly specific and sensitive measurement of the hormone in human plasma. This method is based essentially on the principle of isotope dilution and is called radioimmunoassay. From the nuclear-medicine perspective, this is now one of the major fields of in vitro assay with radioisotopes. With this method, less than 1 microunit of insulin per ml of serum can be detected. In this lecture, I would like to talk about the principle and the method of the radioimmunoassay.
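The competitive-binding readout behind radioimmunoassay can be illustrated with a small standard-curve calculation. The standard concentrations, bound fractions, and the logit-log linearization below are illustrative assumptions, not numbers from the lecture:

```python
import numpy as np

# Labeled and unlabeled hormone compete for a fixed amount of antibody, so the
# bound fraction of the radioactive tracer (B/B0) falls as the unlabeled
# concentration rises.  Unknown samples are read off a standard curve; a
# logit-log fit is a common RIA linearization (all numbers are hypothetical).
std_conc = np.array([1.0, 2.0, 5.0, 10.0, 20.0, 50.0])      # microU/ml standards
b_over_b0 = np.array([0.90, 0.80, 0.60, 0.45, 0.30, 0.15])  # measured bound fractions

logit = np.log(b_over_b0 / (1.0 - b_over_b0))
slope, intercept = np.polyfit(np.log(std_conc), logit, 1)    # straight-line fit

def read_concentration(bb0):
    """Invert the fitted logit-log line for an unknown sample."""
    y = np.log(bb0 / (1.0 - bb0))
    return np.exp((y - intercept) / slope)

# A sample binding 50% of the tracer lies between the 5 and 10 microU/ml standards.
print(round(read_concentration(0.50), 2))
```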

  7. Can quantum probes satisfy the weak equivalence principle?

    International Nuclear Information System (INIS)

    Seveso, Luigi; Paris, Matteo G.A.

    2017-01-01

    We address the question of whether quantum probes in a gravitational field can be considered test particles obeying the weak equivalence principle (WEP). A formulation of the WEP is proposed which applies also in the quantum regime, while maintaining the physical content of its classical counterpart. This formulation requires that the introduction of a gravitational field not modify the Fisher information about the mass of a freely-falling probe that is extractable through measurements of its position. We discover that, while in a uniform field quantum probes satisfy our formulation of the WEP exactly, gravity gradients can encode nontrivial information about the particle's mass in its wavefunction, leading to violations of the WEP. - Highlights: • Can quantum probes under gravity be approximated as test-bodies? • A formulation of the weak equivalence principle for quantum probes is proposed. • Quantum probes are found to violate it as a matter of principle.
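The Fisher-information formulation can be illustrated with a toy numerical check. The sketch below is not the authors' derivation: it assumes a freely spreading Gaussian wavepacket in illustrative units. The wavepacket's width depends on the mass m, so position data carry Fisher information about m; a uniform field merely translates the distribution by the mass-independent -gt²/2 and so, in line with the formulation above, leaves that information unchanged.

```python
import numpy as np

hbar, t, sigma0, g = 1.0, 1.0, 1.0, 9.8        # illustrative units (assumed)

def fisher_info(m, x0):
    """I(m) = integral (d_m p)^2 / p dx for the Gaussian p(x|m) centred at x0."""
    x = np.linspace(x0 - 15, x0 + 15, 30001)
    dm = 1e-5                                  # finite-difference step in mass

    def p(mm):
        # free-particle Gaussian spreading: sigma(t) depends on the mass
        s = sigma0 * np.sqrt(1 + (hbar * t / (2 * mm * sigma0**2))**2)
        return np.exp(-(x - x0)**2 / (2 * s**2)) / (s * np.sqrt(2 * np.pi))

    dp = (p(m + dm) - p(m - dm)) / (2 * dm)
    return np.trapz(dp**2 / p(m), x)

I_free = fisher_info(1.0, 0.0)                 # no field
I_uniform = fisher_info(1.0, -0.5 * g * t**2)  # uniform field: mean shifted only
print(I_free, I_uniform)                       # equal up to numerical error
```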

  8. Can quantum probes satisfy the weak equivalence principle?

    Energy Technology Data Exchange (ETDEWEB)

    Seveso, Luigi, E-mail: luigi.seveso@unimi.it [Quantum Technology Lab, Dipartimento di Fisica, Università degli Studi di Milano, I-20133 Milano (Italy); Paris, Matteo G.A. [Quantum Technology Lab, Dipartimento di Fisica, Università degli Studi di Milano, I-20133 Milano (Italy); INFN, Sezione di Milano, I-20133 Milano (Italy)

    2017-05-15

    We address the question of whether quantum probes in a gravitational field can be considered test particles obeying the weak equivalence principle (WEP). A formulation of the WEP is proposed which applies also in the quantum regime, while maintaining the physical content of its classical counterpart. This formulation requires that the introduction of a gravitational field not modify the Fisher information about the mass of a freely-falling probe that is extractable through measurements of its position. We discover that, while in a uniform field quantum probes satisfy our formulation of the WEP exactly, gravity gradients can encode nontrivial information about the particle's mass in its wavefunction, leading to violations of the WEP. - Highlights: • Can quantum probes under gravity be approximated as test-bodies? • A formulation of the weak equivalence principle for quantum probes is proposed. • Quantum probes are found to violate it as a matter of principle.

  9. [Bioethics of principles].

    Science.gov (United States)

    Pérez-Soba Díez del Corral, Juan José

    2008-01-01

    Bioethics emerges around the technological problems of intervening in human life, and with it the problem of determining moral limits, because these seem exterior to the practice itself. The bioethics of principles takes its rationality from teleological thinking and from autonomism. These divergences manifest the epistemological fragility and the great difficulty of "moral" thinking. This is evident in the determination of the principle of autonomy, which does not have the ethical content of Kant's proposal. We need a new ethical rationality, with a new reflection on principles that emerge from basic ethical experiences.

  10. Principles of dynamics

    CERN Document Server

    Hill, Rodney

    2013-01-01

    Principles of Dynamics presents classical dynamics primarily as an exemplar of scientific theory and method. This book is divided into three major parts concerned with gravitational theory of planetary systems; general principles of the foundations of mechanics; and general motion of a rigid body. Some of the specific topics covered are Keplerian Laws of Planetary Motion; gravitational potential and potential energy; and fields of axisymmetric bodies. The principles of work and energy, fictitious body-forces, and inertial mass are also looked into. Other specific topics examined are kinematics

  11. 32 CFR 776.19 - Principles.

    Science.gov (United States)

    2010-07-01

    32 National Defense 5 2010-07-01 false Principles. 776.19 Section 776.19 National... Professional Conduct § 776.19 Principles. The Rules of this subpart are based on the following principles... exists, this subpart should be interpreted consistent with these general principles. (a) Covered...

  12. The Principle of General Tovariance

    Science.gov (United States)

    Heunen, C.; Landsman, N. P.; Spitters, B.

    2008-06-01

    We tentatively propose two guiding principles for the construction of theories of physics, which should be satisfied by a possible future theory of quantum gravity. These principles are inspired by those that led Einstein to his theory of general relativity, viz. his principle of general covariance and his equivalence principle, as well as by the two mysterious dogmas of Bohr's interpretation of quantum mechanics, i.e. his doctrine of classical concepts and his principle of complementarity. An appropriate mathematical language for combining these ideas is topos theory, a framework earlier proposed for physics by Isham and collaborators. Our principle of general tovariance states that any mathematical structure appearing in the laws of physics must be definable in an arbitrary topos (with natural numbers object) and must be preserved under so-called geometric morphisms. This principle identifies geometric logic as the mathematical language of physics and restricts the constructions and theorems to those valid in intuitionism: neither Aristotle's principle of the excluded third nor Zermelo's Axiom of Choice may be invoked. Subsequently, our equivalence principle states that any algebra of observables (initially defined in the topos Sets) is empirically equivalent to a commutative one in some other topos.

  13. Extremum principles for irreversible processes

    International Nuclear Information System (INIS)

    Hillert, M.; Agren, J.

    2006-01-01

    Hamilton's extremum principle is a powerful mathematical tool in classical mechanics. Onsager's extremum principle may play a similar role in irreversible thermodynamics and may also become a valuable tool. His principle may formally be regarded as a principle of maximum rate of entropy production but does not have a clear physical interpretation. Prigogine's principle of minimum rate of entropy production has a physical interpretation when it applies, but is not strictly valid except for a very special case

  14. 36 CFR 219.2 - Principles.

    Science.gov (United States)

    2010-07-01

    36 Parks, Forests, and Public Property 2 2010-07-01 false Principles. 219.2 Section 219... Forest System Land and Resource Management Planning Purpose and Principles § 219.2 Principles. The planning regulations in this subpart are based on the following principles: (a) The first priority for...

  15. The principle of locality: Effectiveness, fate, and challenges

    International Nuclear Information System (INIS)

    Doplicher, Sergio

    2010-01-01

    The special theory of relativity and quantum mechanics merge in the key principle of quantum field theory, the principle of locality. We review some examples of its 'unreasonable effectiveness' in giving rise to most of the conceptual and structural frame of quantum field theory, especially in the absence of massless particles. This effectiveness shows up best in the formulation of quantum field theory in terms of operator algebras of local observables; this formulation is successful in digging out the roots of global gauge invariance, through the analysis of superselection structure and statistics, in the structure of the local observable quantities alone, at least for purely massive theories; but so far it seems unfit to cope with the principle of local gauge invariance. This problem emerges also if one attempts to figure out the fate of the principle of locality in theories describing the gravitational forces between elementary particles as well. An approach based on the need to keep an operational meaning, in terms of localization of events, of the notion of space-time, shows that, in the small, the latter must lose any meaning as a classical pseudo-Riemannian manifold, locally based on Minkowski space, but should acquire a quantum structure at the Planck scale. We review the geometry of a basic model of quantum space-time and some attempts to formulate interaction of quantum fields on quantum space-time. The principle of locality is necessarily lost at the Planck scale, and it is a crucial open problem to unravel a replacement in such theories which is equally mathematically sharp, namely, a principle where the general theory of relativity and quantum mechanics merge, which reduces to the principle of locality at larger scales. Besides exploring its fate, many challenges for the principle of locality remain; among them, the analysis of superselection structure and statistics also in the presence of massless particles, and to give a precise mathematical

  16. Monte Carlo evaluation of the convolution/superposition algorithm of Hi-Art tomotherapy in heterogeneous phantoms and clinical cases

    International Nuclear Information System (INIS)

    Sterpin, E.; Salvat, F.; Olivera, G.; Vynckier, S.

    2009-01-01

    The reliability of the convolution/superposition (C/S) algorithm of the Hi-Art tomotherapy system is evaluated by using the Monte Carlo model TomoPen, which has already been validated for homogeneous phantoms. The study was performed in three stages. First, measurements with EBT Gafchromic film for a 1.25 × 2.5 cm² field in a heterogeneous phantom consisting of two slabs of polystyrene separated with Styrofoam were compared to simulation results from TomoPen. The excellent agreement found in this comparison justifies the use of TomoPen as the reference for the remaining parts of this work. Second, to allow analysis and interpretation of the results in clinical cases, dose distributions calculated with TomoPen and C/S were compared for a similar phantom geometry, with multiple slabs of various densities. Even in conditions of lack of lateral electronic equilibrium, overall good agreement was obtained between C/S and TomoPen results, with deviations within 3%/2 mm, showing that the C/S algorithm accounts for modifications in secondary electron transport due to the presence of a low-density medium. Finally, calculations were performed with TomoPen and C/S of dose distributions in various clinical cases, from large bilateral head and neck tumors to small lung tumors with diameter of <3 cm. To ensure a "fair" comparison, identical dose calculation grid and dose-volume histogram calculator were used. Very good agreement was obtained for most of the cases, with no significant differences between the DVHs obtained from both calculations. However, deviations of up to 4% for the dose received by 95% of the target volume were found for the small lung tumors. Therefore, the approximations in the C/S algorithm slightly influence the accuracy in small lung tumors even though the C/S algorithm of the tomotherapy system shows very good overall behavior.
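A deviation criterion such as 3%/2 mm is commonly evaluated with a gamma index. The 1-D sketch below only illustrates the metric on synthetic profiles (the study's comparisons are 3-D); the function name and the Gaussian profiles are our assumptions, not the authors' code.

```python
import numpy as np

def gamma_1d(x, dose_ref, dose_eval, dd=0.03, dta=2.0):
    """Gamma value at each reference point for a dd (global) / dta (mm) criterion."""
    d_norm = dd * dose_ref.max()               # global dose criterion
    gam = np.empty_like(dose_ref)
    for i, (xi, di) in enumerate(zip(x, dose_ref)):
        dist2 = ((x - xi) / dta) ** 2          # normalized distance term
        dose2 = ((dose_eval - di) / d_norm) ** 2  # normalized dose-difference term
        gam[i] = np.sqrt((dist2 + dose2).min())
    return gam

x = np.linspace(0, 100, 501)                   # positions in mm
ref = np.exp(-((x - 50) / 15) ** 2)            # synthetic reference profile
ev = np.exp(-((x - 50.5) / 15) ** 2)           # evaluated profile with a 0.5 mm shift
passing = (gamma_1d(x, ref, ev) <= 1.0).mean() # fraction of points with gamma <= 1
print(passing)
```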

  17. Culture of safety. Indicators of culture of safety. Stage of culture of safety. Optimization of radiating protection. Principle of precaution. Principle ALARA. Procedure ALARA

    International Nuclear Information System (INIS)

    Mursa, E.

    2006-01-01

    Object of research: the theory and practice of optimization of radiation protection according to the recommendations of the international organizations, realization of the ALARA principle, and maintenance of a safety culture (SC) at nuclear power plants. The purpose of the work is to consider the general aspects of realization of the ALARA principle, the conceptual bases of safety culture as a management principle, and the practice of their introduction at nuclear power plants. The work has the character of an expert report, in which the following is presented. Recommendation materials of the IAEA and other international organizations have been assembled, systematized and analyzed. Definitions, characteristics and universal SC features, as well as indicators as a problem of parameters and quantitative SC measurement, are described in detail. The ALARA principles - the precaution principle, non-acceptance of zero risk, the choice of the ALARA principle, and the model of acceptable radiation risk - are described. The methodology for assessing the level of safety culture, and the practical realization of the ALARA principle in a particular organization, is shown with a practical example. A general assessment of SC at the national level in the Republic of Moldova has been carried out. Taking into consideration that safety culture policies are at present applied only to nuclear power plants, this paper attempts to apply the safety culture methodology to radiological facilities (the Oncological Institute of the Republic of Moldova and Special Objects No. 5101 and 5102 for long-term storage of radioactive waste). (authors)

  18. Factor investing based on Musharakah principle

    Science.gov (United States)

    Simon, Shahril; Omar, Mohd; Lazam, Norazliani Md; Amin, Mohd Nazrul Mohd

    2015-10-01

    Shariah stock investing has become a widely discussed topic in the financial industry as part of today's investment strategy. The strategy primarily applies market-capitalization allocations. However, some researchers have argued that market-capitalization weighting is inherently flawed and have advocated replacing market-capitalization allocations with factor allocations. In this paper, we discuss the rationale for factor investing based on the Musharakah principle. The essential elements, or factors, of the Musharakah principle - such as business sector, management capability, profitability growth and capital efficiency - are embedded in the Shariah-compliant stock. We then transform these factors into an indexation for better analysis and performance measurement. The investment universe for this research covers Malaysian stocks for the period January 2009 to December 2013. We find that these factor indexes have historically earned excess returns over market-capitalization-weighted indexes and experienced higher Sharpe ratios.
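The factor-weighting idea can be sketched as follows. The factor scores, returns and risk-free rate below are synthetic stand-ins, not the study's Malaysian data; the point is only the mechanics of scoring, weighting and comparing Sharpe ratios.

```python
import numpy as np

rng = np.random.default_rng(0)
n_stocks, n_months = 30, 60
# composite factor score per stock (profitability growth, capital efficiency, ...)
factor_score = rng.uniform(0.5, 1.5, n_stocks)
market_cap = rng.lognormal(0, 1, n_stocks)
# synthetic monthly returns, mildly tilted toward high-score stocks
returns = 0.005 * factor_score + 0.04 * rng.standard_normal((n_months, n_stocks))

def sharpe(w, rf=0.0025):
    """Annualized Sharpe ratio of a portfolio with (unnormalized) weights w."""
    port = returns @ (w / w.sum())             # monthly portfolio returns
    return (port.mean() - rf) / port.std(ddof=1) * np.sqrt(12)

# factor-weighted index vs. market-capitalization-weighted index
print(sharpe(factor_score), sharpe(market_cap))
```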

  19. A new adaptive light beam focusing principle for scanning light stimulation systems.

    Science.gov (United States)

    Bitzer, L A; Meseth, M; Benson, N; Schmechel, R

    2013-02-01

    In this article a novel principle to achieve optimal focusing conditions, or rather the smallest possible beam diameter, for scanning light stimulation systems is presented. It is based on the following methodology. First, a reference point on a camera sensor is introduced where optimal focusing conditions are adjusted, and the distance between the light-focusing optic and the reference point is determined using a laser displacement sensor. In a second step, this displacement sensor is used to map the topography of the sample under investigation. Finally, the actual measurement is conducted, using optimal focusing conditions at each measurement point on the sample surface, determined from the height difference between the camera sensor and the sample topography. This principle is independent of the measurement values, the optical or electrical properties of the sample, the light source used, or the selected wavelength. Furthermore, the samples can be tilted, rough, bent, or of different surface materials. The principle is implemented here in an optical beam induced current system, but it can in principle be applied to any other scanning light stimulation system. Measurements demonstrating its operation are shown, using a polycrystalline silicon solar cell.
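The three-step methodology above can be condensed into a short sketch. The reference distance `z_ref`, the topography model, and the commented stage call are hypothetical stand-ins for the actual hardware, not the article's implementation.

```python
import numpy as np

# 1) distance from the focusing optic to the camera reference point, measured
#    once with the laser displacement sensor (hypothetical value, in mm)
z_ref = 12.500

def focus_offset(topography_mm, x_idx, y_idx):
    """Per-point stage correction keeping the beam waist on the local surface."""
    return topography_mm[y_idx, x_idx] - z_ref

# 2) mapped topography of a tilted, slightly bent sample (toy model)
y, x = np.mgrid[0:50, 0:50]
topo = 12.500 + 0.002 * x + 1e-4 * (y - 25) ** 2

# 3) raster scan applying the local correction at every measurement point
for yi in range(0, 50, 10):
    for xi in range(0, 50, 10):
        dz = focus_offset(topo, xi, yi)
        # stage.move_z(dz)  # hypothetical motion-control call
print(round(focus_offset(topo, 49, 0), 4))
```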

  20. Protein structure similarity from principle component correlation analysis

    Directory of Open Access Journals (Sweden)

    Chou James

    2006-01-01

    Abstract. Background: Owing to rapid expansion of protein structure databases in recent years, methods of structure comparison are becoming increasingly effective and important in revealing novel information on functional properties of proteins and their roles in the grand scheme of evolutionary biology. Currently, the structural similarity between two proteins is measured by the root-mean-square deviation (RMSD) of their best-superimposed atomic coordinates. RMSD is the golden rule of measuring structural similarity when the structures are nearly identical; it, however, fails to detect the higher-order topological similarities in proteins evolved into different shapes. We propose new algorithms for extracting geometrical invariants of proteins that can be effectively used to identify homologous protein structures or topologies in order to quantify both close and remote structural similarities. Results: We measure structural similarity between proteins by correlating the principle components of their secondary structure interaction matrix. In our approach, the Principle Component Correlation (PCC) analysis, a symmetric interaction matrix for a protein structure is constructed with relationship parameters between secondary elements that can take the form of distance, orientation, or other relevant structural invariants. When using a distance-based construction, in the presence or absence of encoded N-to-C terminal sense, there are strong correlations between the principle components of interaction matrices of structurally or topologically similar proteins. Conclusion: The PCC method is extensively tested for protein structures that belong to the same topological class but are significantly different by the RMSD measure. The PCC analysis can also differentiate proteins having similar shapes but different topological arrangements. Additionally, we demonstrate that when using two independently defined interaction matrices, comparison of their maximum
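A minimal sketch of the PCC idea on toy coordinates might look as follows. This is our illustration, not the authors' implementation: a pairwise-distance matrix stands in for the secondary-structure interaction matrix, its leading eigenvectors play the role of the principle components, and similarity is scored by their correlation.

```python
import numpy as np

def leading_pcs(coords, k=3):
    """Leading eigenvectors (by |eigenvalue|) of the pairwise-distance matrix."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    vals, vecs = np.linalg.eigh(d)             # d is symmetric
    order = np.argsort(-np.abs(vals))
    return vecs[:, order[:k]]

def pcc_score(a, b, k=3):
    """Mean absolute correlation between matched principal components."""
    pa, pb = leading_pcs(a, k), leading_pcs(b, k)
    return float(np.mean([abs(np.corrcoef(pa[:, i], pb[:, i])[0, 1])
                          for i in range(k)]))

rng = np.random.default_rng(1)
s = np.linspace(0, 12, 40)
helix = np.c_[np.cos(s), np.sin(s), np.linspace(0, 20, 40)]   # toy helical fold
similar = helix + 0.05 * rng.standard_normal(helix.shape)     # perturbed copy
different = 5.0 * rng.standard_normal((40, 3))                # unrelated cloud

print(pcc_score(helix, similar), pcc_score(helix, different))
```

As in the abstract's claim, the score stays high for structurally similar toy folds and drops for unrelated geometry.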

  1. The inconstant "principle of constancy".

    Science.gov (United States)

    Kanzer, M

    1983-01-01

    A review of the principle of constancy, as it appeared in Freud's writings, shows that it was inspired by his clinical observations, first with Breuer in the field of cathartic therapy and then through experiences in the early use of psychoanalysis. The recognition that memories repressed in the unconscious created increasing tension, and that this was relieved with dischargelike phenomena when the unconscious was made conscious, was the basis for his claim to originality in this area. The two principles of "neuronic inertia" that Freud expounded in the Project (1895) are found to offer the key to the ambiguous definition of the principle of constancy he was to offer in later years. The "original" principle, which sought the complete discharge of energy (or elimination of stimuli), became the forerunner of the death drive; the "extended" principle achieved balances that were relatively constant, but succumbed in the end to complete discharge. This was the predecessor of the life drives. The relation between the constancy and pleasure-unpleasure principles was maintained for twenty-five years largely on an empirical basis, which invoked the concept of psychophysical parallelism between "quantity" and "quality." As the links between the two principles were weakened by clinical experiences attendant upon the growth of ego psychology, a revision of the principle of constancy was suggested, and it was renamed the Nirvana principle. Actually it was shifted from alignment with the "extended" principle of inertia to the original, so that "constancy" was incongruously identified with self-extinction. The former basis for the constancy principle, the extended principle of inertia, became identified with Eros. Only a few commentators seem aware of this radical transformation, which has been overlooked in the Standard Edition of Freud's writings. Physiological biases in the history and conception of the principle of constancy are noted in the Standard Edition. The historical

  2. Routine internal- and external-quality control data in clinical laboratories for estimating measurement and diagnostic uncertainty using GUM principles.

    Science.gov (United States)

    Magnusson, Bertil; Ossowicki, Haakan; Rienitz, Olaf; Theodorsson, Elvar

    2012-05-01

    Healthcare laboratories are increasingly joining into larger laboratory organizations encompassing several physical laboratories. This caters for important new opportunities for re-defining the concept of a 'laboratory' to encompass all laboratories and measurement methods measuring the same measurand for a population of patients. In order to make measurement results comparable, bias should be minimized or eliminated and measurement uncertainty properly evaluated for all methods used for a particular patient population. The measurement uncertainty, as well as the diagnostic uncertainty, can be evaluated from internal and external quality control results using GUM principles. In this paper the uncertainty evaluations are described in detail using only two main components, within-laboratory reproducibility and the uncertainty of the bias component, according to a Nordtest guideline. The evaluation is exemplified for the determination of creatinine in serum for a conglomerate of laboratories, expressed both in absolute units (μmol/L) and in relative terms (%). An expanded measurement uncertainty of 12 μmol/L associated with concentrations of creatinine below 120 μmol/L, and of 10% associated with concentrations above 120 μmol/L, was estimated. The diagnostic uncertainty encompasses both measurement uncertainty and biological variation, and can be estimated for a single value and for a difference. This diagnostic uncertainty for the difference between two samples from the same patient was determined to be 14 μmol/L associated with concentrations of creatinine below 100 μmol/L and 14% associated with concentrations above 100 μmol/L.
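The two-component combination described (within-laboratory reproducibility plus a bias component, per the Nordtest guideline) is simple to compute. The split between the two components below is an assumption chosen to roughly reproduce the reported expanded uncertainty; only the combination rule itself follows GUM practice.

```python
import math

def expanded_uncertainty(u_rw, u_bias, k=2):
    """u_c = sqrt(u_Rw^2 + u_bias^2); U = k * u_c (k=2 for ~95% coverage)."""
    return k * math.sqrt(u_rw**2 + u_bias**2)

# e.g. within-lab reproducibility 4.5 umol/L and bias component 3.9 umol/L
# (an assumed split) combine to roughly the reported U = 12 umol/L regime
U = expanded_uncertainty(4.5, 3.9)
print(round(U, 1))                             # ~11.9 umol/L

# for a difference of two independent results, measurement uncertainty alone
# scales as sqrt(2) * u_single (biological variation would add on top of this)
u_single = 6.0
print(round(math.sqrt(2) * u_single, 1))
```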

  3. Cognitive Ability, Principled Reasoning and Political Tolerance

    DEFF Research Database (Denmark)

    Hebbelstrup Rye Rasmussen, Stig; Nørgaard, Asbjørn Sonne

    Individuals are not equally politically tolerant. To explain why, individual differences in emotions and threat have received much scholarly attention in recent years. However, extant research also shows that psychological dispositions, habitual cognitive styles, ideological orientation and 'principled reasoning' influence political tolerance judgments. The extent to which cognitive ability plays a role has not been entertained, even though the capacity to think abstractly, comprehend complex ideas and apply abstract ideas to concrete situations is inherent to both principled tolerance judgment and cognitive ability. Cognitive ability, we argue and show, adds to the etiology of political tolerance. In Danish and American samples, cognitive ability strongly predicts political tolerance after taking habitual cognitive styles (as measured by personality traits), education, social ideology, and feelings...

  4. A survey of variational principles

    International Nuclear Information System (INIS)

    Lewins, J.D.

    1993-01-01

    The survey of variational principles has ranged widely from its starting point in the Lagrange multiplier to optimisation principles. In an age of digital computation, these classic methods can be adapted to improve such calculations. We emphasize particularly the advantage of basing finite element methods on variational principles, especially if, as maximum and minimum principles, these can provide bounds and hence estimates of accuracy. The non-symmetric (and hence stationary rather than extremum principles) are seen however to play a significant role in optimisation theory. (Orig./A.B.)

  5. The principle of finiteness – a guideline for physical laws

    International Nuclear Information System (INIS)

    Sternlieb, Abraham

    2013-01-01

    I propose a new principle in physics: the principle of finiteness (FP). It stems from the definition of physics as a science that deals with measurable, dimensional physical quantities. Since measurement results, including their errors, are always finite, FP postulates that the mathematical formulation of legitimate laws of physics should prevent exactly zero or infinite solutions. I propose finiteness as a postulate, as opposed to a statement whose validity has to be corroborated by, or derived theoretically or experimentally from, other facts, theories or principles. Some consequences of FP are discussed, first in general and then more specifically in the fields of special relativity, quantum mechanics, and quantum gravity. The corrected Lorentz transformations include an additional translation term depending on the minimum length epsilon. The relativistic gamma is replaced by a corrected gamma that is finite for v=c. To comply with FP, physical laws should include the relevant extremum finite values in their mathematical formulation. An important prediction of FP is that there is a maximum attainable relativistic mass/energy, the same for all subatomic particles, meaning that there is a maximum theoretical value for cosmic-ray energy. The generalized uncertainty principle required by quantum gravity is actually a necessary consequence of FP at the Planck scale. Therefore, FP may possibly contribute to the axiomatic foundation of quantum gravity.

  6. Marine environmental protection, sustainability and the precautionary principle

    International Nuclear Information System (INIS)

    Johnston, P.; Santillo, D.; Stringer, R.

    1999-01-01

    The global oceans provide a diverse array of ecosystem services which cannot be replaced by technological means and are therefore of potentially infinite value. While valuation of ecosystem services is a useful qualitative metric, unresolved uncertainties limit its application in the regulatory and policy domain. This paper evaluates current human activities in terms of their conformity to four principles of sustainability. Violation of any one of the principles indicates that a given activity is unsustainable and that controlling measures are required. Examples of human uses of the oceans can be evaluated using these principles, taking into account also the transgenerational obligations of the current global population. When three major issues concerning the oceans (land-based activities, fisheries and climatic change) are examined in this way, they may easily be shown to be globally unsustainable. It is argued that effective environmental protection can best be achieved through the application of a precautionary approach. (author)

  7. [The precautionary principle: advantages and risks].

    Science.gov (United States)

    Tubiana, M

    2001-04-01

    The extension of the precautionary principle to the field of healthcare is the social response to two demands of the population: improved health safety and the inclusion of an informed public in the decision-making process. The necessary balance between cost (treatment-induced risk) and benefit (therapeutic effect) underlies all healthcare decisions. An underestimation or an overestimation of cost, i.e. risk, is equally harmful in public healthcare. A vaccination should be prescribed when its beneficial effect outweighs its inevitable risk. Mandatory vaccination, such as in the case of the Hepatitis B virus, is a health policy requiring some courage, because those who benefit will never be aware of its positive effect while those who are victims of the risk could resort to litigation. Defense against such accusations requires an accurate assessment of risk and benefit, which underlines the importance of expertise. Even within the framework of the precautionary principle, it is impossible to act without knowledge, or at least a plausible estimation, of expected effects. Recent affairs (blood contamination, transmission of spongiform encephalopathy by growth hormone, and the new variant of Creutzfeldt-Jakob disease) illustrate that in such cases the precautionary principle would have had limited impact and that only when enough knowledge was available could effective action be taken. Likewise, in current debates concerning the possible risks of electromagnetic fields, cellular phones and radon, research efforts must be given priority. The general public understands intuitively the concept of cost and benefit. For example, the possible health risks of oral contraceptives and hormone replacement therapy were not ignored, but the public has judged that their advantages justify the risk. Estimating risk and benefit and finding a balance between risk and preventive measures could help avoid the main drawbacks of the precautionary principle, i.e. inaction and refusal of

  8. Optimal quantum state estimation with use of the no-signaling principle

    International Nuclear Information System (INIS)

    Han, Yeong-Deok; Bae, Joonwoo; Wang Xiangbin; Hwang, Won-Young

    2010-01-01

    A simple derivation of the optimal state estimation of a quantum bit was obtained by using the no-signaling principle. In particular, the no-signaling principle determines a unique form of the guessing probability independent of figures of merit, such as the fidelity or information gain. This proves that the optimal estimation for a quantum bit can be achieved by the same measurement for almost all figures of merit.
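    For mean fidelity as the figure of merit, the known optimum for estimating an unknown pure qubit from a single copy is 2/3. The Monte Carlo sketch below (my own illustration, not the paper's no-signaling derivation) reproduces that value for a randomly oriented projective measurement, where the measurer guesses the outcome direction on the Bloch sphere:

```python
import math
import random

rng = random.Random(7)

def random_unit_vector():
    # Uniform direction on the Bloch sphere.
    z = rng.uniform(-1.0, 1.0)
    phi = rng.uniform(0.0, 2.0 * math.pi)
    r = math.sqrt(1.0 - z * z)
    return (r * math.cos(phi), r * math.sin(phi), z)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

trials = 200_000
total = 0.0
for _ in range(trials):
    n = random_unit_vector()   # unknown pure state (Bloch vector)
    m = random_unit_vector()   # randomly oriented projective measurement axis
    c = dot(n, m)
    # Outcome +m occurs with probability (1 + c)/2; guessing +/-m accordingly
    # yields expected fidelity (1 + c^2)/2 for this state/measurement pair.
    total += 0.5 * (1.0 + c * c)

mean_fidelity = total / trials
print(mean_fidelity)  # close to the optimal single-copy value 2/3
```

Averaging (1 + (n·m)²)/2 over a uniformly random axis gives (1 + 1/3)/2 = 2/3, matching the abstract's claim that a single measurement strategy is optimal across figures of merit.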

  9. Downlink Cooperative Broadcast Transmission Based on Superposition Coding in a Relaying System for Future Wireless Sensor Networks.

    Science.gov (United States)

    Liu, Yang; Han, Guangjie; Shi, Sulong; Li, Zhengquan

    2018-06-20

    This study investigates the superiority of cooperative broadcast transmission over traditional orthogonal schemes when applied in a downlink relaying broadcast channel (RBC). Two proposed cooperative broadcast transmission protocols, one with an amplify-and-forward (AF) relay, and the other with a repetition-based decode-and-forward (DF) relay, are investigated. By utilizing superposition coding (SupC), the source and the relay transmit the private user messages simultaneously instead of sequentially as in traditional orthogonal schemes, which means the channel resources are reused and an increased channel degree of freedom is available to each user; hence the half-duplex penalty of relaying is alleviated. To facilitate a performance evaluation, theoretical outage probability expressions of the two broadcast transmission schemes are developed, based on which we investigate the minimum total power consumption of each scheme for a given traffic requirement by numerical simulation. The results provide details on the overall system performance and useful insights into the essential characteristics of cooperative broadcast transmission in RBCs. It is observed that better overall outage performance and considerable power gains can be obtained by utilizing cooperative broadcast transmission compared to traditional orthogonal schemes.
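    The outage-probability machinery behind such evaluations can be sketched for the simplest building block, a single Rayleigh-faded link, where the closed form P_out = 1 − exp(−(2^R − 1)/γ̄) is easily checked by Monte Carlo. This is an illustration of the general method, not the paper's RBC expressions; the rate and SNR values are arbitrary:

```python
import math
import random

rng = random.Random(42)

def outage_closed_form(rate, mean_snr):
    # P(log2(1 + snr * |h|^2) < rate) for Rayleigh fading, with |h|^2 ~ Exp(1).
    return 1.0 - math.exp(-(2.0 ** rate - 1.0) / mean_snr)

def outage_monte_carlo(rate, mean_snr, trials=100_000):
    threshold = 2.0 ** rate - 1.0
    outages = sum(1 for _ in range(trials)
                  if mean_snr * rng.expovariate(1.0) < threshold)
    return outages / trials

rate, mean_snr = 1.0, 10.0   # 1 bit/s/Hz over a link with 10 dB average SNR
analytic = outage_closed_form(rate, mean_snr)
simulated = outage_monte_carlo(rate, mean_snr)
print(analytic, simulated)   # both near 0.095
```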

  10. Electrical contacts principles and applications

    CERN Document Server

    Slade, Paul G

    2013-01-01

    Covering the theory, application, and testing of contact materials, Electrical Contacts: Principles and Applications, Second Edition introduces a thorough discussion on making electric contact and contact interface conduction; presents a general outline of, and measurement techniques for, important corrosion mechanisms; considers the results of contact wear when plug-in connections are made and broken; investigates the effect of thin noble metal plating on electronic connections; and relates crucial considerations for making high- and low-power contact joints. It examines contact use in switch

  11. Variational principles in physics

    CERN Document Server

    Basdevant, Jean-Louis

    2007-01-01

    Optimization under constraints is an essential part of everyday life. Indeed, we routinely solve problems by striking a balance between contradictory interests, individual desires and material contingencies. This notion of equilibrium was dear to thinkers of the Enlightenment, as illustrated by Montesquieu’s famous formulation: "In all magistracies, the greatness of the power must be compensated by the brevity of the duration." Astonishingly, natural laws are guided by a similar principle. Variational principles have proven to be surprisingly fertile. For example, Fermat used variational methods to demonstrate that light follows the fastest route from one point to another, an idea which came to be known as Fermat’s principle, a cornerstone of geometrical optics. Variational Principles in Physics explains variational principles and charts their use throughout modern physics. The heart of the book is devoted to the analytical mechanics of Lagrange and Hamilton, the basic tools of any physicist. Prof. Basdev...
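    Fermat's principle lends itself to a quick numerical check: minimizing the travel time of a ray crossing an interface between two media reproduces Snell's law. The geometry below (heights, separation, indices) is arbitrary and chosen only for illustration:

```python
import math

n1, n2 = 1.0, 1.5          # refractive indices (e.g. air -> glass)
h1, h2, d = 1.0, 1.0, 2.0  # source height, target depth, horizontal separation

def travel_time(x):
    # Optical path length (proportional to travel time) when the ray
    # crosses the interface at horizontal position x.
    return n1 * math.hypot(x, h1) + n2 * math.hypot(d - x, h2)

# travel_time is convex in x, so a ternary search finds the minimum.
lo, hi = 0.0, d
for _ in range(200):
    m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
    if travel_time(m1) < travel_time(m2):
        hi = m2
    else:
        lo = m1
x = (lo + hi) / 2

sin1 = x / math.hypot(x, h1)          # sin of incidence angle
sin2 = (d - x) / math.hypot(d - x, h2)  # sin of refraction angle
ratio = sin1 / sin2
print(ratio)  # ~ n2 / n1 = 1.5, i.e. Snell's law
```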

  12. Heisenberg-limited interferometry with pair coherent states and parity measurements

    International Nuclear Information System (INIS)

    Gerry, Christopher C.; Mimih, Jihane

    2010-01-01

    After reviewing parity-measurement-based interferometry with twin Fock states, which allows for supersensitivity (Heisenberg-limited) and super-resolution, we consider interferometry with two different superpositions of twin Fock states, namely, two-mode squeezed vacuum states and pair coherent states. This study is motivated by the experimental challenge of producing twin Fock states on opposite sides of a beam splitter. We find that input two-mode squeezed vacuum states, while allowing for Heisenberg-limited sensitivity, do not yield super-resolution, whereas both are possible with input pair coherent states.
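    The supersensitivity and super-resolution at stake can be illustrated with the simplest parity-based example, an N-photon N00N state, for which the output parity signal is cos(Nφ): fringes narrow N-fold (super-resolution), and error propagation gives Δφ = 1/N (the Heisenberg limit) instead of the shot-noise 1/√N. This is a textbook stand-in, not the twin-Fock or pair-coherent inputs analyzed in the paper:

```python
import math

def parity_signal(phi, n):
    # Expected parity at the interferometer output for an n-photon N00N state:
    # <Pi>(phi) = cos(n * phi); a classical signal goes as cos(phi).
    return math.cos(n * phi)

def phase_uncertainty(phi, n, eps=1e-6):
    # Error propagation: dphi = sqrt(1 - <Pi>^2) / |d<Pi>/dphi|.
    signal = parity_signal(phi, n)
    slope = (parity_signal(phi + eps, n) - parity_signal(phi - eps, n)) / (2 * eps)
    return math.sqrt(1.0 - signal ** 2) / abs(slope)

n = 10
dphi = phase_uncertainty(0.05, n)
print(dphi)  # ~ 1/n, i.e. Heisenberg-limited
```

For cos(nφ) the variance term sin(nφ) cancels against the slope n·sin(nφ), so Δφ = 1/n exactly, independent of the working point.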

  13. Quantitative measurement of the orbital angular momentum density of light

    CSIR Research Space (South Africa)

    Dudley, Angela L

    2012-03-01

    Full Text Available ...of the azimuthal mode index, n, on LCD1 is equivalent to n on LCD2. If the reader wishes to orientate the experimental setup differently, such that the two SLMs have the same orientation (i.e., are not mirror images of each other), the complex conjugate... measurement, is separated into two parts: (1) the generation of the optical field and (2) the measurement of the OAM density, which is achieved by performing a modal decomposition of the optical field. A. Symmetric Superposition of Two Bessel Beams...
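    The modal decomposition referred to in part (2) amounts to projecting the field onto azimuthal basis modes exp(inφ); the squared overlap coefficients give the OAM spectrum. A minimal sketch (with a hypothetical field, not the paper's experimental data) for a symmetric superposition of two azimuthal modes, of the kind carried by two superposed Bessel beams:

```python
import cmath
import math

samples = 256
phis = [2 * math.pi * k / samples for k in range(samples)]

# Hypothetical field on a ring: symmetric superposition of azimuthal
# modes n = +3 and n = -3.
field = [cmath.exp(3j * p) + cmath.exp(-3j * p) for p in phis]

def modal_power(n):
    # Discretized overlap integral c_n = (1/2pi) * Int E(phi) exp(-i n phi) dphi.
    c = sum(e * cmath.exp(-1j * n * p) for e, p in zip(field, phis)) / samples
    return abs(c) ** 2

spectrum = {n: modal_power(n) for n in range(-5, 6)}
print(spectrum[3], spectrum[-3], spectrum[0])  # power only in n = +/-3
```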

  14. Theoretical calculation on ICI reduction using digital coherent superposition of optical OFDM subcarrier pairs in the presence of laser phase noise.

    Science.gov (United States)

    Yi, Xingwen; Xu, Bo; Zhang, Jing; Lin, Yun; Qiu, Kun

    2014-12-15

    Digital coherent superposition (DCS) of optical OFDM subcarrier pairs with Hermitian symmetry can reduce the inter-carrier-interference (ICI) noise resulting from phase noise. In this paper, we show two different implementations of DCS-OFDM that have the same performance in the presence of laser phase noise. We complete the theoretical calculation of ICI reduction using the model of pure Wiener phase noise. By Taylor expansion of the ICI, we show that the ICI power is cancelled to the second order by DCS. The fourth-order term is further derived and is determined only by the ratio of laser linewidth to OFDM subcarrier symbol rate, which can greatly simplify the system design. Finally, we verify our theoretical calculations in simulations and use the analytical results to predict the system performance. DCS-OFDM is expected to be beneficial to certain optical fiber transmissions.
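    The order of the cancellation can be seen in a one-subcarrier caricature: a phase error φ rotates a symbol by e^{jφ}, which deviates from 1 at first order in φ, whereas coherently superposing the Hermitian-symmetric pair leaves cos(φ), which deviates from 1 only at second order. This is a deliberate simplification of the paper's full Wiener-phase-noise ICI analysis:

```python
import cmath
import math

def single_error(phi):
    # Residual distortion on one subcarrier: |e^{j phi} - 1| ~ phi (first order).
    return abs(cmath.exp(1j * phi) - 1.0)

def dcs_error(phi):
    # Superposition of the conjugate pair: (e^{j phi} + e^{-j phi}) / 2 = cos(phi),
    # so |cos(phi) - 1| ~ phi^2 / 2 (second order).
    return abs(math.cos(phi) - 1.0)

for phi in (0.1, 0.01, 0.001):
    print(phi, single_error(phi), dcs_error(phi))
```

Shrinking φ tenfold shrinks the uncompensated error tenfold but the DCS residual a hundredfold, the first- versus second-order scaling the abstract refers to.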

  15. Internal Displacement, the Guiding Principles on Internal Displacement, the Principles Normative Status, and the Need for their Effective Domestic Implementation in Colombia.

    Directory of Open Access Journals (Sweden)

    Robert K. Goldman

    2010-05-01

    Full Text Available The paper briefly examines the phenomenon of internal displacement worldwide and the genesis of the United Nations’ mandate to deal with this problem. It examines key conclusions of a UN-sponsored study which found that existing international law contained significant gaps and grey areas in terms of meeting the needs of internally displaced persons. It also examines the origins and the content of the Guiding Principles on Internal Displacement and the normative status of these Principles. It suggests that, while not binding as such on states, the Guiding Principles have nonetheless become the most authoritative expression of minimum international standards applicable to the internally displaced and that, based on state practice, many, if not all, of these principles may eventually become part of customary international law. The paper also discusses the need for effective domestic implementation of the Guiding Principles, and examines how governmental authorities, the Constitutional Court and civil society organizations in Colombia, as well as inter-governmental bodies, have responded to the crisis of internal displacement in the country. While noting the adequacy of Colombia’s legislative framework on internal displacement, the paper concludes that the State has not taken the measures required to prevent future displacement or to effectively meet the protection and assistance needs of its displaced citizens.

  16. The principles of dose limitation in radiation protection

    International Nuclear Information System (INIS)

    Kaul, A.

    1988-01-01

    The aim of radiation protection is to protect individuals, their offspring and the population as a whole against harmful effects from ionizing radiation and radioactive substances. Harmful effects may be either somatic, i.e. occurring in the exposed person himself/herself, or hereditary, i.e. occurring in the exposed person's offspring. Successful radiation protection involves (a) protective measures based on the results of research into the biological and biophysical effects of radiation and (b) ensuring that activities necessitating exposure are justified and that the degree of exposure is minimal. This benefit/risk principle ceases to apply if a radiation source is out of control, since the main aim is then to introduce risk limitation measures, provided that these are of positive net benefit to the individual and the population as a whole. This paper discusses the principles of dose limitation as a function of exposure conditions, i.e. controlled or uncontrolled exposure to a source of radiation

  17. Prediction of solid oxide fuel cell cathode activity with first-principles descriptors

    DEFF Research Database (Denmark)

    Lee, Yueh-Lin; Kleis, Jesper; Rossmeisl, Jan

    2011-01-01

    In this work we demonstrate that the experimentally measured area specific resistance and oxygen surface exchange of solid oxide fuel cell cathode perovskites are strongly correlated with the first-principles calculated oxygen p-band center and vacancy formation energy. These quantities are therefore descriptors of catalytic activity that can be used in the first-principles design of new SOFC cathodes.

  18. Engageability: a new sub-principle of the learnability principle in human-computer interaction

    Directory of Open Access Journals (Sweden)

    B Chimbo

    2011-12-01

    Full Text Available The learnability principle relates to improving the usability of software, as well as users’ performance and productivity. A gap has been identified, as the current definition of the principle does not distinguish between users of different ages. To determine the extent of the gap, this article compares the ways in which two user groups, adults and children, learn how to use an unfamiliar software application. In doing this, we bring together the research areas of human-computer interaction (HCI), adult and child learning, learning theories and strategies, usability evaluation and interaction design. A literature survey conducted on learnability and learning processes considered the meaning of learnability of software applications across generations. In an empirical investigation, users aged from 9 to 12 and from 35 to 50 were observed in a usability laboratory while learning to use educational software applications. Insights that emerged from data analysis showed different tactics and approaches that children and adults use when learning unfamiliar software. Eye tracking data was also recorded. Findings indicated that subtle re-interpretation of the learnability principle and its associated sub-principles was required. An additional sub-principle, namely engageability, was proposed to incorporate aspects of learnability that are not covered by the existing sub-principles. Our re-interpretation of the learnability principle and the resulting design recommendations should help designers to fulfill the varying needs of different-aged users, and improve the learnability of their designs. Keywords: Child computer interaction, Design principles, Eye tracking, Generational differences, Human-computer interaction, Learning theories, Learnability, Engageability, Software applications, Usability. Disciplines: Human-Computer Interaction (HCI) Studies, Computer science, Observational Studies

  19. Quantum mechanics and precision measurements

    International Nuclear Information System (INIS)

    Ramsey, N.F.

    1995-01-01

    The accuracies of measurements of almost all fundamental physical constants have increased by factors of about 10000 during the past 60 years. Although some of the improvements are due to greater care, most are due to new techniques based on quantum mechanics. Although the Heisenberg Uncertainty Principle often limits measurement accuracies, in many cases the validity of quantum mechanics makes possible the vastly improved measurement accuracies. Seven quantum features that have a profound influence on the science of measurements are: 1) Existence of discrete quantum states of energy. 2) Energy conservation in transitions between two states. 3) Electromagnetic radiation of frequency ν is quantized with energy hν per quantum. 4) The identity principle. 5) The Heisenberg Uncertainty Principle. 6) Addition of probability amplitudes (not probabilities). 7) Wave and coherent phase phenomena. Of these seven quantum features, only the Heisenberg Uncertainty Principle limits the accuracy of measurements, and its effect is often negligibly small. The other six features make possible much more accurate measurements of quantum systems than with almost all classical systems. These effects are discussed and illustrated

  20. Cognition and Self-Efficacy of Stratigraphy and Geologic Time: Implications for Improving Undergraduate Student Performance in Geological Reasoning

    Science.gov (United States)

    Burton, Erin Peters; Mattietti, G. K.

    2011-01-01

    In general, integration of spatial information can be difficult for students. To study students' spatial thinking and their self-efficacy of interpreting stratigraphic columns, we designed an exercise that asks college-level students to interpret problems on the principles of superposition, original horizontality and lateral continuity, and…

  1. Basic design principles of colorimetric vision systems

    Science.gov (United States)

    Mumzhiu, Alex M.

    1998-10-01

    Color measurement is an important part of overall production quality control in the textile, coating, plastics, food, paper and other industries. The color measurement instruments, such as colorimeters and spectrophotometers, used for production quality control have many limitations. In many applications they cannot be used for a variety of reasons and have to be replaced with human operators. Machine vision has great potential for color measurement. The components for color machine vision systems, such as broadcast-quality 3-CCD cameras, fast and inexpensive PCI frame grabbers, and sophisticated image processing software packages, are available. However, the machine vision industry has only started to approach the color domain. The few color machine vision systems on the market, produced by the largest machine vision manufacturers, have very limited capabilities. A lack of understanding that a vision-based color measurement system can fail if it ignores the basic principles of colorimetry is the main reason for the slow progress of color vision systems. The purpose of this paper is to clarify how color measurement principles have to be applied to vision systems and how the electro-optical design features of colorimeters have to be modified in order to implement them for vision systems. The subject of this presentation far exceeds the limitations of a journal paper, so only the most important aspects will be discussed. An overview of the major areas of application for colorimetric vision systems will be given. Finally, the reasons why some customers are happy with their vision systems and some are not will be analyzed.
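    One basic colorimetric principle a vision system must respect is converting device-dependent RGB into a device-independent space. A minimal sketch of the standard sRGB-to-CIE-XYZ conversion (IEC 61966-2-1 linearization followed by the D65 matrix) shows the kind of step colorimeters perform and naive camera-based systems often omit:

```python
def srgb_to_xyz(r, g, b):
    # Convert an sRGB triple in [0, 1] to CIE XYZ (D65 white point),
    # following the standard IEC 61966-2-1 transfer function and matrix.
    def linearize(u):
        return u / 12.92 if u <= 0.04045 else ((u + 0.055) / 1.055) ** 2.4
    rl, gl, bl = linearize(r), linearize(g), linearize(b)
    x = 0.4124 * rl + 0.3576 * gl + 0.1805 * bl
    y = 0.2126 * rl + 0.7152 * gl + 0.0722 * bl
    z = 0.0193 * rl + 0.1192 * gl + 0.9505 * bl
    return x, y, z

print(srgb_to_xyz(1.0, 1.0, 1.0))  # D65 white: approx (0.9505, 1.0000, 1.0890)
```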

  2. Introductory remote sensing: principles and concepts

    CERN Document Server

    Gibson, Paul

    2013-01-01

    Introduction to Remote Sensing Principles and Concepts provides a comprehensive student introduction to both the theory and application of remote sensing. This textbook:
    * introduces the field of remote sensing and traces its historical development and evolution
    * presents detailed explanations of core remote sensing principles and concepts, providing the theory required for a clear understanding of remotely sensed images
    * describes important remote sensing platforms, including Landsat, SPOT and NOAA
    * examines and illustrates many of the applications of remotely sensed images in various fields.

  3. A survey of variational principles

    International Nuclear Information System (INIS)

    Lewins, J.D.

    1993-01-01

    In this article a survey of variational principles is given. Variational principles play a significant role in mathematical theory, with the emphasis here on the physical aspects. They serve two principal uses: to represent the equations of a system in a succinct way, and to enable a particular computation in the system to be carried out with greater accuracy. The survey of variational principles has ranged widely from its starting point in the Lagrange multiplier to optimisation principles. In an age of digital computation, these classic methods can be adapted to improve such calculations. We emphasize particularly the advantage of basing finite element methods on variational principles. (A.B.)

  4. Mach's principle and rotating universes

    International Nuclear Information System (INIS)

    King, D.H.

    1990-01-01

    It is shown that the Bianchi type IX model universe satisfies the Mach principle. These closed rotating universes were previously thought to be counter-examples to the principle. The Mach principle is satisfied because the angular momentum of the rotating matter is compensated by the effective angular momentum of gravitational waves. A new formulation of the Mach principle is given that is based on the field theory interpretation of general relativity. Every closed universe with 3-sphere topology is shown to satisfy this formulation of the Mach principle. It is shown that the total angular momentum of the matter and gravitational waves in a closed 3-sphere-topology universe is zero

  5. Stochastic theory for classical and quantum mechanical systems

    International Nuclear Information System (INIS)

    Pena, L. de la; Cetto, A.M.

    1975-01-01

    From first principles a theory of stochastic processes in configuration space is formulated. The fundamental equations of the theory are an equation of motion which generalizes Newton's second law and an equation which expresses the condition of conservation of matter. Two types of stochastic motion are possible, both described by the same general equations, but leading in one case to classical Brownian motion behavior and in the other to quantum mechanical behavior. The Schroedinger equation, which is derived with no further assumption, is thus shown to describe a specific stochastic process. It is explicitly shown that only in the quantum mechanical process does the superposition of probability amplitudes give rise to interference phenomena; moreover, the presence of dissipative forces in the Brownian motion equations invalidates the superposition principle. At no point are any special assumptions made concerning the physical nature of the underlying stochastic medium, although some suggestions are discussed in the last section

  6. Information Theoretic Characterization of Physical Theories with Projective State Space

    Science.gov (United States)

    Zaopo, Marco

    2015-08-01

    Probabilistic theories are a natural framework to investigate the foundations of quantum theory and possible alternative or deeper theories. In a generic probabilistic theory, states of a physical system are represented as vectors of outcome probabilities and state spaces are convex cones. In this picture the physics of a given theory is related to the geometric shape of the cone of states. In quantum theory, for instance, the shape of the cone of states corresponds to a projective space over complex numbers. In this paper we investigate geometric constraints on the state space of a generic theory imposed by the following information-theoretic requirements: every non-completely-mixed state of a system is perfectly distinguishable from some other state in a single-shot measurement; the information capacity of physical systems is conserved under making mixtures of states. These assumptions guarantee that a generic physical system satisfies a natural principle asserting that the more a state of the system is mixed, the less information can be stored in the system using that state as logical value. We show that all theories satisfying the above assumptions are such that the shape of their cones of states is that of a projective space over a generic field of numbers. Remarkably, these theories constitute generalizations of quantum theory where the superposition principle holds with coefficients pertaining to a generic field of numbers in place of complex numbers. If the field of numbers is trivial and contains only one element, we obtain classical theory. This result shows that the superposition principle is quite common among probabilistic theories, while its absence gives evidence of either classical theory or an implausible theory.

  7. General Quantum Interference Principle and Duality Computer

    International Nuclear Information System (INIS)

    Long Guilu

    2006-01-01

    In this article, we propose a general principle of quantum interference for quantum systems, and based on this we propose a new type of computing machine, the duality computer, that may outperform in principle both the classical computer and the quantum computer. According to the general principle of quantum interference, the very essence of quantum interference is the interference of the sub-waves of the quantum system itself. A quantum system considered here can be any quantum system: a single microscopic particle, a composite quantum system such as an atom or a molecule, or a loose collection of a few quantum objects such as two independent photons. In the duality computer, the wave of the duality computer is split into several sub-waves and they pass through different routes, where different computing gate operations are performed. These sub-waves are then re-combined to interfere to give the computational results. The quantum computer, however, has only used the particle nature of quantum objects. In a duality computer, it may be possible to find a marked item from an unsorted database using only a single query, and all NP-complete problems may have polynomial algorithms. Two proof-of-principle designs of the duality computer are presented: the giant molecule scheme and the nonlinear quantum optics scheme. We also propose a thought experiment to check the related fundamental issues, such as the measurement efficiency of a partial wave function.

  8. The four variational principles of mechanics

    International Nuclear Information System (INIS)

    Gray, C.G.; Karl, G.; Novikov, V.A.

    1996-01-01

    We argue that there are four basic forms of the variational principles of mechanics: Hamilton's least action principle (HP), the generalized Maupertuis principle (MP), and their two reciprocal principles, RHP and RMP. This set is invariant under reciprocity and Legendre transformations. One of these forms (HP) is in the literature; only special cases of the other three are known. The generalized MP has a weaker constraint compared to the traditional formulation: only the mean energy Ē is kept fixed between virtual paths. This reformulation of MP alleviates several weaknesses of the old version. The reciprocal Maupertuis principle (RMP) is the classical limit of Schroedinger's variational principle of quantum mechanics, and this connection emphasizes the importance of the reciprocity transformation for variational principles. Two unconstrained formulations (UHP and UMP) of these four principles are also proposed, with completely specified Lagrange multipliers. Percival's variational principle for invariant tori and variational principles for scattering orbits are derived from the RMP. The RMP is very convenient for approximate variational solutions to problems in mechanics using Ritz-type methods. Examples are provided. Copyright 1996 Academic Press, Inc.

  9. PRINCIPLE OF PROPORTIONALITY, CRITERION OF LEGITIMACY IN THE PUBLIC LAW

    Directory of Open Access Journals (Sweden)

    MARIUS ANDREESCU

    2011-04-01

    Full Text Available A problem essential to the state is that of delimiting discretionary power from the abuse of power in the activity of state institutions. The lawful conduct of state institutions lies within their right of appreciation, whereas an excess of power generates the violation of a subjective right or of a legitimate interest of the citizen. The application and observance of the principle of lawfulness in the activities of the state is a complex problem, because the exercise of state functions presupposes the discretionary powers with which state authorities are invested, or, in other words, the "right of appreciation" of the authorities regarding the moment of adoption and the content of the measures proposed. Discretionary power cannot be opposed to the principle of lawfulness, as a dimension of the state governed by the rule of law. In this study we propose to analyze the concept of discretionary power, and respectively the excess of power, guided by the legislation, jurisprudence and doctrine in the matter. At the same time we seek to identify the most important criteria that allow anyone, whether or not an administrator, a public servant or a judge, to distinguish the lawful conduct of state institutions from the excess of power. Within this context, we consider that the principle of proportionality represents such a criterion. Proportionality is a general principle of law, but at the same time a principle of constitutional law and of other branches of law. It clearly expresses the idea of balance and reasonableness, but also of adjusting the measures ordered by state authorities to the factual situation and to the purpose for which they were conceived. In our study we offer theoretical and jurisprudential arguments according to which the principle of proportionality can be procedurally determined and used to delimit discretionary power and

  10. Variational principles

    CERN Document Server

    Moiseiwitsch, B L

    2004-01-01

    This graduate-level text's primary objective is to demonstrate the expression of the equations of the various branches of mathematical physics in the succinct and elegant form of variational principles (and thereby illuminate their interrelationship). Its related intentions are to show how variational principles may be employed to determine the discrete eigenvalues for stationary state problems and to illustrate how to find the values of quantities (such as the phase shifts) that arise in the theory of scattering. Chapter-by-chapter treatment consists of analytical dynamics; optics, wave mechanics

  11. Radiation protection principles applied to conventional industries producing deleterious environmental effects

    International Nuclear Information System (INIS)

    Tadmor, J.

    1980-01-01

    Comparison of the radiation protection standards for the population at large with the ambient standards for conventional pollutants reveals differences in basic principles which result in more relaxed ambient standards for conventional pollutants and, consequently, the penalization of the nuclear industry due to the increased cost of its safety measures. It is proposed that radiation protection principles should be used as a prototype for pollutants having harmful environmental effects and that radiation health physicists should be active in the application of these principles of population protection. A case study of the atmospheric release of SO2 under different conditions is analyzed to emphasize the importance of considering the size of the exposed population. (H.K.)

  12. Analyzing Electroencephalogram Signal Using EEG Lab

    Directory of Open Access Journals (Sweden)

    Mukesh BHARDWAJ

    2009-01-01

    Full Text Available The EEG is composed of electrical potentials arising from several sources. Each source (including separate neural clusters, blink artifacts or pulse artifacts) projects a unique topography onto the scalp, a "scalp map". Scalp maps may be 2-D or 3-D. These maps are mixed according to the principle of linear superposition. Independent component analysis (ICA) attempts to reverse the superposition by separating the EEG into mutually independent scalp maps, or components. The MATLAB toolbox and graphical user interface EEGLAB is used for processing EEG data of any number of channels. The Wavelet Toolbox has been used for 2-D signal analysis.
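The linear-superposition model underlying ICA can be illustrated with a small numerical sketch. The example below is a simplification, not EEGLAB code: the mixing matrix is assumed known here, whereas ICA estimates it blindly from statistical independence of the sources. Two source signals are mixed through a matrix A and recovered by inverting the mixing:

```python
import numpy as np

# Two "source" time courses (e.g. a neural rhythm and a blink artifact).
t = np.linspace(0, 1, 500)
s1 = np.sin(2 * np.pi * 10 * t)          # 10 Hz oscillation
s2 = np.sign(np.sin(2 * np.pi * 1 * t))  # slow square-wave artifact
S = np.vstack([s1, s2])                  # sources: shape (2, 500)

# Linear superposition: each channel is a weighted sum of the sources.
A = np.array([[1.0, 0.5],
              [0.3, 1.0]])               # hypothetical mixing ("scalp map") matrix
X = A @ S                                # observed channel data

# With A known, unmixing is just matrix inversion; ICA instead has to
# estimate the unmixing matrix from the channel data alone.
S_rec = np.linalg.inv(A) @ X
print(np.allclose(S_rec, S))             # → True (sources recovered exactly)
```

The point of the sketch is only that the forward model is linear, which is what licenses the unmixing step that ICA performs blindly.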

  13. Multisensory processing of redundant information in go/no-go and choice responses

    DEFF Research Database (Denmark)

    Blurton, Steven Paul; Greenlee, Mark W.; Gondan, Matthias

    2014-01-01

    In multisensory research, faster responses are commonly observed when multimodal stimuli are presented as compared to unimodal target presentations. This so-called redundant signals effect can be explained by several frameworks including separate activation and coactivation models. The redundant ...... of redundant information provided by different sensory channels and is not restricted to simple responses. The results connect existing theories on multisensory integration with theories on choice behavior....... processes (Schwarz, 1994) within two absorbing barriers. The diffusion superposition model accurately describes mean and variance of response times as well as the proportion of correct responses observed in the two tasks. Linear superposition seems, thus, to be a general principle in integration...
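A minimal simulation can convey the superposition idea examined in this work: if each sensory channel contributes drift to a single diffusion process, the bimodal drift is the sum of the unimodal drifts, and first-passage times shorten accordingly. The sketch below is an illustrative toy, not the authors' fitted model; the parameter values are arbitrary.

```python
import random

def mean_first_passage(drift, barrier=1.0, sigma=1.0, dt=0.005, trials=400, seed=1):
    """Mean time for a drift-diffusion process to hit +barrier or -barrier."""
    rng = random.Random(seed)
    sd = sigma * dt ** 0.5        # standard deviation of one Euler step
    total = 0.0
    for _ in range(trials):
        x, t = 0.0, 0.0
        while abs(x) < barrier:
            x += drift * dt + rng.gauss(0.0, sd)
            t += dt
        total += t
    return total / trials

rt_unimodal = mean_first_passage(drift=1.0)  # one sensory channel
rt_bimodal = mean_first_passage(drift=2.0)   # superposed drifts of two channels
print(rt_bimodal < rt_unimodal)              # → True (redundancy gain)
```

Summing the drifts speeds absorption at the barriers, which is the qualitative signature of the redundant signals effect under a superposition account.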

  14. The precautionary principle: is it safe.

    Science.gov (United States)

    Gignon, Maxime; Ganry, Olivier; Jardé, Olivier; Manaouil, Cécile

    2013-06-01

    The precautionary principle is generally acknowledged to be a powerful tool for protecting health, but it was originally invoked by policy makers for dealing with environmental issues. In the 1990s, the principle was incorporated into many legislative and regulatory texts in international law. One can argue that the precautionary principle has turned into a "precautionism" whose main purpose is to demonstrate to the public that risk has been taken into account in decision making. There is now a danger that such abuses will deprive the principle of its meaning and value. When pushed to its limits, the precautionary principle can even be dangerous when applied to the healthcare field. This is why a critical analysis of the principle is necessary. In the literature, the way the principle is commonly used in relation to health sometimes deviates from its essence. We believe that educational work is necessary to familiarize professionals, policy makers and the public with the precautionary principle and to avoid confusion. We propose a critical analysis of the use and misuse of the precautionary principle.

  15. Crosscheck Principle in Pediatric Audiology Today: A 40-Year Perspective.

    Science.gov (United States)

    Hall, James W

    2016-09-01

    The crosscheck principle is just as important in pediatric audiology as it was when first described 40 years ago. That is, no auditory test result should be accepted and used in the diagnosis of hearing loss until it is confirmed or crosschecked by one or more independent measures. Exclusive reliance on only one or two tests, even objective auditory measures, may result in an auditory diagnosis that is unclear or perhaps incorrect. On the other hand, close and careful analysis of findings for a test battery consisting of objective procedures and, whenever feasible, behavioral tests usually leads to prompt and accurate diagnosis of auditory dysfunction. This paper provides a concise review of the crosscheck principle from its introduction to its clinical application today. The review concludes with a description of a modern test battery for pediatric hearing assessment that supplements traditional behavioral tests with a variety of independent objective procedures, including aural immittance measures, otoacoustic emissions, and auditory evoked responses.

  17. The principle of guilt as a basis for criminal sanctions justification review in the Criminal Law in Serbia

    Directory of Open Access Journals (Sweden)

    Ćorović Emir A.

    2013-01-01

    Full Text Available The principle of guilt is one of the essential principles of criminal law, but it is also a very complex one. This paper presents its content, with particular reference to a systematic deviation from it in the criminal legislation of the Republic of Serbia. Under Article 2 of the Criminal Code of Serbia, the principle of guilt is related to punishments and warning measures, while security and educational measures remain beyond its reach. On the other hand, the Criminal Code, in defining a criminal offense in Article 14, demands culpability of the perpetrator's behavior. This raises a conceptual problem: since Article 2 of the Criminal Code does not extend the principle of guilt to security and educational measures, these criminal sanctions can be applied to persons acting without culpability. It is paradoxical to accept a criminal-justice reaction in the form of criminal sanctions against persons without guilt. According to the author, such a normative solution calls into question the relevant principle, more precisely its basis, generality and guidance, qualities that every legal principle should maintain. Of course, deviations from a legal principle, including the principle of guilt, are possible, but they must be kept to a minimum. Systematic deviations from a legal principle, in this case the principle of guilt, are not to be tolerated. Connecting the principle of guilt with the system of criminal sanctions opens the debate between voluntarism, embodied in the freedom of will and guilt, and positivism/determinism, embodied in the perpetrator's dangerousness and educational neglect, within the criminal law; this discussion has lasted over a century in the science of criminal law. The author concludes that a criminal-justice reaction in the form of a criminal sanction can be justified only if based on the principle of guilt. Otherwise, such a reaction has no place in the criminal law.

  18. The principle of general tovariance

    NARCIS (Netherlands)

    Heunen, C.; Landsman, N.P.; Spitters, B.A.W.; Loja Fernandes, R.; Picken, R.

    2008-01-01

    We tentatively propose two guiding principles for the construction of theories of physics, which should be satisfied by a possible future theory of quantum gravity. These principles are inspired by those that led Einstein to his theory of general relativity, viz. his principle of general covariance

  19. 48 CFR 49.113 - Cost principles.

    Science.gov (United States)

    2010-10-01

    ... 48 Federal Acquisition Regulations System 1 2010-10-01 2010-10-01 false Cost principles. 49.113 Section 49.113 Federal Acquisition Regulations System FEDERAL ACQUISITION REGULATION CONTRACT MANAGEMENT TERMINATION OF CONTRACTS General Principles 49.113 Cost principles. The cost principles and procedures in the...

  20. Pramana – Journal of Physics | Indian Academy of Sciences

    Indian Academy of Sciences (India)

    There are four reasons why our present knowledge and understanding of quantum mechanics can be regarded as incomplete. (1) The principle of linear superposition has not been experimentally tested for position eigenstates of objects having more than about a thousand atoms. (2) There is no universally agreed upon ...

  1. Quantum Mechanics with a Little Less Mystery

    Science.gov (United States)

    Cropper, William H.

    1969-01-01

    Suggests the "route of the inquiring mind" in presenting the esoteric quantum mechanical postulates and concepts in an understandable form. Explains that the quantum mechanical postulates are but useful mathematical forms to express the broader principles of superposition and correspondence. Briefly describes some of the features which make the…

  2. The anthropic principle

    International Nuclear Information System (INIS)

    Carr, B.J.

    1982-01-01

    The anthropic principle (the conjecture that certain features of the world are determined by the existence of Man) is discussed and the objections to it are listed. It is stated that nearly all the constants of nature may be determined by the anthropic principle, which does not give exact values for the constants but only their orders of magnitude. (J.T.)

  3. Biomechanics principles and practices

    CERN Document Server

    Peterson, Donald R

    2014-01-01

    Presents Current Principles and ApplicationsBiomedical engineering is considered to be the most expansive of all the engineering sciences. Its function involves the direct combination of core engineering sciences as well as knowledge of nonengineering disciplines such as biology and medicine. Drawing on material from the biomechanics section of The Biomedical Engineering Handbook, Fourth Edition and utilizing the expert knowledge of respected published scientists in the application and research of biomechanics, Biomechanics: Principles and Practices discusses the latest principles and applicat

  4. Principles of μSR technics

    International Nuclear Information System (INIS)

    Chappert, J.

    1983-05-01

    The principles of muon spin rotation spectroscopy (μSR) are presented. Only positive muons are considered, since essentially all physical and chemical results obtained to date have been obtained with them. When implanted in a sample, the positive muon has two main characteristics: first, it can be considered as a probe at an interstitial site; second, it can diffuse. Accordingly, the quantities measured by μSR are a combination of static and dynamic properties of the muon and of the sample. A further characteristic is the possibility of muonium formation (a μ + e - bound state) [fr

  5. PRINCIPLES OF CONTENT FORMATION EDUCATIONAL ELECTRONIC RESOURCE

    Directory of Open Access Journals (Sweden)

    О Ю Заславская

    2017-12-01

    Full Text Available The article considers modern possibilities of information and communication technologies for the design of electronic educational resources. The conceptual basis of an open educational multimedia system is the modular architecture of the electronic educational resource. The content of an electronic training module can be implemented in several kinds of modules: obtaining information, practical exercises, and control. The regularities of the teaching process in modern pedagogical theory, both general and specific, are considered, and the principles for forming the content of instruction at different levels are defined on the basis of these regularities. From this analysis, the principles for forming an electronic educational resource are determined, taking into account the general and didactic patterns of teaching. As principles for forming the educational material of the information-acquisition module, the article considers: methodological orientation, general scientific orientation, systemic nature, fundamentalization, accounting for intersubject connections, and minimization. The principles for forming the practical-exercises module include: systematic and dosed consistency, rational use of study time, and accessibility. The principles for forming the control module are: operationalization of goals and unified identification diagnosis.

  6. Measuring and characterizing beat phenomena with a smartphone

    Science.gov (United States)

    Osorio, M.; Pereyra, C. J.; Gau, D. L.; Laguarda, A.

    2018-03-01

    Nowadays, smartphones are part of everyone's life. Apart from being excellent tools for work and communication, they can also be used to perform several measurements of simple physical magnitudes, serving as a mobile and inexpensive laboratory, ideal for use in physics lectures in high schools or universities. In this article, we use a smartphone to analyse the acoustic beat phenomenon with a simple experimental setup, which can complement lessons in the classroom. The beats were created by the superposition of the waves generated by two tuning forks, whose natural frequencies were previously characterized using different applications. After the characterization, we recorded the beats and analysed the oscillations in time and frequency.
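The beat pattern itself follows directly from the superposition principle: summing two equal-amplitude tones at f1 and f2 gives a carrier at the mean frequency (f1 + f2)/2 modulated by an envelope at the beat frequency |f1 − f2|. The short check below uses illustrative values (the tuning-fork frequencies are made up, not taken from the article) to verify the product-to-sum identity numerically:

```python
import numpy as np

f1, f2 = 440.0, 436.0          # two hypothetical tuning-fork frequencies (Hz)
fs = 44100                     # typical smartphone sampling rate
t = np.arange(0, 1.0, 1 / fs)

# Superposition of the two tones ...
superposed = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

# ... equals a carrier at the mean frequency times a slow envelope:
carrier = np.sin(2 * np.pi * (f1 + f2) / 2 * t)
envelope = 2 * np.cos(2 * np.pi * (f1 - f2) / 2 * t)

print(np.allclose(superposed, envelope * carrier))  # → True
print(abs(f1 - f2))                                 # audible beat frequency: 4.0 Hz
```

In a recording, one would hear the 438 Hz carrier rising and falling in loudness four times per second, which is exactly what a smartphone's microphone and a spectrum app can resolve.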

  7. Smart labels in municipal solid waste - a case for the Precautionary Principle?

    International Nuclear Information System (INIS)

    Waeger, P.A.; Eugster, M.; Hilty, L.M.; Som, C.

    2005-01-01

    The Precautionary Principle aims at anticipating and minimizing potentially serious or irreversible risks under conditions of uncertainty. Although it has been incorporated into many international treaties and pieces of national legislation for environmental protection and sustainable development, the Precautionary Principle has rarely been applied to novel Information and Communication Technologies (ICT) and their potential environmental impacts. In this article we analyze the implications of the disposal and recycling of packaging materials containing so-called smart labels and discuss the results from the perspective of the Precautionary Principle. We argue that a broad application of smart labels bears some risk of dissipating both toxic and valuable substances, and of disrupting established recycling processes. However, these risks can be avoided by precautionary measures, mainly concerning the composition and the use of smart labels. These measures should be implemented as early as possible in order to avoid irreversible developments which are undesirable from the viewpoint of resource management and environmental protection

  8. PRINCIPLES AND PROCEDURES ON FISCAL

    Directory of Open Access Journals (Sweden)

    Morar Ioan Dan

    2011-07-01

    Full Text Available Fiscal science, like most analytical disciplines, rests on principles reiterated by specialists in the field in various specialized works. The two components of taxation, the tax system in its theoretical dimension and the practical procedures relating to tax, are marked by frequent references to and invocations of the principles underlying taxation. This paper attempts a return to fiscal equity as a general vision, a principle often invoked and used to justify tax policies, but just as often violated by tax laws. We also want to emphasize the importance of devising fiscal procedures that ensure equitable treatment of taxpayers. The specific approach of this paper is based on the notion that tax equity rests on equality before the tax, and that social policies pursued by the executive through the budget would be more effective than pursuing them through tax instruments. If the scientific approach justifies unequal treatment in tax law by reference to the various social problems of taxpayers, then it deviates from the issue of tax fairness and instead justifies the need to promote social policies that are usually more attractive to taxpayers. Modern tax techniques are believed to be promoted especially in order to ensure an increasing level of efficiency at the expense of the taxpayers' claim to equality before the tax law. On the other hand, tax inequities generate reactions from many quarters, beginning with the budget plan, but the outcomes of unfair measures cannot be quantified and the timeline of the reaction is usually not known. Statistics nevertheless show fluctuations in budgetary revenues, and the literature offers reviews and analyses pointing to a connection between changes in government policies, budget execution and outcomes. The effects of tax inequity on tax procedures and budgetary revenues are difficult to quantify and are among the subjects of this work. Providing tax equity requires combining it with the principles of non-discrimination and neutrality.

  9. Quantum principles in field interactions

    International Nuclear Information System (INIS)

    Shirkov, D.V.

    1986-01-01

    The concept of a quantum principle is introduced as a principle whose formulation is based on specifically quantum ideas and notions. We consider three such principles, viz. those of quantizability, local gauge symmetry, and supersymmetry, and their role in the development of quantum field theory (QFT). Concerning the first of these, we analyze the formal aspects and physical content of the renormalization procedure in QFT and its relation to ultraviolet divergences and the renormalization group. The quantizability principle is formulated as a condition for the existence of a self-consistent quantum version of a theory with a given mechanism of field interaction. It is shown that the consecutive (from a historical point of view) use of these quantum principles places ever stronger limitations on the possible forms of field interactions.

  10. Principles of project management

    Science.gov (United States)

    1982-01-01

    The basic principles of project management as practiced by NASA management personnel are presented. These principles are given as ground rules and guidelines to be used in the performance of research, development, construction or operational assignments.

  11. Test masses for the G-POEM test of the weak equivalence principle

    International Nuclear Information System (INIS)

    Reasenberg, Robert D; Phillips, James D; Popescu, Eugeniu M

    2011-01-01

    We describe the design of the test masses that are used in the 'ground-based principle of equivalence measurement' test of the weak equivalence principle. The main features of the design are the incorporation of corner cubes and the use of mass removal and replacement to create pairs of test masses with different test substances. The corner cubes allow for the vertical separation of the test masses to be measured with picometer accuracy by SAO's unique tracking frequency laser gauge, while the mass removal and replacement operations are arranged so that the test masses incorporating different test substances have nominally identical gravitational properties. (papers)

  12. Modern electronic maintenance principles

    CERN Document Server

    Garland, DJ

    2013-01-01

    Modern Electronic Maintenance Principles reviews the principles of maintaining modern, complex electronic equipment, with emphasis on preventive and corrective maintenance. Unfamiliar subjects such as the half-split method of fault location, functional diagrams, and fault finding guides are explained. This book consists of 12 chapters and begins by stressing the need for maintenance principles and discussing the problem of complexity as well as the requirements for a maintenance technician. The next chapter deals with the connection between reliability and maintenance and defines the terms fai

  13. Developing principles of growth

    DEFF Research Database (Denmark)

    Neergaard, Helle; Fleck, Emma

    of the principles of growth among women-owned firms. Using an in-depth case study methodology, data was collected from women-owned firms in Denmark and Ireland, as these countries are similar in contextual terms, e.g. population and business composition, dominated by micro, small and medium-sized enterprises....... Extending on principles put forward in effectuation theory, we propose that women grow their firms according to five principles which enable women’s enterprises to survive in the face of crises such as the current financial world crisis....

  14. A generalized Principle of Relativity

    International Nuclear Information System (INIS)

    Felice, Fernando de; Preti, Giovanni

    2009-01-01

    The Theory of Relativity stands as a firm cornerstone on which modern physics is founded. In this paper we bring to light a hitherto undisclosed richness of this theory, namely its admitting a consistent reformulation which is able to provide a unified scenario for all kinds of particles, be they lightlike or not. This result hinges on a generalized Principle of Relativity which is intrinsic to Einstein's theory - a fact which went completely unnoticed before. The road leading to this generalization starts, in the very spirit of Relativity, from enhancing full equivalence between the four spacetime directions by requiring full equivalence between the motions along these four spacetime directions as well. So far, no measurable spatial velocity in the direction of the time axis has ever been defined on the same footing as the usual velocities - the 'space-velocities' - in the local three-space of a given observer. In this paper, we show how Relativity allows such a 'time-velocity' to be defined in a very natural way, for any particle and in any reference frame. As a consequence of this natural definition, it also follows that the time- and space-velocity vectors sum up to define a spacelike 'world-velocity' vector, the modulus of which - the world-velocity - turns out to be equal to Maxwell's constant c, irrespective of the observer who measures it. This measurable world-velocity (not to be confused with the space-velocities we are used to dealing with) therefore represents the speed at which all kinds of particles move in spacetime, according to any observer. As remarked above, the unifying scenario thus emerging is intrinsic to Einstein's Theory; it extends the role traditionally assigned to Maxwell's constant c, and can therefore justly be referred to as 'a generalized Principle of Relativity'.

  15. Cosmological equivalence principle and the weak-field limit

    International Nuclear Information System (INIS)

    Wiltshire, David L.

    2008-01-01

    The strong equivalence principle is extended in application to averaged dynamical fields in cosmology to include the role of the average density in the determination of inertial frames. The resulting cosmological equivalence principle is applied to the problem of synchronization of clocks in the observed universe. Once density perturbations grow to give density contrasts of order 1 on scales of tens of megaparsecs, the integrated deceleration of the local background regions of voids relative to galaxies must be accounted for in the relative synchronization of clocks of ideal observers who measure an isotropic cosmic microwave background. The relative deceleration of the background can be expected to represent a scale in which weak-field Newtonian dynamics should be modified to account for dynamical gradients in the Ricci scalar curvature of space. This acceleration scale is estimated using the best-fit nonlinear bubble model of the universe with backreaction. Although this acceleration scale, of order 10^-10 m s^-2, is small, when integrated over the lifetime of the universe it amounts to an accumulated relative difference of 38% in the rate of average clocks in galaxies as compared to volume-average clocks in the emptiness of voids. A number of foundational aspects of the cosmological equivalence principle are also discussed, including its relation to Mach's principle, the Weyl curvature hypothesis, and the initial conditions of the universe.

  16. On an Objective Basis for the Maximum Entropy Principle

    Directory of Open Access Journals (Sweden)

    David J. Miller

    2015-01-01

    Full Text Available In this letter, we elaborate on some of the issues raised by a recent paper by Neapolitan and Jiang concerning the maximum entropy (ME principle and alternative principles for estimating probabilities consistent with known, measured constraint information. We argue that the ME solution for the “problematic” example introduced by Neapolitan and Jiang has stronger objective basis, rooted in results from information theory, than their alternative proposed solution. We also raise some technical concerns about the Bayesian analysis in their work, which was used to independently support their alternative to the ME solution. The letter concludes by noting some open problems involving maximum entropy statistical inference.
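As a concrete instance of maximum entropy inference under a measured constraint, the sketch below uses the standard textbook setup (not the "problematic" example from Neapolitan and Jiang, which the letter does not reproduce): it finds the ME distribution over a six-sided die whose mean is constrained to 4.5. The ME solution is an exponential family p_k ∝ exp(λk), with λ found by bisection so that the mean constraint holds:

```python
import math

def max_entropy_die(target_mean, lo=-10.0, hi=10.0, iters=200):
    """Maximum-entropy distribution over faces 1..6 subject to a fixed mean."""
    faces = range(1, 7)

    def mean_for(lam):
        w = [math.exp(lam * k) for k in faces]
        z = sum(w)                       # partition function
        return sum(k * wk for k, wk in zip(faces, w)) / z

    # The constrained mean is monotone increasing in lambda, so bisect.
    for _ in range(iters):
        mid = (lo + hi) / 2
        if mean_for(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    lam = (lo + hi) / 2
    w = [math.exp(lam * k) for k in faces]
    z = sum(w)
    return [wk / z for wk in w]

p = max_entropy_die(4.5)
print(round(sum(k * pk for k, pk in zip(range(1, 7), p)), 6))  # → 4.5
```

Among all distributions with mean 4.5, this exponentially tilted one maximizes the Shannon entropy; any alternative principle, such as the one the letter critiques, would select a different member of the constraint set.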

  17. Absolute distance measurement with micrometer accuracy using a Michelson interferometer and the iterative synthetic wavelength principle.

    Science.gov (United States)

    Alzahrani, Khaled; Burton, David; Lilley, Francis; Gdeisat, Munther; Bezombes, Frederic; Qudeisat, Mohammad

    2012-02-27

    We present a novel system that can measure absolute distances of up to 300 mm with an uncertainty of the order of one micrometer, within a timeframe of 40 seconds. The proposed system uses a Michelson interferometer, a tunable laser, a wavelength meter and a computer for analysis. The principle of synthetic wave creation is used in a novel way in that the system employs an initial low-precision estimate of the distance, obtained using a triangulation or time-of-flight laser system, or similar, and then iterates through a sequence of progressively smaller synthetic wavelengths until it reaches micrometer uncertainties in the determination of the distance. A further novel feature of the system is its use of Fourier transform phase analysis techniques to achieve sub-wavelength accuracy. This method has the major advantage of being relatively simple to realize, offering demonstrated relative precisions better than 5 × 10^-5. Finally, the fact that this device does not require a continuous line-of-sight to the target, as is the case with other configurations, offers significant advantages.
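The iterative synthetic-wavelength idea can be sketched numerically. Two optical wavelengths λ1 and λ2 define a synthetic wavelength Λ = λ1λ2/|λ1 − λ2|; a coarse estimate resolves the integer fringe order at a large Λ, and each successively smaller Λ refines the estimate. The toy below assumes idealized, noise-free phase measurements, and the wavelength chain is invented for illustration (it is not the chain used by the authors); it recovers a distance from a millimetre-level starting guess:

```python
import math

def fractional_order(d, lam):
    """Measured fractional interference order for a double-pass path 2*d."""
    x = 2 * d / lam
    return x - math.floor(x)

def refine(d_est, lam, frac):
    """Resolve the integer fringe order from the current estimate, then update.
    Requires the current error to be smaller than lam/4."""
    n = round(2 * d_est / lam - frac)
    return (n + frac) * lam / 2

d_true = 0.123456789            # metres (unknown to the instrument)
d_est = 0.1230                  # coarse triangulation estimate, ~0.5 mm off

# Chain of progressively smaller synthetic wavelengths, ending at an
# optical wavelength (values are illustrative only).
for lam in [0.05, 5e-3, 5e-4, 5e-5, 5e-6, 1.55e-6]:
    d_est = refine(d_est, lam, fractional_order(d_true, lam))

print(abs(d_est - d_true) < 1e-9)  # → True
```

Each stage only needs the previous estimate to be good to a quarter of the current synthetic wavelength, which is why a crude initial distance suffices and the precision cascades down to the optical wavelength.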

  18. Tectonic superposition of the Kurosegawa Terrane upon the Sanbagawa metamorphic belt in eastern Shikoku, southwest Japan

    International Nuclear Information System (INIS)

    Suzuki, Hisashi; Isozaki, Yukio; Itaya, Tetsumaru.

    1990-01-01

    The weakly metamorphosed pre-Cenozoic accretionary complex in the northern part of the Chichibu Belt in Kamikatsu Town, eastern Shikoku, consists of two distinct geologic units: the Northern Unit and the Southern Unit. The Northern Unit is composed mainly of phyllitic pelites and basic tuff with allochthonous blocks of chert and limestone, and possesses mineral parageneses of the glaucophane schist facies. The Southern Unit is composed mainly of phyllitic pelites with allochthonous blocks of sandstone, limestone, massive green rocks, and chert, and possesses mineral parageneses of the pumpellyite-actinolite facies. The Southern Unit tectonically overlies the Northern Unit along the south-dipping Jiganji Fault. K-Ar ages were determined for the recrystallized white micas from 11 samples of pelites and basic tuff in the Northern Unit, and from 6 samples of pelites in the Southern Unit. The K-Ar ages of the samples from the Northern Unit range from 129 to 112 Ma, and those from the Southern Unit from 225 to 194 Ma. In terms of metamorphic ages, the Northern Unit and Southern Unit are referred to the constituents of the Sanbagawa Metamorphic Belt and of the Kurosegawa Terrane, respectively. Thus, the tectonic superposition of these two units in the study area suggests that the Kurosegawa Terrane occupies a higher structural position over the Sanbagawa Metamorphic Belt in eastern Shikoku. (author)

  19. Fermat and the Minimum Principle

    Indian Academy of Sciences (India)

    Arguably, least action and minimum principles were offered or applied much earlier. These principles are among the fundamental, basic, unifying or organizing ones used to describe a variety of natural phenomena. They consider the amount of energy expended in performing a given action to be the least required ...

  20. Consideration of microstructure evolution and residual stress measurement near severe worked surface using high energy x-ray

    International Nuclear Information System (INIS)

    Hashimoto, Tadafumi; Mochizuki, Masahito; Shobu, Takahisa

    2012-01-01

    It is necessary to establish a measurement method that can evaluate accurate stress on the surface. However, the microstructure evolution takes place near the surface due to severe plastic deformation, since structural members have been superpositioned a lot of working processes to complete. As well known, a plane stress can't be assumed on the severe worked surface. Therefore we have been proposed the measurement method that can be measured the in-depth distribution of residual stress components by using high energy X-ray from a synchrotron radiation source. There is the combination of the constant penetration depth method and tri-axial stress analysis. Measurements were performed by diffraction planes for the orientation parameter Γ=0.25 of which elastic constants are nearly equal to the mechanical one. The stress components obtained must be converted to the stress components in real space by using optimization technique, since it corresponds to the weighted average stress components associated with the attenuation of X-ray in materials. The predicted stress components distribution agrees very well with the corrected one which was measured by the conventional removal method. To verify the availability of the proposed method, thermal aging variation of residual stress components on the severe worked surface under elevated temperature was investigated using specimen superpositioned working processes (i.e., welding, machining, peening). It is clarified that the residual stress components increase with thermal aging, using the diffraction planes in hard elastic constants to the bulk. This result suggests that the thermal stability of residual stress has the dependence of the diffraction plane. (author)