WorldWideScience

Sample records for zero-error classical communication

  1. Polaractivation for classical zero-error capacity of qudit channels

    Energy Technology Data Exchange (ETDEWEB)

    Gyongyosi, Laszlo, E-mail: gyongyosi@hit.bme.hu [Quantum Technologies Laboratory, Department of Telecommunications, Budapest University of Technology and Economics, 2 Magyar tudosok krt, Budapest, H-1117, Hungary and Information Systems Research Group, Mathematics and Natural Sciences, Hungarian Ac (Hungary); Imre, Sandor [Quantum Technologies Laboratory, Department of Telecommunications, Budapest University of Technology and Economics, 2 Magyar tudosok krt, Budapest, H-1117 (Hungary)

    2014-12-04

    We introduce a new phenomenon for the zero-error transmission of classical information over quantum channels that are initially incapable of zero-error classical communication. The effect is called polaractivation, and the result is similar to the superactivation effect. We use the Choi-Jamiolkowski isomorphism and the Schmidt theorem to prove the polaractivation of the classical zero-error capacity and to define the polaractivator channel coding scheme.

  2. Activation of zero-error classical capacity in low-dimensional quantum systems

    Science.gov (United States)

    Park, Jeonghoon; Heo, Jun

    2018-06-01

    Channel capacities of quantum channels can be nonadditive even if one of two quantum channels has no channel capacity. We call this phenomenon activation of the channel capacity. In this paper, we show that when we use a quantum channel on a qubit system, only a noiseless qubit channel can generate the activation of the zero-error classical capacity. In particular, we show that the zero-error classical capacity of two quantum channels on qubit systems cannot be activated. Furthermore, we present a class of examples showing the activation of the zero-error classical capacity in low-dimensional systems.

  3. Entanglement-assisted zero-error capacity is upper-bounded by the Lovasz θ function

    International Nuclear Information System (INIS)

    Beigi, Salman

    2010-01-01

    The zero-error capacity of a classical channel is expressed in terms of the independence number of some graph and its tensor powers. This quantity is hard to compute even for small graphs such as the cycle of length seven, so upper bounds such as the Lovasz theta function play an important role in zero-error communication. In this paper, we show that the Lovasz theta function is an upper bound on the zero-error capacity even in the presence of entanglement between the sender and receiver.
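
    The quantity in question is the Shannon (zero-error) capacity Θ(G) = sup_n α(G^⊠n)^(1/n), where α is the independence number and ⊠ the strong graph product. A small, self-contained sketch (function names are ours) brute-forces α for the pentagon C5 and its strong square, reproducing the classic fact α(C5) = 2 but α(C5 ⊠ C5) = 5, so Θ(C5) ≥ √5; the Lovász bound θ(C5) = √5 then pins the capacity down exactly.

```python
from itertools import combinations

# Pentagon C5: the smallest graph whose Shannon (zero-error) capacity
# exceeds its one-shot independence number.
C5_EDGES = [(i, (i + 1) % 5) for i in range(5)]

def independence_number(verts, edges, k_max):
    """Brute-force the largest independent set of size <= k_max."""
    edge_set = {frozenset(e) for e in edges}
    for k in range(k_max, 0, -1):
        for cand in combinations(verts, k):
            if all(frozenset(p) not in edge_set for p in combinations(cand, 2)):
                return k
    return 0

def strong_square(n, edges):
    """Strong product G x G: distinct pairs are adjacent iff each
    coordinate is equal or adjacent in the factor graph."""
    nbr = {i: {i} for i in range(n)}
    for a, b in edges:
        nbr[a].add(b)
        nbr[b].add(a)
    verts = [(u, v) for u in range(n) for v in range(n)]
    prod_edges = [(p, q) for p, q in combinations(verts, 2)
                  if q[0] in nbr[p[0]] and q[1] in nbr[p[1]]]
    return verts, prod_edges

alpha1 = independence_number(range(5), C5_EDGES, 3)   # one channel use: 2 messages
verts2, edges2 = strong_square(5, C5_EDGES)
alpha2 = independence_number(verts2, edges2, 6)       # two channel uses: 5 messages
```

    Two uses of the pentagon channel carry five zero-error messages, giving the rate √5 per use that the brute force cannot certify as optimal; that is exactly the gap the Lovász θ bound closes.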

  4. Zero-Error Capacity of a Class of Timing Channels

    DEFF Research Database (Denmark)

    Kovacevic, M.; Popovski, Petar

    2014-01-01

    We analyze the problem of zero-error communication through timing channels that can be interpreted as discrete-time queues with bounded waiting times. The channel model includes the following assumptions: 1) time is slotted; 2) at most N particles are sent in each time slot; 3) every particle is ...

  5. Simultaneous classical communication and quantum key distribution using continuous variables*

    Science.gov (United States)

    Qi, Bing

    2016-10-01

    Presently, classical optical communication systems employing strong laser pulses and quantum key distribution (QKD) systems working at single-photon levels are very different communication modalities, and dedicated devices are commonly required to implement QKD. In this paper, we propose a scheme which allows classical communication and QKD to be implemented simultaneously using the same communication infrastructure. More specifically, we propose a coherent communication scheme where both the bits for classical communication and the Gaussian-distributed random numbers for QKD are encoded on the same weak coherent pulse and decoded by the same coherent receiver. Simulation results based on practical system parameters show that both deterministic classical communication with a bit error rate of 10^-9 and secure key distribution could be achieved over tens of kilometers of single-mode fiber. It is conceivable that in future coherent optical communication networks, QKD will be operated in the background of classical communication at a minimal cost.
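
    The encoding idea can be illustrated with a deliberately simplified toy model (all parameter values and names below are hypothetical, not the paper's): the classical bit sets the sign of a large displacement of one quadrature, and the Gaussian key symbol rides on top as a small modulation; the receiver thresholds for the bit, then subtracts the decoded displacement to recover the key symbol.

```python
import random

random.seed(1)

A = 20.0           # classical displacement amplitude (hypothetical units)
SIGMA_KEY = 1.0    # std. dev. of the Gaussian key modulation
SIGMA_NOISE = 0.5  # additive noise seen by the coherent receiver

def transmit(bit, key_symbol):
    # Both payloads are encoded on the same quadrature of one weak pulse
    return (A if bit else -A) + key_symbol + random.gauss(0.0, SIGMA_NOISE)

def receive(y):
    bit = 1 if y > 0 else 0            # threshold detection of the bit
    key_est = y - (A if bit else -A)   # remove the displacement, keep the rest
    return bit, key_est

bits = [random.randint(0, 1) for _ in range(1000)]
keys = [random.gauss(0.0, SIGMA_KEY) for _ in bits]
decoded = [receive(transmit(b, k)) for b, k in zip(bits, keys)]
bit_errors = sum(b != db for b, (db, _) in zip(bits, decoded))
```

    Because the displacement is large compared with both the key modulation and the noise, the bit decisions are essentially error-free while the key symbols remain noisy Gaussian variables, which is the division of labor the scheme relies on.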

  6. Random electrodynamics: the theory of classical electrodynamics with classical electromagnetic zero-point radiation

    International Nuclear Information System (INIS)

    Boyer, T.H.

    1975-01-01

    The theory of classical electrodynamics with classical electromagnetic zero-point radiation is outlined here under the title random electrodynamics. The work represents a reanalysis of the bounds of validity of classical electron theory which should sharpen the understanding of the connections and distinctions between classical and quantum theories. The new theory of random electrodynamics is a classical electron theory involving Newton's equations for particle motion due to the Lorentz force, and Maxwell's equations for the electromagnetic fields with point particles as sources. However, the theory departs from the classical electron theory of Lorentz in that it adopts a new boundary condition on Maxwell's equations. It is assumed that the homogeneous boundary condition involves random classical electromagnetic radiation with a Lorentz-invariant spectrum, classical electromagnetic zero-point radiation. The implications of random electrodynamics for atomic structure, atomic spectra, and particle-interference effects are discussed on an order-of-magnitude or heuristic level. Some detailed mathematical connections and some merely heuristic connections are noted between random electrodynamics and quantum theory. (U.S.)

  7. Derivation of the blackbody radiation spectrum from the equivalence principle in classical physics with classical electromagnetic zero-point radiation

    International Nuclear Information System (INIS)

    Boyer, T.H.

    1984-01-01

    A derivation of Planck's spectrum including zero-point radiation is given within classical physics from recent results involving the thermal effects of acceleration through classical electromagnetic zero-point radiation. A harmonic electric-dipole oscillator undergoing a uniform acceleration a through classical electromagnetic zero-point radiation responds as would the same oscillator in an inertial frame when not in zero-point radiation but in a different spectrum of random classical radiation. Since the equivalence principle tells us that the oscillator supported in a gravitational field g = -a will respond in the same way, we see that in a gravitational field we can construct a perpetual-motion machine based on this different spectrum unless the different spectrum corresponds to that of thermal equilibrium at a finite temperature. Therefore, assuming the absence of perpetual-motion machines of the first kind in a gravitational field, we conclude that the response of an oscillator accelerating through classical zero-point radiation must be that of a thermal system. This then determines the blackbody radiation spectrum in an inertial frame, which turns out to be exactly Planck's spectrum including zero-point radiation.
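
    In standard notation, the spectrum the argument arrives at — Planck's spectrum including zero-point radiation — assigns each normal mode of frequency ω the mean energy

```latex
U(\omega,T) \;=\; \frac{\hbar\omega}{2} \;+\; \frac{\hbar\omega}{e^{\hbar\omega/k_{B}T}-1}
\;=\; \frac{\hbar\omega}{2}\,\coth\!\left(\frac{\hbar\omega}{2k_{B}T}\right),
```

    where the first term is the temperature-independent zero-point contribution that survives as T → 0 and the second is the ordinary Planck thermal term.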

  8. Experimental multiplexing of quantum key distribution with classical optical communication

    International Nuclear Information System (INIS)

    Wang, Liu-Jun; Chen, Luo-Kan; Ju, Lei; Xu, Mu-Lan; Zhao, Yong; Chen, Kai; Chen, Zeng-Bing; Chen, Teng-Yun; Pan, Jian-Wei

    2015-01-01

    We demonstrate the realization of quantum key distribution (QKD) combined with classical optical communication and synchronous signals within a single optical fiber. In the experiment, the classical communication sources use Fabry-Pérot (FP) lasers, which are implemented extensively in optical access networks. To perform QKD, multistage band-stop filtering techniques are developed, and a wavelength-division multiplexing scheme is designed for the multi-longitudinal-mode FP lasers. We have managed to maintain sufficient isolation among the quantum channel, the synchronous channel and the classical channels to guarantee good QKD performance. Finally, the quantum bit error rate remains below a level of 2% across the entire practical application range. The proposed multiplexing scheme can ensure low classical light loss, and enables QKD over fiber lengths of up to 45 km simultaneously when the fibers are populated with bidirectional FP laser communications. Our demonstration paves the way for the application of QKD to current optical access networks, where FP lasers are widely used by the end users.

  9. Inefficiency and classical communication bounds for conversion between partially entangled pure bipartite states

    International Nuclear Information System (INIS)

    Fortescue, Ben; Lo, H.-K.

    2005-01-01

    We derive lower limits on the inefficiency and classical communication costs of dilution between two-term bipartite pure states that are partially entangled. We first calculate explicit relations between the allowable error and the classical communication cost of entanglement dilution using a previously described protocol, then consider a two-stage dilution from singlets with this protocol followed by some unknown protocol for conversion between partially entangled states. Applying overall lower bounds on classical communication and inefficiency to this two-stage protocol, we derive bounds for the unknown protocol. In addition, we derive analogous (but looser) bounds for general pure states.

  10. Zero Thermal Noise in Resistors at Zero Temperature

    Science.gov (United States)

    Kish, Laszlo B.; Niklasson, Gunnar A.; Granqvist, Claes-Göran

    2016-06-01

    The bandwidth of transistors in logic devices approaches the quantum limit, where Johnson noise and associated error rates are supposed to be strongly enhanced. However, the related theory — asserting a temperature-independent quantum zero-point (ZP) contribution to Johnson noise, which dominates the quantum regime — is controversial and resolution of the controversy is essential to determine the real error rate and fundamental energy dissipation limits of logic gates in the quantum limit. The Callen-Welton formula (fluctuation-dissipation theorem) of voltage and current noise for a resistance is the sum of Nyquist’s classical Johnson noise equation and a quantum ZP term with a power density spectrum proportional to frequency and independent of temperature. The classical Johnson-Nyquist formula vanishes at the approach of zero temperature, but the quantum ZP term still predicts non-zero noise voltage and current. Here, we show that this noise cannot be reconciled with the Fermi-Dirac distribution, which defines the thermodynamics of electrons according to quantum-statistical physics. Consequently, Johnson noise must be nil at zero temperature, and non-zero noise found for certain experimental arrangements may be a measurement artifact, such as the one mentioned in Kleen’s uncertainty relation argument.
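
    The competing formulas are easy to compare numerically. A small sketch (constants are CODATA values; function names are ours) evaluates the Nyquist, Callen-Welton, and zero-point voltage-noise densities, showing the classical limit at hf << kT and the disputed zero-point-dominated limit at hf >> kT:

```python
import math

H = 6.62607015e-34   # Planck constant, J*s
KB = 1.380649e-23    # Boltzmann constant, J/K

def s_nyquist(R, T):
    """Classical Johnson-Nyquist voltage noise density, V^2/Hz."""
    return 4.0 * KB * T * R

def s_zero_point(R, f):
    """The disputed temperature-independent zero-point term, V^2/Hz."""
    return 2.0 * R * H * f

def s_callen_welton(R, T, f):
    """Callen-Welton (fluctuation-dissipation) density: the Planck
    term plus the zero-point term, 4*R*h*f*(1/2 + 1/(exp(hf/kT)-1))."""
    x = H * f / (KB * T)
    return 4.0 * R * H * f * (0.5 + 1.0 / math.expm1(x))

R = 1.0e3  # ohms
# Classical regime (hf << kT): Callen-Welton reduces to Nyquist.
low_ratio = s_callen_welton(R, 300.0, 1.0e6) / s_nyquist(R, 300.0)
# Quantum regime (hf >> kT): the zero-point term dominates and,
# contrary to the Nyquist formula, does not vanish as T -> 0.
high_ratio = s_callen_welton(R, 1.0, 1.0e12) / s_zero_point(R, 1.0e12)
```

    Both ratios are essentially unity in their respective regimes; the paper's point is that the second, non-vanishing limit conflicts with Fermi-Dirac statistics for the conduction electrons.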

  11. Understanding zero-point energy in the context of classical electromagnetism

    International Nuclear Information System (INIS)

    Boyer, Timothy H

    2016-01-01

    Today’s textbooks of electromagnetism give the particular solution to Maxwell’s equations involving the integral over the charge and current sources at retarded times. However, the texts fail to emphasise that the choice of the incoming-wave boundary conditions corresponding to solutions of the homogeneous Maxwell equations must be made based upon experiment. Here we discuss the role of these incoming-wave boundary conditions for an experimenter with a hypothetical charged harmonic oscillator as his equipment. We describe the observations of the experimenter when located near a radio station or immersed in thermal radiation at temperature T. The classical physicists at the end of the 19th century chose the incoming-wave boundary conditions for the homogeneous Maxwell equations based upon the experimental observations of Lummer and Pringsheim, which measured only the thermal radiation which exceeded the random radiation surrounding their measuring equipment; the physicists concluded that they could take the homogeneous solutions to vanish at zero temperature. Today at the beginning of the 21st century, classical physicists must choose the incoming-wave boundary conditions for the homogeneous Maxwell equations to correspond to the full radiation spectrum revealed by the recent Casimir force measurements which detect all the radiation surrounding conducting parallel plates, including the radiation absorbed and emitted by the plates themselves. The random classical radiation spectrum revealed by the Casimir force measurements includes electromagnetic zero-point radiation, which is missing from the spectrum measured by Lummer and Pringsheim, and which cannot be eliminated by going to zero temperature. This zero-point radiation will lead to zero-point energy for all systems which have electromagnetic interactions. Thus the choice of the incoming-wave boundary conditions on the homogeneous Maxwell equations is intimately related to the ideas of zero-point energy and

  12. Zero-point energy constraint in quasi-classical trajectory calculations.

    Science.gov (United States)

    Xie, Zhen; Bowman, Joel M

    2006-04-27

    A method to constrain the zero-point energy in quasi-classical trajectory calculations is proposed and applied to the Henon-Heiles system. The main idea of this method is to smoothly eliminate the coupling terms in the Hamiltonian as the energy of any mode falls below a specified value.

  13. Ciliates learn to diagnose and correct classical error syndromes in mating strategies.

    Science.gov (United States)

    Clark, Kevin B

    2013-01-01

    Preconjugal ciliates learn classical repetition error-correction codes to safeguard mating messages and replies from corruption by "rivals" and local ambient noise. Because individual cells behave as memory channels with Szilárd engine attributes, these coding schemes also might be used to limit, diagnose, and correct mating-signal errors due to noisy intracellular information processing. The present study, therefore, assessed whether heterotrich ciliates effect fault-tolerant signal planning and execution by modifying engine performance, and consequently entropy content of codes, during mock cell-cell communication. Socially meaningful serial vibrations emitted from an ambiguous artificial source initiated ciliate behavioral signaling performances known to advertise mating fitness with varying courtship strategies. Microbes, employing calcium-dependent Hebbian-like decision making, learned to diagnose then correct error syndromes by recursively matching Boltzmann entropies between signal planning and execution stages via "power" or "refrigeration" cycles. All eight serial contraction and reversal strategies incurred errors in entropy magnitude by the execution stage of processing. Absolute errors, however, subtended expected threshold values for single bit-flip errors in three-bit replies, indicating coding schemes protected information content throughout signal production. Ciliate preparedness for vibrations selectively and significantly affected the magnitude and valence of Szilárd engine performance during modal and non-modal strategy corrective cycles. But entropy fidelity for all replies mainly improved across learning trials as refinements in engine efficiency. Fidelity neared maximum levels for only modal signals coded in resilient three-bit repetition error-correction sequences. Together, these findings demonstrate microbes can elevate survival/reproductive success by learning to implement classical fault-tolerant information processing in social
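
    The error-correction scheme the abstract invokes is the textbook three-bit repetition code, which tolerates exactly the single bit-flip errors mentioned; a minimal illustration (our own, not the study's analysis code):

```python
def encode(bit):
    """Three-bit repetition code: 0 -> 000, 1 -> 111."""
    return [bit, bit, bit]

def decode(word):
    """Majority vote corrects any single bit flip."""
    return 1 if sum(word) >= 2 else 0

# Flip each of the three positions in turn: every single-error
# syndrome decodes back to the original bit.
all_corrected = all(
    decode([b ^ (1 if j == i else 0) for j, b in enumerate(encode(bit))]) == bit
    for bit in (0, 1)
    for i in range(3)
)
```

    Two simultaneous flips defeat the majority vote, which is why the study's "expected threshold values for single bit-flip errors" are the relevant benchmark.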

  15. The calculation of average error probability in a digital fibre optical communication system

    Science.gov (United States)

    Rugemalira, R. A. M.

    1980-03-01

    This paper deals with the problem of determining the average error probability in a digital fibre optical communication system, in the presence of message-dependent inhomogeneous non-stationary shot noise, additive Gaussian noise and intersymbol interference. A zero-forcing equalization receiver filter is considered. Three techniques for error-rate evaluation are compared: the Chernoff bound and the Gram-Charlier series expansion methods are measured against the characteristic-function technique, which predicts a higher receiver sensitivity.
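
    The gap between bound-based and exact error-rate estimates is easy to see in the simplest, purely Gaussian case (a simplification of the paper's shot-noise setting): the exact error probability is the Gaussian tail Q(γ) while the Chernoff bound gives exp(-γ²/2)/2, which is pessimistic and therefore predicts a lower receiver sensitivity.

```python
import math

def q_exact(x):
    """Gaussian tail probability Q(x), via the complementary error function."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def q_chernoff(x):
    """Chernoff bound on the Gaussian tail: Q(x) <= exp(-x^2/2)/2."""
    return 0.5 * math.exp(-0.5 * x * x)

# At the SNR giving the classic 1e-9 error-rate target (gamma ~ 6),
# the bound overestimates the error probability by roughly a factor 8.
gamma = 6.0
exact = q_exact(gamma)
bound = q_chernoff(gamma)
```
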

  16. Role of memory errors in quantum repeaters

    International Nuclear Information System (INIS)

    Hartmann, L.; Kraus, B.; Briegel, H.-J.; Duer, W.

    2007-01-01

    We investigate the influence of memory errors in the quantum repeater scheme for long-range quantum communication. We show that the communication distance is limited in standard operation mode due to memory errors resulting from unavoidable waiting times for classical signals. We show how to overcome these limitations by (i) improving local memory and (ii) introducing two operational modes of the quantum repeater. In both operational modes, the repeater is run blindly, i.e., without waiting for classical signals to arrive. In the first scheme, entanglement purification protocols based on one-way classical communication are used, allowing communication over arbitrary distances. However, the error thresholds for noise in local control operations are very stringent. The second scheme makes use of entanglement purification protocols with two-way classical communication and inherits the favorable error thresholds of the repeater run in standard mode. One can increase the possible communication distance by an order of magnitude with reasonable overhead in physical resources. We outline the architecture of a quantum repeater that can possibly ensure intercontinental quantum communication.

  17. Classical noise, quantum noise and secure communication

    International Nuclear Information System (INIS)

    Tannous, C; Langlois, J

    2016-01-01

    Secure communication based on message encryption might be performed by combining the message with controlled noise (called pseudo-noise) as performed in spread-spectrum communication used presently in Wi-Fi and smartphone telecommunication systems. Quantum communication based on entanglement is another route for securing communications as demonstrated by several important experiments described in this work. The central role played by the photon in unifying the description of classical and quantum noise as major ingredients of secure communication systems is highlighted and described on the basis of the classical and quantum fluctuation dissipation theorems. (review)

  18. Quantum Communication Attacks on Classical Cryptographic Protocols

    DEFF Research Database (Denmark)

    Damgård, Ivan Bjerre

    In the literature on cryptographic protocols, it has been studied several times what happens if a classical protocol is attacked by a quantum adversary. Usually, this is taken to mean that the adversary runs a quantum algorithm, but communicates classically with the honest players. In several cases, one can show that the protocol remains secure even under such an attack. However, there are also cases where the honest players are quantum as well, even if the protocol uses classical communication. For instance, this is the case when classical multiparty computation is used as a “subroutine” in quantum multiparty computation. Furthermore, in the future, players in a protocol may employ quantum computing simply to improve efficiency of their local computation, even if the communication is supposed to be classical. In such cases, it no longer seems clear that a quantum adversary must be limited ...

  20. Information-preserving structures: A general framework for quantum zero-error information

    International Nuclear Information System (INIS)

    Blume-Kohout, Robin; Ng, Hui Khoon; Poulin, David; Viola, Lorenza

    2010-01-01

    Quantum systems carry information. Quantum theory supports at least two distinct kinds of information (classical and quantum), and a variety of different ways to encode and preserve information in physical systems. A system's ability to carry information is constrained and defined by the noise in its dynamics. This paper introduces an operational framework, using information-preserving structures, to classify all the kinds of information that can be perfectly (i.e., with zero error) preserved by quantum dynamics. We prove that every perfectly preserved code has the same structure as a matrix algebra, and that preserved information can always be corrected. We also classify distinct operational criteria for preservation (e.g., 'noiseless', 'unitarily correctable', etc.) and introduce two natural criteria for measurement-stabilized and unconditionally preserved codes. Finally, for several of these operational criteria, we present efficient (polynomial in the state-space dimension) algorithms to find all of a channel's information-preserving structures.

  1. Quantum and classical vacuum forces at zero and finite temperature

    International Nuclear Information System (INIS)

    Niekerken, Ole

    2009-06-01

    In this diploma thesis the Casimir-Polder force at zero temperature and at finite temperatures is calculated by using a well-defined quantum field theory (formulated in position space) and the method of image charges. For the calculations at finite temperature, KMS states are used; the temperature so defined describes the temperature of the electromagnetic background. A one-oscillator model for an inhomogeneous, dispersive, absorbing dielectric material is introduced and canonically quantized to calculate the Casimir-Polder force at a dielectric interface at finite temperature. The model fulfils causal commutation relations, and its dielectric function fulfils the Kramers-Kronig relations. We then use the same methods to calculate the van der Waals force between two neutral atoms at zero temperature and at finite temperatures. It is shown that the high-temperature behaviour of the Casimir-Polder force and the van der Waals force is independent of ℎ. This means that they have to be understood classically, which is then shown in an algebraic statistical theory using classical KMS states. (orig.)

  2. Estimating pole/zero errors in GSN-IRIS/USGS network calibration metadata

    Science.gov (United States)

    Ringler, A.T.; Hutt, C.R.; Aster, R.; Bolton, H.; Gee, L.S.; Storm, T.

    2012-01-01

    Mapping the digital record of a seismograph into true ground motion requires the correction of the data by some description of the instrument's response. For the Global Seismographic Network (Butler et al., 2004), as well as many other networks, this instrument response is represented as a Laplace domain pole–zero model and published in the Standard for the Exchange of Earthquake Data (SEED) format. This Laplace representation assumes that the seismometer behaves as a linear system, with any abrupt changes described adequately via multiple time-invariant epochs. The SEED format allows for published instrument response errors as well, but these typically have not been estimated or provided to users. We present an iterative three-step method to estimate the instrument response parameters (poles and zeros) and their associated errors using random calibration signals. First, we solve a coarse nonlinear inverse problem using a least-squares grid search to yield a first approximation to the solution. This approach reduces the likelihood of poorly estimated parameters (a local-minimum solution) caused by noise in the calibration records and enhances algorithm convergence. Second, we iteratively solve a nonlinear parameter estimation problem to obtain the least-squares best-fit Laplace pole–zero–gain model. Third, by applying the central limit theorem, we estimate the errors in this pole–zero model by solving the inverse problem at each frequency in a two-thirds octave band centered at each best-fit pole–zero frequency. This procedure yields error estimates of the 99% confidence interval. We demonstrate the method by applying it to a number of recent Incorporated Research Institutions in Seismology/United States Geological Survey (IRIS/USGS) network calibrations (network code IU).
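
    The first two steps of the procedure can be sketched for the simplest possible case: fitting a single real pole to a synthetic amplitude response. All names and parameter values here are hypothetical; the actual method fits full Laplace pole-zero-gain models to random calibration signals and adds the third, error-estimation step.

```python
import math
import random

random.seed(0)

TRUE_FC = 12.0  # hypothetical corner frequency of a one-pole instrument, Hz

def model(fc, f):
    """Amplitude response of a single real pole at fc."""
    return 1.0 / math.sqrt(1.0 + (f / fc) ** 2)

# Synthetic "calibration" data: the true response plus measurement noise.
freqs = [0.5 * k for k in range(1, 100)]
data = [model(TRUE_FC, f) + random.gauss(0.0, 0.005) for f in freqs]

def misfit(fc):
    return sum((d - model(fc, f)) ** 2 for f, d in zip(freqs, data))

# Step 1: coarse grid search, reducing the risk of a local-minimum
# solution caused by noise in the calibration records.
fc_coarse = min((0.5 * k for k in range(1, 200)), key=misfit)

# Step 2: iterative refinement of the bracketed least-squares minimum
# (a simple ternary search stands in for the nonlinear solver).
lo, hi = fc_coarse - 0.5, fc_coarse + 0.5
for _ in range(60):
    m1 = lo + (hi - lo) / 3.0
    m2 = hi - (hi - lo) / 3.0
    if misfit(m1) < misfit(m2):
        hi = m2
    else:
        lo = m1
fc_hat = 0.5 * (lo + hi)
```
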

  3. Key rate of quantum key distribution with hashed two-way classical communication

    International Nuclear Information System (INIS)

    Watanabe, Shun; Matsumoto, Ryutaroh; Uyematsu, Tomohiko; Kawano, Yasuhito

    2007-01-01

    We propose an information reconciliation protocol that uses two-way classical communication. The key rates of quantum key distribution (QKD) protocols that use our protocol are higher than those using previously known protocols for a wide range of error rates for the Bennett-Brassard 1984 and six-state protocols. We also clarify the relation between the proposed and known QKD protocols, and the relation between the proposed protocol and entanglement distillation protocols.

  4. Classical radiation zeros in gauge-theory amplitudes

    International Nuclear Information System (INIS)

    Brown, R.W.; Kowalski, K.L.; Brodsky, S.J.

    1983-01-01

    The electromagnetic radiation from classical convection currents in relativistic n-particle collisions is shown to vanish in certain kinematical zones, due to complete destructive interference of the classical radiation patterns of the incoming and outgoing charged lines. We prove that quantum tree photon amplitudes vanish in the same zones, at arbitrary photon momenta including spin, seagull, and internal-line currents, provided only that the electromagnetic couplings and any other derivative couplings are as prescribed by renormalizable local gauge theory (spins ≤ 1). The amplitude zeros found previously in processes such as q q̄ → Wγ are thus explained, and examples with more particles are discussed. Conditions for the null zones to lie in physical regions are established. A new radiation representation, with the zeros manifest and of practical utility independently of whether the null zones are in physical regions, is derived for the complete single-photon amplitude in tree approximation, using a gauge-invariant vertex expansion stemming from new internal-radiation decomposition identities. The question of whether amplitudes with closed loops can vanish in null zones is addressed. The null zone and these relations are discussed in terms of the Bargmann-Michel-Telegdi equation. The extension from photons to general massless gauge bosons is carried out.

  5. Naming game with learning errors in communications

    OpenAIRE

    Lou, Yang; Chen, Guanrong

    2014-01-01

    The naming game simulates the process of naming an object by a population of agents organized in a certain communication network topology. By pairwise iterative interactions, the population asymptotically reaches a consensus state. In this paper, we study the naming game with communication errors during pairwise conversations, where errors are represented by error rates in a uniform probability distribution. First, a model of the naming game with learning errors in communications (NGLE) is proposed ...
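
    The baseline dynamics are simple to reproduce. Below is a minimal naming game on a complete graph with a learning-error rate bolted on (our own sketch, not the paper's NGLE model): a mis-heard name is treated as a brand-new name entering the hearer's vocabulary.

```python
import random

random.seed(42)

def naming_game(n_agents, p_err, max_rounds=20000):
    """Minimal naming game on a complete graph. With probability p_err
    the hearer mis-hears the spoken name (a learning error)."""
    vocab = [set() for _ in range(n_agents)]
    next_name = 0
    for t in range(max_rounds):
        s, h = random.sample(range(n_agents), 2)   # speaker, hearer
        if not vocab[s]:                           # invent a name if needed
            vocab[s].add(next_name)
            next_name += 1
        name = random.choice(sorted(vocab[s]))
        if random.random() < p_err:                # learning error: corrupted name
            heard = next_name
            next_name += 1
        else:
            heard = name
        if heard in vocab[h]:                      # success: both collapse
            vocab[s] = {heard}
            vocab[h] = {heard}
        else:                                      # failure: hearer learns it
            vocab[h].add(heard)
        if all(len(v) == 1 and v == vocab[0] for v in vocab):
            return t + 1                           # rounds until consensus
    return None

# With no errors a small population reaches global consensus quickly.
rounds = naming_game(20, 0.0)
```
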

  6. Classical correlations, Bell inequalities, and communication complexity

    Energy Technology Data Exchange (ETDEWEB)

    Wilms, Johannes; Alber, Gernot [Institut fuer Angewandte Physik, Technische Universitaet Darmstadt, D-64289 Darmstadt (Germany); Percival, Ian C. [Department of Physics, Univ. of London (United Kingdom)

    2007-07-01

    A computer program is presented which is capable of exploring generalizations of Bell-type inequalities for arbitrary numbers of classical inputs and outputs. Thereby, polytopes can be described which represent classical local realistic theories, classical theories without signaling, or classical theories with explicit signaling. These latter polytopes may also be of interest for exploring basic problems of communication complexity. As a first application the influence of non-perfect detectors is discussed in simple Bell experiments.
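
    The vertices of the local-realistic polytope are the deterministic strategies, so for the smallest (CHSH) scenario the classical bound can be found by plain enumeration; a minimal sketch of that idea (ours, far simpler than the general program described above):

```python
from itertools import product

# CHSH scenario: two parties, two inputs each, outputs +/-1.
# A deterministic local strategy fixes an output per input:
# a = (a0, a1) for Alice, b = (b0, b1) for Bob.
def chsh(a, b):
    """S = E(0,0) + E(0,1) + E(1,0) - E(1,1) for a deterministic strategy."""
    return a[0] * b[0] + a[0] * b[1] + a[1] * b[0] - a[1] * b[1]

values = [chsh(a, b)
          for a in product([-1, 1], repeat=2)
          for b in product([-1, 1], repeat=2)]
classical_bound = max(values)   # vertices of the local polytope give |S| <= 2
```

    Every one of the 16 vertices gives S = ±2, recovering the Bell-CHSH bound |S| ≤ 2 that quantum correlations violate up to Tsirelson's value 2√2.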

  7. Zero-Forcing and Minimum Mean-Square Error Multiuser Detection in Generalized Multicarrier DS-CDMA Systems for Cognitive Radio

    Directory of Open Access Journals (Sweden)

    Lie-Liang Yang

    2008-01-01

    In wireless communications, multicarrier direct-sequence code-division multiple access (MC DS-CDMA) constitutes one of the most flexible multiple access schemes. MC DS-CDMA employs a high number of degrees of freedom, which are beneficial to design and reconfiguration for communications in dynamic communications environments, such as in cognitive radios. In this contribution, we consider multiuser detection (MUD) in MC DS-CDMA, which motivates low complexity, high flexibility, and robustness, so that the MUD schemes are suitable for deployment in dynamic communications environments. Specifically, a range of low-complexity MUDs are derived based on the zero-forcing (ZF), minimum mean-square error (MMSE), and interference cancellation (IC) principles. The bit-error rate (BER) performance of the MC DS-CDMA aided by the proposed MUDs is investigated by simulation approaches. Our study shows that, in addition to the advantages provided by a general ZF, MMSE, or IC-assisted MUD, the proposed MUD schemes can be implemented using modular structures, where most modules are independent of each other. Due to the independent modular structure, in the proposed MUDs one module may be reconfigured without yielding impact on the others. Therefore, MC DS-CDMA, in conjunction with the proposed MUDs, constitutes one of the promising multiple access schemes for communications in dynamic communications environments such as cognitive radios.
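
    The ZF and MMSE detection rules differ only in a regularization term: x̂ = (HᵀH + λI)⁻¹Hᵀy with λ = 0 for ZF and λ = σ² for MMSE. A deliberately tiny two-user, real-valued sketch (the channel matrix and noise level are hypothetical; practical systems use complex spreading matrices and many users):

```python
import random

random.seed(3)

# Hypothetical 2-user channel (rows: observations, cols: users).
H = [[1.0, 0.7],
     [0.4, 1.0]]
SIGMA = 0.05  # noise standard deviation

def mat2_inv(M):
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def detect(y, reg):
    """x_hat = (H^T H + reg*I)^{-1} H^T y; reg=0 is ZF, reg=sigma^2 is MMSE."""
    HtH = [[sum(H[k][i] * H[k][j] for k in range(2)) for j in range(2)]
           for i in range(2)]
    HtH[0][0] += reg
    HtH[1][1] += reg
    Hty = [sum(H[k][i] * y[k] for k in range(2)) for i in range(2)]
    inv = mat2_inv(HtH)
    soft = [inv[i][0] * Hty[0] + inv[i][1] * Hty[1] for i in range(2)]
    return [1 if v > 0 else -1 for v in soft]   # hard BPSK decisions

bits = [random.choice([-1, 1]), random.choice([-1, 1])]
y = [sum(H[k][j] * bits[j] for j in range(2)) + random.gauss(0.0, SIGMA)
     for k in range(2)]
zf_hat = detect(y, 0.0)
mmse_hat = detect(y, SIGMA ** 2)
```

    The modular structure the abstract emphasizes is visible even here: the matched filter Hᵀy, the regularization, and the inversion are independent stages, so one can be swapped without touching the others.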

  8. Zero-Forcing and Minimum Mean-Square Error Multiuser Detection in Generalized Multicarrier DS-CDMA Systems for Cognitive Radio

    Directory of Open Access Journals (Sweden)

    Wang Li-Chun

    2008-01-01

    Full Text Available Abstract In wireless communications, multicarrier direct-sequence code-division multiple access (MC DS-CDMA) constitutes one of the most flexible multiple access schemes. MC DS-CDMA offers a large number of degrees of freedom, which benefit design and reconfiguration for communication in dynamic environments, such as in cognitive radios. In this contribution, we consider multiuser detection (MUD) in MC DS-CDMA, with emphasis on low complexity, high flexibility, and robustness, so that the MUD schemes are suitable for deployment in dynamic communication environments. Specifically, a range of low-complexity MUDs are derived based on the zero-forcing (ZF), minimum mean-square error (MMSE), and interference cancellation (IC) principles. The bit-error rate (BER) performance of the MC DS-CDMA aided by the proposed MUDs is investigated by simulation. Our study shows that, in addition to the advantages provided by a general ZF-, MMSE-, or IC-assisted MUD, the proposed MUD schemes can be implemented using modular structures, where most modules are independent of each other. Owing to this independent modular structure, one module of the proposed MUDs may be reconfigured without affecting the others. Therefore, MC DS-CDMA, in conjunction with the proposed MUDs, constitutes one of the promising multiple access schemes for communication in dynamic environments such as cognitive radios.

  9. A simple model for correcting the zero point energy problem in classical trajectory simulations of polyatomic molecules

    International Nuclear Information System (INIS)

    Miller, W.H.; Hase, W.L.; Darling, C.L.

    1989-01-01

    A simple model is proposed for correcting problems with zero point energy in classical trajectory simulations of dynamical processes in polyatomic molecules. The "problems" referred to are that classical mechanics allows the vibrational energy in a mode to decrease below its quantum zero point value, and since the total energy is conserved classically, this can allow too much energy to pool in other modes. The proposed model introduces hard-sphere-like terms in action-angle variables that prevent the vibrational energy in any mode from falling below its zero point value. The resulting algorithm is quite simple in terms of the Cartesian normal modes of the system: if the energy in some mode k decreases below its zero point value at time t, then at this time the momentum P_k for that mode has its sign changed, and the trajectory continues. This is essentially a time reversal for mode k (only!), and it conserves the total energy of the system. One can think of the model as supplying impulsive "quantum kicks" to a mode whose energy attempts to fall below its zero point value, a kind of "Planck demon" analogous to a Brownian-like random force. The model is illustrated by application to a model of CH overtone relaxation.
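
    The momentum-reversal rule described above is easy to state in code. Below is a minimal sketch for a toy two-mode system with a weak cubic coupling; all parameters (frequencies, coupling, time step) are hypothetical, with hbar = mass = 1, not the CH-overtone model of the paper:

```python
import numpy as np

omega = np.array([1.0, 0.5])   # mode frequencies (hypothetical)
zpe = 0.5 * omega              # quantum zero-point energy of each mode
lam = 0.05                     # weak cubic coupling strength

def force(q):
    # V = 0.5*sum(omega^2 q^2) + lam * q0^2 * q1
    f = -omega**2 * q
    f[0] -= 2.0 * lam * q[0] * q[1]
    f[1] -= lam * q[0] ** 2
    return f

def mode_energy(q, p):
    return 0.5 * p**2 + 0.5 * omega**2 * q**2

q = np.array([1.5, 0.0])
p = np.array([0.0, 0.75])
dt, kicks = 0.01, 0

for _ in range(20000):         # velocity-Verlet propagation
    p += 0.5 * dt * force(q)
    q += dt * p
    p += 0.5 * dt * force(q)
    low = mode_energy(q, p) < zpe
    p[low] *= -1.0             # the "quantum kick": time-reverse that mode
    kicks += int(low.sum())

print("quantum kicks applied:", kicks)
```

    Note that the sign flip leaves the kinetic energy unchanged, so total energy is conserved exactly; the mode simply retraces its path instead of draining further below its zero point value.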

  10. Effect of Slice Error of Glass on Zero Offset of Capacitive Accelerometer

    Science.gov (United States)

    Hao, R.; Yu, H. J.; Zhou, W.; Peng, B.; Guo, J.

    2018-03-01

    The packaging process of a capacitive accelerometer was studied. Silicon-glass bonding was adopted to join the sensor chip and the glass, and the chip-glass stack was adhered to a ceramic substrate. The resulting three-layer structure curved due to thermal mismatch, and the slice error of the glass led to an asymmetrical curvature of the sensor chip. The sensitive mass of the accelerometer therefore deviated along the sensitive direction, which caused a zero offset drift. To quantify the influence of the slice error of the glass, simulations were performed; the results showed a zero output drift of 12.3×10⁻³ m/s² when the deviation was 40 μm.

  11. Identifying Error in AUV Communication

    National Research Council Canada - National Science Library

    Coleman, Joseph; Merrill, Kaylani; O'Rourke, Michael; Rajala, Andrew G; Edwards, Dean B

    2006-01-01

    Mine Countermeasures (MCM) involving Autonomous Underwater Vehicles (AUVs) are especially susceptible to error, given the constraints on underwater acoustic communication and the inconstancy of the underwater communication channel...

  12. Integration of quantum key distribution and private classical communication through continuous variable

    Science.gov (United States)

    Wang, Tianyi; Gong, Feng; Lu, Anjiang; Zhang, Damin; Zhang, Zhengping

    2017-12-01

    In this paper, we propose a scheme that integrates quantum key distribution and private classical communication via continuous variables. The integrated scheme employs both quadratures of a weak coherent state, with encrypted bits encoded on the signs and Gaussian random numbers encoded on the values of the quadratures. The integration enables quantum and classical data to share the same physical and logical channel. Simulation results based on practical system parameters demonstrate that both classical communication and quantum communication can be implemented over distances of tens of kilometers, thus providing a potential solution for simultaneous transmission of quantum communication and classical communication.

  13. Statistical methods for biodosimetry in the presence of both Berkson and classical measurement error

    Science.gov (United States)

    Miller, Austin

    In radiation epidemiology, the true dose received by those exposed cannot be assessed directly. Physical dosimetry uses a deterministic function of the source term, distance and shielding to estimate dose. For the atomic bomb survivors, the physical dosimetry system is well established. The classical measurement errors plaguing the location and shielding inputs to the physical dosimetry system are well known. Adjusting for the associated biases requires an estimate for the classical measurement error variance, for which no data-driven estimate exists. In this case, an instrumental variable solution is the most viable option to overcome the classical measurement error indeterminacy. Biological indicators of dose may serve as instrumental variables. Specification of the biodosimeter dose-response model requires identification of the radiosensitivity variables, for which we develop statistical definitions and variables. More recently, researchers have recognized Berkson error in the dose estimates, introduced by averaging assumptions for many components in the physical dosimetry system. We show that Berkson error induces a bias in the instrumental variable estimate of the dose-response coefficient, and then address the estimation problem. This model is specified by developing an instrumental variable mixed measurement error likelihood function, which is then maximized using a Monte Carlo EM Algorithm. These methods produce dose estimates that incorporate information from both physical and biological indicators of dose, as well as the first instrumental variable based data-driven estimate for the classical measurement error variance.
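
    The core idea, that an instrumental variable can undo the attenuation caused by classical measurement error, can be sketched with simulated data. All parameters below are hypothetical, and the real analysis uses a Monte Carlo EM algorithm on a mixed-error likelihood rather than this textbook IV ratio:

```python
import numpy as np

rng = np.random.default_rng(3)
n, beta = 20000, 2.0

x_true = rng.standard_normal(n)               # true dose (never observed)
x_obs = x_true + rng.standard_normal(n)       # physical dosimetry: classical error
w = x_true + 0.5 * rng.standard_normal(n)     # biodosimeter reading: the instrument
y = beta * x_true + rng.standard_normal(n)    # health response

C = np.cov(x_obs, y)
b_ols = C[0, 1] / C[0, 0]                     # naive slope: attenuated toward zero
b_iv = np.cov(w, y)[0, 1] / np.cov(w, x_obs)[0, 1]  # IV slope: consistent for beta

print(f"naive OLS: {b_ols:.2f}  IV: {b_iv:.2f}  truth: {beta}")
```

    With equal true-dose and error variances the naive slope converges to beta/2, while the IV ratio cancels the error variance because the biodosimeter noise is independent of the dosimetry error.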

  14. Minimal classical communication and measurement complexity for ...

    Indian Academy of Sciences (India)

    Minimal classical communication and measurement complexity for quantum ... Entanglement; teleportation; secret sharing; information splitting. ... Ahmedabad 380 009, India; Birla Institute of Technology and Science, Pilani 333 031, India ...

  15. Johnson(-like)-Noise-Kirchhoff-loop based secure classical communicator characteristics, for ranges of two to two thousand kilometers, via model-line

    International Nuclear Information System (INIS)

    Mingesz, Robert; Gingl, Zoltan; Kish, Laszlo B.

    2008-01-01

    A pair of Kirchhoff-loop-Johnson(-like)-noise communicators, able to work over variable ranges, was designed and built. Tests were carried out on a model line; performance characteristics were obtained for ranges beyond those of any known direct quantum communication channel, and they indicate unrivalled signal fidelity and security of the exchanged raw key bits. This simple device achieves single-wire secure key generation and sharing rates of 0.1, 1, 10, and 100 bit/second for corresponding copper wire diameters/ranges of 21 mm/2000 km, 7 mm/200 km, 2.3 mm/20 km, and 0.7 mm/2 km, respectively, and it performs with a 0.02% raw-bit error rate (99.98% fidelity). The raw-bit security of this practical system significantly outperforms raw-bit quantum security. Current-injection breaking tests show zero bit eavesdropping ability without triggering the alarm signal; therefore, no multiple measurements are needed to build error statistics to detect eavesdropping, as in quantum communication. Wire-resistance-based breaking tests of the Bergou-Scheuer-Yariv type give an upper limit on the eavesdropped raw-bit ratio of 0.19%, and this limit is inversely proportional to the sixth power of the cable diameter. Hao's breaking method yields zero (below measurement resolution) eavesdropping information.

  16. Orbital classical solutions, non-perturbative phenomena and singularity at the zero coupling constant point

    International Nuclear Information System (INIS)

    Vourdas, A.

    1982-01-01

    We try to extend previous arguments on orbital classical solutions in non-relativistic quantum mechanics to the (1/4)λ|φ|⁴ complex relativistic field theory. The single-valuedness of the Green function in the semiclassical (ℏ → 0) limit leads to a Bohr-Sommerfeld quantization. A path integral formalism for the Green functions, analogous to that in non-relativistic quantum mechanics, is employed, and a semiclassical approach using our classical solutions indicates non-perturbative effects. They reflect an e^(1/λ) singularity at the zero coupling constant point. (orig.)

  17. Tensor Norms and the Classical Communication Complexity of Nonlocal Quantum Measurement

    OpenAIRE

    Shi, Yaoyun; Zhu, Yufan

    2005-01-01

    We initiate the study of quantifying nonlocalness of a bipartite measurement by the minimum amount of classical communication required to simulate the measurement. We derive general upper bounds, which are expressed in terms of certain tensor norms of the measurement operator. As applications, we show that (a) If the amount of communication is constant, quantum and classical communication protocols with unlimited amount of shared entanglement or shared randomness compute the same set of funct...

  18. PERFORMANCE OF THE ZERO FORCING PRECODING MIMO BROADCAST SYSTEMS WITH CHANNEL ESTIMATION ERRORS

    Institute of Scientific and Technical Information of China (English)

    Wang Jing; Liu Zhanli; Wang Yan; You Xiaohu

    2007-01-01

    In this paper, the effect of channel estimation errors on Zero-Forcing (ZF) precoding Multiple Input Multiple Output Broadcast (MIMO BC) systems is studied. Based on two kinds of Gaussian estimation error models, the performance analysis is conducted under different power allocation strategies. Analysis and simulation show that if the covariance of the channel estimation errors is independent of the received Signal-to-Noise Ratio (SNR), imperfect channel knowledge severely deteriorates the sum capacity and the Bit Error Rate (BER) performance. However, with orthogonal training and Minimum Mean Square Error (MMSE) channel estimation, the sum capacity and BER performance are consistent with those under perfect Channel State Information (CSI), with only a performance degradation.
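
    A minimal Monte Carlo sketch of ZF precoding under a Gaussian CSI error model is given below. The dimensions, error variance, and noise level are hypothetical, and the power allocation is a simple total-power normalization rather than the strategies analysed in the paper:

```python
import numpy as np

rng = np.random.default_rng(2)
M = K = 4                     # transmit antennas, single-antenna users
n_trials, sigma_e2, sigma_n2 = 500, 0.01, 0.01
errors = 0

for _ in range(n_trials):
    # Rayleigh channel and Gaussian estimation error (hypothetical model)
    H = (rng.standard_normal((K, M)) + 1j * rng.standard_normal((K, M))) / np.sqrt(2)
    E = np.sqrt(sigma_e2 / 2) * (rng.standard_normal((K, M))
                                 + 1j * rng.standard_normal((K, M)))
    W = np.linalg.pinv(H + E)         # ZF precoder built from the imperfect CSI
    W /= np.linalg.norm(W)            # simple total-power normalization
    s = rng.choice([-1.0, 1.0], size=K)        # BPSK data for the K users
    noise = np.sqrt(sigma_n2 / 2) * (rng.standard_normal(K)
                                     + 1j * rng.standard_normal(K))
    y = H @ (W @ s) + noise
    errors += int(np.count_nonzero(np.sign(y.real) != s))

print("BER with CSI error:", errors / (n_trials * K))
```

    Because the precoder inverts the estimate rather than the true channel, each user sees residual interference proportional to the estimation error variance, which is the floor the paper's analysis quantifies.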

  19. On superactivation of one-shot quantum zero-error capacity and the related property of quantum measurements

    DEFF Research Database (Denmark)

    Shirokov, M. E.; Shulman, Tatiana

    2014-01-01

    We give a detailed description of a low-dimensional quantum channel (input dimension 4, Choi rank 3) demonstrating the symmetric form of superactivation of one-shot quantum zero-error capacity. This property means appearance of a noiseless (perfectly reversible) subchannel in the tensor square...... of a channel having no noiseless subchannels. Then we describe a quantum channel with an arbitrary given level of symmetric superactivation (including the infinite value). We also show that superactivation of one-shot quantum zero-error capacity of a channel can be reformulated in terms of quantum measurement...

  20. Moderate Deviation Analysis for Classical Communication over Quantum Channels

    Science.gov (United States)

    Chubb, Christopher T.; Tan, Vincent Y. F.; Tomamichel, Marco

    2017-11-01

    We analyse families of codes for classical data transmission over quantum channels that have both a vanishing probability of error and a code rate approaching capacity as the code length increases. To characterise the fundamental tradeoff between decoding error, code rate and code length for such codes we introduce a quantum generalisation of the moderate deviation analysis proposed by Altuğ and Wagner as well as Polyanskiy and Verdú. We derive such a tradeoff for classical-quantum (as well as image-additive) channels in terms of the channel capacity and the channel dispersion, giving further evidence that the latter quantity characterises the necessary backoff from capacity when transmitting finite blocks of classical data. To derive these results we also study asymmetric binary quantum hypothesis testing in the moderate deviations regime. Due to the central importance of the latter task, we expect that our techniques will find further applications in the analysis of other quantum information processing tasks.
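
    In symbols, the regime the abstract describes takes the standard moderate-deviations form: for rates approaching the capacity C at a speed a_n that vanishes more slowly than 1/√n, the optimal error probability ε_n decays subexponentially, governed by the channel dispersion V (a paraphrase of the stated tradeoff, not a verbatim theorem):

```latex
R_n = C - a_n, \qquad a_n \to 0, \qquad n a_n^2 \to \infty
\quad\Longrightarrow\quad
\log \varepsilon_n = -\frac{n a_n^2}{2V}\,\bigl(1 + o(1)\bigr).
```

    This interpolates between the small-deviation (central-limit) regime, where a_n ∝ 1/√n gives constant error, and the large-deviation regime of classical error exponents.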

  1. Analysis of the "naming game" with learning errors in communications.

    Science.gov (United States)

    Lou, Yang; Chen, Guanrong

    2015-07-16

    The naming game simulates the process of naming an objective by a population of agents organized in a certain communication network. Through pair-wise iterative interactions, the population reaches consensus asymptotically. We study the naming game with communication errors during pair-wise conversations, with error rates drawn from a uniform probability distribution. First, a model of the naming game with learning errors in communications (NGLE) is proposed. Then, a strategy for agents to prevent learning errors is suggested. To that end, three typical topologies of communication networks, namely random-graph, small-world and scale-free networks, are employed to investigate the effects of various learning errors. Simulation results on these models show that 1) learning errors slightly affect the convergence speed but distinctly increase the requirement for memory of each agent during lexicon propagation; 2) the maximum number of different words held by the population increases linearly as the error rate increases; 3) without applying any strategy to eliminate learning errors, there is a threshold of learning errors beyond which convergence is impaired. These new findings may help to better understand the role of learning errors in the naming game as well as in human language development from a network science perspective.
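
    The basic dynamics are easy to explore with a toy implementation. The sketch below uses a complete graph and a simplified error model in which the hearer occasionally learns a fresh word instead of the spoken one; it is not the NGLE model's exact error distribution, and all parameters are illustrative:

```python
import random

random.seed(0)

def naming_game(n_agents=50, error_rate=0.05, max_steps=200000):
    """Naming game on a complete graph with a simplified learning-error
    model: with probability error_rate the hearer learns a fresh word
    instead of the word that was actually spoken."""
    lexicons = [set() for _ in range(n_agents)]
    next_word = 0
    for step in range(max_steps):
        speaker, hearer = random.sample(range(n_agents), 2)
        if not lexicons[speaker]:                  # speaker invents a word
            lexicons[speaker] = {next_word}
            next_word += 1
        word = random.choice(tuple(lexicons[speaker]))
        if random.random() < error_rate:           # learning error occurs
            word = next_word
            next_word += 1
        if word in lexicons[hearer]:               # success: both collapse
            lexicons[speaker] = {word}
            lexicons[hearer] = {word}
        else:                                      # failure: hearer learns
            lexicons[hearer].add(word)
        first = lexicons[0]
        if len(first) == 1 and all(lex == first for lex in lexicons):
            return step + 1                        # global consensus
    return None                                    # no consensus reached

print("interactions to consensus:", naming_game())
```

    Raising error_rate in this sketch inflates the agents' lexicons before consensus, mirroring the paper's finding that errors mainly increase the memory requirement rather than blocking convergence (until a threshold is crossed).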

  2. Sharp Threshold Detection Based on Sup-norm Error rates in High-dimensional Models

    DEFF Research Database (Denmark)

    Callot, Laurent; Caner, Mehmet; Kock, Anders Bredahl

    focused almost exclusively on estimation errors in stronger norms. We show that this sup-norm bound can be used to distinguish between zero and non-zero coefficients at a much finer scale than would have been possible using classical oracle inequalities. Thus, our sup-norm bound is tailored to consistent...

  3. Republished error management: Descriptions of verbal communication errors between staff. An analysis of 84 root cause analysis-reports from Danish hospitals

    DEFF Research Database (Denmark)

    Rabøl, Louise Isager; Andersen, Mette Lehmann; Østergaard, Doris

    2011-01-01

    Introduction Poor teamwork and communication between healthcare staff are correlated to patient safety incidents. However, the organisational factors responsible for these issues are unexplored. Root cause analyses (RCA) use human factors thinking to analyse the systems behind severe patient safety...... and characteristics of verbal communication errors such as handover errors and error during teamwork. Results Raters found description of verbal communication errors in 44 reports (52%). These included handover errors (35 (86%)), communication errors between different staff groups (19 (43%)), misunderstandings (13...... (30%)), communication errors between junior and senior staff members (11 (25%)), hesitance in speaking up (10 (23%)) and communication errors during teamwork (8 (18%)). The kappa values were 0.44-0.78. Unproceduralized communication and information exchange via telephone, related to transfer between...

  4. Differences among Job Positions Related to Communication Errors at Construction Sites

    Science.gov (United States)

    Takahashi, Akiko; Ishida, Toshiro

    In a previous study, we classified the communication errors at construction sites as faulty intention and message pattern, inadequate channel pattern, and faulty comprehension pattern. This study seeks to evaluate the degree of risk of communication errors and to investigate differences among people in various job positions in their perception of communication error risk. Questionnaires based on the previous study were administered to construction workers (n=811; 149 administrators, 208 foremen and 454 workers). Administrators evaluated all patterns of communication error risk equally. However, foremen and workers evaluated communication error risk differently in each pattern. The common contributing factors to all patterns were inadequate arrangements before work and inadequate confirmation. Some factors were common among patterns but other factors were particular to a specific pattern. To help prevent future accidents at construction sites, administrators should understand how people in various job positions perceive communication errors and propose human factors measures to prevent such errors.

  5. Quantum entanglement in non-local games, graph parameters and zero-error information theory

    NARCIS (Netherlands)

    Scarpa, G.

    2013-01-01

    We study quantum entanglement and some of its applications in graph theory and zero-error information theory. In Chapter 1 we introduce entanglement and other fundamental concepts of quantum theory. In Chapter 2 we address the question of how much quantum correlations generated by entanglement can

  6. Analysis of the “naming game” with learning errors in communications

    OpenAIRE

    Yang Lou; Guanrong Chen

    2015-01-01

    Naming game simulates the process of naming an objective by a population of agents organized in a certain communication network. By pair-wise iterative interactions, the population reaches consensus asymptotically. We study naming game with communication errors during pair-wise conversations, with error rates in a uniform probability distribution. First, a model of naming game with learning errors in communications (NGLE) is proposed. Then, a strategy for agents to prevent learning errors is ...

  7. When the asymptotic limit offers no advantage in the local-operations-and-classical-communication paradigm

    Science.gov (United States)

    Fu, Honghao; Leung, Debbie; Mančinska, Laura

    2014-05-01

    We consider bipartite LOCC, the class of operations implementable by local quantum operations and classical communication between two parties. Surprisingly, there are operations that can be approximated to arbitrary precision but are impossible to implement exactly if only a finite number of messages are exchanged. This significantly complicates the analysis of what can or cannot be approximated with LOCC. Toward alleviating this problem, we exhibit two scenarios in which allowing vanishing error does not help. The first scenario is implementation of projective measurements with product measurement operators. The second scenario is the discrimination of unextendable product bases on two three-dimensional systems.

  8. State-independent error-disturbance trade-off for measurement operators

    International Nuclear Information System (INIS)

    Zhou, S.S.; Wu, Shengjun; Chau, H.F.

    2016-01-01

    In general, classical measurement statistics of a quantum measurement is disturbed by performing an additional incompatible quantum measurement beforehand. Using this observation, we introduce a state-independent definition of disturbance by relating it to the distinguishability problem between two classical statistical distributions – one resulting from a single quantum measurement and the other from a succession of two quantum measurements. Interestingly, we find an error-disturbance trade-off relation for any measurements in two-dimensional Hilbert space and for measurements with mutually unbiased bases in any finite-dimensional Hilbert space. This relation shows that error should be reduced to zero in order to minimize the sum of error and disturbance. We conjecture that a similar trade-off relation with a slightly relaxed definition of error can be generalized to any measurements in an arbitrary finite-dimensional Hilbert space.

  9. Medical Error Avoidance in Intraoperative Neurophysiological Monitoring: The Communication Imperative.

    Science.gov (United States)

    Skinner, Stan; Holdefer, Robert; McAuliffe, John J; Sala, Francesco

    2017-11-01

    Error avoidance in medicine follows similar rules that apply within the design and operation of other complex systems. The error-reduction concepts that best fit the conduct of testing during intraoperative neuromonitoring are forgiving design (reversibility of signal loss to avoid/prevent injury) and system redundancy (reduction of false reports by the multiplication of the error rate of tests independently assessing the same structure). However, error reduction in intraoperative neuromonitoring is complicated by the dichotomous roles (and biases) of the neurophysiologist (test recording and interpretation) and surgeon (intervention). This "interventional cascade" can be given as follows: test → interpretation → communication → intervention → outcome. Observational and controlled trials within operating rooms demonstrate that optimized communication, collaboration, and situational awareness result in fewer errors. Well-functioning operating room collaboration depends on familiarity and trust among colleagues. Checklists represent one method to initially enhance communication and avoid obvious errors. All intraoperative neuromonitoring supervisors should strive to use sufficient means to secure situational awareness and trusted communication/collaboration. Face-to-face audiovisual teleconnections may help repair deficiencies when a particular practice model disallows personal operating room availability. All supervising intraoperative neurophysiologists need to reject an insular or deferential or distant mindset.

  10. On the problem of non-zero word error rates for fixed-rate error correction codes in continuous variable quantum key distribution

    International Nuclear Information System (INIS)

    Johnson, Sarah J; Ong, Lawrence; Shirvanimoghaddam, Mahyar; Lance, Andrew M; Symul, Thomas; Ralph, T C

    2017-01-01

    The maximum operational range of continuous variable quantum key distribution protocols has been shown to be improved by employing high-efficiency forward error correction codes. Typically, the secret key rate model for such protocols is modified to account for the non-zero word error rate of such codes. In this paper, we demonstrate that this model is incorrect: firstly, we show by example that fixed-rate error correction codes, as currently defined, can exhibit efficiencies greater than unity. Secondly, we show that using this secret key model combined with greater-than-unity-efficiency codes implies that it is possible to achieve a positive secret key over an entanglement-breaking channel—an impossible scenario. We then consider the secret key model from a post-selection perspective, and examine the implications for key rate if we constrain the forward error correction codes to operate at low word error rates. (paper)
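
    The paper's first observation, that the usual efficiency figure can exceed unity, is easy to reproduce numerically. The sketch below uses a hypothetical rate-1/2 code and the real-AWGN capacity as the benchmark (the actual CV-QKD key rate model, with reverse reconciliation, is omitted):

```python
import math

def capacity(snr):
    # Shannon capacity of the real AWGN channel, bits per channel use
    return 0.5 * math.log2(1.0 + snr)

def naive_efficiency(code_rate, snr):
    # beta = R / C(SNR): the usual reconciliation-efficiency figure,
    # which ignores the word error rate entirely
    return code_rate / capacity(snr)

# A hypothetical rate-1/2 code pushed below its design SNR: beta crosses 1
for snr in (1.1, 1.0, 0.9):
    print(f"SNR {snr}: C = {capacity(snr):.3f}, "
          f"beta = {naive_efficiency(0.5, snr):.3f}")
```

    Operating the same fixed-rate code at ever-lower SNR drives the nominal efficiency above 1 while the word error rate silently climbs, which is exactly why a key rate model that ignores the word error rate can certify impossible performance.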

  11. Energy efficiency of error correcting mechanisms for wireless communications

    NARCIS (Netherlands)

    Havinga, Paul J.M.

    We consider the energy efficiency of error control mechanisms for wireless communication. Since high error rates are inevitable in the wireless environment, energy-efficient error control is an important issue for mobile computing systems. Although well-designed retransmission schemes can be optimal

  12. Correcting quantum errors with entanglement.

    Science.gov (United States)

    Brun, Todd; Devetak, Igor; Hsieh, Min-Hsiu

    2006-10-20

    We show how entanglement shared between encoder and decoder can simplify the theory of quantum error correction. The entanglement-assisted quantum codes we describe do not require the dual-containing constraint necessary for standard quantum error-correcting codes, thus allowing us to "quantize" all of classical linear coding theory. In particular, efficient modern classical codes that attain the Shannon capacity can be made into entanglement-assisted quantum codes attaining the hashing bound (closely related to the quantum capacity). For systems without large amounts of shared entanglement, these codes can also be used as catalytic codes, in which a small amount of initial entanglement enables quantum communication.

  13. Reducing Diagnostic Errors through Effective Communication: Harnessing the Power of Information Technology

    Science.gov (United States)

    Naik, Aanand Dinkar; Rao, Raghuram; Petersen, Laura Ann

    2008-01-01

    Diagnostic errors are poorly understood despite being a frequent cause of medical errors. Recent efforts have aimed to advance the "basic science" of diagnostic error prevention by tracing errors to their most basic origins. Although a refined theory of diagnostic error prevention will take years to formulate, we focus on communication breakdown, a major contributor to diagnostic errors and an increasingly recognized preventable factor in medical mishaps. We describe a comprehensive framework that integrates the potential sources of communication breakdowns within the diagnostic process and identifies vulnerable steps in the diagnostic process where various types of communication breakdowns can precipitate error. We then discuss potential information technology-based interventions that may have efficacy in preventing one or more forms of these breakdowns. These possible intervention strategies include using new technologies to enhance communication between health providers and health systems, improve patient involvement, and facilitate management of information in the medical record. PMID:18373151

  14. Local and omnibus goodness-of-fit tests in classical measurement error models

    KAUST Repository

    Ma, Yanyuan

    2010-09-14

    We consider functional measurement error models, i.e. models where covariates are measured with error and yet no distributional assumptions are made about the mismeasured variable. We propose and study a score-type local test and an orthogonal series-based, omnibus goodness-of-fit test in this context, where no likelihood function is available or calculated; i.e., all the tests are proposed in the semiparametric model framework. We demonstrate that our tests have optimality properties and computational advantages that are similar to those of the classical score tests in the parametric model framework. The test procedures are applicable to several semiparametric extensions of measurement error models, including when the measurement error distribution is estimated non-parametrically as well as for generalized partially linear models. The performance of the local score-type and omnibus goodness-of-fit tests is demonstrated through simulation studies and analysis of a nutrition data set.

  15. MEASUREMENT ERROR EFFECT ON THE POWER OF CONTROL CHART FOR ZERO-TRUNCATED POISSON DISTRIBUTION

    Directory of Open Access Journals (Sweden)

    Ashit Chakraborty

    2013-09-01

    Full Text Available Measurement error is the difference between the true value and the measured value of a quantity; it exists in practice and may considerably affect the performance of control charts in some cases. Measurement error variability has uncertainty which can come from several sources. In this paper, we have studied the effect of these sources of variability on the power characteristics of the control chart and obtained the values of the average run length (ARL) for the zero-truncated Poisson distribution (ZTPD). An expression for the power of the control chart for variable sample size under a standardized normal variate for the ZTPD is also derived.
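
    The in-control ARL computation the abstract refers to can be sketched directly from the zero-truncated Poisson pmf. The rate and chart limit below are hypothetical, and the measurement-error variance component studied in the paper is omitted:

```python
import math

def ztp_pmf(k, lam):
    # zero-truncated Poisson pmf, defined for k = 1, 2, ...
    return math.exp(-lam) * lam**k / (math.factorial(k) * (1.0 - math.exp(-lam)))

def arl(lam, ucl):
    # in-control ARL of a one-sided chart that signals when X > ucl
    p_signal = 1.0 - sum(ztp_pmf(k, lam) for k in range(1, ucl + 1))
    return 1.0 / p_signal

print(f"in-control ARL (lambda = 2, UCL = 6): {arl(2.0, 6):.1f}")
```

    Measurement error effectively shifts or inflates the observed rate, so the paper's question amounts to how p_signal, and hence the ARL, moves when lam is replaced by an error-contaminated version.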

  16. Zero-point energy conservation in classical trajectory simulations: Application to H2CO

    Science.gov (United States)

    Lee, Kin Long Kelvin; Quinn, Mitchell S.; Kolmann, Stephen J.; Kable, Scott H.; Jordan, Meredith J. T.

    2018-05-01

    A new approach for preventing zero-point energy (ZPE) violation in quasi-classical trajectory (QCT) simulations is presented and applied to H2CO "roaming" reactions. Zero-point energy may be problematic in roaming reactions because they occur at or near bond dissociation thresholds and these channels may be incorrectly open or closed depending on if, or how, ZPE has been treated. Here we run QCT simulations on a "ZPE-corrected" potential energy surface defined as the sum of the molecular potential energy surface (PES) and the global harmonic ZPE surface. Five different harmonic ZPE estimates are examined with four, on average, giving values within 4 kJ/mol—chemical accuracy—for H2CO. The local harmonic ZPE, at arbitrary molecular configurations, is subsequently defined in terms of "projected" Cartesian coordinates and a global ZPE "surface" is constructed using Shepard interpolation. This, combined with a second-order modified Shepard interpolated PES, V, allows us to construct a proof-of-concept ZPE-corrected PES for H2CO, Veff, at no additional computational cost to the PES itself. Both V and Veff are used to model product state distributions from the H + HCO → H2 + CO abstraction reaction, which are shown to reproduce the literature roaming product state distributions. Our ZPE-corrected PES allows all trajectories to be analysed, whereas, in previous simulations, a significant proportion was discarded because of ZPE violation. We find ZPE has little effect on product rotational distributions, validating previous QCT simulations. Running trajectories on V, however, shifts the product kinetic energy release to higher energy than on Veff and classical simulations of kinetic energy release should therefore be viewed with caution.
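
    The construction V_eff = V + (local harmonic ZPE) can be illustrated in one dimension with a Morse oscillator. The parameters below are hypothetical, with hbar = mass = 1; the paper builds both surfaces for H2CO by Shepard interpolation rather than analytically:

```python
import numpy as np

D, a = 0.2, 1.0                      # Morse depth and range (hypothetical)

def V(q):
    return D * (1.0 - np.exp(-a * q)) ** 2

def d2V(q):
    e = np.exp(-a * q)
    return 2.0 * D * a**2 * e * (2.0 * e - 1.0)

def zpe(q):
    # local harmonic ZPE: 0.5 * omega(q) with omega = sqrt(V''),
    # clamped to zero where the curvature goes negative
    return 0.5 * np.sqrt(np.maximum(d2V(q), 0.0))

def V_eff(q):
    return V(q) + zpe(q)

print("ZPE lift at the minimum:", V_eff(0.0) - V(0.0))
print("asymptotic lift:", V_eff(5.0) - V(5.0))   # ZPE fades as the bond breaks
```

    Trajectories run on V_eff see dissociation thresholds raised by the local ZPE near the well but not in the asymptotic region, which is how such a correction keeps near-threshold channels (like roaming) open or closed for the right energetic reasons.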

  17. Ultra-fast secure communication with complex systems in classical channels (Conference Presentation)

    KAUST Repository

    Mazzone, Valerio

    2017-04-28

    Developing secure communications is a research area of growing interest. During the past years, several cryptographic schemes have been developed, with quantum cryptography being a promising scheme due to its use of quantum effects, which make it very difficult for an eavesdropper to intercept the communication. However, practical quantum key distribution methods have encountered several limitations; current experimental realizations, in fact, fail to scale up to long distances, as well as to provide unconditional security and speed comparable to classical optical communication channels. Here we propose a new, low-cost and ultra-fast cryptographic system based on a fully classical optical channel. Our cryptographic scheme exploits the complex synchronization of two different random systems (one on the side of the sender and another on the side of the receiver) to realize a "physical" one-time pad system. The random medium is created by an optical chip fabricated through electron beam lithography on a Silicon On Insulator (SOI) substrate. We present experiments with ps lasers and commercial fibers, showing the ultrafast distribution of a random key between two users (Alice and Bob), with absolutely no possibility for a passive/active eavesdropper to intercept the communication. Remarkably, this system enables the same security as quantum cryptography, but with the use of a classical communication channel. Our system exploits a unique synchronization that exists between two different random systems and, as such, is extremely versatile and can enable safe communication among different users in standard telecommunications channels.

  18. Classical and quantum fingerprinting strategies

    International Nuclear Information System (INIS)

    Scott, A.; Walgate, J.; Sanders, B.

    2005-01-01

    Full text: Fingerprinting enables two parties to infer whether the messages they hold are the same or different when the cost of communication is high: each message is associated with a smaller fingerprint and comparisons between messages are made in terms of their fingerprints alone. When the two parties are forbidden access to a public coin, it is known that fingerprints composed of quantum information can be made exponentially smaller than those composed of classical information. We present specific constructions of classical fingerprinting strategies through the use of constant-weight codes and provide bounds on the worst-case error probability with the help of extremal set theory. These classical strategies are easily outperformed by quantum strategies constructed from line packings and equiangular tight frames. (author)
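The record above treats the harder no-public-coin setting via constant-weight codes; as a warm-up, the classic shared-randomness (public-coin) polynomial fingerprint illustrates the basic idea that short fingerprints suffice for equality testing. The prime and the messages below are arbitrary choices for illustration:

```python
import random

P = 2**61 - 1   # a large Mersenne prime; fingerprints live in the field mod P

def fingerprint(message: bytes, r: int) -> int:
    """Evaluate the message's polynomial (bytes as coefficients) at point r mod P."""
    acc = 0
    for byte in message:
        acc = (acc * r + byte) % P
    return acc

def probably_equal(msg_a: bytes, msg_b: bytes) -> bool:
    """Compare two messages via their fingerprints at a shared random point."""
    r = random.randrange(1, P)       # the "public coin" both parties share
    return fingerprint(msg_a, r) == fingerprint(msg_b, r)

print(probably_equal(b"hello world", b"hello world"))   # True
print(probably_equal(b"hello world", b"hello_world"))   # False except with prob <= len/P
```

Equal messages always agree; distinct messages collide only when the random point hits a root of the difference polynomial, so the error probability is at most (message length)/P.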

  19. Republished error management: Descriptions of verbal communication errors between staff. An analysis of 84 root cause analysis-reports from Danish hospitals

    DEFF Research Database (Denmark)

    Rabøl, Louise Isager; Andersen, Mette Lehmann; Østergaard, Doris

    2011-01-01

    Introduction: Poor teamwork and communication between healthcare staff are correlated to patient safety incidents. However, the organisational factors responsible for these issues are unexplored. Root cause analyses (RCA) use human factors thinking to analyse the systems behind severe patient safety incidents. ...... (30%)), communication errors between junior and senior staff members (11 (25%)), hesitance in speaking up (10 (23%)) and communication errors during teamwork (8 (18%)). The kappa values were 0.44-0.78. Unproceduralized communication and information exchange via telephone, related to transfer between ...... The RCARs' rich descriptions of the incidents revealed the organisational factors and needs related to these errors.

  20. Impact of Communication Errors in Radiology on Patient Care, Customer Satisfaction, and Work-Flow Efficiency.

    Science.gov (United States)

    Siewert, Bettina; Brook, Olga R; Hochman, Mary; Eisenberg, Ronald L

    2016-03-01

    The purpose of this study is to analyze the impact of communication errors on patient care, customer satisfaction, and work-flow efficiency and to identify opportunities for quality improvement. We performed a search of our quality assurance database for communication errors submitted from August 1, 2004, through December 31, 2014. Cases were analyzed regarding the step in the imaging process at which the error occurred (i.e., ordering, scheduling, performance of examination, study interpretation, or result communication). The impact on patient care was graded on a 5-point scale from none (0) to catastrophic (4). The severity of impact between errors in result communication and those that occurred at all other steps was compared. Error evaluation was performed independently by two board-certified radiologists. Statistical analysis was performed using the chi-square test and kappa statistics. Three hundred eighty of 422 cases were included in the study. One hundred ninety-nine of the 380 communication errors (52.4%) occurred at steps other than result communication, including ordering (13.9%; n = 53), scheduling (4.7%; n = 18), performance of examination (30.0%; n = 114), and study interpretation (3.7%; n = 14). Result communication was the single most common step, accounting for 47.6% (181/380) of errors. There was no statistically significant difference in impact severity between errors that occurred during result communication and those that occurred at other times (p = 0.29). In 37.9% of cases (144/380), there was an impact on patient care, including 21 minor impacts (5.5%; result communication, n = 13; all other steps, n = 8), 34 moderate impacts (8.9%; result communication, n = 12; all other steps, n = 22), and 89 major impacts (23.4%; result communication, n = 45; all other steps, n = 44). In 62.1% (236/380) of cases, no impact was noted, but 52.6% (200/380) of cases had the potential for an impact. 
Among 380 communication errors in a radiology department, 37.9% had an impact on patient care.

  1. Non-zero total correlation means non-zero quantum correlation

    International Nuclear Information System (INIS)

    Li, Bo; Chen, Lin; Fan, Heng

    2014-01-01

    We investigated the super quantum discord based on weak measurements. The super quantum discord is an extension of the standard quantum discord defined by projective measurements and also describes the quantumness of correlations. We provide some equivalent conditions for zero super quantum discord by using quantum discord, classical correlation and mutual information. In particular, we find that the super quantum discord is zero only for product states, which have zero mutual information. This result suggests that non-zero correlations can always be detected using the quantum correlation with weak measurements. As an example, we present the assisted state-discrimination method.
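The claim that product states carry zero mutual information can be checked numerically. A small numpy sketch computing the quantum mutual information I(A:B) = S(A) + S(B) − S(AB) for a product state and, for contrast, a maximally correlated Bell state (both states chosen purely for illustration):

```python
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -Tr(rho log2 rho), in bits."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]             # convention: 0 log 0 = 0
    return float(-np.sum(evals * np.log2(evals)))

def partial_trace(rho, keep, dims=(2, 2)):
    """Partial trace of a two-qubit density matrix; keep=0 keeps subsystem A."""
    r = rho.reshape(dims + dims)             # indices (i, j, k, l) = (A, B, A', B')
    return np.trace(r, axis1=1, axis2=3) if keep == 0 else np.trace(r, axis1=0, axis2=2)

def mutual_information(rho):
    return (von_neumann_entropy(partial_trace(rho, 0))
            + von_neumann_entropy(partial_trace(rho, 1))
            - von_neumann_entropy(rho))

# Product state |0><0| (x) |+><+| : zero total correlation
plus = np.array([1.0, 1.0]) / np.sqrt(2)
rho_prod = np.kron(np.outer([1.0, 0.0], [1.0, 0.0]), np.outer(plus, plus))

# Bell state (|00> + |11>)/sqrt(2) : maximally correlated, I = 2 bits
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
rho_bell = np.outer(bell, bell)

print(mutual_information(rho_prod), mutual_information(rho_bell))
```

The product state gives I = 0, consistent with the abstract's statement that only product states have zero mutual information (and hence zero super quantum discord).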

  2. Physician Preferences to Communicate Neuropsychological Results: Comparison of Qualitative Descriptors and a Proposal to Reduce Communication Errors.

    Science.gov (United States)

    Schoenberg, Mike R; Osborn, Katie E; Mahone, E Mark; Feigon, Maia; Roth, Robert M; Pliskin, Neil H

    2017-11-08

    Errors in communication are a leading cause of medical errors. A potential source of error in communicating neuropsychological results is confusion in the qualitative descriptors used to describe standardized neuropsychological data. This study sought to evaluate the extent to which medical consumers of neuropsychological assessments believed that results/findings were not clearly communicated. In addition, preference data for a variety of qualitative descriptors commonly used to communicate normative neuropsychological test scores were obtained. Preference data were obtained for five qualitative descriptor systems as part of a larger 36-item internet-based survey of physician satisfaction with neuropsychological services. A new qualitative descriptor system termed the Simplified Qualitative Classification System (Q-Simple) was proposed to reduce the potential for communication errors using seven terms: very superior, superior, high average, average, low average, borderline, and abnormal/impaired. A non-random convenience sample of 605 clinicians identified from four United States academic medical centers from January 1, 2015 through January 7, 2016 were invited to participate. A total of 182 surveys were completed. A minority of clinicians (12.5%) indicated that neuropsychological study results were not clearly communicated. When communicating neuropsychological standardized scores, the two most preferred qualitative descriptor systems were by Heaton and colleagues (26%) and a newly proposed Q-simple system (22%). Comprehensive norms for an extended Halstead-Reitan battery: Demographic corrections, research findings, and clinical applications. Odessa, TX: Psychological Assessment Resources) (26%) and the newly proposed Q-Simple system (22%). Initial findings highlight the need to improve and standardize communication of neuropsychological results. These data offer initial guidance for preferred terms to communicate test results and form a foundation for more

  3. Asynchronous error-correcting secure communication scheme based on fractional-order shifting chaotic system

    Science.gov (United States)

    Chao, Luo

    2015-11-01

    In this paper, a novel digital secure communication scheme is proposed. Unlike the usual secure communication schemes based on chaotic synchronization, the proposed scheme employs asynchronous communication, which avoids the weakness of synchronous systems of being susceptible to environmental interference. Moreover, regarding transmission errors and data loss in the process of communication, the proposed scheme is able to check and correct errors in real time. In order to guarantee security, a fractional-order complex chaotic system with a shifting order is utilized to modulate the transmitted signal, which has high nonlinearity and complexity in both the frequency and time domains. The corresponding numerical simulations demonstrate the effectiveness and feasibility of the scheme.

  4. Topics in quantum cryptography, quantum error correction, and channel simulation

    Science.gov (United States)

    Luo, Zhicheng

    In this thesis, we mainly investigate four different topics: efficiently implementable codes for quantum key expansion [51], quantum error-correcting codes based on privacy amplification [48], private classical capacity of quantum channels [44], and classical channel simulation with quantum side information [49, 50]. For the first topic, we propose an efficiently implementable quantum key expansion protocol, capable of increasing the size of a pre-shared secret key by a constant factor. Previously, the Shor-Preskill proof [64] of the security of the Bennett-Brassard 1984 (BB84) [6] quantum key distribution protocol relied on the theoretical existence of good classical error-correcting codes with the "dual-containing" property. But the explicit and efficiently decodable construction of such codes is unknown. We show that we can lift the dual-containing constraint by employing the non-dual-containing codes with excellent performance and efficient decoding algorithms. For the second topic, we propose a construction of Calderbank-Shor-Steane (CSS) [19, 68] quantum error-correcting codes, which are originally based on pairs of mutually dual-containing classical codes, by combining a classical code with a two-universal hash function. We show, using the results of Renner and Koenig [57], that the communication rates of such codes approach the hashing bound on tensor powers of Pauli channels in the limit of large block-length. For the third topic, we prove a regularized formula for the secret key assisted capacity region of a quantum channel for transmitting private classical information. This result parallels the work of Devetak on entanglement assisted quantum communication capacity. This formula provides a new family protocol, the private father protocol, under the resource inequality framework that includes the private classical communication without the assisted secret keys as a child protocol. 
For the fourth topic, we study and solve the problem of classical channel simulation with quantum side information.

  5. Implementability of two-qubit unitary operations over the butterfly network and the ladder network with free classical communication

    Energy Technology Data Exchange (ETDEWEB)

    Akibue, Seiseki [Department of Physics, Graduate School of Science, The University of Tokyo, Tokyo (Japan); Murao, Mio [Department of Physics, Graduate School of Science, The University of Tokyo, Tokyo, Japan and NanoQuine, The University of Tokyo, Tokyo (Japan)

    2014-12-04

    We investigate distributed implementation of two-qubit unitary operations over two primitive networks, the butterfly network and the ladder network, as a first step to apply network coding for quantum computation. By classifying two-qubit unitary operations in terms of the Kraus-Cirac number, the number of non-zero parameters describing the global part of two-qubit unitary operations, we analyze which class of two-qubit unitary operations is implementable over these networks with free classical communication. For the butterfly network, we show that two classes of two-qubit unitary operations, which contain all Clifford, controlled-unitary and matchgate operations, are implementable over the network. For the ladder network, we show that two-qubit unitary operations are implementable over the network if and only if their Kraus-Cirac number do not exceed the number of the bridges of the ladder.

  6. Implementability of two-qubit unitary operations over the butterfly network and the ladder network with free classical communication

    International Nuclear Information System (INIS)

    Akibue, Seiseki; Murao, Mio

    2014-01-01

    We investigate distributed implementation of two-qubit unitary operations over two primitive networks, the butterfly network and the ladder network, as a first step to apply network coding for quantum computation. By classifying two-qubit unitary operations in terms of the Kraus-Cirac number, the number of non-zero parameters describing the global part of two-qubit unitary operations, we analyze which class of two-qubit unitary operations is implementable over these networks with free classical communication. For the butterfly network, we show that two classes of two-qubit unitary operations, which contain all Clifford, controlled-unitary and matchgate operations, are implementable over the network. For the ladder network, we show that two-qubit unitary operations are implementable over the network if and only if their Kraus-Cirac number do not exceed the number of the bridges of the ladder

  7. Comparison of Bit Error Rate of Line Codes in NG-PON2

    Directory of Open Access Journals (Sweden)

    Tomas Horvath

    2016-05-01

    Full Text Available This article focuses on the simulation and comparison of the line codes NRZ (Non-Return-to-Zero), RZ (Return-to-Zero) and Miller's code for use in NG-PON2 (Next-Generation Passive Optical Network Stage 2). Our article provides solutions with Q-factor, BER (Bit Error Rate), and bandwidth comparison. Line codes are the most important part of communication over optical fibre; their main role is digital signal representation. NG-PON2 networks use optical fibres for communication, which is the reason why OptSim v5.2 is used for simulation.
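As an illustration of two of the line codes compared above, a sketch generating NRZ and RZ waveforms from a bit stream (sampling rate and signal levels are arbitrary choices; Miller's code is omitted for brevity):

```python
import numpy as np

def nrz(bits, samples_per_bit=8):
    """Non-Return-to-Zero: the level is held for the whole bit period (1 -> +1, 0 -> -1)."""
    levels = np.where(np.asarray(bits) > 0, 1.0, -1.0)
    return np.repeat(levels, samples_per_bit)

def rz(bits, samples_per_bit=8):
    """Return-to-Zero: a '1' is high for the first half of the bit, then returns to 0."""
    out = np.zeros(len(bits) * samples_per_bit)
    half = samples_per_bit // 2
    for i, b in enumerate(bits):
        if b:
            out[i * samples_per_bit : i * samples_per_bit + half] = 1.0
    return out

bits = [1, 0, 1, 1, 0]
print(nrz(bits)[:16])   # first two bit periods of the NRZ waveform
print(rz(bits)[:16])
```

The RZ waveform transitions within every '1' bit, which aids clock recovery at the cost of roughly double the bandwidth of NRZ — the kind of trade-off the article quantifies with Q-factor and BER.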

  8. Modeling Data with Excess Zeros and Measurement Error: Application to Evaluating Relationships between Episodically Consumed Foods and Health Outcomes

    KAUST Repository

    Kipnis, Victor; Midthune, Douglas; Buckman, Dennis W.; Dodd, Kevin W.; Guenther, Patricia M.; Krebs-Smith, Susan M.; Subar, Amy F.; Tooze, Janet A.; Carroll, Raymond J.; Freedman, Laurence S.

    2009-01-01

    Dietary assessment of episodically consumed foods gives rise to nonnegative data that have excess zeros and measurement error. Tooze et al. (2006, Journal of the American Dietetic Association 106, 1575-1587) describe a general statistical approach

  9. Cooperative MIMO Communication at Wireless Sensor Network: An Error Correcting Code Approach

    Science.gov (United States)

    Islam, Mohammad Rakibul; Han, Young Shin

    2011-01-01

    Cooperative communication in wireless sensor network (WSN) explores the energy efficient wireless communication schemes between multiple sensors and data gathering node (DGN) by exploiting multiple input multiple output (MIMO) and multiple input single output (MISO) configurations. In this paper, an energy efficient cooperative MIMO (C-MIMO) technique is proposed where low density parity check (LDPC) code is used as an error correcting code. The rate of LDPC code is varied by varying the length of message and parity bits. Simulation results show that the cooperative communication scheme outperforms SISO scheme in the presence of LDPC code. LDPC codes with different code rates are compared using bit error rate (BER) analysis. BER is also analyzed under different Nakagami fading scenario. Energy efficiencies are compared for different targeted probability of bit error pb. It is observed that C-MIMO performs more efficiently when the targeted pb is smaller. Also the lower encoding rate for LDPC code offers better error characteristics. PMID:22163732

  10. Cooperative MIMO communication at wireless sensor network: an error correcting code approach.

    Science.gov (United States)

    Islam, Mohammad Rakibul; Han, Young Shin

    2011-01-01

    Cooperative communication in wireless sensor network (WSN) explores the energy efficient wireless communication schemes between multiple sensors and data gathering node (DGN) by exploiting multiple input multiple output (MIMO) and multiple input single output (MISO) configurations. In this paper, an energy efficient cooperative MIMO (C-MIMO) technique is proposed where low density parity check (LDPC) code is used as an error correcting code. The rate of LDPC code is varied by varying the length of message and parity bits. Simulation results show that the cooperative communication scheme outperforms SISO scheme in the presence of LDPC code. LDPC codes with different code rates are compared using bit error rate (BER) analysis. BER is also analyzed under different Nakagami fading scenario. Energy efficiencies are compared for different targeted probability of bit error p(b). It is observed that C-MIMO performs more efficiently when the targeted p(b) is smaller. Also the lower encoding rate for LDPC code offers better error characteristics.
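LDPC encoding and belief-propagation decoding are too involved for a short sketch; a rate-1/n repetition code over a binary symmetric channel illustrates the same qualitative trade-off discussed above — lower code rate buys better error characteristics. All parameters below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

def ber_bsc(n_bits, p_flip, repeat=1):
    """Monte Carlo bit-error rate over a binary symmetric channel.

    A rate-1/repeat repetition code with majority-vote decoding stands in
    for LDPC purely to illustrate the rate-vs-error trade-off; real LDPC
    decoding runs belief propagation on a sparse parity-check graph.
    """
    bits = rng.integers(0, 2, n_bits)
    coded = np.repeat(bits, repeat)                  # rate-1/repeat encoding
    flips = rng.random(coded.size) < p_flip          # channel bit flips
    received = coded ^ flips
    # Majority vote over each group of `repeat` received copies
    decoded = (received.reshape(n_bits, repeat).sum(axis=1) * 2 > repeat).astype(int)
    return float(np.mean(decoded != bits))

for r in (1, 3, 5):
    print(r, ber_bsc(100_000, 0.1, repeat=r))        # BER drops as rate decreases
```

With a crossover probability of 0.1, the uncoded BER is about 0.1, while 3x and 5x repetition push it to roughly 0.028 and 0.009 — the same direction as the LDPC result, at a far worse rate/performance trade-off.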

  11. Two-Way Communication with a Single Quantum Particle

    Science.gov (United States)

    Del Santo, Flavio; Dakić, Borivoje

    2018-02-01

    In this Letter we show that communication when restricted to a single information carrier (i.e., single particle) and finite speed of propagation is fundamentally limited for classical systems. On the other hand, quantum systems can surpass this limitation. We show that communication bounded to the exchange of a single quantum particle (in superposition of different spatial locations) can result in "two-way signaling," which is impossible in classical physics. We quantify the discrepancy between classical and quantum scenarios by the probability of winning a game played by distant players. We generalize our result to an arbitrary number of parties and we show that the probability of success is asymptotically decreasing to zero as the number of parties grows, for all classical strategies. In contrast, quantum strategy allows players to win the game with certainty.

  12. Mixtures of Berkson and classical covariate measurement error in the linear mixed model: Bias analysis and application to a study on ultrafine particles.

    Science.gov (United States)

    Deffner, Veronika; Küchenhoff, Helmut; Breitner, Susanne; Schneider, Alexandra; Cyrys, Josef; Peters, Annette

    2018-03-13

    The ultrafine particle measurements in the Augsburger Umweltstudie, a panel study conducted in Augsburg, Germany, exhibit measurement error from various sources. Measurements from mobile devices show classical, possibly individual-specific, measurement error; Berkson-type error, which may also vary individually, occurs if measurements from fixed monitoring stations are used. The combination of fixed-site and individual exposure measurements results in a mixture of the two error types. We extended existing bias analysis approaches to linear mixed models with a complex error structure including individual-specific error components, autocorrelated errors, and a mixture of classical and Berkson error. Theoretical considerations and simulation results show that autocorrelation may severely change the attenuation of the effect estimates. Furthermore, unbalanced designs and the inclusion of confounding variables influence the degree of attenuation. Bias correction with the method of moments using data with mixture measurement error partially yielded better results compared to using incomplete data with classical error. Confidence intervals (CIs) based on the delta method achieved better coverage probabilities than those based on bootstrap samples. Moreover, we present the application of these new methods to heart rate measurements within the Augsburger Umweltstudie: the corrected effect estimates were slightly higher than their naive equivalents. The substantial measurement error of the ultrafine particle measurements has little impact on the results. The developed methodology is generally applicable to longitudinal data with measurement error. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
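The contrast between the two error types can be reproduced in a few lines of simulation (variances, slope, and sample size below are arbitrary): classical error in the regressor attenuates the estimated slope by the reliability ratio, while Berkson error leaves the slope approximately unbiased:

```python
import numpy as np

rng = np.random.default_rng(1)
n, beta = 200_000, 2.0

# Classical error: we observe w = x + u, so the slope of y on w is
# attenuated by var(x) / (var(x) + var(u)) = 1/2 here.
x = rng.normal(0.0, 1.0, n)                  # true exposure, variance 1
y = beta * x + rng.normal(0.0, 1.0, n)
w = x + rng.normal(0.0, 1.0, n)              # classical error, variance 1
slope_classical = np.cov(w, y)[0, 1] / np.var(w)

# Berkson error: the true exposure scatters around the assigned value z
# (x = z + u), so regressing y on z remains approximately unbiased.
z = rng.normal(0.0, 1.0, n)                  # assigned (fixed-site) exposure
x_b = z + rng.normal(0.0, 0.5, n)            # Berkson error
y_b = beta * x_b + rng.normal(0.0, 1.0, n)
slope_berkson = np.cov(z, y_b)[0, 1] / np.var(z)

print(slope_classical, slope_berkson)        # ~1.0 (attenuated) vs ~2.0 (unbiased)
```

This is the baseline intuition the paper extends to linear mixed models with autocorrelated, individual-specific, and mixed classical/Berkson error components.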

  13. Classical, Semi-classical and Quantum Noise

    CERN Document Server

    Poor, H; Scully, Marlan

    2012-01-01

    David Middleton was a towering figure of 20th Century engineering and science and one of the founders of statistical communication theory. During the Second World War, the young David Middleton, working with Van Vleck, devised the notion of the matched filter, which is the most basic method used for detecting signals in noise. Over the intervening six decades, the contributions of Middleton have become classics. This collection of essays by leading scientists, engineers and colleagues of David is in his honor and reflects the wide influence that he has had on many fields. Also included is the introduction by Middleton to his forthcoming book, which gives a wonderful view of the field of communication, its history and his own views on the field that he developed over the past 60 years. Focusing on classical noise modeling and applications, Classical, Semi-Classical and Quantum Noise includes coverage of statistical communication theory, non-stationary noise, molecular footprints, noise suppression, Quantum e...

  14. Reducing image interpretation errors – Do communication strategies undermine this?

    International Nuclear Information System (INIS)

    Snaith, B.; Hardy, M.; Lewis, E.F.

    2014-01-01

    Introduction: Errors in the interpretation of diagnostic images in the emergency department are a persistent problem internationally. To address this issue, a number of risk reduction strategies have been suggested but only radiographer abnormality detection schemes (RADS) have been widely implemented in the UK. This study considers the variation in RADS operation and communication in light of technological advances and changes in service operation. Methods: A postal survey of all NHS hospitals operating either an Emergency Department or Minor Injury Unit and a diagnostic imaging (radiology) department (n = 510) was undertaken between July and August 2011. The questionnaire was designed to elicit information on emergency service provision and details of RADS. Results: 325 questionnaires were returned (n = 325/510; 63.7%). The majority of sites (n = 288/325; 88.6%) operated a RADS with the majority (n = 227/288; 78.8%) employing a visual ‘flagging’ system as the only method of communication although symbols used were inconsistent and contradictory across sites. 61 sites communicated radiographer findings through a written proforma (paper or electronic) but this was run in conjunction with a flagging system at 50 sites. The majority of sites did not have guidance on the scope or operation of the ‘flagging’ or written communication system in use. Conclusions: RADS is an established clinical intervention to reduce errors in diagnostic image interpretation within the emergency setting. The lack of standardisation in communication processes and practices alongside the rapid adoption of technology has increased the potential for error and miscommunication

  15. Zeros da função zeta de Riemann e o teorema dos números primos

    OpenAIRE

    Oliveira, Willian Diego [UNESP

    2013-01-01

    We studied various properties of the Riemann zeta function. Three proofs of the Prime Number Theorem were provided. Classical results on zero-free regions of the zeta function, as well as their relation to the error term in the Prime Number Theorem, were studied in detail.

  16. Descriptions of verbal communication errors between staff. An analysis of 84 root cause analysis-reports from Danish hospitals

    DEFF Research Database (Denmark)

    Rabøl, Louise Isager; Andersen, Mette Lehmann; Østergaard, Doris

    2011-01-01

    Introduction: Poor teamwork and communication between healthcare staff are correlated to patient safety incidents. However, the organisational factors responsible for these issues are unexplored. Root cause analyses (RCA) use human factors thinking to analyse the systems behind severe patient safety incidents. The objective of this study is to review RCA reports (RCAR) for characteristics of verbal communication errors between hospital staff in an organisational perspective. Method: Two independent raters analysed 84 RCARs, conducted in six Danish hospitals between 2004 and 2006, for descriptions and characteristics of verbal communication errors such as handover errors and errors during teamwork. Results: Raters found descriptions of verbal communication errors in 44 reports (52%). These included handover errors (35 (86%)), communication errors between different staff groups (19 (43%)), misunderstandings (13...

  17. In-hospital fellow coverage reduces communication errors in the surgical intensive care unit.

    Science.gov (United States)

    Williams, Mallory; Alban, Rodrigo F; Hardy, James P; Oxman, David A; Garcia, Edward R; Hevelone, Nathanael; Frendl, Gyorgy; Rogers, Selwyn O

    2014-06-01

    Staff coverage strategies of intensive care units (ICUs) impact clinical outcomes. High-intensity staff coverage strategies are associated with lower morbidity and mortality. Accessible clinical expertise, teamwork, and effective communication have all been attributed to the success of this coverage strategy. We evaluate the impact of in-hospital fellow coverage (IHFC) on improving communication of cardiorespiratory events. A prospective observational study was performed in an academic tertiary care center with high-intensity staff coverage. The main outcome measure was resident-to-fellow communication of cardiorespiratory events during IHFC vs home coverage (HC) periods. Three hundred twelve cardiorespiratory events were collected in 114 surgical ICU patients over 134 study days. Complete data were available for 306 events. One hundred three communication errors occurred. IHFC was associated with significantly better communication of events compared to HC: 89% of events were communicated during IHFC vs 51% of events during HC. Communication patterns of junior and midlevel residents were similar. Midlevel residents communicated 68% of all on-call events (87% IHFC vs 50% HC); junior residents communicated 66% of events (94% IHFC vs 52% HC). Communication errors were lower in all ICUs during IHFC. IHFC reduced communication errors. Copyright © 2014 Elsevier Inc. All rights reserved.

  18. Understanding the Planck blackbody spectrum and Landau diamagnetism within classical electromagnetism

    International Nuclear Information System (INIS)

    Boyer, Timothy H

    2016-01-01

    Electromagnetism is a relativistic theory, and one must exercise care in coupling this theory with nonrelativistic classical mechanics and with nonrelativistic classical statistical mechanics. Indeed historically, both the blackbody radiation spectrum and diamagnetism within classical theory have been misunderstood because of two crucial failures: (1) the neglect of classical electromagnetic zero-point radiation, and (2) the use of erroneous combinations of nonrelativistic mechanics with relativistic electrodynamics. Here we review the treatment of classical blackbody radiation, and show that the presence of Lorentz-invariant classical electromagnetic zero-point radiation can explain both the Planck blackbody spectrum and Landau diamagnetism at thermal equilibrium within classical electromagnetic theory. The analysis requires that relativistic electromagnetism is joined appropriately with simple nonrelativistic mechanical systems which can be regarded as the zero-velocity limits of relativistic systems, and that nonrelativistic classical statistical mechanics is applied only in the low-frequency limit when zero-point energy makes no contribution. (paper)

  19. Capacity of oscillatory associative-memory networks with error-free retrieval

    International Nuclear Information System (INIS)

    Nishikawa, Takashi; Lai Yingcheng; Hoppensteadt, Frank C.

    2004-01-01

    Networks of coupled periodic oscillators (similar to the Kuramoto model) have been proposed as models of associative memory. However, error-free retrieval states of such oscillatory networks are typically unstable, resulting in near-zero capacity. This puts the networks at a disadvantage compared with the classical Hopfield network. Here we propose a simple remedy for this undesirable property and show rigorously that the error-free capacity of our oscillatory associative-memory networks can be made as high as that of the Hopfield network. They can thus not only provide insights into the origin of biological memory, but can also be potentially useful for applications in information science and engineering.
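The Hopfield benchmark against which the oscillatory networks are measured can be sketched directly: Hebbian storage plus sign-threshold updates, with the load chosen well below the classical capacity (about 0.138 patterns per unit) so retrieval is error-free. Network size and corruption level below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)
n_units, n_patterns = 200, 5                 # load 0.025 << 0.138 capacity
patterns = rng.choice([-1, 1], size=(n_patterns, n_units))

# Hebbian (outer-product) weights with zero self-coupling
W = (patterns.T @ patterns) / n_units
np.fill_diagonal(W, 0.0)

def retrieve(state, n_steps=20):
    """Synchronous sign updates until a fixed point (or step limit) is reached."""
    for _ in range(n_steps):
        new = np.sign(W @ state)
        new[new == 0] = 1                    # break ties deterministically
        if np.array_equal(new, state):
            break
        state = new
    return state

# Corrupt pattern 0 by flipping 20 of 200 units, then recall it
probe = patterns[0].copy()
flip = rng.choice(n_units, size=20, replace=False)
probe[flip] *= -1
recalled = retrieve(probe)
print(np.array_equal(recalled, patterns[0]))
```

At this load the crosstalk noise is far below the signal, so the corrupted probe relaxes to the stored pattern exactly — the error-free retrieval property the abstract seeks to match with oscillator networks.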

  20. Information transmission in microbial and fungal communication: from classical to quantum.

    Science.gov (United States)

    Majumdar, Sarangam; Pal, Sukla

    2018-06-01

    Microbes have their own communication systems. Secretion and reception of chemical signaling molecules and ion-channel-mediated electrical signaling are two observed modes of information transmission in microbial communities. In this article, we address various crucial machineries that form the backbone of microbial cell-to-cell communication, such as the quorum sensing mechanism (bacterial and fungal), quorum-sensing-regulated biofilm formation, gene expression, virulence, swarming, quorum quenching, the role of noise in quorum sensing, mathematical models (therapy model, evolutionary model, molecular mechanism model and many more), synthetic bacterial communication, bacterial ion channels, bacterial nanowires and electrical communication. In particular, we highlight bacterial collective behavior with classical and quantum mechanical approaches (including quantum information). Moreover, we shed new light by introducing the concept of quantum synthetic biology and a possible cellular quantum Turing test.

  1. Quantum and classical vacuum forces at zero and finite temperature; Quantentheoretische und klassische Vakuum-Kraefte bei Temperatur Null und bei endlicher Temperatur

    Energy Technology Data Exchange (ETDEWEB)

    Niekerken, Ole

    2009-06-15

    In this diploma thesis the Casimir-Polder force at zero temperature and at finite temperatures is calculated by using a well-defined quantum field theory (formulated in position space) and the method of image charges. For the calculations at finite temperature, KMS states are used. The temperature so defined describes the temperature of the electromagnetic background. A one-oscillator model for an inhomogeneous dispersive absorbing dielectric material is introduced and canonically quantized in order to calculate the Casimir-Polder force at a dielectric interface at finite temperature. The model fulfils causal commutation relations, and the dielectric function of the model fulfils the Kramers-Kronig relations. We then use the same methods to calculate the van der Waals force between two neutral atoms at zero temperature and at finite temperatures. It is shown that the high-temperature behaviour of the Casimir-Polder force and the van der Waals force is independent of ħ. This means that they have to be understood classically, which is then shown in an algebraic statistical theory by using classical KMS states. (orig.)

  2. Associations between communication climate and the frequency of medical error reporting among pharmacists within an inpatient setting.

    Science.gov (United States)

    Patterson, Mark E; Pace, Heather A; Fincham, Jack E

    2013-09-01

    Although error-reporting systems enable hospitals to accurately track safety climate through the identification of adverse events, these systems may be underused within a work climate of poor communication. The objective of this analysis is to identify the extent to which perceived communication climate among hospital pharmacists impacts medical error reporting rates. This cross-sectional study used survey responses from more than 5000 pharmacists responding to the 2010 Hospital Survey on Patient Safety Culture (HSOPSC). Two composite scores were constructed, for "communication openness" and "feedback and communication about error," respectively. Error reporting frequency was defined from the survey question, "In the past 12 months, how many event reports have you filled out and submitted?" Multivariable logistic regressions were used to estimate the likelihood of medical error reporting conditional upon communication openness or feedback levels, controlling for pharmacist years of experience, hospital geographic region, and ownership status. Pharmacists with higher communication openness scores compared with lower scores were 40% more likely to have filed or submitted a medical error report in the past 12 months (OR, 1.4; 95% CI, 1.1-1.7; P = 0.004). In contrast, pharmacists with higher communication feedback scores were not any more likely than those with lower scores to have filed or submitted a medical error report in the past 12 months (OR, 1.0; 95% CI, 0.8-1.3; P = 0.97). Hospital work climates that encourage pharmacists to communicate freely about problems related to patient safety are conducive to medical error reporting. The presence of feedback infrastructures about error may not be sufficient to induce error-reporting behavior.
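Odds ratios of the kind reported above can be computed from a 2×2 table. A sketch with made-up counts (not the study's data) computing an odds ratio and its Wald 95% confidence interval:

```python
import math

# Hypothetical 2x2 table: error report filed (yes/no) by communication openness
filed_high, not_filed_high = 420, 580    # made-up counts, high-openness group
filed_low,  not_filed_low  = 300, 700    # made-up counts, low-openness group

# Odds ratio: odds of filing in the high-openness group vs the low-openness group
odds_ratio = (filed_high / not_filed_high) / (filed_low / not_filed_low)

# Wald 95% CI: normal approximation on the log odds ratio
se = math.sqrt(1/filed_high + 1/not_filed_high + 1/filed_low + 1/not_filed_low)
lo = math.exp(math.log(odds_ratio) - 1.96 * se)
hi = math.exp(math.log(odds_ratio) + 1.96 * se)
print(round(odds_ratio, 2), (round(lo, 2), round(hi, 2)))
```

A multivariable logistic regression, as used in the study, additionally adjusts this ratio for covariates such as years of experience, region, and ownership status.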

  3. Exponential Communication Complexity Advantage from Quantum Superposition of the Direction of Communication

    Science.gov (United States)

    Guérin, Philippe Allard; Feix, Adrien; Araújo, Mateus; Brukner, Časlav

    2016-09-01

    In communication complexity, a number of distant parties have the task of calculating a distributed function of their inputs, while minimizing the amount of communication between them. It is known that with quantum resources, such as entanglement and quantum channels, one can obtain significant reductions in the communication complexity of some tasks. In this work, we study the role of the quantum superposition of the direction of communication as a resource for communication complexity. We present a tripartite communication task for which such a superposition allows for an exponential saving in communication, compared to one-way quantum (or classical) communication; the advantage also holds when we allow for protocols with bounded error probability.

  4. General Unified Integral Controller with Zero Steady-State Error for Single-Phase Grid-Connected Inverters

    DEFF Research Database (Denmark)

    Guo, Xiaoqiang; Guerrero, Josep M.

    2016-01-01

    Current regulation is crucial for operating single-phase grid-connected inverters. The challenge for the current controller is to track the current quickly and precisely with zero steady-state error. This paper proposes a novel feedback mechanism for the conventional PI controller. It allows...... done indicates that the widely used PR (P+Resonant) control is just a special case of the proposed control solution. The time-domain simulation in Matlab/Simulink and experimental results from a TMS320F2812 DSP-based laboratory prototype are in good agreement, which verify the effectiveness...

  5. Lakshmibai-Seshadri paths of level-zero weight shape and one-dimensional sums associated to level-zero fundamental representations

    OpenAIRE

    Naito, Satoshi; Sagaki, Daisuke

    2006-01-01

    We give interpretations of energy functions and (classically restricted) one-dimensional sums associated to tensor products of level-zero fundamental representations of quantum affine algebras in terms of Lakshmibai-Seshadri paths of level-zero weight shape.

  6. MIMO FIR feedforward design for zero error tracking control

    NARCIS (Netherlands)

    Heertjes, M.F.; Bruijnen, D.J.H.

    2014-01-01

    This paper discusses a multi-input multi-output (MIMO) finite impulse response (FIR) feedforward design. The design is intended for systems that have (non-)minimum phase zeros in the plant description. The zeros of the plant (either minimum or non-minimum phase) are used in the shaping of the

  7. Error statistics in a high-speed fibreoptic communication line with a phase shift of odd bits

    International Nuclear Information System (INIS)

    Shapiro, Elena G

    2009-01-01

    The propagation of optical pulses through a fibreoptic communication line with a phase shift of odd bits is directly numerically simulated. It is shown that simple analytic expressions approximate the error probability well. The phase shift of odd bits in the initial sequence is shown statistically to decrease the error probability in the communication line significantly. (fibreoptic communication lines)

  8. Addressee Errors in ATC Communications: The Call Sign Problem

    Science.gov (United States)

    Monan, W. P.

    1983-01-01

    Communication errors involving aircraft call signs were portrayed in reports of 462 hazardous incidents voluntarily submitted to the ASRS during an approximately four-year period. These errors resulted in confusion, disorder, and uncoordinated traffic conditions and produced the following types of operational anomalies: altitude deviations, wrong-way headings, aborted takeoffs, go-arounds, runway incursions, missed crossing altitude restrictions, descents toward high terrain, and traffic conflicts in flight and on the ground. Analysis of the report set resulted in identification of five categories of errors involving call signs: (1) faulty radio usage techniques, (2) call sign loss or smearing due to frequency congestion, (3) confusion resulting from similar sounding call signs, (4) airmen misses of call signs leading to failures to acknowledge or readback, and (5) controller failures regarding confirmation of acknowledgements or readbacks. These error categories are described in detail and several associated hazard mitigating measures that might be taken are considered.

  9. Analysis of quantum error-correcting codes: Symplectic lattice codes and toric codes

    Science.gov (United States)

    Harrington, James William

    Quantum information theory is concerned with identifying how quantum mechanical resources (such as entangled quantum states) can be utilized for a number of information processing tasks, including data storage, computation, communication, and cryptography. Efficient quantum algorithms and protocols have been developed for performing some tasks (e.g. , factoring large numbers, securely communicating over a public channel, and simulating quantum mechanical systems) that appear to be very difficult with just classical resources. In addition to identifying the separation between classical and quantum computational power, much of the theoretical focus in this field over the last decade has been concerned with finding novel ways of encoding quantum information that are robust against errors, which is an important step toward building practical quantum information processing devices. In this thesis I present some results on the quantum error-correcting properties of oscillator codes (also described as symplectic lattice codes) and toric codes. Any harmonic oscillator system (such as a mode of light) can be encoded with quantum information via symplectic lattice codes that are robust against shifts in the system's continuous quantum variables. I show the existence of lattice codes whose achievable rates match the one-shot coherent information over the Gaussian quantum channel. Also, I construct a family of symplectic self-dual lattices and search for optimal encodings of quantum information distributed between several oscillators. Toric codes provide encodings of quantum information into two-dimensional spin lattices that are robust against local clusters of errors and which require only local quantum operations for error correction. Numerical simulations of this system under various error models provide a calculation of the accuracy threshold for quantum memory using toric codes, which can be related to phase transitions in certain condensed matter models. 
I also present

  10. Uniqueness and zeros of q-shift difference polynomials

    Indian Academy of Sciences (India)

    In this paper, we consider the zero distributions of q-shift difference polynomials of meromorphic functions of zero order, and obtain two theorems that extend the classical Hayman results on the zeros of differential polynomials to q-shift difference polynomials. We also investigate the uniqueness problem of q-shift ...

  11. Mixed quantum/classical investigation of the photodissociation of NH3(A) and a practical method for maintaining zero-point energy in classical trajectories.

    Science.gov (United States)

    Bonhommeau, David; Truhlar, Donald G

    2008-07-07

    The photodissociation dynamics of ammonia upon excitation of the out-of-plane bending mode (mode ν2 with n2 = 0,…,6 quanta of vibration) in the Ã electronic state is investigated by means of several mixed quantum/classical methods, and the calculated final-state properties are compared to experiments. Five mixed quantum/classical methods are tested: one mean-field approach (the coherent switching with decay of mixing method), two surface-hopping methods [the fewest switches with time uncertainty (FSTU) and FSTU with stochastic decay (FSTU/SD) methods], and two surface-hopping methods with zero-point energy (ZPE) maintenance [the FSTU/SD+trajectory projection onto ZPE orbit (TRAPZ) and FSTU/SD+minimal TRAPZ (mTRAPZ) methods]. We found a qualitative difference between final NH2 internal energy distributions obtained for n2=0 and n2>1, as observed in experiments. Distributions obtained for n2=1 present an intermediate behavior between distributions obtained for smaller and larger n2 values. The dynamics is found to be highly electronically nonadiabatic with all these methods. NH2 internal energy distributions may have a negative energy tail when the ZPE is not maintained throughout the dynamics. The original TRAPZ method was designed to maintain ZPE in classical trajectories, but we find that it leads to unphysically high internal vibrational energies. The mTRAPZ method, which is new in this work and provides a general method for maintaining ZPE in either single-surface or multisurface trajectories, does not lead to unphysical results and is much less time consuming. The effect of maintaining ZPE in mixed quantum/classical dynamics is discussed in terms of agreement with experimental findings. The dynamics for n2=0 and n2=6 are also analyzed to reveal details not available from experiment, in particular, the time required for quenching of electronic excitation and the adiabatic energy gap and geometry at the time of quenching.

  12. Mixed quantum/classical investigation of the photodissociation of NH3(Ã) and a practical method for maintaining zero-point energy in classical trajectories

    Science.gov (United States)

    Bonhommeau, David; Truhlar, Donald G.

    2008-07-01

    The photodissociation dynamics of ammonia upon excitation of the out-of-plane bending mode (mode ν2 with n2=0,…,6 quanta of vibration) in the Ã electronic state is investigated by means of several mixed quantum/classical methods, and the calculated final-state properties are compared to experiments. Five mixed quantum/classical methods are tested: one mean-field approach (the coherent switching with decay of mixing method), two surface-hopping methods [the fewest switches with time uncertainty (FSTU) and FSTU with stochastic decay (FSTU/SD) methods], and two surface-hopping methods with zero-point energy (ZPE) maintenance [the FSTU/SD+trajectory projection onto ZPE orbit (TRAPZ) and FSTU/SD+minimal TRAPZ (mTRAPZ) methods]. We found a qualitative difference between final NH2 internal energy distributions obtained for n2=0 and n2>1, as observed in experiments. Distributions obtained for n2=1 present an intermediate behavior between distributions obtained for smaller and larger n2 values. The dynamics is found to be highly electronically nonadiabatic with all these methods. NH2 internal energy distributions may have a negative energy tail when the ZPE is not maintained throughout the dynamics. The original TRAPZ method was designed to maintain ZPE in classical trajectories, but we find that it leads to unphysically high internal vibrational energies. The mTRAPZ method, which is new in this work and provides a general method for maintaining ZPE in either single-surface or multisurface trajectories, does not lead to unphysical results and is much less time consuming. The effect of maintaining ZPE in mixed quantum/classical dynamics is discussed in terms of agreement with experimental findings. The dynamics for n2=0 and n2=6 are also analyzed to reveal details not available from experiment, in particular, the time required for quenching of electronic excitation and the adiabatic energy gap and geometry at the time of quenching.

  13. Learning from incident reports in the Australian medical imaging setting: handover and communication errors.

    Science.gov (United States)

    Hannaford, N; Mandel, C; Crock, C; Buckley, K; Magrabi, F; Ong, M; Allen, S; Schultz, T

    2013-02-01

    To determine the type and nature of incidents occurring within medical imaging settings in Australia and identify strategies that could be engaged to reduce the risk of their re-occurrence. 71 search terms, related to clinical handover and communication, were applied to 3976 incidents in the Radiology Events Register. Detailed classification and thematic analysis of a subset of incidents that involved handover or communication (n=298) were undertaken to identify the most prevalent types of error and to make recommendations about patient safety initiatives in medical imaging. Incidents occurred most frequently during patient preparation (34%), when requesting imaging (27%) and when communicating a diagnosis (23%). Frequent problems within each of these stages of the imaging cycle included: inadequate handover of patients (41%) or unsafe or inappropriate transfer of the patient to or from medical imaging (35%); incorrect information on the request form (52%); and delayed communication of a diagnosis (36%) or communication of a wrong diagnosis (36%). The handover of patients and clinical information to and from medical imaging is fraught with error, often compromising patient safety and resulting in communication of delayed or wrong diagnoses, unnecessary radiation exposure and a waste of limited resources. Corrective strategies to address safety concerns related to new information technologies, patient transfer and inadequate test result notification policies are relevant to all healthcare settings. Handover and communication errors are prevalent in medical imaging. System-wide changes that facilitate effective communication are required.

  14. Continuous fractional-order Zero Phase Error Tracking Control.

    Science.gov (United States)

    Liu, Lu; Tian, Siyuan; Xue, Dingyu; Zhang, Tao; Chen, YangQuan

    2018-04-01

    A continuous-time fractional-order feedforward control algorithm for tracking desired time-varying input signals is proposed in this paper. The presented controller cancels the phase shift caused by the zeros and poles of the controlled closed-loop fractional-order system, so it is called the Fractional-Order Zero Phase Error Tracking Controller (FZPETC). The controlled systems are divided into two categories, i.e., with and without non-cancellable (non-minimum-phase) zeros, which lie in the unstable region or on the stability boundary. Each kind of system has a targeted FZPETC design strategy. The improved tracking performance has been evaluated successfully by applying the proposed controller to three different kinds of fractional-order controlled systems. Besides, a modified quasi-perfect tracking scheme is presented for systems that may not have future tracking-trajectory information available, or that have problems with high-frequency disturbance rejection if the perfect tracking algorithm is applied. A simulation comparison and a hardware-in-the-loop thermal Peltier platform validate the practicality of the proposed quasi-perfect control algorithm. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
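    The phase-cancellation idea behind ZPETC-style feedforward (here sketched for the classical integer-order case, not the paper's fractional-order generalization; the polynomial and helper names are ours) is that a non-minimum-phase zero polynomial Bu(z) cannot be inverted stably, so one instead multiplies by Bu evaluated at z⁻¹ and normalizes by Bu(1)², which removes the phase shift exactly at every frequency:

    ```python
    import cmath

    # Hypothetical non-cancellable zero polynomial Bu(z) = 1 + 1.2 z^-1
    # (its zero at z = -1.2 lies outside the unit circle, so 1/Bu is unstable).
    bu = [1.0, 1.2]

    def poly_freq(coeffs, w):
        """Frequency response of sum_k c_k z^-k evaluated at z = e^{jw}."""
        return sum(c * cmath.exp(-1j * w * k) for k, c in enumerate(coeffs))

    def zpetc_factor(coeffs, w):
        """ZPETC-style compensation Bu(z)Bu(z^-1)/Bu(1)^2: the product
        Bu * conj(Bu) = |Bu|^2 is real, so the phase contribution of the
        uncancelled zero is exactly zero, and dividing by Bu(1)^2 gives
        unity gain at DC."""
        b = poly_freq(coeffs, w)
        return b * b.conjugate() / sum(coeffs) ** 2

    resp = zpetc_factor(bu, 0.7)   # real and positive: zero phase error
    ```

    The residual effect is a frequency-dependent gain |Bu(e^{jw})|²/Bu(1)², which is the price paid for stable zero-phase tracking.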

  15. Statistics of errors in fibre communication lines with a phase-modulation format and optical phase conjugation

    International Nuclear Information System (INIS)

    Shapiro, Elena G; Fedoruk, Mikhail P

    2011-01-01

    Analytical formulas are derived to approximate the probability density functions of 'zero' and 'one' bits in a linear communication channel with a binary format of optical signal phase modulation. Direct numerical simulation of the propagation of optical pulses in a communication line with optical phase conjugation is performed. The results of the numerical simulation are in good agreement with the analytical approximation. (fibreoptic communication lines)

  16. Communication, stereotypes and dignity: the inadequacy of the liberal case against censorship

    OpenAIRE

    Lucas, Peter

    2011-01-01

    J. S. Mill’s case against censorship rests on a conception of relevant communications as truth apt. If the communication is true, everyone benefits from the opportunity to exchange error for truth. If it is false, we benefit from the livelier impression truth makes when it collides with error. This classical liberal model is not, however, adequate for today’s world. In particular, it is inadequate for dealing with the problem of stereotyping. Much contemporary communication is not truth apt. Ad...

  17. On the average capacity and bit error probability of wireless communication systems

    KAUST Repository

    Yilmaz, Ferkan

    2011-12-01

    Analysis of the average binary error probabilities and average capacity of wireless communication systems over generalized fading channels has been considered separately in the past. This paper introduces a novel moment-generating-function-based unified expression for both the average binary error probabilities and the average capacity of single- and multiple-link communication with maximal ratio combining. It is worth noting that the generic unified expression offered in this paper can be easily calculated and is applicable to a wide variety of fading scenarios; the mathematical formalism is illustrated with the generalized Gamma fading distribution in order to validate the correctness of our newly derived results. © 2011 IEEE.
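    The quantity such MGF-based expressions evaluate in closed form is an average of the conditional error probability over the fading distribution of the instantaneous SNR. A brute-force sketch of that average (BPSK over Gamma-distributed SNR as a stand-in for the generalized Gamma case; the function names and parameters are illustrative assumptions, not the paper's formalism):

    ```python
    import math
    import random

    def q_func(x):
        """Gaussian Q-function via the complementary error function."""
        return 0.5 * math.erfc(x / math.sqrt(2))

    def avg_bpsk_ber_gamma_fading(mean_snr, shape_m, n=200_000, seed=1):
        """Monte-Carlo estimate of the average BPSK bit error probability
        E[Q(sqrt(2*gamma))] when the instantaneous SNR gamma follows a
        Gamma distribution (the Nakagami-m power distribution) with the
        given mean -- the kind of fading average the MGF framework
        computes analytically."""
        rng = random.Random(seed)
        scale = mean_snr / shape_m          # Gamma scale so E[gamma] = mean_snr
        total = 0.0
        for _ in range(n):
            total += q_func(math.sqrt(2 * rng.gammavariate(shape_m, scale)))
        return total / n

    # shape_m = 1 reduces to Rayleigh fading, where the known closed form
    # is 0.5 * (1 - sqrt(g / (1 + g))) for mean SNR g.
    ber = avg_bpsk_ber_gamma_fading(mean_snr=10.0, shape_m=1.0)
    ```

    For mean SNR 10 the Rayleigh closed form gives about 0.023, which the Monte-Carlo estimate should approach; the MGF approach replaces this sampling with a single finite-range integral.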

  18. Impossibility of Classically Simulating One-Clean-Qubit Model with Multiplicative Error

    Science.gov (United States)

    Fujii, Keisuke; Kobayashi, Hirotada; Morimae, Tomoyuki; Nishimura, Harumichi; Tamate, Shuhei; Tani, Seiichiro

    2018-05-01

    The one-clean-qubit model (or the deterministic quantum computation with one quantum bit model) is a restricted model of quantum computing where all but a single input qubits are maximally mixed. It is known that the probability distribution of measurement results on three output qubits of the one-clean-qubit model cannot be classically efficiently sampled within a constant multiplicative error unless the polynomial-time hierarchy collapses to the third level [T. Morimae, K. Fujii, and J. F. Fitzsimons, Phys. Rev. Lett. 112, 130502 (2014), 10.1103/PhysRevLett.112.130502]. It was open whether we can keep the no-go result while reducing the number of output qubits from three to one. Here, we solve the open problem affirmatively. We also show that the third-level collapse of the polynomial-time hierarchy can be strengthened to the second-level one. The strengthening of the collapse level from the third to the second also holds for other subuniversal models such as the instantaneous quantum polynomial model [M. Bremner, R. Jozsa, and D. J. Shepherd, Proc. R. Soc. A 467, 459 (2011), 10.1098/rspa.2010.0301] and the boson sampling model [S. Aaronson and A. Arkhipov, STOC 2011, p. 333]. We additionally study the classical simulatability of the one-clean-qubit model with further restrictions on the circuit depth or the gate types.

  19. Quantum secret sharing via local operations and classical communication.

    Science.gov (United States)

    Yang, Ying-Hui; Gao, Fei; Wu, Xia; Qin, Su-Juan; Zuo, Hui-Juan; Wen, Qiao-Yan

    2015-11-20

    We investigate the distinguishability of orthogonal multipartite entangled states in d-qudit system by restricted local operations and classical communication. According to these properties, we propose a standard (2, n)-threshold quantum secret sharing scheme (called LOCC-QSS scheme), which solves the open question in [Rahaman et al., Phys. Rev. A, 91, 022330 (2015)]. On the other hand, we find that all the existing (k, n)-threshold LOCC-QSS schemes are imperfect (or "ramp"), i.e., unauthorized groups can obtain some information about the shared secret. Furthermore, we present a (3, 4)-threshold LOCC-QSS scheme which is close to perfect.

  20. Aerogel Antennas Communications Study Using Error Vector Magnitude Measurements

    Science.gov (United States)

    Miranda, Felix A.; Mueller, Carl H.; Meador, Mary Ann B.

    2014-01-01

    This presentation discusses an aerogel antennas communication study using error vector magnitude (EVM) measurements. The study was performed using 2x4 element polyimide (PI) aerogel-based phased arrays designed for operation at 5 GHz as transmit (Tx) and receive (Rx) antennas separated by a line of sight (LOS) distance of 8.5 meters. The results of the EVM measurements demonstrate that polyimide aerogel antennas work appropriately to support digital communication links with typically used modulation schemes such as QPSK and pi/4 DQPSK. As such, PI aerogel antennas with higher gain, larger bandwidth and lower mass than typically used microwave laminates could be suitable to enable aerospace-to-ground communication links with enough channel capacity to support voice, data and video links from CubeSats, unmanned air vehicles (UAV), and commercial aircraft.

  1. Aerogel Antennas Communications Study Using Error Vector Magnitude Measurements

    Science.gov (United States)

    Miranda, Felix A.; Mueller, Carl H.; Meador, Mary Ann B.

    2014-01-01

    This paper discusses an aerogel antennas communication study using error vector magnitude (EVM) measurements. The study was performed using 4x2 element polyimide (PI) aerogel-based phased arrays designed for operation at 5 GHz as transmit (Tx) and receive (Rx) antennas separated by a line of sight (LOS) distance of 8.5 meters. The results of the EVM measurements demonstrate that polyimide aerogel antennas work appropriately to support digital communication links with typically used modulation schemes such as QPSK and pi/4 DQPSK. As such, PI aerogel antennas with higher gain, larger bandwidth and lower mass than typically used microwave laminates could be suitable to enable aerospace-to-ground communication links with enough channel capacity to support voice, data and video links from cubesats, unmanned air vehicles (UAV), and commercial aircraft.
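    The EVM figure of merit used in both of the aerogel-antenna records above is the RMS magnitude of the error vector between received and ideal constellation symbols, normalized to the RMS reference power. A minimal sketch (the QPSK symbols and the perturbation below are hypothetical, not measured data from the study):

    ```python
    import cmath
    import math

    def evm_percent(measured, reference):
        """RMS error vector magnitude as a percentage of the RMS
        reference-constellation magnitude: sqrt(E|m - r|^2 / E|r|^2)."""
        err = sum(abs(m - r) ** 2 for m, r in zip(measured, reference))
        ref = sum(abs(r) ** 2 for r in reference)
        return 100.0 * math.sqrt(err / ref)

    # Hypothetical ideal QPSK constellation and a received version with a
    # small additive perturbation of magnitude 0.05 per symbol.
    ideal = [1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j]
    received = [s + 0.05 * cmath.exp(1j * k) for k, s in enumerate(ideal)]
    evm = evm_percent(received, ideal)
    ```

    Since every error vector here has magnitude 0.05 against reference symbols of magnitude √2, the EVM comes out near 3.5%; link-qualification work of the kind described above compares such values against the EVM limits of the modulation scheme in use.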

  2. Communication Error Management in Law Enforcement Interactions : a receiver's perspective

    NARCIS (Netherlands)

    Oostinga, Miriam; Giebels, Ellen; Taylor, Paul Jonathon

    2018-01-01

    Two experiments explore the effect of law enforcement officers’ communication errors and their response strategies on a suspect’s trust in the officer; established rapport and hostility; and, the amount and quality of information shared. Students were questioned online by an exam board member about

  3. Stable long-time semiclassical description of zero-point energy in high-dimensional molecular systems.

    Science.gov (United States)

    Garashchuk, Sophya; Rassolov, Vitaly A

    2008-07-14

    Semiclassical implementation of the quantum trajectory formalism [J. Chem. Phys. 120, 1181 (2004)] is further developed to give a stable long-time description of zero-point energy in anharmonic systems of high dimensionality. The method is based on a numerically cheap linearized quantum force approach; stabilizing terms compensating for the linearization errors are added into the time-evolution equations for the classical and nonclassical components of the momentum operator. The wave function normalization and energy are rigorously conserved. Numerical tests are performed for model systems of up to 40 degrees of freedom.

  4. Unequal error control scheme for dimmable visible light communication systems

    Science.gov (United States)

    Deng, Keyan; Yuan, Lei; Wan, Yi; Li, Huaan

    2017-01-01

    Visible light communication (VLC), which has the advantages of a very large bandwidth, high security, and freedom from license-related restrictions and electromagnetic interference, has attracted much interest. Because a VLC system simultaneously performs illumination and communication functions, dimming control, efficiency, and reliable transmission are significant and challenging issues for such systems. In this paper, we propose a novel unequal error control (UEC) scheme in which expanding window fountain (EWF) codes in an on-off keying (OOK)-based VLC system are used to support different dimming target values. To evaluate the performance of the scheme for various dimming target values, we apply it to H.264 scalable video coding bitstreams in a VLC system. The results of simulations performed using additive white Gaussian noise (AWGN) with different signal-to-noise ratios (SNRs) are used to compare the performance of the proposed scheme for various dimming target values. It is found that the proposed UEC scheme enables earlier base layer recovery compared to the use of the equal error control (EEC) scheme for different dimming target values, and therefore affords robust transmission for scalable video multicast over optical wireless channels. This is because of the unequal error protection (UEP) and unequal recovery time (URT) of the EWF code in the proposed scheme.

  5. Mixed quantum/classical investigation of the photodissociation of NH3(A-tilde) and a practical method for maintaining zero-point energy in classical trajectories

    International Nuclear Information System (INIS)

    Bonhommeau, David; Truhlar, Donald G.

    2008-01-01

    The photodissociation dynamics of ammonia upon excitation of the out-of-plane bending mode (mode ν2 with n2=0,…,6 quanta of vibration) in the Ã electronic state is investigated by means of several mixed quantum/classical methods, and the calculated final-state properties are compared to experiments. Five mixed quantum/classical methods are tested: one mean-field approach (the coherent switching with decay of mixing method), two surface-hopping methods [the fewest switches with time uncertainty (FSTU) and FSTU with stochastic decay (FSTU/SD) methods], and two surface-hopping methods with zero-point energy (ZPE) maintenance [the FSTU/SD+trajectory projection onto ZPE orbit (TRAPZ) and FSTU/SD+minimal TRAPZ (mTRAPZ) methods]. We found a qualitative difference between final NH2 internal energy distributions obtained for n2=0 and n2>1, as observed in experiments. Distributions obtained for n2=1 present an intermediate behavior between distributions obtained for smaller and larger n2 values. The dynamics is found to be highly electronically nonadiabatic with all these methods. NH2 internal energy distributions may have a negative energy tail when the ZPE is not maintained throughout the dynamics. The original TRAPZ method was designed to maintain ZPE in classical trajectories, but we find that it leads to unphysically high internal vibrational energies. The mTRAPZ method, which is new in this work and provides a general method for maintaining ZPE in either single-surface or multisurface trajectories, does not lead to unphysical results and is much less time consuming. The effect of maintaining ZPE in mixed quantum/classical dynamics is discussed in terms of agreement with experimental findings. The dynamics for n2=0 and n2=6 are also analyzed to reveal details not available from experiment, in particular, the time required for quenching of electronic excitation and the adiabatic energy gap and geometry at the time of quenching.

  6. Compensation of position errors in passivity based teleoperation over packet switched communication networks

    NARCIS (Netherlands)

    Secchi, C; Stramigioli, Stefano; Fantuzzi, C.

    Because of the use of scattering based communication channels, passivity based telemanipulation systems can be subject to a steady state position error between master and slave robots. In this paper, we consider the case in which the passive master and slave sides communicate through a packet

  7. Zeros of smallest modulus of functions resembling exp(z

    Directory of Open Access Journals (Sweden)

    Kenneth B. Stolarsky

    1982-01-01

    To determine (in various senses) the zeros of the Laplace transform of a signed mass distribution is of great importance for many problems in classical analysis and number theory. For example, if the mass consists of finitely many atoms, the transform is an exponential polynomial. This survey studies what is known when the distribution is a probability density function of small variance, and examines in what sense the zeros must have large moduli. In particular, classical results on Bessel function zeros, of Szegö on zeros of partial sums of the exponential, of I. J. Schoenberg on k-times positive functions, and a result stemming from Graeffe's method, are all presented from a unified probabilistic point of view.

  8. Condition for unambiguous state discrimination using local operations and classical communication

    International Nuclear Information System (INIS)

    Chefles, Anthony

    2004-01-01

    We obtain a necessary and sufficient condition for a finite set of states of a finite-dimensional multiparticle quantum system to be amenable to unambiguous discrimination using local operations and classical communication. This condition is valid for states which may be mixed, entangled, or both. When the support of the set of states is the entire multiparticle Hilbert space, this condition is found to have an intriguing connection with the theory of entanglement witnesses

  9. Comparative Study of Communication Error between Conventional and Digital MCR Operators in NPPs

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Seung Geun; Seong, Poong Hyun [Korea Advanced Institute of Science and Technology, Daejeon (Korea, Republic of)

    2015-05-15

    In this regard, appropriate communication is directly related to efficient and safe system operation, and inappropriate communication is one of the main causes of accidents in various industries, since it can cause a lack of necessary information exchange between operators and lead to serious consequences in large process systems such as nuclear power plants. According to a study conducted by Y. Hirotsu in 2001, about 25 percent of human-error-caused incidents in NPPs were related to communication issues. Other studies reported that 85 percent of human-error-caused incidents in the aviation industry and 92 percent in the railway industry were related to communication problems. Accordingly, the importance of efforts to reduce inappropriate communication has been emphasized in order to enhance the safety of such systems. As a result, the average ratio of inappropriate communication in digital MCRs was slightly higher than that in conventional MCRs, while the average ratio of no communication in digital MCRs was much smaller than that in conventional MCRs. Regarding the average ratio of inappropriate communication, it can be inferred that operators are still more familiar with conventional MCRs than with digital MCRs. More case studies are required for a more precise comparison, since only three cases were examined for digital MCRs. However, a similar result is expected because there are no differences in communication method, although there are many differences in the way procedures are carried out.

  10. The contrasting roles of Planck's constant in classical and quantum theories

    Science.gov (United States)

    Boyer, Timothy H.

    2018-04-01

    We trace the historical appearance of Planck's constant in physics, and we note that initially the constant did not appear in connection with quanta. Furthermore, we emphasize that Planck's constant can appear in both classical and quantum theories. In both theories, Planck's constant sets the scale of atomic phenomena. However, the roles played in the foundations of the theories are sharply different. In quantum theory, Planck's constant is crucial to the structure of the theory. On the other hand, in classical electrodynamics, Planck's constant is optional, since it appears only as the scale factor for the (homogeneous) source-free contribution to the general solution of Maxwell's equations. Since classical electrodynamics can be solved while taking the homogeneous source-free contribution in the solution as zero or non-zero, there are naturally two different theories of classical electrodynamics, one in which Planck's constant is taken as zero and one where it is taken as non-zero. The textbooks of classical electromagnetism present only the version in which Planck's constant is taken to vanish.

  11. Detecting a set of entanglement measures in an unknown tripartite quantum state by local operations and classical communication

    International Nuclear Information System (INIS)

    Bai Yankui; Li Shushen; Zheng Houzhi; Wang, Z. D.

    2006-01-01

    We propose a more general method for detecting a set of entanglement measures, i.e., negativities, in an arbitrary tripartite quantum state by local operations and classical communication. To accomplish the detection task using this method, three observers do not need to perform partial transposition maps by the structural physical approximation; instead, they only need to collectively measure some functions via three local networks supplemented by classical communication. With these functions, they are able to determine the set of negativities related to the tripartite quantum state.

  12. The Neuroelectromagnetic Inverse Problem and the Zero Dipole Localization Error

    Directory of Open Access Journals (Sweden)

    Rolando Grave de Peralta

    2009-01-01

    A tomography of neural sources could be constructed from EEG/MEG recordings once the neuroelectromagnetic inverse problem (NIP) is solved. Unfortunately the NIP lacks a unique solution and therefore additional constraints are needed to achieve uniqueness. Researchers are then confronted with the dilemma of choosing one solution on the basis of the advantages publicized by their authors. This study aims to help researchers better guide their choices by clarifying what is hidden behind inverse solutions oversold by their apparently optimal properties for localizing single sources. Here, we introduce an inverse solution (ANA) attaining perfect localization of single sources to illustrate how spurious sources emerge and destroy the reconstruction of simultaneously active sources. Although ANA is probably the simplest and most robust alternative for data generated by a single dominant source plus noise, the main contribution of this manuscript is to show that zero localization error for single sources is a trivial and largely uninformative property, unable to predict the performance of an inverse solution in the presence of simultaneously active sources. We recommend, as the most logical strategy for solving the NIP, the incorporation of sound additional a priori information about neural generators that supplements the information contained in the data.

  13. Does the A-not-B error in adult pet dogs indicate sensitivity to human communication?

    Science.gov (United States)

    Kis, Anna; Topál, József; Gácsi, Márta; Range, Friederike; Huber, Ludwig; Miklósi, Adám; Virányi, Zsófia

    2012-07-01

    Recent dog-infant comparisons have indicated that the experimenter's communicative signals in object hide-and-search tasks increase the probability of perseverative (A-not-B) errors in both species (Topál et al. 2009). These behaviourally similar results, however, might reflect different mechanisms in dogs and in children. Similar errors may occur if the motor response of retrieving the object during the A trials cannot be inhibited in the B trials or if the experimenter's movements and signals toward the A hiding place in the B trials ('sham-baiting') distract the dogs' attention. In order to test these hypotheses, we tested dogs similarly to Topál et al. (2009) but eliminated the motor search in the A trials and 'sham-baiting' in the B trials. We found that neither an inability to inhibit previously rewarded motor response nor insufficiencies in their working memory and/or attention skills can explain dogs' erroneous choices. Further, we replicated the finding that dogs have a strong tendency to commit the A-not-B error after ostensive-communicative hiding and demonstrated the crucial effect of socio-communicative cues as the A-not-B error diminishes when location B is ostensively enhanced. These findings further support the hypothesis that the dogs' A-not-B error may reflect a special sensitivity to human communicative cues. Such object-hiding and search tasks provide a typical case for how susceptibility to human social signals could (mis)lead domestic dogs.

  14. Relativistic quantum channel of communication through field quanta

    International Nuclear Information System (INIS)

    Cliche, M.; Kempf, A.

    2010-01-01

    Setups in which a system Alice emits field quanta that a system Bob receives are prototypical for wireless communication and have been extensively studied. In the most basic setup, Alice and Bob are modeled as Unruh-DeWitt detectors for scalar quanta, and the only noise in their communication is due to quantum fluctuations. For this basic setup, we construct the corresponding information-theoretic quantum channel. We calculate the classical channel capacity as a function of the spacetime separation, and we confirm that the classical as well as the quantum channel capacity are strictly zero for spacelike separations. We show that this channel can be used to entangle Alice and Bob instantaneously. Alice and Bob are shown to extract this entanglement from the vacuum through a Casimir-Polder effect.

  15. Fermions from classical statistics

    International Nuclear Information System (INIS)

    Wetterich, C.

    2010-01-01

    We describe fermions in terms of a classical statistical ensemble. The states τ of this ensemble are characterized by a sequence of values one or zero, or a corresponding set of two-level observables. Every classical probability distribution can be associated to a quantum state for fermions. If the time evolution of the classical probabilities p_τ amounts to a rotation of the wave function q_τ(t) = ±√(p_τ(t)), we infer the unitary time evolution of a quantum system of fermions according to a Schroedinger equation. We establish how such classical statistical ensembles can be mapped to Grassmann functional integrals. Quantum field theories for fermions arise for a suitable time evolution of classical probabilities for generalized Ising models.
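
    As an illustration of the map q_τ = ±√(p_τ) described above, the following minimal sketch (the four-state ensemble, sign choices and rotation angle are all hypothetical, chosen only for illustration) checks numerically that an orthogonal rotation of the wave function preserves normalization of the classical probabilities:

```python
import numpy as np

# Toy classical ensemble with four states tau (hypothetical example,
# not from the paper): probabilities p_tau and associated amplitudes
# q_tau = +/- sqrt(p_tau), as in the abstract.
p = np.array([0.4, 0.3, 0.2, 0.1])
q = np.array([1, -1, 1, -1]) * np.sqrt(p)  # the signs are free choices

# An orthogonal rotation acting on q models the unitary,
# Schroedinger-like evolution; it preserves sum(q**2) = sum(p) = 1.
theta = 0.7
R = np.identity(4)
R[0, 0] = R[1, 1] = np.cos(theta)
R[0, 1], R[1, 0] = -np.sin(theta), np.sin(theta)

q_t = R @ q
p_t = q_t ** 2  # evolved classical probabilities

print(p_t.sum())  # stays 1 up to rounding
```

    Any orthogonal (more generally, unitary) matrix would do here; the block rotation on two states is just the simplest nontrivial case.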

  16. On the average capacity and bit error probability of wireless communication systems

    KAUST Repository

    Yilmaz, Ferkan; Alouini, Mohamed-Slim

    2011-01-01

    The average binary error probabilities and the average capacity of wireless communication systems over generalized fading channels have been analyzed separately in the past. This paper introduces a novel moment generating function

  17. Abnormal error monitoring in math-anxious individuals: evidence from error-related brain potentials.

    Directory of Open Access Journals (Sweden)

    Macarena Suárez-Pellicioni

    This study used event-related brain potentials to investigate whether math anxiety is related to abnormal error monitoring processing. Seventeen high math-anxious (HMA) and seventeen low math-anxious (LMA) individuals were presented with a numerical and a classical Stroop task. Groups did not differ in terms of trait or state anxiety. We found an enhanced error-related negativity (ERN) in the HMA group when subjects committed an error on the numerical Stroop task, but not on the classical Stroop task. Groups did not differ in terms of the correct-related negativity component (CRN), the error positivity component (Pe), classical behavioral measures or post-error measures. The amplitude of the ERN was negatively related to participants' math anxiety scores, showing a more negative amplitude as the score increased. Moreover, using standardized low resolution electromagnetic tomography (sLORETA) we found greater activation of the insula in errors on a numerical task as compared to errors in a non-numerical task only for the HMA group. The results were interpreted according to the motivational significance theory of the ERN.

  18. Confidence Intervals Verification for Simulated Error Rate Performance of Wireless Communication System

    KAUST Repository

    Smadi, Mahmoud A.; Ghaeb, Jasim A.; Jazzar, Saleh; Saraereh, Omar A.

    2012-01-01

    In this paper, we derived an efficient simulation method to evaluate the error rate of wireless communication system. Coherent binary phase-shift keying system is considered with imperfect channel phase recovery. The results presented demonstrate

  19. A test of inflated zeros for Poisson regression models.

    Science.gov (United States)

    He, Hua; Zhang, Hui; Ye, Peng; Tang, Wan

    2017-01-01

    Excessive zeros are common in practice and may cause overdispersion and invalidate inference when fitting Poisson regression models. There is a large body of literature on zero-inflated Poisson models. However, methods for testing whether there are excessive zeros are less well developed. The Vuong test comparing a Poisson and a zero-inflated Poisson model is commonly applied in practice. However, the type I error of the test often deviates seriously from the nominal level, casting serious doubt on the validity of the test in such applications. In this paper, we develop a new approach for testing inflated zeros under the Poisson model. Unlike the Vuong test for inflated zeros, our method does not require fitting a zero-inflated Poisson model to perform the test. Simulation studies show that, compared with the Vuong test, our approach is not only better at controlling the type I error rate but also yields more power.
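
    The intuition behind testing for inflated zeros can be sketched numerically. The following toy simulation (parameters are illustrative, and this is only a crude excess-zeros diagnostic, not the test developed by the authors) shows how zero-inflated data produce far more zeros than a plain Poisson fit predicts:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate zero-inflated Poisson data: with probability pi the count
# is a structural zero, otherwise it is Poisson(lam). (Illustrative
# parameters, not taken from the paper's simulation study.)
n, pi, lam = 10_000, 0.3, 2.0
counts = rng.poisson(lam, size=n)
counts[rng.random(n) < pi] = 0

# Crude diagnostic: fit a plain Poisson by the sample mean and
# compare its implied zero probability to the observed zero fraction.
lam_hat = counts.mean()        # MLE of a plain Poisson model
p0_pred = np.exp(-lam_hat)     # Poisson-implied P(X = 0)
p0_obs = np.mean(counts == 0)  # empirical zero fraction

print(p0_obs, p0_pred)  # observed zeros clearly exceed the prediction
```

    A formal test must also account for the sampling variability of both quantities, which is where approaches such as the one proposed in this paper differ from this naive comparison.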

  20. Asymptotically perfect discrimination in the local-operation-and-classical-communication paradigm

    International Nuclear Information System (INIS)

    Kleinmann, M.; Kampermann, H.; Bruss, D.

    2011-01-01

    We revisit the problem of discriminating orthogonal quantum states within the local-quantum-operation-and-classical-communication (LOCC) paradigm. Our particular focus is on the asymptotic situation where the parties have infinite resources and the protocol may become arbitrarily long. Our main result is a necessary condition for perfect asymptotic LOCC discrimination. As an application, we prove that for complete product bases, unlimited resources are of no advantage. On the other hand, we identify an example for which it still remains undecided whether unlimited resources are superior.

  1. A web-based team-oriented medical error communication assessment tool: development, preliminary reliability, validity, and user ratings.

    Science.gov (United States)

    Kim, Sara; Brock, Doug; Prouty, Carolyn D; Odegard, Peggy Soule; Shannon, Sarah E; Robins, Lynne; Boggs, Jim G; Clark, Fiona J; Gallagher, Thomas

    2011-01-01

    Multiple-choice exams are not well suited for assessing communication skills. Standardized patient assessments are costly and patient and peer assessments are often biased. Web-based assessment using video content offers the possibility of reliable, valid, and cost-efficient means for measuring complex communication skills, including interprofessional communication. We report development of the Web-based Team-Oriented Medical Error Communication Assessment Tool, which uses videotaped cases for assessing skills in error disclosure and team communication. Steps in development included (a) defining communication behaviors, (b) creating scenarios, (c) developing scripts, (d) filming video with professional actors, and (e) writing assessment questions targeting team communication during planning and error disclosure. Using valid data from 78 participants in the intervention group, coefficient alpha estimates of internal consistency were calculated based on the Likert-scale questions and ranged from α=.79 to α=.89 for each set of 7 Likert-type discussion/planning items and from α=.70 to α=.86 for each set of 8 Likert-type disclosure items. The preliminary test-retest Pearson correlation based on the scores of the intervention group was r=.59 for discussion/planning and r=.25 for error disclosure sections, respectively. Content validity was established through reliance on empirically driven published principles of effective disclosure as well as integration of expert views across all aspects of the development process. In addition, data from 122 medicine and surgical physicians and nurses showed high ratings for video quality (4.3 of 5.0), acting (4.3), and case content (4.5). Web assessment of communication skills appears promising. Physicians and nurses across specialties respond favorably to the tool.

  2. Classical calculation of the total ionization energy of helium-like atoms

    International Nuclear Information System (INIS)

    Karastoyanov, A.

    1990-01-01

    Quantum mechanics rejects the classical modelling of the microworld. One of the reasons is that Bohr's rules cannot be applied to many-electron atoms and molecules. But the many-body problem in classical mechanics has no analytical solution even for 3 particles, so numerical solutions must be used. The quantum Bohr rule expressing conservation of the moment of momentum for two particles is invalid in more complicated cases. Yet Bohr achieved some success for helium-like atoms. Bohr's formula for helium-like atoms is deduced again in this paper and its practical reliability is analyzed with contemporary data. The binding energy of the system is obtained in the simple form E = (Z − 1/4)²α²mc², where Z is the atomic number, α the fine structure constant, m the electron mass and c the speed of light in vacuum. The calculated values are compared with experimental data on the total ionization energy of helium-like atoms from He (Z = 2) to Cu (Z = 29). The error decreases quickly with increasing atomic mass, reaching zero for Cu. This indicates that the main source of error is the motion of the nucleus. The role of other possible causes is analyzed and proves negligible. (author). 1 tab, 4 refs
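
    The quoted formula is straightforward to evaluate. The sketch below computes E = (Z − 1/4)²α²mc² via the electron rest energy; for helium (Z = 2) it gives about 83.3 eV against the measured total ionization energy of about 79.0 eV, consistent with the abstract's claim that the error is largest for light nuclei:

```python
# Binding energy of helium-like ions from the abstract's formula
# E = (Z - 1/4)^2 * alpha^2 * m * c^2, evaluated using the electron
# rest energy m*c^2 = 510998.95 eV and alpha = 1/137.035999.
ALPHA = 1.0 / 137.035999
MC2_EV = 510998.95  # electron rest energy in eV

def total_ionization_eV(Z):
    """Total ionization energy of a helium-like ion per Bohr's formula."""
    return (Z - 0.25) ** 2 * ALPHA ** 2 * MC2_EV

print(round(total_ionization_eV(2), 1))   # helium: ~83.3 eV (measured: ~79.0 eV)
print(round(total_ionization_eV(29), 1))  # helium-like Cu, in eV
```

    Note that α²mc² is just the Hartree energy (≈ 27.21 eV), so the formula is (Z − 1/4)² Hartree.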

  3. Computer Learner Corpora: Analysing Interlanguage Errors in Synchronous and Asynchronous Communication

    Science.gov (United States)

    MacDonald, Penny; Garcia-Carbonell, Amparo; Carot Sierra, Jose Miguel

    2013-01-01

    This study focuses on the computer-aided analysis of interlanguage errors made by the participants in the telematic simulation IDEELS (Intercultural Dynamics in European Education through on-Line Simulation). The synchronous and asynchronous communication analysed was part of the MiLC Corpus, a multilingual learner corpus of texts written by…

  4. Comparison of Classical and Quantum Bremsstrahlung

    International Nuclear Information System (INIS)

    Pratt, R.H.; Uskov, D.B.; Korol, A.V.; Obolensky, O.I.

    2003-01-01

    Classical features persist in bremsstrahlung at surprisingly high energies, while quantum features are present at low energies. For Coulomb bremsstrahlung this is related to the similar properties of Coulomb scattering. For bremsstrahlung in a screened potential, the low-energy spectrum and angular distribution exhibit structures. In quantum mechanics these structures are associated with zeroes of particular angular-momentum-transfer matrix elements at particular energies, a continuation of the Cooper minima in atomic photoeffect. They lead to transparency windows in free-free absorption. The trajectories of these zeroes in the plane of initial and final transition energies (bound and continuum) have been explored. Corresponding features have now been seen in classical bremsstrahlung, resulting from reduced contributions from particular impact parameters at particular energies. This has suggested the possibility of a more unified treatment of classical and quantum bremsstrahlung, based on the singularities of the scattering amplitude in angular momentum.

  5. The Time Division Multi-Channel Communication Model and the Correlative Protocol Based on Quantum Time Division Multi-Channel Communication

    International Nuclear Information System (INIS)

    Liu Xiao-Hui; Pei Chang-Xing; Nie Min

    2010-01-01

    Based on classical time-division multi-channel communication theory, we present a scheme of quantum time-division multi-channel communication (QTDMC). Moreover, the model of a quantum time-division switch (QTDS) and a correlative protocol for QTDMC are proposed. The quantum bit error rate (QBER) is analyzed and a QBER simulation test is performed. The scheme shows that the QTDS can support multi-user communication through a quantum channel, that the QBER can meet the reliability requirements of communication, and that the QTDMC protocol is highly practical and portable. The QTDS scheme may play an important role in establishing large-scale quantum communication in the future. (general)

  6. Direct estimation of functionals of density operators by local operations and classical communication

    International Nuclear Information System (INIS)

    Alves, Carolina Moura; Horodecki, Pawel; Oi, Daniel K. L.; Kwek, L. C.; Ekert, Artur K.

    2003-01-01

    We present a method of direct estimation of important properties of a shared bipartite quantum state, within the ''distant laboratories'' paradigm, using only local operations and classical communication. We apply this procedure to spectrum estimation of shared states, and locally implementable structural physical approximations to incompletely positive maps. This procedure can also be applied to the estimation of channel capacity and measures of entanglement

  7. Minimum Probability of Error-Based Equalization Algorithms for Fading Channels

    Directory of Open Access Journals (Sweden)

    Janos Levendovszky

    2007-06-01

    Novel channel equalizer algorithms are introduced for wireless communication systems to combat channel distortions resulting from multipath propagation. The novel algorithms are based on newly derived bounds on the probability of error (PE) and guarantee better performance than the traditional zero-forcing (ZF) or minimum mean square error (MMSE) algorithms. The new equalization methods require channel state information, which is obtained by a fast adaptive channel identification algorithm. As a result, the combined convergence time needed for channel identification and PE minimization still remains smaller than the convergence time of traditional adaptive algorithms, yielding real-time equalization. The performance of the new algorithms is tested by extensive simulations on standard mobile channels.
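
    For context, the traditional ZF and MMSE equalizers that serve as the baselines here can be sketched in block form as follows (the channel taps, block length and noise level are illustrative; this is not the authors' PE-minimizing algorithm):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 3-tap multipath channel (illustrative taps, not from the paper).
h = np.array([1.0, 0.5, 0.2])
N = 64                                   # block of BPSK symbols
s = rng.choice([-1.0, 1.0], size=N)

# Convolution matrix H so that the received block is r = H @ s + noise.
H = np.zeros((N + len(h) - 1, N))
for k in range(N):
    H[k:k + len(h), k] = h

sigma2 = 0.01
r = H @ s + rng.normal(scale=np.sqrt(sigma2), size=H.shape[0])

# Zero-forcing block equalizer: least-squares inverse of the channel.
s_zf = np.linalg.pinv(H) @ r

# Linear MMSE block equalizer for unit-power symbols:
# s_hat = (H^T H + sigma2 * I)^{-1} H^T r.
s_mmse = np.linalg.solve(H.T @ H + sigma2 * np.eye(N), H.T @ r)

print((np.sign(s_zf) == s).mean(), (np.sign(s_mmse) == s).mean())
```

    The ZF solution inverts the channel exactly and can amplify noise, while the MMSE solution regularizes the inversion by the noise variance; the paper's point is that neither directly minimizes the probability of error.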

  8. Towards reporting standards for neuropsychological study results: A proposal to minimize communication errors with standardized qualitative descriptors for normalized test scores.

    Science.gov (United States)

    Schoenberg, Mike R; Rum, Ruba S

    2017-11-01

    Rapid, clear and efficient communication of neuropsychological results is essential to benefit patient care. Errors in communication are a leading cause of medical errors; nevertheless, there remains a lack of consistency in how neuropsychological scores are communicated. A major limitation in the communication of neuropsychological results is the inconsistent use of qualitative descriptors for standardized test scores and the use of vague terminology. A PubMed search from 1 Jan 2007 to 1 Aug 2016 was conducted to identify guidelines or consensus statements for the description and reporting of qualitative terms to communicate neuropsychological test scores. The review found the use of confusing and overlapping terms to describe various ranges of percentile standardized test scores. In response, we propose a simplified set of qualitative descriptors for normalized test scores (Q-Simple) as a means to reduce errors in communicating test results. The Q-Simple qualitative terms are: 'very superior', 'superior', 'high average', 'average', 'low average', 'borderline' and 'abnormal/impaired'. A case example illustrates the proposed Q-Simple qualitative classification system for communicating neuropsychological results for neurosurgical planning. The Q-Simple qualitative descriptor system is aimed at improving and standardizing the communication of standardized neuropsychological test scores. Further research is needed to evaluate neuropsychological communication errors. Conveying the clinical implications of neuropsychological results in a manner that minimizes the risk of communication errors is a quintessential component of evidence-based practice. Copyright © 2017 Elsevier B.V. All rights reserved.
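
    A descriptor system of this kind is straightforward to mechanize. The sketch below maps z-scores to the seven Q-Simple terms; the cutoffs are illustrative placeholders only, since the abstract does not state the published score ranges:

```python
def q_simple_label(z):
    """Map a normalized (z) test score to a Q-Simple descriptor.

    The cutoffs below are hypothetical placeholders for illustration;
    the published Q-Simple ranges may differ.
    """
    bands = [(2.0, 'very superior'),
             (1.3, 'superior'),
             (0.7, 'high average'),
             (-0.7, 'average'),
             (-1.3, 'low average'),
             (-2.0, 'borderline')]
    for cutoff, label in bands:
        if z >= cutoff:
            return label
    return 'abnormal/impaired'

print(q_simple_label(0.0))   # average
print(q_simple_label(-2.5))  # abnormal/impaired
```

    Fixing such a mapping in software is one way to enforce the consistency the authors argue for.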

  9. Performance of an Error Control System with Turbo Codes in Powerline Communications

    Directory of Open Access Journals (Sweden)

    Balbuena-Campuzano Carlos Alberto

    2014-07-01

    This paper reports the performance of turbo codes as an error control technique in PLC (Powerline Communications) data transmissions. For this system, computer simulations are used to model data networks based on the model classified in the technical literature as indoor, using OFDM (Orthogonal Frequency Division Multiplexing) as the modulation technique. Taking into account the channel, modulation and turbo codes, we propose a methodology to minimize the bit error rate (BER) as a function of the average received signal-to-noise ratio (SNR).
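
    As a baseline for BER-versus-SNR studies like this one, an uncoded BPSK link over AWGN can be simulated and checked against the closed-form error rate 0.5·erfc(√SNR). The sketch below deliberately omits the paper's OFDM modulation, powerline channel model and turbo coding, and the SNR points are arbitrary:

```python
import numpy as np
from math import erfc, sqrt

rng = np.random.default_rng(2)

def ber_mc(snr_db, n=200_000):
    """Monte Carlo BER of uncoded BPSK over AWGN at a given Eb/N0 in dB."""
    snr = 10 ** (snr_db / 10)
    bits = rng.integers(0, 2, n)
    tx = 2.0 * bits - 1.0                       # BPSK mapping 0 -> -1, 1 -> +1
    rx = tx + rng.normal(scale=np.sqrt(1 / (2 * snr)), size=n)
    return np.mean((rx > 0) != (bits == 1))     # hard-decision errors

def ber_theory(snr_db):
    """Closed-form BPSK-over-AWGN BER: 0.5 * erfc(sqrt(Eb/N0))."""
    snr = 10 ** (snr_db / 10)
    return 0.5 * erfc(sqrt(snr))

for snr_db in (0, 4, 8):
    print(snr_db, ber_mc(snr_db), ber_theory(snr_db))
```

    A coded system such as the turbo-coded one studied here would sit well below this uncoded curve at moderate-to-high SNR.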

  10. Optimal classical-communication-assisted local model of n-qubit Greenberger-Horne-Zeilinger correlations

    International Nuclear Information System (INIS)

    Tessier, Tracey E.; Caves, Carlton M.; Deutsch, Ivan H.; Eastin, Bryan; Bacon, Dave

    2005-01-01

    We present a model, motivated by the criterion of reality put forward by Einstein, Podolsky, and Rosen and supplemented by classical communication, which correctly reproduces the quantum-mechanical predictions for measurements of all products of Pauli operators on an n-qubit GHZ state (or 'cat state'). The n-2 bits employed by our model are shown to be optimal for the allowed set of measurements, demonstrating that the required communication overhead scales linearly with n. We formulate a connection between the generation of the local values utilized by our model and the stabilizer formalism, which leads us to conjecture that a generalization of this method will shed light on the content of the Gottesman-Knill theorem

  11. Quantum and classical ripples in graphene

    Science.gov (United States)

    Hašík, Juraj; Tosatti, Erio; Martoňák, Roman

    2018-04-01

    Thermal ripples of graphene are well understood at room temperature, but their quantum counterparts at low temperatures are in need of a realistic quantitative description. Here we present atomistic path-integral Monte Carlo simulations of freestanding graphene, which show upon cooling a striking classical-quantum evolution of height and angular fluctuations. The crossover takes place at ever-decreasing temperatures for ever-increasing wavelengths so that a completely quantum regime is never attained. Zero-temperature quantum graphene is flatter and smoother than classical graphene at large scales yet rougher at short scales. The angular fluctuation distribution of the normals can be quantitatively described by the coexistence of two Gaussians, one classical and strongly T-dependent, and one quantum, about 2° wide, of zero-point character. The quantum evolution of ripple-induced height and angular spread should be observable in electron diffraction in graphene and other two-dimensional materials, such as MoS2, bilayer graphene, boron nitride, etc.

  12. A second generation 50 Mbps VLSI level zero processing system prototype

    Science.gov (United States)

    Harris, Jonathan C.; Shi, Jeff; Speciale, Nick; Bennett, Toby

    1994-01-01

    Level Zero Processing (LZP) generally refers to telemetry data processing functions performed at ground facilities to remove all communication artifacts from instrument data. These functions typically include frame synchronization, error detection and correction, packet reassembly and sorting, playback reversal, merging, time-ordering, overlap deletion, and production of annotated data sets. The Data Systems Technologies Division (DSTD) at Goddard Space Flight Center (GSFC) has been developing high-performance Very Large Scale Integration Level Zero Processing Systems (VLSI LZPS) since 1989. The first VLSI LZPS prototype demonstrated 20 Megabits per second (Mbps) capability in 1992. With a new generation of high-density Application-Specific Integrated Circuits (ASICs) and a Mass Storage System (MSS) based on the High-Performance Parallel Peripheral Interface (HiPPI), a second prototype has been built that achieves full 50 Mbps performance. This paper describes the second-generation LZPS prototype based upon VLSI technologies.

  13. Error statistics during the propagation of short optical pulses in a high-speed fibreoptic communication line

    International Nuclear Information System (INIS)

    Shapiro, E G

    2008-01-01

    Simple analytic expressions are derived to approximate the bit error rate for data transmission through fibreoptic communication lines. The propagation of optical pulses is directly numerically simulated. Analytic estimates are in good agreement with numerical calculations. (fibreoptic communication)

  14. Classical limit for quantum mechanical energy eigenfunctions

    International Nuclear Information System (INIS)

    Sen, D.; Sengupta, S.

    2004-01-01

    The classical limit problem is discussed for the quantum mechanical energy eigenfunctions using the Wentzel-Kramers-Brillouin approximation, free from the problem at the classical turning points. A proper perspective of the whole issue is sought to appreciate the significance of the discussion. It is observed that for bound states in an arbitrary potential, an appropriate limiting condition is definable in terms of a dimensionless classical limit parameter, leading smoothly to all observable classical results. The most important results are the emergence of classical phase space, with the observable distribution functions non-zero only within the so-called classical region at the limit point, and the resolution of some well-known paradoxes. (author)

  15. Efficient quantum repeater with respect to both entanglement-concentration rate and complexity of local operations and classical communication

    Science.gov (United States)

    Su, Zhaofeng; Guan, Ji; Li, Lvzhou

    2018-01-01

    Quantum entanglement is an indispensable resource for many significant quantum information processing tasks. However, in practice, it is difficult to distribute quantum entanglement over a long distance, due to the absorption and noise in quantum channels. A solution to this challenge is a quantum repeater, which can extend the distance of entanglement distribution. In this scheme, the time consumption of classical communication and local operations takes an important place with respect to time efficiency. Motivated by this observation, we consider a basic quantum repeater scheme that focuses on not only the optimal rate of entanglement concentration but also the complexity of local operations and classical communication. First, we consider the case where two different two-qubit pure states are initially distributed in the scenario. We construct a protocol with the optimal entanglement-concentration rate and less consumption of local operations and classical communication. We also find a criterion for the projective measurements to achieve the optimal probability of creating a maximally entangled state between the two ends. Second, we consider the case in which two general pure states are prepared and general measurements are allowed. We get an upper bound on the probability for a successful measurement operation to produce a maximally entangled state without any further local operations.

  16. A Classical Delphi Study to Identify the Barriers of Pursuing Green Information and Communication Technologies

    Science.gov (United States)

    Gotay, Jose Antonio

    2013-01-01

    This qualitative, classical Delphi study served to explore the apparent lack of corporate commitment to prioritized Green Information Communication Technologies (ICTs), which could delay the economic and social benefits for maximizing the use of natural energy resources in a weak economy. The purpose of this study was to examine the leadership…

  17. Gaussian density matrices: Quantum analogs of classical states

    International Nuclear Information System (INIS)

    Mann, A.; Revzen, M.

    1993-01-01

    We study quantum analogs of classical situations, i.e. quantum states possessing some specific classical attribute(s). These states seem, quite generally, to have the form of Gaussian density matrices. Such states can always be parametrized as thermal squeezed states (TSS). We consider the following specific cases: (a) Two beams that are built from initial beams which passed through a beam splitter cannot, classically, be distinguished from (appropriately prepared) two independent beams that did not go through a splitter. The only quantum states possessing this classical attribute are TSS. (b) The classical Cramer's theorem was shown to have a quantum version (Hegerfeldt). Again, the states here are Gaussian density matrices. (c) The special case in the study of the quantum version of Cramer's theorem, viz. when the state obtained after partial tracing is a pure state, leads to the conclusion that all states involved are zero-temperature-limit TSS. The classical analogs here are Gaussians of zero width, i.e. all distributions are δ functions in phase space. (orig.)

  18. Novel relations between the ergodic capacity and the average bit error rate

    KAUST Repository

    Yilmaz, Ferkan

    2011-11-01

    Ergodic capacity and average bit error rate have been widely used to compare the performance of different wireless communication systems. Recent research has revealed the strong impact of these two performance indicators on the design and implementation of wireless technologies. However, to the best of our knowledge, direct links between these two performance indicators have not been explicitly proposed in the literature so far. In this paper, we propose novel relations between the ergodic capacity and the average bit error rate of an overall communication system using binary modulation schemes for signaling with a limited bandwidth and operating over generalized fading channels. More specifically, we show that these two performance measures can be represented in terms of each other, without the need to know the exact end-to-end statistical characterization of the communication channel. We validate the correctness and accuracy of our newly proposed relations and illustrate their usefulness by considering some classical examples. © 2011 IEEE.

  19. Error-correction coding for digital communications

    Science.gov (United States)

    Clark, G. C., Jr.; Cain, J. B.

    This book is written for the design engineer who must build the coding and decoding equipment and for the communication system engineer who must incorporate this equipment into a system. It is also suitable as a senior-level or first-year graduate text for an introductory one-semester course in coding theory. Fundamental concepts of coding are discussed along with group codes, taking into account basic principles, practical constraints, performance computations, coding bounds, generalized parity check codes, polynomial codes, and important classes of group codes. Other topics explored are related to simple nonalgebraic decoding techniques for group codes, soft decision decoding of block codes, algebraic techniques for multiple error correction, the convolutional code structure and Viterbi decoding, syndrome decoding techniques, and sequential decoding techniques. System applications are also considered, giving attention to concatenated codes, coding for the white Gaussian noise channel, interleaver structures for coded systems, and coding for burst noise channels.

  20. Zeroing and testing units developed for Gerdien atmospheric ion detectors

    International Nuclear Information System (INIS)

    Kolarz, P.; Marinkovic, B.P.; Filipovic, D.M.

    2005-01-01

    Low current measurements in atmospheric ion detection using a Gerdien condenser are subject to numerous sources of error. The zeroing and testing units described in this article, connected as modules to this type of detector, enable some of these errors to be found and eliminated. The zeroing unit provides digital compensation of the zero drift with a digital sample-and-hold circuit of 12-bit resolution. It overcomes difficulties related to zero drift and to the techniques used in zero-conductivity determination when the accelerating potential or airflow rate is zero. The testing unit is a current reference of nominally 10^-12 A intended for testing and correcting the system for current leakage and other measurement deviations due to changes in atmospheric parameters. This unit is an independent battery-powered module, which provides a charge of 10^-12 C per cycle (frequency of order 1 Hz) to the collecting electrode. The control of Gerdien devices is substantially simplified using the zeroing and testing units realized here. Both units are used during the 'zero conductivity' regime only.
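
    As a consistency check on the figures above, a charge of 10^-12 C delivered once per second corresponds to an average injected current of 10^-12 A, the unit's nominal reference value. A minimal sketch of that arithmetic (the function name is illustrative, not from the article):

```python
# Average current delivered by the testing unit: I = Q_per_cycle * f.
# Values from the record: 1e-12 C per cycle at a cycle frequency of ~1 Hz.
def average_current(charge_per_cycle_c: float, frequency_hz: float) -> float:
    """Mean current (A) injected into the collecting electrode."""
    return charge_per_cycle_c * frequency_hz

print(average_current(1e-12, 1.0))  # nominal 1e-12 A current reference
```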

  1. Zero-point oscillations, zero-point fluctuations, and fluctuations of zero-point oscillations

    International Nuclear Information System (INIS)

    Khalili, Farit Ya

    2003-01-01

    Several physical effects and methodological issues relating to the ground state of an oscillator are considered. Even in the simplest case of an ideal lossless harmonic oscillator, its ground state exhibits properties that are unusual from the classical point of view. In particular, the mean value of the product of two non-negative observables, kinetic and potential energies, is negative in the ground state. It is shown that semiclassical and rigorous quantum approaches yield substantially different results for the ground state energy fluctuations of an oscillator with finite losses. The dependence of zero-point fluctuations on the boundary conditions is considered. Using this dependence, it is possible to transmit information without emitting electromagnetic quanta. Fluctuations of electromagnetic pressure of zero-point oscillations are analyzed, and the corresponding mechanical friction is considered. This friction can be viewed as the most fundamental mechanism limiting the quality factor of mechanical oscillators. Observation of these effects exceeds the possibilities of contemporary experimental physics but almost undoubtedly will be possible in the near future. (methodological notes)

  2. Data-Driven Method for Wind Turbine Yaw Angle Sensor Zero-Point Shifting Fault Detection

    Directory of Open Access Journals (Sweden)

    Yan Pei

    2018-03-01

    Full Text Available Wind turbine yaw control plays an important role in increasing wind turbine production and also in protecting the wind turbine. Accurate measurement of the yaw angle is the basis of an effective wind turbine yaw controller. The accuracy of yaw angle measurement is affected significantly by the problem of zero-point shifting. Hence, it is essential to evaluate the zero-point shifting error of wind turbines on-line in order to improve the reliability of yaw angle measurement in real time. In particular, qualitative evaluation of the zero-point shifting error could help wind farm operators to carry out prompt and cost-effective maintenance on yaw angle sensors. With the aim of qualitatively evaluating the zero-point shifting error, this paper first defines the yaw angle sensor zero-point shifting fault. A data-driven method is then proposed to detect the zero-point shifting fault based on Supervisory Control and Data Acquisition (SCADA) data. The method detects the zero-point shifting fault by analyzing the power performance under different yaw angles, partitioning the SCADA data into bins according to both wind speed and yaw angle in order to evaluate the power performance in detail. An indicator is proposed for power performance evaluation under each yaw angle, and the yaw angle with the largest indicator is taken as the yaw angle measurement error. A zero-point shifting fault triggers an alarm if the error is larger than a predefined threshold. Case studies from several actual wind farms prove the effectiveness of the proposed method in detecting the zero-point shifting fault and in improving wind turbine performance. The results could help wind farm operators to make prompt adjustments when a large yaw angle measurement error exists.
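
    The detection logic described in the abstract can be sketched roughly as follows. This is an illustrative simplification, not the paper's implementation: it assumes SCADA records already filtered to a single wind-speed bin, bins them by measured yaw angle, uses mean power per bin as the indicator, and flags a fault when the best-performing yaw bin sits far from zero. All names, bin widths and thresholds are hypothetical:

```python
from collections import defaultdict

def estimate_yaw_offset(records, yaw_bin=2.0, threshold=4.0):
    """Estimate the zero-point shift from (wind_speed, yaw_angle, power)
    SCADA records: bin by yaw angle, score each bin by its mean power
    (the 'indicator'), and flag a fault when the best-performing yaw bin
    is further from zero than the threshold."""
    bins = defaultdict(list)
    for _, yaw, power in records:
        bins[round(yaw / yaw_bin) * yaw_bin].append(power)
    indicator = {b: sum(p) / len(p) for b, p in bins.items()}
    offset = max(indicator, key=indicator.get)  # yaw with largest indicator
    return offset, abs(offset) > threshold      # (estimated error, fault?)
```

    With synthetic records whose power curve peaks at a measured yaw of 6 degrees, the estimator returns that offset and raises the fault flag.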

  3. Interlacing of zeros of quasi-orthogonal meixner polynomials | Driver ...

    African Journals Online (AJOL)

    ... interlacing of zeros of quasi-orthogonal Meixner polynomials Mn(x;β; c) with the zeros of their nearest orthogonal counterparts Mt(x;β + k; c), l; n ∈ ℕ, k ∈ {1; 2}; is also discussed. Mathematics Subject Classication (2010): 33C45, 42C05. Key words: Discrete orthogonal polynomials, quasi-orthogonal polynomials, Meixner

  4. Near field communications technology and the potential to reduce medication errors through multidisciplinary application

    LENUS (Irish Health Repository)

    O’Connell, Emer

    2016-07-01

    Patient safety requires optimal management of medications. Electronic systems are encouraged to reduce medication errors. Near field communications (NFC) is an emerging technology that may be used to develop novel medication management systems.

  5. Asymptotics for the greatest zeros of solutions of a particular O.D.E.

    Directory of Open Access Journals (Sweden)

    Silvia Noschese

    1994-05-01

    Full Text Available This paper deals with the Liouville-Stekeloff method for approximating solutions of homogeneous linear ODE and a general result due to Tricomi which provides estimates for the zeros of functions by means of the knowledge of an asymptotic representation. From the classical tools we deduce information about the asymptotics of the greatest zeros of a class of solutions of a particular ODE, including the classical Hermite polynomials.

  6. Quantum money with classical verification

    Energy Technology Data Exchange (ETDEWEB)

    Gavinsky, Dmitry [NEC Laboratories America, Princeton, NJ (United States)

    2014-12-04

    We propose and construct a quantum money scheme that allows verification through classical communication with a bank. This is the first demonstration that a secure quantum money scheme exists that does not require quantum communication for coin verification. Our scheme is secure against adaptive adversaries - this property is not directly related to the possibility of classical verification, nevertheless none of the earlier quantum money constructions is known to possess it.

  7. Quantum money with classical verification

    International Nuclear Information System (INIS)

    Gavinsky, Dmitry

    2014-01-01

    We propose and construct a quantum money scheme that allows verification through classical communication with a bank. This is the first demonstration that a secure quantum money scheme exists that does not require quantum communication for coin verification. Our scheme is secure against adaptive adversaries - this property is not directly related to the possibility of classical verification, nevertheless none of the earlier quantum money constructions is known to possess it.

  8. Near field communications technology and the potential to reduce medication errors through multidisciplinary application.

    Science.gov (United States)

    O'Connell, Emer; Pegler, Joe; Lehane, Elaine; Livingstone, Vicki; McCarthy, Nora; Sahm, Laura J; Tabirca, Sabin; O'Driscoll, Aoife; Corrigan, Mark

    2016-01-01

    Patient safety requires optimal management of medications. Electronic systems are encouraged to reduce medication errors. Near field communications (NFC) is an emerging technology that may be used to develop novel medication management systems. An NFC-based system was designed to facilitate prescribing, administration and review of medications commonly used on surgical wards. Final year medical, nursing, and pharmacy students were recruited to test the electronic system in a cross-over observational setting on a simulated ward. Medication errors were compared against errors recorded using a paper-based system. A significant difference in the commission of medication errors was seen when NFC and paper-based medication systems were compared. Paper use resulted in a mean of 4.09 errors per prescribing round while NFC prescribing resulted in a mean of 0.22 errors per simulated prescribing round (P=0.000). Likewise, medication administration errors were reduced from a mean of 2.30 per drug round with the paper system to a mean of 0.80 errors per round using NFC. An NFC-based medication system may be used to effectively reduce medication errors in a simulated ward environment.

  9. Trajectory-based understanding of the quantum-classical transition for barrier scattering

    Science.gov (United States)

    Chou, Chia-Chun

    2018-06-01

    The quantum-classical transition of wave packet barrier scattering is investigated using a hydrodynamic description in the framework of a nonlinear Schrödinger equation. The nonlinear equation provides a continuous description for the quantum-classical transition of physical systems by introducing a degree of quantumness. Based on the transition equation, the transition trajectory formalism is developed to establish the connection between classical and quantum trajectories. The quantum-classical transition is then analyzed for the scattering of a Gaussian wave packet from an Eckart barrier and the decay of a metastable state. Computational results for the evolution of the wave packet and the transmission probabilities indicate that classical results are recovered when the degree of quantumness tends to zero. Classical trajectories are in excellent agreement with the transition trajectories in the classical limit, except in some regions where transition trajectories cannot cross because of the single-valuedness of the transition wave function. As the computational results demonstrate, the process in which the Planck constant tends to zero is equivalent to the gradual removal of quantum effects originating from the quantum potential. This study provides an insightful trajectory interpretation for the quantum-classical transition of wave packet barrier scattering.

  10. Systems and methods for tracking a device in zero-infrastructure and zero-power conditions, and a tracking device therefor

    KAUST Repository

    Shamim, Atif

    2017-03-23

    Disclosed are embodiments for a tracking device having multiple layers of localization and communication capabilities, and particularly having the ability to operate in zero-infrastructure or zero-power conditions. Also disclosed are methods and systems that enhance location determination in zero-infrastructure and zero-power conditions. In one example, a device, system and/or method includes an infrastructure-based localization module, an infrastructure-less localization module and a passive module that can utilize at least two of the modules to determine a location of the tracking device.

  11. A Bayesian zero-truncated approach for analysing capture-recapture count data from classical scrapie surveillance in France.

    Science.gov (United States)

    Vergne, Timothée; Calavas, Didier; Cazeau, Géraldine; Durand, Benoît; Dufour, Barbara; Grosbois, Vladimir

    2012-06-01

    Capture-recapture (CR) methods are used to study populations that are monitored with imperfect observation processes. They have recently been applied to the monitoring of animal diseases to evaluate the number of infected units that remain undetected by the surveillance system. This paper proposes three Bayesian models to estimate the total number of scrapie-infected holdings in France from CR count data obtained from the French classical scrapie surveillance programme. We fitted two zero-truncated Poisson (ZTP) models (with and without holding size as a covariate) and a zero-truncated negative binomial (ZTNB) model to the 2006 national surveillance count dataset. We detected a large amount of heterogeneity in the count data, making the use of the simple ZTP model inappropriate. However, including holding size as a covariate did not bring any significant improvement over the simple ZTP model. The ZTNB model proved to be the best model, giving an estimate of 535 (CI(95%) 401-796) infected and detectable sheep holdings in 2006, although only 141 were effectively detected, resulting in a holding-level prevalence of 4.4‰ (CI(95%) 3.2-6.3) and a sensitivity of holding-level surveillance of 26% (CI(95%) 18-35). The main limitation of the present study was the small amount of data collected during the surveillance programme. It was therefore not possible to build complex models that would more accurately depict the epidemiological and detection processes that generate the surveillance data. We discuss the perspectives of capture-recapture count models in the context of animal disease surveillance. Copyright © 2012 Elsevier B.V. All rights reserved.
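
    The simple ZTP baseline underlying these models can be sketched as follows: match the sample mean of the positive counts to the zero-truncated Poisson mean λ/(1 − e^−λ), then inflate the number of detected holdings by the detection probability 1 − e^−λ. This is an illustrative frequentist sketch under standard ZTP assumptions, not the authors' Bayesian implementation:

```python
import math

def ztp_total(counts):
    """Estimate total population size from zero-truncated Poisson counts
    (number of detections per holding; holdings with zero detections are
    unobserved). Bisection solves lambda/(1 - exp(-lambda)) = sample mean."""
    n, m = len(counts), sum(counts) / len(counts)   # detected units, mean
    lo, hi = 1e-9, 50.0
    for _ in range(200):
        lam = 0.5 * (lo + hi)
        if lam / (1.0 - math.exp(-lam)) < m:        # ZTP mean is increasing in lambda
            lo = lam
        else:
            hi = lam
    p_detect = 1.0 - math.exp(-lam)                 # P(at least one detection)
    return n / p_detect                             # scaled-up total
```

    For counts with sample mean 2, λ solves to roughly 1.59, so a detected set of 6 holdings scales up to about 7.5 in total.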

  12. Probabilistic Teleportation of an Arbitrary Three-Level Two-Particle State and Classical Communication Cost

    Institute of Scientific and Technical Information of China (English)

    DAI Hong-Yi; KUANG Le-Man; LI Cheng-Zu

    2005-01-01

    We propose a scheme to probabilistically teleport an unknown arbitrary three-level two-particle state by using two partially entangled three-level two-particle states as the quantum channel. The classical communication cost required in the ideal probabilistic teleportation process is also calculated. This scheme can be directly generalized to teleport an unknown arbitrary three-level K-particle state by using K partially entangled three-level two-particle states as the quantum channel.

  13. CLASSICS

    Indian Academy of Sciences (India)

    2013-11-11

    Nov 11, 2013 ... Polanyi's classic paper, co-authored by Henry Eyring, reproduced in this ... spatial configuration of the atoms in terms of the energy function of the diatomic .... The present communication deals with the construction of such .... These three contributions are complemented by a fourth term if one takes into.

  14. Seismic-load-induced human errors and countermeasures using computer graphics in plant-operator communication

    International Nuclear Information System (INIS)

    Hara, Fumio

    1988-01-01

    This paper highlights the importance of seismic-load-induced human errors in plant operation by delineating the characteristics of human task performance under seismic loads. It focuses on man-machine communication via multidimensional data like that conventionally displayed on large panels in a plant control room. It demonstrates a countermeasure to human errors using a computer graphics technique that conveys the global state of plant operation to operators through cartoon-like, colored graphs in the form of faces whose different facial expressions show the plant safety status. (orig.)

  15. Communicating natural hazards. The case of marine extreme events and the importance of the forecast's errors.

    Science.gov (United States)

    Marone, Eduardo; Camargo, Ricardo

    2013-04-01

    Scientific knowledge has to fulfill some necessary conditions. Among them, it has to be properly communicated. Scientists usually (mis)understand that the communication requirement is satisfied by publishing their results in peer-reviewed journals. Society, however, demands information in other formats or languages, and other tools and approaches have to be used; otherwise the scientific discoveries will not fulfill their social purpose. Yet scientists are not well trained to do so. These facts are particularly relevant when scientific work has to deal with natural hazards, which affect not just a lab or a computer experiment but the life and fate of human beings. We are currently working on marine extreme events related to sea level changes, waves and other coastal hazards. The work is primarily developed in the classic scientific format, focusing not only on stochastic prediction of such extreme events but also on estimating the potential errors that the forecasting methodologies intrinsically have. The scientific results are translated into a friendly format required by stakeholders (who are financing part of the work). Finally, we hope to produce a document prepared for the general public. Each of these targets has its own characteristics, and we have to use the proper communication tools and languages. Also, when communicating such knowledge, we have to consider that stakeholders and the general public are under no obligation to understand scientific language; scientists have the responsibility of translating their discoveries and predictions properly. The information on coastal hazards is analyzed in statistical and numerical ways, departing from long-term observations of, for instance, sea level. From the analysis it is possible to recognize different natural regimes and to present the return times of extreme events, while from the numerical models, properly tuned to reproduce the same past ocean behavior using hindcast approaches, it is

  16. Pilot-based parametric channel estimation algorithm for DCO-OFDM-based visible light communications

    Science.gov (United States)

    Qian, Xuewen; Deng, Honggui; He, Hailang

    2017-10-01

    Due to the wide modulation bandwidth in optical communication, multipath channels may be non-sparse and degrade communication performance severely. Traditional compressive sensing-based channel estimation algorithms cannot be employed in this situation. In this paper, we propose a practical parametric channel estimation algorithm for orthogonal frequency division multiplexing (OFDM)-based visible light communication (VLC) systems based on a modified zero correlation code (ZCC) pair that has an impulse-like correlation property. Simulation results show that the proposed algorithm achieves better performance than the existing least squares (LS)-based algorithm in both bit error ratio (BER) and frequency response estimation.

  17. Confidence Intervals Verification for Simulated Error Rate Performance of Wireless Communication System

    KAUST Repository

    Smadi, Mahmoud A.

    2012-12-06

    In this paper, we derive an efficient simulation method to evaluate the error rate of a wireless communication system. A coherent binary phase-shift keying system is considered with imperfect channel phase recovery. The results presented demonstrate the system performance under very realistic Nakagami-m fading and additive white Gaussian noise channel conditions. The accuracy of the obtained results is verified by running the simulation with a confidence interval reliability of 95%. We see that as the number of simulation runs N increases, the simulated error rate becomes closer to the actual one and the confidence interval narrows. Hence our results are expected to be of significant practical use for such scenarios. © 2012 Springer Science+Business Media New York.
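
    The confidence-interval check the record describes is commonly done with the normal approximation for a binomial proportion; a hedged sketch (z = 1.96 for a 95% interval; the record does not specify its exact interval construction):

```python
import math

def ber_confidence_interval(errors: int, trials: int, z: float = 1.96):
    """Normal-approximation confidence interval for a simulated bit error
    rate: p_hat +/- z * sqrt(p_hat * (1 - p_hat) / N), clipped to [0, 1]."""
    p = errors / trials
    half = z * math.sqrt(p * (1.0 - p) / trials)
    return max(0.0, p - half), min(1.0, p + half)
```

    The interval width shrinks as 1/sqrt(N), which matches the record's observation that more simulation runs bring the simulated error rate closer to the actual one.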

  18. Threshold-based detection for amplify-and-forward cooperative communication systems with channel estimation error

    KAUST Repository

    Abuzaid, Abdulrahman I.

    2014-09-01

    Efficient receiver designs for cooperative communication systems are becoming increasingly important. In previous work, cooperative networks communicated with the use of $L$ relays. As the receiver is constrained, it can only process $U$ out of $L$ relays. Channel shortening and reduced-rank techniques were employed to design the preprocessing matrix. In this paper, a receiver structure is proposed which combines the joint iterative optimization (JIO) algorithm and our proposed threshold selection criteria. This receiver structure assists in determining the optimal $U_{opt}$. Furthermore, this receiver provides the freedom to choose $U \leq U_{opt}$ for each frame depending upon the tolerable difference allowed for the mean square error (MSE). Our study and simulation results show that by choosing an appropriate threshold, it is possible to gain in terms of complexity savings without affecting the BER performance of the system. Furthermore, in this paper the effect of channel estimation errors on the MSE performance of the amplify-and-forward (AF) cooperative relaying system is investigated.

  19. The importance of commitment, communication, culture and learning for the implementation of the Zero Accident Vision in 27 companies in Europe

    NARCIS (Netherlands)

    Zwetsloot, G.I.J.M.; Kines, P.; Ruotsala, R.; Drupsteen, L.; Merivirta, M.L.; Bezemer, R.A.

    2017-01-01

    In this paper the findings are presented of a multinational study involving 27 companies that have adopted a ‘Zero Accident Vision’ (ZAV). ZAV is the ambition that all accidents are preventable, and this paper focuses on how companies implement ZAV through ZAV commitment, safety communication,

  20. Haplotype reconstruction error as a classical misclassification problem: introducing sensitivity and specificity as error measures.

    Directory of Open Access Journals (Sweden)

    Claudia Lamina

    Full Text Available BACKGROUND: Statistically reconstructing haplotypes from single nucleotide polymorphism (SNP) genotypes can lead to falsely classified haplotypes. This can be an issue when interpreting haplotype association results or when selecting subjects with certain haplotypes for subsequent functional studies. It was our aim to quantify haplotype reconstruction error and to provide tools for it. METHODS AND RESULTS: In numerous simulation scenarios, we systematically investigated several error measures, including discrepancy, error rate, and R^2, and introduced sensitivity and specificity to this context. We exemplified several measures in the KORA study, a large population-based study from Southern Germany. We find that the specificity is slightly reduced only for common haplotypes, while the sensitivity was decreased for some, but not all, rare haplotypes. The overall error rate generally increased with increasing number of loci, increasing minor allele frequency of SNPs, decreasing correlation between the alleles, and increasing ambiguity. CONCLUSIONS: We conclude that, with the analytical approach presented here, haplotype-specific error measures can be computed to gain insight into the haplotype uncertainty. This method provides the information on whether a specific risk haplotype can be expected to be reconstructed with essentially no or with high misclassification, and thus on the magnitude of expected bias in association estimates. We also illustrate that sensitivity and specificity separate two dimensions of the haplotype reconstruction error, which completely describe the misclassification matrix and thus provide the prerequisite for methods accounting for misclassification.
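
    Per haplotype, the sensitivity and specificity the paper introduces reduce to the usual confusion-matrix definitions, treating "carries the haplotype" as the positive class. A minimal sketch (function and variable names are illustrative, not from the paper):

```python
def sens_spec(true_haps, inferred_haps, hap):
    """Haplotype-specific sensitivity and specificity for one haplotype,
    comparing true assignments against statistically reconstructed ones."""
    tp = fp = fn = tn = 0
    for t, r in zip(true_haps, inferred_haps):
        if t == hap and r == hap:
            tp += 1          # correctly reconstructed carrier
        elif t != hap and r == hap:
            fp += 1          # falsely assigned the haplotype
        elif t == hap and r != hap:
            fn += 1          # carrier missed by reconstruction
        else:
            tn += 1
    sensitivity = tp / (tp + fn) if tp + fn else float("nan")
    specificity = tn / (tn + fp) if tn + fp else float("nan")
    return sensitivity, specificity
```

    The two numbers separate the two error dimensions the abstract mentions: sensitivity measures missed carriers, specificity measures false assignments.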

  1. Error rates in forensic DNA analysis: definition, numbers, impact and communication.

    Science.gov (United States)

    Kloosterman, Ate; Sjerps, Marjan; Quak, Astrid

    2014-09-01

    Forensic DNA casework is currently regarded as one of the most important types of forensic evidence, and important decisions in intelligence and justice are based on it. However, errors occasionally occur and may have very serious consequences. In other domains, error rates have been defined and published. The forensic domain is lagging behind concerning this transparency for various reasons. In this paper we provide definitions and observed frequencies for different types of errors at the Human Biological Traces Department of the Netherlands Forensic Institute (NFI) over the years 2008-2012. Furthermore, we assess their actual and potential impact and describe how the NFI deals with the communication of these numbers to the legal justice system. We conclude that the observed relative frequency of quality failures is comparable to studies from clinical laboratories and genetic testing centres. Furthermore, this frequency is constant over the five-year study period. The most common causes of failures related to the laboratory process were contamination and human error. Most human errors could be corrected, whereas gross contamination in crime samples often resulted in irreversible consequences. Hence this type of contamination is identified as the most significant source of error. Of the known contamination incidents, most were detected by the NFI quality control system before the report was issued to the authorities, and thus did not lead to flawed decisions like false convictions. However in a very limited number of cases crucial errors were detected after the report was issued, sometimes with severe consequences. Many of these errors were made in the post-analytical phase. The error rates reported in this paper are useful for quality improvement and benchmarking, and contribute to an open research culture that promotes public trust. However, they are irrelevant in the context of a particular case; here, case-specific probabilities of undetected errors are needed.

  2. DC-Link Voltage Coordinated-Proportional Control for Cascaded Converter With Zero Steady-State Error and Reduced System Type

    DEFF Research Database (Denmark)

    Tian, Yanjun; Loh, Poh Chiang; Deng, Fujin

    2016-01-01

    Cascaded converter is formed by connecting two subconverters together, sharing a common intermediate dc-link voltage. Regulation of this dc-link voltage is frequently realized with a proportional-integral (PI) controller, whose high gain at dc helps to force a zero steady-state tracking error. Such precise tracking is, however, at the expense of increasing the system type, caused by the extra pole at the origin introduced by the PI controller. The overall system may, hence, be tougher to control. To reduce the system type while preserving precise dc-link voltage tracking, this paper proposes a coordinated-proportional control scheme. The proposed scheme can be used with either unidirectional or bidirectional power flow, and has been verified by simulation and experimental results presented in this paper.
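
    The trade-off the abstract describes, where the PI integrator's pole at the origin buys zero steady-state error, can be illustrated with a toy discrete-time loop. This is a generic first-order-plant sketch under assumed gains, not the paper's cascaded-converter model:

```python
def simulate_pi_step(kp=0.5, ki=0.2, ref=1.0, steps=5000, dt=0.01):
    """Discrete PI loop driving a first-order plant x' = -x + u toward a
    constant reference. The integral state supplies the steady-state input,
    so the tracking error decays to zero."""
    x, integ = 0.0, 0.0
    for _ in range(steps):
        e = ref - x
        integ += ki * e * dt          # pole at the origin (raises system type)
        u = kp * e + integ
        x += dt * (-x + u)            # forward-Euler plant update
    return ref - x                    # remaining tracking error
```

    Dropping the integral term (ki = 0) leaves a proportional-only loop that settles with a nonzero steady-state error of ref/(1 + kp); the integrator removes that error, at the cost of the extra pole the abstract discusses.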

  3. Methods for Estimation of Radiation Risk in Epidemiological Studies Accounting for Classical and Berkson Errors in Doses

    KAUST Repository

    Kukush, Alexander

    2011-01-16

    With a binary response Y, the dose-response model under consideration is logistic in flavor, with pr(Y=1 | D) = R(1+R)^{-1}, R = λ_0 + EAR·D, where λ_0 is the baseline incidence rate and EAR is the excess absolute risk per gray. The calculated thyroid dose of a person i is expressed as D_i^{mes} = f_i Q_i^{mes} / M_i^{mes}. Here, Q_i^{mes} is the measured content of radioiodine in the thyroid gland of person i at time t^{mes}, M_i^{mes} is the estimate of the thyroid mass, and f_i is the normalizing multiplier. The Q_i and M_i are measured with multiplicative errors V_i^Q and V_i^M, so that Q_i^{mes} = Q_i^{tr} V_i^Q (this is a classical measurement error model) and M_i^{tr} = M_i^{mes} V_i^M (this is a Berkson measurement error model). Here, Q_i^{tr} is the true content of radioactivity in the thyroid gland, and M_i^{tr} is the true value of the thyroid mass. The error in f_i is much smaller than the errors in (Q_i^{mes}, M_i^{mes}) and is ignored in the analysis. By means of Parametric Full Maximum Likelihood and Regression Calibration (under the assumption that the data set of true doses has a lognormal distribution), Nonparametric Full Maximum Likelihood, Nonparametric Regression Calibration, and a properly tuned SIMEX method, we study the influence of measurement errors in thyroid dose on the estimates of λ_0 and EAR. A simulation study is presented based on a real sample from the epidemiological studies. The doses were reconstructed in the framework of the Ukrainian-American project on the investigation of post-Chernobyl thyroid cancers in Ukraine, and the underlying subpopulation was artificially enlarged in order to increase the statistical power. The true risk parameters were given the values from earlier epidemiological studies, and then the binary response was simulated according to the dose-response model.

  4. Methods for estimation of radiation risk in epidemiological studies accounting for classical and Berkson errors in doses.

    Science.gov (United States)

    Kukush, Alexander; Shklyar, Sergiy; Masiuk, Sergii; Likhtarov, Illya; Kovgan, Lina; Carroll, Raymond J; Bouville, Andre

    2011-02-16

    With a binary response Y, the dose-response model under consideration is logistic in flavor, with pr(Y=1 | D) = R(1+R)^{-1}, R = λ_0 + EAR·D, where λ_0 is the baseline incidence rate and EAR is the excess absolute risk per gray. The calculated thyroid dose of a person i is expressed as D_i^{mes} = f_i Q_i^{mes} / M_i^{mes}. Here, Q_i^{mes} is the measured content of radioiodine in the thyroid gland of person i at time t^{mes}, M_i^{mes} is the estimate of the thyroid mass, and f_i is the normalizing multiplier. The Q_i and M_i are measured with multiplicative errors V_i^Q and V_i^M, so that Q_i^{mes} = Q_i^{tr} V_i^Q (this is a classical measurement error model) and M_i^{tr} = M_i^{mes} V_i^M (this is a Berkson measurement error model). Here, Q_i^{tr} is the true content of radioactivity in the thyroid gland, and M_i^{tr} is the true value of the thyroid mass. The error in f_i is much smaller than the errors in (Q_i^{mes}, M_i^{mes}) and is ignored in the analysis. By means of Parametric Full Maximum Likelihood and Regression Calibration (under the assumption that the data set of true doses has a lognormal distribution), Nonparametric Full Maximum Likelihood, Nonparametric Regression Calibration, and a properly tuned SIMEX method, we study the influence of measurement errors in thyroid dose on the estimates of λ_0 and EAR. A simulation study is presented based on a real sample from the epidemiological studies. The doses were reconstructed in the framework of the Ukrainian-American project on the investigation of post-Chernobyl thyroid cancers in Ukraine, and the underlying subpopulation was artificially enlarged in order to increase the statistical power. The true risk parameters were given the values from earlier epidemiological studies, and then the binary response was simulated according to the dose-response model.
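
    The two error structures in these records point in opposite directions: under the classical model the noise sits on top of the truth (Q_mes = Q_tr · V_Q), while under the Berkson model the truth sits on top of the measurement (M_tr = M_mes · V_M). A quick simulation makes the distinction concrete; all numeric values here are arbitrary illustrative choices, not from the study:

```python
import math
import random

def simulate_doses(n=10000, f=1.0, sigma_q=0.3, sigma_m=0.2, seed=1):
    """Simulate true vs calculated doses D = f * Q / M with classical
    error on Q (Q_mes = Q_tr * V_Q) and Berkson error on M
    (M_tr = M_mes * V_M), using lognormal multipliers.
    Returns (mean true dose, mean calculated dose)."""
    rng = random.Random(seed)
    true_doses, calc_doses = [], []
    for _ in range(n):
        q_tr, m_mes = 100.0, 20.0                  # arbitrary units
        v_q = math.exp(rng.gauss(0.0, sigma_q))    # classical multiplier
        v_m = math.exp(rng.gauss(0.0, sigma_m))    # Berkson multiplier
        q_mes = q_tr * v_q                         # noise added to the truth
        m_tr = m_mes * v_m                         # truth scattered around measurement
        true_doses.append(f * q_tr / m_tr)
        calc_doses.append(f * q_mes / m_mes)
    return sum(true_doses) / n, sum(calc_doses) / n
```

    Note the asymmetry: the classical error enters the *calculated* dose through Q_mes, while the Berkson error enters the *true* dose through M_tr; this is exactly why the two error types must be handled by different estimation methods (regression calibration, SIMEX, etc.).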

  5. The classicality and quantumness of a quantum ensemble

    International Nuclear Information System (INIS)

    Zhu Xuanmin; Pang Shengshi; Wu Shengjun; Liu Quanhui

    2011-01-01

    In this Letter, we investigate the classicality and quantumness of a quantum ensemble. We define a quantity called ensemble classicality based on classical cloning strategy (ECCC) to characterize how classical a quantum ensemble is. An ensemble of commuting states has a unit ECCC, while a general ensemble can have an ECCC less than 1. We also study how quantum an ensemble is by defining a related quantity called quantumness. We find that the classicality of an ensemble is closely related to how perfectly the ensemble can be cloned, and that the quantumness of the ensemble used in a quantum key distribution (QKD) protocol is exactly the attainable lower bound of the error rate in the sifted key. - Highlights: → A quantity is defined to characterize how classical a quantum ensemble is. → The classicality of an ensemble is closely related to the cloning performance. → Another quantity is also defined to investigate how quantum an ensemble is. → This quantity gives the lower bound of the error rate in a QKD protocol.

  6. Noiseless method for checking the Peres separability criterion by local operations and classical communication

    International Nuclear Information System (INIS)

    Bai Yankui; Li Shushen; Zheng Houzhi

    2005-01-01

    We present a method for checking the Peres separability criterion in an arbitrary bipartite quantum state ρ_AB within the local operations and classical communication (LOCC) scenario. The method does not require the noise operation that is needed to make the partial transposition map physically implementable. The main task for the two observers, Alice and Bob, is to measure some specific functions of the partially transposed matrix. With these functions, they can determine the eigenvalues of ρ_AB^{T_B}, among which the minimum serves as an entanglement witness.

  7. Detecting binary non-return-to-zero data in free-space optical communication systems using FPGAs

    Science.gov (United States)

    Bui, Vy; Tran, Lan; El-Araby, Esam; Namazi, Nader M.

    2014-06-01

High bandwidth and fast deployment with relatively low-cost implementation are some of the important advantages of free-space optical (FSO) communications. However, atmospheric turbulence has a substantial impact on the quality of a laser beam propagating through the atmosphere. A new method was presented in [1] and [2] to perform bit synchronization and detection of binary Non-Return-to-Zero (NRZ) data from a free-space optical (FSO) communication link. It was shown that, when the data is binary NRZ with no modulation, the Haar wavelet transformation can effectively reduce the scintillation noise. In this paper, we leverage and modify the work presented in [1] in order to provide a real-time streaming hardware prototype. The applicability of these concepts is demonstrated by providing the hardware prototype on state-of-the-art reconfigurable hardware, namely Field Programmable Gate Arrays (FPGAs), using highly productive high-level design tools such as System Generator for DSP from Xilinx.
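    The Haar-wavelet denoising step can be illustrated in software. Below is a minimal stdlib Python sketch (not the authors' FPGA implementation) of a one-level Haar decomposition in which the detail coefficients are zeroed before reconstruction; the bit pattern, samples-per-bit, and noise level are invented for illustration:

```python
import math
import random

def haar_denoise(x):
    """One-level Haar transform: zero the detail coefficients and
    reconstruct. This averages each sample pair, suppressing
    high-frequency scintillation-like noise."""
    approx = [(x[i] + x[i + 1]) / math.sqrt(2) for i in range(0, len(x), 2)]
    # Reconstruction with details set to zero duplicates each pair average.
    return [a / math.sqrt(2) for a in approx for _ in range(2)]

random.seed(1)
# Toy NRZ stream: 8 samples per bit, amplitude +/-1, additive noise.
bits = [random.choice([-1.0, 1.0]) for _ in range(64)]
clean = [b for b in bits for _ in range(8)]
noisy = [s + random.gauss(0.0, 0.5) for s in clean]
den = haar_denoise(noisy)

def mse(a, b):
    return sum((u - v) ** 2 for u, v in zip(a, b)) / len(a)

print(mse(noisy, clean), mse(den, clean))  # denoising halves the noise variance
```

    Because each sample pair lies within one bit period here, zeroing the detail coefficients leaves the NRZ levels intact while averaging the noise.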

  8. ZEROES OF GENERALIZED FRESNEL COMPLEMENTARY INTEGRAL FUNCTIONS

    Directory of Open Access Journals (Sweden)

    Jaime Lobo Segura

    2016-08-01

Full Text Available Theoretical upper and lower bounds are established for zeroes of a parametric family of functions which are defined by integrals of the same type as the Fresnel complementary integral. Asymptotic properties for these bounds are obtained, as well as monotonicity properties of the localization intervals. Given the value of the parameter, an analytical-numerical procedure is deduced to enclose all zeros of a given function with an a priori error.

  9. INVESTIGATION OF INFLUENCE OF ENCODING FUNCTION COMPLEXITY ON DISTRIBUTION OF ERROR MASKING PROBABILITY

    Directory of Open Access Journals (Sweden)

    A. B. Levina

    2016-03-01

Full Text Available Error detection codes are mechanisms that enable robust delivery of data over unreliable communication channels and devices. Unreliable channels and devices are error-prone objects, and error detection codes allow such errors to be detected. There are two classes of error detecting codes: classical codes and security-oriented codes. Classical codes detect a high percentage of errors; however, they have a high probability of missing an error introduced by algebraic manipulation. In contrast, security-oriented codes are codes with a small Hamming distance and high protection against algebraic manipulation. The probability of error masking is a fundamental parameter of security-oriented codes. A detailed study of this parameter allows analyzing the behavior of an error-correcting code when errors are injected into the encoding device. The complexity of the encoding function also plays an important role in security-oriented codes. Encoding functions with low computational complexity and a low probability of masking provide the best protection of the encoding device against malicious acts. This paper investigates the influence of encoding function complexity on the error masking probability distribution. It will be shown that a more complex encoding function reduces the maximum of the error masking probability. It is also shown that increasing the function complexity changes the error masking probability distribution; in particular, increasing the computational complexity decreases the difference between the maximum and the average value of the error masking probability. Our results show that functions with greater complexity have smoothed maxima of the error masking probability, which significantly complicates the analysis of the error-correcting code by an attacker. As a result, with a complex encoding function the probability of successful algebraic manipulation is reduced. The paper discusses an approach how to measure the error masking
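    For a linear code, an error pattern is masked exactly when it maps every codeword to another codeword, which happens precisely when the error is itself a nonzero codeword. The sketch below computes the uniform-error masking probability by brute force for a toy [4,2] binary code; the generator matrix is an assumption chosen for illustration, not taken from the paper:

```python
from itertools import product

# Toy binary linear [n=4, k=2] code (illustrative generator rows).
G = [(1, 0, 1, 1), (0, 1, 0, 1)]
n = len(G[0])

def xor(a, b):
    return tuple((x + y) % 2 for x, y in zip(a, b))

# Enumerate all codewords as mod-2 combinations of the generator rows.
codewords = set()
for m in product((0, 1), repeat=len(G)):
    c = (0,) * n
    for bit, row in zip(m, G):
        if bit:
            c = xor(c, row)
    codewords.add(c)

# An error e is masked when c + e is again a codeword for every codeword c.
masked = [e for e in product((0, 1), repeat=n)
          if any(e) and all(xor(c, e) in codewords for c in codewords)]

q_mask = len(masked) / (2 ** n - 1)  # masking probability, uniform errors
print(sorted(masked), q_mask)
```

    For any binary linear [n, k] code under uniformly distributed nonzero errors, this brute-force count reproduces q_mask = (2^k − 1)/(2^n − 1); here 3/15 = 0.2.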

  10. Asynchronous anti-noise hyper chaotic secure communication system based on dynamic delay and state variables switching

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Hongjun [Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian 116024 (China); Weifang Vocational College, Weifang 261041 (China); Wang, Xingyuan, E-mail: wangxy@dlut.edu.cn [Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian 116024 (China); Zhu, Quanlong [Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian 116024 (China)

    2011-07-18

This Letter designs an asynchronous hyper-chaotic secure communication system that possesses high stability against noise, using dynamic delay and state-variable switching to ensure high security. The relationship between the bit error ratio (BER) and the signal-to-noise ratio (SNR) is analyzed by simulation tests; the results show that the BER can be made to reach zero by proportionally adjusting the amplitudes of the state variables and the noise figure. The modules of the transmitter and receiver are implemented, and numerical simulations demonstrate the effectiveness of the system. -- Highlights: → Asynchronous anti-noise hyper-chaotic secure communication system. → Dynamic delay and state switching to ensure high security. → BER can reach zero by adjusting the amplitudes of state variables and the noise figure.
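    Chaotic masking in general can be sketched with a far simpler system than the paper's delayed hyper-chaotic one: transmitter and receiver share an initial condition (the key), the transmitter adds a small-amplitude message to the chaotic carrier, and the receiver regenerates the same carrier and subtracts it. The logistic map, message bits, and amplitudes below are illustrative assumptions:

```python
def logistic_stream(x0, n, r=3.99):
    """Chaotic carrier from the logistic map (an illustrative stand-in
    for the paper's hyper-chaotic system)."""
    xs, x = [], x0
    for _ in range(n):
        x = r * x * (1.0 - x)
        xs.append(x)
    return xs

message = [0.01 * b for b in (1, 0, 1, 1, 0, 1, 0, 0)]  # small-amplitude bits
key = 0.3141592  # shared initial condition acts as the secret key

carrier = logistic_stream(key, len(message))
ciphertext = [c + m for c, m in zip(carrier, message)]  # masking at transmitter
# Receiver regenerates the identical carrier from the shared key.
recovered = [s - c for s, c in zip(ciphertext, logistic_stream(key, len(message)))]

bits = [1 if v > 0.005 else 0 for v in recovered]
print(bits)  # → [1, 0, 1, 1, 0, 1, 0, 0]
```

    In the paper the receiver must instead synchronize to the transmitter's state; this sketch replaces synchronization with an identical shared seed.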

  11. Normal forms of Hopf-zero singularity

    International Nuclear Information System (INIS)

    Gazor, Majid; Mokhtari, Fahimeh

    2015-01-01

    The Lie algebra generated by Hopf-zero classical normal forms is decomposed into two versal Lie subalgebras. Some dynamical properties for each subalgebra are described; one is the set of all volume-preserving conservative systems while the other is the maximal Lie algebra of nonconservative systems. This introduces a unique conservative–nonconservative decomposition for the normal form systems. There exists a Lie-subalgebra that is Lie-isomorphic to a large family of vector fields with Bogdanov–Takens singularity. This gives rise to a conclusion that the local dynamics of formal Hopf-zero singularities is well-understood by the study of Bogdanov–Takens singularities. Despite this, the normal form computations of Bogdanov–Takens and Hopf-zero singularities are independent. Thus, by assuming a quadratic nonzero condition, complete results on the simplest Hopf-zero normal forms are obtained in terms of the conservative–nonconservative decomposition. Some practical formulas are derived and the results implemented using Maple. The method has been applied on the Rössler and Kuramoto–Sivashinsky equations to demonstrate the applicability of our results. (paper)

  12. Normal forms of Hopf-zero singularity

    Science.gov (United States)

    Gazor, Majid; Mokhtari, Fahimeh

    2015-01-01

    The Lie algebra generated by Hopf-zero classical normal forms is decomposed into two versal Lie subalgebras. Some dynamical properties for each subalgebra are described; one is the set of all volume-preserving conservative systems while the other is the maximal Lie algebra of nonconservative systems. This introduces a unique conservative-nonconservative decomposition for the normal form systems. There exists a Lie-subalgebra that is Lie-isomorphic to a large family of vector fields with Bogdanov-Takens singularity. This gives rise to a conclusion that the local dynamics of formal Hopf-zero singularities is well-understood by the study of Bogdanov-Takens singularities. Despite this, the normal form computations of Bogdanov-Takens and Hopf-zero singularities are independent. Thus, by assuming a quadratic nonzero condition, complete results on the simplest Hopf-zero normal forms are obtained in terms of the conservative-nonconservative decomposition. Some practical formulas are derived and the results implemented using Maple. The method has been applied on the Rössler and Kuramoto-Sivashinsky equations to demonstrate the applicability of our results.

  13. Can classical noise enhance quantum transmission?

    International Nuclear Information System (INIS)

    Wilde, Mark M

    2009-01-01

    A modified quantum teleportation protocol broadens the scope of the classical forbidden-interval theorems for stochastic resonance. The fidelity measures performance of quantum communication. The sender encodes the two classical bits for quantum teleportation as weak bipolar subthreshold signals and sends them over a noisy classical channel. Two forbidden-interval theorems provide a necessary and sufficient condition for the occurrence of the nonmonotone stochastic resonance effect in the fidelity of quantum teleportation. The condition is that the noise mean must fall outside a forbidden interval related to the detection threshold and signal value. An optimal amount of classical noise benefits quantum communication when the sender transmits weak signals, the receiver detects with a high threshold and the noise mean lies outside the forbidden interval. Theorems and simulations demonstrate that both finite-variance and infinite-variance noise benefit the fidelity of quantum teleportation.

  14. Quantum features derived from the classical model of a bouncer-walker coupled to a zero-point field

    International Nuclear Information System (INIS)

    Schwabl, H; Mesa Pascasio, J; Fussy, S; Grössing, G

    2012-01-01

In our bouncer-walker model a quantum is a nonequilibrium steady-state maintained by a permanent throughput of energy. Specifically, we consider a 'particle' as a bouncer whose oscillations are phase-locked with those of the energy-momentum reservoir of the zero-point field (ZPF), and we combine this with the random-walk model of the walker, again driven by the ZPF. Starting with this classical toy model of the bouncer-walker, we were able to derive fundamental elements of quantum theory. Here this toy model is revisited with special emphasis on the mechanism of emergence. Especially the derivation of the total energy ℏω₀ and the coupling to the ZPF are clarified. For this we make use of a sub-quantum equipartition theorem. It can further be shown that the couplings of both bouncer and walker to the ZPF are identical. Then we follow this path in accordance with Ref. [2], expanding the view from the particle in its rest frame to a particle in motion. The basic features of ballistic diffusion are derived, especially the diffusion constant D, thus providing a missing link between the different approaches of our previous works.

  15. Zero inflated negative binomial-generalized exponential distribution and its applications

    Directory of Open Access Journals (Sweden)

    Sirinapa Aryuyuen

    2014-08-01

Full Text Available In this paper, we propose a new zero inflated distribution, namely, the zero inflated negative binomial-generalized exponential (ZINB-GE) distribution. The new distribution is used for count data with extra zeros and is an alternative for the analysis of over-dispersed count data. Some characteristics of the distribution are given, such as mean, variance, skewness, and kurtosis. Parameter estimation of the ZINB-GE distribution uses the maximum likelihood estimation (MLE) method. Simulated and observed data are employed to examine this distribution. The results show that the MLE method seems to have high efficiency for large sample sizes. Moreover, the mean square error of parameter estimation increases when the zero proportion is higher. For the real data sets, this new zero inflated distribution provides a better fit than the zero inflated Poisson and zero inflated negative binomial distributions.
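    The benefit of zero inflation can be seen with the simpler zero-inflated Poisson (ZIP) model, used here as a stand-in for the ZINB-GE (whose density is not reproduced in this abstract). The sketch compares maximized log-likelihoods of a plain Poisson and a ZIP on a constructed zero-heavy count series via a coarse grid search:

```python
import math

def ll_poisson(data, lam):
    """Poisson log-likelihood."""
    return sum(-lam + k * math.log(lam) - math.lgamma(k + 1) for k in data)

def ll_zip(data, pi0, lam):
    """Zero-inflated Poisson: an extra point mass pi0 at zero."""
    ll = 0.0
    for k in data:
        if k == 0:
            ll += math.log(pi0 + (1.0 - pi0) * math.exp(-lam))
        else:
            ll += math.log(1.0 - pi0) - lam + k * math.log(lam) - math.lgamma(k + 1)
    return ll

# Count data with many structural zeros (constructed for illustration).
data = [0] * 60 + [1, 2, 2, 3, 3, 3, 4, 4, 5, 6] * 4

# Coarse grid search for the maximized log-likelihood of each model.
lams = [0.1 * i for i in range(1, 80)]
pis = [0.01 * i for i in range(1, 100)]
best_pois = max(ll_poisson(data, lam) for lam in lams)
best_zip = max(ll_zip(data, p, lam) for p in pis for lam in lams)
print(best_pois, best_zip)
```

    The ZIP attains a strictly higher maximized likelihood here because the plain Poisson cannot account for 60 zeros alongside a nonzero mean of 3.3.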

  16. On a method for generating inequalities for the zeros of certain functions

    Science.gov (United States)

    Gatteschi, Luigi; Giordano, Carla

    2007-10-01

    In this paper we describe a general procedure which yields inequalities satisfied by the zeros of a given function. The method requires the knowledge of a two-term approximation of the function with bound for the error term. The method was successfully applied many years ago [L. Gatteschi, On the zeros of certain functions with application to Bessel functions, Nederl. Akad. Wetensch. Proc. Ser. 55(3)(1952), Indag. Math. 14(1952) 224-229] and more recently too [L. Gatteschi and C. Giordano, Error bounds for McMahon's asymptotic approximations of the zeros of the Bessel functions, Integral Transform Special Functions, 10(2000) 41-56], to the zeros of the Bessel functions of the first kind. Here, we present the results of the application of the method to get inequalities satisfied by the zeros of the derivative of the function . This function plays an important role in the asymptotic study of the stationary points of the solutions of certain differential equations.

  17. Position-based coding and convex splitting for private communication over quantum channels

    Science.gov (United States)

    Wilde, Mark M.

    2017-10-01

The classical-input quantum-output (cq) wiretap channel is a communication model involving a classical sender X, a legitimate quantum receiver B, and a quantum eavesdropper E. The goal of a private communication protocol that uses such a channel is for the sender X to transmit a message in such a way that the legitimate receiver B can decode it reliably, while the eavesdropper E learns essentially nothing about which message was transmitted. The ε-one-shot private capacity of a cq wiretap channel is equal to the maximum number of bits that can be transmitted over the channel, such that the privacy error is no larger than ε ∈ (0,1). The present paper provides a lower bound on the ε-one-shot private classical capacity, by exploiting the recently developed techniques of Anshu, Devabathini, Jain, and Warsi, called position-based coding and convex splitting. The lower bound is equal to a difference of the hypothesis testing mutual information between X and B and the "alternate" smooth max-information between X and E. The one-shot lower bound then leads to a non-trivial lower bound on the second-order coding rate for private classical communication over a memoryless cq wiretap channel.

  18. Classical many-particle systems with unique disordered ground states

    Science.gov (United States)

    Zhang, G.; Stillinger, F. H.; Torquato, S.

    2017-10-01

Classical ground states (global energy-minimizing configurations) of many-particle systems are typically unique crystalline structures, implying zero enumeration entropy of distinct patterns (aside from trivial symmetry operations). By contrast, the few previously known disordered classical ground states of many-particle systems are all high-entropy (highly degenerate) states. Here we show computationally that our recently proposed "perfect-glass" many-particle model [Sci. Rep. 6, 36963 (2016), 10.1038/srep36963] possesses disordered classical ground states with a zero entropy: a highly counterintuitive situation. For all of the system sizes, parameters, and space dimensions that we have numerically investigated, the disordered ground states are unique such that they can always be superposed onto each other or their mirror image. At low energies, the density of states obtained from simulations matches those calculated from the harmonic approximation near a single ground state, further confirming ground-state uniqueness. Our discovery provides singular examples in which entropy and disorder are at odds with one another. The zero-entropy ground states provide a unique perspective on the celebrated Kauzmann-entropy crisis in which the extrapolated entropy of a supercooled liquid drops below that of the crystal. We expect our disordered unique patterns to be of value in fields beyond glass physics, including applications in cryptography as pseudorandom functions with tunable computational complexity.

  19. Angular discretization errors in transport theory

    International Nuclear Information System (INIS)

    Nelson, P.; Yu, F.

    1992-01-01

Elements of the information-based complexity theory are computed for several types of information and associated algorithms for angular approximations in the setting of a one-dimensional model problem. For point-evaluation information, the local and global radii of information are computed, a (trivial) optimal algorithm is determined, and the local and global error of a discrete ordinates algorithm are shown to be infinite. For average cone-integral information, the local and global radii of information are computed, and the local and global error tends to zero as the underlying partition is indefinitely refined. A central algorithm for such information and an optimal partition (of given cardinality) are described. It is further shown that the analytic first-collision source method has zero error (for the purely absorbing model problem). Implications of the restricted problem domains suitable for the various types of information are discussed

  20. Studies on the zeros of Bessel functions and methods for their computation

    Science.gov (United States)

    Kerimov, M. K.

    2014-09-01

    The zeros of Bessel functions play an important role in computational mathematics, mathematical physics, and other areas of natural sciences. Studies addressing these zeros (their properties, computational methods) can be found in various sources. This paper offers a detailed overview of the results concerning the real zeros of the Bessel functions of the first and second kinds and general cylinder functions. The author intends to publish several overviews on this subject. In this first publication, works dealing with real zeros are analyzed. Primary emphasis is placed on classical results, which are still important. Some of the most recent publications are also discussed.
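    As a small numerical companion, the first positive zero of J₀ can be computed with nothing more than the power series for J₀ and bisection (higher zeros and other orders follow the same pattern with wider brackets):

```python
def j0(x):
    """Bessel function J0 via its power series (adequate for small x)."""
    term, total = 1.0, 1.0
    for k in range(1, 40):
        term *= -(x * x) / (4.0 * k * k)
        total += term
    return total

def bisect(f, a, b, tol=1e-12):
    """Enclose a sign change of f on [a, b] down to width tol."""
    fa = f(a)
    while b - a > tol:
        m = 0.5 * (a + b)
        if fa * f(m) <= 0:
            b = m
        else:
            a, fa = m, f(m)
    return 0.5 * (a + b)

# J0 changes sign on [2, 3]; its first zero is 2.404825557695773...
z1 = bisect(j0, 2.0, 3.0)
print(z1)
```

    The computed value matches the tabulated first zero j₀,₁ ≈ 2.404825557695773 to machine precision.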

  1. NP-hardness of decoding quantum error-correction codes

    Science.gov (United States)

    Hsieh, Min-Hsiu; Le Gall, François

    2011-05-01

Although the theory of quantum error correction is intimately related to classical coding theory and, in particular, one can construct quantum error-correction codes (QECCs) from classical codes with the dual-containing property, this does not necessarily imply that the computational complexity of decoding QECCs is the same as their classical counterparts. Instead, decoding QECCs can be very much different from decoding classical codes due to the degeneracy property. Intuitively, one expects that degeneracy would simplify the decoding since two different errors might not and need not be distinguished in order to correct them. However, we show that the general quantum decoding problem is NP-hard regardless of the quantum codes being degenerate or nondegenerate. This finding implies that no considerably fast decoding algorithm exists for the general quantum decoding problem and suggests the existence of a quantum cryptosystem based on the hardness of decoding QECCs.

  2. NP-hardness of decoding quantum error-correction codes

    International Nuclear Information System (INIS)

    Hsieh, Min-Hsiu; Le Gall, Francois

    2011-01-01

Although the theory of quantum error correction is intimately related to classical coding theory and, in particular, one can construct quantum error-correction codes (QECCs) from classical codes with the dual-containing property, this does not necessarily imply that the computational complexity of decoding QECCs is the same as their classical counterparts. Instead, decoding QECCs can be very much different from decoding classical codes due to the degeneracy property. Intuitively, one expects that degeneracy would simplify the decoding since two different errors might not and need not be distinguished in order to correct them. However, we show that the general quantum decoding problem is NP-hard regardless of the quantum codes being degenerate or nondegenerate. This finding implies that no considerably fast decoding algorithm exists for the general quantum decoding problem and suggests the existence of a quantum cryptosystem based on the hardness of decoding QECCs.

  3. Trajectory description of the quantum–classical transition for wave packet interference

    Energy Technology Data Exchange (ETDEWEB)

    Chou, Chia-Chun, E-mail: ccchou@mx.nthu.edu.tw

    2016-08-15

The quantum–classical transition for wave packet interference is investigated using a hydrodynamic description. A nonlinear quantum–classical transition equation is obtained by introducing a degree of quantumness ranging from zero to one into the classical time-dependent Schrödinger equation. This equation provides a continuous description for the transition process of physical systems from purely quantum to purely classical regimes. In this study, the transition trajectory formalism is developed to provide a hydrodynamic description for the quantum–classical transition. The flow momentum of transition trajectories is defined by the gradient of the action function in the transition wave function and these trajectories follow the main features of the evolving probability density. Then, the transition trajectory formalism is employed to analyze the quantum–classical transition of wave packet interference. For the collision-like wave packet interference where the propagation velocity is faster than the spreading speed of the wave packet, the interference process remains collision-like for all degrees of quantumness. However, the interference features demonstrated by transition trajectories gradually disappear when the degree of quantumness approaches zero. For the diffraction-like wave packet interference, the interference process changes continuously from a diffraction-like to a collision-like case when the degree of quantumness gradually decreases. This study provides an insightful trajectory interpretation for the quantum–classical transition of wave packet interference.

  4. Robust Timing Synchronization in Aeronautical Mobile Communication Systems

    Science.gov (United States)

    Xiong, Fu-Qin; Pinchak, Stanley

    2004-01-01

This work details a study of robust synchronization schemes suitable for satellite-to-mobile aeronautical applications. A new scheme, the Modified Sliding Window Synchronizer (MSWS), is devised and compared with existing schemes, including the traditional Early-Late Gate Synchronizer (ELGS), the Gardner Zero-Crossing Detector (GZCD), and the Sliding Window Synchronizer (SWS). Performance of the synchronization schemes is evaluated by a set of metrics that indicate performance in digital communications systems. The metrics are convergence time, mean square phase error (or root mean-square phase error), lowest SNR for locking, initial frequency offset performance, midstream frequency offset performance, and system complexity. The performance of the synchronizers is evaluated by means of Matlab simulation models. A simulation platform is devised to model the satellite-to-mobile aeronautical channel, consisting of a Quadrature Phase Shift Keying modulator, an additive white Gaussian noise channel, and a demodulator front end. Simulation results show that the MSWS provides the most robust performance at the cost of system complexity. The GZCD provides a good tradeoff between robustness and system complexity for communication systems that require high symbol rates or low overall system costs. The ELGS has a high system complexity despite its average performance. Overall, the SWS, originally designed for multi-carrier systems, performs very poorly in single-carrier communications systems. Table 5.1 in Section 5 provides a ranking of each of the synchronization schemes in terms of the metrics set forth in Section 4.1. Details of the comparison are given in Section 5. Based on the results presented in Table 5.1, it is safe to say that the most robust synchronization scheme examined in this work is the high-sample-rate Modified Sliding Window Synchronizer. A close second is its low-sample-rate cousin. The tradeoff between complexity and lowest mean-square phase error determines
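    The Gardner zero-crossing detector mentioned above operates at two samples per symbol and forms the timing error e_k = (y[2k] − y[2k−2])·y[2k−1]. A minimal sketch (using an idealized alternating-symbol waveform, not the study's QPSK simulation platform) shows its S-curve behavior: zero mean output at correct timing and sign-correct output for early or late sampling:

```python
import math

def gardner_ted(samples):
    """Gardner timing error detector at two samples per symbol:
    e_k = (y[2k] - y[2k-2]) * y[2k-1]; returns the mean error."""
    errs = [(samples[i] - samples[i - 2]) * samples[i - 1]
            for i in range(2, len(samples), 2)]
    return sum(errs) / len(errs)

def sampled_waveform(tau, nsym=200):
    # Alternating +1/-1 symbols with half-sine transitions give the
    # waveform cos(pi*t/T); sample at t = k*T/2 + tau (symbol period T = 1).
    return [math.cos(math.pi * (0.5 * k + tau)) for k in range(2 * nsym)]

for tau in (-0.05, 0.0, 0.05):
    print(tau, gardner_ted(sampled_waveform(tau)))
```

    For this waveform the detector output works out analytically to sin(2πτ), so a late sample (τ > 0) drives a positive correction and an early one a negative correction.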

  5. Practical scheme to share a secret key through a quantum channel with a 27.6% bit error rate

    International Nuclear Information System (INIS)

    Chau, H.F.

    2002-01-01

    A secret key shared through quantum key distribution between two cooperative players is secure against any eavesdropping attack allowed by the laws of physics. Yet, such a key can be established only when the quantum channel error rate due to eavesdropping or imperfect apparatus is low. Here, a practical quantum key distribution scheme by making use of an adaptive privacy amplification procedure with two-way classical communication is reported. Then, it is proven that the scheme generates a secret key whenever the bit error rate of the quantum channel is less than 0.5-0.1√(5)≅27.6%, thereby making it the most error resistant scheme known to date

  6. Optimization of Wireless Optical Communication System Based on Augmented Lagrange Algorithm

    International Nuclear Information System (INIS)

    He Suxiang; Meng Hongchao; Wang Hui; Zhao Yanli

    2011-01-01

The optimal model for a wireless optical communication system with a Gaussian pointing loss factor is studied, in which the value of the bit error probability (BEP) is prespecified and the optimal system parameters are to be found. Owing to the superiority of the augmented Lagrange method, the model is solved by using a classical quadratic augmented Lagrange algorithm. The detailed numerical results are reported. Accordingly, the optimal system parameters such as transmitter power, transmitter wavelength, transmitter telescope gain and receiver telescope gain can be established, which provides a scheme for efficient operation of the wireless optical communication system.
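    The quadratic augmented Lagrange iteration itself is easy to sketch. The toy problem below (minimize x² + y² subject to x + y = 1, with solution x = y = 0.5) stands in for the paper's FSO link-budget model, whose objective and constraints are not given in the abstract:

```python
def augmented_lagrangian():
    """Classical quadratic augmented Lagrangian:
    minimize f(x, y) = x^2 + y^2 subject to c(x, y) = x + y - 1 = 0.
    Inner loop: gradient descent on L = f + lam*c + (rho/2)*c^2.
    Outer loop: multiplier update lam += rho * c."""
    x = y = 0.0
    lam, rho = 0.0, 10.0
    for _ in range(50):
        for _ in range(500):
            c = x + y - 1.0
            gx = 2.0 * x + lam + rho * c
            gy = 2.0 * y + lam + rho * c
            x -= 0.01 * gx
            y -= 0.01 * gy
        lam += rho * (x + y - 1.0)  # multiplier update
    return x, y

x, y = augmented_lagrangian()
print(x, y)  # converges to (0.5, 0.5)
```

    The multiplier converges to λ* = −1, so the penalty term can stay moderate; this is the usual advantage of the augmented Lagrangian over a pure quadratic penalty.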

  7. Using CAS to Solve Classical Mathematics Problems

    Science.gov (United States)

    Burke, Maurice J.; Burroughs, Elizabeth A.

    2009-01-01

    Historically, calculus has displaced many algebraic methods for solving classical problems. This article illustrates an algebraic method for finding the zeros of polynomial functions that is closely related to Newton's method (devised in 1669, published in 1711), which is encountered in calculus. By exploring this problem, precalculus students…
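    Newton's iteration x_{n+1} = x_n − p(x_n)/p′(x_n) for polynomial zeros can be sketched in a few lines; the polynomial x² − 2 below is a standard illustration, not one taken from the article:

```python
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Newton's method: iterate x <- x - f(x)/f'(x) until the step
    is below tol."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Zero of p(x) = x^2 - 2, i.e. sqrt(2) = 1.41421356...
root = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.0)
print(root)
```

    Quadratic convergence roughly doubles the number of correct digits per step, which is why a handful of iterations suffices from x₀ = 1.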

  8. Being an honest broker of hydrology: Uncovering, communicating and addressing model error in a climate change streamflow dataset

    Science.gov (United States)

    Chegwidden, O.; Nijssen, B.; Pytlak, E.

    2017-12-01

    Any model simulation has errors, including errors in meteorological data, process understanding, model structure, and model parameters. These errors may express themselves as bias, timing lags, and differences in sensitivity between the model and the physical world. The evaluation and handling of these errors can greatly affect the legitimacy, validity and usefulness of the resulting scientific product. In this presentation we will discuss a case study of handling and communicating model errors during the development of a hydrologic climate change dataset for the Pacific Northwestern United States. The dataset was the result of a four-year collaboration between the University of Washington, Oregon State University, the Bonneville Power Administration, the United States Army Corps of Engineers and the Bureau of Reclamation. Along the way, the partnership facilitated the discovery of multiple systematic errors in the streamflow dataset. Through an iterative review process, some of those errors could be resolved. For the errors that remained, honest communication of the shortcomings promoted the dataset's legitimacy. Thoroughly explaining errors also improved ways in which the dataset would be used in follow-on impact studies. Finally, we will discuss the development of the "streamflow bias-correction" step often applied to climate change datasets that will be used in impact modeling contexts. We will describe the development of a series of bias-correction techniques through close collaboration among universities and stakeholders. Through that process, both universities and stakeholders learned about the others' expectations and workflows. This mutual learning process allowed for the development of methods that accommodated the stakeholders' specific engineering requirements. The iterative revision process also produced a functional and actionable dataset while preserving its scientific merit. 
We will describe how encountering earlier techniques' pitfalls allowed us

  9. Rounding errors in weighing

    International Nuclear Information System (INIS)

    Jeach, J.L.

    1976-01-01

    When rounding error is large relative to weighing error, it cannot be ignored when estimating scale precision and bias from calibration data. Further, if the data grouping is coarse, rounding error is correlated with weighing error and may also have a mean quite different from zero. These facts are taken into account in a moment estimation method. A copy of the program listing for the MERDA program that provides moment estimates is available from the author. Experience suggests that if the data fall into four or more cells or groups, it is not necessary to apply the moment estimation method. Rather, the estimate given by equation (3) is valid in this instance. 5 tables
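    The coarse-grouping effect is easy to demonstrate by simulation. In the sketch below (weight value, noise level, and rounding grids are invented for illustration), a rounding grid much wider than the weighing noise makes the rounding error almost exactly the negative of the weighing error, while a fine grid leaves the two nearly uncorrelated:

```python
import random

random.seed(42)

def pearson(u, v):
    """Sample Pearson correlation coefficient."""
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    su = sum((a - mu) ** 2 for a in u) ** 0.5
    sv = sum((b - mv) ** 2 for b in v) ** 0.5
    return cov / (su * sv)

true_weight = 10.0
weigh_err = [random.gauss(0.0, 0.05) for _ in range(5000)]
readings = [true_weight + e for e in weigh_err]

def rounding_err(grid):
    return [round(r / grid) * grid - r for r in readings]

coarse = pearson(rounding_err(1.0), weigh_err)   # grid >> noise
fine = pearson(rounding_err(0.001), weigh_err)   # grid << noise
print(coarse, fine)
```

    With the coarse grid every reading rounds to the same value, so the rounding error is exactly −(weighing error) and the correlation is −1; with the fine grid the rounding residual is a rapid sawtooth in the noise and the correlation is near zero.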

  10. Extending the Applicability of the Generalized Likelihood Function for Zero-Inflated Data Series

    Science.gov (United States)

    Oliveira, Debora Y.; Chaffe, Pedro L. B.; Sá, João. H. M.

    2018-03-01

    Proper uncertainty estimation for data series with a high proportion of zero and near zero observations has been a challenge in hydrologic studies. This technical note proposes a modification to the Generalized Likelihood function that accounts for zero inflation of the error distribution (ZI-GL). We compare the performance of the proposed ZI-GL with the original Generalized Likelihood function using the entire data series (GL) and by simply suppressing zero observations (GLy>0). These approaches were applied to two interception modeling examples characterized by data series with a significant number of zeros. The ZI-GL produced better uncertainty ranges than the GL as measured by the precision, reliability and volumetric bias metrics. The comparison between ZI-GL and GLy>0 highlights the need for further improvement in the treatment of residuals from near zero simulations when a linear heteroscedastic error model is considered. Aside from the interception modeling examples illustrated herein, the proposed ZI-GL may be useful for other hydrologic studies, such as for the modeling of the runoff generation in hillslopes and ephemeral catchments.
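    The zero-inflation idea behind the ZI-GL can be sketched for a plain Gaussian error model (the actual ZI-GL modifies the heteroscedastic Generalized Likelihood, whose form is not reproduced in this abstract): an observed zero is attributed either to a point mass with probability pi0 or to the continuous error distribution. The observation and simulation values below are constructed for illustration:

```python
import math

def normal_pdf(x, sd):
    return math.exp(-0.5 * (x / sd) ** 2) / (sd * math.sqrt(2.0 * math.pi))

def log_lik(obs, sim, sd, pi0=0.0):
    """Gaussian error likelihood with optional zero inflation: an
    observed zero arises from the point mass (prob pi0) or from the
    continuous error model (prob 1 - pi0)."""
    ll = 0.0
    for y, s in zip(obs, sim):
        dens = normal_pdf(y - s, sd)
        if y == 0.0:
            ll += math.log(pi0 + (1.0 - pi0) * dens)
        else:
            ll += math.log((1.0 - pi0) * dens)
    return ll

# Interception-like series: many exact zeros while the model simulates
# positive values.
obs = [0.0] * 30 + [0.8, 1.1, 0.9, 1.3, 1.0] * 6
sim = [0.8] * 30 + [1.0] * 30
plain = log_lik(obs, sim, sd=0.2, pi0=0.0)
zi = log_lik(obs, sim, sd=0.2, pi0=0.5)
print(plain, zi)  # the zero-inflated version scores far higher here
```

    Mixing a point mass with a density in one sum is a common shorthand in zero-inflated likelihoods; a fully rigorous treatment would discretize the continuous part at zero.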

  11. Improved ensemble-mean forecast skills of ENSO events by a zero-mean stochastic model-error model of an intermediate coupled model

    Science.gov (United States)

    Zheng, F.; Zhu, J.

    2015-12-01

To perform an ensemble-based ENSO probabilistic forecast, the crucial issue is to design a reliable ensemble prediction strategy that should include the major uncertainties of a forecast system. In this study, we developed a new general ensemble perturbation technique to improve the ensemble-mean predictive skill of forecasting ENSO using an intermediate coupled model (ICM). The model uncertainties are first estimated and analyzed from EnKF analysis results through assimilating observed SST. Then, based on the pre-analyzed properties of the model errors, a zero-mean stochastic model-error model is developed to mainly represent the model uncertainties induced by some important physical processes missed in the coupled model (i.e., stochastic atmospheric forcing/MJO, extra-tropical cooling and warming, Indian Ocean Dipole mode, etc.). Each member of an ensemble forecast is perturbed by the stochastic model-error model at each step during the 12-month forecast process, and the stochastic perturbations are added into the modeled physical fields to mimic the presence of these high-frequency stochastic noises and model biases and their effect on the predictability of the coupled system. The impacts of stochastic model-error perturbations on ENSO deterministic predictions are examined by performing two sets of 21-yr retrospective forecast experiments. The two forecast schemes are differentiated by whether they considered the model stochastic perturbations, with both initialized by the ensemble-mean analysis states from EnKF. The comparison results suggest that the stochastic model-error perturbations have significant and positive impacts on improving the ensemble-mean prediction skills during the entire 12-month forecast process.
Because the nonlinear feature of the coupled model can induce the nonlinear growth of the added stochastic model errors with model integration, especially through the nonlinear heating mechanism with the vertical advection term of the model, the

  12. Locking classical correlations in quantum States.

    Science.gov (United States)

    DiVincenzo, David P; Horodecki, Michał; Leung, Debbie W; Smolin, John A; Terhal, Barbara M

    2004-02-13

    We show that there exist bipartite quantum states which contain a large locked classical correlation that is unlocked by a disproportionately small amount of classical communication. In particular, there are (2n+1)-qubit states for which a one-bit message doubles the optimal classical mutual information between measurement results on the subsystems, from n/2 bits to n bits. This phenomenon is impossible classically. However, states exhibiting this behavior need not be entangled. We study the range of states exhibiting this phenomenon and bound its magnitude.

  13. Classical Communication and Entanglement Cost in Preparing a Class of Multi-qubit States

    International Nuclear Information System (INIS)

    Pan Guixia; Liu Yimin; Zhang Zhanjun

    2008-01-01

    Recently, several similar protocols [J. Opt. B 4 (2002) 380; Phys. Lett. A 316 (2003) 159; Phys. Lett. A 355 (2006) 285; Phys. Lett. A 336 (2005) 317] for remotely preparing a class of multi-qubit states (i.e., α|0...0> + β|1...1>) were proposed. In this paper, by applying the controlled-NOT (CNOT) gate, a new simple protocol is proposed for remotely preparing this class of states. Compared with the previous protocols, both the classical communication cost and the required quantum entanglement in our protocol are remarkably reduced. Moreover, the difficulty of identifying some quantum states in our protocol is also reduced. Hence our protocol is more economical and feasible.

  14. Detected-jump-error-correcting quantum codes, quantum error designs, and quantum computation

    International Nuclear Information System (INIS)

    Alber, G.; Mussinger, M.; Beth, Th.; Charnes, Ch.; Delgado, A.; Grassl, M.

    2003-01-01

    The recently introduced detected-jump-correcting quantum codes are capable of stabilizing qubit systems against spontaneous decay processes arising from couplings to statistically independent reservoirs. These embedded quantum codes exploit classical information about which qubit has emitted spontaneously and correspond to an active error-correcting code embedded in a passive error-correcting code. The construction of a family of one-detected-jump-error-correcting quantum codes is shown, and the optimal redundancy, encoding, and recovery, as well as general properties of detected-jump-error-correcting quantum codes, are discussed. Using design theory, multiple-jump-error-correcting quantum codes can be constructed. The performance of one-jump-error-correcting quantum codes under nonideal conditions is studied numerically by simulating a quantum memory and Grover's algorithm.

  15. Zeroes of functions of Fresnel complementary integral type

    Directory of Open Access Journals (Sweden)

    Mario Alberto Villalobos Arias

    2017-02-01

    Theoretical upper and lower bounds are established for the zeros of a parametric family of functions defined by integrals of the same type as the Fresnel complementary integral. Asymptotic properties of these bounds are obtained, as well as monotonicity properties of the localization intervals. Given the value of the parameter, an analytical-numerical procedure is derived to enclose all zeros of a given function with an a priori error.
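The bracketing-plus-bisection idea behind enclosing every zero with an a priori error bound can be sketched generically; the oscillatory stand-in function below is an assumption for illustration, not the paper's parametric family:

```python
import math

def bracket_zeros(f, a, b, n):
    """Scan [a, b] on a uniform grid of n subintervals and return the
    subintervals where f changes sign (each encloses at least one zero)."""
    xs = [a + (b - a) * i / n for i in range(n + 1)]
    return [(xs[i], xs[i + 1])
            for i in range(n)
            if f(xs[i]) * f(xs[i + 1]) < 0]

def bisect(f, lo, hi, tol):
    """Shrink an enclosing interval until its width falls below the
    a priori error bound tol; return the midpoint."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Stand-in oscillatory function (not the paper's family):
f = lambda x: math.cos(x * x)
zeros = [bisect(f, lo, hi, 1e-10)
         for lo, hi in bracket_zeros(f, 0.0, 3.0, 300)]
```

For cos(x²) on (0, 3) this encloses the zeros at sqrt(π/2), sqrt(3π/2) and sqrt(5π/2), each to within the requested tolerance.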

  16. Error forecasting schemes of error correction at receiver

    International Nuclear Information System (INIS)

    Bhunia, C.T.

    2007-08-01

    To combat errors in computer communication networks, ARQ (Automatic Repeat Request) techniques are used. Recently, Chakraborty proposed a simple technique called the packet combining scheme, in which errors are corrected at the receiver from the erroneous copies. The Packet Combining (PC) scheme fails (i) when the bit error locations in the erroneous copies are the same and (ii) when multiple bit errors occur. Both cases have recently been addressed by two schemes known as the Packet Reversed Packet Combining (PRPC) scheme and the Modified Packet Combining (MPC) scheme, respectively. In this letter, two error forecasting correction schemes are reported which, in combination with PRPC, offer higher throughput. (author)
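The basic packet-combining principle these schemes build on can be sketched as follows: bit positions where two received copies disagree are candidate error locations, and the receiver tries assignments of those bits until an integrity check passes. This is a hypothetical illustration of the PC idea only, not Chakraborty's exact scheme or PRPC/MPC:

```python
import itertools, zlib

def packet_combine(copy_a, copy_b, check):
    """Toy packet-combining corrector: collect the positions where the two
    copies disagree, then brute-force every assignment of those bits until
    the integrity check accepts the candidate packet."""
    diff = [i for i, (x, y) in enumerate(zip(copy_a, copy_b)) if x != y]
    for bits in itertools.product([0, 1], repeat=len(diff)):
        trial = list(copy_a)
        for pos, b in zip(diff, bits):
            trial[pos] = b
        if check(trial):
            return trial
    return None

original = [1, 0, 1, 1, 0, 0, 1, 0]
crc = zlib.crc32(bytes(original))                # integrity check reference
copy_a = original.copy(); copy_a[2] ^= 1         # one bit error in copy A
copy_b = original.copy(); copy_b[5] ^= 1         # a different error in copy B
fixed = packet_combine(copy_a, copy_b,
                       lambda p: zlib.crc32(bytes(p)) == crc)
```

Note how the failure mode in the abstract falls out of the sketch: if both copies are corrupted at the same position, that bit never appears in `diff` and cannot be recovered.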

  17. A weak zero-one law for sequences of random distance graphs

    Energy Technology Data Exchange (ETDEWEB)

    Zhukovskii, Maksim E [M. V. Lomonosov Moscow State University, Faculty of Mechanics and Mathematics, Moscow (Russian Federation)

    2012-07-31

    We study zero-one laws for properties of random distance graphs. Properties written in a first-order language are considered. For p(N) such that pN^α → ∞ as N → ∞ and (1−p)N^α → ∞ as N → ∞ for any α > 0, we succeed in refuting the law. In this connection, we consider a weak zero-one j-law. For this law, we obtain results for random distance graphs which are similar to the assertions concerning the classical zero-one law for random graphs. Bibliography: 18 titles.

  18. Referential Zero Point

    Directory of Open Access Journals (Sweden)

    Matjaž Potrč

    2016-04-01

    Perhaps the most important controversy in which ordinary language philosophy was involved is that of definite descriptions, which presents the referential act as a community-involving, communication-intention endeavor, thereby opposing the acquaintance-based, logical-proper-names-inspired account of reference aimed at securing the truth conditions of referential expressions. The problem of reference is that of obtaining access to the matters in the world. This access may be forthcoming through the senses, or through descriptions. A review of how the problem of reference is handled shows, though, that one main practice is to rely on relations of acquaintance supporting logical proper names, demonstratives, indexicals, and causal or historical chains. This testifies that the problem of reference involves the zero point, and with it the phenomenology of intentionality. Communication-intention is but one dimension of the rich phenomenology that constitutes an agent's experiential space, his experiential world. The zero point is another constitutive aspect of the phenomenology involved in the referential relation. Realizing that the problem of reference is phenomenology-based opens a new perspective upon the contribution of analytical philosophy in this area, reconciling it with the continental approach and demonstrating variations of the impossibility related to the real. Chromatic illumination from the cognitive background empowers the referential act, in the best tradition of ordinary language philosophy.

  19. Solving the patient zero inverse problem by using generalized simulated annealing

    Science.gov (United States)

    Menin, Olavo H.; Bauch, Chris T.

    2018-01-01

    Identifying patient zero - the initially infected source of a given outbreak - is an important step in epidemiological investigations of both existing and emerging infectious diseases. Here, the use of the Generalized Simulated Annealing (GSA) algorithm to solve the inverse problem of finding the source of an outbreak is studied. The classical disease natural histories susceptible-infected (SI), susceptible-infected-susceptible (SIS), susceptible-infected-recovered (SIR) and susceptible-infected-recovered-susceptible (SIRS) on a regular lattice are addressed. Both the position of patient zero and its time of infection are considered unknown. The algorithm's performance with respect to the generalization parameter q_v and the fraction ρ of infected nodes for whom infection was ascertained is assessed. Numerical experiments show the algorithm is able to retrieve the epidemic source with good accuracy, even when ρ is small, but present no evidence that GSA performs better than its classical version. Our results suggest that simulated annealing could be a helpful tool for identifying patient zero in an outbreak where not all cases can be ascertained.
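The generalized (Tsallis-type) acceptance rule at the heart of GSA can be sketched as follows. The cooling schedule, proposal distribution, and toy 1-D energy below are illustrative assumptions, and the full GSA of the paper also generalizes the visiting distribution; this sketch only shows how the acceptance rule reduces to Metropolis as q → 1:

```python
import math, random

def gsa_accept(delta_e, temp, q):
    """Generalized acceptance probability [1 + (q-1)·ΔE/T]^(-1/(q-1));
    for q = 1 this recovers the classical Metropolis rule exp(-ΔE/T)."""
    if delta_e <= 0:
        return 1.0
    if abs(q - 1.0) < 1e-12:
        return math.exp(-delta_e / temp)
    base = 1.0 + (q - 1.0) * delta_e / temp
    return base ** (-1.0 / (q - 1.0)) if base > 0 else 0.0

def anneal(energy, x0, steps=20000, q=1.5, t0=5.0, seed=0):
    """Toy annealing loop on a 1-D energy landscape (hypothetical names)."""
    rng = random.Random(seed)
    x, best = x0, x0
    for k in range(1, steps + 1):
        temp = t0 / k                       # simple cooling schedule
        cand = x + rng.gauss(0.0, 0.5)      # Gaussian proposal (assumption)
        if rng.random() < gsa_accept(energy(cand) - energy(x), temp, q):
            x = cand
        if energy(x) < energy(best):
            best = x
    return best

best = anneal(lambda x: (x - 2.0) ** 2, x0=-5.0)
```

In the paper's setting the "energy" would measure the mismatch between the observed infection pattern and the one simulated from a candidate (position, time) of patient zero.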

  20. Phase shift and zeros in K⁺p

    Energy Technology Data Exchange (ETDEWEB)

    Urban, M [California Univ., Berkeley (USA). Lawrence Berkeley Lab.

    1975-12-22

    A specific example, K⁺p elastic scattering, is used to show some drawbacks of the classical phase shift analysis (PSA). The recently introduced method of zeros is used to obtain new results concerning K⁺p elastic scattering and to show that PSA is not well suited to the study of this interaction.

  1. Integrability of Hamiltonian systems with homogeneous potentials of degree zero

    Energy Technology Data Exchange (ETDEWEB)

    Casale, Guy, E-mail: guy.casale@univ-rennes1.f [IRMAR UMR 6625, Universite de Rennes 1, Campus de Beaulieu, 35042 Rennes Cedex (France); Duval, Guillaume, E-mail: dduuvvaall@wanadoo.f [1 Chemin du Chateau, 76 430 Les Trois Pierres (France); Maciejewski, Andrzej J., E-mail: maciejka@astro.ia.uz.zgora.p [Institute of Astronomy, University of Zielona Gora, Licealna 9, PL-65-417 Zielona Gora (Poland); Przybylska, Maria, E-mail: Maria.Przybylska@astri.uni.torun.p [Torun Centre for Astronomy, N. Copernicus University, Gagarina 11, PL-87-100 Torun (Poland)

    2010-01-04

    We derive necessary conditions for integrability in the Liouville sense of classical Hamiltonian systems with homogeneous potentials of degree zero. We obtain these conditions through an analysis of the differential Galois group of the variational equations along a particular solution generated by a non-zero solution d ∈ C^n of the nonlinear equation ∇V(d) = d. We prove that when the system is integrable, the Hessian matrix V''(d) has only integer eigenvalues and is diagonalizable.

  2. 2.3 Gbit/s underwater wireless optical communications using directly modulated 520 nm laser diode

    KAUST Repository

    Oubei, Hassan M.

    2015-07-30

    We experimentally demonstrate a record high-speed underwater wireless optical communication (UWOC) over 7 m distance using an on-off keying non-return-to-zero (OOK-NRZ) modulation scheme. The communication link uses a commercial TO-9 packaged pigtailed 520 nm laser diode (LD) with 1.2 GHz bandwidth as the optical transmitter and an avalanche photodiode (APD) module as the receiver. At 2.3 Gbit/s transmission, the measured bit error rate of the received data is 2.23×10⁻⁴, well below the forward error correction (FEC) threshold of 2×10⁻³ required for error-free operation. The high bandwidth of the LD coupled with the high-sensitivity APD and optimized operating conditions is the key enabling factor in obtaining high bit rate transmission in our proposed system. To the best of our knowledge, this result represents the highest data rate ever achieved in UWOC systems thus far.

  3. Systematic sampling with errors in sample locations

    DEFF Research Database (Denmark)

    Ziegel, Johanna; Baddeley, Adrian; Dorph-Petersen, Karl-Anton

    2010-01-01

    Systematic sampling of points in continuous space is widely used in microscopy and spatial surveys. Classical theory provides asymptotic expressions for the variance of estimators based on systematic sampling as the grid spacing decreases. However, the classical theory assumes that the sample grid is exactly periodic; real physical sampling procedures may introduce errors in the placement of the sample points. This paper studies the effect of errors in sample positioning on the variance of estimators in the case of one-dimensional systematic sampling. First we sketch a general approach to variance analysis using point process methods. We then analyze three different models for the error process, calculate exact expressions for the variances, and derive asymptotic variances. Errors in the placement of sample points can lead to substantial inflation of the variance, dampening of zitterbewegung...

  4. Simulated disclosure of a medical error by residents: development of a course in specific communication skills.

    Science.gov (United States)

    Raper, Steven E; Resnick, Andrew S; Morris, Jon B

    2014-01-01

    Surgery residents are expected to demonstrate the ability to communicate with patients, families, and the public in a wide array of settings on a wide variety of issues. One important setting in which residents may be required to communicate with patients is in the disclosure of medical error. This article details one approach to developing a course in the disclosure of medical errors by residents. Before the development of this course, residents had no education in the skills necessary to disclose medical errors to patients. Residents viewed a Web-based video didactic session and associated slide deck and then were filmed disclosing a wrong-site surgery to a standardized patient (SP). The filmed encounter was reviewed by faculty, who then along with the SP scored each encounter (5-point Likert scale) over 10 domains of physician-patient communication. The residents received individualized written critique, the numerical analysis of their individual scenario, and an opportunity to provide feedback over a number of domains. A mean score of 4.00 or greater was considered satisfactory. Faculty and SP assessments were compared with Student t test. Residents were filmed in a one-on-one scenario in which they had to disclose a wrong-site surgery to a SP in a Simulation Center. A total of 12 residents, shortly to enter the clinical postgraduate year 4, were invited to participate, as they will assume service leadership roles. All were finishing their laboratory experiences, and all accepted the invitation. Residents demonstrated satisfactory competence in 4 of the 10 domains assessed by the course faculty. There were significant differences in the perceptions of the faculty and SP in 5 domains. The residents found this didactic, simulated experience of value (Likert score ≥4 in 5 of 7 domains assessed in a feedback tool). Qualitative feedback from the residents confirmed the realistic feel of the encounter and other impressions. We were able to quantitatively

  5. Modelling the ethanol-induced sleeping time in mice through a zero inflated model

    OpenAIRE

    FOGAP, Njinju Tongwa

    2007-01-01

    In the analysis of data in statistics, it is imperative to select the most suitable model. A wrong model choice leads to biased parameter estimates and standard errors. In the ethanol anesthesia data set used in this thesis, we observe more zero counts than expected, usually termed zero-inflation. Traditional application of the Poisson and negative binomial distributions for model fitting may not be adequate due to the presence of excess zeros. This zero-inflation comes from two sources;...

  6. Optimal linear precoding for indoor visible light communication system

    KAUST Repository

    Sifaou, Houssem

    2017-07-31

    Visible light communication (VLC) is an emerging technique that uses light-emitting diodes (LED) to combine communication and illumination. It is considered as a promising scheme for indoor wireless communication that can be deployed at reduced costs while offering high data rate performance. In this paper, we focus on the design of the downlink of a multi-user VLC system. Inherent to multi-user systems is the interference caused by the broadcast nature of the medium. Linear precoding based schemes are among the most popular solutions that have recently been proposed to mitigate inter-user interference. This paper focuses on the design of the optimal linear precoding scheme that solves the max-min signal-to-interference-plus-noise ratio (SINR) problem. The performance of the proposed precoding scheme is studied under different working conditions and compared with the classical zero-forcing precoding. Simulations have been provided to illustrate the high gain of the proposed scheme.
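The classical zero-forcing baseline the paper compares against can be sketched in a few lines; the uniform total-power normalization and the channel dimensions below are assumptions for illustration:

```python
import numpy as np

def zero_forcing_precoder(H, total_power=1.0):
    """Classical zero-forcing precoder for a K-user multi-antenna downlink:
    W = H^H (H H^H)^{-1}, scaled to meet a total transmit power budget.
    Each user then sees only its own stream (zero inter-user interference)."""
    W = H.conj().T @ np.linalg.inv(H @ H.conj().T)
    W *= np.sqrt(total_power) / np.linalg.norm(W)   # global power scaling
    return W

rng = np.random.default_rng(0)
H = rng.standard_normal((3, 4))   # 3 users, 4 transmit elements (toy channel)
W = zero_forcing_precoder(H)
G = H @ W                         # effective channel after precoding
```

The effective channel `G` is diagonal up to floating-point error, which is exactly the interference-nulling property that the max-min SINR design of the paper trades off against noise enhancement.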

  7. Mixed quantum-classical electrodynamics: Understanding spontaneous decay and zero-point energy

    Science.gov (United States)

    Li, Tao E.; Nitzan, Abraham; Sukharev, Maxim; Martinez, Todd; Chen, Hsing-Ta; Subotnik, Joseph E.

    2018-03-01

    The dynamics of an electronic two-level system coupled to an electromagnetic field are simulated explicitly for one- and three-dimensional systems through semiclassical propagation of the Maxwell-Liouville equations. We consider three flavors of mixed quantum-classical dynamics: (i) the classical path approximation (CPA), (ii) Ehrenfest dynamics, and (iii) symmetrical quasiclassical (SQC) dynamics. Our findings are as follows: (i) The CPA fails to recover a consistent description of spontaneous emission, (ii) a consistent "spontaneous" emission can be obtained from Ehrenfest dynamics, provided that one starts in an electronic superposition state, and (iii) spontaneous emission is always obtained using SQC dynamics. Using the SQC and Ehrenfest frameworks, we further calculate the dynamics following an incoming pulse, but here we find very different responses: SQC and Ehrenfest dynamics deviate sometimes strongly in the calculated rate of decay of the transient excited state. Nevertheless, our work confirms the earlier observations by Miller [J. Chem. Phys. 69, 2188 (1978), 10.1063/1.436793] that Ehrenfest dynamics can effectively describe some aspects of spontaneous emission and highlights interesting possibilities for studying light-matter interactions with semiclassical mechanics.

  8. An Integrated Signaling-Encryption Mechanism to Reduce Error Propagation in Wireless Communications: Performance Analyses

    Energy Technology Data Exchange (ETDEWEB)

    Olama, Mohammed M [ORNL; Matalgah, Mustafa M [ORNL; Bobrek, Miljko [ORNL

    2015-01-01

    Traditional encryption techniques require packet overhead, produce processing time delay, and suffer from severe quality of service deterioration due to fades and interference in wireless channels. These issues reduce the effective transmission data rate (throughput) considerably in wireless communications, where data rate with limited bandwidth is the main constraint. In this paper, performance evaluation analyses are conducted for an integrated signaling-encryption mechanism that is secure and enables improved throughput and probability of bit-error in wireless channels. This mechanism eliminates the drawbacks stated herein by encrypting only a small portion of an entire transmitted frame, while the rest is not subject to traditional encryption but goes through a signaling process (designed transformation) with the plaintext of the portion selected for encryption. We also propose to incorporate error correction coding solely on the small encrypted portion of the data to drastically improve the overall bit-error rate performance while not noticeably increasing the required bit-rate. We focus on validating the signaling-encryption mechanism utilizing Hamming and convolutional error correction coding by conducting an end-to-end system-level simulation-based study. The average probability of bit-error and throughput of the encryption mechanism are evaluated over standard Gaussian and Rayleigh fading-type channels and compared to the ones of the conventional advanced encryption standard (AES).
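The idea of protecting only the small encrypted portion with a short block code can be illustrated with a Hamming(7,4) sketch. The matrices below are the standard textbook construction; their use here is illustrative rather than the paper's exact coding setup:

```python
import numpy as np

# Hamming(7,4): generator and parity-check matrices over GF(2).
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def encode(nibble):
    """Map 4 data bits to a 7-bit codeword."""
    return (np.array(nibble) @ G) % 2

def decode(word):
    """Correct any single-bit error via the syndrome, return the 4 data bits."""
    word = np.array(word).copy()
    syndrome = (H @ word) % 2
    if syndrome.any():
        # The column of H equal to the syndrome marks the error position.
        err = next(i for i in range(7) if np.array_equal(H[:, i], syndrome))
        word[err] ^= 1
    return word[:4]
```

Applying such a code only to the encrypted fragment keeps the added redundancy (here 3 bits per 4) confined to a small share of the frame, which is the throughput argument made in the abstract.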

  9. Application of zero-inflated poisson mixed models in prognostic factors of hepatitis C.

    Science.gov (United States)

    Akbarzadeh Baghban, Alireza; Pourhoseingholi, Asma; Zayeri, Farid; Jafari, Ali Akbar; Alavian, Seyed Moayed

    2013-01-01

    In recent years, hepatitis C virus (HCV) infection has represented a major public health problem. Evaluation of risk factors is one of the solutions which help protect people from the infection. This study aims to employ zero-inflated Poisson mixed models to evaluate prognostic factors of hepatitis C. The data were collected from a longitudinal study during 2005-2010. First, a mixed Poisson regression (PR) model was fitted to the data. Then, a mixed zero-inflated Poisson model was fitted with compound Poisson random effects. For evaluating the performance of the proposed mixed model, the standard errors of the estimators were compared. The results obtained from the mixed PR model showed that genotype 3 and the treatment protocol were statistically significant. Results of the zero-inflated Poisson mixed model showed that age, sex, genotypes 2 and 3, the treatment protocol, and having risk factors had significant effects on the viral load of HCV patients. Of these two models, the estimators of the zero-inflated Poisson mixed model had the minimum standard errors. The results showed that the mixed zero-inflated Poisson model provided the best fit. The proposed model can capture serial dependence, additional overdispersion, and excess zeros in longitudinal count data.
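The zero-inflated Poisson mixture itself can be written down directly. This illustrative pmf (names are assumptions, and it omits the random effects of the study) shows how the model assigns extra mass to zero counts:

```python
import numpy as np
from scipy import stats

def zip_pmf(k, lam, pi0):
    """Zero-inflated Poisson pmf: with probability pi0 the count is a
    structural zero, otherwise it follows Poisson(lam).  So
    P(0) = pi0 + (1-pi0)·e^(-lam) and P(k) = (1-pi0)·Poisson(k; lam) for k>0."""
    k = np.asarray(k)
    pois = stats.poisson.pmf(k, lam)
    return np.where(k == 0, pi0 + (1 - pi0) * pois, (1 - pi0) * pois)
```

A mixed-effects version, as in the study, would additionally let lam (and possibly pi0) vary per subject through covariates and random effects.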

  10. Zero-Axis Virtual Synchronous Coordinate Based Current Control Strategy for Grid-Connected Inverter

    Directory of Open Access Journals (Sweden)

    Longyue Yang

    2018-05-01

    Unbalanced power has a great influence on the safe and stable operation of the distribution network system. The static power compensator, which is essentially a grid-connected inverter, is an effective solution to the three-phase power imbalance problem. In order to solve the tracking error problem of zero-sequence AC current signals, a novel control strategy based on zero-axis virtual synchronous coordinates is proposed in this paper. By configuring the operation of the filter transmission matrices, a specific orthogonal signal is obtained for zero-axis reconstruction. In addition, a controller design scheme based on this method is proposed. Compared with traditional zero-axis direct control, this control strategy is equivalent to adding a frequency tuning module via the orthogonal signal generator. The control gain of the open-loop system can be equivalently increased through a linear transformation. With its clear mathematical meaning, the zero-sequence current can be controlled with only a first-order linear controller. Through reasonable parameter design, zero steady-state error, fast response, and strong stability can be achieved. Finally, the performance of the proposed control strategy is verified by both simulations and experiments.

  11. Decoy-state quantum key distribution with two-way classical postprocessing

    International Nuclear Information System (INIS)

    Ma Xiongfeng; Fung, C.-H.F.; Chen Kai; Lo, H.-K.; Dupuis, Frederic; Tamaki, Kiyoshi

    2006-01-01

    Decoy states have recently been proposed as a useful method for substantially improving the performance of quantum key distribution (QKD) protocols when a coherent-state source is used. Previously, data postprocessing schemes based on one-way classical communications were considered for use with decoy states. In this paper, we develop two data postprocessing schemes for the decoy-state method using two-way classical communications. Our numerical simulations (using parameters from a specific QKD experiment as an example) show that our scheme is able to extend the maximal secure distance from 142 km (using only one-way classical communications with decoy states) to 181 km. The second scheme is able to achieve a 10% greater key generation rate in the whole regime of distances. We conclude that decoy-state QKD with two-way classical postprocessing is of practical interest.

  12. Classical limit of a quantum particle in an external Yang-Mills field

    International Nuclear Information System (INIS)

    Moschella, U.

    1989-01-01

    We study the classical limit of a quantum particle in an external non-abelian gauge field. It is shown that the unitary group describing the quantum fluctuations around any classical phase orbit has a classical limit as ħ tends to zero, under very general conditions on the potentials. The self-adjointness of the Hamiltonian operator of the quantum theory is also proved for a large class of potentials. Some applications of the theory are finally presented.

  13. Zero-crossing detection algorithm for arrays of optical spatial filtering velocimetry sensors

    DEFF Research Database (Denmark)

    Jakobsen, Michael Linde; Pedersen, Finn; Hanson, Steen Grüner

    2008-01-01

    This paper presents a zero-crossing detection algorithm for arrays of compact low-cost optical sensors based on spatial filtering for measuring fluctuations in the angular velocity of rotating solid structures. The algorithm is applicable to signals with moderate signal-to-noise ratios, and delivers … repeating the same measurement error for each revolution of the target, and to gain high-performance measurement of angular velocity. The traditional zero-crossing detection is extended by 1) inserting an appropriate band-pass filter before the zero-crossing detection, 2) measuring the time periods between zero...
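The core zero-crossing step, locating crossings by linear interpolation between straddling samples and measuring the periods between them, can be sketched as follows (the band-pass filtering stage is omitted and all names are assumptions):

```python
import numpy as np

def zero_crossing_times(signal, fs):
    """Locate rising zero crossings of a sampled signal by linear
    interpolation between the samples that straddle zero; return the
    crossing times and the periods between successive crossings."""
    s = np.asarray(signal, dtype=float)
    idx = np.nonzero((s[:-1] < 0) & (s[1:] >= 0))[0]   # rising edges
    frac = -s[idx] / (s[idx + 1] - s[idx])             # sub-sample position
    times = (idx + frac) / fs
    return times, np.diff(times)

fs = 10_000.0                                  # sample rate in Hz
t = np.arange(0, 0.1, 1 / fs)
times, periods = zero_crossing_times(np.sin(2 * np.pi * 50 * t), fs)
```

For the 50 Hz test tone the recovered periods are 20 ms each; in the sensor application, fluctuations of these periods track the fluctuations in angular velocity.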

  14. Generalized Gaussian Error Calculus

    CERN Document Server

    Grabe, Michael

    2010-01-01

    For the first time in 200 years, Generalized Gaussian Error Calculus addresses a rigorous, complete and self-consistent revision of the Gaussian error calculus. Since experimentalists realized that measurements in general are burdened by unknown systematic errors, the classical, widely used evaluation procedures scrutinizing the consequences of random errors alone have turned out to be obsolete. As a matter of course, the error calculus to be, treating random and unknown systematic errors side by side, should ensure the consistency and traceability of physical units, physical constants and physical quantities at large. The generalized Gaussian error calculus considers unknown systematic errors to spawn biased estimators. Beyond that, random errors are required to conform to the idea of what the author calls well-defined measuring conditions. The approach features the properties of a building kit: any overall uncertainty turns out to be the sum of a contribution due to random errors, to be taken from a confidence inter...

  15. Gauge-fields and integrated quantum-classical theory

    International Nuclear Information System (INIS)

    Stapp, H.P.

    1986-01-01

    Physical situations in which quantum systems communicate continuously to their classically described environment are not covered by contemporary quantum theory, which requires a temporary separation of quantum degrees of freedom from classical ones. A generalization would be needed to cover these situations. An incomplete proposal is advanced for combining the quantum and classical degrees of freedom into a unified objective description. It is based on the use of certain quantum-classical structures of light that arise from gauge invariance to coordinate the quantum and classical degrees of freedom. Also discussed is the question of where experimenters should look to find phenomena pertaining to the quantum-classical connection. 17 refs

  16. The impact of statistical adjustment on conditional standard errors of measurement in the assessment of physician communication skills.

    Science.gov (United States)

    Raymond, Mark R; Clauser, Brian E; Furman, Gail E

    2010-10-01

    The use of standardized patients to assess communication skills is now an essential part of assessing a physician's readiness for practice. To improve the reliability of communication scores, it has become increasingly common in recent years to use statistical models to adjust ratings provided by standardized patients. This study employed ordinary least squares regression to adjust ratings, and then used generalizability theory to evaluate the impact of these adjustments on score reliability and the overall standard error of measurement. In addition, conditional standard errors of measurement were computed for both observed and adjusted scores to determine whether the improvements in measurement precision were uniform across the score distribution. Results indicated that measurement was generally less precise for communication ratings toward the lower end of the score distribution; and the improvement in measurement precision afforded by statistical modeling varied slightly across the score distribution such that the most improvement occurred in the upper-middle range of the score scale. Possible reasons for these patterns in measurement precision are discussed, as are the limitations of the statistical models used for adjusting performance ratings.

  17. Separation of zeros for source signature identification under reverberant path conditions.

    Science.gov (United States)

    Hasegawa, Tomomi; Tohyama, Mikio

    2011-10-01

    This paper presents an approach to distinguishing the zeros representing a sound source from those representing the transfer function on the basis of Lyon's residue-sign model. In machinery noise diagnostics, the source signature must be separated from observation records under reverberant path conditions. In numerical examples and an experimental piano-string vibration analysis, the modal responses could be synthesized by using clustered line-spectrum modeling. The modeling error represented the source signature subject to the source characteristics being given by a finite impulse response. The modeling error can be interpreted as a remainder function necessary for the zeros representing the source signature. © 2011 Acoustical Society of America

  18. Geometrical approach to the distribution of the zeros for the Husimi function

    International Nuclear Information System (INIS)

    Toscano, Fabricio; Almeida, M. Ozorio de

    1999-03-01

    We construct a semiclassical expression for the Husimi function of autonomous systems with one degree of freedom by smoothing with a Gaussian function an expression that captures the essential features of the Wigner function in the semiclassical limit. Our approximation reveals the center-and-chord structure that the Husimi function inherits from the Wigner function, down to the very shallow valleys where the Husimi zeros lie. This explanation for the distribution of zeros along curves relies on the geometry of the classical torus, rather than on the complex analytic properties of the WKB method in the Bargmann representation. We evaluate the zeros for several examples. (author)

  19. VLSI architectures for modern error-correcting codes

    CERN Document Server

    Zhang, Xinmiao

    2015-01-01

    Error-correcting codes are ubiquitous. They are adopted in almost every modern digital communication and storage system, such as wireless communications, optical communications, Flash memories, computer hard drives, sensor networks, and deep-space probing. New-generation and emerging applications demand codes with better error-correcting capability. On the other hand, the design and implementation of those high-gain error-correcting codes pose many challenges. They usually involve complex mathematical computations, and mapping them directly to hardware often leads to very high complexity. VLSI

  20. Driven topological systems in the classical limit

    Science.gov (United States)

    Duncan, Callum W.; Öhberg, Patrik; Valiente, Manuel

    2017-03-01

    Periodically driven quantum systems can exhibit topologically nontrivial behavior, even when their quasienergy bands have zero Chern numbers. Much work has been conducted on noninteracting quantum-mechanical models where this kind of behavior is present. However, the inclusion of interactions in out-of-equilibrium quantum systems can prove to be quite challenging. On the other hand, the classical counterpart of hard-core interactions can be simulated efficiently via constrained random walks. The noninteracting model, proposed by Rudner et al. [Phys. Rev. X 3, 031005 (2013), 10.1103/PhysRevX.3.031005], has a special point for which the system is equivalent to a classical random walk. We consider the classical counterpart of this model, which is exact at a special point even when hard-core interactions are present, and show how these quantitatively affect the edge currents in a strip geometry. We find that the interacting classical system is well described by a mean-field theory. Using this we simulate the dynamics of the classical system, which show that the interactions play the role of Markovian, or time-dependent disorder. By comparing the evolution of classical and quantum edge currents in small lattices, we find regimes where the classical limit considered gives good insight into the quantum problem.

  1. Experimental assessment of unvalidated assumptions in classical plasticity theory.

    Energy Technology Data Exchange (ETDEWEB)

    Brannon, Rebecca Moss (University of Utah, Salt Lake City, UT); Burghardt, Jeffrey A. (University of Utah, Salt Lake City, UT); Bauer, Stephen J.; Bronowski, David R.

    2009-01-01

    This report investigates the validity of several key assumptions in classical plasticity theory regarding material response to changes in the loading direction. Three metals, two rock types, and one ceramic were subjected to non-standard loading directions, and the resulting strain response increments were displayed in Gudehus diagrams to illustrate the approximation error of classical plasticity theories. A rigorous mathematical framework for fitting classical theories to the data, thus quantifying the error, is provided. Further data analysis techniques are presented that allow testing for the effect of changes in loading direction without having to use a new sample and for inferring the yield normal and flow directions without having to measure the yield surface. Though the data are inconclusive, there is indication that classical, incrementally linear, plasticity theory may be inadequate over a certain range of loading directions. This range of loading directions also coincides with loading directions that are known to produce a physically inadmissible instability for any nonassociative plasticity model.

  2. A Genetic algorithm for evaluating the zeros (roots) of polynomial ...

    African Journals Online (AJOL)

    This paper presents a Genetic Algorithm software package (a computational search technique) for finding the zeros (roots) of any given polynomial function, and for optimizing and solving N-dimensional systems of equations. The software is particularly useful since most of the classic schemes are not all-embracing.
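
    The record above describes the genetic-algorithm root finder only in outline; the AJOL software itself is not shown. As an illustration of the general technique, here is a minimal real-coded GA sketch (the polynomial, population size, and mutation scale are illustrative choices, not the paper's implementation):

```python
import random

def polynomial(x):
    # Illustrative example: x^3 - 6x^2 + 11x - 6, with roots at 1, 2, 3.
    return x**3 - 6*x**2 + 11*x - 6

def ga_root(f, lo=-10.0, hi=10.0, pop_size=60, generations=200, seed=0):
    """Minimize |f(x)| with a simple real-coded genetic algorithm."""
    rng = random.Random(seed)
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(generations):
        # Fitness: smaller |f(x)| is better; keep the best half as survivors.
        pop.sort(key=lambda x: abs(f(x)))
        survivors = pop[:pop_size // 2]
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = rng.sample(survivors, 2)
            child = (a + b) / 2            # arithmetic crossover
            child += rng.gauss(0, 0.1)     # Gaussian mutation
            children.append(child)
        pop = survivors + children
    return min(pop, key=lambda x: abs(f(x)))

root = ga_root(polynomial)
print(root, polynomial(root))
```

    With a fixed seed the run is reproducible; the best individual is preserved every generation (elitism), so the residual |f(x)| can only shrink.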

  3. Zero in the brain: A voxel-based lesion-symptom mapping study in right hemisphere damaged patients.

    Science.gov (United States)

    Benavides-Varela, Silvia; Passarini, Laura; Butterworth, Brian; Rolma, Giuseppe; Burgio, Francesca; Pitteri, Marco; Meneghello, Francesca; Shallice, Tim; Semenza, Carlo

    2016-04-01

    Transcoding numerals containing zero is more problematic than transcoding numbers formed by non-zero digits. However, it is currently unknown whether this is due to zeros requiring brain areas other than those traditionally associated with number representation. Here we hypothesize that transcoding zeros entails visuo-spatial and integrative processes typically associated with the right hemisphere. The investigation involved 22 right-brain-damaged patients and 20 healthy controls who completed tests of reading and writing Arabic numbers. As expected, the most significant deficit among patients involved a failure to cope with zeros. Moreover, a voxel-based lesion-symptom mapping (VLSM) analysis showed that the most common zero-errors were maximally associated with the right insula, which was previously related to sensorimotor integration, attention, and response selection, yet is here for the first time linked to transcoding processes. Error categories involving other digits corresponded to the so-called Neglect errors, which, however, constituted only about 10% of the total reading and 3% of the writing mistakes made by the patients. We argue that damage to the right hemisphere impairs the mechanism of parsing and the ability to set up empty-slot structures required for processing zeros in complex numbers; moreover, we suggest that the brain areas located in proximity to the right insula play a role in the integration of the information resulting from the temporary application of transcoding procedures. Copyright © 2016 Elsevier Ltd. All rights reserved.

  4. Optimized quantization in Zero Leakage Helper data systems

    NARCIS (Netherlands)

    Stanko, T.; Andini, F.N.; Skoric, B.

    2017-01-01

    Helper data systems are a cryptographic primitive that allows for the reproducible extraction of secrets from noisy measurements. Redundancy data called helper data makes it possible to do error correction while leaking little or nothing (Zero Leakage) about the extracted secret string. We study the

  5. Student laboratory experiments exploring optical fibre communication systems, eye diagrams, and bit error rates

    Science.gov (United States)

    Walsh, Douglas; Moodie, David; Mauchline, Iain; Conner, Steve; Johnstone, Walter; Culshaw, Brian

    2005-06-01

    Optical fibre communications has proved to be one of the key application areas which created, and ultimately propelled, the global growth of the photonics industry over the last twenty years. Consequently, the teaching of the principles of optical fibre communications has become integral to many university courses covering photonics technology. However, to reinforce the fundamental principles and key technical issues students examine in their lecture courses, and to develop their experimental skills, it is critical that the students also obtain hands-on practical experience of photonics components, instruments and systems in an associated teaching laboratory. In recognition of this need, OptoSci, in collaboration with university academics, commercially developed a fibre optic communications based educational package (ED-COM). This educator kit enables students to: investigate the characteristics of the individual communications system components (sources, transmitters, fibre, receiver); examine and interpret the overall system performance limitations imposed by attenuation and dispersion; and conduct system design and performance analysis. To further enhance the experimental programme examined in the fibre optic communications kit, an extension module to ED-COM has recently been introduced examining one of the most significant performance parameters of digital communications systems, the bit error rate (BER). This add-on module, BER(COM), enables students to generate, evaluate and investigate signal quality trends by examining eye patterns, and to explore the bit-rate limitations imposed on communication systems by noise, attenuation and dispersion. This paper will examine the educational objectives, background theory, and typical results for these educator kits, with particular emphasis on BER(COM).
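
    The BER(COM) module relates signal quality to noise via the eye diagram. For binary signalling with additive Gaussian noise, the textbook relation between the eye-diagram Q-factor and the bit error rate is BER = ½·erfc(Q/√2); a small sketch of that relation (illustrative values only, not OptoSci's software):

```python
import math

def ber_from_q(q):
    """Theoretical bit error rate for binary signalling in Gaussian noise,
    given the Q-factor (eye opening divided by total noise)."""
    return 0.5 * math.erfc(q / math.sqrt(2))

# A Q-factor of about 6 corresponds to the classic ~1e-9 "error-free" benchmark.
for q in (3, 6, 7):
    print(f"Q = {q}: BER ~ {ber_from_q(q):.2e}")
```

    The steep fall of BER with Q is why small improvements in eye opening translate into orders-of-magnitude fewer bit errors.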

  6. Putting a face on medical errors: a patient perspective.

    Science.gov (United States)

    Kooienga, Sarah; Stewart, Valerie T

    2011-01-01

    Knowledge of the patient's perspective on medical error is limited. Research efforts have centered on how best to disclose error and how patients desire to have medical error disclosed. On the basis of a qualitative descriptive component of a mixed method study, a purposive sample of 30 community members told their stories of medical error. Their experiences focused on lack of communication, missed communication, or provider's poor interpersonal style of communication, greatly contrasting with the formal definition of error as failure to follow a set standard of care. For these participants, being a patient was more important than error or how an error is disclosed. The patient's understanding of error must be a key aspect of any quality improvement strategy. © 2010 National Association for Healthcare Quality.

  7. Learning Classical Music Club

    CERN Multimedia

    Learning Classical Music Club

    2010-01-01

    There is a new CERN Club called “Learning Classical Music at CERN”. We aim to give classical music lessons for different instruments (see link) for students from 5 to 100 years old. We are now ready to start our activities in the CERN barracks and are in the enrollment phase; we hope to start lessons very soon! Club info can be found in the list of CERN Clubs: http://user.web.cern.ch/user/Communication/SocialLifeActivities/Clubs/Clubs.html Salvatore Buontempo Club President

  8. Spatially multiplexed orbital-angular-momentum-encoded single photon and classical channels in a free-space optical communication link.

    Science.gov (United States)

    Ren, Yongxiong; Liu, Cong; Pang, Kai; Zhao, Jiapeng; Cao, Yinwen; Xie, Guodong; Li, Long; Liao, Peicheng; Zhao, Zhe; Tur, Moshe; Boyd, Robert W; Willner, Alan E

    2017-12-01

    We experimentally demonstrate spatial multiplexing of an orbital angular momentum (OAM)-encoded quantum channel and a classical Gaussian beam with a different wavelength and orthogonal polarization. Data rates as large as 100 MHz are achieved by encoding on two different OAM states using a combination of independently modulated laser diodes and helical phase holograms. The influence of OAM mode spacing, encoding bandwidth, and interference from the co-propagating Gaussian beam on registered photon count rates and quantum bit error rates is investigated. Our results show that the deleterious effects of intermodal crosstalk on system performance become less important for OAM mode spacing Δ≥2 (corresponding to a crosstalk value of less than -18.5 dB). The use of the OAM domain can additionally offer at least 10.4 dB of isolation beyond that provided by wavelength and polarization, leading to further suppression of interference from the classical channel.

  9. Spacecraft Data Simulator for the test of level zero processing systems

    Science.gov (United States)

    Shi, Jeff; Gordon, Julie; Mirchandani, Chandru; Nguyen, Diem

    1994-01-01

    The Microelectronic Systems Branch (MSB) at Goddard Space Flight Center (GSFC) has developed a Spacecraft Data Simulator (SDS) to support the development, test, and verification of prototype and production Level Zero Processing (LZP) systems. Based on a disk array system, the SDS is capable of generating large test data sets up to 5 Gigabytes and outputting serial test data at rates up to 80 Mbps. The SDS supports data formats including NASA Communication (Nascom) blocks, Consultative Committee for Space Data Systems (CCSDS) Version 1 & 2 frames and packets, and all the Advanced Orbiting Systems (AOS) services. The capability to simulate both sequential and non-sequential time-ordered downlink data streams with errors and gaps is crucial to test LZP systems. This paper describes the system architecture, hardware and software designs, and test data designs. Examples of test data designs are included to illustrate the application of the SDS.

  10. Analysis of positron annihilation lifetime data by numerical Laplace inversion: Corrections for source terms and zero-time shift errors

    International Nuclear Information System (INIS)

    Gregory, R.B.

    1991-01-01

    We have recently described modifications to the program CONTIN for the solution of Fredholm integral equations with convoluted kernels of the type that occur in the analysis of positron annihilation lifetime data. In this article, modifications to the program to correct for source terms in the sample and reference decay curves and for shifts in the position of the zero-time channel of the sample and reference data are described. Unwanted source components, expressed as a discrete sum of exponentials, may be removed from both the sample and reference data by modification of the sample data alone, without the need for direct knowledge of the instrument resolution function. Shifts in the position of the zero-time channel of up to half the channel width of the multichannel analyzer can be corrected. Analyses of computer-simulated test data indicate that the quality of the reconstructed annihilation rate probability density functions is improved by employing a reference material with a short lifetime, and indicate that reference materials which generate free positrons by quenching positronium formation (i.e. strong oxidizing agents) have lifetimes that are too long (400-450 ps) to provide reliable estimates of the lifetime parameters for the short-lived components with the methods described here. Well-annealed single crystals of metals with lifetimes less than 200 ps, such as molybdenum (123 ps) and aluminium (166 ps), do not introduce significant errors in estimates of the lifetime parameters and are to be preferred as reference materials. The performance of our modified version of CONTIN is illustrated by application to positron annihilation in polytetrafluoroethylene. (orig.)

  11. Transfer Error and Correction Approach in Mobile Network

    Science.gov (United States)

    Xiao-kai, Wu; Yong-jin, Shi; Da-jin, Chen; Bing-he, Ma; Qi-li, Zhou

    With the development of information technology and social progress, the demand for information has become increasingly diverse: wherever and whenever, people want to communicate easily, quickly and flexibly via voice, data, images, video and other means. Because visual information is direct and vivid, image/video transmission has received widespread attention. Although third-generation mobile communication systems and rapidly developing IP networks are making video communication a major wireless service, real wireless and IP channels introduce errors, such as errors generated by multipath fading on the wireless channel and packet loss on IP networks. Because channel bandwidth is limited, video data must be heavily compressed, and the compressed stream is very sensitive to channel errors, which can cause a serious decline in image quality.

  12. A quantum secure direct communication protocol based on a five-particle cluster state and classical XOR operation

    International Nuclear Information System (INIS)

    Li Jian; Song Danjie; Guo Xiaojing; Jing Bo

    2012-01-01

    In order to transmit secure messages, a quantum secure direct communication protocol based on a five-particle cluster state and the classical XOR operation is presented. The five-particle cluster state is used to detect eavesdroppers, and the classical XOR operation, serving as a one-time pad, is used to ensure the security of the protocol. In the security analysis, the entropy theory method is introduced, and three detection strategies are compared quantitatively by using the constraint between the information that the eavesdroppers can obtain and the interference introduced. If the eavesdroppers intend to obtain all the information, the detection rate of the original ping-pong protocol is 50%; that of the second protocol, which uses two particles of the Einstein-Podolsky-Rosen pair as detection particles, is also 50%; while that of the presented protocol is 89%. Finally, the security of the proposed protocol is discussed, and the analysis results indicate that the protocol in this paper is more secure than the other two. (authors)
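
    The classical XOR operation used as a one-time pad in the protocol above is easy to illustrate. A minimal sketch (here the key is generated locally for demonstration; in the protocol it would be established via the quantum channel):

```python
import secrets

def xor_bytes(data: bytes, key: bytes) -> bytes:
    """Bitwise XOR of a message with an equal-length key (one-time pad)."""
    assert len(data) == len(key), "one-time pad key must match message length"
    return bytes(d ^ k for d, k in zip(data, key))

message = b"secure message"
key = secrets.token_bytes(len(message))

ciphertext = xor_bytes(message, key)
recovered = xor_bytes(ciphertext, key)   # XOR is its own inverse
print(recovered)
```

    Because each key is used exactly once and is as long as the message, the ciphertext carries no information about the plaintext without the key.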

  13. Quasi-superactivation for the classical capacity of quantum channels

    Energy Technology Data Exchange (ETDEWEB)

    Gyongyosi, Laszlo, E-mail: gyongyosi@hit.bme.hu [Quantum Technologies Laboratory, Department of Telecommunications, Budapest University of Technology and Economics, 2 Magyar tudosok krt, Budapest, H-1117, Hungary and Information Systems Research Group, Mathematics and Natural Sciences, Hungarian Ac (Hungary); Imre, Sandor [Quantum Technologies Laboratory, Department of Telecommunications, Budapest University of Technology and Economics, 2 Magyar tudosok krt, Budapest, H-1117 (Hungary)

    2014-12-04

    The superactivation effect has its roots in the extreme violation of the additivity of channel capacity and enables reliable transmission of quantum information over zero-capacity quantum channels. In this work we demonstrate a similar effect for the classical capacity of a quantum channel, which was previously thought to be impossible.

  14. Linear network error correction coding

    CERN Document Server

    Guang, Xuan

    2014-01-01

    There are two main approaches in the theory of network error correction coding. In this SpringerBrief, the authors summarize some of the most important contributions following the classic approach, which represents messages by sequences, similar to algebraic coding, and also briefly discuss the main results following the other approach, which uses the theory of rank-metric codes for network error correction and represents messages by subspaces. This book starts by establishing the basic linear network error correction (LNEC) model and then characterizes two equivalent descriptions. Distances an

  15. Relations between zeros of special polynomials associated with the Painleve equations

    International Nuclear Information System (INIS)

    Kudryashov, Nikolai A.; Demina, Maria V.

    2007-01-01

    A method for finding relations of roots of polynomials is presented. Our approach allows us to get a number of relations between the zeros of the classical polynomials as well as the roots of special polynomials associated with rational solutions of the Painleve equations. We apply the method to obtain the relations for the zeros of several polynomials. These are: the Hermite polynomials, the Laguerre polynomials, the Yablonskii-Vorob'ev polynomials, the generalized Okamoto polynomials, and the generalized Hermite polynomials. All the relations found can be considered as analogues of generalized Stieltjes relations
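
    For the Hermite case, the relations mentioned above can be checked numerically. From the differential equation H_n'' − 2x·H_n' + 2n·H_n = 0, each zero x_k of the physicists' Hermite polynomial H_n satisfies the classical Stieltjes-type relation Σ_{j≠k} 1/(x_k − x_j) = x_k. A quick numerical check (this is an independent verification, not the paper's method):

```python
from numpy.polynomial import hermite

# Zeros of the physicists' Hermite polynomial H_n.
n = 6
roots = hermite.hermroots([0] * n + [1])

# Stieltjes-type relation: for each zero x_k of H_n,
# sum_{j != k} 1/(x_k - x_j) = x_k.
for k, xk in enumerate(roots):
    s = sum(1.0 / (xk - xj) for j, xj in enumerate(roots) if j != k)
    print(f"x_{k} = {xk:+.4f}   sum = {s:+.4f}")
```

    The same pattern of pairwise-sum identities is what the paper generalizes to the special polynomials attached to the Painleve equations.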

  16. The CLASSIC Project

    CERN Document Server

    Iselin, F Christoph

    1996-01-01

    Exchange of data and algorithms among accelerator physics programs is difficult because of unnecessary differences in input formats and internal data structures. To alleviate these problems, a C++ class library called CLASSIC (Class Library for Accelerator System Simulation and Control) is being developed, with the goal of providing standard building blocks for computer programs used in accelerator physics. It allows accelerator lattice structures to be built in computer memory using a standard input language, a graphical user interface, or a programmed algorithm. It also provides simulation algorithms. These can easily be replaced by modules which communicate with the control system of the accelerator. Exchange of both data and algorithms between different programs using the CLASSIC library should present no difficulty.

  17. Long-distance quantum communication. Decoherence-avoiding mechanisms

    International Nuclear Information System (INIS)

    Kolb Bernardes, Nadja

    2012-01-01

    Entanglement is the essence of most quantum information processes. For instance, it is used as a resource for quantum teleportation or perfectly secure classical communication. Unfortunately, inevitable noise in the quantum channel will typically affect the distribution of entanglement. Owing to fundamental principles, common procedures used in classical communication, such as amplification, cannot be applied. Therefore, the fidelity and rate of transmission will be limited by the length of the channel. Quantum repeaters were proposed to avoid the exponential decay with the distance and to permit long-distance quantum communication. Long-distance quantum communication constitutes the framework for the results presented in this thesis. The main question addressed in this thesis is how the performance of quantum repeaters are affected by various sources of decoherence. Moreover, what can be done against decoherence to improve the performance of the repeater. We are especially interested in the so-called hybrid quantum repeater; however, many of the results presented here are sufficiently general and may be applied to other systems as well. First, we present a detailed entanglement generation rate analysis for the quantum repeater. In contrast to what is commonly found in the literature, our analysis is general and analytical. Moreover, various sources of errors are considered, such as imperfect local two-qubit operations and imperfect memories, making it possible to determine the requirements for memory decoherence times. More specifically, we apply our formulae in the context of a hybrid quantum repeater and we show that in a possible experimental scenario, our hybrid system can create near-maximally entangled pairs over a distance of 1280 km at rates of the order of 100 Hz. Furthermore, aiming to protect the system against different types of errors, we analyze the hybrid quantum repeater when supplemented by quantum error correction. We propose a scheme for

  19. Study of Frequency of Errors and Areas of Weaknesses in Business Communications Classes at Kapiolani Community College.

    Science.gov (United States)

    Uehara, Soichi

    This study was made to determine the most prevalent errors, areas of weakness, and their frequency in the writing of letters so that a course in business communications classes at Kapiolani Community College (Hawaii) could be prepared that would help students learn to write effectively. The 55 participating students were divided into two groups…

  20. glmmTMB balances speed and flexibility among packages for Zero-inflated Generalized Linear Mixed Modeling

    DEFF Research Database (Denmark)

    Brooks, Mollie Elizabeth; Kristensen, Kasper; van Benthem, Koen J.

    2017-01-01

    Count data can be analyzed using generalized linear mixed models when observations are correlated in ways that require random effects. However, count data are often zero-inflated, containing more zeros than would be expected from the typical error distributions. We present a new package, glmm...
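
    The excess zeros that motivate glmmTMB can be illustrated with the zero-inflated Poisson distribution, whose zero probability mixes "structural" zeros with ordinary Poisson zeros. A minimal sketch of the mixture (illustrative parameters; glmmTMB itself is an R package and is not reproduced here):

```python
import math

def poisson_pmf(k, lam):
    """Probability of count k under Poisson(lam)."""
    return math.exp(-lam) * lam**k / math.factorial(k)

def zip_pmf(k, lam, pi):
    """Zero-inflated Poisson: with probability pi the count is a structural
    zero; otherwise it is drawn from Poisson(lam)."""
    extra = pi if k == 0 else 0.0
    return extra + (1 - pi) * poisson_pmf(k, lam)

lam, pi = 2.0, 0.3
print(zip_pmf(0, lam, pi), poisson_pmf(0, lam))
```

    The zero-inflated model assigns strictly more mass to zero than the plain Poisson with the same rate, which is exactly the pattern seen in zero-inflated count data.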

  1. Local tuning of the order parameter in superconducting weak links: A zero-inductance nanodevice

    Science.gov (United States)

    Winik, Roni; Holzman, Itamar; Dalla Torre, Emanuele G.; Buks, Eyal; Ivry, Yachin

    2018-03-01

    Controlling both the amplitude and the phase of the superconducting quantum order parameter ψ in nanostructures is important for next-generation information and communication technologies. The lack of electric resistance in superconductors, which may be advantageous for some technologies, hinders convenient voltage-bias tuning and hence limits the tunability of ψ at the microscopic scale. Here, we demonstrate local tunability of the phase and amplitude of ψ, obtained by patterning, with a single lithography step, a Nb nano-superconducting quantum interference device (nano-SQUID) that is biased at its nanobridges. We accompany our experimental results with a semi-classical linearized model that is valid for generic nano-SQUIDs with multiple ports and helps simplify the modelling of non-linear couplings among the Josephson junctions. Our design helped us reveal unusual electric characteristics with effective zero inductance, which is promising for nanoscale magnetic sensing and quantum technologies.

  2. Feasible quantum communication complexity protocol

    International Nuclear Information System (INIS)

    Galvao, Ernesto F.

    2002-01-01

    I show that a simple multiparty communication task can be performed more efficiently with quantum communication than with classical communication, even with low detection efficiency η. The task is a communication complexity problem in which distant parties need to compute a function of the distributed inputs, while minimizing the amount of communication between them. A realistic quantum optical setup is suggested that can demonstrate a five-party quantum protocol with higher-than-classical performance, provided η>0.33

  3. Local and omnibus goodness-of-fit tests in classical measurement error models

    KAUST Repository

    Ma, Yanyuan; Hart, Jeffrey D.; Janicki, Ryan; Carroll, Raymond J.

    2010-01-01

    We consider functional measurement error models, i.e. models where covariates are measured with error and yet no distributional assumptions are made about the mismeasured variable. We propose and study a score-type local test and an orthogonal

  4. Spoiling of radiation zeros at the one-loop level and infrared finiteness

    International Nuclear Information System (INIS)

    Laursen, M.L.; Samuel, M.A.; Sen, A.

    1983-01-01

    We consider the amplitude for the radiative decay W⁻ → φ₁φ₂γ (scalar quarks), including one-loop gluon corrections. We study this process to see if the amplitude (radiation) zeros found in lowest order survive at the one-loop level. The subset of diagrams containing self-mass insertions preserves the zero. Seagull-type diagrams are shown to produce a violation similar to the case κ ≠ 1. Triangle and box diagrams spoil the zeros, as they do in the case of a scalar W. However, the amplitude is completely free of any mass singularities in the classical null zone. We conjecture that this will remain true for spin-1/2 quarks.

  5. Statistical-mechanics approach to wide-band digital communication.

    Science.gov (United States)

    Efraim, Hadar; Peleg, Yitzhak; Kanter, Ido; Shental, Ori; Kabashima, Yoshiyuki

    2010-12-01

    The emerging popular scheme of fourth generation wireless communication, orthogonal frequency-division multiplexing, is mapped onto a variant of a random field Ising Hamiltonian and results in an efficient physical intercarrier interference (ICI) cancellation decoding scheme. This scheme is based on Monte Carlo (MC) dynamics at zero temperature as well as at the Nishimori temperature and demonstrates improved bit error rate (BER) and robust convergence time compared to the state of the art ICI cancellation decoding scheme. An optimal BER performance is achieved with MC dynamics at the Nishimori temperature but with a substantial computational cost overhead. The suggested ICI cancellation scheme also supports the transmission of biased signals.
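
    Zero-temperature Monte Carlo dynamics of the kind used above for ICI cancellation accepts only moves that do not raise the energy. A toy sketch on a one-dimensional random-field Ising chain (an illustrative model with assumed parameters, not the paper's OFDM decoder):

```python
import random

def energy(spins, fields, J=1.0):
    """Energy of a 1-D random-field Ising chain with open boundaries."""
    bond = -J * sum(a * b for a, b in zip(spins, spins[1:]))
    field = -sum(h * s for h, s in zip(fields, spins))
    return bond + field

def zero_temperature_mc(spins, fields, sweeps=100, seed=1):
    """Zero-temperature Metropolis: accept a single-spin flip only if it does
    not increase the energy (greedy relaxation toward a local minimum)."""
    rng = random.Random(seed)
    spins = list(spins)
    e = energy(spins, fields)
    for _ in range(sweeps * len(spins)):
        i = rng.randrange(len(spins))
        spins[i] *= -1
        e_new = energy(spins, fields)
        if e_new <= e:
            e = e_new          # keep the flip
        else:
            spins[i] *= -1     # revert

    return spins, e

rng = random.Random(0)
n = 20
fields = [rng.uniform(-0.5, 0.5) for _ in range(n)]
start = [rng.choice((-1, 1)) for _ in range(n)]
final, e_final = zero_temperature_mc(start, fields)
print(energy(start, fields), e_final)
```

    Running at the Nishimori temperature instead would replace the hard accept/reject rule with a Boltzmann acceptance probability at that temperature, at the computational cost the abstract mentions.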

  6. Designing Holistic Zero Energy Homes in Denmark

    DEFF Research Database (Denmark)

    Bejder, Anne Kirkegaard; Knudstrup, Mary-Ann

    2016-01-01

    Designing zero-energy buildings (ZEB) is a complex but not an impossible task, as demonstration projects have illustrated, including houses that produce as much energy as they use on a yearly basis. Over the last years an increased interest in ZEBs has also been seen in practice. However, designing ZEBs is still challenging. In order to gain further currency, we need to collect new knowledge and communicate it in an easily applicable way for the building industry. This paper presents the development and objectives of a publication entitled “Zero Energy Buildings – Design Principles...

  7. Optimal reducibility of all W states equivalent under stochastic local operations and classical communication

    Energy Technology Data Exchange (ETDEWEB)

    Rana, Swapan; Parashar, Preeti [Physics and Applied Mathematics Unit, Indian Statistical Institute, 203 BT Road, Kolkata (India)

    2011-11-15

    We show that all multipartite pure states that are stochastic local operations and classical communication (SLOCC) equivalent to the N-qubit W state can be uniquely determined (among arbitrary states) from their bipartite marginals. We also prove that only (N-1) of the bipartite marginals are sufficient and that this is also the optimal number. Thus, contrary to the Greenberger-Horne-Zeilinger (GHZ) class, W-type states preserve their reducibility under SLOCC. We also study the optimal reducibility of some larger classes of states. The generic Dicke states |GD_N^l> are shown to be optimally determined by their (l+1)-partite marginals. The class of "G" states (superpositions of W and W̄) is shown to be optimally determined by just two (N-2)-partite marginals.

  8. Accounting for sampling error when inferring population synchrony from time-series data: a Bayesian state-space modelling approach with applications.

    Directory of Open Access Journals (Sweden)

    Hugues Santin-Janin

    BACKGROUND: Data collected to inform time variations in natural population size are tainted by sampling error. Ignoring sampling error in population dynamics models induces bias in parameter estimators, e.g., of density-dependence. In particular, when sampling errors are independent among populations, the classical estimator of the synchrony strength (zero-lag correlation) is biased downward. However, this bias is rarely taken into account in synchrony studies, although it may lead to overemphasizing the role of intrinsic factors (e.g., dispersal) with respect to extrinsic factors (the Moran effect) in generating population synchrony, as well as to underestimating the extinction risk of a metapopulation. METHODOLOGY/PRINCIPAL FINDINGS: The aim of this paper was first to illustrate the extent of the bias that can be encountered in empirical studies when sampling error is neglected. Second, we present a state-space modelling approach that explicitly accounts for sampling error when quantifying population synchrony. Third, we exemplify our approach with datasets for which sampling variance (i) has been previously estimated, and (ii) has to be jointly estimated with population synchrony. Finally, we compare our results to those of a standard approach neglecting sampling variance. We show that ignoring sampling variance can mask a synchrony pattern whatever its true value, and that the common practice of averaging a few replicates of population size estimates performs poorly at decreasing the bias of the classical estimator of the synchrony strength. CONCLUSION/SIGNIFICANCE: The state-space model used in this study provides a flexible way of accurately quantifying the strength of synchrony patterns from most population size data encountered in field studies, including over-dispersed count data. We provide a user-friendly R program and a tutorial example to encourage further studies aiming at quantifying the strength of population synchrony to account for
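
    The downward bias of the naive synchrony estimator under independent sampling error is easy to reproduce by simulation (a toy illustration with assumed variances, not the paper's state-space model):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 20000

# Two populations sharing a common environmental driver (the Moran effect).
common = rng.normal(size=n)
pop1 = common + 0.5 * rng.normal(size=n)
pop2 = common + 0.5 * rng.normal(size=n)

# Independent sampling error added to each observed series.
obs1 = pop1 + 1.0 * rng.normal(size=n)
obs2 = pop2 + 1.0 * rng.normal(size=n)

true_sync = np.corrcoef(pop1, pop2)[0, 1]    # ~0.8 by construction
naive_sync = np.corrcoef(obs1, obs2)[0, 1]   # attenuated by sampling noise
print(true_sync, naive_sync)
```

    The observation noise inflates each series' variance without adding shared covariance, so the naive zero-lag correlation is systematically attenuated, which is the bias the state-space approach corrects.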

  9. Distinguishing the elements of a full product basis set needs only projective measurements and classical communication

    International Nuclear Information System (INIS)

    Chen Pingxing; Li Chengzu

    2004-01-01

    Nonlocality without entanglement is an interesting field. A manifestation of quantum nonlocality without entanglement is the possible local indistinguishability of orthogonal product states. In this paper we analyze the character of operators to distinguish the elements of a full product basis set in a multipartite system, and show that perfectly distinguishing these product bases needs only local projective measurements and classical communication, and that these measurements cannot damage each product basis. Employing these conclusions one can discuss local distinguishability of the elements of any full product basis set easily. Finally we discuss the generalization of these results to the local distinguishability of the elements of an incomplete product basis set

  10. Entanglement enhances security in quantum communication

    International Nuclear Information System (INIS)

    Demkowicz-Dobrzanski, Rafal; Sen, Aditi; Sen, Ujjwal; Lewenstein, Maciej

    2009-01-01

    Secret sharing is a protocol in which a 'boss' wants to send a classical message secretly to two 'subordinates', such that none of the subordinates is able to know the message alone, while they can find it if they cooperate. Quantum mechanics is known to allow for such a possibility. We analyze tolerable quantum bit error rates in such secret sharing protocols in the physically relevant case when the eavesdropping is local with respect to the two channels of information transfer from the boss to the two subordinates. We find that using entangled encoding states is advantageous to legitimate users of the protocol. We therefore find that entanglement is useful for secure quantum communication. We also find that bound entangled states with positive partial transpose are not useful as a local eavesdropping resource. Moreover, we provide a criterion for security in secret sharing--a parallel of the Csiszar-Koerner criterion in single-receiver classical cryptography.

  11. The classic project

    International Nuclear Information System (INIS)

    Iselin, F. Christoph

    1997-01-01

    Exchange of data and algorithms among accelerator physics programs is difficult because of unnecessary differences in input formats and internal data structures. To alleviate these problems a C++ class library called CLASSIC (Class Library for Accelerator System Simulation and Control) is being developed with the goal of providing standard building blocks for computer programs used in accelerator design. It includes modules for building accelerator lattice structures in computer memory using a standard input language, a graphical user interface, or a programmed algorithm. It also provides simulation algorithms. These can easily be replaced by modules which communicate with the control system of the accelerator. Exchange of both data and algorithms between different programs using the CLASSIC library should present no difficulty

  12. A zero-one programming approach to Gulliksen's matched random subtests method

    NARCIS (Netherlands)

    van der Linden, Willem J.; Boekkooi-Timminga, Ellen

    1988-01-01

    Gulliksen’s matched random subtests method is a graphical method to split a test into parallel test halves. The method has practical relevance because it maximizes coefficient α as a lower bound to the classical test reliability coefficient. In this paper the same problem is formulated as a zero-one

  13. Finding the Right Distribution for Highly Skewed Zero-inflated Clinical Data

    Directory of Open Access Journals (Sweden)

    Resmi Gupta

    2013-03-01

    Full Text Available Discrete, highly skewed distributions with excess numbers of zeros often result in biased estimates and misleading inferences if the zeros are not properly addressed. A clinical example of children with electrophysiologic disorders in which many of the children are treated without surgery is provided. The purpose of the current study was to identify the optimal modeling strategy for highly skewed, zero-inflated data often observed in the clinical setting by: (a) simulating skewed, zero-inflated count data; (b) fitting simulated data with Poisson, Negative Binomial, Zero-Inflated Poisson (ZIP), and Zero-Inflated Negative Binomial (ZINB) models; and (c) applying the aforementioned models to actual, highly skewed, clinical data of children with an EP disorder. The ZIP model was observed to be the optimal model based on traditional fit statistics as well as estimates of bias, mean-squared error, and coverage.
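The excess-zero problem the study addresses is easy to reproduce (an illustrative simulation, not the study's clinical data; `pi_zero` and `lam` are made-up values): a plain Poisson fit to zero-inflated counts badly under-predicts the observed zero fraction.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, pi_zero, lam = 20000, 0.4, 3.0  # structural-zero probability and Poisson mean

# Zero-inflated Poisson draw: with prob pi_zero a structural zero, else Poisson(lam)
structural = rng.random(n) < pi_zero
counts = np.where(structural, 0, rng.poisson(lam, n))

lam_hat = counts.mean()                          # MLE of a plain Poisson mean
p0_data = np.mean(counts == 0)                   # observed zero fraction
p0_pois = stats.poisson.pmf(0, lam_hat)          # zeros a Poisson model expects
p0_zip = pi_zero + (1 - pi_zero) * np.exp(-lam)  # true ZIP zero probability

print(f"observed P(0) = {p0_data:.3f}, ZIP = {p0_zip:.3f}, Poisson fit = {p0_pois:.3f}")
```

Here the data contain roughly 43% zeros while the fitted Poisson predicts only about 17%, which is the mismatch that drives the biased estimates the abstract describes.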

  14. Coalitions in the quantum Minority game: Classical cheats and quantum bullies

    International Nuclear Information System (INIS)

    Flitney, Adrian P.; Greentree, Andrew D.

    2007-01-01

    In a one-off Minority game, when a group of players agree to collaborate they gain an advantage over the remaining players. We consider the advantage obtained in a quantum Minority game by a coalition sharing an initially entangled state versus that obtained by a coalition that uses classical communication to arrive at an optimal group strategy. In a model of the quantum Minority game where the final measurement basis is randomized, quantum coalitions outperform classical ones when carried out by up to four players, but an unrestricted amount of classical communication is better for larger coalition sizes

  15. Classical relativistic constituent particles and composite-particle scattering

    International Nuclear Information System (INIS)

    King, M.J.

    1984-01-01

    A nonlocal Lagrangian formalism is developed to describe a classical many-particle system. The nonstandard Lagrangian is a function of a single parameter s which is not, in general, associated with the physical clock. The particles are constrained to be constituents of composite systems, which in turn can decompose into asymptotic composite states representing free observable particles. To demonstrate this, explicit models of composite-composite particle scattering are constructed. Space-time conservation laws are not imposed separately on the system, but follow upon requiring the constituents to "pair up" into free composites at s = ±infinity. One model is characterized by the appearance of an "external" zero-mass composite particle which participates in the scattering process without affecting the space-time conservation laws of the two-composite system. Initial conditions on the two incoming composite particles and the zero-mass participant determine the scattering angle and the final states of the two outgoing composite particles. Although the formalism is classical, the model displays some features usually associated with quantum field theory, such as particle scattering by means of constituent exchange, creation and annihilation of particles, and restriction of values of angular momentum

  16. On the zeros of the Husimi functions of the spin boson model

    International Nuclear Information System (INIS)

    Cibils, M.B.; Cuche, Y.; Leboeuf, P.; Wreszinski, W.F.

    1992-03-01

    The distribution of zeros of the Husimi functions for the spin-boson model is studied, following an approach introduced by Leboeuf and Voros. The interest lies in the model's double feature of possessing both a classical integrable-to-chaotic transition and an unbounded four-dimensional phase space. The latter gives rise to several new questions regarding the Husimi zeros which are discussed and partially answered. Some significant results occur in spite of the fact that the case of spin one-half is treated. (authors) 20 refs., 4 figs

  17. Escaping the crunch: Gravitational effects in classical transitions

    International Nuclear Information System (INIS)

    Johnson, Matthew C.; Yang, I-Sheng

    2010-01-01

    During eternal inflation, a landscape of vacua can be populated by the nucleation of bubbles. These bubbles inevitably collide, and collisions sometimes displace the field into a new minimum in a process known as a classical transition. In this paper, we examine some new features of classical transitions that arise when gravitational effects are included. Using the junction condition formalism, we study the conditions for energy conservation in detail, and solve explicitly for the types of allowed classical transition geometries. We show that the repulsive nature of domain walls, and the de Sitter expansion associated with a positive energy minimum, can allow for classical transitions to vacua of higher energy than that of the colliding bubbles. Transitions can be made out of negative or zero energy (terminal) vacua to a de Sitter phase, restarting eternal inflation, and populating new vacua. However, the classical transition cannot produce vacua with energy higher than the original parent vacuum, which agrees with previous results on the construction of pockets of false vacuum. We briefly comment on the possible implications of these results for various measure proposals in eternal inflation.

  18. Classification of Four-Qubit States by Means of a Stochastic Local Operation and the Classical Communication Invariant

    International Nuclear Information System (INIS)

    Zha Xin-Wei; Ma Gang-Long

    2011-01-01

    It is a recent observation that entanglement classification for qubits is closely related to stochastic local operations and classical communication (SLOCC) invariants. Verstraete et al. [Phys. Rev. A 65 (2002) 052112] showed that for pure states of four qubits there are nine different degenerate SLOCC entanglement classes. Li et al. [Phys. Rev. A 76 (2007) 052311] showed that there are at least 28 distinct true SLOCC entanglement classes for four qubits by means of the SLOCC invariant and semi-invariant. We give 16 different entanglement classes for four qubits by means of basic SLOCC invariants. (general)

  19. Low-Latency Digital Signal Processing for Feedback and Feedforward in Quantum Computing and Communication

    Science.gov (United States)

    Salathé, Yves; Kurpiers, Philipp; Karg, Thomas; Lang, Christian; Andersen, Christian Kraglund; Akin, Abdulkadir; Krinner, Sebastian; Eichler, Christopher; Wallraff, Andreas

    2018-03-01

    Quantum computing architectures rely on classical electronics for control and readout. Employing classical electronics in a feedback loop with the quantum system allows us to stabilize states, correct errors, and realize specific feedforward-based quantum computing and communication schemes such as deterministic quantum teleportation. These feedback and feedforward operations are required to be fast compared to the coherence time of the quantum system to minimize the probability of errors. We present a field-programmable-gate-array-based digital signal processing system capable of real-time quadrature demodulation, a determination of the qubit state, and a generation of state-dependent feedback trigger signals. The feedback trigger is generated with a latency of 110 ns with respect to the timing of the analog input signal. We characterize the performance of the system for an active qubit initialization protocol based on the dispersive readout of a superconducting qubit and discuss potential applications in feedback and feedforward algorithms.

  20. Modeling Data with Excess Zeros and Measurement Error: Application to Evaluating Relationships between Episodically Consumed Foods and Health Outcomes

    KAUST Repository

    Kipnis, Victor

    2009-03-03

    Dietary assessment of episodically consumed foods gives rise to nonnegative data that have excess zeros and measurement error. Tooze et al. (2006, Journal of the American Dietetic Association 106, 1575-1587) describe a general statistical approach (National Cancer Institute method) for modeling such food intakes reported on two or more 24-hour recalls (24HRs) and demonstrate its use to estimate the distribution of the food's usual intake in the general population. In this article, we propose an extension of this method to predict individual usual intake of such foods and to evaluate the relationships of usual intakes with health outcomes. Following the regression calibration approach for measurement error correction, individual usual intake is generally predicted as the conditional mean intake given 24HR-reported intake and other covariates in the health model. One feature of the proposed method is that additional covariates potentially related to usual intake may be used to increase the precision of estimates of usual intake and of diet-health outcome associations. Applying the method to data from the Eating at America's Table Study, we quantify the increased precision obtained from including reported frequency of intake on a food frequency questionnaire (FFQ) as a covariate in the calibration model. We then demonstrate the method in evaluating the linear relationship between log blood mercury levels and fish intake in women by using data from the National Health and Nutrition Examination Survey, and show increased precision when including the FFQ information. Finally, we present simulation results evaluating the performance of the proposed method in this context.

  1. Medication errors: prescribing faults and prescription errors.

    Science.gov (United States)

    Velo, Giampaolo P; Minuz, Pietro

    2009-06-01

    1. Medication errors are common in general practice and in hospitals. Both errors in the act of writing (prescription errors) and prescribing faults due to erroneous medical decisions can result in harm to patients. 2. Any step in the prescribing process can generate errors. Slips, lapses, or mistakes are sources of errors, as in unintended omissions in the transcription of drugs. Faults in dose selection, omitted transcription, and poor handwriting are common. 3. Inadequate knowledge or competence and incomplete information about clinical characteristics and previous treatment of individual patients can result in prescribing faults, including the use of potentially inappropriate medications. 4. An unsafe working environment, complex or undefined procedures, and inadequate communication among health-care personnel, particularly between doctors and nurses, have been identified as important underlying factors that contribute to prescription errors and prescribing faults. 5. Active interventions aimed at reducing prescription errors and prescribing faults are strongly recommended. These should be focused on the education and training of prescribers and the use of on-line aids. The complexity of the prescribing procedure should be reduced by introducing automated systems or uniform prescribing charts, in order to avoid transcription and omission errors. Feedback control systems and immediate review of prescriptions, which can be performed with the assistance of a hospital pharmacist, are also helpful. Audits should be performed periodically.

  2. Masked Visual Analysis: Minimizing Type I Error in Visually Guided Single-Case Design for Communication Disorders.

    Science.gov (United States)

    Byun, Tara McAllister; Hitchcock, Elaine R; Ferron, John

    2017-06-10

    Single-case experimental designs are widely used to study interventions for communication disorders. Traditionally, single-case experiments follow a response-guided approach, where design decisions during the study are based on participants' observed patterns of behavior. However, this approach has been criticized for its high rate of Type I error. In masked visual analysis (MVA), response-guided decisions are made by a researcher who is blinded to participants' identities and treatment assignments. MVA also makes it possible to conduct a hypothesis test assessing the significance of treatment effects. This tutorial describes the principles of MVA, including both how experiments can be set up and how results can be used for hypothesis testing. We then report a case study showing how MVA was deployed in a multiple-baseline across-subjects study investigating treatment for residual errors affecting rhotics. Strengths and weaknesses of MVA are discussed. Given their important role in the evidence base that informs clinical decision making, it is critical for single-case experimental studies to be conducted in a way that allows researchers to draw valid inferences. As a method that can increase the rigor of single-case studies while preserving the benefits of a response-guided approach, MVA warrants expanded attention from researchers in communication disorders.

  3. Integration of a Zero-footprint Cloud-based Picture Archiving and Communication System with Customizable Forms for Radiology Research and Education.

    Science.gov (United States)

    Hostetter, Jason; Khanna, Nishanth; Mandell, Jacob C

    2018-06-01

    The purpose of this study was to integrate web-based forms with a zero-footprint cloud-based Picture Archiving and Communication Systems (PACS) to create a tool of potential benefit to radiology research and education. Web-based forms were created with a front-end and back-end architecture utilizing common programming languages including Vue.js, Node.js and MongoDB, and integrated into an existing zero-footprint cloud-based PACS. The web-based forms application can be accessed in any modern internet browser on desktop or mobile devices and allows the creation of customizable forms consisting of a variety of question types. Each form can be linked to an individual DICOM examination or a collection of DICOM examinations. Several uses are demonstrated through a series of case studies, including implementation of a research platform for multi-reader multi-case (MRMC) studies and other imaging research, and creation of an online Objective Structured Clinical Examination (OSCE) and an educational case file. Copyright © 2018 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.

  4. The Segal chronogeometric redshift - a classical analysis

    International Nuclear Information System (INIS)

    Fairchild, E.E. Jr.; Washington Univ., St. Louis, Mo.

    1977-01-01

    An error is shown to exist in the Segal chronogeometric redshift theory. The redshift-distance relation z = tan²(d/2R) derived by Segal using quantum theory violates the classical correspondence limit. The corrected result derived using simple classical arguments is z = tan²(d/R). This result gives the same predictions for small-redshift objects but differs for large-redshift objects such as quasars. The difference is shown to be caused by inconsistencies in the quantum derivation. Correcting these makes the quantum result equal to the classical result as one would expect from the correspondence principle. The impact of the correction on the predictions of the theory is discussed. (orig.) [de]

  5. Classical behavior of few-electron parabolic quantum dots

    International Nuclear Information System (INIS)

    Ciftja, O.

    2009-01-01

    Quantum dots are intricate and fascinating systems to study novel phenomena of great theoretical and practical interest because low dimensionality coupled with the interplay between strong correlations, quantum confinement and magnetic field creates unique conditions for emergence of fundamentally new physics. In this work we consider two-dimensional semiconductor quantum dot systems consisting of few interacting electrons confined in an isotropic parabolic potential. We study the many-electron quantum ground state properties of such systems in presence of a perpendicular magnetic field as the number of electrons is varied using exact numerical diagonalizations and other approaches. The results derived from the calculations of the quantum model are then compared to corresponding results for a classical model of parabolically confined point charges that interact via a Coulomb potential. We find that, for a wide range of parameters and magnetic fields considered in this work, the quantum ground state energy is very close to the classical energy of the most stable classical configuration under the condition that the classical energy is properly adjusted to incorporate the quantum zero point motion.
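The classical side of this comparison can be sketched by direct minimization of the classical energy, confinement plus pairwise Coulomb repulsion, in dimensionless units with both strengths set to 1 (a toy calculation, not the record's exact-diagonalization study):

```python
import numpy as np
from scipy.optimize import minimize

def energy(flat, n):
    """Classical energy of n point charges in 2D: (1/2) r^2 confinement per
    charge plus 1/r_ij Coulomb repulsion, in dimensionless units."""
    pts = flat.reshape(n, 2)
    conf = 0.5 * np.sum(pts**2)
    coul = sum(1.0 / np.linalg.norm(pts[i] - pts[j])
               for i in range(n) for j in range(i + 1, n))
    return conf + coul

n = 2
rng = np.random.default_rng(2)
res = minimize(energy, rng.normal(size=2 * n), args=(n,), method="Nelder-Mead",
               options={"xatol": 1e-10, "fatol": 1e-12, "maxiter": 20000})

# For n = 2 the minimum is at separation d = 2^(1/3), giving E = d^2/4 + 1/d = 3 * 2^(-4/3)
print(f"numerical minimum {res.fun:.6f}, analytic {3 * 2**(-4/3):.6f}")
```

For two charges the energy along the separation d is E(d) = d²/4 + 1/d, minimized at d³ = 2, so the numerical configuration energy should land on 3·2^(-4/3) ≈ 1.1906; larger n gives the classical Wigner-molecule-like configurations the abstract compares against.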

  6. Constructing quantum dynamics from mixed quantum-classical descriptions

    International Nuclear Information System (INIS)

    Barsegov, V.; Rossky, P.J.

    2004-01-01

    The influence of quantum bath effects on the dynamics of a quantum two-level system linearly coupled to a harmonic bath is studied when the coupling is both diagonal and off-diagonal. It is shown that the pure dephasing kernel and the non-adiabatic quantum transition rate between Born-Oppenheimer states of the subsystem can be decomposed into a contribution from thermally excited bath modes plus a zero point energy contribution. This quantum rate can be modewise factorized exactly into a product of a mixed quantum subsystem-classical bath transition rate and a quantum correction factor. This factor determines dynamics of quantum bath correlations. Quantum bath corrections to both the transition rate and the pure dephasing kernel are shown to be readily evaluated via a mixed quantum-classical simulation. Hence, quantum dynamics can be recovered from a mixed quantum-classical counterpart by incorporating the missing quantum bath corrections. Within a mixed quantum-classical framework, a simple approach for evaluating quantum bath corrections in calculation of the non-adiabatic transition rate is presented

  7. Impact of exposure measurement error in air pollution epidemiology: effect of error type in time-series studies.

    Science.gov (United States)

    Goldman, Gretchen T; Mulholland, James A; Russell, Armistead G; Strickland, Matthew J; Klein, Mitchel; Waller, Lance A; Tolbert, Paige E

    2011-06-22

    Two distinctly different types of measurement error are Berkson and classical. Impacts of measurement error in epidemiologic studies of ambient air pollution are expected to depend on error type. We characterize measurement error due to instrument imprecision and spatial variability as multiplicative (i.e. additive on the log scale) and model it over a range of error types to assess impacts on risk ratio estimates both on a per measurement unit basis and on a per interquartile range (IQR) basis in a time-series study in Atlanta. Daily measures of twelve ambient air pollutants were analyzed: NO2, NOx, O3, SO2, CO, PM10 mass, PM2.5 mass, and PM2.5 components sulfate, nitrate, ammonium, elemental carbon and organic carbon. Semivariogram analysis was applied to assess spatial variability. Error due to this spatial variability was added to a reference pollutant time-series on the log scale using Monte Carlo simulations. Each of these time-series was exponentiated and introduced to a Poisson generalized linear model of cardiovascular disease emergency department visits. Measurement error resulted in reduced statistical significance for the risk ratio estimates for all amounts (corresponding to different pollutants) and types of error. When modelled as classical-type error, risk ratios were attenuated, particularly for primary air pollutants, with average attenuation in risk ratios on a per unit of measurement basis ranging from 18% to 92% and on an IQR basis ranging from 18% to 86%. When modelled as Berkson-type error, risk ratios per unit of measurement were biased away from the null hypothesis by 2% to 31%, whereas risk ratios per IQR were attenuated (i.e. biased toward the null) by 5% to 34%. For CO modelled error amount, a range of error types were simulated and effects on risk ratio bias and significance were observed. For multiplicative error, both the amount and type of measurement error impact health effect estimates in air pollution epidemiology. By modelling
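The opposite behaviors of the two error types can be reproduced in a simplified linear-model analogue (illustrative variances; the study itself works with multiplicative error in Poisson time-series models): classical error attenuates the slope, while Berkson error leaves it unbiased.

```python
import numpy as np

rng = np.random.default_rng(3)
n, beta = 200000, 0.5
sigma_x, sigma_u = 1.0, 0.8  # true-exposure and error standard deviations

def ols_slope(x, y):
    return np.cov(x, y)[0, 1] / np.var(x)

# Classical error: we observe W = X + U, so regressing Y on W attenuates the slope
x = rng.normal(0, sigma_x, n)
y = beta * x + rng.normal(0, 0.5, n)
w = x + rng.normal(0, sigma_u, n)
b_classical = ols_slope(w, y)
attenuation = sigma_x**2 / (sigma_x**2 + sigma_u**2)  # expected shrinkage factor

# Berkson error: true exposure X = W + U scatters around the assigned value W,
# so regressing Y on W leaves the slope unbiased (in this linear model)
w2 = rng.normal(0, sigma_x, n)
x2 = w2 + rng.normal(0, sigma_u, n)
y2 = beta * x2 + rng.normal(0, 0.5, n)
b_berkson = ols_slope(w2, y2)

print(f"classical: {b_classical:.3f} (expect ~{beta * attenuation:.3f}); "
      f"Berkson: {b_berkson:.3f} (expect ~{beta:.3f})")
```

In both cases the added noise inflates standard errors, which matches the abstract's finding that significance is reduced for every error type even when the point estimate is not attenuated.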

  8. A Formal Approach to the Selection by Minimum Error and Pattern Method for Sensor Data Loss Reduction in Unstable Wireless Sensor Network Communications.

    Science.gov (United States)

    Kim, Changhwa; Shin, DongHyun

    2017-05-12

    There are wireless networks in which typically communications are unsafe. Most terrestrial wireless sensor networks belong to this category of networks. Another example of an unsafe communication network is an underwater acoustic sensor network (UWASN). In UWASNs in particular, communication failures occur frequently and the failure durations can range from seconds up to a few hours, days, or even weeks. These communication failures can cause data losses significant enough to seriously damage human life or property, depending on their application areas. In this paper, we propose a framework to reduce sensor data loss during communication failures and we present a formal approach to the Selection by Minimum Error and Pattern (SMEP) method that plays the most important role for the reduction in sensor data loss under the proposed framework. The SMEP method is compared with other methods to validate its effectiveness through experiments using real-field sensor data sets. Moreover, based on our experimental results and performance comparisons, the SMEP method has been validated to be better than others in terms of the average sensor data value error rate caused by sensor data loss.

  9. Measurement error in income and schooling, and the bias of linear estimators

    DEFF Research Database (Denmark)

    Bingley, Paul; Martinello, Alessandro

    The characteristics of measurement error determine the bias of linear estimators. We propose a method for validating economic survey data allowing for measurement error in the validation source, and we apply this method by validating Survey of Health, Ageing and Retirement in Europe (SHARE) data with Danish administrative registers. We find that measurement error in surveys is classical for annual gross income but non-classical for years of schooling, causing a 21% amplification bias in IV estimators of returns to schooling. Using a 1958 Danish schooling reform, we contextualize our result...

  10. Persistence of plasmids, cholera toxin genes, and prophage DNA in classical Vibrio cholerae O1.

    OpenAIRE

    Cook, W L; Wachsmuth, K; Johnson, S R; Birkness, K A; Samadi, A R

    1984-01-01

    Plasmid profiles, the location of cholera toxin subunit A genes, and the presence of the defective VcA1 prophage genome in classical Vibrio cholerae isolated from patients in Bangladesh in 1982 were compared with those in older classical strains isolated during the sixth pandemic and with those in selected El Tor and nontoxigenic O1 isolates. Classical strains typically had two plasmids (21 and 3 megadaltons), El Tor strains typically had no plasmids, and nontoxigenic O1 strains had zero to thr...

  11. Analysis of Errors in a Special Perturbations Satellite Orbit Propagator

    Energy Technology Data Exchange (ETDEWEB)

    Beckerman, M.; Jones, J.P.

    1999-02-01

    We performed an analysis of error densities for the Special Perturbations orbit propagator using data for 29 satellites in orbits of interest to Space Shuttle and International Space Station collision avoidance. We find that the along-track errors predominate. These errors increase monotonically over each 36-hour prediction interval. The predicted positions in the along-track direction progressively either leap ahead of or lag behind the actual positions. Unlike the along-track errors, the radial and cross-track errors oscillate about their nearly zero mean values. As the number of observations per fit interval declines, the along-track prediction errors and the amplitudes of the radial and cross-track errors increase.

  12. Common errors of drug administration in infants: causes and avoidance.

    Science.gov (United States)

    Anderson, B J; Ellis, J F

    1999-01-01

    Drug administration errors are common in infants. Although the infant population has a high exposure to drugs, there are few data concerning pharmacokinetics or pharmacodynamics, or the influence of paediatric diseases on these processes. Children remain therapeutic orphans. Formulations are often suitable only for adults; in addition, the lack of maturation of drug elimination processes, alteration of body composition and influence of size render the calculation of drug doses complex in infants. The commonest drug administration error in infants is one of dose, and the commonest hospital site for this error is the intensive care unit. Drug errors are a consequence of system error, and preventive strategies are possible through system analysis. The goal of a zero drug error rate should be aggressively sought, with systems in place that aim to eliminate the effects of inevitable human error. This involves review of the entire system from drug manufacture to drug administration. The nuclear industry, telecommunications and air traffic control services all practise error reduction policies with zero error as a clear goal, not by finding fault in the individual, but by identifying faults in the system and building into that system mechanisms for picking up faults before they occur. Such policies could be adapted to medicine using interventions both specific (the production of formulations which are for children only and clearly labelled, regular audit by pharmacists, legible prescriptions, standardised dose tables) and general (paediatric drug trials, education programmes, nonpunitive error reporting) to reduce the number of errors made in giving medication to infants.

  13. Quantum error-correcting code for ternary logic

    Science.gov (United States)

    Majumdar, Ritajit; Basu, Saikat; Ghosh, Shibashis; Sur-Kolay, Susmita

    2018-05-01

    Ternary quantum systems are being studied because they provide more computational state space per unit of information, known as qutrit. A qutrit has three basis states, thus a qubit may be considered as a special case of a qutrit where the coefficient of one of the basis states is zero. Hence both (2×2)-dimensional and (3×3)-dimensional Pauli errors can occur on qutrits. In this paper, we (i) explore the possible (2×2)-dimensional as well as (3×3)-dimensional Pauli errors in qutrits and show that any pairwise bit swap error can be expressed as a linear combination of shift errors and phase errors, (ii) propose a special type of error called a quantum superposition error and show its equivalence to arbitrary rotation, (iii) formulate a nine-qutrit code which can correct a single error in a qutrit, and (iv) provide its stabilizer and circuit realization.
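The claim in (i) follows because the nine generalized Pauli operators X^a Z^b (shift and phase) span all 3×3 matrices, so any pairwise swap error has an exact expansion in them. A quick numeric check of this fact (a sketch of the standard qutrit Pauli basis, not the paper's construction):

```python
import numpy as np

omega = np.exp(2j * np.pi / 3)
X = np.roll(np.eye(3), 1, axis=0)   # shift error: |k> -> |k+1 mod 3>
Z = np.diag([1, omega, omega**2])   # phase error: |k> -> omega^k |k>

# The nine operators X^a Z^b form an orthogonal basis of all 3x3 matrices
basis = [np.linalg.matrix_power(X, a) @ np.linalg.matrix_power(Z, b)
         for a in range(3) for b in range(3)]

# A pairwise bit-swap error on basis states |0> and |1>: |0><1| + |1><0| + |2><2|
swap01 = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 1]], dtype=complex)

# Expansion coefficients via the Hilbert-Schmidt inner product, c = tr(B^dag S)/3
coeffs = [np.trace(B.conj().T @ swap01) / 3 for B in basis]
recon = sum(c * B for c, B in zip(coeffs, basis))
print("max reconstruction error:", np.abs(recon - swap01).max())
```

The reconstruction error is at machine precision, confirming that the swap is a linear combination of shift and phase errors; the same expansion works for any (3×3)-dimensional error operator.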

  14. On the multiple zeros of a real analytic function with applications to the averaging theory of differential equations

    Science.gov (United States)

    García, Isaac A.; Llibre, Jaume; Maza, Susanna

    2018-06-01

    In this work we consider real analytic functions , where , Ω is a bounded open subset of , is an interval containing the origin, are parameters, and ε is a small parameter. We study the branching of the zero-set of at multiple points when the parameter ε varies. We apply the obtained results to improve the classical averaging theory for computing T-periodic solutions of λ-families of analytic T-periodic ordinary differential equations defined on , using the displacement functions defined by these equations. We call the coefficients in the Taylor expansion of in powers of ε the averaged functions. The main contribution consists in analyzing the role that have the multiple zeros of the first non-zero averaged function. The outcome is that these multiple zeros can be of two different classes depending on whether the zeros belong or not to the analytic set defined by the real variety associated to the ideal generated by the averaged functions in the Noetheriang ring of all the real analytic functions at . We bound the maximum number of branches of isolated zeros that can bifurcate from each multiple zero z 0. Sometimes these bounds depend on the cardinalities of minimal bases of the former ideal. Several examples illustrate our results and they are compared with the classical theory, branching theory and also under the light of singularity theory of smooth maps. The examples range from polynomial vector fields to Abel differential equations and perturbed linear centers.

  15. Barrelet zeros and elastic π+p partial waves

    International Nuclear Information System (INIS)

    Chew, D.M.; Urban, M.

    1976-06-01

    A procedure is proposed for constructing low-order partial-wave amplitudes from a knowledge of Barrelet zeros near the physical region. The method is applied to the zeros already obtained for elastic π+p scattering data between 1.2 and 2.2 GeV c.m. energies. The partial waves emerge with errors that are straightforwardly related to the accuracy of the data and satisfy unitarity without any constraint being imposed. There are significant differences from the partial waves obtained by other methods; this can be partially explained by the fact that no previous partial-wave analysis has been able to solve the discrete ambiguity. The cost of the analysis is much less

  16. Triple-Error-Correcting Codec ASIC

    Science.gov (United States)

    Jones, Robert E.; Segallis, Greg P.; Boyd, Robert

    1994-01-01

    Coder/decoder constructed on single integrated-circuit chip. Handles data in variety of formats at rates up to 300 Mbps, correcting up to 3 errors per data block of 256 to 512 bits. Helps reduce cost of transmitting data. Useful in many high-data-rate, bandwidth-limited communication systems such as personal communication networks, cellular telephone networks, satellite communication systems, high-speed computing networks, broadcasting, and high-reliability data-communication links.

  17. Quantum versus classical hyperfine-induced dynamics in a quantum dot

    Science.gov (United States)

    Coish, W. A.; Loss, Daniel; Yuzbashyan, E. A.; Altshuler, B. L.

    2007-04-01

    In this article we analyze spin dynamics for electrons confined to semiconductor quantum dots due to the contact hyperfine interaction. We compare mean-field (classical) evolution of an electron spin in the presence of a nuclear field with the exact quantum evolution for the special case of uniform hyperfine coupling constants. We find that (in this special case) the zero-magnetic-field dynamics due to the mean-field approximation and quantum evolution are similar. However, in a finite magnetic field, the quantum and classical solutions agree only up to a certain time scale t < τc, after which they differ markedly.

  18. Random electrodynamics : a classical foundation for key quantum concepts

    International Nuclear Information System (INIS)

    Sachidanandam, S.

    1981-01-01

    The model of random electrodynamics, in which electromagnetic particles are subjected, in a classical manner, to the forces of radiation damping and the fluctuating zero-point fields, provides the framework in which the following results are obtained: (1) The precession dynamics of a long-lived, non-relativistic particle with a magnetic moment proportional to its spin leads to a self-consistent determination of the spin value as one-half. (2) The internal dynamics underlying the intrinsic magnetic moment of a Dirac particle yields a classically visualizable picture of the spin-magnetic moment. (3) The Bose correlation among indistinguishable, non-interacting, spin-zero particles arises from the coupling through the common zero-point fields and the radiation reaction fields when the particles are close together in both the r vector and the energy spaces. (4) The (exclusion principle-induced) correlation among identical, non-interacting magnetic particles with spin 1/2 is brought about by the coupling (through the common fields of radiation reaction and the vacuum fluctuations) of the spins as well as the translational motions when the particles are close together in r vector and the energy spaces. (5) A dilute gas of free electrons has a Maxwellian distribution of velocities and the correct value of the diamagnetic moment in the presence of a magnetic field. Considerations on the centre of mass motion of a composite neutral particle lead to a simple resolution of the foundational paradoxes of statistical mechanics. (6) An approximate treatment of the hydrogen atom leads to a description of the evolution to the ground state at absolute zero and an estimation of the mass frequency and the line-width of the radiation emitted when an excited atom decays. (author)

  19. Origin of constraints in relativistic classical Hamiltonian dynamics

    International Nuclear Information System (INIS)

    Mallik, S.; Hugentobler, E.

    1979-01-01

    We investigate the null-plane or the front form of relativistic classical Hamiltonian dynamics as proposed by Dirac and developed by Leutwyler and Stern. For systems of two spinless particles we show that the algebra of Poincare generators is equivalent to describing dynamics in terms of two covariant constraint equations, the Poisson bracket of the two constraints being weakly zero. The latter condition is solved for certain simple forms of constraints

  20. Solution of Large Systems of Linear Equations in the Presence of Errors. A Constructive Criticism of the Least Squares Method

    Energy Technology Data Exchange (ETDEWEB)

    Nygaard, K

    1968-09-15

    From the point of view that no mathematical method can ever minimise or alter errors already made in a physical measurement, the classical least squares method has severe limitations which make it unsuitable for the statistical analysis of many physical measurements. Based on the assumptions that the experimental errors are characteristic for each single experiment and that the errors must be properly estimated rather than minimised, a new method for solving large systems of linear equations is developed. The new method exposes the entire range of possible solutions before the decision is taken as to which of the possible solutions should be chosen as a representative one. The choice is based on physical considerations which (in two examples, curve fitting and unfolding of a spectrum) are presented in such a form that a computer is able to make the decision. A description of the computation is given. The method described is a tool for removing uncertainties which are due to conventional mathematical formulations (zero determinant, linear dependence) and which are not inherent in the physical problem as such. The method is therefore especially well fitted for unfolding of spectra.

  1. Solution of Large Systems of Linear Equations in the Presence of Errors. A Constructive Criticism of the Least Squares Method

    International Nuclear Information System (INIS)

    Nygaard, K.

    1968-09-01

    From the point of view that no mathematical method can ever minimise or alter errors already made in a physical measurement, the classical least squares method has severe limitations which make it unsuitable for the statistical analysis of many physical measurements. Based on the assumptions that the experimental errors are characteristic for each single experiment and that the errors must be properly estimated rather than minimised, a new method for solving large systems of linear equations is developed. The new method exposes the entire range of possible solutions before the decision is taken as to which of the possible solutions should be chosen as a representative one. The choice is based on physical considerations which (in two examples, curve fitting and unfolding of a spectrum) are presented in such a form that a computer is able to make the decision. A description of the computation is given. The method described is a tool for removing uncertainties which are due to conventional mathematical formulations (zero determinant, linear dependence) and which are not inherent in the physical problem as such. The method is therefore especially well fitted for unfolding of spectra.

  2. Temperature error in digital bathythermograph data

    Digital Repository Service at National Institute of Oceanography (India)

    Pankajakshan, T.; Reddy, G.V.; Ratnakaran, L.; Sarupria, J.S.; RameshBabu, V.

    Sciences Vol. 32(3), September 2003, pp. 234-236. Short Communication: Temperature error in digital bathythermograph data. Thadathil Pankajakshan, G. V. Reddy, Lasitha Ratnakaran, J. S. Sarupria & V. Ramesh Babu, Data and Information Division... Oceanographic Data Centre (JODC) 17,305... Mean difference between DBT and Nansen temperature (hereafter referred to as 'error') from surface to 800 m depth and for the two cruises is given in Fig. 3. Error bars are provided...

  3. Towards Multimodal Error Management:Experimental Evaluation of User Strategies in Event of Faulty Application Behavior in Automotive Environments

    Directory of Open Access Journals (Sweden)

    Gregor McGlaun

    2004-10-01

    Full Text Available In this work, we present the results of a study analyzing the reactions of subjects to simulated errors of a dedicated in-car interface for controlling infotainment and communication services. The test persons could operate the system using different input modalities, such as natural or command speech as well as head and hand gestures, or classical tactile paradigms. In various situational contexts, we scrutinized the interaction patterns the test participants applied to overcome different operation tasks. Moreover, we evaluated individual user behavior concerning modality transitions and individual fallback strategies in case of system errors. Two different error types (Hidden System Errors and Apparent System Errors) were provoked. As a result, we found that initially, i.e. with the system working properly, most users prefer tactile or speech interaction. In case of Hidden System Errors, mostly changes from speech to tactile interaction and vice versa occurred. Concerning Apparent System Errors, 87% of the subjects automatically interrupted or cancelled their input procedure. 73% of all test persons who continued interaction, when the reason for the faulty system behavior was gone, strictly kept the selected modality. Regarding the given input vocabulary, none of the subjects selected head or hand gesture input as the leading fallback modality.

  4. Error Management in ATLAS TDAQ: An Intelligent Systems approach

    CERN Document Server

    Slopper, John Erik

    2010-01-01

    This thesis is concerned with the use of intelligent system techniques (IST) within a large distributed software system, specifically the ATLAS TDAQ system which has been developed and is currently in use at the European Laboratory for Particle Physics (CERN). The overall aim is to investigate and evaluate a range of IST techniques in order to improve the error management system (EMS) currently used within the TDAQ system via error detection and classification. The thesis work will provide a reference for future research and development of such methods in the TDAQ system. The thesis begins by describing the TDAQ system and the existing EMS, with a focus on the underlying expert system approach, in order to identify areas where improvements can be made using IST techniques. It then discusses measures of evaluating error detection and classification techniques and the factors specific to the TDAQ system. Error conditions are then simulated in a controlled manner using an experimental setup and datasets were gathered fro...

  5. Digital scrambling for shuttle communication links: Do drawbacks outweigh advantages?

    Science.gov (United States)

    Dessouky, K.

    1985-01-01

    Digital data scrambling has been considered for communication systems using NRZ (non-return to zero) symbol formats. The purpose is to increase the number of transitions in the data to improve the performance of the symbol synchronizer. This is accomplished without expanding the bandwidth but at the expense of increasing the data bit error rate (BER). Models for the scramblers/descramblers of practical interest are presented together with the appropriate link model. The effects of scrambling on the performance of coded and uncoded links are studied. The results are illustrated by application to the Tracking and Data Relay Satellite System links. Conclusions regarding the usefulness of scrambling are also given.
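    The transition-enriching idea can be sketched with a self-synchronizing multiplicative scrambler over GF(2). The PRBS15 polynomial 1 + x^-14 + x^-15 below is an illustrative assumption, not necessarily the generator considered for the Shuttle links:

```python
def scramble(bits, seed=None):
    # Multiplicative (self-synchronizing) scrambler, generator 1 + x^-14 + x^-15.
    # Each output bit is the input XORed with feedback from previous *outputs*.
    state = list(seed) if seed else [1] * 15
    out = []
    for b in bits:
        s = b ^ state[13] ^ state[14]   # taps at delays 14 and 15
        out.append(s)
        state = [s] + state[:-1]        # shift the scrambled bit into the register
    return out

def descramble(bits, seed=None):
    # Mirror structure: the register holds *received* bits, so after 15 bits
    # the descrambler self-synchronizes even without a shared seed.
    state = list(seed) if seed else [1] * 15
    out = []
    for s in bits:
        out.append(s ^ state[13] ^ state[14])
        state = [s] + state[:-1]
    return out

# A long run of zeros (no NRZ transitions) gains transitions after scrambling,
# and descrambling recovers the data exactly.
data = [0] * 64
coded = scramble(data)
assert descramble(coded) == data
assert 0 in coded and 1 in coded        # transitions were introduced
```

The BER penalty mentioned in the abstract comes from the same feedback structure: a single channel bit error re-enters the descrambler register and corrupts one extra bit per tap (here, three output bits per channel error).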

  6. Performance Limits of Communication with Energy Harvesting

    KAUST Repository

    Znaidi, Mohamed Ridha

    2016-04-01

    In energy harvesting communications, the transmitters have to adapt transmission to the availability of energy harvested during communication. The performance of the transmission depends on the channel conditions, which vary randomly due to mobility and environmental changes. In this work, we consider the problem of power allocation taking into account the energy arrivals over time and the quality of channel state information (CSI) available at the transmitter, in order to maximize the throughput. Differently from previous work, the CSI at the transmitter is not perfect and may include estimation errors. We solve this problem with respect to the energy harvesting constraints. Assuming a perfect knowledge of the CSI at the receiver, we determine the optimal power policy for different models of the energy arrival process (offline and online models). Indeed, we obtain the power allocation scheme when the transmitter has either perfect CSI or no CSI. We also investigate the case of fading channels with imperfect CSI, which is of utmost interest. Moreover, a study of the asymptotic behavior of the communication system is proposed. Specifically, we analyze the average throughput in a system where the average recharge rate goes asymptotically to zero and when it is very high.

  7. SimCommSys: taking the errors out of error-correcting code simulations

    Directory of Open Access Journals (Sweden)

    Johann A. Briffa

    2014-06-01

    Full Text Available In this study, we present SimCommSys, a simulator of communication systems that we are releasing under an open source license. The core of the project is a set of C++ libraries defining communication system components and a distributed Monte Carlo simulator. Of principal interest is the error-control coding component, where various kinds of binary and non-binary codes are implemented, including turbo, LDPC, repeat-accumulate and Reed–Solomon codes. The project also contains a number of ready-to-build binaries implementing various stages of the communication system (such as the encoder and decoder), a complete simulator and a system benchmark. Finally, SimCommSys also provides a number of shell and python scripts to encapsulate routine use cases. As long as the required components are already available in SimCommSys, the user may simulate complete communication systems of their own design without any additional programming. The strict separation of development (needed only to implement new components) and use (to simulate specific constructions) encourages reproducibility of experimental work and reduces the likelihood of error. Following an overview of the framework, we provide some examples of how to use the framework, including the implementation of a simple codec, the specification of communication systems and their simulation.

  8. Exact, E = 0, classical and quantum solutions for general power-law oscillators

    International Nuclear Information System (INIS)

    Nieto, M.M.; Daboul, J.

    1994-01-01

    For zero energy, E = 0, we derive exact, classical and quantum solutions for all power-law oscillators with potentials V(r) = -γ/r^ν, γ > 0 and -∞ < ν < ∞. When the angular momentum is non-zero, these solutions lead to the classical orbits r(t) = [cos μ(φ(t) - φ0(t))]^(1/μ), with μ = ν/2 - 1 ≠ 0. For ν > 2, the orbits are bound and go through the origin. We calculate the periods and precessions of these bound orbits, and graph a number of specific examples. The unbound orbits are also discussed in detail. Quantum mechanically, this system is also exactly solvable. We find that when ν > 2 the solutions are normalizable (bound), as in the classical case. Also, there are normalizable discrete, yet unbound, states which correspond to unbound classical particles which reach infinity in a finite time. These and other interesting comparisons to the classical system will be discussed.

  9. Error correcting code with chip kill capability and power saving enhancement

    Energy Technology Data Exchange (ETDEWEB)

    Gara, Alan G [Mount Kisco, NY; Chen, Dong [Croton On Husdon, NY; Coteus, Paul W [Yorktown Heights, NY; Flynn, William T [Rochester, MN; Marcella, James A [Rochester, MN; Takken, Todd [Brewster, NY; Trager, Barry M [Yorktown Heights, NY; Winograd, Shmuel [Scarsdale, NY

    2011-08-30

    A method and system are disclosed for detecting memory chip failure in a computer memory system. The method comprises the steps of accessing user data from a set of user data chips, and testing the user data for errors using data from a set of system data chips. This testing is done by generating a sequence of check symbols from the user data, grouping the user data into a sequence of data symbols, and computing a specified sequence of syndromes. If all the syndromes are zero, the user data has no errors. If one of the syndromes is non-zero, then a set of discriminator expressions are computed, and used to determine whether a single or double symbol error has occurred. In the preferred embodiment, less than two full system data chips are used for testing and correcting the user data.
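    The zero/non-zero syndrome logic described above can be illustrated in miniature. The patent works with multi-bit symbols and discriminator expressions; as a simplifying assumption, the sketch below uses single-bit Hamming(7,4) correction, where the syndrome is either all-zero (no error) or directly names the error position:

```python
import numpy as np

# Parity-check matrix of the Hamming(7,4) code: column j is the binary
# expansion of j, so a non-zero syndrome reads out the error position directly.
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def syndrome(word):
    return H @ word % 2

def correct(word):
    s = syndrome(word)
    pos = s[0] + 2 * s[1] + 4 * s[2]    # 0 means "no error detected"
    if pos:
        word = word.copy()
        word[pos - 1] ^= 1              # flip the offending bit
    return word

codeword = np.zeros(7, dtype=int)       # the all-zero word is a valid codeword
received = codeword.copy()
received[4] ^= 1                        # inject a single-bit error at position 5
assert not syndrome(codeword).any()     # all syndromes zero: no error
assert np.array_equal(correct(received), codeword)
```

Chip-kill codes apply the same recipe over symbols drawn from a larger field, so that all the bits of one failed memory chip fall into a single correctable symbol.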

  10. Multiple-Access Quantum-Classical Networks

    Science.gov (United States)

    Razavi, Mohsen

    2011-10-01

    A multi-user network that supports both classical and quantum communication is proposed. By relying on optical code-division multiple access techniques, this system offers simultaneous key exchange between multiple pairs of network users. A lower bound on the secure key generation rate will be derived for decoy-state quantum key distribution protocols.

  11. Quantum cosmology of classically constrained gravity

    International Nuclear Information System (INIS)

    Gabadadze, Gregory; Shang Yanwen

    2006-01-01

    In [G. Gabadadze, Y. Shang, hep-th/0506040] we discussed a classically constrained model of gravity. This theory contains known solutions of General Relativity (GR), and admits solutions that are absent in GR. Here we study cosmological implications of some of these new solutions. We show that a spatially-flat de Sitter universe can be created from 'nothing'. This universe has boundaries, and its total energy equals zero. Although the probability to create such a universe is exponentially suppressed, it favors initial conditions suitable for inflation. Then we discuss a finite-energy solution with a nonzero cosmological constant and zero space-time curvature. There is no tunneling suppression to fluctuate into this state. We show that for a positive cosmological constant this state is unstable: it can rapidly transition to a de Sitter universe, providing a new unsuppressed channel for inflation. For a negative cosmological constant the space-time flat solution is stable.

  12. Quantum Zeno and anti-Zeno effects on quantum and classical correlations

    International Nuclear Information System (INIS)

    Francica, F.; Plastina, F.; Maniscalco, S.

    2010-01-01

    In this paper we study the possibility of modifying the dynamics of both quantum correlations, such as entanglement and discord, and classical correlations of an open bipartite system by means of the quantum Zeno effect. We consider two qubits coupled to a common boson reservoir at zero temperature. This model describes, for example, two atoms interacting with a quantized mode of a lossy cavity. We show that when the frequencies of the two atoms are symmetrically detuned from that of the cavity mode, oscillations between the Zeno and anti-Zeno regimes occur. We also calculate analytically the time evolution of both classical correlations and quantum discord, and we compare the Zeno dynamics of entanglement with the Zeno dynamics of classical correlations and discord.

  13. Team errors: definition and taxonomy

    International Nuclear Information System (INIS)

    Sasou, Kunihide; Reason, James

    1999-01-01

    In error analysis or error management, the focus is usually upon individuals who have made errors. In large complex systems, however, most people work in teams or groups. Considering this working environment, insufficient emphasis has been given to 'team errors'. This paper discusses the definition of team errors and their taxonomy. These notions are also applied to events that have occurred in the nuclear power industry, the aviation industry and the shipping industry. The paper also discusses the relations between team errors and Performance Shaping Factors (PSFs). As a result, the proposed definition and taxonomy are found to be useful in categorizing team errors. The analysis also reveals that deficiencies in communication, poor resource/task management, an excessive authority gradient, and excessive professional courtesy can cause team errors. Handling human errors as team errors provides an opportunity to reduce human errors.

  14. Soliton surfaces via a zero-curvature representation of differential equations

    International Nuclear Information System (INIS)

    Grundland, A M; Post, S

    2012-01-01

    The main aim of this paper is to introduce a new version of the Fokas–Gel’fand formula for immersion of soliton surfaces in Lie algebras. The paper contains a detailed exposition of the technique for obtaining exact forms of 2D surfaces associated with any solution of a given nonlinear ordinary differential equation which can be written in the zero-curvature form. That is, for any generalized symmetry of the zero-curvature condition of the associated integrable model, it is possible to construct soliton surfaces whose Gauss–Mainardi–Codazzi equations are equivalent to infinitesimal deformations of the zero-curvature representation of the considered model. Conversely, it is shown (proposition 1) that for a given immersion function of a 2D soliton surface in a Lie algebra, it is possible to derive the associated generalized vector field in the evolutionary form which characterizes all symmetries of the zero-curvature condition. The theoretical considerations are illustrated via surfaces associated with the Painlevé equations P1, P2 and P3, including transcendental functions, the special cases of the rational and Airy solutions of P2 and the classical solutions of P3. (paper)

  15. Object permanence in adult common marmosets (Callithrix jacchus): not everything is an "A-not-B" error that seems to be one.

    Science.gov (United States)

    Kis, Anna; Gácsi, Márta; Range, Friederike; Virányi, Zsófia

    2012-01-01

    In this paper, we describe a behaviour pattern similar to the "A-not-B" error found in human infants and young apes in a monkey species, the common marmosets (Callithrix jacchus). In contrast to the classical explanation, recently it has been suggested that the "A-not-B" error committed by human infants is at least partially due to misinterpretation of the hider's ostensively communicated object hiding actions as potential 'teaching' demonstrations during the A trials. We tested whether this so-called Natural Pedagogy hypothesis would account for the A-not-B error that marmosets commit in a standard object permanence task, but found no support for the hypothesis in this species. Alternatively, we present evidence that lower level mechanisms, such as attention and motivation, play an important role in committing the "A-not-B" error in marmosets. We argue that these simple mechanisms might contribute to the effect of undeveloped object representational skills in other species including young non-human primates that commit the A-not-B error.

  16. High-dimensional free-space optical communications based on orbital angular momentum coding

    Science.gov (United States)

    Zou, Li; Gu, Xiaofan; Wang, Le

    2018-03-01

    In this paper, we propose a high-dimensional free-space optical communication scheme using orbital angular momentum (OAM) coding. In the scheme, the transmitter encodes N-bit information by using a spatial light modulator to convert a Gaussian beam into a superposition of N OAM modes and a Gaussian mode; the receiver decodes the information through an OAM mode analyser which consists of an MZ interferometer with a rotating Dove prism, a photoelectric detector and a computer carrying out the fast Fourier transform. The scheme can realize high-dimensional free-space optical communication and decodes the information quickly and accurately. We have verified the feasibility of the scheme by exploiting 8 (4) OAM modes and a Gaussian mode to implement a 256-ary (16-ary) coded free-space optical communication link to transmit a 256-gray-scale (16-gray-scale) picture. The results show that a zero bit error rate performance has been achieved.
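    The bit-to-mode mapping can be sketched with an idealized mode-occupancy model: each of the N bits selects whether the corresponding OAM mode appears in the superposition, alongside an always-present Gaussian reference mode. This is our simplification — the interferometric mode analyser of the paper is replaced by a noiseless projection onto the mode basis:

```python
import numpy as np

def encode(bits):
    # Amplitudes over [Gaussian, OAM_1, ..., OAM_N]; the Gaussian reference
    # mode is always present, and OAM mode i is added for every set bit.
    amp = np.zeros(len(bits) + 1)
    amp[0] = 1.0
    for i, b in enumerate(bits):
        if b:
            amp[i + 1] = 1.0
    return amp / np.linalg.norm(amp)    # normalize the superposition

def decode(amp, n):
    # Ideal, noiseless mode analysis: bit i is 1 iff OAM mode i carries power.
    return [int(abs(amp[i + 1]) ** 2 > 1e-9) for i in range(n)]

bits = [1, 0, 1, 1, 0, 0, 1, 0]         # one 8-bit symbol = one 256-ary pulse
assert decode(encode(bits), len(bits)) == bits
```

With 8 OAM modes each pulse carries one of 2^8 = 256 symbols, which is why a single superposition suffices to transmit one 256-gray-scale pixel.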

  17. Medication errors: an overview for clinicians.

    Science.gov (United States)

    Wittich, Christopher M; Burkle, Christopher M; Lanier, William L

    2014-08-01

    Medication error is an important cause of patient morbidity and mortality, yet it can be a confusing and underappreciated concept. This article provides a review for practicing physicians that focuses on medication error (1) terminology and definitions, (2) incidence, (3) risk factors, (4) avoidance strategies, and (5) disclosure and legal consequences. A medication error is any error that occurs at any point in the medication use process. It has been estimated by the Institute of Medicine that medication errors cause 1 of 131 outpatient and 1 of 854 inpatient deaths. Medication factors (eg, similar sounding names, low therapeutic index), patient factors (eg, poor renal or hepatic function, impaired cognition, polypharmacy), and health care professional factors (eg, use of abbreviations in prescriptions and other communications, cognitive biases) can precipitate medication errors. Consequences faced by physicians after medication errors can include loss of patient trust, civil actions, criminal charges, and medical board discipline. Methods to prevent medication errors from occurring (eg, use of information technology, better drug labeling, and medication reconciliation) have been used with varying success. When an error is discovered, patients expect disclosure that is timely, given in person, and accompanied with an apology and communication of efforts to prevent future errors. Learning more about medication errors may enhance health care professionals' ability to provide safe care to their patients. Copyright © 2014 Mayo Foundation for Medical Education and Research. Published by Elsevier Inc. All rights reserved.

  18. On zero-point energy, stability and Hagedorn behavior of Type IIB strings on pp-waves

    International Nuclear Information System (INIS)

    Bigazzi, F.; Cotrone, A.L.

    2003-06-01

    Type IIB strings on many pp-wave backgrounds, supported either by 5-form or 3-form fluxes, have negative light-cone zero-point energy. This raises the question of their stability and poses possible problems in the definition of their thermodynamic properties. After having pointed out the correct way of calculating the zero-point energy, an issue not fully discussed in the literature, we show that these Type IIB strings are classically stable and have well defined thermal properties, exhibiting a Hagedorn behavior. (author)

  19. Towards zero-power ICT

    Science.gov (United States)

    Gammaitoni, Luca; Chiuchiú, D.; Madami, M.; Carlotti, G.

    2015-06-01

    Is it possible to operate a computing device with zero energy expenditure? This question, once considered just an academic dilemma, has recently become strategic for the future of information and communication technology. In fact, in the last forty years the semiconductor industry has been driven by its ability to scale down the size of the complementary metal-oxide-semiconductor field-effect transistor, the building block of present computing devices, and to increase computing capability density up to a point where the power dissipated in heat during computation has become a serious limitation. To overcome such a limitation, since 2004 the Nanoelectronics Research Initiative has launched a grand challenge to address the fundamental limits of the physics of switches. In Europe, the European Commission has recently funded a set of projects with the aim of minimizing the energy consumption of computing. In this article we briefly review state-of-the-art zero-power computing, with special attention paid to the aspects of energy dissipation at the micro- and nanoscales.

  20. Statistical mechanics of error-correcting codes

    Science.gov (United States)

    Kabashima, Y.; Saad, D.

    1999-01-01

    We investigate the performance of error-correcting codes, where the code word comprises products of K bits selected from the original message and decoding is carried out utilizing a connectivity tensor with C connections per index. Shannon's bound for the channel capacity is recovered for large K and zero temperature when the code rate K/C is finite. Close to optimal error-correcting capability is obtained for finite K and C. We examine the finite-temperature case to assess the use of simulated annealing for decoding and extend the analysis to accommodate other types of noisy channels.

  1. The effect of zero-point energy differences on the isotope dependence of the formation of ozone: a classical trajectory study.

    Science.gov (United States)

    Schinke, Reinhard; Fleurat-Lessard, Paul

    2005-03-01

    The effect of zero-point energy differences (ΔZPE) between the possible fragmentation channels of highly excited O3 complexes on the isotope dependence of the formation of ozone is investigated by means of classical trajectory calculations and a strong-collision model. ΔZPE is incorporated in the calculations in a phenomenological way by adjusting the potential energy surface in the product channels so that the correct exothermicities and endothermicities are matched. The model contains two parameters, the frequency of stabilizing collisions ω and an energy-dependent parameter Δdamp, which favors the lower energies in the Maxwell-Boltzmann distribution. The stabilization frequency is used to adjust the pressure dependence of the absolute formation rate while Δdamp is utilized to control its isotope dependence. The calculations for several isotope combinations of oxygen atoms show a clear dependence of relative formation rates on ΔZPE. The results are similar to those of Gao and Marcus [J. Chem. Phys. 116, 137 (2002)] obtained within a statistical model. In particular, as in the statistical approach, an ad hoc parameter η ≈ 1.14, which effectively reduces the formation rates of the symmetric ABA ozone molecules, has to be introduced in order to obtain good agreement with the measured relative rates of Janssen et al. [Phys. Chem. Chem. Phys. 3, 4718 (2001)]. The temperature dependence of the recombination rate is also addressed.

  2. Uncertainty quantification for radiation measurements: Bottom-up error variance estimation using calibration information

    International Nuclear Information System (INIS)

    Burr, T.; Croft, S.; Krieger, T.; Martin, K.; Norman, C.; Walsh, S.

    2016-01-01

    One example of top-down uncertainty quantification (UQ) involves comparing two or more measurements on each of multiple items. One example of bottom-up UQ expresses a measurement result as a function of one or more input variables that have associated errors, such as a measured count rate, which individually (or collectively) can be evaluated for impact on the uncertainty in the resulting measured value. In practice, it is often found that top-down UQ exhibits larger error variances than bottom-up UQ, because some error sources are present in the fielded assay methods used in top-down UQ that are not present (or not recognized) in the assay studies used in bottom-up UQ. One would like better consistency between the two approaches in order to claim understanding of the measurement process. The purpose of this paper is to refine bottom-up uncertainty estimation by using calibration information so that if there are no unknown error sources, the refined bottom-up uncertainty estimate will agree with the top-down uncertainty estimate to within a specified tolerance. Then, in practice, if the top-down uncertainty estimate is larger than the refined bottom-up uncertainty estimate by more than the specified tolerance, there must be omitted sources of error beyond those predicted from calibration uncertainty. The paper develops a refined bottom-up uncertainty approach for four cases of simple linear calibration: (1) inverse regression with negligible error in predictors, (2) inverse regression with non-negligible error in predictors, (3) classical regression followed by inversion with negligible error in predictors, and (4) classical regression followed by inversion with non-negligible errors in predictors. Our illustrations are of general interest, but are drawn from our experience with nuclear material assay by non-destructive assay. The main example we use is gamma spectroscopy that applies the enrichment meter principle. 
Previous papers that ignore error in predictors
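The comparison of top-down and refined bottom-up UQ described above can be illustrated in simulation. The sketch below (hypothetical numbers, numpy only) corresponds to the paper's case 3, classical regression followed by inversion with negligible error in predictors: when no error sources are omitted, the propagated (bottom-up) and empirical (top-down) uncertainty estimates roughly agree.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear calibration: response y = a + b*x + e for known standards x.
a_true, b_true, sigma = 0.5, 2.0, 0.1
x_cal = np.linspace(1.0, 10.0, 20)
y_cal = a_true + b_true * x_cal + rng.normal(0.0, sigma, x_cal.size)

# Classical regression (fit y on x), then invert to assay an unknown item.
b_hat, a_hat = np.polyfit(x_cal, y_cal, 1)

def x_from_y(y):
    return (y - a_hat) / b_hat

# Bottom-up UQ: propagate the fitted residual error through the inversion
# (first-order delta method: sigma_x ~= sigma_y / |b|).
resid = y_cal - (a_hat + b_hat * x_cal)
s_y = np.sqrt(resid @ resid / (x_cal.size - 2))
bottom_up_sd = s_y / abs(b_hat)

# Top-down UQ: empirical spread of repeated assays of the same item.
x_item = 5.0
y_rep = a_true + b_true * x_item + rng.normal(0.0, sigma, 500)
top_down_sd = np.std(x_from_y(y_rep), ddof=1)

# With no omitted error sources the two estimates agree to within sampling error;
# a top-down value well above the bottom-up value signals unmodelled errors.
```

In the paper's terms, a top-down estimate exceeding the refined bottom-up estimate by more than a stated tolerance indicates error sources beyond calibration uncertainty.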

  3. Classical dynamics on graphs

    International Nuclear Information System (INIS)

    Barra, F.; Gaspard, P.

    2001-01-01

    We consider the classical evolution of a particle on a graph by using a time-continuous Frobenius-Perron operator that generalizes previous propositions. In this way, the relaxation rates as well as the chaotic properties can be defined for the time-continuous classical dynamics on graphs. These properties are given as the zeros of some periodic-orbit zeta functions. We consider in detail the case of infinite periodic graphs where the particle undergoes a diffusion process. The infinite spatial extension is taken into account by Fourier transforms that decompose the observables and probability densities into sectors corresponding to different values of the wave number. The hydrodynamic modes of diffusion are studied by an eigenvalue problem of a Frobenius-Perron operator corresponding to a given sector. The diffusion coefficient is obtained from the hydrodynamic modes of diffusion and has the Green-Kubo form. Moreover, we study finite but large open graphs that converge to the infinite periodic graph when their size goes to infinity. The lifetime of the particle on the open graph is shown to correspond to the lifetime of a system that undergoes a diffusion process before it escapes

  4. Classical-quantal coupling in the capture of muons by hydrogen atoms

    International Nuclear Information System (INIS)

    Kwong, N.H.; Garcia, J.D.

    1989-01-01

    We describe a self-consistent semiclassical approach to the problem of muon capture by hydrogen atoms. The dynamics of the heavier muon and proton are treated classically, and the electron quantally, with the potentials for both being self-consistently determined. Our numerical results are compared to classical-trajectory Monte Carlo (CTMC) and adiabatic ionisation (AI) results. Our capture cross sections are larger at low energy but fall more rapidly to zero. Our results provide the corrections to the dynamics beyond the adiabatic picture, which were missing in other approaches; interesting questions concerning the quantal nature of the events are discussed. (author)

  5. Logical reformulation of quantum mechanics. III. Classical limit and irreversibility

    International Nuclear Information System (INIS)

    Omnes, R.

    1988-01-01

This paper deals with two questions: (1) It contains a proof of the fact that consistent quantum representations of logic tend to the classical representation of logic when Planck's constant tends to zero. This result is obtained by using the microlocal analysis of partial differential equations and the Weyl calculus, which turn out to be the proper mathematical framework for this type of problem. (2) The analysis of the limitations of this proof turns out to be of physical significance, in particular when one considers quantum systems having for their classical version a Kolmogorov K-system. These limitations are used to show the existence of a best classical description for such a system, leading to an objective definition of entropy. It is shown that in such a description the approach to equilibrium is strictly reduced to a Markov process

  6. Population structure of the Classic period Maya.

    Science.gov (United States)

    Scherer, Andrew K

    2007-03-01

    This study examines the population structure of Classic period (A.D. 250-900) Maya populations through analysis of odontometric variation of 827 skeletons from 12 archaeological sites in Mexico, Guatemala, Belize, and Honduras. The hypothesis that isolation by distance characterized Classic period Maya population structure is tested using Relethford and Blangero's (Hum Biol 62 (1990) 5-25) approach to R matrix analysis for quantitative traits. These results provide important biological data for understanding ancient Maya population history, particularly the effects of the competing Tikal and Calakmul hegemonies on patterns of lowland Maya site interaction. An overall F(ST) of 0.018 is found for the Maya area, indicating little among-group variation for the Classic Maya sites tested. Principal coordinates plots derived from the R matrix analysis show little regional patterning in the data, though the geographic outliers of Kaminaljuyu and a pooled Pacific Coast sample did not cluster with the lowland Maya sites. Mantel tests comparing the biological distance matrix to a geographic distance matrix found no association between genetic and geographic distance. In the Relethford-Blangero analysis, most sites possess negative or near-zero residuals, indicating minimal extraregional gene flow. The exceptions were Barton Ramie, Kaminaljuyu, and Seibal. A scaled R matrix analysis clarifies that genetic drift is a consideration for understanding Classic Maya population structure. All results indicate that isolation by distance does not describe Classic period Maya population structure. (c) 2006 Wiley-Liss, Inc.
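The Mantel test used above compares a biological distance matrix with a geographic one by permutation. A minimal sketch with synthetic data (not the Maya odontometric matrices), constructed so that isolation by distance does hold and the test detects it:

```python
import numpy as np

rng = np.random.default_rng(1)

def mantel(d_bio, d_geo, n_perm=999):
    """One-sided Mantel permutation test for matrix correlation."""
    n = d_bio.shape[0]
    iu = np.triu_indices(n, k=1)
    r_obs = np.corrcoef(d_bio[iu], d_geo[iu])[0, 1]
    hits = 0
    for _ in range(n_perm):
        p = rng.permutation(n)
        # Permute rows and columns of one matrix consistently
        r = np.corrcoef(d_bio[p][:, p][iu], d_geo[iu])[0, 1]
        if r >= r_obs:
            hits += 1
    return r_obs, (hits + 1) / (n_perm + 1)

# Toy data: 8 "sites" on a line; biological distance tracks geography plus noise,
# i.e. a population that DOES show isolation by distance.
pos = np.arange(8.0)
d_geo = np.abs(pos[:, None] - pos[None, :])
d_bio = d_geo + rng.normal(0.0, 0.5, d_geo.shape)
d_bio = (d_bio + d_bio.T) / 2.0
np.fill_diagonal(d_bio, 0.0)

r, p_val = mantel(d_bio, d_geo)  # strong correlation, small p-value
```

For the Classic Maya data the analogous test found no such association, which is the study's evidence against isolation by distance.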

  7. Theoretical statistics of zero-age cataclysmic variables

    International Nuclear Information System (INIS)

    Politano, M.J.

    1988-01-01

The distribution of the white dwarf masses, the distribution of the mass ratios and the distribution of the orbital periods in cataclysmic variables which are forming at the present time are calculated. These systems are referred to as zero-age cataclysmic variables. The results show that 60% of the systems being formed contain helium white dwarfs and 40% contain carbon-oxygen white dwarfs. The mean white dwarf mass in those systems containing helium white dwarfs is 0.34. The mean white dwarf mass in those systems containing carbon-oxygen white dwarfs is 0.75. The orbital period distribution identifies four main classes of zero-age cataclysmic variables: (1) short-period systems containing helium white dwarfs, (2) systems containing carbon-oxygen white dwarfs whose secondaries are convectively stable against rapid mass transfer to the white dwarf, (3) systems containing carbon-oxygen white dwarfs whose secondaries are radiatively stable against rapid mass transfer to the white dwarf and (4) long-period systems with evolved secondaries. The white dwarf mass distribution in zero-age cataclysmic variables has direct application to the calculation of the frequency of outburst in classical novae as a function of the mass of the white dwarf. The method developed in this thesis to calculate the distributions of the orbital parameters in zero-age cataclysmic variables can be used to calculate theoretical statistics of any class of binary systems. This method provides a theoretical framework from which to investigate the statistical properties and the evolution of the orbital parameters of binary systems

  8. Quantum Capacity under Adversarial Quantum Noise: Arbitrarily Varying Quantum Channels

    Science.gov (United States)

    Ahlswede, Rudolf; Bjelaković, Igor; Boche, Holger; Nötzel, Janis

    2013-01-01

    We investigate entanglement transmission over an unknown channel in the presence of a third party (called the adversary), which is enabled to choose the channel from a given set of memoryless but non-stationary channels without informing the legitimate sender and receiver about the particular choice that he made. This channel model is called an arbitrarily varying quantum channel (AVQC). We derive a quantum version of Ahlswede's dichotomy for classical arbitrarily varying channels. This includes a regularized formula for the common randomness-assisted capacity for entanglement transmission of an AVQC. Quite surprisingly and in contrast to the classical analog of the problem involving the maximal and average error probability, we find that the capacity for entanglement transmission of an AVQC always equals its strong subspace transmission capacity. These results are accompanied by different notions of symmetrizability (zero-capacity conditions) as well as by conditions for an AVQC to have a capacity described by a single-letter formula. In the final part of the paper the capacity of the erasure-AVQC is computed and some light shed on the connection between AVQCs and zero-error capacities. Additionally, we show by entirely elementary and operational arguments motivated by the theory of AVQCs that the quantum, classical, and entanglement-assisted zero-error capacities of quantum channels are generically zero and are discontinuous at every positivity point.

  9. Particle Filter with Novel Nonlinear Error Model for Miniature Gyroscope-Based Measurement While Drilling Navigation

    Directory of Open Access Journals (Sweden)

    Tao Li

    2016-03-01

The derivation of a conventional error model for the miniature gyroscope-based measurement while drilling (MGWD) system is based on the assumption that the errors of attitude are small enough so that the direction cosine matrix (DCM) can be approximated or simplified by the errors of small-angle attitude. However, the simplification of the DCM would introduce errors to the navigation solutions of the MGWD system if the initial alignment cannot provide precise attitude, especially for the low-cost microelectromechanical system (MEMS) sensors operated in harsh multilateral horizontal downhole drilling environments. This paper proposes a novel nonlinear error model (NNEM) by the introduction of the error of DCM, and the NNEM can reduce the propagated errors under large-angle attitude error conditions. The zero velocity and zero position are the reference points and the innovations in the states estimation of particle filter (PF) and Kalman filter (KF). The experimental results illustrate that the performance of PF is better than KF and the PF with NNEM can effectively restrain the errors of system states, especially for the azimuth, velocity, and height in the quasi-stationary condition.

  10. Particle Filter with Novel Nonlinear Error Model for Miniature Gyroscope-Based Measurement While Drilling Navigation.

    Science.gov (United States)

    Li, Tao; Yuan, Gannan; Li, Wang

    2016-03-15

    The derivation of a conventional error model for the miniature gyroscope-based measurement while drilling (MGWD) system is based on the assumption that the errors of attitude are small enough so that the direction cosine matrix (DCM) can be approximated or simplified by the errors of small-angle attitude. However, the simplification of the DCM would introduce errors to the navigation solutions of the MGWD system if the initial alignment cannot provide precise attitude, especially for the low-cost microelectromechanical system (MEMS) sensors operated in harsh multilateral horizontal downhole drilling environments. This paper proposes a novel nonlinear error model (NNEM) by the introduction of the error of DCM, and the NNEM can reduce the propagated errors under large-angle attitude error conditions. The zero velocity and zero position are the reference points and the innovations in the states estimation of particle filter (PF) and Kalman filter (KF). The experimental results illustrate that the performance of PF is better than KF and the PF with NNEM can effectively restrain the errors of system states, especially for the azimuth, velocity, and height in the quasi-stationary condition.
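A minimal illustration of the zero-rate-as-reference idea in a particle filter: a bootstrap filter estimating a constant gyroscope bias from quasi-stationary readings. All numbers are hypothetical, and this is a generic sketch rather than the paper's NNEM/DCM formulation.

```python
import numpy as np

rng = np.random.default_rng(2)

# Quasi-stationary case: the true angular rate is zero, so each gyro reading is
# just bias + noise, and the zero-rate condition acts as the reference measurement.
true_bias, meas_noise = 0.3, 0.2          # deg/s, hypothetical values
n_particles = 2000

particles = rng.normal(0.0, 1.0, n_particles)      # prior over the bias
weights = np.full(n_particles, 1.0 / n_particles)

for _ in range(100):
    z = true_bias + rng.normal(0.0, meas_noise)    # gyro output at rest
    # Re-weight particles by the likelihood of the reading
    weights *= np.exp(-0.5 * ((z - particles) / meas_noise) ** 2)
    weights /= weights.sum()
    # Resample (with a little jitter) when the effective sample size collapses
    if 1.0 / np.sum(weights ** 2) < n_particles / 2:
        idx = rng.choice(n_particles, n_particles, p=weights)
        particles = particles[idx] + rng.normal(0.0, 0.01, n_particles)
        weights = np.full(n_particles, 1.0 / n_particles)

bias_est = float(np.sum(weights * particles))      # close to true_bias
```

The same zero-velocity/zero-position reference idea carries over to the full navigation state, where the particle filter's freedom from linearization is what gives it the edge over the Kalman filter under large attitude errors.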

  11. My objective: zero contempt, not zero risk
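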

    International Nuclear Information System (INIS)

    Delevoye, J.P.

    2009-01-01

With technology, scientific research and dissemination of knowledge, medical practice has improved, thereby achieving an efficient health care system. However, it would be appropriate to consider the human dimension of medicine as a key development. There are two major challenges in risk management: organizational management of risk on the one hand, and the management of the human relationship with the patient, especially when problems arise, on the other. It is therefore a question of achieving awareness and managing a culture change in the medical circle, i.e. moving from a culture of guilt to a culture of error, and finally relaxing the atmosphere of mutual distrust that exists between health professionals and patients. Indeed, the 'health professional-patient' relation has deteriorated over time due to poor risk management. An educational effort must be made to avoid frustration of the patient and contribute to zero contempt. On reflection, this means that the quality of a system is due to the individual quality of its members, the quality of methods and the organization in place. (author)

  12. NEUTRON-PROTON EFFECTIVE RANGE PARAMETERS AND ZERO-ENERGY SHAPE DEPENDENCE.

    Energy Technology Data Exchange (ETDEWEB)

    HACKENBURG, R.W.

    2005-06-01

A completely model-independent effective range theory fit to available, unpolarized, np scattering data below 3 MeV determines the zero-energy free proton cross section σ_0 = 20.4287 ± 0.0078 b, the singlet apparent effective range r_s = 2.754 ± 0.018 (stat) ± 0.056 (syst) fm, and slightly improves the error on the parahydrogen coherent scattering length, a_c = -3.7406 ± 0.0010 fm. The triplet and singlet scattering lengths and the triplet mixed effective range are calculated to be a_t = 5.4114 ± 0.0015 fm, a_s = -23.7153 ± 0.0043 fm, and ρ_t(0,-ε_t) = 1.7468 ± 0.0019 fm. The model-independent analysis also determines the zero-energy effective ranges by treating them as separate fit parameters without the constraint from the deuteron binding energy ε_t. These are determined to be ρ_t(0,0) = 1.705 ± 0.023 fm and ρ_s(0,0) = 2.665 ± 0.056 fm. This determination of ρ_t(0,0) and ρ_s(0,0) is most sensitive to the sparse data between about 20 and 600 keV, where the correlation between the determined values of ρ_t(0,0) and ρ_s(0,0) is at a minimum. This correlation is responsible for the large systematic error in r_s. More precise data in this range are needed. The present data do not even determine (with confidence) that ρ_t(0,0) ≠ ρ_t(0,-ε_t), referred to here as "zero-energy shape dependence". The widely used measurement of σ_0 = 20.491 ± 0.014 b from W. Dilg, Phys. Rev. C 11, 103 (1975), is argued to be in error.
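The quoted values are internally consistent with the standard zero-energy np relations; a quick check (scattering lengths copied from the abstract, relations from standard low-energy scattering theory):

```python
import math

a_t = 5.4114     # triplet scattering length, fm (from the fit above)
a_s = -23.7153   # singlet scattering length, fm (from the fit above)

# Spin-averaged zero-energy cross section: weight 3/4 triplet + 1/4 singlet,
# sigma_0 = pi * (3*a_t**2 + a_s**2); 1 barn = 100 fm^2.
sigma0_barn = math.pi * (3 * a_t**2 + a_s**2) / 100.0   # close to the quoted 20.4287 b

# Parahydrogen (bound-proton) coherent scattering length: the free-atom spin
# combination (3*a_t + a_s)/4 times the reduced-mass factor (A+1)/A = 2.
a_c = 2.0 * (3 * a_t + a_s) / 4.0                       # close to the quoted -3.7406 fm
```

Both reproduce the fitted σ_0 and a_c to the quoted precision, as they must for a consistent fit.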

  13. Nonlinear control of ships minimizing the position tracking errors

    Directory of Open Access Journals (Sweden)

    Svein P. Berge

    1999-07-01

In this paper, a nonlinear tracking controller with integral action for ships is presented. The controller is based on state feedback linearization. Exponential convergence of the vessel-fixed position and velocity errors is proven by using Lyapunov stability theory. Since we only have two control devices, a rudder and a propeller, we choose to control the longship and sideship position errors to zero while the heading is stabilized indirectly. A Virtual Reference Point (VRP) is defined at the bow or ahead of the ship. The VRP is used for tracking control. It is shown that the distance from the center of rotation to the VRP will influence the stability of the zero dynamics. By selecting the VRP at the bow or even ahead of the bow, the damping in yaw can be increased and the zero dynamics is stabilized. Hence, the heading angle will be less sensitive to wind, currents and waves. The control law is simulated by using a nonlinear model of the Japanese training ship Shiojimaru with excellent results. Wind forces are added to demonstrate the robustness and performance of the integral controller.

  14. Ensemble simulations with discrete classical dynamics

    DEFF Research Database (Denmark)

    Toxværd, Søren

    2013-01-01

For discrete classical Molecular dynamics (MD) obtained by the "Verlet" algorithm (VA) with the time increment $h$ there exists a shadow Hamiltonian $\tilde{H}$ with energy $\tilde{E}(h)$, for which the discrete particle positions lie on the analytic trajectories for $\tilde{H}$. $\tilde{E}(h)$ is employed to determine the relation with the corresponding energy $E$ for the analytic dynamics with $h=0$ and the zero-order estimate $E_0(h)$ of the energy for discrete dynamics, appearing in the literature for MD with VA. We derive a corresponding time-reversible VA algorithm for canonical dynamics...
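The shadow-Hamiltonian picture can be seen numerically: for a harmonic oscillator integrated with velocity Verlet, the measured energy oscillates in a band of width O(h²) about a conserved value instead of drifting. A minimal sketch (toy system, not the paper's MD setup):

```python
# Velocity Verlet on a harmonic oscillator (m = k = 1): the measured energy
# stays in an O(h^2) band around the conserved shadow energy E~(h).
def energy_band(h, steps=10000):
    x, v = 1.0, 0.0
    e_min = e_max = 0.5 * v * v + 0.5 * x * x
    for _ in range(steps):
        v += -x * h / 2.0        # half kick
        x += v * h               # drift
        v += -x * h / 2.0        # half kick with the updated force
        e = 0.5 * v * v + 0.5 * x * x
        e_min, e_max = min(e_min, e), max(e_max, e)
    return e_max - e_min

band_h = energy_band(0.10)
band_h_half = energy_band(0.05)   # roughly 4x smaller: second-order scaling
```

The band shrinking by about a factor of four when h is halved is the signature of the h² difference between E and the conserved shadow energy.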

  15. Zero Gravity Research Facility (Zero-G)

    Data.gov (United States)

    Federal Laboratory Consortium — The Zero Gravity Research Facility (Zero-G) provides a near weightless or microgravity environment for a duration of 5.18 seconds. This is accomplished by allowing...

  16. Decoherence and the quantum-to-classical transition

    CERN Document Server

    Schlosshauer, Maximilian

    2007-01-01

The ultimate introduction, textbook, and reference on decoherence and the quantum-to-classical transition. This detailed but accessible text describes the concepts, formalism, interpretation, and experimental observation of decoherence and explains how decoherence is responsible for the emergence, from the realm of quantum mechanics, of the classical world of our experience. Topics include: • Foundational problems at the quantum–classical border; • The role of the environment and entanglement; • Environment-induced loss of coherence and superselection; • Scattering-induced decoherence and spatial localization; • Master equations; • Decoherence models; • Experimental realization of "Schrödinger kittens" and their decoherence; • Quantum computing, quantum error correction, and decoherence-free subspaces; • Implications of decoherence for interpretations of quantum mechanics and for the "measurement problem"; • Decoherence in the brain. Written in a lucid and concise style that is accessible...

  17. A study on infinite number of integrals of motion in classically integrable system with boundary: Pt.1

    International Nuclear Information System (INIS)

    Chen Yixin; Luo Xudong

    1998-01-01

By the zero curvature condition in classically integrable systems, the generating functions for integrals of motion and the equations for solving the K± matrices are obtained for two-dimensional integrable systems on a finite interval with independent boundary conditions on each end. Classically integrable boundary conditions are found by solving the K± matrices. The authors develop a Hamiltonian method for classically integrable systems with independent boundary conditions on each end. The result can be applied to more integrable systems than those associated with E.K. Sklyanin's approach

  18. Dimensional discontinuity in quantum communication complexity at dimension seven

    Science.gov (United States)

    Tavakoli, Armin; Pawłowski, Marcin; Żukowski, Marek; Bourennane, Mohamed

    2017-02-01

    Entanglement-assisted classical communication and transmission of a quantum system are the two quantum resources for information processing. Many information tasks can be performed using either quantum resource. However, this equivalence is not always present since entanglement-assisted classical communication is sometimes known to be the better performing resource. Here, we show not only the opposite phenomenon, that there exist tasks for which transmission of a quantum system is a more powerful resource than entanglement-assisted classical communication, but also that such phenomena can have a surprisingly strong dependence on the dimension of Hilbert space. We introduce a family of communication complexity problems parametrized by the dimension of Hilbert space and study the performance of each quantum resource. Under an additional assumption of a linear strategy for the receiving party, we find that for low dimensions the two resources perform equally well, whereas for dimension seven and above the equivalence is suddenly broken and transmission of a quantum system becomes more powerful than entanglement-assisted classical communication. Moreover, we find that transmission of a quantum system may even outperform classical communication assisted by the stronger-than-quantum correlations obtained from the principle of macroscopic locality.

  19. Bifurcated states of the error-field-induced magnetic islands

    International Nuclear Information System (INIS)

    Zheng, L.-J.; Li, B.; Hazeltine, R.D.

    2008-01-01

    We find that the formation of the magnetic islands due to error fields shows bifurcation when neoclassical effects are included. The bifurcation, which follows from including bootstrap current terms in a description of island growth in the presence of error fields, provides a path to avoid the island-width pole in the classical description. The theory offers possible theoretical explanations for the recent DIII-D and JT-60 experimental observations concerning confinement deterioration with increasing error field

  20. Classical mechanics in non-commutative phase space

    International Nuclear Information System (INIS)

    Wei Gaofeng; Long Chaoyun; Long Zhengwen; Qin Shuijie

    2008-01-01

In this paper the laws of motion of classical particles have been investigated in a non-commutative phase space. The corresponding non-commutative relations contain not only spatial non-commutativity but also momentum non-commutativity. First, new Poisson brackets have been defined in non-commutative phase space. They contain corrections due to the non-commutativity of coordinates and momenta. On the basis of these new Poisson brackets, a new modified second law of Newton has been obtained. For two cases, the free particle and the harmonic oscillator, the equations of motion are derived on the basis of the modified second law of Newton and the linear transformation (Phys. Rev. D, 2005, 72: 025010). The consistency between both methods is demonstrated. It is shown that a free particle in commutative space is not a free particle with zero acceleration in the non-commutative phase space, but it remains a free particle with zero acceleration in non-commutative space if only the coordinates are non-commutative. (authors)

  1. Signed reward prediction errors drive declarative learning

    NARCIS (Netherlands)

    De Loof, E.; Ergo, K.; Naert, L.; Janssens, C.; Talsma, D.; van Opstal, F.; Verguts, T.

    2018-01-01

    Reward prediction errors (RPEs) are thought to drive learning. This has been established in procedural learning (e.g., classical and operant conditioning). However, empirical evidence on whether RPEs drive declarative learning–a quintessentially human form of learning–remains surprisingly absent. We

  2. Use of Earth's magnetic field for mitigating gyroscope errors regardless of magnetic perturbation.

    Science.gov (United States)

    Afzal, Muhammad Haris; Renaudin, Valérie; Lachapelle, Gérard

    2011-01-01

Most portable systems like smart-phones are equipped with low cost consumer grade sensors, making them useful as Pedestrian Navigation Systems (PNS). Measurements of these sensors are severely contaminated by errors caused by instrumentation and environmental issues, rendering the unaided navigation solution with these sensors of limited use. The overall navigation error budget associated with pedestrian navigation can be categorized into position/displacement errors and attitude/orientation errors. Most of the research is conducted for tackling and reducing the displacement errors, which either utilize Pedestrian Dead Reckoning (PDR) or special constraints like Zero velocity UPdaTes (ZUPT) and Zero Angular Rate Updates (ZARU). This article targets the orientation/attitude errors encountered in pedestrian navigation and develops a novel sensor fusion technique to utilize the Earth's magnetic field, even perturbed, for attitude and rate gyroscope error estimation in pedestrian navigation environments where it is assumed that Global Navigation Satellite System (GNSS) navigation is denied. As the Earth's magnetic field undergoes severe degradations in pedestrian navigation environments, a novel Quasi-Static magnetic Field (QSF) based attitude and angular rate error estimation technique is developed to effectively use magnetic measurements in highly perturbed environments. The QSF scheme is then used for generating the desired measurements for the proposed Extended Kalman Filter (EKF) based attitude estimator. Results indicate that the QSF measurements are capable of effectively estimating attitude and gyroscope errors, reducing the overall navigation error budget by over 80% in urban canyon environment.
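The core of any quasi-static field scheme is deciding when the magnetic field is trustworthy. A minimal sketch of that detection step (synthetic magnitudes and an assumed variance threshold; the paper's actual QSF scheme and EKF are more elaborate):

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic magnetometer magnitude trace (uT, hypothetical): an undisturbed
# stretch, a heavily perturbed stretch, then a shifted but again static level.
field = np.concatenate([
    np.full(200, 48.0),
    48.0 + rng.normal(0.0, 5.0, 200),
    np.full(200, 52.0),
]) + rng.normal(0.0, 0.1, 600)

def qsf_mask(mag, win=25, thresh=0.5):
    """Flag samples whose local standard deviation stays below a threshold."""
    mask = np.zeros(mag.size, dtype=bool)
    for i in range(mag.size - win):
        if np.std(mag[i:i + win]) < thresh:
            mask[i:i + win] = True
    return mask

mask = qsf_mask(field)
# Attitude/gyro-bias measurement updates would be applied only where mask is
# True; note the detector accepts the shifted-but-static third segment, which
# is the point of using quasi-static rather than undisturbed-field conditions.
```

This is why the approach works "regardless of magnetic perturbation": the field need not equal the Earth's reference field, only be locally stable.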

  3. My objective: zero contempt, not zero risk;Mon objectif n'est pas le risque zero mais le zero mepris

    Energy Technology Data Exchange (ETDEWEB)

    Delevoye, J.P. [Mediateur de la Republique, 75 - Paris (France)

    2009-12-15

With technology, scientific research and dissemination of knowledge, medical practice has improved, thereby achieving an efficient health care system. However, it would be appropriate to consider the human dimension of medicine as a key development. There are two major challenges in risk management: organizational management of risk on the one hand, and the management of the human relationship with the patient, especially when problems arise, on the other. It is therefore a question of achieving awareness and managing a culture change in the medical circle, i.e. moving from a culture of guilt to a culture of error, and finally relaxing the atmosphere of mutual distrust that exists between health professionals and patients. Indeed, the 'health professional-patient' relation has deteriorated over time due to poor risk management. An educational effort must be made to avoid frustration of the patient and contribute to zero contempt. On reflection, this means that the quality of a system is due to the individual quality of its members, the quality of methods and the organization in place. (author)

  4. Decoherence and the quantum-to-classical transition

    International Nuclear Information System (INIS)

    Schlosshauer, M.A.

    2007-01-01

    The ultimate introduction, textbook, and reference on decoherence and the quantum-to-classical transition. This detailed but accessible text describes the concepts, formalism, interpretation, and experimental observation of decoherence and explains how decoherence is responsible for the emergence, from the realm of quantum mechanics, of the classical world of our experience. Topics include: - Foundational problems at the quantum-classical border; - The role of the environment and entanglement; - Environment-induced loss of coherence and superselection; - Scattering-induced decoherence and spatial localization; - Master equations; - Decoherence models; - Experimental realization of ''Schroedinger's kittens'' and their decoherence; - Quantum computing, quantum error correction, and decoherence-free subspaces; - Implications of decoherence for interpretations of quantum mechanics and for the ''measurement problem''; - Decoherence in the brain. Written in a lucid and concise style that is accessible to all readers with a basic knowledge of quantum mechanics, this stimulating book tells the ''classical from quantum'' story in a comprehensive and coherent manner that brings together the foundational, technical, and experimental aspects of decoherence. It will be an indispensable resource for newcomers and experts alike. (orig.)

  5. Experimental bifurcation analysis—Continuation for noise-contaminated zero problems

    DEFF Research Database (Denmark)

    Schilder, Frank; Bureau, Emil; Santos, Ilmar Ferreira

    2015-01-01

Noise contaminated zero problems involve functions that cannot be evaluated directly, but only indirectly via observations. In addition, such observations are affected by a non-deterministic observation error (noise). We investigate the application of numerical bifurcation analysis for studying the solution set of such noise contaminated zero problems, which is highly relevant in the context of equation-free analysis (coarse grained analysis) and bifurcation analysis in experiments, and develop specialized algorithms to address challenges that arise due to the presence of noise. As a working example, we demonstrate and test our algorithms on a mechanical nonlinear oscillator experiment using control based continuation, which we used as a main application and test case for development of the Coco compatible Matlab toolbox Continex that implements our algorithms.

  6. Stochastic goal-oriented error estimation with memory

    Science.gov (United States)

    Ackmann, Jan; Marotzke, Jochem; Korn, Peter

    2017-11-01

We propose a stochastic dual-weighted error estimator for the viscous shallow-water equation with boundaries. For this purpose, previous work on memory-less stochastic dual-weighted error estimation is extended by incorporating memory effects. The memory is introduced by describing the local truncation error as a sum of time-correlated random variables. The random variables themselves represent the temporal fluctuations in local truncation errors and are estimated from high-resolution information at near-initial times. The resulting error estimator is evaluated experimentally in two classical ocean-type experiments, the Munk gyre and the flow around an island. In these experiments, the stochastic process is adapted locally to the respective dynamical flow regime. Our stochastic dual-weighted error estimator is shown to provide meaningful error bounds for a range of physically relevant goals. We prove, as well as show numerically, that our approach can be interpreted as a linearized stochastic-physics ensemble.

  7. Decay of the vortex tangle at zero temperature and quasiclassical turbulence

    International Nuclear Information System (INIS)

    Nemirovskii, Sergej K.

    2013-01-01

We review and analyze a series of works, experimental, numerical, and theoretical, dealing with the decay of quantum turbulence at zero temperature. Free decay of the vortex tangle is a key argument in favor of the idea that a chaotic set of quantum vortices can mimic classical turbulence, or at least reproduce many of its basic features. The corresponding topic is referred to as quasiclassical turbulence. Given the significance of the challenging problem of classical turbulence, the idea of studying it in terms of quantized lines is very important and may be regarded as a breakthrough. For this reason, the whole theory, together with the supporting experimental results and numerical simulations, should be carefully scrutinized. One of the main arguments supporting the idea of quasiclassical turbulence is the fact that the vortex tangle decays at zero temperature, when mutual friction is absent. Since all other possible mechanisms of dissipation of the vortex energy discussed in the literature are related to small scales, it is natural to suggest that a Kolmogorov cascade takes place, with energy flowing through the space of scales just as in classical turbulence. In the present work we discuss an alternative mechanism of decay of the vortex tangle, which is not associated with dissipation at small scales. This mechanism is a diffusive-like spreading of the vortex tangle due to evaporation of small vortex loops. We discuss a number of experiments and numerical simulations, considering them from the point of view of this alternative mechanism.

  8. Postcultural Communication?

    DEFF Research Database (Denmark)

    Jensen, Iben

    2015-01-01

When we as scholars use the concept of intercultural communication in its classic definition, as communication between people with different cultural backgrounds, we perpetuate the notion that national differences influence communication more than other differences; in doing so, ethnic minorities… The alternative is presented as a postcultural prism composed of practice theory (Schatzki 1996, Reckwitz 2002, Nicolini 2012, Kemmis 2012), intersectionality (Brah, Phoenix, Collins, Rahsack) and positioning theory (Harré & Langenhove 1998).

  9. Human errors in NPP operations

    International Nuclear Information System (INIS)

    Sheng Jufang

    1993-01-01

    Based on the operational experiences of nuclear power plants (NPPs), the importance of studying human performance problems is described. Statistical analysis of the significance and frequency of various root causes and error modes from a large number of human-error-related events demonstrates that defects in operation/maintenance procedures, workplace factors, communication, and training practices are the primary root causes, while omission, transposition, and quantitative mistakes are the most frequent error modes. Recommendations for domestic research on human performance problems in NPPs are suggested

  10. Communication: A new ab initio potential energy surface for HCl-H2O, diffusion Monte Carlo calculations of D0 and a delocalized zero-point wavefunction.

    Science.gov (United States)

    Mancini, John S; Bowman, Joel M

    2013-03-28

    We report a global, full-dimensional, ab initio potential energy surface describing the HCl-H2O dimer. The potential is constructed from a permutationally invariant fit, using Morse-like variables, to over 44,000 CCSD(T)-F12b∕aug-cc-pVTZ energies. The surface describes the complex and dissociated monomers with a total RMS fitting error of 24 cm(-1). The normal modes of the minima, low-energy saddle point and separated monomers, the double minimum isomerization pathway and electronic dissociation energy are accurately described by the surface. Rigorous quantum mechanical diffusion Monte Carlo (DMC) calculations are performed to determine the zero-point energy and wavefunction of the complex and the separated fragments. The calculated zero-point energies together with a De value calculated from CCSD(T) with a complete basis set extrapolation gives a D0 value of 1348 ± 3 cm(-1), in good agreement with the recent experimentally reported value of 1334 ± 10 cm(-1) [B. E. Casterline, A. K. Mollner, L. C. Ch'ng, and H. Reisler, J. Phys. Chem. A 114, 9774 (2010)]. Examination of the DMC wavefunction allows for confident characterization of the zero-point geometry to be dominant at the C(2v) double-well saddle point and not the C(s) global minimum. Additional support for the delocalized zero-point geometry is given by numerical solutions to the 1D Schrödinger equation along the imaginary-frequency out-of-plane bending mode, where the zero-point energy is calculated to be 52 cm(-1) above the isomerization barrier. The D0 of the fully deuterated isotopologue is calculated to be 1476 ± 3 cm(-1), which we hope will stand as a benchmark for future experimental work.

  11. Reduced phase error through optimized control of a superconducting qubit

    International Nuclear Information System (INIS)

    Lucero, Erik; Kelly, Julian; Bialczak, Radoslaw C.; Lenander, Mike; Mariantoni, Matteo; Neeley, Matthew; O'Connell, A. D.; Sank, Daniel; Wang, H.; Weides, Martin; Wenner, James; Cleland, A. N.; Martinis, John M.; Yamamoto, Tsuyoshi

    2010-01-01

    Minimizing phase and other errors in experimental quantum gates allows higher fidelity quantum processing. To quantify and correct for phase errors, in particular, we have developed an experimental metrology - amplified phase error (APE) pulses - that amplifies and helps identify phase errors in general multilevel qubit architectures. In order to correct for both phase and amplitude errors specific to virtual transitions and leakage outside of the qubit manifold, we implement 'half derivative', an experimental simplification of derivative reduction by adiabatic gate (DRAG) control theory. The phase errors are lowered by about a factor of five using this method to ∼1.6 deg. per gate, and can be tuned to zero. Leakage outside the qubit manifold, to the qubit |2⟩ state, is also reduced to ∼10⁻⁴ for 20% faster gates.

  12. Technical Note: Interference errors in infrared remote sounding of the atmosphere

    Directory of Open Access Journals (Sweden)

    R. Sussmann

    2007-07-01

    Full Text Available Classical error analysis in remote sounding distinguishes between four classes: "smoothing errors," "model parameter errors," "forward model errors," and "retrieval noise errors". For infrared sounding "interference errors", which, in general, cannot be described by these four terms, can be significant. Interference errors originate from spectral residuals due to "interfering species" whose spectral features overlap with the signatures of the target species. A general method for quantification of interference errors is presented, which covers all possible algorithmic implementations, i.e., fine-grid retrievals of the interfering species or coarse-grid retrievals, and cases where the interfering species are not retrieved. In classical retrieval setups interference errors can exceed smoothing errors and can vary by orders of magnitude due to state dependency. An optimum strategy is suggested which practically eliminates interference errors by systematically minimizing the regularization strength applied to joint profile retrieval of the interfering species. This leads to an interfering-species selective deweighting of the retrieval. Details of microwindow selection are no longer critical for this optimum retrieval and widened microwindows even lead to reduced overall (smoothing and interference errors. Since computational power will increase, more and more operational algorithms will be able to utilize this optimum strategy in the future. The findings of this paper can be applied to soundings of all infrared-active atmospheric species, which include more than two dozen different gases relevant to climate and ozone. This holds for all kinds of infrared remote sounding systems, i.e., retrievals from ground-based, balloon-borne, airborne, or satellite spectroradiometers.

  13. Error Mitigation for Short-Depth Quantum Circuits

    Science.gov (United States)

    Temme, Kristan; Bravyi, Sergey; Gambetta, Jay M.

    2017-11-01

    Two schemes are presented that mitigate the effect of errors and decoherence in short-depth quantum circuits. The size of the circuits for which these techniques can be applied is limited by the rate at which the errors in the computation are introduced. Near-term applications of early quantum devices, such as quantum simulations, rely on accurate estimates of expectation values to become relevant. Decoherence and gate errors lead to wrong estimates of the expectation values of observables used to evaluate the noisy circuit. The two schemes we discuss are deliberately simple and do not require additional qubit resources, so to be as practically relevant in current experiments as possible. The first method, extrapolation to the zero noise limit, subsequently cancels powers of the noise perturbations by an application of Richardson's deferred approach to the limit. The second method cancels errors by resampling randomized circuits according to a quasiprobability distribution.
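
    The first scheme, extrapolation to the zero-noise limit, can be sketched numerically. The toy code below (not the authors' implementation; the quadratic noise model and the amplification factors are illustrative assumptions) applies Richardson extrapolation to expectation values "measured" at stretched noise scales c = 1, 2, 3:

```python
def richardson_zero_noise(noise_scales, measured):
    """Extrapolate measured expectation values to the zero-noise limit by
    Lagrange interpolation of the points (c_j, E(c_j)) evaluated at c = 0."""
    coeffs = []
    for j, cj in enumerate(noise_scales):
        w = 1.0
        for k, ck in enumerate(noise_scales):
            if k != j:
                w *= (0.0 - ck) / (cj - ck)
        coeffs.append(w)
    return sum(w * e for w, e in zip(coeffs, measured))

# Toy noise model: E(c) = E0 + a*c + b*c^2 (an expansion in the noise rate).
E0, a, b = 0.80, -0.15, 0.03
E = lambda c: E0 + a * c + b * c**2
scales = [1.0, 2.0, 3.0]          # noise amplification factors
est = richardson_zero_noise(scales, [E(c) for c in scales])
print(est)  # recovers E0 = 0.80 up to rounding: the linear and quadratic noise terms cancel
```

    With three noise scales the extrapolation is exact for any quadratic noise model; in practice the residual error is set by the next order in the noise expansion and by shot noise in the measured expectation values.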

  14. Zero-point energy effects in anion solvation shells.

    Science.gov (United States)

    Habershon, Scott

    2014-05-21

    By comparing classical and quantum-mechanical (path-integral-based) molecular simulations of solvated halide anions X(-) [X = F, Cl, Br and I], we identify an ion-specific quantum contribution to anion-water hydrogen-bond dynamics; this effect has not been identified in previous simulation studies. For anions such as fluoride, which strongly bind water molecules in the first solvation shell, quantum simulations exhibit hydrogen-bond dynamics nearly 40% faster than the corresponding classical results, whereas those anions which form a weakly bound solvation shell, such as iodide, exhibit a quantum effect of around 10%. This observation can be rationalized by considering the different zero-point energy (ZPE) of the water vibrational modes in the first solvation shell; for strongly binding anions, the ZPE of bound water molecules is larger, giving rise to faster dynamics in quantum simulations. These results are consistent with experimental investigations of anion-bound water vibrational and reorientational motion.

  15. Stochastic semi-classical description of sub-barrier fusion reactions

    Directory of Open Access Journals (Sweden)

    Ayik Sakir

    2011-10-01

    Full Text Available A semi-classical method that incorporates the quantum effects of the low-lying vibrational modes is applied to fusion reactions. The quantum effect is simulated by stochastic sampling of initial zero-point fluctuations of the surface modes. In this model, dissipation of the relative energy into non-collective excitations of nuclei can be included straightforwardly. The inclusion of dissipation is shown to increase the agreement with the fusion cross section data of Ni isotopes.

  16. Zero-One Law for Regular Languages and Semigroups with Zero

    OpenAIRE

    Sin'ya, Ryoma

    2015-01-01

    A regular language has the zero-one law if its asymptotic density converges to either zero or one. We prove that the class of all zero-one languages is closed under Boolean operations and quotients. Moreover, we prove that a regular language has the zero-one law if and only if its syntactic monoid has a zero element. Our proof gives both algebraic and automata characterisation of the zero-one law for regular languages, and it leads to the following two corollaries: (i) There is an O(n log n) alg...
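
    The asymptotic density in question can be computed directly for small examples. A minimal sketch (the language and DFA are chosen for illustration, not taken from the paper): for the language over {a, b} of words containing at least one 'a', a transfer-matrix count of accepted words shows the density tending to one; consistently with the stated theorem, this language's syntactic monoid (two maps on the DFA states: the identity and a constant map) does contain a zero element.

```python
import numpy as np

# DFA for "contains at least one 'a'": state 0 = no 'a' seen, state 1 = 'a' seen.
# T[i][j] counts the letters taking state i to state j.
T = np.array([[1, 1],   # from 0: 'b' stays in 0, 'a' goes to 1
              [0, 2]])  # from 1: both letters stay in 1
start, accept = 0, [1]

def density(n):
    """Fraction of length-n words over {a, b} accepted by the DFA."""
    counts = np.linalg.matrix_power(T, n)[start]
    return sum(counts[q] for q in accept) / 2**n

print(density(5), density(20))  # density(5) = 0.96875; density(20) ≈ 0.999999 — the density tends to 1
```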

  17. Prediction Errors of Molecular Machine Learning Models Lower than Hybrid DFT Error.

    Science.gov (United States)

    Faber, Felix A; Hutchison, Luke; Huang, Bing; Gilmer, Justin; Schoenholz, Samuel S; Dahl, George E; Vinyals, Oriol; Kearnes, Steven; Riley, Patrick F; von Lilienfeld, O Anatole

    2017-11-14

    We investigate the impact of choosing regressors and molecular representations for the construction of fast machine learning (ML) models of 13 electronic ground-state properties of organic molecules. The performance of each regressor/representation/property combination is assessed using learning curves which report out-of-sample errors as a function of training set size with up to ∼118k distinct molecules. Molecular structures and properties at the hybrid density functional theory (DFT) level of theory come from the QM9 database [ Ramakrishnan et al. Sci. Data 2014 , 1 , 140022 ] and include enthalpies and free energies of atomization, HOMO/LUMO energies and gap, dipole moment, polarizability, zero point vibrational energy, heat capacity, and the highest fundamental vibrational frequency. Various molecular representations have been studied (Coulomb matrix, bag of bonds, BAML and ECFP4, molecular graphs (MG)), as well as newly developed distribution based variants including histograms of distances (HD), angles (HDA/MARAD), and dihedrals (HDAD). Regressors include linear models (Bayesian ridge regression (BR) and linear regression with elastic net regularization (EN)), random forest (RF), kernel ridge regression (KRR), and two types of neural networks, graph convolutions (GC) and gated graph networks (GG). Out-of sample errors are strongly dependent on the choice of representation and regressor and molecular property. Electronic properties are typically best accounted for by MG and GC, while energetic properties are better described by HDAD and KRR. The specific combinations with the lowest out-of-sample errors in the ∼118k training set size limit are (free) energies and enthalpies of atomization (HDAD/KRR), HOMO/LUMO eigenvalue and gap (MG/GC), dipole moment (MG/GC), static polarizability (MG/GG), zero point vibrational energy (HDAD/KRR), heat capacity at room temperature (HDAD/KRR), and highest fundamental vibrational frequency (BAML/RF). We present numerical
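
    The learning-curve methodology can be illustrated with a minimal kernel ridge regression sketch. This is not the paper's pipeline: the molecular representations are replaced by a random 2D descriptor and the target property by a synthetic smooth function, and all names are illustrative. It shows the key behaviour the paper reports, out-of-sample error shrinking as the training set grows:

```python
import numpy as np

rng = np.random.default_rng(0)

def krr_fit_predict(X, y, Xq, sigma=1.0, lam=1e-6):
    """Kernel ridge regression with a Gaussian kernel (a stand-in for the
    HDAD/KRR combination discussed above)."""
    def K(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma**2))
    alpha = np.linalg.solve(K(X, X) + lam * np.eye(len(X)), y)
    return K(Xq, X) @ alpha

# Toy "property": a smooth function of a 2D descriptor.
f = lambda X: np.sin(X[:, 0]) + 0.5 * np.cos(2 * X[:, 1])
Xtest = rng.uniform(-2, 2, (200, 2))

maes = {}
for n in (25, 100, 400):                # growing training set size
    Xtr = rng.uniform(-2, 2, (n, 2))
    maes[n] = np.abs(krr_fit_predict(Xtr, f(Xtr), Xtest) - f(Xtest)).mean()
    print(n, round(maes[n], 4))         # the learning curve: MAE decreases with n
```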

  18. A comparison of zero-order, first-order, and Monod biotransformation models

    International Nuclear Information System (INIS)

    Bekins, B.A.; Warren, E.; Godsy, E.M.

    1998-01-01

    Under some conditions, a first-order kinetic model is a poor representation of biodegradation in contaminated aquifers. Although it is well known that the assumption of first-order kinetics is valid only when substrate concentration, S, is much less than the half-saturation constant, K_S, this assumption is often made without verification of this condition. The authors present a formal error analysis showing that the relative error in the first-order approximation is S/K_S and in the zero-order approximation the error is K_S/S. They then examine the problems that arise when the first-order approximation is used outside the range for which it is valid. A series of numerical simulations comparing results of first- and zero-order rate approximations to Monod kinetics for a real data set illustrates that if concentrations observed in the field are higher than K_S, it may be better to model degradation using a zero-order rate expression. Compared with Monod kinetics, extrapolation of a first-order rate to lower concentrations under-predicts the biotransformation potential, while extrapolation to higher concentrations may grossly over-predict the transformation rate. A summary of solubilities and Monod parameters for aerobic benzene, toluene, and xylene (BTX) degradation shows that the a priori assumption of first-order degradation kinetics at sites contaminated with these compounds is not valid. In particular, out of six published values of K_S for toluene, only one is greater than 2 mg/L, indicating that when toluene is present in concentrations greater than about a part per million, the assumption of first-order kinetics may be invalid. Finally, the authors apply an existing analytical solution for steady-state one-dimensional advective transport with Monod degradation kinetics to a field data set
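
    The quoted error bounds are easy to verify numerically. A short sketch (the rmax and K_S values are illustrative, not from the paper) confirms that the relative error of the first-order approximation to the Monod rate is exactly S/K_S, and that of the zero-order approximation is K_S/S:

```python
def monod(S, rmax, Ks):
    """Monod rate: r = rmax * S / (Ks + S)."""
    return rmax * S / (Ks + S)

def rel_err_first_order(S, Ks, rmax=1.0):
    """Relative error of the first-order approximation r ≈ (rmax/Ks) * S."""
    return (rmax * S / Ks - monod(S, rmax, Ks)) / monod(S, rmax, Ks)

def rel_err_zero_order(S, Ks, rmax=1.0):
    """Relative error of the zero-order approximation r ≈ rmax."""
    return (rmax - monod(S, rmax, Ks)) / monod(S, rmax, Ks)

Ks = 2.0  # illustrative half-saturation constant, e.g. in mg/L
for S in (0.2, 2.0, 20.0):
    print(S, rel_err_first_order(S, Ks), rel_err_zero_order(S, Ks))
# first-order error equals S/Ks (small at low S), zero-order error equals Ks/S (small at high S)
```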

  19. Error of quantum-logic simulation via vector-soliton collisions

    International Nuclear Information System (INIS)

    Janutka, Andrzej

    2007-01-01

    In a concept of simulating the quantum logic with vector solitons by the author (Janutka 2006 J. Phys. A: Math. Gen. 39 12505), the soliton polarization is thought of as a state vector of a system of cebits (classical counterparts of qubits) switched via collisions with other solitons. The advantage of this method of information processing compared to schemes using linear optics is the possibility of the determination of the information-register state in a single measurement. Minimization of the information-processing error for different optical realizations of the logical systems is studied in the framework of a quantum analysis of soliton fluctuations. The problem is considered with relevance to general difficulties of the quantum error-correction schemes for the classical analogies of the quantum-information processing

  20. On the Amortized Complexity of Zero Knowledge Protocols for Multiplicative Relations

    DEFF Research Database (Denmark)

    Cramer, Ronald; Damgård, Ivan Bjerre; Pastro, Valerio

    2012-01-01

    We present a protocol that allows one to prove in zero-knowledge that committed values x_i, y_i, z_i, i = 1,…,l satisfy x_i y_i = z_i, where the values are taken from a finite field. For error probability 2^−u the size of the proof is linear in u and only logarithmic in l. Therefore, for any fixed error...... theoretically secure. Using this type of commitments we obtain, in the preprocessing model, a perfect zero-knowledge interactive proof for circuit satisfiability of circuit C where the proof has size O(|C|). We then generalize our basic scheme to a protocol that verifies l instances of an algebraic circuit D...... over K with v inputs, in the following sense: given committed values x_{i,j} and z_i, with i = 1,…,l and j = 1,…,v, the prover shows that D(x_{i,1},…,x_{i,v}) = z_i for i = 1,…,l. The interesting property is that the amortized complexity of verifying one circuit only depends on the multiplicative depth...

  1. Probing the non-classicality of temporal correlations

    Directory of Open Access Journals (Sweden)

    Martin Ringbauer

    2017-11-01

    Full Text Available Correlations between spacelike separated measurements on entangled quantum systems are stronger than any classical correlations and are at the heart of numerous quantum technologies. In practice, however, spacelike separation is often not guaranteed and we typically face situations where measurements have an underlying time order. Here we aim to provide a fair comparison of classical and quantum models of temporal correlations on a single particle, as well as timelike-separated correlations on multiple particles. We use a causal modeling approach to show, in theory and experiment, that quantum correlations outperform their classical counterpart when allowed equal, but limited communication resources. This provides a clearer picture of the role of quantum correlations in timelike separated scenarios, which play an important role in foundational and practical aspects of quantum information processing.

  2. Error probabilities in default Bayesian hypothesis testing

    NARCIS (Netherlands)

    Gu, Xin; Hoijtink, Herbert; Mulder, J.

    2016-01-01

    This paper investigates the classical type I and type II error probabilities of default Bayes factors for a Bayesian t test. Default Bayes factors quantify the relative evidence between the null hypothesis and the unrestricted alternative hypothesis without needing to specify prior distributions for

  3. Classical and quantum aspects of topological solitons (using numerical methods)

    International Nuclear Information System (INIS)

    Weidig, T.

    1999-08-01

    In Introduction, we review integrable and topological solitons. In Numerical Methods, we describe how to minimise functionals, time-integrate configurations and solve eigenvalue problems. We also present the Simulated Annealing scheme for minimisation in solitonic systems. In Classical Aspects, we analyse the effect of the potential term on the structure of minimal-energy solutions for any topological charge n. The simplest holomorphic baby Skyrme model has no known stable minimal-energy solution for n > 1. The one-vacuum baby Skyrme model possesses non-radially symmetric multi-skyrmions that look like 'skyrmion lattices' formed by skyrmions with n = 2. The two-vacua baby Skyrme model has radially symmetric multi-skyrmions. We implement Simulated Annealing and it works well for higher order terms. We find that the spatial part of the six-derivative term is zero. In Quantum Aspects, we find the first-order quantum mass correction for the φ⁴ kink using the semi-classical expansion. We derive a trace formula which gives the mass correction by using the eigenmodes and values of the soliton and vacuum perturbations. We show that the zero mode is the most important contribution. We compute the mass correction of the φ⁴ kink and Sine-Gordon numerically by solving the eigenvalue equations and substituting into the trace formula. (author)

  4. A model of quantum communication device for quantum hashing

    International Nuclear Information System (INIS)

    Vasiliev, A

    2016-01-01

    In this paper we consider a model of quantum communications between classical computers aided with quantum processors, connected by a classical and a quantum channel. This type of communications implying both classical and quantum messages with moderate use of quantum processing is implicitly used in many quantum protocols, such as quantum key distribution or quantum digital signature. We show that using the model of a quantum processor on multiatomic ensembles in the common QED cavity we can speed up quantum hashing, which can be the basis of quantum digital signature and other communication protocols. (paper)

  5. Error and its meaning in forensic science.

    Science.gov (United States)

    Christensen, Angi M; Crowder, Christian M; Ousley, Stephen D; Houck, Max M

    2014-01-01

    The discussion of "error" has gained momentum in forensic science in the wake of the Daubert guidelines and has intensified with the National Academy of Sciences' Report. Error has many different meanings, and too often, forensic practitioners themselves as well as the courts misunderstand scientific error and statistical error rates, often confusing them with practitioner error (or mistakes). Here, we present an overview of these concepts as they pertain to forensic science applications, discussing the difference between practitioner error (including mistakes), instrument error, statistical error, and method error. We urge forensic practitioners to ensure that potential sources of error and method limitations are understood and clearly communicated and advocate that the legal community be informed regarding the differences between interobserver errors, uncertainty, variation, and mistakes. © 2013 American Academy of Forensic Sciences.

  6. From zero to infinity what makes numbers interesting

    CERN Document Server

    Reid, Constance

    2006-01-01

    From Zero to Infinity is a combination of number lore, number history, and sparkling descriptions of the simply stated but exceedingly difficult problems posed by the most ordinary numbers. It first appeared in 1955 and has been kept in print continuously ever since. With the fifth edition this classic has been updated to report on advances in number theory over the last 50 years, including the proof of Fermat's Last Theorem. Deceptively simple in style and structure, it is a book to which the reader will return again and again, gaining greater understanding and satisfaction with each reading

  7. Boosting work characteristics and overall heat engine performance via shortcuts to adiabaticity: quantum and classical systems

    OpenAIRE

    Deng, Jiawen; Wang, Qing-hai; Liu, Zhihao; Hanggi, Peter; Gong, Jiangbin

    2013-01-01

    Under a general framework, shortcuts to adiabatic processes are shown to be possible in classical systems. We then study the distribution function of the work done on a small system initially prepared at thermal equilibrium. It is found that the work fluctuations can be significantly reduced via shortcuts to adiabatic processes. For example, in the classical case probabilities of having very large or almost zero work values are suppressed. In the quantum case negative work may be totally remo...

  8. Location-dependent communications using quantum entanglement

    International Nuclear Information System (INIS)

    Malaney, Robert A.

    2010-01-01

    The ability to unconditionally verify the location of a communication receiver would lead to a wide range of new security paradigms. However, it is known that unconditional location verification in classical communication systems is impossible. In this work we show how unconditional location verification can be achieved with the use of quantum communication channels. Our verification remains unconditional irrespective of the number of receivers, computational capacity, or any other physical resource held by an adversary. Quantum location verification represents an application of quantum entanglement that delivers a feat not possible in the classical-only channel. It gives us the ability to deliver real-time communications viable only at specified geographical coordinates.

  9. Performance Analysis of Communications under Energy Harvesting Constraints with noisy CSI

    KAUST Repository

    Znaidi, Mohamed Ridha Ali

    2016-01-06

    In energy harvesting communications, the transmitters have to adapt transmission to availability of energy harvested during the course of communication. The performance of the transmission depends on the channel conditions which vary randomly due to environmental changes. In this work, we consider the problem of power allocation taking into account the energy arrivals over time and the degree of channel state information (CSI) available at the transmitter, to maximize the throughput. Differently from previous work, the CSI at the transmitter is not perfect and may include estimation errors. We solve this problem with respect to the energy harvesting constraints. We determine the optimal power in the case where the channel is assumed to be perfectly known at the receiver. Also, we obtain the power policy when the transmitter has no CSI. Furthermore, we analyze the asymptotic average throughput in a system where the average recharge rate goes asymptotically to zero and when it is very high.

  10. Performance limits of energy harvesting communications under imperfect channel state information

    KAUST Repository

    Zenaidi, Mohamed Ridha

    2016-07-26

    In energy harvesting communications, the transmitters have to adapt transmission to availability of energy harvested during the course of communication. The performance of the transmission depends on the channel conditions which vary randomly due to mobility and environmental changes. In this paper, we consider the problem of power allocation taking into account the energy arrivals over time and the degree of channel state information (CSI) available at the transmitter, in order to maximize the throughput. Differently from previous work, the CSI at the transmitter is not perfect and may include estimation errors. We solve this problem with respect to the causality and energy storage constraints. We determine the optimal offline policy in the case where the channel is assumed to be perfectly known at the receiver. Also, we obtain the power policy when the transmitter has no CSI. Furthermore, we analyze the asymptotic average throughput in a system where the average recharge rate goes asymptotically to zero. © 2016 IEEE.
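
    For intuition about throughput-maximizing power allocation with perfect CSI, the non-causal version of the problem reduces to classic water-filling over the channel states. The sketch below is the standard textbook construction, not the paper's energy-arrival-constrained policy, and the channel gains are made up:

```python
import numpy as np

def water_filling(gains, P, iters=100):
    """Maximize sum(log2(1 + g_i * p_i)) subject to sum(p_i) <= P by choosing
    p_i = max(0, w - 1/g_i), with the water level w found by bisection."""
    inv = 1.0 / np.asarray(gains, dtype=float)
    lo, hi = 0.0, P + inv.max()          # w = hi always allocates at least P
    for _ in range(iters):
        w = 0.5 * (lo + hi)
        if np.maximum(0.0, w - inv).sum() > P:
            hi = w
        else:
            lo = w
    p = np.maximum(0.0, w - inv)
    return p, np.log2(1.0 + np.asarray(gains) * p).sum()

p, rate = water_filling([2.0, 1.0, 0.25], P=3.0)
print(p, rate)  # optimal split is p = [1.75, 1.25, 0]: the weakest channel gets nothing
```

    The directional (causal) variant studied in the energy harvesting literature adds the constraint that power spent before each energy arrival cannot exceed the energy harvested so far, which only lets "water" flow forward in time.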

  11. Modelling noise in second generation sequencing forensic genetics STR data using a one-inflated (zero-truncated) negative binomial model

    DEFF Research Database (Denmark)

    Vilsen, Søren B.; Tvedebrink, Torben; Mogensen, Helle Smidt

    2015-01-01

    We present a model fitting the distribution of non-systematic errors in STR second generation sequencing, SGS, analysis. The model fits the distribution of non-systematic errors, i.e. the noise, using a one-inflated, zero-truncated, negative binomial model. The model is a two component model...
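
    The model's probability mass function is straightforward to write down under common conventions (parameter names here are assumptions; the paper's exact parametrization may differ): a mass π is placed at one, and the remaining 1 − π follows a negative binomial truncated to exclude zero.

```python
from math import lgamma, exp, log

def nb_pmf(k, r, p):
    """Negative binomial pmf with (possibly non-integer) size r and success
    probability p: P(K = k) = Gamma(k+r) / (k! Gamma(r)) * p^r * (1-p)^k."""
    return exp(lgamma(k + r) - lgamma(k + 1) - lgamma(r)
               + r * log(p) + k * log(1 - p))

def oiztnb_pmf(k, r, p, pi):
    """One-inflated, zero-truncated negative binomial, supported on k = 1, 2, ..."""
    if k < 1:
        return 0.0
    trunc = nb_pmf(k, r, p) / (1.0 - nb_pmf(0, r, p))  # nb_pmf(0, r, p) = p^r
    return pi * (1.0 if k == 1 else 0.0) + (1.0 - pi) * trunc

# Sanity check: the probabilities over k = 1, 2, ... sum to one.
total = sum(oiztnb_pmf(k, r=0.6, p=0.7, pi=0.2) for k in range(1, 200))
print(round(total, 6))  # -> 1.0
```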

  12. Characteristics of pediatric chemotherapy medication errors in a national error reporting database.

    Science.gov (United States)

    Rinke, Michael L; Shore, Andrew D; Morlock, Laura; Hicks, Rodney W; Miller, Marlene R

    2007-07-01

    Little is known regarding chemotherapy medication errors in pediatrics despite studies suggesting high rates of overall pediatric medication errors. In this study, the authors examined patterns in pediatric chemotherapy errors. The authors queried the United States Pharmacopeia MEDMARX database, a national, voluntary, Internet-accessible error reporting system, for all error reports from 1999 through 2004 that involved chemotherapy medications and pediatric patients. Of these error reports, 85% reached the patient, and 15.6% required additional patient monitoring or therapeutic intervention. Forty-eight percent of errors originated in the administering phase of medication delivery, and 30% originated in the drug-dispensing phase. Of the 387 medications cited, 39.5% were antimetabolites, 14.0% were alkylating agents, 9.3% were anthracyclines, and 9.3% were topoisomerase inhibitors. The most commonly involved chemotherapeutic agents were methotrexate (15.3%), cytarabine (12.1%), and etoposide (8.3%). The most common error types were improper dose/quantity (22.9% of 327 cited error types), wrong time (22.6%), omission error (14.1%), and wrong administration technique/wrong route (12.2%). The most common error causes were performance deficit (41.3% of 547 cited error causes), equipment and medication delivery devices (12.4%), communication (8.8%), knowledge deficit (6.8%), and written order errors (5.5%). Four of the 5 most serious errors occurred at community hospitals. Pediatric chemotherapy errors often reached the patient, potentially were harmful, and differed in quality between outpatient and inpatient areas. This study indicated which chemotherapeutic agents most often were involved in errors and that administering errors were common. Investigation is needed regarding targeted medication administration safeguards for these high-risk medications. Copyright (c) 2007 American Cancer Society.

  13. Systematic Error Study for ALICE charged-jet v2 Measurement

    Energy Technology Data Exchange (ETDEWEB)

    Heinz, M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Soltz, R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-07-18

    We study the treatment of systematic errors in the determination of v2 for charged jets in √s_NN = 2.76 TeV Pb-Pb collisions by the ALICE Collaboration. Working with the reported values and errors for the 0-5% centrality data we evaluate the χ² according to the formulas given for the statistical and systematic errors, where the latter are separated into correlated and shape contributions. We reproduce both the χ² and p-values relative to a null (zero) result. We then re-cast the systematic errors into an equivalent covariance matrix and obtain identical results, demonstrating that the two methods are equivalent.
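
    The recasting step can be sketched as follows, with hypothetical numbers in place of the ALICE data: a fully correlated systematic uncertainty s contributes a rank-one block s_i s_j to the covariance matrix, the statistical and point-to-point ("shape") parts contribute diagonally, and the chi-square against the null result is then a single linear solve.

```python
import numpy as np

# Hypothetical v2 points with statistical, uncorrelated-systematic and fully
# correlated systematic uncertainties (illustrative values, not ALICE's).
v2   = np.array([0.05, 0.06, 0.04])
stat = np.array([0.010, 0.010, 0.020])
usys = np.array([0.005, 0.005, 0.005])   # point-to-point ("shape") part
csys = np.array([0.008, 0.009, 0.006])   # fully correlated part

# Equivalent covariance matrix: diagonal stat + uncorrelated terms, plus a
# rank-one block s_i * s_j for the correlated component.
C = np.diag(stat**2 + usys**2) + np.outer(csys, csys)

chi2 = v2 @ np.linalg.solve(C, v2)       # chi-square relative to a null (zero) result
print(round(float(chi2), 3))
```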

  14. Quantum secret sharing with classical Bobs

    International Nuclear Information System (INIS)

    Li Lvzhou; Qiu Daowen; Mateus, Paulo

    2013-01-01

    Boyer et al (2007 Phys. Rev. Lett. 99 140501) proposed a novel idea of semi-quantum key distribution, where a key can be securely distributed between Alice, who can perform any quantum operation, and Bob, who is classical. Extending the ‘semi-quantum’ idea to other tasks of quantum information processing is of interest and worth considering. In this paper, we consider the issue of semi-quantum secret sharing, where a quantum participant Alice can share a secret key with two classical participants, Bobs. After analyzing the existing protocol, we propose a new protocol of semi-quantum secret sharing. Our protocol is more realistic, since it utilizes product states instead of entangled states. We prove that any attempt of an adversary to obtain information necessarily induces some errors that the legitimate users could notice. (paper)

  15. Multidimensional zero-correlation attacks on lightweight block cipher HIGHT: Improved cryptanalysis of an ISO standard

    DEFF Research Database (Denmark)

    Wen, Long; Wang, Meiqin; Bogdanov, Andrey

    2014-01-01

    results on HIGHT, its security evaluation against the recent zero-correlation linear attacks is still lacking. At the same time, the Feistel-type structure of HIGHT suggests that it might be susceptible to this type of cryptanalysis. In this paper, we aim to bridge this gap. We identify zero......-correlation linear approximations over 16 rounds of HIGHT. Based upon those, we attack 27-round HIGHT (round 4 to round 30) with improved time complexity and practical memory requirements. This attack of ours is the best result on HIGHT to date in the classical single-key setting. We also provide the first attack...

  16. Quantum secret sharing based on quantum error-correcting codes

    International Nuclear Information System (INIS)

    Zhang Zu-Rong; Liu Wei-Tao; Li Cheng-Zu

    2011-01-01

    Quantum secret sharing (QSS) is a procedure of sharing classical information or quantum information by using quantum states. This paper presents how to use a [2k − 1, 1, k] quantum error-correcting code (QECC) to implement a quantum (k, 2k − 1) threshold scheme. It also takes advantage of classical enhancement of the [2k − 1, 1, k] QECC to establish a QSS scheme which can share classical information and quantum information simultaneously. Because information is encoded into QECC, these schemes can prevent intercept-resend attacks and be implemented on some noisy channels. (general)

  17. On the performance of dual-hop mixed RF/FSO wireless communication system in urban area over aggregated exponentiated Weibull fading channels with pointing errors

    Science.gov (United States)

    Wang, Yue; Wang, Ping; Liu, Xiaoxia; Cao, Tian

    2018-03-01

    The performance of a decode-and-forward dual-hop mixed radio frequency / free-space optical (RF/FSO) system in an urban area is studied. The RF link is modeled by the Nakagami-m distribution and the FSO link by composite exponentiated Weibull (EW) fading channels with nonzero boresight pointing errors (NBPE). For comparison, average bit error rate (ABER) results without pointing errors (PE) and with zero boresight pointing errors (ZBPE) are also provided. The closed-form ABER expression for the RF link is derived with the help of the hypergeometric function, and that for the FSO link is obtained using Meijer's G function and generalized Gauss-Laguerre quadrature. The end-to-end ABERs with binary phase shift keying modulation are then obtained from the computed ABER results for the RF and FSO links. The end-to-end ABER performance is further analyzed for different Nakagami-m parameters, turbulence strengths, receiver aperture sizes and boresight displacements. The results show that with ZBPE and NBPE considered, the FSO link suffers severe ABER degradation and becomes the dominant limitation of the mixed RF/FSO system in an urban area. However, aperture averaging can bring significant ABER improvement to this system. Monte Carlo simulation is provided to confirm the validity of the analytical ABER expressions.

  18. Quantum communication under channel uncertainty

    International Nuclear Information System (INIS)

    Noetzel, Janis Christian Gregor

    2012-01-01

    This work contains results concerning transmission of entanglement and subspaces as well as generation of entanglement in the limit of arbitrarily many uses of compound and arbitrarily varying quantum channels (CQC, AVQC). In both cases, the channel is described by a set of memoryless channels. Only forward communication between one sender and one receiver is allowed. A code is said to be "good" only if it is "good" for every channel in the set. Both settings describe a scenario in which sender and receiver have only limited channel knowledge. For different amounts of information about the channel available to sender or receiver, coding theorems are proven for the CQC. For the AVQC, both deterministic and randomised coding schemes are considered. Coding theorems are proven, as well as a quantum analogue of the Ahlswede dichotomy. The connection to zero-error capacities of stationary memoryless quantum channels is investigated. The notion of symmetrisability is defined and used for both classes of channels.

  19. Semiclassical approach to mesoscopic systems classical trajectory correlations and wave interference

    CERN Document Server

    Waltner, Daniel

    2012-01-01

    This volume describes mesoscopic systems with classically chaotic dynamics using semiclassical methods which combine elements of classical dynamics and quantum interference effects. Experiments and numerical studies show that Random Matrix Theory (RMT) explains physical properties of these systems well. This was conjectured more than 25 years ago by Bohigas, Giannoni and Schmit for the spectral properties. Since then, it has been a challenge to understand this connection analytically.  The author offers his readers a clearly-written and up-to-date treatment of the topics covered. He extends previous semiclassical approaches that treated spectral and conductance properties. He shows that RMT results can in general only be obtained semiclassically when taking into account classical configurations not considered previously, for example those containing multiply traversed periodic orbits. Furthermore, semiclassics is capable of describing effects beyond RMT. In this context he studies the effect of a non-zero Eh...

  20. Psychology of communications

    International Nuclear Information System (INIS)

    Hunns, D.M.

    1980-01-01

    A theory is proposed relating to the structuring of mental models, and this theory is used to account for a number of human error mechanisms. Communication amongst operators and the systems around them is seen as a vital factor in the area of human error, and a technique, communications analysis, is proposed as one approach to systematically predicting the ways in which the actual system state and the operators' perceptions of that state can get out of step and lead to catastrophe. To be most effective, it is expected that the analyst would apply communications analysis with an interactive computer system. Of particular importance is the ability to trace the operator-system communication scenarios in various abnormal system configurations. (orig.)

  1. Atomic collision experiments at the border line between classical and quantum mechanics

    International Nuclear Information System (INIS)

    Aquilanti, V.

    1984-01-01

    In order to understand atomic and molecular interactions, one has to learn how to live with the wave-particle duality, considering classical nuclei and quantum electrons. A better way, illustrated by reference to experiments, is by quasiclassical (or semi-classical) mechanics, governing a world with a quasi-zero Planck's constant. One thus explains optical analogs (shadows, rainbows, glories) as interference effects in atomic collisions. Reference is also made to Wheeler's 'black bird' on the inversion problem from spectroscopy and scattering to molecular structure. The paper concludes by outlining a journey in hyperspace to escape from Einstein's torus and to find interferences and resonances in three-body scattering and reactions. (Auth.)

  2. Local gauge invariant Lagrangeans in classical field theories

    International Nuclear Information System (INIS)

    Grigore, D.R.

    1982-07-01

    We investigate the most general local gauge invariant Lagrangean in the framework of classical field theory. We essentially rederive Utiyama's result, with a slight generalization. Our proof makes clear the importance of the so-called current conditions, i.e. the requirement that the Noether currents be different from zero. This condition is important both in the general motivation for the introduction of the Yang-Mills fields and for the actual proof. Some comments are made about the basic mathematical structure of the problem - the gauge group. (author)

  3. A theory of human error

    Science.gov (United States)

    Mcruer, D. T.; Clement, W. F.; Allen, R. W.

    1981-01-01

    Human errors tend to be treated in terms of clinical and anecdotal descriptions, from which remedial measures are difficult to derive. Correction of the sources of human error requires an attempt to reconstruct underlying and contributing causes of error from the circumstantial causes cited in official investigative reports. A comprehensive analytical theory of the cause-effect relationships governing propagation of human error is indispensable to a reconstruction of the underlying and contributing causes. A validated analytical theory of the input-output behavior of human operators involving manual control, communication, supervisory, and monitoring tasks which are relevant to aviation, maritime, automotive, and process control operations is highlighted. This theory of behavior, both appropriate and inappropriate, provides an insightful basis for investigating, classifying, and quantifying the needed cause-effect relationships governing propagation of human error.

  4. Improving Pathologists' Communication Skills.

    Science.gov (United States)

    Dintzis, Suzanne

    2016-08-01

    The 2015 Institute of Medicine report on diagnostic error has placed a national spotlight on the importance of improving communication among clinicians and between clinicians and patients [1]. The report emphasizes the critical role that communication plays in patient safety and outlines ways that pathologists can support this process. Despite recognition of communication as an essential element in patient care, pathologists currently undergo limited (if any) formal training in communication skills. To address this gap, we at the University of Washington Medical Center developed communication training with the goal of establishing best practice procedures for effective pathology communication. The course includes lectures, role playing, and simulated clinician-pathologist interactions for training and evaluation of pathology communication performance. Providing communication training can help create reliable communication pathways that anticipate and address potential barriers and errors before they happen. © 2016 American Medical Association. All Rights Reserved.

  5. Reliability assessment of fiber optic communication lines depending on external factors and diagnostic errors

    Science.gov (United States)

    Bogachkov, I. V.; Lutchenko, S. S.

    2018-05-01

    The article deals with the method for the assessment of the fiber optic communication lines (FOCL) reliability taking into account the effect of the optical fiber tension, the temperature influence and the built-in diagnostic equipment errors of the first kind. The reliability is assessed in terms of the availability factor using the theory of Markov chains and probabilistic mathematical modeling. To obtain a mathematical model, the following steps are performed: the FOCL state is defined and validated; the state graph and system transitions are described; the system transition of states that occur at a certain point is specified; the real and the observed time of system presence in the considered states are identified. According to the permissible value of the availability factor, it is possible to determine the limiting frequency of FOCL maintenance.
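    The availability-factor calculation that anchors this kind of Markov-chain reliability model reduces, in its simplest form, to the classic two-state repairable system. The sketch below shows only that core; the rate names are illustrative, and the paper's additional states (fiber tension, temperature influence, first-kind diagnostic errors) are not modeled:

```python
def steady_state_availability(failure_rate, repair_rate):
    """Availability factor of the simplest two-state (up/down) Markov repair
    model: A = mu / (lambda + mu), where lambda is the failure rate and mu the
    repair rate. A sketch of the core idea only; the paper's multi-state FOCL
    model is not reproduced here."""
    return repair_rate / (failure_rate + repair_rate)
```

For example, a line failing once per 100 hours and repaired at 1 repair/hour has a steady-state availability of 1.0 / 1.01, i.e. about 0.99.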

  6. Errors in clinical laboratories or errors in laboratory medicine?

    Science.gov (United States)

    Plebani, Mario

    2006-01-01

    Laboratory testing is a highly complex process and, although laboratory services are relatively safe, they are not as safe as they could or should be. Clinical laboratories have long focused their attention on quality control methods and quality assessment programs dealing with analytical aspects of testing. However, a growing body of evidence accumulated in recent decades demonstrates that quality in clinical laboratories cannot be assured by merely focusing on purely analytical aspects. The more recent surveys on errors in laboratory medicine conclude that in the delivery of laboratory testing, mistakes occur more frequently before (pre-analytical) and after (post-analytical) the test has been performed. Most errors are due to pre-analytical factors (46-68.2% of total errors), while a high error rate (18.5-47% of total errors) has also been found in the post-analytical phase. Errors due to analytical problems have been significantly reduced over time, but there is evidence that, particularly for immunoassays, interference may have a serious impact on patients. A description of the most frequent and risky pre-, intra- and post-analytical errors and advice on practical steps for measuring and reducing the risk of errors is therefore given in the present paper. Many mistakes in the Total Testing Process are called "laboratory errors", although these may be due to poor communication, action taken by others involved in the testing process (e.g., physicians, nurses and phlebotomists), or poorly designed processes, all of which are beyond the laboratory's control. Likewise, there is evidence that laboratory information is only partially utilized. A recent document from the International Organization for Standardization (ISO) recommends a new, broader definition of the term "laboratory error" and a classification of errors according to different criteria. In a modern approach to total quality, centered on patients' needs and satisfaction, the risk of errors and mistakes…

  7. Is Zero-Based Budgeting Different from Planning--Programming--Budgeting Systems?

    Science.gov (United States)

    Hentschke, Guilbert C.

    1977-01-01

    Successful adoption of zero-base budgeting (ZBB) will be greater than that of planning-programming-budgeting systems (PPBS) because perceived problems inherent in PPBS are largely missing in ZBB; ZBB appears to fit current school district budgeting behavior; and ZBB seems to improve communication about the need for budget reform. (Author/IRT)

  8. Errors in dual x-ray beam differential absorptiometry

    International Nuclear Information System (INIS)

    Bolin, F.; Preuss, L.; Gilbert, K.; Bugenis, C.

    1977-01-01

    Errors pertinent to the dual beam absorptiometry system have been studied and five areas are treated in detail: (1) scattering, where a computer analysis of multiple scattering shows little error due to this effect; (2) geometrical configuration effects, where the slope of the sample is shown to influence the accuracy of the measurement; (3) Poisson variations, where it is shown that a simultaneous reduction can be obtained in both dosage and statistical error; (4) absorption coefficients, where the variation among absorption coefficient compilations is shown to have a critical effect on the interpretation of experimental data; and (5) filtering, where the need for filters on dual beam systems using a characteristic x-ray output is shown. A zero filter system is outlined.

  9. Zero-inflated Poisson model based likelihood ratio test for drug safety signal detection.

    Science.gov (United States)

    Huang, Lan; Zheng, Dan; Zalkikar, Jyoti; Tiwari, Ram

    2017-02-01

    In recent decades, numerous methods have been developed for data mining of large drug safety databases, such as the Food and Drug Administration's (FDA's) Adverse Event Reporting System, where data matrices are formed with drugs as columns and adverse events as rows. Often, a large number of cells in these data matrices have zero counts. Some of them are "true zeros", indicating that the drug-adverse event pairs cannot occur; these are distinguished from the remaining zero counts, which simply indicate that the drug-adverse event pairs have not occurred yet or have not been reported yet. In this paper, a zero-inflated Poisson model based likelihood ratio test method is proposed to identify drug-adverse event pairs that have disproportionately high reporting rates, also called signals. The maximum likelihood estimates of the model parameters are obtained using the expectation-maximization algorithm. The test is also modified to handle stratified analyses for binary and categorical covariates (e.g. gender and age) in the data. The proposed method is shown to asymptotically control the type I error and false discovery rate, and its finite sample performance for signal detection is evaluated through a simulation study. The simulation results show that the zero-inflated Poisson model based likelihood ratio test performs similarly to the Poisson model based likelihood ratio test when the estimated percentage of true zeros in the database is small. Both methods are applied to six selected drugs, from the 2006 to 2011 Adverse Event Reporting System database, with varying percentages of observed zero-count cells.
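    As a rough sketch of the model underlying such a test, the following fits a zero-inflated Poisson by expectation-maximization and evaluates its log-likelihood, from which a likelihood ratio statistic against a plain Poisson fit could be formed. This is an illustration under simplified assumptions, not the authors' stratified implementation; the function names are ours:

```python
import math

def fit_zip(counts, iters=200):
    """Fit a zero-inflated Poisson (ZIP) model by EM (a minimal sketch).
    Model: P(0) = pi + (1 - pi) * exp(-lam); P(k) = (1 - pi) * Poisson(k; lam)."""
    n, total = len(counts), sum(counts)
    n_zero = sum(1 for c in counts if c == 0)
    pi, lam = 0.5, max(total / n, 1e-6)
    for _ in range(iters):
        # E-step: posterior probability that an observed zero is a structural zero
        z = pi / (pi + (1 - pi) * math.exp(-lam))
        true_zeros = z * n_zero
        # M-step: update the mixing weight and the Poisson mean
        pi = true_zeros / n
        lam = total / max(n - true_zeros, 1e-9)
    return pi, lam

def zip_loglik(counts, pi, lam):
    """ZIP log-likelihood, from which a likelihood ratio statistic can be formed."""
    ll = 0.0
    for c in counts:
        if c == 0:
            ll += math.log(pi + (1 - pi) * math.exp(-lam))
        else:
            ll += math.log(1 - pi) + c * math.log(lam) - lam - math.lgamma(c + 1)
    return ll
```

Because the plain Poisson model is the ZIP model with pi = 0, the fitted ZIP log-likelihood is never worse, and the gap between the two is the basis of the likelihood ratio test.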

  10. Errors in veterinary practice: preliminary lessons for building better veterinary teams.

    Science.gov (United States)

    Kinnison, T; Guile, D; May, S A

    2015-11-14

    Case studies in two typical UK veterinary practices were undertaken to explore teamwork, including interprofessional working. Each study involved one week of whole team observation based on practice locations (reception, operating theatre), one week of shadowing six focus individuals (veterinary surgeons, veterinary nurses and administrators) and a final week consisting of semistructured interviews regarding teamwork. Errors emerged as a finding of the study. The definition of errors was inclusive, pertaining to inputs or omitted actions with potential adverse outcomes for patients, clients or the practice. The 40 identified instances could be grouped into clinical errors (dosing/drugs, surgical preparation, lack of follow-up), lost item errors, and most frequently, communication errors (records, procedures, missing face-to-face communication, mistakes within face-to-face communication). The qualitative nature of the study allowed the underlying cause of the errors to be explored. In addition to some individual mistakes, system faults were identified as a major cause of errors. Observed examples and interviews demonstrated several challenges to interprofessional teamworking which may cause errors, including: lack of time, part-time staff leading to frequent handovers, branch differences and individual veterinary surgeon work preferences. Lessons are drawn for building better veterinary teams and implications for Disciplinary Proceedings considered. British Veterinary Association.

  11. Reducing number entry errors: solving a widespread, serious problem.

    Science.gov (United States)

    Thimbleby, Harold; Cairns, Paul

    2010-10-06

    Number entry is ubiquitous: it is required in many fields including science, healthcare, education, government, mathematics and finance. People entering numbers can be expected to make errors, but shockingly few systems make any effort to detect, block or otherwise manage errors. Worse, errors may be ignored but processed in arbitrary ways, with unintended results. A standard class of error (defined in the paper) is an 'out by 10 error', which is easily made by miskeying a decimal point or a zero. In safety-critical domains, such as drug delivery, out by 10 errors generally have adverse consequences. Here, we expose the extent of the problem of numeric errors in a very wide range of systems. An analysis of better error management is presented: under reasonable assumptions, we show that the probability of out by 10 errors can be halved by better user interface design. We provide a demonstration user interface to show that the approach is practical. 'To kill an error is as good a service as, and sometimes even better than, the establishing of a new truth or fact.' (Charles Darwin 1879 [2008], p. 229).
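    The "detect and block" strategy argued for above can be illustrated with a minimal sketch. Both the strict parser and the range-based out-by-10 flag below are hypothetical illustrations, not the paper's demonstration interface:

```python
import re

def parse_number_strict(s):
    """Accept only a plain unsigned decimal; reject anything malformed instead
    of silently reinterpreting it (the 'block errors' strategy, illustrative)."""
    if not re.fullmatch(r"\d+(\.\d+)?", s):
        raise ValueError(f"malformed number: {s!r}")
    return float(s)

def out_by_10_suspect(value, low, high):
    """Flag a value that lies outside an expected range but would fall inside
    it after a single factor-of-10 shift -- the classic slip caused by a
    miskeyed decimal point or an extra zero. The expected range is a
    hypothetical safeguard, not part of the paper's analysis."""
    in_range = low <= value <= high
    shifted_in_range = any(low <= value * f <= high for f in (10, 0.1))
    return not in_range and shifted_in_range
```

For a drug whose plausible dose is 1-10 units, an entry of 50 or 0.5 would be flagged for confirmation, while 5 passes and 500 is rejected outright as unexplainable by one slip.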

  12. An Error Estimate for Symplectic Euler Approximation of Optimal Control Problems

    KAUST Repository

    Karlsson, Jesper; Larsson, Stig; Sandberg, Mattias; Szepessy, Anders; Tempone, Raul

    2015-01-01

    This work focuses on numerical solutions of optimal control problems. A time discretization error representation is derived for the approximation of the associated value function. It concerns symplectic Euler solutions of the Hamiltonian system connected with the optimal control problem. The error representation has a leading-order term consisting of an error density that is computable from symplectic Euler solutions. Under an assumption of the pathwise convergence of the approximate dual function as the maximum time step goes to zero, we prove that the remainder is of higher order than the leading-error density part in the error representation. With the error representation, it is possible to perform adaptive time stepping. We apply an adaptive algorithm originally developed for ordinary differential equations. The performance is illustrated by numerical tests.
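    The symplectic Euler scheme underlying the error representation can be sketched generically for a separable Hamiltonian system; the paper's computable error density and adaptive time stepping are not reproduced here:

```python
def symplectic_euler(q0, p0, dH_dq, dH_dp, dt, steps):
    """Symplectic Euler for a Hamiltonian system q' = dH/dp, p' = -dH/dq:
    the momentum is updated first, and the position update then uses the
    new momentum. A generic sketch, not the paper's optimal-control solver."""
    q, p = q0, p0
    trajectory = [(q, p)]
    for _ in range(steps):
        p = p - dt * dH_dq(q)   # explicit in the old position
        q = q + dt * dH_dp(p)   # uses the freshly updated momentum
        trajectory.append((q, p))
    return trajectory
```

For the harmonic oscillator H = (q^2 + p^2)/2, this scheme keeps the energy error bounded over long times instead of drifting, which is the property that makes symplectic discretizations attractive here.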

  13. A study on industrial accident rate forecasting and program development of estimated zero accident time in Korea.

    Science.gov (United States)

    Kim, Tae-gu; Kang, Young-sig; Lee, Hyung-won

    2011-01-01

    To begin a zero accident campaign for industry, the first step is to estimate the industrial accident rate and the zero accident time systematically. This paper considers, through quantitative time series analysis methods, the social and technical changes in the business environment after the beginning of the zero accident campaign. These methods include the sum of squared errors (SSE), regression analysis method (RAM), exponential smoothing method (ESM), double exponential smoothing method (DESM), auto-regressive integrated moving average (ARIMA) model, and the proposed analytic function method (AFM). A program is developed to estimate the accident rate, zero accident time and achievement probability of an efficient industrial environment. In this paper, MFC (Microsoft Foundation Class) software of Visual Studio 2008 was used to develop the zero accident program. The results of this paper will provide major information for industrial accident prevention and be an important part of stimulating the zero accident campaign within all industrial environments.
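    Two of the forecasting methods listed, ESM and DESM, can be sketched in a few lines. These are the generic textbook forms; the smoothing parameters and any accident-rate data are illustrative:

```python
def exp_smoothing(series, alpha):
    """Single exponential smoothing (ESM): s_t = alpha*y_t + (1-alpha)*s_{t-1}.
    The final smoothed state serves as the one-step-ahead forecast."""
    s = series[0]
    for y in series[1:]:
        s = alpha * y + (1 - alpha) * s
    return s

def double_exp_smoothing(series, alpha, beta):
    """Double exponential smoothing (DESM, Holt's form) with a level and a
    linear trend component; returns the one-step-ahead forecast level + trend."""
    level, trend = series[0], series[1] - series[0]
    for y in series[1:]:
        prev_level = level
        level = alpha * y + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
    return level + trend
```

On a steadily declining accident-rate series, DESM follows the downward trend into its forecast, while ESM lags behind it; that lag is why the trend-aware variants appear in the method comparison.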

  14. Communication: Symmetrical quasi-classical analysis of linear optical spectroscopy

    Science.gov (United States)

    Provazza, Justin; Coker, David F.

    2018-05-01

    The symmetrical quasi-classical approach for propagation of a many degree of freedom density matrix is explored in the context of computing linear spectra. Calculations on a simple two state model for which exact results are available suggest that the approach gives a qualitative description of peak positions, relative amplitudes, and line broadening. Short time details in the computed dipole autocorrelation function result in exaggerated tails in the spectrum.

  15. A Classic Through Eternity

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    FIVE years ago, an ancient Chinese air was beamed to outer space as a PR exercise. To humankind, music is a universal language, so the tune seemed an ideal medium for communication with extraterrestrial intelligence. So far there has been no response, but it is believed that the tune will play for a billion years, and eventually be heard and understood. The melody is called High Mountain and Flowing Stream, and it is played on the guqin, a seven-stringed classical musical instrument similar to the zither.

  16. Accounting for Berkson and Classical Measurement Error in Radon Exposure Using a Bayesian Structural Approach in the Analysis of Lung Cancer Mortality in the French Cohort of Uranium Miners.

    Science.gov (United States)

    Hoffmann, Sabine; Rage, Estelle; Laurier, Dominique; Laroche, Pierre; Guihenneuc, Chantal; Ancelet, Sophie

    2017-02-01

    Many occupational cohort studies on underground miners have demonstrated that radon exposure is associated with an increased risk of lung cancer mortality. However, despite the deleterious consequences of exposure measurement error on statistical inference, these analyses traditionally do not account for exposure uncertainty. This might be due to the challenging nature of measurement error resulting from imperfect surrogate measures of radon exposure. Indeed, we are typically faced with exposure uncertainty in a time-varying exposure variable where both the type and the magnitude of error may depend on period of exposure. To address the challenge of accounting for multiplicative and heteroscedastic measurement error that may be of Berkson or classical nature, depending on the year of exposure, we opted for a Bayesian structural approach, which is arguably the most flexible method to account for uncertainty in exposure assessment. We assessed the association between occupational radon exposure and lung cancer mortality in the French cohort of uranium miners and found the impact of uncorrelated multiplicative measurement error to be of marginal importance. However, our findings indicate that the retrospective nature of exposure assessment that occurred in the earliest years of mining of this cohort as well as many other cohorts of underground miners might lead to an attenuation of the exposure-risk relationship. More research is needed to address further uncertainties in the calculation of lung dose, since this step will likely introduce important sources of shared uncertainty.

  17. Computational Error Estimate for the Power Series Solution of Odes ...

    African Journals Online (AJOL)

    This paper compares the error estimation of the power series solution with the recursive Tau method for solving ordinary differential equations. From the computational viewpoint, the power series using zeros of the Chebyshev polynomial is effective, accurate and easy to use. Keywords: Lanczos Tau method, Chebyshev polynomial, ...
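    The Chebyshev zeros referred to above have a standard closed form, sketched here for reference (the paper's Tau method comparison is not reproduced):

```python
import math

def chebyshev_zeros(n):
    """Zeros of the degree-n Chebyshev polynomial T_n on [-1, 1]:
    x_k = cos((2k - 1) * pi / (2n)), for k = 1, ..., n."""
    return [math.cos((2 * k - 1) * math.pi / (2 * n)) for k in range(1, n + 1)]
```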

  18. Seeing your error alters my pointing: observing systematic pointing errors induces sensori-motor after-effects.

    Directory of Open Access Journals (Sweden)

    Roberta Ronchi

    During the procedure of prism adaptation, subjects execute pointing movements to visual targets under a lateral optical displacement: as a consequence of the discrepancy between visual and proprioceptive inputs, their visuo-motor activity is characterized by pointing errors. The perception of such final errors triggers error-correction processes that eventually result in sensori-motor compensation, opposite to the prismatic displacement (i.e., after-effects). Here we tested whether the mere observation of erroneous pointing movements, similar to those executed during prism adaptation, is sufficient to produce adaptation-like after-effects. Neurotypical participants observed, from a first-person perspective, the examiner's arm making incorrect pointing movements that systematically overshot visual target locations to the right, thus simulating a rightward optical deviation. Three classical after-effect measures (proprioceptive, visual and visual-proprioceptive shift) were recorded before and after first-person perspective observation of pointing errors. Results showed that mere visual exposure to an arm that systematically points to the right side of a target (i.e., without error correction) produces a leftward after-effect, which mostly affects the observer's proprioceptive estimation of her body midline. In addition, being exposed to such a constant visual error induced in the observer the illusion of "feeling" the seen movement. These findings indicate that it is possible to elicit sensori-motor after-effects by mere observation of movement errors.

  19. Safety assessment of inter-channel / inter-system digital communications: A defensive measures approach

    International Nuclear Information System (INIS)

    Thuy, N. N. Q.

    2006-01-01

    Inappropriately designed inter-channel and inter-system digital communications could initiate common cause failure of multiple channels or multiple systems. Defensive measures were introduced in EPRI report TR-1002835 (Guideline for Performing Defense-in-Depth and Diversity Assessments for Digital Upgrades) to assess, on a deterministic basis, the susceptibility of digital systems architectures to common-cause failures. This paper suggests how this approach could be applied to assess inter-channel and inter-system digital communications from a safety standpoint. The first step of the approach is to systematically identify the so-called 'influence factors' that one end of the data communication path can have on the other. Potential factors to be considered would typically include data values, data volumes and data rates. The second step of the approach is to characterize the ways possible failures of a given end of the communication path could affect these influence factors (e.g., incorrect data values, excessive data rates, time-outs, incorrect data volumes). The third step is to analyze the designed-in measures taken to guarantee independence of the other end. In addition to classical error detection and correction codes, typical defensive measures are one-way data communication, fixed-rate data communication, fixed-volume data communication, and validation of data values. (authors)

  20. A novel unified expression for the capacity and bit error probability of wireless communication systems over generalized fading channels

    KAUST Repository

    Yilmaz, Ferkan

    2012-07-01

    Analysis of the average binary error probabilities (ABEP) and average capacity (AC) of wireless communications systems over generalized fading channels has been considered separately in past years. This paper introduces a novel moment generating function (MGF)-based unified expression for the ABEP and AC of single and multiple link communications with maximal ratio combining. In addition, this paper proposes the hyper-Fox's H fading model as a unified fading distribution of a majority of the well-known generalized fading environments. As such, the authors offer a generic unified performance expression that can be easily calculated, and that is applicable to a wide variety of fading scenarios. The mathematical formalism is illustrated with some selected numerical examples that validate the correctness of the authors' newly derived results. © 1972-2012 IEEE.

  1. Why do adult dogs (Canis familiaris) commit the A-not-B search error?

    Science.gov (United States)

    Sümegi, Zsófia; Kis, Anna; Miklósi, Ádám; Topál, József

    2014-02-01

    It has been recently reported that adult domestic dogs, like human infants, tend to commit perseverative search errors; that is, they select the previously rewarded empty location in the Piagetian A-not-B search task because of the experimenter's ostensive communicative cues. There is, however, an ongoing debate over whether these findings reveal that dogs can use human ostensive referential communication as a source of information or whether the phenomenon can be accounted for by "more simple" explanations like insufficient attention and learning based on local enhancement. In 2 experiments the authors systematically manipulated the type of human cueing (communicative or noncommunicative) adjacent to the A hiding place during both the A and B trials. Results highlight 3 important aspects of the dogs' A-not-B error: (a) search errors are influenced to a certain extent by dogs' motivation to retrieve the toy object; (b) human communicative and noncommunicative signals have different error-inducing effects; and (c) communicative signals presented at the A hiding place during the B trials but not during the A trials play a crucial role in inducing the A-not-B error, and it can be induced even without demonstrating repeated hiding events at location A. These findings further confirm the notion that the perseverative search error, at least partially, reflects a "ready-to-obey" attitude in the dog rather than insufficient attention and/or working memory.

  2. Real-time detection and elimination of nonorthogonality error in interference fringe processing

    International Nuclear Information System (INIS)

    Hu Haijiang; Zhang Fengdeng

    2011-01-01

    In the measurement system of interference fringes, the nonorthogonality error is a main error source that influences the precision and accuracy of the measurement system. Detecting and eliminating this error has been an important goal. A novel method that uses only cross-zero detection and counting is proposed to detect and eliminate the nonorthogonality error in real time. This method can be realized simply by means of a digital logic device, because it does not invoke trigonometric functions and inverse trigonometric functions. It can be widely used in the bidirectional subdivision systems of Moire fringes and other optical instruments.

  3. Luigi Gatteschi's work on asymptotics of special functions and their zeros

    Science.gov (United States)

    Gautschi, Walter; Giordano, Carla

    2008-12-01

    A good portion of Gatteschi's research publications, about 65%, is devoted to asymptotics of special functions and their zeros. Most prominent among the special functions studied are classical orthogonal polynomials, notably Jacobi polynomials and their special cases, Laguerre polynomials, and Hermite polynomials by implication. Other important classes of special functions dealt with are Bessel functions of the first and second kind, Airy functions, and confluent hypergeometric functions, both in Tricomi's and Whittaker's form. This work is reviewed here, and organized along methodological lines.

  4. Two statistics for evaluating parameter identifiability and error reduction

    Science.gov (United States)

    Doherty, John; Hunt, Randall J.

    2009-01-01

    Two statistics are presented that can be used to rank input parameters utilized by a model in terms of their relative identifiability based on a given or possible future calibration dataset. Identifiability is defined here as the capability of model calibration to constrain parameters used by a model. Both statistics require that the sensitivity of each model parameter be calculated for each model output for which there are actual or presumed field measurements. Singular value decomposition (SVD) of the weighted sensitivity matrix is then undertaken to quantify the relation between the parameters and observations that, in turn, allows selection of calibration solution and null spaces spanned by unit orthogonal vectors. The first statistic presented, "parameter identifiability", is quantitatively defined as the direction cosine between a parameter and its projection onto the calibration solution space. This varies between zero and one, with zero indicating complete non-identifiability and one indicating complete identifiability. The second statistic, "relative error reduction", indicates the extent to which the calibration process reduces error in estimation of a parameter from its pre-calibration level where its value must be assigned purely on the basis of prior expert knowledge. This is more sophisticated than identifiability, in that it takes greater account of the noise associated with the calibration dataset. Like identifiability, it has a maximum value of one (which can only be achieved if there is no measurement noise). Conceptually it can fall to zero; and even below zero if a calibration problem is poorly posed. An example, based on a coupled groundwater/surface-water model, is included that demonstrates the utility of the statistics. © 2009 Elsevier B.V.
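
    A minimal numerical sketch of the first statistic, assuming a weighted sensitivity matrix J (rows = observations, columns = parameters) and a chosen solution-space dimension k; the function name and toy matrix are illustrative, not from the paper.

```python
import numpy as np

def identifiability(J, k):
    """Direction cosine between each parameter axis and its projection
    onto the calibration solution space spanned by the first k right
    singular vectors of the weighted sensitivity matrix J."""
    _, _, Vt = np.linalg.svd(J, full_matrices=True)
    V_k = Vt[:k].T                      # (n_par x k) solution-space basis
    # The projection of unit parameter vector e_i onto the solution space
    # has norm equal to the norm of row i of V_k, a value in [0, 1].
    return np.linalg.norm(V_k, axis=1)

J = np.array([[1.0, 0.0, 0.0],
              [2.0, 0.0, 0.0]])         # only parameter 1 is informed
print(identifiability(J, k=1))          # ~[1, 0, 0]
```

    With this toy J, both observations depend only on the first parameter, so that parameter is fully identifiable and the other two are not constrained at all.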

  5. The behaviour of the local error in splitting methods applied to stiff problems

    International Nuclear Information System (INIS)

    Kozlov, Roman; Kvaernoe, Anne; Owren, Brynjulf

    2004-01-01

    Splitting methods are frequently used in solving stiff differential equations, and it is common to split the system of equations into a stiff and a nonstiff part. The classical theory for the local order of consistency is valid only for stepsizes which are smaller than what one would typically prefer to use in the integration. Error control and stepsize selection devices based on classical local order theory may lead to unstable error behaviour and inefficient stepsize sequences. Here, the behaviour of the local error in the Strang and Godunov splitting methods is explained by using two different tools, Lie series and singular perturbation theory. The two approaches provide an understanding of the phenomena from different points of view, and both are consistent with what is observed in numerical experiments.
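
    As a reminder of the classical (non-stiff) behaviour that the abstract contrasts against, here is an illustrative sketch, not the paper's stiff example: one Strang splitting step for y' = (A + B)y with non-commuting A and B has local error O(h^3), so halving h should shrink the one-step error by roughly a factor of 8. The matrices here are arbitrary choices for demonstration.

```python
import numpy as np
from scipy.linalg import expm

# One Strang step: half step with A, full step with B, half step with A.
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
B = np.array([[-0.5, 0.0], [0.0, -2.0]])
y0 = np.array([1.0, 0.0])

def strang_step(y, h):
    return expm(A * h / 2) @ expm(B * h) @ expm(A * h / 2) @ y

for h in (0.1, 0.05):
    # Compare against the exact solution expm((A + B) h) y0.
    err = np.linalg.norm(strang_step(y0, h) - expm((A + B) * h) @ y0)
    print(h, err)   # error ratio between the two stepsizes is close to 8
```

    For stiff problems, the abstract's point is precisely that this clean O(h^3) behaviour breaks down at practically relevant stepsizes.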

  6. Energy Efficient Error-Correcting Coding for Wireless Systems

    NARCIS (Netherlands)

    Shao, X.

    2010-01-01

    The wireless channel is a hostile environment. The transmitted signal suffers not only multi-path fading but also noise and interference from other users of the wireless channel, which causes unreliable communication. To achieve high-quality communication, error-correcting coding is required.

  7. Investigating the significance of zero-point motion in small molecular clusters of sulphuric acid and water

    International Nuclear Information System (INIS)

    Stinson, Jake L.; Ford, Ian J.; Kathmann, Shawn M.

    2014-01-01

    The nucleation of particles from trace gases in the atmosphere is an important source of cloud condensation nuclei, and these are vital for the formation of clouds in view of the high supersaturations required for homogeneous water droplet nucleation. The methods of quantum chemistry have increasingly been employed to model nucleation due to their high accuracy and efficiency in calculating configurational energies, and nucleation rates can be obtained from the associated free energies of particle formation. However, even in such advanced approaches, it is typically assumed that the nuclei have a classical nature, which is questionable for some systems. The importance of zero-point motion (also known as quantum nuclear dynamics) in modelling small clusters of sulphuric acid and water is tested here using the path integral molecular dynamics method at the density functional level of theory. The general effect of zero-point motion is to distort the mean structure slightly and to promote the extent of proton transfer with respect to classical behaviour. In a particular configuration of one sulphuric acid molecule with three waters, the range of positions explored by a proton between a sulphuric acid and a water molecule at 300 K (a broad range, in contrast to the confinement suggested by geometry optimisation at 0 K) is clearly affected by the inclusion of zero-point motion, and similar effects are observed for other configurations.

  8. Apologies and Medical Error

    Science.gov (United States)

    2008-01-01

    One way in which physicians can respond to a medical error is to apologize. Apologies—statements that acknowledge an error and its consequences, take responsibility, and communicate regret for having caused harm—can decrease blame, decrease anger, increase trust, and improve relationships. Importantly, apologies also have the potential to decrease the risk of a medical malpractice lawsuit and can help settle claims by patients. Patients indicate they want and expect explanations and apologies after medical errors and physicians indicate they want to apologize. However, in practice, physicians tend to provide minimal information to patients after medical errors and infrequently offer complete apologies. Although fears about potential litigation are the most commonly cited barrier to apologizing after medical error, the link between litigation risk and the practice of disclosure and apology is tenuous. Other barriers might include the culture of medicine and the inherent psychological difficulties in facing one’s mistakes and apologizing for them. Despite these barriers, incorporating apology into conversations between physicians and patients can address the needs of both parties and can play a role in the effective resolution of disputes related to medical error. PMID:18972177

  9. Analysis of Employee's Survey for Preventing Human-Errors

    International Nuclear Information System (INIS)

    Sung, Chanho; Kim, Younggab; Joung, Sanghoun

    2013-01-01

    Human errors in nuclear power plants can cause large and small events or incidents. These events or incidents are among the main contributors to reactor trips and might threaten the safety of nuclear plants. To prevent human errors, KHNP (Korea Hydro & Nuclear Power) introduced human-error prevention techniques and has applied them to main areas such as plant operation, operation support, and maintenance and engineering. This paper proposes methods to prevent and reduce human errors in nuclear power plants by analyzing survey results covering the utilization of the human-error prevention techniques and the employees' awareness of preventing human errors. With regard to human-error prevention, this survey analysis presented the status of the techniques and of the employees' awareness. Employees' understanding and utilization of the techniques were generally high, and the training level of employees and the training effect on actual work were in good condition. Employees also answered that the root causes of human error were working-environment factors, including tight schedules, manpower shortage, and excessive workload, rather than personal negligence or lack of personal knowledge; consideration of the working environment is certainly needed. Based on this survey, the best methods of preventing human error at present are personal equipment, substantial training and education, mental health checks before starting work, prohibition of performing multiple tasks at once, compliance with procedures, and enhancement of job-site review. However, the most important and basic things for preventing human error are the interest of workers and an organizational atmosphere with good communication between managers and workers and between employees and their superiors.

  10. Does general relativity theory possess the classical newtonian limit

    International Nuclear Information System (INIS)

    Denisov, V.I.; Logunov, A.A.

    1980-01-01

    A detailed comparison of the newtonian approximation of the Einstein theory and the Newton theory of gravity is made. A difference of principle between these two theories is clarified at the stage of obtaining integrals of motion. Analysis of the exact equations of motion and the Einstein equations shows the existence of only zero integrals of motion, as in the newtonian approximation. The conclusion is that GRT has no classical newtonian limit, since the integrals of motion in the Newton theory of gravity and in the newtonian approximation of the Einstein theory do not coincide [ru

  11. Optimal linear precoding for indoor visible light communication system

    KAUST Repository

    Sifaou, Houssem; Park, Kihong; Kammoun, Abla; Alouini, Mohamed-Slim

    2017-01-01

    ) problem. The performance of the proposed precoding scheme is studied under different working conditions and compared with the classical zero-forcing precoding. Simulations have been provided to illustrate the high gain of the proposed scheme.

  12. Communication: Proper treatment of classically forbidden electronic transitions significantly improves detailed balance in surface hopping

    Energy Technology Data Exchange (ETDEWEB)

    Sifain, Andrew E. [Department of Physics and Astronomy, University of Southern California, Los Angeles, California 90089-0485 (United States); Wang, Linjun [Department of Chemistry, Zhejiang University, Hangzhou 310027 (China); Prezhdo, Oleg V. [Department of Physics and Astronomy, University of Southern California, Los Angeles, California 90089-0485 (United States); Department of Chemistry, University of Southern California, Los Angeles, California 90089-1062 (United States)

    2016-06-07

    Surface hopping is the most popular method for nonadiabatic molecular dynamics. Many have reported that it does not rigorously attain detailed balance at thermal equilibrium, but does so approximately. We show that convergence to the Boltzmann populations is significantly improved when the nuclear velocity is reversed after a classically forbidden hop. The proposed prescription significantly reduces the total number of classically forbidden hops encountered along a trajectory, suggesting that some randomization in nuclear velocity is needed when classically forbidden hops constitute a large fraction of attempted hops. Our results are verified computationally using two- and three-level quantum subsystems, coupled to a classical bath undergoing Langevin dynamics.

  13. Revealing the consequences and errors of substance arising from the inverse confusion between the crystal (ligand) field quantities and the zero-field splitting ones

    Energy Technology Data Exchange (ETDEWEB)

    Rudowicz, Czesław, E-mail: crudowicz@zut.edu.pl [Institute of Physics, West Pomeranian University of Technology, Al. Piastów 17, 70-310 Szczecin (Poland); Karbowiak, Mirosław [Faculty of Chemistry, University of Wrocław, ul. F. Joliot-Curie 14, 50-383 Wrocław (Poland)

    2015-01-01

    , most recently, have led to pitfalls and errors of substance bearing on the understanding of physical properties. Clarification of the incorrect terminology is timely in order to bring about better understanding of the physical principles and prevent further proliferation of the confusion. - Highlights: • Confusion between crystal field quantities and zero-field splitting ones elucidated. • Consequences of this confusion and errors of substance revealed. • Literature survey of notational and terminological problems presented. • Invalid direct conversions between the CF parameters and ZFS ones exposed. • Terminological clarifications enable better understanding of physical principles.

  14. An A Posteriori Error Estimate for Symplectic Euler Approximation of Optimal Control Problems

    KAUST Repository

    Karlsson, Peer Jesper

    2015-01-07

    This work focuses on numerical solutions of optimal control problems. A time discretization error representation is derived for the approximation of the associated value function. It concerns Symplectic Euler solutions of the Hamiltonian system connected with the optimal control problem. The error representation has a leading order term consisting of an error density that is computable from Symplectic Euler solutions. Under an assumption of the pathwise convergence of the approximate dual function as the maximum time step goes to zero, we prove that the remainder is of higher order than the leading error density part in the error representation. With the error representation, it is possible to perform adaptive time stepping. We apply an adaptive algorithm originally developed for ordinary differential equations.
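
    For context, the Symplectic Euler scheme itself is a simple staggered update. The sketch below is an illustrative harmonic oscillator, not the paper's optimal control problem: the momentum is updated with the old position and the position with the new momentum, and the resulting energy error stays bounded over long times instead of drifting.

```python
# Symplectic (semi-implicit) Euler for H(p, q) = p^2/2 + q^2/2.
def symplectic_euler(p, q, h, steps):
    for _ in range(steps):
        p -= h * q        # p update uses the old q
        q += h * p        # q update uses the new p
    return p, q

p, q = 0.0, 1.0                        # start on the unit-energy orbit
p, q = symplectic_euler(p, q, h=0.01, steps=10_000)
energy = 0.5 * (p * p + q * q)
print(abs(energy - 0.5) < 0.01)        # True: energy error stays O(h), bounded
```

    An explicit (non-symplectic) Euler step would instead let the energy grow steadily, which is why symplectic integrators are preferred for Hamiltonian systems like the one in this record.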

  15. On Montgomery's pair correlation conjecture to the zeros of the Riemann zeta function

    OpenAIRE

    Li, Pei

    2005-01-01

    In this thesis, we are interested in Montgomery's pair correlation conjecture, which is about the distribution of the spacings between consecutive zeros of the Riemann zeta function. Our goal is to explain and study Montgomery's pair correlation conjecture and discuss its connection with random matrix theory. In Chapter One, we will explain how to define the Riemann zeta function by using analytic continuation. After this, several classical properties of the Riemann zeta function will...

  16. Measurement Error in Income and Schooling and the Bias of Linear Estimators

    DEFF Research Database (Denmark)

    Bingley, Paul; Martinello, Alessandro

    2017-01-01

    We propose a general framework for determining the extent of measurement error bias in ordinary least squares and instrumental variable (IV) estimators of linear models while allowing for measurement error in the validation source. We apply this method by validating Survey of Health, Ageing and Retirement in Europe data with Danish administrative registers. Contrary to most validation studies, we find that measurement error in income is classical once we account for imperfect validation data. We find nonclassical measurement error in schooling, causing a 38% amplification bias in IV estimators.
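
    The classical/nonclassical distinction matters because classical (independent, additive) error in a regressor attenuates the OLS slope by the reliability ratio var(x)/var(x + error). A simulated sketch of that attenuation (illustrative numbers, not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
beta = 2.0
x = rng.normal(size=n)                  # true regressor (e.g. income)
y = beta * x + rng.normal(size=n)       # outcome
x_err = x + rng.normal(size=n)          # classical error doubles the variance

# OLS slope = cov(x, y) / var(x); with the mismeasured regressor it is
# attenuated by the reliability ratio var(x)/var(x_err) = 1/2 here.
b_true = np.cov(x, y)[0, 1] / np.var(x)
b_err = np.cov(x_err, y)[0, 1] / np.var(x_err)
print(round(b_true, 2), round(b_err, 2))   # ≈ 2.0 and ≈ 1.0
```

    Nonclassical error (correlated with the truth, as the paper finds for schooling) breaks this simple attenuation formula and can amplify rather than shrink IV estimates.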

  17. When long-range zero-lag synchronization is feasible in cortical networks

    Directory of Open Access Journals (Sweden)

    Atthaphon eViriyopase

    2012-07-01

    Many studies have reported long-range synchronization of neuronal activity between brain areas, in particular in the gamma band with frequencies in the range of 40-80 Hz. Several studies have reported synchrony with zero phase lag, which is remarkable considering the synaptic and conduction delays inherent in the connections between distant brain areas. This result has led to many speculations about the possible functional role of zero-lag synchrony, e.g., for neuronal communication in attention, memory and feature binding. However, recent studies using recordings of single-unit activity and local field potentials report that neuronal synchronization occurs with non-zero phase lags. This raises the questions whether zero-lag synchrony can occur in the brain and, if so, under which conditions. We used analytical methods and computer simulations to investigate which connectivity between neuronal populations allows or prohibits zero-lag synchrony. We did so for a model where two oscillators interact via a relay oscillator. Analytical results and computer simulations were obtained for both type I Mirollo-Strogatz neurons and type II Hodgkin-Huxley neurons. We investigated the dynamics of the model for various types of synaptic coupling and, importantly, considered the potential impact of spike-timing-dependent plasticity (STDP) and its learning window. We confirm previous results that zero-lag synchrony can be achieved in this configuration. This is much easier to achieve with Hodgkin-Huxley neurons, which have a biphasic phase response curve, than with type I neurons. STDP facilitates zero-lag synchrony, as it adjusts the synaptic strengths such that zero-lag synchrony is feasible for a much larger range of parameters than without STDP.

  18. Principles of digital communication and coding

    CERN Document Server

    Viterbi, Andrew J

    2009-01-01

    This classic by two digital communications experts is geared toward students of communications theory and to designers of channels, links, terminals, modems, or networks used to transmit and receive digital messages. 1979 edition.

  19. Evaluation of parameters for particles acceleration by the zero-point field of quantum electrodynamics

    Science.gov (United States)

    Rueda, A.

    1985-01-01

    That particles may be accelerated by vacuum effects in quantum field theory has been repeatedly proposed in the last few years. A natural upshot of this is a mechanism for the acceleration of cosmic-ray (CR) primaries. A mechanism for acceleration by the zero-point field (ZPE) was considered in which the ZPE is taken in a realistic sense (as opposed to a virtual field). Originally the idea was developed within a semiclassical context. The classical Einstein-Hopf model (EHM) was used to show that free, isolated, electromagnetically interacting particles perform a random walk in phase space, and more importantly in momentum space, when submitted to the perennial action of the so-called classical electromagnetic ZPE.

  20. Statistical analysis of lifetime determinations in the presence of large errors

    International Nuclear Information System (INIS)

    Yost, G.P.

    1984-01-01

    The lifetimes of the new particles are very short, and most of the experiments which measure decay times are subject to measurement errors which are not negligible compared with the decay times themselves. Bartlett has analyzed the problem of lifetime estimation if the error on each event is small or zero. For the case of non-negligible measurement errors σ_i on each event, we are interested in a few basic questions: How well does maximum likelihood work? That is, (a) are the errors reasonable, (b) is the answer unbiased, and (c) are there other estimators with superior performance? We concentrate on the results of our Monte Carlo investigation for the case in which the experiment is sensitive over all times −∞ < x_i < ∞.
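
    One common concrete form of this problem: if each true decay time is exponential with lifetime τ and the measurement adds Gaussian noise of known width σ, the observed-time density is the exponential-Gaussian convolution, and τ can be fit by maximum likelihood over the whole real line. A sketch under those assumptions (illustrative, not the paper's exact Monte Carlo study):

```python
import numpy as np
from scipy.special import erfc
from scipy.optimize import minimize_scalar

# Negative log-likelihood for x = Exp(tau) + N(0, sigma), using the
# closed-form convolution density (exponentially modified Gaussian):
# f(x) = erfc((sigma^2/tau - x) / (sqrt(2) sigma))
#        * exp(sigma^2/(2 tau^2) - x/tau) / (2 tau)
def nll(tau, x, sigma):
    z = sigma**2 / (2 * tau**2) - x / tau
    pdf = erfc((sigma**2 / tau - x) / (np.sqrt(2) * sigma)) * np.exp(z) / (2 * tau)
    return -np.sum(np.log(pdf))

rng = np.random.default_rng(1)
tau_true, sigma = 1.0, 0.8            # resolution comparable to the lifetime
x = rng.exponential(tau_true, 50_000) + rng.normal(0, sigma, 50_000)

res = minimize_scalar(nll, bounds=(0.1, 5.0), args=(x, sigma), method="bounded")
print(round(res.x, 2))                # close to tau_true (1.0)
```

    Note that negative observed times are perfectly legitimate data here and carry information about τ, which is exactly the −∞ < x_i < ∞ regime the abstract describes.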

  1. Communication Complexity

    Indian Academy of Sciences (India)

    Jaikumar Radhakrishnan

    We allow a small probability of error; the goal is to minimize the total number of bits transmitted, using tools from combinatorics, coding theory, algebra, analysis, etc. Assume Alice and Bob know a good error-correcting code E : {0,1}^n → {0,1}^{10n} with distance, say, 3n.
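
    The fragment above sketches how shared randomness lets Alice and Bob test equality with far fewer than n bits. A minimal sketch of the same idea using random inner products instead of an explicit error-correcting code (a standard alternative protocol, not the slide's exact construction):

```python
import random

# Randomized equality test: Alice and Bob share random bit strings r.
# For each probe, Alice sends the single bit <x, r> mod 2; if x != y,
# each probe catches the difference with probability 1/2.
def equal(x, y, probes=32, seed=42):
    rng = random.Random(seed)           # stands in for shared randomness
    n = len(x)
    for _ in range(probes):
        r = [rng.randint(0, 1) for _ in range(n)]
        dot_x = sum(a & b for a, b in zip(x, r)) % 2   # Alice's 1-bit message
        dot_y = sum(a & b for a, b in zip(y, r)) % 2   # Bob's local value
        if dot_x != dot_y:
            return False
    return True            # equal with probability >= 1 - 2**-probes

x = [1, 0, 1, 1, 0, 1]
print(equal(x, x), equal(x, [1, 0, 1, 1, 0, 0]))   # True False (whp)
```

    Only `probes` bits cross the channel regardless of n, versus the n bits a deterministic protocol provably needs for equality.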

  2. Determination of the point-of-zero, charge of manganese oxides with different methods including an improved salt titration method

    NARCIS (Netherlands)

    Tan, W.F.; Lu, S.J.; Liu, F.; Feng, X.H.; He, J.Z.; Koopal, L.K.

    2008-01-01

    Manganese (Mn) oxides are important components in soils and sediments. Points-of-zero charge (PZC) of three synthetic Mn oxides (birnessite, cryptomelane, and todorokite) were determined by using three classical techniques (potentiometric titration or PT, rapid PT or R-PT, and salt titration or ST)

  3. Flight Operations . [Zero Knowledge to Mission Complete

    Science.gov (United States)

    Forest, Greg; Apyan, Alex; Hillin, Andrew

    2016-01-01

    Outline the process that takes new hires with zero knowledge all the way to the point of completing missions in Flight Operations. Audience members should be able to outline the attributes of a flight controller and instructor, outline the training flow for flight controllers and instructors, and identify how the flight controller and instructor attributes are necessary to ensure operational excellence in mission prep and execution. Identify how the simulation environment is used to develop crisis management, communication, teamwork, and leadership skills for SGT employees beyond what can be provided by classroom training.

  4. "Apologies" from pathologists: why, when, and how to say "sorry" after committing a medical error.

    Science.gov (United States)

    Dewar, Rajan; Parkash, Vinita; Forrow, Lachlan; Truog, Robert D

    2014-05-01

    How pathologists communicate an error is complicated by the absence of a direct physician-patient relationship. Using 2 examples, we elaborate on how other physician colleagues routinely play an intermediary role in our day-to-day transactions and in the communication of a pathologist error to the patient. The concept of a "dual-hybrid" mind-set in the intermediary physician and its role in representing the pathologists' viewpoint adequately is considered. In a dual-hybrid mind-set, the intermediary physician can align with the patients' philosophy and like the patient, consider the smallest deviation from norm to be an error. Alternatively, they might embrace the traditional physician philosophy and communicate only those errors that resulted in a clinically inappropriate outcome. Neither may effectively reflect the pathologists' interests. We propose that pathologists develop strategies to communicate errors that include considerations of meeting with the patients directly. Such interactions promote healing for the patient and are relieving to the well-intentioned pathologist.

  5. Communication Education and Instructional Communication: Genesis and Evolution as Fields of Inquiry

    Science.gov (United States)

    Morreale, Sherwyn; Backlund, Philip; Sparks, Leyla

    2014-01-01

    Communication education is concerned with the communicative aspects of teaching and learning in various situations and contexts. Although the historical roots of this area of inquiry date back to the classical study of rhetoric by the Greeks and Romans, this report focuses on the field's emergence as an important area of modern scholarly…

  6. Indeterminism in Classical Dynamics of Particle Motion

    Science.gov (United States)

    Eyink, Gregory; Vishniac, Ethan; Lalescu, Cristian; Aluie, Hussein; Kanov, Kalin; Burns, Randal; Meneveau, Charles; Szalay, Alex

    2013-03-01

    We show that "God plays dice" not only in quantum mechanics but also in the classical dynamics of particles advected by turbulent fluids. With a fixed deterministic flow velocity and an exactly known initial position, the particle motion is nevertheless completely unpredictable! In analogy with spontaneous magnetization in ferromagnets, which persists as the external field is taken to zero, the particle trajectories in turbulent flow remain random as external noise vanishes. The necessary ingredient is a rough advecting field with a power-law energy spectrum extending to smaller scales as noise is taken to zero. The physical mechanism of "spontaneous stochasticity" is the explosive dispersion of particle pairs proposed by L. F. Richardson in 1926, so the phenomenon should be observable in laboratory and natural turbulent flows. We present here the first empirical corroboration of these effects in high Reynolds-number numerical simulations of hydrodynamic and magnetohydrodynamic fluid turbulence. Since power-law spectra are seen in many other systems in condensed matter, geophysics and astrophysics, the phenomenon should occur rather widely. Fast reconnection in solar flares and other astrophysical systems can be explained by spontaneous stochasticity of magnetic field-line motion.

  7. Quantum-Classical Correspondence: Dynamical Quantization and the Classical Limit

    International Nuclear Information System (INIS)

    Turner, L

    2004-01-01

    In only 150 pages, not counting appendices, references, or the index, this book is one author's perspective of the massive theoretical and philosophical hurdles in the no-man's-land separating the classical and quantum domains of physics. It ends with him emphasizing his own theoretical contribution to this area. In his own words, he has attempted to answer: (1) How can we obtain the quantum dynamics of open systems initially described by the equations of motion of classical physics (quantization process)? (2) How can we retrieve classical dynamics from the quantum mechanical equations of motion by means of a classical limiting process (dequantization process)? However, this monograph seems overly ambitious. Although the publisher's description refers to this book as an accessible entrée, we find that this author scrambles too hastily over the peaks of information that are contained in his large collection of 272 references. Introductory motivating discussions are lacking. Profound ideas are glossed over superficially and shoddily. Equations morph. But no new convincing understanding of the physical world results. The author takes the viewpoint that physical systems are always in interaction with their environment and are thus not isolated and, therefore, not Hamiltonian. This impels him to produce a method of quantization of these stochastic systems without the need of a Hamiltonian. He also has interest in obtaining the classical limit of the quantized results. However, this reviewer does not understand why one needs to consider open systems to understand quantum-classical correspondence. The author demonstrates his method using various examples of the Smoluchowski form of the Fokker-Planck equation. He then renders these equations in a Wigner representation, uses what he terms an infinitesimality condition, and associates with it a constant having the dimensions of an action.
He thereby claims to develop master equations, such as the Caldeira-Leggett equation, without

  8. Repulsive Casimir force at zero and finite temperature

    International Nuclear Information System (INIS)

    Lim, S C; Teo, L P

    2009-01-01

    We study the zero and finite temperature Casimir force acting on a perfectly conducting piston with arbitrary cross section moving inside a closed cylinder with infinitely permeable walls. We show that at any temperature, the Casimir force always tends to move the piston away from the walls and toward its equilibrium position. In the case of a rectangular piston, exact expressions for the Casimir force are derived. In the high-temperature regime, we show that the leading term of the Casimir force is linear in temperature and therefore the Casimir force has a classical limit. Due to duality, all these results also hold for an infinitely permeable piston moving inside a closed cylinder with perfectly conducting walls.

  9. Functional Basis for Efficient Physical Layer Classical Control in Quantum Processors

    Science.gov (United States)

    Ball, Harrison; Nguyen, Trung; Leong, Philip H. W.; Biercuk, Michael J.

    2016-12-01

    The rapid progress seen in the development of quantum-coherent devices for information processing has motivated serious consideration of quantum computer architecture and organization. One topic which remains open for investigation and optimization relates to the design of the classical-quantum interface, where control operations on individual qubits are applied according to higher-level algorithms; accommodating competing demands on performance and scalability remains a major outstanding challenge. In this work, we present a resource-efficient, scalable framework for the implementation of embedded physical layer classical controllers for quantum-information systems. Design drivers and key functionalities are introduced, leading to the selection of Walsh functions as an effective functional basis for both programming and controller hardware implementation. This approach leverages the simplicity of real-time Walsh-function generation in classical digital hardware, and the fact that a wide variety of physical layer controls, such as dynamic error suppression, are known to fall within the Walsh family. We experimentally implement a real-time field-programmable-gate-array-based Walsh controller producing Walsh timing signals and Walsh-synthesized analog waveforms appropriate for critical tasks in error-resistant quantum control and noise characterization. These demonstrations represent the first step towards a unified framework for the realization of physical layer controls compatible with large-scale quantum-information processing.
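
    A small sketch of why Walsh functions are hardware-friendly, assuming Hadamard (Sylvester) ordering: each function takes only the values ±1 on dyadic time bins, so generating one in real time needs only sign flips, and the family is mutually orthogonal.

```python
import numpy as np

# Hadamard-ordered Walsh functions are the rows of the Sylvester
# Hadamard matrix, built by a simple recursive block doubling.
def hadamard(n):
    H = np.array([[1]])
    for _ in range(n):
        H = np.block([[H, H], [H, -H]])
    return H

H = hadamard(3)                  # 8 Walsh functions on 8 time bins
print(H.shape)                   # (8, 8)
print(np.allclose(H @ H.T, 8 * np.eye(8)))   # True: rows are orthogonal
```

    Orthogonality is what lets a controller synthesize a target waveform as a short, exact Walsh expansion rather than a dense sample table.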

  10. Zero modes in de Sitter background

    Energy Technology Data Exchange (ETDEWEB)

    Einhorn, Martin B. [Kavli Institute for Theoretical Physics, University of California,Santa Barbara, CA 93106-4030 (United States); Jones, D.R. Timothy [Kavli Institute for Theoretical Physics, University of California,Santa Barbara, CA 93106-4030 (United States); Dept. of Mathematical Sciences, University of Liverpool,Liverpool L69 3BX (United Kingdom)

    2017-03-28

    There are five well-known zero modes among the fluctuations of the metric of de Sitter (dS) spacetime. For Euclidean signature, they can be associated with certain spherical harmonics on the S^4 sphere, viz., the vector representation 5 of the global SO(5) isometry. They appear, for example, in the perturbative calculation of the on-shell effective action of dS space, as well as in models containing matter fields. These modes are shown to be associated with collective modes of S^4 corresponding to certain coherent fluctuations. When dS space is embedded in flat five dimensions E^5, they may be seen as a legacy of translation of the center of the S^4 sphere. Rigid translations of the S^4 sphere on E^5 leave the classical action invariant but are unobservable displacements from the point of view of gravitational dynamics on S^4. Thus, unlike similar moduli, the center of the sphere is not promoted to a dynamical degree of freedom. As a result, these zero modes do not signify the possibility of physically realizable fluctuations or flat directions for the metric of dS space. They are not associated with Killing vectors on S^4 but can be identified with certain non-isometric, conformal Killing forms that locally correspond to a rescaling of the volume element dV_4. We frame much of our discussion in the context of renormalizable gravity, but, to the extent that they only depend upon the global symmetry of the background, the conclusions should apply equally to the corresponding zero modes found in Einstein gravity. Although their existence has only been demonstrated at one loop, we expect that these zero modes will be present to all orders in perturbation theory. They will occur for Lorentzian signature as well, so long as the hyperboloid H^4 is locally stable, but there remain certain infrared issues that need to be clarified. We conjecture that they will appear in any gravitational theory having dS background as a

  11. Boosting work characteristics and overall heat-engine performance via shortcuts to adiabaticity: quantum and classical systems.

    Science.gov (United States)

    Deng, Jiawen; Wang, Qing-hai; Liu, Zhihao; Hänggi, Peter; Gong, Jiangbin

    2013-12-01

    Under a general framework, shortcuts to adiabatic processes are shown to be possible in classical systems. We study the distribution function of the work done on a small system initially prepared at thermal equilibrium. We find that the work fluctuations can be significantly reduced via shortcuts to adiabatic processes. For example, in the classical case, probabilities of having very large or almost zero work values are suppressed. In the quantum case, negative work may be totally removed from the otherwise non-positive-definite work values. We also apply our findings to a micro Otto-cycle-based heat engine. It is shown that the use of shortcuts, which directly enhances the engine output power, can also increase the heat-engine efficiency substantially, in both quantum and classical regimes.

  12. Accounting for Zero Inflation of Mussel Parasite Counts Using Discrete Regression Models

    Directory of Open Access Journals (Sweden)

    Emel Çankaya

    2017-06-01

    Full Text Available In many ecological applications, the absences of species are inevitable, due either to detection faults in samples or to conditions uninhabitable for their existence, resulting in a high number of zero counts or abundances. The usual practice for modelling such data is regression modelling of log(abundance+1), and it is well known that the resulting model is inadequate for prediction purposes. New discrete models accounting for zero abundances, namely zero-inflated Poisson and negative binomial regressions (ZIP and ZINB), Hurdle-Poisson (HP) and Hurdle-Negative Binomial (HNB) models, amongst others, are widely preferred to the classical regression models. Because mussels are one of the economically most important aquatic products of Turkey, the purpose of this study is to examine the performance of these four models in determining the significant biotic and abiotic factors affecting the occurrence of the Nematopsis legeri parasite, which harms Mediterranean mussels (Mytilus galloprovincialis L.). The data collected from three coastal regions of Sinop city in Turkey showed that, on average, more than 50% of parasite counts are zero-valued; model comparisons were based on information criteria. The results showed that the probability of occurrence of this parasite is here best formulated by the ZINB or HNB models, and the influential factors of the models were found to correspond with the ecological differences of the regions.

  13. Medication errors in home care: a qualitative focus group study.

    Science.gov (United States)

    Berland, Astrid; Bentsen, Signe Berit

    2017-11-01

    To explore registered nurses' experiences of medication errors and patient safety in home care. The focus of care for older patients has shifted from institutional care towards a model of home care. Medication errors are common in this situation and can result in patient morbidity and mortality. An exploratory qualitative design with focus group interviews was used. Four focus group interviews were conducted with 20 registered nurses in home care. The data were analysed using content analysis. Five categories were identified as follows: lack of information, lack of competence, reporting medication errors, trade name products vs. generic name products, and improving routines. Medication errors occur frequently in home care and can threaten the safety of patients. Insufficient exchange of information and poor communication between the specialist and home-care health services, and between general practitioners and healthcare workers can lead to medication errors. A lack of competence in healthcare workers can also lead to medication errors. To prevent these, it is important that there should be up-to-date information and communication between healthcare workers during the transfer of patients from specialist to home care. Ensuring competence among healthcare workers with regard to medication is also important. In addition, there should be openness and accurate reporting of medication errors, as well as in setting routines for the preparation, alteration and administration of medicines. To prevent medication errors in home care, up-to-date information and communication between healthcare workers is important when patients are transferred from specialist to home care. It is also important to ensure adequate competence with regard to medication, and that there should be openness when medication errors occur, as well as in setting routines for the preparation, alteration and administration of medications. © 2017 John Wiley & Sons Ltd.

  14. Reproductive value, sensitivity, and nonlinearity: Population-management heuristics derived from classical demography

    OpenAIRE

    Karsten R.; Teismann H.; Vogels A.

    2013-01-01

    In classical demographic theory, reproductive value and the stable age distribution are proportional to the sensitivities of the asymptotic population size to changes in mortality and maternity, respectively. In this note we point out that analogous relationships hold if the maternity function is allowed to depend on the population density. The relevant formulae can essentially be obtained by replacing the growth rate ("Lotka's r") with zero. These facts may be used to derive heuristics for popula...

  15. Quantum communication under channel uncertainty

    Energy Technology Data Exchange (ETDEWEB)

    Noetzel, Janis Christian Gregor

    2012-09-06

    This work contains results concerning the transmission of entanglement and subspaces as well as the generation of entanglement in the limit of arbitrarily many uses of compound and arbitrarily varying quantum channels (CQC, AVQC). In both cases, the channel is described by a set of memoryless channels. Only forward communication between one sender and one receiver is allowed. A code is said to be "good" only if it is "good" for every channel out of the set. Both settings describe a scenario in which sender and receiver have only limited channel knowledge. For different amounts of information about the channel available to sender or receiver, coding theorems are proven for the CQC. For the AVQC, both deterministic and randomised coding schemes are considered. Coding theorems are proven, as well as a quantum analogue of the Ahlswede dichotomy. The connection to zero-error capacities of stationary memoryless quantum channels is investigated. The notion of symmetrisability is defined and used for both classes of channels.

  16. Classical and quantum dynamics of a perfect fluid scalar-metric cosmology

    International Nuclear Information System (INIS)

    Vakili, Babak

    2010-01-01

    We study the classical and quantum models of a Friedmann-Robertson-Walker (FRW) cosmology, coupled to a perfect fluid, in the context of scalar-metric gravity. Using Schutz's representation for the perfect fluid, we show that, under a particular gauge choice, it may lead to the identification of a time parameter for the corresponding dynamical system. It is shown that the evolution of the universe based on the classical cosmology represents a late-time power-law expansion coming from a big-bang singularity in which the scale factor goes to zero while the scalar field blows up. Moreover, this formalism gives rise to a Schrödinger-Wheeler-DeWitt (SWD) equation for the quantum-mechanical description of the model under consideration, the eigenfunctions of which can be used to construct the wave function of the universe. We use the resulting wave function to investigate the possibility of the avoidance of classical singularities due to quantum effects, by means of the many-worlds and ontological interpretations of quantum cosmology.

  17. Estimating model error covariances in nonlinear state-space models using Kalman smoothing and the expectation-maximisation algorithm

    KAUST Repository

    Dreano, Denis; Tandeo, P.; Pulido, M.; Ait-El-Fquih, Boujemaa; Chonavel, T.; Hoteit, Ibrahim

    2017-01-01

    Specification and tuning of errors from dynamical models are important issues in data assimilation. In this work, we propose an iterative expectation-maximisation (EM) algorithm to estimate the model error covariances using classical extended

  18. Study of recursive model for pole-zero cancellation circuit

    International Nuclear Information System (INIS)

    Zhou Jianbin; Zhou Wei; Hong Xu; Hu Yunchuan; Wan Xinfeng; Du Xin; Wang Renbo

    2014-01-01

    The output of a charge-sensitive amplifier (CSA) is a negative exponential signal with a long decay time, which results in undershoot after a C-R differentiator. A pole-zero cancellation (PZC) circuit is often applied to eliminate this undershoot in radiation detectors. However, it is difficult to use the zero created by a PZC circuit to cancel the pole in the CSA output signal accurately, because of the inherent tolerances of electronic components and environmental factors. A novel recursive model for the PZC circuit is presented in this paper, based on Kirchhoff's Current Law (KCL). The model is established by a numerical differentiation algorithm relating the input and the output signal. Simulation experiments on a negative exponential signal are carried out using a Visual Basic for Applications (VBA) program, and a real X-ray signal is also tested. Simulation results show that the recursive model can reduce the time constant of the input signal and eliminate undershoot. (authors)
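The underlying recursion can be sketched as the textbook digital pole-zero cancellation filter: a zero placed at the input pole replaces the long decay constant with a shorter one. This is an illustrative sketch of the standard recursion, not the authors' VBA implementation; the parameter values are arbitrary.

```python
import math

def pzc(x, tau_in, tau_out, dt):
    """Recursive pole-zero cancellation: replace the input decay
    constant tau_in with a shorter tau_out (sample period dt)."""
    d1 = math.exp(-dt / tau_in)   # pole of the input exponential (cancelled by the zero)
    d2 = math.exp(-dt / tau_out)  # pole of the output exponential
    y, y_prev, x_prev = [], 0.0, 0.0
    for xn in x:
        yn = xn - d1 * x_prev + d2 * y_prev  # one-step recursion
        y.append(yn)
        x_prev, y_prev = xn, yn
    return y

# a pure exponential with tau_in comes out as a pure exponential with tau_out
x = [math.exp(-n * 0.1 / 50.0) for n in range(200)]
y = pzc(x, tau_in=50.0, tau_out=5.0, dt=0.1)
```

In practice the cancellation degrades exactly when `d1` does not match the true pole of the detector signal, which is the mismatch problem the recursive model addresses.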

  19. The Effect of Dispersion on the Data Rate of Optical Communication Using Return-to-Zero (RZ) and Non-Return-to-Zero (NRZ) Coding

    Directory of Open Access Journals (Sweden)

    Anggun Fitrian Isnawati

    2009-11-01

    Full Text Available Optical fiber has characteristics that shape an optical transmission system. One of these optical characteristics is pulse broadening, known as dispersion. Dispersion is a condition in which the pulse at the output side is wider than the pulse at the input side, meaning that pulse broadening has occurred. In a communication system this manifests as inter-symbol interference (ISI), and the effect of inter-symbol interference is to increase the bit error rate (BER). In an optical communication system, dispersion is the main influence on the data rate that the fiber can support. Bandwidth, information capacity, transmission distance, wavelength and fiber type are also affected by dispersion.
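The dispersion limit on the data rate can be illustrated with the common rule of thumb B·Δτ ≤ 0.25, where Δτ = D·L·Δλ is the chromatic pulse broadening. The formula is a standard textbook estimate, not taken from this paper, and the parameter values below are illustrative.

```python
def dispersion_limited_bitrate(D, L, dlambda):
    """Rough dispersion-limited bit rate for a fiber link.
    D: chromatic dispersion [ps/(nm*km)], L: link length [km],
    dlambda: source spectral width [nm].
    Uses the rule of thumb B * delta_tau <= 0.25."""
    delta_tau = D * L * dlambda * 1e-12  # pulse broadening [s]
    return 0.25 / delta_tau             # maximum bit rate [bit/s]

# e.g. standard single-mode fiber at 1550 nm over 50 km, 0.1 nm source width
B = dispersion_limited_bitrate(17.0, 50.0, 0.1)  # roughly 3 Gbit/s
```

Halving the spectral width or the link length doubles the supportable bit rate, which is why dispersion, rather than attenuation, often sets the rate limit on long spans.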

  20. Data communications

    International Nuclear Information System (INIS)

    Ann, Byeong Ho; Baek, Jeong Hun

    1998-01-01

    This book covers the fundamentals of data communications (overview of data communications, data transmission, data communications systems, data transmission technology, data conversion, data link control, error control in data transmission, and data communications network exchange) in the first part; computer communications network architecture (data communications architecture, the OSI model, the lower and upper layers of the OSI model, and distributed environments) in the second part; data networking (LAN, FDDI, 100Base-T, DQDB and Frame Relay) in the third part; public networks (PSDN, N-ISDN, B-ISDN) in the fourth part; and internet and PC communications (emulator programs, binary files, BBS, e-mail services and on-line user services) in the last part.

  1. The zero-multipole summation method for estimating electrostatic interactions in molecular dynamics: Analysis of the accuracy and application to liquid systems

    Science.gov (United States)

    Fukuda, Ikuo; Kamiya, Narutoshi; Nakamura, Haruki

    2014-05-01

    In the preceding paper [I. Fukuda, J. Chem. Phys. 139, 174107 (2013)], the zero-multipole (ZM) summation method was proposed for efficiently evaluating the electrostatic Coulombic interactions of a classical point charge system. The summation takes a simple pairwise form, but prevents the electrically non-neutral multipole states that may artificially be generated by a simple cutoff truncation, which often causes large energetic noise and significant artifacts. The purpose of this paper is to judge the ability of the ZM method by investigating its accuracy, parameter dependencies, and stability in applications to liquid systems. To do this, first, the energy-functional error was divided into three terms and each term was analyzed by a theoretical error-bound estimation. This estimation gave us a clear basis for the discussion of the numerical investigations. It also gave a new viewpoint on the relation between the excess-energy error and the damping effect of the damping parameter. Second, with the aid of these analyses, the ZM method was evaluated based on molecular dynamics (MD) simulations of two fundamental liquid systems, a molten sodium chloride ion system and a pure water system. In the ion system, the energy accuracy, compared with the Ewald summation, improved as the multipole moment l was raised, up to l ≲ 3 on average. This accuracy improvement with increasing l is due to the enhancement of the excess-energy accuracy. However, the improvement carries over to the total accuracy only if the theoretical moment l is smaller than or equal to a system-intrinsic moment L. The simulation results thus indicate L ~ 3 in this system, and we observed reduced accuracy for l = 4. We demonstrated the origins of the parameter dependencies appearing in the crossing behavior and the oscillations of the energy-error curves. As the moment l was raised, smaller values of the damping parameter provided more accurate results and smoother

  2. Adaptive Window Zero-Crossing-Based Instantaneous Frequency Estimation

    Directory of Open Access Journals (Sweden)

    Sekhar S Chandra

    2004-01-01

    Full Text Available We address the problem of estimating the instantaneous frequency (IF) of a real-valued, constant-amplitude, time-varying sinusoid. Estimation of a polynomial IF is formulated using the zero-crossings of the signal. We propose an algorithm to estimate a nonpolynomial IF by local approximation with a low-order polynomial over a short segment of the signal. This involves the choice of a window length to minimize the mean square error (MSE). The optimal window length found by directly minimizing the MSE is a function of the higher-order derivatives of the IF, which are not available a priori. However, an optimum solution is formulated using an adaptive window technique based on the concept of intersection of confidence intervals. The adaptive algorithm enables minimum-MSE IF (MMSE-IF) estimation without requiring a priori information about the IF. Simulation results show that the adaptive-window zero-crossing-based IF estimation method is superior to fixed-window methods and is also better than adaptive-spectrogram and adaptive Wigner-Ville distribution (WVD)-based IF estimators at different signal-to-noise ratios (SNR).
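The core idea, frequency estimation from zero crossings, can be sketched for the simplest case of a constant-frequency sinusoid. This fixed-window sketch is only the starting point; the paper's adaptive-window scheme refines it to track a time-varying IF.

```python
import math

def zero_crossing_freq(x, fs):
    """Estimate the frequency of a constant-frequency sinusoid from
    its zero crossings: a sinusoid crosses zero twice per period, so
    f ~= (number of crossings) / (2 * observation time)."""
    crossings = sum(1 for a, b in zip(x, x[1:]) if a * b < 0)
    duration = (len(x) - 1) / fs  # observation time in seconds
    return crossings / (2.0 * duration)

fs = 1000.0  # sampling rate [Hz]
x = [math.sin(2 * math.pi * 50.0 * n / fs) for n in range(1000)]
f_hat = zero_crossing_freq(x, fs)  # close to 50 Hz
```

For a time-varying IF, the same count is taken over a short sliding window; choosing that window length to trade bias against variance is exactly the MSE-minimization problem the abstract describes.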

  3. Signed reward prediction errors drive declarative learning.

    Directory of Open Access Journals (Sweden)

    Esther De Loof

    Full Text Available Reward prediction errors (RPEs) are thought to drive learning. This has been established in procedural learning (e.g., classical and operant conditioning). However, empirical evidence on whether RPEs drive declarative learning, a quintessentially human form of learning, remains surprisingly absent. We therefore coupled RPEs to the acquisition of Dutch-Swahili word pairs in a declarative learning paradigm. Signed RPEs (SRPEs; "better-than-expected" signals) during declarative learning improved recognition in a follow-up test, with increasingly positive RPEs leading to better recognition. In addition, classic declarative memory mechanisms such as time-on-task failed to explain recognition performance. The beneficial effect of SRPEs on recognition was subsequently affirmed in a replication study with visual stimuli.

  4. Signed reward prediction errors drive declarative learning.

    Science.gov (United States)

    De Loof, Esther; Ergo, Kate; Naert, Lien; Janssens, Clio; Talsma, Durk; Van Opstal, Filip; Verguts, Tom

    2018-01-01

    Reward prediction errors (RPEs) are thought to drive learning. This has been established in procedural learning (e.g., classical and operant conditioning). However, empirical evidence on whether RPEs drive declarative learning, a quintessentially human form of learning, remains surprisingly absent. We therefore coupled RPEs to the acquisition of Dutch-Swahili word pairs in a declarative learning paradigm. Signed RPEs (SRPEs; "better-than-expected" signals) during declarative learning improved recognition in a follow-up test, with increasingly positive RPEs leading to better recognition. In addition, classic declarative memory mechanisms such as time-on-task failed to explain recognition performance. The beneficial effect of SRPEs on recognition was subsequently affirmed in a replication study with visual stimuli.

  5. I-centric Communications

    CERN Document Server

    Arbanowski, S; Steglich, S; Popescu-Zeletin, R

    2001-01-01

    During the last years, a variety of concepts for service integration and corresponding systems have gained momentum. On the one hand, they aim for the interworking and integration of classical telecommunications and data communications services. On the other hand, they are focusing on universal service access from a variety of end user systems. Looking at humans' communication behavior and communication space, it is obvious that human beings interact frequently in a set of contexts in their environment (communication space). Following this view, we want to build communication systems on the analysis of the individual communication spaces. The results are communication systems adapted to the specific demands of each individual. The authors introduce I-centric Communication Systems, an approach to design communication systems which adapt to the individual communication space and individual environment and situation. In this context "I" means I, or individual, "Centric" means adaptable to I requirements and a ce...

  6. Error rates in forensic DNA analysis: Definition, numbers, impact and communication

    NARCIS (Netherlands)

    Kloosterman, A.; Sjerps, M.; Quak, A.

    2014-01-01

    Forensic DNA casework is currently regarded as one of the most important types of forensic evidence, and important decisions in intelligence and justice are based on it. However, errors occasionally occur and may have very serious consequences. In other domains, error rates have been defined and

  7. Crisis Communication Online

    DEFF Research Database (Denmark)

    Utz, Sonja; Schultz, Friederike; Glocka, Sandra

    2013-01-01

    Social media play a fundamental role in today's societies for the negotiation and dynamics of crises. However, classical crisis communication theories neglect the role of the medium and focus mainly on the interplay between crisis type and crisis communication strategy. Building on the recently developed "networked crisis communication model", we contrast effects of medium (Facebook vs. Twitter vs. online newspaper) and crisis type (intentional vs. victim) in an online experiment. Using the Fukushima Daiichi nuclear disaster as the crisis scenario, we show that medium effects are stronger than the effects of crisis type. Crisis communication via social media resulted in a higher reputation and fewer secondary crisis reactions, such as boycotting the company, than crisis communication in the newspaper. However, secondary crisis communication, e.g. talking about the crisis communication, was higher...

  8. Neural network versus classical time series forecasting models

    Science.gov (United States)

    Nor, Maria Elena; Safuan, Hamizah Mohd; Shab, Noorzehan Fazahiyah Md; Asrul, Mohd; Abdullah, Affendi; Mohamad, Nurul Asmaa Izzati; Lee, Muhammad Hisyam

    2017-05-01

    Artificial neural networks (ANN) have an advantage in time series forecasting, as they have the potential to solve complex forecasting problems. This is because an ANN is a data-driven approach that can be trained to map past values of a time series. In this study, the forecast performance of a neural network and a classical time series forecasting method, namely the seasonal autoregressive integrated moving average (SARIMA) model, were compared using gold price data. Moreover, the effect of different data preprocessing on the forecast performance of the neural network was examined. Forecast accuracy was evaluated using the mean absolute deviation, root mean square error and mean absolute percentage error. It was found that the ANN produced the most accurate forecast when the Box-Cox transformation was used as data preprocessing.
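The three accuracy measures named in the abstract are standard and easy to state. This is an illustrative implementation with made-up numbers, not the study's data or code.

```python
import math

def forecast_errors(actual, forecast):
    """Mean absolute deviation (MAD), root mean square error (RMSE)
    and mean absolute percentage error (MAPE, in %)."""
    e = [a - f for a, f in zip(actual, forecast)]
    n = len(e)
    mad = sum(abs(x) for x in e) / n
    rmse = math.sqrt(sum(x * x for x in e) / n)
    mape = 100.0 * sum(abs(x) / abs(a) for x, a in zip(e, actual)) / n
    return mad, rmse, mape

# toy example: three actual values vs. three forecasts
mad, rmse, mape = forecast_errors([100.0, 110.0, 120.0], [98.0, 113.0, 119.0])
```

RMSE penalizes large errors more heavily than MAD, while MAPE is scale-free, which is why studies typically report all three when comparing models such as ANN and SARIMA.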

  9. Probability of undetected error after decoding for a concatenated coding scheme

    Science.gov (United States)

    Costello, D. J., Jr.; Lin, S.

    1984-01-01

    A concatenated coding scheme for error control in data communications is analyzed. In this scheme, the inner code is used for both error correction and detection, while the outer code is used only for error detection. A retransmission is requested if the outer code detects the presence of errors after the inner code decoding. The probability of undetected error is derived and bounded. A particular example, proposed for the NASA telecommand system, is analyzed.
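For a linear code used purely for detection on a binary symmetric channel, the undetected-error probability, the quantity the paper derives and bounds for the concatenated scheme, follows from the code's weight distribution. The (7,4) Hamming code below is only an illustration, not the code proposed for the NASA system.

```python
def p_undetected(weights, p, n):
    """Probability of undetected error on a BSC with crossover
    probability p: an error pattern goes undetected exactly when it
    equals a nonzero codeword, so sum A_w * p^w * (1-p)^(n-w) over
    the nonzero-codeword weight distribution `weights` (weight -> count)."""
    return sum(a * p**w * (1 - p)**(n - w) for w, a in weights.items())

# (7,4) Hamming code: nonzero weight enumerator A_3 = 7, A_4 = 7, A_7 = 1
hamming = {3: 7, 4: 7, 7: 1}
pud = p_undetected(hamming, p=0.01, n=7)  # of order 1e-5 or below
```

In the concatenated scheme, only the outer code's undetected errors matter for reliability, since everything the outer code catches is retransmitted.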

  10. Maximum run-up behavior of tsunamis under non-zero initial velocity condition

    Directory of Open Access Journals (Sweden)

    Baran AYDIN

    2018-03-01

    Full Text Available The tsunami run-up problem is solved non-linearly under the most general initial conditions, that is, for realistic initial waveforms such as N-waves, as well as standard initial waveforms such as solitary waves, in the presence of initial velocity. An initial-boundary value problem governed by the non-linear shallow-water wave equations is solved analytically utilizing the classical separation-of-variables technique, which proved to be not only a fast but also an accurate analytical approach for this type of problem. The results provide important qualitative information on maximum tsunami run-up. We observed that, although the calculated maximum run-ups increase significantly, going as high as double that of the zero-velocity case, initial waves having non-zero fluid velocity exhibit the same run-up behavior as waves without initial velocity, for all wave types considered in this study.

  11. Learning, Realizability and Games in Classical Arithmetic

    Science.gov (United States)

    Aschieri, Federico

    2010-12-01

    In this dissertation we provide mathematical evidence that the concept of learning can be used to give a new and intuitive computational semantics of classical proofs in various fragments of predicative arithmetic. First, we extend Kreisel's modified realizability to a classical fragment of first-order arithmetic, Heyting Arithmetic plus EM1 (the excluded middle axiom restricted to Sigma^0_1 formulas). We introduce a new realizability semantics we call "Interactive Learning-Based Realizability". Our realizers are self-correcting programs, which learn from their errors and evolve through time. Secondly, we extend the class of learning-based realizers to a classical version PCFclass of PCF and then compare the resulting notion of realizability with Coquand's game semantics and prove a full soundness and completeness result. In particular, we show there is a one-to-one correspondence between realizers and recursive winning strategies in the 1-backtracking version of Tarski games. Third, we provide a complete and fully detailed constructive analysis of learning as it arises in learning-based realizability for HA+EM1, Avigad's update procedures and the epsilon substitution method for Peano Arithmetic (PA). We present new constructive techniques to bound the length of learning processes, and we apply them to reprove, by means of our theory, the classic result of Gödel that the provably total functions of PA can be represented in Gödel's system T. Last, we give an axiomatization of the kind of learning that is needed to computationally interpret predicative classical second-order arithmetic. Our work is an extension of Avigad's and generalizes the concept of update procedure to the transfinite case. Transfinite update procedures have to learn values of transfinite sequences of non-computable functions in order to extract witnesses from classical proofs.

  12. Two new bivariate zero-inflated generalized Poisson distributions with a flexible correlation structure

    Directory of Open Access Journals (Sweden)

    Chi Zhang

    2015-05-01

    Full Text Available To model correlated bivariate count data with extra zero observations, this paper proposes two new bivariate zero-inflated generalized Poisson (ZIGP) distributions by incorporating a multiplicative factor (or dependency parameter) λ, named Type I and Type II bivariate ZIGP distributions, respectively. The proposed distributions possess a flexible correlation structure and can be used to fit either positively or negatively correlated and either over- or under-dispersed count data, in contrast to existing models that can only fit positively correlated count data with over-dispersion. The two marginal distributions of the Type I bivariate ZIGP share a common zero-inflation parameter, while the two marginal distributions of the Type II bivariate ZIGP have their own zero-inflation parameters, resulting in a much wider range of applications. The important distributional properties are explored, and some useful statistical inference methods, including maximum likelihood estimation of parameters, standard error estimation, bootstrap confidence intervals and related hypothesis tests, are developed for the two distributions. A real data set is thoroughly analyzed using the proposed distributions and statistical methods. Several simulation studies are conducted to evaluate the performance of the proposed methods.
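The zero-inflation mechanism itself is easy to state in the univariate case: a point mass at zero is mixed with a count kernel. A minimal sketch using a plain Poisson kernel follows; the paper's models replace this kernel with a generalized Poisson and extend the construction to the bivariate case.

```python
import math

def zip_pmf(k, lam, pi):
    """Zero-inflated Poisson P(Y = k): with probability pi the count
    is a structural zero, otherwise it is drawn from Poisson(lam)."""
    poisson = math.exp(-lam) * lam**k / math.factorial(k)
    return pi * (k == 0) + (1 - pi) * poisson

# excess zeros: P(0) is far larger than the plain Poisson(3) value exp(-3)
p0 = zip_pmf(0, lam=3.0, pi=0.5)  # = 0.5 + 0.5 * exp(-3)
```

Maximum likelihood estimation for such models simply sums `log(zip_pmf(k_i, lam, pi))` over the observed counts; the hurdle variants mentioned in the abstract instead model zeros and strictly positive counts as two separate processes.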

  13. Restrictions on Possible Forms of Classical Matter Fields Carrying no Energy

    International Nuclear Information System (INIS)

    Sokolowski, L.M.

    2004-01-01

    It is postulated in general relativity that the matter energy-momentum tensor vanishes if and only if all the matter fields vanish. In classical Lagrangian field theory the energy and momentum density are described by the variational (symmetric) energy-momentum tensor (here named the stress tensor), and a priori it might occur that for some systems the tensor is identically zero for all field configurations, whereas the evolution of the system is subject to deterministic Lagrange equations of motion. Such a system would not generate its own gravitational field. To check whether these systems can exist in the framework of classical field theory, we find a relationship between the stress tensor and the Euler operator (i.e. the Lagrange field equations). We prove that if a system of interacting scalar fields (the number of fields cannot exceed the spacetime dimension d) or a single vector field (in spacetimes with d even) has a stress tensor whose divergence is identically zero (i.e. "on and off shell"), then the Lagrange equations of motion hold identically too. These systems then have no propagation equations at all and should be regarded as unphysical. Thus nontrivial field equations require the stress tensor to be nontrivial too. This relationship between the vanishing (of the divergence) of the stress tensor and of the Euler operator breaks down if the number of fields is greater than d. We show in concrete examples that a system of n > d interacting scalars or two interacting vector fields can have a stress tensor identically equal to zero while their propagation equations are nontrivial. This means that non-self-gravitating (and yet detectable) field systems are in principle admissible. Their equations of motion are, however, in some sense degenerate. We also show that for a system of an arbitrary number of interacting scalar fields, or for a single vector field (in some specific spacetimes in the latter case), if the stress tensor is not identically zero, then it cannot

  14. On global error estimation and control for initial value problems

    NARCIS (Netherlands)

    J. Lang (Jens); J.G. Verwer (Jan)

    2007-01-01

    This paper addresses global error estimation and control for initial value problems for ordinary differential equations. The focus lies on a comparison between a novel approach based on the adjoint method combined with a small-sample statistical initialization and the classical approach

  15. On global error estimation and control for initial value problems

    NARCIS (Netherlands)

    Lang, J.; Verwer, J.G.

    2007-01-01

    Abstract. This paper addresses global error estimation and control for initial value problems for ordinary differential equations. The focus lies on a comparison between a novel approach based on the adjoint method combined with a small sample statistical initialization and the classical approach

  16. Spectrum of diagnostic errors in radiology.

    Science.gov (United States)

    Pinto, Antonio; Brunese, Luca

    2010-10-28

    Diagnostic errors are important in all branches of medicine because they are an indication of poor patient care. Since the early 1970s, physicians have been subjected to an increasing number of medical malpractice claims. Radiology is one of the specialties most liable to claims of medical negligence. Most often, a plaintiff's complaint against a radiologist will focus on a failure to diagnose. The etiology of radiological error is multi-factorial. Errors fall into recurrent patterns. Errors arise from poor technique, failures of perception, lack of knowledge and misjudgments. The work of diagnostic radiology consists of the complete detection of all abnormalities in an imaging examination and their accurate diagnosis. Every radiologist should understand the sources of error in diagnostic radiology as well as the elements of negligence that form the basis of malpractice litigation. Error traps need to be uncovered and highlighted, in order to prevent repetition of the same mistakes. This article focuses on the spectrum of diagnostic errors in radiology, including a classification of the errors, and stresses the malpractice issues in mammography, chest radiology and obstetric sonography. Missed fractures in emergency radiology and communication issues between radiologists and physicians are also discussed.

  17. Efficiency in man-machine communication

    NARCIS (Netherlands)

    Haakma, R.; Engel, F.L.

    1990-01-01

    Expressed in terms of speed and accuracy, intention transfer in goal-oriented inter-human communication can be very efficient. One of the mechanisms that make for efficient communication is early detection and repair of communication errors. Another important efficiency mechanism prevents repeated

  18. Extending Lifetime of Wireless Sensor Networks using Forward Error Correction

    DEFF Research Database (Denmark)

    Donapudi, S U; Obel, C O; Madsen, Jan

    2006-01-01

    Communication between nodes in wireless sensor networks (WSN) is susceptible to transmission errors caused by low signal strength or interference. These errors manifest themselves as lost or corrupt packets. This often leads to retransmission, which in turn results in increased power consumption...

  19. Error Control in Distributed Node Self-Localization

    Directory of Open Access Journals (Sweden)

    Ying Zhang

    2008-03-01

    Full Text Available Location information of nodes in an ad hoc sensor network is essential to many tasks such as routing, cooperative sensing, and service delivery. Distributed node self-localization is lightweight and requires little communication overhead, but often suffers from the adverse effects of error propagation. Unlike other localization papers which focus on designing elaborate localization algorithms, this paper takes a different perspective, focusing on the error propagation problem, addressing questions such as where localization error comes from and how it propagates from node to node. To prevent error from propagating and accumulating, we develop an error-control mechanism based on characterization of node uncertainties and discrimination between neighboring nodes. The error-control mechanism uses only local knowledge and is fully decentralized. Simulation results have shown that the active selection strategy significantly mitigates the effect of error propagation for both range and directional sensors. It greatly improves localization accuracy and robustness.

  20. The Trouble with Zero

    Science.gov (United States)

    Lewis, Robert

    2015-01-01

    The history of the number zero is an interesting one. In early times, zero was not used as a number at all, but instead was used as a placeholder to indicate the position of hundreds and tens. This article briefly discusses the history of zero and challenges common thinking about division by zero.

  1. Zero Point of Historical Time

    Directory of Open Access Journals (Sweden)

    R.S. Khakimov

    2014-02-01

Full Text Available Historical studies are based on the assumption that there is a reference starting point of space-time, the Zero Point of the coordinate system. Due to the bifurcation at the Zero Point, the course of social processes changes sharply and probabilistic causality replaces the deterministic one. For this reason, changes occur in the structure of social relations and the form of statehood, as well as in the course of ethnic processes. In this way a new discourse of national behavior emerges. With regard to the history of the Tatars and Tatarstan, such bifurcation points occurred in the periods of the formation (1) of the Turkic Khaganate, which existed from the 6th century onward and became a qualitatively new state system that reformatted old elements in a new matrix, introducing a new discourse of behavior; (2) of the Volga-Kama Bulgaria, where the rivers (Kama, Volga, Vyatka) became the most important trade routes determining the singularity of this state; here the nomadic culture was connected with the settled one, and Islam became the official religion in 922; (3) and of the Golden Horde, a powerful state with a remarkable system of communication, migration of huge human resources over thousands of kilometers, and extensive trade, which caused severe "mutations" in ethnic terms and a huge mixing of ethnic groups. Given the dwelling space of the Tatar population and its evolution within Russia, it can be argued that the Zero Point of Tatar history, which has conveyed its cultural invariants until today, begins in the Golden Horde: neither in the Turkic Khaganate nor in the Bulgar state, but namely in the Golden Horde. Despite radical changes, the Russian Empire failed to transform the Tatars into Russians. Therefore, contemporary Tatars have preserved the Golden Horde tradition as a cultural invariant.

  2. Classical mechanics and electromagnetism in accelerator physics

    CERN Document Server

    Stupakov, Gennady

    2018-01-01

    This self-contained textbook with exercises discusses a broad range of selected topics from classical mechanics and electromagnetic theory that inform key issues related to modern accelerators. Part I presents fundamentals of the Lagrangian and Hamiltonian formalism for mechanical systems, canonical transformations, action-angle variables, and then linear and nonlinear oscillators. The Hamiltonian for a circular accelerator is used to evaluate the equations of motion, the action, and betatron oscillations in an accelerator. From this base, we explore the impact of field errors and nonlinear resonances. This part ends with the concept of the distribution function and an introduction to the kinetic equation to describe large ensembles of charged particles and to supplement the previous single-particle analysis of beam dynamics. Part II focuses on classical electromagnetism and begins with an analysis of the electromagnetic field from relativistic beams, both in vacuum and in a resistive pipe. Plane electromagne...

  3. Error-correction coding and decoding bounds, codes, decoders, analysis and applications

    CERN Document Server

    Tomlinson, Martin; Ambroze, Marcel A; Ahmed, Mohammed; Jibril, Mubarak

    2017-01-01

    This book discusses both the theory and practical applications of self-correcting data, commonly known as error-correcting codes. The applications included demonstrate the importance of these codes in a wide range of everyday technologies, from smartphones to secure communications and transactions. Written in a readily understandable style, the book presents the authors’ twenty-five years of research organized into five parts: Part I is concerned with the theoretical performance attainable by using error correcting codes to achieve communications efficiency in digital communications systems. Part II explores the construction of error-correcting codes and explains the different families of codes and how they are designed. Techniques are described for producing the very best codes. Part III addresses the analysis of low-density parity-check (LDPC) codes, primarily to calculate their stopping sets and low-weight codeword spectrum which determines the performance of these codes. Part IV deals with decoders desi...

  4. Persistence of plasmids, cholera toxin genes, and prophage DNA in classical Vibrio cholerae O1.

    Science.gov (United States)

    Cook, W L; Wachsmuth, K; Johnson, S R; Birkness, K A; Samadi, A R

    1984-07-01

    Plasmid profiles, the location of cholera toxin subunit A genes, and the presence of the defective VcA1 prophage genome in classical Vibrio cholerae isolated from patients in Bangladesh in 1982 were compared with those in older classical strains isolated during the sixth pandemic and with those in selected eltor and nontoxigenic O1 isolates. Classical strains typically had two plasmids (21 and 3 megadaltons), eltor strains typically had no plasmids, and nontoxigenic O1 strains had zero to three plasmids. The old and new isolates of classical V. cholerae had two HindIII chromosomal digest fragments containing cholera toxin subunit A genes, whereas the eltor strains from Eastern countries had one fragment. The eltor strains from areas surrounding the Gulf of Mexico also had two subunit A gene fragments, which were smaller and easily distinguished from the classical pattern. All classical strains had 8 to 10 HindIII fragments containing the defective VcA1 prophage genome; none of the Eastern eltor strains had these genes, and the Gulf Coast eltor strains contained a different array of weakly hybridizing genes. These data suggest that the recent isolates of classical cholera in Bangladesh are closely related to the bacterial strain(s) which caused classical cholera during the sixth pandemic. These data do not support hypotheses that either the eltor or the nontoxigenic O1 strains are precursors of the new classical strains.

  5. High-dimensional structured light coding/decoding for free-space optical communications free of obstructions.

    Science.gov (United States)

    Du, Jing; Wang, Jian

    2015-11-01

Bessel beams carrying orbital angular momentum (OAM) with helical phase fronts exp(ilφ) (l = 0, ±1, ±2, …), where φ is the azimuthal angle and l corresponds to the topological number, are orthogonal with each other. This feature of Bessel beams provides a new dimension to code/decode data information on the OAM state of light, and the theoretical infinity of the topological number enables possible high-dimensional structured light coding/decoding for free-space optical communications. Moreover, Bessel beams are nondiffracting beams having the ability to recover by themselves in the face of obstructions, which is important for free-space optical communications relying on line-of-sight operation. By utilizing the OAM and nondiffracting characteristics of Bessel beams, we experimentally demonstrate 12 m distance obstruction-free optical m-ary coding/decoding using visible Bessel beams in a free-space optical communication system. We also study the bit error rate (BER) performance of hexadecimal and 32-ary coding/decoding based on Bessel beams with different topological numbers. After receiving 500 symbols at the receiver side, a zero BER of hexadecimal coding/decoding is observed when the obstruction is placed along the propagation path of light.

  6. Is Attribute-Based Zero-Shot Learning an Ill-Posed Strategy?

    KAUST Repository

    Alabdulmohsin, Ibrahim; Cisse, Moustapha; Zhang, Xiangliang

    2016-01-01

One transfer learning approach that has gained a wide popularity lately is attribute-based zero-shot learning. Its goal is to learn novel classes that were never seen during the training stage. The classical route towards realizing this goal is to incorporate a prior knowledge, in the form of a semantic embedding of classes, and to learn to predict classes indirectly via their semantic attributes. Despite the amount of research devoted to this subject lately, no known algorithm has yet reported a predictive accuracy that could exceed the accuracy of supervised learning with very few training examples. For instance, the direct attribute prediction (DAP) algorithm, which forms a standard baseline for the task, is known to be as accurate as supervised learning when as few as two examples from each hidden class are used for training on some popular benchmark datasets! In this paper, we argue that this lack of significant results in the literature is not a coincidence; attribute-based zero-shot learning is fundamentally an ill-posed strategy. The key insight is the observation that the mechanical task of predicting an attribute is, in fact, quite different from the epistemological task of learning the “correct meaning” of the attribute itself. This renders attribute-based zero-shot learning fundamentally ill-posed. In more precise mathematical terms, attribute-based zero-shot learning is equivalent to the mirage goal of learning with respect to one distribution of instances, with the hope of being able to predict with respect to any arbitrary distribution. We demonstrate this overlooked fact on some synthetic and real datasets. The data and software related to this paper are available at https://mine.kaust.edu.sa/Pages/zero-shot-learning.aspx. © Springer International Publishing AG 2016.

  8. Rogue waves, rational solutions, the patterns of their zeros and integral relations

    International Nuclear Information System (INIS)

    Ankiewicz, Adrian; Akhmediev, Nail; Clarkson, Peter A

    2010-01-01

    The focusing nonlinear Schroedinger equation, which describes generic nonlinear phenomena, including waves in the deep ocean and light pulses in optical fibres, supports a whole hierarchy of recently discovered rational solutions. We present recurrence relations for the hierarchy, the pattern of zeros for each solution and a set of integral relations which characterizes them. (fast track communication)

  9. Error estimates in horocycle averages asymptotics: challenges from string theory

    NARCIS (Netherlands)

    Cardella, M.A.

    2010-01-01

    For modular functions of rapid decay, a classical result connects the error estimate in their long horocycle average asymptotic to the Riemann hypothesis. We study similar asymptotics, for modular functions with not that mild growing conditions, such as of polynomial growth and of exponential growth

  10. Mixed quantum-classical equilibrium in global flux surface hopping

    International Nuclear Information System (INIS)

    Sifain, Andrew E.; Wang, Linjun; Prezhdo, Oleg V.

    2015-01-01

    Global flux surface hopping (GFSH) generalizes fewest switches surface hopping (FSSH)—one of the most popular approaches to nonadiabatic molecular dynamics—for processes exhibiting superexchange. We show that GFSH satisfies detailed balance and leads to thermodynamic equilibrium with accuracy similar to FSSH. This feature is particularly important when studying electron-vibrational relaxation and phonon-assisted transport. By studying the dynamics in a three-level quantum system coupled to a classical atom in contact with a classical bath, we demonstrate that both FSSH and GFSH achieve the Boltzmann state populations. Thermal equilibrium is attained significantly faster with GFSH, since it accurately represents the superexchange process. GFSH converges closer to the Boltzmann averages than FSSH and exhibits significantly smaller statistical errors
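The Boltzmann state populations mentioned above, against which the long-time FSSH/GFSH populations are compared, can be computed directly. The energies below are illustrative, not taken from the paper.

```python
import numpy as np

# Reference Boltzmann populations for a three-level system: in a detailed-
# balance check, the long-time surface-hopping state populations are compared
# against p_i = exp(-E_i/kT) / Z. Energies (in units of kT) are illustrative.
energies = np.array([0.0, 1.0, 2.5])   # E_i / kT for the three states
weights = np.exp(-energies)
populations = weights / weights.sum()  # normalize by the partition function Z

assert abs(populations.sum() - 1.0) < 1e-12
assert populations[0] > populations[1] > populations[2]  # lower energy, larger weight
```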

  11. On the Impact of Precoding Errors on Ultra-Reliable Communications

    DEFF Research Database (Denmark)

    Gerardino, Guillermo Andrés Pocovi; Pedersen, Klaus I.; Alvarez, Beatriz Soret

    2016-01-01

    Motivated by the stringent reliability required by some of the future cellular use cases, we study the impact of precoding errors on the SINR outage performance for various spatial diversity techniques. The performance evaluation is carried out via system-level simulations, including the effects...... of multi-user and multicell interference, and following the 3GPP-defined simulation assumptions for a traditional macro case. It is shown that, except for feedback error probabilities larger than 1%, closed-loop microscopic diversity schemes are generally preferred over open-loop techniques as a way...

  12. Transparency When Things Go Wrong: Physician Attitudes About Reporting Medical Errors to Patients, Peers, and Institutions.

    Science.gov (United States)

    Bell, Sigall K; White, Andrew A; Yi, Jean C; Yi-Frazier, Joyce P; Gallagher, Thomas H

    2017-12-01

Transparent communication after medical error includes disclosing the mistake to the patient, discussing the event with colleagues, and reporting to the institution. Little is known about whether attitudes about these transparency practices are related. Understanding these relationships could inform educational and organizational strategies to promote transparency. We analyzed responses of 3038 US and Canadian physicians to a medical error communication survey. We used bivariate correlations, principal components analysis, and linear regression to determine whether and how physician attitudes about transparent communication with patients, peers, and the institution after error were related. Physician attitudes about disclosing errors to patients, peers, and institutions were correlated (all P values statistically significant). Factors associated with attitudes supporting transparent communication with patients and peers/institution included female sex, US (vs Canadian) doctors, academic (vs private) practice, the belief that disclosure decreased likelihood of litigation, and the belief that system changes occur after error reporting. In addition, younger physicians, surgeons, and those with previous experience disclosing a serious error were more likely to agree with disclosure to patients. In comparison, doctors who believed that disclosure would decrease patient trust were less likely to agree with error disclosure to patients. Previous disclosure education was associated with attitudes supporting greater transparency with peers/institution. Physician attitudes about discussing errors with patients, colleagues, and institutions are related. Several predictors of transparency affect all 3 practices and are potentially modifiable by educational and institutional strategies.

  13. Modifying Spearman's Attenuation Equation to Yield Partial Corrections for Measurement Error--With Application to Sample Size Calculations

    Science.gov (United States)

    Nicewander, W. Alan

    2018-01-01

    Spearman's correction for attenuation (measurement error) corrects a correlation coefficient for measurement errors in either-or-both of two variables, and follows from the assumptions of classical test theory. Spearman's equation removes all measurement error from a correlation coefficient which translates into "increasing the reliability of…
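For context, the classical Spearman disattenuation that the paper modifies divides the observed correlation by the square root of the product of the two reliabilities. A minimal sketch follows; the paper's partial-correction variant is not reproduced here.

```python
import math

# Classical Spearman correction for attenuation:
#   r_true = r_observed / sqrt(rel_x * rel_y)
# where rel_x and rel_y are the reliabilities of the two measures.
def disattenuate(r_xy: float, rel_x: float, rel_y: float) -> float:
    return r_xy / math.sqrt(rel_x * rel_y)

# Example: observed r = .42 with reliabilities .80 and .70
r_corrected = disattenuate(0.42, 0.80, 0.70)
# r_corrected ≈ 0.561 — the correction removes *all* measurement error,
# which is what a partial correction would temper.
```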

  14. Minimum Error Entropy Classification

    CERN Document Server

    Marques de Sá, Joaquim P; Santos, Jorge M F; Alexandre, Luís A

    2013-01-01

This book explains the minimum error entropy (MEE) concept applied to data classification machines. Theoretical results on the inner workings of the MEE concept, in its application to solving a variety of classification problems, are presented in the wider realm of risk functionals. Researchers and practitioners also find in the book a detailed presentation of practical data classifiers using MEE. These include multi‐layer perceptrons, recurrent neural networks, complex-valued neural networks, modular neural networks, and decision trees. A clustering algorithm using a MEE‐like concept is also presented. Examples, tests, evaluation experiments and comparisons with similar machines using classic approaches complement the descriptions.

  15. The role of communication in paediatric drug safety

    OpenAIRE

    Stebbing, Claire; Wong, Ian C K; Kaushal, Rainu; Jaffe, Adam

    2007-01-01

    Medication errors cause substantial harm to patients, and considerable cost to healthcare systems. Evidence suggests that communication plays a crucial role in the generation, management and prevention of such incidents. This review identifies how paediatric medication errors can be managed, and in particular focuses on the pathway of steps that can operationalise the current research findings. Furthermore, the current data suggesting how communication can help to prevent errors occurring in ...

  16. Robust general N user authentication scheme in a centralized quantum communication network via generalized GHZ states

    Science.gov (United States)

    Farouk, Ahmed; Batle, J.; Elhoseny, M.; Naseri, Mosayeb; Lone, Muzaffar; Fedorov, Alex; Alkhambashi, Majid; Ahmed, Syed Hassan; Abdel-Aty, M.

    2018-04-01

    Quantum communication provides an enormous advantage over its classical counterpart: security of communications based on the very principles of quantum mechanics. Researchers have proposed several approaches for user identity authentication via entanglement. Unfortunately, these protocols fail because an attacker can capture some of the particles in a transmitted sequence and send what is left to the receiver through a quantum channel. Subsequently, the attacker can restore some of the confidential messages, giving rise to the possibility of information leakage. Here we present a new robust General N user authentication protocol based on N-particle Greenberger-Horne-Zeilinger (GHZ) states, which makes eavesdropping detection more effective and secure, as compared to some current authentication protocols. The security analysis of our protocol for various kinds of attacks verifies that it is unconditionally secure, and that an attacker will not obtain any information about the transmitted key. Moreover, as the number of transferred key bits N becomes larger, while the number of users for transmitting the information is increased, the probability of effectively obtaining the transmitted authentication keys is reduced to zero.
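The N-particle GHZ resource state underlying the protocol, (|0…0⟩ + |1…1⟩)/√2, can be written down directly as a state vector. This sketch only illustrates the entangled resource the scheme assumes, not the authentication steps themselves.

```python
import numpy as np

# Generalized N-particle GHZ state as a vector of dimension 2**N:
# only the all-zeros and all-ones basis states carry amplitude.
def ghz_state(n: int) -> np.ndarray:
    psi = np.zeros(2**n)
    psi[0] = psi[-1] = 1 / np.sqrt(2)  # |00...0> and |11...1> components
    return psi

psi = ghz_state(4)
assert abs(np.linalg.norm(psi) - 1.0) < 1e-12  # properly normalized
```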

  17. Robust Least-Squares Support Vector Machine With Minimization of Mean and Variance of Modeling Error.

    Science.gov (United States)

    Lu, Xinjiang; Liu, Wenbo; Zhou, Chuang; Huang, Minghui

    2017-06-13

    The least-squares support vector machine (LS-SVM) is a popular data-driven modeling method and has been successfully applied to a wide range of applications. However, it has some disadvantages, including being ineffective at handling non-Gaussian noise as well as being sensitive to outliers. In this paper, a robust LS-SVM method is proposed and is shown to have more reliable performance when modeling a nonlinear system under conditions where Gaussian or non-Gaussian noise is present. The construction of a new objective function allows for a reduction of the mean of the modeling error as well as the minimization of its variance, and it does not constrain the mean of the modeling error to zero. This differs from the traditional LS-SVM, which uses a worst-case scenario approach in order to minimize the modeling error and constrains the mean of the modeling error to zero. In doing so, the proposed method takes the modeling error distribution information into consideration and is thus less conservative and more robust in regards to random noise. A solving method is then developed in order to determine the optimal parameters for the proposed robust LS-SVM. An additional analysis indicates that the proposed LS-SVM gives a smaller weight to a large-error training sample and a larger weight to a small-error training sample, and is thus more robust than the traditional LS-SVM. The effectiveness of the proposed robust LS-SVM is demonstrated using both artificial and real life cases.
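For reference, the traditional LS-SVM that the paper takes as its baseline reduces training to a single linear system. Below is a minimal regression sketch; the function names and hyperparameters are illustrative, and the paper's robust mean-variance objective is not implemented here.

```python
import numpy as np

# Traditional LS-SVM regression: solve one (n+1)x(n+1) linear system
#   [ 0   1^T          ] [ b     ]   [ 0 ]
#   [ 1   K + I/gamma  ] [ alpha ] = [ y ]
# where K is the kernel matrix and gamma the regularization parameter.
def rbf_kernel(X1, X2, sigma=1.0):
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

def lssvm_fit(X, y, gamma=10.0, sigma=1.0):
    n = len(y)
    K = rbf_kernel(X, X, sigma)
    A = np.block([[np.zeros((1, 1)), np.ones((1, n))],
                  [np.ones((n, 1)), K + np.eye(n) / gamma]])
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]          # bias b, dual weights alpha

def lssvm_predict(X_train, b, alpha, X_new, sigma=1.0):
    return rbf_kernel(X_new, X_train, sigma) @ alpha + b
```

Note that this formulation constrains the mean of the modeling error to zero through the bias equation, which is exactly the constraint the proposed robust variant relaxes.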

  18. Zero-inflated Poisson regression models for QTL mapping applied to tick-resistance in a Gyr × Holstein F2 population

    Science.gov (United States)

    Silva, Fabyano Fonseca; Tunin, Karen P.; Rosa, Guilherme J.M.; da Silva, Marcos V.B.; Azevedo, Ana Luisa Souza; da Silva Verneque, Rui; Machado, Marco Antonio; Packer, Irineu Umberto

    2011-01-01

Nowadays, an important and interesting alternative in the control of tick-infestation in cattle is to select resistant animals, and identify the respective quantitative trait loci (QTLs) and DNA markers, for posterior use in breeding programs. The number of ticks/animal is characterized as a discrete-counting trait, which could potentially follow Poisson distribution. However, in the case of an excess of zeros, due to the occurrence of several noninfected animals, zero-inflated Poisson and generalized zero-inflated distribution (GZIP) may provide a better description of the data. Thus, the objective here was to compare through simulation, Poisson and ZIP models (simple and generalized) with classical approaches, for QTL mapping with counting phenotypes under different scenarios, and to apply these approaches to a QTL study of tick resistance in an F2 cattle (Gyr × Holstein) population. It was concluded that, when working with zero-inflated data, it is recommendable to use the generalized and simple ZIP model for analysis. On the other hand, when working with data with zeros, but not zero-inflated, the Poisson model or a data-transformation-approach, such as square-root or Box-Cox transformation, are applicable. PMID:22215960
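The zero-inflated Poisson model referred to above mixes a point mass at zero (e.g. noninfected animals) with an ordinary Poisson count. A minimal sketch of its probability mass function, with illustrative parameter values:

```python
import math

# Zero-inflated Poisson pmf: with probability pi the count is a "structural"
# zero (a noninfected animal); otherwise it follows Poisson(lam).
def zip_pmf(k: int, lam: float, pi: float) -> float:
    poisson = math.exp(-lam) * lam**k / math.factorial(k)
    return pi * (k == 0) + (1 - pi) * poisson

# Excess zeros relative to a plain Poisson with the same lambda:
p0_zip = zip_pmf(0, lam=3.0, pi=0.4)       # ≈ 0.430
p0_poisson = zip_pmf(0, lam=3.0, pi=0.0)   # ≈ 0.050
```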

  19. Zero-inflated Poisson regression models for QTL mapping applied to tick-resistance in a Gyr x Holstein F2 population

    Directory of Open Access Journals (Sweden)

    Fabyano Fonseca Silva

    2011-01-01

Full Text Available Nowadays, an important and interesting alternative in the control of tick-infestation in cattle is to select resistant animals, and identify the respective quantitative trait loci (QTLs) and DNA markers, for posterior use in breeding programs. The number of ticks/animal is characterized as a discrete-counting trait, which could potentially follow Poisson distribution. However, in the case of an excess of zeros, due to the occurrence of several noninfected animals, zero-inflated Poisson and generalized zero-inflated distribution (GZIP) may provide a better description of the data. Thus, the objective here was to compare through simulation, Poisson and ZIP models (simple and generalized) with classical approaches, for QTL mapping with counting phenotypes under different scenarios, and to apply these approaches to a QTL study of tick resistance in an F2 cattle (Gyr x Holstein) population. It was concluded that, when working with zero-inflated data, it is recommendable to use the generalized and simple ZIP model for analysis. On the other hand, when working with data with zeros, but not zero-inflated, the Poisson model or a data-transformation-approach, such as square-root or Box-Cox transformation, are applicable.

  20. Lorentz invariance from classical particle paths in quantum field theory of electric and magnetic charge

    International Nuclear Information System (INIS)

    Brandt, R.A.; Neri, F.; Zwanziger, D.

    1979-01-01

We establish the Lorentz invariance of the quantum field theory of electric and magnetic charge. This is a priori implausible because the theory is the second-quantized version of a classical field theory which is inconsistent if the minimally coupled charged fields are smooth functions. For our proof we express the generating functional for the gauge-invariant Green's functions of quantum electrodynamics, with or without magnetic charge, as a path integral over the trajectories of classical charged point particles. The electric-electric and electric-magnetic interactions contribute factors exp(JDJ) and exp(JD'K), where J and K are the electric and magnetic currents of classical point particles and D is the usual photon propagator. The propagator D' involves the Dirac string, but exp(JD'K) depends on it only through a topological integer linking the string and the classical particle trajectories. The charge quantization condition e_i g_j - g_i e_j = integer then suffices to make the gauge-invariant Green's functions string independent. By implication our formulation shows that if the Green's functions of quantum electrodynamics are expressed as usual as functional integrals over classical charged fields, the smooth field configurations have measure zero and all the support of the Feynman measure lies on the trajectories of classical point particles.

  1. Simulation: learning from mistakes while building communication and teamwork.

    Science.gov (United States)

    Kuehster, Christina R; Hall, Carla D

    2010-01-01

    Medical errors are one of the leading causes of death annually in the United States. Many of these errors are related to poor communication and/or lack of teamwork. Using simulation as a teaching modality provides a dual role in helping to reduce these errors. Thorough integration of clinical practice with teamwork and communication in a safe environment increases the likelihood of reducing the error rates in medicine. By allowing practitioners to make potential errors in a safe environment, such as simulation, these valuable lessons improve retention and will rarely be repeated.

  2. [Improvement of medical processes with Six Sigma - practicable zero-defect quality in preparation for surgery].

    Science.gov (United States)

    Sobottka, Stephan B; Töpfer, Armin; Eberlein-Gonska, Maria; Schackert, Gabriele; Albrecht, D Michael

    2010-01-01

Six Sigma is an innovative management approach for reaching practicable zero-defect quality in medical service processes. The Six Sigma principle utilizes strategies that are based on quantitative measurements and that seek to optimize processes and limit deviations or dispersion from the target process. Hence, Six Sigma aims to eliminate errors or quality problems of all kinds. A pilot project to optimize the preparation for neurosurgery showed that the Six Sigma method enhanced patient safety in medical care, while at the same time disturbances in hospital processes and failure costs could be avoided. All six defined safety-relevant quality indicators were significantly improved by changes in the workflow, using a standardized process- and patient-oriented approach. Certain defined quality standards, such as a 100% complete surgical preparation at the start of surgery and the required initial contact of the surgeon with the patient/surgical record on the eve of surgery, could be fulfilled within the range of practicable zero-defect quality. Likewise, the degree of completion of the surgical record by 4 p.m. on the eve of surgery and its quality could be improved by factors of 170 and 16, respectively, at sigma values of 4.43 and 4.38. The other two safety quality indicators, "non-communicated changes in the OR schedule" and "completeness of the OR schedule by 12:30 a.m. on the day before surgery", also showed an impressive improvement, by factors of 2.8 and 7.7, respectively, corresponding to sigma values of 3.34 and 3.51. The results of this pilot project demonstrate that the Six Sigma method is eminently suitable for improving the quality of medical processes. In our experience this methodology is suitable even for complex clinical processes with a variety of stakeholders. In particular, in processes in which patient safety plays a key role, the objective of achieving zero-defect quality is reasonable and should definitely be aspired to.

  3. Color-motion feature-binding errors are mediated by a higher-order chromatic representation.

    Science.gov (United States)

    Shevell, Steven K; Wang, Wei

    2016-03-01

Peripheral and central moving objects of the same color may be perceived to move in the same direction even though peripheral objects have a different true direction of motion [Nature 429, 262 (2004); doi:10.1038/429262a]. The perceived, illusory direction of peripheral motion is a color-motion feature-binding error. Recent work shows that such binding errors occur even without an exact color match between central and peripheral objects, and, moreover, the frequency of the binding errors in the periphery declines as the chromatic difference increases between the central and peripheral objects [J. Opt. Soc. Am. A 31, A60 (2014); doi:10.1364/JOSAA.31.000A60]. This change in the frequency of binding errors with the chromatic difference raises the general question of the chromatic representation from which the difference is determined. Here, basic properties of the chromatic representation are tested to discover whether it depends on independent chromatic differences on the l and the s cardinal axes or, alternatively, on a more specific higher-order chromatic representation. Experimental tests compared the rate of feature-binding errors when the central and peripheral colors had the identical s chromaticity (so zero difference in s) and a fixed magnitude of l difference, while varying the identical s level in center and periphery (thus always keeping the s difference at zero). A chromatic representation based on independent l and s differences would result in the same frequency of color-motion binding errors at every s level. The results are contrary to this prediction, thus showing that the chromatic representation at the level of color-motion feature binding depends on a higher-order chromatic mechanism.

  4. Error-correction coding

    Science.gov (United States)

    Hinds, Erold W. (Principal Investigator)

    1996-01-01

    This report describes the progress made towards the completion of a specific task on error-correcting coding. The proposed research consisted of investigating the use of modulation block codes as the inner code of a concatenated coding system in order to improve the overall space link communications performance. The study proposed to identify and analyze candidate codes that will complement the performance of the overall coding system which uses the interleaved RS (255,223) code as the outer code.

  5. Sensitivity of Multicarrier Two-Dimensional Spreading Schemes to Synchronization Errors

    Directory of Open Access Journals (Sweden)

    Geneviève Jourdain

    2008-06-01

    This paper presents the impact of synchronization errors on the performance of a downlink multicarrier two-dimensional spreading OFDM-CDMA system. This impact is measured by the degradation of the signal-to-interference-and-noise ratio (SINR) obtained after despreading and equalization. The contribution of this paper is twofold. First, we use some properties of random matrix and free probability theories to derive a new expression for the SINR. This expression is independent of the actual value of the spreading codes while still accounting for the orthogonality between codes. The model is validated by means of Monte Carlo simulations. Second, the model is exploited to derive the SINR degradation of OFDM-CDMA systems due to synchronization errors, which include a timing error, a carrier frequency offset, and a sampling frequency offset. It is also exploited to compare the sensitivities of MC-CDMA and MC-DS-CDMA systems to these errors in a frequency-selective channel. This work is carried out for zero-forcing and minimum mean square error equalizers.

  6. Analysis of Employee's Survey for Preventing Human-Errors

    Energy Technology Data Exchange (ETDEWEB)

    Sung, Chanho; Kim, Younggab; Joung, Sanghoun [KHNP Central Research Institute, Daejeon (Korea, Republic of)

    2013-10-15

    Human errors in nuclear power plants can cause events or incidents, large and small. These events or incidents are among the main contributors to reactor trips and might threaten the safety of nuclear plants. To prevent human errors, KHNP (Korea Hydro & Nuclear Power) introduced 'human-error prevention techniques' and has applied them to main areas such as plant operation, operation support, and maintenance and engineering. This paper proposes methods to prevent and reduce human errors in nuclear power plants by analyzing the results of a survey covering the utilization of the human-error prevention techniques and the employees' awareness of preventing human errors. With regard to human-error prevention, the survey analysis presents the status of the human-error prevention techniques and the employees' awareness of preventing human errors. Employees' understanding and utilization of the techniques were generally high, and the training level of employees and the training effect on actual work were in good condition. Also, employees answered that the root causes of human error lie in the working environment, including tight schedules, manpower shortages, and excessive workloads, rather than in personal negligence or lack of knowledge. Consideration of the working environment is certainly needed. At the present time, based on this survey, the best methods of preventing human error are personal equipment, thorough training and education, mental-health checks before starting work, prohibition of performing multiple tasks at once, compliance with procedures, and enhancement of job-site review. However, the most important and basic things for preventing human error are the interest of workers and an organizational atmosphere of open communication between managers, employees, and their supervisors.

  7. Reversible Watermarking Using Prediction-Error Expansion and Extreme Learning Machine

    Directory of Open Access Journals (Sweden)

    Guangyong Gao

    2015-01-01

    Currently, research on reversible watermarking focuses on decreasing image distortion. Aiming at this issue, this paper presents an improved method to lower the embedding distortion based on the prediction-error expansion (PE) technique. First, an extreme learning machine (ELM) with good generalization ability is utilized to enhance the prediction accuracy of image pixel values during watermark embedding; the lower prediction error results in reduced image distortion. Moreover, an optimization operation strengthening the performance of the ELM is applied to further lessen the embedding distortion. With two popular predictors, namely, the median edge detector (MED) predictor and the gradient-adjusted predictor (GAP), experimental results on the classical images and the Kodak image set indicate that the proposed scheme lowers image distortion compared with the classical PE scheme proposed by Thodi et al. and outperforms the improvement method presented by Coltuc and other existing approaches.
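    The prediction-error expansion step that the scheme above builds on can be sketched as follows. This is a minimal single-pixel illustration of classical Thodi-style PE embedding with a MED predictor, not the authors' ELM-enhanced method; the function names are ours:

    ```python
    def med_predict(left, top, top_left):
        """Median edge detector (MED): predict a pixel from its three causal neighbors."""
        if top_left >= max(left, top):
            return min(left, top)
        if top_left <= min(left, top):
            return max(left, top)
        return left + top - top_left

    def pe_embed(pixel, pred, bit):
        """Embed one bit by expanding the prediction error: e -> 2e + bit."""
        e = pixel - pred
        return pred + 2 * e + bit

    def pe_extract(marked, pred):
        """Recover the embedded bit and restore the original pixel exactly."""
        e2 = marked - pred
        bit = e2 & 1            # LSB of the expanded error carries the payload
        e = (e2 - bit) // 2     # undo the expansion
        return pred + e, bit
    ```

    Reversibility is exact: running `pe_extract` with the same predictor output restores both the pixel and the bit, which is what makes the scheme "reversible" (the distortion is `|e| + bit`, hence the interest in small prediction errors).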

  8. Error Control for Network-on-Chip Links

    CERN Document Server

    Fu, Bo

    2012-01-01

    As technology scales into the nanoscale regime, it is impossible to guarantee a perfect hardware design. Moreover, if the requirement of 100% correctness in hardware can be relaxed, the cost of manufacturing, verification, and testing will be significantly reduced. Many approaches have been proposed to address the reliability problem of on-chip communications. This book focuses on the use of error control codes (ECCs) to improve on-chip interconnect reliability. Coverage includes a detailed description of the key issues in NoC error control faced by circuit and system designers, as well as practical error control techniques to minimize the impact of these errors on system performance. Provides a detailed background on the state of error control methods for on-chip interconnects; Describes the use of more complex concatenated codes such as Hamming Product Codes with Type-II HARQ, while emphasizing integration techniques for on-chip interconnect links; Examines energy-efficient techniques for integrating multiple error...
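    As a flavor of the ECCs discussed, here is a minimal sketch of a Hamming(7,4) encoder/decoder, the single-error-correcting building block behind the Hamming Product Codes mentioned above (an illustration under the standard bit layout, not code from the book):

    ```python
    def hamming74_encode(d):
        """Encode 4 data bits as [p1, p2, d1, p3, d2, d3, d4] (even parity)."""
        d1, d2, d3, d4 = d
        p1 = d1 ^ d2 ^ d4   # covers positions 1, 3, 5, 7
        p2 = d1 ^ d3 ^ d4   # covers positions 2, 3, 6, 7
        p3 = d2 ^ d3 ^ d4   # covers positions 4, 5, 6, 7
        return [p1, p2, d1, p3, d2, d3, d4]

    def hamming74_decode(codeword):
        """Correct at most one flipped bit and return the 4 data bits."""
        c = list(codeword)
        s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
        s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
        s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
        syndrome = s1 + 2 * s2 + 4 * s3   # 1-based position of the error, 0 if none
        if syndrome:
            c[syndrome - 1] ^= 1
        return [c[2], c[4], c[5], c[6]]
    ```

    The syndrome directly names the flipped bit position, which is what makes Hamming codes attractive for low-latency on-chip links; product constructions stack two such codes to also detect heavier error patterns for retransmission (HARQ).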

  9. Bit error rate analysis of free-space optical communication over general Malaga turbulence channels with pointing error

    KAUST Repository

    Alheadary, Wael Ghazy

    2016-12-24

    In this work, we present the bit error rate (BER) and achievable spectral efficiency (ASE) performance of a free-space optical (FSO) link with pointing errors based on intensity modulation/direct detection (IM/DD) and heterodyne detection over a general Malaga turbulence channel. More specifically, we present exact closed-form expressions for adaptive and non-adaptive transmission. The closed-form expressions are presented in terms of generalized power series of the Meijer G-function. Moreover, asymptotic closed-form expressions are provided to validate our work. In addition, all the presented analytical results are illustrated using a selected set of numerical results.

  10. Propagation of angular errors in two-axis rotation systems

    Science.gov (United States)

    Torrington, Geoffrey K.

    2003-10-01

    Two-Axis Rotation Systems, or "goniometers," are used in diverse applications including telescope pointing, automotive headlamp testing, and display testing. There are three basic configurations in which a goniometer can be built depending on the orientation and order of the stages. Each configuration has a governing set of equations which convert motion between the system "native" coordinates to other base systems, such as direction cosines, optical field angles, or spherical-polar coordinates. In their simplest form, these equations neglect errors present in real systems. In this paper, a statistical treatment of error source propagation is developed which uses only tolerance data, such as can be obtained from the system mechanical drawings prior to fabrication. It is shown that certain error sources are fully correctable, partially correctable, or uncorrectable, depending upon the goniometer configuration and zeroing technique. The system error budget can be described by a root-sum-of-squares technique with weighting factors describing the sensitivity of each error source. This paper tabulates weighting factors at 67% (k=1) and 95% (k=2) confidence for various levels of maximum travel for each goniometer configuration. As a practical example, this paper works through an error budget used for the procurement of a system at Sandia National Laboratories.
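    The root-sum-of-squares budgeting technique described above can be sketched as follows (a minimal generic illustration with hypothetical tolerances; the actual sensitivity weighting factors depend on the goniometer configuration and travel range, as tabulated in the paper):

    ```python
    import math

    def rss_budget(tolerances, sensitivities, k=1):
        """Root-sum-of-squares system error from independent error sources.

        tolerances    : 1-sigma error of each source (common angular units)
        sensitivities : weighting factor of each source on the system output
        k             : coverage factor (k=1 ~ 67% confidence, k=2 ~ 95%)
        """
        return k * math.sqrt(sum((w * t) ** 2
                                 for t, w in zip(tolerances, sensitivities)))
    ```

    For example, two independent sources with weighted contributions of 3 and 4 arcsec combine to a 5 arcsec system error at k=1, and 10 arcsec at 95% confidence (k=2).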

  11. Quantum communication complexity advantage implies violation of a Bell inequality

    Science.gov (United States)

    Buhrman, Harry; Czekaj, Łukasz; Grudka, Andrzej; Horodecki, Michał; Horodecki, Paweł; Markiewicz, Marcin; Speelman, Florian; Strelchuk, Sergii

    2016-01-01

    We obtain a general connection between a large quantum advantage in communication complexity and Bell nonlocality. We show that given any protocol offering a sufficiently large quantum advantage in communication complexity, there exists a way of obtaining measurement statistics that violate some Bell inequality. Our main tool is port-based teleportation. If the gap between quantum and classical communication complexity can grow arbitrarily large, the ratio of the quantum value to the classical value of the Bell quantity becomes unbounded with the increase in the number of inputs and outputs. PMID:26957600

  12. The B → D*lv form factor at zero recoil

    International Nuclear Information System (INIS)

    Simone, J.N.; Hashimoto, S.; El-Khadra, A.X.; Kronfeld, A.S.; Mackenzie, P.B.; Ryan, S.M.

    2000-01-01

    We describe a model-independent lattice QCD method for determining the deviation from unity of h_A1(1), the B → D*lv form factor at zero recoil. We extend the double ratio method previously used to determine the B → Dlv form factor. The bulk of statistical and systematic errors cancel in the double ratios we consider, yielding form factors which promise to reduce present theoretical uncertainties in the determination of |V_cb|. We present results from a prototype calculation at a single lattice spacing corresponding to β = 5.7

  13. Object-oriented communications

    International Nuclear Information System (INIS)

    Chapman, L.J.

    1989-01-01

    OOC is a high-level communications protocol based on the object-oriented paradigm. OOC's syntax, semantics, and pragmatics balance simplicity and expressivity for controls environments. While natural languages are too complex, computer protocols are often insufficiently expressive. An object-oriented communications philosophy provides a base for building the necessary high-level communications primitives like "I don't understand" and "the current value of X is K". OOC is sufficiently flexible to express data acquisition, control requests, alarm messages, and error messages in a straightforward generic way. It can be used in networks, for inter-task communication, and even for intra-task communication

  14. Digital, Satellite-Based Aeronautical Communication

    Science.gov (United States)

    Davarian, F.

    1989-01-01

    Satellite system relays communication between aircraft and stations on ground. System offers better coverage with direct communication between air and ground, costs less, and makes possible new communication services. Carries both voice and data. Because many data exchanged between aircraft and ground contain safety-related information, low probability of bit errors is essential.

  15. A study of the relationship between the semi-classical and the generator coordinate methods

    International Nuclear Information System (INIS)

    Passos, E.J.V. de; Souza Cruz, F.F. de.

    Using a very simple type of wave-packet, obtained by letting unitary displacement operators, whose generators are canonical operators Q and P in the many-body Hilbert space, act on a reference state, the relationship between the semi-classical and the generator coordinate methods is investigated. The semi-classical method is based on the time-dependent variational principle, whereas in the generator coordinate method the wave-packets are taken as generator states. To establish the equivalence of the two methods, the concept of redundancy of the wave-packet and the importance of zero-point energy effects are examined in detail, using tools developed in previous works. A numerical application to the case of the Goldhaber-Teller mode in 4He is made. (Author) [pt

  16. The human communication space towards I-centric communications

    CERN Document Server

    Arbanowski, S; Steglich, S; Popescu-Zeletin, R

    2001-01-01

    A variety of concepts for service integration and corresponding systems have been developed. On the one hand, they aim for the interworking and integration of classical telecommunications and data communications services. On the other, they focus on universal service access from a variety of end-user systems. Many of the technical problems resulting from service integration and service personalisation have been solved. However, all these systems are driven by the concept of providing several technologies to users while keeping the peculiarity of each service. Looking at human communication behaviour and communication space, it is obvious that human beings interact habitually in a set of contexts with their environment. The individual information preferences and needs, persons to interact with, and the set of devices controlled by each individual define their personal communication space. Following this view, a new approach is to build communication systems not on the basis of specific technologies, but on t...

  17. Performance Limits of Energy Harvesting Communications under Imperfect Channel State Information

    KAUST Repository

    Zenaidi, Mohamed Ridah

    2015-01-07

    In energy harvesting communications, the transmitters have to adapt transmission to the availability of energy harvested during the course of communication. The performance of the transmission depends on the channel conditions, which vary randomly due to mobility and environmental changes. In this work, we consider the problem of power allocation taking into account the energy arrivals over time and the degree of channel state information (CSI) available at the transmitter, in order to maximize the throughput. Here, the CSI at the transmitter is not perfect and may include estimation errors. We solve this problem with respect to the causality and energy storage constraints. We determine the optimal offline policy in the case where the channel is assumed to be perfectly known at the receiver. Different cases of CSI availability are studied for the transmitter. We obtain the power policy when the transmitter has either perfect CSI or no CSI. We also investigate the case of fading channels with imperfect CSI, which is of utmost interest. Furthermore, we analyze the asymptotic average throughput in a system where the average recharge rate goes asymptotically to zero and where it is very high.
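    As a simplified illustration of throughput-maximizing power allocation under perfect CSI, the classic static water-filling solution can be sketched as below. This sketch ignores the energy causality and storage constraints that the paper's offline policy additionally respects; the function name and the bisection approach are ours:

    ```python
    def water_filling(gains, p_total, tol=1e-9):
        """Maximize sum(log(1 + p_i * g_i)) subject to sum(p_i) = p_total, p_i >= 0.

        Bisect on the water level mu; the optimal allocation is
        p_i = max(0, mu - 1/g_i) for channel power gains g_i.
        """
        lo, hi = 0.0, p_total + max(1.0 / g for g in gains)
        while hi - lo > tol:
            mu = (lo + hi) / 2
            used = sum(max(0.0, mu - 1.0 / g) for g in gains)
            if used > p_total:
                hi = mu
            else:
                lo = mu
        mu = (lo + hi) / 2
        return [max(0.0, mu - 1.0 / g) for g in gains]
    ```

    Strong channels receive more power and sufficiently weak ones receive none, which is the qualitative behavior the optimal offline policy inherits across time slots.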

  18. Reproductive value, sensitivity, and nonlinearity: population-management heuristics derived from classical demography.

    Science.gov (United States)

    Karsten, Richard; Teismann, Holger; Vogels, Angela

    2013-05-01

    In classical demographic theory, reproductive value and stable age distribution are proportional to the sensitivities of the asymptotic population size to changes in mortality and maternity, respectively. In this note we point out that analogous relationships hold if the maternity function is allowed to depend on the population density. The relevant formulae can essentially be obtained by replacing the growth rate ("Lotka's r") with zero. These facts may be used to derive heuristics for population management (pest control). Copyright © 2013 Elsevier Inc. All rights reserved.
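    The classical quantities involved can be illustrated numerically: for a Leslie matrix, the stable age distribution and the reproductive value are the dominant right and left eigenvectors, respectively (a minimal sketch; the matrix entries below are hypothetical, not from the paper):

    ```python
    import numpy as np

    # Leslie matrix for 3 age classes: top row = maternity, subdiagonal = survival.
    L = np.array([[0.0, 1.2, 1.0],
                  [0.6, 0.0, 0.0],
                  [0.0, 0.4, 0.0]])

    # Dominant eigenvalue ("Lotka's" growth factor) and right eigenvector.
    vals, right = np.linalg.eig(L)
    i = np.argmax(vals.real)
    lam = vals.real[i]

    # Stable age distribution: dominant right eigenvector, normalized to sum to 1.
    stable_age = np.abs(right[:, i].real)
    stable_age /= stable_age.sum()

    # Reproductive value: dominant left eigenvector (right eigenvector of L^T),
    # conventionally scaled so the value of the first age class is 1.
    vals_l, left = np.linalg.eig(L.T)
    j = np.argmax(vals_l.real)
    repro_value = np.abs(left[:, j].real)
    repro_value /= repro_value[0]
    ```

    The note's point is that these same eigenvectors remain proportional to the sensitivities of asymptotic population size even when maternity is density-dependent, with the growth rate replaced by zero.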

  19. Asteroid orbital error analysis: Theory and application

    Science.gov (United States)

    Muinonen, K.; Bowell, Edward

    1992-01-01

    We present a rigorous Bayesian theory for asteroid orbital error estimation in which the probability density of the orbital elements is derived from the noise statistics of the observations. For Gaussian noise in a linearized approximation, the probability density is also Gaussian, and the errors of the orbital elements at a given epoch are fully described by the covariance matrix. The law of error propagation can then be applied to calculate past and future positional uncertainty ellipsoids (Cappellari et al. 1976, Yeomans et al. 1987, Whipple et al. 1991). To our knowledge, this is the first time a Bayesian approach has been formulated for orbital element estimation. In contrast to the classical Fisherian school of statistics, the Bayesian school allows a priori information to be formally present in the final estimation. However, Bayesian estimation gives the same results as Fisherian estimation when no a priori information is assumed (Lehtinen 1988, and references therein).
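    The linear law of error propagation used to obtain positional uncertainty ellipsoids can be sketched as follows (a generic illustration, not the authors' orbital code; here the Jacobian stands in for the linearized map from orbital elements to position at some epoch):

    ```python
    import numpy as np

    def propagate_covariance(cov_in, jacobian):
        """Linear law of error propagation: Sigma_out = J Sigma_in J^T."""
        J = np.asarray(jacobian, dtype=float)
        return J @ np.asarray(cov_in, dtype=float) @ J.T

    def ellipsoid_semiaxes(cov, k=1.0):
        """Semi-axes of the k-sigma uncertainty ellipsoid: k * sqrt(eigenvalues)."""
        return k * np.sqrt(np.linalg.eigvalsh(np.asarray(cov, dtype=float)))
    ```

    Under the Gaussian/linearized assumptions of the abstract, propagating the element covariance through the epoch-dependent Jacobian and taking eigenvalues of the result gives the past or future positional uncertainty ellipsoid directly.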

  20. Comparative role of potential structure in classical, semiclassical, and quantum mechanics

    International Nuclear Information System (INIS)

    Judson, R.S.; Shi, S.; Rabitz, H.

    1989-01-01

    The corresponding effects of features in the potential on classical, semiclassical, and quantum mechanics are probed using the technique of functional sensitivity analysis. It is shown that the classical and quantum functional sensitivities are equivalent in the classical (small ℏ) and harmonic limits. Classical and quantum mechanics are known to react in qualitatively similar ways provided that features on the potential are smooth on the length scale of oscillations in the quantum wave function. By using functional sensitivity analysis, we are able to show in detail how the classical and quantum dynamics differ in the way that they sense the potential. Two examples are given, the first of which is the harmonic oscillator. This problem is well understood by other means but is useful to examine because it illustrates the detailed information about the interaction of the potential and the dynamics which can be provided by functional sensitivity analysis, simplifying the analysis of more complex systems. The second example is the collinear H + H2 reaction. In that case there are a number of detailed and striking differences between the ways that classical and quantum mechanics react to features on the potential. For features which are broad compared to oscillations in the wave function, the two react in qualitatively the same way. The sensitivities are oscillatory, however, and there are phasing differences between the classical and quantum sensitivity functions. This means that using classical mechanics plus experimental data in an inversion scheme intended to find the "true" potential will necessarily introduce sizeable errors.

  1. Optimizer convergence and local minima errors and their clinical importance

    International Nuclear Information System (INIS)

    Jeraj, Robert; Wu, Chuan; Mackie, Thomas R

    2003-01-01

    Two of the errors common in inverse treatment planning optimization have been investigated. The first is the optimizer convergence error, which appears because of imperfect convergence to the global or local solution, usually caused by a non-zero stopping criterion. The second is the local minima error, which occurs when the objective function is not convex and/or the feasible solution space is not convex. The magnitude of the errors, their importance relative to other errors, and their clinical significance in terms of tumour control probability (TCP) and normal tissue complication probability (NTCP) were investigated. Two inherently different optimizers, a stochastic simulated annealing method and a deterministic gradient method, were compared on a clinical example. It was found that for typical optimizations the optimizer convergence errors are rather small, especially compared to other convergence errors, e.g., those due to inaccuracy of current dose calculation algorithms. This indicates that stopping criteria could often be relaxed, leading to optimization speed-ups. The local minima errors were also found to be relatively small and typically in the range of the dose calculation convergence errors. Even for the cases where significantly higher objective function scores were obtained, the local minima errors were not significantly higher. Clinical evaluation of the optimizer convergence error showed good correlation between the convergence of the clinical TCP or NTCP measures and the convergence of the physical dose distribution. On the other hand, the local minima errors resulted in significantly different TCP or NTCP values (up to a factor of 2), indicating the clinical importance of the local minima produced by physical optimization.
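    The local-minima effect is easy to reproduce on a toy non-convex objective: a deterministic gradient method converges to whichever basin it starts in (a generic one-dimensional illustration, unrelated to the clinical objective functions studied in the paper):

    ```python
    def gradient_descent(grad, x0, lr=0.01, steps=5000):
        """Plain gradient descent: converges only to the nearest local minimum."""
        x = x0
        for _ in range(steps):
            x -= lr * grad(x)
        return x

    # Toy non-convex objective with a local minimum near x ~ 1.35 and the
    # global minimum near x ~ -1.47 (values specific to this illustration).
    f = lambda x: x**4 - 4 * x**2 + x
    df = lambda x: 4 * x**3 - 8 * x + 1

    x_trapped = gradient_descent(df, 2.0)    # starts in the shallower basin
    x_global = gradient_descent(df, -2.0)    # starts in the deeper basin
    ```

    A stochastic method such as simulated annealing can escape the shallower basin, which is why the paper compares the two optimizer families on the same clinical case.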

  2. The Hurst Phenomenon in Error Estimates Related to Atmospheric Turbulence

    Science.gov (United States)

    Dias, Nelson Luís; Crivellaro, Bianca Luhm; Chamecki, Marcelo

    2018-05-01

    The Hurst phenomenon is a well-known feature of long-range persistence first observed in hydrological and geophysical time series by E. Hurst in the 1950s. It has also been found in several cases in turbulence time series measured in the wind tunnel, the atmosphere, and in rivers. Here, we conduct a systematic investigation of the value of the Hurst coefficient H in atmospheric surface-layer data, and its impact on the estimation of random errors. We show that usually H > 0.5, which implies the non-existence (in the statistical sense) of the integral time scale. Since the integral time scale is present in the Lumley-Panofsky equation for the estimation of random errors, this has important practical consequences. We estimated H in two principal ways: (1) with an extension of the recently proposed filtering method to estimate the random error (H_p), and (2) with the classical rescaled range introduced by Hurst (H_R). Other estimators were tried but were found less able to capture the statistical behaviour of the large scales of turbulence. Using data from three micrometeorological campaigns we found that both first- and second-order turbulence statistics display the Hurst phenomenon. Usually, H_R is larger than H_p for the same dataset, raising the question that one, or even both, of these estimators may be biased. For the relative error, we found that the errors estimated with the approach adopted by us, which we call the relaxed filtering method and which takes into account the occurrence of the Hurst phenomenon, are larger than both the filtering-method and the classical Lumley-Panofsky estimates. Finally, we found that there is no apparent relationship between H and the Obukhov stability parameter. The relative errors, however, do show stability dependence, particularly in the case of the error of the kinematic momentum flux in unstable conditions, and that of the kinematic sensible heat flux in stable conditions.
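    The classical rescaled-range estimator H_R mentioned above can be sketched as follows (a minimal version fitting log(R/S) against log(n) over dyadic window sizes; the paper's H_p filtering estimator is not shown, and small-sample bias corrections are omitted):

    ```python
    import numpy as np

    def rescaled_range_H(x, min_chunk=8):
        """Classical rescaled-range (R/S) estimate of the Hurst coefficient H."""
        x = np.asarray(x, dtype=float)
        N = len(x)
        sizes, rs_means = [], []
        n = min_chunk
        while n <= N // 2:
            rs_vals = []
            for start in range(0, N - n + 1, n):
                w = x[start:start + n]
                z = np.cumsum(w - w.mean())       # cumulative deviations
                r = z.max() - z.min()             # range R
                s = w.std()                       # standard deviation S
                if s > 0:
                    rs_vals.append(r / s)
            sizes.append(n)
            rs_means.append(np.mean(rs_vals))
            n *= 2
        slope, _ = np.polyfit(np.log(sizes), np.log(rs_means), 1)
        return slope
    ```

    For a short-memory series the slope is near 0.5, while strongly persistent (or nonstationary) series give larger values, which is the H > 0.5 regime the paper reports for surface-layer turbulence.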

  3. Teamwork and clinical error reporting among nurses in Korean hospitals.

    Science.gov (United States)

    Hwang, Jee-In; Ahn, Jeonghoon

    2015-03-01

    To examine levels of teamwork and its relationships with clinical error reporting among Korean hospital nurses, the study employed a cross-sectional survey design. We distributed a questionnaire to 674 nurses in two teaching hospitals in Korea. The questionnaire included items on teamwork and the reporting of clinical errors. We measured teamwork using the Teamwork Perceptions Questionnaire, which has five subscales: team structure, leadership, situation monitoring, mutual support, and communication. Using logistic regression analysis, we determined the relationships between teamwork and error reporting. The response rate was 85.5%. The mean teamwork score was 3.5 out of 5. At the subscale level, mutual support was rated highest, while leadership was rated lowest. Of the participating nurses, 522 responded that they had experienced at least one clinical error in the last 6 months. Among those, only 53.0% responded that they always or usually reported clinical errors to their managers and/or the patient safety department. Teamwork was significantly associated with better error reporting. Specifically, nurses with a higher team communication score were more likely to report clinical errors to their managers and the patient safety department (odds ratio = 1.82, 95% confidence interval [1.05, 3.14]). Teamwork was rated as moderate and was positively associated with nurses' error-reporting performance. Hospital executives and nurse managers should make substantial efforts to enhance teamwork, which will contribute to encouraging the reporting of errors and improving patient safety. Copyright © 2015. Published by Elsevier B.V.
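    The kind of effect reported (an odds ratio with a Wald confidence interval) can be computed from a 2x2 table as follows (a generic sketch with hypothetical counts, not the study's data, which used logistic regression on a continuous teamwork score):

    ```python
    import math

    def odds_ratio_ci(a, b, c, d, z=1.96):
        """Odds ratio and Wald 95% CI from a 2x2 table:

            a = exposed, event       b = exposed, no event
            c = unexposed, event     d = unexposed, no event
        """
        or_ = (a * d) / (b * c)
        se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
        lo = math.exp(math.log(or_) - z * se_log)
        hi = math.exp(math.log(or_) + z * se_log)
        return or_, lo, hi
    ```

    A confidence interval whose lower bound exceeds 1 (as in the reported [1.05, 3.14]) indicates a statistically significant positive association.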

  4. Probabilistic performance estimators for computational chemistry methods: The empirical cumulative distribution function of absolute errors

    Science.gov (United States)

    Pernot, Pascal; Savin, Andreas

    2018-06-01

    Benchmarking studies in computational chemistry use reference datasets to assess the accuracy of a method through error statistics. The commonly used error statistics, such as the mean signed and mean unsigned errors, do not inform end-users on the expected amplitude of prediction errors attached to these methods. We show that, the distributions of model errors being neither normal nor zero-centered, these error statistics cannot be used to infer prediction error probabilities. To overcome this limitation, we advocate for the use of more informative statistics, based on the empirical cumulative distribution function of unsigned errors, namely, (1) the probability for a new calculation to have an absolute error below a chosen threshold and (2) the maximal amplitude of errors one can expect with a chosen high confidence level. Those statistics are also shown to be well suited for benchmarking and ranking studies. Moreover, the standard error on all benchmarking statistics depends on the size of the reference dataset. Systematic publication of these standard errors would be very helpful to assess the statistical reliability of benchmarking conclusions.
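    The two advocated statistics can be computed directly from the empirical CDF of a sample of absolute errors (a minimal sketch; the threshold and confidence level are the user's choice, and the function name is ours):

    ```python
    def ecdf_stats(errors, threshold, confidence=0.95):
        """Statistics from the empirical CDF of absolute errors:

        - p_below: estimated probability that a new calculation has an
          absolute error below `threshold`
        - q_conf : maximal error amplitude expected at the chosen
          confidence level (empirical quantile)
        """
        abs_err = sorted(abs(e) for e in errors)
        n = len(abs_err)
        p_below = sum(e < threshold for e in abs_err) / n
        q_conf = abs_err[min(n - 1, int(confidence * n))]
        return p_below, q_conf
    ```

    Unlike the mean signed or unsigned error, both quantities remain meaningful when the error distribution is skewed, heavy-tailed, or off-center.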

  5. Quantum error correction with spins in diamond

    NARCIS (Netherlands)

    Cramer, J.

    2016-01-01

    Digital information based on the laws of quantum mechanics promises powerful new ways of computation and communication. However, quantum information is very fragile; inevitable errors continuously build up and eventually all information is lost. Therefore, realistic large-scale quantum information

  6. The Importance of Relying on the Manual: Scoring Error Variance in the WISC-IV Vocabulary Subtest

    Science.gov (United States)

    Erdodi, Laszlo A.; Richard, David C. S.; Hopwood, Christopher

    2009-01-01

    Classical test theory assumes that ability level has no effect on measurement error. Newer test theories, however, argue that the precision of a measurement instrument changes as a function of the examinee's true score. Research has shown that administration errors are common in the Wechsler scales and that subtests requiring subjective scoring…

  7. Barriers and facilitators to recovering from e-prescribing errors in community pharmacies.

    Science.gov (United States)

    Odukoya, Olufunmilola K; Stone, Jamie A; Chui, Michelle A

    2015-01-01

    To explore barriers and facilitators to recovery from e-prescribing errors in community pharmacies and to explore practical solutions for work system redesign to ensure successful recovery from errors. Cross-sectional qualitative design using direct observations, interviews, and focus groups. Five community pharmacies in Wisconsin. 13 pharmacists and 14 pharmacy technicians. Observational field notes and transcribed interviews and focus groups were subjected to thematic analysis guided by the Systems Engineering Initiative for Patient Safety (SEIPS) work system and patient safety model. Barriers and facilitators to recovering from e-prescription errors in community pharmacies. Organizational factors, such as communication, training, teamwork, and staffing levels, play an important role in recovering from e-prescription errors. Other factors that could positively or negatively affect recovery of e-prescription errors include level of experience, knowledge of the pharmacy personnel, availability or usability of tools and technology, interruptions and time pressure when performing tasks, and noise in the physical environment. The SEIPS model sheds light on key factors that may influence recovery from e-prescribing errors in pharmacies, including the environment, teamwork, communication, technology, tasks, and other organizational variables. To be successful in recovering from e-prescribing errors, pharmacies must provide the appropriate working conditions that support recovery from errors.

  8. Nonclassical measurements errors in nonlinear models

    DEFF Research Database (Denmark)

    Madsen, Edith; Mulalic, Ismir

    Discrete choice models and in particular logit type models play an important role in understanding and quantifying individual or household behavior in relation to transport demand. An example is the choice of travel mode for a given trip under the budget and time restrictions that the individuals...... estimates of the income effect it is of interest to investigate the magnitude of the estimation bias and if possible use estimation techniques that take the measurement error problem into account. We use data from the Danish National Travel Survey (NTS) and merge it with administrative register data...... that contains very detailed information about incomes. This gives a unique opportunity to learn about the magnitude and nature of the measurement error in income reported by the respondents in the Danish NTS compared to income from the administrative register (correct measure). We find that the classical...

  9. A generalized architecture of quantum secure direct communication for N disjointed users with authentication

    Science.gov (United States)

    Farouk, Ahmed; Zakaria, Magdy; Megahed, Adel; Omara, Fatma A.

    2015-11-01

    In this paper, we generalize a secure direct communication process between N users with partial and full cooperation of a quantum server. N - 1 disjointed users u1, u2, …, uN-1 can transmit a secret message of classical bits to a remote user uN by utilizing the property of dense coding and Pauli unitary transformations. The authentication process between the quantum server and the users is validated by EPR entangled pairs and a CNOT gate. Afterwards, the remaining EPR pairs generate shared GHZ states, which are used for directly transmitting the secret message. The partial cooperation process indicates that N - 1 users can transmit a secret message directly to a remote user uN through a quantum channel. Furthermore, N - 1 users and a remote user uN can communicate without an established quantum channel among them through a full cooperation process. The security analysis of the authentication and communication processes against many types of attacks proves that an attacker cannot gain any information while intercepting either process. Hence, the security of the transmitted message among N users is ensured, as the attacker introduces an error probability irrespective of the sequence of measurement.

  10. When to Take a Gesture Seriously: On How We Use and Prioritize Communicative Cues.

    Science.gov (United States)

    Gunter, Thomas C; Weinbrenner, J E Douglas

    2017-08-01

    When people talk, their speech is often accompanied by gestures. Although it is known that co-speech gestures can influence face-to-face communication, it is currently unclear to what extent they are actively used and under which premises they are prioritized to facilitate communication. We investigated these open questions in two experiments that varied how pointing gestures disambiguate the utterances of an interlocutor. Participants, whose event-related brain responses were measured, watched a video, where an actress was interviewed about, for instance, classical literature (e.g., Goethe and Shakespeare). While responding, the actress pointed systematically to the left side to refer to, for example, Goethe, or to the right to refer to Shakespeare. Her final statement was ambiguous and combined with a pointing gesture. The P600 pattern found in Experiment 1 revealed that, when pointing was unreliable, gestures were only monitored for their cue validity and not used for reference tracking related to the ambiguity. However, when pointing was a valid cue (Experiment 2), it was used for reference tracking, as indicated by a reduced N400 for pointing. In summary, these findings suggest that a general prioritization mechanism is in use that constantly monitors and evaluates the use of communicative cues against communicative priors on the basis of accumulated error information.

  11. Generalized Pseudospectral Method and Zeros of Orthogonal Polynomials

    Directory of Open Access Journals (Sweden)

    Oksana Bihun

    2018-01-01

    Via a generalization of the pseudospectral method for the numerical solution of differential equations, a family of nonlinear algebraic identities satisfied by the zeros of a wide class of orthogonal polynomials is derived. The generalization is based on a modification, proposed in the paper, of pseudospectral matrix representations of linear differential operators, which allows these representations to depend on two, rather than one, sets of interpolation nodes. The identities hold for every polynomial family {p_ν(x)}_{ν=0}^∞ orthogonal with respect to a measure supported on the real line that satisfies some standard assumptions, as long as the polynomials in the family satisfy differential equations A p_ν(x) = q_ν(x) p_ν(x), where A is a linear differential operator and each q_ν(x) is a polynomial of degree at most n_0 ∈ N; n_0 does not depend on ν. The proposed identities generalize known identities for classical and Krall orthogonal polynomials to the case of the nonclassical orthogonal polynomials that belong to the class described above. The generalized pseudospectral representations of the differential operator A are presented for the case of the Sonin-Markov orthogonal polynomials, also known as generalized Hermite polynomials. The general result is illustrated by new algebraic relations satisfied by the zeros of the Sonin-Markov polynomials.
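
    A classical special case of such zero identities is Stieltjes' relation for the (physicists') Hermite polynomials, which satisfy H_n'' - 2x H_n' + 2n H_n = 0: at each zero x_k, the sum of 1/(x_k - x_j) over the other zeros equals x_k. The sketch below verifies this numerically for H_3, whose zeros are known in closed form; it does not reproduce the paper's generalized (Sonin-Markov) identities.

```python
import math

def hermite(n, x):
    """Physicists' Hermite polynomial H_n(x) via the recurrence
    H_{k+1} = 2x H_k - 2k H_{k-1}."""
    h0, h1 = 1.0, 2.0 * x
    if n == 0:
        return h0
    for k in range(1, n):
        h0, h1 = h1, 2.0 * x * h1 - 2.0 * k * h0
    return h1

# Zeros of H_3(x) = 8x^3 - 12x, known in closed form.
zeros = [-math.sqrt(1.5), 0.0, math.sqrt(1.5)]

def stieltjes_residual(k):
    """|sum_{j != k} 1/(x_k - x_j)  -  x_k|, which should vanish."""
    xk = zeros[k]
    total = sum(1.0 / (xk - xj) for j, xj in enumerate(zeros) if j != k)
    return abs(total - xk)
```

    The residual is zero to floating-point precision at each of the three zeros.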

  12. Error Analysis of Variations on Larsen's Benchmark Problem

    International Nuclear Information System (INIS)

    Azmy, YY

    2001-01-01

    Error norms for three variants of Larsen's benchmark problem are evaluated using three numerical methods for solving the discrete ordinates approximation of the neutron transport equation in multidimensional Cartesian geometry. The three variants of Larsen's test problem differ in the incoming flux boundary conditions: unit incoming flux on the left and bottom edges (Larsen's configuration); unit incoming flux only on the left edge; unit incoming flux only on the bottom edge. The three methods considered are the Diamond Difference (DD) method and the constant-approximation versions of the Arbitrarily High Order Transport method of the Nodal type (AHOT-N) and of the Characteristic type (AHOT-C). The cell-wise error is computed as the difference between the cell-averaged flux computed by each method and the exact value, and then the L_1, L_2, and L_∞ error norms are calculated. The results of this study demonstrate that while the integral error norms, i.e. L_1 and L_2, converge to zero with mesh refinement, the pointwise L_∞ norm does not, due to the solution discontinuity across the singular characteristic. Little difference is observed in the error norm behavior of the three methods, despite the fact that AHOT-C is locally exact, suggesting that numerical diffusion across the singular characteristic is the major source of error on the global scale. However, AHOT-C attains a given accuracy in a larger fraction of computational cells than DD.
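
    The three norms can be sketched as follows for cell-averaged fluxes on a uniform mesh (a volume-normalized convention is assumed here; the transport solvers themselves are not reproduced). The L_1 and L_2 norms average the error over all cells, which is why they can shrink under mesh refinement even while the L_∞ norm stays pinned at the discontinuity.

```python
def error_norms(computed, exact):
    """L1, L2, and Linf norms of the cell-wise error between two equal-length
    sequences of cell-averaged fluxes, normalized by the cell count
    (i.e. by total volume on a uniform mesh)."""
    n = len(computed)
    errs = [abs(c - e) for c, e in zip(computed, exact)]
    l1 = sum(errs) / n
    l2 = (sum(e * e for e in errs) / n) ** 0.5
    linf = max(errs)
    return l1, l2, linf
```

    For example, a single bad cell out of many keeps L_∞ fixed while L_1 and L_2 fall as the mesh is refined.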

  13. Evolutionary modeling-based approach for model errors correction

    Directory of Open Access Journals (Sweden)

    S. Q. Wan

    2012-08-01

    The inverse problem of using the information in historical data to estimate model errors is one of the frontier research topics in science. In this study, we investigate such a problem using the classic Lorenz (1963) equations as the prediction model and the Lorenz equations with a periodic evolutionary function as an accurate representation of reality to generate "observational data."

    On the basis of the intelligent features of evolutionary modeling (EM), including self-organization, self-adaptation, and self-learning, the dynamic information contained in the historical data can be identified and extracted automatically by computer. Thereby, a new approach to estimating model errors based on EM is proposed in the present paper. Numerical tests demonstrate the ability of the new approach to correct model structural errors; in fact, it realizes a combination of statistics and dynamics to a certain extent.
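
    A minimal sketch of the twin-experiment setup described above, assuming the standard Lorenz (1963) parameters (sigma = 10, rho = 28, beta = 8/3) and an illustrative periodic term standing in for the unknown model error; the EM-based correction itself is not reproduced. The gap between the "truth" and the imperfect model is the signal such an approach would learn from.

```python
import math

def lorenz_rhs(t, state, forcing=0.0):
    """Classic Lorenz (1963) system; `forcing` is an illustrative periodic
    perturbation of the y-equation standing in for unknown model error."""
    x, y, z = state
    dx = 10.0 * (y - x)
    dy = x * (28.0 - z) - y + forcing * math.sin(t)
    dz = x * y - (8.0 / 3.0) * z
    return dx, dy, dz

def rk4_step(t, state, h, forcing=0.0):
    """One fourth-order Runge-Kutta step."""
    def add(u, v, c):
        return tuple(ui + c * vi for ui, vi in zip(u, v))
    k1 = lorenz_rhs(t, state, forcing)
    k2 = lorenz_rhs(t + h / 2, add(state, k1, h / 2), forcing)
    k3 = lorenz_rhs(t + h / 2, add(state, k2, h / 2), forcing)
    k4 = lorenz_rhs(t + h, add(state, k3, h), forcing)
    return tuple(si + h / 6 * (a + 2 * b + 2 * c + d)
                 for si, a, b, c, d in zip(state, k1, k2, k3, k4))

# "Truth" (with the periodic term) vs. imperfect model (without), started
# from the same initial state: the growing gap is the model-error signal.
s_true = s_model = (1.0, 1.0, 1.0)
t, h = 0.0, 0.01
for _ in range(500):
    s_true = rk4_step(t, s_true, h, forcing=2.0)
    s_model = rk4_step(t, s_model, h)
    t += h
gap = max(abs(a - b) for a, b in zip(s_true, s_model))
```
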

  14. Zero-Sum Flows in Designs

    International Nuclear Information System (INIS)

    Akbari, S.; Khosrovshahi, G.B.; Mofidi, A.

    2010-07-01

    Let D be a t-(v, k, λ) design and let Ni(D), for 1 ≤ i ≤ t, be the higher incidence matrix of D, a (0, 1)-matrix of size (v choose i) x b, where b is the number of blocks of D. A zero-sum flow of D is a nowhere-zero real vector in the null space of N1(D). A zero-sum k-flow of D is a zero-sum flow with values in {±1, ..., ±(k-1)}. In this paper we show that every non-symmetric design admits an integral zero-sum flow, and consequently we conjecture that every non-symmetric design admits a zero-sum 5-flow. Similarly, the definition of a zero-sum flow can be extended to Ni(D), 1 ≤ i ≤ t. Let D be the complete t-(v, k, (v-t choose k-t)) design. We conjecture that Nt(D) admits a zero-sum 3-flow and prove this conjecture for t = 2. (author)
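
    A small self-check of the definition, using the complete 2-(4, 2, 1) design (all 2-subsets of 4 points), which does admit a zero-sum 3-flow, consistent with the t = 2 case of the conjecture. The flow vector below is one hand-found example, not taken from the paper.

```python
from itertools import combinations

def is_zero_sum_flow(incidence, f, k=None):
    """Check that f is a nowhere-zero vector in the null space of the given
    incidence matrix; if k is given, also require values in ±1..±(k-1)."""
    if any(v == 0 for v in f):
        return False
    if k is not None and any(abs(v) > k - 1 for v in f):
        return False
    return all(sum(r * v for r, v in zip(row, f)) == 0 for row in incidence)

# Complete 2-(4,2,1) design: points 0..3, blocks = all 2-subsets.
points = range(4)
blocks = list(combinations(points, 2))  # (0,1),(0,2),(0,3),(1,2),(1,3),(2,3)
N1 = [[1 if p in b else 0 for b in blocks] for p in points]

# A zero-sum 3-flow (values in {±1, ±2}): every point's incident blocks
# carry weights summing to zero, e.g. point 0 sees 1 + 1 - 2 = 0.
f = [1, 1, -2, -2, 1, 1]
```
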

  15. Random measurement error: Why worry? An example of cardiovascular risk factors.

    Science.gov (United States)

    Brakenhoff, Timo B; van Smeden, Maarten; Visseren, Frank L J; Groenwold, Rolf H H

    2018-01-01

    With the increased use of data not originally recorded for research, such as routine care data (or 'big data'), measurement error is bound to become an increasingly relevant problem in medical research. A common view among medical researchers on the influence of random measurement error (i.e. classical measurement error) is that its presence leads to some degree of systematic underestimation of studied exposure-outcome relations (i.e. attenuation of the effect estimate). For the common situation where the analysis involves at least one exposure and one confounder, we demonstrate that the direction of effect of random measurement error on the estimated exposure-outcome relations can be difficult to anticipate. Using three example studies on cardiovascular risk factors, we illustrate that random measurement error in the exposure and/or confounder can lead to underestimation as well as overestimation of exposure-outcome relations. We therefore advise medical researchers to refrain from making claims about the direction of effect of measurement error in their manuscripts, unless the appropriate inferential tools are used to study or alleviate the impact of measurement error from the analysis.
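
    A seeded Monte Carlo sketch of the textbook attenuation case the authors contrast against: classical error added to a single exposure shrinks the OLS slope by the reliability ratio var(x) / (var(x) + var(e)). With a confounder in the model, as the abstract stresses, the direction of bias is no longer predictable. All names and parameter values below are illustrative.

```python
import random

random.seed(42)

def ols_slope(x, y):
    """Simple-regression OLS slope: cov(x, y) / var(x)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    return sxy / sxx

n = 20000
beta = 2.0                                       # true exposure-outcome slope
x = [random.gauss(0, 1) for _ in range(n)]       # true exposure, var 1
y = [beta * xi + random.gauss(0, 1) for xi in x]
x_err = [xi + random.gauss(0, 1) for xi in x]    # classical error, var 1

slope_true = ols_slope(x, y)     # close to beta = 2.0
slope_att = ols_slope(x_err, y)  # attenuated toward beta * 1/(1+1) = 1.0
```
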

  16. Zero-Sum Matrix Game with Payoffs of Dempster-Shafer Belief Structures and Its Applications on Sensors

    Science.gov (United States)

    Deng, Xinyang; Jiang, Wen; Zhang, Jiandong

    2017-01-01

    The zero-sum matrix game is one of the most classic game models, and it is widely used in many scientific and engineering fields. In the real world, due to the complexity of the decision-making environment, the payoffs received by players may be inexact or uncertain, which requires that the matrix game model be able to represent and deal with imprecise payoffs. To meet such a requirement, this paper develops a zero-sum matrix game model with Dempster–Shafer belief structure payoffs, which effectively represents the ambiguity involved in the payoffs of a game. Then, a decomposition method is proposed to calculate the value of such a game, which is also expressed with belief structures. Moreover, for the potentially computation-intensive cases of the proposed decomposition method, a Monte Carlo simulation approach is presented as an alternative solution. Finally, the proposed zero-sum matrix game with payoffs of Dempster–Shafer belief structures is illustratively applied to sensor selection and intrusion detection in sensor networks, which shows its effectiveness and application process. PMID:28430156
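
    The crisp zero-sum matrix game underlying the model has a closed-form solution in the 2x2 case, sketched below; the Dempster-Shafer belief-structure payoffs and the paper's decomposition method are not reproduced here.

```python
def solve_2x2_zero_sum(m):
    """Value of a 2x2 zero-sum game (payoffs to the row player) and the
    row player's mixed strategy; returns (value, None) at a saddle point."""
    (a, b), (c, d) = m
    # Saddle point test: the pure maximin equals the pure minimax.
    row_security = max(min(a, b), min(c, d))
    col_security = min(max(a, c), max(b, d))
    if row_security == col_security:
        return row_security, None           # pure-strategy solution
    denom = a + d - b - c
    p = (d - c) / denom                     # probability of playing row 0
    value = (a * d - b * c) / denom
    return value, p

# Matching pennies: value 0, each row played with probability 1/2.
value, p = solve_2x2_zero_sum([[1, -1], [-1, 1]])
```
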

  17. Performance of muon reconstruction including Alignment Position Errors for 2016 Collision Data

    CERN Document Server

    CMS Collaboration

    2016-01-01

    Since the 2016 run, muon reconstruction has used non-zero Alignment Position Errors to account for the residual uncertainties in the positions of the muon chambers. Significant improvements are obtained, in particular for the startup phase after opening/closing the muon detector. Performance results are presented for real data and MC simulations, for both the offline reconstruction and the High-Level Trigger.

  18. Analysis of Medication Error Reports

    Energy Technology Data Exchange (ETDEWEB)

    Whitney, Paul D.; Young, Jonathan; Santell, John; Hicks, Rodney; Posse, Christian; Fecht, Barbara A.

    2004-11-15

    In medicine, as in many areas of research, technological innovation and the shift from paper-based information to electronic records have created a climate of ever increasing availability of raw data. There has been, however, a corresponding lag in our abilities to analyze this overwhelming mass of data, and classic forms of statistical analysis may not allow researchers to interact with data in the most productive way. This is true in the emerging area of patient safety improvement. Traditionally, a majority of the analysis of error and incident reports has been carried out based on an approach of data comparison, and starts with a specific question which needs to be answered. Newer data analysis tools have been developed which allow the researcher not only to ask specific questions but also to “mine” data: approach an area of interest without preconceived questions, and explore the information dynamically, allowing questions to be formulated based on patterns brought up by the data itself. Since 1991, United States Pharmacopeia (USP) has been collecting data on medication errors through voluntary reporting programs. USP’s MEDMARXsm reporting program is the largest national medication error database and currently contains well over 600,000 records. Traditionally, USP has conducted an annual quantitative analysis of data derived from “pick-lists” (i.e., items selected from a list of items) without an in-depth analysis of free-text fields. In this paper, the application of text analysis and data analysis tools used by Battelle to analyze the medication error reports already analyzed in the traditional way by USP is described. New insights and findings were revealed, including the value of language normalization and the distribution of error incidents by day of the week. The motivation for this effort is to gain additional insight into the nature of medication errors to support improvements in medication safety.

  19. Study on Network Error Analysis and Locating based on Integrated Information Decision System

    Science.gov (United States)

    Yang, F.; Dong, Z. H.

    2017-10-01

    Integrated information decision system (IIDS) integrates multiple sub-systems developed by many facilities, comprising nearly a hundred kinds of software that provide various services, such as email, short messages, drawing, and sharing. Because the underlying protocols differ and user standards are not unified, many errors occur during setup, configuration, and operation, seriously affecting usability. Because these errors are varied and may arise in different operation phases, stages, TCP/IP protocol layers, and sub-system software, it is necessary to design a network error analysis and locating tool for IIDS to solve the above problems. This paper studies network error analysis and locating based on IIDS, providing strong theoretical and technical support for the operation and communication of IIDS.

  20. Critical evidence for the prediction error theory in associative learning.

    Science.gov (United States)

    Terao, Kanta; Matsumoto, Yukihisa; Mizunami, Makoto

    2015-03-10

    In associative learning in mammals, it is widely accepted that the discrepancy, or error, between actual and predicted reward determines whether learning occurs. Complete evidence for the prediction error theory, however, has not been obtained in any learning system: prediction error theory stems from the finding of a blocking phenomenon, but blocking can also be accounted for by other theories, such as the attentional theory. We demonstrated blocking in classical conditioning in crickets and obtained evidence to reject the attentional theory. To obtain further evidence supporting the prediction error theory and rejecting alternative theories, we constructed a neural model to match the prediction error theory, by modifying our previous model of learning in crickets, and we tested a prediction from the model: the model predicts that pharmacological intervention in octopaminergic transmission during appetitive conditioning impairs learning but not the formation of the reward prediction itself, and it thus predicts no learning in subsequent training. We observed such an "auto-blocking", which could be accounted for by the prediction error theory but not by competing theories of blocking. This study unambiguously demonstrates the validity of the prediction error theory in associative learning.
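
    The prediction error account can be illustrated with the classic Rescorla-Wagner update (not the authors' cricket circuit model): associative strength changes in proportion to the error between actual and predicted reward, which reproduces blocking. Parameter values below are illustrative.

```python
def rescorla_wagner(trials, alpha=0.3, lam=1.0):
    """Associative strengths under the Rescorla-Wagner rule. Each trial is
    (set of present cues, reward 0/1); learning is driven by the prediction
    error lam * reward minus the summed strengths of the present cues."""
    V = {}
    for cues, reward in trials:
        prediction = sum(V.get(c, 0.0) for c in cues)
        error = lam * reward - prediction
        for c in cues:
            V[c] = V.get(c, 0.0) + alpha * error
    return V

# Blocking: pretraining cue A alone drives the error to ~0, so cue B
# acquires almost no strength during subsequent compound (AB) training.
pretrain = [({"A"}, 1)] * 50
compound = [({"A", "B"}, 1)] * 50
V_blocked = rescorla_wagner(pretrain + compound)

# Control: compound training only; B acquires substantial strength.
V_control = rescorla_wagner(compound)
```

    The same mechanism yields the "auto-blocking" logic above: once the reward prediction is fully formed, the error term, and hence further learning, vanishes.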