WorldWideScience

Sample records for superposition error bsse

  1. Strategies for reducing basis set superposition error (BSSE) in O/Au and O/Ni

    KAUST Repository

    Shuttleworth, I.G.

    2015-11-01

    © 2015 Elsevier Ltd. All rights reserved. The effect of basis set superposition error (BSSE) and effective strategies for its minimisation have been investigated using the SIESTA-LCAO DFT package. Variation of the energy shift parameter ΔEPAO has been shown to reduce BSSE for bulk Au and Ni and across their oxygenated surfaces. Alternative strategies based on either the expansion or contraction of the basis set have been shown to be ineffective in reducing BSSE. The binding energies for the surface systems obtained using LCAO were compared with BSSE-free plane-wave energies.

  2. On the validity of the basis set superposition error and complete basis set limit extrapolations for the binding energy of the formic acid dimer

    Science.gov (United States)

    Miliordos, Evangelos; Xantheas, Sotiris S.

    2015-03-01

    We report the variation of the binding energy of the Formic Acid Dimer with the size of the basis set at the Coupled Cluster with iterative Singles, Doubles and perturbatively connected Triple replacements [CCSD(T)] level of theory, estimate the Complete Basis Set (CBS) limit, and examine the validity of the Basis Set Superposition Error (BSSE) correction for this quantity that was previously challenged by Kalescky, Kraka, and Cremer (KKC) [J. Chem. Phys. 140, 084315 (2014)]. Our results indicate that the BSSE correction, including terms that account for the substantial geometry change of the monomers due to the formation of two strong hydrogen bonds in the dimer, is indeed valid for obtaining accurate estimates for the binding energy of this system as it exhibits the expected decrease with increasing basis set size. We attribute the discrepancy between our current results and those of KKC to their use of a valence basis set in conjunction with the correlation of all electrons (i.e., including the 1s of C and O). We further show that the use of a core-valence set in conjunction with all electron correlation converges faster to the CBS limit as the BSSE correction is less than half of that for the valence electron/valence basis set case. The uncorrected and BSSE-corrected binding energies were found to produce the same (within 0.1 kcal/mol) CBS limits. We obtain CCSD(T)/CBS best estimates for De = -16.1 ± 0.1 kcal/mol and D0 = -14.3 ± 0.1 kcal/mol, the latter in excellent agreement with the experimental value of -14.22 ± 0.12 kcal/mol.
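
    For readers unfamiliar with the geometry-relaxation terms mentioned above, a common textbook form of the counterpoise-corrected binding energy including monomer deformation is sketched below (generic notation, not transcribed from the paper; superscripts denote the basis used, arguments the geometry):

        \Delta E^{\mathrm{CP}} = E_{AB}^{ab}(R_{AB}) - E_{A}^{ab}(R_{AB}) - E_{B}^{ab}(R_{AB})
                                 + \left[ E_{A}^{a}(R_{AB}) - E_{A}^{a}(R_{A}) \right]
                                 + \left[ E_{B}^{b}(R_{AB}) - E_{B}^{b}(R_{B}) \right]

    Here ab is the dimer basis, a and b are the monomer bases, R_{AB} denotes monomer geometries frozen at their in-dimer values, and R_A, R_B are the relaxed monomer geometries; the bracketed terms carry the geometry-change (deformation) contribution.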

  3. A simple and efficient dispersion correction to the Hartree-Fock theory (2): Incorporation of a geometrical correction for the basis set superposition error.

    Science.gov (United States)

    Yoshida, Tatsusada; Hayashi, Takahisa; Mashima, Akira; Chuman, Hiroshi

    2015-10-01

    One of the most challenging problems in computer-aided drug discovery is the accurate prediction of the binding energy between a ligand and a protein. For accurate estimation of net binding energy ΔEbind in the framework of the Hartree-Fock (HF) theory, it is necessary to estimate two additional energy terms: the dispersion interaction energy (Edisp) and the basis set superposition error (BSSE). We previously reported a simple and efficient dispersion correction, Edisp, to the Hartree-Fock theory (HF-Dtq). In the present study, an approximation procedure for estimating BSSE proposed by Kruse and Grimme, a geometrical counterpoise correction (gCP), was incorporated into HF-Dtq (HF-Dtq-gCP). The relative weights of the Edisp (Dtq) and BSSE (gCP) terms were determined to reproduce ΔEbind calculated with CCSD(T)/CBS or /aug-cc-pVTZ (HF-Dtq-gCP (scaled)). The performance of HF-Dtq-gCP (scaled) was compared with that of B3LYP-D3(BJ)-bCP (dispersion corrected B3LYP with the Boys and Bernardi counterpoise correction (bCP)), by taking ΔEbind (CCSD(T)-bCP) of small non-covalent complexes as a 'gold standard'. As a critical test, HF-Dtq-gCP (scaled)/6-31G(d) and B3LYP-D3(BJ)-bCP/6-31G(d) were applied to the complex model for HIV-1 protease and its potent inhibitor, KNI-10033. The present results demonstrate that HF-Dtq-gCP (scaled) is a useful and powerful remedy for accurately and promptly predicting ΔEbind between a ligand and a protein, albeit a simple correction procedure.
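
    Read schematically, the corrected binding energy described above combines three ingredients; the scale factors below are generic placeholders for the fitted "relative weights" and are not the authors' notation:

        \Delta E_{\mathrm{bind}} \approx \Delta E_{\mathrm{HF}} + s_{\mathrm{disp}}\, \Delta E_{\mathrm{disp}}^{\mathrm{Dtq}} + s_{\mathrm{gCP}}\, \Delta E_{\mathrm{gCP}}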

  4. Effective empirical corrections for basis set superposition error in the def2-SVPD basis: gCP and DFT-C

    Science.gov (United States)

    Witte, Jonathon; Neaton, Jeffrey B.; Head-Gordon, Martin

    2017-06-01

    With the aim of mitigating the basis set error in density functional theory (DFT) calculations employing local basis sets, we herein develop two empirical corrections for basis set superposition error (BSSE) in the def2-SVPD basis, a basis which—when stripped of BSSE—is capable of providing near-complete-basis DFT results for non-covalent interactions. Specifically, we adapt the existing pairwise geometrical counterpoise (gCP) approach to the def2-SVPD basis, and we develop a beyond-pairwise approach, DFT-C, which we parameterize across a small set of intermolecular interactions. Both gCP and DFT-C are evaluated against the traditional Boys-Bernardi counterpoise correction across a set of 3402 non-covalent binding energies and isomerization energies. We find that the DFT-C method represents a significant improvement over gCP, particularly for non-covalently-interacting molecular clusters. Moreover, DFT-C is transferable among density functionals and can be combined with existing functionals—such as B97M-V—to recover large-basis results at a fraction of the cost.

  5. Ab initio O(N) elongation-counterpoise method for BSSE-corrected interaction energy analyses in biosystems

    Energy Technology Data Exchange (ETDEWEB)

    Orimoto, Yuuichi; Xie, Peng; Liu, Kai [Department of Material Sciences, Faculty of Engineering Sciences, Kyushu University, 6-1 Kasuga-Park, Fukuoka 816-8580 (Japan); Yamamoto, Ryohei [Department of Molecular and Material Sciences, Interdisciplinary Graduate School of Engineering Sciences, Kyushu University, 6-1 Kasuga-Park, Fukuoka 816-8580 (Japan); Imamura, Akira [Hiroshima Kokusai Gakuin University, 6-20-1 Nakano, Aki-ku, Hiroshima 739-0321 (Japan); Aoki, Yuriko, E-mail: aoki.yuriko.397@m.kyushu-u.ac.jp [Department of Material Sciences, Faculty of Engineering Sciences, Kyushu University, 6-1 Kasuga-Park, Fukuoka 816-8580 (Japan); Japan Science and Technology Agency, CREST, 4-1-8 Hon-chou, Kawaguchi, Saitama 332-0012 (Japan)

    2015-03-14

    An Elongation-counterpoise (ELG-CP) method was developed for performing accurate and efficient interaction energy analysis and correcting the basis set superposition error (BSSE) in biosystems. The method was achieved by combining our developed ab initio O(N) elongation method with the conventional counterpoise method proposed for solving the BSSE problem. As a test, the ELG-CP method was applied to the analysis of the inter-strand interaction energies of DNA with respect to the alkylation-induced base pair mismatch phenomenon that causes a transition from G⋯C to A⋯T. It was found that the ELG-CP method showed high efficiency (nearly linear-scaling) and high accuracy with a negligibly small energy error in the total energy calculations (on the order of 10⁻⁷–10⁻⁸ hartree/atom) as compared with the conventional method during the counterpoise treatment. Furthermore, the magnitude of the BSSE was found to be ca. −290 kcal/mol for the calculation of a DNA model with 21 base pairs. This emphasizes the importance of BSSE correction when a limited size basis set is used to study the DNA models and compare small energy differences between them. In this work, we quantitatively estimated the inter-strand interaction energy for each possible step in the transition process from G⋯C to A⋯T by the ELG-CP method. It was found that the base pair replacement in the process only affects the interaction energy for a limited area around the mismatch position with a few adjacent base pairs. From the interaction energy point of view, our results showed that a base pair sliding mechanism possibly occurs after the alkylation of guanine to gain the maximum possible number of hydrogen bonds between the bases. In addition, the steps leading to the A⋯T replacement accompanied by replication were found to be unfavorable processes corresponding to ca. 10 kcal/mol loss in stabilization energy. The present study indicated that the ELG-CP method is promising for

  6. Recommending Hartree-Fock theory with London-dispersion and basis-set-superposition corrections for the optimization or quantum refinement of protein structures.

    Science.gov (United States)

    Goerigk, Lars; Collyer, Charles A; Reimers, Jeffrey R

    2014-12-18

    We demonstrate the importance of properly accounting for London dispersion and basis-set-superposition error (BSSE) in quantum-chemical optimizations of protein structures, factors that are often still neglected in contemporary applications. We optimize a portion of an ensemble of conformationally flexible lysozyme structures obtained from highly accurate X-ray crystallography data that serve as a reliable benchmark. We not only analyze root-mean-square deviations from the experimental Cartesian coordinates, but also, for the first time, demonstrate how London dispersion and BSSE influence crystallographic R factors. Our conclusions parallel recent recommendations for the optimization of small gas-phase peptide structures made by some of the present authors: Hartree-Fock theory extended with Grimme's recent dispersion and BSSE corrections (HF-D3-gCP) is superior to popular density functional theory (DFT) approaches. Not only are statistical errors on average lower with HF-D3-gCP, but also the convergence behavior is much better. In particular, we show that the BP86/6-31G* approach should not be relied upon as a black-box method, despite its widespread use, as its success is based on an unpredictable cancellation of errors. Using HF-D3-gCP is technically straightforward, and we therefore encourage users of quantum-chemical methods to adopt this approach in future applications.

  7. Second-order Møller-Plesset perturbation theory without basis set superposition error. II. Open-shell systems.

    Science.gov (United States)

    Salvador, P; Mayer, I

    2004-04-01

    The basis set superposition error-free second-order Møller-Plesset perturbation theory of intermolecular interactions, based on the "chemical Hamiltonian approach," which has been introduced in Part I, is applied here to open-shell systems by using a new, effective computer realization. The results of the numerical examples considered (CH(4)⋯HO, NO⋯HF) showed again the perfect performance of the method. Striking agreement has again been found with the results of the a posteriori counterpoise correction (CP) scheme in the case of large, well-balanced basis sets, which is also in agreement with a most recent formal theoretical analysis. The difficulties of the CP correction in open-shell systems are also discussed.

  8. Geometrical correction for the inter- and intramolecular basis set superposition error in periodic density functional theory calculations.

    Science.gov (United States)

    Brandenburg, Jan Gerit; Alessio, Maristella; Civalleri, Bartolomeo; Peintinger, Michael F; Bredow, Thomas; Grimme, Stefan

    2013-09-26

    We extend the previously developed geometrical correction for the inter- and intramolecular basis set superposition error (gCP) to periodic density functional theory (DFT) calculations. We report gCP results compared to those from the standard Boys-Bernardi counterpoise correction scheme and large basis set calculations. The applicability of the method to molecular crystals as the main target is tested for the benchmark set X23. It consists of 23 noncovalently bound crystals as introduced by Johnson et al. (J. Chem. Phys. 2012, 137, 054103) and refined by Tkatchenko et al. (J. Chem. Phys. 2013, 139, 024705). In order to accurately describe long-range electron correlation effects, we use the standard atom-pairwise dispersion correction scheme DFT-D3. We show that a combination of DFT energies with small atom-centered basis sets, the D3 dispersion correction, and the gCP correction can accurately describe van der Waals and hydrogen-bonded crystals. Mean absolute deviations of the X23 sublimation energies can be reduced by more than 70% and 80% for the standard functionals PBE and B3LYP, respectively, to small residual mean absolute deviations of about 2 kcal/mol (corresponding to 13% of the average sublimation energy). As a further test, we compute the interlayer interaction of graphite for varying distances and obtain a good equilibrium distance and interaction energy of 6.75 Å and -43.0 meV/atom at the PBE-D3-gCP/SVP level. We fit the gCP scheme for a recently developed pob-TZVP solid-state basis set and obtain reasonable results for the X23 benchmark set and the potential energy curve for water adsorption on a nickel (110) surface.

  9. Attenuating Away the Errors in Inter- and Intramolecular Interactions from Second-Order Møller-Plesset Calculations in the Small Aug-cc-pVDZ Basis Set.

    Science.gov (United States)

    Goldey, Matthew; Head-Gordon, Martin

    2012-12-06

    Second-order Møller-Plesset perturbation theory (MP2) treats electron correlation at low computational cost, but suffers from basis set superposition error (BSSE) and fundamental inaccuracies in long-range contributions. The cost differential between complete basis set (CBS) and small basis MP2 restricts system sizes where BSSE can be removed. Range-separation of MP2 could yield more tractable and/or accurate forms for short- and long-range correlation. Retaining only short-range contributions proves to be effective for MP2 in the small aug-cc-pVDZ (aDZ) basis. Using one range-separation parameter, superior behavior is obtained versus both MP2/aDZ and MP2/CBS for inter- and intramolecular test sets. Attenuation of the long-range helps to cancel both BSSE and intrinsic MP2 errors. Direct scaling of the MP2 correlation energy proves useful as well. The resulting SMP2/aDZ, MP2(erfc, aDZ), and MP2(terfc, aDZ) methods perform far better than MP2/aDZ across systems with hydrogen-bonding, dispersion, and mixed interactions at a fraction of MP2/CBS computational cost.
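
    The erfc attenuation referred to above is conventionally realized by splitting the Coulomb operator and retaining only its short-range part; with r_0 denoting the single range-separation parameter (notation assumed here), the standard partition is

        \frac{1}{r_{12}} = \underbrace{\frac{\operatorname{erfc}(r_{12}/r_{0})}{r_{12}}}_{\text{kept (short range)}} + \underbrace{\frac{\operatorname{erf}(r_{12}/r_{0})}{r_{12}}}_{\text{discarded (long range)}}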

  10. The Effect of the Basis-Set Superposition Error on the Calculation of Dispersion Interactions:  A Test Study on the Neon Dimer.

    Science.gov (United States)

    Monari, Antonio; Bendazzoli, Gian Luigi; Evangelisti, Stefano; Angeli, Celestino; Ben Amor, Nadia; Borini, Stefano; Maynau, Daniel; Rossi, Elda

    2007-03-01

    The dispersion interactions of the Ne2 dimer were studied using both the long-range perturbative and supramolecular approaches: for the long-range approach, full CI or string-truncated CI methods were used, while for the supramolecular treatments, the energy curves were computed by using configuration interaction with single and double excitation (CISD), coupled cluster with single and double excitation, and coupled cluster with single and double (and perturbative) triple excitations. From the interatomic potential-energy curves obtained by the supramolecular approach, the C6 and C8 dispersion coefficients were computed via an interpolation scheme, and they were compared with the corresponding values obtained within the long-range perturbative treatment. We found that the lack of size consistency of the CISD approach makes this method completely useless for computing dispersion coefficients even when the effect of the basis-set superposition error on the dimer curves is considered. The largest full-CI space we were able to use contains more than 1 billion symmetry-adapted Slater determinants, and it is, to our knowledge, the largest calculation of second-order properties ever done at the full-CI level so far. Finally, a new data format and libraries (Q5Cost) have been used in order to interface different codes used in the present study.

  11. Interference of macroscopic superpositions

    CERN Document Server

    Vecchi, I

    2000-01-01

    We propose a simple experimental procedure based on the Elitzur-Vaidman scheme to implement a quantum nondemolition measurement testing the persistence of macroscopic superpositions. We conjecture that its implementation will reveal the persistence of superpositions of macroscopic objects in the absence of a direct act of observation.

  12. Postselected optomechanical superpositions

    CERN Document Server

    Pepper, Brian; Jeffrey, Evan; Simon, Christoph; Bouwmeester, Dirk

    2011-01-01

    We present a scheme for achieving macroscopic quantum superpositions in optomechanical systems by using single photon postselection. This method relieves many of the challenges associated with previous optical schemes for measuring macroscopic superpositions, and only requires the devices to be in the weak coupling regime. It requires only small improvements on currently achievable device parameters, and allows observation of decoherence on a timescale unconstrained by the system's optical decay time. Prospects for observing novel decoherence mechanisms are also discussed.

  13. Nonlinear dynamics by mode superposition

    Energy Technology Data Exchange (ETDEWEB)

    Nickell, R.E.

    1976-01-01

    A mode superposition technique for approximately solving nonlinear initial-boundary-value problems of structural dynamics is discussed, and results for examples involving large deformation are compared to those obtained with implicit direct integration methods such as the Newmark generalized acceleration and Houbolt backward-difference operators. The initial natural frequencies and mode shapes are found by inverse power iteration with the trial vectors for successively higher modes being swept by Gram-Schmidt orthonormalization at each iteration. The subsequent modal spectrum for nonlinear states is based upon the tangent stiffness of the structure and is calculated by a subspace iteration procedure that involves matrix multiplication only, using the most recently computed spectrum as an initial estimate. Then, a precise time integration algorithm that has no artificial damping or phase velocity error for linear problems is applied to the uncoupled modal equations of motion. Squared-frequency extrapolation is examined for nonlinear problems as a means by which these qualities of accuracy and precision can be maintained when the state of the system (and, thus, the modal spectrum) is changing rapidly. The results indicate that a number of important advantages accrue to nonlinear mode superposition: (a) there is no significant difference in total solution time between mode superposition and implicit direct integration analyses for problems having narrow matrix half-bandwidth (in fact, as bandwidth increases, mode superposition becomes more economical), (b) solution accuracy is under better control since the analyst has ready access to modal participation factors and the ratios of time step size to modal period, and (c) physical understanding of nonlinear dynamic response is improved since the analyst is able to observe the changes in the modal spectrum as deformation proceeds.
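
    To make the mode-superposition idea concrete, here is a minimal linear Python sketch (a small spring-mass chain integrated in uncoupled modal coordinates); it illustrates only the projection onto modes and the superposition of modal responses, not the paper's tangent-stiffness update, subspace iteration, or squared-frequency extrapolation:

        import numpy as np

        # Small linear spring-mass chain: M x'' + K x = f, with unit masses (M = I)
        n = 4
        K = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # fixed-fixed chain stiffness
        f = np.zeros(n); f[-1] = 1.0                             # constant load on the last mass

        # Natural frequencies (squared) and orthonormal mode shapes
        w2, Phi = np.linalg.eigh(K)

        # Uncoupled modal equations: q_i'' + w_i^2 q_i = p_i, with p = Phi^T f
        p = Phi.T @ f
        q = np.zeros(n); qdot = np.zeros(n)
        dt = 1.0e-3
        for _ in range(5000):                                    # semi-implicit Euler per mode, from rest
            qdot += dt * (p - w2 * q)
            q += dt * qdot

        # Superpose the modal responses back into physical coordinates
        x = Phi @ q
        print("dynamic displacement (oscillates about the static solution):", np.round(x, 3))
        print("static solution K^-1 f for reference:", np.round(np.linalg.solve(K, f), 3))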

  14. Superpositions of probability distributions

    Science.gov (United States)

    Jizba, Petr; Kleinert, Hagen

    2008-09-01

    Probability distributions which can be obtained from superpositions of Gaussian distributions of different variances v = σ² play a favored role in quantum theory and financial markets. Such superpositions need not necessarily obey the Chapman-Kolmogorov semigroup relation for Markovian processes because they may introduce memory effects. We derive the general form of the smearing distributions in v which do not destroy the semigroup property. The smearing technique has two immediate applications. It permits simplifying the system of Kramers-Moyal equations for smeared and unsmeared conditional probabilities, and can be conveniently implemented in the path integral calculus. In many cases, the superposition of path integrals can be evaluated much more easily than the initial path integral. Three simple examples are presented, and it is shown how the technique is extended to quantum mechanics.
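
    As a minimal numerical illustration (not taken from the paper) of such variance superpositions: smearing a zero-mean Gaussian over an inverse-gamma distribution of variances reproduces a heavy-tailed Student-t density, which the Monte Carlo check below confirms.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        nu, n = 3.0, 200_000           # Student-t degrees of freedom, sample size

        # Smearing distribution of variances v: inverse-gamma with shape and scale nu/2
        v = stats.invgamma(a=nu / 2, scale=nu / 2).rvs(size=n, random_state=rng)
        # Superposition of Gaussians: each sample drawn from N(0, v) with its own variance
        x = rng.normal(0.0, np.sqrt(v))

        # Compare the smeared-Gaussian histogram with the analytic Student-t density
        hist, edges = np.histogram(x, bins=200, range=(-8, 8), density=True)
        centers = 0.5 * (edges[:-1] + edges[1:])
        for g in (-4.0, -2.0, 0.0, 2.0, 4.0):
            i = int(np.argmin(np.abs(centers - g)))
            print(f"x={g:+.1f}  empirical={hist[i]:.4f}  student-t={stats.t(df=nu).pdf(g):.4f}")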

  16. Estimation of Web Page Crawling Path Loss Error Based on Superposition Coding

    Institute of Scientific and Technical Information of China (English)

    邢计亮

    2015-01-01

    Web page crawling is the best way to retrieve Web text feature data, and optimized estimation of the crawling path loss error can improve the performance of Web data mining. Traditional methods crawl Web pages by single-mode matching based on linear-filtering detection; constrained by weak signal amplitude and a critical threshold, the path loss is large and the path loss error cannot be estimated effectively. This paper proposes an algorithm for estimating the Web page crawling path loss error based on superposition-coding feature statistics. An objective function for Web page text feature crawling is constructed, a Web network path loss model is built, and a superposition coding algorithm is designed to compute the feature statistics, yielding a concept lattice of Web page crawling paths. Simulation results show that the algorithm effectively improves the precision of the path loss error estimate, and thereby improves the precision of Web text data crawling and the mining performance for text feature data.

  17. Superposition Enhanced Nested Sampling

    Directory of Open Access Journals (Sweden)

    Stefano Martiniani

    2014-08-01

    Full Text Available The theoretical analysis of many problems in physics, astronomy, and applied mathematics requires an efficient numerical exploration of multimodal parameter spaces that exhibit broken ergodicity. Monte Carlo methods are widely used to deal with these classes of problems, but such simulations suffer from a ubiquitous sampling problem: The probability of sampling a particular state is proportional to its entropic weight. Devising an algorithm capable of sampling efficiently the full phase space is a long-standing problem. Here, we report a new hybrid method for the exploration of multimodal parameter spaces exhibiting broken ergodicity. Superposition enhanced nested sampling combines the strengths of global optimization with the unbiased or athermal sampling of nested sampling, greatly enhancing its efficiency with no additional parameters. We report extensive tests of this new approach for atomic clusters that are known to have energy landscapes for which conventional sampling schemes suffer from broken ergodicity. We also introduce a novel parallelization algorithm for nested sampling.

  18. Network class superposition analyses.

    Directory of Open Access Journals (Sweden)

    Carl A B Pearson

    Full Text Available Networks are often used to understand a whole system by modeling the interactions among its pieces. Examples include biomolecules in a cell interacting to provide some primary function, or species in an environment forming a stable community. However, these interactions are often unknown; instead, the pieces' dynamic states are known, and network structure must be inferred. Because observed function may be explained by many different networks (e.g., ≈10^30 for the yeast cell cycle process), considering dynamics beyond this primary function means picking a single network or suitable sample: measuring over all networks exhibiting the primary function is computationally infeasible. We circumvent that obstacle by calculating the network class ensemble. We represent the ensemble by a stochastic matrix T, which is a transition-by-transition superposition of the system dynamics for each member of the class. We present concrete results for T derived from Boolean time series dynamics on networks obeying the Strong Inhibition rule, by applying T to several traditional questions about network dynamics. We show that the distribution of the number of point attractors can be accurately estimated with T. We show how to generate Derrida plots based on T. We show that T-based Shannon entropy outperforms other methods at selecting experiments to further narrow the network structure. We also outline an experimental test of predictions based on T. We motivate all of these results in terms of a popular molecular biology Boolean network model for the yeast cell cycle, but the methods and analyses we introduce are general. We conclude with open questions for T, for example, application to other models, computational considerations when scaling up to larger systems, and other potential analyses.
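
    A toy sketch of the transition-matrix superposition described above, for two hypothetical 3-node Boolean networks with uniform weighting (the Strong Inhibition rule and the yeast-cell-cycle model are not reproduced here):

        import numpy as np
        from itertools import product

        n = 3                                        # Boolean nodes -> 2**n global states
        states = list(product([0, 1], repeat=n))
        index = {s: i for i, s in enumerate(states)}

        def transition_matrix(update):
            """Deterministic state-to-state matrix for one candidate network."""
            T = np.zeros((2**n, 2**n))
            for s in states:
                T[index[s], index[update(s)]] = 1.0
            return T

        # Two hypothetical member networks of the class
        def net_a(s):                                # x0 <- x2, x1 <- x0 AND x2, x2 <- NOT x1
            return (s[2], s[0] & s[2], 1 - s[1])

        def net_b(s):                                # x0 <- x1 OR x2, x1 <- x2, x2 <- NOT x1
            return (s[1] | s[2], s[2], 1 - s[1])

        # Class ensemble: row-stochastic, transition-by-transition superposition of the members
        T = 0.5 * (transition_matrix(net_a) + transition_matrix(net_b))

        # Example analysis: Shannon entropy of each state's superposed next-state distribution
        with np.errstate(divide="ignore", invalid="ignore"):
            H = np.nansum(np.where(T > 0, -T * np.log2(T), 0.0), axis=1)
        for s, h in zip(states, H):
            print(s, f"next-state entropy = {h:.2f} bits")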

  19. Quantum superpositions of crystalline structures

    CERN Document Server

    Baltrusch, Jens D; De Chiara, Gabriele; Calarco, Tommaso; Morigi, Giovanna

    2011-01-01

    A procedure is discussed for creating coherent superpositions of motional states of ion strings. The motional states lie across the linear-zigzag structural transition, and their coherent superposition is achieved by means of spin-dependent forces, such that a coherent superposition of the electronic states of one ion evolves into an entangled state between the chain's internal and external degrees of freedom. It is shown that the creation of such an entangled state can be revealed by performing Ramsey interferometry with one ion of the chain.

  20. Superposition Attacks on Cryptographic Protocols

    DEFF Research Database (Denmark)

    Damgård, Ivan Bjerre; Funder, Jakob Løvstad; Nielsen, Jesper Buus

    2011-01-01

    Attacks on classical cryptographic protocols are usually modeled by allowing an adversary to ask queries from an oracle. Security is then defined by requiring that as long as the queries satisfy some constraint, there is some problem the adversary cannot solve, such as computing a certain piece of information. In this paper, we introduce a fundamentally new model of quantum attacks on classical cryptographic protocols, where the adversary is allowed to ask several classical queries in quantum superposition. This is a strictly stronger attack than the standard one, and we consider the security

  1. Superposition on a multicomputer system.

    Science.gov (United States)

    Murray, D C; Hoban, P W; Round, W H; Graham, I D; Metcalfe, P E

    1991-01-01

    Superposition (convolution using a noninvariant kernel) has been shown to be a highly promising technique for use in calculating dose distributions in radiotherapy treatment planning. However, one major difficulty that currently prevents use in routine planning is the computational effort required to perform the calculation in three dimensions. To help solve this problem the superposition technique has been implemented on a parallel processor multicomputer in order to examine the performance characteristics of such a system. Up to eight elements have been connected in a pipeline (linear array), and tree networks of three and seven processors have also been constructed (using INMOS T800 transputers). The significant results obtained with these networks are: (1) Both topologies provide near-linear speedup with increasing processor number (8 processors provide 7.81 times the computing power of a single processor when using an optimal communication packet size); (2) increasing communication packet size from 1 voxel to an optimum of approximately 40 voxels significantly reduces communication overhead per processor. Overhead per processor for a 7-element linear array is 6.9% when using 1-voxel packets, but only 1.8% when using 40-voxel packets; (3) the topology of the network has some effect on communication overhead: Arranging 7 processors in a 1-2-4 binary tree reduces overhead to 80.1% of that encountered using a 7-element linear array (with packet size of 1 voxel).

  2. Creating a Superposition of Unknown Quantum States.

    Science.gov (United States)

    Oszmaniec, Michał; Grudka, Andrzej; Horodecki, Michał; Wójcik, Antoni

    2016-03-18

    The superposition principle is one of the landmarks of quantum mechanics. The importance of quantum superpositions provokes questions about the limitations that quantum mechanics itself imposes on the possibility of their generation. In this work, we systematically study the problem of the creation of superpositions of unknown quantum states. First, we prove a no-go theorem that forbids the existence of a universal probabilistic quantum protocol producing a superposition of two unknown quantum states. Second, we provide an explicit probabilistic protocol generating a superposition of two unknown states, each having a fixed overlap with the known referential pure state. The protocol can be applied to generate coherent superposition of results of independent runs of subroutines in a quantum computer. Moreover, in the context of quantum optics it can be used to efficiently generate highly nonclassical states or non-Gaussian states.

  3. Superposition dose calculation in lung for 10MV photons.

    Science.gov (United States)

    Hoban, P W; Murray, D C; Metcalfe, P E; Round, W H

    1990-06-01

    Currently available radiotherapy treatment planning systems employ scatter function models such as ETAR and Batho dSAR for dose calculation. Errors using these models for high energy photon irradiation occur in and beyond lung tissue for small fields. For larger fields, central axis dose is correctly predicted but penumbral broadening in lung is underestimated. The major source of error is the assumption that lateral electronic equilibrium is always established. A superposition algorithm has been developed for 10 MV photons which calculates the dose by convolving the TERMA (Total Energy Released per unit MAss by primary photons) with a dose spread array formed using the EGS4 Monte Carlo code. TERMA and dose spread arrays are both generated using a 10 component photon energy spectrum. Dose in inhomogeneous media is calculated using dose spread arrays generated for different density media and by scaling dose spread arrays according to density variations. This method ensures that electronic disequilibrium is modelled in situations where it exists. Superposition results in a lung phantom for a 5 x 5 cm field agree with EGS4 Monte Carlo results to within 2% for ρ = 0.20 g cm⁻³ and ρ = 0.30 g cm⁻³ lung. Profiles generated by superposition for a 10 x 10 cm field at mid-lung and compared with film measurements show that penumbral broadening in low density material is also correctly predicted.
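
    A deliberately crude 1D illustration of the superposition step itself (TERMA spread along a fixed kernel); it ignores the density-scaled, Monte Carlo-generated dose spread arrays and polyenergetic spectrum used in the paper and is not a dose model:

        import numpy as np

        # 1D toy phantom, 1 mm bins: water / low-density "lung" slab / water
        density = np.ones(200)
        density[60:120] = 0.25

        # Primary fluence attenuates with radiological depth; TERMA per unit mass ~ mu/rho * fluence
        mu_water = 0.005                                  # mm^-1, illustrative value only
        fluence = np.exp(-np.cumsum(mu_water * density))
        terma = mu_water * fluence

        # Forward-peaked dose-spread kernel (a crude stand-in for the Monte Carlo dose spread arrays)
        offsets = np.arange(-5, 40)
        kernel = np.where(offsets >= 0, np.exp(-offsets / 8.0), 0.05 * np.exp(offsets / 2.0))
        kernel /= kernel.sum()

        # Superposition: spread the energy released in each voxel along the kernel
        dose = np.zeros_like(terma)
        for i, t in enumerate(terma):
            for off, k in zip(offsets, kernel):
                j = i + off
                if 0 <= j < dose.size:
                    dose[j] += t * k

        print("relative dose at 50 mm (water):", round(float(dose[50] / dose.max()), 3))
        print("relative dose at 90 mm (lung): ", round(float(dose[90] / dose.max()), 3))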

  4. Verifying quantum superpositions at metre scales

    CERN Document Server

    Stamper-Kurn, Dan M; Müller, Holger

    2016-01-01

    While the existence of quantum superpositions of massive particles over microscopic separations has been established since the birth of quantum mechanics, the maintenance of superposition states over macroscopic separations is a subject of modern experimental tests. In Ref. [1], T. Kovachy et al. report on applying optical pulses to place a freely falling Bose-Einstein condensate into a superposition of two trajectories that separate by an impressive distance of 54 cm before being redirected toward one another. When the trajectories overlap, a final optical pulse produces interference with high contrast, but with random phase, between the two wave packets. Contrary to claims made in Ref. [1], we argue that the observed interference is consistent with, but does not prove, that the spatially separated atomic ensembles were in a quantum superposition state. Therefore, the persistence of such superposition states remains experimentally unestablished.

  5. Macroscopic superpositions and gravimetry with quantum magnetomechanics

    Science.gov (United States)

    Johnsson, Mattias T.; Brennen, Gavin K.; Twamley, Jason

    2016-11-01

    Precision measurements of gravity can provide tests of fundamental physics and are of broad practical interest for metrology. We propose a scheme for absolute gravimetry using a quantum magnetomechanical system consisting of a magnetically trapped superconducting resonator whose motion is controlled and measured by a nearby RF-SQUID or flux qubit. By driving the massive mechanical resonator into a macroscopic superposition of two different heights, we predict that our interferometry protocol could, subject to systematic errors, achieve a gravimetric sensitivity of Δg/g ~ 2.2 × 10⁻¹⁰ Hz⁻¹/², with a spatial resolution of a few nanometres. This sensitivity and spatial resolution exceed the precision of current state-of-the-art atom-interferometric and corner-cube gravimeters by more than an order of magnitude, and unlike classical superconducting interferometers the scheme produces an absolute rather than relative measurement of gravity. In addition, our scheme takes measurements at ~10 kHz, a region where the ambient vibrational noise spectrum is heavily suppressed compared to the ~10 Hz region relevant for current cold atom gravimeters.

  6. Superpositions of Lorentzians as the class of causal functions

    CERN Document Server

    Dirdal, Christopher A

    2013-01-01

    We prove that all functions obeying the Kramers-Kronig relations can be approximated as superpositions of Lorentzian functions, to any precision. As a result, the typical text-book analysis of dielectric dispersion response functions in terms of Lorentzians may be viewed as encompassing the whole class of causal functions under the conditions presented here. A further consequence is that Lorentzian resonances may be viewed as possible building blocks for engineering any desired metamaterial response. Two example functions, far from typical Lorentzian resonance behavior, are expressed in terms of Lorentzian superpositions: A steep dispersion medium that achieves large negative susceptibility with arbitrarily low loss/gain, and an optimal realization of a perfect lens over a bandwidth. Error bounds are derived for the approximation.
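
    A small numerical sketch in the spirit of the result (fitting an arbitrary absorption profile by a non-negative superposition of Lorentzians on a fixed grid of resonance frequencies; the paper's analytic construction and error bounds are not reproduced):

        import numpy as np
        from scipy.optimize import nnls

        w = np.linspace(0.1, 10.0, 400)                    # frequency grid

        def lorentzian_imag(w, w0, gamma):
            """Absorptive (imaginary) part of a unit-strength Lorentzian resonance."""
            return gamma * w / ((w0**2 - w**2) ** 2 + (gamma * w) ** 2)

        # Target: a broad, decidedly non-Lorentzian absorption profile (illustrative only)
        target = np.exp(-((w - 4.0) ** 2) / 4.0) + 0.3 / (1.0 + (w - 7.0) ** 2)

        # Basis: Lorentzians with resonance frequencies spread over the band
        centers = np.linspace(0.5, 9.5, 60)
        A = np.column_stack([lorentzian_imag(w, w0, gamma=0.4) for w0 in centers])

        weights, _ = nnls(A, target)                       # non-negative superposition
        fit = A @ weights

        print("active Lorentzians:", int(np.sum(weights > 1e-8)))
        print("max absolute fit error:", float(np.max(np.abs(fit - target))))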

  7. Mixed superposition rules and the Riccati hierarchy

    Science.gov (United States)

    Grabowski, Janusz; de Lucas, Javier

    Mixed superposition rules, i.e., functions describing the general solution of a system of first-order differential equations in terms of a generic family of particular solutions of first-order systems and some constants, are studied. The main achievement is a generalization of the celebrated Lie-Scheffers Theorem, characterizing systems admitting a mixed superposition rule. This somehow unexpected result says that such systems are exactly Lie systems, i.e., they admit a standard superposition rule. This provides a new and powerful tool for finding Lie systems, which is applied here to studying the Riccati hierarchy and to retrieving some known results in a more efficient and simpler way.

  8. Mixed superposition rules and the Riccati hierarchy

    CERN Document Server

    Grabowski, Janusz

    2012-01-01

    Mixed superposition rules, i.e., functions describing the general solution of a system of first-order differential equations in terms of a generic family of particular solutions of first-order systems and some constants, are studied. The main achievement is a generalization of the celebrated Lie-Scheffers Theorem, characterizing systems admitting a mixed superposition rule. This somehow unexpected result says that such systems are exactly Lie systems, i.e., they admit a standard superposition rule. This provides a new and powerful tool for finding Lie systems, which is applied here to studying the Riccati hierarchy and to retrieving some known results in a more efficient and simpler way.

  9. Decoherence of superposition states in trapped ions

    CSIR Research Space (South Africa)

    Uys, H

    2010-09-01

    Full Text Available This paper investigates the decoherence of superpositions of hyperfine states of 9Be+ ions due to spontaneous scattering of off-resonant light. It was found that, contrary to conventional wisdom, elastic Rayleigh scattering can have major...

  10. Robust Mesoscopic Superposition of Ultracold Atoms

    CERN Document Server

    Hallwood, David W; Brand, Joachim

    2010-01-01

    Quantum superpositions of macroscopically distinct states, as in Schroedinger's example of a dead and alive cat, are important for our understanding of quantum mechanics and carry great promise for enhanced precision measurement techniques. Due to their inherent fragility, the maximally entangled "NOON" states engineered in optics and spin systems for ultra-precise spectroscopy have been limited to 10 particles. The related mesoscopic superpositions of flux states consisting of 10^9 Cooper pairs observed in superconducting rings have proven more robust but their microscopic nature is debated. Binary superpositions with multiple ultra-cold atoms have not yet been seen and existing proposals suffer severe limitations due to decoherence and the unfavorable scaling of precision and time scales needed to produce these states. In this paper we show how robust superpositions of mesoscopic flow in a ring trap can be made with strongly-correlated ultra-cold atoms under one-dimensional confinement. We present a microsc...

  11. Exclusion of identification by negative superposition

    Directory of Open Access Journals (Sweden)

    Takač Šandor

    2012-01-01

    Full Text Available The paper presents the first report of negative superposition in our country. A photograph of a randomly selected young living woman was superimposed on a previously discovered female skull. The computer program Adobe Photoshop 7.0 was used. Digitized photographs of the skull and the face, after being uploaded to a computer, were superimposed on each other and displayed on the monitor in order to assess their possible similarities or differences. Special attention was paid to matching the same anthropometric points of the skull and face, as well as to following their contours. The process of fitting the skull and the photograph usually starts by setting the eyes in the correct position relative to the orbits. In this case, the gonions of the lower jaw extend beyond the contour of the face and the gnathion is placed too high. Positioning the chin, mouth and nose in their correct anatomical positions cannot be achieved. All the difficulties associated with the superposition were recorded, with special emphasis on a critical evaluation of the results of a negative superposition. Negative superposition has greater probative value (exclusion of identification) than positive superposition (possible identification). A 100% negative superposition is easily achieved, but a 100% positive one almost never is. 'Each skull is unique and, viewed from different perspectives, is always a new challenge.' From this point of view, identification can be negative or of high probability.

  12. Scan Quantum Mechanics: Quantum Inertia Stops Superposition

    CERN Document Server

    Gato-Rivera, Beatriz

    2015-01-01

    A novel interpretation of the quantum mechanical superposition is put forward. Quantum systems scan all possible available states and switch randomly and very rapidly among them. The longer they remain in a given state, the larger the probability of the system to be found in that state during a measurement. A crucial property that we postulate is quantum inertia, that increases whenever a constituent is added, or the system is perturbed with all kinds of interactions. Once the quantum inertia $I_q$ reaches a critical value $I_{cr}$ for an observable, the switching among the different eigenvalues of that observable stops and the corresponding superposition comes to an end. Consequently, increasing the mass, temperature, gravitational force, etc. of a quantum system increases its quantum inertia until the superposition of states disappears for all the observables and the system transmutes into a classical one. The process could be reversible decreasing the size, temperature, gravitational force, etc. leading to...

  13. SUPERPOSITION OF POLYTROPES IN THE INNER HELIOSHEATH

    Energy Technology Data Exchange (ETDEWEB)

    Livadiotis, G., E-mail: glivadiotis@swri.edu [Southwest Research Institute, San Antonio, TX (United States)

    2016-03-15

    This paper presents a possible generalization of the equation of state and Bernoulli's integral when a superposition of polytropic processes applies in space and astrophysical plasmas. The theory of polytropic thermodynamic processes for a fixed polytropic index is extended for a superposition of polytropic indices. In general, the superposition may be described by any distribution of polytropic indices, but emphasis is placed on a Gaussian distribution. The polytropic density–temperature relation has been used in numerous analyses of space plasma data. This linear relation on a log–log scale is now generalized to a concave-downward parabola that is able to describe the observations better. The model of the Gaussian superposition of polytropes is successfully applied in the proton plasma of the inner heliosheath. The estimated mean polytropic index is near zero, indicating the dominance of isobaric thermodynamic processes in the sheath, similar to other previously published analyses. By computing Bernoulli's integral and applying its conservation along the equator of the inner heliosheath, the magnetic field in the inner heliosheath is estimated, B ∼ 2.29 ± 0.16 μG. The constructed normalized histogram of the values of the magnetic field is similar to that derived from a different method that uses the concept of large-scale quantization, bringing incredible insights to this novel theory.

  14. Generation of picosecond pulsed coherent state superpositions

    DEFF Research Database (Denmark)

    Dong, Ruifang; Tipsmark, Anders; Laghaout, Amine

    2014-01-01

    We present the generation of approximated coherent state superpositions, referred to as Schrödinger cat states, by the process of subtracting single photons from picosecond pulsed squeezed states of light. The squeezed vacuum states are produced by spontaneous parametric down-conversion (SPDC

  15. Towards Quantum Superposition of Living Organisms

    CERN Document Server

    Romero-Isart, Oriol; Quidant, Romain; Cirac, J Ignacio

    2009-01-01

    The most striking feature of quantum mechanics is the existence of superposition states, where an object appears to be in different situations at the same time. Up to now, the existence of such states has been tested with small objects, like atoms, ions, electrons and photons, and even with molecules. Recently, it has been even possible to create superpositions of collections of photons, atoms, or Cooper pairs. Current progress in optomechanical systems may soon allow us to create superpositions of even larger objects, like micro-sized mirrors or cantilevers, and thus to test quantum mechanical phenomena at larger scales. Here we propose a method to cool down and create quantum superpositions of the motion of sub-wavelength, arbitrarily shaped dielectric objects trapped inside a high-finesse cavity at a very low pressure. Our method is ideally suited for the smallest living organisms, such as viruses, which survive under low vacuum pressures, and optically behave as dielectric objects. This opens up the poss...

  16. The principle of superposition in human prehension.

    Science.gov (United States)

    Zatsiorsky, Vladimir M; Latash, Mark L; Gao, Fan; Shim, Jae Kun

    2004-03-01

    The experimental evidence supports the validity of the principle of superposition for multi-finger prehension in humans. Forces and moments of individual digits are defined by two independent commands: "Grasp the object stronger/weaker to prevent slipping" and "Maintain the rotational equilibrium of the object". The effects of the two commands are summed up.

  17. Quantum inertia stops superposition: Scan Quantum Mechanics

    Science.gov (United States)

    Gato-Rivera, Beatriz

    2017-08-01

    Scan Quantum Mechanics is a novel interpretation of some aspects of quantum mechanics in which the superposition of states is only an approximate effective concept. Quantum systems scan all possible states in the superposition and switch randomly and very rapidly among them. A crucial property that we postulate is quantum inertia, that increases whenever a constituent is added, or the system is perturbed with all kinds of interactions. Once the quantum inertia Iq reaches a critical value Icr for an observable, the switching among its different eigenvalues stops and the corresponding superposition comes to an end, leaving behind a system with a well defined value of that observable. Consequently, increasing the mass, temperature, gravitational strength, etc. of a quantum system increases its quantum inertia until the superposition of states disappears for all the observables and the system transmutes into a classical one. Moreover, the process could be reversible. Entanglement can only occur between quantum systems because an exact synchronization between the switchings of the systems involved must be established in the first place and classical systems do not have any switchings to start with. Future experiments might determine the critical inertia Icr corresponding to different observables, which translates into a critical mass Mcr for fixed environmental conditions as well as critical temperatures, critical electric and magnetic fields, etc. In addition, this proposal implies a new radiation mechanism from astrophysical objects with strong gravitational fields, giving rise to non-thermal synchrotron emission, that could contribute to neutron star formation. Superconductivity, superfluidity, Bose-Einstein condensates, and any other physical phenomena at very low temperatures must be reanalyzed in the light of this interpretation, as well as mesoscopic systems in general.

  18. Toward quantum superposition of living organisms

    Science.gov (United States)

    Romero-Isart, Oriol; Juan, Mathieu L.; Quidant, Romain; Cirac, J. Ignacio

    2010-03-01

    The most striking feature of quantum mechanics is the existence of superposition states, where an object appears to be in different situations at the same time. The existence of such states has been previously tested with small objects, such as atoms, ions, electrons and photons (Zoller et al 2005 Eur. Phys. J. D 36 203-28), and even with molecules (Arndt et al 1999 Nature 401 680-2). More recently, it has been shown that it is possible to create superpositions of collections of photons (Deléglise et al 2008 Nature 455 510-14), atoms (Hammerer et al 2008 arXiv:0807.3358) or Cooper pairs (Friedman et al 2000 Nature 406 43-6). Very recent progress in optomechanical systems may soon allow us to create superpositions of even larger objects, such as micro-sized mirrors or cantilevers (Marshall et al 2003 Phys. Rev. Lett. 91 130401; Kippenberg and Vahala 2008 Science 321 1172-6 Marquardt and Girvin 2009 Physics 2 40; Favero and Karrai 2009 Nature Photon. 3 201-5), and thus to test quantum mechanical phenomena at larger scales. Here we propose a method to cool down and create quantum superpositions of the motion of sub-wavelength, arbitrarily shaped dielectric objects trapped inside a high-finesse cavity at a very low pressure. Our method is ideally suited for the smallest living organisms, such as viruses, which survive under low-vacuum pressures (Rothschild and Mancinelli 2001 Nature 406 1092-101) and optically behave as dielectric objects (Ashkin and Dziedzic 1987 Science 235 1517-20). This opens up the possibility of testing the quantum nature of living organisms by creating quantum superposition states in very much the same spirit as the original Schrödinger's cat 'gedanken' paradigm (Schrödinger 1935 Naturwissenschaften 23 807-12, 823-8, 844-9). We anticipate that our paper will be a starting point for experimentally addressing fundamental questions, such as the role of life and consciousness in quantum mechanics.

  19. Multipartite cellular automata and the superposition principle

    Science.gov (United States)

    Elze, Hans-Thomas

    2016-05-01

    Cellular automata (CA) can show well known features of quantum mechanics (QM), such as a linear updating rule that resembles a discretized form of the Schrödinger equation together with its conservation laws. Surprisingly, a whole class of “natural” Hamiltonian CA, which are based entirely on integer-valued variables and couplings and derived from an action principle, can be mapped reversibly to continuum models with the help of sampling theory. This results in “deformed” quantum mechanical models with a finite discreteness scale l, which for l→0 reproduce the familiar continuum limit. Presently, we show, in particular, how such automata can form “multipartite” systems consistently with the tensor product structures of non-relativistic many-body QM, while maintaining the linearity of dynamics. Consequently, the superposition principle is fully operative already on the level of these primordial discrete deterministic automata, including the essential quantum effects of interference and entanglement.

  20. Wave Superposition Based Sound Field Reconstruction

    Institute of Scientific and Technical Information of China (English)

    LI Jia-qing; CHEN Jin; YANG Chao

    2008-01-01

    In order to overcome the obstacle of singular integrals in the boundary element method (BEM), we presented an efficient sound field reconstruction technique based on the wave superposition method (WSM). Its principle includes three steps: first, the sound pressure field of an arbitrarily shaped radiator is measured with a microphone array; then, the exterior sound field of the radiator is computed backward and forward using the WSM; at last, the final results are visualized in terms of sound pressure contours or animations. With these visualized contours or animations, noise sources can be easily located and quantified, and the noise transmission path can be found. Numerical simulation and experimental results prove that the technique is suitable and accurate for sound field reconstruction. In addition, we present a sound field reconstruction system prototype on the basis of this technique. It lays a foundation for the application of wave superposition to sound field reconstruction in industrial situations.
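
    A compact sketch of the wave superposition idea behind the technique: equivalent monopole sources placed inside an assumed radiator are fitted to microphone-array pressures, and the exterior field is then re-evaluated from that superposition (synthetic data and simple least squares; the authors' measurement system is not reproduced):

        import numpy as np

        k = 2 * np.pi * 500.0 / 343.0                 # acoustic wavenumber at 500 Hz in air

        def monopole(src, fld):
            """Free-space Green's function matrix between source and field point sets."""
            d = np.linalg.norm(fld[:, None, :] - src[None, :, :], axis=-1)
            return np.exp(1j * k * d) / (4 * np.pi * d)

        rng = np.random.default_rng(1)

        # "True" radiator: two monopoles with complex strengths (stands in for a real source)
        true_src = np.array([[0.00, 0.0, 0.0], [0.05, 0.0, 0.0]])
        true_q = np.array([1.0 + 0.0j, -0.7 + 0.2j])

        # Microphone array on a plane above the radiator, with simulated measured pressures
        mics = np.column_stack([rng.uniform(-0.3, 0.3, 64),
                                rng.uniform(-0.3, 0.3, 64),
                                np.full(64, 0.25)])
        p_meas = monopole(true_src, mics) @ true_q

        # Wave superposition: fit equivalent sources retreated inside the assumed radiator
        eq_src = np.column_stack([np.linspace(-0.05, 0.10, 8), np.zeros(8), np.zeros(8)])
        G = monopole(eq_src, mics)
        q, *_ = np.linalg.lstsq(G, p_meas, rcond=1e-3)    # truncated-SVD regularization

        # Reconstruct the exterior field on a new plane from the fitted superposition
        fld = np.column_stack([np.linspace(-0.3, 0.3, 5), np.zeros(5), np.full(5, 0.5)])
        err = np.linalg.norm(monopole(eq_src, fld) @ q - monopole(true_src, fld) @ true_q)
        ref = np.linalg.norm(monopole(true_src, fld) @ true_q)
        print("relative reconstruction error on the new plane:", float(err / ref))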

  1. Superposition rules and second-order Riccati equations

    CERN Document Server

    Cariñena, J F

    2010-01-01

    The concept of superposition rule for second-order differential equations is stated and conditions ensuring the existence of such superposition rules are analysed. In this way, second-order differential equations become formally included within the theory of Lie systems. The theory is illustrated by analysing the properties of a family of second-order differential equations with applications to Physics and we obtain a superposition rule common for all its members. Finally, time-dependent superposition rules for second-order differential equations are defined and we derive a particular instance for a family of second-order Riccati equations by means of the theory of quasi-Lie schemes.

  2. Superposition and alignment of labeled point clouds.

    Science.gov (United States)

    Fober, Thomas; Glinca, Serghei; Klebe, Gerhard; Hüllermeier, Eyke

    2011-01-01

    Geometric objects are often represented approximately in terms of a finite set of points in three-dimensional Euclidean space. In this paper, we extend this representation to what we call labeled point clouds. A labeled point cloud is a finite set of points, where each point is not only associated with a position in three-dimensional space, but also with a discrete class label that represents a specific property. This type of model is especially suitable for modeling biomolecules such as proteins and protein binding sites, where a label may represent an atom type or a physico-chemical property. Proceeding from this representation, we address the question of how to compare two labeled point clouds in terms of their similarity. Using fuzzy modeling techniques, we develop a suitable similarity measure as well as an efficient evolutionary algorithm to compute it. Moreover, we consider the problem of establishing an alignment of the structures in the sense of a one-to-one correspondence between their basic constituents. From a biological point of view, alignments of this kind are of great interest, since mutually corresponding molecular constituents offer important information about evolution and heredity, and can also serve as a means to explain a degree of similarity. In this paper, we therefore develop a method for computing pairwise or multiple alignments of labeled point clouds. To this end, we proceed from an optimal superposition of the corresponding point clouds and construct an alignment which is as much as possible in agreement with the neighborhood structure established by this superposition. We apply our methods to the structural analysis of protein binding sites.
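
    For the superposition step alone, a minimal Python sketch of a rigid-body fit of two point clouds with a given one-to-one correspondence (the Kabsch algorithm); the labels below only indicate where the label constraint would enter, and the fuzzy similarity measure and evolutionary alignment search of the paper are not reproduced:

        import numpy as np

        def kabsch_superpose(P, Q):
            """Optimally rotate/translate P onto Q (rows are corresponding points); return fit and RMSD."""
            Pc, Qc = P - P.mean(axis=0), Q - Q.mean(axis=0)
            U, _, Vt = np.linalg.svd(Pc.T @ Qc)
            d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against an improper rotation (reflection)
            R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
            P_fit = Pc @ R.T + Q.mean(axis=0)
            return P_fit, float(np.sqrt(np.mean(np.sum((P_fit - Q) ** 2, axis=1))))

        rng = np.random.default_rng(0)

        # Labels (e.g. atom types): a label-aware alignment would only pair equal labels;
        # here the row-by-row correspondence is given and the labels already agree.
        labels = np.array(["C", "N", "O", "C", "S"])
        Q = rng.normal(size=(5, 3))

        # P is a rotated, translated, slightly noisy copy of Q
        a = 0.7
        Rz = np.array([[np.cos(a), -np.sin(a), 0.0],
                       [np.sin(a),  np.cos(a), 0.0],
                       [0.0, 0.0, 1.0]])
        P = Q @ Rz.T + np.array([1.0, -2.0, 0.5]) + 0.01 * rng.normal(size=Q.shape)

        P_fit, rmsd = kabsch_superpose(P, Q)
        print("RMSD after superposition:", round(rmsd, 4))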

  3. Partial coherence and other optical delicacies of lepidopteran superposition eyes

    NARCIS (Netherlands)

    Stavenga, DG

    2006-01-01

    Superposition eyes are generally thought to function ideally when the eye is spherical and with rhabdom tips in the focal plane of the imaging optics of facet lenses and crystalline cones. Anatomical data as well as direct optical measurements demonstrate that the superposition eyes of moths and

  4. Generation of optical coherent state superpositions for quantum information processing

    DEFF Research Database (Denmark)

    Tipsmark, Anders

    2012-01-01

    In this project, titled "Generation of optical coherent state superpositions for quantum information processing", the goal has been to generate optical cat states. This is a quantum mechanical superposition of two coherent states with large amplitude. Such a state is...

  5. Teleportation of Unknown Superpositions of Collective Atomic Coherent States

    Institute of Scientific and Technical Information of China (English)

    ZHENG ShiBiao

    2001-01-01

    We propose a scheme to teleport an unknown superposition of two atomic coherent states with different phases. Our scheme is based on resonant and dispersive atom-field interaction. Our scheme provides a possibility of teleporting macroscopic superposition states of many atoms for the first time.

  6. Controlled Creation of Spatial Superposition States for Single Atoms

    CERN Document Server

    Deasy, K; Chormaic, S N; Gong, S; Jin, S; Niu, Y; Busch, Th.

    2006-01-01

    We present a method for the controlled and robust generation of spatial superposition states of single atoms in micro-traps. Using a counter-intuitive positioning sequence for the individual potentials and appropriately chosen trapping frequencies, we show that it is possible to selectively create two different orthogonal superposition states, which can in turn be used for quantum information purposes.

  7. Noise-based logic: Binary, multi-valued, or fuzzy, with optional superposition of logic states

    CERN Document Server

    Kish, Laszlo B

    2008-01-01

    A new type of deterministic (non-probabilistic) computer logic system inspired by the stochasticity of brain signals is shown. The distinct values are represented by independent stochastic processes: independent voltage (or current) noises. The orthogonality of these processes provides a natural way to construct binary or multi-valued logic circuitry with an arbitrary number N of logic values by using analog circuitry. Moreover, the logic values on a single wire can be made a (weighted) superposition of the N distinct logic values. Fuzzy logic is also naturally represented by a two-component superposition within the binary case (N=2). Error propagation and accumulation are suppressed. Other relevant advantages are reduced energy dissipation and leakage current problems, and robustness against circuit noise and background noises such as 1/f, Johnson, shot and crosstalk noise. Variability problems are also nonexistent because the logic value is an AC signal. A similar logic system can be built with orthogonal sinu...
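
    To make the superposition-of-noises idea concrete, here is a small hedged Python sketch (illustrative only, not the analog circuitry described in the paper): logic values are carried by independent, nearly orthogonal noise processes, a superposition is a weighted sum of those carriers on a single wire, and the weights are recovered by correlating the wire signal against each reference carrier.

        import numpy as np

        rng = np.random.default_rng(0)
        n_samples, n_values = 100_000, 4           # record length and number of logic values

        # independent zero-mean, unit-variance noise carriers (nearly orthogonal for long records)
        carriers = rng.standard_normal((n_values, n_samples))

        weights = np.array([0.0, 0.7, 0.0, 0.3])   # a superposition of logic values 1 and 3
        wire = weights @ carriers                  # the signal present on one wire

        # detection: correlate the wire signal with each reference carrier
        recovered = carriers @ wire / n_samples
        print(np.round(recovered, 2))              # approximately [0.  0.7 0.  0.3]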

  8. Accurate structural correlations from maximum likelihood superpositions.

    Directory of Open Access Journals (Sweden)

    Douglas L Theobald

    2008-02-01

    Full Text Available The cores of globular proteins are densely packed, resulting in complicated networks of structural interactions. These interactions in turn give rise to dynamic structural correlations over a wide range of time scales. Accurate analysis of these complex correlations is crucial for understanding biomolecular mechanisms and for relating structure to function. Here we report a highly accurate technique for inferring the major modes of structural correlation in macromolecules using likelihood-based statistical analysis of sets of structures. This method is generally applicable to any ensemble of related molecules, including families of nuclear magnetic resonance (NMR) models, different crystal forms of a protein, and structural alignments of homologous proteins, as well as molecular dynamics trajectories. Dominant modes of structural correlation are determined using principal components analysis (PCA) of the maximum likelihood estimate of the correlation matrix. The correlations we identify are inherently independent of the statistical uncertainty and dynamic heterogeneity associated with the structural coordinates. We additionally present an easily interpretable method ("PCA plots") for displaying these positional correlations by color-coding them onto a macromolecular structure. Maximum likelihood PCA of structural superpositions, and the structural PCA plots that illustrate the results, will facilitate the accurate determination of dynamic structural correlations analyzed in diverse fields of structural biology.
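
    The following short Python sketch illustrates the generic final step described above, PCA of a correlation/covariance matrix estimated from an ensemble of already-superposed structures; it uses an ordinary sample covariance rather than the maximum likelihood estimator of the paper, and the variable names are assumptions for illustration.

        import numpy as np

        def structural_pca(ensemble):
            """ensemble: array of shape (n_models, n_atoms, 3), pre-superposed coordinates.
            Returns eigenvalues and principal modes of the positional covariance matrix."""
            n_models, n_atoms, _ = ensemble.shape
            x = ensemble.reshape(n_models, n_atoms * 3)
            x = x - x.mean(axis=0)                  # deviations from the mean structure
            cov = (x.T @ x) / (n_models - 1)        # ordinary (non-ML) covariance estimate
            evals, evecs = np.linalg.eigh(cov)
            order = np.argsort(evals)[::-1]         # sort modes by decreasing variance
            return evals[order], evecs[:, order]

        # toy ensemble: 20 "models" of a 5-atom structure
        rng = np.random.default_rng(1)
        evals, modes = structural_pca(rng.normal(size=(20, 5, 3)))
        print(evals[:3])                            # variance captured by the top three modes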

  9. Counterpoise correction is not useful for short and Van der Waals distances but may be useful at long range.

    Science.gov (United States)

    Sheng, Xiao Wei; Mentel, Lukasz; Gritsenko, Oleg V; Baerends, Evert Jan

    2011-10-01

    This article investigates the errors in supermolecule calculations for the helium dimer. In a full CI calculation there are two errors: one is the basis set superposition error (BSSE), the other is the basis set convergence error (BSCE). Both errors arise from the incompleteness of the basis set, and they make opposite contributions to the interaction energies. The BSCE is by far the largest error in the short range and larger than (but much closer to) the BSSE around the Van der Waals minimum. Only at long range does the BSSE become the larger error. The BSCE and BSSE largely cancel each other over the Van der Waals well. Accordingly, it may be recommended not to include the BSSE correction when calculating the potential energy curve from short distances to well beyond the Van der Waals minimum, but to include it if an accurate tail behavior is required. Only if the calculation uses a very large basis set can one refrain from including the counterpoise correction over the full potential range. These results are based on full CI calculations with the aug-cc-pVXZ (X = D, T, Q, 5) basis sets.
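
    For context with the topic of this collection: the counterpoise correction referred to above is the standard Boys-Bernardi recipe, in which the interaction energy of a dimer AB is computed with every fragment evaluated in the full dimer basis,

    \[
    \Delta E_{\mathrm{int}}^{\mathrm{CP}} \;=\; E_{AB}(\mathcal{B}_{AB}) \;-\; E_{A}(\mathcal{B}_{AB}) \;-\; E_{B}(\mathcal{B}_{AB}),
    \]

    where $E_X(\mathcal{B}_{AB})$ denotes the energy of fragment $X$ computed in the combined dimer basis $\mathcal{B}_{AB}$; the BSSE estimate itself is the difference between the monomer energies evaluated in the dimer basis and in their own monomer bases.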

  10. Empirical Evaluation of Superposition Coded Multicasting for Scalable Video

    KAUST Repository

    Chun Pong Lau

    2013-03-01

    In this paper we investigate cross-layer superposition coded multicast (SCM). Previous studies have proven its effectiveness in exploiting better channel capacity and service granularities via both analytical and simulation approaches. However, it has never been practically implemented using a commercial 4G system. This paper demonstrates our prototype achieving SCM using a standard 802.16 based testbed for scalable video transmissions. In particular, to implement the superposition coded (SPC) modulation, we take advantage of a novel software approach, namely logical SPC (L-SPC), which aims to mimic the physical layer superposition coded modulation. The emulation results show improved throughput compared with the generic multicast method.

  11. Creation of Coherent Superposition States in Multilevel Systems

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    A powerful approach to generating multilevel superposition states in a Λ-type manifold of levels is proposed. In the analysis, we introduce a group of rotations to transform the coupled system to a simpler form, which involves one coupled state and several decoupled dark states in the ground-state manifold. Then an arbitrary superposition state of the initial and final states can be created. In particular, when the Rabi frequencies of the Stokes pulses have equal magnitudes, a superposition state (equal population of the (n - 2) superposition states) will be generated. A numerical simulation of the coherence generation is given. It is shown that a small transient population in the metastable state decreases as the intensity of the Stokes pulses increases. An experimental implementation in the neon atom is also given.

  12. Nonclassical properties and quantum resources of hierarchical photonic superposition states

    Energy Technology Data Exchange (ETDEWEB)

    Volkoff, T. J., E-mail: adidasty@gmail.com [University of California, Department of Chemistry (United States)

    2015-11-15

    We motivate and introduce a class of “hierarchical” quantum superposition states of N coupled quantum oscillators. Unlike other well-known multimode photonic Schrödinger-cat states such as entangled coherent states, the hierarchical superposition states are characterized as two-branch superpositions of tensor products of single-mode Schrödinger-cat states. In addition to analyzing the photon statistics and quasiprobability distributions of prominent examples of these nonclassical states, we consider their usefulness for high-precision quantum metrology of nonlinear optical Hamiltonians and quantify their mode entanglement. We propose two methods for generating hierarchical superpositions in N = 2 coupled microwave cavities, exploiting currently existing quantum optical technology for generating entanglement between spatially separated electromagnetic field modes.

  13. On the Superposition and Elastic Recoil of Electromagnetic Waves

    CERN Document Server

    Schantz, Hans G

    2014-01-01

    Superposition demands that a linear combination of solutions to an electromagnetic problem also be a solution. This paper analyzes some very simple problems: the constructive and destructive interferences of short impulse voltage and current waves along an ideal free-space transmission line. When voltage waves constructively interfere, the superposition has twice the electrical energy of the individual waveforms because current goes to zero, converting magnetic to electrical energy. When voltage waves destructively interfere, the superposition has no electrical energy because it transforms to magnetic energy. Although the impedance of the individual waves is that of free space, a superposition of waves may exhibit arbitrary impedance. Further, interferences of identical waveforms allow no energy transfer between opposite ends of a transmission line. The waves appear to recoil elastically one from another. Although alternate interpretations are possible, these appear less likely. Similar phenomenology arises i...

  14. Quantum State Engineering Via Coherent-State Superpositions

    Science.gov (United States)

    Janszky, Jozsef; Adam, P.; Szabo, S.; Domokos, P.

    1996-01-01

    The quantum interference between the two parts of the optical Schrodinger-cat state makes it possible to construct a wide class of quantum states via discrete superpositions of coherent states. Even a small number of coherent states can approximate the given quantum states with high accuracy when the distance between the coherent states is optimized; e.g., a nearly perfect Fock state can be constructed by discrete superpositions of n + 1 coherent states lying in the vicinity of the vacuum state.

  15. Testing the quantum superposition principle in the frequency domain

    CERN Document Server

    Bahrami, Mohammad; Ulbricht, Hendrik

    2013-01-01

    New technological developments allow us to explore the quantum properties of very complex systems, bringing the question of whether macroscopic systems also share such features within experimental reach. The interest in this question is increased by the fact that, on the theory side, many suggest that the quantum superposition principle is not exact, with departures from it being larger the more macroscopic the system. Here we propose a novel way to test the possible violation of the superposition principle by analyzing its effect on the spectral properties of a generic two-level system. We show that spectral line shapes are modified if the superposition principle is violated, and we quantify the magnitude of the violation. We show how this effect can be distinguished from that of standard environmental noises. We argue that sufficiently accurate spectroscopic experiments are within reach with current technology.

  16. Dissipative Optomechanical Preparation of Macroscopic Quantum Superposition States

    Science.gov (United States)

    Abdi, M.; Degenfeld-Schonburg, P.; Sameti, M.; Navarrete-Benlloch, C.; Hartmann, M. J.

    2016-06-01

    The transition from quantum to classical physics remains an intensely debated question even though it has been investigated for more than a century. Further clarifications could be obtained by preparing macroscopic objects in spatial quantum superpositions and proposals for generating such states for nanomechanical devices either in a transient or a probabilistic fashion have been put forward. Here, we introduce a method to deterministically obtain spatial superpositions of arbitrary lifetime via dissipative state preparation. In our approach, we engineer a double-well potential for the motion of the mechanical element and drive it towards the ground state, which shows the desired spatial superposition, via optomechanical sideband cooling. We propose a specific implementation based on a superconducting circuit coupled to the mechanical motion of a lithium-decorated monolayer graphene sheet, introduce a method to verify the mechanical state by coupling it to a superconducting qubit, and discuss its prospects for testing collapse models for the quantum to classical transition.

  17. Quantum superposition counterintuitive consequences of coherence, entanglement, and interference

    CERN Document Server

    Silverman, M P

    2007-01-01

    Coherence, entanglement, and interference arise from quantum superposition, the most distinctive and puzzling feature of quantum physics. Silverman, whose extensive experimental and theoretical work has helped elucidate these processes, presents a clear and engaging discussion of the role of quantum superposition in diverse quantum phenomena such as the wavelike nature of particle propagation, indistinguishability of identical particles, nonlocal interactions of correlated particles, topological effects of magnetic fields, and chiral asymmetry in nature. He also examines how macroscopic quantum coherence may be able to extricate physics from its most challenging quandary, the collapse of a massive degenerate star to a singularity in space in which the laws of physics break down. Explained by a physicist with a concern for clarity and experimental achievability, the extraordinary nature of quantum superposition will fascinate the reader not only for its apparent strangeness, but also for its comprehensibility.

  18. Superposition of helical beams by using a Michelson interferometer.

    Science.gov (United States)

    Gao, Chunqing; Qi, Xiaoqing; Liu, Yidong; Weber, Horst

    2010-01-04

    The orbital angular momentum (OAM) of a helical beam is of great interest in high-density optical communication due to its infinite number of eigenstates. In this paper, an experimental setup is realized for information encoding and decoding on the OAM eigenstates. A hologram designed by the iterative method is used to generate the helical beams, and a Michelson interferometer with two Porro prisms is used for the superposition of two helical beams. The experimental results of the collinear superposition of helical beams and the detection of their OAM eigenstates are presented.

  19. Pairwise Quantum Correlations for Superpositions of Dicke States

    Institute of Scientific and Technical Information of China (English)

    席政军; 熊恒娜; 李永明; 王晓光

    2012-01-01

    Pairwise correlation is an important property of multi-qubit states. For the two-qubit X states extracted from Dicke states and their superposition states, we obtain a compact expression of the quantum discord by numerical check. We then apply the expression to discuss the quantum correlation of the reduced two-qubit states of Dicke states and their superpositions, and the results are compared with those obtained by entanglement of formation, which is a quantum entanglement measure.

  20. Superposition of nonlinear coherent states on a sphere

    Directory of Open Access Journals (Sweden)

    T Hosseinzadeh

    2013-09-01

    Full Text Available In this paper, by using the nonlinear coherent states on a sphere, we introduce superpositions of the aforementioned coherent states. Then, we consider the quantum optical properties of these new superposed states and compare them with the corresponding properties of the nonlinear coherent states on the sphere. Specifically, we investigate their characteristic function, photon-number distribution, Mandel parameter, quadrature squeezing, anti-bunching effect and Wigner function, and obtain the effect of curvature on the properties of the superposed states. Finally, by using the trapped atom system, we introduce a theoretical scheme to generate superpositions of the coherent states on the sphere.

  1. Macroscopic superposition states and decoherence by quantum telegraph noise

    Energy Technology Data Exchange (ETDEWEB)

    Abel, Benjamin Simon

    2008-12-19

    In the first part of the present thesis we address the question about the size of superpositions of macroscopically distinct quantum states. We propose a measure for the "size" of a Schroedinger cat state, i.e. a quantum superposition of two many-body states with (supposedly) macroscopically distinct properties, by counting how many single-particle operations are needed to map one state onto the other. We apply our measure to a superconducting three-junction flux qubit put into a superposition of clockwise and counterclockwise circulating supercurrent states and find this Schroedinger cat to be surprisingly small. The unavoidable coupling of any quantum system to many environmental degrees of freedom leads to an irreversible loss of information about an initially prepared superposition of quantum states. This phenomenon, commonly referred to as decoherence or dephasing, is the subject of the second part of the thesis. We have studied the time evolution of the reduced density matrix of a two-level system (qubit) subject to quantum telegraph noise which is the major source of decoherence in Josephson charge qubits. We are able to derive an exact expression for the time evolution of the reduced density matrix. (orig.)

  2. Spectral properties of superpositions of Ornstein-Uhlenbeck type processes

    DEFF Research Database (Denmark)

    Barndorff-Nielsen, Ole Eiler; Leonenko, N.N.

    2005-01-01

    Stationary processes with prescribed one-dimensional marginal laws and long-range dependence are constructed. The asymptotic properties of the spectral densities are studied. The possibility of Mittag-Leffler decay in the autocorrelation function of superpositions of Ornstein-Uhlenbeck type proce...

  3. Generating superpositions of higher–order Bessel beams [Journal article

    CSIR Research Space (South Africa)

    Vasilyeu, R

    2009-12-01

    Full Text Available The authors report the first experimental generation of the superposition of higher-order Bessel beams, by means of a spatial light modulator (SLM) and a ring slit aperture. They present illuminating a ring slit aperture with light which has...

  4. The black hole information paradox and macroscopic superpositions

    CERN Document Server

    Hsu, Stephen D H

    2010-01-01

    We investigate the experimental capabilities required to test whether black holes destroy information. We show that an experiment capable of illuminating the information puzzle must necessarily be able to detect or manipulate macroscopic superpositions (i.e., Everett branches). Hence, it could also address the fundamental question of decoherence versus wavefunction collapse.

  5. Atomic quantum superposition state generation via optical probing

    DEFF Research Database (Denmark)

    Nielsen, Anne Ersbak Bang; Poulsen, Uffe Vestergaard; Negretti, Antonio

    2009-01-01

    We analyze the performance of a protocol to prepare an atomic ensemble in a superposition of two macroscopically distinguishable states. The protocol relies on conditional measurements performed on a light field, which interacts with the atoms inside an optical cavity prior to detection, and we...

  6. Reconstruction of nonstationary sound fields based on the time domain plane wave superposition method.

    Science.gov (United States)

    Zhang, Xiao-Zheng; Thomas, Jean-Hugh; Bi, Chuan-Xing; Pascal, Jean-Claude

    2012-10-01

    A time-domain plane wave superposition method is proposed to reconstruct nonstationary sound fields. In this method, the sound field is expressed as a superposition of time convolutions between the estimated time-wavenumber spectrum of the sound pressure on a virtual source plane and the time-domain propagation kernel at each wavenumber. By discretizing the time convolutions directly, the reconstruction can be carried out iteratively in the time domain, thus providing the advantage of continuously reconstructing time-dependent pressure signals. In the reconstruction process, Tikhonov regularization is introduced at each time step to obtain a relevant estimate of the time-wavenumber spectrum on the virtual source plane. Because the double infinite integral of the two-dimensional spatial Fourier transform is discretized directly in the wavenumber domain in the proposed method, it does not need to perform the two-dimensional spatial fast Fourier transform that is generally used in time domain holography and real-time near-field acoustic holography, and therefore it avoids, in theory, some errors associated with the two-dimensional spatial fast Fourier transform and makes it possible to use an irregular microphone array. The feasibility of the proposed method is demonstrated by numerical simulations and an experiment with two speakers.
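
    As a small illustration of the regularized inversion mentioned above (a generic Tikhonov step with illustrative names, not the authors' complete time-domain algorithm), each time step amounts to solving a damped least-squares problem of the form minimize $\|Ax-b\|^2 + \lambda^2\|x\|^2$:

        import numpy as np

        def tikhonov_step(A, b, lam):
            """Solve min_x ||A x - b||^2 + lam^2 ||x||^2 via the normal equations.
            A: propagation matrix relating the source spectrum x to measured pressures b."""
            n = A.shape[1]
            lhs = A.conj().T @ A + (lam ** 2) * np.eye(n)
            rhs = A.conj().T @ b
            return np.linalg.solve(lhs, rhs)

        # toy example: 32 microphones, 16 wavenumber components, 1% measurement noise
        rng = np.random.default_rng(0)
        A = rng.standard_normal((32, 16))
        x_true = rng.standard_normal(16)
        b = A @ x_true + 0.01 * rng.standard_normal(32)
        x_est = tikhonov_step(A, b, lam=0.1)
        print(np.linalg.norm(x_est - x_true) / np.linalg.norm(x_true))   # small relative error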

  7. A fillable micro-hollow sphere lesion detection phantom using superposition

    Energy Technology Data Exchange (ETDEWEB)

    DiFilippo, Frank P; Gallo, Sven L; Patel, Sagar [Department of Nuclear Medicine, Imaging Institute, Cleveland Clinic, Cleveland, OH 44195 (United States); Klatte, Ryan S [Department of Biomedical Engineering, Lerner Research Institute, Cleveland Clinic, Cleveland, OH 44195 (United States)

    2010-09-21

    The lesion detection performance of SPECT and PET scanners is most commonly evaluated with a phantom containing hollow spheres in a background chamber at a specified radionuclide contrast ratio. However, there are limitations associated with a miniature version of a hollow sphere phantom for small-animal SPECT and PET scanners. One issue is that the 'wall effect' associated with zero activity in the sphere wall and fill port causes significant errors for small diameter spheres. Another issue is that there are practical difficulties in fabricating and in filling very small spheres (<3 mm diameter). The need for lesion detection performance assessment of small-animal scanners has motivated our development of a micro-hollow sphere phantom that utilizes the principle of superposition. The phantom is fabricated by stereolithography and has interchangeable sectors containing hollow spheres with volumes ranging from 1 to 14 μL (diameters ranging from 1.25 to 3.0 mm). A simple 60° internal rotation switches the positions of three such sectors with their corresponding background regions. Raw data from scans of each rotated configuration are combined and reconstructed to yield superposition images. Since the sphere counts and background counts are acquired separately, the wall effect is eliminated. The raw data are subsampled randomly prior to summation and reconstruction to specify the desired sphere-to-background contrast ratio of the superposition image. A set of images with multiple contrast ratios is generated for visual assessment of lesion detection thresholds. To demonstrate the utility of the phantom, data were acquired with a multi-pinhole SPECT/CT scanner. Micro-liter syringes were successful in filling the small hollow spheres, and the accuracy of the dispensed volume was validated through repeated filling and weighing of the spheres. The phantom's internal rotation and the data analysis process were successful in producing the expected
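
    The random-subsampling step can be sketched as follows (a simplified, hypothetical illustration; the event-list format and all names are assumptions, not the phantom's actual data pipeline): events from the sphere configuration are thinned so that the effective sphere-to-background activity ratio of the summed data matches the desired contrast.

        import numpy as np

        def subsample(events, keep_fraction, rng):
            """Randomly retain a fraction of list-mode events."""
            return events[rng.random(len(events)) < keep_fraction]

        def combine_for_contrast(sphere_events, bg_events, acquired_ratio, target_ratio, rng):
            """Thin the sphere data so the summed data set has the target
            sphere-to-background ratio (assumes target_ratio <= acquired_ratio)."""
            fraction = target_ratio / acquired_ratio
            return np.concatenate([subsample(sphere_events, fraction, rng), bg_events])

        rng = np.random.default_rng(0)
        sphere_events = np.arange(100_000)              # stand-ins for list-mode events
        bg_events = np.arange(100_000, 400_000)
        merged = combine_for_contrast(sphere_events, bg_events,
                                      acquired_ratio=20.0, target_ratio=8.0, rng=rng)
        print(len(merged))                              # roughly 0.4 * 100000 + 300000 events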

  8. Dynamical disappearance of superposition states in the thermodynamic limit

    CERN Document Server

    Frasca, M

    2003-01-01

    It is shown that a macroscopic superposition state of radiation, strongly interacting with an ensemble of two-level atoms, is removed, generating a coherent state that describes a classical radiation field, when the thermodynamic limit is taken of the unitary evolution obtained from the Schroedinger equation. Decoherence appears as a dynamical effect, in agreement with a recent proposal [M. Frasca, Phys. Lett. A 283, 271 (2001)]. To show that this effect is quite general, we demonstrate that the same behavior appears when a superposition of two Fock number states is considered. Higher-order corrections are computed, showing that this result tends to become exact in the thermodynamic limit. It appears as a genuine example of intrinsic collapse of the wave function.

  9. Pairwise Quantum Correlations for Superpositions of Dicke States

    CERN Document Server

    Xi, Zhengjun; Li, Yongming; Wang, Xiaoguang

    2011-01-01

    Using the concept of quantum discord (QD), we study the quantum correlation for a class of two-qubit X states with exchange and parity symmetries, whose density matrices have complex off-diagonal elements. We derive an upper bound of the QD, which is independent of the arguments of the complex off-diagonal elements of the reduced two-qubit density matrices. Moreover, for the two-qubit X states obtained from Dicke states and their superposition states, we obtain a compact expression of the QD by numerical check. Finally, we apply the expression to discuss the quantum correlation of the reduced two-qubit states of Dicke states and their superpositions, and the results are compared with those obtained by entanglement of formation (EoF), which is a quantum entanglement measure.

  10. Experiments testing macroscopic quantum superpositions must be slow

    CERN Document Server

    Mari, Andrea; Giovannetti, Vittorio

    2015-01-01

    We consider a thought experiment where the preparation of a macroscopically massive or charged particle in a quantum superposition and the associated dynamics of a distant test particle apparently allow for superluminal communication. We give a solution to the paradox which is based on the following fundamental principle: any local experiment, discriminating a coherent superposition from an incoherent statistical mixture, necessarily requires a minimum time proportional to the mass (or charge) of the system. For a charged particle, we consider two examples of such experiments, and show that they are both consistent with the previous limitation. In the first, the measurement requires to accelerate the charge, that can entangle with the emitted photons. In the second, the limitation can be ascribed to the quantum vacuum fluctuations of the electromagnetic field. On the other hand, when applied to massive particles our result provides an indirect evidence for the existence of gravitational vacuum fluctuations an...

  11. Transforming spatial point processes into Poisson processes using random superposition

    DEFF Research Database (Denmark)

    Møller, Jesper; Berthelsen, Kasper Klitgaaard

    A spatial point process X is superposed with a complementary spatial point process Y to obtain a Poisson process X∪Y with intensity function β. Underlying this is a bivariate spatial birth-death process (Xt, Yt) which converges towards the distribution of (X, Y). We study the joint distribution of X and Y, and their marginal and conditional distributions. In particular, we introduce a fast and easy simulation procedure for Y conditional on X. This may be used for model checking: given a model for the Papangelou intensity of the original spatial point process, this model is used to generate the complementary process, and the resulting superposition is a Poisson process with intensity function β if and only if the true Papangelou intensity is used. Whether the superposition is actually such a Poisson process can easily be examined using well-known results and fast simulation procedures for Poisson processes. We illustrate this approach to model checking.

  12. Capacity-Approaching Superposition Coding for Optical Fiber Links

    DEFF Research Database (Denmark)

    Estaran Tolosa, Jose Manuel; Zibar, Darko; Tafur Monroy, Idelfonso

    2014-01-01

    We report on the first experimental demonstration of superposition coded modulation (SCM) for polarization-multiplexed coherent-detection optical fiber links. The proposed coded modulation scheme is combined with phase-shifted bit-to-symbol mapping (PSM) in order to achieve geometric and passive shaping of the signal's waveform. The output constellations in SCM-PSM exhibit nonbijective quasi-Gaussian statistical distributions that asymptotically reach the Shannon capacity limit, showing up to 0.7 dB sensitivity improvement for 256-ary SCM-PSM with respect to 256-ary quadrature amplitude modulation (QAM). The characteristic wave formation based on superposition of antipodal symbols and the lack of need for additional encoders for signal shaping greatly reduce the transmitter and receiver processing complexity in comparison to conventional alternatives. Single-level coding strategy (SL-SCM...

  13. Measurement-Induced Macroscopic Superposition States in Cavity Optomechanics

    Science.gov (United States)

    Hoff, Ulrich B.; Kollath-Bönig, Johann; Neergaard-Nielsen, Jonas S.; Andersen, Ulrik L.

    2016-09-01

    A novel protocol for generating quantum superpositions of macroscopically distinct states of a bulk mechanical oscillator is proposed, compatible with existing optomechanical devices operating in the bad-cavity limit. By combining a pulsed optomechanical quantum nondemolition (QND) interaction with nonclassical optical resources and measurement-induced feedback, the need for strong single-photon coupling is avoided. We outline a three-pulse sequence of QND interactions encompassing squeezing-enhanced cooling by measurement, state preparation, and tomography.

  14. Single-Atom Gating of Quantum State Superpositions

    Energy Technology Data Exchange (ETDEWEB)

    Moon, Christopher

    2010-04-28

    The ultimate miniaturization of electronic devices will likely require local and coherent control of single electronic wavefunctions. Wavefunctions exist within both physical real space and an abstract state space with a simple geometric interpretation: this state space - or Hilbert space - is spanned by mutually orthogonal state vectors corresponding to the quantized degrees of freedom of the real-space system. Measurement of superpositions is akin to accessing the direction of a vector in Hilbert space, determining an angle of rotation equivalent to quantum phase. Here we show that an individual atom inside a designed quantum corral can control this angle, producing arbitrary coherent superpositions of spatial quantum states. Using scanning tunnelling microscopy and nanostructures assembled atom-by-atom we demonstrate how single spins and quantum mirages can be harnessed to image the superposition of two electronic states. We also present a straightforward method to determine the atom path enacting phase rotations between any desired state vectors. A single atom thus becomes a real-space handle for an abstract Hilbert space, providing a simple technique for coherent quantum state manipulation at the spatial limit of condensed matter.

  15. Numerical model for macroscopic quantum superpositions based on phase-covariant quantum cloning

    CERN Document Server

    Buraczewski, Adam

    2011-01-01

    We present a numerical model of macroscopic quantum superpositions generated by universally covariant optimal quantum cloning. It requires fast computation of the Gaussian hypergeometric function for moderate values of its parameters and argument as well as evaluation of infinite sums involving this function. We developed a method of dynamical estimation of cutoff for these sums. We worked out algorithms performing efficient summation of values of orders ranging from $10^{-100}$ to $10^{100}$ which neither lose precision nor accumulate errors, but provide the summation with acceleration. Our model is well adapted to experimental conditions. It optimizes computation by parallelization and choice of the most efficient algorithm. The methods presented here can be adjusted for analysis of similar experimental schemes. Including decoherence and realistic detection greatly improved the reliability and usability of our model for scientific research.
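
    The numerical difficulty mentioned above, summing terms that span enormous ranges of magnitude without losing precision, can be illustrated in a few lines of Python (a generic example of exactly rounded summation, not the authors' accelerated summation scheme):

        import math

        # terms spanning ~200 orders of magnitude, with cancellation between the largest ones
        terms = [1e100, 1.0, -1e100, 1e-100, 2.5e-101]

        print(sum(terms))        # naive left-to-right summation loses the 1.0 -> 1.25e-100
        print(math.fsum(terms))  # exactly rounded floating-point sum         -> 1.0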

  16. Concentration-temperature superposition of helix folding rates in gelatin.

    Science.gov (United States)

    Gornall, J L; Terentjev, E M

    2007-07-13

    Using optical rotation as the primary technique, we have characterized the kinetics of helix renaturation in water solutions of gelatin. By covering a wide range of solution concentrations we identify a universal exponential dependence of folding rate on concentration and quench temperature. We demonstrate a new concentration-temperature superposition of data at all temperatures and concentrations, and build the corresponding master curve. The normalized rate constant is consistent with helix lengthening. Nucleation of the triple helix occurs rapidly and contributes less to the helical onset than previously thought.

  17. Construction of quantum states by special superpositions of coherent states

    Science.gov (United States)

    Adam, P.; Molnar, E.; Mogyorosi, G.; Varga, A.; Mechler, M.; Janszky, J.

    2015-06-01

    We consider the optimal approximation of certain quantum states of a harmonic oscillator with the superposition of a finite number of coherent states in phase space placed either on an ellipse or on a certain lattice. These scenarios are currently experimentally feasible. The parameters of the ellipse and the lattice and the coefficients of the constituent coherent states are optimized numerically, via a genetic algorithm, in order to obtain the best approximation. It is found that for certain quantum states the obtained approximation is better than the ones known from the literature thus far.

  18. Seeing lens imaging as a superposition of multiple views

    CERN Document Server

    Grusche, Sascha

    2015-01-01

    In the conventional approach to lens imaging, rays are used to map object points to image points. However, many students need to think of the image as a whole. To address this need, lens imaging is reinterpreted as a superposition of sharp images from different viewpoints. These so-called elemental images are uncovered by covering the lens with a pinhole array. Rays are introduced to connect elemental images. Lens ray diagrams are constructed based on bundles of elemental images. The conventional construction method is included as a special case. The proposed approach proceeds from concrete images to abstract rays.

  19. Coherent control of mesoscopic superpositions in a diatomic molecule

    CERN Document Server

    Ghosh, Suranjana

    2011-01-01

    A phase-controlled wave packet, recently used in wave-packet interferometry experiments on a diatomic molecule, is investigated to obtain mesoscopic superposition structures useful in quantum metrology. This analysis provides a new way of obtaining sub-Planck-scale structures at smaller time scales of the revival dynamics. We study a number of situations for delineating the smallest interference structures and their control by tailoring the relative phase between two subsidiary wave packets. We also find the most appropriate state, so far, for high-precision parameter estimation in a diatomic molecular system.

  20. Efficient Power Allocation for Video over Superposition Coding

    KAUST Repository

    Lau, Chun Pong

    2013-03-01

    In this paper we consider a wireless multimedia system that maps a scalable video coded (SVC) bit stream onto superposition coded (SPC) signals, referred to as the SVC-SPC architecture. Empirical experiments using a software-defined radio (SDR) emulator are conducted to gain a better understanding of its efficiency, specifically the impact of different power allocation ratios on the received signal. Our experimental results show that to maintain high video quality, the power allocated to the base layer should be approximately four times higher than the power allocated to the enhancement layer.
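
    Read concretely (hypothetical numbers that simply restate the reported 4:1 split), a total transmit budget P is divided so that roughly 4/5 of P goes to the SVC base layer and 1/5 to the enhancement layer:

        def split_power(total_power_w, base_to_enh_ratio=4.0):
            """Split a transmit power budget between SVC base and enhancement layers."""
            enh = total_power_w / (1.0 + base_to_enh_ratio)
            return total_power_w - enh, enh

        base, enh = split_power(10.0)   # e.g. a 10 W budget
        print(base, enh)                # 8.0 W for the base layer, 2.0 W for the enhancement layer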

  1. Measurement-induced macroscopic superposition states in cavity optomechanics

    CERN Document Server

    Hoff, Ulrich B; Neergaard-Nielsen, Jonas S; Andersen, Ulrik L

    2016-01-01

    We present a novel proposal for generating quantum superpositions of macroscopically distinct states of a bulk mechanical oscillator, compatible with existing optomechanical devices operating in the readily achievable bad-cavity limit. The scheme is based on a pulsed cavity optomechanical quantum non-demolition (QND) interaction, driven by displaced non-Gaussian states, and measurement-induced feedback, avoiding the need for strong single-photon optomechanical coupling. Furthermore, we show that single-quadrature cooling of the mechanical oscillator is sufficient for efficient state preparation, and we outline a three-pulse protocol comprising a sequence of QND interactions for squeezing-enhanced cooling, state preparation, and tomography.

  2. On Kolmogorov's superpositions and Boolean functions

    Energy Technology Data Exchange (ETDEWEB)

    Beiu, V.

    1998-12-31

    The paper overviews results dealing with the approximation capabilities of neural networks, as well as bounds on the size of threshold gate circuits. Based on an explicit numerical (i.e., constructive) algorithm for Kolmogorov's superpositions, it is shown that for obtaining minimum-size neural networks implementing any Boolean function, the activation function of the neurons is the identity function. Because classical AND-OR implementations, as well as threshold gate implementations, require exponential size (in the worst case), it follows that size-optimal solutions for implementing arbitrary Boolean functions require analog circuitry. Conclusions and several comments on the required precision end the paper.

  3. Quantum teleportation of one- and two-photon superposition states

    Institute of Scientific and Technical Information of China (English)

    李英; 张天才; 张俊香; 谢常德

    2003-01-01

    Quantum teleportation of one- and two-photon superposition states based on the EPR entanglement of a continuous-wave two-mode squeezed state is discussed. The fidelities of teleportation are deduced for two different input quantum states. The dependence of the fidelity on the parameters of the EPR entanglement and the gain of the classical channels is shown numerically. Comparing with the teleportation of Fock states and coherent states, it is pointed out that for a given EPR entanglement and classical gain, the higher the nonclassicality of the input state, the lower the accessible fidelity of teleportation.

  4. Runs in superpositions of renewal processes with applications to discrimination

    Science.gov (United States)

    Alsmeyer, Gerold; Irle, Albrecht

    2006-02-01

    Wald and Wolfowitz [Ann. Math. Statist. 11 (1940) 147-162] introduced the run test for testing whether two samples of i.i.d. random variables follow the same distribution. Here a run means a consecutive subsequence of maximal length from only one of the two samples. In this paper we contribute to the problem of runs and resulting test procedures for the superposition of independent renewal processes, which may be interpreted as arrival processes of customers from two different input channels at the same service station. To be more precise, let $(S_n)_{n\ge 1}$ and $(T_n)_{n\ge 1}$ be the arrival processes for channel 1 and channel 2, respectively, and $(W_n)_{n\ge 1}$ be their superposition with the associated counting process. Let further $R_n$ be the number of runs in $W_1,\dots,W_n$ and $R_t$ the number of runs observed up to time $t$. We study the asymptotic behavior of $R_n$ and $R_t$, first for the case where $(S_n)_{n\ge 1}$ and $(T_n)_{n\ge 1}$ have exponentially distributed increments with parameters $\lambda_1$ and $\lambda_2$, and then for the more difficult situation when these increments have an absolutely continuous distribution. These results are used to design asymptotic level $\alpha$ tests for testing $\lambda_1=\lambda_2$ against $\lambda_1\neq\lambda_2$ in the first case, and for testing for equal scale parameters in the second.
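
    A minimal simulation of the run statistic described above can be written as follows (illustrative only; parameter names are assumptions): two exponential renewal streams are merged by arrival time, and a run is a maximal block of consecutive arrivals from the same channel.

        import numpy as np

        def count_runs(labels):
            """Number of maximal blocks of identical consecutive labels."""
            return 1 + int(np.sum(labels[1:] != labels[:-1]))

        def simulate_runs(lam1, lam2, n, rng):
            """Merge n arrivals from each of two exponential renewal processes and count runs."""
            s = np.cumsum(rng.exponential(1.0 / lam1, n))    # channel-1 arrival times
            t = np.cumsum(rng.exponential(1.0 / lam2, n))    # channel-2 arrival times
            times = np.concatenate([s, t])
            labels = np.concatenate([np.zeros(n, dtype=int), np.ones(n, dtype=int)])
            return count_runs(labels[np.argsort(times)])

        rng = np.random.default_rng(0)
        print(simulate_runs(lam1=1.0, lam2=1.0, n=1000, rng=rng))   # about n + 1 runs expected for equal rates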

  5. Outage probability analysis for superposition coded symmetric relaying

    Institute of Scientific and Technical Information of China (English)

    WU Yi; ZHENG Meng; FEI ZeSong; LARSSON Erik G.; KUANG JingMing

    2013-01-01

    Superposition coded symmetric relaying is a bandwidth-efficient cooperative scheme where each source node simultaneously transmits both its own "local" packet and a "relay" packet that originated at its partner, by adding the modulated local and relay signals in Euclidean space. This paper investigates the power allocation and outage probability of a superposition coded symmetric relaying system with finite-constellation signaling. We first derive the mutual information (MI) metrics for the system. The derived MI metrics consist of two parts: one represents the MI conveyed by the modulated signal corresponding to a node's own data, and the other represents the MI conveyed by the modulated signal corresponding to its partner's data. Using an MI-based effective signal-to-noise ratio mapping technique, we obtain expressions for the outage probability. Furthermore, we discuss power allocation policies that minimize the outage probability. Simulation results are presented to verify the correctness of the outage probability analysis and the benefits of the power allocation.

  6. Superposition states for quantum nanoelectronic circuits and their nonclassical properties

    Science.gov (United States)

    Choi, Jeong Ryeol

    2016-09-01

    Quantum properties of a superposition state for a series RLC nanoelectronic circuit are investigated. Two displaced number states of the same amplitude but with opposite phases are considered as the components of the superposition state. We assume that the capacitance of the system varies with time and that a time-dependent power source is applied to the system. The effects of the displacement and of a sinusoidal power source on the characteristics of the state are addressed in detail. Depending on the magnitude of the sinusoidal power source, the wave packets that propagate in charge (q) space are more or less distorted. Provided that the displacement is sufficiently large, distinct interference structures appear in the plot of the time behavior of the probability density whenever the two components of the wave packet meet. This is strong evidence for the advent of nonclassical properties in the system, which cannot be explained by classical theory. Nonclassicality of a quantum system is not only of academic interest in itself; its results can also serve as useful resources for quantum information and computation.

  7. Experiments testing macroscopic quantum superpositions must be slow

    Science.gov (United States)

    Mari, Andrea; de Palma, Giacomo; Giovannetti, Vittorio

    2016-03-01

    We consider a thought experiment where the preparation of a macroscopically massive or charged particle in a quantum superposition and the associated dynamics of a distant test particle apparently allow for superluminal communication. We give a solution to the paradox which is based on the following fundamental principle: any local experiment, discriminating a coherent superposition from an incoherent statistical mixture, necessarily requires a minimum time proportional to the mass (or charge) of the system. For a charged particle, we consider two examples of such experiments, and show that they are both consistent with the previous limitation. In the first, the measurement requires to accelerate the charge, that can entangle with the emitted photons. In the second, the limitation can be ascribed to the quantum vacuum fluctuations of the electromagnetic field. On the other hand, when applied to massive particles our result provides an indirect evidence for the existence of gravitational vacuum fluctuations and for the possibility of entangling a particle with quantum gravitational radiation.

  8. Unveiling the curtain of superposition: Recent gedanken and laboratory experiments

    Science.gov (United States)

    Cohen, E.; Elitzur, A. C.

    2017-08-01

    What is the true meaning of quantum superposition? Can a particle genuinely reside in several places simultaneously? These questions lie at the heart of this paper which presents an updated survey of some important stages in the evolution of the three-boxes paradox, as well as novel conclusions drawn from it. We begin with the original thought experiment of Aharonov and Vaidman, and proceed to its non-counterfactual version. The latter was recently realized by Okamoto and Takeuchi using a quantum router. We then outline a dynamic version of this experiment, where a particle is shown to “disappear” and “re-appear” during the time evolution of the system. This surprising prediction based on self-cancellation of weak values is directly related to our notion of Quantum Oblivion. Finally, we present the non-counterfactual version of this disappearing-reappearing experiment. Within the near future, this last version of the experiment is likely to be realized in the lab, proving the existence of exotic hitherto unknown forms of superposition. With the aid of Bell’s theorem, we prove the inherent nonlocality and nontemporality underlying such pre- and post-selected systems, rendering anomalous weak values ontologically real.

  9. Modelling polychromatic high energy photon beams by superposition.

    Science.gov (United States)

    Metcalfe, P E; Hoban, P W; Murray, D C; Round, W H

    1989-09-01

    A unified three dimensional superposition approach to dose calculations used in treatment planning of polychromatic high energy photon beams in radiotherapy is developed. The approach we have used involves computing the dose at all points in a medium by superposing the dose spread array (DSA) from the interaction of a photon at a point in the medium with an array of data representing the TERMA (photon fluence times the photon energy) at points in the beam. The polychromatic nature of the beam is accounted for by modelling the beam as having ten spectral components. A "polychromatic dose spread array" (PDSA) for an interaction from a beam with this spectrum was derived. The TERMA array is calculated from a weighted average of the TERMA arrays for the ten photon energies to give a "polychromatic TERMA array". Thus the method accounts for the effect of beam hardening of the TERMA. But it does not account for the effect of beam hardening on the PDSA since a single PDSA (usually for the spectrum at the surface of the medium) is used at all depths. However, by considering measured and calculated beam central axis data, this model is shown to be adequate for computing depth doses for beams in a homogeneous medium penetrating to extreme radiological depths. A computation time advantage is gained because only one superposition per beam is required.
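
    The superposition step itself can be sketched in one dimension (a toy illustration with made-up numbers, not the clinical three-dimensional algorithm): a polychromatic TERMA profile is formed as a spectrum-weighted sum of monoenergetic TERMA profiles and then convolved with a single dose spread kernel.

        import numpy as np

        depth = np.arange(200)                          # depth bins (arbitrary units)

        # toy TERMA profiles for three spectral components (exponential attenuation)
        mu = np.array([0.05, 0.03, 0.02])               # effective attenuation per bin
        weights = np.array([0.2, 0.5, 0.3])             # spectral weights, summing to 1
        poly_terma = (weights[:, None] * np.exp(-mu[:, None] * depth)).sum(axis=0)

        # toy forward-peaked dose spread kernel, normalized to unit area
        kernel = np.exp(-0.5 * np.arange(30))
        kernel /= kernel.sum()

        dose = np.convolve(poly_terma, kernel)[:depth.size]   # superposition of kernel contributions
        print(dose[:5])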

  10. Free Nano-Object Ramsey Interferometry for Large Quantum Superpositions

    Science.gov (United States)

    Wan, C.; Scala, M.; Morley, G. W.; Rahman, ATM. A.; Ulbricht, H.; Bateman, J.; Barker, P. F.; Bose, S.; Kim, M. S.

    2016-09-01

    We propose an interferometric scheme based on an untrapped nano-object subjected to gravity. The motion of the center of mass (c.m.) of the free object is coupled to its internal spin system magnetically, and a free flight scheme is developed based on coherent spin control. The wave packet of the test object, under a spin-dependent force, may then be delocalized to a macroscopic scale. A gravity induced dynamical phase (accrued solely on the spin state, and measured through a Ramsey scheme) is used to reveal the above spatially delocalized superposition of the spin-nano-object composite system that arises during our scheme. We find a remarkable immunity to the motional noise in the c.m. (initially in a thermal state with moderate cooling), and also a dynamical decoupling nature of the scheme itself. Together they secure a high visibility of the resulting Ramsey fringes. The mass independence of our scheme makes it viable for a nano-object selected from an ensemble with a high mass variability. Given these advantages, a quantum superposition with a 100 nm spatial separation for a massive object of $10^9$ amu is achievable experimentally, providing a route to test postulated modifications of quantum theory such as continuous spontaneous localization.

  11. Student ability to distinguish between superposition states and mixed states in quantum mechanics

    Science.gov (United States)

    Passante, Gina; Emigh, Paul J.; Shaffer, Peter S.

    2015-12-01

    Superposition gives rise to the probabilistic nature of quantum mechanics and is therefore one of the concepts at the heart of quantum mechanics. Although we have found that many students can successfully use the idea of superposition to calculate the probabilities of different measurement outcomes, they are often unable to identify the experimental implications of a superposition state. In particular, they fail to recognize how a superposition state and a mixed state (sometimes called a "lack of knowledge" state) can produce different experimental results. We present data that suggest that superposition in quantum mechanics is a difficult concept for students enrolled in sophomore-, junior-, and graduate-level quantum mechanics courses. We illustrate how an interactive lecture tutorial can improve student understanding of quantum mechanical superposition. A longitudinal study suggests that the impact persists after an additional quarter of quantum mechanics instruction that does not specifically address these ideas.

  12. Role of the superposition principle for enhancing the efficiency of the quantum-mechanical Carnot engine.

    Science.gov (United States)

    Abe, Sumiyoshi; Okuyama, Shinji

    2012-01-01

    The role of the superposition principle is discussed for the quantum-mechanical Carnot engine introduced by Bender, Brody, and Meister [J. Phys. A 33, 4427 (2000)]. It is shown that the efficiency of the engine can be enhanced by the superposition of quantum states. A finite-time process is also discussed and the condition of the maximum power output is presented. Interestingly, the efficiency at the maximum power is lower than that without superposition.

  13. Vibration Superposition in Tunnel Blasting with Millisecond Delay

    Institute of Scientific and Technical Information of China (English)

    ZHENG Jun-jie; LOU Xiao-ming; LUO De-pi

    2009-01-01

    According to explosion dynamics and elastic wave theory, models of the particle vibration velocity for simultaneous blasting and millisecond blasting are built. In the models, influential factors such as the delay interval and the charge quantity are considered. The calculated vibration velocity is compared with the field test results, which shows that the theoretical values are close to the experimental ones. Meanwhile, the particle vibration velocity decreases quickly with time due to the damping of the rock mass and follows a harmonic motion, and the particle vibration velocity of millisecond blasting has a short interval. The superposition of particle vibration velocities may reduce vibration because of wave interference, or magnify the surrounding rock response to the blasting-induced vibration.

  14. Quantum Decoherence Timescales for Ionic Superposition States in Ion Channels

    CERN Document Server

    Salari, V; Fazileh, F; Shahbazi, F

    2014-01-01

    There are many controversial and challenging discussions about quantum effects in microscopic structures in neurons of the human brain. The challenge arises mainly because of the quick decoherence of quantum states due to the hot, wet and noisy environment of the brain, which forbids long-lived coherence for brain processing. Despite these critical discussions, there are only a few published papers about numerical aspects of decoherence in neurons. Perhaps the most important treatment is offered by Max Tegmark, who has calculated decoherence times for the systems of "ions" and "microtubules" in neurons of the brain. In fact, Tegmark did not consider ion channels, which are responsible for ion displacement through the membrane and are the building blocks of electrical membrane signals in the nervous system. Here, we would like to re-investigate decoherence times for ionic superposition states by using the data obtained via molecular dynamics simulations. Our main approach is according to what Tegmark has used before. I...

  15. Adiabatic rotation, quantum search, and preparation of superposition states

    Science.gov (United States)

    Siu, M. Stewart

    2007-06-01

    We introduce the idea of using adiabatic rotation to generate superpositions of a large class of quantum states. For quantum computing this is an interesting alternative to the well-studied “straight line” adiabatic evolution. In ways that complement recent results, we show how to efficiently prepare three types of states: Kitaev’s toric code state, the cluster state of the measurement-based computation model, and the history state used in the adiabatic simulation of a quantum circuit. We also show that the method, when adapted for quantum search, provides quadratic speedup as other optimal methods do with the advantages that the problem Hamiltonian is time independent and that the energy gap above the ground state is strictly nondecreasing with time. Likewise the method can be used for optimization as an alternative to the standard adiabatic algorithm.

  16. Sensing Super-Position: Human Sensing Beyond the Visual Spectrum

    Science.gov (United States)

    Maluf, David A.; Schipper, John F.

    2007-01-01

    The coming decade of fast, cheap and miniaturized electronics and sensory devices opens new pathways for the development of sophisticated equipment to overcome limitations of the human senses. This paper addresses the technical feasibility of augmenting human vision through Sensing Super-position by mixing natural human sensing. The current implementation of the device translates visual and other passive or active sensory instruments into sounds, which become relevant when the visual resolution is insufficient for very difficult and particular sensing tasks. A successful Sensing Super-position meets many human and pilot-vehicle system requirements. The system can be further developed into a cheap, portable, and low-power device, taking into account the limited capabilities of the human user as well as the typical characteristics of his dynamic environment. The system operates in real time, giving the desired information for the particular augmented sensing tasks. The Sensing Super-position device increases the perceived image resolution, which is obtained via an auditory representation as well as the visual representation. Auditory mapping is performed to distribute an image in time. The three-dimensional spatial brightness and multi-spectral maps of a sensed image are processed using real-time image processing techniques (e.g. histogram normalization) and transformed into a two-dimensional map of an audio signal as a function of frequency and time. This paper details the approach of developing Sensing Super-position systems as a way to augment the human vision system by exploiting the capabilities of the human hearing system as an additional neural input. The human hearing system is capable of learning to process and interpret extremely complicated and rapidly changing auditory patterns. The known capabilities of the human hearing system to learn and understand complicated auditory patterns provided the basic motivation for developing an image-to-sound mapping system. The
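
    A toy version of such an image-to-sound mapping fits in a few lines of Python (purely illustrative; the actual device's frequency scale, scan rate, and processing are not specified here): image columns are scanned left to right in time, image rows are assigned audio frequencies, and pixel brightness sets the amplitude of each frequency component.

        import numpy as np

        def image_to_sound(image, duration_s=1.0, fs=8000, f_lo=200.0, f_hi=4000.0):
            """Map a 2-D brightness array (rows x cols, values in [0, 1]) to an audio signal."""
            rows, cols = image.shape
            freqs = np.linspace(f_hi, f_lo, rows)               # top rows -> higher pitches
            n = int(duration_s * fs)
            t = np.arange(n) / fs
            col = np.minimum((t / duration_s * cols).astype(int), cols - 1)
            audio = np.zeros(n)
            for r in range(rows):
                audio += image[r, col] * np.sin(2 * np.pi * freqs[r] * t)
            return audio / rows

        img = np.zeros((16, 16)); img[4, :] = 1.0               # a single bright horizontal line
        print(image_to_sound(img).shape)                        # (8000,) audio samples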

  17. Predicting jet radius in electrospinning by superpositioning exponential functions

    Science.gov (United States)

    Widartiningsih, P. M.; Iskandar, F.; Munir, M. M.; Viridi, S.

    2016-08-01

    This paper presents an analytical study of the correlation between viscosity and fiber diameter in electrospinning. Control over the fiber diameter in the electrospinning process is important since it determines the performance of the resulting nanofiber. Theoretically, the fiber diameter is determined by surface tension, solution concentration, flow rate, and electric current, but experiments have shown that viscosity also has a significant influence on the fiber diameter. The jet radius equation in the electrospinning process is divided into three regions: near the nozzle, far from the nozzle, and at the jet terminal, and there is no direct correlation between these equations. Superposition of an exponential series model combines the equations into one, so that all of the working parameters in electrospinning contribute to the fiber diameter. This method shows that the solution viscosity has a linear relation to the jet radius. However, the method works only for low viscosity.

  18. Performance of Superposition Coded Broadcast/Unicast Service Overlay System

    Science.gov (United States)

    Yoon, Seokhyun; Kim, Donghee

    The system-level performance of a superposition coded broadcast/unicast service overlay system is considered. A cellular network for unicast service only is considered as an interference-limited system, where increasing the transmission power does not help improve the network throughput, especially when the frequency reuse factor is close to 1. In such cases, the amount of power that does not contribute to improving the throughput can be considered "unused." This situation motivates us to use the unused power for broadcast services, which can be efficiently provided in OFDM-based single frequency networks as in digital multimedia broadcast systems. In this paper, we investigate the performance of such a broadcast/unicast overlay system in which a single frequency broadcast service is superimposed over a unicast cellular service. Alternative service multiplexing using FDM/TDM is also considered for comparison.

  19. Macroscopicity of quantum superpositions on a one-parameter unitary path in Hilbert space

    Science.gov (United States)

    Volkoff, T. J.; Whaley, K. B.

    2014-12-01

    We analyze quantum states formed as superpositions of an initial pure product state and its image under local unitary evolution, using two measurement-based measures of superposition size: one based on the optimal quantum binary distinguishability of the branches of the superposition and another based on the ratio of the maximal quantum Fisher information of the superposition to that of its branches, i.e., the relative metrological usefulness of the superposition. A general formula for the effective sizes of these states according to the branch-distinguishability measure is obtained and applied to superposition states of N quantum harmonic oscillators composed of Gaussian branches. Considering optimal distinguishability of pure states on a time-evolution path leads naturally to a notion of distinguishability time that generalizes the well-known orthogonalization times of Mandelstam and Tamm and Margolus and Levitin. We further show that the distinguishability time provides a compact operational expression for the superposition size measure based on the relative quantum Fisher information. By restricting the maximization procedure in the definition of this measure to an appropriate algebra of observables, we show that the superposition size of, e.g., NOON states and hierarchical cat states, can scale linearly with the number of elementary particles comprising the superposition state, implying precision scaling inversely with the total number of photons when these states are employed as probes in quantum parameter estimation of a 1-local Hamiltonian in this algebra.

  20. The number of terms in the superpositions upper bounds the amount of the coherence change

    Science.gov (United States)

    Liu, Feng; Li, Fei

    2016-10-01

    For the l1 norm of coherence, what is the relation between the coherence of a state and that of the individual terms which, by superposition, yield the state? We find upper bounds on the coherence change before and after the superposition. When every term comes from a single Hilbert subspace, the upper bound is the number of terms in the superposition minus one. However, when the terms have support on orthogonal subspaces, the coherence of the superposition cannot exceed the average coherence of all the superposed terms by more than twice the above upper bound.
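
    For reference, the l1 norm of coherence of a state ρ in a fixed basis {|i⟩} is the standard quantity (general background, not specific to this paper)

    $$ C_{l_1}(\rho) = \sum_{i \neq j} |\rho_{ij}|, $$

    so for a superposition of n terms the result quoted above says, schematically, that the coherence gained or lost relative to the superposed branches is controlled by n − 1 (single-subspace case) or by twice that quantity plus the average branch coherence (orthogonal-subspace case), independently of the dimension of the underlying space.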

  1. On first-order theorem proving using generalized odd-superpositions

    Institute of Scientific and Technical Information of China (English)

    吴尽昭; 刘卓军

    1996-01-01

    It is shown that the proof system using odd-superpositions Ⅱ is not complete. The reason for this incompleteness is that the use of the idempotency rule is neglected. By defining the superpositions of first-order polynomials and zero, the concept of odd-superpositions Ⅱ is extended, and a complete proof system using the extended odd-superpositions Ⅱ is developed. In addition, this proof system is an improvement on the remainder method; its completeness actually demonstrates that the remainder method using the semantic strategy is still complete.

  2. Generation of superpositions of coherent states for an atomic sample in cavity QED

    Institute of Scientific and Technical Information of China (English)

    Zheng Shi-Biao

    2009-01-01

    This paper proposes a scheme for generation of superpositions of coherent states of the effective bosonic mode in a collection of atoms. In the scheme an atomic sample interacts with a slightly detuned cavity mode and a resonant strong classical field. Under certain conditions the atomic system evolves from a coherent state to a superposition of coherent states.

  3. Superposition Principle and Young Type Double-Slit Experiment in Vacuum

    CERN Document Server

    Savas, A

    2002-01-01

    In this study, it is argued that the superposition principle does not hold in vacuum, a claim that could be tested by carrying out a Young-type double-slit experiment. Since field-field interaction is mediated by charged particles, in the absence of charged particles the linear superposition of two fields is not possible and interference will not be observed.

  4. A note on superposition of two unknown states using Deutsch CTC model

    CERN Document Server

    Sami, Sasha

    2016-01-01

    In a recent work, the authors prove yet another no-go theorem that forbids the existence of a universal probabilistic quantum protocol producing a superposition of two unknown quantum states. In this short note, we show that in the presence of closed timelike curves, one can indeed create a superposition of unknown quantum states and evade the no-go result.

  5. Student Ability to Distinguish between Superposition States and Mixed States in Quantum Mechanics

    Science.gov (United States)

    Passante, Gina; Emigh, Paul J.; Shaffer, Peter S.

    2015-01-01

    Superposition gives rise to the probabilistic nature of quantum mechanics and is therefore one of the concepts at the heart of quantum mechanics. Although we have found that many students can successfully use the idea of superposition to calculate the probabilities of different measurement outcomes, they are often unable to identify the…

  7. Experimental generation and application of the superposition of higher-order Bessel beams

    CSIR Research Space (South Africa)

    Dudley, Angela L

    2009-07-01

    Full Text Available This presentation provides information on experimental generation and application of the superposition of higher-order Bessel beams. The superposition of zero order Bessel beams can be used to measure the radius of curvature of a reflecting surface...

  8. Optics and phylogeny: is there an insight? The evolution of superposition eyes in the Decapoda (Crustacea)

    NARCIS (Netherlands)

    Gaten, Edward

    1998-01-01

    This paper addresses the use of eye structure and optics in the construction of crustacean phylogenies and presents an hypothesis for the evolution of superposition eyes in the Decapoda, based on the distribution of eye types in extant decapod families. It is suggested that reflecting superposition

  9. Generation of discrete superpositions of coherent states in the anharmonic oscillator model

    CERN Document Server

    Miranowicz, A; Kielich, S; 10.1088/0954-8998/2/3/006

    2011-01-01

    The problem of generating discrete superpositions of coherent states in the process of light propagation through a nonlinear Kerr medium, which is modelled by the anharmonic oscillator, is discussed. It is shown that under an appropriate choice of the length (time) of the medium the superpositions with both even and odd numbers of coherent states can appear. Analytical formulae for such superpositions with a few components are given explicitly. General rules governing the process of generating discrete superpositions of coherent states are also given. The maximum number of well distinguished states that can be obtained for a given number of initial photons is estimated. The quasiprobability distribution $Q(\\alpha,\\alpha^*,t)$ representing the superposition states is illustrated graphically, showing regular structures when the component states are well separated.
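
    A minimal worked instance of this textbook Kerr-oscillator behaviour (not the paper's own derivation; conventions for the anharmonic term differ by contributions linear in the photon number, which only rotate phase space): with H = ħχ(â†â)², an initial coherent state becomes a two-component superposition at χt = π/2,

    $$ e^{-i\chi t\,(\hat a^\dagger \hat a)^2}\,|\alpha\rangle \Big|_{\chi t = \pi/2} = \tfrac{1}{\sqrt{2}}\left(e^{-i\pi/4}|\alpha\rangle + e^{+i\pi/4}|{-\alpha}\rangle\right), $$

    and at other rational fractions χt = π/q the state evolves into a superposition of several coherent components lying on the circle of constant |α|, which is the "appropriate choice of the length (time) of the medium" referred to above.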

  10. Homogeneous partial differential equations for superpositions of indeterminate functions of several variables

    Energy Technology Data Exchange (ETDEWEB)

    Asai, Kazuto [University of Aizu, Aizu-Wakamatsu (Japan)

    2009-02-28

    We determine essentially all partial differential equations satisfied by superpositions of tree type and of a further special type. These equations represent necessary and sufficient conditions for an analytic function to be locally expressible as an analytic superposition of the type indicated. The representability of a real analytic function by a superposition of this type is independent of whether that superposition involves real-analytic functions or C^ρ-functions, where the constant ρ is determined by the structure of the superposition. We also prove that the function u defined by u^n = xu^a + yu^b + zu^c + 1 is generally non-representable in any real (resp. complex) domain as f(g(x,y), h(y,z)) with twice differentiable f and differentiable g, h (resp. analytic f, g, h).

  11. A comparison between anisotropic analytical and multigrid superposition dose calculation algorithms in radiotherapy treatment planning.

    Science.gov (United States)

    Wu, Vincent W C; Tse, Teddy K H; Ho, Cola L M; Yeung, Eric C Y

    2013-01-01

    Monte Carlo (MC) simulation is currently the most accurate dose calculation algorithm in radiotherapy planning but requires a relatively long processing time. Faster model-based algorithms such as the anisotropic analytical algorithm (AAA) in the Eclipse treatment planning system and multigrid superposition (MGS) in the XiO treatment planning system are two commonly used alternatives. This study compared AAA and MGS against MC, as the gold standard, on brain, nasopharynx, lung, and prostate cancer patients. Computed tomography scans of 6 patients of each cancer type were used. The same hypothetical treatment plan using the same machine and treatment prescription was computed for each case by each planning system using its respective dose calculation algorithm. The doses at reference points including (1) soft tissues only, (2) bones only, (3) air cavities only, (4) soft tissue-bone boundary (Soft/Bone), (5) soft tissue-air boundary (Soft/Air), and (6) bone-air boundary (Bone/Air) were measured and compared using the mean absolute percentage error (MAPE), which was a function of the percentage dose deviations from MC. In addition, the computation time of each treatment plan was recorded and compared. The MAPEs of MGS were significantly lower than those of AAA in all types of cancers, whereas the computation time of the AAA plans was significantly lower than that of the MGS plans. Both algorithms demonstrated dose deviations of less than 4.0% in most clinical cases, and their performance was better in homogeneous tissues than at tissue boundaries. In general, MGS demonstrated relatively smaller dose deviations than AAA but required longer computation time.
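
    The comparison metric is, schematically (standard definition; the exact normalization used in the paper is not reproduced in the record),

    $$ \mathrm{MAPE} = \frac{100\%}{N}\sum_{i=1}^{N}\left|\frac{D_i^{\mathrm{algo}} - D_i^{\mathrm{MC}}}{D_i^{\mathrm{MC}}}\right|, $$

    evaluated over the N reference points in each tissue or boundary category, with the Monte Carlo doses D^MC taken as the gold standard.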

  12. A geometry-based approach to determining time-temperature superposition shifts in aging experiments

    Energy Technology Data Exchange (ETDEWEB)

    Maiti, Amitesh

    2015-12-21

    A powerful way to expand the time and frequency range of material properties is through a method called time-temperature superposition (TTS). Traditionally, TTS has been applied to the dynamical mechanical and flow properties of thermo-rheologically simple materials, where a well-defined master curve can be objectively and accurately obtained by appropriate shifts of curves at different temperatures. However, TTS analysis can also be useful in many other situations where there is scatter in the data and where the principle holds only approximately. In such cases, shifting curves can become a subjective exercise and can often lead to significant errors in the long-term prediction. This mandates the need for an objective method of determining TTS shifts. Here, we adopt a method based on minimizing the “arc length” of the master curve, which is designed to work in situations where there is overlapping data at successive temperatures. We examine the accuracy of the method as a function of increasing noise in the data, and explore the effectiveness of data smoothing prior to TTS shifting. We validate the method using existing experimental data on the creep strain of an aramid fiber and the powder coarsening of an energetic material.
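
    A minimal sketch of the arc-length idea, as an illustrative reconstruction rather than the code from the report: for two overlapping isotherms, the horizontal shift log10(a_T) of the second data set is chosen so that the merged, sorted curve has the smallest total arc length. Data and function names below are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def arc_length(log_t, y):
    """Total arc length of a curve given log10(time) and property values."""
    order = np.argsort(log_t)
    return np.sum(np.hypot(np.diff(log_t[order]), np.diff(y[order])))

def best_shift(log_t_ref, y_ref, log_t_new, y_new):
    """log10(a_T) for the second data set that minimizes the arc length of
    the merged master curve (requires overlap between the two segments)."""
    def cost(shift):
        return arc_length(np.concatenate([log_t_ref, log_t_new + shift]),
                          np.concatenate([y_ref, y_new]))
    return minimize_scalar(cost, bounds=(-6.0, 6.0), method="bounded").x

# Hypothetical creep segments at two temperatures, lying on one master curve.
t1 = np.linspace(0.0, 2.0, 40); y1 = np.log10(1.0 + 10 ** (t1 - 1.0))
t2 = np.linspace(0.0, 2.0, 40); y2 = np.log10(1.0 + 10 ** (t2 + 0.5 - 1.0))
print("log10 a_T:", best_shift(t1, y1, t2, y2))   # ~ +0.5 for this toy data
```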

  13. Fugacity superposition: a new approach to dynamic multimedia fate modeling.

    Science.gov (United States)

    Hertwich, E G

    2001-08-01

    The fugacities, concentrations, or inventories of pollutants in environmental compartments as determined by multimedia environmental fate models of the Mackay type can be superimposed on each other. This is true for both steady-state (level III) and dynamic (level IV) models. Any problem in multimedia fate models with linear, time-invariant transfer and transformation coefficients can be solved through a superposition of a set of n independent solutions to a set of coupled, homogeneous first-order differential equations, where n is the number of compartments in the model. For initial condition problems in dynamic models, the initial inventories can be separated, e.g. by a compartment. The solution is obtained by adding the single-compartment solutions. For time-varying emissions, a convolution integral is used to superimpose solutions. The advantage of this approach is that the differential equations have to be solved only once. No numeric integration is required. Alternatively, the dynamic model can be simplified to algebraic equations using the Laplace transform. For time-varying emissions, the Laplace transform of the model equations is simply multiplied with the Laplace transform of the emission profile. It is also shown that the time-integrated inventories of the initial conditions problems are the same as the inventories in the steady-state problem. This implies that important properties of pollutants such as potential dose, persistence, and characteristic travel distance can be derived from the steady state.
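
    The superposition/convolution bookkeeping for a linear, time-invariant (level IV) model can be sketched as follows. Everything here is illustrative: the 2x2 transfer matrix, time step, and emission pulse are hypothetical, and the fugacity-capacity terms of a real Mackay-type model are not shown.

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical two-compartment model: dm/dt = A @ m + e(t), with inventories m
# and emission rates e. A has units of 1/day.
A = np.array([[-0.30,  0.05],
              [ 0.10, -0.20]])
dt, n_steps = 0.1, 600
Phi = expm(A * dt)                                   # one-step propagator

def impulse_response(m0):
    """Inventory trajectory for initial inventory m0 and zero emissions."""
    out, m = [], np.asarray(m0, dtype=float)
    for _ in range(n_steps):
        out.append(m)
        m = Phi @ m
    return np.array(out)

# Solve the homogeneous problem once per compartment; every initial-condition
# problem is then a superposition (linear combination) of these solutions.
basis = [impulse_response(np.eye(2)[k]) for k in range(2)]

# Time-varying emissions: discrete convolution of the impulse responses with
# the emission profile (superposition in time), no further integration needed.
e = np.zeros((n_steps, 2)); e[:100, 0] = 1.0         # pulse into compartment 1
m_conv = np.zeros((n_steps, 2))
for j in range(2):                                   # emitting compartment
    for i in np.nonzero(e[:, j])[0]:                 # emission time index
        m_conv[i:] += e[i, j] * dt * basis[j][: n_steps - i]
print("final inventories:", m_conv[-1])
```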

  14. Linear superposition of sensory-evoked and ongoing cortical hemodynamics

    Directory of Open Access Journals (Sweden)

    Mohamad Saka

    2010-08-01

    Full Text Available Modern non-invasive brain imaging techniques utilise changes in cerebral blood flow, volume and oxygenation that accompany brain activation. However, stimulus-evoked hemodynamic responses display considerable inter-trial variability even when identical stimuli are presented, and the sources of this variability are poorly understood. One of the sources of this response variation could be ongoing spontaneous hemodynamic fluctuations. To investigate this issue, 2-dimensional optical imaging spectroscopy was used to measure cortical hemodynamics in response to sensory stimuli in anaesthetised rodents. Pre-stimulus cortical hemodynamics displayed spontaneous periodic fluctuations and, as such, data from individual stimulus presentation trials were assigned to one of four groups depending on the phase angle of pre-stimulus hemodynamic fluctuations and averaged. This analysis revealed that sensory-evoked cortical hemodynamics displayed distinctive response characteristics and magnitudes depending on the phase angle of ongoing fluctuations at stimulus onset. To investigate the origin of this phenomenon, ‘null trials’ were collected without stimulus presentation. Subtraction of phase-averaged ‘null trials’ from their phase-averaged stimulus-evoked counterparts resulted in four similar time series that resembled the mean stimulus-evoked response. These analyses suggest that linear superposition of evoked and ongoing cortical hemodynamic changes may be a property of the structure of inter-trial variability.

  15. A reciprocal space approach for locating symmetry elements in Patterson superposition maps

    Energy Technology Data Exchange (ETDEWEB)

    Hendrixson, T.

    1990-09-21

    A method for determining the location and possible existence of symmetry elements in Patterson superposition maps has been developed. A comparison of the original superposition map and a superposition map operated on by the symmetry element gives possible translations to the location of the symmetry element. A reciprocal space approach using structure factor-like quantities obtained from the Fourier transform of the superposition function is then used to determine the "best" location of the symmetry element. Constraints based upon the space group requirements are also used as a check on the locations. The locations of the symmetry elements are used to modify the Fourier transform coefficients of the superposition function to give an approximation of the structure factors, which are then refined using the EG relation. The analysis of several compounds using this method is presented. Reciprocal space techniques for locating multiple images in the superposition function are also presented, along with methods to remove the effect of multiple images in the Fourier transform coefficients of the superposition map. In addition, crystallographic studies of the extended chain structure of (NHC5H5)SbI4 and of the twinning method of the orthorhombic form of the high-Tc superconductor YBa2Cu3O7-x are presented. 54 refs.

  16. A convolution-superposition dose calculation engine for GPUs

    Energy Technology Data Exchange (ETDEWEB)

    Hissoiny, Sami; Ozell, Benoit; Despres, Philippe [Departement de genie informatique et genie logiciel, Ecole polytechnique de Montreal, 2500 Chemin de Polytechnique, Montreal, Quebec H3T 1J4 (Canada); Departement de radio-oncologie, CRCHUM-Centre hospitalier de l' Universite de Montreal, 1560 rue Sherbrooke Est, Montreal, Quebec H2L 4M1 (Canada)

    2010-03-15

    Purpose: Graphic processing units (GPUs) are increasingly used for scientific applications, where their parallel architecture and unprecedented computing power density can be exploited to accelerate calculations. In this paper, a new GPU implementation of a convolution/superposition (CS) algorithm is presented. Methods: This new GPU implementation has been designed from the ground-up to use the graphics card's strengths and to avoid its weaknesses. The CS GPU algorithm takes into account beam hardening, off-axis softening, kernel tilting, and relies heavily on raytracing through patient imaging data. Implementation details are reported as well as a multi-GPU solution. Results: An overall single-GPU acceleration factor of 908x was achieved when compared to a nonoptimized version of the CS algorithm implemented in PlanUNC in single threaded central processing unit (CPU) mode, resulting in approximately 2.8 s per beam for a 3D dose computation on a 0.4 cm grid. A comparison to an established commercial system leads to an acceleration factor of approximately 29x or 0.58 versus 16.6 s per beam in single threaded mode. An acceleration factor of 46x has been obtained for the total energy released per mass (TERMA) calculation and a 943x acceleration factor for the CS calculation compared to PlanUNC. Dose distributions also have been obtained for a simple water-lung phantom to verify that the implementation gives accurate results. Conclusions: These results suggest that GPUs are an attractive solution for radiation therapy applications and that careful design, taking the GPU architecture into account, is critical in obtaining significant acceleration factors. These results potentially can have a significant impact on complex dose delivery techniques requiring intensive dose calculations such as intensity-modulated radiation therapy (IMRT) and arc therapy. They also are relevant for adaptive radiation therapy where dose results must be obtained rapidly.
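
    At its core, convolution/superposition spreads the total energy released per unit mass (TERMA) with an energy-deposition kernel. The toy homogeneous-medium sketch below is not the GPU code described above: the grid, attenuation coefficient, and Gaussian kernel are hypothetical, and real C/S engines ray-trace radiological depth, tilt the kernel, and handle heterogeneities.

```python
import numpy as np
from scipy.signal import fftconvolve

# Hypothetical 3-D TERMA grid: a pencil beam entering along z, attenuating
# exponentially with depth.
nx = ny = nz = 64
terma = np.zeros((nx, ny, nz))
terma[nx // 2, ny // 2, :] = np.exp(-0.05 * np.arange(nz))

# Hypothetical isotropic energy-deposition kernel, normalized to unit integral.
gx, gy, gz = np.meshgrid(*(np.arange(-8, 9),) * 3, indexing="ij")
kernel = np.exp(-(gx**2 + gy**2 + gz**2) / 8.0)
kernel /= kernel.sum()

# Dose = TERMA convolved (superposed) with the kernel.
dose = fftconvolve(terma, kernel, mode="same")
print("max-dose voxel:", np.unravel_index(dose.argmax(), dose.shape))
```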

  17. Comparison study on spherical wave superposition method and spherical wave source boundary point method for realizing nearfield acoustic holography

    Institute of Scientific and Technical Information of China (English)

    BI Chuanxing; CHEN Xinzhao; ZHOU Rong; CHEN Jian

    2005-01-01

    In the light of the concept of spherical wave source, the theoretical model of nearfield acoustic holography (NAH) based on the spherical wave superposition method (SWSM), including reconstruction of expansion coefficients, prediction of acoustic field, error sensitivity analysis, regularization method and a searching method with dual measurement surfaces for determining the optimal number of expansion terms, is established. Subsequently, the spherical wave source boundary point method (SWSBPM) and its application in the NAH are introduced briefly. Considering the similarity of the SWSM and the SWSBPM for realizing the NAH, they are compared. The similarities and differences of the two methods are illuminated by a rigorous mathematical justification and two experiments on a single source and two coherent sources in the semi-free acoustic field. And, the superiority of the NAH based on the SWSBPM is demonstrated.

  18. Collapsing a perfect superposition to a chosen quantum state without measurement.

    Science.gov (United States)

    Younes, Ahmed; Abdel-Aty, Mahmoud

    2014-01-01

    Given a perfect superposition of [Formula: see text] states on a quantum system of [Formula: see text] qubits, we propose a fast quantum algorithm for collapsing the perfect superposition to a chosen quantum state [Formula: see text] without applying any measurements. The basic idea is to use a phase destruction mechanism. Two operators are used: the first operator applies a phase shift and a temporary entanglement to mark [Formula: see text] in the superposition, and the second operator applies selective phase shifts on the states in the superposition according to their Hamming distance from [Formula: see text]. The generated state can be used as an excellent input state for testing quantum memories and linear optics quantum computers. We make no assumptions about the used operators and applied quantum gates, but our result implies that for this purpose the number of qubits in the quantum register offers no advantage, in principle, over the obvious measurement-based feedback protocol.

  19. Collapsing a perfect superposition to a chosen quantum state without measurement.

    Directory of Open Access Journals (Sweden)

    Ahmed Younes

    Full Text Available Given a perfect superposition of [Formula: see text] states on a quantum system of [Formula: see text] qubits, we propose a fast quantum algorithm for collapsing the perfect superposition to a chosen quantum state [Formula: see text] without applying any measurements. The basic idea is to use a phase destruction mechanism. Two operators are used: the first operator applies a phase shift and a temporary entanglement to mark [Formula: see text] in the superposition, and the second operator applies selective phase shifts on the states in the superposition according to their Hamming distance from [Formula: see text]. The generated state can be used as an excellent input state for testing quantum memories and linear optics quantum computers. We make no assumptions about the used operators and applied quantum gates, but our result implies that for this purpose the number of qubits in the quantum register offers no advantage, in principle, over the obvious measurement-based feedback protocol.

  20. A cute and highly contrast-sensitive superposition eye : The diurnal owlfly Libelloides macaronius

    NARCIS (Netherlands)

    Belušič, Gregor; Pirih, Primož; Stavenga, Doekele G.

    The owlfly Libelloides macaronius (Insecta: Neuroptera) has large bipartite eyes of the superposition type. The spatial resolution and sensitivity of the photoreceptor array in the dorsofrontal eye part was studied with optical and electrophysiological methods. Using structured illumination

  1. Superpositions of higher-order bessel beams and nondiffracting speckle fields - (SAIP 2009)

    CSIR Research Space (South Africa)

    Dudley, Angela L

    2009-07-01

    Full Text Available This poster presents a mechanism for the generation of the superposition of higher-order Bessel beams, which implements a ring slit aperture and spatial light modulator (SLM). The experimental technique is also adapted to generate nondiffracting...

  2. A cute and highly contrast-sensitive superposition eye - the diurnal owlfly Libelloides macaronius

    NARCIS (Netherlands)

    Belusic, Gregor; Pirih, Primoz; Stavenga, Doekele G.; Belušič, Gregor; Pirih, Primož

    2013-01-01

    The owlfly Libelloides macaronius (Insecta: Neuroptera) has large bipartite eyes of the superposition type. The spatial resolution and sensitivity of the photoreceptor array in the dorsofrontal eye part was studied with optical and electrophysiological methods. Using structured illumination microscopy

  3. Evaluation of Class II treatment by cephalometric regional superpositions versus conventional measurements.

    Science.gov (United States)

    Efstratiadis, Stella; Baumrind, Sheldon; Shofer, Frances; Jacobsson-Hunt, Ulla; Laster, Larry; Ghafari, Joseph

    2005-11-01

    The aims of this study were (1) to evaluate cephalometric changes in subjects with Class II Division 1 malocclusion who were treated with headgear (HG) or Fränkel function regulator (FR) and (2) to compare findings from regional superpositions of cephalometric structures with those from conventional cephalometric measurements. Cephalographs were taken at baseline, after 1 year, and after 2 years of 65 children enrolled in a prospective randomized clinical trial. The spatial location of the landmarks derived from regional superpositions was evaluated in a coordinate system oriented on natural head position. The superpositions included the best anatomic fit of the anterior cranial base, maxillary base, and mandibular structures. Both the HG and the FR were effective in correcting the distoclusion, and they generated enhanced differential growth between the jaws. Differences between cranial and maxillary superpositions regarding mandibular displacement (Point B, pogonion, gnathion, menton) were noted: the HG had a more horizontal vector on maxillary superposition that was also greater (.0001 < P < .05) than the horizontal displacement observed with the FR. This discrepancy appeared to be related to (1) the clockwise (backward) rotation of the palatal and mandibular planes observed with the HG; the palatal plane's rotation, which was transferred through the occlusion to the mandibular plane, was factored out on maxillary superposition; and (2) the interaction between the inclination of the maxillary incisors and the forward movement of the mandible during growth. Findings from superpositions agreed with conventional angular and linear measurements regarding the basic conclusions for the primary effects of HG and FR. However, the results suggest that inferences of mandibular displacement are more reliable from maxillary than cranial superposition when evaluating occlusal changes during treatment.

  4. Harmonic Enhancement Mechanism of a Superposition State Atom Irradiated by Short Pulses

    Institute of Scientific and Technical Information of China (English)

    YANG Yu-Jun; ZHU Qi-Ren; CHEN Ji-Gen; HUANG Yu-Xin; GUO Fu-Ming; ZHANG Hong-Xing; SUN Jia-Zhong; ZHU Hong-Yu; WANG Li; WANG Hui

    2007-01-01

    We investigate the high-order harmonic generation (HHG) of a model atom whose initial state is prepared in a superposition of its ground state and an excited state, irradiated by laser pulses of different durations. Compared with the HHG from an atom initially in its ground state, the conversion efficiency is enhanced. The enhancement originates from the higher ionization rate (rather than the ionization yield) of the atom prepared in the superposition initial state.

  5. Experimental Demonstration of Capacity-Achieving Phase-Shifted Superposition Modulation

    DEFF Research Database (Denmark)

    Estaran Tolosa, Jose Manuel; Zibar, Darko; Caballero Jambrina, Antonio

    2013-01-01

    We report on the first experimental demonstration of phase-shifted superposition modulation (PSM) for optical links. Successful demodulation and decoding is obtained after 240 km transmission for 16-, 32- and 64-PSM.

  6. Creation of macroscopic superpositions of flow states with Bose-Einstein condensates

    OpenAIRE

    Dunningham, Jacob; Hallwood, David

    2006-01-01

    We present a straightforward scheme for creating macroscopic superpositions of different superfluid flow states of Bose-Einstein condensates trapped in optical lattices. This scheme has the great advantage that all the techniques required are achievable with current experiments. Furthermore, the relative difficulty of creating cats scales favorably with the size of the cat. This means that this scheme may be well-suited to creating superpositions involving large numbers of particles. Such sta...

  7. Nonlinear quantum mechanics, the superposition principle, and the quantum measurement problem

    Indian Academy of Sciences (India)

    Kinjalk Lochan; T P Singh

    2011-01-01

    There are four reasons why our present knowledge and understanding of quantum mechanics can be regarded as incomplete. (1) The principle of linear superposition has not been experimentally tested for position eigenstates of objects having more than about a thousand atoms. (2) There is no universally agreed upon explanation for the process of quantum measurement. (3) There is no universally agreed upon explanation for the observed fact that macroscopic objects are not found in superposition of position eigenstates. (4) Most importantly, the concept of time is classical and hence external to quantum mechanics: there should exist an equivalent reformulation of the theory which does not refer to an external classical time. In this paper we argue that such a reformulation is the limiting case of a nonlinear quantum theory, with the nonlinearity becoming important at the Planck mass scale. Such a nonlinearity can provide insights into the aforesaid problems. We use a physically motivated model for a nonlinear Schrödinger equation to show that nonlinearity can help in understanding quantum measurement. We also show that while the principle of linear superposition holds to a very high accuracy for atomic systems, the lifetime of a quantum superposition becomes progressively smaller, as one goes from microscopic to macroscopic objects. This can explain the observed absence of position superpositions in macroscopic objects (lifetime is too small). It also suggests that ongoing laboratory experiments may be able to detect the finite superposition lifetime for mesoscopic objects in the near future.

  8. Refractive Errors

    Science.gov (United States)

    ... does the eye focus light? In order to see clearly, light rays from an object must focus onto the ... The refractive errors are: myopia, hyperopia and astigmatism [See figures 2 and 3]. What is hyperopia (farsightedness)? Hyperopia occurs when light rays focus behind the retina (because the eye ...

  9. Medication Errors

    Science.gov (United States)


  10. Encoding/decoding using superpositions of spatial modes for image transfer in km-scale few-mode fiber.

    Science.gov (United States)

    Zhu, Long; Liu, Jun; Mo, Qi; Du, Cheng; Wang, Jian

    2016-07-25

    The space domain is regarded as the last known physical dimension of the lightwave left to exploit in optical communications. Recently, considerable research effort has been devoted to using the spatial modes of fibers to increase data transmission capacity in optical fiber communications. In this paper, we propose and demonstrate a different approach to exploiting the space dimension, i.e. transferring images by space-dimension encoding/decoding using superpositions of spatial modes in km-scale few-mode fiber. Three grayscale images are successfully transmitted through a 1.1-km few-mode fiber by employing either 4 modes, i.e. three linearly polarized (LP) modes of LP01, LP11a, LP11b and one orbital angular momentum (OAM) mode of OAM-1, or 2 modes (OAM+1, OAM-1). The bit-error rate is evaluated and zero error among all received data is achieved, showing favorable fiber link communication performance using the spatial modes of fiber for encoding/decoding. Moreover, we also demonstrate the 4-mode (LP01, LP11a, LP11b and OAM-1) encoding/decoding for image transfer in a 10-km few-mode fiber in the experiment.

  11. Attosecond probing of state-resolved ionization and superpositions of atoms and molecules

    Science.gov (United States)

    Leone, Stephen

    2016-05-01

    Isolated attosecond pulses in the extreme ultraviolet are used to probe strong field ionization and to initiate electronic and vibrational superpositions in atoms and small molecules. Few-cycle 800 nm pulses produce strong-field ionization of Xe atoms, and the attosecond probe is used to measure the risetimes of the two spin orbit states of the ion on the 4d inner shell transitions to the 5p vacancies in the valence shell. Step-like features in the risetimes due to the subcycles of the 800 nm pulse are observed and compared with theory to elucidate the instantaneous and effective hole dynamics. Isolated attosecond pulses create massive superpositions of electronic states in Ar and nitrogen as well as vibrational superpositions among electronic states in nitrogen. An 800 nm pulse manipulates the superpositions, and specific subcycle interferences, level shifting, and quantum beats are imprinted onto the attosecond pulse as a function of time delay. Detailed outcomes are compared to theory for measurements of time-dynamic superpositions by attosecond transient absorption. Supported by DOE, NSF, ARO, AFOSR, and DARPA.

  12. Selective preparation of the maximum coherent superposition state in four-level atoms

    Institute of Scientific and Technical Information of China (English)

    Li Deng; Yueping Niu; Shangqing Gong

    2011-01-01

    We demonstrate that the maximum coherent superposition state can be selectively prepared using a sequence of pulse pairs in lambda-type atomic systems with a doublet final level. In each pair, the Stokes pulse precedes the pump pulse, with their trailing edges overlapping. Numerical results indicate that, by tuning the interval between adjacent pulse pairs, selective preparation of the maximum coherent superposition state between the initial level and one of the final levels can be achieved. The phenomenon is caused by the accumulative property of the pulse sequence. The coherent superposition state in atoms or molecules plays a crucial role in quantum physics. It has applications in many areas such as electromagnetically induced transparency [1-5], quantum information [6-8] and control of chemical reactions [9-11]. Many schemes can prepare a coherent superposition state: for instance, fractional stimulated Raman adiabatic passage (F-STIRAP) [12] and coherent population trapping [13] can produce the maximum coherent superposition state of the two lower levels in lambda-type atoms. Our group has also proposed several schemes to achieve this goal, such as methods based on STIRAP [14,15] and the pulse train method [16].

  13. An Approximate Analytical (Structural Superposition in Terms of Two, or More, α- Circuits of the Same Topology: Pt.1 – Description of the Superposition

    Directory of Open Access Journals (Sweden)

    E. Gluskin

    2013-09-01

    Full Text Available One-ports named “f-circuits”, composed of similar conductors described by a monotonic polynomial, or quasi-polynomial (i.e. with positive but not necessarily integer powers), characteristic i = f(v), are studied, focusing on the algebraic map f → F. Here F(·) is the input conductivity characteristic; i.e., i_in = F(v_in) is the input current. The “power-law” “a-circuit” introduced in [1], for which f(v) ~ v^a, is an important particular case. By means of a generalization of a parallel connection, the f-circuits are constructed from the a-circuits of the same topology, with different a, so that the given topology is kept, and ‘f’ is an additive function of the connection. We observe and consider an associated, generally approximate, but, in all of the cases studied, always high-precision, specific superposition. This superposition is in terms of f → F, and it means that F(·) of the connection is close to the sum of the input currents of the independent a-circuits, all connected in parallel to the same source. In other words, F(·) is well approximated by a linear combination of the same degrees of the independent variable as in f(·), i.e. the map of the characteristics f → F is close to a linear one. This unexpected result is useful for understanding nonlinear algebraic circuits, and is missed in the classical theory. The cases of f(v) = D1v + D2v^2 and f(v) = D1v + D3v^3 are analyzed in examples. Special topologies for which the superposition must be ideal are also considered. In the second part [2] of the work, the “circuit mechanism” that is responsible for the high precision of the superposition in the most general case is explained.

  14. [Superposition of the motor commands during creation of static efforts by human hand muscles].

    Science.gov (United States)

    Vereshchaka, I V; Horkovenko, A V

    2012-01-01

    The features of the superposition of central motor commands (CMCs) were studied during the generation of "two-joint" isometric efforts by the hand. Electromyogram (EMG) amplitudes recorded from the shoulder-girdle and shoulder muscles were used to estimate the CMC intensity. The forces were generated in the horizontal plane of the workspace; the arm position was fixed. Two vectors of equal amplitude and similar direction and their geometric sum were compared, and the hypothesis of CMC superposition in the force-vector summation task was examined. The directions of the constituent and resultant forces for which the superposition of the CMCs was satisfactory were identified. Differences in the co-activation patterns of the flexor and extensor muscles of both joints were shown: a high level of flexor muscle activity was observed during extension efforts, while the flexion directions showed much weaker activation of the extensor muscles.

  15. Oblique superposition of two elliptically polarized lightwaves using geometric algebra: is energy-momentum conserved?

    Science.gov (United States)

    Sze, Michelle Wynne C; Sugon, Quirino M; McNamara, Daniel J

    2010-11-01

    In this paper, we use Clifford (geometric) algebra Cl(3,0) to verify if electromagnetic energy-momentum density is still conserved for oblique superposition of two elliptically polarized plane waves with the same frequency. We show that energy-momentum conservation is valid at any time only for the superposition of two counter-propagating elliptically polarized plane waves. We show that the time-average energy-momentum of the superposition of two circularly polarized waves with opposite handedness is conserved regardless of the propagation directions of the waves. And, we show that the resulting momentum density of the superposed waves generally has a vector component perpendicular to the momentum densities of the individual waves.

  16. Quantum tic-tac-toe: A teaching metaphor for superposition in quantum mechanics

    Science.gov (United States)

    Goff, Allan

    2006-11-01

    Quantum tic-tac-toe was developed as a metaphor for the counterintuitive nature of superposition exhibited by quantum systems. It offers a way of introducing quantum physics without advanced mathematics, provides a conceptual foundation for understanding the meaning of quantum mechanics, and is fun to play. A single superposition rule is added to the child's game of classical tic-tac-toe. Each move consists of a pair of marks subscripted by the number of the move ("spooky" marks) that must be placed in different squares. When a measurement occurs, one spooky mark becomes real and the other disappears. Quantum tic-tac-toe illustrates a number of quantum principles including states, superposition, collapse, nonlocality, entanglement, the correspondence principle, interference, and decoherence. The game can be played on paper or on a white board. A Web-based version provides a refereed playing board to facilitate the mechanics of play, making it ideal for classrooms with a computer projector.

  17. Reconstruction and prediction of coherent acoustic field with the combined wave superposition approach

    Institute of Scientific and Technical Information of China (English)

    LI Weibing; CHEN Jian; YU Fei; CHEN Xinzhao

    2006-01-01

    The routine wave superposition approach cannot be used in reconstruction and prediction of a coherent acoustic field, because it is impossible to separate the pressures generated by the individual sources. According to the superposition theory of the coherent acoustic field, a novel method based on the combined wave superposition approach is developed to reconstruct and predict the coherent acoustic field by building the combined pressure matching matrices between the hologram surfaces and the sources. The method can reconstruct the acoustic information on the surfaces of the individual sources, and it is possible to predict the acoustic field radiated from every source; the total coherent acoustic field can also be calculated simultaneously. The experimental and numerical simulation results show that this method can effectively solve the holographic reconstruction and prediction of the coherent acoustic field, and it can also be used as a coherent acoustic field separation technique. The study of this novel method extends the application scope of the acoustic holography technique.

  18. Quantum Superpositions and the Representation of Physical Reality Beyond Measurement Outcomes and Mathematical Structures

    CERN Document Server

    de Ronde, Christian

    2016-01-01

    In this paper we intend to discuss the importance of providing a physical representation of quantum superpositions which goes beyond the mere reference to mathematical structures and measurement outcomes. This proposal goes in the opposite direction of the orthodox project which attempts to "bridge the gap" between the quantum formalism and common sense "classical reality" --precluding, right from the start, the possibility of interpreting quantum superpositions through non-classical notions. We will argue that in order to restate the problem of interpretation of quantum mechanics in truly ontological terms we require a radical revision of the problems and definitions addressed within the orthodox literature. On the one hand, we will discuss the need of providing a formal redefinition of superpositions which captures their contextual character. On the other hand, we attempt to replace the focus on the measurement problem, which concentrates on the justification of measurement outcomes from "weird" superposed ...

  19. Quantum superposition, entanglement, and state teleportation of a microorganism on an electromechanical oscillator

    CERN Document Server

    Li, Tongcang

    2016-01-01

    Schr\\"odinger's thought experiment to prepare a cat in a superposition of both alive and dead states reveals profound consequences of quantum mechanics and has attracted enormous interests. Here we propose a straightforward method to create quantum superposition states of a living microorganism by putting a small bacterium on top of an electromechanical oscillator. Our proposal is based on recent developments that the center-of-mass oscillation of a 15-$\\mu$m-diameter aluminium membrane has been cooled to its quantum ground state [Nature 475, 359 (2011)], and entangled with a microwave field [Science, 342, 710 (2013)]. A microorganism with a mass much smaller than the mass of the electromechanical membrane will not significantly affect the quality factor of the membrane and can be cooled to the quantum ground state together with the membrane. Quantum superposition and teleportation of its center-of-mass motion state can be realized with the help of superconducting microwave circuits. More importantly, the int...

  20. Macroscopic quantum superposition of spin ensembles with ultra-long coherence times via superradiant masing

    CERN Document Server

    Jin, Liang; Wrachtrup, Jörg; Liu, Ren-Bao

    2014-01-01

    Macroscopic quantum phenomena such as lasers, Bose-Einstein condensates, superfluids, and superconductors are of great importance in foundations and applications of quantum mechanics. In particular, quantum superposition of a large number of spins in solids is highly desirable for both quantum information processing and ultrasensitive magnetometry. Spin ensembles in solids, however, have rather short collective coherence time (typically less than microseconds). Here we demonstrate that under realistic conditions it is possible to maintain macroscopic quantum superposition of a large spin ensemble (such as about ~10^{14} nitrogen-vacancy center electron spins in diamond) with an extremely long coherence time ~10^8 sec under readily accessible conditions. The scheme, following the mechanism of superradiant lasers, is based on superradiant masing due to coherent coupling between collective spin excitations (magnons) and microwave cavity photons. The coherence time of the macroscopic quantum superposition is the ...

  1. Dense coding scheme using superpositions of Bell-states and its NMR implementation

    Institute of Scientific and Technical Information of China (English)

    ZHANG; Jingfu; XIE; Jingyi; DENG; Zhiwei; LU; Zhiheng

    2005-01-01

    Dense coding using superpositions of Bell-states is proposed. The generalized Grover's algorithm is used to prepare the initial entangled states, and the reverse process of the quantum algorithm is used to determine the entangled state in the decoding measurement. Compared with the previous schemes, the superpositions of two Bell-states are exploited. Our scheme is demonstrated using a nuclear magnetic resonance (NMR) quantum computer. The corresponding manipulations are obtained. Experimental results show a good agreement between theory and experiment. We also generalize the scheme to transmit eight messages by introducing an additional two-state system.

  2. Quantum decoherence time scales for ionic superposition states in ion channels

    Science.gov (United States)

    Salari, V.; Moradi, N.; Sajadi, M.; Fazileh, F.; Shahbazi, F.

    2015-03-01

    There are many controversial and challenging discussions about quantum effects in microscopic structures in neurons of the brain and their role in cognitive processing. In this paper, we focus on a small, nanoscale part of ion channels which is called the "selectivity filter" and plays a key role in the operation of an ion channel. Our results for superposition states of potassium ions indicate that decoherence times are of the order of picoseconds. This decoherence time is not long enough for cognitive processing in the brain, however, it may be adequate for quantum superposition states of ions in the filter to leave their quantum traces on the selectivity filter and action potentials.

  3. Generation of squeezed-state superpositions via time-dependent Kerr nonlinearities

    CERN Document Server

    León-Montiel, R de J

    2015-01-01

    We put forward an experimental scheme for direct generation of optical squeezed coherent-state superpositions. The proposed setup makes use of an optical cavity, filled with a nonlinear Kerr medium, whose frequency is allowed to change during time evolution. By exactly solving the corresponding time-dependent anharmonic-oscillator Hamiltonian, we demonstrate that squeezed-state superpositions can be generated in an optical cavity. Furthermore, we show that the squeezing degree of the produced states can be tuned by properly controlling the frequency shift of the cavity, a feature that could be useful in many quantum information protocols, such as quantum teleportation and quantum computing.

  4. The general use of the time-temperature-pressure superposition principle

    DEFF Research Database (Denmark)

    Rasmussen, Henrik Koblitz

    This note is a supplement to Dynamics of Polymeric Liquids (DPL), section 3.6(a). DPL concerns only material functions and only the effect of temperature on them. This note is a short introduction to the general use of the time-temperature-pressure superposition principle.

  5. Medication Errors - A Review

    OpenAIRE

    Vinay BC; Nikhitha MK; Patel Sunil B

    2015-01-01

    This review article explains the definition of medication errors, the scope of the medication error problem, the types of medication errors, their common causes, the monitoring of medication errors, their consequences, and the prevention and management of medication errors, presented clearly with tables that are easy to understand.

  7. Adiabatic generation of arbitrary coherent superpositions of two quantum states: Exact and approximate solutions

    Science.gov (United States)

    Zlatanov, Kaloyan N.; Vitanov, Nikolay V.

    2017-07-01

    The common objective of the application of adiabatic techniques in the field of quantum control is to transfer a quantum system from one discrete energy state to another. These techniques feature both high efficiency and insensitivity to variations in the experimental parameters, e.g., variations in the driving field amplitude, duration, frequency, and shape, as well as fluctuations in the environment. Here we explore the potential of adiabatic techniques for creating arbitrary predefined coherent superpositions of two quantum states. We show that an equally weighted coherent superposition can be created by temporal variation of the ratio between the Rabi frequency Ω (t ) and the detuning Δ (t ) from 0 to ∞ (case 1) or vice versa (case 2), as it is readily deduced from the explicit adiabatic solution for the Bloch vector. We infer important differences between cases 1 and 2 in the composition of the created coherent superposition: The latter depends on the dynamical phase of the process in case 2, while it does not depend on this phase in case 1. Furthermore, an arbitrary coherent superposition of unequal weights can be created by using asymptotic ratios of Ω (t )/Δ (t ) different from 0 and ∞ . We supplement the general adiabatic solution with analytic solutions for three exactly soluble models: two trigonometric models and the hyperbolic Demkov-Kunike model. They allow us not only to demonstrate the general predictions in specific cases but also to derive the nonadiabatic corrections to the adiabatic solutions.
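
    The mechanism can be summarized with the standard two-state adiabatic-following picture (textbook material consistent with, but not copied from, the paper). With the rotating-wave Hamiltonian

    $$ H(t) = \frac{\hbar}{2}\begin{pmatrix} -\Delta(t) & \Omega(t) \\ \Omega(t) & \Delta(t) \end{pmatrix}, \qquad \tan 2\theta(t) = \frac{\Omega(t)}{\Delta(t)}, $$

    a system that starts in state |1⟩ and follows a single adiabatic eigenstate ends up with populations cos²θ_f and sin²θ_f in the two bare states (up to the dynamical and geometric phases discussed above). Sweeping Ω/Δ from 0 to ∞ drives θ from 0 to π/4 and leaves an equally weighted superposition, while a finite asymptotic ratio gives the unequal weights mentioned in the abstract.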

  8. Chaos and Complexities Theories. Superposition and Standardized Testing: Are We Coming or Going?

    Science.gov (United States)

    Erwin, Susan

    2005-01-01

    The purpose of this paper is to explore the possibility of using the principle of "superposition of states" (commonly illustrated by Schrodinger's Cat experiment) to understand the process of using standardized testing to measure a student's learning. Comparisons from literature, neuroscience, and Schema Theory will be used to expound upon the…

  9. Invertebrate superposition eyes-structures that behave like metamaterial with negative refractive index

    NARCIS (Netherlands)

    Stavenga, D. G.

    2006-01-01

    The superposition eyes of moths and lobsters are described with the geometrical optics for a refractive surface between two media, where the refractive index of the image space is negative. Consequently, the eye power and the object focal length are negative, whereas the image focal length is positi

  11. Transforming squeezed light into large-amplitude coherent-state superposition

    DEFF Research Database (Denmark)

    Nielsen, Anne Ersbak Bang; Mølmer, Klaus

    2007-01-01

    A quantum superposition of two coherent states of light with small amplitude can be obtained by subtracting a photon from a squeezed vacuum state. In experiments this preparation can be made conditioned on the detection of a photon in the field from a squeezed light source. We propose and analyze...

  12. A note on “Generalized superposition of two squeezed states: generation and statistical properties”

    OpenAIRE

    Avelar, A. T.; Malbouisson, J.M.C.; Baseia, B.

    2004-01-01

    A previous scheme [Physica A 280 (2003) 346] showed how to create a generalized superposition of two squeezed states for stationary fields and studied its statistical properties. Here we show how to extend this result to travelling fields.

  13. Measuring the orbital angular momentum density for a superposition of Bessel beams

    CSIR Research Space (South Africa)

    Dudley, Angela L

    2012-01-01

    Full Text Available To measure the Orbital Angular Momentum (OAM) density of superposition fields two steps are needed: generation and measurement. An azimuthally-varying phase (bounded by a ring-slit) placed in the spatial frequency domain produces a higher...

  14. Reservoir engineering of a mechanical resonator: generating a macroscopic superposition state and monitoring its decoherence

    Science.gov (United States)

    Asjad, Muhammad; Vitali, David

    2014-02-01

    A deterministic scheme for generating a macroscopic superposition state of a nanomechanical resonator is proposed. The nonclassical state is generated through a suitably engineered dissipative dynamics exploiting the optomechanical quadratic interaction with a bichromatically driven optical cavity mode. The resulting driven dissipative dynamics can be employed for monitoring and testing the decoherence processes affecting the nanomechanical resonator under controlled conditions.

  15. Proportional fair scheduling with superposition coding in a cellular cooperative relay system

    DEFF Research Database (Denmark)

    Kaneko, Megumi; Hayashi, Kazunori; Popovski, Petar

    2013-01-01

    Many works have tackled on the problem of throughput and fairness optimization in cellular cooperative relaying systems. Considering firstly a two-user relay broadcast channel, we design a scheme based on superposition coding (SC) which maximizes the achievable sum-rate under a proportional fairn...

  16. Generation of Superpositions of Two Bloch States in an Ion Trap

    Institute of Scientific and Technical Information of China (English)

    ZHENG Shi-Biao

    2003-01-01

    We propose a scheme for the generation of superpositions of two Bloch states for a collection of ions. In the scheme the ions are trapped in a linear potential and interact with laser beams. Our scheme does not put any requirement on the Lamb-Dicke parameters.

  17. Using Musical Intervals to Demonstrate Superposition of Waves and Fourier Analysis

    Science.gov (United States)

    LoPresto, Michael C.

    2013-01-01

    What follows is a description of a demonstration of superposition of waves and Fourier analysis using a set of four tuning forks mounted on resonance boxes and oscilloscope software to create, capture and analyze the waveforms and Fourier spectra of musical intervals.
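
    The demonstration can be mirrored numerically. In the sketch below the two frequencies, 440 Hz and 660 Hz, are a hypothetical choice forming a perfect fifth (3:2 ratio); the superposed waveform is built by simple addition and its Fourier spectrum recovers the component tones.

```python
import numpy as np

fs, duration = 44100, 1.0                      # sample rate (Hz), length (s)
t = np.arange(int(fs * duration)) / fs

f1, f2 = 440.0, 660.0                          # perfect fifth (3:2)
wave = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)  # superposition

spectrum = np.abs(np.fft.rfft(wave))
freqs = np.fft.rfftfreq(wave.size, d=1.0 / fs)

# The two strongest spectral lines recover the component frequencies.
peaks = np.sort(freqs[np.argsort(spectrum)[-2:]])
print("peak frequencies (Hz):", peaks)         # approximately [440, 660]
```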

  18. Application of time-temperature-stress superposition on creep of wood-plastic composites

    Science.gov (United States)

    Chang, Feng-Cheng; Lam, Frank; Kadla, John F.

    2013-08-01

    The time-temperature-stress superposition principle (TTSSP) has been widely applied in studies of the viscoelastic properties of materials. It involves shifting curves measured at various conditions to construct master curves. To extend the application of this principle, a temperature-stress hybrid shift factor and a modified Williams-Landel-Ferry (WLF) equation that incorporates both stress and temperature in the shift-factor fitting were studied. A wood-plastic composite (WPC) was selected as the test subject for a series of short-term creep tests. The results indicate that the WPC behaved as a rheologically simple material: only a horizontal shift was needed for time-temperature superposition, whereas vertical shifting would be needed for time-stress superposition. The shift factor was independent of the stress for horizontal shifts in time-temperature superposition. In addition, the temperature- and stress-shift factors used to construct master curves were well fitted by the WLF equation, and the parameters of the modified WLF equation were also successfully calibrated. The application of this method and equation can be extended to curve shifting that involves the effects of temperature and stress simultaneously.
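
    As a purely illustrative sketch of the kind of shifting the WLF equation performs, the snippet below applies a temperature shift factor to move a short-term creep curve onto a reference-temperature master curve; the constants C1 and C2, the reference temperature, and the synthetic compliance data are placeholders, not values fitted in the study.

```python
import numpy as np

def wlf_shift_factor(T, T_ref, C1=17.44, C2=51.6):
    """Horizontal (log-time) shift factor from the WLF equation:
    log10(a_T) = -C1 * (T - T_ref) / (C2 + (T - T_ref)).
    The default C1, C2 are the 'universal' constants, used here only as
    placeholders; a real analysis would calibrate them to the data."""
    dT = T - T_ref
    return 10.0 ** (-C1 * dT / (C2 + dT))

def shift_to_master_curve(time, compliance, T, T_ref):
    """Shift a short-term creep curve measured at temperature T onto the
    reference-temperature time axis (horizontal shift only, as for a
    rheologically simple material)."""
    a_T = wlf_shift_factor(T, T_ref)
    return time / a_T, compliance

# Example: shift a curve measured at 40 C onto a 20 C reference axis.
t = np.logspace(0, 3, 50)             # seconds
J = 1e-9 * (1 + 0.1 * np.log10(t))    # synthetic creep compliance
t_shifted, J_shifted = shift_to_master_curve(t, J, T=40.0, T_ref=20.0)
```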

  19. Macroscopic Quantum Superposition States in a Model of Photon-Supersonic Phonon Interaction

    Institute of Scientific and Technical Information of China (English)

    CHAI Jin-Hua; WANG Yan-Bang; LU Yi-Qun

    2000-01-01

    A model of photon-hypersonic phonon interaction is proposed. The evolution of macroscopic quantum superposition states is analyzed, including the wave function and number distribution. It is shown that a superposition state of hypersonic phonon modes can be generated in the case of no detuning and no losses.

  20. Teleportation of a Superposition of Three Orthogonal States of an Atom via Photon Interference

    Institute of Scientific and Technical Information of China (English)

    ZHENG Shi-Biao

    2006-01-01

    We propose a scheme to teleport a superposition of three states of an atom trapped in a cavity to a second atom trapped in a remote cavity. The scheme is based on the detection of photons leaking from the cavities after the atom-cavity interaction.

  1. WAVE SUPERPOSITION METHOD BASED ON VIRTUAL SOURCE BOUNDARY WITH COMPLEX RADIUS VECTOR FOR SOLVING ACOUSTIC RADIATION PROBLEMS

    Institute of Scientific and Technical Information of China (English)

    XiangYu; HuangYuying; MaXiaoqiang

    2004-01-01

    By virtue of the comparability between the wave superposition method and the dynamic analysis of structures, a general formulation is proposed for overcoming the non-uniqueness of solution induced by the wave superposition method at the eigenfrequencies of the corresponding interior problems. By adding appropriate damping to the virtual source system of the wave superposition method, unique solutions for all wave numbers can be ensured. Based on this idea, a novel method, the wave superposition method with complex radius vector, is constructed. Not only is the computational time of this method approximately equal to that of the standard wave superposition method, but its accuracy is also much higher compared with other related methods. Finally, taking the pulsating sphere and the oscillating sphere as examples, the results of calculation show that the present method can effectively overcome the non-uniqueness problem.
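
    For orientation, the standard wave superposition (equivalent-source) idea that this record modifies can be sketched as follows: virtual point sources retracted inside the radiator are superposed, and their strengths are fitted to the surface data. The geometry, frequency, and surface pressures below are invented, and the complex-radius-vector damping proposed in the record is not included.

```python
import numpy as np

k = 2.0 * np.pi * 500.0 / 343.0          # wavenumber at 500 Hz in air

def greens(r_field, r_source):
    """Free-space Green's function exp(ikR) / (4 pi R) between point sets."""
    R = np.linalg.norm(r_field[:, None, :] - r_source[None, :, :], axis=-1)
    return np.exp(1j * k * R) / (4.0 * np.pi * R)

# Invented geometry: virtual sources retracted inside a sphere of radius
# 0.10 m, field points on a measurement sphere of radius 0.12 m.
rng = np.random.default_rng(0)
dirs = rng.normal(size=(40, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
sources = 0.10 * dirs
surface = 0.12 * dirs

# Given measured (here: synthetic placeholder) surface pressures, solve for
# the virtual source strengths in the least-squares sense ...
p_surface = np.ones(40, dtype=complex)
q, *_ = np.linalg.lstsq(greens(surface, sources), p_surface, rcond=None)

# ... then superpose the sources to predict the pressure anywhere outside.
field_point = np.array([[0.0, 0.0, 0.5]])
p_field = greens(field_point, sources) @ q
```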

  2. Evaluation of collapsed cone convolution superposition (CCCS) algorithms in Prowess treatment planning system for calculating symmetric and asymmetric field sizes

    Directory of Open Access Journals (Sweden)

    Tamer Dawod

    2015-01-01

    Full Text Available Purpose: This work investigated the accuracy of the Prowess treatment planning system (TPS) in dose calculation in a homogeneous phantom for symmetric and asymmetric field sizes using the collapsed cone convolution/superposition (CCCS) algorithm. Methods: The measurements were carried out at a source-to-surface distance (SSD) of 100 cm for 6 and 10 MV photon beams. A full set of measurements for symmetric and asymmetric fields, including in-plane and cross-plane profiles at various depths and percentage depth doses (PDDs), was obtained on the linear accelerator. Results: The results showed that asymmetric collimation leads to significant dose-calculation errors (up to approximately 7%) if changes in primary beam intensity and beam quality are not accounted for. The largest differences in the isodose curves were found in the build-up and penumbra regions. Conclusion: The results showed that dose calculation using the Prowess TPS based on the CCCS algorithm is generally in excellent agreement with measurements.

  3. From quantum feedback to probabilistic error correction: manipulation of quantum beats in cavity QED

    Energy Technology Data Exchange (ETDEWEB)

    Barberis-Blostein, P [Instituto de Investigaciones en Matematicas Aplicadas y en Sistemas, Universidad Nacional Autonoma de Mexico, Ciudad Universitaria, 04510, Mexico, DF (Mexico); Norris, D G; Orozco, L A; Carmichael, H J [Joint Quantum Institute, Department of Physics, University of Maryland and National Institute of Standards and Technology, College Park, MD 20742 (United States)], E-mail: lorozco@umd.edu

    2010-02-15

    It is shown how one can implement quantum feedback and probabilistic error correction in an open quantum system consisting of a single atom, with ground- and excited-state Zeeman structure, in a driven two-mode optical cavity. The ground-state superposition is manipulated and controlled through conditional measurements and external fields, which shield the coherence and correct quantum errors. Modeling an experimentally realistic situation demonstrates the robustness of the proposal for realization in the laboratory.

  4. Superposition-model analysis of rare-earth doped BaY2F8

    Science.gov (United States)

    Magnani, N.; Amoretti, G.; Baraldi, A.; Capelletti, R.

    The energy level schemes of four rare-earth dopants (Ce3+, Nd3+, Dy3+, and Er3+) in BaY2F8, as determined by optical absorption spectra, were fitted with a single-ion Hamiltonian and analysed within Newman's Superposition Model for the crystal field. A unified picture for the four dopants was obtained by assuming a distortion of the F- ligand cage around the RE site; within the framework of the Superposition Model, this distortion is found to have a markedly anisotropic behaviour for heavy rare earths, while it turns into an isotropic expansion of the nearest-neighbours polyhedron for light rare earths. It is also inferred that the substituting ion may occupy an off-center position with respect to the original Y3+ site in the crystal.

  5. On Multiple Users Scheduling Using Superposition Coding over Rayleigh Fading Channels

    KAUST Repository

    Zafar, Ammar

    2013-02-20

    In this letter, numerical results are provided to analyze the gains of multiple-user scheduling via superposition coding with successive interference cancellation, in comparison with conventional single-user scheduling, in Rayleigh block-fading broadcast channels. The information-theoretic optimal power, rate and decoding-order allocation for the superposition coding scheme are considered, and the corresponding histogram of the optimal number of scheduled users is evaluated. Results show that at optimality there is a high probability that only two or three users are scheduled per channel transmission block. Numerical results for the gains of multiple-user scheduling in terms of the long-term throughput under hard and proportional fairness, as well as for fixed merit weights for the users, are also provided. These results show that the performance gain of multiple-user scheduling over single-user scheduling increases with the total number of users in the network, and it can exceed 10% for a high number of users.
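
    As a rough illustration of the quantities being optimized, the sketch below evaluates the achievable rates of two users under superposition coding with successive interference cancellation on a single Rayleigh block-fading realization; the power split, SNR, and channel draws are arbitrary, and the letter's actual optimal power/rate/decoding-order allocation is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

def sc_rates(g_strong, g_weak, power_split, snr_total):
    """Achievable rates (bits/s/Hz) of two users under superposition coding.
    The weak user decodes its message treating the strong user's signal as
    noise; the strong user first cancels the weak user's signal (SIC) and
    then decodes interference-free."""
    p_strong = power_split * snr_total
    p_weak = (1.0 - power_split) * snr_total
    r_weak = np.log2(1.0 + g_weak * p_weak / (1.0 + g_weak * p_strong))
    r_strong = np.log2(1.0 + g_strong * p_strong)
    return r_strong, r_weak

# Rayleigh block fading: exponentially distributed channel power gains.
g1, g2 = sorted(rng.exponential(size=2), reverse=True)
r1, r2 = sc_rates(g_strong=g1, g_weak=g2, power_split=0.2, snr_total=10.0)
print(f"strong user: {r1:.2f} b/s/Hz, weak user: {r2:.2f} b/s/Hz")
```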

  6. Optical information encryption based on incoherent superposition with the help of the QR code

    Science.gov (United States)

    Qin, Yi; Gong, Qiong

    2014-01-01

    In this paper, a novel optical information encryption approach is proposed with the help of the QR code. The method is based on the concept of incoherent superposition, which we introduce for the first time. The information to be encrypted is first transformed into the corresponding QR code, and thereafter the QR code is encrypted into two phase-only masks analytically by use of the intensity superposition of two diffraction wave fields. The proposed method has several advantages over previous interference-based methods, such as a higher security level, better robustness against noise attack, and a more relaxed working condition. Numerical simulation results and results collected with an actual smartphone are shown to validate our proposal.

  7. Obtaining the Probability Vector Current Density in Canonical Quantum Mechanics by Linear Superposition

    CERN Document Server

    Kauffmann, Steven Kenneth

    2013-01-01

    The quantum mechanics status of the probability vector current density has long seemed to be marginal. On one hand no systematic prescription for its construction is provided, and the special examples of it that are obtained for particular types of Hamiltonian operator could conceivably be attributed to happenstance. On the other hand this concept's key physical interpretation as local average particle flux, which flows from the equation of continuity that it is supposed to satisfy in conjunction with the probability scalar density, has been claimed to breach the uncertainty principle. Given the dispiriting impact of that claim, we straightaway point out that the subtle directional nature of the uncertainty principle makes it consistent with the measurement of local average particle flux. We next focus on the fact that the unique closed-form linear-superposition quantization of any classical Hamiltonian function yields in tandem the corresponding unique linear-superposition closed-form divergence of the proba...

  8. NEAR-FIELD ACOUSTIC HOLOGRAPHY FOR SEMI-FREE ACOUSTIC FIELD BASED ON WAVE SUPERPOSITION APPROACH

    Institute of Scientific and Technical Information of China (English)

    LI Weibing; CHEN Jian; YU Fei; CHEN Xinzhao

    2006-01-01

    In the semi-free acoustic field, the actual acoustic pressure at any point is composed of two parts: the direct acoustic pressure and the reflected acoustic pressure. General acoustic holographic theories and algorithms require that only the direct acoustic pressure be contained in the pressure at any point on the hologram surface; consequently, they cannot be used to reconstruct the acoustic source and predict the acoustic field directly. To take the reflected pressure into consideration, near-field acoustic holography for the semi-free acoustic field based on the wave superposition approach is proposed to realize the holographic reconstruction and prediction of the semi-free acoustic field, with the wave superposition approach adopted as the holographic transform algorithm. The proposed theory and algorithm are implemented and verified with a numerical example, and the drawbacks of the general theories and algorithms in the holographic reconstruction and prediction of the semi-free acoustic field are also demonstrated by this example.

  9. Quantum test of the equivalence principle for atoms in coherent superposition of internal energy states

    Science.gov (United States)

    Rosi, G.; D'Amico, G.; Cacciapuoti, L.; Sorrentino, F.; Prevedelli, M.; Zych, M.; Brukner, Č.; Tino, G. M.

    2017-06-01

    The Einstein equivalence principle (EEP) has a central role in the understanding of gravity and space-time. In its weak form, or weak equivalence principle (WEP), it directly implies equivalence between inertial and gravitational mass. Verifying this principle in a regime where the relevant properties of the test body must be described by quantum theory has profound implications. Here we report on a novel WEP test for atoms: a Bragg atom interferometer in a gravity gradiometer configuration compares the free fall of rubidium atoms prepared in two hyperfine states and in their coherent superposition. The use of the superposition state allows testing genuine quantum aspects of EEP with no classical analogue, which have remained completely unexplored so far. In addition, we measure the Eötvös ratio of atoms in two hyperfine levels with a relative uncertainty in the low 10^-9 range, improving previous results by almost two orders of magnitude.

  10. A numerical dressing method for the nonlinear superposition of solutions of the KdV equation

    Science.gov (United States)

    Trogdon, Thomas; Deconinck, Bernard

    2014-01-01

    In this paper we present the unification of two existing numerical methods for the construction of solutions of the Korteweg-de Vries (KdV) equation. The first method is used to solve the Cauchy initial-value problem on the line for rapidly decaying initial data. The second method is used to compute finite-genus solutions of the KdV equation. The combination of these numerical methods allows for the computation of exact solutions that are asymptotically (quasi-)periodic finite-gap solutions and are a nonlinear superposition of dispersive, soliton and (quasi-)periodic solutions in the finite (x, t)-plane. Such solutions are referred to as superposition solutions. We compute these solutions accurately for all values of x and t.

  11. Investigating macroscopic quantum superpositions and the quantum-to-classical transition by optical parametric amplification

    CERN Document Server

    De Martini, Francesco

    2012-01-01

    The present work reports on an extended research endeavor focused on the theoretical and experimental realization of a macroscopic quantum superposition (MQS) made up with photons. As is well known, this intriguing, fundamental quantum condition is at the core of a famous argument conceived by Erwin Schroedinger back in 1935. The main experimental challenge to the actual realization of this object generally resides in the unavoidable and uncontrolled interactions with the environment, i.e. the decoherence leading to the cancellation of any evidence of the quantum features associated with the macroscopic system. The present scheme is based on a nonlinear process, the "quantum injected optical parametric amplification", that maps by a linearized cloning process the quantum coherence of a single-particle state, i.e. a Micro-qubit, into a Macro-qubit, consisting in a large number M of photons in quantum superposition. Since the adopted scheme was found resilient to decoherence, the MQS demonstration wa...

  12. A Defense of the Paraconsistent Approach to Quantum Superpositions (Answer to Arenhart and Krause)

    CERN Document Server

    de Ronde, Christian

    2014-01-01

    In (da Costa and de Ronde, 2014), Newton da Costa together with the author of this paper argued in favor of the possibility of considering quantum superpositions in terms of a paraconsistent approach. We claimed that, even though most interpretations of quantum mechanics attempt to escape contradictions, there are many hints that indicate it could be worthwhile to engage in research of this kind. Recently, Arenhart and Krause (2014) have raised several arguments against this approach. In the present paper we attempt to answer the main questions presented by Arenhart and Krause. We will argue, firstly, that the obstacles presented by them are based on a specific metaphysical stance, which we will characterize in terms of what we call the Orthodox Line of Research (OLR). Secondly, that this is not necessarily the only possible line, and that a different one, namely a Constructive Metaphysical Line of Research (CMLR), provides a different perspective in which the Paraconsistent Approach to Quantum Superpositions...

  13. More twists on optical twisters: of helico-conical beams, superpositions and combinations

    DEFF Research Database (Denmark)

    Glückstad, Jesper; Palima, Darwin

    nonseparable helical or azimuthal phase and the conical or radial phase, and that have been shown to self-reconstruct after an obstruction. In this work, we deconstruct the helico-conical beam (HCB) as a coherent superposition of Bessel-like beams, which carry arbitrary topological charge. From this ...-conical beam with selectable number of multiple helices) as well as multihelical beams that emulate the diffraction-free properties of its constituent Bessel-like beams....

  14. Investigation of the Parametric Field from a Focusing Source by Using Superposition of Gaussian Beams

    Institute of Scientific and Technical Information of China (English)

    ZHANG Dong; GONG Xiu-Fen; LU Rong-Rong

    2000-01-01

    The superposition method of Gaussian beams is extended to describe the acoustic parametric field from a focusing source. The axial sound pressure of the difference-frequency wave at 1 MHz, generated by the interaction of two primary waves at 3.5 and 4.5 MHz, is theoretically calculated using a superposition of 10 Gaussian functions. Experimental results coincide well with the calculated results except in the vicinity of the focusing source.

  15. The superposition method in seeking the solitary wave solutions to the KdV-Burgers equation

    Indian Academy of Sciences (India)

    Yuanxi Xie; Jilashi Tang

    2006-03-01

    In this paper, starting from a careful analysis of the characteristics of the Burgers equation, the KdV equation and the KdV-Burgers equation, the superposition method is put forward for constructing the solitary wave solutions of the KdV-Burgers equation from those of the Burgers equation and the KdV equation. The solitary wave solutions of the KdV-Burgers equation are presented successfully by means of this method.

  16. Quantum Teleportation of One-Photon and Two-Photon Superposition States

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    One-photon and two-photon superposition states are fundamental quantum states which have shown interesting features, such as squeezing and anti-bunching. In this paper we discuss the quantum teleportation of such quantum states with continuous-wave EPR states. Fidelity as a function of EPR correlation is obtained. We also compare the results with Fock-state and coherent-state teleportation.

  17. Measurement of the decoherence of a mesoscopic superposition of motional states of a trapped ion

    Institute of Scientific and Technical Information of China (English)

    Zheng Shi-Biao

    2004-01-01

    We propose a scheme to observe the decoherence of a mesoscopic superposition of two coherent states in the motion of a trapped ion. In the scheme the ion is excited by two perpendicular lasers tuned to the ion transition. The decoherence is revealed by the decrease of the correlation between two successive measurements of the internal state of the ion after relevant laser-ion interaction.

  18. Sagnac interferometry with coherent vortex superposition states in exciton-polariton condensates

    Science.gov (United States)

    Moxley, Frederick Ira; Dowling, Jonathan P.; Dai, Weizhong; Byrnes, Tim

    2016-05-01

    We investigate prospects of using counter-rotating vortex superposition states in nonequilibrium exciton-polariton Bose-Einstein condensates for the purposes of Sagnac interferometry. We first investigate the stability of vortex-antivortex superposition states, and show that they survive at steady state in a variety of configurations. Counter-rotating vortex superpositions are of potential interest to gyroscope and seismometer applications for detecting rotations. Methods of improving the sensitivity are investigated by targeting high momentum states via metastable condensation, and the application of periodic lattices. The sensitivity of the polariton gyroscope is compared to its optical and atomic counterparts. Due to the large interferometer areas in optical systems and small de Broglie wavelengths for atomic BECs, the sensitivity per detected photon is found to be considerably less for the polariton gyroscope than with competing methods. However, polariton gyroscopes have an advantage over atomic BECs in a high signal-to-noise ratio, and have other practical advantages such as room-temperature operation, area independence, and robust design. We estimate that the final sensitivities including signal-to-noise aspects are competitive with existing methods.

  19. Quantum State Engineering by Superpositions of Coherent States along a Straight Line in Cavity Quantum Electrodynamics

    Institute of Scientific and Technical Information of China (English)

    郑仕标

    2001-01-01

    A scheme is proposed for generating the superpositions of several coherent states in a cavity field with dispersive cavity quantum electrodynamics (QED). In the scheme, a sequence of atoms interacts dispersively with the cavity field, connected with a microwave source, and is manipulated by classical fields, followed by state-selective measurements. In this way, the cavity field is collapsed onto a superposition of several coherent states along a straight line with controllable coefficients. This scheme provides the possibility for quantum state engineering via coherent-state superpositions along a straight line in cavity QED for the first time.

  20. Repeated quantum error correction on a continuously encoded qubit by real-time feedback.

    Science.gov (United States)

    Cramer, J; Kalb, N; Rol, M A; Hensen, B; Blok, M S; Markham, M; Twitchen, D J; Hanson, R; Taminiau, T H

    2016-05-05

    Reliable quantum information processing in the face of errors is a major fundamental and technological challenge. Quantum error correction protects quantum states by encoding a logical quantum bit (qubit) in multiple physical qubits. To be compatible with universal fault-tolerant computations, it is essential that states remain encoded at all times and that errors are actively corrected. Here we demonstrate such active error correction on a continuously protected logical qubit using a diamond quantum processor. We encode the logical qubit in three long-lived nuclear spins, repeatedly detect phase errors by non-destructive measurements, and apply corrections by real-time feedback. The actively error-corrected qubit is robust against errors and encoded quantum superposition states are preserved beyond the natural dephasing time of the best physical qubit in the encoding. These results establish a powerful platform to investigate error correction under different types of noise and mark an important step towards fault-tolerant quantum information processing.

  1. Repeated quantum error correction on a continuously encoded qubit by real-time feedback

    Science.gov (United States)

    Cramer, J.; Kalb, N.; Rol, M. A.; Hensen, B.; Blok, M. S.; Markham, M.; Twitchen, D. J.; Hanson, R.; Taminiau, T. H.

    2016-05-01

    Reliable quantum information processing in the face of errors is a major fundamental and technological challenge. Quantum error correction protects quantum states by encoding a logical quantum bit (qubit) in multiple physical qubits. To be compatible with universal fault-tolerant computations, it is essential that states remain encoded at all times and that errors are actively corrected. Here we demonstrate such active error correction on a continuously protected logical qubit using a diamond quantum processor. We encode the logical qubit in three long-lived nuclear spins, repeatedly detect phase errors by non-destructive measurements, and apply corrections by real-time feedback. The actively error-corrected qubit is robust against errors and encoded quantum superposition states are preserved beyond the natural dephasing time of the best physical qubit in the encoding. These results establish a powerful platform to investigate error correction under different types of noise and mark an important step towards fault-tolerant quantum information processing.

  2. Error-detection-based quantum fault tolerance against discrete Pauli noise

    CERN Document Server

    Reichardt, B W

    2006-01-01

    A quantum computer -- i.e., a computer capable of manipulating data in quantum superposition -- would find applications including factoring, quantum simulation and tests of basic quantum theory. Since quantum superpositions are fragile, the major hurdle in building such a computer is overcoming noise. Developed over the last couple of years, new schemes for achieving fault tolerance based on error detection, rather than error correction, appear to tolerate as much as 3-6% noise per gate -- an order of magnitude better than previous procedures. But proof techniques could not show that these promising fault-tolerance schemes tolerated any noise at all. With an analysis based on decomposing complicated probability distributions into mixtures of simpler ones, we rigorously prove the existence of constant tolerable noise rates ("noise thresholds") for error-detection-based schemes. Numerical calculations indicate that the actual noise threshold this method yields is lower-bounded by 0.1% noise per gate.

  3. Preparation of arbitrary n-particle d-dimensional superposition states using only single qubit operations and CNOT gates

    Institute of Scientific and Technical Information of China (English)

    Wang Yan-Hui; Fang Mao-Fa

    2004-01-01

    In this article, using only single-qubit operations and CNOT gates, we propose a scheme for creating arbitrary n-particle d-dimensional superposition states, including entangled states, and give the relevant circuits for realizing this scheme.

  4. [Survey in hospitals. Nursing errors, error culture and error management].

    Science.gov (United States)

    Habermann, Monika; Cramer, Henning

    2010-09-01

    Knowledge on errors is important to design safe nursing practice and its framework. This article presents results of a survey on this topic, including data of a representative sample of 724 nurses from 30 German hospitals. Participants predominantly remembered medication errors. Structural and organizational factors were rated as most important causes of errors. Reporting rates were considered low; this was explained by organizational barriers. Nurses in large part expressed having suffered from mental problems after error events. Nurses' perception focussing on medication errors seems to be influenced by current discussions which are mainly medication-related. This priority should be revised. Hospitals' risk management should concentrate on organizational deficits and positive error cultures. Decision makers are requested to tackle structural problems such as staff shortage.

  5. Prediction in cases with superposition of different hydrological phenomena, such as from weather "cold drops

    Science.gov (United States)

    Anton, J. M.; Grau, J. B.; Tarquis, A. M.; Andina, D.; Sanchez, M. E.

    2012-04-01

    The authors have been involved in Model Codes for Construction (predecessors of the Eurocodes, now Euronorms), in a Drainage Instruction for Roads for Spain that adopted a prediction model from the BPR (Bureau of Public Roads) of the USA to account for evident regional differences in the Iberian Peninsula and the Spanish Isles, and in related studies. They used Extreme Value Type I (Gumbel law) models with independent actions in superposition; this law was also adopted by CEDEX to obtain maps of extreme rains. These methods could be extended to other extreme value distributions, but the first step was useful for setting valid superposition schemes for actions in design norms. As a real case, in the east of Spain rain usually comes extensively from normal weather perturbations, but in other cases "cold drop" events produce local high rains of about 400 mm in a day, causing inundations and in some cases local disasters. The city of Valencia in eastern Spain was flooded to a depth of 1.5 m by a cold drop in 1957, and the river Turia, which formerly ran through that city, was later diverted some kilometres to the south into a wider canal. With the Gumbel law, the expected intensity grows with the time of occurrence, giving a value for each given "return period"; however, the rate of growth increases with the "annual dispersion" of the Gumbel law, so some rare dangerous events may become very likely over periods of many years. This can be shown with relatively simple models, e.g. with the Extreme Value Type I law, and they could be made more precise or discussed further. Such effects were used for the superposition of actions on a structure in the Model Codes, and may be combined with hydraulic effects, e.g. for bridges on rivers. Different Gumbel laws, or other extreme laws, with different dispersions may describe marine wave actions, earthquakes, tsunamis, and perhaps human perturbations, which could include industrial catastrophes or wars when considering historical periods.
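
    A minimal sketch of the kind of Extreme Value Type I (Gumbel) calculation referred to above: fit annual rainfall maxima and read off the depth expected for a given return period. The data are synthetic and the parameters are not those of any Spanish station.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Synthetic annual-maximum daily rainfall (mm); stands in for observed records.
annual_max = rng.gumbel(loc=80.0, scale=30.0, size=60)

# Fit the Extreme Value Type I (Gumbel) distribution to the annual maxima.
loc, scale = stats.gumbel_r.fit(annual_max)

def return_level(T_years, loc, scale):
    """Rainfall depth exceeded on average once every T_years
    (quantile of the fitted Gumbel law at probability 1 - 1/T)."""
    return stats.gumbel_r.ppf(1.0 - 1.0 / T_years, loc=loc, scale=scale)

for T in (10, 100, 500):
    print(f"{T:>4}-year event: {return_level(T, loc, scale):.0f} mm/day")
```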

  6. 3-D superposition for radiotherapy treatment planning using fast Fourier transforms.

    Science.gov (United States)

    Murray, D C; Hoban, P W; Metcalfe, P E; Round, W H

    1989-09-01

    Currently used radiotherapy treatment planning algorithms based on effective path length or scatter function methods do not model electron ranging from photon interaction sites. The superposition (or convolution) technique does model this effect, which is especially important at higher (linear accelerator) energies since the electron range is significant. Another advantage of this method is that it is conceptually simple and models the physical processes directly, rather than using empirically derived methods. A major disadvantage of superposition lies in the large amount of computer time required to generate a plan, especially in three dimensions. To help solve this problem, superposition using an invariant dose spread array (kernel) can be achieved by performing a convolution in Fourier space using fast Fourier transforms (FFTs). A method for 3 dimensional calculation of dose using FFTs is presented. Dose spread arrays are calculated using the EGS Monte Carlo code, and convolved with the TERMA (total energy released per unit mass). In both cases a 10 MV nominal beam energy is modelled by a 10 component spectrum, which is compared to the result obtained using monochromatic energy only (3.0 MeV at the surface). The FFT technique is shown to be significantly faster than standard convolution for medium to large TERMA and dose spread array sizes. The method is shown to be highly accurate for small fields in homogeneous media. For larger fields the central axis depth dose is accurate but the profile shape in the penumbral region becomes slightly distorted. This is because photons incident near the beam edges are not parallel to the cartesian coordinate system used as the convolution framework. However, this effect is sufficiently small to indicate that the convolution method is suitable for use in routine treatment planning.
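
    A bare-bones sketch of the central step, convolving a 3-D TERMA array with a spatially invariant dose-spread kernel via FFTs; both arrays are synthetic placeholders rather than EGS Monte Carlo kernels, and beam spectrum, divergence, and heterogeneity corrections are ignored.

```python
import numpy as np
from scipy.signal import fftconvolve

# Synthetic 3-D TERMA distribution (total energy released per unit mass)
# for a small square field, with a crude exponential depth attenuation.
terma = np.zeros((64, 64, 64))
terma[24:40, 24:40, :] = 1.0
terma *= np.exp(-0.05 * np.arange(64))[None, None, :]

# Synthetic, spatially invariant dose-spread kernel (energy deposited
# around a photon interaction site); purely illustrative shape.
z, y, x = np.mgrid[-8:9, -8:9, -8:9]
r = np.sqrt(x**2 + y**2 + z**2) + 0.5
kernel = np.exp(-r) / r**2
kernel /= kernel.sum()

# With an invariant kernel, the superposition reduces to a convolution,
# which fftconvolve evaluates in Fourier space.
dose = fftconvolve(terma, kernel, mode="same")
```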

  7. Implementation of FFT convolution and multigrid superposition models in the FOCUS RTP system

    Science.gov (United States)

    Miften, Moyed; Wiesmeyer, Mark; Monthofer, Suzanne; Krippner, Ken

    2000-04-01

    In radiotherapy treatment planning, convolution/superposition algorithms currently represent the best practical approach for accurate photon dose calculation in heterogeneous tissues. In this work, the implementation, accuracy and performance of the FFT convolution (FFTC) and multigrid superposition (MGS) algorithms are presented. The FFTC and MGS models use the same `TERMA' calculation and are commissioned using the same parameters. Both models use the same spectra, incorporate the same off-axis softening and base incident lateral fluence on the same measurements. In addition, corrections are explicitly applied to the polyenergetic and parallel kernel approximations, and electron contamination is modelled. Spectra generated by Monte Carlo (MC) modelling of treatment heads are used. Calculations using the MC spectra were in excellent agreement with measurements for many linear accelerator types. To speed up the calculations, a number of calculation techniques were implemented, including separate primary and scatter dose calculation, the FFT technique which assumes kernel invariance for the convolution calculation and a multigrid (MG) acceleration technique for the superposition calculation. Timing results show that the FFTC model is faster than MGS by a factor of 4 and 8 for small and large field sizes, respectively. Comparisons with measured data and BEAM MC results for a wide range of clinical beam setups show that (a) FFTC and MGS doses match measurements to better than 2% or 2 mm in homogeneous media; (b) MGS is more accurate than FFTC in lung phantoms where MGS doses are within 3% or 3 mm of BEAM results and (c) FFTC overestimates the dose in lung by a maximum of 9% compared to BEAM.

  8. The Superposition Principle in Quantum Mechanics - did the rock enter the foundation surreptitiously?

    CERN Document Server

    Dass, N D Hari

    2013-01-01

    The superposition principle forms the very backbone of quantum theory. The resulting linear structure of quantum theory is structurally so rigid that tampering with it may have serious, seemingly unphysical, consequences. This principle has been successful at even the highest available accelerator energies. Is this aspect of quantum theory forever then? The present work is an attempt to understand the attitude of the founding fathers, particularly of Bohr and Dirac, towards this principle. The Heisenberg matrix mechanics on the one hand, and the Schrodinger wave mechanics on the other, are critically examined to shed light on how this principle entered the very foundations of quantum theory.

  9. Three-Phase Multiple Harmonic Sequence Detection Based on Generalized Delayed Signal Superposition

    DEFF Research Database (Denmark)

    Lu, Yong; Xiao, Guochun; Wang, Xiongfei

    2016-01-01

    A three-phase multiple harmonic sequence detection method is proposed for estimating both the fundamental and harmonic sequence components under adverse grid conditions. This detection method is denoted as MGDSS-PLL since it contains Multiple Generalized Delayed Signal Superposition operators and a Phase-Locked Loop. The proposed MGDSS-PLL can be flexibly tuned to extract any harmonic components according to specific applications, and it also exhibits great robustness to different grid disturbances. Simulations and experimental results are presented to verify the performance of the MGDSS-PLL.

  10. Superpositions of higher-order bessel beams and nondiffracting speckle fields

    CSIR Research Space (South Africa)

    Dudley, Angela L

    2009-08-01

    Full Text Available present in certain types of resonators. In optical tweezing, beams which carry orbital angular momentum are used to rotate trapped particles. In the case of generating a superposition of two higher-order Bessel beams which possess equal orders but of differing sign, the produced field carries no orbital angular momentum. However, these beams are still able to trap a particle in its intensity distribution and cause it to rotate over a spiral path along the beam's...

  11. The accuracy of single-seed dose superposition for I-125 implants.

    Science.gov (United States)

    Burns, G S; Raeside, D E

    1989-01-01

    The Monte Carlo method was used to study perturbations of single I-125 seed dose distributions created by the presence of one or three neighboring seeds for the case of seeds immersed in a water phantom. Perturbation factors were determined within the geometric shadow of neighboring seeds for two-seed designs, four-seed spacings, and several choices of dose point. The results were compared to dose estimates obtained by the simple superposition of single-seed data for one- and two-plane implants. Some significant differences were found.
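
    The "simple superposition of single-seed data" used as the baseline here amounts to summing an unperturbed single-seed dose law over all seed positions; the sketch below illustrates that baseline with an invented radial dose function, not the tabulated I-125 data or the Monte Carlo perturbation factors of the study.

```python
import numpy as np

def single_seed_dose(r_cm):
    """Placeholder single-seed dose rate versus radial distance.
    A real calculation would interpolate tabulated I-125 data; here a
    simple attenuated inverse-square fall-off stands in."""
    return np.exp(-0.1 * r_cm) / np.maximum(r_cm, 0.05) ** 2

def superposed_dose(point, seed_positions):
    """Simple superposition estimate: sum of unperturbed single-seed
    doses, ignoring interseed attenuation."""
    point = np.asarray(point, dtype=float)
    dists = np.linalg.norm(np.asarray(seed_positions) - point, axis=1)
    return single_seed_dose(dists).sum()

# Two-plane implant with four seeds per plane, 1 cm spacing (invented).
seeds = [(i, j, k) for i in range(2) for j in range(2) for k in range(2)]
print(superposed_dose((0.5, 0.5, 0.5), seeds))
```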

  12. Strong-Field Tunneling from a Coherent Superposition of Electronic States

    Science.gov (United States)

    Fechner, Lutz; Camus, Nicolas; Ullrich, Joachim; Pfeifer, Thomas; Moshammer, Robert

    2014-05-01

    Laser-induced tunnel ionization from a coherent superposition of electronic states in Ar+ is studied in a kinematically complete experiment. Within a pump-probe scheme a spin-orbit wave packet is launched through the first ionization step from the neutral species. The multielectron coherent wave packet is probed as a function of time by the second pulse which ionizes the system to Ar++. By measuring delay-dependent electron momentum distributions we directly image the evolution of the nonstationary multielectron wave function. Comparing the results with simulations we test common assumptions about electron momentum distributions and the tunneling process itself.

  13. Heisenberg-limited quantum sensing and metrology with superpositions of twin-Fock states

    Science.gov (United States)

    Gerry, Christopher C.; Mimih, Jihane

    2011-03-01

    We discuss the prospects of performing Heisenberg-limited quantum sensing and metrology using a Mach-Zehnder interferometer with input states that are superpositions of twin-Fock states and where photon-number parity measurements are made on one of the output beams of the interferometer. This study is motivated by the experimental challenge of producing twin-Fock states on opposite sides of a beam splitter. We focus on the use of the so-called pair coherent states for this purpose and discuss a possible mechanism for generating them. We also discuss the prospect of using other superpositions of twin-Fock states for the purpose of interferometry.

  14. Statistical Properties and Algebraic Characteristics of Quantum Superpositions of Negative Binomial States

    Institute of Scientific and Technical Information of China (English)

    WANG XiaoGuang; FU Hong-Chen

    2001-01-01

    We introduce new kinds of states of quantized radiation fields, which are superpositions of negative binomial states. They exhibit remarkable nonclassical properties and reduce to Schrodinger cat states in a certain limit. The algebras involved in the even and odd negative binomial states turn out to be generally deformed oscillator algebras. It is found that the even and odd negative binomial states satisfy the same eigenvalue equation with the same eigenvalue, and they can be viewed as two-photon nonlinear coherent states. Two methods of generating such states are proposed.

  15. Superposition model study of Cr3+ doped tetra methyl ammonium cadmium chloride.

    Science.gov (United States)

    Kripal, Ram; Yadav, Awadhesh Kumar

    2015-02-25

    The zero-field splitting (ZFS) parameter D of Cr(3+) doped in tetramethyl ammonium cadmium chloride (TMCC) is calculated with a perturbation formula using microscopic spin Hamiltonian theory and crystal-field parameters from the superposition model. The theoretically calculated ZFS parameter for Cr(3+) in a TMCC single crystal is compared with the experimental value obtained by electron paramagnetic resonance (EPR). The local structural distortion is considered in obtaining the crystal-field parameters. The theoretical study gives a ZFS parameter D similar to that from experiment, and a calculation that allows for a small distortion of the local structure around Cr(3+) gives better agreement with the experimental value.

  16. Approximate eigensolutions of Dirac equation for the superposition Hellmann potential under spin and pseudospin symmetries

    Indian Academy of Sciences (India)

    M Hamzavi; S M Ikhdair

    2014-07-01

    The Hellmann potential is simply a superposition of an attractive Coulomb potential $-a/r$ plus a Yukawa potential $e^{-\delta r}/r$. The generalized parametric Nikiforov-Uvarov (NU) method is used to examine the approximate analytical energy eigenvalues and two-component wave function of the Dirac equation with the Hellmann potential for arbitrary spin-orbit quantum number in the presence of exact spin and pseudospin (p-spin) symmetries. As a particular case, we obtain the energy eigenvalues of the pure Coulomb potential in the non-relativistic limit.

  17. Teleportation of a Coherent Superposition State Via a Nonmaximally Entangled Coherent Channel

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    We investigate the problem of teleportation of a superposition coherent state through a nonmaximally entangled coherent channel. Two strategies are considered to complete the task. The first uses entanglement concentration to purify the channel to a maximally entangled one. The second teleports the state through the nonmaximally entangled coherent channel directly. We find that the probabilities of successful teleportation for the two strategies depend on the amplitudes of the coherent states, and that the mean fidelity of teleportation using the first strategy is always less than that of the second.

  18. A reflected wave superposition method for vibration and energy of a travelling string

    Science.gov (United States)

    Chen, E. W.; Luo, Q.; Ferguson, N. S.; Lu, Y. M.

    2017-07-01

    This paper considers the analytical free time-domain response and energy of an axially translating and laterally vibrating string. The string domain is of either constant or variable length, and general initial conditions are allowed. The translating tensioned strings possess either fixed-fixed or fixed-free boundaries. An alternative analytical solution using a reflected wave superposition method is presented for a finite translating string. Firstly, the cycles of vibration for both constant and variable length strings are provided, which for the latter depend upon the variable string length. Each cycle is divided into three time intervals according to the magnitude and direction of the translating string velocity. Applying d'Alembert's method combined with the reflection properties, expressions for the reflected waves at the two boundaries are obtained. Subsequently, superposition of all of the incident and reflected waves provides results for the free vibration of the string over the three time intervals. The variation in the total mechanical energy of the string system is also shown. The accuracy and efficiency of the proposed method are confirmed numerically by comparison with simulations produced using a Newmark-Beta solution and an existing state-space representation of the string dynamics.
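
    As a simplified illustration of the reflected wave superposition idea, reduced to a non-translating fixed-fixed string, d'Alembert's travelling waves combined with an odd periodic extension (which encodes the sign-reversing reflections at the two boundaries) give the free response; the translating-string bookkeeping over three time intervals treated in the paper is not reproduced.

```python
import numpy as np

L, c = 1.0, 1.0          # string length and wave speed (illustrative)

def odd_periodic_extension(f, x):
    """Extend an initial shape defined on [0, L] as an odd, 2L-periodic
    function, equivalent to superposing the waves reflected (with sign
    change) at the two fixed boundaries."""
    x = np.mod(x, 2.0 * L)
    return np.where(x <= L, f(x), -f(2.0 * L - x))

def dalembert(f, x, t):
    """Free response for zero initial velocity:
    u(x, t) = [f*(x - c t) + f*(x + c t)] / 2, with f* the extension."""
    return 0.5 * (odd_periodic_extension(f, x - c * t)
                  + odd_periodic_extension(f, x + c * t))

f0 = lambda x: np.sin(np.pi * x / L) + 0.3 * np.sin(3 * np.pi * x / L)
x = np.linspace(0.0, L, 201)
u = dalembert(f0, x, t=0.37)
```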

  19. Phase sensitivity in deformed-state superposition considering nonlinear phase shifts

    Science.gov (United States)

    Berrada, K.

    2016-07-01

    We study the problem of phase estimation for the deformation-state superposition (DSS) under perfect and lossy (due to a dissipative interaction of the DSS with its environment) regimes. The study is also devoted to the phase enhancement of the quantum states resulting from a generalized non-linearity of the phase shifts, both without and with losses. We find that this kind of superposition can give a smaller variance in the phase parameter than the usual Schrödinger cat states for different orders of non-linearity, even for a larger average number of photons. Because how a system is quantum correlated with its environment is significant for the construction of a scalable quantum computer, the entanglement between the DSS and its environment is investigated during the dissipation. We show that partial entanglement trapping occurs during the dynamics depending on the kind of deformation and the mean photon number. These features make the DSS with a larger average number of photons a good candidate for implementing schemes of quantum optics and information with high precision.

  20. A Simple Method on Generating any Bi-Photon Superposition State with Linear Optics

    Science.gov (United States)

    Zhang, Ting-Ting; Wei, Jie; Wang, Qin

    2017-04-01

    We present a simple method on the generation of any bi-photon superposition state using only linear optics. In this scheme, the input states, a two-mode squeezed state and a bi-photon state, meet on a beam-splitter and the output states are post-selected with two threshold single-photon detectors. We carry out corresponding numerical simulations by accounting for practical experimental conditions, calculating both the Wigner function and the state fidelity of those generated bi-photon superposition states. Our simulation results demonstrate that not only distinct nonclassical characteristics but also very high state fidelities can be achieved even under imperfect experimental conditions. Supported by the National Natural Science Foundation of China under Grant Nos. 61475197, 61590932, 11274178, the Natural Science Foundation of the Jiangsu Higher Education Institutions under Grant No. 15KJA120002, the Outstanding Youth Project of Jiangsu Province under Grant No. BK20150039, and the Priority Academic Program Development of Jiangsu Higher Education Institutions under Grant No. YX002001

  1. Biases on initial mass function determinations. II. Real multiple systems and chance superpositions

    CERN Document Server

    Apellániz, J Maíz

    2008-01-01

    When calculating IMFs for young clusters, one has to take into account that (a) most massive stars are born in multiple systems, (b) most IMFs are derived from data that cannot resolve such systems, and (c) multiple chance superpositions between members are expected to happen if the cluster is too distant. In this article I use numerical experiments to model the consequences of those phenomena on the observed color-magnitude diagrams and the IMFs derived from them. Real multiple systems affect the observed or apparent massive-star MF slope little but can create a significant population of apparently ultramassive stars. Chance superpositions produce only small biases when the number of superimposed stars is low but, once a certain number threshold is reached, they can affect both the observed slope and the apparent stellar upper mass limit. I apply those experiments to two well-known massive young clusters in the Local Group, NGC 3603 and R136. In both cases I show that the observed population of stars with mas...

  2. Teleportation of one ququat encoded in single mode superposition of coherent states

    CERN Document Server

    Prakash, Hari

    2012-01-01

    Superposition of optical coherent states (SCS) |±α⟩, possessing opposite phases, plays an important role as qubits in quantum information processing tasks like quantum computation, teleportation, cryptography etc., and is of fundamental importance in testing quantum mechanics. Recently, ququats and qutrits, defined in four- and three-dimensional (D) Hilbert spaces respectively, have attracted much attention as they present advantages in secure quantum communication and also in research on the foundations of quantum mechanics. Here, we show that a superposition of four non-orthogonal coherent states |±α⟩ and |±iα⟩, which are 90 degrees out of phase, can be employed for encoding one ququat defined in a 4D Hilbert space spanned by four newly defined multi-photonic states |α_j⟩ with 4n+j photons, where j = 0, 1, 2, 3. We propose a scheme which generates the states |α_j⟩. When these states fall on a 50-50 beam splitter, t...

  3. Superposition approach for description of electrical conductivity in sheared MWNT/polycarbonate melts

    Directory of Open Access Journals (Sweden)

    M. Saphiannikova

    2012-06-01

    Full Text Available The theoretical description of the electrical properties of polymer melts filled with attractively interacting conductive particles represents a great challenge. Such filler particles tend to build a network-like structure which is very fragile and can be easily broken in a shear flow with shear rates of about 1 s–1. In this study, measured shear-induced changes in the electrical conductivity of polymer composites are described using a superposition approach, in which the filler particles are separated into a highly conductive percolating phase and a low-conductivity non-percolating phase, the latter represented by separated, well-dispersed filler particles. It is assumed that these phases determine the effective electrical properties of the composite through a type of mixing rule involving the phase volume fractions. The conductivity of the percolating phase is described with the help of classical percolation theory, while the conductivity of the non-percolating phase is given by the matrix conductivity enhanced by the presence of separate filler particles. The percolation theory is coupled with a kinetic equation for a scalar structural parameter which describes the current state of the filler network under particular flow conditions. The superposition approach is applied to transient shear experiments carried out on polycarbonate composites filled with multi-wall carbon nanotubes.
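
    A toy rendering of the two-phase picture described above: the filler is split into a percolating fraction whose conductivity follows a classical percolation power law and a dispersed fraction that only slightly enhances the matrix conductivity, and the two are combined through a volume-fraction mixing rule. All functional forms, constants, and the logarithmic mixing rule itself are assumptions for illustration, not the authors' fitted model.

```python
import numpy as np

def effective_conductivity(phi_total, xi,
                           sigma_matrix=1e-14, sigma_max=1e2,
                           phi_c=0.01, t=2.0):
    """Toy two-phase mixing rule for a filled polymer melt (all values
    illustrative, in S/m and volume fractions).

    phi_total : total filler volume fraction
    xi        : structural parameter in [0, 1]; fraction of the filler in
                the percolating network (reduced by shear, rebuilt at rest)
    """
    phi_perc = xi * phi_total              # network-forming filler
    phi_disp = (1.0 - xi) * phi_total      # well-dispersed filler

    # Percolating phase: classical percolation power law above phi_c.
    if phi_perc > phi_c:
        sigma_perc = sigma_max * ((phi_perc - phi_c) / (1.0 - phi_c)) ** t
    else:
        sigma_perc = sigma_matrix

    # Non-percolating phase: matrix conductivity, slightly enhanced by
    # the presence of isolated particles.
    sigma_disp = sigma_matrix * (1.0 + 10.0 * phi_disp)

    # Volume-fraction-weighted logarithmic mixing rule (an assumption).
    return 10.0 ** (phi_perc / phi_total * np.log10(sigma_perc)
                    + phi_disp / phi_total * np.log10(sigma_disp))

# Shear breaks the network (xi drops) and the conductivity collapses.
for xi in (0.9, 0.5, 0.05):
    print(xi, effective_conductivity(0.02, xi))
```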

  4. Propagation of the off-axis superposition of partially coherent beams through atmospheric turbulence

    Institute of Scientific and Technical Information of China (English)

    Zhang En-Tao; Ji Xiao-Ling; Lü Bai-Da

    2009-01-01

    The propagation properties of the off-axis superposition of partially coherent beams through atmospheric turbulence, and their beam quality in terms of the mean-squared beam width w(z) and the power in the bucket (PIB), are studied in detail, where the effects of partial coherence, off-axis beam superposition and atmospheric turbulence are considered. Analytical expressions for the intensity, the beam width and the PIB are derived, and illustrative numerical examples are given. It is shown that the maximum intensity Imax and the PIB decrease and w(z) increases as the refractive-index structure constant Cn2 increases; therefore, the turbulence results in a degradation of the beam quality. However, the resulting partially coherent beam with a smaller value of the spatial correlation parameter γ and larger values of the separation distance xd and beam number M is less affected by the turbulence than that with a larger value of γ and smaller values of xd and M. The main results obtained in this paper are explained physically.

  5. Risk evaluation of rock burst through theory of static and dynamic stresses superposition

    Institute of Scientific and Technical Information of China (English)

    李振雷; 蔡武; 窦林名; 何江; 王桂峰; 丁言露

    2015-01-01

    Rock burst is one of the most catastrophic dynamic hazards in coal mining. A static and dynamic stresses superposition-based (SDSS-based) risk evaluation method was proposed to pre-evaluate rock burst risk. The theoretical basis of the method is the stress criterion for rock burst occurrence: risk is evaluated according to how close the total stress (the superposition of the static stress in the coal and the dynamic stress induced by tremors) comes to the critical stress. In addition, a risk evaluation criterion for rock burst was established by defining the "Satisfaction Degree" of the static stress. The method was then used to pre-evaluate the rock burst risk degree and to identify endangered areas of an insular longwall face in Nanshan Coal Mine in China. Results show that the rock burst risk is moderate at an advance extent of 97 m, strong at an advance extent of 97-131 m, and extremely strong (i.e. inevitable) when the advance extent exceeds 131 m (mining is prohibited in this case). The section of the two gateways whose floor abuts the 15-3 coal seam is a susceptible area prone to rock burst. The evaluation results were further compared with rock bursts and tremors detected by microseismic monitoring; the comparison indicates that the evaluation results are consistent with the microseismic monitoring, which proves the method's feasibility.
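
    A schematic rendering of the stress-superposition criterion described above: risk is graded by how close the superposed static plus dynamic stress comes to the critical stress. The threshold ratios and stress values are invented placeholders, not the calibrated "Satisfaction Degree" criteria of the paper.

```python
def rock_burst_risk(sigma_static, sigma_dynamic, sigma_critical):
    """Grade rock burst risk from the ratio of the superposed stress
    (static stress in the coal + tremor-induced dynamic stress) to the
    critical stress.  Threshold values are illustrative only."""
    ratio = (sigma_static + sigma_dynamic) / sigma_critical
    if ratio < 0.7:
        return "weak"
    elif ratio < 0.85:
        return "moderate"
    elif ratio < 1.0:
        return "strong"
    return "extremely strong"

# Invented stresses in MPa.
print(rock_burst_risk(sigma_static=60.0, sigma_dynamic=25.0,
                      sigma_critical=100.0))
```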

  6. The denoising of Monte Carlo dose distributions using convolution superposition calculations.

    Science.gov (United States)

    El Naqa, I; Cui, J; Lindsay, P; Olivera, G; Deasy, J O

    2007-09-07

    Monte Carlo (MC) dose calculations can be accurate but are also computationally intensive. In contrast, convolution superposition (CS) offers faster and smoother results but by making approximations. We investigated MC denoising techniques, which use available convolution superposition results and new noise filtering methods to guide and accelerate MC calculations. Two main approaches were developed to combine CS information with MC denoising. In the first approach, the denoising result is iteratively updated by adding the denoised residual difference between the result and the MC image. Multi-scale methods were used (wavelets or contourlets) for denoising the residual. The iterations are initialized by the CS data. In the second approach, we used a frequency splitting technique by quadrature filtering to combine low frequency components derived from MC simulations with high frequency components derived from CS components. The rationale is to take the scattering tails as well as dose levels in the high-dose region from the MC calculations, which presumably more accurately incorporates scatter; high-frequency details are taken from CS calculations. 3D Butterworth filters were used to design the quadrature filters. The methods were demonstrated using anonymized clinical lung and head and neck cases. The MC dose distributions were calculated by the open-source dose planning method MC code with varying noise levels. Our results indicate that the frequency-splitting technique for incorporating CS-guided MC denoising is promising in terms of computational efficiency and noise reduction.
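
    A compact sketch of the second (frequency-splitting) approach on a 1-D depth-dose profile: quadrature Butterworth-style filters in the Fourier domain pass the low frequencies of the noisy Monte Carlo dose and the high frequencies of the smooth convolution/superposition dose. The cutoff, filter order, and synthetic profiles are arbitrary choices for illustration.

```python
import numpy as np

def butterworth_lowpass(n_samples, cutoff, order=4):
    """Low-pass Butterworth magnitude response on the FFT frequency grid."""
    freq = np.fft.fftfreq(n_samples)
    return 1.0 / np.sqrt(1.0 + (freq / cutoff) ** (2 * order))

def frequency_split(mc_dose, cs_dose, cutoff=0.05, order=4):
    """Combine low frequencies of the Monte Carlo dose with high
    frequencies of the convolution/superposition dose (quadrature split:
    the two filter responses sum to unity in power)."""
    H_low = butterworth_lowpass(len(mc_dose), cutoff, order)
    H_high = np.sqrt(1.0 - H_low ** 2)
    combined = H_low * np.fft.fft(mc_dose) + H_high * np.fft.fft(cs_dose)
    return np.real(np.fft.ifft(combined))

# Synthetic 1-D depth-dose: smooth CS estimate vs noisy MC estimate.
depth = np.linspace(0.0, 20.0, 256)
cs = np.exp(-0.06 * depth) * (1.0 - np.exp(-1.5 * depth))
mc = cs + 0.02 * np.random.default_rng(0).normal(size=depth.size)
denoised = frequency_split(mc, cs)
```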

  7. SAS-Pro: simultaneous residue assignment and structure superposition for protein structure alignment.

    Science.gov (United States)

    Shah, Shweta B; Sahinidis, Nikolaos V

    2012-01-01

    Protein structure alignment is the problem of determining an assignment between the amino-acid residues of two given proteins in a way that maximizes a measure of similarity between the two superimposed protein structures. By identifying geometric similarities, structure alignment algorithms provide critical insights into protein functional similarities. Existing structure alignment tools adopt a two-stage approach to structure alignment by decoupling and iterating between the assignment evaluation and structure superposition problems. We introduce a novel approach, SAS-Pro, which addresses the assignment evaluation and structure superposition simultaneously by formulating the alignment problem as a single bilevel optimization problem. The new formulation does not require the sequentiality constraints, thus generalizing the scope of the alignment methodology to include non-sequential protein alignments. We employ derivative-free optimization methodologies for searching for the global optimum of the highly nonlinear and non-differentiable RMSD function encountered in the proposed model. Alignments obtained with SAS-Pro have better RMSD values and larger lengths than those obtained from other alignment tools. For non-sequential alignment problems, SAS-Pro leads to alignments with high degree of similarity with known reference alignments. The source code of SAS-Pro is available for download at http://eudoxus.cheme.cmu.edu/saspro/SAS-Pro.html.
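
    For context, the structure superposition subproblem that alignment tools evaluate for a given residue assignment is the classical least-squares fit of one coordinate set onto the other; a standard Kabsch-style sketch of that inner step is given below. It is not the bilevel, derivative-free optimization of SAS-Pro itself.

```python
import numpy as np

def superpose_rmsd(P, Q):
    """Optimal least-squares superposition of matched coordinate sets.

    P, Q : (n, 3) arrays of assigned residue coordinates (e.g. C-alpha).
    Returns the minimal RMSD after optimally rotating and translating P
    onto Q (Kabsch algorithm via SVD)."""
    Pc = P - P.mean(axis=0)
    Qc = Q - Q.mean(axis=0)
    U, S, Vt = np.linalg.svd(Pc.T @ Qc)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # avoid improper rotation
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    P_fit = Pc @ R.T
    return np.sqrt(np.mean(np.sum((P_fit - Qc) ** 2, axis=1)))

# Example: a rotated, translated copy superposes with ~zero RMSD.
rng = np.random.default_rng(0)
P = rng.normal(size=(50, 3))
theta = 0.7
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
Q = P @ Rz.T + np.array([1.0, -2.0, 0.5])
print(superpose_rmsd(P, Q))   # ~0
```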

  8. The denoising of Monte Carlo dose distributions using convolution superposition calculations

    Energy Technology Data Exchange (ETDEWEB)

    El Naqa, I [Department of Radiation Oncology, Washington University School of Medicine, St Louis, MO (United States); Cui, J [Department of Radiation Oncology, Washington University School of Medicine, St Louis, MO (United States); Lindsay, P [MD Anderson, Houston, TX (United States); Olivera, G [Tomotherapy Inc., Madison, WI (United States); Deasy, J O [Department of Radiation Oncology, Washington University School of Medicine, St Louis, MO (United States)

    2007-09-07

    Monte Carlo (MC) dose calculations can be accurate but are also computationally intensive. In contrast, convolution superposition (CS) offers faster and smoother results but by making approximations. We investigated MC denoising techniques, which use available convolution superposition results and new noise filtering methods to guide and accelerate MC calculations. Two main approaches were developed to combine CS information with MC denoising. In the first approach, the denoising result is iteratively updated by adding the denoised residual difference between the result and the MC image. Multi-scale methods were used (wavelets or contourlets) for denoising the residual. The iterations are initialized by the CS data. In the second approach, we used a frequency splitting technique by quadrature filtering to combine low frequency components derived from MC simulations with high frequency components derived from CS components. The rationale is to take the scattering tails as well as dose levels in the high-dose region from the MC calculations, which presumably more accurately incorporates scatter; high-frequency details are taken from CS calculations. 3D Butterworth filters were used to design the quadrature filters. The methods were demonstrated using anonymized clinical lung and head and neck cases. The MC dose distributions were calculated by the open-source dose planning method MC code with varying noise levels. Our results indicate that the frequency-splitting technique for incorporating CS-guided MC denoising is promising in terms of computational efficiency and noise reduction. (note)

  9. NOTE: The denoising of Monte Carlo dose distributions using convolution superposition calculations

    Science.gov (United States)

    El Naqa, I.; Cui, J.; Lindsay, P.; Olivera, G.; Deasy, J. O.

    2007-09-01

    Monte Carlo (MC) dose calculations can be accurate but are also computationally intensive. In contrast, convolution superposition (CS) offers faster and smoother results, but at the cost of approximations. We investigated MC denoising techniques, which use available convolution superposition results and new noise filtering methods to guide and accelerate MC calculations. Two main approaches were developed to combine CS information with MC denoising. In the first approach, the denoising result is iteratively updated by adding the denoised residual difference between the result and the MC image. Multi-scale methods were used (wavelets or contourlets) for denoising the residual. The iterations are initialized by the CS data. In the second approach, we used a frequency splitting technique by quadrature filtering to combine low frequency components derived from MC simulations with high frequency components derived from CS calculations. The rationale is to take the scattering tails as well as dose levels in the high-dose region from the MC calculations, which presumably incorporate scatter more accurately; high-frequency details are taken from CS calculations. 3D Butterworth filters were used to design the quadrature filters. The methods were demonstrated using anonymized clinical lung and head and neck cases. The MC dose distributions were calculated by the open-source dose planning method MC code with varying noise levels. Our results indicate that the frequency-splitting technique for incorporating CS-guided MC denoising is promising in terms of computational efficiency and noise reduction.
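
    For intuition only, the frequency-splitting step described above can be sketched as a pair of complementary filters: a radially symmetric Butterworth low-pass keeps the low-frequency content of the MC dose grid, while its complement keeps the high-frequency detail of the CS grid. This is not the authors' implementation; the function names, the filter design, and the cutoff value are assumptions of the sketch.

```python
import numpy as np

def butterworth_lowpass(shape, cutoff, order=4):
    """Radially symmetric Butterworth low-pass filter defined on the FFT grid."""
    axes = [np.fft.fftfreq(n) for n in shape]
    grids = np.meshgrid(*axes, indexing="ij")
    radius = np.sqrt(sum(g ** 2 for g in grids))
    return 1.0 / (1.0 + (radius / cutoff) ** (2 * order))

def frequency_split_denoise(mc_dose, cs_dose, cutoff=0.1):
    """Blend the low-frequency part of the MC dose with the high-frequency part of CS."""
    lp = butterworth_lowpass(mc_dose.shape, cutoff)
    combined = lp * np.fft.fftn(mc_dose) + (1.0 - lp) * np.fft.fftn(cs_dose)
    return np.real(np.fft.ifftn(combined))
```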

  10. Generalized Gaussian Error Calculus

    CERN Document Server

    Grabe, Michael

    2010-01-01

    For the first time in 200 years Generalized Gaussian Error Calculus addresses a rigorous, complete and self-consistent revision of the Gaussian error calculus. Since experimentalists realized that measurements in general are burdened by unknown systematic errors, the classical, widely used evaluation procedures scrutinizing the consequences of random errors alone turned out to be obsolete. As a matter of course, the error calculus to-be, treating random and unknown systematic errors side by side, should ensure the consistency and traceability of physical units, physical constants and physical quantities at large. The generalized Gaussian error calculus considers unknown systematic errors to spawn biased estimators. Beyond that, random errors are asked to conform to the idea of what the author calls well-defined measuring conditions. The approach features the properties of a building kit: any overall uncertainty turns out to be the sum of a contribution due to random errors, to be taken from a confidence inter...

  11. Classification of Spreadsheet Errors

    OpenAIRE

    Rajalingham, Kamalasen; Chadwick, David R.; Knight, Brian

    2008-01-01

    This paper describes a framework for a systematic classification of spreadsheet errors. This classification or taxonomy of errors is aimed at facilitating analysis and comprehension of the different types of spreadsheet errors. The taxonomy is an outcome of an investigation of the widespread problem of spreadsheet errors and an analysis of specific types of these errors. This paper contains a description of the various elements and categories of the classification and is supported by appropri...

  12. Superposition of nonparaxial vectorial complex-source spherically focused beams: Axial Poynting singularity and reverse propagation

    Science.gov (United States)

    Mitri, F. G.

    2016-08-01

    In this work, counterintuitive effects such as the generation of an axial (i.e., along the direction of wave motion) zero-energy flux density (i.e., axial Poynting singularity) and reverse (i.e., negative) propagation of nonparaxial quasi-Gaussian electromagnetic (EM) beams are examined. Generalized analytical expressions for the EM field's components of a coherent superposition of two high-order quasi-Gaussian vortex beams of opposite handedness and different amplitudes are derived based on the complex-source-point method, stemming from Maxwell's vector equations and the Lorenz gauge condition. The general solutions exhibiting unusual effects satisfy the Helmholtz and Maxwell's equations. The EM beam components are characterized by nonzero integer degree and order (n, m), respectively, an arbitrary waist w0, a diffraction convergence length known as the Rayleigh range zR, and a weighting (real) factor 0 ≤ α ≤ 1 that describes the transition of the beam from a purely vortex (α = 0) to a nonvortex (α = 1) type. An attractive feature of this superposition is the description of strongly focused (or strongly divergent) wave fields. Computations of the EM power density as well as the linear and angular momentum density fluxes illustrate the analysis with particular emphasis on the polarization states of the vector potentials forming the beams and the weight of the coherent beam superposition causing the transition from the vortex to the nonvortex type. Should some conditions determined by the polarization state of the vector potentials and the beam parameters be met, an axial zero-energy flux density is predicted in addition to a negative retrograde propagation effect. Moreover, rotation reversal of the angular momentum flux density with respect to the beam handedness is anticipated, suggesting the possible generation of negative (left-handed) torques. The results are particularly useful in applications involving the design of strongly focused optical laser

  13. Multi-level manual and autonomous control superposition for intelligent telerobot

    Science.gov (United States)

    Hirai, Shigeoki; Sato, T.

    1989-01-01

    Space telerobots are recognized to require cooperation with human operators in various ways. Multi-level manual and autonomous control superposition in telerobot task execution is described. The object model, the structured master-slave manipulation system, and the motion understanding system are proposed to realize the concept. The object model offers interfaces for task-level and object-level human intervention. The structured master-slave manipulation system offers interfaces for motion-level human intervention. The motion understanding system maintains the consistency of the knowledge through all the levels, which supports robot autonomy while accepting human intervention. The superposed execution of the teleoperational task at multiple levels realizes intuitive and robust task execution for a wide variety of objects and in changing environments. The performance of several examples of operating chemical apparatuses is shown.

  14. A Simple Test of the Equivalence Principle(s) for Quantum Superpositions

    CERN Document Server

    Orlando, Patrick J; Modi, Kavan; Pollock, Felix A

    2015-01-01

    We propose a simple experimental test of the quantum equivalence principle introduced by Zych and Brukner [arXiv:1502.00971], which generalises the Einstein equivalence principle to superpositions of internal energy states. We consider a harmonically-trapped spin-$\frac{1}{2}$ atom in the presence of both gravity and an external magnetic field and show that when the external magnetic field is suddenly switched off, various violations of the equivalence principle would manifest as otherwise forbidden transitions. Performing such an experiment would put bounds on the various phenomenological violating parameters. We further demonstrate that the classical weak equivalence principle can be tested by suddenly putting the apparatus into free fall, effectively 'switching off' gravity.

  15. Quantum Interference and Superposition in Cognition: Development of a Theory for the Disjunction of Concepts

    CERN Document Server

    Aerts, Diederik

    2007-01-01

    We elaborate a theory for the modeling of concepts using the mathematical structure of quantum mechanics. Items and concepts are represented by vectors in the complex Hilbert space of quantum mechanics and membership weights of items are modeled by quantum weights calculated following the quantum rules. We apply this theory to model the disjunction of concepts and show that the predictions of our theory for the membership weights of items with respect to the disjunction of concepts match with great accuracy the results of an experiment conducted by Hampton (1988b). It is the quantum effects of interference and superposition that are at the origin of the effects of overextension and underextension observed by Hampton as deviations from a classical use of the disjunction. We show that the complex numbers of the Hilbert space are essential to obtaining the experimental predictions, i.e. vector space models over real numbers do not provide predictions matching the experimental data. We put forward an explanation ...

  16. Macroscopic realism, wave-particle duality and the superposition principle for entangled states

    CERN Document Server

    Chuprikov, N L

    2006-01-01

    On the basis of our model of a one-dimensional (1D) completed scattering (Russian Physics, 49, p.119 and p.314 (2006)) we argue that the linear formalism of quantum mechanics (QM) respects the principles of macroscopic realism (J. Phys.: Condens. Matter, 14, R415-R451 (2002)). In QM one has to distinguish two kinds of pure ensembles: pure unentangled ensembles, which are macroscopically inseparable, and pure entangled ones, which are macroscopically separable. A pure entangled ensemble is an intermediate link between a pure unentangled ensemble and a classical mixture. Like the former, it strictly respects the linear formalism of QM. Like the latter, it is decomposable into macroscopically distinct subensembles, in spite of interference between them; our new model exemplifies how to perform such a decomposition in the case of a 1D completed scattering. To respect macroscopic realism, the superposition principle must be reformulated: it must forbid introducing observables for entangled states.

  17. From Quantum To Classical Dynamics: A Landau Continuous Phase Transition With Spontaneous Superposition Breaking

    CERN Document Server

    Pankovic, V; Predojevic, M; Krmar, M; Pankovic, Vladan; Hubsch, Tristan; Predojevic, Milan; Krmar, Miodrag

    2004-01-01

    Developing an earlier proposal (Ne'eman, Damnjanovic, etc.), we show herein that there is a Landau continuous phase transition from the exact quantum dynamics to the effectively classical one, occurring via spontaneous superposition breaking (effective hiding), as a special case of the corresponding general formalism (Bernstein). Critical values of the order parameters for this transition are determined by Heisenberg's indeterminacy relations, change continuously, and are in excellent agreement with the recent and remarkable experiments with Bose condensation. It is also shown that such a phase transition can successfully model self-collapse (self-decoherence), as an effective classical phenomenon, on the measurement device. This then induces a relative collapse (relative decoherence) as an effective quantum phenomenon on the measured quantum object by measurement. We demonstrate this (including the case of Bose-Einstein condensation) in the well-known cases of the Stern-Gerlach spin measurement, Bell's inequal...

  18. On sparse reconstructions in near-field acoustic holography using the method of superposition

    CERN Document Server

    Abusag, Nadia M

    2016-01-01

    The method of superposition is proposed in combination with a sparse $\ell_1$ optimisation algorithm with the aim of finding a sparse basis to accurately reconstruct the structural vibrations of a radiating object from a set of acoustic pressure values on a conformal surface in the near-field. The nature of the reconstructions generated by the method differs fundamentally from those generated via standard Tikhonov regularisation in terms of the level of sparsity in the distribution of charge strengths specifying the basis. In many cases, the $\ell_1$ optimisation leads to a solution basis whose size is only a small fraction of the total number of measured data points. The effects of changing the wavenumber, the internal source surface and the (noisy) acoustic pressure data in general will all be studied with reference to a numerical study on a cuboid of similar dimensions to a typical loudspeaker cabinet. The development of sparse and accurate reconstructions has a number of advantageous consequences includin...

  19. Investigating the Influence of Visualization on Student Understanding of Quantum Superposition

    CERN Document Server

    Kohnle, Antje; Ruby, Scott

    2014-01-01

    Visualizations in interactive computer simulations are a powerful tool to help students develop productive mental models, particularly in the case of quantum phenomena that have no classical analogue. The QuVis Quantum Mechanics Visualization Project develops research-based interactive simulations for the learning and teaching of quantum mechanics. We describe efforts to refine the visual representation of a single-photon superposition state in the QuVis simulations. We developed various depictions of a photon incident on a beam splitter, and investigated their influence on student thinking through individual interviews. Outcomes from this study led to the incorporation of a revised visualization in all QuVis single-photon simulations. In-class trials in 2013 and 2014 using the Interferometer Experiments simulation in an introductory quantum physics course were used for a comparative study of the initial and revised visualizations. The class that used the revised visualization showed a lower frequency of inco...

  20. Effect of temperature on aging and time-temperature superposition in nonergodic laponite suspensions

    Science.gov (United States)

    Awasthi, Varun; Joshi, Yogesh M.

    We have studied the effect of temperature on the aging dynamics of laponite suspensions by carrying out rheological oscillatory and creep experiments. We observed that at higher temperatures the mechanism responsible for aging became faster, thereby shifting the evolution of the elastic modulus to lower ages. Significantly, in the creep experiments, all the aging-time- and temperature-dependent strain data superposed to form a master curve. The possibility of such superposition suggests that the rheological behavior depends on the temperature and the aging time only through the relaxation processes, and that both variables do not affect the distribution but only the average value of the relaxation times. In addition, this procedure allows us to predict long-time rheological behavior by carrying out short-time tests at high temperatures and small ages.

  1. Proportional fair scheduling with superposition coding in a cellular cooperative relay system

    DEFF Research Database (Denmark)

    Kaneko, Megumi; Hayashi, Kazunori; Popovski, Petar

    2013-01-01

    Many works have tackled on the problem of throughput and fairness optimization in cellular cooperative relaying systems. Considering firstly a two-user relay broadcast channel, we design a scheme based on superposition coding (SC) which maximizes the achievable sum-rate under a proportional...... fairness constraint. Unlike most relaying schemes where users are allocated orthogonally, our scheme serves the two users simultaneously on the same time-frequency resource unit by superposing their messages into three SC layers. The optimal power allocation parameters of each SC layer are derived...... by analysis. Next, we consider the general multi-user case in a cellular relay system, for which we design resource allocation algorithms based on proportional fair scheduling exploiting the proposed SC-based scheme. Numerical results show that the proposed algorithms allowing simultaneous user allocation...

  2. Superposition frames for adaptive time-frequency analysis and fast reconstruction

    CERN Document Server

    Rudoy, Daniel; Wolfe, Patrick J

    2009-01-01

    In this article we introduce a broad family of adaptive, linear time-frequency representations termed superposition frames, and show that they admit desirable fast overlap-add reconstruction properties akin to standard short-time Fourier techniques. This approach stands in contrast to many adaptive time-frequency representations in the extant literature, which, while more flexible than standard fixed-resolution approaches, typically fail to provide efficient reconstruction and often lack the regular structure necessary for precise frame-theoretic analysis. Our main technical contributions come through the development of properties which ensure that this construction provides for a numerically stable, invertible signal representation. Our primary algorithmic contributions come via the introduction and discussion of specific signal adaptation criteria in deterministic and stochastic settings, based respectively on time-frequency concentration and nonstationarity detection. We conclude with a short speech enhanc...
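
    As background for the overlap-add reconstruction mentioned above, the following sketch shows only the fixed-hop special case of summing windowed frames back into a signal; the adaptive frame construction of the paper is not reproduced here, and the function name and equal-length-frame assumption are illustrative.

```python
import numpy as np

def overlap_add(frames, hop):
    """Reconstruct a signal by summing equal-length windowed frames at a fixed hop."""
    frames = np.asarray(frames, dtype=float)   # shape: (n_frames, frame_len)
    n_frames, frame_len = frames.shape
    out = np.zeros((n_frames - 1) * hop + frame_len)
    for i, frame in enumerate(frames):
        out[i * hop:i * hop + frame_len] += frame
    return out
```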

  3. Re-thinking the Rubric for Grading the CUE: The Superposition Principle

    CERN Document Server

    Zwolak, Justyna P; Manogue, Corinne A

    2013-01-01

    While introductory electricity and magnetism (E&M) has been investigated for decades, research at the upper-division is relatively new. The University of Colorado has developed the Colorado Upper-Division Electrostatics (CUE) Diagnostic to test students' understanding of the content of the first semester of an upper-division E&M course. While the questions on the CUE cover many learning goals in an appropriate manner, we believe the rubric for the CUE is particularly aligned to the topics and methods of teaching at the University of Colorado. We suggest that changes to the rubric would allow for better assessment of a wider range of teaching schemes. As an example, we highlight one problem from the CUE involving the superposition principle. Using data from both Oregon State University and the University of Colorado, we discuss the limitations of the current rubric, compare results using a different analysis scheme, and discuss the implications for assessing students' understanding.

  4. Accurate modeling of vector hysteresis using a superposition of Preisach-type models

    Energy Technology Data Exchange (ETDEWEB)

    Adly, A.A. [Cairo Univ., Giza (Egypt). Electrical Power and Machines Dept.; Mayergoyz, I.D. [Univ. of Maryland, College Park, MD (United States). Electrical Engineering Dept.

    1997-09-01

    Vector hysteresis models are generally regarded as helpful tools that can be utilized in simulating and/or predicting multi-dimensional field-media interactions. Simulations of energy loss in power devices having unoriented magnetic cores, read/write recording processes, as well as tape and disk erasure approaches are examples of such interactions that are currently of considerable interest. In this paper, simulation of vector hysteresis is proposed by using a superposition of isotropic Preisach-type models. This approach gives the opportunity to fully incorporate rotational experimental results in its identification procedure, thus leading to higher simulation accuracy. A detailed solution of the model identification problem and some experimental testing results are given in the paper.

  5. Similarity recognition of molecular structures by optimal atomic matching and rotational superposition.

    Science.gov (United States)

    Helmich, Benjamin; Sierka, Marek

    2012-01-15

    An algorithm for similarity recognition of molecules and molecular clusters is presented which also establishes the optimum matching among atoms of different structures. In the first step of the algorithm, a set of molecules is coarsely superimposed by transforming them into a common reference coordinate system. The optimum atomic matching among structures is then found with the help of the Hungarian algorithm. For this, pairs of structures are represented as complete bipartite graphs with a weight function that uses intermolecular atomic distances. In the final step, a rotational superposition method is applied using the optimum atomic matching found. This yields the minimum root mean square deviation of intermolecular atomic distances with respect to arbitrary rotation and translation of the molecules. Combined with an effective similarity prescreening method, our algorithm shows robustness and an effective quadratic scaling of computational time with the number of atoms.
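
    A compact sketch of the two core steps named above, optimal atomic matching via the Hungarian algorithm followed by rotational superposition, is given below; the coarse pre-alignment and the similarity prescreening are omitted, and the function names are illustrative.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def kabsch_rmsd(P, Q):
    """Minimum RMSD between two (N, 3) coordinate sets under optimal rotation."""
    P = P - P.mean(axis=0)
    Q = Q - Q.mean(axis=0)
    V, S, Wt = np.linalg.svd(P.T @ Q)
    d = np.sign(np.linalg.det(V @ Wt))
    R = V @ np.diag([1.0, 1.0, d]) @ Wt      # proper rotation (no reflection)
    return np.sqrt(np.mean(np.sum((P @ R - Q) ** 2, axis=1)))

def match_and_superpose(A, B):
    """Hungarian matching on interatomic distances, then rotational superposition."""
    cost = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    row, col = linear_sum_assignment(cost)
    return kabsch_rmsd(A[row], B[col])
```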

  6. Color changes in wood during heating: kinetic analysis by applying a time-temperature superposition method

    Science.gov (United States)

    Matsuo, Miyuki; Yokoyama, Misao; Umemura, Kenji; Gril, Joseph; Yano, Ken'ichiro; Kawai, Shuichi

    2010-04-01

    This paper deals with the kinetics of the color properties of hinoki (Chamaecyparis obtusa Endl.) wood. Specimens cut from the wood were heated at 90-180°C as an accelerated aging treatment. The specimens, completely dried and heated in the presence of oxygen, allowed us to evaluate the effects of thermal oxidation on wood color change. Color properties measured by a spectrophotometer showed similar behavior irrespective of the treatment temperature with each time scale. Kinetic analysis using the time-temperature superposition principle, which uses the whole data set, was successfully applied to the color changes. The calculated values of the apparent activation energy in terms of L*, a*, b*, and ΔE*_ab were 117, 95, 114, and 113 kJ/mol, respectively, which are similar to values reported in the literature for other properties such as the physical and mechanical properties of wood.
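
    The kinetic analysis rests on Arrhenius-type shift factors; assuming the shift factors a_T have already been obtained by superposing the color-change curves onto a reference temperature, an apparent activation energy follows from a simple fit. The numbers in the usage comment are hypothetical and only illustrate the call.

```python
import numpy as np

R_GAS = 8.314  # J mol^-1 K^-1

def apparent_activation_energy(temperatures_K, shift_factors):
    """Arrhenius fit of TTSP shift factors: ln(a_T) = (Ea/R) * (1/T - 1/T_ref),
    so Ea is R times the slope of ln(a_T) against 1/T."""
    slope, _ = np.polyfit(1.0 / np.asarray(temperatures_K, dtype=float),
                          np.log(np.asarray(shift_factors, dtype=float)), 1)
    return R_GAS * slope  # J/mol

# Hypothetical shift factors at 363-453 K relative to a 363 K reference:
# apparent_activation_energy([363, 393, 423, 453], [1.0, 3e-2, 2e-3, 2e-4])
```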

  7. EPR, optical and superposition model study of Mn2+ doped L+ glutamic acid

    Science.gov (United States)

    Kripal, Ram; Singh, Manju

    2015-12-01

    Electron paramagnetic resonance (EPR) study of Mn2+ doped L+ glutamic acid single crystal is done at room temperature. Four interstitial sites are observed and the spin Hamiltonian parameters are calculated with the help of large number of resonant lines for various angular positions of external magnetic field. The optical absorption study is also done at room temperature. The energy values for different orbital levels are calculated, and observed bands are assigned as transitions from 6A1g(s) ground state to various excited states. With the help of these assigned bands, Racah inter-electronic repulsion parameters B = 869 cm-1, C = 2080 cm-1 and cubic crystal field splitting parameter Dq = 730 cm-1 are calculated. Zero field splitting (ZFS) parameters D and E are calculated by the perturbation formulae and crystal field parameters obtained using superposition model. The calculated values of ZFS parameters are in good agreement with the experimental values obtained by EPR.

  8. Limitations to the validity of single wake superposition in wind farm yield assessment

    Science.gov (United States)

    Gunn, K.; Stock-Williams, C.; Burke, M.; Willden, R.; Vogel, C.; Hunter, W.; Stallard, T.; Robinson, N.; Schmidt, S. R.

    2016-09-01

    Commercially available wind yield assessment models rely on superposition of wakes calculated for isolated single turbines. These methods of wake simulation fail to account for emergent flow physics that may affect the behaviour of multiple turbines and their wakes and therefore wind farm yield predictions. In this paper wake-wake interaction is modelled computationally (CFD) and physically (in a hydraulic flume) to investigate physical causes of discrepancies between analytical modelling and simulations or measurements. Three effects, currently neglected in commercial models, are identified as being of importance: 1) when turbines are directly aligned, the combined wake is shortened relative to the single turbine wake; 2) when wakes are adjacent, each will be lengthened due to reduced mixing; and 3) the pressure field of downstream turbines can move and modify wakes flowing close to them.
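
    For reference, the kind of single-wake superposition that commercial yield models rely on, and whose limitations are examined above, can be sketched as a Jensen top-hat wake deficit combined across upstream turbines by root-sum-square; the parameter values and function names here are illustrative assumptions, not part of the study.

```python
import numpy as np

def jensen_deficit(x, ct=0.8, rotor_d=100.0, k=0.05):
    """Fractional velocity deficit a distance x (m) downstream of a single turbine."""
    if x <= 0.0:
        return 0.0
    return (1.0 - np.sqrt(1.0 - ct)) / (1.0 + 2.0 * k * x / rotor_d) ** 2

def combined_deficit(distances):
    """Root-sum-square superposition of single-turbine wake deficits at one point."""
    return np.sqrt(sum(jensen_deficit(x) ** 2 for x in distances))

# e.g. a point 500 m behind one turbine and 1200 m behind another:
# combined_deficit([500.0, 1200.0])
```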

  9. Role of externally induced coherent superposition in demonstrating quantum nonlocality in a correlated emission laser

    Energy Technology Data Exchange (ETDEWEB)

    Tesfa, Sintayehu [Physics Department, Addis Ababa University, PO Box 1176, Addis Ababa (Ethiopia); Physics Department, Dilla University, PO Box 419, Dilla (Ethiopia)], E-mail: sint_tesfa@yahoo.com

    2008-12-28

    Analysis of the effects of external pumping on the quantum features, including entanglement, quantum nonlocality and nonclassical photon number correlations, of the cavity radiation of a correlated emission laser is presented. It turns out that the contribution of externally induced coherent superposition in demonstrating quantum nonlocality is significant. Despite the available evidence that entangled states can exhibit nonlocality for certain values of the rate at which the atoms are injected into the cavity and amplitude of the driving radiation, a direct relation between the degree of entanglement and quantum nonlocality cannot be established. However, it seems likely to make a consistent connection between the Cauchy-Schwarz and Bell-Clauser-Horne-Shimony-Holt inequalities. It is evident that comparison among various nonclassical correlations enhances the understanding of the otherwise intricate quantum theoretical predictions.

  10. Photon-assisted Landau-Zener transition: Role of coherent superposition states

    CERN Document Server

    Sun, Zhe; Wang, Xiaoguang; Nori, Franco

    2012-01-01

    We investigate a Landau-Zener (LZ) transition process modelled by a quantum two-level system (TLS) coupled to a photon mode when the bias energy is varied linearly in time. The initial state of the photon field is assumed to be a superposition of coherent states, leading to a more intricate LZ transition. Applying the rotating-wave approximation (RWA), analytical results are obtained revealing the enhancement of the LZ probability by increasing the average photon number. We also consider the creation of entanglement and the change of photon statistics during the LZ process. Without the RWA, we find some qualitative differences in the LZ dynamics from the RWA results, e.g., the average photon number no longer monotonically enhances the LZ probability.

  11. A test of the equivalence principle(s) for quantum superpositions

    Science.gov (United States)

    Orlando, Patrick J.; Mann, Robert B.; Modi, Kavan; Pollock, Felix A.

    2016-10-01

    We propose an experimental test of the quantum equivalence principle introduced by Zych and Brukner (arXiv:1502.00971), which generalises the Einstein equivalence principle to superpositions of internal energy states. We consider a harmonically trapped spin-$\tfrac{1}{2}$ atom in the presence of both gravity and an external magnetic field and show that when the external magnetic field is suddenly switched off, various violations of the equivalence principle would manifest as otherwise forbidden transitions. Performing such an experiment would put bounds on the various phenomenological violating parameters. We further demonstrate that the classical weak equivalence principle can be tested by suddenly putting the apparatus into free fall, effectively ‘switching off’ gravity.

  12. Practical method using superposition of individual magnetic fields for initial arrangement of undulator magnets.

    Science.gov (United States)

    Tsuchiya, K; Shioya, T

    2015-04-01

    We have developed a practical method for determining an excellent initial arrangement of magnetic arrays for a pure-magnet Halbach-type undulator. In this method, the longitudinal magnetic field distribution of each magnet is measured using a moving Hall probe system along the beam axis with a high positional resolution. The initial arrangement of magnetic arrays is optimized and selected by analyzing the superposition of all distribution data in order to achieve adequate spectral quality for the undulator. We applied this method to two elliptically polarizing undulators (EPUs), called U#16-2 and U#02-2, at the Photon Factory storage ring (PF ring) in the High Energy Accelerator Research Organization (KEK). The measured field distribution of the undulator was demonstrated to be excellent for the initial arrangement of the magnet array, and this method saved a great deal of effort in adjusting the magnetic fields of EPUs.

  13. Digital coherent superposition of optical OFDM subcarrier pairs with Hermitian symmetry for phase noise mitigation.

    Science.gov (United States)

    Yi, Xingwen; Chen, Xuemei; Sharma, Dinesh; Li, Chao; Luo, Ming; Yang, Qi; Li, Zhaohui; Qiu, Kun

    2014-06-02

    Digital coherent superposition (DCS) provides an approach to combat fiber nonlinearities by trading off the spectrum efficiency. In analogy, we extend the concept of DCS to optical OFDM subcarrier pairs with Hermitian symmetry to combat the linear and nonlinear phase noise. At the transmitter, we simply use a real-valued OFDM signal to drive a Mach-Zehnder (MZ) intensity modulator biased at the null point, and the so-generated OFDM signal is Hermitian in the frequency domain. At the receiver, after the conventional OFDM signal processing, we conduct DCS of the optical OFDM subcarrier pairs, which requires only conjugation and summation. We show that the inter-carrier interference (ICI) due to phase noise can be reduced because of the Hermitian symmetry. In a simulation, this method improves the tolerance to the laser phase noise. In a nonlinear WDM transmission experiment, this method also achieves better performance under the influence of cross-phase modulation (XPM).
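
    A minimal sketch of the receiver-side superposition step described above, i.e., the conjugate-and-sum of mirrored subcarrier pairs; equalization and the rest of the OFDM chain are omitted, and the indexing convention and function name are assumptions.

```python
import numpy as np

def dcs_hermitian_pairs(Y):
    """Digitally superpose received OFDM subcarriers Y[k] and Y[N-k].

    With a Hermitian-symmetric transmitted spectrum the two subcarriers carry
    the same data, so averaging Y[k] with conj(Y[N-k]) partially cancels the
    phase-noise-induced inter-carrier interference.
    """
    Y = np.asarray(Y)
    N = len(Y)
    k = np.arange(1, N // 2)          # skip the DC subcarrier; pair k with N - k
    return 0.5 * (Y[k] + np.conj(Y[N - k]))
```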

  14. Superposition of two optical vortices with opposite integer or non-integer orbital angular momentum

    Directory of Open Access Journals (Sweden)

    Carlos Fernando Díaz Meza

    2016-04-01

    This work develops a brief proposal to achieve the superposition of two opposite vortex beams, both with integer or non-integer mean value of the orbital angular momentum. The first part concerns the generation of this kind of spatial light distribution through a modified Brown and Lohmann’s hologram. The inclusion of a simple mathematical expression into the pixelated grid’s transmittance function, based on Fourier-domain properties, shifts the diffraction orders counterclockwise and clockwise to the same point and allows the addition of different modes. The strategy is theoretically and experimentally validated for the case of two opposite-rotation helical wavefronts.

  15. Enhancing quantum entanglement for continuous variables by a coherent superposition of photon subtraction and addition

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Su-Yong; Kim, Ho-Joon [Department of Physics, Texas A and M University at Qatar, P.O. Box 23874, Doha (Qatar); Ji, Se-Wan [School of Computational Sciences, Korea Institute for Advanced Study, Seoul 130-012 (Korea, Republic of); Nha, Hyunchul [Department of Physics, Texas A and M University at Qatar, P.O. Box 23874, Doha (Qatar); Institute fuer Quantenphysik, Universitaet Ulm, D-89069 Ulm (Germany)

    2011-07-15

    We investigate how the entanglement properties of a two-mode state can be improved by performing a coherent superposition operation ta + ra† of photon subtraction and addition, proposed by Lee and Nha [Phys. Rev. A 82, 053812 (2010)], on each mode. We show that the degree of entanglement, the Einstein-Podolsky-Rosen-type correlation, and the performance of quantum teleportation can all be enhanced for the output state when the coherent operation is applied to a two-mode squeezed state. The effects of the coherent operation are more prominent than those of the mere photon subtraction a and the addition a†, particularly in the small-squeezing regime, whereas the optimal operation becomes the photon subtraction (the case of r = 0) in the large-squeezing regime.

  16. Contradiction between assumption on superposition of flux-qubit states and the law of angular momentum conservation

    CERN Document Server

    Nikulov, A V

    2009-01-01

    A superconducting loop interrupted by one or three Josephson junctions is considered in many publications as a possible quantum bit, the flux qubit, which could be used for the creation of a quantum computer. But the assumption of a superposition of two macroscopically distinct quantum states of a superconducting loop contradicts the fundamental law of angular momentum conservation and the universally recognized quantum formalism. Numerous publications devoted to the flux qubit testify to an inadequate interpretation by many authors of the paradoxical nature of the superposition principle and of the subject of quantum description.

  17. Hidden Vacuum Rabi Oscillations: Dynamical Quantum Superpositions of On/Off Interaction between a Single Quantum Dot and a Microcavity

    Science.gov (United States)

    Ridolfo, A.; Stassi, R.; Di Stefano, O.

    2017-06-01

    We show that it is possible to realize quantum superpositions of switched-on and -off strong light-matter interaction in a single quantum dot-semiconductor microcavity system. Such superpositions enable the observation of counterintuitive quantum conditional dynamics effects. Situations are possible where cavity photons as well as the emitter luminescence display exponential decay but their joint detection probability exhibits vacuum Rabi oscillations. Remarkably, these quantum correlations are also present in the nonequilibrium steady-state spectra of such coherently driven dissipative quantum systems.

  18. Quantum control of electronic fluxes during adiabatic attosecond charge migration in degenerate superposition states of benzene

    Science.gov (United States)

    Jia, Dongming; Manz, Jörn; Paulus, Beate; Pohl, Vincent; Tremblay, Jean Christophe; Yang, Yonggang

    2017-01-01

    We design four linearly x- and y-polarized as well as circularly right (+) and left (-) polarized, resonant π/2 laser pulses that prepare the model benzene molecule in four different degenerate superposition states. These consist of equal (0.5) populations of the electronic ground state S0 (1A1g) plus one of four degenerate excited states, all of them accessible by dipole-allowed transitions. Specifically, for the molecule aligned in the xy-plane, these excited states include different complex-valued linear combinations of the 1E1u,x and 1E1u,y degenerate states. As a consequence, the laser pulses induce four different types of periodic adiabatic attosecond (as) charge migrations (AACM) in benzene, all with the same period, 504 as, but with four different types of angular fluxes. One of the characteristic differences between these fluxes is the two angles of zero flux, which appear as the instantaneous angular positions of the "source" and "sink" of two equivalent, or nearly equivalent, branches of the fluxes which flow in pincer-type patterns from one molecular site (the "source") to the opposite one (the "sink"). These angles of zero flux are either fixed at the positions of two opposite carbon nuclei in the yz-symmetry plane, or at the centers of two opposite carbon-carbon bonds in the xz-symmetry plane, or they rotate in angular forward (+) or backward (-) directions, respectively. In summary, our quantum model simulations demonstrate quantum control of the electronic fluxes during AACM in degenerate superposition states, in the attosecond time domain, with the laser polarization as the key knob for control.

  19. Stochastic versus deterministic kernel-based superposition approaches for dose calculation of intensity-modulated arcs

    Science.gov (United States)

    Tang, Grace; Earl, Matthew A.; Luan, Shuang; Wang, Chao; Cao, Daliang; Yu, Cedric X.; Naqvi, Shahid A.

    2008-09-01

    Dose calculations for radiation arc therapy are traditionally performed by approximating continuous delivery arcs with multiple static beams. For 3D conformal arc treatments, the shape and weight variation per degree is usually small enough to allow arcs to be approximated by static beams separated by 5°-10°. But with intensity-modulated arc therapy (IMAT), the variation in shape and dose per degree can be large enough to require a finer angular spacing. With the increase in the number of beams, a deterministic dose calculation method, such as collapsed-cone convolution/superposition, will require proportionally longer computational times, which may not be practical clinically. We propose to use a homegrown Monte Carlo kernel-superposition technique (MCKS) to compute doses for rotational delivery. The IMAT plans were generated with 36 static beams, which were subsequently interpolated into finer angular intervals for dose calculation to mimic the continuous arc delivery. Since MCKS uses random sampling of photons, the dose computation time only increased insignificantly for the interpolated-static-beam plans that may involve up to 720 beams. Ten past IMRT cases were selected for this study. Each case took approximately 15-30 min to compute on a single CPU running Mac OS X using the MCKS method. The need for a finer beam spacing is dictated by how fast the beam weights and aperture shapes change between the adjacent static planning beam angles. MCKS, however, obviates the concern by allowing hundreds of beams to be calculated in practically the same time as for a few beams. For more than 43 beams, MCKS usually takes less CPU time than the collapsed-cone algorithm used by the Pinnacle3 planning system.

  20. Importance of the ligand basis set in ab initio thermochemical calculations of transition metal species

    Science.gov (United States)

    Plascencia, Cesar; Wang, Jiaqi; Wilson, Angela K.

    2017-10-01

    The impact of basis set choice has been considered for a series of transition metal (TM) species. The need for higher-level correlation consistent basis sets on both the metal and the ligand has been investigated, and permutations in the pairing of the basis set used for the TM with the basis set used for the ligands can lead to effective routes to complete basis set (CBS) limit extrapolations of thermochemical energetics, with little change in thermochemical predictions as compared to those resulting from the use of traditional basis set pairings, while enabling computational cost savings. Basis set superposition errors (BSSE) that can arise have also been considered.

  1. Reducing medication errors.

    Science.gov (United States)

    Nute, Christine

    2014-11-25

    Most nurses are involved in medicines management, which is integral to promoting patient safety. Medicines management is prone to errors, which depending on the error can cause patient injury, increased hospital stay and significant legal expenses. This article describes a new approach to help minimise drug errors within healthcare settings where medications are prescribed, dispensed or administered. The acronym DRAINS, which considers all aspects of medicines management before administration, was devised to reduce medication errors on a cardiothoracic intensive care unit.

  2. Demand Forecasting Errors

    OpenAIRE

    Mackie, Peter; Nellthorp, John; Laird, James

    2005-01-01

    Demand forecasts form a key input to the economic appraisal. As such any errors present within the demand forecasts will undermine the reliability of the economic appraisal. The minimization of demand forecasting errors is therefore important in the delivery of a robust appraisal. This issue is addressed in this note by introducing the key issues, and error types present within demand fore...

  3. When errors are rewarding

    NARCIS (Netherlands)

    Bruijn, E.R.A. de; Lange, F.P. de; Cramon, D.Y. von; Ullsperger, M.

    2009-01-01

    For social beings like humans, detecting one's own and others' errors is essential for efficient goal-directed behavior. Although one's own errors are always negative events, errors from other persons may be negative or positive depending on the social context. We used neuroimaging to disentangle br

  4. Quantum error correction against photon loss using multicomponent cat states

    Science.gov (United States)

    Bergmann, Marcel; van Loock, Peter

    2016-10-01

    We analyze a generalized quantum error-correction code against photon loss where a logical qubit is encoded into a subspace of a single oscillator mode that is spanned by distinct multicomponent cat states (coherent-state superpositions). We present a systematic code construction that includes the extension of an existing one-photon-loss code to higher numbers of losses. When subject to a photon loss (amplitude damping) channel, the encoded qubits are shown to exhibit a cyclic behavior where the code and error spaces each correspond to certain multiples of losses, half of which can be corrected. As another generalization we also discuss how to protect logical qudits against photon losses, and as an application we consider a one-way quantum communication scheme in which the encoded qubits are periodically recovered while the coherent-state amplitudes are restored as well at regular intervals.

  5. IMPACT OF ERROR FILTERS ON SHARES IN HALFTONE VISUAL CRYPTOGRAPHY

    Directory of Open Access Journals (Sweden)

    Sunil Agrawal

    2012-05-01

    Visual cryptography encodes a secret binary image (SI) into shares of random binary patterns. If the shares are xeroxed onto transparencies, the secret image can be visually decoded by superimposing a qualified subset of transparencies, but no secret information can be obtained from the superposition of a forbidden subset. The binary patterns of the shares, however, have no visual meaning and hinder the objectives of visual cryptography. Halftone visual cryptography encodes a secret binary image into n halftone shares (images carrying significant visual information). When secrecy is a more important factor than the quality of the recovered image, the shares must be of better visual quality. Different filters such as Floyd-Steinberg, Jarvis, Stucki, Burkes, Sierra, and Stevenson-Arce are used and their impact on the visual quality of the shares is examined. The simulation shows that the error filters used in error diffusion have a great impact on the visual quality of the shares.
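
    As one concrete example of the error-diffusion filters compared above, a minimal Floyd-Steinberg halftoning sketch is given below; the visual-cryptography share-encoding step itself is not shown, and the function name is illustrative.

```python
import numpy as np

def floyd_steinberg(gray):
    """Binarize a grayscale image in [0, 1] by Floyd-Steinberg error diffusion."""
    img = np.asarray(gray, dtype=float).copy()
    h, w = img.shape
    out = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            new = 1.0 if img[y, x] >= 0.5 else 0.0
            err = img[y, x] - new
            out[y, x] = new
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16          # right
            if y + 1 < h:
                if x > 0:
                    img[y + 1, x - 1] += err * 3 / 16  # below-left
                img[y + 1, x] += err * 5 / 16          # below
                if x + 1 < w:
                    img[y + 1, x + 1] += err * 1 / 16  # below-right
    return out
```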

  6. Aerodynamic Analysis of the Truss-Braced Wing Aircraft Using Vortex-Lattice Superposition Approach

    Science.gov (United States)

    Ting, Eric Bi-Wen; Reynolds, Kevin Wayne; Nguyen, Nhan T.; Totah, Joseph J.

    2014-01-01

    The SUGAR Truss-Braced Wing (TBW) aircraft concept is a Boeing-developed N+3 aircraft configuration funded by the NASA ARMD Fixed Wing Project. This future generation transport aircraft concept is designed to be aerodynamically efficient by employing a high aspect ratio wing design. The aspect ratio of the TBW is on the order of 14, which is significantly greater than those of current generation transport aircraft. This paper presents a recent aerodynamic analysis of the TBW aircraft using a conceptual vortex-lattice aerodynamic tool VORLAX and an aerodynamic superposition approach. Based on the underlying linear potential flow theory, the principle of aerodynamic superposition is leveraged to deal with the complex aerodynamic configuration of the TBW. By decomposing the full configuration of the TBW into individual aerodynamic lifting components, the total aerodynamic characteristics of the full configuration can be estimated from the contributions of the individual components. The aerodynamic superposition approach shows excellent agreement with CFD results computed by FUN3D, USM3D, and STAR-CCM+. Demand for green aviation is expected to increase with the need for reduced environmental impact. Most large transports today operate within the best cruise L/D range of 18-20 using the conventional tube-and-wing design. This configuration has led to marginal improvements in aerodynamic efficiency over this past century, as aerodynamic improvements tend to be incremental. A big opportunity has been shown in recent years to significantly reduce structural weight or trim drag, hence improved energy efficiency, with the use of lightweight materials such as composites. The Boeing 787 transport is an example of a modern airframe design that employs lightweight structures. High aspect ratio wing design can provide another opportunity for further improvements in energy efficiency. Historically, the study of high aspect ratio wings has been intimately tied to the study of

  7. N-dimensional measurement-device-independent quantum key distribution with N + 1 un-characterized sources: zero quantum-bit-error-rate case.

    Science.gov (United States)

    Hwang, Won-Young; Su, Hong-Yi; Bae, Joonwoo

    2016-01-01

    We study an N-dimensional measurement-device-independent quantum-key-distribution protocol in which one checking state is used. Assuming only that the checking state is a superposition of the other N sources, we show that the protocol is secure in the zero quantum-bit-error-rate case, suggesting the feasibility of the protocol. The method may be applied in other quantum information processing tasks.

  8. Image Compression Algorithm Based on Weighted Superposition of Interferograms

    Institute of Scientific and Technical Information of China (English)

    苑颖

    2015-01-01

    In the process of image compression, information loss occurs easily. In traditional image compression algorithms, images with low correlation can also take part in the equal-weight computation, which introduces bias and distortion into the image and prevents effective compression. An image compression algorithm based on weighted superposition of interferograms is therefore proposed. The average deformation-phase change rate of the highly correlated points is derived; according to the law of error propagation, the atmospheric-delay disturbance affecting the highly correlated points after all interferograms are superposed is obtained; the correlation coefficient corresponding to each image is computed, and highly correlated target points are collected according to the model; after the interferograms are superposed, the disturbance of the atmospheric delay at the highly correlated points on the linear deformation rate is given. The Exp-Golomb order of a sample is obtained through shift operations, a non-negative mapping of the data to be encoded is performed, and the Exp-Golomb coding order of the previous sample is used to estimate the order of the current sample value. The compression performance is measured by the compression ratio and the peak signal-to-noise ratio between the original interferogram data and the compressed interferogram data, and the original and restored spectra before and after compression are assessed by the spectral relative mean-square error (RQE). Simulation results show that the proposed method achieves high accuracy.
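
    A minimal sketch of the entropy-coding step described in the abstract, i.e., the non-negative mapping of residuals and an order-k Exp-Golomb code word built with shift operations; the adaptive choice of k from the previous sample's code order is omitted, and the function names are illustrative.

```python
def signed_to_nonneg(v):
    """Zigzag mapping of a signed residual to a non-negative integer."""
    return 2 * v if v >= 0 else -2 * v - 1

def exp_golomb_encode(n, k=0):
    """Order-k exponential-Golomb code word for a non-negative integer n."""
    value = n + (1 << k)                  # shift operation sets the code order
    num_bits = value.bit_length()
    return "0" * (num_bits - k - 1) + format(value, "b")

# exp_golomb_encode(signed_to_nonneg(-3), k=0) == "00110"
```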

  9. Systematic error revisited

    Energy Technology Data Exchange (ETDEWEB)

    Glosup, J.G.; Axelrod, M.C.

    1996-08-05

    The American National Standards Institute (ANSI) defines systematic error as "an error which remains constant over replicative measurements." It would seem from the ANSI definition that a systematic error is not really an error at all; it is merely a failure to calibrate the measurement system properly, because if the error is constant, why not simply correct for it? Yet systematic errors undoubtedly exist, and they differ in some fundamental way from the kind of errors we call random. Early papers by Eisenhart and by Youden discussed systematic versus random error with regard to measurements in the physical sciences, but not in a fundamental way, and the distinction remains clouded by controversy. The lack of general agreement on definitions has led to a plethora of different and often confusing methods for quantifying the total uncertainty of a measurement that incorporates both its systematic and random errors. Some assert that systematic error should be treated by non-statistical methods. We disagree with this approach, and we provide basic definitions based on entropy concepts, and a statistical methodology for combining errors and making statements of total measurement uncertainty. We illustrate our methods with radiometric assay data.

  10. Scatter correction for cone-beam computed tomography using self-adaptive scatter kernel superposition

    Institute of Scientific and Technical Information of China (English)

    XIE Shi-Peng; LUO Li-Min

    2012-01-01

    The authors propose a combined scatter reduction and correction method to improve image quality in cone beam computed tomography (CBCT). The scatter kernel superposition (SKS) method has been used occasionally in previous studies. However, this method differs in that a scatter detecting blocker (SDB) was used between the X-ray source and the tested object to model the self-adaptive scatter kernel. This study first evaluates the scatter kernel parameters using the SDB, and then isolates the scatter distribution based on the SKS. Image quality can be improved by removing the scatter distribution. The results show that the method can effectively reduce the scatter artifacts and increase the image quality. Our approach increases the image contrast and reduces the magnitude of cupping. The accuracy of the SKS technique can be significantly improved in our method by using a self-adaptive scatter kernel. This method is computationally efficient, easy to implement, and provides scatter correction using a single scan acquisition.
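
    A generic sketch of the scatter-kernel-superposition idea, in which the scatter estimate is the convolution of the current primary estimate with a scatter kernel and is subtracted from the measured projection; the self-adaptive kernel estimation with the scatter detecting blocker is not shown, and the iterative scheme and function names are assumptions.

```python
import numpy as np
from scipy.signal import fftconvolve

def sks_correct(projection, kernel, n_iter=3):
    """Iteratively remove a kernel-superposition scatter estimate from one projection."""
    primary = projection.copy()
    for _ in range(n_iter):
        scatter = fftconvolve(primary, kernel, mode="same")
        primary = np.clip(projection - scatter, 0.0, None)
    return primary
```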

  11. Plane-wave superpositions defined by orthonormal scalar functions on two- and three-dimensional manifolds

    Science.gov (United States)

    Borzdov

    2000-04-01

    Vector plane-wave superpositions defined by a given set of orthonormal scalar functions on a two- or three-dimensional manifold (the beam manifold) are treated. We present a technique for composing orthonormal beams and some other specific types of fields such as three-dimensional standing waves, moving and evolving whirls. It can be used for any linear fields, in particular, electromagnetic fields in complex media and elastic fields in crystals. For electromagnetic waves in an isotropic medium or free space, unique families of exact solutions of Maxwell's equations are obtained. The solutions are illustrated by calculating fields, energy densities, and energy fluxes of beams defined by the spherical harmonics. It is shown that the obtained results can be used for a transition from the plane-wave approximation to more accurate models of real incident beams in free-space techniques for characterizing complex media. A mathematical formalism convenient for the treatment of various beams defined by the spherical harmonics is presented.

  12. SUPERPOSE-An excel visual basic program for fracture modeling based on the stress superposition method

    Science.gov (United States)

    Ismail Ozkaya, Sait

    2014-03-01

    An Excel Visual Basic program, SUPERPOSE, is presented to predict the distribution, relative size and strike of tensile and shear fractures on anticlinal structures. The program is based on the concept of stress superposition; addition of curvature-related local tensile stress and regional far-field stress. The method accurately predicts fractures on many Middle East Oil Fields that were formed under a strike slip regime as duplexes, flower structures or inverted structures. The program operates on the Excel platform. The program reads the parameters and structural grid data from an Excel template and writes the results to the same template. The program has two routines to import structural grid data in the Eclipse and Zmap formats. The platform of SUPERPOSE is a single layer structural grid of a given cell size (e.g. 50×50 m). In the final output, a single tensile or two conjugate shear fractures are placed in each cell if fracturing criteria are satisfied; otherwise the cell is left blank. Strike of the representative fracture(s) is calculated and exact, but the length is an index of fracture porosity (fracture density×length×aperture) within that cell.

  13. A new optical image cryptosystem based on two-beam coherent superposition and unequal modulus decomposition

    Science.gov (United States)

    Chen, Linfei; Gao, Xiong; Chen, Xudong; He, Bingyu; Liu, Jingyu; Li, Dan

    2016-04-01

    In this paper, a new optical image cryptosystem is proposed based on two-beam coherent superposition and unequal modulus decomposition. Different from equal modulus decomposition or unit vector decomposition, the proposed method applies common vector decomposition to accomplish the encryption process. In the proposed method, the original image is first Fourier transformed and the complex function in the spectrum domain is obtained. The complex distribution is decomposed into two vector components with unequal amplitude and phase by the common vector decomposition method. Subsequently, the two components are modulated by two random phases and transformed from the spectrum domain to the spatial domain; the amplitude parts are extracted as encryption results and the phase parts are extracted as private keys. The advantages of the proposed cryptosystem are that the four different phase and amplitude distributions created by the common vector decomposition method strengthen the security of the cryptosystem, and that it fully solves the silhouette problem. Simulation results are presented to show the feasibility and the security of the proposed cryptosystem.

  14. Multi-dimensional color image storage and retrieval for a normal arbitrary quantum superposition state

    Science.gov (United States)

    Li, Hai-Sheng; Zhu, Qingxin; Zhou, Ri-Gui; Song, Lan; Yang, Xing-jiang

    2014-04-01

    Multi-dimensional color image processing has two difficulties: One is that a large number of bits are needed to store multi-dimensional color images, such as, a three-dimensional color image of needs bits. The other one is that the efficiency or accuracy of image segmentation is not high enough for some images to be used in content-based image search. In order to solve the above problems, this paper proposes a new representation for multi-dimensional color image, called a -qubit normal arbitrary quantum superposition state (NAQSS), where qubits represent colors and coordinates of pixels (e.g., represent a three-dimensional color image of only using 30 qubits), and the remaining 1 qubit represents an image segmentation information to improve the accuracy of image segmentation. And then we design a general quantum circuit to create the NAQSS state in order to store a multi-dimensional color image in a quantum system and propose a quantum circuit simplification algorithm to reduce the number of the quantum gates of the general quantum circuit. Finally, different strategies to retrieve a whole image or the target sub-image of an image from a quantum system are studied, including Monte Carlo sampling and improved Grover's algorithm which can search out a coordinate of a target sub-image only running in where and are the numbers of pixels of an image and a target sub-image, respectively.

  15. Applicability condition of time-temperature superposition principle (TTSP) to a multi-phase system

    Science.gov (United States)

    Nakano, Takato

    2013-08-01

    The applicability condition of the time-temperature superposition principle (TTSP) to a multi-phase system is analytically discussed assuming a mixture law. It was concluded that the TTSP does not hold for a multi-phase system in general but does hold for a multi-component system in which some components have the same temperature dependence and the others have no temperature dependence. On the basis of these results, the application of the TTSP to plant materials such as wood and bamboo was examined using a mixture law and a stretched-exponential function having a characteristic relaxation time τ_0 and a stretching parameter β. Wood can be treated as a multi-phase system consisting of a framework (f) and a matrix (m). In this case, it was expected that the TTSP holds for the matrix in the shorter time region t ≪ τ_0f under T < T_gm, where t and T_g are the measurement time and the glass transition temperature, respectively.

  16. Superposition of elliptic functions as solutions for a large number of nonlinear equations

    Energy Technology Data Exchange (ETDEWEB)

    Khare, Avinash [Raja Ramanna Fellow, Indian Institute of Science Education and Research (IISER), Pune 411021 (India); Saxena, Avadh [Theoretical Division and Center for Nonlinear Studies, Los Alamos National Laboratory, Los Alamos, New Mexico 87545 (United States)

    2014-03-15

    For a large number of nonlinear equations, both discrete and continuum, we demonstrate a kind of linear superposition. We show that whenever a nonlinear equation admits solutions in terms of both Jacobi elliptic functions cn(x, m) and dn(x, m) with modulus m, then it also admits solutions in terms of their sum as well as difference. We have checked this in the case of several nonlinear equations such as the nonlinear Schrödinger equation, MKdV, a mixed KdV-MKdV system, a mixed quadratic-cubic nonlinear Schrödinger equation, the Ablowitz-Ladik equation, the saturable nonlinear Schrödinger equation, λϕ⁴, the discrete MKdV as well as for several coupled field equations. Further, for a large number of nonlinear equations, we show that whenever a nonlinear equation admits a periodic solution in terms of dn²(x, m), it also admits solutions in terms of dn²(x, m) ± √m cn(x, m) dn(x, m), even though cn(x, m)dn(x, m) is not a solution of these nonlinear equations. Finally, we also obtain superposed solutions of various forms for several coupled nonlinear equations.
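
    For readers who want to reproduce the superposed waveforms numerically, the snippet below simply evaluates dn²(x, m) ± √m cn(x, m) dn(x, m) with SciPy's Jacobi elliptic functions; it does not check any particular nonlinear equation from the paper, and the modulus value is an arbitrary choice.

```python
import numpy as np
from scipy.special import ellipj

m = 0.5                                  # elliptic parameter (hypothetical choice)
x = np.linspace(-5.0, 5.0, 1001)
sn, cn, dn, _ = ellipj(x, m)             # Jacobi elliptic functions sn, cn, dn

u_plus = dn**2 + np.sqrt(m) * cn * dn    # the superposed periodic waveforms
u_minus = dn**2 - np.sqrt(m) * cn * dn   # discussed in the record
```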

  17. Probing the conductance superposition law in single-molecule circuits with parallel paths.

    Science.gov (United States)

    Vazquez, H; Skouta, R; Schneebeli, S; Kamenetska, M; Breslow, R; Venkataraman, L; Hybertsen, M S

    2012-10-01

    According to Kirchhoff's circuit laws, the net conductance of two parallel components in an electronic circuit is the sum of the individual conductances. However, when the circuit dimensions are comparable to the electronic phase coherence length, quantum interference effects play a critical role, as exemplified by the Aharonov-Bohm effect in metal rings. At the molecular scale, interference effects dramatically reduce the electron transfer rate through a meta-connected benzene ring when compared with a para-connected benzene ring. For longer conjugated and cross-conjugated molecules, destructive interference effects have been observed in the tunnelling conductance through molecular junctions. Here, we investigate the conductance superposition law for parallel components in single-molecule circuits, particularly the role of interference. We synthesize a series of molecular systems that contain either one backbone or two backbones in parallel, bonded together cofacially by a common linker on each end. Single-molecule conductance measurements and transport calculations based on density functional theory show that the conductance of a double-backbone molecular junction can be more than twice that of a single-backbone junction, providing clear evidence for constructive interference.
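
    A toy two-path calculation (hypothetical numbers, not the DFT transport of the paper) makes the "more than twice" statement concrete: when the transmission amplitudes of the two backbones add coherently, the conductance can reach four times that of a single path, i.e. twice the Kirchhoff sum.

```python
import numpy as np

t_single = 0.10 + 0.05j                  # hypothetical transmission amplitude of one backbone
G0 = 1.0                                 # conductance quantum, arbitrary units here

G_single = G0 * abs(t_single) ** 2                 # single-backbone junction
G_kirchhoff = 2 * G_single                         # classical sum of two identical paths
G_coherent = G0 * abs(t_single + t_single) ** 2    # amplitudes add before squaring

print(G_single, G_kirchhoff, G_coherent)           # constructive interference: G_coherent = 2 * G_kirchhoff
```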

  18. Domain Superposition Technique for Free Vibration Analysis of Textile Composite Structures

    Directory of Open Access Journals (Sweden)

    W. G. Jiang

    2015-03-01

    Full Text Available Textile composites consist of interlaced tows, which are impregnated with a matrix material and then cured. The interlacing of the tows offers the potential for increased through-thickness strength compared to conventional laminated composites. However, one disadvantage of textile composites is the difficulty in predicting their performance due to the complex geometry of their internal architectures. Finite element analysis (FEA has become an effective means to predict the response of complex textile composite structures. When an attempt is made to perform a conventional FEA, one of the tough issues faced is how to deal with the topologically complex internal geometries. To overcome this difficult issue, a domain superposition technique (DST has been proposed to implement free vibration analysis of woven composite structures. The significant advantage of the DST over traditional FEA is that it does not need to directly deal with the likely degenerated resin-rich region, thus the DST model is much easier to establish. Numerical results show that DST predictions correlate excellently with traditional FEAs.

  19. Application of time-temperature superposition method in thermal aging life prediction of shipboard cables

    Institute of Scientific and Technical Information of China (English)

    DENG Wen-dong; CHEN Yi-yuan

    2014-01-01

    The life of shipboard cables decreases due to complex aging processes, so from a safety perspective, predicting the remaining life of a cable is essential to maintain reliable operation. In this paper, the residual life of new styrene-butadiene cable is first calculated on the basis of the Arrhenius equation; the result indicates that the degradation rate, which changes with time, is proportional to the thermal aging temperature. A second-order dynamic model is then adopted for residual life prediction and combined with the time-temperature superposition method (TTSP), and a new residual life model is proposed. According to the accelerated thermal aging experiment data and the Arrhenius equation, the TTSP method proves to be an efficient way to predict life, and the life at normal temperature can be estimated with this model. In order to monitor the state of styrene-butadiene cable more accurately, an improved residual life model based on the equivalent environmental temperature of the cable is proposed, and the life of the cable under real operating conditions is analyzed. The results indicate that this model is credible and reliable, and that it provides an important theoretical basis for estimating the residual life of cables.
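
    A minimal sketch of the Arrhenius step of such a life prediction is shown below (all numbers are hypothetical placeholders, and the paper's second-order dynamic model and TTSP refinement are not reproduced): the accelerated-aging life is extrapolated to the service temperature through the Arrhenius acceleration factor.

```python
import numpy as np

k_B = 8.617e-5          # Boltzmann constant in eV/K

def acceleration_factor(Ea, T_service, T_aging):
    """Arrhenius acceleration factor between the aging and service temperatures (kelvin)."""
    return np.exp(Ea / k_B * (1.0 / T_service - 1.0 / T_aging))

Ea = 1.0                                # hypothetical activation energy, eV
T_aging = 120.0 + 273.15                # accelerated aging temperature
T_service = 45.0 + 273.15               # assumed service temperature
life_at_aging_temp = 2000.0             # hours to the end-of-life criterion in the oven

predicted_service_life = life_at_aging_temp * acceleration_factor(Ea, T_service, T_aging)
print(f"{predicted_service_life:.3g} hours")
```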

  20. Quantum superposition of a single microwave photon in two different 'colour' states

    Science.gov (United States)

    Zakka-Bajjani, Eva; Nguyen, François; Lee, Minhyea; Vale, Leila R.; Simmonds, Raymond W.; Aumentado, José

    2011-08-01

    Fully controlled coherent coupling of arbitrary harmonic oscillators is an important tool for processing quantum information. Coupling between quantum harmonic oscillators has previously been demonstrated in several physical systems using a two-level system as a mediating element. Direct interaction at the quantum level has only recently been realized by means of resonant coupling between trapped ions. Here we implement a tunable direct coupling between the microwave harmonics of a superconducting resonator by means of parametric frequency conversion. We accomplish this by coupling the mode currents of two harmonics through a superconducting quantum interference device (SQUID) and modulating its flux at the difference (~7GHz) of the harmonic frequencies. We deterministically prepare a single-photon Fock state and coherently manipulate it between multiple modes, effectively controlling it in a superposition of two different 'colours'. This parametric interaction can be described as a beamsplitter-like operation that couples different frequency modes. As such, it could be used to implement linear optical quantum computing protocols on-chip.
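
    In the single-excitation subspace, the beamsplitter-like operation mentioned above can be pictured as a 2 x 2 rotation between the two frequency modes; the toy sketch below (not the circuit-QED Hamiltonian of the experiment) shows a photon initially in one 'colour' ending up in an equal superposition of both.

```python
import numpy as np

theta = np.pi / 4                        # "50/50" mixing angle set by the parametric drive
U = np.array([[np.cos(theta), 1j * np.sin(theta)],
              [1j * np.sin(theta), np.cos(theta)]])    # beamsplitter-like mode coupling

photon_in_lower_mode = np.array([1.0, 0.0])    # single photon in the lower harmonic
state = U @ photon_in_lower_mode               # superposition of the two 'colour' modes
print(np.abs(state) ** 2)                      # [0.5, 0.5]
```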

  1. Examining the justification of superposition model of FePc ; A DMC study

    CERN Document Server

    Ichibha, Tom; Hongo, Kenta; Maezono, Ryo

    2016-01-01

    We have applied CASSCF-DMC to evaluate relative stabilities of the possible electronic configurations of an isolated FePc under $D_{4h}$ symmetry. It predicts $A_{2g}$ ground state, supporting preceding DFT studies,[J. Chem. Phys. 114, 9780 (2001), Appl. Phys. 95, 165 (2009), Phys. Rev. B 85, 235129 (2012)] with confidence overcoming the ambiguity about exchange-correlation (XC) functionals. By comparing DMC with several XC, we clarified the importance of the short range exchange to describe the relative stability. We examined why the predicted $A_{2g}$ is excluded from possible ground states in the recent ligand field based model.[J. Chem. Phys. 138, 244308 (2013)] Simplified assumptions made in the superposition model [Rep. Prog. Phys. 52, 699 (1989)] are identified to give unreasonably less energy gain for $A_{2g}$ when compared with the reality. The state is found to have possible reasons for the stabilization, reducing the occupations from an unstable anti-bonding orbital, preventing double occupancies i...

  2. An approximate method to acoustic radiation problems: element radiation superposition method

    Institute of Scientific and Technical Information of China (English)

    WANG Bin; TANG Weilin; FAN Jun

    2008-01-01

    An approximate method is put forward to predict the acoustic pressure from the surface velocity. It is named the Element Radiation Superposition Method (ERSM). The study finds that each element in the Acoustic Transfer Vector (ATV) equals the acoustic pressure radiated when the corresponding surface element vibrates with unit velocity while all other surface elements remain still, that is, the acoustic pressure radiated by the corresponding baffled piston vibrating with unit velocity. The method therefore uses the acoustic pressure radiated by a baffled piston to establish the transfer relationship between the surface velocity and the acoustic pressure. The total acoustic pressure is obtained by summing the products of the surface velocities and the transfer quantities. A regular baffle is adopted to approximate the actual baffle when calculating the acoustic pressure radiated by the baffled piston. This approximate method has a large advantage in computation speed and memory requirements over the Boundary Element Method. Numerical simulations show that the approximate method is reasonable and feasible.
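
    A compact numerical sketch of the superposition step is given below; the baffled-piston transfer term uses the far-field Rayleigh form for a small element with unit normal velocity, and all geometry and material values are hypothetical stand-ins for the paper's ATV computation.

```python
import numpy as np

rho, c = 1.21, 343.0                     # air density (kg/m^3) and sound speed (m/s)
f = 1000.0                               # frequency, Hz
omega, k = 2 * np.pi * f, 2 * np.pi * f / c

def piston_transfer(r, dS=1e-4):
    """Far-field pressure at distance r from a small baffled element with unit velocity."""
    return 1j * omega * rho * dS * np.exp(-1j * k * r) / (2 * np.pi * r)

rng = np.random.default_rng(0)
n_elem = 500
r = 1.0 + rng.random(n_elem)             # element-to-field-point distances (hypothetical)
atv = piston_transfer(r)                 # acoustic transfer vector
v = rng.normal(size=n_elem) + 1j * rng.normal(size=n_elem)   # surface velocity distribution

pressure = np.sum(atv * v)               # ERSM: superpose the element contributions
```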

  3. Probabilistic quantum error correction

    CERN Document Server

    Fern, J; Fern, Jesse; Terilla, John

    2002-01-01

    There are well known necessary and sufficient conditions for a quantum code to correct a set of errors. We study weaker conditions under which a quantum code may correct errors with probabilities that may be less than one. We work with stabilizer codes and as an application study how the nine qubit code, the seven qubit code, and the five qubit code perform when there are errors on more than one qubit. As a second application, we discuss the concept of syndrome quality and use it to suggest a way that quantum error correction can be practically improved.

  4. New Class of Quantum Error-Correcting Codes for a Bosonic Mode

    Science.gov (United States)

    Michael, Marios H.; Silveri, Matti; Brierley, R. T.; Albert, Victor V.; Salmilehto, Juha; Jiang, Liang; Girvin, S. M.

    2016-07-01

    We construct a new class of quantum error-correcting codes for a bosonic mode, which are advantageous for applications in quantum memories, communication, and scalable computation. These "binomial quantum codes" are formed from a finite superposition of Fock states weighted with binomial coefficients. The binomial codes can exactly correct errors that are polynomial up to a specific degree in bosonic creation and annihilation operators, including amplitude damping and displacement noise as well as boson addition and dephasing errors. For realistic continuous-time dissipative evolution, the codes can perform approximate quantum error correction to any given order in the time step between error detection measurements. We present an explicit approximate quantum error recovery operation based on projective measurements and unitary operations. The binomial codes are tailored for detecting boson loss and gain errors by means of measurements of the generalized number parity. We discuss optimization of the binomial codes and demonstrate that by relaxing the parity structure, codes with even lower unrecoverable error rates can be achieved. The binomial codes are related to existing two-mode bosonic codes, but offer the advantage of requiring only a single bosonic mode to correct amplitude damping as well as the ability to correct other errors. Our codes are similar in spirit to "cat codes" based on superpositions of the coherent states but offer several advantages such as smaller mean boson number, exact rather than approximate orthonormality of the code words, and an explicit unitary operation for repumping energy into the bosonic mode. The binomial quantum codes are realizable with current superconducting circuit technology, and they should prove useful in other quantum technologies, including bosonic quantum memories, photonic quantum communication, and optical-to-microwave up- and down-conversion.
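
    As a minimal numerical illustration, the snippet below builds the smallest binomial code words usually quoted for this family, (|0⟩+|4⟩)/√2 and |2⟩, and checks two ingredients of single-photon-loss protection: equal mean photon number in both code words and orthogonality of the loss-error states. This is an assumption-laden sketch, not the general construction of the paper.

```python
import numpy as np

dim = 10                                          # Fock-space truncation
def fock(n):
    v = np.zeros(dim); v[n] = 1.0; return v

a = np.diag(np.sqrt(np.arange(1, dim)), k=1)      # annihilation operator in the Fock basis
n_op = a.conj().T @ a

W_up = (fock(0) + fock(4)) / np.sqrt(2)           # binomial-weighted superposition of Fock states
W_dn = fock(2)

print(W_up @ n_op @ W_up, W_dn @ n_op @ W_dn)     # equal mean photon number (2.0, 2.0)
print(abs((a @ W_up) @ (a @ W_dn)))               # photon-loss errors land in orthogonal states (0.0)
```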

  5. Correction for quadrature errors

    DEFF Research Database (Denmark)

    Netterstrøm, A.; Christensen, Erik Lintz

    1994-01-01

    In high bandwidth radar systems it is necessary to use quadrature devices to convert the signal to/from baseband. Practical problems make it difficult to implement a perfect quadrature system. Channel imbalance and quadrature phase errors in the transmitter and the receiver result in error signal...

  6. ERRORS AND CORRECTION

    Institute of Scientific and Technical Information of China (English)

    1998-01-01

    To err is human . Since the 1960s, most second language teachers or language theorists have regarded errors as natural and inevitable in the language learning process . Instead of regarding them as terrible and disappointing, teachers have come to realize their value. This paper will consider these values, analyze some errors and propose some effective correction techniques.

  7. ERROR AND ERROR CORRECTION AT ELEMENTARY LEVEL

    Institute of Scientific and Technical Information of China (English)

    1994-01-01

    Introduction Errors are unavoidable in language learning, however, to a great extent, teachers in most middle schools in China regard errors as undesirable, a sign of failure in language learning. Most middle schools are still using the grammar-translation method which aims at encouraging students to read scientific works and enjoy literary works. The other goals of this method are to gain a greater understanding of the first language and to improve the students’ ability to cope with difficult subjects and materials, i.e. to develop the students’ minds. The practical purpose of using this method is to help learners pass the annual entrance examination. "To achieve these goals, the students must first learn grammar and vocabulary,... Grammar is taught deductively by means of long and elaborate explanations... students learn the rules of the language rather than its use." (Tang Lixing, 1983:11-12)

  8. Errors on errors - Estimating cosmological parameter covariance

    CERN Document Server

    Joachimi, Benjamin

    2014-01-01

    Current and forthcoming cosmological data analyses share the challenge of huge datasets alongside increasingly tight requirements on the precision and accuracy of extracted cosmological parameters. The community is becoming increasingly aware that these requirements not only apply to the central values of parameters but, equally important, also to the error bars. Due to non-linear effects in the astrophysics, the instrument, and the analysis pipeline, data covariance matrices are usually not well known a priori and need to be estimated from the data itself, or from suites of large simulations. In either case, the finite number of realisations available to determine data covariances introduces significant biases and additional variance in the errors on cosmological parameters in a standard likelihood analysis. Here, we review recent work on quantifying these biases and additional variances and discuss approaches to remedy these effects.
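
    One widely used remedy for the finite-realisation bias in the inverted covariance, not necessarily the estimator advocated in this review, is the Hartlap et al. (2007) rescaling; a minimal sketch with stand-in simulated data vectors:

```python
import numpy as np

rng = np.random.default_rng(1)
n_real, n_data = 200, 20                      # hypothetical numbers of realisations and data points
sims = rng.normal(size=(n_real, n_data))      # stand-in for simulated data vectors

cov_hat = np.cov(sims, rowvar=False)          # sample covariance from a finite suite
prec_naive = np.linalg.inv(cov_hat)           # biased estimate of the precision matrix

alpha = (n_real - n_data - 2) / (n_real - 1)  # Hartlap de-biasing factor
prec_debiased = alpha * prec_naive
```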

  9. Proofreading for word errors.

    Science.gov (United States)

    Pilotti, Maura; Chodorow, Martin; Agpawa, Ian; Krajniak, Marta; Mahamane, Salif

    2012-04-01

    Proofreading (i.e., reading text for the purpose of detecting and correcting typographical errors) is viewed as a component of the activity of revising text and thus is a necessary (albeit not sufficient) procedural step for enhancing the quality of a written product. The purpose of the present research was to test competing accounts of word-error detection which predict factors that may influence reading and proofreading differently. Word errors, which change a word into another word (e.g., from --> form), were selected for examination because they are unlikely to be detected by automatic spell-checking functions. Consequently, their detection still rests mostly in the hands of the human proofreader. Findings highlighted the weaknesses of existing accounts of proofreading and identified factors, such as length and frequency of the error in the English language relative to frequency of the correct word, which might play a key role in detection of word errors.

  10. Isolated attosecond pulses generation from coherent superposition state of helium ion in static electric fields and spatial nonhomogeneous fields

    Science.gov (United States)

    Liu, Hao; Zhang, Zhengzhong; Wu, Yangjiang; Jiang, Shicheng; Yu, Chao

    2016-09-01

    We present a systematic study of high-order harmonic generation (HHG) from the helium ion with the initial state prepared as a coherent superposition of the electronic ground state and an excited state. As a result, the conversion efficiency of the harmonic spectrum is significantly enhanced. When a static electric field is added to the fundamental field, the supercontinuum region of the harmonic spectrum is distinctly extended and an isolated 100 as pulse can be generated. Moreover, we use a spatially nonhomogeneous field to increase the cutoff energy of the high-order harmonic generation spectrum, which can be extended to about 700 eV, and an isolated 50 as pulse can be obtained directly by superposing the supercontinuum harmonics.

  11. R3D Align: global pairwise alignment of RNA 3D structures using local superpositions

    Science.gov (United States)

    Rahrig, Ryan R.; Leontis, Neocles B.; Zirbel, Craig L.

    2010-01-01

    Motivation: Comparing 3D structures of homologous RNA molecules yields information about sequence and structural variability. To compare large RNA 3D structures, accurate automatic comparison tools are needed. In this article, we introduce a new algorithm and web server to align large homologous RNA structures nucleotide by nucleotide using local superpositions that accommodate the flexibility of RNA molecules. Local alignments are merged to form a global alignment by employing a maximum clique algorithm on a specially defined graph that we call the ‘local alignment’ graph. Results: The algorithm is implemented in a program suite and web server called ‘R3D Align’. The R3D Align alignment of homologous 3D structures of 5S, 16S and 23S rRNA was compared to a high-quality hand alignment. A full comparison of the 16S alignment with the other state-of-the-art methods is also provided. The R3D Align program suite includes new diagnostic tools for the structural evaluation of RNA alignments. The R3D Align alignments were compared to those produced by other programs and were found to be the most accurate, in comparison with a high quality hand-crafted alignment and in conjunction with a series of other diagnostics presented. The number of aligned base pairs as well as measures of geometric similarity are used to evaluate the accuracy of the alignments. Availability: R3D Align is freely available through a web server http://rna.bgsu.edu/R3DAlign. The MATLAB source code of the program suite is also freely available for download at that location. Supplementary information: Supplementary data are available at Bioinformatics online. Contact: r-rahrig@onu.edu PMID:20929913
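
    The 'local alignment graph' idea can be sketched with any off-the-shelf clique finder; in the hypothetical toy below (NetworkX, invented node labels), nodes are candidate nucleotide correspondences, edges join mutually consistent ones, and a maximum clique is the largest mutually consistent set to merge into a global alignment.

```python
import networkx as nx

G = nx.Graph()
# hypothetical correspondences "Ai-Bj" (nucleotide i of structure A matched to j of B);
# an edge means the two correspondences are geometrically and sequentially compatible
G.add_edges_from([("A10-B12", "A11-B13"), ("A10-B12", "A12-B14"),
                  ("A11-B13", "A12-B14"), ("A30-B40", "A31-B41")])

best = max(nx.find_cliques(G), key=len)   # a maximum clique = largest consistent correspondence set
print(best)
```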

  12. Throughput Maximization for Cognitive Radio Networks Using Active Cooperation and Superposition Coding

    KAUST Repository

    Hamza, Doha

    2015-02-13

    We propose a three-message superposition coding scheme in a cognitive radio relay network exploiting active cooperation between primary and secondary users. The primary user is motivated to cooperate by substantial benefits it can reap from this access scenario. Specifically, the time resource is split into three transmission phases: The first two phases are dedicated to primary communication, while the third phase is for the secondary's transmission. We formulate two throughput maximization problems for the secondary network subject to primary user rate constraints and per-node power constraints with respect to the time durations of primary transmission and the transmit power of the primary and the secondary users. The first throughput maximization problem assumes a partial power constraint such that the secondary power dedicated to primary cooperation, i.e. for the first two communication phases, is fixed a priori. In the second throughput maximization problem, a total power constraint is assumed over the three phases of communication. The two problems are difficult to solve analytically when the relaying channel gains are strictly greater than each other and strictly greater than the direct link channel gain. However, mathematically tractable lower-bound and upper-bound solutions can be attained for the two problems. For both problems, by only using the lower-bound solution, we demonstrate significant throughput gains for both the primary and the secondary users through this active cooperation scheme. We find that most of the throughput gains come from minimizing the second phase transmission time since the secondary nodes assist the primary communication during this phase. Finally, we demonstrate the superiority of our proposed scheme compared to a number of reference schemes that include best relay selection, dual-hop routing, and an interference channel model.

  13. Investigating and improving introductory physics students’ understanding of the electric field and superposition principle

    Science.gov (United States)

    Li, Jing; Singh, Chandralekha

    2017-09-01

    We discuss an investigation of the difficulties that students in a university introductory physics course have with the electric field and superposition principle and how that research was used as a guide in the development and evaluation of a research-validated tutorial on these topics to help students learn these concepts better. The tutorial uses a guided enquiry-based approach to learning and involved an iterative process of development and evaluation. During its development, we obtained feedback both from physics instructors who regularly teach introductory physics in which these concepts are taught and from students for whom the tutorial is intended. The iterative process continued and the feedback was incorporated in the later versions of the tutorial until the researchers were satisfied with the performance of a diverse group of introductory physics students on the post-test after they worked on the tutorial in an individual one-on-one interview situation. Then the final version of the tutorial was administered in several sections of the university physics course after traditional instruction in relevant concepts. We discuss the performance of students in individual interviews and on the pre-test administered before the tutorial (but after traditional lecture-based instruction) and on the post-test administered after the tutorial. We also compare student performance in sections of the class in which students worked on the tutorial with other similar sections of the class in which students only learned via traditional instruction. We find that students performed significantly better in the sections of the class in which the tutorial was used compared to when students learned the material via only lecture-based instruction.

  14. Elementary Green function as an integral superposition of Gaussian beams in inhomogeneous anisotropic layered structures in Cartesian coordinates

    Science.gov (United States)

    Červený, Vlastislav; Pšenčík, Ivan

    2017-08-01

    Integral superposition of Gaussian beams is a useful generalization of the standard ray theory. It removes some of the deficiencies of the ray theory like its failure to describe properly behaviour of waves in caustic regions. It also leads to a more efficient computation of seismic wavefields since it does not require the time-consuming two-point ray tracing. We present the formula for a high-frequency elementary Green function expressed in terms of the integral superposition of Gaussian beams for inhomogeneous, isotropic or anisotropic, layered structures, based on the dynamic ray tracing (DRT) in Cartesian coordinates. For the evaluation of the superposition formula, it is sufficient to solve the DRT in Cartesian coordinates just for the point-source initial conditions. Moreover, instead of seeking 3 × 3 paraxial matrices in Cartesian coordinates, it is sufficient to seek just 3 × 2 parts of these matrices. The presented formulae can be used for the computation of the elementary Green function corresponding to an arbitrary direct, multiply reflected/transmitted, unconverted or converted, independently propagating elementary wave of any of the three modes, P, S1 and S2. Receivers distributed along or in a vicinity of a target surface may be situated at an arbitrary part of the medium, including ray-theory shadow regions. The elementary Green function formula can be used as a basis for the computation of wavefields generated by various types of point sources (explosive, moment tensor).

  15. Influence of the superposition approximation on calculated effective dose rates from galactic cosmic rays at aerospace-related altitudes

    Science.gov (United States)

    Copeland, Kyle

    2015-07-01

    The superposition approximation was commonly employed in atmospheric nuclear transport modeling until recent years and is incorporated into flight dose calculation codes such as CARI-6 and EPCARD. The useful altitude range for this approximation is investigated using Monte Carlo transport techniques. CARI-7A simulates atmospheric radiation transport of elements H-Fe using a database of precalculated galactic cosmic radiation showers calculated with MCNPX 2.7.0 and is employed here to investigate the influence of the superposition approximation on effective dose rates, relative to full nuclear transport of galactic cosmic ray primary ions. Superposition is found to produce results less than 10% different from nuclear transport at current commercial and business aviation altitudes while underestimating dose rates at higher altitudes. The underestimate sometimes exceeds 20% at approximately 23 km and exceeds 40% at 50 km. Thus, programs employing this approximation should not be used to estimate doses or dose rates for high-altitude portions of the commercial space and near-space manned flights that are expected to begin soon.
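
    The bookkeeping behind the superposition approximation can be stated in a few lines (a hypothetical sketch of the approximation itself, not of the CARI-7A/MCNPX transport): a primary ion of charge Z and mass number A is replaced by Z protons and A−Z neutrons, each carrying the same energy per nucleon.

```python
def superposition_decomposition(Z, A, E_per_nucleon):
    """Replace an ion (Z, A) by free nucleons at the same energy per nucleon (MeV/nucleon)."""
    return [("proton", E_per_nucleon)] * Z + [("neutron", E_per_nucleon)] * (A - Z)

# e.g. an iron primary (Z=26, A=56) at 1000 MeV/nucleon becomes 56 independent nucleons
primaries = superposition_decomposition(26, 56, 1000.0)
print(len(primaries))   # 56
```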

  16. Java application for the superposition T-matrix code to study the optical properties of cosmic dust aggregates

    CERN Document Server

    Halder, P; Roy, P Deb; Das, H S

    2014-01-01

    In this paper, we report the development of a Java application for the Superposition T-matrix code, JaSTA (Java Superposition T-matrix App), to study the light scattering properties of aggregate structures. It has been developed using NetBeans 7.1.2, which is a Java integrated development environment (IDE). JaSTA uses the double-precision superposition codes for multi-sphere clusters in random orientation developed by Mackowski and Mischenko (1996). It consists of a graphical user interface (GUI) at the front end and a database of related data at the back end. Both the interactive GUI and the database package directly enable a user to model by self-monitoring the respective input parameters (namely wavelength, complex refractive indices, grain size, etc.) to study the related optical properties of cosmic dust (namely extinction, polarization, etc.) instantly, i.e., with zero computational time. This increases the efficiency of the user. The database of JaSTA is now created for a few sets of input parameters with...

  17. Uncorrected refractive errors

    Directory of Open Access Journals (Sweden)

    Kovin S Naidoo

    2012-01-01

    Full Text Available Global estimates indicate that more than 2.3 billion people in the world suffer from poor vision due to refractive error; of which 670 million people are considered visually impaired because they do not have access to corrective treatment. Refractive errors, if uncorrected, result in an impaired quality of life for millions of people worldwide, irrespective of their age, sex and ethnicity. Over the past decade, a series of studies using a survey methodology, referred to as Refractive Error Study in Children (RESC), were performed in populations with different ethnic origins and cultural settings. These studies confirmed that the prevalence of uncorrected refractive errors is considerably high for children in low-and-middle-income countries. Furthermore, uncorrected refractive error has been noted to have extensive social and economic impacts, such as limiting educational and employment opportunities of economically active persons, healthy individuals and communities. The key public health challenges presented by uncorrected refractive errors, the leading cause of vision impairment across the world, require urgent attention. To address these issues, it is critical to focus on the development of human resources and sustainable methods of service delivery. This paper discusses three core pillars to addressing the challenges posed by uncorrected refractive errors: Human Resource (HR) Development, Service Development and Social Entrepreneurship.

  18. Uncorrected refractive errors.

    Science.gov (United States)

    Naidoo, Kovin S; Jaggernath, Jyoti

    2012-01-01

    Global estimates indicate that more than 2.3 billion people in the world suffer from poor vision due to refractive error; of which 670 million people are considered visually impaired because they do not have access to corrective treatment. Refractive errors, if uncorrected, results in an impaired quality of life for millions of people worldwide, irrespective of their age, sex and ethnicity. Over the past decade, a series of studies using a survey methodology, referred to as Refractive Error Study in Children (RESC), were performed in populations with different ethnic origins and cultural settings. These studies confirmed that the prevalence of uncorrected refractive errors is considerably high for children in low-and-middle-income countries. Furthermore, uncorrected refractive error has been noted to have extensive social and economic impacts, such as limiting educational and employment opportunities of economically active persons, healthy individuals and communities. The key public health challenges presented by uncorrected refractive errors, the leading cause of vision impairment across the world, require urgent attention. To address these issues, it is critical to focus on the development of human resources and sustainable methods of service delivery. This paper discusses three core pillars to addressing the challenges posed by uncorrected refractive errors: Human Resource (HR) Development, Service Development and Social Entrepreneurship.

  19. Errors in Radiologic Reporting

    Directory of Open Access Journals (Sweden)

    Esmaeel Shokrollahi

    2010-05-01

    Full Text Available Given that the report is a professional document and bears the associated responsibilities, all of the radiologist's errors appear in it, either directly or indirectly. It is not easy to distinguish and classify the mistakes made when a report is prepared, because in most cases the errors are complex and attributable to more than one cause, and because many errors depend on the individual radiologist's professional, behavioral and psychological traits. In fact, anyone can make a mistake, but some radiologists make more mistakes, and some types of mistakes are predictable to some extent. Reporting errors can be categorized in different ways: universal vs. individual; human-related vs. system-related; perceptive vs. cognitive; descriptive, interpretative or decision-related. Perceptive errors include false positives and false negatives (non-identification and erroneous identification), while cognitive errors include knowledge-based and psychological errors.

  20. Errors in neuroradiology.

    Science.gov (United States)

    Caranci, Ferdinando; Tedeschi, Enrico; Leone, Giuseppe; Reginelli, Alfonso; Gatta, Gianluca; Pinto, Antonio; Squillaci, Ettore; Briganti, Francesco; Brunese, Luca

    2015-09-01

    Approximately 4 % of radiologic interpretations in daily practice contain errors, and discrepancies are estimated to occur in 2-20 % of reports. Fortunately, most of them are minor errors or, if serious, are found and corrected with sufficient promptness; obviously, diagnostic errors become critical when misinterpretation or misidentification significantly delays medical or surgical treatment. Errors can be summarized into four main categories: observer errors, errors in interpretation, failure to suggest the next appropriate procedure, and failure to communicate in a timely and clinically appropriate manner. The misdiagnosis/misinterpretation percentage rises in the emergency setting and in the early stages of the learning curve, as in residency. Para-physiological and pathological pitfalls in neuroradiology include calcification and brain stones, pseudofractures, enlargement of subarachnoid or epidural spaces, ventricular system abnormalities, vascular system abnormalities, intracranial lesions or pseudolesions, and finally neuroradiological emergencies. In order to minimize the possibility of error, it is important to be aware of the various presentations of pathology, obtain clinical information, know current practice guidelines, review after interpreting a diagnostic study, suggest follow-up studies when appropriate, and communicate significant abnormal findings appropriately and in a timely fashion directly with the treatment team.

  1. The holographic reconstructing algorithm and its error analysis about phase-shifting phase measurement

    Institute of Scientific and Technical Information of China (English)

    LU Xiaoxu; ZHONG Liyun; ZHANG Yimo

    2007-01-01

    Phase-shifting measurement and its error estimation method were studied according to the holographic principle. A function of synchronous superposition of the object complex amplitude reconstructed from N-step phase-shifting through one integral period (the N-step phase-shifting function for short) is proposed. In N-step phase-shifting measurement, the interferograms are treated as a series of in-line holograms and the reference beam is an ideal parallel plane wave, so the N-step phase-shifting function can be obtained by multiplying each interferogram by the original reference wave. In ideal conditions, the proposed method is a kind of synchronous superposition algorithm in which the complex amplitude is separated, measured and superposed. When errors exist in the measurement, the result of the N-step phase-shifting function is the optimal expected value of the least-squares fitting method. In this framework, the N+1-step phase-shifting function can be obtained from the N-step phase-shifting function, which shows that the N-step phase-shifting function can be separated into two parts: the ideal N-step phase-shifting function and its errors. The phase-shifting errors in N-step phase-shifting phase measurement can then be treated in the same way as the relative errors of amplitude and intensity through the N+1-step phase-shifting function. This error estimation method reduces the difficulties of error estimation in phase-shifting phase measurement. A maximum error estimation method for phase-shifting phase measurement and its formula are also proposed.
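
    The synchronous-superposition idea can be written down in a few lines for the ideal case (a sketch under the record's assumption of an ideal plane reference wave and equally spaced phase steps; the error analysis itself is not reproduced): each interferogram is weighted by the conjugate reference phase and the weighted frames are summed.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 4                                               # number of phase steps over one period
phi = 2 * np.pi * rng.random((64, 64))              # hypothetical object phase map
a, b = 1.0, 0.6                                     # background and modulation amplitudes

deltas = 2 * np.pi * np.arange(N) / N               # equally spaced phase shifts
frames = [a + b * np.cos(phi + d) for d in deltas]  # the N recorded interferograms

U = sum(I * np.exp(-1j * d) for I, d in zip(frames, deltas))   # synchronous superposition
phase = np.angle(U)                                 # recovered (wrapped) object phase
print(np.allclose(np.angle(np.exp(1j * (phase - phi))), 0.0, atol=1e-9))   # True
```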

  2. Inpatients’ medical prescription errors

    Directory of Open Access Journals (Sweden)

    Aline Melo Santos Silva

    2009-09-01

    Full Text Available Objective: To identify and quantify the most frequent errors in inpatients' medical prescriptions. Methods: A survey of prescription errors was performed on inpatients' medical prescriptions from July 2008 to May 2009, for eight hours a day. Results: A total of 3,931 prescriptions was analyzed and 362 (9.2%) prescription errors were found, involving the healthcare team as a whole. Among the 16 types of errors detected, the most frequent were missing information, such as dose (66 cases, 18.2%) and administration route (26 cases, 7.2%); wrong transcriptions to the information system (45 cases, 12.4%); duplicate drugs (30 cases, 8.3%); doses higher than recommended (24 cases, 6.6%); and prescriptions with an indication but not specifying allergy (29 cases, 8.0%). Conclusion: Medication errors are a reality in hospitals. All healthcare professionals are responsible for the identification and prevention of these errors, each one in his or her own area. The pharmacist is an essential professional in the drug therapy process. All hospital organizations need a pharmacist team responsible for analyzing medical prescriptions before the preparation, dispensation and administration of drugs to inpatients. This study showed that the pharmacist improves inpatient safety and the success of the prescribed therapy.

  3. A nonlinear training set superposition filter derived by neural network training methods for implementation in a shift-invariant optical correlator

    Science.gov (United States)

    Kypraios, Ioannis; Young, Rupert C. D.; Birch, Philip M.; Chatwin, Christopher R.

    2003-08-01

    The various types of synthetic discriminant function (sdf) filter result in a weighted linear superposition of the training set images. Neural network training procedures result in a nonlinear superposition of the training set images, or effectively a feature extraction process, which leads to better interpolation properties than achievable with the sdf filter. However, shift invariance is generally lost, since a data-dependent nonlinear weighting function is incorporated in the input data window. As a compromise, we train a nonlinear superposition filter via neural network methods with the constraint of a linear input to allow for shift invariance. The filter can then be used in a frequency-domain-based optical correlator. Simulation results are presented that demonstrate the improved training set interpolation achieved by the nonlinear filter as compared to a linear superposition filter.
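
    For contrast with the nonlinear filter described above, the baseline linear sdf construction is easy to state: the filter is a weighted superposition h = Xw of the training images, with weights chosen so the correlation peaks hit prescribed values. A hypothetical sketch with random stand-in images:

```python
import numpy as np

rng = np.random.default_rng(3)
n_pixels, n_train = 1024, 5
X = rng.normal(size=(n_pixels, n_train))   # training images as columns (lexicographically ordered)
c = np.ones(n_train)                       # desired correlation-peak value for each training image

w = np.linalg.solve(X.T @ X, c)            # weights of the linear superposition
h = X @ w                                  # classical sdf filter

print(np.allclose(X.T @ h, c))             # peak constraints are met exactly
```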

  4. Coherent Superposition States of Atoms and Molecules in a Bose-Einstein Condensate with Exactly Balanced Photo-Associations and Photo-Dissociations

    Institute of Scientific and Technical Information of China (English)

    杨晓雪; 吴颖

    2003-01-01

    We show that there exist a series of coherent superposition states of atoms and molecules in a dilute Bose-Einstein condensate with exactly balanced photo-associations and photo-dissociations, and their analytical expressions are explicitly given. They also correspond to the coherent superposition states of two kinds of photons in optical second harmonic generation processes, which shows exactly balanced down- and up-conversions.

  5. Error monitoring in musicians

    Directory of Open Access Journals (Sweden)

    Clemens eMaidhof

    2013-07-01

    Full Text Available To err is human, and hence even professional musicians make errors occasionally during their performances. This paper summarizes recent work investigating error monitoring in musicians, i.e. the processes and their neural correlates associated with the monitoring of ongoing actions and the detection of deviations from intended sounds. EEG Studies reported an early component of the event-related potential (ERP occurring before the onsets of pitch errors. This component, which can be altered in musicians with focal dystonia, likely reflects processes of error detection and/or error compensation, i.e. attempts to cancel the undesired sensory consequence (a wrong tone a musician is about to perceive. Thus, auditory feedback seems not to be a prerequisite for error detection, consistent with previous behavioral results. In contrast, when auditory feedback is externally manipulated and thus unexpected, motor performance can be severely distorted, although not all feedback alterations result in performance impairments. Recent studies investigating the neural correlates of feedback processing showed that unexpected feedback elicits an ERP component after note onsets, which shows larger amplitudes during music performance than during mere perception of the same musical sequences. Hence, these results stress the role of motor actions for the processing of auditory information. Furthermore, recent methodological advances like the combination of 3D motion capture techniques with EEG will be discussed. Such combinations of different measures can potentially help to disentangle the roles of different feedback types such as proprioceptive and auditory feedback, and in general to derive at a better understanding of the complex interactions between the motor and auditory domain during error monitoring. Finally, outstanding questions and future directions in this context will be discussed.

  6. Smoothing error pitfalls

    Science.gov (United States)

    von Clarmann, T.

    2014-09-01

    The difference due to the content of a priori information between a constrained retrieval and the true atmospheric state is usually represented by a diagnostic quantity called smoothing error. In this paper it is shown that, regardless of the usefulness of the smoothing error as a diagnostic tool in its own right, the concept of the smoothing error as a component of the retrieval error budget is questionable because it is not compliant with Gaussian error propagation. The reason for this is that the smoothing error does not represent the expected deviation of the retrieval from the true state but the expected deviation of the retrieval from the atmospheric state sampled on an arbitrary grid, which is itself a smoothed representation of the true state; in other words, to characterize the full loss of information with respect to the true atmosphere, the effect of the representation of the atmospheric state on a finite grid also needs to be considered. The idea of a sufficiently fine sampling of this reference atmospheric state is problematic because atmospheric variability occurs on all scales, implying that there is no limit beyond which the sampling is fine enough. Even the idealization of infinitesimally fine sampling of the reference state does not help, because the smoothing error is applied to quantities which are only defined in a statistical sense, which implies that a finite volume of sufficient spatial extent is needed to meaningfully discuss temperature or concentration. Smoothing differences, however, which play a role when measurements are compared, are still a useful quantity if the covariance matrix involved has been evaluated on the comparison grid rather than resulting from interpolation and if the averaging kernel matrices have been evaluated on a grid fine enough to capture all atmospheric variations that the instruments are sensitive to. This is, under the assumptions stated, because the undefined component of the smoothing error, which is the
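
    For reference, the conventional smoothing-error covariance that the paper scrutinises is usually written S_s = (A − I) S_a (A − I)^T, with A the averaging kernel matrix and S_a the a priori covariance; a hypothetical sketch (random stand-in matrices) is shown below without endorsing its use as an error-budget component.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 30                                                  # number of retrieval grid levels
A = 0.8 * np.eye(n) + 0.01 * rng.normal(size=(n, n))    # stand-in averaging kernel matrix
S_a = np.eye(n)                                         # stand-in a priori covariance

S_s = (A - np.eye(n)) @ S_a @ (A - np.eye(n)).T         # conventional smoothing-error covariance
```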

  7. Learning from Errors

    Directory of Open Access Journals (Sweden)

    MA. Lendita Kryeziu

    2015-06-01

    Full Text Available "Errare humanum est", a well-known and widespread Latin proverb, states that to err is human, and that people make mistakes all the time. However, what counts is that people must learn from their mistakes. On these grounds Steve Jobs stated: "Sometimes when you innovate, you make mistakes. It is best to admit them quickly, and get on with improving your other innovations." Similarly, in learning a new language, learners make mistakes, thus it is important to accept them, learn from them, discover the reason why they are made, improve and move on. The significance of studying errors is described by Corder: "There have always been two justifications proposed for the study of learners' errors: the pedagogical justification, namely that a good understanding of the nature of error is necessary before a systematic means of eradicating them could be found, and the theoretical justification, which claims that a study of learners' errors is part of the systematic study of the learners' language which is itself necessary to an understanding of the process of second language acquisition" (Corder, 1982: 1). Thus the importance and the aim of this paper lie in analyzing errors in the process of second language acquisition and the way we teachers can benefit from mistakes to help students improve themselves while giving the proper feedback.

  8. Error Correction in Classroom

    Institute of Scientific and Technical Information of China (English)

    Dr. Grace Zhang

    2000-01-01

    Error correction is an important issue in foreign language acquisition. This paper investigates how students feel about the way in which error correction should take place in a Chinese-as-a-foreign-language classroom, based on large-scale empirical data. The study shows that there is a general consensus that error correction is necessary. In terms of correction strategy, the students preferred a combination of direct and indirect corrections, or a direct-only correction. The former choice indicates that students would be happy to take either, so long as the correction gets done. Most students didn't mind peer correcting provided it is conducted in a constructive way. More than half of the students would feel uncomfortable if the same error they make in class is corrected consecutively more than three times. Taking these findings into consideration, we may want to encourage peer correcting, use a combination of correction strategies (direct only if suitable) and do it in a non-threatening and sensitive way. It is hoped that this study will contribute to the effectiveness of error correction in the Chinese language classroom, and it may also have wider implications for other languages.

  9. Microwave sintering versus conventional sintering of NiCuZn ferrites. Part II: Microstructure and DC-bias superposition characteristics

    Energy Technology Data Exchange (ETDEWEB)

    Ouyang, Chenxin, E-mail: cxouyang@foxmail.com [Department of Materials Science and Engineering, Shenzhen Graduate School, Harbin Institute of Technology, Shenzhen, Guangdong 518055 (China); Research Center, Shenzhen Zhenhua Fu Electronics Co., Ltd., Shenzhen, Guangdong 518109 (China); Xiao, Shumin [Department of Materials Science and Engineering, Shenzhen Graduate School, Harbin Institute of Technology, Shenzhen, Guangdong 518055 (China); Zhu, Jianhua [Research Center, Shenzhen Zhenhua Fu Electronics Co., Ltd., Shenzhen, Guangdong 518109 (China); College of Optoelectronic Engineering, Shenzhen University, Shenzhen, Guangdong 518060 (China); Shi, Wei [Research Center, Shenzhen Zhenhua Fu Electronics Co., Ltd., Shenzhen, Guangdong 518109 (China)

    2016-06-01

    NiCuZn ferrites with the composition (Ni0.48Cu0.10Zn0.42O)1.04(Fe2O3)0.96 were consolidated by microwave sintering (MS) and conventional sintering (CS), respectively. The influences of the external microwave field and of additives (1 wt% BSZ glass or 1 wt% Bi2O3) on the microstructure and DC-bias superposition characteristics of NiCuZn ferrites were investigated. Experimental results demonstrated that the final grain size was much larger, with higher density, when the microwave field was applied. In addition, for undoped ferrites, the coarse-grained structure obtained from microwave sintering is harmful to the DC-bias superposition characteristics. However, when BSZ glass or Bi2O3 is added, the discrepancy in final grain size between the MS and CS methods is not obvious. NiCuZn ferrites with the addition of BSZ glass or Bi2O3 exhibited a stronger ability to inhibit the drop of permeability under a DC-bias magnetic field. Possible mechanisms behind this behaviour are discussed in this article. - Highlights: • The magnetization process of NiCuZn ferrite under a bias current field is studied. • Coarse grains obtained from microwave sintering are harmful to withstanding the DC-bias attack. • BSZ glass and Bi2O3 can enhance the density and the DC-bias superposition property.

  10. Errors in Neonatology

    Directory of Open Access Journals (Sweden)

    Antonio Boldrini

    2013-06-01

    Full Text Available Introduction: Danger and errors are inherent in human activities. In medical practice, errors can lead to adverse events for patients. The mass media echo the whole scenario. Methods: We reviewed recently published papers in the PubMed database to focus on the evidence and management of errors in medical practice in general and in Neonatology in particular. We compared the results of the literature with our specific experience at the Nina Simulation Centre (Pisa, Italy). Results: In Neonatology the main error domains are: medication and total parenteral nutrition, resuscitation and respiratory care, invasive procedures, nosocomial infections, patient identification, and diagnostics. Risk factors include the patients' size, prematurity, vulnerability and underlying disease conditions, but also multidisciplinary teams, working conditions producing fatigue, and the large variety of treatment and investigative modalities needed. Discussion and Conclusions: In our opinion, it is hardly possible to change human beings, but it is likely possible to change the conditions under which they work. Voluntary error reporting systems can help in preventing adverse events. Education and retraining by means of simulation can be an effective strategy too. In Pisa (Italy), Nina (ceNtro di FormazIone e SimulazioNe NeonAtale) is a simulation centre that offers the possibility of continuous retraining in technical and non-technical skills to optimize neonatological care strategies. Furthermore, we have been working on a novel skill trainer for mechanical ventilation (MEchatronic REspiratory System SImulator for Neonatal Applications, MERESSINA). Finally, in our opinion, national health policy indirectly influences the risk of errors. Proceedings of the 9th International Workshop on Neonatology · Cagliari (Italy) · October 23rd-26th, 2013 · Learned lessons, changing practice and cutting-edge research

  11. Error Free Software

    Science.gov (United States)

    1985-01-01

    A mathematical theory for development of "higher order" software to catch computer mistakes resulted from a Johnson Space Center contract for Apollo spacecraft navigation. Two women who were involved in the project formed Higher Order Software, Inc. to develop and market the system of error analysis and correction. They designed software which is logically error-free, which, in one instance, was found to increase productivity by 600%. USE.IT defines its objectives using AXES -- a user can write in English and the system converts to computer languages. It is employed by several large corporations.

  12. LIBERTARISMO & ERROR CATEGORIAL

    Directory of Open Access Journals (Sweden)

    Carlos G. Patarroyo G.

    2009-01-01

    Full Text Available This article offers a defence of libertarianism against two accusations according to which it commits a category mistake. To this end, Gilbert Ryle's philosophy is used as a tool to explain the reasons underlying these accusations and to show why, even though certain versions of libertarianism that appeal to agent causation or to Cartesian dualism do commit these errors, a libertarianism that seeks the basis for the possibility of human freedom in physicalist indeterminism cannot necessarily be accused of incurring them.

  13. An application of superpositions of two-state Markovian sources to the modelling of self-similar behaviour

    DEFF Research Database (Denmark)

    Andersen, Allan T.; Nielsen, Bo Friis

    1997-01-01

    We present a modelling framework and a fitting method for modelling second order self-similar behaviour with the Markovian arrival process (MAP). The fitting method is based on fitting to the autocorrelation function of counts a second order self-similar process. It is shown that with this fitting...... algorithm it is possible closely to match the autocorrelation function of counts for a second order self-similar process over 3-5 time-scales with 8-16 state MAPs with a very simple structure, i.e. a superposition of 3 and 4 interrupted Poisson processes (IPP) respectively and a Poisson process. The fitting...
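
    The fitting target mentioned above has a closed form: an exactly second-order self-similar count process with Hurst parameter H has autocorrelation ρ(k) = ½(|k+1|^{2H} − 2|k|^{2H} + |k−1|^{2H}). The sketch below only evaluates this target over several time scales (the value of H is an arbitrary choice); the MAP/IPP fitting itself is not reproduced.

```python
import numpy as np

def ss_autocorrelation(k, H):
    """Autocorrelation of counts for an exactly second-order self-similar process."""
    k = np.asarray(k, dtype=float)
    return 0.5 * (np.abs(k + 1) ** (2 * H) - 2 * np.abs(k) ** (2 * H) + np.abs(k - 1) ** (2 * H))

lags = np.arange(1, 10_000)                 # several time scales, as in the fitting procedure
target = ss_autocorrelation(lags, H=0.8)    # H = 0.8 is a hypothetical choice
```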

  14. Universal quantum computation with electron spins in quantum dots based on superpositions of spacetime paths and Coulomb blockade

    CERN Document Server

    Lin, C C Y; Wu, Y Z; Zhang, W M; Lin, Cyrus C.Y.; Soo, Chopin; Wu, Yin-Zhong; Zhang, Wei-Min

    2004-01-01

    Using electrostatic gates to control the electron positions, we present a new controlled-NOT gate based on quantum dots. The qubit states are chosen to be the spin states of an excess conductor electron in the quantum dot; and the main ingredients of our scheme are the superpositions of space-time paths of electrons and the effect of Coulomb blockade. All operations are performed only on individual quantum dots and are based on fundamental interactions. Without resorting to spin-spin terms or other assumed interactions, the scheme can be realized with a dedicated circuit and a necessary number of quantum dots. Gate fidelity of the quantum computation is also presented.

  15. Application of coupled mode theory and coherent superposition theory to phase-shift measurements on optical microresonators

    Science.gov (United States)

    Barnes, Jack A.; Loock, Hans-Peter

    2016-10-01

    Several mathematical models exist in the literature to describe the properties of optical resonators. Here, coupled mode theory and coherent superposition theory are compared and their consistency is demonstrated as they are applied to phase-shift cavity ring-down measurements in optical (micro-)cavities. In the particular case of a whispering gallery mode in a microsphere cavity these models are applied to transmission measurements and backscattering measurements through the fiber taper that couples light into the microresonator. It is shown that both models produce identical relations when applied to these traveling wave cavities.

  16. Orwell's Instructive Errors

    Science.gov (United States)

    Julian, Liam

    2009-01-01

    In this article, the author talks about George Orwell, his instructive errors, and the manner in which Orwell pierced worthless theory, faced facts and defended decency (with fluctuating success), and largely ignored the tradition of accumulated wisdom that has rendered him a timeless teacher--one whose inadvertent lessons, while infrequently…

  17. Challenge and Error: Critical Events and Attention-Related Errors

    Science.gov (United States)

    Cheyne, James Allan; Carriere, Jonathan S. A.; Solman, Grayden J. F.; Smilek, Daniel

    2011-01-01

    Attention lapses resulting from reactivity to task challenges and their consequences constitute a pervasive factor affecting everyday performance errors and accidents. A bidirectional model of attention lapses (error ↔ attention-lapse: Cheyne, Solman, Carriere, & Smilek, 2009) argues that errors beget errors by generating attention…

  18. Simultaneous nonlinear encryption of grayscale and color images based on phase-truncated fractional Fourier transform and optical superposition principle.

    Science.gov (United States)

    Wang, Xiaogang; Zhao, Daomu

    2013-09-01

    A nonlinear color and grayscale images cryptosystem based on phase-truncated fractional Fourier transform and optical superposition principle is proposed. In order to realize simultaneous encryption of color and grayscale images, each grayscale image is first converted into two phase masks by using an optical coherent superposition, one of which is treated as a part of input information that will be fractional Fourier transformed while the other in the form of a chaotic random phase mask (CRPM) is used as a decryption key. For the purpose of optical performance, all the processes are performed through three channels, i.e., red, green, and blue. Different from most asymmetric encryption methods, the decryption process is designed to be linear for the sake of effective decryption. The encryption level of a double random phase encryption based on phase-truncated Fourier transform is enhanced by extending it into fractional Fourier domain and the load of the keys management and transmission is lightened by using CRPMs. The security of the proposed cryptosystem is discussed and computer simulation results are presented to verify the validity of the proposed method.

  19. Patient error: a preliminary taxonomy.

    NARCIS (Netherlands)

    Buetow, S.; Kiata, L.; Liew, T.; Kenealy, T.; Dovey, S.; Elwyn, G.

    2009-01-01

    PURPOSE: Current research on errors in health care focuses almost exclusively on system and clinician error. It tends to exclude how patients may create errors that influence their health. We aimed to identify the types of errors that patients can contribute and help manage, especially in primary care…

  20. Automatic Error Analysis Using Intervals

    Science.gov (United States)

    Rothwell, E. J.; Cloud, M. J.

    2012-01-01

    A technique for automatic error analysis using interval mathematics is introduced. A comparison to standard error propagation methods shows that in cases involving complicated formulas, the interval approach gives comparable error estimates with much less effort. Several examples are considered, and numerical errors are computed using the INTLAB…

  1. Imagery of Errors in Typing

    Science.gov (United States)

    Rieger, Martina; Martinez, Fanny; Wenke, Dorit

    2011-01-01

    Using a typing task we investigated whether insufficient imagination of errors and error corrections is related to duration differences between execution and imagination. In Experiment 1 spontaneous error imagination was investigated, whereas in Experiment 2 participants were specifically instructed to imagine errors. Further, in Experiment 2 we…

  2. Error bars in experimental biology.

    Science.gov (United States)

    Cumming, Geoff; Fidler, Fiona; Vaux, David L

    2007-04-09

    Error bars commonly appear in figures in publications, but experimental biologists are often unsure how they should be used and interpreted. In this article we illustrate some basic features of error bars and explain how they can help communicate data and assist correct interpretation. Error bars may show confidence intervals, standard errors, standard deviations, or other quantities. Different types of error bars give quite different information, and so figure legends must make clear what error bars represent. We suggest eight simple rules to assist with effective use and interpretation of error bars.
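
    To make the distinction concrete, the short Python sketch below computes the three quantities most often drawn as error bars (standard deviation, standard error of the mean, and a t-based confidence interval) for one group of measurements. It is an illustrative aid, not code from the article, and the sample values are invented.

```python
import numpy as np
from scipy import stats

def error_bars(samples, confidence=0.95):
    """Return mean, standard deviation, standard error of the mean, and the
    half-width of the confidence interval for one experimental group."""
    x = np.asarray(samples, dtype=float)
    n = x.size
    mean = x.mean()
    sd = x.std(ddof=1)                      # descriptive: spread of the data
    sem = sd / np.sqrt(n)                   # inferential: uncertainty of the mean
    ci = stats.t.ppf(0.5 + confidence / 2, df=n - 1) * sem
    return mean, sd, sem, ci

mean, sd, sem, ci95 = error_bars([4.2, 3.9, 4.5, 4.1, 4.4, 3.8])
print(f"mean={mean:.2f}  SD={sd:.2f}  SEM={sem:.2f}  95% CI=+/-{ci95:.2f}")
```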

  3. Video Error Correction Using Steganography

    Directory of Open Access Journals (Sweden)

    Robie David L

    2002-01-01

    Full Text Available The transmission of any data is always subject to corruption due to errors, but video transmission, because of its real-time nature, must deal with these errors without retransmission of the corrupted data. The errors can be handled using forward error correction in the encoder or error concealment techniques in the decoder. This MPEG-2 compliant codec uses data hiding to transmit error correction information and several error concealment techniques in the decoder. The decoder resynchronizes more quickly with fewer errors than traditional resynchronization techniques. It also allows for perfect recovery of differentially encoded DCT-DC components and motion vectors. This provides for a much higher quality picture in an error-prone environment while creating an almost imperceptible degradation of the picture in an error-free environment.

  4. Error-Free Software

    Science.gov (United States)

    1989-01-01

    001 is an integrated tool suited for automatically developing ultra reliable models, simulations and software systems. Developed and marketed by Hamilton Technologies, Inc. (HTI), it has been applied in engineering, manufacturing, banking and software tools development. The software provides the ability to simplify the complex. A system developed with 001 can be a prototype or fully developed with production quality code. It is free of interface errors, consistent, logically complete and has no data or control flow errors. Systems can be designed, developed and maintained with maximum productivity. Margaret Hamilton, President of Hamilton Technologies, also directed the research and development of USE.IT, an earlier product which was the first computer aided software engineering product in the industry to concentrate on automatically supporting the development of an ultrareliable system throughout its life cycle. Both products originated in NASA technology developed under a Johnson Space Center contract.

  5. A Characterization of Prediction Errors

    OpenAIRE

    Meek, Christopher

    2016-01-01

    Understanding prediction errors and determining how to fix them is critical to building effective predictive systems. In this paper, we delineate four types of prediction errors and demonstrate that these four types characterize all prediction errors. In addition, we describe potential remedies and tools that can be used to reduce the uncertainty when trying to determine the source of a prediction error and when trying to take action to remove a prediction error.

  6. Error Analysis and Its Implication

    Institute of Scientific and Technical Information of China (English)

    崔蕾

    2007-01-01

    Error analysis is an important theory and approach for exploring the mental processes of language learners in SLA. Its major contribution is pointing out that intralingual errors are the main cause of errors during language learning. Researchers' exploration and description of these errors will not only promote the bidirectional study of Error Analysis as both theory and approach, but also carry implications for second language learning.

  7. Error bars in experimental biology

    OpenAIRE

    2007-01-01

    Error bars commonly appear in figures in publications, but experimental biologists are often unsure how they should be used and interpreted. In this article we illustrate some basic features of error bars and explain how they can help communicate data and assist correct interpretation. Error bars may show confidence intervals, standard errors, standard deviations, or other quantities. Different types of error bars give quite different information, and so figure legends must make clear what er...

  8. Diagnostic errors in pediatric radiology

    Energy Technology Data Exchange (ETDEWEB)

    Taylor, George A.; Voss, Stephan D. [Children's Hospital Boston, Department of Radiology, Harvard Medical School, Boston, MA (United States)]; Melvin, Patrice R. [Children's Hospital Boston, The Program for Patient Safety and Quality, Boston, MA (United States)]; Graham, Dionne A. [Children's Hospital Boston, The Program for Patient Safety and Quality, Boston, MA (United States); Harvard Medical School, The Department of Pediatrics, Boston, MA (United States)]

    2011-03-15

    Little information is known about the frequency, types and causes of diagnostic errors in imaging children. Our goals were to describe the patterns and potential etiologies of diagnostic error in our subspecialty. We reviewed 265 cases with clinically significant diagnostic errors identified during a 10-year period. Errors were defined as a diagnosis that was delayed, wrong or missed; they were classified as perceptual, cognitive, system-related or unavoidable; and they were evaluated by imaging modality and level of training of the physician involved. We identified 484 specific errors in the 265 cases reviewed (mean:1.8 errors/case). Most discrepancies involved staff (45.5%). Two hundred fifty-eight individual cognitive errors were identified in 151 cases (mean = 1.7 errors/case). Of these, 83 cases (55%) had additional perceptual or system-related errors. One hundred sixty-five perceptual errors were identified in 165 cases. Of these, 68 cases (41%) also had cognitive or system-related errors. Fifty-four system-related errors were identified in 46 cases (mean = 1.2 errors/case) of which all were multi-factorial. Seven cases were unavoidable. Our study defines a taxonomy of diagnostic errors in a large academic pediatric radiology practice and suggests that most are multi-factorial in etiology. Further study is needed to define effective strategies for improvement. (orig.)

  9. Transient Error Data Analysis.

    Science.gov (United States)

    1979-05-01

    [Indexed text for this record is garbled: only fragments survive of the report's table of contents (graphical data analysis; general statistics and confidence intervals; goodness-of-fit tests) and of tabulated transient-error data (per-system MTTF figures and an error-log summary covering 1542 hours from 17-Feb-79).]

  10. Minimum Error Entropy Classification

    CERN Document Server

    Marques de Sá, Joaquim P; Santos, Jorge M F; Alexandre, Luís A

    2013-01-01

    This book explains the minimum error entropy (MEE) concept applied to data classification machines. Theoretical results on the inner workings of the MEE concept, in its application to solving a variety of classification problems, are presented in the wider realm of risk functionals. Researchers and practitioners also find in the book a detailed presentation of practical data classifiers using MEE. These include multi-layer perceptrons, recurrent neural networks, complex-valued neural networks, modular neural networks, and decision trees. A clustering algorithm using an MEE-like concept is also presented. Examples, tests, evaluation experiments and comparisons with similar machines using classic approaches complement the descriptions.
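
    As a rough illustration of the MEE idea described above (and not of the book's classifiers themselves): training under minimum error entropy replaces mean squared error with Renyi's quadratic entropy of the errors, which is minimized by maximizing the Parzen-window "information potential" of the error samples. The sketch below does this for a plain linear model with a Gaussian kernel; the kernel width, learning rate, and data are illustrative assumptions and may need tuning.

```python
import numpy as np

def gaussian_kernel(u, sigma):
    """Gaussian (Parzen) kernel used to estimate the error density."""
    return np.exp(-u ** 2 / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))

def mee_fit_linear(X, y, sigma=1.0, lr=1.0, epochs=500):
    """Fit y ~ X @ w + b by minimum error entropy: gradient ascent on the
    information potential of the errors (equivalent to minimizing Renyi's
    quadratic error entropy). MEE is insensitive to the error mean, so the
    bias b is set afterwards to zero-centre the residuals."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        e = y - X @ w                           # current error samples
        de = e[:, None] - e[None, :]            # pairwise error differences
        dX = X[:, None, :] - X[None, :, :]      # pairwise input differences
        k = gaussian_kernel(de, sigma)
        grad = (k * de)[:, :, None] * dX        # d(information potential)/dw
        w += lr * grad.sum(axis=(0, 1)) / (n ** 2 * sigma ** 2)
    b = np.mean(y - X @ w)
    return w, b

# Toy data with heavy-tailed noise, a setting where an entropy criterion is
# often more robust than plain least squares.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = X @ np.array([1.5, -2.0]) + 0.3 + 0.2 * rng.standard_t(df=2, size=200)
w, b = mee_fit_linear(X, y)
print("weights:", w, "bias:", b)
```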

  11. Java application for the superposition T-matrix code to study the optical properties of cosmic dust aggregates

    Science.gov (United States)

    Halder, P.; Chakraborty, A.; Deb Roy, P.; Das, H. S.

    2014-09-01

    In this paper, we report the development of a java application for the Superposition T-matrix code, JaSTA (Java Superposition T-matrix App), to study the light scattering properties of aggregate structures. It has been developed using Netbeans 7.1.2, which is a java integrated development environment (IDE). The JaSTA uses the double precision superposition codes for multi-sphere clusters in random orientation developed by Mackowski and Mischenko (1996). It consists of a graphical user interface (GUI) in the front end and a database of related data in the back end. Both the interactive GUI and database package directly enable a user to model by self-monitoring respective input parameters (namely, wavelength, complex refractive indices, grain size, etc.) to study the related optical properties of cosmic dust (namely, extinction, polarization, etc.) instantly, i.e., with zero computational time. This increases the efficiency of the user. The database of JaSTA is now created for a few sets of input parameters with a plan to create a large database in future. This application also has an option where users can compile and run the scattering code directly for aggregates in the GUI environment. The JaSTA aims to provide convenient and quicker data analysis of the optical properties which can be used in different fields like planetary science, atmospheric science, nano science, etc. The current version of this software is developed for the Linux and Windows platforms to study the light scattering properties of small aggregates, and will be extended to larger aggregates using parallel codes in future. Catalogue identifier: AETB_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AETB_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 571570 No. of bytes in distributed program

  12. Modal Characterization using Principal Component Analysis: application to Bessel, higher-order Gaussian beams and their superposition

    Science.gov (United States)

    Mourka, A.; Mazilu, M.; Wright, E. M.; Dholakia, K.

    2013-01-01

    The modal characterization of various families of beams is a topic of current interest. We recently reported a new method for the simultaneous determination of both the azimuthal and radial mode indices for light fields possessing orbital angular momentum. The method is based upon probing the far-field diffraction pattern from a random aperture and using the recorded data as a ‘training set'. We then transform the observed data into uncorrelated variables using the principal component analysis (PCA) algorithm. Here, we show the generic nature of this approach for the simultaneous determination of the modal parameters of Hermite-Gaussian and Bessel beams. This reinforces the widespread applicability of this method for applications including information processing, spectroscopy and manipulation. Additionally, preliminary results demonstrate reliable decomposition of superpositions of Laguerre-Gaussians, yielding the intensities and relative phases of each constituent mode. Thus, this approach represents a powerful method for characterizing the optical multi-dimensional Hilbert space. PMID:23478330
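
    The PCA step in the record above lends itself to a compact sketch: reference intensity patterns of known modes form the training matrix, their principal components define a low-dimensional space, and an unknown pattern is assigned to the nearest reference in that space. The code below is only a schematic of that step on synthetic stand-in patterns; it does not reproduce the optical setup or the real diffraction data.

```python
import numpy as np

# Synthetic "training set": flattened far-field intensity patterns, one row
# per reference mode (random but fixed patterns standing in for real data).
rng = np.random.default_rng(1)
n_modes, n_pixels = 6, 32 * 32
training = rng.normal(size=(n_modes, n_pixels)) ** 2

# PCA via SVD of the mean-centred training matrix.
mean = training.mean(axis=0)
U, S, Vt = np.linalg.svd(training - mean, full_matrices=False)
n_components = 3

def project(img):
    """Project a flattened pattern onto the leading principal components."""
    return (img - mean) @ Vt[:n_components].T

# An unknown pattern (a noisy copy of mode 2) is assigned to the nearest
# training pattern in the reduced principal-component space.
unknown = training[2] + 0.05 * rng.normal(size=n_pixels)
scores = project(training)
label = int(np.argmin(np.linalg.norm(scores - project(unknown), axis=1)))
print("identified mode index:", label)
```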

  13. The classification of travelling wave solutions and superposition of multi-solutions to Camassa-Holm equation with dispersion

    Institute of Scientific and Technical Information of China (English)

    Liu Cheng-Shi

    2007-01-01

    Under the travelling wave transformation, the Camassa-Holm equation with dispersion is reduced to an integrable ordinary differential equation (ODE), whose general solution can be obtained using the trick of a one-parameter group. Furthermore, by using a complete discrimination system for polynomials, the classification of all single travelling wave solutions to the Camassa-Holm equation with dispersion is obtained. In particular, an affine subspace structure in the set of solutions of the reduced ODE is obtained. More generally, an implicit linear structure in the Camassa-Holm equation with dispersion is found. According to this linear structure, we obtain the superposition of multi-solutions to the Camassa-Holm equation with dispersion.

  14. The optimum modification of energy spectra using FFT convolution/multigrid superposition algorithm on the focus radiation treatment system

    CERN Document Server

    Hanyu, Y; Hoshino, K; Ono, A; Sonoda, T; Hirabayashi, H; Karasawa, K; Mitsuhashi, N

    2003-01-01

    In the convolution/superposition algorithm, the energy spectrum should be modified to make the reconstructed dose distribution consistent with the measured dose distribution. The energy spectrum which gives the best agreement is not determined uniquely, depending on the reconstruction procedure. In this report, the effects of the characteristics of the energy spectrum on the calculation accuracy are evaluated by comparing the percentage depth dose (PDD) and beam profiles for the reference energy spectrum with those calculated for the modified spectrum, in order to optimize the energy spectrum modification procedure when 4 and 10 MV X-ray beams are used. Decreasing the number of energy bins degraded the computation accuracy faster than it reduced the computation time. Further, the decrease of the number of energy bins led to a change of the energy spectrum. The balance of the relative fluence weight in each bin and its average energy, which determines the absolute dose, are important p...

  15. Quantum Superposition of Parametrically Amplified Multiphoton Pure States within a Decoherence-Free Schrödinger-Cat Structure

    CERN Document Server

    Bovino, F A; Mussi, V

    1999-01-01

    The new process of quantum-injection into an optical parametric amplifier operating in entangled configuration is adopted to amplify into a large dimensionality spin 1/2 Hilbert space the quantum entanglement and superposition properties of the photon-couples generated by parametric down-conversion. The structure of the Wigner function and of the field's correlation functions shows a decoherence-free, multiphoton Schroedinger-cat behaviour of the emitted field which is largely detectable against the squeezed-vacuum noise. Furthermore, owing to its entanglement character, the system is found to exhibit multi-particle quantum nonseparability and Bell-type nonlocality properties. These relevant quantum features are analyzed for several travelling-wave optical configurations implying different input quantum-injection schemes

  16. Iron-oxygen vacancy defect centers in PbTiO3: Newman superposition model analysis and density functional calculations

    Science.gov (United States)

    Meštrić, H.; Eichel, R.-A.; Kloss, T.; Dinse, K.-P.; Laubach, So.; Laubach, St.; Schmidt, P. C.; Schönau, K. A.; Knapp, M.; Ehrenberg, H.

    2005-04-01

    The Fe3+ center in ferroelectric PbTiO3 together with an oxygen vacancy forms a charged defect associate, oriented along the crystallographic c axis. Its microscopic structure has been analyzed in detail comparing results from a semiempirical Newman superposition model analysis based on fine-structure data and from calculations using density functional theory. Both methods give evidence for a substitution of Fe3+ for Ti4+ as an acceptor center. The position of the iron ion in the ferroelectric phase is found to be similar to the B site in the paraelectric phase. Partial charge compensation is locally provided by a directly coordinated oxygen vacancy. Using high-resolution synchrotron powder diffraction, it was verified that lead titanate remains tetragonal down to 12 K, exhibiting a c/a ratio of 1.0721.

  17. Errors in CT colonography.

    Science.gov (United States)

    Trilisky, Igor; Ward, Emily; Dachman, Abraham H

    2015-10-01

    CT colonography (CTC) is a colorectal cancer screening modality which is becoming more widely implemented and has shown polyp detection rates comparable to those of optical colonoscopy. CTC has the potential to improve population screening rates due to its minimal invasiveness, no sedation requirement, potential for reduced cathartic examination, faster patient throughput, and cost-effectiveness. Proper implementation of a CTC screening program requires careful attention to numerous factors, including patient preparation prior to the examination, the technical aspects of image acquisition, and post-processing of the acquired data. A CTC workstation with dedicated software is required with integrated CTC-specific display features. Many workstations include computer-aided detection software which is designed to decrease errors of detection by detecting and displaying polyp-candidates to the reader for evaluation. There are several pitfalls which may result in false-negative and false-positive reader interpretation. We present an overview of the potential errors in CTC and a systematic approach to avoid them.

  18. Error Analysis in Mathematics Education.

    Science.gov (United States)

    Rittner, Max

    1982-01-01

    The article reviews the development of mathematics error analysis as a means of diagnosing students' cognitive reasoning. Errors specific to addition, subtraction, multiplication, and division are described, and suggestions for remediation are provided. (CL)

  19. Payment Error Rate Measurement (PERM)

    Data.gov (United States)

    U.S. Department of Health & Human Services — The PERM program measures improper payments in Medicaid and CHIP and produces error rates for each program. The error rates are based on reviews of the...

  20. Unravelling the Sources of Climate Model Errors in Subpolar Gyre Sea-Surface Temperatures

    Science.gov (United States)

    Rubino, Angelo; Zanchettin, Davide

    2017-04-01

    Climate model biases are systematic errors affecting geophysical quantities simulated by coupled general circulation models and Earth system models against observational targets. To this regard, biases affecting sea-surface temperatures (SSTs) are a major concern due to the crucial role of SST in the dynamical coupling between the atmosphere and the ocean, and for the associated variability. Strong SST biases can be detrimental for the overall quality of historical climate simulations, they contribute to uncertainty in simulated features of climate scenarios and complicate initialization and assessment of decadal climate prediction experiments. We use a dynamic linear model developed within a Bayesian hierarchical framework for a probabilistic assessment of spatial and temporal characteristics of SST errors in ensemble climate simulations. In our formulation, the statistical model distinguishes between local and regional errors, further separated into seasonal and non-seasonal components. This contribution, based on a framework developed for the study of biases in the Tropical Atlantic in the frame of the European project PREFACE, focuses on the subpolar gyre region in the North Atlantic Ocean, where climate models are typically affected by a strong cold SST bias. We will use results from an application of our statistical model to an ensemble of hindcasts with the MiKlip prototype system for decadal climate predictions to demonstrate how the decadal evolution of model errors toward the subpolar gyre cold bias is substantially shaped by a seasonal signal. We will demonstrate that such seasonal signal stems from the superposition of propagating large-scale seasonal errors originated in the Labrador Sea and of large-scale as well as mesoscale seasonal errors originated along the Gulf Stream. Based on these results, we will discuss how pronounced distinctive characteristics of the different error components distinguished by our model allow for a clearer connection

  1. Error bounds for set inclusions

    Institute of Scientific and Technical Information of China (English)

    ZHENG; Xiyin(郑喜印)

    2003-01-01

    A variant of Robinson-Ursescu Theorem is given in normed spaces. Several error bound theorems for convex inclusions are proved and in particular a positive answer to Li and Singer's conjecture is given under weaker assumption than the assumption required in their conjecture. Perturbation error bounds are also studied. As applications, we study error bounds for convex inequality systems.

  2. Uncertainty quantification and error analysis

    Energy Technology Data Exchange (ETDEWEB)

    Higdon, Dave M [Los Alamos National Laboratory]; Anderson, Mark C [Los Alamos National Laboratory]; Habib, Salman [Los Alamos National Laboratory]; Klein, Richard [Los Alamos National Laboratory]; Berliner, Mark [OHIO STATE UNIV.]; Covey, Curt [LLNL]; Ghattas, Omar [UNIV OF TEXAS]; Graziani, Carlo [UNIV OF CHICAGO]; Seager, Mark [LLNL]; Sefcik, Joseph [LLNL]; Stark, Philip [UC/BERKELEY]; Stewart, James [SNL]

    2010-01-01

    UQ studies all sources of error and uncertainty, including: systematic and stochastic measurement error; ignorance; limitations of theoretical models; limitations of numerical representations of those models; limitations on the accuracy and reliability of computations, approximations, and algorithms; and human error. A more precise definition for UQ is suggested below.

  3. Feature Referenced Error Correction Apparatus.

    Science.gov (United States)

    A feature referenced error correction apparatus utilizing the multiple images of the interstage level image format to compensate for positional...images and by the generation of an error correction signal in response to the sub-frame registration errors. (Author)

  4. Errors in causal inference: an organizational schema for systematic error and random error.

    Science.gov (United States)

    Suzuki, Etsuji; Tsuda, Toshihide; Mitsuhashi, Toshiharu; Mansournia, Mohammad Ali; Yamamoto, Eiji

    2016-11-01

    To provide an organizational schema for systematic error and random error in estimating causal measures, aimed at clarifying the concept of errors from the perspective of causal inference. We propose to divide systematic error into structural error and analytic error. With regard to random error, our schema shows its four major sources: nondeterministic counterfactuals, sampling variability, a mechanism that generates exposure events and measurement variability. Structural error is defined from the perspective of counterfactual reasoning and divided into nonexchangeability bias (which comprises confounding bias and selection bias) and measurement bias. Directed acyclic graphs are useful to illustrate this kind of error. Nonexchangeability bias implies a lack of "exchangeability" between the selected exposed and unexposed groups. A lack of exchangeability is not a primary concern of measurement bias, justifying its separation from confounding bias and selection bias. Many forms of analytic errors result from the small-sample properties of the estimator used and vanish asymptotically. Analytic error also results from wrong (misspecified) statistical models and inappropriate statistical methods. Our organizational schema is helpful for understanding the relationship between systematic error and random error from a previously less investigated aspect, enabling us to better understand the relationship between accuracy, validity, and precision. Copyright © 2016 Elsevier Inc. All rights reserved.

  5. Firewall Configuration Errors Revisited

    CERN Document Server

    Wool, Avishai

    2009-01-01

    The first quantitative evaluation of the quality of corporate firewall configurations appeared in 2004, based on Check Point FireWall-1 rule-sets. In general that survey indicated that corporate firewalls were often enforcing poorly written rule-sets, containing many mistakes. The goal of this work is to revisit the first survey. The current study is much larger. Moreover, for the first time, the study includes configurations from two major vendors. The study also introduces a novel "Firewall Complexity" (FC) measure, which applies to both types of firewalls. The findings of the current study indeed validate the 2004 study's main observations: firewalls are (still) poorly configured, and a rule-set's complexity is (still) positively correlated with the number of detected risk items. Thus we can conclude that, for well-configured firewalls, "small is (still) beautiful". However, unlike the 2004 study, we see no significant indication that later software versions have fewer errors (for both vendors).

  6. Beta systems error analysis

    Science.gov (United States)

    1984-01-01

    The atmospheric backscatter coefficient, beta, measured with an airborne CO Laser Doppler Velocimeter (LDV) system operating in a continuous wave, focused mode is discussed. The Single Particle Mode (SPM) algorithm was developed from concept through analysis of an extensive amount of data obtained with the system on board a NASA aircraft. The SPM algorithm is intended to be employed in situations where one particle at a time appears in the sensitive volume of the LDV. In addition to giving the backscatter coefficient, the SPM algorithm also produces as intermediate results the aerosol density and the aerosol backscatter cross section distribution. A second method, which measures only the atmospheric backscatter coefficient, is called the Volume Mode (VM) and was simultaneously employed. The results of these two methods differed by slightly less than an order of magnitude. The measurement uncertainties or other errors in the results of the two methods are examined.

  7. Catalytic quantum error correction

    CERN Document Server

    Brun, T; Hsieh, M H; Brun, Todd; Devetak, Igor; Hsieh, Min-Hsiu

    2006-01-01

    We develop the theory of entanglement-assisted quantum error correcting (EAQEC) codes, a generalization of the stabilizer formalism to the setting in which the sender and receiver have access to pre-shared entanglement. Conventional stabilizer codes are equivalent to dual-containing symplectic codes. In contrast, EAQEC codes do not require the dual-containing condition, which greatly simplifies their construction. We show how any quaternary classical code can be made into an EAQEC code. In particular, efficient modern codes, like LDPC codes, which attain the Shannon capacity, can be made into EAQEC codes attaining the hashing bound. In a quantum computation setting, EAQEC codes give rise to catalytic quantum codes which maintain a region of inherited noiseless qubits. We also give an alternative construction of EAQEC codes by making classical entanglement-assisted codes coherent.

  8. Experimental repetitive quantum error correction.

    Science.gov (United States)

    Schindler, Philipp; Barreiro, Julio T; Monz, Thomas; Nebendahl, Volckmar; Nigg, Daniel; Chwalla, Michael; Hennrich, Markus; Blatt, Rainer

    2011-05-27

    The computational potential of a quantum processor can only be unleashed if errors during a quantum computation can be controlled and corrected for. Quantum error correction works if imperfections of quantum gate operations and measurements are below a certain threshold and corrections can be applied repeatedly. We implement multiple quantum error correction cycles for phase-flip errors on qubits encoded with trapped ions. Errors are corrected by a quantum-feedback algorithm using high-fidelity gate operations and a reset technique for the auxiliary qubits. Up to three consecutive correction cycles are realized, and the behavior of the algorithm for different noise environments is analyzed.

  9. Register file soft error recovery

    Science.gov (United States)

    Fleischer, Bruce M.; Fox, Thomas W.; Wait, Charles D.; Muff, Adam J.; Watson, III, Alfred T.

    2013-10-15

    Register file soft error recovery including a system that includes a first register file and a second register file that mirrors the first register file. The system also includes an arithmetic pipeline for receiving data read from the first register file, and error detection circuitry to detect whether the data read from the first register file includes corrupted data. The system further includes error recovery circuitry to insert an error recovery instruction into the arithmetic pipeline in response to detecting the corrupted data. The inserted error recovery instruction replaces the corrupted data in the first register file with a copy of the data from the second register file.

  10. Controlling errors in unidosis carts

    Directory of Open Access Journals (Sweden)

    Inmaculada Díaz Fernández

    2010-01-01

    Full Text Available Objective: To identify errors in the unidosis system carts. Method: For two months, the Pharmacy Service controlled medication either returned or missing from the unidosis carts both in the pharmacy and in the wards. Results: Uncorrected unidosis carts showed 0.9% medication errors (264) versus 0.6% (154) in unidosis carts previously revised. In carts not revised, the error is 70.83% and mainly caused when setting up unidosis carts. The rest are due to a lack of stock or unavailability (21.6%), errors in the transcription of medical orders (6.81%), or boxes that had not been emptied previously (0.76%). The errors found in the units correspond to errors in the transcription of the treatment (3.46%), non-receipt of the unidosis copy (23.14%), the patient not taking the medication (14.36%) or being discharged without medication (12.77%), medication not provided by nurses (14.09%), medication withdrawn from the stocks of the unit (14.62%), and errors of the pharmacy service (17.56%). Conclusions: Unidosis carts need to be revised, and a computerized prescription system is needed to avoid errors in transcription. Discussion: A high percentage of medication errors is caused by human error. If unidosis carts are checked before being sent to hospitalization units, the error diminishes to 0.3%.

  11. Prediction of discretization error using the error transport equation

    Science.gov (United States)

    Celik, Ismail B.; Parsons, Don Roscoe

    2017-06-01

    This study focuses on an approach to quantify the discretization error associated with numerical solutions of partial differential equations by solving an error transport equation (ETE). The goal is to develop a method that can be used to adequately predict the discretization error using the numerical solution on only one grid/mesh. The primary problem associated with solving the ETE is the formulation of the error source term which is required for accurately predicting the transport of the error. In this study, a novel approach is considered which involves fitting the numerical solution with a series of locally smooth curves and then blending them together with a weighted spline approach. The result is a continuously differentiable analytic expression that can be used to determine the error source term. Once the source term has been developed, the ETE can easily be solved using the same solver that is used to obtain the original numerical solution. The new methodology is applied to the two-dimensional Navier-Stokes equations in the laminar flow regime. A simple unsteady flow case is also considered. The discretization error predictions based on the methodology presented in this study are in good agreement with the 'true error'. While in most cases the error predictions are not quite as accurate as those from Richardson extrapolation, the results are reasonable and only require one numerical grid. The current results indicate that there is much promise going forward with the newly developed error source term evaluation technique and the ETE.
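
    The abstract above benchmarks the ETE predictions against Richardson extrapolation, which is easy to show in a few lines. The sketch below estimates the discretization error of a fine-grid result from two grid levels under the usual smooth-convergence assumption; it is a generic illustration, not the paper's error-transport solver, and the toy derivative problem is invented.

```python
import numpy as np

def richardson_error_estimate(f_coarse, f_fine, refinement_ratio=2.0, order=2.0):
    """Estimate the discretization error of the fine-grid solution from two
    solutions of the same problem (coarse solution sampled at the fine-grid
    points), assuming smooth convergence at the formal order p:
        error_fine ~ (f_coarse - f_fine) / (r**p - 1)."""
    return (np.asarray(f_coarse) - np.asarray(f_fine)) / (refinement_ratio ** order - 1.0)

# Toy check: second-order central-difference derivative of sin(x) at x = 1.
x, exact = 1.0, np.cos(1.0)
d = lambda h: (np.sin(x + h) - np.sin(x - h)) / (2 * h)
est = richardson_error_estimate(d(0.2), d(0.1))
print(f"estimated error {est:.2e} vs true error {d(0.1) - exact:.2e}")
```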

  12. Prioritising interventions against medication errors

    DEFF Research Database (Denmark)

    Lisby, Marianne; Pape-Larsen, Louise; Sørensen, Ann Lykkegaard

    2011-01-01

    Abstract Authors: Lisby M, Larsen LP, Soerensen AL, Nielsen LP, Mainz J Title: Prioritising interventions against medication errors – the importance of a definition Objective: To develop and test a restricted definition of medication errors across health care settings in Denmark. Methods: Medication...... errors constitute a major quality and safety problem in modern healthcare. However, far from all are clinically important. The prevalence of medication errors ranges from 2-75%, indicating a global problem in defining and measuring these [1]. New cut-off levels focusing on the clinical impact of medication...... errors are therefore needed. Development of definition: A definition of medication errors including an index of error types for each stage in the medication process was developed from existing terminology and through a modified Delphi-process in 2008. The Delphi panel consisted of 25 interdisciplinary...

  13. Improved Error Thresholds for Measurement-Free Error Correction

    Science.gov (United States)

    Crow, Daniel; Joynt, Robert; Saffman, M.

    2016-09-01

    Motivated by limitations and capabilities of neutral atom qubits, we examine whether measurement-free error correction can produce practical error thresholds. We show that this can be achieved by extracting redundant syndrome information, giving our procedure extra fault tolerance and eliminating the need for ancilla verification. The procedure is particularly favorable when multiqubit gates are available for the correction step. Simulations of the bit-flip, Bacon-Shor, and Steane codes indicate that coherent error correction can produce threshold error rates that are on the order of 10^-3 to 10^-4—comparable with or better than measurement-based values, and much better than previous results for other coherent error correction schemes. This indicates that coherent error correction is worthy of serious consideration for achieving protected logical qubits.
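
    As a baseline illustration of how threshold-style numbers like those above are obtained (but not of the paper's coherent, measurement-free scheme itself), the sketch below Monte Carlo-estimates the logical error rate of the simplest code involved, the three-qubit bit-flip repetition code with ideal majority-vote correction, and compares it with the analytic value 3p²(1-p) + p³.

```python
import numpy as np

def repetition_logical_error_rate(p_phys, n_trials=1_000_000, seed=0):
    """Monte Carlo estimate of the logical error rate of the 3-qubit bit-flip
    (repetition) code with majority-vote correction: a logical error occurs
    when two or more of the three qubits flip in one round."""
    rng = np.random.default_rng(seed)
    flips = rng.random((n_trials, 3)) < p_phys
    return (flips.sum(axis=1) >= 2).mean()

# Below the pseudo-threshold the encoded error rate beats the bare qubit.
for p in (0.01, 0.05, 0.2):
    pl = repetition_logical_error_rate(p)
    analytic = 3 * p**2 * (1 - p) + p**3
    print(f"p_phys={p:.2f}  p_logical~{pl:.2e}  (analytic {analytic:.2e})")
```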

  14. PREVENTABLE ERRORS: NEVER EVENTS

    Directory of Open Access Journals (Sweden)

    Narra Gopal

    2014-07-01

    Full Text Available An operation or any invasive procedure is a stressful event involving risks and complications. We should be able to offer a guarantee that the right procedure will be done on the right person in the right place on their body. "Never events" are definable; these are avoidable and preventable events. The consequences of surgical mistakes range from temporary injury in 60% of affected people, through permanent injury in 33%, to death in 7%. The World Health Organization (WHO) [1] has said that over seven million people across the globe suffer preventable surgical injuries every year, a million of them dying during or immediately after surgery. The UN body put the number of surgeries taking place every year globally at 234 million. It said surgeries had become common, with one in every 25 people undergoing one at any given time. Fifty percent of never events are preventable. Evidence suggests up to one in ten hospital admissions results in an adverse incident, an incident rate that would not be acceptable in other industries. In order to move towards a more acceptable level of safety, we need to understand how and why things go wrong and build a reliable system of working. With such a system, even though complete prevention may not be possible, we can reduce the error percentage [2]. To change the present concept of the patient, we first have to replace the word patient with medical customer; then our outlook also changes, and we will be more careful towards our customers.

  15. Concurrent remote entanglement with quantum error correction against photon losses

    Science.gov (United States)

    Roy, Ananda; Stone, A. Douglas; Jiang, Liang

    2016-09-01

    Remote entanglement of distant, noninteracting quantum entities is a key primitive for quantum information processing. We present a protocol to remotely entangle two stationary qubits by first entangling them with propagating ancilla qubits and then performing a joint two-qubit measurement on the ancillas. Subsequently, single-qubit measurements are performed on each of the ancillas. We describe two continuous variable implementations of the protocol using propagating microwave modes. The first implementation uses propagating Schrödinger cat states as the flying ancilla qubits, a joint-photon-number-modulo-2 measurement of the propagating modes for the two-qubit measurement, and homodyne detections as the final single-qubit measurements. The presence of inefficiencies in realistic quantum systems limits the success rate of generating high fidelity Bell states. This motivates us to propose a second continuous variable implementation, where we use quantum error correction to suppress the decoherence due to photon loss to first order. To that end, we encode the ancilla qubits in superpositions of Schrödinger cat states of a given photon-number parity, use a joint-photon-number-modulo-4 measurement as the two-qubit measurement, and homodyne detections as the final single-qubit measurements. We demonstrate the resilience of our quantum-error-correcting remote entanglement scheme to imperfections. Further, we describe a modification of our error-correcting scheme by incorporating additional individual photon-number-modulo-2 measurements of the ancilla modes to improve the success rate of generating high-fidelity Bell states. Our protocols can be straightforwardly implemented in state-of-the-art superconducting circuit-QED systems.

  16. Comparison of analytical error and sampling error for contaminated soil.

    Science.gov (United States)

    Gustavsson, Björn; Luthbom, Karin; Lagerkvist, Anders

    2006-11-16

    Investigation of soil from contaminated sites requires several sample handling steps that, most likely, will induce uncertainties in the sample. The theory of sampling describes seven sampling errors that can be calculated, estimated or discussed in order to get an idea of the size of the sampling uncertainties. With the aim of comparing the size of the analytical error to the total sampling error, these seven errors were applied, estimated and discussed for a case study of a contaminated site. The manageable errors were summarized, showing a range of three orders of magnitude between the examples. The comparisons show that the quotient between the total sampling error and the analytical error is larger than 20 in most calculation examples. Exceptions were samples taken in hot spots, where some components of the total sampling error get small and the analytical error gets large in comparison. Low concentration of contaminant, small extracted sample size and large particles in the sample contribute to the extent of uncertainty.

  17. Influences of observation errors in eddy flux data on inverse model parameter estimation

    Directory of Open Access Journals (Sweden)

    G. Lasslop

    2008-09-01

    Full Text Available Eddy covariance data are increasingly used to estimate parameters of ecosystem models. For proper maximum likelihood parameter estimates the error structure in the observed data has to be fully characterized. In this study we propose a method to characterize the random error of the eddy covariance flux data, and analyse error distribution, standard deviation, cross- and autocorrelation of CO2 and H2O flux errors at four different European eddy covariance flux sites. Moreover, we examine how the treatment of those errors and additional systematic errors influence statistical estimates of parameters and their associated uncertainties with three models of increasing complexity – a hyperbolic light response curve, a light response curve coupled to water fluxes and the SVAT scheme BETHY. In agreement with previous studies we find that the error standard deviation scales with the flux magnitude. The previously found strongly leptokurtic error distribution is revealed to be largely due to a superposition of almost Gaussian distributions with standard deviations varying by flux magnitude. The cross-correlations of CO2 and H2O fluxes were in all cases negligible (R² below 0.2), while the autocorrelation is usually below 0.6 at a lag of 0.5 h and decays rapidly at larger time lags. This implies that in these cases the weighted least squares criterion yields maximum likelihood estimates. To study the influence of the observation errors on model parameter estimates we used synthetic datasets, based on observations of two different sites. We first fitted the respective models to observations and then added the random error estimates described above and the systematic error, respectively, to the model output. This strategy enables us to compare the estimated parameters with true parameters. We illustrate that the correct implementation of the random error standard deviation scaling with flux
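
    As a toy illustration of the weighted least squares point made above: when the error standard deviation scales with the flux magnitude, passing those standard deviations as weights to a curve fit yields the maximum likelihood parameters. The sketch below fits a hyperbolic light response curve to synthetic half-hourly data; the functional form, parameter names and noise model are common assumptions chosen for illustration, not the exact formulation used in the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def light_response(par, alpha, beta, gamma):
    """Hyperbolic light response curve for net ecosystem exchange (NEE):
    alpha = initial slope, beta = maximum uptake, gamma = respiration."""
    return -(alpha * beta * par) / (alpha * par + beta) + gamma

# Synthetic half-hourly data with an error standard deviation that scales
# with the flux magnitude, as reported for the eddy covariance flux errors.
rng = np.random.default_rng(42)
par = rng.uniform(0, 1500, 400)
true = light_response(par, 0.05, 20.0, 3.0)
sigma = 0.5 + 0.1 * np.abs(true)
nee = true + rng.normal(0.0, sigma)

# Weighted least squares (sigma passed to curve_fit) gives the maximum
# likelihood estimate when the errors are independent and Gaussian.
popt, pcov = curve_fit(light_response, par, nee, p0=(0.01, 10.0, 1.0),
                       sigma=sigma, absolute_sigma=True,
                       bounds=(0.0, [1.0, 100.0, 20.0]))
perr = np.sqrt(np.diag(pcov))
print("alpha, beta, gamma =", popt, "+/-", perr)
```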

  18. Magnetic-field sensing with quantum error detection under the effect of energy relaxation

    Science.gov (United States)

    Matsuzaki, Yuichiro; Benjamin, Simon

    2017-03-01

    A solid state spin is an attractive system with which to realize an ultrasensitive magnetic field sensor. A spin superposition state will acquire a phase induced by the target field, and we can estimate the field strength from this phase. Recent studies have aimed at improving sensitivity through the use of quantum error correction (QEC) to detect and correct any bit-flip errors that may occur during the sensing period. Here we investigate the performance of a two-qubit sensor employing QEC under the effect of energy relaxation. Surprisingly, we find that the standard QEC technique of detecting and recovering from an error does not improve the sensitivity compared with single-qubit sensors. This is a consequence of the fact that energy relaxation induces both phase-flip and bit-flip noise, where the former noise cannot be distinguished from the relative phase induced by the target fields. However, we have found that we can improve the sensitivity if we adopt postselection to discard the state when an error is detected. Even when quantum error detection is moderately noisy, and allowing for the cost of the postselection technique, we find that this two-qubit system shows an advantage in sensing over a single qubit in the same conditions.

  19. The Usability-Error Ontology

    DEFF Research Database (Denmark)

    2013-01-01

    ... in patients coming to harm. Often the root cause analysis of these adverse events can be traced back to Usability Errors in the Health Information Technology (HIT) or its interaction with users. Interoperability of the documentation of HIT related Usability Errors in a consistent fashion can improve our ability to do systematic reviews and meta-analyses. In an effort to support improved and more interoperable data capture regarding Usability Errors, we have created the Usability Error Ontology (UEO) as a classification method for representing knowledge regarding Usability Errors. We expect the UEO will grow over time to support an increasing number of HIT system types. In this manuscript, we present this Ontology of Usability Error Types and specifically address Computerized Physician Order Entry (CPOE), Electronic Health Records (EHR) and Revenue Cycle HIT systems.

  20. Nested Quantum Error Correction Codes

    CERN Document Server

    Wang, Zhuo; Fan, Hen; Vedral, Vlatko

    2009-01-01

    The theory of quantum error correction was established more than a decade ago as the primary tool for fighting decoherence in quantum information processing. Although great progress has already been made in this field, limited methods are available in constructing new quantum error correction codes from old codes. Here we exhibit a simple and general method to construct new quantum error correction codes by nesting certain quantum codes together. The problem of finding long quantum error correction codes is reduced to that of searching several short length quantum codes with certain properties. Our method works for all length and all distance codes, and is quite efficient to construct optimal or near optimal codes. Two main known methods in constructing new codes from old codes in quantum error-correction theory, the concatenating and pasting, can be understood in the framework of nested quantum error correction codes.

  1. Processor register error correction management

    Science.gov (United States)

    Bose, Pradip; Cher, Chen-Yong; Gupta, Meeta S.

    2016-12-27

    Processor register protection management is disclosed. In embodiments, a method of processor register protection management can include determining a sensitive logical register for executable code generated by a compiler, generating an error-correction table identifying the sensitive logical register, and storing the error-correction table in a memory accessible by a processor. The processor can be configured to generate a duplicate register of the sensitive logical register identified by the error-correction table.

  2. Calculating dose distributions and wedge factors for photon treatment fields with dynamic wedges based on a convolution/superposition method.

    Science.gov (United States)

    Liu, H H; McCullough, E C; Mackie, T R

    1998-01-01

    A convolution/superposition based method was developed to calculate dose distributions and wedge factors in photon treatment fields generated by dynamic wedges. This algorithm used a dual source photon beam model that accounted for both primary photons from the target and secondary photons scattered from the machine head. The segmented treatment tables (STT) were used to calculate realistic photon fluence distributions in the wedged fields. The inclusion of the extra-focal photons resulted in more accurate dose calculation in high dose gradient regions, particularly in the beam penumbra. The wedge factors calculated using the convolution method were also compared to the measured data and showed good agreement within 0.5%. The wedge factor varied significantly with the field width along the moving jaw direction, but not along the static jaw or the depth direction. This variation was found to be determined by the ending position of the moving jaw, or the STT of the dynamic wedge. In conclusion, the convolution method proposed in this work can be used to accurately compute dose for a dynamic or an intensity modulated treatment based on the fluence modulation in the treatment field.
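
    The core convolution step lends itself to a one-dimensional toy: a fluence profile (a ramp standing in for a dynamic-wedge modulation derived from an STT) is convolved with a lateral dose-spread kernel, and the ratio of wedged to open central-axis dose plays the role of a wedge factor. The sketch below is only that toy picture, with invented numbers; it is not the clinical dual-source convolution/superposition algorithm of the record.

```python
import numpy as np

# One-dimensional toy of the convolution step: an in-plane photon fluence
# profile is convolved with a lateral dose-spread kernel to give a relative
# dose profile across the field.
x = np.linspace(-10.0, 10.0, 401)                # cm across the field
open_fluence = np.where(np.abs(x) <= 5.0, 1.0, 0.0)
# Linear ramp standing in for the dynamic-wedge fluence modulation.
wedge_fluence = open_fluence * np.clip(0.2 + 0.08 * (x + 5.0), 0.0, 1.0)

kernel = np.exp(-np.abs(x) / 0.5)                # crude lateral spread kernel
kernel /= kernel.sum()

dose_open = np.convolve(open_fluence, kernel, mode="same")
dose_wedge = np.convolve(wedge_fluence, kernel, mode="same")

# A central-axis "wedge factor" analogue: wedged over open dose at x = 0.
centre = len(x) // 2
print("toy wedge factor:", dose_wedge[centre] / dose_open[centre])
```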

  3. Method of superposition of dislocations for finding stress-strain state around fan-shaped structure in a brittle rock

    Science.gov (United States)

    Sadovskii, V. M.; Sadovskaya, O. V.

    2016-10-01

    The Tarasov fan-shaped mechanism, simulating the formation of shear ruptures in a brittle rock at stress conditions corresponding to seismogenic depths, is analyzed. For computation of the stress-strain state of the rock near the equilibrium fan structure, an original method is constructed. The fault is modeled as a narrow elongated layer, filled with the domino-blocks, between two elastic half-spaces. Displacements and stresses around the fan are represented in integral form as a superposition of edge dislocations with an unknown distribution function of the Burgers vector. To take into account the stresses of lateral thrust, the solution of the plane problem of elasticity is used for a tensile crack, on the surfaces of which previously unknown normal stresses are distributed. The exact formulation of the problem leads to a system of two nonlinear singular integral equations, which is solved numerically by the method of successive approximations. The obtained solution is used when setting the initial data in computations of the dynamics of the Tarasov fan-shaped mechanism. With the help of this solution, the discontinuous nature of shear ruptures observed in natural and laboratory experiments is explained.

  4. On-line and real-time diagnosis method for proton membrane fuel cell (PEMFC) stack by the superposition principle

    Science.gov (United States)

    Lee, Young-Hyun; Kim, Jonghyeon; Yoo, Seungyeol

    2016-09-01

    A critical cell voltage drop in a stack can be followed by a stack defect. One method of detecting a defective cell is cell voltage monitoring; other methods are based on the nonlinear frequency response. In this paper, the superposition principle (SPP) for the diagnosis of a PEMFC stack is introduced. If critical cell voltage drops exist, the stack behaves as a nonlinear system. This nonlinearity can appear explicitly in the ohmic overpotential region of the voltage-current curve. To detect the critical cell voltage drop, the stack is excited by two direct test currents which have smaller amplitudes than the operating stack current and lie at an equal distance on either side of the operating current. If the difference between the voltage excited by one test current and the voltage excited by the load current is not equal to the difference between the other voltage response and the voltage excited by the load current, the stack acts as a nonlinear system, which means that there is a critical cell voltage drop. The deviation of this difference from zero reflects the degree of the system nonlinearity. A simulation model for the stack diagnosis is developed based on the SPP and experimentally validated.
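
    The linearity check described above reduces to comparing two voltage differences taken symmetrically about the operating current. The sketch below encodes that comparison and exercises it on an invented polarization curve with an artificial voltage drop; the current values and the toy curve are illustrative assumptions, not data from the paper.

```python
def superposition_test(stack_voltage, i_load, delta_i=2.0):
    """Linearity check of a fuel cell stack around its operating current.
    stack_voltage(i) must return the measured stack voltage at current i.
    Two test currents placed symmetrically about the load current are applied;
    for a linear (healthy) stack the two voltage differences are equal, so a
    nonzero residual indicates a critical cell voltage drop."""
    v_load = stack_voltage(i_load)
    v_low = stack_voltage(i_load - delta_i)
    v_high = stack_voltage(i_load + delta_i)
    return (v_low - v_load) - (v_load - v_high)

# Toy polarization curve: linear ohmic region plus a drop above 50 A.
def toy_stack(i):
    v = 60.0 - 0.05 * i
    return v - (0.4 * (i - 50.0) if i > 50.0 else 0.0)

print("healthy region residual:", superposition_test(toy_stack, 30.0))
print("faulty region residual: ", superposition_test(toy_stack, 50.0))
```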

  5. Gaussian-weighted RMSD superposition of proteins: a structural comparison for flexible proteins and predicted protein structures.

    Science.gov (United States)

    Damm, Kelly L; Carlson, Heather A

    2006-06-15

    Many proteins contain flexible structures such as loops and hinged domains. A simple root mean square deviation (RMSD) alignment of two different conformations of the same protein can be skewed by the difference between the mobile regions. To overcome this problem, we have developed a novel method to overlay two protein conformations by their atomic coordinates using a Gaussian-weighted RMSD (wRMSD) fit. The algorithm is based on the Kabsch least-squares method and determines an optimal transformation between two molecules by calculating the minimal weighted deviation between the two coordinate sets. Unlike other techniques that choose subsets of residues to overlay, all atoms are included in the wRMSD overlay. Atoms that barely move between the two conformations will have a greater weighting than those that have a large displacement. Our superposition tool has produced successful alignments when applied to proteins for which two conformations are known. The transformation calculation is heavily weighted by the coordinates of the static region of the two conformations, highlighting the range of flexibility in the overlaid structures. Lastly, we show how wRMSD fits can be used to evaluate predicted protein structures. Comparing a predicted fold to its experimentally determined target structure is another case of comparing two protein conformations of the same sequence, and the degree of alignment directly reflects the quality of the prediction.
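
    The algorithm sketched in the record, a Kabsch least-squares superposition in which each atom's contribution is Gaussian-weighted by how little it moves, can be written compactly as an iteratively reweighted fit. The code below is a schematic reimplementation under that reading, with an assumed weight function w_i = exp(-d_i²/c²) and an arbitrary choice of c; it is not the authors' released tool.

```python
import numpy as np

def weighted_kabsch(P, Q, w):
    """Optimal rotation R and translation t superposing point set P onto Q
    under per-atom weights w (weighted Kabsch least squares)."""
    w = w / w.sum()
    pc = (w[:, None] * P).sum(axis=0)
    qc = (w[:, None] * Q).sum(axis=0)
    H = (w[:, None] * (P - pc)).T @ (Q - qc)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, qc - R @ pc

def gaussian_wrmsd_align(P, Q, c=2.0, n_iter=20):
    """Iteratively reweighted superposition: atoms that move little between
    the two conformations keep weights near 1, mobile atoms are down-weighted."""
    w = np.ones(len(P))
    for _ in range(n_iter):
        R, t = weighted_kabsch(P, Q, w)
        d2 = ((P @ R.T + t - Q) ** 2).sum(axis=1)   # per-atom squared deviation
        w = np.exp(-d2 / c ** 2)
    return R, t, w
```

    Applying P @ R.T + t maps the first conformation onto the second; the final weights highlight the static core used to anchor the overlay, while flexible loops and hinged domains receive weights close to zero.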

  6. Intercomparison of the GOS approach, superposition T-matrix method, and laboratory measurements for black carbon optical properties during aging

    Science.gov (United States)

    He, Cenlin; Takano, Yoshi; Liou, Kuo-Nan; Yang, Ping; Li, Qinbin; Mackowski, Daniel W.

    2016-11-01

    We perform a comprehensive intercomparison of the geometric-optics surface-wave (GOS) approach, the superposition T-matrix method, and laboratory measurements for optical properties of fresh and coated/aged black carbon (BC) particles with complex structures. GOS and T-matrix calculations capture the measured optical (i.e., extinction, absorption, and scattering) cross sections of fresh BC aggregates, with 5-20% differences depending on particle size. We find that the T-matrix results tend to be lower than the measurements, due to uncertainty in theoretical approximations of realistic BC structures, particle property measurements, and numerical computations in the method. On the contrary, the GOS results are higher than the measurements (hence the T-matrix results) for BC radii 100 nm. We find good agreement between the two methods in asymmetry factors for various BC sizes and aggregating structures. For aged BC particles coated with sulfuric acid, GOS and T-matrix results closely match laboratory measurements of optical cross sections. Sensitivity calculations show that differences between the two methods in optical cross sections vary with coating structures for radii 100 nm. We find small deviations (≤10%) in asymmetry factors computed from the two methods for most BC coating structures and sizes, but several complex structures have 10-30% differences. This study provides the foundation for downstream application of the GOS approach in radiative transfer and climate studies.

  7. The Usability-Error Ontology

    DEFF Research Database (Denmark)

    2013-01-01

    ...ability to do systematic reviews and meta-analyses. In an effort to support improved and more interoperable data capture regarding Usability Errors, we have created the Usability Error Ontology (UEO) as a classification method for representing knowledge regarding Usability Errors. We expect the UEO will grow over time to support an increasing number of HIT system types. In this manuscript, we present this Ontology of Usability Error Types and specifically address Computerized Physician Order Entry (CPOE), Electronic Health Records (EHR) and Revenue Cycle HIT systems.

  8. Anxiety and Error Monitoring: Increased Error Sensitivity or Altered Expectations?

    Science.gov (United States)

    Compton, Rebecca J.; Carp, Joshua; Chaddock, Laura; Fineman, Stephanie L.; Quandt, Lorna C.; Ratliff, Jeffrey B.

    2007-01-01

    This study tested the prediction that the error-related negativity (ERN), a physiological measure of error monitoring, would be enhanced in anxious individuals, particularly in conditions with threatening cues. Participants made gender judgments about faces whose expressions were either happy, angry, or neutral. Replicating prior studies, midline…

  9. Measurement Error and Equating Error in Power Analysis

    Science.gov (United States)

    Phillips, Gary W.; Jiang, Tao

    2016-01-01

    Power analysis is a fundamental prerequisite for conducting scientific research. Without power analysis the researcher has no way of knowing whether the sample size is large enough to detect the effect he or she is looking for. This paper demonstrates how psychometric factors such as measurement error and equating error affect the power of…

  10. Error begat error: design error analysis and prevention in social infrastructure projects.

    Science.gov (United States)

    Love, Peter E D; Lopez, Robert; Edwards, David J; Goh, Yang M

    2012-09-01

    Design errors contribute significantly to cost and schedule growth in social infrastructure projects and to engineering failures, which can result in accidents and loss of life. Despite considerable research that has addressed error causation in construction projects, design errors still remain prevalent. This paper identifies the underlying conditions that contribute to design errors in social infrastructure projects (e.g. hospitals, education, law and order type buildings). A systemic model of error causation is propagated and subsequently used to develop a learning framework for design error prevention. The research suggests that a multitude of strategies should be adopted in congruence to prevent design errors from occurring and so ensure that safety and project performance are ameliorated.

  11. Spatial frequency domain error budget

    Energy Technology Data Exchange (ETDEWEB)

    Hauschildt, H; Krulewich, D

    1998-08-27

    The aim of this paper is to describe a methodology for designing and characterizing machines used to manufacture or inspect parts with spatial-frequency-based specifications. At Lawrence Livermore National Laboratory, one of our responsibilities is to design or select the appropriate machine tools to produce advanced optical and weapons systems. Recently, many of the component tolerances for these systems have been specified in terms of the spatial frequency content of residual errors on the surface. We typically use an error budget as a sensitivity analysis tool to ensure that the parts manufactured by a machine will meet the specified component tolerances. Error budgets provide the formalism whereby we account for all sources of uncertainty in a process, and sum them to arrive at a net prediction of how "precisely" a manufactured component can meet a target specification. Using the error budget, we are able to minimize risk during initial stages by ensuring that the machine will produce components that meet specifications before the machine is actually built or purchased. However, the current error budgeting procedure provides no formal mechanism for designing machines that can produce parts with spatial-frequency-based specifications. The output from the current error budgeting procedure is a single number estimating the net worst case or RMS error on the work piece. This procedure has limited ability to differentiate between low spatial frequency form errors versus high frequency surface finish errors. Therefore the current error budgeting procedure can lead us to reject a machine that is adequate or accept a machine that is inadequate. This paper will describe a new error budgeting methodology to aid in the design and characterization of machines used to manufacture or inspect parts with spatial-frequency-based specifications. The output from this new procedure is the continuous spatial frequency content of errors that result on a machined part. If the machine
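
    As an editorial illustration of the error-budget idea described above (not the Lawrence Livermore procedure itself), the sketch below sums independent error sources as power spectral densities (PSDs) over spatial frequency and reports band-limited RMS values instead of a single worst-case number; the source names, PSD shapes, and all numbers are invented placeholders.

```python
# Hedged sketch of a spatial-frequency error budget: independent error sources
# are modeled as PSDs, PSDs of independent sources add, and band-limited RMS
# values separate low-frequency form error from high-frequency finish error.
import numpy as np

f = np.logspace(-3, 1, 400)  # spatial frequency grid, cycles/mm

def psd_power_law(amplitude, exponent):
    """Illustrative 1/f^n PSD model for a single error source (nm^2 * mm)."""
    return amplitude * f ** (-exponent)

# hypothetical, independent error sources for a machine tool
sources = {
    "slideway_form":  psd_power_law(1e-2, 2.0),   # dominates low spatial frequencies
    "thermal_drift":  psd_power_law(5e-3, 1.5),
    "spindle_finish": psd_power_law(1e-4, 0.5),   # dominates high spatial frequencies
}

psd_total = sum(sources.values())  # for independent sources, PSDs add

def band_rms(psd, f_lo, f_hi):
    """RMS error contributed by one spatial-frequency band (integrate the PSD)."""
    mask = (f >= f_lo) & (f <= f_hi)
    return np.sqrt(np.sum(psd[mask][:-1] * np.diff(f[mask])))

print("form band   (f <= 0.01 /mm) RMS:", band_rms(psd_total, f[0], 1e-2))
print("finish band (f >= 1 /mm)    RMS:", band_rms(psd_total, 1.0, f[-1]))
```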

  12. Reducing errors in emergency surgery.

    Science.gov (United States)

    Watters, David A K; Truskett, Philip G

    2013-06-01

    Errors are to be expected in health care. Adverse events occur in around 10% of surgical patients and may be even more common in emergency surgery. There is little formal teaching on surgical error in surgical education and training programmes despite their frequency. This paper reviews surgical error and provides a classification system, to facilitate learning. The approach and language used to enable teaching about surgical error was developed through a review of key literature and consensus by the founding faculty of the Management of Surgical Emergencies course, currently delivered by General Surgeons Australia. Errors may be classified as being the result of commission, omission or inition. An error of inition is a failure of effort or will and is a failure of professionalism. The risk of error can be minimized by good situational awareness, matching perception to reality, and, during treatment, reassessing the patient, team and plan. It is important to recognize and acknowledge an error when it occurs and then to respond appropriately. The response will involve rectifying the error where possible but also disclosing, reporting and reviewing at a system level all the root causes. This should be done without shaming or blaming. However, the individual surgeon still needs to reflect on their own contribution and performance. A classification of surgical error has been developed that promotes understanding of how the error was generated, and utilizes a language that encourages reflection, reporting and response by surgeons and their teams. © 2013 The Authors. ANZ Journal of Surgery © 2013 Royal Australasian College of Surgeons.

  13. Error Analysis in English Language Learning

    Institute of Scientific and Technical Information of China (English)

    杜文婷

    2009-01-01

    Errors in English language learning are usually classified into interlingual errors and intralingual errors. A clear knowledge of the causes of these errors will help students learn English better.

  14. Error Analysis And Second Language Acquisition

    Institute of Scientific and Technical Information of China (English)

    王惠丽

    2016-01-01

    Based on theories of error and error analysis, this article explores the effect of errors and error analysis on second language acquisition (SLA) and offers advice to language teachers and learners.

  15. Quantifying error distributions in crowding.

    Science.gov (United States)

    Hanus, Deborah; Vul, Edward

    2013-03-22

    When multiple objects are in close proximity, observers have difficulty identifying them individually. Two classes of theories aim to account for this crowding phenomenon: spatial pooling and spatial substitution. Variations of these accounts predict different patterns of errors in crowded displays. Here we aim to characterize the kinds of errors that people make during crowding by comparing a number of error models across three experiments in which we manipulate flanker spacing, display eccentricity, and precueing duration. We find that both spatial intrusions and individual letter confusions play a considerable role in errors. Moreover, we find no evidence that a naïve pooling model that predicts errors based on a nonadditive combination of target and flankers explains errors better than an independent intrusion model (indeed, in our data, an independent intrusion model is slightly, but significantly, better). Finally, we find that manipulating trial difficulty in any way (spacing, eccentricity, or precueing) produces homogeneous changes in error distributions. Together, these results provide quantitative baselines for predictive models of crowding errors, suggest that pooling and spatial substitution models are difficult to tease apart, and imply that manipulations of crowding all influence a common mechanism that impacts subject performance.

  16. Discretization error of Stochastic Integrals

    CERN Document Server

    Fukasawa, Masaaki

    2010-01-01

    Asymptotic error distribution for approximation of a stochastic integral with respect to continuous semimartingale by Riemann sum with general stochastic partition is studied. Effective discretization schemes of which asymptotic conditional mean-squared error attains a lower bound are constructed. Two applications are given; efficient delta hedging strategies with transaction costs and effective discretization schemes for the Euler-Maruyama approximation are constructed.
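
    A minimal numerical illustration of the quantity being studied (not the discretization schemes constructed in the paper): the mean-squared error of a left-point Riemann-sum approximation to the Ito integral of Brownian motion against itself, whose exact value is known in closed form. Grid sizes and path counts below are arbitrary choices.

```python
# Hedged sketch: discretization error of the left-point Riemann sum for
# I = \int_0^1 W_t dW_t, whose exact Ito value is (W_1^2 - 1)/2.
# For an equidistant partition with n steps the MSE behaves like 1/(2n),
# so n * MSE should stay roughly constant (~0.5).
import numpy as np

rng = np.random.default_rng(0)
n_fine, n_paths = 2**12, 2000
dW = rng.normal(0.0, np.sqrt(1.0 / n_fine), size=(n_paths, n_fine))
W = np.concatenate([np.zeros((n_paths, 1)), np.cumsum(dW, axis=1)], axis=1)
exact = 0.5 * (W[:, -1] ** 2 - 1.0)          # closed-form Ito integral per path

for n in (2**4, 2**6, 2**8):
    step = n_fine // n                        # coarse equidistant partition
    Wc = W[:, ::step]                         # W sampled on the coarse grid
    riemann = np.sum(Wc[:, :-1] * np.diff(Wc, axis=1), axis=1)
    mse = np.mean((riemann - exact) ** 2)
    print(f"n = {n:4d}  MSE = {mse:.5f}  n*MSE = {n * mse:.3f}")
```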

  17. Dual Processing and Diagnostic Errors

    Science.gov (United States)

    Norman, Geoff

    2009-01-01

    In this paper, I review evidence from two theories in psychology relevant to diagnosis and diagnostic errors. "Dual Process" theories of thinking, frequently mentioned with respect to diagnostic error, propose that categorization decisions can be made with either a fast, unconscious, contextual process called System 1 or a slow, analytical,…

  18. Barriers to medical error reporting

    Directory of Open Access Journals (Sweden)

    Jalal Poorolajal

    2015-01-01

    Full Text Available Background: This study was conducted to explore the prevalence of medical error underreporting and associated barriers. Methods: This cross-sectional study was performed from September to December 2012. Five hospitals, affiliated with Hamadan University of Medical Sciences, in Hamedan, Iran were investigated. A self-administered questionnaire was used for data collection. Participants consisted of physicians, nurses, midwives, residents, interns, and staffs of radiology and laboratory departments. Results: Overall, 50.26% of subjects had committed but not reported medical errors. The main reasons mentioned for underreporting were lack of effective medical error reporting system (60.0%), lack of proper reporting form (51.8%), lack of peer supporting a person who has committed an error (56.0%), and lack of personal attention to the importance of medical errors (62.9%). The rate of committing medical errors was higher in men (71.4%), age of 50-40 years (67.6%), less-experienced personnel (58.7%), educational level of MSc (87.5%), and staff of radiology department (88.9%). Conclusions: This study outlined the main barriers to reporting medical errors and associated factors that may be helpful for healthcare organizations in improving medical error reporting as an essential component for patient safety enhancement.

  19. Onorbit IMU alignment error budget

    Science.gov (United States)

    Corson, R. W.

    1980-01-01

    The Star Tracker, Crew Optical Alignment Sight (COAS), and Inertial Measurement Unit (IMU) form a complex navigation system with a multitude of error sources, and these errors were combined. A complete list of the system errors is presented. The errors were combined in a rational way to yield an estimate of the IMU alignment accuracy for STS-1. The expected standard deviation in the IMU alignment error for STS-1 type alignments was determined to be 72 arc seconds per axis for star tracker alignments and 188 arc seconds per axis for COAS alignments. These estimates are based on current knowledge of the star tracker, COAS, IMU, and navigation base error specifications, and were partially verified by preliminary Monte Carlo analysis.

  20. Measurement Error Models in Astronomy

    CERN Document Server

    Kelly, Brandon C

    2011-01-01

    I discuss the effects of measurement error on regression and density estimation. I review the statistical methods that have been developed to correct for measurement error that are most popular in astronomical data analysis, discussing their advantages and disadvantages. I describe functional models for accounting for measurement error in regression, with emphasis on the method of moments approach and the modified loss function approach. I then describe structural models for accounting for measurement error in regression and density estimation, with emphasis on maximum-likelihood and Bayesian methods. As an example of a Bayesian application, I analyze an astronomical data set subject to large measurement errors and a non-linear dependence between the response and covariate. I conclude with some directions for future research.
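
    As a toy illustration of the method-of-moments idea reviewed here (one of several approaches the review covers; all numbers below are simulated, not astronomical data), the sketch shows how measurement error in a covariate attenuates a regression slope and how the slope is corrected when the error variance is known.

```python
# Hedged sketch: classical method-of-moments correction for attenuation bias
# in linear regression with an error-contaminated covariate of known error variance.
import numpy as np

rng = np.random.default_rng(1)
n, beta0, beta1 = 5000, 1.0, 2.0
sigma_u = 0.8                                  # known measurement-error std dev
x_true = rng.normal(0.0, 1.0, n)
y = beta0 + beta1 * x_true + rng.normal(0.0, 0.5, n)
x_obs = x_true + rng.normal(0.0, sigma_u, n)   # observed, noisy covariate

# naive OLS slope using the noisy covariate (attenuated toward zero)
b_naive = np.cov(x_obs, y)[0, 1] / np.var(x_obs, ddof=1)

# method-of-moments correction: divide by the estimated reliability ratio
reliability = (np.var(x_obs, ddof=1) - sigma_u**2) / np.var(x_obs, ddof=1)
b_corrected = b_naive / reliability

print(f"naive slope     {b_naive:.3f}")       # ~ beta1 / (1 + sigma_u^2) ~= 1.22
print(f"corrected slope {b_corrected:.3f}")   # ~ 2.0
```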

  1. Binary Error Correcting Network Codes

    CERN Document Server

    Wang, Qiwen; Li, Shuo-Yen Robert

    2011-01-01

    We consider network coding for networks experiencing worst-case bit-flip errors, and argue that this is a reasonable model for highly dynamic wireless network transmissions. We demonstrate that in this setup prior network error-correcting schemes can be arbitrarily far from achieving the optimal network throughput. We propose a new metric for errors under this model. Using this metric, we prove a new Hamming-type upper bound on the network capacity. We also show a commensurate lower bound based on GV-type codes that can be used for error-correction. The codes used to attain the lower bound are non-coherent (do not require prior knowledge of network topology). The end-to-end nature of our design enables our codes to be overlaid on classical distributed random linear network codes. Further, we free internal nodes from having to implement potentially computationally intensive link-by-link error-correction.
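
    For orientation only, the snippet below evaluates the classical binary sphere-packing (Hamming) upper bound and Gilbert lower bound on code size; the paper derives analogous Hamming-type and GV-type bounds for its network bit-flip metric, which are not reproduced here.

```python
# Hedged sketch: classical point-to-point bounds for binary block codes,
# shown only to illustrate the style of bound adapted in the paper.
from math import comb, log2

def hamming_upper_bound(n, t):
    """log2 of the max number of codewords of length n correcting t errors."""
    volume = sum(comb(n, i) for i in range(t + 1))   # Hamming ball of radius t
    return n - log2(volume)

def gilbert_lower_bound(n, d):
    """log2 of a code size guaranteed to exist with minimum distance d."""
    volume = sum(comb(n, i) for i in range(d))       # ball of radius d-1
    return n - log2(volume)

n, t = 127, 3
print(f"sphere-packing bound: k <= {hamming_upper_bound(n, t):.2f} bits")
print(f"Gilbert bound:        k >= {gilbert_lower_bound(n, 2 * t + 1):.2f} bits")
```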

  2. Error Propagation in the Hypercycle

    CERN Document Server

    Campos, P R A; Stadler, P F

    1999-01-01

    We study analytically the steady-state regime of a network of n error-prone self-replicating templates forming an asymmetric hypercycle and its error tail. We show that the existence of a master template with a higher non-catalyzed self-replicative productivity, a, than the error tail ensures the stability of chains in which m < n-1 templates coexist with the master species; the stability of these chains against the error tail is guaranteed for catalytic coupling strengths (K) of order a. We find that the hypercycle becomes more stable than the chains only for K of order a^2. Furthermore, we show that the minimal replication accuracy per template needed to maintain the hypercycle, the so-called error threshold, vanishes like sqrt(n/K) for large K and n <= 4.

  3. FPU-Supported Running Error Analysis

    OpenAIRE

    T. Zahradnický; R. Lórencz

    2010-01-01

    A-posteriori forward rounding error analyses tend to give sharper error estimates than a-priori ones, as they use actual data quantities. One such a-posteriori analysis – running error analysis – uses expressions consisting of two parts; one generates the error and the other propagates input errors to the output. This paper suggests replacing the error generating term with an FPU-extracted rounding error estimate, which produces a sharper error bound.
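
    A small sketch of the contrast the paper draws, applied to recursive summation (the data and the TwoSum error-extraction trick are illustrative; the paper's FPU-register-based approach is not reproduced): the classical running error bound is compared with the rounding error actually extracted at each operation.

```python
# Hedged sketch: running error analysis of a summation.  "running_bound" is the
# usual a-posteriori bound (unit roundoff times the accumulated |partial sums|);
# "extracted" accumulates the exact per-operation rounding error obtained with
# the TwoSum error-free transform, giving a much sharper estimate.
import numpy as np

def two_sum(a, b):
    """Knuth's TwoSum: returns (s, e) with s = fl(a+b) and a + b = s + e exactly."""
    s = a + b
    bp = s - a            # "virtual" b
    ap = s - bp           # "virtual" a
    return s, (a - ap) + (b - bp)

rng = np.random.default_rng(2)
x = rng.uniform(-1.0, 1.0, 10_000).astype(np.float64)
u = np.finfo(np.float64).eps / 2              # unit roundoff

s = 0.0
running_bound = 0.0                           # classical running error bound
extracted = 0.0                               # sum of actual per-op rounding errors
for xi in x:
    s, e = two_sum(s, xi)
    running_bound += u * abs(s)
    extracted += e

exact = float(np.sum(x.astype(np.longdouble)))  # extended-precision reference where available
print(f"true error       {abs(s - exact):.3e}")
print(f"extracted error  {abs(extracted):.3e}")   # sharp estimate of the true error
print(f"running bound    {running_bound:.3e}")    # conservative bound
```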

  4. Control of selective population transfer and creation of two orthogonal maximally superposition states via a pair of pump and chirped Stokes pulses

    Science.gov (United States)

    Zhang, Zhenhua; Tian, Jin; Du, Juan

    2017-02-01

    We demonstrate a simple way to realize control of population transfer and creation of two orthogonal maximally superposition states in a Λ-type four-level system with closely spaced doublet target states via a pair of pump and chirped Stokes pulses. It is illustrated that the population in the initial state can be selectively, completely and robustly transferred to either of the doublet target states via chirped adiabatic passage with the suitable chirp rate and frequency detuning of the Stokes pulse. Besides, creation of two orthogonal maximally superposition states between the initial state and intermediate state with equal amplitude but inverse relative phases is also shown, which may have potential applications in the preparations of quantum bits.

  5. The manipulation of massive ro-vibronic superpositions using time-frequency-resolved coherent anti-Stokes Raman scattering (TFRCARS) from quantum control to quantum computing

    CERN Document Server

    Zadoyan, R; Lidar, D A; Apkarian, V A

    2001-01-01

    Molecular ro-vibronic coherences, joint energy-time distributions of quantum amplitudes, are selectively prepared, manipulated, and imaged in Time-Frequency-Resolved Coherent Anti-Stokes Raman Scattering (TFRCARS) measurements using femtosecond laser pulses. The studies are implemented in iodine vapor, with its thermally occupied statistical ro-vibrational density serving as initial state. The evolution of the massive ro-vibronic superpositions, consisting of 1000 eigenstates, is followed through two-dimensional images. The first- and second-order coherences are captured using time-integrated frequency-resolved CARS, while the third-order coherence is captured using time-gated frequency-resolved CARS. The Fourier filtering provided by time integrated detection projects out single ro-vibronic transitions, while time-gated detection allows the projection of arbitrary ro-vibronic superpositions from the coherent third-order polarization. Beside the control and imaging of chemistry, the controlled manipulation of...

  6. Small Atomic Orbital Basis Set First-Principles Quantum Chemical Methods for Large Molecular and Periodic Systems: A Critical Analysis of Error Sources.

    Science.gov (United States)

    Sure, Rebecca; Brandenburg, Jan Gerit; Grimme, Stefan

    2016-04-01

    In quantum chemical computations the combination of Hartree-Fock or a density functional theory (DFT) approximation with relatively small atomic orbital basis sets of double-zeta quality is still widely used, for example, in the popular B3LYP/6-31G* approach. In this Review, we critically analyze the two main sources of error in such computations, that is, the basis set superposition error on the one hand and the missing London dispersion interactions on the other. We review various strategies to correct those errors and present exemplary calculations on mainly noncovalently bound systems of widely varying size. Energies and geometries of small dimers, large supramolecular complexes, and molecular crystals are covered. We conclude that it is not justified to rely on fortunate error compensation, as the main inconsistencies can be cured by modern correction schemes which clearly outperform the plain mean-field methods.
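
    For reference, the counterpoise correction referred to throughout this Review is commonly written as follows (standard Boys-Bernardi notation, not taken from this record: superscripts denote the basis, subscripts the fragment, and the argument the geometry at which the fragment is evaluated):

\[
\Delta E_{\mathrm{int}}^{\mathrm{CP}} = E_{AB}^{AB}(AB) - E_{A}^{AB}(A) - E_{B}^{AB}(B),
\qquad
\delta_{\mathrm{BSSE}} = \bigl[E_{A}^{A}(A) - E_{A}^{AB}(A)\bigr] + \bigl[E_{B}^{B}(B) - E_{B}^{AB}(B)\bigr].
\]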

  7. North-south asymmetry of solar activity as a superposition of two realizations - the sign and absolute value

    Science.gov (United States)

    Badalyan, O. G.; Obridko, V. N.

    2017-07-01

    Context. Since the occurrence of north-south asymmetry (NSA) of alternating sign may be determined by different mechanisms, the frequency and amplitude characteristics of this phenomenon should be considered separately. Aims: We propose a new approach to the description of the NSA of solar activity. Methods: The asymmetry defined as A = (N-S)/(N + S) (where N and S are, respectively, the indices of activity of the northern and southern hemispheres) is treated as a superposition of two functions: the sign of asymmetry (signature) and its absolute value (modulus). This approach is applied to the analysis of the NSA of sunspot group areas for the period 1874-2013. Results: We show that the sign of asymmetry provides information on the behavior of the asymmetry. In particular, it displays quasi-periodic variation with a period of 12 yr and quasi-biennial oscillations as the asymmetry itself. The statistics of the so-called monochrome intervals (long periods of positive or negative asymmetry) are considered and it is shown that the distribution of these intervals is described by the random distribution law. This means that the dynamo mechanisms governing the cyclic variation of solar activity must involve random processes. At the same time, the asymmetry modulus has completely different statistical properties and is probably associated with processes that determine the amplitude of the cycle. One can reliably isolate an 11-yr cycle in the behavior of the asymmetry absolute value shifted by half a period with respect to the Wolf numbers. It is shown that the asymmetry modulus has a significant prognostic value: the higher the maximum of the asymmetry modulus, the lower the following Wolf number maximum. Conclusions: A fundamental nature of this concept of NSA is discussed in the context of the general methodology of cognizing the world. It is supposed that the proposed description of the NSA will help clarify the nature of this phenomenon.
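
    A minimal sketch of the decomposition proposed above, with made-up hemispheric indices standing in for the sunspot-group areas: the asymmetry is split into its sign (signature) and its absolute value (modulus), which the authors analyze separately.

```python
# Hedged sketch: decomposing the north-south asymmetry A = (N - S)/(N + S)
# into its sign and its modulus.  N and S are placeholder activity indices.
import numpy as np

N = np.array([120.0, 95.0, 60.0, 80.0, 40.0])   # northern-hemisphere index (made up)
S = np.array([100.0, 110.0, 70.0, 30.0, 55.0])  # southern-hemisphere index (made up)

A = (N - S) / (N + S)         # north-south asymmetry
signature = np.sign(A)        # carries the timing / "monochrome interval" information
modulus = np.abs(A)           # carries the amplitude (prognostic) information

print("A        ", np.round(A, 3))
print("signature", signature)
print("modulus  ", np.round(modulus, 3))
```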

  8. Application of a novel superposition technique to the structure of an organophosphorus insecticide and an organometallic compound

    Energy Technology Data Exchange (ETDEWEB)

    Beckman, D.E.

    1979-01-01

    The structures of O,O-dimethyl O-(3,5,6-trichloro-2-pyridyl) phosphorothioate (Dowco 214) and dicarbonylbis(η-cyclopentadienyl)-μ-carbonyl-μ-thiocarbonyldiiron have been solved by single crystal x-ray diffraction and use of a modified Patterson superposition technique that uses two multiple vectors to define a structural parallelogram. This method results in a simpler and more accurate shift vector position determination and a general improvement in map clarity. Dowco 214 crystallizes in the space group P-1 with a = 11.598(2) Å, b = 13.619(3) Å, c = 8.281(1) Å, α = 94.65(1)°, β = 94.87(2)°, γ = 79.97(2)° and four molecules per cell (two per asymmetric unit). A CNDO II calculation was performed and partial charge densities assigned. The molecule contains distances between positively charged centers that correspond well to the reported anionic-esteratic distance (a possible reaction variable) in AChE. Additional reaction variables are discussed. Cp2Fe2(CO)3CS crystallizes in the space group P21/c with a = 14.508(8) Å, b = 13.618(5) Å, c = 15.193(7) Å, β = 110.50(6)° and eight molecules per unit cell (two per asymmetric unit). The compound contains both a carbonyl and a thiocarbonyl bridge and π-bonded cyclopentadienyl rings that are cis to one another. The iron-iron bond length is intermediate to that of its carbonyl and thiocarbonyl analogs.

  9. Study and verification of the superposition method used for determining the pressure losses of the heat exchangers

    Directory of Open Access Journals (Sweden)

    Petru Michal

    2015-01-01

    Full Text Available This paper deals with the study of the pressure losses of a new product line of heat convectors. For all devices connected to the heating circuit of a building, it is required to declare tabulated values of pressure drops. The heat exchangers are manufactured in many different dimensions and atypical shapes, so an individual assessment of the pressure losses for each type is very time consuming. Therefore, based on the resulting data of the experiments and numerical models, an electronic database was created that can be used for calculating the total pressure losses of an optionally assembled exchanger. The measurements are routinely performed in the hydrodynamic laboratory of the manufacturer Licon heat, and the numerical models are carried out in COMSOL Multiphysics. Different variations of the convector geometry cause a non-linear progression of energy losses, which is proportionally about 30% larger for the smaller exchangers than for the larger types. The results of the experiments and the numerical simulations were in very good agreement. A considerable influence of the water temperature on the total energy losses has been demonstrated; this is mainly caused by the different ranges of the Reynolds number depending on the viscosity of the liquid used. Concerning the tested superposition method, it is not possible to easily find characteristic values appropriate for each individual component of the heat exchanger, since every component behaves differently depending on the complexity of the exchanger. However, a correction coefficient, dependent on the matrix of the exchanger, that is suitable for the entire range of the developed product line has been found.

  10. APPLICATION OF TIME-TEMPERATURE SUPERPOSITION PRINCIPLE TO EVALUATION OF SCATTERING INTENSITY EVOLUTION IN PHASE SEPARATION FOR PMMA/SAN BLENDS

    Institute of Scientific and Technical Information of China (English)

    Mao Peng; Qiang Zheng

    2000-01-01

    Spinodal phase separation behavior of poly(methyl methacrylate)/poly(styrene-co-acrylonitrile) (PMMA/SAN)blends was investigated by the time-resolved small angle light scattering (SALS) technique. It was found that the influence of temperature on the scattering intensity evolution followed the time-temperature superposition principle. The relationship between temperature and the relaxation time of scattering intensity I(t) can be well described by the Williams-Landel-Ferry (WLF) function.
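
    For reference, the Williams-Landel-Ferry shift factor mentioned above is commonly written in the form below; the constants C1 and C2 and the reference temperature are material- and reference-dependent, and the values fitted for the PMMA/SAN blends are not given in this record:

\[
\log_{10} a_T = \frac{-C_1\,(T - T_{\mathrm{ref}})}{C_2 + (T - T_{\mathrm{ref}})}
\]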

  11. Realization of GHZ States and the GHZ Test via Cavity QED for a Cavity Prepared in a Superposition of Zero and One Fock States

    CERN Document Server

    Guerra, E S

    2004-01-01

    In this article we discuss the realization of atomic GHZ states involving three-level atoms in a cascade and in a lambda configuration, and we show explicitly how to use this state to perform the GHZ test, in which it is possible to decide between local realism theories and quantum mechanics. The experimental realizations proposed make use of the interaction of Rydberg atoms with a cavity prepared in a state which is a superposition of zero and one Fock states.

  12. EVALUATION OF ERRORS IN PARAMETERS DETERMINATION FOR THE EARTH HIGHLY ANOMALOUS GRAVITY FIELD

    Directory of Open Access Journals (Sweden)

    L. P. Staroseltsev

    2016-05-01

    Full Text Available Subject of Research. The paper presents research results and the simulation of errors in determining the Earth gravity field parameters for regions with high segmentation of the gravity field. Kalman filter estimation of the determination errors is shown. Method. A simulation model for the realization of the inertial geodetic method for determining the Earth gravity field parameters is proposed. The model is based on a high-precision inertial navigation system (INS) with free gyros and a high-accuracy satellite system. The possibility of finding conformity between the deterministic and stochastic approaches in gravity potential modeling is shown with the example of a point-mass model. Main Results. Computer simulation shows that for determining the Earth gravity field parameters the gyro error model can be reduced to two significant indices, one for each gyro. It is also shown that for regions with high segmentation of the gravity field a point-mass model can be used. This model is a superposition of attractive and repulsive masses - the so-called gravitational dipole. Practical Relevance. The reduction of the gyro error model can reduce the dimension of the Kalman filter used in the integrated system, which decreases the computation time and increases the visibility of the state vector. Finding conformity between the deterministic and stochastic approaches allows the application of both deterministic and statistical terminology. It also helps to create a simulation model for regions with high segmentation of the gravity field.
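
    A minimal sketch of the point-mass ("gravitational dipole") idea mentioned above: the anomalous potential is modeled as the superposition of an attractive and a repulsive point mass. Positions, masses, and the evaluation point below are arbitrary placeholders, not values from the study.

```python
# Hedged sketch: anomalous potential of a "gravitational dipole" as a
# superposition of one attractive and one repulsive point mass.
import numpy as np

G = 6.674e-11                                   # gravitational constant, m^3 kg^-1 s^-2

def point_mass_potential(r, r_src, m):
    """Potential of a single point mass m located at r_src, evaluated at r."""
    return G * m / np.linalg.norm(np.asarray(r) - np.asarray(r_src))

def dipole_potential(r, r_plus, m_plus, r_minus, m_minus):
    """Superposition of an attractive (+) and a repulsive (-) point mass."""
    return (point_mass_potential(r, r_plus, m_plus)
            - point_mass_potential(r, r_minus, m_minus))

r_eval = (0.0, 0.0, 6.4e6)                        # evaluation point near the surface
V = dipole_potential(r_eval,
                     r_plus=(1.0e5, 0.0, 6.0e6),  m_plus=1.0e12,
                     r_minus=(-1.0e5, 0.0, 6.0e6), m_minus=1.0e12)
print(f"anomalous potential at r_eval: {V:.3e} m^2/s^2")
```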

  13. Extending the lifetime of a quantum bit with error correction in superconducting circuits

    Science.gov (United States)

    Ofek, Nissim; Petrenko, Andrei; Heeres, Reinier; Reinhold, Philip; Leghtas, Zaki; Vlastakis, Brian; Liu, Yehan; Frunzio, Luigi; Girvin, S. M.; Jiang, L.; Mirrahimi, Mazyar; Devoret, M. H.; Schoelkopf, R. J.

    2016-08-01

    Quantum error correction (QEC) can overcome the errors experienced by qubits and is therefore an essential component of a future quantum computer. To implement QEC, a qubit is redundantly encoded in a higher-dimensional space using quantum states with carefully tailored symmetry properties. Projective measurements of these parity-type observables provide error syndrome information, with which errors can be corrected via simple operations. The ‘break-even’ point of QEC—at which the lifetime of a qubit exceeds the lifetime of the constituents of the system—has so far remained out of reach. Although previous works have demonstrated elements of QEC, they primarily illustrate the signatures or scaling properties of QEC codes rather than test the capacity of the system to preserve a qubit over time. Here we demonstrate a QEC system that reaches the break-even point by suppressing the natural errors due to energy loss for a qubit logically encoded in superpositions of Schrödinger-cat states of a superconducting resonator. We implement a full QEC protocol by using real-time feedback to encode, monitor naturally occurring errors, decode and correct. As measured by full process tomography, without any post-selection, the corrected qubit lifetime is 320 microseconds, which is longer than the lifetime of any of the parts of the system: 20 times longer than the lifetime of the transmon, about 2.2 times longer than the lifetime of an uncorrected logical encoding and about 1.1 times longer than the lifetime of the best physical qubit (the |0>f and |1>f Fock states of the resonator). Our results illustrate the benefit of using hardware-efficient qubit encodings rather than traditional QEC schemes. Furthermore, they advance the field of experimental error correction from confirming basic concepts to exploring the metrics that drive system performance and the challenges in realizing a fault-tolerant system.

  14. Extending the lifetime of a quantum bit with error correction in superconducting circuits.

    Science.gov (United States)

    Ofek, Nissim; Petrenko, Andrei; Heeres, Reinier; Reinhold, Philip; Leghtas, Zaki; Vlastakis, Brian; Liu, Yehan; Frunzio, Luigi; Girvin, S M; Jiang, L; Mirrahimi, Mazyar; Devoret, M H; Schoelkopf, R J

    2016-08-25

    Quantum error correction (QEC) can overcome the errors experienced by qubits and is therefore an essential component of a future quantum computer. To implement QEC, a qubit is redundantly encoded in a higher-dimensional space using quantum states with carefully tailored symmetry properties. Projective measurements of these parity-type observables provide error syndrome information, with which errors can be corrected via simple operations. The 'break-even' point of QEC--at which the lifetime of a qubit exceeds the lifetime of the constituents of the system--has so far remained out of reach. Although previous works have demonstrated elements of QEC, they primarily illustrate the signatures or scaling properties of QEC codes rather than test the capacity of the system to preserve a qubit over time. Here we demonstrate a QEC system that reaches the break-even point by suppressing the natural errors due to energy loss for a qubit logically encoded in superpositions of Schrödinger-cat states of a superconducting resonator. We implement a full QEC protocol by using real-time feedback to encode, monitor naturally occurring errors, decode and correct. As measured by full process tomography, without any post-selection, the corrected qubit lifetime is 320 microseconds, which is longer than the lifetime of any of the parts of the system: 20 times longer than the lifetime of the transmon, about 2.2 times longer than the lifetime of an uncorrected logical encoding and about 1.1 times longer than the lifetime of the best physical qubit (the |0〉f and |1〉f Fock states of the resonator). Our results illustrate the benefit of using hardware-efficient qubit encodings rather than traditional QEC schemes. Furthermore, they advance the field of experimental error correction from confirming basic concepts to exploring the metrics that drive system performance and the challenges in realizing a fault-tolerant system.

  15. Quantile Regression With Measurement Error

    KAUST Repository

    Wei, Ying

    2009-08-27

    Regression quantiles can be substantially biased when the covariates are measured with error. In this paper we propose a new method that produces consistent linear quantile estimation in the presence of covariate measurement error. The method corrects the measurement error induced bias by constructing joint estimating equations that simultaneously hold for all the quantile levels. An iterative EM-type estimation algorithm to obtain the solutions to such joint estimation equations is provided. The finite sample performance of the proposed method is investigated in a simulation study, and compared to the standard regression calibration approach. Finally, we apply our methodology to part of the National Collaborative Perinatal Project growth data, a longitudinal study with an unusual measurement error structure. © 2009 American Statistical Association.

  16. The uncorrected refractive error challenge

    Directory of Open Access Journals (Sweden)

    Kovin Naidoo

    2016-11-01

    Full Text Available Refractive error affects people of all ages, socio-economic status and ethnic groups. The most recent statistics estimate that, worldwide, 32.4 million people are blind and 191 million people have vision impairment. Vision impairment has been defined based on distance visual acuity only, and uncorrected distance refractive error (mainly myopia) is the single biggest cause of worldwide vision impairment. However, when we also consider near visual impairment, it is clear that even more people are affected. From research it was estimated that the number of people with vision impairment due to uncorrected distance refractive error was 107.8 million,1 and the number of people affected by uncorrected near refractive error was 517 million, giving a total of 624.8 million people.

  17. Numerical optimization with computational errors

    CERN Document Server

    Zaslavski, Alexander J

    2016-01-01

    This book studies the approximate solutions of optimization problems in the presence of computational errors. A number of results are presented on the convergence behavior of algorithms in a Hilbert space; these algorithms are examined taking into account computational errors. The author illustrates that algorithms generate a good approximate solution if computational errors are bounded from above by a small positive constant. Known computational errors are examined with the aim of determining an approximate solution. Researchers and students interested in optimization theory and its applications will find this book instructive and informative. This monograph contains 16 chapters, including chapters devoted to the subgradient projection algorithm, the mirror descent algorithm, the gradient projection algorithm, Weiszfeld's method, constrained convex minimization problems, the convergence of a proximal point method in a Hilbert space, the continuous subgradient method, penalty methods and Newton's method.

  18. Error Analysis in Mathematics Education.

    Science.gov (United States)

    Radatz, Hendrik

    1979-01-01

    Five types of errors in an information-processing classification are discussed: language difficulties; difficulties in obtaining spatial information; deficient mastery of prerequisite skills, facts, and concepts; incorrect associations; and application of irrelevant rules. (MP)

  19. Comprehensive Error Rate Testing (CERT)

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Centers for Medicare and Medicaid Services (CMS) implemented the Comprehensive Error Rate Testing (CERT) program to measure improper payments in the Medicare...

  20. Aging transition by random errors

    Science.gov (United States)

    Sun, Zhongkui; Ma, Ning; Xu, Wei

    2017-02-01

    In this paper, the effects of random errors on oscillating behaviors have been studied theoretically and numerically in a prototypical coupled nonlinear oscillator. Two kinds of noise have been employed to represent measurement errors in the parameter specifying the distance from a Hopf bifurcation in the Stuart-Landau model. It has been demonstrated that when the random errors are uniform random noise, increasing the noise intensity can effectively increase the robustness of the system. When the random errors are normal random noise, increasing the variance can also enhance the robustness of the system, provided that the probability that an aging transition occurs reaches a certain threshold; the opposite conclusion is obtained when the probability is less than the threshold. These findings provide an alternative candidate to control the critical value of aging transition in coupled oscillator systems, which are composed of active oscillators and inactive oscillators in practice.

  1. Aging transition by random errors

    Science.gov (United States)

    Sun, Zhongkui; Ma, Ning; Xu, Wei

    2017-01-01

    In this paper, the effects of random errors on oscillating behaviors have been studied theoretically and numerically in a prototypical coupled nonlinear oscillator. Two kinds of noise have been employed to represent measurement errors in the parameter specifying the distance from a Hopf bifurcation in the Stuart-Landau model. It has been demonstrated that when the random errors are uniform random noise, increasing the noise intensity can effectively increase the robustness of the system. When the random errors are normal random noise, increasing the variance can also enhance the robustness of the system, provided that the probability that an aging transition occurs reaches a certain threshold; the opposite conclusion is obtained when the probability is less than the threshold. These findings provide an alternative candidate to control the critical value of aging transition in coupled oscillator systems, which are composed of active oscillators and inactive oscillators in practice. PMID:28198430

  2. Error correcting coding for OTN

    DEFF Research Database (Denmark)

    Justesen, Jørn; Larsen, Knud J.; Pedersen, Lars A.

    2010-01-01

    Forward error correction codes for 100 Gb/s optical transmission are currently receiving much attention from transport network operators and technology providers. We discuss the performance of hard decision decoding using product type codes that cover a single OTN frame or a small number of such frames. In particular we argue that a three-error correcting BCH is the best choice for the component code in such systems.

  3. Errors in Chemical Sensor Measurements

    Directory of Open Access Journals (Sweden)

    Artur Dybko

    2001-06-01

    Full Text Available Various types of errors during the measurements of ion-selective electrodes, ion-sensitive field effect transistors, and fibre optic chemical sensors are described. The errors were divided according to their nature and place of origin into chemical, instrumental and non-chemical. The influence of interfering ions, leakage of the membrane components, liquid junction potential as well as sensor wiring, ambient light and temperature is presented.

  4. Error image aware content restoration

    Science.gov (United States)

    Choi, Sungwoo; Lee, Moonsik; Jung, Byunghee

    2015-12-01

    As the resolution of TV significantly increased, content consumers have become increasingly sensitive to the subtlest defect in TV contents. This rising standard in quality demanded by consumers has posed a new challenge in today's context where the tape-based process has transitioned to the file-based process: the transition necessitated digitalizing old archives, a process which inevitably produces errors such as disordered pixel blocks, scattered white noise, or totally missing pixels. Unsurprisingly, detecting and fixing such errors require a substantial amount of time and human labor to meet the standard demanded by today's consumers. In this paper, we introduce a novel, automated error restoration algorithm which can be applied to different types of classic errors by utilizing adjacent images while preserving the undamaged parts of an error image as much as possible. We tested our method on error images detected by our quality check system in the KBS (Korean Broadcasting System) video archive. We are also implementing the algorithm as a plugin for a well-known NLE (non-linear editing system), which is a familiar tool for quality control agents.

  5. Quantum error correction for beginners.

    Science.gov (United States)

    Devitt, Simon J; Munro, William J; Nemoto, Kae

    2013-07-01

    Quantum error correction (QEC) and fault-tolerant quantum computation represent one of the most vital theoretical aspects of quantum information processing. It was well known from the early developments of this exciting field that the fragility of coherent quantum systems would be a catastrophic obstacle to the development of large-scale quantum computers. The introduction of quantum error correction in 1995 showed that active techniques could be employed to mitigate this fatal problem. However, quantum error correction and fault-tolerant computation is now a much larger field and many new codes, techniques, and methodologies have been developed to implement error correction for large-scale quantum algorithms. In response, we have attempted to summarize the basic aspects of quantum error correction and fault-tolerance, not as a detailed guide, but rather as a basic introduction. The development in this area has been so pronounced that many in the field of quantum information, specifically researchers who are new to quantum information or people focused on the many other important issues in quantum computation, have found it difficult to keep up with the general formalisms and methodologies employed in this area. Rather than introducing these concepts from a rigorous mathematical and computer science framework, we instead examine error correction and fault-tolerance largely through detailed examples, which are more relevant to experimentalists today and in the near future.

  6. Dominant modes via model error

    Science.gov (United States)

    Yousuff, A.; Breida, M.

    1992-01-01

    Obtaining a reduced model of a stable mechanical system with proportional damping is considered. Such systems can be conveniently represented in modal coordinates. Two popular schemes, the modal cost analysis and the balancing method, offer simple means of identifying dominant modes for retention in the reduced model. The dominance is measured via the modal costs in the case of modal cost analysis and via the singular values of the Gramian-product in the case of balancing. Though these measures do not exactly reflect the more appropriate model error, which is the H2 norm of the output-error between the full and the reduced models, they do lead to simple computations. Normally, the model error is computed after the reduced model is obtained, since it is believed that, in general, the model error cannot be easily computed a priori. The authors point out that the model error can also be calculated a priori, just as easily as the above measures. Hence, the model error itself can be used to determine the dominant modes. Moreover, the simplicity of the computations does not presume any special properties of the system, such as small damping, orthogonal symmetry, etc.
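
    A small sketch of the model-error measure discussed above: the H2 norm of the output error between a full and a reduced state-space model, computed from a controllability-Gramian Lyapunov equation. The example matrices are an arbitrary stable two-mode system with one mode retained, not taken from the paper.

```python
# Hedged sketch: H2 model error ||G - Gr||_H2 between a full model (A, B, C)
# and a reduced model (Ar, Br, Cr), via the controllability Gramian of the
# stacked error system.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, block_diag

def h2_norm(A, B, C):
    """H2 norm via the controllability Gramian P solving A P + P A^T + B B^T = 0."""
    P = solve_continuous_lyapunov(A, -B @ B.T)
    return np.sqrt(np.trace(C @ P @ C.T))

# full model: two lightly damped modes; reduced model keeps only the first mode
A  = block_diag([[0, 1], [-1, -0.1]], [[0, 1], [-25, -0.5]])
B  = np.array([[0.0], [1.0], [0.0], [1.0]])
C  = np.array([[1.0, 0.0, 1.0, 0.0]])
Ar = np.array([[0.0, 1.0], [-1.0, -0.1]])
Br = np.array([[0.0], [1.0]])
Cr = np.array([[1.0, 0.0]])

# error system: states of both models stacked, output = y_full - y_reduced
Ae = block_diag(A, Ar)
Be = np.vstack([B, Br])
Ce = np.hstack([C, -Cr])
print(f"model error ||G - Gr||_H2 = {h2_norm(Ae, Be, Ce):.4f}")
```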

  7. π Type Lithium Bond Interaction between Ethylene,Acetylene,or Benzene and Amido-lithium

    Institute of Scientific and Technical Information of China (English)

    YUAN, Kun; LIU, Yanzhi; LÜ, Lingling; ZHU, Yuancheng; ZHANG, Ji; ZHANG, dunyan

    2009-01-01

    The optimized geometries and interaction energies corrected for basis set superposition error (BSSE) of the lithium bond complexes between ethylene, acetylene, or benzene and amido-lithium have been calculated at the B3LYP/6-311++G** and MP2/6-311++G** levels. Only one configuration was obtained for each lithium bond system. All the equilibrium geometries were confirmed to be stable states by analytical frequency computations. The calculations showed that all the N(2)-Li(4) bond lengths increased obviously and that a red shift of the N(2)-Li(4) stretching frequency occurred after the complexes formed. The calculated binding energies with BSSE and zero-point vibrational energy corrections of complexes Ⅰ, Ⅱ and Ⅲ are -26.04, -24.86 and -30.02 kJ·mol-1 at the MP2 level, respectively. Natural bond orbital (NBO) theory analysis revealed that the three complexes are all formed through a π-type lithium bond interaction between ethylene, acetylene, or benzene and amido-lithium.
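
    For readers who want to reproduce this kind of BSSE-corrected interaction energy, the sketch below shows how a counterpoise-corrected MP2 interaction energy can be requested with the Psi4 package; the fragment geometry is a crude placeholder for an NH2Li...ethylene complex, not the optimized structure from this work.

```python
# Hedged sketch: counterpoise-corrected (bsse_type="cp") MP2 interaction energy
# in Psi4; "--" separates the two fragments of the dimer.  Coordinates are
# illustrative placeholders only.
import psi4

dimer = psi4.geometry("""
0 1
N   0.000  0.000   1.900
H   0.820  0.000   2.480
H  -0.820  0.000   2.480
Li  0.000  0.000   0.050
--
0 1
C   0.000  0.667  -2.100
C   0.000 -0.667  -2.100
H   0.920  1.230  -2.100
H  -0.920  1.230  -2.100
H   0.920 -1.230  -2.100
H  -0.920 -1.230  -2.100
units angstrom
""")

psi4.set_options({"freeze_core": True})
# bsse_type="cp" runs the monomers in the full dimer basis and returns the
# counterpoise-corrected interaction energy (in hartree).
e_int_cp = psi4.energy("mp2/6-311++g**", bsse_type="cp", molecule=dimer)
print("CP-corrected interaction energy (kJ/mol):", e_int_cp * 2625.5)
```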

  8. Theoretical Study on Intermolecular Interactions and Thermodynamic Properties of Nitroamine Dimers

    Institute of Scientific and Technical Information of China (English)

    JU,Xue-Hai(居学海); XIAO,He-Ming(肖鹤鸣)

    2002-01-01

    Ab initio self-consistent field (SCF) and Møller-Plesset correlation correction methods employing the 6-31G** basis set have been applied to the optimization of nitroamine dimers. The binding energies have been corrected for the basis set superposition error (BSSE) and the zero-point energy. Three optimized dimers have been obtained. The BSSE-corrected binding energy of the most stable dimer is predicted to be -31.85 kJ/mol at the MP4/6-31G**//MP2/6-31G** level. The energy barriers of the Walden inversion for the -NH2 group are 19.7 kJ/mol and 18.3 kJ/mol for the monomer and the most stable dimer, respectively. The molecular interaction makes the internal rotation around N1-N2 even more difficult. The thermodynamic properties of nitroamine and its dimers at different temperatures have been calculated on the basis of vibrational analyses. The change of the Gibbs free energy for the aggregation from monomer to the most stable dimer at standard pressure and 298.2 K is predicted to be 14.05 kJ/mol.

  9. Theoretical study of N(C)-H…H-B multi-dihydrogen bonds

    Institute of Scientific and Technical Information of China (English)

    KUN Yuan; LIU YanZhi; LÜ LingLing

    2009-01-01

    The optimized geometries, frequencies and interaction energies corrected with the basis set superposition error (BSSE) of the multi-dihydrogen bond complexes C4H4NH…BH4- and CH≡CH…BH4- have been calculated at both the B3LYP/6-311++G** and the MP2/6-311++G** levels. The calculations were performed to study the nature of the red-shifted N-H…H3-B and C-H…H2-B multi-dihydrogen bonds in the complexes C4H4NH…BH4- and CH≡CH…BH4-. The BSSE-corrected multi-dihydrogen bond interaction energy of complex I (C4H4NH…BH4-) and complex II (CH≡CH…BH4-) is -76.62 and -33.79 kJ/mol (MP2/6-311++G**), respectively. From the natural bond orbital (NBO) analysis, we discussed in detail the orbital interactions, electron density transfers, rehybridizations, and the essence of the correlated bond length changes in the two complexes. In addition, the solvent effect on the geometric structures, vibration frequencies and interaction energies of the monomers and complexes was studied in detail; it correlates with the relative dielectric constant (ε).

  10. Harmless error analysis: How do judges respond to confession errors?

    Science.gov (United States)

    Wallace, D Brian; Kassin, Saul M

    2012-04-01

    In Arizona v. Fulminante (1991), the U.S. Supreme Court opened the door for appellate judges to conduct a harmless error analysis of erroneously admitted, coerced confessions. In this study, 132 judges from three states read a murder case summary, evaluated the defendant's guilt, assessed the voluntariness of his confession, and responded to implicit and explicit measures of harmless error. Results indicated that judges found a high-pressure confession to be coerced and hence improperly admitted into evidence. As in studies with mock jurors, however, the improper confession significantly increased their conviction rate in the absence of other evidence. On the harmless error measures, judges successfully overruled the confession when required to do so, indicating that they are capable of this analysis.

  11. Stochastic exploration of the potential energy surfaces of cis-trans and trans-trans formic acid dimers

    Directory of Open Access Journals (Sweden)

    Said F. Figueredo

    2014-01-01

    Full Text Available The potential energy surfaces (PES) of cis-trans and trans-trans formic acid dimers were sampled using a stochastic method, and the geometries, energies, and vibrational frequencies were computed at the B3LYP/6-311++G(3df,2p) level of theory. The results show that the molar free energy of dimerization deviated by up to 108.4% when the basis set superposition error (BSSE) and zero-point energy (ZPE) were not considered. For cis-trans dimers, the C=O and O-H bonds weakened, whereas the C-O bonds strengthened due to dimerization. The trans-trans FA dimers did not show a trend regarding strengthening or weakening of the C=O, O-H and C-O bonds.

  12. Theoretical Investigation on the Adsorption of Ag+ and Hydrated Ag+ Cations on Clean Si(111) Surface

    Institute of Scientific and Technical Information of China (English)

    SHENG Yong-Li; LI Meng-Hua; WANG Zhi-Guo; LIU Yong-Jun

    2008-01-01

    In this paper, the adsorption of Ag+ and hydrated Ag+ cations on the clean Si(111) surface was investigated by using cluster (Gaussian 03) and periodic (DMol3) ab initio calculations. The Si(111) surface was described with cluster models (Si14H17 and Si22H21) and a four-silicon-layer slab with periodic boundary conditions. The effect of basis set superposition error (BSSE) was taken into account by applying the counterpoise correction. The calculated results indicated that the binding energies between hydrated Ag+ cations and the clean Si(111) surface are large, suggesting a strong interaction between hydrated Ag+ cations and the semiconductor surface. As the number of water molecules increases, they form a hydrogen bond network with one another and only one water molecule binds directly to the Ag+ cation. The Ag+ cation in aqueous solution will readily attach to the clean Si(111) surface.

  13. Experimental and theoretical investigation of the complexation of methacrylic acid and diisopropyl urea

    Science.gov (United States)

    Pogány, Peter; Razali, Mayamin; Szekely, Gyorgy

    2017-01-01

    The present paper explores the complexation ability of methacrylic acid, which is one of the most abundant functional monomers for the preparation of molecularly imprinted polymers. Host-guest interactions and the mechanism of complex formation between methacrylic acid and potentially genotoxic 1,3-diisopropylurea were investigated in the pre-polymerization solution using both experimental (NMR, IR) and in silico density functional theory (DFT) tools. The continuous variation method revealed the presence of higher-order complexes and the appearance of self-association, which were both taken into account during the determination of the association constants. The quantum chemical calculations - performed at the B3LYP/6-311++G(d,p) level with basis set superposition error (BSSE) corrections - are in agreement with the experimental observations, reaffirming the association constants and justifying the validity of computational investigation of such systems. Furthermore, natural bond orbital analysis was carried out to appraise the binding properties of the complexes.

  14. Ab initio Study on the Intermolecular Interaction and Thermo dynamic Properties of Methyl Nitrate Dimer

    Institute of Scientific and Technical Information of China (English)

    谭金芝; 肖鹤鸣; 贡雪东; 李金山

    2001-01-01

    Three stable dimers of methyl nitrate have been obtained and their geometries have been fully optimized at the HF/6-31G* level. Binding energies have been calculated with corrections for the basis set superposition error (BSSE) and zero-point energy (ZPE). The cyclic overlap-type structure, whose binding energy is 11.97 kJ/mol at the MP4SDTQ/6-31G*//HF/6-31G* level, is the most stable. No intermolecular hydrogen bond was found, and the charge transfer between the two subsystems is minute. The thermodynamic properties of methyl nitrate and its dimers have been calculated based on vibrational analysis and statistical thermodynamics.

  15. Theoretical Studies on Intermolecular Interactions of 4-Amino-5-nitro-1,2,3-triazole Dimers

    Institute of Scientific and Technical Information of China (English)

    LU Ya-Lin; GONG Xue-Dong; JU Xue-Hai; MA Xiu-Fang; XIAO He-Ming

    2006-01-01

    Seven optimized configurations of 4-amino-5-nitro-1,2,3-triazole dimers and their electronic structures on the potential energy surface have been obtained by using the density functional theory (DFT) method at the B3LYP/6-311++G** level. The maximum intermolecular interaction energy is -35.42 kJ/mol after basis set superposition error (BSSE) and zero-point energy (ZPE) corrections. Charge transfers between the two subsystems are small. The vibrational analysis of the optimized configurations was performed, and the thermodynamic property changes from monomer to dimer have been obtained for temperatures ranging from 200 to 800 K on the basis of statistical thermodynamics. It is found that the hydrogen bonds contribute dominantly to the dimers, and that the extent of intermolecular interaction is mainly determined by the hydrogen bonds' strength rather than their number. The dimerization processes of Ⅳ, Ⅴ and Ⅵ can occur spontaneously at 200 K.

  16. A theoretical study on the intermolecular interaction of an energetic system - the nitromethane dimer

    Institute of Scientific and Technical Information of China (English)

    李金山; 董海山; 肖鹤鸣

    2000-01-01

    Three optimized geometries of the nitromethane dimer have been obtained at the HF/6-31G* level. The dimer binding energies have been corrected for the basis set superposition error (BSSE) and the zero-point energy. Computed results indicate that the cyclic structure of (CH3NO2)2 is the most stable of the three optimized geometries, with a corrected binding energy of 17.29 kJ·mol-1 at the MP4SDTQ/6-31G*//HF/6-31G* level. In the optimized structures of the nitromethane dimer, no intermolecular hydrogen bond was found; the charge-transfer interaction between the CH3NO2 subsystems is weak; and the correlation interaction energy makes only a small contribution to the intermolecular interaction energy of the dimer.

  17. Explaining errors in children's questions.

    Science.gov (United States)

    Rowland, Caroline F

    2007-07-01

    The ability to explain the occurrence of errors in children's speech is an essential component of successful theories of language acquisition. The present study tested some generativist and constructivist predictions about error on the questions produced by ten English-learning children between 2 and 5 years of age. The analyses demonstrated that, as predicted by some generativist theories [e.g. Santelmann, L., Berk, S., Austin, J., Somashekar, S. & Lust, B. (2002). Continuity and development in the acquisition of inversion in yes/no questions: dissociating movement and inflection, Journal of Child Language, 29, 813-842], questions with auxiliary DO attracted higher error rates than those with modal auxiliaries. However, in wh-questions, questions with modals and DO attracted equally high error rates, and these findings could not be explained in terms of problems forming questions with why or negated auxiliaries. It was concluded that the data might be better explained in terms of a constructivist account that suggests that entrenched item-based constructions may be protected from error in children's speech, and that errors occur when children resort to other operations to produce questions [e.g. Dabrowska, E. (2000). From formula to schema: the acquisition of English questions. Cognitive Linguistics, 11, 83-102; Rowland, C. F. & Pine, J. M. (2000). Subject-auxiliary inversion errors and wh-question acquisition: What children do know? Journal of Child Language, 27, 157-181; Tomasello, M. (2003). Constructing a language: A usage-based theory of language acquisition. Cambridge, MA: Harvard University Press]. However, further work on constructivist theory development is required to allow researchers to make predictions about the nature of these operations.

  18. Pauli Exchange Errors in Quantum Computation

    CERN Document Server

    Ruskai, M B

    2000-01-01

    We argue that a physically reasonable model of fault-tolerant computation requires the ability to correct a type of two-qubit error which we call Pauli exchange errors as well as one qubit errors. We give an explicit 9-qubit code which can handle both Pauli exchange errors and all one-bit errors.

  19. Error-associated behaviors and error rates for robotic geology

    Science.gov (United States)

    Anderson, Robert C.; Thomas, Geb; Wagner, Jacob; Glasgow, Justin

    2004-01-01

    This study explores human error as a function of the decision-making process. One of many models for human decision-making is Rasmussen's decision ladder [9]. The decision ladder identifies the multiple tasks and states of knowledge involved in decision-making. The tasks and states of knowledge can be classified by the level of cognitive effort required to make the decision, leading to the skill, rule, and knowledge taxonomy (Rasmussen, 1987). Skill based decisions require the least cognitive effort and knowledge based decisions require the greatest cognitive effort. Errors can occur at any of the cognitive levels.

  20. Error-associated behaviors and error rates for robotic geology

    Science.gov (United States)

    Anderson, Robert C.; Thomas, Geb; Wagner, Jacob; Glasgow, Justin

    2004-01-01

    This study explores human error as a function of the decision-making process. One of many models for human decision-making is Rasmussen's decision ladder [9]. The decision ladder identifies the multiple tasks and states of knowledge involved in decision-making. The tasks and states of knowledge can be classified by the level of cognitive effort required to make the decision, leading to the skill, rule, and knowledge taxonomy (Rasmussen, 1987). Skill based decisions require the least cognitive effort and knowledge based decisions require the greatest cognitive effort. Errors can occur at any of the cognitive levels.

  1. POSITION ERROR IN STATION-KEEPING SATELLITE

    Science.gov (United States)

    of an error in satellite orientation and the sun being in a plane other than the equatorial plane may result in errors in position determination. The nature of the errors involved is described and their magnitudes estimated.

  2. Orbit IMU alignment: Error analysis

    Science.gov (United States)

    Corson, R. W.

    1980-01-01

    A comprehensive accuracy analysis of orbit inertial measurement unit (IMU) alignments using the shuttle star trackers was completed and the results are presented. Monte Carlo techniques were used in a computer simulation of the IMU alignment hardware and software systems to: (1) determine the expected Space Transportation System 1 Flight (STS-1) manual mode IMU alignment accuracy; (2) investigate the accuracy of alignments in later shuttle flights when the automatic mode of star acquisition may be used; and (3) verify that an analytical model previously used for estimating the alignment error is a valid model. The analysis results do not differ significantly from expectations. The standard deviation in the IMU alignment error for STS-1 alignments was determined to be 68 arc seconds per axis. This corresponds to a 99.7% probability that the magnitude of the total alignment error is less than 258 arc seconds.

  3. Negligence, genuine error, and litigation

    Directory of Open Access Journals (Sweden)

    Sohn DH

    2013-02-01

    Full Text Available David H Sohn, Department of Orthopedic Surgery, University of Toledo Medical Center, Toledo, OH, USA. Abstract: Not all medical injuries are the result of negligence. In fact, most medical injuries are the result either of the inherent risk in the practice of medicine, or due to system errors, which cannot be prevented simply through fear of disciplinary action. This paper will discuss the differences between adverse events, negligence, and system errors; the current medical malpractice tort system in the United States; and review current and future solutions, including medical malpractice reform, alternative dispute resolution, health courts, and no-fault compensation systems. The current political environment favors investigation of non-cap tort reform remedies; investment in more rational oversight systems, such as health courts or no-fault systems, may reap both quantitative and qualitative benefits for a less costly and safer health system. Keywords: medical malpractice, tort reform, no fault compensation, alternative dispute resolution, system errors

  4. Large errors and severe conditions

    CERN Document Server

    Smith, D L; Van Wormer, L A

    2002-01-01

    Physical parameters that can assume real-number values over a continuous range are generally represented by inherently positive random variables. However, if the uncertainties in these parameters are significant (large errors), conventional means of representing and manipulating the associated variables can lead to erroneous results. Instead, all analyses involving them must be conducted in a probabilistic framework. Several issues must be considered: First, non-linear functional relations between primary and derived variables may lead to significant 'error amplification' (severe conditions). Second, the commonly used normal (Gaussian) probability distribution must be replaced by a more appropriate function that avoids the occurrence of negative sampling results. Third, both primary random variables and those derived through well-defined functions must be dealt with entirely in terms of their probability distributions. Parameter 'values' and 'errors' should be interpreted as specific moments of these probabil...

  5. Redundant measurements for controlling errors

    Energy Technology Data Exchange (ETDEWEB)

    Ehinger, M. H.; Crawford, J. M.; Madeen, M. L.

    1979-07-01

    Current federal regulations for nuclear materials control require consideration of operating data as part of the quality control program and limits of error propagation. Recent work at the BNFP has revealed that operating data are subject to a number of measurement problems which are very difficult to detect and even more difficult to correct in a timely manner. Thus error estimates based on operational data reflect those problems. During the FY 1978 and FY 1979 R and D demonstration runs at the BNFP, redundant measurement techniques were shown to be effective in detecting these problems to allow corrective action. The net effect is a reduction in measurement errors and a significant increase in measurement sensitivity. Results show that normal operation process control measurements, in conjunction with routine accountability measurements, are sensitive problem indicators when incorporated in a redundant measurement program.

  6. Toward a cognitive taxonomy of medical errors.

    OpenAIRE

    Zhang, Jiajie; Patel, Vimla L.; Johnson, Todd R.; Shortliffe, Edward H.

    2002-01-01

    One critical step in addressing and resolving the problems associated with human errors is the development of a cognitive taxonomy of such errors. In the case of errors, such a taxonomy may be developed (1) to categorize all types of errors along cognitive dimensions, (2) to associate each type of error with a specific underlying cognitive mechanism, (3) to explain why, and even predict when and where, a specific error will occur, and (4) to generate intervention strategies for each type of e...

  7. Robust Quantum Error Correction via Convex Optimization

    CERN Document Server

    Kosut, R L; Lidar, D A

    2007-01-01

    Quantum error correction procedures have traditionally been developed for specific error models, and are not robust against uncertainty in the errors. Using a semidefinite program optimization approach we find high fidelity quantum error correction procedures which present robust encoding and recovery effective against significant uncertainty in the error system. We present numerical examples for 3, 5, and 7-qubit codes. Our approach requires as input a description of the error channel, which can be provided via quantum process tomography.

  8. Errors depending on costs in sample surveys

    OpenAIRE

    Marella, Daniela

    2007-01-01

    "This paper presents a total survey error model that simultaneously treats sampling error, nonresponse error and measurement error. The main aim for developing the model is to determine the optimal allocation of the available resources for the total survey error reduction. More precisely, the paper is concerned with obtaining the best possible accuracy in survey estimate through an overall economic balance between sampling and nonsampling error." (author's abstract)

  9. Error-tolerant Tree Matching

    CERN Document Server

    Oflazer, K

    1996-01-01

    This paper presents an efficient algorithm for retrieving from a database of trees, all trees that match a given query tree approximately, that is, within a certain error tolerance. It has natural language processing applications in searching for matches in example-based translation systems, and retrieval from lexical databases containing entries of complex feature structures. The algorithm has been implemented on SparcStations, and for large randomly generated synthetic tree databases (some having tens of thousands of trees) it can associatively search for trees with a small error, in a matter of tenths of a second to a few seconds.

  10. Immediate error correction process following sleep deprivation

    National Research Council Canada - National Science Library

    HSIEH, SHULAN; CHENG, I‐CHEN; TSAI, LING‐LING

    2007-01-01

    ...) participated in this study. Participants performed a modified letter flanker task and were instructed to make immediate error corrections on detecting performance errors. Event‐related potentials (ERPs...

  11. The error of our ways

    Science.gov (United States)

    Swartz, Clifford E.

    1999-10-01

    In Victorian literature it was usually some poor female who came to see the error of her ways. How prescient of her! How I wish that all writers of manuscripts for The Physics Teacher would come to similar recognition of this centerpiece of measurement. For, Brothers and Sisters, we all err.

  12. Measurement error in geometric morphometrics.

    Science.gov (United States)

    Fruciano, Carmelo

    2016-06-01

    Geometric morphometrics, a set of methods for the statistical analysis of shape once hailed as a revolutionary advancement in the analysis of morphology, is now mature and routinely used in ecology and evolution. However, a factor often disregarded in empirical studies is the presence and the extent of measurement error. This is potentially a very serious issue because random measurement error can inflate the amount of variance and, since many statistical analyses are based on the amount of "explained" relative to "residual" variance, can result in loss of statistical power. On the other hand, systematic bias can affect statistical analyses by biasing the results (i.e. variation due to bias is incorporated in the analysis and treated as biologically-meaningful variation). Here, I briefly review common sources of error in geometric morphometrics. I then review the most commonly used methods to measure and account for both random and non-random measurement error, providing a worked example using a real dataset.

  13. Finding errors in big data

    NARCIS (Netherlands)

    Puts, Marco; Daas, Piet; de Waal, A.G.

    No data source is perfect. Mistakes inevitably creep in. Spotting errors is hard enough when dealing with survey responses from several thousand people, but the difficulty is multiplied hugely when that mysterious beast Big Data comes into play. Statistics Netherlands is about to publish its first

  14. Having Fun with Error Analysis

    Science.gov (United States)

    Siegel, Peter

    2007-01-01

    We present a fun activity that can be used to introduce students to error analysis: the M&M game. Students are told to estimate the number of individual candies plus uncertainty in a bag of M&M's. The winner is the group whose estimate brackets the actual number with the smallest uncertainty. The exercise produces enthusiastic discussions and…

  15. Typical errors of ESP users

    Science.gov (United States)

    Eremina, Svetlana V.; Korneva, Anna A.

    2004-07-01

    The paper presents an analysis of the errors made by ESP (English for specific purposes) users which have been considered typical. They occur as a result of misuse of the resources of English grammar and tend to resist correction. Their origin and places of occurrence have also been discussed.

  16. Theory of Test Translation Error

    Science.gov (United States)

    Solano-Flores, Guillermo; Backhoff, Eduardo; Contreras-Nino, Luis Angel

    2009-01-01

    In this article, we present a theory of test translation whose intent is to provide the conceptual foundation for effective, systematic work in the process of test translation and test translation review. According to the theory, translation error is multidimensional; it is not simply the consequence of defective translation but an inevitable fact…

  17. A brief history of error.

    Science.gov (United States)

    Murray, Andrew W

    2011-10-03

    The spindle checkpoint monitors chromosome alignment on the mitotic and meiotic spindle. When the checkpoint detects errors, it arrests progress of the cell cycle while it attempts to correct the mistakes. This perspective will present a brief history summarizing what we know about the checkpoint, and a list of questions we must answer before we understand it.

  18. Error processing in Huntington's disease.

    Directory of Open Access Journals (Sweden)

    Christian Beste

    Full Text Available BACKGROUND: Huntington's disease (HD) is a genetic disorder expressed by a degeneration of the basal ganglia, inter alia accompanied by dopaminergic alterations. These dopaminergic alterations are related to genetic factors, i.e., CAG-repeat expansion. The error-related negativity (Ne/ERN), a cognitive event-related potential related to performance monitoring, is generated in the anterior cingulate cortex (ACC) and supposed to depend on the dopaminergic system. The Ne is reduced in Parkinson's Disease (PD). Due to a dopaminergic deficit in HD, a reduction of the Ne is also likely. Furthermore it is assumed that movement dysfunction emerges as a consequence of dysfunctional error-feedback processing. Since dopaminergic alterations are related to the CAG-repeat, a Ne reduction may furthermore also be related to the genetic disease load. METHODOLOGY/PRINCIPAL FINDINGS: We assessed the error negativity (Ne) in a speeded reaction task under consideration of the underlying genetic abnormalities. HD patients showed a specific reduction in the Ne, which suggests impaired error processing in these patients. Furthermore, the Ne was closely related to CAG-repeat expansion. CONCLUSIONS/SIGNIFICANCE: The reduction of the Ne is likely to be an effect of the dopaminergic pathology. The result resembles findings in Parkinson's Disease. As such the Ne might be a measure for the integrity of striatal dopaminergic output function. The relation to the CAG-repeat expansion indicates that the Ne could serve as a gene-associated "cognitive" biomarker in HD.

  19. Learner Corpora without Error Tagging

    Directory of Open Access Journals (Sweden)

    Rastelli, Stefano

    2009-01-01

    Full Text Available The article explores the possibility of adopting a form-to-function perspective when annotating learner corpora in order to get deeper insights about systematic features of interlanguage. A split between forms and functions (or categories) is desirable in order to avoid the "comparative fallacy" and because – especially in basic varieties – forms may precede functions (e.g., what resembles a "noun" might have a different function, or a function may show up in unexpected forms). In the computer-aided error analysis tradition, all items produced by learners are traced to a grid of error tags which is based on the categories of the target language. Differently, we believe it is possible to record and make retrievable both words and sequences of characters independently from their functional-grammatical label in the target language. For this purpose at the University of Pavia we adapted a probabilistic POS tagger designed for L1 on L2 data. Despite the criticism that this operation can raise, we found that it is better to work with "virtual categories" rather than with errors. The article outlines the theoretical background of the project and shows some examples in which some potential of SLA-oriented (non error-based) tagging will be possibly made clearer.

  20. Input/output error analyzer

    Science.gov (United States)

    Vaughan, E. T.

    1977-01-01

    Program aids in equipment assessment. Independent assembly-language utility program is designed to operate under level 27 or 31 of EXEC 8 Operating System. It scans user-selected portions of the system log file, whether located on tape or mass storage, and searches for and processes I/O error (type 6) entries.

  1. Amplify Errors to Minimize Them

    Science.gov (United States)

    Stewart, Maria Shine

    2009-01-01

    In this article, the author offers her experience of modeling mistakes and writing spontaneously in the computer classroom to get students' attention and elicit their editorial response. She describes how she taught her class about major sentence errors--comma splices, run-ons, and fragments--through her Sentence Meditation exercise, a rendition…

  2. Toward a cognitive taxonomy of medical errors.

    Science.gov (United States)

    Zhang, Jiajie; Patel, Vimla L; Johnson, Todd R; Shortliffe, Edward H

    2002-01-01

    One critical step in addressing and resolving the problems associated with human errors is the development of a cognitive taxonomy of such errors. In the case of errors, such a taxonomy may be developed (1) to categorize all types of errors along cognitive dimensions, (2) to associate each type of error with a specific underlying cognitive mechanism, (3) to explain why, and even predict when and where, a specific error will occur, and (4) to generate intervention strategies for each type of error. Based on Reason's (1992) definition of human errors and Norman's (1986) cognitive theory of human action, we have developed a preliminary action-based cognitive taxonomy of errors that largely satisfies these four criteria in the domain of medicine. We discuss initial steps for applying this taxonomy to develop an online medical error reporting system that not only categorizes errors but also identifies problems and generates solutions.

  3. Error and its meaning in forensic science.

    Science.gov (United States)

    Christensen, Angi M; Crowder, Christian M; Ousley, Stephen D; Houck, Max M

    2014-01-01

    The discussion of "error" has gained momentum in forensic science in the wake of the Daubert guidelines and has intensified with the National Academy of Sciences' Report. Error has many different meanings, and too often, forensic practitioners themselves as well as the courts misunderstand scientific error and statistical error rates, often confusing them with practitioner error (or mistakes). Here, we present an overview of these concepts as they pertain to forensic science applications, discussing the difference between practitioner error (including mistakes), instrument error, statistical error, and method error. We urge forensic practitioners to ensure that potential sources of error and method limitations are understood and clearly communicated and advocate that the legal community be informed regarding the differences between interobserver errors, uncertainty, variation, and mistakes.

  4. Analytical method for coupled transmission error of helical gear system with machining errors, assembly errors and tooth modifications

    Science.gov (United States)

    Lin, Tengjiao; He, Zeyin

    2017-07-01

    We present a method for analyzing the transmission error of a helical gear system with errors. First, a finite element method is used to model the gear transmission system with machining errors, assembly errors and tooth modifications, and the static transmission error is obtained. Then the bending-torsional-axial coupling dynamic model of the transmission system, based on the lumped mass method, is established and the dynamic transmission error of the gear transmission system is calculated, which provides error excitation data for the analysis and control of vibration and noise of the gear system.

  5. Reading boundless error-free bits using a single photon

    Science.gov (United States)

    Guha, Saikat; Shapiro, Jeffrey H.

    2013-06-01

    We address the problem of how efficiently information can be encoded into and read out reliably from a passive reflective surface that encodes classical data by modulating the amplitude and phase of incident light. We show that nature imposes no fundamental upper limit to the number of bits that can be read per expended probe photon and demonstrate the quantum-information-theoretic trade-offs between the photon efficiency (bits per photon) and the encoding efficiency (bits per pixel) of optical reading. We show that with a coherent-state (ideal laser) source, an on-off (amplitude-modulation) pixel encoding, and shot-noise-limited direct detection (an overly optimistic model for commercial CD and DVD drives), the highest photon efficiency achievable in principle is about 0.5 bits read per transmitted photon. We then show that a coherent-state probe can read unlimited bits per photon when the receiver is allowed to make joint (inseparable) measurements on the reflected light from a large block of phase-modulated memory pixels. Finally, we show an example of a spatially entangled nonclassical light probe and a receiver design—constructible using a single-photon source, beam splitters, and single-photon detectors—that can in principle read any number of error-free bits of information. The probe is a single photon prepared in a uniform coherent superposition of multiple orthogonal spatial modes, i.e., a W state. The code and joint-detection receiver complexity required by a coherent-state transmitter to achieve comparable photon efficiency performance is shown to be much higher in comparison to that required by the W-state transceiver, although this advantage rapidly disappears with increasing loss in the system.

  6. Space Saving Statistics: An Introduction to Constant Error, Variable Error, and Absolute Error.

    Science.gov (United States)

    Guth, David

    1990-01-01

    Article discusses research on orientation and mobility (O&M) for individuals with visual impairments, examining constant, variable, and absolute error (descriptive statistics that quantify fundamentally different characteristics of distributions of spatially directed behavior). It illustrates the statistics with examples, noting their…

  7. Discretization vs. Rounding Error in Euler's Method

    Science.gov (United States)

    Borges, Carlos F.

    2011-01-01

    Euler's method for solving initial value problems is an excellent vehicle for observing the relationship between discretization error and rounding error in numerical computation. Reductions in stepsize, in order to decrease discretization error, necessarily increase the number of steps and so introduce additional rounding error. The problem is…
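
    The trade-off described above can be illustrated with a short numerical experiment (a sketch, not taken from the article): integrating y' = y, y(0) = 1 on [0, 1] with Euler's method in single and double precision makes the rounding error visible once the step size becomes small enough.

        import numpy as np

        def euler(f, y0, t0, t1, n, dtype=np.float32):
            """Fixed-step Euler method carried out in the given floating-point precision."""
            h = dtype((t1 - t0) / n)
            y = dtype(y0)
            t = dtype(t0)
            for _ in range(n):
                y = dtype(y + h * f(t, y))
                t = dtype(t + h)
            return y

        exact = np.exp(1.0)                      # y' = y, y(0) = 1  =>  y(1) = e
        for n in [10, 100, 1000, 10000, 100000, 1000000]:
            y32 = euler(lambda t, y: y, 1.0, 0.0, 1.0, n, np.float32)
            y64 = euler(lambda t, y: y, 1.0, 0.0, 1.0, n, np.float64)
            print(f"n={n:8d}  float32 error={abs(y32 - exact):.2e}  "
                  f"float64 error={abs(y64 - exact):.2e}")

    In double precision the error keeps shrinking roughly in proportion to the step size, while in single precision it reaches a minimum and then grows again as accumulated rounding error overtakes the shrinking discretization error.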

  9. Correction of errors in power measurements

    DEFF Research Database (Denmark)

    Pedersen, Knud Ole Helgesen

    1998-01-01

    Small errors in voltage and current measuring transformers cause inaccuracies in power measurements. In this report, correction factors are derived to compensate for such errors.

  10. Error Analysis of Band Matrix Method

    OpenAIRE

    Taniguchi, Takeo; Soga, Akira

    1984-01-01

    Numerical error in the solution of the band matrix method based on the elimination method in single precision is investigated theoretically and experimentally, and the behaviour of the truncation error and the roundoff error is clarified. Some important suggestions for the useful application of the band solver are proposed by using the results of the above error analysis.

  11. Error Correction in Oral Classroom English Teaching

    Science.gov (United States)

    Jing, Huang; Xiaodong, Hao; Yu, Liu

    2016-01-01

    As is known to all, errors are inevitable in the process of language learning for Chinese students. Should we ignore students' errors in learning English? In common with other questions, different people hold different opinions. All teachers agree that errors students make in written English are not allowed. For the errors students make in oral…

  12. 5 CFR 1601.34 - Error correction.

    Science.gov (United States)

    2010-01-01

    ... 5 Administrative Personnel 3 2010-01-01 false Error correction. Section 1601.34... Contribution Allocations and Interfund Transfer Requests § 1601.34 Error correction. Errors in processing... in the wrong investment fund, will be corrected in accordance with the error correction...

  13. STRUCTURED BACKWARD ERRORS FOR STRUCTURED KKT SYSTEMS

    Institute of Scientific and Technical Information of China (English)

    Xin-xiu Li; Xin-guo Liu

    2004-01-01

    In this paper we study structured backward errors for some structured KKT systems. Normwise structured backward errors for structured KKT systems are defined, and computable formulae of the structured backward errors are obtained. Simple numerical examples show that the structured backward errors may be much larger than the unstructured ones in some cases.

  14. Managing human error in aviation.

    Science.gov (United States)

    Helmreich, R L

    1997-05-01

    Crew resource management (CRM) programs were developed to address team and leadership aspects of piloting modern airplanes. The goal is to reduce errors through team work. Human factors research and social, cognitive, and organizational psychology are used to develop programs tailored for individual airlines. Flight crews study accident case histories, group dynamics, and human error. Simulators provide pilots with the opportunity to solve complex flight problems. CRM in the simulator is called line-oriented flight training (LOFT). In automated cockpits CRM promotes the idea of automation as a crew member. Cultural aspects of aviation include professional, business, and national culture. The aviation CRM model has been adapted for training surgeons and operating room staff in human factors.

  15. Robot learning and error correction

    Science.gov (United States)

    Friedman, L.

    1977-01-01

    A model of robot learning is described that associates previously unknown perceptions with the sensed known consequences of robot actions. For these actions, both the categories of outcomes and the corresponding sensory patterns are incorporated in a knowledge base by the system designer. Thus the robot is able to predict the outcome of an action and compare the expectation with the experience. New knowledge about what to expect in the world may then be incorporated by the robot in a pre-existing structure whether it detects accordance or discrepancy between a predicted consequence and experience. Errors committed during plan execution are detected by the same type of comparison process and learning may be applied to avoiding the errors.

  16. Manson’s triple error

    Directory of Open Access Journals (Sweden)

    Delaporte F.

    2008-09-01

    Full Text Available The author discusses the significance, implications and limitations of Manson’s work. How did Patrick Manson resolve some of the major problems raised by the filarial worm life cycle? The Amoy physician showed that circulating embryos could only leave the blood via the percutaneous route, thereby requiring a bloodsucking insect. The discovery of a new autonomous, airborne, active host undoubtedly had a considerable impact on the history of parasitology, but the way in which Manson formulated and solved the problem of the transfer of filarial worms from the body of the mosquito to man resulted in failure. This article shows how the epistemological transformation operated by Manson was indissociably related to a series of errors and how a major breakthrough can be the result of a series of false proposals and, consequently, that the history of truth often involves a history of error.

  17. Offset Error Compensation in Roundness Measurement

    Institute of Scientific and Technical Information of China (English)

    朱喜林; 史俊; 李晓梅

    2004-01-01

    This paper analyses three causes of offset error in roundness measurement and presents corresponding compensation methods. The causes of offset error include excursion error resulting from the deflection of the sensor's line of measurement from the rotational center in measurement (datum center), eccentricity error resulting from the variance between the workpiece's geometrical center and the rotational center, and tilt error resulting from the tilt between the workpiece's geometrical axes and the rotational centerline.

  18. FAKTOR PENYEBAB MEDICATION ERROR DI INSTALASI RAWAT DARURAT FACTORS AFFECTING MEDICATION ERRORS AT EMERGENCY UNIT

    OpenAIRE

    2014-01-01

    Background: The incidence of medication errors is an important indicator in patient safety, and medication errors are the most common medical errors. However, most medication errors can be prevented, and efforts to reduce such errors are available. Due to the high number of medication errors in the emergency unit, understanding of the causes is important for designing successful interventions. This research aims to identify types and causes of medication errors. Method: A qualitative study was used and data were col...

  19. Error-resilient DNA computation

    Energy Technology Data Exchange (ETDEWEB)

    Karp, R.M.; Kenyon, C.; Waarts, O. [Univ. of California, Berkeley, CA (United States)]

    1996-12-31

    The DNA model of computation, with test tubes of DNA molecules encoding bit sequences, is based on three primitives, Extract-A-Bit, which splits a test tube into two test tubes according to the value of a particular bit x, Merge-Two-Tubes and Detect-Emptiness. Perfect operations can test the satisfiability of any boolean formula in linear time. However, in reality the Extract operation is faulty; it misclassifies a certain proportion of the strands. We consider the following problem: given an algorithm based on perfect Extract, Merge and Detect operations, convert it to one that works correctly with high probability when the Extract operation is faulty. The fundamental problem in such a conversion is to construct a sequence of faulty Extracts and perfect Merges that simulates a highly reliable Extract operation. We first determine (up to a small constant factor) the minimum number of faulty Extract operations inherently required to simulate a highly reliable Extract operation. We then go on to derive a general method for converting any algorithm based on error-free operations to an error-resilient one, and give optimal error-resilient algorithms for realizing simple n-variable boolean functions such as Conjunction, Disjunction and Parity.
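
    As a rough illustration of the conversion problem described above (a sketch under simplified assumptions, not the authors' optimal construction; the function names and the number of rounds are made up for illustration), one can simulate a faulty Extract that misclassifies each strand with probability p, and a compound operation that repeatedly re-extracts the rejected tube and merges the recovered strands back:

        import random

        def faulty_extract(tube, bit, p):
            """Split strands on position `bit`; each strand goes to the wrong tube w.p. p."""
            yes, no = [], []
            for strand in tube:
                goes_yes = (strand[bit] == '1')
                if random.random() < p:        # misclassification
                    goes_yes = not goes_yes
                (yes if goes_yes else no).append(strand)
            return yes, no

        def boosted_extract(tube, bit, p, rounds=4):
            """Compound Extract built from faulty Extracts and perfect Merges:
            re-extract the 'no' tube several times and merge recovered strands into
            'yes'.  This drives the false-negative rate down to roughly p**rounds,
            at the cost of a false-positive rate of roughly 1 - (1 - p)**rounds."""
            yes, no = faulty_extract(tube, bit, p)
            for _ in range(rounds - 1):
                found, no = faulty_extract(no, bit, p)
                yes += found                   # perfect Merge
            return yes, no

        random.seed(0)
        tube = ['1'] * 5000 + ['0'] * 5000     # strands, classified on bit position 0
        yes, no = boosted_extract(tube, 0, p=0.1)
        false_neg = sum(s == '1' for s in no) / 5000
        false_pos = sum(s == '0' for s in yes) / 5000
        print(f"false negatives: {false_neg:.4f}, false positives: {false_pos:.4f}")

    A symmetric pass over the 'yes' tube would be needed to also suppress false positives; the paper's contribution is to determine how few faulty Extract operations suffice to simulate a highly reliable one.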

  20. Righting errors in writing errors: the Wing and Baddeley (1980) spelling error corpus revisited.

    Science.gov (United States)

    Wing, Alan M; Baddeley, Alan D

    2009-03-01

    We present a new analysis of our previously published corpus of handwriting errors (slips) using the proportional allocation algorithm of Machtynger and Shallice (2009). As previously, the proportion of slips is greater in the middle of the word than at the ends; however, in contrast to before, the proportion is greater at the end than at the beginning of the word. The findings are consistent with the hypothesis of memory effects in a graphemic output buffer.

  1. Effects of Listening Conditions, Error Types, and Ensemble Textures on Error Detection Skills

    Science.gov (United States)

    Waggoner, Dori T.

    2011-01-01

    This study was designed with three main purposes: (a) to investigate the effects of two listening conditions on error detection accuracy, (b) to compare error detection responses for rhythm errors and pitch errors, and (c) to examine the influences of texture on error detection accuracy. Undergraduate music education students (N = 18) listened to…

  2. SENSITIVE ERROR ANALYSIS OF CHAOS SYNCHRONIZATION

    Institute of Scientific and Technical Information of China (English)

    HUANG XIAN-GAO; XU JIAN-XUE; HUANG WEI; LÜ ZE-JUN

    2001-01-01

    We study the synchronizing sensitive errors of chaotic systems for adding other signals to the synchronizing signal. Based on the model of the Henon map masking, we examine the cause of the sensitive errors of chaos synchronization. The modulation ratio and the mean square error are defined to measure the synchronizing sensitive errors by quality. Numerical simulation results of the synchronizing sensitive errors are given for masking direct current, sinusoidal and speech signals, separately. Finally, we give the mean square error curves of chaos synchronizing sensitivity and three-dimensional phase plots of the drive system and the response system for masking the three kinds of signals.
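
    The setup can be made concrete with a minimal sketch of the Henon-map masking model named above (the parameters, the Gaussian message and the drive-response coupling scheme are illustrative assumptions, not the paper's exact configuration): a response copy of the map is driven by the masked signal, and the mean square synchronization error is measured as the masked signal's amplitude grows.

        import numpy as np

        a, b = 1.4, 0.3                        # standard Henon map parameters
        N = 20000                              # iterations per run

        def sync_mse(msg_amplitude, seed=1):
            """Mean square synchronization error when a message rides on the drive signal."""
            rng = np.random.default_rng(seed)
            x, y = 0.1, 0.1                    # drive (transmitter) state
            xr, yr = 0.0, 0.0                  # response (receiver) state
            sq_err = []
            for n in range(N):
                m = msg_amplitude * rng.standard_normal()   # masked message sample
                s = x + m                                    # transmitted signal
                x, y = 1.0 - a * x * x + y, b * x            # drive update
                xr, yr = 1.0 - a * s * s + yr, b * s         # response driven by s, not x
                if n > 100:                                  # discard the transient
                    sq_err.append((xr - x) ** 2)
            return float(np.mean(sq_err))

        for amp in [0.0, 0.001, 0.01, 0.1]:
            print(f"message amplitude {amp:6.3f}: sync MSE = {sync_mse(amp):.3e}")

    With no message the response locks onto the drive exactly after a short transient; as the masked signal's amplitude grows, the synchronization error grows with it, which is the sensitivity the abstract quantifies.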

  3. Error signals driving locomotor adaptation

    DEFF Research Database (Denmark)

    Choi, Julia T; Jensen, Peter; Nielsen, Jens Bo

    2016-01-01

    perturbations. Forces were applied to the ankle joint during the early swing phase using an electrohydraulic ankle-foot orthosis. Repetitive 80 Hz electrical stimulation was applied to disrupt cutaneous feedback from the superficial peroneal nerve (foot dorsum) and medial plantar nerve (foot sole) during...... anaesthesia (n = 5) instead of repetitive nerve stimulation. Foot anaesthesia reduced ankle adaptation to external force perturbations during walking. Our results suggest that cutaneous input plays a role in force perception, and may contribute to the 'error' signal involved in driving walking adaptation when...

  4. (Errors in statistical tests)3

    Directory of Open Access Journals (Sweden)

    Kaufman Jay S

    2008-07-01

    Full Text Available Abstract In 2004, Garcia-Berthou and Alcaraz published "Incongruence between test statistics and P values in medical papers," a critique of statistical errors that received a tremendous amount of attention. One of their observations was that the final reported digit of p-values in articles published in the journal Nature departed substantially from the uniform distribution that they suggested should be expected. In 2006, Jeng critiqued that critique, observing that the statistical analysis of those terminal digits had been based on comparing the actual distribution to a uniform continuous distribution, when digits obviously are discretely distributed. Jeng corrected the calculation and reported statistics that did not so clearly support the claim of a digit preference. However delightful it may be to read a critique of statistical errors in a critique of statistical errors, we nevertheless found several aspects of the whole exchange to be quite troubling, prompting our own meta-critique of the analysis. The previous discussion emphasized statistical significance testing. But there are various reasons to expect departure from the uniform distribution in terminal digits of p-values, so that simply rejecting the null hypothesis is not terribly informative. Much more importantly, Jeng found that the original p-value of 0.043 should have been 0.086, and suggested this represented an important difference because it was on the other side of 0.05. Among the most widely reiterated (though often ignored tenets of modern quantitative research methods is that we should not treat statistical significance as a bright line test of whether we have observed a phenomenon. Moreover, it sends the wrong message about the role of statistics to suggest that a result should be dismissed because of limited statistical precision when it is so easy to gather more data. In response to these limitations, we gathered more data to improve the statistical precision, and
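
    The digit-preference question at the heart of the exchange can be checked with a discrete goodness-of-fit test. The sketch below (with made-up counts, not the Nature corpus) compares the terminal digits of reported p-values against a uniform discrete distribution, which is the correction Jeng applied to the original analysis.

        import numpy as np
        from scipy import stats

        # Hypothetical counts of the terminal digit (0-9) of reported p-values.
        observed = np.array([18, 25, 21, 19, 33, 40, 22, 20, 24, 18])

        expected = np.full(10, observed.sum() / 10)   # uniform over the ten digits
        chi2, p_value = stats.chisquare(observed, expected)
        print(f"chi-square = {chi2:.2f}, p = {p_value:.3f}")

    As the authors stress, rejecting (or failing to reject) uniformity at p = 0.05 should not be treated as a bright-line answer; the test is informative only alongside the size of the departure and plausible mechanisms for digit preference.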

  5. Study on Sound Wave Superposition Law in a Cavity with Both Ends Sealed

    Institute of Scientific and Technical Information of China (English)

    唐子; 董大伟; 闫兵; 鲁志文

    2015-01-01

    The impact and resonant sound radiation due to the gas flow inside a gasholder can bring about environmental noise pollution that cannot be neglected. The gasholder can be regarded as an acoustic cavity with both ends sealed when studying the rules of sound wave superposition. In this paper, the mathematical model of a cavity for sound wave superposition was established and simulated based on the plane wave theory. Then, the superposition law was verified by comparing the simulation results with experimental data. The results show that the initial position of the sound source has important effects on wave superposition in the cavity model, but the cavity length has no effect on wave superposition for the cavity with an extended pipe. Noise control for a large gasholder in a factory was realized according to the wave superposition law, and good results were obtained. This work provides a basis for the structural design and noise control of sealed cavities.
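
    The plane-wave superposition the authors start from can be reproduced in a few lines (a generic illustration with made-up parameter values, not the paper's cavity model): adding a forward and a backward travelling wave whose relative phase is tied to the source position produces a standing-wave pattern whose nodes shift with that phase.

        import numpy as np

        L = 2.0                      # cavity length in metres (illustrative)
        c = 343.0                    # speed of sound in air, m/s
        f = 343.0                    # drive frequency, Hz  ->  wavelength = 1 m
        k = 2 * np.pi * f / c
        x = np.linspace(0.0, L, 9)

        def standing_wave(phase):
            """Amplitude of the superposition of a forward and a backward plane wave."""
            return np.abs(np.exp(-1j * k * x) + np.exp(1j * (k * x + phase)))

        for phase in [0.0, np.pi / 2, np.pi]:
            amps = ", ".join(f"{a:4.2f}" for a in standing_wave(phase))
            print(f"relative phase {phase:4.2f} rad: |p(x)| = [{amps}]")

    The amplitude reduces to 2|cos(kx + phase/2)|, so changing the relative phase moves the nodes and antinodes along the cavity, which is the qualitative effect of the source position described in the abstract.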

  6. Coherent superposition theory of SH wave defect mode of phononic crystal

    Institute of Scientific and Technical Information of China (English)

    刘启能; 刘沁

    2015-01-01

    Using the coherent superposition principle, the transmittance formula and frequency formula of the SH wave defect mode in a 1D doped phononic crystal are derived, establishing a coherent superposition theory for the defect mode. The coherent superposition theory is compared with the transfer matrix theory and the resonance theory. It combines the advantages of both while avoiding their respective shortcomings, and is therefore a more effective way to study SH wave defect modes in phononic crystals.

  7. Errors associated with outpatient computerized prescribing systems

    Science.gov (United States)

    Rothschild, Jeffrey M; Salzberg, Claudia; Keohane, Carol A; Zigmont, Katherine; Devita, Jim; Gandhi, Tejal K; Dalal, Anuj K; Bates, David W; Poon, Eric G

    2011-01-01

    Objective To report the frequency, types, and causes of errors associated with outpatient computer-generated prescriptions, and to develop a framework to classify these errors to determine which strategies have greatest potential for preventing them. Materials and methods This is a retrospective cohort study of 3850 computer-generated prescriptions received by a commercial outpatient pharmacy chain across three states over 4 weeks in 2008. A clinician panel reviewed the prescriptions using a previously described method to identify and classify medication errors. Primary outcomes were the incidence of medication errors; potential adverse drug events, defined as errors with potential for harm; and rate of prescribing errors by error type and by prescribing system. Results Of 3850 prescriptions, 452 (11.7%) contained 466 total errors, of which 163 (35.0%) were considered potential adverse drug events. Error rates varied by computerized prescribing system, from 5.1% to 37.5%. The most common error was omitted information (60.7% of all errors). Discussion About one in 10 computer-generated prescriptions included at least one error, of which a third had potential for harm. This is consistent with the literature on manual handwritten prescription error rates. The number, type, and severity of errors varied by computerized prescribing system, suggesting that some systems may be better at preventing errors than others. Conclusions Implementing a computerized prescribing system without comprehensive functionality and processes in place to ensure meaningful system use does not decrease medication errors. The authors offer targeted recommendations on improving computerized prescribing systems to prevent errors. PMID:21715428

  8. Error detection and reduction in blood banking.

    Science.gov (United States)

    Motschman, T L; Moore, S B

    1996-12-01

    Error management plays a major role in facility process improvement efforts. By detecting and reducing errors, quality and, therefore, patient care improve. It begins with a strong organizational foundation of management attitude with clear, consistent employee direction and appropriate physical facilities. Clearly defined critical processes, critical activities, and SOPs act as the framework for operations as well as active quality monitoring. To assure that personnel can detect and report errors, they must be trained in both operational duties and error management practices. Use of simulated/intentional errors and incorporation of error detection into competency assessment keeps employees practiced, confident, and diminishes fear of the unknown. Personnel can clearly see that errors are indeed used as opportunities for process improvement and not for punishment. The facility must have a clearly defined and consistently used definition for reportable errors. Reportable errors should include those errors with potentially harmful outcomes as well as those errors that are "upstream," and thus further away from the outcome. A well-written error report consists of who, what, when, where, why/how, and follow-up to the error. Before correction can occur, an investigation to determine the underlying cause of the error should be undertaken. Obviously, the best corrective action is prevention. Correction can occur at five different levels; however, only three of these levels are directed at prevention. Prevention requires a method to collect and analyze data concerning errors. In the authors' facility a functional error classification method and a quality system-based classification have been useful. An active method to search for problems uncovers them further upstream, before they can have disastrous outcomes. In the continual quest for improving processes, an error management program is itself a process that needs improvement, and we must strive to always close the circle

  9. Antenna motion errors in bistatic SAR imagery

    Science.gov (United States)

    Wang, Ling; Yazıcı, Birsen; Cagri Yanik, H.

    2015-06-01

    Antenna trajectory or motion errors are pervasive in synthetic aperture radar (SAR) imaging. Motion errors typically result in smearing and positioning errors in SAR images. Understanding the relationship between the trajectory errors and position errors in reconstructed images is essential in forming focused SAR images. Existing studies on the effect of antenna motion errors are limited to certain geometries, trajectory error models or monostatic SAR configuration. In this paper, we present an analysis of position errors in bistatic SAR imagery due to antenna motion errors. Bistatic SAR imagery is becoming increasingly important in the context of passive imaging and multi-sensor imaging. Our analysis provides an explicit quantitative relationship between the trajectory errors and the positioning errors in bistatic SAR images. The analysis is applicable to arbitrary trajectory errors and arbitrary imaging geometries including wide apertures and large scenes. We present extensive numerical simulations to validate the analysis and to illustrate the results in commonly used bistatic configurations and certain trajectory error models.

  10. USE OF SUPERPOSITION PRINCIPLE TO DERIVE A GENERAL MATHEMATICAL MODEL TO SIMULATE ONE-TO-ONE, ONE-TO-MULTI AND MULTI-TO-MULTI SAW FILTER DESIGNS

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    This paper explains and summarizes a new attempt to derive a general mathematical model [GMM] to simulate surface acoustic wave (SAW) filters, using the superposition principle and the delta function model. The GMM can be used to simulate One-to-One, One-to-Multi and Multi-to-Multi SAW filter devices. The simulation program was written using MATLAB (the language of technical computing). Four design structures (One-to-One, One-to-Two, One-to-Three and Ten-to-Ten) were selected to test the correctness of the GMM. The frequency response of the simulation and test results are similar in center frequency and 3-dB bandwidth, but the insertion loss is different, because of some second order effects (Issa Haitham, 1999).
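
    A minimal version of the delta-function model mentioned above (a sketch under common textbook assumptions, not the paper's GMM; the velocity and pitch values are illustrative) treats each electrode of an interdigital transducer as a delta source of alternating sign, so the transducer's frequency response is a finite phased sum and the filter response is the product of the two transducers' responses.

        import numpy as np

        v = 3488.0          # SAW velocity on the substrate, m/s (illustrative value)
        p = 8.72e-6         # electrode pitch, m  ->  centre frequency v / (2 p) ~ 200 MHz

        def idt_response(freqs, n_electrodes):
            """Delta-function model: sum of alternating-sign delta sources spaced by p."""
            n = np.arange(n_electrodes)
            # each electrode contributes (-1)^n * exp(-j 2 pi f n p / v)
            return np.array([np.sum((-1.0) ** n * np.exp(-2j * np.pi * f * n * p / v))
                             for f in freqs])

        freqs = np.linspace(150e6, 250e6, 5)
        h_in = idt_response(freqs, 40)        # input IDT
        h_out = idt_response(freqs, 10)       # output IDT
        h_filter = h_in * h_out               # overall filter response
        for f, h in zip(freqs, h_filter):
            print(f"{f / 1e6:6.1f} MHz : |H| = {abs(h):8.2f}")

    At the centre frequency the contributions add in phase and the response peaks, while the number of electrode pairs in each transducer sets the bandwidth; a One-to-Multi or Multi-to-Multi layout would superpose several such transducer responses.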

  11. Superposition of orbital angular momentum of photons by a combined computer-generated hologram fabricated in silica glass with femtosecond laser pulses

    Institute of Scientific and Technical Information of China (English)

    Guo Zhong-Yi; Qu Shi-Liang; Sun Zheng-He; Liu Shu-Tian

    2008-01-01

    This paper introduces a novel method to realize the superposition of orbital angular momentum of photons by a combined computer-generated hologram (CCGH) fabricated in silica glass with femtosecond laser pulses. Firstly, the two computer-generated holograms (CGH) of an optical vortex were obtained and combined as a CCGH according to the design. Then the CCGH was directly written inside glass by femtosecond laser pulse induced microexplosion without any pre- or post-treatment of the material. The vortex beams with different vortex topological charges (including new topological charges) have been reconstructed using a collimated He-Ne laser beam incident on the CCGH at normal incidence. A theoretical and experimental explanation has been presented for the generation of the new topological charges.

  12. Theoretical calculation on ICI reduction using digital coherent superposition of optical OFDM subcarrier pairs in the presence of laser phase noise.

    Science.gov (United States)

    Yi, Xingwen; Xu, Bo; Zhang, Jing; Lin, Yun; Qiu, Kun

    2014-12-15

    Digital coherent superposition (DCS) of optical OFDM subcarrier pairs with Hermitian symmetry can reduce the inter-carrier-interference (ICI) noise resulting from phase noise. In this paper, we show two different implementations of DCS-OFDM that have the same performance in the presence of laser phase noise. We complete the theoretical calculation of ICI reduction by using the model of pure Wiener phase noise. By a Taylor expansion of the ICI, we show that the ICI power is cancelled to the second order by DCS. The fourth order term is further derived and is determined only by the ratio of laser linewidth to OFDM subcarrier symbol rate, which can greatly simplify the system design. Finally, we verify our theoretical calculations in simulations and use the analytical results to predict the system performance. DCS-OFDM is expected to be beneficial to certain optical fiber transmissions.

  13. STIRAP preparation of a coherent superposition of ThO $H^3\\Delta_1$ states for an improved electron EDM measurement

    CERN Document Server

    Panda, C D; West, A D; Baron, J; Hess, P W; Hoffman, C; Kirilov, E; Overstreet, C B; West, E P; DeMille, D; Doyle, J M; Gabrielse, G

    2016-01-01

    Experimental searches for the electron electric dipole moment (EDM) probe new physics beyond the Standard Model. The current best EDM limit was set by the ACME Collaboration [Science \textbf{343}, 269 (2014)], constraining time reversal symmetry ($T$) violating physics at the TeV energy scale. ACME used optical pumping to prepare a coherent superposition of ThO $H^3\Delta_1$ states that have aligned electron spins. Spin precession due to the molecule's internal electric field was measured to extract the EDM. We report here on an improved method for preparing this spin-aligned state of the electron by using STIRAP. We demonstrate a transfer efficiency of $75\pm5\%$, representing a significant gain in signal for a next generation EDM experiment. We discuss the particularities of implementing STIRAP in systems such as ours, where molecular ensembles with large phase-space distributions are transferred via weak molecular transitions with limited laser power and limited optical access.

  14. Medication errors: hospital pharmacist perspective.

    Science.gov (United States)

    Guchelaar, Henk-Jan; Colen, Hadewig B B; Kalmeijer, Mathijs D; Hudson, Patrick T W; Teepe-Twiss, Irene M

    2005-01-01

    In recent years medication error has justly received considerable attention, as it causes substantial mortality, morbidity and additional healthcare costs. Risk assessment models, adapted from commercial aviation and the oil and gas industries, are currently being developed for use in clinical pharmacy. The hospital pharmacist is best placed to oversee the quality of the entire drug distribution chain, from prescribing, drug choice, dispensing and preparation to the administration of drugs, and can fulfil a vital role in improving medication safety. Most elements of the drug distribution chain can be optimised; however, because comparative intervention studies are scarce, there is little scientific evidence available demonstrating improvements in medication safety through such interventions. Possible interventions aimed at reducing medication errors, such as developing methods for detection of patients with increased risk of adverse drug events, performing risk assessment in clinical pharmacy and optimising the drug distribution chain are discussed. Moreover, the specific role of the clinical pharmacist in improving medication safety is highlighted, both at an organisational level and in individual patient care.

  15. Cosine tuning minimizes motor errors.

    Science.gov (United States)

    Todorov, Emanuel

    2002-06-01

    Cosine tuning is ubiquitous in the motor system, yet a satisfying explanation of its origin is lacking. Here we argue that cosine tuning minimizes expected errors in force production, which makes it a natural choice for activating muscles and neurons in the final stages of motor processing. Our results are based on the empirically observed scaling of neuromotor noise, whose standard deviation is a linear function of the mean. Such scaling predicts a reduction of net force errors when redundant actuators pull in the same direction. We confirm this prediction by comparing forces produced with one versus two hands and generalize it across directions. Under the resulting neuromotor noise model, we prove that the optimal activation profile is a (possibly truncated) cosine--for arbitrary dimensionality of the workspace, distribution of force directions, correlated or uncorrelated noise, with or without a separate cocontraction command. The model predicts a negative force bias, truncated cosine tuning at low muscle cocontraction levels, and misalignment of preferred directions and lines of action for nonuniform muscle distributions. All predictions are supported by experimental data.
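
    The noise-scaling argument can be checked with a few lines of simulation (a sketch of the empirical scaling described above, not the paper's full model): if the standard deviation of each actuator's force is proportional to its mean, splitting a target force across two actuators pulling in the same direction reduces the standard deviation of the net force by a factor of sqrt(2).

        import numpy as np

        rng = np.random.default_rng(0)
        target, k, n = 10.0, 0.1, 1_000_000     # target force, noise slope, samples

        # one actuator producing the whole force
        one = rng.normal(target, k * target, n)

        # two actuators, each producing half the force (noise std scales with the mean)
        two = (rng.normal(target / 2, k * target / 2, n)
               + rng.normal(target / 2, k * target / 2, n))

        print(f"std, one actuator : {one.std():.4f}")   # ~ k * target = 1.0
        print(f"std, two actuators: {two.std():.4f}")   # ~ k * target / sqrt(2) ~ 0.71

    This is the one-hand versus two-hands comparison in the abstract; generalizing the same variance accounting across force directions is what leads to the (possibly truncated) cosine activation profile.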

  16. A Method Applying Gray Image Superposition to Improve Ranging Accuracy in Planar Array Laser Radar

    Institute of Scientific and Technical Information of China (English)

    方毅; 张秀达; 胡剑; 王鹏鹏; 严惠民

    2013-01-01

    Compared with traditional point-scanning laser radar, the imaging laser radar based on range gating and gain-modulation ranging principles has a faster ranging speed, but at the same time the signal-to-noise ratio (SNR) of a single-frame image is lower. According to the principles of imaging laser radar, the ranging accuracy model under the influence of shot noise is built. Based on the model and the characteristics of airborne imaging laser radar, a new method which employs the techniques of gray image registration and superposition to improve ranging accuracy is put forward. Then the factors, such as the flight attitude, light uniformity and superposition frame number, that influence the application of gray registration and superposition are analyzed theoretically. The results show that flight attitude has no effect on the method and the effect of light uniformity is small when the uniformity is better than 40%. There will not be other errors between the targets when the superposition frame number is within a certain range. Ground dynamic experiments and an aerial experiment are conducted to verify the
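
    The gain from superposing registered frames can be illustrated with a toy model (a sketch with made-up numbers, not the paper's range-gating model): if each registered frame carries independent zero-mean noise, averaging N frames reduces the noise standard deviation by roughly sqrt(N), which is what drives the ranging-accuracy improvement.

        import numpy as np

        rng = np.random.default_rng(42)
        true_range = np.full((64, 64), 120.0)          # metres, flat target for simplicity
        noise_std = 0.5                                # per-frame range noise, metres

        for n_frames in [1, 4, 16, 64]:
            frames = true_range + rng.normal(0.0, noise_std, (n_frames, 64, 64))
            averaged = frames.mean(axis=0)             # gray/range superposition
            rms_err = np.sqrt(np.mean((averaged - true_range) ** 2))
            print(f"{n_frames:3d} frames: RMS range error = {rms_err:.3f} m")

    In practice the frames must first be registered so that the same pixel sees the same target across frames, which is why the flight attitude and registration quality matter in the airborne case.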

  17. Field errors in hybrid insertion devices

    Energy Technology Data Exchange (ETDEWEB)

    Schlueter, R.D. [Lawrence Berkeley Lab., CA (United States)

    1995-02-01

    Hybrid magnet theory as applied to the error analyses used in the design of Advanced Light Source (ALS) insertion devices is reviewed. Sources of field errors in hybrid insertion devices are discussed.

  18. Medical errors: legal and ethical responses.

    Science.gov (United States)

    Dickens, B M

    2003-04-01

    Liability to err is a human, often unavoidable, characteristic. Errors can be classified as skill-based, rule-based, knowledge-based and other errors, such as of judgment. In law, a key distinction is between negligent and non-negligent errors. To describe a mistake as an error of clinical judgment is legally ambiguous, since an error that a physician might have made when acting with ordinary care and the professional skill the physician claims, is not deemed negligent in law. If errors prejudice patients' recovery from treatment and/or future care, in physical or psychological ways, it is legally and ethically required that they be informed of them in appropriate time. Senior colleagues, facility administrators and others such as medical licensing authorities should be informed of serious forms of error, so that preventive education and strategies can be designed. Errors for which clinicians may be legally liable may originate in systemically defective institutional administration.

  19. Experimental demonstration of topological error correction.

    Science.gov (United States)

    Yao, Xing-Can; Wang, Tian-Xiong; Chen, Hao-Ze; Gao, Wei-Bo; Fowler, Austin G; Raussendorf, Robert; Chen, Zeng-Bing; Liu, Nai-Le; Lu, Chao-Yang; Deng, You-Jin; Chen, Yu-Ao; Pan, Jian-Wei

    2012-02-22

    Scalable quantum computing can be achieved only if quantum bits are manipulated in a fault-tolerant fashion. Topological error correction--a method that combines topological quantum computation with quantum error correction--has the highest known tolerable error rate for a local architecture. The technique makes use of cluster states with topological properties and requires only nearest-neighbour interactions. Here we report the experimental demonstration of topological error correction with an eight-photon cluster state. We show that a correlation can be protected against a single error on any quantum bit. Also, when all quantum bits are simultaneously subjected to errors with equal probability, the effective error rate can be significantly reduced. Our work demonstrates the viability of topological error correction for fault-tolerant quantum information processing.

  20. Game Design Principles based on Human Error

    Directory of Open Access Journals (Sweden)

    Guilherme Zaffari

    2016-03-01

    Full Text Available This paper presents the results of the authors' research on incorporating Human Error, through design principles, into video game design. In general, designers must consider Human Error factors throughout video game interface development; however, when it comes to a game's core design, adaptations are needed, since challenge is an important factor for fun, and under the perspective of Human Error, challenge can be considered a flaw in the system. The research utilized Human Error classifications, data triangulation via predictive human error analysis, and the expanded flow theory to allow the design of a set of principles in order to match the design of playful challenges with the principles of Human Error. From the results, it was possible to conclude that the application of Human Error in game design has a positive effect on player experience, allowing it to interact only with errors associated with the intended aesthetics of the game.

  1. L’errore nel laboratorio di Microbiologia

    Directory of Open Access Journals (Sweden)

    Paolo Lanzafame

    2006-03-01

    Full Text Available Error management plays one of the most important roles in facility process improvement efforts. By detecting and reducing errors, quality and patient care improve. The records of errors were analysed over a period of 6 months, and another period was used to study the potential bias in the registrations. The percentage of errors detected was 0.17% (normalised 1720 ppm), and errors in the pre-analytical phase were the largest part. The highest rate of errors was generated by the peripheral centres, which send microbiology tests only occasionally and do not know well the specific procedures to collect and store biological samples. Errors in the management of laboratory supplies were reported too. The conclusion is that improving operators' training, in particular concerning sample collection and storage, is very important, and that an effective system of error detection should be employed to determine the causes so that the best corrective action can be applied.

  2. An Error Analysis on TFL Learners’ Writings

    Directory of Open Access Journals (Sweden)

    Arif ÇERÇİ

    2016-12-01

    Full Text Available The main purpose of the present study is to identify and represent TFL learners' writing errors through error analysis. All the learners started learning Turkish as a foreign language at the A1 (beginner) level and completed the process by taking the C1 (advanced) certificate in TÖMER at Gaziantep University. The data of the present study were collected from 14 students' writings in proficiency exams for each level. The data were grouped as grammatical, syntactic, spelling, punctuation, and word choice errors. The ratio and categorical distributions of identified errors were analyzed through error analysis. The data were analyzed through statistical procedures in an effort to determine whether error types differ according to the levels of the students. The errors in this study are limited to linguistic and intralingual developmental errors

  3. Error Propagation in a System Model

    Science.gov (United States)

    Schloegel, Kirk (Inventor); Bhatt, Devesh (Inventor); Oglesby, David V. (Inventor); Madl, Gabor (Inventor)

    2015-01-01

    Embodiments of the present subject matter can enable the analysis of signal value errors for system models. In an example, signal value errors can be propagated through the functional blocks of a system model to analyze possible effects as the signal value errors impact incident functional blocks. This propagation of the errors can be applicable to many models of computation including avionics models, synchronous data flow, and Kahn process networks.
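
    As an illustration of the propagation idea described in this record, the sketch below pushes error tags attached to signals through a directed graph of functional blocks, so that every incident downstream block accumulates the errors of its inputs. The block names, graph structure, and error representation are invented for illustration; this is not the patented method itself.

        # Minimal sketch: propagate signal-value error tags through a block diagram.
        # Graph structure and error model are illustrative assumptions only.
        from collections import defaultdict, deque

        def propagate_errors(blocks, edges, injected):
            """blocks: block names; edges: (src, dst) signal connections;
            injected: {block: set of error tags introduced at that block}."""
            downstream = defaultdict(list)
            indegree = defaultdict(int)
            for src, dst in edges:
                downstream[src].append(dst)
                indegree[dst] += 1
            errors = {b: set(injected.get(b, set())) for b in blocks}
            queue = deque(b for b in blocks if indegree[b] == 0)
            while queue:                          # topological sweep over the model
                b = queue.popleft()
                for nxt in downstream[b]:
                    errors[nxt] |= errors[b]      # incident block inherits upstream errors
                    indegree[nxt] -= 1
                    if indegree[nxt] == 0:
                        queue.append(nxt)
            return errors

        if __name__ == "__main__":
            blocks = ["sensor", "filter", "controller", "actuator"]
            edges = [("sensor", "filter"), ("filter", "controller"), ("controller", "actuator")]
            print(propagate_errors(blocks, edges, {"sensor": {"out_of_range"}}))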

  4. Experimental demonstration of topological error correction

    OpenAIRE

    2012-01-01

    Scalable quantum computing can only be achieved if qubits are manipulated fault-tolerantly. Topological error correction - a novel method which combines topological quantum computing and quantum error correction - possesses the highest known tolerable error rate for a local architecture. This scheme makes use of cluster states with topological properties and requires only nearest-neighbour interactions. Here we report the first experimental demonstration of topological error correction with a...

  5. Sampling error of observation impact statistics

    OpenAIRE

    Kim, Sung-Min; Kim, Hyun Mee

    2014-01-01

    An observation impact is an estimate of the forecast error reduction by assimilating observations with numerical model forecasts. This study compares the sampling errors of the observation impact statistics (OBIS) of July 2011 and January 2012 using two methods. One method uses the random error under the assumption that the samples are independent, and the other method uses the error with lag correlation under the assumption that the samples are correlated with each other. The OBIS are obtain...
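
    A common way to contrast the two assumptions mentioned in this record is to shrink the effective sample size by the lag-1 autocorrelation before computing the standard error of the mean. The sketch below uses that textbook approximation on simulated data; it is not the authors' exact estimator.

        # Standard error of a mean under (a) assumed independence and (b) lag-1
        # autocorrelation, via the textbook effective-sample-size approximation.
        # Simulated data; this is not the paper's estimator.
        import numpy as np

        def standard_errors(x):
            x = np.asarray(x, dtype=float)
            n = x.size
            r1 = np.corrcoef(x[:-1], x[1:])[0, 1]               # lag-1 autocorrelation
            n_eff = n * (1.0 - r1) / (1.0 + r1)                 # effective sample size
            se_indep = x.std(ddof=1) / np.sqrt(n)
            se_corr = x.std(ddof=1) / np.sqrt(max(n_eff, 1.0))
            return se_indep, se_corr

        rng = np.random.default_rng(0)
        impacts = np.cumsum(rng.normal(size=200)) * 0.05 + rng.normal(size=200)
        print(standard_errors(impacts))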

  6. Acoustic Evidence for Phonologically Mismatched Speech Errors

    Science.gov (United States)

    Gormley, Andrea

    2015-01-01

    Speech errors are generally said to accommodate to their new phonological context. This accommodation has been validated by several transcription studies. The transcription methodology is not the best choice for detecting errors at this level, however, as this type of error can be difficult to perceive. This paper presents an acoustic analysis of…

  7. Medication errors: the importance of safe dispensing.

    NARCIS (Netherlands)

    Cheung, K.C.; Bouvy, M.L.; Smet, P.A.G.M. de

    2009-01-01

    1. Although rates of dispensing errors are generally low, further improvements in pharmacy distribution systems are still important because pharmacies dispense such high volumes of medications that even a low error rate can translate into a large number of errors. 2. From the perspective of pharmacy

  8. Understanding EFL Students' Errors in Writing

    Science.gov (United States)

    Phuket, Pimpisa Rattanadilok Na; Othman, Normah Binti

    2015-01-01

    Writing is the most difficult skill in English, so most EFL students tend to make errors in writing. In assisting the learners to successfully acquire writing skill, the analysis of errors and the understanding of their sources are necessary. This study attempts to explore the major sources of errors occurred in the writing of EFL students. It…

  9. Error Analysis of Quadrature Rules. Classroom Notes

    Science.gov (United States)

    Glaister, P.

    2004-01-01

    Approaches to the determination of the error in numerical quadrature rules are discussed and compared. This article considers the problem of the determination of errors in numerical quadrature rules, taking Simpson's rule as the principal example. It suggests an approach based on truncation error analysis of numerical schemes for differential…
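
    The kind of truncation-error analysis suggested here can be checked numerically. The sketch below compares the observed error of composite Simpson's rule with the standard bound (b - a) * h^4 * max|f''''| / 180; the integrand and interval are assumptions chosen for illustration, not taken from the article.

        # Compare the observed composite Simpson's rule error with the standard
        # truncation-error bound (b - a) * h**4 * max|f''''| / 180.
        # Integrand and interval are illustrative choices.
        import math

        def simpson(f, a, b, n):                 # n must be even
            h = (b - a) / n
            s = f(a) + f(b)
            s += 4 * sum(f(a + i * h) for i in range(1, n, 2))
            s += 2 * sum(f(a + i * h) for i in range(2, n, 2))
            return s * h / 3

        f = math.sin                             # f''''(x) = sin(x), so max|f''''| = 1 on [0, pi]
        a, b, n = 0.0, math.pi, 10
        h = (b - a) / n
        observed = abs(simpson(f, a, b, n) - 2.0)        # exact integral of sin on [0, pi] is 2
        bound = (b - a) * h**4 * 1.0 / 180
        print(f"observed error {observed:.2e} <= bound {bound:.2e}")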

  10. Error Analysis in Mathematics. Technical Report #1012

    Science.gov (United States)

    Lai, Cheng-Fei

    2012-01-01

    Error analysis is a method commonly used to identify the cause of student errors when they make consistent mistakes. It is a process of reviewing a student's work and then looking for patterns of misunderstanding. Errors in mathematics can be factual, procedural, or conceptual, and may occur for a number of reasons. Reasons why students make…

  11. Error Analysis and the EFL Classroom Teaching

    Science.gov (United States)

    Xie, Fang; Jiang, Xue-mei

    2007-01-01

    This paper makes a study of error analysis and its implementation in the EFL (English as Foreign Language) classroom teaching. It starts by giving a systematic review of the concepts and theories concerning EA (Error Analysis), the various reasons causing errors are comprehensively explored. The author proposes that teachers should employ…

  12. Human Error Mechanisms in Complex Work Environments

    DEFF Research Database (Denmark)

    Rasmussen, Jens

    1988-01-01

    will account for most of the action errors observed. In addition, error mechanisms appear to be intimately related to the development of high skill and know-how in a complex work context. This relationship between errors and human adaptation is discussed in detail for individuals and organisations...

  13. Errors and Uncertainty in Physics Measurement.

    Science.gov (United States)

    Blasiak, Wladyslaw

    1983-01-01

    Classifies errors as either systematic or blunder and uncertainties as either systematic or random. Discusses use of error/uncertainty analysis in direct/indirect measurement, describing the process of planning experiments to ensure lowest possible uncertainty. Also considers appropriate level of error analysis for high school physics students'…
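
    For the indirect-measurement case mentioned in this record, independent uncertainties are usually combined in quadrature. The density example below is invented purely to illustrate that rule.

        # Invented example of propagating uncertainty to an indirect measurement:
        # density rho = m / V, with independent relative uncertainties combined
        # in quadrature: d(rho)/rho = sqrt((dm/m)**2 + (dV/V)**2).
        import math

        m, dm = 25.3, 0.1        # mass in grams
        V, dV = 9.8, 0.2         # volume in cm^3
        rho = m / V
        drho = rho * math.sqrt((dm / m) ** 2 + (dV / V) ** 2)
        print(f"rho = {rho:.2f} +/- {drho:.2f} g/cm^3")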

  14. Measurement error in a single regressor

    NARCIS (Netherlands)

    Meijer, H.J.; Wansbeek, T.J.

    2000-01-01

    For the setting of multiple regression with measurement error in a single regressor, we present some very simple formulas to assess the result that one may expect when correcting for measurement error. It is shown where the corrected estimated regression coefficients and the error variance may lie,
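
    The classical result behind such corrections is attenuation: the naive slope is scaled by the reliability ratio, so dividing by an estimate of that ratio restores it. The sketch below simulates this under the standard classical-measurement-error assumptions; it is not the paper's exact formulas.

        # Classical attenuation correction in simple regression: the observed
        # covariate is the true covariate plus noise, so the naive slope is
        # attenuated by lambda = var(true x) / var(observed x).  Textbook
        # formulas, not the paper's exact expressions.
        import numpy as np

        rng = np.random.default_rng(1)
        n, beta_true = 5000, 2.0
        x_true = rng.normal(size=n)
        x_obs = x_true + rng.normal(scale=0.5, size=n)    # measurement error
        y = beta_true * x_true + rng.normal(size=n)

        beta_naive = np.cov(x_obs, y)[0, 1] / np.var(x_obs, ddof=1)
        reliability = np.var(x_true, ddof=1) / np.var(x_obs, ddof=1)  # needs validation data in practice
        beta_corrected = beta_naive / reliability
        print(beta_naive, beta_corrected)                 # attenuated (~1.6) vs corrected (~2.0)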

  15. Jonas Olson's Evidence for Moral Error Theory

    NARCIS (Netherlands)

    Evers, Daan

    2016-01-01

    Jonas Olson defends a moral error theory in (2014). I first argue that Olson is not justified in believing the error theory as opposed to moral nonnaturalism in his own opinion. I then argue that Olson is not justified in believing the error theory as opposed to moral contextualism either (although

  16. AWARENESS OF DENTISTS ABOUT MEDICATION ERRORS

    Directory of Open Access Journals (Sweden)

    Sangeetha

    2014-01-01

    Full Text Available OBJECTIVE: To assess the awareness of medication errors among dentists. METHODS: Medication errors are the most common single preventable cause of adverse events in medication practice. We conducted a survey with a sample of sixty dentists. Among them, 30 were general dentists (BDS) and 30 were dental specialists (MDS). Questionnaires with questions regarding medication errors were distributed to them, and they were asked to complete the questionnaire. Data were collected and subjected to statistical analysis using the Fisher exact and Chi square tests. RESULTS: In our study, sixty percent of general dentists and 76.7% of dental specialists were aware of the components of medication error. Overall, 66.7% of the respondents in each group marked wrong duration as the dispensing error. Almost thirty percent of the general dentists and 56.7% of the dental specialists felt that technologic advances could accomplish diverse tasks in reducing medication errors. This was of suggestive statistical significance, with a P value of 0.069. CONCLUSION: Medication errors compromise patient confidence in the health-care system and increase health-care costs. Overall, the dental specialists were more knowledgeable than the general dentists about medication errors. KEY WORDS: Medication errors; Dosing error; Prevention of errors; Adverse drug events; Prescribing errors; Medical errors.

  17. Error-Compensated Integrate and Hold

    Science.gov (United States)

    Matlin, M.

    1984-01-01

    Differencing circuit cancels error caused by switching-transistor capacitance. In integrate-and-hold circuit using JFET switch, gate-to-source capacitance causes error in output voltage. Differential connection cancels out error. Applications include systems where very low voltages are sampled or many integrate-and-hold cycles occur before circuit is reset.

  19. Human Errors and Bridge Management Systems

    DEFF Research Database (Denmark)

    Thoft-Christensen, Palle; Nowak, A. S.

    Human errors are divided into two groups. The first group contains human errors which affect the reliability directly. The second group contains human errors which will not directly affect the reliability of the structure. The methodology used to estimate so-called reliability distributions on ba...

  20. The Problematic of Second Language Errors

    Science.gov (United States)

    Hamid, M. Obaidul; Doan, Linh Dieu

    2014-01-01

    The significance of errors in explicating Second Language Acquisition (SLA) processes led to the growth of error analysis in the 1970s, which has since maintained its prominence in English as a second/foreign language (L2) research. However, one problem with this research is that errors are often taken for granted, without problematising them and their…

  1. Error estimate for Doo-Sabin surfaces

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    Based on a general bound on the distance error between a uniform Doo-Sabin surface and its control polyhedron, an exponential error bound independent of the subdivision process is presented in this paper. Using the exponential bound, one can predict the depth of recursive subdivision of the Doo-Sabin surface within any user-specified error tolerance.
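
    If the distance bound decays geometrically with the subdivision level, say M * r^k after k steps with 0 < r < 1, the depth needed for a user-specified tolerance follows directly. The constants in the sketch below are invented placeholders, not the bound derived in the paper.

        # Predict the subdivision depth needed for tolerance eps, assuming an
        # exponential bound of the form M * r**k.  M and r are placeholders.
        import math

        def required_depth(M, r, eps):
            if M <= eps:
                return 0
            return math.ceil(math.log(M / eps) / math.log(1.0 / r))

        print(required_depth(M=1.0, r=0.25, eps=1e-4))    # -> 7 subdivision steps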

  3. Preventing statistical errors in scientific journals.

    NARCIS (Netherlands)

    Nuijten, M.B.

    2016-01-01

    There is evidence for a high prevalence of statistical reporting errors in psychology and other scientific fields. These errors display a systematic preference for statistically significant results, distorting the scientific literature. There are several possible causes for this systematic error pre

  4. Quantum error-correction failure distributions: Comparison of coherent and stochastic error models

    Science.gov (United States)

    Barnes, Jeff P.; Trout, Colin J.; Lucarelli, Dennis; Clader, B. D.

    2017-06-01

    We compare failure distributions of quantum error correction circuits for stochastic errors and coherent errors. We utilize a fully coherent simulation of a fault-tolerant quantum error correcting circuit for a d = 3 Steane and surface code. We find that the output distributions are markedly different for the two error models, showing that no simple mapping between the two error models exists. Coherent errors create very broad and heavy-tailed failure distributions. This suggests that they are susceptible to outlier events and that mean statistics, such as pseudothreshold estimates, may not provide the key figure of merit. This provides further statistical insight into why coherent errors can be so harmful for quantum error correction. These output probability distributions may also provide a useful metric that can be utilized when optimizing quantum error correcting codes and decoding procedures for purely coherent errors.

  5. Correlated measurement error hampers association network inference.

    Science.gov (United States)

    Kaduk, Mateusz; Hoefsloot, Huub C J; Vis, Daniel J; Reijmers, Theo; van der Greef, Jan; Smilde, Age K; Hendriks, Margriet M W B

    2014-09-01

    Modern chromatography-based metabolomics measurements generate large amounts of data in the form of abundances of metabolites. An increasingly popular way of representing and analyzing such data is by means of association networks. Ideally, such a network can be interpreted in terms of the underlying biology. A property of chromatography-based metabolomics data is that the measurement error structure is complex: apart from the usual (random) instrumental error there is also correlated measurement error. This is intrinsic to the way the samples are prepared and the analyses are performed and cannot be avoided. The impact of correlated measurement errors on (partial) correlation networks can be large and is not always predictable. The interplay between relative amounts of uncorrelated measurement error, correlated measurement error and biological variation defines this impact. Using chromatography-based time-resolved lipidomics data obtained from a human intervention study we show how partial correlation based association networks are influenced by correlated measurement error. We show how the effect of correlated measurement error on partial correlations is different for direct and indirect associations. For direct associations the correlated measurement error usually has no negative effect on the results, while for indirect associations, depending on the relative size of the correlated measurement error, results can become unreliable. The aim of this paper is to generate awareness of the existence of correlated measurement errors and their influence on association networks. Time series lipidomics data is used for this purpose, as it makes it possible to visually distinguish the correlated measurement error from a biological response. Underestimating the phenomenon of correlated measurement error will result in the suggestion of biologically meaningful results that in reality rest solely on complicated error structures. Using proper experimental designs that allow
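
    A toy simulation makes the phenomenon visible: a sample-wise additive offset shared by all measured variables (correlated measurement error) inflates the correlation between metabolites that are biologically unrelated. The data-generating model below is an invented illustration, not the study's design.

        # Toy sketch: a shared per-sample offset (correlated measurement error)
        # inflates the correlation between otherwise unrelated metabolites.
        import numpy as np

        rng = np.random.default_rng(2)
        n = 300
        biology = rng.normal(size=n)
        m1 = biology + rng.normal(scale=0.3, size=n)          # biologically related pair
        m2 = 0.8 * biology + rng.normal(scale=0.3, size=n)
        m3 = rng.normal(size=n)                               # unrelated metabolite

        shared_error = rng.normal(scale=1.0, size=n)          # e.g. a sample-preparation offset
        m1e, m2e, m3e = m1 + shared_error, m2 + shared_error, m3 + shared_error

        print("corr(m1, m3) without correlated error:", round(np.corrcoef(m1, m3)[0, 1], 2))
        print("corr(m1, m3) with correlated error:   ", round(np.corrcoef(m1e, m3e)[0, 1], 2))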

  6. Model error estimation in ensemble data assimilation

    Directory of Open Access Journals (Sweden)

    S. Gillijns

    2007-01-01

    Full Text Available A new methodology is proposed to estimate and account for systematic model error in linear filtering as well as in nonlinear ensemble based filtering. Our results extend the work of Dee and Todling (2000) on constant bias errors to time-varying model errors. In contrast to existing methodologies, the new filter can also deal with the case where no dynamical model for the systematic error is available. In the latter case, the applicability is limited by a matrix rank condition which has to be satisfied in order for the filter to exist. The performance of the filter developed in this paper is limited by the availability and the accuracy of observations and by the variance of the stochastic model error component. The effect of these aspects on the estimation accuracy is investigated in several numerical experiments using the Lorenz (1996) model. Experimental results indicate that the availability of a dynamical model for the systematic error significantly reduces the variance of the model error estimates, but has only minor effect on the estimates of the system state. The filter is able to estimate additive model error of any type, provided that the rank condition is satisfied and that the stochastic errors and measurement errors are significantly smaller than the systematic errors. The results of this study are encouraging. However, it remains to be seen how the filter performs in more realistic applications.
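
    For readers unfamiliar with the test system, the Lorenz (1996) model used in these experiments is a small set of cyclically coupled ordinary differential equations. The minimal integrator below uses common default choices (40 variables, forcing F = 8, RK4 with dt = 0.05), which are not necessarily the settings of the paper.

        # Minimal Lorenz-96 integrator (RK4).  The forcing, dimension, and step
        # size are common defaults, not necessarily those used in the paper.
        import numpy as np

        def lorenz96_rhs(x, F=8.0):
            # dx_i/dt = (x_{i+1} - x_{i-2}) * x_{i-1} - x_i + F, with cyclic indices
            return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F

        def rk4_step(x, dt=0.05, F=8.0):
            k1 = lorenz96_rhs(x, F)
            k2 = lorenz96_rhs(x + 0.5 * dt * k1, F)
            k3 = lorenz96_rhs(x + 0.5 * dt * k2, F)
            k4 = lorenz96_rhs(x + dt * k3, F)
            return x + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6

        x = 8.0 * np.ones(40)
        x[19] += 0.01                              # small perturbation to trigger chaos
        for _ in range(500):
            x = rk4_step(x)
        print(x[:5])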

  7. Analysis of errors in forensic science

    Directory of Open Access Journals (Sweden)

    Mingxiao Du

    2017-01-01

    Full Text Available Reliability of expert testimony is one of the foundations of judicial justice. Both expert bias and scientific errors affect the reliability of expert opinion, which in turn affects the trustworthiness of the findings of fact in legal proceedings. Expert bias can be eliminated by replacing experts; however, it may be more difficult to eliminate scientific errors. From the perspective of statistics, errors in the operation of forensic science include systematic errors, random errors, and gross errors. In general, process repetition and abiding by the standard ISO/IEC 17025:2005, General requirements for the competence of testing and calibration laboratories, during operation are common measures used to reduce errors that originate from experts and equipment, respectively. For example, to reduce gross errors, the laboratory can ensure that a test is repeated several times by different experts. In applying forensic principles and methods, Federal Rule of Evidence 702 mandates that judges consider factors such as peer review to ensure the reliability of expert testimony. As scientific principles and methods may not undergo professional review by specialists in a certain field, peer review serves as an exclusive standard. This study also examines two types of statistical errors. As false-positive errors involve a higher possibility of an unfair decision, they should receive more attention than false-negative errors.

  8. Errors in quantum tomography: diagnosing systematic versus statistical errors

    Science.gov (United States)

    Langford, Nathan K.

    2013-03-01

    A prime goal of quantum tomography is to provide quantitatively rigorous characterization of quantum systems, be they states, processes or measurements, particularly for the purposes of trouble-shooting and benchmarking experiments in quantum information science. A range of techniques exist to enable the calculation of errors, such as Monte-Carlo simulations, but their quantitative value is arguably fundamentally flawed without an equally rigorous way of authenticating the quality of a reconstruction to ensure it provides a reasonable representation of the data, given the known noise sources. A key motivation for developing such a tool is to enable experimentalists to rigorously diagnose the presence of technical noise in their tomographic data. In this work, I explore the performance of the chi-squared goodness-of-fit test statistic as a measure of reconstruction quality. I show that its behaviour deviates noticeably from expectations for states lying near the boundaries of physical state space, severely undermining its usefulness as a quantitative tool precisely in the region which is of most interest in quantum information processing tasks. I suggest a simple, heuristic approach to compensate for these effects and present numerical simulations showing that this approach provides substantially improved performance.
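
    The statistic in question is the usual chi-squared goodness-of-fit quantity that compares observed counts with the counts predicted by a reconstruction. The generic sketch below assumes Poisson-style counting noise and invented numbers; the paper's exact noise model and data may differ.

        # Generic chi-squared goodness-of-fit sketch: observed counts vs. counts
        # predicted by a reconstructed model, with Poisson-style variance.
        import numpy as np

        def chi_squared(observed, predicted):
            predicted = np.maximum(predicted, 1e-9)          # guard against zero bins
            return np.sum((observed - predicted) ** 2 / predicted)

        rng = np.random.default_rng(3)
        predicted = np.array([120.0, 80.0, 45.0, 15.0])      # counts implied by a reconstruction
        observed = rng.poisson(predicted)                    # simulated measurement
        dof = observed.size - 1                              # minus fitted parameters in practice
        print(chi_squared(observed, predicted), "for", dof, "degrees of freedom")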

  9. Impact of Measurement Error on Synchrophasor Applications

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Yilu [Univ. of Tennessee, Knoxville, TN (United States); Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Gracia, Jose R. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Ewing, Paul D. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Zhao, Jiecheng [Univ. of Tennessee, Knoxville, TN (United States); Tan, Jin [Univ. of Tennessee, Knoxville, TN (United States); Wu, Ling [Univ. of Tennessee, Knoxville, TN (United States); Zhan, Lingwei [Univ. of Tennessee, Knoxville, TN (United States)

    2015-07-01

    Phasor measurement units (PMUs), a type of synchrophasor, are powerful diagnostic tools that can help avert catastrophic failures in the power grid. Because of this, PMU measurement errors are particularly worrisome. This report examines the internal and external factors contributing to PMU phase angle and frequency measurement errors and gives a reasonable explanation for them. It also analyzes the impact of those measurement errors on several synchrophasor applications: event location detection, oscillation detection, islanding detection, and dynamic line rating. The primary finding is that dynamic line rating is more likely to be influenced by measurement error. Other findings include the possibility of reporting nonoscillatory activity as an oscillation as the result of error, failing to detect oscillations submerged by error, and the unlikely impact of error on event location and islanding detection.

  10. Adjoint Error Estimation for Linear Advection

    Energy Technology Data Exchange (ETDEWEB)

    Connors, J M; Banks, J W; Hittinger, J A; Woodward, C S

    2011-03-30

    An a posteriori error formula is described when a statistical measurement of the solution to a hyperbolic conservation law in 1D is estimated by finite volume approximations. This is accomplished using adjoint error estimation. In contrast to previously studied methods, the adjoint problem is divorced from the finite volume method used to approximate the forward solution variables. An exact error formula and computable error estimate are derived based on an abstractly defined approximation of the adjoint solution. This framework allows the error to be computed to an arbitrary accuracy given a sufficiently well resolved approximation of the adjoint solution. The accuracy of the computable error estimate provably satisfies an a priori error bound for sufficiently smooth solutions of the forward and adjoint problems. The theory does not currently account for discontinuities. Computational examples are provided that show support of the theory for smooth solutions. The application to problems with discontinuities is also investigated computationally.

  11. On the Combination Procedure of Correlated Errors

    CERN Document Server

    Erler, Jens

    2015-01-01

    When averages of different experimental determinations of the same quantity are computed, each with statistical and systematic error components, then frequently the statistical and systematic components of the combined error are quoted explicitly. These are important pieces of information since statistical errors scale differently and often more favorably with the sample size than most systematical or theoretical errors. In this communication we describe a transparent procedure by which the statistical and systematic error components of the combination uncertainty can be obtained. We develop a general method and derive a general formula for the case of Gaussian errors with or without correlations. The method can easily be applied to other error distributions, as well. For the case of two measurements, we also define disparity and misalignment angles, and discuss their relation to the combination weight factors.
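
    For the two-measurement Gaussian case, a standard inverse-covariance (BLUE) combination shows how correlated systematic components enter the weights and how the combined error can be split into statistical and systematic parts. The numbers below are invented, and the split follows the usual convention rather than necessarily reproducing the paper's formula.

        # Combine two measurements of one quantity with uncorrelated statistical
        # errors and fully correlated systematic errors, using inverse-covariance
        # (BLUE) weights.  Numbers are invented for illustration.
        import numpy as np

        x = np.array([10.2, 9.7])            # central values
        stat = np.array([0.3, 0.4])          # statistical errors (uncorrelated)
        syst = np.array([0.2, 0.2])          # systematic errors (assumed 100% correlated)

        cov = np.diag(stat**2) + np.outer(syst, syst)    # total covariance matrix
        w = np.linalg.solve(cov, np.ones(2))
        w /= w.sum()                                     # combination weights

        mean = w @ x
        stat_comb = np.sqrt(w @ np.diag(stat**2) @ w)    # statistical component of the combined error
        syst_comb = np.sqrt(w @ np.outer(syst, syst) @ w)  # systematic component
        print(mean, stat_comb, syst_comb)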

  12. On the combination procedure of correlated errors

    Energy Technology Data Exchange (ETDEWEB)

    Erler, Jens [Universidad Nacional Autonoma de Mexico, Instituto de Fisica, Mexico D.F. (Mexico)

    2015-09-15

    When averages of different experimental determinations of the same quantity are computed, each with statistical and systematic error components, then frequently the statistical and systematic components of the combined error are quoted explicitly. These are important pieces of information since statistical errors scale differently and often more favorably with the sample size than most systematical or theoretical errors. In this communication we describe a transparent procedure by which the statistical and systematic error components of the combination uncertainty can be obtained. We develop a general method and derive a general formula for the case of Gaussian errors with or without correlations. The method can easily be applied to other error distributions, as well. For the case of two measurements, we also define disparity and misalignment angles, and discuss their relation to the combination weight factors. (orig.)

  13. Advanced hardware design for error correcting codes

    CERN Document Server

    Coussy, Philippe

    2015-01-01

    This book provides thorough coverage of error correcting techniques. It includes essential basic concepts and the latest advances on key topics in the design, implementation, and optimization of hardware/software systems for error correction. The book’s chapters are written by internationally recognized experts in this field. Topics include the evolution of error correction techniques, industrial user needs, and architectures and design approaches for the most advanced error correcting codes (Polar Codes, Non-Binary LDPC, Product Codes, etc.). This book provides access to recent results and is suitable for graduate students and researchers of mathematics, computer science, and engineering. • Examines how to optimize the architecture of hardware design for error correcting codes; • Presents error correction codes from theory to optimized architecture for the current and the next generation of standards; • Provides coverage of industrial user needs and advanced error correcting techniques.

  14. Human error: A significant information security issue

    Energy Technology Data Exchange (ETDEWEB)

    Banks, W.W.

    1994-12-31

    One of the major threats to information security, human error, is often ignored or dismissed with statements such as "There is not much we can do about it." This type of thinking runs counter to reality because studies have shown that, of all systems threats, human error has the highest probability of occurring and that, with professional assistance, human errors can be prevented or significantly reduced. Security analysts often overlook human error as a major threat; however, other professionals such as human factors engineers are trained to deal with these probabilistic occurrences and mitigate them. In a recent study, 55% of the respondents surveyed considered human error the most important security threat. Documentation exists to show that human error was a major cause of the consequences suffered at Three Mile Island, Chernobyl, and Bhopal, and with the Exxon tanker Valdez. Ironically, causes of human error can usually be quickly and easily eliminated.

  15. Radar error statistics for the space shuttle

    Science.gov (United States)

    Lear, W. M.

    1979-01-01

    Radar error statistics of C-band and S-band that are recommended for use with the groundtracking programs to process space shuttle tracking data are presented. The statistics are divided into two parts: bias error statistics, using the subscript B, and high frequency error statistics, using the subscript q. Bias errors may be slowly varying to constant. High frequency random errors (noise) are rapidly varying and may or may not be correlated from sample to sample. Bias errors were mainly due to hardware defects and to errors in correction for atmospheric refraction effects. High frequency noise was mainly due to hardware and due to atmospheric scintillation. Three types of atmospheric scintillation were identified: horizontal, vertical, and line of sight. This was the first time that horizontal and line of sight scintillations were identified.

  16. Relationships of Measurement Error and Prediction Error in Observed-Score Regression

    Science.gov (United States)

    Moses, Tim

    2012-01-01

    The focus of this paper is assessing the impact of measurement errors on the prediction error of an observed-score regression. Measures are presented and described for decomposing the linear regression's prediction error variance into parts attributable to the true score variance and the error variances of the dependent variable and the predictor…

  17. Orthogonality of inductosyn angle-measuring system error and error-separating technology

    Institute of Scientific and Technical Information of China (English)

    任顺清; 曾庆双; 王常虹

    2003-01-01

    Round inductosyns are widely used in inertial navigation test equipment, and their accuracy has a significant effect on the overall accuracy of the equipment. Four main errors of the round inductosyn, i.e. the first-order long-period (360°) harmonic error, the second-order long-period harmonic error, the first-order short-period harmonic error and the second-order short-period harmonic error, are described, and the orthogonality of these four kinds of errors is studied. An error-separating technology is proposed to separate these four kinds of errors, and in the process of separating the short-period harmonic errors, the arrangement in the order of the decimal part of the angle pitch number can be omitted. The effectiveness of the proposed technology is proved through measuring and adjusting the angular errors.
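
    One generic way to separate such harmonic components is a least-squares fit of first- and second-order sinusoids to the measured angle errors. The sketch below is that generic Fourier-style fit on simulated data; it is not the specific separating technology proposed in the paper.

        # Illustrative harmonic separation: least-squares fit of first- and
        # second-order sinusoidal components to simulated angle-error data.
        import numpy as np

        rng = np.random.default_rng(4)
        theta = np.linspace(0.0, 2 * np.pi, 360, endpoint=False)    # measurement angles
        error = (3.0 * np.sin(theta) + 1.0 * np.cos(2 * theta)
                 + rng.normal(scale=0.2, size=theta.size))          # simulated error, arbitrary units

        # Design matrix holding the first- and second-order harmonics plus an offset.
        A = np.column_stack([np.sin(theta), np.cos(theta),
                             np.sin(2 * theta), np.cos(2 * theta),
                             np.ones_like(theta)])
        coeffs, *_ = np.linalg.lstsq(A, error, rcond=None)
        print("recovered harmonic coefficients:", np.round(coeffs, 2))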

  18. Error processing network dynamics in schizophrenia.

    Science.gov (United States)

    Becerril, Karla E; Repovs, Grega; Barch, Deanna M

    2011-01-15

    Current theories of cognitive dysfunction in schizophrenia emphasize an impairment in the ability of individuals suffering from this disorder to monitor their own performance, and adjust their behavior to changing demands. Detecting an error in performance is a critical component of evaluative functions that allow the flexible adjustment of behavior to optimize outcomes. The dorsal anterior cingulate cortex (dACC) has been repeatedly implicated in error-detection and implementation of error-based behavioral adjustments. However, accurate error-detection and subsequent behavioral adjustments are unlikely to rely on a single brain region. Recent research demonstrates that regions in the anterior insula, inferior parietal lobule, anterior prefrontal cortex, thalamus, and cerebellum also show robust error-related activity, and integrate into a functional network. Despite the relevance of examining brain activity related to the processing of error information and supporting behavioral adjustments in terms of a distributed network, the contribution of regions outside the dACC to error processing remains poorly understood. To address this question, we used functional magnetic resonance imaging to examine error-related responses in 37 individuals with schizophrenia and 32 healthy controls in regions identified in the basic science literature as being involved in error processing, and determined whether their activity was related to behavioral adjustments. Our imaging results support previous findings showing that regions outside the dACC are sensitive to error commission, and demonstrated that abnormalities in brain responses to errors among individuals with schizophrenia extend beyond the dACC to almost all of the regions involved in error-related processing in controls. However, error related responses in the dACC were most predictive of behavioral adjustments in both groups. Moreover, the integration of this network of regions differed between groups, with the

  19. Embedded wavelet video coding with error concealment

    Science.gov (United States)

    Chang, Pao-Chi; Chen, Hsiao-Ching; Lu, Ta-Te

    2000-04-01

    We present an error-concealed embedded wavelet (ECEW) video coding system for transmission over the Internet or wireless networks. This system consists of two types of frames: intra (I) frames and inter, or predicted (P), frames. Inter frames are constructed from the residual frames formed by variable block-size multiresolution motion estimation (MRME). Motion vectors are compressed by arithmetic coding. The image data of intra frames and residual frames are coded by error-resilient embedded zerotree wavelet (ER-EZW) coding. The ER-EZW coding partitions the wavelet coefficients into several groups, and each group is coded independently. Therefore, the error propagation effect resulting from an error is confined to a single group. In EZW coding, any single error may result in a totally undecodable bitstream. To further reduce the error damage, we use error concealment at the decoding end. In intra frames, erroneous wavelet coefficients are replaced by their neighbors. In inter frames, erroneous blocks of wavelet coefficients are replaced by data from the previous frame. Simulations show that the performance of ECEW is superior to ECEW without error concealment by 7 to approximately 8 dB at an error rate of 10^-3 in intra frames. The improvement is still 2 to approximately 3 dB at a higher error rate of 10^-2 in inter frames.
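
    The concealment step for inter frames can be pictured as replacing each block flagged as erroneous with the co-located block of the previous frame. The sketch below is a toy version of that idea; block size, frame shape, and the error flags are invented for illustration.

        # Toy concealment step: blocks flagged as erroneous are replaced by the
        # co-located data from the previous frame.  Sizes are illustrative.
        import numpy as np

        def conceal(frame, prev_frame, bad_blocks, block=8):
            out = frame.copy()
            for r, c in bad_blocks:               # (row, col) block coordinates
                out[r*block:(r+1)*block, c*block:(c+1)*block] = \
                    prev_frame[r*block:(r+1)*block, c*block:(c+1)*block]
            return out

        rng = np.random.default_rng(6)
        prev = rng.integers(0, 256, (32, 32)).astype(np.uint8)
        curr = prev.copy()
        curr[8:16, 8:16] = 0                      # simulate a corrupted block at (1, 1)
        print(np.array_equal(conceal(curr, prev, [(1, 1)]), prev))   # True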

  20. Regression calibration with heteroscedastic error variance.

    Science.gov (United States)

    Spiegelman, Donna; Logan, Roger; Grove, Douglas

    2011-01-01

    The problem of covariate measurement error with heteroscedastic measurement error variance is considered. Standard regression calibration assumes that the measurement error has a homoscedastic measurement error variance. An estimator is proposed to correct regression coefficients for covariate measurement error with heteroscedastic variance. Point and interval estimates are derived. Validation data containing the gold standard must be available. This estimator is a closed-form correction of the uncorrected primary regression coefficients, which may be of logistic or Cox proportional hazards model form, and is closely related to the version of regression calibration developed by Rosner et al. (1990). The primary regression model can include multiple covariates measured without error. The use of these estimators is illustrated in two data sets, one taken from occupational epidemiology (the ACE study) and one taken from nutritional epidemiology (the Nurses' Health Study). In both cases, although there was evidence of moderate heteroscedasticity, there was little difference in estimation or inference using this new procedure compared to standard regression calibration. It is shown theoretically that unless the relative risk is large or measurement error severe, standard regression calibration approximations will typically be adequate, even with moderate heteroscedasticity in the measurement error model variance. In a detailed simulation study, standard regression calibration performed either as well as or better than the new estimator. When the disease is rare and the errors normally distributed, or when measurement error is moderate, standard regression calibration remains the method of choice.
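
    The homoscedastic regression calibration that this paper generalizes can be sketched as follows: regress the gold-standard covariate on its error-prone surrogate in the validation data, then use the predicted covariate in the outcome model. The simulated data and the simple linear outcome model below are assumptions for illustration, not the estimator derived in the paper.

        # Standard (homoscedastic) regression calibration sketch with simulated
        # data and a linear outcome model; illustrative assumptions only.
        import numpy as np

        rng = np.random.default_rng(5)
        n, n_val, beta = 4000, 500, 1.5
        x = rng.normal(size=n)                        # true covariate
        w = x + rng.normal(scale=0.7, size=n)         # error-prone surrogate
        y = beta * x + rng.normal(size=n)

        # Validation subset where the gold standard is available.
        x_val, w_val = x[:n_val], w[:n_val]
        slope = np.cov(w_val, x_val)[0, 1] / np.var(w_val, ddof=1)
        intercept = x_val.mean() - slope * w_val.mean()

        x_hat = intercept + slope * w                 # calibrated covariate E[x | w]
        beta_naive = np.cov(w, y)[0, 1] / np.var(w, ddof=1)
        beta_cal = np.cov(x_hat, y)[0, 1] / np.var(x_hat, ddof=1)
        print(beta_naive, beta_cal)                   # attenuated vs. approximately 1.5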