WorldWideScience

Sample records for monochrome error diffusion

  1. Mirror monochromator

    Energy Technology Data Exchange (ETDEWEB)

    Mankos, Marian [Electron Optica, Inc., Palo Alto, CA (United States); Shadman, Khashayar [Electron Optica, Inc., Palo Alto, CA (United States)

    2014-12-02

    In this SBIR project, Electron Optica, Inc. (EOI) is developing a mirror electron monochromator (MirrorChrom) attachment to new and retrofitted electron microscopes (EMs) for improving the energy resolution of the EM from the characteristic range of 0.2-0.5 eV to the range of 10-50 meV. This improvement will enhance the characterization of materials by imaging and spectroscopy. In particular, the monochromator will refine the energy spectra characterizing materials, as obtained from transmission EMs (TEMs) fitted with electron spectrometers, and it will increase the spatial resolution of the images of materials taken with scanning EMs (SEMs) operated at low voltages. EOI’s MirrorChrom technology utilizes a magnetic prism to simultaneously deflect the electron beam off the axis of the microscope column by 90° and disperse the electrons in proportion to their energies into a module with an electron mirror and a knife-edge. The knife-edge cuts off the tails of the energy distribution to reduce the energy spread of the electrons that are reflected, and subsequently deflected, back into the microscope column. The knife-edge is less prone to contamination, and thereby charging, than the conventional slits used in existing monochromators, which improves the reliability and stability of the module. The overall design of the MirrorChrom exploits the symmetry inherent in reversing the electron trajectory in order to maintain the beam brightness – a parameter that impacts how well the electron beam can be focused downstream onto a sample. During phase I, EOI drafted a set of candidate monochromator architectures and evaluated the trade-offs between energy resolution and beam current to achieve the optimum design for three particular applications with market potential: increasing the spatial resolution of low voltage SEMs, increasing the energy resolution of low voltage TEMs (beam energy of 5-20 keV), and increasing the energy resolution of conventional TEMs (beam

  2. Color extended visual cryptography using error diffusion.

    Science.gov (United States)

    Kang, InKoo; Arce, Gonzalo R; Lee, Heung-Kyu

    2011-01-01

    Color visual cryptography (VC) encrypts a color secret message into n color halftone image shares. Previous methods in the literature show good results for black-and-white or gray-scale VC schemes; however, they cannot be applied directly to color shares due to the different color structures. Some methods for color visual cryptography are not satisfactory in terms of producing either meaningless shares or meaningful shares with low visual quality, leading to suspicion of encryption. This paper introduces the concept of visual information pixel (VIP) synchronization and error diffusion to attain a color visual cryptography encryption method that produces meaningful color shares with high visual quality. VIP synchronization retains the positions of pixels carrying visual information of original images throughout the color channels, and error diffusion generates shares pleasant to human eyes. Comparisons with previous approaches show the superior performance of the new method.
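
    As context for the error-diffusion step (this is the generic Floyd-Steinberg algorithm, not the paper's VC construction), a minimal grayscale halftoning sketch:

```python
import numpy as np

def floyd_steinberg(img):
    """Binarize a grayscale image (values in [0, 1]) by Floyd-Steinberg
    error diffusion: each pixel's quantization error is pushed onto the
    unprocessed neighbours with weights 7/16, 3/16, 5/16 and 1/16."""
    f = img.astype(float).copy()
    h, w = f.shape
    out = np.zeros_like(f)
    for y in range(h):
        for x in range(w):
            new = 1.0 if f[y, x] >= 0.5 else 0.0
            err = f[y, x] - new
            out[y, x] = new
            if x + 1 < w:
                f[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    f[y + 1, x - 1] += err * 3 / 16
                f[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    f[y + 1, x + 1] += err * 1 / 16
    return out

# A constant mid-gray patch halftones to roughly 50% ink coverage:
# the error feedback preserves the average tone.
halftone = floyd_steinberg(np.full((32, 32), 0.5))
print(abs(float(halftone.mean()) - 0.5))
```

    The same feedback principle, extended to color channels with VIP constraints, is what lets the shares stay pleasant to the eye.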

  3. HIRDLS monochromator calibration equipment

    Science.gov (United States)

    Hepplewhite, Christopher L.; Barnett, John J.; Djotni, Karim; Whitney, John G.; Bracken, Justain N.; Wolfenden, Roger; Row, Frederick; Palmer, Christopher W. P.; Watkins, Robert E. J.; Knight, Rodney J.; Gray, Peter F.; Hammond, Geoffory

    2003-11-01

    A specially designed and built monochromator was developed for the spectral calibration of the HIRDLS instrument. The High Resolution Dynamics Limb Sounder (HIRDLS) is a precision infra-red remote sensing instrument with very tight requirements on the knowledge of the response to received radiation. A high performance, vacuum compatible monochromator, was developed with a wavelength range from 4 to 20 microns to encompass that of the HIRDLS instrument. The monochromator is integrated into a collimating system which is shared with a set of tiny broad band sources used for independent spatial response measurements (reported elsewhere). This paper describes the design and implementation of the monochromator and the performance obtained during the period of calibration of the HIRDLS instrument at Oxford University in 2002.

  4. Improved spectral vector error diffusion by dot gain compensation

    Science.gov (United States)

    Nyström, Daniel; Norberg, Ole

    2013-02-01

    Spectral Vector Error Diffusion, sVED, is an interesting approach to achieve spectral color reproduction, i.e. reproducing the spectral reflectance of an original, creating a reproduction that will match under any illumination. For each pixel in the spectral image, the colorant combination producing the spectrum closest to the target spectrum is selected, and the spectral error is diffused to surrounding pixels using an error distribution filter. However, since the colorant separation and halftoning are performed in a single step in sVED, compensation for dot gain cannot be made for each color channel independently, as in a conventional workflow where the colorant separation and halftoning are performed sequentially. In this study, we modify the sVED routine to compensate for the dot gain, applying the Yule-Nielsen n-factor to modify the target spectra, i.e. performing the computations in (1/n)-space. A global n-factor, optimal for each print resolution, reduces the spectral reproduction errors by approximately a factor of 4, while an n-factor that is individually optimized for each target spectrum reduces the spectral reproduction error to 7% of that for the unmodified prints. However, the improvements when using global n-values are still not sufficient for the method to be of any real use in practice, and individually optimizing the n-values for each target is not feasible in a real workflow. The results illustrate the necessity of properly accounting for the dot gain in the printing process, and show that further development is needed in order to make Spectral Vector Error Diffusion a realistic alternative for spectral color reproduction.
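
    A sketch of the Yule-Nielsen (1/n)-space idea mentioned above, on hypothetical single-ink data (the reflectance values and n are illustrative, not from the paper):

```python
import numpy as np

# Murray-Davies vs Yule-Nielsen tone prediction for a single-ink halftone.
# R_paper, R_ink: spectral reflectances in four illustrative bands;
# 'a' is the fractional dot coverage. n = 1 recovers plain Murray-Davies;
# n > 1 models optical dot gain (light scattering within the paper).
R_paper = np.array([0.90, 0.88, 0.85, 0.80])
R_ink   = np.array([0.10, 0.15, 0.40, 0.60])

def yule_nielsen(a, n):
    # Mix in (1/n)-space, then map back: R = (a*Ri^(1/n) + (1-a)*Rp^(1/n))^n
    return (a * R_ink ** (1 / n) + (1 - a) * R_paper ** (1 / n)) ** n

mid = yule_nielsen(0.5, 2.0)     # 50% coverage with optical dot gain
linear = yule_nielsen(0.5, 1.0)  # plain area-weighted average
print(np.all(mid < linear))      # → True: dot gain darkens the halftone
```

    Computing the spectral error against targets transformed the same way is what the modified sVED routine does per pixel.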

  5. Evaluation of digital halftones image by vector error diffusion

    Science.gov (United States)

    Kouzaki, Masahiro; Itoh, Tetsuya; Kawaguchi, Takayuki; Tsumura, Norimichi; Haneishi, Hideaki; Miyake, Yoichi

    1998-12-01

    The vector error diffusion (VED) method is applied to produce digital halftone images on a 600 dpi electrophotographic printer, and the objective image quality of the obtained images is evaluated and analyzed. In the color reproduction of halftone images by the VED method, there are large color differences between the target color and the printed color, typically in the mid-tone colors; we attribute this to printer properties, including dot gain. The color noise of the VED method is also larger than that of the conventional scalar error diffusion method in some patches, and notably, non-uniform patterns are generated by the VED method.
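
    A minimal sketch of vector error diffusion under stated assumptions (nearest-primary quantization in RGB with Floyd-Steinberg weights; the paper's printer model and dot gain are not included):

```python
import numpy as np

# Vector error diffusion: each pixel's RGB vector is quantized to the
# nearest of the 8 binary printer primaries and the full *vector* error
# is diffused; scalar error diffusion would instead treat each channel
# independently.
PRIMARIES = np.array([[r, g, b] for r in (0, 1)
                      for g in (0, 1) for b in (0, 1)], float)

def vector_error_diffusion(img):
    f = img.astype(float).copy()
    h, w, _ = f.shape
    out = np.zeros_like(f)
    for y in range(h):
        for x in range(w):
            d = np.linalg.norm(PRIMARIES - f[y, x], axis=1)
            out[y, x] = PRIMARIES[d.argmin()]      # nearest-primary quantizer
            err = f[y, x] - out[y, x]
            for dy, dx, wgt in ((0, 1, 7/16), (1, -1, 3/16),
                                (1, 0, 5/16), (1, 1, 1/16)):
                if 0 <= y + dy < h and 0 <= x + dx < w:
                    f[y + dy, x + dx] += wgt * err
    return out

# A constant mid-tone color patch: the average color is preserved.
target = np.array([0.5, 0.25, 0.75])
half = vector_error_diffusion(np.full((24, 24, 3), target))
print(np.abs(half.mean(axis=(0, 1)) - target).max())
```

    The non-uniform patterns the abstract reports arise because the vector quantizer can lock into repeating sequences of primaries for certain input colors.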

  6. Edge-Directed Error Diffused Digital Halftoning: A Steerable Filter Approach

    Directory of Open Access Journals (Sweden)

    Pardeep Garg

    2009-09-01

    Full Text Available In this paper the edge-directed error diffused digital halftoning in noisy media is analyzed. It is known that errors occur in transmitting data through a communication channel due to the addition of noise, generally additive white Gaussian noise (AWGN). The proposed work employs a steerable stochastic error diffusion (SSED) approach, a hybrid scheme that utilizes the advantages of the steerable filter for edge detection and the five-neighbor stochastic error diffusion (FNSED) approach for error diffusion. An analysis of different methods of edge detection and error diffusion in the presence of zero-mean AWGN with different values of variance has also been made. The results show that the proposed scheme produces halftones of better quality in the presence of even large noise variance compared to other approaches of edge detection and error diffusion.

  7. Computable error estimates for Monte Carlo finite element approximation of elliptic PDE with lognormal diffusion coefficients

    KAUST Repository

    Hall, Eric

    2016-01-09

    The Monte Carlo (and Multi-level Monte Carlo) finite element method can be used to approximate observables of solutions to diffusion equations with lognormally distributed diffusion coefficients, e.g. modeling ground water flow. Typical models use lognormal diffusion coefficients with Hölder regularity of order up to 1/2 a.s. This low regularity implies that the high frequency finite element approximation error (i.e. the error from frequencies larger than the mesh frequency) is not negligible and can be larger than the computable low frequency error. We address how the total error can be estimated by the computable error.

  8. Computational error estimates for Monte Carlo finite element approximation with log normal diffusion coefficients

    KAUST Repository

    Sandberg, Mattias

    2015-01-07

    The Monte Carlo (and Multi-level Monte Carlo) finite element method can be used to approximate observables of solutions to diffusion equations with log-normally distributed diffusion coefficients, e.g. modelling ground water flow. Typical models use log normal diffusion coefficients with Hölder regularity of order up to 1/2 a.s. This low regularity implies that the high frequency finite element approximation error (i.e. the error from frequencies larger than the mesh frequency) is not negligible and can be larger than the computable low frequency error. This talk will address how the total error can be estimated by the computable error.

  9. Statistical error in simulations of Poisson processes: Example of diffusion in solids

    Science.gov (United States)

    Nilsson, Johan O.; Leetmaa, Mikael; Vekilova, Olga Yu.; Simak, Sergei I.; Skorodumova, Natalia V.

    2016-08-01

    Simulations of diffusion in solids often produce poor statistics of diffusion events. We present an analytical expression for the statistical error in ion conductivity obtained in such simulations. The error expression is not restricted to any particular computational method, but is valid in the context of simulation of Poisson processes in general. This analytical error expression is verified numerically for the case of Gd-doped ceria by running a large number of kinetic Monte Carlo calculations.
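
    The 1/sqrt(N) scaling behind such error expressions can be checked with a toy Poisson-count experiment (illustrative only; the paper derives a more general expression):

```python
import numpy as np

# Relative statistical error of a rate estimated from a Poisson count:
# observing on average k*T events, the estimator's relative standard
# deviation is ~ 1/sqrt(k*T). Checked empirically for a diffusion-like
# hop process with rate k observed over time T.
rng = np.random.default_rng(0)

k, T = 2.0, 500.0                            # hop rate, observation time
trials = 2000
counts = rng.poisson(k * T, size=trials)     # events per simulation run
rates = counts / T                           # estimated rate per run

rel_err_measured = rates.std() / rates.mean()
rel_err_predicted = 1.0 / np.sqrt(k * T)
print(rel_err_measured, rel_err_predicted)
```

    This is why runs with few observed hops carry large uncertainty in the extracted conductivity.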

  10. Analysis of phase error effects in multishot diffusion-prepared turbo spin echo imaging.

    Science.gov (United States)

    Van, Anh T; Cervantes, Barbara; Kooijman, Hendrik; Karampinos, Dimitrios C

    2017-04-01

    To characterize the effect of phase errors on the magnitude and the phase of the diffusion-weighted (DW) signal acquired with diffusion-prepared turbo spin echo (dprep-TSE) sequences. Motion and eddy currents were identified as the main sources of phase errors. An analytical expression for the effect of phase errors on the acquired signal was derived and verified using Bloch simulations, phantom, and in vivo experiments. Simulations and experiments showed that phase errors during the diffusion preparation cause both magnitude and phase modulation on the acquired data. When motion-induced phase error (MiPe) is accounted for (e.g., with motion-compensated diffusion encoding), the signal magnitude modulation due to the leftover eddy-current-induced phase error cannot be eliminated by the conventional phase cycling and sum-of-squares (SOS) method. By employing magnitude stabilizers, the phase-error-induced magnitude modulation, regardless of its cause, was removed but the phase modulation remained. The in vivo comparison between pulsed gradient and flow-compensated diffusion preparations showed that MiPe needed to be addressed in multi-shot dprep-TSE acquisitions employing magnitude stabilizers. A comprehensive analysis of phase errors in dprep-TSE sequences showed that magnitude stabilizers are mandatory for removing the phase-error-induced magnitude modulation. Additionally, when multi-shot dprep-TSE is employed, the inconsistent signal phase modulation across shots has to be resolved before shot-combination is performed.

  11. Classification of Error-Diffused Halftone Images Based on Spectral Regression Kernel Discriminant Analysis

    Directory of Open Access Journals (Sweden)

    Zhigao Zeng

    2016-01-01

    Full Text Available This paper proposes a novel algorithm to solve the challenging problem of classifying error-diffused halftone images. We first design the class feature matrices, after extracting the image patches according to their statistical characteristics, to classify the error-diffused halftone images. Then, spectral regression kernel discriminant analysis is used for feature dimension reduction. The error-diffused halftone images are finally classified using an idea similar to the nearest centroids classifier. As demonstrated by the experimental results, our method is fast and can achieve a high classification accuracy rate with an added benefit of robustness in tackling noise.
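
    The final classification stage can be sketched as a plain nearest-centroid rule on already-projected features (the patch extraction and the spectral regression kernel discriminant projection are omitted here; the data below are synthetic):

```python
import numpy as np

# Nearest-centroid classification: fit one mean vector per class,
# assign each sample to the class with the closest centroid.
def fit_centroids(X, y):
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict(centroids, X):
    labels = list(centroids)
    dists = np.stack([np.linalg.norm(X - centroids[c], axis=1)
                      for c in labels])
    return np.array(labels)[dists.argmin(axis=0)]

rng = np.random.default_rng(1)
X0 = rng.normal(0.0, 0.3, size=(50, 2))   # class 0 feature vectors
X1 = rng.normal(2.0, 0.3, size=(50, 2))   # class 1 feature vectors
X = np.vstack([X0, X1])
y = np.repeat([0, 1], 50)

cents = fit_centroids(X, y)
acc = (predict(cents, X) == y).mean()
print(acc)
```

    In the paper's pipeline the discriminant projection is what makes the classes compact enough for this simple rule to work well.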

  12. Monochromator development at 4W1B beamline of BSRF

    Science.gov (United States)

    Xie, Yaning; Yan, Y.; Hu, T. D.; Liu, T.; Xian, D. C.

    2001-07-01

    The 4W1B is an X-ray monochromator beamline for XAFS at BSRF. During the upgrading phase, we have redesigned the monochromator to improve the performance of the beamline. It is a goniometer-based, fixed-exit double-crystal monochromator. A mechanical linkage is employed to adjust the distance between the surfaces of the two crystals as the Bragg angle is changed to keep the outgoing beam direction constant. The whole mechanism is driven by only one stepping motor. The testing result shows that over the scanning range of 5-30°, the shift of outgoing beam position is less than 70 μm in the vertical direction. The basic principle, the mechanical realization, and the error analysis are discussed in detail. The performance and the testing results are also presented in this paper.
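
    The fixed-exit geometry can be illustrated with Bragg's law for a Si(111) crystal pair: for a constant vertical beam offset h, the perpendicular gap between the crystal faces must follow h / (2 cos θ) as the Bragg angle θ is scanned. The offset value below is illustrative, not from the paper:

```python
import math

# Bragg's law for Si(111): E [keV] = hc / (2 d sin(theta)).
D_SI111 = 3.1356   # Si(111) d-spacing, angstroms
HC = 12.39842      # keV * angstrom

def energy_kev(theta_deg):
    return HC / (2 * D_SI111 * math.sin(math.radians(theta_deg)))

def gap_mm(theta_deg, offset_mm=25.0):
    # perpendicular crystal gap keeping a constant exit-beam offset
    return offset_mm / (2 * math.cos(math.radians(theta_deg)))

for theta in (5, 15, 30):   # the 5-30 degree scan range quoted above
    print(f"theta={theta:2d} deg  E={energy_kev(theta):6.2f} keV"
          f"  gap={gap_mm(theta):5.2f} mm")
```

    The mechanical linkage in the abstract is what realizes this gap law with a single stepping motor; residual errors in the linkage are what produce the reported sub-70 μm exit-beam shift.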

  13. Monochromator development at 4W1B beamline of BSRF

    Institute of Scientific and Technical Information of China (English)

    YaningXie; Y.Yan; T.D.Hu; T.Liu; D.C.Xian

    2001-01-01

    The 4W1B is an X-ray monochromator beamline for XAFS at BSRF. During the upgrading phase, we have redesigned the monochromator to improve the performance of the beamline. It is a goniometer-based, fixed-exit double-crystal monochromator. A mechanical linkage is employed to adjust the distance between the surfaces of the two crystals as the Bragg angle is changed to keep the outgoing beam direction constant. The whole mechanism is driven by only one stepping motor. The testing result shows that over the scanning range of 5-30°, the shift of outgoing beam position is less than 70 μm in the vertical direction. The basic principle, the mechanical realization, and the error analysis are discussed in detail. The performance and the testing results are also presented in this paper. © 2001 Elsevier Science B.V. All rights reserved.

  14. Practical aspects of monochromators developed for transmission electron microscopy

    Science.gov (United States)

    Kimoto, Koji

    2014-01-01

    A few practical aspects of monochromators recently developed for transmission electron microscopy are briefly reviewed. The basic structures and properties of four monochromators, a single Wien filter monochromator, a double Wien filter monochromator, an omega-shaped electrostatic monochromator and an alpha-shaped magnetic monochromator, are outlined. The advantages and side effects of these monochromators in spectroscopy and imaging are pointed out. A few properties of the monochromators in imaging, such as spatial or angular chromaticity, are also discussed. PMID:25125333

  15. Optimization of a Segmented Filter with a New Error Diffusion Approach

    Institute of Scientific and Technical Information of China (English)

    Ayman Al Falou; Marwa ELBouz

    2003-01-01

    Segmented filters, based on spectral cutting, have proved their efficiency for multi-correlation. In this article we propose an optimisation of this cutting according to a new error diffusion method.

  16. APS high heat load monochromator

    Energy Technology Data Exchange (ETDEWEB)

    Lee, W.K.; Mills, D.

    1993-02-01

    This document contains the design specifications of the APS high heat load (HHL) monochromator and associated accessories as of February 1993. It should be noted that work is continuing on many parts of the monochromator including the mechanical design, crystal cooling designs, etc. Where appropriate, we have tried to add supporting documentation, references to published papers, and calculations on which we based our decisions. The underlying philosophy behind the performance specifications of this monochromator was to fabricate a device that would be useful to as many APS users as possible, that is, the design should be as generic as possible. In other words, we believe that this design will be capable of operating on both bending magnet and ID beamlines (with the appropriate changes to the cooling and crystals) with both flat and inclined crystal geometries and with a variety of coolants. It was strongly felt that this monochromator should have good energy scanning capabilities over the classical energy range of about 4 to 20 keV with Si (111) crystals. For this reason, a design incorporating one rotation stage to drive both the first and second crystals was considered most promising. Separate rotary stages for the first and second crystals can sometimes provide more flexibility in their capacities to carry heavy loads (for heavily cooled first crystals or sagittal benders of second crystals), but their tuning capabilities were considered inferior to the single axis approach.

  17. A posteriori error estimates of constrained optimal control problem governed by convection diffusion equations

    Institute of Scientific and Technical Information of China (English)

    Ningning YAN; Zhaojie ZHOU

    2008-01-01

    In this paper, we study a posteriori error estimates of the edge stabilization Galerkin method for the constrained optimal control problem governed by convection-dominated diffusion equations. The residual-type a posteriori error estimators yield both upper and lower bounds for control u measured in L2-norm and for state y and costate p measured in energy norm. Two numerical examples are presented to illustrate the effectiveness of the error estimators provided in this paper.

  18. High-heat-load monochromator options for the RIXS beamline at the APS with the MBA lattice

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Zunping, E-mail: zpliu@anl.gov; Gog, Thomas, E-mail: gog@aps.anl.gov; Stoupin, Stanislav A.; Upton, Mary H.; Ding, Yang; Kim, Jung-Ho; Casa, Diego M.; Said, Ayman H.; Carter, Jason A.; Navrotski, Gary [Advanced Photon Source, Argonne National Laboratory, 9700 S. Cass Ave, Lemont, IL 60439 (United States)

    2016-07-27

    With the MBA lattice for the APS-Upgrade, tuning curves of 2.6 cm-period undulators meet the source requirements for the RIXS beamline. The high-heat-load monochromator (HHLM) is the first optical white-beam component. There are four options for the HHLM: diamond monochromators cooled with either water or liquid nitrogen (LN{sub 2}), and silicon monochromators with either a direct or indirect cooling system. Their performances are evaluated at 11.215 keV (Ir L-III edge). The cryo-cooled diamond monochromator performs similarly to the water-cooled diamond monochromator because the GaIn of the Cu-GaIn-diamond interface becomes solid. The cryo-cooled silicon monochromators perform better, not only in terms of surface slope error due to thermal deformation, but also in terms of thermal capacity.

  19. Color Extended Visual Cryptography Using Error Diffusion for High Visual Quality Shares

    Directory of Open Access Journals (Sweden)

    Lavanya Bandamneni

    2012-06-01

    Full Text Available Existing color visual cryptography schemes are not sufficient for providing meaningful shares with high visual quality. This paper introduces a color visual cryptography encryption method that produces meaningful color shares with high visual quality via visual information pixel (VIP) synchronization and error diffusion. VIPs synchronize the positions of pixels that carry visual information of the original images across the color channels, so as to keep the original pixel values the same before and after encryption. Error diffusion is used to generate shares pleasant to human eyes. This method provides better results compared to the previous techniques.

  20. Local error estimates for adaptive simulation of the reaction-diffusion master equation via operator splitting

    Science.gov (United States)

    Hellander, Andreas; Lawson, Michael J.; Drawert, Brian; Petzold, Linda

    2014-06-01

    The efficiency of exact simulation methods for the reaction-diffusion master equation (RDME) is severely limited by the large number of diffusion events if the mesh is fine or if diffusion constants are large. Furthermore, inherent properties of exact kinetic-Monte Carlo simulation methods limit the efficiency of parallel implementations. Several approximate and hybrid methods have appeared that enable more efficient simulation of the RDME. A common feature to most of them is that they rely on splitting the system into its reaction and diffusion parts and updating them sequentially over a discrete timestep. This use of operator splitting enables more efficient simulation but it comes at the price of a temporal discretization error that depends on the size of the timestep. So far, existing methods have not attempted to estimate or control this error in a systematic manner. This makes the solvers hard to use for practitioners since they must guess an appropriate timestep. It also makes the solvers potentially less efficient than if the timesteps were adapted to control the error. Here, we derive estimates of the local error and propose a strategy to adaptively select the timestep when the RDME is simulated via a first order operator splitting. While the strategy is general and applicable to a wide range of approximate and hybrid methods, we exemplify it here by extending a previously published approximate method, the diffusive finite-state projection (DFSP) method, to incorporate temporal adaptivity.
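
    A deterministic toy version of the step-doubling idea (not the paper's DFSP method): first-order Lie splitting of a reaction-diffusion model, with the local error estimated by comparing one full step against two half steps and the timestep adapted to keep that estimate below a tolerance:

```python
import numpy as np

# Lie splitting for u_t = D u_xx + r u (1 - u) on a periodic 1-D grid.
D, r = 0.1, 1.0
nx = 50
dx = 1.0 / nx

def diffuse(u, dt):
    # explicit Euler diffusion substep, periodic boundaries
    lap = (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx**2
    return u + dt * D * lap

def react(u, dt):
    # logistic reaction integrated exactly over dt
    return u / (u + (1 - u) * np.exp(-r * dt))

def lie_step(u, dt):
    return react(diffuse(u, dt), dt)

def adaptive_step(u, dt, tol=1e-4):
    """One adaptive step; returns (u_new, dt_used, dt_next, accepted)."""
    big = lie_step(u, dt)
    small = lie_step(lie_step(u, dt / 2), dt / 2)
    err = np.max(np.abs(big - small))          # local error estimate
    if err > tol:
        return u, 0.0, dt / 2, False           # reject, retry with dt/2
    dt_next = min(1.5 * dt, 0.4 * dx**2 / D)   # grow, keep Euler stable
    return small, dt, dt_next, True

u = 0.5 + 0.1 * np.sin(2 * np.pi * np.arange(nx) * dx)
t, dt = 0.0, 1e-4
while t < 0.05:
    u, dt_used, dt, ok = adaptive_step(u, dt)
    t += dt_used
print(round(float(u.mean()), 3), round(float(u.std()), 3))
```

    The RDME solvers discussed above face the same trade-off with stochastic substeps: too large a timestep inflates the splitting error, too small a timestep wastes work, and a local error estimate lets the step size be chosen automatically.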

  1. Local error estimates for adaptive simulation of the Reaction–Diffusion Master Equation via operator splitting

    Science.gov (United States)

    Hellander, Andreas; Lawson, Michael J; Drawert, Brian; Petzold, Linda

    2015-01-01

    The efficiency of exact simulation methods for the reaction-diffusion master equation (RDME) is severely limited by the large number of diffusion events if the mesh is fine or if diffusion constants are large. Furthermore, inherent properties of exact kinetic-Monte Carlo simulation methods limit the efficiency of parallel implementations. Several approximate and hybrid methods have appeared that enable more efficient simulation of the RDME. A common feature to most of them is that they rely on splitting the system into its reaction and diffusion parts and updating them sequentially over a discrete timestep. This use of operator splitting enables more efficient simulation but it comes at the price of a temporal discretization error that depends on the size of the timestep. So far, existing methods have not attempted to estimate or control this error in a systematic manner. This makes the solvers hard to use for practitioners since they must guess an appropriate timestep. It also makes the solvers potentially less efficient than if the timesteps are adapted to control the error. Here, we derive estimates of the local error and propose a strategy to adaptively select the timestep when the RDME is simulated via a first order operator splitting. While the strategy is general and applicable to a wide range of approximate and hybrid methods, we exemplify it here by extending a previously published approximate method, the Diffusive Finite-State Projection (DFSP) method, to incorporate temporal adaptivity. PMID:26865735

  2. Wide-aperture laser beam measurement using transmission diffuser: errors modeling

    Science.gov (United States)

    Matsak, Ivan S.

    2015-06-01

    Instrumental errors in measuring wide-aperture laser beam diameter were modeled in order to build a measurement setup and justify its metrological characteristics. The modeled setup is based on a CCD camera and a transmission diffuser. This method is appropriate for precision measurement of large laser beam widths from 10 mm up to 1000 mm; such beams cannot be measured with methods based on a slit, pinhole, knife edge, or direct CCD camera measurement. The method is suitable for continuous and pulsed laser irradiation. However, the transmission diffuser method lacks the metrological justification required in the field of wide-aperture beam-forming system verification. Given that no standard wide-aperture flat-top beam is available, modelling is the preferred way to provide basic reference points for developing the measurement system. Modelling was conducted in MathCAD. A super-Lorentz distribution with shape parameter 6-12 was used as the beam model. Theoretical evaluation showed that the key parameters influencing the error are: relative beam size, spatial non-uniformity of the diffuser, lens distortion, physical vignetting, CCD spatial resolution and effective camera ADC resolution. Errors were modeled for the 90%-of-power beam diameter criterion. The 12th-order super-Lorentz distribution was the primary model, because it precisely matches the experimental distribution at the output of the test beam-forming system, although other orders were also used. Analytic expressions were obtained by analyzing the modelling results for each influencing parameter. It was shown that an error of less than 1% is attainable through a suitable choice of expression parameters, based on commercially available components for the setup. The method can provide down to 0.1% error when calibration procedures and multiple measurements are used.
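
    The 90%-of-power diameter criterion for a super-Lorentz profile can be evaluated numerically (w, rmax and the grid are illustrative choices, not from the paper):

```python
import numpy as np

# Super-Lorentz beam profile I(r) = 1 / (1 + (r/w)^p). The 90%-power
# diameter is found by integrating the encircled power
# P(R) = ∫_0^R I(r) 2πr dr and locating the radius containing 90%.
def d90(p, w=1.0, rmax=5.0, n=20000):
    r = np.linspace(0.0, rmax, n)
    dr = r[1] - r[0]
    intensity = 1.0 / (1.0 + (r / w) ** p)
    power = np.cumsum(intensity * 2 * np.pi * r) * dr
    power /= power[-1]                       # normalize to total power
    return 2.0 * r[np.searchsorted(power, 0.9)]

# As the shape parameter p grows, the profile approaches an ideal
# hard-edged disk of radius w, whose 90%-power diameter is 2*sqrt(0.9)*w;
# lower orders have heavier wings and a larger D90.
print(round(d90(12), 3), round(d90(200), 3), round(2 * np.sqrt(0.9), 3))
```

    The sensitivity of D90 to the wings is one reason the 12th-order model, which matches the measured edge steepness, was chosen as the primary model.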

  3. Consistent robust a posteriori error majorants for approximate solutions of diffusion-reaction equations

    Science.gov (United States)

    Korneev, V. G.

    2016-11-01

    Efficiency of the error control of numerical solutions of partial differential equations depends entirely on two factors: the accuracy of an a posteriori error majorant and the computational cost of its evaluation for some test function/vector-function, plus the cost of the latter. In this paper, consistency of an a posteriori bound means that it is of the same order as the respective unimprovable a priori bound; it is therefore the basic characteristic related to the first factor. The paper is dedicated to elliptic diffusion-reaction equations. We present a guaranteed robust a posteriori error majorant effective at any nonnegative constant reaction coefficient (r.c.). For a wide range of finite element solutions on quasiuniform meshes the majorant is consistent. For large values of the r.c. the majorant coincides with the majorant of Aubin (1972), which, as is known, is inconsistent for relatively small r.c. (< ch⁻²) and loses its sense as the r.c. approaches zero. Our majorant also improves on some other majorants derived for the Poisson and reaction-diffusion equations.

  4. Error analysis of semidiscrete finite element methods for inhomogeneous time-fractional diffusion

    KAUST Repository

    Jin, B.

    2014-05-30

    © 2014 Published by Oxford University Press on behalf of the Institute of Mathematics and its Applications. All rights reserved. We consider the initial-boundary value problem for an inhomogeneous time-fractional diffusion equation with a homogeneous Dirichlet boundary condition, a vanishing initial data and a nonsmooth right-hand side in a bounded convex polyhedral domain. We analyse two semidiscrete schemes based on the standard Galerkin and lumped mass finite element methods. Almost optimal error estimates are obtained for right-hand side data f(x, t) ∈ L∞(0, T; Hq(Ω)), −1 < q ≤ 1, for both semidiscrete schemes. For the lumped mass method, the optimal L2(Ω)-norm error estimate requires symmetric meshes. Finally, two-dimensional numerical experiments are presented to verify our theoretical results.

  5. Numerical analysis of magnetic field diffusion in ferromagnetic laminations by minimization of constitutive error

    Energy Technology Data Exchange (ETDEWEB)

    Fresa, R. [Consorzio CREATE, DIIIE, University of Salerno, I-84084 Fisciano (Italy); Serpico, C. [Department of Electrical and Computer Engineering, University of Maryland, College Park, Maryland 20742 (United States); Department of Electrical Engineering, University of Naples ''Federico II'', I-80152 Napoli (Italy); Visone, C. [Department of Electrical Engineering, University of Naples ''Federico II'', I-80152 Napoli (Italy)

    2000-05-01

    In this article, the diffusion of electromagnetic fields into a ferromagnetic lamination is numerically studied by means of an error-based numerical method. This technique has been developed so far only for the case of nonhysteretic constitutive relations. The generalization to the hysteretic case requires a modification of the technique in order to take into account the evolution of the ''magnetization state'' of the media. Numerical computations obtained by using this approach are reported and discussed. (c) 2000 American Institute of Physics.

  6. Double crystal monochromator controlled by integrated computing on BL07A in New SUBARU, Japan

    Energy Technology Data Exchange (ETDEWEB)

    Okui, Masato, E-mail: okui@kohzu.co.jp [Kohzu Precision Co., Ltd., 2-6-15, Kurigi, Asao-ku, Kawasaki-shi, Kanagawa 215-8521 (Japan); Laboratory of Advanced Science and Technology for Industry, University of Hyogo (Japan); Yato, Naoki; Watanabe, Akinobu; Lin, Baiming; Murayama, Norio [Kohzu Precision Co., Ltd., 2-6-15, Kurigi, Asao-ku, Kawasaki-shi, Kanagawa 215-8521 (Japan); Fukushima, Sei, E-mail: FUKUSHIMA.Sei@nims.go.jp [Laboratory of Advanced Science and Technology for Industry, University of Hyogo (Japan); National Institute for Material Sciences (Japan); Kanda, Kazuhiro [Laboratory of Advanced Science and Technology for Industry, University of Hyogo (Japan)

    2016-07-27

    The BL07A beamline in New SUBARU, University of Hyogo, has been used for many studies of new materials. A new double crystal monochromator controlled by integrated computing was designed and installed in the beamline in 2014. In this report we will discuss the unique features of this new monochromator, MKZ-7NS. This monochromator was not designed exclusively for use in BL07A; on the contrary, it was designed to be installed at low cost in various beamlines to facilitate the industrial applications of medium-scale synchrotron radiation facilities. Thus, the design of the monochromator utilized common packages that can satisfy the wide variety of specifications required at different synchrotron radiation facilities. This monochromator can be easily optimized for any beamline because a few control parameters can be suitably customized. The beam offset can be fixed precisely even if one of the two slave axes is omitted. This design reduces the convolution of mechanical errors. Moreover, the monochromator’s control mechanism is very compact, making it possible to reduce the size of the vacuum chamber.

  7. Large monochromator systems at PETRA III

    Energy Technology Data Exchange (ETDEWEB)

    Horbach, J., E-mail: Jan.Horbach@desy.de [Deutsches Elektronen-Synchrotron Hamburg, Notkestrasse 85, 22607 Hamburg (Germany); Degenhardt, M.; Hahn, U.; Heuer, J.; Peters, H.B.; Schulte-Schrepping, H. [Deutsches Elektronen-Synchrotron Hamburg, Notkestrasse 85, 22607 Hamburg (Germany); Donat, A.; Luedecke, H. [Deutsches Elektronen-Synchrotron Zeuthen, Platanenallee 6, 15738 Zeuthen (Germany)

    2011-09-01

    For the beamlines of the new synchrotron radiation source PETRA III, fixed-exit double crystal monochromators with specific features were developed. To achieve a compact arrangement of the canted undulator beamlines at Sectors 2 and 6, it is necessary to shift one of the two beamlines in the vertical direction. This is done by Large Offset Monochromators (LOM). One of these monochromators (LOM500, installed at beamline P03) is cooled with liquid nitrogen as it accepts the white beam. LOM1250 (installed at beamline P08) accepts a monochromatic beam and therefore needs no cooling system. The challenge with this monochromator is its large beam offset of 1.25 m. The energy range in combination with this large vertical beam offset demands a relative crystal movement of roughly 3 m along the beam direction, which is achieved by translating each crystal by up to 1.5 m. LOM1250 is equipped with a laser-based stabilisation that compensates the thermal drift of the mechanical components involved in the positioning of the crystals. This is done by piezo actuators below the crystals, using the laser beam position after passing each crystal as feedback. With this approach we provide a closed-loop system without attenuation of the X-ray beam by position monitors. The third monochromator, at beamline P06, shifts the beam only 21 mm upwards but has a linear travel of one crystal of 3.9 m, owing to its large energy range of 4.4-90 keV using multilayer crystals. The technical design and mechanical engineering issues of the three Large Monochromator Systems at beamlines P03, P06 and P08 are highlighted in this article.

  8. Monochromator design for the HADAS reflectometer in Jülich

    Science.gov (United States)

    Rücker, U.; Alefeld, B.; Kentzinger, E.; Brückel, Th

    2000-06-01

    A reflectometer with polarization analysis is being built on the basis of the HADAS spectrometer in the neutron guide hall at the research reactor FRJ-2 (DIDO) in Jülich. For obtaining the optimal flux at the sample position, the performances of several monochromator designs have been calculated, e.g. focusing mirrors, mosaic monochromator crystals and bent perfect crystal monochromators. Under the given geometrical limitations a double monochromator with bent perfect Si crystals and vertical focusing has the best performance.

  9. Effective temperature and exergy of monochromic blackbody radiation

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    A new parameter named the monochromic effective temperature Tλ is proposed, which represents the thermodynamic quality of monochromic blackbody radiation. The exergy of the monochromic blackbody radiation is expressed by Tλ. The monochromic effective temperature equation is developed, which shows that the product of Tλ and the wavelength is constant, equal to 5.33016×10-3 […]tion in photosynthesis can be explained by the results of this work.

  10. Novel method for converting digital Fresnel hologram to phase-only hologram based on bidirectional error diffusion.

    Science.gov (United States)

    Tsang, P W M; Poon, T-C

    2013-10-01

    We report a novel and fast method for converting a digital, complex Fresnel hologram into a phase-only hologram. Briefly, the pixels in the complex hologram are scanned sequentially row by row. The odd and even rows are scanned from opposite directions, constituting a bidirectional error diffusion process. The magnitude of each visited pixel is forced to a constant value, while the exact phase value is preserved. The resulting error is diffused to the neighboring pixels that have not yet been visited. The resulting phase-only hologram is called the bidirectional error diffusion (BERD) hologram. The image reconstructed from the BERD hologram exhibits high fidelity compared with that obtained from the original complex hologram.
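The serpentine scan described above can be sketched in a few lines. This is a minimal illustration, assuming Floyd-Steinberg-style diffusion weights (the paper does not specify its kernel here), not the authors' implementation:

```python
import numpy as np

def berd_phase_only(hologram):
    """Convert a complex hologram to phase-only form via bidirectional
    (serpentine) error diffusion. Assumed Floyd-Steinberg weights."""
    h = hologram.astype(np.complex128).copy()
    rows, cols = h.shape
    out = np.zeros_like(h)
    for r in range(rows):
        step = 1 if r % 2 == 0 else -1          # alternate scan direction
        scan = range(cols) if step == 1 else range(cols - 1, -1, -1)
        for c in scan:
            # force unit magnitude, keep the exact phase
            out[r, c] = np.exp(1j * np.angle(h[r, c]))
            err = h[r, c] - out[r, c]
            # diffuse the error to not-yet-visited neighbours
            if 0 <= c + step < cols:
                h[r, c + step] += err * 7 / 16
            if r + 1 < rows:
                if 0 <= c - step < cols:
                    h[r + 1, c - step] += err * 3 / 16
                h[r + 1, c] += err * 5 / 16
                if 0 <= c + step < cols:
                    h[r + 1, c + step] += err * 1 / 16
    return out
```

Every output pixel has unit magnitude, so only the phase needs to be stored or displayed.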

  11. Fast conversion of digital Fresnel hologram to phase-only hologram based on localized error diffusion and redistribution.

    Science.gov (United States)

    Tsang, P W M; Jiao, A S M; Poon, T-C

    2014-03-10

    Past research has demonstrated that a digital, complex Fresnel hologram can be converted into a phase-only hologram with the bidirectional error diffusion (BERD) algorithm. However, the recursive nature of the error diffusion process makes the conversion lengthy, with processing time increasing monotonically with hologram size. In this paper, we propose a method to overcome this problem. Briefly, each row of a hologram is partitioned into short non-overlapping segments, and a localized error diffusion algorithm is applied to convert the pixels in each segment into phase-only values. Subsequently, the error signal is redistributed with low-pass filtering. As the operation on each segment is independent of the others, the conversion can be conducted at high speed on a graphics processing unit. The hologram obtained with the proposed method, known as the Localized Error Diffusion and Redistribution (LERDR) hologram, can be generated over two orders of magnitude faster than a BERD hologram for a 2048×2048 hologram, fast enough to produce quality phase-only holograms at video rate.
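The segment-wise idea can be sketched as follows; the segment length and the uniform redistribution rule below are illustrative assumptions, not the published LERDR parameters:

```python
import numpy as np

def lerd_phase_only(hologram, seg=64):
    """Localized error diffusion: each row segment is converted
    independently (hence parallelizable); the residual error of each
    segment is then redistributed (here: spread uniformly over the
    corresponding segment of the next row)."""
    h = hologram.astype(np.complex128).copy()
    rows, cols = h.shape
    out = np.zeros_like(h)
    for r in range(rows):
        for s in range(0, cols, seg):
            e = 0.0
            # local 1-D error diffusion within the segment
            for c in range(s, min(s + seg, cols)):
                v = h[r, c] + e
                out[r, c] = np.exp(1j * np.angle(v))
                e = v - out[r, c]
            # redistribute the leftover segment error to the next row
            if r + 1 < rows:
                width = min(s + seg, cols) - s
                h[r + 1, s:s + seg] += e / width
    return out
```

Because no pixel in one segment depends on another segment of the same row, the inner loops map naturally onto GPU threads.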

  12. Numerical solutions and error estimations for the space fractional diffusion equation with variable coefficients via Fibonacci collocation method.

    Science.gov (United States)

    Bahşı, Ayşe Kurt; Yalçınbaş, Salih

    2016-01-01

    In this study, the Fibonacci collocation method, based on Fibonacci polynomials, is presented to solve fractional diffusion equations with variable coefficients. The fractional derivatives are described in the Caputo sense. The method is derived by expanding the approximate solution in Fibonacci polynomials; with this expansion of the fractional derivative, the equation can be reduced to a set of linear algebraic equations. Also, an error estimation algorithm based on residual functions is presented, and the approximate solutions are improved by using it. If the exact solution of the problem is not known, the absolute error function can still be approximately computed from the Fibonacci polynomial solution. By using this error estimation function, we can find improved solutions which are more accurate than direct numerical solutions. Numerical examples, figures, tables, and comparisons are presented to show the efficiency and usability of the proposed method.
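The basis functions themselves follow the simple recurrence F₁(x)=1, F₂(x)=x, F_{k+1}(x)=x·F_k(x)+F_{k−1}(x); a minimal sketch of generating their coefficient arrays (the collocation machinery itself is omitted):

```python
import numpy as np

def fibonacci_polynomials(n):
    """First n Fibonacci polynomials F_1..F_n as coefficient arrays
    (lowest degree first), built from the standard recurrence."""
    polys = [np.array([1.0]), np.array([0.0, 1.0])]  # F_1 = 1, F_2 = x
    while len(polys) < n:
        fk_minus_1, fk = polys[-2], polys[-1]
        nxt = np.zeros(len(fk) + 1)
        nxt[1:] += fk                    # x * F_k(x)
        nxt[:len(fk_minus_1)] += fk_minus_1  # + F_{k-1}(x)
        polys.append(nxt)
    return polys[:n]
```

As a sanity check, evaluating each F_k at x = 1 reproduces the Fibonacci numbers 1, 1, 2, 3, 5, 8, …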

  13. Angular vibrations of cryogenically cooled double-crystal monochromators.

    Science.gov (United States)

    Sergueev, I; Döhrmann, R; Horbach, J; Heuer, J

    2016-09-01

    The effect of angular vibrations of the crystals in cryogenically cooled monochromators on beam performance has been studied theoretically and experimentally. A simple relation between the amplitude of the vibrations and the size of the focused beam is derived. It is shown that double-crystal monochromator vibrations affect not only the image size but also the image position along the optical axis. Several methods to measure vibrations with the X-ray beam are explained and analyzed. The methods have been applied to systematically study angular crystal vibrations at monochromators installed at the PETRA III light source. Characteristic amplitudes of angular vibrations for different monochromators are presented.

  14. New method for spectrofluorometer monochromator wavelength calibration.

    Science.gov (United States)

    Paladini, A A; Erijman, L

    1988-09-01

    A method is presented for wavelength calibration of spectrofluorometer monochromators. It is based on the distortion that the characteristic absorption bands of glass filters (holmium or didymium oxide), commonly used for calibration of spectrophotometers, introduce in the emitted fluorescence of fluorophores like indole, diphenyl hexatriene, xylene or rhodamine 6G. Those filters, or a well characterized absorber with sharp bands such as benzene vapor, can be used for the same purpose. The wavelength calibration accuracy obtained with this method is better than 0.1 nm, and the method requires no modification of the geometry of the spectrofluorometer sample compartment.

  15. High-Resolution Multi-Shot Spiral Diffusion Tensor Imaging with Inherent Correction of Motion-Induced Phase Errors

    Science.gov (United States)

    Truong, Trong-Kha; Guidon, Arnaud

    2014-01-01

    Purpose To develop and compare three novel reconstruction methods designed to inherently correct for motion-induced phase errors in multi-shot spiral diffusion tensor imaging (DTI) without requiring a variable-density spiral trajectory or a navigator echo. Theory and Methods The first method simply averages magnitude images reconstructed with sensitivity encoding (SENSE) from each shot, whereas the second and third methods rely on SENSE to estimate the motion-induced phase error for each shot, and subsequently use either a direct phase subtraction or an iterative conjugate gradient (CG) algorithm, respectively, to correct for the resulting artifacts. Numerical simulations and in vivo experiments on healthy volunteers were performed to assess the performance of these methods. Results The first two methods suffer from a low signal-to-noise ratio (SNR) or from residual artifacts in the reconstructed diffusion-weighted images and fractional anisotropy maps. In contrast, the third method provides high-quality, high-resolution DTI results, revealing fine anatomical details such as a radial diffusion anisotropy in cortical gray matter. Conclusion The proposed SENSE+CG method can inherently and effectively correct for phase errors, signal loss, and aliasing artifacts caused by both rigid and nonrigid motion in multi-shot spiral DTI, without increasing the scan time or reducing the SNR. PMID:23450457
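The benefit of removing per-shot phase errors before combining can be illustrated with a toy example. For simplicity it assumes a spatially constant phase error per shot, estimated hypothetically from a single pixel; real motion-induced phases are spatially varying and are estimated via SENSE in the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
img = np.ones((32, 32))  # toy object with unit magnitude everywhere

# each shot acquires the object with a random motion-induced phase
shots = [img * np.exp(1j * rng.uniform(-np.pi, np.pi)) for _ in range(8)]

# naive complex averaging: the random phases cause signal cancellation
naive = np.abs(sum(shots) / 8)

# direct phase subtraction: estimate each shot's phase (here from the
# centre pixel) and remove it before averaging
corrected = np.abs(sum(s * np.exp(-1j * np.angle(s[16, 16])) for s in shots) / 8)
```

With the phases removed, the shots add coherently and the full signal magnitude is recovered, whereas the naive complex average is attenuated.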

  16. Design and optimization of the grating monochromator for soft X-ray self-seeding FELs

    Energy Technology Data Exchange (ETDEWEB)

    Serkez, Svitozar

    2015-10-15

    The emergence of Free Electron Lasers (FEL) as a fourth generation of light sources is a breakthrough. FELs operating in the X-ray range (XFEL) allow one to carry out completely new experiments from which most of the natural sciences stand to benefit. Self-amplified spontaneous emission (SASE) is the baseline FEL operation mode: the radiation pulse starts as spontaneous emission from the electron bunch and is amplified during the FEL process until it reaches saturation. SASE FEL radiation usually has poor spectral bandwidth or, equivalently, poor longitudinal coherence. Self-seeding is a promising approach to narrow the SASE bandwidth of XFELs significantly in order to produce nearly transform-limited pulses. It is achieved by monochromatizing the radiation pulse in the middle of the FEL amplification process. Following the successful demonstration of the self-seeding setup in the hard X-ray range at the LCLS, there is a need to extend self-seeding into the soft X-ray range. Here a numerical method to simulate the soft X-ray self-seeding (SXRSS) monochromator performance is presented. It allows one to perform start-to-end self-seeded FEL simulations along with (in our case) the GENESIS simulation code. Based on this method, the performance of the LCLS self-seeded operation was simulated, showing good agreement with experiment. Also, the SXRSS monochromator design developed at SLAC was adapted for the SASE3 type undulator beamline at the European XFEL. The optical system was studied using Gaussian beam optics, a wave optics propagation method and ray tracing to evaluate the performance of the monochromator itself. The wave optics analysis takes into account the actual beam wavefront of the radiation from the coherent FEL source, third order aberrations and height errors from each optical element. The monochromator design is based on a toroidal VLS grating working at a fixed incidence angle mounting without both entrance and exit slits.

  17. On progress of the solution of the stationary 2-dimensional neutron diffusion equation: a polynomial approximation method with error analysis

    Energy Technology Data Exchange (ETDEWEB)

    Ceolin, C., E-mail: celina.ceolin@gmail.com [Universidade Federal de Santa Maria (UFSM), Frederico Westphalen, RS (Brazil). Centro de Educacao Superior Norte; Schramm, M.; Bodmann, B.E.J.; Vilhena, M.T., E-mail: celina.ceolin@gmail.com [Universidade Federal do Rio Grande do Sul (UFRGS), Porto Alegre, RS (Brazil). Programa de Pos-Graduacao em Engenharia Mecanica

    2015-07-01

    Recently the stationary neutron diffusion equation in heterogeneous rectangular geometry was solved by expanding the scalar fluxes in polynomials in the spatial variables (x, y), considering the two-group energy model. The present discussion focuses on an error analysis of that solution. More specifically, we show how the spatial subdomain segmentation is related to the degree of the polynomial and the Lipschitz constant. This relation allows one to solve the 2-D neutron diffusion problem with second degree polynomials in each subdomain. The solution is exact at the knots where the Lipschitz cone is centered. Moreover, the solution has an analytical representation in each subdomain with supremum and infimum functions that show the convergence of the solution. We illustrate the analysis with a selection of numerical case studies. (author)

  18. Grating monochromator for soft X-ray self-seeding the European XFEL

    Energy Technology Data Exchange (ETDEWEB)

    Serkez, Svitozar; Kocharyan, Vitali; Saldin, Evgeni [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Geloni, Gianluca [European XFEL GmbH, Hamburg (Germany)

    2013-02-15

    Self-seeding is a promising approach to significantly narrow the SASE bandwidth of XFELs to produce nearly transform-limited pulses. The implementation of this method in the soft X-ray wavelength range necessarily involves gratings as dispersive elements. We study a very compact self-seeding scheme with a grating monochromator originally designed at SLAC, which can be straightforwardly installed in the SASE3 type undulator beamline at the European XFEL. The monochromator design is based on a toroidal VLS grating working at a fixed incidence angle mounting without entrance slit. It covers the spectral range from 300 eV to 1000 eV. The optical system was studied using wave optics method (in comparison with ray tracing) to evaluate the performance of the self-seeding scheme. Our wave optics analysis takes into account the actual beam wavefront of the radiation from the coherent FEL source, third order aberrations, and errors from each optical element. Wave optics is the only method available, in combination with FEL simulations, for the design of a self-seeding monochromator without exit slit. We show that, without exit slit, the self-seeding scheme is distinguished by the much needed experimental simplicity, and can practically give the same resolving power (about 7000) as with an exit slit. Wave optics is also naturally applicable to calculations of the self-seeding scheme efficiency, which include the monochromator transmittance and the effect of the mismatching between seed beam and electron beam. Simulations show that the FEL power reaches 1 TW and that the spectral density for a TW pulse is about two orders of magnitude higher than that for the SASE pulse at saturation.

  19. Accelerating inference for diffusions observed with measurement error and large sample sizes using approximate Bayesian computation

    DEFF Research Database (Denmark)

    Picchini, Umberto; Forman, Julie Lyng

    2016-01-01

    In recent years, dynamical modelling has been provided with a range of breakthrough methods to perform exact Bayesian inference. However, it is often computationally unfeasible to apply exact statistical methodologies in the context of large data sets and complex models. This paper considers...... a nonlinear stochastic differential equation model observed with correlated measurement errors and an application to protein folding modelling. An approximate Bayesian computation (ABC)-MCMC algorithm is suggested to allow inference for model parameters within reasonable time constraints. The ABC algorithm...... applications. A simulation study is conducted to compare our strategy with exact Bayesian inference, the latter being two orders of magnitude slower than ABC-MCMC for the considered set-up. Finally, the ABC algorithm is applied to a large protein data set. The suggested methodology is fairly general......
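The ABC-MCMC idea can be sketched generically: propose a parameter, forward-simulate data, and accept only when a summary distance falls below a tolerance. The distance, summary statistic, and tuning values below are illustrative assumptions, not the paper's algorithm:

```python
import numpy as np

def abc_mcmc(obs, simulate, prior_logpdf, theta0, eps,
             n_iter=2000, step=0.5, seed=0):
    """Toy ABC-MCMC with a hard-threshold pseudo-likelihood: a proposal
    is accepted only if its simulated data lie within eps of the
    observed summary (here: the sample mean)."""
    rng = np.random.default_rng(seed)
    distance = lambda y: abs(y.mean() - obs.mean())
    theta, chain = theta0, []
    for _ in range(n_iter):
        prop = theta + step * rng.standard_normal()   # random-walk proposal
        y = simulate(prop, rng)                        # forward-simulate data
        ok = distance(y) < eps                         # ABC acceptance region
        if ok and np.log(rng.uniform()) < prior_logpdf(prop) - prior_logpdf(theta):
            theta = prop
        chain.append(theta)
    return np.array(chain)
```

For a toy Gaussian mean model this recovers the observed mean; in the paper the forward model is an SDE solver and the summaries are tailored to the protein-folding data.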

  20. Quantifying equation-of-state and opacity errors using integrated supersonic diffusive radiation flow experiments on the National Ignition Facility

    Energy Technology Data Exchange (ETDEWEB)

    Guymer, T. M., E-mail: Thomas.Guymer@awe.co.uk; Moore, A. S.; Morton, J.; Allan, S.; Bazin, N.; Benstead, J.; Bentley, C.; Comley, A. J.; Garbett, W.; Reed, L.; Stevenson, R. M. [AWE Plc., Aldermaston, Reading RG7 4PR (United Kingdom); Kline, J. L.; Cowan, J.; Flippo, K.; Hamilton, C.; Lanier, N. E.; Mussack, K.; Obrey, K.; Schmidt, D. W.; Taccetti, J. M. [Los Alamos National Laboratory, Los Alamos, New Mexico 87545 (United States); and others

    2015-04-15

    A well-diagnosed campaign of supersonic, diffusive radiation flow experiments has been fielded on the National Ignition Facility. These experiments have used accurate measurements of delivered laser energy and foam density to enable an investigation into SESAME's tabulated equation-of-state values and CASSANDRA's predicted opacity values for the low-density C₈H₇Cl foam used throughout the campaign. We report that the results from initial simulations under-predicted the arrival time of the radiation wave through the foam by ≈22%. A simulation study was conducted that artificially scaled the equation-of-state and opacity with the intended aim of quantifying the systematic offsets in both CASSANDRA and SESAME. Two separate hypotheses which describe these errors have been tested using the entire ensemble of data, with one being supported by these data.

  1. Discretization error analysis and adaptive meshing algorithms for fluorescence diffuse optical tomography in the presence of measurement noise.

    Science.gov (United States)

    Zhou, Lu; Yazici, Birsen

    2011-04-01

    Quantitatively accurate fluorescence diffuse optical tomographic (FDOT) image reconstruction is a computationally demanding problem that requires repeated numerical solutions of two coupled partial differential equations and an associated inverse problem. Recently, adaptive finite element methods have been explored to reduce the computation requirements of the FDOT image reconstruction. However, existing approaches ignore the ubiquitous presence of noise in boundary measurements. In this paper, we analyze the effect of finite element discretization on the FDOT forward and inverse problems in the presence of measurement noise and develop novel adaptive meshing algorithms for FDOT that take into account noise statistics. We formulate the FDOT inverse problem as an optimization problem in the maximum a posteriori framework to estimate the fluorophore concentration in a bounded domain. We use the mean-square-error (MSE) between the exact solution and the discretized solution as a figure of merit to evaluate the image reconstruction accuracy, and derive an upper bound on the MSE which depends upon the forward and inverse problem discretization parameters, noise statistics, a priori information of fluorophore concentration, source and detector geometry, as well as background optical properties. Next, we use this error bound to develop adaptive meshing algorithms for the FDOT forward and inverse problems to reduce the MSE due to discretization in the reconstructed images. Finally, we present a set of numerical simulations to illustrate the practical advantages of our adaptive meshing algorithms for FDOT image reconstruction.

  2. A hard X-ray laboratory for monochromator characterisation

    Energy Technology Data Exchange (ETDEWEB)

    Hamelin, B. [Institut Max von Laue - Paul Langevin (ILL), 38 - Grenoble (France)

    1997-04-01

    Since their installation at ILL during the 1970s the ILL γ-ray diffractometers have been intensively used in the development of neutron monochromators. However, the ageing of the sources and new developments in hard X-ray diffractometry led to a decision at the end of 1995 to replace the existing γ-ray laboratory with a hard X-ray laboratory, based on a 420 keV generator, making available in the long term several beam-lines for rapid characterisation of monochromator crystals. The facility is now installed and its characteristics and advantages are outlined. (author). 2 refs.

  3. Discontinuous Galerkin methods and a posteriori error analysis for heterogeneous diffusion problems

    Energy Technology Data Exchange (ETDEWEB)

    Stephansen, A.F

    2007-12-15

    In this thesis we analyse a discontinuous Galerkin (DG) method and two computable a posteriori error estimators for the linear and stationary advection-diffusion-reaction equation with heterogeneous diffusion. The DG method considered, the SWIP method, is a variation of the Symmetric Interior Penalty Galerkin method; the difference is that the SWIP method uses weighted averages with weights that depend on the diffusion coefficient. The a priori analysis shows optimal convergence with respect to mesh size and robustness with respect to heterogeneous diffusion, which is confirmed by numerical tests. Both a posteriori error estimators are of the residual type and control the energy (semi-)norm of the error. Local lower bounds are obtained showing that almost all indicators are independent of the heterogeneities. The exception is the non-conforming part of the error, which has been evaluated using the Oswald interpolator. The second error estimator gives sharper estimates than the first, but is slightly more costly; it is based on the construction of an H(div)-conforming Raviart-Thomas-Nedelec flux using the conservativeness of DG methods. Numerical results show that both estimators can be used for mesh adaptation. (author)
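For orientation, SWIP-type diffusion-dependent averages are commonly written as follows (a standard formulation from the interior-penalty literature, with δ₁, δ₂ the diffusivities on the two sides of a mesh interface; the thesis's exact notation may differ):

```latex
\{\!\{v\}\!\}_{\omega} = \omega_{1} v_{1} + \omega_{2} v_{2},
\qquad
\omega_{1} = \frac{\delta_{2}}{\delta_{1} + \delta_{2}},
\qquad
\omega_{2} = \frac{\delta_{1}}{\delta_{1} + \delta_{2}},
```

so the side with the smaller diffusivity receives the larger weight, and the associated interface penalty scales with the harmonic mean 2δ₁δ₂/(δ₁+δ₂), which is what yields robustness with respect to strong diffusion contrast.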

  4. Composite germanium monochromators - results for the TriCS

    Energy Technology Data Exchange (ETDEWEB)

    Schefer, J.; Fischer, S.; Boehm, M.; Keller, L.; Horisberger, M.; Medarde, M.; Fischer, P. [Paul Scherrer Inst. (PSI), Villigen (Switzerland)

    1997-09-01

    Composite germanium monochromators are at the beginning of their application in neutron diffraction. We show here the importance of permanent quality control with neutrons, using the example of the 311 wafers which will be used on the single crystal diffractometer TriCS at SINQ. (author) 2 figs., 3 refs.

  5. Monochromator on a synchrotron undulator source for liquid surface studies

    DEFF Research Database (Denmark)

    Als-Nielsen, Jens Aage; Freund, A.K.

    1992-01-01

    a monochromator made of a beryllium mosaic crystal using the (002) reflection in Laue geometry placed in undulator beams of DORIS III at the Hamburger Synchrotronstrahlungslabor and of the European Synchrotron Radiation Facility. An analysis of the diffraction properties in terms of mosaic spread, heat load...

  6. The performance of a cryogenically cooled monochromator for an in-vacuum undulator beamline.

    Science.gov (United States)

    Zhang, Lin; Lee, Wah Keat; Wulff, Michael; Eybert, Laurent

    2003-07-01

    The channel-cut silicon monochromator on beamline ID09 at the European Synchrotron Radiation Facility is indirectly cooled from the sides by liquid nitrogen. The thermal slope error of the diffracting surface is calculated by finite-element analysis and the results are compared with experiments. The slope error is studied as a function of cooling coefficients, beam size, position of the footprint and power distribution. It is found that the slope-error-versus-power curve can be divided into three regions: (i) the linear region, where the thermal slope error is linearly proportional to the power; (ii) the transition region, where the temperature of the Si crystal is close to 125 K and the thermal slope error falls below the straight line extrapolated from the linear region; and (iii) the non-linear region, where the temperature of the Si crystal is higher than 125 K and the thermal slope error increases much faster than the power. Heat-load tests were also performed and the measured rocking-curve widths are compared with those calculated by finite-element modeling. When the broadening from the intrinsic rocking-curve width and mounting strain is included, the calculated rocking-curve width versus heat load is in excellent agreement with experiment.

  7. Grating monochromator for soft X-ray self-seeding the European XFEL

    CERN Document Server

    Serkez, Svitozar; Kocharyan, Vitali; Saldin, Evgeni

    2013-01-01

    Self-seeding is a promising approach to significantly narrow the SASE bandwidth of XFELs to produce nearly transform-limited pulses. The implementation of this method in the soft X-ray wavelength range necessarily involves gratings as dispersive elements. We study a very compact self-seeding scheme with a grating monochromator originally designed at SLAC, which can be straightforwardly installed in the SASE3 type undulator beamline at the European XFEL. The monochromator design is based on a toroidal VLS grating working at a fixed incidence angle mounting without entrance slit. It covers the spectral range from 300 eV to 1000 eV. The optical system was studied using wave optics method (in comparison with ray tracing) to evaluate the performance of the self-seeding scheme. Our wave optics analysis takes into account the actual beam wavefront of the radiation from the coherent FEL source, third order aberrations, and errors from each optical element. Wave optics is the only method available, in combination with...

  8. Design and fabrication of an active polynomial grating for soft-X-ray monochromators and spectrometers

    CERN Document Server

    Chen, S J; Perng, S Y; Kuan, C K; Tseng, T C; Wang, D J

    2001-01-01

    An active polynomial grating has been designed for use in synchrotron radiation soft-X-ray monochromators and spectrometers. The grating can be dynamically adjusted to obtain the third-order-polynomial surface needed to eliminate the defocus and coma aberrations at any photon energy. Ray-tracing results confirm that a monochromator or spectrometer based on this active grating has nearly no aberration limit to the overall spectral resolution in the entire soft-X-ray region. The grating substrate is made of a precisely milled 17-4 PH stainless steel parallel plate, which is joined to a flexure-hinge bender shaped by wire electrical discharge machining. The substrate is ground into a concave cylindrical shape with a nominal radius and then polished to achieve a roughness of 0.45 nm and a slope error of 1.2 μrad rms. Long trace profiler measurements show that the active grating can reach the desired third-order polynomial with a high degree of figure accuracy.

  9. Design and fabrication of an active polynomial grating for soft-X-ray monochromators and spectrometers

    Science.gov (United States)

    Chen, S.-J.; Chen, C. T.; Perng, S. Y.; Kuan, C. K.; Tseng, T. C.; Wang, D. J.

    2001-07-01

    An active polynomial grating has been designed for use in synchrotron radiation soft-X-ray monochromators and spectrometers. The grating can be dynamically adjusted to obtain the third-order-polynomial surface needed to eliminate the defocus and coma aberrations at any photon energy. Ray-tracing results confirm that a monochromator or spectrometer based on this active grating has nearly no aberration limit to the overall spectral resolution in the entire soft-X-ray region. The grating substrate is made of a precisely milled 17-4 PH stainless steel parallel plate, which is joined to a flexure-hinge bender shaped by wire electrical discharge machining. The substrate is ground into a concave cylindrical shape with a nominal radius and then polished to achieve a roughness of 0.45 nm and a slope error of 1.2 μrad rms. Long trace profiler measurements show that the active grating can reach the desired third-order polynomial with a high degree of figure accuracy.

  10. A methodology for visually lossless JPEG2000 compression of monochrome stereo images.

    Science.gov (United States)

    Feng, Hsin-Chang; Marcellin, Michael W; Bilgin, Ali

    2015-02-01

    A methodology for visually lossless compression of monochrome stereoscopic 3D images is proposed. Visibility thresholds are measured for quantization distortion in JPEG2000. These thresholds are found to be functions of not only spatial frequency, but also of wavelet coefficient variance, as well as the gray level in both the left and right images. To avoid a daunting number of measurements during subjective experiments, a model for visibility thresholds is developed. The left image and right image of a stereo pair are then compressed jointly using the visibility thresholds obtained from the proposed model to ensure that quantization errors in each image are imperceptible to both eyes. This methodology is then demonstrated via a particular 3D stereoscopic display system with an associated viewing condition. The resulting images are visually lossless when displayed individually as 2D images, and also when displayed in stereoscopic 3D mode.

  11. A vacuum ultraviolet filtering monochromator for synchrotron-based spectroscopy

    Science.gov (United States)

    Janik, Ireneusz; Marin, Timothy W.

    2013-01-01

    We describe the design, characterization, and implementation of a vacuum ultraviolet (VUV) monochromator for use in filtering stray and scattered light from the principal monochromator output of the Stainless Steel Seya VUV synchrotron beam line at the Synchrotron Radiation Center, University of Wisconsin-Madison. We demonstrate a reduction of three orders of magnitude of stray and scattered light over the wavelength range 1400-2000 Å with minimal loss of light intensity, allowing for over six orders of magnitude of dynamic range in light detection. We suggest that a similar filtering scheme can be utilized in any variety of spectroscopic applications where a large dynamic range and low amount of background signal are of import, such as in transmittance experiments with very high optical density.

  12. Design and performance of the ALS double-crystal monochromator

    Energy Technology Data Exchange (ETDEWEB)

    Jones, G.; Ryce, S.; Perera, R.C.C. [Lawrence Berkeley National Lab., CA (United States)] [and others

    1997-04-01

    A new "Cowan type" double-crystal monochromator, based on the boomerang design used at NSLS beamline X-24A, has been developed for beamline 9.3.1 at the ALS, a windowless UHV beamline covering the 1-6 keV photon-energy range. Beamline 9.3.1 is designed to simultaneously achieve the goals of high energy resolution, high flux, and high brightness at the sample. The mechanical design has been simplified, and recent developments in technology have been included. Measured mechanical precision of the monochromator shows significant improvement over existing designs. In tests with x-rays at NSLS beamline X-23 A2, maximum deviations in the intensity of monochromatic light were just 7% during scans of several hundred eV in the vicinity of the Cr K edge (6 keV) with the monochromator operating without intensity feedback. Such precision is essential because of the high brightness of the ALS radiation and the overall length of beamline 9.3.1 (26 m).

  13. In-situ metrology for the optimization of bent crystals used in hard-X-ray monochromators: Comparison between measurement and simulation

    Energy Technology Data Exchange (ETDEWEB)

    Thomasset, Muriel, E-mail: muriel.thomasset@synchrotron-soleil.f [Synchrotron SOLEIL, L' orme des Merisiers, BP 48, 91192 Gif sur Yvette (France); Moreno, Thierry; Capitanio, Blandine; Idir, Mourad [Synchrotron SOLEIL, L' orme des Merisiers, BP 48, 91192 Gif sur Yvette (France); Bucourt, Samuel [Imagine Optic, 18 rue Charles de Gaulle, Orsay 91400 (France)

    2010-05-01

    Crystal sagittal focusing is known as one of the most efficient ways of focusing synchrotron X-ray radiation from bending magnet sources, delivering increased photon flux at the sample position. To optimize the performance of a sagittally bent crystal inside a monochromator, its radius of curvature must be known. However, this measurement is not easy to obtain. Even though the X-ray beam itself is the ultimate probe for optimizing the system, prior knowledge of the radius of curvature as a function of the bender motor positions is still necessary to avoid catastrophic failure. In this paper, we describe a simple, efficient and accurate method of measuring the radius of curvature of sagittally bent monochromator crystals at several bending magnet beamlines at synchrotron SOLEIL. To optimize the crystal bending inside these monochromators, we used a Shack-Hartmann sensor (HP 26) developed by the Imagine Optic company (Orsay, France). This high-accuracy two-dimensional metrology tool was originally designed to be installed on a Long Trace Profiler translation stage to measure mirror profiles. During a SOLEIL shutdown period, the instrument was mounted directly inside the monochromator so that the radius of curvature could be measured in situ. This method allows us to optimize the curvature and eliminate twist before bending the crystal strongly, to radii of curvature below 2 m. The second step in the optimization process was to use the X-ray beam for the final adjustments of the bending system, with X-ray images used to analyse the residual defects of the system. Using SpotX, a ray-tracing simulation tool, these errors can be fully analysed and a fully optimized system obtained. Overall, five beamlines at synchrotron SOLEIL have used this method to optimize their monochromators.
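
    For a spherically bent surface, the local slope grows linearly with position, slope(x) ≈ x/R, so a Shack-Hartmann slope map yields the radius through a one-parameter least-squares fit. A hypothetical sketch of that reduction (not Imagine Optic's actual processing software):

```python
def radius_from_slopes(positions_m, slopes_rad):
    """Least-squares radius of curvature from wavefront-slope samples.
    For a sphere, slope(x) ~= x / R, so R = sum(x^2) / sum(x * slope)."""
    sxx = sum(x * x for x in positions_m)
    sxs = sum(x * s for x, s in zip(positions_m, slopes_rad))
    return sxx / sxs

# Synthetic check: slope samples taken across a 2 m radius sphere.
xs = [i * 1e-3 for i in range(-10, 11)]      # +-10 mm aperture
slopes = [x / 2.0 for x in xs]               # slope = x / R with R = 2 m
```

In practice the sensor also reports the orthogonal slope component, so twist (a difference in fitted radius between the two crystal edges) can be detected and removed before strong bending.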

  14. Characterisation of a Sr-90 based electron monochromator

    CERN Document Server

    Arfaoui, S; CERN; Casella, C; ETH Zurich

    2015-01-01

    This note describes the characterisation of an energy filtered Sr-90 source to be used in laboratory studies that require Minimum Ionising Particles (MIP) with a kinetic energy of up to approx. 2 MeV. The energy calibration was performed with a LYSO scintillation crystal read out by a digital Silicon Photomultiplier (dSiPM). The LYSO/dSiPM set-up was pre-calibrated using a Na-22 source. After introducing the motivation behind the usage of such a device, this note presents the principle and design of the electron monochromator as well as its energy and momentum characterisation.

  15. Vibrational stability of a cryocooled horizontal double-crystal monochromator

    Science.gov (United States)

    Kristiansen, Paw; Johansson, Ulf; Ursby, Thomas; Jensen, Brian Norsk

    2016-01-01

    The vibrational stability of a horizontally deflecting double-crystal monochromator (HDCM) is investigated. Inherently, a HDCM preserves vertical beam stability better than a ‘normal’ vertical double-crystal monochromator, as its vibrations almost exclusively affect the horizontal stability. Here both the relative pitch vibration between the first and second crystal and the absolute pitch vibration of the second crystal are measured. All reported measurements are obtained under active cooling by means of flowing liquid nitrogen (LN2). It is found that circulating the LN2 at high pressures and low flow rates (up to 5.9 bar and down to 3 l min−1 were tested) is favorable for attaining low vibrations. An absolute pitch stability of the second crystal of 18 nrad RMS, 2–2500 Hz, and a relative pitch stability between the two crystals of 25 nrad RMS, 1–2500 Hz, are obtained under cryocooling conditions that allow 1516 W to be absorbed by the LN2 before it vaporizes. PMID:27577758
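
    Band-limited figures such as "18 nrad RMS, 2–2500 Hz" are obtained by integrating the vibration's one-sided power spectral density over the stated band and taking the square root. A generic sketch of that reduction (the authors' actual processing chain is not described in the record):

```python
import math

def band_rms(freqs_hz, psd, f_lo, f_hi):
    """RMS amplitude over [f_lo, f_hi] from a one-sided power spectral
    density (units^2 / Hz), using trapezoidal integration."""
    total = 0.0
    for (f0, p0), (f1, p1) in zip(zip(freqs_hz, psd), zip(freqs_hz[1:], psd[1:])):
        if f1 <= f_lo or f0 >= f_hi:
            continue                       # segment entirely outside band
        a, b = max(f0, f_lo), min(f1, f_hi)
        # linearly interpolate the PSD at the clipped segment endpoints
        pa = p0 + (p1 - p0) * (a - f0) / (f1 - f0)
        pb = p0 + (p1 - p0) * (b - f0) / (f1 - f0)
        total += 0.5 * (pa + pb) * (b - a)
    return math.sqrt(total)
```

For example, a flat PSD of 1 nrad²/Hz integrated over a 50 Hz band gives an RMS of √50 ≈ 7.1 nrad.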

  16. Performance of a beam-multiplexing diamond crystal monochromator at the Linac Coherent Light Source

    DEFF Research Database (Denmark)

    Zhu, Diling; Feng, Yiping; Stoupin, Stanislav

    2014-01-01

    A double-crystal diamond monochromator was recently implemented at the Linac Coherent Light Source. It enables splitting pulses generated by the free electron laser in the hard x-ray regime and thus allows the simultaneous operations of two instruments. Both monochromator crystals are High-Pressu...

  17. Microcontroller-based servo for two-crystal X-ray monochromators.

    Science.gov (United States)

    Siddons, D P

    1998-05-01

    Microcontrollers have become increasingly easy to incorporate into instruments as the architectures and support tools have developed. The PIC series is particularly easy to use, and this paper describes a controller used to stabilize the output of a two-crystal X-ray monochromator at a given offset from its peak intensity position, as such monochromators are generally used.

  18. Towards a systematic assessment of errors in diffusion Monte Carlo calculations of semiconductors: Case study of zinc selenide and zinc oxide

    Energy Technology Data Exchange (ETDEWEB)

    Yu, Jaehyung [Department of Mechanical Science and Engineering, 1206 W Green Street, University of Illinois at Urbana-Champaign, Urbana, Illinois 61801 (United States); Wagner, Lucas K. [Department of Physics, University of Illinois at Urbana-Champaign, Urbana, Illinois 61801 (United States); Ertekin, Elif, E-mail: ertekin@illinois.edu [Department of Mechanical Science and Engineering, 1206 W Green Street, University of Illinois at Urbana-Champaign, Urbana, Illinois 61801 (United States); International Institute for Carbon Neutral Energy Research - WPI-I²CNER, Kyushu University, 744 Moto-oka, Nishi-ku, Fukuoka 819-0395 (Japan)

    2015-12-14

    The fixed node diffusion Monte Carlo (DMC) method has attracted interest in recent years as a way to calculate properties of solid materials with high accuracy. However, the framework for the calculation of properties such as total energies, atomization energies, and excited state energies is not yet fully established. Several outstanding questions remain as to the effect of pseudopotentials, the magnitude of the fixed node error, and the size of supercell finite size effects. Here, we consider in detail the semiconductors ZnSe and ZnO and carry out systematic studies to assess the magnitude of the energy differences arising from controlled and uncontrolled approximations in DMC. The former include time step errors and supercell finite size effects for ground and optically excited states, and the latter include pseudopotentials, the pseudopotential localization approximation, and the fixed node approximation. We find that for these compounds, the errors can be controlled to good precision using modern computational resources and that quantum Monte Carlo calculations using Dirac-Fock pseudopotentials can offer good estimates of both cohesive energy and the gap of these systems. We do however observe differences in calculated optical gaps that arise when different pseudopotentials are used.
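
    Of the controlled approximations mentioned, the time-step error is conventionally removed by computing the DMC energy at several time steps and extrapolating to zero. A minimal illustration of that extrapolation (the linear fit is an assumption for this sketch; higher-order fits are also used in practice):

```python
def extrapolate_zero_timestep(timesteps, energies):
    """Linear least-squares fit E(tau) = E0 + m*tau over (timestep, energy)
    pairs; returns E0, the energy extrapolated to zero time step."""
    n = len(timesteps)
    st, se = sum(timesteps), sum(energies)
    stt = sum(t * t for t in timesteps)
    ste = sum(t * e for t, e in zip(timesteps, energies))
    m = (n * ste - st * se) / (n * stt - st * st)   # slope (time-step bias)
    return (se - m * st) / n                         # intercept E0

# Synthetic data with a purely linear time-step bias around E0 = -100 Ha.
taus = [0.01, 0.02, 0.04]
es = [-100.0 + 5.0 * t for t in taus]
```

The supercell finite-size errors discussed in the same paragraph are handled analogously, by extrapolating energies computed at increasing supercell sizes.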

  19. MACS low-background doubly focusing neutron monochromator

    CERN Document Server

    Smee, S A; Scharfstein, G A; Qiu, Y; Brand, P C; Anand, D K; Broholm, C L

    2002-01-01

    A novel doubly focusing neutron monochromator has been developed as part of the Multi-Analyzer Crystal Spectrometer (MACS) at the NIST Center for Neutron Research. The instrument utilizes a unique vertical focusing element that enables active vertical and horizontal focusing with a large, 357-crystal (1428 cm²) array. The design significantly reduces the amount of structural material in the beam path as compared to similar instruments. Optical measurements verify the excellent focal performance of the device. Analytical and Monte Carlo simulations predict that, when mounted at the NIST cold-neutron source, the device should produce a monochromatic beam (ΔE = 0.2 meV) with flux φ > 10⁸ n cm⁻² s⁻¹. (orig.)

  20. SUMS: synchronous undulator-monochromator scans at Synchrotron Soleil.

    Science.gov (United States)

    Izquierdo, Manuel; Hardion, Vincent; Renaud, Guillaume; Chapuis, Lilian; Millet, Raphael; Langlois, Florent; Marteau, Fabrice; Chauvet, Christian

    2012-07-01

    A strategy for performing synchronous undulator-monochromator scans (SUMS) compatible with the control system of Synchrotron Soleil has been developed. The implementation of the acquisition scheme has required the development of an electronic interface between the undulator and the beamline. The characterization of delays and jitters in the synchronous movement of various motor axes has motivated the development of a new electronic synchronization scheme among various axes, including the case when one of the axes is electronically accessible in `read-only' mode. A software prototype has been developed to allow the existing hardware continuous-scan software to work in user units. The complete strategy has been implemented and successfully tested at the TEMPO beamline.
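
    A synchronous scan of this kind has to map each requested photon energy to an undulator gap. A hypothetical sketch using the on-axis first-harmonic formula and an exponential K-versus-gap calibration; all numeric parameters below are illustrative, not SOLEIL's actual machine values:

```python
import math

# Illustrative machine parameters (assumptions, not SOLEIL's values):
E_RING_GEV = 2.75          # storage-ring energy
LAMBDA_U_CM = 8.0          # undulator period
K0, GAP0_MM = 2.5, 15.0    # calibration: K = K0 * exp(-pi*(g - GAP0)/lambda_u)

def gap_for_photon_energy(e_photon_kev: float) -> float:
    """Gap (mm) placing the undulator's first harmonic at the requested
    photon energy, so the gap can track a monochromator scan."""
    # on-axis first harmonic: E1[keV] = 0.95 * E_ring^2 / (lambda_u[cm] * (1 + K^2/2))
    k_sq = 2.0 * (0.95 * E_RING_GEV ** 2 / (LAMBDA_U_CM * e_photon_kev) - 1.0)
    if k_sq <= 0.0:
        raise ValueError("energy above the first-harmonic tuning range")
    k = math.sqrt(k_sq)
    lambda_u_mm = LAMBDA_U_CM * 10.0
    return GAP0_MM + lambda_u_mm / math.pi * math.log(K0 / k)
```

Higher photon energies require a weaker field (smaller K), hence a larger gap; the control system's job is to issue these gap setpoints in step with the monochromator axis.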

  1. Monochromator-Based Absolute Calibration of Radiation Thermometers

    Science.gov (United States)

    Keawprasert, T.; Anhalt, K.; Taubert, D. R.; Hartmann, J.

    2011-08-01

    A monochromator integrating-sphere-based spectral comparator facility has been developed to calibrate standard radiation thermometers in terms of the absolute spectral radiance responsivity, traceable to the PTB cryogenic radiometer. The absolute responsivity calibration has been improved using a 75 W xenon lamp with a reflective mirror and imaging optics to a relative standard uncertainty at the peak wavelength of approximately 0.17% (k = 1). Via a relative measurement of the out-of-band responsivity, the spectral responsivity of radiation thermometers can be fully characterized. To verify the calibration accuracy, the absolutely calibrated radiation thermometer is used to measure Au and Cu freezing-point temperatures and then to compare the obtained results with the values obtained by absolute methods, resulting in T − T90 values of +52 mK and −50 mK for the gold and copper fixed points, respectively.
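
    Absolute (primary) thermometry of this kind amounts to inverting Planck's law: the calibrated radiance responsivity converts the thermometer signal to a spectral radiance, from which temperature follows. A sketch of the inversion at a single wavelength (a real radiation thermometer integrates over its spectral band; constants are the standard CODATA-style values):

```python
import math

C2 = 1.4388e-2         # second radiation constant, m*K
C1L = 1.191042e-16     # first radiation constant for radiance, 2*h*c^2, W*m^2/sr

def planck_radiance(wavelength_m: float, temp_k: float) -> float:
    """Blackbody spectral radiance, W / (m^2 sr m)."""
    x = C2 / (wavelength_m * temp_k)
    return C1L / wavelength_m ** 5 / (math.exp(x) - 1.0)

def temperature_from_radiance(radiance, wavelength_m, lo=300.0, hi=4000.0):
    """Invert Planck's law by bisection: find T with L(lambda, T) = radiance.
    Valid because radiance increases monotonically with temperature."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if planck_radiance(wavelength_m, mid) < radiance:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Round-tripping the Au freezing point (1337.33 K) at 650 nm through these two functions recovers the input temperature, which is the kind of consistency check underlying the T − T90 comparison above.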

  2. Reference optical phantoms for diffuse optical spectroscopy. Part 1--Error analysis of a time resolved transmittance characterization method.

    Science.gov (United States)

    Bouchard, Jean-Pierre; Veilleux, Israël; Jedidi, Rym; Noiseux, Isabelle; Fortin, Michel; Mermut, Ozzy

    2010-05-24

    Development, production quality control and calibration of optical tissue-mimicking phantoms require a convenient and robust characterization method with known absolute accuracy. We present a solid phantom characterization technique based on time resolved transmittance measurement of light through a relatively small phantom sample. The small size of the sample enables characterization of every material batch produced in a routine phantoms production. Time resolved transmittance data are pre-processed to correct for dark noise, sample thickness and instrument response function. Pre-processed data are then compared to a forward model based on the radiative transfer equation solved through Monte Carlo simulations accurately taking into account the finite geometry of the sample. The computational burden of the Monte Carlo technique was alleviated by building a lookup table of pre-computed results and using interpolation to obtain modeled transmittance traces at intermediate values of the optical properties. Near-perfect fit residuals are obtained with a fit window using all data above 1% of the maximum value of the time resolved transmittance trace. Absolute accuracy of the method is estimated through a thorough error analysis which takes into account the following contributions: measurement noise, system repeatability, instrument response function stability, sample thickness variation, refractive index inaccuracy, time-correlated single photon counting system time-base inaccuracy, and forward model inaccuracy. Two-sigma absolute error estimates of 0.01 cm⁻¹ (11.3%) and 0.67 cm⁻¹ (6.8%) are obtained for the absorption coefficient and reduced scattering coefficient, respectively.
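
    The lookup-table idea is the computational core: transmittance results are pre-computed on a grid of optical properties, and intermediate values are interpolated rather than re-simulated. A generic bilinear-interpolation sketch (the paper's actual table layout and interpolation order are not stated in the record):

```python
from bisect import bisect_right

def bilinear_lookup(mu_a_grid, mu_s_grid, table, mu_a, mu_s):
    """Bilinear interpolation in a table of precomputed Monte Carlo results,
    indexed as table[i][j] for (mu_a_grid[i], mu_s_grid[j])."""
    def bracket(grid, v):
        # index of the cell containing v, and the fractional position inside it
        i = min(max(bisect_right(grid, v) - 1, 0), len(grid) - 2)
        t = (v - grid[i]) / (grid[i + 1] - grid[i])
        return i, t
    i, tx = bracket(mu_a_grid, mu_a)
    j, ty = bracket(mu_s_grid, mu_s)
    return ((1 - tx) * (1 - ty) * table[i][j] + tx * (1 - ty) * table[i + 1][j]
            + (1 - tx) * ty * table[i][j + 1] + tx * ty * table[i + 1][j + 1])

# Toy table: a function that happens to be exactly bilinear in (mu_a, mu_s').
mu_a_grid = [0.0, 1.0, 2.0]
mu_s_grid = [0.0, 10.0, 20.0]
table = [[3.0 * a + 0.5 * s for s in mu_s_grid] for a in mu_a_grid]
```

During fitting, each candidate (μa, μs') pair is evaluated against the table this way, turning a Monte Carlo run per iteration into a cheap table lookup.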

  3. Volumetric apparatus for hydrogen adsorption and diffusion measurements: sources of systematic error and impact of their experimental resolutions.

    Science.gov (United States)

    Policicchio, Alfonso; Maccallini, Enrico; Kalantzopoulos, Georgios N; Cataldi, Ugo; Abate, Salvatore; Desiderio, Giovanni; Agostino, Raffaele Giuseppe

    2013-10-01

    The development of a volumetric apparatus (also known as a Sieverts' apparatus) for accurate and reliable hydrogen adsorption measurements is described. The instrument minimizes the sources of systematic errors, which are mainly due to inner volume calibration, stability and uniformity of the temperatures, precise evaluation of the skeletal volume of the measured samples, and the thermodynamic properties of the gas species. A series of hardware and software solutions were designed and introduced in the apparatus, which we will indicate as f-PcT, in order to deal with these aspects. The results are represented in terms of an accurate evaluation of the equilibrium and dynamical characteristics of molecular hydrogen adsorption on two well-known porous media. The contribution of each experimental solution to the error propagation of the adsorbed moles is assessed. The developed volumetric apparatus for gas storage capacity measurements allows an accurate evaluation over a four-order-of-magnitude pressure range (from 1 kPa to 8 MPa) and at temperatures between 77 K and 470 K. The acquired results are in good agreement with the values reported in the literature.
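
    The quoted "error propagation of the adsorbed moles" is standard first-order propagation of the measurement uncertainties through the gas-law calculation. An idealized sketch (ideal-gas law for brevity; a real Sieverts analysis uses a real-gas equation of state and differences in moles between dosing and sample volumes):

```python
import math

R = 8.314462618  # J / (mol K)

def moles_with_uncertainty(p_pa, dp, v_m3, dv, t_k, dt):
    """Gas moles n = P*V/(R*T) with first-order (quadrature) error
    propagation of the pressure, volume, and temperature uncertainties."""
    n = p_pa * v_m3 / (R * t_k)
    rel = math.sqrt((dp / p_pa) ** 2 + (dv / v_m3) ** 2 + (dt / t_k) ** 2)
    return n, n * rel
```

Evaluating each instrumental improvement then reduces to asking how much it shrinks one of the three relative-uncertainty terms.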

  4. Double-crystal monochromator as the first optical element in BESSRC-CAT beamlines (abstract)

    Science.gov (United States)

    Beno, Mark A.; Ramanathan, Mohan

    1996-09-01

    The first optical element in the BESSRC-CAT beamlines at the Advanced Photon Source will be a monochromator, so that a standard design for this critical component is advantageous. The monochromator we have designed is a double-crystal, fixed-exit scheme with a constant offset designed for UHV operation, thereby allowing windowless operation of the beamlines. The crystals are mounted on a turntable with the first crystal at the center of rotation. A mechanical linkage is used to correctly position the second crystal and maintain a constant offset. The main drive for the rotary motion is provided by a vacuum-compatible Huber goniometer isolated from the main vacuum chamber. Rotary motion of the primary monochromator stage is accomplished by using two adjacent vacuum chambers connected only by the small annular opening around a hollow stainless steel shaft, which connects the Huber goniometer to the turntable on which the crystals are mounted. The design of the monochromator is such that it can accommodate both water and liquid nitrogen cooling for the crystal optics. The basic design for the monochromator linkage mechanism will be presented along with details of the monochromator chamber. The results of initial optical tests of the monochromator system using tilt sensors and a precision autocollimator will also be given.
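
    Maintaining a constant offset requires the linkage to hold a specific geometric relation between the two crystals as the turntable rotates. A sketch of that relation using standard fixed-exit geometry (not the specific BESSRC linkage design):

```python
import math

def second_crystal_position(offset_m: float, theta_deg: float):
    """For a fixed-exit double-crystal monochromator with constant beam
    offset h, return the second crystal's perpendicular gap and its
    displacement along the first-crystal surface:
        gap g = h / (2 cos(theta)),  along-surface x = h / (2 sin(theta))."""
    th = math.radians(theta_deg)
    return offset_m / (2.0 * math.cos(th)), offset_m / (2.0 * math.sin(th))
```

As the Bragg angle changes during an energy scan, both coordinates change, which is exactly what the mechanical linkage must reproduce to keep the exit beam at a fixed height.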

  5. The development of a 200 kV monochromated field emission electron source

    Energy Technology Data Exchange (ETDEWEB)

    Mukai, Masaki, E-mail: mmukai@jeol.co.jp [JEOL Ltd., 3-1-2 Musashino, Akishima, Tokyo 196-8558 (Japan); Kim, Judy S. [University of Oxford, Department of Materials, Parks Road, Oxford, OX1 3PH (United Kingdom); Omoto, Kazuya; Sawada, Hidetaka; Kimura, Atsushi; Ikeda, Akihiro; Zhou, Jun; Kaneyama, Toshikatsu [JEOL Ltd., 3-1-2 Musashino, Akishima, Tokyo 196-8558 (Japan); Young, Neil P.; Warner, Jamie H.; Nellist, Peter D.; Kirkland, Angus I. [University of Oxford, Department of Materials, Parks Road, Oxford, OX1 3PH (United Kingdom)

    2014-05-01

    We report the development of a monochromator for an intermediate-voltage aberration-corrected electron microscope suitable for operation in both STEM and TEM imaging modes. The monochromator consists of two Wien filters with a variable energy selecting slit located between them and is located prior to the accelerator. The second filter cancels the energy dispersion produced by the first filter and after energy selection forms a round monochromated, achromatic probe at the specimen plane. The ultimate achievable energy resolution has been measured as 36 meV at 200 kV and 26 meV at 80 kV. High-resolution Annular Dark Field STEM images recorded using a monochromated probe resolve Si–Si spacings of 135.8 pm using energy spreads of 218 meV at 200 kV and 217 meV at 80 kV respectively. In TEM mode an improvement in non-linear spatial resolution to 64 pm due to the reduction in the effects of partial temporal coherence has been demonstrated using broad beam illumination with an energy spread of 134 meV at 200 kV. - Highlights: • Monochromator for 200 kV aberration corrected TEM and STEM was developed. • Monochromator produces monochromated and achromatic probe at specimen plane. • Ultimate energy resolution was measured to be 36 meV at 200 kV and 26 meV at 80 kV. • Atomic resolution STEM images were recorded using monochromated electron probe. • Improvements of TEM resolution were confirmed using monochromated illumination.

  6. Realisation of a novel crystal bender for a fast double crystal monochromator

    Energy Technology Data Exchange (ETDEWEB)

    Zaeper, R.; Richwin, M. E-mail: richwin@uni-wuppertal.de; Wollmann, R.; Luetzenkirchen-Hecht, D.; Frahm, R

    2001-07-21

    A novel crystal bender for an X-ray undulator beamline as part of a fast double crystal monochromator development for full EXAFS energy range was characterized. Rocking curves of the monochromator crystal system were recorded under different heat loads and bending forces of the indirectly cooled first Si(1 1 1) crystal. The monochromator development implements new piezo-driven tilt tables with wide angular range to adjust the crystals' Bragg angles and a high pressure actuated bender mechanism for the first crystal.

  7. Realisation of a novel crystal bender for a fast double crystal monochromator

    CERN Document Server

    Zaeper, R; Wollmann, R; Luetzenkirchen-Hecht, D; Frahm, R

    2001-01-01

    A novel crystal bender for an X-ray undulator beamline as part of a fast double crystal monochromator development for full EXAFS energy range was characterized. Rocking curves of the monochromator crystal system were recorded under different heat loads and bending forces of the indirectly cooled first Si(1 1 1) crystal. The monochromator development implements new piezo-driven tilt tables with wide angular range to adjust the crystals' Bragg angles and a high pressure actuated bender mechanism for the first crystal.

  8. Optimization of Monochromated TEM for Ultimate Resolution Imaging and Ultrahigh Resolution Electron Energy Loss Spectroscopy

    KAUST Repository

    Lopatin, Sergei

    2017-09-01

    The performance of a monochromated transmission electron microscope with a Wien-type monochromator is optimized to achieve an extremely narrow energy spread of the electron beam and ultrahigh energy resolution in spectroscopy. The energy spread in the beam is improved by almost an order of magnitude compared to specified values. The optimization involves both the monochromator and the electron energy loss detection system. We demonstrate the boosted capability of the optimized system with respect to ultra-low-loss EELS and sub-angstrom resolution imaging (in combination with spherical aberration correction).

  9. Aberration corrected and monochromated environmental transmission electron microscopy: challenges and prospects for materials science

    DEFF Research Database (Denmark)

    Hansen, Thomas Willum; Wagner, Jakob Birkedal; Dunin-Borkowski, Rafal E.

    2010-01-01

    The latest generation of environmental transmission electron microscopes incorporates aberration correctors and monochromators, allowing studies of chemical reactions and growth processes with improved spatial resolution and spectral sensitivity. Here, we describe the performance of such an instr...

  10. IMCA-CAT BM first monochromator crystal optimization

    Energy Technology Data Exchange (ETDEWEB)

    Ivanov, I.N.; Cimpoes, S.; Chrzas, J. [CSRRI, Illinois Institute of Technology, 3301 S. Dearborn Street, Chicago, Il 60616 (United States)

    1996-09-01

    The high heat load at the surfaces of the first x-ray optical elements at the APS requires special measures to be taken to more completely utilize the beam. A conceptually new design for such an element, proposed, realized, and tested by M. Hart and conveniently called "matchbox," is to be implemented at the IMCA-CAT BM beamline as the first monochromator crystal. The requirements of the IMCA-CAT companies for the BM beamline dictate that an optimization of the design is made for a given x-ray energy range E = 13 keV ± 1 keV. A modification of the original design to improve the vacuum compatibility of the device was made in collaboration with M. Hart. An FEA optimization of the geometry is made using the ALGOR and ABAQUS programs. Determination of the resulting slopes and the useful crystal surface after the best compensation of the thermal distortions is also made. The surface profile obtained by the FEA study was used to perform a ray-tracing analysis of the IMCA-CAT BM beamline. The results of the ray-tracing study will be presented. © 1996 American Institute of Physics.

  11. Monochromator-Based Absolute Calibration of a Standard Radiation Thermometer

    Science.gov (United States)

    Mantilla, J. M.; Hernanz, M. L.; Campos, J.; Martín, M. J.; Pons, A.; del Campo, D.

    2014-04-01

    Centro Español de Metrología (CEM) is disseminating the International Temperature Scale (ITS-90) at high temperatures by using the fixed points of Ag and Cu and a standard radiation thermometer. However, the future mise en pratique for the definition of the kelvin (MeP-K) will include the dissemination of the kelvin by primary methods and by indirect approximations capable of exceptionally low uncertainties or increased reliability. Primary radiometry is at present able to achieve uncertainties competitive with the ITS-90 above the silver point; one of the possible techniques is the calibration of an imaging radiometer for radiance responsivity (the radiance method). In order to carry out this calibration, IO-CSIC (Spanish Designated Institute for luminous intensity and luminous flux) has collaborated with CEM, allowing traceability to its cryogenic radiometer. A monochromator integrating-sphere-based spectral comparator facility has been used to calibrate one of the CEM standard radiation thermometers. The absolutely calibrated standard radiation thermometer has been used to determine the temperatures of the fixed points of Cu, Co-C, Pt-C, and Re-C. The results obtained are 1357.80 K, 1597.10 K, 2011.66 K, and 2747.64 K, respectively, with uncertainties ranging from 0.4 K to 1.1 K.

  12. Moessbauer-Fresnel zone plate as nuclear monochromator

    Energy Technology Data Exchange (ETDEWEB)

    Mooney, T.M.; Alp, E.E.; Yun, W.B.

    1992-06-01

    Zone plates currently used in x-ray optics derive their focusing power from (a spatial variation of) the electronic refractive index -- that is, from the collective effect of electronic x-ray-scattering amplitudes. Nuclei also scatter x rays, and resonant nuclear-scattering amplitudes, particularly those associated with Moessbauer fluorescence, can dominate the refractive index for x rays whose energies are very near the nuclear-resonance energy. A zone plate whose Fresnel zones are filled alternately with ⁵⁷Fe and ⁵⁶Fe (⁵⁷Fe has a nuclear resonance of natural width Γ = 4.8 neV at 14.413 keV; ⁵⁶Fe has no such resonance) has a resonant focusing efficiency; it focuses only those x rays whose energies are within several Γ of resonance. When followed by an absorbing screen with a small pinhole, such a zone plate can function as a synchrotron-radiation monochromator with an energy resolution of a few parts in 10¹². The energy-dependent focusing efficiency and the resulting time-dependent response of a resonant zone plate are discussed.
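
    The quoted energy resolution follows directly from the ratio of the resonance width to the resonance energy. A small check, taking "several Γ" as, say, 3Γ (the exact acceptance width is a discussion point of the paper, not a fixed value):

```python
GAMMA_EV = 4.8e-9      # natural width of the 57Fe Moessbauer resonance, eV
E0_EV = 14.413e3       # resonance energy, eV

def fractional_bandwidth(n_gamma: float) -> float:
    """Fractional energy resolution if the zone plate focuses only photons
    within n_gamma natural widths of the nuclear resonance."""
    return n_gamma * GAMMA_EV / E0_EV
```

With n_gamma = 3 this gives roughly 1e-12, consistent with "a few parts in 10¹²".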

  13. Liquid-crystal displays for medical imaging: a discussion of monochrome versus color

    Science.gov (United States)

    Wright, Steven L.; Samei, Ehsan

    2004-05-01

    A common view is that color displays cannot match the performance of monochrome displays, normally used for diagnostic x-ray imaging. This view is based largely on historical experience with cathode-ray tube (CRT) displays, and does not apply in the same way to liquid-crystal displays (LCDs). Recent advances in color LCD technology have considerably narrowed performance differences with monochrome LCDs for medical applications. The most significant performance advantage of monochrome LCDs is higher luminance, a concern for use under bright ambient conditions. LCD luminance is limited primarily by backlight design, yet to be optimized for color LCDs for medical applications. Monochrome LCDs have inherently higher contrast than color LCDs, but this is not a major advantage under most conditions. There is no practical difference in luminance precision between color and monochrome LCDs, with a slight theoretical advantage for color. Color LCDs can provide visualization and productivity enhancement for medical applications, using digital drive from standard commercial graphics cards. The desktop computer market for color LCDs far exceeds the medical monitor market, with an economy of scale. The performance-to-price ratio for color LCDs is much higher than monochrome, and warrants re-evaluation for medical applications.

  14. Cascade self-seeding scheme with wake monochromator for narrow-bandwidth X-ray FELs

    CERN Document Server

    Geloni, Gianluca; Saldin, Evgeni

    2010-01-01

    Three different approaches have been proposed so far for production of highly monochromatic X-rays from a baseline XFEL undulator: (i) single-bunch self-seeding scheme with a four crystal monochromator in Bragg reflection geometry; (ii) double-bunch self-seeding scheme with a four-crystal monochromator in Bragg reflection geometry; (iii) single-bunch self-seeding scheme with a wake monochromator. A unique element of the X-ray optical design of the last scheme is the monochromatization of X-rays using a single crystal in Bragg-transmission geometry. A great advantage of this method is that the monochromator introduces no path delay of X-rays. This fact eliminates the need for a long electron beam bypass, or for the creation of two precisely separated, identical electron bunches, as required in the other two self-seeding schemes. In its simplest configuration, the self-seeded XFEL consists of an input undulator and an output undulator separated by a monochromator. In some experimental situations this simplest t...

  15. On the influence of monochromator thermal deformations on X-ray focusing

    Science.gov (United States)

    Antimonov, M. A.; Khounsary, A. M.; Sandy, A. R.; Narayanan, S.; Navrotski, G.

    2016-06-01

    A cooled double crystal monochromator system is used on many high heat load X-ray synchrotron radiation beamlines in order to select, by diffraction, a narrow spectrum of the beam. Thermal deformation of the first crystal monochromator - and the potential loss of beam brightness - is often a concern. However, if downstream beam focusing is planned, the lensing effect of the monochromator must be considered even if thermal deformations are small. In this paper we report on recent experiments at an Advanced Photon Source (APS) beamline that focuses the X-ray beam using compound refractive lenses downstream of an X-ray monochromator system. Increasing the X-ray beam power by increasing the storage ring current from 100 mA to 130 mA resulted in an effective doubling of the focal distance. We show quantitatively that this is due to a lensing effect of the distorted monochromator that results in the creation of a virtual source downstream of the actual source. An analysis of the defocusing and options to mitigate this effect are explored.

  16. On the influence of monochromator thermal deformations on X-ray focusing

    Energy Technology Data Exchange (ETDEWEB)

    Antimonov, M.A. [Department of Mechanical and Industrial Engineering, University of Illinois at Chicago, Chicago, IL 60607 (United States); X-ray Science Division, Argonne National Laboratory, Argonne, IL 60439 (United States); Peter the Great St. Petersburg Polytechnic University, Saint Petersburg 195251 (Russian Federation); Khounsary, A.M., E-mail: amk@iit.edu [Department of Physics, Illinois Institute of Technology, Chicago, IL 60616 (United States); Department of Mechanical and Industrial Engineering, University of Illinois at Chicago, Chicago, IL 60607 (United States); Sandy, A.R.; Narayanan, S.; Navrotski, G. [X-ray Science Division, Argonne National Laboratory, Argonne, IL 60439 (United States)

    2016-06-01

    A cooled double crystal monochromator system is used on many high heat load X-ray synchrotron radiation beamlines in order to select, by diffraction, a narrow spectrum of the beam. Thermal deformation of the first crystal monochromator – and the potential loss of beam brightness – is often a concern. However, if downstream beam focusing is planned, the lensing effect of the monochromator must be considered even if thermal deformations are small. In this paper we report on recent experiments at an Advanced Photon Source (APS) beamline that focuses the X-ray beam using compound refractive lenses downstream of an X-ray monochromator system. Increasing the X-ray beam power by increasing the storage ring current from 100 mA to 130 mA resulted in an effective doubling of the focal distance. We show quantitatively that this is due to a lensing effect of the distorted monochromator that results in the creation of a virtual source downstream of the actual source. An analysis of the defocusing and options to mitigate this effect are explored.

  17. The residual stress instrument with optimized Si(220) monochromator and position-sensitive detector at HANARO

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Chang-Hee [Korea Atomic Energy Research Institute, Yusung, Daejon 305-600 (Korea, Republic of); Moon, Myung-Kook [Korea Atomic Energy Research Institute, Yusung, Daejon 305-600 (Korea, Republic of)]. E-mail: moonmk@kaeri.re.kr; Em, Vyacheslav T. [Korea Atomic Energy Research Institute, Yusung, Daejon 305-600 (Korea, Republic of); Choi, Young-Hyun [Korea Atomic Energy Research Institute, Yusung, Daejon 305-600 (Korea, Republic of); Cheon, Jong-Kyu [Korea Atomic Energy Research Institute, Yusung, Daejon 305-600 (Korea, Republic of); Nam, Uk-Won [Korea Astronomy Observatory, Yusung, Daejon 305-348 (Korea, Republic of); Kong, Kyung-Nam [Korea Astronomy Observatory, Yusung, Daejon 305-348 (Korea, Republic of)

    2005-06-11

An upgraded residual stress instrument at the HANARO reactor of KAERI is described. A horizontally focusing bent perfect crystal Si(220) monochromator (instead of a mosaic vertically focusing Ge monochromator) is installed in a drum with a tunable take-off angle/wavelength (2θ_M = 0–60°). A specially designed position-sensitive detector (60% efficiency for λ = 1.8 Å) with a 200 mm (instead of 100 mm) high active area is used. There are no Soller-type collimators in the instrument. The minimum possible monochromator-to-sample distance, L_MS = 2 m, and sample-to-detector distance, L_SD = 1.2 m, were found to be optimal. The new PSD and bent Si(220) monochromator, combined with the possibility of selecting an appropriate wavelength, resulted in about a ten-fold gain in data collection rate. The optimal reflections of austenitic and ferritic steels, aluminum and nickel for stress measurements with a Si(220) monochromator were chosen experimentally. The ability of the instrument to make strain measurements deep inside austenitic and ferritic steels has been tested. For the chosen reflections and wavelengths, no shift of peak position (apparent strain) was observed up to a 56 mm path length.
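The wavelength tuning described above follows Bragg's law, n·λ = 2d·sin θ. A minimal sketch, assuming the standard Si lattice constant of 5.431 Å (the function name and values are illustrative, not from the record):

```python
import math

def bragg_wavelength(d_hkl, two_theta_deg, order=1):
    """Wavelength selected by a crystal monochromator via Bragg's law:
    n * lambda = 2 * d * sin(theta), with theta = take-off angle / 2."""
    theta = math.radians(two_theta_deg / 2.0)
    return 2.0 * d_hkl * math.sin(theta) / order

# Si(220): d = a / sqrt(h^2 + k^2 + l^2), with a = 5.431 Angstrom
d_220 = 5.431 / math.sqrt(2**2 + 2**2 + 0**2)   # ~1.92 Angstrom

# A take-off angle 2*theta_M of ~56 deg selects lambda ~ 1.8 Angstrom,
# consistent with the detector efficiency quoted in the abstract.
print(round(bragg_wavelength(d_220, 56.0), 2))  # ~1.80 Angstrom
```

This also shows why a tunable take-off angle is useful: sweeping 2θ_M over 0–60° sweeps the selected wavelength continuously from 0 up to about 1.9 Å.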

  18. A double multilayer monochromator for the B16 Test beamline at the Diamond Light Source

    Science.gov (United States)

    Sawhney, K. J. S.; Dolbnya, I. P.; Scott, S. M.; Tiwari, M. K.; Preece, G. M.; Alcock, S. G.; Malandain, A. W.

    2011-09-01

The B16 Test beamline at the Diamond Light Source is in user operation. It has recently been upgraded with the addition of a double multilayer monochromator (DMM), which provides further functionality and versatility to the beamline. The multilayer monochromator is equipped with two pairs of multilayer optics (Ni/B4C and Ru/B4C) to cover the wide photon energy range of 2–20 keV with good efficiency. The DMM provides a broad-bandpass / high-flux operational mode for the beamline and, when used in tandem with the Si(111) double crystal monochromator, it gives very high suppression of higher-order harmonics. The design details of the DMM and the first commissioning results obtained using the DMM are presented.

  19. Performance of a beam-multiplexing diamond crystal monochromator at the Linac Coherent Light Source

    OpenAIRE

    Zhu, Diling; Feng, Yiping; Stoupin, Stanislav; Terentyev, Sergey A.; Lemke, Henrik T.; Fritz, David M.; Chollet, Matthieu; Glownia, J. M.; Alonso-Mori, Roberto; Sikorski, Marcin; Song, Sanghoon; Brandt van Driel, Tim; Williams, Garth J; Messerschmidt, Marc; Boutet, Sébastien

    2014-01-01

A double-crystal diamond monochromator was recently implemented at the Linac Coherent Light Source. It enables splitting pulses generated by the free-electron laser in the hard x-ray regime and thus allows the simultaneous operation of two instruments. Both monochromator crystals are High-Pressure High-Temperature grown type-IIa diamond crystal plates with the (111) orientation. The first crystal has a thickness of ∼100 μm to allow high reflectivity within the Bragg bandwidth and good transm...

  20. Comparison of Color LCD and Medical-grade Monochrome LCD Displays in Diagnostic Radiology

    OpenAIRE

    2007-01-01

    In diagnostic radiology, medical-grade monochrome displays are usually recommended because of their higher luminance. Standard color displays can be used as a less expensive alternative, but have a lower luminance. The aim of the present study was to compare image quality for these two types of displays. Images of a CDRAD contrast-detail phantom were read by four radiologists using a 2-megapixel (MP) color display (143 cd/m2 maximum luminance) as well as 2-MP (295 cd/m2) and 3-MP monochrome d...

  1. A diffracted-beam monochromator for long linear detectors in X-ray diffractometers with Bragg-Brentano parafocusing geometry

    NARCIS (Netherlands)

    Van der Pers, N.M.; Hendrikx, R.W.A.; Delhez, R.; Böttger, A.J.

    2013-01-01

A new diffracted-beam monochromator has been developed for Bragg-Brentano X-ray diffractometers equipped with a linear detector. The monochromator consists of a cone-shaped highly oriented pyrolytic graphite crystal oriented out of the equatorial plane such that the parafocusing geometry is

  2. Color Error Diffusion Halftoning Method Based on Image Tone and Human Visual System

    Institute of Scientific and Technical Information of China (English)

    易尧华; 于晓庆

    2009-01-01

In color error diffusion halftoning, the design of the error diffusion filter for each color channel directly affects the quality of the color halftone image. This paper analyzes tone-based error diffusion together with human visual system (HVS) characteristics: luminance and chrominance HVS models are applied to optimize the filter coefficients and the threshold, yielding a color error diffusion halftoning method based on image tone and the HVS. Experimental results show that the method effectively reduces artifacts in color halftone images and significantly improves the accuracy of color rendition in the reproduced images.
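The tone- and HVS-optimized filters described in this record are not reproduced in the abstract; as a baseline for what such methods modify, here is a minimal per-channel sketch of classic error diffusion with the standard Floyd–Steinberg weights (a stand-in for the paper's optimized coefficients, which are not given here):

```python
import numpy as np

def error_diffusion(channel, threshold=0.5):
    """Binary error diffusion on one channel with values in [0, 1].
    Each pixel is thresholded, and the quantization error is spread to
    the unprocessed neighbours with the Floyd-Steinberg weights
    7/16 (right), 3/16, 5/16, 1/16 (next row). Tone-based methods vary
    these weights and the threshold with the local input level."""
    img = np.asarray(channel, dtype=float).copy()
    h, w = img.shape
    out = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = 1.0 if old >= threshold else 0.0
            out[y, x] = new
            err = old - new
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    img[y + 1, x - 1] += err * 3 / 16
                img[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    img[y + 1, x + 1] += err * 1 / 16
    return out

# A flat 50% gray halftones to a pattern whose mean tone stays close to 0.5.
halftone = error_diffusion(np.full((8, 8), 0.5))
print(halftone.mean())
```

For a color image the same loop runs once per channel; the paper's contribution is choosing channel-dependent weights and thresholds using luminance/chrominance HVS models rather than the fixed weights above.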

  3. Nanoscale mapping of optical band gaps using monochromated electron energy loss spectroscopy

    Science.gov (United States)

    Zhan, W.; Granerød, C. S.; Venkatachalapathy, V.; Johansen, K. M. H.; Jensen, I. J. T.; Kuznetsov, A. Yu; Prytz, Ø.

    2017-03-01

    Using monochromated electron energy loss spectroscopy in a probe-corrected scanning transmission electron microscope we demonstrate band gap mapping in ZnO/ZnCdO thin films with a spatial resolution below 10 nm and spectral precision of 20 meV.

  4. Measurement & Minimization of Mount Induced Strain on Double Crystal Monochromator Crystals

    Science.gov (United States)

    Kelly, J.; Alcock, S. G.

    2013-03-01

Opto-mechanical mounts can cause significant distortions to monochromator crystals and mirrors if not designed or implemented carefully. A slope-measuring profiler, the Diamond-NOM [1], was used to measure the change in tangential slope as a function of crystal clamping configuration and load. A three-point mount was found to exhibit the lowest surface distortion (Diamond Light Source).

  5. Comparison of color LCD and medical-grade monochrome LCD displays in diagnostic radiology.

    Science.gov (United States)

    Geijer, Håkan; Geijer, Mats; Forsberg, Lillemor; Kheddache, Susanne; Sund, Patrik

    2007-06-01

    In diagnostic radiology, medical-grade monochrome displays are usually recommended because of their higher luminance. Standard color displays can be used as a less expensive alternative, but have a lower luminance. The aim of the present study was to compare image quality for these two types of displays. Images of a CDRAD contrast-detail phantom were read by four radiologists using a 2-megapixel (MP) color display (143 cd/m(2) maximum luminance) as well as 2-MP (295 cd/m(2)) and 3-MP monochrome displays. Thirty lumbar spine radiographs were also read by four radiologists using the color and the 2-MP monochrome display in a visual grading analysis (VGA). Very small differences were found between the displays when reading the CDRAD images. The VGA scores were -0.28 for the color and -0.25 for the monochrome display (p = 0.24; NS). It thus seems possible to use color displays in diagnostic radiology provided that grayscale adjustment is used.

  6. Self-seeding scheme with gas monochromator for narrow-bandwidth soft X-ray FELs

    Energy Technology Data Exchange (ETDEWEB)

    Geloni, Gianluca [European XFEL GmbH, Hamburg (Germany); Kocharyan, Vitali; Saldin, Evgeni [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany)

    2011-03-15

Self-seeding schemes, consisting of two undulators with a monochromator in between, aim at reducing the bandwidth of SASE X-ray FELs. We recently proposed a new method of monochromatization exploiting a single crystal in Bragg transmission geometry for self-seeding in the hard X-ray range. Here we consider a possible extension of this method to the soft X-ray range, using a cell filled with resonantly absorbing gas as the monochromator. The transmittance spectrum of the gas exhibits an absorbing resonance with narrow bandwidth. Then, similarly to the hard X-ray case, the temporal waveform of the transmitted radiation pulse is characterized by a long monochromatic wake. In fact, the FEL pulse forces the gas atoms to oscillate in a way consistent with a forward-propagating, monochromatic radiation beam. The radiation power within this wake is much larger than the equivalent shot noise power in the electron bunch. Further on, the monochromatic wake of the radiation pulse is combined with the delayed electron bunch and amplified in the second undulator. The proposed setup is extremely simple, composed of as few as two elements: a gas cell, to be filled with noble gas, and a short magnetic chicane. The installation of the magnetic chicane does not perturb the undulator focusing system and does not interfere with the baseline mode of operation. In this paper we assess the features of the gas monochromator based on the use of He and Ne. We analyze the processes in the monochromator gas cell and outside it, touching upon the performance of the differential pumping system as well. We study the feasibility of using the proposed self-seeding technique to generate narrow-bandwidth soft X-ray radiation in the LCLS-II soft X-ray beamline. We present the conceptual design, technical implementation and expected performance of the gas monochromator self-seeding scheme. (orig.)

  7. A magnetically adsorbed fine adjustment mechanism of the second crystal in a double-crystal monochromator

    Institute of Scientific and Technical Information of China (English)

    CAO Chong-Zhen; GAO Xue-Guan; MA Pei-Sun; WANG Feng-Qin; HE Dong-Qing; HUANG Yu-Ying; LIU Peng

    2005-01-01

In a fine adjustment mechanism of the second crystal in a double-crystal monochromator, a compression spring is usually used as the return-force element, but it often develops permanent deformation after some time. A novel fine adjustment mechanism is put forward, which uses a permanent magnet as the return-force element instead of a compression spring. Its principle and its advantages for adjusting the pitch angle and the roll angle are analyzed, and the structure parameters of the permanent magnet, which is the key part of the fine adjustment mechanism, are optimized. The magnetically adsorbed fine adjustment mechanism has been tested and applied successfully in the double-crystal monochromator of the 4W1B beamline at the Beijing Synchrotron Radiation Facility (BSRF).

  8. Stress mitigation of x-ray beamline monochromators using topography test unit.

    Energy Technology Data Exchange (ETDEWEB)

    Maj, J.; Waldschmidt, G.; Baldo, P.; Macrander, A.; Koshelev, I.; Huang, R.; Maj, L.; Maj, A.; Univ. of Chicago; Northeastern Ohio Univ. Coll. of Medicine; Rosalind Franklin Univ. of Medicine and Science

    2007-01-01

Silicon and diamond monochromators (crystals), often used in the Advanced Photon Source X-ray beamlines, require a good quality surface finish and stress-free installation to ensure optimal performance. The device used to mount the crystal has been shown to be a major contributing source of stress. In this case, an adjustable mounting device is an effective means of reducing stresses and improving the rocking curve to levels much closer to ideal. Analysis by a topography test unit has been used to determine the distribution of stresses and to measure the rocking curve, as well as to create CCD images of the crystal. This paper describes the process of measuring these stresses and manipulating the mounting device and crystal to create a substantially improved monochromator.

  9. Alignment and characterization of the two-stage time delay compensating XUV monochromator

    CERN Document Server

    Eckstein, Martin; Kubin, Markus; Yang, Chung-Hsin; Frassetto, Fabio; Poletto, Luca; Vrakking, Marc J J; Kornilov, Oleg

    2016-01-01

We present the design, implementation and alignment procedure for a two-stage time-delay-compensating monochromator. The setup spectrally filters the radiation of a high-order harmonic generation source, providing wavelength-selected XUV pulses with a bandwidth of 300 to 600 meV in the photon energy range of 3 to 50 eV. XUV pulses as short as 12±3 fs are demonstrated. Transmission of the 400 nm (3.1 eV) light facilitates precise alignment of the monochromator. This alignment strategy, together with the stable mechanical design of the motorized beamline components, enables us to automatically scan the XUV photon energy in pump-probe experiments that require XUV beam pointing stability. The performance of the beamline is demonstrated by the generation of IR-assisted sidebands in XUV photoionization of argon atoms.

  10. Fast continuous energy scan with dynamic coupling of the monochromator and undulator at the DEIMOS beamline.

    Science.gov (United States)

    Joly, L; Otero, E; Choueikani, F; Marteau, F; Chapuis, L; Ohresser, P

    2014-05-01

    In order to improve the efficiency of X-ray absorption data recording, a fast scan method, the Turboscan, has been developed on the DEIMOS beamline at Synchrotron SOLEIL, consisting of a software-synchronized continuous motion of the monochromator and undulator motors. This process suppresses the time loss when waiting for the motors to reach their target positions, as well as software dead-time, while preserving excellent beam characteristics.

  11. Focusing characteristics of diamond crystal x-ray monochromators. An experimental and theoretical comparison

    DEFF Research Database (Denmark)

    Rio, M.S. del; Grübel, G.; Als-Nielsen, J.

    1995-01-01

Perfect crystals in transmission (Laue) geometry can be used effectively for x-ray monochromators, and moreover, perfect Laue crystals show an interesting focusing effect when the incident beam is white and divergent. This focusing is directly dependent on the incident beam divergence and on the ...... from a diamond crystal in Laue geometry, and we analyze and explain the results by comparison with ray-tracing simulations. (C) 1995 American Institute of Physics....

  12. FEA analysis of diamond as IMCA's monochromator crystal

    Energy Technology Data Exchange (ETDEWEB)

    Chrzas, J.; Cimpoes, S.; Ivanov, I.N. [CSRRI, Illinois Institute of Technology, 3301 S. Dearborn Street, Chicago, IL 60616 (United States)

    1996-09-01

A great deal of effort has been made in recent years in the field of undulator high heat load optics, and currently there are several tractable options [Rev. Sci. Instrum. 69, 2792 (1994); Nucl. Instrum. Methods A 266, 517 (1988); Nucl. Instrum. Methods A 239, 555 (1993)]. Diamond crystals offer some attractive options: water as the coolant, the use of established monochromator mechanisms, and a simpler monochromator design as compared to the use of liquid nitrogen or gallium. The use of diamond crystals as the optical elements in a double-crystal monochromator for the IMCA-CAT and MR-CAT ID beamlines has been studied. A first-crystal mounting scheme using an indium-gallium eutectic as the heat transfer medium, developed in collaboration with DND-CAT and M. Hart, will be presented. An FEA analysis of the IMCA-CAT ID beamline arrangement using the APS undulator A as the radiation source will be presented. © 1996 American Institute of Physics.

  13. Optimization of the polyplanar optical display electronics for a monochrome B-52 display

    Energy Technology Data Exchange (ETDEWEB)

    DeSanto, L.

    1998-04-01

The Polyplanar Optical Display (POD) is a unique display screen which can be used with any projection source. The prototype ten-inch display is two inches thick and has a matte black face which allows for high contrast images. The prototype being developed is a form, fit and functional replacement display for the B-52 aircraft, which uses a monochrome ten-inch display. In order to achieve a long lifetime, the new display uses a new 200 mW green solid-state laser (10,000 hr life) at 532 nm as its light source. To produce real-time video, the laser light is modulated by a Digital Light Processing (DLP™) chip manufactured by Texas Instruments (TI). In order to use the solid-state laser as the light source and also fit within the constraints of the B-52 display, the Digital Micromirror Device (DMD™) chip is operated remotely from the Texas Instruments circuit board. In order to achieve increased brightness, a monochrome digitizing interface was investigated. The operation of the DMD™ divorced from the light engine and the interfacing of the DMD™ board with the RS-170 video format specific to the B-52 aircraft will be discussed, including the increased brightness of the monochrome digitizing interface. A brief description of the electronics required to drive the new 200 mW laser is also presented.

  14. Bent Crystal Monochromator with Constant Crystal Center Position and 2-theta Arm for a Dispersive Beamline

    Science.gov (United States)

    Neuenschwander, Regis T.; Tolentino, Hélio C. N.

    2004-05-01

For the new LNLS dispersive beamline, a single-crystal monochromator and a 2-theta arm were designed. The monochromator uses a new bender design assembled on top of an in-vacuum HUBER goniometer. This bender can apply independent torque to each extremity of the crystal, so that changes in the curvature radius do not affect the position of the center of the crystal. It also has a twist mechanism based on eccentric bearings and elastic components. The crystal extremities are clamped to the bender using two water-cooled copper blocks for thermal stabilization. All the bender's movements are driven by vacuum-compatible stepping motors. The vacuum chamber was built with enough space to allow future installation of another bender for crystals with different Bragg planes. The internal mechanics is isolated from the vacuum chamber and can move up and down on three high-precision jacks. The design of the 2-theta arm is based on two linear translation stages and some special bearings. The two stages are equipped with linear encoders, ball screws and linear bearings. With a proper alignment procedure, it is possible to find the equations that control each translation stage in order to obtain a virtual rotation referenced to the monochromator center. The main arm is composed of a steel frame, a 3 m long granite block, a central aluminum optical rail and two auxiliary side rails.

  15. Milli-electronvolt monochromatization of hard X-rays with a sapphire backscattering monochromator

    Science.gov (United States)

    Sergueev, I.; Wille, H.-C.; Hermann, R. P.; Bessas, D.; Shvyd’ko, Yu. V.; Zając, M.; Rüffer, R.

    2011-01-01

    A sapphire backscattering monochromator with 1.1 (1) meV bandwidth for hard X-rays (20–40 keV) is reported. The optical quality of several sapphire crystals has been studied and the best crystal was chosen to work as the monochromator. The small energy bandwidth has been obtained by decreasing the crystal volume impinged upon by the beam and by choosing the crystal part with the best quality. The monochromator was tested at the energies of the nuclear resonances of 121Sb at 37.13 keV, 125Te at 35.49 keV, 119Sn at 23.88 keV, 149Sm at 22.50 keV and 151Eu at 21.54 keV. For each energy, specific reflections with sapphire temperatures in the 150–300 K region were chosen. Applications to nuclear inelastic scattering with these isotopes are demonstrated. PMID:21862862

  16. A bent Laue-Laue monochromator for a synchrotron-based computed tomography system

    CERN Document Server

    Ren, B; Chapman, L D; Ivanov, I; Wu, X Y; Zhong, Z; Huang, X

    1999-01-01

    We designed and tested a two-crystal bent Laue-Laue monochromator for wide, fan-shaped synchrotron X-ray beams for the program multiple energy computed tomography (MECT) at the National Synchrotron Light Source (NSLS). MECT employs monochromatic X-ray beams from the NSLS's X17B superconducting wiggler beamline for computed tomography (CT) with an improved image quality. MECT uses a fixed horizontal fan-shaped beam with the subject's apparatus rotating around a vertical axis. The new monochromator uses two Czochralski-grown Si crystals, 0.7 and 1.4 mm thick, respectively, and with thick ribs on their upper and lower ends. The crystals are bent cylindrically, with the axis of the cylinder parallel to the fan beam, using 4-rod benders with two fixed rods and two movable ones. The bent-crystal feature of the monochromator resolved the difficulties we had had with the flat Laue-Laue design previously used in MECT, which included (a) inadequate beam intensity, (b) excessive fluctuations in beam intensity, and (c) i...

  17. Resolution enhancement in transmission electron microscopy with 60-kV monochromated electron source

    Energy Technology Data Exchange (ETDEWEB)

    Morishita, Shigeyuki; Mukai, Masaki; Sawada, Hidetaka [JEOL Ltd., 3-1-2 Musashino, Akishima, Tokyo 196-8558 (Japan); Suenaga, Kazutomo [National Institute of Advanced Industrial Science and Technology (AIST), 1-1-1 Higashi, Tsukuba, Ibaraki 305-8565 (Japan)

    2016-01-04

    Transmission electron microscopy (TEM) at low accelerating voltages is useful to obtain images with low irradiation damage. For a low accelerating voltage, linear information transfer, which determines the resolution for observation of single-layered materials, is largely limited by defocus spread, which improves when a narrow energy spread is used in the electron source. In this study, we have evaluated the resolution of images obtained at 60 kV by TEM performed with a monochromated electron source. The defocus spread has been evaluated by comparing diffractogram tableaux from TEM images obtained under nonmonochromated and monochromated illumination. The information limits for different energy spreads were precisely measured by using diffractograms with a large beam tilt. The result shows that the information limit reaches 0.1 nm with an energy width of 0.10 eV. With this monochromated source and a higher-order aberration corrector, we have obtained images of single carbon atoms in a graphene sheet by TEM at 60 kV.

  18. High heat flux x-ray monochromators: What are the limits?

    Energy Technology Data Exchange (ETDEWEB)

    Rogers, C.S.

    1997-06-01

First optical elements at third-generation, hard x-ray synchrotrons, such as the Advanced Photon Source (APS), are subjected to immense heat fluxes. The optical elements include crystal monochromators, multilayers and mirrors. This paper presents a mathematical model of the thermal strain of a three-layer (faceplate, heat exchanger, and baseplate), cylindrical optic subjected to a narrow beam of uniform heat flux. This model is used to calculate the strain gradient of a liquid-gallium-cooled x-ray monochromator previously tested on an undulator at the Cornell High Energy Synchrotron Source (CHESS). The resulting thermally broadened rocking curves are calculated and compared to experimental data. The calculated rocking curve widths agree to within a few percent of the measured values over the entire current range tested (0 to 60 mA). The thermal strain gradient under the beam footprint varies linearly with the heat flux and the ratio of the thermal expansion coefficient to the thermal conductivity. The strain gradient is insensitive to the heat exchanger properties and the optic geometry. This formulation provides direct insight into the governing parameters, greatly reduces the analysis time, and provides a measure of the ultimate performance of a given monochromator.
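The scaling above — thermal strain gradient proportional to the heat flux times α/k — suggests a simple material figure of merit. A sketch with representative, approximate room-temperature and cryogenic property values (the numbers below are illustrative, not taken from the record):

```python
# Figure of merit for thermal distortion of a monochromator crystal:
# the strain gradient under the footprint scales as q * (alpha / k),
# so a smaller alpha/k means a less distorted optic at a given flux q.
# Property values are approximate and for illustration only.
materials = {
    "Si (300 K)":      (2.6e-6, 150.0),   # alpha [1/K], k [W/(m K)]
    "Si (~80 K)":      (0.5e-6, 1300.0),  # cryo-cooled Si: |alpha| small, k large
    "Diamond (300 K)": (1.0e-6, 2000.0),
}

for name, (alpha, k) in sorted(materials.items(),
                               key=lambda item: item[1][0] / item[1][1]):
    print(f"{name}: alpha/k = {alpha / k:.2e} m/W")
```

This is why cryogenically cooled silicon and water-cooled diamond dominate high-heat-load monochromator designs: both cut α/k by more than an order of magnitude relative to room-temperature silicon.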

  19. Study of a scattering shield in a high heat load monochromator

    Energy Technology Data Exchange (ETDEWEB)

    Huang, Rong, E-mail: rh66@cornell.edu [IMCA-CAT, Hauptman-Woodward Institute (United States); Meron, Mati [CARS, The University of Chicago (United States)

    2013-07-11

    The techniques for the cooling of the first crystal of a monochromator are by now mature and are used routinely to deal with the heat loads resulting from the intense beams generated by third generation synchrotron insertion device sources. However, the thermal stability of said monochromators, which crucially depends on proper shielding of X-ray scattering off the first crystal, remains a serious consideration. This will become even more so in the near future, as many synchrotron facilities are upgrading to higher beam currents and energies. During a recent upgrade of the 17-ID beamline at the APS it was recognized that accurate simulation of the spatial distribution of the power scattered off the first crystal was essential for the understanding and remediation of the observed large temperature increase of the first crystal's scattering shield. The calculation is complex, due to the broad energy spectrum of the undulator and the prevalence of multiple X-ray scattering events within the bulk of the crystal, thus the Monte Carlo method is the natural tool for such a task. A successful simulation was developed, for the purpose of the 17-ID upgrade, and used to significantly improve the design of the first crystal's scattering shield. -- Highlights: • We use the Monte Carlo method to simulate X-ray scattering from monochromator crystals. • Scattered X-ray power on each surface of the scattering shield has been calculated. • Overheating on the original shield is well explained with simulated scattering power. • The thermal stability of the modified scattering shield is satisfactory.

  20. 1-40-keV fixed-exit monochromator for a wafer mapping TXRF facility

    Science.gov (United States)

    Comin, Fabio; Apostolo, G.; Freund, Andreas K.; Mangiagalli, P.; Navizet, M.; Troxel, C. L.

    1998-12-01

An industrial facility for the mapping of trace impurities on the surface of 300 mm silicon wafers will be commissioned at the end of 1998. The elements to be detected range from Na to Hg with a target routine detection limit of 10⁸ atoms/cm². The monochromator of the facility plays a central role and fulfills the following requirements: ease of operation and fast tuning (one single motor); extended energy range (1–40 keV, covered by a fixed-exit Si(111) channel-cut and a multilayer pair); smooth and reliable running (water cooling, even in the powerful ESRF undulator beams at high energies). The mechanical structure of the monochromator is based on well-established concepts: an external goniometer transfers the main rotation to the in-vacuum plateau via a hollow differentially pumped feed-through. The optical arrangement shows some novelties: the plateau can be cooled either by water or liquid nitrogen, and it holds the convex-concave machined Si(111) channel-cut for fixed-exit performance. The shape of the machined crystal surfaces also helps to spread the power density of the beam over the silicon surface. A set of two identical multilayers is also mounted on the plateau, and the transition from Si(111) crystal to multilayer operation is performed by rotating the wafer main axis by about 180 degrees. The whole facility is centered around three main components: the monochromator, the wafer handling robots and the two linear arrays of solid state fluorescence detectors.

  1. Comparison of the commercial color LCD and the medical monochrome LCD using randomized object test patterns.

    Directory of Open Access Journals (Sweden)

    Jay Wu

    Workstations and electronic display devices in a picture archiving and communication system (PACS) provide a convenient and efficient platform for medical diagnosis. The performance of display devices has to be verified to ensure that image quality is not degraded. In this study, we designed a set of randomized object test patterns (ROTPs) consisting of randomly located spheres with various image characteristics to evaluate the performance of a 2.5 mega-pixel (MP) commercial color LCD and a 3 MP diagnostic monochrome LCD in several aspects, including the contrast, resolution, point spread effect, and noise. The ROTPs were then merged into 120 abdominal CT images. Five radiologists were invited to review the CT images, and receiver operating characteristic (ROC) analysis was carried out using a five-point rating scale. In the high background patterns of ROTPs, the sensitivity performance was comparable between both monitors in terms of contrast and resolution, whereas, in the low background patterns, the performance of the commercial color LCD was significantly poorer than that of the diagnostic monochrome LCD in all aspects. The average area under the ROC curve (AUC) for reviewing abdominal CT images was 0.717±0.0200 and 0.740±0.0195 for the color monitor and the diagnostic monitor, respectively. The observation time (OT) was 145±27.6 min and 127±19.3 min, respectively. No significant differences appeared in AUC (p = 0.265) and OT (p = 0.07). The overall results indicate that ROTPs can be implemented as a quality control tool to evaluate the intrinsic characteristics of display devices. Although there is still a gap in technology between different types of LCDs, commercial color LCDs could replace diagnostic monochrome LCDs as a platform for reviewing abdominal CT images after monitor calibration.

  2. Comparison of the commercial color LCD and the medical monochrome LCD using randomized object test patterns.

    Science.gov (United States)

    Wu, Jay; Wu, Tung H; Han, Rou P; Chang, Shu J; Shih, Cheng T; Sun, Jing Y; Hsu, Shih M

    2012-01-01

    Workstations and electronic display devices in a picture archiving and communication system (PACS) provide a convenient and efficient platform for medical diagnosis. The performance of display devices has to be verified to ensure that image quality is not degraded. In this study, we designed a set of randomized object test patterns (ROTPs) consisting of randomly located spheres with various image characteristics to evaluate the performance of a 2.5 mega-pixel (MP) commercial color LCD and a 3 MP diagnostic monochrome LCD in several aspects, including the contrast, resolution, point spread effect, and noise. The ROTPs were then merged into 120 abdominal CT images. Five radiologists were invited to review the CT images, and receiver operating characteristic (ROC) analysis was carried out using a five-point rating scale. In the high background patterns of ROTPs, the sensitivity performance was comparable between both monitors in terms of contrast and resolution, whereas, in the low background patterns, the performance of the commercial color LCD was significantly poorer than that of the diagnostic monochrome LCD in all aspects. The average area under the ROC curve (AUC) for reviewing abdominal CT images was 0.717±0.0200 and 0.740±0.0195 for the color monitor and the diagnostic monitor, respectively. The observation time (OT) was 145±27.6 min and 127±19.3 min, respectively. No significant differences appeared in AUC (p = 0.265) and OT (p = 0.07). The overall results indicate that ROTPs can be implemented as a quality control tool to evaluate the intrinsic characteristics of display devices. Although there is still a gap in technology between different types of LCDs, commercial color LCDs could replace diagnostic monochrome LCDs as a platform for reviewing abdominal CT images after monitor calibration.

  3. A New Flexible Monochromator Setup for Quick Scanning X-ray Absorption Spectroscopy

    Energy Technology Data Exchange (ETDEWEB)

    Stotzel, J.; Lutzenkirchen-Hecht, D.; Frahm, R.

    2010-01-01

    A new monochromator setup for quick scanning x-ray absorption spectroscopy in the subsecond time regime is presented. Novel driving mechanics allow changing the energy range of the acquired spectra by remote control during data acquisition for the first time, thus dramatically increasing the flexibility and convenience of this method. Completely new experiments are feasible because the time resolution, edge energy, and energy range of the acquired spectra can be changed continuously within seconds without breaking the vacuum of the monochromator vessel and even without interrupting the measurements. The advanced mechanics are explained in detail and the performance is characterized with x-ray absorption spectra of pure metal foils. The energy scale was determined by a fast and accurate angular encoder system measuring the Bragg angle of the monochromator crystal with subarcsecond resolution. The Bragg angle range covered by the oscillating crystal can currently be changed from 0° to 3.0° within 20 s, while the mechanics are capable of moving at frequencies of up to ca. 35 Hz, leading to a time resolution of ca. 14 ms per spectrum. A new software package allows programmed scan sequences to be performed, enabling the user to measure stepwise with alternating parameters in predefined time segments. Thus, e.g., switching between edges scanned with the same energy range is possible within one in situ experiment, while the time resolution can be varied simultaneously. This progress makes the new system extremely user friendly and efficient for time-resolved x-ray absorption spectroscopy at synchrotron radiation beamlines.

  4. A new flexible monochromator setup for quick scanning x-ray absorption spectroscopy

    Energy Technology Data Exchange (ETDEWEB)

    Stoetzel, J.; Luetzenkirchen-Hecht, D.; Frahm, R. [Fachbereich C, Physik, Bergische Universitaet Wuppertal, Gaussstr. 20, 42097 Wuppertal (Germany)

    2010-07-15

    A new monochromator setup for quick scanning x-ray absorption spectroscopy in the subsecond time regime is presented. Novel driving mechanics allow changing the energy range of the acquired spectra by remote control during data acquisition for the first time, thus dramatically increasing the flexibility and convenience of this method. Completely new experiments are feasible because the time resolution, edge energy, and energy range of the acquired spectra can be changed continuously within seconds without breaking the vacuum of the monochromator vessel and even without interrupting the measurements. The advanced mechanics are explained in detail and the performance is characterized with x-ray absorption spectra of pure metal foils. The energy scale was determined by a fast and accurate angular encoder system measuring the Bragg angle of the monochromator crystal with subarcsecond resolution. The Bragg angle range covered by the oscillating crystal can currently be changed from 0° to 3.0° within 20 s, while the mechanics are capable of moving at frequencies of up to ca. 35 Hz, leading to a time resolution of ca. 14 ms per spectrum. A new software package allows programmed scan sequences to be performed, enabling the user to measure stepwise with alternating parameters in predefined time segments. Thus, e.g., switching between edges scanned with the same energy range is possible within one in situ experiment, while the time resolution can be varied simultaneously. This progress makes the new system extremely user friendly and efficient for time-resolved x-ray absorption spectroscopy at synchrotron radiation beamlines.
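
The mapping between the oscillating crystal's Bragg-angle range and the scanned energy range follows Bragg's law, E = hc/(2d·sin θ). A minimal sketch, assuming a Si(111) monochromator crystal (the record does not say which crystal is used):

```python
import math

HC_KEV_ANGSTROM = 12.39842          # h*c in keV·Å
D_SI_111 = 5.43102 / math.sqrt(3)   # Si(111) d-spacing in Å (cubic a = 5.43102 Å)

def bragg_energy_kev(theta_deg, d_angstrom=D_SI_111):
    """Photon energy selected by a crystal at Bragg angle theta:
    E = h*c / (2 * d * sin(theta))."""
    return HC_KEV_ANGSTROM / (2.0 * d_angstrom * math.sin(math.radians(theta_deg)))

# An oscillating crystal sweeping a small Bragg-angle range sweeps the
# corresponding energy range, e.g. from theta = 15.0° down to 14.0°:
e_low, e_high = bragg_energy_kev(15.0), bragg_energy_kev(14.0)
```

Sweeping θ over a few degrees, as described above, sweeps E over the interval given by this formula; a larger Bragg angle selects a lower energy.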

  5. A new gradient monochromator for the IN13 back-scattering spectrometer

    Science.gov (United States)

    Ciampolini, L.; Bove, L. E.; Mondelli, C.; Alianelli, L.; Labbe-Lavigne, S.; Natali, F.; Bée, M.; Deriu, A.

    2005-06-01

    We present new McStas simulations of the back-scattering thermal neutron spectrometer IN13 to evaluate the advantages of a new temperature gradient monochromator relative to a conventional one. The simulations show that a flux gain of up to a factor of 7 can be obtained with just a 10% loss in energy resolution and a 20% increase in beam spot size at the sample. The results also indicate that a moderate applied temperature gradient (ΔT ≃ 16 K) is sufficient to obtain this significant flux gain.

  6. A new gradient monochromator for the IN13 back-scattering spectrometer

    Energy Technology Data Exchange (ETDEWEB)

    Ciampolini, L. [Istituto Nazionale per la Fisica della Materia, Unita di Parma (Italy)]. E-mail: ciampolinil@ieee.org; Bove, L.E. [Istituto Nazionale per la Fisica della Materia, OGG, ILL Grenoble (France); Mondelli, C. [Istituto Nazionale per la Fisica della Materia, OGG, ILL Grenoble (France); Alianelli, L. [Istituto Nazionale per la Fisica della Materia, OGG, ILL Grenoble (France); Institut Laue Langevin, Grenoble (France); Labbe-Lavigne, S. [CNRS, Grenoble (France); Natali, F. [Istituto Nazionale per la Fisica della Materia, OGG, ILL Grenoble (France); Bee, M. [Universite Joseph Fourier, Grenoble (France); Deriu, A. [Istituto Nazionale per la Fisica della Materia, Unita di Parma (Italy); Dipartimento di Fisica, Universita di Parma (Italy)

    2005-06-01

    We present new McStas simulations of the back-scattering thermal neutron spectrometer IN13 to evaluate the advantages of a new temperature gradient monochromator relative to a conventional one. The simulations show that a flux gain of up to a factor of 7 can be obtained with just a 10% loss in energy resolution and a 20% increase in beam spot size at the sample. The results also indicate that a moderate applied temperature gradient (ΔT ≈ 16 K) is sufficient to obtain this significant flux gain.

  7. Synchronous scanning of undulator gap and monochromator for XAFS measurements in soft x-ray region.

    Science.gov (United States)

    Tanaka, T; Matsubayashi, N; Imamura, M; Shimada, H

    2001-03-01

    Synchronous scanning of the undulator gap and a monochromator was performed to obtain smooth incident X-ray profiles suitable for XAFS measurements. By changing the gap from 150 mm (B = 0.12 T) to 140 mm (B = 0.15 T) with the use of the 3rd to 11th harmonic peaks, soft X-rays with energies from 200 eV to 1200 eV were obtained. The smooth profile of the incident X-rays enabled high-quality measurement of XANES and EXAFS spectra in the soft X-ray region. Possible improvements to the synchronous scanning system are discussed.
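
The harmonic energies tracked while the gap changes follow the standard on-axis undulator equation; a sketch, in which the ring energy and undulator period are illustrative placeholders, not the actual beamline parameters:

```python
def undulator_harmonic_ev(n, ring_energy_gev, period_cm, b_field_t):
    """On-axis photon energy of the n-th undulator harmonic:
        K = 0.934 * period[cm] * B[T]
        E_n[keV] = 0.9496 * n * E_ring[GeV]**2 / (period[cm] * (1 + K**2 / 2))
    Opening the gap lowers B, lowers K, and raises every harmonic energy."""
    k = 0.934 * period_cm * b_field_t
    return 1000.0 * 0.9496 * n * ring_energy_gev**2 / (period_cm * (1.0 + k * k / 2.0))

# Placeholder parameters: a 1.5 GeV ring and a 10 cm undulator period.
e3_open   = undulator_harmonic_ev(3, 1.5, 10.0, 0.12)  # wider gap, weaker field
e3_closed = undulator_harmonic_ev(3, 1.5, 10.0, 0.15)  # narrower gap, stronger field
```

Synchronous scanning keeps the monochromator energy on one of these harmonic peaks as the gap (and hence B) is varied.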

  8. Ultra high energy resolution focusing monochromator for inelastic X-ray scattering spectrometer

    CERN Document Server

    Suvorov, A; Chubar, O; Cai, Y Q

    2015-01-01

    A further development of a focusing monochromator concept for X-ray energy resolution of 0.1 meV and below is presented. Theoretical analysis of several optical layouts based on this concept was supported by numerical simulations performed in the "Synchrotron Radiation Workshop" software package using the physical-optics approach and careful modeling of partially-coherent synchrotron (undulator) radiation. Along with the energy resolution, the spectral shape of the energy resolution function was investigated. It was shown that under certain conditions the decay of the resolution function tails can be faster than that of the Gaussian function.

  9. A soft X-ray plane-grating monochromator optimized for elliptical dipole radiation from modern sources

    Energy Technology Data Exchange (ETDEWEB)

    Kachel, Torsten, E-mail: torsten.kachel@helmholtz-berlin.de; Eggenstein, Frank [Helmholtz-Zentrum Berlin für Materialien und Energie, Albert-Einstein-Strasse 15, 12489 Berlin (Germany); Follath, Rolf [Paul Scherrer Institute, 5232 Villigen (Switzerland)

    2015-07-14

    The utilization of elliptical dipole radiation in a collimated plane-grating monochromator at BESSY II is described. A new yet well-proven way of making elliptically polarized dipole radiation from the BESSY II storage ring applicable to the SX700-type collimated plane-grating monochromator PM3 is described. It is shown that, due to the limited vertical acceptance of the grating, a simple use of vertical apertures is not possible in this case. Instead, deflecting the beam upwards or downwards by rotating the vertically collimating toroidal mirror M1 around the light axis leads to excellent performance. The resulting detuning of the photon energy can be taken into account by a readjustment of the monochromator-internal plane mirror M2. The energy resolution of the beamline is not affected by the non-zero ‘roll’ of the collimating mirror.

  10. Optimisation and fabrication of a composite pyrolytic graphite monochromator for the Pelican instrument at the ANSTO OPAL reactor

    Science.gov (United States)

    Freund, A. K.; Yu, D. H.

    2011-04-01

    The triple monochromator for the TOF neutron spectrometer Pelican at ANSTO has been fully optimised in terms of overall performance, including the determination of the optimum thickness of the pyrolytic graphite crystals. A total of 24 composite crystals were designed and fabricated. Because crystals with the calculated optimum thickness of 1.3 mm and length of 15 cm are not available commercially, they were produced by cleaving and soldering with indium. An extensive characterisation of the crystals using X-ray and neutron diffraction was conducted before and after the cleaving and bonding processes. The results proved that no damage was introduced during fabrication and showed that the design goals were fully met. The measured peak reflectivity and rocking-curve widths were in excellent agreement with theory. In addition to the superior efficiency of the triple monochromator achieved by this novel approach, the amount of crystal material required could be reduced by one-third.

  11. Self-seeding scheme with gas monochromator for narrow-bandwidth soft X-ray FELs

    CERN Document Server

    Geloni, Gianluca; Saldin, Evgeni

    2011-01-01

    Self-seeding schemes, consisting of two undulators with a monochromator in between, aim at reducing the bandwidth of SASE X-ray FELs. We recently proposed to use a new method of monochromatization exploiting a single crystal in Bragg-transmission geometry for self-seeding in the hard X-ray range. Here we consider a possible extension of this method to the soft X-ray range using a cell filled with resonantly absorbing gas as monochromator. The transmittance spectrum in the gas exhibits an absorbing resonance with narrow bandwidth. Then, similarly to the hard X-ray case, the temporal waveform of the transmitted radiation pulse is characterized by a long monochromatic wake. In fact, the FEL pulse forces the gas atoms to oscillate in a way consistent with a forward-propagating, monochromatic radiation beam. The radiation power within this wake is much larger than the equivalent shot noise power in the electron bunch. Further on, the monochromatic wake of the radiation pulse is combined with the delayed electron b...

  12. YB66: a new soft X-ray monochromator for synchrotron radiation

    CERN Document Server

    Wong, J; Rowen, M; Schäfers, F; Müller, B R; Rek, Z U

    1999-01-01

    For pt. I see Nucl. Instrum. Methods Phys. Res., vol. A291, p. 243-8, 1990. YB66, a complex boron-rich man-made crystal, has been singled out as a potential monochromator material to disperse synchrotron soft X-rays in the 1-2 keV region. Results of a series of systematic property characterizations pertinent to this application are presented in this paper. These include Laue diffraction patterns and high-precision lattice-constant determination, etch rate, stoichiometry, thermal expansion, soft X-ray reflectivity and rocking-curve measurements, thermal load effects on monochromator performance, and the nature of intrinsic positive glitches and their reduction. The 004 reflection of YB66 has a reflectance of ~3 in this spectral region. The width of the rocking curve varies from 0.25 eV at 1.1 keV to 1.0 eV at 2 keV, which is a factor of two better than that of beryl(1010) in the same energy range, and enables measurements of high-resolution XANES spectra at the Mg, Al and Si K-edges. The thermal bump on the...

  13. An independent survey of monochrome and color low light level TV cameras

    Science.gov (United States)

    Preece, Bradley L.; Tomkinson, David M.; Reynolds, Joseph P.

    2015-05-01

    Using the latest models from the U.S. Army Night Vision Electronic Sensors Directorate (NVESD), a survey of monochrome and color imaging systems at daylight and low light levels is conducted. Each camera system is evaluated and compared under several different assumptions, such as equivalent field of view with equal and variable f/#, common lens focal length and aperture, with high dynamic range comparisons and over several light levels. The modeling is done by use of the Targeting Task Performance (TTP) metric using the latest version of the Night Vision Integrated Performance Model (NV-IPM). The comparison is performed over the V parameter, the main output of the TTP metric. Probability of identification (PID) versus range predictions are a direct non-linear mapping of the V parameter as a function of range. Finally, a comparison between the performance of a Bayer-filtered color camera, the Bayer-filtered color camera with the IR block filter removed, and a monochrome version of the same camera is also conducted.
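
The "direct non-linear mapping" from the V parameter to PID referred to above is conventionally the NVESD target transfer probability function; a sketch assuming that published form, with V50 (the V value giving 50% probability) as the task-difficulty parameter:

```python
def probability_of_id(v, v50):
    """Target transfer probability function commonly used with the TTP metric:
        PID = (V/V50)**E / (1 + (V/V50)**E), with E = 1.51 + 0.24 * (V/V50).
    V is the TTP-metric output at a given range; V50 sets task difficulty."""
    ratio = v / v50
    e = 1.51 + 0.24 * ratio
    return ratio**e / (1.0 + ratio**e)

# PID vs range follows by evaluating V(range) from the sensor model and
# passing each value through this function.
print(probability_of_id(10.0, 10.0))  # → 0.5 at V = V50, by construction
```

Since V falls with range, this monotone mapping turns the V-versus-range curve directly into the PID-versus-range prediction described in the abstract.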

  14. Vibration measurements of high-heat-load monochromators for DESY PETRA III extension

    Energy Technology Data Exchange (ETDEWEB)

    Kristiansen, Paw, E-mail: paw.kristiansen@fmb-oxford.com [FMB Oxford Ltd, Unit 1 Ferry Mills, Oxford OX2 0ES (United Kingdom); Horbach, Jan; Döhrmann, Ralph; Heuer, Joachim [DESY, Deutsches Elektronen-Synchrotron Hamburg, Notkestrasse 85, 22607 Hamburg (Germany)

    2015-05-09

    Vibration measurements of a cryocooled double-crystal monochromator are presented. The origins of the vibrations are identified. The minimum achieved vibration of the relative pitch between the two crystals is 48 nrad RMS and the minimum achieved absolute vibration of the second crystal is 82 nrad RMS. The requirement for vibrational stability of beamline optics continues to evolve rapidly to comply with the demands created by the improved brilliance of the third-generation low-emittance storage rings around the world. The challenge is to quantify the performance of the instrument before it is installed at the beamline. In this article, measurement techniques are presented that directly and accurately measure (i) the relative vibration between the two crystals of a double-crystal monochromator (DCM) and (ii) the absolute vibration of the second-crystal cage of a DCM. Excluding a synchrotron beam, the measurements are conducted under in situ conditions, connected to a liquid-nitrogen cryocooler. The investigated DCM utilizes a direct-drive (no gearing) goniometer for the Bragg rotation. The main causes of the DCM vibration are found to be the servoing of the direct-drive goniometer and the flexibility in the crystal cage motion stages. It is found that the investigated DCM can offer relative pitch vibration down to 48 nrad RMS (capacitive sensors, 0–5 kHz bandwidth) and absolute pitch vibration down to 82 nrad RMS (laser interferometer, 0–50 kHz bandwidth), with the Bragg axis brake engaged.

  15. Italian panoramic monochromator for the THEMIS telescope: the first results and instrument evaluation

    Science.gov (United States)

    Cavallini, Fabio; Berrilli, Francesco; Caccin, Bruno; Cantarano, Sergio; Ceppatelli, Guido; Egidi, Alberto; Righini, Alberto

    1998-07-01

    We briefly describe the design and characteristics of the Italian Panoramic Monochromator installed at the focal plane of the THEMIS telescope, built in Izaña by a joint venture of the French and Italian National Research Councils. The Panoramic Monochromator is essentially a narrow-band filter (≈22 mÅ bandwidth) tunable across the visible spectrum for quasi-simultaneous bidimensional spectrometry of the solar atmosphere. The narrow bandwidth is obtained by using a non-standard birefringent filter and a Fabry-Pérot interferometer mounted in series. This assembly combines the spectral purity of one channel of the Fabry-Pérot interferometer with a very large free spectral range. Moreover, the spectral stability depends on the interferometer, whose environment may be carefully controlled. The design of this instrument is not new, but it has only now become possible to build it, thanks to the development of servo-controlled Fabry-Pérot interferometers that are stable in time and can easily be tuned. The system seems to perform well: it is stable in wavelength, and the spectral pass band and stray light are within the expected values, as deduced from very preliminary tests performed at the THEMIS telescope and in Arcetri (Firenze) at the 'G. B. Donati' solar tower.

  16. High-resolution monochromator for iron nuclear resonance vibrational spectroscopy of biological samples

    Science.gov (United States)

    Yoda, Yoshitaka; Okada, Kyoko; Wang, Hongxin; Cramer, Stephen P.; Seto, Makoto

    2016-12-01

    A new high-resolution monochromator for 14.4-keV X-rays has been designed and developed for Fe nuclear resonance vibrational spectroscopy of biological samples. In addition to high resolution, higher flux and stability are especially important for measuring biological samples because of the very weak signals produced by the low concentrations of Fe-57. Calculations show that adopting an asymmetric Ge reflection for the first crystal of the three-bounce high-resolution monochromator yields a 24% increase in flux while maintaining a resolution better than 0.9 meV. The accompanying 20% increase in exit beam size is acceptable for our biological applications. The higher throughput of the new design has been experimentally verified. A fine rotation mechanism combining a weak-link hinge with a piezoelectric actuator was used to control the photon energy of the monochromatic beam. The resulting stability is sufficient to preserve the intrinsic resolution.

  17. A coded structured light system based on primary color stripe projection and monochrome imaging.

    Science.gov (United States)

    Barone, Sandro; Paoli, Alessandro; Razionale, Armando Viviano

    2013-10-14

    Coded Structured Light techniques represent one of the most attractive research areas within the field of optical metrology. The coding procedures are typically based on projecting either a single pattern or a temporal sequence of patterns to provide 3D surface data. In this context, multi-slit or stripe colored patterns may be used with the aim of reducing the number of projected images. However, color imaging sensors require the use of calibration procedures to address crosstalk effects between different channels and to reduce the chromatic aberrations. In this paper, a Coded Structured Light system has been developed by integrating a color stripe projector and a monochrome camera. A discrete coding method, which combines spatial and temporal information, is generated by sequentially projecting and acquiring a small set of fringe patterns. The method allows the concurrent measurement of geometrical and chromatic data by exploiting the benefits of using a monochrome camera. The proposed methodology has been validated by measuring nominal primitive geometries and free-form shapes. The experimental results have been compared with those obtained by using a time-multiplexing gray code strategy.

  18. A Coded Structured Light System Based on Primary Color Stripe Projection and Monochrome Imaging

    Directory of Open Access Journals (Sweden)

    Armando Viviano Razionale

    2013-10-01

    Coded Structured Light techniques represent one of the most attractive research areas within the field of optical metrology. The coding procedures are typically based on projecting either a single pattern or a temporal sequence of patterns to provide 3D surface data. In this context, multi-slit or stripe colored patterns may be used with the aim of reducing the number of projected images. However, color imaging sensors require the use of calibration procedures to address crosstalk effects between different channels and to reduce the chromatic aberrations. In this paper, a Coded Structured Light system has been developed by integrating a color stripe projector and a monochrome camera. A discrete coding method, which combines spatial and temporal information, is generated by sequentially projecting and acquiring a small set of fringe patterns. The method allows the concurrent measurement of geometrical and chromatic data by exploiting the benefits of using a monochrome camera. The proposed methodology has been validated by measuring nominal primitive geometries and free-form shapes. The experimental results have been compared with those obtained by using a time-multiplexing gray code strategy.

  19. Optimization of a constrained linear monochromator design for neutral atom beams.

    Science.gov (United States)

    Kaltenbacher, Thomas

    2016-04-01

    A focused ground-state, neutral atom beam, exploiting its de Broglie wavelength by means of atom optics, is used for neutral atom microscopy imaging. Employing Fresnel zone plates as a lens for these beams is a well-established microscopy technique. To date, even under favorable beam-source conditions, a minimum focal spot size of only slightly below 1 µm has been reached. This limitation is essentially set by the intrinsic spectral purity of the beam in combination with the chromatic aberration of the diffraction-based zone plate. It is therefore important to enhance the monochromaticity of the beam, enabling a higher spatial resolution, preferably below 100 nm. We propose to increase the monochromaticity of a neutral atom beam by means of a so-called linear monochromator set-up - a Fresnel zone plate in combination with a pinhole aperture - in order to gain more than one order of magnitude in spatial resolution. This configuration is known in X-ray microscopy, where it has proven useful, but it has not been applied to neutral atom beams. The main result of this work is a set of model-based optimal design parameters for this linear monochromator set-up followed by a second zone plate for focusing. The optimization simultaneously minimized the focal spot size and maximized the centre-line intensity at the detector position. The results presented in this work are for, but not limited to, a neutral helium atom beam.
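
The de Broglie wavelength the zone plate disperses, and the chromatic focal spread that motivates the monochromator, can be estimated directly; the beam speed below is an illustrative value for a room-temperature helium beam, not a figure from the paper:

```python
H = 6.62607015e-34     # Planck constant, J*s
M_HE = 6.6464731e-27   # mass of a helium-4 atom, kg

def de_broglie_wavelength_m(speed_m_s, mass_kg=M_HE):
    """lambda = h / (m * v): the matter wavelength a zone plate diffracts."""
    return H / (mass_kg * speed_m_s)

def chromatic_focal_spread(focal_length_m, dv_over_v):
    """For a zone plate, f is proportional to 1/lambda, i.e. to v, so a
    relative velocity spread dv/v smears the focal length by about
    df = f * dv/v, which is the chromatic aberration limiting the spot."""
    return focal_length_m * dv_over_v

lam = de_broglie_wavelength_m(1760.0)   # ~0.057 nm for an illustrative He beam
df = chromatic_focal_spread(0.5, 0.01)  # 1% velocity spread on a 0.5 m focus
```

Tightening dv/v with the pinhole monochromator shrinks df proportionally, which is why better monochromaticity translates into a smaller achievable spot.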

  20. Inductively coupled plasma-atomic emission spectroscopy: a computer controlled, scanning monochromator system for the rapid determination of the elements

    Energy Technology Data Exchange (ETDEWEB)

    Floyd, M.A.

    1980-03-01

    A computer controlled, scanning monochromator system specifically designed for the rapid, sequential determination of the elements is described. The monochromator is combined with an inductively coupled plasma excitation source so that elements at major, minor, trace, and ultratrace levels may be determined, in sequence, without changing experimental parameters other than the spectral line observed. A number of distinctive features not found in previously described versions are incorporated into the system here described. Performance characteristics of the entire system and several analytical applications are discussed.

  1. A diffracted-beam monochromator for long linear detectors in X-ray diffractometers with Bragg-Brentano parafocusing geometry.

    Science.gov (United States)

    van der Pers, N M; Hendrikx, R W A; Delhez, R; Böttger, A J

    2013-04-01

    A new diffracted-beam monochromator has been developed for Bragg-Brentano X-ray diffractometers equipped with a linear detector. The monochromator consists of a cone-shaped highly oriented pyrolytic graphite (HOPG) crystal oriented out of the equatorial plane such that the parafocusing geometry is preserved over the whole opening angle of the linear detector. In our standard setup a maximum wavelength discrimination of 3% is achieved with an overall efficiency of 20% and a small decrease in angular resolution of only 0.02 °2θ. In principle, an energy resolution as low as 1.5% can be achieved.

  2. Self-healing diffusion quantum Monte Carlo algorithms: methods for direct reduction of the fermion sign error in electronic structure calculations

    Energy Technology Data Exchange (ETDEWEB)

    Reboredo, F A; Hood, R Q; Kent, P C

    2009-01-06

    We develop a formalism and present an algorithm for optimization of the trial wave function used in fixed-node diffusion quantum Monte Carlo (DMC) methods. The formalism is based on the DMC mixed estimator of the ground-state probability density. We take advantage of a basic property of the walker configuration distribution generated in a DMC calculation to (i) project out a multi-determinant expansion of the fixed-node ground-state wave function and (ii) define a cost function that relates the interacting-ground-state-fixed-node and the non-interacting trial wave functions. We show that (a) locally smoothing out the kink of the fixed-node ground-state wave function at the node generates a new trial wave function with better nodal structure, and we argue that (b) the noise in the fixed-node wave function resulting from finite sampling plays a beneficial role, allowing the nodes to adjust towards those of the exact many-body ground state in a simulated-annealing-like process. Based on these principles, we propose a method to improve both single-determinant and multi-determinant expansions of the trial wave function. The method can be generalized to other wave function forms such as Pfaffians. We test the method in a model system where benchmark configuration interaction calculations can be performed and most components of the Hamiltonian are evaluated analytically. Comparing the DMC calculations with the exact solutions, we find that the trial wave function is systematically improved. The overlap of the optimized trial wave function and the exact ground state converges to 100% even starting from wave functions orthogonal to the exact ground state. Similarly, the DMC total energy and density converge to the exact solutions for the model. In the optimization process we find an optimal non-interacting nodal potential of density-functional-like form whose existence was predicted in a previous publication [Phys. Rev. B 77, 245110 (2008)]. Tests of the method are

  3. Multispectral integral imaging acquisition and processing using a monochrome camera and a liquid crystal tunable filter.

    Science.gov (United States)

    Latorre-Carmona, Pedro; Sánchez-Ortiga, Emilio; Xiao, Xiao; Pla, Filiberto; Martínez-Corral, Manuel; Navarro, Héctor; Saavedra, Genaro; Javidi, Bahram

    2012-11-01

    This paper presents an acquisition system and a procedure to capture 3D scenes in different spectral bands. The acquisition system is formed by a monochrome camera and a Liquid Crystal Tunable Filter (LCTF) that allows images to be acquired at different spectral bands in the [480, 680] nm wavelength interval. The Synthetic Aperture Integral Imaging acquisition technique is used to obtain the elemental images for each wavelength. These elemental images are used to computationally obtain the reconstruction planes of the 3D scene at different depths. The 3D profile of the acquired scene is also obtained by minimizing the variance of the contributions of the elemental images at each image pixel. Experimental results show the viability of recovering the 3D multispectral information of the scene. Integration of 3D and multispectral information could have important benefits in different areas, including skin cancer detection, remote sensing and pattern recognition, among others.
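
The minimum-variance depth criterion can be sketched for a single pixel; the input layout below is an assumption for illustration, not the authors' implementation:

```python
from statistics import pvariance

def best_depth(contributions_per_depth):
    """contributions_per_depth: for one pixel, a list where entry d holds the
    values each elemental image contributes when re-projected to candidate
    depth d. The elemental images agree best (minimum variance across
    contributions) at the depth where the scene point actually lies, so we
    return the index of the minimum-variance depth."""
    variances = [pvariance(vals) for vals in contributions_per_depth]
    return min(range(len(variances)), key=variances.__getitem__)

# Toy pixel: the four elemental contributions agree at depth 0 and
# disagree at depth 1, so depth 0 is selected.
print(best_depth([[1.0, 1.0, 1.0, 1.0], [0.0, 2.0, 1.0, 3.0]]))  # → 0
```

Repeating this per pixel (and per spectral band) yields the 3D profile described above.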

  4. The sapphire backscattering monochromator at the Dynamics beamline P01 of PETRA III

    Science.gov (United States)

    Alexeev, P.; Asadchikov, V.; Bessas, D.; Butashin, A.; Deryabin, A.; Dill, F.-U.; Ehnes, A.; Herlitschke, M.; Hermann, R. P.; Jafari, A.; Prokhorov, I.; Roshchin, B.; Röhlsberger, R.; Schlage, K.; Sergueev, I.; Siemens, A.; Wille, H.-C.

    2016-12-01

    We report on a high-resolution sapphire backscattering monochromator installed at the Dynamics beamline P01 of PETRA III. The device enables nuclear resonance scattering experiments on Mössbauer isotopes with transition energies between 20 and 60 keV with sub-meV to meV resolution. In a first performance test with the 119Sn nuclear resonance at an X-ray energy of 23.88 keV, an energy resolution of 1.34 meV was achieved. The device extends the field of nuclear resonance scattering at the PETRA III synchrotron light source to many further isotopes, such as 151Eu, 149Sm, 161Dy, 125Te and 121Sb.

  5. A high-precision cryogenically-cooled crystal monochromator for the APS diagnostics beamline

    Energy Technology Data Exchange (ETDEWEB)

    Rotela, E.; Yang, B.; Sharma, S.; Barcikowski, A.

    2000-07-24

    A high-precision cryogenically-cooled crystal monochromator has been developed for the APS diagnostics beamline. The design permits simultaneous measurements of the particle beam size and divergence. It provides for a large rotation angle, −15° to 180°, with a resolution of 0.0005°. The roll angle of the crystal can be adjusted by up to ±3° with a resolution of 0.0001°. A vertical translational stage, with a stroke of ±25 mm and resolution of 8 µm, is provided to enable using different parts of the same crystal or to retract the crystal from the beam path. The modular design will allow optimization of cooling schemes to minimize thermal distortions of the crystal under high heat loads.

  6. Flux-enhanced monochromator by ultrasound excitation of annealed Czochralski-grown silicon crystals

    CERN Document Server

    Koehler, S; Seitz, C; Magerl, A; Mashkina, E; Demin, A

    2003-01-01

    The neutron flux from monochromator crystals can be increased by ultrasound excitation or by strain fields. Rocking curves of both a perfect float-zone silicon crystal and an annealed Czochralski silicon crystal with oxygen precipitates were measured at various levels of ultrasound excitation on a cold-neutron backscattering spectrometer. We find that the effects of the dynamic strain field from the ultrasound and the static strain field from the defects are not additive. Rocking curves were also taken at different ultrasound frequencies near resonance of the crystal/ultrasound-transducer system with a time resolution of 1 min. Pronounced effects of crystal heating are observed, which render the conditions for maximum neutron reflectivity delicate. (orig.)

  7. Measuring the criticality of the `magic condition' for a beam-expanding monochromator.

    Science.gov (United States)

    Martinson, Mercedes; Chapman, Dean

    2016-11-01

    It has been established that for cylindrically bent crystals the optimal beam characteristics occur when the geometric and single-ray foci are matched. In the beam-expanding monochromator developed for the BioMedical Imaging and Therapy beamlines at the Canadian Light Source, it was unclear how critical this `magic condition' was for preserving the transverse coherence of the beam. A study was conducted to determine whether misalignments away from the ideal conditions would severely affect the transverse coherence of the beam, thereby limiting phase-based imaging techniques. The results show that the magic condition is flexible enough to accommodate deviations of about ±1° or ±5 keV.

  8. Adaptive silicon monochromators for high-power wigglers; design, finite-element analysis and laboratory tests.

    Science.gov (United States)

    Quintana, J P; Hart, M

    1995-05-01

    Multipole wigglers in storage rings already produce X-ray power in the range up to a few kilowatts, and planned devices at third-generation facilities promise up to 30 kW. Although the power density at the monochromator position is an order of magnitude lower than that from undulators, the thermal strain field in the beam footprint can still cause severe loss of performance in X-ray optical systems. For an optimized adaptive design, the results of finite-element analysis are compared with double-crystal rocking curves obtained with a laboratory X-ray source; in a second paper [Quintana, Hart, Bilderback, Henderson, Richter, Setterson, White, Hausermann, Krumrey & Schulte-Schrepping (1995). J. Synchrotron Rad. 2, 1-5], successful tests at wiggler sources at CHESS and ESRF and at an undulator source at HASYLAB are reported.

  9. Image-quality assessment of monochrome monitors for medical soft copy display

    Science.gov (United States)

    Weibrecht, Martin; Spekowius, Gerhard; Quadflieg, Peter; Blume, Hartwig R.

    1997-05-01

    Soft-copy presentation of medical images is becoming part of routine medical practice as more and more health care facilities convert to digital, filmless hospital and radiological information management. To provide optimal image quality, display systems must be included when assessing overall system image quality. We developed a method to accomplish this and demonstrate it through the analysis of four different monochrome monitors. We determined display functions and veiling glare with a high-performance photometer. Structure mottle of the CRT screens, point spread functions and images of stochastic structures were acquired by a scientific CCD camera. The images were analyzed with respect to signal transfer characteristics and noise power spectra. We determined the influence of the monitors on the detective quantum efficiency of a simulated digital x-ray imaging system. The method follows a physical approach; nevertheless, the results of the analysis are in good agreement with the subjective impression of human observers.
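
    The noise-power-spectrum part of such a display analysis can be sketched numerically. The snippet below estimates a 2-D NPS from a (here synthetic) uniform-patch capture; the 0.3 mm pixel pitch and the noise level are illustrative assumptions, not values from the paper.

```python
import numpy as np

def noise_power_spectrum(img, pixel_pitch_mm=0.3):
    """Estimate the 2-D noise power spectrum of a uniform screen capture.
    The mean (DC) level is removed so only luminance fluctuations remain;
    |FFT|^2 is scaled by pitch^2 / N so the NPS obeys Parseval's relation."""
    dev = img - img.mean()
    nps = np.abs(np.fft.fft2(dev)) ** 2
    nps *= pixel_pitch_mm ** 2 / dev.size
    return np.fft.fftshift(nps)  # put zero frequency at the centre

# Synthetic 256x256 patch: mean luminance 100, white noise sigma = 2
rng = np.random.default_rng(0)
patch = 100.0 + rng.normal(0.0, 2.0, (256, 256))
nps = noise_power_spectrum(patch)
```

    A flat (white) NPS indicates structure-free screen noise; phosphor granularity such as the structure mottle discussed above shows up as excess power at specific spatial frequencies.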

  10. A Drabkin-type spin resonator as tunable neutron beam monochromator

    Energy Technology Data Exchange (ETDEWEB)

    Piegsa, F.M., E-mail: florian.piegsa@phys.ethz.ch [ETH Zürich, Institute for Particle Physics, CH-8093 Zürich (Switzerland); Ries, D. [ETH Zürich, Institute for Particle Physics, CH-8093 Zürich (Switzerland); Paul Scherrer Institute, CH-5232 Villigen (Switzerland); Filges, U.; Hautle, P. [Paul Scherrer Institute, CH-5232 Villigen (Switzerland)

    2015-09-11

    A Drabkin-type spin resonator was designed and successfully implemented at the multi-purpose beam line BOA at the spallation neutron source SINQ at the Paul Scherrer Institute. The device selectively acts on the magnetic moment of neutrons within an adjustable velocity band and hence can be utilized as a tunable neutron beam monochromator. Several neutron time-of-flight (TOF) spectra have been recorded employing various settings in order to characterize its performance. In a first test application the velocity dependent transmission of a beryllium filter was determined. In addition, we demonstrate that using an exponential current distribution in the spin resonator coil the side-maxima in the TOF spectra usually associated with a Drabkin setup can be strongly suppressed.
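
    As a back-of-envelope illustration of the TOF characterization above, the sketch below converts a neutron's flight time over a known path into velocity and de Broglie wavelength. The 10 m path and 5 ms flight time are made-up example numbers, not BOA parameters.

```python
import math

H = 6.62607015e-34        # Planck constant, J s
M_N = 1.67492749804e-27   # neutron mass, kg

def neutron_wavelength_angstrom(flight_path_m, tof_s):
    """de Broglie wavelength lambda = h / (m_n * v), with v = L / t,
    returned in angstrom."""
    v = flight_path_m / tof_s
    return H / (M_N * v) * 1e10

# A 2000 m/s neutron (10 m in 5 ms) has a wavelength near 2 angstrom
lam = neutron_wavelength_angstrom(10.0, 5e-3)
```

    A Drabkin resonator tuned to flip spins only within a narrow velocity band therefore selects a correspondingly narrow wavelength band, which is what makes it usable as a monochromator.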

  11. Diffraction imaging for in-situ characterization of double-crystal x-ray monochromators

    CERN Document Server

    Stoupin, Stanislav; Heald, Steve M; Brewe, Dale; Meron, Mati

    2015-01-01

    Imaging of the Bragg-reflected x-ray beam is proposed and validated as an in-situ method for characterizing the performance of double-crystal monochromators under the heat load of intense synchrotron radiation. A sequence of images is collected at different angular positions on the reflectivity curve of the second crystal and analyzed. The method provides rapid evaluation of the wavefront of the exit beam, which relates to local misorientation of the crystal planes along the beam footprint on the thermally distorted first crystal. The measured misorientation can be directly compared with results of finite element analysis. The imaging method offers additional insight into the local intrinsic crystal quality over the footprint of the incident x-ray beam.

  12. Bragg prism monochromator and analyser for super ultra-small angle neutron scattering studies

    Indian Academy of Sciences (India)

    Apoorva G Wagh; Sohrab Abbas; Markus Strobl; Wolfgang Treimer

    2008-11-01

    We have designed, fabricated and operated a novel Bragg prism monochromator–analyser combination. With a judicious choice of the Bragg reflection, its asymmetry and the apex angle of the silicon single crystal prism, the monochromator has produced a neutron beam with sub-arcsec collimation. A Bragg prism analyser with the opposite asymmetry has been tailored to accept a still sharper angular profile. With this optimized monochromator–analyser pair, we have attained the narrowest and sharpest neutron angular profile to date. At this facility, we have recorded the first SUSANS spectra spanning wave vector transfers ∼10⁻⁶ Å⁻¹ to characterize samples containing agglomerates up to tens of micrometres in size.

  13. On the sagittal focusing of synchrotron radiation with a double crystal monochromator

    Energy Technology Data Exchange (ETDEWEB)

    Kushnir, V.I.; Quintana, J.P.; Georgopoulos, P. (DUNU Synchrotron Research Center, Robert R. McCormick School of Engineering and Applied Science, Northwestern Univ., Evanston, IL (United States))

    1993-05-01

    A method to avoid the anticlastic bending of the second crystal in a two-crystal monochromator for synchrotron radiation is proposed. It is shown analytically that the anticlastic curvature is zero at the center of the crystal for a simply supported isotropic crystal loaded with a constant moment, provided that the crystal's aspect ratio is equal to a 'golden value' dependent on the Poisson coefficient ν. For ν = 0.262 (equal to ν in the Si(111) plane) this ratio is 2.360. Finite element results are presented for the case of a clamped crystal and show that there is a similar 'golden value', approximately equal to 1.42, for ν = 0.262. (orig.)

  14. High-aperture monochromator-reflectometer and its usefulness for CCD calibration

    Science.gov (United States)

    Vishnyakov, Eugene A.; Shcherbakov, Alexander V.; Pertsov, Andrei A.; Polkovnikov, Vladimir N.; Pestov, Alexey E.; Pariev, Dmitry E.; Chkhalo, Nikolai I.

    2017-05-01

    We present a laboratory high-aperture monochromator-reflectometer employing a laser-plasma radiation source and three replaceable Schwarzschild objectives for a range of applications in the soft X-ray spectral band. Three sets of X-ray multilayer mirrors for the Schwarzschild objectives enable operation of the reflectometer at wavelengths of 135, 171 and 304 Å, while a goniometer with three degrees of freedom allows different measurement modes. We have used the facility for laboratory CCD calibration at the wavelengths specified. Combined with the results of CCD sensitivity measurements conducted in the VUV spectral band, the total outcome provides a more comprehensive understanding of the CCD efficiency over a wide spectral range.

  15. A water-cooled x-ray monochromator for using off-axis undulator beam.

    Energy Technology Data Exchange (ETDEWEB)

    Khounsary, A.; Maser, J.

    2000-12-11

    Undulator beamlines at third-generation synchrotron x-ray sources are designed to use the high-brilliance radiation contained in the central cone of the generated x-ray beams. The rest of the x-ray beam often goes unused. Moreover, in some cases, such as in zone-plate-based microfocusing beamlines, only a small part of the central radiation cone around the optical axis is used. In this paper, a side-station branch line at the Advanced Photon Source that takes advantage of some of the unused off-axis photons in a microfocusing x-ray beamline is described. Detailed information on the design and analysis of a high-heat-load water-cooled monochromator developed for this beamline is provided.

  16. High-resolution monochromated electron energy-loss spectroscopy of organic photovoltaic materials.

    Science.gov (United States)

    Alexander, Jessica A; Scheltens, Frank J; Drummy, Lawrence F; Durstock, Michael F; Hage, Fredrik S; Ramasse, Quentin M; McComb, David W

    2017-09-01

    Advances in electron monochromator technology are providing opportunities for high-energy-resolution (10-200 meV) electron energy-loss spectroscopy (EELS) to be performed in the scanning transmission electron microscope (STEM). The energy-loss near-edge structure in core-loss spectroscopy is often limited by core-hole lifetimes rather than by the energy spread of the incident illumination. In the valence-loss region, however, the reduced width of the zero-loss peak makes it possible to resolve clearly and unambiguously spectral features at very low energy losses. Spectra were collected from four materials used in organic photovoltaics (OPVs): poly(3-hexylthiophene) (P3HT), [6,6]-phenyl-C61-butyric acid methyl ester (PCBM), copper phthalocyanine (CuPc), and fullerene (C60). Data were collected on two different monochromated instruments - a Nion UltraSTEM 100 MC 'HERMES' and an FEI Titan3 60-300 Image-Corrected S/TEM - using energy resolutions (as defined by the zero-loss peak full width at half-maximum) of 35 meV and 175 meV, respectively. The data were acquired to allow deconvolution of plural scattering, and Kramers-Kronig analysis was utilized to extract the complex dielectric functions. The real and imaginary parts of the complex dielectric functions obtained from the two instruments were compared to evaluate whether the enhanced resolution of the Nion provides new opto-electronic information for these organic materials. The differences between the spectra are discussed, and the implications for STEM-EELS studies of advanced materials are considered.
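
    The Kramers-Kronig step mentioned above can be illustrated with a discrete transform that recovers Re(1/ε) from the loss function Im(−1/ε). The sketch below is schematic only: it uses a midpoint rule that simply skips the singular sample, and the Lorentzian "plasmon" at 20 eV is an invented toy input, not data from the paper. A real analysis also handles plural-scattering deconvolution, surface losses and normalization.

```python
import numpy as np

def kk_real_part(energies, loss):
    """Discrete Kramers-Kronig transform:
        Re(1/eps(E)) = 1 - (2/pi) P∫ Im(-1/eps(E')) E' / (E'^2 - E^2) dE'
    evaluated with a midpoint rule that skips the singular sample."""
    e = np.asarray(energies, dtype=float)
    de = e[1] - e[0]                      # uniform grid assumed
    re = np.empty_like(e)
    for i, energy in enumerate(e):
        denom = e ** 2 - energy ** 2
        mask = np.abs(denom) > 1e-12      # drop the E' = E sample
        integrand = np.zeros_like(e)
        integrand[mask] = loss[mask] * e[mask] / denom[mask]
        re[i] = 1.0 - (2.0 / np.pi) * integrand.sum() * de
    return re

# Toy loss function: a plasmon-like Lorentzian peaked at 20 eV
e_grid = np.arange(0.2, 60.0, 0.2)
loss = 1.0 / (1.0 + ((e_grid - 20.0) / 2.0) ** 2)
re_inv_eps = kk_real_part(e_grid, loss)
```

    Below the peak Re(1/ε) dips under 1, and well above it the result relaxes back toward 1 - the qualitative behaviour expected from a single loss peak.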

  17. Rotation of X-ray polarization in the glitches of a silicon crystal monochromator.

    Science.gov (United States)

    Sutter, John P; Boada, Roberto; Bowron, Daniel T; Stepanov, Sergey A; Díaz-Moreno, Sofía

    2016-08-01

    EXAFS studies on dilute samples are usually carried out by collecting the fluorescence yield using a large-area multi-element detector. This method is susceptible to the 'glitches' produced by all single-crystal monochromators. Glitches are sharp dips or spikes in the diffracted intensity at specific crystal orientations. If incorrectly compensated, they degrade the spectroscopic data. Normalization of the fluorescence signal by the incident flux alone is sometimes insufficient to compensate for the glitches. Measurements performed at the state-of-the-art wiggler beamline I20-scanning at Diamond Light Source have shown that the glitches alter the spatial distribution of the sample's quasi-elastic X-ray scattering. Because glitches result from additional Bragg reflections, multiple-beam dynamical diffraction theory is necessary to understand their effects. Here, the glitches of the Si(111) four-bounce monochromator of I20-scanning just above the Ni K edge are associated with their Bragg reflections. A fitting procedure that treats coherent and Compton scattering is developed and applied to a sample of an extremely dilute (100 micromolal) aqueous solution of Ni(NO3)2. The depolarization of the wiggler X-ray beam out of the electron orbit is modeled. The fits achieve good agreement with the sample's quasi-elastic scattering with just a few parameters. The X-ray polarization is rotated up to ±4.3° within the glitches, as predicted by dynamical diffraction. These results will help users normalize EXAFS data at glitches.

  18. Background Error Correlation Modeling with Diffusion Operators

    Science.gov (United States)

    2013-01-01

    functions defined on the orthogonal curvilinear grid of the Navy Coastal Ocean Model (NCOM) [28] set up in Monterey Bay (Fig. 4). The number N... H2 = [1 1; 1 −1], the HMs with order N = 2^n, n = 1, 2, ... can be easily constructed. HMs with N = 12, 20 were constructed "manually" more than a century
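
    The fragment above references the Sylvester construction of Hadamard matrices (HMs) starting from H2 = [1 1; 1 −1]. A minimal sketch of that doubling recursion follows; note it only yields orders that are powers of two, so orders such as 12 or 20 indeed require other constructions, as the text notes.

```python
import numpy as np

def sylvester_hadamard(n):
    """Hadamard matrix of order 2**n via the Sylvester doubling step
    H_{2N} = [[H_N, H_N], [H_N, -H_N]], starting from H_1 = [1]."""
    H = np.array([[1]])
    for _ in range(n):
        H = np.block([[H, H], [H, -H]])
    return H

H4 = sylvester_hadamard(2)
# Rows are mutually orthogonal: H @ H.T = N * I
```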

  19. Thermal and structural finite element analysis of water cooled silicon monochromator for synchrotron radiation comparison of two different cooling schemes

    CERN Document Server

    Artemiev, A I; Busetto, E; Hrdy, J; Mrazek, D; Plesek, I; Savoia, A

    2001-01-01

    The article describes the results of a Finite Element Analysis (FEA) of the distortions of the first Si monochromator crystal due to the Synchrotron Radiation (SR) heat load, and a consequent analysis of the influence of those distortions on double-crystal monochromator performance. The efficiencies of two different cooling schemes are compared. In both cases a thin Si crystal plate lies on a copper cooling support containing microchannels. In the first model the microchannels run parallel to the diffraction plane; in the second model they run perpendicular to it, i.e. it is a conventional cooling scheme. It is shown that the temperature field throughout the crystal volume is more uniform and more symmetrical in the first model than in the second (conventional) one.

  20. Optimization of bent perfect Si(220)-crystal monochromator for residual strain/stress instrument-Part II

    Energy Technology Data Exchange (ETDEWEB)

    Moon, Myung-Kook [Neutron Beam Application, Korea Atomic Energy Research Institute, 150 Duckjin-Dong, Yusung, Daejon 305-600 (Korea, Republic of)]. E-mail: moonmk@kaeri.re.kr; Em, Vyacheslav T. [Neutron Beam Application, Korea Atomic Energy Research Institute, 150 Duckjin-Dong, Yusung, Daejon 305-600 (Korea, Republic of); Lee, Chang-Hee [Neutron Beam Application, Korea Atomic Energy Research Institute, 150 Duckjin-Dong, Yusung, Daejon 305-600 (Korea, Republic of); Mikula, Pavol [Nuclear Physics Institute and Research Centre Rez Ltd., 250 68 Rez (Czech Republic); Hong, Kwang-Pyo [Neutron Beam Application, Korea Atomic Energy Research Institute, 150 Duckjin-Dong, Yusung, Daejon 305-600 (Korea, Republic of); Choi, Young-Hyun [Neutron Beam Application, Korea Atomic Energy Research Institute, 150 Duckjin-Dong, Yusung, Daejon 305-600 (Korea, Republic of); Cheon, Jong-Kyu [Neutron Beam Application, Korea Atomic Energy Research Institute, 150 Duckjin-Dong, Yusung, Daejon 305-600 (Korea, Republic of); Nam, Uk-Won [Nuclear Physics Institute and Research Centre Rez Ltd., 250 68 Rez (Czech Republic); Kong, Kyung-Nam [Nuclear Physics Institute and Research Centre Rez Ltd., 250 68 Rez (Czech Republic); Korea Astronomy Observatory, Yusung, Daejeon 305-348 (Korea, Republic of); Jin, Kyung-Chan [Korea Institute of Industrial Technology, 35-3 Hongchon-Ri, Ipchang-Myun, Chonan-Si, Chungnam, 330-825 (Korea, Republic of)

    2005-11-01

    Optimized diffractometer arrangements for residual strain measurements employing curved crystal monochromators provide good luminosity and a high Δd/d resolution in the vicinity of the usually used scattering angle 2θ_S ≈ ±90°. Owing to the variety of diffractometer designs, which can be installed at a constant or at different take-off angles, there is, apart from a few attempts, a lack of experimental evidence to guide the choice of parameters for optimum performance. In addition to our earlier investigations with a curved Si(311) monochromator employed in different diffraction geometries (see paper I [M.K. Moon et al., Physica B, submitted [1
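
    One standard way to see the role of the scattering angle in strain scanning is to differentiate Bragg's law, giving Δd/d = −cot(θ)·Δθ: at 2θ_S = 90° (θ = 45°), cot(θ) = 1 and an angular peak shift maps one-to-one onto strain. A minimal sketch with an illustrative 10⁻⁴ rad shift:

```python
import math

def dd_over_d(theta_deg, delta_theta_rad):
    """Relative lattice-spacing change implied by a Bragg-peak shift:
    differentiating lambda = 2 d sin(theta) gives dd/d = -cot(theta) * dtheta."""
    return -delta_theta_rad / math.tan(math.radians(theta_deg))

# At theta = 45 deg, cot = 1: a 1e-4 rad peak shift means a 1e-4 strain
strain = dd_over_d(45.0, 1e-4)
```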

  1. Design, Build & Test of a Double Crystal Monochromator for Beamlines I09 & I23 at the Diamond Light Source

    Science.gov (United States)

    Kelly, J.; Lee, T.; Alcock, S.; Patel, H.

    2013-03-01

    A high stability Double Crystal Monochromator has been developed at The Diamond Light Source for beamlines I09 and I23. The design specification was a cryogenic, fixed exit, energy scanning monochromator, operating over an energy range of 2.1 - 25 keV using a Si(111) crystal set. The novel design concepts are the direct drive, air bearing Bragg axis, low strain crystal mounts and the cooling scheme. The instrument exhibited superb stability and repeatability on the B16 Test Beamline. A 20 keV Si(555), 1.4 μrad rocking curve was demonstrated. The DCM showed good stability without any evidence of vibration or Bragg angle nonlinearity.

  2. Refractive Errors

    Science.gov (United States)

    ... does the eye focus light? In order to see clearly, light rays from an object must focus onto the ... The refractive errors are: myopia, hyperopia and astigmatism [See figures 2 and 3]. What is hyperopia (farsightedness)? Hyperopia occurs when light rays focus behind the retina (because the eye ...

  3. Medication Errors

    Science.gov (United States)

  4. All-diamond optical assemblies for a beam-multiplexing X-ray monochromator at the Linac Coherent Light Source

    CERN Document Server

    Stoupin, S; Blank, V D; Shvyd'ko, Yu V; Goetze, K; Assoufid, L; Polyakov, S N; Kuznetsov, M S; Kornilov, N V; Katsoudas, J; Alonso-Mori, R; Chollet, M; Feng, Y; Glownia, J M; Lemke, H; Robert, A; Song, S; Sikorski, M; Zhu, D

    2014-01-01

    A double-crystal diamond (111) monochromator recently implemented at the Linac Coherent Light Source (LCLS) enables splitting of the primary X-ray beam into a pink (transmitted) and a monochromatic (reflected) branch. The first monochromator crystal, with a thickness of 100 µm, provides sufficient X-ray transmittance to enable simultaneous operation of two beamlines. Here we report on the design, fabrication, and X-ray characterization of the first and second (300-µm-thick) crystals utilized in the monochromator and of the optical assemblies holding these crystals. Each crystal plate has a region of about 5 × 2 mm² with low defect concentration, sufficient for use in X-ray optics at the LCLS. The optical assemblies holding the crystals were designed to provide mounting on a rigid substrate and to minimize mounting-induced crystal strain. The induced strain was evaluated using double-crystal X-ray topography and was found to be small over the 5 × 2 mm² working regions of the crystals.

  5. Double-crystal monochromator for a PF 60-period soft x-ray undulator (abstract)

    Science.gov (United States)

    Ishikawa, T.; Maezawa, H.; Nomura, M.; Ando, M.

    1989-07-01

    Since undulator light is itself sharply collimated, it can be effectively monochromatized by a perfect crystal. An x-ray double-crystal monochromator with a fixed exit has been designed and built for use with undulator light from a 60-period undulator at the Photon Factory (beamline 2A). The available Bragg angle ranges from 7° to 80°. The angle scan is made by means of a goniometer outside the vacuum chamber, with a finest step of 0.1 arcsec. Magnetic fluid is used as the vacuum seal of the feedthrough. The fixed exit beam position is maintained by translating the second crystal along two mechanical guides: one normal and the other parallel to the crystal surface. Adjustment of the parallelism of the two crystals is made manually with flexible wires. Since the total power in the central coherent portion, limited by a 1 × 1 mm² slit, is modest, stable operation is possible without cooling the crystal. Currently, the InSb (111) reflection is used. The diffracting planes of the first crystal are 1° off from the surface, and the second crystal uses the symmetric reflection. At its fifth harmonic, brilliant undulator light of approximately 10¹² photons/s/mm² with 1-eV energy resolution is available (E = 2 keV).
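
    The quoted operating point follows from Bragg's law applied to InSb(111). Assuming a lattice constant of 6.479 Å (a textbook value, not stated in the abstract), the 7°-80° Bragg-angle range maps to roughly 1.7-13.6 keV, and E = 2 keV corresponds to θ ≈ 56°:

```python
import math

HC_KEV_A = 12.398              # h*c in keV * angstrom
A_INSB = 6.479                 # InSb lattice constant, angstrom (assumed)
D_111 = A_INSB / math.sqrt(3)  # (111) interplanar spacing

def energy_kev(bragg_deg):
    """Photon energy selected by InSb(111): E = hc / (2 d sin(theta))."""
    return HC_KEV_A / (2 * D_111 * math.sin(math.radians(bragg_deg)))

e_low, e_high = energy_kev(80.0), energy_kev(7.0)  # ends of the angular range
```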

  6. Quick scanning monochromator for millisecond in situ and in operando X-ray absorption spectroscopy

    Science.gov (United States)

    Müller, O.; Lützenkirchen-Hecht, D.; Frahm, R.

    2015-09-01

    The design and capabilities of a novel Quick scanning Extended X-ray Absorption Fine Structure (QEXAFS) monochromator are presented. The oscillatory movement of the crystal stage is realized by means of a unique open-loop driving scheme operating a direct-drive torque motor. The entire drive mechanics are installed inside a goniometer located on the atmospheric side of the vacuum chamber. This design allows remote adjustment of the oscillation frequency and spectral range, giving complete control of QEXAFS measurements. It also features a true step-scanning mode, which operates without a control loop to prevent induced vibrations. Equipped with Si(111) and Si(311) crystals on a single stage, it covers an energy range from 4.0 keV to 43 keV. Extended X-ray absorption fine structure spectra up to k = 14.4 Å⁻¹ have been acquired within 17 ms, and X-ray absorption near edge structure spectra covering more than 200 eV within 10 ms. The achieved data quality is excellent, as shown by the presented measurements.
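
    To get a feel for the required scan speed, the EXAFS wavenumber k maps to photoelectron energy above the edge via E − E₀ = (ħk)²/2mₑ ≈ 3.81·k² eV (k in Å⁻¹), so a k = 14.4 Å⁻¹ spectrum spans roughly 790 eV, swept here in 17 ms:

```python
def k_to_ev(k_inv_angstrom):
    """Photoelectron energy above the absorption edge:
    E - E0 = (hbar*k)^2 / (2*m_e) ~= 3.81 * k^2 eV for k in 1/angstrom."""
    return 3.81 * k_inv_angstrom ** 2

span_ev = k_to_ev(14.4)          # energy range of a k = 14.4 EXAFS scan
rate_ev_per_s = span_ev / 17e-3  # average sweep rate for a 17 ms spectrum
```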

  7. Design, Fabrication and Measurement of Ni/Ti Multilayer Used for Neutron Monochromator

    Institute of Scientific and Technical Information of China (English)

    ZHANG Zhong; WANG Zhan-Shan; ZHU Jing-Tao; WU Yong-Rong; MU Bao-Zhong; WANG Feng-Li; QIN Shu-Ji; CHEN Ling-Yan

    2007-01-01

    Ni/Ti multilayers, which can be used for neutron monochromators, are designed, fabricated and measured. Firstly, their reflectivities are simulated based on the Nevot-Croce model. Reflectivities of two Ni/Ti multilayer mirrors with periods d = 10.3 nm (M1) and d = 7.8 nm (M2) are calculated. In the calculation, the reflectivity of the Ni/Ti multilayer is taken as a function of the grazing angle with different roughness factors δ = 1.0 nm and δ = 1.5 nm. Secondly, the two multilayers are fabricated by direct-current magnetron sputtering. Thirdly, their structures are characterized by small-angle x-ray diffraction; the roughness factors are fitted to be 0.68 nm and 1.16 nm for M1 and M2, respectively. Finally, their reflective performances are measured on the V14 neutron beam line at the Berlin Neutron Scattering Centre (BENSC), Germany. The experimental data show that, with decreasing multilayer period, the grazing angle of the reflected neutron intensity peak increases but the reflected neutron intensity decreases.
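
    The measured trends (larger grazing angle and lower peak intensity for the shorter-period mirror) can be sketched with the multilayer Bragg condition plus a simple Debye-Waller-type roughness factor. This factor is only a crude stand-in for the Nevot-Croce model actually used in the paper, and the 0.45 nm neutron wavelength is an assumed example value, not a V14 parameter:

```python
import math

def bragg_angle_deg(period_nm, wavelength_nm, order=1):
    """Multilayer Bragg angle from m * lambda = 2 * d * sin(theta)."""
    return math.degrees(math.asin(order * wavelength_nm / (2 * period_nm)))

def roughness_factor(period_nm, sigma_nm):
    """Debye-Waller-type intensity attenuation exp(-(q*sigma)^2) at the
    first Bragg peak, where q = 2*pi/d (crude Nevot-Croce stand-in)."""
    q = 2 * math.pi / period_nm
    return math.exp(-(q * sigma_nm) ** 2)

LAM = 0.45  # assumed neutron wavelength, nm
a1, a2 = bragg_angle_deg(10.3, LAM), bragg_angle_deg(7.8, LAM)      # M1, M2 peak angles
r1, r2 = roughness_factor(10.3, 0.68), roughness_factor(7.8, 1.16)  # fitted sigmas
```

    The shorter period d gives both a larger peak angle (a2 > a1) and a stronger roughness penalty (r2 < r1), matching the reported behaviour.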

  8. Vibration measurements of high-heat-load monochromators for DESY PETRA III extension

    Science.gov (United States)

    Kristiansen, Paw; Horbach, Jan; Döhrmann, Ralph; Heuer, Joachim

    2015-01-01

    The requirement for vibrational stability of beamline optics continues to evolve rapidly to comply with the demands created by the improved brilliance of the third-generation low-emittance storage rings around the world. The challenge is to quantify the performance of the instrument before it is installed at the beamline. In this article, measurement techniques are presented that directly and accurately measure (i) the relative vibration between the two crystals of a double-crystal monochromator (DCM) and (ii) the absolute vibration of the second-crystal cage of a DCM. Excluding a synchrotron beam, the measurements are conducted under in situ conditions, connected to a liquid-nitrogen cryocooler. The investigated DCM utilizes a direct-drive (no gearing) goniometer for the Bragg rotation. The main causes of the DCM vibration are found to be the servoing of the direct-drive goniometer and the flexibility in the crystal cage motion stages. It is found that the investigated DCM can offer relative pitch vibration down to 48 nrad RMS (capacitive sensors, 0–5 kHz bandwidth) and absolute pitch vibration down to 82 nrad RMS (laser interferometer, 0–50 kHz bandwidth), with the Bragg axis brake engaged. PMID:26134790
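
    The nanoradian RMS figures quoted above are band-limited integrals of a measured vibration spectrum. A minimal sketch of that reduction follows; the flat 10⁻¹⁸ rad²/Hz noise floor is purely hypothetical, chosen only to land in the tens-of-nanoradian range reported.

```python
import numpy as np

def band_rms(freq_hz, psd, f_lo, f_hi):
    """Band-limited RMS: square root of the one-sided PSD integrated
    over [f_lo, f_hi), rectangle rule on a uniform frequency grid."""
    df = freq_hz[1] - freq_hz[0]
    mask = (freq_hz >= f_lo) & (freq_hz < f_hi)
    return float(np.sqrt(np.sum(psd[mask]) * df))

f = np.arange(0.0, 5000.0, 1.0)               # 0-5 kHz in 1 Hz bins
flat_psd = np.full_like(f, 1e-18)             # hypothetical pitch-noise PSD, rad^2/Hz
rms_rad = band_rms(f, flat_psd, 0.0, 5000.0)  # ~7.1e-8 rad, i.e. ~71 nrad
```

    Widening the bandwidth (e.g. the 0-50 kHz interferometer band) can only increase the integrated RMS, which is one reason the two quoted numbers differ.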

  9. Mechanical design aspects of a soft X-ray plane grating monochromator

    CERN Document Server

    Vasina, R; Dolezel, P; Mynar, M; Vondracek, M; Chab, V; Slezak, J A; Comicioli, C; Prince, K C

    2001-01-01

    A plane grating monochromator based on the SX-700 concept has been constructed for the Materials Science Beamline, Elettra, which is attached to a bending magnet. The tuning range is from 35 to 800 eV with a calculated spectral resolving power ε/Δε better than 4000 over the whole range. The optical elements consist of a toroidal prefocusing mirror, polarization aperture, entrance slit, plane pre-mirror, single plane grating (blazed), spherical mirror, exit slit and toroidal refocusing mirror. The plane grating is operated in the fixed focus mode with C_ff = 2.4. Energy scanning is performed by rotation of the plane grating and simultaneous translation and rotation of the plane pre-mirror. A novel solution is applied for the motion of the plane pre-mirror, namely a translation with the rotation mechanically coupled by a cam. The slits have no moving parts in vacuum, to reduce cost and increase ruggedness, and can be fully closed without risk of damage. In the first tests, a resolving pow...
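
    For a fixed-focus PGM, the incidence and diffraction angles at each photon energy follow from solving the grating equation together with the fixed-focus constraint cos β = C_ff·cos α (sign conventions vary between authors). The sketch below solves this by bisection; the 1200 lines/mm line density is an assumed value, since the abstract does not quote one.

```python
import math

def pgm_angles(energy_ev, lines_per_mm=1200.0, cff=2.4, order=1):
    """Solve the fixed-focus plane-grating geometry: the grating equation
    N*m*lambda = sin(alpha) + sin(beta) (inside order, beta < 0) together
    with the fixed-focus condition cos(beta) = cff * cos(alpha).
    Angles are measured from the grating normal; returns degrees."""
    lam_nm = 1239.842 / energy_ev               # photon wavelength, nm
    s = lines_per_mm * 1e-6 * order * lam_nm    # N*m*lambda, dimensionless

    def residual(alpha):
        beta = -math.acos(min(1.0, cff * math.cos(alpha)))
        return math.sin(alpha) + math.sin(beta) - s

    # residual falls monotonically from >0 to <0 on this alpha interval
    lo, hi = math.acos(1.0 / cff) + 1e-9, math.pi / 2 - 1e-9
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if residual(mid) > 0:
            lo = mid
        else:
            hi = mid
    alpha = 0.5 * (lo + hi)
    beta = -math.acos(cff * math.cos(alpha))
    return math.degrees(alpha), math.degrees(beta)

alpha_deg, beta_deg = pgm_angles(400.0)  # mid-range photon energy, 400 eV
```

    At 400 eV this yields grazing incidence near α ≈ 88°, the regime typical of soft-x-ray PGMs.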

  10. 极高分辨变包含角平面光栅单色器关键技术及检测方法研究%Key technologies and the performance measuring methods in variable included angle plane grating monochromator

    Institute of Scientific and Technical Information of China (English)

    卢启鹏; 宋源; 龚学鹏; 马磊

    2016-01-01

    the optical elements to decrease the effect of heat load on the monochromator. The results indicated that the slope error of the plane mirror declined from 8.1 μrad to 3 μrad. Lastly, we studied the testing methods of the variable-included-angle grating monochromator, whose resolution has already reached 5 × 10⁴; the measuring accuracy of the angle is 0.026″. These studies will provide help for designing monochromators with ultra-high resolution at third-generation synchrotron radiation sources.

  11. Beryllium, zinc and lead single crystals as a thermal neutron monochromators

    Energy Technology Data Exchange (ETDEWEB)

    Adib, M.; Habib, N. [Reactor Physics Department, NRC, Atomic Energy Authority, Cairo (Egypt); Bashter, I.I. [Physics Department, Faculty of Science, Zagazig University (Egypt); Morcos, H.N.; El-Mesiry, M.S. [Reactor Physics Department, NRC, Atomic Energy Authority, Cairo (Egypt); Mansy, M.S., E-mail: drmohamedmansy88@hotmail.com [Physics Department, Faculty of Science, Zagazig University (Egypt)

    2015-03-15

    Highlights: •Monochromatic features of Be, Zn and Pb single crystals. •Calculations of neutron reflectivity using the computer program MONO. •Optimum mosaic spread, thickness and cutting plane of single crystals. -- Abstract: The monochromatic features of Be, Zn and Pb single crystals are discussed in terms of orientation, mosaic spread, and thickness within the wavelength band from 0.04 up to 0.5 nm. A computer program, MONO, written in FORTRAN-77, has been adapted to carry out the required calculations. Calculations show that a 5-mm-thick beryllium (HCP structure) single crystal cut along its (0 0 2) plane with a mosaic spread of 0.6° FWHM provides the optimum parameters when used as a monochromator, giving high reflected neutron intensity from a thermal neutron flux. Furthermore, at wavelengths shorter than 0.16 nm it is free from the accompanying higher-order reflections. Zinc (HCP structure) has the same parameters, with much lower intensity. The same features are seen with lead (FCC structure) cut along its (3 1 1) plane, with less reflectivity than the former. However, Pb (3 1 1) is preferable to the others at neutron wavelengths ⩽0.1 nm, since the glancing angle (θ ∼ 20°) is more suitable for diffraction experiments. For a cold neutron flux, the first-order neutrons reflected from beryllium are free from higher orders up to 0.36 nm, while for the Zn single crystal this holds up to 0.5 nm.
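
    The θ ∼ 20° figure for Pb(311) at 0.1 nm can be checked from Bragg's law with the cubic d-spacing d = a/√(h²+k²+l²). The Pb lattice constant below is a standard literature value, not taken from the abstract:

```python
import math

A_PB = 4.9508                                  # Pb lattice constant, angstrom (assumed)
d_311 = A_PB / math.sqrt(3 ** 2 + 1 ** 2 + 1 ** 2)  # cubic (311) spacing

def glancing_angle_deg(wavelength_a, d_a):
    """Bragg glancing angle from lambda = 2 d sin(theta)."""
    return math.degrees(math.asin(wavelength_a / (2 * d_a)))

theta = glancing_angle_deg(1.0, d_311)  # 0.1 nm = 1 angstrom neutrons
```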

  12. New high-brightness monochrome monitor based on color CRT technology

    Science.gov (United States)

    Spekowius, Gerhard; Weibrecht, Martin; D'Adda, Carlo; Antonini, Antonio; Casale, Carlo; Blume, Hartwig R.

    1997-05-01

    With increasing availability of medical image communication infrastructures, medical images are more and more displayed as soft copies rather than as hard copies. Often, however, the image viewing environment is characterized by high ambient light, such as in surgery rooms or offices illuminated by daylight. We describe a very-high-brightness cathode-ray-tube (CRT) monitor which accommodates these viewing conditions without the typical deterioration in resolution due to electron focal spot blooming. The three guns of a standard color CRT are used to create a high-brightness monochrome monitor. The CRT has no shadow mask, and a homogeneous P45 phosphor layer has been deposited instead of the structured red-green-blue color phosphor screen. The electron spots of the three guns are dynamically matched by applying appropriate waveforms to four additional multipole magnetic fields around the gun assembly. We evaluated the image quality of the triple-gun CRT monitor with respect to parameters that are especially relevant for medical imaging applications. We measured characteristic curves, dynamic range, veiling glare, resolution, spot profiles, and screen noise. The monitor can provide a high luminance of more than 200 fL. Owing to the nearly perfect matching of the three spots, the resolution is mainly determined by the beam profile of a single gun and is remarkably high even at these high luminance values. The P45 phosphor shows very little structure noise, which is an advantage for medical desktop applications. Since all relevant monitor parameters are digitally controlled, the status of the monitor can be fully characterized at any time. This feature particularly facilitates the reproduction of brightness and contrast values and hence allows easy implementation of a display function standard, or a return to a display function that was found useful for a given application in the past.

  13. Soil-Structure Interaction Analysis of Jack-up Platforms Subjected to Monochrome and Irregular Waves

    Institute of Scientific and Technical Information of China (English)

    Maziar Gholami KORZANI; Ali Akbar AGHAKOUCHAK

    2015-01-01

    As jack-up platforms have recently been used in deeper and harsher waters, there has been an increasing demand to understand their behaviour more accurately so that more sophisticated analysis techniques can be developed. One area of significant development has been the modelling of spudcan performance, where the load-displacement behaviour of the foundation must be included in any numerical model of the structure. In this study, beam on nonlinear Winkler foundation (BNWF) modelling, which is based on using nonlinear springs and dampers instead of a continuum soil medium, is employed for this purpose. A regular monochrome design wave and an irregular wave representing a design sea state are applied to the platform as lateral loading. By using the BNWF model and assuming a granular soil under the spudcans, properties such as nonlinear soil behaviour near the structure, contact phenomena at the interface of soil and spudcan (such as uplifting and rocking), and geometrically nonlinear behaviour of the structure are studied. The results show that inelastic behaviour of the soil increases the lateral displacement at the hull elevation and causes permanent unequal settlement in the soil below the spudcans, both of which grow as the friction angle of the sandy soil decreases. In effect, the spudcans and the underlying soil provide a relative fixity at the platform support, which changes the dynamic response of the structure compared with the case where the structure is assumed to have a fixed or pinned support. For simulating this behaviour without explicit modelling of soil-structure interaction (SSI), moment-rotation curves at the ends of the platform legs, which depend on foundation dimensions and soil characteristics, are obtained. These curves can be used in a simplified model of the platform to account for the relative fixity at the soil-foundation interface.

  14. Medication Errors - A Review

    OpenAIRE

    Vinay BC; Nikhitha MK; Patel Sunil B

    2015-01-01

    This review article explains the definition of medication errors, the scope of the medication error problem, the types and common causes of medication errors, and the monitoring, consequences, prevention, and management of medication errors, with clear tables that are easy to understand.

  16. A method for evaluating image quality of monochrome and color displays based on luminance by use of a commercially available color digital camera

    Energy Technology Data Exchange (ETDEWEB)

    Tokurei, Shogo, E-mail: shogo.tokurei@gmail.com [Department of Health Sciences, Graduate School of Medical Sciences, Kyushu University, 3-1-1 Maidashi, Higashi-ku, Fukuoka, Fukuoka 812-8582, Japan and Department of Radiology, Yamaguchi University Hospital, 1-1-1 Minamikogushi, Ube, Yamaguchi 755-8505 (Japan)]; Morishita, Junji, E-mail: junjim@med.kyushu-u.ac.jp [Department of Health Sciences, Faculty of Medical Sciences, Kyushu University, 3-1-1 Maidashi, Higashi-ku, Fukuoka, Fukuoka 812-8582 (Japan)]

    2015-08-15

    Purpose: The aim of this study is to propose a method for the quantitative evaluation of image quality of both monochrome and color liquid-crystal displays (LCDs) using a commercially available color digital camera. Methods: The intensities of the unprocessed red (R), green (G), and blue (B) signals of a camera vary depending on the spectral sensitivity of the image sensor used in the camera. For consistent evaluation of image quality for both monochrome and color LCDs, the unprocessed RGB signals of the camera were converted into gray scale signals that corresponded to the luminance of the LCD. Gray scale signals for the monochrome LCD were evaluated by using only the green channel signals of the camera. For the color LCD, the RGB signals of the camera were converted into gray scale signals by employing weighting factors (WFs) for each RGB channel. A line image displayed on the color LCD was simulated on the monochrome LCD by using a software application for subpixel driving in order to verify the WF-based conversion method. Furthermore, the results obtained by different types of commercially available color cameras and a photometric camera were compared to examine the consistency of the authors’ method. Finally, image quality for both the monochrome and color LCDs was assessed by measuring modulation transfer functions (MTFs) and Wiener spectra (WS). Results: The authors’ results demonstrated that the proposed method for calibrating the spectral sensitivity of the camera resulted in a consistent and reliable evaluation of the luminance of monochrome and color LCDs. The MTFs and WS showed different characteristics for the two LCD types owing to difference in the subpixel structure. The MTF in the vertical direction of the color LCD was superior to that of the monochrome LCD, although the WS in the vertical direction of the color LCD was inferior to that of the monochrome LCD as a result of luminance fluctuations in RGB subpixels. Conclusions: The authors
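
    The WF-based conversion amounts to a weighted sum of the camera's RGB channels. The sketch below uses the Rec. 709 luma coefficients as stand-in weights; the authors instead derive their WFs from the measured spectral sensitivity of each camera, which is not reproduced here:

    ```python
    # Rec. 709 luma weights used as placeholder WFs (an assumption; the paper
    # computes camera-specific weights from spectral sensitivity).
    def rgb_to_luminance(r, g, b, wf=(0.2126, 0.7152, 0.0722)):
        """Convert unprocessed camera RGB signals to a gray scale value."""
        return wf[0] * r + wf[1] * g + wf[2] * b
    ```

    For a monochrome LCD the method reduces to the green channel alone, i.e. weights (0, 1, 0).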

  17. Development of a bent Laue beam-expanding double-crystal monochromator for biomedical X-ray imaging

    Energy Technology Data Exchange (ETDEWEB)

    Martinson, Mercedes, E-mail: mercedes.m@usask.ca [University of Saskatchewan, 116 Science Place, Room 163, Saskatoon, Saskatchewan (Canada); Samadi, Nazanin [University of Saskatchewan, 107 Wiggins Road, Saskatoon, Saskatchewan (Canada); Belev, George [Canadian Light Source, 44 Innovation Boulevard, Saskatoon, Saskatchewan (Canada); Bassey, Bassey [University of Saskatchewan, 116 Science Place, Room 163, Saskatoon, Saskatchewan (Canada); Lewis, Rob [University of Saskatchewan, 107 Wiggins Road, Saskatoon, Saskatchewan (Canada); Monash University, Clayton, Victoria 3800 (Australia); Aulakh, Gurpreet [University of Saskatchewan, 107 Wiggins Road, Saskatoon, Saskatchewan (Canada); Chapman, Dean [University of Saskatchewan, 116 Science Place, Room 163, Saskatoon, Saskatchewan (Canada); University of Saskatchewan, 107 Wiggins Road, Saskatoon, Saskatchewan (Canada)

    2014-03-13

    A bent Laue beam-expanding double-crystal monochromator was developed and tested at the Biomedical Imaging and Therapy beamline at the Canadian Light Source. The expander will reduce scanning time for micro-computed tomography and allow dynamic imaging that has not previously been possible at this beamline. The Biomedical Imaging and Therapy (BMIT) beamline at the Canadian Light Source has produced some excellent biological imaging data. However, the disadvantage of a small vertical beam limits its usability in some applications. Micro-computed tomography (micro-CT) imaging requires multiple scans to produce a full projection, and certain dynamic imaging experiments are not possible. A larger vertical beam is desirable. It was cost-prohibitive to build a longer beamline that would have produced a large vertical beam. Instead, it was proposed to develop a beam expander that would create a beam appearing to originate at a source much farther away. This was accomplished using a bent Laue double-crystal monochromator in a non-dispersive divergent geometry. The design and implementation of this beam expander is presented along with results from the micro-CT and dynamic imaging tests conducted with this beam. Flux (photons per unit area per unit time) has been measured and found to be comparable with the existing flat Bragg double-crystal monochromator in use at BMIT. This increase in overall photon count is due to the enhanced bandwidth of the bent Laue configuration. Whilst the expanded beam quality is suitable for dynamic imaging and micro-CT, further work is required to improve its phase and coherence properties.

  18. Characterization of InGaN/GaN quantum well growth using monochromated valence electron energy loss spectroscopy

    OpenAIRE

    Palisaitis, J.; Lundskog, A.; Forsberg, U.; Janzén, E.; Birch, J.; Hultman, L.; Persson, P. O. Å.

    2014-01-01

    The early stages of InGaN/GaN quantum well growth under In-reduced conditions have been investigated for varying thickness and composition of the wells. The structures were studied by monochromated STEM-VEELS spectrum imaging at high spatial resolution. It is found that beyond a critical well thickness and composition, quantum dots (>20 nm) are formed inside the well. These are buried by compositionally graded InGaN, which is formed as GaN is grown while residual In is incorporated into the...

  19. Replacement of monochromator and proportional gas counter by mercuric iodide detector in X-ray powder diffraction

    Energy Technology Data Exchange (ETDEWEB)

    Nissenbaum, J.; Levi, A.; Burger, A.; Schieber, M. (Hebrew Univ., Jerusalem (Israel). School of Applied Science and Technology)

    1983-02-01

    Low-resolution and therefore low-cost mercuric iodide detectors have successfully been applied to replace the combination of a graphite monochromator and a proportional gas radiation counter used in X-ray diffractometers. The mercuric iodide detector requires a lower DC bias of only 200 V rather than the 1500 V bias needed for the proportional gas counter. The much better stopping power of HgI₂ allows higher counting efficiency and therefore a better signal-to-noise ratio. Results are shown for X-ray powder diffraction of polycrystalline cubic silicon and tetragonal HgI₂.

  20. Holographically recorded ion-etched varied line spacing grating for a monochromator at the Photon Factory BL19B

    CERN Document Server

    Fujisawa, M; Shin, S

    2001-01-01

    Holographically recorded, ion-etched gratings can be obtained for the varied-line-spacing plane grating (VPG) monochromator at the Photon Factory BL19B. A new holographic recording method makes it possible to manufacture VPGs with large varied-line-spacing coefficients for reducing the aberration terms in the optical path function. The efficiency at higher photon energies and the amount of stray light are improved in comparison with mechanically ruled gratings. Calculation shows that the much lower efficiency at higher photon energies is not intrinsic to saw-tooth-type gratings; it instead appears to be caused by carbon contamination, radiation damage, deformation during manufacturing, and so on.

  1. DNS: Diffuse scattering neutron time-of-flight spectrometer

    Directory of Open Access Journals (Sweden)

    Yixi Su

    2015-08-01

    DNS is a versatile diffuse scattering instrument with polarisation analysis operated by the Jülich Centre for Neutron Science (JCNS), Forschungszentrum Jülich GmbH, outstation at the Heinz Maier-Leibnitz Zentrum (MLZ). Compact design, a large double-focusing PG monochromator and a highly efficient supermirror-based polarizer provide a polarized neutron flux of about 10^7 n cm^-2 s^-1. DNS is used for the studies of highly frustrated spin systems, strongly correlated electrons, emergent functional materials and soft condensed matter.

  2. Multipurpose monochromator for the Basic Energy Science Synchrotron Radiation Center Collaborative Access Team beamlines at the Advanced Photon Source x-ray facility

    Science.gov (United States)

    Ramanathan, M.; Beno, M. A.; Knapp, G. S.; Jennings, G.; Cowan, P. L.; Montano, P. A.

    1995-02-01

    The Basic Energy Science Synchrotron Radiation Center (BESSRC) Collaborative Access Team (CAT) will construct x-ray beamlines at two sectors of the Advanced Photon Source facility. In most of the beamlines the first optical element will be a monochromator, so that a standard design for this critical component is advantageous. The monochromator is a double-crystal, fixed exit scheme with a constant offset designed for ultrahigh vacuum windowless operation. In this design, the crystals are mounted on a turntable with the first crystal at the center of rotation. Mechanical linkages are used to correctly position the second crystal and maintain a constant offset. The main drive for the rotary motion is provided by a vacuum compatible Huber goniometer isolated from the main vacuum chamber. The design of the monochromator is such that it can accommodate water, gallium, or liquid-nitrogen cooling for the crystal optics.
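
    The constant-offset, fixed-exit condition that the mechanical linkages maintain can be sketched numerically. This is a minimal illustration under stated assumptions (Si(111) cut and the numeric offset are assumptions; the abstract does not specify either):

    ```python
    import math

    HC_KEV_A = 12.39842   # h*c in keV·Å
    D_SI_111 = 3.1356     # Si(111) d-spacing in Å (assumed crystal cut)

    def bragg_angle_deg(energy_kev, d=D_SI_111):
        """Bragg angle for a given photon energy: λ = 2·d·sin(θ)."""
        return math.degrees(math.asin(HC_KEV_A / (2 * d * energy_kev)))

    def crystal_gap(offset_mm, theta_deg):
        """Perpendicular gap between the two crystals that keeps the exit
        beam at a constant offset: offset = 2·gap·cos(θ)."""
        return offset_mm / (2 * math.cos(math.radians(theta_deg)))
    ```

    As the turntable rotates both crystals to a new Bragg angle, the linkage must re-position the second crystal so that `crystal_gap` holds for the fixed offset, which is exactly what a fixed-exit double-crystal design automates.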

  3. Optimization of the bent perfect Si(311)-crystal monochromator for a residual strain/stress instrument at the HANARO reactor-Part I

    Energy Technology Data Exchange (ETDEWEB)

    Moon, Myung-Kook [Korea Atomic Energy Research Institute, Neutron Beam Application, 150 Duckjin-Dong, Yusung, Daejon 305-600 (Korea, Republic of)]. E-mail: moonmk@kaeri.re.kr; Lee, Chang-Hee [Korea Atomic Energy Research Institute, Neutron Beam Application, 150 Duckjin-Dong, Yusung, Daejon 305-600 (Korea, Republic of); Em, Vyacheslav T. [Korea Atomic Energy Research Institute, Neutron Beam Application, 150 Duckjin-Dong, Yusung, Daejon 305-600 (Korea, Republic of); Mikula, Pavol [Nuclear Physics Institute and Research Centre Rez, Ltd., 250 68 Rez (Czech Republic); Hong, Kwang-Pyo [Korea Atomic Energy Research Institute, Neutron Beam Application, 150 Duckjin-Dong, Yusung, Daejon 305-600 (Korea, Republic of); Choi, Young-Hyun [Korea Atomic Energy Research Institute, Neutron Beam Application, 150 Duckjin-Dong, Yusung, Daejon 305-600 (Korea, Republic of); Cheon, Jong-Kyu [Korea Atomic Energy Research Institute, Neutron Beam Application, 150 Duckjin-Dong, Yusung, Daejon 305-600 (Korea, Republic of); Choi, Young-Nam [Korea Atomic Energy Research Institute, Neutron Beam Application, 150 Duckjin-Dong, Yusung, Daejon 305-600 (Korea, Republic of); Kim, Shin-Ae [Korea Atomic Energy Research Institute, Neutron Beam Application, 150 Duckjin-Dong, Yusung, Daejon 305-600 (Korea, Republic of); Kim, Sung-Kyu [Korea Atomic Energy Research Institute, Neutron Beam Application, 150 Duckjin-Dong, Yusung, Daejon 305-600 (Korea, Republic of); Jin, Kyung-Chan [Korea Institute of Industrial Technology 35-3 Hongchon-Ri, Ipchang-Myun, Chonan-Si, Chungnam 330-825 (Korea, Republic of)

    2005-12-01

    Reflectivity and resolution properties of a variety of optimized focusing monochromator configurations based on cylindrically bent perfect Si crystals were tested with the aim of evaluating their possible use in a strain/stress diffractometer. It has been found that the optimized monochromator performance of the curved Si(311) crystals (for the take-off angle 2θ_M = 60°) provides good luminosity and sufficiently high resolution (the full width at half maximum (FWHM) of the instrumental Δd/d profile can be about 2×10^-3 in the vicinity of the lattice spacing d = 0.117 nm for 2θ_S ≈ 90°) for the strain/stress diffractometer, with a figure of merit more than one order of magnitude larger than that of the conventional flat mosaic Ge(220) monochromator of η = 15′.
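
    The quoted resolution can be related to Bragg's law: differentiating λ = 2d sin θ gives Δd/d = cot(θ) · Δθ, so at 2θ_S ≈ 90° (θ = 45°, cot θ = 1) an effective angular width of 2×10^-3 rad reproduces the quoted Δd/d ≈ 2×10^-3. A minimal sketch (the angular width is an assumed input, not a value from the paper):

    ```python
    import math

    def delta_d_over_d(theta_deg, delta_theta_rad):
        """Differentiating Bragg's law, λ = 2·d·sin(θ), gives the relative
        lattice-spacing resolution Δd/d = cot(θ) · Δθ."""
        return abs(delta_theta_rad / math.tan(math.radians(theta_deg)))
    ```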

  4. Wake monochromator in asymmetric and symmetric Bragg and Laue geometry for self-seeding the European X-ray FEL

    CERN Document Server

    Geloni, Gianluca; Saldin, Evgeni; Serkez, Svitozar; Tolkiehn, Martin

    2013-01-01

    We discuss the use of self-seeding schemes with wake monochromators to produce TW-power, fully coherent pulses for applications at the dedicated bio-imaging beamline at the European X-ray FEL, a concept for an upgrade of the facility beyond the baseline previously proposed by the authors. We exploit the asymmetric and symmetric Bragg and Laue reflections (sigma polarization) in a diamond crystal. Optimization of the bio-imaging beamline is performed with extensive start-to-end simulations, which also take into account effects such as the spatio-temporal coupling caused by the wake monochromator. The spatial shift is maximal in the range of small Bragg angles. A geometry with Bragg angles close to π/2 would be a more advantageous option from this viewpoint, albeit with a decrease in spectral tunability. We show that it will be possible to cover the photon energy range from 3 keV to 13 keV by using four different planes of the same crystal with one rotational degree of freedom.

  5. Wake monochromator in asymmetric and symmetric Bragg and Laue geometry for self-seeding the European X-ray FEL

    Energy Technology Data Exchange (ETDEWEB)

    Geloni, Gianluca [European XFEL GmbH, Hamburg (Germany); Kocharyan, Vitali; Saldin, Evgeni; Serkez, Svitozar; Tolkiehn, Martin [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany)

    2013-01-15

    We discuss the use of self-seeding schemes with wake monochromators to produce TW-power, fully coherent pulses for applications at the dedicated bio-imaging beamline at the European X-ray FEL, a concept for an upgrade of the facility beyond the baseline previously proposed by the authors. We exploit the asymmetric and symmetric Bragg and Laue reflections (sigma polarization) in a diamond crystal. Optimization of the bio-imaging beamline is performed with extensive start-to-end simulations, which also take into account effects such as the spatio-temporal coupling caused by the wake monochromator. The spatial shift is maximal in the range of small Bragg angles. A geometry with Bragg angles close to π/2 would be a more advantageous option from this viewpoint, albeit with a decrease in spectral tunability. We show that it will be possible to cover the photon energy range from 3 keV to 13 keV by using four different planes of the same crystal with one rotational degree of freedom.
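
    The quoted 3-13 keV range is consistent with Bragg's law for diamond reflections. Assuming the (111) planes as one of the four (the abstract does not enumerate them), backscattering at θ → π/2 selects about 3.0 keV, the bottom of the range:

    ```python
    import math

    HC_KEV_A = 12.39842  # h*c in keV·Å
    A_DIAMOND = 3.567    # diamond lattice constant in Å

    def d_spacing(h, k, l, a=A_DIAMOND):
        """d-spacing of a cubic (hkl) plane: d = a / sqrt(h² + k² + l²)."""
        return a / math.sqrt(h * h + k * k + l * l)

    def photon_energy_kev(theta_deg, hkl=(1, 1, 1)):
        """Photon energy selected at Bragg angle θ: E = hc / (2·d·sin θ)."""
        return HC_KEV_A / (2 * d_spacing(*hkl) * math.sin(math.radians(theta_deg)))
    ```

    Rotating to smaller Bragg angles, or switching to higher-index planes with smaller d-spacing, raises the selected energy toward the upper end of the range.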

  6. Design and analysis of a high heat load pin-post monochromator crystal with an integral water manifold

    Energy Technology Data Exchange (ETDEWEB)

    Schildkamp, W. [Consortium for Advanced Radiation Sources, University of Chicago, 5640 S. Ellis Ave., Chicago, IL 60637 (United States); Tonnessen, T. [Rocketdyne Albuquerque Operations, 2511 C. Broadbent Parkway, N.E., Albuquerque, NM 87107 (United States)

    1996-09-01

    Conventional minichannel water cooling geometry will not perform satisfactorily for x-radiation from a wiggler source at the Advanced Photon Source. For closed-gap wiggler operation, cryogenic silicon appears to be the only option for crystals in Bragg-Bragg geometry. For operation of the wiggler at more modest critical energies (<17 keV), the first crystal can be cooled by a pin-post cooling scheme, using water at room temperature as a coolant. In order to limit the water consumption to 4 gpm and hence the risk of introducing vibrations to the crystal, the intensely cooled area of the crystal was matched to the footprint of the beam, leaving a less cooled area of the crystal subject to survival in a missteered beam but not to perform as a monochromator. The manifold design avoids large areas of high water pressure that would bow the crystal. We present here the design of a pin-post monochromator consisting of a four-layer silicon manifold system and an integrally bonded 39% nickel-iron alloy base plate. A transparent prototype of the design will be exhibited. Fabrication techniques and design advantages will be discussed. © 1996 American Institute of Physics.

  7. A medium-resolution monochromator for 73 keV x-rays - Nuclear resonant scattering of synchrotron radiation from 193-Ir

    CERN Document Server

    Alexeev, Pavel; Wille, Hans-Christian; Sergeev, Ilya; Herlitschke, Marcus; Leupold, Olaf; McMorrow, Desmond F; Röhlsberger, Ralf

    2016-01-01

    We report on the development and characterization of a medium-resolution monochromator for synchrotron-based hyperfine spectroscopy at the 73 keV nuclear resonance of 193-Ir. The device provides a high throughput of 6×10^8 ph/s in an energy bandwidth of 300(20) meV. We excited the nuclear resonance in 193-Ir at 73.04 keV and observed nuclear fluorescence of 193-Ir in iridium metal. The monochromator allows for nuclear forward scattering spectroscopy on Ir and its compounds.

  8. Comparison of detectability of a simple object with low contrast displayed on a high-brightness color LCD and a monochrome LCD.

    Science.gov (United States)

    Takahashi, Keita; Morishita, Junji; Hiwasa, Takeshi; Hatanaka, Shiro; Sakai, Shuji; Hashimoto, Noriyuki; Nakamura, Yasuhiko; Toyofuku, Fukai; Higashida, Yoshiharu; Ohki, Masafumi

    2010-07-01

    The goal of this study was to investigate the effect of different luminance settings of a high-brightness color liquid-crystal display (LCD) on the detectability of a simple grayscale object with low contrast, by use of receiver operating characteristic (ROC) analysis. The detectability achieved with a high-brightness color LCD at two maximum-luminance settings (500 and 170 cd/m^2) was compared with that of a monochrome LCD (500 cd/m^2). The two LCDs used in this study were calibrated to the grayscale standard display function. The average areas under the ROC curve (AUCs) and standard deviations over all thirteen observers for the 500 cd/m^2 color LCD, the 500 cd/m^2 monochrome LCD, and the 170 cd/m^2 color LCD were 0.937 ± 0.040, 0.924 ± 0.056, and 0.915 ± 0.068, respectively. There were no statistically significant differences in the average AUCs among the three LCD monitor conditions. On the other hand, the total observation time for the 170 cd/m^2 color LCD was significantly shorter than that for the 500 cd/m^2 color and monochrome LCDs (p …). The color LCD provided a performance comparable to the monochrome LCD for detection of a simple grayscale object with low contrast.
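
    The AUC figures quoted above can be read as a rank statistic: the probability that a randomly chosen signal-present rating exceeds a signal-absent one, with ties counted as half. A minimal empirical sketch (the observer ratings below are hypothetical, not the study's data):

    ```python
    def auc(signal_scores, noise_scores):
        """Empirical AUC: fraction of (signal, noise) rating pairs ranked
        correctly; ties contribute 0.5. Equivalent to the Mann-Whitney U
        statistic normalized by the number of pairs."""
        wins = sum(1.0 if s > n else 0.5 if s == n else 0.0
                   for s in signal_scores for n in noise_scores)
        return wins / (len(signal_scores) * len(noise_scores))
    ```

    An AUC of 0.5 corresponds to chance performance and 1.0 to perfect separation of signal-present from signal-absent trials.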

  9. First experimental test of a new monochromated and aberration-corrected 200 kV field-emission scanning transmission electron microscope.

    Science.gov (United States)

    Walther, T; Quandt, E; Stegmann, H; Thesen, A; Benner, G

    2006-01-01

    The first 200 kV scanning transmission electron microscope (STEM) with an imaging energy filter, a monochromator and a corrector for the spherical aberration (Cs-corrector) of the illumination system has been built and tested. The STEM/TEM concept with Koehler illumination makes it easy to switch between STEM mode for analytical studies and TEM mode for high-resolution or in situ studies. The Cs-corrector allows the use of large illumination angles, retaining a sufficiently high beam current despite the intensity loss in the monochromator. With the monochromator on and a 3 µm slit in the dispersion plane, giving 0.26 eV full-width at half-maximum (FWHM) energy resolution, we have so far obtained an electron beam smaller than 0.20 nm in diameter (FWHM, as measured by scanning the spot quickly over the CCD) that contains 7 pA of current and, according to simulations, should be around 0.12 nm in true size. A high-angle annular dark-field (ADF) image with isotropic resolution better than 0.28 nm has been recorded with the monochromator in the above configuration and the Cs-corrector on. The beam current is still somewhat low for electron energy-loss spectroscopy (EELS) but is expected to increase substantially on optimising the condenser set-up and using a somewhat larger condenser aperture.

  10. First experimental test of a new monochromated and aberration-corrected 200 kV field-emission scanning transmission electron microscope

    Energy Technology Data Exchange (ETDEWEB)

    Walther, T. [Center of Advanced European Studies and Research (caesar), Ludwig-Erhard-Allee 2, D-53175 Bonn (Germany)]. E-mail: walther@caesar.de; Quandt, E. [Center of Advanced European Studies and Research (caesar), Ludwig-Erhard-Allee 2, D-53175 Bonn (Germany); Stegmann, H. [Carl Zeiss Nano Technology Systems GmbH, Carl-Zeiss-Str. 56, D-73447 Oberkochen (Germany); Thesen, A. [Carl Zeiss Nano Technology Systems GmbH, Carl-Zeiss-Str. 56, D-73447 Oberkochen (Germany); Benner, G. [Carl Zeiss Nano Technology Systems GmbH, Carl-Zeiss-Str. 56, D-73447 Oberkochen (Germany)

    2006-10-15

    The first 200 kV scanning transmission electron microscope (STEM) with an imaging energy filter, a monochromator and a corrector for the spherical aberration (Cs-corrector) of the illumination system has been built and tested. The STEM/TEM concept with Koehler illumination makes it easy to switch between STEM mode for analytical studies and TEM mode for high-resolution or in situ studies. The Cs-corrector allows the use of large illumination angles, retaining a sufficiently high beam current despite the intensity loss in the monochromator. With the monochromator on and a 3 µm slit in the dispersion plane, giving 0.26 eV full-width at half-maximum (FWHM) energy resolution, we have so far obtained an electron beam smaller than 0.20 nm in diameter (FWHM, as measured by scanning the spot quickly over the CCD) that contains 7 pA of current and, according to simulations, should be around 0.12 nm in true size. A high-angle annular dark-field (ADF) image with isotropic resolution better than 0.28 nm has been recorded with the monochromator in the above configuration and the Cs-corrector on. The beam current is still somewhat low for electron energy-loss spectroscopy (EELS) but is expected to increase substantially on optimising the condenser set-up and using a somewhat larger condenser aperture.

  11. Interface of the transport systems research vehicle monochrome display system to the digital autonomous terminal access communication data bus

    Science.gov (United States)

    Easley, W. C.; Tanguy, J. S.

    1986-01-01

    An upgrade of the transport systems research vehicle (TSRV) experimental flight system retained the original monochrome display system. The original host computer was replaced with a Norden 11/70, and a new digital autonomous terminal access communication (DATAC) data bus was installed for data transfer between the display system and the host, so a new data interface method was required. The new display data interface uses four split-phase bipolar (SPBP) serial busses. The DATAC bus uses a shared interface RAM (SIR) for intermediate storage of its data transfers. A display interface unit (DIU) was designed and configured to read from and write to the SIR in order to convert the data from parallel to SPBP serial and vice versa. Separating the data for use by each SPBP bus and synchronizing data transfer throughout the entire experimental flight system were found to be the major problems requiring solution in the DIU design. The techniques used to meet these new data interface requirements are described.

  12. Vibratory response of a precision double-multi-layer monochromator positioning system using a generic modeling program with experimental verification.

    Energy Technology Data Exchange (ETDEWEB)

    Barraza, J.

    1998-07-29

    A generic vibratory response-modeling program has been developed as a tool for designing high-precision optical positioning systems. The systems are modeled as rigid-body structures connected by linear non-rigid elements such as complex actuators and bearings. The full dynamic properties of each non-rigid element are determined experimentally or theoretically, then integrated into the program as inertial and stiffness matrices. Thus, it is possible to have a suite of standardized structural elements for modeling many different positioning systems that use standardized components. This paper presents the application of this program to a double-multi-layer monochromator positioning system that utilizes standardized components. Calculated results are compared with experimental modal analysis results.

  13. Diffusion MRI

    Science.gov (United States)

    Fukuyama, Hidenao

    Recent advances in magnetic resonance imaging are described, with particular emphasis on diffusion sequences. We have recently applied the diffusion sequence to functional brain imaging and obtained appropriate results. Beyond the neurosciences, diffusion-weighted images have improved the accuracy of clinical diagnoses based on magnetic resonance imaging in stroke as well as in inflammatory disease.

  14. Measurement of vibrational spectrum of liquid using monochromated scanning transmission electron microscopy-electron energy loss spectroscopy.

    Science.gov (United States)

    Miyata, Tomohiro; Fukuyama, Mao; Hibara, Akihide; Okunishi, Eiji; Mukai, Masaki; Mizoguchi, Teruyasu

    2014-10-01

    Investigations of the dynamic behavior of molecules in liquids at high spatial resolution are greatly desired, because localized regions, such as solid-liquid interfaces or sites of reacting molecules, have assumed increasing importance with respect to improving material performance. For application to liquids, electron energy loss spectroscopy (EELS) observed with transmission electron microscopy (TEM) is a promising analytical technique with the appropriate resolutions. In this study, we obtained EELS spectra from an ionic liquid, 1-ethyl-3-methylimidazolium bis(trifluoromethyl-sulfonyl)imide (C2mim-TFSI), chosen as the sampled liquid, using monochromated scanning TEM (STEM). The molecular vibrational spectrum and the highest occupied molecular orbital (HOMO)-lowest unoccupied molecular orbital (LUMO) gap of the liquid were investigated. The measured HOMO-LUMO gap coincided with that obtained from the ultraviolet-visible spectrum. A shoulder observed in the spectrum at ∼0.4 eV is believed to originate from molecular vibration. From a separately performed infrared measurement and first-principles calculations, we found that this shoulder coincided with the vibrational peak attributed to the C-H stretching vibration of the [C2mim(+)] cation. This study demonstrates that a vibrational peak of a liquid can be observed using monochromated STEM-EELS, and opens the way to observing chemical reactions and analyzing the dynamic behavior of molecules in liquids. © The Author 2014. Published by Oxford University Press on behalf of The Japanese Society of Microscopy. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
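
    The assignment of the ∼0.4 eV shoulder to C-H stretching is consistent with a simple unit conversion: a typical C-H stretch wavenumber near 2950 cm^-1 (a textbook value, not a number from the paper) corresponds to roughly 0.37 eV:

    ```python
    EV_PER_INV_CM = 1.239842e-4  # h*c expressed in eV·cm

    def wavenumber_to_ev(nu_cm):
        """Convert a vibrational wavenumber (cm^-1) to energy (eV):
        E = h·c·ν̃."""
        return nu_cm * EV_PER_INV_CM
    ```

    The assumed 2950 cm^-1 value lands within the reported shoulder position, which is what the paper's infrared comparison also found.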

  15. [Survey in hospitals. Nursing errors, error culture and error management].

    Science.gov (United States)

    Habermann, Monika; Cramer, Henning

    2010-09-01

    Knowledge about errors is important for designing safe nursing practice and its framework. This article presents results of a survey on this topic, based on a representative sample of 724 nurses from 30 German hospitals. Participants predominantly remembered medication errors. Structural and organizational factors were rated as the most important causes of errors. Reporting rates were considered low; this was explained by organizational barriers. A large proportion of nurses reported having suffered from mental distress after error events. Nurses' focus on medication errors appears to be influenced by current discussions, which are mainly medication-related; this priority should be revised. Hospitals' risk management should concentrate on organizational deficits and on positive error cultures. Decision makers are requested to tackle structural problems such as staff shortage.

  16. Nano-metrology: The art of measuring X-ray mirrors with slope errors <100 nrad

    Energy Technology Data Exchange (ETDEWEB)

    Alcock, Simon G., E-mail: simon.alcock@diamond.ac.uk; Nistea, Ioana; Sawhney, Kawal [Diamond Light Source Ltd., Harwell Science and Innovation Campus, Didcot, Oxfordshire OX11 0DE (United Kingdom)

    2016-05-15

    We present a comprehensive investigation of the systematic and random errors of the nano-metrology instruments used to characterize synchrotron X-ray optics at Diamond Light Source. With experimental skill and careful analysis, we show that these instruments used in combination are capable of measuring state-of-the-art X-ray mirrors. Examples are provided of how Diamond metrology data have helped to achieve slope errors of <100 nrad for optical systems installed on synchrotron beamlines, including: iterative correction of substrates using ion beam figuring and optimal clamping of monochromator grating blanks in their holders. Simulations demonstrate how random noise from the Diamond-NOM’s autocollimator adds into the overall measured value of the mirror’s slope error, and thus predict how many averaged scans are required to accurately characterize different grades of mirror.

  17. Nano-metrology: The art of measuring X-ray mirrors with slope errors <100 nrad.

    Science.gov (United States)

    Alcock, Simon G; Nistea, Ioana; Sawhney, Kawal

    2016-05-01

    We present a comprehensive investigation of the systematic and random errors of the nano-metrology instruments used to characterize synchrotron X-ray optics at Diamond Light Source. With experimental skill and careful analysis, we show that these instruments used in combination are capable of measuring state-of-the-art X-ray mirrors. Examples are provided of how Diamond metrology data have helped to achieve slope errors of <100 nrad for optical systems installed on synchrotron beamlines, including: iterative correction of substrates using ion beam figuring and optimal clamping of monochromator grating blanks in their holders. Simulations demonstrate how random noise from the Diamond-NOM's autocollimator adds into the overall measured value of the mirror's slope error, and thus predict how many averaged scans are required to accurately characterize different grades of mirror.
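
    The final point, that averaging scans suppresses the autocollimator's random noise, follows from uncorrelated noise scaling as 1/√N. A small sketch of how many scans a target slope-error uncertainty implies (the noise values below are illustrative, not Diamond-NOM specifications):

    ```python
    import math

    def scans_required(sigma_single, target_sigma):
        """Number of averaged scans needed to reduce an uncorrelated
        single-scan noise level sigma_single to target_sigma, using
        sigma_N = sigma_single / sqrt(N)."""
        return math.ceil((sigma_single / target_sigma) ** 2)
    ```

    For example, halving the random contribution costs a factor of four in scans, which is why characterizing the best mirror grades requires many averaged passes.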

  18. Spatial Mapping of Translational Diffusion Coefficients Using Diffusion Tensor Imaging: A Mathematical Description.

    Science.gov (United States)

    Shetty, Anil N; Chiang, Sharon; Maletic-Savatic, Mirjana; Kasprian, Gregor; Vannucci, Marina; Lee, Wesley

    2014-01-01

In this article, we discuss the theoretical background for diffusion weighted imaging and diffusion tensor imaging. Molecular diffusion is a random process involving thermal Brownian motion. In biological tissues, the underlying microstructures restrict the diffusion of water molecules, making diffusion directionally dependent. Water diffusion in tissue is mathematically characterized by the diffusion tensor, the elements of which contain information about the magnitude and direction of diffusion and are functions of the coordinate system. Thus, it is possible to generate contrast in tissue based primarily on diffusion effects. Expressing diffusion in terms of the measured diffusion coefficient (eigenvalue) in any one direction can lead to errors. Nowhere is this more evident than in white matter, due to the preferential orientation of myelin fibers. The directional dependency is removed by diagonalization of the diffusion tensor, which then yields a set of three eigenvalues and eigenvectors, representing the magnitude and direction of the three orthogonal axes of the diffusion ellipsoid, respectively. For example, the eigenvalue corresponding to the eigenvector along the long axis of the fiber corresponds qualitatively to diffusion with least restriction. Determination of the principal values of the diffusion tensor and various anisotropic indices provides structural information. We review the use of diffusion measurements using the modified Stejskal-Tanner diffusion equation. The anisotropy is analyzed by decomposing the diffusion tensor based on symmetrical properties describing the geometry of the diffusion tensor. We further describe diffusion tensor properties in visualizing fiber tract organization of the human brain.
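The diagonalization described in this abstract can be illustrated in a few lines. The tensor values below are invented for illustration (in units of 10⁻³ mm²/s), and the fractional-anisotropy formula is the standard rotation-invariant one, not something specific to this paper:

```python
import numpy as np

# Hypothetical diffusion tensor for a white-matter voxel, units 1e-3 mm^2/s
D = np.array([[1.70, 0.10, 0.00],
              [0.10, 0.30, 0.05],
              [0.00, 0.05, 0.25]])

# Diagonalization removes the directional dependency:
evals, evecs = np.linalg.eigh(D)   # eigenvalues in ascending order
evals = evals[::-1]                # sort so that lambda1 >= lambda2 >= lambda3

md = evals.mean()                  # mean diffusivity (rotation invariant)
fa = np.sqrt(1.5 * ((evals - md) ** 2).sum() / (evals ** 2).sum())
# The eigenvector for lambda1 points along the least-restricted direction,
# e.g. along the long axis of a myelinated fiber bundle.
```

Because the trace is invariant under rotation, the mean diffusivity equals one third of the trace of D regardless of the coordinate system in which D was measured.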

  19. Spatial Mapping of Translational Diffusion Coefficients Using Diffusion Tensor Imaging: A Mathematical Description

    Science.gov (United States)

    SHETTY, ANIL N.; CHIANG, SHARON; MALETIC-SAVATIC, MIRJANA; KASPRIAN, GREGOR; VANNUCCI, MARINA; LEE, WESLEY

    2016-01-01

In this article, we discuss the theoretical background for diffusion weighted imaging and diffusion tensor imaging. Molecular diffusion is a random process involving thermal Brownian motion. In biological tissues, the underlying microstructures restrict the diffusion of water molecules, making diffusion directionally dependent. Water diffusion in tissue is mathematically characterized by the diffusion tensor, the elements of which contain information about the magnitude and direction of diffusion and are functions of the coordinate system. Thus, it is possible to generate contrast in tissue based primarily on diffusion effects. Expressing diffusion in terms of the measured diffusion coefficient (eigenvalue) in any one direction can lead to errors. Nowhere is this more evident than in white matter, due to the preferential orientation of myelin fibers. The directional dependency is removed by diagonalization of the diffusion tensor, which then yields a set of three eigenvalues and eigenvectors, representing the magnitude and direction of the three orthogonal axes of the diffusion ellipsoid, respectively. For example, the eigenvalue corresponding to the eigenvector along the long axis of the fiber corresponds qualitatively to diffusion with least restriction. Determination of the principal values of the diffusion tensor and various anisotropic indices provides structural information. We review the use of diffusion measurements using the modified Stejskal–Tanner diffusion equation. The anisotropy is analyzed by decomposing the diffusion tensor based on symmetrical properties describing the geometry of the diffusion tensor. We further describe diffusion tensor properties in visualizing fiber tract organization of the human brain. PMID:27441031

  20. Backscattering analyzer geometry as a straightforward and precise method for monochromator characterization at third-generation synchrotron-radiation sources (abstract)

    Science.gov (United States)

    Snigirev, A. A.; Lequien, S.; Suvorov, A. Yu.

    1995-02-01

With the advent of the third generation of synchrotron-radiation sources, insertion devices (IDs) are going to be used extensively. The choice of the ID field configuration allows the optimization of the photon flux at the desired energy. This attractive situation results in a much higher flux on optical elements, mainly on monochromators, for which new cooling schemes have to be developed. The latter must be characterized under operating conditions, and generally the figure of merit for monochromators is the rocking curve (RC) measurement. By varying the ID field, the monochromator may be fully characterized with regard to the heat load. To achieve this aim, we have proposed and tested a double-crystal setup where a Si analyzer crystal installed in backscattering geometry (BSG) is coupled with a silicon p-i-n photodiode as the detection system (Fig. 1). The analyzer was a standard Si wafer of (111) orientation, from which we used the following Bragg reflections: 333, 444, 555, 777, 888, 999, ... to measure the RCs of monochromators keeping the analyzer fixed. We were then able to probe the monochromators at the respective energies 5.9, 7.9, 9.9, 13.8, 15.8, 17.8 keV, etc. Setting the analyzer crystal in BSG, we get severalfold benefits from the method: (1) a very good angular resolution (~10^-6 rad) when one combines the BSG analyzer with narrow slits (~100 μm); (2) a high energy resolution, yielding a calibration of the monochromator with an accuracy better than 1 eV; (3) the analyzer crystal attenuates the reflected intensity, which avoids the use of any scatterer foil to count the number of photons. We directly used photodiodes, which are well known to respond linearly to radiation intensities and to have a high dynamic range (more than 6 decades). (4) No fine mechanics is needed for the analyzer; a simple manual turntable can be used to set the analyzer in BSG through the utilization of a laser beam. Results on different tests for operating liquid-N2
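The analyzer energies quoted in this abstract follow directly from Bragg's law at θ = 90°, E = hc/(2·d_hkl). A quick check for the Si(nnn) harmonics (silicon lattice constant assumed to be 5.4309 Å):

```python
import math

HC = 12.3984   # h*c in keV·Angstrom
A_SI = 5.4309  # silicon lattice constant in Angstrom (assumed value)

def backscatter_energy_kev(h, k, l):
    """Photon energy selected by an (hkl) reflection in exact backscattering,
    i.e. Bragg's law E = hc / (2 d sin(theta)) with theta = 90 degrees."""
    d = A_SI / math.sqrt(h * h + k * k + l * l)  # cubic-lattice d-spacing
    return HC / (2.0 * d)

for n in (3, 4, 5, 7, 8, 9):
    print(f"Si({n}{n}{n}): {backscatter_energy_kev(n, n, n):.1f} keV")
```

The loop reproduces the 5.9, 7.9, 9.9, 13.8, 15.8 and 17.8 keV probe energies listed in the abstract.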

  1. Performance of synchrotron X-ray monochromators under heat load Part 1 finite element modeling

    CERN Document Server

    Zhang, L; Migliore, J S; Mocella, V; Ferrero, C; Freund, A K

    2001-01-01

    In this paper we present the details of the finite element modeling (FEM) procedure used to calculate the thermal deformation generated by the X-ray power absorbed in silicon crystals. Different parameters were varied systematically such as the beam footprint on the crystal, the reflection order and the white beam slit settings. Moreover, the influence of various cooling parameters such as the cooling coefficient and the temperature of the coolant were studied. The finite element meshing was carefully optimized to generate a deformation output that could be easily read by a diffraction simulation code. Comparison with the experiments shows that the peak-to-valley slope error calculated by the FEM is an excellent approximation of the rocking curve width for a liquid nitrogen cooled silicon (3 3 3) crystal, and a quite good approximation for significantly deformed silicon (1 1 1) crystals.

  2. Generalized Gaussian Error Calculus

    CERN Document Server

    Grabe, Michael

    2010-01-01

For the first time in 200 years, Generalized Gaussian Error Calculus addresses a rigorous, complete and self-consistent revision of the Gaussian error calculus. Since experimentalists realized that measurements in general are burdened by unknown systematic errors, the classical, widely used evaluation procedures, which scrutinize the consequences of random errors alone, have turned out to be obsolete. As a matter of course, the error calculus to be, treating random and unknown systematic errors side by side, should ensure the consistency and traceability of physical units, physical constants and physical quantities at large. The generalized Gaussian error calculus considers unknown systematic errors to spawn biased estimators. In addition, random errors are required to conform to the idea of what the author calls well-defined measuring conditions. The approach features the properties of a building kit: any overall uncertainty turns out to be the sum of a contribution due to random errors, to be taken from a confidence inter...
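The "building kit" combination sketched in this abstract — a random contribution plus a separate bound for the unknown systematic error, added rather than folded together in quadrature — can be written out as a toy example. The function and the numbers below are invented for illustration and do not come from the book:

```python
import math

def total_uncertainty(samples, systematic_bound):
    """Toy combination of random and unknown systematic error:
    random part   = standard error of the mean (a Student t factor
                    could additionally be applied),
    systematic part = worst-case bound |f| <= systematic_bound,
                      added arithmetically, not in quadrature."""
    n = len(samples)
    mean = sum(samples) / n
    s = math.sqrt(sum((x - mean) ** 2 for x in samples) / (n - 1))
    return s / math.sqrt(n) + systematic_bound

# Four repeated readings plus a +/-0.05 systematic bound (made-up data):
u = total_uncertainty([9.9, 10.1, 10.0, 10.0], 0.05)
```

The arithmetic addition keeps the systematic bound from being diluted by averaging: unlike the random part, it does not shrink as more readings are taken.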

  3. Minimizing Grating Slope Errors in the IEX Monochromator at the Advanced Photon Source

    Energy Technology Data Exchange (ETDEWEB)

    Fisher, M. V.; Assoufid, L.; McChesney, J.; Qian, J.; Reininger, R.; Rodolakis, F.

    2016-01-01

    The IEX beamline at the APS is currently in the commissioning phase. The energy resolution of the beamline was not meeting original specifications by several orders of magnitude. The monochromator, an in-focus VLS-PGM, is currently configured with a high and a medium-line-density grating. Experimental results indicated that both gratings were contributing to the poor energy resolution and this led to venting the monochromator to investigate. The initial suspicion was that a systematic error had occurred in the ruling process on the VLS gratings, but that proved to not be the case. Instead the problem was isolated to mechanical constraints used to mount the gratings into their respective side-cooled holders. Modifications were made to the holders to eliminate problematic constraints without compromising the rest of the design. Metrology performed on the gratings in the original and modified holders demonstrated a 20-fold improvement in the surface profile error which was consistent with finite element analysis performed in support of the modifications. Two gratings were successfully reinstalled and subsequent measurements with beam show a dramatic improvement in energy resolution.

  4. Classification of Spreadsheet Errors

    OpenAIRE

    Rajalingham, Kamalasen; Chadwick, David R.; Knight, Brian

    2008-01-01

    This paper describes a framework for a systematic classification of spreadsheet errors. This classification or taxonomy of errors is aimed at facilitating analysis and comprehension of the different types of spreadsheet errors. The taxonomy is an outcome of an investigation of the widespread problem of spreadsheet errors and an analysis of specific types of these errors. This paper contains a description of the various elements and categories of the classification and is supported by appropri...

  5. Application of inductively coupled plasma atomic emission spectroscopy analysis with a polychromator/monochromator combination to the byproducts of coal-fired power stations

    Science.gov (United States)

    Weers, C. A.

    The by-products of coal-fired power plants may be hazardous for the environment. Good analysis methods are therefore required in order to establish either a possible use of the by-products or their safe storage. Preliminary experiments performed with inductively coupled plasma atomic emission spectroscopy have proven very successful, and the method is cost-effective. A short description is given of the optimized system for routine analysis, which consists of a 2- and a 15-channel polychromator in combination with a monochromator, and of the opportunities it provides. The use of the monochromator to analyze coal and run-off water from flue-gas desulphurization, and of the polychromators to analyze coal fly-ash, is described separately.

  6. A possibility of parallel and anti-parallel diffraction measurements on neutron diffractometer employing bent perfect crystal monochromator at the monochromatic focusing condition

    Indian Academy of Sciences (India)

    Yong Nam Choi; Shin Ae Kim; Sung Kyu Kim; Sung Baek Kim; Chang-Hee Lee; Pivel Mikula

    2004-07-01

    In a conventional diffractometer having a single monochromator, only one position, the parallel position, is used for the diffraction experiment (i.e. detection), because the resolution of the other, the anti-parallel position, is very poor. However, a bent perfect crystal (BPC) monochromator at the monochromatic focusing condition can provide a quite flat and equal resolution at both the parallel and anti-parallel positions, and thus one has the chance to use both sides for the diffraction experiment. From the FWHM and the / data measured in three diffraction geometries (symmetric, asymmetric compression and asymmetric expansion), we conclude that simultaneous diffraction measurements in both the parallel and anti-parallel positions can be achieved.

  7. 中子单色器模拟分析研究%Simulation and Analysis of Spectrum Selection Affected by Neutron Monochromator's Parameters

    Institute of Scientific and Technical Information of China (English)

    霍合勇; 唐科; 唐彬; 刘斌; 曹超

    2014-01-01

    To study how a monochromator selects the neutron spectrum, the MCSTAS package was used to simulate the influence of several characteristic parameters of a mechanical velocity selector and of a crystal monochromator on neutron energy selection. The results indicate that monochromatization with a mechanical velocity selector reduces the neutron flux by 1-2 orders of magnitude, whereas a crystal monochromator reduces it by 2-3 orders of magnitude. The velocity selector thus provides high neutron flux, with an energy width that broadens as the selected peak wavelength increases, while the crystal monochromator provides high energy resolution, with an energy width that narrows as the selected peak wavelength increases. A crystal monochromator is therefore suggested when high energy resolution is required, and a mechanical velocity selector when high neutron flux is required.

  8. Vaneless diffusers

    Science.gov (United States)

    Senoo, Y.

    The influence of vaneless diffusers on flow in centrifugal compressors, particularly on surge, is discussed. A vaneless diffuser can demonstrate stable operation in a wide flow range only if it is installed with a backward leaning blade impeller. The circumferential distortion of flow in the impeller disappears quickly in the vaneless diffuser. The axial distortion of flow at the diffuser inlet does not decay easily. In large specific speed compressors, flow out of the impeller is distorted axially. Pressure recovery of diffusers at distorted inlet flow is considerably improved by half guide vanes. The best height of the vanes is about half the diffuser width. In small specific speed compressors, flow out of the impeller is not much distorted and pressure recovery can be predicted with one-dimensional flow analysis. Wall friction loss is significant in narrow diffusers. The large pressure drop at a small flow rate can cause a positive gradient of the pressure-flow rate characteristic curve, which may cause surging.

  9. Reducing medication errors.

    Science.gov (United States)

    Nute, Christine

    2014-11-25

    Most nurses are involved in medicines management, which is integral to promoting patient safety. Medicines management is prone to errors, which depending on the error can cause patient injury, increased hospital stay and significant legal expenses. This article describes a new approach to help minimise drug errors within healthcare settings where medications are prescribed, dispensed or administered. The acronym DRAINS, which considers all aspects of medicines management before administration, was devised to reduce medication errors on a cardiothoracic intensive care unit.

  10. Demand Forecasting Errors

    OpenAIRE

    Mackie, Peter; Nellthorp, John; Laird, James

    2005-01-01

    Demand forecasts form a key input to the economic appraisal. As such any errors present within the demand forecasts will undermine the reliability of the economic appraisal. The minimization of demand forecasting errors is therefore important in the delivery of a robust appraisal. This issue is addressed in this note by introducing the key issues, and error types present within demand fore...

  11. When errors are rewarding

    NARCIS (Netherlands)

    Bruijn, E.R.A. de; Lange, F.P. de; Cramon, D.Y. von; Ullsperger, M.

    2009-01-01

    For social beings like humans, detecting one's own and others' errors is essential for efficient goal-directed behavior. Although one's own errors are always negative events, errors from other persons may be negative or positive depending on the social context. We used neuroimaging to disentangle br

  12. Systematic error revisited

    Energy Technology Data Exchange (ETDEWEB)

    Glosup, J.G.; Axelrod, M.C.

    1996-08-05

    The American National Standards Institute (ANSI) defines systematic error as "an error which remains constant over replicative measurements." It would seem from the ANSI definition that a systematic error is not really an error at all; it is merely a failure to calibrate the measurement system properly, because if the error is constant, why not simply correct for it? Yet systematic errors undoubtedly exist, and they differ in some fundamental way from the kind of errors we call random. Early papers by Eisenhart and by Youden discussed systematic versus random error with regard to measurements in the physical sciences, but not in a fundamental way, and the distinction remains clouded by controversy. The lack of general agreement on definitions has led to a plethora of different and often confusing methods of quantifying the total uncertainty of a measurement that incorporates both its systematic and random errors. Some assert that systematic error should be treated by non-statistical methods. We disagree with this approach; we provide basic definitions based on entropy concepts, and a statistical methodology for combining errors and making statements of total measurement uncertainty. We illustrate our methods with radiometric assay data.

  13. Noite e dia e alguns monocromos psíquicos Night and day - and some psychical monochromes

    Directory of Open Access Journals (Sweden)

    Edson Luiz André de Sousa

    2006-06-01

    Full Text Available The article presents a reading of Jack London's short story "The Shadow and the Flash", showing how the principle of mimesis operates in the process of identification. The expression "psychical monochromes" is proposed for those mental spaces in which self and Other are undifferentiated. The article adopts Caillois's thesis that the self is permeable to space. In this perspective, the theme of the double, extensively developed by Freud, is fundamental. Drawing on notes about the work of the blind photographer Bavcar, the article seeks to show some traits of the structure of the gaze. It closes by indicating possible connections between these reflections and psychoanalytic practice.

  14. Diffuse scattering

    Energy Technology Data Exchange (ETDEWEB)

    Kostorz, G. [Eidgenoessische Technische Hochschule, Angewandte Physik, Zurich (Switzerland)

    1996-12-31

    While Bragg scattering is characteristic for the average structure of crystals, static local deviations from the average lattice lead to diffuse elastic scattering around and between Bragg peaks. This scattering thus contains information on the occupation of lattice sites by different atomic species and on static local displacements, even in a macroscopically homogeneous crystalline sample. The various diffuse scattering effects, including those around the incident beam (small-angle scattering), are introduced and illustrated by typical results obtained for some Ni alloys. (author) 7 figs., 41 refs.

  15. Relativistic diffusion.

    Science.gov (United States)

    Haba, Z

    2009-02-01

    We discuss relativistic diffusion in proper time in the approach of Schay (Ph.D. thesis, Princeton University, Princeton, NJ, 1961) and Dudley [Ark. Mat. 6, 241 (1965)]. We derive (Langevin) stochastic differential equations in various coordinates. We show that in some coordinates the stochastic differential equations become linear. We obtain momentum probability distribution in an explicit form. We discuss a relativistic particle diffusing in an external electromagnetic field. We solve the Langevin equations in the case of parallel electric and magnetic fields. We derive a kinetic equation for the evolution of the probability distribution. We discuss drag terms leading to an equilibrium distribution. The relativistic analog of the Ornstein-Uhlenbeck process is not unique. We show that if the drag comes from a diffusion approximation to the master equation then its form is strongly restricted. The drag leading to the Tsallis equilibrium distribution satisfies this restriction whereas the one of the Jüttner distribution does not. We show that any function of the relativistic energy can be the equilibrium distribution for a particle in a static electric field. A preliminary study of the time evolution with friction is presented. It is shown that the problem is equivalent to quantum mechanics of a particle moving on a hyperboloid with a potential determined by the drag. A relation to diffusions appearing in heavy ion collisions is briefly discussed.
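The Langevin description in this abstract can be illustrated with the classical (non-relativistic) Ornstein-Uhlenbeck process that the paper's relativistic analog generalizes. This is a generic Euler-Maruyama sketch, not code from the paper, and the parameters are arbitrary:

```python
import math
import random

def ou_endpoint(x0, theta, sigma, dt, steps, rng):
    """Euler-Maruyama integration of the Langevin equation
    dx = -theta*x*dt + sigma*dW: the classical Ornstein-Uhlenbeck
    process, whose drag term -theta*x drives the distribution
    toward a Gaussian equilibrium."""
    x = x0
    for _ in range(steps):
        x += -theta * x * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
    return x

# The equilibrium variance should approach sigma^2 / (2*theta) = 0.5:
rng = random.Random(42)
xs = [ou_endpoint(0.0, 1.0, 1.0, 0.01, 500, rng) for _ in range(2000)]
var = sum(x * x for x in xs) / len(xs)
```

In the relativistic setting discussed above, the form of the drag term determines the equilibrium distribution (Tsallis vs. Jüttner); in this classical sketch the linear drag fixes it to be Gaussian.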

  16. Probabilistic quantum error correction

    CERN Document Server

    Fern, J; Fern, Jesse; Terilla, John

    2002-01-01

    There are well known necessary and sufficient conditions for a quantum code to correct a set of errors. We study weaker conditions under which a quantum code may correct errors with probabilities that may be less than one. We work with stabilizer codes and as an application study how the nine qubit code, the seven qubit code, and the five qubit code perform when there are errors on more than one qubit. As a second application, we discuss the concept of syndrome quality and use it to suggest a way that quantum error correction can be practically improved.
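A toy classical analogue makes the "correct with some probability" idea concrete: a 3-bit repetition code with majority vote still fails whenever two or more bits flip. This is a deliberately simplified stand-in for the quantum stabilizer codes discussed in the abstract, not an implementation of them:

```python
def logical_error_prob(p):
    """Probability that majority-vote decoding of a 3-bit repetition
    code fails, given an independent bit-flip probability p per bit:
    P(fail) = C(3,2) * p^2 * (1-p) + p^3."""
    return 3 * p * p * (1 - p) + p ** 3

# Below p = 0.5 the encoded error rate beats the raw rate:
assert logical_error_prob(0.1) < 0.1  # multi-bit errors remain possible
```

As in the multi-qubit analysis above, the code suppresses but does not eliminate errors: the residual failure probability is dominated by the weight-2 error patterns the decoder cannot distinguish from their complements.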

  17. A belief-propagation-based decoding method for two-dimensional barcodes with monochrome auxiliary lines robust against non-uniform geometric distortion

    Science.gov (United States)

    Kamizuru, Kohei; Nakamura, Kazuya; Kawasaki, Hiroshi; Ono, Satoshi

    2017-03-01

    Two-dimensional (2D) codes are widely used in various fields such as production, logistics, and marketing thanks to their larger capacity than one-dimensional barcodes. However, they are subject to distortion when printed on non-rigid materials, such as paper and cloth. Although general 2D code decoders correct uniform distortion such as perspective distortion, it is difficult to correct non-uniform and irregular distortion of the 2D code itself. This paper proposes a decoding method for such 2D codes, which models monochrome auxiliary line recognition as a Markov random field and solves it using belief propagation.

  18. Correction for quadrature errors

    DEFF Research Database (Denmark)

    Netterstrøm, A.; Christensen, Erik Lintz

    1994-01-01

    In high bandwidth radar systems it is necessary to use quadrature devices to convert the signal to/from baseband. Practical problems make it difficult to implement a perfect quadrature system. Channel imbalance and quadrature phase errors in the transmitter and the receiver result in error signal...

  19. ERRORS AND CORRECTION

    Institute of Scientific and Technical Information of China (English)

    1998-01-01

    To err is human. Since the 1960s, most second language teachers and language theorists have regarded errors as natural and inevitable in the language learning process. Instead of regarding them as terrible and disappointing, teachers have come to realize their value. This paper will consider these values, analyze some errors and propose some effective correction techniques.

  20. ERROR AND ERROR CORRECTION AT ELEMENTARY LEVEL

    Institute of Scientific and Technical Information of China (English)

    1994-01-01

    Introduction Errors are unavoidable in language learning, however, to a great extent, teachers in most middle schools in China regard errors as undesirable, a sign of failure in language learning. Most middle schools are still using the grammar-translation method which aims at encouraging students to read scientific works and enjoy literary works. The other goals of this method are to gain a greater understanding of the first language and to improve the students’ ability to cope with difficult subjects and materials, i.e. to develop the students’ minds. The practical purpose of using this method is to help learners pass the annual entrance examination. "To achieve these goals, the students must first learn grammar and vocabulary,... Grammar is taught deductively by means of long and elaborate explanations... students learn the rules of the language rather than its use." (Tang Lixing, 1983:11-12)

  1. Errors on errors - Estimating cosmological parameter covariance

    CERN Document Server

    Joachimi, Benjamin

    2014-01-01

    Current and forthcoming cosmological data analyses share the challenge of huge datasets alongside increasingly tight requirements on the precision and accuracy of extracted cosmological parameters. The community is becoming increasingly aware that these requirements not only apply to the central values of parameters but, equally important, also to the error bars. Due to non-linear effects in the astrophysics, the instrument, and the analysis pipeline, data covariance matrices are usually not well known a priori and need to be estimated from the data itself, or from suites of large simulations. In either case, the finite number of realisations available to determine data covariances introduces significant biases and additional variance in the errors on cosmological parameters in a standard likelihood analysis. Here, we review recent work on quantifying these biases and additional variances and discuss approaches to remedy these effects.

  2. Proofreading for word errors.

    Science.gov (United States)

    Pilotti, Maura; Chodorow, Martin; Agpawa, Ian; Krajniak, Marta; Mahamane, Salif

    2012-04-01

    Proofreading (i.e., reading text for the purpose of detecting and correcting typographical errors) is viewed as a component of the activity of revising text and thus is a necessary (albeit not sufficient) procedural step for enhancing the quality of a written product. The purpose of the present research was to test competing accounts of word-error detection which predict factors that may influence reading and proofreading differently. Word errors, which change a word into another word (e.g., from --> form), were selected for examination because they are unlikely to be detected by automatic spell-checking functions. Consequently, their detection still rests mostly in the hands of the human proofreader. Findings highlighted the weaknesses of existing accounts of proofreading and identified factors, such as length and frequency of the error in the English language relative to frequency of the correct word, which might play a key role in detection of word errors.

  3. Uncorrected refractive errors

    Directory of Open Access Journals (Sweden)

    Kovin S Naidoo

    2012-01-01

    Full Text Available Global estimates indicate that more than 2.3 billion people in the world suffer from poor vision due to refractive error; of which 670 million people are considered visually impaired because they do not have access to corrective treatment. Refractive errors, if uncorrected, result in an impaired quality of life for millions of people worldwide, irrespective of their age, sex and ethnicity. Over the past decade, a series of studies using a survey methodology, referred to as Refractive Error Study in Children (RESC), were performed in populations with different ethnic origins and cultural settings. These studies confirmed that the prevalence of uncorrected refractive errors is considerably high for children in low-and-middle-income countries. Furthermore, uncorrected refractive error has been noted to have extensive social and economic impacts, such as limiting educational and employment opportunities of economically active persons, healthy individuals and communities. The key public health challenges presented by uncorrected refractive errors, the leading cause of vision impairment across the world, require urgent attention. To address these issues, it is critical to focus on the development of human resources and sustainable methods of service delivery. This paper discusses three core pillars to addressing the challenges posed by uncorrected refractive errors: Human Resource (HR) Development, Service Development and Social Entrepreneurship.

  4. Uncorrected refractive errors.

    Science.gov (United States)

    Naidoo, Kovin S; Jaggernath, Jyoti

    2012-01-01

    Global estimates indicate that more than 2.3 billion people in the world suffer from poor vision due to refractive error; of which 670 million people are considered visually impaired because they do not have access to corrective treatment. Refractive errors, if uncorrected, result in an impaired quality of life for millions of people worldwide, irrespective of their age, sex and ethnicity. Over the past decade, a series of studies using a survey methodology, referred to as Refractive Error Study in Children (RESC), were performed in populations with different ethnic origins and cultural settings. These studies confirmed that the prevalence of uncorrected refractive errors is considerably high for children in low-and-middle-income countries. Furthermore, uncorrected refractive error has been noted to have extensive social and economic impacts, such as limiting educational and employment opportunities of economically active persons, healthy individuals and communities. The key public health challenges presented by uncorrected refractive errors, the leading cause of vision impairment across the world, require urgent attention. To address these issues, it is critical to focus on the development of human resources and sustainable methods of service delivery. This paper discusses three core pillars to addressing the challenges posed by uncorrected refractive errors: Human Resource (HR) Development, Service Development and Social Entrepreneurship.

  5. Errors in Radiologic Reporting

    Directory of Open Access Journals (Sweden)

    Esmaeel Shokrollahi

    2010-05-01

    Full Text Available Given that the report is a professional document and bears the associated responsibilities, all of the radiologist's errors appear in it, either directly or indirectly. It is not easy to distinguish and classify the mistakes made when a report is prepared, because in most cases the errors are complex and attributable to more than one cause, and because many errors depend on the individual radiologist's professional, behavioral and psychological traits. In fact, anyone can make a mistake, but some radiologists make more mistakes, and some types of mistakes are predictable to some extent. Reporting errors can be categorized in different ways: universal vs. individual; human related vs. system related; perceptive vs. cognitive. Errors may be descriptive, interpretative or decision related. Perceptive errors comprise false positives and false negatives (non-identification or erroneous identification); cognitive errors may be knowledge-based or psychological.

  6. Errors in neuroradiology.

    Science.gov (United States)

    Caranci, Ferdinando; Tedeschi, Enrico; Leone, Giuseppe; Reginelli, Alfonso; Gatta, Gianluca; Pinto, Antonio; Squillaci, Ettore; Briganti, Francesco; Brunese, Luca

    2015-09-01

    Approximately 4 % of radiologic interpretations in daily practice contain errors, and discrepancies occur in 2-20 % of reports. Fortunately, most of these are minor errors or, if serious, are found and corrected with sufficient promptness; obviously, diagnostic errors become critical when misinterpretation or misidentification significantly delays medical or surgical treatment. Errors can be grouped into four main categories: observer errors, errors in interpretation, failure to suggest the next appropriate procedure, and failure to communicate in a timely and clinically appropriate manner. The misdiagnosis/misinterpretation rate rises in the emergency setting and in the early stages of the learning curve, as in residency. Para-physiological and pathological pitfalls in neuroradiology include calcifications and brain stones, pseudofractures, enlargement of the subarachnoid or epidural spaces, ventricular system abnormalities, vascular system abnormalities, intracranial lesions or pseudolesions, and finally neuroradiological emergencies. In order to minimize the possibility of error, it is important to be aware of the various presentations of pathology, obtain clinical information, know current practice guidelines, review after interpreting a diagnostic study, suggest follow-up studies when appropriate, and communicate significant abnormal findings appropriately and in a timely fashion directly to the treatment team.

  7. Synchrotron X-ray adaptive monochromator: study and realization of a prototype; Monochromateur adaptatif pour rayonnement X synchrotron: etude et realisation d'un prototype

    Energy Technology Data Exchange (ETDEWEB)

    Dezoret, D.

    1995-12-12

    This work presents the study of a prototype synchrotron X-ray monochromator. The spectral qualities of such an optic are sensitive to heat loads, which are particularly severe at third-generation synchrotron sources such as the ESRF. Indeed, the power delivered by synchrotron beams can reach a few kilowatts, with power densities of a few tens of watts per square millimetre. The resulting mechanical deformation of the optical elements of a beamline can degrade their spectral efficiency. In order to compensate for these deformations, we have studied the transposition of adaptive-optics technology from astronomy to the X-ray domain. First, we considered the modifications of the spectral characteristics of a crystal induced by X-rays, and we established the specifications required for a technological realisation. Thermomechanical and technological studies were then needed to adapt the astronomical technology to X-rays. Following these studies, we began the realisation of a prototype. This monochromator consists of a silicon (111) crystal bonded onto a piezoelectric structure. The mechanical control is a closed-loop system comprising an infrared light source and a Shack-Hartmann CCD wavefront analyser. This system has to compensate for the deformations of the crystal over the 5 keV to 60 keV energy range at a power density of 1 watt per square millimetre. (authors).

  8. A spherical grating monochromator and beamline optimised for the provision of polarised synchrotron radiation in the photon energy range 20-200 eV

    Energy Technology Data Exchange (ETDEWEB)

    Finetti, P.; Holland, D.M.P. E-mail: d.m.p.holland@dl.ac.uk; Latimer, C.J.; Binns, C.; Quinn, F.M.; Bowler, M.A.; Grant, A.F.; Mythen, C.S

    2001-12-01

    The design and performance of a spherical grating monochromator and beamline optimised for experiments requiring polarised radiation are described. The beamline is mounted on a bending magnet source at the Synchrotron Radiation Source at Daresbury Laboratory, and the monochromator incorporates three gratings to cover the photon energy range 20-200 eV. The relative first- and higher-order grating efficiencies have been measured by means of photoelectron spectroscopy and have been compared to theoretical predictions. A movable aperture, placed in the optical path between the source and the first mirror, defines the photon emission directions of the beam entering the beamline. The polarisation of the radiation leaving the beamline is determined both by the vertical position of this aperture and by the modifications introduced by the beamline geometry and the optical components. The modification to the polarisation is difficult to calculate analytically, and a satisfactory quantitative assessment can only be accomplished through a combination of reflectivity and ray-tracing analysis. A reflection polarimeter has been used to obtain a full characterisation of the polarisation in the energy range 20-40 eV. These measurements have enabled the Stokes parameters to be deduced. The degree of linear polarisation has also been investigated through angle resolved photoelectron spectroscopy measurements.

  9. Cryogenically cooled bent double-Laue monochromator for high-energy undulator X-rays (50-200 keV).

    Science.gov (United States)

    Shastri, S D; Fezzaa, K; Mashayekhi, A; Lee, W K; Fernandez, P B; Lee, P L

    2002-09-01

    A liquid-nitrogen-cooled monochromator for high-energy X-rays consisting of two bent Si(111) Laue crystals adjusted to sequential Rowland conditions has been in operation for over two years at the SRI-CAT sector 1 undulator beamline of the Advanced Photon Source (APS). It delivers over ten times more flux than a flat-crystal monochromator does at high energies, without any increase in energy width (ΔE/E ≈ 10⁻³). Cryogenic cooling permits optimal flux, avoiding a sacrifice from the often employed alternative technique of filtration - a technique less effective at sources like the 7 GeV APS, where considerable heat loads can be deposited by high-energy photons, especially at closed undulator gaps. The fixed-offset geometry provides a fully tunable in-line monochromatic beam. In addition to presenting the optics performance, unique crystal design and stable bending mechanism for a cryogenically cooled crystal under high heat load, the bending radii adjustment procedures are described.
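For orientation on the geometry such a high-energy monochromator operates at, the first-order Bragg angle follows from Bragg's law, 2d sin θ = λ, with E[keV]·λ[Å] ≈ 12.398. A quick sketch (the Si(111) d-spacing and the energies are standard textbook values, not parameters of this particular beamline):

```python
import math

def bragg_angle_deg(energy_kev, d_angstrom=3.1356):
    """First-order Bragg angle (degrees) for Si(111) at a given photon energy."""
    wavelength = 12.398 / energy_kev  # Angstrom; E[keV] * lambda[A] ~ 12.398
    return math.degrees(math.asin(wavelength / (2.0 * d_angstrom)))

# At 50-200 keV the Bragg angles are only ~0.5-2.3 degrees, one reason bent
# Laue (transmission) geometry is attractive over flat Bragg crystals here.
angles = {E: bragg_angle_deg(E) for E in (50, 100, 200)}
```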

  10. Calculations and surface quality measurements of high-asymmetry angle x-ray crystal monochromators for advanced x-ray imaging and metrological applications

    Science.gov (United States)

    Zápražný, Zdenko; Korytár, Dušan; Jergel, Matej; Šiffalovič, Peter; Dobročka, Edmund; Vagovič, Patrik; Ferrari, Claudio; Mikulík, Petr; Demydenko, Maksym; Mikloška, Marek

    2015-03-01

    We present the numerical optimization and the technological development progress of x-ray optics based on asymmetric germanium crystals. We show the results of several basic calculations of diffraction properties of germanium x-ray crystal monochromators and of an analyzer-based imaging method for various asymmetry factors using an x-ray energy range from 8 to 20 keV. The important parameter of highly asymmetric monochromators as image magnifiers or compressors is the crystal surface quality. We have applied several crystal surface finishing methods, including advanced nanomachining using single-point diamond turning (SPDT), conventional mechanical lapping, chemical polishing, and chemomechanical polishing, and we have evaluated these methods by means of atomic force microscopy, diffractometry, reciprocal space mapping, and others. Our goal is to exclude the chemical etching methods as the final processing technique because it causes surface undulations. The aim is to implement very precise deterministic methods with a control of surface roughness down to 0.1 nm. The smallest roughness (~0.3 nm), best planarity, and absence of the subsurface damage were observed for the sample which was machined using an SPDT with a feed rate of 1 mm/min and was consequently polished using a fine polishing 15-min process with a solution containing SiO2 nanoparticles (20 nm).

  11. A point-focusing small angle x-ray scattering camera using a doubly curved monochromator of a W/Si multilayer

    Science.gov (United States)

    Sasanuma, Yuji; Law, Robert V.; Kobayashi, Yuji

    1996-03-01

    A point-focusing small angle x-ray scattering (SAXS) camera using a doubly curved monochromator of a W/Si multilayer has been designed, constructed, and tested. The two radii of curvature of the monochromator are 20 400 and 7.6 mm. The reflectivity of its first-order Bragg reflection for CuKα radiation was calculated to be 0.82, comparable to that (0.81) of its total reflection. With an x-ray exposure of only 10 s, scattering from a high-density polyethylene film was detected on an imaging plate (IP); a rotating-anode x-ray generator operated at 40 kV and 30 mA was used. Diffraction from rat-tail collagen has shown that the optical arrangement resolves Bragg spacings of at least 30 nm for CuKα radiation. Combined with IPs, the camera may permit time-resolved SAXS measurements of the phase behavior of liquid crystals, lipids, polymer alloys, etc., on conventional laboratory x-ray generators.

  12. Hereditary Diffuse Gastric Cancer

    Science.gov (United States)

    Approved by the Cancer.Net Editorial Board, 11/2015. What is hereditary diffuse gastric cancer? Hereditary diffuse gastric cancer (HDGC) is an inherited ...

  13. Error Decomposition and Adaptivity for Response Surface Approximations from PDEs with Parametric Uncertainty

    KAUST Repository

    Bryant, C. M.

    2015-01-01

    In this work, we investigate adaptive approaches to control errors in response surface approximations computed from numerical approximations of differential equations with uncertain or random data and coefficients. The adaptivity of the response surface approximation is based on a posteriori error estimation, and the approach relies on the ability to decompose the a posteriori error estimate into contributions from the physical discretization and the approximation in parameter space. Errors are evaluated in terms of linear quantities of interest using adjoint-based methodologies. We demonstrate that a significant reduction in the computational cost required to reach a given error tolerance can be achieved by refining the dominant error contributions rather than uniformly refining both the physical and stochastic discretization. Error decomposition is demonstrated for a two-dimensional flow problem, and adaptive procedures are tested on a convection-diffusion problem with discontinuous parameter dependence and a diffusion problem, where the diffusion coefficient is characterized by a 10-dimensional parameter space.
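The refine-the-dominant-contribution strategy described above can be caricatured in a few lines (a hedged sketch: the function names and the toy error model are invented for illustration and are not the authors' adjoint-based estimator):

```python
def adapt(refine_physical, refine_parametric, estimate_errors, tol):
    """Greedy adaptivity: split the a posteriori error estimate into a
    physical-discretization part and a parameter-space part, then refine
    whichever contribution dominates until the total meets the tolerance."""
    e_phys, e_param = estimate_errors()
    while e_phys + e_param > tol:
        (refine_physical if e_phys >= e_param else refine_parametric)()
        e_phys, e_param = estimate_errors()
    return e_phys + e_param

# Toy model: each refinement halves the corresponding error contribution.
state = {"phys": 1.0, "param": 0.6}
total = adapt(
    lambda: state.update(phys=state["phys"] / 2),
    lambda: state.update(param=state["param"] / 2),
    lambda: (state["phys"], state["param"]),
    tol=0.12,
)
```

The point of the decomposition is visible even in this toy: refinement effort alternates between the two error sources instead of uniformly refining both.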

  14. Inpatients’ medical prescription errors

    Directory of Open Access Journals (Sweden)

    Aline Melo Santos Silva

    2009-09-01

    Full Text Available Objective: To identify and quantify the most frequent errors in inpatients’ medical prescriptions. Methods: A survey of prescription errors was performed on inpatients’ medical prescriptions from July 2008 to May 2009, for eight hours a day. Results: A total of 3,931 prescriptions was analyzed and 362 (9.2%) prescription errors were found, involving the healthcare team as a whole. Among the 16 types of errors detected, the most frequent were missing information such as dose (66 cases, 18.2%) and administration route (26 cases, 7.2%); wrong transcriptions to the information system (45 cases, 12.4%); duplicate drugs (30 cases, 8.3%); doses higher than recommended (24 cases, 6.6%); and prescriptions with an indication of allergy but without specification (29 cases, 8.0%). Conclusion: Medication errors are a reality at hospitals. All healthcare professionals are responsible for identifying and preventing these errors, each in his or her own area. The pharmacist is an essential professional in the drug therapy process. All hospital organizations need a pharmacist team responsible for analyzing medical prescriptions before the preparation, dispensation and administration of drugs to inpatients. This study showed that the pharmacist improves inpatient safety and the success of prescribed therapy.

  15. Connectionist and diffusion models of reaction time.

    Science.gov (United States)

    Ratcliff, R; Van Zandt, T; McKoon, G

    1999-04-01

    Two connectionist frameworks, GRAIN (J. L. McClelland, 1993) and brain-state-in-a-box (J. A. Anderson, 1991), and R. Ratcliff's (1978) diffusion model were evaluated using data from a signal detection task. Dependent variables included response probabilities, reaction times for correct and error responses, and shapes of reaction-time distributions. The diffusion model accounted for all aspects of the data, including error reaction times that had previously been a problem for all response-time models. The connectionist models accounted for many aspects of the data adequately, but each failed to a greater or lesser degree in important ways except for one model that was similar to the diffusion model. The findings advance the development of the diffusion model and show that the long tradition of reaction-time research and theory is a fertile domain for development and testing of connectionist assumptions about how decisions are generated over time.
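The core of the diffusion model evaluated above, noisy evidence accumulation to one of two boundaries yielding both choice probabilities and reaction-time distributions, can be sketched as follows (the drift, boundary, and noise values are illustrative, not parameters fitted in the study):

```python
import random

def diffusion_trial(drift=0.1, boundary=1.0, dt=0.001, noise=1.0, max_t=5.0):
    """One diffusion-model trial: evidence accumulates with drift plus
    Gaussian noise until it hits +boundary (correct) or -boundary (error)."""
    x, t = 0.0, 0.0
    while abs(x) < boundary and t < max_t:
        x += drift * dt + noise * (dt ** 0.5) * random.gauss(0.0, 1.0)
        t += dt
    return (x >= boundary, t)  # (correct response?, reaction time in s)

random.seed(0)
trials = [diffusion_trial() for _ in range(2000)]
p_correct = sum(c for c, _ in trials) / len(trials)
mean_rt = sum(t for _, t in trials) / len(trials)
```

Collecting the reaction times separately for correct and error trials reproduces the kind of RT distributions the model was tested against.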

  16. Diffusion coefficient in photon diffusion theory

    NARCIS (Netherlands)

    Graaff, R; Ten Bosch, JJ

    2000-01-01

    The choice of the diffusion coefficient to be used in photon diffusion theory has been a subject of discussion in recent publications on tissue optics. We compared several diffusion coefficients with the apparent diffusion coefficient from the more fundamental transport theory, D-app. Application to

  18. Error monitoring in musicians

    Directory of Open Access Journals (Sweden)

    Clemens eMaidhof

    2013-07-01

    Full Text Available To err is human, and hence even professional musicians make errors occasionally during their performances. This paper summarizes recent work investigating error monitoring in musicians, i.e. the processes and their neural correlates associated with the monitoring of ongoing actions and the detection of deviations from intended sounds. EEG studies reported an early component of the event-related potential (ERP) occurring before the onsets of pitch errors. This component, which can be altered in musicians with focal dystonia, likely reflects processes of error detection and/or error compensation, i.e. attempts to cancel the undesired sensory consequence (a wrong tone) a musician is about to perceive. Thus, auditory feedback seems not to be a prerequisite for error detection, consistent with previous behavioral results. In contrast, when auditory feedback is externally manipulated and thus unexpected, motor performance can be severely distorted, although not all feedback alterations result in performance impairments. Recent studies investigating the neural correlates of feedback processing showed that unexpected feedback elicits an ERP component after note onsets, which shows larger amplitudes during music performance than during mere perception of the same musical sequences. Hence, these results stress the role of motor actions for the processing of auditory information. Furthermore, recent methodological advances like the combination of 3D motion capture techniques with EEG will be discussed. Such combinations of different measures can potentially help to disentangle the roles of different feedback types, such as proprioceptive and auditory feedback, and in general to arrive at a better understanding of the complex interactions between the motor and auditory domains during error monitoring. Finally, outstanding questions and future directions in this context will be discussed.

  19. Smoothing error pitfalls

    Science.gov (United States)

    von Clarmann, T.

    2014-09-01

    The difference due to the content of a priori information between a constrained retrieval and the true atmospheric state is usually represented by a diagnostic quantity called smoothing error. In this paper it is shown that, regardless of the usefulness of the smoothing error as a diagnostic tool in its own right, the concept of the smoothing error as a component of the retrieval error budget is questionable because it is not compliant with Gaussian error propagation. The reason for this is that the smoothing error does not represent the expected deviation of the retrieval from the true state but the expected deviation of the retrieval from the atmospheric state sampled on an arbitrary grid, which is itself a smoothed representation of the true state; in other words, to characterize the full loss of information with respect to the true atmosphere, the effect of the representation of the atmospheric state on a finite grid also needs to be considered. The idea of a sufficiently fine sampling of this reference atmospheric state is problematic because atmospheric variability occurs on all scales, implying that there is no limit beyond which the sampling is fine enough. Even the idealization of infinitesimally fine sampling of the reference state does not help, because the smoothing error is applied to quantities which are only defined in a statistical sense, which implies that a finite volume of sufficient spatial extent is needed to meaningfully discuss temperature or concentration. Smoothing differences, however, which play a role when measurements are compared, are still a useful quantity if the covariance matrix involved has been evaluated on the comparison grid rather than resulting from interpolation and if the averaging kernel matrices have been evaluated on a grid fine enough to capture all atmospheric variations that the instruments are sensitive to. This is, under the assumptions stated, because the undefined component of the smoothing error, which is the

  20. Learning from Errors

    Directory of Open Access Journals (Sweden)

    MA. Lendita Kryeziu

    2015-06-01

    Full Text Available “Errare humanum est”, a well-known and widespread Latin proverb stating that to err is human and that people make mistakes all the time. However, what counts is that people learn from their mistakes. On these grounds Steve Jobs stated: “Sometimes when you innovate, you make mistakes. It is best to admit them quickly, and get on with improving your other innovations.” Similarly, in learning a new language, learners make mistakes; thus it is important to accept them, learn from them, discover the reasons why they make them, improve and move on. The significance of studying errors is described by Corder: “There have always been two justifications proposed for the study of learners' errors: the pedagogical justification, namely that a good understanding of the nature of error is necessary before a systematic means of eradicating them could be found, and the theoretical justification, which claims that a study of learners' errors is part of the systematic study of the learners' language which is itself necessary to an understanding of the process of second language acquisition” (Corder, 1982, p. 1). Thus the aim of this paper is to analyze errors in the process of second language acquisition and the ways we teachers can benefit from mistakes to help students improve themselves while giving proper feedback.

  1. Error Correction in Classroom

    Institute of Scientific and Technical Information of China (English)

    Dr. Grace Zhang

    2000-01-01

    Error correction is an important issue in foreign language acquisition. This paper investigates how students feel about the way in which error correction should take place in a Chinese-as-a-foreign-language classroom, based on large-scale empirical data. The study shows that there is a general consensus that error correction is necessary. In terms of correction strategy, the students preferred a combination of direct and indirect correction, or direct correction only. The former choice indicates that students would be happy to take either, so long as the correction gets done. Most students did not mind peer correction provided it is conducted in a constructive way. More than half of the students would feel uncomfortable if the same error they make in class were corrected consecutively more than three times. Taking these findings into consideration, we may want to encourage peer correction, use a combination of correction strategies (direct only if suitable) and do it in a non-threatening and sensitive way. It is hoped that this study will contribute to the effectiveness of error correction in the Chinese language classroom, and it may also have wider implications for other languages.

  2. Errors in Neonatology

    Directory of Open Access Journals (Sweden)

    Antonio Boldrini

    2013-06-01

    Full Text Available Introduction: Danger and errors are inherent in human activities. In medical practice, errors can lead to adverse events for patients, and the mass media echo the whole scenario. Methods: We reviewed recently published papers in the PubMed database to focus on the evidence and management of errors in medical practice in general and in Neonatology in particular. We compared the results of the literature with our specific experience in the Nina Simulation Centre (Pisa, Italy). Results: In Neonatology the main error domains are: medication and total parenteral nutrition, resuscitation and respiratory care, invasive procedures, nosocomial infections, patient identification, and diagnostics. Risk factors include patients’ size, prematurity, vulnerability and underlying disease conditions, but also multidisciplinary teams, working conditions that induce fatigue, and the large variety of treatment and investigative modalities needed. Discussion and Conclusions: In our opinion, it is hardly possible to change human beings, but it is possible to change the conditions under which they work. Voluntary error-reporting systems can help in preventing adverse events. Education and re-training by means of simulation can be an effective strategy too. In Pisa (Italy), Nina (ceNtro di FormazIone e SimulazioNe NeonAtale) is a simulation center that offers the possibility of continuous retraining in technical and non-technical skills to optimize neonatological care strategies. Furthermore, we have been working on a novel skill trainer for mechanical ventilation (MEchatronic REspiratory System SImulator for Neonatal Applications, MERESSINA). Finally, in our opinion, national health policy indirectly influences the risk of errors. Proceedings of the 9th International Workshop on Neonatology · Cagliari (Italy) · October 23rd-26th, 2013 · Learned lessons, changing practice and cutting-edge research

  3. Error Free Software

    Science.gov (United States)

    1985-01-01

    A mathematical theory for development of "higher order" software to catch computer mistakes resulted from a Johnson Space Center contract for Apollo spacecraft navigation. Two women who were involved in the project formed Higher Order Software, Inc. to develop and market the system of error analysis and correction. They designed software which is logically error-free, which, in one instance, was found to increase productivity by 600%. USE.IT defines its objectives using AXES -- a user can write in English and the system converts to computer languages. It is employed by several large corporations.

  4. LIBERTARISMO & ERROR CATEGORIAL

    Directory of Open Access Journals (Sweden)

    Carlos G. Patarroyo G.

    2009-01-01

    Full Text Available This article offers a defense of libertarianism against two accusations according to which it commits a category error. To this end, Gilbert Ryle's philosophy is used as a tool to explain the reasons behind these accusations and to show why, although certain versions of libertarianism that appeal to agent causation or to Cartesian dualism do commit these errors, a libertarianism that seeks the basis of the possibility of human freedom in physicalist indeterminism cannot necessarily be accused of committing them.

  5. Design, Simulation and Test of a Double-Focusing Si Monochromator for a Neutron Residual Stress Diffractometer

    Institute of Scientific and Technical Information of China (English)

    胡瑞; 刘蕴韬; 王玮; 刘中孝; 李峻宏; 高建波; 王洪立; 陈东风

    2015-01-01

    The double-focusing Si monochromator for the neutron residual stress diffractometer at the China Advanced Research Reactor was designed, simulated and tested. The optimal vertical curvature and the optimal thickness of the Si wafers were obtained with the SIMRES simulation program. In addition, the dependence of the figure of merit on the scattering angle, the monochromator horizontal curvature and the wavelength was also determined with this program. The neutron beam tests indicate that the neutron intensity at the sample increases by a factor of 15 with the double-focusing Si monochromator in comparison with a flat Cu monochromator.

  6. High-Pressure-Hydrogen-Induced Spin Reconfiguration in GdFe2 Observed by 57Fe-Polarized Synchrotron Radiation Mössbauer Spectroscopy with Nuclear Bragg Monochromator

    Science.gov (United States)

    Mitsui, Takaya; Imai, Yasuhiko; Hirao, Naohisa; Matsuoka, Takahiro; Nakamura, Yumiko; Sakaki, Kouji; Enoki, Hirotoshi; Ishimatsu, Naoki; Masuda, Ryo; Seto, Makoto

    2016-12-01

    57Fe-polarized synchrotron radiation Mössbauer spectroscopy (PSRMS) with an X-ray phase plate and a nuclear Bragg monochromator was used to study ferrimagnetic GdFe2 in high-pressure hydrogen. The pressure-dependent spectra clearly showed a two-step magnetic transition of GdFe2. 57Fe-PSRMS with circular polarization gave direct evidence that the Fe moment was directed parallel to the net magnetization of the GdFe2 hydride at 20 GPa. This spin configuration was opposite to that of the initial GdFe2, suggesting an extreme weakening of the antiferromagnetic interaction between Fe and Gd. 57Fe-PSRMS enables the characterization of the nonuniform properties of iron-based polycrystalline powder alloys. The excellent applicability of 57Fe-PSRMS covers a wide range of scientific fields.

  7. A table-top monochromator for tunable femtosecond XUV pulses generated in a semi-infinite gas cell: Experiment and simulations.

    Science.gov (United States)

    von Conta, A; Huppert, M; Wörner, H J

    2016-07-01

    We present a new design of a time-preserving extreme-ultraviolet (XUV) monochromator using a semi-infinite gas cell as a source. The performance of this beamline in the photon-energy range of 20 eV-42 eV has been characterized. We have measured the order-dependent XUV pulse durations as well as the flux and the spectral contrast. XUV pulse durations of ≤40 fs using 32 fs, 800 nm driving pulses were measured on the target. The spectral contrast was better than 100 over the entire energy range. A simple model based on the strong-field approximation is presented to estimate different contributions to the measured XUV pulse duration. On-axis phase-matching calculations are used to rationalize the variation of the photon flux with pressure and intensity.

  8. A compact low cost “master–slave” double crystal monochromator for x-ray cameras calibration of the Laser MégaJoule Facility

    Energy Technology Data Exchange (ETDEWEB)

    Hubert, S., E-mail: sebastien.hubert@cea.fr; Prévot, V.

    2014-12-21

    The Alternative Energies and Atomic Energy Commission (CEA-CESTA, France) built a specific double crystal monochromator (DCM) to perform calibration of x-ray cameras (CCD, streak and gated cameras) by means of a multiple-anode diode-type x-ray source for the MégaJoule Laser Facility. This DCM, based on a pantograph geometry, was specifically modeled to meet the relevant engineering constraints and requirements. Its major benefits are the mechanical slaving of the second crystal to the first through a single drive motor, as well as the compactness of the entire device. Designed for flat beryl or Ge crystals, this DCM covers the 0.9–10 keV range of our High Energy X-ray Source. In this paper we present the mechanical design of the DCM, its quantitatively measured features, and its calibration, which finally provides monochromatized spectra with spectral purities better than 98%.

  9. Orwell's Instructive Errors

    Science.gov (United States)

    Julian, Liam

    2009-01-01

    In this article, the author talks about George Orwell, his instructive errors, and the manner in which Orwell pierced worthless theory, faced facts and defended decency (with fluctuating success), and largely ignored the tradition of accumulated wisdom that has rendered him a timeless teacher--one whose inadvertent lessons, while infrequently…

  10. Challenge and Error: Critical Events and Attention-Related Errors

    Science.gov (United States)

    Cheyne, James Allan; Carriere, Jonathan S. A.; Solman, Grayden J. F.; Smilek, Daniel

    2011-01-01

    Attention lapses resulting from reactivity to task challenges and their consequences constitute a pervasive factor affecting everyday performance errors and accidents. A bidirectional model of attention lapses (error ↔ attention-lapse; Cheyne, Solman, Carriere, & Smilek, 2009) argues that errors beget errors by generating attention…

  11. Diffusion archeology for diffusion progression history reconstruction.

    Science.gov (United States)

    Sefer, Emre; Kingsford, Carl

    2016-11-01

    Diffusion through graphs can be used to model many real-world processes, such as the spread of diseases, social network memes, computer viruses, or water contaminants. Often, a real-world diffusion cannot be directly observed while it is occurring - perhaps it is not noticed until some time has passed, continuous monitoring is too costly, or privacy concerns limit data access. This leads to the need to reconstruct how the present state of the diffusion came to be from partial diffusion data. Here, we tackle the problem of reconstructing a diffusion history from one or more snapshots of the diffusion state. This ability can be invaluable to learn when certain computer nodes are infected or which people are the initial disease spreaders to control future diffusions. We formulate this problem over discrete-time SEIRS-type diffusion models in terms of maximum likelihood. We design methods that are based on submodularity and a novel prize-collecting dominating-set vertex cover (PCDSVC) relaxation that can identify likely diffusion steps with some provable performance guarantees. Our methods are the first to be able to reconstruct complete diffusion histories accurately in real and simulated situations. As a special case, they can also identify the initial spreaders better than the existing methods for that problem. Our results for both meme and contaminant diffusion show that the partial diffusion data problem can be overcome with proper modeling and methods, and that hidden temporal characteristics of diffusion can be predicted from limited data.
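The discrete-time SEIRS-type diffusion whose history the authors reconstruct can be sketched in the forward direction as follows (a simplified illustration; the transition probabilities and the tiny chain graph are invented for demonstration, and R is treated as absorbing here, whereas full SEIRS adds R→S waning; this is not the paper's PCDSVC reconstruction method):

```python
import random

def seirs_step(graph, state, p_expose=0.5, p_infect=0.5, p_recover=0.3):
    """One synchronous discrete-time step on an adjacency-list graph.
    States: 'S' susceptible, 'E' exposed, 'I' infectious, 'R' recovered."""
    new = dict(state)
    for node, s in state.items():
        if s == 'S' and any(state[nb] == 'I' for nb in graph[node]):
            if random.random() < p_expose:
                new[node] = 'E'
        elif s == 'E' and random.random() < p_infect:
            new[node] = 'I'
        elif s == 'I' and random.random() < p_recover:
            new[node] = 'R'
    return new

random.seed(1)
graph = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
state = {0: 'I', 1: 'S', 2: 'S', 3: 'S'}
history = [state]
for _ in range(10):
    history.append(seirs_step(graph, history[-1]))
```

The reconstruction problem is the inverse of this loop: given only `history[-1]` (a snapshot), infer the likely intermediate states and the initial spreader.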

  12. Patient error: a preliminary taxonomy.

    NARCIS (Netherlands)

    Buetow, S.; Kiata, L.; Liew, T.; Kenealy, T.; Dovey, S.; Elwyn, G.

    2009-01-01

    PURPOSE: Current research on errors in health care focuses almost exclusively on system and clinician error. It tends to exclude how patients may create errors that influence their health. We aimed to identify the types of errors that patients can contribute and help manage, especially in primary care…

  13. Automatic Error Analysis Using Intervals

    Science.gov (United States)

    Rothwell, E. J.; Cloud, M. J.

    2012-01-01

    A technique for automatic error analysis using interval mathematics is introduced. A comparison to standard error propagation methods shows that in cases involving complicated formulas, the interval approach gives comparable error estimates with much less effort. Several examples are considered, and numerical errors are computed using the INTLAB…
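
    The interval approach described above can be sketched with a minimal hand-rolled interval class; the paper itself uses INTLAB, so the class and the numeric values below are purely illustrative. Each operation returns bounds guaranteed to enclose the true result.

```python
class Interval:
    """Closed interval [lo, hi] with outward-bounding arithmetic."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        # All four endpoint products bound the result for any signs.
        ps = [self.lo * other.lo, self.lo * other.hi,
              self.hi * other.lo, self.hi * other.hi]
        return Interval(min(ps), max(ps))

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

# x = 2.0 ± 0.1 and y = 3.0 ± 0.2; bound z = x*y + x automatically.
x = Interval(1.9, 2.1)
y = Interval(2.8, 3.2)
z = x * y + x
print(z)  # an enclosure of the true result, no manual error propagation
```

    Standard error propagation would require differentiating the formula by hand; the interval version bounds the error as a side effect of evaluating it.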

  14. Imagery of Errors in Typing

    Science.gov (United States)

    Rieger, Martina; Martinez, Fanny; Wenke, Dorit

    2011-01-01

    Using a typing task we investigated whether insufficient imagination of errors and error corrections is related to duration differences between execution and imagination. In Experiment 1 spontaneous error imagination was investigated, whereas in Experiment 2 participants were specifically instructed to imagine errors. Further, in Experiment 2 we…

  15. Error Threshold for Spatially Resolved Evolution in the Quasispecies Model

    Energy Technology Data Exchange (ETDEWEB)

    Altmeyer, S.; McCaskill, J. S.

    2001-06-18

    The error threshold for quasispecies in 1, 2, 3, and ∞ dimensions is investigated by stochastic simulation and analytically. The results show a monotonic decrease in the maximal sustainable error probability with decreasing diffusion coefficient, independently of the spatial dimension. It is thereby established that physical interactions between sequences are necessary in order for spatial effects to enhance the stabilization of biological information. The analytically tractable behavior in an ∞-dimensional (simplex) space provides a good guide to the spatial dependence of the error threshold in lower-dimensional Euclidean space.
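
    For context, the classic well-mixed quasispecies error threshold (Eigen's result, which the spatial analysis above generalizes) can be computed directly. The selective advantage σ and sequence length L below are illustrative values, not parameters from this paper.

```python
def error_threshold(sigma, L):
    """Eigen's (well-mixed) error threshold: the master sequence survives
    while Q * sigma > 1 with copying fidelity Q = (1 - mu)**L, giving a
    maximal per-site error rate mu_max = 1 - sigma**(-1/L) ~ ln(sigma)/L."""
    return 1.0 - sigma ** (-1.0 / L)

mu_max = error_threshold(sigma=10.0, L=100)
print(round(mu_max, 4))  # close to the small-mu approximation ln(10)/100
```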

  16. Error bars in experimental biology.

    Science.gov (United States)

    Cumming, Geoff; Fidler, Fiona; Vaux, David L

    2007-04-09

    Error bars commonly appear in figures in publications, but experimental biologists are often unsure how they should be used and interpreted. In this article we illustrate some basic features of error bars and explain how they can help communicate data and assist correct interpretation. Error bars may show confidence intervals, standard errors, standard deviations, or other quantities. Different types of error bars give quite different information, and so figure legends must make clear what error bars represent. We suggest eight simple rules to assist with effective use and interpretation of error bars.
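
    The three quantities most often drawn as error bars can be computed as follows. The data values are made up, and the 1.96 multiplier is the large-sample normal approximation; for a sample this small a t-value would be more appropriate.

```python
import math
import statistics

data = [4.2, 5.1, 3.8, 4.9, 5.3, 4.4, 4.7, 5.0]  # hypothetical replicates
n = len(data)
mean = statistics.mean(data)
sd = statistics.stdev(data)   # sample SD: describes the spread of the data
sem = sd / math.sqrt(n)       # standard error: uncertainty of the mean
ci95 = 1.96 * sem             # 95% CI half-width (normal approximation)

print(f"mean={mean:.2f}  SD={sd:.2f}  SEM={sem:.2f}  95% CI ±{ci95:.2f}")
```

    The three bars convey different things, which is why a figure legend must say which one is plotted: SD bars stay roughly constant as n grows, while SEM and CI bars shrink.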

  17. Video Error Correction Using Steganography

    Directory of Open Access Journals (Sweden)

    Robie David L

    2002-01-01

    Full Text Available The transmission of any data is always subject to corruption due to errors, but video transmission, because of its real-time nature, must deal with these errors without retransmission of the corrupted data. Errors can be handled using forward error correction in the encoder or error concealment techniques in the decoder. This MPEG-2 compliant codec uses data hiding to transmit error correction information and several error concealment techniques in the decoder. The decoder resynchronizes more quickly with fewer errors than traditional resynchronization techniques. It also allows for perfect recovery of differentially encoded DCT-DC components and motion vectors. This provides for a much higher quality picture in an error-prone environment while creating an almost imperceptible degradation of the picture in an error-free environment.

  18. Error-Free Software

    Science.gov (United States)

    1989-01-01

    001 is an integrated tool suite for automatically developing ultra-reliable models, simulations and software systems. Developed and marketed by Hamilton Technologies, Inc. (HTI), it has been applied in engineering, manufacturing, banking and software tools development. The software provides the ability to simplify the complex. A system developed with 001 can be a prototype or fully developed with production-quality code. It is free of interface errors, consistent, logically complete and has no data or control flow errors. Systems can be designed, developed and maintained with maximum productivity. Margaret Hamilton, President of Hamilton Technologies, also directed the research and development of USE.IT, an earlier product which was the first computer-aided software engineering product in the industry to concentrate on automatically supporting the development of an ultra-reliable system throughout its life cycle. Both products originated in NASA technology developed under a Johnson Space Center contract.

  19. A Characterization of Prediction Errors

    OpenAIRE

    Meek, Christopher

    2016-01-01

    Understanding prediction errors and determining how to fix them is critical to building effective predictive systems. In this paper, we delineate four types of prediction errors and demonstrate that these four types characterize all prediction errors. In addition, we describe potential remedies and tools that can be used to reduce the uncertainty when trying to determine the source of a prediction error and when trying to take action to remove a prediction error.

  20. Error Analysis and Its Implication

    Institute of Scientific and Technical Information of China (English)

    崔蕾

    2007-01-01

    Error analysis is an important theory and approach for exploring the mental processes of language learners in SLA. Its major contribution is pointing out that intralingual errors are the main source of errors during language learning. Researchers' exploration and description of these errors will not only promote the bidirectional study of error analysis as both theory and approach, but also offer implications for second language learning.

  1. Error bars in experimental biology

    OpenAIRE

    2007-01-01

    Error bars commonly appear in figures in publications, but experimental biologists are often unsure how they should be used and interpreted. In this article we illustrate some basic features of error bars and explain how they can help communicate data and assist correct interpretation. Error bars may show confidence intervals, standard errors, standard deviations, or other quantities. Different types of error bars give quite different information, and so figure legends must make clear what er...

  2. Measurement errors with low-cost citizen science radiometers

    OpenAIRE

    Bardají, Raúl; Piera, Jaume

    2016-01-01

    The KdUINO is a Do-It-Yourself buoy with low-cost radiometers that measure a parameter related to water transparency: the diffuse attenuation coefficient integrated over all photosynthetically active radiation. In this contribution, we analyze the measurement errors of a novel low-cost multispectral radiometer that is used with the KdUINO.

  3. Susceptibility of biallelic haplotype and genotype frequencies to genotyping error.

    Science.gov (United States)

    Moskvina, Valentina; Schmidt, Karl Michael

    2006-12-01

    With the availability of fast genotyping methods and genomic databases, the search for statistical association of single nucleotide polymorphisms with a complex trait has become an important methodology in medical genetics. However, even fairly rare errors occurring during the genotyping process can lead to spurious association results and a decrease in statistical power. We develop a systematic approach to studying how genotyping errors change the genotype distribution in a sample. The general M-marker case is reduced to that of a single-marker locus by recognizing the underlying tensor-product structure of the error matrix. Both the method and the general conclusions apply to the general error model; we give detailed results for allele-based errors whose size depends on both the marker locus and the allele present. Multiple errors are treated in terms of the associated diffusion process on the space of genotype distributions. We find that certain genotype and haplotype distributions remain unchanged under genotyping errors, and that genotyping errors generally render the distribution more similar to the stable one. In case-control association studies, this will lead to loss of statistical power for nondifferential genotyping errors and an increase in type I error for differential genotyping errors. Moreover, we show that allele-based genotyping errors do not disturb Hardy-Weinberg equilibrium in the genotype distribution. In this setting we also identify maximally affected distributions. As they correspond to situations with rare alleles and marker loci in high linkage disequilibrium, careful checking for genotyping errors is advisable when significant association based on such alleles/haplotypes is observed in association studies.
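
    The abstract's observation that allele-based errors leave Hardy-Weinberg equilibrium intact can be sketched for a single biallelic marker. The allele frequency and error rate below are illustrative, and the symmetric per-allele error is a simplification of the paper's general error matrix.

```python
def hwe(p):
    """Genotype frequencies (AA, Aa, aa) under Hardy-Weinberg equilibrium."""
    return (p * p, 2 * p * (1 - p), (1 - p) * (1 - p))

def allele_after_error(p, e):
    """Frequency of allele A after each allele is independently miscalled
    with probability e (A->a and a->A treated symmetrically)."""
    return p * (1 - e) + (1 - p) * e

p, e = 0.3, 0.05
p_err = allele_after_error(p, e)  # 0.3*0.95 + 0.7*0.05 = 0.32
g = hwe(p_err)
print(g)  # the observed genotype distribution is again HWE, now at p = 0.32
```

    Because the two alleles of a genotype are miscalled independently, the perturbed distribution is simply HWE at the perturbed allele frequency, which is why this class of errors is invisible to HWE-based quality checks.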

  4. Diagnostic errors in pediatric radiology

    Energy Technology Data Exchange (ETDEWEB)

    Taylor, George A.; Voss, Stephan D. [Children' s Hospital Boston, Department of Radiology, Harvard Medical School, Boston, MA (United States); Melvin, Patrice R. [Children' s Hospital Boston, The Program for Patient Safety and Quality, Boston, MA (United States); Graham, Dionne A. [Children' s Hospital Boston, The Program for Patient Safety and Quality, Boston, MA (United States); Harvard Medical School, The Department of Pediatrics, Boston, MA (United States)

    2011-03-15

    Little information is known about the frequency, types and causes of diagnostic errors in imaging children. Our goals were to describe the patterns and potential etiologies of diagnostic error in our subspecialty. We reviewed 265 cases with clinically significant diagnostic errors identified during a 10-year period. Errors were defined as a diagnosis that was delayed, wrong or missed; they were classified as perceptual, cognitive, system-related or unavoidable; and they were evaluated by imaging modality and level of training of the physician involved. We identified 484 specific errors in the 265 cases reviewed (mean:1.8 errors/case). Most discrepancies involved staff (45.5%). Two hundred fifty-eight individual cognitive errors were identified in 151 cases (mean = 1.7 errors/case). Of these, 83 cases (55%) had additional perceptual or system-related errors. One hundred sixty-five perceptual errors were identified in 165 cases. Of these, 68 cases (41%) also had cognitive or system-related errors. Fifty-four system-related errors were identified in 46 cases (mean = 1.2 errors/case) of which all were multi-factorial. Seven cases were unavoidable. Our study defines a taxonomy of diagnostic errors in a large academic pediatric radiology practice and suggests that most are multi-factorial in etiology. Further study is needed to define effective strategies for improvement. (orig.)

  5. Finite-difference schemes for anisotropic diffusion

    Energy Technology Data Exchange (ETDEWEB)

    Es, Bram van, E-mail: es@cwi.nl [Centrum Wiskunde and Informatica, P.O. Box 94079, 1090GB Amsterdam (Netherlands); FOM Institute DIFFER, Dutch Institute for Fundamental Energy Research, Association EURATOM-FOM (Netherlands); Koren, Barry [Eindhoven University of Technology (Netherlands); Blank, Hugo J. de [FOM Institute DIFFER, Dutch Institute for Fundamental Energy Research, Association EURATOM-FOM (Netherlands)

    2014-09-01

    In fusion plasmas diffusion tensors are extremely anisotropic due to the high temperature and large magnetic field strength. This causes diffusion, heat conduction, and viscous momentum loss, to effectively be aligned with the magnetic field lines. This alignment leads to different values for the respective diffusive coefficients in the magnetic field direction and in the perpendicular direction, to the extent that heat diffusion coefficients can be up to 10^12 times larger in the parallel direction than in the perpendicular direction. This anisotropy puts stringent requirements on the numerical methods used to approximate the MHD equations since any misalignment of the grid may cause the perpendicular diffusion to be polluted by the numerical error in approximating the parallel diffusion. Currently the common approach is to apply magnetic field-aligned coordinates, an approach that automatically takes care of the directionality of the diffusive coefficients. This approach runs into problems at x-points and at points where there is magnetic re-connection, since this causes local non-alignment. It is therefore useful to consider numerical schemes that are tolerant to the misalignment of the grid with the magnetic field lines, both to improve existing methods and to help open the possibility of applying regular non-aligned grids. To investigate this, in this paper several discretization schemes are developed and applied to the anisotropic heat diffusion equation on a non-aligned grid.

  6. Transient Error Data Analysis.

    Science.gov (United States)

    1979-05-01

    [Scanned abstract fragments: contents entries (3.2 Graphical Data Analysis; 3.3 General Statistics and Confidence Intervals; 3.4 Goodness of Fit Test; 4. Conclusions; Acknowledgements); a table of MTTF per system by technology and error-detection mechanism (e.g. CMUA PDP-10, ECL, parity: 44 hrs.; Cm* LSI-11, NMOS, diagnostics); and an error-log summary (6 bad-time errors; 18445 total entries for all input files; time span 1542 hrs., from 17-Feb-79).]

  7. Minimum Error Entropy Classification

    CERN Document Server

    Marques de Sá, Joaquim P; Santos, Jorge M F; Alexandre, Luís A

    2013-01-01

    This book explains the minimum error entropy (MEE) concept applied to data classification machines. Theoretical results on the inner workings of the MEE concept, in its application to solving a variety of classification problems, are presented in the wider realm of risk functionals. Researchers and practitioners will also find in the book a detailed presentation of practical data classifiers using MEE. These include multi-layer perceptrons, recurrent neural networks, complex-valued neural networks, modular neural networks, and decision trees. A clustering algorithm using an MEE-like concept is also presented. Examples, tests, evaluation experiments and comparisons with similar machines using classic approaches complement the descriptions.

  8. A transformation approach to modelling multi-modal diffusions

    DEFF Research Database (Denmark)

    Forman, Julie Lyng; Sørensen, Michael

    2014-01-01

    when the diffusion is observed with additional measurement error. The new approach is applied to molecular dynamics data in the form of a reaction coordinate of the small Trp-zipper protein, from which the folding and unfolding rates of the protein are estimated. Because the diffusion coefficient...... is state-dependent, the new models provide a better fit to this type of protein folding data than the previous models with a constant diffusion coefficient, particularly when the effect of errors with a short time-scale is taken into account....

  9. Errors in CT colonography.

    Science.gov (United States)

    Trilisky, Igor; Ward, Emily; Dachman, Abraham H

    2015-10-01

    CT colonography (CTC) is a colorectal cancer screening modality which is becoming more widely implemented and has shown polyp detection rates comparable to those of optical colonoscopy. CTC has the potential to improve population screening rates due to its minimal invasiveness, no sedation requirement, potential for reduced cathartic examination, faster patient throughput, and cost-effectiveness. Proper implementation of a CTC screening program requires careful attention to numerous factors, including patient preparation prior to the examination, the technical aspects of image acquisition, and post-processing of the acquired data. A CTC workstation with dedicated software is required with integrated CTC-specific display features. Many workstations include computer-aided detection software which is designed to decrease errors of detection by detecting and displaying polyp-candidates to the reader for evaluation. There are several pitfalls which may result in false-negative and false-positive reader interpretation. We present an overview of the potential errors in CTC and a systematic approach to avoid them.

  10. Error Analysis in Mathematics Education.

    Science.gov (United States)

    Rittner, Max

    1982-01-01

    The article reviews the development of mathematics error analysis as a means of diagnosing students' cognitive reasoning. Errors specific to addition, subtraction, multiplication, and division are described, and suggestions for remediation are provided. (CL)

  11. Payment Error Rate Measurement (PERM)

    Data.gov (United States)

    U.S. Department of Health & Human Services — The PERM program measures improper payments in Medicaid and CHIP and produces error rates for each program. The error rates are based on reviews of the...

  12. Error bounds for set inclusions

    Institute of Scientific and Technical Information of China (English)

    ZHENG; Xiyin(郑喜印)

    2003-01-01

    A variant of Robinson-Ursescu Theorem is given in normed spaces. Several error bound theorems for convex inclusions are proved and in particular a positive answer to Li and Singer's conjecture is given under weaker assumption than the assumption required in their conjecture. Perturbation error bounds are also studied. As applications, we study error bounds for convex inequality systems.

  13. Uncertainty quantification and error analysis

    Energy Technology Data Exchange (ETDEWEB)

    Higdon, Dave M [Los Alamos National Laboratory; Anderson, Mark C [Los Alamos National Laboratory; Habib, Salman [Los Alamos National Laboratory; Klein, Richard [Los Alamos National Laboratory; Berliner, Mark [OHIO STATE UNIV.; Covey, Curt [LLNL; Ghattas, Omar [UNIV OF TEXAS; Graziani, Carlo [UNIV OF CHICAGO; Seager, Mark [LLNL; Sefcik, Joseph [LLNL; Stark, Philip [UC/BERKELEY; Stewart, James [SNL

    2010-01-01

    UQ studies all sources of error and uncertainty, including: systematic and stochastic measurement error; ignorance; limitations of theoretical models; limitations of numerical representations of those models; limitations on the accuracy and reliability of computations, approximations, and algorithms; and human error. A more precise definition for UQ is suggested below.

  14. Feature Referenced Error Correction Apparatus.

    Science.gov (United States)

    A feature referenced error correction apparatus utilizing the multiple images of the interstage level image format to compensate for positional...images and by the generation of an error correction signal in response to the sub-frame registration errors. (Author)

  15. Errors in causal inference: an organizational schema for systematic error and random error.

    Science.gov (United States)

    Suzuki, Etsuji; Tsuda, Toshihide; Mitsuhashi, Toshiharu; Mansournia, Mohammad Ali; Yamamoto, Eiji

    2016-11-01

    To provide an organizational schema for systematic error and random error in estimating causal measures, aimed at clarifying the concept of errors from the perspective of causal inference. We propose to divide systematic error into structural error and analytic error. With regard to random error, our schema shows its four major sources: nondeterministic counterfactuals, sampling variability, a mechanism that generates exposure events and measurement variability. Structural error is defined from the perspective of counterfactual reasoning and divided into nonexchangeability bias (which comprises confounding bias and selection bias) and measurement bias. Directed acyclic graphs are useful to illustrate this kind of error. Nonexchangeability bias implies a lack of "exchangeability" between the selected exposed and unexposed groups. A lack of exchangeability is not a primary concern of measurement bias, justifying its separation from confounding bias and selection bias. Many forms of analytic errors result from the small-sample properties of the estimator used and vanish asymptotically. Analytic error also results from wrong (misspecified) statistical models and inappropriate statistical methods. Our organizational schema is helpful for understanding the relationship between systematic error and random error from a previously less investigated aspect, enabling us to better understand the relationship between accuracy, validity, and precision. Copyright © 2016 Elsevier Inc. All rights reserved.

  16. Firewall Configuration Errors Revisited

    CERN Document Server

    Wool, Avishai

    2009-01-01

    The first quantitative evaluation of the quality of corporate firewall configurations appeared in 2004, based on Check Point FireWall-1 rule-sets. In general, that survey indicated that corporate firewalls were often enforcing poorly written rule-sets containing many mistakes. The goal of this work is to revisit the first survey. The current study is much larger. Moreover, for the first time, the study includes configurations from two major vendors. The study also introduces a novel "Firewall Complexity" (FC) measure that applies to both types of firewalls. The findings of the current study indeed validate the 2004 study's main observations: firewalls are (still) poorly configured, and a rule-set's complexity is (still) positively correlated with the number of detected risk items. Thus we can conclude that, for well-configured firewalls, "small is (still) beautiful". However, unlike the 2004 study, we see no significant indication that later software versions have fewer errors (for both vendors).

  17. Beta systems error analysis

    Science.gov (United States)

    1984-01-01

    The atmospheric backscatter coefficient, beta, measured with an airborne CO Laser Doppler Velocimeter (LDV) system operating in a continuous-wave, focused mode is discussed. The Single Particle Mode (SPM) algorithm was developed from concept through analysis of an extensive amount of data obtained with the system on board a NASA aircraft. The SPM algorithm is intended to be employed in situations where one particle at a time appears in the sensitive volume of the LDV. In addition to giving the backscatter coefficient, the SPM algorithm also produces as intermediate results the aerosol density and the aerosol backscatter cross section distribution. A second method, which measures only the atmospheric backscatter coefficient, is called the Volume Mode (VM) and was simultaneously employed. The results of these two methods differed by slightly less than an order of magnitude. The measurement uncertainties and other errors in the results of the two methods are examined.

  18. Catalytic quantum error correction

    CERN Document Server

    Brun, T; Hsieh, M H; Brun, Todd; Devetak, Igor; Hsieh, Min-Hsiu

    2006-01-01

    We develop the theory of entanglement-assisted quantum error-correcting (EAQEC) codes, a generalization of the stabilizer formalism to the setting in which the sender and receiver have access to pre-shared entanglement. Conventional stabilizer codes are equivalent to dual-containing symplectic codes. In contrast, EAQEC codes do not require the dual-containing condition, which greatly simplifies their construction. We show how any quaternary classical code can be made into an EAQEC code. In particular, efficient modern codes, like LDPC codes, which attain the Shannon capacity, can be made into EAQEC codes attaining the hashing bound. In a quantum computation setting, EAQEC codes give rise to catalytic quantum codes, which maintain a region of inherited noiseless qubits. We also give an alternative construction of EAQEC codes by making classical entanglement-assisted codes coherent.

  19. Converting Multi-Shell and Diffusion Spectrum Imaging to High Angular Resolution Diffusion Imaging.

    Science.gov (United States)

    Yeh, Fang-Cheng; Verstynen, Timothy D

    2016-01-01

    Multi-shell and diffusion spectrum imaging (DSI) are becoming increasingly popular methods of acquiring diffusion MRI data in a research context. However, single-shell acquisitions, such as diffusion tensor imaging (DTI) and high angular resolution diffusion imaging (HARDI), still remain the most common acquisition schemes in practice. Here we tested whether multi-shell and DSI data have conversion flexibility to be interpolated into corresponding HARDI data. We acquired multi-shell and DSI data on both a phantom and in vivo human tissue and converted them to HARDI. The correlation and difference between their diffusion signals, anisotropy values, diffusivity measurements, fiber orientations, connectivity matrices, and network measures were examined. Our analysis showed that the diffusion signals, anisotropy, diffusivity, and connectivity matrix of the HARDI converted from multi-shell and DSI were highly correlated with those of the HARDI acquired on the MR scanner, with correlation coefficients around 0.8~0.9. The average angular error between converted and original HARDI was 20.7° at voxels with signal-to-noise ratios greater than 5. The network topology measures had less than 2% difference, whereas the average nodal measures had a percentage difference around 4~7%. In general, multi-shell and DSI acquisitions can be converted to their corresponding single-shell HARDI with high fidelity. This supports multi-shell and DSI acquisitions over HARDI acquisition as the scheme of choice for diffusion acquisitions.
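
    The average angular error reported above compares fiber orientations, which are axes rather than signed vectors. A minimal sketch of such a measure (the function name and vectors are illustrative, not the authors' pipeline) might look like:

```python
import math

def angular_error_deg(v1, v2):
    """Angle in degrees between two fiber orientations, treating antipodal
    vectors as the same axis (hence the abs() on the dot product)."""
    dot = sum(a * b for a, b in zip(v1, v2))
    n1 = math.sqrt(sum(a * a for a in v1))
    n2 = math.sqrt(sum(b * b for b in v2))
    c = min(1.0, abs(dot) / (n1 * n2))  # clamp to guard against round-off
    return math.degrees(math.acos(c))

a = (1.0, 0.0, 0.0)
b = (math.cos(math.radians(20.7)), math.sin(math.radians(20.7)), 0.0)
print(round(angular_error_deg(a, b), 1))  # -> 20.7
```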

  20. Converting Multi-Shell and Diffusion Spectrum Imaging to High Angular Resolution Diffusion Imaging

    Science.gov (United States)

    Yeh, Fang-Cheng; Verstynen, Timothy D.

    2016-01-01

    Multi-shell and diffusion spectrum imaging (DSI) are becoming increasingly popular methods of acquiring diffusion MRI data in a research context. However, single-shell acquisitions, such as diffusion tensor imaging (DTI) and high angular resolution diffusion imaging (HARDI), still remain the most common acquisition schemes in practice. Here we tested whether multi-shell and DSI data have conversion flexibility to be interpolated into corresponding HARDI data. We acquired multi-shell and DSI data on both a phantom and in vivo human tissue and converted them to HARDI. The correlation and difference between their diffusion signals, anisotropy values, diffusivity measurements, fiber orientations, connectivity matrices, and network measures were examined. Our analysis showed that the diffusion signals, anisotropy, diffusivity, and connectivity matrix of the HARDI converted from multi-shell and DSI were highly correlated with those of the HARDI acquired on the MR scanner, with correlation coefficients around 0.8~0.9. The average angular error between converted and original HARDI was 20.7° at voxels with signal-to-noise ratios greater than 5. The network topology measures had less than 2% difference, whereas the average nodal measures had a percentage difference around 4~7%. In general, multi-shell and DSI acquisitions can be converted to their corresponding single-shell HARDI with high fidelity. This supports multi-shell and DSI acquisitions over HARDI acquisition as the scheme of choice for diffusion acquisitions. PMID:27683539

  1. The Non-Classical Boltzmann Equation, and Diffusion-Based Approximations to the Boltzmann Equation

    CERN Document Server

    Frank, Martin; Larsen, Edward W; Vasques, Richard

    2014-01-01

    We show that several diffusion-based approximations (classical diffusion or SP1, SP2, SP3) to the linear Boltzmann equation can (for an infinite, homogeneous medium) be represented exactly by a non-classical transport equation. As a consequence, we indicate a method to solve diffusion-based approximations to the Boltzmann equation via Monte Carlo, with only statistical errors - no truncation errors.

  2. Experimental repetitive quantum error correction.

    Science.gov (United States)

    Schindler, Philipp; Barreiro, Julio T; Monz, Thomas; Nebendahl, Volckmar; Nigg, Daniel; Chwalla, Michael; Hennrich, Markus; Blatt, Rainer

    2011-05-27

    The computational potential of a quantum processor can only be unleashed if errors during a quantum computation can be controlled and corrected for. Quantum error correction works if imperfections of quantum gate operations and measurements are below a certain threshold and corrections can be applied repeatedly. We implement multiple quantum error correction cycles for phase-flip errors on qubits encoded with trapped ions. Errors are corrected by a quantum-feedback algorithm using high-fidelity gate operations and a reset technique for the auxiliary qubits. Up to three consecutive correction cycles are realized, and the behavior of the algorithm for different noise environments is analyzed.
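
    The repeated correction cycles can be illustrated with a purely classical toy model: majority-vote syndrome decoding of the 3-qubit phase-flip code, with the phase of each qubit tracked as a ±1 sign instead of a simulated quantum state. This is a didactic simplification, not a model of the trapped-ion experiment itself.

```python
import random

def correct_phase_flip(signs):
    """Syndrome decoding for the 3-qubit phase-flip code. signs[i] is +1
    for |+> and -1 for |-> on qubit i; a Z error flips one sign. The two
    syndrome bits are the parities of neighboring pairs, as the ancilla
    measurements would report them."""
    s1 = signs[0] * signs[1]
    s2 = signs[1] * signs[2]
    if s1 < 0 and s2 > 0:
        signs[0] *= -1   # single error localized on qubit 0
    elif s1 < 0 and s2 < 0:
        signs[1] *= -1   # single error localized on qubit 1
    elif s1 > 0 and s2 < 0:
        signs[2] *= -1   # single error localized on qubit 2
    return signs

# Repeated correction cycles: one random single-qubit error per cycle.
state = [+1, +1, +1]
rng = random.Random(1)
for _ in range(3):
    state[rng.randrange(3)] *= -1   # inject a phase-flip error
    state = correct_phase_flip(state)
print(state)  # -> [1, 1, 1]
```

    As long as at most one error occurs per cycle, each cycle restores the encoded state, which is the regime the repeated-correction experiment operates in.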

  3. Register file soft error recovery

    Science.gov (United States)

    Fleischer, Bruce M.; Fox, Thomas W.; Wait, Charles D.; Muff, Adam J.; Watson, III, Alfred T.

    2013-10-15

    Register file soft error recovery including a system that includes a first register file and a second register file that mirrors the first register file. The system also includes an arithmetic pipeline for receiving data read from the first register file, and error detection circuitry to detect whether the data read from the first register file includes corrupted data. The system further includes error recovery circuitry to insert an error recovery instruction into the arithmetic pipeline in response to detecting the corrupted data. The inserted error recovery instruction replaces the corrupted data in the first register file with a copy of the data from the second register file.
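
    The mirror-and-recover scheme in the patent abstract can be sketched as follows. The parity-based detection and the class interface are illustrative assumptions (a real design detects and recovers in hardware, and single-bit parity catches only odd-bit flips); the point is the flow: detect corruption on read, then replace the corrupted word with the mirror copy.

```python
class MirroredRegisterFile:
    """Toy model of register-file soft-error recovery: a primary file,
    a mirror copy, and per-word parity for error detection on reads."""
    def __init__(self, size):
        self.primary = [0] * size
        self.mirror = [0] * size
        self.parity = [0] * size

    def write(self, i, value):
        self.primary[i] = value
        self.mirror[i] = value
        self.parity[i] = bin(value).count("1") % 2

    def read(self, i):
        word = self.primary[i]
        if bin(word).count("1") % 2 != self.parity[i]:
            # Detected corruption: recover the word from the mirror copy,
            # standing in for the inserted error-recovery instruction.
            self.primary[i] = self.mirror[i]
            word = self.primary[i]
        return word

rf = MirroredRegisterFile(4)
rf.write(2, 0b1010)
rf.primary[2] ^= 0b0001          # simulate a single-bit soft error
print(rf.read(2) == 0b1010)      # -> True (recovered from the mirror)
```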

  4. Impact of spherical diffusion on labile trace metal speciation by electrochemical stripping techniques

    NARCIS (Netherlands)

    Pinheiro, J.P.; Domingos, R.F.

    2005-01-01

    The impact of the spherical diffusion contribution in labile trace metal speciation by stripping techniques was studied. It was shown that the relative error in the calculation of the stability constants caused by assuming linear diffusion varies with the efficiency of stirring, the diffusion coefficient…

  5. High-throughput ab-initio dilute solute diffusion database

    Science.gov (United States)

    Wu, Henry; Mayeshiba, Tam; Morgan, Dane

    2016-07-01

    We demonstrate automated generation of diffusion databases from high-throughput density functional theory (DFT) calculations. A total of more than 230 dilute solute diffusion systems in Mg, Al, Cu, Ni, Pd, and Pt host lattices have been determined using multi-frequency diffusion models. We apply a correction method for solute diffusion in alloys using experimental and simulated values of host self-diffusivity. We find good agreement with experimental solute diffusion data, obtaining a weighted activation barrier RMS error of 0.176 eV when excluding magnetic solutes in non-magnetic alloys. The compiled database is the largest collection of consistently calculated ab-initio solute diffusion data in the world.
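
    Two pieces of the workflow above can be sketched directly: the Arrhenius form in which solute diffusivities are reported, and a weighted RMS error over activation barriers. All numeric values below are made-up placeholders, not entries from the database.

```python
import math

K_B = 8.617333262e-5  # Boltzmann constant in eV/K

def diffusivity(d0, q, t):
    """Arrhenius form for dilute solute diffusion: D = D0 * exp(-Q / (kB*T)),
    with prefactor d0, activation barrier q in eV, and temperature t in K."""
    return d0 * math.exp(-q / (K_B * t))

def weighted_rmse(pred, ref, weights):
    """Weighted RMS error between predicted and reference activation barriers."""
    num = sum(w * (a - b) ** 2 for a, b, w in zip(pred, ref, weights))
    return math.sqrt(num / sum(weights))

# Hypothetical DFT vs. experimental barriers in eV.
pred = [1.25, 0.95, 2.10]
ref = [1.30, 1.00, 2.00]
w = [1.0, 1.0, 0.5]
rmse = weighted_rmse(pred, ref, w)
print(round(rmse, 3))  # -> 0.063
```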

  6. Method for characterization of a spherically bent crystal for Kα X-ray imaging of laser plasmas using a focusing monochromator geometry

    Science.gov (United States)

    Kugland, Nathan; Doeppner, Tilo; Glenzer, Siegfried; Constantin, Carmen; Niemann, Chris; Neumayer, Paul

    2015-04-07

    A method is provided for characterizing spectrometric properties (e.g., peak reflectivity, reflection curve width, and Bragg angle offset) of the Kα emission line reflected narrowly off angle of the direct reflection of a bent crystal, and in particular of a spherically bent quartz 200 crystal, by analyzing the off-angle x-ray emission from a stronger emission line reflected at angles far from normal incidence. The bent quartz crystal can therefore accurately image argon Kα x-rays at near-normal incidence (Bragg angle of approximately 81 degrees). The method is useful for in-situ calibration of instruments employing the crystal as a grating by first operating the crystal as a high-throughput focusing monochromator on the Rowland circle at angles far from normal incidence (Bragg angle approximately 68 degrees) to make a reflection curve with He-like x-rays such as the He-α emission line observed from a laser-excited plasma.

  7. Controlling errors in unidosis carts

    Directory of Open Access Journals (Sweden)

    Inmaculada Díaz Fernández

    2010-01-01

    Full Text Available Objective: To identify errors in the unidosis system carts. Method: For two months, the Pharmacy Service controlled medication either returned or missing from the unidosis carts, both in the pharmacy and in the wards. Results: Unidosis carts that were not revised showed 0.9% medication errors (264) versus 0.6% (154) in carts previously revised. In carts not revised, 70.83% of the errors arise when setting up the carts; the rest are due to a lack of stock or unavailability (21.6%), errors in the transcription of medical orders (6.81%), or boxes not having been emptied previously (0.76%). The errors found in the units correspond to errors in the transcription of the treatment (3.46%), non-receipt of the unidosis copy (23.14%), the patient not taking the medication (14.36%) or being discharged without medication (12.77%), medication not provided by nurses (14.09%), medication withdrawn from the stocks of the unit (14.62%), and errors of the pharmacy service (17.56%). Conclusions: Unidosis carts need to be revised, and a computerized prescription system is needed to avoid errors in transcription. Discussion: A high percentage of medication errors is caused by human error. If unidosis carts are checked before being sent to hospitalization units, the error rate diminishes to 0.3%.

  8. Prediction of discretization error using the error transport equation

    Science.gov (United States)

    Celik, Ismail B.; Parsons, Don Roscoe

    2017-06-01

    This study focuses on an approach to quantify the discretization error associated with numerical solutions of partial differential equations by solving an error transport equation (ETE). The goal is to develop a method that can be used to adequately predict the discretization error using the numerical solution on only one grid/mesh. The primary problem associated with solving the ETE is the formulation of the error source term which is required for accurately predicting the transport of the error. In this study, a novel approach is considered which involves fitting the numerical solution with a series of locally smooth curves and then blending them together with a weighted spline approach. The result is a continuously differentiable analytic expression that can be used to determine the error source term. Once the source term has been developed, the ETE can easily be solved using the same solver that is used to obtain the original numerical solution. The new methodology is applied to the two-dimensional Navier-Stokes equations in the laminar flow regime. A simple unsteady flow case is also considered. The discretization error predictions based on the methodology presented in this study are in good agreement with the 'true error'. While in most cases the error predictions are not quite as accurate as those from Richardson extrapolation, the results are reasonable and only require one numerical grid. The current results indicate that there is much promise going forward with the newly developed error source term evaluation technique and the ETE.
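For comparison with the ETE approach, the Richardson-extrapolation error estimate mentioned above can be sketched as follows (two-grid estimate for an order-p scheme with refinement ratio r; the sample solution values are hypothetical):

```python
def richardson_error_estimate(f_fine, f_coarse, r=2.0, p=2):
    """Estimated discretization error of the fine-grid solution for an
    order-p scheme with refinement ratio r:
        E_fine ~ (f_coarse - f_fine) / (r**p - 1)"""
    return (f_coarse - f_fine) / (r**p - 1.0)

# Example: a quantity computed on two grids (hypothetical values)
f_h, f_2h = 1.2500, 1.2620
err = richardson_error_estimate(f_h, f_2h)
print(err)            # ~0.004
print(f_h - err)      # extrapolated (higher-order) estimate of the exact value
```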

  9. Prioritising interventions against medication errors

    DEFF Research Database (Denmark)

    Lisby, Marianne; Pape-Larsen, Louise; Sørensen, Ann Lykkegaard

    2011-01-01

    Abstract Authors: Lisby M, Larsen LP, Soerensen AL, Nielsen LP, Mainz J Title: Prioritising interventions against medication errors – the importance of a definition Objective: To develop and test a restricted definition of medication errors across health care settings in Denmark Methods: Medication...... errors constitute a major quality and safety problem in modern healthcare. However, far from all are clinically important. The prevalence of medication errors ranges from 2-75%, indicating a global problem in defining and measuring these [1]. New cut-off levels focusing on the clinical impact of medication...... errors are therefore needed. Development of definition: A definition of medication errors including an index of error types for each stage in the medication process was developed from existing terminology and through a modified Delphi-process in 2008. The Delphi panel consisted of 25 interdisciplinary...

  10. Sentinel-2 diffuser on-ground calibration

    Science.gov (United States)

    Mazy, E.; Camus, F.; Chorvalli, V.; Domken, I.; Laborie, A.; Marcotte, S.; Stockman, Y.

    2013-10-01

    The Sentinel-2 multi-spectral instrument (MSI) will provide Earth imagery in the frame of the Global Monitoring for Environment and Security (GMES) initiative, which is a joint undertaking of the European Commission and the Agency. The MSI instrument, under Astrium SAS responsibility, is a push-broom spectro-imager with 13 spectral channels in the VNIR and SWIR. The instrument radiometric calibration is based on in-flight calibration with sunlight through a quasi-Lambertian diffuser. The diffuser covers the full pupil and the full field of view of the instrument. On-ground calibration of the diffuser BRDF is mandatory to fulfil the in-flight performances. The diffuser is a 779 x 278 mm2 rectangular flat area in Zenith-A material. It is mounted on a motorised door in front of the instrument optical system entrance. The diffuser manufacturing and calibration are under the responsibility of the Centre Spatial de Liège (CSL). The CSL has designed and built a completely remote-controlled BRDF test bench able to handle large diffusers in their mount. As the diffuser is calibrated directly in its mount with respect to a reference cube, the error budget is significantly improved. The BRDF calibration is performed directly in the MSI instrument spectral bands by using dedicated band-pass filters (VNIR and SWIR up to 2200 nm). Absolute accuracy is better than 0.5% in the VNIR spectral bands and 1% in the SWIR spectral bands. Performances were cross-checked with other laboratories. The first MSI flight-model diffuser was calibrated mid-2013 on the CSL BRDF measurement bench. The calibration of the diffuser consists mainly of thermal vacuum cycles, BRDF uniformity characterisation and BRDF angular characterisation. The total number of measurements for the first flight-model diffuser corresponds to more than 17500 BRDF acquisitions. Performance results are discussed in comparison with requirements.

  11. Quantifying soil CO2 respiration measurement error across instruments

    Science.gov (United States)

    Creelman, C. A.; Nickerson, N. R.; Risk, D. A.

    2010-12-01

    A variety of instrumental methodologies have been developed in an attempt to accurately measure the rate of soil CO2 respiration. Among the most commonly used are the static and dynamic chamber systems. The degree to which these methods misread or perturb the soil CO2 signal, however, is poorly understood. One source of error in particular is the introduction of lateral diffusion due to the disturbance of the steady-state CO2 concentrations. The addition of soil collars to the chamber system attempts to address this perturbation, but may induce additional errors from the increased physical disturbance. Using a numerical 3D soil-atmosphere diffusion model, we are undertaking a comprehensive comparative study of existing static and dynamic chambers, as well as a solid-state CTFD probe. Specifically, we are examining the 3D diffusion errors associated with each method and opportunities for correction. In this study, the impact of collar length, chamber geometry, chamber mixing and diffusion parameters on the magnitude of lateral diffusion around the instrument are quantified in order to provide insight into obtaining more accurate soil respiration estimates. Results suggest that while each method can approximate the true flux rate under idealized conditions, the associated errors can be of a high magnitude and may vary substantially in their sensitivity to these parameters. In some cases, factors such as the collar length and chamber exchange rate used are coupled in their effect on accuracy. Due to the widespread use of these instruments, it is critical that the nature of their biases and inaccuracies be understood in order to inform future development, ensure the accuracy of current measurements and to facilitate inter-comparison between existing datasets.
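As a point of reference for the chamber methods discussed above, a soil CO2 flux can also be estimated from a concentration gradient via Fick's first law. The sketch below uses hypothetical profile values and a hypothetical effective diffusivity:

```python
def fick_flux(c_upper, c_lower, dz, D):
    """Steady-state Fick's-law flux F = -D * dC/dz.
    Concentrations in mol/m^3, dz in m, D in m^2/s.
    Positive flux points upward (toward the surface)."""
    return -D * (c_upper - c_lower) / dz

# Hypothetical soil-gas profile: CO2 molar concentration at two depths
c_5cm  = 0.30    # mol/m^3 at 5 cm depth
c_15cm = 0.90    # mol/m^3 at 15 cm depth
D_eff  = 2.0e-6  # effective (tortuosity-corrected) soil CO2 diffusivity, m^2/s

flux = fick_flux(c_5cm, c_15cm, dz=0.10, D=D_eff)  # mol m^-2 s^-1
print(flux * 1e6)  # in umol m^-2 s^-1
```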

  12. Synchrotron radiation measurement of multiphase fluid saturations in porous media: Experimental technique and error analysis

    Science.gov (United States)

    Tuck, David M.; Bierck, Barnes R.; Jaffé, Peter R.

    1998-06-01

    Multiphase flow in porous media is an important research topic. In situ, nondestructive experimental methods for studying multiphase flow are important for improving our understanding and the theory. Rapid changes in fluid saturation, characteristic of immiscible displacement, are difficult to measure accurately using gamma rays due to practical restrictions on source strength. Our objective is to describe a synchrotron radiation technique for rapid, nondestructive saturation measurements of multiple fluids in porous media, and to present a precision and accuracy analysis of the technique. Synchrotron radiation provides a high-intensity, inherently collimated photon beam of tunable energy which can yield accurate measurements of fluid saturation in just one second. Measurements were obtained with a precision of ±0.01 or better for tetrachloroethylene (PCE) in a 2.5 cm thick glass-bead porous medium using a counting time of 1 s. The normal distribution was shown to provide acceptable confidence limits for PCE saturation changes. Sources of error include heat load on the monochromator, periodic movement of the source beam, and errors in the stepping-motor positioning system. Hypodermic needles pushed into the medium to inject PCE changed porosity in a region within approximately ±1 mm of the injection point. Improved mass balance between the known and measured PCE injection volumes was obtained when appropriate corrections were applied to calibration values near the injection point.
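The saturation measurement rests on Beer-Lambert attenuation of the monochromatic beam. A minimal sketch, assuming a single-energy scan compared against a PCE-free reference scan; the attenuation coefficient, porosity, and count rates are hypothetical:

```python
import math

def pce_saturation(i_ref, i_meas, mu_pce, porosity, thickness_cm):
    """PCE saturation from monochromatic-beam attenuation relative to a
    PCE-free reference scan, via the Beer-Lambert law:
        I = I_ref * exp(-mu_pce * S * porosity * L)"""
    return math.log(i_ref / i_meas) / (mu_pce * porosity * thickness_cm)

# Hypothetical values: PCE linear attenuation coefficient at the chosen
# energy (1/cm), bead-pack porosity, 2.5 cm thick cell, photon counts.
S = pce_saturation(i_ref=1.0e6, i_meas=7.0e5,
                   mu_pce=0.8, porosity=0.38, thickness_cm=2.5)
print(round(S, 3))
```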

  13. Improved Error Thresholds for Measurement-Free Error Correction

    Science.gov (United States)

    Crow, Daniel; Joynt, Robert; Saffman, M.

    2016-09-01

    Motivated by limitations and capabilities of neutral atom qubits, we examine whether measurement-free error correction can produce practical error thresholds. We show that this can be achieved by extracting redundant syndrome information, giving our procedure extra fault tolerance and eliminating the need for ancilla verification. The procedure is particularly favorable when multiqubit gates are available for the correction step. Simulations of the bit-flip, Bacon-Shor, and Steane codes indicate that coherent error correction can produce threshold error rates that are on the order of 10^-3 to 10^-4, comparable with or better than measurement-based values, and much better than previous results for other coherent error correction schemes. This indicates that coherent error correction is worthy of serious consideration for achieving protected logical qubits.
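The scaling behind such thresholds can be illustrated with the simplest case, the three-qubit bit-flip (repetition) code, where majority-vote correction reduces a physical error rate p to roughly 3p^2. This is a classical Monte Carlo sketch of that code, not the authors' measurement-free scheme:

```python
import random

def logical_error_rate(p, trials=200_000, seed=1):
    """Monte Carlo estimate of the logical error rate of the 3-qubit
    repetition (bit-flip) code under majority-vote correction:
    decoding fails when 2 or more of the 3 qubits flip."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(trials):
        flips = sum(rng.random() < p for _ in range(3))
        if flips >= 2:          # majority vote decodes to the wrong value
            failures += 1
    return failures / trials

p = 0.01
print(logical_error_rate(p))   # ~3*p**2 = 3e-4 for small p
```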

  14. Influence of Boundary Condition and Diffusion Coefficient on the Accuracy of Diffusion Theory in Steady-State Spatially Resolved Diffuse Reflectance of Biological Tissues

    Institute of Scientific and Technical Information of China (English)

    张连顺; 张春平; 王新宇; 祁胜文; 许棠; 田建国; 张光寅

    2002-01-01

    The applicability of diffusion theory for the determination of tissue optical properties from steady-state diffuse reflectance is investigated. Analytical expressions from diffusion theory using the two most commonly assumed boundary conditions at the air-tissue interface and the two definitions of the diffusion coefficient are compared with Monte Carlo simulations. The effects of the choice of boundary condition and diffusion coefficient on the accuracy of the derived optical parameters are quantified, and criteria for accurate curve-fitting algorithms are developed. It is shown that the error in deriving the optical coefficients is considerably smaller for the solution which uses the extrapolated boundary condition and the diffusion coefficient independent of the absorption coefficient, compared to the other three solutions.
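The two diffusion-coefficient definitions compared in the abstract are commonly written as D = 1/[3(mu_a + mu_s')] (absorption-dependent) and D = 1/(3 mu_s') (absorption-independent). A minimal sketch with tissue-like, hypothetical optical properties:

```python
def diffusion_coefficients(mu_a, mu_s_prime):
    """The two common definitions of the optical diffusion coefficient (mm):
    with and without the absorption coefficient in the denominator."""
    D_with_mua = 1.0 / (3.0 * (mu_a + mu_s_prime))
    D_without  = 1.0 / (3.0 * mu_s_prime)
    return D_with_mua, D_without

# Typical tissue-like optical properties (hypothetical), in mm^-1
mu_a, mu_sp = 0.01, 1.0
d1, d2 = diffusion_coefficients(mu_a, mu_sp)
print(round(d1, 4), round(d2, 4))   # the two definitions differ by ~1% here
```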

  15. Diffusion on spatial network

    Science.gov (United States)

    Hui, Zi; Tang, Xiaoyue; Li, Wei; Greneche, Jean-Marc; Wang, Qiuping A.

    2015-04-01

    In this work, we study the problem of diffusing a product (idea, opinion, disease, etc.) among agents on a spatial network. The network is constructed by random addition of nodes in the plane. The probability for a previous node to be connected to the new one is inversely proportional to their spatial distance to the power of α. The diffusion rate between two connected nodes is likewise inversely proportional to their spatial distance to the power of β. Inspired by Fick's first law, we introduce a diffusion coefficient to measure the diffusion ability of the spatial network. Using both theoretical analysis and Monte Carlo simulation, we find that the diffusion coefficient always decreases with increasing α and β, and that the diffusion sub-coefficient follows a power law of the spatial distance with exponent equal to -α-β+2. Since both short-range and long-range diffusion exist, we use the anomalous diffusion framework to describe the diffusion process. We find that the slope index δ of the anomalous diffusion is always smaller than 1; the diffusion process in our model is therefore sub-diffusive.
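A minimal sketch of such a spatial network, with attachment probability proportional to d^(-alpha) and link diffusion rate proportional to d^(-beta). This simplifies the model above (node positions uniform in the unit square, one link per new node):

```python
import math
import random

def build_spatial_network(n, alpha, beta, seed=0):
    """Grow a planar network by random node addition. Each new node links to
    one existing node chosen with probability proportional to
    distance**(-alpha); the link carries a rate distance**(-beta)."""
    rng = random.Random(seed)
    pos = [(rng.random(), rng.random())]
    links = []   # (i, j, rate)
    for j in range(1, n):
        p = (rng.random(), rng.random())
        weights = [math.dist(pos[i], p) ** (-alpha) for i in range(j)]
        i = rng.choices(range(j), weights=weights)[0]
        d = math.dist(pos[i], p)
        links.append((i, j, d ** (-beta)))
        pos.append(p)
    return pos, links

pos, links = build_spatial_network(n=500, alpha=1.0, beta=1.0)
mean_rate = sum(r for _, _, r in links) / len(links)
print(len(links), round(mean_rate, 2))
```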

  16. PREVENTABLE ERRORS: NEVER EVENTS

    Directory of Open Access Journals (Sweden)

    Narra Gopal

    2014-07-01

    Full Text Available Operation or any invasive procedure is a stressful event involving risks and complications. We should be able to offer a guarantee that the right procedure will be done on the right person, in the right place on their body. "Never events" are definable: they are avoidable and preventable events. The consequences of surgical mistakes range from temporary injury (60%) through permanent injury (33%) to death (7%). The World Health Organization (WHO) [1] has earlier said that over seven million people across the globe suffer preventable surgical injuries every year, a million of them even dying during or immediately after surgery. The UN body put the number of surgeries taking place every year globally at 234 million; surgery has become common, with one in every 25 people undergoing it at any given time. 50% of never events are preventable. Evidence suggests up to one in ten hospital admissions results in an adverse incident, a rate that would not be acceptable in other industries. In order to move towards a more acceptable level of safety, we need to understand how and why things go wrong and have to build a reliable system of working. With such a system, even though complete prevention may not be possible, we can reduce the error percentage. To change the present attitude towards the patient, we first have to replace the word "patient" with "medical customer"; then our outlook also changes, and we will be more careful towards our customers.

  17. Comparison of analytical error and sampling error for contaminated soil.

    Science.gov (United States)

    Gustavsson, Björn; Luthbom, Karin; Lagerkvist, Anders

    2006-11-16

    Investigation of soil from contaminated sites requires several sample handling steps that, most likely, will induce uncertainties in the sample. The theory of sampling describes seven sampling errors that can be calculated, estimated or discussed in order to get an idea of the size of the sampling uncertainties. With the aim of comparing the size of the analytical error to the total sampling error, these seven errors were applied, estimated and discussed for a case study of a contaminated site. The manageable errors were summarized, showing a range of three orders of magnitude between the examples. The comparisons show that the quotient between the total sampling error and the analytical error is larger than 20 in most calculation examples. Exceptions were samples taken in hot spots, where some components of the total sampling error become small and the analytical error becomes large in comparison. Low concentration of contaminant, small extracted sample size and large particles in the sample all contribute to the extent of uncertainty.

  18. On the Design of Error-Correcting Ciphers

    Directory of Open Access Journals (Sweden)

    Mathur Chetan Nanjunda

    2006-01-01

    Full Text Available Securing transmission over a wireless network is especially challenging, not only because of the inherently insecure nature of the medium, but also because of the highly error-prone nature of the wireless environment. In this paper, we take a joint encryption-error correction approach to ensure secure and robust communication over the wireless link. In particular, we design an error-correcting cipher (called the high diffusion cipher) and prove bounds on its error-correcting capacity as well as its security. Towards this end, we propose a new class of error-correcting codes (HD-codes) with built-in security features that we use in the diffusion layer of the proposed cipher. We construct an example 128-bit cipher using the HD-codes, and compare it experimentally with two traditional concatenated systems: (a) AES (Rijndael) followed by Reed-Solomon codes, (b) Rijndael followed by convolutional codes. We show that the HD-cipher is as resistant to linear and differential cryptanalysis as the Rijndael. We also show that any chosen plaintext attack that can be performed on the HD cipher can be transformed into a chosen plaintext attack on the Rijndael cipher. In terms of error correction capacity, the traditional systems using Reed-Solomon codes are comparable to the proposed joint error-correcting cipher, and those that use convolutional codes require more data expansion in order to achieve similar error correction as the HD-cipher.
The original contributions of this work are (1) design of a new joint error-correction-encryption system, (2) design of a new class of algebraic codes with built-in security criteria, called the high diffusion codes (HD-codes), for use in the HD-cipher, (3) mathematical properties of these codes, (4) methods for construction of the codes, (5) bounds on the error-correcting capacity of the HD-cipher, (6) mathematical derivation of the bound on resistance of the HD cipher to linear and differential cryptanalysis, (7) experimental comparison

  19. The Usability-Error Ontology

    DEFF Research Database (Denmark)

    2013-01-01

    ability to do systematic reviews and meta-analyses. In an effort to support improved and more interoperable data capture regarding Usability Errors, we have created the Usability Error Ontology (UEO) as a classification method for representing knowledge regarding Usability Errors. We expect the UEO...... in patients coming to harm. Often the root cause analysis of these adverse events can be traced back to Usability Errors in the Health Information Technology (HIT) or its interaction with users. Interoperability of the documentation of HIT related Usability Errors in a consistent fashion can improve our...... will grow over time to support an increasing number of HIT system types. In this manuscript, we present this Ontology of Usability Error Types and specifically address Computerized Physician Order Entry (CPOE), Electronic Health Records (EHR) and Revenue Cycle HIT systems....

  20. Nested Quantum Error Correction Codes

    CERN Document Server

    Wang, Zhuo; Fan, Hen; Vedral, Vlatko

    2009-01-01

    The theory of quantum error correction was established more than a decade ago as the primary tool for fighting decoherence in quantum information processing. Although great progress has already been made in this field, limited methods are available for constructing new quantum error correction codes from old codes. Here we exhibit a simple and general method to construct new quantum error correction codes by nesting certain quantum codes together. The problem of finding long quantum error correction codes is reduced to that of searching for several short quantum codes with certain properties. Our method works for codes of all lengths and all distances, and is quite efficient for constructing optimal or near-optimal codes. The two main known methods for constructing new codes from old in quantum error-correction theory, concatenation and pasting, can be understood in the framework of nested quantum error correction codes.

  1. Real diffusion-weighted MRI enabling true signal averaging and increased diffusion contrast.

    Science.gov (United States)

    Eichner, Cornelius; Cauley, Stephen F; Cohen-Adad, Julien; Möller, Harald E; Turner, Robert; Setsompop, Kawin; Wald, Lawrence L

    2015-11-15

    This project aims to characterize the impact of underlying noise distributions on diffusion-weighted imaging. The noise floor is a well-known problem for traditional magnitude-based diffusion-weighted MRI (dMRI) data, leading to biased diffusion model fits and inaccurate signal averaging. Here, we introduce a total-variation-based algorithm to eliminate shot-to-shot phase variations of complex-valued diffusion data with the intention to extract real-valued dMRI datasets. The obtained real-valued diffusion data are no longer superimposed by a noise floor but instead by a zero-mean Gaussian noise distribution, yielding dMRI data without signal bias. We acquired high-resolution dMRI data with strong diffusion weighting and, thus, low signal-to-noise ratio. Both the extracted real-valued and traditional magnitude data were compared regarding signal averaging, diffusion model fitting and accuracy in resolving crossing fibers. Our results clearly indicate that real-valued diffusion data enables idealized conditions for signal averaging. Furthermore, the proposed method enables unbiased use of widely employed linear least squares estimators for model fitting and demonstrates an increased sensitivity to detect secondary fiber directions with reduced angular error. The use of phase-corrected, real-valued data for dMRI will therefore help to clear the way for more detailed and accurate studies of white matter microstructure and structural connectivity on a fine scale. Copyright © 2015 Elsevier Inc. All rights reserved.

  2. Can post-error dynamics explain sequential reaction time patterns?

    Directory of Open Access Journals (Sweden)

    Stephanie Goldfarb

    2012-07-01

    Full Text Available We investigate human error dynamics in sequential two-alternative choice tasks. When subjects repeatedly discriminate between two stimuli, their error rates and mean reaction times (RTs systematically depend on prior sequences of stimuli. We analyze these sequential effects on RTs, separating error and correct responses, and identify a sequential RT tradeoff: a sequence of stimuli which yields a relatively fast RT on error trials will produce a relatively slow RT on correct trials and vice versa. We reanalyze previous data and acquire and analyze new data in a choice task with stimulus sequences generated by a first-order Markov process having unequal probabilities of repetitions and alternations. We then show that relationships among these stimulus sequences and the corresponding RTs for correct trials, error trials, and averaged over all trials are significantly influenced by the probability of alternations; these relationships have not been captured by previous models. Finally, we show that simple, sequential updates to the initial condition and thresholds of a pure drift diffusion model can account for the trends in RT for correct and error trials. Our results suggest that error-based parameter adjustments are critical to modeling sequential effects.
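The pure drift diffusion model referred to above can be simulated with a simple Euler-Maruyama scheme; the sequential effects the authors describe would then be modeled by updating the start point and thresholds between trials. All parameter values below are illustrative, not the paper's fitted values:

```python
import random

def ddm_trial(drift, threshold, start=0.0, dt=0.001, sigma=1.0, rng=random):
    """Simulate one drift-diffusion trial (Euler-Maruyama integration).
    Returns (choice, reaction_time); choice is +1/-1 for upper/lower bound."""
    x, t = start, 0.0
    while abs(x) < threshold:
        x += drift * dt + sigma * rng.gauss(0.0, dt ** 0.5)
        t += dt
    return (1 if x > 0 else -1), t

rng = random.Random(42)
trials = [ddm_trial(drift=1.0, threshold=1.0, rng=rng) for _ in range(1000)]
accuracy = sum(c == 1 for c, _ in trials) / len(trials)
mean_rt = sum(t for _, t in trials) / len(trials)
# Theory for these parameters: accuracy 1/(1+e^-2) ~ 0.88, mean RT tanh(1) ~ 0.76 s
print(round(accuracy, 2), round(mean_rt, 2))
```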

  3. A Student Diffusion Activity

    Science.gov (United States)

    Kutzner, Mickey; Pearson, Bryan

    2017-01-01

    Diffusion is a truly interdisciplinary topic bridging all areas of STEM education. When biomolecules are not being moved through the body by fluid flow through the circulatory system or by molecular motors, diffusion is the primary mode of transport over short distances. The direction of the diffusive flow of particles is from high concentration…

  4. Acoustic diffusers III

    Science.gov (United States)

    Bidondo, Alejandro

    2002-11-01

    This acoustic diffusion research presents a pragmatic view, based more on effects than on causes, which is very useful in the project advance control process, where the sound field's diffusion coefficient, the sound field diffusivity (SFD), is used for its evaluation. Further research suggestions are presented to obtain an octave-band frequency resolution of the SFD for precise design or acoustical corrections.

  5. A Student Diffusion Activity

    Science.gov (United States)

    Kutzner, Mickey; Pearson, Bryan

    2017-02-01

    Diffusion is a truly interdisciplinary topic bridging all areas of STEM education. When biomolecules are not being moved through the body by fluid flow through the circulatory system or by molecular motors, diffusion is the primary mode of transport over short distances. The direction of the diffusive flow of particles is from high concentration toward low concentration.

  6. Processor register error correction management

    Science.gov (United States)

    Bose, Pradip; Cher, Chen-Yong; Gupta, Meeta S.

    2016-12-27

    Processor register protection management is disclosed. In embodiments, a method of processor register protection management can include determining a sensitive logical register for executable code generated by a compiler, generating an error-correction table identifying the sensitive logical register, and storing the error-correction table in a memory accessible by a processor. The processor can be configured to generate a duplicate register of the sensitive logical register identified by the error-correction table.
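The duplicate-register idea can be sketched at a high level as follows. This is a toy Python model, not the patented mechanism: registers listed in the "error-correction table" are shadowed on write and compared on read, so a corrupted primary copy is detected.

```python
class DuplicatedRegisterFile:
    """Toy model of duplicate-register protection: registers named in the
    error-correction table are shadowed on write and compared on read."""
    def __init__(self, sensitive):
        self.sensitive = set(sensitive)   # the "error-correction table"
        self.regs, self.shadow = {}, {}

    def write(self, name, value):
        self.regs[name] = value
        if name in self.sensitive:
            self.shadow[name] = value     # duplicate register

    def read(self, name):
        value = self.regs[name]
        if name in self.sensitive and self.shadow[name] != value:
            raise RuntimeError(f"register {name}: primary/duplicate mismatch")
        return value

rf = DuplicatedRegisterFile(sensitive={"r3"})
rf.write("r3", 0xDEAD)
rf.regs["r3"] ^= 0x1            # inject a bit flip into the primary copy
try:
    rf.read("r3")
except RuntimeError as e:
    print("detected:", e)
```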

  7. Multipath error in range rate measurement by PLL-transponder/GRARR/TDRS

    Science.gov (United States)

    Sohn, S. J.

    1970-01-01

    Range rate errors due to specular and diffuse multipath are calculated for a tracking and data relay satellite (TDRS) using an S-band Goddard range and range rate (GRARR) system modified with a phase-locked loop transponder. Carrier signal processing in the coherent turn-around transponder and the GRARR receiver is taken into account. The root-mean-square (rms) range rate error was computed for the GRARR Doppler extractor and N-cycle count range rate measurement. Curves of worst-case range rate error are presented as a function of grazing angle at the reflection point. At very low grazing angles specular scattering predominates over diffuse scattering, as expected, whereas for grazing angles greater than approximately 15 deg the diffuse multipath predominates. The range rate errors at different low orbit altitudes peaked between 5 and 10 deg grazing angles.

  8. The Usability-Error Ontology

    DEFF Research Database (Denmark)

    2013-01-01

    ability to do systematic reviews and meta-analyses. In an effort to support improved and more interoperable data capture regarding Usability Errors, we have created the Usability Error Ontology (UEO) as a classification method for representing knowledge regarding Usability Errors. We expect the UEO...... will grow over time to support an increasing number of HIT system types. In this manuscript, we present this Ontology of Usability Error Types and specifically address Computerized Physician Order Entry (CPOE), Electronic Health Records (EHR) and Revenue Cycle HIT systems....

  9. Anxiety and Error Monitoring: Increased Error Sensitivity or Altered Expectations?

    Science.gov (United States)

    Compton, Rebecca J.; Carp, Joshua; Chaddock, Laura; Fineman, Stephanie L.; Quandt, Lorna C.; Ratliff, Jeffrey B.

    2007-01-01

    This study tested the prediction that the error-related negativity (ERN), a physiological measure of error monitoring, would be enhanced in anxious individuals, particularly in conditions with threatening cues. Participants made gender judgments about faces whose expressions were either happy, angry, or neutral. Replicating prior studies, midline…

  10. Measurement Error and Equating Error in Power Analysis

    Science.gov (United States)

    Phillips, Gary W.; Jiang, Tao

    2016-01-01

    Power analysis is a fundamental prerequisite for conducting scientific research. Without power analysis the researcher has no way of knowing whether the sample size is large enough to detect the effect he or she is looking for. This paper demonstrates how psychometric factors such as measurement error and equating error affect the power of…

  11. Adaptive computation for convection dominated diffusion problems

    Institute of Scientific and Technical Information of China (English)

    CHEN Zhiming; JI Guanghua

    2004-01-01

    We derive a sharp L∞(L1) a posteriori error estimate for convection-dominated diffusion equations of the form ∂u/∂t + div(vu) - εΔu = g. The derived estimate is insensitive to the diffusion parameter ε→0. The problem is discretized implicitly in time via the method of characteristics and in space via continuous piecewise linear finite elements. Numerical experiments are reported to show the competitive behavior of the proposed adaptive method.

  12. Error begat error: design error analysis and prevention in social infrastructure projects.

    Science.gov (United States)

    Love, Peter E D; Lopez, Robert; Edwards, David J; Goh, Yang M

    2012-09-01

    Design errors contribute significantly to cost and schedule growth in social infrastructure projects and to engineering failures, which can result in accidents and loss of life. Despite considerable research addressing error causation in construction projects, design errors remain prevalent. This paper identifies the underlying conditions that contribute to design errors in social infrastructure projects (e.g. hospitals, education, law and order type buildings). A systemic model of error causation is propagated and subsequently used to develop a learning framework for design error prevention. The research suggests that a multitude of strategies should be adopted in congruence to prevent design errors from occurring and so ensure that safety and project performance are ameliorated.

  13. Spatial frequency domain error budget

    Energy Technology Data Exchange (ETDEWEB)

    Hauschildt, H; Krulewich, D

    1998-08-27

    The aim of this paper is to describe a methodology for designing and characterizing machines used to manufacture or inspect parts with spatial-frequency-based specifications. At Lawrence Livermore National Laboratory, one of our responsibilities is to design or select the appropriate machine tools to produce advanced optical and weapons systems. Recently, many of the component tolerances for these systems have been specified in terms of the spatial frequency content of residual errors on the surface. We typically use an error budget as a sensitivity analysis tool to ensure that the parts manufactured by a machine will meet the specified component tolerances. Error budgets provide the formalism whereby we account for all sources of uncertainty in a process, and sum them to arrive at a net prediction of how "precisely" a manufactured component can meet a target specification. Using the error budget, we are able to minimize risk during initial stages by ensuring that the machine will produce components that meet specifications before the machine is actually built or purchased. However, the current error budgeting procedure provides no formal mechanism for designing machines that can produce parts with spatial-frequency-based specifications. The output from the current error budgeting procedure is a single number estimating the net worst case or RMS error on the work piece. This procedure has limited ability to differentiate between low spatial frequency form errors versus high frequency surface finish errors. Therefore the current error budgeting procedure can lead us to reject a machine that is adequate or accept a machine that is inadequate. This paper will describe a new error budgeting methodology to aid in the design and characterization of machines used to manufacture or inspect parts with spatial-frequency-based specifications. The output from this new procedure is the continuous spatial frequency content of errors that result on a machined part. 
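A frequency-resolved error budget of the kind described above can be sketched in a few lines of Python. Everything here is illustrative: the spatial-frequency grid, the two hypothetical error sources, and their PSD values are invented for the example; only the bookkeeping (uncorrelated PSDs add, and band-limited RMS is the square root of the integrated PSD) reflects the text.

```python
import math

# Hypothetical one-sided power spectral densities (PSDs), sampled on a
# common spatial-frequency grid (cycles/mm). Values are illustrative.
freqs = [0.01, 0.1, 1.0, 10.0]
psd_thermal = [100.0, 10.0, 0.1, 0.0]   # low-frequency form error source
psd_spindle = [0.0, 1.0, 5.0, 0.5]      # mid/high-frequency finish error source

def band_rms(psd, freqs, f_lo, f_hi):
    """RMS error over a spatial-frequency band: sqrt of the trapezoidal
    integral of the PSD between f_lo and f_hi."""
    total = 0.0
    for i in range(len(freqs) - 1):
        if freqs[i] >= f_lo and freqs[i + 1] <= f_hi:
            total += 0.5 * (psd[i] + psd[i + 1]) * (freqs[i + 1] - freqs[i])
    return math.sqrt(total)

# Uncorrelated sources: their PSDs add, so band RMS values add in quadrature.
psd_total = [a + b for a, b in zip(psd_thermal, psd_spindle)]
form_rms = band_rms(psd_total, freqs, 0.01, 0.1)    # low-frequency band
finish_rms = band_rms(psd_total, freqs, 1.0, 10.0)  # high-frequency band
```

Unlike a single net RMS number, the two band values distinguish form error from surface-finish error directly.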

  14. Reducing errors in emergency surgery.

    Science.gov (United States)

    Watters, David A K; Truskett, Philip G

    2013-06-01

    Errors are to be expected in health care. Adverse events occur in around 10% of surgical patients and may be even more common in emergency surgery. There is little formal teaching on surgical error in surgical education and training programmes despite their frequency. This paper reviews surgical error and provides a classification system, to facilitate learning. The approach and language used to enable teaching about surgical error was developed through a review of key literature and consensus by the founding faculty of the Management of Surgical Emergencies course, currently delivered by General Surgeons Australia. Errors may be classified as being the result of commission, omission or inition. An error of inition is a failure of effort or will and is a failure of professionalism. The risk of error can be minimized by good situational awareness, matching perception to reality, and, during treatment, reassessing the patient, team and plan. It is important to recognize and acknowledge an error when it occurs and then to respond appropriately. The response will involve rectifying the error where possible but also disclosing, reporting and reviewing at a system level all the root causes. This should be done without shaming or blaming. However, the individual surgeon still needs to reflect on their own contribution and performance. A classification of surgical error has been developed that promotes understanding of how the error was generated, and utilizes a language that encourages reflection, reporting and response by surgeons and their teams. © 2013 The Authors. ANZ Journal of Surgery © 2013 Royal Australasian College of Surgeons.

  15. Diffusion of gallium in cadmium telluride

    Energy Technology Data Exchange (ETDEWEB)

    Blackmore, G.W. (Royal Signals and Radar Establishment, Malvern (United Kingdom)); Jones, E.D. (Coventry Polytechnic (United Kingdom)); Mullin, J.B. (Electronics Materials Consultant, West Malvern (United Kingdom)); Stewart, N.M. (BT Labs., Martlesham Heath, Ipswich (United Kingdom))

    1993-01-30

    The diffusion of Ga into bulk-grown, single crystal slices of CdTe was studied in the temperature range 350-811°C, where the diffusion anneals were carried out in sealed silica capsules using three different types of diffusion sources. These were: excess Ga used alone, or with either excess Cd or excess Te added to the Ga. Each of the three sets of conditions resulted in different types of concentration profile. At temperatures above 470°C, a function composed of the sum of two complementary error functions gave the best fit to the profiles, whereas below this temperature a function composed of the sum of one or more exponentials of the form exp(-ax) gave the best fit. The behaviour of the diffusion of Ga in CdTe is complex, but it can be seen that two diffusion mechanisms are operating. In the first, D appears to decrease with Cd partial pressure, which implies that the diffusion mechanism may involve Cd vacancies; the second is independent of Cd partial pressure. The moderate values of D obtained confirm that CdTe buffer layers may be useful in reducing Ga contamination in (HgₓCd₁₋ₓ)Te epitaxial devices grown on GaAs substrates. (orig.).
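The two-branch fit used for the high-temperature profiles can be sketched as a sum of two complementary error functions. The diffusion coefficients, surface concentrations, and anneal time below are illustrative placeholders, not the paper's fitted values:

```python
import math

def erfc_profile(x_cm, D_cm2_s, t_s, c0):
    """Constant-source diffusion profile: c0 * erfc(x / (2*sqrt(D*t)))."""
    return c0 * math.erfc(x_cm / (2.0 * math.sqrt(D_cm2_s * t_s)))

def two_branch_profile(x_cm, t_s=3600.0):
    # Sum of two erfc terms, as used to fit the profiles above 470 °C.
    # Fast branch (D=1e-9 cm^2/s) and slow branch (D=1e-11 cm^2/s):
    # hypothetical values for a 1-hour anneal.
    fast = erfc_profile(x_cm, 1e-9, t_s, 1e4)
    slow = erfc_profile(x_cm, 1e-11, t_s, 1e3)
    return fast + slow
```

At the surface (x = 0) the profile equals the sum of the two source concentrations, and it decays monotonically into the bulk.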

  16. Modified nonlinear complex diffusion filter (MNCDF).

    Science.gov (United States)

    Saini, Kalpana; Dewal, M L; Rohit, Manojkumar

    2012-06-01

    Speckle noise removal is the most important step in the processing of echocardiographic images. A speckle-free image produces useful information to diagnose heart-related diseases. Images which contain low noise and sharp edges are more easily analyzed by the clinicians. This noise removal stage is also a preprocessing stage in segmentation techniques. A new formulation has been proposed for a well-known nonlinear complex diffusion filter (NCDF). Its diffusion coefficient and the time step size are modified to give fast processing and better results. An investigation has been performed among nine patients suffering from mitral regurgitation. Images have been taken with 2D echo in apical and parasternal views. The peak signal-to-noise ratio (PSNR), universal quality index (Qi), mean absolute error (MAE), mean square error (MSE), and root mean square error (RMSE) have been calculated, and the results show that the proposed method is much better than the previous filters for echocardiographic images. The proposed method, modified nonlinear complex diffusion filter (MNCDF), smooths the homogeneous area and enhances the fine details.
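The scalar quality measures listed above are easy to state precisely; a minimal pure-Python sketch (treating images as flat sequences of gray values, with an assumed 8-bit peak of 255) is:

```python
import math

def quality_metrics(ref, test, peak=255.0):
    """MAE, MSE, RMSE and PSNR between two equal-length pixel sequences."""
    n = len(ref)
    mae = sum(abs(r - t) for r, t in zip(ref, test)) / n
    mse = sum((r - t) ** 2 for r, t in zip(ref, test)) / n
    rmse = math.sqrt(mse)
    # PSNR in dB; infinite when the images are identical.
    psnr = float('inf') if mse == 0 else 10.0 * math.log10(peak ** 2 / mse)
    return {'MAE': mae, 'MSE': mse, 'RMSE': rmse, 'PSNR': psnr}
```

Higher PSNR and lower MAE/MSE/RMSE indicate a filter output closer to the reference image.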

  17. Error Analysis in English Language Learning

    Institute of Scientific and Technical Information of China (English)

    杜文婷

    2009-01-01

    Errors in English language learning are usually classified into interlingual errors and intralingual errors. A clear knowledge of the causes of these errors will help students learn English better.

  18. Error Analysis And Second Language Acquisition

    Institute of Scientific and Technical Information of China (English)

    王惠丽

    2016-01-01

    Based on the theories of error and error analysis, this article explores the effect of error and error analysis on SLA, and offers some advice to language teachers and learners.

  19. Wavelet-Based Diffusion Approach for DTI Image Restoration

    Institute of Scientific and Technical Information of China (English)

    ZHANG Xiang-fen; CHEN Wu-fan; TIAN Wei-feng; YE Hong

    2008-01-01

    The Rician noise introduced into diffusion tensor images (DTIs) can seriously impact tensor calculation and fiber tracking. To decrease the effects of the Rician noise, we propose to consider the wavelet-based diffusion method to denoise multichannel-type diffusion weighted (DW) images. The presented smoothing strategy, which utilizes anisotropic nonlinear diffusion in the wavelet domain, successfully removes noise while preserving both texture and edges. To evaluate quantitatively the efficiency of the presented method in accounting for the Rician noise introduced into the DW images, the peak signal-to-noise ratio (PSNR) and signal-to-mean-squared-error ratio (SMSE) metrics are adopted. Based on synthetic and real data, we calculated the apparent diffusion coefficient (ADC) and tracked the fibers. We made comparisons between the presented model, wavelet shrinkage, and the regularized nonlinear diffusion smoothing method. All the experimental results prove quantitatively and visually the better performance of the presented filter.

  20. Quantifying error distributions in crowding.

    Science.gov (United States)

    Hanus, Deborah; Vul, Edward

    2013-03-22

    When multiple objects are in close proximity, observers have difficulty identifying them individually. Two classes of theories aim to account for this crowding phenomenon: spatial pooling and spatial substitution. Variations of these accounts predict different patterns of errors in crowded displays. Here we aim to characterize the kinds of errors that people make during crowding by comparing a number of error models across three experiments in which we manipulate flanker spacing, display eccentricity, and precueing duration. We find that both spatial intrusions and individual letter confusions play a considerable role in errors. Moreover, we find no evidence that a naïve pooling model that predicts errors based on a nonadditive combination of target and flankers explains errors better than an independent intrusion model (indeed, in our data, an independent intrusion model is slightly, but significantly, better). Finally, we find that manipulating trial difficulty in any way (spacing, eccentricity, or precueing) produces homogenous changes in error distributions. Together, these results provide quantitative baselines for predictive models of crowding errors, suggest that pooling and spatial substitution models are difficult to tease apart, and imply that manipulations of crowding all influence a common mechanism that impacts subject performance.

  1. Discretization error of Stochastic Integrals

    CERN Document Server

    Fukasawa, Masaaki

    2010-01-01

    Asymptotic error distribution for the approximation of a stochastic integral with respect to a continuous semimartingale by a Riemann sum with a general stochastic partition is studied. Effective discretization schemes, whose asymptotic conditional mean-squared error attains a lower bound, are constructed. Two applications are given: efficient delta-hedging strategies with transaction costs, and effective discretization schemes for the Euler-Maruyama approximation.
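The object under study can be illustrated numerically. The sketch below (illustrative, not one of the paper's schemes) approximates the Ito integral of Brownian motion W against itself by a left-endpoint Riemann sum on an equidistant partition, and measures the discretization error against the exact value (W_T^2 - T)/2:

```python
import math
import random

def ito_riemann_error(n_steps, T=1.0, seed=1):
    """Discretization error of the left-endpoint Riemann sum for the
    Ito integral int_0^T W dW, whose exact value is (W_T**2 - T) / 2."""
    rng = random.Random(seed)
    dt = T / n_steps
    w = 0.0       # Brownian path value
    approx = 0.0  # Riemann-sum approximation
    for _ in range(n_steps):
        dw = rng.gauss(0.0, math.sqrt(dt))
        approx += w * dw   # left endpoint, as in the Ito definition
        w += dw
    exact = 0.5 * (w * w - T)
    return abs(approx - exact)
```

Refining the partition shrinks the error, in line with the asymptotic analysis of the abstract.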

  2. Dual Processing and Diagnostic Errors

    Science.gov (United States)

    Norman, Geoff

    2009-01-01

    In this paper, I review evidence from two theories in psychology relevant to diagnosis and diagnostic errors. "Dual Process" theories of thinking, frequently mentioned with respect to diagnostic error, propose that categorization decisions can be made with either a fast, unconscious, contextual process called System 1 or a slow, analytical,…

  3. Barriers to medical error reporting

    Directory of Open Access Journals (Sweden)

    Jalal Poorolajal

    2015-01-01

    Full Text Available Background: This study was conducted to explore the prevalence of medical error underreporting and its associated barriers. Methods: This cross-sectional study was performed from September to December 2012. Five hospitals, affiliated with Hamadan University of Medical Sciences, in Hamedan, Iran were investigated. A self-administered questionnaire was used for data collection. Participants consisted of physicians, nurses, midwives, residents, interns, and staff of the radiology and laboratory departments. Results: Overall, 50.26% of subjects had committed but not reported medical errors. The main reasons mentioned for underreporting were lack of an effective medical error reporting system (60.0%), lack of a proper reporting form (51.8%), lack of peer support for a person who has committed an error (56.0%), and lack of personal attention to the importance of medical errors (62.9%). The rate of committing medical errors was higher among men (71.4%), those aged 40-50 years (67.6%), less-experienced personnel (58.7%), those with an MSc degree (87.5%), and staff of the radiology department (88.9%). Conclusions: This study outlined the main barriers to reporting medical errors and associated factors, which may be helpful for healthcare organizations in improving medical error reporting as an essential component of patient safety enhancement.

  4. Onorbit IMU alignment error budget

    Science.gov (United States)

    Corson, R. W.

    1980-01-01

    The Star Tracker, Crew Optical Alignment Sight (COAS), and Inertial Measurement Unit (IMU) form a complex navigation system with a multitude of error sources. A complete list of the system errors is presented. The errors were combined in a rational way to yield an estimate of the IMU alignment accuracy for STS-1. The expected standard deviation in the IMU alignment error for STS-1 type alignments was determined to be 72 arc seconds per axis for star tracker alignments and 188 arc seconds per axis for COAS alignments. These estimates are based on current knowledge of the star tracker, COAS, IMU, and navigation base error specifications, and were partially verified by preliminary Monte Carlo analysis.
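An error budget of this kind combines independent 1-sigma error sources by root-sum-square. The sketch below uses invented source values (not the actual STS-1 budget), chosen only so the total lands near the 72 arc-second per-axis figure quoted above:

```python
import math

# Illustrative per-axis 1-sigma error sources (arc seconds) for a
# star-tracker IMU alignment; the entries are hypothetical.
star_tracker_sources = {
    'star tracker noise': 40.0,
    'mounting misalignment': 45.0,
    'IMU gyro drift': 30.0,
    'navigation base flexure': 25.0,
}

def rss(sources):
    """Combine independent 1-sigma error sources by root-sum-square."""
    return math.sqrt(sum(v * v for v in sources.values()))

per_axis_sigma = rss(star_tracker_sources)  # roughly 72 arc seconds here
```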

  5. Measurement Error Models in Astronomy

    CERN Document Server

    Kelly, Brandon C

    2011-01-01

    I discuss the effects of measurement error on regression and density estimation. I review the statistical methods that have been developed to correct for measurement error that are most popular in astronomical data analysis, discussing their advantages and disadvantages. I describe functional models for accounting for measurement error in regression, with emphasis on the methods of moments approach and the modified loss function approach. I then describe structural models for accounting for measurement error in regression and density estimation, with emphasis on maximum-likelihood and Bayesian methods. As an example of a Bayesian application, I analyze an astronomical data set subject to large measurement errors and a non-linear dependence between the response and covariate. I conclude with some directions for future research.
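The simplest method-of-moments idea mentioned above fits in one function: under the classical additive measurement-error model, OLS attenuates the slope by the reliability ratio, and dividing by that ratio undoes the bias. The variance inputs would in practice be estimated or known; this is a sketch of the idea, not the full machinery reviewed in the paper:

```python
def corrected_slope(naive_slope, var_x_obs, var_meas_err):
    """Method-of-moments correction for classical measurement error.
    The naive OLS slope is attenuated by the reliability ratio
    lambda = var(true x) / var(observed x); dividing restores it."""
    reliability = (var_x_obs - var_meas_err) / var_x_obs
    return naive_slope / reliability
```

For example, a naive slope of 0.8 with observed-covariate variance 2.0 and measurement-error variance 0.5 has reliability 0.75 and a corrected slope of about 1.07.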

  6. Binary Error Correcting Network Codes

    CERN Document Server

    Wang, Qiwen; Li, Shuo-Yen Robert

    2011-01-01

    We consider network coding for networks experiencing worst-case bit-flip errors, and argue that this is a reasonable model for highly dynamic wireless network transmissions. We demonstrate that in this setup prior network error-correcting schemes can be arbitrarily far from achieving the optimal network throughput. We propose a new metric for errors under this model. Using this metric, we prove a new Hamming-type upper bound on the network capacity. We also show a commensurate lower bound based on GV-type codes that can be used for error-correction. The codes used to attain the lower bound are non-coherent (do not require prior knowledge of network topology). The end-to-end nature of our design enables our codes to be overlaid on classical distributed random linear network codes. Further, we free internal nodes from having to implement potentially computationally intensive link-by-link error-correction.

  7. Error Propagation in the Hypercycle

    CERN Document Server

    Campos, P R A; Stadler, P F

    1999-01-01

    We study analytically the steady-state regime of a network of n error-prone self-replicating templates forming an asymmetric hypercycle and its error tail. We show that the existence of a master template with a higher non-catalyzed self-replicative productivity, a, than the error tail ensures the stability of chains in which m < n templates coexist with the master template. The stability of these chains against the error tail is guaranteed for catalytic coupling strengths (K) of order of a. We find that the hypercycle becomes more stable than the chains only for K of order of a². Furthermore, we show that the minimal replication accuracy per template needed to maintain the hypercycle, the so-called error threshold, vanishes like sqrt(n/K) for large K and n <= 4.

  8. FPU-Supported Running Error Analysis

    OpenAIRE

    T. Zahradnický; R. Lórencz

    2010-01-01

    A-posteriori forward rounding error analyses tend to give sharper error estimates than a-priori ones, as they use actual data quantities. One such a-posteriori analysis, running error analysis, uses expressions consisting of two parts: one generates the error and the other propagates input errors to the output. This paper suggests replacing the error generating term with an FPU-extracted rounding error estimate, which produces a sharper error bound.
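A running error analysis in its textbook form (before the FPU-assisted refinement proposed here) can be sketched for sequential summation: each addition generates at most u·|computed partial sum| of new rounding error, where u is the unit roundoff, and this generated term is accumulated alongside the sum:

```python
import math

def sum_with_running_error(xs):
    """Sequential summation with an a-posteriori running error bound:
    each addition generates at most u * |partial sum| of rounding error
    (u = unit roundoff), and the bound accumulates as the loop runs."""
    u = 2.0 ** -53            # unit roundoff for IEEE 754 binary64
    s, bound = 0.0, 0.0
    for x in xs:
        s += x
        bound += u * abs(s)   # running error recurrence (first order in u)
    return s, bound

s, bound = sum_with_running_error([0.1] * 10)
exact = math.fsum([0.1] * 10)  # correctly rounded reference sum
```

Because the bound uses the actual computed partial sums, it is typically far tighter than the worst-case a-priori bound for the same loop.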

  9. Accurate barrier heights using diffusion Monte Carlo

    CERN Document Server

    Krongchon, Kittithat; Wagner, Lucas K

    2016-01-01

    Fixed node diffusion Monte Carlo (DMC) has been performed on a test set of forward and reverse barrier heights for 19 non-hydrogen-transfer reactions, and the nodal error has been assessed. The DMC results are robust to changes in the nodal surface, as assessed by using different mean-field techniques to generate single determinant wave functions. Using these single determinant nodal surfaces, DMC results in errors of 1.5(5) kcal/mol on barrier heights. Using the large data set of DMC energies, we attempted to find good descriptors of the fixed node error. It does not correlate with a number of descriptors including change in density, but does correlate with the gap between the highest occupied and lowest unoccupied orbital energies in the mean-field calculation.

  10. Back diffusion from thin low permeability zones.

    Science.gov (United States)

    Yang, Minjune; Annable, Michael D; Jawitz, James W

    2015-01-06

    Aquitards can serve as long-term contaminant sources to aquifers when contaminant mass diffuses from the aquitard following aquifer source mass depletion. This study describes analytical and experimental approaches to understand reactive and nonreactive solute transport in a thin aquitard bounded by an adjacent aquifer. A series of well-controlled laboratory experiments were conducted in a two-dimensional flow chamber to quantify solute diffusion from a high-permeability sand into and subsequently out of kaolinite clay layers of vertical thickness 15 mm, 20 mm, and 60 mm. One-dimensional analytical solutions were developed for diffusion in a finite aquitard with mass exchange with an adjacent aquifer using the method of images. The analytical solutions showed very good agreement with measured breakthrough curves and aquitard concentration distributions measured in situ by light reflection visualization. Solutes with low retardation accumulated more stored mass with greater penetration distance in the aquitard compared to high-retardation solutes. However, because the duration of aquitard mass release was much longer, high-retardation solutes have a greater long-term back diffusion risk. The error associated with applying a semi-infinite domain analytical solution to a finite diffusion domain increases as a function of the system relative diffusion length scale, suggesting that the solutions using image sources should be applied in cases with rapid solute diffusion and/or thin clay layers. The solutions presented here can be extended to multilayer aquifer/low-permeability systems to assess the significance of back diffusion from thin layers.
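The image-source construction can be sketched for a layer 0 <= z <= L with a constant-concentration aquifer boundary at z = 0 and a no-flux wall at z = L: reflections about the wall add erfc image terms with alternating signs. The D, L, and t values used in the usage below are illustrative, not the experimental parameters:

```python
import math

def conc_semi_infinite(z, t, D):
    """Diffusion into a semi-infinite layer from a unit-concentration boundary."""
    return math.erfc(z / (2.0 * math.sqrt(D * t)))

def conc_finite_layer(z, t, D, L, n_images=50):
    """Layer 0 <= z <= L: unit concentration held at z = 0, no-flux wall
    at z = L. Built by superposing image sources reflected about the wall."""
    a = 2.0 * math.sqrt(D * t)
    total = 0.0
    for n in range(n_images):
        term = (math.erfc((2.0 * n * L + z) / a)
                + math.erfc((2.0 * (n + 1) * L - z) / a))
        total += term if n % 2 == 0 else -term
    return total
```

At early times the two solutions agree; once the diffusion length approaches L the finite-layer concentration exceeds the semi-infinite one, which is exactly the regime where a semi-infinite approximation errs for thin clay layers.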

  11. Effective Potential Theory for Diffusion in Binary Ionic Mixtures

    CERN Document Server

    Shaffer, Nathaniel R; Daligault, Jérôme

    2016-01-01

    Self-diffusion and interdiffusion coefficients of binary ionic mixtures are evaluated using the Effective Potential Theory (EPT), and the predictions are compared with the results of molecular dynamics simulations. We find that EPT agrees with molecular dynamics from weak coupling well into the strong coupling regime, which is a similar range of coupling strengths as previously observed in comparisons with the one-component plasma. Within this range, typical relative errors of approximately 20% and worst-case relative errors of approximately 40% are observed. We also examine the Darken model, which approximates the interdiffusion coefficients based on the self-diffusion coefficients.
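The Darken model referenced above builds the interdiffusion coefficient from the two self-diffusion coefficients, weighted by the opposite species' mole fraction. This sketch takes the thermodynamic factor as 1 (ideal mixing), which is an assumption of the example rather than of the paper:

```python
def darken_interdiffusion(x1, D1_self, D2_self):
    """Darken approximation (ideal-mixing limit):
    D_12 = x2 * D1 + x1 * D2, with x2 = 1 - x1."""
    x2 = 1.0 - x1
    return x2 * D1_self + x1 * D2_self
```

In the dilute limit x1 -> 0 the interdiffusion coefficient reduces to the self-diffusion coefficient of the trace species, as expected.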

  12. Metric diffusion along foliations

    CERN Document Server

    Walczak, Szymon M

    2017-01-01

    Up-to-date research in metric diffusion along compact foliations is presented in this book. Beginning with fundamentals from optimal transportation theory and the theory of foliations, the book moves on to cover the Wasserstein distance, the Kantorovich Duality Theorem, and the metrization of the weak topology by the Wasserstein distance. Metric diffusion is defined, the topology of the metric space is studied, and the limits of diffused metrics along compact foliations are discussed. Essentials on foliations, holonomy, heat diffusion, and compact foliations are detailed and vital technical lemmas are proved to aid understanding. Graduate students and researchers in geometry, topology and dynamics of foliations and laminations will find this supplement useful as it presents facts about metric diffusion along non-compact foliations and provides a full description of the limit for metrics diffused along foliations with at least one compact leaf in two dimensions.

  13. An efficient plane-grating monochromator based on conical diffraction for continuous tuning in the entire soft X-ray range including tender X-rays (2-8 keV).

    Science.gov (United States)

    Jark, Werner

    2016-01-01

    Recently it was verified that the diffraction efficiency of reflection gratings with rectangular profile, when illuminated at grazing angles of incidence with the beam trajectory along the grooves and not perpendicular to them, remains very high for tender X-rays of several keV photon energy. This very efficient operation of a reflection grating in the extreme off-plane orientation, i.e. in conical diffraction, offers the possibility of designing a conical diffraction monochromator scheme that provides efficient continuous photon energy tuning over rather large tuning ranges. For example, the tuning could cover photon energies from below 1000 eV up to 8 keV. The expected transmission of the entire instrument is high as all components are always operated below the critical angle for total reflection. In the simplest version of the instrument a plane grating is preceded by a plane mirror rotating simultaneously with it. The photon energy selection will then be made using the combination of a focusing mirror and exit slit. As is common for grating monochromators for soft X-ray radiation, the minimum spectral bandwidth is source-size-limited, while the bandwidth can be adjusted freely to any larger value. As far as tender X-rays (2-8 keV) are concerned, the minimum bandwidth is at least one and up to two orders of magnitude larger than the bandwidth provided by Si(111) double-crystal monochromators in a collimated beam. Therefore the instrument will provide more flux, which can even be increased at the expense of a bandwidth increase. On the other hand, for softer X-rays with photon energies below 1 keV, competitive relative spectral resolving powers of the order of 10000 are possible.

  14. Diffusion formalism and applications

    CERN Document Server

    Dattagupta, Sushanta

    2013-01-01

    Within a unifying framework, Diffusion: Formalism and Applications covers both classical and quantum domains, along with numerous applications. The author explores the more than two centuries-old history of diffusion, expertly weaving together a variety of topics from physics, mathematics, chemistry, and biology. The book examines the two distinct paradigms of diffusion-physical and stochastic-introduced by Fourier and Laplace and later unified by Einstein in his groundbreaking work on Brownian motion. The author describes the role of diffusion in probability theory and stochastic calculus and

  15. Inpainting using airy diffusion

    Science.gov (United States)

    Lorduy Hernandez, Sara

    2015-09-01

    An inpainting procedure based on Airy diffusion is proposed, implemented via Maple and applied to some digital images. Airy diffusion is a partial differential equation with spatial derivatives of third order, in contrast with the usual diffusion with spatial derivatives of second order. Airy diffusion generates the Airy semigroup in terms of the Airy functions, which can be rewritten in terms of Bessel functions. Airy diffusion can be used to smooth an image, with the corresponding noise elimination via convolution; it can also be used to erase objects from an image. We build an algorithm using the Maple package ImageTools, and this algorithm is tested on some images. Our results using Airy diffusion are compared with similar results using standard diffusion. We observe that Airy diffusion generates powerful filters for image processing which could be incorporated into the usual packages for image processing such as ImageJ and Photoshop. It is also interesting to consider the possibility of incorporating the Airy filters as applications for smartphones and smart-glasses.
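A third-order diffusion like the Airy equation u_t = u_xxx can be discretized with a centered difference for the third derivative. The explicit Euler step below (pure Python, periodic boundary; grid and time-step sizes are illustrative and must be kept small for stability) shows the basic stencil, not the paper's Maple implementation:

```python
def airy_step(u, dt, h):
    """One explicit Euler step of u_t = u_xxx on a periodic 1D grid,
    using the centered third-difference
    (u[i+2] - 2*u[i+1] + 2*u[i-1] - u[i-2]) / (2*h**3)."""
    n = len(u)
    out = []
    for i in range(n):
        d3 = (u[(i + 2) % n] - 2.0 * u[(i + 1) % n]
              + 2.0 * u[(i - 1) % n] - u[(i - 2) % n]) / (2.0 * h ** 3)
        out.append(u[i] + dt * d3)
    return out
```

The antisymmetric stencil conserves the total gray mass on a periodic grid, a basic sanity check for any diffusion-type filter.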

  16. Quantile Regression With Measurement Error

    KAUST Repository

    Wei, Ying

    2009-08-27

    Regression quantiles can be substantially biased when the covariates are measured with error. In this paper we propose a new method that produces consistent linear quantile estimation in the presence of covariate measurement error. The method corrects the measurement error induced bias by constructing joint estimating equations that simultaneously hold for all the quantile levels. An iterative EM-type estimation algorithm to obtain the solutions to such joint estimation equations is provided. The finite sample performance of the proposed method is investigated in a simulation study, and compared to the standard regression calibration approach. Finally, we apply our methodology to part of the National Collaborative Perinatal Project growth data, a longitudinal study with an unusual measurement error structure. © 2009 American Statistical Association.
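The objective underlying any quantile estimator is the check (pinball) loss; a one-line sketch follows. The paper's actual contribution, the joint estimating equations across quantile levels and the EM-type algorithm, is not reproduced here:

```python
def pinball_loss(y, q_pred, tau):
    """Check-function loss minimized in expectation by the tau-th
    conditional quantile: tau*u for u >= 0, (tau-1)*u for u < 0."""
    u = y - q_pred
    return tau * u if u >= 0 else (tau - 1.0) * u
```

The asymmetric weights (tau above the prediction, 1-tau below) are what makes the minimizer a quantile rather than a mean.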

  17. The uncorrected refractive error challenge

    Directory of Open Access Journals (Sweden)

    Kovin Naidoo

    2016-11-01

    Full Text Available Refractive error affects people of all ages, socio-economic status and ethnic groups. The most recent statistics estimate that, worldwide, 32.4 million people are blind and 191 million people have vision impairment. Vision impairment has been defined based on distance visual acuity only, and uncorrected distance refractive error (mainly myopia) is the single biggest cause of worldwide vision impairment. However, when we also consider near visual impairment, it is clear that even more people are affected. From research it was estimated that the number of people with vision impairment due to uncorrected distance refractive error was 107.8 million, and the number of people affected by uncorrected near refractive error was 517 million, giving a total of 624.8 million people.

  18. Numerical optimization with computational errors

    CERN Document Server

    Zaslavski, Alexander J

    2016-01-01

    This book studies the approximate solutions of optimization problems in the presence of computational errors. A number of results are presented on the convergence behavior of algorithms in a Hilbert space; these algorithms are examined taking into account computational errors. The author illustrates that algorithms generate a good approximate solution if computational errors are bounded from above by a small positive constant. Known computational errors are examined with the aim of determining an approximate solution. Researchers and students interested in optimization theory and its applications will find this book instructive and informative. This monograph contains 16 chapters, including chapters devoted to the subgradient projection algorithm, the mirror descent algorithm, the gradient projection algorithm, Weiszfeld's method, constrained convex minimization problems, the convergence of a proximal point method in a Hilbert space, the continuous subgradient method, penalty methods and Newton's meth...

  19. Error Analysis in Mathematics Education.

    Science.gov (United States)

    Radatz, Hendrik

    1979-01-01

    Five types of errors in an information-processing classification are discussed: language difficulties; difficulties in obtaining spatial information; deficient mastery of prerequisite skills, facts, and concepts; incorrect associations; and application of irrelevant rules. (MP)

  20. Comprehensive Error Rate Testing (CERT)

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Centers for Medicare and Medicaid Services (CMS) implemented the Comprehensive Error Rate Testing (CERT) program to measure improper payments in the Medicare...

  1. Aging transition by random errors

    Science.gov (United States)

    Sun, Zhongkui; Ma, Ning; Xu, Wei

    2017-02-01

    In this paper, the effects of random errors on oscillating behaviors have been studied theoretically and numerically in a prototypical coupled nonlinear oscillator. Two kinds of noises have been employed to represent measurement errors in the parameter specifying the distance from a Hopf bifurcation in the Stuart-Landau model. It has been demonstrated that when the random errors are uniform random noise, increasing the noise intensity can effectively increase the robustness of the system. When the random errors are normal random noise, increasing the variance can also enhance the robustness of the system, provided that the probability of aging transition reaches a certain threshold; the opposite conclusion is obtained when the probability is less than the threshold. These findings provide an alternative candidate for controlling the critical value of the aging transition in coupled oscillator systems, which are composed of active oscillators and inactive oscillators in practice.
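The amplitude dynamics near the Hopf bifurcation studied above can be illustrated with a crude Euler integration of the Stuart-Landau radius equation r' = (alpha - r^2) r, where uniform noise on alpha stands in for the random parameter errors. All step sizes, noise levels, and initial conditions are illustrative:

```python
import random

def stuart_landau_amplitude(alpha, steps=20000, dt=1e-3, noise=0.0, seed=0):
    """Euler integration of the Stuart-Landau radius r' = (alpha - r^2) * r,
    with optional uniform random error on alpha (a crude stand-in for the
    measurement errors studied in the abstract)."""
    rng = random.Random(seed)
    r = 0.5
    for _ in range(steps):
        a = alpha + rng.uniform(-noise, noise)
        r += dt * (a - r * r) * r
    return r
```

An active oscillator (alpha > 0) settles near sqrt(alpha); an inactive one (alpha < 0) decays toward zero, the behavior whose population balance drives the aging transition.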

  3. Error correcting coding for OTN

    DEFF Research Database (Denmark)

    Justesen, Jørn; Larsen, Knud J.; Pedersen, Lars A.

    2010-01-01

    Forward error correction codes for 100 Gb/s optical transmission are currently receiving much attention from transport network operators and technology providers. We discuss the performance of hard decision decoding using product type codes that cover a single OTN frame or a small number of such frames. In particular we argue that a three-error correcting BCH is the best choice for the component code in such systems.

  4. Errors in Chemical Sensor Measurements

    Directory of Open Access Journals (Sweden)

    Artur Dybko

    2001-06-01

    Full Text Available Various types of errors during the measurements of ion-selective electrodes, ion-sensitive field effect transistors, and fibre optic chemical sensors are described. The errors were divided according to their nature and place of origin into chemical, instrumental and non-chemical. The influence of interfering ions, leakage of the membrane components, liquid junction potential as well as sensor wiring, ambient light and temperature is presented.

  5. Error image aware content restoration

    Science.gov (United States)

    Choi, Sungwoo; Lee, Moonsik; Jung, Byunghee

    2015-12-01

    As the resolution of TV has significantly increased, content consumers have become increasingly sensitive to the subtlest defects in TV contents. This rising standard in quality demanded by consumers has posed a new challenge in today's context where the tape-based process has transitioned to the file-based process: the transition necessitated digitalizing old archives, a process which inevitably produces errors such as disordered pixel blocks, scattered white noise, or totally missing pixels. Unsurprisingly, detecting and fixing such errors requires a substantial amount of time and human labor to meet the standard demanded by today's consumers. In this paper, we introduce a novel, automated error restoration algorithm which can be applied to different types of classic errors by utilizing adjacent images while preserving the undamaged parts of an error image as much as possible. We tested our method on error images detected by our quality check system in the KBS (Korean Broadcasting System) video archive. We are also implementing the algorithm as a plugin of a well-known NLE (non-linear editing) system, which is a familiar tool for quality control agents.
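The core idea, repairing only flagged pixels from temporally adjacent frames while leaving the rest untouched, can be sketched on flat pixel lists. The masking and simple averaging here are a hypothetical simplification of the paper's algorithm:

```python
def restore_frame(damaged, prev_frame, next_frame, error_mask):
    """Replace only the pixels flagged in error_mask with the average of
    the temporally adjacent frames; undamaged pixels pass through as-is."""
    restored = []
    for i, pixel in enumerate(damaged):
        if error_mask[i]:
            restored.append((prev_frame[i] + next_frame[i]) // 2)
        else:
            restored.append(pixel)
    return restored
```

In practice the error mask would come from a detection stage such as the quality-check system mentioned above.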

  6. Quantum error correction for beginners.

    Science.gov (United States)

    Devitt, Simon J; Munro, William J; Nemoto, Kae

    2013-07-01

    Quantum error correction (QEC) and fault-tolerant quantum computation represent one of the most vital theoretical aspects of quantum information processing. It was well known from the early developments of this exciting field that the fragility of coherent quantum systems would be a catastrophic obstacle to the development of large-scale quantum computers. The introduction of quantum error correction in 1995 showed that active techniques could be employed to mitigate this fatal problem. However, quantum error correction and fault-tolerant computation is now a much larger field and many new codes, techniques, and methodologies have been developed to implement error correction for large-scale quantum algorithms. In response, we have attempted to summarize the basic aspects of quantum error correction and fault-tolerance, not as a detailed guide, but rather as a basic introduction. The development in this area has been so pronounced that many in the field of quantum information, specifically researchers who are new to quantum information or people focused on the many other important issues in quantum computation, have found it difficult to keep up with the general formalisms and methodologies employed in this area. Rather than introducing these concepts from a rigorous mathematical and computer science framework, we instead examine error correction and fault-tolerance largely through detailed examples, which are more relevant to experimentalists today and in the near future.
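
    A concrete starting example in the spirit of this introductory review is the 3-qubit bit-flip code. The sketch below (plain numpy state vectors, no QEC library) encodes a qubit, injects an X error, and corrects it from the two parity-check stabilizers:

```python
import numpy as np

# Basis ordering: |q2 q1 q0> maps to the integer q2*4 + q1*2 + q0.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1, -1]).astype(complex)
I2 = np.eye(2, dtype=complex)

def on(qubit, op):
    """Single-qubit operator embedded in the 3-qubit space."""
    ops = [I2, I2, I2]
    ops[qubit] = op
    return np.kron(np.kron(ops[2], ops[1]), ops[0])

def encode(a, b):
    """a|0> + b|1>  ->  a|000> + b|111> (3-qubit bit-flip code)."""
    psi = np.zeros(8, dtype=complex)
    psi[0b000], psi[0b111] = a, b
    return psi

Z0Z1 = on(0, Z) @ on(1, Z)   # parity of qubits 0,1
Z1Z2 = on(1, Z) @ on(2, Z)   # parity of qubits 1,2

def correct(psi):
    """Read the two stabilizer parities and apply the matching X.
    Expectation values suffice here because a codeword hit by a single
    bit flip is an eigenstate of both stabilizers."""
    s = (round(np.real(psi.conj() @ Z0Z1 @ psi)),
         round(np.real(psi.conj() @ Z1Z2 @ psi)))
    table = {(-1, 1): 0, (-1, -1): 1, (1, -1): 2}
    return on(table[s], X) @ psi if s in table else psi

psi = on(1, X) @ encode(0.6, 0.8)                    # bit flip on qubit 1
print(np.allclose(correct(psi), encode(0.6, 0.8)))   # True
```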

  7. Dominant modes via model error

    Science.gov (United States)

    Yousuff, A.; Breida, M.

    1992-01-01

    Obtaining a reduced model of a stable mechanical system with proportional damping is considered. Such systems can be conveniently represented in modal coordinates. Two popular schemes, the modal cost analysis and the balancing method, offer simple means of identifying dominant modes for retention in the reduced model. The dominance is measured via the modal costs in the case of modal cost analysis and via the singular values of the Gramian-product in the case of balancing. Though these measures do not exactly reflect the more appropriate model error, which is the H2 norm of the output-error between the full and the reduced models, they do lead to simple computations. Normally, the model error is computed after the reduced model is obtained, since it is believed that, in general, the model error cannot be easily computed a priori. The authors point out that the model error can also be calculated a priori, just as easily as the above measures. Hence, the model error itself can be used to determine the dominant modes. Moreover, the simplicity of the computations does not presume any special properties of the system, such as small damping, orthogonal symmetry, etc.
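
    The point that the H2 model error can be computed a priori is easy to demonstrate in modal coordinates, where dropping a mode leaves an output-error system equal to the dropped subsystem. A sketch (system matrices invented for illustration) using SciPy's Lyapunov solver:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def h2_norm(A, B, C):
    """H2 norm via the controllability Gramian P:
    A P + P A^T + B B^T = 0, then ||G||_2^2 = C P C^T."""
    P = solve_continuous_lyapunov(A, -B @ B.T)
    return float(np.sqrt(C @ P @ C.T).item())

# Three decoupled first-order modes in modal coordinates.
A = np.diag([-1.0, -5.0, -20.0])
B = np.ones((3, 1))
C = np.ones((1, 3))

# Because the modes are decoupled, the error system of dropping mode i
# is exactly subsystem i, so its H2 norm is the a priori model error.
errors = [h2_norm(A[np.ix_([i], [i])], B[[i]], C[:, [i]])
          for i in range(3)]
print(errors)  # the slowest mode is dominant: dropping it costs the most
```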

  8. Diffusion Based Photon Mapping

    DEFF Research Database (Denmark)

    Schjøth, Lars; Olsen, Ole Fogh; Sporring, Jon

    2006-01-01

    To address this problem we introduce a novel photon mapping algorithm based on nonlinear anisotropic diffusion. Our algorithm adapts according to the structure of the photon map such that smoothing occurs along edges and structures and not across. In this way we preserve the important illumination features, while eliminating noise. We call our method diffusion based photon mapping.

  9. Diffusion Based Photon Mapping

    DEFF Research Database (Denmark)

    Schjøth, Lars; Fogh Olsen, Ole; Sporring, Jon

    2007-01-01

    To address this problem we introduce a novel photon mapping algorithm based on nonlinear anisotropic diffusion. Our algorithm adapts according to the structure of the photon map such that smoothing occurs along edges and structures and not across. In this way we preserve the important illumination features, while eliminating noise. We call our method diffusion based photon mapping.

  10. Bronnen van diffuse bodembelasting

    NARCIS (Netherlands)

    Lijzen JPA; Ekelenkamp A; LBG; DGM/BO

    1995-01-01

    For preventive soil policy it was insufficiently clear how much various sources contribute to diffuse soil contamination. The aim of this inventory was to bundle the available knowledge on diffuse soil contamination and to identify knowledge gaps. A secondary aim is to describe the

  11. Distributed Control Diffusion

    DEFF Research Database (Denmark)

    Schultz, Ulrik Pagh

    2007-01-01

    …, self-reconfigurable robots, we present the concept of distributed control diffusion: distributed queries are used to identify modules that play a specific role in the robot, and behaviors that implement specific control strategies are diffused throughout the robot based on these role assignments… perform simple obstacle avoidance in a wide range of different car-like robots constructed using ATRON modules…

  12. Affine diffusions and related processes simulation, theory and applications

    CERN Document Server

    Alfonsi, Aurélien

    2015-01-01

    This book gives an overview of affine diffusions, from Ornstein-Uhlenbeck processes to Wishart processes, and considers some related diffusions such as Wright-Fisher processes. It focuses on different simulation schemes for these processes, especially second-order schemes for the weak error. It also presents some models, mostly in the field of finance, where these methods are relevant, and provides some numerical experiments. The book explains the mathematical background needed to understand affine diffusions and to analyze the accuracy of the schemes.
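
    A minimal illustration of weak-error analysis (first-order Euler on an Ornstein-Uhlenbeck process, not the book's second-order schemes): for the OU mean, both the exact value and the Euler-scheme expectation are available in closed form, so the weak error can be tabulated without any sampling. Parameter values are invented.

```python
import numpy as np

# Ornstein-Uhlenbeck: dX = -theta X dt + sigma dW, X_0 = x0.
# Exact mean: E[X_T] = x0 exp(-theta T).
# Euler mean:  E[X_T^Euler] = x0 (1 - theta dt)^(T/dt).
theta, x0, T = 2.0, 1.0, 1.0

def euler_mean(dt):
    n = int(round(T / dt))
    return x0 * (1.0 - theta * dt) ** n

exact = x0 * np.exp(-theta * T)
for dt in (0.1, 0.05, 0.025):
    # error halves with dt: first-order weak convergence
    print(dt, abs(euler_mean(dt) - exact))
```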

  13. Atomic diffusion in stars

    CERN Document Server

    Michaud, Georges; Richer, Jacques

    2015-01-01

    This book gives an overview of atomic diffusion, a fundamental physical process, as applied to all types of stars, from the main sequence to neutron stars. The superficial abundances of stars as well as their evolution can be significantly affected. The authors show where atomic diffusion plays an essential role and how it can be implemented in modelling.  In Part I, the authors describe the tools that are required to include atomic diffusion in models of stellar interiors and atmospheres. An important role is played by the gradient of partial radiative pressure, or radiative acceleration, which is usually neglected in stellar evolution. In Part II, the authors systematically review the contribution of atomic diffusion to each evolutionary step. The dominant effects of atomic diffusion are accompanied by more subtle effects on a large number of structural properties throughout evolution. One of the goals of this book is to provide the means for the astrophysicist or graduate student to evaluate the importanc...

  14. Signal window minimum average error algorithm for multi-phase level computer-generated holograms

    Science.gov (United States)

    El Bouz, Marwa; Heggarty, Kevin

    2000-06-01

    This paper extends the article "Signal window minimum average error algorithm for computer-generated holograms" (JOSA A 1998) to multi-phase level CGHs. We show that using the same rule for calculating the complex error diffusion weights, iterative-algorithm-like low-error signal windows can be obtained for any window shape or position (on- or off-axis) and any number of CGH phase levels. Important algorithm parameters such as amplitude normalisation level and phase freedom diffusers are described and investigated to optimize the algorithm. We show that, combined with a suitable diffuser, the algorithm makes feasible the calculation of high performance CGHs far larger than currently practical with iterative algorithms yet now realisable with modern fabrication techniques. Preliminary experimental optical reconstructions are presented.
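
    The complex error-diffusion weights of the signal-window algorithm are specific to the paper, but the underlying error-diffusion principle is the classic Floyd-Steinberg halftoning rule: quantize a pixel and push the residual error onto the unprocessed neighbors. A minimal binary (monochrome) sketch:

```python
import numpy as np

def floyd_steinberg(img):
    """Binarize a grayscale image in [0,1], diffusing the quantization
    error to unprocessed neighbors with the classic 7/16, 3/16, 5/16,
    1/16 Floyd-Steinberg weights."""
    f = img.astype(float)
    h, w = f.shape
    out = np.zeros_like(f)
    for y in range(h):
        for x in range(w):
            old = f[y, x]
            new = 1.0 if old >= 0.5 else 0.0
            out[y, x] = new
            err = old - new
            if x + 1 < w:
                f[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    f[y + 1, x - 1] += err * 3 / 16
                f[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    f[y + 1, x + 1] += err * 1 / 16
    return out

gray = np.full((32, 32), 0.25)             # flat 25% gray test patch
halftone = floyd_steinberg(gray)
print(abs(halftone.mean() - 0.25) < 0.03)  # mean tone is preserved
```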

  15. Harmless error analysis: How do judges respond to confession errors?

    Science.gov (United States)

    Wallace, D Brian; Kassin, Saul M

    2012-04-01

    In Arizona v. Fulminante (1991), the U.S. Supreme Court opened the door for appellate judges to conduct a harmless error analysis of erroneously admitted, coerced confessions. In this study, 132 judges from three states read a murder case summary, evaluated the defendant's guilt, assessed the voluntariness of his confession, and responded to implicit and explicit measures of harmless error. Results indicated that judges found a high-pressure confession to be coerced and hence improperly admitted into evidence. As in studies with mock jurors, however, the improper confession significantly increased their conviction rate in the absence of other evidence. On the harmless error measures, judges successfully overruled the confession when required to do so, indicating that they are capable of this analysis.

  16. Explaining errors in children's questions.

    Science.gov (United States)

    Rowland, Caroline F

    2007-07-01

    The ability to explain the occurrence of errors in children's speech is an essential component of successful theories of language acquisition. The present study tested some generativist and constructivist predictions about error on the questions produced by ten English-learning children between 2 and 5 years of age. The analyses demonstrated that, as predicted by some generativist theories [e.g. Santelmann, L., Berk, S., Austin, J., Somashekar, S. & Lust, B. (2002). Continuity and development in the acquisition of inversion in yes/no questions: dissociating movement and inflection, Journal of Child Language, 29, 813-842], questions with auxiliary DO attracted higher error rates than those with modal auxiliaries. However, in wh-questions, questions with modals and DO attracted equally high error rates, and these findings could not be explained in terms of problems forming questions with why or negated auxiliaries. It was concluded that the data might be better explained in terms of a constructivist account that suggests that entrenched item-based constructions may be protected from error in children's speech, and that errors occur when children resort to other operations to produce questions [e.g. Dabrowska, E. (2000). From formula to schema: the acquisition of English questions. Cognitive Linguistics, 11, 83-102; Rowland, C. F. & Pine, J. M. (2000). Subject-auxiliary inversion errors and wh-question acquisition: What children do know? Journal of Child Language, 27, 157-181; Tomasello, M. (2003). Constructing a language: A usage-based theory of language acquisition. Cambridge, MA: Harvard University Press]. However, further work on constructivist theory development is required to allow researchers to make predictions about the nature of these operations.

  17. Pauli Exchange Errors in Quantum Computation

    CERN Document Server

    Ruskai, M B

    2000-01-01

    We argue that a physically reasonable model of fault-tolerant computation requires the ability to correct a type of two-qubit error which we call Pauli exchange errors as well as one qubit errors. We give an explicit 9-qubit code which can handle both Pauli exchange errors and all one-bit errors.

  18. Correction of spin diffusion during iterative automated NOE assignment.

    Science.gov (United States)

    Linge, Jens P; Habeck, Michael; Rieping, Wolfgang; Nilges, Michael

    2004-04-01

    Indirect magnetization transfer increases the observed nuclear Overhauser enhancement (NOE) between two protons in many cases, leading to an underestimation of target distances. Wider distance bounds are necessary to account for this error. However, this leads to a loss of information and may reduce the quality of the structures generated from the inter-proton distances. Although several methods for spin diffusion correction have been published, they are often not employed to derive distance restraints. This prompted us to write a user-friendly and CPU-efficient method to correct for spin diffusion that is fully integrated in our program ambiguous restraints for iterative assignment (ARIA). ARIA thus allows automated iterative NOE assignment and structure calculation with spin diffusion corrected distances. The method relies on numerical integration of the coupled differential equations which govern relaxation by matrix squaring and sparse matrix techniques. We derive a correction factor for the distance restraints from calculated NOE volumes and inter-proton distances. To evaluate the impact of our spin diffusion correction, we tested the new calibration process extensively with data from the Pleckstrin homology (PH) domain of Mus musculus beta-spectrin. By comparing structures refined with and without spin diffusion correction, we show that spin diffusion corrected distance restraints give rise to structures of higher quality (notably fewer NOE violations and a more regular Ramachandran map). Furthermore, spin diffusion correction permits the use of tighter error bounds which improves the distinction between signal and noise in an automated NOE assignment scheme.
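
    As a rough illustration of why indirect magnetization transfer inflates NOEs (the rates below are invented round numbers, not ARIA's calibration), one can propagate a full relaxation matrix with a matrix exponential and compare a three-spin chain with an isolated pair at the same 1-3 distance:

```python
import numpy as np
from scipy.linalg import expm

# Toy relaxation matrix for a linear three-proton chain (spins at
# 0, r, 2r). Cross-relaxation scales as 1/r^6, hence the 1/64 factor
# for the 1-3 pair at distance 2r. Values are illustrative only.
rho, sigma = 2.0, -0.5                     # auto-/cross-relaxation (1/s)
R3 = np.array([[rho,        sigma,      sigma / 64],
               [sigma,      rho,        sigma],
               [sigma / 64, sigma,      rho]])

# Isolated pair at the same 1-3 distance (2r):
R2 = np.array([[rho,        sigma / 64],
               [sigma / 64, rho]])

t = 0.3                                    # NOESY mixing time (s)
a3 = expm(-R3 * t)                         # NOE intensities, 3-spin chain
a2 = expm(-R2 * t)                         # NOE intensities, isolated pair

# Indirect transfer via the middle spin inflates the apparent 1-3 NOE,
# which would make the 1-3 distance look shorter than it is:
print(a3[0, 2] > a2[0, 1])  # True
```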

  19. Helium diffusion in carbonates

    Science.gov (United States)

    Amidon, W. H.; Cherniak, D. J.; Watson, E. B.; Hobbs, D.

    2013-12-01

    The abundance and large grain size of carbonate minerals make them a potentially attractive target for 4He thermochronology and 3He cosmogenic dating, although the diffusive properties of helium in carbonates remain poorly understood. This work characterizes helium diffusion in calcite and dolomite to better understand the crystal-chemical factors controlling He transport and retentivity. Slabs of cleaved natural calcite and dolomite, and polished sections of calcite cut parallel or normal to c, were implanted with 3He at 3 MeV with a dose of 5×10^15/cm^2. Implanted carbonates were heated in 1-atm furnaces, and 3He distributions following diffusion anneals were profiled with Nuclear Reaction Analysis using the reaction 3He(d,p)4He. For 3He transport normal to cleavage surfaces in calcite, we obtain the following Arrhenius relation over the temperature range 78-300°C: Dcalcite = 9.0×10^-9 exp(-(55 ± 6) kJ mol^-1/RT) m^2 s^-1. Diffusion in calcite exhibits marked anisotropy, with diffusion parallel to c about two orders of magnitude slower than diffusion normal to cleavage faces. He diffusivities for transport normal to the c-axis are similar in value to those normal to cleavage surfaces. Our findings are broadly consistent with helium diffusivities from step-heating measurements of calcite by Copeland et al. (2007); these bulk degassing data may reflect varying effects of diffusional anisotropy. Helium diffusion normal to cleavage surfaces in dolomite is significantly slower than diffusion in calcite, and has a much higher activation energy for diffusion. For dolomite, we obtain the following Arrhenius relation for He diffusion over the temperature range 150-400°C: Ddolomite = 9.0×10^-8 exp(-(92 ± 9) kJ mol^-1/RT) m^2 s^-1. The role of crystallographic structure in influencing these differences among diffusivities was evaluated using the maximum aperture approach of Cherniak and Watson (2011), in which crystallographic structures are sectioned along possible diffusion
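
    Using the central values of the two Arrhenius fits above (uncertainties omitted), a quick sketch shows how much more He-retentive dolomite is at a given temperature:

```python
import numpy as np

R = 8.314  # gas constant, J mol^-1 K^-1

def arrhenius(D0, Ea_kJ, T_C):
    """Diffusivity D = D0 exp(-Ea / RT) at a temperature in Celsius."""
    return D0 * np.exp(-Ea_kJ * 1e3 / (R * (T_C + 273.15)))

# Central values of the reported fits, transport normal to cleavage:
D_cal = arrhenius(9.0e-9, 55.0, 200.0)   # calcite at 200 C
D_dol = arrhenius(9.0e-8, 92.0, 200.0)   # dolomite at 200 C

# Dolomite is roughly three orders of magnitude more He-retentive here.
print(D_cal, D_dol, D_cal / D_dol)
```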

  20. Error-associated behaviors and error rates for robotic geology

    Science.gov (United States)

    Anderson, Robert C.; Thomas, Geb; Wagner, Jacob; Glasgow, Justin

    2004-01-01

    This study explores human error as a function of the decision-making process. One of many models for human decision-making is Rasmussen's decision ladder [9]. The decision ladder identifies the multiple tasks and states of knowledge involved in decision-making. The tasks and states of knowledge can be classified by the level of cognitive effort required to make the decision, leading to the skill, rule, and knowledge taxonomy (Rasmussen, 1987). Skill based decisions require the least cognitive effort and knowledge based decisions require the greatest cognitive effort. Errors can occur at any of the cognitive levels.

  2. High-efficiency B₄C/Mo₂C alternate multilayer grating for monochromators in the photon energy range from 0.7 to 3.4 keV.

    Science.gov (United States)

    Choueikani, Fadi; Lagarde, Bruno; Delmotte, Franck; Krumrey, Michael; Bridou, Françoise; Thomasset, Muriel; Meltchakov, Evgueni; Polack, François

    2014-04-01

    An alternate multilayer (AML) grating has been prepared by coating an ion etched lamellar grating with a B4C/Mo2C multilayer (ML) having a layer thickness close to the groove depth. Such a structure behaves as a 2D synthetic crystal and can reach very high efficiencies when the Bragg condition is satisfied. This AML coated grating has been characterized at the SOLEIL Metrology and Tests Beamline between 0.7 and 1.7 keV and at the four-crystal monochromator beamline of Physikalisch-Technische Bundesanstalt (PTB) at BESSY II between 1.75 and 3.4 keV. A peak diffraction efficiency of nearly 27% was measured at 2.2 keV. The measured efficiencies are well reproduced by numerical simulations made with the electromagnetic propagation code CARPEM. Such AML gratings, paired with a matched ML mirror, constitute efficient monochromators for intermediate energy photons. They will extend the accessible energy range for many applications such as x-ray absorption spectroscopy or x-ray magnetic circular dichroism experiments.
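
    The Bragg condition behind the 2D-synthetic-crystal behaviour is simply n·λ = 2d·sinθ. With a hypothetical multilayer period of 2.5 nm (the actual d-spacing is not quoted above), scanning the grazing angle tunes the selected photon energy across roughly the energy range discussed:

```python
import numpy as np

HC_KEV_NM = 1.2398  # h*c in keV*nm

def bragg_energy_keV(d_nm, theta_deg, order=1):
    """Photon energy satisfying n*lambda = 2 d sin(theta) for a
    multilayer of period d at grazing angle theta."""
    lam = 2.0 * d_nm * np.sin(np.radians(theta_deg)) / order
    return HC_KEV_NM / lam

# Hypothetical period of 2.5 nm: larger angles select lower energies.
for theta in (6.0, 10.0, 20.0):
    print(theta, bragg_energy_keV(2.5, theta))
```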

  3. Tungsten diffusion in olivine

    Science.gov (United States)

    Cherniak, D. J.; Van Orman, J. A.

    2014-03-01

    Diffusion of tungsten has been characterized in synthetic forsterite and natural olivine (Fo90) under dry conditions. The source of diffusant was a mixture of magnesium tungstate and olivine powders. Experiments were prepared by sealing the source material and polished olivine under vacuum in silica glass ampoules with solid buffers to buffer at NNO or IW. Prepared capsules were annealed in 1 atm furnaces for times ranging from 45 min to several weeks, at temperatures from 1050 to 1450 °C. Tungsten distributions in the olivine were profiled by Rutherford Backscattering Spectrometry (RBS). The following Arrhenius relation is obtained for W diffusion in forsterite: D = 1.0×10^-8 exp(-(365 ± 28) kJ mol^-1/RT) m^2 s^-1. Diffusivities for the synthetic forsterite and natural Fe-bearing olivine are similar, and tungsten diffusion in olivine shows little dependence on crystallographic orientation or oxygen fugacity. The slow diffusivities measured for W in olivine indicate that Hf-W ages in olivine-metal systems will close to diffusive exchange at higher temperatures than other chronometers commonly used in cosmochronology, and that tungsten isotopic signatures will be less likely to be reset by subsequent thermal events.

  4. Cosmology with matter diffusion

    CERN Document Server

    Calogero, Simone

    2013-01-01

    We construct a viable cosmological model based on velocity diffusion of matter particles. In order to ensure the conservation of the total energy-momentum tensor in the presence of diffusion, we include a cosmological scalar field φ which we identify with the dark energy component of the Universe. The model is characterized by only one new degree of freedom, the diffusion parameter σ. The standard ΛCDM model can be recovered by setting σ = 0. If diffusion takes place (σ > 0) the dynamics of the matter and of the dark energy fields are coupled. We argue that the existence of a diffusion mechanism in the Universe can serve as a theoretical motivation for interacting models. We constrain the background dynamics of the diffusion model with Supernovae, H(z) and BAO data. We also perform a perturbative analysis of this model in order to understand structure formation in the Universe. We calculate the impact of diffusion both on the CMB spectrum, with particular attention to the integr...

  5. POSITION ERROR IN STATION-KEEPING SATELLITE

    Science.gov (United States)

    of an error in satellite orientation and the sun being in a plane other than the equatorial plane may result in errors in position determination. The nature of the errors involved is described and their magnitudes estimated.

  6. Orbit IMU alignment: Error analysis

    Science.gov (United States)

    Corson, R. W.

    1980-01-01

    A comprehensive accuracy analysis of orbit inertial measurement unit (IMU) alignments using the shuttle star trackers was completed and the results are presented. Monte Carlo techniques were used in a computer simulation of the IMU alignment hardware and software systems to: (1) determine the expected Space Transportation System 1 Flight (STS-1) manual mode IMU alignment accuracy; (2) investigate the accuracy of alignments in later shuttle flights when the automatic mode of star acquisition may be used; and (3) verify that an analytical model previously used for estimating the alignment error is a valid model. The analysis results do not differ significantly from expectations. The standard deviation in the IMU alignment error for STS-1 alignments was determined to be 68 arc seconds per axis. This corresponds to a 99.7% probability that the magnitude of the total alignment error is less than 258 arc seconds.
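
    The two quoted numbers (68 arc seconds per axis, 258 arc seconds total at 99.7%) can be cross-checked with a small Monte Carlo of the 3-axis error magnitude. This is an independent sketch, not the flight simulation described above:

```python
import numpy as np

# 68 arcsec 1-sigma per axis; the 99.7th percentile of the 3-axis
# error magnitude should land near the quoted 258 arcsec bound.
rng = np.random.default_rng(0)
sigma = 68.0
err = rng.normal(0.0, sigma, size=(200_000, 3))
magnitude = np.linalg.norm(err, axis=1)
print(np.percentile(magnitude, 99.7))  # close to the quoted 258 arcsec
```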

  7. Negligence, genuine error, and litigation

    Directory of Open Access Journals (Sweden)

    Sohn DH

    2013-02-01

    Full Text Available David H Sohn, Department of Orthopedic Surgery, University of Toledo Medical Center, Toledo, OH, USA. Abstract: Not all medical injuries are the result of negligence. In fact, most medical injuries are the result either of the inherent risk in the practice of medicine, or of system errors, which cannot be prevented simply through fear of disciplinary action. This paper will discuss the differences between adverse events, negligence, and system errors; the current medical malpractice tort system in the United States; and review current and future solutions, including medical malpractice reform, alternative dispute resolution, health courts, and no-fault compensation systems. The current political environment favors investigation of non-cap tort reform remedies; investment into more rational oversight systems, such as health courts or no-fault systems, may reap both quantitative and qualitative benefits for a less costly and safer health system. Keywords: medical malpractice, tort reform, no fault compensation, alternative dispute resolution, system errors

  8. Large errors and severe conditions

    CERN Document Server

    Smith, D L; Van Wormer, L A

    2002-01-01

    Physical parameters that can assume real-number values over a continuous range are generally represented by inherently positive random variables. However, if the uncertainties in these parameters are significant (large errors), conventional means of representing and manipulating the associated variables can lead to erroneous results. Instead, all analyses involving them must be conducted in a probabilistic framework. Several issues must be considered: First, non-linear functional relations between primary and derived variables may lead to significant 'error amplification' (severe conditions). Second, the commonly used normal (Gaussian) probability distribution must be replaced by a more appropriate function that avoids the occurrence of negative sampling results. Third, both primary random variables and those derived through well-defined functions must be dealt with entirely in terms of their probability distributions. Parameter 'values' and 'errors' should be interpreted as specific moments of these probabil...
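
    The issues above can be illustrated with a short numpy sketch (mean and relative error are invented): a Gaussian model of an inherently positive quantity with a large relative error produces unphysical negative samples, a lognormal model does not, and a nonlinear function of the variable amplifies the relative spread:

```python
import numpy as np

rng = np.random.default_rng(1)
mean, rel = 1.0, 0.5           # positive quantity, 50% relative error

# Gaussian model: a few percent of samples are (unphysically) negative.
gauss = rng.normal(mean, rel * mean, 100_000)

# Lognormal with matched mean and relative spread stays positive:
# sigma^2 = ln(1 + rel^2), mu = ln(mean) - sigma^2 / 2.
s2 = np.log(1 + rel**2)
lognorm = rng.lognormal(np.log(mean) - 0.5 * s2, np.sqrt(s2), 100_000)

print((gauss < 0).mean())      # nonzero fraction of negative samples
print((lognorm < 0).mean())    # 0.0: lognormal stays positive

# 'Error amplification': the relative spread of x**3 exceeds that of x.
rel_x = lognorm.std() / lognorm.mean()
rel_y = (lognorm**3).std() / (lognorm**3).mean()
print(rel_y > rel_x)           # True
```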

  9. Redundant measurements for controlling errors

    Energy Technology Data Exchange (ETDEWEB)

    Ehinger, M. H.; Crawford, J. M.; Madeen, M. L.

    1979-07-01

    Current federal regulations for nuclear materials control require consideration of operating data as part of the quality control program and limits of error propagation. Recent work at the BNFP has revealed that operating data are subject to a number of measurement problems which are very difficult to detect and even more difficult to correct in a timely manner. Thus error estimates based on operational data reflect those problems. During the FY 1978 and FY 1979 R and D demonstration runs at the BNFP, redundant measurement techniques were shown to be effective in detecting these problems to allow corrective action. The net effect is a reduction in measurement errors and a significant increase in measurement sensitivity. Results show that normal operation process control measurements, in conjunction with routine accountability measurements, are sensitive problem indicators when incorporated in a redundant measurement program.
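
    A minimal form of such a redundancy check (threshold and measurement values invented for illustration) flags two measurements of the same quantity as inconsistent when they disagree by more than their combined uncertainties allow:

```python
import math

def consistent(m1, s1, m2, s2, k=3.0):
    """Flag two redundant measurements of the same quantity as
    inconsistent when they differ by more than k combined sigmas."""
    return abs(m1 - m2) <= k * math.sqrt(s1**2 + s2**2)

# Process-control vs. accountability measurement of the same batch:
print(consistent(10.20, 0.10, 10.35, 0.15))  # True: agree within 3 sigma
print(consistent(10.20, 0.10, 10.90, 0.15))  # False: flags a problem
```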

  10. The numerical simulation of convection delayed dominated diffusion equation

    Directory of Open Access Journals (Sweden)

    Mohan Kumar P. Murali

    2016-01-01

    Full Text Available In this paper, we propose a fitted numerical method for solving the convection delayed dominated diffusion equation. A fitting factor is introduced and the model equation is discretized by a cubic spline method. An error analysis is carried out for the problem considered. Numerical examples are solved using the present method and the results are compared with the exact solution.

  11. Toward a cognitive taxonomy of medical errors.

    OpenAIRE

    Zhang, Jiajie; Patel, Vimla L.; Johnson, Todd R.; Shortliffe, Edward H.

    2002-01-01

    One critical step in addressing and resolving the problems associated with human errors is the development of a cognitive taxonomy of such errors. In the case of errors, such a taxonomy may be developed (1) to categorize all types of errors along cognitive dimensions, (2) to associate each type of error with a specific underlying cognitive mechanism, (3) to explain why, and even predict when and where, a specific error will occur, and (4) to generate intervention strategies for each type of e...

  12. Robust Quantum Error Correction via Convex Optimization

    CERN Document Server

    Kosut, R L; Lidar, D A

    2007-01-01

    Quantum error correction procedures have traditionally been developed for specific error models, and are not robust against uncertainty in the errors. Using a semidefinite program optimization approach we find high fidelity quantum error correction procedures which present robust encoding and recovery effective against significant uncertainty in the error system. We present numerical examples for 3, 5, and 7-qubit codes. Our approach requires as input a description of the error channel, which can be provided via quantum process tomography.

  13. Errors depending on costs in sample surveys

    OpenAIRE

    Marella, Daniela

    2007-01-01

    "This paper presents a total survey error model that simultaneously treats sampling error, nonresponse error and measurement error. The main aim for developing the model is to determine the optimal allocation of the available resources for the total survey error reduction. More precisely, the paper is concerned with obtaining the best possible accuracy in survey estimate through an overall economic balance between sampling and nonsampling error." (author's abstract)

  14. Error-tolerant Tree Matching

    CERN Document Server

    Oflazer, K

    1996-01-01

    This paper presents an efficient algorithm for retrieving, from a database of trees, all trees that match a given query tree approximately, that is, within a certain error tolerance. It has natural language processing applications in searching for matches in example-based translation systems, and retrieval from lexical databases containing entries of complex feature structures. The algorithm has been implemented on SparcStations, and for large randomly generated synthetic tree databases (some having tens of thousands of trees) it can associatively search for trees with a small error in a matter of tenths of a second to a few seconds.

  15. Immediate error correction process following sleep deprivation

    National Research Council Canada - National Science Library

    HSIEH, SHULAN; CHENG, I‐CHEN; TSAI, LING‐LING

    2007-01-01

    ...) participated in this study. Participants performed a modified letter flanker task and were instructed to make immediate error corrections on detecting performance errors. Event‐related potentials (ERPs...

  16. Theories on diffusion of technology

    DEFF Research Database (Denmark)

    Munch, Birgitte

    Tracing the body of the diffusion process by analysing the diffusion process from historical, sociological, economic and technical approaches. Discussing central characteristics of the process of diffusion of CAD/CAM in Denmark.

  17. Diffuse Ceiling Ventilation

    DEFF Research Database (Denmark)

    Zhang, Chen; Yu, Tao; Heiselberg, Per Kvols

    …free-cooling period and night cooling potential. The investment cost of this ventilation system is about 5-10% lower than for conventional ones, because the acoustic ceiling can be applied directly as the air diffuser and the use of a plenum to distribute air reduces the cost of ductwork. There is a growing interest… is not well structured with this system. These became the motivations for developing the design guide. This design guide aims to establish a systematic understanding of diffuse ceiling ventilation and to provide assistance in designing such a system. The guide is targeted at design engineers, architects, manufacturers and users of diffuse ceiling technology. It introduces the principle and key characteristics of room air distribution with diffuse ceiling ventilation and provides an overview of the potential benefits and limitations of this technology. The benefits include high thermal comfort, high…

  18. Diffuse Ceiling Ventilation

    DEFF Research Database (Denmark)

    Zhang, Chen; Yu, Tao; Heiselberg, Per Kvols

    Diffuse ceiling ventilation is an innovative ventilation concept where the suspended ceiling serves as an air diffuser to supply fresh air into the room. Due to the large opening area, air is delivered to the room with very low velocity and no fixed direction, therefore the name 'diffuse'. Compared with conventional ventilation systems (mixing or displacement ventilation), diffuse ceiling ventilation can significantly reduce or even eliminate draught risk in the occupied zone. Moreover, this ventilation system presents a promising opportunity for energy saving, because of the low pressure loss, extended free-cooling period and night cooling potential. The investment cost of this ventilation system is about 5-10% lower than for conventional ones, because the acoustic ceiling can be applied directly as the air diffuser and the use of a plenum to distribute air reduces the cost of ductwork. There is a growing interest…

  19. Seismic Fault Preserving Diffusion

    CERN Document Server

    Lavialle, Olivier; Germain, Christian; Donias, Marc; Guillon, Sebastien; Keskes, Naamen; Berthoumieu, Yannick

    2007-01-01

    This paper focuses on the denoising and enhancing of 3-D reflection seismic data. We propose a pre-processing step based on a non-linear diffusion filtering leading to a better detection of seismic faults. The non-linear diffusion approaches are based on the definition of a partial differential equation that allows us to simplify the images without blurring relevant details or discontinuities. Computing the structure tensor, which provides information on the local orientation of the geological layers, we propose to drive the diffusion along these layers using a new approach called SFPD (Seismic Fault Preserving Diffusion). In SFPD, the eigenvalues of the tensor are fixed according to a confidence measure that takes into account the regularity of the local seismic structure. Results on both synthesized and real 3-D blocks show the efficiency of the proposed approach.

  20. Seismic fault preserving diffusion

    Science.gov (United States)

    Lavialle, Olivier; Pop, Sorin; Germain, Christian; Donias, Marc; Guillon, Sebastien; Keskes, Naamen; Berthoumieu, Yannick

    2007-02-01

    This paper focuses on the denoising and enhancing of 3-D reflection seismic data. We propose a pre-processing step based on a non-linear diffusion filtering leading to a better detection of seismic faults. The non-linear diffusion approaches are based on the definition of a partial differential equation that allows us to simplify the images without blurring relevant details or discontinuities. Computing the structure tensor which provides information on the local orientation of the geological layers, we propose to drive the diffusion along these layers using a new approach called SFPD (Seismic Fault Preserving Diffusion). In SFPD, the eigenvalues of the tensor are fixed according to a confidence measure that takes into account the regularity of the local seismic structure. Results on both synthesized and real 3-D blocks show the efficiency of the proposed approach.

  1. Isomorphism, Diffusion and Decoupling

    DEFF Research Database (Denmark)

    Boxenbaum, Eva; Jonsson, Stefan

    2017-01-01

    This chapter traces the evolution of the core theoretical constructs of isomorphism, decoupling and diffusion in organizational institutionalism. We first review the original theoretical formulations of these constructs and then examine their evolution in empirical research conducted over the past...

  2. Diffusing Best Practices

    DEFF Research Database (Denmark)

    Pries-Heje, Jan; Baskerville, Richard

    2014-01-01

    …approach. The study context is a design case in which an organization desires to diffuse its best practices across different groups. The design goal is embodied in organizational mechanisms to achieve this diffusion. The study used the Theory of Planned Behavior (TPB) as a kernel theory. The artifacts resulting from the design were two-day training workshops conceptually anchored to TPB. The design theory was evaluated through execution of eight diffusion workshops involving three different groups in the same company. The findings indicate that the match between the practice and the context materialized … that the behavior will be effective). These two factors were especially critical if the source context of the best practice is qualitatively different from the target context into which the organization is seeking to diffuse the best practice.

  3. On Diffusion and Permeation

    KAUST Repository

    Peppin, Stephen S. L.

    2009-01-01

    Diffusion and permeation are discussed within the context of irreversible thermodynamics. A new expression for the generalized Stokes-Einstein equation is obtained which links the permeability to the diffusivity of a two-component solution and contains the poroelastic Biot-Willis coefficient. The theory is illustrated by predicting the concentration and pressure profiles during the filtration of a protein solution. At low concentrations the proteins diffuse independently while at higher concentrations they form a nearly rigid porous glass through which the fluid permeates. The theoretically determined pressure drop is nonlinear in the diffusion regime and linear in the permeation regime, in quantitative agreement with experimental measurements. © 2009 Walter de Gruyter, Berlin, New York.

  4. Diffusion of Wilson Loops

    CERN Document Server

    Brzoska, A M; Negele, J W; Thies, M

    2004-01-01

    A phenomenological analysis of the distribution of Wilson loops in SU(2) Yang-Mills theory is presented in which Wilson loop distributions are described as the result of a diffusion process on the group manifold. It is shown that, in the absence of forces, diffusion implies Casimir scaling and, conversely, exact Casimir scaling implies free diffusion. Screening processes occur if diffusion takes place in a potential. The crucial distinction between screening of fundamental and adjoint loops is formulated as a symmetry property related to the center symmetry of the underlying gauge theory. The results are expressed in terms of an effective Wilson loop action and compared with various limits of SU(2) Yang-Mills theory.

  5. Understanding Limitations in the Determination of the Diffuse Galactic Gamma-ray Emission

    Energy Technology Data Exchange (ETDEWEB)

    Moskalenko, Igor V.; /Stanford U., HEPL /KIPAC, Menlo Park; Digel, S.W.; /SLAC /KIPAC, Menlo Park; Porter, T.A.; /UC, Santa Cruz; Reimer, O.; /Stanford U., HEPL /KIPAC,; Strong, A.W.; /Garching, Max Planck Inst., MPE

    2006-10-03

    We discuss uncertainties and possible sources of errors associated with the determination of the diffuse Galactic γ-ray emission using the EGRET data. Most of the issues will be relevant also in the GLAST era. The focus here is on issues that impact evaluation of dark matter annihilation signals against the diffuse γ-ray emission of the Milky Way.

  6. Drift-Diffusion Equation

    Directory of Open Access Journals (Sweden)

    K. Banoo

    1998-01-01

    …equation in the discrete momentum space. This is shown to be similar to the conventional drift-diffusion equation except that it is a more rigorous solution to the Boltzmann equation because the current and carrier densities are resolved into M×1 vectors, where M is the number of modes in the discrete momentum space. The mobility and diffusion coefficient become M×M matrices which connect the M momentum space modes. This approach is demonstrated by simulating electron transport in bulk silicon.

  7. CO diffusion capacity

    Energy Technology Data Exchange (ETDEWEB)

    Mielke, U.

    1979-01-01

    We measured the pulmonary CO diffusion capacity in 287 persons with the steady-state and single-breath methods, applying apnoeic periods of 4 and 10 seconds duration. The methodological significance, outpatient applicability and diagnostic relevance of these measurements with respect to other established pulmonary function tests are discussed. Differing pulmonary diffusion capacity values found in normal persons or in patients suffering from silicosis, pulmonary fibrosis, Boeck's disease or rheumatoid arthritis were investigated and critically evaluated.

  8. Diffusion in nanocrystalline solids

    OpenAIRE

    Chadwick, Alan V.

    2016-01-01

    Enhanced atomic migration was an early observation from experimental studies into nanocrystalline solids. This contribution presents an overview of the available diffusion data for simple metals and ionic materials in nanocrystalline form. It will be shown that enhanced diffusion can be interpreted in terms of atomic transport along the interfaces, which are comparable to grain boundaries in coarse-grained analogues. However, the method of sample preparation is seen to play a major role in...

  9. The error of our ways

    Science.gov (United States)

    Swartz, Clifford E.

    1999-10-01

    In Victorian literature it was usually some poor female who came to see the error of her ways. How prescient of her! How I wish that all writers of manuscripts for The Physics Teacher would come to similar recognition of this centerpiece of measurement. For, Brothers and Sisters, we all err.

  10. Measurement error in geometric morphometrics.

    Science.gov (United States)

    Fruciano, Carmelo

    2016-06-01

    Geometric morphometrics, a set of methods for the statistical analysis of shape once hailed as a revolutionary advancement in the analysis of morphology, is now mature and routinely used in ecology and evolution. However, a factor often disregarded in empirical studies is the presence and the extent of measurement error. This is potentially a very serious issue because random measurement error can inflate the amount of variance and, since many statistical analyses are based on the amount of "explained" relative to "residual" variance, can result in loss of statistical power. On the other hand, systematic bias can affect statistical analyses by biasing the results (i.e. variation due to bias is incorporated in the analysis and treated as biologically-meaningful variation). Here, I briefly review common sources of error in geometric morphometrics. I then review the most commonly used methods to measure and account for both random and non-random measurement error, providing a worked example using a real dataset.

  11. Finding errors in big data

    NARCIS (Netherlands)

    Puts, Marco; Daas, Piet; de Waal, A.G.

    No data source is perfect. Mistakes inevitably creep in. Spotting errors is hard enough when dealing with survey responses from several thousand people, but the difficulty is multiplied hugely when that mysterious beast Big Data comes into play. Statistics Netherlands is about to publish its first

  12. Having Fun with Error Analysis

    Science.gov (United States)

    Siegel, Peter

    2007-01-01

    We present a fun activity that can be used to introduce students to error analysis: the M&M game. Students are told to estimate the number of individual candies plus uncertainty in a bag of M&M's. The winner is the group whose estimate brackets the actual number with the smallest uncertainty. The exercise produces enthusiastic discussions and…

  13. Typical errors of ESP users

    Science.gov (United States)

    Eremina, Svetlana V.; Korneva, Anna A.

    2004-07-01

    The paper presents an analysis of the errors made by ESP (English for specific purposes) users which have been considered as typical. They occur as a result of misuse of the resources of English grammar and tend to persist. Their origin and places of occurrence are also discussed.

  14. Theory of Test Translation Error

    Science.gov (United States)

    Solano-Flores, Guillermo; Backhoff, Eduardo; Contreras-Nino, Luis Angel

    2009-01-01

    In this article, we present a theory of test translation whose intent is to provide the conceptual foundation for effective, systematic work in the process of test translation and test translation review. According to the theory, translation error is multidimensional; it is not simply the consequence of defective translation but an inevitable fact…

  15. A brief history of error.

    Science.gov (United States)

    Murray, Andrew W

    2011-10-03

    The spindle checkpoint monitors chromosome alignment on the mitotic and meiotic spindle. When the checkpoint detects errors, it arrests progress of the cell cycle while it attempts to correct the mistakes. This perspective will present a brief history summarizing what we know about the checkpoint, and a list of questions we must answer before we understand it.

  16. Error processing in Huntington's disease.

    Directory of Open Access Journals (Sweden)

    Christian Beste

    BACKGROUND: Huntington's disease (HD) is a genetic disorder expressed by a degeneration of the basal ganglia, inter alia accompanied by dopaminergic alterations. These dopaminergic alterations are related to genetic factors, i.e., CAG-repeat expansion. The error(-related) negativity (Ne/ERN), a cognitive event-related potential related to performance monitoring, is generated in the anterior cingulate cortex (ACC) and supposed to depend on the dopaminergic system. The Ne is reduced in Parkinson's disease (PD). Due to a dopaminergic deficit in HD, a reduction of the Ne is also likely. Furthermore it is assumed that movement dysfunction emerges as a consequence of dysfunctional error-feedback processing. Since dopaminergic alterations are related to the CAG-repeat, a Ne reduction may furthermore also be related to the genetic disease load. METHODOLOGY/PRINCIPAL FINDINGS: We assessed the error negativity (Ne) in a speeded reaction task under consideration of the underlying genetic abnormalities. HD patients showed a specific reduction in the Ne, which suggests impaired error processing in these patients. Furthermore, the Ne was closely related to CAG-repeat expansion. CONCLUSIONS/SIGNIFICANCE: The reduction of the Ne is likely to be an effect of the dopaminergic pathology. The result resembles findings in Parkinson's disease. As such the Ne might be a measure for the integrity of striatal dopaminergic output function. The relation to the CAG-repeat expansion indicates that the Ne could serve as a gene-associated "cognitive" biomarker in HD.

  17. Learner Corpora without Error Tagging

    Directory of Open Access Journals (Sweden)

    Rastelli, Stefano

    2009-01-01

    The article explores the possibility of adopting a form-to-function perspective when annotating learner corpora in order to get deeper insights about systematic features of interlanguage. A split between forms and functions (or categories) is desirable in order to avoid the "comparative fallacy" and because, especially in basic varieties, forms may precede functions (e.g., what resembles a "noun" might have a different function), or a function may show up in unexpected forms. In the computer-aided error analysis tradition, all items produced by learners are traced to a grid of error tags which is based on the categories of the target language. By contrast, we believe it is possible to record and make retrievable both words and sequences of characters independently of their functional-grammatical label in the target language. For this purpose, at the University of Pavia we adapted a probabilistic POS tagger designed for L1 to L2 data. Despite the criticism that this operation can raise, we found that it is better to work with "virtual categories" rather than with errors. The article outlines the theoretical background of the project and shows some examples in which the potential of SLA-oriented (non error-based) tagging is made clearer.

  18. Input/output error analyzer

    Science.gov (United States)

    Vaughan, E. T.

    1977-01-01

    Program aids in equipment assessment. This independent assembly-language utility program is designed to operate under level 27 or 31 of the EXEC 8 Operating System. It scans user-selected portions of the system log file, whether located on tape or mass storage, and searches for and processes I/O error (type 6) entries.

  19. Amplify Errors to Minimize Them

    Science.gov (United States)

    Stewart, Maria Shine

    2009-01-01

    In this article, the author offers her experience of modeling mistakes and writing spontaneously in the computer classroom to get students' attention and elicit their editorial response. She describes how she taught her class about major sentence errors--comma splices, run-ons, and fragments--through her Sentence Meditation exercise, a rendition…

  20. Advanced manufacturing: Technology diffusion

    Energy Technology Data Exchange (ETDEWEB)

    Tesar, A.

    1995-12-01

    In this paper we examine how manufacturing technology diffuses from the developers of technology across national borders to those who do not have the capability or resources to develop advanced technology on their own. None of the wide variety of technology diffusion mechanisms discussed in this paper are new, yet the opportunities to apply these mechanisms are growing. A dramatic increase in technology diffusion occurred over the last decade. The two major trends which probably drive this increase are a worldwide inclination towards "freer" markets and diminishing isolation. Technology is most rapidly diffusing from the US; in fact, the US is supplying technology for the rest of the world. The value of the technology supplied by the US more than doubled from 1985 to 1992 (see the Introduction for details). History shows us that technology diffusion is inevitable; it is the rates at which technologies diffuse to other countries that can vary considerably. Manufacturers in these countries are increasingly able to absorb technology. Their manufacturing efficiency is expected to progress as technology becomes increasingly available and utilized.

  1. Toward a cognitive taxonomy of medical errors.

    Science.gov (United States)

    Zhang, Jiajie; Patel, Vimla L; Johnson, Todd R; Shortliffe, Edward H

    2002-01-01

    One critical step in addressing and resolving the problems associated with human errors is the development of a cognitive taxonomy of such errors. In the case of errors, such a taxonomy may be developed (1) to categorize all types of errors along cognitive dimensions, (2) to associate each type of error with a specific underlying cognitive mechanism, (3) to explain why, and even predict when and where, a specific error will occur, and (4) to generate intervention strategies for each type of error. Based on Reason's (1992) definition of human errors and Norman's (1986) cognitive theory of human action, we have developed a preliminary action-based cognitive taxonomy of errors that largely satisfies these four criteria in the domain of medicine. We discuss initial steps for applying this taxonomy to develop an online medical error reporting system that not only categorizes errors but also identifies problems and generates solutions.

  2. Error and its meaning in forensic science.

    Science.gov (United States)

    Christensen, Angi M; Crowder, Christian M; Ousley, Stephen D; Houck, Max M

    2014-01-01

    The discussion of "error" has gained momentum in forensic science in the wake of the Daubert guidelines and has intensified with the National Academy of Sciences' Report. Error has many different meanings, and too often, forensic practitioners themselves as well as the courts misunderstand scientific error and statistical error rates, often confusing them with practitioner error (or mistakes). Here, we present an overview of these concepts as they pertain to forensic science applications, discussing the difference between practitioner error (including mistakes), instrument error, statistical error, and method error. We urge forensic practitioners to ensure that potential sources of error and method limitations are understood and clearly communicated and advocate that the legal community be informed regarding the differences between interobserver errors, uncertainty, variation, and mistakes.

  3. Color Histogram Diffusion for Image Enhancement

    Science.gov (United States)

    Kim, Taemin

    2011-01-01

    Various color histogram equalization (CHE) methods have been proposed to extend grayscale histogram equalization (GHE) for color images. In this paper a new method called histogram diffusion that extends the GHE method to arbitrary dimensions is proposed. Ranges in a histogram are specified as overlapping bars of uniform heights and variable widths which are proportional to their frequencies. This diagram is called the vistogram. As an alternative approach to GHE, the squared error of the vistogram from the uniform distribution is minimized. Each bar in the vistogram is approximated by a Gaussian function. Gaussian particles in the vistogram diffuse as a nonlinear autonomous system of ordinary differential equations. CHE results of color images showed that the approach is effective.
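
    The baseline the paper extends, grayscale histogram equalization, can be sketched as follows. The vistogram/Gaussian-diffusion machinery itself is not reproduced here; the image data are synthetic and the function name is an illustrative choice.

```python
import numpy as np

def grayscale_histogram_equalization(img, levels=256):
    """Classic GHE: map each gray level through the normalized CDF.

    `img` is a 2-D array of integer gray levels in [0, levels).
    This is only the baseline method the paper generalizes.
    """
    hist = np.bincount(img.ravel(), minlength=levels)
    cdf = np.cumsum(hist).astype(float)
    cdf /= cdf[-1]                                   # normalize to [0, 1]
    lut = np.round(cdf * (levels - 1)).astype(img.dtype)
    return lut[img]

# A dark, low-contrast test image: values clustered in [40, 80)
rng = np.random.default_rng(0)
img = rng.integers(40, 80, size=(64, 64))
eq = grayscale_histogram_equalization(img)
print(img.min(), img.max(), eq.min(), eq.max())      # contrast is stretched
```

    After equalization the occupied gray range is stretched toward the full [0, 255] interval, which is the behavior the vistogram approach refines.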

  4. Diffusion in natural ilmenite

    Science.gov (United States)

    Stenhouse, Iona; O'Neill, Hugh; Lister, Gordon

    2010-05-01

    Diffusion rates in natural ilmenite of composition Fe(2+)0.84Fe(3+)0.16Mn0.07Mg0.01Ti0.92O3 from the Vishnevye Mountains (Urals, Russia) have been measured at 1000 °C. Experiments were carried out in a one-atmosphere furnace with oxygen fugacity controlled by flow of a CO-CO2 gas mixture, over a period of four hours. The diffusant source was a synthetic ilmenite (FeTiO3) powder doped with trace amounts of Mg, Co, Ni, Zr, Hf, V, Nb, Ta, Al, Cr, Ga and Y. Since the natural ilmenite crystal contained Mn, it was also possible to study diffusion of Mn from the ilmenite crystal. The experiments were analysed using the electron microprobe and scanning laser ablation ICP-MS. Diffusion profiles were measured for Al, Mg, Mn, Co, Ni, Ga, and Y. Diffusion of Cr, Hf, Zr, V, Nb and Ta was too slow to allow diffusion profiles to be accurately measured for the times and temperatures studied so far. The preliminary results show that diffusion in ilmenite is fast, with the diffusivity determined in this study on the order of 10^-13 to 10^-16 m^2 s^-1. For comparison, Chakraborty (1997) found interdiffusion of Fe and Mg in olivine at 1000 °C on the order of 10^-17 to 10^-18 m^2 s^-1 and Dieckmann (1998) found diffusivity of Fe, Mg and Co in magnetite at 1200 °C to be on the order of 10^-13 to 10^-14 m^2 s^-1. The order in which the diffusivity of the elements decreases is Mn > Co > Mg ≥ Ni > Al ≥ Y ≥ Ga, that is to say, Mn diffuses the fastest and Ga the slowest. Overall, this study intends to determine diffusion parameters such as frequency factor, activation energy and activation volume as a function of temperature and oxygen fugacity. This research is taking place in the context of a larger study focusing on the use of the garnet-ilmenite system as a geospeedometer. Examination of the consequences of simultaneous diffusion of multiple elements is a necessity if we are to develop an understanding of the crystal-chemical controls on diffusion (cf. Spandler & O'Neill, in press). Chakraborty…
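
    To make the reported diffusivities concrete, the characteristic diffusion length sqrt(D*t) for the four-hour runs can be computed directly. This is a standard back-of-the-envelope scale estimate, not a calculation taken from the paper.

```python
from math import sqrt

def diffusion_length_um(D, t):
    """Characteristic length sqrt(D*t), returned in micrometres."""
    return sqrt(D * t) * 1e6

t = 4 * 3600.0  # the four-hour runs described in the abstract, in seconds
for D in (1e-13, 1e-16):  # fastest and slowest diffusivities reported
    print(f"D = {D:.0e} m^2/s  ->  sqrt(D*t) ~ {diffusion_length_um(D, t):.1f} um")
```

    The fastest elements thus penetrate tens of micrometres in four hours while the slowest move only about a micrometre, consistent with some profiles being too short to measure.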

  5. Closed-ampoule diffusion of sulfur into Cd-doped InP substrates - Dependence of S profiles on diffusion temperature and time

    Science.gov (United States)

    Faur, Mircea; Faur, Maria; Honecy, Frank; Goradia, Chandra; Goradia, Manju; Jayne, Douglas; Clark, Ralph

    1992-01-01

    In order to optimize the fabrication of n(+)-p InP solar cells made by closed-ampoule diffusion of sulfur into p-InP:Cd substrates, we have investigated the influence of diffusion conditions on sulfur diffusion profiles. We show that S diffusion in InP is dominated by the P vacancy mechanism and is not characterized by a complementary error function as expected for an infinite source diffusion. The S diffusion mechanism in p-InP is qualitatively explained by examining the depth profiles of S, P, and In in the emitter layer and by taking into account the presence and composition of different compounds found to form in the In-P-S-O-Cd system as a result of diffusion.
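
    For reference, the infinite-source profile the authors compare against is the textbook complementary-error-function solution c(x, t) = c_s · erfc(x / (2·sqrt(D·t))). A minimal sketch, with illustrative constants rather than values fitted to the S-in-InP data:

```python
from math import erfc, sqrt

def erfc_profile(x, D, t, c_s=1.0):
    """Textbook infinite-source diffusion profile:
    c(x, t) = c_s * erfc(x / (2 * sqrt(D * t))).
    The paper reports that S-in-InP profiles deviate from this shape;
    D and t below are hypothetical, not measured, values.
    """
    return c_s * erfc(x / (2.0 * sqrt(D * t)))

D = 1e-17   # m^2/s, hypothetical diffusivity
t = 3600.0  # 1 hour anneal
for x_nm in (0, 50, 100, 200):
    print(f"x = {x_nm:3d} nm  c/c_s = {erfc_profile(x_nm * 1e-9, D, t):.3f}")
```

    A measured profile that is flatter near the surface or shows a kink, as the vacancy-dominated mechanism produces, would visibly depart from this monotone erfc shape.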

  6. A Mapping method for mixing with diffusion

    Science.gov (United States)

    Schlick, Conor P.; Christov, Ivan C.; Umbanhowar, Paul B.; Ottino, Julio M.; Lueptow, Richard M.

    2012-11-01

    We present an accurate and efficient computational method for solving the advection-diffusion equation in time-periodic chaotic flows. The method uses operator splitting which allows advection and diffusion steps to be treated independently. Taking advantage of flow periodicity, the advection step is solved with a mapping method, and diffusion is added discretely after each iteration of the advection map. This approach allows for a "composite" mapping matrix to be constructed for an entire period of a chaotic advection-diffusion process, which provides a natural approach to the spectral analysis of mixing. To test the approach, we consider the two-dimensional time-periodic sine flow. When compared to the exact solution for this simple velocity field, the operator splitting method exhibits qualitative agreement (overall concentration structure) for large time steps and is quantitatively accurate (average and maximum error) for small time steps. We extend the operator splitting approach to three-dimensional chaotic flows. Funded by NSF Grant CMMI-1000469. Present affiliation: Princeton University. Supported by NSF Grant DMS-1104047.
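
    A one-dimensional toy version of the splitting idea, assuming a periodic domain, an integer-cell shift as the exact "advection map", and a single explicit diffusion step per iteration. These are all simplifications of the authors' two-dimensional sine-flow setting, not their implementation.

```python
import numpy as np

def split_step(c, shift, alpha):
    """One operator-splitting iteration on a 1-D periodic domain:
    advect exactly (integer-cell shift plays the role of the mapping
    step), then apply one explicit diffusion step with
    alpha = D*dt/dx^2 <= 0.5 for stability.
    """
    c = np.roll(c, shift)                                # advection map
    return c + alpha * (np.roll(c, 1) - 2 * c + np.roll(c, -1))

n = 128
c = np.zeros(n)
c[n // 2] = 1.0                                          # initial blob
for _ in range(200):
    c = split_step(c, shift=3, alpha=0.25)
print(c.sum(), c.max())  # mass is conserved while the peak decays
```

    Because both sub-steps are linear, the 200 iterations could equally be written as one precomputed "composite" matrix applied repeatedly, which is the property the paper exploits for spectral analysis of mixing.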

  7. Analytical method for coupled transmission error of helical gear system with machining errors, assembly errors and tooth modifications

    Science.gov (United States)

    Lin, Tengjiao; He, Zeyin

    2017-07-01

    We present a method for analyzing the transmission error of a helical gear system with errors. First, a finite element method is used for modeling the gear transmission system with machining errors, assembly errors and modifications, and the static transmission error is obtained. Then the bending-torsional-axial coupling dynamic model of the transmission system based on the lumped mass method is established and the dynamic transmission error of the gear transmission system is calculated, which provides error excitation data for the analysis and control of vibration and noise of the gear system.

  8. A Specification Test of Stochastic Diffusion Models

    Institute of Scientific and Technical Information of China (English)

    Shu-lin ZHANG; Zheng-hong WEI; Qiu-xiang BI

    2013-01-01

    In this paper, we propose a hypothesis testing approach to checking model mis-specification in continuous-time stochastic diffusion models. The key idea behind the development of our test statistic is rooted in the generalized information equality in the context of martingale estimating equations. We propose a bootstrap resampling method to implement the proposed diagnostic procedure numerically. Through intensive simulation studies, we show that our approach performs well in terms of type I error control, power improvement and computational efficiency.
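
    The paper's martingale-based statistic is not reproduced in the abstract. As a generic illustration of the bootstrap resampling device it mentions, here is a sketch of a bootstrap test for a simple location hypothesis; the statistic, the recentring under the null, and the data are all placeholders, not the paper's construction.

```python
import random
import statistics

def bootstrap_pvalue(sample, stat, n_boot=2000, seed=1):
    """Bootstrap p-value for H0: mean == 0, by resampling the data
    recentred to satisfy the null and counting how often the
    resampled statistic is at least as extreme as the observed one.
    """
    random.seed(seed)
    observed = stat(sample)
    centered = [x - statistics.mean(sample) for x in sample]  # impose H0
    count = 0
    for _ in range(n_boot):
        resample = random.choices(centered, k=len(sample))
        if abs(stat(resample)) >= abs(observed):
            count += 1
    return count / n_boot

data = [0.3, 1.1, 0.8, 1.4, 0.9, 1.2, 0.7, 1.0]  # hypothetical sample
p = bootstrap_pvalue(data, statistics.mean)
print(f"bootstrap p-value = {p:.3f}")
```

    Calibrating the null distribution by resampling rather than by asymptotic formulae is the same device the paper uses for its specification test.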

  9. Detailed measurement on a HESCO diffuser

    DEFF Research Database (Denmark)

    Jensen, Rasmus Lund; Holm, Dorte; Nielsen, Peter V.

    2007-01-01

    This paper focuses on measuring the inlet velocity from a HESCO diffuser used in the IEA Annex 20 work as a function of the volume flow it provides. The aim of the present work is to establish a relation between the inlet velocity, the effective area and the airflow. This is important because the inlet velocity is a very important boundary condition both in CFD calculation and general flow measurements. If only the volume flow and the geometrical area are used, a relatively large error in the inlet velocity may result. From the detailed measurements it was possible to establish an expression…

  10. Analytic evaluation of diffuse flux at a refractive index discontinuity in forward-biased scattering media

    CERN Document Server

    Selden, Adrian C

    2011-01-01

    A simple analytic method of estimating the error involved in using an approximate boundary condition for diffuse radiation in two adjoining scattering media with differing refractive index is presented. The method is based on asymptotic planar fluxes and enables the error to be readily evaluated analytically without recourse to Monte Carlo simulation. The analysis is extended to multi-layer media, for which the cumulative error can exceed 100% when an approximate boundary condition is used.

  11. Space Saving Statistics: An Introduction to Constant Error, Variable Error, and Absolute Error.

    Science.gov (United States)

    Guth, David

    1990-01-01

    Article discusses research on orientation and mobility (O&M) for individuals with visual impairments, examining constant, variable, and absolute error (descriptive statistics that quantify fundamentally different characteristics of distributions of spatially directed behavior). It illustrates the statistics with examples, noting their…
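
    The three statistics can be sketched from their conventional definitions in the motor-behavior literature: constant error is the signed mean deviation from the target, variable error is the spread about the subject's own mean, and absolute error is the mean unsigned deviation. The trial data below are hypothetical.

```python
from statistics import mean, pstdev

def error_scores(responses, target):
    """Constant error (signed bias), variable error (consistency),
    and absolute error (overall magnitude) for a set of trials."""
    errs = [r - target for r in responses]
    ce = mean(errs)                        # constant error: signed bias
    ve = pstdev(errs)                      # variable error: spread about own mean
    ae = mean(abs(e) for e in errs)        # absolute error: unsigned magnitude
    return ce, ve, ae

# Hypothetical distance-reproduction trials (metres), target = 10 m
trials = [9.2, 10.4, 9.8, 10.6, 9.5]
ce, ve, ae = error_scores(trials, 10.0)
print(f"CE = {ce:+.2f} m, VE = {ve:.2f} m, AE = {ae:.2f} m")
```

    Note how the three numbers answer different questions: a traveler can be unbiased (CE near zero) yet inconsistent (large VE), which is exactly why the article treats them as distinct characteristics of spatially directed behavior.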

  12. Discretization vs. Rounding Error in Euler's Method

    Science.gov (United States)

    Borges, Carlos F.

    2011-01-01

    Euler's method for solving initial value problems is an excellent vehicle for observing the relationship between discretization error and rounding error in numerical computation. Reductions in stepsize, in order to decrease discretization error, necessarily increase the number of steps and so introduce additional rounding error. The problem is…
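
    The trade-off described above can be observed directly. A minimal sketch, where the test equation y' = y and the step counts are illustrative choices, not taken from the article: halving the step size roughly halves the error of this first-order method, while at far smaller steps than shown here accumulated rounding error would eventually dominate.

```python
import math

def euler(f, y0, t_end, n):
    """Fixed-step explicit Euler for y' = f(t, y) on [0, t_end]."""
    h = t_end / n
    t, y = 0.0, y0
    for _ in range(n):
        y += h * f(t, y)
        t += h
    return y

# y' = y, y(0) = 1, exact value y(1) = e
for n in (10, 20, 40, 80):
    err = abs(euler(lambda t, y: y, 1.0, 1.0, n) - math.e)
    print(f"n = {n:3d}  error = {err:.5f}")
```

    Each doubling of n cuts the discretization error roughly in half, the O(h) signature of Euler's method; pushing n toward the reciprocal of machine epsilon would reverse the trend as rounding error accumulates.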

  14. Correction of errors in power measurements

    DEFF Research Database (Denmark)

    Pedersen, Knud Ole Helgesen

    1998-01-01

    Small errors in voltage and current measuring transformers cause inaccuracies in power measurements. In this report, correction factors are derived to compensate for such errors.

  15. Error Analysis of Band Matrix Method

    OpenAIRE

    Taniguchi, Takeo; Soga, Akira

    1984-01-01

    Numerical error in the solution of the band matrix method based on the elimination method in single precision is investigated theoretically and experimentally, and the behaviour of the truncation error and the roundoff error is clarified. Some important suggestions for the useful application of the band solver are proposed by using the results of above error analysis.

  16. Error Correction in Oral Classroom English Teaching

    Science.gov (United States)

    Jing, Huang; Xiaodong, Hao; Yu, Liu

    2016-01-01

    As is known to all, errors are inevitable in the process of language learning for Chinese students. Should we ignore students' errors in learning English? In common with other questions, different people hold different opinions. All teachers agree that errors students make in written English are not allowed. For the errors students make in oral…

  17. 5 CFR 1601.34 - Error correction.

    Science.gov (United States)

    2010-01-01

    5 CFR § 1601.34 (Contribution Allocations and Interfund Transfer Requests), Error correction: Errors in processing … in the wrong investment fund, will be corrected in accordance with the error correction …

  18. STRUCTURED BACKWARD ERRORS FOR STRUCTURED KKT SYSTEMS

    Institute of Scientific and Technical Information of China (English)

    Xin-xiu Li; Xin-guo Liu

    2004-01-01

    In this paper we study structured backward errors for some structured KKT systems. Normwise structured backward errors for structured KKT systems are defined, and computable formulae of the structured backward errors are obtained. Simple numerical examples show that the structured backward errors may be much larger than the unstructured ones in some cases.

  19. Error analysis of the quartic nodal expansion method for slab geometry

    Energy Technology Data Exchange (ETDEWEB)

    Penland, R.C.; Turinsky, P.J. [North Carolina State Univ., Raleigh, NC (United States); Azmy, Y.Y. [Oak Ridge National Lab., TN (United States)

    1995-02-01

    This paper presents an analysis of the quartic polynomial Nodal Expansion Method (NEM) for one-dimensional neutron diffusion calculations. As part of an ongoing effort to develop an adaptive mesh refinement strategy for use in state-of-the-art nodal kinetics codes, we derive a priori error bounds on the computed solution for uniform meshes and validate them using a simple test problem. Predicted error bounds are found to be greater than computed maximum absolute errors by no more than a factor of six allowing mesh size selection to reflect desired accuracy. We also quantify the rapid convergence in the NEM computed solution as a function of mesh size.

  20. Error estimation and adaptivity for transport problems with uncertain parameters

    Science.gov (United States)

    Sahni, Onkar; Li, Jason; Oberai, Assad

    2016-11-01

    Stochastic partial differential equations (PDEs) with uncertain parameters and source terms arise in many transport problems. In this study, we develop and apply an adaptive approach based on the variational multiscale (VMS) formulation for discretizing stochastic PDEs. In this approach we employ finite elements in the physical domain and a generalized polynomial chaos based spectral basis in the stochastic domain. We demonstrate our approach on non-trivial transport problems where the uncertain parameters are such that the advective and diffusive regimes are spanned in the stochastic domain. We show that the proposed method is effective as a local error estimator in quantifying the element-wise error and in driving adaptivity in the physical and stochastic domains. We also indicate how this approach may be extended to the Navier-Stokes equations. NSF Award 1350454 (CAREER).

  1. Error analysis of flux limiter schemes at extrema

    Science.gov (United States)

    Kriel, A. J.

    2017-01-01

    Total variation diminishing (TVD) schemes have been an invaluable tool for the solution of hyperbolic conservation laws. One of the major shortcomings of commonly used TVD methods is the loss of accuracy near extrema. Although large amounts of anti-diffusion usually benefit the resolution of discontinuities, a balanced limiter such as Van Leer's performs better at extrema. Reliable criteria, however, for the performance of a limiter near extrema are not readily apparent. This work provides theoretical quantitative estimates for the local truncation errors of flux limiter schemes at extrema for a uniform grid. Moreover, the component of the error attributed to the flux limiter was obtained. This component is independent of the problem and grid spacing, and may be considered a property of the limiter that reflects the performance at extrema. Numerical test problems validate the results.
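    As an illustration of the balanced limiter the abstract singles out, Van Leer's limiter has the closed form phi(r) = (r + |r|)/(1 + |r|): it vanishes for r <= 0 (at extrema, where consecutive gradient ratios turn negative) and tends to 2 for large r. A minimal sketch:

```python
def van_leer(r):
    """Van Leer flux limiter: phi(r) = (r + |r|) / (1 + |r|).
    Returns 0 for r <= 0 (local extremum or discontinuity in slope sign),
    1 at r = 1 (smooth data), and approaches 2 as r -> infinity."""
    return (r + abs(r)) / (1.0 + abs(r))
```

Here r is the usual ratio of consecutive solution gradients used by TVD schemes to blend low- and high-order fluxes.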

  2. IMPACT OF ERROR FILTERS ON SHARES IN HALFTONE VISUAL CRYPTOGRAPHY

    Directory of Open Access Journals (Sweden)

    Sunil Agrawal

    2012-05-01

    Full Text Available Visual cryptography encodes a secret binary image (SI) into shares of random binary patterns. If the shares are xeroxed onto transparencies, the secret image can be visually decoded by superimposing a qualified subset of transparencies, but no secret information can be obtained from the superposition of a forbidden subset. The binary patterns of the shares, however, have no visual meaning and hinder the objectives of visual cryptography. Halftone visual cryptography encodes a secret binary image into n halftone shares (images carrying significant visual information). When secrecy is a more important factor than the quality of the recovered image, the shares must be of better visual quality. Different filters such as Floyd-Steinberg, Jarvis, Stucki, Burkes, Sierra, and Stevenson-Arce are used and their impact on the visual quality of shares is examined. The simulation shows that the error filters used in error diffusion have a great impact on the visual quality of the shares.
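    Floyd-Steinberg, the first of the error filters listed, diffuses the quantization error of each pixel to four unvisited neighbours with weights 7/16, 3/16, 5/16 and 1/16. A minimal halftoning sketch (plain Python lists, grayscale values in [0, 1]; the function name is illustrative):

```python
def floyd_steinberg(img):
    """Binarize a grayscale image (2D list of floats in [0, 1]) by
    Floyd-Steinberg error diffusion: the quantization error at each pixel
    is distributed to unvisited neighbours with weights 7/16 (right),
    3/16 (down-left), 5/16 (down), 1/16 (down-right)."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]  # working copy; errors accumulate here
    for y in range(h):
        for x in range(w):
            old = out[y][x]
            new = 1.0 if old >= 0.5 else 0.0  # threshold quantizer
            out[y][x] = new
            err = old - new
            if x + 1 < w:
                out[y][x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    out[y + 1][x - 1] += err * 3 / 16
                out[y + 1][x] += err * 5 / 16
                if x + 1 < w:
                    out[y + 1][x + 1] += err * 1 / 16
    return out
```

The other filters in the list (Jarvis, Stucki, Burkes, Sierra, Stevenson-Arce) differ only in the neighbourhood shape and weights.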

  3. Multidimensional diffusion MRI

    Science.gov (United States)

    Topgaard, Daniel

    2017-02-01

    Principles from multidimensional NMR spectroscopy, and in particular solid-state NMR, have recently been transferred to the field of diffusion MRI, offering non-invasive characterization of heterogeneous anisotropic materials, such as the human brain, at an unprecedented level of detail. Here we revisit the basic physics of solid-state NMR and diffusion MRI to pinpoint the origin of the somewhat unexpected analogy between the two fields, and provide an overview of current diffusion MRI acquisition protocols and data analysis methods to quantify the composition of heterogeneous materials in terms of diffusion tensor distributions with size, shape, and orientation dimensions. While the most advanced methods allow estimation of the complete multidimensional distributions, simpler methods focus on various projections onto lower-dimensional spaces as well as determination of means and variances rather than actual distributions. Even the less advanced methods provide simple and intuitive scalar parameters that are directly related to microstructural features that can be observed in optical microscopy images, e.g. average cell eccentricity, variance of cell density, and orientational order - properties that are inextricably entangled in conventional diffusion MRI. Key to disentangling all these microstructural features is MRI signal acquisition combining isotropic and directional dimensions, just as in the field of multidimensional solid-state NMR from which most of the ideas for the new methods are derived.

  4. Sailing On Diffusion

    Science.gov (United States)

    Allshouse, Michael; Barad, Mike; Peacock, Thomas

    2009-11-01

    When a density-stratified fluid encounters a sloping boundary, diffusion alters the fluid density adjacent to the boundary, producing spontaneous along-slope flow. Since stratified fluids are ubiquitous in nature, this phenomenon plays a vital role in environmental transport processes, including salt transport in rock fissures and ocean-boundary mixing. Here we show that diffusion-driven flow can be harnessed as a remarkable means of propulsion, acting as a diffusion-engine that extracts energy from microscale diffusive processes to propel macroscale objects. Like a sailboat tacking into the wind, forward motion results from fluid flow around an object, creating a region of low pressure at the front relative to the rear. In this case, however, the flow is driven by molecular diffusion and the pressure variations arise due to the resulting small changes in the fluid density. This mechanism has implications for a number of important systems, including environmental and biological transport processes at locations of strong stratification, such as pycnoclines in oceans and lakes. There is also a strong connection with other prevalent buoyancy-driven flows, such as valley and glacier winds, significantly broadening the scope of these results and opening up a new avenue for propulsion research.

  5. Primary diffuse leptomeningeal gliosarcomatosis.

    Science.gov (United States)

    Moon, Ju Hyung; Kim, Se Hoon; Kim, Eui Hyun; Kang, Seok-Gu; Chang, Jong Hee

    2015-04-01

    Primary diffuse leptomeningeal gliomatosis (PDLG) is a rare condition with a fatal outcome, characterized by diffuse infiltration of the leptomeninges by neoplastic glial cells without evidence of primary tumor in the brain or spinal cord parenchyma. In particular, PDLG histologically diagnosed as gliosarcoma is extremely rare, with only 2 cases reported to date. We report a case of primary diffuse leptomeningeal gliosarcomatosis. A 68-year-old man presented with fever, chilling, headache, and a brief episode of mental deterioration. Initial T1-weighted post-contrast brain magnetic resonance imaging (MRI) showed diffuse leptomeningeal enhancement without a definite intraparenchymal lesion. Based on clinical and imaging findings, antiviral treatment was initiated. Despite the treatment, the patient's neurologic symptoms and mental status progressively deteriorated and follow-up MRI showed rapid progression of the disease. A meningeal biopsy revealed gliosarcoma and was conclusive for the diagnosis of primary diffuse leptomeningeal gliosarcomatosis. We suggest the inclusion of PDLG in the potential differential diagnosis of patients who present with nonspecific neurologic symptoms in the presence of leptomeningeal involvement on MRI.

  6. Managing human error in aviation.

    Science.gov (United States)

    Helmreich, R L

    1997-05-01

    Crew resource management (CRM) programs were developed to address team and leadership aspects of piloting modern airplanes. The goal is to reduce errors through team work. Human factors research and social, cognitive, and organizational psychology are used to develop programs tailored for individual airlines. Flight crews study accident case histories, group dynamics, and human error. Simulators provide pilots with the opportunity to solve complex flight problems. CRM in the simulator is called line-oriented flight training (LOFT). In automated cockpits CRM promotes the idea of automation as a crew member. Cultural aspects of aviation include professional, business, and national culture. The aviation CRM model has been adapted for training surgeons and operating room staff in human factors.

  7. Robot learning and error correction

    Science.gov (United States)

    Friedman, L.

    1977-01-01

    A model of robot learning is described that associates previously unknown perceptions with the sensed known consequences of robot actions. For these actions, both the categories of outcomes and the corresponding sensory patterns are incorporated in a knowledge base by the system designer. Thus the robot is able to predict the outcome of an action and compare the expectation with the experience. New knowledge about what to expect in the world may then be incorporated by the robot in a pre-existing structure, whether it detects accordance or discrepancy between a predicted consequence and experience. Errors committed during plan execution are detected by the same type of comparison process, and learning may be applied to avoiding the errors.

  8. Manson’s triple error

    Directory of Open Access Journals (Sweden)

    Delaporte F.

    2008-09-01

    Full Text Available The author discusses the significance, implications and limitations of Manson’s work. How did Patrick Manson resolve some of the major problems raised by the filarial worm life cycle? The Amoy physician showed that circulating embryos could only leave the blood via the percutaneous route, thereby requiring a bloodsucking insect. The discovery of a new autonomous, airborne, active host undoubtedly had a considerable impact on the history of parasitology, but the way in which Manson formulated and solved the problem of the transfer of filarial worms from the body of the mosquito to man resulted in failure. This article shows how the epistemological transformation operated by Manson was indissociably related to a series of errors and how a major breakthrough can be the result of a series of false proposals and, consequently, that the history of truth often involves a history of error.

  9. Offset Error Compensation in Roundness Measurement

    Institute of Scientific and Technical Information of China (English)

    朱喜林; 史俊; 李晓梅

    2004-01-01

    This paper analyses three causes of offset error in roundness measurement and presents corresponding compensation methods. The causes of offset error include excursion error, resulting from the deflection of the sensor's line of measurement from the rotational center in measurement (datum center); eccentricity error, resulting from the variance between the workpiece's geometrical center and the rotational center; and tilt error, resulting from the tilt between the workpiece's geometrical axes and the rotational centerline.

  10. Error Filtering Schemes for Color Images in Visual Cryptography

    Directory of Open Access Journals (Sweden)

    Shiny Malar F.R

    2011-11-01

    Full Text Available The color visual cryptography methods are free from the limitations of randomness on color images. The two basic ideas used are error diffusion and pixel synchronization. Error diffusion is a simple method in which the quantization error at each pixel is filtered and fed as input to the neighbouring pixels. In this way the low-frequency difference between the input and output image is minimized, which in turn gives quality images. Degradation of colors is avoided with the help of pixel synchronization. This work presents an efficient color image visual cryptic filtering scheme to improve the image quality of the original image restored from visual cryptic shares. The proposed scheme applies a deblurring effect to the non-uniform distribution of visual cryptic share pixels. After eliminating blurring effects on the pixels, a Fourier transformation is applied to normalize the unevenly transformed share pixels on the restored original image, which in turn improves the quality of the restored visual cryptographic image. In addition, the overlapping portions of two or more visual cryptic shares are filtered out using the homogeneity of the pixel texture property on the restored original image. Experiments are conducted with standard synthetic and real data set images and show better performance of the proposed color image visual cryptic filtering scheme, measured in terms of PSNR value (improved about 3 times) and share pixel error rate (reduced to nearly 11%) compared with existing grey visual cryptic filters. The results show that noise effects such as blurring on the restoration of the original image are removed completely.
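    PSNR, the quality metric quoted above, is computed from the mean squared error against the reference image. A minimal sketch over flat pixel sequences; the names are illustrative:

```python
import math

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between two equal-length pixel
    sequences; higher means the restored image is closer to the reference.
    Identical images give infinite PSNR (zero mean squared error)."""
    mse = sum((a - b) ** 2 for a, b in zip(ref, test)) / len(ref)
    if mse == 0:
        return float('inf')
    return 10.0 * math.log10(peak ** 2 / mse)
```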

  11. FACTORS AFFECTING MEDICATION ERRORS AT THE EMERGENCY UNIT

    OpenAIRE

    2014-01-01

    Background: The incidence of medication errors is an important indicator in patient safety, and medication errors are the most common medical errors. However, most medication errors can be prevented, and efforts to reduce such errors are available. Due to the high number of medication errors in the emergency unit, understanding of the causes is important for designing successful interventions. This research aims to identify types and causes of medication errors. Method: A qualitative study was used and data were col...

  12. Error-resilient DNA computation

    Energy Technology Data Exchange (ETDEWEB)

    Karp, R.M.; Kenyon, C.; Waarts, O. [Univ. of California, Berkeley, CA (United States)

    1996-12-31

    The DNA model of computation, with test tubes of DNA molecules encoding bit sequences, is based on three primitives, Extract-A-Bit, which splits a test tube into two test tubes according to the value of a particular bit x, Merge-Two-Tubes and Detect-Emptiness. Perfect operations can test the satisfiability of any boolean formula in linear time. However, in reality the Extract operation is faulty; it misclassifies a certain proportion of the strands. We consider the following problem: given an algorithm based on perfect Extract, Merge and Detect operations, convert it to one that works correctly with high probability when the Extract operation is faulty. The fundamental problem in such a conversion is to construct a sequence of faulty Extracts and perfect Merges that simulates a highly reliable Extract operation. We first determine (up to a small constant factor) the minimum number of faulty Extract operations inherently required to simulate a highly reliable Extract operation. We then go on to derive a general method for converting any algorithm based on error-free operations to an error-resilient one, and give optimal error-resilient algorithms for realizing simple n-variable boolean functions such as Conjunction, Disjunction and Parity.

  13. Cesium diffusion in graphite

    Energy Technology Data Exchange (ETDEWEB)

    Evans, R.B. III; Davis, W. Jr.; Sutton, A.L. Jr.

    1980-05-01

    Experiments on diffusion of ¹³⁷Cs in five types of graphite were performed. The document provides a completion of the report that was started and includes a presentation of all of the diffusion data, previously unpublished. Except for data on mass transfer of ¹³⁷Cs in the Hawker-Siddeley graphite, analyses of experimental results were initiated but not completed. The mass transfer process of cesium in HS-1-1 graphite at 600 to 1000 °C in a helium atmosphere is essentially pure diffusion, wherein the values of (D/ε)₀ and ΔE of the equation D/ε = (D/ε)₀ exp(−ΔE/RT) are about 4 × 10⁻² cm²/s and 30 kcal/mole, respectively.
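    With the quoted Arrhenius parameters, the effective diffusivity at any temperature in the studied range follows directly. A sketch assuming R in kcal/(mol K) and temperature in kelvin; the function name is illustrative:

```python
import math

R_KCAL = 1.987e-3  # gas constant, kcal/(mol*K)

def d_over_eps(T_kelvin, d0=4e-2, dE=30.0):
    """Effective diffusivity D/eps in cm^2/s from the Arrhenius law quoted
    in the abstract: D/eps = (D/eps)_0 * exp(-dE / (R*T)),
    with (D/eps)_0 = 4e-2 cm^2/s and dE = 30 kcal/mol."""
    return d0 * math.exp(-dE / (R_KCAL * T_kelvin))
```

At 600 °C (873 K) this gives roughly 1e-9 cm²/s, rising by about two orders of magnitude at 1000 °C.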

  14. Diffusion and mass transfer

    CERN Document Server

    Vrentas, James S

    2013-01-01

    The book first covers the five elements necessary to formulate and solve mass transfer problems, that is, conservation laws and field equations, boundary conditions, constitutive equations, parameters in constitutive equations, and mathematical methods that can be used to solve the partial differential equations commonly encountered in mass transfer problems. Jump balances, Green’s function solution methods, and the free-volume theory for the prediction of self-diffusion coefficients for polymer–solvent systems are among the topics covered. The authors then use those elements to analyze a wide variety of mass transfer problems, including bubble dissolution, polymer sorption and desorption, dispersion, impurity migration in plastic containers, and utilization of polymers in drug delivery. The text offers detailed solutions, along with some theoretical aspects, for numerous processes including viscoelastic diffusion, moving boundary problems, diffusion and reaction, membrane transport, wave behavior, sedime...

  15. Extension of self-seeding scheme with single crystal monochromator to lower energy < 5 keV as a way to generate multi-TW scale pulses at the European XFEL

    CERN Document Server

    Geloni, Gianluca; Saldin, Evgeni

    2012-01-01

    We propose a use of the self-seeding scheme with single crystal monochromator to produce high power, fully-coherent pulses for applications at a dedicated bio-imaging beamline at the European X-ray FEL in the photon energy range between 3.5 keV and 5 keV. We exploit the C(111) Bragg reflection (pi-polarization) in diamond crystals with a thickness of 0.1 mm, and we show that, by tapering the 40 cells of the SASE3 type undulator the FEL power can reach up to 2 TW in the entire photon energy range. The present design assumes the use of a nominal electron bunch with charge 0.1 nC at nominal electron beam energy 17.5 GeV. The main application of the scheme proposed in this work is for single shot imaging of individual protein molecules.
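    The working range of such a single-crystal monochromator can be gauged from the Bragg condition for the C(111) reflection. A sketch using the standard diamond lattice constant (3.567 angstrom, a textbook value assumed here, not taken from the paper) and the conversion lambda[A] = 12.398 / E[keV]:

```python
import math

A_DIAMOND = 3.567                    # diamond lattice constant, angstrom
D_111 = A_DIAMOND / math.sqrt(3.0)   # C(111) interplanar spacing, ~2.06 A

def bragg_angle_deg(energy_kev):
    """First-order Bragg angle in degrees for the C(111) reflection:
    lambda = 2 d sin(theta), with lambda[A] = 12.398 / E[keV].
    Valid only above ~3 keV, where lambda <= 2 d."""
    lam = 12.398 / energy_kev
    return math.degrees(math.asin(lam / (2.0 * D_111)))
```

Across the 3.5-5 keV range discussed in the abstract, the Bragg angle sweeps from roughly 59 to 37 degrees.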

  16. Structure Design and Accuracy Testing of Monochromator in a Soft X-Ray Spectromicroscopic Beamline

    Institute of Scientific and Technical Information of China (English)

    龚学鹏; 卢启鹏; 彭忠琦

    2013-01-01

    In order to satisfy the technical requirements of the soft X-ray spectromicroscopy beamline at the Shanghai Synchrotron Radiation Facility (SSRF), its key assembly, the monochromator, was designed. The wavelength-scanning movement principle of the monochromator is described. The design scheme of the wavelength-scanning mechanism is discussed, and the factors affecting the angular repeatability of the plane mirror and plane grating are analyzed in detail. The switching mechanism of the plane grating is described, and its horizontal deviation, vertical deviation, and roll, yaw and pitch angle precision are analyzed in detail. A six-bar parallel mechanism is used for adjusting the UHV chamber, and the adjusting range and resolution of the bars are analyzed. The entire structure of the monochromator is presented, and its precision testing is performed. Results show that the angular repeatability of the plane mirror and plane grating is 0.166" and 0.149", respectively, and the roll, yaw and pitch angular repeatability of the plane grating switching mechanism is 0.08", 0.12" and 0.05", indicating that the structural design and precision of the monochromator satisfy the technical requirements.

  17. GRAIN-BOUNDARY DIFFUSION

    OpenAIRE

    Peterson, N.

    1982-01-01

    The more useful experimental techniques for determining grain-boundary diffusion are briefly described, followed by a presentation of results that shed light on the models and mechanisms of grain-boundary and dislocation diffusion. Studies of the following grain-boundary diffusion phenomena will be considered: anisotropy in grain-boundary diffusion, the effect of orientation relationship on grain-boundary diffusion, the effect of boundary type and dislocation dissociation, lattice structure, correlat...

  18. APPLICATION OF TRIZ METHODOLOGY IN DIFFUSION WELDING SYSTEM OPTIMIZATION

    Directory of Open Access Journals (Sweden)

    N. RAVINDER REDDY

    2017-10-01

    Full Text Available Welding is widely used in metal joining processes in manufacturing. In recent years, the diffusion welding method has significantly increased the quality of welds. Nevertheless, diffusion welding has seen comparatively little research and application progress. Diffusion welding therefore lacks relevant information on welding design, such as fixtures, parameter selection and integrated design, concerning the joining of thick and thin materials with or without interlayers. This article intends to combine innovative methods in the application of diffusion welding design, which will help to decrease trial-and-error and failure risks in the welding process, guided by the theory of inventive problem solving (TRIZ) design method. This article hopes to provide welding design personnel with innovative design ideas for research and practical application.

  19. Thermal Diffusivity Identification of Distributed Parameter Systems to Sea Ice

    Directory of Open Access Journals (Sweden)

    Liqiong Shi

    2013-01-01

    Full Text Available A method of optimal control is presented as a numerical tool for solving the sea ice heat transfer problem governed by a parabolic partial differential equation. Taking the deviation between the calculated ice temperature and the measurements as the performance criterion, an optimal control model of distributed parameter systems with specific constraints on the thermal properties of sea ice was proposed to determine the thermal diffusivity of sea ice. Based on sea ice physical processes, a parameterization of the thermal diffusivity was derived from field data. The simulation results illustrate that the identified parameterization of the thermal diffusivity is reasonably effective in sea ice thermodynamics. The direct relation between the thermal diffusivity of sea ice and ice porosity is physically significant and can considerably reduce the computational errors. The successful application of this method also shows that the optimal control model of distributed parameter systems, in conjunction with the engineering background, has great potential in dealing with practical problems.
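    The forward model inside such an identification loop is a parabolic heat equation: each candidate diffusivity is run through a solver and its output compared with the measured temperatures. A minimal explicit finite-difference (FTCS) sketch of one forward step, with illustrative names; the actual sea-ice model is considerably more elaborate:

```python
def ftcs_step(u, alpha, dt, dx):
    """One explicit (forward-time, centered-space) step of u_t = alpha * u_xx
    on a 1D grid with fixed (Dirichlet) end values.
    Stable when alpha * dt / dx**2 <= 0.5."""
    r = alpha * dt / dx ** 2
    return ([u[0]]
            + [u[i] + r * (u[i + 1] - 2.0 * u[i] + u[i - 1])
               for i in range(1, len(u) - 1)]
            + [u[-1]])
```

Repeating this step for each candidate alpha and summing squared deviations from measurements gives the performance criterion the optimal control model minimizes.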

  20. A POSTERIORI ERROR ESTIMATE OF THE DSD METHOD FOR FIRST-ORDER HYPERBOLIC EQUATIONS

    Institute of Scientific and Technical Information of China (English)

    康彤; 余德浩

    2002-01-01

    A posteriori error estimate of the discontinuous-streamline diffusion method for first-order hyperbolic equations is presented, which can be used to adjust the spatial mesh reasonably. A numerical example is given to illustrate the accuracy and feasibility of this method.

  1. Nonlocal electrical diffusion equation

    Science.gov (United States)

    Gómez-Aguilar, J. F.; Escobar-Jiménez, R. F.; Olivares-Peregrino, V. H.; Benavides-Cruz, M.; Calderón-Ramón, C.

    2016-07-01

    In this paper, we present an analysis and modeling of the electrical diffusion equation using the fractional calculus approach. This alternative representation for the current density is expressed in terms of the Caputo derivatives, the order for the space domain is 0numerical methods based on Fourier variable separation. The case with spatial fractional derivatives leads to Levy flight type phenomena, while the time fractional equation is related to sub- or super diffusion. We show that the mathematical concept of fractional derivatives can be useful to understand the behavior of semiconductors, the design of solar panels, electrochemical phenomena and the description of anomalous complex processes.

  2. Phase transformation and diffusion

    CERN Document Server

    Kale, G B; Dey, G K

    2008-01-01

    Given that the basic purpose of all research in materials science and technology is to tailor the properties of materials to suit specific applications, phase transformations are the natural key to the fine-tuning of the structural, mechanical and corrosion properties. A basic understanding of the kinetics and mechanisms of phase transformation is therefore of vital importance. Apart from a few cases involving crystallographic martensitic transformations, all phase transformations are mediated by diffusion. Thus, proper control and understanding of the process of diffusion during nucleation, g

  3. Hydrogen diffusion in Zircon

    Science.gov (United States)

    Ingrin, Jannick; Zhang, Peipei

    2016-04-01

    Hydrogen mobility in gem quality zircon single crystals from Madagascar was investigated through H-D exchange experiments. Thin slices were annealed in a horizontal furnace flushed with a gas mixture of Ar/D2(10%) under ambient pressure between 900 °C and 1150 °C. FTIR analyses were performed on oriented slices before and after each annealing run. H diffusion along [100] and [010] follows the same diffusion law D = D0 exp[-E/RT], with log D0 = 2.24 ± 1.57 (in m2/s) and E = 374 ± 39 kJ/mol. H diffusion along [001] follows a slightly more rapid diffusion law, with log D0 = 1.11 ± 0.22 (in m2/s) and E = 334 ± 49 kJ/mol. H diffusion in zircon has much higher activation energy and slower diffusivity than other NAMs below 1150 °C, even iron-poor garnets, which are known to be among the slowest (Blanchard and Ingrin, 2004; Kurka et al. 2005). During H-D exchange zircon also incorporates deuterium. This hydration reaction involves uranium reduction, as shown by the exchange of U5+ and U4+ characteristic bands in the near infrared region during annealing. It is the first time that a hydration reaction, U5+ + OH- = U4+ + O2- + 1/2H2, is experimentally reported. The kinetics of deuterium incorporation is slightly slower than hydrogen diffusion, suggesting that the reaction is limited by hydrogen mobility. The hydrogen isotopic memory of zircon is higher than that of other NAMs. Zircons will be moderately retentive of H signatures at mid-crustal metamorphic temperatures. At 500 °C, a zircon with a radius of 300 μm would retain its H isotopic signature over more than a million years. However, a zircon is unable to retain this information for geologically significant times under high-grade metamorphism unless the grain size is large enough. References: Blanchard, M. and Ingrin, J. (2004) Hydrogen diffusion in Dora Maira pyrope. Physics and Chemistry of Minerals, 31, 593-605. Kurka, A., Blanchard, M. and Ingrin, J. (2005) Kinetics of hydrogen extraction and deuteration in
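    The retention estimate quoted for 500 °C can be checked from the characteristic diffusion distance sqrt(D t) using the [100] Arrhenius parameters above. A sketch (773 K, roughly one million years; the function name is illustrative):

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def diffusion_length_m(T_kelvin, t_seconds, logD0=2.24, E_kj=374.0):
    """Characteristic H diffusion distance sqrt(D*t) in metres,
    using the [100] Arrhenius parameters quoted in the abstract:
    D = 10**logD0 * exp(-E / (R*T)), logD0 in m^2/s, E in kJ/mol."""
    D = 10.0 ** logD0 * math.exp(-E_kj * 1e3 / (R * T_kelvin))
    return math.sqrt(D * t_seconds)

# At 500 C over ~1 Myr the diffusion distance is a few tens of microns,
# well below a 300 micron grain radius, consistent with the retention claim.
L = diffusion_length_m(773.0, 3.15e13)
```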

  4. Nonlinear diffusion equations

    CERN Document Server

    Wu Zhuo Qun; Li Hui Lai; Zhao Jun Ning

    2001-01-01

    Nonlinear diffusion equations, an important class of parabolic equations, come from a variety of diffusion phenomena which appear widely in nature. They are suggested as mathematical models of physical problems in many fields, such as filtration, phase transition, biochemistry and dynamics of biological groups. In many cases, the equations possess degeneracy or singularity. The appearance of degeneracy or singularity makes the study more involved and challenging. Many new ideas and methods have been developed to overcome the special difficulties caused by the degeneracy and singularity, which

  5. The Trouble with Diffusion

    Directory of Open Access Journals (Sweden)

    R.T. DeHoff

    2002-09-01

    Full Text Available The phenomenological formalism, which yields Fick's Laws for diffusion in single phase multicomponent systems, is widely accepted as the basis for the mathematical description of diffusion. This paper focuses on problems associated with this formalism. This mode of description of the process is cumbersome, defining as it does matrices of interdiffusion coefficients (the central material properties that require a large experimental investment for their evaluation in three component systems, and, indeed cannot be evaluated for systems with more than three components. It is also argued that the physical meaning of the numerical values of these properties with respect to the atom motions in the system remains unknown. The attempt to understand the physical content of the diffusion coefficients in the phenomenological formalism has been the central fundamental problem in the theory of diffusion in crystalline alloys. The observation by Kirkendall that the crystal lattice moves during diffusion led Darken to develop the concept of intrinsic diffusion, i.e., atom motion relative to the crystal lattice. Darken and his successors sought to relate the diffusion coefficients computed for intrinsic fluxes to those obtained from the motion of radioactive tracers in chemically homogeneous samples which directly report the jump frequencies of the atoms as a function of composition and temperature. This theoretical connection between tracer, intrinsic and interdiffusion behavior would provide the basis for understanding the physical content of interdiffusion coefficients. Definitive tests of the resulting theoretical connection have been carried out for a number of binary systems for which all three kinds of observations are available. In a number of systems predictions of intrinsic coefficients from tracer data do not agree with measured values although predictions of interdiffusion coefficients appear to give reasonable agreement. Thus, the complete

  6. Diffusion in advanced materials

    CERN Document Server

    Murch, Graeme; Belova, Irina

    2014-01-01

    In the first chapter Prof. Kozubski and colleagues present atomistic simulations of superstructure transformations of intermetallic nanolayers. In Chapter 2, Prof. Danielewski and colleagues discuss a formalism for the morphology of the diffusion zone in ternary alloys. In Chapter 3, Professors Sprengel and Koiwa discuss the classical contributions of Boltzmann and Matano for the analysis of concentration-dependent diffusion. This is followed by Chapter 4 by Professor Cserháti and colleagues on the use of Kirkendall porosity for fabricating hollow hemispheres. In Chapter 5, Professor Morton-Blake rep

  7. Drift in Diffusion Gradients

    Directory of Open Access Journals (Sweden)

    Fabio Marchesoni

    2013-08-01

    Full Text Available The longstanding problem of Brownian transport in a heterogeneous quasi one-dimensional medium with space-dependent self-diffusion coefficient is addressed in the overdamped (zero mass limit. A satisfactory mesoscopic description is obtained in the Langevin equation formalism by introducing an appropriate drift term, which depends on the system macroscopic observables, namely the diffuser concentration and current. The drift term is related to the microscopic properties of the medium. The paradoxical existence of a finite drift at zero current suggests the possibility of designing a Maxwell demon operating between two equilibrium reservoirs at the same temperature.
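    In the Ito convention, a drift equal to the diffusivity gradient keeps a uniform concentration stationary despite the space-dependent D, which is the paradoxical finite drift at zero current the abstract describes: dx = D'(x) dt + sqrt(2 D(x)) dW. A minimal Euler-Maruyama sketch under that assumption; D(x) and the function names are illustrative, not from the paper:

```python
import math
import random

def step(x, D, dDdx, dt, rng):
    """One Euler-Maruyama step of the overdamped Ito Langevin equation
    dx = D'(x) dt + sqrt(2 D(x)) dW. The drift D'(x) is the mesoscopic
    term that makes a uniform concentration stationary (zero current)
    in a medium with a space-dependent self-diffusion coefficient."""
    return x + dDdx(x) * dt + math.sqrt(2.0 * D(x) * dt) * rng.gauss(0.0, 1.0)

# Illustrative diffusivity profile and a single seeded step.
D = lambda x: 1.0 + 0.5 * abs(x)        # hypothetical D(x) > 0
dDdx = lambda x: 0.5 if x >= 0 else -0.5
rng = random.Random(0)
x1 = step(0.0, D, dDdx, 1e-3, rng)
```

One can verify the Fokker-Planck balance directly: with this drift the equation reduces to d/dx [D(x) dp/dx], for which constant p is stationary.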

  8. The variable included angle plane grating monochromator and the key technology

    Institute of Scientific and Technical Information of China (English)

    陈家华; 薛松; 卢启鹏; 彭忠琦; 邰仁忠; 王勇; 陈明; 吴坤

    2011-01-01

    This article discusses the design of a variable included angle plane grating monochromator (VAPGM) on the soft X-ray spectromicroscopy beamline at the Shanghai Synchrotron Radiation Facility (SSRF). The precision sine-bar scanning system meets the requirements by achieving high-precision repeatability in the mechanical transmission system; the internal-channel water-cooling structure of the plane mirror controls the thermal deformation of the mirror surface; and the large quadrate-flange chamber ensures the ultra-high vacuum (UHV) that the VAPGM requires. The main capabilities of the monochromator, including the photon energy range, energy resolution and energy repeatability, fully reach the design requirements.

  9. Brownian yet non-Gaussian diffusion: from superstatistics to subordination of diffusing diffusivities

    CERN Document Server

    Chechkin, A V; Metzler, R; Sokolov, I M

    2016-01-01

    A growing number of biological, soft, and active matter systems are observed to exhibit normal diffusive dynamics with a linear growth of the mean squared displacement, yet with a non-Gaussian distribution of increments. Based on the Chubinsky-Slater idea of a diffusing diffusivity we here establish and analyse a complete minimal model framework of diffusion processes with fluctuating diffusivity. In particular, we demonstrate the equivalence of the diffusing diffusivity process in the short time limit with a superstatistical approach based on a distribution of diffusivities. Moreover, we establish a subordination picture of Brownian but non-Gaussian diffusion processes, that can be used for a wide class of diffusivity fluctuation statistics. Our results are shown to be in excellent agreement with simulations and numerical evaluations.

  10. Righting errors in writing errors: the Wing and Baddeley (1980) spelling error corpus revisited.

    Science.gov (United States)

    Wing, Alan M; Baddeley, Alan D

    2009-03-01

    We present a new analysis of our previously published corpus of handwriting errors (slips) using the proportional allocation algorithm of Machtynger and Shallice (2009). As previously, the proportion of slips is greater in the middle of the word than at the ends; however, in contrast to before, the proportion is greater at the end than at the beginning of the word. The findings are consistent with the hypothesis of memory effects in a graphemic output buffer.

  11. Effects of Listening Conditions, Error Types, and Ensemble Textures on Error Detection Skills

    Science.gov (United States)

    Waggoner, Dori T.

    2011-01-01

    This study was designed with three main purposes: (a) to investigate the effects of two listening conditions on error detection accuracy, (b) to compare error detection responses for rhythm errors and pitch errors, and (c) to examine the influences of texture on error detection accuracy. Undergraduate music education students (N = 18) listened to…

  12. Analysis of Linear Triangular Elements for Convection-Diffusion Problems by Streamline Diffusion Finite Element Methods

    Institute of Scientific and Technical Information of China (English)

    周俊明; 金大永; 张书华

    2007-01-01

    This paper is devoted to studying the superconvergence of streamline diffusion finite element methods for convection-diffusion problems. In [8], the optimal finite element error estimate was obtained in the L²-norm under the condition that ε ≤ h². In the present paper, the same error estimate is obtained under the weaker condition that ε ≤ h.

  13. Diffusion Based Photon Mapping

    DEFF Research Database (Denmark)

    Schjøth, Lars; Sporring, Jon; Fogh Olsen, Ole

    2008-01-01

    . To address this problem, we introduce a photon mapping algorithm based on nonlinear anisotropic diffusion. Our algorithm adapts according to the structure of the photon map such that smoothing occurs along edges and structures and not across. In this way, we preserve important illumination features, while...

  14. Diffusing Best Practices

    DEFF Research Database (Denmark)

    Pries-Heje, Jan; Baskerville, Richard

    2014-01-01

    Both the practice and the research literature on information systems attach great value to the identification and dissemination of information on “best practices”. In the philosophy of science, this type of knowledge is regarded as technological knowledge because it becomes manifest in the successful techniques in one context. While the value for other contexts is unproven, knowledge of best practices circulates under an assumption that the practices will usefully self-diffuse through innovation and adoption in other contexts. We study diffusion of best practices using a design science approach. The study context is a design case in which an organization desires to diffuse its best practices across different groups. The design goal is embodied in organizational mechanisms to achieve this diffusion. The study used Theory of Planned Behavior (TPB) as a kernel theory. The artifacts...

  15. Model of information diffusion

    CERN Document Server

    Lande, D V

    2008-01-01

    The system of cellular automata, which expresses the process of dissemination and publication of news among separate information resources, has been described. The bell-shaped time dependence of news diffusion across internet sources (websites) coheres well with the real behavior of thematic data flows and, at local time spans, with established models, e.g., exponential and logistic ones.

  16. DEVELOPMENT, DIFFUSION, AND EVALUATION.

    Science.gov (United States)

    GUBA, EGON G.

    THE KNOWLEDGE GAP BETWEEN INITIAL RESEARCH AND FINAL USE IS DISCUSSED IN TERMS OF THE FOUR STAGES OF THE THEORY-PRACTICE CONTINUUM (RESEARCH, DEVELOPMENT, DIFFUSION, AND ADOPTION). THE TWO MIDDLE STAGES ARE EMPHASIZED. RESEARCH AND DEVELOPMENT CENTERS, REGIONAL EDUCATIONAL LABORATORIES, AND TITLE III PROJECTS ARE SUGGESTED AS AGENCIES RESPONSIBLE…

  17. Osmosis and Diffusion

    Science.gov (United States)

    Sack, Jeff

    2005-01-01

    OsmoBeaker is a CD-ROM designed to enhance the learning of diffusion and osmosis by presenting interactive experimentation to the student. The software provides several computer simulations that take the student through different scenarios with cells, having different concentrations of solutes in them.

  18. Diffusion in ceramics

    CERN Document Server

    Pelleg, Joshua

    2016-01-01

    This textbook provides an introduction to changes that occur in solids such as ceramics, mainly at high temperatures, which are diffusion controlled, as well as presenting research data. Such changes are related to the kinetics of various reactions such as precipitation, oxidation and phase transformations, but are also related to some mechanical changes, such as creep. The book is composed of two parts, beginning with a look at the basics of diffusion according to Fick's Laws. Solutions of Fick’s second law for constant D, diffusion in grain boundaries and dislocations are presented, along with a look at the atomistic approach for the random motion of atoms. In the second part, the author discusses diffusion in several technologically important ceramics. The ceramics selected are monolithic single-phase ones, including: Al2O3, SiC, MgO, ZrO2 and Si3N4. Of these, three refer to oxide ceramics (alumina, magnesia and zirconia). Carbide based ceramics are represented by the technologically very important Si-ca...
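The solutions of Fick's second law mentioned above include the classic complementary-error-function profile for a semi-infinite solid held at constant surface concentration; a short sketch with illustrative numbers (the diffusivity and anneal time are assumptions, not values from the book):

```python
import math

def concentration(x_m, t_s, d_coef, c_surface=1.0):
    """Fick's second law, semi-infinite solid, constant surface concentration:
    c(x, t) = c_s * erfc(x / (2 * sqrt(D * t)))."""
    return c_surface * math.erfc(x_m / (2.0 * math.sqrt(d_coef * t_s)))

D = 1e-16        # m^2/s, an illustrative high-temperature diffusivity
t = 3600.0       # a one-hour anneal
for x_um in (0.0, 0.05, 0.2, 1.0):
    # concentration profile at increasing depth (micrometres)
    print(x_um, concentration(x_um * 1e-6, t, D))
```

The profile starts at the surface value c_s and decays monotonically with depth, with the characteristic diffusion length 2·sqrt(D·t).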

  19. Nanocrystal diffusion doping.

    Science.gov (United States)

    Vlaskin, Vladimir A; Barrows, Charles J; Erickson, Christian S; Gamelin, Daniel R

    2013-09-25

    A diffusion-based synthesis of doped colloidal semiconductor nanocrystals is demonstrated. This approach involves thermodynamically controlled addition of both impurity cations and host anions to preformed seed nanocrystals under equilibrium conditions, rather than kinetically controlled doping during growth. This chemistry allows thermodynamic crystal compositions to be prepared without sacrificing other kinetically trapped properties such as shape, size, or crystallographic phase. This doping chemistry thus shares some similarities with cation-exchange reactions, but proceeds without the loss of host cations and excels at the introduction of relatively unreactive impurity ions that have not been previously accessible using cation exchange. Specifically, we demonstrate the preparation of Cd(1-x)Mn(x)Se (0 ≤ x ≤ ∼0.2) nanocrystals with narrow size distribution, unprecedentedly high Mn(2+) content, and very large magneto-optical effects by diffusion of Mn(2+) into seed CdSe nanocrystals grown by hot injection. Controlling the solution and lattice chemical potentials of Cd(2+) and Mn(2+) allows Mn(2+) diffusion into the internal volumes of the CdSe nanocrystals with negligible Ostwald ripening, while retaining the crystallographic phase (wurtzite or zinc blende), shape anisotropy, and ensemble size uniformity of the seed nanocrystals. Experimental results for diffusion doping of other nanocrystals with other cations are also presented that indicate this method may be generalized, providing access to a variety of new doped semiconductor nanostructures not previously attainable by kinetic routes or cation exchange.

  20. Diffuse ceiling ventilation

    DEFF Research Database (Denmark)

    Zhang, Chen

    both thermal comfort and energy efficient aspects. The present study aims to characterize the air distribution and thermal comfort in the rooms with diffuse ceiling ventilation. Both the stand-alone ventilation system and its integration with a radiant ceiling system are investigated. This study also...

  1. Diffusion in aggregated soil.

    NARCIS (Netherlands)

    Rappoldt, C.

    1992-01-01

    The structure of an aggregated soil is characterized by the distribution of the distance from an arbitrary point in the soil to the nearest macropore or crack. From this distribution an equivalent model system is derived to which a diffusion model can be more easily applied. The model system consist

  2. Nonmonotonic diffusion in crowded environments

    Science.gov (United States)

    Putzel, Gregory Garbès; Tagliazucchi, Mario; Szleifer, Igal

    2015-01-01

    We study the diffusive motion of particles among fixed spherical crowders. The diffusers interact with the crowders through a combination of a hard-core repulsion and a short-range attraction. The long-time effective diffusion coefficient of the diffusers is found to depend non-monotonically on the strength of their attraction to the crowders. That is, for a given concentration of crowders, a weak attraction to the crowders enhances diffusion. We show that this counterintuitive fact can be understood in terms of the mesoscopic excess chemical potential landscape experienced by the diffuser. The roughness of this excess chemical potential landscape quantitatively captures the nonmonotonic dependence of the diffusion rate on the strength of crowder-diffuser attraction; thus it is a purely static predictor of dynamic behavior. The mesoscopic view given here provides a unified explanation for enhanced diffusion effects that have been found in various systems of technological and biological interest. PMID:25302920

  3. SENSITIVE ERROR ANALYSIS OF CHAOS SYNCHRONIZATION

    Institute of Scientific and Technical Information of China (English)

    HUANG XIAN-GAO; XU JIAN-XUE; HUANG WEI; LÜ ZE-JUN

    2001-01-01

    We study the synchronizing sensitive errors of chaotic systems when other signals are added to the synchronizing signal. Based on the model of Henon map masking, we examine the cause of the sensitive errors of chaos synchronization. The modulation ratio and the mean square error are defined to measure the synchronizing sensitive errors quantitatively. Numerical simulation results of the synchronizing sensitive errors are given for masking direct current, sinusoidal and speech signals, separately. Finally, we give the mean square error curves of chaos synchronizing sensitivity and three-dimensional phase plots of the drive system and the response system for masking the three kinds of signals.

  4. Error signals driving locomotor adaptation

    DEFF Research Database (Denmark)

    Choi, Julia T; Jensen, Peter; Nielsen, Jens Bo

    2016-01-01

    perturbations. Forces were applied to the ankle joint during the early swing phase using an electrohydraulic ankle-foot orthosis. Repetitive 80 Hz electrical stimulation was applied to disrupt cutaneous feedback from the superficial peroneal nerve (foot dorsum) and medial plantar nerve (foot sole) during...... anaesthesia (n = 5) instead of repetitive nerve stimulation. Foot anaesthesia reduced ankle adaptation to external force perturbations during walking. Our results suggest that cutaneous input plays a role in force perception, and may contribute to the 'error' signal involved in driving walking adaptation when...

  5. (Errors in statistical tests)³

    Directory of Open Access Journals (Sweden)

    Kaufman Jay S

    2008-07-01

    Full Text Available Abstract In 2004, Garcia-Berthou and Alcaraz published "Incongruence between test statistics and P values in medical papers," a critique of statistical errors that received a tremendous amount of attention. One of their observations was that the final reported digit of p-values in articles published in the journal Nature departed substantially from the uniform distribution that they suggested should be expected. In 2006, Jeng critiqued that critique, observing that the statistical analysis of those terminal digits had been based on comparing the actual distribution to a uniform continuous distribution, when digits obviously are discretely distributed. Jeng corrected the calculation and reported statistics that did not so clearly support the claim of a digit preference. However delightful it may be to read a critique of statistical errors in a critique of statistical errors, we nevertheless found several aspects of the whole exchange to be quite troubling, prompting our own meta-critique of the analysis. The previous discussion emphasized statistical significance testing. But there are various reasons to expect departure from the uniform distribution in terminal digits of p-values, so that simply rejecting the null hypothesis is not terribly informative. Much more importantly, Jeng found that the original p-value of 0.043 should have been 0.086, and suggested this represented an important difference because it was on the other side of 0.05. Among the most widely reiterated (though often ignored) tenets of modern quantitative research methods is that we should not treat statistical significance as a bright-line test of whether we have observed a phenomenon. Moreover, it sends the wrong message about the role of statistics to suggest that a result should be dismissed because of limited statistical precision when it is so easy to gather more data. In response to these limitations, we gathered more data to improve the statistical precision, and
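The digit-preference analysis at the heart of this exchange is a chi-square goodness-of-fit test of terminal digits against a discrete uniform distribution; a minimal sketch (the digit sample below is fabricated purely for illustration):

```python
from collections import Counter

def chi_square_uniform(digits):
    """Chi-square goodness-of-fit statistic of terminal digits against a
    discrete uniform distribution over 0-9 (9 degrees of freedom)."""
    n = len(digits)
    expected = n / 10.0
    counts = Counter(digits)
    return sum((counts.get(d, 0) - expected) ** 2 / expected for d in range(10))

# Hypothetical terminal digits of reported p-values, with an excess of 5s
sample = [3, 5, 5, 1, 5, 0, 5, 2, 5, 8, 5, 5, 4, 5, 7, 5, 9, 5, 6, 5]
stat = chi_square_uniform(sample)
# compare against the 5% critical value for 9 degrees of freedom (16.92)
print(stat, stat > 16.92)
```

Note this is Jeng's point in miniature: the comparison must be made against the *discrete* uniform distribution over the ten digits, not a continuous one.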

  6. Errors associated with outpatient computerized prescribing systems

    Science.gov (United States)

    Rothschild, Jeffrey M; Salzberg, Claudia; Keohane, Carol A; Zigmont, Katherine; Devita, Jim; Gandhi, Tejal K; Dalal, Anuj K; Bates, David W; Poon, Eric G

    2011-01-01

    Objective To report the frequency, types, and causes of errors associated with outpatient computer-generated prescriptions, and to develop a framework to classify these errors to determine which strategies have greatest potential for preventing them. Materials and methods This is a retrospective cohort study of 3850 computer-generated prescriptions received by a commercial outpatient pharmacy chain across three states over 4 weeks in 2008. A clinician panel reviewed the prescriptions using a previously described method to identify and classify medication errors. Primary outcomes were the incidence of medication errors; potential adverse drug events, defined as errors with potential for harm; and rate of prescribing errors by error type and by prescribing system. Results Of 3850 prescriptions, 452 (11.7%) contained 466 total errors, of which 163 (35.0%) were considered potential adverse drug events. Error rates varied by computerized prescribing system, from 5.1% to 37.5%. The most common error was omitted information (60.7% of all errors). Discussion About one in 10 computer-generated prescriptions included at least one error, of which a third had potential for harm. This is consistent with the literature on manual handwritten prescription error rates. The number, type, and severity of errors varied by computerized prescribing system, suggesting that some systems may be better at preventing errors than others. Conclusions Implementing a computerized prescribing system without comprehensive functionality and processes in place to ensure meaningful system use does not decrease medication errors. The authors offer targeted recommendations on improving computerized prescribing systems to prevent errors. PMID:21715428

  7. Error detection and reduction in blood banking.

    Science.gov (United States)

    Motschman, T L; Moore, S B

    1996-12-01

    Error management plays a major role in facility process improvement efforts. By detecting and reducing errors, quality and, therefore, patient care improve. It begins with a strong organizational foundation of management attitude with clear, consistent employee direction and appropriate physical facilities. Clearly defined critical processes, critical activities, and SOPs act as the framework for operations as well as active quality monitoring. To assure that personnel can detect and report errors, they must be trained in both operational duties and error management practices. Use of simulated/intentional errors and incorporation of error detection into competency assessment keeps employees practiced and confident, and diminishes fear of the unknown. Personnel can clearly see that errors are indeed used as opportunities for process improvement and not for punishment. The facility must have a clearly defined and consistently used definition for reportable errors. Reportable errors should include those errors with potentially harmful outcomes as well as those errors that are "upstream," and thus further away from the outcome. A well-written error report consists of who, what, when, where, why/how, and follow-up to the error. Before correction can occur, an investigation to determine the underlying cause of the error should be undertaken. Obviously, the best corrective action is prevention. Correction can occur at five different levels; however, only three of these levels are directed at prevention. Prevention requires a method to collect and analyze data concerning errors. In the authors' facility a functional error classification method and a quality system-based classification have been useful. An active method to search for problems uncovers them further upstream, before they can have disastrous outcomes. In the continual quest for improving processes, an error management program is itself a process that needs improvement, and we must strive to always close the circle

  8. Antenna motion errors in bistatic SAR imagery

    Science.gov (United States)

    Wang, Ling; Yazıcı, Birsen; Cagri Yanik, H.

    2015-06-01

    Antenna trajectory or motion errors are pervasive in synthetic aperture radar (SAR) imaging. Motion errors typically result in smearing and positioning errors in SAR images. Understanding the relationship between the trajectory errors and position errors in reconstructed images is essential in forming focused SAR images. Existing studies on the effect of antenna motion errors are limited to certain geometries, trajectory error models or monostatic SAR configuration. In this paper, we present an analysis of position errors in bistatic SAR imagery due to antenna motion errors. Bistatic SAR imagery is becoming increasingly important in the context of passive imaging and multi-sensor imaging. Our analysis provides an explicit quantitative relationship between the trajectory errors and the positioning errors in bistatic SAR images. The analysis is applicable to arbitrary trajectory errors and arbitrary imaging geometries including wide apertures and large scenes. We present extensive numerical simulations to validate the analysis and to illustrate the results in commonly used bistatic configurations and certain trajectory error models.

  9. Macroscopic model and truncation error of discrete Boltzmann method

    Science.gov (United States)

    Hwang, Yao-Hsin

    2016-10-01

    A derivation procedure to secure the macroscopically equivalent equation and its truncation error for the discrete Boltzmann method is proffered in this paper. Essential presumptions of two time scales and a small parameter in the Chapman-Enskog expansion are disposed of in the present formulation. The equilibrium particle distribution function, instead of its original non-equilibrium form, is chosen as the key variable in the derivation route. Taylor series expansion encompassing fundamental algebraic manipulations is adequate to realize the macroscopically differential counterpart. A self-contained and comprehensive practice for the linear one-dimensional convection-diffusion equation is illustrated in detail. Numerical validations of the incurred truncation error in one- and two-dimensional cases with various distribution functions are conducted to verify the present formulation. As shown in the computational results, excellent agreement between numerical results and theoretical predictions is found in the test problems. Straightforward extensions to more complicated systems, including convection-diffusion-reaction, multi-relaxation times in the collision operator, as well as multi-dimensional Navier-Stokes equations, are also exposed in the Appendix to point out its expediency in solving complicated flow problems.
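Numerical validation of a truncation error typically means measuring the observed order of accuracy under grid refinement; a generic sketch for central differences applied to the linear one-dimensional convection-diffusion operator (this illustrates the verification idea only, not the discrete Boltzmann scheme itself):

```python
import math

def operator_error(n):
    """Max nodal error of second-order central differences applied to
    L[u] = a*u' - nu*u'' with u(x) = sin(x) on a periodic grid of n points."""
    a, nu = 1.0, 0.1
    h = 2.0 * math.pi / n
    xs = [i * h for i in range(n)]
    u = [math.sin(x) for x in xs]
    err = 0.0
    for i in range(n):
        up = (u[(i + 1) % n] - u[(i - 1) % n]) / (2.0 * h)          # u'
        upp = (u[(i + 1) % n] - 2.0 * u[i] + u[(i - 1) % n]) / (h * h)  # u''
        exact = a * math.cos(xs[i]) + nu * math.sin(xs[i])  # a*cos - nu*(-sin)
        err = max(err, abs(a * up - nu * upp - exact))
    return err

e1, e2 = operator_error(64), operator_error(128)
order = math.log(e1 / e2, 2)   # observed order; ~2 for these stencils
print(e1, e2, order)
```

Halving the grid spacing should cut the error by roughly a factor of four, confirming the O(h²) truncation error predicted by Taylor expansion.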

  10. Medication errors: hospital pharmacist perspective.

    Science.gov (United States)

    Guchelaar, Henk-Jan; Colen, Hadewig B B; Kalmeijer, Mathijs D; Hudson, Patrick T W; Teepe-Twiss, Irene M

    2005-01-01

    In recent years medication error has justly received considerable attention, as it causes substantial mortality, morbidity and additional healthcare costs. Risk assessment models, adapted from commercial aviation and the oil and gas industries, are currently being developed for use in clinical pharmacy. The hospital pharmacist is best placed to oversee the quality of the entire drug distribution chain, from prescribing, drug choice, dispensing and preparation to the administration of drugs, and can fulfil a vital role in improving medication safety. Most elements of the drug distribution chain can be optimised; however, because comparative intervention studies are scarce, there is little scientific evidence available demonstrating improvements in medication safety through such interventions. Possible interventions aimed at reducing medication errors, such as developing methods for detection of patients with increased risk of adverse drug events, performing risk assessment in clinical pharmacy and optimising the drug distribution chain are discussed. Moreover, the specific role of the clinical pharmacist in improving medication safety is highlighted, both at an organisational level and in individual patient care.

  11. Cosine tuning minimizes motor errors.

    Science.gov (United States)

    Todorov, Emanuel

    2002-06-01

    Cosine tuning is ubiquitous in the motor system, yet a satisfying explanation of its origin is lacking. Here we argue that cosine tuning minimizes expected errors in force production, which makes it a natural choice for activating muscles and neurons in the final stages of motor processing. Our results are based on the empirically observed scaling of neuromotor noise, whose standard deviation is a linear function of the mean. Such scaling predicts a reduction of net force errors when redundant actuators pull in the same direction. We confirm this prediction by comparing forces produced with one versus two hands and generalize it across directions. Under the resulting neuromotor noise model, we prove that the optimal activation profile is a (possibly truncated) cosine--for arbitrary dimensionality of the workspace, distribution of force directions, correlated or uncorrelated noise, with or without a separate cocontraction command. The model predicts a negative force bias, truncated cosine tuning at low muscle cocontraction levels, and misalignment of preferred directions and lines of action for nonuniform muscle distributions. All predictions are supported by experimental data.
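The prediction that redundant actuators pulling in the same direction reduce net force errors follows directly from the linear mean-to-standard-deviation scaling of neuromotor noise; a simulation sketch with an assumed scaling constant:

```python
import math, random

K = 0.1            # assumed noise scaling: std = K * mean force
rng = random.Random(0)

def net_force_rms_error(n_actuators, target=10.0, trials=200_000):
    """RMS error of the summed force when the target is split evenly across
    independent actuators, each with signal-dependent Gaussian noise."""
    share = target / n_actuators
    sq = 0.0
    for _ in range(trials):
        total = sum(rng.gauss(share, K * share) for _ in range(n_actuators))
        sq += (total - target) ** 2
    return math.sqrt(sq / trials)

one, two = net_force_rms_error(1), net_force_rms_error(2)
print(one, two, one / two)   # ratio should be close to sqrt(2)
```

Splitting the force over two actuators gives each a noise std of K·F/2, so the independent errors add in quadrature to K·F/√2, a √2 reduction over a single actuator, which matches the paper's one-hand versus two-hand comparison.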

  12. Rapid innovation diffusion in social networks.

    Science.gov (United States)

    Kreindler, Gabriel E; Young, H Peyton

    2014-07-22

    Social and technological innovations often spread through social networks as people respond to what their neighbors are doing. Previous research has identified specific network structures, such as local clustering, that promote rapid diffusion. Here we derive bounds that are independent of network structure and size, such that diffusion is fast whenever the payoff gain from the innovation is sufficiently high and the agents' responses are sufficiently noisy. We also provide a simple method for computing an upper bound on the expected time it takes for the innovation to become established in any finite network. For example, if agents choose log-linear responses to what their neighbors are doing, it takes on average less than 80 revision periods for the innovation to diffuse widely in any network, provided that the error rate is at least 5% and the payoff gain (relative to the status quo) is at least 150%. Qualitatively similar results hold for other smoothed best-response functions and populations that experience heterogeneous payoff shocks.
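The log-linear (logit) response dynamics can be sketched in a few lines; the ring network, payoff parameterization and noise level below are assumptions for illustration, not the authors' calibration:

```python
import math, random

def logit_adoption_time(n=100, payoff_gain=1.5, beta=3.0, seed=0, max_rounds=500):
    """Each revision period, every agent on a ring chooses the innovation (1)
    over the status quo (0) with log-linear (logit) probability based on the
    fraction of its two neighbours that have adopted. Returns the number of
    periods until at least 99% of agents have adopted."""
    rng = random.Random(seed)
    state = [0] * n
    for t in range(1, max_rounds + 1):
        new = []
        for i in range(n):
            frac = (state[(i - 1) % n] + state[(i + 1) % n]) / 2.0
            # assumed payoffs: innovation pays (1 + gain)*frac, status quo 1 - frac
            delta = (1.0 + payoff_gain) * frac - (1.0 - frac)
            p = 1.0 / (1.0 + math.exp(-beta * delta))
            new.append(1 if rng.random() < p else 0)
        state = new
        if sum(state) >= 0.99 * n:
            return t
        
    return max_rounds

print(logit_adoption_time())
```

With a sizeable payoff gain and non-negligible noise, adoption spreads quickly and the waiting time is short, in the spirit of the structure-independent bounds described in the record.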

  13. 18O diffusion through amorphous SiO2 and cristobalite

    OpenAIRE

    Rodríguez Viejo , Javier; Sibieude, F.; Clavaguera-Mora, M. T.; Monty, C.

    1993-01-01

    Secondary ion mass spectrometry was used to profile the diffusion of oxygen in polycrystalline β‐cristobalite and vitreous SiO2. The tracer concentration profiles of cristobalite are consistent with a model based on two mechanisms: bulk and short‐circuit diffusion. The profiles of partially crystallized samples containing vitreous SiO2 and β‐cristobalite were fitted using the sum of two complementary error functions and taking account of some interstitial‐network exchange. The bulk oxygen dif...

  14. Energetics of lateral eddy diffusion/advection: Part III. Energetics of horizontal and isopycnal diffusion/advection

    Institute of Scientific and Technical Information of China (English)

    HUANG Rui Xin

    2014-01-01

    Gravitational Potential Energy (GPE) change due to horizontal/isopycnal eddy diffusion and advection is examined. Horizontal/isopycnal eddy diffusion is conceptually separated into two steps: stirring and sub-scale diffusion. GPE changes associated with these two steps are analyzed. In addition, GPE changes due to stirring and subscale diffusion associated with horizontal/isopycnal advection in the Eulerian coordinates are analyzed. These formulae are applied to the SODA data for the world oceans. Our analysis indicates that horizontal/isopycnal advection in Eulerian coordinates can introduce large artificial diffusion in the model. It is shown that GPE source/sink in isopycnal coordinates is closely linked to physical property distribution, such as temperature, salinity and velocity. In comparison with z-coordinates, GPE source/sink due to stirring/cabbeling associated with isopycnal diffusion/advection is much smaller. Although isopycnal coordinates may be a better choice in terms of handling lateral diffusion, advection terms in the traditional Eulerian coordinates can produce an artificial source of GPE due to cabbeling associated with advection. Reducing such numerical errors remains a grand challenge.

  15. Field errors in hybrid insertion devices

    Energy Technology Data Exchange (ETDEWEB)

    Schlueter, R.D. [Lawrence Berkeley Lab., CA (United States)

    1995-02-01

    Hybrid magnet theory as applied to the error analyses used in the design of Advanced Light Source (ALS) insertion devices is reviewed. Sources of field errors in hybrid insertion devices are discussed.

  16. Medical errors: legal and ethical responses.

    Science.gov (United States)

    Dickens, B M

    2003-04-01

    Liability to err is a human, often unavoidable, characteristic. Errors can be classified as skill-based, rule-based, knowledge-based and other errors, such as of judgment. In law, a key distinction is between negligent and non-negligent errors. To describe a mistake as an error of clinical judgment is legally ambiguous, since an error that a physician might have made when acting with ordinary care and the professional skill the physician claims, is not deemed negligent in law. If errors prejudice patients' recovery from treatment and/or future care, in physical or psychological ways, it is legally and ethically required that they be informed of them in appropriate time. Senior colleagues, facility administrators and others such as medical licensing authorities should be informed of serious forms of error, so that preventive education and strategies can be designed. Errors for which clinicians may be legally liable may originate in systemically defective institutional administration.

  17. Experimental demonstration of topological error correction.

    Science.gov (United States)

    Yao, Xing-Can; Wang, Tian-Xiong; Chen, Hao-Ze; Gao, Wei-Bo; Fowler, Austin G; Raussendorf, Robert; Chen, Zeng-Bing; Liu, Nai-Le; Lu, Chao-Yang; Deng, You-Jin; Chen, Yu-Ao; Pan, Jian-Wei

    2012-02-22

    Scalable quantum computing can be achieved only if quantum bits are manipulated in a fault-tolerant fashion. Topological error correction--a method that combines topological quantum computation with quantum error correction--has the highest known tolerable error rate for a local architecture. The technique makes use of cluster states with topological properties and requires only nearest-neighbour interactions. Here we report the experimental demonstration of topological error correction with an eight-photon cluster state. We show that a correlation can be protected against a single error on any quantum bit. Also, when all quantum bits are simultaneously subjected to errors with equal probability, the effective error rate can be significantly reduced. Our work demonstrates the viability of topological error correction for fault-tolerant quantum information processing.

  18. Game Design Principles based on Human Error

    Directory of Open Access Journals (Sweden)

    Guilherme Zaffari

    2016-03-01

    Full Text Available This paper presents the results of the authors’ research on incorporating Human Error, through design principles, into video game design. In a general way, designers must consider Human Error factors throughout video game interface development; however, when related to its core design, adaptations are needed, since challenge is an important factor for fun, and under the perspective of Human Error, challenge can be considered a flaw in the system. The research utilized Human Error classifications, data triangulation via predictive human error analysis, and the expanded flow theory to allow the design of a set of principles in order to match the design of playful challenges with the principles of Human Error. From the results, it was possible to conclude that the application of Human Error in game design has a positive effect on player experience, allowing it to interact only with errors associated with the intended aesthetics of the game.

  19. Error in the Microbiology laboratory

    Directory of Open Access Journals (Sweden)

    Paolo Lanzafame

    2006-03-01

    Full Text Available Error management plays one of the most important roles in facility process improvement efforts. By detecting and reducing errors, quality and patient care improve. Error records were analysed over a period of 6 months, and a second register was used to study potential bias in the registrations. The percentage of errors detected was 0.17% (normalised, 1720 ppm), and errors in the pre-analytical phase made up the largest part. The highest error rate was generated by the peripheral centres, which send microbiology tests only occasionally and are not well acquainted with the specific procedures for collecting and storing biological samples. Errors in the management of laboratory supplies were reported too. The conclusion is that improving operator training, particularly concerning sample collection and storage, is very important, and that an effective system of error detection should be employed to determine the causes so that the best corrective action can be applied.

  20. An Error Analysis on TFL Learners’ Writings

    Directory of Open Access Journals (Sweden)

    Arif ÇERÇİ

    2016-12-01

    Full Text Available The main purpose of the present study is to identify and represent TFL learners’ writing errors through error analysis. All the learners started learning Turkish as a foreign language at the A1 (beginner) level and completed the process by obtaining the C1 (advanced) certificate at TÖMER, Gaziantep University. The data of the present study were collected from 14 students’ writings in the proficiency exams for each level. The data were grouped as grammatical, syntactic, spelling, punctuation, and word-choice errors. The ratio and categorical distributions of the identified errors were analyzed through error analysis. The data were also analyzed through statistical procedures in an effort to determine whether error types differ according to the levels of the students. The errors in this study are limited to linguistic and intralingual developmental errors.

  1. IDENTIFYING AN UNKNOWN SOURCE IN SPACE-FRACTIONAL DIFFUSION EQUATION

    Institute of Scientific and Technical Information of China (English)

    杨帆; 傅初黎; 李晓晓

    2014-01-01

    In this paper, we identify a space-dependent source for a fractional diffusion equation. This problem is ill-posed, i.e., the solution (if it exists) does not depend continuously on the data. The generalized Tikhonov regularization method is proposed to solve this problem. An a priori error estimate between the exact solution and its regularized approximation is obtained. Moreover, an a posteriori parameter choice rule is proposed and a stable error estimate is also obtained. Numerical examples are presented to illustrate the validity and effectiveness of this method.
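The Tikhonov idea in this abstract can be illustrated, for the simpler case of an ordinary ill-conditioned linear system, by standard (not generalized) Tikhonov regularization; the 2x2 matrix and data below are invented for illustration and are not from the paper:

```python
# Standard Tikhonov regularization on a tiny ill-conditioned 2x2 system:
# minimize ||A x - b||^2 + alpha * ||x||^2  =>  (A^T A + alpha I) x = A^T b.
# Matrix and data are illustrative only.

def solve2(M, r):
    """Solve a 2x2 linear system M x = r by Cramer's rule."""
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [(r[0] * M[1][1] - r[1] * M[0][1]) / det,
            (M[0][0] * r[1] - M[1][0] * r[0]) / det]

def tikhonov(A, b, alpha):
    # Normal equations with a regularization term added on the diagonal.
    AtA = [[sum(A[k][i] * A[k][j] for k in range(2)) for j in range(2)]
           for i in range(2)]
    Atb = [sum(A[k][i] * b[k] for k in range(2)) for i in range(2)]
    AtA[0][0] += alpha
    AtA[1][1] += alpha
    return solve2(AtA, Atb)

# Nearly singular operator: the unregularized solution is unstable,
# the regularized one stays close to the exact solution x = [1, 1].
A = [[1.0, 1.0], [1.0, 1.0001]]
b = [2.0, 2.0001]
x = tikhonov(A, b, alpha=1e-6)
print(x)
```

The regularization parameter alpha trades stability against bias, which is exactly why a priori and a posteriori parameter choice rules, as studied in the paper, matter.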

  2. Diffusive Wave Approximation to the Shallow Water Equations: Computational Approach

    KAUST Repository

    Collier, Nathan

    2011-05-14

    We discuss the use of time adaptivity applied to the one-dimensional diffusive wave approximation to the shallow water equations. A simple and computationally economical error estimator is discussed which enables time-step size adaptivity. This robust adaptive time discretization corrects the initial time step size to achieve a user-specified bound on the discretization error and allows time step size variations of several orders of magnitude. In particular, the one-dimensional results presented in this work feature a change of four orders of magnitude in the time step over the entire simulation.
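Error-driven time-step adaptivity of this general kind can be sketched with the textbook step-doubling controller (this is a generic illustration, not the paper's estimator; the test ODE and tolerance are invented):

```python
import math

# Adaptive forward-Euler time stepping by step doubling: estimate the local
# error as the difference between one step of size h and two steps of h/2,
# then grow/shrink h toward the tolerance. ODE y' = -y, y(0) = 1.

def f(t, y):
    return -y

def euler(t, y, h):
    return y + h * f(t, y)

def integrate(t_end, tol):
    t, y, h = 0.0, 1.0, 1e-4
    steps = 0
    while t < t_end:
        h = min(h, t_end - t)
        y_full = euler(t, y, h)                               # one step h
        y_half = euler(t + h / 2, euler(t, y, h / 2), h / 2)  # two steps h/2
        err = abs(y_half - y_full)                            # error estimate
        if err <= tol:                                        # accept
            t, y = t + h, y_half
            steps += 1
        # Standard controller for a first-order method (hence the square
        # root), with a 0.9 safety factor and growth capped at 2x.
        h *= min(2.0, max(0.1, 0.9 * math.sqrt(tol / max(err, 1e-16))))
    return y, steps

y, steps = integrate(1.0, 1e-6)
print(y, math.exp(-1.0), steps)   # y approximates e^{-1}
```

The controller starts from a deliberately small step and lets it grow until the estimated error meets the tolerance, mirroring the abstract's point that the initial step size is corrected automatically.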

  3. Error Propagation in a System Model

    Science.gov (United States)

    Schloegel, Kirk (Inventor); Bhatt, Devesh (Inventor); Oglesby, David V. (Inventor); Madl, Gabor (Inventor)

    2015-01-01

    Embodiments of the present subject matter can enable the analysis of signal value errors for system models. In an example, signal value errors can be propagated through the functional blocks of a system model to analyze possible effects as the signal value errors impact incident functional blocks. This propagation of the errors can be applicable to many models of computation including avionics models, synchronous data flow, and Kahn process networks.
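The core idea of propagating signal value errors through incident functional blocks can be sketched as reachability over a dataflow graph (a minimal illustration; the block names and wiring are invented, and this is not the patented method):

```python
from collections import deque

# Propagate a signal-value error through a dataflow model: every block
# downstream of a faulty block may be affected. Blocks/wiring are invented.
model = {                        # block -> list of downstream blocks
    "sensor":     ["filter"],
    "filter":     ["controller"],
    "controller": ["actuator", "logger"],
    "actuator":   [],
    "logger":     [],
}

def propagate(model, faulty):
    """Return the set of blocks whose inputs may be affected by 'faulty'."""
    affected, queue = set(), deque([faulty])
    while queue:
        block = queue.popleft()
        for downstream in model[block]:
            if downstream not in affected:
                affected.add(downstream)
                queue.append(downstream)
    return affected

print(propagate(model, "sensor"))   # every block downstream of the sensor
```

Real analyses of this kind also attach error *values* (not just reachability flags) and transfer functions per block, but the traversal skeleton is the same.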

  4. Experimental demonstration of topological error correction

    OpenAIRE

    2012-01-01

    Scalable quantum computing can only be achieved if qubits are manipulated fault-tolerantly. Topological error correction - a novel method which combines topological quantum computing and quantum error correction - possesses the highest known tolerable error rate for a local architecture. This scheme makes use of cluster states with topological properties and requires only nearest-neighbour interactions. Here we report the first experimental demonstration of topological error correction with a...

  5. Sampling error of observation impact statistics

    OpenAIRE

    Kim, Sung-Min; Kim, Hyun Mee

    2014-01-01

    An observation impact is an estimate of the forecast error reduction by assimilating observations with numerical model forecasts. This study compares the sampling errors of the observation impact statistics (OBIS) of July 2011 and January 2012 using two methods. One method uses the random error under the assumption that the samples are independent, and the other method uses the error with lag correlation under the assumption that the samples are correlated with each other. The OBIS are obtain...

  6. Acoustic Evidence for Phonologically Mismatched Speech Errors

    Science.gov (United States)

    Gormley, Andrea

    2015-01-01

    Speech errors are generally said to accommodate to their new phonological context. This accommodation has been validated by several transcription studies. The transcription methodology is not the best choice for detecting errors at this level, however, as this type of error can be difficult to perceive. This paper presents an acoustic analysis of…

  7. Medication errors: the importance of safe dispensing.

    NARCIS (Netherlands)

    Cheung, K.C.; Bouvy, M.L.; Smet, P.A.G.M. de

    2009-01-01

    1. Although rates of dispensing errors are generally low, further improvements in pharmacy distribution systems are still important because pharmacies dispense such high volumes of medications that even a low error rate can translate into a large number of errors. 2. From the perspective of pharmacy

  8. Understanding EFL Students' Errors in Writing

    Science.gov (United States)

    Phuket, Pimpisa Rattanadilok Na; Othman, Normah Binti

    2015-01-01

    Writing is the most difficult skill in English, so most EFL students tend to make errors in writing. In assisting the learners to successfully acquire writing skill, the analysis of errors and the understanding of their sources are necessary. This study attempts to explore the major sources of errors that occur in the writing of EFL students. It…

  9. Error Analysis of Quadrature Rules. Classroom Notes

    Science.gov (United States)

    Glaister, P.

    2004-01-01

    Approaches to the determination of the error in numerical quadrature rules are discussed and compared. This article considers the problem of the determination of errors in numerical quadrature rules, taking Simpson's rule as the principal example. It suggests an approach based on truncation error analysis of numerical schemes for differential…

  10. Error Analysis in Mathematics. Technical Report #1012

    Science.gov (United States)

    Lai, Cheng-Fei

    2012-01-01

    Error analysis is a method commonly used to identify the cause of student errors when they make consistent mistakes. It is a process of reviewing a student's work and then looking for patterns of misunderstanding. Errors in mathematics can be factual, procedural, or conceptual, and may occur for a number of reasons. Reasons why students make…

  11. Error Analysis and the EFL Classroom Teaching

    Science.gov (United States)

    Xie, Fang; Jiang, Xue-mei

    2007-01-01

    This paper makes a study of error analysis and its implementation in EFL (English as a Foreign Language) classroom teaching. It starts by giving a systematic review of the concepts and theories concerning EA (Error Analysis), and the various reasons causing errors are comprehensively explored. The author proposes that teachers should employ…

  12. Human Error Mechanisms in Complex Work Environments

    DEFF Research Database (Denmark)

    Rasmussen, Jens

    1988-01-01

    will account for most of the action errors observed. In addition, error mechanisms appear to be intimately related to the development of high skill and know-how in a complex work context. This relationship between errors and human adaptation is discussed in detail for individuals and organisations...

  13. Errors and Uncertainty in Physics Measurement.

    Science.gov (United States)

    Blasiak, Wladyslaw

    1983-01-01

    Classifies errors as either systematic or blunder and uncertainties as either systematic or random. Discusses use of error/uncertainty analysis in direct/indirect measurement, describing the process of planning experiments to ensure lowest possible uncertainty. Also considers appropriate level of error analysis for high school physics students'…

  14. Measurement error in a single regressor

    NARCIS (Netherlands)

    Meijer, H.J.; Wansbeek, T.J.

    2000-01-01

    For the setting of multiple regression with measurement error in a single regressor, we present some very simple formulas to assess the result that one may expect when correcting for measurement error. It is shown where the corrected estimated regression coefficients and the error variance may lie,

  15. Jonas Olson's Evidence for Moral Error Theory

    NARCIS (Netherlands)

    Evers, Daan

    2016-01-01

    Jonas Olson defends a moral error theory in (2014). I first argue that Olson is not justified in believing the error theory as opposed to moral nonnaturalism in his own opinion. I then argue that Olson is not justified in believing the error theory as opposed to moral contextualism either (although

  16. AWARENESS OF DENTISTS ABOUT MEDICATION ERRORS

    Directory of Open Access Journals (Sweden)

    Sangeetha

    2014-01-01

    Full Text Available OBJECTIVE: To assess the awareness of medication errors among dentists. METHODS: Medication errors are the most common single preventable cause of adverse events in medication practice. We conducted a survey with a sample of sixty dentists: 30 general dentists (BDS) and 30 dental specialists (MDS). Questionnaires with questions regarding medication errors were distributed to them, and they were asked to fill them in. Data were collected and subjected to statistical analysis using the Fisher exact and Chi-square tests. RESULTS: In our study, sixty percent of general dentists and 76.7% of dental specialists were aware of the components of medication error. Overall, 66.7% of the respondents in each group marked wrong duration as the dispensing error. Almost thirty percent of the general dentists and 56.7% of the dental specialists felt that technologic advances could accomplish diverse tasks in reducing medication errors. This was of suggestive statistical significance, with a P value of 0.069. CONCLUSION: Medication errors compromise patient confidence in the health-care system and increase health-care costs. Overall, the dental specialists were more knowledgeable than the general dentists about medication errors. KEY WORDS: Medication errors; Dosing error; Prevention of errors; Adverse drug events; Prescribing errors; Medical errors.

  17. Error-Compensated Integrate and Hold

    Science.gov (United States)

    Matlin, M.

    1984-01-01

    Differencing circuit cancels error caused by switching-transistor capacitance. In integrate-and-hold circuit using JFET switch, gate-to-source capacitance causes error in output voltage. Differential connection cancels out error. Applications in systems where very low voltages are sampled or many integrate-and-hold cycles occur before circuit is reset.

  19. Human Errors and Bridge Management Systems

    DEFF Research Database (Denmark)

    Thoft-Christensen, Palle; Nowak, A. S.

    Human errors are divided into two groups. The first group contains human errors which affect the reliability directly. The second group contains human errors which will not directly affect the reliability of the structure. The methodology used to estimate so-called reliability distributions on ba...

  20. The Problematic of Second Language Errors

    Science.gov (United States)

    Hamid, M. Obaidul; Doan, Linh Dieu

    2014-01-01

    The significance of errors in explicating Second Language Acquisition (SLA) processes led to the growth of error analysis in the 1970s which has since maintained its prominence in English as a second/foreign language (L2) research. However, one problem with this research is errors are often taken for granted, without problematising them and their…

  1. Error estimate for Doo-Sabin surfaces

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    Based on a general bound on the distance error between a uniform Doo-Sabin surface and its control polyhedron, an exponential error bound independent of the subdivision process is presented in this paper. Using the exponential bound, one can predict the depth of recursive subdivision of the Doo-Sabin surface within any user-specified error tolerance.
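An exponential, subdivision-independent bound of the general form err(k) ≤ M·r^k (with 0 < r < 1) lets one predict the required recursion depth in closed form, which is the abstract's point. The constants below are invented for illustration, not the paper's actual bound:

```python
import math

# Given an error bound err(k) <= M * r**k after k subdivision steps
# (0 < r < 1), the depth needed for a tolerance eps follows directly:
#     M * r**k <= eps   =>   k >= log(eps / M) / log(r).
# M and r here are illustrative constants only.

def required_depth(M, r, eps):
    if M <= eps:
        return 0                # control polyhedron is already close enough
    return math.ceil(math.log(eps / M) / math.log(r))

k = required_depth(M=1.0, r=0.25, eps=1e-6)
print(k)   # smallest k with 0.25**k <= 1e-6
assert 1.0 * 0.25 ** k <= 1e-6 < 1.0 * 0.25 ** (k - 1)
```

Because the bound is independent of the subdivision process, the depth can be chosen once, up front, for any user-specified tolerance.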

  3. Preventing statistical errors in scientific journals.

    NARCIS (Netherlands)

    Nuijten, M.B.

    2016-01-01

    There is evidence for a high prevalence of statistical reporting errors in psychology and other scientific fields. These errors display a systematic preference for statistically significant results, distorting the scientific literature. There are several possible causes for this systematic error pre

  4. Fractal model of anomalous diffusion.

    Science.gov (United States)

    Gmachowski, Lech

    2015-12-01

    An equation of motion is derived from fractal analysis of the Brownian particle trajectory in which the asymptotic fractal dimension of the trajectory has a required value. The formula makes it possible to calculate the time dependence of the mean square displacement for both short and long periods when the molecule diffuses anomalously. The anomalous diffusion which occurs after long periods is characterized by two variables, the transport coefficient and the anomalous diffusion exponent. An explicit formula is derived for the transport coefficient, which is related to the diffusion constant, as dependent on the Brownian step time and the anomalous diffusion exponent. The model makes it possible to deduce anomalous diffusion properties from experimental data obtained even for short time periods and to estimate the transport coefficient in systems for which the diffusion behavior has been investigated. The results were confirmed for both sub- and super-diffusion.
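The two variables named here, a transport coefficient K and an anomalous diffusion exponent alpha in MSD(t) = K·t^alpha, are commonly recovered from data by a log-log least-squares fit. The sketch below uses synthetic, exact power-law data for illustration and is not the paper's model:

```python
import math

# Recover K and alpha from mean-square-displacement data, assuming
# MSD(t) = K * t**alpha. Data below are synthetic, for illustration only.

def fit_power_law(times, msd):
    """Least-squares line in log-log space: log MSD = log K + alpha log t."""
    xs = [math.log(t) for t in times]
    ys = [math.log(m) for m in msd]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    alpha = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) /
             sum((x - xbar) ** 2 for x in xs))
    K = math.exp(ybar - alpha * xbar)
    return K, alpha

times = [0.1 * i for i in range(1, 50)]
msd = [2.0 * t ** 0.7 for t in times]     # sub-diffusion: alpha = 0.7 < 1
K, alpha = fit_power_law(times, msd)
print(K, alpha)    # recovers K = 2.0, alpha = 0.7
```

alpha < 1 indicates sub-diffusion and alpha > 1 super-diffusion; alpha = 1 recovers ordinary Brownian motion with K proportional to the diffusion constant.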

  5. Diffusion in porous crystalline materials

    NARCIS (Netherlands)

    Krishna, R.

    2012-01-01

    The design and development of many separation and catalytic process technologies require a proper quantitative description of diffusion of mixtures of guest molecules within porous crystalline materials. This tutorial review presents a unified, phenomenological description of diffusion inside meso-

  6. Reflective Inverse Diffusion

    Directory of Open Access Journals (Sweden)

    Kenneth Burgi

    2016-11-01

    Full Text Available Phase front modulation was previously used to refocus light after transmission through scattering media. This process has been adapted here to work in reflection. A liquid crystal spatial light modulator is used to conjugate the phase scattering properties of diffuse reflectors to produce a converging phase front just after reflection. The resultant focused spot had intensity enhancement values between 13 and 122 depending on the type of reflector. The intensity enhancement of more specular materials was greater in the specular region, while diffuse reflector materials achieved a greater enhancement in non-specular regions, facilitating non-mechanical steering of the focused spot. Scalar wave optics modeling corroborates the experimental results.

  7. Diffused Religion and Prayer

    Directory of Open Access Journals (Sweden)

    Roberto Cipriani

    2011-06-01

    Full Text Available It is quite likely that the origins of prayer are to be found in ancient mourning and bereavement rites. Primeval ritual prayer was codified and handed down socially to become a deep-rooted feature of people’s cultural behavior, so much so, that it may surface again several years later, in the face of death, danger, need, even in the case of relapse from faith and religious practice. Modes of prayer depend on religious experience, on relations between personal prayer and political action, between prayer and forgiveness, and between prayer and approaches to religions. Various forms of prayer exist, from the covert-hidden to the overt-manifest kind. How can they be investigated? How can one, for instance, explore mental prayer? These issues regard the canon of diffused religion and, therefore, of diffused prayer.

  8. Galactic Diffuse Polarized Emission

    Indian Academy of Sciences (India)

    Ettore Carretti

    2011-12-01

    Diffuse polarized emission by synchrotron is a key tool to investigate magnetic fields in the Milky Way, particularly the ordered component of the large-scale structure. Key observables are the synchrotron emission itself and the rotation measure (RM) obtained via Faraday rotation. In this paper the main properties of the radio polarized diffuse emission and its use to investigate magnetic fields will be reviewed, along with our current understanding of the galactic magnetic field and the data sets available. We will then focus on the future perspective, discussing RM-synthesis – the new powerful instrument devised to unlock the information encoded in such an emission – and the surveys currently in progress, like S-PASS and GMIMS.

  9. Anomalous diffusion of epicentres

    CERN Document Server

    Sotolongo-Costa, Oscar; Posadas, A; Luzon, F

    2007-01-01

    The classification of earthquakes into main shocks and aftershocks by a method recently proposed by M. Baiesi and M. Paczuski allows the generation of a complex network composed of clusters that group the most correlated events. The spatial distribution of epicentres inside these structures, corresponding to the catalogue of earthquakes in the eastern region of Cuba, shows anomalous anti-diffusive behaviour, evidencing the attractive nature of the main shock and the possible description in terms of fractional kinetics.

  10. [The diffusion of knowledge].

    Science.gov (United States)

    Ramiro-H, Manuel; Cruz-A, Enrique

    2016-01-01

    Between August 19 and 21, the Feria del Libro de las Ciencias de la Salud (Healthcare Book Fair) took place in the Palacio de Medicina in Mexico City. Archives of Medical Research, Revista Médica del IMSS, and Saber IMSS, three of the main instruments of knowledge diffusion of the Instituto Mexicano del Seguro Social, took part in this book fair, which was organized by the Facultad de Medicina of UNAM.

  11. Diffusing Best Practices

    DEFF Research Database (Denmark)

    Pries-Heje, Jan; Baskerville, Richard

    2014-01-01

    Both the practice and the research literature on information systems attach great value to the identification and dissemination of information on “best practices”. In the philosophy of science, this type of knowledge is regarded as technological knowledge because it becomes manifest in successful practice. The study context is a design case in which an organization desires to diffuse its best practices across different groups; the design goal is embodied in organizational mechanisms to achieve this diffusion. The study used Theory of Planned Behavior (TPB) as a kernel theory. Two factors (… that the behavior will be effective) were especially critical if the source context of the best practice is qualitatively different from the target context into which the organization is seeking to diffuse the best practice. The artifacts…

  12. Quantum error-correction failure distributions: Comparison of coherent and stochastic error models

    Science.gov (United States)

    Barnes, Jeff P.; Trout, Colin J.; Lucarelli, Dennis; Clader, B. D.

    2017-06-01

    We compare failure distributions of quantum error correction circuits for stochastic errors and coherent errors. We utilize a fully coherent simulation of a fault-tolerant quantum error correcting circuit for d = 3 Steane and surface codes. We find that the output distributions are markedly different for the two error models, showing that no simple mapping between the two error models exists. Coherent errors create very broad and heavy-tailed failure distributions. This suggests that they are susceptible to outlier events and that mean statistics, such as pseudothreshold estimates, may not provide the key figure of merit. This provides further statistical insight into why coherent errors can be so harmful for quantum error correction. These output probability distributions may also provide a useful metric that can be utilized when optimizing quantum error correcting codes and decoding procedures for purely coherent errors.

  13. Correlated measurement error hampers association network inference.

    Science.gov (United States)

    Kaduk, Mateusz; Hoefsloot, Huub C J; Vis, Daniel J; Reijmers, Theo; van der Greef, Jan; Smilde, Age K; Hendriks, Margriet M W B

    2014-09-01

    Modern chromatography-based metabolomics measurements generate large amounts of data in the form of abundances of metabolites. An increasingly popular way of representing and analyzing such data is by means of association networks. Ideally, such a network can be interpreted in terms of the underlying biology. A property of chromatography-based metabolomics data is that the measurement error structure is complex: apart from the usual (random) instrumental error there is also correlated measurement error. This is intrinsic to the way the samples are prepared and the analyses are performed and cannot be avoided. The impact of correlated measurement errors on (partial) correlation networks can be large and is not always predictable. The interplay between relative amounts of uncorrelated measurement error, correlated measurement error and biological variation defines this impact. Using chromatography-based time-resolved lipidomics data obtained from a human intervention study we show how partial correlation based association networks are influenced by correlated measurement error. We show how the effect of correlated measurement error on partial correlations is different for direct and indirect associations. For direct associations the correlated measurement error usually has no negative effect on the results, while for indirect associations, depending on the relative size of the correlated measurement error, results can become unreliable. The aim of this paper is to generate awareness of the existence of correlated measurement errors and their influence on association networks. Time series lipidomics data is used for this purpose, as it makes it possible to visually distinguish the correlated measurement error from a biological response. Underestimating the phenomenon of correlated measurement error will result in the suggestion of biologically meaningful results that in reality rest solely on complicated error structures. 
Using proper experimental designs that allow
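The central effect described here, a shared measurement error inducing a spurious association between biologically independent variables, is easy to demonstrate on synthetic data (this illustration is not the study's lipidomics data):

```python
import math
import random

# Two independent "metabolites" measured with a shared per-sample error
# (e.g. from sample preparation): the shared error inflates their
# correlation even though the underlying biology is uncorrelated.

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

rng = random.Random(42)
n = 2000
a = [rng.gauss(0, 1) for _ in range(n)]      # independent biological signals
b = [rng.gauss(0, 1) for _ in range(n)]
shared = [rng.gauss(0, 1) for _ in range(n)] # correlated measurement error

r_clean = pearson(a, b)
r_shared = pearson([x + shared[i] for i, x in enumerate(a)],
                   [y + shared[i] for i, y in enumerate(b)])
print(r_clean, r_shared)   # near 0 vs. roughly 0.5
```

With unit variances throughout, the expected inflated correlation is var(shared)/(var(signal)+var(shared)) = 0.5, which is why such error structures can masquerade as biologically meaningful associations.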

  14. Sodium diffusion in boroaluminosilicate glasses

    DEFF Research Database (Denmark)

    Smedskjaer, Morten M.; Zheng, Qiuju; Mauro, John C.

    2011-01-01

    diffusivity are explored in terms of the structural role of ferric and ferrous ions. By comparing the results obtained by the three approaches, we observe that both the tracer Na diffusion and the Na-K interdiffusion are significantly faster than the Na inward diffusion. The origin of this discrepancy could...

  15. A method for optimizing the cosine response of solar UV diffusers

    Science.gov (United States)

    Pulli, Tomi; Kärhä, Petri; Ikonen, Erkki

    2013-07-01

    Instruments measuring global solar ultraviolet (UV) irradiance at the surface of the Earth need to collect radiation from the entire hemisphere. Entrance optics with angular response as close as possible to the ideal cosine response are necessary to perform these measurements accurately. Typically, the cosine response is obtained using a transmitting diffuser. We have developed an efficient method based on a Monte Carlo algorithm to simulate radiation transport in the solar UV diffuser assembly. The algorithm takes into account propagation, absorption, and scattering of the radiation inside the diffuser material. The effects of the inner sidewalls of the diffuser housing, the shadow ring, and the protective weather dome are also accounted for. The software implementation of the algorithm is highly optimized: a simulation of 10^9 photons takes approximately 10 to 15 min to complete on a typical high-end PC. The results of the simulations agree well with the measured angular responses, indicating that the algorithm can be used to guide the diffuser design process. Cost savings can be obtained when simulations are carried out before diffuser fabrication as compared to a purely trial-and-error-based diffuser optimization. The algorithm was used to optimize two types of detectors, one with a planar diffuser and the other with a spherically shaped diffuser. The integrated cosine errors—which indicate the relative measurement error caused by the nonideal angular response under isotropic sky radiance—of these two detectors were calculated to be f2 = 1.4% and 0.66%, respectively.

  16. Model error estimation in ensemble data assimilation

    Directory of Open Access Journals (Sweden)

    S. Gillijns

    2007-01-01

    Full Text Available A new methodology is proposed to estimate and account for systematic model error in linear filtering as well as in nonlinear ensemble based filtering. Our results extend the work of Dee and Todling (2000) on constant bias errors to time-varying model errors. In contrast to existing methodologies, the new filter can also deal with the case where no dynamical model for the systematic error is available. In the latter case, the applicability is limited by a matrix rank condition which has to be satisfied in order for the filter to exist. The performance of the filter developed in this paper is limited by the availability and the accuracy of observations and by the variance of the stochastic model error component. The effect of these aspects on the estimation accuracy is investigated in several numerical experiments using the Lorenz (1996) model. Experimental results indicate that the availability of a dynamical model for the systematic error significantly reduces the variance of the model error estimates, but has only minor effect on the estimates of the system state. The filter is able to estimate additive model error of any type, provided that the rank condition is satisfied and that the stochastic errors and measurement errors are significantly smaller than the systematic errors. The results of this study are encouraging. However, it remains to be seen how the filter performs in more realistic applications.

  17. Analysis of errors in forensic science

    Directory of Open Access Journals (Sweden)

    Mingxiao Du

    2017-01-01

    Full Text Available Reliability of expert testimony is one of the foundations of judicial justice. Both expert bias and scientific errors affect the reliability of expert opinion, which in turn affects the trustworthiness of the findings of fact in legal proceedings. Expert bias can be eliminated by replacing experts; however, it may be more difficult to eliminate scientific errors. From the perspective of statistics, errors in the operation of forensic science include systematic errors, random errors, and gross errors. In general, process repetition and abiding by the standard ISO/IEC 17025:2005 (general requirements for the competence of testing and calibration laboratories) during operation are common measures used to reduce errors that originate from experts and equipment, respectively. For example, to reduce gross errors, the laboratory can ensure that a test is repeated several times by different experts. In applying forensic principles and methods, the Federal Rules of Evidence 702 mandate that judges consider factors such as peer review to ensure the reliability of the expert testimony. As the scientific principles and methods may not undergo professional review by specialists in a certain field, peer review serves as an exclusive standard. This study also examines two types of statistical errors. As false-positive errors involve a higher possibility of unfair decision-making, they should receive more attention than false-negative errors.

  18. Errors in quantum tomography: diagnosing systematic versus statistical errors

    Science.gov (United States)

    Langford, Nathan K.

    2013-03-01

    A prime goal of quantum tomography is to provide quantitatively rigorous characterization of quantum systems, be they states, processes or measurements, particularly for the purposes of trouble-shooting and benchmarking experiments in quantum information science. A range of techniques exist to enable the calculation of errors, such as Monte-Carlo simulations, but their quantitative value is arguably fundamentally flawed without an equally rigorous way of authenticating the quality of a reconstruction to ensure it provides a reasonable representation of the data, given the known noise sources. A key motivation for developing such a tool is to enable experimentalists to rigorously diagnose the presence of technical noise in their tomographic data. In this work, I explore the performance of the chi-squared goodness-of-fit test statistic as a measure of reconstruction quality. I show that its behaviour deviates noticeably from expectations for states lying near the boundaries of physical state space, severely undermining its usefulness as a quantitative tool precisely in the region which is of most interest in quantum information processing tasks. I suggest a simple, heuristic approach to compensate for these effects and present numerical simulations showing that this approach provides substantially improved performance.
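The goodness-of-fit statistic this abstract studies is, in its elementary binned form, simple to state; the counts below are invented for illustration and have nothing to do with any particular tomography experiment:

```python
# Elementary binned chi-squared goodness-of-fit statistic:
#     chi2 = sum_i (observed_i - expected_i)**2 / expected_i.
# For k bins and a model with no fitted parameters, chi2 is compared
# against a chi-squared distribution with k - 1 degrees of freedom.
# The counts below are invented for illustration.

def chi_squared(observed, expected):
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

observed = [48, 35, 17]      # measurement outcomes binned into 3 cells
expected = [50, 30, 20]      # counts predicted by the reconstructed model
chi2 = chi_squared(observed, expected)
dof = len(observed) - 1
print(chi2, dof)             # chi2 ≈ 1.36 with 2 dof: consistent with the model
```

A chi2 far above the degrees of freedom flags a poor reconstruction or unmodeled technical noise; the paper's point is that this diagnostic misbehaves near the boundary of physical state space.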

  19. Impact of Measurement Error on Synchrophasor Applications

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Yilu [Univ. of Tennessee, Knoxville, TN (United States); Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Gracia, Jose R. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Ewing, Paul D. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Zhao, Jiecheng [Univ. of Tennessee, Knoxville, TN (United States); Tan, Jin [Univ. of Tennessee, Knoxville, TN (United States); Wu, Ling [Univ. of Tennessee, Knoxville, TN (United States); Zhan, Lingwei [Univ. of Tennessee, Knoxville, TN (United States)

    2015-07-01

    Phasor measurement units (PMUs), a type of synchrophasor, are powerful diagnostic tools that can help avert catastrophic failures in the power grid. Because of this, PMU measurement errors are particularly worrisome. This report examines the internal and external factors contributing to PMU phase angle and frequency measurement errors and gives a reasonable explanation for them. It also analyzes the impact of those measurement errors on several synchrophasor applications: event location detection, oscillation detection, islanding detection, and dynamic line rating. The primary finding is that dynamic line rating is more likely to be influenced by measurement error. Other findings include the possibility of reporting nonoscillatory activity as an oscillation as the result of error, failing to detect oscillations submerged by error, and the unlikely impact of error on event location and islanding detection.

  20. Adjoint Error Estimation for Linear Advection

    Energy Technology Data Exchange (ETDEWEB)

    Connors, J M; Banks, J W; Hittinger, J A; Woodward, C S

    2011-03-30

    An a posteriori error formula is described when a statistical measurement of the solution to a hyperbolic conservation law in 1D is estimated by finite volume approximations. This is accomplished using adjoint error estimation. In contrast to previously studied methods, the adjoint problem is divorced from the finite volume method used to approximate the forward solution variables. An exact error formula and computable error estimate are derived based on an abstractly defined approximation of the adjoint solution. This framework allows the error to be computed to an arbitrary accuracy given a sufficiently well resolved approximation of the adjoint solution. The accuracy of the computable error estimate provably satisfies an a priori error bound for sufficiently smooth solutions of the forward and adjoint problems. The theory does not currently account for discontinuities. Computational examples are provided that show support of the theory for smooth solutions. The application to problems with discontinuities is also investigated computationally.