WorldWideScience

Sample records for monochrome error diffusion

  1. Color extended visual cryptography using error diffusion.

    Science.gov (United States)

    Kang, InKoo; Arce, Gonzalo R; Lee, Heung-Kyu

    2011-01-01

    Color visual cryptography (VC) encrypts a color secret message into n color halftone image shares. Previous methods in the literature show good results for black-and-white or grayscale VC schemes; however, they cannot be applied directly to color shares because of the different color structures involved. Some methods for color visual cryptography are unsatisfactory in that they produce either meaningless shares or meaningful shares of low visual quality, inviting suspicion of encryption. This paper introduces the concept of visual information pixel (VIP) synchronization and error diffusion to attain a color visual cryptography encryption method that produces meaningful color shares with high visual quality. VIP synchronization retains the positions of pixels carrying visual information of the original images throughout the color channels, and error diffusion generates shares pleasant to the human eye. Comparisons with previous approaches show the superior performance of the new method.
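
    The method above builds on error diffusion as its halftoning primitive. As background only, here is a minimal sketch of standard Floyd-Steinberg error diffusion on a single grayscale channel; the paper's VIP-synchronized, multi-channel variant adds constraints on where message-carrying pixels may fall, which are not shown, and the names and the 0.5 threshold are illustrative.

```python
import numpy as np

def floyd_steinberg(img):
    """Binarize a grayscale image (values in [0, 1]) by error diffusion.

    Each pixel is thresholded and its quantization error is pushed to the
    unprocessed neighbours with the classic 7/16, 3/16, 5/16, 1/16 weights.
    """
    out = img.astype(float).copy()
    h, w = out.shape
    for y in range(h):
        for x in range(w):
            old = out[y, x]
            new = 1.0 if old >= 0.5 else 0.0   # hard threshold at mid-gray
            out[y, x] = new
            err = old - new                    # quantization error to diffuse
            if x + 1 < w:
                out[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    out[y + 1, x - 1] += err * 3 / 16
                out[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    out[y + 1, x + 1] += err * 1 / 16
    return out
```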

  2. Modulated error diffusion CGHs for neural nets

    Science.gov (United States)

    Vermeulen, Pieter J. E.; Casasent, David P.

    1990-05-01

    New modulated error diffusion CGHs (computer-generated holograms) for optical computing are considered. Specific attention is given to their use in optical matrix-vector, associative processor, neural net and optical interconnection architectures. We consider lensless CGH systems (many CGHs use an external Fourier transform (FT) lens), the Fresnel sampling requirements, the effects of finite CGH apertures (sample-and-hold inputs), dot size correction (for laser recorders), and new applications for this novel encoding method, which devotes particular attention to quantization noise effects.

  3. Mirror monochromator

    Energy Technology Data Exchange (ETDEWEB)

    Mankos, Marian [Electron Optica, Inc., Palo Alto, CA (United States); Shadman, Khashayar [Electron Optica, Inc., Palo Alto, CA (United States)

    2014-12-02

    In this SBIR project, Electron Optica, Inc. (EOI) is developing a mirror electron monochromator (MirrorChrom) attachment to new and retrofitted electron microscopes (EMs) for improving the energy resolution of the EM from the characteristic range of 0.2-0.5 eV to the range of 10-50 meV. This improvement will enhance the characterization of materials by imaging and spectroscopy. In particular, the monochromator will refine the energy spectra characterizing materials, as obtained from transmission EMs (TEMs) fitted with electron spectrometers, and it will increase the spatial resolution of the images of materials taken with scanning EMs (SEMs) operated at low voltages. EOI’s MirrorChrom technology utilizes a magnetic prism to simultaneously deflect the electron beam off the axis of the microscope column by 90° and disperse the electrons in proportion to their energies into a module with an electron mirror and a knife-edge. The knife-edge cuts off the tails of the energy distribution to reduce the energy spread of the electrons that are reflected, and subsequently deflected, back into the microscope column. The knife-edge is less prone to contamination, and thereby charging, than the conventional slits used in existing monochromators, which improves the reliability and stability of the module. The overall design of the MirrorChrom exploits the symmetry inherent in reversing the electron trajectory in order to maintain the beam brightness – a parameter that impacts how well the electron beam can be focused downstream onto a sample. During phase I, EOI drafted a set of candidate monochromator architectures and evaluated the trade-offs between energy resolution and beam current to achieve the optimum design for three particular applications with market potential: increasing the spatial resolution of low voltage SEMs, increasing the energy resolution of low voltage TEMs (beam energy of 5-20 keV), and increasing the energy resolution of conventional TEMs (beam

  4. Error-diffusion binarization for joint transform correlators

    Science.gov (United States)

    Inbar, Hanni; Mendlovic, David; Marom, Emanuel

    1993-02-01

    A normalized, nonlinearly scaled binary joint transform image correlator (JTC) based on a 1D error-diffusion binarization method has been studied. The behavior of the error-diffusion method is compared with hard-clipping, the most widely used binarization method in binary JTC approaches, using a single spatial light modulator. Computer simulations indicate that the error-diffusion method is advantageous for producing the binarized power spectrum interference pattern in JTC configurations, leading to better definition of the correlation location. The error-diffusion binary JTC exhibits autocorrelation characteristics that are superior to those of the hard-clipping binary JTC over the whole nonlinear scaling range of the Fourier-transform interference intensity, for all noise levels considered.
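
    As a rough illustration of the 1D binarization step described above, the sketch below applies one-dimensional error diffusion along each row of an already normalized joint power spectrum. It is a generic 1D pass, not the authors' exact nonlinear scaling scheme; the array layout and the 0.5 threshold are assumptions.

```python
import numpy as np

def binarize_rows_1d(spectrum):
    """Binarize each row of a normalized power spectrum (values in [0, 1]) with
    1D error diffusion: the quantization error of each sample is carried
    forward to the next sample in the same row."""
    out = spectrum.astype(float).copy()
    for row in out:
        carry = 0.0
        for i in range(row.size):
            val = row[i] + carry
            bit = 1.0 if val >= 0.5 else 0.0
            carry = val - bit        # error propagated to the next sample
            row[i] = bit
    return out
```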

  5. Error diffusion applied to the manipulation of liquid-crystal display subpixels

    Science.gov (United States)

    Dallas, William J.; Fan, Jiahua; Roehrig, Hans; Krupinski, Elizabeth A.

    2004-05-01

    Flat-panel displays based on liquid crystal technology are becoming widely used in the medical imaging arena. Despite the impressive capabilities of presently existing panels, some medical images push their boundaries. We are working with mammograms that contain up to 4800 x 6400 14-bit pixels; stated differently, these images contain 30 mega-pixels each. In the standard environment for film viewing, the mammograms are hung four-up, i.e. four images are located side by side. Because many of the LCD panels used for monochrome display of medical images are based on color models, the pixels of the panels are divided into sub-pixels. These sub-pixels vary in their number and degree of independence. Manufacturers have used both spatial and temporal modulation of these sub-pixels to improve the quality of images presented by the monitors. In this presentation we show how the sub-pixel structure of some present and future displays can be used to attain higher spatial resolution than the full-pixel resolution specification would suggest, while also providing increased contrast resolution. The error diffusion methods we discuss provide a natural way of controlling sub-pixels and implementing trade-offs. In smooth regions of the image, contrast resolution can be maximized; in rapidly varying regions, spatial resolution can be favored.

  6. Optimized universal color palette design for error diffusion

    Science.gov (United States)

    Kolpatzik, Bernd W.; Bouman, Charles A.

    1995-04-01

    Currently, many low-cost computers can only display a palette of 256 simultaneous colors. However, this palette is usually selectable from a very large gamut of available colors. For many applications, this limited palette size imposes a significant constraint on the achievable image quality. We propose a method for designing an optimized universal color palette for use with halftoning methods such as error diffusion. The advantage of a universal color palette is that it is fixed and therefore allows multiple images to be displayed simultaneously. To design the palette, we employ a new vector quantization method known as sequential scalar quantization (SSQ) to allocate the colors in a visually uniform color space. The SSQ method achieves near-optimal allocation, but may be efficiently implemented using a series of lookup tables. When used with error diffusion, SSQ adds little computational overhead and may be used to minimize the visual error in an opponent color coordinate system. We compare the performance of the optimized algorithm to standard error diffusion by evaluating a visually weighted mean-squared-error measure. Our metric is based on the color difference in CIE L*a*b*, but also accounts for the lowpass characteristic of human contrast sensitivity.
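
    The sketch below shows the generic form of palette-constrained error diffusion that the paper's method builds on: each pixel is mapped to its nearest palette entry and the color error vector is diffused to the neighbours. The SSQ palette construction, the lookup-table implementation and the opponent-color weighting are not reproduced; the plain Euclidean nearest-color search is a stand-in.

```python
import numpy as np

# Floyd-Steinberg neighbour offsets and weights (dy, dx, weight).
FS_WEIGHTS = ((0, 1, 7 / 16), (1, -1, 3 / 16), (1, 0, 5 / 16), (1, 1, 1 / 16))

def palette_error_diffusion(img, palette):
    """Quantize an RGB float image (H x W x 3, values in [0, 1]) to a fixed
    palette (K x 3 array) with vector error diffusion; returns palette indices."""
    work = img.astype(float).copy()
    h, w, _ = work.shape
    idx = np.zeros((h, w), dtype=int)
    for y in range(h):
        for x in range(w):
            pix = work[y, x]
            k = int(np.argmin(((palette - pix) ** 2).sum(axis=1)))  # nearest entry
            idx[y, x] = k
            err = pix - palette[k]                                   # color error vector
            for dy, dx, wt in FS_WEIGHTS:
                if 0 <= y + dy < h and 0 <= x + dx < w:
                    work[y + dy, x + dx] += err * wt
    return idx
```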

  7. Variable angle asymmetric cut monochromator

    International Nuclear Information System (INIS)

    Smither, R.K.; Fernandez, P.B.

    1993-09-01

    A variable incident angle, asymmetric cut, double crystal monochromator was tested for use on beamlines at the Advanced Photon Source (APS). For both undulator and wiggler beams the monochromator can expand the footprint of the beam on the crystal surfaces to 50 times the area of the incident beam; this will reduce the slope errors by a factor of 2500. The asymmetric cut allows one to increase the acceptance angle for incident radiation and obtain a better match to the opening angle of the incident beam. This can increase the intensity of the diffracted beam by a factor of 2 to 5 and can make the beam more monochromatic as well. The monochromator consists of two matched, asymmetric cut (18 degrees), silicon crystals mounted so that they can be rotated about three independent axes. Rotation around the first axis controls the Bragg angle. The second rotation axis is perpendicular to the diffraction planes and controls the increase of the area of the footprint of the beam on the crystal surface. Rotation around the third axis controls the angle between the surface of the crystal and the wider, horizontal axis of the beam and can make the footprint a rectangle with a minimum length for this area. The asymmetric cut is 18 degrees for the matched pair of crystals, which allows one to expand the footprint area by a factor of 50 for Bragg angles up to 19.15 degrees (6 keV for Si[111] planes). This monochromator, with proper cooling, will be useful for analyzing the high intensity x-ray beams produced by both undulators and wigglers at the APS.
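
    As a quick check of the quoted numbers, a Bragg angle near 19.15 degrees for 6 keV photons on Si(111) follows directly from the Bragg law with the Si(111) d-spacing of roughly 3.1356 Å and hc ≈ 12.398 keV·Å; the short sketch below does the arithmetic (constants rounded, so it reproduces the quoted value only to within about 0.1 degree).

```python
import math

HC_KEV_ANGSTROM = 12.398   # hc in keV * Angstrom (rounded)
D_SI_111 = 3.1356          # Si(111) d-spacing in Angstrom (approximate)

def bragg_angle_deg(energy_kev, d=D_SI_111, order=1):
    """Bragg angle in degrees for a given photon energy and d-spacing."""
    wavelength = HC_KEV_ANGSTROM / energy_kev
    return math.degrees(math.asin(order * wavelength / (2.0 * d)))

print(round(bragg_angle_deg(6.0), 2))   # ~19.2 deg, close to the quoted 19.15 deg
```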

  8. Multi-layer monochromator

    International Nuclear Information System (INIS)

    Schoenborn, B.P.; Caspar, D.L.D.

    1975-01-01

    This invention provides an artificial monochromator crystal for efficiently selecting a narrow band of neutron wavelengths from a neutron beam having a Maxwellian wavelength distribution, by providing on a substrate a plurality of germanium layers, and alternate periodic layers of a different metal having tailored thicknesses, shapes, and volumetric and neutron scattering densities. (U.S.)

  9. Binary joint transform correlation using error-diffusion techniques

    Science.gov (United States)

    Inbar, Hanni; Marom, Emanuel; Konforti, Naim

    1993-08-01

    Optical pattern recognition techniques based on the optical joint transform correlator (JTC) scheme are attractive due to their simplicity. Recent improvements in spatial light modulators (SLMs) have increased the popularity of the JTC, providing means for real-time operation. Using a binary SLM to display the Fourier spectrum first requires binarization of the joint power spectrum distribution. Although hard-clipping is the simplest and most common binarization method used, we suggest applying error diffusion as an improved binarization technique. The performance of a binary JTC, whose input image is considered to contain additive zero-mean white Gaussian noise, is investigated. Various ways of nonlinearly modifying the joint power spectrum prior to the binarization step, which is based on either error-diffusion or hard-clipping techniques, are discussed. These nonlinear modifications aim at increasing the contrast of the interference fringes at the joint power spectrum plane, leading to better definition of the correlation signal. Mathematical analysis, computer simulations and experimental results are presented.

  10. Principal distance constraint error diffusion algorithm for homogeneous dot distribution

    Science.gov (United States)

    Kang, Ki-Min; Kim, Choon-Woo

    1999-12-01

    The perceived quality of a halftoned image strongly depends on the spatial distribution of the binary dots. Various error diffusion algorithms have been proposed for realizing a homogeneous dot distribution in the highlight and shadow regions. However, they are computationally expensive and/or require large memory space. This paper presents a new threshold-modulated error diffusion algorithm for homogeneous dot distribution. The proposed method is applied exactly as in Floyd-Steinberg's algorithm except for the thresholding process. The threshold value is modulated based on the difference between the distance to the nearest minor pixel, the 'minor pixel distance', and the principal distance. To do so, the minor pixel distance must be calculated for every pixel, which is quite time consuming and requires large memory resources. In order to alleviate this problem, a 'minor pixel offset array' that transforms the 2D history of minor pixels into 1D codes is proposed. The proposed algorithm drastically reduces the computational load and memory space needed for calculating the minor pixel distance.
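
    A hedged sketch of the general idea of threshold modulation follows: a Floyd-Steinberg pass in which the threshold is lowered when the nearest minor dot is closer than the principal distance (discouraging a new dot there) and raised when it is farther away (encouraging one). The principal-distance formula, the gain, and the brute-force nearest-dot search are schematic stand-ins; the paper's minor pixel offset array bookkeeping is not reproduced.

```python
import numpy as np

FS_WEIGHTS = ((0, 1, 7 / 16), (1, -1, 3 / 16), (1, 0, 5 / 16), (1, 1, 1 / 16))

def threshold_modulated_ed(img, gain=0.5):
    """Error diffusion with a schematically modulated threshold (highlight case:
    the minor pixels are the black dots in a mostly white region)."""
    work = img.astype(float).copy()
    h, w = work.shape
    result = np.ones((h, w))
    dots = []                                     # coordinates of minor (black) dots
    for y in range(h):
        for x in range(w):
            g = min(max(img[y, x], 0.01), 0.99)
            principal = 1.0 / np.sqrt(1.0 - g)    # assumed principal distance for dot density 1-g
            if dots:
                # brute-force search over recent dots, purely for clarity
                d_near = min(np.hypot(y - dy, x - dx) for dy, dx in dots[-200:])
                thresh = 0.5 - gain * (principal - d_near) / principal
            else:
                thresh = 0.5
            old = work[y, x]
            new = 1.0 if old >= thresh else 0.0
            if new == 0.0:
                dots.append((y, x))
            result[y, x] = new
            err = old - new
            for dy, dx, wt in FS_WEIGHTS:
                if 0 <= y + dy < h and 0 <= x + dx < w:
                    work[y + dy, x + dx] += err * wt
    return result
```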

  11. Residual sweeping errors in turbulent particle pair diffusion in a Lagrangian diffusion model.

    Science.gov (United States)

    Malik, Nadeem A

    2017-01-01

    Thomson, D. J. & Devenish, B. J. [J. Fluid Mech. 526, 277 (2005)] and others have suggested that sweeping effects make Lagrangian properties in Kinematic Simulations (KS), Fung et al [Fung J. C. H., Hunt J. C. R., Malik N. A. & Perkins R. J. J. Fluid Mech. 236, 281 (1992)], unreliable. However, such a conclusion can only be drawn under the assumption of locality. The major aim here is to quantify the sweeping errors in KS without assuming locality. Through a novel analysis based upon analysing pairs of particle trajectories in a frame of reference moving with the large energy-containing scales of motion, it is shown that the normalized integrated error [Formula: see text] in the turbulent pair diffusivity (K) due to the sweeping effect decreases with increasing pair separation (σl), such that [Formula: see text] as σl/η → ∞; and [Formula: see text] as σl/η → 0. η is the Kolmogorov turbulence microscale. There is an intermediate range of separations 1 < σl/η < ∞ in which the error [Formula: see text] remains negligible. Simulations using KS show that in the swept frame of reference, this intermediate range is large, covering almost the entire inertial subrange simulated, 1 < σl/η < 10⁵, implying that the deviation from locality observed in KS cannot be attributed to sweeping errors. This is important for pair diffusion theory and modeling. PACS numbers: 47.27.E?, 47.27.Gs, 47.27.jv, 47.27.Ak, 47.27.tb, 47.27.eb, 47.11.-j.

  12. Computational error estimates for Monte Carlo finite element approximation with log normal diffusion coefficients

    KAUST Repository

    Sandberg, Mattias

    2015-01-07

    The Monte Carlo (and Multi-level Monte Carlo) finite element method can be used to approximate observables of solutions to diffusion equations with log normal distributed diffusion coefficients, e.g. modelling ground water flow. Typical models use log normal diffusion coefficients with Hölder regularity of order up to 1/2 a.s. This low regularity implies that the high frequency finite element approximation error (i.e. the error from frequencies larger than the mesh frequency) is not negligible and can be larger than the computable low frequency error. This talk will address how the total error can be estimated by the computable error.

  13. Computable error estimates for Monte Carlo finite element approximation of elliptic PDE with lognormal diffusion coefficients

    KAUST Repository

    Hall, Eric

    2016-01-09

    The Monte Carlo (and Multi-level Monte Carlo) finite element method can be used to approximate observables of solutions to diffusion equations with lognormal distributed diffusion coefficients, e.g. modeling ground water flow. Typical models use lognormal diffusion coefficients with Hölder regularity of order up to 1/2 a.s. This low regularity implies that the high frequency finite element approximation error (i.e. the error from frequencies larger than the mesh frequency) is not negligible and can be larger than the computable low frequency error. We address how the total error can be estimated by the computable error.

  14. Statistical error in simulations of Poisson processes: Example of diffusion in solids

    Science.gov (United States)

    Nilsson, Johan O.; Leetmaa, Mikael; Vekilova, Olga Yu.; Simak, Sergei I.; Skorodumova, Natalia V.

    2016-08-01

    Simulations of diffusion in solids often produce poor statistics of diffusion events. We present an analytical expression for the statistical error in ion conductivity obtained in such simulations. The error expression is not restricted to any particular computational method, but is valid for simulations of Poisson processes in general. This analytical error expression is verified numerically for the case of Gd-doped ceria by running a large number of kinetic Monte Carlo calculations.

  15. Classification of Error-Diffused Halftone Images Based on Spectral Regression Kernel Discriminant Analysis

    Directory of Open Access Journals (Sweden)

    Zhigao Zeng

    2016-01-01

    This paper proposes a novel algorithm to solve the challenging problem of classifying error-diffused halftone images. We first design the class feature matrices, after extracting the image patches according to their statistical characteristics, to classify the error-diffused halftone images. Then, spectral regression kernel discriminant analysis is used for feature dimension reduction. The error-diffused halftone images are finally classified using an idea similar to the nearest centroids classifier. As demonstrated by the experimental results, our method is fast and can achieve a high classification accuracy rate, with an added benefit of robustness in tackling noise.
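
    The final step is described as similar to a nearest-centroid rule; a generic sketch of that step, applied to feature vectors that have already been projected by the dimension-reduction stage, is shown below. The spectral regression kernel discriminant analysis projection itself is not reproduced, and the array names are illustrative.

```python
import numpy as np

def fit_centroids(features, labels):
    """Compute one centroid per class from projected feature vectors (N x D)."""
    classes = np.unique(labels)
    centroids = np.stack([features[labels == c].mean(axis=0) for c in classes])
    return classes, centroids

def predict_nearest_centroid(features, classes, centroids):
    """Assign each feature vector to the class of its closest centroid."""
    dists = np.linalg.norm(features[:, None, :] - centroids[None, :, :], axis=2)
    return classes[np.argmin(dists, axis=1)]
```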

  16. FEM for time-fractional diffusion equations, novel optimal error analyses

    OpenAIRE

    Mustapha, Kassem

    2016-01-01

    A semidiscrete Galerkin finite element method applied to time-fractional diffusion equations with time-space dependent diffusivity on bounded convex spatial domains will be studied. The main focus is on achieving optimal error results with respect to both the convergence order of the approximate solution and the regularity of the initial data. By using novel energy arguments, for each fixed time $t$, optimal error bounds in the spatial $L^2$- and $H^1$-norms are derived for both cases: smooth...

  17. Computable error estimates for Monte Carlo finite element approximation of elliptic PDE with lognormal diffusion coefficients

    KAUST Repository

    Hall, Eric; Haakon, Hoel; Sandberg, Mattias; Szepessy, Anders; Tempone, Raul

    2016-01-01

    lognormal diffusion coefficients with Hölder regularity of order up to 1/2 a.s. This low regularity implies that the high frequency finite element approximation error (i.e. the error from frequencies larger than the mesh frequency) is not negligible

  18. Computational error estimates for Monte Carlo finite element approximation with log normal diffusion coefficients

    KAUST Repository

    Sandberg, Mattias

    2015-01-01

    log normal diffusion coefficients with Hölder regularity of order up to 1/2 a.s. This low regularity implies that the high frequency finite element approximation error (i.e. the error from frequencies larger than the mesh frequency) is not negligible

  19. APS high heat load monochromator

    International Nuclear Information System (INIS)

    Lee, W.K.; Mills, D.

    1993-02-01

    This document contains the design specifications of the APS high heat load (HHL) monochromator and associated accessories as of February 1993. It should be noted that work is continuing on many parts of the monochromator, including the mechanical design, crystal cooling designs, etc. Where appropriate, we have tried to add supporting documentation, references to published papers, and calculations on which we based our decisions. The underlying philosophy behind the performance specifications of this monochromator was to fabricate a device that would be useful to as many APS users as possible; that is, the design should be as generic as possible. In other words, we believe that this design will be capable of operating on both bending magnet and ID beamlines (with the appropriate changes to the cooling and crystals), with both flat and inclined crystal geometries, and with a variety of coolants. It was strongly felt that this monochromator should have good energy scanning capabilities over the classical energy range of about 4 to 20 keV with Si(111) crystals. For this reason, a design incorporating one rotation stage to drive both the first and second crystals was considered most promising. Separate rotary stages for the first and second crystals can sometimes provide more flexibility in their capacities to carry heavy loads (for heavily cooled first crystals or sagittal benders of second crystals), but their tuning capabilities were considered inferior to the single-axis approach.

  20. Error quantification of the axial nodal diffusion kernel of the DeCART code

    International Nuclear Information System (INIS)

    Cho, J. Y.; Kim, K. S.; Lee, C. C.

    2006-01-01

    This paper quantifies the transport effects involved in the axial nodal diffusion kernel of the DeCART code. The transport effects are itemized into three effects: the homogenization, the diffusion, and the nodal effects. A five-pin model consisting of four fuel pins and one non-fuel pin is used to quantify the transport effects. The transport effects are analyzed for three problems, the single pin (SP), guide tube (GT) and control rod (CR) problems, obtained by replacing the non-fuel pin with a fuel pin, a guide tube, and a control rod, respectively. The homogenization and diffusion effects are estimated to be about -4 and -50 pcm for the eigenvalue, and less than 2 % for the node power. The nodal effect on the eigenvalue is evaluated to be about -50 pcm in the SP and GT problems, and +350 pcm in the CR problem. Regarding the node power, this effect induces about a 3 % error in the SP and GT problems, and about a 20 % error in the CR problem. The large power error in the CR problem is due to the plane thickness, and it can be decreased by using an adaptive plane size. From the error quantification, it is concluded that the homogenization and diffusion effects are not controllable if DeCART maintains the diffusion kernel for the axial solution, but the nodal effect is controllable by introducing the adaptive plane size scheme. (authors)

  1. Discontinuous Galerkin methods and a posteriori error analysis for heterogeneous diffusion problems

    International Nuclear Information System (INIS)

    Stephansen, A.F.

    2007-12-01

    In this thesis we analyse a discontinuous Galerkin (DG) method and two computable a posteriori error estimators for the linear and stationary advection-diffusion-reaction equation with heterogeneous diffusion. The DG method considered, the SWIP method, is a variation of the Symmetric Interior Penalty Galerkin method; the difference is that the SWIP method uses weighted averages with weights that depend on the diffusion. The a priori analysis shows optimal convergence with respect to mesh size and robustness with respect to heterogeneous diffusion, which is confirmed by numerical tests. Both a posteriori error estimators are of the residual type and control the energy (semi-)norm of the error. Local lower bounds are obtained, showing that almost all indicators are independent of the heterogeneities. The exception is the non-conforming part of the error, which has been evaluated using the Oswald interpolator. The second error estimator is sharper than the first, but slightly more costly. This estimator is based on the construction of an H(div)-conforming Raviart-Thomas-Nedelec flux using the conservativeness of DG methods. Numerical results show that both estimators can be used for mesh adaptation. (author)

  2. High-heat-load monochromator options for the RIXS beamline at the APS with the MBA lattice

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Zunping, E-mail: zpliu@anl.gov; Gog, Thomas, E-mail: gog@aps.anl.gov; Stoupin, Stanislav A.; Upton, Mary H.; Ding, Yang; Kim, Jung-Ho; Casa, Diego M.; Said, Ayman H.; Carter, Jason A.; Navrotski, Gary [Advanced Photon Source, Argonne National Laboratory, 9700 S. Cass Ave, Lemont, IL 60439 (United States)

    2016-07-27

    With the MBA lattice for the APS-Upgrade, the tuning curves of 2.6 cm period undulators meet the source requirements for the RIXS beamline. The high-heat-load monochromator (HHLM) is the first optical white-beam component. Four options are considered for the HHLM: diamond monochromators cooled with either water or liquid nitrogen (LN2), and silicon monochromators with either direct or indirect cooling. Their performances are evaluated at an energy of 11.215 keV (Ir L-III edge). The cryo-cooled diamond monochromator has performance similar to the water-cooled diamond monochromator because the GaIn of the Cu-GaIn-diamond interface becomes solid. The cryo-cooled silicon monochromators perform better, not only in terms of surface slope error due to thermal deformation, but also in terms of thermal capacity.

  3. Monochromated scanning transmission electron microscopy

    International Nuclear Information System (INIS)

    Rechberger, W.; Kothleitner, G.; Hofer, F.

    2006-01-01

    Electron energy-loss spectroscopy (EELS) has developed into an established technique for chemical and structural analysis of thin specimens in the (scanning) transmission electron microscope ((S)TEM). The energy resolution in EELS is largely limited by the stability of the high-voltage supply, by the resolution of the spectrometer and by the energy spread of the source. To overcome this limitation, a Wien filter monochromator was recently introduced with commercially available STEMs, offering the ability to better resolve EELS fine structures, which contain valuable bonding information. The method of atomic-resolution Z-contrast imaging in an STEM, utilizing a high-angle annular dark-field (HAADF) detector, can perfectly complement the excellent energy resolution, since EELS spectra can be collected simultaneously. In combination with a monochromator microscope, not only can high spatial resolution images be recorded, but high energy resolution EELS spectra are also attainable. In this work we investigated the STEM performance of a 200 kV monochromated Tecnai F20 with a high-resolution Gatan Imaging Filter (HR-GIF). (author)

  4. The in-focus variable line spacing plane grating monochromator

    International Nuclear Information System (INIS)

    Reininger, R.

    2011-01-01

    The in-focus variable line spacing plane grating monochromator is based on only two plane optical elements, a variable line spacing plane grating and a plane pre-mirror that illuminates the grating at the angle of incidence that will focus the required photon energy. A high throughput beamline requires only a third optical element after the exit slit, an aberration corrected elliptical toroid. Since plane elements can be manufactured with the smallest figure errors, this monochromator design can achieve very high resolving power. Furthermore, this optical design can correct the deformations induced by the heat load on the optics along the dispersion plane. This should allow obtaining a resolution of 10 meV at 1 keV with currently achievable figure errors on plane optics. The position of the photon source when an insertion device center is not located at the center of the straight section, a common occurrence in new insertion device beamlines, is investigated.

  5. A Cold Neutron Monochromator and Scattering Apparatus

    Energy Technology Data Exchange (ETDEWEB)

    Harris, D; Cocking, S J; Egelstaff, P A; Webb, F J [Nuclear Physics Division, AERE, Harwell, Didcot, Berks (United Kingdom)

    1963-01-15

    A narrow band of neutron wavelengths (4 Å and greater) is selected from a collimated neutron beam obtained from the Dido reactor at Harwell. These neutrons are scattered by various samples and the energy transfer of the scattered neutrons is measured using time-of-flight techniques. The neutrons, moderated by a liquid hydrogen source in the reactor, pass first through a liquid nitrogen-cooled filter, then through a single crystal of bismuth, and finally they are 'chopped' by a magnesium-cadmium high-speed curved-slot rotor. In this apparatus the wavelength spread of 0.3 Å at 4.1 Å is determined primarily by the Be-Bi filter, while the time spread (8 μs) is determined by the rotor. The monochromated neutron bursts from this rotor are scattered by a sample and detected in one of two counter arrays. When studying liquid or polycrystalline samples, an array of six BF3 counter assemblies (each 2 inches x 24 inches in area) is used, covering scattering angles from 20° to 90°. This array is placed below the neutron beam. Above the line of the neutron beam is a second array consisting of three scintillators 2 inches in diameter, which is used for the study of single-crystal samples. The output of each counter is fed into a tape recording system which has 500 time channels available for each counter. This apparatus has been used to study neutron scattering from several gaseous, liquid and crystalline samples, and the most recent measurements are presented in other papers in these proceedings.

  6. Neutron optics with multilayer monochromators

    International Nuclear Information System (INIS)

    Saxena, A.M.; Majkrzak, C.F.

    1984-01-01

    A multilayer monochromator is made by depositing thin films of two materials in an alternating sequence on a glass substrate. This makes a multilayer periodic in a direction perpendicular to the plane of the films, with a d-spacing equal to the thickness of one bilayer. Neutrons of wavelength λ incident on a multilayer will be reflected at an angle φ given by the Bragg relation nλ = 2d sin φ, where n is the order of reflection. The use of thin-film multilayers for monochromating neutrons is discussed. Because of the low flux of neutrons, the samples have to be large, and the width of the incident beam can be as much as 2 cm. Multilayers made earlier were fabricated by resistive heating of the materials in a vacuum chamber. Because of geometrical constraints imposed by the size of the vacuum chamber, limits on the amount of material that can be loaded in a boat, and the finite life of the boats, this method of preparation limits the length of a multilayer to ∼ 15 cm and the total number of bilayers in a multilayer to about 200. This paper discusses a thin-film deposition system using RF sputtering for depositing the films.

  7. Magnitude of pseudopotential localization errors in fixed node diffusion quantum Monte Carlo.

    Science.gov (United States)

    Krogel, Jaron T; Kent, P R C

    2017-06-28

    Growth in computational resources has led to the application of real-space diffusion quantum Monte Carlo to increasingly heavy elements. Although generally assumed to be small, we find that when using standard techniques, the pseudopotential localization error can be large, on the order of an electron volt for an isolated cerium atom. We formally show that the localization error can be reduced to zero with improvements to the Jastrow factor alone, and we define a metric of Jastrow sensitivity that may be useful in the design of pseudopotentials. We employ an extrapolation scheme to extract the bare fixed node energy and estimate the localization error in both the locality approximation and the T-moves schemes for the Ce atom in charge states 3+ and 4+. The locality approximation exhibits the lowest Jastrow sensitivity and generally smaller localization errors than T-moves, although the locality approximation energy approaches the localization-free limit from above/below for the 3+/4+ charge state. We find that energy-minimized Jastrow factors including three-body electron-electron-ion terms are the most effective at reducing the localization error for both the locality approximation and T-moves for the case of the Ce atom. Less complex or variance-minimized Jastrows are generally less effective. Our results suggest that further improvements to Jastrow factors and trial wavefunction forms may be needed to reduce localization errors to chemical accuracy when medium core pseudopotentials are applied to heavy elements such as Ce.

  8. A compact double crystal monochromator for electrochemistry beamline at PLS

    CERN Document Server

    Rah, S; Kim, G H

    2001-01-01

    A compact double crystal monochromator based on a 16.5'' CF flange has been designed, fabricated and installed for the electrochemistry beamline at the Pohang Light Source. The Bragg angle range of the monochromator is 7-75 deg. The mechanical design is modified from the typical Boomerang design [J.A. Golovchenko et al., Rev. Sci. Instrum. 52 (1981) 509; J.P. Kirkland, Nucl. Instr. and Meth. A291 (1990) 185] to have a fixed beam offset and a single driving axis for spectroscopy experiments. The parallelism error of the crystals is kept below 6 μrad over this range by using a precision single-axis linear guide. Also, the number of mechanical parts in the vacuum is minimized, and a vacuum of 1.8×10⁻⁹ Torr is achieved without baking.
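
    For context on the fixed-beam-offset requirement mentioned above: in a (+,-) double crystal monochromator the offset H between the incoming and outgoing beams is related to the perpendicular gap g between the crystal surfaces by H = 2 g cos(theta), so holding H fixed while scanning the Bragg angle theta means one crystal must translate. The sketch below evaluates that generic relation over the quoted 7-75 degree range; the 25 mm offset is an assumed, illustrative value, not a figure from this instrument.

```python
import math

def crystal_gap_mm(offset_mm, bragg_deg):
    """Perpendicular crystal gap that keeps a fixed beam offset: H = 2*g*cos(theta)."""
    return offset_mm / (2.0 * math.cos(math.radians(bragg_deg)))

# Assumed fixed offset of 25 mm over the quoted 7-75 degree Bragg range.
for theta in (7, 30, 60, 75):
    print(theta, "deg ->", round(crystal_gap_mm(25.0, theta), 1), "mm")
```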

  9. An ultrahigh vacuum monochromator for photophysics beamline

    International Nuclear Information System (INIS)

    Meenakshi Raja Rao, P.; Padmanabhan, Saraswathy; Raja Sekhar, B.N.; Shastri, Aparna; Khan, H.A.; Sinha, A.K.

    2000-08-01

    The photophysics beamline, designed for carrying out photoabsorption and fluorescence studies using the 450 MeV Synchrotron Radiation Source (SRS) INDUS-1, uses a 1 metre monochromator as a premonochromator for monochromatising the continuum. An ultra-high-vacuum compatible monochromator in a Seya-Namioka mount has been designed and fabricated indigenously. The monochromator was assembled and tested for its performance. The wavelength scanning mechanism was tested for its reproducibility, and the monochromator was tested for its resolution using UV and VUV sources. An average spectral resolution of 2.5 Å was achieved using a 1200 gr/mm grating. A wavelength repeatability of ±1 Å was obtained. An ultra-high vacuum of 2×10⁻⁸ mbar was also achieved in the monochromator. Details of fabrication, assembly and testing are presented in this report. (author)
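
    For orientation, the quoted ~2.5 Å resolution with a 1200 gr/mm grating in a 1 m mount is consistent with a simple slit-limited estimate: the reciprocal linear dispersion of a 1 m, 1200 gr/mm instrument in first order is about 8.3 Å/mm (taking cos β ≈ 1), so slits of a few hundred micrometres give a bandpass of the quoted order. The sketch below makes this rough estimate; the 0.3 mm slit width and the cos β factor are assumptions, not values from the report.

```python
def slit_limited_bandpass_angstrom(grooves_per_mm, focal_length_mm, slit_mm,
                                   order=1, cos_beta=1.0):
    """Bandpass = reciprocal linear dispersion * slit width, with dispersion
    d*cos(beta)/(m*f) and groove spacing d = 1/grooves_per_mm."""
    groove_spacing_angstrom = 1.0e7 / grooves_per_mm   # 1 mm = 1e7 Angstrom
    dispersion = groove_spacing_angstrom * cos_beta / (order * focal_length_mm)
    return dispersion * slit_mm

# Assumed 0.3 mm slits on a 1 m, 1200 gr/mm mount -> about 2.5 Angstrom.
print(round(slit_limited_bandpass_angstrom(1200, 1000.0, 0.3), 1))
```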

  10. Impact of errors in experimental parameters on reconstructed breast images using diffuse optical tomography.

    Science.gov (United States)

    Deng, Bin; Lundqvist, Mats; Fang, Qianqian; Carp, Stefan A

    2018-03-01

    Near-infrared diffuse optical tomography (NIR-DOT) is an emerging technology that offers hemoglobin-based, functional imaging tumor biomarkers for breast cancer management. The most promising clinical translation opportunities are in the differential diagnosis of malignant vs. benign lesions, and in early response assessment and guidance for neoadjuvant chemotherapy. Accurate quantification of the tissue oxy- and deoxy-hemoglobin concentrations across the field of view, as well as repeatability during longitudinal imaging in the context of therapy guidance, are essential for the successful translation of NIR-DOT to clinical practice. The ill-posed and ill-conditioned nature of the DOT inverse problem makes this technique particularly susceptible to model errors that may occur, for example, when the experimental conditions do not fully match the assumptions built into the image reconstruction process. To evaluate the susceptibility of DOT images to experimental errors that might be encountered in practice for a parallel-plate NIR-DOT system, we simulated 7 different types of errors, each with a range of magnitudes. We generated simulated data by using digital breast phantoms derived from five actual mammograms of healthy female volunteers, to which we added a 1-cm tumor. After applying each of the experimental error types and magnitudes to the simulated measurements, we reconstructed optical images with and without structural prior guidance and assessed the overall error in the total hemoglobin concentrations (HbT) and in the HbT contrast between the lesion and surrounding area vs. the best-case scenarios. It is found that slight in-plane probe misalignment and plate rotation did not result in large quantification errors. However, any out-of-plane probe tilting could result in significant deterioration in lesion contrast. Among the error types investigated in this work, optical images were the least likely to be impacted by breast shape inaccuracies but suffered the

  11. Error analysis of semidiscrete finite element methods for inhomogeneous time-fractional diffusion

    KAUST Repository

    Jin, B.; Lazarov, R.; Pasciak, J.; Zhou, Z.

    2014-01-01

    We consider the initial-boundary value problem for an inhomogeneous time-fractional diffusion equation with a homogeneous Dirichlet boundary condition, vanishing initial data and a nonsmooth right-hand side in a bounded convex polyhedral domain. We analyse two semidiscrete schemes based on the standard Galerkin and lumped mass finite element methods. Almost optimal error estimates are obtained for right-hand side data f(x, t) ∈ L∞(0, T; H^q(Ω)), −1 < q ≤ 1, for both semidiscrete schemes. For the lumped mass method, the optimal L2(Ω)-norm error estimate requires symmetric meshes. Finally, two-dimensional numerical experiments are presented to verify our theoretical results.

  12. Error analysis of semidiscrete finite element methods for inhomogeneous time-fractional diffusion

    KAUST Repository

    Jin, B.

    2014-05-30

    We consider the initial-boundary value problem for an inhomogeneous time-fractional diffusion equation with a homogeneous Dirichlet boundary condition, vanishing initial data and a nonsmooth right-hand side in a bounded convex polyhedral domain. We analyse two semidiscrete schemes based on the standard Galerkin and lumped mass finite element methods. Almost optimal error estimates are obtained for right-hand side data f(x, t) ∈ L∞(0, T; H^q(Ω)), −1 < q ≤ 1, for both semidiscrete schemes. For the lumped mass method, the optimal L2(Ω)-norm error estimate requires symmetric meshes. Finally, two-dimensional numerical experiments are presented to verify our theoretical results.

  13. Velocity monochromator for macro-ions

    Energy Technology Data Exchange (ETDEWEB)

    Paquin, R; Baril, M [Laval Univ., Quebec City (Canada). Dept. de Physique

    1976-09-15

    We propose the use of a dynamic monochromator to reduce the energy spread of a macroion source. It is shown that the energy aberration can be corrected using linear acceleration after the particles are separated in a field free drift tube. We give a general expression for the resolution of the monochromator. We verify experimentally that the energy distribution of a beam of cesium ions of 160 eV mean energy could be reduced from 20 eV to 4.5 eV, giving an improvement of 4.3, with this monochromator which has an efficiency of 6%. Two suggestions to improve the transmission of the monochromator are also given.

  14. Velocity monochromator for macro-ions

    International Nuclear Information System (INIS)

    Paquin, R.; Baril, M.

    1976-01-01

    We propose the use of a dynamic monochromator to reduce the energy spread of a macroion source. It is shown that the energy aberration can be corrected using linear acceleration after the particles are separated in a field free drift tube. We give a general expression for the resolution of the monochromator. We verify experimentally that the energy distribution of a beam of cesium ions of 160 eV mean energy could be reduced from 20 eV to 4.5 eV, giving an improvement of 4.3, with this monochromator which has an efficiency of 6%. Two suggestions to improve the transmission of the monochromator are also given. (author)

  15. Heat load studies of a water-cooled minichannel monochromator for synchrotron x-ray beams

    Science.gov (United States)

    Freund, Andreas K.; Arthur, John R.; Zhang, Lin

    1997-12-01

    We fabricated a water-cooled silicon monochromator crystal with small channels for the special case of a double-crystal fixed-exit monochromator design where the beam walks across the crystal when the x-ray energy is changed. The two parts of the cooled device were assembled using a new technique based on low melting point solder. The bending of the system produced by this technique could be perfectly compensated by mechanical counter-bending. Heat load tests of the monochromator in a synchrotron beam of 75 W total power, 3 mm high and 15 mm wide, generated by a multipole wiggler at SSRL, showed that the thermal slope error of the crystal is 1 arcsec/40 W power, in full agreement with finite element analysis. The cooling scheme is adequate for bending magnet beamlines at the ESRF and present wiggler beamlines at the SSRL.

  16. Inclined monochromator for high heat-load synchrotron x-ray radiation

    Science.gov (United States)

    Khounsary, Ali M.

    1994-01-01

    A double crystal monochromator including two identical, parallel crystals, each of which is cut such that the normal to the diffraction planes of interest makes an angle less than 90 degrees with the surface normal. Diffraction is symmetric, regardless of whether the crystals are symmetrically or asymmetrically cut, enabling operation of the monochromator with a fixed plane of diffraction. As a result of the inclination of the crystal surface, an incident beam has a footprint area which is elongated both vertically and horizontally when compared to that of the conventional monochromator, reducing the heat flux of the incident beam and enabling more efficient surface cooling. Because after inclination of the crystal only a fraction of thermal distortion lies in the diffraction plane, slope errors and the resultant misorientation of the diffracted beam are reduced.

  17. Double crystal monochromator controlled by integrated computing on BL07A in New SUBARU, Japan

    Energy Technology Data Exchange (ETDEWEB)

    Okui, Masato, E-mail: okui@kohzu.co.jp [Kohzu Precision Co., Ltd., 2-6-15, Kurigi, Asao-ku, Kawasaki-shi, Kanagawa 215-8521 (Japan); Laboratory of Advanced Science and Technology for Industry, University of Hyogo (Japan); Yato, Naoki; Watanabe, Akinobu; Lin, Baiming; Murayama, Norio [Kohzu Precision Co., Ltd., 2-6-15, Kurigi, Asao-ku, Kawasaki-shi, Kanagawa 215-8521 (Japan); Fukushima, Sei, E-mail: FUKUSHIMA.Sei@nims.go.jp [Laboratory of Advanced Science and Technology for Industry, University of Hyogo (Japan); National Institute for Material Sciences (Japan); Kanda, Kazuhiro [Laboratory of Advanced Science and Technology for Industry, University of Hyogo (Japan)

    2016-07-27

    The BL07A beamline at New SUBARU, University of Hyogo, has been used for many studies of new materials. A new double crystal monochromator controlled by integrated computing was designed and installed in the beamline in 2014. In this report we discuss the unique features of this new monochromator, MKZ-7NS. This monochromator was not designed exclusively for use in BL07A; on the contrary, it was designed to be installed at low cost in various beamlines to facilitate the industrial applications of medium-scale synchrotron radiation facilities. Thus, the design of the monochromator utilizes common packages that can satisfy the wide variety of specifications required at different synchrotron radiation facilities. This monochromator can easily be optimized for any beamline because a few control parameters can be suitably customized. The beam offset can be fixed precisely even if one of the two slave axes is omitted; this design reduces the convolution of mechanical errors. Moreover, the monochromator’s control mechanism is very compact, making it possible to reduce the size of the vacuum chamber.

  18. On the group approximation errors in description of neutron slowing-down at large distances from a source. Diffusion approach

    International Nuclear Information System (INIS)

    Kulakovskij, M.Ya.; Savitskij, V.I.

    1981-01-01

    The errors in multigroup calculations of the spatial and energy distribution of the neutron flux in a fast reactor shield, caused by using group and age approximations, are considered. It is shown that at small distances from a source the age theory describes the distribution of the slowing-down density rather well. As the distance increases, the age approximation underestimates the neutron fluxes, and the error grows quickly. At small distances from the source (up to 15 free-path lengths in graphite) the multigroup diffusion approximation describes the distribution of the slowing-down density quite satisfactorily, and the results depend only weakly on the number of groups. As the distance increases, the multigroup diffusion calculations considerably overestimate the slowing-down density. It is concluded that the errors inherent in the group approximation are opposite in sign to the error introduced by the age approximation and to some extent compensate each other.

  19. Internally cooled V-shape inclined monochromator

    Czech Academy of Sciences Publication Activity Database

    Oberta, Peter; Áč, V.; Hrdý, Jaromír

    2008-01-01

    Roč. 15, - (2008), 8-11 ISSN 0909-0495 R&D Projects: GA AV ČR IAA100100716 Grant - others:VEGA(SK) 1/4134/07 Institutional research plan: CEZ:AV0Z10100522 Keywords : inclined monochromator * heat load * internal cooling Subject RIV: BH - Optics, Masers, Lasers Impact factor: 2.333, year: 2008

  20. Calculation of thermal deformations in water-cooled monochromator crystals

    International Nuclear Information System (INIS)

    Nakamura, Ario; Hashimoto, Shinya; Motohashi, Haruhiko

    1994-11-01

    Through calculations of the temperature distribution and thermal deformation of monochromators, optical degradation by the heat loads at SPring-8 has been discussed. Cooling experiments were made on three models of copper structures with the JAERI Electron Beam Irradiation Stand (JEBIS), and the results were used to estimate heat transfer coefficients in the models. These heat transfer coefficients were adopted to simulate heating processes in silicon models of the same structures as the copper models, for which radiation from the SPring-8 bending magnet and the JAERI prototype undulator (WPH-33J) was considered. It is concluded that, in the case of the bending magnet (with a power density of 0.27 MW/m² on the monochromator surface), the temperature at the surface center reaches about 30°C from the initial temperature of 27°C in all the models. In the case of WPH-33J (with a power density of 8.2 MW/m²), the temperature reaches about 200 to 280°C depending on the model. The radiation from WPH-33J yields slope errors larger than the Darwin width (23 μrad). (author)

  1. The speed of memory errors shows the influence of misleading information: Testing the diffusion model and discrete-state models.

    Science.gov (United States)

    Starns, Jeffrey J; Dubé, Chad; Frelinger, Matthew E

    2018-05-01

    In this report, we evaluate single-item and forced-choice recognition memory for the same items and use the resulting accuracy and reaction time data to test the predictions of discrete-state and continuous models. For the single-item trials, participants saw a word and indicated whether or not it was studied on a previous list. The forced-choice trials had one studied and one non-studied word that both appeared in the earlier single-item trials and both received the same response. Thus, forced-choice trials always had one word with a previous correct response and one with a previous error. Participants were asked to select the studied word regardless of whether they previously called both words "studied" or "not studied." The diffusion model predicts that forced-choice accuracy should be lower when the word with a previous error had a fast versus a slow single-item RT, because fast errors are associated with more compelling misleading memory retrieval. The two-high-threshold (2HT) model does not share this prediction because all errors are guesses, so error RT is not related to memory strength. A low-threshold version of the discrete state approach predicts an effect similar to the diffusion model, because errors are a mixture of responses based on misleading retrieval and guesses, and the guesses should tend to be slower. Results showed that faster single-trial errors were associated with lower forced-choice accuracy, as predicted by the diffusion and low-threshold models. Copyright © 2018 Elsevier Inc. All rights reserved.

  2. Accuracy synthesis of T-shaped exit fixed mechanism in a double-crystal monochromator

    International Nuclear Information System (INIS)

    Wang Fengqin; Cao Chongzhen; Wang Jidai; Li Yushan; Gao Xueguan

    2007-01-01

    A fixed exit beam is a key performance requirement for a double-crystal monochromator. In order to improve the height accuracy of the exit in a T-shaped exit-fixed mechanism, an expression relating the height of the exit to the various original errors was derived using a geometrical analysis method. According to the principle of independent action of the original errors, accuracy synthesis of the T-shaped exit-fixed mechanism was studied using the equal accuracy method, and the tolerance ranges of the original errors were obtained. The calculation of the tolerance ranges of the original errors is illustrated with an example. (authors)

  3. Monolithic I-Beam Crystal Monochromator

    Energy Technology Data Exchange (ETDEWEB)

    Bagnasco, John

    2001-10-16

    Curved crystal, focusing monochromators featuring cubed-root thickness profiles typically employ side-clamped cooling to reduce thermally induced overall bend deformation of the crystal. While performance is improved, residual bend deformation is often an important limiting factor in the monochromator performance. A slightly asymmetric "I-beam" crystal cross section with cubed-root flange profiles has been developed to further reduce this effect. Physical motivation, finite-element modeling evaluation and performance characteristics of this design are discussed. Reduction of high mounting stress at the fixed end of the crystal required the soldering of an Invar support fixture to the crystal. Detailed descriptions of this process along with its performance characteristics are also presented.

  4. Submicrovolt resolution X-ray monochromators

    International Nuclear Information System (INIS)

    Trammell, G.T.; Hannon, J.P.

    1984-01-01

    Two methods are available for obtaining monochromatic x-radiation from a white source: wavelength selection and frequency selection. The resolution of wavelength selection methods is limited to 1-10 meV in the E = 10 keV range. To exceed this resolution, frequency selection methods based on nuclear resonance scattering can be used. Devices which give strong nuclear resonance reflections but weak electronic reflections are candidates for components of frequency selection monochromators. Some examples are discussed.

  5. X-ray instrumentation: monochromators and mirrors

    International Nuclear Information System (INIS)

    Rodrigues, A.R.D.

    1983-01-01

    The main types of X-ray monochromators used with synchrotron radiation are discussed in relation to energy resolution and spectral contamination, as well as special systems for applications which simultaneously require high flux and high resolution. The characteristics required of X-ray mirrors for use with synchrotron radiation, as shapers of the beam geometry and spectrum, are also analysed. (L.C.) [pt

  6. Variational Multiscale error estimator for anisotropic adaptive fluid mechanic simulations: application to convection-diffusion problems

    OpenAIRE

    Bazile, Alban; Hachem, Elie; Larroya-Huguet, Juan-Carlos; Mesri, Youssef

    2018-01-01

    In this work, we present a new a posteriori error estimator based on the Variational Multiscale method for anisotropic adaptive fluid mechanics problems. The general idea is to combine the large scale error based on the solved part of the solution with the sub-mesh scale error based on the unresolved part of the solution. We compute the latter with two different methods: one using the stabilizing parameters and the other using bubble functions. We propose two different...

  7. Investigation of a monochromator scheme for SPEAR

    International Nuclear Information System (INIS)

    Wille, K.; Chao, A.W.

    1984-08-01

    The possibility of monochromatizing SPEAR for the purpose of increasing the hadronic event rate at the narrow resonances was investigated. By using two pairs of electrostatic skew quads in a monochromator scheme, it is found that the event rate can be increased by a factor of 2 for the mini-beta optics, assuming the luminosity is kept unchanged. An attempt to increase this enhancement factor by major rearrangements of the ring magnets encountered serious optical difficulties; although an enhancement factor of 8 seems possible in principle, this alternative is not recommended.

  8. Cam-driven monochromator for QEXAFS

    Energy Technology Data Exchange (ETDEWEB)

    Caliebe, W.A. [National Synchrotron Light Source, Brookhaven National Laboratory, Upton, NY 11973 (United States); So, I. [National Synchrotron Light Source, Brookhaven National Laboratory, Upton, NY 11973 (United States); Lenhard, A. [National Synchrotron Light Source, Brookhaven National Laboratory, Upton, NY 11973 (United States); Siddons, D.P. [National Synchrotron Light Source, Brookhaven National Laboratory, Upton, NY 11973 (United States)

    2006-11-15

    We have developed a cam-drive for quickly tuning the energy of an X-ray monochromator through an X-ray absorption edge for quick extended X-ray absorption spectroscopy (QEXAFS). The data are collected using a 4-channel, 12-bit multiplexed VME analog to digital converter and a VME angle encoder. The VME crate controller runs a real-time operating system. This system is capable of collecting 2 EXAFS-scans in 1 s with an energy stability of better than 1 eV. Additional improvements to increase the speed and the energy stability are under way.

  9. Cam-driven monochromator for QEXAFS

    Science.gov (United States)

    Caliebe, W. A.; So, I.; Lenhard, A.; Siddons, D. P.

    2006-11-01

    We have developed a cam-drive for quickly tuning the energy of an X-ray monochromator through an X-ray absorption edge for quick extended X-ray absorption spectroscopy (QEXAFS). The data are collected using a 4-channel, 12-bit multiplexed VME analog to digital converter and a VME angle encoder. The VME crate controller runs a real-time operating system. This system is capable of collecting 2 EXAFS-scans in 1 s with an energy stability of better than 1 eV. Additional improvements to increase the speed and the energy stability are under way.

  10. Cam-driven monochromator for QEXAFS

    International Nuclear Information System (INIS)

    Caliebe, W.A.; So, I.; Lenhard, A.; Siddons, D.P.

    2006-01-01

    We have developed a cam-drive for quickly tuning the energy of an X-ray monochromator through an X-ray absorption edge for quick extended X-ray absorption spectroscopy (QEXAFS). The data are collected using a 4-channel, 12-bit multiplexed VME analog to digital converter and a VME angle encoder. The VME crate controller runs a real-time operating system. This system is capable of collecting 2 EXAFS-scans in 1 s with an energy stability of better than 1 eV. Additional improvements to increase the speed and the energy stability are under way

  11. A Single-Element Plane Grating Monochromator

    Directory of Open Access Journals (Sweden)

    Michael C. Hettrick

    2016-01-01

    Concerted rotations of a self-focused varied line-space diffraction grating about its groove axis and surface normal define a new geometric class of monochromator. Defocusing is canceled, while the scanned wavelength is reinforced at fixed conjugate distances and horizontal deviation angle. This enables high spectral resolution over a wide band, and is of particular advantage at grazing reflection angles. A new, rigorous light-path formulation employs non-paraxial reference points to isolate the lateral ray aberrations, with those of power-sum ≤ 3 explicitly expanded for a plane grating. Each of these 14 Fermat equations agrees precisely with the value extracted from numerical raytrace simulations. An example soft X-ray design (6° deviation angle and 2 × 4 mrad aperture) attains a resolving power > 25,000 over a three-octave scan range. The proposed rotation scheme is not limited to plane surfaces or monochromators, providing a new degree of freedom in optical design.

  12. On progress of the solution of the stationary 2-dimensional neutron diffusion equation: a polynomial approximation method with error analysis

    International Nuclear Information System (INIS)

    Ceolin, C.; Schramm, M.; Bodmann, B.E.J.; Vilhena, M.T.

    2015-01-01

    Recently the stationary neutron diffusion equation in heterogeneous rectangular geometry was solved by the expansion of the scalar fluxes in polynomials in terms of the spatial variables (x, y), considering the two-group energy model. The present discussion focuses on an error analysis of the aforementioned solution. More specifically, we show how the spatial subdomain segmentation is related to the degree of the polynomial and the Lipschitz constant. This relation allows one to solve the 2-D neutron diffusion problem with second-degree polynomials in each subdomain. This solution is exact at the knots where the Lipschitz cone is centered. Moreover, the solution has an analytical representation in each subdomain, with supremum and infimum functions that show the convergence of the solution. We illustrate the analysis with a selection of numerical case studies. (author)

  13. On progress of the solution of the stationary 2-dimensional neutron diffusion equation: a polynomial approximation method with error analysis

    Energy Technology Data Exchange (ETDEWEB)

    Ceolin, C., E-mail: celina.ceolin@gmail.com [Universidade Federal de Santa Maria (UFSM), Frederico Westphalen, RS (Brazil). Centro de Educacao Superior Norte; Schramm, M.; Bodmann, B.E.J.; Vilhena, M.T., E-mail: celina.ceolin@gmail.com [Universidade Federal do Rio Grande do Sul (UFRGS), Porto Alegre, RS (Brazil). Programa de Pos-Graduacao em Engenharia Mecanica

    2015-07-01

    Recently the stationary neutron diffusion equation in heterogeneous rectangular geometry was solved by the expansion of the scalar fluxes in polynomials in terms of the spatial variables (x, y), considering the two-group energy model. The present discussion focuses on an error analysis of the aforementioned solution. More specifically, we show how the spatial subdomain segmentation is related to the degree of the polynomial and the Lipschitz constant. This relation allows one to solve the 2-D neutron diffusion problem with second-degree polynomials in each subdomain. This solution is exact at the knots where the Lipschitz cone is centered. Moreover, the solution has an analytical representation in each subdomain, with supremum and infimum functions that show the convergence of the solution. We illustrate the analysis with a selection of numerical case studies. (author)

  14. Thermal bump removal of a crystal monochromator by designing an optimal shape

    Energy Technology Data Exchange (ETDEWEB)

    Micha, Jean-Sébastien, E-mail: micha@esrf.fr [CRG-IF BM32 Beamline, ESRF, 6 rue J. Horowitz, BP 220, 38043 Grenoble (France); UMR SPrAM 5819, CEA-Grenoble/INAC/SPrAM, 17 avenue des Martyrs, 38054 Grenoble Cedex 9 (France); Geaymond, Olivier [CRG-IF BM32 Beamline, ESRF, 6 rue J. Horowitz, BP 220, 38043 Grenoble (France); Institut Néel, CNRS, 25 avenue des Martyrs, 38054 Grenoble Cedex 9 (France); Rieutord, Francois [CRG-IF BM32 Beamline, ESRF, 6 rue J. Horowitz, BP 220, 38043 Grenoble (France); CEA-Grenoble/INAC/NRS, 17 avenue des Martyrs, 38054 Grenoble Cedex 9 (France)

    2013-05-11

    The thermal bump arising at the illuminated area of a water-cooled monochromator crystal can be considerably reduced by designing an appropriate crystal shape. Temperature and deformation have been simulated by finite element analysis (FEA) as a function of a few geometrical parameters describing the shape of the crystal. As a result, a new crystal shape has been found which optimizes the throughput of a double-crystal monochromator (DCM). The performance of the initial rectangular crystal and of the newly designed crystal, predicted by FEA-based calculations and measured during experimental tests on a synchrotron beamline, is reported. General design principles for overcoming heat load issues and the objective function using the slope errors derived from the FEA results are detailed. Current and foreseen performance at higher heat load is presented. Finally, the advantages and limits of this simple-to-design and cheap solution are discussed.

  15. Diffusion

    International Nuclear Information System (INIS)

    Kubaschewski, O.

    1983-01-01

    The diffusion rate values of titanium, its compounds and alloys are summarized and tabulated. The individual chemical diffusion coefficients and self-diffusion coefficients of certain isotopes are given. Experimental methods used for the determination of the diffusion coefficients are listed. Some values have been taken over from other studies. Also given are graphs showing the temperature dependence of diffusion and the variation of the diffusion coefficient with concentration.
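
    The temperature dependence of such tabulated coefficients is conventionally described by an Arrhenius law, D = D0·exp(−Q/(R·T)). As a minimal sketch of how a compilation like this is typically used (the pre-exponential factor and activation energy below are placeholder values, not numbers taken from this compilation):

      import math

      R = 8.314  # gas constant, J/(mol*K)

      def arrhenius_diffusivity(d0, q, temperature_k):
          """Diffusion coefficient D = D0 * exp(-Q / (R*T)).
          d0: pre-exponential factor (m^2/s), q: activation energy (J/mol)."""
          return d0 * math.exp(-q / (R * temperature_k))

      # Placeholder values for illustration only.
      D0 = 1.0e-6   # m^2/s
      Q = 150.0e3   # J/mol
      for T in (900.0, 1100.0, 1300.0):
          print(f"T = {T:.0f} K  ->  D = {arrhenius_diffusivity(D0, Q, T):.3e} m^2/s")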

  16. Plane grating monochromators for synchrotron radiation

    International Nuclear Information System (INIS)

    Howells, M.R.

    1979-01-01

    The general background and theoretical basis of plane grating monochromators (PGM's) is reviewed and the particular case of grazing incidence PGM's suitable for use with synchrotron radiation is considered in detail. The theory of reflection filtering is described and the problem of the finite source distance is shown to be of special importance with high brightness storage rings. The design philosophy of previous instruments is discussed and a new scheme proposed, aimed at dealing with the problem of the finite source distance. This scheme, involving a parabolic collimating mirror fabricated by diamond turning, is considered in the context of Wolter-type telescopes and microscopes. Some practical details concerning an instrument presently under construction using the new design are presented

  17. Rhodium SPND's Error Reduction using Extended Kalman Filter combined with Time Dependent Neutron Diffusion Equation

    International Nuclear Information System (INIS)

    Lee, Jeong Hun; Park, Tong Kyu; Jeon, Seong Su

    2014-01-01

    The Rhodium SPND is accurate in steady-state conditions but responds slowly to changes in neutron flux. The slow response time of the Rhodium SPND precludes its direct use for control and protection purposes, especially when the nuclear power plant is used for load following. To shorten the response time of the Rhodium SPND, several acceleration methods have been proposed, but they could not reflect the neutron flux distribution in the reactor core. On the other hand, some methods for core power distribution monitoring do not account for the slow response time of the Rhodium SPND or for noise effects. In this paper, the time-dependent neutron diffusion equation is used directly to estimate the reactor power distribution, and an extended Kalman filter is used to correct the neutron flux measured with the Rhodium SPNDs and to shorten their response time. The extended Kalman filter is an effective tool for reducing the measurement errors of the Rhodium SPNDs, and even a simple FDM solution of the time-dependent neutron diffusion equation can be an effective measure. This method reduces the random errors of the detectors and can follow the reactor power level without cross-section changes. It means the monitoring system need not recalculate cross sections at every time step, so the computing time is shortened. To minimize the delay of the Rhodium SPNDs, the conversion function h should be evaluated in a future study. The neutron reaction with Rh-103 involves several decay chains with half-lives over 40 seconds, causing a delay in detection. The time-dependent neutron diffusion equation will be combined with these decay chains. Power level and distribution changes corresponding to control rod movement, as well as the xenon effect, will be tested with a more detailed reference code. With these efforts, the final result is expected to serve as a powerful monitoring tool for the nuclear reactor core.

  18. Rhodium SPND's Error Reduction using Extended Kalman Filter combined with Time Dependent Neutron Diffusion Equation

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Jeong Hun; Park, Tong Kyu; Jeon, Seong Su [FNC Technology Co., Ltd., Yongin (Korea, Republic of)

    2014-05-15

    The Rhodium SPND is accurate in steady-state conditions but responds slowly to changes in neutron flux. The slow response time of the Rhodium SPND precludes its direct use for control and protection purposes, especially when the nuclear power plant is used for load following. To shorten the response time of the Rhodium SPND, several acceleration methods have been proposed, but they could not reflect the neutron flux distribution in the reactor core. On the other hand, some methods for core power distribution monitoring do not account for the slow response time of the Rhodium SPND or for noise effects. In this paper, the time-dependent neutron diffusion equation is used directly to estimate the reactor power distribution, and an extended Kalman filter is used to correct the neutron flux measured with the Rhodium SPNDs and to shorten their response time. The extended Kalman filter is an effective tool for reducing the measurement errors of the Rhodium SPNDs, and even a simple FDM solution of the time-dependent neutron diffusion equation can be an effective measure. This method reduces the random errors of the detectors and can follow the reactor power level without cross-section changes. It means the monitoring system need not recalculate cross sections at every time step, so the computing time is shortened. To minimize the delay of the Rhodium SPNDs, the conversion function h should be evaluated in a future study. The neutron reaction with Rh-103 involves several decay chains with half-lives over 40 seconds, causing a delay in detection. The time-dependent neutron diffusion equation will be combined with these decay chains. Power level and distribution changes corresponding to control rod movement, as well as the xenon effect, will be tested with a more detailed reference code. With these efforts, the final result is expected to serve as a powerful monitoring tool for the nuclear reactor core.
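
    The estimator structure described in this abstract can be sketched in a highly simplified form: a discretized flux/detector model propagated forward in time and corrected by a Kalman filter using the delayed rhodium signal. The one-node dynamics, the detector time constant and the noise levels below are illustrative placeholders, not the authors' model (which uses the full time-dependent diffusion equation and an extended, nonlinear filter):

      import numpy as np

      # Minimal sketch: a 2-state linear Kalman filter with state [flux, detector signal].
      # The detector signal relaxes toward the flux with a time constant that mimics the
      # slow rhodium response. All dynamics and noise levels are illustrative placeholders.
      dt = 1.0            # time step (s)
      tau = 60.0          # detector response time constant (s), placeholder
      alpha = dt / tau

      F = np.array([[1.0,   0.0],
                    [alpha, 1.0 - alpha]])   # flux: random walk; detector: lags the flux
      H = np.array([[0.0, 1.0]])             # only the delayed detector signal is measured
      Q = np.diag([1e-4, 1e-8])              # process noise covariance (placeholder)
      R = np.array([[1e-4]])                 # measurement noise covariance (placeholder)

      x = np.array([1.0, 1.0])               # initial estimate [flux, detector]
      P = np.eye(2) * 0.1

      def kf_step(x, P, z):
          """One predict/update cycle; returns the corrected state and covariance."""
          x_pred = F @ x
          P_pred = F @ P @ F.T + Q
          y = z - H @ x_pred                     # innovation
          S = H @ P_pred @ H.T + R
          K = P_pred @ H.T @ np.linalg.inv(S)    # Kalman gain
          x_new = x_pred + (K @ y).ravel()
          P_new = (np.eye(2) - K @ H) @ P_pred
          return x_new, P_new

      # Synthetic demonstration: the true flux steps from 1.0 to 1.2 at step 100.
      rng = np.random.default_rng(0)
      true_flux, det = 1.0, 1.0
      for k in range(300):
          if k == 100:
              true_flux = 1.2
          det += alpha * (true_flux - det)        # slow detector response
          z = det + rng.normal(0.0, 1e-2)         # noisy detector reading
          x, P = kf_step(x, P, np.array([z]))
      print("estimated flux after step:", round(x[0], 3))  # should approach ~1.2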

  19. Processing method for high resolution monochromator

    International Nuclear Information System (INIS)

    Kiriyama, Koji; Mitsui, Takaya

    2006-12-01

    A processing method for high resolution monochromator (HRM) has been developed at Japanese Atomic Energy Agency/Quantum Beam Science Directorate/Synchrotron Radiation Research unit at SPring-8. For manufacturing a HRM, a sophisticated slicing machine and X-ray diffractometer have been installed for shaping a crystal ingot and orienting precisely the surface of a crystal ingot, respectively. The specification of the slicing machine is following; Maximum size of a diamond blade is φ 350mm in diameter, φ 38.1mm in the spindle diameter, and 2mm in thickness. A large crystal such as an ingot with 100mm in diameter, 200mm in length can be cut. Thin crystal samples such as a wafer can be also cut using by another sample holder. Working distance of a main shaft with the direction perpendicular to working table in the machine is 350mm at maximum. Smallest resolution of the main shaft with directions of front-and-back and top-and-bottom are 0.001mm read by a digital encoder. 2mm/min can set for cutting samples in the forward direction. For orienting crystal faces relative to the blade direction adjustment, a one-circle goniometer and 2-circle segment are equipped on the working table in the machine. A rotation and a tilt of the stage can be done by manual operation. Digital encoder in a turn stage is furnished and has angle resolution of less than 0.01 degrees. In addition, a hand drill as a supporting device for detailed processing of crystal is prepared. Then, an ideal crystal face can be cut from crystal samples within an accuracy of about 0.01 degrees. By installation of these devices, a high energy resolution monochromator crystal for inelastic x-ray scattering and a beam collimator are got in hand and are expected to be used for nanotechnology studies. (author)

  20. Design and optimization of the grating monochromator for soft X-ray self-seeding FELs

    Energy Technology Data Exchange (ETDEWEB)

    Serkez, Svitozar

    2015-10-15

    The emergence of Free Electron Lasers (FEL) as a fourth generation of light sources is a breakthrough. FELs operating in the X-ray range (XFELs) allow one to carry out completely new experiments from which most of the natural sciences would probably benefit. Self-amplified spontaneous emission (SASE) is the baseline FEL operation mode: the radiation pulse starts as spontaneous emission from the electron bunch and is amplified during the FEL process until it reaches saturation. The SASE FEL radiation usually has poor properties in terms of spectral bandwidth or, equivalently, longitudinal coherence. Self-seeding is a promising approach to narrow the SASE bandwidth of XFELs significantly in order to produce nearly transform-limited pulses. It is achieved by monochromatizing the radiation pulse in the middle of the FEL amplification process. Following the successful demonstration of the self-seeding setup in the hard X-ray range at the LCLS, there is a need for a self-seeding extension into the soft X-ray range. Here a numerical method to simulate the soft X-ray self-seeding (SXRSS) monochromator performance is presented. It allows one to perform start-to-end self-seeded FEL simulations along with (in our case) the GENESIS simulation code. Based on this method, the performance of the LCLS self-seeded operation was simulated, showing good agreement with experiment. Also, the SXRSS monochromator design developed at SLAC was adapted for the SASE3 type undulator beamline at the European XFEL. The optical system was studied using Gaussian beam optics, a wave optics propagation method and ray tracing to evaluate the performance of the monochromator itself. The wave optics analysis takes into account the actual beam wavefront of the radiation from the coherent FEL source, third-order aberrations and height errors from each optical element. The monochromator design is based on a toroidal VLS grating working at a fixed incidence angle mounting without both entrance and exit slits.

  1. Design and optimization of the grating monochromator for soft X-ray self-seeding FELs

    International Nuclear Information System (INIS)

    Serkez, Svitozar

    2015-10-01

    The emergence of Free Electron Lasers (FEL) as a fourth generation of light sources is a breakthrough. FELs operating in the X-ray range (XFELs) allow one to carry out completely new experiments from which most of the natural sciences would probably benefit. Self-amplified spontaneous emission (SASE) is the baseline FEL operation mode: the radiation pulse starts as spontaneous emission from the electron bunch and is amplified during the FEL process until it reaches saturation. The SASE FEL radiation usually has poor properties in terms of spectral bandwidth or, equivalently, longitudinal coherence. Self-seeding is a promising approach to narrow the SASE bandwidth of XFELs significantly in order to produce nearly transform-limited pulses. It is achieved by monochromatizing the radiation pulse in the middle of the FEL amplification process. Following the successful demonstration of the self-seeding setup in the hard X-ray range at the LCLS, there is a need for a self-seeding extension into the soft X-ray range. Here a numerical method to simulate the soft X-ray self-seeding (SXRSS) monochromator performance is presented. It allows one to perform start-to-end self-seeded FEL simulations along with (in our case) the GENESIS simulation code. Based on this method, the performance of the LCLS self-seeded operation was simulated, showing good agreement with experiment. Also, the SXRSS monochromator design developed at SLAC was adapted for the SASE3 type undulator beamline at the European XFEL. The optical system was studied using Gaussian beam optics, a wave optics propagation method and ray tracing to evaluate the performance of the monochromator itself. The wave optics analysis takes into account the actual beam wavefront of the radiation from the coherent FEL source, third-order aberrations and height errors from each optical element. The monochromator design is based on a toroidal VLS grating working at a fixed incidence angle mounting without both entrance and exit slits.

  2. A pseudo-curved oriented pyrolytic graphite neutron monochromator

    International Nuclear Information System (INIS)

    Ettedgui, H.; Gurewitz, E.; Pinto, H.

    1979-03-01

    A pseudo-curved neutron monochromator with a continuously variable curvature was constructed from four flat pieces of oriented pyrolytic graphite (OPG). The curvatures which yield maximum diffracted intensities were determined for neutrons of wavelengths 1 Å and 2.4 Å. The intensity increase relative to that of a flat monochromator is a factor of 2 and 1.5 for 1 Å and 2.4 Å, respectively. The neutron flux at three positions along the neutron path was determined by gold foil activation and compared with the flux from flat monochromators of OPG and copper.

  3. Accelerating inference for diffusions observed with measurement error and large sample sizes using approximate Bayesian computation

    DEFF Research Database (Denmark)

    Picchini, Umberto; Forman, Julie Lyng

    2016-01-01

    In recent years, dynamical modelling has been provided with a range of breakthrough methods to perform exact Bayesian inference. However, it is often computationally unfeasible to apply exact statistical methodologies in the context of large data sets and complex models. This paper considers a nonlinear stochastic differential equation model observed with correlated measurement errors and an application to protein folding modelling. An approximate Bayesian computation (ABC)-MCMC algorithm is suggested to allow inference for model parameters within reasonable time constraints. A simulation study is conducted to compare our strategy with exact Bayesian inference, the latter being two orders of magnitude slower than ABC-MCMC for the considered set-up. Finally, the ABC algorithm is applied to a large protein data set. The suggested methodology is fairly general...
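
    A minimal ABC-MCMC sketch in the spirit of this abstract, for an Ornstein-Uhlenbeck diffusion observed with additive measurement error; the model, the summary statistics and the tolerance below are illustrative choices, not the protein-folding model or the algorithm settings of the paper:

      import numpy as np

      rng = np.random.default_rng(1)

      def simulate_ou(theta, n=200, dt=0.1, x0=0.0, obs_sd=0.2):
          """Euler-Maruyama simulation of dX = -theta*X dt + dW, observed with noise."""
          x = np.empty(n)
          x[0] = x0
          for i in range(1, n):
              x[i] = x[i-1] - theta * x[i-1] * dt + np.sqrt(dt) * rng.normal()
          return x + rng.normal(0.0, obs_sd, size=n)   # add measurement error

      def summaries(y):
          """Cheap summary statistics: sample variance and lag-1 autocorrelation."""
          return np.array([np.var(y), np.corrcoef(y[:-1], y[1:])[0, 1]])

      # "Observed" data generated from a known parameter, for illustration only.
      theta_true = 0.8
      y_obs = simulate_ou(theta_true)
      s_obs = summaries(y_obs)

      def abc_mcmc(n_iter=2000, eps=0.1, prop_sd=0.2):
          """ABC-MCMC: with a flat prior on (0, inf) and a symmetric proposal,
          a move is accepted iff the simulated summaries fall within eps of the data."""
          theta = 1.0
          chain = []
          for _ in range(n_iter):
              theta_prop = theta + prop_sd * rng.normal()
              if theta_prop > 0:                       # stay inside the prior support
                  s_prop = summaries(simulate_ou(theta_prop))
                  if np.linalg.norm(s_prop - s_obs) < eps:
                      theta = theta_prop               # accept
              chain.append(theta)
          return np.array(chain)

      chain = abc_mcmc()
      print("approximate posterior mean for theta:", round(chain[500:].mean(), 3))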

  4. Grating monochromator for soft X-ray self-seeding the European XFEL

    Energy Technology Data Exchange (ETDEWEB)

    Serkez, Svitozar; Kocharyan, Vitali; Saldin, Evgeni [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Geloni, Gianluca [European XFEL GmbH, Hamburg (Germany)

    2013-02-15

    Self-seeding is a promising approach to significantly narrow the SASE bandwidth of XFELs to produce nearly transform-limited pulses. The implementation of this method in the soft X-ray wavelength range necessarily involves gratings as dispersive elements. We study a very compact self-seeding scheme with a grating monochromator originally designed at SLAC, which can be straightforwardly installed in the SASE3 type undulator beamline at the European XFEL. The monochromator design is based on a toroidal VLS grating working at a fixed incidence angle mounting without entrance slit. It covers the spectral range from 300 eV to 1000 eV. The optical system was studied using a wave optics method (in comparison with ray tracing) to evaluate the performance of the self-seeding scheme. Our wave optics analysis takes into account the actual beam wavefront of the radiation from the coherent FEL source, third-order aberrations, and errors from each optical element. Wave optics is the only method available, in combination with FEL simulations, for the design of a self-seeding monochromator without exit slit. We show that, without an exit slit, the self-seeding scheme is distinguished by the much needed experimental simplicity, and can give practically the same resolving power (about 7000) as with an exit slit. Wave optics is also naturally applicable to calculations of the self-seeding scheme efficiency, which include the monochromator transmittance and the effect of the mismatch between the seed beam and the electron beam. Simulations show that the FEL power reaches 1 TW and that the spectral density for a TW pulse is about two orders of magnitude higher than that for the SASE pulse at saturation.

  5. Grating monochromator for soft X-ray self-seeding the European XFEL

    International Nuclear Information System (INIS)

    Serkez, Svitozar; Kocharyan, Vitali; Saldin, Evgeni; Geloni, Gianluca

    2013-02-01

    Self-seeding is a promising approach to significantly narrow the SASE bandwidth of XFELs to produce nearly transform-limited pulses. The implementation of this method in the soft X-ray wavelength range necessarily involves gratings as dispersive elements. We study a very compact self-seeding scheme with a grating monochromator originally designed at SLAC, which can be straightforwardly installed in the SASE3 type undulator beamline at the European XFEL. The monochromator design is based on a toroidal VLS grating working at a fixed incidence angle mounting without entrance slit. It covers the spectral range from 300 eV to 1000 eV. The optical system was studied using a wave optics method (in comparison with ray tracing) to evaluate the performance of the self-seeding scheme. Our wave optics analysis takes into account the actual beam wavefront of the radiation from the coherent FEL source, third-order aberrations, and errors from each optical element. Wave optics is the only method available, in combination with FEL simulations, for the design of a self-seeding monochromator without exit slit. We show that, without an exit slit, the self-seeding scheme is distinguished by the much needed experimental simplicity, and can give practically the same resolving power (about 7000) as with an exit slit. Wave optics is also naturally applicable to calculations of the self-seeding scheme efficiency, which include the monochromator transmittance and the effect of the mismatch between the seed beam and the electron beam. Simulations show that the FEL power reaches 1 TW and that the spectral density for a TW pulse is about two orders of magnitude higher than that for the SASE pulse at saturation.

  6. Second crystal cooling on cryogenically cooled undulator and wiggler double crystal monochromators

    International Nuclear Information System (INIS)

    Knapp, G. S.

    1998-01-01

    Simple methods for the cooling of the second crystals of cryogenically cooled undulator and wiggler double crystal monochromators are described. Copper braids between the first and second crystals are used to cool the second crystals of the double crystal monochromators. The method has proved successful for an undulator monochromator and we describe a design for a wiggler monochromator

  7. Synchrotron Radiation Beam Line of Piezoelectric Monochromator Control

    International Nuclear Information System (INIS)

    Ye Shengan; LIU Ping; Zheng Lifang

    2009-01-01

    This paper describes a piezo amplifier and servo-controller module used in the LN2-cooled monochromator control system. RS232 communication based on the EPICS software environment and the corresponding software have been implemented. (authors)

  8. Mechanical design and performance evaluation for plane grating monochromator in a soft X-ray microscopy beamline at SSRF.

    Science.gov (United States)

    Gong, Xuepeng; Lu, Qipeng

    2015-01-01

    A new monochromator has been designed to develop a high-performance soft X-ray microscopy beamline at the Shanghai Synchrotron Radiation Facility (SSRF). Owing to the required high resolving power and highly accurate spectral output, there are many technical difficulties. In this paper, the theoretical energy resolution and photon flux of the beamline, the two primary design targets for the monochromator, are calculated. For the wavelength scanning mechanism, the primary factors affecting the rotary angle errors are presented; the measured values are 0.15'' and 0.17'' for the plane mirror and the plane grating, respectively, which means that sufficient scanning precision can be provided for a specific wavelength. For the plane grating switching mechanism, the repeatabilities of the roll, yaw and pitch angles are 0.08'', 0.12'' and 0.05'', which guarantees accurate switching of the plane grating. After debugging, the repeatability of the light spot drift reaches 0.7'', which further improves the performance of the monochromator. The commissioning results show that the energy resolving power is higher than 10000 at the Ar L-edge, the photon flux is higher than 1 × 10⁸ photons/sec/200 mA, and the spatial resolution is better than 30 nm, demonstrating that the monochromator performs very well and reaches the theoretical predictions.

  9. Discontinuous Galerkin methods and a posteriori error analysis for heterogeneous diffusion problems

    Energy Technology Data Exchange (ETDEWEB)

    Stephansen, A.F

    2007-12-15

    In this thesis we analyse a discontinuous Galerkin (DG) method and two computable a posteriori error estimators for the linear and stationary advection-diffusion-reaction equation with heterogeneous diffusion. The DG method considered, the SWIP method, is a variation of the Symmetric Interior Penalty Galerkin method. The difference is that the SWIP method uses weighted averages with weights that depend on the diffusion. The a priori analysis shows optimal convergence with respect to mesh size and robustness with respect to heterogeneous diffusion, which is confirmed by numerical tests. Both a posteriori error estimators are of the residual type and control the energy (semi-)norm of the error. Local lower bounds are obtained, showing that almost all indicators are independent of the heterogeneities. The exception is the non-conforming part of the error, which has been evaluated using the Oswald interpolator. The second error estimator gives a sharper estimate than the first, but it is slightly more costly. This estimator is based on the construction of an H(div)-conforming Raviart-Thomas-Nedelec flux using the conservation properties of DG methods. Numerical results show that both estimators can be used for mesh adaptation. (author)

  10. Multiple order reflections in crystal neutron monochromators

    International Nuclear Information System (INIS)

    Fulfaro, R.

    1976-01-01

    A study of the higher-order reflections in neutron crystal monochromators was made in order to obtain, for the IEA single-crystal spectrometer, an operating range of 1.0 eV to 0.01 eV. Two crystals were studied: an Al(111) crystal near 1.0 eV and a Ge(111) crystal at lower energies. For the Ge(111) case, the higher-order contaminations in the reflected beam were determined using the gold total neutron cross section as a standard and performing the crystal reflectivity calculation for several orders of reflection. Knowledge of the contamination of each order as a function of neutron wavelength allows the filter thickness to be optimized in order to remove higher-order neutrons. The Ge(111) crystal was used because its second-order reflections are theoretically forbidden, giving it an advantage over other crystals, since measurements can be made down to 0.02 eV directly, without filters. In the energy range 0.02 to 0.01 eV, order contaminations higher than the second are present; therefore, either quartz filters are employed or calculated corrections are applied to the experimental data. The Al(111) crystal was used to estimate the second-order contamination effect in the iridium resonance measurements at E₀ = 0.654 eV. In that region, approximations can be made and it was not necessary to perform the crystal reflectivity calculation for the filter thickness optimization. Since only the second order affects the results in that region, tellurium was used for the filtering, because this element has a resonance in the range of neutrons with energy 4E₀.
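
    The order-contamination bookkeeping follows directly from Bragg's law: at a fixed crystal angle, n·λ = 2d·sinθ is satisfied simultaneously for λ, λ/2, λ/3, ..., i.e. for neutrons of energy E, 4E, 9E, ... A short sketch of this relation (the first-order energy is the iridium resonance quoted in the abstract; everything else is generic):

      # Energies of higher-order reflections passed by a crystal monochromator.
      # Bragg's law n*lambda = 2*d*sin(theta): at a fixed angle the crystal also
      # reflects lambda/n, and since E ~ 1/lambda^2 the n-th order carries n^2 * E1.

      E1 = 0.654  # first-order energy in eV (the iridium resonance cited above)

      for n in range(1, 5):
          print(f"order {n}: E = {n**2 * E1:.3f} eV")
      # The second order lands near 2.6 eV, which is why tellurium, with a
      # low-energy resonance in that range, can suppress the second-order
      # contamination in these measurements.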

  11. Asymmetric-cut variable-incident-angle monochromator.

    Science.gov (United States)

    Smither, R K; Graber, T J; Fernandez, P B; Mills, D M

    2012-03-01

    A novel asymmetric-cut variable-incident-angle monochromator was constructed and tested in 1997 at the Advanced Photon Source of Argonne National Laboratory. The monochromator was originally designed as a high-heat-load monochromator capable of handling 5-10 kW beams from a wiggler source. This was accomplished by spreading the x-ray beam out on the surface of an asymmetric-cut crystal and by using liquid-metal cooling of the first crystal. The monochromator turned out to be highly versatile and could perform many different types of experiments. It consisted of two 18° asymmetrically cut Si crystals that could be rotated about three independent axes. The first stage (Φ) rotates the crystal around an axis perpendicular to the diffraction plane. This rotation changes the angle of the incident beam with the surface of the crystal without changing the Bragg angle. The second rotation (Ψ) is perpendicular to the first and is used to control the shape of the beam footprint on the crystal. The third rotation (Θ) controls the Bragg angle. Besides the high-heat-load application, the use of asymmetrically cut crystals allows one to increase or decrease the acceptance angle for crystal diffraction of a monochromatic x-ray beam and to increase or decrease the wavelength bandwidth of the diffraction of a continuum source such as a bending-magnet beam or a normal x-ray-tube source. When the monochromator is used in the doubly expanding mode, it is possible to expand the vertical size of the double-diffracted beam by a factor of 10-15. When this was combined with a bending-magnet source, it was possible to generate an 8 keV area beam, 16 mm wide by 26 mm high, with uniform intensity and parallel to within 1.2 arc sec, that could be applied in imaging experiments.
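
    The beam expansion quoted above follows from the geometry of asymmetric Bragg diffraction: for an asymmetry angle α between the Bragg planes and the crystal surface, the ratio of the exit beam width to the incident beam width is approximately sin(θ_B + α)/sin(θ_B − α) (sign conventions vary). A small sketch using the 18° cut mentioned in the abstract; the Bragg angles are generic illustrations, not quoted operating points:

      import math

      def expansion_factor(bragg_deg: float, asym_deg: float) -> float:
          """Width ratio of the diffracted to the incident beam for an asymmetrically
          cut crystal, grazing-incidence convention:
          b = sin(theta_B + alpha) / sin(theta_B - alpha)."""
          tb = math.radians(bragg_deg)
          a = math.radians(asym_deg)
          return math.sin(tb + a) / math.sin(tb - a)

      # 18 degree asymmetric cut, as in the abstract; Bragg angles are illustrative.
      for bragg in (20.0, 25.0, 30.0):
          print(f"theta_B = {bragg:4.1f} deg -> single-bounce expansion "
                f"~ {expansion_factor(bragg, 18.0):.1f}x")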

  12. Mesh-size errors in diffusion-theory calculations using finite-difference and finite-element methods

    International Nuclear Information System (INIS)

    Baker, A.R.

    1982-07-01

    A study has been performed of mesh-size errors in diffusion-theory calculations using finite-difference and finite-element methods. As the objective was to illuminate the issues, the study was performed for a 1D slab model of a reactor with one neutron-energy group, for which analytical solutions were possible. A computer code, SLAB, was specially written to perform the finite-difference and finite-element calculations and also to obtain the analytical solutions. The standard finite-difference equations were obtained by starting with an expansion of the neutron current in powers of the mesh size, h, and keeping terms up to h². It was confirmed that these equations led to the well-known result that the criticality parameter varied with the square of the mesh size. An improved form of the finite-difference equations was obtained by continuing the expansion of the neutron current up to the term in h⁴. In this case, the criticality parameter varied as the fourth power of the mesh size. The finite-element solutions for 2 and 3 nodes per element revealed that the criticality parameter varied as the square and fourth power of the mesh size, respectively. Numerical results are presented for a bare reactive core of uniform composition with 2 zones of different uniform mesh and for a reactive core with an absorptive reflector. (author)
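
    As a minimal illustration of the kind of convergence study described (a bare, homogeneous 1D slab, one energy group, standard second-order central differences; the slab width and cross sections are placeholder values, not those of the paper), one can watch the computed criticality parameter approach the analytical value k = νΣf/(Σa + D(π/a)²) as the mesh is refined:

      import numpy as np

      # One-group diffusion, bare homogeneous slab with zero-flux boundaries:
      # -D*phi'' + Sig_a*phi = (1/k)*nuSig_f*phi, discretized with central differences.
      # All material data and the slab width are placeholder values for illustration.
      D, sig_a, nu_sig_f = 1.0, 0.07, 0.08   # cm, 1/cm, 1/cm (placeholders)
      a = 100.0                              # slab width (cm)

      k_exact = nu_sig_f / (sig_a + D * (np.pi / a) ** 2)

      def k_eff(n_mesh: int) -> float:
          """Fundamental eigenvalue of the finite-difference operator (interior nodes)."""
          h = a / n_mesh
          n = n_mesh - 1                                   # interior points
          main = 2.0 * D / h**2 + sig_a
          off = -D / h**2
          A = (np.diag(np.full(n, main))
               + np.diag(np.full(n - 1, off), 1)
               + np.diag(np.full(n - 1, off), -1))
          # A*phi = (1/k)*nuSig_f*phi  =>  k is the largest eigenvalue of nuSig_f*A^{-1}.
          return float(np.max(np.linalg.eigvals(nu_sig_f * np.linalg.inv(A)).real))

      print(f"analytical k = {k_exact:.6f}")
      for n_mesh in (10, 20, 40, 80):
          k = k_eff(n_mesh)
          print(f"N = {n_mesh:3d}  k = {k:.6f}  error = {k - k_exact:+.2e}")
      # The error should shrink roughly by a factor of 4 per mesh halving (O(h^2)),
      # as stated in the abstract for the standard finite-difference equations.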

  13. MONO: A program to calculate synchrotron beamline monochromator throughputs

    International Nuclear Information System (INIS)

    Chapman, D.

    1989-01-01

    A set of Fortran programs has been developed to calculate the expected throughput of x-ray monochromators with a filtered synchrotron source; it is applicable to bending-magnet and wiggler beamlines. These programs calculate the normalized throughput and the filtered synchrotron spectrum passed by multiple-element, flat, unfocussed monochromator crystals of the Bragg or Laue type as a function of incident beam divergence, energy and polarization. The reflected and transmitted beam of each crystal is calculated using the dynamical theory of diffraction. Multiple-crystal arrangements in the dispersive and non-dispersive modes are allowed, as well as crystal asymmetry and energy or angle offsets. Filters or windows of arbitrary elemental composition may be used to filter the incident synchrotron beam. This program should be useful for predicting the intensities available from many beamline configurations as well as for assisting in the design of new monochromator and analyzer systems. 6 refs., 3 figs
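
    The energy-angle relation underlying any such throughput calculation is Bragg's law, E[keV] ≈ 12.398/(2·d·sinθ) with d in Å. A small sketch for a Si(111) crystal (the crystal choice is only an example; the MONO programs themselves handle general Bragg- and Laue-type crystals, asymmetry and filters):

      import math

      HC_KEV_ANGSTROM = 12.39842        # h*c in keV*Angstrom
      D_SI_111 = 5.4309 / math.sqrt(3)  # Si(111) d-spacing in Angstrom (~3.1356 A)

      def energy_kev(theta_deg, d_spacing=D_SI_111, order=1):
          """Photon energy selected at Bragg angle theta (n*lambda = 2*d*sin(theta))."""
          wavelength = 2.0 * d_spacing * math.sin(math.radians(theta_deg)) / order
          return HC_KEV_ANGSTROM / wavelength

      def bragg_angle_deg(energy, d_spacing=D_SI_111, order=1):
          """Bragg angle required to select a given photon energy (keV)."""
          wavelength = order * HC_KEV_ANGSTROM / energy
          return math.degrees(math.asin(wavelength / (2.0 * d_spacing)))

      for E in (5.0, 10.0, 20.0):
          theta = bragg_angle_deg(E)
          print(f"E = {E:5.1f} keV  ->  theta_B(Si 111) = {theta:6.3f} deg  "
                f"(check: {energy_kev(theta):.3f} keV)")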

  14. A hard X-ray laboratory for monochromator characterisation

    Energy Technology Data Exchange (ETDEWEB)

    Hamelin, B [Institut Max von Laue - Paul Langevin (ILL), 38 - Grenoble (France)

    1997-04-01

    Since their installation at the ILL during the 1970s, the ILL γ-ray diffractometers have been intensively used in the development of neutron monochromators. However, the ageing of the sources and new developments in hard X-ray diffractometry led to a decision at the end of 1995 to replace the existing γ-ray laboratory with a hard X-ray laboratory based on a 420 keV generator, making available in the long term several beam-lines for rapid characterisation of monochromator crystals. The facility is now installed and its characteristics and advantages are outlined. (author). 2 refs.

  15. Development of Bent Perfect Crystal Monochromator (II): Experiments for the evaluation of BPC for a monochromator of neutron diffractometers

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Yong Nam; Kim, Shin Ae; Lee, Chang Hee [KAERI, Taejon (Korea, Republic of); Kim, Sung Kyu [Pusan National Univ., Pusan (Korea, Republic of); Kim, Seong Baek [Kookmin Univ., Seoul (Korea, Republic of); Mikula, P. [NPI, Prague (Czech Republic)

    2004-11-01

    Various experimental properties of the BPC monochromator in the FCD mode of the ST1 test station have been investigated. To test and verify the performance of the Si-BPC monochromator as a substitute monochromator, diffraction measurements using a copper single crystal and a polycrystalline copper rod in various diffraction geometries were carried out. Considering that the FCD instrument is used for both single-crystal and texture measurements, a specially cut silicon BPC slab containing the (331), (311) and (220) diffraction planes would be the best candidate. The diffraction measurements at monochromatic focusing are the first experimental demonstration of the theoretical properties and suggest that simultaneous measurement at both the parallel and anti-parallel diffraction positions could be achievable with a reasonable resolution property as well as an intensity gain.

  16. Development of an imaging VUV monochromator in normal incidence region

    Energy Technology Data Exchange (ETDEWEB)

    Koog, Joong-San

    1996-07-01

    This paper describes the development of a two-dimensional imaging monochromator system. A commercial normal-incidence monochromator working on an off-Rowland circle mounting is used for this purpose. Imaging is achieved by utilizing the pinhole-camera effect created by an entrance slit of limited height. The astigmatism in the normal-incidence mounting is small compared with a grazing-incidence mount, but has a finite value. The point is that, for near-normal incidence, the vertical focus produced by the concave grating lies outside, beyond the exit slit. Therefore, by putting a 2-D detector at a position away from the exit slit (∼30 cm), a one-to-one correspondence between the position of a point on the detector and where it originated in the source is accomplished. This paper consists of (1) the principle and development of the imaging monochromator using the off-Rowland mounting, including the 2-D detector system, (2) a computer simulation by ray tracing to investigate the imaging properties of the system and the aberration from the spherical concave grating at the exit slit, (3) the plasma light source (TPD-S) for the test experiments, (4) the performance of the imaging monochromator system in terms of spatial resolution and sensitivity, and (5) the use of this system for diagnostic studies on the JIPP T-IIU tokamak. (J.P.N.)

  17. High Heat Load Diamond Monochromator Project at ESRF

    International Nuclear Information System (INIS)

    Van aerenbergh, P.; Detlefs, C.; Haertwig, J.; Lafford, T. A.; Masiello, F.; Roth, T.; Schmid, W.; Wattecamps, P.; Zhang, L.

    2010-01-01

    Due to its outstanding thermal properties, diamond is an attractive alternative to silicon as a monochromator material for high intensity X-ray beams. To date, however, the practical applications have been limited by the small size and relatively poor crystallographic quality of the crystals available. The ESRF Diamond Project Group has studied the perfection of diamonds in collaboration with industry and universities. The group has also designed and tested different stress-free mounting techniques to integrate small diamonds into larger X-ray optical elements. We now propose to develop a water-cooled Bragg-Bragg double crystal monochromator using diamond (111) crystals. It will be installed on the ESRF undulator beamline, ID06, for testing under high heat load. This monochromator will be best suited for the low energy range, typically from ∼3.4 keV to 15 keV, due to the small size of the diamonds available and the size of the beam footprint. This paper presents stress-free mounting techniques studied using X-ray diffraction imaging, and their thermal-mechanical analysis by finite element modelling, as well as the status of the ID06 monochromator project.

  18. Modifications to improve entrance slit thermal stability for grasshopper monochromators

    Science.gov (United States)

    Wallace, Daniel J.; Rogers, Gregory C.; Crossley, Sherry L.

    1994-08-01

    As new monochromators are designed for high-flux storage rings, computer modeling and thermal engineering can be done to process increased heat loads and achieve mechanical stability. Several older monochromators, such as the Mark 2 and Mark 5 Grasshopper monochromators, which were designed in 1974, have thermal instabilities in their entrance slit mechanisms. The Grasshoppers operating with narrow slits experience closure of the entrance slit from thermal expansion. In extreme cases, the thermal expansion of the precision components has caused permanent mechanical damage, leaving the slit uncalibrated and/or inoperable. For the Mark 2 and Mark 5 Grasshopper monochromators at the Synchrotron Radiation Center, the original 440 stainless steel entrance slit jaws were retrofitted with an Invar (low expansion Fe, Ni alloy) slit jaw. To transfer the heat from the critical components, two flexible heat straps of Cu were attached. These changes allow safe operation with a 10 μm entrance slit width where the previous limit was 30 μm. After an initial 2 min equilibration, the slit remains stable to 10%, with 100 mA of beam current. Additional improvements in slit thermal stability are planned for a third Grasshopper.

  19. Composite germanium monochromators - results for the TriCS

    Energy Technology Data Exchange (ETDEWEB)

    Schefer, J.; Fischer, S.; Boehm, M.; Keller, L.; Horisberger, M.; Medarde, M.; Fischer, P. [Paul Scherrer Inst. (PSI), Villigen (Switzerland)

    1997-09-01

    Composite germanium monochromators are at the beginning of their application in neutron diffraction. We show here the importance of permanent quality control with neutrons, using the example of the 311 wafers which will be used on the single-crystal diffractometer TriCS at SINQ. (author) 2 figs., 3 refs.

  20. Development of an imaging VUV monochromator in normal incidence region

    International Nuclear Information System (INIS)

    Koog, Joong-San.

    1996-07-01

    This paper describes the development of a two-dimensional imaging monochromator system. A commercial normal-incidence monochromator working on an off-Rowland circle mounting is used for this purpose. Imaging is achieved by utilizing the pinhole-camera effect created by an entrance slit of limited height. The astigmatism in the normal-incidence mounting is small compared with a grazing-incidence mount, but has a finite value. The point is that, for near-normal incidence, the vertical focus produced by the concave grating lies outside, beyond the exit slit. Therefore, by putting a 2-D detector at a position away from the exit slit (∼30 cm), a one-to-one correspondence between the position of a point on the detector and where it originated in the source is accomplished. This paper consists of 1) the principle and development of the imaging monochromator using the off-Rowland mounting, including the 2-D detector system, 2) a computer simulation by ray tracing to investigate the imaging properties of the system and the aberration from the spherical concave grating at the exit slit, 3) the plasma light source (TPD-S) for the test experiments, 4) the performance of the imaging monochromator system in terms of spatial resolution and sensitivity, and 5) the use of this system for diagnostic studies on the JIPP T-IIU tokamak. (J.P.N.)

  1. A Double-Crystal Monochromator for Neutron Stress Diffractometry

    Czech Academy of Sciences Publication Activity Database

    Em, V.; Balagurov, A. M.; Glazkov, V. P.; Karpov, I. D.; Mikula, Pavol; Miron, N. F.; Somenkov, V. A.; Sumin, V. V.; Šaroun, Jan; Shushunov, M. N.

    2017-01-01

    Vol. 60, No. 4 (2017), pp. 526-532 ISSN 0020-4412 Institutional support: RVO:61389005 Keywords: neutron diffraction * double-crystal * monochromator Subject RIV: BM - Solid Matter Physics; Magnetism OBOR OECD: Condensed matter physics (including formerly solid state physics, supercond.) Impact factor: 0.437, year: 2016

  2. Evaluation of Systematic and Random Error in the Measurement of Equilibrium Solubility and Diffusion Coefficient for Liquids in Polymers

    National Research Council Canada - National Science Library

    Shuely, Wendel

    2001-01-01

    A standardized thermogravimetric analyzer (TGA) desorption method for measuring the equilibrium solubility and diffusion coefficient of toxic contaminants with polymers was further developed and evaluated...

  3. Design and fabrication of an active polynomial grating for soft-X-ray monochromators and spectrometers

    CERN Document Server

    Chen, S J; Perng, S Y; Kuan, C K; Tseng, T C; Wang, D J

    2001-01-01

    An active polynomial grating has been designed for use in synchrotron radiation soft-X-ray monochromators and spectrometers. The grating can be dynamically adjusted to obtain the third-order-polynomial surface needed to eliminate the defocus and coma aberrations at any photon energy. Ray-tracing results confirm that a monochromator or spectrometer based on this active grating has nearly no aberration limit to the overall spectral resolution in the entire soft-X-ray region. The grating substrate is made of a precisely milled 17-4 PH stainless steel parallel plate, which is joined to a flexure-hinge bender shaped by wire electrical discharge machining. The substrate is ground into a concave cylindrical shape with a nominal radius and then polished to achieve a roughness of 0.45 nm and a slope error of 1.2 µrad rms. The long trace profiler measurements show that the active grating can reach the desired third-order polynomial with a high degree of figure accuracy.

  4. Development of an automated scanning monochromator for sensitivity calibration of the MUSTANG instrument

    Science.gov (United States)

    Rivers, Thane D.

    1992-06-01

    An Automated Scanning Monochromator was developed using an Acton Research Corporation (ARC) monochromator, an Ealing photomultiplier tube and a Macintosh PC in conjunction with LabVIEW software. The LabVIEW Virtual Instrument written to operate the ARC monochromator is a mouse-driven, user-friendly program developed for automated spectral data measurements. The resolution and sensitivity of the Automated Scanning Monochromator system were determined experimentally. The automated monochromator was then used for spectral measurements of a platinum lamp. Additionally, the reflectivity curve of a BaSO4-coated screen was measured. The reflectivity measurements indicate a large discrepancy with the expected results; further analysis of the reflectivity experiment is required for conclusive results.

  5. Cryogenically cooled monochromators for the Advanced Photon Source

    International Nuclear Information System (INIS)

    Mills, D.M.

    1996-01-01

    The use of cryogenically cooled monochromators looks to be a very promising possibility for the Advanced Photon Source. This position has recently been bolstered by several experiments performed on beamlines at the ESRF and CHESS. At the ESRF, several crystal geometries have been tested that were designed for high power densities (≳150 W/mm²) and moderate total absorbed powers (<200 W). These geometries have proven to be very successful at handling these power parameters, with measured strains on the arc-second level. The experiments performed at CHESS were focused on high total power (≳1000 W) but moderate power densities. As with the previously mentioned experiments, the crystals designed for this application performed superbly, with no measurable broadening of the rocking curves on the arc-second level. These experiments will be summarized and, based on these results, the performance of cryogenic monochromators for the APS will be assessed. copyright 1996 American Institute of Physics

  6. Recovery, modernization and computerization of the monochromator MDR-23

    International Nuclear Information System (INIS)

    Miranda, L. J.

    2012-01-01

    For use in the newly created Optics Laboratory at CEADEN, one of the necessary pieces of equipment is the MDR-23 monochromator, which required recovery, modernization (replacement of the power supply and control) and computerization. A VI (virtual instrument) was designed to control the stepper motor through a PC using LabVIEW 7.1; it allows users to select the direction mode, the number of steps and the motor speed, and to set the initial position value and the limits of the monochromator's working range before starting to work with it. The principle of operation of the program is described in detail to facilitate understanding of how to operate and use the graphical program and to achieve efficient use of the equipment. (Author)

  7. Spectrum scanning of monochromator by microcontroller ATtiny 2313

    International Nuclear Information System (INIS)

    Veklich, A.M.; Boretskij, V.F.; Kleshich, M.M.; Fesenko, S.O.

    2009-01-01

    The results of developing an interface based on the Atmel ATtiny 2313 microcontroller are presented. This device is dedicated to spectrum scanning of monochromators by means of a stepper motor. The design principles of the motor control scheme are analyzed. An original algorithm for the microcontroller program was suggested and implemented. The possibility of adding a USB interface so that the device can be controlled by a personal computer is considered.

  8. Design and performance of the ALS double-crystal monochromator

    Energy Technology Data Exchange (ETDEWEB)

    Jones, G.; Ryce, S.; Perera, R.C.C. [Lawrence Berkeley National Lab., CA (United States)] [and others

    1997-04-01

    A new "Cowan type" double-crystal monochromator, based on the boomerang design used at NSLS beamline X-24A, has been developed for beamline 9.3.1 at the ALS, a windowless UHV beamline covering the 1-6 keV photon-energy range. Beamline 9.3.1 is designed to simultaneously achieve the goals of high energy resolution, high flux, and high brightness at the sample. The mechanical design has been simplified, and recent developments in technology have been incorporated. The measured mechanical precision of the monochromator shows significant improvement over existing designs. In tests with x-rays at NSLS beamline X-23 A2, maximum deviations in the intensity of monochromatic light were just 7% during scans of several hundred eV in the vicinity of the Cr K edge (6 keV), with the monochromator operating without intensity feedback. Such precision is essential because of the high brightness of the ALS radiation and the overall length of beamline 9.3.1 (26 m).

  9. Design and performance of the ALS double-crystal monochromator

    International Nuclear Information System (INIS)

    Jones, G.; Ryce, S.; Perera, R.C.C.

    1997-01-01

    A new "Cowan type" double-crystal monochromator, based on the boomerang design used at NSLS beamline X-24A, has been developed for beamline 9.3.1 at the ALS, a windowless UHV beamline covering the 1-6 keV photon-energy range. Beamline 9.3.1 is designed to simultaneously achieve the goals of high energy resolution, high flux, and high brightness at the sample. The mechanical design has been simplified, and recent developments in technology have been incorporated. The measured mechanical precision of the monochromator shows significant improvement over existing designs. In tests with x-rays at NSLS beamline X-23 A2, maximum deviations in the intensity of monochromatic light were just 7% during scans of several hundred eV in the vicinity of the Cr K edge (6 keV), with the monochromator operating without intensity feedback. Such precision is essential because of the high brightness of the ALS radiation and the overall length of beamline 9.3.1 (26 m).

  10. Beam-smiling in bent-Laue monochromators

    International Nuclear Information System (INIS)

    Ren, B.; Dilmanian, F. A.; Wu, X. Y.; Huang, X.; Chapman, L. D.; Ivanov, I.; Zhong, Z.; Thomlinson, W. C.

    1997-01-01

    When a wide fan-shaped x-ray beam is diffracted by a bent crystal in the Laue geometry, the profile of the diffracted beam generally does not appear as a straight line, but as a line with its ends curved up or curved down. This effect, referred to as 'beam-smiling', has been a major obstacle in developing bent-Laue crystal monochromators for medical applications of synchrotron x-rays. We modeled a cylindrically bent crystal using the Finite Element Analysis (FEA) method, and we carried out experiments at the National Synchrotron Light Source and the Cornell High Energy Synchrotron Source. Our studies show that, while beam-smiling exists in most of the crystal's area because of anticlastic bending effects, there is a region parallel to the bending axis of the crystal where the diffracted beam is 'smile-free'. By applying asymmetrical bending, this smile-free region can be shifted vertically away from the geometric center of the crystal, as desired. This leads to a novel method of compensating for beam-smiling. We will discuss the method of 'differential bending' for smile removal, beam-smiling in the Cauchois and polychromatic geometries, and the implications of the method for developing single- and double-bent Laue monochromators. The experimental results will be discussed, concentrating on specific beam-smiling observation and removal as applied to the new monochromator of the Multiple Energy Computed Tomography (MECT) project of the Medical Department, Brookhaven National Laboratory.

  11. Software feedback for monochromator tuning at UNICAT (abstract)

    Science.gov (United States)

    Jemian, Pete R.

    2002-03-01

    Automatic tuning of double-crystal monochromators presents an interesting challenge in software. The goal is either to maximize, or to hold constant, the throughput of the monochromator. An additional goal of the software feedback is to disable itself when there is no beam and then, at the user's discretion, re-enable itself when the beam returns. These and other routine goals, such as adherence to the limits of travel of the positioners, are maintained by software controls. Many solutions exist to lock in and maintain a fixed throughput. Among these is a hardware solution involving a waveform generator and a lock-in amplifier to autocorrelate the movement of a piezoelectric transducer (PZT) providing fine adjustment of the second-crystal Bragg angle. This solution does not work when the positioner is a slow-acting device such as a stepping motor. Proportional-integral-derivative (PID) loops have been used to provide feedback through software, but additional controls must be provided to maximize the monochromator throughput. Presented here is a software variation of the PID loop which meets the above goals. Using two floating-point variables as inputs, representing the intensity of x rays measured before and after the monochromator, it attempts to maximize (or hold constant) the ratio of these two inputs by adjusting an output floating-point variable. These floating-point variables are connected to hardware channels corresponding to detectors and positioners. When the inputs go out of range, the software stops making adjustments to the control output. Not limited to monochromator feedback, the software could be used, with beam-steering positioners, to maintain a measure of beam position. An advantage of this software feedback is the flexibility of its various components. It has been used with stepping motors and PZTs as positioners. Various devices such as ion chambers, scintillation counters, photodiodes, and photoelectron collectors have been used as
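
    A bare-bones sketch of the feedback idea described here: a PID-style loop that adjusts a single output (for example a PZT setpoint) to hold the ratio of two intensity readings at a target value, respects travel limits, and suspends itself when the upstream reading indicates the beam is gone. The gains, thresholds and interface are placeholders, not the UNICAT implementation:

      from dataclasses import dataclass

      @dataclass
      class RatioPID:
          """PID controller on the ratio I_after / I_before of two intensity readings."""
          kp: float = 0.5
          ki: float = 0.05
          kd: float = 0.0
          setpoint: float = 1.0          # desired throughput ratio (placeholder)
          beam_threshold: float = 1e-3   # below this I_before, feedback is suspended
          out_min: float = -1.0          # positioner travel limits (placeholder units)
          out_max: float = 1.0
          _integral: float = 0.0
          _prev_err: float = 0.0
          output: float = 0.0

          def update(self, i_before: float, i_after: float, dt: float) -> float:
              # Disable feedback when there is no beam; hold the last output.
              if i_before < self.beam_threshold:
                  return self.output
              err = self.setpoint - i_after / i_before
              self._integral += err * dt
              deriv = (err - self._prev_err) / dt if dt > 0 else 0.0
              self._prev_err = err
              raw = self.kp * err + self.ki * self._integral + self.kd * deriv
              # Respect positioner travel limits, as the abstract requires.
              self.output = min(self.out_max, max(self.out_min, raw))
              return self.output

      # Usage sketch (readings would come from the two detectors in a real loop):
      pid = RatioPID(setpoint=0.8)
      new_position = pid.update(i_before=1.0, i_after=0.75, dt=0.1)
      print("suggested correction:", round(new_position, 4))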

  12. A double-multilayer monochromator using a modular design for the Advanced Photon Source

    International Nuclear Information System (INIS)

    Shu, D.; Yun, W.; Lai, B.; Barraza, J.; Kuzay, T.M.

    1994-01-01

    A novel double-multilayer monochromator has been designed for the Advanced Photon Source X-ray undulator beamline at Argonne National Laboratory. The monochromator consists of two ultra high-vacuum (UHV) compatible modular vessels, each with a sine-bar driving structure and a water-cooled multilayer holder. A high precision Y-Z stage is used to provide compensating motion for the second multilayer from outside the vacuum chamber so that the monochromator can fix the output monochromatic beam direction and angle during the energy scan in a narrow range. The design details for this monochromator are presented in this paper

  13. Beam-smiling in bent-Laue monochromators

    International Nuclear Information System (INIS)

    Ren, B.; Dilmanian, F.A.; Wu, X.Y.; Huang, X.; Ivanov, I.; Thomlinson, W.C.

    1997-01-01

    When a wide fan-shaped x-ray beam is diffracted by a bent crystal in the Laue geometry, the profile of the diffracted beam generally does not appear as a straight line, but as a line with its ends curved up or curved down. This effect, referred to as 'beam-smiling', has been a major obstacle in developing bent-Laue crystal monochromators for medical applications of synchrotron x-rays. We modeled a cylindrically bent crystal using the Finite Element Analysis (FEA) method, and we carried out experiments at the National Synchrotron Light Source and the Cornell High Energy Synchrotron Source. Our studies show that, while beam-smiling exists in most of the crystal's area because of anticlastic bending effects, there is a region parallel to the bending axis of the crystal where the diffracted beam is 'smile-free'. By applying asymmetrical bending, this smile-free region can be shifted vertically away from the geometric center of the crystal, as desired. This leads to a novel method of compensating for beam-smiling. We will discuss the method of 'differential bending' for smile removal, beam-smiling in the Cauchois and polychromatic geometries, and the implications of the method for developing single- and double-bent Laue monochromators. The experimental results will be discussed, concentrating on specific beam-smiling observation and removal as applied to the new monochromator of the Multiple Energy Computed Tomography (MECT) project of the Medical Department, Brookhaven National Laboratory. copyright 1997 American Institute of Physics

  14. Characterisation of a Sr-90 based electron monochromator

    CERN Document Server

    Arfaoui, S; CERN; Casella, C; ETH Zurich

    2015-01-01

    This note describes the characterisation of an energy filtered Sr-90 source to be used in laboratory studies that require Minimum Ionising Particles (MIP) with a kinetic energy of up to approx. 2 MeV. The energy calibration was performed with a LYSO scintillation crystal read out by a digital Silicon Photomultiplier (dSiPM). The LYSO/dSiPM set-up was pre-calibrated using a Na-22 source. After introducing the motivation behind the usage of such a device, this note presents the principle and design of the electron monochromator as well as its energy and momentum characterisation.

  15. Optical design of grazing incidence toroidal grating monochromator

    International Nuclear Information System (INIS)

    Pouey, M.; Howells, M.R.; Takacs, P.Z.

    1982-01-01

    Design rules using geometrical optics and physical optics associated with the phase balancing method are discussed for stigmatic toroidal grazing incidence monochromators. To determine the optical performance of devices involving mirrors and/or gratings, ray tracing programs using exact geometry are quite widely used. It is then desirable to have some way to infer the practical performance of an instrument from a spot diagram created by tracing a limited number of rays. We propose a first approach to this problem involving an estimation of the geometrical intensity distribution in the image plane and the corresponding line spread function. (orig.)
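
    As a rough illustration of that last step, the sketch below is an assumption-based example, not the authors' method: it bins ray-intersection points from an external ray-tracing run into a two-dimensional histogram to estimate the geometrical intensity distribution in the image plane, then projects it onto the dispersion direction to obtain an approximate line spread function.

```python
# Minimal sketch: estimate the image-plane intensity distribution and line spread
# function from a spot diagram.  The (x, y) ray coordinates are assumed to come
# from a separate exact-geometry ray-tracing step.
import numpy as np

def line_spread_function(x, y, nbins=200):
    """x: dispersion-direction coordinates, y: along-slit coordinates of the traced rays."""
    img, xedges, _ = np.histogram2d(x, y, bins=nbins)    # geometrical intensity map
    centers = 0.5 * (xedges[:-1] + xedges[1:])
    lsf = img.sum(axis=1)                                 # project onto the dispersion axis
    lsf = lsf / (lsf.sum() * (centers[1] - centers[0]))   # normalise to unit area
    return centers, lsf

# Example with synthetic rays (a Gaussian blur stands in for traced aberrations)
rng = np.random.default_rng(0)
centers, lsf = line_spread_function(rng.normal(0.0, 5e-3, 20000),
                                    rng.uniform(-1.0, 1.0, 20000))
```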

  16. Towards a systematic assessment of errors in diffusion Monte Carlo calculations of semiconductors: Case study of zinc selenide and zinc oxide

    Energy Technology Data Exchange (ETDEWEB)

    Yu, Jaehyung [Department of Mechanical Science and Engineering, 1206 W Green Street, University of Illinois at Urbana-Champaign, Urbana, Illinois 61801 (United States); Wagner, Lucas K. [Department of Physics, University of Illinois at Urbana-Champaign, Urbana, Illinois 61801 (United States); Ertekin, Elif, E-mail: ertekin@illinois.edu [Department of Mechanical Science and Engineering, 1206 W Green Street, University of Illinois at Urbana-Champaign, Urbana, Illinois 61801 (United States); International Institute for Carbon Neutral Energy Research - WPI-I2CNER, Kyushu University, 744 Moto-oka, Nishi-ku, Fukuoka 819-0395 (Japan)

    2015-12-14

    The fixed node diffusion Monte Carlo (DMC) method has attracted interest in recent years as a way to calculate properties of solid materials with high accuracy. However, the framework for the calculation of properties such as total energies, atomization energies, and excited state energies is not yet fully established. Several outstanding questions remain as to the effect of pseudopotentials, the magnitude of the fixed node error, and the size of supercell finite size effects. Here, we consider in detail the semiconductors ZnSe and ZnO and carry out systematic studies to assess the magnitude of the energy differences arising from controlled and uncontrolled approximations in DMC. The former include time step errors and supercell finite size effects for ground and optically excited states, and the latter include pseudopotentials, the pseudopotential localization approximation, and the fixed node approximation. We find that for these compounds, the errors can be controlled to good precision using modern computational resources and that quantum Monte Carlo calculations using Dirac-Fock pseudopotentials can offer good estimates of both cohesive energy and the gap of these systems. We do however observe differences in calculated optical gaps that arise when different pseudopotentials are used.

  17. 78 FR 32424 - Notice of Issuance of Final Determination Concerning Monochrome Laser Printers

    Science.gov (United States)

    2013-05-30

    ... 5200DNG/SP 5210DNG monochrome laser printers for purposes of U.S. Government procurement? LAW AND ANALYSIS... procurement purposes is the United States. Notice of this final determination will be given in the Federal... of certain monochrome laser printers. Based upon the facts presented, CBP has concluded in the final...

  18. Microcontroller-based servo for two-crystal X-ray monochromators.

    Science.gov (United States)

    Siddons, D P

    1998-05-01

    Microcontrollers have become increasingly easy to incorporate into instruments as the architectures and support tools have developed. The PIC series is particularly easy to use, and this paper describes a controller used to stabilize the output of a two-crystal X-ray monochromator at a given offset from its peak intensity position, as such monochromators are generally used.

  19. Performance of a beam-multiplexing diamond crystal monochromator at the Linac Coherent Light Source

    DEFF Research Database (Denmark)

    Zhu, Diling; Feng, Yiping; Stoupin, Stanislav

    2014-01-01

    A double-crystal diamond monochromator was recently implemented at the Linac Coherent Light Source. It enables splitting pulses generated by the free electron laser in the hard x-ray regime and thus allows the simultaneous operations of two instruments. Both monochromator crystals are High-Pressu...

  20. Test results of a diamond double-crystal monochromator at the advanced photon source

    International Nuclear Information System (INIS)

    Fernandez, P.B.; Graber, T.; Krasnicki, S.; Lee, W.; Mills, D.M.; Rogers, C.S.; Assoufid, L.

    1997-01-01

    We have tested the first diamond double-crystal monochromator at the Advanced Photon Source (APS). The monochromator consisted of two synthetic type 1b (111) diamond plates in symmetric Bragg geometry. We tested two pairs of single-crystal plates: the first pair was 6 mm by 5 mm by 0.25 mm and 6 mm by 5 mm by 0.37 mm; the second set was 7 mm by 5.5 mm by 0.44 mm. The monochromator first crystal was indirectly cooled by edge contact with a water-cooled copper holder. We studied the performance of the monochromator under the high-power x-ray beam delivered by the APS undulator A. We found no indication of thermal distortions or strains even at the highest incident power (280 watts) and power density (123 W/mm² at normal incidence). The calculated maximum power and power density absorbed by the first crystal were 37 watts and 4.3 W/mm², respectively. We also compared the maximum intensity delivered by the diamond monochromator and by a silicon (111) cryogenically cooled monochromator. For energies in the range of 6 to 10 keV, the flux through the diamond monochromator was about a factor of two less than through the silicon monochromator, in good agreement with calculations. We conclude that water-cooled diamond monochromators can handle the high-power beams from the undulator beamlines at the APS. As single-crystal diamond plates of larger size and better quality become available, the use of diamond monochromators will become a very attractive option. copyright 1997 American Institute of Physics

  1. Biological monochromator with a high flux in the visible spectrum

    International Nuclear Information System (INIS)

    Andre, M.; Guerin de Montgareuil, P.

    1965-01-01

    The object is to carry out research into photosynthesis using energetic illuminations similar to those employed in white-light studies. The limitations are due mainly to the source. A comparison of various possible solutions has led to the choice of the sun used in conjunction with 4 large gratings. In an intermediate stage, a description is given of a medium-aperture monochromator with a 3 kW xenon arc and a single grating. With this set-up it is possible to obtain the following performance, given as an example: an energy illumination of 1.3 mW/cm² over a surface of 50 cm² for a bandwidth at half-height of 50 Å. (authors) [fr

  2. MACS low-background doubly focusing neutron monochromator

    CERN Document Server

    Smee, S A; Scharfstein, G A; Qiu, Y; Brand, P C; Anand, D K; Broholm, C L

    2002-01-01

    A novel doubly focusing neutron monochromator has been developed as part of the Multi-Analyzer Crystal Spectrometer (MACS) at the NIST Center for Neutron Research. The instrument utilizes a unique vertical focusing element that enables active vertical and horizontal focusing with a large, 357-crystal (1428 cm²) array. The design significantly reduces the amount of structural material in the beam path as compared to similar instruments. Optical measurements verify the excellent focal performance of the device. Analytical and Monte Carlo simulations predict that, when mounted at the NIST cold-neutron source, the device should produce a monochromatic beam (ΔE = 0.2 meV) with flux φ > 10⁸ n/cm² s. (orig.)

  3. Studies Of The (n, γ) Reaction With A Neutron Monochromator

    Energy Technology Data Exchange (ETDEWEB)

    Kane, W. R.; Gardner, D.; Brown, T.; Kevey, A.; Mateosian, E. der; Emery, G. T.; Gelletly, W.; Mariscotti, M. A.J. [Brookhaven National Laboratory, Upton, Long Island, NY (United States); Schröder, I. [National Bureau of Standards, Washington, DC (United States)

    1969-11-15

    A crystal diffraction neutron monochromator has been constructed specifically for studies of the (n, γ) reaction. This equipment plays a complementary role to that of time-of-flight devices in providing a neutron beam with a full duty cycle at a given energy. This feature and the small target size, large geometrical efficiency for γ-ray detection, and negligible fast neutron background afford advantages for certain classes of experiments. The useful energy range extends from 0.01 to 20 eV. Novel features of the equipment include a complete reliance upon precision angle encoders for setting arm and crystal angles, the employment of a liquid shield to facilitate the extraction of the diffracted neutron beam, and the use of air bearings to provide for the motion of the target, detection devices, and associated shielding. Results obtained on low energy resonances of ¹³⁹La, ¹⁸⁹Os, and ²³⁵U will be presented. (author)

  4. Bragg reflection transmission filters for variable resolution monochromators

    International Nuclear Information System (INIS)

    Chapman, D.

    1989-01-01

    There are various methods for improving the angular and spectral resolution of monochromator and analyzer systems. The novel system described here, though limited to higher x-ray energies (>20 keV), is based on a dynamical effect occurring on the transmitted beam with a thin perfect crystal plate set in the Bragg reflection case. In the case of Bragg reflection from a perfect crystal, the incident beam is rapidly attenuated as it penetrates the crystal in the range of reflection. This extinction length is of the order of microns. The attenuation length, which determines the amount of normal transmission through the plate, is generally much longer. Thus, in the range of the Bragg reflection the attenuation of the transmitted beam can change by several orders of magnitude with a small change in energy or angle. This thin crystal plate cuts a notch in the transmitted beam with a width equal to its Darwin width, thus acting as a transmission filter. When used in a non-dispersive mode with other monochromator crystals, the filter when set at the Bragg angle will reflect the entire Darwin width of the incident beam and transmit the wings of the incident beam distribution. When the element is offset in angle by some fraction of the Darwin width, the filter becomes useful in adjusting the angular width of the transmitted beam and removing a wing. Used in pairs with a symmetric offset, the filters can be used to continuously adjust the intrinsic angular divergence of the beam with good wing reduction. Instances where such filters may be useful are in improving the angular resolution of a small angle scattering camera. These filters may be added to a Bonse-Hart camera with one pair on the incident beam to reduce the intrinsic beam divergence and a second pair on the analyzer arm to improve the analyzer resolution. 2 refs., 3 Figs

  5. Cascade self-seeding scheme with wake monochromator for narrow-bandwidth X-ray FELs

    Energy Technology Data Exchange (ETDEWEB)

    Geloni, Gianluca [European XFEL GmbH, Hamburg (Germany); Kocharyan, Vitali; Saldin, Evgeni [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany)

    2010-06-15

    Three different approaches have been proposed so far for production of highly monochromatic X-rays from a baseline XFEL undulator: (i) single-bunch self-seeding scheme with a four-crystal monochromator in Bragg reflection geometry; (ii) double-bunch self-seeding scheme with a four-crystal monochromator in Bragg reflection geometry; (iii) single-bunch self-seeding scheme with a wake monochromator. A unique element of the X-ray optical design of the last scheme is the monochromatization of X-rays using a single crystal in Bragg-transmission geometry. A great advantage of this method is that the monochromator introduces no path delay of X-rays. This fact eliminates the need for a long electron beam bypass, or for the creation of two precisely separated, identical electron bunches, as required in the other two self-seeding schemes. In its simplest configuration, the self-seeded XFEL consists of an input undulator and an output undulator separated by a monochromator. In some experimental situations this simplest two-undulator configuration is not optimal. The obvious and technically possible extension is to use a setup with three or more undulators separated by monochromators. This amplification-monochromatization cascade scheme is distinguished, in performance, by a small heat-loading of crystals and a high spectral purity of the output radiation. This paper describes such a cascade self-seeding scheme with wake monochromators. We present a feasibility study and exemplifications for the SASE2 line of the European XFEL. (orig.)

  6. The development of a 200 kV monochromated field emission electron source

    Energy Technology Data Exchange (ETDEWEB)

    Mukai, Masaki, E-mail: mmukai@jeol.co.jp [JEOL Ltd., 3-1-2 Musashino, Akishima, Tokyo 196-8558 (Japan); Kim, Judy S. [University of Oxford, Department of Materials, Parks Road, Oxford, OX1 3PH (United Kingdom); Omoto, Kazuya; Sawada, Hidetaka; Kimura, Atsushi; Ikeda, Akihiro; Zhou, Jun; Kaneyama, Toshikatsu [JEOL Ltd., 3-1-2 Musashino, Akishima, Tokyo 196-8558 (Japan); Young, Neil P.; Warner, Jamie H.; Nellist, Peter D.; Kirkland, Angus I. [University of Oxford, Department of Materials, Parks Road, Oxford, OX1 3PH (United Kingdom)

    2014-05-01

    We report the development of a monochromator for an intermediate-voltage aberration-corrected electron microscope suitable for operation in both STEM and TEM imaging modes. The monochromator consists of two Wien filters with a variable energy selecting slit located between them and is located prior to the accelerator. The second filter cancels the energy dispersion produced by the first filter and after energy selection forms a round monochromated, achromatic probe at the specimen plane. The ultimate achievable energy resolution has been measured as 36 meV at 200 kV and 26 meV at 80 kV. High-resolution Annular Dark Field STEM images recorded using a monochromated probe resolve Si–Si spacings of 135.8 pm using energy spreads of 218 meV at 200 kV and 217 meV at 80 kV respectively. In TEM mode an improvement in non-linear spatial resolution to 64 pm due to the reduction in the effects of partial temporal coherence has been demonstrated using broad beam illumination with an energy spread of 134 meV at 200 kV. - Highlights: • Monochromator for 200 kV aberration corrected TEM and STEM was developed. • Monochromator produces monochromated and achromatic probe at specimen plane. • Ultimate energy resolution was measured to be 36 meV at 200 kV and 26 meV at 80 kV. • Atomic resolution STEM images were recorded using monochromated electron probe. • Improvements of TEM resolution were confirmed using monochromated illumination.

  7. The development of a 200 kV monochromated field emission electron source

    International Nuclear Information System (INIS)

    Mukai, Masaki; Kim, Judy S.; Omoto, Kazuya; Sawada, Hidetaka; Kimura, Atsushi; Ikeda, Akihiro; Zhou, Jun; Kaneyama, Toshikatsu; Young, Neil P.; Warner, Jamie H.; Nellist, Peter D.; Kirkland, Angus I.

    2014-01-01

    We report the development of a monochromator for an intermediate-voltage aberration-corrected electron microscope suitable for operation in both STEM and TEM imaging modes. The monochromator consists of two Wien filters with a variable energy selecting slit located between them and is located prior to the accelerator. The second filter cancels the energy dispersion produced by the first filter and after energy selection forms a round monochromated, achromatic probe at the specimen plane. The ultimate achievable energy resolution has been measured as 36 meV at 200 kV and 26 meV at 80 kV. High-resolution Annular Dark Field STEM images recorded using a monochromated probe resolve Si–Si spacings of 135.8 pm using energy spreads of 218 meV at 200 kV and 217 meV at 80 kV respectively. In TEM mode an improvement in non-linear spatial resolution to 64 pm due to the reduction in the effects of partial temporal coherence has been demonstrated using broad beam illumination with an energy spread of 134 meV at 200 kV. - Highlights: • Monochromator for 200 kV aberration corrected TEM and STEM was developed. • Monochromator produces monochromated and achromatic probe at specimen plane. • Ultimate energy resolution was measured to be 36 meV at 200 kV and 26 meV at 80 kV. • Atomic resolution STEM images were recorded using monochromated electron probe. • Improvements of TEM resolution were confirmed using monochromated illumination

  8. Construction and characterization of the fringe field monochromator for a field emission gun

    Science.gov (United States)

    Mook; Kruit

    2000-04-01

    Although some microscopes have shown stabilities sufficient to attain below 0.1 eV spectral resolution in high-resolution electron energy loss spectroscopy, the intrinsic energy width of the high brightness source (0.3-0.6 eV) has been limiting the resolution. To lower the energy width of the source to 50 meV without unnecessary loss of brightness, a monochromator has been designed consisting of a short (4 mm) fringe field Wien filter and a 150 nm energy selection slit (nanoslit), both to be incorporated in the gun area of the microscope. A prototype has been built and tested in an ultra-high-vacuum setup (10⁻⁹ mbar). The monochromator, operating on a Schottky field emission gun, showed stable and reproducible operation. The nanoslits did not contaminate and the structure remained stable. By measuring the current through the slit structure a direct image of the beam in the monochromator could be attained and the monochromator could be aligned without the use of a microscope. Good dispersed imaging conditions were found, indicating an ultimate resolution of 55 meV. A Mark II fringe field monochromator (FFM) was designed and constructed compatible with the cold tungsten field emitter of the VG scanning transmission microscope. The monochromator was incorporated in the gun area of the microscope at IBM T.J. Watson research center, New York. The monochromator was aligned at 100 kV, and the energy distribution measured using the monochromator displayed a filtering capability below 50 meV. The retarding Wien filter spectrometer was used to show a 61 meV EELS system resolution. The FFM is shown to be a monochromator which can be aligned without the use of the electron microscope. This makes it directly applicable for scanning transmission microscopy and low-voltage scanning electron microscopy, where it can lower the resolution loss which is caused by chromatic blur of the spot.

  9. Optimization of Monochromated TEM for Ultimate Resolution Imaging and Ultrahigh Resolution Electron Energy Loss Spectroscopy

    KAUST Repository

    Lopatin, Sergei; Cheng, Bin; Liu, Wei-Ting; Tsai, Meng-Lin; He, Jr-Hau; Chuvilin, Andrey

    2017-01-01

    The performance of a monochromated transmission electron microscope with Wien type monochromator is optimized to achieve an extremely narrow energy spread of electron beam and an ultrahigh energy resolution with spectroscopy. The energy spread in the beam is improved by almost an order of magnitude as compared to specified values. The optimization involves both the monochromator and the electron energy loss detection system. We demonstrate boosted capability of optimized systems with respect to ultra-low loss EELS and sub-angstrom resolution imaging (in a combination with spherical aberration correction).

  10. Optimization of Monochromated TEM for Ultimate Resolution Imaging and Ultrahigh Resolution Electron Energy Loss Spectroscopy

    KAUST Repository

    Lopatin, Sergei

    2017-09-01

    The performance of a monochromated transmission electron microscope with Wien type monochromator is optimized to achieve an extremely narrow energy spread of electron beam and an ultrahigh energy resolution with spectroscopy. The energy spread in the beam is improved by almost an order of magnitude as compared to specified values. The optimization involves both the monochromator and the electron energy loss detection system. We demonstrate boosted capability of optimized systems with respect to ultra-low loss EELS and sub-angstrom resolution imaging (in a combination with spherical aberration correction).

  11. Realisation of a novel crystal bender for a fast double crystal monochromator

    CERN Document Server

    Zaeper, R; Wollmann, R; Luetzenkirchen-Hecht, D; Frahm, R

    2001-01-01

    A novel crystal bender for an X-ray undulator beamline, developed as part of a fast double crystal monochromator covering the full EXAFS energy range, was characterized. Rocking curves of the monochromator crystal system were recorded under different heat loads and bending forces of the indirectly cooled first Si(1 1 1) crystal. The monochromator development implements new piezo-driven tilt tables with wide angular range to adjust the crystals' Bragg angles and a high pressure actuated bender mechanism for the first crystal.

  12. Analysis and design of multilayer structures for neutron monochromators and supermirrors

    International Nuclear Information System (INIS)

    Masalovich, S.

    2013-01-01

    A relatively simple and accurate analytical model for studying the reflectivity of neutron multilayer monochromators and supermirrors is proposed. Design conditions that must be fulfilled in order to reach the maximum reflectivity are considered. The question of the narrowest bandwidth of a monochromator is discussed and the number of layers required to build such a monochromator is derived. Finally, we propose a new and efficient algorithm for synthesis of a supermirror with specified parameters and discuss some inherent restrictions on an attainable reflectivity. -- Highlights: • The inequality (not equation) that defines the thicknesses of layers was obtained. • Ready-to-use formula for the width of the spectral line was found. • Non-quarter-wave monochromators were suggested. • We propose a new algorithm for design of a neutron supermirror. • The problem of minimizing the number of layers in a supermirror is raised

  13. Aberration corrected and monochromated environmental transmission electron microscopy: challenges and prospects for materials science

    DEFF Research Database (Denmark)

    Hansen, Thomas Willum; Wagner, Jakob Birkedal; Dunin-Borkowski, Rafal E.

    2010-01-01

    The latest generation of environmental transmission electron microscopes incorporates aberration correctors and monochromators, allowing studies of chemical reactions and growth processes with improved spatial resolution and spectral sensitivity. Here, we describe the performance of such an instr...

  14. Designing and commissioning of a prototype double Laue monochromator at CHESS

    Science.gov (United States)

    Ko, J. Y. Peter; Oswald, Benjamin B.; Savino, James J.; Pauling, Alan K.; Lyndaker, Aaron; Revesz, Peter; Miller, Matthew P.; Brock, Joel D.

    2014-03-01

    High-energy X-rays are efficiently focused sagittally by a set of asymmetric Laue (transmission) crystals. We designed, built and commissioned a prototype double Laue monochromator ((111) reflection in Si(100)) optimized for high-energy X-rays (30-60 keV). Here, we report our design of a novel prototype sagittal bender and highlight results from recent characterization experiments. The design of the bender combines the tuneable bending control afforded by previous leaf-spring designs with the stability and small size of a four-bar bender. The prototype monochromator focuses a 25 mm-wide white beam incident on the first monochromator crystal to a monochromatized 0.6 mm beam waist in the experimental station. Compared to the flux in the same focal spot with the Bragg crystal (without focusing), the prototype Laue monochromator delivered 85 times more flux at 30 keV.

  15. Designing and commissioning of a prototype double Laue monochromator at CHESS

    International Nuclear Information System (INIS)

    Ko, J Y Peter; Oswald, Benjamin B; Savino, James J; Pauling, Alan K; Lyndaker, Aaron; Revesz, Peter; Miller, Matthew P; Brock, Joel D

    2014-01-01

    High-energy X-rays are efficiently focused sagittally by a set of asymmetric Laue (transmission) crystals. We designed, built and commissioned a prototype double Laue monochromator ((111) reflection in Si(100)) optimized for high-energy X-rays (30-60 keV). Here, we report our design of a novel prototype sagittal bender and highlight results from recent characterization experiments. The design of the bender combines the tuneable bending control afforded by previous leaf-spring designs with the stability and small size of a four-bar bender. The prototype monochromator focuses a 25 mm-wide white beam incident on the first monochromator crystal to a monochromatized 0.6 mm beam waist in the experimental station. Compared to the flux in the same focal spot with the Bragg crystal (without focusing), the prototype Laue monochromator delivered 85 times more flux at 30 keV.

  16. Diamond monochromator for high heat flux synchrotron x-ray beams

    International Nuclear Information System (INIS)

    Khounsary, A.M.; Smither, R.K.; Davey, S.; Purohit, A.

    1992-12-01

    Single crystal silicon has been the material of choice for x-ray monochromators for the past several decades. However, the need for suitable monochromators to handle the high heat load of the next generation synchrotron x-ray beams on the one hand, and the rapid and on-going advances in synthetic diamond technology on the other, make a compelling case for the consideration of a diamond monochromator system. In this paper, we consider various aspects, advantages and disadvantages, and promises and pitfalls of such a system, and evaluate the comparative performance of a monochromator subjected to the high heat load of the most powerful x-ray beam that will become available in the next few years. The results of experiments performed to evaluate the diffraction properties of a currently available synthetic single crystal diamond are also presented. Fabrication of a diamond-based monochromator is within present technical means

  17. Performance of a beam-multiplexing diamond crystal monochromator at the Linac Coherent Light Source

    International Nuclear Information System (INIS)

    Zhu, Diling; Feng, Yiping; Lemke, Henrik T.; Fritz, David M.; Chollet, Matthieu; Glownia, J. M.; Alonso-Mori, Roberto; Sikorski, Marcin; Song, Sanghoon; Williams, Garth J.; Messerschmidt, Marc; Boutet, Sébastien; Robert, Aymeric; Stoupin, Stanislav; Shvyd'ko, Yuri V.; Terentyev, Sergey A.; Blank, Vladimir D.; Driel, Tim B. van

    2014-01-01

    A double-crystal diamond monochromator was recently implemented at the Linac Coherent Light Source. It enables splitting pulses generated by the free electron laser in the hard x-ray regime and thus allows the simultaneous operations of two instruments. Both monochromator crystals are High-Pressure High-Temperature grown type-IIa diamond crystal plates with the (111) orientation. The first crystal has a thickness of ∼100 μm to allow high reflectivity within the Bragg bandwidth and good transmission for the other wavelengths for downstream use. The second crystal is about 300 μm thick and makes the exit beam of the monochromator parallel to the incoming beam with an offset of 600 mm. Here we present details on the monochromator design and its performance

  18. Design and fabrication of a vacuum ultraviolet monochromator using Seya-Namioka mount

    International Nuclear Information System (INIS)

    Krishnamurty, G.; Sarma, Y.A.; Meenakshi Raja Rao, P.; Bhattacharya, S.S.

    1983-01-01

    The design and fabrication of a one meter vacuum ultraviolet monochromator in the Seya-Namioka mounting is described. The monochromator consists of a concave replica grating (1200 grooves/mm) blazed at 1500 Å. The grating rotates about a vertical axis through the center of the grating by means of a sine drive mechanism. An EMI 6256 photomultiplier coupled with a VUV scintillator, sodium salicylate, is used to detect the radiation. (author)

  19. X fluorescence spectrometer including at least one toroidal monochromator with logarithmic spiral

    International Nuclear Information System (INIS)

    Florestan, J.

    1986-01-01

    This spectrometer includes an X-ray source, an entrance diaphragm, a revolution monochromator with monocrystal thin plates and a seal set in its center, an outer diaphragm and an X-ray detector. A second monochromator can be set between the source and the sample. The thin plates are arranged so as to form a toroidal ring whose cross section in an axial plane describes a logarithmic spiral [fr

  20. Diffractive-refractive optics: (+,-,-,+) X-ray crystal monochromator with harmonics separation

    Czech Academy of Sciences Publication Activity Database

    Hrdý, Jaromír; Mikulík, P.; Oberta, Peter

    2011-01-01

    Roč. 18, č. 2 (2011), s. 299-301 ISSN 0909-0495 R&D Projects: GA MPO FR-TI1/412 Institutional research plan: CEZ:AV0Z10100522 Keywords : diffractive-refractive optics * x-ray synchrotron radiation monochromator * x-ray crystal monochromator * harmonics separation Subject RIV: BH - Optics, Masers, Lasers Impact factor: 2.726, year: 2011

  1. Characteristics of Pyrolytic Graphite as a Neutron Monochromator

    International Nuclear Information System (INIS)

    Adib, M.; Habib, N.; El-Mesiry, M.S.; Fathallah, M.

    2011-01-01

    Pyrolytic graphite (PG) has become nearly indispensable in neutron spectroscopy, since the integrated reflectivity of monochromatic neutrons from PG crystals cut along the c-axis is high within a wavelength band from 0.1 nm up to 0.65 nm. The monochromatic features of PG crystals are detailed in terms of the optimum mosaic spread, crystal thickness and reactor moderating temperature for efficient integrated neutron reflectivity within the wavelength band. A computer code Mono-PG has been developed to carry out the required calculations for the PG hexagonal close-packed structure. Calculations show that a 2 mm thick PG crystal with a mosaic spread of 0.30 FWHM gives the optimum parameters for a PG monochromator at selected neutron wavelengths shorter than 2 nm. However, with a thermal reactor flux the integrated neutron intensity of the 2nd and 3rd orders is even higher than that of the 1st order at neutron wavelengths longer than 2 nm, while with a cold reactor flux the integrated 1st-order intensity within the wavelength band from 0.25 up to 0.5 nm is higher than that of the 2nd and 3rd orders
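
    The order-contamination issue can be illustrated in a few lines of code. The sketch below is an assumption-based example, not the Mono-PG code: a crystal set for a first-order wavelength λ also reflects λ/2 and λ/3, and each order is weighted here with a simple Maxwellian reactor spectrum at an assumed moderator temperature, ignoring the order-dependent reflectivities.

```python
# Hedged illustration of higher-order contamination from a PG(002) monochromator.
# The Maxwellian spectral model and the moderator temperature are assumptions,
# not values taken from the paper; per-order reflectivities are ignored.
import numpy as np

D_002 = 0.3354        # nm, PG (002) lattice spacing
K_B, H, M_N = 1.380649e-23, 6.62607015e-34, 1.67492749e-27

def maxwell_flux(lam_nm, T=300.0):
    """Maxwellian wavelength spectrum, phi(lambda) ~ lambda^-5 exp(-(lambda_T/lambda)^2)."""
    lam = lam_nm * 1e-9
    lam_T = H / np.sqrt(2.0 * M_N * K_B * T)   # wavelength whose neutron energy equals k_B*T
    return lam ** -5 * np.exp(-(lam_T / lam) ** 2)

def order_weights(lambda1_nm, T=300.0, orders=(1, 2, 3)):
    """Relative spectral weight of each reflection order at a fixed Bragg angle."""
    w = {n: maxwell_flux(lambda1_nm / n, T) for n in orders}
    total = sum(w.values())
    return {n: v / total for n, v in w.items()}

# Example: first-order wavelength 0.4 nm from an assumed 300 K thermal spectrum
theta1 = np.degrees(np.arcsin(0.4 / (2.0 * D_002)))   # Bragg angle for 0.4 nm in first order
print(f"Bragg angle: {theta1:.1f} deg", order_weights(0.4, T=300.0))
```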

  2. Higher harmonics suppression in Fe/Si polarizing neutron monochromators

    Energy Technology Data Exchange (ETDEWEB)

    Merkel, D.G., E-mail: merkel.daniel@wigner.mta.hu [Wigner Research Centre for Physics, P.O. Box 49, H-1525, Budapest (Hungary); Nagy, B.; Sajti, Sz.; Szilágyi, E. [Wigner Research Centre for Physics, P.O. Box 49, H-1525, Budapest (Hungary); Kovács-Mezei, R. [Mirrotron Ltd. Konkoly-Thege M. út 29-33, H-1121 Budapest (Hungary); Bottyán, L. [Wigner Research Centre for Physics, P.O. Box 49, H-1525, Budapest (Hungary)

    2013-03-11

    The reflected neutron beam originating from a crystal monochromator contains higher order wavelength contributions. Multilayer mirror structures with various custom reflectivity curves, including monochromatization and/or polarization of the neutron beam, constitute a challenge in modern neutron optics. In this work, we present the study of three types of magnetron-sputtered Fe/Si layer structures with the purpose of higher harmonic suppression. First, an approximately sinusoidal profile was achieved directly by carefully controlling the evaporation parameters during sputtering, leading to a first-Bragg-peak reflectivity and polarizing efficiency of Rc = 82% and P = 97%, respectively. Second, a random, quasi-periodic distribution of the layer thicknesses was implemented, in which the layer structure was derived from a fit to a prescribed simulated spectrum. This solution resulted in Rc = 92% and P = 88%. Third, a structure of Fe/Si layers with a rounded scattering length profile was constructed starting with a step-like profile and applying 350 keV Ne⁺ irradiation at fluences of 0, 0.5, 1.0, 2.7 and 27×10¹⁵/cm². Disregarding the highest fluence, increasing fluence improved the monochromatization (decreasing the intensity of higher order reflections from a total of 11.1% to 2.2% and that of the first Bragg peak from 80% to 70%) and increased the polarizing efficiency from P = 79% to 91%. In none of the above structures was a contrast matching agent added to the constituents.

  3. X-ray diffraction characteristics of curved monochromators for synchrotron radiation

    International Nuclear Information System (INIS)

    Boeuf, A.; Rustichelli, F.; Mazkedian, S.; Puliti, P.; Melone, S.

    1978-01-01

    A theoretical study is presented concerning the diffraction characteristics of curved monochromators for X-ray synchrotron radiation used at the laboratories of Hamburg, Orsay and Stanford. The investigation was performed by extending to the X-ray case a simple model recently developed and fruitfully employed to describe the neutron diffraction properties of curved monochromators. Several diffraction patterns were obtained corresponding to different monochromator materials (Ge, Si) used by the different laboratories, for different reflecting planes (111), (220), asymmetry angles, X-ray wavelengths (Mo Kα, Cu Kα, Cr Kα) and curvature radii. The results are discussed in physical terms and their implications on the design of curved monochromators for synchrotron radiation are presented. In particular, the study shows that all the monochromators used in the different laboratories should behave practically as perfect crystals and therefore should have a very low integrated reflectivity corresponding to an optimized wavelength passband Δλ/λ ≈ 10⁻⁴. The gain that can be obtained by increasing the curvature, by introducing a gradient in the lattice spacing or by any other kind of imperfection is quite limited and much lower than the desirable value. The adopted model can help in obtaining a possible moderate gain in intensity by also taking into consideration other parameters, such as crystal material, reflecting plane, asymmetry of the reflection and X-ray wavelength. (Auth.)

  4. Mark IV 'Grasshopper' grazing incidence monochromator for the Canadian Synchrotron Radiation Facility (CSRF)

    International Nuclear Information System (INIS)

    Tan, K.H.; Bancroft, G.M.; Coatsworth, L.L.; Yates, B.W.

    1982-01-01

    The vacuum, mechanical, and optical characteristics of a 'Grasshopper' grazing incidence monochromator for use with a synchrotron radiation source in the 30-300 eV range are described. The monochromator is compatible with ultrahigh vacuum (10⁻¹⁰ Torr throughout), and the motor driven scan mechanism is linear and reliable. The monochromator has been calibrated using several known absorption edges between 36 and 102 eV and a nonlinear least squares fit to the scan equation. These same absorption edges, plus a scan over zero order, show that the present resolution of the monochromator (with 10 and 16 μm exit and entrance slits respectively) is 0.16 Å (0.06 eV at the Al L₂,₃ edge). With 10 μm entrance and exit slits the resolution will be very close to the theoretical Δλ = 0.083 Å

  5. The SSRL ultrahigh vacuum grazing incidence monochromator: design considerations and operating experience

    International Nuclear Information System (INIS)

    Brown, F.C.; Bachrach, R.Z.; Lien, N.

    1978-01-01

    Considerable experience has now accumulated with the 'grasshopper' monochromator installed on the four degree line at the Stanford Synchrotron Radiation Laboratory. This is one of the first bakeable high vacuum instruments for use with a storage ring source in the photon energy range 25 to 1000 eV. The unique features of this instrument will be discussed from a general point of view, including the source emittance and the transforming properties of the beam line plus monochromator. Actual performance figures will be given in order to better appraise the limits of focusing optics and gratings at two degree grazing incidence. Improvements such as post-monochromator optics, isolation valves and provisions for adjustment will be briefly discussed. (Auth.)

  6. Precision mechanical design of an UHV-compatible artificial channel-cut x-ray monochromator

    International Nuclear Information System (INIS)

    Shu, D.; Narayanan, S.; Sandy, A.; Sprung, M.; Preissner, C.; Sullivan, J.

    2007-01-01

    A novel ultra-high-vacuum (UHV)-compatible x-ray monochromator has been designed and commissioned at the undulator beamline 8-ID-I at the Advanced Photon Source (APS) for x-ray photon correlation spectroscopy applications. To meet the challenging stability and x-ray optical requirements, the monochromator integrates two new precision angular positioning mechanisms into its crystal optics motion control system: An overconstrained weak-link mechanism that enables the positioning of an assembly of two crystals to achieve the same performance as a single channel-cut crystal, the so called 'artificial channel-cut crystal'; A ceramic motor driven in-vacuum sine-bar mechanism for the double crystal combined pitch motion. The mechanical design of the monochromator, as well as the test results of its positioning performance are presented in this paper.

  7. Precision mechanical design of an UHV-compatible artificial channel-cut x-ray monochromator.

    Energy Technology Data Exchange (ETDEWEB)

    Shu, D.; Narayanan, S.; Sandy, A.; Sprung, M.; Preissner, C.; Sullivan, J.; APS Engineering Support Division

    2007-01-01

    A novel ultra-high-vacuum (UHV)-compatible x-ray monochromator has been designed and commissioned at the undulator beamline 8-ID-I at the Advanced Photon Source (APS) for x-ray photon correlation spectroscopy applications. To meet the challenging stability and x-ray optical requirements, the monochromator integrates two new precision angular positioning mechanisms into its crystal optics motion control system: An overconstrained weak-link mechanism that enables the positioning of an assembly of two crystals to achieve the same performance as a single channel-cut crystal, the so called 'artificial channel-cut crystal'; A ceramic motor driven in-vacuum sine-bar mechanism for the double crystal combined pitch motion. The mechanical design of the monochromator, as well as the test results of its positioning performance are presented in this paper.

  8. Optimization of flat and horizontally curved neutron monochromators for given diffractometer geometries

    International Nuclear Information System (INIS)

    Graf, H.A.

    1983-08-01

    The computer program MONREF was written for calculating the integrated intensity and the k-vector distribution produced by mosaic-crystal monochromators in neutron diffractometers of given geometries. The program treats flat and horizontally curved monochromators in Bragg reflection. Its basic algorithm is derived from Zachariasen's coupled differential equations which were modified to include the case of asymmetrically cut crystals. The calculations are restricted to the scattering in the experimental plane. In the first part of the report the program and its applications are described. In the second part a compilation of intensities is presented, calculated for crystals of Cu, Si, Ge and pyrolytic graphite commonly used as monochromators, in a standard diffractometer configuration. (orig.)
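
    A stripped-down version of the underlying transfer problem can be written compactly. The sketch below is not MONREF; it is a hedged example that solves Darwin-Hamilton/Zachariasen-type coupled equations for a flat mosaic crystal in symmetric Bragg geometry, treating the coupling and absorption coefficients as given constants at the peak of the rocking curve and ignoring asymmetric cuts, curvature and out-of-plane scattering.

```python
# Hedged sketch (not MONREF): peak reflectivity of a flat mosaic crystal in
# symmetric Bragg geometry from Darwin-Hamilton / Zachariasen-type coupled
# transfer equations, solved as a linear two-point boundary-value problem.
# All parameter values are illustrative assumptions.
import numpy as np
from scipy.linalg import expm

def bragg_reflectivity(sigma, mu, thickness, theta):
    """sigma: coupling per unit length, mu: linear absorption, thickness, theta: Bragg angle (rad)."""
    gamma = np.sin(theta)                       # direction cosine, symmetric Bragg case
    A = np.array([[-(mu + sigma), sigma],
                  [-sigma, mu + sigma]]) / gamma
    M = expm(A * thickness)                     # propagates (I0, IH) from the front face to the back face
    # Boundary conditions: I0(0) = 1, IH(T) = 0  ->  reflectivity R = IH(0) = -M[1,0]/M[1,1]
    return -M[1, 0] / M[1, 1]

# Example: 2 mm thick crystal, Bragg angle 20 deg, illustrative sigma and mu in 1/cm
print(bragg_reflectivity(sigma=20.0, mu=1.5, thickness=0.2, theta=np.radians(20.0)))
```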

  9. Generating high gray-level resolution monochrome displays with conventional computer graphics cards and color monitors.

    Science.gov (United States)

    Li, Xiangrui; Lu, Zhong-Lin; Xu, Pengjing; Jin, Jianzhong; Zhou, Yifeng

    2003-11-30

    Display systems based on conventional computer graphics cards are capable of generating images with about 8-bit luminance resolution. However, most vision experiments require more than 12 bits of luminance resolution. Pelli and Zhang [Spatial Vis. 10 (1997) 443] described a video attenuator for generating high luminance resolution displays on a monochrome monitor, or for driving just the green gun of a color monitor. Here we show how to achieve a white display by adding video amplifiers to duplicate the monochrome signal to drive all three guns of any color monitor. Given the limited availability of high quality monochrome monitors, our method provides an inexpensive way to achieve high-resolution monochromatic displays using conventional, easy-to-get equipment. We describe the design principles, test results, and a few additional functionalities.

  10. Monochromator for synchrotron light with temperature controlled by electrical current on silicon crystal

    Energy Technology Data Exchange (ETDEWEB)

    Cusatis, Cesar; Souza, Paulo E.N. [Universidade Federal do Parana (LORXI/UFPR), Curitiba, PR (Brazil). Dept. de Fisica. Lab. de Optica de Raios X e Instrumentacao; Franco, Margareth Kobayaski; Kakuno, Edson [Laboratorio Nacional de Luz Sincroton (LNLS), Campinas, SP (Brazil); Gobbi, Angelo; Carvalho Junior, Wilson de [Centro de Pesquisa e Desenvolvimento em Telecomunicacoes (CPqD), Campinas, SP (Brazil)

    2011-07-01

    Full text: A doped silicon crystal was used simultaneously as a monochromator, sensor and actuator in such a way that its temperature could be controlled. Ohmic contacts allowed resistance measurements on a perfect silicon crystal, which were correlated to its temperature. Using the ohmic contacts, an electrical current caused Joule heating of the monochromator, which was used to control its temperature. A simple stand-alone electronic box controlled the system. The device was built and tested with white beam synchrotron light on the double crystal monochromator of the XRD line of LNLS, Laboratorio Nacional de Luz Sincrotron, Campinas. The first crystal of a double crystal monochromator determines the energy that is delivered to a synchrotron experimental station, and its temperature instability is a major source of energy and intensity instability. If the (333) silicon monochromator is at a Bragg angle near 45°, the variation of the diffraction angle is around one second of arc per kelvin. It may take several minutes for the first crystal temperature to stabilize at the beginning of the station operation, when the crystal and its environment are cold. With water refrigeration, the average overall temperature of the crystal may be constant, but the temperature of the surface changes with and without the white beam. The time used to wait for stabilization of the beam energy/intensity is lost unless the temperature of the crystal surface is kept constant. One solution for keeping the temperature of the monochromator and its environment constant, or nearly constant, is to Joule-heat it with a small, controlled electrical current flowing on the surface of a doped perfect crystal. When the white beam is on, this small amount of extra power will be more concentrated along the beam footpath, because the resistance is lower in this region due to the higher temperature. In addition, if the crystal itself is used to detect the temperature variation by measuring the electrical
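
    A minimal sketch of the control idea, under stated assumptions, is given below. It is not the LNLS electronics: read_resistance() and set_current() are hypothetical hardware accessors, the linear resistance-to-temperature calibration is assumed, and the gains are illustrative. The loop reads the crystal resistance through the ohmic contacts, converts it to a temperature, and adjusts the small Joule-heating current with a proportional-integral law so the surface temperature stays at the setpoint whether or not the white beam is present.

```python
# Hedged sketch: hold the monochromator crystal surface temperature constant by
# adjusting a small Joule-heating current, using the crystal's own resistance
# (read through the ohmic contacts) as the temperature sensor.  The calibration,
# gains and hardware accessors below are assumptions, not values from the paper.

def resistance_to_temperature(r_ohm, r0=100.0, t0=300.0, slope=-0.4):
    """Assumed linear calibration around the working point (slope in ohms per kelvin)."""
    return t0 + (r_ohm - r0) / slope

def regulate(read_resistance, set_current, t_set=305.0,
             kp=0.02, ki=0.005, i_max=0.5, n_steps=1000):
    """Simple PI loop: a positive error (crystal too cold) increases the heating current."""
    integral = 0.0
    for _ in range(n_steps):
        temp = resistance_to_temperature(read_resistance())
        error = t_set - temp
        integral += error
        current = min(i_max, max(0.0, kp * error + ki * integral))
        set_current(current)      # Joule heating through the same ohmic contacts
```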

  11. Performance tests of a 2-meter grasshopper monochromator at photon factory

    International Nuclear Information System (INIS)

    Yanagihara, Mihiro; Maezawa, Hideki; Sasaki, Taizo; Suzuki, Yoshio; Iguchi, Yasuo.

    1984-12-01

    A 2-meter grasshopper monochromator was installed and adjusted at BL-11A in Photon Factory, and performance tests were carried out. The usable photon energy range for the monochromator is 90 to 1000 eV for a 2400 grooves/mm grating, and the flux is 10⁸-10⁹ photons/sec for entrance and exit slit widths of 15 μm. A resolving power of about 2000 is realized at 250 eV for this slit width. (author)

  12. Physical evaluation of color and monochrome medical displays using an imaging colorimeter

    Science.gov (United States)

    Roehrig, Hans; Gu, Xiliang; Fan, Jiahua

    2013-03-01

    This paper presents an approach to physical evaluation of color and monochrome medical grade displays using an imaging colorimeter. The purpose of this study was to examine the influence of medical display types, monochrome or color at the same maximum luminance settings, on diagnostic performance. The focus was on the measurements of physical characteristics including spatial resolution and noise performance, which we believed could affect the clinical performance. Specifically, Modulation Transfer Function (MTF) and Noise Power Spectrum (NPS) were evaluated and compared at different digital driving levels (DDL) between two EIZO displays.
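
    As an illustration of one of these quantities, the sketch below shows a simple two-dimensional noise power spectrum estimate from a uniform (flat-field) luminance image captured with an imaging colorimeter. It is an assumption-based example: the ROI size, mean-subtraction detrending, pixel pitch and scaling convention are ours, and the published measurement protocol may differ.

```python
# Hedged sketch: estimate a 2-D noise power spectrum (NPS) from a flat-field
# image by averaging |FFT|^2 over non-overlapping, mean-subtracted ROIs.
# ROI size, pixel pitch and normalisation are illustrative assumptions.
import numpy as np

def noise_power_spectrum(img, roi=128, pixel_pitch_mm=0.1):
    ny, nx = (np.array(img.shape) // roi) * roi
    acc, count = np.zeros((roi, roi)), 0
    for j in range(0, ny, roi):
        for i in range(0, nx, roi):
            patch = img[j:j + roi, i:i + roi].astype(float)
            patch -= patch.mean()                            # remove the mean luminance (DC term)
            acc += np.abs(np.fft.fft2(patch)) ** 2
            count += 1
    nps = acc / count * (pixel_pitch_mm ** 2) / (roi * roi)   # (signal units)^2 * mm^2
    freqs = np.fft.fftshift(np.fft.fftfreq(roi, d=pixel_pitch_mm))  # cycles per mm
    return freqs, np.fft.fftshift(nps)

# Example with synthetic white noise standing in for a measured flat field
freqs, nps = noise_power_spectrum(np.random.default_rng(0).normal(100.0, 1.0, (512, 512)))
```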

  13. Design of a cryo-cooled artificial channel-cut crystal monochromator for the European XFEL

    Energy Technology Data Exchange (ETDEWEB)

    Dong, Xiaohao, E-mail: xiaohao.dong@xfel.eu; Sinn, Harald, E-mail: harald.sinn@xfel.eu [European XFEL GmbH, Hamburg, D-22761 (Germany); Shu, Deming, E-mail: shu@aps.anl.gov [Argonne National Laboratory, Argonne, IL 60439, U.S.A (United States)

    2016-07-27

    An artificial channel-cut crystal monochromator for the hard X-ray beamlines of SASE 1&2, cryogenically cooled by a so-called pulse tube cooler (cryorefrigerator), is currently under development at the European XFEL ( http://www.xfel.eu/ ). The fabrication is ongoing. We present here the crystal optics considerations and the novel cooling configuration, chosen according to the properties of the X-ray FEL pulses. The mechanical design improvements required to implement this kind of monochromator, based on the previous similar design, are pointed out as well.

  14. Active phase double crystal monochromator for JET (diagnostic system KS1)

    International Nuclear Information System (INIS)

    Andelfinger, C.; Fink, J.; Fussmann, G.; Krause, H.; Roehr, H.; Schilling, H.B.; Schumacher, U.; Becker, P.; Siegert, H.; Abel, P.; Keul, J.

    1984-03-01

    The determination of the impurity concentrations in JET plasmas by absolute radiation measurements in a wide spectral range can be done with a double crystal monochromator device in parallel mode, which is able to operate during all experimental phases of JET. The report describes the engineering design and tests for a double crystal monochromator that fulfills the conditions of parallel orientation of the two crystals during fast wavelength scan, of shielding against neutrons and gamma rays by its folded optical pathway and of sufficient spectral resolution for line profile measurements. (orig.)

  15. Collimator type monochromator as a possible impurities monitor for fusion plasmas. Preliminary tests on the Tokamak TM-1-MH

    International Nuclear Information System (INIS)

    Musa, G.; Lungu, C.P.; Badalec, J.; Jakubka, K.; Kopecky, V.; Stoeckel, J.; Zacek, F.

    1984-09-01

    A collimator type monochromator has been tested for the first time as an impurity monitor on a tokamak. The possibility of using this type of monochromator in fusion devices is analyzed, and a monoslit device is proposed as a convenient monitor for impurities. (authors)

  16. Test of a high-heat-load double-crystal diamond monochromator at the advanced photon source

    International Nuclear Information System (INIS)

    Fernandez, P.B.; Graber, T.; Lee, W.-K.; Mills, D.M.; Rogers, C.S.; Assoufid, L.

    1997-01-01

    We have tested the first diamond double-crystal monochromator at the advanced photon source (APS). The monochromator consisted of two synthetic type 1b (111) diamond plates in symmetric Bragg geometry. The single-crystal plates were 6 mm x 5 mm x 0.25 mm and 6 mm x 5 mm x 0.37 mm and showed a combination of mosaic spread/strain of the order of 2-4 arcsec over a central 1.4 mm-wide strip. The monochromator first crystal was indirectly cooled by edge contact with a water-cooled copper holder. We studied the performance of the monochromator under the high-power X-ray beam delivered by the APS undulator A. By changing the undulator gap, we varied the power incident on the first crystal and found no indication of thermal distortions or strains even at the highest incident power (200 W) and power density (108 W/mm² in normal incidence). The calculated maximum power and power density absorbed by the first crystal were 14.5 W and 2.4 W/mm², respectively. We also compared the maximum intensity delivered by this monochromator and by a silicon (111) cryogenically cooled monochromator. For energies in the range 6-10 keV, the flux through the diamond monochromator was about a factor of two less than through the silicon monochromator, in good agreement with calculations. We conclude that water-cooled diamond monochromators can handle the high-power beams from the undulator beamlines at the APS. As single-crystal diamond plates of larger size and better quality become available, the use of diamond monochromators will become a very attractive option. (orig.)

  17. Measurement & Minimization of Mount Induced Strain on Double Crystal Monochromator Crystals

    Science.gov (United States)

    Kelly, J.; Alcock, S. G.

    2013-03-01

    Opto-mechanical mounts can cause significant distortions to monochromator crystals and mirrors if not designed or implemented carefully. A slope measuring profiler, the Diamond-NOM [1], was used to measure the change in tangential slope as a function of crystal clamping configuration and load. A three point mount was found to exhibit the lowest surface distortion (Diamond Light Source).

  18. Optics and design of the fringe field monochromator for a Schottky field emission gun

    International Nuclear Information System (INIS)

    Mook, H.W.; Kruit, P.

    1999-01-01

    For the improvement of high-resolution electron energy loss spectroscopy a new electron source monochromator, based on the Wien filter principle, is presented. In the fringe field monochromator the electric and magnetic filter fields are tightly enclosed by field clamps to satisfy the Wien condition, E=vB. The whole monochromator including the 150 nm energy selection slits (Nanoslits) is positioned in the gun area. Its total length is only 42 mm. Using electron trajectory simulation through the filter fields the dispersion and aberrations are determined. The parasitic astigmatism of the gun lens needs to be corrected using an electrostatic quadrupole field incorporated in the filter. Estimations of the influence of filter electrode misalignment show that at least six filter electrodes must be used to loosen the alignment demands sufficiently. Using theoretical estimations of the Coulomb interaction the final energy resolution, beam brightness and current are predicted. For a Schottky field emission electron gun with a typical brightness of 10⁸ A/(sr m² V) the monochromator is expected to produce a 50 meV, 1 nA beam with a brightness of 10⁷ A/(sr m² V)

  19. High luminance monochrome vs. color displays: impact on performance and search

    Science.gov (United States)

    Krupinski, Elizabeth A.; Roehrig, Hans; Matsui, Takashi

    2011-03-01

    The aim was to determine if diagnostic accuracy and visual search efficiency with a high luminance medical-grade color display are equivalent to those with a high luminance medical-grade monochrome display. Six radiologists viewed DR chest images, half with a solitary pulmonary nodule and half without. Observers reported whether or not a nodule was present and their confidence in that decision. Total viewing time per image was recorded. On a subset of 15 cases eye-position was recorded. Confidence data were analyzed using MRMC ROC techniques. There was no statistically significant difference (F = 0.0136, p = 0.9078) between color (mean Az = 0.8981, se = 0.0065) and monochrome (mean Az = 0.8945, se = 0.0148) diagnostic performance. Total viewing time per image did not differ significantly (F = 0.392, p = 0.5315) as a function of color (mean = 27.36 sec, sd = 12.95) vs monochrome (mean = 28.04, sd = 14.36) display. There were no significant differences in decision dwell times (true and false, positive and negative) overall for color vs monochrome displays (F = 0.133, p = 0.7154). The true positive (TP) and false positive (FP) decisions were associated with the longest dwell times, the false negatives (FN) with slightly shorter dwell times, and the true negative (TN) decisions with the shortest (F = 50.552).

  20. Optimization of bent perfect Si(220)-crystal monochromator for residual strain/stress instrument - Part II

    Czech Academy of Sciences Publication Activity Database

    Moon, MK.; Em, Vt.; Lee, C.H.; Mikula, Pavol; Hong, KP; Choi, YH; Cheon, JK; Nam, UW; Kong, KN; Jin, KC

    2005-01-01

    Roč. 368, č. 1-4 (2005), s. 70-75 ISSN 0921-4526 R&D Projects: GA ČR(CZ) GA202/03/0891 Institutional research plan: CEZ:AV0Z10480505 Keywords: neutron monochromator * residual stress measurement * neutron diffractometers Subject RIV: BM - Solid Matter Physics; Magnetism Impact factor: 0.796, year: 2005

  1. Performance of a high resolution monochromator for the vacuum ultraviolet radiation from the DORIS storage ring

    International Nuclear Information System (INIS)

    Saile, V.; Skibowski, M.; Steinmann, W.; Guertler, P.; Koch, E.E.; Kozevnikov, A.

    1976-03-01

    The unique properties of the DORIS storage ring at DESY as a synchrotron radiation source are exploited for high resolution spectroscopy in the vacuum ultraviolet. We describe a new experimental set-up with a 3 meter normal incidence monochromator for wavelengths between 3,000 Å and 300 Å. [de

  2. Electron-optical design parameters for a high-resolution electron monochromator

    International Nuclear Information System (INIS)

    Tanaka, H.; Huebner, R.H.

    1976-01-01

    Detailed design parameters of a new, high-resolution electron monochromator are presented. The design utilizes a hemispherical filter as the energy-dispersing element and combines both cylindrical and aperture electrostatic lenses to accelerate, decelerate, transport, and focus the electron beam from the cathode to the interaction region

  3. Suppression of surface effect by using bent-perfect-crystal monochromator in residual strain scanning

    Czech Academy of Sciences Publication Activity Database

    Vrána, Miroslav; Mikula, Pavol

    490/491, - (2005), s. 234-238 ISSN 0255-5476 R&D Projects: GA ČR GA202/03/0891; GA AV ČR KSK1010104 Institutional research plan: CEZ:AV0Z1048901 Keywords : neutron diffraction * residual strain scanning * bent monochromator Subject RIV: BM - Solid Matter Physics ; Magnetism Impact factor: 0.399, year: 2005

  4. Principles for the comparison of the proportions of straylight present in monochromators of various types

    International Nuclear Information System (INIS)

    Glock, E.

    1976-03-01

    The origin and propagation of straylight in monochromators are investigated, and an expression is derived for the ratio of the straylight to the effective spectral intensity within the exit slit. For a prescribed resolution and speed, this expression permits selection of the design with the minimum attainable straylight. (orig.) [de

  5. A novel monochromator for high heat-load synchrotron x-ray radiation

    International Nuclear Information System (INIS)

    Khounsary, A.M.

    1992-01-01

    The high heat load associated with the powerful and concentrated x-ray beams generated by insertion devices at a number of present and many future (planned or under construction) synchrotron radiation facilities poses a formidable engineering challenge to the designers of monochromators and other optical devices. For example, the Undulator A source on the Advanced Photon Source (APS) ring (being constructed at the Argonne National Laboratory) will deposit as much as 10 kW of heat on a small area (about 1 cm²) of the first optics, located some 24 m from the source. The peak normal-incidence heat flux can be as high as 500 W/mm². Successful utilization of the intense x-ray beams from insertion devices critically depends on the development, design, and availability of optical elements that provide acceptable performance under high heat load. Present monochromators can handle, at best, heat load levels that are an order of magnitude lower than those generated by such sources. The monochromator described here, referred to as the 'inclined' monochromator, can provide a solution to high heat-load problems.
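
    A rough geometric sketch (not taken from the paper) of why inclining the diffracting surface helps: the absorbed surface flux is usually estimated as the normal-incidence flux times sin(Bragg angle) times cos(inclination angle), so a large inclination spreads the footprint over more crystal area. The 500 W/mm² figure is the peak value quoted above; the angles below are hypothetical.

    # Footprint spreading with crystal inclination (illustrative only).
    import math

    q_normal = 500.0             # W/mm^2, peak normal-incidence heat flux
    theta_B = math.radians(10)   # hypothetical Bragg angle
    for beta_deg in (0, 70, 80, 85):          # inclination angles to compare
        beta = math.radians(beta_deg)
        q_surf = q_normal * math.sin(theta_B) * math.cos(beta)
        print(f"beta = {beta_deg:2d} deg -> surface flux ~ {q_surf:6.1f} W/mm^2")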

  6. Self-seeding scheme with gas monochromator for narrow-bandwidth soft X-ray FELs

    Energy Technology Data Exchange (ETDEWEB)

    Geloni, Gianluca [European XFEL GmbH, Hamburg (Germany); Kocharyan, Vitali; Saldin, Evgeni [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany)

    2011-03-15

    Self-seeding schemes, consisting of two undulators with a monochromator in between, aim at reducing the bandwidth of SASE X-ray FELs. We recently proposed to use a new method of monochromatization exploiting a single crystal in Bragg transmission geometry for self-seeding in the hard X-ray range. Here we consider a possible extension of this method to the soft X-ray range using a cell filled with resonantly absorbing gas as monochromator. The transmittance spectrum in the gas exhibits an absorbing resonance with narrow bandwidth. Then, similarly to the hard X-ray case, the temporal waveform of the transmitted radiation pulse is characterized by a long monochromatic wake. In fact, the FEL pulse forces the gas atoms to oscillate in a way consistent with a forward-propagating, monochromatic radiation beam. The radiation power within this wake is much larger than the equivalent shot noise power in the electron bunch. Further on, the monochromatic wake of the radiation pulse is combined with the delayed electron bunch and amplified in the second undulator. The proposed setup is extremely simple and composed of as few as two elements: a gas cell, to be filled with noble gas, and a short magnetic chicane. The installation of the magnetic chicane does not perturb the undulator focusing system and does not interfere with the baseline mode of operation. In this paper we assess the features of a gas monochromator based on the use of He and Ne. We analyze the processes in the monochromator gas cell and outside it, touching upon the performance of the differential pumping system as well. We study the feasibility of using the proposed self-seeding technique to generate narrow-bandwidth soft X-ray radiation in the LCLS-II soft X-ray beamline. We present the conceptual design, technical implementation and expected performance of the gas monochromator self-seeding scheme. (orig.)

  7. Self-seeding scheme with gas monochromator for narrow-bandwidth soft X-ray FELs

    International Nuclear Information System (INIS)

    Geloni, Gianluca; Kocharyan, Vitali; Saldin, Evgeni

    2011-03-01

    Self-seeding schemes, consisting of two undulators with a monochromator in between, aim at reducing the bandwidth of SASE X-ray FELs. We recently proposed to use a new method of monochromatization exploiting a single crystal in Bragg transmission geometry for self-seeding in the hard X-ray range. Here we consider a possible extension of this method to the soft X-ray range using a cell filled with resonantly absorbing gas as monochromator. The transmittance spectrum in the gas exhibits an absorbing resonance with narrow bandwidth. Then, similarly to the hard X-ray case, the temporal waveform of the transmitted radiation pulse is characterized by a long monochromatic wake. In fact, the FEL pulse forces the gas atoms to oscillate in a way consistent with a forward-propagating, monochromatic radiation beam. The radiation power within this wake is much larger than the equivalent shot noise power in the electron bunch. Further on, the monochromatic wake of the radiation pulse is combined with the delayed electron bunch and amplified in the second undulator. The proposed setup is extremely simple and composed of as few as two elements: a gas cell, to be filled with noble gas, and a short magnetic chicane. The installation of the magnetic chicane does not perturb the undulator focusing system and does not interfere with the baseline mode of operation. In this paper we assess the features of a gas monochromator based on the use of He and Ne. We analyze the processes in the monochromator gas cell and outside it, touching upon the performance of the differential pumping system as well. We study the feasibility of using the proposed self-seeding technique to generate narrow-bandwidth soft X-ray radiation in the LCLS-II soft X-ray beamline. We present the conceptual design, technical implementation and expected performance of the gas monochromator self-seeding scheme. (orig.)

  8. Measurement of the electronic absorption coefficient for 57Co 14.4 keV gamma photons in aluminium using the Moessbauer effect as a monochromator

    International Nuclear Information System (INIS)

    Rajan, N.; Nigam, A.K.

    1984-01-01

    The total electronic absorption coefficient for 14.4 keV gamma photons in aluminium has been measured experimentally, for the first time, using the Moessbauer effect as a monochromator. These data are important for determining the background in Moessbauer recoilless fraction measurements, especially if the energy of the X-rays of the source host lattice lies near the 14.4 keV photon energy (e.g. in Rh and Pd), in which case the electronic absorption coefficients should be known precisely. The coefficient obtained by interpolation from available values at other energies differs from our experimental value by as much as 20%. It is shown that this can lead to errors in recoilless fraction values which are far from negligible. The absorption coefficient for aluminium was measured to be 11±1 cm²/g. (orig.)
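
    Using the measured value of 11 cm²/g, a short Beer-Lambert calculation illustrates the size of the transmission corrections involved; the foil thicknesses are arbitrary illustrative choices, not values from the paper.

    # Beer-Lambert attenuation of 14.4 keV photons in aluminium.
    import math

    mu_rho = 11.0        # cm^2/g, measured mass attenuation coefficient
    rho_al = 2.70        # g/cm^3, density of aluminium
    for t_mm in (0.1, 0.5, 1.0):
        t_cm = t_mm / 10.0
        transmission = math.exp(-mu_rho * rho_al * t_cm)
        print(f"{t_mm:4.1f} mm Al: transmission = {transmission:.3f}")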

  9. Hard x-ray monochromator with milli-electron volt bandwidth for high-resolution diffraction studies of diamond crystals

    Energy Technology Data Exchange (ETDEWEB)

    Stoupin, Stanislav; Shvyd'ko, Yuri; Shu Deming; Khachatryan, Ruben; Xiao, Xianghui; DeCarlo, Francesco; Goetze, Kurt; Roberts, Timothy; Roehrig, Christian; Deriy, Alexey [Advanced Photon Source, Argonne National Laboratory, Illinois 60439 (United States)

    2012-02-15

    We report on design and performance of a high-resolution x-ray monochromator with a spectral bandwidth of ΔE_X ≈ 1.5 meV, which operates at x-ray energies in the vicinity of the backscattering (Bragg) energy E_H = 13.903 keV of the (008) reflection in diamond. The monochromator is utilized for high-energy-resolution diffraction characterization of diamond crystals as elements of advanced x-ray crystal optics for synchrotrons and x-ray free-electron lasers. The monochromator and the related controls are made portable such that they can be installed and operated at any appropriate synchrotron beamline equipped with a pre-monochromator.
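
    A quick cross-check of the quoted backscattering energy, assuming the textbook diamond lattice constant a ≈ 3.567 Å (not stated in the abstract): Bragg's law E = hc/(2 d sin θ) with θ → 90° for the (008) reflection reproduces 13.903 keV.

    # Bragg backscattering energy of diamond (008), assumed lattice constant.
    import math

    HC_KEV_ANGSTROM = 12.3984    # hc in keV·Å
    a = 3.567                    # Å, diamond lattice constant (assumed)
    h, k, l = 0, 0, 8
    d = a / math.sqrt(h**2 + k**2 + l**2)      # interplanar spacing of (008)
    E_backscatter = HC_KEV_ANGSTROM / (2 * d)  # theta = 90 deg, sin(theta) = 1
    print(f"d(008) = {d:.4f} Å, E_H ≈ {E_backscatter:.3f} keV")  # ≈ 13.903 keV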

  10. Performance limits of direct cryogenically cooled silicon monochromators - experimental results at the APS

    International Nuclear Information System (INIS)

    Lee, W.-K.; Fernandez, P.; Mills, D.M.

    2000-01-01

    The successful use of cryogenically cooled silicon monochromators at third-generation synchrotron facilities is well documented. At the Advanced Photon Source (APS) it has been shown that, at 100 mA operation with the standard APS undulator A, the cryogenically cooled silicon monochromator performs very well with minimal (<2 arcsec) or no observable thermal distortions. However, to date there has not been any systematic experimental study on the performance limits of this approach. This paper presents experimental results on the performance limits of these directly cooled crystals. The results show that if the beam is limited to the size of the radiation central cone then, at the APS, the crystal will still perform well at twice the present 100 mA single 2.4 m-long 3.3 cm-period undulator heat load. However, the performance would degrade rapidly if a much larger incident white-beam size is utilized

  11. High-flux normal incidence monochromator for circularly polarized synchrotron radiation

    International Nuclear Information System (INIS)

    Schaefers, F.; Peatman, W.; Eyers, A.; Heckenkamp, C.; Schoenhense, G.; Heinzmann, U.

    1986-01-01

    A 6.5-m normal incidence monochromator installed at the storage ring BESSY, which is optimized for a high throughput of circularly polarized off-plane radiation at moderate resolution, is described. The monochromator employs two exit slits and is specially designed and used for low-signal experiments such as spin- and angle-resolved photoelectron spectroscopy on solids, adsorbates, free atoms, and molecules. The Monk-Gillieson mounting (plane grating in a convergent light beam) allows for large apertures with relatively little astigmatism. With two gratings, a flux of more than 10^11 photons s^-1 bandwidth^-1 (0.2-0.5 nm) with a circular polarization of more than 90% in the wavelength range from 35 to 675 nm is achieved.

  12. The use of diffraction efficiency theory in the design of soft x-ray monochromators

    International Nuclear Information System (INIS)

    Padmore, H.A.; Martynov, V.; Hollis, K.; Mount Vernon Hospital, Northwood

    1993-01-01

    In general, the diffraction efficiency of gratings is limited by the constraints imposed by the type of geometry used to scan the photon energy. In the simplest example, the spherical grating monochromator (SGM), the deviation angle, the grating groove width and depth, and the groove density are all constrained by considerations of the maximum photon energy and the tuning range for individual gratings. We have examined the case in which these parameters are unconstrained, resulting in predictions of the ultimate performance of lamellar-type gratings for groove densities from 300 to 2400 lines/mm for gold and nickel coatings. The differential method of Nevière et al. was used to model the behavior of the gratings, and this choice is justified by rigorous comparison with measurements. The implications of these results for future monochromators based on a variable included-angle geometry are discussed.

  13. Diamond double-crystal monochromator in Bragg geometry installed on BL-11XU at SPring-8

    CERN Document Server

    Marushita, M; Fukuda, T; Takahasi, M; Inami, T; Katayama, Y; Shiwaku, H; Mizuki, J

    2001-01-01

    We present here the features of the diamond double-crystal monochromator in Bragg geometry installed on a standard undulator beamline at SPring-8. The crystals were manufactured by Sumitomo Electric Industries, Ltd.; the first crystal measured 8.6 mm (w) × 3.5 mm (l) × 0.35 mm (t) and the second 10 mm (w) × 4.7 mm (l) × 0.39 mm (t). The performance of the monochromator was tested by rocking curve measurements as a function of the total power and of the energy that impinged on the crystal. As a result, no significant increase of the full-width at half-maximum was observed up to a total power of 330 W on the first crystal. We discuss the experimental results in comparison with the FWHM calculated using the beamline parameters.

  14. A new transmission based monochromator for energy-selective neutron imaging at the ICON beamline

    International Nuclear Information System (INIS)

    Peetermans, S.; Tamaki, M.; Hartmann, S.; Kaestner, A.; Morgano, M.; Lehmann, E.H.

    2014-01-01

    A new type of monochromator has been developed for energy-selective neutron imaging at continuous sources. It combines a mechanical neutron velocity selector with pyrolytic graphite crystals of different mosaicity. The beam can be monochromatized to levels similar to those of a standard double-crystal monochromator, and the device can flexibly produce different desired spectral shapes, even asymmetric ones. Intrinsically, no higher-order contamination of the spectrum is present. Because the transmitted beam is used, the beam divergence (and thus the spatial resolution) is uncompromised. The device has been calibrated and characterized, and its performance demonstrated with the measurement of Bragg edges for iron and lead, resolving them more sharply than if solely a mechanical velocity selector were used.

  15. Ultrathin nondoped emissive layers for efficient and simple monochrome and white organic light-emitting diodes.

    Science.gov (United States)

    Zhao, Yongbiao; Chen, Jiangshan; Ma, Dongge

    2013-02-01

    In this paper, highly efficient and simple monochrome blue, green, orange, and red organic light emitting diodes (OLEDs) based on ultrathin nondoped emissive layers (EMLs) are reported. The ultrathin nondoped EML was constructed by introducing a thin (0.1 nm) layer of pure phosphorescent dyes between a hole transporting layer and an electron transporting layer. The maximum external quantum efficiencies (EQEs) reached 17.1%, 20.9%, 17.3%, and 19.2% for blue, green, orange, and red monochrome OLEDs, respectively, indicating the universality of the ultrathin nondoped EML for most phosphorescent dyes. On this basis, simple white OLED structures are also demonstrated. The demonstrated complementary blue/orange, three-primary blue/green/red, and four-color blue/green/orange/red white OLEDs show high efficiency and good white emission, indicating the advantage of ultrathin nondoped EMLs in constructing simple and efficient white OLEDs.

  16. Pulse Compression of Phase-matched High Harmonic Pulses from a Time-Delay Compensated Monochromator

    Directory of Open Access Journals (Sweden)

    Ito Motohiko

    2013-03-01

    Pulse compression of single 32.6-eV high harmonic pulses from a time-delay compensated monochromator was demonstrated down to 11±3 fs by compensating the pulse front tilt. The photon flux was intensified up to 5.7×10^9 photons/s on target by implementing high harmonic generation under a phase matching condition in a hollow fiber used for increasing the interaction length.

  17. Local Treatment for Monochrome Outdoor Painted Metal Sculptures: Assessing the suitability of conservation paints for retouching

    OpenAIRE

    van Basten, Nikki; Defeyt, Catherine; Rivenc, Rachel

    2015-01-01

    When outdoor painted sculptures get chipped, scratched or abraded, conservators might consider local retouching treatments as an option that would protect the exposed metal substrate and restore the aesthetic integrity, thus postponing a very costly and invasive overall repainting. Unfortunately, matching colour, gloss and texture on large monochrome surfaces is always challenging. This paper reports on research undertaken to investigate some of the materials and application techniques that co...

  18. Workshop on cooling of x-ray monochromators on high power beamlines

    International Nuclear Information System (INIS)

    Matsushita, T.; Ishikawa, T.

    1989-03-01

    This report documents the Workshop on Cooling of X-ray Monochromators on High Power Beamlines held on August 31, 1988 at the Photon Factory during the Third International Conference on Synchrotron Radiation Instrumentation (SRI88). On high power beamlines, especially on insertion device beamlines, heating of crystal monochromators is becoming a serious problem: researchers observe that the intensity of the X-ray beam on the sample is not proportional to the source intensity because of thermal distortion of the monochromator crystal. This problem will be even more serious on beamlines for the next generation of X-ray rings. In the very tight program of the SRI88 conference, only two speakers were able to give invited talks closely related to this problem in the session on OPTICAL COMPONENTS FOR HIGH POWER BEAMLINES on Wednesday morning of August 31, 1988. We held this workshop in the afternoon of the same day with the intention of offering further opportunities to exchange information on efforts underway at various laboratories and to discuss ideas on how to solve this problem. We also intended the workshop to be a 'follow-up' to the X-ray optics workshop held at the ESRF, Grenoble in September 1987, where the importance of crystal cooling was strongly pointed out. There were 32 participants from 7 countries; 12 people presented their experiences and ideas for reducing thermal distortion of crystal monochromators. Following those presentations, there were discussions on collaborations for solving this important problem. The attendees agreed that the exchange of information should be continued by holding such meetings at reasonable intervals. (J.P.N.)

  19. Radiation-shielded double crystal X-ray monochromator for JET

    International Nuclear Information System (INIS)

    Barnsley, R.; Morsi, H.W.; Rupprecht, G.; Kaellne, E.

    1989-01-01

    A double crystal X-ray monochromator for absolute wavelength and intensity measurements with very effective shielding of its detector against neutrons and hard X-rays was brought into operation at JET. Fast wavelength scans were taken of impurity line radiation in the wavelength region from about 0.1 nm to 2.3 nm, and monochromatic as well as spectral line scans, for different operational modes of JET. (author) 5 refs., 4 figs

  20. Linac Coherent Light Source soft x-ray materials science instrument optical design and monochromator commissioning

    Czech Academy of Sciences Publication Activity Database

    Heimann, P.; Krupin, O.; Schlotter, W.F.; Turner, J.; Krzywinski, J.; Sorgenfrei, F.; Messerschmidt, M.; Bernstein, D.; Chalupský, Jaromír; Hájková, Věra; Hau-Riege, S.; Holmes, M.; Juha, Libor; Kelez, N.; Lüning, J.; Nordlund, D.; Perea, M.F.; Scherz, A.; Soufli, R.; Wurth, W.; Rowen, M.

    2011-01-01

    Roč. 82, č. 9 (2011), 093104/1-093104/8 ISSN 0034-6748 R&D Projects: GA MŠk(CZ) ME10046 Institutional research plan: CEZ:AV0Z10100523 Keywords : diffraction gratings * light sources * linear accelerators * optical materials * x-ray monochromators * x-ray optics Subject RIV: BH - Optics, Masers, Lasers Impact factor: 1.367, year: 2011

  1. Mechanical design aspects of a soft X-ray plane grating monochromator

    Czech Academy of Sciences Publication Activity Database

    Vašina, R.; Kolařík, V.; Doležel, P.; Mynář, M.; Vondráček, Martin; Cháb, Vladimír; Slezák, Jiří; Comicioli, C.; Prince, K. C.

    Roč. 467-468 (2001), s. 561-564 ISSN 0168-9002 R&D Projects: GA ČR GV202/98/K002 Grant - others:-(XE) CIPA-CT94-0217 Institutional research plan: CEZ:AV0Z2041904 Keywords: synchrotron radiation * monochromator * plane grating Subject RIV: BG - Nuclear, Atomic and Molecular Physics, Colliders Impact factor: 1.026, year: 2001

  2. Design of an adaptive cooled first crystal for an X-ray monochromator

    International Nuclear Information System (INIS)

    Dezoret, D.; Marmoret, R.; Freund, A.K.; Kvick, AA.; Ravelet, R.

    1994-01-01

    We report here on the design of the first crystal in an x-ray monochromator for E.S.R.F. beam lines. This crystal is a thin silicon foil bonded to a cooled beryllium support. A system of piezoelectric actuators is used to counterbalance the deformations induced by synchrotron beams. This work was carried out by the C.E.A. in collaboration with the E.S.R.F. and the LASERDOT Company (Aerospatiale Group). (orig.)

  3. Diffractive-refractive optics: (+,-,-,+) X-ray crystal monochromator with harmonics separation.

    Science.gov (United States)

    Hrdý, Jaromír; Mikulík, Petr; Oberta, Peter

    2011-03-01

    A new kind of X-ray monochromator, based on two channel-cut crystals in the dispersive (+,-,-,+) position, which spatially separates harmonics is proposed. The diffracting surfaces are oriented so that the diffraction is inclined. Owing to refraction, the diffracted beam is sagittally deviated. The deviation depends on wavelength and is much larger for the first harmonic than for higher harmonics. This leads to spatial harmonics separation. The idea is supported by ray-tracing simulation.

  4. High heat flux x-ray monochromators: What are the limits?

    International Nuclear Information System (INIS)

    Rogers, C.S.

    1997-06-01

    First optical elements at third-generation hard x-ray synchrotrons, such as the Advanced Photon Source (APS), are subjected to immense heat fluxes. These optical elements include crystal monochromators, multilayers, and mirrors. This paper presents a mathematical model of the thermal strain of a three-layer (faceplate, heat exchanger, and baseplate) cylindrical optic subjected to a narrow beam of uniform heat flux. The model is used to calculate the strain gradient of a liquid-gallium-cooled x-ray monochromator previously tested on an undulator at the Cornell High Energy Synchrotron Source (CHESS). The resulting thermally broadened rocking curves are calculated and compared to experimental data. The calculated rocking curve widths agree to within a few percent of the measured values over the entire current range tested (0 to 60 mA). The thermal strain gradient under the beam footprint varies linearly with the heat flux and with the ratio of the thermal expansion coefficient to the thermal conductivity; it is insensitive to the heat exchanger properties and the optic geometry. This formulation provides direct insight into the governing parameters, greatly reduces the analysis time, and provides a measure of the ultimate performance of a given monochromator.
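
    A minimal sketch of the scaling stated above (strain gradient proportional to q·α/k), not the paper's three-layer model: the material values are approximate textbook numbers for silicon and the absorbed flux is a hypothetical placeholder, but the comparison shows why the figure of merit α/k dominates.

    # Strain-gradient scaling with heat flux and alpha/k (illustrative only).
    def strain_gradient(q, alpha, k):
        """Relative thermal strain gradient, up to a geometry-dependent constant."""
        return q * alpha / k

    q = 5.0e5  # W/m^2, hypothetical absorbed heat flux
    cases = {
        "Si, 300 K (water/gallium cooled)": (2.6e-6, 150.0),   # alpha [1/K], k [W/(m K)]
        "Si, 100 K (cryogenically cooled)": (-4.0e-7, 900.0),  # approximate values
    }
    for name, (alpha, k) in cases.items():
        print(f"{name}: alpha/k = {alpha/k:.2e} m/W, "
              f"strain gradient ~ {strain_gradient(q, alpha, k):.2e} 1/m")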

  5. A bent Laue-Laue monochromator for a synchrotron-based computed tomography system

    CERN Document Server

    Ren, B; Chapman, L D; Ivanov, I; Wu, X Y; Zhong, Z; Huang, X

    1999-01-01

    We designed and tested a two-crystal bent Laue-Laue monochromator for wide, fan-shaped synchrotron X-ray beams for the program multiple energy computed tomography (MECT) at the National Synchrotron Light Source (NSLS). MECT employs monochromatic X-ray beams from the NSLS's X17B superconducting wiggler beamline for computed tomography (CT) with an improved image quality. MECT uses a fixed horizontal fan-shaped beam with the subject's apparatus rotating around a vertical axis. The new monochromator uses two Czochralski-grown Si crystals, 0.7 and 1.4 mm thick, respectively, and with thick ribs on their upper and lower ends. The crystals are bent cylindrically, with the axis of the cylinder parallel to the fan beam, using 4-rod benders with two fixed rods and two movable ones. The bent-crystal feature of the monochromator resolved the difficulties we had had with the flat Laue-Laue design previously used in MECT, which included (a) inadequate beam intensity, (b) excessive fluctuations in beam intensity, and (c) i...

  6. Error Patterns

    NARCIS (Netherlands)

    Hoede, C.; Li, Z.

    2001-01-01

    In coding theory the problem of decoding focuses on error vectors. In the simplest situation code words are $(0,1)$-vectors, as are the received messages and the error vectors. Comparison of a received word with the code words yields a set of error vectors. In deciding on the original code word,

  7. Self-healing diffusion quantum Monte Carlo algorithms: methods for direct reduction of the fermion sign error in electronic structure calculations

    International Nuclear Information System (INIS)

    Reboredo, F.A.; Hood, R.Q.; Kent, P.C.

    2009-01-01

    We develop a formalism and present an algorithm for optimization of the trial wave function used in fixed-node diffusion quantum Monte Carlo (DMC) methods. The formalism is based on the DMC mixed estimator of the ground state probability density. We take advantage of a basic property of the walker configuration distribution generated in a DMC calculation to (i) project out a multi-determinant expansion of the fixed-node ground state wave function and (ii) define a cost function that relates the interacting fixed-node ground state and the non-interacting trial wave functions. We show that (a) locally smoothing out the kink of the fixed-node ground-state wave function at the node generates a new trial wave function with better nodal structure, and (b) the noise in the fixed-node wave function resulting from finite sampling plays a beneficial role, allowing the nodes to adjust towards those of the exact many-body ground state in a simulated-annealing-like process. Based on these principles, we propose a method to improve both single-determinant and multi-determinant expansions of the trial wave function. The method can be generalized to other wave function forms such as pfaffians. We test the method in a model system where benchmark configuration interaction calculations can be performed and most components of the Hamiltonian are evaluated analytically. Comparing the DMC calculations with the exact solutions, we find that the trial wave function is systematically improved. The overlap of the optimized trial wave function and the exact ground state converges to 100% even when starting from wave functions orthogonal to the exact ground state. Similarly, the DMC total energy and density converge to the exact solutions for the model. In the optimization process we find an optimal non-interacting nodal potential of density-functional-like form whose existence was predicted in a previous publication (Phys. Rev. B 77 245110 (2008)).

  8. Operator errors

    International Nuclear Information System (INIS)

    Knuefer; Lindauer

    1980-01-01

    Moreover, at spectacular events a combination of component failure and human error is often found. In particular, the Rasmussen Report and the German Risk Assessment Study show for pressurised water reactors that human error must not be underestimated. Although operator errors, as a form of human error, can never be eliminated entirely, they can be minimized and their effects kept within acceptable limits if thorough training of personnel is combined with an adequate design of the plant against accidents. In contrast to the investigation of engineering errors, the investigation of human errors has so far been carried out with relatively small budgets. Intensified investigations in this field appear to be a worthwhile effort. (orig.)

  9. A new flexible monochromator setup for quick scanning x-ray absorption spectroscopy

    Energy Technology Data Exchange (ETDEWEB)

    Stoetzel, J.; Luetzenkirchen-Hecht, D.; Frahm, R. [Fachbereich C, Physik, Bergische Universitaet Wuppertal, Gaussstr. 20, 42097 Wuppertal (Germany)

    2010-07-15

    A new monochromator setup for quick-scanning x-ray absorption spectroscopy in the subsecond time regime is presented. Novel driving mechanics allow the energy range of the acquired spectra to be changed by remote control during data acquisition for the first time, dramatically increasing the flexibility and convenience of this method. Completely new experiments are feasible because the time resolution, edge energy, and energy range of the acquired spectra can be changed continuously within seconds without breaking the vacuum of the monochromator vessel and even without interrupting the measurements. The advanced mechanics are explained in detail and the performance is characterized with x-ray absorption spectra of pure metal foils. The energy scale was determined by a fast and accurate angular encoder system measuring the Bragg angle of the monochromator crystal with subarcsecond resolution. The Bragg angle range covered by the oscillating crystal can currently be changed from 0 deg. to 3.0 deg. within 20 s, while the mechanics are capable of moving at frequencies of up to ca. 35 Hz, leading to a time resolution of ca. 14 ms per spectrum. A new software package allows programmed scan sequences to be performed, which enable the user to measure stepwise with alternating parameters in predefined time segments. Thus, e.g., switching between edges scanned with the same energy range is possible within one in situ experiment, while the time resolution can also be varied simultaneously. This progress makes the new system extremely user friendly and efficient for time-resolved x-ray absorption spectroscopy at synchrotron radiation beamlines.
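
    For illustration (not the instrument's own software), an encoder Bragg angle is converted into photon energy via E = hc/(2 d sin θ), assuming a Si(111) crystal with d ≈ 3.1356 Å; the angles below are arbitrary examples.

    # Bragg angle to photon energy for an assumed Si(111) monochromator crystal.
    import math

    HC_KEV_ANGSTROM = 12.3984
    D_SI_111 = 3.1356  # Å (assumed reflection)

    def bragg_angle_to_energy(theta_deg):
        return HC_KEV_ANGSTROM / (2 * D_SI_111 * math.sin(math.radians(theta_deg)))

    for theta in (10.0, 11.5, 13.0):
        print(f"theta = {theta:5.2f} deg -> E = {bragg_angle_to_energy(theta):.3f} keV")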

  10. Comparison of the commercial color LCD and the medical monochrome LCD using randomized object test patterns.

    Directory of Open Access Journals (Sweden)

    Jay Wu

    Workstations and electronic display devices in a picture archiving and communication system (PACS) provide a convenient and efficient platform for medical diagnosis. The performance of display devices has to be verified to ensure that image quality is not degraded. In this study, we designed a set of randomized object test patterns (ROTPs) consisting of randomly located spheres with various image characteristics to evaluate the performance of a 2.5 mega-pixel (MP) commercial color LCD and a 3 MP diagnostic monochrome LCD in several aspects, including contrast, resolution, point spread effect, and noise. The ROTPs were then merged into 120 abdominal CT images. Five radiologists were invited to review the CT images, and receiver operating characteristic (ROC) analysis was carried out using a five-point rating scale. In the high-background patterns of the ROTPs, the sensitivity performance was comparable between both monitors in terms of contrast and resolution, whereas in the low-background patterns the performance of the commercial color LCD was significantly poorer than that of the diagnostic monochrome LCD in all aspects. The average area under the ROC curve (AUC) for reviewing abdominal CT images was 0.717±0.0200 and 0.740±0.0195 for the color monitor and the diagnostic monitor, respectively. The observation time (OT) was 145±27.6 min and 127±19.3 min, respectively. No significant differences appeared in AUC (p = 0.265) or OT (p = 0.07). The overall results indicate that ROTPs can be implemented as a quality control tool to evaluate the intrinsic characteristics of display devices. Although there is still a gap in technology between different types of LCDs, commercial color LCDs could replace diagnostic monochrome LCDs as a platform for reviewing abdominal CT images after monitor calibration.

  11. Easily exchangeable x-ray mirrors and hybrid monochromator modules a study of their performance

    Energy Technology Data Exchange (ETDEWEB)

    Lin, Fan. [Philips Analytical, Asia Pacific, Toa Payoh, (Singapore); Kogan, V. [Philips Analytical, EA Almelo, (Netherlands); Saito, K. [Philips Analytical, Tokyo, (Japan)

    1999-12-01

    PreFix prealigned optical mounts allowing rapid and easy changeover will be presented. The benefits of laterally graded multilayer X-ray mirrors coupled with these PreFix mounts - conversion of a divergent beam to a parallel beam, an increase of intensity by a factor of 3-7, monochromation to α1 and α2, and a dynamic range of 10^4-10^5 cps - will be demonstrated in areas such as thin film and powder analysis. Data will be shown for a diffraction profile of a thin film (Cr/SiO2) with and without a mirror and for Si powder with and without a mirror. Further enhancement will be demonstrated by combining a channel-cut monochromator-collimator with an X-ray mirror to produce a high-intensity, parallel, pure Cu Kα1 beam with an intensity of up to 4.5 × 10^8 cps and a divergence down to 0.01 deg. The applicability to various applications ranging from high resolution to thin film/reflectivity to Rietveld structural refinement and to phase analysis will be shown. The rocking curve of a HEMT 10 nm InGaAs layer on InP will be presented using various 'standard' optics and hybrid optics, as well as Si powder and a Rietveld refinement of CuSO4·5H2O and aspirin. A comparison of the benefits and applications of X-ray mirrors and hybrid mirror/monochromators will be given. The data presented will show that by using X-ray mirrors and hybrid modules the performance of standard 'laboratory' diffractometers can be greatly enhanced to a level previously unachievable, with great practical benefits. Copyright (1999) Australian X-ray Analytical Association Inc.

  12. First operation of an extended range grasshopper monochromator on the Aladdin storage ring

    International Nuclear Information System (INIS)

    Brown, F.C.

    1986-01-01

    First operation of a new extended range monochromator on the 1 GeV storage ring Aladdin is described. Curves are given of output flux as a function of photon energy for the 2 m and for the 5 m gratings as measured with an NBS diode. Relatively low background and flux up to 1500 eV is obtained using a 1200 line/mm 5 m holographic grating. Highly reproducible scans were obtained of the transmission of thin films including the carbon K and titanium L edges. This reproducibility and high throughput is in large part due to the small beam size and excellent stability of Aladdin. (orig.)

  13. Image quality evaluation of medical color and monochrome displays using an imaging colorimeter

    Science.gov (United States)

    Roehrig, Hans; Gu, Xiliang; Fan, Jiahua

    2012-10-01

    The purpose of this presentation is to demonstrate the means which permit examining the accuracy of image quality, with respect to MTF (Modulation Transfer Function) and NPS (Noise Power Spectrum), of color displays and monochrome displays. Past indications were that color displays could affect clinical performance negatively compared to monochrome displays; reference (1), however, was not based on measurements made with a colorimeter. Colorimeters like the PM-1423 are now available which have higher sensitivity and color accuracy than traditional cameras such as CCD cameras. This paper focuses on measurements of the physical characteristics of the spatial resolution and noise performance of color and monochrome medical displays made with a colorimeter; the data will subsequently be submitted to an ROC study for presentation at a future SPIE conference. Specifically, the Modulation Transfer Function (MTF) and Noise Power Spectrum (NPS) were evaluated and compared at different digital driving levels (DDL) between the two medical displays. Measurement of color image quality was done with an imaging colorimeter. Imaging colorimetry is ideally suited to FPD measurement because imaging systems capture spatial data, generating millions of data points in a single measurement operation. The imaging colorimeter used was the PM-1423 from Radiant Imaging.

  14. Performance of the SURF-II high-throughput toroidal grating monochromator

    International Nuclear Information System (INIS)

    Kurtz, R.L.; Ederer, D.L.; Barth, J.; Stockbauer, R.

    1988-01-01

    The performance of the 'high-flux' toroidal grating monochromator (HFTGM) at the NBS SURF-II synchrotron storage ring is assessed. Two gratings are studied: one with a ruled profile and the other with a laminar profile. The laminar profile is shown to substantially reduce the intensity of higher-order diffracted light with only a small decrease in the intensity of the first-order light. The dependence of the energy resolution on the area of the grating illuminated is also discussed. (orig.)

  15. Raytracing, chopper, and guideline for double-headed Dragon monochromators (invited)

    International Nuclear Information System (INIS)

    Chen, C.T.

    1992-01-01

    The raytracing of the double-headed Dragon, a recently proposed monochromator for producing two simultaneous left and right circularly polarized soft x-ray beams, is presented. The energy resolution and wavelength of these two beams are confirmed to be identical, and the high performance of the original Dragon is found to be preserved in the double-headed configuration. A compact ultra-high vacuum compatible chopper for rapid alternation between left and right helicities is presented, and a guideline for collecting circularly polarized light from bending magnet sources is given

  16. A new gradient monochromator for the IN13 back-scattering spectrometer

    International Nuclear Information System (INIS)

    Ciampolini, L.; Bove, L.E.; Mondelli, C.; Alianelli, L.; Labbe-Lavigne, S.; Natali, F.; Bee, M.; Deriu, A.

    2005-01-01

    We present new McStas simulations of the back-scattering thermal neutron spectrometer IN13 to evaluate the advantages of a new temperature-gradient monochromator relative to a conventional one. The simulations show that a flux gain of up to a factor of 7 can be obtained with just a 10% loss in energy resolution and a 20% increase in beam spot size at the sample. The results also indicate that a moderate applied temperature gradient (ΔT ≈ 16 K) is sufficient to obtain this significant flux gain.

  17. Output diagnostics of the grazing incidence plane grating monochromator BUMBLE BEE (15 to 1500 eV)

    Energy Technology Data Exchange (ETDEWEB)

    Jark, W.; Kunz, C.

    1985-09-01

    The BUMBLE BEE is a bakeable UHV-compatible plane grating monochromator with a fixed exit beam and the capability to suppress higher-order radiation over a wide energy range. The instrument was built to be used in connection with a UHV reflectometer and has a differential pumping section between the optical components and the sample, allowing a pressure of 10^-5 torr in the experimental chamber without influencing the UHV in the monochromator. The monochromator is not optimized for resolution: owing to its location at a beamline with a short source distance, we achieve only a medium resolving power of the order of E/ΔE ≈ 200. The primary goal is the suppression of higher orders; fortunately, the operating parameters thus selected for the coupled rotations of the optical components also give nearly the highest available output. The instrument is characterized in great detail. The performance of the instrument is discussed and compared with extensive theoretical calculations.

  18. Synchrotron X-ray adaptative monochromator: study and realization of a prototype

    International Nuclear Information System (INIS)

    Dezoret, D.

    1995-01-01

    This work presents the study of a prototype synchrotron X-ray monochromator. The spectral qualities of such optics are sensitive to the heat loads, which are particularly high at third-generation synchrotron sources like the ESRF. Indeed, the power delivered by synchrotron beams can reach a few kilowatts, with power densities of a few tens of watts per square millimetre. The mechanical deformations of the beamline optical elements caused by the heat load can degrade their spectral performance. In order to compensate for these deformations, we have studied the transposition of adaptive optics technology from astronomy to the X-ray field. First, we considered the modifications of the spectral characteristics of a crystal induced by X-rays and established the specifications required for a technological realisation. Thermomechanical and technological studies were then needed to transpose the astronomical technology to X-rays. After these studies, we began the realisation of a prototype. This monochromator is composed of a silicon (111) crystal bonded onto a piezoelectric structure. The mechanical control is a closed-loop system composed of an infrared light source and a Shack-Hartmann CCD wavefront analyser. This system has to compensate for the deformations of the crystal in the 5 keV to 60 keV energy range with a power density of 1 watt per square millimetre. (authors)

  19. Periodic magnetic field as a polarized and focusing thermal neutron spectrometer and monochromator

    Energy Technology Data Exchange (ETDEWEB)

    Cremer, J. T.; Williams, D. L.; Fuller, M. J.; Gary, C. K.; Piestrup, M. A. [Adelphi Technology, Inc., 2003 East Bayshore Rd., Redwood City, California 94063 (United States); Pantell, R. H.; Feinstein, J. [Department of Electrical Engineering, Stanford University, Stanford, California 94305 (United States); Flocchini, R. G.; Boussoufi, M.; Egbert, H. P.; Kloh, M. D.; Walker, R. B. [Davis McClellan Nuclear Radiation Center, University of California, McClellan, California 95652 (United States)

    2010-01-15

    A novel periodic magnetic field (PMF) optic is shown to act as a prism, lens, and polarizer for neutrons and particles with a magnetic dipole moment. The PMF has a two-dimensional field in the axial direction of neutron propagation. The PMF alternating magnetic field polarity provides strong gradients that cause separation of neutrons by wavelength axially and by spin state transversely. The spin-up neutrons exit the PMF with their magnetic spins aligned parallel to the PMF magnetic field, and are deflected upward and line focus at a fixed vertical height, proportional to the PMF period, at a downstream focal distance that increases with neutron energy. The PMF has no attenuation by absorption or scatter, as with material prisms or crystal monochromators. Embodiments of the PMF include neutron spectrometer or monochromator, and applications include neutron small angle scattering, crystallography, residual stress analysis, cross section measurements, and reflectometry. Presented are theory, experimental results, computer simulation, applications of the PMF, and comparison of its performance to Stern-Gerlach gradient devices and compound material and magnetic refractive prisms.

  20. Vibration measurements of high-heat-load monochromators for DESY PETRA III extension

    Energy Technology Data Exchange (ETDEWEB)

    Kristiansen, Paw, E-mail: paw.kristiansen@fmb-oxford.com [FMB Oxford Ltd, Unit 1 Ferry Mills, Oxford OX2 0ES (United Kingdom); Horbach, Jan; Döhrmann, Ralph; Heuer, Joachim [DESY, Deutsches Elektronen-Synchrotron Hamburg, Notkestrasse 85, 22607 Hamburg (Germany)

    2015-05-09

    Vibration measurements of a cryocooled double-crystal monochromator are presented. The origins of the vibrations are identified. The minimum achieved vibration of the relative pitch between the two crystals is 48 nrad RMS and the minimum achieved absolute vibration of the second crystal is 82 nrad RMS. The requirement for vibrational stability of beamline optics continues to evolve rapidly to comply with the demands created by the improved brilliance of the third-generation low-emittance storage rings around the world. The challenge is to quantify the performance of the instrument before it is installed at the beamline. In this article, measurement techniques are presented that directly and accurately measure (i) the relative vibration between the two crystals of a double-crystal monochromator (DCM) and (ii) the absolute vibration of the second-crystal cage of a DCM. Excluding a synchrotron beam, the measurements are conducted under in situ conditions, connected to a liquid-nitrogen cryocooler. The investigated DCM utilizes a direct-drive (no gearing) goniometer for the Bragg rotation. The main causes of the DCM vibration are found to be the servoing of the direct-drive goniometer and the flexibility in the crystal cage motion stages. It is found that the investigated DCM can offer relative pitch vibration down to 48 nrad RMS (capacitive sensors, 0–5 kHz bandwidth) and absolute pitch vibration down to 82 nrad RMS (laser interferometer, 0–50 kHz bandwidth), with the Bragg axis brake engaged.

  1. Strain-free polished channel-cut crystal monochromators: a new approach and results

    Science.gov (United States)

    Kasman, Elina; Montgomery, Jonathan; Huang, XianRong; Lerch, Jason; Assoufid, Lahsen

    2017-08-01

    The use of channel-cut crystal monochromators has been traditionally limited to applications that can tolerate the rough surface quality from wet etching without polishing. We have previously presented and discussed the motivation for producing channel cut crystals with strain-free polished surfaces [1]. Afterwards, we have undertaken an effort to design and implement an automated machine for polishing channel-cut crystals. The initial effort led to inefficient results. Since then, we conceptualized, designed, and implemented a new version of the channel-cut polishing machine, now called C-CHiRP (Channel-Cut High Resolution Polisher), also known as CCPM V2.0. The new machine design no longer utilizes Figure-8 motion that mimics manual polishing. Instead, the polishing is achieved by a combination of rotary and linear functions of two coordinated motion systems. Here we present the new design of C-CHiRP, its capabilities and features. Multiple channel-cut crystals polished using the C-CHiRP have been deployed into several beamlines at the Advanced Photon Source (APS). We present the measurements of surface finish, flatness, as well as topography results obtained at 1-BM of APS, as compared with results typically achieved when polishing flat-surface monochromator crystals using conventional polishing processes. Limitations of the current machine design, capabilities and considerations for strain-free polishing of highly complex crystals are also discussed, together with an outlook for future developments and improvements.

  2. Periodic magnetic field as a polarized and focusing thermal neutron spectrometer and monochromator.

    Science.gov (United States)

    Cremer, J T; Williams, D L; Fuller, M J; Gary, C K; Piestrup, M A; Pantell, R H; Feinstein, J; Flocchini, R G; Boussoufi, M; Egbert, H P; Kloh, M D; Walker, R B

    2010-01-01

    A novel periodic magnetic field (PMF) optic is shown to act as a prism, lens, and polarizer for neutrons and particles with a magnetic dipole moment. The PMF has a two-dimensional field in the axial direction of neutron propagation. The PMF alternating magnetic field polarity provides strong gradients that cause separation of neutrons by wavelength axially and by spin state transversely. The spin-up neutrons exit the PMF with their magnetic spins aligned parallel to the PMF magnetic field, and are deflected upward and line focus at a fixed vertical height, proportional to the PMF period, at a downstream focal distance that increases with neutron energy. The PMF has no attenuation by absorption or scatter, as with material prisms or crystal monochromators. Embodiments of the PMF include neutron spectrometer or monochromator, and applications include neutron small angle scattering, crystallography, residual stress analysis, cross section measurements, and reflectometry. Presented are theory, experimental results, computer simulation, applications of the PMF, and comparison of its performance to Stern-Gerlach gradient devices and compound material and magnetic refractive prisms.

  3. Performances of synchrotron X-ray monochromators under heat load. Part 2. Application of the Takagi-Taupin diffraction theory

    CERN Document Server

    Mocella, V; Freund, A K; Hoszowska, J; Zhang, L; Epelboin, Y

    2001-01-01

    The aim of this work is to generate the rocking curves of monochromators exposed to heat load in synchrotron radiation beams with a computer code performing diffraction calculations based on the theory of Takagi and Taupin. The model study starts with the calculation of deformation by finite element analysis and from an accurate characterization of the incident wave and includes the simulation of the wavefront propagation between the first and the second crystal (analyzer) of a double crystal monochromator. A monochromatic plane wave as well as a polychromatic spherical wave approach is described. The theoretical predictions of both methods are compared with experimental data measured in Bragg geometry and critically discussed.

  4. Performances of synchrotron X-ray monochromators under heat load. Part 2. Application of the Takagi-Taupin diffraction theory

    International Nuclear Information System (INIS)

    Mocella, V.; Ferrero, C.; Freund, A.K.; Hoszowska, J.; Zhang, L.; Epelboin, Y.

    2001-01-01

    The aim of this work is to generate the rocking curves of monochromators exposed to heat load in synchrotron radiation beams with a computer code performing diffraction calculations based on the theory of Takagi and Taupin. The model study starts with the calculation of deformation by finite element analysis and from an accurate characterization of the incident wave and includes the simulation of the wavefront propagation between the first and the second crystal (analyzer) of a double crystal monochromator. A monochromatic plane wave as well as a polychromatic spherical wave approach is described. The theoretical predictions of both methods are compared with experimental data measured in Bragg geometry and critically discussed

  5. A fine adjustment mechanism of the second crystal in a double-crystal monochromator with a 3-PS parallel manipulator

    International Nuclear Information System (INIS)

    Cao Chongzhen; Gao, X.; Ma, P.; Yu, H.; Wang, F.; Huang, Y.; Liu, P.

    2005-01-01

    A novel fine adjustment mechanism for the second crystal in a double-crystal monochromator is put forward, based on a 3-PS parallel manipulator and magnetic force. The principle of finely adjusting the pitch angle and the roll angle is analyzed, and the structural parameters of the permanent magnet, a key part of the fine adjustment mechanism, are optimized. The fine adjustment mechanism with the 3-PS parallel manipulator has been applied successfully in the double-crystal monochromator of the 4W1B beamline at the Beijing Synchrotron Radiation Facility (BSRF).

  6. Einstein's error

    International Nuclear Information System (INIS)

    Winterflood, A.H.

    1980-01-01

    In discussing Einstein's Special Relativity theory, it is claimed that the theory violates the principle of relativity itself and that an anomalous sign appears in the mathematics, in the factor which transforms one inertial observer's measurements into those of another inertial observer. The apparent source of this error is discussed. Having corrected the error, a new theory, called Observational Kinematics, is introduced to replace Einstein's Special Relativity. (U.K.)

  7. Some thoughts on source monochromation and the implications for electron energy loss spectroscopy

    CERN Document Server

    Brydson, R; Brown, A

    2003-01-01

    We briefly outline the factors determining the intrinsic widths of features in electron energy loss near edge structure (ELNES) measured by electron energy loss spectroscopy (EELS) in the transmission electron microscope (TEM). We have made estimates of the differing contributions of both the initial and final state lifetime effects in the ELNES ionisation processes and also show how these may be combined with the instrumental energy resolution. We discuss the potential benefits of source monochromation for ELNES measurements via a comparison of these theoretical estimates with experimental spectra from the literature. We show that for certain core level excitations, solid state broadening mechanisms may be the fundamental limiting factor for resolving fine detail in ELNES. (orig.)

  8. The sapphire backscattering monochromator at the Dynamics beamline P01 of PETRA III

    Energy Technology Data Exchange (ETDEWEB)

    Alexeev, P., E-mail: pavel.alexeev@desy.de [Deutsches Elektronen-Synchrotron DESY (Germany); Asadchikov, V. [Russian Academy of Sciences, A.V. Shubnikov Institute of Crystallography (Russian Federation); Bessas, D. [European Synchrotron Radiation Facility (France); Butashin, A.; Deryabin, A. [Russian Academy of Sciences, A.V. Shubnikov Institute of Crystallography (Russian Federation); Dill, F.-U.; Ehnes, A.; Herlitschke, M. [Deutsches Elektronen-Synchrotron DESY (Germany); Hermann, R. P.; Jafari, A. [JARA-FIT, Jülich Centre for Neutron Science JCNS and Peter Grünberg Institut PGI (Germany); Prokhorov, I. [Kaluga Branch of Shubnikov Institute of Crystallography RAS, Research Center for Space Materials Science (Russian Federation); Roshchin, B. [Russian Academy of Sciences, A.V. Shubnikov Institute of Crystallography (Russian Federation); Röhlsberger, R.; Schlage, K.; Sergueev, I.; Siemens, A.; Wille, H.-C., E-mail: hans.christian.wille@desy.de [Deutsches Elektronen-Synchrotron DESY (Germany)

    2016-12-15

    We report on a high-resolution sapphire backscattering monochromator installed at the Dynamics beamline P01 of PETRA III. The device enables nuclear resonance scattering experiments on Moessbauer isotopes with transition energies between 20 and 60 keV with sub-meV to meV resolution. In a first performance test with the 119Sn nuclear resonance at an X-ray energy of 23.88 keV, an energy resolution of 1.34 meV was achieved. The device extends the field of nuclear resonance scattering at the PETRA III synchrotron light source to many further isotopes such as 151Eu, 149Sm, 161Dy, 125Te and 121Sb.
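
    A one-line arithmetic check of the quoted figures gives the relative energy resolution achieved in the 119Sn test.

    # Relative energy resolution from the numbers quoted above.
    dE = 1.34e-3      # eV (1.34 meV bandwidth)
    E = 23.88e3       # eV (119Sn nuclear resonance energy)
    print(f"dE/E = {dE/E:.2e}, resolving power E/dE = {E/dE:.2e}")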

  9. Comparison of elliptical and spherical mirrors for the grasshopper monochromators at SSRL

    International Nuclear Information System (INIS)

    Waldhauer, A.P.

    1989-01-01

    A comparison of the performance of a spherical and elliptical mirror in the grasshopper monochromator is presented. The problem was studied by ray tracing and then tested using visible (λ=633 nm) laser light. Calculations using ideal optics yield an improvement in flux by a factor of up to 2.7, while tests with visible light show an increase by a factor of 5 because the old spherical mirror is compared to a new, perfect elliptical one. The FWHM of the measured focus is 90 μm with a spherical mirror, and 25 μm with an elliptical one. Elliptical mirrors have been acquired and are now being installed in the two grasshoppers at SSRL

  10. An elastic, low-background vertical focusing element for a doubly focusing neutron monochromator

    International Nuclear Information System (INIS)

    Smee, Stephen A.; Brand, Paul C.; Barry, Dwight D.; Broholm, Collin L.; Anand, Dave K.

    2001-01-01

    A novel variable-radius-of-curvature device for the focusing of neutrons is presented. This elastic element consists of a thin, variable-thickness, constant-width aluminum blade to which diffracting crystals can be attached. When buckled, the blade assumes a circular focal shape, the radius of which is easily controlled by the relative displacement of supporting pivots. Precision electromechanical and optical measurements show that the slope of the buckled blade conforms to a circular arc to within 0.15° for radii in the range 900 mm < R < 10 000 mm. This easily scalable, low-mass mechanism is well suited for use in a focusing neutron monochromator, as the parasitic scattering typically associated with traditional lead screw and lever mechanisms is greatly reduced.

  11. The double rotor neutron monochromator facility at the ET-RR-1 reactor

    International Nuclear Information System (INIS)

    Adib, M.; Maayouf, R.M.A.; Abdel-Kawy, A.; Gwaily, S.E.; Hamouda, I.

    1983-01-01

    A double-rotor neutron monochromator recently installed in front of one of the ET-RR-1 reactor horizontal channels is described. The system consists of two rotors, suspended in a magnetic field, spinning at speeds up to 16000 rpm with a constant phase angle relative to each other and producing bursts of monochromatic neutrons at the sample. Each of the rotors, 32 cm in diameter and 27 kg in weight, has two slits to produce two neutron bursts per revolution. The slits have a radius of curvature of 65.65 cm and a cross-sectional area of 7 × 10 mm². The jitter of the phase between the rotors was measured at different rotation rates and was found not to exceed ±1.5 μs. The transmission function of a one-rotor system was measured and found to be in agreement with that theoretically predicted. (Auth.)
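
    Worked arithmetic from the figures above: with two slits per revolution, the burst rate of one rotor at the maximum speed follows directly.

    # Neutron burst rate of one rotor at 16000 rpm with two slits.
    rpm = 16000
    slits_per_rev = 2
    bursts_per_s = rpm / 60 * slits_per_rev
    print(f"{bursts_per_s:.0f} bursts/s, i.e. one burst every "
          f"{1e3 / bursts_per_s:.2f} ms")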

  12. Transmission test of the polyethylene shield against 40 and 65 MeV quasi monochrome neutron

    International Nuclear Information System (INIS)

    Nakao, Makoto; Nakamura, Takashi; Sakuya, Yoshimasa; Nauchi, Yasushi; Nakao, Noriaki; Tanaka, Susumu; Sakamoto, Yukio; Nakajima, Hiroshi; Nakane, Yoshihiro.

    1996-01-01

    Using 40 and 65 MeV quasi-monochromatic neutrons from the AVF cyclotron installed at the Takasaki Laboratory of the Japan Atomic Energy Research Institute, neutron energy spectra were measured after transmission through polyethylene shields. Results of shielding experiments using concrete and iron, the main shielding materials, had been reported previously. Since the data obtained in these experiments are useful as benchmark data for validating shielding calculation methods and cross-section data sets, a shielding calculation simulating the new experiment was carried out and compared with the experimental data, together with the previous results. It was found that the calculated neutron flux transmitted through the polyethylene shield deviated increasingly from the measurement as the shield thickness increased. The attenuation distance of the peak neutrons was also found to be over-estimated in the calculation, the discrepancy reaching factors of about three and five for 43 MeV neutrons at shield thicknesses of 120 and 180 cm, respectively. (G.K.)

  13. A water-cooled x-ray monochromator for using off-axis undulator beam

    International Nuclear Information System (INIS)

    Khounsary, A.; Maser, J.

    2000-01-01

    Undulator beamlines at third-generation synchrotron x-ray sources are designed to use the high-brilliance radiation that is contained in the central cone of the generated x-ray beams. The rest of the x-ray beam is often unused. Moreover, in some cases, such as in zone-plate-based microfocusing beamlines, only a small part of the central radiation cone around the optical axis is used. In this paper, a side-station branch line at the Advanced Photon Source that takes advantage of some of the unused off-axis photons in a microfocusing x-ray beamline is described. Detailed information on the design and analysis of a high-heat-load water-cooled monochromator developed for this beamline is provided.

  14. Rotation of X-ray polarization in the glitches of a silicon crystal monochromator.

    Science.gov (United States)

    Sutter, John P; Boada, Roberto; Bowron, Daniel T; Stepanov, Sergey A; Díaz-Moreno, Sofía

    2016-08-01

    EXAFS studies on dilute samples are usually carried out by collecting the fluorescence yield using a large-area multi-element detector. This method is susceptible to the 'glitches' produced by all single-crystal monochromators. Glitches are sharp dips or spikes in the diffracted intensity at specific crystal orientations. If incorrectly compensated, they degrade the spectroscopic data. Normalization of the fluorescence signal by the incident flux alone is sometimes insufficient to compensate for the glitches. Measurements performed at the state-of-the-art wiggler beamline I20-scanning at Diamond Light Source have shown that the glitches alter the spatial distribution of the sample's quasi-elastic X-ray scattering. Because glitches result from additional Bragg reflections, multiple-beam dynamical diffraction theory is necessary to understand their effects. Here, the glitches of the Si(111) four-bounce monochromator of I20-scanning just above the Ni K edge are associated with their Bragg reflections. A fitting procedure that treats coherent and Compton scattering is developed and applied to a sample of an extremely dilute (100 micromolal) aqueous solution of Ni(NO3)2. The depolarization of the wiggler X-ray beam out of the electron orbit is modeled. The fits achieve good agreement with the sample's quasi-elastic scattering with just a few parameters. The X-ray polarization is rotated up to ±4.3° within the glitches, as predicted by dynamical diffraction. These results will help users normalize EXAFS data at glitches.

  15. Rotation of X-ray polarization in the glitches of a silicon crystal monochromator

    Energy Technology Data Exchange (ETDEWEB)

    Sutter, John P.; Boada, Roberto; Bowron, Daniel T.; Stepanov, Sergey A.; Díaz-Moreno, Sofía

    2016-07-06

    EXAFS studies on dilute samples are usually carried out by collecting the fluorescence yield using a large-area multi-element detector. This method is susceptible to the `glitches' produced by all single-crystal monochromators. Glitches are sharp dips or spikes in the diffracted intensity at specific crystal orientations. If incorrectly compensated, they degrade the spectroscopic data. Normalization of the fluorescence signal by the incident flux alone is sometimes insufficient to compensate for the glitches. Measurements performed at the state-of-the-art wiggler beamline I20-scanning at Diamond Light Source have shown that the glitches alter the spatial distribution of the sample's quasi-elastic X-ray scattering. Because glitches result from additional Bragg reflections, multiple-beam dynamical diffraction theory is necessary to understand their effects. Here, the glitches of the Si(111) four-bounce monochromator of I20-scanning just above the Ni K edge are associated with their Bragg reflections. A fitting procedure that treats coherent and Compton scattering is developed and applied to a sample of an extremely dilute (100 micromolal) aqueous solution of Ni(NO3)2. The depolarization of the wiggler X-ray beam out of the electron orbit is modeled. The fits achieve good agreement with the sample's quasi-elastic scattering with just a few parameters. The X-ray polarization is rotated up to ±4.3° within the glitches, as predicted by dynamical diffraction. These results will help users normalize EXAFS data at glitches.

  16. High-resolution monochromated electron energy-loss spectroscopy of organic photovoltaic materials.

    Science.gov (United States)

    Alexander, Jessica A; Scheltens, Frank J; Drummy, Lawrence F; Durstock, Michael F; Hage, Fredrik S; Ramasse, Quentin M; McComb, David W

    2017-09-01

    Advances in electron monochromator technology are providing opportunities for high energy resolution (10-200 meV) electron energy-loss spectroscopy (EELS) to be performed in the scanning transmission electron microscope (STEM). The energy-loss near-edge structure in core-loss spectroscopy is often limited by core-hole lifetimes rather than the energy spread of the incident illumination. However, in the valence-loss region, the reduced width of the zero loss peak makes it possible to resolve clearly and unambiguously spectral features at very low energy losses. Valence-loss spectra were acquired from four materials used in organic photovoltaics (OPVs): poly(3-hexylthiophene) (P3HT), [6,6]-phenyl-C61-butyric acid methyl ester (PCBM), copper phthalocyanine (CuPc), and fullerene (C60). Data were collected on two different monochromated instruments - a Nion UltraSTEM 100 MC 'HERMES' and an FEI Titan³ 60-300 Image-Corrected S/TEM - using energy resolutions (as defined by the zero loss peak full-width at half-maximum) of 35 meV and 175 meV, respectively. The data were acquired to allow deconvolution of plural scattering, and Kramers-Kronig analysis was utilized to extract the complex dielectric functions. The real and imaginary parts of the complex dielectric functions obtained from the two instruments were compared to evaluate whether the enhanced resolution in the Nion provides new opto-electronic information for these organic materials. The differences between the spectra are discussed, and the implications for STEM-EELS studies of advanced materials are considered. Copyright © 2017 Elsevier B.V. All rights reserved.

  17. Optimization of a constrained linear monochromator design for neutral atom beams

    International Nuclear Information System (INIS)

    Kaltenbacher, Thomas

    2016-01-01

    A focused ground state, neutral atom beam, exploiting its de Broglie wavelength by means of atom optics, is used for neutral atom microscopy imaging. Employing Fresnel zone plates as a lens for these beams is a well established microscopy technique. To date, even for favorable beam source conditions a minimal focus spot size of slightly below 1 μm was reached. This limitation is essentially given by the intrinsic spectral purity of the beam in combination with the chromatic aberration of the diffraction based zone plate. Therefore, it is important to enhance the monochromaticity of the beam, enabling a higher spatial resolution, preferably below 100 nm. We propose to increase the monochromaticity of a neutral atom beam by means of a so-called linear monochromator set-up – a Fresnel zone plate in combination with a pinhole aperture – in order to gain more than one order of magnitude in spatial resolution. This configuration is known in X-ray microscopy and has proven to be useful, but has not been applied to neutral atom beams. The main result of this work is optimal design parameters based on models for this linear monochromator set-up followed by a second zone plate for focusing. The optimization was performed for minimizing the focal spot size and maximizing the centre line intensity at the detector position for an atom beam simultaneously. The results presented in this work are for, but not limited to, a neutral helium atom beam. - Highlights: • The presented results are essential for optimal operation conditions of a neutral atom microscope set-up. • The key parameters for the experimental arrangement of a neutral microscopy set-up are identified and their interplay is quantified. • Insights in the multidimensional problem provide deep and crucial understanding for pushing beyond the apparent focus limitations. • This work points out the trade-offs for high intensity and high spatial resolution indicating several use cases.
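
    Since the technique hinges on the de Broglie wavelength of the atoms, a minimal Python sketch of λ = h/(mv) is added here for orientation; the beam velocity is an assumed, typical value for a supersonic helium beam and is not taken from the abstract:

      # de Broglie wavelength of a neutral helium beam (illustrative sketch only;
      # the beam velocity below is an assumption, not a value from the article).
      h = 6.626e-34        # Planck constant, J*s
      m_he = 6.646e-27     # mass of a helium-4 atom, kg
      v_beam = 1.8e3       # assumed beam velocity, m/s (typical supersonic He beam)

      wavelength_m = h / (m_he * v_beam)
      print(f"de Broglie wavelength ≈ {wavelength_m * 1e9:.3f} nm")
      # ≈ 0.06 nm, i.e. of the same order as X-ray wavelengths, which is why
      # zone-plate optics and monochromator concepts from X-ray microscopy carry over.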

  18. Optimization of a constrained linear monochromator design for neutral atom beams.

    Science.gov (United States)

    Kaltenbacher, Thomas

    2016-04-01

    A focused ground state, neutral atom beam, exploiting its de Broglie wavelength by means of atom optics, is used for neutral atom microscopy imaging. Employing Fresnel zone plates as a lens for these beams is a well established microscopy technique. To date, even for favorable beam source conditions a minimal focus spot size of slightly below 1μm was reached. This limitation is essentially given by the intrinsic spectral purity of the beam in combination with the chromatic aberration of the diffraction based zone plate. Therefore, it is important to enhance the monochromaticity of the beam, enabling a higher spatial resolution, preferably below 100nm. We propose to increase the monochromaticity of a neutral atom beam by means of a so-called linear monochromator set-up - a Fresnel zone plate in combination with a pinhole aperture - in order to gain more than one order of magnitude in spatial resolution. This configuration is known in X-ray microscopy and has proven to be useful, but has not been applied to neutral atom beams. The main result of this work is optimal design parameters based on models for this linear monochromator set-up followed by a second zone plate for focusing. The optimization was performed for minimizing the focal spot size and maximizing the centre line intensity at the detector position for an atom beam simultaneously. The results presented in this work are for, but not limited to, a neutral helium atom beam. Copyright © 2016 Elsevier B.V. All rights reserved.

  19. Optimization of a constrained linear monochromator design for neutral atom beams

    Energy Technology Data Exchange (ETDEWEB)

    Kaltenbacher, Thomas

    2016-04-15

    A focused ground state, neutral atom beam, exploiting its de Broglie wavelength by means of atom optics, is used for neutral atom microscopy imaging. Employing Fresnel zone plates as a lens for these beams is a well established microscopy technique. To date, even for favorable beam source conditions a minimal focus spot size of slightly below 1 μm was reached. This limitation is essentially given by the intrinsic spectral purity of the beam in combination with the chromatic aberration of the diffraction based zone plate. Therefore, it is important to enhance the monochromaticity of the beam, enabling a higher spatial resolution, preferably below 100 nm. We propose to increase the monochromaticity of a neutral atom beam by means of a so-called linear monochromator set-up – a Fresnel zone plate in combination with a pinhole aperture – in order to gain more than one order of magnitude in spatial resolution. This configuration is known in X-ray microscopy and has proven to be useful, but has not been applied to neutral atom beams. The main result of this work is optimal design parameters based on models for this linear monochromator set-up followed by a second zone plate for focusing. The optimization was performed for minimizing the focal spot size and maximizing the centre line intensity at the detector position for an atom beam simultaneously. The results presented in this work are for, but not limited to, a neutral helium atom beam. - Highlights: • The presented results are essential for optimal operation conditions of a neutral atom microscope set-up. • The key parameters for the experimental arrangement of a neutral microscopy set-up are identified and their interplay is quantified. • Insights in the multidimensional problem provide deep and crucial understanding for pushing beyond the apparent focus limitations. • This work points out the trade-offs for high intensity and high spatial resolution indicating several use cases.

  20. Evaluation of the Data-Ray DR96L 4 x 3 Aspect Ratio, 22-Inch Diagonal Flat Screen Monochrome CRT Monitor

    National Research Council Canada - National Science Library

    2001-01-01

    Based on the results of our evaluation of the third sample, NIDL cannot certify the Data-Ray DR96L monochrome monitor as being suitable for monoscopic or stereoscopic operation in IEC workstations...

  1. Design, Build & Test of a Double Crystal Monochromator for Beamlines I09 & I23 at the Diamond Light Source

    Science.gov (United States)

    Kelly, J.; Lee, T.; Alcock, S.; Patel, H.

    2013-03-01

    A high stability Double Crystal Monochromator has been developed at The Diamond Light Source for beamlines I09 and I23. The design specification was a cryogenic, fixed exit, energy scanning monochromator, operating over an energy range of 2.1 - 25 keV using a Si(111) crystal set. The novel design concepts are the direct drive, air bearing Bragg axis, low strain crystal mounts and the cooling scheme. The instrument exhibited superb stability and repeatability on the B16 Test Beamline. A 20 keV Si(555), 1.4 μrad rocking curve was demonstrated. The DCM showed good stability without any evidence of vibration or Bragg angle nonlinearity.

  2. Medication Errors - A Review

    OpenAIRE

    Vinay BC; Nikhitha MK; Patel Sunil B

    2015-01-01

    This review article explains the definition of medication errors, the scope of the medication error problem, the types and common causes of medication errors, and their monitoring, consequences, prevention and management, supported by clear tables for ease of understanding.

  3. A high throughput 2 m normal incidence monochromator for SURF-II

    International Nuclear Information System (INIS)

    Ederer, D.L.; Cole, B.E.; West, J.B.

    1980-01-01

    The high intrinsic brightness of the circulating electron beam at SURF-II is used as the entrance slit for a two-meter normal incidence monochromator. A typical electron beam size is 100 μm high by 2 mm wide, yielding an observed resolution of 0.4 Å with a 200 μm exit slit and a 2400 lines/mm grating. The instrument accepts a beam with a 65 mrad horizontal divergence and a 10 mrad vertical divergence. A plane pre-mirror used near normal incidence reflects the incoming radiation onto the 2 m grating; this combination provides a horizontal exit beam and enables the experiment to be located three meters from the orbit tangent point. With magnesium fluoride coated aluminium optics a flux of 2 x 10¹¹ photons/(s Å) at 1200 Å is observed with a 10 mA circulating current. A flux of 5 x 10¹⁰ photons/(s Å) at 600 Å is observed with an osmium coated grating and a 10 mA circulating current. Sample spectra of the angle-resolved photoelectron spectrum of CO are presented. (orig.)
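
    The quoted slit-limited resolution can be cross-checked from the reciprocal linear dispersion of a 2 m, 2400 lines/mm normal-incidence instrument; the Python sketch below is an order-of-magnitude check added here (assuming near-normal diffraction, cos β ≈ 1), not part of the original record:

      # Consistency check: slit-limited resolution of a 2 m normal-incidence
      # monochromator with a 2400 lines/mm grating and a 200 micrometre exit slit.
      groove_density_per_mm = 2400.0
      focal_length_mm = 2000.0
      exit_slit_mm = 0.2
      order = 1

      d_angstrom = 1e7 / groove_density_per_mm                       # groove spacing, ~4167 Å
      # reciprocal linear dispersion dλ/dx ≈ d·cosβ/(m·f), with cosβ ≈ 1 near normal incidence
      dispersion_A_per_mm = d_angstrom / (order * focal_length_mm)   # ~2.1 Å/mm
      resolution_A = dispersion_A_per_mm * exit_slit_mm              # ~0.4 Å

      print(f"dispersion ≈ {dispersion_A_per_mm:.1f} Å/mm, resolution ≈ {resolution_A:.1f} Å")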

  4. Performance of synchrotron x-ray monochromators under heat load: How reliable are the predictions?

    International Nuclear Information System (INIS)

    Freund, A.K.; Hoszowska, J.; Migliore, J.-S.; Mocella, V.; Zhang, L.; Ferrero, C.

    2000-01-01

    With the ongoing development of insertion devices with smaller gaps, the heat load generated by modern synchrotron sources increases continuously. To predict the overall performance of experiments on beamlines it is of crucial importance to be able to predict the efficiency of x-ray optics and in particular that of crystal monochromators. We report on a detailed comparison between theory and experiment for a water-cooled silicon crystal exposed to bending magnet radiation of up to 237 W total power and 1.3 W/mm² power density. The thermal deformation has been calculated by the code ANSYS and its output has been injected into a finite difference code based on the Takagi-Taupin diffraction theory for distorted crystals. Several slit settings, filters and reflection orders were used to vary the geometrical conditions and the x-ray penetration depth in the crystal. In general, good agreement has been observed between the calculated and the observed values for the rocking curve width.

  5. Neutron monochromators of BeO, MgO and ZnO single crystals

    Science.gov (United States)

    Adib, M.; Habib, N.; Bashter, I. I.; Morcos, H. N.; El-Mesiry, M. S.; Mansy, M. S.

    2014-05-01

    The monochromatic features of BeO, MgO and ZnO single crystals are discussed in terms of orientation, mosaic spread, and thickness within the wavelength band from 0.05 up to 0.5 nm. A computer program MONO, written in “FORTRAN”, has been developed to carry out the required calculations. Calculation shows that a 5 mm thick MgO single crystal cut along its (2 0 0) plane having mosaic spread of 0.5° FWHM has the optimum parameters when it is used as a neutron monochromator. Moreover, at wavelengths shorter than 0.24 nm the reflected monochromatic neutrons are almost free from the higher order ones. The same features are seen with BeO (0 0 2) with less reflectivity than that of the former. Also, ZnO cut along its (0 0 2) plane is preferred over the others only at wavelengths longer than 0.20 nm. When the selected monochromatic wavelength is longer than 0.24 nm, the neutron intensities of higher orders from a thermal reactor flux are higher than those of the first-order one. For a cold reactor flux, the first order of BeO and MgO single crystals is free from the higher orders up to 0.4 nm, and ZnO at wavelengths up to 0.5 nm.
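
    The higher-order contamination discussed above is a direct consequence of Bragg's law: at the crystal angle set for wavelength λ, the same planes also reflect λ/2, λ/3, and so on. A minimal Python sketch (not the MONO program itself; the d-spacing is an illustrative value) is:

      # Higher-order reflections from a crystal monochromator, n·λ = 2·d·sinθ.
      # Illustrative sketch only; this is not the MONO code from the abstract.
      import math

      d_nm = 0.21        # illustrative d-spacing of the reflecting planes, nm
      wanted_nm = 0.30   # first-order wavelength selected by the monochromator

      theta = math.asin(wanted_nm / (2.0 * d_nm))   # Bragg angle for first order
      print(f"Bragg angle ≈ {math.degrees(theta):.1f}°")
      for n in range(2, 5):
          # the same crystal setting also reflects these shorter wavelengths
          print(f"order {n}: λ/{n} = {wanted_nm / n:.3f} nm reflected at the same angle")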

  6. Beryllium, zinc and lead single crystals as a thermal neutron monochromators

    Science.gov (United States)

    Adib, M.; Habib, N.; Bashter, I. I.; Morcos, H. N.; El-Mesiry, M. S.; Mansy, M. S.

    2015-03-01

    The monochromatic features of Be, Zn and Pb single crystals are discussed in terms of orientation, mosaic spread, and thickness within the wavelength band from 0.04 up to 0.5 nm. A computer program MONO, written in "FORTRAN-77", has been adapted to carry out the required calculations. Calculations show that a 5 mm thick beryllium (HCP structure) single crystal cut along its (0 0 2) plane, with a mosaic spread of 0.6° FWHM, provides the optimum parameters when used as a monochromator with high reflected neutron intensity from a thermal neutron flux. Furthermore, at wavelengths shorter than 0.16 nm the reflected beam is free from the accompanying higher-order contributions. Zinc (HCP structure) shows the same features, but with much lower intensity. The same features are also seen with lead (FCC structure) cut along its (3 1 1) plane, with lower reflectivity than the former. However, Pb (3 1 1) is preferable to the others at neutron wavelengths ⩽ 0.1 nm, since the glancing angle (θ ∼ 20°) is more suitable for carrying out diffraction experiments. For a cold neutron flux, the first-order neutrons reflected from beryllium are free from higher orders up to 0.36 nm, while for a Zn single crystal this holds up to 0.5 nm.

  7. Design and fabrication of a Czerny-Turner monochromator-cum-spectrograph

    International Nuclear Information System (INIS)

    Murty, M.V.R.K.; Shukla, R.P.; Bhattacharya, S.S.; Krishnamurthy, G.

    1987-01-01

    The design and fabrication of a Czerny-Turner monochromator-cum-spectrograph is described. It consists of a classically ruled grating having 1200 grooves/mm. The collimator is a concave spherical mirror with a radius of curvature of 1.025 m, while the focusing element is a concave spherical mirror with a radius of curvature of 0.925 m. Two unequal radii of curvature for the collimating and focusing mirrors are chosen to eliminate coma at a wavelength of 5000 Å. The linear reciprocal dispersion on the focal surface is about 8 Å/mm. The resolution of the instrument at the coma-corrected wavelength, i.e. 5000 Å, is 0.1 Å. The resolution at other wavelengths is limited by the residual coma, which increases linearly with wavelength on either side of 5000 Å. Therefore the resolution at wavelengths of 2000 Å and 8000 Å is about 0.2 Å. 7 figures. (author)

  8. Mechanical design aspects of a soft X-ray plane grating monochromator

    CERN Document Server

    Vasina, R; Dolezel, P; Mynar, M; Vondracek, M; Chab, V; Slezak, J A; Comicioli, C; Prince, K C

    2001-01-01

    A plane grating monochromator based on the SX-700 concept has been constructed for the Materials Science Beamline, Elettra, which is attached to a bending magnet. The tuning range is from 35 to 800 eV with a calculated spectral resolving power ε/Δε better than 4000 over the whole range. The optical elements consist of a toroidal prefocusing mirror, polarization aperture, entrance slit, plane pre-mirror, single plane grating (blazed), spherical mirror, exit slit and toroidal refocusing mirror. The plane grating is operated in the fixed focus mode with c_ff = 2.4. Energy scanning is performed by rotation of the plane grating and simultaneous translation and rotation of the plane pre-mirror. A novel solution is applied for the motion of the plane pre-mirror, namely a translation with the rotation mechanically coupled to it by a cam. The slits have no moving parts in vacuum, to reduce cost and increase ruggedness, and can be fully closed without risk of damage. In the first tests, a resolving pow...
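
    For reference, the fixed-focus operation mentioned above combines the grating equation with a constant c_ff; the relations below are the standard ones (in one common sign convention) and are added here for clarity, not taken from the article:

      \[
        m N \lambda \;=\; \sin\alpha + \sin\beta ,
        \qquad
        c_{\mathrm{ff}} \;=\; \frac{\cos\beta}{\cos\alpha} \;=\; \text{const} \;(= 2.4\ \text{here}),
      \]

    where N is the groove density, m the diffraction order, and α, β the signed angles of incidence and diffraction measured from the grating normal. Keeping c_ff fixed while the photon energy is scanned is what requires the coupled grating rotation and pre-mirror translation/rotation described above.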

  9. Error Budgeting

    Energy Technology Data Exchange (ETDEWEB)

    Vinyard, Natalia Sergeevna [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Perry, Theodore Sonne [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Usov, Igor Olegovich [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-10-04

    We calculate opacity from k(hν) = -ln[T(hν)]/(ρL), where T(hν) is the transmission for photon energy hν, ρ is the sample density, and L is the path length through the sample. The density and path length are measured together by Rutherford backscatter. Δk = (∂k/∂T)ΔT + (∂k/∂(ρL))Δ(ρL). We can re-write this in terms of fractional error as Δk/k = Δln(T)/ln(T) + Δ(ρL)/(ρL). Transmission itself is calculated from T = (U-E)/(V-E) = B/B0, where B is the transmitted backlighter (BL) signal and B0 is the unattenuated backlighter signal. Then ΔT/T = Δln(T) = ΔB/B + ΔB0/B0, and consequently Δk/k = (1/ln(T))(ΔB/B + ΔB0/B0) + Δ(ρL)/(ρL). Transmission is measured in the range of 0.2
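
    A minimal numerical sketch of this error budget (Python, with assumed, illustrative signal levels and uncertainties rather than data from the report) looks as follows:

      # Opacity error budget, following the expressions quoted above.
      # All input numbers are illustrative assumptions, not measured values.
      import math

      B, dB = 900.0, 9.0            # transmitted backlighter signal and its uncertainty
      B0, dB0 = 3000.0, 30.0        # unattenuated backlighter signal and its uncertainty
      rhoL, d_rhoL = 0.020, 0.001   # areal density rho*L (g/cm^2) and its uncertainty

      T = B / B0                    # transmission
      k = -math.log(T) / rhoL       # opacity, cm^2/g
      # dk/k = (dB/B + dB0/B0)/|ln T| + d(rhoL)/(rhoL)
      frac_err = (dB / B + dB0 / B0) / abs(math.log(T)) + d_rhoL / rhoL
      print(f"T = {T:.3f}, k = {k:.1f} cm^2/g, dk/k ≈ {frac_err:.1%}")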

  10. Advantages of a monochromated transmission electron microscope for solid state physics

    International Nuclear Information System (INIS)

    Grogger, W.; Kothleitner, G.; Hofer, F.

    2006-01-01

    Full text: The characterization of nanostructured devices and functional materials at a nanometer scale is paramount for the understanding of their physical and chemical properties. Transmission electron microscopy (TEM) plays a central role, especially in terms of structural and chemical analysis on a nearly atomic scale. In particular, electron energy-loss spectrometry (EELS) can obtain information not only about the chemical composition of a thin sample, but also about chemical bonding and electronic structure (ionization edge fine structures) and optical properties (through valence loss EELS). Recent instrumental advances such as monochromators for the electron gun in the TEM have made it possible to improve the energy resolution to 0.15 eV at an acceleration voltage of 200 kV. Another strong point of the method lies in the combination with a fine electron probe (0.2 nm), which allows EELS spectra to be recorded with high energy resolution and a spatial resolution in the range of 1 nm. The improved energy resolution opens new possibilities for studying detailed electronic structure and bonding effects in solids such as transition metal oxides. The experimental results will be compared with x-ray absorption spectroscopy and band structure calculations. A better energy resolution is particularly important for measurements in the low loss region of the EELS spectrum, which provides information about the band gap and the dielectric function. We will highlight the potential of the method for studying metallic nanoparticles and semiconducting devices. Additionally, the influence of intrinsic effects such as core-hole and excited-state lifetime broadening and delocalization of the inelastically scattered electrons will be discussed. (author)

  11. Neutron monochromators of BeO, MgO and ZnO single crystals

    Energy Technology Data Exchange (ETDEWEB)

    Adib, M.; Habib, N. [Reactor Physics Department, NRC, AEAE, Cairo (Egypt); Bashter, I.I. [Physics Department, Faculty of Science, Zagazig University (Egypt); Morcos, H.N.; El-Mesiry, M.S. [Reactor Physics Department, NRC, AEAE, Cairo (Egypt); Mansy, M.S., E-mail: mohamedmansy_np@yahoo.com [Physics Department, Faculty of Science, Zagazig University (Egypt)

    2014-05-21

    The monochromatic features of BeO, MgO and ZnO single crystals are discussed in terms of orientation, mosaic spread, and thickness within the wavelength band from 0.05 up to 0.5 nm. A computer program MONO, written in “FORTRAN”, has been developed to carry out the required calculations. Calculation shows that a 5 mm thick MgO single crystal cut along its (2 0 0) plane having mosaic spread of 0.5° FWHM has the optimum parameters when it is used as a neutron monochromator. Moreover, at wavelengths shorter than 0.24 nm the reflected monochromatic neutrons are almost free from the higher order ones. The same features are seen with BeO (0 0 2) with less reflectivity than that of the former. Also, ZnO cut along its (0 0 2) plane is preferred over the others only at wavelengths longer than 0.20 nm. When the selected monochromatic wavelength is longer than 0.24 nm, the neutron intensities of higher orders from a thermal reactor flux are higher than those of the first-order one. For a cold reactor flux, the first order of BeO and MgO single crystals is free from the higher orders up to 0.4 nm, and ZnO at wavelengths up to 0.5 nm. - Highlights: • Monochromatic features of BeO, MgO and ZnO single crystals. • Calculations of neutron reflectivity using a computer program MONO. • Optimum mosaic spread, thickness and cutting plane of single crystals.

  12. Beryllium, zinc and lead single crystals as a thermal neutron monochromators

    Energy Technology Data Exchange (ETDEWEB)

    Adib, M.; Habib, N. [Reactor Physics Department, NRC, Atomic Energy Authority, Cairo (Egypt); Bashter, I.I. [Physics Department, Faculty of Science, Zagazig University (Egypt); Morcos, H.N.; El-Mesiry, M.S. [Reactor Physics Department, NRC, Atomic Energy Authority, Cairo (Egypt); Mansy, M.S., E-mail: drmohamedmansy88@hotmail.com [Physics Department, Faculty of Science, Zagazig University (Egypt)

    2015-03-15

    Highlights: •Monochromatic features of Be, Zn and Pb single crystals. •Calculations of neutron reflectivity using a computer program MONO. •Optimum mosaic spread, thickness and cutting plane of single crystals. -- Abstract: The monochromatic features of Be, Zn and Pb single crystals are discussed in terms of orientation, mosaic spread, and thickness within the wavelength band from 0.04 up to 0.5 nm. A computer program MONO, written in “FORTRAN-77”, has been adapted to carry out the required calculations. Calculations show that a 5 mm thick beryllium (HCP structure) single crystal cut along its (0 0 2) plane, with a mosaic spread of 0.6° FWHM, provides the optimum parameters when used as a monochromator with high reflected neutron intensity from a thermal neutron flux. Furthermore, at wavelengths shorter than 0.16 nm the reflected beam is free from the accompanying higher-order contributions. Zinc (HCP structure) shows the same features, but with much lower intensity. The same features are also seen with lead (FCC structure) cut along its (3 1 1) plane, with lower reflectivity than the former. However, Pb (3 1 1) is preferable to the others at neutron wavelengths ⩽ 0.1 nm, since the glancing angle (θ ∼ 20°) is more suitable for carrying out diffraction experiments. For a cold neutron flux, the first-order neutrons reflected from beryllium are free from higher orders up to 0.36 nm, while for a Zn single crystal this holds up to 0.5 nm.

  13. DNS: Diffuse scattering neutron time-of-flight spectrometer

    Directory of Open Access Journals (Sweden)

    Yixi Su

    2015-08-01

    Full Text Available DNS is a versatile diffuse scattering instrument with polarisation analysis operated by the Jülich Centre for Neutron Science (JCNS), Forschungszentrum Jülich GmbH, at its outstation at the Heinz Maier-Leibnitz Zentrum (MLZ). Its compact design, a large double-focusing PG monochromator and a highly efficient supermirror-based polarizer provide a polarized neutron flux of about 10⁷ n cm⁻² s⁻¹. DNS is used for studies of highly frustrated spin systems, strongly correlated electrons, emergent functional materials and soft condensed matter.

  14. A method for evaluating image quality of monochrome and color displays based on luminance by use of a commercially available color digital camera

    Energy Technology Data Exchange (ETDEWEB)

    Tokurei, Shogo, E-mail: shogo.tokurei@gmail.com, E-mail: junjim@med.kyushu-u.ac.jp [Department of Health Sciences, Graduate School of Medical Sciences, Kyushu University, 3-1-1 Maidashi, Higashi-ku, Fukuoka, Fukuoka 812-8582, Japan and Department of Radiology, Yamaguchi University Hospital, 1-1-1 Minamikogushi, Ube, Yamaguchi 755-8505 (Japan); Morishita, Junji, E-mail: shogo.tokurei@gmail.com, E-mail: junjim@med.kyushu-u.ac.jp [Department of Health Sciences, Faculty of Medical Sciences, Kyushu University, 3-1-1 Maidashi, Higashi-ku, Fukuoka, Fukuoka 812-8582 (Japan)

    2015-08-15

    Purpose: The aim of this study is to propose a method for the quantitative evaluation of image quality of both monochrome and color liquid-crystal displays (LCDs) using a commercially available color digital camera. Methods: The intensities of the unprocessed red (R), green (G), and blue (B) signals of a camera vary depending on the spectral sensitivity of the image sensor used in the camera. For consistent evaluation of image quality for both monochrome and color LCDs, the unprocessed RGB signals of the camera were converted into gray scale signals that corresponded to the luminance of the LCD. Gray scale signals for the monochrome LCD were evaluated by using only the green channel signals of the camera. For the color LCD, the RGB signals of the camera were converted into gray scale signals by employing weighting factors (WFs) for each RGB channel. A line image displayed on the color LCD was simulated on the monochrome LCD by using a software application for subpixel driving in order to verify the WF-based conversion method. Furthermore, the results obtained by different types of commercially available color cameras and a photometric camera were compared to examine the consistency of the authors’ method. Finally, image quality for both the monochrome and color LCDs was assessed by measuring modulation transfer functions (MTFs) and Wiener spectra (WS). Results: The authors’ results demonstrated that the proposed method for calibrating the spectral sensitivity of the camera resulted in a consistent and reliable evaluation of the luminance of monochrome and color LCDs. The MTFs and WS showed different characteristics for the two LCD types owing to difference in the subpixel structure. The MTF in the vertical direction of the color LCD was superior to that of the monochrome LCD, although the WS in the vertical direction of the color LCD was inferior to that of the monochrome LCD as a result of luminance fluctuations in RGB subpixels. Conclusions: The authors
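
    A minimal Python sketch of the weighting-factor conversion described above is added here; the weighting factors and the image patch are placeholders, since the paper's actual calibration values are not reproduced in the abstract:

      # Convert unprocessed camera RGB signals to a gray-scale signal proportional
      # to display luminance using per-channel weighting factors (WFs).
      # The WF values here are placeholders, not the calibrated values of the paper.
      import numpy as np

      def rgb_to_gray(rgb, wf=(0.2, 0.7, 0.1)):
          """rgb: array of shape (H, W, 3) holding linear (unprocessed) R, G, B signals."""
          wf = np.asarray(wf, dtype=float)
          wf = wf / wf.sum()          # normalize the weights to sum to one
          return rgb @ wf             # weighted sum over the colour axis

      rgb = np.random.rand(4, 4, 3)           # stand-in for a captured camera patch
      gray_color_lcd = rgb_to_gray(rgb)       # colour LCD: weighted RGB sum
      gray_mono_lcd = rgb[..., 1]             # monochrome LCD: green channel only
      print(gray_color_lcd.shape, gray_mono_lcd.shape)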

  15. Development of a bent Laue beam-expanding double-crystal monochromator for biomedical X-ray imaging

    International Nuclear Information System (INIS)

    Martinson, Mercedes; Samadi, Nazanin; Belev, George; Bassey, Bassey; Lewis, Rob; Aulakh, Gurpreet; Chapman, Dean

    2014-01-01

    A bent Laue beam-expanding double-crystal monochromator was developed and tested at the Biomedical Imaging and Therapy beamline at the Canadian Light Source. The expander will reduce scanning time for micro-computed tomography and allow dynamic imaging that has not previously been possible at this beamline. The Biomedical Imaging and Therapy (BMIT) beamline at the Canadian Light Source has produced some excellent biological imaging data. However, the disadvantage of a small vertical beam limits its usability in some applications. Micro-computed tomography (micro-CT) imaging requires multiple scans to produce a full projection, and certain dynamic imaging experiments are not possible. A larger vertical beam is desirable. It was cost-prohibitive to build a longer beamline that would have produced a large vertical beam. Instead, it was proposed to develop a beam expander that would create a beam appearing to originate at a source much farther away. This was accomplished using a bent Laue double-crystal monochromator in a non-dispersive divergent geometry. The design and implementation of this beam expander is presented along with results from the micro-CT and dynamic imaging tests conducted with this beam. Flux (photons per unit area per unit time) has been measured and found to be comparable with the existing flat Bragg double-crystal monochromator in use at BMIT. This increase in overall photon count is due to the enhanced bandwidth of the bent Laue configuration. Whilst the expanded beam quality is suitable for dynamic imaging and micro-CT, further work is required to improve its phase and coherence properties

  16. Diffusion of Zonal Variables Using Node-Centered Diffusion Solver

    Energy Technology Data Exchange (ETDEWEB)

    Yang, T B

    2007-08-06

    Tom Kaiser [1] has done some preliminary work to use the node-centered diffusion solver (originally developed by T. Palmer [2]) in Kull for diffusion of zonal variables such as electron temperature. To avoid numerical diffusion, Tom used a scheme developed by Shestakov et al. [3] and found their scheme could, in the vicinity of steep gradients, decouple nearest-neighbor zonal sub-meshes, leading to 'alternating-zone' (red-black mode) errors. Tom extended their scheme to couple the sub-meshes with appropriately chosen artificial diffusion and thereby solved the 'alternating-zone' problem. Because the choice of the artificial diffusion coefficient could be very delicate, it is desirable to use a scheme that does not require the artificial diffusion but is still able to avoid both numerical diffusion and the 'alternating-zone' problem. In this document we present such a scheme.
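
    As a purely illustrative sketch (this is not the scheme of the report or of Shestakov et al.), the following Python fragment shows the basic pattern of diffusing a zone-centred variable through a node-centred update: zone values are averaged to nodes, a diffusion step is taken on the nodes, and the result is averaged back to zones. The repeated averaging is exactly the kind of step that can introduce numerical diffusion or decouple neighbouring sub-meshes, which is what the schemes discussed above are designed to avoid:

      # Illustrative 1-D example only: zone-centred variable diffused via node values.
      import numpy as np

      def diffuse_zonal(t_zone, diff_coef, dx, dt, steps):
          t = t_zone.copy()
          for _ in range(steps):
              t_node = 0.5 * (t[:-1] + t[1:])        # zone -> node averaging
              lap = np.zeros_like(t_node)
              lap[1:-1] = (t_node[2:] - 2.0 * t_node[1:-1] + t_node[:-2]) / dx**2
              t_node += diff_coef * dt * lap               # explicit diffusion on nodes
              t[1:-1] = 0.5 * (t_node[:-1] + t_node[1:])   # node -> zone averaging
          return t

      t0 = np.where(np.arange(50) < 25, 1.0, 0.0)    # step profile in zone centres
      t1 = diffuse_zonal(t0, diff_coef=1.0, dx=1.0, dt=0.2, steps=10)
      print(t1[20:30].round(3))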

  17. Angular distribution measurement of fragment ions from a molecule using a new beamline consisting of a Grasshopper monochromator

    Science.gov (United States)

    Saito, Norio; Suzuki, Isao H.; Onuki, Hideo; Nishi, Morotake

    1989-07-01

    Optical characteristics of a new beamline consisting of a premirror, a Grasshopper monochromator, and a refocusing mirror have been investigated. The intensity of the monochromatic soft x-ray was estimated to be about 10⁸ photons/(s·100 mA) at 500 eV with the storage electron energy of 600 MeV and the minimum slit width. This slit width provides a resolution of about 500. Angular distributions of fragment ions from an inner-shell excited nitrogen molecule have been measured with a rotatable time-of-flight mass spectrometer by using this beamline.

  18. Angular distribution measurement of fragment ions from a molecule using a new beamline consisting of a Grasshopper monochromator

    International Nuclear Information System (INIS)

    Saito, N.; Suzuki, I.H.; Onuki, H.; Nishi, M.

    1989-01-01

    Optical characteristics of a new beamline consisting of a premirror, a Grasshopper monochromator, and a refocusing mirror have been investigated. The intensity of the monochromatic soft x-ray was estimated to be about 10⁸ photons/(s·100 mA) at 500 eV with the storage electron energy of 600 MeV and the minimum slit width. This slit width provides a resolution of about 500. Angular distributions of fragment ions from an inner-shell excited nitrogen molecule have been measured with a rotatable time-of-flight mass spectrometer by using this beamline.

  19. Use of a GPGPU means for the development of search programs of defects of monochrome half-tone pictures

    International Nuclear Information System (INIS)

    Dudnik, V.A.; Kudryavtsev, V.I.; Sereda, T.M.; Us, S.A.; Shestakov, M.V.

    2013-01-01

    The use of GPGPU hardware for the development of programs that search for defects in monochrome half-tone images is described. The implementation of the defect-search algorithm by means of NVIDIA's CUDA technology (Compute Unified Device Architecture, a unified hardware-software solution for parallel computations on the GPU) is presented. The timing characteristics of the image-updating routines are compared with and without the GPU, using the capabilities of a GeForce 8800 graphics processor.

  20. Modeling coherent errors in quantum error correction

    Science.gov (United States)

    Greenbaum, Daniel; Dutton, Zachary

    2018-01-01

    Analysis of quantum error correcting codes is typically done using a stochastic, Pauli channel error model for describing the noise on physical qubits. However, it was recently found that coherent errors (systematic rotations) on physical data qubits result in both physical and logical error rates that differ significantly from those predicted by a Pauli model. Here we examine the accuracy of the Pauli approximation for noise containing coherent errors (characterized by a rotation angle ɛ) under the repetition code. We derive an analytic expression for the logical error channel as a function of arbitrary code distance d and concatenation level n, in the small error limit. We find that coherent physical errors result in logical errors that are partially coherent and therefore non-Pauli. However, the coherent part of the logical error is negligible at fewer than ɛ^{-(d^n-1)} error correction cycles when the decoder is optimized for independent Pauli errors, thus providing a regime of validity for the Pauli approximation. Above this number of correction cycles, the persistent coherent logical error will cause logical failure more quickly than the Pauli model would predict, and this may need to be combated with coherent suppression methods at the physical level or larger codes.
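
    A toy single-qubit illustration of why coherent errors accumulate faster than a stochastic Pauli model predicts (a deliberately simplified Python sketch without any error correction, not a reproduction of the paper's repetition-code analysis):

      # n identical over-rotations by angle eps about X compose coherently to angle
      # n*eps, so the flip probability grows like sin^2(n*eps/2) ~ (n*eps)^2/4,
      # whereas a stochastic bit-flip model with p = sin^2(eps/2) per cycle grows ~ n*p.
      import math

      eps = 0.02                                  # over-rotation angle per cycle (rad)
      p = math.sin(eps / 2.0) ** 2                # Pauli-model flip probability per cycle
      for n in (1, 10, 100):
          p_coherent = math.sin(n * eps / 2.0) ** 2        # coherent accumulation
          p_pauli = 0.5 * (1.0 - (1.0 - 2.0 * p) ** n)     # net flip after n stochastic cycles
          print(f"n = {n:3d}:  coherent {p_coherent:.2e}   Pauli model {p_pauli:.2e}")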

  1. High heat load performance of an inclined crystal monochromator with liquid gallium cooling on the CHESS-ANL undulator

    International Nuclear Information System (INIS)

    Macrander, A.T.; Lee, W.K.; Smither, R.K.; Mills, D.M.

    1992-01-01

    Recent results for the performance of a novel double crystal monochromator subjected to high heat loads on an APS prototype undulator at the Cornell High Energy Synchrotron Source (CHESS) are presented. The monochromator was designed to achieve symmetric diffraction from asymmetric planes to spread out the beam footprint, thereby lowering the incident power density. Both crystals had (111) oriented surfaces and were arranged such that the beam was diffracted from the (1 1̄ 1) planes at 5 keV. Rocking curves with minimal distortion were obtained at a ring electron current of 100 mA. This corresponded to 380 W total power and an average power density of 40 W/mm² normal to the incident beam. These results are compared to data obtained from the same crystals in the standard geometry (diffracting planes parallel to the surface). The footprint area in the inclined case was three times that of the standard case. We also obtained rocking curve data for the (333) reflection at 15 keV for both the standard and inclined cases, and these data also showed minimal distortion only for the inclined case. In addition, thermal data were obtained via infrared pyrometry. Liquid gallium flow rates of up to 2 gallons per minute were investigated. The diffraction data revealed a dramatically improved performance for the inclined crystal case.

  2. Wake monochromator in asymmetric and symmetric Bragg and Laue geometry for self-seeding the European X-ray FEL

    International Nuclear Information System (INIS)

    Geloni, Gianluca; Kocharyan, Vitali; Saldin, Evgeni; Serkez, Svitozar; Tolkiehn, Martin

    2013-01-01

    We discuss the use of self-seeding schemes with wake monochromators to produce TW-power, fully coherent pulses for applications at the dedicated bio-imaging beamline at the European X-ray FEL, a concept for an upgrade of the facility beyond the baseline previously proposed by the authors. We exploit the asymmetric and symmetric Bragg and Laue reflections (sigma polarization) in a diamond crystal. Optimization of the bio-imaging beamline is performed with extensive start-to-end simulations, which also take into account effects such as the spatio-temporal coupling caused by the wake monochromator. The spatial shift is largest for small Bragg angles. A geometry with Bragg angles close to π/2 would be a more advantageous option from this viewpoint, albeit with a decrease in spectral tunability. We show that it will be possible to cover the photon energy range from 3 keV to 13 keV by using four different planes of the same crystal with one rotational degree of freedom.

  3. Wake monochromator in asymmetric and symmetric Bragg and Laue geometry for self-seeding the European X-ray FEL

    Energy Technology Data Exchange (ETDEWEB)

    Geloni, Gianluca [European XFEL GmbH, Hamburg (Germany); Kocharyan, Vitali; Saldin, Evgeni; Serkez, Svitozar; Tolkiehn, Martin [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany)

    2013-01-15

    We discuss the use of self-seeding schemes with wake monochromators to produce TW-power, fully coherent pulses for applications at the dedicated bio-imaging beamline at the European X-ray FEL, a concept for an upgrade of the facility beyond the baseline previously proposed by the authors. We exploit the asymmetric and symmetric Bragg and Laue reflections (sigma polarization) in a diamond crystal. Optimization of the bio-imaging beamline is performed with extensive start-to-end simulations, which also take into account effects such as the spatio-temporal coupling caused by the wake monochromator. The spatial shift is largest for small Bragg angles. A geometry with Bragg angles close to π/2 would be a more advantageous option from this viewpoint, albeit with a decrease in spectral tunability. We show that it will be possible to cover the photon energy range from 3 keV to 13 keV by using four different planes of the same crystal with one rotational degree of freedom.

  4. A comparison of absolute calibrations of a radiation thermometer based on a monochromator and a tunable source

    Energy Technology Data Exchange (ETDEWEB)

    Keawprasert, T. [National Institute of Metrology Thailand, Pathum thani (Thailand); Anhalt, K.; Taubert, D. R.; Sperling, A.; Schuster, M.; Nevas, S. [Physikalisch Technische Bundesanstalt, Braunschweig and Berlin (Germany)

    2013-09-11

    An LP3 radiation thermometer was absolutely calibrated at a newly developed monochromator-based set-up and the TUneable Lasers in Photometry (TULIP) facility of PTB in the wavelength range from 400 nm to 1100 nm. At both facilities, the spectral radiation of the respective sources irradiates an integrating sphere, thus generating uniform radiance across its precision aperture. The spectral irradiance of the integrating sphere is determined via an effective area of a precision aperture and a Si trap detector, traceable to the primary cryogenic radiometer of PTB. Due to the limited output power from the monochromator, the absolute calibration was performed with the measurement uncertainty of 0.17 % (k= 1), while the respective uncertainty at the TULIP facility is 0.14 %. Calibration results obtained by the two facilities were compared in terms of spectral radiance responsivity, effective wavelength and integral responsivity. It was found that the measurement results in integral responsivity at the both facilities are in agreement within the expanded uncertainty (k= 2). To verify the calibration accuracy, the absolutely calibrated radiation thermometer was used to measure the thermodynamic freezing temperatures of the PTB gold fixed-point blackbody.
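
    The statement that the two calibrations agree within the expanded uncertainty can be made concrete with a simple quadrature combination of the two quoted standard uncertainties (a standard uncorrelated-uncertainty assumption, added here for illustration):

      # Combined expanded uncertainty (k = 2) of the two calibrations, assuming the
      # quoted 0.17 % and 0.14 % (k = 1) uncertainties are uncorrelated.
      u_mono = 0.17    # %, monochromator-based set-up (k = 1)
      u_tulip = 0.14   # %, TULIP facility (k = 1)

      u_combined = (u_mono**2 + u_tulip**2) ** 0.5    # ~0.22 %
      u_expanded = 2.0 * u_combined                   # ~0.44 % at k = 2
      print(f"combined: {u_combined:.2f} %, expanded (k = 2): {u_expanded:.2f} %")
      # "Agreement within the expanded uncertainty" means the difference between the
      # two integral-responsivity results is smaller than this ~0.44 % band.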

  5. A comparison of absolute calibrations of a radiation thermometer based on a monochromator and a tunable source

    International Nuclear Information System (INIS)

    Keawprasert, T.; Anhalt, K.; Taubert, D. R.; Sperling, A.; Schuster, M.; Nevas, S.

    2013-01-01

    An LP3 radiation thermometer was absolutely calibrated at a newly developed monochromator-based set-up and the TUneable Lasers in Photometry (TULIP) facility of PTB in the wavelength range from 400 nm to 1100 nm. At both facilities, the spectral radiation of the respective sources irradiates an integrating sphere, thus generating uniform radiance across its precision aperture. The spectral irradiance of the integrating sphere is determined via an effective area of a precision aperture and a Si trap detector, traceable to the primary cryogenic radiometer of PTB. Due to the limited output power from the monochromator, the absolute calibration was performed with the measurement uncertainty of 0.17 % (k= 1), while the respective uncertainty at the TULIP facility is 0.14 %. Calibration results obtained by the two facilities were compared in terms of spectral radiance responsivity, effective wavelength and integral responsivity. It was found that the measurement results in integral responsivity at the both facilities are in agreement within the expanded uncertainty (k= 2). To verify the calibration accuracy, the absolutely calibrated radiation thermometer was used to measure the thermodynamic freezing temperatures of the PTB gold fixed-point blackbody

  6. Design of a high-efficiency grazing incidence monochromator with multilayer-coated laminar gratings for the 1-6 keV region

    International Nuclear Information System (INIS)

    Koike, Masato; Ishino, Masahiko; Sasai, Hiroyuki

    2006-01-01

    A grazing incidence objective monochromator consisting of a spherical mirror, a varied-line-spacing plane grating with multilayered coating, a movable plane multilayered mirror, and a fixed exit slit for the 1-6 keV region has been designed. The included angle at the grating was chosen to satisfy the grating equation and the extended Bragg condition simultaneously. The aberration was corrected by means of a hybrid design method. A spectral resolving power of ∼600-∼6000 and a throughput of ∼2%-∼40% are expected for the monochromator when used in an undulator beamline.

  7. Learning from prescribing errors

    OpenAIRE

    Dean, B

    2002-01-01

    

 The importance of learning from medical error has recently received increasing emphasis. This paper focuses on prescribing errors and argues that, while learning from prescribing errors is a laudable goal, there are currently barriers that can prevent this occurring. Learning from errors can take place on an individual level, at a team level, and across an organisation. Barriers to learning from prescribing errors include the non-discovery of many prescribing errors, lack of feedback to th...

  8. Spatial Mapping of Translational Diffusion Coefficients Using Diffusion Tensor Imaging: A Mathematical Description.

    Science.gov (United States)

    Shetty, Anil N; Chiang, Sharon; Maletic-Savatic, Mirjana; Kasprian, Gregor; Vannucci, Marina; Lee, Wesley

    2014-01-01

    In this article, we discuss the theoretical background for diffusion weighted imaging and diffusion tensor imaging. Molecular diffusion is a random process involving thermal Brownian motion. In biological tissues, the underlying microstructures restrict the diffusion of water molecules, making diffusion directionally dependent. Water diffusion in tissue is mathematically characterized by the diffusion tensor, the elements of which contain information about the magnitude and direction of diffusion and is a function of the coordinate system. Thus, it is possible to generate contrast in tissue based primarily on diffusion effects. Expressing diffusion in terms of the measured diffusion coefficient (eigenvalue) in any one direction can lead to errors. Nowhere is this more evident than in white matter, due to the preferential orientation of myelin fibers. The directional dependency is removed by diagonalization of the diffusion tensor, which then yields a set of three eigenvalues and eigenvectors, representing the magnitude and direction of the three orthogonal axes of the diffusion ellipsoid, respectively. For example, the eigenvalue corresponding to the eigenvector along the long axis of the fiber corresponds qualitatively to diffusion with least restriction. Determination of the principal values of the diffusion tensor and various anisotropic indices provides structural information. We review the use of diffusion measurements using the modified Stejskal-Tanner diffusion equation. The anisotropy is analyzed by decomposing the diffusion tensor based on symmetrical properties describing the geometry of diffusion tensor. We further describe diffusion tensor properties in visualizing fiber tract organization of the human brain.
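
    A minimal numerical sketch of the diagonalization step described above (the tensor entries are invented for illustration; the mean diffusivity and fractional anisotropy formulas are the standard ones):

      # Diagonalize a diffusion tensor and compute MD and FA.
      # The tensor below is an invented example (units of 1e-3 mm^2/s).
      import numpy as np

      D = np.array([[1.6, 0.1, 0.0],
                    [0.1, 0.4, 0.0],
                    [0.0, 0.0, 0.4]])        # symmetric diffusion tensor

      eigvals, eigvecs = np.linalg.eigh(D)   # eigenvalues (ascending) and eigenvectors
      md = eigvals.mean()                    # mean diffusivity
      fa = np.sqrt(1.5 * np.sum((eigvals - md) ** 2) / np.sum(eigvals ** 2))
      principal_dir = eigvecs[:, -1]         # direction of the largest eigenvalue

      print(f"eigenvalues = {eigvals.round(3)}, MD = {md:.3f}, FA = {fa:.3f}")
      print(f"principal diffusion direction = {principal_dir.round(3)}")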

  9. Optimization of the bent perfect Si(311)-crystal monochromator for a residual strain/stress instrument at the HANARO reactor - Part I

    Czech Academy of Sciences Publication Activity Database

    Moon, MK; Lee, Ch.H.; Vyacheslav, T.; Mikula, Pavol

    2005-01-01

    Vol. 369 (2005), pp. 1-7. ISSN 0921-4526. R&D Projects: GA ČR GA202/03/0891. Institutional research plan: CEZ:AV0Z10480505. Keywords: neutron monochromator; residual stress measurement; neutron diffraction. Subject RIV: BM - Solid Matter Physics; Magnetism. Impact factor: 0.796, year: 2005

  10. Nano-metrology: The art of measuring X-ray mirrors with slope errors <100 nrad

    Energy Technology Data Exchange (ETDEWEB)

    Alcock, Simon G., E-mail: simon.alcock@diamond.ac.uk; Nistea, Ioana; Sawhney, Kawal [Diamond Light Source Ltd., Harwell Science and Innovation Campus, Didcot, Oxfordshire OX11 0DE (United Kingdom)

    2016-05-15

    We present a comprehensive investigation of the systematic and random errors of the nano-metrology instruments used to characterize synchrotron X-ray optics at Diamond Light Source. With experimental skill and careful analysis, we show that these instruments used in combination are capable of measuring state-of-the-art X-ray mirrors. Examples are provided of how Diamond metrology data have helped to achieve slope errors of <100 nrad for optical systems installed on synchrotron beamlines, including: iterative correction of substrates using ion beam figuring and optimal clamping of monochromator grating blanks in their holders. Simulations demonstrate how random noise from the Diamond-NOM’s autocollimator adds into the overall measured value of the mirror’s slope error, and thus predict how many averaged scans are required to accurately characterize different grades of mirror.
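
    The role of scan averaging mentioned above can be illustrated with the usual quadrature model, in which random instrument noise adds to the true mirror slope error and averages down as 1/√N; the numbers below are assumptions for illustration, not Diamond-NOM values:

      # Measured RMS slope error vs. number of averaged scans, assuming random
      # autocollimator noise adds in quadrature and averages down as 1/sqrt(N).
      # The mirror and noise figures are illustrative assumptions only.
      import math

      sigma_mirror_nrad = 80.0   # assumed true RMS slope error of the mirror
      sigma_noise_nrad = 120.0   # assumed RMS autocollimator noise per single scan

      for n_scans in (1, 4, 16, 64):
          noise = sigma_noise_nrad / math.sqrt(n_scans)
          measured = math.hypot(sigma_mirror_nrad, noise)
          print(f"N = {n_scans:3d}: measured RMS slope error ≈ {measured:5.1f} nrad")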

  11. Nano-metrology: The art of measuring X-ray mirrors with slope errors <100 nrad

    International Nuclear Information System (INIS)

    Alcock, Simon G.; Nistea, Ioana; Sawhney, Kawal

    2016-01-01

    We present a comprehensive investigation of the systematic and random errors of the nano-metrology instruments used to characterize synchrotron X-ray optics at Diamond Light Source. With experimental skill and careful analysis, we show that these instruments used in combination are capable of measuring state-of-the-art X-ray mirrors. Examples are provided of how Diamond metrology data have helped to achieve slope errors of <100 nrad for optical systems installed on synchrotron beamlines, including: iterative correction of substrates using ion beam figuring and optimal clamping of monochromator grating blanks in their holders. Simulations demonstrate how random noise from the Diamond-NOM’s autocollimator adds into the overall measured value of the mirror’s slope error, and thus predict how many averaged scans are required to accurately characterize different grades of mirror.

  12. Nano-metrology: The art of measuring X-ray mirrors with slope errors <100 nrad.

    Science.gov (United States)

    Alcock, Simon G; Nistea, Ioana; Sawhney, Kawal

    2016-05-01

    We present a comprehensive investigation of the systematic and random errors of the nano-metrology instruments used to characterize synchrotron X-ray optics at Diamond Light Source. With experimental skill and careful analysis, we show that these instruments used in combination are capable of measuring state-of-the-art X-ray mirrors. Examples are provided of how Diamond metrology data have helped to achieve slope errors of <100 nrad for optical systems installed on synchrotron beamlines, including: iterative correction of substrates using ion beam figuring and optimal clamping of monochromator grating blanks in their holders. Simulations demonstrate how random noise from the Diamond-NOM's autocollimator adds into the overall measured value of the mirror's slope error, and thus predict how many averaged scans are required to accurately characterize different grades of mirror.

  13. Liquid-metal-cooled, curved-crystal monochromator for Advanced Photon Source bending-magnet beamline 1-BM

    International Nuclear Information System (INIS)

    Brauer, S.; Rodricks, B.; Assoufid, L.; Beno, M.A.; Knapp, G.S.

    1996-06-01

    The authors describe a horizontally focusing curved-crystal monochromator that invokes a 4-point bending scheme and a liquid-metal cooling bath. The device has been designed for dispersive diffraction and spectroscopy in the 5-20 keV range, with a predicted focal spot size of ≤ 100 μm. To minimize thermal distortions and thermal equilibration time, the 355 x 32 x 0.8 mm crystal will be nearly half submerged in a bath of Ga-In-Sn-Zn alloy. The liquid metal thermally couples the crystal to the water-cooled Cu frame, while permitting the required crystal bending. Calculated thermal profiles and anticipated focusing properties are discussed.

  14. Interface of the transport systems research vehicle monochrome display system to the digital autonomous terminal access communication data bus

    Science.gov (United States)

    Easley, W. C.; Tanguy, J. S.

    1986-01-01

    An upgrade of the transport systems research vehicle (TSRV) experimental flight system retained the original monochrome display system. The original host computer was replaced with a Norden 11/70, and a new digital autonomous terminal access communication (DATAC) data bus was installed for data transfer between the display system and the host, so a new data interface method was required. The new display data interface uses four split phase bipolar (SPBP) serial busses. The DATAC bus uses a shared interface RAM (SIR) for intermediate storage of its data transfer. A display interface unit (DIU) was designed and configured to read from and write to the SIR to properly convert the data from parallel to SPBP serial and vice versa. Separation of data for use by each SPBP bus and synchronization of data transfer throughout the entire experimental flight system were found to be major problems requiring solution in the DIU design. The techniques used to accomplish these new data interface requirements are described.

  15. Two-dimensional errors

    International Nuclear Information System (INIS)

    Anon.

    1991-01-01

    This chapter addresses the extension of previous work in one-dimensional (linear) error theory to two-dimensional error analysis. The topics of the chapter include the definition of two-dimensional error, the probability ellipse, the probability circle, elliptical (circular) error evaluation, the application to position accuracy, and the use of control systems (points) in measurements
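
    As a hedged, generic illustration of the probability-ellipse construction named in this chapter summary (standard bivariate-Gaussian geometry, not the chapter's own notation or data), the sketch below derives the ellipse semi-axes and orientation from a 2x2 position-error covariance matrix.

        import numpy as np

        def probability_ellipse(cov, p=0.5):
            """Semi-axes and orientation (degrees) of the ellipse that contains
            a bivariate Gaussian position error with probability p."""
            k2 = -2.0 * np.log(1.0 - p)          # chi-square quantile, 2 d.o.f.
            eigvals, eigvecs = np.linalg.eigh(cov)
            semi_axes = np.sqrt(k2 * eigvals)    # minor axis, then major axis
            angle = np.degrees(np.arctan2(eigvecs[1, 1], eigvecs[0, 1]))
            return semi_axes, angle

        # Illustrative covariance only: variances 4 and 1, covariance 0.5 (same units squared)
        print(probability_ellipse(np.array([[4.0, 0.5], [0.5, 1.0]])))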

  16. Part two: Error propagation

    International Nuclear Information System (INIS)

    Picard, R.R.

    1989-01-01

    Topics covered in this chapter include a discussion of exact results as related to nuclear materials management and accounting in nuclear facilities; propagation of error for a single measured value; propagation of error for several measured values; error propagation for materials balances; and an application of error propagation to an example of uranium hexafluoride conversion process
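
    For orientation, the standard first-order propagation formula underlying this kind of treatment is reproduced below as a generic statement (it is not quoted from the chapter, whose notation may differ). For a derived quantity y = f(x_1, ..., x_n) computed from measured values x_i with standard deviations sigma_i:

        % Uncorrelated measured values:
        \sigma_y^2 \;\approx\; \sum_{i=1}^{n} \left( \frac{\partial f}{\partial x_i} \right)^{2} \sigma_{x_i}^{2}
        % With correlations, the cross terms are added:
        %   2 \sum_{i<j} \frac{\partial f}{\partial x_i} \frac{\partial f}{\partial x_j} \, \mathrm{cov}(x_i, x_j)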

  17. Learning from Errors

    OpenAIRE

    Martínez-Legaz, Juan Enrique; Soubeyran, Antoine

    2003-01-01

    We present a model of learning in which agents learn from errors. If an action turns out to be an error, the agent rejects not only that action but also neighboring actions. We find that, if the agent keeps a memory of his errors, then under mild assumptions an acceptable solution is asymptotically reached. Moreover, one can take advantage of big errors for faster learning.

  18. Generalized Gaussian Error Calculus

    CERN Document Server

    Grabe, Michael

    2010-01-01

    For the first time in 200 years Generalized Gaussian Error Calculus addresses a rigorous, complete and self-consistent revision of the Gaussian error calculus. Since experimentalists realized that measurements in general are burdened by unknown systematic errors, the classical, widely used evaluation procedures scrutinizing the consequences of random errors alone turned out to be obsolete. As a matter of course, the error calculus to-be, treating random and unknown systematic errors side by side, should ensure the consistency and traceability of physical units, physical constants and physical quantities at large. The generalized Gaussian error calculus considers unknown systematic errors to spawn biased estimators. Beyond that, random errors are asked to conform to the idea of what the author calls well-defined measuring conditions. The approach features the properties of a building kit: any overall uncertainty turns out to be the sum of a contribution due to random errors, to be taken from a confidence inter...

  19. Medication errors: prescribing faults and prescription errors.

    Science.gov (United States)

    Velo, Giampaolo P; Minuz, Pietro

    2009-06-01

    1. Medication errors are common in general practice and in hospitals. Both errors in the act of writing (prescription errors) and prescribing faults due to erroneous medical decisions can result in harm to patients. 2. Any step in the prescribing process can generate errors. Slips, lapses, or mistakes are sources of errors, as in unintended omissions in the transcription of drugs. Faults in dose selection, omitted transcription, and poor handwriting are common. 3. Inadequate knowledge or competence and incomplete information about clinical characteristics and previous treatment of individual patients can result in prescribing faults, including the use of potentially inappropriate medications. 4. An unsafe working environment, complex or undefined procedures, and inadequate communication among health-care personnel, particularly between doctors and nurses, have been identified as important underlying factors that contribute to prescription errors and prescribing faults. 5. Active interventions aimed at reducing prescription errors and prescribing faults are strongly recommended. These should be focused on the education and training of prescribers and the use of on-line aids. The complexity of the prescribing procedure should be reduced by introducing automated systems or uniform prescribing charts, in order to avoid transcription and omission errors. Feedback control systems and immediate review of prescriptions, which can be performed with the assistance of a hospital pharmacist, are also helpful. Audits should be performed periodically.

  20. Conservative diffusions

    International Nuclear Information System (INIS)

    Carlen, E.A.

    1984-01-01

    In Nelson's stochastic mechanics, quantum phenomena are described in terms of diffusions instead of wave functions. These diffusions are formally given by stochastic differential equations with extremely singular coefficients. Using PDE methods, we prove the existence of solutions. This result provides a rigorous basis for stochastic mechanics. (orig.)

  1. Field error lottery

    Energy Technology Data Exchange (ETDEWEB)

    Elliott, C.J.; McVey, B. (Los Alamos National Lab., NM (USA)); Quimby, D.C. (Spectra Technology, Inc., Bellevue, WA (USA))

    1990-01-01

    The level of field errors in an FEL is an important determinant of its performance. We have computed 3D performance of a large laser subsystem subjected to field errors of various types. These calculations have been guided by simple models such as SWOOP. The technique of choice is utilization of the FELEX free electron laser code that now possesses extensive engineering capabilities. Modeling includes the ability to establish tolerances of various types: fast and slow scale field bowing, field error level, beam position monitor error level, gap errors, defocusing errors, energy slew, displacement and pointing errors. Many effects of these errors on relative gain and relative power extraction are displayed and are the essential elements of determining an error budget. The random errors also depend on the particular random number seed used in the calculation. The simultaneous display of the performance versus error level of cases with multiple seeds illustrates the variations attributable to stochasticity of this model. All these errors are evaluated numerically for comprehensive engineering of the system. In particular, gap errors are found to place requirements beyond mechanical tolerances of ±25 μm, and amelioration of these may occur by a procedure utilizing direct measurement of the magnetic fields at assembly time. 4 refs., 12 figs.

  2. Prescription Errors in Psychiatry

    African Journals Online (AJOL)

    Arun Kumar Agnihotri

    clinical pharmacists in detecting errors before they have a (sometimes serious) clinical impact should not be underestimated. Research on medication error in mental health care is limited. .... participation in ward rounds and adverse drug.

  3. Diffusion in glass

    Energy Technology Data Exchange (ETDEWEB)

    Mubarak, A S

    1991-12-31

    Rutherford backscattering spectrometry (RBS) was used to characterize and investigate the depth distribution profiles of Ca-impurities in Ca-doped soda-lime glass. The purposely added Ca-impurities were introduced into the glass matrix by a normal ion exchange diffusion process. The measurements and analysis were performed using 2 MeV {sup 2}He{sup +} ions supplied from the University of Jordan Van de Graaff accelerator (JOVAG). The normalized concentration versus depth profile distributions for the Ca-impurities were determined, both theoretically and experimentally. The theoretical treatment was carried out by setting up and solving the diffusion equation under the conditions of the experiment. The resulting profiles are characterized by a complementary error function. The theoretical treatment was extended to include the various methods of enhancing the diffusion process, e.g. using an electric field. The diffusion coefficient, assumed constant, of the Ca-impurities exchanged in the soda-lime glass was determined to be 1.23 x 10{sup 13} cm{sup 2}/s. A comparison between theoretically and experimentally determined profiles is made and commented on; several conclusions are drawn and suggestions for future work are mentioned. (author). 38 refs., 21 figs., 10 Tabs.
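
    A minimal sketch of the complementary-error-function profile mentioned in this abstract, for diffusion from a constant surface source into a semi-infinite solid with constant D (the numbers are purely illustrative and are not the paper's data):

        import numpy as np
        from scipy.special import erfc

        def concentration_profile(x_cm, t_s, D_cm2_s, c_surface=1.0):
            """C(x, t)/C_s = erfc(x / (2*sqrt(D*t))) for a constant-source,
            constant-D diffusion into a semi-infinite solid."""
            return c_surface * erfc(np.asarray(x_cm) / (2.0 * np.sqrt(D_cm2_s * t_s)))

        # Illustrative values: D = 1e-13 cm^2/s, 24 h exchange, depths up to 1 micron
        depths = np.linspace(0.0, 1e-4, 5)  # cm
        print(concentration_profile(depths, 24 * 3600.0, 1e-13))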

  4. Errors in otology.

    Science.gov (United States)

    Kartush, J M

    1996-11-01

    Practicing medicine successfully requires that errors in diagnosis and treatment be minimized. Malpractice laws encourage litigators to ascribe all medical errors to incompetence and negligence. There are, however, many other causes of unintended outcomes. This article describes common causes of errors and suggests ways to minimize mistakes in otologic practice. Widespread dissemination of knowledge about common errors and their precursors can reduce the incidence of their occurrence. Consequently, laws should be passed to allow for a system of non-punitive, confidential reporting of errors and "near misses" that can be shared by physicians nationwide.

  5. Design, Build and Test of a Double Crystal Monochromator for Beamlines I09 and I23 at the Diamond Light Source

    International Nuclear Information System (INIS)

    Kelly, J; Lee, T; Alcock, S; Patel, H

    2013-01-01

    A high stability Double Crystal Monochromator has been developed at The Diamond Light Source for beamlines I09 and I23. The design specification was a cryogenic, fixed exit, energy scanning monochromator, operating over an energy range of 2.1 – 25 keV using a Si(111) crystal set. The novel design concepts are the direct drive, air bearing Bragg axis, low strain crystal mounts and the cooling scheme. The instrument exhibited superb stability and repeatability on the B16 Test Beamline. A 20 keV Si(555), 1.4 μrad rocking curve was demonstrated. The DCM showed good stability without any evidence of vibration or Bragg angle nonlinearity.

  6. A possibility of parallel and anti-parallel diffraction measurements on neutron diffractometer employing bent perfect crystal monochromator at the monochromatic focusing condition

    Science.gov (United States)

    Choi, Yong Nam; Kim, Shin Ae; Kim, Sung Kyu; Kim, Sung Baek; Lee, Chang-Hee; Mikula, Pavel

    2004-07-01

    In a conventional diffractometer having single monochromator, only one position, parallel position, is used for the diffraction experiment (i.e. detection) because the resolution property of the other one, anti-parallel position, is very poor. However, a bent perfect crystal (BPC) monochromator at monochromatic focusing condition can provide a quite flat and equal resolution property at both parallel and anti-parallel positions and thus one can have a chance to use both sides for the diffraction experiment. From the data of the FWHM and the Delta d/d measured on three diffraction geometries (symmetric, asymmetric compression and asymmetric expansion), we can conclude that the simultaneous diffraction measurement in both parallel and anti-parallel positions can be achieved.

  7. Models of diffuse solar radiation

    Energy Technology Data Exchange (ETDEWEB)

    Boland, John; Ridley, Barbara [Centre for Industrial and Applied Mathematics, University of South Australia, Mawson Lakes Boulevard, Mawson Lakes, SA 5095 (Australia); Brown, Bruce [Department of Statistics and Applied Probability, National University of Singapore, Singapore 117546 (Singapore)

    2008-04-15

    For some locations both global and diffuse solar radiation are measured. However, for many locations, only global is measured, or inferred from satellite data. For modelling solar energy applications, the amount of radiation on a tilted surface is needed. Since only the direct component on a tilted surface can be calculated from trigonometry, we need to have diffuse on the horizontal available. There are regression relationships for estimating the diffuse on a tilted surface from diffuse on the horizontal. Models for estimating the diffuse radiation on the horizontal from horizontal global that have been developed in Europe or North America have proved to be inadequate for Australia [Spencer JW. A comparison of methods for estimating hourly diffuse solar radiation from global solar radiation. Sol Energy 1982; 29(1): 19-32]. Boland et al. [Modelling the diffuse fraction of global solar radiation on a horizontal surface. Environmetrics 2001; 12: 103-16] developed a validated model for Australian conditions. We detail our recent advances in developing the theoretical framework for the approach reported therein, particularly the use of the logistic function instead of piecewise linear or simple nonlinear functions. Additionally, we have also constructed a method, using quadratic programming, for identifying values that are likely to be erroneous. This allows us to eliminate outliers in diffuse radiation values, the data most prone to errors in measurement. (author)
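
    A hedged sketch of the logistic-function form referred to above, expressing the diffuse fraction as a function of the hourly clearness index; the coefficients below are placeholders for illustration and are not the values fitted by Boland et al.:

        import numpy as np

        def diffuse_fraction(kt, a=-5.0, b=8.6):
            """Logistic model of the diffuse fraction d of global radiation:
            d = 1 / (1 + exp(a + b*kt)), with kt the clearness index.
            a and b are placeholder coefficients, not the published fit."""
            return 1.0 / (1.0 + np.exp(a + b * np.asarray(kt)))

        # Overcast hours (low kt) give a diffuse fraction near 1, clear hours a small one
        print(diffuse_fraction([0.1, 0.4, 0.7]))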

  8. Enhancement of diffusers BRDF accuracy

    Science.gov (United States)

    Otter, Gerard; Bazalgette Courrèges-Lacoste, Gregory; van Brug, Hedser; Schaarsberg, Jos Groote; Delwart, Steven; del Bello, Umberto

    2017-11-01

    This paper reports the result of an ESA study conducted at TNO to investigate properties of various diffusers. Diffusers are widely used in space instruments as part of the on-board absolute calibration. Knowledge of the behaviour of the diffuser is therefore most important. From measurements of launched instruments in-orbit it has been discovered that when a diffuser is used in the vacuum of space the BRDF can change with respect to the one in ambient conditions. This is called the air/vacuum effect and has been simulated in this study by measuring the BRDF in a laboratory in ambient as well as vacuum conditions. Another studied effect is related to the design parameters of the optical system and the scattering properties of the diffuser. The effect is called Spectral Features and is a noise-like structure superimposed on the diffuser BRDF. Modern space spectrometers, which have high spectral resolution and/or a small field of view (high spatial resolution), suffer from this effect. The choice of diffuser can be very critical with respect to the required absolute radiometric calibration of an instrument. Even if the Spectral Features are small they can influence the error budget of the retrieval algorithms for the level 2 products. In this presentation, diffuser trade-off results are presented and the Spectral Features model applied to the optical configuration of the MERIS instrument is compared to in-flight measurements of MERIS.

  9. Fractional Diffusion Equations and Anomalous Diffusion

    Science.gov (United States)

    Evangelista, Luiz Roberto; Kaminski Lenzi, Ervin

    2018-01-01

    Preface; 1. Mathematical preliminaries; 2. A survey of the fractional calculus; 3. From normal to anomalous diffusion; 4. Fractional diffusion equations: elementary applications; 5. Fractional diffusion equations: surface effects; 6. Fractional nonlinear diffusion equation; 7. Anomalous diffusion: anisotropic case; 8. Fractional Schrödinger equations; 9. Anomalous diffusion and impedance spectroscopy; 10. The Poisson–Nernst–Planck anomalous (PNPA) models; References; Index.

  10. Biological monochromator with a high flux in the visible spectrum; Un monochromateur biologique a haut flux dans le visible

    Energy Technology Data Exchange (ETDEWEB)

    Andre, M; Guerin de Montgareuil, P [Commissariat a l' Energie Atomique, Cadarache (France). Centre d' Etudes Nucleaires

    1965-07-01

    The object is to carry out research into photosynthesis using energetic illuminations similar to those employed with white light studies. The limitations are due mainly to the source. A comparison of various possible solutions has led to the choice of the sun used in conjunction with 4 large gratings. In an intermediate stage, a description is given of a medium-aperture monochromator with a 3 kW xenon arc and a single grating. With this set-up it is possible to obtain the following performance, given as an example; energy illumination, 1.3 mW/cm{sup 2} over a surface of 50 cm{sup 2} and for a bandwidth at half-height of 50 Angstroms. (authors) [French] L'objectif est de poursuivre en lumiere monochromatique des etudes de photosynthese avec des eclairements energetiques analogues a ceux qu'on utilise en lumiere blanche. Les limitations se situent principalement au niveau de la source. Une comparaison effectuee entre differentes solutions possibles conduit a preconiser l'emploi du soleil associe a 4 grands reseaux. En etape intermediaire on decrit un monochromateur de moyenne ouverture, avec un arc au xenon de 3 kW et un seul reseau, qui permet d'atteindre les performances suivantes donnees a titre d'exemple: eclairement energetique de 1,3 mW/cm{sup 2} sur une surface de 50 cm{sup 2} et pour une bande passante a mi-hauteur de 50 Angstroems. (auteurs)

  11. Spectroscopic studies of xenon EUV emission in the 40-80 nm wavelength range using an absolutely calibrated monochromator

    Energy Technology Data Exchange (ETDEWEB)

    Merabet, H [Mathematic and Sciences Unit, Dhofar University, Salalah 211, Sultanate of (Oman); Bista, R [Department of Physics, University of Nevada Reno, Reno, NV 89557 (United States); Bruch, R [Department of Physics, University of Nevada Reno, Reno, NV 89557 (United States); Fuelling, S [Department of Physics, University of Nevada Reno, Reno, NV 89557 (United States)

    2007-03-01

    We have measured and identified numerous Extreme UltraViolet (EUV) radiative line structures arising from xenon (Xe) ions in charge state q = 1 to 10 in the wavelength range 40-80 nm. To obtain reasonable intensities of different charged Xe ions, we have used a compact microwave plasma source which was designed and developed at the Lawrence Berkeley National Laboratory (LBNL). The EUV emission of the ECR plasma has been measured by a 1.5 m grazing incidence monochromator that was absolutely calibrated in the 10-80 nm wavelength range using well known and calibrated EUV light at the Advanced Light Source (ALS), LBNL. This calibration has enabled us to determine absolute intensities of previously measured EUV radiative lines in the wavelengths regions investigated for different ionization stages of Xe. In addition, emission spectra of xenon ions for corresponding measured lines have been calculated. The calculations have been carried out within the relativistic Hartree-Fock (HF) approximation. Results of calculations are found to be in good agreement with current and available experimental and theoretical data.

  12. Reliability and short-term intra-individual variability of telomere length measurement using monochrome multiplexing quantitative PCR.

    Directory of Open Access Journals (Sweden)

    Sangmi Kim

    Full Text Available Studies examining the association between telomere length and cancer risk have often relied on measurement of telomere length from a single blood draw using a real-time PCR technique. We examined the reliability of telomere length measurement using sequential samples collected over a 9-month period. Relative telomere length in peripheral blood was estimated using a single tube monochrome multiplex quantitative PCR assay in blood DNA samples from 27 non-pregnant adult women (aged 35 to 74 years) collected in 7 visits over a 9-month period. A linear mixed model was used to estimate the components of variance for telomere length measurements attributed to variation among women and variation between time points within women. Mean telomere length measurement at any single visit was not significantly different from the average of 7 visits. Plates had a significant systematic influence on telomere length measurements, although measurements between different plates were highly correlated. After controlling for plate effects, 64% of the remaining variance was estimated to be accounted for by variance due to subject. Variance explained by time of visit within a subject was minor, contributing 5% of the remaining variance. Our data demonstrate good short-term reliability of telomere length measurement using blood from a single draw. However, the existence of technical variability, particularly plate effects, reinforces the need for technical replicates and balancing of case and control samples across plates.

  13. The error in total error reduction.

    Science.gov (United States)

    Witnauer, James E; Urcelay, Gonzalo P; Miller, Ralph R

    2014-02-01

    Most models of human and animal learning assume that learning is proportional to the discrepancy between a delivered outcome and the outcome predicted by all cues present during that trial (i.e., total error across a stimulus compound). This total error reduction (TER) view has been implemented in connectionist and artificial neural network models to describe the conditions under which weights between units change. Electrophysiological work has revealed that the activity of dopamine neurons is correlated with the total error signal in models of reward learning. Similar neural mechanisms presumably support fear conditioning, human contingency learning, and other types of learning. Using a computational modeling approach, we compared several TER models of associative learning to an alternative model that rejects the TER assumption in favor of local error reduction (LER), which assumes that learning about each cue is proportional to the discrepancy between the delivered outcome and the outcome predicted by that specific cue on that trial. The LER model provided a better fit to the reviewed data than the TER models. Given the superiority of the LER model with the present data sets, acceptance of TER should be tempered. Copyright © 2013 Elsevier Inc. All rights reserved.
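
    A minimal sketch of the contrast drawn in this abstract, using a Rescorla-Wagner-style update for total error reduction (TER) against a per-cue local error term (LER); the learning rate, trial structure and convergence values are illustrative only:

        import numpy as np

        def train(cues, outcomes, alpha=0.2, local=False):
            """Associative weights for binary cue vectors.
            local=False: TER update  dV_i = alpha * x_i * (lambda - sum_j V_j x_j)
            local=True : LER update  dV_i = alpha * x_i * (lambda - V_i)"""
            w = np.zeros(cues.shape[1])
            for x, lam in zip(cues, outcomes):
                error = (lam - w) if local else (lam - np.dot(w, x))
                w += alpha * x * error
            return w

        # Two cues always reinforced together: TER shares the outcome across cues,
        # LER lets each cue acquire the full association on its own.
        cues, outcomes = np.tile([1, 1], (50, 1)), np.ones(50)
        print(train(cues, outcomes), train(cues, outcomes, local=True))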

  14. Errors in Neonatology

    OpenAIRE

    Antonio Boldrini; Rosa T. Scaramuzzo; Armando Cuttano

    2013-01-01

    Introduction: Danger and errors are inherent in human activities. In medical practice errors can lean to adverse events for patients. Mass media echo the whole scenario. Methods: We reviewed recent published papers in PubMed database to focus on the evidence and management of errors in medical practice in general and in Neonatology in particular. We compared the results of the literature with our specific experience in Nina Simulation Centre (Pisa, Italy). Results: In Neonatology the main err...

  15. Systematic Procedural Error

    National Research Council Canada - National Science Library

    Byrne, Michael D

    2006-01-01

    .... This problem has received surprisingly little attention from cognitive psychologists. The research summarized here examines such errors in some detail both empirically and through computational cognitive modeling...

  16. Human errors and mistakes

    International Nuclear Information System (INIS)

    Wahlstroem, B.

    1993-01-01

    Human errors have a major contribution to the risks for industrial accidents. Accidents have provided important lessons, making it possible to build safer systems. In avoiding human errors it is necessary to adapt the systems to their operators. The complexity of modern industrial systems is however increasing the danger of system accidents. Models of the human operator have been proposed, but the models are not able to give accurate predictions of human performance. Human errors can never be eliminated, but their frequency can be decreased by systematic efforts. The paper gives a brief summary of research in human error and it concludes with suggestions for further work. (orig.)

  17. Learning from Errors

    Science.gov (United States)

    Metcalfe, Janet

    2017-01-01

    Although error avoidance during learning appears to be the rule in American classrooms, laboratory studies suggest that it may be a counterproductive strategy, at least for neurologically typical students. Experimental investigations indicate that errorful learning followed by corrective feedback is beneficial to learning. Interestingly, the…

  18. Action errors, error management, and learning in organizations.

    Science.gov (United States)

    Frese, Michael; Keith, Nina

    2015-01-03

    Every organization is confronted with errors. Most errors are corrected easily, but some may lead to negative consequences. Organizations often focus on error prevention as a single strategy for dealing with errors. Our review suggests that error prevention needs to be supplemented by error management--an approach directed at effectively dealing with errors after they have occurred, with the goal of minimizing negative and maximizing positive error consequences (examples of the latter are learning and innovations). After defining errors and related concepts, we review research on error-related processes affected by error management (error detection, damage control). Empirical evidence on positive effects of error management in individuals and organizations is then discussed, along with emotional, motivational, cognitive, and behavioral pathways of these effects. Learning from errors is central, but like other positive consequences, learning occurs under certain circumstances--one being the development of a mind-set of acceptance of human error.

  19. Noite e dia e alguns monocromos psíquicos Night and day - and some psychical monochromes

    Directory of Open Access Journals (Sweden)

    Edson Luiz André de Sousa

    2006-06-01

    Full Text Available O artigo apresenta uma leitura do conto de Jack London "A sombra e o brilho" mostrando o funcionamento do princípio da mímesis no processo de identificação. Propõe-se a expressão monocromos psíquicos para esses espaços mentais de indiferenciação entre o eu e o Outro. Adota-se a tese de Caillois, que afirma que o eu é permeável ao espaço. Nessa perspectiva, o tema do duplo, amplamente desenvolvido por Freud, é fundamental. Partindo-se de notas sobre o trabalho do fotógrafo cego Bavcar, procura-se mostrar alguns traços da estrutura do olhar. O artigo finaliza mostrando as conexões possíveis dessas reflexões para a prática psicanalítica. The paper presents a reading of Jack London's tale "The Shadow and the brightness", showing how the principle of mimesis works in the process of identification. We propose to call psychical monochromes the spaces of mental indifferentiation between the self and the other. We follow the thesis of Roger Caillois: "the self is permeable to space". In this perspective, the subject of the double, developed by Freud, is essential. We try to show the dialectic of the structure of the look, based on some notes about the work of the blind photographer Bavcar. The article finishes by showing the possible connections of all these points with clinical work.

  20. Diffusion bonding

    International Nuclear Information System (INIS)

    Anderson, R.C.

    1976-01-01

    A method is described for joining beryllium to beryllium by diffusion bonding. At least one surface portion of at least two beryllium pieces is coated with nickel. A coated surface portion is positioned in a contiguous relationship with another surface portion and subjected to an environment having an atmosphere at a pressure lower than ambient pressure. A force is applied on the beryllium pieces for causing the contiguous surface portions to abut against each other. The contiguous surface portions are heated to a maximum temperature less than the melting temperature of the beryllium, and the applied force is decreased while increasing the temperature after attaining a temperature substantially above room temperature. A portion of the applied force is maintained at a temperature corresponding to about maximum temperature for a duration sufficient to effect the diffusion bond between the contiguous surface portions

  1. Multipassage diffuser

    International Nuclear Information System (INIS)

    Lalis, A.; Rouviere, R.; Simon, G.

    1976-01-01

    A multipassage diffuser having 2p passages comprises a leak-tight cylindrical enclosure closed by a top cover and a bottom end-wall, parallel porous tubes which are rigidly assembled in sectors between tube plates and through which the gas mixture flows, the tube sectors being disposed at uniform intervals on the periphery of the enclosure. The top tube plates are rigidly fixed to an annular header having the shape of a half-torus and adapted to communicate with the tubes of the corresponding sector. Each passage is constituted by a plurality of juxtaposed sectors in which the mixture circulates in the same direction, the header being divided into p portions limited by radial partition-walls and each constituting two adjacent passages. The diffuser is provided beneath the bottom end-wall with p-1 leak-tight chambers each adapted to open into two different portions of the header, and with two collector-chambers each fitted with a nozzle for introducing the gas mixture and discharging the fraction of the undiffused mixture. By means of a central orifice formed in the bottom end-wall the enclosure communicates with a shaft for discharging the diffused fraction of the gas mixture

  2. Uncorrected refractive errors.

    Science.gov (United States)

    Naidoo, Kovin S; Jaggernath, Jyoti

    2012-01-01

    Global estimates indicate that more than 2.3 billion people in the world suffer from poor vision due to refractive error, of which 670 million people are considered visually impaired because they do not have access to corrective treatment. Refractive errors, if uncorrected, result in an impaired quality of life for millions of people worldwide, irrespective of their age, sex and ethnicity. Over the past decade, a series of studies using a survey methodology, referred to as Refractive Error Study in Children (RESC), were performed in populations with different ethnic origins and cultural settings. These studies confirmed that the prevalence of uncorrected refractive errors is considerably high for children in low-and-middle-income countries. Furthermore, uncorrected refractive error has been noted to have extensive social and economic impacts, such as limiting educational and employment opportunities of economically active persons, healthy individuals and communities. The key public health challenges presented by uncorrected refractive errors, the leading cause of vision impairment across the world, require urgent attention. To address these issues, it is critical to focus on the development of human resources and sustainable methods of service delivery. This paper discusses three core pillars to addressing the challenges posed by uncorrected refractive errors: Human Resource (HR) Development, Service Development and Social Entrepreneurship.

  3. Uncorrected refractive errors

    Directory of Open Access Journals (Sweden)

    Kovin S Naidoo

    2012-01-01

    Full Text Available Global estimates indicate that more than 2.3 billion people in the world suffer from poor vision due to refractive error, of which 670 million people are considered visually impaired because they do not have access to corrective treatment. Refractive errors, if uncorrected, result in an impaired quality of life for millions of people worldwide, irrespective of their age, sex and ethnicity. Over the past decade, a series of studies using a survey methodology, referred to as Refractive Error Study in Children (RESC), were performed in populations with different ethnic origins and cultural settings. These studies confirmed that the prevalence of uncorrected refractive errors is considerably high for children in low-and-middle-income countries. Furthermore, uncorrected refractive error has been noted to have extensive social and economic impacts, such as limiting educational and employment opportunities of economically active persons, healthy individuals and communities. The key public health challenges presented by uncorrected refractive errors, the leading cause of vision impairment across the world, require urgent attention. To address these issues, it is critical to focus on the development of human resources and sustainable methods of service delivery. This paper discusses three core pillars to addressing the challenges posed by uncorrected refractive errors: Human Resource (HR) Development, Service Development and Social Entrepreneurship.

  4. Preventing Errors in Laterality

    OpenAIRE

    Landau, Elliot; Hirschorn, David; Koutras, Iakovos; Malek, Alexander; Demissie, Seleshie

    2014-01-01

    An error in laterality is the reporting of a finding that is present on the right side as on the left or vice versa. While different medical and surgical specialties have implemented protocols to help prevent such errors, very few studies have been published that describe these errors in radiology reports and ways to prevent them. We devised a system that allows the radiologist to view reports in a separate window, displayed in a simple font and with all terms of laterality highlighted in sep...

  5. Errors and violations

    International Nuclear Information System (INIS)

    Reason, J.

    1988-01-01

    This paper is in three parts. The first part summarizes the human failures responsible for the Chernobyl disaster and argues that, in considering the human contribution to power plant emergencies, it is necessary to distinguish between: errors and violations; and active and latent failures. The second part presents empirical evidence, drawn from driver behavior, which suggests that errors and violations have different psychological origins. The concluding part outlines a resident pathogen view of accident causation, and seeks to identify the various system pathways along which errors and violations may be propagated

  6. Computable error estimates of a finite difference scheme for option pricing in exponential Lévy models

    KAUST Repository

    Kiessling, Jonas; Tempone, Raul

    2014-01-01

    jump activity, then the jumps smaller than some (Formula presented.) are approximated by diffusion. The resulting diffusion approximation error is also estimated, with leading order term in computable form, as well as the dependence of the time

  7. Help prevent hospital errors

    Science.gov (United States)

    MedlinePlus patient instructions page: Help prevent hospital errors (//medlineplus.gov/ency/patientinstructions/000618.htm).

  8. Pedal Application Errors

    Science.gov (United States)

    2012-03-01

    This project examined the prevalence of pedal application errors and the driver, vehicle, roadway and/or environmental characteristics associated with pedal misapplication crashes based on a literature review, analysis of news media reports, a panel ...

  9. Rounding errors in weighing

    International Nuclear Information System (INIS)

    Jeach, J.L.

    1976-01-01

    When rounding error is large relative to weighing error, it cannot be ignored when estimating scale precision and bias from calibration data. Further, if the data grouping is coarse, rounding error is correlated with weighing error and may also have a mean quite different from zero. These facts are taken into account in a moment estimation method. A copy of the program listing for the MERDA program that provides moment estimates is available from the author. Experience suggests that if the data fall into four or more cells or groups, it is not necessary to apply the moment estimation method. Rather, the estimate given by equation (3) is valid in this instance. 5 tables
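
    As a hedged aside (a textbook result, not a quotation from this report): when readings are rounded to a scale division of width d and the rounding is fine relative to the weighing error, the rounding error is approximately uniform on (-d/2, d/2), so that

        \sigma_{\text{rounding}}^{2} \approx \frac{d^{2}}{12},
        \qquad
        \sigma_{\text{observed}}^{2} \approx \sigma_{\text{weighing}}^{2} + \frac{d^{2}}{12}.

    The abstract's point is precisely that this simple additive correction breaks down when the grouping is coarse, because the rounding error then becomes correlated with the weighing error; that is the case the moment estimation method is meant to handle.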

  10. Spotting software errors sooner

    International Nuclear Information System (INIS)

    Munro, D.

    1989-01-01

    Static analysis is helping to identify software errors at an earlier stage and more cheaply than conventional methods of testing. RTP Software's MALPAS system also has the ability to check that a code conforms to its original specification. (author)

  11. Errors in energy bills

    International Nuclear Information System (INIS)

    Kop, L.

    2001-01-01

    On request, the Dutch Association for Energy, Environment and Water (VEMW) checks the energy bills for her customers. It appeared that in the year 2000 many small, but also big errors were discovered in the bills of 42 businesses

  12. Medical Errors Reduction Initiative

    National Research Council Canada - National Science Library

    Mutter, Michael L

    2005-01-01

    The Valley Hospital of Ridgewood, New Jersey, is proposing to extend a limited but highly successful specimen management and medication administration medical errors reduction initiative on a hospital-wide basis...

  13. Identification of the Diffusion Parameter in Nonlocal Steady Diffusion Problems

    Energy Technology Data Exchange (ETDEWEB)

    D’Elia, M., E-mail: mdelia@fsu.edu, E-mail: mdelia@sandia.gov [Sandia National Laboratories (United States); Gunzburger, M. [Florida State University (United States)

    2016-04-15

    The problem of identifying the diffusion parameter appearing in a nonlocal steady diffusion equation is considered. The identification problem is formulated as an optimal control problem having a matching functional as the objective of the control and the parameter function as the control variable. The analysis makes use of a nonlocal vector calculus that allows one to define a variational formulation of the nonlocal problem. In a manner analogous to the local partial differential equations counterpart, we demonstrate, for certain kernel functions, the existence of at least one optimal solution in the space of admissible parameters. We introduce a Galerkin finite element discretization of the optimal control problem and derive a priori error estimates for the approximate state and control variables. Using one-dimensional numerical experiments, we illustrate the theoretical results and show that by using nonlocal models it is possible to estimate non-smooth and discontinuous diffusion parameters.
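
    A much-simplified, purely illustrative analogue of the identification problem described above, using a local one-dimensional steady diffusion equation and a single scalar parameter in place of the paper's nonlocal operator and parameter function; all names and values below are hypothetical:

        import numpy as np
        from scipy.optimize import minimize_scalar

        def solve_state(theta, n=50):
            """Finite-difference solution of -(theta * u')' = 1 on (0, 1),
            u(0) = u(1) = 0, with a constant diffusion parameter theta."""
            h = 1.0 / (n + 1)
            main = (2.0 * theta / h**2) * np.ones(n)
            off = (-theta / h**2) * np.ones(n - 1)
            A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
            return np.linalg.solve(A, np.ones(n))

        u_obs = solve_state(0.3)  # synthetic observations from a "true" parameter of 0.3

        # Matching functional J(theta) = 0.5 * ||u(theta) - u_obs||^2 as the control objective
        J = lambda theta: 0.5 * np.sum((solve_state(theta) - u_obs) ** 2)
        print(minimize_scalar(J, bounds=(0.05, 1.0), method="bounded").x)  # recovers ~0.3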

  14. The surveillance error grid.

    Science.gov (United States)

    Klonoff, David C; Lias, Courtney; Vigersky, Robert; Clarke, William; Parkes, Joan Lee; Sacks, David B; Kirkman, M Sue; Kovatchev, Boris

    2014-07-01

    Currently used error grids for assessing clinical accuracy of blood glucose monitors are based on out-of-date medical practices. Error grids have not been widely embraced by regulatory agencies for clearance of monitors, but this type of tool could be useful for surveillance of the performance of cleared products. Diabetes Technology Society together with representatives from the Food and Drug Administration, the American Diabetes Association, the Endocrine Society, and the Association for the Advancement of Medical Instrumentation, and representatives of academia, industry, and government, have developed a new error grid, called the surveillance error grid (SEG) as a tool to assess the degree of clinical risk from inaccurate blood glucose (BG) monitors. A total of 206 diabetes clinicians were surveyed about the clinical risk of errors of measured BG levels by a monitor. The impact of such errors on 4 patient scenarios was surveyed. Each monitor/reference data pair was scored and color-coded on a graph per its average risk rating. Using modeled data representative of the accuracy of contemporary meters, the relationships between clinical risk and monitor error were calculated for the Clarke error grid (CEG), Parkes error grid (PEG), and SEG. SEG action boundaries were consistent across scenarios, regardless of whether the patient was type 1 or type 2 or using insulin or not. No significant differences were noted between responses of adult/pediatric or 4 types of clinicians. Although small specific differences in risk boundaries between US and non-US clinicians were noted, the panel felt they did not justify separate grids for these 2 types of clinicians. The data points of the SEG were classified in 15 zones according to their assigned level of risk, which allowed for comparisons with the classic CEG and PEG. Modeled glucose monitor data with realistic self-monitoring of blood glucose errors derived from meter testing experiments plotted on the SEG when compared to

  15. Design for Error Tolerance

    DEFF Research Database (Denmark)

    Rasmussen, Jens

    1983-01-01

    An important aspect of the optimal design of computer-based operator support systems is the sensitivity of such systems to operator errors. The author discusses how a system might allow for human variability with the use of reversibility and observability.

  16. Apologies and Medical Error

    Science.gov (United States)

    2008-01-01

    One way in which physicians can respond to a medical error is to apologize. Apologies—statements that acknowledge an error and its consequences, take responsibility, and communicate regret for having caused harm—can decrease blame, decrease anger, increase trust, and improve relationships. Importantly, apologies also have the potential to decrease the risk of a medical malpractice lawsuit and can help settle claims by patients. Patients indicate they want and expect explanations and apologies after medical errors and physicians indicate they want to apologize. However, in practice, physicians tend to provide minimal information to patients after medical errors and infrequently offer complete apologies. Although fears about potential litigation are the most commonly cited barrier to apologizing after medical error, the link between litigation risk and the practice of disclosure and apology is tenuous. Other barriers might include the culture of medicine and the inherent psychological difficulties in facing one’s mistakes and apologizing for them. Despite these barriers, incorporating apology into conversations between physicians and patients can address the needs of both parties and can play a role in the effective resolution of disputes related to medical error. PMID:18972177

  17. Thermodynamics of Error Correction

    Directory of Open Access Journals (Sweden)

    Pablo Sartori

    2015-12-01

    Full Text Available Information processing at the molecular scale is limited by thermal fluctuations. This can cause undesired consequences in copying information since thermal noise can lead to errors that can compromise the functionality of the copy. For example, a high error rate during DNA duplication can lead to cell death. Given the importance of accurate copying at the molecular scale, it is fundamental to understand its thermodynamic features. In this paper, we derive a universal expression for the copy error as a function of entropy production and work dissipated by the system during wrong incorporations. Its derivation is based on the second law of thermodynamics; hence, its validity is independent of the details of the molecular machinery, be it any polymerase or artificial copying device. Using this expression, we find that information can be copied in three different regimes. In two of them, work is dissipated to either increase or decrease the error. In the third regime, the protocol extracts work while correcting errors, reminiscent of a Maxwell demon. As a case study, we apply our framework to study a copy protocol assisted by kinetic proofreading, and show that it can operate in any of these three regimes. We finally show that, for any effective proofreading scheme, error reduction is limited by the chemical driving of the proofreading reaction.

  18. Quantum diffusion

    International Nuclear Information System (INIS)

    Habib, S.

    1994-01-01

    We consider a simple quantum system subjected to a classical random force. Under certain conditions it is shown that the noise-averaged Wigner function of the system follows an integro-differential stochastic Liouville equation. In the simple case of polynomial noise-couplings this equation reduces to a generalized Fokker-Planck form. With nonlinear noise injection new ''quantum diffusion'' terms arise that have no counterpart in the classical case. Two special examples that are not of a Fokker-Planck form are discussed: the first with a localized noise source and the other with a spatially modulated noise source

  19. Hereditary Diffuse Gastric Cancer

    Science.gov (United States)

    ... Approved by the Cancer.Net Editorial Board, 10/2017. What is hereditary diffuse gastric cancer? Hereditary diffuse gastric cancer (HDGC) is a rare ...

  20. A high-energy double-crystal fixed exit monochromator for the X17 superconducting wiggler beam line at the NSLS

    International Nuclear Information System (INIS)

    Garrett, R.F.; Dilmanian, F.A.; Oversluizen, T.; Lenhard, A.; Berman, L.E.; Chapman, L.D.; Stoeber, W.

    1992-01-01

    A high-energy double-crystal x-ray monochromator has been constructed for use on the X-17 beam line at the National Synchrotron Light Source (NSLS). Its design is based on the ''boomerang'' right angle linkage, and features a fixed exit beam, a cooled first crystal, and an energy range of 8--92 keV. The entire mechanism is UHV compatible. The design is described and performance details, obtained in testing at the X17 beam line, are presented

  1. Error Decomposition and Adaptivity for Response Surface Approximations from PDEs with Parametric Uncertainty

    KAUST Repository

    Bryant, C. M.; Prudhomme, S.; Wildey, T.

    2015-01-01

    In this work, we investigate adaptive approaches to control errors in response surface approximations computed from numerical approximations of differential equations with uncertain or random data and coefficients. The adaptivity of the response surface approximation is based on a posteriori error estimation, and the approach relies on the ability to decompose the a posteriori error estimate into contributions from the physical discretization and the approximation in parameter space. Errors are evaluated in terms of linear quantities of interest using adjoint-based methodologies. We demonstrate that a significant reduction in the computational cost required to reach a given error tolerance can be achieved by refining the dominant error contributions rather than uniformly refining both the physical and stochastic discretization. Error decomposition is demonstrated for a two-dimensional flow problem, and adaptive procedures are tested on a convection-diffusion problem with discontinuous parameter dependence and a diffusion problem, where the diffusion coefficient is characterized by a 10-dimensional parameter space.
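
    A schematic sketch of the adaptive decision described above (not the authors' estimator): given separately computed a posteriori estimates of the physical-discretization and parameter-space error contributions for a quantity of interest, refine whichever contribution dominates until the combined estimate meets the tolerance. The estimator and refinement callables here are placeholders.

        def adapt(est_physical, est_stochastic, refine_physical, refine_stochastic, tol):
            """Greedy refinement driven by a decomposed a posteriori error estimate.
            est_*: callables returning the current error contributions.
            refine_*: callables refining the corresponding discretization in place."""
            while True:
                e_h, e_s = est_physical(), est_stochastic()
                if e_h + e_s <= tol:
                    return e_h, e_s
                # refine only the dominant contribution rather than refining both uniformly
                (refine_physical if e_h >= e_s else refine_stochastic)()

        # Toy demonstration with fake error contributions that halve on each refinement
        state = {"h": 0.8, "s": 0.3}
        print(adapt(lambda: state["h"], lambda: state["s"],
                    lambda: state.update(h=state["h"] / 2),
                    lambda: state.update(s=state["s"] / 2), tol=0.25))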

  2. Learning from Errors

    Directory of Open Access Journals (Sweden)

    MA. Lendita Kryeziu

    2015-06-01

    Full Text Available “Errare humanum est”, a well known and widespread Latin proverb which states that: to err is human, and that people make mistakes all the time. However, what counts is that people must learn from mistakes. On these grounds Steve Jobs stated: “Sometimes when you innovate, you make mistakes. It is best to admit them quickly, and get on with improving your other innovations.” Similarly, in learning a new language, learners make mistakes, thus it is important to accept them, learn from them, discover the reason why they make them, improve and move on. The significance of studying errors is described by Corder as: “There have always been two justifications proposed for the study of learners' errors: the pedagogical justification, namely that a good understanding of the nature of error is necessary before a systematic means of eradicating them could be found, and the theoretical justification, which claims that a study of learners' errors is part of the systematic study of the learners' language which is itself necessary to an understanding of the process of second language acquisition” (Corder, 1982: 1). Thus the importance and the aim of this paper is analyzing errors in the process of second language acquisition and the way we teachers can benefit from mistakes to help students improve themselves while giving the proper feedback.

  3. Compact disk error measurements

    Science.gov (United States)

    Howe, D.; Harriman, K.; Tehranchi, B.

    1993-01-01

    The objectives of this project are as follows: provide hardware and software that will perform simple, real-time, high resolution (single-byte) measurement of the error burst and good data gap statistics seen by a photoCD player read channel when recorded CD write-once discs of variable quality (i.e., condition) are being read; extend the above system to enable measurement of the hard decision (i.e., 1-bit error flags) and soft decision (i.e., 2-bit error flags) decoding information that is produced/used by the Cross Interleaved - Reed - Solomon - Code (CIRC) block decoder employed in the photoCD player read channel; construct a model that uses data obtained via the systems described above to produce meaningful estimates of output error rates (due to both uncorrected ECC words and misdecoded ECC words) when a CD disc having specific (measured) error statistics is read (completion date to be determined); and check the hypothesis that current adaptive CIRC block decoders are optimized for pressed (DAD/ROM) CD discs. If warranted, do a conceptual design of an adaptive CIRC decoder that is optimized for write-once CD discs.
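
    A small sketch of the single-byte burst and good-data-gap statistics targeted by the first objective, computed here from a generic stream of per-byte error flags; the flag source and format are assumptions for illustration, not the project's actual read-channel interface:

        from itertools import groupby

        def burst_gap_stats(error_flags):
            """Lengths of error bursts (runs of 1s) and good-data gaps (runs of 0s)
            in a per-byte error-flag sequence."""
            bursts, gaps = [], []
            for flag, run in groupby(error_flags):
                (bursts if flag else gaps).append(sum(1 for _ in run))
            return bursts, gaps

        # Illustrative flag stream: 1 = byte in error, 0 = good byte
        print(burst_gap_stats([0, 0, 1, 1, 1, 0, 1, 0, 0, 0]))  # -> ([3, 1], [2, 1, 3])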

  4. Optimisation of monochrome images

    International Nuclear Information System (INIS)

    Potter, R.

    1983-01-01

    Gamma cameras with modern imaging systems usually digitize the signals to allow storage and processing of the image in a computer. Although such computer systems are widely used for the extraction of quantitative uptake estimates and the analysis of time variant data, the vast majority of nuclear medicine images is still interpreted on the basis of an observer's visual assessment of a photographic hardcopy image. The optimisation of hardcopy devices is therefore vital, and factors such as resolution, uniformity, noise, grey scales and display matrices are discussed. Once optimum display parameters have been determined, routine procedures for quality control need to be established; suitable procedures are discussed. (U.K.)

  5. Theory of monochromators based on holographic toroidal arrays for the X-UV spectrum band. Tests of the 'TGM 10 metres, 4 degrees' on the ACO storage ring

    International Nuclear Information System (INIS)

    Lizon a Lugrin, Eric

    1988-01-01

    As the use of synchrotron radiation is strongly increasing, the need for monochromators in the X-UV range is very important. This research thesis aimed at the development of a prototype monochromator based on toroidal lamellar arrays at grazing incidence. In the first part, the author recalls theoretical aspects of light scattering rules adapted to a lamellar array, and of wave-matter interaction rules. In the second part, he reports the calculation of the monochromator, its mechanical description, and its implementation on the light line of the ACO storage ring. In the third part, the author reports tests performed without any input slit and in reverse optical configuration on the ACO storage ring. The energy range, the linearity with respect to wavelength, the rejection of higher orders of scattered light, the flux and the resolution are in compliance with expected values [fr

  6. Errors in Neonatology

    Directory of Open Access Journals (Sweden)

    Antonio Boldrini

    2013-06-01

    Full Text Available Introduction: Danger and errors are inherent in human activities. In medical practice errors can lead to adverse events for patients. Mass media echo the whole scenario. Methods: We reviewed recent published papers in the PubMed database to focus on the evidence and management of errors in medical practice in general and in Neonatology in particular. We compared the results of the literature with our specific experience in the Nina Simulation Centre (Pisa, Italy). Results: In Neonatology the main error domains are: medication and total parenteral nutrition, resuscitation and respiratory care, invasive procedures, nosocomial infections, patient identification, diagnostics. Risk factors include patients’ size, prematurity, vulnerability and underlying disease conditions, but also multidisciplinary teams, working conditions that cause fatigue, and the large variety of treatment and investigative modalities needed. Discussion and Conclusions: In our opinion, it is hardly possible to change the human beings but it is likely possible to change the conditions under which they work. Voluntary error reporting systems can help in preventing adverse events. Education and re-training by means of simulation can be an effective strategy too. In Pisa (Italy), Nina (ceNtro di FormazIone e SimulazioNe NeonAtale) is a simulation center that offers the possibility of continuous retraining in technical and non-technical skills to optimize neonatological care strategies. Furthermore, we have been working on a novel skill trainer for mechanical ventilation (MEchatronic REspiratory System SImulator for Neonatal Applications, MERESSINA). Finally, in our opinion national health policy indirectly influences the risk for errors. Proceedings of the 9th International Workshop on Neonatology · Cagliari (Italy) · October 23rd-26th, 2013 · Learned lessons, changing practice and cutting-edge research

  7. LIBERTARISMO & ERROR CATEGORIAL

    Directory of Open Access Journals (Sweden)

    Carlos G. Patarroyo G.

    2009-01-01

    Full Text Available This article offers a defense of libertarianism against two accusations according to which it commits a category error. To this end, Gilbert Ryle's philosophy is used as a tool to explain the reasons underlying these accusations and to show why, even though certain versions of libertarianism that appeal to agent causation or to Cartesian dualism do commit these errors, a libertarianism that seeks the basis of the possibility of human freedom in physicalist indeterminism cannot necessarily be accused of incurring them.

  8. Libertarismo & Error Categorial

    OpenAIRE

    PATARROYO G, CARLOS G

    2009-01-01

    This article offers a defense of libertarianism against two accusations according to which it commits a category error. To this end, Gilbert Ryle's philosophy is used as a tool to explain the reasons underlying these accusations and to show why, even though certain versions of libertarianism that appeal to agent causation or to Cartesian dualism commit these errors, a libertarianism that seeks in physicalist indeterminism the basis of the possibi...

  9. Error Free Software

    Science.gov (United States)

    1985-01-01

    A mathematical theory for development of "higher order" software to catch computer mistakes resulted from a Johnson Space Center contract for Apollo spacecraft navigation. Two women who were involved in the project formed Higher Order Software, Inc. to develop and market the system of error analysis and correction. They designed software that is logically error-free and that, in one instance, was found to increase productivity by 600%. USE.IT defines its objectives using AXES -- a user can write in English and the system converts to computer languages. It is employed by several large corporations.

  10. A laboratory-based hard x-ray monochromator for high-resolution x-ray emission spectroscopy and x-ray absorption near edge structure measurements

    Energy Technology Data Exchange (ETDEWEB)

    Seidler, G. T., E-mail: seidler@uw.edu; Mortensen, D. R.; Remesnik, A. J.; Pacold, J. I.; Ball, N. A.; Barry, N.; Styczinski, M.; Hoidn, O. R. [Physics Department, University of Washington, Seattle, Washington 98195-1560 (United States)

    2014-11-15

    We report the development of a laboratory-based Rowland-circle monochromator that incorporates a low power x-ray (bremsstrahlung) tube source, a spherically bent crystal analyzer, and an energy-resolving solid-state detector. This relatively inexpensive, introductory level instrument achieves 1-eV energy resolution for photon energies of ∼5 keV to ∼10 keV while also demonstrating a net efficiency previously seen only in laboratory monochromators having much coarser energy resolution. Despite the use of only a compact, air-cooled 10 W x-ray tube, we find count rates for nonresonant x-ray emission spectroscopy comparable to those achieved at monochromatized spectroscopy beamlines at synchrotron light sources. For x-ray absorption near edge structure, the monochromatized flux is small (due to the use of a low-powered x-ray generator) but still useful for routine transmission-mode studies of concentrated samples. These results indicate that upgrading to a standard commercial high-power line-focused x-ray tube or rotating anode x-ray generator would result in monochromatized fluxes of order 10{sup 6}–10{sup 7} photons/s with no loss in energy resolution. This work establishes core technical capabilities for a rejuvenation of laboratory-based hard x-ray spectroscopies that could have special relevance for contemporary research on catalytic or electrical energy storage systems using transition-metal, lanthanide, or noble-metal active species.
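
    For orientation (a textbook relation, not a result specific to this instrument): a crystal analyzer on the Rowland circle selects energy through the Bragg condition, and its energy resolution follows from differentiating that condition with respect to the Bragg angle:

        % Bragg condition for lattice spacing d and reflection order n
        n \lambda = 2 d \sin\theta_{B}
        \quad\Longleftrightarrow\quad
        E = \frac{n\,h c}{2 d \sin\theta_{B}},
        % so an angular acceptance \Delta\theta maps to the relative energy resolution
        \frac{\Delta E}{E} = \cot\theta_{B}\,\Delta\theta .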

  11. Minimization of spurious strains by using a Si bent-perfect-crystal monochromator: neutron surface strain scanning of a shot-peened sample

    Science.gov (United States)

    Rebelo Kornmeier, Joana; Gibmeier, Jens; Hofmann, Michael

    2011-06-01

    Neutron strain measurements are critical at the surface. When scanning close to a sample surface, aberration peak shifts arise due to geometrical and divergence effects. These aberration peak shifts can be of the same order as the peak shifts related to residual strains. In this study it will be demonstrated that by optimizing the horizontal bending radius of a Si (4 0 0) monochromator, the aberration peak shifts from surface effects can be strongly reduced. A stress-free sample of fine-grained construction steel, S690QL, was used to find the optimal instrumental conditions to minimize aberration peak shifts. The optimized Si (4 0 0) monochromator and instrument settings were then applied to measure the residual stress depth gradient of a shot-peened SAE 4140 steel sample to validate the effectiveness of the approach. The residual stress depth profile is in good agreement with results obtained by x-ray diffraction measurements from an international round robin test (BRITE-EURAM-project ENSPED). The results open very promising possibilities to bridge the gap between x-ray diffraction and conventional neutron diffraction for non-destructive residual stress analysis close to surfaces.
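
    As background (not part of the abstract), the measured lattice strain follows directly from the Bragg peak position, which is why aberration peak shifts map one-to-one into spurious strains:

    \[ \varepsilon \;=\; \frac{d - d_0}{d_0} \;=\; -\cot\theta_0\,(\theta - \theta_0), \]

    where d_0 and \theta_0 refer to the stress-free reference; any instrumental shift \theta - \theta_0 of the same order as the strain-induced shift therefore contaminates the result.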

  12. Minimization of spurious strains by using a Si bent-perfect-crystal monochromator: neutron surface strain scanning of a shot-peened sample

    International Nuclear Information System (INIS)

    Rebelo Kornmeier, Joana; Hofmann, Michael; Gibmeier, Jens

    2011-01-01

    Neutron strain measurements are critical at the surface. When scanning close to a sample surface, aberration peak shifts arise due to geometrical and divergence effects. These aberration peak shifts can be of the same order as the peak shifts related to residual strains. In this study it will be demonstrated that by optimizing the horizontal bending radius of a Si (4 0 0) monochromator, the aberration peak shifts from surface effects can be strongly reduced. A stress-free sample of fine-grained construction steel, S690QL, was used to find the optimal instrumental conditions to minimize aberration peak shifts. The optimized Si (4 0 0) monochromator and instrument settings were then applied to measure the residual stress depth gradient of a shot-peened SAE 4140 steel sample to validate the effectiveness of the approach. The residual stress depth profile is in good agreement with results obtained by x-ray diffraction measurements from an international round robin test (BRITE-EURAM-project ENSPED). The results open very promising possibilities to bridge the gap between x-ray diffraction and conventional neutron diffraction for non-destructive residual stress analysis close to surfaces

  13. Synchrotron X-ray adaptive monochromator: study and realization of a prototype; Monochromateur adaptatif pour rayonnement X synchrotron: etude et realisation d'un prototype

    Energy Technology Data Exchange (ETDEWEB)

    Dezoret, D.

    1995-12-12

    This work presents the study of a prototype synchrotron X-ray monochromator. The spectral qualities of this optic are sensitive to heat loads, which are particularly severe at third-generation synchrotron sources such as the ESRF. Indeed, the power delivered by synchrotron beams can reach a few kilowatts, with power densities of a few tens of watts per square millimetre. The resulting thermal deformation of the beamline optical elements can degrade their spectral efficiency. In order to compensate for these deformations, we studied the transposition of adaptive optics technology, developed for astronomy, to the X-ray domain. First, we considered the modifications of the spectral characteristics of a crystal under X-ray heat load and established the specifications required for a technological realisation. Thermomechanical and technological studies were then carried out to adapt the astronomical technology to X-rays, after which the construction of a prototype was begun. This monochromator consists of a silicon (111) crystal bonded onto a piezoelectric structure. The mechanical control is a closed-loop system composed of an infrared light source and a Shack-Hartmann CCD wavefront analyser. This system has to compensate the deformations of the crystal over the 5 keV to 60 keV energy range at a power density of 1 watt per square millimetre. (authors).

  14. A sub-50 meV spectrometer and energy filter for use in combination with 200 kV monochromated (S)TEMs.

    Science.gov (United States)

    Brink, H A; Barfels, M M G; Burgner, R P; Edwards, B N

    2003-09-01

    A high-energy resolution post-column spectrometer for the purpose of electron energy loss spectroscopy (EELS) and energy-filtered TEM in combination with a monochromated (S)TEM is presented. The prism aberrations were corrected up to fourth order using multipole elements, improving the electron optical energy resolution and increasing the acceptance of the spectrometer for a combination of object area and collection angles. Electronics supplying the prism, drift tube, high-tension reference and critical lenses have been newly designed such that, in combination with the new electron optics, a sub-50 meV energy resolution has been realized, a 10-fold improvement over past post-column spectrometer designs. The first system has been installed on a 200 kV monochromated TEM at the Delft University of Technology. A total system energy resolution of sub-100 meV has been demonstrated. For a 1 s exposure the resolution degraded to 110 meV as a result of noise. No further degradation in energy resolution was measured for exposures up to 1 min at 120 kV. Spectral resolution measurements, performed on the π* peak of the BN K-edge, demonstrated a 350 meV (FWHM) peak width at 200 kV. This measure is predominantly determined by the natural line width of the BN K-edge.

  15. Error Correcting Codes

    Indian Academy of Sciences (India)

    Science and Automation at ... the Reed-Solomon code contained 223 bytes of data, (a byte ... then you have a data storage system with error correction, that ..... practical codes, storing such a table is infeasible, as it is generally too large.

  16. Error Correcting Codes

    Indian Academy of Sciences (India)

    Error Correcting Codes – Reed Solomon Codes. Priti Shankar. Series Article, Resonance – Journal of Science Education, Volume 2, Issue 3, March ... Author Affiliation: Department of Computer Science and Automation, Indian Institute of Science, Bangalore 560 012, India ...

  17. Challenge and Error: Critical Events and Attention-Related Errors

    Science.gov (United States)

    Cheyne, James Allan; Carriere, Jonathan S. A.; Solman, Grayden J. F.; Smilek, Daniel

    2011-01-01

    Attention lapses resulting from reactivity to task challenges and their consequences constitute a pervasive factor affecting everyday performance errors and accidents. A bidirectional model of attention lapses (error ↔ attention-lapse: Cheyne, Solman, Carriere, & Smilek, 2009) argues that errors beget errors by generating attention…

  18. Team errors: definition and taxonomy

    International Nuclear Information System (INIS)

    Sasou, Kunihide; Reason, James

    1999-01-01

    In error analysis or error management, the focus is usually upon individuals who have made errors. In large complex systems, however, most people work in teams or groups. Considering this working environment, insufficient emphasis has been given to 'team errors'. This paper discusses the definition and taxonomy of team errors. These notions are also applied to events that have occurred in the nuclear power industry, aviation industry and shipping industry. The paper also discusses the relations between team errors and Performance Shaping Factors (PSFs). As a result, the proposed definition and taxonomy are found to be useful in categorizing team errors. The analysis also reveals that deficiencies in communication, resource/task management, excessive authority gradient, and excessive professional courtesy will cause team errors. Handling human errors as team errors provides an opportunity to reduce human errors

  19. Diffusive Wave Approximation to the Shallow Water Equations: Computational Approach

    KAUST Repository

    Collier, Nathan; Radwan, Hany; Dalcin, Lisandro; Calo, Victor M.

    2011-01-01

    We discuss the use of time adaptivity applied to the one dimensional diffusive wave approximation to the shallow water equations. A simple and computationally economical error estimator is discussed which enables time-step size adaptivity
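
    The abstract only sketches the time-adaptive idea; a generic, minimal illustration of error-estimator-driven step-size control (not the authors' estimator; the step() callback, tolerance and controller constants are hypothetical) is:

      # Generic error-estimator-driven time-step adaptivity (a sketch; the
      # estimator in the paper is specific to the diffusive wave model).
      # step(u, t, dt) is a hypothetical callback that advances the system
      # one step and returns (u_new, local_error_estimate).
      def adapt_time_steps(u, t_end, dt, step, tol=1e-4, order=1):
          t = 0.0
          while t < t_end:
              dt = min(dt, t_end - t)
              u_new, err = step(u, t, dt)
              if err <= tol:                      # accept the step
                  u, t = u_new, t + dt
              # standard controller: shrink/grow dt toward the error tolerance
              dt *= 0.9 * (tol / max(err, 1e-14)) ** (1.0 / (order + 1))
          return u

      # toy usage: decay ODE du/dt = -u, error estimated from one full vs. two half Euler steps
      def step(u, t, dt):
          full = u * (1.0 - dt)
          half = u * (1.0 - dt / 2.0) ** 2
          return half, abs(half - full)

      print(adapt_time_steps(1.0, 1.0, 0.1, step))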

  20. Diffusion archeology for diffusion progression history reconstruction.

    Science.gov (United States)

    Sefer, Emre; Kingsford, Carl

    2016-11-01

    Diffusion through graphs can be used to model many real-world processes, such as the spread of diseases, social network memes, computer viruses, or water contaminants. Often, a real-world diffusion cannot be directly observed while it is occurring - perhaps it is not noticed until some time has passed, continuous monitoring is too costly, or privacy concerns limit data access. This leads to the need to reconstruct how the present state of the diffusion came to be from partial diffusion data. Here, we tackle the problem of reconstructing a diffusion history from one or more snapshots of the diffusion state. This ability can be invaluable to learn when certain computer nodes are infected or which people are the initial disease spreaders to control future diffusions. We formulate this problem over discrete-time SEIRS-type diffusion models in terms of maximum likelihood. We design methods that are based on submodularity and a novel prize-collecting dominating-set vertex cover (PCDSVC) relaxation that can identify likely diffusion steps with some provable performance guarantees. Our methods are the first to be able to reconstruct complete diffusion histories accurately in real and simulated situations. As a special case, they can also identify the initial spreaders better than the existing methods for that problem. Our results for both meme and contaminant diffusion show that the partial diffusion data problem can be overcome with proper modeling and methods, and that hidden temporal characteristics of diffusion can be predicted from limited data.
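
    As a toy illustration of the kind of discrete-time diffusion snapshot such reconstruction methods start from (a sketch under assumed SI-style dynamics, not the paper's SEIRS formulation or its PCDSVC method; graph and parameters are made up):

      import random

      # Toy discrete-time susceptible-infected spread on an adjacency-list graph.
      # Only the final snapshot (the infected set) is observed; the step-by-step
      # history is hidden, which is the reconstruction problem described above.
      def diffuse(adj, seeds, p=0.3, steps=5, rng=random.Random(0)):
          infected = set(seeds)
          for _ in range(steps):
              newly = {v for u in infected for v in adj[u]
                       if v not in infected and rng.random() < p}
              infected |= newly
          return infected

      adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}
      print(diffuse(adj, seeds=[0]))   # observed snapshot; the history is hidden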

  1. Symmetries and modelling functions for diffusion processes

    International Nuclear Information System (INIS)

    Nikitin, A G; Spichak, S V; Vedula, Yu S; Naumovets, A G

    2009-01-01

    A constructive approach to the theory of diffusion processes is proposed, which is based on application of both symmetry analysis and the method of modelling functions. An algorithm for construction of the modelling functions is suggested. This algorithm is based on the error function expansion (ERFEX) of experimental concentration profiles. The high-accuracy analytical description of the profiles provided by ERFEX approximation allows a convenient extraction of the concentration dependence of diffusivity from experimental data and prediction of the diffusion process. Our analysis is exemplified by its employment in experimental results obtained for surface diffusion of lithium on the molybdenum (1 1 2) surface precovered with dysprosium. The ERFEX approximation can be directly extended to many other diffusion systems.
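
    A hedged sketch of what an error-function expansion (ERFEX-style) fit of a concentration profile can look like; the exact expansion and fitting procedure used by the authors may differ, and the data below are synthetic:

      import numpy as np
      from scipy.special import erfc
      from scipy.optimize import curve_fit

      # Two complementary-error-function terms with amplitudes a_i and
      # diffusion lengths l_i ~ 2*sqrt(D_i*t); more terms can be added.
      def erfex(x, a1, l1, a2, l2):
          return a1 * erfc(x / l1) + a2 * erfc(x / l2)

      x = np.linspace(0, 10, 50)
      c_meas = 0.7 * erfc(x / 2.0) + 0.3 * erfc(x / 5.0)   # synthetic "profile"
      popt, _ = curve_fit(erfex, x, c_meas, p0=[0.5, 1.0, 0.5, 4.0])
      print(popt)   # fitted amplitudes and diffusion lengths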

  2. Error characterization for asynchronous computations: Proxy equation approach

    Science.gov (United States)

    Sallai, Gabriella; Mittal, Ankita; Girimaji, Sharath

    2017-11-01

    Numerical techniques for asynchronous fluid flow simulations are currently under development to enable efficient utilization of massively parallel computers. These numerical approaches attempt to accurately solve time evolution of transport equations using spatial information at different time levels. The truncation error of asynchronous methods can be divided into two parts: delay dependent (EA) or asynchronous error and delay independent (ES) or synchronous error. The focus of this study is a specific asynchronous error mitigation technique called proxy-equation approach. The aim of this study is to examine these errors as a function of the characteristic wavelength of the solution. Mitigation of asynchronous effects requires that the asynchronous error be smaller than synchronous truncation error. For a simple convection-diffusion equation, proxy-equation error analysis identifies critical initial wave-number, λc. At smaller wave numbers, synchronous error are larger than asynchronous errors. We examine various approaches to increase the value of λc in order to improve the range of applicability of proxy-equation approach.

  3. Imagery of Errors in Typing

    Science.gov (United States)

    Rieger, Martina; Martinez, Fanny; Wenke, Dorit

    2011-01-01

    Using a typing task we investigated whether insufficient imagination of errors and error corrections is related to duration differences between execution and imagination. In Experiment 1 spontaneous error imagination was investigated, whereas in Experiment 2 participants were specifically instructed to imagine errors. Further, in Experiment 2 we…

  4. Correction of refractive errors

    Directory of Open Access Journals (Sweden)

    Vladimir Pfeifer

    2005-10-01

    Full Text Available Background: Spectacles and contact lenses are the most frequently used, the safest and the cheapest way to correct refractive errors. The development of keratorefractive surgery has brought new opportunities for correction of refractive errors in patients who need to be less dependent on spectacles or contact lenses. Until recently, RK was the most commonly performed refractive procedure for nearsighted patients. Conclusions: The introduction of the excimer laser in refractive surgery has opened new opportunities for remodelling the cornea. The laser energy can be delivered on the stromal surface, as in PRK, or deeper in the corneal stroma by means of lamellar surgery. In LASIK the flap is created with a microkeratome, in LASEK with ethanol, and in epi-LASIK the ultra-thin flap is created mechanically.

  5. Error-Free Software

    Science.gov (United States)

    1989-01-01

    001 is an integrated tool suited for automatically developing ultra reliable models, simulations and software systems. Developed and marketed by Hamilton Technologies, Inc. (HTI), it has been applied in engineering, manufacturing, banking and software tools development. The software provides the ability to simplify the complex. A system developed with 001 can be a prototype or fully developed with production quality code. It is free of interface errors, consistent, logically complete and has no data or control flow errors. Systems can be designed, developed and maintained with maximum productivity. Margaret Hamilton, President of Hamilton Technologies, also directed the research and development of USE.IT, an earlier product which was the first computer aided software engineering product in the industry to concentrate on automatically supporting the development of an ultrareliable system throughout its life cycle. Both products originated in NASA technology developed under a Johnson Space Center contract.

  6. Minimum Tracking Error Volatility

    OpenAIRE

    Luca RICCETTI

    2010-01-01

    Investors assign part of their funds to asset managers that are given the task of beating a benchmark. The risk management department usually imposes a maximum value of the tracking error volatility (TEV) in order to keep the risk of the portfolio near to that of the selected benchmark. However, risk management does not establish a rule on TEV which enables us to understand whether the asset manager is really active or not and, in practice, asset managers sometimes follow passively the corres...

  7. Error-correction coding

    Science.gov (United States)

    Hinds, Erold W. (Principal Investigator)

    1996-01-01

    This report describes the progress made towards the completion of a specific task on error-correcting coding. The proposed research consisted of investigating the use of modulation block codes as the inner code of a concatenated coding system in order to improve the overall space link communications performance. The study proposed to identify and analyze candidate codes that will complement the performance of the overall coding system which uses the interleaved RS (255,223) code as the outer code.

  8. Satellite Photometric Error Determination

    Science.gov (United States)

    2015-10-18

    Satellite Photometric Error Determination Tamara E. Payne, Philip J. Castro, Stephen A. Gregory Applied Optimization 714 East Monument Ave, Suite...advocate the adoption of new techniques based on in-frame photometric calibrations enabled by newly available all-sky star catalogs that contain highly...filter systems will likely be supplanted by the Sloan based filter systems. The Johnson photometric system is a set of filters in the optical

  9. Video Error Correction Using Steganography

    Science.gov (United States)

    Robie, David L.; Mersereau, Russell M.

    2002-12-01

    The transmission of any data is always subject to corruption due to errors, but video transmission, because of its real time nature must deal with these errors without retransmission of the corrupted data. The error can be handled using forward error correction in the encoder or error concealment techniques in the decoder. This MPEG-2 compliant codec uses data hiding to transmit error correction information and several error concealment techniques in the decoder. The decoder resynchronizes more quickly with fewer errors than traditional resynchronization techniques. It also allows for perfect recovery of differentially encoded DCT-DC components and motion vectors. This provides for a much higher quality picture in an error-prone environment while creating an almost imperceptible degradation of the picture in an error-free environment.

  10. Video Error Correction Using Steganography

    Directory of Open Access Journals (Sweden)

    Robie David L

    2002-01-01

    Full Text Available The transmission of any data is always subject to corruption due to errors, but video transmission, because of its real time nature must deal with these errors without retransmission of the corrupted data. The error can be handled using forward error correction in the encoder or error concealment techniques in the decoder. This MPEG-2 compliant codec uses data hiding to transmit error correction information and several error concealment techniques in the decoder. The decoder resynchronizes more quickly with fewer errors than traditional resynchronization techniques. It also allows for perfect recovery of differentially encoded DCT-DC components and motion vectors. This provides for a much higher quality picture in an error-prone environment while creating an almost imperceptible degradation of the picture in an error-free environment.

  11. Error-related brain activity and error awareness in an error classification paradigm.

    Science.gov (United States)

    Di Gregorio, Francesco; Steinhauser, Marco; Maier, Martin E

    2016-10-01

    Error-related brain activity has been linked to error detection enabling adaptive behavioral adjustments. However, it is still unclear which role error awareness plays in this process. Here, we show that the error-related negativity (Ne/ERN), an event-related potential reflecting early error monitoring, is dissociable from the degree of error awareness. Participants responded to a target while ignoring two different incongruent distractors. After responding, they indicated whether they had committed an error, and if so, whether they had responded to one or to the other distractor. This error classification paradigm allowed distinguishing partially aware errors, (i.e., errors that were noticed but misclassified) and fully aware errors (i.e., errors that were correctly classified). The Ne/ERN was larger for partially aware errors than for fully aware errors. Whereas this speaks against the idea that the Ne/ERN foreshadows the degree of error awareness, it confirms the prediction of a computational model, which relates the Ne/ERN to post-response conflict. This model predicts that stronger distractor processing - a prerequisite of error classification in our paradigm - leads to lower post-response conflict and thus a smaller Ne/ERN. This implies that the relationship between Ne/ERN and error awareness depends on how error awareness is related to response conflict in a specific task. Our results further indicate that the Ne/ERN but not the degree of error awareness determines adaptive performance adjustments. Taken together, we conclude that the Ne/ERN is dissociable from error awareness and foreshadows adaptive performance adjustments. Our results suggest that the relationship between the Ne/ERN and error awareness is correlative and mediated by response conflict. Copyright © 2016 Elsevier Inc. All rights reserved.

  12. Excess Entropy and Diffusivity

    Indian Academy of Sciences (India)

    Excess entropy scaling of diffusivity (Rosenfeld, 1977). Analogous relationships also exist for viscosity and thermal conductivity.

  13. An improved procedure for determining grain boundary diffusion coefficients from averaged concentration profiles

    Science.gov (United States)

    Gryaznov, D.; Fleig, J.; Maier, J.

    2008-03-01

    Whipple's solution of the problem of grain boundary diffusion and Le Claire's relation, which is often used to determine grain boundary diffusion coefficients, are examined for a broad range of ratios of grain boundary to bulk diffusivities Δ and diffusion times t. Different reasons leading to errors in determining the grain boundary diffusivity (DGB) when using Le Claire's relation are discussed. It is shown that nonlinearities of the diffusion profiles in lnCav-y6/5 plots and deviations from "Le Claire's constant" (-0.78) are the major error sources (Cav=averaged concentration, y =coordinate in diffusion direction). An improved relation (replacing Le Claire's constant) is suggested for analyzing diffusion profiles particularly suited for small diffusion lengths (short times) as often required in diffusion experiments on nanocrystalline materials.
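
    For reference, the commonly quoted form of Le Claire's relation that the paper re-examines (standard background, stated here in one conventional notation) extracts the grain boundary diffusivity from the slope of the averaged profile in ln C_av versus y^{6/5} coordinates:

    \[ s\,\delta\,D_{GB} \;=\; 1.322\,\sqrt{\frac{D}{t}}\;\left(-\frac{\partial \ln C_{av}}{\partial y^{6/5}}\right)^{-5/3}, \]

    where s is the segregation factor, \delta the grain boundary width, D the bulk diffusivity and t the annealing time; the value -0.78 quoted in the abstract is the associated numerical constant of Le Claire's analysis whose deviations the authors quantify.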

  14. Error prevention at a radon measurement service laboratory

    International Nuclear Information System (INIS)

    Cohen, B.L.; Cohen, F.

    1989-01-01

    This article describes the steps taken at a high volume counting laboratory to avoid human, instrument, and computer errors. The laboratory analyzes diffusion barrier charcoal adsorption canisters which have been used to test homes and commercial buildings. A series of computer and human cross-checks are utilized to assure that accurate results are reported to the correct client

  15. Assessment of using a double rotor neutron monochromator system in studying the dynamics of solids and liquids

    Energy Technology Data Exchange (ETDEWEB)

    Adib, M.; Maayouf, R.M.A.; Abdel-Kawy, A.; Gwaily, S.E.; Hamouda, I. (Atomic Energy Establishment, Inshas (Egypt). Reactor and Neutron Physics Dept.)

    1981-01-01

    Two soil samples were subjected to a comprehensive study of the self-diffusion coefficient of Zn in soils previously treated with ZnSO4, EDTA and Zn-EDTA. The effect of the chelating compounds on the ratio between the solid-phase fraction of the labile Zn and its concentration in the soil solution (capacity factor) was also studied. The data revealed the following points of interest: (1) The use of chelating agents, i.e. EDTA and Zn-EDTA, increased the amount of Zn in the soil solution; hence the capacity factor differed according to the type of soil, i.e. calcareous or alluvial. (2) The increase of the Zn concentration in the soil solution due to the use of chelating agents increased the self-diffusion coefficient of Zn in the investigated soils. The self-diffusion coefficient of Zn in the alluvial soil was higher than that in the calcareous one. (3) The practical implication of the present study is that organic amendments and chelated Zn fertilizers are expected to be more effective than soluble Zn salts in alleviating Zn deficiency in such soils.

  16. Real depletion in nodal diffusion codes

    International Nuclear Information System (INIS)

    Petkov, P.T.

    2002-01-01

    The fuel depletion is described by more than one hundred fuel isotopes in the advanced lattice codes like HELIOS, but only a few fuel isotopes are accounted for even in the advanced steady-state diffusion codes. The general assumption that the number densities of the majority of the fuel isotopes depend only on the fuel burnup is seriously in error if high burnup is considered. The real depletion conditions in the reactor core differ from the asymptotic ones at the stage of lattice depletion calculations. This study reveals which fuel isotopes should be explicitly accounted for in the diffusion codes in order to predict adequately the real depletion effects in the core. A somewhat strange conclusion is that if the real number densities of the main fissionable isotopes are not explicitly accounted for in the diffusion code, then Sm-149 should not be accounted for either, because the net error in k-inf is smaller (Authors)

  17. Diagnostic errors in pediatric radiology

    International Nuclear Information System (INIS)

    Taylor, George A.; Voss, Stephan D.; Melvin, Patrice R.; Graham, Dionne A.

    2011-01-01

    Little information is known about the frequency, types and causes of diagnostic errors in imaging children. Our goals were to describe the patterns and potential etiologies of diagnostic error in our subspecialty. We reviewed 265 cases with clinically significant diagnostic errors identified during a 10-year period. Errors were defined as a diagnosis that was delayed, wrong or missed; they were classified as perceptual, cognitive, system-related or unavoidable; and they were evaluated by imaging modality and level of training of the physician involved. We identified 484 specific errors in the 265 cases reviewed (mean = 1.8 errors/case). Most discrepancies involved staff (45.5%). Two hundred fifty-eight individual cognitive errors were identified in 151 cases (mean = 1.7 errors/case). Of these, 83 cases (55%) had additional perceptual or system-related errors. One hundred sixty-five perceptual errors were identified in 165 cases. Of these, 68 cases (41%) also had cognitive or system-related errors. Fifty-four system-related errors were identified in 46 cases (mean = 1.2 errors/case) of which all were multi-factorial. Seven cases were unavoidable. Our study defines a taxonomy of diagnostic errors in a large academic pediatric radiology practice and suggests that most are multi-factorial in etiology. Further study is needed to define effective strategies for improvement. (orig.)

  18. Three-energy focusing Laue monochromator for the diamond light source x-ray pair distribution function beamline I15-1

    Energy Technology Data Exchange (ETDEWEB)

    Sutter, John P., E-mail: john.sutter@diamond.ac.uk; Chater, Philip A.; Hillman, Michael R.; Keeble, Dean S.; Wilhelm, Heribert [Diamond Light Source Ltd, Harwell Science and Innovation Campus, Chilton, Didcot, Oxfordshire OX11 0DE (United Kingdom); Tucker, Matt G. [Diamond Light Source Ltd, Harwell Science and Innovation Campus, Chilton, Didcot, Oxfordshire OX11 0DE (United Kingdom); ISIS Neutron and Muon Source, Science and Technology Facilities Council, Rutherford Appleton Laboratory, Harwell Oxford, Didcot, Oxfordshire OX11 0QX (United Kingdom)

    2016-07-27

    The I15-1 beamline, the new side station to I15 at the Diamond Light Source, will be dedicated to the collection of atomic pair distribution function data. A Laue monochromator will be used consisting of three silicon crystals diffracting X-rays at a common Bragg angle of 2.83°. The crystals use the (1 1 1), (2 2 0), and (3 1 1) planes to select 40, 65, and 76 keV X-rays, respectively, and will be bent meridionally to horizontally focus the selected X-rays onto the sample. All crystals will be cut to the same optimized asymmetry angle in order to eliminate image broadening from the crystal thickness. Finite element calculations show that the thermal distortion of the crystals will affect the image size and bandpass.
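
    A quick consistency check of the quoted crystal planes and energies (a sketch using the nominal Si lattice constant, which is not stated in the abstract):

      import math

      a = 5.431            # Si lattice constant, angstrom (nominal value)
      hc = 12.398          # keV * angstrom
      theta = math.radians(2.83)

      for hkl in [(1, 1, 1), (2, 2, 0), (3, 1, 1)]:
          d = a / math.sqrt(sum(i * i for i in hkl))       # cubic d-spacing
          energy = hc / (2.0 * d * math.sin(theta))        # Bragg: E = hc / (2 d sin(theta))
          print(hkl, round(energy, 1), "keV")
      # prints roughly 40.0, 65.4 and 76.7 keV, consistent with the quoted 40, 65 and 76 keV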

  19. Three-energy focusing Laue monochromator for the diamond light source x-ray pair distribution function beamline I15-1

    International Nuclear Information System (INIS)

    Sutter, John P.; Chater, Philip A.; Hillman, Michael R.; Keeble, Dean S.; Wilhelm, Heribert; Tucker, Matt G.

    2016-01-01

    The I15-1 beamline, the new side station to I15 at the Diamond Light Source, will be dedicated to the collection of atomic pair distribution function data. A Laue monochromator will be used consisting of three silicon crystals diffracting X-rays at a common Bragg angle of 2.83°. The crystals use the (1 1 1), (2 2 0), and (3 1 1) planes to select 40, 65, and 76 keV X-rays, respectively, and will be bent meridionally to horizontally focus the selected X-rays onto the sample. All crystals will be cut to the same optimized asymmetry angle in order to eliminate image broadening from the crystal thickness. Finite element calculations show that the thermal distortion of the crystals will affect the image size and bandpass.

  20. A compact low cost “master–slave” double crystal monochromator for x-ray cameras calibration of the Laser MégaJoule Facility

    Energy Technology Data Exchange (ETDEWEB)

    Hubert, S., E-mail: sebastien.hubert@cea.fr; Prévot, V.

    2014-12-21

    The Alternative Energies and Atomic Energy Commission (CEA-CESTA, France) built a specific double crystal monochromator (DCM) to perform calibration of x-ray cameras (CCD, streak and gated cameras) by means of a multiple anode diode type x-ray source for the MégaJoule Laser Facility. This DCM, based on pantograph geometry, was specifically modeled to respond to relevant engineering constraints and requirements. The major benefits are mechanical drive of the second crystal on the first one, through a single drive motor, as well as compactness of the entire device. Designed for flat beryl or Ge crystals, this DCM covers the 0.9–10 keV range of our High Energy X-ray Source. In this paper we present the mechanical design of the DCM, its features quantitatively measured and its calibration to finally provide monochromatized spectra displaying spectral purities better than 98%.

  1. Use of zero order diffraction of a grating monochromator towards convenient and sensitive detection of fluorescent analytes in multi fluorophoric systems

    Science.gov (United States)

    Panigrahi, Suraj Kumar; Mishra, Ashok Kumar

    2018-02-01

    White light excitation fluorescence (WLEF) is known to possess analytical advantage in terms of enhanced sensitivity and facile capture of the entire fluorescence spectral signature of multi component fluorescence systems. Using the zero order diffraction of the grating monochromator on the excitation side of a commercial spectrofluorimeter, it has been shown that WLEF spectral measurements can be conveniently carried out. Taking analyte multi-fluorophoric systems like (i) drugs and vitamins spiked in urine sample, (ii) adulteration of extra virgin olive oil with olive pomace oil and (iii) mixture of fabric dyes, it was observed that there is a significant enhancement of measurement sensitivity. The total fluorescence spectral response could be conveniently analysed using PLS2 regression. This work brings out the ease of the use of a conventional fluorimeter for WLEF measurements.
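
    A minimal sketch of the kind of PLS2 regression analysis mentioned for the total fluorescence response (the data shapes and numbers below are synthetic stand-ins, not the authors' calibration pipeline):

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression

      # X: WLEF emission spectra (samples x wavelengths); Y: analyte concentrations
      # (samples x components). Synthetic data for illustration only.
      rng = np.random.default_rng(0)
      X = rng.random((30, 200))
      Y = X[:, [10, 50, 120]] @ np.array([[1.0, 0.2, 0.0],
                                          [0.0, 1.0, 0.3],
                                          [0.1, 0.0, 1.0]]) + 0.01 * rng.random((30, 3))

      pls = PLSRegression(n_components=3).fit(X, Y)
      print(pls.predict(X[:2]))        # predicted concentrations for two spectra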

  2. Effects of beamline components (undulator, monochromator, focusing device) on the beam intensity at ID18F (ESRF)

    Energy Technology Data Exchange (ETDEWEB)

    Somogyi, A. E-mail: somogyia@esrf.fr; Drakopoulos, M.; Vekemans, B.; Vincze, L.; Simionovici, A.; Adams, F

    2003-01-01

    The ID18F microprobe end-station of the European Synchrotron Radiation Facility (ESRF) is dedicated to precise and reproducible quantitative X-ray fluorescence analysis in the ppm level with ≤5% accuracy for elements of Z≥19 and micron-size spatial resolution. In order to fulfill this requirement the precise monitoring and normalization of the intensity variation of the focused micro-beam is necessary. The various effects influencing the intensity variation, hence the stability of the μ-beam, were investigated by placing different detectors (miniature ionization chamber, photodiodes) into the monochromatic beam. The theoretical statistical error of the measured signal in each detector was estimated on the basis of the absorption and e⁻-ion-pair production processes and was compared with the measured statistical errors.

  3. Effects of beamline components (undulator, monochromator, focusing device) on the beam intensity at ID18F (ESRF)

    International Nuclear Information System (INIS)

    Somogyi, A.; Drakopoulos, M.; Vekemans, B.; Vincze, L.; Simionovici, A.; Adams, F.

    2003-01-01

    The ID18F microprobe end-station of the European Synchrotron Radiation Facility (ESRF) is dedicated to precise and reproducible quantitative X-ray fluorescence analysis in the ppm level with ≤5% accuracy for elements of Z≥19 and micron-size spatial resolution. In order to fulfill this requirement the precise monitoring and normalization of the intensity variation of the focused micro-beam is necessary. The various effects influencing the intensity variation, hence the stability of the μ-beam, were investigated by placing different detectors (miniature ionization chamber, photodiodes) into the monochromatic beam. The theoretical statistical error of the measured signal in each detector was estimated on the basis of the absorption and e⁻-ion-pair production processes and was compared with the measured statistical errors

  4. Effects of beamline components (undulator, monochromator, focusing device) on the beam intensity at ID18F (ESRF)

    CERN Document Server

    Somogyi, A; Vekemans, B; Vincze, L; Simionovici, A; Adams, F

    2003-01-01

    The ID18F microprobe end-station of the European Synchrotron Radiation Facility (ESRF) is dedicated to precise and reproducible quantitative X-ray fluorescence analysis in the ppm level with ≤5% accuracy for elements of Z≥19 and micron-size spatial resolution. In order to fulfill this requirement the precise monitoring and normalization of the intensity variation of the focused micro-beam is necessary. The various effects influencing the intensity variation, hence the stability of the μ-beam, were investigated by placing different detectors (miniature ionization chamber, photodiodes) into the monochromatic beam. The theoretical statistical error of the measured signal in each detector was estimated on the basis of the absorption and e⁻-ion-pair production processes and was compared with the measured statistical errors.

  5. Minimum Error Entropy Classification

    CERN Document Server

    Marques de Sá, Joaquim P; Santos, Jorge M F; Alexandre, Luís A

    2013-01-01

    This book explains the minimum error entropy (MEE) concept applied to data classification machines. Theoretical results on the inner workings of the MEE concept, in its application to solving a variety of classification problems, are presented in the wider realm of risk functionals. Researchers and practitioners also find in the book a detailed presentation of practical data classifiers using MEE. These include multi‐layer perceptrons, recurrent neural networks, complex-valued neural networks, modular neural networks, and decision trees. A clustering algorithm using a MEE‐like concept is also presented. Examples, tests, evaluation experiments and comparison with similar machines using classic approaches, complement the descriptions.

  6. Time-discrete higher order ALE formulations: a priori error analysis

    KAUST Repository

    Bonito, Andrea; Kyza, Irene; Nochetto, Ricardo H.

    2013-01-01

    We derive optimal a priori error estimates for discontinuous Galerkin (dG) time discrete schemes of any order applied to an advection-diffusion model defined on moving domains and written in the Arbitrary Lagrangian Eulerian (ALE) framework. Our

  7. Finite-difference schemes for anisotropic diffusion

    Energy Technology Data Exchange (ETDEWEB)

    Es, Bram van, E-mail: es@cwi.nl [Centrum Wiskunde and Informatica, P.O. Box 94079, 1090GB Amsterdam (Netherlands); FOM Institute DIFFER, Dutch Institute for Fundamental Energy Research, Association EURATOM-FOM (Netherlands); Koren, Barry [Eindhoven University of Technology (Netherlands); Blank, Hugo J. de [FOM Institute DIFFER, Dutch Institute for Fundamental Energy Research, Association EURATOM-FOM (Netherlands)

    2014-09-01

    In fusion plasmas diffusion tensors are extremely anisotropic due to the high temperature and large magnetic field strength. This causes diffusion, heat conduction, and viscous momentum loss to be effectively aligned with the magnetic field lines. This alignment leads to different values for the respective diffusive coefficients in the magnetic field direction and in the perpendicular direction, to the extent that heat diffusion coefficients can be up to 10¹² times larger in the parallel direction than in the perpendicular direction. This anisotropy puts stringent requirements on the numerical methods used to approximate the MHD-equations since any misalignment of the grid may cause the perpendicular diffusion to be polluted by the numerical error in approximating the parallel diffusion. Currently the common approach is to apply magnetic field-aligned coordinates, an approach that automatically takes care of the directionality of the diffusive coefficients. This approach runs into problems at x-points and at points where there is magnetic re-connection, since this causes local non-alignment. It is therefore useful to consider numerical schemes that are tolerant to the misalignment of the grid with the magnetic field lines, both to improve existing methods and to help open the possibility of applying regular non-aligned grids. To investigate this, in this paper several discretization schemes are developed and applied to the anisotropic heat diffusion equation on a non-aligned grid.
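
    A generic explicit finite-difference update for the anisotropic diffusion equation on a regular, non-aligned grid, illustrating how a misaligned constant tensor mixes parallel and perpendicular diffusion (a sketch only, not one of the schemes developed in the paper; all parameters are made up):

      import numpy as np

      # Constant anisotropic diffusion tensor built from parallel/perpendicular
      # coefficients and a field-line angle alpha (misaligned with the grid).
      d_par, d_perp, alpha = 1.0, 1e-6, np.deg2rad(30.0)
      c, s = np.cos(alpha), np.sin(alpha)
      Dxx = d_par * c * c + d_perp * s * s
      Dyy = d_par * s * s + d_perp * c * c
      Dxy = (d_par - d_perp) * c * s

      n, h = 64, 1.0 / 64
      T = np.zeros((n, n)); T[n // 2, n // 2] = 1.0       # initial hot spot
      dt = 0.1 * h * h / d_par                             # conservative explicit step

      for _ in range(200):
          # central second differences and mixed derivative (periodic via roll)
          Txx = (np.roll(T, -1, 0) - 2 * T + np.roll(T, 1, 0)) / h**2
          Tyy = (np.roll(T, -1, 1) - 2 * T + np.roll(T, 1, 1)) / h**2
          Txy = (np.roll(np.roll(T, -1, 0), -1, 1) - np.roll(np.roll(T, -1, 0), 1, 1)
                 - np.roll(np.roll(T, 1, 0), -1, 1) + np.roll(np.roll(T, 1, 0), 1, 1)) / (4 * h**2)
          T = T + dt * (Dxx * Txx + 2 * Dxy * Txy + Dyy * Tyy)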

  8. Standard Errors for Matrix Correlations.

    Science.gov (United States)

    Ogasawara, Haruhiko

    1999-01-01

    Derives the asymptotic standard errors and intercorrelations for several matrix correlations assuming multivariate normality for manifest variables and derives the asymptotic standard errors of the matrix correlations for two factor-loading matrices. (SLD)

  9. Diffusing diffusivity: Rotational diffusion in two and three dimensions

    Science.gov (United States)

    Jain, Rohit; Sebastian, K. L.

    2017-06-01

    We consider the problem of calculating the probability distribution function (pdf) of angular displacement for rotational diffusion in a crowded, rearranging medium. We use the diffusing diffusivity model and following our previous work on translational diffusion [R. Jain and K. L. Sebastian, J. Phys. Chem. B 120, 3988 (2016)], we show that the problem can be reduced to that of calculating the survival probability of a particle undergoing Brownian motion, in the presence of a sink. We use the approach to calculate the pdf for the rotational motion in two and three dimensions. We also propose new dimensionless, time dependent parameters, α_rot,2D and α_rot,3D, which can be used to analyze the experimental/simulation data to find the extent of deviation from the normal behavior, i.e., constant diffusivity, and obtain explicit analytical expressions for them, within our model.

  10. Diffusion in solids

    International Nuclear Information System (INIS)

    Tiwari, G.P.; Kale, G.B.; Patil, R.V.

    1999-01-01

    The article presents a brief survey of process of diffusion in solids. It is emphasised that the essence of diffusion is the mass transfer through the atomic jumps. To begin with formal equations for diffusion coefficient are presented. This is followed by discussions on mechanisms of diffusion. Except for solutes which form interstitial solid solution, diffusion in majority of cases is mediated through exchange of sites between an atom and its neighbouring vacancy. Various vacancy parameters such as activation volume, correlation factor, mass effect etc are discussed and their role in establishing the mode of diffusion is delineated. The contribution of dislocations and grain boundaries in diffusion process is brought out. The experimental determination of different types of diffusion coefficients are described. Finally, the pervasive nature of diffusion process in number of commercial processes is outlined to show the importance of diffusion studies in materials science and technology. (author)

  11. Electrolyte diffusion in compacted montmorillonite engineered barriers

    International Nuclear Information System (INIS)

    Jahnke, F.M.; Radke, C.J.

    1985-09-01

    The bentonite-based engineered barrier or packing is a proposed component of several designs conceived to dispose of high-level nuclear waste in geologic repositories. Once radionuclides escape the waste package, they must first diffuse through the highly impermeable clay-rich barrier before they reach the host repository. To determine the effectiveness of the packing as a sorption barrier in the transient release period and as a mass-transfer barrier in the steady release period over the geologic time scales involved in nuclear waste disposal, a fundamental understanding of the diffusion of electrolytes in compacted clays is required. We present, and compare with laboratory data, a model quantifying the diffusion rates of cationic cesium and uncharged tritium in compacted montmorillonite clay. Neutral tritium characterizes the geometry (i.e., tortuosity) of the particulate gel. After accounting for cation exchange, we find that surface diffusion is the dominant mechanism of cation transport, with an approximate surface diffusion coefficient of 2 × 10⁻⁶ cm²/s for cesium. This value increases slightly with increasing background ionic strength. The implications of this work for the packing as a migration barrier are twofold. During the transient release period, K_d values are of little importance in retarding ion migration. This is because sorption also gives rise to a surface diffusion path, and it is surface diffusion which controls the diffusion rate of highly sorbing cations in compacted montmorillonite. During the steady release period, the presence of surface diffusion leads to a flux through the packing which is greatly enhanced. In either case, if surface diffusion is neglected, the appropriate diffusion coefficient of ions in compacted packing will be in considerable error relative to current design recommendations. 11 refs., 4 figs., 1 tab
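
    As background, a commonly used way of expressing how sorption and surface diffusion enter the apparent diffusivity of a cation in compacted clay (a standard textbook form, not necessarily the exact model of this report) is:

    \[ D_a \;=\; \frac{\varepsilon D_p + \rho_b K_d D_s}{\varepsilon + \rho_b K_d}, \]

    where D_p is the pore-water diffusivity, D_s the surface diffusivity, \varepsilon the porosity, \rho_b the dry bulk density and K_d the sorption distribution coefficient; when the surface term dominates, D_a tends to D_s and K_d largely cancels, consistent with the abstract's conclusion that K_d values matter little when surface diffusion controls cation transport.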

  12. Error forecasting schemes of error correction at receiver

    International Nuclear Information System (INIS)

    Bhunia, C.T.

    2007-08-01

    To combat error in computer communication networks, ARQ (Automatic Repeat Request) techniques are used. Recently Chakraborty has proposed a simple technique called the packet combining scheme in which error is corrected at the receiver from the erroneous copies. Packet Combining (PC) scheme fails: (i) when bit error locations in erroneous copies are the same and (ii) when multiple bit errors occur. Both these have been addressed recently by two schemes known as Packet Reversed Packet Combining (PRPC) Scheme, and Modified Packet Combining (MPC) Scheme respectively. In the letter, two error forecasting correction schemes are reported, which in combination with PRPC offer higher throughput. (author)
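
    As an illustration of the basic packet-combining idea that these schemes build on (a generic sketch, not Chakraborty's or the letter's exact algorithm): XOR-ing two erroneous copies of the same packet reveals the bit positions in which they disagree, and a correct packet can then be searched among the candidates obtained by flipping subsets of those positions until an integrity check passes. The check() callback and the toy packet below are hypothetical.

      from itertools import chain, combinations

      def candidate_corrections(copy1: int, copy2: int, nbits: int, check):
          # bit positions where the two received copies disagree
          diff = [i for i in range(nbits) if (copy1 ^ copy2) >> i & 1]
          # try flipping every subset of the disagreeing bits in copy1
          for subset in chain.from_iterable(combinations(diff, r) for r in range(len(diff) + 1)):
              candidate = copy1
              for bit in subset:
                  candidate ^= 1 << bit
              if check(candidate):          # e.g., a CRC or checksum verification
                  return candidate
          return None                       # fails if both copies err in the same bit

      # toy usage: 8-bit packet 0b10110100, two received copies each with one bit error
      ok = lambda p: p == 0b10110100        # stand-in for a real integrity check
      print(bin(candidate_corrections(0b10110110, 0b10100100, 8, ok)))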

  13. Evaluating a medical error taxonomy.

    OpenAIRE

    Brixey, Juliana; Johnson, Todd R.; Zhang, Jiajie

    2002-01-01

    Healthcare has been slow in using human factors principles to reduce medical errors. The Center for Devices and Radiological Health (CDRH) recognizes that a lack of attention to human factors during product development may lead to errors that have the potential for patient injury, or even death. In response to the need for reducing medication errors, the National Coordinating Council for Medication Errors Reporting and Prevention (NCC MERP) released the NCC MERP taxonomy that provides a stand...

  14. Uncertainty quantification and error analysis

    Energy Technology Data Exchange (ETDEWEB)

    Higdon, Dave M [Los Alamos National Laboratory; Anderson, Mark C [Los Alamos National Laboratory; Habib, Salman [Los Alamos National Laboratory; Klein, Richard [Los Alamos National Laboratory; Berliner, Mark [OHIO STATE UNIV.; Covey, Curt [LLNL; Ghattas, Omar [UNIV OF TEXAS; Graziani, Carlo [UNIV OF CHICAGO; Seager, Mark [LLNL; Sefcik, Joseph [LLNL; Stark, Philip [UC/BERKELEY; Stewart, James [SNL

    2010-01-01

    UQ studies all sources of error and uncertainty, including: systematic and stochastic measurement error; ignorance; limitations of theoretical models; limitations of numerical representations of those models; limitations on the accuracy and reliability of computations, approximations, and algorithms; and human error. A more precise definition for UQ is suggested below.

  15. Error Patterns in Problem Solving.

    Science.gov (United States)

    Babbitt, Beatrice C.

    Although many common problem-solving errors within the realm of school mathematics have been previously identified, a compilation of such errors is not readily available within learning disabilities textbooks, mathematics education texts, or teacher's manuals for school mathematics texts. Using data on error frequencies drawn from both the Fourth…

  16. Performance, postmodernity and errors

    DEFF Research Database (Denmark)

    Harder, Peter

    2013-01-01

    speaker’s competency (note the –y ending!) reflects adaptation to the community langue, including variations. This reversal of perspective also reverses our understanding of the relationship between structure and deviation. In the heyday of structuralism, it was tempting to confuse the invariant system... with the prestige variety, and conflate non-standard variation with parole/performance and class both as erroneous. Nowadays the anti-structural sentiment of present-day linguistics makes it tempting to confuse the rejection of ideal abstract structure with a rejection of any distinction between grammatical... as deviant from the perspective of function-based structure and discuss to what extent the recognition of a community langue as a source of adaptive pressure may throw light on different types of deviation, including language handicaps and learner errors...

  17. Diffusion archeology for diffusion progression history reconstruction

    OpenAIRE

    Sefer, Emre; Kingsford, Carl

    2015-01-01

    Diffusion through graphs can be used to model many real-world processes, such as the spread of diseases, social network memes, computer viruses, or water contaminants. Often, a real-world diffusion cannot be directly observed while it is occurring — perhaps it is not noticed until some time has passed, continuous monitoring is too costly, or privacy concerns limit data access. This leads to the need to reconstruct how the present state of the diffusion came to be from partial d...

  18. Errors in causal inference: an organizational schema for systematic error and random error.

    Science.gov (United States)

    Suzuki, Etsuji; Tsuda, Toshihide; Mitsuhashi, Toshiharu; Mansournia, Mohammad Ali; Yamamoto, Eiji

    2016-11-01

    To provide an organizational schema for systematic error and random error in estimating causal measures, aimed at clarifying the concept of errors from the perspective of causal inference. We propose to divide systematic error into structural error and analytic error. With regard to random error, our schema shows its four major sources: nondeterministic counterfactuals, sampling variability, a mechanism that generates exposure events and measurement variability. Structural error is defined from the perspective of counterfactual reasoning and divided into nonexchangeability bias (which comprises confounding bias and selection bias) and measurement bias. Directed acyclic graphs are useful to illustrate this kind of error. Nonexchangeability bias implies a lack of "exchangeability" between the selected exposed and unexposed groups. A lack of exchangeability is not a primary concern of measurement bias, justifying its separation from confounding bias and selection bias. Many forms of analytic errors result from the small-sample properties of the estimator used and vanish asymptotically. Analytic error also results from wrong (misspecified) statistical models and inappropriate statistical methods. Our organizational schema is helpful for understanding the relationship between systematic error and random error from a previously less investigated aspect, enabling us to better understand the relationship between accuracy, validity, and precision. Copyright © 2016 Elsevier Inc. All rights reserved.

  19. Error estimation for variational nodal calculations

    International Nuclear Information System (INIS)

    Zhang, H.; Lewis, E.E.

    1998-01-01

    Adaptive grid methods are widely employed in finite element solutions to both solid and fluid mechanics problems. Either the size of the element is reduced (h refinement) or the order of the trial function is increased (p refinement) locally to improve the accuracy of the solution without a commensurate increase in computational effort. Success of these methods requires effective local error estimates to determine those parts of the problem domain where the solution should be refined. Adaptive methods have recently been applied to the spatial variables of the discrete ordinates equations. As a first step in the development of adaptive methods that are compatible with the variational nodal method, the authors examine error estimates for use in conjunction with spatial variables. The variational nodal method lends itself well to p refinement because the space-angle trial functions are hierarchical. Here they examine an error estimator for use with spatial p refinement for the diffusion approximation. Eventually, angular refinement will also be considered using spherical harmonics approximations

  20. Controlling errors in unidosis carts

    Directory of Open Access Journals (Sweden)

    Inmaculada Díaz Fernández

    2010-01-01

    Full Text Available Objective: To identify errors in the unidosis system carts. Method: For two months, the Pharmacy Service controlled medication either returned or missing from the unidosis carts both in the pharmacy and in the wards. Results: Unrevised unidosis carts show 0.9% medication errors (264) versus 0.6% (154) in unidosis carts that had previously been revised. In carts not revised, 70.83% of the errors arise, mainly when setting up the unidosis carts. The rest are due to a lack of stock or unavailability (21.6%), errors in the transcription of medical orders (6.81%) or boxes that had not been emptied previously (0.76%). The errors found in the units correspond to errors in the transcription of the treatment (3.46%), non-receipt of the unidosis copy (23.14%), the patient not taking the medication (14.36%) or being discharged without medication (12.77%), medication not provided by nurses (14.09%), medication withdrawn from the stocks of the unit (14.62%), and errors of the pharmacy service (17.56%). Conclusions: We conclude that unidosis carts need to be revised and that a computerized prescription system is needed to avoid errors in transcription. Discussion: A high percentage of medication errors is caused by human error. If unidosis carts are checked before being sent to hospitalization units, the error rate diminishes to 0.3%.

  1. Prioritising interventions against medication errors

    DEFF Research Database (Denmark)

    Lisby, Marianne; Pape-Larsen, Louise; Sørensen, Ann Lykkegaard

    Authors: Lisby M, Larsen LP, Soerensen AL, Nielsen LP, Mainz J. Title: Prioritising interventions against medication errors – the importance of a definition. Objective: To develop and test a restricted definition of medication errors across health care settings in Denmark. Methods: Medication errors constitute a major quality and safety problem in modern healthcare. However, far from all are clinically important. The prevalence of medication errors ranges from 2-75%, indicating a global problem in defining and measuring these [1]. New cut-off levels focusing on the clinical impact of medication errors are therefore needed. Development of definition: A definition of medication errors including an index of error types for each stage in the medication process was developed from existing terminology and through a modified Delphi-process in 2008. The Delphi panel consisted of 25 interdisciplinary...

  2. Social aspects of clinical errors.

    Science.gov (United States)

    Richman, Joel; Mason, Tom; Mason-Whitehead, Elizabeth; McIntosh, Annette; Mercer, Dave

    2009-08-01

    Clinical errors, whether committed by doctors, nurses or other professions allied to healthcare, remain a sensitive issue requiring open debate and policy formulation in order to reduce them. The literature suggests that the issues underpinning errors made by healthcare professionals involve concerns about patient safety, professional disclosure, apology, litigation, compensation, processes of recording and policy development to enhance quality service. Anecdotally, we are aware of narratives of minor errors, which may well have been covered up and remain officially undisclosed whilst the major errors resulting in damage and death to patients alarm both professionals and public with resultant litigation and compensation. This paper attempts to unravel some of these issues by highlighting the historical nature of clinical errors and drawing parallels to contemporary times by outlining the 'compensation culture'. We then provide an overview of what constitutes a clinical error and review the healthcare professional strategies for managing such errors.

  3. High-throughput ab-initio dilute solute diffusion database.

    Science.gov (United States)

    Wu, Henry; Mayeshiba, Tam; Morgan, Dane

    2016-07-19

    We demonstrate automated generation of diffusion databases from high-throughput density functional theory (DFT) calculations. A total of more than 230 dilute solute diffusion systems in Mg, Al, Cu, Ni, Pd, and Pt host lattices have been determined using multi-frequency diffusion models. We apply a correction method for solute diffusion in alloys using experimental and simulated values of host self-diffusivity. We find good agreement with experimental solute diffusion data, obtaining a weighted activation barrier RMS error of 0.176 eV when excluding magnetic solutes in non-magnetic alloys. The compiled database is the largest collection of consistently calculated ab-initio solute diffusion data in the world.
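
    A small sketch of the kind of weighted RMS error statistic quoted for the activation barriers (hypothetical numbers; the weighting scheme actually used by the authors is not specified here):

      import numpy as np

      # Hypothetical calculated vs. experimental activation barriers Q (eV) and weights.
      q_dft = np.array([1.32, 2.05, 1.77, 0.95])
      q_exp = np.array([1.25, 2.20, 1.70, 1.05])
      w     = np.array([1.0, 0.5, 1.0, 2.0])      # e.g., confidence in each experimental value

      rmse = np.sqrt(np.sum(w * (q_dft - q_exp) ** 2) / np.sum(w))
      print(f"weighted RMS barrier error: {rmse:.3f} eV")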

  4. Spin-diffusions and diffusive molecular dynamics

    Science.gov (United States)

    Farmer, Brittan; Luskin, Mitchell; Plecháč, Petr; Simpson, Gideon

    2017-12-01

    Metastable configurations in condensed matter typically fluctuate about local energy minima at the femtosecond time scale before transitioning between local minima after nanoseconds or microseconds. This vast scale separation limits the applicability of classical molecular dynamics (MD) methods and has spurred the development of a host of approximate algorithms. One recently proposed method is diffusive MD which aims at integrating a system of ordinary differential equations describing the likelihood of occupancy by one of two species, in the case of a binary alloy, while quasistatically evolving the locations of the atoms. While diffusive MD has shown itself to be efficient and to provide agreement with observations, it is fundamentally a model, with unclear connections to classical MD. In this work, we formulate a spin-diffusion stochastic process and show how it can be connected to diffusive MD. The spin-diffusion model couples a classical overdamped Langevin equation to a kinetic Monte Carlo model for exchange amongst the species of a binary alloy. Under suitable assumptions and approximations, spin-diffusion can be shown to lead to diffusive MD type models. The key assumptions and approximations include a well-defined time scale separation, a choice of spin-exchange rates, a low temperature approximation, and a mean field type approximation. We derive several models from different assumptions and show their relationship to diffusive MD. Differences and similarities amongst the models are explored in a simple test problem.
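
    The coupling described pairs an overdamped Langevin equation for atomic positions with a kinetic Monte Carlo species exchange; a minimal sketch of just the overdamped Langevin (Euler-Maruyama) part, with a hypothetical one-dimensional double-well potential standing in for the interatomic forces, is:

      import numpy as np

      def overdamped_langevin(x, force, gamma=1.0, kT=0.1, dt=1e-3, steps=1000, seed=0):
          # Euler-Maruyama integration of dx = (F/gamma) dt + sqrt(2 kT / gamma) dW
          rng = np.random.default_rng(seed)
          for _ in range(steps):
              noise = rng.standard_normal(x.shape)
              x = x + (force(x) / gamma) * dt + np.sqrt(2.0 * kT * dt / gamma) * noise
          return x

      # hypothetical double-well potential U(x) = (x^2 - 1)^2, so F = -dU/dx
      force = lambda x: -4.0 * x * (x * x - 1.0)
      print(overdamped_langevin(np.array([1.0, -1.0, 0.5]), force))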

  5. Tiny Molybdenites Tell Diffusion Tales

    Science.gov (United States)

    Stein, H. J.; Hannah, J. L.

    2014-12-01

    Diffusion invokes micron-scale exchange during crystal growth and dissolution in magma chambers on short time-scales. Fundamental to interpreting such data are assumptions on magma-fluid dynamics at all scales. Nevertheless, elemental diffusion profiles are used to estimate time scales for magma storage, eruption, and recharge. An underutilized timepiece to evaluate diffusion and 3D mobility of magmatic fluids is high-precision Re-Os dating of molybdenite. With spatially unique molybdenite samples from a young ore system (e.g., 1 Ma) and a double Os spike, analytical errors of 1-3 ka unambiguously separate events in time. Re-Os ages show that hydrous shallow magma chambers locally recharge and expel Cu-Mo-Au-silica as superimposed stockwork vein networks at time scales less than a few thousand years [1]. Re-Os ages provide diffusion rates controlled by a dynamic crystal mush, accumulation and expulsion of metalliferous fluid, and magma reorganization after explosive crystallization events. Importantly, this approach has broad application far from ore deposits. Here, we use Re-Os dating of molybdenite to assess time scales for generating and diffusing metals through the deep crust. To maximize opportunity for chemical diffusion, we use a continental-scale Sveconorwegian mylonite zone for the study area. A geologically constrained suite of molybdenite samples was acquired from quarry exposures. Molybdenite, previously unreported, is extremely scarce. Tiny but telling molybdenites include samples from like occurrences to assure geologic accuracy in Re-Os ages. Ages range from mid-Mesoproterozoic to mid-Neoproterozoic, and correspond to early metamorphic dehydration of a regionally widespread biotite-rich gneiss, localized melting of gneiss to form cm-m-scale K-feldspar ± quartz pods, development of vapor-rich, vuggy mm stringers that serve as volatile collection surfaces in felsic leucosomes, and low-angle (relative to foliation) cross-cutting cm-scale quartz veins

  6. Errors in clinical laboratories or errors in laboratory medicine?

    Science.gov (United States)

    Plebani, Mario

    2006-01-01

    Laboratory testing is a highly complex process and, although laboratory services are relatively safe, they are not as safe as they could or should be. Clinical laboratories have long focused their attention on quality control methods and quality assessment programs dealing with analytical aspects of testing. However, a growing body of evidence accumulated in recent decades demonstrates that quality in clinical laboratories cannot be assured by merely focusing on purely analytical aspects. The more recent surveys on errors in laboratory medicine conclude that in the delivery of laboratory testing, mistakes occur more frequently before (pre-analytical) and after (post-analytical) the test has been performed. Most errors are due to pre-analytical factors (46-68.2% of total errors), while a high error rate (18.5-47% of total errors) has also been found in the post-analytical phase. Errors due to analytical problems have been significantly reduced over time, but there is evidence that, particularly for immunoassays, interference may have a serious impact on patients. A description of the most frequent and risky pre-, intra- and post-analytical errors and advice on practical steps for measuring and reducing the risk of errors is therefore given in the present paper. Many mistakes in the Total Testing Process are called "laboratory errors", although these may be due to poor communication, action taken by others involved in the testing process (e.g., physicians, nurses and phlebotomists), or poorly designed processes, all of which are beyond the laboratory's control. Likewise, there is evidence that laboratory information is only partially utilized. A recent document from the International Organization for Standardization (ISO) recommends a new, broader definition of the term "laboratory error" and a classification of errors according to different criteria. In a modern approach to total quality, centered on patients' needs and satisfaction, the risk of errors and mistakes

  7. Errors in abdominal computed tomography

    International Nuclear Information System (INIS)

    Stephens, S.; Marting, I.; Dixon, A.K.

    1989-01-01

    Sixty-nine patients are presented in whom a substantial error was made on the initial abdominal computed tomography report. Certain features of these errors have been analysed. In 30 (43.5%) a lesion was simply not recognised (error of observation); in 39 (56.5%) the wrong conclusions were drawn about the nature of normal or abnormal structures (error of interpretation). The 39 errors of interpretation were more complex; in 7 patients an abnormal structure was noted but interpreted as normal, whereas in 4 a normal structure was thought to represent a lesion. Other interpretive errors included those where the wrong cause for a lesion had been ascribed (24 patients), and those where the abnormality was substantially under-reported (4 patients). Various features of these errors are presented and discussed. Errors were made just as often in relation to small and large lesions. Consultants made as many errors as senior registrar radiologists. It is likely that dual reporting is the best method of avoiding such errors and, indeed, this is widely practised in our unit. (Author). 9 refs.; 5 figs.; 1 tab

  8. Effective Diffusion Coefficients in Coal Chars

    DEFF Research Database (Denmark)

    Johnsson, Jan Erik; Jensen, Anker

    2001-01-01

    Knowledge of effective diffusion coefficients in char particles is important when interpreting experimental reactivity measurements and modeling char combustion or NO and N2O reduction. In this work, NO and N2O reaction with a bituminous coal char was studied in a fixed-bed quartz glass reactor....... In the case of strong pore diffusion limitations, the error in the interpretation of experimental results using the mean pore radius could be a factor of 5 on the intrinsic rate constant. For an average coal char reacting with oxygen at 1300 K, this would be the case for particle sizes larger than about 50...
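
    The factor-of-5 effect mentioned above arises because, under strong pore diffusion limitation, the observed rate constant is the intrinsic one multiplied by an effectiveness factor well below unity. The sketch below uses the classical first-order, spherical-particle Thiele-modulus relation to back an intrinsic rate constant out of an observed one; the formula choice and all numbers are illustrative assumptions, not values from the study, and numpy/scipy are assumed.

      import numpy as np
      from scipy.optimize import brentq

      def eta_sphere(phi):
          """Effectiveness factor for a first-order reaction in a spherical
          particle (classical Thiele-modulus result)."""
          return (3.0 / phi**2) * (phi / np.tanh(phi) - 1.0)

      def intrinsic_rate_constant(k_obs, R, D_eff):
          """Solve k_obs = eta(phi) * k_int with phi = R * sqrt(k_int / D_eff)."""
          def residual(k_int):
              phi = R * np.sqrt(k_int / D_eff)
              return eta_sphere(phi) * k_int - k_obs
          return brentq(residual, k_obs, 1e6 * k_obs)

      # Hypothetical numbers: 100 um particle (R = 50 um), D_eff = 1e-7 m2/s, observed k = 50 1/s
      print(intrinsic_rate_constant(50.0, 50e-6, 1e-7))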

  9. Laboratory errors and patient safety.

    Science.gov (United States)

    Miligy, Dawlat A

    2015-01-01

    Laboratory data are extensively used in medical practice; consequently, laboratory errors have a tremendous impact on patient safety. Therefore, programs designed to identify and reduce laboratory errors, as well as specific strategies, are required to minimize these errors and improve patient safety. The purpose of this paper is to identify part of the commonly encountered laboratory errors throughout our practice in laboratory work, their hazards on patient health care and some measures and recommendations to minimize or to eliminate these errors. Recording the encountered laboratory errors during May 2008 and their statistical evaluation (using simple percent distribution) have been done in the department of laboratory of one of the private hospitals in Egypt. Errors have been classified according to the laboratory phases and according to their implication on patient health. Data obtained out of 1,600 testing procedures revealed that the total number of encountered errors is 14 tests (0.87 percent of total testing procedures). Most of the encountered errors lay in the pre- and post-analytic phases of the testing cycle (representing 35.7 and 50 percent of total errors, respectively), while the number of test errors encountered in the analytic phase represented only 14.3 percent of total errors. About 85.7 percent of total errors were of non-significant implication on patients' health, being detected before test reports had been submitted to the patients. On the other hand, the number of test errors that had already been submitted to patients and reached the physician represented 14.3 percent of total errors. Only 7.1 percent of the errors could have an impact on patient diagnosis. The findings of this study were concomitant with those published from the USA and other countries. This proves that laboratory problems are universal and need general standardization and benchmarking measures. Original being the first data published from Arabic countries that

  10. Diffusion in reactor materials

    International Nuclear Information System (INIS)

    Fedorov, G.B.; Smirnov, E.A.

    1984-01-01

    The monograph contains a brief description of the principles underlying the theory of diffusion, as well as modern methods of studying diffusion. Data on self-diffusion and diffusion of impurities in a nuclear fuel and fissionable materials (uranium, plutonium, thorium, zirconium, titanium, hafnium, niobium, molybdenum, tungsten, beryllium, etc.) is presented. Anomalous diffusion, diffusion of components, and interdiffusion in binary and ternary alloys were examined. The monograph presents the most recent reference material on diffusion. It is intended for a wide range of researchers working in the field of diffusion in metals and alloys and attempting to discover new materials for application in nuclear engineering. It will also be useful for teachers, research scholars and students of physical metallurgy

  11. On the Design of Error-Correcting Ciphers

    Directory of Open Access Journals (Sweden)

    Mathur Chetan Nanjunda

    2006-01-01

    Full Text Available Securing transmission over a wireless network is especially challenging, not only because of the inherently insecure nature of the medium, but also because of the highly error-prone nature of the wireless environment. In this paper, we take a joint encryption-error correction approach to ensure secure and robust communication over the wireless link. In particular, we design an error-correcting cipher (called the high diffusion cipher) and prove bounds on its error-correcting capacity as well as its security. Towards this end, we propose a new class of error-correcting codes (HD-codes) with built-in security features that we use in the diffusion layer of the proposed cipher. We construct an example, 128-bit cipher using the HD-codes, and compare it experimentally with two traditional concatenated systems: (a) AES (Rijndael) followed by Reed-Solomon codes, (b) Rijndael followed by convolutional codes. We show that the HD-cipher is as resistant to linear and differential cryptanalysis as the Rijndael. We also show that any chosen plaintext attack that can be performed on the HD cipher can be transformed into a chosen plaintext attack on the Rijndael cipher. In terms of error correction capacity, the traditional systems using Reed-Solomon codes are comparable to the proposed joint error-correcting cipher and those that use convolutional codes require more data expansion in order to achieve similar error correction as the HD-cipher. The original contributions of this work are (1) design of a new joint error-correction-encryption system, (2) design of a new class of algebraic codes with built-in security criteria, called the high diffusion codes (HD-codes) for use in the HD-cipher, (3) mathematical properties of these codes, (4) methods for construction of the codes, (5) bounds on the error-correcting capacity of the HD-cipher, (6) mathematical derivation of the bound on resistance of HD cipher to linear and differential cryptanalysis, (7) experimental comparison

  12. Dopamine reward prediction error coding.

    Science.gov (United States)

    Schultz, Wolfram

    2016-03-01

    Reward prediction errors consist of the differences between received and predicted rewards. They are crucial for basic forms of learning about rewards and make us strive for more rewards, an evolutionarily beneficial trait. Most dopamine neurons in the midbrain of humans, monkeys, and rodents signal a reward prediction error; they are activated by more reward than predicted (positive prediction error), remain at baseline activity for fully predicted rewards, and show depressed activity with less reward than predicted (negative prediction error). The dopamine signal increases nonlinearly with reward value and codes formal economic utility. Drugs of addiction generate, hijack, and amplify the dopamine reward signal and induce exaggerated, uncontrolled dopamine effects on neuronal plasticity. The striatum, amygdala, and frontal cortex also show reward prediction error coding, but only in subpopulations of neurons. Thus, the important concept of reward prediction errors is implemented in neuronal hardware.
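
    A compact way to see the sign convention described above is the textbook Rescorla-Wagner style update, in which the prediction error is the received reward minus the current prediction. The sketch below is only a generic illustration of that idea, not a model from the paper; the learning rate and reward sequence are arbitrary.

      # Minimal reward-prediction-error update: positive when reward exceeds the
      # prediction, zero when fully predicted, negative when reward is withheld.
      value = 0.0      # current reward prediction
      alpha = 0.2      # learning rate (arbitrary)
      for reward in [1.0, 1.0, 1.0, 0.0, 1.0]:
          prediction_error = reward - value
          value += alpha * prediction_error
          print(f"reward={reward}  prediction_error={prediction_error:+.3f}  new value={value:.3f}")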

  13. Statistical errors in Monte Carlo estimates of systematic errors

    Science.gov (United States)

    Roe, Byron P.

    2007-01-01

    For estimating the effects of a number of systematic errors on a data sample, one can generate Monte Carlo (MC) runs with systematic parameters varied and examine the change in the desired observed result. Two methods are often used. In the unisim method, the systematic parameters are varied one at a time by one standard deviation, each parameter corresponding to a MC run. In the multisim method, each MC run has all of the parameters varied; the amount of variation is chosen from the expected distribution of each systematic parameter, usually assumed to be a normal distribution. The variance of the overall systematic error determination is derived for each of the two methods and comparisons are made between them. If one focuses not on the error in the prediction of an individual systematic error, but on the overall error due to all systematic errors in the error matrix element in data bin m, the number of events needed is strongly reduced because of the averaging effect over all of the errors. For simple models presented here the multisim model was far better if the statistical error in the MC samples was larger than an individual systematic error, while for the reverse case, the unisim model was better. Exact formulas and formulas for the simple toy models are presented so that realistic calculations can be made. The calculations in the present note are valid if the errors are in a linear region. If that region extends sufficiently far, one can have the unisims or multisims correspond to k standard deviations instead of one. This reduces the number of events required by a factor of k². The specific terms unisim and multisim were coined by Peter Meyers and Steve Brice, respectively, for the MiniBooNE experiment. However, the concepts have been developed over time and have been in general use for some time.
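
    The unisim/multisim distinction can be illustrated on a toy linear model, as in the sketch below: the observable responds linearly to k systematic parameters, each MC "run" carries statistical noise, and the total systematic variance is estimated both ways. The model, parameter counts and noise level are invented for illustration only and do not reproduce the note's derivations; numpy is assumed.

      import numpy as np

      rng = np.random.default_rng(1)

      k = 5                                # number of systematic parameters
      coeff = rng.normal(size=k)           # true sensitivities (unknown to the analyst)
      sigma_stat = 0.5                     # statistical error of a single MC run

      def mc_run(params):
          """One 'Monte Carlo run': observable plus a statistical fluctuation."""
          return coeff @ params + sigma_stat * rng.standard_normal()

      nominal = mc_run(np.zeros(k))

      # Unisim: one run per parameter, shifted by +1 sigma
      unisim_shifts = np.array([mc_run(np.eye(k)[i]) - nominal for i in range(k)])
      var_unisim = np.sum(unisim_shifts**2)

      # Multisim: many runs with all parameters thrown from N(0, 1) simultaneously
      throws = rng.standard_normal((200, k))
      multisim_vals = np.array([mc_run(t) for t in throws])
      var_multisim = np.var(multisim_vals, ddof=1) - sigma_stat**2  # subtract statistical part

      print("true systematic variance:", np.sum(coeff**2))
      print("unisim estimate:         ", var_unisim)
      print("multisim estimate:       ", var_multisim)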

  14. Statistical errors in Monte Carlo estimates of systematic errors

    Energy Technology Data Exchange (ETDEWEB)

    Roe, Byron P. [Department of Physics, University of Michigan, Ann Arbor, MI 48109 (United States)]. E-mail: byronroe@umich.edu

    2007-01-01

    For estimating the effects of a number of systematic errors on a data sample, one can generate Monte Carlo (MC) runs with systematic parameters varied and examine the change in the desired observed result. Two methods are often used. In the unisim method, the systematic parameters are varied one at a time by one standard deviation, each parameter corresponding to a MC run. In the multisim method, each MC run has all of the parameters varied; the amount of variation is chosen from the expected distribution of each systematic parameter, usually assumed to be a normal distribution. The variance of the overall systematic error determination is derived for each of the two methods and comparisons are made between them. If one focuses not on the error in the prediction of an individual systematic error, but on the overall error due to all systematic errors in the error matrix element in data bin m, the number of events needed is strongly reduced because of the averaging effect over all of the errors. For simple models presented here the multisim model was far better if the statistical error in the MC samples was larger than an individual systematic error, while for the reverse case, the unisim model was better. Exact formulas and formulas for the simple toy models are presented so that realistic calculations can be made. The calculations in the present note are valid if the errors are in a linear region. If that region extends sufficiently far, one can have the unisims or multisims correspond to k standard deviations instead of one. This reduces the number of events required by a factor of k².

  15. Statistical errors in Monte Carlo estimates of systematic errors

    International Nuclear Information System (INIS)

    Roe, Byron P.

    2007-01-01

    For estimating the effects of a number of systematic errors on a data sample, one can generate Monte Carlo (MC) runs with systematic parameters varied and examine the change in the desired observed result. Two methods are often used. In the unisim method, the systematic parameters are varied one at a time by one standard deviation, each parameter corresponding to a MC run. In the multisim method, each MC run has all of the parameters varied; the amount of variation is chosen from the expected distribution of each systematic parameter, usually assumed to be a normal distribution. The variance of the overall systematic error determination is derived for each of the two methods and comparisons are made between them. If one focuses not on the error in the prediction of an individual systematic error, but on the overall error due to all systematic errors in the error matrix element in data bin m, the number of events needed is strongly reduced because of the averaging effect over all of the errors. For simple models presented here the multisim model was far better if the statistical error in the MC samples was larger than an individual systematic error, while for the reverse case, the unisim model was better. Exact formulas and formulas for the simple toy models are presented so that realistic calculations can be made. The calculations in the present note are valid if the errors are in a linear region. If that region extends sufficiently far, one can have the unisims or multisims correspond to k standard deviations instead of one. This reduces the number of events required by a factor of k².

  16. Diffusion in flowing gas

    International Nuclear Information System (INIS)

    Reus, K.W.

    1979-01-01

    This thesis is concerned with the back-diffusion method of calculating the mutual diffusion coefficient of two gases. The applicability of this method for measuring diffusion coefficients at temperatures up to 1300 K is considered. A further aim of the work was to make a contribution to the description of the interatomic potential energy of noble gases at higher energies as a function of the internuclear distance. This was achieved with the measured diffusion coefficients, especially with those for high temperatures. (Auth.)

  17. Diffusion Under Geometrical Constraint

    OpenAIRE

    Ogawa, Naohisa

    2014-01-01

    Here we discuss the diffusion of particles in a curved tube. This kind of transport phenomenon is observed in biological cells and porous media. To solve such a problem, we discuss the three dimensional diffusion equation with a confining wall forming a thinner tube. We find that the curvature appears in an effective diffusion coefficient for such a quasi-one-dimensional system. As an application to the higher dimensional case, we discuss the diffusion in a curved surface with ...

  18. Architecture design for soft errors

    CERN Document Server

    Mukherjee, Shubu

    2008-01-01

    This book provides a comprehensive description of the architectural techniques to tackle the soft error problem. It covers the new methodologies for quantitative analysis of soft errors as well as novel, cost-effective architectural techniques to mitigate them. To provide readers with a better grasp of the broader problem definition and solution space, this book also delves into the physics of soft errors and reviews current circuit and software mitigation techniques.

  19. Dopamine reward prediction error coding

    OpenAIRE

    Schultz, Wolfram

    2016-01-01

    Reward prediction errors consist of the differences between received and predicted rewards. They are crucial for basic forms of learning about rewards and make us strive for more rewards, an evolutionarily beneficial trait. Most dopamine neurons in the midbrain of humans, monkeys, and rodents signal a reward prediction error; they are activated by more reward than predicted (positive prediction error), remain at baseline activity for fully predicted rewards, and show depressed activity with less...

  20. Diffuse ceiling ventilation

    DEFF Research Database (Denmark)

    Zhang, Chen

    Diffuse ceiling ventilation is an innovative ventilation concept where the suspended ceiling serves as air diffuser to supply fresh air into the room. Compared with conventional ventilation systems, diffuse ceiling ventilation can significantly reduce or even eliminate draught risk due to the low...

  1. Identifying Error in AUV Communication

    National Research Council Canada - National Science Library

    Coleman, Joseph; Merrill, Kaylani; O'Rourke, Michael; Rajala, Andrew G; Edwards, Dean B

    2006-01-01

    Mine Countermeasures (MCM) involving Autonomous Underwater Vehicles (AUVs) are especially susceptible to error, given the constraints on underwater acoustic communication and the inconstancy of the underwater communication channel...

  2. Human Errors in Decision Making

    OpenAIRE

    Mohamad, Shahriari; Aliandrina, Dessy; Feng, Yan

    2005-01-01

    The aim of this paper was to identify human errors in the decision-making process. The study was focused on the research question: what could be the human error, as a potential cause of decision failure, in the evaluation of alternatives in the process of decision making. Two case studies were selected from the literature and analyzed to find the human errors that contribute to decision failure. Then the analysis of human errors was linked with mental models in the evaluation-of-alternatives step. The results o...

  3. Finding beam focus errors automatically

    International Nuclear Information System (INIS)

    Lee, M.J.; Clearwater, S.H.; Kleban, S.D.

    1987-01-01

    An automated method for finding beam focus errors using an optimization program called COMFORT-PLUS is described. The procedure has been used to find the beam focus errors for two damping rings at the SLAC Linear Collider. The program is to be used as an off-line program to analyze actual measured data for any SLC system. A limitation on the application of this procedure is found to be that it depends on the magnitude of the machine errors. Another is that the program is not totally automated since the user must decide a priori where to look for errors

  4. Heuristic errors in clinical reasoning.

    Science.gov (United States)

    Rylander, Melanie; Guerrasio, Jeannette

    2016-08-01

    Errors in clinical reasoning contribute to patient morbidity and mortality. The purpose of this study was to determine the types of heuristic errors made by third-year medical students and first-year residents. This study surveyed approximately 150 clinical educators inquiring about the types of heuristic errors they observed in third-year medical students and first-year residents. Anchoring and premature closure were the two most common errors observed amongst third-year medical students and first-year residents. There was no difference in the types of errors observed in the two groups. Errors in clinical reasoning contribute to patient morbidity and mortality Clinical educators perceived that both third-year medical students and first-year residents committed similar heuristic errors, implying that additional medical knowledge and clinical experience do not affect the types of heuristic errors made. Further work is needed to help identify methods that can be used to reduce heuristic errors early in a clinician's education. © 2015 John Wiley & Sons Ltd.

  5. A Hybrid Unequal Error Protection / Unequal Error Resilience ...

    African Journals Online (AJOL)

    The quality layers are then assigned an Unequal Error Resilience to synchronization loss by unequally allocating the number of headers available for synchronization to them. Following that Unequal Error Protection against channel noise is provided to the layers by the use of Rate Compatible Punctured Convolutional ...

  6. Improved diffusion coefficients generated from Monte Carlo codes

    International Nuclear Information System (INIS)

    Herman, B. R.; Forget, B.; Smith, K.; Aviles, B. N.

    2013-01-01

    Monte Carlo codes are becoming more widely used for reactor analysis. Some of these applications involve the generation of diffusion theory parameters including macroscopic cross sections and diffusion coefficients. Two approximations used to generate diffusion coefficients are assessed using the Monte Carlo code MC21. The first is the method of homogenization; whether to weight either fine-group transport cross sections or fine-group diffusion coefficients when collapsing to few-group diffusion coefficients. The second is a fundamental approximation made to the energy-dependent P1 equations to derive the energy-dependent diffusion equations. Standard Monte Carlo codes usually generate a flux-weighted transport cross section with no correction to the diffusion approximation. Results indicate that this causes noticeable tilting in reconstructed pin powers in simple test lattices with L2 norm error of 3.6%. This error is reduced significantly to 0.27% when weighting fine-group diffusion coefficients by the flux and applying a correction to the diffusion approximation. Noticeable tilting in reconstructed fluxes and pin powers was reduced when applying these corrections. (authors)
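
    The first approximation discussed above, the choice of what to flux-weight when collapsing to few groups, can be made concrete with a small sketch. The fine-group fluxes and transport cross sections below are invented numbers, and the sketch ignores the correction to the diffusion approximation that the abstract also discusses; numpy is assumed.

      import numpy as np

      # Hypothetical fine-group data: relative fluxes and transport cross sections (1/cm)
      phi = np.array([0.2, 0.5, 0.8, 1.0, 0.6])
      sigma_tr = np.array([0.30, 0.25, 0.22, 0.20, 0.35])
      D_fine = 1.0 / (3.0 * sigma_tr)            # fine-group diffusion coefficients

      # Option A: flux-weight the transport cross section, then form D = 1/(3*Sigma_tr)
      sigma_tr_few = np.sum(phi * sigma_tr) / np.sum(phi)
      D_from_sigma = 1.0 / (3.0 * sigma_tr_few)

      # Option B: flux-weight the fine-group diffusion coefficients directly
      D_from_D = np.sum(phi * D_fine) / np.sum(phi)

      print("D from collapsed transport XS:", D_from_sigma)
      print("D from flux-weighted D:       ", D_from_D)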

  7. Error studies for SNS Linac. Part 1: Transverse errors

    International Nuclear Information System (INIS)

    Crandall, K.R.

    1998-01-01

    The SNS linac consists of a radio-frequency quadrupole (RFQ), a drift-tube linac (DTL), a coupled-cavity drift-tube linac (CCDTL) and a coupled-cavity linac (CCL). The RFQ and DTL are operated at 402.5 MHz; the CCDTL and CCL are operated at 805 MHz. Between the RFQ and DTL is a medium-energy beam-transport system (MEBT). This error study is concerned with the DTL, CCDTL and CCL, and each will be analyzed separately. In fact, the CCL is divided into two sections, and each of these will be analyzed separately. The types of errors considered here are those that affect the transverse characteristics of the beam. The errors that cause the beam center to be displaced from the linac axis are quad displacements and quad tilts. The errors that cause mismatches are quad gradient errors and quad rotations (roll)

  8. Error begat error: design error analysis and prevention in social infrastructure projects.

    Science.gov (United States)

    Love, Peter E D; Lopez, Robert; Edwards, David J; Goh, Yang M

    2012-09-01

    Design errors contribute significantly to cost and schedule growth in social infrastructure projects and to engineering failures, which can result in accidents and loss of life. Despite considerable research that has addressed error causation in construction projects, design errors remain prevalent. This paper identifies the underlying conditions that contribute to design errors in social infrastructure projects (e.g. hospitals, education, law and order type buildings). A systemic model of error causation is propagated and subsequently used to develop a learning framework for design error prevention. The research suggests that a multitude of strategies should be adopted in congruence to prevent design errors from occurring and so ensure that safety and project performance are ameliorated. Copyright © 2011. Published by Elsevier Ltd.

  9. Thermal diffusion (1963); Diffusion thermique (1963)

    Energy Technology Data Exchange (ETDEWEB)

    Lemarechal, A [Commissariat a l' Energie Atomique, Saclay (France). Centre d' Etudes Nucleaires

    1963-07-01

    This report brings together the essential principles of thermal diffusion in the liquid and gaseous phases. The macroscopic and molecular aspects of the thermal diffusion constant are reviewed, as well as the various measurement methods; the most important developments however concern the operation of the CLUSIUS and DICKEL thermo-gravitational column and its applications. (author)

  10. Fractional diffusion equations and anomalous diffusion

    CERN Document Server

    Evangelista, Luiz Roberto

    2018-01-01

    Anomalous diffusion has been detected in a wide variety of scenarios, from fractal media, systems with memory, transport processes in porous media, to fluctuations of financial markets, tumour growth, and complex fluids. Providing a contemporary treatment of this process, this book examines the recent literature on anomalous diffusion and covers a rich class of problems in which surface effects are important, offering detailed mathematical tools of usual and fractional calculus for a wide audience of scientists and graduate students in physics, mathematics, chemistry and engineering. Including the basic mathematical tools needed to understand the rules for operating with the fractional derivatives and fractional differential equations, this self-contained text presents the possibility of using fractional diffusion equations with anomalous diffusion phenomena to propose powerful mathematical models for a large variety of fundamental and practical problems in a fast-growing field of research.
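
    For readers unfamiliar with the terminology, a representative time-fractional diffusion equation of the kind treated in such texts is shown below; this is one common convention (a Caputo time derivative of order 0 < α < 1 with generalized diffusion coefficient K_α), not a formula quoted from the book.

      \frac{\partial^{\alpha} u(x,t)}{\partial t^{\alpha}}
        = K_{\alpha}\, \frac{\partial^{2} u(x,t)}{\partial x^{2}},
      \qquad 0 < \alpha < 1 .

    Setting α = 1 recovers ordinary diffusion, while α < 1 yields subdiffusive spreading with mean-squared displacement growing like t^α.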

  11. An energy-dispersive X-ray monochromator for measurements in the soft X-ray spectra: design, construction and first measurements. Ein energiedispersiver Roentgenmonochromator mit der Moeglichkeit von Messungen im weichen Roentgenbereich: Entwurf, Aufbau und erste Messungen

    Energy Technology Data Exchange (ETDEWEB)

    Steil, S.

    1993-12-01

    An Energy-Dispersive X-ray Monochromator (EDM) for time-resolved X-ray absorption spectroscopy was built in the Synchrotron radiation laboratory at the 3.5 GeV ELectron Stretcher and Accelerator (ELSA). Bragg angles up to 70° and a specially designed vacuum system allow measurements down to an energy of 2.149 keV (P K-edge) with a Si(111)-crystal. Compared to a standard double crystal monochromator and for an EXAFS spectrum at the Cu K-edge at 8.979 keV for concentrated samples, the EDM boosts time resolution by 3 orders of magnitude. The time resolution increases by a factor of 50 for a XANES spectrum at the S K-edge at 2.472 keV for a rubber sample with 4% sulfur. The energy resolution of the EDM is limited by the Darwin width Ω of the Bragg crystal. The harmonics in the 'monochromatized' beam, which increase towards lower energies, could be nearly eliminated by using a quartz mirror. The spherical aberration of the focus was described theoretically for a cylindrically bent crystal and compared with measurements. In a first time-resolved measurement at the S K-edge, which comprised about 120 spectra taken in 40 minutes, the thermal ageing of a rubber sample was investigated to demonstrate the performance of the monochromator. (orig.)

  12. Comparison of TXRF detection limits for low Z elements in different beam geometries at the PTB monochromator beamline for undulator radiation at Bessy II

    International Nuclear Information System (INIS)

    Beckhoff, B.; Ulm, G.; Pepponi, G.; Streli, C.; Wobrauschek, P.; Fabry, L.; Pahlke, S.

    2000-01-01

    A set of initial TXRF experiments was conducted at the PTB plane grating monochromator beamline for undulator radiation at the electron storage ring BESSY II, allowing for exciting energies between 0.1 keV and 1.9 keV. Here, the lower limits of detection of TXRF analysis were investigated for some low Z elements such as C, N, O, Al, Mg and Na in two different detection geometries for various excitation modes. Compared to ordinary XRF geometries involving large incident angles, the TXRF variant offers, also at low excitation energies, drastically reduced background contributions due to the small penetration depth caused by the total reflection of the incident beam at the polished surface of a flat specimen carrier such as a silicon wafer. For the sake of an application-oriented TXRF approach, droplet samples on Si wafer surfaces were prepared by Wacker Siltronic and investigated in the TXRF irradiation chamber of the Atominstitut offering a semiconductor detector with a thin entrance window that was only 300 nm thick. (author)

  13. Compact and Light-Weight Solar Spaceflight Instrument Designs Utilizing Newly Developed Miniature Free-Standing Zone Plates: EUV Radiometer and Limb-Scanning Monochromator

    Science.gov (United States)

    Seely, J. F.; McMullin, D. R.; Bremer, J.; Chang, C.; Sakdinawat, A.; Jones, A. R.; Vest, R.

    2014-12-01

    Two solar instrument designs are presented that utilize newly developed miniature free-standing zone plates having interconnected Au opaque bars and no support membrane resulting in excellent long-term stability in space. Both instruments are based on a zone plate having 4 mm outer diameter and 1 to 2 degree field of view. The zone plate collects EUV radiation and focuses a narrow bandpass through a pinhole aperture and onto a silicon photodiode detector. As a miniature radiometer, EUV irradiance is accurately determined from the zone plate efficiency and the photodiode responsivity that are calibrated at the NIST SURF synchrotron facility. The EUV radiometer is pointed to the Sun and measures the absolute solar EUV irradiance in high time cadence suitable for solar physics and space weather applications. As a limb-scanning instrument in low earth orbit, a miniature zone-plate monochromator measures the extinction of solar EUV radiation by scattering through the upper atmosphere which is a measure of the variability of the ionosphere. Both instruments are compact and light-weight and are attractive for CubeSats and other missions where resources are extremely limited.

  14. Design of mirror and monochromator crystals for a high-resolution multiwavelength anomalous diffraction beam line on a bending magnet at the ESRF

    International Nuclear Information System (INIS)

    Roth, M.; Ferrer, J.; Simon, J.; Geissler, E.

    1992-01-01

    High intensity for diffraction experiments with high-energy resolution on an intense x-ray beam, like the bending magnet beam lines at the ESRF, requires a strict control of the curvature of the optical elements placed in the beam for geometrical focusing and for wavelength monochromatization. Unwanted curvatures can come from nonuniform and variable heating of the optical elements produced by the absorption of x rays. To design the CRG/D2AM beam line described in the accompanying paper, some new techniques were developed to control these effects based on geometrical, i.e., topological, considerations. (1) Cooling of the entrance mirror: longitudinal curvature can be strongly reduced by cooling the mirror from the sides (and not from the rear) and only near the reflecting surface (i.e., not over the whole lateral surface). The cooling can be achieved for instance with an isothermal liquid Ga eutectic bath. (2) Cooling of the first single-crystal Si monochromator: because of the size of the crystal, only cooling from the rear is conceivable in this case. It can be shown by calculation that the curvature due to the front-to-rear gradient can be exactly compensated by the thermal expansion of a metallic layer at the rear of the crystal, having a larger expansion coefficient than Si

  15. Luminescent zinc(ii) and copper(i) complexes for high-performance solution-processed monochromic and white organic light-emitting devices.

    Science.gov (United States)

    Cheng, Gang; So, Gary Kwok-Ming; To, Wai-Pong; Chen, Yong; Kwok, Chi-Chung; Ma, Chensheng; Guan, Xiangguo; Chang, Xiaoyong; Kwok, Wai-Ming; Che, Chi-Ming

    2015-08-01

    The synthesis and spectroscopic properties of luminescent tetranuclear zinc(ii) complexes of substituted 7-azaindoles and a series of luminescent copper(i) complexes containing the 7,8-bis(diphenylphosphino)-7,8-dicarba-nido-undecaborate ligand are described. These complexes are stable towards air and moisture. Thin film samples of the luminescent copper(i) complexes in 2,6-dicarbazolo-1,5-pyridine and zinc(ii) complexes in poly(methyl methacrylate) showed emission quantum yields of up to 0.60 (for Cu-3) and 0.96 (for Zn-1), respectively. Their photophysical properties were examined by ultrafast time-resolved emission spectroscopy, temperature dependent emission lifetime measurements and density functional theory calculations. Monochromic blue and orange solution-processed OLEDs with these Zn(ii) and Cu(i) complexes as light-emitting dopants have been fabricated, respectively. Maximum external quantum efficiency (EQE) of 5.55% and Commission Internationale de l'Eclairage (CIE) coordinates of (0.16, 0.19) were accomplished with the optimized Zn-1-OLED while these values were, respectively, 15.64% and (0.48, 0.51) for the optimized Cu-3-OLED. Solution-processed white OLEDs having maximum EQE of 6.88%, CIE coordinates of (0.42, 0.44), and colour rendering index of 81 were fabricated by using these luminescent Zn(ii) and Cu(i) complexes as blue and orange light-emitting dopant materials, respectively.

  16. Dual Processing and Diagnostic Errors

    Science.gov (United States)

    Norman, Geoff

    2009-01-01

    In this paper, I review evidence from two theories in psychology relevant to diagnosis and diagnostic errors. "Dual Process" theories of thinking, frequently mentioned with respect to diagnostic error, propose that categorization decisions can be made with either a fast, unconscious, contextual process called System 1 or a slow, analytical,…

  17. Barriers to medical error reporting

    Directory of Open Access Journals (Sweden)

    Jalal Poorolajal

    2015-01-01

    Full Text Available Background: This study was conducted to explore the prevalence of medical error underreporting and associated barriers. Methods: This cross-sectional study was performed from September to December 2012. Five hospitals, affiliated with Hamadan University of Medical Sciences, in Hamedan, Iran were investigated. A self-administered questionnaire was used for data collection. Participants consisted of physicians, nurses, midwives, residents, interns, and staff of radiology and laboratory departments. Results: Overall, 50.26% of subjects had committed but not reported medical errors. The main reasons mentioned for underreporting were lack of an effective medical error reporting system (60.0%), lack of a proper reporting form (51.8%), lack of peer support for a person who has committed an error (56.0%), and lack of personal attention to the importance of medical errors (62.9%). The rate of committing medical errors was higher in men (71.4%), age of 40-50 years (67.6%), less-experienced personnel (58.7%), educational level of MSc (87.5%), and staff of the radiology department (88.9%). Conclusions: This study outlined the main barriers to reporting medical errors and associated factors that may be helpful for healthcare organizations in improving medical error reporting as an essential component for patient safety enhancement.

  18. Diffusion in molybdenum disilicide

    International Nuclear Information System (INIS)

    Salamon, M.; Mehrer, H.

    2005-01-01

    The diffusion behaviour of the high-temperature material molybdenum disilicide (MoSi₂) was completely unknown until recently. In this paper we present studies of Mo self-diffusion and compare our present results with our already published studies of Si and Ge diffusion in MoSi₂. Self-diffusion of molybdenum in monocrystalline MoSi₂ was studied by the radiotracer technique using the radioisotope ⁹⁹Mo. Deposition of the radiotracer and serial sectioning after the diffusion anneals to determine the concentration-depth profiles was performed using a sputtering device. Diffusion of Mo is a very slow process. In the entire temperature region investigated (1437 to 2173 K), the ⁹⁹Mo diffusivities in both principal directions of the tetragonal MoSi₂ crystals obey Arrhenius laws, where the diffusion perpendicular to the tetragonal axis is faster by two to three orders of magnitude than parallel to it. The activation enthalpies for diffusion perpendicular and parallel to the tetragonal axis are Q⊥ = 468 kJ mol⁻¹ (4.85 eV) and Q∥ = 586 kJ mol⁻¹ (6.07 eV), respectively. Diffusion of Si and its homologous element Ge is fast and is mediated by thermal vacancies of the Si sublattice of MoSi₂. The diffusion of Mo is by several orders of magnitude slower than the diffusion of Si and Ge. This large difference suggests that Si and Mo diffusion are decoupled and that the diffusion of Mo likely takes place via vacancies on the Mo sublattice. (orig.)

  19. A theory of human error

    Science.gov (United States)

    Mcruer, D. T.; Clement, W. F.; Allen, R. W.

    1981-01-01

    Human errors tend to be treated in terms of clinical and anecdotal descriptions, from which remedial measures are difficult to derive. Correction of the sources of human error requires an attempt to reconstruct underlying and contributing causes of error from the circumstantial causes cited in official investigative reports. A comprehensive analytical theory of the cause-effect relationships governing propagation of human error is indispensable to a reconstruction of the underlying and contributing causes. A validated analytical theory of the input-output behavior of human operators involving manual control, communication, supervisory, and monitoring tasks which are relevant to aviation, maritime, automotive, and process control operations is highlighted. This theory of behavior, both appropriate and inappropriate, provides an insightful basis for investigating, classifying, and quantifying the needed cause-effect relationships governing propagation of human error.

  20. Correcting AUC for Measurement Error.

    Science.gov (United States)

    Rosner, Bernard; Tworoger, Shelley; Qiu, Weiliang

    2015-12-01

    Diagnostic biomarkers are used frequently in epidemiologic and clinical work. The ability of a diagnostic biomarker to discriminate between subjects who develop disease (cases) and subjects who do not (controls) is often measured by the area under the receiver operating characteristic curve (AUC). The diagnostic biomarkers are usually measured with error. Ignoring measurement error can cause biased estimation of AUC, which results in misleading interpretation of the efficacy of a diagnostic biomarker. Several methods have been proposed to correct AUC for measurement error, most of which required the normality assumption for the distributions of diagnostic biomarkers. In this article, we propose a new method to correct AUC for measurement error and derive approximate confidence limits for the corrected AUC. The proposed method does not require the normality assumption. Both real data analyses and simulation studies show good performance of the proposed measurement error correction method.
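
    The attenuation that motivates the correction can be seen in a short simulation: a biomarker measured with classical additive error yields a smaller empirical AUC than the error-free biomarker. The sketch below is only that illustration, not the correction method proposed in the article; the distributions and error variance are invented, and numpy is assumed.

      import numpy as np

      rng = np.random.default_rng(2)

      def auc(cases, controls):
          """Empirical AUC = P(case score > control score), via the Mann-Whitney statistic."""
          diffs = cases[:, None] - controls[None, :]
          return np.mean(diffs > 0) + 0.5 * np.mean(diffs == 0)

      n = 2000
      x_cases = rng.normal(1.0, 1.0, n)       # true biomarker, cases
      x_controls = rng.normal(0.0, 1.0, n)    # true biomarker, controls
      sigma_u = 1.0                           # measurement error SD (hypothetical)

      w_cases = x_cases + rng.normal(0.0, sigma_u, n)
      w_controls = x_controls + rng.normal(0.0, sigma_u, n)

      print("AUC, error-free biomarker:", auc(x_cases, x_controls))   # roughly 0.76 here
      print("AUC, biomarker with error:", auc(w_cases, w_controls))   # attenuated toward 0.5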

  1. Cognitive aspect of diagnostic errors.

    Science.gov (United States)

    Phua, Dong Haur; Tan, Nigel C K

    2013-01-01

    Diagnostic errors can result in tangible harm to patients. Despite our advances in medicine, the mental processes required to make a diagnosis exhibits shortcomings, causing diagnostic errors. Cognitive factors are found to be an important cause of diagnostic errors. With new understanding from psychology and social sciences, clinical medicine is now beginning to appreciate that our clinical reasoning can take the form of analytical reasoning or heuristics. Different factors like cognitive biases and affective influences can also impel unwary clinicians to make diagnostic errors. Various strategies have been proposed to reduce the effect of cognitive biases and affective influences when clinicians make diagnoses; however evidence for the efficacy of these methods is still sparse. This paper aims to introduce the reader to the cognitive aspect of diagnostic errors, in the hope that clinicians can use this knowledge to improve diagnostic accuracy and patient outcomes.

  2. Maximum likelihood estimation for integrated diffusion processes

    DEFF Research Database (Denmark)

    Baltazar-Larios, Fernando; Sørensen, Michael

    We propose a method for obtaining maximum likelihood estimates of parameters in diffusion models when the data is a discrete time sample of the integral of the process, while no direct observations of the process itself are available. The data are, moreover, assumed to be contaminated...... EM-algorithm to obtain maximum likelihood estimates of the parameters in the diffusion model. As part of the algorithm, we use a recent simple method for approximate simulation of diffusion bridges. In simulation studies for the Ornstein-Uhlenbeck process and the CIR process the proposed method works...... by measurement errors. Integrated volatility is an example of this type of observations. Another example is ice-core data on oxygen isotopes used to investigate paleo-temperatures. The data can be viewed as incomplete observations of a model with a tractable likelihood function. Therefore we propose a simulated...
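
    To make the data structure concrete: the observations are noisy integrals of the diffusion over sampling intervals, not values of the process itself. The sketch below simulates such data for an Ornstein-Uhlenbeck process with a simple Euler scheme; all parameter values are invented, it does not implement the paper's EM estimator, and numpy is assumed.

      import numpy as np

      rng = np.random.default_rng(3)

      # Ornstein-Uhlenbeck: dX_t = -theta*(X_t - mu) dt + sigma dW_t  (parameters hypothetical)
      theta, mu, sigma = 1.0, 0.0, 0.5
      dt_fine = 1e-3           # fine Euler grid used to build the path
      obs_interval = 1.0       # data are integrals of X over intervals of this length
      n_obs = 50
      tau = 0.05               # SD of the measurement error on each integral

      steps_per_obs = int(obs_interval / dt_fine)
      x = 0.0
      observations = []
      for _ in range(n_obs):
          integral = 0.0
          for _ in range(steps_per_obs):
              integral += x * dt_fine                    # accumulate the time integral of X
              x += -theta * (x - mu) * dt_fine + sigma * np.sqrt(dt_fine) * rng.standard_normal()
          observations.append(integral + tau * rng.standard_normal())   # noisy integral

      print(np.round(observations[:5], 3))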

  3. Errors, error detection, error correction and hippocampal-region damage: data and theories.

    Science.gov (United States)

    MacKay, Donald G; Johnson, Laura W

    2013-11-01

    This review and perspective article outlines 15 observational constraints on theories of errors, error detection, and error correction, and their relation to hippocampal-region (HR) damage. The core observations come from 10 studies with H.M., an amnesic with cerebellar and HR damage but virtually no neocortical damage. Three studies examined the detection of errors planted in visual scenes (e.g., a bird flying in a fish bowl in a school classroom) and sentences (e.g., I helped themselves to the birthday cake). In all three experiments, H.M. detected reliably fewer errors than carefully matched memory-normal controls. Other studies examined the detection and correction of self-produced errors, with controls for comprehension of the instructions, impaired visual acuity, temporal factors, motoric slowing, forgetting, excessive memory load, lack of motivation, and deficits in visual scanning or attention. In these studies, H.M. corrected reliably fewer errors than memory-normal and cerebellar controls, and his uncorrected errors in speech, object naming, and reading aloud exhibited two consistent features: omission and anomaly. For example, in sentence production tasks, H.M. omitted one or more words in uncorrected encoding errors that rendered his sentences anomalous (incoherent, incomplete, or ungrammatical) reliably more often than controls. Besides explaining these core findings, the theoretical principles discussed here explain H.M.'s retrograde amnesia for once familiar episodic and semantic information; his anterograde amnesia for novel information; his deficits in visual cognition, sentence comprehension, sentence production, sentence reading, and object naming; and effects of aging on his ability to read isolated low frequency words aloud. These theoretical principles also explain a wide range of other data on error detection and correction and generate new predictions for future test. Copyright © 2013 Elsevier Ltd. All rights reserved.

  4. Human errors in NPP operations

    International Nuclear Information System (INIS)

    Sheng Jufang

    1993-01-01

    Based on the operational experiences of nuclear power plants (NPPs), the importance of studying human performance problems is described. Statistical analysis of the significance and frequency of various root-causes and error-modes from a large number of human-error-related events demonstrates that defects in operation/maintenance procedures, working place factors, communication and training practices are primary root-causes, while omission, transposition and quantitative mistakes are the most frequent among the error-modes. Recommendations about domestic research on human performance problems in NPPs are suggested

  5. Linear network error correction coding

    CERN Document Server

    Guang, Xuan

    2014-01-01

    There are two main approaches in the theory of network error correction coding. In this SpringerBrief, the authors summarize some of the most important contributions following the classic approach, which represents messages by sequences, similar to algebraic coding, and also briefly discuss the main results following the other approach, which uses the theory of rank metric codes for network error correction by representing messages by subspaces. This book starts by establishing the basic linear network error correction (LNEC) model and then characterizes two equivalent descriptions. Distances an

  6. Metric diffusion along foliations

    CERN Document Server

    Walczak, Szymon M

    2017-01-01

    Up-to-date research in metric diffusion along compact foliations is presented in this book. Beginning with fundamentals from optimal transportation theory and the theory of foliations, this book moves on to cover the Wasserstein distance, the Kantorovich Duality Theorem, and the metrization of the weak topology by the Wasserstein distance. Metric diffusion is defined, the topology of the metric space is studied and the limits of diffused metrics along compact foliations are discussed. Essentials on foliations, holonomy, heat diffusion, and compact foliations are detailed and vital technical lemmas are proved to aid understanding. Graduate students and researchers in geometry, topology and dynamics of foliations and laminations will find this supplement useful as it presents facts about the metric diffusion along non-compact foliations and provides a full description of the limit for metrics diffused along foliations with at least one compact leaf in two dimensions.

  7. Error Analysis of Variations on Larsen's Benchmark Problem

    International Nuclear Information System (INIS)

    Azmy, YY

    2001-01-01

    Error norms for three variants of Larsen's benchmark problem are evaluated using three numerical methods for solving the discrete ordinates approximation of the neutron transport equation in multidimensional Cartesian geometry. The three variants of Larsen's test problem are concerned with the incoming flux boundary conditions: unit incoming flux on the left and bottom edges (Larsen's configuration); unit incoming flux only on the left edge; unit incoming flux only on the bottom edge. The three methods considered are the Diamond Difference (DD) method, and the constant-approximation versions of the Arbitrarily High Order Transport method of the Nodal type (AHOT-N), and of the Characteristic (AHOT-C) type. The cell-wise error is computed as the difference between the cell-averaged flux computed by each method and the exact value, then the L₁, L₂, and L∞ error norms are calculated. The results of this study demonstrate that while integral error norms, i.e., L₁, L₂, converge to zero with mesh refinement, the pointwise L∞ norm does not, due to solution discontinuity across the singular characteristic. Little difference is observed between the error norm behavior of the three methods considered in spite of the fact that AHOT-C is locally exact, suggesting that numerical diffusion across the singular characteristic is the major source of error on the global scale. However, AHOT-C possesses a given accuracy in a larger fraction of computational cells than DD
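
    For concreteness, the three norms can be formed from the vector of cell-wise errors as in the sketch below. The numbers are invented (the large entry mimics a cell crossed by the singular characteristic) and one common normalisation convention, averaging over cells, is assumed; this is not the benchmark data.

      import numpy as np

      # Hypothetical cell-wise errors: computed cell-averaged flux minus exact value
      e = np.array([1.0e-3, -2.0e-3, 5.0e-2, -4.0e-3, 2.0e-3])

      print("L1   norm:", np.mean(np.abs(e)))       # integral-type norms shrink with mesh refinement
      print("L2   norm:", np.sqrt(np.mean(e**2)))
      print("Linf norm:", np.max(np.abs(e)))        # controlled by the discontinuity; need not converge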

  8. Correlated diffusion imaging

    International Nuclear Information System (INIS)

    Wong, Alexander; Glaister, Jeffrey; Cameron, Andrew; Haider, Masoom

    2013-01-01

    Prostate cancer is one of the leading causes of cancer death in the male population. Fortunately, the prognosis is excellent if detected at an early stage. Hence, the detection and localization of prostate cancer is crucial for diagnosis, as well as treatment via targeted focal therapy. New imaging techniques can potentially be invaluable tools for improving prostate cancer detection and localization. In this study, we introduce a new form of diffusion magnetic resonance imaging called correlated diffusion imaging, where the tissue being imaged is characterized by the joint correlation of diffusion signal attenuation across multiple gradient pulse strengths and timings. By taking into account signal attenuation at different water diffusion motion sensitivities, correlated diffusion imaging can provide improved delineation between cancerous tissue and healthy tissue when compared to existing diffusion imaging modalities. Quantitative evaluation using receiver operating characteristic (ROC) curve analysis, tissue class separability analysis, and visual assessment by an expert radiologist were performed to study correlated diffusion imaging for the task of prostate cancer diagnosis. These results are compared with that obtained using T2-weighted imaging and standard diffusion imaging (via the apparent diffusion coefficient (ADC)). Experimental results suggest that correlated diffusion imaging provide improved delineation between healthy and cancerous tissue and may have potential as a diagnostic tool for cancer detection and localization in the prostate gland. A new form of diffusion magnetic resonance imaging called correlated diffusion imaging (CDI) was developed for the purpose of aiding radiologists in cancer detection and localization in the prostate gland. Preliminary results show CDI shows considerable promise as a diagnostic aid for radiologists in the detection and localization of prostate cancer

  9. Gaseous diffusion system

    International Nuclear Information System (INIS)

    Garrett, G.A.; Shacter, J.

    1978-01-01

    A gaseous diffusion system is described comprising a plurality of diffusers connected in cascade to form a series of stages, each of the diffusers having a porous partition dividing it into a high pressure chamber and a low pressure chamber, and means for combining a portion of the enriched gas from a succeeding stage with a portion of the enriched gas from the low pressure chamber of each stage and feeding it into one extremity of the high pressure chamber thereof

  10. Inpainting using airy diffusion

    Science.gov (United States)

    Lorduy Hernandez, Sara

    2015-09-01

    One inpainting procedure based on Airy diffusion is proposed, implemented via Maple and applied to some digital images. Airy diffusion is a partial differential equation with spatial derivatives of third order in contrast with the usual diffusion with spatial derivatives of second order. Airy diffusion generates the Airy semigroup in terms of the Airy functions which can be rewritten in terms of Bessel functions. The Airy diffusion can be used to smooth an image with the corresponding noise elimination via convolution. Also the Airy diffusion can be used to erase objects from an image. We build an algorithm using the Maple package ImageTools and such algorithm is tested using some images. Our results using Airy diffusion are compared with the similar results using standard diffusion. We observe that Airy diffusion generates powerful filters for image processing which could be incorporated in the usual packages for image processing such as ImageJ and Photoshop. Also is interesting to consider the possibility to incorporate the Airy filters as applications for smartphones and smart-glasses.
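
    Schematically, the contrast drawn above between standard diffusion and Airy diffusion is between a second-order and a third-order spatial operator; the form below is only a one-dimensional illustration, with signs and constants depending on convention.

      \frac{\partial u}{\partial t} = D\,\frac{\partial^{2} u}{\partial x^{2}}
      \qquad\text{versus}\qquad
      \frac{\partial u}{\partial t} = \frac{\partial^{3} u}{\partial x^{3}} .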

  11. Diffusion in compacted bentonite

    International Nuclear Information System (INIS)

    Muurinen, A.; Rantanen, J.

    1985-01-01

    The objective of this report is to collect the literature bearing on diffusion in compacted bentonite, which has been suggested as a possible buffer material for the disposal of spent fuel. Diffusion in a porous, water-saturated material is usually described as diffusion in the pore-water where sorption on the solid matter can delay the migration in the instationary state. There are also models which take into consideration that the sorbed molecules can also move while being sorbed. Diffusion experiments in compacted bentonite have been reported by many authors. Gases, anions, cations and actinides have been used as diffusing molecules. The report collects the results and the information on the measurement methods. On the basis of the results it can be concluded that different particles possibly follow different diffusion mechanisms. The parameters which affect the diffusion seem to be for example the size, the electric charge and the sorption properties of the diffusing molecule. The report also suggests the parameters to be used in the diffusion calculations of the safety analyses of spent fuel disposal. (author)

  12. Diffusion of zinc into an unpassivated surface of indium phosphide

    International Nuclear Information System (INIS)

    Budko, T.O.; Gushchinskaya, E.V.; Emelyanenko, Yu.S.; Malyshev, S.A.

    1989-01-01

    Peculiarities of the diffusion of Zn into an unpassivated surface of InP in an open gas-flow system are studied. In the region where the carrier concentration profile is described by an erfc (complementary error function), the diffusion coefficient and activation energy are determined. It is shown that thermal processes cause changes in the charge state of Zn in InP which result in a variation of the carrier profile in the semiconductor. (author)
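
    The complementary-error-function profile mentioned above is the standard constant-surface-concentration solution of Fick's second law, C(x,t) = Cs·erfc(x/(2√(Dt))). The Python sketch below only illustrates that textbook relationship; the surface concentration, diffusion coefficient, anneal time and background doping are assumed values, not data from this study.

```python
# Minimal sketch: constant-source diffusion profile C(x,t) = Cs * erfc(x / (2*sqrt(D*t))).
# Cs, D, t and Nb below are illustrative values, not data from the paper.
import numpy as np
from scipy.special import erfc, erfcinv

Cs = 1e19          # surface concentration, cm^-3 (assumed)
D = 1e-13          # diffusion coefficient, cm^2/s (assumed)
t = 30 * 60        # anneal time, s (assumed)

x = np.linspace(0.0, 2e-4, 200)            # depth, cm
C = Cs * erfc(x / (2.0 * np.sqrt(D * t)))  # dopant/carrier concentration profile

# The junction depth for a given background doping Nb follows by inverting erfc:
Nb = 1e17                                   # background concentration, cm^-3 (assumed)
xj = 2.0 * np.sqrt(D * t) * erfcinv(Nb / Cs)
print(f"junction depth ~ {xj * 1e4:.2f} um")
```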

  13. Error field considerations for BPX

    International Nuclear Information System (INIS)

    LaHaye, R.J.

    1992-01-01

    Irregularities in the position of poloidal and/or toroidal field coils in tokamaks produce resonant toroidal asymmetries in the vacuum magnetic fields. Otherwise stable tokamak discharges become non-linearly unstable to disruptive locked modes when subjected to low-level error fields. Because of the field errors, magnetic islands are produced which would not otherwise occur in tearing-mode-stable configurations; a concomitant reduction of the total confinement can result. Poloidal and toroidal asymmetries arise in the heat flux to the divertor target. In this paper, the field errors from perturbed BPX coils are used in a field line tracing code of the BPX equilibrium to study these deleterious effects. Limits on coil irregularities for device design and fabrication are computed along with possible correcting coils for reducing such field errors.

  14. The uncorrected refractive error challenge

    Directory of Open Access Journals (Sweden)

    Kovin Naidoo

    2016-11-01

    Full Text Available Refractive error affects people of all ages, socio-economic status and ethnic groups. The most recent statistics estimate that, worldwide, 32.4 million people are blind and 191 million people have vision impairment. Vision impairment has been defined based on distance visual acuity only, and uncorrected distance refractive error (mainly myopia is the single biggest cause of worldwide vision impairment. However, when we also consider near visual impairment, it is clear that even more people are affected. From research it was estimated that the number of people with vision impairment due to uncorrected distance refractive error was 107.8 million,1 and the number of people affected by uncorrected near refractive error was 517 million, giving a total of 624.8 million people.

  15. Quantile Regression With Measurement Error

    KAUST Repository

    Wei, Ying

    2009-08-27

    Regression quantiles can be substantially biased when the covariates are measured with error. In this paper we propose a new method that produces consistent linear quantile estimation in the presence of covariate measurement error. The method corrects the measurement error induced bias by constructing joint estimating equations that simultaneously hold for all the quantile levels. An iterative EM-type estimation algorithm to obtain the solutions to such joint estimation equations is provided. The finite sample performance of the proposed method is investigated in a simulation study, and compared to the standard regression calibration approach. Finally, we apply our methodology to part of the National Collaborative Perinatal Project growth data, a longitudinal study with an unusual measurement error structure. © 2009 American Statistical Association.
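
    As a toy illustration of the attenuation problem described above (and not of the paper's corrected estimator), the following Python sketch simulates a linear model, adds measurement error to the covariate, and compares median-regression slopes fitted to the true and to the error-prone covariate. The statsmodels QuantReg estimator and all numerical values are assumptions made only for this illustration.

```python
# Sketch: bias of quantile regression when the covariate is measured with error.
# This only demonstrates the problem; it does not implement the paper's corrected estimator.
import numpy as np
import statsmodels.api as sm
from statsmodels.regression.quantile_regression import QuantReg

rng = np.random.default_rng(0)
n = 5000
x = rng.normal(0.0, 1.0, n)                    # true covariate
y = 1.0 + 2.0 * x + rng.normal(0.0, 1.0, n)    # outcome, true slope = 2
w = x + rng.normal(0.0, 0.8, n)                # error-prone measurement of x

def median_slope(covariate):
    X = sm.add_constant(covariate)
    fit = QuantReg(y, X).fit(q=0.5)
    return fit.params[1]

print("slope using true x:  %.3f" % median_slope(x))   # close to 2
print("slope using noisy w: %.3f" % median_slope(w))   # attenuated toward 0
```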

  16. Comprehensive Error Rate Testing (CERT)

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Centers for Medicare and Medicaid Services (CMS) implemented the Comprehensive Error Rate Testing (CERT) program to measure improper payments in the Medicare...

  17. Numerical optimization with computational errors

    CERN Document Server

    Zaslavski, Alexander J

    2016-01-01

    This book studies the approximate solutions of optimization problems in the presence of computational errors. A number of results are presented on the convergence behavior of algorithms in a Hilbert space; these algorithms are examined taking computational errors into account. The author illustrates that algorithms generate a good approximate solution if computational errors are bounded from above by a small positive constant. Known computational errors are examined with the aim of determining an approximate solution. Researchers and students interested in optimization theory and its applications will find this book instructive and informative. This monograph contains 16 chapters, including chapters devoted to the subgradient projection algorithm, the mirror descent algorithm, the gradient projection algorithm, Weiszfeld's method, constrained convex minimization problems, the convergence of a proximal point method in a Hilbert space, the continuous subgradient method, penalty methods and Newton’s meth...
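
    A minimal sketch of the book's theme, under assumptions of my own: projected gradient descent on a small box-constrained quadratic, with a bounded perturbation added to each gradient evaluation to mimic a computational error. The problem data, step size and error bounds are illustrative; the point is only that the iterates settle in a neighborhood of the exact solution whose size grows with the error bound.

```python
# Sketch: projected gradient descent with a bounded perturbation ("computational error")
# added to each gradient evaluation. All problem data and the error bound delta are assumed.
import numpy as np

rng = np.random.default_rng(1)
A = np.array([[3.0, 0.5], [0.5, 1.0]])     # positive definite quadratic: f(x) = 0.5 x'Ax - b'x
b = np.array([1.0, -2.0])
lo, hi = -1.0, 1.0                          # box constraints

def project(x):
    return np.clip(x, lo, hi)

def grad(x):
    return A @ x - b

def solve(delta, steps=500, lr=0.2):
    x = np.zeros(2)
    for _ in range(steps):
        noise = rng.uniform(-1.0, 1.0, 2)
        noise *= delta / max(np.linalg.norm(noise), 1e-12)   # ||noise|| <= delta
        x = project(x - lr * (grad(x) + noise))
    return x

x_exact = solve(delta=0.0)
for d in (0.01, 0.1, 0.5):
    x_err = solve(delta=d)
    print(f"error bound {d:4.2f}: distance to exact solution {np.linalg.norm(x_err - x_exact):.3f}")
```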

  18. Dual processing and diagnostic errors.

    Science.gov (United States)

    Norman, Geoff

    2009-09-01

    In this paper, I review evidence from two theories in psychology relevant to diagnosis and diagnostic errors. "Dual Process" theories of thinking, frequently mentioned with respect to diagnostic error, propose that categorization decisions can be made with either a fast, unconscious, contextual process called System 1 or a slow, analytical, conscious, and conceptual process, called System 2. Exemplar theories of categorization propose that many category decisions in everyday life are made by unconscious matching to a particular example in memory, and these remain available and retrievable individually. I then review studies of clinical reasoning based on these theories, and show that the two processes are equally effective; System 1, despite its reliance on idiosyncratic, individual experience, is no more prone to cognitive bias or diagnostic error than System 2. Further, I review evidence that instructions directed at encouraging the clinician to explicitly use both strategies can lead to a consistent reduction in error rates.

  19. Error correcting coding for OTN

    DEFF Research Database (Denmark)

    Justesen, Jørn; Larsen, Knud J.; Pedersen, Lars A.

    2010-01-01

    Forward error correction codes for 100 Gb/s optical transmission are currently receiving much attention from transport network operators and technology providers. We discuss the performance of hard-decision decoding using product-type codes that cover a single OTN frame or a small number of such frames. In particular we argue that a three-error-correcting BCH code is the best choice for the component code in such systems.

  20. Negligence, genuine error, and litigation

    OpenAIRE

    Sohn DH

    2013-01-01

    David H Sohn, Department of Orthopedic Surgery, University of Toledo Medical Center, Toledo, OH, USA. Abstract: Not all medical injuries are the result of negligence. In fact, most medical injuries are the result either of the inherent risk in the practice of medicine, or of system errors, which cannot be prevented simply through fear of disciplinary action. This paper will discuss the differences between adverse events, negligence, and system errors; the current medical malpractice tort syst...

  1. Eliminating US hospital medical errors.

    Science.gov (United States)

    Kumar, Sameer; Steinebach, Marc

    2008-01-01

    Healthcare costs in the USA have continued to rise steadily since the 1980s. Medical errors are one of the major causes of deaths and injuries of thousands of patients every year, contributing to soaring healthcare costs. The purpose of this study is to examine what has been done to deal with the medical-error problem in the last two decades and present a closed-loop mistake-proof operation system for surgery processes that would likely eliminate preventable medical errors. The design method used is a combination of creating a service blueprint, implementing the six sigma DMAIC cycle, developing cause-and-effect diagrams as well as devising poka-yokes in order to develop a robust surgery operation process for a typical US hospital. In the improve phase of the six sigma DMAIC cycle, a number of poka-yoke techniques are introduced to prevent typical medical errors (identified through cause-and-effect diagrams) that may occur in surgery operation processes in US hospitals. It is the authors' assertion that implementing the new service blueprint along with the poka-yokes will likely result in the current medical error rate improving significantly, toward the six-sigma level. Additionally, designing as many redundancies as possible in the delivery of care will help reduce medical errors. Primary healthcare providers should strongly consider investing in adequate doctor and nurse staffing, and improving their education related to the quality of service delivery, to minimize clinical errors. This will lead to higher fixed costs, especially in the shorter time frame. This paper focuses on the additional attention needed to make a sound technical and business case for implementing six sigma tools to eliminate medical errors, which will enable hospital managers to increase their hospital's profitability in the long run and also ensure patient safety.

  2. Approximation errors during variance propagation

    International Nuclear Information System (INIS)

    Dinsmore, Stephen

    1986-01-01

    Risk and reliability analyses are often performed by constructing and quantifying large fault trees. The inputs to these models are component failure events whose probabilities of occurring are best represented as random variables. This paper examines the errors inherent in two approximation techniques used to calculate the top event's variance from the inputs' variances. Two sample fault trees are evaluated and several three-dimensional plots illustrating the magnitude of the error over a wide range of input means and variances are given.
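
    As a toy analogue of the top-event variance calculation (not the paper's fault trees), the Python sketch below compares first-order variance propagation for an AND gate, P = p1·p2 with independent uncertain inputs, against the exact product-variance formula and a Monte Carlo check. The input means and variances are assumed values.

```python
# Sketch: variance of an AND-gate top event P = p1*p2 with uncertain, independent inputs.
# Compares first-order (Taylor) propagation against the exact product-variance formula
# and a Monte Carlo check. Input means/variances are illustrative assumptions.
import numpy as np

m1, v1 = 1e-3, (5e-4) ** 2      # mean and variance of p1 (assumed)
m2, v2 = 2e-2, (1e-2) ** 2      # mean and variance of p2 (assumed)

# First-order approximation (what simple propagation formulas use):
var_first_order = m2**2 * v1 + m1**2 * v2

# Exact variance of a product of independent random variables:
var_exact = v1 * v2 + m2**2 * v1 + m1**2 * v2

# Monte Carlo check with lognormal inputs matched to the same means/variances.
rng = np.random.default_rng(2)
def lognormal(mean, var, n):
    sigma2 = np.log(1.0 + var / mean**2)
    mu = np.log(mean) - 0.5 * sigma2
    return rng.lognormal(mu, np.sqrt(sigma2), n)

p1 = lognormal(m1, v1, 1_000_000)
p2 = lognormal(m2, v2, 1_000_000)
var_mc = np.var(p1 * p2)

print(f"first-order: {var_first_order:.3e}")
print(f"exact:       {var_exact:.3e}")
print(f"Monte Carlo: {var_mc:.3e}")
print(f"relative error of first-order approx: {1 - var_first_order / var_exact:.1%}")
```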

  3. [Medical errors: inevitable but preventable].

    Science.gov (United States)

    Giard, R W

    2001-10-27

    Medical errors are increasingly reported in the lay press. Studies have shown dramatic error rates of 10 percent or even higher. From a methodological point of view, studying the frequency and causes of medical errors is far from simple. Clinical decisions on diagnostic or therapeutic interventions are always taken within a clinical context. Reviewing outcomes of interventions without taking into account both the intentions and the arguments for a particular action will limit the conclusions from a study on the rate and preventability of errors. The interpretation of the preventability of medical errors is fraught with difficulties and probably highly subjective. Blaming the doctor personally does not do justice to the actual situation and especially the organisational framework. Attention for and improvement of the organisational aspects of error are far more important than litigating the person. To err is and will remain human, and if we want to reduce the incidence of faults we must be able to learn from our mistakes. That requires an open attitude towards medical mistakes, a continuous effort in their detection, a sound analysis and, where feasible, the institution of preventive measures.

  4. Quantum error correction for beginners

    International Nuclear Information System (INIS)

    Devitt, Simon J; Nemoto, Kae; Munro, William J

    2013-01-01

    Quantum error correction (QEC) and fault-tolerant quantum computation represent one of the most vital theoretical aspects of quantum information processing. It was well known from the early developments of this exciting field that the fragility of coherent quantum systems would be a catastrophic obstacle to the development of large-scale quantum computers. The introduction of quantum error correction in 1995 showed that active techniques could be employed to mitigate this fatal problem. However, quantum error correction and fault-tolerant computation is now a much larger field and many new codes, techniques, and methodologies have been developed to implement error correction for large-scale quantum algorithms. In response, we have attempted to summarize the basic aspects of quantum error correction and fault-tolerance, not as a detailed guide, but rather as a basic introduction. The development in this area has been so pronounced that many in the field of quantum information, specifically researchers who are new to quantum information or people focused on the many other important issues in quantum computation, have found it difficult to keep up with the general formalisms and methodologies employed in this area. Rather than introducing these concepts from a rigorous mathematical and computer science framework, we instead examine error correction and fault-tolerance largely through detailed examples, which are more relevant to experimentalists today and in the near future. (review article)
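
    A minimal, purely classical sketch of the standard textbook example (the three-qubit bit-flip repetition code), not material from the review itself: majority-vote decoding turns a physical bit-flip probability p into a logical error probability 3p² − 2p³, which is smaller than p whenever p < 1/2.

```python
# Sketch: logical vs physical error rate for the three-qubit bit-flip repetition code,
# treating bit-flip errors classically (a standard textbook illustration, not a full
# quantum simulation).
import numpy as np

rng = np.random.default_rng(3)

def logical_error_rate(p, trials=200_000):
    # Each of 3 qubits flips independently with probability p; majority vote decodes.
    flips = rng.random((trials, 3)) < p
    decoded_wrong = flips.sum(axis=1) >= 2     # 2 or 3 flips defeat the majority vote
    return decoded_wrong.mean()

for p in (0.01, 0.05, 0.1, 0.3):
    analytic = 3 * p**2 - 2 * p**3             # = 3p^2(1-p) + p^3
    print(f"p = {p:4.2f}: simulated {logical_error_rate(p):.4f}, analytic {analytic:.4f}")
```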

  5. Medical Error and Moral Luck.

    Science.gov (United States)

    Hubbeling, Dieneke

    2016-09-01

    This paper addresses the concept of moral luck. Moral luck is discussed in the context of medical error, especially an error of omission that occurs frequently, but only rarely has adverse consequences. As an example, a failure to compare the label on a syringe with the drug chart results in the wrong medication being administered and the patient dies. However, this error may have previously occurred many times with no tragic consequences. Discussions on moral luck can highlight conflicting intuitions. Should perpetrators receive a harsher punishment because of an adverse outcome, or should they be dealt with in the same way as colleagues who have acted similarly, but with no adverse effects? An additional element to the discussion, specifically with medical errors, is that according to the evidence currently available, punishing individual practitioners does not seem to be effective in preventing future errors. The following discussion, using relevant philosophical and empirical evidence, posits a possible solution for the moral luck conundrum in the context of medical error: namely, making a distinction between the duty to make amends and assigning blame. Blame should be assigned on the basis of actual behavior, while the duty to make amends is dependent on the outcome.

  6. Discrimination of thermal diffusivity

    NARCIS (Netherlands)

    Bergmann Tiest, W.M.; Kappers, A.M.L.

    2009-01-01

    Materials such as wood or metal which are at equal temperatures are perceived to be of different ‘coldness’ due to differences in thermal properties, such as the thermal diffusivity. The thermal diffusivity of a material is a parameter that controls the rate with which heat is extracted from the

  7. Diffusion Based Photon Mapping

    DEFF Research Database (Denmark)

    Schjøth, Lars; Fogh Olsen, Ole; Sporring, Jon

    2007-01-01

    … To address this problem we introduce a novel photon mapping algorithm based on nonlinear anisotropic diffusion. Our algorithm adapts according to the structure of the photon map such that smoothing occurs along edges and structures and not across. In this way we preserve the important illumination features, while eliminating noise. We call our method diffusion based photon mapping.
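
    The nonlinear anisotropic diffusion this method builds on can be illustrated on a plain 2D array rather than a photon map. The Python sketch below uses a Perona-Malik style scheme with an assumed conductance function and parameters; it is not the authors' photon-mapping implementation.

```python
# Sketch: Perona-Malik style anisotropic diffusion on a 2D array.
# Smoothing is suppressed across strong gradients (edges) and allowed in flat regions,
# which is the general idea behind diffusion-based photon-map filtering.
# Parameters are illustrative only.
import numpy as np

def anisotropic_diffusion(img, n_iter=20, kappa=0.2, dt=0.2):
    u = img.astype(float).copy()
    g = lambda d: np.exp(-(d / kappa) ** 2)    # conductance: ~0 across edges, ~1 in flat areas
    for _ in range(n_iter):
        # differences toward the four neighbours
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u,  1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u,  1, axis=1) - u
        u += dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u

# Toy example: noise on a step edge is smoothed while the edge itself is preserved.
rng = np.random.default_rng(4)
img = np.zeros((64, 64)); img[:, 32:] = 1.0
noisy = img + rng.normal(0.0, 0.1, img.shape)
smoothed = anisotropic_diffusion(noisy)
print("noise std before:", np.std(noisy[:, :20]), "after:", np.std(smoothed[:, :20]))
```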

  8. Diffusion Based Photon Mapping

    DEFF Research Database (Denmark)

    Schjøth, Lars; Olsen, Ole Fogh; Sporring, Jon

    2006-01-01

    … To address this problem we introduce a novel photon mapping algorithm based on nonlinear anisotropic diffusion. Our algorithm adapts according to the structure of the photon map such that smoothing occurs along edges and structures and not across. In this way we preserve the important illumination features, while eliminating noise. We call our method diffusion based photon mapping.

  9. Adaptation and Cultural Diffusion.

    Science.gov (United States)

    Ormrod, Richard K.

    1992-01-01

    Explores the role of adaptation in cultural diffusion. Explains that adaptation theory recognizes the lack of independence between innovations and their environmental settings. Discusses testing and selection, modification, motivation, and cognition. Suggests that adaptation effects are pervasive in cultural diffusion but require a broader, more…

  10. Modelling of Innovation Diffusion

    Directory of Open Access Journals (Sweden)

    Arkadiusz Kijek

    2010-01-01

    Full Text Available Since the publication of the Bass model in 1969, research on the modelling of the diffusion of innovation resulted in a vast body of scientific literature consisting of articles, books, and studies of real-world applications of this model. The main objective of the diffusion model is to describe a pattern of spread of innovation among potential adopters in terms of a mathematical function of time. This paper assesses the state-of-the-art in mathematical models of innovation diffusion and procedures for estimating their parameters. Moreover, theoretical issues related to the models presented are supplemented with empirical research. The purpose of the research is to explore the extent to which the diffusion of broadband Internet users in 29 OECD countries can be adequately described by three diffusion models, i.e. the Bass model, logistic model and dynamic model. The results of this research are ambiguous and do not indicate which model best describes the diffusion pattern of broadband Internet users but in terms of the results presented, in most cases the dynamic model is inappropriate for describing the diffusion pattern. Issues related to the further development of innovation diffusion models are discussed and some recommendations are given. (original abstract
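
    The Bass model referred to above has a closed-form adoption curve governed by an innovation coefficient p, an imitation coefficient q and a market potential m. The Python sketch below evaluates that curve with illustrative parameter values; they are not estimates from the OECD broadband data.

```python
# Sketch: cumulative adoptions under the Bass diffusion model,
#   F(t) = (1 - exp(-(p+q) t)) / (1 + (q/p) exp(-(p+q) t)),  N(t) = m * F(t).
# p, q, m below are illustrative values, not the paper's estimates.
import numpy as np

def bass_cumulative(t, p, q, m):
    e = np.exp(-(p + q) * t)
    return m * (1.0 - e) / (1.0 + (q / p) * e)

t = np.arange(0, 16)             # years since introduction
p, q, m = 0.03, 0.40, 100.0      # innovation, imitation, market potential (assumed)
N = bass_cumulative(t, p, q, m)
adoptions_per_year = np.diff(N)  # the familiar bell-shaped adoption curve

print("peak adoption year ~", 1 + int(np.argmax(adoptions_per_year)))
# Analytic peak time for the Bass model: t* = ln(q/p) / (p + q)
print("analytic peak time ~ %.1f" % (np.log(q / p) / (p + q)))
```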

  11. Thermal diffusion (1963)

    International Nuclear Information System (INIS)

    Lemarechal, A.

    1963-01-01

    This report brings together the essential principles of thermal diffusion in the liquid and gaseous phases. The macroscopic and molecular aspects of the thermal diffusion constant are reviewed, as well as the various measurement methods; the most important developments, however, concern the operation of the Clusius and Dickel thermogravitational column and its applications. (author) [fr

  12. Diffusion of Botulinum Toxins

    Directory of Open Access Journals (Sweden)

    Matthew A. Brodsky

    2012-08-01

    Full Text Available Background: It is generally agreed that diffusion of botulinum toxin occurs, but the extent of the spread and its clinical importance are disputed. Many factors have been suggested to play a role but which have the most clinical relevance is a subject of much discussion. Methods: This review discusses the variables affecting diffusion, including protein composition and molecular size as well as injection factors (e.g., volume, dose, injection method). It also discusses data on diffusion from comparative studies in animal models and human clinical trials that illustrate differences between the available botulinum toxin products (onabotulinumtoxinA, abobotulinumtoxinA, incobotulinumtoxinA, and rimabotulinumtoxinB). Results: Neither molecular weight nor the presence of complexing proteins appears to affect diffusion; however, injection volume, concentration, and dose all play roles and are modifiable. Both animal and human studies show that botulinum toxin products are not interchangeable, and that some products are associated with greater diffusion and higher rates of diffusion-related adverse events than others. Discussion: Each of the botulinum toxins is a unique pharmacologic entity. A working knowledge of the different serotypes is essential to avoid unwanted diffusion-related adverse events. In addition, clinicians should be aware that the factors influencing diffusion may range from properties intrinsic to the drug to accurate muscle selection as well as dilution, volume, and dose injected.

  13. Diffusion in Coulomb crystals.

    Science.gov (United States)

    Hughto, J; Schneider, A S; Horowitz, C J; Berry, D K

    2011-07-01

    Diffusion in Coulomb crystals can be important for the structure of neutron star crusts. We determine diffusion constants D from molecular dynamics simulations. We find that D for Coulomb crystals with relatively soft-core 1/r interactions may be larger than D for Lennard-Jones or other solids with harder-core interactions. Diffusion, for simulations of nearly perfect body-centered-cubic lattices, involves the exchange of ions in ringlike configurations. Here ions "hop" in unison without the formation of long-lived vacancies. Diffusion, for imperfect crystals, involves the motion of defects. Finally, we find that diffusion, for an amorphous system rapidly quenched from Coulomb parameter Γ=175 to Coulomb parameters up to Γ=1750, is fast enough that the system starts to crystallize during long simulation runs. These results strongly suggest that Coulomb solids in cold white dwarf stars, and the crust of neutron stars, will be crystalline and not amorphous.
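
    Diffusion constants are commonly extracted from molecular-dynamics trajectories through the Einstein relation MSD(t) ≈ 6Dt in three dimensions. The Python sketch below applies that estimate to synthetic Brownian trajectories rather than to the Coulomb-crystal simulations of the paper; all parameters are assumed.

```python
# Sketch: estimating a diffusion constant from particle trajectories via the Einstein
# relation MSD(t) ~ 6 D t (3D). Trajectories here are synthetic random walks, not the
# molecular-dynamics data of the paper.
import numpy as np

rng = np.random.default_rng(5)
n_particles, n_steps, dt = 200, 2000, 1.0
D_true = 0.05
# Brownian steps with per-coordinate variance 2*D*dt
steps = rng.normal(0.0, np.sqrt(2 * D_true * dt), (n_particles, n_steps, 3))
traj = np.cumsum(steps, axis=1)

msd = np.mean(np.sum(traj**2, axis=2), axis=0)       # average over particles
t = dt * np.arange(1, n_steps + 1)
D_est = np.polyfit(t, msd, 1)[0] / 6.0               # slope of MSD vs t, divided by 6

print(f"true D = {D_true}, estimated D = {D_est:.4f}")
```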

  14. Atomic diffusion in stars

    CERN Document Server

    Michaud, Georges; Richer, Jacques

    2015-01-01

    This book gives an overview of atomic diffusion, a fundamental physical process, as applied to all types of stars, from the main sequence to neutron stars. The superficial abundances of stars as well as their evolution can be significantly affected. The authors show where atomic diffusion plays an essential role and how it can be implemented in modelling.  In Part I, the authors describe the tools that are required to include atomic diffusion in models of stellar interiors and atmospheres. An important role is played by the gradient of partial radiative pressure, or radiative acceleration, which is usually neglected in stellar evolution. In Part II, the authors systematically review the contribution of atomic diffusion to each evolutionary step. The dominant effects of atomic diffusion are accompanied by more subtle effects on a large number of structural properties throughout evolution. One of the goals of this book is to provide the means for the astrophysicist or graduate student to evaluate the importanc...

  15. Degenerate nonlinear diffusion equations

    CERN Document Server

    Favini, Angelo

    2012-01-01

    The aim of these notes is to include in a uniform presentation style several topics related to the theory of degenerate nonlinear diffusion equations, treated in the mathematical framework of evolution equations with multivalued m-accretive operators in Hilbert spaces. The problems concern nonlinear parabolic equations involving two cases of degeneracy. More precisely, one case is due to the vanishing of the time derivative coefficient and the other is provided by the vanishing of the diffusion coefficient on subsets of positive measure of the domain. From the mathematical point of view the results presented in these notes can be considered as general results in the theory of degenerate nonlinear diffusion equations. However, this work does not seek to present an exhaustive study of degenerate diffusion equations, but rather to emphasize some rigorous and efficient techniques for approaching various problems involving degenerate nonlinear diffusion equations, such as well-posedness, periodic solutions, asympt...

  16. Particle Simulation of Fractional Diffusion Equations

    KAUST Repository

    Allouch, Samer

    2017-07-12

    This work explores different particle-based approaches to the simulation of one-dimensional fractional subdiffusion equations in unbounded domains. We rely on smooth particle approximations, and consider four methods for estimating the fractional diffusion term. The first method is based on direct differentiation of the particle representation, it follows the Riesz definition of the fractional derivative and results in a non-conservative scheme. The other three methods follow the particle strength exchange (PSE) methodology and are by construction conservative, in the sense that the total particle strength is time invariant. The first PSE algorithm is based on using direct differentiation to estimate the fractional diffusion flux, and exploiting the resulting estimates in an integral representation of the divergence operator. Meanwhile, the second one relies on the regularized Riesz representation of the fractional diffusion term to derive a suitable interaction formula acting directly on the particle representation of the diffusing field. A third PSE construction is considered that exploits the Green's function of the fractional diffusion equation. The performance of all four approaches is assessed for the case of a one-dimensional diffusion equation with constant diffusivity. This enables us to take advantage of known analytical solutions, and consequently conduct a detailed analysis of the performance of the methods. This includes a quantitative study of the various sources of error, namely filtering, quadrature, domain truncation, and time integration, as well as a space and time self-convergence analysis. These analyses are conducted for different values of the order of the fractional derivatives, and computational experiences are used to gain insight that can be used for generalization of the present constructions.
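
    The particle strength exchange idea is easiest to see for ordinary (integer-order) diffusion: the Laplacian is replaced by a conservative exchange of strength between particles through a smoothing kernel. The Python sketch below implements that simplified, constant-diffusivity 1D case with an assumed Gaussian kernel; it is not the fractional-diffusion construction developed in the paper.

```python
# Sketch: particle strength exchange (PSE) for ordinary 1D diffusion u_t = D u_xx.
# The Laplacian is approximated by a conservative exchange of strength between particles:
#   du_i/dt = (D/eps^2) * sum_j V_j (u_j - u_i) eta_eps(x_i - x_j),
# with a Gaussian kernel whose second moment equals 2. This illustrates the PSE idea only;
# it is not the fractional-diffusion scheme of the paper. All parameters are assumed.
import numpy as np

D, eps_over_h = 0.1, 2.0
n, L = 200, 10.0
h = L / n
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
eps = eps_over_h * h

u = np.exp(-x**2)                                               # initial condition (assumed)
eta = lambda z: np.exp(-z**2 / 4.0) / (2.0 * np.sqrt(np.pi))    # unit mass, 2nd moment = 2

# Precompute the symmetric interaction matrix (uniform particle volumes V_j = h).
Z = (x[:, None] - x[None, :]) / eps
K = (D / eps**2) * h * eta(Z) / eps       # eta_eps(z) = eta(z/eps)/eps

dt, n_steps = 0.01, 200
for _ in range(n_steps):
    u = u + dt * (K @ u - K.sum(axis=1) * u)   # conservative exchange form

# Total particle strength should be conserved (initially ~ sqrt(pi) for exp(-x^2)):
print("total strength conserved:", np.isclose(u.sum() * h, np.sqrt(np.pi), rtol=1e-3))
```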

  17. Particle Simulation of Fractional Diffusion Equations

    KAUST Repository

    Allouch, Samer; Lucchesi, Marco; Maître, O. P. Le; Mustapha, K. A.; Knio, Omar

    2017-01-01

    This work explores different particle-based approaches to the simulation of one-dimensional fractional subdiffusion equations in unbounded domains. We rely on smooth particle approximations, and consider four methods for estimating the fractional diffusion term. The first method is based on direct differentiation of the particle representation, it follows the Riesz definition of the fractional derivative and results in a non-conservative scheme. The other three methods follow the particle strength exchange (PSE) methodology and are by construction conservative, in the sense that the total particle strength is time invariant. The first PSE algorithm is based on using direct differentiation to estimate the fractional diffusion flux, and exploiting the resulting estimates in an integral representation of the divergence operator. Meanwhile, the second one relies on the regularized Riesz representation of the fractional diffusion term to derive a suitable interaction formula acting directly on the particle representation of the diffusing field. A third PSE construction is considered that exploits the Green's function of the fractional diffusion equation. The performance of all four approaches is assessed for the case of a one-dimensional diffusion equation with constant diffusivity. This enables us to take advantage of known analytical solutions, and consequently conduct a detailed analysis of the performance of the methods. This includes a quantitative study of the various sources of error, namely filtering, quadrature, domain truncation, and time integration, as well as a space and time self-convergence analysis. These analyses are conducted for different values of the order of the fractional derivatives, and computational experiences are used to gain insight that can be used for generalization of the present constructions.

  18. Nodal spectrum method for solving neutron diffusion equation

    International Nuclear Information System (INIS)

    Sanchez, D.; Garcia, C. R.; Barros, R. C. de; Milian, D.E.

    1999-01-01

    Presented here is a new numerical nodal method for solving the static multidimensional neutron diffusion equation in rectangular geometry. Our method is based on a spectral analysis of the nodal diffusion equations. These equations are obtained by integrating the diffusion equation in the X and Y directions and then considering flat approximations for the current. These flat approximations are the only approximations considered in this method; as a result, the numerical solutions are completely free from truncation errors. We show numerical results to illustrate the method's accuracy for coarse-mesh calculations.

  19. Oxygen diffusion in monazite

    Science.gov (United States)

    Cherniak, D. J.; Zhang, X. Y.; Nakamura, M.; Watson, E. B.

    2004-09-01

    We report measurements of oxygen diffusion in natural monazites under both dry, 1-atm conditions and hydrothermal conditions. For dry experiments, 18O-enriched CePO4 powder and monazite crystals were sealed in Ag-Pd capsules with a solid buffer (to buffer at NNO) and annealed in 1-atm furnaces. Hydrothermal runs were conducted in cold-seal pressure vessels, where monazite grains were encapsulated with 18O-enriched water. Following the diffusion anneals, oxygen concentration profiles were measured with Nuclear Reaction Analysis (NRA) using the reaction 18O(p,α)15N. Over the temperature range 850-1100 °C, the Arrhenius relation determined for dry diffusion experiments on monazite is given by: Under wet conditions at 100 MPa water pressure, over the temperature range 700-880 °C, oxygen diffusion can be described by the Arrhenius relationship: Oxygen diffusion under hydrothermal conditions has a significantly lower activation energy for diffusion than under dry conditions, as has been found to be the case for many other minerals, both silicate and nonsilicate. Given these differences in activation energies, the differences between dry and wet diffusion rates increase with lower temperatures; for example, at 600 °C, dry diffusion will be more than 4 orders of magnitude slower than diffusion under hydrothermal conditions. These disparate diffusivities will result in pronounced differences in the degree of retentivity of oxygen isotope signatures. For instance, under dry conditions (presumably rare in the crust) and high lower-crustal temperatures (∼800 °C), monazite cores of 70-μm radii will preserve O isotope ratios for about 500,000 years; by comparison, they would be retained at this temperature under wet conditions for about 15,000 years.

  20. Predictors of Errors of Novice Java Programmers

    Science.gov (United States)

    Bringula, Rex P.; Manabat, Geecee Maybelline A.; Tolentino, Miguel Angelo A.; Torres, Edmon L.

    2012-01-01

    This descriptive study determined which of the sources of errors would predict the errors committed by novice Java programmers. Descriptive statistics revealed that the respondents perceived that they committed the identified eighteen errors infrequently. Thought error was perceived to be the main source of error during the laboratory programming…

  1. Learning time-dependent noise to reduce logical errors: real time error rate estimation in quantum error correction

    Science.gov (United States)

    Huo, Ming-Xia; Li, Ying

    2017-12-01

    Quantum error correction is important to quantum information processing, which allows us to reliably process information encoded in quantum error correction codes. Efficient quantum error correction benefits from the knowledge of error rates. We propose a protocol for monitoring error rates in real time without interrupting the quantum error correction. Any adaptation of the quantum error correction code or its implementation circuit is not required. The protocol can be directly applied to the most advanced quantum error correction techniques, e.g. the surface code. A Gaussian process algorithm is used to estimate and predict error rates based on error correction data in the past. We find that using these estimated error rates, the probability of error correction failures can be significantly reduced by a factor increasing with the code distance.
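
    The paper's protocol feeds past error-correction statistics to a Gaussian process regression. The Python sketch below shows the general idea on synthetic, slowly drifting error rates using scikit-learn's GaussianProcessRegressor; the kernel choice and all numbers are assumptions, since the abstract does not specify an implementation.

```python
# Sketch: tracking a slowly drifting error rate with Gaussian-process regression.
# The "observed" rates are synthetic; the paper's protocol extracts them from error
# correction data, which is not reproduced here.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(6)
t = np.linspace(0, 10, 40)[:, None]                 # time (arbitrary units)
true_rate = 0.01 + 0.004 * np.sin(0.6 * t.ravel())  # slowly drifting physical error rate
observed = true_rate + rng.normal(0, 0.001, t.shape[0])   # noisy estimates from syndromes

kernel = 1.0 * RBF(length_scale=2.0) + WhiteKernel(noise_level=1e-6)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(t, observed)

t_future = np.linspace(10, 12, 10)[:, None]
mean, std = gp.predict(t_future, return_std=True)
for ti, m, s in zip(t_future.ravel(), mean, std):
    print(f"t = {ti:5.2f}: predicted error rate {m:.4f} +/- {s:.4f}")
```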

  2. Transport equivalent diffusion constants for reflector region in PWRs

    International Nuclear Information System (INIS)

    Tahara, Yoshihisa; Sekimoto, Hiroshi

    2002-01-01

    The diffusion-theory-based nodal method is widely used in PWR core designs because of its high computing speed in three-dimensional calculations. The baffle/reflector (B/R) constants used in nodal calculations are usually calculated based on a one-dimensional transport calculation. However, to achieve high accuracy of assembly power prediction, a two-dimensional model is needed. For this reason, a method for calculating transport equivalent diffusion constants of reflector material was developed so that the neutron currents on the material boundaries could be calculated exactly in diffusion calculations. Two-dimensional B/R constants were calculated using the transport equivalent diffusion constants in the two-dimensional diffusion calculation whose geometry reflected the actual material configuration in the reflector region. The two-dimensional B/R constants enabled us to predict assembly power within an error of 1.5% at hot full power conditions. (author)

  3. Redundant measurements for controlling errors

    International Nuclear Information System (INIS)

    Ehinger, M.H.; Crawford, J.M.; Madeen, M.L.

    1979-07-01

    Current federal regulations for nuclear materials control require consideration of operating data as part of the quality control program and limits of error propagation. Recent work at the BNFP has revealed that operating data are subject to a number of measurement problems which are very difficult to detect and even more difficult to correct in a timely manner. Thus error estimates based on operational data reflect those problems. During the FY 1978 and FY 1979 R and D demonstration runs at the BNFP, redundant measurement techniques were shown to be effective in detecting these problems to allow corrective action. The net effect is a reduction in measurement errors and a significant increase in measurement sensitivity. Results show that normal operation process control measurements, in conjunction with routine accountability measurements, are sensitive problem indicators when incorporated in a redundant measurement program

  4. Large errors and severe conditions

    CERN Document Server

    Smith, D L; Van Wormer, L A

    2002-01-01

    Physical parameters that can assume real-number values over a continuous range are generally represented by inherently positive random variables. However, if the uncertainties in these parameters are significant (large errors), conventional means of representing and manipulating the associated variables can lead to erroneous results. Instead, all analyses involving them must be conducted in a probabilistic framework. Several issues must be considered: First, non-linear functional relations between primary and derived variables may lead to significant 'error amplification' (severe conditions). Second, the commonly used normal (Gaussian) probability distribution must be replaced by a more appropriate function that avoids the occurrence of negative sampling results. Third, both primary random variables and those derived through well-defined functions must be dealt with entirely in terms of their probability distributions. Parameter 'values' and 'errors' should be interpreted as specific moments of these probabil...
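
    A small Monte Carlo illustration of both points, with made-up numbers: a normal distribution assigned to an inherently positive parameter with a large relative error produces unphysical negative samples, while a nonlinear derived quantity amplifies the relative uncertainty.

```python
# Sketch: why large relative errors on positive parameters call for non-Gaussian sampling.
# Illustrates (i) negative samples from a normal distribution and (ii) error amplification
# through a nonlinear function (here y = x^2). Numbers are illustrative.
import numpy as np

rng = np.random.default_rng(7)
mean, rel_err = 1.0, 0.5                 # 50% relative uncertainty (a "large error")
n = 1_000_000

normal_x = rng.normal(mean, rel_err * mean, n)
print("fraction of negative (unphysical) normal samples: %.3f" % (normal_x < 0).mean())

# A lognormal with the same mean and relative standard deviation stays positive.
sigma2 = np.log(1.0 + rel_err**2)
lognormal_x = rng.lognormal(np.log(mean) - 0.5 * sigma2, np.sqrt(sigma2), n)

y = lognormal_x ** 2                     # nonlinear derived quantity (quadratic in x)
print("relative std of x:       %.2f" % (lognormal_x.std() / lognormal_x.mean()))
print("relative std of y = x^2: %.2f" % (y.std() / y.mean()))   # 'error amplification'
```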

  5. Negligence, genuine error, and litigation

    Directory of Open Access Journals (Sweden)

    Sohn DH

    2013-02-01

    Full Text Available David H Sohn, Department of Orthopedic Surgery, University of Toledo Medical Center, Toledo, OH, USA. Abstract: Not all medical injuries are the result of negligence. In fact, most medical injuries are the result either of the inherent risk in the practice of medicine, or of system errors, which cannot be prevented simply through fear of disciplinary action. This paper will discuss the differences between adverse events, negligence, and system errors; the current medical malpractice tort system in the United States; and review current and future solutions, including medical malpractice reform, alternative dispute resolution, health courts, and no-fault compensation systems. The current political environment favors investigation of non-cap tort reform remedies; investment into more rational oversight systems, such as health courts or no-fault systems, may reap both quantitative and qualitative benefits for a less costly and safer health system. Keywords: medical malpractice, tort reform, no fault compensation, alternative dispute resolution, system errors

  6. Spacecraft and propulsion technician error

    Science.gov (United States)

    Schultz, Daniel Clyde

    Commercial aviation and commercial space similarly launch, fly, and land passenger vehicles. Unlike aviation, the U.S. government has not established maintenance policies for commercial space. This study conducted a mixed methods review of 610 U.S. space launches from 1984 through 2011, which included 31 failures. An analysis of the failure causal factors showed that human error accounted for 76% of those failures, which included workmanship error accounting for 29% of the failures. With the imminent future of commercial space travel, the increased potential for the loss of human life demands that changes be made to the standardized procedures, training, and certification to reduce human error and failure rates. Several recommendations were made by this study to the FAA's Office of Commercial Space Transportation, space launch vehicle operators, and maintenance technician schools in an effort to increase the safety of the space transportation passengers.

  7. Sensation seeking and error processing.

    Science.gov (United States)

    Zheng, Ya; Sheng, Wenbin; Xu, Jing; Zhang, Yuanyuan

    2014-09-01

    Sensation seeking is defined by a strong need for varied, novel, complex, and intense stimulation, and a willingness to take risks for such experience. Several theories propose that the insensitivity to negative consequences incurred by risks is one of the hallmarks of sensation-seeking behaviors. In this study, we investigated the time course of error processing in sensation seeking by recording event-related potentials (ERPs) while high and low sensation seekers performed an Eriksen flanker task. Whereas there were no group differences in ERPs to correct trials, sensation seeking was associated with a blunted error-related negativity (ERN), which was female-specific. Further, different subdimensions of sensation seeking were related to ERN amplitude differently. These findings indicate that the relationship between sensation seeking and error processing is sex-specific. Copyright © 2014 Society for Psychophysiological Research.

  8. Errors of Inference Due to Errors of Measurement.

    Science.gov (United States)

    Linn, Robert L.; Werts, Charles E.

    Failure to consider errors of measurement when using partial correlation or analysis of covariance techniques can result in erroneous conclusions. Certain aspects of this problem are discussed and particular attention is given to issues raised in a recent article by Brewer, Campbell, and Crano. (Author)

  9. Measurement error models with uncertainty about the error variance

    NARCIS (Netherlands)

    Oberski, D.L.; Satorra, A.

    2013-01-01

    It is well known that measurement error in observable variables induces bias in estimates in standard regression analysis and that structural equation models are a typical solution to this problem. Often, multiple indicator equations are subsumed as part of the structural equation model, allowing

  10. Reward positivity: Reward prediction error or salience prediction error?

    Science.gov (United States)

    Heydari, Sepideh; Holroyd, Clay B

    2016-08-01

    The reward positivity is a component of the human ERP elicited by feedback stimuli in trial-and-error learning and guessing tasks. A prominent theory holds that the reward positivity reflects a reward prediction error signal that is sensitive to outcome valence, being larger for unexpected positive events relative to unexpected negative events (Holroyd & Coles, 2002). Although the theory has found substantial empirical support, most of these studies have utilized either monetary or performance feedback to test the hypothesis. However, in apparent contradiction to the theory, a recent study found that unexpected physical punishments also elicit the reward positivity (Talmi, Atkinson, & El-Deredy, 2013). The authors of this report argued that the reward positivity reflects a salience prediction error rather than a reward prediction error. To investigate this finding further, in the present study participants navigated a virtual T maze and received feedback on each trial under two conditions. In a reward condition, the feedback indicated that they would either receive a monetary reward or not and in a punishment condition the feedback indicated that they would receive a small shock or not. We found that the feedback stimuli elicited a typical reward positivity in the reward condition and an apparently delayed reward positivity in the punishment condition. Importantly, this signal was more positive to the stimuli that predicted the omission of a possible punishment relative to stimuli that predicted a forthcoming punishment, which is inconsistent with the salience hypothesis. © 2016 Society for Psychophysiological Research.

  11. ERROR HANDLING IN INTEGRATION WORKFLOWS

    Directory of Open Access Journals (Sweden)

    Alexey M. Nazarenko

    2017-01-01

    Full Text Available Simulation experiments performed while solving multidisciplinary engineering and scientific problems require joint usage of multiple software tools. Further, when following a preset plan of experiment or searching for optimum solutions, the same sequence of calculations is run multiple times with various simulation parameters, input data, or conditions while the overall workflow does not change. Automation of simulations like these requires implementing a workflow where tool execution and data exchange are usually controlled by a special type of software, an integration environment or platform. The result is an integration workflow (a platform-dependent implementation of some computing workflow) which, in the context of automation, is a composition of weakly coupled (in terms of communication intensity) typical subtasks. These compositions can then be decomposed back into a few workflow patterns (types of subtask interaction). The patterns, in their turn, can be interpreted as higher-level subtasks. This paper considers execution control and data exchange rules that should be imposed by the integration environment in the case of an error encountered by some integrated software tool. An error is defined as any abnormal behavior of a tool that invalidates its result data, thus disrupting the data flow within the integration workflow. The main requirement to the error handling mechanism implemented by the integration environment is to prevent abnormal termination of the entire workflow in case of missing intermediate results data. Error handling rules are formulated on the basic pattern level and on the level of a composite task that can combine several basic patterns as next-level subtasks. The cases where workflow behavior may be different, depending on the user's purposes, when an error takes place, and possible error handling options that can be specified by the user are also noted in the work.

  12. Analysis of Medication Error Reports

    Energy Technology Data Exchange (ETDEWEB)

    Whitney, Paul D.; Young, Jonathan; Santell, John; Hicks, Rodney; Posse, Christian; Fecht, Barbara A.

    2004-11-15

    In medicine, as in many areas of research, technological innovation and the shift from paper based information to electronic records has created a climate of ever increasing availability of raw data. There has been, however, a corresponding lag in our abilities to analyze this overwhelming mass of data, and classic forms of statistical analysis may not allow researchers to interact with data in the most productive way. This is true in the emerging area of patient safety improvement. Traditionally, a majority of the analysis of error and incident reports has been carried out based on an approach of data comparison, and starts with a specific question which needs to be answered. Newer data analysis tools have been developed which allow the researcher to not only ask specific questions but also to “mine” data: approach an area of interest without preconceived questions, and explore the information dynamically, allowing questions to be formulated based on patterns brought up by the data itself. Since 1991, United States Pharmacopeia (USP) has been collecting data on medication errors through voluntary reporting programs. USP’s MEDMARXsm reporting program is the largest national medication error database and currently contains well over 600,000 records. Traditionally, USP has conducted an annual quantitative analysis of data derived from “pick-lists” (i.e., items selected from a list of items) without an in-depth analysis of free-text fields. In this paper, the application of text analysis and data analysis tools used by Battelle to analyze the medication error reports already analyzed in the traditional way by USP is described. New insights and findings were revealed including the value of language normalization and the distribution of error incidents by day of the week. The motivation for this effort is to gain additional insight into the nature of medication errors to support improvements in medication safety.

  13. Medication errors: definitions and classification

    Science.gov (United States)

    Aronson, Jeffrey K

    2009-01-01

    To understand medication errors and to identify preventive strategies, we need to classify them and define the terms that describe them. The four main approaches to defining technical terms consider etymology, usage, previous definitions, and the Ramsey–Lewis method (based on an understanding of theory and practice). A medication error is ‘a failure in the treatment process that leads to, or has the potential to lead to, harm to the patient’. Prescribing faults, a subset of medication errors, should be distinguished from prescription errors. A prescribing fault is ‘a failure in the prescribing [decision-making] process that leads to, or has the potential to lead to, harm to the patient’. The converse of this, ‘balanced prescribing’ is ‘the use of a medicine that is appropriate to the patient's condition and, within the limits created by the uncertainty that attends therapeutic decisions, in a dosage regimen that optimizes the balance of benefit to harm’. This excludes all forms of prescribing faults, such as irrational, inappropriate, and ineffective prescribing, underprescribing and overprescribing. A prescription error is ‘a failure in the prescription writing process that results in a wrong instruction about one or more of the normal features of a prescription’. The ‘normal features’ include the identity of the recipient, the identity of the drug, the formulation, dose, route, timing, frequency, and duration of administration. Medication errors can be classified, invoking psychological theory, as knowledge-based mistakes, rule-based mistakes, action-based slips, and memory-based lapses. This classification informs preventive strategies. PMID:19594526

  14. Correcting quantum errors with entanglement.

    Science.gov (United States)

    Brun, Todd; Devetak, Igor; Hsieh, Min-Hsiu

    2006-10-20

    We show how entanglement shared between encoder and decoder can simplify the theory of quantum error correction. The entanglement-assisted quantum codes we describe do not require the dual-containing constraint necessary for standard quantum error-correcting codes, thus allowing us to "quantize" all of classical linear coding theory. In particular, efficient modern classical codes that attain the Shannon capacity can be made into entanglement-assisted quantum codes attaining the hashing bound (closely related to the quantum capacity). For systems without large amounts of shared entanglement, these codes can also be used as catalytic codes, in which a small amount of initial entanglement enables quantum communication.

  15. Human Error and Organizational Management

    Directory of Open Access Journals (Sweden)

    Alecxandrina DEACONU

    2009-01-01

    Full Text Available The concern for performance is a topic that raises interest in the business environment but also in other areas that – even if they seem distant from this world – are aware of, interested in or conditioned by the development of the economy. As individual performance is very much influenced by the human resource, we chose to analyze in this paper the mechanisms that generate – consciously or not – human error nowadays. Moreover, the extremely tense Romanian context, where failure is rather a rule than an exception, made us investigate the phenomenon of generating a human error and the ways to diminish its effects.

  16. Preventing statistical errors in scientific journals.

    NARCIS (Netherlands)

    Nuijten, M.B.

    2016-01-01

    There is evidence for a high prevalence of statistical reporting errors in psychology and other scientific fields. These errors display a systematic preference for statistically significant results, distorting the scientific literature. There are several possible causes for this systematic error

  17. The Sustained Influence of an Error on Future Decision-Making.

    Science.gov (United States)

    Schiffler, Björn C; Bengtsson, Sara L; Lundqvist, Daniel

    2017-01-01

    Post-error slowing (PES) is consistently observed in decision-making tasks after negative feedback. Yet, findings are inconclusive as to whether PES supports performance accuracy. We addressed the role of PES by employing drift diffusion modeling which enabled us to investigate latent processes of reaction times and accuracy on a large-scale dataset (>5,800 participants) of a visual search experiment with emotional face stimuli. In our experiment, post-error trials were characterized by both adaptive and non-adaptive decision processes. An adaptive increase in participants' response threshold was sustained over several trials post-error. Contrarily, an initial decrease in evidence accumulation rate, followed by an increase on the subsequent trials, indicates a momentary distraction of task-relevant attention and resulted in an initial accuracy drop. Higher values of decision threshold and evidence accumulation on the post-error trial were associated with higher accuracy on subsequent trials which further gives credence to these parameters' role in post-error adaptation. Finally, the evidence accumulation rate post-error decreased when the error trial presented angry faces, a finding suggesting that the post-error decision can be influenced by the error context. In conclusion, we demonstrate that error-related response adaptations are multi-component processes that change dynamically over several trials post-error.
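
    A bare-bones drift diffusion simulation (not the authors' fitted model) showing the adaptive component described above: raising the decision threshold after an error slows responses but raises accuracy. Drift, noise and threshold values below are illustrative.

```python
# Sketch: a basic drift-diffusion model showing how a raised post-error decision threshold
# trades speed for accuracy (post-error slowing). Parameters are illustrative; this is not
# the fitted model from the study.
import numpy as np

rng = np.random.default_rng(8)

def simulate_ddm(drift, threshold, n_trials=5000, noise=1.0, dt=0.001, max_t=5.0):
    n_steps = int(max_t / dt)
    evidence = np.zeros(n_trials)
    rt = np.full(n_trials, max_t)                 # trials that never terminate keep max_t
    resp_upper = np.zeros(n_trials, dtype=bool)   # upper boundary = correct response
    active = np.ones(n_trials, dtype=bool)
    for step in range(1, n_steps + 1):
        inc = drift * dt + noise * np.sqrt(dt) * rng.normal(size=n_trials)
        evidence[active] += inc[active]
        hit = active & (np.abs(evidence) >= threshold)
        rt[hit] = step * dt
        resp_upper[hit] = evidence[hit] >= threshold
        active &= ~hit
        if not active.any():
            break
    return rt.mean(), resp_upper.mean()

for label, a in (("baseline threshold", 1.0), ("raised post-error threshold", 1.4)):
    mean_rt, accuracy = simulate_ddm(drift=1.0, threshold=a)
    print(f"{label}: mean RT {mean_rt:.3f} s, accuracy {accuracy:.3f}")
```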

  18. The Sustained Influence of an Error on Future Decision-Making

    Directory of Open Access Journals (Sweden)

    Björn C. Schiffler

    2017-06-01

    Full Text Available Post-error slowing (PES is consistently observed in decision-making tasks after negative feedback. Yet, findings are inconclusive as to whether PES supports performance accuracy. We addressed the role of PES by employing drift diffusion modeling which enabled us to investigate latent processes of reaction times and accuracy on a large-scale dataset (>5,800 participants of a visual search experiment with emotional face stimuli. In our experiment, post-error trials were characterized by both adaptive and non-adaptive decision processes. An adaptive increase in participants’ response threshold was sustained over several trials post-error. Contrarily, an initial decrease in evidence accumulation rate, followed by an increase on the subsequent trials, indicates a momentary distraction of task-relevant attention and resulted in an initial accuracy drop. Higher values of decision threshold and evidence accumulation on the post-error trial were associated with higher accuracy on subsequent trials which further gives credence to these parameters’ role in post-error adaptation. Finally, the evidence accumulation rate post-error decreased when the error trial presented angry faces, a finding suggesting that the post-error decision can be influenced by the error context. In conclusion, we demonstrate that error-related response adaptations are multi-component processes that change dynamically over several trials post-error.

  19. Diffuse interstellar clouds

    International Nuclear Information System (INIS)

    Black, J.H.

    1987-01-01

    The author defines and discusses the nature of diffuse interstellar clouds. He discusses how they contribute to the general extinction of starlight. The atomic and molecular species that have been identified in the ultraviolet, visible, and near infrared regions of the spectrum of a diffuse cloud are presented. The author illustrates some of the practical considerations that affect absorption line observations of interstellar atoms and molecules. Various aspects of the theoretical description of diffuse clouds required for a full interpretation of the observations are discussed

  20. Infrared diffuse interstellar bands

    Science.gov (United States)

    Galazutdinov, G. A.; Lee, Jae-Joon; Han, Inwoo; Lee, Byeong-Cheol; Valyavin, G.; Krełowski, J.

    2017-05-01

    We present high-resolution (R ˜ 45 000) profiles of 14 diffuse interstellar bands in the ˜1.45 to ˜2.45 μm range based on spectra obtained with the Immersion Grating INfrared Spectrograph at the McDonald Observatory. The revised list of diffuse bands with accurately estimated rest wavelengths includes six new features. The diffuse band at 15 268.2 Å demonstrates a very symmetric profile shape and thus can serve as a reference for finding the 'interstellar correction' to the rest wavelength frame in the H range, which suffers from a lack of known atomic/molecular lines.

  1. Self diffusion in tungsten

    International Nuclear Information System (INIS)

    Mundy, J.N.; Rothman, S.J.; Lam, N.Q.; Nowicki, L.J.; Hoff, H.A.

    1978-01-01

    The lack of understanding of self-diffusion in Group VI metals together with the wide scatter in the measured values of tungsten self-diffusion has prompted the present measurements to be made over a wide temperature range (0.5 Tm to Tm). The diffusion coefficients have been measured in the temperature range 1430-2630 °C. The present measurements show non-linear Arrhenius behavior but a reliable two-exponential fit of the data should await further measurements. (Auth.)

  2. The numerical simulation of convection delayed dominated diffusion equation

    Directory of Open Access Journals (Sweden)

    Mohan Kumar P. Murali

    2016-01-01

    Full Text Available In this paper, we propose a fitted numerical method for solving a convection delayed dominated diffusion equation. A fitting factor is introduced and the model equation is discretized by a cubic spline method. The error analysis is carried out for the considered problem. The numerical examples are solved using the present method and the results are compared with the exact solution.

  3. Estimation of diffuse from measured global solar radiation

    International Nuclear Information System (INIS)

    Moriarty, W.W.

    1991-01-01

    A data set of quality controlled radiation observations from stations scattered throughout Australia was formed and further screened to remove residual doubtful observations. It was then divided into groups by solar elevation, and used to find average relationships for each elevation group between relative global radiation (clearness index - the measured global radiation expressed as a proportion of the radiation on a horizontal surface at the top of the atmosphere) and relative diffuse radiation. Clear-cut relationships were found, which were then fitted by polynomial expressions giving the relative diffuse radiation as a function of relative global radiation and solar elevation. When these expressions were used to estimate the diffuse radiation from the global, the results had a slightly smaller spread of errors than those from an earlier technique given by Spencer. It was found that the errors were related to cloud amount, and further relationships were developed giving the errors as functions of global radiation, solar elevation, and the fraction of sky obscured by high cloud and by opaque (low and middle level) cloud. When these relationships were used to adjust the first estimates of diffuse radiation, there was a considerable reduction in the number of large errors
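
    The estimation described above maps a clearness index to a diffuse fraction. The author's own polynomial (which also depends on solar elevation and, for the corrections, on cloud fractions) is not reproduced in the abstract, so the Python sketch below uses the commonly quoted Erbs-type correlation purely to show the shape of such a relationship; the radiation values are assumed.

```python
# Sketch: estimating the diffuse fraction from the clearness index. The piecewise
# correlation below is the commonly quoted Erbs et al. form, used here only to show the
# shape of such a relationship; it is not the elevation- and cloud-dependent polynomial
# developed in the paper.
import numpy as np

def diffuse_fraction(kt):
    """Diffuse fraction kd as a function of clearness index kt (Erbs-type correlation)."""
    kt = np.asarray(kt, dtype=float)
    kd = np.where(
        kt <= 0.22,
        1.0 - 0.09 * kt,
        0.9511 - 0.1604 * kt + 4.388 * kt**2 - 16.638 * kt**3 + 12.336 * kt**4,
    )
    return np.where(kt > 0.80, 0.165, kd)

global_rad = np.array([150.0, 400.0, 750.0])   # measured global radiation, W/m^2 (assumed)
extraterrestrial = 1000.0                      # horizontal extraterrestrial radiation (assumed)
kt = global_rad / extraterrestrial
diffuse = diffuse_fraction(kt) * global_rad
for k, g, d in zip(kt, global_rad, diffuse):
    print(f"kt = {k:.2f}: global {g:5.0f} W/m^2 -> diffuse ~ {d:5.0f} W/m^2")
```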

  4. Medication errors in pediatric inpatients

    DEFF Research Database (Denmark)

    Rishoej, Rikke Mie; Almarsdóttir, Anna Birna; Christesen, Henrik Thybo

    2017-01-01

    The aim was to describe medication errors (MEs) in hospitalized children reported to the national mandatory reporting and learning system, the Danish Patient Safety Database (DPSD). MEs were extracted from DPSD from the 5-year period of 2010–2014. We included reports from public hospitals on pati...... safety in pediatric inpatients.(Table presented.)...

  5. Learner Corpora without Error Tagging

    Directory of Open Access Journals (Sweden)

    Rastelli, Stefano

    2009-01-01

    Full Text Available The article explores the possibility of adopting a form-to-function perspective when annotating learner corpora in order to get deeper insights about systematic features of interlanguage. A split between forms and functions (or categories) is desirable in order to avoid the "comparative fallacy" and because – especially in basic varieties – forms may precede functions (e.g., what resembles a "noun" might have a different function, or a function may show up in unexpected forms). In the computer-aided error analysis tradition, all items produced by learners are traced to a grid of error tags which is based on the categories of the target language. In contrast, we believe it is possible to record and make retrievable both words and sequences of characters independently of their functional-grammatical label in the target language. For this purpose, at the University of Pavia we adapted a probabilistic POS tagger designed for L1 to L2 data. Despite the criticism that this operation can raise, we found that it is better to work with "virtual categories" rather than with errors. The article outlines the theoretical background of the project and shows some examples in which the potential of SLA-oriented (non-error-based) tagging is made clearer.

  6. Theory of Test Translation Error

    Science.gov (United States)

    Solano-Flores, Guillermo; Backhoff, Eduardo; Contreras-Nino, Luis Angel

    2009-01-01

    In this article, we present a theory of test translation whose intent is to provide the conceptual foundation for effective, systematic work in the process of test translation and test translation review. According to the theory, translation error is multidimensional; it is not simply the consequence of defective translation but an inevitable fact…

  7. and Correlated Error-Regressor

    African Journals Online (AJOL)

    Nekky Umera

    in queuing theory and econometrics, where the usual assumption of independent error terms may not be plausible in most cases. Also, when using time-series data on a number of micro-economic units, such as households and service oriented channels, where the stochastic disturbance terms in part reflect variables which ...

  8. Rank error-correcting pairs

    DEFF Research Database (Denmark)

    Martinez Peñas, Umberto; Pellikaan, Ruud

    2017-01-01

    Error-correcting pairs were introduced as a general method of decoding linear codes with respect to the Hamming metric using coordinatewise products of vectors, and are used for many well-known families of codes. In this paper, we define new types of vector products, extending the coordinatewise ...

  9. Clinical errors and medical negligence.

    Science.gov (United States)

    Oyebode, Femi

    2013-01-01

    This paper discusses the definition, nature and origins of clinical errors including their prevention. The relationship between clinical errors and medical negligence is examined as are the characteristics of litigants and events that are the source of litigation. The pattern of malpractice claims in different specialties and settings is examined. Among hospitalized patients worldwide, 3-16% suffer injury as a result of medical intervention, the most common being the adverse effects of drugs. The frequency of adverse drug effects appears superficially to be higher in intensive care units and emergency departments but once rates have been corrected for volume of patients, comorbidity of conditions and number of drugs prescribed, the difference is not significant. It is concluded that probably no more than 1 in 7 adverse events in medicine result in a malpractice claim and the factors that predict that a patient will resort to litigation include a prior poor relationship with the clinician and the feeling that the patient is not being kept informed. Methods for preventing clinical errors are still in their infancy. The most promising include new technologies such as electronic prescribing systems, diagnostic and clinical decision-making aids and error-resistant systems. Copyright © 2013 S. Karger AG, Basel.

  10. Finding errors in big data

    NARCIS (Netherlands)

    Puts, Marco; Daas, Piet; de Waal, A.G.

    No data source is perfect. Mistakes inevitably creep in. Spotting errors is hard enough when dealing with survey responses from several thousand people, but the difficulty is multiplied hugely when that mysterious beast Big Data comes into play. Statistics Netherlands is about to publish its first

  11. The Errors of Our Ways

    Science.gov (United States)

    Kane, Michael

    2011-01-01

    Errors don't exist in our data, but they serve a vital function. Reality is complicated, but our models need to be simple in order to be manageable. We assume that attributes are invariant over some conditions of observation, and once we do that we need some way of accounting for the variability in observed scores over these conditions of…

  12. Cascade Error Projection Learning Algorithm

    Science.gov (United States)

    Duong, T. A.; Stubberud, A. R.; Daud, T.

    1995-01-01

    A detailed mathematical analysis is presented for a new learning algorithm termed cascade error projection (CEP) and a general learning framework. This framework can be used to obtain the cascade correlation learning algorithm by choosing a particular set of parameters.

  13. Diffusion of Wilson loops

    International Nuclear Information System (INIS)

    Brzoska, A.M.; Lenz, F.; Thies, M.; Negele, J.W.

    2005-01-01

    A phenomenological analysis of the distribution of Wilson loops in SU(2) Yang-Mills theory is presented in which Wilson loop distributions are described as the result of a diffusion process on the group manifold. It is shown that, in the absence of forces, diffusion implies Casimir scaling and, conversely, exact Casimir scaling implies free diffusion. Screening processes occur if diffusion takes place in a potential. The crucial distinction between screening of fundamental and adjoint loops is formulated as a symmetry property related to the center symmetry of the underlying gauge theory. The results are expressed in terms of an effective Wilson loop action and compared with various limits of SU(2) Yang-Mills theory

  14. Diffusion between evolving interfaces

    International Nuclear Information System (INIS)

    Juntunen, Janne; Merikoski, Juha

    2010-01-01

    Diffusion in an evolving environment is studied by continuous-time Monte Carlo simulations. Diffusion is modeled by continuous-time random walkers on a lattice, in a dynamic environment provided by bubbles between two one-dimensional interfaces driven symmetrically towards each other. For one-dimensional random walkers constrained by the interfaces, the bubble size distribution dominates diffusion. For two-dimensional random walkers, it is also controlled by the topography and dynamics of the interfaces. The results of the one-dimensional case are recovered in the limit where the interfaces are strongly driven. Even with simple hard-core repulsion between the interfaces and the particles, diffusion is found to depend strongly on the details of the dynamical rules of particles close to the interfaces.

  15. On Diffusion and Permeation

    KAUST Repository

    Peppin, Stephen S. L.

    2009-01-01

    concentrations they form a nearly rigid porous glass through which the fluid permeates. The theoretically determined pressure drop is nonlinear in the diffusion regime and linear in the permeation regime, in quantitative agreement with experimental measurements

  16. Diffusing Best Practices

    DEFF Research Database (Denmark)

    Pries-Heje, Jan; Baskerville, Richard

    2014-01-01

    approach. The study context is a design case in which an organization desires to diffuse its best practices across different groups. The design goal is embodied in organizational mechanisms to achieve this diffusion. The study used Theory of Planned Behavior (TPB) as a kernel theory. The artifacts...... resulting from the design were two-day training workshops conceptually anchored to TBP. The design theory was evaluated through execution of eight diffusion workshops involving three different groups in the same company. The findings indicate that the match between the practice and the context materialized...... that the behavior will be effective). These two factors were especially critical if the source context of the best practice is qualitatively different from the target context into which the organization is seeking to diffuse the best practice....

  17. Detection of diffusible substances

    Energy Technology Data Exchange (ETDEWEB)

    Warembourg, M [Lille-1 Univ., 59 - Villeneuve-d' Ascq (France)

    1976-12-01

    The different steps of a radioautographic technique for the detection of diffusible substances are described. Using this radioautographic method, the topographic distribution of estradiol-concentrating neurons was studied in the nervous system and pituitary of the ovariectomized mouse and guinea-pig. A relatively good morphological preservation of structures can be ascertained on sections from unfixed, unembedded tissues prepared at low temperatures and kept under relatively low humidity. The translocation or extraction of diffusible substances is avoided by directly mounting frozen sections on dried photographic emulsion. Since no solvent is used, this technique excludes the major sources of diffusion artifacts and provides favourable conditions for the localization of diffusible substances.

  18. On Diffusion and Permeation

    KAUST Repository

    Peppin, Stephen S. L.

    2009-01-01

    Diffusion and permeation are discussed within the context of irreversible thermodynamics. A new expression for the generalized Stokes-Einstein equation is obtained which links the permeability to the diffusivity of a two-component solution and contains the poroelastic Biot-Willis coefficient. The theory is illustrated by predicting the concentration and pressure profiles during the filtration of a protein solution. At low concentrations the proteins diffuse independently while at higher concentrations they form a nearly rigid porous glass through which the fluid permeates. The theoretically determined pressure drop is nonlinear in the diffusion regime and linear in the permeation regime, in quantitative agreement with experimental measurements. © 2009 Walter de Gruyter, Berlin, New York.

  19. Drift-Diffusion Equation

    Directory of Open Access Journals (Sweden)

    K. Banoo

    1998-01-01

    equation in the discrete momentum space. This is shown to be similar to the conventional drift-diffusion equation except that it is a more rigorous solution to the Boltzmann equation because the current and carrier densities are resolved into M×1 vectors, where M is the number of modes in the discrete momentum space. The mobility and diffusion coefficient become M×M matrices which connect the M momentum space modes. This approach is demonstrated by simulating electron transport in bulk silicon.

  20. Advanced manufacturing: Technology diffusion

    Energy Technology Data Exchange (ETDEWEB)

    Tesar, A.

    1995-12-01

    In this paper we examine how manufacturing technology diffuses from the developers of technology across national borders to those who do not have the capability or resources to develop advanced technology on their own. None of the wide variety of technology diffusion mechanisms discussed in this paper are new, yet the opportunities to apply these mechanisms are growing. A dramatic increase in technology diffusion occurred over the last decade. The two major trends which probably drive this increase are a worldwide inclination towards "freer" markets and diminishing isolation. Technology is most rapidly diffusing from the US. In fact, the US is supplying technology for the rest of the world. The value of the technology supplied by the US more than doubled from 1985 to 1992 (see the Introduction for details). History shows us that technology diffusion is inevitable. It is the rates at which technologies diffuse to other countries which can vary considerably. Manufacturers in these countries are increasingly able to absorb technology. Their manufacturing efficiency is expected to progress as technology becomes increasingly available and utilized.

  1. Now consider diffusion

    International Nuclear Information System (INIS)

    Dungey, J.W.

    1984-01-01

    The author wants to talk about future work, but first he replies to Stan Cowley's comment on his naivety in believing in the whole story to 99% confidence in '65, when he knew about Fairfield's results. Does it matter whether you make the right judgment about theories? Yes, it does, particularly for experimentalists perhaps, but also for theorists. The work you do later depends on the judgment you've made on previous work. People have wasted a lot of time developing on insecure or even wrong foundations. Now for future work. One mild surprise the author has had is that he hasn't heard more about diffusion, in two contexts. Gordon Rostoker is yet to come and he may talk about particles getting into the magnetosphere by diffusion. Lots of noise is observed and so diffusion must happen. If time had not been short, the author was planning to discuss in a hand-waving way what sort of diffusion mechanisms one might consider. The other aspect of diffusion he was going to talk about is at the other end of things and is velocity diffusion, which is involved in anomalous resistivity.

  2. Error and its meaning in forensic science.

    Science.gov (United States)

    Christensen, Angi M; Crowder, Christian M; Ousley, Stephen D; Houck, Max M

    2014-01-01

    The discussion of "error" has gained momentum in forensic science in the wake of the Daubert guidelines and has intensified with the National Academy of Sciences' Report. Error has many different meanings, and too often, forensic practitioners themselves as well as the courts misunderstand scientific error and statistical error rates, often confusing them with practitioner error (or mistakes). Here, we present an overview of these concepts as they pertain to forensic science applications, discussing the difference between practitioner error (including mistakes), instrument error, statistical error, and method error. We urge forensic practitioners to ensure that potential sources of error and method limitations are understood and clearly communicated and advocate that the legal community be informed regarding the differences between interobserver errors, uncertainty, variation, and mistakes. © 2013 American Academy of Forensic Sciences.

  3. A methodology for translating positional error into measures of attribute error, and combining the two error sources

    Science.gov (United States)

    Yohay Carmel; Curtis Flather; Denis Dean

    2006-01-01

    This paper summarizes our efforts to investigate the nature, behavior, and implications of positional error and attribute error in spatiotemporal datasets. Estimating the combined influence of these errors on map analysis has been hindered by the fact that these two error types are traditionally expressed in different units (distance units, and categorical units,...

  4. Diffuse solar radiation estimation models for Turkey's big cities

    International Nuclear Information System (INIS)

    Ulgen, Koray; Hepbasli, Arif

    2009-01-01

    literature in terms of the widely used statistical indicators, namely: the relative percentage error (E), the coefficient of determination (R²), the mean percentage error (MPE), the mean absolute percentage error (MAPE), the sum of the squares of relative errors (SSRE), the relative standard error (RSE), the mean bias error (MBE), the root mean square error (RMSE), and the t-statistic (t-stat) method combining the last two errors. It may be concluded that the new models predict the values of cloudiness index (K_d) and diffuse coefficient (K_dd) as a function of clearness index (K_T) and sunshine fraction (S/S_o) for three big cities in Turkey better than other available models, while all the models tested appear to be location-independent models for diffuse radiation predictions, at least for three big cities in Turkey. It is also expected that the models reviewed and developed will be beneficial to everyone involved or interested in the design and study of solar energy
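
    For reference, a small sketch computing several of the indicators listed above (MBE, RMSE, MAPE and the t-statistic that combines MBE and RMSE) from paired predicted and measured values; the sample numbers are illustrative only and the remaining indicators follow the same pattern.

```python
import math

def error_statistics(predicted, measured):
    """Common validation statistics used for solar-radiation models:
    mean bias error (MBE), root mean square error (RMSE), mean absolute
    percentage error (MAPE) and the t-statistic combining MBE and RMSE."""
    n = len(measured)
    diffs = [p - m for p, m in zip(predicted, measured)]
    mbe = sum(diffs) / n
    rmse = math.sqrt(sum(d * d for d in diffs) / n)
    mape = 100.0 * sum(abs(d) / abs(m) for d, m in zip(diffs, measured)) / n
    # t-statistic as commonly defined for radiation-model validation
    # (the smaller t is, the better the model performance).
    t_stat = math.sqrt((n - 1) * mbe ** 2 / (rmse ** 2 - mbe ** 2))
    return {"MBE": mbe, "RMSE": rmse, "MAPE": mape, "t": t_stat}

measured = [3.1, 4.2, 5.0, 4.7, 3.9]    # e.g. daily diffuse radiation, kWh/m^2
predicted = [3.0, 4.5, 4.8, 4.9, 3.8]
print(error_statistics(predicted, measured))
```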

  5. Understanding Human Error in Naval Aviation Mishaps.

    Science.gov (United States)

    Miranda, Andrew T

    2018-04-01

    To better understand the external factors that influence the performance and decisions of aviators involved in Naval aviation mishaps. Mishaps in complex activities, ranging from aviation to nuclear power operations, are often the result of interactions between multiple components within an organization. The Naval aviation mishap database contains relevant information, both in quantitative statistics and qualitative reports, that permits analysis of such interactions to identify how the working atmosphere influences aviator performance and judgment. Results from 95 severe Naval aviation mishaps that occurred from 2011 through 2016 were analyzed using Bayes' theorem probability formula. Then a content analysis was performed on a subset of relevant mishap reports. Out of the 14 latent factors analyzed, the Bayes' application identified 6 that impacted specific aspects of aviator behavior during mishaps. Technological environment, misperceptions, and mental awareness impacted basic aviation skills. The remaining 3 factors were used to inform a content analysis of the contextual information within mishap reports. Teamwork failures were the result of plan continuation aggravated by diffused responsibility. Resource limitations and risk management deficiencies impacted judgments made by squadron commanders. The application of Bayes' theorem to historical mishap data revealed the role of latent factors within Naval aviation mishaps. Teamwork failures were seen to be considerably damaging to both aviator skill and judgment. Both the methods and findings have direct application for organizations interested in understanding the relationships between external factors and human error. It presents real-world evidence to promote effective safety decisions.
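
    As a reminder of the probability formula the study applies, a minimal Bayes' theorem sketch; the factor names and probabilities below are purely illustrative and are not taken from the Naval mishap database.

```python
def posterior(prior, likelihood, evidence):
    """Bayes' theorem: P(factor | mishap aspect) =
    P(mishap aspect | factor) * P(factor) / P(mishap aspect)."""
    return likelihood * prior / evidence

# Illustrative numbers only (not taken from the mishap reports):
# prior prevalence of a latent factor, probability of observing a skill-based
# error given that factor, and overall probability of a skill-based error.
p_factor = 0.30
p_error_given_factor = 0.60
p_error = 0.45
print(f"P(factor | error) = {posterior(p_factor, p_error_given_factor, p_error):.2f}")
```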

  6. Color Histogram Diffusion for Image Enhancement

    Science.gov (United States)

    Kim, Taemin

    2011-01-01

    Various color histogram equalization (CHE) methods have been proposed to extend grayscale histogram equalization (GHE) for color images. In this paper a new method called histogram diffusion that extends the GHE method to arbitrary dimensions is proposed. Ranges in a histogram are specified as overlapping bars of uniform heights and variable widths which are proportional to their frequencies. This diagram is called the vistogram. As an alternative approach to GHE, the squared error of the vistogram from the uniform distribution is minimized. Each bar in the vistogram is approximated by a Gaussian function. Gaussian particles in the vistogram diffuse as a nonlinear autonomous system of ordinary differential equations. CHE results of color images showed that the approach is effective.
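
    The method extends grayscale histogram equalization (GHE); as background, here is a minimal sketch of plain GHE on an 8-bit image, not the vistogram/diffusion approach proposed in the paper.

```python
import numpy as np

def grayscale_histogram_equalization(img):
    """Baseline GHE on an 8-bit grayscale image: map each level through the
    normalized cumulative histogram so the output histogram is roughly flat."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum().astype(np.float64)
    cdf_min = cdf[cdf > 0][0]                      # first non-zero bin of the CDF
    lut = np.clip(np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255),
                  0, 255).astype(np.uint8)
    return lut[img]

# Example: equalize a synthetic low-contrast image.
rng = np.random.default_rng(0)
img = rng.integers(100, 156, size=(64, 64), dtype=np.uint8)   # narrow intensity range
eq = grayscale_histogram_equalization(img)
print(eq.min(), eq.max())   # output now spans (nearly) the full 0-255 range
```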

  7. WWER radial reflector modeling by diffusion codes

    International Nuclear Information System (INIS)

    Petkov, P. T.; Mittag, S.

    2005-01-01

    The two commonly used approaches to describe the WWER radial reflectors in diffusion codes, by albedo on the core-reflector boundary and by a ring of diffusive assembly-size nodes, are discussed. The advantages and disadvantages of the first approach are presented first, then Koebke's equivalence theory is outlined and its implementation for the WWER radial reflectors is discussed. Results for the WWER-1000 reactor are presented. Then the boundary conditions on the outer reflector boundary are discussed. The possibility of dividing the library into fuel assembly and reflector parts and of generating each library with a separate code package is discussed. Finally, the homogenization errors for rodded assemblies are presented and discussed (Author)

  8. An Adaptive Approach to Variational Nodal Diffusion Problems

    International Nuclear Information System (INIS)

    Zhang Hui; Lewis, E.E.

    2001-01-01

    An adaptive grid method is presented for the solution of neutron diffusion problems in two dimensions. The primal hybrid finite elements employed in the variational nodal method are used to reduce the diffusion equation to a coupled set of elemental response matrices. An a posteriori error estimator is developed to indicate the magnitude of local errors stemming from the low-order elemental interface approximations. An iterative procedure is implemented in which p refinement is applied locally by increasing the polynomial order of the interface approximations. The automated algorithm utilizes the a posteriori estimator to achieve local error reductions until an acceptable level of accuracy is reached throughout the problem domain. Application to a series of X-Y benchmark problems indicates the reduction of computational effort achievable by replacing uniform with adaptive refinement of the spatial approximations

  9. Greedy algorithms for diffuse optical tomography reconstruction

    Science.gov (United States)

    Dileep, B. P. V.; Das, Tapan; Dutta, Pranab K.

    2018-03-01

    Diffuse optical tomography (DOT) is a noninvasive imaging modality that reconstructs the optical parameters of a highly scattering medium. However, the inverse problem of DOT is ill-posed and highly nonlinear due to the zig-zag propagation of photons that diffuse through the cross section of tissue. The conventional DOT imaging methods iteratively compute the solution of a forward diffusion equation solver, which makes the problem computationally expensive. Also, these methods fail when the geometry is complex. Recently, the theory of compressive sensing (CS) has received considerable attention because of its efficient use in biomedical imaging applications. The objective of this paper is to solve a given DOT inverse problem by using the compressive sensing framework; various greedy algorithms such as orthogonal matching pursuit (OMP), compressive sampling matching pursuit (CoSaMP), stagewise orthogonal matching pursuit (StOMP), regularized orthogonal matching pursuit (ROMP) and simultaneous orthogonal matching pursuit (S-OMP) have been studied to reconstruct the change in the absorption parameter, i.e., Δα, from the boundary data. Also, the greedy algorithms have been validated experimentally on a paraffin wax rectangular phantom through a well-designed experimental set up. We also have studied the conventional DOT methods like the least square method and truncated singular value decomposition (TSVD) for comparison. One of the main features of this work is the usage of a smaller number of source-detector pairs, which can facilitate the use of DOT in routine screening applications. The performance metrics such as mean square error (MSE), normalized mean square error (NMSE), structural similarity index (SSIM), and peak signal to noise ratio (PSNR) have been used to evaluate the performance of the algorithms mentioned in this paper. Extensive simulation results confirm that CS based DOT reconstruction outperforms the conventional DOT imaging methods in terms of
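
    A minimal sketch of the greedy idea behind OMP for a generic sparse recovery problem y = Ax; this is not the paper's DOT forward model or experimental setup, just the core algorithm under standard assumptions.

```python
import numpy as np

def orthogonal_matching_pursuit(A, y, sparsity):
    """Minimal OMP sketch: greedily pick the column of the sensing matrix A
    most correlated with the residual, then re-fit by least squares."""
    m, n = A.shape
    residual = y.copy()
    support = []
    x = np.zeros(n)
    for _ in range(sparsity):
        correlations = np.abs(A.T @ residual)
        support.append(int(np.argmax(correlations)))
        coeffs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coeffs
    x[support] = coeffs
    return x

# Toy example: recover a 3-sparse vector from 20 random measurements.
rng = np.random.default_rng(1)
A = rng.standard_normal((20, 50))
x_true = np.zeros(50)
x_true[[4, 17, 33]] = [1.0, -2.0, 0.5]
x_hat = orthogonal_matching_pursuit(A, A @ x_true, sparsity=3)
print(np.flatnonzero(np.abs(x_hat) > 1e-6))   # recovered support: [4 17 33]
```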

  10. Lead diffusion in monazite

    International Nuclear Information System (INIS)

    Gardes, E.

    2006-06-01

    Proper knowledge of the diffusion rates of lead in monazite is necessary to understand the U-Th-Pb age anomalies of this mineral, which is one of the most used in geochronology after zircon. Diffusion experiments were performed in NdPO4 monocrystals and in Nd0.66Ca0.17Th0.17PO4 polycrystals from Nd0.66Pb0.17Th0.17PO4 thin films to investigate the Pb²⁺ + Th⁴⁺ ↔ 2 Nd³⁺ and Pb²⁺ ↔ Ca²⁺ exchanges. Diffusion annealings were run between 1200 and 1500 °C, at room pressure, for durations ranging from one hour to one month. The diffusion profiles were analysed using TEM (transmission electron microscopy) and RBS (Rutherford backscattering spectroscopy). The diffusivities extracted for the Pb²⁺ + Th⁴⁺ ↔ 2 Nd³⁺ exchange follow an Arrhenius law with parameters E = 509 ± 24 kJ mol⁻¹ and log(D₀ (m² s⁻¹)) = -3.41 ± 0.77. Preliminary data for the Pb²⁺ ↔ Ca²⁺ exchange are in agreement with this result. The extrapolation of our data to crustal temperatures yields very slow diffusivities. For instance, the time necessary for a 50 μm grain to lose all of its lead at 800 °C is greater than the age of the Earth. From these results and other evidence from the literature, we conclude that most of the perturbations in U-Th-Pb ages of monazite cannot be attributed to lead diffusion, but rather to interactions with fluids. (author)
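
    A short back-of-the-envelope check of the closing statement above, assuming the quoted Arrhenius parameters and the usual order-of-magnitude diffusion time t ≈ a²/D for a grain of radius a; the grain radius and unit conversion are illustrative choices, not values taken from the paper.

```python
import math

R = 8.314                   # gas constant, J/(mol K)
E = 509e3                   # activation energy from the abstract, J/mol
logD0 = -3.41               # log10 of the pre-exponential factor, m^2/s

def diffusivity(T_kelvin):
    """Arrhenius law D = D0 * exp(-E / (R T)) with the parameters above."""
    return 10 ** logD0 * math.exp(-E / (R * T_kelvin))

T = 800 + 273.15            # 800 deg C in kelvin
D = diffusivity(T)
a = 25e-6                   # radius of a 50 um grain, m (illustrative)
t_seconds = a ** 2 / D      # order-of-magnitude diffusion time, t ~ a^2 / D
t_years = t_seconds / 3.156e7
print(f"D(800 C) ~ {D:.1e} m^2/s, characteristic Pb loss time ~ {t_years:.1e} yr")
# ~1e11-1e12 yr, i.e. far longer than the ~4.5e9 yr age of the Earth,
# consistent with the abstract's conclusion.
```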

  11. Discretization vs. Rounding Error in Euler's Method

    Science.gov (United States)

    Borges, Carlos F.

    2011-01-01

    Euler's method for solving initial value problems is an excellent vehicle for observing the relationship between discretization error and rounding error in numerical computation. Reductions in stepsize, in order to decrease discretization error, necessarily increase the number of steps and so introduce additional rounding error. The problem is…
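
    A minimal sketch of the trade-off described above, applied to the test problem y' = y with exact solution e^t (an illustrative choice, not necessarily the example used in the article).

```python
import math

def euler_error(h):
    """Global error at t = 1 of Euler's method applied to y' = y, y(0) = 1,
    whose exact solution is e^t."""
    n = round(1.0 / h)
    y = 1.0
    for _ in range(n):
        y += h * y
    return abs(y - math.e)

# Halving the step size roughly halves the discretization error (first-order
# method) -- but each halving doubles the number of steps, so with extremely
# small steps the accumulated rounding error would eventually dominate.
for h in [0.1, 0.05, 0.025, 0.0125]:
    print(f"h = {h:<7} error = {euler_error(h):.2e}")
```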

  12. Total Survey Error for Longitudinal Surveys

    NARCIS (Netherlands)

    Lynn, Peter; Lugtig, P.J.

    2016-01-01

    This article describes the application of the total survey error paradigm to longitudinal surveys. Several aspects of survey error, and of the interactions between different types of error, are distinct in the longitudinal survey context. Furthermore, error trade-off decisions in survey design and

  13. Diffusion tensor MR microscopy of tissues with low diffusional anisotropy.

    Science.gov (United States)

    Bajd, Franci; Mattea, Carlos; Stapf, Siegfried; Sersa, Igor

    2016-06-01

    Diffusion tensor imaging exploits the preferential diffusional motion of water molecules residing within tissue compartments for the assessment of tissue structural anisotropy. However, instrumentation and post-processing errors play an important role in the determination of diffusion tensor elements. In the study, several experimental factors affecting the accuracy of diffusion tensor determination were analyzed. Effects of signal-to-noise ratio and configuration of the applied diffusion-sensitizing gradients on fractional anisotropy bias were analyzed by means of numerical simulations. In addition, diffusion tensor magnetic resonance microscopy experiments were performed on a tap water phantom and bovine articular cartilage-on-bone samples to verify the simulation results. In both the simulations and the experiments, the multivariate linear regression of the diffusion-tensor analysis yielded overestimated fractional anisotropy with low SNRs and with low numbers of applied diffusion-sensitizing gradients. An increase of the apparent fractional anisotropy due to unfavorable experimental conditions can be overcome by applying a larger number of diffusion-sensitizing gradients with small values of the condition number of the transformation matrix. This is in particular relevant in magnetic resonance microscopy, where imaging gradients are high and the signal-to-noise ratio is low.
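
    For context, a small sketch of the standard fractional anisotropy formula computed from the eigenvalues of a diffusion tensor; the eigenvalue sets below are illustrative, and the bias analysis in the paper concerns how noise and gradient schemes distort this quantity.

```python
import numpy as np

def fractional_anisotropy(eigenvalues):
    """Fractional anisotropy from the three eigenvalues of a diffusion tensor:
    FA = sqrt(3/2) * ||lambda - mean(lambda)|| / ||lambda||."""
    lam = np.asarray(eigenvalues, dtype=float)
    md = lam.mean()                                 # mean diffusivity
    return np.sqrt(1.5) * np.linalg.norm(lam - md) / np.linalg.norm(lam)

print(fractional_anisotropy([1.7e-3, 0.3e-3, 0.3e-3]))   # strongly anisotropic (~0.8)
print(fractional_anisotropy([1.0e-3, 1.0e-3, 1.0e-3]))   # isotropic -> 0.0
```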

  14. Negligence, genuine error, and litigation

    Science.gov (United States)

    Sohn, David H

    2013-01-01

    Not all medical injuries are the result of negligence. In fact, most medical injuries are the result either of the inherent risk in the practice of medicine, or due to system errors, which cannot be prevented simply through fear of disciplinary action. This paper will discuss the differences between adverse events, negligence, and system errors; the current medical malpractice tort system in the United States; and review current and future solutions, including medical malpractice reform, alternative dispute resolution, health courts, and no-fault compensation systems. The current political environment favors investigation of non-cap tort reform remedies; investment into more rational oversight systems, such as health courts or no-fault systems may reap both quantitative and qualitative benefits for a less costly and safer health system. PMID:23426783

  15. Robot learning and error correction

    Science.gov (United States)

    Friedman, L.

    1977-01-01

    A model of robot learning is described that associates previously unknown perceptions with the sensed known consequences of robot actions. For these actions, both the categories of outcomes and the corresponding sensory patterns are incorporated in a knowledge base by the system designer. Thus the robot is able to predict the outcome of an action and compare the expectation with the experience. New knowledge about what to expect in the world may then be incorporated by the robot in a pre-existing structure whether it detects accordance or discrepancy between a predicted consequence and experience. Errors committed during plan execution are detected by the same type of comparison process and learning may be applied to avoiding the errors.

  16. Error studies of Halbach Magnets

    Energy Technology Data Exchange (ETDEWEB)

    Brooks, S. [Brookhaven National Lab. (BNL), Upton, NY (United States)

    2017-03-02

    These error studies were done on the Halbach magnets for the CBETA “First Girder” as described in note [CBETA001]. The CBETA magnets have since changed slightly to the lattice in [CBETA009]. However, this is not a large enough change to significantly affect the results here. The QF and BD arc FFAG magnets are considered. For each assumed set of error distributions and each ideal magnet, 100 random magnets with errors are generated. These are then run through an automated version of the iron wire multipole cancellation algorithm. The maximum wire diameter allowed is 0.063” as in the proof-of-principle magnets. Initially, 32 wires (2 per Halbach wedge) are tried; then, if this does not achieve 1e-4 level accuracy in the simulation, 48 and then 64 wires. By “1e-4 accuracy”, it is meant that the FOM defined by √(Σ_{n≥sextupole} (a_n² + b_n²)) is less than 1 unit, where the multipoles are taken at the maximum nominal beam radius, R = 23 mm for these magnets. The algorithm initially uses 20 convergence iterations. If 64 wires does not achieve 1e-4 accuracy, this is increased to 50 iterations to check for slowly converging cases. There are also classifications for magnets that do not achieve 1e-4 but do achieve 1e-3 (FOM ≤ 10 units). This is technically within the spec discussed in the Jan 30, 2017 review; however, there will be errors in practical shimming not dealt with in the simulation, so it is preferable to do much better than the spec in the simulation.
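
    A small sketch of the figure of merit as defined above, assuming the multipole coefficients (a_n, b_n) are already known in "units" at the nominal radius; the sample coefficients are illustrative only.

```python
import math

def multipole_fom(multipoles, lowest_order=3):
    """FOM = sqrt(sum over n >= sextupole of (a_n^2 + b_n^2)), with the
    multipole coefficients (a_n, b_n) evaluated at the nominal beam radius.
    Orders are counted here as 1 = dipole, 2 = quadrupole, 3 = sextupole."""
    return math.sqrt(sum(a * a + b * b
                         for n, (a, b) in multipoles.items() if n >= lowest_order))

# Hypothetical error multipoles in "units" at R = 23 mm (illustrative only).
multipoles = {3: (0.4, -0.2), 4: (0.1, 0.3), 5: (-0.05, 0.1)}
fom = multipole_fom(multipoles)
print(f"FOM = {fom:.2f} units ->",
      "within 1e-4 spec" if fom < 1.0 else "needs more wires")
```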

  17. [Errors in laboratory daily practice].

    Science.gov (United States)

    Larrose, C; Le Carrer, D

    2007-01-01

    Legislation set by GBEA (Guide de bonne exécution des analyses) requires that, before performing analysis, the laboratory directors have to check both the nature of the samples and the patients' identity. The data processing of requisition forms, which identifies key errors, was established in 2000 and in 2002 by the specialized biochemistry laboratory, also with the contribution of the reception centre for biological samples. The laboratories follow strict acceptability criteria as a starting point at reception for checking requisition forms and biological samples. All errors are logged into the laboratory database and analysis reports are sent to the care unit specifying the problems and the consequences they have on the analysis. The data are then assessed by the laboratory directors to produce monthly or annual statistical reports. This indicates the number of errors, which are then indexed to patient files to reveal the specific problem areas, therefore allowing the laboratory directors to teach the nurses and enable corrective action.

  18. Technical errors in MR arthrography

    International Nuclear Information System (INIS)

    Hodler, Juerg

    2008-01-01

    This article discusses potential technical problems of MR arthrography. It starts with contraindications, followed by problems relating to injection technique, contrast material and MR imaging technique. For some of the aspects discussed, there is only little published evidence. Therefore, the article is based on the personal experience of the author and on local standards of procedures. Such standards, as well as medico-legal considerations, may vary from country to country. Contraindications for MR arthrography include pre-existing infection, reflex sympathetic dystrophy and possibly bleeding disorders, avascular necrosis and known allergy to contrast media. Errors in injection technique may lead to extra-articular collection of contrast agent or to contrast agent leaking from the joint space, which may cause diagnostic difficulties. Incorrect concentrations of contrast material influence image quality and may also lead to non-diagnostic examinations. Errors relating to MR imaging include delays between injection and imaging and inadequate choice of sequences. Potential solutions to the various possible errors are presented. (orig.)

  19. Technical errors in MR arthrography

    Energy Technology Data Exchange (ETDEWEB)

    Hodler, Juerg [Orthopaedic University Hospital of Balgrist, Radiology, Zurich (Switzerland)

    2008-01-15

    This article discusses potential technical problems of MR arthrography. It starts with contraindications, followed by problems relating to injection technique, contrast material and MR imaging technique. For some of the aspects discussed, there is only little published evidence. Therefore, the article is based on the personal experience of the author and on local standards of procedures. Such standards, as well as medico-legal considerations, may vary from country to country. Contraindications for MR arthrography include pre-existing infection, reflex sympathetic dystrophy and possibly bleeding disorders, avascular necrosis and known allergy to contrast media. Errors in injection technique may lead to extra-articular collection of contrast agent or to contrast agent leaking from the joint space, which may cause diagnostic difficulties. Incorrect concentrations of contrast material influence image quality and may also lead to non-diagnostic examinations. Errors relating to MR imaging include delays between injection and imaging and inadequate choice of sequences. Potential solutions to the various possible errors are presented. (orig.)

  20. Diffusion Influenced Adsorption Kinetics.

    Science.gov (United States)

    Miura, Toshiaki; Seki, Kazuhiko

    2015-08-27

    When the kinetics of adsorption is influenced by the diffusive flow of solutes, the solute concentration at the surface is influenced by the surface coverage of solutes, which is given by the Langmuir-Hinshelwood adsorption equation. The diffusion equation with the boundary condition given by the Langmuir-Hinshelwood adsorption equation leads to the nonlinear integro-differential equation for the surface coverage. In this paper, we solved the nonlinear integro-differential equation using the Grünwald-Letnikov formula developed to solve fractional kinetics. Guided by the numerical results, analytical expressions for the upper and lower bounds of the exact numerical results were obtained. The upper and lower bounds were close to the exact numerical results in the diffusion- and reaction-controlled limits, respectively. We examined the validity of the two simple analytical expressions obtained in the diffusion-controlled limit. The results were generalized to include the effect of dispersive diffusion. We also investigated the effect of molecular rearrangement of anisotropic molecules on surface coverage.
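
    A minimal sketch of the Grünwald-Letnikov approximation of a fractional derivative, which is the building block mentioned above; it is checked against the known half-derivative of f(t) = t and is not the paper's full integro-differential solver for the surface coverage.

```python
import math

def gl_fractional_derivative(f, alpha, t, h=1e-3):
    """Grunwald-Letnikov approximation of the fractional derivative of order
    alpha of f at time t, using step h and the standard coefficient recurrence
    w_0 = 1, w_k = w_{k-1} * (1 - (alpha + 1) / k)."""
    n = int(t / h)
    w, total = 1.0, f(t)
    for k in range(1, n + 1):
        w *= 1.0 - (alpha + 1.0) / k
        total += w * f(t - k * h)
    return total / h ** alpha

# Check against the exact half-derivative of f(t) = t, which is 2*sqrt(t/pi).
t = 1.0
print(gl_fractional_derivative(lambda x: x, 0.5, t), 2.0 * math.sqrt(t / math.pi))
```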

  1. Bicarbonate diffusion through mucus.

    Science.gov (United States)

    Livingston, E H; Miller, J; Engel, E

    1995-09-01

    The mucus layer overlying duodenal epithelium maintains a pH gradient against high luminal acid concentrations. Despite these adverse conditions, epithelial surface pH remains close to neutrality. The exact nature of the gradient-forming barrier remains unknown. The barrier consists of mucus into which HCO3- is secreted. Quantification of the ability of HCO3- to establish and maintain the gradient depends on accurate measurement of this ion's diffusion coefficient through mucus. We describe new experimental and mathematical methods for diffusion measurement and report diffusion coefficients for HCO3- diffusion through saline, 5% mucin solutions, and rat duodenal mucus. The diffusion coefficients were 20.2 ± 0.10, 3.02 ± 0.31, and 1.81 ± 0.12 × 10⁻⁶ cm²/s, respectively. Modeling of the mucobicarbonate layer with this latter value suggests that for conditions of high luminal acid strength the neutralization of acid by HCO3- occurs just above the epithelial surface. Under these conditions the model predicts that fluid convection toward the lumen could be important in maintaining the pH gradient. In support of this hypothesis we were able to demonstrate a net luminal fluid flux of 5 μl·min⁻¹·cm⁻² after perfusion of 0.15 N HCl in the rat duodenum.
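
    Using the reported mucus diffusion coefficient, a one-line order-of-magnitude estimate of the time for HCO3- to diffuse across the mucus layer, t ≈ L²/(2D); the layer thickness is an assumed illustrative value, not a measurement from the paper.

```python
D_mucus = 1.81e-6      # HCO3- diffusion coefficient in rat duodenal mucus, cm^2/s (from the abstract)
L = 100e-4             # assumed mucus layer thickness of 100 um, in cm (illustrative value)

# One-dimensional diffusion time scale, t ~ L^2 / (2 D)
t = L ** 2 / (2.0 * D_mucus)
print(f"Characteristic HCO3- diffusion time across {L * 1e4:.0f} um of mucus: ~{t:.0f} s")
```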

  2. Clock error models for simulation and estimation

    International Nuclear Information System (INIS)

    Meditch, J.S.

    1981-10-01

    Mathematical models for the simulation and estimation of errors in precision oscillators used as time references in satellite navigation systems are developed. The results, based on all currently known oscillator error sources, are directly implementable on a digital computer. The simulation formulation is sufficiently flexible to allow for the inclusion or exclusion of individual error sources as desired. The estimation algorithms, following from Kalman filter theory, provide directly for the error analysis of clock errors in both filtering and prediction
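
    As an illustration of the kind of model meant here, a minimal two-state oscillator error simulation (phase and fractional frequency driven by white and random-walk frequency noise); this is a common simplified clock model with illustrative noise parameters, not the full set of error sources treated in the report.

```python
import numpy as np

def simulate_clock(n_steps, tau, q1=1e-22, q2=1e-32, seed=0):
    """Simplified two-state oscillator error model: phase offset x and
    fractional frequency offset y, driven by white frequency noise (q1) and
    random-walk frequency noise (q2). Illustrative textbook-style model only."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n_steps)          # phase error, s
    y = np.zeros(n_steps)          # fractional frequency error
    for k in range(1, n_steps):
        y[k] = y[k - 1] + rng.normal(0.0, np.sqrt(q2 * tau))
        x[k] = x[k - 1] + y[k - 1] * tau + rng.normal(0.0, np.sqrt(q1 * tau))
    return x, y

phase, freq = simulate_clock(n_steps=86400, tau=1.0)    # one day at 1 s steps
print(f"phase error after one day: {phase[-1]:.2e} s")
```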

  3. Cesium diffusion in graphite

    International Nuclear Information System (INIS)

    Evans, R.B. III; Davis, W. Jr.; Sutton, A.L. Jr.

    1980-05-01

    Experiments on diffusion of ¹³⁷Cs in five types of graphite were performed. The document provides a completion of the report that was started and includes a presentation of all of the diffusion data, previously unpublished. Except for data on mass transfer of ¹³⁷Cs in the Hawker-Siddeley graphite, analyses of experimental results were initiated but not completed. The mass transfer process of cesium in HS-1-1 graphite at 600 to 1000 °C in a helium atmosphere is essentially pure diffusion, wherein the values of (D/ε)₀ and ΔE of the equation D/ε = (D/ε)₀ exp[-ΔE/RT] are about 4 × 10⁻² cm²/s and 30 kcal/mole, respectively.

  4. Apparatus for diffusion separation

    International Nuclear Information System (INIS)

    Nierenberg, W.A.; Pontius, R.B.

    1976-01-01

    The method of testing the separation efficiency of porous permeable membranes is described which comprises causing a stream of a gaseous mixture to flow into contact with one face of a finely porous permeable membrane under such conditions that a major fraction of the mixture diffuses through the membrane, maintaining a rectangular cross section of the gaseous stream so flowing past said membrane, continuously recirculating the gas that diffuses through said membrane and continuously withdrawing the gas that does not diffuse through said membrane and maintaining the volume of said recirculating gas constant by continuously introducing into said continuously recirculating gas stream a mass of gas equivalent to that which is continuously withdrawn from said gas stream and comparing the concentrations of the light component in the entering gas, the withdrawn gas and the recirculated gas in order to determine the efficiency of said membrane

  5. Diffusion in flexible pipes

    Energy Technology Data Exchange (ETDEWEB)

    Brogaard Kristensen, S.

    2000-06-01

    This report describes the work done on modelling and simulation of the complex diffusion of gas through the wall of a flexible pipe. The diffusion and thus the pressure in annulus depends strongly on the diffusion and solubility parameters of the gas-polymer system and on the degree of blocking of the outer surface of the inner liner due to pressure reinforcements. The report evaluates the basis modelling required to describe the complex geometries and flow patterns. Qualitatively results of temperature and concentration profiles are shown in the report. For the program to serve any modelling purpose in 'real life' the results need to be validated and possibly the model needs corrections. Hopefully, a full-scale test of a flexible pipe will provide the required temperatures and pressures in annulus to validate the models. (EHS)

  6. Distributed Control Diffusion

    DEFF Research Database (Denmark)

    Schultz, Ulrik Pagh

    2007-01-01

    . Programming a modular, self-reconfigurable robot is however a complicated task: the robot is essentially a real-time, distributed embedded system, where control and communication paths often are tightly coupled to the current physical configuration of the robot. To facilitate the task of programming modular....... This approach allows the programmer to dynamically distribute behaviors throughout a robot and moreover provides a partial abstraction over the concrete physical shape of the robot. We have implemented a prototype of a distributed control diffusion system for the ATRON modular, self-reconfigurable robot......, self-reconfigurable robots, we present the concept of distributed control diffusion: distributed queries are used to identify modules that play a specific role in the robot, and behaviors that implement specific control strategies are diffused throughout the robot based on these role assignments...

  7. Diffuse Ceiling Ventilation

    DEFF Research Database (Denmark)

    Zhang, Chen; Yu, Tao; Heiselberg, Per Kvols

    cooling capacity, energy saving, low investment cost and low noise level; while the limitations include condensation risk and the limit on the room geometry. Furthermore, the crucial design parameters are summarized and their effects on the system performance are discussed. In addition to the stand...... is not well structured with this system. These become the motivations in developing the design guide. This design guide aims to establish a systematic understanding of diffuse ceiling ventilation and provide assistance in designing of such a system. The guide is targeted at design engineers, architects...... and manufacturers and the users of diffuse ceiling technology. The design guide introduces the principle and key characteristics of room air distribution with diffuse ceiling ventilation. It provides an overview of potential benefit and limitations of this technology. The benefits include high thermal comfort, high...

  8. Diffusion and mass transfer

    CERN Document Server

    Vrentas, James S

    2013-01-01

    The book first covers the five elements necessary to formulate and solve mass transfer problems, that is, conservation laws and field equations, boundary conditions, constitutive equations, parameters in constitutive equations, and mathematical methods that can be used to solve the partial differential equations commonly encountered in mass transfer problems. Jump balances, Green’s function solution methods, and the free-volume theory for the prediction of self-diffusion coefficients for polymer–solvent systems are among the topics covered. The authors then use those elements to analyze a wide variety of mass transfer problems, including bubble dissolution, polymer sorption and desorption, dispersion, impurity migration in plastic containers, and utilization of polymers in drug delivery. The text offers detailed solutions, along with some theoretical aspects, for numerous processes including viscoelastic diffusion, moving boundary problems, diffusion and reaction, membrane transport, wave behavior, sedime...

  9. Diffusion in flexible pipes

    Energy Technology Data Exchange (ETDEWEB)

    Brogaard Kristensen, S

    2000-06-01

    This report describes the work done on modelling and simulation of the complex diffusion of gas through the wall of a flexible pipe. The diffusion and thus the pressure in annulus depends strongly on the diffusion and solubility parameters of the gas-polymer system and on the degree of blocking of the outer surface of the inner liner due to pressure reinforcements. The report evaluates the basis modelling required to describe the complex geometries and flow patterns. Qualitatively results of temperature and concentration profiles are shown in the report. For the program to serve any modelling purpose in 'real life' the results need to be validated and possibly the model needs corrections. Hopefully, a full-scale test of a flexible pipe will provide the required temperatures and pressures in annulus to validate the models. (EHS)

  10. The Trouble with Diffusion

    Directory of Open Access Journals (Sweden)

    R.T. DeHoff

    2002-09-01

    Full Text Available The phenomenological formalism, which yields Fick's Laws for diffusion in single phase multicomponent systems, is widely accepted as the basis for the mathematical description of diffusion. This paper focuses on problems associated with this formalism. This mode of description of the process is cumbersome, defining as it does matrices of interdiffusion coefficients (the central material properties) that require a large experimental investment for their evaluation in three-component systems and, indeed, cannot be evaluated for systems with more than three components. It is also argued that the physical meaning of the numerical values of these properties with respect to the atom motions in the system remains unknown. The attempt to understand the physical content of the diffusion coefficients in the phenomenological formalism has been the central fundamental problem in the theory of diffusion in crystalline alloys. The observation by Kirkendall that the crystal lattice moves during diffusion led Darken to develop the concept of intrinsic diffusion, i.e., atom motion relative to the crystal lattice. Darken and his successors sought to relate the diffusion coefficients computed for intrinsic fluxes to those obtained from the motion of radioactive tracers in chemically homogeneous samples, which directly report the jump frequencies of the atoms as a function of composition and temperature. This theoretical connection between tracer, intrinsic and interdiffusion behavior would provide the basis for understanding the physical content of interdiffusion coefficients. Definitive tests of the resulting theoretical connection have been carried out for a number of binary systems for which all three kinds of observations are available. In a number of systems predictions of intrinsic coefficients from tracer data do not agree with measured values although predictions of interdiffusion coefficients appear to give reasonable agreement. Thus, the complete

  11. Nonlinear diffusion equations

    CERN Document Server

    Wu Zhuo Qun; Li Hui Lai; Zhao Jun Ning

    2001-01-01

    Nonlinear diffusion equations, an important class of parabolic equations, come from a variety of diffusion phenomena which appear widely in nature. They are suggested as mathematical models of physical problems in many fields, such as filtration, phase transition, biochemistry and dynamics of biological groups. In many cases, the equations possess degeneracy or singularity. The appearance of degeneracy or singularity makes the study more involved and challenging. Many new ideas and methods have been developed to overcome the special difficulties caused by the degeneracy and singularity, which

  12. Phase transformation and diffusion

    CERN Document Server

    Kale, G B; Dey, G K

    2008-01-01

    Given that the basic purpose of all research in materials science and technology is to tailor the properties of materials to suit specific applications, phase transformations are the natural key to the fine-tuning of the structural, mechanical and corrosion properties. A basic understanding of the kinetics and mechanisms of phase transformation is therefore of vital importance. Apart from a few cases involving crystallographic martensitic transformations, all phase transformations are mediated by diffusion. Thus, proper control and understanding of the process of diffusion during nucleation, g

  13. Ambipolar diffusion in plasma

    International Nuclear Information System (INIS)

    Silva, T.L. da.

    1987-01-01

    In this thesis, a numerical method for the solution of the linear diffusion equation for a plasma containing two types of ions, with the possibility of charge exchange, has been developed. It has been shown that the decay time of the electron and ion densities is much smaller than that in a plasma containing only a single type of ion. A non-linear diffusion equation describing a slightly ionized plasma, which includes the effects of an external electric field varying linearly in time, has also been developed. It has been verified that the decay of the electron density in the presence of such an electric field is very slow. (author)

  14. Righting errors in writing errors: the Wing and Baddeley (1980) spelling error corpus revisited.

    Science.gov (United States)

    Wing, Alan M; Baddeley, Alan D

    2009-03-01

    We present a new analysis of our previously published corpus of handwriting errors (slips) using the proportional allocation algorithm of Machtynger and Shallice (2009). As previously, the proportion of slips is greater in the middle of the word than at the ends; however, in contrast to before, the proportion is greater at the end than at the beginning of the word. The findings are consistent with the hypothesis of memory effects in a graphemic output buffer.

  15. Application of TRIZ Methodology in Diffusion Welding System Optimization

    Science.gov (United States)

    Ravinder Reddy, N.; Satyanarayana, V. V.; Prashanthi, M.; Suguna, N.

    2017-12-01

    Welding is used extensively for metal joining in manufacturing. In recent years, the diffusion welding method has significantly increased the quality of a weld. Nevertheless, research on and application of diffusion welding remain somewhat limited. Consequently, there is a lack of relevant information on welding design, such as fixtures, parameter selection and integrated design, concerning the joining of thick and thin materials with or without interlayers. This article intends to combine innovative methods in the application of diffusion welding design. Guided by the theory of inventive problem solving (TRIZ) design method, this will help to decrease trial-and-error and failure risks in the welding process. This article hopes to provide welding design personnel with innovative design ideas for research and practical application.

  16. Diffuse axonal injury: detection of changes in anisotropy of water diffusion by diffusion-weighted imaging

    International Nuclear Information System (INIS)

    Chan, J.H.M.; Tsui, E.Y.K.; Yuen, M.K.; Peh, W.C.G.; Fong, D.; Fok, K.F.; Leung, K.M.; Fung, K.K.L.

    2003-01-01

    Myelinated axons of white matter demonstrate prominent directional differences in water diffusion. We performed diffusion-weighted imaging on ten patients with head injury to explore the feasibility of using water diffusion anisotropy for quantitating diffuse axonal injury. We showed a significant decrease in diffusion anisotropy indices in areas with or without signal abnormality on T2- and T2*-weighted images. We conclude that the water diffusion anisotropy index is a potentially useful, sensitive and quantitative way of diagnosing and assessing patients with diffuse axonal injury. (orig.)

  17. Anisotropy in "isotropic diffusion" measurements due to nongaussian diffusion

    DEFF Research Database (Denmark)

    Jespersen, Sune Nørhøj; Olesen, Jonas Lynge; Ianuş, Andrada

    2017-01-01

    Designing novel diffusion-weighted NMR and MRI pulse sequences aiming to probe tissue microstructure with techniques extending beyond the conventional Stejskal-Tanner family is currently of broad interest. One such technique, multidimensional diffusion MRI, has been recently proposed to afford...... model-free decomposition of diffusion signal kurtosis into terms originating from either ensemble variance of isotropic diffusivity or microscopic diffusion anisotropy. This ability rests on the assumption that diffusion can be described as a sum of multiple Gaussian compartments, but this is often...

  18. An adaptive orienting theory of error processing.

    Science.gov (United States)

    Wessel, Jan R

    2018-03-01

    The ability to detect and correct action errors is paramount to safe and efficient goal-directed behaviors. Existing work on the neural underpinnings of error processing and post-error behavioral adaptations has led to the development of several mechanistic theories of error processing. These theories can be roughly grouped into adaptive and maladaptive theories. While adaptive theories propose that errors trigger a cascade of processes that will result in improved behavior after error commission, maladaptive theories hold that error commission momentarily impairs behavior. Neither group of theories can account for all available data, as different empirical studies find both impaired and improved post-error behavior. This article attempts a synthesis between the predictions made by prominent adaptive and maladaptive theories. Specifically, it is proposed that errors invoke a nonspecific cascade of processing that will rapidly interrupt and inhibit ongoing behavior and cognition, as well as orient attention toward the source of the error. It is proposed that this cascade follows all unexpected action outcomes, not just errors. In the case of errors, this cascade is followed by error-specific, controlled processing, which is specifically aimed at (re)tuning the existing task set. This theory combines existing predictions from maladaptive orienting and bottleneck theories with specific neural mechanisms from the wider field of cognitive control, including from error-specific theories of adaptive post-error processing. The article aims to describe the proposed framework and its implications for post-error slowing and post-error accuracy, propose mechanistic neural circuitry for post-error processing, and derive specific hypotheses for future empirical investigations. © 2017 Society for Psychophysiological Research.

  19. WACC: Definition, misconceptions and errors

    OpenAIRE

    Fernandez, Pablo

    2011-01-01

    The WACC is just the rate at which the Free Cash Flows must be discounted to obtain the same result as in the valuation using Equity Cash Flows discounted at the required return to equity (Ke). The WACC is neither a cost nor a required return: it is a weighted average of a cost and a required return. To refer to the WACC as the "cost of capital" may be misleading because it is not a cost. The paper includes 7 errors due to not remembering the definition of WACC and shows the relationship betwe...
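
    For reference, the standard textbook computation the abstract alludes to, expressing the WACC as a value-weighted average of the required return to equity and the after-tax cost of debt; the input numbers are illustrative only.

```python
def wacc(equity_value, debt_value, ke, kd, tax_rate):
    """Weighted average of the required return to equity (Ke) and the
    after-tax cost of debt (Kd), weighted by market values (standard
    textbook definition, consistent with the abstract)."""
    total = equity_value + debt_value
    return (equity_value / total) * ke + (debt_value / total) * kd * (1.0 - tax_rate)

# Example: 60% equity at Ke = 10%, 40% debt at Kd = 5%, 30% tax rate.
print(f"WACC = {wacc(600.0, 400.0, 0.10, 0.05, 0.30):.2%}")   # -> 7.40%
```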

  20. Wavefront error sensing for LDR

    Science.gov (United States)

    Tubbs, Eldred F.; Glavich, T. A.

    1988-01-01

    Wavefront sensing is a significant aspect of the LDR control problem and requires attention at an early stage of the control system definition and design. A combination of a Hartmann test for wavefront slope measurement and an interference test for piston errors of the segments was examined and is presented as a point of departure for further discussion. The assumption is made that the wavefront sensor will be used for initial alignment and periodic alignment checks but that it will not be used during scientific observations. The Hartmann test and the interferometric test are briefly examined.